
Communications in Computer and Information Science 214


Song Lin, Xiong Huang (Eds.)

Advances in Computer Science,
Environment, Ecoinformatics,
and Education

International Conference, CSEE 2011


Wuhan, China, August 21-22, 2011
Proceedings, Part I

Volume Editors

Song Lin
International Science & Education Researcher Association
Wuhan Branch, No.1, Jiangxia Road, Wuhan, China
E-mail: 1652952307@qq.com
Xiong Huang
International Science & Education Researcher Association
Wuhan Branch, No.1, Jiangxia Road, Wuhan, China
E-mail: 499780828@qq.com

ISSN 1865-0929 e-ISSN 1865-0937


ISBN 978-3-642-23320-3 e-ISBN 978-3-642-23321-0
DOI 10.1007/978-3-642-23321-0
Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: Applied for

CR Subject Classification (1998): I.2, C.2, H.4, H.3, D.2, H.5

© Springer-Verlag Berlin Heidelberg 2011


This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface

The International Science & Education Researcher Association (ISER) puts its
focus on the study and exchange of academic achievements of international teach-
ing and research staff. It also promotes educational reform in the world. In addi-
tion, it serves as an academic discussion and communication platform, which is
beneficial for education and scientific research, aiming to stimulate the interest
of all researchers.
The CSEE-TMEI conference is an integrated event concentrating on the fields
of computer science, environment, ecoinformatics, and education. The goal of the
conference is to provide researchers working in these fields with a forum to share
new ideas, innovations, and solutions. CSEE 2011-TMEI 2011 was held during
August 21-22, 2011, in Wuhan, China, and was co-sponsored by the International
Science & Education Researcher Association, Beijing Gireida Education Co. Ltd,
and Wuhan University of Science and Technology, China. Renowned keynote
speakers were invited to deliver talks, giving all participants a chance to discuss
their work with the speakers face to face.
In these proceedings, you can learn more about the fields of computer science,
environment, ecoinformatics, and education from the contributions of several
researchers from around the world. The main role of the proceedings is to serve
as a means of exchanging information for those working in this area.
The Organizing Committee made a great effort to meet the high standards of
Springer's Communications in Computer and Information Science (CCIS) series.
Firstly, poor-quality papers were rejected after being reviewed by anonymous
referees. Secondly, meetings were held periodically for reviewers to exchange
opinions and suggestions. Finally, the organizing team held several preliminary
sessions before the conference. Through the efforts of numerous people and de-
partments, the conference was very successful.
During the organization, we received help from different people, departments,
and institutions. Here, we would like to extend our sincere thanks to the pub-
lishers of CCIS, Springer, for their kind and enthusiastic help and support of our
conference. Secondly, the authors should also be thanked for their submissions.
Thirdly, the hard work of the Program Committee, the Program Chairs, and the
reviewers is greatly appreciated.
In conclusion, it was the team effort of all these people that made our con-
ference such a success. We welcome any suggestions that may help improve the
conference and look forward to seeing all of you at CSEE 2012-TMEI 2012.

June 2011 Song Lin


Organization

Honorary Chairs
Chen Bin Beijing Normal University, China
Hu Chen Peking University, China
Chunhua Tan Beijing Normal University, China
Helen Zhang University of Munich, Germany

Program Committee Chairs


Xiong Huang International Science & Education Researcher
Association, China
Li Ding International Science & Education Researcher
Association, China
Zhihua Xu International Science & Education Researcher
Association, China

Organizing Chairs
ZongMing Tu Beijing Gireida Education Co. Ltd, China
Jijun Wang Beijing Spon Technology Research Institution,
China
Quan Xiang Beijing Prophet Science and Education
Research Center, China

Publication Chairs
Song Lin International Science & Education Researcher
Association, China
Xiong Huang International Science & Education Researcher
Association, China

International Program Committee


Sally Wang Beijing Normal University, China
Li Li Dongguan University of Technology, China
Bing Xiao Anhui University, China
Z.L. Wang Wuhan University, China
Moon Seho Hoseo University, Korea
Kongel Arearak Suranaree University of Technology, Thailand
Zhihua Xu International Science & Education Researcher
Association, China

Co-sponsored by
International Science & Education Researcher Association, China
VIP Information Conference Center, China

Reviewers
Chunlin Xie Wuhan University of Science and Technology, China
Lin Qi Hubei University of Technology, China
Xiong Huang International Science & Education Researcher
Association, China
Gang Shen International Science & Education Researcher
Association, China
Xiangrong Jiang Wuhan University of Technology, China
Li Hu Linguistic and Linguistic Education
Association, China
Moon Hyan Sungkyunkwan University, Korea
Guang Wen South China University of Technology, China
Jack H. Li George Mason University, USA
Mary Y. Feng University of Technology Sydney, Australia
Feng Quan Zhongnan University of Finance and
Economics, China
Peng Ding Hubei University, China
Song Lin International Science & Education Researcher
Association, China
XiaoLie Nan International Science & Education Researcher
Association, China
Zhi Yu International Science & Education Researcher
Association, China
Xue Jin International Science & Education Researcher
Association, China
Zhihua Xu International Science & Education Researcher
Association, China
Wu Yang International Science & Education Researcher
Association, China
Qin Xiao International Science & Education Researcher
Association, China
Weifeng Guo International Science & Education Researcher
Association, China
Li Hu Wuhan University of Science and Technology, China
Zhong Yan Wuhan University of Science and Technology, China
Haiquan Huang Hubei University of Technology, China
Xiao Bing Wuhan University, China
Brown Wu Sun Yat-Sen University, China
Table of Contents Part I

Convergence of the Stochastic Age-Structured Population System with


Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Dongjuan Ma and Qimin Zhang

Parallel Computer Processing Systems Are Better Than Serial


Computer Processing Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Zvi Retchkiman Konigsberg

Smooth Path Algorithm Based on A* in Games . . . . . . . . . . . . . . . . . . . . . . 15


Xiang Xu and Kun Zou

The Features of Biorthogonal Binary Poly-scale Wavelet Packs in


Bidimensional Function Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Zhihao Tang and Honglin Guo

The Traits of Dual Multiple Ternary Fuzzy Frames of Translates with


Ternary Scaling Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
ShuKe Zhou and Qingjiang Chen

The Characteristics of Multiple Affine Oblique Binary Frames of


Translates with Binary Filter Banks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
YongGan Li

Generation and Characteristics of Vector-Valued Quaternary Wavelets


with Poly-scale Dilation Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Ping Luo and Shiheng Wang

A Kind of New Strengthening Buffer Operators and Their


Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Ran Han and Zheng-peng Wu

Infrared Target Detection Based on Spatially Related Fuzzy ART


Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
BingWen Chen, WenWei Wang, and QianQing Qin
A Novel Method for Quantifying the Demethylation Potential of
Environmental Chemical Pollutants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Yan Jiang and Xianliang Wang

Study of Quantitative Evaluation of the Effect of Prestack Noise


Attenuation on Angle Gather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Junhua Zhang, Jing Wang, Xiaoteng Liang, Shaomei Zhang, and
Shengtao Zang

A User Model for Recommendation Based on Facial Expression


Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Quan Lu, Dezhao Chen, and Jiayin Huang

An Improved Sub-pixel Location Method for Image Measurement . . . . . . 83


Hu Zhou, Zhihui Liu, and Jianguo Yang

The Dynamic Honeypot Design and Implementation Based on


Honeyd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Xuewu Liu, Lingyi Peng, and Chaoliang Li

Research of SIP DoS Defense Mechanism Based on Queue Theory . . . . . . 99


Fuxiang Gao, Qiao Liu, and Hongdan Zhan

Research on the Use of Mobile Devices in Distance EFL Learning . . . . . . 105


Fangyi Xia

Flood Risk Assessment Based on the Information Diusion Method . . . . 111


Li Qiong

Dielectric Characteristics of Chrome Contaminated Soil . . . . . . . . . . . . . . . 118


Yakun Sun, Yuqiang Liu, Changxin Nai, and Lu Dong

Influences of Climate on Forest Fire during the Period from


2000 to 2009 in Hunan Province . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
ZhiGang Han, DaLun Tian, and Gui Zhang

Numerical Simulation for Optimal Harvesting Strategies of Fish Stock


in Fluctuating Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Lulu Li, Wen Zhao, Lijuan Cao, and Hongyan Ao

The Graphic Data Conversion from AutoCAD to GeoDatabase . . . . . . . . 137


Xiaosheng Liu and Feihui Hu

Research on Knowledge Transference Management of Knowledge


Alliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Jibin Ma, Gai Wang, and Xueyan Wang

The Study of Print Quality Evaluation System Using the Back


Propagation Neural Network with Applications to Sheet-Fed Offset . . . . . 149
Taolin Ma, Yang Li, and Yansong Sun

Improved Design of GPRS Wireless Security System Based on AES . . . . 155


TaoLin Ma, XiaoLan Sun, and LiangPei Zhang

Design and Realization of FH-CDMA Scheme for Multiple-Access


Communication Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Abdul Baqi, Sajjad Ahmed Soomro, and Safeeullah Soomro

Design and Implement on Automated Pharmacy System . . . . . . . . . . . . . . 167


HongLei Che, Chao Yun, and JiYuan Zang

Research on Digital Library Platform Based on Cloud Computing . . . . . . 176


Lingling Han and Lijie Wang

Research on Nantong University of Radio and TV Websites Developing


Based on ASP and Its Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Shengqi Jing

Analysis of Sustainable Development in Guilin by Using the Theory of


Ecological Footprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Hao Wang, GuanWen Cheng, Shan Xu, ZiHan Xu, XiaoWei Song,
WenYuan Wei, HongYuan Fu, and GuoDan Lu

Analysis of Emergy and Sustainable Development on the Eco-economic


System of Guilin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
ZiHan Xu, GuanWen Cheng, Hao Wang, HongYuan Fu,
GuoDan Lu, and Ping Qin

Multiple Frequency Detection System Design . . . . . . . . . . . . . . . . . . . . . . . . 201


Wen Liu, Jun da Hu, and Ji cui Shi

The Law and Economic Perspective of Protecting the Ecological


Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Chen Xiuping and Liang Xianyan

Research on the Management Measure for Livestock Pollution


Prevention and Control in China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Yukun Ji, Kaijun Wang, and Mingxia Zheng

The Design of Supermarket Electronic Shopping Guide System Based


on ZigBee Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Yujie Zhang, Liang Han, and Yuanyuan Zhang

The Research of Flame Combustion Diagnosis System Based on Digital


Image Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
YuJie Zhang, SaLe Hui, and YuanYuan Zhang

Design and Research of Virtual Instrument Development Board . . . . . . . . 231


Lin Zhang, Taizhou Li, and Zhuo Chen

Substantial Development Strategy of Land Resource in Zhangjiakou . . . . 239


Yuqiang Sun, Shengchen Wang, and Yanna Zhao

Computational Classication of Cloud Forests in Thailand Using


Statistical Behaviors of Weather Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Peerasak Sangarun, Wittaya Pheera,
Krisanadej Jaroensutasinee, and Mullica Jaroensutasinee

Research on Establishing the Early-Warning Index System of Energy


Security in China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Yanna Zhao, Min Zhang, and Yuqiang Sun

Artificial Enzyme Construction with Temperature Sensitivity . . . . . . . . . . 257


Tingting Lin, Jun Lin, Xin Huang, and Junqiu Liu

An Efficient Message-Attached Password Authentication Protocol and


Its Applications in the Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
An Wang, Zheng Li, and Xianwen Yang

Research on Simulation and Optimization Method for Tooth Movement


in Virtual Orthodontics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Zhanli Li and Guang Yang

An Interval Fuzzy C-means Algorithm Based on Edge Gradient for


Underwater Optical Image Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Shilong Wang, Yuru Xu, and Lei Wan

A Generic Construction for Proxy Cryptography . . . . . . . . . . . . . . . . . . . . . 284


Guoyan Zhang

VCAN-Controller Area Network Based Human Vital Sign Data


Transmission Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Atiya Azmi, Nadia Ishaque, Ammar Abbas, and Safeeullah Soomro

Study on the Some Labelings of Complete Bipartite Graphs . . . . . . . . . . . 297


WuZhuang Li, GuangHai Li, and QianTai Yan

An Effective Adjustment on Improving the Process of Road Detection


on Raster Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Yang Li, Xiao-dong Zhang, and Yuan-lu Bao

Multi-objective Optimized PID Controller for Unstable First-Order


Plus Delay Time Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Gongquan Tan, Xiaohui Zeng, Shuchuan Gan, and Yonghui Chen

Water Quality Evaluation for the Main Inflow Rivers of Nansihu


Lake . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Yang Liyuan, Shen Ji, Liu Enfeng, and Zhang Wei

Software Piracy: A Hard Nut to Crack – A Problem of Information


Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Bigang Hong

Study of Bedrock Weathering Zone Features in Suntuan Coal Mine . . . . . 330


XiaoLong Li, DuoXi Yao, and JinXiang Yang

Mercury Pollution Characteristics in the Soil around Landfill . . . . . . . . . . 336


JinXiang Yang, MingXu Zhang, and XiaoLong Li

The Research on Method of Detection for Three-Dimensional


Temperature of the Furnace Based on Support Vector Machine . . . . . . . . 341
Yang Yu, Jinxing Chen, Guohua Zhang, and Zhiyong Tao
Study on Wave Filtering of Photoacoustic Spectrometry Detecting
Signal Based on Mallat Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Yang Yu, Shuo Wu, Guohua Zhang, and Peixin Sun

Ontology-Based Context-Aware Management for Wireless Sensor


Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Keun-Wang Lee and Si-Ho Cha

An Extended Center-Symmetric Local Ternary Patterns for Image


Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Xiaosheng Wu and Junding Sun
Comparison of Photosynthetic Parameters and Some Physiological
Indices of 11 Fennel Varieties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Mingyou Wang, Beilei Xiao, and Lixia Liu
Effect of Naturally Low Temperature Stress on Cold Resistance of
Fennel Varieties Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Beilei Xiao, Mingyou Wang, and Lixia Liu
Survey on the Continuing Physical Education in the Cities around the
Taihu Lake . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
JianQiang Guo
The Gas Seepage and Migration Law of Mine Fire Zone under the
Positive Pressure Ventilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
HaiYan Wang, ZhenLong Zhang, DanDan Jiang, and FeiYin Wang
Study on Continued Industry's Development Path of Resource-Based
Cities in Heilongjiang Province . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Ying Zhu and Jiehua Lv
Cable Length Measurement Systems Based on Time Domain
Reectometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Jianhui Song, Yang Yu, and Hongwei Gao
The Cable Crimp Levels Effect on TDR Cable Length Measurement
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
Jianhui Song, Yang Yu, and Hongwei Gao
The Clustering Algorithm Based on the Most Similar Relation
Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Wei Hong Xu, Min Zhu, Ya Ruo Jiang, Yu Shan Bai, and Yan Yu

Study of Infrared Image Enhancement Algorithm in Front End . . . . . . . . 416


Rongtian Zheng, Jingxin Hong, and Qingwei Liao

Influence of Milling Conditions on the Surface Quality in High-Speed


Milling of Titanium Alloy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Xiaolong Shen, Laixi Zhang, and Chenggao Ren

Molecular Dynamics Simulation Study on the Microscopic Structure


and the Diffusion Behavior of Methanol in Confined Carbon
Nanotubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Hua Liu, XiaoFeng Yang, Chunyan Li, and Jianchao Chen

Spoken Emotion Recognition Using Radial Basis Function Neural


Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Shiqing Zhang, Xiaoming Zhao, and Bicheng Lei

Facial Expression Recognition Using Local Fisher Discriminant


Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Shiqing Zhang, Xiaoming Zhao, and Bicheng Lei

Improving Tracking Performance of PLL Based on Wavelet Packet


De-noising Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
YinYin Li, XiaoSu Xu, and Tao Zhang

Improved Algorithm of LED Display Image Based on Composed


Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Xi-jia Song, Xi-qiang Ma, Wei-ya Liu, and Xi-feng Zheng

The Development Process of Multimedia Courseware Using Authoware


and Analysis of Common Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
LiMei Fu

Design of Ship Main Engine Speed Controller Based on Expert Active


Disturbance Rejection Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Weigang Pan, Guiyong Yang, Changshun Wang, and Yingbing Zhou

Design of Ship Course Controller Based on Genetic Algorithm Active


Disturbance Rejection Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
Hairong Xiao, Weigang Pan, and Yaozhen Han

A CD-ROM Management Device with Free Storage, Automatic Disk


Check Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
Daohe Chen, Xiaohong Wang, and Wenze Li

An Efficient Multiparty Quantum Secret Sharing with Pure Entangled


Two Photon States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
Run-hua Shi and Hong Zhong

Problems and Countermeasures of Educational Informationization


Construction in Colleges and Universities . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
Jiaguo Luo and Jie Yu

Synthesis and Characterization of Eco-friendly Composite:


Poly(Ethylene Glycol)-Grafted Expanded Graphite/Polyaniline . . . . . . . . 501
Mincong Zhu, Xin Qing, Kanzhu Li, Wei Qi, Ruijing Su, Jun Xiao,
Qianqian Zhang, Dengxin Li, Yingchen Zhang, and Ailian Liu

The Features of a Sort of Five-Variant Wavelet Packet Bases in Sobolev


Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
Yujuan Hu, Qingjiang Chen, and Lang Zhao

The Features of Multiple Affine Fuzzy Quaternary Frames in Sobolev


Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
Hongwei Gao

Characters of Orthogonal Nontensor Product Trivariate Wavelet Wraps


in Three-Dimensional Besov Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Jiantang Zhao and Qingjiang Chen

Research on Computer Education and Education Reform Based on a


Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Jianhong Sun, Qin Xu, Yingjiang Li, and JunSheng Li

The Existence and Uniqueness for a Class of Nonlinear Wave Equations


With Damping Term . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Bo Lu and Qingshan Zhang

Research on the Distributed Satellite Earth Measurement System


Based on ICE Middleware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Jun Zhou, Wenquan Feng, and Zebin Sun

The Analysis and Optimization of KNN Algorithm Space-Time


Efficiency for Chinese Text Categorization . . . . . . . . . . . . . . . . . . . . . . . . . . 542
Ying Cai and Xiaofei Wang

Research of Digital Character Recognition Technology Based on BP


Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
Xianmin Wei

Image Segmentation Based on D-S Evidence Theory and C-means


Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
Xianmin Wei

Time-Delay Estimation Based on Multilayer Correlation . . . . . . . . . . . . . . 562


Hua Yan, Yang Zhang, and GuanNan Chen

Applying HMAC to Enhance Information Security for Mobile Reader


RFID System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
Fu-Tung Wang, Tzong-Dar Wu, and Yu-Chung Lu

Analysis Based on Generalized Regression Neural Network to Oil


Atomic Emission Spectrum Data of a Type Diesel Engine . . . . . . . . . . . . . 574
ChunHui Zhang, HongXiang Tian, and Tao Liu

Robust Face Recognition Based on KFDA-LLE and SVM Techniques . . . 581


GuoQiang Wang and ChunLing Gao

An Improved Double-Threshold Cooperative Spectrum Sensing . . . . . . . . 588


DengYin Zhang and Hui Zhang

Handwritten Digit Recognition Based on Principal Component Analysis


and Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
Rui Li and Shiqing Zhang

Research on System Stability with Extended Small Gain Theory Based


on Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Yuqiang Jin and Qiang Ma

Research on the Chattering Problem with VSC of Supersonic Missiles


Based on Intelligent Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
Junwei Lei, Jianhong Shi, Guorong Zhao, and Guoqiang Liang

Research on Backstepping Nussbaum Gain Control of Missile Overload


System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
Jianhong Shi, Guorong Zhao, Junwei Lei, and Guoqiang Liang

Adaptive Control of Supersonic Missiles with Unknown Input


Coecients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
Jinhua Wu, Junwei Lei, Wenjin Gu, and Jianhong Shi

The Fault Diagnostic Model Based on MHMM-SVM and Its


Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
FengBo Zhu, WenQuan Wu, ShanLin Zhu, and RenYang Liu

Analysis of a Novel Electromagnetic Bandgap Structure for


Simultaneous Switching Noise Suppression . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Hua Yang, ShaoChang Chen, Qiang Zhang, and WenTing Zheng

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635


Convergence of the Stochastic Age-Structured Population
System with Diffusion

Dongjuan Ma and Qimin Zhang*

School of Mathematics and Computer Science, Ningxia University,
Yinchuan 750021, China
madongjuan_2008@163.com, zhangqimin64@sina.com

Abstract. In this paper, a stochastic age-structured population system with
Poisson jumps is studied. It is proved that the semi-implicit Euler approximate
solutions converge to the analytic solution of the stochastic age-structured
population system with Poisson jumps. The analysis uses Itô's formula, the
Burkholder-Davis-Gundy inequality, Gronwall's lemma, and some other
inequalities for our purposes.

Keywords: Semi-implicit Euler method, Poisson jump, Numerical solution.

1 Introduction
The theories of stochastic partial differential equations have been applied
extensively in many areas, such as economics, finance, and several areas of
science and engineering. There has been much research on deterministic
age-structured population systems with diffusion, discussing the existence,
uniqueness, stability, regularity, and localization of the solutions of such
systems [1-3].
In recent years, it has become more necessary to consider the random behavior
of the birth-death process and the effects of stochastic environmental noise on
age-structured population systems. Most papers are concerned with stochastic
population systems. When the random element is considered, there have been
many results on stochastic age-structured population systems. For instance,
Zhang discussed the existence and uniqueness for a stochastic age-structured
population system with diffusion [4]. When the diffusion of the population is
not considered, Zhang studied the existence, uniqueness, and exponential
stability of a stochastic age-dependent population system, and numerical
analysis for stochastic age-dependent populations has been studied in [5-8].
Interest has been growing in the study of stochastic differential equations with
jumps, which are extensively used to model many phenomena arising in these
areas [9-10].
In general, most stochastic age-structured population systems with Poisson
jumps have no analytic solutions, so numerical approximation schemes are
invaluable tools for exploring their properties. In this paper, a numerical
analysis for the stochastic age-structured population system described by
Eq. (1) is developed. The first contribution is to show that the semi-implicit
Euler approximate solutions converge to the analytic solution. The second
contribution is to consider the diffusion term div(P∇u). In particular, our
results extend those in [6-8].
* Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 1-7, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2 Preliminaries and the Semi-implicit Euler Approximation


Let Γ be a bounded spatial domain, O = (0, A) × Γ, and
V = {φ | φ ∈ L²(O), ∂φ/∂x_i ∈ L²(O), where ∂φ/∂x_i are generalized partial
derivatives}. Then V′ is the dual space of V, and H = L²(O). We denote by ‖·‖
and ‖·‖_* the norms in V and V′ respectively, by ⟨·,·⟩ the duality product
between V and V′, and by (·,·) the scalar product in H. Let (Ω, F, P) be a
complete probability space with a filtration {F_t}_{t≥0} satisfying the usual
conditions (i.e., it is increasing and right continuous while F_0 contains all
P-null sets). In this paper, we consider the convergence of stochastic systems
with the diffusion term k div(P∇u):

∂P/∂t + ∂P/∂r = k div(P∇u) − μ(r,t,x)P + f(r,t,x,P)
  + g(r,t,x,P) ∂W_t/∂t + h(r,t,x,P) ∂N_t/∂t,   in Q = (0,A) × (0,T) × Γ,

P(0,t,x) = ∫₀^A β(r,t,x) P(r,t,x) dr,   in (0,T) × Γ,                  (1)

P(r,0,x) = P₀(r,x),   in (0,A) × Γ,

P(r,t,x) = 0,   on Σ_A = (0,A) × (0,T) × ∂Γ,

u(t,x) = ∫₀^A P(r,t,x) dr,   in Q,

where div is the divergence operator, W_t is a Wiener process, and
h(r,t,x,P) ∂N_t/∂t is the Poisson jump process.
For system (1), the discrete-time semi-implicit Euler approximation is defined
by the iterative scheme

Q_{n+1} = Q_n + (1−θ)[−∂Q_n/∂r + k div(Q_n∇u) − μ(r,t,x)Q_n + f(r,t,x,Q_n)] Δt
  + θ[−∂Q_{n+1}/∂r + k div(Q_{n+1}∇u) − μ(r,t,x)Q_{n+1} + f(r,t,x,Q_{n+1})] Δt
  + g(r,t,x,Q_n) ΔW_n + h(r,t,x,Q_n) ΔN_n.

Here θ ∈ [0,1], and Q_n is the approximation to P(t_n, r, x) at t_n = nΔt;
the time increment is Δt = T/N, the Brownian increment is
ΔW_n = W(t_{n+1}) − W(t_n), and the Poisson increment is
ΔN_n = N(t_{n+1}) − N(t_n).
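For intuition, the update above can be sketched on a scalar jump-diffusion
dX = aX dt + bX dW + cX dN, dropping the age/space transport and the diffusion
operator of (1); for linear drift the implicit part of the step can be solved
in closed form. The function and coefficient names below are illustrative, not
taken from the paper.

```python
import numpy as np

def theta_euler_jump_path(x0, a, b, c, lam, T, N, theta=0.5, rng=None):
    """One path of dX = a*X dt + b*X dW + c*X dN via the theta
    (semi-implicit) Euler scheme: the drift enters with weight
    (1 - theta) explicit and theta implicit, while the diffusion and
    jump terms use the left endpoint, as in the scheme above.  For
    linear drift the implicit equation is solved in closed form."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / N
    x = np.empty(N + 1)
    x[0] = x0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
        dNj = rng.poisson(lam * dt)         # Poisson increment
        rhs = x[n] + (1 - theta) * a * x[n] * dt + b * x[n] * dW + c * x[n] * dNj
        x[n + 1] = rhs / (1 - theta * a * dt)   # solve the implicit drift part
    return x
```

With b = c = 0 the scheme reduces to the deterministic theta method, so for
a = 1 the path approximates e^t, which gives a quick sanity check.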
Convergence of the Stochastic Age-Structured Population System with Diffusion 3

For (1), we define the continuous-time approximation:

Q_t = P₀ + ∫₀^t (1−θ)[−∂Q_s/∂r + k div(Q_s∇u) − μ(r,s,x)Z₁(s) + f(r,s,x,Z₁(s))] ds
  + ∫₀^t θ[−∂Q_s/∂r + k div(Q_s∇u) − μ(r,s,x)Z₂(s) + f(r,s,x,Z₂(s))] ds
  + ∫₀^t g(r,s,x,Z₁(s)) dW(s) + ∫₀^t h(r,s,x,Z₁(s)) dN(s),

with the step processes

Z₁(t) = Z₁(t,r,x) = Σ_{k=0}^{N−1} Q_k 1_{[kΔt,(k+1)Δt]},
Z₂(t) = Z₂(t,r,x) = Σ_{k=0}^{N−1} Q_{k+1} 1_{[kΔt,(k+1)Δt]},

where 1_G is the indicator function of the set G and
Z₁(t_k) = Z₂(t_{k−1}) = Q_k = Q(t_k, r, x).
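The step processes Z₁ and Z₂ are simply the left- and right-endpoint
piecewise-constant interpolants of the grid values; a minimal sketch (names
illustrative):

```python
def step_interpolants(Q, T):
    """Build the step processes Z1 and Z2 from grid values Q[0..N]:
    on [k*dt, (k+1)*dt), Z1 freezes the last computed value Q[k] and
    Z2 the next one, Q[k+1], matching the definitions above."""
    N = len(Q) - 1
    dt = T / N
    def Z1(t):
        k = min(int(t / dt), N - 1)   # clamp the final point into the last cell
        return Q[k]
    def Z2(t):
        k = min(int(t / dt), N - 1)
        return Q[k + 1]
    return Z1, Z2
```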
To establish the convergence theorem we shall use the following assumptions:
(i)(Lipschitz condition ) here exists a positive constant K such that P1 , P2 C

| f (r , t , x, P1 ) f (r , t , x, P2 ) | | g (r , t , x, P1 ) g (r , t , x, P2 ) |
| h(r , t , x, P1 ) h(r , t , x, P2 ) | K | P1 P2 |, a.e.t ;
(ii) ( r , t , x ) and (r , t , x) are continuous in Q such that
0 0 ( r , t , x ) < , 0 ( r , t , x ) < , k 0 k ( r , t ) k ;
t
(iii) f ( r , t , x, 0) = 0, g ( r , t , x, 0) = 0, | |+u || < k ;
2
0 3

3 The Main Results

In this section we provide the theorems needed to prove that $Q_t$ converges to the analytical solution $P_t$ of the system. (We only discuss the continuous-time iterative scheme; the argument for the discrete-time scheme is similar.)

Theorem 3.1. Under assumptions (i)–(ii), $E\sup_{0\le t\le T}|Q_t|^2\le C_1 T$.

Proof. For (1), applying $\tilde N(t)=N(t)-\lambda t$ and Itô's formula to $|Q_t|^2$, we get

$$
\begin{aligned}
|Q_t|^2=|Q_0|^2&+2(1-\theta)\int_0^t\Big\langle-\frac{\partial Q_s}{\partial r}+k\,\mathrm{div}(Q_s\nabla u)-\mu(r,s,x)Z_1(s)+f(r,s,x,Z_1(s)),\,Q_s\Big\rangle ds\\
&+2\theta\int_0^t\Big\langle-\frac{\partial Q_s}{\partial r}+k\,\mathrm{div}(Q_s\nabla u)-\mu(r,s,x)Z_2(s)+f(r,s,x,Z_2(s)),\,Q_s\Big\rangle ds\\
&+2\int_0^t\langle g(r,s,x,Z_1),Q_s\rangle\,dW_s+\int_0^t\|g(r,s,x,Z_1)\|_2^2\,ds\\
&+2\int_0^t\langle Q_s,h(r,s,x,Z_1)\rangle\,d\tilde N_s+\int_0^t|h(r,s,x,Z_1)|^2\,dN_s
\end{aligned}
$$
4 D. Ma and Q. Zhang

$$
\begin{aligned}
\le|Q_0|^2&-2\int_0^t\Big\langle\frac{\partial Q_s}{\partial r},Q_s\Big\rangle ds+2\int_0^t\langle k\,\mathrm{div}(Q_s\nabla u),Q_s\rangle ds\\
&-2\int_0^t\!\!\int_0^A\!\!\int_O\mu\big((1-\theta)Z_1(s)+\theta Z_2(s)\big)Q_s\,dx\,dr\,ds+2\int_0^t\!\!\int_0^A\!\!\int_O g(r,s,x,Z_1)Q_s\,dx\,dr\,dW_s\\
&+2\int_0^t\!\!\int_0^A\!\!\int_O\big((1-\theta)f(r,s,x,Z_1)+\theta f(r,s,x,Z_2)\big)Q_s\,dx\,dr\,ds\\
&+\int_0^t\|g(r,s,x,Z_1)\|_2^2\,ds+2\int_0^t\!\!\int_0^A\!\!\int_O h(r,s,x,Z_1)Q_s\,dx\,dr\,d\tilde N_s\\
&+2\int_0^t\!\!\int_0^A\!\!\int_O h(r,s,x,Z_1)Q_s\,dx\,dr\,ds+\int_0^t|h(r,s,x,Z_1)|^2\,d\tilde N_s+\int_0^t|h(r,s,x,Z_1)|^2\,ds.
\end{aligned}
$$
Since
$$
\begin{aligned}
2\int_0^t\langle k\,\mathrm{div}(Q_s\nabla u),Q_s\rangle ds
&\le 2\bar k\int_0^t\|\nabla u\|^2ds+2\bar k\int_0^t\|Q_s\nabla u\|^2ds+\bar k\int_0^t|Q_s|^2ds\\
&\le 2\bar k\int_0^t\|\nabla u\|^2ds+2\bar k\int_0^t|Q_s|^2ds\int_0^t\|\nabla u\|^2ds+\bar k\int_0^t|Q_s|^2ds,
\end{aligned}
$$
by the assumptions and the properties of the operator there exist $k_1=(2k_3+1)\bar k$ and $k_2=2k_3\bar k$ such that
$$2\int_0^t\langle k\,\mathrm{div}(Q_s\nabla u),Q_s\rangle ds\le k_1\int_0^t|Q_s|^2ds+k_2.\tag{7}$$
0
Applying (7) and $\int_0^t\big\langle\frac{\partial Q_s}{\partial r},Q_s\big\rangle ds\le\frac12 A^{-2}\int_0^t|Q_s|^2ds$, we have

$$
\begin{aligned}
|Q_t|^2\le|Q_0|^2&+A^{-2}\int_0^t|Q_s|^2ds+k_1\int_0^t|Q_s|^2ds+k_2+2\bar\mu_0\int_0^t|Z_1|^2ds+2\bar\mu_0\int_0^t|Z_2|^2ds\\
&+2\int_0^t|f(r,s,x,Z_1(s))|^2ds+2\int_0^t|f(r,s,x,Z_2(s))|^2ds+(1+\bar\beta+\bar\mu_0)\int_0^t|Q_s|^2ds\\
&+2\int_0^t\!\!\int_0^A\!\!\int_O g(r,s,x,Z_1)Q_s\,dx\,dr\,dW_s+\int_0^t\|g(r,s,x,Z_1)\|_2^2ds+\int_0^t|h(r,s,x,Z_1(s))|^2ds\\
&+\int_0^t|h(r,s,x,Z_1(s))|^2ds+2\int_0^t\!\!\int_0^A\!\!\int_O h(r,s,x,Z_1)Q_s\,dx\,dr\,d\tilde N_s+\int_0^t|h(r,s,x,Z_1)|^2d\tilde N_s.
\end{aligned}
$$

Hence, for any $t_1\in[0,T]$,

$$
\begin{aligned}
E\sup_{0\le t_1\le t}|Q_{t_1}|^2\le E|Q_0|^2+k_2&+(A^{-2}+1+\bar\beta+\bar\mu_0+k_1)\int_0^tE\sup_{0\le t_1\le s}|Q_{t_1}|^2ds\\
&+(2\bar\mu_0+3K^2+2K^2)\int_0^t|Z_1|^2ds+(2\bar\mu_0+2K^2)\int_0^t|Z_2|^2ds
\end{aligned}
$$

$$
\begin{aligned}
&+2E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O g(r,s,x,Z_1)Q_s\,dx\,dr\,dW_s\\
&+2E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O h(r,s,x,Z_1)Q_s\,dx\,dr\,d\tilde N_s+E\sup_{0\le t_1\le t}\int_0^{t_1}|h(r,s,x,Z_1)|^2\,d\tilde N_s.
\end{aligned}
$$

By the Burkholder–Davis–Gundy inequality, there exist positive constants $K_1,K_2$ such that

$$
\begin{aligned}
E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O g(r,s,x,Z_1)Q_s\,dx\,dr\,dW_s
&\le 3E\Big[\sup_{0\le t_1\le t}|Q_{t_1}|\Big(\int_0^t\|g(r,s,x,Z_1)\|_2^2\,ds\Big)^{1/2}\Big]\\
&\le\frac18E\big[\sup_{0\le t_1\le t}|Q_{t_1}|^2\big]+K_1\int_0^t\|g(r,s,x,Z_1)\|_2^2\,ds\\
&\le\frac18E\big[\sup_{0\le t_1\le t}|Q_{t_1}|^2\big]+K^2K_1\int_0^tE|Z_1|^2\,ds.
\end{aligned}
$$

In the same way, we obtain
$$E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O h(r,s,x,Z_1)Q_s\,dx\,dr\,d\tilde N_s\le\frac18E\sup_{0\le t_1\le t}|Q_{t_1}|^2+K_2K^2\int_0^t|Z_1|^2\,ds.$$

Since $|Z_i|\le\sup_{0\le t_1\le t}|Q_{t_1}|$, $i=1,2$, it follows that
$$\frac34E\sup_{0\le t_1\le t}|Q_{t_1}|^2\le\big(A^{-2}+\bar\beta+1+5\bar\mu_0+(5+2K_1+2+3K_2)K^2\big)\int_0^tE\sup_{0\le t_1\le s}|Q_{t_1}|^2\,ds+E|Q_0|^2+k_2.$$

Finally, applying Gronwall's lemma, the proof is complete.

Theorem 3.2. Under the assumptions, for each $t\in[0,T]$,
$$E[|Q_t-Z_1(t)|^2]\le C_2\Delta t,\qquad E[|Q_t-Z_2(t)|^2]\le C_3\Delta t.$$
Proof. For arbitrary $t\in[0,T]$ there exists $k$ such that $t\in[k\Delta t,(k+1)\Delta t]$, so we have

$$
\begin{aligned}
|Q_t-Z_1(t)|^2\le{}&6\Delta t\int_{k\Delta t}^{t}\Big|\frac{\partial Q_s}{\partial r}\Big|^2ds+6k^2\Delta t\,k_1\int_{k\Delta t}^{t}|Q_s|^2ds+6k^2\Delta t\,k_2\\
&+6\bar\mu^2\Delta t\int_{k\Delta t}^{t}|(1-\theta)Z_1(s)+\theta Z_2(s)|^2ds+12\lambda^2\Delta t\int_{k\Delta t}^{t}|h(r,s,x,Z_1(s))|^2ds\\
&+12\Delta t\int_{k\Delta t}^{t}|f(r,s,x,Z_1(s))|^2ds+12\Delta t\int_{k\Delta t}^{t}|f(r,s,x,Z_2(s))|^2ds\\
&+6\Big|\int_{k\Delta t}^{t}g(r,s,x,Z_1(s))\,dW_s\Big|^2+12\Big|\int_{k\Delta t}^{t}h(r,s,x,Z_1(s))\,d\tilde N_s\Big|^2.
\end{aligned}
$$

By the Burkholder–Davis–Gundy inequality, there exist positive constants $K_3,K_4$ such that
$$E\sup_{0\le t_1\le T}\Big|\int_{k\Delta t}^{t}g(r,s,x,Z_1(s))\,dW_s\Big|^2\le K_3\int_{k\Delta t}^{t}E\sup_{0\le t_1\le T}|Z_1(s)|^2ds,$$
$$E\sup_{0\le t_1\le T}\Big|\int_{k\Delta t}^{t}h(r,s,x,Z_1(s))\,d\tilde N_s\Big|^2\le K_4\int_{k\Delta t}^{t}E\sup_{0\le t_1\le T}|Z_1(s)|^2ds.$$
Since $|Z_i|\le\sup_{0\le t_1\le t}|Q_{t_1}|$ and by the assumptions, we have

$$E\sup_{0\le t_1\le T}|Q_t-Z_1(t)|^2\le\big(6K_5\Delta t+6k^2\Delta t\,k_1+24\bar\mu^2\Delta t+24\Delta tK^2+6K_3+12K_4+12\lambda^2\Delta tK^2\big)\sup_{0\le t_1\le T}E|Q_s|^2+6k^2\Delta t\,k_2.$$

Setting $C_2=(6K_5+6k^2k_1+24\bar\mu^2+24K^2+6K_3+12K_4+12\lambda^2K^2)C_1T+6k^2k_2$, we obtain
$$E[|Q_t-Z_1(t)|^2]\le C_2\Delta t.$$

(By a similar method we can obtain $E[|Q_t-Z_2(t)|^2]\le C_3\Delta t$.)

Theorem 3.3. Under the assumptions, for each $t\in[0,T]$, $E[|P(t)-Q_t|^2]\le C_4\Delta t$.
Proof. Applying $\tilde N(t)=N(t)-\lambda t$ and Itô's formula to $|P(t)-Q_t|^2$:

$$
\begin{aligned}
|P(t)-Q_t|^2={}&-2\int_0^t\Big\langle P_s-Q_s,\frac{\partial(P_s-Q_s)}{\partial r}\Big\rangle ds+2\int_0^t\big\langle P_s-Q_s,\,k\,\mathrm{div}((P_s-Q_s)\nabla u)\big\rangle ds\\
&-2\int_0^t\big\langle P_s-Q_s,\,\mu(r,s,x)\big((1-\theta)(P_s-Z_1)+\theta(P_s-Z_2)\big)\big\rangle ds\\
&+2\int_0^t\big\langle P_s-Q_s,\,(1-\theta)\big(f(r,s,x,P_s)-f(r,s,x,Z_1)\big)+\theta\big(f(r,s,x,P_s)-f(r,s,x,Z_2)\big)\big\rangle ds\\
&+\int_0^t\|g(r,s,x,P_s)-g(r,s,x,Z_1)\|_2^2\,ds+2\int_0^t\big\langle P_s-Q_s,\,g(r,s,x,P_s)-g(r,s,x,Z_1)\big\rangle dW_s\\
&+2\int_0^t\big\langle P_s-Q_s,\,h(r,s,x,P_s)-h(r,s,x,Z_1)\big\rangle d\tilde N_s+\int_0^t|h(r,s,x,P_s)-h(r,s,x,Z_1)|^2\,dN_s
\end{aligned}
$$

$$
\begin{aligned}
\le{}&A^{-2}\int_0^t|P_s-Q_s|^2ds+k_1\int_0^t|P_s-Q_s|^2ds+k_2+2\bar\mu_0\int_0^t|P_s-Z_1|^2ds+2\bar\mu_0\int_0^t|P_s-Z_2|^2ds\\
&+2\int_0^t|f(r,s,x,P_s)-f(r,s,x,Z_1(s))|^2ds+2\int_0^t|f(r,s,x,P_s)-f(r,s,x,Z_2(s))|^2ds\\
&+(2+\bar\beta+\bar\mu_0)\int_0^t|P_s-Q_s|^2ds+2\int_0^t\!\!\int_0^A\!\!\int_O(P_s-Q_s)\big(g(r,s,x,P_s)-g(r,s,x,Z_1)\big)\,dx\,dr\,dW_s\\
&+\int_0^t|g(r,s,x,P_s)-g(r,s,x,Z_1(s))|^2ds+\int_0^t\|g(r,s,x,P_s)-g(r,s,x,Z_1)\|_2^2\,ds\\
&+\int_0^t|h(r,s,x,P_s)-h(r,s,x,Z_1(s))|^2ds+\int_0^t|h(r,s,x,P_s)-h(r,s,x,Z_1(s))|^2ds\\
&+2\int_0^t\!\!\int_0^A\!\!\int_O\big(h(r,s,x,P_s)-h(r,s,x,Z_1)\big)(P_s-Q_s)\,dx\,dr\,d\tilde N_s+\int_0^t|h(r,s,x,P_s)-h(r,s,x,Z_1)|^2\,d\tilde N_s.
\end{aligned}
$$

Hence, for any $t_1\in[0,T]$,

$$
\begin{aligned}
E\sup_{0\le t_1\le t}|P_{t_1}-Q_{t_1}|^2\le k_2&+(A^{-2}+1+\bar\beta+\bar\mu_0+k_1)\int_0^tE\sup_{0\le t_1\le s}|P_{t_1}-Q_{t_1}|^2ds\\
&+(2\bar\mu_0+3K^2+2K^2)\int_0^t|P_s-Z_1|^2ds+(2\bar\mu_0+2K^2)\int_0^t|P_s-Z_2|^2ds\\
&+2E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O(P_s-Q_s)\big(g(r,s,x,P_s)-g(r,s,x,Z_1)\big)\,dx\,dr\,dW_s
\end{aligned}
$$

$$
\begin{aligned}
&+2E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O\big(h(r,s,x,P_s)-h(r,s,x,Z_1)\big)(P_s-Q_s)\,dx\,dr\,d\tilde N_s\\
&+E\sup_{0\le t_1\le t}\int_0^{t_1}|h(r,s,x,P_s)-h(r,s,x,Z_1)|^2\,d\tilde N_s.
\end{aligned}
$$

By the Burkholder–Davis–Gundy inequality, there exist positive constants $K_5,K_6$ such that

$$E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O(P_s-Q_s)\big(g(r,s,x,P_s)-g(r,s,x,Z_1)\big)\,dx\,dr\,dW_s\le\frac18E\big[\sup_{0\le t_1\le t}|P_{t_1}-Q_{t_1}|^2\big]+K^2K_5\int_0^tE|P_s-Z_1|^2ds,$$

$$E\sup_{0\le t_1\le t}\int_0^{t_1}\!\!\int_0^A\!\!\int_O\big(h(r,s,x,P_s)-h(r,s,x,Z_1)\big)(P_s-Q_s)\,dx\,dr\,d\tilde N_s\le\frac18E\sup_{0\le t_1\le t}|P_{t_1}-Q_{t_1}|^2+K_6K^2\int_0^tE|P_s-Z_1|^2ds.$$

Applying Theorem 3.2, we have
$$E\sup_{0\le t_1\le t}|P_{t_1}-Q_{t_1}|^2\le C_5\Delta t+C_6\int_0^tE\sup_{0\le t_1\le s}|P_{t_1}-Q_{t_1}|^2\,ds,$$
where
$$C_5=(4\bar\mu_0+7K^2+4K^2+4K_5K^2+6K_6K^2)C_2+(4\bar\mu_0+4K^2)C_3+k_2,$$
$$C_6=A^{-2}+k_1+\bar\mu_0+2+\bar\beta+12K^2+4K^2+4K_5K^2+6K_6K^2.$$
Finally, applying Gronwall's lemma, the proof is complete.

Thus, applying Theorem 3.3, under the assumed conditions we obtain that the semi-implicit Euler numerical solution $Q_t$ converges to the analytical solution $P_t$ of the system.

References
1. Hernandez, G.E.: Age-density dependent population dispersal in R^N. Mathematical Biosciences 149, 37-56 (1998)
2. Hernandez, G.E.: Existence of solutions in a population dynamic problem. J. Appl. Math. 509, 43-48 (1986)
3. Hernandez, G.E.: Localization of age-dependent anti-crowding populations. Q. Appl. Math. 53, 35 (1995)
4. Zhang, Q., Han, C.Z.: Existence and uniqueness for a stochastic age-structured population system with diffusion. Science Direct 32, 2197-2206 (2008)
5. Zhang, Q., Liu, W., Nie, Z.: Existence, uniqueness and exponential stability of stochastic age-dependent population. J. Appl. Math. Comput. 154, 183-201 (2004)
6. Zhang, Q., Han, C.Z.: Convergence of numerical solutions to stochastic age-structured population system with diffusion. Applied Mathematics and Computation 07, 156 (2006)
7. Zhang, Q.: Exponential stability of numerical solutions to a stochastic age-structured population system with diffusion. Journal of Computational and Applied Mathematics 220, 22-33 (2008)
8. Zhang, Q., Han, C.Z.: Numerical analysis for stochastic age-dependent population equations. J. Appl. Math. Comput. 176, 210-223 (2005)
9. Gardon, A.: The order of approximations for solutions of Itô-type stochastic differential equations with jumps. Stochastic Analysis and Applications 38, 753-769 (2004)
Parallel Computer Processing Systems Are Better Than Serial Computer Processing Systems

Zvi Retchkiman Konigsberg

Instituto Politecnico Nacional

Abstract. The main objective and contribution of this paper consists in using a formal and mathematical approach to prove that parallel computer processing systems are better than serial computer processing systems, better related to: saving time and/or money and being able to solve larger problems. This is achieved thanks to the theory of Lyapunov stability and max-plus algebra applied to discrete event systems modeled with timed Petri nets.

Keywords: Parallel and Serial Computer Processing Systems, Lyapunov Methods, Max-Plus Algebra, Timed Petri Nets.

1 Introduction
Serial computer processing systems are characterized by executing software using a single central processing unit (CPU), while parallel computer processing systems use multiple CPUs simultaneously. Some of the arguments which have been used to say why parallel is better than serial are: saving time and/or money and solving larger problems. Besides that, there are limits to serial processing computer systems due to transmission speeds, limits to miniaturization, and economic limitations. However, we would like to be more precise and give a definitive and unquestionable formal proof to justify the claim that parallel computer processing systems are better than serial computer processing systems. The main objective and contribution of this paper consists in using a formal and mathematical approach to prove that parallel computer processing systems are better than serial computer processing systems (better related to: saving time and/or money and being able to solve larger problems). This is achieved thanks to the theory of Lyapunov stability and max-plus algebra applied to discrete event systems modeled with timed Petri nets. The paper is organized as follows. Sections 2 and 3 provide the mathematical results utilized in the paper about Lyapunov theory for discrete event systems modeled with Petri nets and max-plus algebra (for a detailed exposition see [1] and [2]). In Section 4, the solution to the stability problem for discrete event systems modeled with timed Petri nets using a Lyapunov, max-plus algebra approach is given. Section 5 applies the theory presented in the

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 8-14, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Parallel Computer Processing vs. Serial Computer Processing 9

previous sections to formally prove that parallel computer processing systems


are better than serial computer processing systems. Finally, the paper ends with
some conclusions.

2 Lyapunov Stability and Stabilization of Discrete Event Systems Modeled with Petri Nets [1]
Proposition 1. Let $PN$ be a Petri net. $PN$ is uniformly practically stable if there exists a strictly positive $m$-vector $\Phi$ such that
$$\Delta v = u^T A\Phi \le 0. \tag{1}$$

Moreover, $PN$ is uniformly practically asymptotically stable if the following equation holds:
$$\Delta v = u^T A\Phi \le -c(e),\quad\text{for } c\in K. \tag{2}$$

Lemma 1. Suppose that Proposition 1 holds. Then
$$\Delta v = u^T A\Phi \le 0 \iff A\Phi \le 0. \tag{3}$$

Remark 1. Notice that since the state space of a TPN (timed Petri net) is con-
tained in the state space of the same now not timed PN, stability of PN implies
stability of the TPN.

Definition 1. Let P N be a Petri net. P N is said to be stabilizable if there exists


a firing transition sequence with transition count vector u such that the reachable
markings M remain bounded.

Proposition 2. Let $PN$ be a Petri net. $PN$ is stabilizable if there exists a firing transition sequence with transition count vector $u$ such that the following equation holds:
$$\Delta v = A^T u \le 0. \tag{4}$$

3 Max-plus Algebra [2,3]


Theorem 1. The max-plus algebra $\mathcal{R}_{\max} = (\mathbb{R}_{\max}, \oplus, \otimes, \varepsilon, e)$ has the algebraic structure of a commutative and idempotent semiring.

Theorem 2. The 5-tuple $\mathcal{R}_{\max}^{n\times n} = (\mathbb{R}_{\max}^{n\times n}, \oplus, \otimes, \mathcal{E}, E)$ has the algebraic structure of a noncommutative idempotent semiring.

Definition 2. Let $A\in\mathbb{R}_{\max}^{n\times n}$ and $k\in\mathbb{N}$; then the $k$-th power of $A$, denoted by $A^{\otimes k}$, is defined by $A^{\otimes k} = A\otimes A\otimes\cdots\otimes A$ ($k$ times), where $A^{\otimes 0}$ is set equal to $E$.
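In $\mathbb{R}_{\max}$, $\oplus$ is the maximum and $\otimes$ is ordinary addition, with $\varepsilon = -\infty$ and $e = 0$. A minimal sketch of the matrix operations (the helper names are ours), representing matrices as lists of rows and $\varepsilon$ as `float("-inf")`:

```python
EPS = float("-inf")  # epsilon: neutral element for max-plus addition

def mp_mul(A, B):
    """Max-plus matrix product: [A (x) B]_ij = max_k (a_ik + b_kj)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mp_add(A, B):
    """Max-plus matrix sum: entrywise maximum."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mp_eye(n):
    """Max-plus identity E: e = 0 on the diagonal, epsilon elsewhere."""
    return [[0.0 if i == j else EPS for j in range(n)] for i in range(n)]
```

With these helpers, $A^{\otimes k}$ is simply `mp_mul` applied $k$ times starting from `mp_eye(n)`.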
10 Z.R. Konigsberg

Definition 3. Let $A\in\mathbb{R}_{\max}^{n\times n}$; then define the matrix $A^+\in\mathbb{R}_{\max}^{n\times n}$ as $A^+ = \bigoplus_{k=1}^{\infty} A^{\otimes k}$, where the element $[A^+]_{ji}$ gives the maximal weight of any path from $j$ to $i$. If in addition one wants to add the possibility of staying at a node, then one must include the matrix $E$ in the definition of $A^+$, giving rise to its Kleene star representation defined by
$$A^* = \bigoplus_{k=0}^{\infty} A^{\otimes k}. \tag{5}$$

Lemma 2. Let $A\in\mathbb{R}_{\max}^{n\times n}$ be such that any circuit in the communication graph $G(A)$ has average circuit weight less than or equal to $e$. Then it holds that
$$A^* = \bigoplus_{k=0}^{n-1} A^{\otimes k}. \tag{6}$$
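Lemma 2 makes the Kleene star computable with finitely many terms. A small self-contained sketch (the helper name is ours), which also illustrates the later statement of Theorem 4 that $x = A^*\otimes b$ solves $x = (A\otimes x)\oplus b$:

```python
EPS = float("-inf")

def mp_star(A):
    """Kleene star A* = E (+) A (+) A^2 (+) ... (+) A^(n-1), valid when every
    circuit of G(A) has nonpositive average weight (Lemma 2)."""
    n = len(A)
    # start from the max-plus identity E
    star = [[0.0 if i == j else EPS for j in range(n)] for i in range(n)]
    power = [row[:] for row in star]  # A^0 = E
    for _ in range(n - 1):
        # power <- power (x) A ; star <- star (+) power
        power = [[max(power[i][k] + A[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        star = [[max(star[i][j], power[i][j]) for j in range(n)]
                for i in range(n)]
    return star
```

For example, with all circuit weights negative (so the solution of $x = (A\otimes x)\oplus b$ is unique), the vector $x = A^*\otimes b$ can be checked to satisfy the equation directly.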

Definition 4. Let $A\in\mathbb{R}_{\max}^{n\times n}$ be a matrix. If $\lambda\in\mathbb{R}_{\max}$ is a scalar and $v\in\mathbb{R}_{\max}^n$ is a vector that contains at least one finite element such that
$$A\otimes v = \lambda\otimes v, \tag{7}$$
then $\lambda$ is called an eigenvalue and $v$ an eigenvector.

Theorem 3. If $A\in\mathbb{R}_{\max}^{n\times n}$ is irreducible, i.e., its communication graph $G(A)$ is strongly connected, then there exists one and only one finite eigenvalue (with possibly several eigenvectors). This eigenvalue is equal to the maximal average weight of circuits in $G(A)$:
$$\lambda(A) = \max_{p\in C(A)} \frac{|p|_w}{|p|_1}. \tag{8}$$

Theorem 4. Let $A\in\mathbb{R}_{\max}^{n\times n}$ and $b\in\mathbb{R}_{\max}^n$. If the communication graph $G(A)$ has maximal average circuit weight less than or equal to $e$, then $x = A^*\otimes b$ solves the equation $x = (A\otimes x)\oplus b$. Moreover, if the circuit weights in $G(A)$ are negative, then the solution is unique.

Definition 5. Let $A_m\in\mathbb{R}_{\max}^{n\times n}$ for $0\le m\le M$ and $x(m)\in\mathbb{R}_{\max}^n$ for $-M\le m\le -1$, $M\ge 0$. Then the recurrence equation $x(k) = \bigoplus_{m=0}^{M} A_m\otimes x(k-m)$, $k\ge 0$, is called an $M$th-order recurrence equation.

Theorem 5. The $M$th-order recurrence equation $x(k) = \bigoplus_{m=0}^{M} A_m\otimes x(k-m)$, $k\ge 0$, can be transformed into a first-order recurrence equation $x(k+1) = A\otimes x(k)$, $k\ge 0$, provided that $A_0$ has circuit weights less than or equal to zero.

With any timed event Petri net, matrices $A_0, A_1, \ldots, A_M\in\mathbb{N}^{n\times n}$ can be defined by setting $[A_m]_{jl} = a_{jl}$, where $a_{jl}$ is the largest of the holding times with respect to all places between transitions $t_l$ and $t_j$ with $m$ tokens, for $m = 0, 1, \ldots, M$, with $M$ equal to the maximum number of tokens with respect to all places. Let $x_i(k)$ denote the $k$th time that transition $t_i$ fires; then the vector $x(k) = (x_1(k), x_2(k), \ldots, x_n(k))^T$, called the state of the system, satisfies the $M$th-order recurrence equation $x(k) = \bigoplus_{m=0}^{M} A_m\otimes x(k-m)$, $k\ge 0$. Now, assuming that all the hypotheses of Theorem 5 are satisfied, and setting $\hat x(k) = (x^T(k), x^T(k-1), \ldots, x^T(k-M+1))^T$, this equation can be expressed as $\hat x(k+1) = \hat A\otimes\hat x(k)$, $k\ge 0$, which is known as the standard autonomous equation.

4 The Solution to the Stability Problem for Discrete Event Systems Modeled with Timed Petri Nets

Definition 6. A TPN is said to be stable if all the transitions fire with the same proportion, i.e., if there exists $q\in\mathbb{N}$ such that
$$\lim_{k\to\infty}\frac{x_i(k)}{k} = q,\quad i = 1, \ldots, n. \tag{9}$$

Lemma 3. Consider the recurrence relation $x(k+1) = A\otimes x(k)$, $k\ge 0$, $x(0) = x_0\in\mathbb{R}^n$ arbitrary, with $A$ an irreducible matrix and $\lambda\in\mathbb{R}$ its eigenvalue. Then
$$\lim_{k\to\infty}\frac{x_i(k)}{k} = \lambda,\quad i = 1, \ldots, n. \tag{10}$$
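Lemma 3 can be observed numerically: iterating the autonomous equation and dividing by the iteration count approaches the eigenvalue, i.e., the maximal average circuit weight of Theorem 3. A small sketch (the example matrix and names are ours):

```python
EPS = float("-inf")

def mp_vec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_k (a_ik + x_k)."""
    return [max(a + xk for a, xk in zip(row, x)) for row in A]

# An irreducible 2x2 example: its circuits are the self-loop at node 1
# (weight 3) and the circuit 1 -> 2 -> 1 (weight 1 + 2 = 3, average 1.5),
# so by Theorem 3 the eigenvalue is lambda = max(3, 1.5) = 3.
A = [[3.0, 2.0],
     [1.0, EPS]]
x = [0.0, 0.0]
K = 200
for _ in range(K):
    x = mp_vec(A, x)
rates = [xi / K for xi in x]  # x_i(K)/K -> lambda for every i (Lemma 3)
```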
Now, starting with an unstable TPN and collecting the results given by Proposition 2, what has just been discussed about recurrence equations for TPNs, and the previous Lemma 3 plus Theorem 3, the solution to the problem is obtained.

5 Parallel vs. Serial Computer Processing Systems

In this section, the main objective of this manuscript is presented: giving a precise and definitive answer to the question of why parallel computer processing systems are preferred to serial computer processing systems (better related to: saving time and/or money and being able to solve larger problems).

5.1 Serial Computer Processing System


Consider a serial computer processing system with the TPN model depicted in Fig. 1, where the events (transitions) that drive the system are: q: a problem of size Ca has to be solved; s: the problem starts being executed by the CPU;

Fig. 1.

d: the problem has been solved. The places (that represent the states of the serial computer processing system) are: A: problem loading; P: the problems are waiting for a solution; B: the problem is being solved; I: the CPU of capacity Cd is idle. The holding times associated to the places A and I are Ca and Cd respectively (with Ca > Cd).

Remark 2. Notice that Ca, the size of q, is the time it takes for a problem to be completely loaded into the computer in order to be solved (larger problems have larger Ca), while Cd, the capacity of the CPU, is the time it takes for the CPU to reset.

The incidence matrix that represents the PN model is
$$A = \begin{pmatrix} 0 & 1 & 0 & 0\\ 0 & -1 & 1 & -1\\ 0 & 0 & -1 & 1 \end{pmatrix}.$$
Therefore, since there does not exist a strictly positive $m$-vector $\Phi$ such that $A\Phi\le 0$, the sufficient condition for stability is not satisfied. Moreover, the PN (TPN) is unbounded, since by the repeated firing of q the marking in P grows indefinitely, i.e., the amount of problems that require a solution accumulates. However, by taking $u = [k, k, k]$, $k > 0$ (but unknown), we get that $A^T u\le 0$. Therefore the PN is stabilizable, which implies that the TPN is stable. Now let us proceed to determine the exact value of $k$. From the TPN model we obtain the matrix $\hat A = A_0\oplus A_1$, whose finite entries are the holding times Ca and Cd. Therefore $\lambda(\hat A) = \max_{p\in C(\hat A)} |p|_w/|p|_1 = \max\{Ca, Cd\} = Ca$. This means that in order for the TPN to be stable and work properly, the speed at which the serial computer processing system works has to be equal to Ca; or, being more precise, all the transitions must fire at the same speed as the problems arrive, i.e., they have to be solved as soon as they are loaded into the computer, which is attained by setting $k = Ca$. In particular, transition s, which is related to the execution time of the CPU, has to be fired at a speed equal to Ca.

Summary 6. The serial computer processing system works properly if transition s fires at a speed equal to Ca, which implies that the execution frequency of the CPU has to be equal to Ca. Now, if Ca increases, due to the fact that the problem to be solved becomes larger, then this will result in an increment in the CPU's execution frequency. However, there is a limit to this increment due to economical and physical limitations. One possible solution is to break this large problem into several smaller problems, but this will result in larger execution times.

5.2 Parallel Computer Processing System


Consider a parallel computer processing system with two CPUs with the TPN model depicted in Fig. 2, where the events (transitions) that drive the system are: q: a problem of size Ca has to be solved; s1, s2: the problem starts being executed by the CPUs; d1, d2: the problem has been solved. The places (that represent the states of the parallel computer processing system) are: A: problem loading; P: the problems are waiting for a solution; B1, B2: the problem is being solved; I1, I2: the CPUs of capacity Cd are idle. The holding times associated to the places A and I1, I2 are Ca and Cd respectively (with Ca > Cd). The incidence matrix that represents the PN model is
$$A = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0\\ 0 & -1 & 1 & -1 & 0 & 0\\ 0 & -1 & 0 & 0 & 1 & -1\\ 0 & 0 & -1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & -1 & 1 \end{pmatrix}.$$
Therefore, since there does not exist a strictly positive $m$-vector $\Phi$ such that $A\Phi\le 0$, the sufficient condition for stability is not satisfied. Moreover, the PN (TPN) is unbounded, since by the repeated firing of q the marking in P grows indefinitely, i.e., the amount of problems that require a solution accumulates. However, by taking $u = [k, k/2, k/2, k/2, k/2]$, $k > 0$ (but unknown), we get that $A^T u\le 0$. Therefore the PN is stabilizable, which implies that the TPN is stable. Now let us proceed to determine the exact value of $k$. From the TPN model we obtain the matrix $\hat A = A_0\oplus A_1$, whose finite entries are the holding times Ca and Cd. Therefore $\lambda(\hat A) = \max_{p\in C(\hat A)} |p|_w/|p|_1 = \max\{Ca, Cd\} = Ca$. This means that in order for the TPN to be stable and work properly, the speed at which the parallel computer processing system works has to be equal to Ca; or, being more precise, all the transitions must fire at the same speed as the problems arrive, i.e., they have to be solved as soon as they are loaded into the computer, which is attained by setting $k = Ca$. In particular, transitions s1 and s2, which are related to the execution time of the CPUs, have to be fired at a speed equal to Ca/2.

Fig. 2.

Remark 3. The previous analysis is easily extended to the case with n CPUs, obtaining that $u = [Ca, Ca/n, Ca/n, \ldots, Ca/n]$, which translates into the condition that the transitions s1, s2, ..., sn have to be fired at a speed equal to Ca/n.

Summary 7. The parallel computer processing system works properly if transitions s1, s2, ..., sn fire at a speed equal to Ca/n, which implies that the execution frequency of the CPUs has to be equal to Ca/n.

5.3 Comparison

As a result of Summaries 6 and 7 the following facts are deduced:

1. (Saving time) It is possible to solve a problem of size Ca with one CPU, which takes time Ca, or there is the option of solving a problem of size nCa (or n problems of size Ca each) using n CPUs, which will take the same time as with one CPU.
2. (Saving money) In order to execute a program, there is the option of purchasing one CPU that costs Ca or n CPUs that cost Ca/n each. This is significant for large Ca.
3. (Solving larger problems) If Ca increases, due to the fact that the problem to be solved becomes larger, then this will result in an increment in the CPU's execution frequency. As a consequence the serial computer processing option becomes expensive and/or slow. This is also true for the parallel computer processing alternative; however, by distributing Ca among the n CPUs, the economic and/or time impact turns out to be much lower.

References
1. Retchkiman, Z.: Stability theory for a class of dynamical systems modeled with
Petri nets. International Journal of Hybrid Systems 4(1) (2005)
2. Heidergott, B., Olsder, G.J., van der Woude, J.: Max Plus at Work. Princeton
University Press, Princeton (2006)
3. Baccelli, F., Cohen, G., Olsder, G.J., Quadrat, J.P.: Synchronization and Linearity,
Web-edition (2001)
Smooth Path Algorithm Based on A* in Games

Xiang Xu and Kun Zou

Department of Computer Engineering,


University of Electronic Science and Technology of China Zhongshan Institute,
Zhongshan, P.R. China
xushawn@sina.com

Abstract. Pathfinding is a core component of many games, especially real-time strategy games. This paper proposes a smooth path generation strategy based on the A* algorithm. First, an admissible heuristic function is adopted, and a key-point optimization is applied so that the path is connected by a series of key points. Second, Catmull-Rom splines are applied to interpolate the key points; by adding several interpolation points between the key points, the whole path is made to look smoother. Experimental results show that the proposed smooth path generation algorithm improves the smoothness of the path, makes pathfinding more realistic, and is more suitable for use in games.

Keywords: pathfinding, A* algorithm, heuristic function, smooth path,


Catmull-Rom splines.

1 Introduction
In games, a regular grid is often used to represent the game map: the grid divides the map, at a certain proportion or resolution, into small cells, each cell called a node. On such a grid map, the main purpose of pathfinding is to find the shortest, lowest-cost path according to the different terrain and obstacles. Many games use the A* algorithm as their pathfinding strategy, such as typical RTS and RPG games. Due to the characteristics of game software itself, the pathfinding algorithm has extra requirements: searching time should be short, and the path found should be smooth and realistic. Therefore, the standard A* algorithm needs many improvements before it can be used in games. Aiming at the special requirements of pathfinding in games, this paper analyses various improvement methods and proposes a smooth path generation algorithm.

2 The Basics of A* Algorithm


The A* algorithm is a typical heuristic algorithm in artificial intelligence. In order to understand the A* algorithm, we must first understand state space search and heuristic algorithms. State space search is a problem-solving process that looks for a path from the initial state to the goal state [1]. Put simply, because there are many problem-solving methods, and many solution paths caused by branching, uncertainty and incompleteness in the solving process, all these paths together constitute a graph, called the state space graph. Solving the problem is actually finding a path from the starting point to the goal in this graph, and this search process is called state space search. Common state space search methods are depth-first search (DFS) and breadth-first search (BFS). BFS first searches the initial state's layer, then the next layer, until the goal is found. DFS searches one branch first, in a certain order, and then another branch, until the goal is found. A big flaw of both breadth-first and depth-first search is that they exhaustively enumerate a given state space; this is acceptable when the state space is small, but undesirable when the state space is very large or unpredictable. They then become much less efficient and may even fail to complete the pathfinding. In this case we should use heuristic pathfinding. Heuristic pathfinding estimates, for each search position in the state space, the cost of reaching the goal from that position, and expands the lowest-cost position first. This can omit many useless path searches and improve efficiency. In heuristic pathfinding, the estimated cost of a position is very important, and different heuristic functions give different results [2]. The heuristic function's general form is as follows:

f(n) = g(n) + h(n)    (1)

Here, g(n) is the actual cost from the starting point to node n, and h(n) is the estimated cost from node n to the goal. g(n) is known: it can be calculated by reverse tracking from node n to the starting point following the parent pointers, accumulating all the edge costs along the path. So the heuristic information of f(n) relies mainly on the function h(n) [3]. According to certain known conditions of the state space, the heuristic function selects the node with minimum cost to search, continues the search from this node until the goal is reached or the search fails, and nodes that are not expanded need not be searched. The design of h(n) directly determines whether this heuristic algorithm qualifies as an A* algorithm [4].

3 Smooth Path Design


Designing a path in games involves more than just the application of a pathfinding algorithm. It also includes several techniques for achieving more realistic-looking results from pathfinding. We can use three main methods to improve the pathfinding algorithm: make movement straighter, smoother, and more direct. All these optimizations bring a more enjoyable gaming experience for gamers, and they also directly affect the implementation of the A* algorithm.

3.1 The Selection of Heuristic Function

The function h(n) in the A* algorithm usually adopts the classic Manhattan heuristic: the absolute difference of the x coordinates between the current node and the goal, plus the absolute difference of the y coordinates. Its primary drawback is that in an eight-way pathfinder this heuristic is inadmissible, so it is not guaranteed to find the shortest path. Also, because it is overweighted compared to g, it will overpower any small modifiers added to the calculation of g, such as a turning penalty or an influence map modifier. So we need a more suitable heuristic function. When choosing a heuristic function we must also consider the amount of calculation, so a compromise should be taken between the precision of the function and its computational cost. Here we adopt the improved heuristic function shown below:

h(n) = max(fabs(dest.x - current.x), fabs(dest.y - current.y))    (2)

This heuristic function satisfies the admissibility condition and guarantees the shortest path from the starting point to the goal.
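For illustration, the two heuristics can be written as follows (a sketch; the names are ours). On an 8-way grid where a diagonal step costs the same as a straight step, the true distance from (0,0) to (3,2) is 3 (two diagonal steps plus one straight step): the Manhattan heuristic returns 5 and overestimates, while heuristic (2) returns exactly 3.

```python
def manhattan(cx, cy, gx, gy):
    # classic Manhattan heuristic: |dx| + |dy|
    return abs(gx - cx) + abs(gy - cy)

def chebyshev(cx, cy, gx, gy):
    # heuristic (2): max(|dx|, |dy|) -- admissible on an 8-way grid
    # where a diagonal step costs the same as a straight step
    return max(abs(gx - cx), abs(gy - cy))
```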
In order to improve search efficiency, we preprocess the unreachable areas of the game map. Such an area may be cut off by a barrier obstacle, for example the far side of a river, or may be surrounded by walls, etc. For such terrain, the A* algorithm would examine all neighbour nodes around the unreachable node until it fails, wasting a lot of time. So we put all unreachable nodes into an unreachable list beforehand, and check whether the destination is in that list before pathfinding. For the Open list, we adopt a binary heap to enhance the efficiency of the algorithm. The specific algorithm flow chart is shown below:

Fig. 1. A* algorithm flow chart; a binary heap is adopted for the Open list, and the unreachable areas of the game map are preprocessed before searching starts
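The flow just described — heap-based Open list, admissible heuristic (2), parent-pointer backtracking — can be sketched as follows. This is a simplified illustration, not the paper's implementation: it uses unit cost for all eight directions and skips the unreachable-list preprocessing.

```python
import heapq

def a_star(grid, start, goal):
    """A* on an 8-way grid. grid[y][x] == 1 marks an obstacle; start/goal
    are (x, y) tuples. The Open list is a binary heap (heapq), and
    heuristic (2), max(|dx|, |dy|), is used. Returns the node path or None."""
    h = lambda p: max(abs(goal[0] - p[0]), abs(goal[1] - p[1]))
    height, width = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]       # entries are (f, g, node)
    parent, best_g = {start: None}, {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                     # backtrack via parent pointers
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue                         # stale heap entry
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = node[0] + dx, node[1] + dy
                if not (0 <= nx < width and 0 <= ny < height) or grid[ny][nx]:
                    continue
                ng = g + 1                   # unit cost for every step
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    parent[(nx, ny)] = node
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None
```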

3.2 Key-Point Optimization

The path generated by the A* algorithm is a set of discrete nodes {N1, N2, N3, ..., NP-1, NP}. If a game character moves along these nodes it will encounter many twists, the path is too long, and following it is quite time-consuming. We can adopt a key-point optimization strategy, namely select a limited number of key points to represent the whole path node set. There are many methods to select the key points. We can select each direction-change point as a key point and omit all other nodes. Alternatively, we can scan the original path node set {N1, N2, N3, ..., NP-1, NP}: if the line segment between any two nodes does not cross any obstacle (assuming the grid map is known), then all the nodes between these two nodes can be omitted and only the two endpoints preserved. After such processing, the path generated by the A* algorithm is constituted by a limited number of key points, the path length is much reduced, the tracking time drops sharply, and the result is convenient for further smooth path design (the experimental results are shown below) [5].

Fig. 2. The left figure shows the node sequence generated by the A* algorithm; the right figure shows the key-point node sequence after optimization
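The second selection method (keep two nodes whenever the segment between them crosses no obstacle) can be sketched greedily; the sampling-based visibility test below is our own simplification, not the paper's code:

```python
def line_clear(grid, a, b):
    """True if the segment from a to b crosses no obstacle cell, sampled at
    small steps (a simple sufficient check for a known grid map)."""
    (ax, ay), (bx, by) = a, b
    steps = max(abs(bx - ax), abs(by - ay)) * 4 or 1
    for i in range(steps + 1):
        t = i / steps
        x, y = round(ax + (bx - ax) * t), round(ay + (by - ay) * t)
        if grid[y][x]:
            return False
    return True

def key_points(grid, path):
    """Greedy pruning: from each kept node, jump to the farthest node that
    is still visible; every intermediate node is omitted."""
    if not path:
        return []
    keys, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_clear(grid, path[i], path[j]):
            j -= 1
        keys.append(path[j])
        i = j
    return keys
```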

3.3 The Generation of Smooth Path

After the key-point optimization is finished, we can apply Catmull-Rom splines to interpolate the key points; by adding several interpolation points between the key points, the whole path is made to look smoother.

Catmull-Rom splines are a family of cubic interpolating splines formulated such that the tangent at each point $p_i$ is calculated using the previous and next points on the spline, $\tau(p_{i+1}-p_{i-1})$. The geometry matrix is given by

$$p(u)=\begin{bmatrix}1&u&u^2&u^3\end{bmatrix}\begin{bmatrix}0&1&0&0\\-\tau&0&\tau&0\\2\tau&\tau-3&3-2\tau&-\tau\\-\tau&2-\tau&\tau-2&\tau\end{bmatrix}\begin{bmatrix}p_{i-2}\\p_{i-1}\\p_i\\p_{i+1}\end{bmatrix}\tag{3}$$

Catmull-Rom splines have C1 continuity, local control, and interpolation, but do not lie within the convex hull of their control points.

Fig. 3. A Catmull-Rom spline

Fig. 4. The effect of

Note that the tangent at point p0 is not clearly defined; oftentimes we set it to
\(\tau(p_1 - p_0)\), although this is not strictly necessary (one can simply assume
the curve does not interpolate its endpoints).
The parameter \(\tau\) is known as tension, and it affects how sharply the curve bends
at the (interpolated) control points (Fig. 4). It is often set to 0.5, but any
reasonable value can be used.
The Catmull-Rom formula requires four input point coordinates; the result is a point
lying between the second and third points, approximately a fraction u of the way
along. When \(\tau = 0.5\), the computation formula is as follows [6]:
\[
\begin{aligned}
p(u) = {} & p_{i-2}\,(-0.5u + u^2 - 0.5u^3) \\
 & + p_{i-1}\,(1 - 2.5u^2 + 1.5u^3) \\
 & + p_i\,(0.5u + 2u^2 - 1.5u^3) \\
 & + p_{i+1}\,(-0.5u^2 + 0.5u^3)
\end{aligned}
\qquad (4)
\]

Note that if u = 0 the result is p_{i-1}; if u = 1, the result is p_i.


Smooth processing also consumes time. To reduce it, we should insert as few
intermediate points as possible while keeping an acceptable smoothing effect; this
paper chooses three insertion points, at u = 0.25, 0.5 and 0.75, producing three
equally spaced points. Thus, for every n key points, interpolation adds 3(n-1)
points, and after smoothing the node count is (4n-3)/n times the original.
Because Catmull-Rom splines generate interpolation points only between the second
and third control points, the first and last segments require the padded key-point
sequences (1, 1, 2, 3) and (n-2, n-1, n, n), while every other segment simply uses
(s-1, s, s+1, s+2). This yields a uniform distribution of interpolation points;
the specific effects are shown below.
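The smoothing step just described can be sketched as follows. This is an illustrative implementation under the paper's stated choices (formula (4) with \(\tau = 0.5\), three insertion points, endpoint padding by duplicating the first and last key points); the function names are assumptions.

```python
def catmull_rom(p0, p1, p2, p3, u):
    """Evaluate formula (4): a point between p1 and p2 at parameter u,
    using the tau = 0.5 Catmull-Rom basis polynomials."""
    b0 = -0.5 * u + u * u - 0.5 * u ** 3
    b1 = 1.0 - 2.5 * u * u + 1.5 * u ** 3
    b2 = 0.5 * u + 2.0 * u * u - 1.5 * u ** 3
    b3 = -0.5 * u * u + 0.5 * u ** 3
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth_path(keys, us=(0.25, 0.5, 0.75)):
    """Insert three interpolated points between consecutive key points;
    n key points become 4n - 3 path nodes, as in the text."""
    n = len(keys)
    if n < 2:
        return list(keys)
    out = []
    for s in range(n - 1):
        p0 = keys[max(s - 1, 0)]      # pad: duplicate the first key point
        p1, p2 = keys[s], keys[s + 1]
        p3 = keys[min(s + 2, n - 1)]  # pad: duplicate the last key point
        out.append(p1)
        out.extend(catmull_rom(p0, p1, p2, p3, u) for u in us)
    out.append(keys[-1])
    return out
```

For 3 key points this produces 4·3 − 3 = 9 nodes, and since the basis gives p(0) = p1 and p(1) = p2 exactly, the smoothed path still passes through every key point.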
20 X. Xu and K. Zou

Fig. 5. Catmull-Rom smooth path generation. The left figure shows the key points (marked as
red circles); the right figure shows the smooth path.

4 Other Problems to Be Considered


There are many other problems to consider in game pathfinding [7]. For instance:
(1) Terrain. The path from A to B has two options: a steep mountain road that is
shorter, or a smooth road that is longer. Sometimes we prefer the latter, but plain
A* can only select the former. The general method for this kind of problem is to give
different terrain a different influence factor; to make the choice more random, we
can additionally set a tolerable difference value C: when the costs of the mountain
road and the smooth road differ by no more than C, we allow random selection of one
of the paths. For example, when the mountain road is not very steep, it may still
be chosen.
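The tolerance rule above can be sketched in a few lines. This is a hypothetical illustration (the path names, lengths, and terrain factors are invented for the example, not taken from the paper):

```python
import random

def path_cost(length, terrain_factor):
    """Terrain-weighted cost: path length scaled by the terrain influence factor."""
    return length * terrain_factor

def choose_path(paths, tolerance):
    """paths: list of (name, length, terrain_factor) triples.
    Any path whose cost is within `tolerance` of the cheapest one is a
    candidate, and one candidate is picked at random."""
    scored = sorted((path_cost(l, f), name) for name, l, f in paths)
    best_cost = scored[0][0]
    candidates = [name for cost, name in scored if cost - best_cost <= tolerance]
    return random.choice(candidates)
```

With a steep-but-short mountain road costing 20 and a longer smooth road costing 18, a tolerance of 3 lets either be chosen, while a tolerance of 0.5 always picks the smooth road.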
(2) Traffic. For instance, in relatively narrow terrain, when a large group of game
characters needs to pass through, traffic congestion can occur. It is probably best
to let later characters know about this situation and dynamically increase the
terrain cost, so that they change their route.
(3) March algorithm. In a real-time strategy game, if many game characters must move
simultaneously, A* would be called many times and the time cost becomes unbearable.
There are two solutions. A simple method is to push all the units into a queue and
carry out pathfinding in order; this gives a quick response, but the units cannot
move together: some characters move first and the rest slowly catch up. Another
common way is to randomly select one character in the army as a guide, perform
pathfinding just once, and let the other characters simply follow behind.
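The "guide" strategy can be sketched as below. This is a minimal, assumed interface (the `find_path` callback stands in for the A* of the earlier sections; the follow-by-delay scheme is one simple way to realize "tailing behind", not the only one):

```python
import random

def march(units, start, goal, find_path):
    """Group movement with a single guide: one pathfinding call for all units.
    find_path(start, goal) -> list of waypoints (e.g. an A* search)."""
    leader = random.choice(units)
    path = find_path(start, goal)              # the only pathfinding call
    followers = [u for u in units if u is not leader]
    plans = {}
    for rank, unit in enumerate([leader] + followers):
        # each follower starts the same path `rank` steps later, so the
        # group strings out behind the guide instead of pathfinding itself
        plans[unit] = [path[0]] * rank + list(path)
    return leader, plans
```

Every unit reuses the leader's waypoints, so the cost of moving a whole army is one A* search plus trivial per-unit bookkeeping.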
(4) Other moving units. Since other moving units also exist on the map and their
positions are not fixed, it is very difficult to compute in advance a path that
avoids them. The simplest feasible method is collision detection: after a collision,
search for a new path, or simply move to the right or left.
(5) The number of pathfinders. Many real-time strategy games also face this problem:
multiple units must be allowed to move together. To reduce pathfinding time, the
number of pathfinders must be limited, i.e., an upper bound is placed on the
population of a marching unit.

5 Conclusion
This paper analyzed the standard A* algorithm and proposed an improved strategy. The
algorithm adopts an admissible heuristic function and performs key-point optimization
on the node sequence; based on the key-point sequence, the paper realizes smooth path
generation with Catmull-Rom splines. The generated search path reflects the actual
path effect in the game and exhibits a certain intelligence and human-like quality,
although its execution efficiency comes at the price of storage space and CPU time.
Future games will need more intelligent, more user-friendly game characters, so we
hope that better algorithms will emerge to solve the problems in game pathfinding.

References
1. Tao, Z.H., Hang, C.Y.: Path Finding Using A* Algorithm. Micro Computer Information
23(17), 238–240 (2007)
2. Lester, P.: A* pathfinding for beginners (2005),
http://www.policyalmanac.org/games/aStarTutorial.htm
3. Heping, C., Qianshao, Z.: Application and Implementation of A* Algorithms in the Game
Map Pathfinding. Computer Applications and Software 22(12), 118–120 (2005)
4. Lester, P.: Using Binary Heaps in A* Pathfinding (2003),
http://www.policyalmanac.org/games/binaryHeaps.htm
5. Wei, S., Zhengda, M.: Smooth path design for mobile service robots based on improved A*
algorithm. Journal of Southeast University (Natural Science Edition) 40(Sup I) (September
2010)
6. Deloura, M.: Game Programming Gems. Charles River Media, London (2000)
7. Higgins, D.F.: Pathfinding Design Architecture. In: AI Game Programming Wisdom.
Charles River Media, London (2002)
The Features of Biorthogonal Binary Poly-scale Wavelet
Packs in Bidimensional Function Space

Zhihao Tang* and Honglin Guo

Department of Fundamentals, Henan Polytechnic Institute, Nanyang 473009, China


jhnsx123@126.com

Abstract. Wavelet analysis has become a developing branch of mathematics for


over twenty years. In this paper, the notion of orthogonal nonseparable bivariate
wavelet packs, which is the generalization of orthogonal univariate wavelet
packs, is proposed by virtue of analogy method and iteration method. Their
biorthogonality traits are researched by using time-frequency analysis approach
and variable separation approach. Three orthogonality formulas regarding these
wavelet wraps are obtained. Moreover, it is shown how to draw new
orthonormal bases of space L2 ( R 2 ) from these wavelet wraps. A procedure for
designing a class of orthogonal vector-valued finitely supported wavelet
functions is proposed by virtue of filter bank theory and matrix theory.

Keywords: Nonseparable, binary wavelet packs, wavelet frame, Bessel sequen-


ce, orthonormal bases, time-frequency analysis approach.

1 Introduction and Notations


The main advantage of wavelet packs is their time-frequency localization property.
Construction of wavelet bases is an important aspect of wavelet analysis, and
multiresolution analysis method is one of the important ways of constructing various
wavelet bases. There exist many kinds of scalar scaling functions and scalar wavelet
functions. Although the Fourier transform has been a major tool in analysis for over a
century, it has a serious shortcoming for signal analysis in that it hides in its phases
information concerning the moment of emission and duration of a signal. Wavelet
analysis [1] has developed into a new branch over the past twenty years. Its applications
span many areas of natural science and engineering technology. The main
advantage of wavelets is their time-frequency localization property. Many signals in
areas like music, speech, images, and video images can be efficiently represented by
wavelets that are translations and dilations of a single function called mother wavelet
with bandpass property. Wavelet packets, owing to their good properties, have
attracted considerable attention. They can be widely applied in science and
engineering [2,3]. Coifman, R.R. and Meyer, Y. first introduced the notion of
orthogonal wavelet packets, which were used to decompose wavelet components. Chui,
C.K. and Li, C. [4] generalized the concept of orthogonal wavelet packets to the
case of non-orthogonal wavelet packets so that wavelet packets can be employed in
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 22–28, 2011.
Springer-Verlag Berlin Heidelberg 2011

the case of the spline wavelets and so on. Tensor-product multivariate wavelet packs
have been constructed by Coifman and Meyer. The introduction of the notion of
nontensor-product wavelet packs is attributed to Shen, Z. [5]. Since the majority of
information is multidimensional, many researchers interest themselves in the
investigation of multivariate wavelet theory. But there exist obvious defects in the
tensor-product method, such as scarcity of design freedom. Therefore, it is
significant to investigate nonseparable multivariate wavelet theory. Nowadays, since
there is little literature on biorthogonal wavelet wraps, it is necessary to
investigate biorthogonal wavelet wraps.
In the following, we introduce some notations. \(Z\) and \(Z_+\) denote all integers
and all nonnegative integers, respectively. \(R\) denotes the real numbers and
\(R^2\) the 2-dimensional Euclidean space; \(L^2(R^2)\) denotes the square integrable
function space. Let \(x=(x_1,x_2)\in R^2\), \(\omega=(\omega_1,\omega_2)\in R^2\),
\(k=(k_1,k_2)\in Z^2\), \(z_1=e^{-i\omega_1/2}\), \(z_2=e^{-i\omega_2/2}\).
The inner product of any functions \(\phi(x)\) and \(\psi(x)\)
(\(\phi(x),\psi(x)\in L^2(R^2)\)) and the Fourier transform of \(\phi(x)\) are
defined, respectively, by
\[
\langle \phi,\psi\rangle = \int_{R^2} \phi(x)\,\overline{\psi(x)}\,dx,\qquad
\hat{\phi}(\omega) = \int_{R^2} \phi(x)\,e^{-i\,\omega\cdot x}\,dx,
\]
where \(\omega\cdot x=\omega_1 x_1+\omega_2 x_2\) and \(\overline{\psi(x)}\) denotes
the complex conjugate of \(\psi(x)\). Let \(R\)


and \(C\) be all real and all complex numbers, respectively. \(Z\) and \(N\) denote,
respectively, all integers and all positive integers. Set \(Z_+=\{0\}\cup N\) and
\(a,s\in N\) with \(a\ge 2\). By algebra theory, it obviously follows that there are
\(a^2\) elements \(d_0,d_1,\dots,d_{a^2-1}\) in
\(Z_+^2=\{(n_1,n_2): n_1,n_2\in Z_+\}\) such that
\(Z^2=\bigcup_{d\in\Omega_0}(d+aZ^2)\) and
\((d_1+aZ^2)\cap(d_2+aZ^2)=\emptyset\), where
\(\Omega_0=\{d_0,d_1,\dots,d_{a^2-1}\}\) denotes the aggregate of all the different
representative elements in the quotient group \(Z^2/(aZ^2)\), where \(d_0=\{0\}\) is
the null element of \(Z_+^2\) and \(d_1,d_2\) denote two arbitrary distinct elements
in \(\Omega_0\). Let \(\Omega=\Omega_0\setminus\{0\}\); \(\Omega\) and \(\Omega_0\)
are two index sets. By \(L^2(R^2,C^s)\) we denote the set of all vector-valued
functions
\[
L^2(R^2,C^s):=\{h(x)=(h_1(x),h_2(x),\dots,h_s(x))^T :
h_l(x)\in L^2(R^2),\ l=1,2,\dots,s\},
\]
where \(T\) means the transpose of a vector. For any \(h\in L^2(R^2,C^s)\) its
integration is defined as follows:
\[
\int_{R^2} h(x)\,dx=\Big(\int_{R^2}h_1(x)\,dx,\int_{R^2}h_2(x)\,dx,\dots,
\int_{R^2}h_s(x)\,dx\Big)^T.
\]

Definition 1. A sequence \(\{\phi_n(y)\}_{n\in Z^2}\subset L^2(R^2,C^s)\) is called
an orthogonal set if
\[
\langle \phi_n,\phi_v\rangle=\delta_{n,v}\,I_s,\qquad n,v\in Z^2, \tag{1}
\]
where \(I_s\) stands for the \(s\times s\) identity matrix and \(\delta_{n,v}\) is
the generalized Kronecker symbol, i.e., \(\delta_{n,v}=1\) if \(n=v\) and
\(\delta_{n,v}=0\) otherwise.

2 The Bivariate Multiresolution Analysis


2 2
Firstly, we introduce multiresolution analysis of space L ( R ). Wavelets can be
constructed by means of multiresolution analysis. In particular, the existence

theorem [8] for higher-dimensional wavelets with arbitrary dilation matrices has been
given. Let \(f(x)\in L^2(R^2)\) satisfy the following refinement equation:
\[
f(x)=m^2\sum_{k\in Z^2} b_k\, f(mx-k), \tag{2}
\]
where \(\{b(n)\}_{n\in Z^2}\) is a real number sequence with only finitely many
nonzero terms, and \(f(x)\) is called a scaling function. Formula (2) is said to be
a two-scale refinement equation. The frequency form of formula (2) can be written as
\[
\hat f(\omega)=B(z_1,z_2)\,\hat f(\omega/m), \tag{3}
\]
where
\[
B(z_1,z_2)=\sum_{(n_1,n_2)\in Z^2} b_{(n_1,n_2)}\, z_1^{n_1} z_2^{n_2}. \tag{4}
\]
Define a subspace \(V_j\subset L^2(R^2)\) \((j\in Z)\) by
\[
V_j=\operatorname{clos}_{L^2(R^2)}\langle m^{j} f(m^{j}x-k): k\in Z^2\rangle. \tag{5}
\]

Definition 2. We say that \(f(x)\) in (2) generates a multiresolution analysis
\(\{V_j\}_{j\in Z}\) of \(L^2(R^2)\) if the sequence \(\{V_j\}_{j\in Z}\) defined in
(5) satisfies the following properties: (i) \(V_j\subset V_{j+1}\), \(j\in Z\);
(ii) \(\bigcap_{j\in Z}V_j=\{0\}\) and \(\bigcup_{j\in Z}V_j\) is dense in
\(L^2(R^2)\); (iii) \(h(x)\in V_k\Leftrightarrow h(mx)\in V_{k+1}\), \(k\in Z\);
(iv) the family \(\{f(m^jx-n): n\in Z^2\}\) forms a Riesz basis for the space
\(V_j\).
Let \(Y_j\) \((j\in Z)\) denote the complementary subspace of \(V_j\) in
\(V_{j+1}\), and assume that there exists a vector-valued function
\(G(x)=\{g_1(x),g_2(x),\dots,g_{m^2-1}(x)\}\) that constitutes a Riesz basis for
\(Y_j\), i.e.,
\[
Y_j=\operatorname{clos}_{L^2(R^2)}\langle g_{\nu:j,n}:
\nu=1,2,\dots,m^2-1;\ n\in Z^2\rangle, \tag{6}
\]
where \(j\in Z\) and \(g_{\nu:j,k}(x)=m^{j/2}g_\nu(m^jx-k)\),
\(\nu=1,2,\dots,m^2-1\), \(k\in Z^2\). From condition (6), it is obvious that
\(g_1(x),g_2(x),\dots,g_{m^2-1}(x)\) are in \(Y_0\subset V_1\). Hence there exist
\(m^2-1\) real number sequences \(\{q_k^{(\nu)}\}\)
\((\nu=1,2,\dots,m^2-1,\ k\in Z^2)\) such that
\[
g_\nu(x)=m^2\sum_{k\in Z^2} q_k^{(\nu)}\, f(mx-k). \tag{7}
\]
Formula (7) in the frequency domain can be written as
\[
\hat g_\nu(\omega)=Q^{(\nu)}(z_1,z_2)\,\hat f(\omega/m),
\qquad \nu=1,2,\dots,m^2-1, \tag{8}
\]

where the symbol of the sequence \(\{q_k^{(\nu)}\}\)
\((\nu=1,2,\dots,m^2-1,\ k\in Z^2)\) is
\[
Q^{(\nu)}(z_1,z_2)=\sum_{(n_1,n_2)\in Z^2} q_{(n_1,n_2)}^{(\nu)}\,
z_1^{n_1} z_2^{n_2}. \tag{9}
\]
A bivariate function \(f(x)\in L^2(R^2)\) is called an orthogonal one if
\[
\langle f(\cdot),f(\cdot-k)\rangle=\delta_{0,k},\qquad k\in Z^2. \tag{10}
\]
We say \(G(x)=\{g_1(x),g_2(x),\dots,g_{m^2-1}(x)\}\) is a set of orthogonal
bivariate vector-valued wavelets associated with the scaling function \(f(x)\) if
they satisfy
\[
\langle f(\cdot),g_\nu(\cdot-k)\rangle=0,\qquad \nu\in\Omega,\ k\in Z^2, \tag{11}
\]
\[
\langle g_\lambda(\cdot),g_\nu(\cdot-n)\rangle=\delta_{\lambda,\nu}\,\delta_{0,n},
\qquad \lambda,\nu\in\Omega,\ n\in Z^2. \tag{12}
\]

3 The Traits of Nonseparable Bivariate Wavelet Packs


To construct wavelet packs, we introduce the following notation: \(h_0(x)=f(x)\),
\(h_\nu(x)=g_\nu(x)\), \(b^{(0)}(n)=b(n)\), \(b^{(\nu)}(n)=q^{(\nu)}(n)\), where
\(\nu\in\Omega\). We are now in a position to introduce orthogonal bivariate
nonseparable wavelet wraps.

Definition 3. A family of functions \(\{h_{4k+\nu}(x): k\in Z_+,\
\nu=0,1,2,3\}\) is called a nonseparable bivariate wavelet pack with respect to
the orthogonal scaling function \(h_0(x)\), where
\[
h_{4k+\nu}(x)=\sum_{n\in Z^2} b^{(\nu)}(n)\, h_k(2x-n), \tag{13}
\]
with \(\nu=0,1,2,3\). Taking the Fourier transform of both sides of (13), we have
\[
\hat h_{4k+\nu}(\omega)=B^{(\nu)}(z_1,z_2)\,\hat h_k(\omega/2), \tag{14}
\]
where
\[
B^{(\nu)}(z_1,z_2)=B^{(\nu)}(\omega/2)
=\sum_{k\in Z^2} b^{(\nu)}(k)\, z_1^{k_1} z_2^{k_2}. \tag{15}
\]
Lemma 1 [6]. Let \(f(x)\in L^2(R^2)\). Then \(f(x)\) is an orthogonal one if and
only if
\[
\sum_{k\in Z^2} |\hat f(\omega+2k\pi)|^2=1. \tag{16}
\]
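The periodization identity of Lemma 1 can be sanity-checked numerically in the univariate analogue with the Haar scaling function, for which \(|\hat\phi(\omega)|^2=\big(\sin(\omega/2)/(\omega/2)\big)^2\). This check is an illustration added here, not part of the paper; the truncation length is an arbitrary choice.

```python
import math

def haar_hat_sq(w):
    """|phi_hat(w)|^2 for the Haar scaling function (univariate case)."""
    if w == 0.0:
        return 1.0
    return (math.sin(w / 2.0) / (w / 2.0)) ** 2

def orthogonality_sum(w, n_terms=5000):
    """Truncated version of sum_k |phi_hat(w + 2k*pi)|^2 from Lemma 1;
    the tail decays like 1/k^2, so the truncation error is small."""
    return sum(haar_hat_sq(w + 2.0 * math.pi * k)
               for k in range(-n_terms, n_terms + 1))
```

Evaluating `orthogonality_sum` at any frequency returns a value close to 1, as the lemma's orthogonality criterion requires.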

Lemma 2. Assume that \(f(x)\) is an orthogonal scaling function and that
\(B(z_1,z_2)\) is the symbol of the sequence \(\{b(k)\}\) defined in (2). Then we
have
\[
1=|B(z_1,z_2)|^2+|B(-z_1,z_2)|^2+|B(z_1,-z_2)|^2+|B(-z_1,-z_2)|^2. \tag{17}
\]


Proof. If \(f(x)\) is an orthogonal bivariate function, then
\(\sum_{k\in Z^2}|\hat f(\omega+2k\pi)|^2=1\). Therefore, by Lemma 1 and formula
(3), we obtain
\[
\begin{aligned}
1&=\sum_{k\in Z^2}\big|B\big(e^{-i(\omega_1/2+k_1\pi)},
e^{-i(\omega_2/2+k_2\pi)}\big)\big|^2\,
\big|\hat f\big((\omega_1,\omega_2)/2+(k_1,k_2)\pi\big)\big|^2\\
&=|B(z_1,z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi)|^2
+|B(-z_1,z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi+(1,0)\pi)|^2\\
&\quad+|B(z_1,-z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi+(0,1)\pi)|^2
+|B(-z_1,-z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi+(1,1)\pi)|^2\\
&=|B(z_1,z_2)|^2+|B(-z_1,z_2)|^2+|B(z_1,-z_2)|^2+|B(-z_1,-z_2)|^2.
\end{aligned}
\]
This completes the proof of Lemma 2. Similarly, Lemma 3 can be obtained from (3),
(8) and (13).
Lemma 3. If \(h_\nu(x)\) \((\nu=0,1,2,3)\) are orthogonal wavelet functions
associated with \(h_0(x)\), then we have
\[
\sum_{j_1,j_2\in\{0,1\}}
B^{(\lambda)}\big((-1)^{j_1}z_1,(-1)^{j_2}z_2\big)\,
\overline{B^{(\mu)}\big((-1)^{j_1}z_1,(-1)^{j_2}z_2\big)}
=\delta_{\lambda,\mu},\qquad \lambda,\mu\in\{0,1,2,3\}. \tag{18}
\]
For an arbitrary positive integer \(n\in Z_+\), expand it as
\[
n=\sum_{j=1}^{\infty}\nu_j\,4^{\,j-1},\qquad \nu_j\in\{0,1,2,3\}. \tag{19}
\]
Lemma 4. Let \(n\in Z_+\) be expanded as in (19). Then we have
\[
\hat h_n(\omega)=\prod_{j=1}^{\infty}
B^{(\nu_j)}\big(e^{-i\omega_1/2^j},e^{-i\omega_2/2^j}\big)\,\hat h_0(0).
\]
Lemma 4 can be proved inductively from formulas (14) and (19).

Theorem 1. For \(n\in Z_+\), \(k\in Z^2\), we have
\[
\langle h_n(\cdot),h_n(\cdot-k)\rangle=\delta_{0,k}. \tag{20}
\]
Proof. Formula (20) follows from (10) as \(n=0\). Assume formula (20) holds for
\(0\le n<4^{r_0}\) (\(r_0\) a positive integer), and consider the case
\(4^{r_0}\le n<4^{r_0+1}\). By the induction assumption and Lemmas 1, 3 and 4, we
have
\[
\begin{aligned}
(2\pi)^2\langle h_n(\cdot),h_n(\cdot-k)\rangle
&=\int_{R^2}\big|\hat h_n(\omega)\big|^2 e^{ik\omega}\,d\omega\\
&=\int_{[0,4\pi]^2}\big|B^{(\nu)}(z_1,z_2)\big|^2
\sum_{j\in Z^2}\big|\hat h_{[n/4]}(\omega/2+2j\pi)\big|^2
e^{ik\omega}\,d\omega\\
&=\int_{[0,2\pi]^2}\sum_{j_1,j_2\in\{0,1\}}
\big|B^{(\nu)}\big((-1)^{j_1}z_1,(-1)^{j_2}z_2\big)\big|^2
e^{ik\omega}\,d\omega
=\int_{[0,2\pi]^2}e^{ik\omega}\,d\omega=(2\pi)^2\,\delta_{0,k}.
\end{aligned}
\]
Thus, we complete the proof of Theorem 1.

Theorem 2. For every \(k\in Z^2\) and \(m,n\in Z_+\), we have
\[
\langle h_m(\cdot),h_n(\cdot-k)\rangle=\delta_{m,n}\,\delta_{0,k}. \tag{21}
\]
Proof. For the case \(m=n\), (21) follows from Theorem 1. As \(m\ne n\) with
\(m,n\in\Omega_0=\{0,1,2,3\}\), the result follows from Lemma 3. In what follows,
assume that \(m\ne n\) and at least one of \(\{m,n\}\) does not belong to
\(\Omega_0\); rewrite \(m,n\) as \(m=4m_1+\lambda_1\), \(n=4n_1+\mu_1\), where
\(m_1,n_1\in Z_+\) and \(\lambda_1,\mu_1\in\Omega_0\).

Case 1. If \(m_1=n_1\), then \(\lambda_1\ne\mu_1\). By (18), formula (21) follows,
since
\[
\begin{aligned}
(2\pi)^2\langle h_m(\cdot),h_n(\cdot-k)\rangle
&=\int_{R^2}\hat h_{4m_1+\lambda_1}(\omega)\,
\overline{\hat h_{4n_1+\mu_1}(\omega)}\,e^{ik\omega}\,d\omega\\
&=\int_{[0,4\pi]^2}B^{(\lambda_1)}(z_1,z_2)
\sum_{s\in Z^2}\hat h_{m_1}(\omega/2+2s\pi)\,
\overline{\hat h_{m_1}(\omega/2+2s\pi)}\,
\overline{B^{(\mu_1)}(z_1,z_2)}\,e^{ik\omega}\,d\omega\\
&=\int_{[0,2\pi]^2}\delta_{\lambda_1,\mu_1}\,e^{ik\omega}\,d\omega=0.
\end{aligned}
\]
Case 2. If \(m_1\ne n_1\), write \(m_1=4m_2+\lambda_2\), \(n_1=4n_2+\mu_2\), where
\(m_2,n_2\in Z_+\) and \(\lambda_2,\mu_2\in\Omega_0\). If \(m_2=n_2\), then
\(\lambda_2\ne\mu_2\); similar to Case 1, (21) follows. As \(m_2\ne n_2\), write
\(m_2=4m_3+\lambda_3\), \(n_2=4n_3+\mu_3\) once more, where \(m_3,n_3\in Z_+\) and
\(\lambda_3,\mu_3\in\Omega_0\). Thus, after finitely many steps (denoted by \(r\)),
we obtain \(m_r,n_r\in\Omega_0\) and \(\lambda_r,\mu_r\in\Omega_0\). If
\(m_r=n_r\), then \(\lambda_r\ne\mu_r\); similar to Case 1, (21) holds. If
\(m_r\ne n_r\), then, similar to Lemma 1, we conclude that
\[
\begin{aligned}
(2\pi)^2\langle h_m(\cdot),h_n(\cdot-k)\rangle
&=\int_{R^2}\hat h_{4m_1+\lambda_1}(\omega)\,
\overline{\hat h_{4n_1+\mu_1}(\omega)}\,e^{ik\omega}\,d\omega\\
&=\int_{[0,2^{r+1}\pi]^2}
\Big\{\prod_{\ell=1}^{r}B^{(\lambda_\ell)}(\omega/2^{\ell})\Big\}\,O\,
\Big\{\prod_{\ell=1}^{r}\overline{B^{(\mu_\ell)}(\omega/2^{\ell})}\Big\}\,
e^{ik\omega}\,d\omega=0.
\end{aligned}
\]

Theorem 3. If \(\{G_\beta(x),\beta\in Z_+^2\}\) and
\(\{\tilde G_\beta(x),\beta\in Z_+^2\}\) are vector-valued wavelet packs with
respect to a pair of biorthogonal vector-valued scaling functions \(G_0(x)\) and
\(\tilde G_0(x)\), then for any \(\alpha,\sigma\in Z_+^2\) we have
\[
\langle G_\alpha(\cdot),\tilde G_\sigma(\cdot-k)\rangle
=\delta_{\alpha,\sigma}\,\delta_{0,k}\,I_s,\qquad k\in Z^2. \tag{22}
\]



References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons &
Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian
space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Zhang, N., Wu, X.: Lossless Compression of Color Mosaic Images. IEEE Trans. Image
Processing 15(16), 1379–1388 (2006)
4. Chen, Q., et al.: A study on compactly supported orthogonal vector-valued wavelets and
wavelet packets. Chaos, Solitons & Fractals 31(4), 1024–1034 (2007)
5. Shen, Z.: Nontensor product wavelet packets in L2(R^s). SIAM J. Math. Anal. 26(4),
1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-
dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-
dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
8. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-
valued wavelets. Chaos, Solitons & Fractals 41(2), 951–961 (2009)
The Traits of Dual Multiple Ternary Fuzzy Frames of
Translates with Ternary Scaling Functions*

ShuKe Zhou1,** and Qingjiang Chen2


1
Dept. of Math. & Physics, Henan University of Urban Construction, Pingdingshan 467036
2
School of Science, Xi'an University of Architecture & Technology, Xi'an 710055,
P.R. China
asdfgh84sxxa@126.com

Abstract. The rise of frame theory in applied mathematics is due to the flexibility
and redundancy of frames. Structured frames are much easier to construct than
structured orthonormal bases. In this work, the notion of the ternary generalized
multiresolution structure (TGMS) of the subspace L2(R3) is proposed, which is the
generalization of the ternary frame multiresolution analysis. The biorthogonality
property is characterized by virtue of the iteration method and the variable
separation approach. The biorthogonality formulas concerning these wavelet
packages are established. The construction of a TGMS of a Paley-Wiener subspace
of L2(R3) is studied. The pyramid decomposition scheme is obtained based on such
a TGMS, and a sufficient condition for its existence is provided. A procedure for
designing a class of orthogonal vector-valued finitely supported wavelet functions
is proposed by virtue of the multiresolution analysis method.

Keywords: Dual pseudoframes, trivariate wavelet transform, Haar wavelet,


Bessel sequence, dual ternary fuzzy frames, time-frequency representation.

1 Introduction

Every frame (or Bessel sequence) determines a synthesis operator, the range of which
is important for a number of applications. The main advantage of wavelet functions is
their time-frequency localization property. Construction of wavelet functions is an
important aspect of wavelet analysis, and the multiresolution analysis approach is
one of the important ways of designing all sorts of wavelet functions. There exist a
great many kinds of scalar scaling functions and scalar wavelet functions. Although
the Fourier transform has been a major tool in analysis for over a century, it has a
serious shortcoming for signal analysis in that it hides in its phases information
concerning the moment of emission and duration of a signal. The frame theory has been
one of the powerful tools for research into wavelets. Duffin and Schaeffer introduced
the notion of frames

*
Foundation item: The research is supported by the Natural Scientific Foundation of
Shaanxi Province (Grant No. 2009JM1002), and by the Science Research Foundation of the
Education Department of Shaanxi Provincial Government (Grant No. 11JK0468).
**
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 29–35, 2011.
Springer-Verlag Berlin Heidelberg 2011

for a separable Hilbert space in 1952. Later, Daubechies, Grossmann, Meyer,
Benedetto, and Ron revived the study of frames in [1,2], and since then frames have
become the focus of active research, both in theory and in applications such as
signal processing, image processing and sampling theory. The rise of frame theory in
applied mathematics is due to the flexibility and redundancy of frames, where
robustness, error tolerance and noise suppression play a vital role [3,4]. The
concept of frame multiresolution analysis (FMRA) as described in [2] generalizes the
notion of MRA by allowing non-exact affine frames. Inspired by [2] and [5], we
introduce the notion of a trivariate generalized multiresolution structure (TGMS) of
L2(R3), which has a pyramid decomposition scheme. It also leads to new constructions
of affine frames of L2(R3). Sampling theorems play a basic role in digital signal
processing: they ensure that continuous signals can be represented and processed by
their discrete samples. The classical Shannon Sampling Theorem asserts that
band-limited signals can be exactly represented by their uniform samples as long as
the sampling rate is not less than the Nyquist rate. Wavelet wraps, owing to their
good properties, have attracted considerable attention and can be widely applied in
science and engineering. Since the majority of information is multidimensional, many
researchers interest themselves in the investigation of multivariate wavelet theory.
But there exist obvious defects in the tensor-product method, such as scarcity of
design freedom. Therefore, it is significant to study nonseparable multivariate
wavelet theory. Nowadays, since there is little literature on biorthogonal wavelet
wraps, it is necessary to research biorthogonal wavelet wraps.

2 Notations and the Dual Ternary Pseudoframes of Translate

We start with some notation. \(Z\) and \(Z_+\) denote all integers and all
nonnegative integers, respectively. \(R\) denotes the real numbers, and \(R^3\)
denotes the 3-dimensional Euclidean space. Let \(L^2(R^3)\) be the square integrable
function space on \(R^3\). Set \(Z^3=\{(z_1,z_2,z_3): z_r\in Z,\ r=1,2,3\}\) and
\(Z_+^3=\{(z_1,z_2,z_3): z_r\in Z_+,\ r=1,2,3\}\). Let \(U\) be a separable Hilbert
space and \(\Lambda\) an index set. We say that a sequence
\(\{\eta_v\}_{v\in\Lambda}\subseteq U\) is a frame for \(U\) if there exist positive
real constants \(L_1,L_2\) such that
\[
\forall\,\phi\in U,\qquad
L_1\|\phi\|^2\le\sum_{v\in\Lambda}|\langle\phi,\eta_v\rangle|^2
\le L_2\|\phi\|^2. \tag{1}
\]
A family \(\{\eta_v\}_{v\in\Lambda}\subseteq U\) is a Bessel sequence if (only) the
upper inequality of (1) holds. If, for all \(\phi\in\Gamma\subset U\), only the upper
inequality of (1) follows, the sequence \(\{\eta_v\}_{v\in\Lambda}\) is a Bessel
sequence with respect to (w.r.t.) \(\Gamma\). If \(\{\eta_v\}_{v\in\Lambda}\) is a
frame for \(U\), there exists a dual frame \(\{\eta_v^*\}\) such that
\[
\forall\,\psi\in U,\qquad
\psi=\sum_{v\in\Lambda}\langle\psi,\eta_v\rangle\,\eta_v^*
=\sum_{v\in\Lambda}\langle\psi,\eta_v^*\rangle\,\eta_v. \tag{2}
\]
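The frame inequality (1) and the dual-frame reconstruction (2) can be illustrated with a small finite-dimensional example that is not from the paper: the three unit vectors at angles 90°, 210° and 330° form a tight frame for R² with frame bounds L1 = L2 = 3/2, so the dual frame is simply (2/3) times the frame itself.

```python
import math

# Three unit vectors at 90°, 210°, 330°: a tight frame for R^2 with
# frame bounds L1 = L2 = 3/2, so (2) becomes f = (2/3) sum_v <f, e_v> e_v.
angles = [math.pi / 2,
          math.pi / 2 + 2 * math.pi / 3,
          math.pi / 2 + 4 * math.pi / 3]
frame = [(math.cos(a), math.sin(a)) for a in angles]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def reconstruct(f):
    """Apply the dual-frame expansion (2) with eta_v* = (2/3) eta_v."""
    out = [0.0, 0.0]
    for e in frame:
        c = dot(f, e)
        out[0] += (2.0 / 3.0) * c * e[0]
        out[1] += (2.0 / 3.0) * c * e[1]
    return tuple(out)

f = (1.3, -0.4)
# Frame inequality (1) with L1 = L2 = 3/2: the coefficient energy equals
# (3/2) * ||f||^2 exactly for a tight frame.
coef_energy = sum(dot(f, e) ** 2 for e in frame)
```

The redundancy the introduction mentions is visible here: three coefficients describe a 2-dimensional vector, yet reconstruction is still exact.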

For a sequence \(c=\{c(v)\}\in\ell^2(Z^3)\), we define its discrete-time Fourier
transform as the function in \(L^2([0,1)^3)\) given by
\[
Fc(\omega)=C(\omega)=\sum_{u\in Z^3} c(u)\,e^{-2\pi i u\omega}. \tag{3}
\]
Note that the discrete-time Fourier transform is 1-periodic in each variable. Let
\(\tau_v g(x)\) stand for the integer translates of a function
\(g(x)\in L^2(R^3)\), i.e., \((\tau_{va}g)(x)=g(x-va)\), and
\(g_{n,va}=2^{n}g(2^n x-va)\), where \(a\) is a positive real constant. Let
\(f(x)\in L^2(R^3)\) and let
\(V_0=\overline{\operatorname{span}}\{\tau_v f: v\in Z^3\}\) denote a closed
subspace of \(L^2(R^3)\). Assume that
\(\Phi(\omega):=\sum_{v\in Z^3}|\hat f(\omega+v)|^2\in L^\infty([0,1]^3)\). By
[3], the sequence \(\{\tau_v f(x)\}_{v\in Z^3}\) is a frame for \(V_0\) if and only
if there exist positive constants \(L_1\) and \(L_2\) such that
\[
L_1\le\Phi(\omega)\le L_2\quad\text{a.e.}\
\omega\in[0,1]^3\setminus N,\qquad
N=\{\omega\in[0,1]^3:\Phi(\omega)=0\}. \tag{4}
\]
We begin by introducing the concept of pseudoframes of translates.

Definition 1. Let \(\{\tau_{sa}f,\ s\in Z^3\}\) and \(\{\tau_{sa}\tilde f,\
s\in Z^3\}\) be two sequences in \(L^2(R^3)\), and let \(Y\) be a closed subspace of
\(L^2(R^3)\). We say \(\{\tau_{va}f,\ v\in Z^3\}\) forms an affine pseudoframe for
\(Y\) with respect to \(\{\tau_{va}\tilde f,\ v\in Z^3\}\) if
\[
\forall\,h(x)\in Y,\qquad
h(x)=\sum_{v\in Z^3}\langle h,\tau_{va}f\rangle\,\tau_{va}\tilde f(x). \tag{5}
\]
Define an operator \(K: Y\to\ell^2(Z^3)\) by
\[
\forall\,h(x)\in Y,\qquad Kh=\{\langle h,\tau_{va}f\rangle\}, \tag{6}
\]
and define another operator \(S:\ell^2(Z^3)\to Y\) such that
\[
\forall\,c=\{c(u)\}\in\ell^2(Z^3),\qquad
Sc=\sum_{u\in Z^3}c(u)\,\tau_{ua}\tilde f. \tag{7}
\]

Theorem 1. Let \(\{\tau_{va}f\}_{v\in Z^3}\subset L^2(R^3)\) be a Bessel sequence
with respect to the subspace \(Y\subset L^2(R^3)\), and let
\(\{\tau_{va}\tilde f\}_{v\in Z^3}\) be a Bessel sequence in \(L^2(R^3)\). Let
\(K\) be defined by (6) and \(S\) by (7), and let \(P\) be a projection from
\(L^2(R^3)\) onto \(Y\). Then \(\{\tau_{va}f\}_{v\in Z^3}\) is a pseudoframe of
translates for the subspace \(Y\) with respect to
\(\{\tau_{va}\tilde f\}_{v\in Z^3}\) if and only if
\[
SKP=P. \tag{8}
\]
Proof. The convergence of all summations in (5), (6) and (7) follows from the
assumptions that the family \(\{\tau_{va}f\}_{v\in Z^3}\) is a Bessel sequence with
respect to the subspace \(Y\), and the family \(\{\tau_{va}\tilde f\}_{v\in Z^3}\)
is a Bessel sequence in \(L^2(R^3)\), with which the proof of the theorem is
straightforward.

We say that a trivariate generalized multiresolution structure (TGMS)
\(\{V_n, f(x), \tilde f(x)\}\) of \(L^2(R^3)\) is a sequence of closed linear
subspaces \(\{V_n\}_{n\in Z}\) of \(L^2(R^3)\) and two elements
\(f(x),\tilde f(x)\in L^2(R^3)\) such that (i) \(V_n\subset V_{n+1}\), \(n\in Z\);
(ii) \(\bigcap_{n\in Z}V_n=\{0\}\) and \(\bigcup_{n\in Z}V_n\) is dense in
\(L^2(R^3)\); (iii) \(g(x)\in V_n\) if and only if \(g(2x)\in V_{n+1}\),
\(n\in Z\); (iv) \(g(x)\in V_0\) implies \(\tau_{va}g(x)\in V_0\) for all
\(v\in Z^3\); (v) \(\{\tau_{va}f\}_{v\in Z^3}\) forms a pseudoframe of translates
for \(V_0\) with respect to \(\{\tau_{va}\tilde f\}_{v\in Z^3}\).

Proposition 1 [3]. Let \(f(x)\in L^2(R^3)\) satisfy \(|\hat f|>0\) a.e. on a
connected neighbourhood of 0 in \([-\tfrac12,\tfrac12)^3\), and \(|\hat f|=0\)
a.e. otherwise. Define \(\Delta\equiv\{\omega\in R^3: |\hat f(\omega)|\ge C>0\}\)
and
\[
V_0=PW_\Delta=\{\phi\in L^2(R^3): \operatorname{supp}(\hat\phi)\subseteq\Delta\}.
\]
Then, for an arbitrary function \(\tilde f(x)\in L^2(R^3)\),
\(\{\tau_v f: v\in Z^3\}\) is a pseudoframe of translates for \(V_0\) with respect
to \(\{\tau_v\tilde f: v\in Z^3\}\) if and only if
\[
\hat f(\omega)\,\overline{\hat{\tilde f}(\omega)}\,\chi_\Delta(\omega)
=\chi_\Delta(\omega)\quad\text{a.e.}, \tag{9}
\]
where \(\chi_\Delta\) is the characteristic function on \(\Delta\). Moreover, if
\(\hat{\tilde f}\) satisfies the above conditions, then
\(\{\tau_v f: v\in Z^3\}\) and \(\{\tau_v\tilde f: v\in Z^3\}\) are a pair of
commutative pseudoframes of translates for \(V_0\), i.e.,
\[
\forall\,h(x)\in V_0,\qquad
h(x)=\sum_{k\in Z^3}\langle h,\tau_k f\rangle\,\tau_k\tilde f(x)
=\sum_{k\in Z^3}\langle h,\tau_k\tilde f\rangle\,\tau_k f(x). \tag{10}
\]

Proposition 2 [3]. Let \(\{\tau_{va}f\}_{v\in Z^3}\) be a pseudoframe of translates
for \(V_0\) with respect to \(\{\tau_{va}\tilde f\}_{v\in Z^3}\). Define \(V_n\) by
\[
V_n\equiv\{h(x)\in L^2(R^3): h(x/2^n)\in V_0\},\qquad n\in Z. \tag{11}
\]
Then \(\{f_{n,va}\}_{v\in Z^3}\) is an affine pseudoframe for \(V_n\) with respect
to \(\{\tilde f_{n,va}\}_{v\in Z^3}\).
The filter functions associated with a TGMS are presented as follows. Define the
filter functions \(D_0(\omega)\) and \(\tilde D_0(\omega)\) by
\(D_0(\omega)=\sum_{s\in Z^3}d_0(s)\,e^{-2\pi i s\omega}\) and
\(\tilde D_0(\omega)=\sum_{s\in Z^3}\tilde d_0(s)\,e^{-2\pi i s\omega}\) of the
sequences \(d_0=\{d_0(s)\}\) and \(\tilde d_0=\{\tilde d_0(s)\}\), respectively,
wherever the sums are defined. Let \(d_0=\{d_0(v)\}\) be such that \(D_0(0)=2\) and
\(D_0(\omega)\ne 0\) in a neighborhood of 0. Assume also that
\(|D_0(\omega)|\le 2\). Then there exists \(f(x)\in L^2(R^3)\) (see ref. [3]) such
that
\[
f(x)=2\sum_{s\in Z^3}d_0(s)\,f(2x-sa). \tag{12}
\]
There is an analogous scaling relationship for \(\tilde f(x)\) under the same
conditions on \(\tilde d_0\), i.e.,
\[
\tilde f(x)=2\sum_{s\in Z^3}\tilde d_0(s)\,\tilde f(2x-sa). \tag{13}
\]

3 The Traits of Nonseparable Trivariate Wavelet Packages

Denote \(G_0(x)=F(x)\), \(G_\nu(x)=\Psi_\nu(x)\), \(\tilde G_0(x)=\tilde F(x)\),
\(\tilde G_\nu(x)=\tilde\Psi_\nu(x)\), \(Q_k^{(0)}=\Gamma_k\),
\(Q_k^{(\nu)}=B_k^{(\nu)}\), \(\tilde Q_k^{(0)}=\tilde\Gamma_k\),
\(\tilde Q_k^{(\nu)}=\tilde B_k^{(\nu)}\), \(\nu\in\Omega\), \(k\in Z^3\). For any
\(\alpha\in Z_+^3\) and the given vector-valued biorthogonal scaling functions
\(G_0(x)\) and \(\tilde G_0(x)\), iteratively define, respectively,
\[
G_\alpha(x)=G_{2\beta+\mu}(x)
=\sum_{k\in Z^3}Q_k^{(\mu)}\,G_\beta(2x-k), \tag{14}
\]
\[
\tilde G_\alpha(x)=\tilde G_{2\beta+\mu}(x)
=\sum_{k\in Z^3}\tilde Q_k^{(\mu)}\,\tilde G_\beta(2x-k), \tag{15}
\]
where \(\beta\in Z_+^3\) is the unique element such that \(\alpha=2\beta+\mu\),
\(\mu\in\Omega_0\).

Lemma 1 [4]. Let \(F(x),\tilde F(x)\in L^2(R^3,C^v)\). Then they are biorthogonal
if and only if
\[
\sum_{k\in Z^3}\hat F(\omega+2k\pi)\,\hat{\tilde F}(\omega+2k\pi)^{*}=I_v. \tag{16}
\]

Definition 2. We say that two families of vector-valued functions
\(\{G_{2\beta+\mu}(x),\ \beta\in Z_+^3,\ \mu\in\Omega_0\}\) and
\(\{\tilde G_{2\beta+\mu}(x),\ \beta\in Z_+^3,\ \mu\in\Omega_0\}\) are
vector-valued wavelet packets with respect to a pair of biorthogonal vector-valued
scaling functions \(G_0(x)\) and \(\tilde G_0(x)\), respectively, where
\(G_{2\beta+\mu}(x)\) and \(\tilde G_{2\beta+\mu}(x)\) are given by (14) and (15),
respectively. Applying the Fourier transform to both sides of (14) and (15) yields,
respectively,
\[
\hat G_{2\beta+\mu}(2\omega)=Q^{(\mu)}(\omega)\,\hat G_\beta(\omega),
\qquad \mu\in\Omega_0, \tag{17}
\]
\[
\hat{\tilde G}_{2\beta+\mu}(2\omega)
=\tilde Q^{(\mu)}(\omega)\,\hat{\tilde G}_\beta(\omega),
\qquad \mu\in\Omega_0. \tag{18}
\]

Lemma 2[6]. Assume that G ( x), G ( x) L ( R , C ), are pairs of biort-


2 3 v

hogonal vector-valued wavelets associated with a pair of biorthogonal scaling


functions G0 ( x) and G0 ( x) . Then, for , 0 , we have



Q (( + 2 ) / 2)Q
0
( ) ( )
(( + 2 ) / 2)* = , I v . (19)

Theorem 2 [8]. Assume that \(\{G_\beta(x),\ \beta\in Z_+^3\}\) and
\(\{\tilde G_\beta(x),\ \beta\in Z_+^3\}\) are vector-valued wavelet packets with
respect to a pair of biorthogonal vector-valued functions \(G_0(x)\) and
\(\tilde G_0(x)\), respectively. Then, for \(\beta\in Z_+^3\) and
\(\mu,\nu\in\Omega_0\), we have
\[
\langle G_\beta(\cdot),\tilde G_\beta(\cdot-k)\rangle
=\delta_{0,k}\,I_v,\qquad k\in Z^3, \tag{20}
\]
\[
\langle G_{2\beta+\mu}(\cdot),\tilde G_{2\beta+\nu}(\cdot-k)\rangle
=\delta_{0,k}\,\delta_{\mu,\nu}\,I_v,\qquad k\in Z^3. \tag{21}
\]



Theorem 3. If \(\{G_\beta(x),\ \beta\in Z_+^3\}\) and
\(\{\tilde G_\beta(x),\ \beta\in Z_+^3\}\) are vector-valued wavelet wraps with
respect to a pair of biorthogonal vector scaling functions \(G_0(x)\) and
\(\tilde G_0(x)\), then for any \(\alpha,\sigma\in Z_+^3\) we have
\[
\langle G_\alpha(\cdot),\tilde G_\sigma(\cdot-k)\rangle
=\delta_{\alpha,\sigma}\,\delta_{0,k}\,I_v,\qquad k\in Z^3. \tag{22}
\]
Proof. When \(\alpha=\sigma\), (22) follows from Theorem 2. When \(\alpha\ne\sigma\)
and \(\alpha,\sigma\in\Omega_0\), it follows from Theorem 2 that (22) holds, too.
Assume now that \(\alpha\ne\sigma\) and at least one of \(\{\alpha,\sigma\}\) does
not belong to \(\Omega_0\); rewrite \(\alpha,\sigma\) as
\(\alpha=2\alpha_1+\mu_1\), \(\sigma=2\sigma_1+\nu_1\), where
\(\alpha_1,\sigma_1\in Z_+^3\) and \(\mu_1,\nu_1\in\Omega_0\).

Case 1. If \(\alpha_1=\sigma_1\), then \(\mu_1\ne\nu_1\). (22) follows by virtue of
(17), (18), Lemma 1 and Lemma 2, i.e.,
\[
\begin{aligned}
(2\pi)^3\langle G_\alpha(\cdot),\tilde G_\sigma(\cdot-k)\rangle
&=\int_{R^3}\hat G_{2\alpha_1+\mu_1}(\omega)\,
\hat{\tilde G}_{2\sigma_1+\nu_1}(\omega)^{*}\,e^{ik\omega}\,d\omega\\
&=\int_{[0,2\pi]^3}\delta_{\mu_1,\nu_1}\,I_v\,e^{ik\omega}\,d\omega=O.
\end{aligned}
\]
Case 2. If \(\alpha_1\ne\sigma_1\), write \(\alpha_1=2\alpha_2+\mu_2\),
\(\sigma_1=2\sigma_2+\nu_2\), where \(\alpha_2,\sigma_2\in Z_+^3\) and
\(\mu_2,\nu_2\in\Omega_0\). If \(\alpha_2=\sigma_2\), then \(\mu_2\ne\nu_2\);
similar to Case 1, (22) follows. As \(\alpha_2\ne\sigma_2\), write
\(\alpha_2=2\alpha_3+\mu_3\), \(\sigma_2=2\sigma_3+\nu_3\), where
\(\alpha_3,\sigma_3\in Z_+^3\) and \(\mu_3,\nu_3\in\Omega_0\). Thus, taking
finitely many steps (denoted by \(\kappa\)), we obtain
\(\alpha_\kappa,\sigma_\kappa\in\Omega_0\) and
\(\mu_\kappa,\nu_\kappa\in\Omega_0\), and
\[
\begin{aligned}
(2\pi)^3\langle G_\alpha(\cdot),\tilde G_\sigma(\cdot-k)\rangle
&=\int_{R^3}\hat G_\alpha(\omega)\,\hat{\tilde G}_\sigma(\omega)^{*}\,
e^{ik\omega}\,d\omega\\
&=\int_{[0,2^{\kappa+1}\pi]^3}
\Big\{\prod_{\ell=1}^{\kappa}Q^{(\mu_\ell)}(\omega/2^{\ell})\Big\}\,O\,
\Big\{\prod_{\ell=1}^{\kappa}\tilde Q^{(\nu_\ell)}(\omega/2^{\ell})\Big\}^{*}
e^{ik\omega}\,d\omega=O.
\end{aligned}
\]
Therefore, for any \(\alpha,\sigma\in Z_+^3\), result (22) is established.

4 Conclusion

The construction of a TGMS of a Paley-Wiener subspace of L2(R3) is studied. The
pyramid decomposition scheme is obtained based on such a TGMS. The biorthogonality
formulas concerning these wavelet packages are established.

References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
The Traits of Dual Multiple Ternary Fuzzy Frames of Translates 35

3. Li, S., et al.: A theory of generalized multiresolution structure and pseudoframes of translates. J. Fourier Anal. Appl. 7(1), 23–40 (2001)
4. Chen, Q., et al.: A study on compactly supported orthogonal vector-valued wavelets and wavelet packets. Chaos, Solitons & Fractals 31(4), 1024–1034 (2007)
5. Shen, Z.: Nontensor product wavelet packets in L²(R^s). SIAM J. Math. Anal. 26(4), 1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information Technology Journal 8(8), 1275–1280 (2009)
8. Chen, Q., Shi, Z.: Biorthogonal multiple vector-valued multivariate wavelet packets associated with a dilation matrix. Chaos, Solitons & Fractals 35(3), 323–332 (2008)
The Characteristics of Multiple Affine Oblique Binary
Frames of Translates with Binary Filter Banks

YongGan Li*

Office of Financial affairs, Henan Quality Polytechnic, Pingdingshan 467000, P.R. China
txxpds@126.com

Abstract. Frame theory has been the focus of active research for twenty years, both in theory and in applications. In this paper, the notion of a bivariate generalized multiresolution structure (BGMS) of the subspace L²(R²), which generalizes frame multiresolution analysis, is proposed. The biorthogonality traits of wavelet packets are researched by using the time-frequency analysis approach and the variable separation approach. The construction of a BGMS of a Paley-Wiener subspace of L²(R²) is studied. The pyramid decomposition scheme is obtained based on such a BGMS, and a sufficient condition for its existence is provided. A procedure for designing a class of orthogonal vector-valued finitely supported wavelet functions is proposed by virtue of filter bank theory and matrix theory.

Keywords: Affine pseudoframes, bivariate wavelet packets, wavelet frame, Bessel sequence, orthonormal bases, time-frequency analysis approach.

1 Introduction and Notations

The main advantage of wavelet functions is their time-frequency localization property. The construction of wavelet functions is an important aspect of wavelet analysis, and the multiresolution analysis approach is one of the important ways of designing all sorts of wavelet functions. There exist a great many kinds of scalar scaling functions and scalar wavelet functions. Although the Fourier transform has been a major tool in analysis for over a century, it has a serious shortcoming for signal analysis: it hides in its phases the information concerning the moment of emission and the duration of a signal. Frame theory has been one of the powerful tools for research into wavelets. Duffin and Schaeffer introduced the notion of frames for a separable Hilbert space in 1952. Later, Daubechies, Grossmann, Meyer, Benedetto, and Ron revived the study of frames in [1,2], and since then frames have become the focus of active research, both in theory and in applications, such as signal processing, image processing, and sampling theory. The rise of frame theory in applied mathematics is due to the flexibility and redundancy of frames, where robustness, error tolerance, and noise suppression play a vital role [3,4]. The concept of frame
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 36–41, 2011.
© Springer-Verlag Berlin Heidelberg 2011

multiresolution analysis (FMRA), as described in [2], generalizes the notion of an MRA by allowing non-exact affine frames. However, subspaces at different resolutions in an FMRA are still generated by a frame formed by translates and dilates of a single function. This paper is motivated by the observation that standard methods in sampling theory provide examples of multiresolution structures which are not FMRAs. Inspired by [2] and [5], we introduce the notion of a bivariate generalized multiresolution structure (BGMS) of L²(R²), which has a pyramid decomposition scheme. It also leads to new constructions of affine frames of L²(R²). Wavelet packets, owing to their good properties, have attracted considerable attention, and they can be widely applied in science and engineering. Since the majority of information is multidimensional, many researchers interest themselves in the investigation of multivariate wavelet theory. However, there are obvious defects in the tensor-product approach, such as scarcity of design freedom. Therefore, it is significant to investigate nonseparable multivariate wavelet theory. Since there is little literature on biorthogonal wavelet packets, it is necessary to investigate them.
In the following, we introduce some notation. Z and Z₊ denote the integers and the nonnegative integers, respectively; R denotes the real numbers; R² denotes the 2-dimensional Euclidean space; and L²(R²) denotes the space of square integrable functions on R². Let x = (x₁, x₂) ∈ R², ω = (ω₁, ω₂) ∈ R², k = (k₁, k₂) ∈ Z², z₁ = e^{−2πiω₁}, z₂ = e^{−2πiω₂}. The inner product of two functions f(x), g(x) ∈ L²(R²) and the Fourier transform of f(x) are defined, respectively, by

⟨f, g⟩ = \int_{R^2} f(x)\,\overline{g(x)}\,dx,  \hat{f}(\omega) = \int_{R^2} f(x)\,e^{-2\pi i \omega\cdot x}\,dx,

where ω·x = ω₁x₁ + ω₂x₂ and \overline{g(x)} denotes the complex conjugate of g(x). Let R and C be the real and the complex numbers, respectively, and N the positive integers. Set Z₊ = {0} ∪ N, and let m, s ∈ N with m ≥ 2. By elementary algebra, there are m² elements μ₀, μ₁, …, μ_{m²−1} in Z₊² = {(n₁, n₂) : n₁, n₂ ∈ Z₊} such that

Z² = ⋃_{μ∈Γ₀} (μ + mZ²),  (μ₁ + mZ²) ∩ (μ₂ + mZ²) = ∅ for μ₁ ≠ μ₂,

where Γ₀ = {μ₀, μ₁, …, μ_{m²−1}} denotes the set of all the different representative elements of the quotient group Z²/(mZ²), μ₀ = {0} is the null element of Z₊², and μ₁, μ₂ denote two arbitrary distinct elements of Γ₀. Let Γ = Γ₀ − {0}; Γ and Γ₀ serve as index sets. By L²(R², C^s) we denote the set of vector-valued functions L²(R², C^s) := {Φ(x) = (h₁(x), h₂(x), …, h_s(x))^T : h_l(x) ∈ L²(R²), l = 1, 2, …, s}, where T means the transpose of a vector.

Definition 1. A sequence {Φ_n(x)}_{n∈Z²} ⊂ L²(R², C^s) is called an orthogonal set if

⟨Φ_n, Φ_v⟩ = δ_{n,v} I_s,  n, v ∈ Z²,   (1)

where I_s stands for the s × s identity matrix and δ_{n,v} is the generalized Kronecker symbol, i.e., δ_{n,v} = 1 if n = v and δ_{n,v} = 0 otherwise.

Let W be a separable Hilbert space and Λ an index set. We recall that a sequence {η_v : v ∈ Z²} ⊆ W is a frame for W if there exist positive real numbers C, D such that

∀η ∈ W,  C‖η‖² ≤ Σ_v |⟨η, η_v⟩|² ≤ D‖η‖².   (2)

A sequence {η_v : v ∈ Z²} ⊆ W is a Bessel sequence if (only) the upper inequality of (2) holds; if the upper inequality of (2) holds merely for all η in a subspace X of W, the sequence {η_v} ⊆ W is a Bessel sequence with respect to (w.r.t.) X. If {f_v} is a frame, there exists a dual frame {f_v^*} such that

∀η ∈ W,  η = Σ_v ⟨η, f_v⟩ f_v^* = Σ_v ⟨η, f_v^*⟩ f_v.   (3)
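The frame inequality (2) can be checked directly in a finite-dimensional toy case. The following sketch is our own illustration (the three-vector "Mercedes-Benz" frame in R² is not taken from the paper): it verifies numerically that this frame is tight, i.e. C = D = 3/2.

```python
import numpy as np

# Three equally spaced unit vectors in R^2 form a tight frame with
# frame bounds C = D = 3/2 in  C||x||^2 <= sum_k |<x, e_k>|^2 <= D||x||^2.
frame = np.array([[np.cos(2 * np.pi * k / 3),
                   np.sin(2 * np.pi * k / 3)] for k in range(3)])

def frame_energy(x):
    """Sum of squared frame coefficients, sum_k |<x, e_k>|^2."""
    return float(np.sum((frame @ x) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
ratio = frame_energy(x) / float(x @ x)
print(ratio)  # 1.5 for every nonzero x: both inequalities in (2) are equalities
```

For a non-tight frame the ratio would vary with x between the two bounds C and D.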

For a sequence c = {c(v)} ∈ ℓ²(Z²), we define its discrete-time Fourier transform as the function in L²([0,1)²) given by

Fc(\omega) = C(\omega) = \sum_{v \in Z^2} c(v)\,e^{-2\pi i v\omega}.   (4)
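As a quick numerical illustration of (4) and of the periodicity noted below, the following sketch evaluates the discrete-time Fourier transform of a finitely supported sequence on Z²; the concrete sequence is an arbitrary example of ours, not one taken from the paper.

```python
import cmath

# A finitely supported sequence c on Z^2, stored as {index: value}.
c = {(0, 0): 1.0, (1, 0): 0.5, (0, -1): 0.25}

def F(c, w):
    """Evaluate C(w) = sum_v c(v) exp(-2*pi*i * v.w) as in (4)."""
    return sum(val * cmath.exp(-2j * cmath.pi * (k[0] * w[0] + k[1] * w[1]))
               for k, val in c.items())

# Shifting each coordinate of w by an integer leaves C(w) unchanged:
print(abs(F(c, (0.3, 0.7)) - F(c, (1.3, -0.3))))  # ~0.0, i.e. 1-periodicity
```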
Note that the discrete-time Fourier transform is 1-periodic. Let T_{va} stand for the translates of a function φ(x) ∈ L²(R²), i.e., (T_{va}φ)(x) = φ(x − va), and let φ_{n,va} = 4ⁿ φ(4ⁿx − va), where a is a positive real constant. Let φ(x) ∈ L²(R²) and let V₀ = \overline{span}{T_{va}φ : v ∈ Z²} be a closed subspace of L²(R²). Assume that Φ(ω) := Σ_{v∈Z²} |\hat{φ}(ω + v)|² ∈ L^∞([0,1]²). By [5], the sequence {T_{va}φ(x)}_v is a frame for V₀ if and only if there exist positive constants L₁ and L₂ such that

L₁ ≤ Φ(ω) ≤ L₂  a.e. ω ∈ [0,1]² \ N,  where N = {ω ∈ [0,1]² : Φ(ω) = 0}.   (5)

2 The Bivariate Affine Pseudoframes of Translates

We begin by introducing the concept of pseudoframes of translates.

Definition 2. Let {T_{va}φ, v ∈ Z²} and {T_{va}φ̃, v ∈ Z²} be two sequences in L²(R²), and let U be a closed subspace of L²(R²). We say {T_{va}φ, v ∈ Z²} forms an affine pseudoframe for U with respect to {T_{va}φ̃, v ∈ Z²} if

∀f(x) ∈ U,  f(x) = Σ_{v∈Z²} ⟨f, T_{va}φ̃⟩ T_{va}φ(x).   (6)

Define an operator K : U → ℓ²(Z²) by

∀f(x) ∈ U,  Kf = {⟨f, T_{va}φ̃⟩},   (7)

and define another operator S : ℓ²(Z²) → W by

∀c = {c(k)} ∈ ℓ²(Z²),  Sc = Σ_{v∈Z²} c(v) T_{va}φ.   (8)

Theorem 1. Let {T_{va}φ}_{v∈Z²} ⊂ L²(R²) be a Bessel sequence with respect to the subspace U ⊂ L²(R²), and let {T_{va}φ̃}_{v∈Z²} be a Bessel sequence in L²(R²). Let K be defined by (7), S by (8), and let P be the orthogonal projection from L²(R²) onto U. Then {T_{va}φ}_{v∈Z²} is a pseudoframe of translates for U with respect to {T_{va}φ̃}_{v∈Z²} if and only if

SKP = P.   (9)

Proof. The convergence of all summations in (7) and (8) follows from the assumptions that the family {T_{va}φ̃}_{v∈Z²} is a Bessel sequence with respect to the subspace U and that the family {T_{va}φ}_{v∈Z²} is a Bessel sequence in L²(R²), with which the proof of the theorem is straightforward.
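The analysis/synthesis pair (7)–(8) and the reconstruction behind Theorem 1 can be mimicked in finite dimensions. In the sketch below, the concrete frame for the subspace U and its dual coefficients are our illustrative assumptions, not constructions from the paper: the synthesis operator applied to the analysis coefficients acts as the identity on U.

```python
import numpy as np

# A tight frame for the subspace U = span{e1, e2} of R^3 (illustrative).
U_frame = np.array([[np.cos(2 * np.pi * k / 3),
                     np.sin(2 * np.pi * k / 3), 0.0] for k in range(3)])
dual = (2.0 / 3.0) * U_frame          # canonical dual of a tight frame, bound 3/2

K = lambda f: dual @ f                # analysis operator: f -> {<f, dual_k>}
S = lambda c: U_frame.T @ c           # synthesis operator: c -> sum_k c(k) phi_k
P = np.diag([1.0, 1.0, 0.0])          # orthogonal projection onto U

w = np.array([0.3, -1.2, 7.0])
print(S(K(P @ w)))                    # equals P @ w: S K acts as identity on U
```

Any w outside U is first projected, after which the frame expansion reconstructs it exactly.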
We say that a bivariate generalized multiresolution structure (BGMS) {V_n, f(x), f̃(x)} of L²(R²) is a sequence of closed linear subspaces {V_n}_{n∈Z} of L²(R²) and two elements f(x), f̃(x) ∈ L²(R²) such that (i) V_n ⊂ V_{n+1}, n ∈ Z; (ii) ⋂_{n∈Z} V_n = {0} and ⋃_{n∈Z} V_n is dense in L²(R²); (iii) h(x) ∈ V_n if and only if h(4x) ∈ V_{n+1}, n ∈ Z; (iv) g(x) ∈ V₀ implies T_{va}g(x) ∈ V₀ for all v ∈ Z²; (v) {T_{va}f}_{v∈Z²} forms a pseudoframe of translates for V₀ with respect to {T_{va}f̃, v ∈ Z²}.

Proposition 1 [6]. Let f ∈ L²(R²) satisfy |\hat{f}| > 0 a.e. on a connected neighbourhood of 0 in [−1/2, 1/2)², and |\hat{f}| = 0 a.e. otherwise. Define

Ω ≡ {ω ∈ R² : |\hat{f}(ω)| ≥ C > 0}  and  V₀ = PW_Ω = {φ ∈ L²(R²) : supp(\hat{φ}) ⊆ Ω}.

Then for f̃ ∈ L²(R²), {T_v f : v ∈ Z²} is a pseudoframe of translates for V₀ with respect to {T_v f̃ : v ∈ Z²} if and only if

\hat{\tilde{f}}(\omega)\,\hat{f}(\omega)\,\chi_\Omega(\omega) = \chi_\Omega(\omega)  a.e.,   (11)

where χ_Ω is the characteristic function of Ω. Moreover, if f̃ satisfies the above conditions, then {T_v f : v ∈ Z²} and {T_v f̃ : v ∈ Z²} are a pair of commutative pseudoframes of translates for V₀, i.e.,

∀φ(x) ∈ V₀,  φ(x) = Σ_{k∈Z²} ⟨φ, T_k f̃⟩ T_k f(x) = Σ_{k∈Z²} ⟨φ, T_k f⟩ T_k f̃(x).   (12)

Proposition 2 [5]. Let {T_{va}f}_{v∈Z²} be a pseudoframe of translates for V₀ with respect to {T_{va}f̃}_{v∈Z²}. Define V_n by

V_n = {φ(x) ∈ L²(R²) : φ(x/4ⁿ) ∈ V₀},  n ∈ Z.   (13)

Then {f_{n,va}}_{v∈Z²} is an affine pseudoframe for V_n with respect to {f̃_{n,va}}_{v∈Z²}.

The filter functions associated with a BGMS are presented as follows. Define the filter functions D₀(ω) and D̃₀(ω) by D₀(ω) = Σ_{s∈Z²} d₀(s) e^{−2πisω} and D̃₀(ω) = Σ_{s∈Z²} d̃₀(s) e^{−2πisω} for the sequences d₀ = {d₀(s)} and d̃₀ = {d̃₀(s)}, respectively, wherever the sums are defined. Let {d₀(s)} be such that D₀(0) = 2 and D₀(ω) ≠ 0 in a neighborhood of 0, and assume also that |D₀(ω)| ≤ 2. Then there exists f(x) ∈ L²(R²) (see [3]) such that

f(x) = 2 Σ_{s∈Z²} d₀(s) f(4x − sa).   (14)

There is an analogous scaling relationship for f̃(x) under the same conditions on a sequence d̃₀, i.e.,

f̃(x) = 2 Σ_{s∈Z²} d̃₀(s) f̃(4x − sa).   (15)

3 The Traits of Nonseparable Bivariate Wavelet Packets

To construct wavelet packets, we introduce the following notation: a = 3, h₀(x) = f(x), h_ν(x) = g_ν(x), b^{(0)}(n) = b(n), and b^{(ν)}(n) = q^{(ν)}(n), where ν ∈ Γ. We are now in a position to introduce orthogonal bivariate nonseparable wavelet packets.

Definition 3. A family of functions {h_{mk+ν}(x) : k = 0, 1, 2, 3, …; ν ∈ Γ₀} is called a family of nonseparable bivariate wavelet packets with respect to the orthogonal scaling function h₀(x), where

h_{mk+ν}(x) = \sum_{n \in Z^2} b^{(\nu)}(n)\,h_k(mx - n),   (16)

and ν = 0, 1, 2, 3. Taking the Fourier transform of both sides of (16), we have

\hat{h}_{mk+\nu}(m\omega) = B^{(\nu)}(z_1, z_2)\,\hat{h}_k(\omega),   (17)

where

B^{(\nu)}(z_1, z_2) = \sum_{n \in Z^2} b^{(\nu)}(n)\,z_1^{n_1} z_2^{n_2}.   (18)

Lemma 1 [6]. Let h(x) ∈ L²(R²). Then h(x) is an orthogonal function if and only if

\sum_{k \in Z^2} |\hat{h}(\omega + 2k\pi)|^2 = 1.   (19)

Lemma 2 [6]. Assume that f(x) is a semiorthogonal scaling function and that B(z₁, z₂) is the symbol of the sequence {b(k)} in the corresponding refinement equation. Then we have

1 = |B(z_1, z_2)|^2 + |B(-z_1, z_2)|^2 + |B(z_1, -z_2)|^2 + |B(-z_1, -z_2)|^2.   (20)
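The identity (20) can be verified numerically for a concrete mask. The separable Haar symbol used below is our illustrative choice, not the mask constructed in the paper:

```python
import cmath

def B(z1, z2):
    """Separable Haar symbol B(z1, z2) = (1 + z1)(1 + z2)/4 (illustrative)."""
    return (1 + z1) * (1 + z2) / 4

def qmf_sum(w1, w2):
    """Left-hand side of (20) evaluated on the torus, z_j = exp(-i*w_j)."""
    z1, z2 = cmath.exp(-1j * w1), cmath.exp(-1j * w2)
    return sum(abs(B(s1 * z1, s2 * z2)) ** 2
               for s1 in (1, -1) for s2 in (1, -1))

print(qmf_sum(0.4, 1.9))  # 1.0 up to rounding, for any choice of (w1, w2)
```

The sum collapses to 1 because |1 + z|² + |1 − z|² = 4 on the unit circle, once in each variable.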

Lemma 3 [8]. If h_ν(x) (ν = 0, 1, 2, 3) are orthogonal bivariate wavelet functions associated with h(x), then we have

\sum_{j_1, j_2 \in \{0,1\}} B^{(\lambda)}((-1)^{j_1} z_1, (-1)^{j_2} z_2)\,\overline{B^{(\nu)}((-1)^{j_1} z_1, (-1)^{j_2} z_2)} = \delta_{\lambda,\nu},  λ, ν ∈ {0, 1, 2, 3}.   (21)

Lemma 4 [6]. Let n ∈ Z₊ and let n = \sum_{j=1}^{\infty} \nu_j 4^{j-1}, ν_j ∈ {0, 1, 2, 3}, be its expansion as in (17). Then we have

\hat{h}_n(\omega) = \prod_{j=1}^{\infty} B^{(\nu_j)}(e^{-i\omega_1/2^{j}}, e^{-i\omega_2/2^{j}})\,\hat{h}_0(0).

Lemma 4 can be proved by induction from formulas (14) and (18).

Theorem 1 [6]. For n ∈ Z₊ and k ∈ Z², we have

⟨h_n(·), h_n(· − k)⟩ = δ_{0,k}.   (22)

Theorem 2 [7]. For every k ∈ Z² and m, n ∈ Z₊, we have

⟨h_m(·), h_n(· − k)⟩ = δ_{m,n} δ_{0,k}.   (23)

References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Li, S., et al.: A theory of generalized multiresolution structure and pseudoframes of translates. J. Fourier Anal. Appl. 7(1), 23–40 (2001)
4. Chen, Q., et al.: A study on compactly supported orthogonal vector-valued wavelets and wavelet packets. Chaos, Solitons & Fractals 31(4), 1024–1034 (2007)
5. Shen, Z.: Nontensor product wavelet packets in L²(R^s). SIAM J. Math. Anal. 26(4), 1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information Technology Journal 8(8), 1275–1280 (2009)
8. Yang, S., Cheng, Z., Wang, H.: Construction of biorthogonal multiwavelets. J. Math. Anal. Appl. 276(1), 1–12 (2002)
Generation and Characteristics of Vector-Valued
Quaternary Wavelets with Poly-scale Dilation Factor*

Ping Luo¹,** and Shiheng Wang²

¹ Department of Fundamentals, Henan Polytechnic Institute, Nanyang 473009, China
² Department of Computer Science, Nanyang Agricultural College, Nanyang 473003, China
fghjkp147@126.com

Abstract. Wavelet analysis has been a developing branch of mathematics for over twenty years. In this paper, the notion of orthogonal nonseparable quaternary wavelet packets, which generalizes orthogonal univariate wavelet packets, is proposed by virtue of the analogy method and the iteration method. Their biorthogonality traits are researched by using the time-frequency analysis approach and the variable separation approach. Three orthogonality formulas regarding these wavelet packets are obtained. Moreover, it is shown how to draw new orthonormal bases of the space L²(R⁴) from these wavelet packets. A procedure for designing a class of orthogonal vector-valued finitely supported wavelet functions is proposed by virtue of filter bank theory and matrix theory.

Keywords: Nonseparable, quaternary wavelet packets, frames of translates, Bessel sequence, Gabor frames, time-frequency analysis approach.

1 Introduction and Notations


There is a large diversity of research fields where Gabor systems and frames play a role.
The main advantage of wavelet packs is their time-frequency localization property.
Construction of wavelet bases is an important aspect of wavelet analysis, and
multiresolution analysis method is one of importment ways of constructing various
wavelet bases. There exist many kinds of scalar scaling functions and scalar wavelet
functions. Although the Fourier transform has been a major tool in analysis for over a
century, it has a serious laking for signal analysis in that it hides in its phases
information concerning the moment of emission and duration of a signal. Wavelet
analysis [1] has been developed a new branch for over twenty years. Its applications
involve in many areas in natural science and engineering technology. The main
advantage of wavelets is their time-frequency localization property. Many signals in
areas like music, speech, images, and video images can be efficiently represented by
wavelets that are translations and dilations of a single function called mother wavelet
with bandpass property. They can be widely applied in science and engineering [2,3].

* Foundation item: The research is supported by the Natural Scientific Foundation of Shaanxi Province (Grant No. 2009JM1002), and by the Science Research Foundation of the Education Department of Shaanxi Provincial Government (Grant No. 11JK0468).
** Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 42–48, 2011.
© Springer-Verlag Berlin Heidelberg 2011

Coifman and Meyer first introduced the notion of orthogonal wavelet packets, which were used to decompose wavelet components. Chui and Li [4] generalized the concept of orthogonal wavelet packets to the case of non-orthogonal wavelet packets, so that wavelet packets can be employed in the case of the spline wavelets and so on. Tensor-product multivariate wavelet packets were constructed by Coifman and Meyer. The introduction of the notion of nontensor product wavelet packets is due to Shen [5]. Since the majority of information is multidimensional, many researchers interest themselves in the investigation of multivariate wavelet theory. Therefore, it is significant to investigate nonseparable multivariate wavelet theory.
In the following, we introduce some notation. Z and Z₊ denote the integers and the nonnegative integers, respectively; R denotes the real numbers; R⁴ denotes the 4-dimensional Euclidean space; and L²(R⁴) denotes the space of square integrable functions on R⁴. Let x = (x₁, x₂, x₃, x₄) ∈ R⁴, ω = (ω₁, ω₂, ω₃, ω₄) ∈ R⁴, and k = (k₁, k₂, k₃, k₄) ∈ Z⁴. The inner product of two functions f(x), g(x) and the Fourier transform of f(x) are defined, respectively, by

⟨f, g⟩ = \int_{R^4} f(x)\,\overline{g(x)}\,dx,  \hat{f}(\omega) = \int_{R^4} f(x)\,e^{-i\omega\cdot x}\,dx,

where ω·x = ω₁x₁ + ω₂x₂ + ω₃x₃ + ω₄x₄ and \overline{g(x)} denotes the complex conjugate of g(x). Set Z₊ = {0} ∪ N, let m, s ∈ N with m ≥ 2, and write z_i = e^{−iω_i/m}, i = 1, 2, 3, 4. By elementary algebra, there are m⁴ elements d₀, d₁, …, d_{m⁴−1} in Z₊⁴ = {(n₁, n₂, n₃, n₄) : n_l ∈ Z₊} such that

Z⁴ = ⋃_{d∈Γ₀} (d + mZ⁴),  (d₁ + mZ⁴) ∩ (d₂ + mZ⁴) = ∅ for d₁ ≠ d₂,

where Γ₀ = {d₀, d₁, …, d_{m⁴−1}} denotes the set of all the different representative elements of the quotient group Z⁴/(mZ⁴), d₀ = {0} is the null element of Z₊⁴, and d₁, d₂ denote two arbitrary distinct elements of Γ₀. Let Γ = Γ₀ − {0}; Γ and Γ₀ serve as index sets. By L²(R⁴, C^s) we denote the set of all vector-valued functions L²(R⁴, C^s) := {Φ(x) = (h₁(x), h₂(x), …, h_s(x))^T : h_l(x) ∈ L²(R⁴), l = 1, 2, …, s}, where T means the transpose of a vector. For any Φ ∈ L²(R⁴, C^s), its integral is defined as

\int_{R^4} \Phi(x)\,dx = \big(\int_{R^4} h_1(x)\,dx, \int_{R^4} h_2(x)\,dx, \ldots, \int_{R^4} h_s(x)\,dx\big)^T.

Definition 1. A sequence {Φ_n(y)}_{n∈Z⁴} ⊂ L²(R⁴, C^s) is called an orthogonal set if

⟨Φ_n, Φ_v⟩ = δ_{n,v} I_s,  n, v ∈ Z⁴.   (1)

2 The Quaternary Multiresolution Analysis

We first introduce the multiresolution analysis of the space L²(R⁴). Wavelets can be constructed by means of a multiresolution analysis; in particular, an existence theorem [8] for higher-dimensional wavelets with an arbitrary dilation matrix has been given. Let φ(x) ∈ L²(R⁴) satisfy the following refinement equation:

φ(x) = m⁴ \sum_{k \in Z^4} b_k\,φ(mx - k),   (2)

where {b_k}_{k∈Z⁴} is a real number sequence with only finitely many nonzero terms, and φ(x) ∈ L²(R⁴) is called the scaling function. Formula (2) is said to be a two-scale refinement equation. In the frequency domain, (2) can be written as

\hat{φ}(\omega) = B(z_1, z_2, z_3, z_4)\,\hat{φ}(\omega/m),   (3)

where

B(z_1, z_2, z_3, z_4) = \sum_{n \in Z^4} b(n)\,z_1^{n_1} z_2^{n_2} z_3^{n_3} z_4^{n_4}.   (4)

Define a subspace V_j ⊂ L²(R⁴) (j ∈ Z) by

V_j = clos_{L²(R⁴)} span{m^{2j} φ(m^j x − k) : k ∈ Z⁴}.   (5)
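A concrete mask satisfying the refinement relation (2)–(4) with m = 2 is the separable Haar mask; this is an illustrative example of ours, not the paper's construction. Integrating (2) forces the normalization B(1, 1, 1, 1) = 1, which the sketch below checks numerically.

```python
from itertools import product
from math import prod

# Separable Haar mask for dilation m = 2 in dimension 4 (illustrative):
# b_k = 1/16 on k in {0,1}^4, reproducing the indicator of the unit cube.
mask = {k: 1.0 / 16.0 for k in product(range(2), repeat=4)}

def B(z):
    """Symbol B(z1,z2,z3,z4) = sum_n b(n) z1^n1 z2^n2 z3^n3 z4^n4 from (4)."""
    return sum(c * prod(z[i] ** k[i] for i in range(4)) for k, c in mask.items())

print(B((1, 1, 1, 1)))   # 1.0: the normalization forced by integrating (2)
print(B((-1, 1, 1, 1)))  # 0.0: the Haar symbol vanishes when any z_i = -1
```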

Definition 2. We say that φ(x) in (2) generates a multiresolution analysis {V_j}_{j∈Z} of L²(R⁴) if the sequence {V_j}_{j∈Z} defined in (5) satisfies the following properties: (i) V_j ⊂ V_{j+1}, j ∈ Z; (ii) ⋂_{j∈Z} V_j = {0} and ⋃_{j∈Z} V_j is dense in L²(R⁴); (iii) h(x) ∈ V_k ⇔ h(mx) ∈ V_{k+1}, k ∈ Z; (iv) the family {φ(x − n) : n ∈ Z⁴} forms a Riesz basis for the space V₀.

Let U_k (k ∈ Z) denote the complementary subspace of V_k in V_{k+1}, and assume that there exists a vector-valued function Ψ(x) = {ψ₁(x), ψ₂(x), …, ψ_{m⁴−1}(x)} that constitutes a Riesz basis for U_k, i.e.,

U_j = clos_{L²(R⁴)} span{ψ_{λ:j,n} : λ = 1, 2, …, m⁴ − 1; n ∈ Z⁴},   (6)

where j ∈ Z and ψ_{λ:j,k}(x) = m^{2j} ψ_λ(m^j x − k), λ = 1, 2, …, m⁴ − 1; k ∈ Z⁴. From condition (6) it is obvious that ψ₁(x), ψ₂(x), …, ψ_{m⁴−1}(x) are in U₀ ⊂ V₁. Hence there exist m⁴ − 1 real number sequences {q_k^{(λ)}} (λ ∈ {1, 2, …, m⁴ − 1}, k ∈ Z⁴) such that

ψ_λ(x) = m⁴ \sum_{k \in Z^4} q_k^{(λ)}\,φ(mx - k).   (7)

In the frequency domain, (7) can be written as

\hat{ψ}_λ(\omega) = Q^{(λ)}(z_1, z_2, z_3, z_4)\,\hat{φ}(\omega/m),  λ = 1, 2, …, m⁴ − 1,   (8)

where the symbol of the sequence {q_k^{(λ)}} (λ = 1, 2, …, m⁴ − 1, k ∈ Z⁴) is

Q^{(λ)}(z_1, z_2, z_3, z_4) = \sum_{n \in Z^4} q_n^{(λ)}\,z_1^{n_1} z_2^{n_2} z_3^{n_3} z_4^{n_4}.   (9)

A quaternary function ψ(x) ∈ L²(R⁴) is called a semiorthogonal one if

⟨ψ(·), ψ(· − k)⟩ = δ_{0,k},  k ∈ Z⁴.   (10)

We say that Ψ(x) = {ψ₁(x), ψ₂(x), …, ψ_{m⁴−1}(x)} is a family of orthogonal vector-valued wavelets associated with the scaling function φ(x) ∈ L²(R⁴) if they satisfy

⟨φ(·), ψ_λ(· − k)⟩ = 0,  λ ∈ Γ, k ∈ Z⁴,   (11)

⟨ψ_λ(·), ψ_μ(· − n)⟩ = δ_{λ,μ} δ_{0,n},  λ, μ ∈ Γ, n ∈ Z⁴.   (12)
3 The Traits of Nonseparable Quaternary Wavelet Packets

Write H₀(x) = F(x), H_ν(x) = Ψ_ν(x), H̃₀(x) = F̃(x), H̃_ν(x) = Ψ̃_ν(x), Q_k^{(0)} = P_k, Q_k^{(ν)} = B_k^{(ν)}, Q̃_k^{(0)} = P̃_k, Q̃_k^{(ν)} = B̃_k^{(ν)}, for ν ∈ Γ and k ∈ Z⁴. For any α ∈ Z₊⁴ and the given vector-valued biorthogonal scaling functions H₀(x) and H̃₀(x), iteratively define, respectively,

H_α(x) = H_{4β+ν}(x) = \sum_{k \in Z^4} Q_k^{(ν)}\,H_β(4x - k),   (13)

H̃_α(x) = H̃_{4β+ν}(x) = \sum_{k \in Z^4} \tilde{Q}_k^{(ν)}\,H̃_β(4x - k),   (14)

where ν ∈ Γ₀ and β ∈ Z₊⁴ is the unique element such that α = 4β + ν, ν ∈ Γ₀, holds.

Lemma 1 [4]. Let F(x), F̃(x) ∈ L²(R⁴, C^v). Then they are biorthogonal if and only if

\sum_{k \in Z^4} \hat{F}(\omega + 2k\pi)\,\hat{\tilde{F}}(\omega + 2k\pi)^{*} = I_v.   (15)

Definition 4. We say that the two families of vector-valued functions {H_{4β+ν}(x), β ∈ Z₊⁴, ν ∈ Γ₀} and {H̃_{4β+ν}(x), β ∈ Z₊⁴, ν ∈ Γ₀} are vector-valued wavelet packets with respect to the pair of biorthogonal vector-valued scaling functions H₀(x) and H̃₀(x), respectively, where H_{4β+ν}(x) and H̃_{4β+ν}(x) are given by (13) and (14).

Taking the Fourier transform of both sides of (13) and (14) yields, respectively,

\hat{H}_{4β+ν}(\omega) = Q^{(ν)}(\omega/4)\,\hat{H}_β(\omega/4),  ν ∈ Γ₀,   (16)

\hat{\tilde{H}}_{4β+ν}(4\omega) = \tilde{Q}^{(ν)}(\omega)\,\hat{\tilde{H}}_β(\omega),  ν ∈ Γ₀,   (17)

where

Q^{(ν)}(\omega) = \frac{1}{4^4} \sum_{k \in Z^4} Q_k^{(ν)} \exp\{-ik\omega\},  ν ∈ Γ₀,   (18)

\tilde{Q}^{(ν)}(\omega) = \frac{1}{4^4} \sum_{k \in Z^4} \tilde{Q}_k^{(ν)} \exp\{-ik\omega\},  ν ∈ Γ₀.   (19)

Lemma 2 [7]. Assume that H_ν(x), H̃_ν(x) ∈ L²(R⁴, C^v), ν ∈ Γ, are pairs of biorthogonal vector-valued wavelets associated with the pair of biorthogonal scaling functions H₀(x) and H̃₀(x). Then, for ν, μ ∈ Γ₀, we have

\sum_{u \in \Gamma_0} Q^{(ν)}((\omega + 2\pi u)/4)\,\tilde{Q}^{(μ)}((\omega + 2\pi u)/4)^{*} = \delta_{\nu,\mu} I_v.   (20)

Theorem 1. Assume that {H_α(x), α ∈ Z₊⁴} and {H̃_α(x), α ∈ Z₊⁴} are vector-valued wavelet packets with respect to the pair of biorthogonal vector-valued functions H₀(x) and H̃₀(x), respectively. Then, for β ∈ Z₊⁴ and ν, μ ∈ Γ₀, we have

[H_{4β+ν}(\cdot), \tilde{H}_{4β+μ}(\cdot - k)] = \delta_{0,k}\,\delta_{\nu,\mu} I_v,  k ∈ Z⁴.   (21)

Proof. Since the space R⁴ has the partition R⁴ = ⋃_{u∈Z⁴}([0, 2π]⁴ + 2πu), with ([0, 2π]⁴ + 2πu₁) ∩ ([0, 2π]⁴ + 2πu₂) = ∅ for u₁ ≠ u₂, u₁, u₂ ∈ Z⁴, we obtain

[H_{4β+ν}(\cdot), \tilde{H}_{4β+μ}(\cdot - k)] = \frac{1}{(2\pi)^4} \int_{R^4} \hat{H}_{4β+ν}(\omega)\,\hat{\tilde{H}}_{4β+μ}(\omega)^{*} \exp\{ik\omega\}\,d\omega

= \frac{1}{(2\pi)^4} \int_{R^4} Q^{(ν)}(\omega/4)\,\hat{H}_β(\omega/4)\,\hat{\tilde{H}}_β(\omega/4)^{*}\,\tilde{Q}^{(μ)}(\omega/4)^{*}\,e^{ik\omega}\,d\omega

= \frac{1}{(2\pi)^4} \int_{[0,8\pi]^4} Q^{(ν)}(\omega/4) \Big\{\sum_{u \in Z^4} \hat{H}_β(\omega/4 + 2\pi u)\,\hat{\tilde{H}}_β(\omega/4 + 2\pi u)^{*}\Big\} \tilde{Q}^{(μ)}(\omega/4)^{*}\,e^{ik\omega}\,d\omega

= \frac{1}{(2\pi)^4} \int_{[0,8\pi]^4} Q^{(ν)}(\omega/4)\,\tilde{Q}^{(μ)}(\omega/4)^{*}\,e^{ik\omega}\,d\omega = \frac{1}{(2\pi)^4} \int_{[0,2\pi]^4} \delta_{\nu,\mu} I_v\,e^{ik\omega}\,d\omega = \delta_{0,k}\,\delta_{\nu,\mu} I_v,

where the middle brace equals I_v by Lemma 1 and the last step uses Lemma 2. This completes the proof of Theorem 1.

Theorem 2. If {H_α(x), α ∈ Z₊⁴} and {H̃_α(x), α ∈ Z₊⁴} are vector-valued wavelet packets with respect to the pair of biorthogonal vector-valued scaling functions H₀(x) and H̃₀(x), then for any α, σ ∈ Z₊⁴ we have

[H_α(\cdot), \tilde{H}_σ(\cdot - k)] = \delta_{\alpha,\sigma}\,\delta_{0,k} I_v,  k ∈ Z⁴.   (28)

Proof. When α = σ, (28) follows from Theorem 1. When α ≠ σ and α, σ ∈ Γ₀, it follows from Lemma 2 that (28) holds, too. Assuming that α ≠ σ and that at least one of {α, σ} does not belong to Γ₀, we rewrite α, σ as α = 4α₁ + ρ₁, σ = 4σ₁ + μ₁, where ρ₁, μ₁ ∈ Γ₀.

Case 1. If α₁ = σ₁, then ρ₁ ≠ μ₁, and (28) follows from (16), (17) together with Lemma 1 and Lemma 2, i.e.,

(2\pi)^4 [H_α(\cdot), \tilde{H}_σ(\cdot - k)] = \int_{R^4} \hat{H}_{4α_1+ρ_1}(\omega)\,\hat{\tilde{H}}_{4σ_1+μ_1}(\omega)^{*} \exp\{ik\omega\}\,d\omega

= \int_{[0,8\pi]^4} Q^{(ρ_1)}(\omega/4) \Big\{\sum_{u \in Z^4} \hat{H}_{α_1}(\omega/4 + 2\pi u)\,\hat{\tilde{H}}_{σ_1}(\omega/4 + 2\pi u)^{*}\Big\} \tilde{Q}^{(μ_1)}(\omega/4)^{*}\,e^{ik\omega}\,d\omega

= \int_{[0,2\pi]^4} \delta_{ρ_1,μ_1} I_v \exp\{ik\omega\}\,d\omega = O.

Case 2. If α₁ ≠ σ₁, write α₁ = 4α₂ + ρ₂, σ₁ = 4σ₂ + μ₂, where α₂, σ₂ ∈ Z₊⁴ and ρ₂, μ₂ ∈ Γ₀. Provided that α₂ = σ₂, then ρ₂ ≠ μ₂, and similarly to Case 1, (28) can be established. When α₂ ≠ σ₂, we write α₂ = 4α₃ + ρ₃, σ₂ = 4σ₃ + μ₃, where α₃, σ₃ ∈ Z₊⁴ and ρ₃, μ₃ ∈ Γ₀. Thus, after taking finitely many steps (denoted by κ), we obtain α_κ, σ_κ ∈ Γ₀ and ρ_κ, μ_κ ∈ Γ₀. If α_κ = σ_κ, then ρ_κ ≠ μ_κ, and similarly to Case 1, (28) follows. If α_κ ≠ σ_κ, then it follows from (13)–(16) that

(2\pi)^4 [H_α(\cdot), \tilde{H}_σ(\cdot - k)] = \int_{R^4} \hat{H}_α(\omega)\,\hat{\tilde{H}}_σ(\omega)^{*}\,e^{ik\omega}\,d\omega = \int_{R^4} \hat{H}_{4α_1+ρ_1}(\omega)\,\hat{\tilde{H}}_{4σ_1+μ_1}(\omega)^{*} \exp\{ik\omega\}\,d\omega

= \int_{[0,4^{\kappa}\cdot 2\pi]^4} \Big\{\prod_{l=1}^{\kappa} Q^{(ρ_l)}(\omega/4^{l})\Big\} \Big\{\sum_{u \in Z^4} \hat{H}_{α_\kappa}(\omega/4^{\kappa} + 2\pi u)\,\hat{\tilde{H}}_{σ_\kappa}(\omega/4^{\kappa} + 2\pi u)^{*}\Big\} \Big\{\prod_{l=1}^{\kappa} \tilde{Q}^{(μ_l)}(\omega/4^{l})\Big\}^{*}\,e^{ik\omega}\,d\omega

= \int_{[0,4^{\kappa}\cdot 2\pi]^4} \Big\{\prod_{l=1}^{\kappa} Q^{(ρ_l)}(\omega/4^{l})\Big\}\, O\, \Big\{\prod_{l=1}^{\kappa} \tilde{Q}^{(μ_l)}(\omega/4^{l})\Big\}^{*} \exp\{ik\omega\}\,d\omega = O.

Therefore, for any α, σ ∈ Z₊⁴, result (28) is established.

4 Conclusion

The concept of biorthogonal vector-valued quaternary wavelet packets has been introduced. Three biorthogonality formulas with respect to these wavelet packets are obtained. The direct decomposition of the space L²(R⁴) is proposed by constructing a series of subspaces of wavelet packets.

References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Zhang, N., Wu, X.: Lossless Compression of Color Mosaic Images. IEEE Trans. Image Processing 15(6), 1379–1388 (2006)
4. Chen, Q., et al.: A study on compactly supported orthogonal vector-valued wavelets and wavelet packets. Chaos, Solitons & Fractals 31(4), 1024–1034 (2007)
5. Shen, Z.: Nontensor product wavelet packets in L²(R^s). SIAM J. Math. Anal. 26(4), 1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information Technology Journal 8(8), 1275–1280 (2009)
8. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-valued wavelets. Chaos, Solitons & Fractals 41(2), 951–961 (2009)
A Kind of New Strengthening Buffer Operators and
Their Applications

Ran Han* and Zheng-peng Wu

College of Science, Communication University of China, Beijing 100024, China


hanran@cuc.edu.cn, wuzhengpeng@126.com

Abstract. Under the axiomatic system of the buffer operator in grey system theory, two new strengthening buffer operators are constructed based on strictly monotone functions. Meanwhile, their characteristics and the inherent relations among them are studied. The problem that there are some contradictions between quantitative analysis and qualitative analysis in the pretreatment of vibration data sequences is resolved effectively. A practical example shows their validity and practicability.

Keywords: Grey system, Monotone increasing function, Buffer operator, Strengthening buffer operator, Fixed point.

1 Basic Concept
Definition 1. Assume that the sequence of data representing a system's behavior is given as X = (x(1), x(2), …, x(n)). Then
(1) X is called a monotonically increasing sequence if ∀k = 1, 2, …, n − 1, x(k) < x(k + 1);
(2) X is called a monotonically decreasing sequence if ∀k = 1, 2, …, n − 1, x(k) > x(k + 1);
(3) X is called a vibration sequence if there exist k, k′ ∈ {1, 2, …, n − 1} such that x(k) < x(k + 1) and x(k′) > x(k′ + 1).
Let M = max_{1≤k≤n} x(k) and m = min_{1≤k≤n} x(k); then M − m is called the amplitude of X.
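Definition 1 translates directly into a small classification routine. The helper below is ours, for illustration only; it assumes the sequence falls into one of the three classes above.

```python
def classify(X):
    """Classify a raw-data sequence per Definition 1 and return its amplitude M - m."""
    rises = any(a < b for a, b in zip(X, X[1:]))
    falls = any(a > b for a, b in zip(X, X[1:]))
    if rises and falls:
        kind = "vibration"
    elif rises:
        kind = "increasing"
    else:
        kind = "decreasing"
    return kind, max(X) - min(X)

print(classify([3, 7, 5, 9, 4]))  # ('vibration', 6)
```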

Definition 2. Assume that X is a sequence of raw data and D is an operator acting on X; the sequence obtained by applying D to X is denoted XD = (x(1)d, x(2)d, …, x(n)d). Then D is called a sequence operator, and XD is the first-order sequence obtained from X by the operator D.
A sequence operator can be applied as many times as needed, yielding second-order, …, r-th order sequences, denoted XD², …, XD^r.

Axiom 1 (Axiom of Fixed Points). Assume that X is a sequence of raw data and D is a sequence operator; then D must satisfy x(n)d = x(n).
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 49–54, 2011.
© Springer-Verlag Berlin Heidelberg 2011

In the effect of the Axiom of Fixed Points which limit the sequence operator,
x(n) in the raw data is never changed.
Axiom 2. Axiom on Sufficient Usage of Information. When a sequence operator is
applied, all the information contained in each datum x(k), (k=1,2,,n)of the sequence
X of the raw data should be sufficiently applied, and any effect of each entry x(k),
(k=1,2,,n) should also be directly reflected in the sequence worked on by the
operator.
The Axiom 2 limits any sequence operator which should be defined without the
sequence of raw data and based on the information in the sequence we have kept.
Axiom 3. Axiom of Analytic Representations. For any x(k)d, (k=1,2,,n) can be
described with a uniform and elementary analytic representation in x(1),x(2),,x(n).
The Axiom 3 requires procedures which the operator is applied to the raw data
clearly, standardized, and as simple as possible, in order to calculate and makes
calculation easier on the computer.
Definition 3 All sequence operators, satisfying these three axioms, are called buffer
operators, XD is called buffer sequence.
Definition 4 Assume X is a sequence of raw data, D is an operator worked on X,
when X is a monotonously increasing sequence, a monotonously decreasing sequence
or a vibration sequence, if the buffer sequence XD increases or decreases more rapidly
or vibrate with a bigger amplitude than the original sequence X, the buffer operator D
is termed as a strengthening operator.
Theorem 1 (1) When X is a monotonously increasing sequence, XD is a buffer
sequence, then D is a strengthening operator x( k ) d x ( k ) ( k = 1, 2,3,..., n ) ;
(2)When X is a monotonously decreasing sequence, XD is a buffer sequence, then
D is a strengthening operator x( k ) d x( k ) ( k = 1,2,3,..., n) ;
(3)When X is a monotonously vibration sequence and D is a strengthening
operator, XD is a buffer sequence ,then
max x(k ) max x( k )d , min x(k ) min x( k )d .
1 k n 1 k n 1 k n 1 k n

That is, the data in a monotonously increasing sequence shrink when a strengthening
operator is applied and data in a monotonously decreasing sequence expand when a
strengthening operator is applied[1].

2 Study on a Kind of New Strengthening Buffer Operators


Theorem 2. Assume X = (x(1), x(2), \ldots, x(n)) is a sequence of raw data with x(k) > 0, k = 1, 2, \ldots, n; f_i is a strictly monotonically increasing function with f_i > 0, i = 1, 2, \ldots, n, and g_i is its inverse function. Let XD_1 = (x(1)d_1, x(2)d_1, \ldots, x(n)d_1), where

x(k)d_1 = g_i\Big( f_i(x(k)) \Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \Big), \quad k = 1, \ldots, n.

Then, when X is a monotonically increasing sequence, a monotonically decreasing sequence, or a vibration sequence, D_1 is a strengthening buffer operator.
A Kind of New Strengthening Buffer Operators and Their Applications 51

Proof: It is easy to verify that D_1 satisfies the three axioms of buffer operators, so D_1 is a buffer operator. We now prove that D_1 is a strengthening buffer operator.
(1) When X is a monotonically increasing sequence, 0 < x(k) \le x(n). Since f_i is a strictly monotonically increasing function with f_i > 0, i = 1, 2, \ldots, n, we have 0 < f_i(x(k)) \le f_i(x(n)), hence

\Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \le 1, \qquad f_i(x(k)) \Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \le f_i(x(k)).

And because g_i is the (increasing) inverse function of f_i,

x(k)d_1 = g_i\Big( f_i(x(k)) \Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \Big) \le g_i\big( f_i(x(k)) \big) = x(k).

According to Theorem 1, D_1 is a strengthening buffer operator.


(2) When X is a monotonically decreasing sequence, x(k) \ge x(n) > 0. Since f_i is a strictly monotonically increasing function with f_i > 0, i = 1, 2, \ldots, n, we have f_i(x(k)) \ge f_i(x(n)) > 0, hence

\Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \ge 1, \qquad f_i(x(k)) \Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \ge f_i(x(k)),

and since g_i is the inverse function of f_i,

x(k)d_1 = g_i\Big( f_i(x(k)) \Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \Big) \ge g_i\big( f_i(x(k)) \big) = x(k).

According to Theorem 1, D_1 is a strengthening buffer operator.


(3) When X is a vibration sequence, let x(k) = \max_{1 \le i \le n} x(i) and x(h) = \min_{1 \le i \le n} x(i), so that x(h) \le x(i) \le x(k) for every i \in \{1, 2, \ldots, n\}; in particular x(k) \ge x(n) and x(h) \le x(n). Since f_i is a strictly monotonically increasing function with f_i > 0, we have f_i(x(k)) \ge f_i(x(n)) and f_i(x(h)) \le f_i(x(n)), hence

\Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \ge 1, \qquad \Big[ \frac{f_i(x(h))}{f_i(x(n))} \Big]^{\frac{1}{n-h+1}} \le 1,

so that f_i(x(k)) [f_i(x(k))/f_i(x(n))]^{\frac{1}{n-k+1}} \ge f_i(x(k)) and f_i(x(h)) [f_i(x(h))/f_i(x(n))]^{\frac{1}{n-h+1}} \le f_i(x(h)). Since g_i is the inverse function of f_i,

x(k)d_1 = g_i\Big( f_i(x(k)) \Big[ \frac{f_i(x(k))}{f_i(x(n))} \Big]^{\frac{1}{n-k+1}} \Big) \ge g_i\big( f_i(x(k)) \big) = x(k),

x(h)d_1 = g_i\Big( f_i(x(h)) \Big[ \frac{f_i(x(h))}{f_i(x(n))} \Big]^{\frac{1}{n-h+1}} \Big) \le g_i\big( f_i(x(h)) \big) = x(h),

and therefore \max_{1 \le k \le n} x(k)d_1 \ge \max_{1 \le k \le n} x(k) and \min_{1 \le k \le n} x(k)d_1 \le \min_{1 \le k \le n} x(k). According to Theorem 1, D_1 is a strengthening buffer operator.

Theorem 3. Assume X = (x(1), x(2), \ldots, x(n)) is a sequence of raw data with x(k) > 0, k = 1, 2, \ldots, n; f_i is a strictly monotonically increasing function with f_i > 0, i = 1, 2, \ldots, n, and g_i is its inverse function. Let XD_2 = (x(1)d_2, x(2)d_2, \ldots, x(n)d_2) be its buffer sequence, where
52 R. Han and Z.-p. Wu

x(k)d_2 = g_i\Big( \frac{f_i(x(k))}{n-k+1} \sum_{j=k}^{n} \Big[ \frac{f_i(x(j))}{f_i(x(n))} \Big]^{\frac{1}{n-j+1}} \Big), \quad k = 1, \ldots, n.

Then, when X is a monotonically increasing sequence or a monotonically decreasing sequence, D_2 is a strengthening buffer operator.

Proof: It is easy to verify that D_2 satisfies the three axioms of buffer operators, so D_2 is a buffer operator. We now prove that D_2 is a strengthening buffer operator.
(1) When X is a monotonically increasing sequence, 0 < x(k) \le \cdots \le x(n), and since f_i is a strictly monotonically increasing function with f_i > 0, i = 1, 2, \ldots, n, we have 0 < f_i(x(k)) \le \cdots \le f_i(x(n)). Hence

\Big[ \frac{f_i(x(j))}{f_i(x(n))} \Big]^{\frac{1}{n-j+1}} \le 1, \quad j = k, \ldots, n, \qquad \frac{1}{n-k+1} \sum_{j=k}^{n} \Big[ \frac{f_i(x(j))}{f_i(x(n))} \Big]^{\frac{1}{n-j+1}} \le 1,

and therefore

\frac{f_i(x(k))}{n-k+1} \sum_{j=k}^{n} \Big[ \frac{f_i(x(j))}{f_i(x(n))} \Big]^{\frac{1}{n-j+1}} \le f_i(x(k)).

Because g_i is the inverse function of f_i,

x(k)d_2 = g_i\Big( \frac{f_i(x(k))}{n-k+1} \sum_{j=k}^{n} \Big[ \frac{f_i(x(j))}{f_i(x(n))} \Big]^{\frac{1}{n-j+1}} \Big) \le g_i\big( f_i(x(k)) \big) = x(k).

According to Theorem 1, D_2 is a strengthening buffer operator.
(2) When X is a monotonically decreasing sequence, the proof follows the pattern of the proof of Theorem 2, and D_2 is again a strengthening buffer operator.
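For readers who wish to experiment with the operators, the following sketch implements D_1 (Theorem 2) and D_2 (Theorem 3) directly from the formulas above, using the case-study choice f_i(x) = x^2 and g_i(x) = x^{0.5}; the function names are ours, not the authors'.

```python
import math

def strengthen_d1(x, f=lambda v: v * v, g=math.sqrt):
    """Strengthening buffer operator D1 (Theorem 2).

    x(k)d1 = g( f(x(k)) * [f(x(k))/f(x(n))]^(1/(n-k+1)) ); with
    0-based indexing the exponent 1/(n-k+1) becomes 1/(n-k).
    """
    n = len(x)
    fx = [f(v) for v in x]
    return [g(fx[k] * (fx[k] / fx[-1]) ** (1.0 / (n - k))) for k in range(n)]

def strengthen_d2(x, f=lambda v: v * v, g=math.sqrt):
    """Strengthening buffer operator D2 (Theorem 3): the ratio terms
    are averaged over j = k..n before multiplying f(x(k))."""
    n = len(x)
    fx = [f(v) for v in x]
    out = []
    for k in range(n):
        s = sum((fx[j] / fx[-1]) ** (1.0 / (n - j)) for j in range(k, n))
        out.append(g(fx[k] * s / (n - k)))
    return out
```

On a monotonically increasing sequence both operators leave the last datum fixed and shrink the earlier data, exactly as Theorem 1(1) requires; applying an operator twice yields the second-order sequences used in the case study below.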

3 Case Study
We take per capita power consumption (unit: kWh) as an example to demonstrate the effect of the strengthening buffer operators of this paper on the GM(1,1) prediction model. The per capita power consumption of China from 2000 to 2006 is chosen as the sequence of raw data (Table 1).

Table 1. Per capita power consumption (unit: kWh)

Year                          2000   2001   2002   2003   2004   2005   2006
Per capita power consumption  132.4  144.6  156.3  173.7  190.2  216.7  249.4

At present, China is in the process of industrialization, and its economic development depends heavily on electric power. Since 2000, China has overcome the effects of the Asian financial crisis; the government adopted a proactive fiscal policy and a prudent monetary policy, injecting vigor into economic growth and causing power demand to increase sharply. Therefore, this article takes China's per capita power consumption from 2000 to 2005 as the modeling data and the 2006 value as the test datum. The year-on-year growth rates of per capita power consumption are 9.215%, 8.091%, 11.132%, 9.499%, 13.933%, and 15.090%, an annual average increase rate of 9.468%.
We test the primary data sequence for quasi-smoothness. The smooth ratio \rho(t) = x(t) / \sum_{i=1}^{t-1} x(i) for t \ge 2003 takes the values 0.401, 0.313, 0.272, and 0.246, all of which lie in (0, 0.5); the smooth ratio is decreasing, so the sequence satisfies the quasi-smooth condition, and the first-order accumulation of the primary data sequence obeys a quasi-exponential law. However, the earlier data grow rather slowly while the later data grow quickly, so it is best to smooth the original data sequence to weaken the impact of shock disturbances and highlight the underlying law of the data.
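The quasi-smoothness check can be reproduced in a few lines (a sketch; the function name is ours). Each smooth ratio is the datum divided by the sum of all preceding data:

```python
def smooth_ratios(x):
    """Return rho(t) = x(t) / (x(1) + ... + x(t-1)) for t = 2..n."""
    total, out = 0.0, []
    for v in x:
        if total > 0:
            out.append(v / total)
        total += v
    return out

consumption = [132.4, 144.6, 156.3, 173.7, 190.2, 216.7, 249.4]  # 2000-2006
ratios = smooth_ratios(consumption)
print([round(r, 3) for r in ratios[2:]])  # -> [0.401, 0.313, 0.272, 0.246]
```

The printed values for 2003-2006 match the smooth ratios quoted in the text.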
We take f_i(x) = x^2, g_i(x) = x^{0.5}, i = 1, 2, \ldots, n, to construct the buffer operators, and apply the second-order buffer operators (each operator applied twice) described in this article to strengthen the raw data. We then build the prediction models and compare them with the model built from the raw data sequence (Table 2).
The GM(1,1) model built directly from the raw data sequence, without a buffer operator, is

\hat{x}^{(1)}(2000 + t) = 1317.848 e^{0.102t} - 1185.448.
The GM(1,1) model built from the strengthened buffer sequence XD_1^2 is

\hat{x}^{(1)}(k + 1) = 654.7787 e^{0.1549k} - 544.7797.

The GM(1,1) model built from the strengthened buffer sequence XD_2^2 is

\hat{x}^{(1)}(k + 1) = 801.78 e^{0.139k} - 686.784.
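The GM(1,1) fits above can be reproduced with the standard grey-model least-squares procedure (a sketch under the usual GM(1,1) conventions; variable names are ours, and the fitted coefficients may differ slightly from the authors' values):

```python
import numpy as np

def gm11(x):
    """Fit a GM(1,1) grey model.

    x1 is the 1-AGO (cumulative sum) of x; the background values are
    z(k) = (x1(k-1) + x1(k)) / 2; (a, b) solve the least-squares
    problem x(k) + a*z(k) = b; the time response (0-based k) is
    x1_hat(k) = (x(0) - b/a) * exp(-a*k) + b/a.
    """
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)
    z = 0.5 * (x1[:-1] + x1[1:])
    B = np.column_stack([-z, np.ones_like(z)])
    (a, b), *_ = np.linalg.lstsq(B, x[1:], rcond=None)

    def predict(k):
        # restored value x_hat(k) = x1_hat(k) - x1_hat(k-1), k >= 1
        x1_hat = lambda t: (x[0] - b / a) * np.exp(-a * t) + b / a
        return x[0] if k == 0 else x1_hat(k) - x1_hat(k - 1)

    return a, b, predict
```

Fitting the 2000-2005 raw data reproduces the raw-data model closely (development coefficient a of about -0.102 and a one-step 2006 forecast of about 236.8); feeding in the second-order buffer sequences instead yields the other two rows of Table 2.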

Table 2. The GM(1,1) models

Sequence   Predictive value   Predictive relative error (%)
X              236.831            5.040
XD_1^2         238.2672           4.53
XD_2^2         241.23             3.28

From Table 2, the predictive relative errors of the strengthened buffer sequences obtained with the second-order buffer operators D_1 and D_2 are both smaller than the predictive relative error of the raw data sequence. The predictive relative error with D_2 is the smallest: its prediction is 241.23, closest to the observed value 249.4, and its one-step predictive error is only 3.28%, i.e., its predictive accuracy is the highest.

4 Conclusion
Based on the recent literature, two new strengthening buffer operators are established,
which have been based on strictly monotone function. The example shows that the
predictive accuracy increases by the strengthening buffer sequences which are applied
by the second order buffer operator D1 and D2 .

References
1. Liu, S.-f., Dang, Y.-g., Fang, Z.-g.: Grey system theory and its application. Science Press, Beijing (2004)
2. Liu, S.-f.: The three axioms of buffer operator and their applications. The Journal of Grey System 3(1), 39–48 (1991)
3. Liu, S.-f.: Buffer operator and its application. Theories and Practices of Grey System 2(1), 45–50 (1992)
4. Liu, S.-f.: The trap in the prediction of a shock disturbed system and the buffer operator. Journal of Huazhong University of Science and Technology 25(1), 25–27 (1997)
5. Dang, Y.-g., Liu, S.-f., Liu, B., et al.: Study on the strengthening buffer operators. Chinese Journal of Management Science 12(2), 108–111 (2004)
Infrared Target Detection Based on Spatially Related
Fuzzy ART Neural Network

BingWen Chen1, WenWei Wang1, and QianQing Qin2


1
College of Electronic Information, Wuhan University, 430079 Wuhan, China
2
State Key Laboratory for Information Engineering in Surveying,
Mapping and Remote Sensing, Wuhan University, 430079 Wuhan, China
wangww@whu.edu.cn

Abstract. In order to address ghosting, the halo effect, and the low signal-to-noise ratio of thermal imagery more effectively, this paper presents a spatially related fuzzy ART neural network. We introduce a laterally-inspirited learning mode into the background modeling stage. First, we combine a region-based feature with an intensity-based feature to train the spatially related fuzzy ART neural network in the laterally-inspirited learning mode. Then two spatially related fuzzy ART neural networks are configured in a master-slave pattern to build the background models and detect the infrared targets alternately. Experiments have been carried out, and the results demonstrate that the proposed approach is robust to noise and can eliminate ghosts and the halo effect effectively. It detects targets effectively without much post-processing.

Keywords: infrared target detection, laterally-inspirited learning mode, spatially related fuzzy ART neural network, master-slave pattern.

1 Introduction
Visual surveillance is a hot topic in computer vision. In recent years, thanks to improvements in infrared technology and the drop in its cost, thermal imagery has been widely used in the surveillance field. In comparison with visible imagery, thermal imagery has many benefits, such as all-day working ability, robustness to illumination change, absence of shadows, etc. However, thermal imagery has its own difficulties, such as a lower signal-to-noise ratio, a lower resolution, an uncalibrated white-black polarity change, as well as the "halo effect" that appears around very hot or cold objects [1]. A comparison between visible and thermal imagery can be found in Lin's review [2].
Many detection approaches have been proposed for thermal imagery: some are based on the target's appearance [3-5], and some on the target's motion [1, 6].
A fuzzy ART neural network approach was proposed to detect targets in our previous work [7]. It is capable of learning the total number of categories adaptively and stably, but it has some shortcomings. It does not take advantage of the strong spatial correlation of thermal imagery; it exploits only time-domain information and uses only one simple feature to build the background models; moreover, its background model updating strategy is not comprehensive. In this paper, we exploit and combine spatial information with temporal information, and propose a novel infrared target detection approach based on a spatially related fuzzy ART neural network. The remainder of this paper is organized as follows: Sec. 2 introduces the laterally-inspirited learning mode and the master-slave working pattern, and presents the detection approach in detail. The experimental evaluation of our approach is described in Sec. 3. Finally, Sec. 4 gives conclusions and further research directions.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 55–61, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2 The Spatially Related Fuzzy ART Neural Network


The infrared target detection approach using the spatially related fuzzy ART neural network can be divided into two stages: the background modeling stage, and the target detection and model updating stage. The whole detection scheme is the same for every location of the infrared image; to simplify the presentation, we describe the approach for a single pixel location in the following.

2.1 Background Modeling Stage

The first several frames (F_t, t = 0, …, N) are used to build the background model, so for each location we obtain the sample data of formula (1):

[f_0(i, j), …, f_t(i, j), …, f_N(i, j)] . (1)

The background modeling stage includes two sub-stages: the initial background model sub-stage and the build background model sub-stage.

Initial Background Model Sub-stage. This stage focuses on initializing the fuzzy ART neural network and filtering the sample data.
A fuzzy ART neural network is initialized and trained with the sample data. Taking advantage of the strong spatial correlation of infrared images, we combine a region-based feature (the median of the pixel's neighborhood) with an intensity-based feature (the pixel's own intensity) to train the fuzzy ART neural network (formulas (2) and (3)):

P_t(i, j) = [f_t(i, j); Median{f_t(i ± k, j ± k) | k = 0, 1, …, (M − 1)/2}] . (2)

P(i, j) = [P_0(i, j), …, P_t(i, j), …, P_N(i, j)] . (3)

Where M denotes the size of the pixel's neighborhood and P(i, j) denotes the sequence of input feature vectors.
After training, we can obtain each category's information, such as the category's appearing frequency and each sample datum's category, and then filter the categories according to their properties. If a category belongs to the background, its appearing frequency must be larger than the others' or than a threshold value; so we set a threshold: if a category's frequency is larger than the threshold, the category belongs to the background; otherwise it is non-background and is deleted. In the same way, the sample data are filtered according to their categories. By means of this filtering, the proposed approach can handle the realistic situation in which foreground objects appear during the background modeling period.
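The combined feature of formula (2) can be sketched as follows (illustrative only; the function name is ours, and edge pixels are handled here by clipping the window):

```python
import numpy as np

def feature_vector(frame, i, j, M=3):
    """P_t(i,j): the pixel's own intensity together with the median of
    its M x M neighborhood (the region-based feature of formula (2))."""
    r = (M - 1) // 2
    patch = frame[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
    return np.array([frame[i, j], np.median(patch)], dtype=float)
```

Each such two-element vector is one training input for the pixel's fuzzy ART network.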

Build Background Model Sub-stage. This stage focuses on retraining the fuzzy ART neural networks and refining the background models.
Owing to the thermal imaging mechanism, a thermal image has strong spatial correlation within a local region, so the background models should also reflect this property. In order to exploit and fuse the spatial and temporal correlation effectively, we introduce the laterally-inspirited learning mode as follows: according to the category of P_t(i, j), for each location, its initialized fuzzy ART neural network and the initialized fuzzy ART neural networks in its neighborhood are retrained, as in formula (4):

{FART(i ± k, j ± k) | k = 0, 1, …, (R − 1)/2} are retrained with P_t(i, j), if P_t(i, j) belongs to the background;
no retraining is performed, if P_t(i, j) belongs to the foreground. (4)

Where R denotes the size of the laterally-inspirited region and FART(i, j) denotes the fuzzy ART neural network at location (i, j).
A fuzzy ART neural network retrained in this laterally-inspirited learning mode associates the background models with spatial information. Furthermore, combined with the region-based feature, it exploits the spatial correlation effectively and makes the detection more stable and robust to noise. We name this type of neural network the spatially related fuzzy ART neural network.

2.2 Target Detection and Model Updating Stage


The approach in reference [7] updates the background model only when the current pixel matches the background model; it does not consider how to update when the current pixel does not match. Therefore, when the approach runs for a long time, the background model may no longer fit the actual situation, and false objects, often referred to as "ghosts" [8], appear. For example, a still background object may start moving, or a moving foreground object may stop for a long time; both can lead to ghosts, and the performance may also worsen.
In order to solve this problem, two spatially related fuzzy ART neural networks are configured in a master-slave pattern to detect the infrared targets alternately.
Here we suppose that A and B denote the two spatially related fuzzy ART neural networks, T denotes the alternation time interval, W_j denotes the weight vector of category j, and β denotes the learning rate parameter.
The target detection and model updating strategy is then as follows:
1) In the background modeling stage, A is established as the master network and trained with the sample data following the strategy of the background modeling stage;
2) In the target detection stage, A is used to detect the targets. If a new input pixel finds a matched category and resonance happens, the corresponding category's weights are updated as in formula (5). At the same time, B is established as the slave network and trained with the current data following the strategy of the background modeling stage.

W_j^{t+1} = β (P_t(i, j) ∧ W_j^t) + (1 − β) W_j^t . (5)

3) When the alternation time interval T passes, the slave network B becomes the master network and begins to detect the targets, and the former master network A is reset and established as the slave network. At the end of every alternation time interval T, the master network and the slave network swap roles.
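The resonance test and the weight update of formula (5) can be illustrated with a simplified per-pixel sketch (all names are ours; a full fuzzy ART implementation would also use complement coding and a choice function, which are omitted here for brevity):

```python
import numpy as np

class PixelFuzzyART:
    """One pixel's background model: a list of category weight vectors
    updated with the fuzzy AND (element-wise minimum), cf. formula (5)."""

    def __init__(self, rho=0.85, beta=1.0):
        self.rho = rho        # vigilance parameter
        self.beta = beta      # learning rate beta of formula (5)
        self.weights = []     # one weight vector per category

    def observe(self, p):
        """Return True if p resonates with an existing category."""
        p = np.asarray(p, dtype=float)
        for k, w in enumerate(self.weights):
            if np.minimum(p, w).sum() / (p.sum() + 1e-12) >= self.rho:
                # formula (5): W = beta*(P ^ W) + (1 - beta)*W
                self.weights[k] = self.beta * np.minimum(p, w) + (1 - self.beta) * w
                return True
        self.weights.append(p.copy())  # open a new category
        return False

# Master-slave alternation: detect with the master, retrain the slave
# on the current data, and swap the two networks every interval T.
master, slave = PixelFuzzyART(), PixelFuzzyART()
```

A pixel whose feature resonates with a background category is classified as background and triggers the weight update; otherwise it is reported as a (candidate) target.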

3 Experimental Results
To evaluate the performance of the proposed approach, we tested it on the OTCBVS Benchmark Dataset Collection [9]. Some example images from the five original thermal sequences are shown in the first row of Fig. 1. We can see from these images that the thermal images have a low resolution, the halo effect appears around objects, and the targets' temperature is not always higher than the environment's.
In our experimental evaluation, all detection results are raw detection results without post-processing. In order to measure the performance quantitatively, we manually segmented the person regions and took them as the ground truth to compare with the automatic detection results. Some segmented silhouettes are shown in the second row of Fig. 1.
We compare the spatially related fuzzy ART neural network (SRFART) with three other approaches: the single weighted Gaussian approach (SWG) [1], the codebook approach (CB) [10], and the fuzzy ART neural network (FART) [7]. The single weighted Gaussian approach is an improved version of the single Gaussian model; the codebook approach is one of the prominent detection approaches. Regarding parameter settings: for the single weighted Gaussian approach, we set the standard deviation to 5 and the detection threshold T = 8; for the codebook approach, we set the brightness control parameters α = 0.7 and β = 1.3; for the FART approach and the proposed approach, we set the choice parameter α = 0, the learning rate parameter β = 1, and the vigilance parameter ρ = 0.85 for seq. 1, 2, 5 and ρ = 0.75 for seq. 3, 4; for the proposed approach, we set the alternation time interval T = 10 s, the size of the pixel's neighborhood M = 3, and the size of the laterally-inspirited region R = 3.
We show the silhouette results obtained by each detection approach on representative images (from the five sequences) in Fig. 1. From the third row, we can see that the detection results obtained by the codebook approach have much noise and even contain ghosts, as in seq. 4 and 5. From the fourth row, we can see that the results obtained by the single weighted Gaussian approach also have much noise, and the halo effect is very pronounced, severely impairing the detection results in sequences 3 and 4. From the fifth row, we can see that the results obtained by the fuzzy ART neural network approach have some noise (seq. 1) and fail to detect the targets effectively when the contrast between target and environment is low (seq. 5). In contrast, from the last row, we can see that the detection results obtained by the proposed approach have less noise and contain no ghosts; the proposed approach is able to detect most portions of people's bodies even when the contrast between target and environment is low, and it can eliminate most of the halo and extract the people effectively.

Fig. 1. Visual comparisons of detection results of the four approaches across different images

Fig. 2. Visual comparison of detection results with and without the master-slave pattern

Fig. 2 shows a visual comparison of detection results with and without the master-slave pattern. Image (a) in Fig. 2 is one example image of the background sample frames: a person is standing in front of the door and the door is open. After a while, the person goes into the house and the door is closed, as shown in image (b). Image (c) shows the ground-truth detection result for image (b). From image (d) we can see that, when the master-slave pattern is not used, the detection result obviously contains two ghosts (false objects). In contrast, when the master-slave pattern is used, the background model reflects the real situation as accurately as possible and eliminates the ghosts effectively.
In order to evaluate the detection results objectively and quantitatively, the F1 metric [8] is adopted. The F1 metric, also known as the Figure of Merit or F-measure, is the weighted harmonic mean of precision and recall, as defined in formula (6). Recall, also known as the detection rate, gives the percentage of detected true positives relative to the total number of true positives in the ground truth; precision, also known as the positive prediction rate, gives the percentage of detected true positives relative to the total number of detected items. A higher F1 value means better detection performance.

F1 = 2 · Recall · Precision / (Recall + Precision) . (6)
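Formula (6) can be computed from binary foreground masks as follows (a sketch; function and variable names are ours):

```python
import numpy as np

def f1_score(detected, ground_truth):
    """F1 = 2*Recall*Precision / (Recall + Precision) for binary masks."""
    detected = np.asarray(detected, dtype=bool)
    ground_truth = np.asarray(ground_truth, dtype=bool)
    tp = np.logical_and(detected, ground_truth).sum()
    recall = tp / max(ground_truth.sum(), 1)   # detection rate
    precision = tp / max(detected.sum(), 1)    # positive prediction rate
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)
```

Applying this to each detection mask against the manually segmented ground truth yields scores of the kind reported in Table 1.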
The F1 measurements of each approach for the five sequences in Fig. 1 are shown in Table 1. Comparing the measurements, we can see that, due to the presence of halo and ghosts, the F1 values of the single weighted Gaussian and codebook approaches are poor, and, due to the low contrast between target and environment, the F1 value of the fuzzy ART neural network is low for seq. 5. In contrast, the spatially related fuzzy ART neural network approach performs well even when halo and ghosts exist.

Table 1. The F1 measurements of four approaches for five sequences

Detection approach Seq.1 Seq.2 Seq.3 Seq.4 Seq.5


SWG 0.709 0.668 0.593 0.528 0.626
CB 0.678 0.600 0.655 0.536 0.343
FART 0.702 0.815 0.829 0.833 0.566
SRFART 0.772 0.858 0.834 0.881 0.756

Generally speaking, the proposed approach is robust to noise and reflects the real situation as accurately as possible. It eliminates the halo and the ghosts effectively and detects the targets effectively without much post-processing.

4 Conclusions
We have presented a novel infrared target detection approach based on a spatially related fuzzy ART neural network. It associates the background models with spatial information through the laterally-inspirited learning mode, and it reflects the real situation as accurately as possible by adopting the master-slave working pattern. The approach detects targets more stably and is robust to noise; it eliminates the halo and ghosts effectively and detects the targets effectively without much post-processing.
In the implementation of the proposed approach, we did not take account of processing speed; in the end, the detection speed is only 2 fps, so we are going to study how to speed the model up in future work.

Acknowledgments. This research was supported in part by the Natural Science Foundation of Hubei Province of China, No. 2009CDA141.

References
1. Davis, J.W., Sharma, V.: Background subtraction in thermal imagery using contour saliency. International Journal of Computer Vision 71, 161–181 (2007)
2. Lin, S.-S.: Review: Extending visible band computer vision techniques to infrared band images. Technical Report, University of Pennsylvania (2001)
3. Li, Z., Bo, W., Ram, N.: Pedestrian detection in infrared images based on local shape features. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE Press, Minneapolis (2007)
4. Kai, J., Michael, A.: Feature based person detection beyond the visible spectrum. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 30–33. IEEE Press, Miami (2009)
5. Stephen, O.H., Ambe, F.: Detecting people in IR border surveillance video using scale-invariant image moments. In: Proceedings of SPIE, pp. 73400L-16. SPIE, Orlando (2009)
6. Fida, E.B., Thierry, B., Bertrand, V.: Fuzzy foreground detection for infrared videos. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–6. IEEE Press, Anchorage (2008)
7. Chen, B., Wang, W., Qin, Q.: Infrared target detection based on fuzzy ART neural network. In: The Second International Conference on Computational Intelligence and Natural Computing, pp. 240–243. IEEE Press, Wuhan (2010)
8. Lucia, M., Alfredo, P.: A self-organizing approach to background subtraction for visual surveillance applications. IEEE Transactions on Image Processing 17, 1168–1177 (2008)
9. Object Tracking and Classification in and Beyond the Visible Spectrum, http://www.cse.ohio-state.edu/otcbvs-bench/
10. Kim, K., Chalidabhongse, T.H., Harwood, D., et al.: Real-time foreground-background segmentation using codebook model. Real-Time Imaging 11, 172–185 (2005)
A Novel Method for Quantifying the Demethylation
Potential of Environmental Chemical Pollutants

Yan Jiang1,2,3 and Xianliang Wang2,*


1 College of Oceanography and Environmental Science, Xiamen University, Xiamen, Fujian 361005, People's Republic of China
2 State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing 100012, People's Republic of China
Tel.: +86-10-84916422; Fax: +86-10-84916422
xlwang@craes.org.cn
3 College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, People's Republic of China

Abstract. We developed a novel method to quantify the demethylation epigenetic toxicity of pollutants. A hyper-methylated pEGFP-C3 plasmid eukaryotic expression vector was constructed and used to evaluate the epigenetic toxicity of aquatic pollutant samples from the polluted coastal waters of Tianjin, China. The methylated pEGFP-C3 plasmid was transfected into HepG-2 cells and incubated with 5-aza-2′-deoxycytidine (5-aza-dC) at various concentrations. The HepG-2 cell line reporter gene vector was used to assess the epigenetic toxicity of heavy metal extracts from polluted marine waters and shellfish samples. The results indicated that the demethylation ability of 5-aza-dC at doses between 0.0008 and 0.1 µM could be quantitatively detected. Nine of the 19 aquatic samples showed strong demethylation ability, at values between 0.0064 and 0.0387 µM 5-AZA equivalents. A GFP reporter gene vector with a hyper-methylated CMV promoter was constructed, and a relatively sensitive response relationship between GFP gene expression and 5-AZA dose was observed, providing a novel method for quantifying the demethylation ability of pollutants.

Keywords: Demethylation, Epigenetic toxicity, Cellular test system, Enhanced green fluorescent protein, Aquaculture.

1 Introduction
Before chemical compounds can be used, evaluation of their safety is essential. Currently, assessment of damage to genetic material, such as DNA mutation, focuses mainly on detecting direct damage. However, many pollutants that may be deemed safe by existing safety evaluation protocols can have indirect effects. For example, pollutants may cause DNA methylation changes and activate other epigenetic mechanisms that lead to slight changes in long-term biological traits but do not cause genetic mutations, chromosomal aberrations, or other direct genetic damage [1; 2].
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 62–71, 2011.
© Springer-Verlag Berlin Heidelberg 2011

Low-dose pollutant exposure can cause DNA methylation changes, and long-term, mild, recurring methylation changes can initiate the process of damaging health. Thus, developing a tool to assess such indirect damage is essential [3; 4].
Currently, research on the epigenetic toxicity of environmental pollutants is in its infancy [5]. Some researchers have proposed the use of serum biochemical analysis, histopathological evaluation, and analysis of DNA methylation status in healthy animals treated with chronic and/or sub-chronic pollutant exposure in vivo [6; 7]. However, such a detection system has not yet been developed, and there is an urgent need to establish a stable and practical detection method to evaluate the epigenetic toxicity of pollutants [8; 9].
Our approach to quickly evaluating the epigenetic toxicity of pollutants was to use the non-toxic enhanced green fluorescent protein (EGFP) to build a screening system, which we call "quantifying the demethylation potential" (QDMP). The basic principles of the system are as follows. The CpG DNA methylase M.SssI was used to artificially modify the commercial green fluorescent protein plasmid (pEGFP-C3) in vitro so that the EGFP gene promoter was highly methylated [10]. The methylated EGFP plasmid was transfected into the human hepatoma cell line HepG-2, and successfully transfected HepG-2 cells were selected through several rounds of screening of plate cultures [11]. The selected HepG-2 cells containing the methylated EGFP plasmid were used as the target cells, and the known demethylating agents arsenate and hydralazine served as positive controls. Next, the heavy metals Cd and Ni, atmospheric particulate matter extracts, and contaminated shellfish sample extracts were tested in a gradient of doses using the screening system [12; 13; 14]. For each pollutant type, gene promoter methylation, cellular expression of EGFP mRNA, and the green fluorescence intensity of EGFP in cells were measured to establish a linear relationship between these parameters and pollutant dose. In this way, the level of EGFP gene promoter methylation was linked to the level of cellular green fluorescence intensity, so that the former could be calculated from the latter: the lower the level of methylation of the EGFP gene promoter in cultured cells, the stronger the green fluorescence [15]. When pollutants were co-cultured with HepG-2 cells transfected with hypermethylated pEGFP-C3, those with a strong DNA demethylation function produced significantly enhanced green fluorescence; such pollutants were considered to have low-methylation epigenetic toxicity [16]. In contrast, the lack of a significant increase in green fluorescence was indicative of weak DNA demethylation potential, and such compounds were deemed not to have low-methylation epigenetic toxicity (Fig. 1).

Fig. 1. Principle of detecting the demethylation epigenetic toxicity of chemical contaminants. A: positive test substance; B: control.
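The quantification step can be illustrated with a small calculation (purely a sketch of the idea, not the authors' protocol: the log-linear standard-curve fit and all names are our assumptions). Fluorescence is fitted against log dose for the 5-aza-dC standards, and a sample's fluorescence is then converted into a 5-AZA-equivalent concentration by inverting the fit:

```python
import math

def aza_equivalent(std_doses, std_fluor, sample_fluor):
    """Least-squares fit of fluorescence vs log10(dose) for the
    5-aza-dC standards, then inversion to a 5-AZA-equivalent dose."""
    xs = [math.log10(d) for d in std_doses]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(std_fluor) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, std_fluor)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    # invert fluor = slope*log10(dose) + intercept for the sample
    return 10 ** ((sample_fluor - intercept) / slope)
```

With a standard series spanning the 0.0008-0.1 µM working range reported above, a sample's green fluorescence reading maps directly to a 5-AZA-equivalent value in µM.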

2 Materials and Methods

2.1 Preparation of the Methylated C3 Plasmid

DNA (500 ng) digested with the restriction enzymes MspI and HpaII was treated with sodium bisulfite as previously reported (Gonzalgo, 2002; Xianliang, 2006) [17].

2.2 Cell Culture and C3 Plasmid Transfection

The human liver cancer cell line HepG-2 was purchased from the China Type Culture Collection (Chinese Academy of Medical Sciences, Beijing, China) and grown in DMEM supplemented with 10% fetal bovine serum (FBS; JRH Bioscience, San Antonio, TX, USA) and 1% NEAA (Chinese Academy of Medical Sciences).
The pEGFP-C3 plasmid, carrying a 599-bp CMV promoter, was transfected into HepG-2 cells using the FuGENE HD transfection reagent (F. Hoffmann-La Roche Ltd, Basel, Switzerland). The reagent (3, 4, 5, 6, or 7 µl) was pipetted directly into the medium containing the diluted pEGFP-C3 DNA (0.02 µg/µl) without allowing contact with the walls of the plastic tubes. The transfection reagent:DNA complex was incubated for 15 min at room temperature, and then the transfection complex was added to the cells drop-wise. The wells were swirled to ensure distribution over the entire plate surface. Cell growth was best and the fluorescence brightest when 7 µl of the transfection reagent was used.

2.3 Treatment with 5-aza-dC

Cells were seeded at a density of 3×10⁵ cells/10 cm dish on day 0. On day 1, the
medium was changed to one containing 5-aza-dC (Amresco Co. Ltd., USA), which was
freshly dissolved in PBS and filtered through a 0.2 µm filter. On day 4,
cells were harvested and genomic DNA was obtained by serial extraction with
phenol/chloroform and ethanol precipitation. Total RNA was extracted with an ISOGEN
kit (QIAGEN Co. Ltd., Germany). HepG-2 cells, into which the hypermethylated
EGFP-C3 plasmid gene had been introduced by homologous recombination, were
exposed to 37 °C for 6 h on days 2, 3, and 4 because this treatment effectively
induced higher expression of hypermethylated EGFP-C3 mRNA.

2.4 Quantification of Methylation of the CMV Promoter

Competent bacteria were prepared using E. coli strain DH5α. After
the PCR product was purified, it was ligated into the pGEM-T vector, transformed
into the competent bacteria, and screened by blue-white selection. The product was
separated by 1.5% agarose gel electrophoresis and photographed using a gel imaging
system camera. Positive clones were screened by PCR amplification, cultured with
shaking at 37 °C, and confirmed by sequencing with the universal primers T7 and
SP6 at the Beijing Genomics Institute of Bacteria Biotechnology
Company.
A Novel Method for Quantifying the Demethylation Potential 65

2.5 Relative Quantification of CMV Promoter by Real-Time PCR


To determine the quantitative PCR amplification efficiency, a PCR standard curve
was generated. Extracted cDNA was serially diluted to 1/8, 1/4, 1/2, 1, and 1.5
times the cDNA content of the target and actin genes, and these samples were
analyzed with the SYBR Premix Ex Taq quantitative PCR kit (TaKaRa Ltd., Tokyo,
Japan). Each 12.5 µl reaction contained diluted cDNA (1 µl, or 1.5 µl for the
1.5× dilution), SYBR quantitative PCR reagent mixture (6.25 µl), the upstream and
downstream primer mixture (0.25 µl), and sterilized ultrapure water to a final
volume of 12.5 µl.
The PCR reaction conditions were as follows: predenaturation at 94 °C for 10 s;
denaturation at 94 °C for 45 s; annealing at 57 °C for 45 s; and extension at 72 °C
for 45 s. After amplification, a melting curve (95 °C for 1 min) was generated to
determine the specificity of the amplification product.
Real-time PCR analysis was performed using a Stratagene MX3100 (USA).
Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used to normalize the
mRNA expression levels. Primer sequences were as follows: EGFP-C3, 5'-
TAATGGGAGTTTGTTTTGGTATT-3' (sense), 5'-TTATACTCCAACTTATAC
CCCAAAA-3' (antisense); and GAPDH, 5'-AGGTGAAGGTCGGAGTCAACG-3'
(sense), 5'-AGGGGTCATTGATGGCAACA-3' (antisense). For both genes, PCR
was performed for 40 cycles with annealing at 60 C. Primer sequences and PCR
conditions for other genes have been reported.
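The GAPDH-normalized relative quantification described above is commonly computed with the 2^(−ΔΔCt) method. A minimal Python sketch follows; the Ct values are purely illustrative, since the paper reports no raw Ct numbers:

```python
# Sketch of relative mRNA quantification by the common 2^-ddCt method,
# normalizing the target gene (e.g. EGFP) against the GAPDH reference.
# All Ct numbers below are hypothetical, not measured values from the study.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return the fold change of the target gene vs. an untreated control."""
    d_ct_sample = ct_target - ct_ref              # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: target amplifies 3 cycles earlier after treatment -> 8-fold increase.
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_ctrl=27.0, ct_ref_ctrl=18.0)
print(fold)  # 8.0
```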

2.6 Detection of EGFP Protein by Fluorescent Imaging and Flow Cytometry


Prior to flow cytometry and fluorescence imaging analysis, cells were washed with
PBS and fixed with 4% paraformaldehyde. Cells were assayed using a Beckman
Coulter Altra flow cytometer. Cells were seeded in 24-well plates. After digestion,
each sample was resuspended in 1 ml PBS by gentle pipetting, transferred to a 1.5 ml
tube, and centrifuged at 2000 g to remove the supernatant. Next, 1 ml PBS was added,
followed by washing and centrifugation. The cell pellet was resuspended in 1 ml PBS,
gently pipetted until uniform, and passed through a 300-mesh filter into flow
cytometry tubes; the percentage of EGFP-positive cells was then detected by flow
cytometry.
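The EGFP-positive percentage reported by the cytometer amounts to gating per-cell fluorescence on a threshold. A minimal sketch, where both the intensities and the threshold are hypothetical:

```python
# Minimal sketch of computing the EGFP-positive percentage from per-cell
# fluorescence intensities, as a flow-cytometry gate would. The intensity
# values and the gating threshold below are hypothetical.

def egfp_positive_percent(intensities, threshold):
    positive = sum(1 for x in intensities if x > threshold)
    return 100.0 * positive / len(intensities)

cells = [5, 12, 30, 250, 480, 900, 7, 15]   # arbitrary fluorescence units
print(egfp_positive_percent(cells, threshold=100))  # 37.5
```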

2.7 Analysis of Environmental Samples Using the QDMP


Fish and shellfish samples were collected from the coast of Tianjin and washed, dried,
and digested as previously reported (C. Brunori 2006) [18]. Eleven sampling sites
were laid out from north to south along the Tianjin coast: two seawater aquaculture
bases, seven marine capture fishing ports, and two seafood wholesale markets, all of
which are important local seafood supply points. Eight species of fish and shellfish
commonly eaten by local residents were collected, forty samples in total, with
individuals of the same species at each site matched in size as closely as possible.
For each sample type, the
digestion solution was transferred carefully to a 1.5 ml polypropylene centrifuge tube.
The sample was adjusted to a concentration of 1 g/L, and the test liquid was assayed
for HepG-2 cell EGFP green fluorescence using the QDMP.

3 Results and Discussion


3.1 Preparation of Methylated C3 Plasmid
Unmethylated plasmids digested with HpaII or MspI and methylated
plasmids digested with MspI exhibited a "smearing" pattern on gels, whereas
methylated plasmids digested with HpaII showed a single band, indicating that the
treated plasmid was in a methylated state.

3.2 Exposure Optimization


To establish a sensitive detection system, we first searched for an endogenous
methylated CMV promoter in HepG2 cells that would show a sensitive response to
5-aza-dC. The CMV promoter of pEGFP-C3 was found to be methylated in HepG2
cells, and no mRNA expression was detected by RT-PCR before the addition of
5-aza-dC. By treating cells with very low doses (0.02–0.1 µM) of 5-aza-dC for 30 h,
expression of pEGFP-C3 was restored. Next, expression levels were quantitatively
analyzed by quantitative real-time RT-PCR, and CMV of pEGFP-C3 was found to be
sensitively and abundantly expressed.

3.3 The Demethylation Potential of 5-aza-dC and Its Effect on EGFP Expression

Methylation of the CMV promoter. The sequencing results were compared with
the sequence of the C3 plasmid EGFP gene CpG-island DNA from the
GenBank database. Our sequence showed no mutations, and the methylated CpG
cytosines were not converted by sodium bisulfite; these results indicated that the
CpG islands of the C3 plasmid EGFP gene promoter in the HepG2 cell line were
methylated. Figure 2 shows

Fig. 2. Representative sequencing results of the sodium bisulfite-treated group and the
untreated group. Note: of the nine CG target sites in the sequence shown, three are
methylated and six are unmethylated; the lower chart shows the methylation status of
all nine loci.

a representative sequence diagram and methylation pattern, which show that
construction of the methylated plasmid was successful: after methylation treatment,
90.4% of the CG sites in the plasmid EGFP gene promoter were methylated,
quantitatively confirming the hypermethylated status of the EGFP gene promoter.
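A per-site tally of this kind (e.g. 90.4% of CG sites methylated) can be sketched by comparing a bisulfite read against the reference and counting CpG cytosines that remain C. The sequences below are toy examples, not the actual promoter:

```python
# Sketch: estimating promoter methylation from bisulfite sequencing.
# Unmethylated cytosines convert to T; methylated CpG cytosines stay C.
# The sequences here are toy examples, not the actual pEGFP-C3 promoter.

def cpg_methylation_percent(reference, bisulfite_read):
    assert len(reference) == len(bisulfite_read)
    total = methylated = 0
    for i in range(len(reference) - 1):
        if reference[i:i + 2] == "CG":        # a CpG site in the reference
            total += 1
            if bisulfite_read[i] == "C":      # cytosine retained -> methylated
                methylated += 1
    return 100.0 * methylated / total

ref = "ACGTTCGATCGA"
read = "ACGTTTGATCGA"   # second CpG converted (C->T) -> unmethylated
print(cpg_methylation_percent(ref, read))  # ~66.7 (2 of 3 CpG sites methylated)
```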

Relative quantification of EGFP expression. In the real-time quantitative PCR
results, the fluorescence curves of all standards rose significantly within 22 cycles,
showing that every standard amplified well. The fluorescence signal was stable, and
a moderate elevation gradient was observed (Fig. 3). Real-time quantitative PCR
showed a good dose-response relationship between relative EGFP expression and
the 5-aza-dC dose: y = 37.022x + 0.3087; R² = 0.9821.
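A fit of this form can be recovered by ordinary least squares. The sketch below uses synthetic points placed exactly on the reported line, so it illustrates the fitting procedure rather than reproducing the study's data:

```python
# Sketch of the dose-response fit reported above (y = 37.022x + 0.3087,
# R^2 = 0.9821): ordinary least squares on dose vs. relative EGFP expression.
# The data points below are synthetic, chosen only to illustrate the fit.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

doses = [0.0008, 0.004, 0.02, 0.1]             # uM 5-aza-dC (synthetic)
expr = [37.022 * d + 0.3087 for d in doses]    # perfectly linear toy data
slope, intercept, r2 = linear_fit(doses, expr)
print(round(slope, 3), round(intercept, 4), round(r2, 3))  # 37.022 0.3087 1.0
```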

Fig. 3. Real-time quantitative PCR detection of cell sample results and dissolution curves of
PCR products

Fluorescence of the EGFP protein. An obvious fluorescence gradient was detected
in samples cultured with different doses of 5-aza-dC for 30 h; as the 5-aza-dC dose
increased, the fluorescence intensity also increased. Flow cytometry results revealed a

good linear relationship between the 5-aza-dC dose gradient and the mean
fluorescence intensity: y = 10.402x + 6.0334; R² = 0.829. This result shows that
eukaryotic cells containing the green fluorescent protein gene vector exhibit a
methylation-sensitive response relationship, so the intensity of cellular green
fluorescence can be used as an indicator of the presence of certain pollutants [18].

3.4 Performance of the QDMP

The sensitivity of the assay system was tested by adding various doses of 5-aza-dC.
Demethylation of the CMV promoter of pEGFP-C3 was clearly observed at doses of
0.1 µM or higher, and the fluorescence intensity was induced in a dose-dependent
manner. We also examined the appearance of green fluorescence before and after the
addition of 5-aza-dC: under a fluorescence microscope, significant fluorescence was
observed in the cytoplasm after the addition of 0.0008 µM 5-aza-dC.

Fig. 4. Green fluorescent flow cytometry results



3.5 The Application of QDMP for Aquatic Sampling

For the detection of aquatic sample mixtures, demethylation capacity was calculated
from the fluorescence flow-cytometry regression equation described above. Eight
samples were positive; the demethylation potential of samples from Tanggu port,
Tianjin, was the strongest, equivalent to about eight times the minimum effective
5-aza-dC dose.
The epigenetic toxicity of polluted marine samples collected from the coast of
Tianjin was evaluated based on the HepG-2 cell line reporter gene vector (i.e., the
QDMP). The heavy metal extracts from aquatic samples were prepared and then co-
cultured with the test system. The demethylation potential of the samples was
quantified relative to the corresponding equivalent of 5-aza-dC. EGFP fluorescence
was quantified using microscopy. Demethylation by 5-aza-dC could be successfully
detected in a quantitative manner at doses between 0.0008 and 0.1 µM. Nine of the 19
aquatic samples had a relatively strong demethylation ability, with values ranging
from 0.0064 to 0.0387 µM 5-aza-dC equivalents (Fig. 4).
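Expressing a sample's fluorescence as a 5-aza-dC equivalent amounts to inverting the flow-cytometry calibration line y = 10.402x + 6.0334 given earlier. A minimal sketch with a hypothetical fluorescence reading:

```python
# Sketch of converting a sample's mean green-fluorescence intensity into a
# 5-aza-dC dose equivalent by inverting the flow-cytometry calibration
# y = 10.402x + 6.0334 (x: 5-aza-dC dose, y: mean fluorescence intensity).

SLOPE, INTERCEPT = 10.402, 6.0334

def aza_equivalent(mean_fluorescence):
    """Invert the calibration line to express fluorescence as a dose."""
    return (mean_fluorescence - INTERCEPT) / SLOPE

# A hypothetical sample reading of 6.4 fluorescence units:
print(round(aza_equivalent(6.4), 4))  # 0.0352 (uM 5-aza-dC equivalent)
```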

4 Conclusions

A system to assay for demethylating agents was established using the promoter CMV
of the plasmid pEGFP-C3. To our knowledge, this is the first assay system that uses
an endogenous CMV promoter. Methylation stability is high for endogenous
sequences, and CpG islands (CGIs) in promoter regions have higher fidelity than those
outside [19]. Mechanisms involved in the maintenance and monitoring of the
methylated status of CGIs are expected to function well for the promoter CMV of the
plasmid pEGFP-C3. Therefore, accurate estimation of epimutagens should be possible
with this system.
To establish a sensitive detection system, it was necessary to use a methylated
promoter CMV that responds to low doses of demethylating agents (i.e., 5-aza-dC). In
this study, we used the CMV promoter of the plasmid pEGFP-C3 because our recent
studies [20] identified it as one that met the requirements. The CMV promoter was
demethylated by 5-aza-dC at doses as low as 0.01 µM in parental HepG2 cells. This
represents high sensitivity, considering that laboratory use of 5-aza-dC is between 0.1
and 10 µM. Demethylation of the CMV promoter and expression of the introduced
EGFP were observed after addition of 5-aza-dC at doses of 0.1 µM or higher.
Fluorescence of the EGFP product was detected by fluorescence microscopy after
addition of 1 µM 5-aza-dC. In summary, we established a detection system for demethylating agents
using an endogenous promoter CMV; this system is expected to allow accurate
detection of epimutagens.

Acknowledgements. This work was supported by the National Nature Science
Foundation of China (No. 20907047) and a National Nonprofit Institute Research Grant
of CRAES (No. 2008KYYW05).

References

1. Eriksen, T.A., Kadziola, A., Larsen, S.: Binding of cations in Bacillus subtilis
phosphoribosyldiphosphate synthetase and their role in catalysis. Protein Sci. 11, 271–279
(2002)
2. Zoref, E., Vries, A.D., Sperling, O.: Mutant feedback-resistant
phosphoribosylpyrophosphate synthetase associated with purine overproduction and gout.
Phosphoribosylpyrophosphate and purine metabolism in cultured fibroblasts. J. Clin.
Invest. 56, 1093–1099 (1975)
3. Becker, M.A., Smith, P.R., Taylor, W., Mustafi, R., Switzer, R.L.: The genetic and
functional basis of purine nucleotide feedback-resistant phosphoribosylpyrophosphate
synthetase superactivity. J. Clin. Invest. 96, 2133–2141 (1995)
4. Reichard, J.F., Schnekenburger, M., Puga, A.: Long term low-dose arsenic exposure
induces loss of DNA methylation. Biochem. Biophys. Res. Commun. 352, 188–192 (2007)
5. Olaharski, A.J., Rine, J., Marshall, B.L., et al.: The flavoring agent dihydrocoumarin
reverses epigenetic silencing and inhibits sirtuin deacetylases. PLoS Genet. 1(6), e77
(2005)
6. Birnbaum, L.S., Fenton, S.E.: Cancer and developmental exposure to endocrine disruptors.
Environ. Health Perspect. 111, 389–394 (2003)
7. Salnikow, K., Zhitkovich, A.: Genetic and epigenetic mechanisms in metal carcinogenesis
and cocarcinogenesis: nickel, arsenic, and chromium. Chem. Res. Toxicol. 21, 28–44
(2008)
8. Tang, W.Y., Newbold, R., Mardilovich, K., et al.: Persistent hypomethylation in the
promoter of nucleosomal binding protein 1 (Nsbp1) correlates with overexpression of
Nsbp1 in mouse uteri neonatally exposed to diethylstilbestrol or genistein.
Endocrinology 149, 5922–5931 (2008)
9. Reik, W., Dean, W., Walter, J.: Epigenetic reprogramming in mammalian development.
Science 293, 1089–1093 (2001)
10. Bombail, V., Moggs, J.G., Orphanides, G.: Perturbation of epigenetic status by toxicants.
Toxicol. Lett. 149, 51–58 (2004)
11. Feil, R.: Environmental and nutritional effects on the epigenetic regulation of genes.
Mutat. Res. 600, 46–57 (2006)
12. Wu, C., Morris, J.R.: Genes, genetics, and epigenetics: a correspondence. Science 293,
1103–1105 (2001)
13. Feinberg, A.P., Ohlsson, R., Henikoff, S.: The epigenetic progenitor origin of human
cancer. Nat. Rev. Genet. 7, 21–33 (2006)
14. Suzuki, M.M., Bird, A.: DNA methylation landscapes: provocative insights from
epigenomics. Nat. Rev. Genet. 9, 465–476 (2008)
15. Barreto, G., Schaefer, A., Marhold, J., et al.: Gadd45a promotes epigenetic gene activation
by repair-mediated DNA demethylation. Nature 445, 671–675 (2007)
16. Wade, P.A., Archer, T.K.: Epigenetics: environmental instructions for the genome.
Environ. Health Perspect. 114, A140–A141 (2006)
17. Schmelz, K., Sattler, N., Wagner, M., et al.: Induction of gene expression by 5-aza-2′-
deoxycytidine in acute myeloid leukemia (AML) and myelodysplastic syndrome (MDS)
but not epithelial cells by DNA-methylation-dependent and -independent mechanisms.
Leukemia 19, 103–111 (2005)
18. Olaharski, A.J., Rine, J., Marshall, B.L., et al.: The flavoring agent dihydrocoumarin
reverses epigenetic silencing and inhibits sirtuin deacetylases. PLoS Genet. 1, e77 (2005)

19. Appanah, R., Dickerson, D.R., Goyal, P., et al.: An unmethylated 3′ promoter-proximal
region is required for efficient transcription initiation. PLoS Genet. 3, e27 (2007)
20. Okochi-Takada, E., Ichimura, S., Kaneda, A., et al.: Establishment of a detection system
for demethylating agents using an endogenous promoter CpG island. Mutat. Res. 568,
187–194 (2004)
21. Wang, X., et al.: High-throughput assay of DNA methylation based on methylation-
specific primer and SAGE. Biochem. Biophys. Res. Commun. 341, 749–754 (2006)
22. Brunori, C., Ipolyi, I., Massanisso, P., Morabito, R.: New trends in sample preparation
methods for the determination of organotin compounds in marine matrices. Handbook of
Environmental Chemistry, Part O(5), 51–70 (2006)
23. Barreto, G., Schaefer, A., Marhold, J., et al.: Gadd45a promotes epigenetic gene activation
by repair-mediated DNA demethylation. Nature 445, 671–675 (2007)
24. Cheetham, S., Tang, M.J., Mesak, F., et al.: SPARC promoter hypermethylation in
colorectal cancers can be reversed by 5-aza-2′-deoxycytidine to increase SPARC
expression and improve therapy response. Br. J. Cancer 98, 1810–1819 (2008)
Study of Quantitative Evaluation of the Effect of Prestack
Noise Attenuation on Angle Gather

Junhua Zhang, Jing Wang, Xiaoteng Liang,


Shaomei Zhang, and Shengtao Zang

College of Geo-Resource and Information, China University of Petroleum,


Qingdao, 266555

Abstract. With the increasing refinement of exploration activity, the


information of different angles has to be used in the extraction of prestack
attributes and prestack inversion. For large-angle gathers, denoising methods
should differ from the conventional prestack noise attenuation methods used
in CSP and CMP gathers. We established a specific theoretical model and
quantitatively studied the effect of prestack noise attenuation on angle gathers.
Through this research, we draw the following conclusions: 1) The anti-noise
ability of angle-gather stacking is much better than that of CMP stacking. 2)
The larger the apparent velocity of coherent noise, the more serious its effect
on the angle gather, and vice versa. 3) Surface waves have strong energy and
low frequency, and even a small amount of residue affects the angle gather
considerably. After suppressing surface waves, there is energy loss in the near-
angle gather, which should be compensated.

Keywords: angle gather, random noise, coherent noise, surface wave,


quantitative evaluation of noise attenuation.

1 Introduction
Seismic data preserved-amplitude processing is crucial to achieve reliable prestack
characteristic and prestack inversion results. Although the importance of AVO
forward and inversion technique has been shown by Jonathan E et al. (2006) [1] and
Heidi Anderson Kuzma et al. (2005) [2], AVO analysis can show false anomalous
responses because of shortcomings in prestack noise attenuation and energy
compensation methods, which easily lead to energy inconsistencies between near and
far offsets. The subsequent inversion and attribute analysis then suffer from multiple
solutions, so the inversion results cannot truly reflect subsurface lithology and
physical-property changes, which is detrimental to lithologic inversion, reservoir
prediction, and fluid discrimination; see, for example, Gary Mavko et al. (1998) [3],
Yinbin Liu et al. (2003) [4], and Fatti, J.L. et al. (1994) [5].
The information of different angles has to be used in the extraction of prestack
attributes and prestack inversion. For large-angle gathers, because of the characteristics
of angle gathers and the combined impact of oil/gas AVO responses, random noise, and
coherent noise, the denoising methods may differ from the conventional methods used
in CSP and CMP gathers. As real data are complex and direct quantitative evaluation is not

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 72–77, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Study of Quantitative Evaluation of the Effect of Prestack Noise Attenuation 73

easy to achieve, it is both necessary and feasible to establish a specific theoretical
model and quantitatively study the effect of prestack noise attenuation on angle
gathers. Below we carry out a quantitative evaluation for the random noise, coherent
noise, and surface waves that commonly occur in real data.

2 Quantitative Evaluation of the Effect of Random Noise


Attenuation on Angle Gathers
We established a horizontal layered model (Figure 1); the left panel shows the velocity
model and the right panel the shot gather. The model has 7 layers, and the fourth layer is
used for quantitative evaluation. The total number of traces is 241, the number of
sampling points is 3001, the sampling interval is 2 ms, and the trace offset is 50 m. In
building the model, we applied a time-varying wavelet whose frequency decreases from
the first layer to the seventh. Based on this model, we added random noise with signal-
to-noise ratios (S/N) of 3, 2, 1, and 0.5, and then quantitatively evaluated the effect
of random noise attenuation. By experiment, we found that the results for S/N of 3
and 2 were similar to those for S/N of 1. In this paper, for reasons of space, we
discuss the S/N = 1 case directly.
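Adding random noise at a prescribed S/N can be sketched as follows. Taking S/N as the ratio of RMS amplitudes is an assumption, since the paper does not state its convention:

```python
# Sketch of adding random noise at a prescribed signal-to-noise ratio to a
# synthetic trace, where S/N is taken as the ratio of RMS amplitudes (one
# common convention; the paper does not state its exact definition).

import math
import random

def add_noise(trace, snr, seed=0):
    rng = random.Random(seed)
    rms_signal = math.sqrt(sum(s * s for s in trace) / len(trace))
    noise = [rng.gauss(0.0, 1.0) for _ in trace]
    rms_noise = math.sqrt(sum(n * n for n in noise) / len(noise))
    scale = rms_signal / (snr * rms_noise)   # force rms(signal)/rms(noise) = snr
    return [s + scale * n for s, n in zip(trace, noise)]

clean = [math.sin(0.1 * i) for i in range(3001)]  # stand-in for a 3001-sample trace
noisy = add_noise(clean, snr=1.0)
```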

Fig. 1. Horizontal layer model. Left) Velocity model. Right) Shot gather.

Figure 2, left, shows the CMP gather before and after NMO and the angle gathers
without random noise and with noise at S/N of 1 and 0.5. While S/N
is 1, random noise affects the events only slightly and the AVO phenomenon can be
clearly observed. After attenuating the noise, the small amount of residual random
noise has little effect on the events. While S/N is 0.5, random noise affects the events
severely and we cannot observe the AVO phenomenon. After attenuating the noise, a
large amount of residual random noise still affects the angle gather considerably.
In order to see the changes in the angle gather more clearly, we selectively stacked the
1–13, 13–26, and 27–39 degree ranges of the angle gather into one trace each and
repeated each five times (Figure 3). When S/N is 1, the AVO phenomenon of small
reflection coefficients is already difficult to distinguish visually; the middle-angle
information in the deeper formations is largely polluted, and the far-angle information
is also greatly affected. After attenuating the noise, the middle-angle gather in the
deeper formations has been
74 J. Zhang et al.

clearly improved, but the distortion of the contaminated weak signal slightly
increased. When S/N is 0.5, only strong reflection information can be distinguished;
the far-, middle-, and near-angle gathers are all heavily polluted. After
attenuating the noise, the angle-gather section is greatly improved overall, but the
contaminated events of the far-, middle-, and near-angle gathers are not fully recovered.
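The selective stacking above can be sketched as a partial stack over angle ranges. The gather and amplitudes below are toy data:

```python
# Minimal sketch of partial stacking over angle ranges: traces of an angle
# gather are grouped into ranges (1-13, 13-26, 27-39 degrees here) and each
# group is averaged into a single stacked trace. Data are toy values.

def partial_stack(gather, angles, lo, hi):
    """Stack (average) all traces whose angle lies in [lo, hi]."""
    selected = [t for t, a in zip(gather, angles) if lo <= a <= hi]
    n = len(selected)
    return [sum(samples) / n for samples in zip(*selected)]

# Toy gather: one trace per degree from 1 to 39, three samples each.
angles = list(range(1, 40))
gather = [[a, 2 * a, 3 * a] for a in angles]     # dummy amplitudes
near = partial_stack(gather, angles, 1, 13)
print(near)  # [7.0, 14.0, 21.0] -- the mean angle of 1..13 is 7
```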

Fig. 2. Stacking angle gather. From left to right: without random noise; before attenuating
random noise (S/N = 1); after attenuating random noise (S/N = 1); before attenuating random
noise (S/N = 0.5); and after attenuating random noise (S/N = 0.5).

Fig. 3. Stacking angle gather. From left to right: without random noise; before attenuating
random noise (S/N = 1); after attenuating random noise; before attenuating random
noise (S/N = 0.5); and after attenuating random noise.

Fig. 4. Comparison of AVA curve of the fourth layer. Left) S/N=1. Right) S/N=0.5.

Figure 4 shows the comparison of the AVA curves of the fourth layer when S/N is 1 and
0.5. We find that as the signal-to-noise ratio decreases, the jitter becomes increasingly
serious and the reflection coefficient exhibits very significant jumps.

3 Quantitative Evaluation of the Effect of Coherent Noise


Attenuation on Angle Gathers
Based on the horizontal layered model (Figure 1), we added coherent noise to carry out
a quantitative evaluation (Figure 5). Figure 5, left, shows that the coherent noise
crosses the effective signal to a certain degree and that its apparent velocity changes
little on the angle-gather section. The right panel shows that the coherent noise has
almost been eliminated, with only a small amount remaining because of edge effects.
Figure 5, right, also shows that the in-phase character of the coherent noise has
almost disappeared, although some glitches occur on the stacked angle-gather section.
After attenuating the coherent noise, its effect on the angle-gather section is small,
but residuals remain locally, especially in the large-angle traces.

Fig. 5. From left to right: CMP gather after NMO and angle gather before and after attenuating
coherent noise, stacking angle gather before and after attenuating coherent noise

Fig. 6. Comparison of AVA curve of the fourth layer



Figure 6 shows the comparison of the AVA curves of the fourth layer before and after
attenuating the coherent noise. We can clearly observe the main interference points of
the coherent noise, where the reflection coefficient shows very significant jumps.

4 Quantitative Evaluation of the Effect of Surface Wave


Attenuation on Angle Gathers
We carried out finite-difference surface wave simulation with our own code. A free-
surface boundary condition for the numerical simulation of Rayleigh waves is built
using 2–10 order high-precision staggered-grid finite differences with the source at
the free surface. A PML absorbing boundary condition is applied to eliminate boundary
reflections, as shown in Xu Yixian et al. (2007) [6] and Saenger, E.H. et al. (2004) [7].
We then added the surface waves to the horizontal layered model (Figure 1) and
obtained the results shown in Figure 7.
From Figure 7 we can see that the surface waves have almost been eliminated, with
only a small amount remaining. The right panel shows that surface waves strongly
affect the angle gather because of their strong energy: the events in the near-angle
gather can hardly be distinguished, while the middle- and far-angle gathers are less
affected. After attenuating the surface waves, their effect on the angle-gather section
is small. However, because surface waves have strong energy and low frequency, even
a small amount of residue affects the angle gather considerably, especially at near
offsets.
Figure 8 shows the comparison of the AVA curves of the fourth layer. We find that
the effect of the surface waves occurs mainly at near offsets and that the reflection
coefficient shows a slight jump after attenuation.

Fig. 7. From left to right: CMP gather after NMO and angle gather before and after attenuating
surface wave, stacking angle gather before and after attenuating surface wave

Fig. 8. Comparison of AVA curve of the fourth layer



5 Conclusions
Through the analysis, we draw the following conclusions:
1) By adding random noise of different signal-to-noise ratios and attenuating it,
we find that the anti-noise ability of angle-gather stacking is much better than that
of stacking CMP gathers. For the original angle gather, the threshold signal-to-noise
ratio is 1; for angle-gather stacking, the minimum usable signal-to-noise ratio can be
as low as 0.5.
2) By adding coherent noise and attenuating it, we find that the effect of coherent
noise on the angle gather is smaller than that on the CMP gather. After attenuating
the coherent noise, the stacked angle gather improves and only a small amount of
high-frequency glitches remain. The larger the apparent velocity of the coherent
noise, the more serious its effect on the angle gather.
3) From attenuating surface waves, we find that surface waves have strong energy
and low frequency, so even a small amount of residue affects the angle gather
considerably. As surface waves occur mainly at near offsets, their effect on near
offsets is greater than on far offsets. After suppressing surface waves, there is energy
loss in the near-angle gather, which should be compensated.

References
1. Downton, J.E., Ursenbach, C.: Linearized amplitude variation with offset (AVO) inversion
with supercritical angles. Geophysics 71(5), E49–E55 (2006)
2. Kuzma, H.A., Rector, J.W.: The Zoeppritz equations, information theory and support vector
machines. In: SEG/Houston 2005 Annual Meeting, pp. 1701–1705 (2005)
3. Mavko, G., Mukerji, T.: A rock physics strategy for quantifying uncertainty in common
hydrocarbon indicators. Geophysics 63(6), 1997–2008 (1998)
4. Liu, Y., Schmitt, D.R.: Amplitude and AVO responses of a single thin bed.
Geophysics 68(4), 1161–1168 (2003)
5. Fatti, J.L., Vail, P.J., Smith, G.C., et al.: Detection of gas in sandstone reservoirs using
AVO analysis: A 3-D seismic case history using the Geostack technique.
Geophysics 59(5), 1362–1376 (1994)
6. Xu, Y., Xia, J., Miller, R.D.: Numerical investigation of implementation of air-earth
boundary by acoustic-elastic boundary approach. Geophysics 72(5), 147–153 (2007)
7. Saenger, E.H., Bohlen, T.: Finite difference modeling of viscoelastic and anisotropic wave
propagation using the rotated staggered grid. Geophysics 69, 583–591 (2004)
A User Model for Recommendation Based on Facial
Expression Recognition

Quan Lu1, Dezhao Chen2, and Jiayin Huang2

1
Center for Studies of Information Resources,
Wuhan University, Wuhan, China
2
Research Center for China Science Evaluation,
Wuhan University, Wuhan, China
mrluquan@sina.com, chendezhao0023@163.com, hjy1120@126.com

Abstract. User modeling is crucial in recommendation systems. By analyzing
the user's behavior and gathering the user's interest information, the user model
can express the user's needs well, and the user's emotion is also a good
expression of those needs. We build a user model based on facial expression
recognition. It extends the traditional user model with an affective function that
reflects the user's emotion. The number of user model updates t is also
discussed in the model, followed by a discussion of the updating strategy and
its application in recommendation.

Keywords: user model, recommendation, facial expression recognition, emotion


recognition.

1 Introduction

User modeling is crucial in personalized services such as recommendation systems.
A personalized service provides the necessary information to the user who needs it, and
the user model expresses the user's needs.
User modeling first requires obtaining the user's characteristics, such as the user's
profile and browsing and clicking behavior. Researchers have studied user modeling
and proposed several methods. The topic method uses a set of topics to express the
user's interest and has been applied in the My Yahoo system [1]. Bookmarks and
keywords are also used to express the user's interest; systems such as SiteSeer [2] and
WebWatcher [3] use this method to recommend information of interest to users. To
capture the user's semantic information, ontology technology has been adopted in user
models: researchers build domain ontologies to mine the user's interests and track
interest drift [4].
The user's interest shifts as the environment changes, and the user model should
reflect this dynamic interest. Document [5] builds a three-level hierarchical user model
and uses a weight function to update the user's interest information.
The user's emotion may also express the user's interest, but current user models do
not contain emotion. A user model based on facial expression recognition is built in this
paper. This model avoids the error of assuming a user is interested in information
merely because he clicked it.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 78–82, 2011.
© Springer-Verlag Berlin Heidelberg 2011
A User Model for Recommendation Based on Facial Expression Recognition 79

Section 1 introduces this paper; Section 2 presents the user model; Section 3 gives the
method for building the user model, including the t and α values; Section 4 introduces
the updating mechanism of this model; and Section 5 summarizes this
paper.

2 User Modeling Based on Facial Expression Recognition

When a user browses information on a topic, he may react to it: he may click it or
store it, and his facial expression may also reflect his interest. In the user
model, emotion is important for obtaining the user's interest. The user's emotion can
be recognized and expressed: document [6] has researched affective visualization, and
document [7] has built an affective model for emotion. The user model built in this
paper is combined with such an affective model, and the user's emotion is added to
the model.
This user model can be described as:

U = {T, C, W, t, α} (1)

Ti = {(C, W), ti, αi} (2)

(C, W) = {(C1, W1), (C2, W2), ..., (Cn, Wn)} (3)

Formula (1) describes the elements of the model. In this model, T is the set of topics
the user is interested in; C represents the characteristics of a topic; W represents the
weights related to the characteristics; t represents the number of times the user model
changes; and α is the emotional sign of the interest degree, computed by an affective
function that recognizes the user's emotion from facial expressions. Every topic Ti
has a series of (C, W) pairs together with ti and αi.

Fig. 1. Continuous emotion space (horizontal axis: very negative to very positive; vertical axis: very passive to very active)


80 Q. Lu, D. Chen, and J. Huang

The result computed by the affective function is based on a coordinate that decides the emotion of the expression. The user's emotion is continuous and can be classified along two dimensions [8], as Figure 1 shows, so any emotion may be mapped into this space.
In this model, it is not necessary to recognize the specific emotion, only its degree of positivity or negativity, because a recommendation system just needs to know whether the user is interested in the topic and to what degree. The emotion is therefore mapped onto the evaluation dimension: 0 expresses a very negative emotion, 1 a very positive one, and 0.5 represents neutral.

3 User Model Building


To build the user model, topics come from the information the user browsed, such as the web pages the user clicked. Every topic has some characteristics and related weights, as Formulas (2) and (3) show. Ci represents a characteristic of the topic, is decided by statistics, and can also be expressed by an ontology. Wi represents the weight of the characteristic and reflects the importance of Ci. Wi is decided by the TF/IDF formula:

TFIDF(ti) = TF(ti) * log(n / DF(ti)) (4)

where TF(ti) is the number of times item ti occurs in the document, DF(ti) is the number of documents containing ti, and n is the total number of documents.
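As a concrete illustration of formula (4), the sketch below computes the Wi weights for one document over a tiny tokenized corpus; the function name and the toy corpus are hypothetical, and a real system would use a proper term extractor.

```python
import math
from collections import Counter

def tfidf_weights(doc_tokens, corpus):
    """Weight W_i for each characteristic term of one document,
    following formula (4): TFIDF(t) = TF(t) * log(n / DF(t))."""
    n = len(corpus)                      # n: total number of documents
    tf = Counter(doc_tokens)             # TF(t): occurrences of t in this document
    df = Counter()                       # DF(t): number of documents containing t
    for doc in corpus:
        df.update(set(doc))
    return {t: tf[t] * math.log(n / df[t]) for t in tf}

corpus = [["football", "match"], ["football", "news"], ["stock", "news"]]
w = tfidf_weights(corpus[0], corpus)
# "football" occurs in 2 of 3 documents, so its weight is 1 * log(3/2)
```

Terms that occur in every document get weight log(1) = 0, so only discriminative characteristics receive high Wi.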
t represents the number of times topic T updates, which reflects how frequently the user's interest changes. If t is increasing, it may be inferred that the user is paying more attention to the topic. The record of t should also include the time at which the value changed; both the update count and the update time are consulted when the model eliminates topics.
To decide the topics, the system also gathers the user's facial expressions, which are used to analyze the user's emotion. The affective function φ computes the user's emotional value, a sign of the user's interest, between 0 and 1, with different values representing different emotions in different degrees. Affective computing depends on facial expression recognition and decides the emotion by analyzing facial characteristics such as the structure and shape of the face [9]. The function then computes the emotion, and the result is presented as a potential value decided by the expression potential formula:

K(e, Ex) = 1 / (1 + a ||e − Exc||²) (5)

where ||·||² is the squared distance norm of the expression, a is a constant that controls the fading speed of the basic expression Ex's potential, and Exc is the center of the expression. The expression potential can be used to classify the expression, which is then mapped to an emotion value between 0 and 1.
The affective function can express the user's real emotion, avoiding the problem of traditional models in which a user clicks a web page or browses topic information without actually being interested in it.
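The classification by expression potential and the mapping to an emotion value can be sketched as follows. The 2-D feature space, the center coordinates and the three basic expressions are illustrative assumptions, not the paper's actual recognizer.

```python
# Hypothetical centers of basic expressions in a 2-D feature space, each
# mapped to an emotion value in [0, 1] (0 = very negative, 1 = very positive).
CENTERS = {"negative": ((0.0, 0.0), 0.0),
           "neutral":  ((0.5, 0.5), 0.5),
           "positive": ((1.0, 1.0), 1.0)}

def potential(e, center, a=1.0):
    """Expression potential, formula (5): K = 1 / (1 + a * ||e - Ex_c||^2)."""
    d2 = sum((x - c) ** 2 for x, c in zip(e, center))
    return 1.0 / (1.0 + a * d2)

def emotion_value(e):
    """Classify by the highest potential and return the mapped emotion value."""
    best = max(CENTERS.values(), key=lambda cv: potential(e, cv[0]))
    return best[1]

emotion_value((0.9, 0.8))  # closest to the "positive" center -> 1.0
```

The potential is 1 exactly at a center and fades with squared distance, so the nearest basic expression always wins.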

4 User Model Updating and Shifting Mechanism

After the user model is built, it is updated during subsequent learning. This includes two aspects: one is to update the model, such as adjusting the values Ci, Wi, ε and t and adding new topics; the other is to eliminate some topics when storage is limited.
When a browsed topic is already in the user model, the model computes similarity based on SVM theory and is then updated, including Ci, Wi, ε and t. Every time the topic updates, the t related to the topic is incremented:

t = t + 1 (6)

When a new topic is found, it is added to the model if the storage can hold it; otherwise some topics are eliminated. The elimination strategy considers both ε and t. If the emotion value according to ε is low, the user's interest is not strong, so the topic has priority for elimination. If t is low, the topic is not updated constantly and the user does not browse its information; it may be contextual information that the user stops paying attention to after a period of time. From the point of view of information lifecycle management theory, such a topic has no value or low value [10], so it is chosen for elimination. If t is high but the model has not been updated for a long time, the topic may be out of date and can likewise be chosen; what counts as a long time is relative.
When deciding which topic to eliminate, the user model should consider both t and ε, and adjust itself to reflect the user's interest.
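The elimination strategy described above might be sketched like this; the staleness threshold, the dictionary layout and the function name are assumptions made for illustration only.

```python
import time

def choose_victim(topics, now=None, stale_after=30 * 24 * 3600):
    """Pick the topic to eliminate when storage is full, following the
    strategy of this section: low emotion value epsilon first, then low
    update count t, treating long-unupdated topics as out of date.
    `topics` maps name -> dict with keys 'epsilon', 't', 'last_update'."""
    now = time.time() if now is None else now

    def priority(item):
        name, info = item
        stale = (now - info["last_update"]) > stale_after
        # Lower tuple sorts first, i.e. is eliminated first.
        return (info["epsilon"], info["t"], 0 if stale else 1)

    return min(topics.items(), key=priority)[0]

topics = {
    "sports": {"epsilon": 0.9, "t": 12, "last_update": 1_000_000},
    "news":   {"epsilon": 0.2, "t": 3,  "last_update": 1_000_000},
}
choose_victim(topics, now=1_000_100)  # -> "news" (lowest epsilon)
```

The tuple ordering encodes the paper's priority: weak interest (ε), then rarely updated topics (t), then stale ones.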

5 Summary

This paper introduced our user model and detailed its construction and application. In personalized service the user's emotion is also important, yet existing user models do not express it, so our model takes advantage of facial expression recognition and builds a user model that contains the user's emotion. The model utilizes the affective function φ and the emotional sign ε as indicators of the user's interest. At the same time, it provides t to reflect how frequently a topic changes; t is also used to decide whether the user still pays attention to the topic. The t and ε values are important in updating the model and in recommending information to users. As users require more accurate information and affective computing develops, user models will adopt the emotion factor and be applied to recommendation systems.

Acknowledgement. This research is supported by the National Natural Science Foundation of China (No. 70833005), the MOE Project of the Key Research Institute of Humanities and Social Sciences at Universities (No. 2009JJD870002) and the Education Ministry's Humanities & Social Sciences Program (No. 09YJC870020).

References

1. Ying, X.: The Research on User Modeling for Internet Personalized Services. National University of Defence Technology (2003)
2. Rucker, J., Polanco, M.J.: Siteseer: Personalized Navigation for the Web. Communications of the ACM 40(3), 73–75 (1997)
3. Joachims, T., Freitag, D., Mitchell, T.: WebWatcher: A tour guide for the World Wide Web. In: Artificial Intelligence, Japan (August 1998)
4. Yan, D., Liu, M., Xu, Y.: Toward Fine-grained User Preference Modeling Based on Domain Ontology. Journal of the China Society for Scientific and Technical Information 29(3), 442–443 (2010)
5. Li, S.: The Representation and Update for User Profile in Personalized Service. Journal of the China Society for Scientific and Technical Information 29(1), 67–71 (2010)
6. Zhang, S., Huang, Q.: Affective Visualization and Retrieval for Music Video. IEEE Transactions on Multimedia 12(6) (2010)
7. Qin, Y., Zhang, X.: A HMM-Based Fuzzy Affective Model for Emotional Speech Synthesis. In: 2nd International Conference on Signal Processing Systems (2010)
8. Li, J.: Study on Mapping Method of Image Features and Emotional Semantics. Taiyuan University of Technology (2008)
9. Skelley, J.P.: Experiments in Expression Recognition. Master's thesis, Massachusetts Institute of Technology, EECS (2005)
10. Rief, T.: Information lifecycle management. Computer Technology Review 23(8), 38–39 (2003)
An Improved Sub-Pixel Location Method for Image
Measurement

Hu Zhou, Zhihui Liu, and Jianguo Yang

College of Mechanical Engineering, Donghua University, Shanghai, China
{tigerzhou,liuzhihui,jgyangm}@dhu.edu.cn

Abstract. Sub-pixel edge location is an effective way to improve measurement accuracy. To remedy the deficiency of the traditional Gaussian interpolation algorithm for sub-pixel location, this paper presents an improved algorithm. After obtaining the pixel-accuracy edge of the object to be measured with the LoG operator, a Hough transform is used to get the slope of the boundary tangent line and the normal of the corresponding edge point; weighted Lagrange interpolation is performed in the normal direction to obtain the gray values along the gradient under the new coordinate system; finally, sub-pixel relocation is performed in the gradient direction based on the Gaussian interpolation algorithm. Experimental results show that the improved method achieves much better precision than the traditional algorithm.

Keywords: Sub-Pixel Edge Detection, Precision Measurement, Machine Vision, Lagrange Interpolation.

1 Introduction
Machine-vision-based measurement is one of the key technologies in manufacturing. It can perform shape and dimensional inspection to ensure that parts lie within the required tolerances [1]. To improve the precision of image measurement, many scholars have put forward effective sub-pixel location algorithms [2-6]. Because an actual measurement system must meet requirements of precision, efficiency and reliability, it should not be too complicated. Interpolation-based sub-pixel edge detection has been widely used in practice for its fast calculation speed; it uses an interpolation function to approximately restore the one-dimensional continuous light intensity.
However, owing to the discrete distribution of pixel points in a digital image, the traditional Gaussian interpolation algorithm is only available in the horizontal, vertical and diagonal directions (45°, 135°). Certain errors are inevitably generated when locating edges of arbitrary direction. Consequently, the algorithm must be improved to raise the sub-pixel location precision.

2 Principle and Problem


2.1 Sub-Pixel Location Algorithm Based on Gaussian Interpolation
The distribution of gray values on the object edges of a general image is shown in Fig. 1(a), and the distribution of the gray-value differences is shown in Fig. 1(b).

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 83–92, 2011.
© Springer-Verlag Berlin Heidelberg 2011

(a) (b)

Fig. 1. Distribution of gray values and difference

The position of the maximum of the gray-value difference is the boundary that discriminates the background from the object. Because of the integration and optical diffraction effects of the optical components, as well as the aberrations of the optical system, the gray values change gradually in the image, whereas in reality the change should be drastic. The classic edge extraction principle considers that the maximum difference marks the edges of image objects.
According to the square aperture sampling theorem, optical components always integrate the light intensity projected onto the photosensitive surface over a fixed-size area at fixed intervals, and the outputs are the gray values in an image. As the integration time and area are fixed, the outputs depend only on the light intensity distribution on the surface. A pixel's gray-value output can be expressed as:
f(i, j) = ∫_{j−1/2}^{j+1/2} ∫_{i−1/2}^{i+1/2} g(x, y) dx dy (1)

Here, f(i, j) is the pixel gray value and g(x, y) is the light intensity distribution of the continuous image. Theoretically, the variation of edge gray values should follow a Gaussian distribution, as shown in Fig. 2; the vertex of the curve is the precise location of the edge point.
The expression of the Gaussian curve is:

y = (1 / (√(2π) σ)) exp(−(x − μ)² / (2σ²)) (2)

μ is the mean value and σ is the standard deviation. Because the curve is difficult to fit directly, and the purpose here is just to find the vertex position, we take logarithms on both sides to transform the equation:

ln y = −(x − μ)² / (2σ²) + ln(1 / (√(2π) σ)) (3)

Obviously, the equation is a conic. To simplify the calculation, we can fit a parabola to the log-transformed values to obtain the vertex coordinate; that is, we use a conic instead of the Gaussian curve to improve efficiency.

Fig. 2. Variations of edge gray values

Suppose the conic form is y = Ax² + Bx + C; the gray-value output for each pixel is:

y(n) = ∫_{n−1/2}^{n+1/2} (Ax² + Bx + C) dx (4)

Let the index of the maximum point of the gray-value difference be 0 and its value be f0; this position can be calculated by the classic operators mentioned above. The two neighbouring points are indexed −1 and 1, with values f−1 and f1. We get the gray values as follows:

f−1 = ∫_{−3/2}^{−1/2} (Ax² + Bx + C) dx = (13/12)A − B + C (5)

f0 = (1/12)A + C (6)

f1 = (13/12)A + B + C (7)

Combining equations (5) to (7), we obtain the expressions of A, B, C:

A = (1/2)(f−1 + f1 − 2f0),  B = (1/2)(f1 − f−1),  C = (13/12)f0 − (1/24)f−1 − (1/24)f1 (8)

Thus, the abscissa of the parabola's vertex is:

x = −B / (2A) = (f1 − f−1) / (2(2f0 − f1 − f−1)) (9)

This solution holds after taking logarithms of the Gaussian curve, so the pixel gray-value differences in (9) should be substituted by their logarithms, giving:

x = (ln f1 − ln f−1) / (2(2 ln f0 − ln f1 − ln f−1)) (10)
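Formula (10) translates directly into code. The sketch below assumes the three gray-value differences are positive (so their logarithms exist) and recovers the vertex offset of a synthetic Gaussian sampled at three points; the function name is illustrative.

```python
import math

def gaussian_subpixel_offset(f_m1, f0, f1):
    """Sub-pixel offset of the edge from the maximum-difference pixel,
    formula (10): the three neighbouring gray-value differences
    f_-1, f_0, f_1 are log-transformed and the parabola vertex computed."""
    lm1, l0, l1 = math.log(f_m1), math.log(f0), math.log(f1)
    return (l1 - lm1) / (2 * (2 * l0 - l1 - lm1))

# Samples of exp(-(x - 0.3)^2) at x = -1, 0, 1: the vertex is recovered at 0.3.
f = [math.exp(-(x - 0.3) ** 2) for x in (-1, 0, 1)]
gaussian_subpixel_offset(*f)  # -> 0.3 (up to floating point)
```

For a pure Gaussian profile the recovery is exact, since ln y is exactly a parabola in x.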

2.2 The Deficiency of the Traditional Gaussian Interpolation Algorithm

Because of the discrete distribution of a digital image's pixel points, the traditional Gaussian interpolation algorithm can only be performed in the horizontal, vertical and diagonal directions (45°, 135°). In most cases edges lie in arbitrary directions, so the location accuracy inevitably decreases.
Fig. 3 shows an object edge of the original image. Fig. 4(a) shows the trend of the gray-value change in the horizontal direction, and Fig. 4(b) in the gradient direction. As can be seen from the figures, the gray values change more drastically in the normal direction than in the other direction, which means that edge location in the normal direction will be more accurate.

Fig. 3. An object edge of the original image

(a) (b)

Fig. 4. Gray values change chart in horizontal and normal direction

3 Algorithm Optimization

3.1 Interpolation in the Gradient Direction

After obtaining the edge location of the object with the LoG operator, we use the Hough transform to get the slope of the boundary tangent line, from which the corresponding normal is obtained. Suppose the angle of the edge normal to the horizontal axis is θ, as shown in Fig. 5. Taking an edge point after pixel-precise location as the center, rotate the coordinate system so that the normal line becomes the x-axis and the edge direction becomes the y-axis. Gaussian curve fitting in the gradient direction then yields a more accurate sub-pixel location.
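The rotation step can be sketched as follows: from the Hough tangent slope, the normal angle is the tangent angle rotated by 90°, and sample positions are laid out along that normal. The sampling offsets and the function name are illustrative; a real implementation would sample at sub-pixel spacing.

```python
import math

def normal_samples(cx, cy, slope, offsets=(-1.0, 0.0, 1.0)):
    """Sample positions along the edge normal through edge point (cx, cy).
    The Hough transform gives the tangent slope of the boundary line; the
    normal direction is perpendicular to it (angle theta to the x-axis)."""
    tangent = math.atan2(slope, 1.0)
    theta = tangent + math.pi / 2.0   # normal = tangent rotated by 90 degrees
    return [(cx + d * math.cos(theta), cy + d * math.sin(theta))
            for d in offsets]

# Near-vertical edge (large slope) -> the normal is nearly horizontal.
pts = normal_samples(10.0, 20.0, slope=1e9)
# pts[0] is approximately (11.0, 20.0) and pts[2] approximately (9.0, 20.0)
```

The gray values at these non-integer positions are exactly what the interpolation of Section 3.1 must supply.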

Fig. 5. Interpolation in edge normal direction

(a) Input image (b) Output image

Fig. 6. The interpolation of image gray values

The algorithm can achieve high precision theoretically. However, a critical problem must be solved: because of the discrete distribution of digital image pixels, the coordinates of the original pixel points f−1 and f1 in the new rotated coordinate system (f′−1 and f′1) may not be integers. The gray values of f′−1 and f′1 after coordinate rotation must therefore be determined. To solve this problem, gray-value interpolation is reintroduced, as shown in Fig. 6.
Suppose the existing pixel gray values are f(u1, v1), f(u2, v2), ..., f(un, vn); the new pixel gray value in the new coordinate system can be expressed as:

f(u0, v0) = Σ_{i=1}^{n} f(ui, vi) h(ui − u0, vi − v0) (11)

where h(·,·) is the interpolation kernel function and the f(ui, vi) (i = 1, 2, ..., n) act as the weight coefficients.
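Formula (11) can be sketched generically. Here a simple bilinear (triangle) kernel stands in for h as an illustrative choice, not the paper's weighted Lagrange kernel, which is developed in the next subsection.

```python
def triangle_kernel(du, dv):
    """Bilinear (triangle) kernel, one common choice for h in formula (11)."""
    wu = max(0.0, 1.0 - abs(du))
    wv = max(0.0, 1.0 - abs(dv))
    return wu * wv

def interpolate(samples, u0, v0, h=triangle_kernel):
    """Gray value at a non-integer point (u0, v0) of the rotated coordinate
    system, formula (11): a kernel-weighted sum of known pixel values.
    `samples` is a list of ((u, v), gray) pairs."""
    return sum(g * h(u - u0, v - v0) for (u, v), g in samples)

samples = [((0, 0), 10.0), ((0, 1), 20.0), ((1, 0), 30.0), ((1, 1), 40.0)]
interpolate(samples, 0.5, 0.5)  # -> 25.0, the bilinear average of the four pixels
```

At an integer position the kernel weight collapses to 1 for the matching pixel and 0 elsewhere, so known pixels are reproduced exactly.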

3.2 The Choice of the Interpolation Kernel Function

The precision and computational cost of an interpolation algorithm depend on the interpolation kernel function, whose design is the core of the algorithm. This paper presents a weighted Lagrange interpolation algorithm to keep a balance between precision and complexity.
From mathematical analysis we know that if a function f(x) has n+1 derivatives at point x0, then it can be expanded as a Taylor series in that neighborhood:

f(x) = f(x0) + f′(x0)(x − x0) + (1/2!) f″(x0)(x − x0)² + ... + (f^(n)(x0)/n!)(x − x0)^n + Rn(x) (12)

In the formula, Rn(x) = (f^(n+1)(ξ)/(n+1)!)(x − x0)^(n+1), which is the Lagrange remainder.
In general, a second-order Taylor series is sufficient to approximate the original function, and this paper uses it as the interpolation function. As shown in Fig. 7, an interpolation point is surrounded by 4 pixels, from which we can choose any 3 points (C(4,3) = 4 combinations) for Lagrange interpolation. Finally, we take the average as the output gray value for the pixel.

Fig. 7. Interpolation of pixels (the point (x, y) lies inside the square formed by (0,0), (0,1), (1,0), (1,1))

Suppose the gray values of points (0, 0), (0, 1), (1, 0), (1, 1) are y0, y1, y2 and y3 respectively. Second-order Lagrange interpolation over points (0, 0), (0, 1) and (1, 0) gives f0(x, y):
( x x1 )( x x 2 ) ( x x 0 )( x x 2 ) ( x x 0 )( x x 1 )
f 0( x, y) = y 0 + y1 + y2 (13)
( x 0 x1 )( x 0 x 2 ) ( x1 x 0 )( x1 x 2 ) ( x 2 x1 )( x 2 x 0 )

Similarly, Lagrange interpolation for points (0, 0), (0, 1) and (1, 1), we get f1(x,y):
(x x1 )(x x 3 ) ( x x 0 )(x x 3 ) ( x x 0 )(x x1 )
f 1(x, y) = y0 + y1 + y3 (14)
( x 0 x1 )(x 0 x 3 ) ( x1 x 0 )(x1 x 3 ) ( x 3 x1 )(x 3 x 0 )

Lagrange interpolation for points (0, 1), (1, 0) and (1, 1), we get f2(x, y):
( x x 2 )( x x 3 ) ( x x 1 )( x x 3 ) ( x x 2 )( x x 1 )
f 2( x , y) = y1 + y2 + y3 (15)
( x 1 x 2 )( x 1 x 3 ) ( x 2 x 1 )( x 2 x 3 ) ( x 3 x 1 )( x 3 x 2 )

Lagrange interpolation for points (0, 0), (1, 0) and (1, 1), we get f3(x, y):
( x x 2 )( x x 3 ) ( x x 0 )(x x 3 ) ( x x 2 )(x x 0 )
f 3( x , y) = y 0 + y2 + y3 (16)
( x 0 x 2 )( x 0 x 3 ) ( x 2 x 0 )(x 2 x 3 ) ( x 3 x 0 )(x 3 x 2 )

Finally, calculate the pixel gray value f(x, y) for point (x, y):

f(x, y) = (1/4)(f0(x, y) + f1(x, y) + f2(x, y) + f3(x, y)) (17)
After obtaining the gray values for the interpolation points in the gradient direction
under the new coordinates system, we can relocate the edge points for sub-pixel
precision in the gradient direction based on the Gaussian interpolation algorithm.

4 Experiment Results
In order to verify the algorithm, we adopted a standard gauge, which has high-quality straight edges, for the sub-pixel location experiment. Fig. 8(a) shows a standard gauge with a working length of 20 mm (the long side is the working face) and Fig. 8(b) shows the gray-value data of a gradual partial edge.

(a) (b)
Fig. 8. Original image of Gauge and gray values of partial edge

Table 1. The list of edge point coordinates by different edge extraction methods

No. Pixel precision Traditional Interpolation Improved algorithm


1 (437,444) (436.7507, 444.1063 ) (436.9439,444.0181)
2 (437,445) (437.1611,444.7419 ) (437.1932,444.9377)
3 (437,446) (437.3867,446.5000) (437.3832,445.8765)
4 (438,447) (437.7011,446.7521) (437.9123,447.0282)
5 (438,448) (438.2676,448.4210) (438.2705,447.9128)
6 (438,449) (438.4088,448.5000) (438.4052,448.8693)
7 (439,450) (438.5354,449.8514) (438.7990,450.0648)
8 (439,451) (439.1241, 451.3852) (439.1645,450.9470)
9 (439,452) (439.3931,452.0611) (439.3851,451.8758)
10 (440,453) (439.5411, 453.4563) (439.7986,453.0649)
11 (440,454) (440.1897,454.4351) (440.2177,453.9298)
12 (440,455) (440.3845,454.7162) (440.3815,454.8770)
13 (441,456) (440.6585,455.8461) (440.8256,456.0562)
14 (441,457) (441.0398, 456.6891) (441.1015,456.9673)
15 (441,458) (441.2947,458.0000) (441.2945,457.9050)
16 (442,459) (441.6210,458.6502) (441.3920,459.1961)
17 (442,460) (441.7600,460.4556) (441.9337,460.0214)
18 (442,461) (442.2098, 461.4322) (442.2306,460.9253)
19 (442,462) (442.3906,461.6704) (442.3854,461.8757)
20 (443,463) (442.8143,463.4500) (442.9165,463.0269)
21 (443,464) (443.0912,464.6721) (443.1451,463.9532)
22 (443,465) (443.3627,465.2967) (443.3624,464.8831)

Execute the LoG edge extraction (σ = 1, thresh = 70) to get the edge binary image. A Hough transform on the left-side edge gives the line equation y = 3.0994x − 909.3564. To precisely locate the edge points, rotate the coordinate system about each LoG zero-crossing point so that the edge becomes the y-axis and the normal direction becomes the x-axis.
Sub-pixel location is then conducted according to the algorithm above; note that the operation is carried out only for the zero-crossing points along the normal direction. Table 1 lists the edge point coordinates at pixel precision, at sub-pixel precision by traditional Gaussian interpolation, and at sub-pixel precision by the improved algorithm.
Fig. 9 shows the 2D curves drawn from the positions of the sub-pixel points. The curve made at pixel-level precision has a zigzag shape; the traditional Gaussian interpolation method remedies such errors to some extent, but because it is not carried out along the edge normal, it inevitably retains some error relative to the real edge. The improved algorithm resamples along the edge normal and uses the second-order Lagrange interpolation algorithm to restore the gray values in the gradient direction. Consequently, this method improves the accuracy of edge location: as shown in the figure, the line connecting the sub-pixel points obtained by the improved algorithm lies even closer to the real straight line.

Fig. 9. Comparison of edge point curves made from different algorithms (pixel-accuracy edge, traditional sub-pixel edge, improved sub-pixel edge; x coordinate vs. y coordinate)

Fig. 10. Ring gauge of Φ20    Fig. 11. Comparison of edge point curves
Fig. 10 is a ring gauge of Φ20. Similarly, we obtained the edge point coordinates at pixel precision, at sub-pixel precision by traditional Gaussian interpolation, and at sub-pixel precision by the improved algorithm. Fig. 11 compares the edge curves derived from the different methods.
Fig. 12 shows the error curve between the improved arc sub-pixel location method and the pixel-level location method.







Fig. 12. Error data curve (error value vs. pixel point)

Because the standard gauge boasts high-quality straight-line and curved edges, we can determine the sub-pixel location precision by checking the shape errors of the object edges in the image. The shape errors of the gauge are shown in Table 2; the new algorithm obviously improves the precision of sub-pixel location.

Table 2. Shape Errors of gauge image

Algorithm      Pixel-level Location   Traditional Sub-pixel Location   Improved Sub-pixel Location
Straightness   0.849                  0.318                            0.223
Radian         0.868                  0.403                            0.238

5 Summary
Edge detection and sub-pixel location of edge points are the basis of image measurement. Classical Gaussian interpolation location achieves high speed but low precision. For arbitrarily oriented edges, Gaussian interpolation along the edge gradient can improve the location precision. To restore the gray values in the gradient direction, second-order Lagrange interpolation is performed, followed by Gaussian-interpolation sub-pixel relocation. Experiments prove that the improved sub-pixel interpolation algorithm achieves much better precision than the traditional algorithm.

Acknowledgment. Supported by the Chinese Universities Scientific Fund.

References
1. Steger, C., Ulrich, M., Wiedemann, C.: Machine Vision Algorithms and Applications, pp. 1–2. Tsinghua University Press, Beijing (2008)
2. Qu, Y.D.: A fast subpixel edge detection method using Sobel-Zernike moment operator. Image and Vision Computing 23, 11–17 (2005)
3. Tabatabai, A.J., Mitchell, O.R.: Edge location to sub-pixel values in digital imagery. IEEE Trans. Pattern Anal. Machine Intell. PAMI-6(2), 188–201 (1984)
4. van Assen, H.C., Egmont-Petersen, M., Reiber, J.H.C.: Accurate object localization in gray level images using the center of gravity measure: accuracy versus precision. IEEE Transactions on Image Processing 11(12), 1379–1384 (2002)
5. Malamas, E.N., Petrakis, E.G.M., Zervakis, M., et al.: A survey on industrial vision systems, applications and tools. Image and Vision Computing 21, 171–188 (2003)
6. Li, Y., Pang, J.-x.: Sub-pixel edge detection based on spline interpolation of D2 and LoG operator. Journal of Huazhong University of Science and Technology 28(3), 77–79 (2000)
The Dynamic Honeypot Design and Implementation
Based on Honeyd

Xuewu Liu1, Lingyi Peng2, and Chaoliang Li3

1 Hunan University of Commerce Beijin College
2 Hunan First Normal University
3 School of Computer, Hunan University of Commerce, Changsha, China
{12870595,494680234,522396825}@qq.com

Abstract. With the rapid development of Internet technology, network security has become a very serious problem. At present the main security technologies include firewall technology, intrusion detection technology, access control technology, data encryption technology and so on. These technologies are based on passive defence, so they are always in a passive position when faced with up-to-date attack means. We therefore put forward an active-defence network security technology, honeypot technology, and study in detail the design and implementation of a dynamic honeypot based on Honeyd.

Keywords: Network security, Honeyd, Dynamic honeypot, Virtual honeypot.

1 Preface

With the rapid development of Internet technology, network information security faces a serious threat. Current network security technologies mainly use passive defence methods, but such methods struggle to deal with the complex and changeable attacks of hackers. Since passive defence modes are insufficient, defensive measures must be turned from passive into active; this is our new research topic. In this context, we put forward an active-defence network security technology: the honeypot. A honeypot system elaborately prepares network resources for hackers; it is a strictly monitored network deception system. It aims at attracting hacker attacks by offering real or simulated networks and services, and at collecting information and analyzing the attacker's behavior and process during the attacks. In this way we can grasp the hacker's motivation and goals and repair the attacked system's security holes beforehand, avoiding future attacks.

2 Honeyd Analysis and Research


Honeyd was designed by Niels Provos of the University of Michigan. It is a low-interaction, application-oriented virtual honeypot. Honeyd's software frame
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 93–98, 2011.
© Springer-Verlag Berlin Heidelberg 2011

consists of several parts: a configuration database, a central packet dispatcher, protocol handlers, a personality engine and an optional routing component. The structure is shown in Fig. 1:

Fig. 1. Logical frame of honeyd

After Honeyd receives packets, the central packet dispatcher checks the IP packet length and verifies the packet checksum. Honeyd mainly responds to three Internet protocols, ICMP, TCP and UDP; packets of other protocols are logged and discarded. Before packets are processed, the central dispatcher looks up the honeypot configuration corresponding to the packet's destination address. If no corresponding configuration is found, the system uses a default configuration. Once the configuration is determined, the packet and its configuration are passed to the specific protocol handler.

3 The Dynamic Honeypot Design Based on Honeyd

3.1 The Design of Dynamic Honeypot Environment Acquisition

The main purpose of acquiring the surrounding environment is to learn about the network environment; it is the necessary condition for solving the honeypot system configuration. That is, to solve the allocation problem we must first know the surrounding network environment.

1) Active detection technology

To learn the network's operating systems and server types, we can use a tool such as Nmap to probe the entire network; the feedback from the target systems helps us determine their operating systems and the services they provide. But excessive active detection can also cause faults: it consumes extra bandwidth and may even cause a system shutdown.

2) Passive fingerprint identification technology

Passive fingerprint identification is based on the principle that each operating system's IP protocol implementation has its own characteristics. It maintains a fingerprint database recording the packet characteristics of different operating systems. After capturing packets on the network, it compares them with the database records and thereby judges the operating system category.

3) Design combining active detection with passive fingerprinting

Combining the active detection and passive fingerprint designs above, we can approximately determine the kinds of operating systems present and obtain the basic situation of the environment. The design is shown in Fig. 2:

Fig. 2. Design of the active detection combining with the passive fingerprint
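A minimal sketch of the passive lookup follows, with a toy fingerprint database of (initial TTL, TCP window size) pairs. The values and OS labels are illustrative only; real tools such as p0f use far richer signatures.

```python
# Toy passive-fingerprint database: (initial TTL, TCP window size) -> OS guess.
FINGERPRINTS = {
    (64, 5840):   "Linux 2.4-2.6",
    (128, 65535): "Windows XP",
    (255, 4128):  "Cisco IOS",
}

def guess_os(ttl, window):
    """Match a sniffed SYN packet's fields against the fingerprint database.
    The observed TTL is rounded up to the nearest common initial value
    (32/64/128/255), since each router hop decrements it by one."""
    for initial in (32, 64, 128, 255):
        if ttl <= initial:
            return FINGERPRINTS.get((initial, window), "unknown")
    return "unknown"

guess_os(ttl=121, window=65535)  # -> "Windows XP" (7 hops from initial TTL 128)
```

Because this only observes traffic that is already flowing, it adds none of the bandwidth cost of active Nmap probing.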

3.2 The Architecture Design of the Dynamic Honeypot System

Dynamic honeypot technology was first proposed as a design method by the Honeynet Project. Given the challenges in honeypot configuration and maintenance, we analyze them with dynamic honeypot technology. The system mainly uses active detection technology, passive fingerprint identification technology and Honeyd. The overall structural design for the above content is shown in Fig. 3:

Fig. 3. The dynamic honeypot overall design based on Honeyd



4 The Dynamic Honeypot Realization Based on Honeyd


4.1 Set Virtual Machine
We use a host running Linux as the honeypot host; Linux is the main operating system on the honeypot mainframe, and the system is given bridging functionality. A Vmware virtual machine is also installed to support multiple guest operating systems. In this honeypot system we set the gateway to a two-interface bridge mode: by using a two-layer (bridging) gateway, the honeypot system sits in the same network environment as the real systems, so while tracking and understanding external network attacks we can also understand internal network security problems. In addition, we modified the source code to make the system support bridge mode.

4.2 Establish Honeyd Configuration Files

The system simulates the virtual honeypots through Honeyd, configured via templates created in the Honeyd.config file. The system creates a default host template, used to handle packets that do not match any other template; in addition, it creates a Windows operating system template, an XP operating system template and a router template.
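For illustration, templates of the kind described might look like the fragment below, written in Honeyd's standard configuration syntax; the personality strings, ports and addresses are placeholders, and the personality names must match entries in the local Nmap fingerprint file.

```
# Hypothetical Honeyd.config fragment: one Windows host and one router.
create windows
set windows personality "Microsoft Windows XP Professional SP1"
set windows default tcp action reset
add windows tcp port 80 open
add windows tcp port 139 open
bind 192.168.1.10 windows

create router
set router personality "Cisco 1601R router running IOS 12.1(5)"
add router tcp port 23 open
bind 192.168.1.1 router
```

Packets addressed to any other IP fall through to the default template, as described above.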

4.3 Data Control

Honeyd usually has two layers of data control: firewall data control and router data control. The firewall controls the honeypot's outbound connections, adopting a "wide in, strict out" strategy: a reasonable threshold is set on the number of connections a honeypot machine may initiate to the outside, generally 5 to 10, which neither arouses the intruder's suspicion nor lets the honeypot system become a tool for the intruder to attack other systems. Routing control is done by routers, basically using the router's outbound access-control packet filtering to prevent the honeypot from being used to attack other parts of the network; it mainly guards against DoS attacks, IP spoofing and other deceptive attacks. In this system we use a gateway instead. The advantage of a gateway is that it has no network address, so control operations are more covert and are not easy for hackers to perceive. We use the rc.firewall script developed by the Honeynet Project for configuration and implementation, with restrictions enforced through IPTables. IPTables is the open-source firewall that ships with Linux; it can drop packets as needed, allow only a certain number of new connections in a given period, or even isolate the honeypot system completely by discarding all packets. Each time an outbound connection is initiated, the firewall counts it; when the total limit is reached, IPTables blocks any further connection launched from the honeypot. IPTables then resets itself, again allowing the permitted number of outbound connections in each new time period. The script sets the number of outbound TCP, UDP, ICMP and other IP connections allowed per hour; when an intruder's outbound packets reach the specified value, all outbound connections are cut off automatically to reduce the risk to the network.
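A simplified sketch of such hourly outbound limits, loosely modeled on the Honeynet Project's rc.firewall script; the interface names and the limit of 10 connections per hour below are assumptions for illustration, not values from the paper.

```
# Drop forwarded traffic by default
iptables -P FORWARD DROP
# "Wide in": allow all traffic toward the honeypot (eth1 side)
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
# "Strict out": at most 10 new outbound TCP connections per hour
iptables -A FORWARD -i eth1 -o eth0 -p tcp --syn -m limit --limit 10/hour --limit-burst 10 -j ACCEPT
# Rate-limit outbound UDP and ICMP the same way
iptables -A FORWARD -i eth1 -o eth0 -p udp -m limit --limit 10/hour -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -p icmp -m limit --limit 10/hour -j ACCEPT
# Anything over the limit falls through to the default DROP policy
```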
The Dynamic Honeypot Design and Implementation Based on Honeyd 97

4.4 Data Capture

Data capture is the key to the honeypot system. We use the captured information to determine the intruder's behavior and motivation; in order to determine what the intruder did after gaining access, the captured data must provide the intruder's keystroke records and the effects of the attack.

1) Data capture with Snort

Snort is a lightweight intrusion detection system with three working modes: sniffer, packet recorder, and network intrusion detection system. We mainly use Snort's intrusion-detection mode. The configuration above outputs the data Snort collects to a local MySQL database named Snortdb, with user name Snortoper and password Password; at the same time, packets are recorded in Tcpdump format in the Snort.log file.
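In Snort 2.x configuration syntax, output settings of the kind described above would look roughly like this; the host value is an assumption for the example.

```
# snort.conf output plugins
output database: log, mysql, user=Snortoper password=Password dbname=Snortdb host=localhost
output log_tcpdump: snort.log
```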

2) Data capture with Sebek

Sebek is a kernel-based data capture tool that can covertly capture all activity on the honeypot. Sebek has a great advantage in capturing encrypted traffic: no matter how data is encrypted, it must be decrypted through system calls before the destination host can act on it, so Sebek captures it at that point. Sebek consists of two parts: a client and a server. The client uses hiding techniques so that intruders do not realize they are being monitored, which lets us capture genuine data. It packages the captured data into UDP packets and sends them to the network directly through the NIC driver, avoiding detection by any sniffer the intruder may have installed; the Sebek server then obtains the packets from the network. The client thus captures data from the honeypot and outputs it to the network for the server to collect. The server can collect data in two ways: by capturing active packets directly from the network, or by reading packet files saved in Tcpdump format. Collected data can be uploaded to a relational database, and keystroke records can be displayed instantly.

4.5 Log Record

The log record stores the data captured by the honeypot host. Its main function is to collect and record hacker behavior, both as material for later analysis of the tools, strategies and attack goals of hackers and as evidence for legal action against their crimes. To keep the captured attack data secure, we designed a log backup scheme for the system. The honeypot host is a real host running Red Hat Linux 9.0; the syslog facility of Red Hat Linux 9.0 is powerful, and syslog can forward messages generated by the system kernel and tools. By configuring the Syslog.conf file, the log messages collected by the virtual honeypots are transferred to a remote log server for backup. Finally, we capture and analyze the hackers' information, learning their means and methods and taking corresponding defensive measures against their attacks. With this, the dynamic honeypot based on Honeyd is basically realized.

98 X. Liu, L. Peng, and C. Li
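With the classic BSD syslog.conf syntax used on Red Hat Linux 9.0, forwarding log messages to a remote backup server takes a single line; the server address below is an illustrative assumption, and the log server's syslogd must be started with -r to accept remote messages.

```
# /etc/syslog.conf on the honeypot host:
# forward all facilities and priorities to the remote log server
*.*    @192.168.0.100
```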

5 Summary

Although the dynamic honeypot is based on virtual honeypots, it is a low-interaction honeypot: its interaction with attackers is very limited, and the data it can capture is correspondingly limited. Improving the honeypot's interactivity yields more attack information, which is of great help in studying attackers. We can combine virtual honeypots with high-interaction honeypots to capture more attack information. The role of the dynamic virtual honeypot in network security is mainly indirect, namely recognizing threats and diverting attack traffic. Combining the honeypot with other security technologies, such as firewalls and intrusion detection systems, is therefore also a very important direction for future development. As honeypot technology develops, some problems will be solved step by step, and new techniques and applications will develop further, making honeypot technology serve network security protection better.

Research of SIP DoS Defense Mechanism Based on
Queue Theory

Fuxiang Gao, Qiao Liu, and Hongdan Zhan

College of Information Science and Engineering, Northeastern University,


Shenyang 110819, China
gaofuxiang@mail.neu.edu.cn,
{dongqin4060432,hongdanzhan}@163.com

Abstract. SIP is becoming the core of multimedia communication networks over IP, on which the 3G network in our country has been operated. DoS attacks are easy to implement, highly destructive, and hard to trace back to the attacking source, which makes them a great security threat to SIP. Focusing on defense mechanisms against DoS attacks on the SIP protocol, this paper proposes a model based on queueing theory to approximately analyze DoS attacks, and then presents a DoS attack defense method. Simulations were run to analyze the performance of the defense mechanism.

Keywords: SIP, DoS attack, queue model, defense mechanism.

1 Introduction
SIP was proposed in 1999 by the IETF as a signaling protocol based on the IP network environment, and it is now widely used in NGN. SIP is vulnerable to DoS attacks because it runs in an open IP network environment. For example, attackers can create false messages with spoofed source addresses and Via fields in which the targeted SIP proxy server is set as the request initiator, and send these messages to large numbers of SIP users; the spoofed users will then send many DoS attack messages back to the attacked server.
Research on DoS detection and defense for SIP has therefore become a hot topic and is a problem that urgently needs to be solved in NGN deployment. According to the definition of VoIPSA, the DoS attack problems of SIP systems can be divided into five categories: request flooding, malformed requests and messages, QoS abuse, spoofed messages, and call hijacking [1]. This paper mainly studies the flooding problem of SIP and, combined with the M/M/1/K mathematical model, proposes a SIP DoS attack defense scheme based on queueing theory. Finally, the paper simulates and analyzes the performance of the scheme.

2 Analytical Model of SIP DoS Attack


2.1 Related Assumptions
We assume that SIP request messages arrive according to a Poisson process with average arrival rate λ messages per second, that the time the server needs to process a unit of message is Q seconds, and that message sizes are exponentially distributed with mean L. The average service rate of the SIP system is μ and the average service time is S, thus:

S = 1/μ = L·Q    (1)

For convenience of analysis, we assume that service times and interarrival times remain exponentially distributed while the system is under DoS attack [2]. We also assume that the whole system has a single core processing unit and a cache of size K, i.e., at most K request messages can wait to be processed, and any further arrivals are discarded [3]. So we can analyze the whole system using the M/M/1/K model.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 99–104, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2.2 The Establishment of the Analytical Model

For the M/M/1/K system, write ρ = λ/μ. The probability p_0 that the queue length of the system is 0 is then:

p_0 = 1 / (Σ_{i=0..K} ρ^i)    (2)

The rate at which messages are discarded when the system is in statistical equilibrium is:

P_loss = p_K = ρ^K p_0    (3)

The average waiting time of the messages that are not lost is:

W = p_0 ρ² [1 − Kρ^(K−1) + (K−1)ρ^K] / [λ(1 − p_K)(1 − ρ)²]    (ρ ≠ 1)
W = (K − 1) / (2λ)    (ρ = 1)    (4)

The average delay of a message equals its waiting time in the queue plus the average service time, so the average response time of the system is:

R = W(λ, L, K) + S(L)    (5)

We can see that the response time of the system is related to three parameters: the arrival rate λ of the request messages, the average message size L, and the size K of the system cache. Normally the value of K is fixed, so the response time of the system depends on λ and L.
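Eqs. (1)–(5) can be checked numerically. The following is a minimal sketch of the model under the stated assumptions; the function and variable names (mm1k_response_time, lam, mu) are ours, not the paper's.

```python
def mm1k_response_time(lam: float, mu: float, K: int) -> float:
    """Average response time R = W + S of an M/M/1/K queue, Eqs. (1)-(5)."""
    rho = lam / mu
    S = 1.0 / mu                               # Eq. (1): average service time
    if rho == 1.0:
        return (K - 1) / (2.0 * lam) + S       # Eq. (4), rho = 1 case
    p0 = (1 - rho) / (1 - rho ** (K + 1))      # Eq. (2), geometric sum in closed form
    pK = rho ** K * p0                         # Eq. (3): loss probability
    W = (p0 * rho ** 2 * (1 - K * rho ** (K - 1) + (K - 1) * rho ** K)
         / (lam * (1 - pK) * (1 - rho) ** 2))  # Eq. (4), rho != 1 case
    return W + S                               # Eq. (5)
```

With λ = 5 msg/s, μ = 10 msg/s and a large cache, R approaches the plain M/M/1 value of 0.2 s; with K = 1 no message ever waits, so R reduces to the service time alone.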

3 The Design of the SIP DoS Attack Defense Scheme

According to the M/M/1/K model above, newly arriving messages are discarded once the number of queued messages reaches K. This means that other legal users will be unable to obtain normal service if the server suffers a DoS attack that exhausts its buffer space.
For example, if an illegal INVITE message blocks the head of the queue, the legal INVITE messages behind it cannot obtain service, so the call cannot be established through the normal process and the session eventually fails by timeout. Even if the illegal INVITE messages do not block the head of the queue, a large number of illegal messages accumulated in the queue may cause newly arriving session response messages to be rejected or discarded, so the calls of legal users still time out for lack of any response [4], as shown in Fig. 1.

Fig. 1. The blocked queue of INVITE messages. The former is the case of illegal messages blocking the head of the queue; the latter is the case of blocking the body of the queue.

Fig. 2. Sketch maps of the single queue and the priority queue. The former is the single-queue case; the latter is the priority-queue case.

In order to reduce the influence of INVITE flooding on the proxy server, we consider a scheme that introduces a priority queue. A SIP server generally uses a FIFO queue to process messages, which produces the problem above under a DoS attack. With a priority queue, INVITE messages are assigned low priority and placed in the low-priority queue, while all non-INVITE messages get high priority and go into the high-priority queue. Each queue follows the FIFO principle, but the messages of the lower-priority queue are processed only when the higher-priority queue is empty, as shown in Fig. 2.
Thus we can divide the server's original cache resources between the two queues, with the division made according to actual conditions. For convenience of analysis, this paper divides the cache equally, giving two M/M/1/(K/2) queues [5].
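The dispatch rule just described can be sketched as follows. This is an illustrative model, not the paper's implementation; the class and method names are invented for the example.

```python
from collections import deque

class PriorityDispatcher:
    """Two FIFO queues sharing the cache equally; the low-priority (INVITE)
    queue is served only when the high-priority queue is empty."""

    def __init__(self, cache_size: int):
        half = cache_size // 2
        self.high = deque(maxlen=half)   # non-INVITE messages
        self.low = deque(maxlen=half)    # INVITE messages

    def enqueue(self, msg: str) -> bool:
        q = self.low if msg.startswith("INVITE") else self.high
        if len(q) == q.maxlen:
            return False                 # cache full: message is discarded
        q.append(msg)
        return True

    def next_message(self):
        if self.high:
            return self.high.popleft()   # high priority first, FIFO within it
        if self.low:
            return self.low.popleft()
        return None
```

Under INVITE flooding, illegal INVITEs can at worst fill their own half of the cache; responses such as 200 OK still pass through the high-priority queue.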
The priority queue adopts the non-preemptive priority rule: a message that is being served is allowed to finish its service without interruption, even if a higher-priority message arrives. Every priority class has an independent queue. When the server becomes available, the first message of the non-empty queue with the highest priority is the first to be served. We give the corresponding average delay equation for every priority class. For the M/M/1/(K/2) system with multiple priority levels, we first define the parameters as follows: q_wi is the average queue length of priority i; W_i is the average waiting time of priority i; p_0i is the probability that the message queue of priority i has length 0; ρ_i = λ_i/μ_i is the utilization of priority-i messages on the system; T_R is the average residual service time.
We assume that the total utilization of the system is less than 1, namely:

ρ_1 + ρ_2 + … + ρ_n < 1.    (6)

If this assumption does not hold, the average delay of the messages whose priority class is greater than some k becomes infinite, while that of the messages whose priority class is not greater than k remains finite.
We can then obtain the average queue length of the messages whose priority is i:

q_wi = p_0i ρ_i² [1 − (K/2)ρ_i^(K/2−1) + (K/2 − 1)ρ_i^(K/2)] / (1 − ρ_i)²    (ρ_i ≠ 1)
q_wi = p_0i (K/2)(K/2 − 1) / 2    (ρ_i = 1)    (7)

This paper discusses only the situation with two priority levels. The average delay of the high-priority messages is easily obtained:

W_1 = T_R + (1/μ_1) q_w1    (8)

For the messages of low priority, the expression for the average delay is generally similar to the high-priority expression, except that high-priority messages arriving while a message is still waiting in the queue must also be taken into account. The expression is:

W_2 = T_R + (1/μ_1) q_w1 + (1/μ_2) q_w2 + ρ_1 W_2    (9)

According to the P-K formula, the average residual service time is:

T_R = (1/2) Σ_{i=1..n} λ_i E[X_i²]    (10)

For a message whose priority is i, the response time is:

R_i = 1/μ_i + W_i    (11)

For the system with two priority levels, the average response time is:

R = (λ_1 R_1 + λ_2 R_2) / (λ_1 + λ_2)    (12)
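Eqs. (6)–(12) can be evaluated together. The sketch below is ours, not the paper's code; it assumes exponential service for each class (so E[X_i²] = 2/μ_i² in Eq. (10)), uses the ρ_i ≠ 1 branch of Eq. (7), and solves Eq. (9) algebraically for W_2.

```python
def two_class_response_time(lam1, lam2, mu1, mu2, K):
    """Mean response time of the two-priority M/M/1/(K/2) model, Eqs. (6)-(12)."""
    rho1, rho2 = lam1 / mu1, lam2 / mu2
    assert rho1 + rho2 < 1, "Eq. (6): total utilization must stay below 1"

    def q_w(rho, k):
        # Eq. (7): mean queue length of an M/M/1/k class (rho != 1 branch)
        p0 = (1 - rho) / (1 - rho ** (k + 1))
        return (p0 * rho ** 2 * (1 - k * rho ** (k - 1) + (k - 1) * rho ** k)
                / (1 - rho) ** 2)

    k2 = K // 2                                                # each class gets half the cache
    qw1, qw2 = q_w(rho1, k2), q_w(rho2, k2)
    TR = 0.5 * (lam1 * 2 / mu1 ** 2 + lam2 * 2 / mu2 ** 2)     # Eq. (10), exponential service
    W1 = TR + qw1 / mu1                                        # Eq. (8)
    W2 = (TR + qw1 / mu1 + qw2 / mu2) / (1 - rho1)             # Eq. (9) solved for W2
    R1, R2 = 1 / mu1 + W1, 1 / mu2 + W2                        # Eq. (11)
    return (lam1 * R1 + lam2 * R2) / (lam1 + lam2)             # Eq. (12)
```

Raising the low-priority (INVITE) arrival rate increases the overall response time, with the burden falling mainly on the INVITE class, which is the behavior the priority scheme aims for.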

4 The Simulation and Comparison of Performance

The node model consists of a group of node modules, and the interior of each node module is modeled in the process domain. This paper gives only the domain model of the crucial ARS node module, shown in Fig. 3.

Fig. 3. The domain modeling of the crucial ARS node

Fig. 4. The performance comparison between three priority queues and two priority queues

The simulation parameters are set as follows: the average arrival rate of messages, i.e. the attacking factor λ, takes values between 0 msg/s and 10 msg/s, and the average service rate of the SIP transaction process is 10 msg/s. Running the simulation program, we observe that as λ grows, the average response time of messages grows exponentially, so the attacking factor has a huge effect on the response time of the system. When the attacking factor exceeds 8, the response time of messages at the target node increases greatly; the target node may even fail to respond to any message, and the system collapses. In order to accurately compare the three-level and two-level priority queues, we obtained the curves of the two schemes under the same parameters, as shown in Fig. 4. The comparison curves clearly show that the response time of the three-level queue is far better than that of the two-level queue.

5 Conclusion
Although the M/M/1/K model established in this paper targets only the INVITE flooding attack, it captures the advantages of the detection and defense mechanisms to a certain extent and is easy to realize. Every existing SIP DoS attack defense mechanism has its own advantages and disadvantages; the key to analyzing the performance of a scheme is to establish an effective mathematical analysis model. This paper verified the effectiveness of the proposed DoS attack defense scheme by simulating the ARS model based on queueing theory. In future research, better results could be obtained by combining it with suitable algorithms for discarding INVITE messages.

References
1. Rosenberg, J., Schulzrinne, H., Handley, M., et al.: SIP: Session Initiation Protocol. RFC 3261 (2002)
2. Yin, Q.: The research on SIP DoS attack defense mechanism. The Journal of Chongqing University of Posts and Telecommunications 20(4), 471–474 (2008)
3. Zhang, G., Fischer-Hübner, S., Ehlert, S.: Blocking attacks on SIP VoIP proxies caused by external processing. Telecommunication Systems 45(1), 61–76 (2009)
4. Ormazabal, G., Nagpal, S., Yardeni, E., Schulzrinne, H.: Secure SIP: A Scalable Prevention Mechanism for DoS Attacks on SIP Based VoIP Systems. In: Schulzrinne, H., State, R., Niccolini, S. (eds.) IPTComm 2008. LNCS, vol. 5310, pp. 107–132. Springer, Heidelberg (2008)
5. El-moussa, F., Mudhar, P., Jones, A.: Overview of SIP Attacks and Countermeasures. In: Weerasinghe, D. (ed.) ISDF 2009. LNICST, vol. 41, pp. 82–91. Springer, Heidelberg (2010)
Research on the Use of Mobile Devices in Distance EFL
Learning

Fangyi Xia

School of Foreign Languages, Tianjin Radio & TV University,


300191 Tianjin, P.R. China
xfy76@163.com

Abstract. This research explores the possible use of mobile devices in distance learning of English as a Foreign Language. First, it reviews mobile device application in language teaching. Second, it analyzes the current problems of distance EFL learners in China. It then makes suggestions on how to use mobile devices in distance EFL learning in China. Finally, it finds that mobile learning could partly address some of the current problems of distance EFL learners, such as lack of constant exposure to learning contents and lack of big chunks of time for study and revision. Effective approaches include representing learning contents on mobile devices, creating an online submission system for oral and written assignments, and designing a mobile interactive quiz system.

Keywords: mobile devices, mobile learning, EFL, distance learning.

1 Introduction
With the rapid development of wireless mobile technology, mobile, portable and handheld devices, such as mobile phones, personal digital assistants (PDAs), and MP3 and MP4 players, have become very popular in people's daily life, and some have functions as powerful as those of personal computers. They are now regarded as ideal tools for students with little time, because students can access increasingly sophisticated content no matter where they are or when they have time to study.
Research in the field of mobile learning has been on the rise in recent years. Many studies and projects have been conducted to explore the possibilities of mobile phones for educational use. As Mohamed Ally believes, mobile learning is transforming the delivery of education and training, and this study finds that mobile learning has growing significance in distance education. The slogan of COL's (Commonwealth of Learning) Lifelong Learning for Farmers initiative says, "Mobile phones: not just a tool for talking, but also a tool for learning." However, in China, the country with the world's greatest number of cell phones, most people still use mobile phones only as a tool for talking or recreation, not as a tool for learning. Mobile learning enables learning anywhere at any time, which matches the mission of the Radio & TV Universities (RTVUs), the largest provider of open and distance learning (ODL) in China. Nevertheless, mobile learning has not found widespread use at RTVUs, apart from a few studies focusing on model exploration. This research will explore the use of

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 105110, 2011.
Springer-Verlag Berlin Heidelberg 2011

mobile devices in distance learning of English as a Foreign Language (EFL) and commits itself to fitting mobile learning into learners' daily life.

2 An Overview of Mobile Device Application in Language Teaching

The use of mobile phones in language teaching and learning has increased in the last decade. One of the first projects was developed by the Stanford Learning Lab to support Spanish study using both voice and email on mobile phones. These programs offered vocabulary practice, quizzes, word and phrase translations, and access to live talking tutors.
Thornton and Houser studied the use of mobile phones in Japan to teach English as a Foreign Language (EFL). They introduced two types of materials for studying EFL on mobile devices. The first, Learning on the Move (LOTM), emailed lessons to students' mobile phones at timed intervals. The results of pre- and post-tests indicated that the Mobile E-mail students learned about twice as many vocabulary words as the Web students and the Paper students. The second, Vidioms, displayed short, Web-based videos and 3D animations to give visual explanations of English idioms. The two projects show that mobile devices such as phones and PDAs can be effective tools for delivering foreign-language learning materials to students.
Levy and Kennedy created a similar program for Italian learners in Australia in 2005. It sent words and idioms, definitions, and example sentences to students' cell phones via SMS in a spaced and scheduled pattern, and requested feedback in the form of quizzes and follow-up questions.
In 2007, Cavus and Ibrahim carried out an experimental study to support English vocabulary learning at Near East University by using SMS. They developed a Windows-based program on a PC, called the Mobile Learning Tool (MOLT). A mobile phone attached to the PC via a Bluetooth interface received SMS text messages and phone numbers from the PC, and then sent these messages to the recipient students' mobile phones at the times requested by the PC. The messages sent were English words with brief descriptions of their meanings.
The literature review generally presents a positive and favorable picture of mobile phones in language teaching. While these studies and projects focused on vocabulary learning and practice, this research shifts its focus to review, practice and examination preparation.

3 Current State of Distance EFL Learning in China

The Ministry of Education requires all colleges to include English in their compulsory curriculum. At regular universities, non-English majors who don't pass the College English Test Band Four (CET-4) can't get a degree, while at Radio & TV universities, students who don't pass a national online examination in English (College English A for English majors and College English B for non-English majors) can't get a diploma. The government's emphasis and teachers' hard work have not produced the desired effect in improving students' proficiency in English. At present, most distance learners of EFL in China are in-service adults and part-time students, so they encounter more difficulties than full-time students at regular universities.

3.1 Lack of English Language Environment

In China, English, as a foreign language, is not a language of daily communication, so most students only learn English in classrooms and do not use the language in their daily life. Even in classrooms, for most of the time, they are not learning English; instead, they are learning about English. Although the communicative approach places an emphasis on communicative skills rather than grammatical rules and sentence structures, many students have not developed good oral communicative skills owing to the lack of authentic English-speaking situations. It is true that more students than before have the nerve to speak in front of people, but there is no obvious improvement in their listening and speaking skills. When made to speak English, a lot of students are just talking or uttering English words rather than communicating, because they cannot form logically complete sentences: they have not developed a concept of English sentence structures, which are very different from those of Chinese. Without a basic concept of English sentence structures, their English writing skills are generally poorer than those of students taught with traditional approaches such as the grammar-translation method. Owing to the lack of an English language environment, as well as imbalanced and hard-to-handle English teaching methodology, English learners often get half the result with twice the effort. This problem is especially typical of distance English learners, because they are at a distance and self-guided most of the time.

3.2 Lack of Big Chunk of Time

Most of the students at Radio & TV universities in China are working adults who play many roles at once, with many conflicting tasks to deal with. They have to take care of their work, family and study, so they are always pressed for time. They cannot always give top priority to study, or sit down to learn English for even one whole hour without interruption. Every day they have only sporadic bits of time that could be used for study, but many of them do not even know how to make full use of such bits of time, so they have no time left for study, which constitutes a great impediment to language learning.

3.3 Lack of Constant Exposure to the Learning Contents

Radio & TV universities have a hybrid delivery mode, partly online and partly in the classroom. In the limited hours of classroom teaching, tutors bombard students with a huge amount of information and content which the students cannot digest at the time. Repeated exposure to the learning materials is needed to digest and internalize what they have learned in the classroom, but the learning materials are in textbooks or online. The working adult students are often on the move: traveling from home to the workplace, from the workplace to their children's schools, from one city to another on business, and so on. It is almost impossible for them to carry those thick and heavy books about with them, and access to a computer and the Internet is not always available to every one of them everywhere. Without constant exposure to the learning contents, what they have learned in the classroom slips out of their memory quickly. As a result, many of the students are constantly in the agony of learning and then forgetting.

3.4 Poorly Motivated

A large proportion of the students are poorly motivated. They do not want to work hard to acquire knowledge so as to work better and live better, or to give themselves a sense of achievement or satisfaction. Their only purpose in registering with the university is to get a diploma, which could be helpful for their promotion or something else. Thus they do not care about what they can learn in the process, but focus their attention on examinations; they like exam-oriented teaching very much.
All these problems are to blame for the fact that distance EFL learning is not as effective as desired. Something has to be done to solve some of these problems so as to enhance the delivery of distance English language courses.

4 The Promise of Mobile Device Application in Distance Learning of EFL

Since the absolute majority of these distance learners carry mobile devices (especially mobile phones) with them almost 24 hours a day, 7 days a week, mobile learning could be utilized to solve some of the above problems and improve the situation.
As Brown's Stanford Learning Lab project indicates, the tiny screens of mobile phones are deemed unsuitable for learning new content, but effective for review, practice and quiz delivery if the material is delivered in small chunks. Based on the characteristics of distance language learning, this research finds that distance educators should focus their efforts on the following aspects to enable mobile distance language learning.

4.1 Learning Content Representation on Mobile Devices for Review

Reading materials, such as the passages and articles from the textbook, should be redesigned in formats that can be read on mobile devices, such as text files (.txt). Listening materials should be represented in MP3 or other formats so that they can be retrieved and played on mobile devices. Focal language points, such as words, phrases, collocations and sentence structures, could be packaged in small chunks so that students can use bits of time to review and recite them. In this way, students can store these learning materials on their phones and access them anywhere at any time.
Since mobile devices are with their users almost round the clock, this will increase distance learners' exposure to the learning materials and thus enhance their language sensitivity, which is of great help to language learning.

4.2 An Online Submission System for Oral and Written Assignments

An assignment submission system should be designed for students to submit their oral and written assignments online. This system should be very easy to use, similar to an email system. Students can read the assignment requirements online, do the assignment and submit it to the system, all of which could be done either on their computers or on their mobile phones, whichever is handy for them. For oral assignments, students can record their own voice responses to the oral tasks and send them back to the teacher for marking, evaluation and feedback. Tutors could retrieve and download students' submitted assignments through a computer or a mobile phone, mark them and submit their feedback to the system.

4.3 A Mobile Interactive Quiz System

An interactive quiz system should be designed with a range of interactive exercises and diagnostic quizzes to help with examination preparation. The quizzes should be delivered in small chunks, and students should get immediate feedback after submitting their answers. The system should record the students' points, and it would be more appealing to students if it were game-like; students could earn bonus points toward their continuous assessment by answering the quiz questions.
Students can make full use of their bits of time to review what they have learned from the textbook and tutorials, and test themselves even when they have just five minutes. The next time a learner accesses the quiz system, it will start where it stopped last time.
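A minimal sketch of the resumable, point-tracking behavior described above; the class, the question data and the one-point scoring rule are all invented for illustration, not part of any existing RTVU system.

```python
class QuizSession:
    """Serves quiz items in small chunks, keeps the learner's points, and
    resumes from where the learner stopped last time."""

    def __init__(self, questions):
        self.questions = questions   # list of (prompt, answer) pairs
        self.position = 0            # resume point for the next session
        self.points = 0

    def next_question(self):
        if self.position >= len(self.questions):
            return None              # quiz finished
        return self.questions[self.position][0]

    def submit(self, answer: str) -> bool:
        """Mark the current question, give immediate feedback, move on."""
        correct = answer == self.questions[self.position][1]
        if correct:
            self.points += 1         # bonus point toward continuous assessment
        self.position += 1
        return correct
```

Because the session keeps its `position`, a learner who answers one question in a five-minute gap picks up at the next question later.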

5 Conclusion
China has the greatest number of mobile phone users in the world, but mobile phones
features and capabilities are not fully explored and utilized. This leads to a great waste
of mobile resources. While many countries are researching into the use of mobile
devices in education, the largest providers of ODL in China, Radio & TV universities
should take the initiative to research, explore and develop mobile learning technology
for use in its delivery distance courses. Only by doing so could they live up to their
promise to allow for learning anywhere at anytime.
While much previous research and many previous projects focused on vocabulary learning
and practice, this research shifts its focus to review and examination preparation. It
finds that the portability of mobile devices could be utilized to address some of the
current problems of distance EFL learners, such as the lack of constant exposure to
learning content and the lack of large blocks of study time. Mobile learning could become
a daily reality for distance EFL learners through learning content representation on
mobile devices, an online submission system for oral and written assignments, as well
as a mobile interactive quiz system.
Mobile learning enables students to review what they have learned and prepare for
the exams in a more leisurely, relaxed and effective way. It is expected that students
would be better prepared, more confident, and achieve better results in the exams
than otherwise.
110 F. Xia

References
1. Ally, M. (ed.): Mobile Learning: Transforming the Delivery of Education and Training
(2009)
2. Brown, E. (ed.): Mobile Learning Explorations at the Stanford Learning Lab. Speaking of
Computers, vol. 55. Board of Trustees of the Leland Stanford Junior University, Stanford
(2001)
3. Cavus, N., Ibrahim, D.: m-Learning: An Experiment in Using SMS to Support Learning
New English Language Words. British Journal of Educational Technology 40(1), 78–91
(2009)
4. Cui, G., Wang, S.: Adopting Cell Phones in EFL Teaching and Learning. Journal of
Educational Technology Development and Exchange 1(1), 69–80 (2008)
5. Levy, M., Kennedy, C.: Learning Italian via Mobile SMS. In: Kukulska-Hulme, A.,
Traxler, J. (eds.) Mobile Learning: A Handbook for Educators and Trainers. Taylor and
Francis, London (2005)
6. Prensky, M.: What Can You Learn From A Cell Phone? Almost Anything (2005),
Information on http://innovateonline.info/pdf/vol_issue5/
What_Can_You_Learn_from_a_Cell_PhoneAlmostAnything.pdf
7. Thornton, P., Houser, C.: Using Mobile Phones in English Education in Japan. Journal of
Computer Assisted Learning 21, 217–228 (2005)
Flood Risk Assessment Based on the Information
Diffusion Method

Li Qiong

School of Mathematics and Physics, Huangshi Institute of Technology,


Huangshi, Hubei, China

Abstract. According to the fact that the traditional mathematical statistical
model can hardly analyze flood risk issues when the sample size is small, this
paper puts forward a model based on the information diffusion method. Taking
Henan Province as an example, the risks of different flood grades are obtained.
The results show that this risk analysis method avoids the inaccuracy caused by
a small sample size; the estimations obtained by the method conform to the
actual disasters, and the method is satisfactory.

Keywords: information diffusion, flood, risk assessment.

1 Introduction

Flood disasters are more and more frequent in China. In ordinary flood risk
assessment, the probability statistics method is the main tool used to estimate the
exceedance probability of hydrological variables. This method has the advantage that its
theory is mature and its application is easy. But when it comes to solving practical
problems, its feasibility and reliability are questionable because fuzzy uncertainty is
not considered. When the sample is small, results based on classical statistical
methods are sometimes very unreliable. In fact it is rather difficult to collect
long sequences of extremum data, and the sample is often small. So we can use a fuzzy
mathematical method for comprehensive disaster risk evaluation. This paper uses
information diffusion, a fuzzy mathematics method, to establish a flood risk assessment
model for small samples and then applies it successfully to flood risk analysis in
Henan Province.

2 Information Diffusion

Information diffusion is a fuzzy mathematical set-valued method for samples, which
optimizes the use of the fuzzy information of samples in order to offset the information
deficiency ([1],[2],[3],[4]). The method can turn an observed sample into a fuzzy set,
that is, turn a single-point sample into a set-valued sample. The simplest model of
information diffusion is the normal diffusion model.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 111–117, 2011.
© Springer-Verlag Berlin Heidelberg 2011
112 L. Qiong

Information diffusion: Let X be a set of samples and V a subset of the universe. A
mapping \mu : X \times V \to [0,1], (x, v) \mapsto \mu(x, v), is called a kind of
information diffusion of X on V.

If the domain of the random variable is discrete, suppose it is U = \{u_1, u_2, \ldots, u_m\};
the conservation condition is

    \sum_{j=1}^{m} \mu(x, u_j) = 1, \quad \forall x \in X.

Let X = \{x_1, x_2, \ldots, x_n\} be a sample, and U = \{u_1, u_2, \ldots, u_m\} the
discrete universe of X. x_i and u_j are called a sample point and a monitoring point,
respectively. For each x_i \in X and u_j \in U, we diffuse the information carried by
x_i to u_j at gain f_i(u_j) by using the normal information diffusion shown in Eq. (1):

    f_i(u_j) = \exp\left[ -\frac{(x_i - u_j)^2}{2h^2} \right], \quad u_j \in U,    (1)

where h is called the normal diffusion coefficient, calculated by Eq. (2):

    h = \begin{cases}
        0.8146(b-a), & n = 5; \\
        0.5690(b-a), & n = 6; \\
        0.4560(b-a), & n = 7; \\
        0.3860(b-a), & n = 8; \\
        0.3362(b-a), & n = 9; \\
        0.2986(b-a), & n = 10; \\
        2.6851(b-a)/(n-1), & n \ge 11,
        \end{cases}    (2)

where b = \max_{1 \le i \le n}\{x_i\} and a = \min_{1 \le i \le n}\{x_i\}.

Let

    C_i = \sum_{j=1}^{m} f_i(u_j).    (3)

We obtain a normalized information distribution on U determined by x_i, shown in
Eq. (4):

    \mu_{x_i}(u_j) = \frac{f_i(u_j)}{C_i}.    (4)
Flood Risk Assessment Based on the Information Diffusion Method 113

For each monitoring point u_j, summing all the normalized information, we obtain the
information gain at u_j, which comes from the given sample X. The information gain is
shown in Eq. (5):

    q(u_j) = \sum_{i=1}^{n} \mu_{x_i}(u_j).    (5)

q(u_j) means that, with the information diffusion technique, we infer that there are
q(u_j) sample points (generally not an integer) in terms of statistical averaging at the
monitoring point u_j. Obviously q(u_j) is not usually a positive integer, but it is
certainly a number not less than zero. Let

    Q = \sum_{j=1}^{m} q(u_j),    (6)

where Q is the sum of the sample sizes of all q(u_j). Theoretically Q = n, but due to
numerical calculation error there is a slight difference between Q and n. Therefore, we
can employ Eq. (7) to estimate the frequency value of a sample falling at u_j:

    p(u_j) = \frac{q(u_j)}{Q}.    (7)
The frequency value can be taken as the estimation of its probability.
Apparently, the probability of transcending u_j should be

    P(u_j) = \sum_{k=j}^{m} p(u_k),    (8)

where P(u_j) is the required risk estimation value.
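The estimation procedure of Eqs. (1)–(8) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the function name and the handling of n < 5 are our assumptions.

```python
import numpy as np

def normal_diffusion_risk(samples, monitors):
    """Estimate frequency p(u_j) and exceedance probability P(u_j) for
    `samples` on the discrete universe `monitors` via normal information
    diffusion, following Eqs. (1)-(8)."""
    x = np.asarray(samples, dtype=float)
    u = np.asarray(monitors, dtype=float)
    n = len(x)
    b, a = x.max(), x.min()
    # Normal diffusion coefficient h, Eq. (2); n < 5 falls back to the
    # n >= 11 formula here (an assumption, the paper only covers n >= 5).
    table = {5: 0.8146, 6: 0.5690, 7: 0.4560, 8: 0.3860, 9: 0.3362, 10: 0.2986}
    h = table[n] * (b - a) if n in table else 2.6851 * (b - a) / (n - 1)
    # Eq. (1): diffuse each sample point to every monitoring point
    f = np.exp(-(x[:, None] - u[None, :]) ** 2 / (2 * h ** 2))
    # Eqs. (3)-(4): normalize each sample's diffused information
    mu = f / f.sum(axis=1, keepdims=True)
    # Eq. (5): information gain q(u_j) at each monitoring point
    q = mu.sum(axis=0)
    # Eqs. (6)-(7): frequency estimate p(u_j)
    p = q / q.sum()
    # Eq. (8): exceedance probability P(u_j) = sum over k >= j of p(u_k)
    P = np.cumsum(p[::-1])[::-1]
    return p, P
```

Calling the function with the 32 disaster degree values of Section 3 and the universe {0, 0.1, ..., 4.0} reproduces the kind of exceedance curve shown in Figure 1.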

3 Application Example

3.1 Flood Disaster Index

According to 41 years of observed series data from 1950 to 1990 in
Henan Province, we take the disaster area and direct economic loss as the disaster
degree indices, and by frequency analysis the floods are classified into four grades, as
shown in Table 1: small, medium, large and extreme flood.

Table 1. Henan flood disaster rating standard

Disaster level   Inundated area (hm2)   Direct economic losses (billion yuan)   Grade number
small flood      0~46.7                 0~9.5                                   1
medium flood     46.7~136.7             9.5~31.0                                2
large flood      136.7~283.3            31.0~85.0                               3
extreme flood    283.3~                 85.0~                                   4

Table 2. Disaster index based on the projection pursuit model
(X(1,i) = inundated area, hm2; X(2,i) = direct economic losses, billion yuan)

 i   X(1,i)   X(2,i)    Degree |  i   X(1,i)   X(2,i)    Degree
 1   38.70    7.900     1.369  | 17   157.30   38.600    2.486
 2   38.50    7.800     1.366  | 18   283.30   85.000    3.498
 3   32.10    6.500     1.315  | 19   556.90   67.100    3.967
 4   24.20    4.900     1.256  | 20   649.50   194.900   3.987
 5   36.40    7.400     1.350  | 21   602.30   180.700   3.979
 6   46.70    9.500     1.432  | 22   446.50   134.000   3.897
 7   97.60    21.700    1.895  | 23   694.90   208.500   3.992
 8   60.40    12.800    1.552  | 24   72.92    9.900     1.574
 9   112.60   25.200    2.033  | 25   148.13   20.656    2.156
10   56.20    11.800    1.515  | 26   203.92   27.521    2.559
11   80.60    17.600    1.736  | 27   179.10   24.858    2.389
12   136.70   31.000    2.258  | 28   375.46   94.927    3.726
13   259.10   76.100    3.363  | 29   301.24   47.836    3.233
14   200.10   54.400    2.915  | 30   141.97   116.439   3.368
15   280.10   83.800    3.481  | 31   279.84   121.127   3.699
16   236.10   67.600    3.209  | 32   172.06   51.619    2.750

To raise the grade resolution of flood disaster loss, a new model, the projection pursuit
(PP) model ([5]), is used for evaluating the grade of flood disaster, and the calculated
flood degree values are listed in Table 2.

3.2 Flood Risk Evaluation Based on Information Diffusion

Based on the disaster degree values of the 32 samples (see Table 2), the sample
point set is X = \{x_1, x_2, \ldots, x_{32}\}. The universe of discourse of the disaster
degree values, namely the monitoring point set, is taken as
U = \{u_1, u_2, \ldots, u_{41}\} = \{0, 0.1, 0.2, \ldots, 4.0\}.
The normalized information distribution of each x_i, that is, \mu_{x_i}(u_j), can be
obtained according to Eqs. (1)–(4); then, based on Eqs. (5)–(8), the disaster risk
estimate, namely the probability risk value, for Henan Province is calculated.
The relationship between the recurrence interval N and the probability p can be
expressed as N = 1/p; the exceedance probability curves of flood disaster degree value
are shown in Figure 1.

Fig. 1. The exceedance probability curves of flood disaster degree value based on information
diffusion and frequency analysis

According to the four-grade standard, we have (Chen 2009 [6]):

(a) If 1.0 ≤ H ≤ 1.5, then the disaster degree belongs to small (grade 1).
(b) If 1.5 < H ≤ 2.5, then it belongs to medium (grade 2).
(c) If 2.5 < H ≤ 3.5, then it belongs to large (grade 3).
(d) If 3.5 < H ≤ 4, then it belongs to extreme (grade 4).
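The four-grade standard maps directly to a small lookup function. This helper is ours, added for illustration; it is not part of the paper.

```python
def flood_grade(h):
    """Map a disaster degree value H (1.0 <= H <= 4.0) to its flood grade
    under the four-grade standard of Chen (2009)."""
    if h <= 1.5:
        return 1  # small flood
    if h <= 2.5:
        return 2  # medium flood
    if h <= 3.5:
        return 3  # large flood
    return 4      # extreme flood
```

Applied to the degree values of Table 2, sample 12 (H = 2.258) falls in grade 2 and sample 19 (H = 3.967) in grade 4, matching the rating standard of Table 1.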

The result in Figure 1 illustrates the risk estimation, i.e., the probability of exceeding
the disaster degree value. From Figure 1 we know that the risk estimation is 0.2745 when
the disaster index is 3.5; in other words, in Henan Province, floods exceeding a 3.5
degree value (extreme floods) occur every 3.64 years. Similarly, the probability of floods
exceeding a 2.5 degree value (large floods) is 0.5273, namely Henan Province suffers
floods exceeding that intensity every 1.90 years. This indicates the serious flood
situation in Henan Province in terms of both frequency and intensity. In Figure 1 the
estimated curve is compared to the frequency analysis based on the results of Jin et
al. ([5]). Figure 1 shows that our results are consistent with those of frequency
analysis. It also means that normal information diffusion is useful for analyzing the
probability risk of flood disaster. Because flood disasters belong to fuzzy events with
incomplete data, the proposed method is better than the frequency method for analyzing
the risk of flood disaster.
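The recurrence intervals quoted above follow directly from N = 1/p; a quick arithmetic check:

```python
# Recurrence interval N = 1/p for the two exceedance probabilities
# read off Figure 1 in the text.
p_extreme = 0.2745  # P(H >= 3.5), extreme floods
p_large = 0.5273    # P(H >= 2.5), large floods

n_extreme = 1.0 / p_extreme  # ~3.64 years
n_large = 1.0 / p_large      # ~1.90 years
```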

4 Conclusion

Floods occur frequently in China and cause great property losses and casualties. In
order to implement a compensation and disaster reduction plan, the losses caused by
flood disasters are critically important information for flood disaster managers.
This study develops a method of flood disaster risk assessment based on the information
diffusion method, and it can be easily extended to other natural disasters. Tests show
that the method is reliable and the results are consistent with the real values.

References

1. Huang, C.F.: Integration degree of risk in terms of scene and application. Stochastic
Environmental Research and Risk Assessment 23(4), 473–484 (2009)
2. Huang, C.F.: Information diffusion techniques and small-sample problem. International
Journal of Information Technology & Decision Making 1(2), 229–249 (2002)
3. Huang, C.F.: Risk Assessment of Natural Disaster: Theory & Practice, pp. 86–98. Science
Press, Beijing (2005)
4. Huang, C.F., Shi, Y.: Towards Efficient Fuzzy Information Processing: Using the Principle
of Information Diffusion. Physica-Verlag (Springer), Heidelberg (2002)
5. Jin, J.L., Zhang, X.L., Ding, J.: Projection Pursuit Model for Evaluating Grade of Flood
Disaster Loss. Systems Engineering - Theory & Practice 22(2), 140–144 (2002)
6. Chen, S.Y.: Theory and model of variable fuzzy sets and its application. Dalian University of
Technology Press, Dalian (2009)

7. Chen, S.Y.: Fuzzy recognition theory and application for complex water resources system
optimization. Jilin University Press, Changchun (2002)
8. Chen, S.Y.: Theory and model of engineering variable fuzzy sets - mathematical basis for
fuzzy hydrology and water resources. Journal of Dalian University of Technology 45(2),
308–312 (2005)
9. Jin, J.L., Jin, B.M., Yang, X.H., Ding, J.: A practical scheme for establishing grade model of
flood disaster loss. Journal of Catastrophology 15(2), 1–6 (2000)
Dielectric Characteristics of Chrome Contaminated Soil

Yakun Sun, Yuqiang Liu, Changxin Nai, and Lu Dong

Research Institute of Solid Waste Management,


Chinese Research Academy of Environmental Sciences,
No.8 Dayangfang BeiYuan Road,
Beijing, 100012, China
metalgod2008@yahoo.cn

Abstract. In order to research the feasibility of monitoring chromium
contaminated fields using the complex dielectric constant method, we designed an
experiment to compare soil complex dielectric constants under different
chromium pollution concentrations, soil water contents and void ratios.
The results show that the complex dielectric constant of contaminated soil
decreases obviously with increasing frequency, and when the frequency is
lower than 50 MHz, a significant change of the real and imaginary parts of the soil
complex dielectric constant can be observed. Moreover, with increasing
chromium pollutant concentration, water content and void ratio, both the real
and imaginary parts of the soil complex dielectric constant increase as well. From the
analysis of the relationship among soil dielectric constant, moisture content and
chromium pollution concentration, we built two soil dielectric constant models
for the real part and the imaginary part. Therefore, through comparison
of the real and imaginary parts of the complex dielectric constant, we can
evaluate and monitor chromium pollution.

Keywords: chrome contaminated soil, complex dielectric constant, dielectric
dispersion, electrical monitoring method.

1 Introduction

Recently, nearly all chromium contaminated soil is due to chromium residue, a kind of
hazardous waste containing Cr6+ that is produced in the process of chromium salt
production. The untreated chromium residue still amounts to 400 million tons, spread
across more than 20 provinces of the whole country [1]. Some chromium salt enterprises
closed down after chromium production, and the chromium waste residue was placed
directly in the open air without any windproof, waterproof or leakproof facilities,
which is how most of the domestic chromium contaminated sites formed. Some contaminated
sites are near surface water, and the leaching Cr6+ has caused serious groundwater,
surface water and soil pollution [1-3].
The first step of contaminated site restoration is to monitor the pollution
accurately. Currently, to monitor contaminated soil and groundwater, the basic way
is to collect samples for physical and chemical analysis; however, this traditional
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 118–122, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Dielectric Characteristics of Chrome Contaminated Soil 119

method may consume much time at high cost [4-6]. There are obvious shortcomings:
(1) the number of samples is limited, and it is difficult to achieve a thorough
understanding of the pollution conditions; (2) geological drilling could destroy the
original pollutant distribution and concentration in the ground, and easily make the
pollutants migrate to deeper layers of the ground; (3) the monitoring period is very
long, not suitable for long-term monitoring.
Because of the above problems, a rapid and effective monitoring method needs to be
developed for chromium contaminated sites. Geophysical studies show that, as a very
important parameter, the dielectric properties can be measured rapidly and
non-destructively [7], so the dielectric properties of contaminated soil may be used to
evaluate underground pollution conditions [8]. The dielectric constant, also known as
the permittivity, is a dimensionless factor that characterizes insulation material
properties; it measures the degree of charge polarization in an external electric field.
Previous studies of the dielectric properties of soil have investigated the relationship
between dielectric properties and water content or soil contaminants [9-11].
There is no report about the complex dielectric constant of chromium contaminated
soil. This study searches for the relationship between the complex dielectric constant
and the physicochemical properties (such as water content, pollutant concentration and
void ratio) of chromium contaminated soil at different frequencies.

2 Materials and Methods

2.1 Soil Sampling and Pretreatment

Random distribution methods were used in soil sampling. The main pollutants at the
contaminated sites are Na2CrO4 and Na2Cr2O7. To simulate the contaminated soil at the
sites, Na2CrO4 was added into unpolluted soil to make the soil samples. Each soil
sample is about 100 g after quartering. Table 1 shows the physical properties of the
soil samples, and Table 2 shows the leachate concentrations of the soil samples.
Table 1. Physical properties of soil sample

pH     sand w/%   silt w/%   clay w/%   ρ/(g/cm3)   CEC/(cmol/kg)   TOC/%   T/℃
7.61   23.0       63.3       13.7       2.62        7.61            8.42    20

Table 2. Leachate concentration of soil sample (mg/kg)

Na+    K+     Mg2+   Ca2+   Ba2+   SO4^2-   Cl-    CO3^2-
3.24   0.78   1.23   5.68   1.02   2.56     6.01   1.43
120 Y. Sun et al.

2.2 Experimental Analysis

The pollutant concentration, water content and frequency are the main factors affecting
the complex dielectric constant. A series of soil samples was prepared with different
concentrations of chromium contamination (50, 100, 150, 200, 500, 1000 mg/kg) and
different water contents (8, 15, 25%).
In the frequency range of 10 MHz to 200 MHz, the real and imaginary parts of the soil
samples were measured by an Agilent E5061A network analyzer.

3 Experimental Results

3.1 Water Content

Figure 1 shows the complex dielectric constants of the soil samples with different water
contents. Both the real and imaginary parts of the samples decrease as the frequency
increases. When the frequency is less than 50 MHz, the real and imaginary parts decline
sharply; above 50 MHz, the downward trend is smooth. The imaginary part is largely
affected by water content, while the real part is not significantly affected by water
content.

Fig. 1. Dielectric dispersion characteristics of 1000 mg/kg chrome contaminated soil

3.2 Concentration of Chromium Pollutants

Figure 2 shows the dielectric properties of samples with different pollutant
concentrations. Overall, as the frequency increases, the complex dielectric constant
gradually decreases. In the low frequency range (below 100 MHz), the real part rises
slightly with increasing pollutant concentration; above 100 MHz, the decreasing trend is
small, and the real part stays stable at 200 MHz. Water and air co-exist in the soil
pores, and the charge polarization at the gas-solid interface of the soil pores makes
the real part increase. The imaginary part rises with increasing pollutants when the
frequency is below 50 MHz.

Fig. 2. Dielectric dispersion characteristics under different chromium pollution concentrations

3.3 Dielectric Model of Soil

Based on the above analysis, considering the impact of water content, void ratio and
other factors on the soil dielectric constant, the real part can be represented by
extending Topp's formula [9]:

    \varepsilon' = m\theta_v + n\theta_v^2 + p\theta_v^3 + k\rho_b,    (1)

where \varepsilon' is the real part of the dielectric constant, \theta_v is the
volumetric water content of the soil, and \rho_b is the bulk density of the soil.
According to the experimental data, the formula is given as

    \varepsilon' = 9.548 \times 10^{-4}\theta_v - 7.605 \times 10^{-5}\theta_v^2
                 + 1.951 \times 10^{-6}\theta_v^3 + 8.694 \times 10^{-2}\rho_b.    (2)

The relationship between pollution concentration and the imaginary part can be
represented by the empirical formula [12]:

    \varepsilon'' = K_1 \theta_v^{a} w^{b} + K_2,    (3)

where \varepsilon'' is the imaginary part, \theta_v is the volumetric water content of
the soil, and w is the pollution concentration. According to the experimental data, this
equation is given as

    \varepsilon'' = 4.312\,\theta_v^{0.669} w^{0.546} + 10.586.    (4)

Therefore, taking into account both the real part and the imaginary part, the dielectric
method can be used in the evaluation of chromium polluted soil.
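The two fitted models, Eqs. (2) and (4), can be evaluated directly. This sketch uses the coefficients as printed in the text; the function names are ours, and the formulas should be checked against the original paper before reuse.

```python
# Evaluate the fitted dielectric models, Eqs. (2) and (4).
# theta_v: volumetric water content of the soil
# rho_b:   bulk density of the soil
# w:       chromium pollution concentration (mg/kg)
def real_part(theta_v, rho_b):
    """Real part of the dielectric constant, extended Topp formula, Eq. (2)."""
    return (9.548e-4 * theta_v
            - 7.605e-5 * theta_v ** 2
            + 1.951e-6 * theta_v ** 3
            + 8.694e-2 * rho_b)

def imag_part(theta_v, w):
    """Imaginary part of the dielectric constant, empirical model, Eq. (4)."""
    return 4.312 * theta_v ** 0.669 * w ** 0.546 + 10.586
```

As the text observes, the imaginary part grows with both water content and pollutant concentration, which is what makes it the useful indicator for chromium monitoring.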

4 Conclusions

Under different water contents and pollution concentrations, both the real part and the
imaginary part of the soil samples' dielectric constant show significant dispersion.
The complex dielectric constant changes dramatically when the frequency is below
50 MHz; therefore, 10-50 MHz is the suitable frequency range for dielectric constant
measurement.
As the real part is largely affected by the water content, the imaginary part is
necessary for the evaluation of chromium pollution in the soil, and the complex
dielectric method can be used in chromium pollution monitoring.

Acknowledgement. This research was financed by the Chinese central commonweal


research institute basic scientific research special project (2009KYYW04), and by the
National High-tech Research and Development (863) Program (2007AA061303).

References

1. Gu, C., Shan, Z., Wang, R.: Investigation on pollution of chromic slag to local soil. Mining
Safety & Environmental Protection 32, 18–20 (2005) (in Chinese)
2. Li, J., Zhu, J., Xie, M.: Chromium and health. Trace Elements Science 4, 8–10 (1997) (in
Chinese)
3. Zhang, H., Wang, X., Chen, C.: Study on the polluting property of chrome residue
contaminated sites in plateau section. Chinese Journal of Environmental Engineering 4,
915–918 (2010) (in Chinese)
4. Cheng, Y., Yang, J., Zhao, Z.: Status and development of environmental geophysics.
Progress in Geophysics 22, 1364–1369 (2007) (in Chinese)
5. Li, Z., Nai, C., Nian, N.: Application and prospect of physical exploring technology for solid
waste. Environmental Science & Technology 29, 93–95 (2006) (in Chinese)
6. Kaya, A., Fang, H.Y.: Identification of contaminated soils by dielectric constant and
electrical conductivity. Journal of Environmental Engineering 123, 169–177 (1997)
7. Campbell, J.E.: Dielectric properties and influence of conductivity in soils at one to fifty
megahertz. Soil Science Society of America Journal 54, 332–341 (1990)
8. Thevenayagam, S.: Environmental soil characterization using electric dispersion. In:
Proceedings of the ASCE Special Conference of the Geoenvironment 2000, pp. 137–150.
ASCE, New York (2000)
9. Topp, G.C., Davis, J.L., Annan, A.P.: Electromagnetic determination of soil water content:
measurements in coaxial transmission lines. Water Resources Research 16, 574–582 (1980)
10. Dobson, M.C., Ulaby, F.T., Hallikainen, M.T., et al.: Microwave dielectric behavior of wet
soil. Part II: Dielectric mixing models. IEEE Transactions on Geoscience and Remote
Sensing 23, 35–46 (1985)
11. Arulanandan, K.: Electrical dispersion in relation to soil structure. Journal of the Soil
Mechanics and Foundations Division 99, 1113–1133 (1973)
12. Francesco, S., Giancarlo, P., Raffaele, P.: A strategy for the determination of the dielectric
permittivity of a lossy soil exploiting GPR surface measurements and a cooperative target.
Journal of Applied Geophysics 67, 288–295 (2009)
Influences of Climate on Forest Fire
during the Period from 2000 to 2009
in Hunan Province

ZhiGang Han1,2, DaLun Tian1,2, and Gui Zhang1


1 Central-South University of Forestry and Technology, 498 Shaoshan South Road,
Changsha, 410004, China
2 National Engineering Laboratory of Southern Forestry Ecological Applied Technology,
498 Shaoshan South Road, Changsha, 410004, China
gzhzg@163.com, csfuywd@hotmail.com, csfu3s@163.com

Abstract. Climate change affects the dynamics of forest fire. This paper aims to
analyze the influence of climate on forest fire in Hunan Province by using
multiple linear regression and correlation analysis. The results showed that the
average affected forest area and the forest loss caused by forest fire have a
significant linear relationship with climate and can be expressed by multiple
regression equations. According to the meteorological factors, we can forecast the
trend of forest fire development in most areas of Hunan. A large amount of fuel
formed when ice loading broke trees during the extreme freezing event, which led
to a remarkable increase in the occurrence of forest fire. The correlation
coefficient between the ice thickness during the freezing disaster and the
frequency of forest fire in March after the disaster reached 0.798, a significant
correlation.

Keywords: Hunan, forest fire, climate, multivariate regression analysis,
correlation analysis, freezing.

1 Introduction
Climate is an important factor in the dynamic change of forest fire. Climate change can
alter the frequency, burned area and cycle of forest fires, and can thereby change the
structure and function of forest landscapes [1~3]. With the population increasing, the
influence of human activities on forest fire is growing, but climate is still the
dominant factor in forest fire dynamics [4,5]. Since the early 1900s, the global average
surface temperature has increased by 0.74 ℃, and the average rate of temperature
increase over the past 50 years is almost twice that over the past 100 years [6]. As the
globe warms, extreme weather events such as El Niño, droughts, floods, thunderstorms,
hail, storms, high temperatures and sandstorms increase in both frequency and intensity.
The behavior of forest fire under extreme weather conditions is key to research on the
relationship between climate and forest fire. Scientists in various countries have
studied the relationship between abnormal climate and forest fire; the results show
that, along with global warming, the accelerating
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 123–130, 2011.
© Springer-Verlag Berlin Heidelberg 2011
124 Z. Han, D. Tian, and G. Zhang

frequency of extreme weather has changed forest fire danger and increased the
possibility of forest fires, including extraordinarily serious forest fires, in the
extreme weather areas [7~12].
The study of the relationship between climate change and forest fire has great
practical significance for forestry production and is an important basis for national
and local long-term forest management strategies [13].
Hunan Province has abundant forest resources and frequent forest fires. In recent
years, the annual temperature has risen, rainfall has declined gradually and rare
freezing weather has occurred in Hunan. These factors have resulted in an increase in
forest fires and a decrease in industrial and agricultural production, a serious threat
to people's lives and property. This paper takes Hunan Province as the research
area, studies and quantifies the relations between various meteorological factors and
forest fire factors, and explores the occurrence and development of forest fire in
different counties and cities of Hunan under the influence of climatic conditions, which
has great significance for the proper management of forest fire.

2 Data Sources and Methods


2.1 Research Area Overview
Hunan Province is located between 108°47′ and 114°15′ E and between 24°38′ and
30°08′ N. It has a subtropical monsoon climate, with a mild climate, concentrated
rainfall and rich sunlight resources. The yearly mean temperature is around
16~18.5 ℃, the annual average duration of sunshine is 1250~1850 hours, and the
yearly precipitation is 1200~1700 mm.
Hunan covers a total area of 211,875 square kilometers and is characterized by
abundant mountains and hills. It is an important agricultural province in the south of
China. Hunan is troubled by frequent forest fires: 18,782 fires during 2000~2009. Due
to the huge agricultural population, productive fire is the main cause of forest fire,
accounting for 62% of the total number, and unproductive fire for 31%; lightning and
other natural causes account for few fires, only 20 during 2000~2009. Winter and
spring are the seasons of forest fire; fires occur most frequently from February to
April.

2.2 Data Sources


Wildfire data come from the statistical data of the Hunan forest public security bureau
and the Hunan forest fire prevention headquarters, which include the forest fire records
of 101 counties from 2000 to 2009. Because of missing data on forest fire affected area,
tree loss and economic loss, this paper analyzed the means of the various parameters of
the statistical data at hand.
The meteorological data come from the weather data from 2000 to 2009 of the 46
meteorological stations distributed evenly in Hunan. Missing data were filled in by
interpolation, and finally the yearly and monthly average values of temperature,
precipitation, humidity, sunshine and wind speed were obtained.
Influences of Climate on Forest Fire during the Period from 2000 to 2009 125

2.3 Analysis Method

Multiple meteorological factors and forest fire factors are related; that is, several
meteorological factors together influence forest fire occurrence. Therefore, multiple
linear regression analysis is used to analyze their mutual relationship, with each
forest fire factor variable as the dependent variable and the meteorological factor
variables as the independent variables. The dependent variable and independent
variables are, respectively:

    Y = \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_p \end{pmatrix}, \quad
    X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}, \quad
    \beta = \begin{pmatrix}
        \beta_{11} & \beta_{12} & \cdots & \beta_{1p} \\
        \beta_{21} & \beta_{22} & \cdots & \beta_{2p} \\
        \vdots     & \vdots     &        & \vdots     \\
        \beta_{m1} & \beta_{m2} & \cdots & \beta_{mp}
    \end{pmatrix}.    (1)

Among them, Y \sim N_p(\mu_Y, \Sigma_Y), y = (Y_1, Y_2, \ldots, Y_p)^T, and X is a
normal vector or a general vector. Assume that Y and X have a linear regression
relationship, namely that at the point X the expectations (averages) of Y are:

    y_1 = \beta_{01} + \beta_{11} x_1 + \beta_{21} x_2 + \cdots + \beta_{m1} x_m,
    y_2 = \beta_{02} + \beta_{12} x_1 + \beta_{22} x_2 + \cdots + \beta_{m2} x_m,    (2)
    \vdots
    y_p = \beta_{0p} + \beta_{1p} x_1 + \beta_{2p} x_2 + \cdots + \beta_{mp} x_m.

This is called the linear regression equation of Y on X [13].
Taking the 101 counties of Hunan Province as units, four indicators were selected as
dependent variables: the frequency of forest fire, the average affected forest area
(hm2/fire), the average forest tree loss (m3/fire) and the average economic loss (ten
thousand RMB/fire); seven indicators were selected as independent variables: annual
average temperature, annual precipitation, annual relative humidity, annual average
wind speed, annual sunshine hours, stumpage stock and regional altitude. The paper
extracted 280 sets of data from the statistical data, and carried out multiple linear
regression analysis, regression effect tests and prediction error tests.
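The multivariate least-squares step described above can be sketched with NumPy. The data below are random placeholders standing in for the 280 observation sets, so the fitted coefficients are illustrative only; the paper's actual estimates appear in Section 3.

```python
import numpy as np

# Illustrative multivariate linear regression: fit each of the 4
# forest-fire indicators (Y1..Y4) on the 7 meteorological/environmental
# predictors (x1..x7), as in Eqs. (1)-(2).
rng = np.random.default_rng(0)
n_obs, n_pred, n_resp = 280, 7, 4
X = rng.normal(size=(n_obs, n_pred))   # placeholder predictor data
Y = rng.normal(size=(n_obs, n_resp))   # placeholder response data

# Prepend the intercept column, then solve Y = [1 X] @ B by least squares.
X1 = np.hstack([np.ones((n_obs, 1)), X])
B, _, _, _ = np.linalg.lstsq(X1, Y, rcond=None)

# Multiple correlation coefficient R for each response (regression effect).
Y_hat = X1 @ B
ss_res = ((Y - Y_hat) ** 2).sum(axis=0)
ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
R = np.sqrt(1 - ss_res / ss_tot)
```

Each column of `B` holds the intercept β0k and slopes β1k..βmk for one indicator, matching one row of Eq. (2); the R values play the role of the regression coefficients reported with Eqs. (3)-(6) below.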
There are some exceptions in the statistical data, such as the incidental data (forest
fires increased sharply) during the special freezing climate period and the data of
areas with very few fires (an average annual number of forest fires of 0 to 3),
including Hengyang, Changsha, Xiangyin, Yuanjiang, Anxiang and Huarong. As these data
cannot reflect the inherent relationship between meteorological factors and forest fire
factors under normal climatic conditions, and thus affect the accuracy of the model,
they were removed beforehand.
In early 2008, Hunan was hit by a freezing disaster, and the number of forest fires
increased dramatically after the disaster. The climate data, such as temperature,
relative humidity and ice thickness during the disaster, and the regional forest fire
data for February and March were collected and studied. Correlation and comparative
analyses of the forest fire data for February and March against the meteorological data
during the disaster were carried out respectively.

3 Results and Analysis

3.1 Multiple Regression Analysis of Forest Fire and Climate

The frequency of forest fire, the average affected forest area, the average forest tree
loss and the average economic loss are expressed by Y1, Y2, Y3 and Y4, respectively;
x1, x2, x3, x4, x5, x6 and x7 stand for annual average temperature, annual
precipitation, annual sunshine hours, annual average relative humidity, annual average
wind speed, altitude and stumpage, respectively. The relationships between forest fire
factors and meteorological and environmental factors calculated by multiple linear
regression analysis are as follows:
(1) Regression equation of forest fire number and meteorological factors:

Y1 = 42.6291 + 5.433x1 − 0.0075x2 − 0.008x3 + 0.2502x4 − 4.9669x5 − 0.0046x6,  (3)

R = 0.149281, F = 1.0371, p = 0.4014.

(2) Regression equation of average affected forest area and meteorological factors:

Y2 = 8.0579 + 0.2625x1 − 0.0013x2 + 0.004x3 + 0.0946x4 − 0.4996x5 + 0.0011x6,  (4)

R = 0.815992, F = 90.6637, p = 0.001.

(3) Regression equation of average tree loss and meteorological factors:

Y3 = 1.3983 + 1.1236x1 − 0.0138x2 + 0.0203x3 + 0.4686x4 − 0.0624x5 + 0.0035x6,  (5)

R = 0.832049, F = 102.3741, p = 0.001.

(4) Regression equation of average economic loss and meteorological factors:

Y4 = 3.5646 − 0.1853x1 + 0.003x2 + 0.001x3 − 0.0046x4 + 0.1456x5 + 0.0002x6,  (6)

R = 0.092729, F = 0.3946, p = 0.8822.


In the equations above, all estimated coefficients of stumpage stock (x7) are zero, indicating that county-level stumpage stock has no linear relationship with the forest fire factors. The results of the four regression models are as follows. For the number of forest fires (Y1) and the economic losses (Y4), the regression coefficient R and the F value are too small and p is not at the significant level (p > 0.05), so the regression relationship is not obvious. The reason is that forest fire sources in Hunan are mainly productive and non-productive fire use: fires caused by human activities account for more than 93% of the total, so there is no significant linear relationship between the annual number of fires and meteorological factors across counties and cities. Meanwhile, fire economic loss statistics are mainly based on fire suppression expenditures and estimates of other losses, which likewise show no significant correlation with meteorological factors.
The regression coefficient R between the average affected forest area (Y2), the average forest loss (Y3) and the meteorological factors exceeds 0.8, F is large, and the relationship is significant (p < 0.05).

Influences of Climate on Forest Fire during the Period from 2000 to 2009 127

We took 20 random sample groups from the statistics (not including the modeling data) to further test the two regression equations; the results are shown in Table 1 and Table 2. The average difference between the predicted and actual sampling values for the affected-forest-area model is 0.183, and the mean difference for the forest-loss model is 7.5242. The predicted and actual values are close to each other, so regression models (4) and (5) can accurately predict the average affected forest area and the average forest loss.

Table 1. Validation of regression equation for average area of forest damage and meteorological data

The average area of affected forest (hm2/time)   Mean     Maximum    Minimum
Actual value                                     3.205    4.1        1.4
Predictive value                                 3.388    4.0379     2.4282
Difference                                       0.183    -0.23592   2.0582

Table 2. Validation of regression equation for average forest loss and meteorological data

The average forest loss (m3/time)   Mean        Maximum    Minimum
Actual value                        59.01947    69.5924    49.5288
Predictive value                    66.7947     60.17163   48.9252
Difference                          7.5242      -9.8265    -1.13448

3.2 Special Freezing Climate Influence on Forest Fire

In recent years, under the influence of global climate change, the frequency of extreme climate events in Hunan has been increasing, with a significant impact on forest fires. In particular, in early 2008 (from 13 January to 3 February), low-temperature freezing weather covered a large area of Hunan, increasing the probability of forest fire (Fig. 1).

According to the correlation analysis between the fire frequency of the affected areas and temperature during the ice disaster in February and March 2008, the correlation coefficient between each region's fires and temperature is 0.095 in February and 0.352 in March. Meanwhile, historical data show that the minimum temperature in the Changsha area reached −9.5 °C on 31 January 1969 and −11.3 °C on 9 February 1972, and the minimum temperature in Chenzhou reached −9 °C on 11 January 1955 [14]. Although the historical record shows district temperatures below those of the same period in 2008, neither such large-scale ice freezing nor an increase in fire frequency caused by large-scale forest damage occurred then. This indicates that temperature alone is not the key factor in the tree losses caused by wildfires after the ice disaster.

[Figure: two bar charts of monthly forest fire counts (0–2500 fires): a) forest fires in previous years; b) forest fires in 2008]

Fig. 1. The number of forest fires in previous years (monthly average) and in 2008 (each month)

The correlation analysis between the fire frequency of the affected areas and relative humidity in February and March 2008 shows that the correlation coefficient between each region's fires and relative humidity is 0.171 in February and 0.268 in March; this correlation is not significant either.

The sharp rise in forest fire frequency after the freezing disaster is attributed to the ice crystals that formed and accumulated under the special climate conditions of the freeze. These mechanically damaged branches and trunks, producing large amounts of combustible fuel. Ice crystals form under the specific climate condition of near-saturated relative humidity combined with prolonged moderately low temperature. Therefore, the influence of climate on forest fire during the freeze disaster can be studied by analyzing the relationships among ice crystals, climate and forest fire.

According to the correlation analysis between the ice thickness of each area and climate conditions, the correlation coefficient between ice thickness and the number of days with relative humidity ≥ 85% is 0.59; with relative humidity ≥ 85% and temperature below 0 °C it is 0.648; and with relative humidity ≥ 85% and temperature between −1 and 0 °C it is 0.75.
The number of forest fires in February and March 2008 and the ice thickness during the freezing disaster were used for correlation analysis (Table 3). The results show that the correlation coefficient between the number of forest fires and the ice thickness is 0.596 in February and 0.798 in March; the March correlation is significant.

The above analyses indicate that the freezing disaster mainly affects forest fire through the formation of ice crystals under appropriate climatic conditions (relative humidity ≥ 85% and temperature kept between −1 and 0 °C). Under these conditions, the longer the climate period lasts, the thicker the ice grows. Ice accumulation damages trees and produces combustible fuel, so the probability of forest fire rises correspondingly; the ice thickness and the number of forest fires after the disaster are positively correlated.

Table 3. Thickness of ice accumulated around electric wires and the number of forest fires during the 2008 ice disaster

Site                                 Chenzhou   Yongzhou   Changsha   Loudi   Yiyang   Changde
Thickness of ice (mm)                341        335        250        174     148      34
Number of forest fires in February   70         65         36         55      48       59
Number of forest fires in March      376        167        232        190     102      46
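The March correlation reported in the text can be reproduced directly from Table 3 with a plain Pearson correlation. A minimal sketch (the February figure of 0.596 was presumably computed over more regions than the six sites listed, so it does not follow from Table 3 alone):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ice_mm = [341, 335, 250, 174, 148, 34]       # Table 3, Chenzhou .. Changde
fires_feb = [70, 65, 36, 55, 48, 59]
fires_mar = [376, 167, 232, 190, 102, 46]

r_feb = pearson(ice_mm, fires_feb)
r_mar = pearson(ice_mm, fires_mar)           # ~0.798, as quoted for March
```

pearson(ice_mm, fires_mar) returns about 0.798, matching the value quoted for March.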

After the freezing disaster, the number of forest fires in Hunan was 1030 in February and rose sharply to 2744 in March. Comparing the ice-thickness correlations for February and March shows that the impact of the freezing disaster on fire numbers is delayed. Counting the fires in ten-day periods, there were 85 fires from 1 to 10 February, and 472 and 473 fires from 11 to 20 February and from 21 to 29 February respectively. From 1 to 10 March, the number of forest fires reached 2300, 84% of the March total. It can be concluded that 1–10 March was the peak period of forest fire influenced by the freezing disaster, with a lag of about one month: after the disaster the climate warmed and human activities gradually resumed, and together with the large amount of combustible fuel formed during the freezing period, a large number of forest fires finally occurred.

Fitting the ice thickness to the number of forest fires in March in the affected areas, with y the number of fires and x the ice thickness, gives the curve y = 2.9674x^0.7672, with a confidence level of 95% and R2 = 0.8241.
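The power curve y = 2.9674x^0.7672 is a standard log–log least-squares fit. Assuming it was fitted to the six sites of Table 3 (an assumption on our part; the paper does not state the fitting sample), the published coefficients are reproduced closely:

```python
import math

ice_mm = [341, 335, 250, 174, 148, 34]       # Table 3: ice thickness (mm)
fires_mar = [376, 167, 232, 190, 102, 46]    # Table 3: March fire counts

# Linearize y = a * x**b as ln y = ln a + b * ln x and fit by least squares.
lx = [math.log(x) for x in ice_mm]
ly = [math.log(y) for y in fires_mar]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
     / sum((x - mx) ** 2 for x in lx))
a = math.exp(my - b * mx)
```

This yields b ≈ 0.767 and a ≈ 2.97, in close agreement with the published fit y = 2.9674x^0.7672.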

4 Conclusion
As more than 93% of fires in Hunan are man-made, there is no significant linear relationship between climate and the number of forest fires. However, the average affected forest area and the average forest loss do have a linear relationship with meteorological factors that can be expressed by multiple regression equations. Based on the meteorological factors, the trend of forest fire development under general climatic conditions can be forecast for most areas of Hunan, although the model does not apply to areas with very few forest fires or to frozen areas.

Special freezing climate largely affects forest fire through the ice crystals formed under appropriate climate conditions: ice accumulation damages trees and produces combustible fuel, so the probability of forest fire increases correspondingly, and the number of forest fires surged greatly in February and March after the freezing disaster. The correlation coefficient between the ice thickness during the freezing disaster and the number of forest fires in March is 0.798.

The impact of the freezing disaster on forest fire is delayed by about one month. Using this delay to intensify combustible fuel removal and to standardize forestry activities after a disaster is therefore important and effective for forest fire management.

Acknowledgment. This work was carried out under the Natural Science Foundation program of Hunan Province, "Studies on the Influences of Global Climate Change on the Spatial-temporal Pattern of Forest Fire in Hunan".

References
1. Mouillot, F., Rambal, S., Joffre, R.: Simulating climate change impacts on fire frequency and vegetation dynamics in a Mediterranean-type ecosystem. Global Change Biology 8(5), 423–437 (2002)
2. McKenzie, D., Gedalof, Z., Peterson, D.L., Mote, P.: Climatic change, wildfire, and conservation. Conservation Biology 18(4), 890–902 (2004)
3. Tian, X., Shu, L., Wang, Y.: Forest fire and climatic change review. World Forestry Research 19(5), 38–42 (2006)
4. Mollicone, D., Eva, H.D., Achard, F.: Ecology: human role in Russian wild fires. Nature 440, 436–437 (2006)
5. Shu, L., Tian, X.: The forest fire conditions in recent 10 years. World Forestry Research 11(6), 31–36 (1998)
6. WMO: Press Release No. 805, http://www.wmo.int/pages/index_zh.html
7. Williams, A.A.J., Karoly, D.J.: Extreme fire weather in Australia and the impact of the El Niño–Southern Oscillation. Australian Meteorological Magazine 48, 15–22 (1999)
8. Wang, S.: The study of the causes and regularity of forest fire on a large time scale and medium-term prediction theory. Technology Metallurgica (2), 47–50 (1994)
9. Zhao, F., Shu, L., Tian, X.: Changes of forest combustible fuel dry conditions in the Daxing'anling forest region of Inner Mongolia under climate warming. Ecological Journal 29(4), 1914–1920 (2009)
10. Ding, Y., Ren, G., Shi, G.: National Climate Change Assessment Report (I): Chinese climate change history and future trends. Climate Change Review 2(1), 3–8 (2006)
11. Zhang, T.: The impact analysis and suggestions of the Guangdong freeze disaster on forest fire. Forestry Survey Planning 33(5), 79–84 (2008)
12. Zhao, F., Shu, L.: The study of climatic abnormals' impact on forest fire. Forest Fire Prevention (1), 21–22 (2007)
13. Yuan, Z., Song, S.: Multiple Statistical Analysis. Science Press, Beijing (2009)
14. Liu, X., Tan, Z., Yuan, Y.: The cause of the Hunan freezing disaster weather damage to forests. Forestry Science 44(11), 134–140 (2008)
Numerical Simulation for Optimal Harvesting Strategies
of Fish Stock in Fluctuating Environment

Lulu Li¹, Wen Zhao¹, Lijuan Cao², and Hongyan Ao¹

¹ College of Life Science and Biotechnology, Dalian Ocean University, Dalian, China, 116024
² Institute of Mechanical Engineering, Dalian Ocean University, Dalian, China, 116024
{zhaowen,clj}@dlou.edu.cn

Abstract. The population size of fish stock is affected by the variability of its
environment, both biologic and economic. The classical logistic growth equation
is applied to simulate fish population dynamics. Environmental variation was
included in the optimization of harvest to obtain a relation in which the maximum
sustainable yield and biomass varied as the environment varied. The fluctuating
environment is characterized by the variation of the intrinsic growth rate and
environmental carrying capacity. The stochastic properties of the environment variables are simplified as normal distributions. The influence of the stochastic properties of environment variables on the population size of fish stock is discussed. The resulting relation can be applied to the management of fisheries at optimum levels in a fluctuating environment.

Keywords: fish stock, fluctuating environment, optimal harvesting strategies, normal distribution.

1 Introduction

Fish growth is a major process of fish biology and is part of the information necessary to estimate stock size and fishing mortality in stock assessment models. Modeling the growth of fishes, crustaceans and mollusks has received considerable attention in studies of population dynamics and management of wild and cultivated species [1].
Knowledge of the size of a fish population is fundamentally important in the
management of fisheries. The measurement of these populations is difficult and fishery
scientists have developed a large body of both theory and practical experience that
bears on the problem[2]. Bousquet investigated the biological reference points, such as
the maximum sustainable yield (MSY), in a common Schaefer (logistic) surplus
production model in the presence of a multiplicative environmental noise. This type of
model is used in fisheries stock assessment as a firsthand tool for biomass modeling[3].
Die reviewed some methods proposed to calculate maximum sustainable yield [4].
Wang used the Novikov theorem and the projection operator method, and obtained the
analytic expressions of the stationary probability distribution, the relaxation time, and
the normalized correlation function of this system[5]. Bio-economic fisheries models,
depicting the economic and biological conditions of the fishery, are widely used for the

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 131–136, 2011.
© Springer-Verlag Berlin Heidelberg 2011
132 L. Li et al.

identification of Pareto-improving fisheries policies. Sustainable development may be characterized by three dimensions: the ecological, the social, and the economic. The main objective of this paper is to show that a fluctuating environment can affect the population size of fish stock, the maximum sustainable yield and the optimal harvesting strategies.

2 Estimation of Sustainable Yield from Biomass Data

Numerous models have been introduced to describe the growth of biological systems. These variously address population dynamics, modelled discretely or, for large populations, mostly continuously; others model the physical growth of some property of interest for an organism or organisms. The rate of change of fish stock dx/dt is determined by natural reproductive dynamics and harvesting. The functional relationship commonly used to represent the natural growth rate of fish stock is the logistic model, which extends the simplest model of exponential growth with a carrying-capacity term [6]:

dx/dt = r·x·(1 − x/K).  (1)
Here r is the intrinsic growth rate, K is the environmental carrying capacity, and x is the population biomass. Eq. (1) has the solution

x(t) = K·x0 / [(K − x0)·exp(−r·t) + x0],  (2)

where x0 is the population size at time t = 0. The main features of the logistic model are as follows:

1) The optimal size of the fish population, at which the population has its maximum growth rate:

x_opt = K/2.  (3)

2) The maximum growth rate, also called the maximum sustainable yield (MSY):

(dx/dt)_max = rK/4.  (4)

3) The relative growth rate declines with increasing population and reaches zero when the population size reaches its maximum:

(dx/dt)/x = r·(1 − x/K).  (5)
Numerical Simulation for Optimal Harvesting Strategies 133

[Figure: relative growth rate declining linearly with relative population size, plotted for r = 0.3, 0.5 and 0.7]

Fig. 1. Variation of relative growth rate versus relative population size (x/K)
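Equations (1)–(5) can be checked numerically: the closed-form solution (2) should track a direct integration of (1), and the growth rate at x = K/2 should equal the MSY of Eq. (4). A minimal sketch with illustrative parameter values:

```python
import math

def logistic_solution(t, r, K, x0):
    """Closed-form solution (2) of the logistic equation (1)."""
    return K * x0 / ((K - x0) * math.exp(-r * t) + x0)

def euler_logistic(T, r, K, x0, dt=1e-3):
    """Forward-Euler integration of dx/dt = r*x*(1 - x/K)."""
    x, t = x0, 0.0
    while t < T:
        x += dt * r * x * (1 - x / K)
        t += dt
    return x

r, K, x0 = 0.75, 1000.0, 200.0           # illustrative values
x_exact = logistic_solution(10.0, r, K, x0)
x_num = euler_logistic(10.0, r, K, x0)   # should agree with x_exact
msy = r * K / 4                          # Eq. (4)
rate_at_half = r * (K / 2) * (1 - 0.5)   # growth rate at x = K/2, equals MSY
```

The numerical trajectory approaches K, and the growth rate evaluated at x = K/2 coincides with rK/4, as Eqs. (3) and (4) state.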

3 Optimal Fish Stock Harvesting Strategies

The rate of change of fish stock dx/dt is determined by natural reproductive dynamics and harvesting [7]:

dx/dt = f(x, t) − h(e, x, t),  (6)

where f(x, t) is the natural growth rate of the fish stock, which depends on the current population size x, and h(e, x, t) is the quantity harvested per unit of time. The net growth rate dx/dt is obtained by subtracting the harvest rate h(e, x, t) from the natural growth rate f(x, t).
The rate of harvest h(e, x, t) is assumed proportional to the aggregate standardized fishing effort e and the stock biomass x; that is [8],

h(e, x, t) = q·e(t)·x(t),  (7)

where q is the catchability coefficient. Once average fishing power has been calculated, the standardized fishing effort is computed as [9]

e(t) = P·d·n,  (8)

where e is the standardized fishing effort, P represents the average relative fishing power, d is the average number of fishing days at time t, and n denotes the number of vessels at time t. The fishing cost is evaluated by

C(e, x, t) = c·e(t),  (9)

where C(e, x, t) is the total cost function. An economically optimal long-term strategy for a commercial fishery is for each operator to maximize the present-value profit [10]:


max π = ∫₀^∞ [p·h − C]·exp(−δt) dt = ∫₀^∞ [p·q·e(t)·x(t) − c·e(t)]·exp(−δt) dt.  (10)

In this formulation, π is the ultimate performance measure of the fishery, p is the output price of the fishery, and δ is the discount rate. T is the starting catching time, x_opt is the optimal size of the fish population and h_opt is the optimal rate of harvest. The optimal rate of harvest is expressed as

h_opt = q·e_opt·x_opt = x_opt·r·(1 − x_opt/K) = rK/4,  (11)

where h_opt, the optimal rate of harvest, is also the maximum sustainable yield (MSY). The MSY is achieved when fishing effort is adjusted to the optimum, which is half the biotic potential; the biotic potential is the instantaneous per-capita growth of a population not limited by food or other environmental constraints. The optimal fishing effort is deduced as

e_opt = r·(1 − x_opt/K)/q = r/(2q).  (12)
From the above analysis we find that the fish stock has its maximum growth rate when the population size equals half of the environmental carrying capacity. Maximum sustainable yield was postulated by Schaefer for a fishery on an isolated population of fish growing according to the logistic law. When a fished population is found to be below half of the pristine population (i.e., the population prior to fishing), the population is termed overfished; when a population is overfished, the catch per unit effort (CPUE) decreases as the fishing effort increases.
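With illustrative values of r, K and the catchability coefficient q (values are ours, not from the paper), the optimum quantities of Eqs. (3), (11) and (12) can be checked to be mutually consistent:

```python
r, K, q = 0.75, 1000.0, 0.01   # illustrative parameter values

x_opt = K / 2                  # optimal population size, Eq. (3)
e_opt = r / (2 * q)            # optimal fishing effort, Eq. (12)
h_opt = q * e_opt * x_opt      # optimal harvest rate, Eq. (11)
msy = r * K / 4                # maximum sustainable yield, Eq. (4)
# h_opt and msy coincide: harvesting at e_opt on a stock held at K/2
# removes exactly the surplus production.
```

Any other effort level either leaves surplus unharvested or drives the stock below K/2, which is the overfishing condition described above.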

4 Influence of Stochastic Properties of Environment Variables on Population Size of Fish Stock

The logistic equation changes when the system is affected by external factors such as temperature, drugs or radiotherapy. Under equilibrium conditions in a deterministic environment, the relationship between yield and biomass is a parabola with a single maximum sustainable yield at its top, where h = rK/4 and e = r/(2q). In a fluctuating environment, changes in the environment affect both the carrying capacity and the rate of increase, giving a family of parabolas relating yield and biomass [11].

[Figure: probability density of the intrinsic growth rate r for SDr = 0.025 and SDr = 0.05]

Fig. 2. Stochastic distribution properties of intrinsic growth rate

[Figure: probability density of the environmental carrying capacity k for SDk = 20 and SDk = 40]

Fig. 3. Stochastic distribution properties of environmental carrying capacity

[Figure: simulated population size versus time for (SDk = 60, SDr = 0.075) and (SDk = 40, SDr = 0.050)]

Fig. 4. Variation of population size versus time in a fluctuating environment (Ka = 1000, ra = 0.75, x0 = 200)

Suppose that the intrinsic growth rate and the environmental carrying capacity are subject to normal distributions. Fig. 2 shows the stochastic distribution of the intrinsic growth rate, Fig. 3 shows that of the environmental carrying capacity, and Fig. 4 shows the variation of population size versus time in a fluctuating environment. From Fig. 4 we can conclude that the variation of population size over time in a fluctuating environment is stochastic.
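The trajectories of Fig. 4 can be approximated by redrawing r and K from normal distributions at each step of a discrete logistic update; a sketch using the parameters of the figure caption (Ka = 1000, ra = 0.75, x0 = 200) with one of the plotted (SDr, SDk) pairs. The exact update scheme is an assumption on our part:

```python
import random

def stochastic_logistic(T, ra, Ka, sd_r, sd_k, x0, seed=1):
    """Discrete-time logistic growth with r ~ N(ra, sd_r) and
    K ~ N(Ka, sd_k) redrawn independently at every time step."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(T):
        r = rng.gauss(ra, sd_r)
        K = rng.gauss(Ka, sd_k)
        x = x + r * x * (1 - x / K)
        path.append(x)
    return path

path = stochastic_logistic(20, ra=0.75, Ka=1000, sd_r=0.05, sd_k=40, x0=200)
# The trajectory rises from 200 and then fluctuates around the mean
# carrying capacity, as in Fig. 4.
```

Larger SDr and SDk produce the wider fluctuations visible in the upper curve of Fig. 4.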

5 Conclusion
Fishery systems involve very complex interactions between resource stocks and factors such as labor, the fluctuating environment and the capital used to harvest fish stocks. A stochastic simulation technique is used to describe the influence of the highly variable marine environment. The complexity of fisheries management stems from the dynamic nature of the marine environment and from numerous interest groups with different objectives. The stochastic properties of the marine environment affect the variation of the intrinsic growth rate, the environmental carrying capacity and the optimal harvest strategies.

References

1. Hernandez-Llamas, A., Ratkowsky, D.A.: Estimation of the von Bertalanffy, Logistic, Gompertz and Richards curves and a new growth model. Marine Ecology Progress Series 282, 237–244 (2004)
2. Swierzbinski, J.: Statistical methods applicable to selected problems in fisheries biology and economics. Marine Resource Economics 1, 209–234 (1985)
3. Bousquet, N., Duchesne, T., Rivest, L.-P.: Redefining the maximum sustainable yield for the Schaefer population model including multiplicative environmental noise. Journal of Theoretical Biology 254, 65–75 (2008)
4. Die, D.J., Caddy, J.F.: Sustainable yield indicators from biomass: are there appropriate reference points for use in tropical fisheries? Fisheries Research 32, 69–79 (1997)
5. Wang, C.-Y., Gao, Y., Wang, X.-W.: Dynamical properties of a logistic growth model with cross-correlated noises. Physica A 390, 1–7 (2011)
6. Tsoularis, A., Wallace, J.: Analysis of logistic growth models. Mathematical Biosciences 179, 21–55 (2002)
7. Jensen, A.L.: Maximum harvest of a fish population that has the smallest impact on population biomass. Fisheries Research 57, 89–91 (2002)
8. Eisenack, K., Kropp, J.: Assessment of management options in marine fisheries by qualitative modelling techniques. Marine Pollution Bulletin 43, 215–224 (2001)
9. Arnason, R.: Endogenous optimization fisheries models. Annals of Operations Research 94, 219–230 (2000)
10. Sun, L., Xiao, H., Li, S.: Forecasting fish stock recruitment and planning optimal harvesting strategies by using neural network. Journal of Computers 4, 1075–1082 (2009)
11. Jensen, A.L.: Harvest in a fluctuating environment and conservative harvest for the Fox surplus production model. Ecological Modelling 182, 1–9 (2005)
The Graphic Data Conversion from AutoCAD to
GeoDatabase

Xiaosheng Liu and Feihui Hu

School of Architectural and Surveying Engineering, Jiangxi University of Science and Technology, Ganzhou, 341000, China
lxs9103@163.com

Abstract. Addressing the problem of data conversion from AutoCAD to GIS, this paper analyzes the characteristics of AutoCAD data and GIS data and establishes the correspondence for graphic data conversion from AutoCAD to GIS. With the aid of the ArcEngine and ObjectARX component libraries, the conversion of graphic data — including points, lines and polygons — is implemented in C# in the .NET environment. The implementation promotes the application of GIS technology.

Keywords: ArcEngine, ObjectARX, CAD, GeoDatabase, data conversion.

1 Introduction
As AutoCAD has powerful graphics drawing and editing functions, most companies use it to produce digital topographic maps, so most vector data are in CAD format [1]. To a large extent, these CAD data are also an important source of GIS data, and many companies have now established their own fundamental geographic information systems. To satisfy GIS application requirements and reduce the cost of buying GIS data [2], they hope their CAD data can be applied to such systems. To achieve this goal, the CAD data must be converted to GIS data. The conversion process includes two parts: graphic data conversion and attribute data conversion. Graphic data conversion is the more complex and important part, so to solve this problem we use the ArcEngine components provided by ESRI, together with ObjectARX components, to develop a data conversion tool from CAD to GeoDatabase in the .NET environment.

2 ArcEngine and ObjectARX


Provided by ESRI, ArcEngine is a component development kit that helps GIS developers build independent GIS applications [3]. With ArcEngine, one can create and draw graphic features — points, lines, polygons and other geometric entities — create GeoDatabase spatial databases, edit spatial features, and perform spatial modeling and analysis [4].

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 137–142, 2011.
© Springer-Verlag Berlin Heidelberg 2011
138 X. Liu and F. Hu

ObjectARX is a component development kit for AutoCAD secondary development offered by Autodesk; it is the most powerful customized development tool for AutoCAD [5]. With ObjectARX, CAD applications can be developed more rapidly, efficiently and concisely; it gives CAD developers access to the AutoCAD graphics system and database structure, and can extend AutoCAD's functions.

3 The Characteristics of CAD Data and GIS Data


3.1 CAD Data Characteristics
AutoCAD has a powerful drawing function and can draw many different geometries, such as points, lines, polygons, circles, arcs, ellipses and elliptical arcs [6]. The position of every CAD graphic element is determined by one or more groups of (x, y, z) coordinates. Each graphic element also carries attribute information such as color, linetype and line width, and contains a unique Object ID so that different elements can be distinguished [7, 8].

3.2 GIS Data Characteristics


In ArcGIS, the geometric type of a graphic element is determined by its Geometry object, which may be a point, multipoint, polyline, polygon, multipatch and so on. Each graphic element contains spatial information and attribute information [9, 10]. At the same time, according to their geometric shapes, graphic elements are stored in the point, polyline and polygon feature classes of a GeoDatabase [11].

3.3 The Comparison Table of Graphic Data Conversion


To achieve graphic data conversion from CAD to GIS, we must find the one-to-one correspondence between their graphic elements; only then can the elements of the two data formats be converted. The correspondence is shown in Table 1.

Table 1. The Comparison Table of Graphic Data Conversion

CAD Graphic Data                                                    GIS Graphic Data
DBPoint, Simple BlockReference                                      Point, MultiPoint
Line, Polyline (not closed), Circle, Arc, Ellipse, elliptical arc   Polyline
Complex BlockReference                                              MultiPolyline
Polyline (closed)                                                   Polygon
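Table 1's correspondence can be encoded as a simple lookup used to dispatch each CAD entity to its target feature class. The identifiers below are simplified stand-ins, not the actual ObjectARX or ArcEngine type names:

```python
# Illustrative mapping from CAD entity kinds to target GIS geometry types,
# following Table 1 (names are simplified stand-ins).
CAD_TO_GIS = {
    "DBPoint": "Point",
    "SimpleBlockReference": "Point",
    "Line": "Polyline",
    "OpenPolyline": "Polyline",
    "Circle": "Polyline",
    "Arc": "Polyline",
    "Ellipse": "Polyline",
    "EllipticalArc": "Polyline",
    "ComplexBlockReference": "MultiPolyline",
    "ClosedPolyline": "Polygon",
}

def target_geometry(entity_kind: str) -> str:
    """Return the GeoDatabase geometry type for a CAD entity kind."""
    return CAD_TO_GIS[entity_kind]
```

A converter built this way only needs per-geometry readers (Sections 4.2.1–4.2.3); the dispatch itself stays a table lookup.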

4 The Realization of Graphic Data Conversion from CAD to GIS


4.1 Graphic Data Conversion Process
The converted graphic data are stored in a GeoDatabase. Therefore, we first create a new GeoDatabase, feature dataset and feature class programmatically in the .NET environment; we then read the point, straight line, polyline and other graphic elements of the DWG file using the ObjectARX development kit to acquire the conversion data; finally, from the conversion data we programmatically create the features in the relevant feature class of the GeoDatabase. This is the conversion process of the graphic data. The conversion interface is shown in Fig. 1.

The Graphic Data Conversion from AutoCAD to GeoDatabase 139

Fig. 1. The Conversion Interface of Graphic Data from CAD to GeoDatabase

4.2 The Detailed Process of AutoCAD Graphics Data Conversion

4.2.1 The Conversion of Point Graphic Elements


The conversion of point graphic elements mainly covers converting CAD points and CAD simple blocks to GIS points. Their implementation processes are basically the same; the difference is that for a point the conversion datum to obtain is the point's coordinates, while for a simple block it is the coordinates of the insertion point.

4.2.2 The Conversion of Line Graphic Elements


The conversion of line graphic elements covers converting straight lines, polylines, arcs, circles, ellipses and elliptical arcs in CAD to GIS polyline features. Because of the variety of graphic elements, the realization methods and the conversion data to obtain differ.

The conversion of a straight line: only the coordinates of its start point and end point need to be obtained.

The conversion of a non-closed polyline: each node coordinate of the polyline is obtained, and the polyline feature of the GeoDatabase is created from these coordinates.

The conversion of a circle or arc: the conversion data to obtain are mainly the starting angle, the central angle, the coordinates of the center point and the radius. A circle is simply an arc whose central angle equals 360°, so the conversion method is the same for both.

The conversion of an ellipse or elliptical arc: the conversion data mainly include the coordinates of the center point, the starting angle, the central angle, the rotation angle, the semi-major axis and the elliptic axial ratio. Apart from the rotation angle, these data can be obtained from the ellipse or elliptical arc element through ObjectARX; the rotation angle must be calculated by a separately written algorithm. Because an ellipse is a special elliptical arc whose starting angle equals zero, the two use the same conversion method.
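For the rotation angle that must be computed separately, one common approach (our assumption, not necessarily the authors' algorithm) derives it from the ellipse's major-axis direction vector, which CAD ellipse entities expose:

```python
import math

def rotation_angle_deg(major_axis_dx: float, major_axis_dy: float) -> float:
    """Rotation of an ellipse, in degrees in [0, 360), computed from the
    vector pointing along its semi-major axis."""
    return math.degrees(math.atan2(major_axis_dy, major_axis_dx)) % 360.0

# An ellipse whose major axis points along +y is rotated 90 degrees.
```

The result feeds directly into the GIS-side construction of the rotated elliptical arc.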

4.2.3 The Conversion of Polygon Graphic Elements


The conversion of polygon graphic elements includes converting closed polylines, rectangles and regular polygons in CAD to GIS polygon features. Because rectangles and regular polygons are stored in CAD as polylines, and in every case the data to obtain are the vertex coordinates of the polygon, their conversion method is the same.
Two problems must be solved when converting CAD polygon graphic elements:
A) Judging whether the polyline is closed
The problem is solved by judging whether the coordinates of the start point and the end point are equal. If they are equal, the polyline is closed and is converted to a GIS polygon feature; otherwise it is not closed and is converted to a GIS line feature.
B) Judging whether CAD polygon graphic elements are clockwise
If the vertex sequence of a CAD polygon graphic element is counterclockwise, the area of the resulting GIS polygon is negative; if the vertex sequence is clockwise, the area is positive. Therefore, before converting a CAD polygon graphic element to a GIS feature, we must first judge whether its vertex sequence is clockwise.
This paper mainly adopts the vector product method to solve the problem. The specific process is: form two vectors from three neighboring vertices of the polygon and compute their vector product [12]. If the value obtained is negative, the vertex sequence is clockwise; otherwise it is counterclockwise, and the vertex coordinates of the CAD polygon graphic element must then be read out in reverse order.
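The two judgments above can be sketched in a few lines (Python used for illustration; the names are ours, not from the paper's ObjectARX/ArcEngine code). Summing the vector products over every neighboring vertex pair gives twice the signed area, which extends the three-vertex test robustly to non-convex polygons:

```python
def is_closed(vertices, tol=1e-9):
    """A polyline is closed when its first and last vertices coincide."""
    (x0, y0), (xn, yn) = vertices[0], vertices[-1]
    return abs(x0 - xn) <= tol and abs(y0 - yn) <= tol

def is_clockwise(vertices):
    """Orientation via the vector (cross) product: summing x1*y2 - x2*y1 over
    consecutive vertex pairs gives twice the signed area; a negative value
    means a clockwise vertex sequence, matching the sign convention in the text."""
    area2 = 0.0
    for i in range(len(vertices)):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % len(vertices)]
        area2 += x1 * y2 - x2 * y1
    return area2 < 0

ring = [(0, 0), (4, 0), (4, 3), (0, 3), (0, 0)]   # counterclockwise square
assert is_closed(ring) and not is_clockwise(ring)
assert is_clockwise(list(reversed(ring)))
```

A counterclockwise ring would be reversed before creating the GIS polygon feature, as described above.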

4.3 The Display of Conversion Effect

After the conversion work described above, we can examine the effect of converting AutoCAD graphic data to the GeoDatabase. Clearly, the AutoCAD graphic data and the GIS graphic data converted into the GeoDatabase remain highly consistent; thus the lossless conversion of AutoCAD graphic data is achieved. The effect is shown in Figure 2 and Figure 3.
The Graphic Data Conversion from AutoCAD to GeoDatabase 141

Fig. 2. AutoCAD Graphic Data

Fig. 3. GIS Graphic Data

5 Conclusion
This paper introduced how to use ArcEngine and ObjectARX components to realize graphic data conversion from AutoCAD to GIS in the .NET environment. It also solved several problems that arise in such conversion, such as geometric distortion of graphic elements, coordinate dislocation and negative polygon areas. In the end, the consistency and integrity of the graphic data before and after conversion are preserved, achieving lossless conversion of graphic data from AutoCAD to GIS, which will promote the application of GIS technology.
142 X. Liu and F. Hu

References
1. Weiler, K.J.: Boundary Graph Operators for Nonmanifold Geometric Modeling Topology Representations. In: Geometric Modeling for CAD Applications. Elsevier Science, Amsterdam (1988)
2. Li, S.: Research Based on the Conversion of Topographic Map and GeoDatabase. Geographic Space Information (2), 26–28 (2010)
3. Erden, T., Coskun, M.Z.: Analyzing Shortest and Fastest Paths with GIS and Determining Algorithm Running Time. Visual Information and Information Systems, 269–278
4. Yao, J., Tawfik, H., Fernando, T.: A GIS Based Virtual Urban Simulation Environment. In: Computational Science, ICCS 2006, pp. 60–68 (2006)
5. Qin, H., Cui, H., Sun, J.: The Development Training Course of Autodesk. Chemical Industry Press, Beijing (2008)
6. Tse, R.O.C., Gold, C.: TIN Meets CAD: Extending the TIN Concept in GIS. In: Computational Science, ICCS 2002, pp. 135–144 (2002)
7. Liu, R., Liu, N., Su, G.: Combination and Application of Graphic Data and Relational Database. Acta Geodaetica et Cartographica Sinica 29(4), 229–333 (2000)
8. Sun, X.: DWG Data Format and Information Transmission of Graphical File. Journal of Xi'an University of Science and Technology (4), 372–374 (2001)
9. Ebadi, H., Ahmadi, F.F.: On-line Integration of Photogrammetry and GIS to Generate Fully Structured Data for GIS. Innovations in 3D Geo Information Systems, Part 2, 85–93 (2006)
10. Bordogna, G., Pagani, M., Psaila, G.: Spatial SQL with Customizable Soft Selection Conditions. STUDFUZZ, vol. 203, pp. 323–346 (2006)
11. Song, Z., Zhou, S., Wan, B., Wei, L., Li, G.: Research for CAD Data Integrated in GIS. Bulletin of Surveying and Mapping, Beijing (2008)
12. Frischknecht, S., Kanani, E.: Automatic Interpretation of Scanned Topographic Maps: A Raster-Based Approach. In: Chhabra, A.K., Tombre, K. (eds.) GREC 1997. LNCS, vol. 1389, pp. 207–220. Springer, Heidelberg (1998)
Research on Knowledge Transference
Management of Knowledge Alliance

Jibin Ma, Gai Wang, and Xueyan Wang

Hebei University of Engineering


056038 Hebei, China
majibin@tsinghua.org.cn

Abstract. Knowledge alliances are a necessity for enterprises in the new era, and knowledge transference largely affects an alliance's decision-making and behavior. Based on a two-company alliance, this paper presents a game analysis of knowledge transference and then, through this analysis, shows how knowledge alliances can manage knowledge transference and make management decisions. The findings have important application value for guiding enterprises to enhance their competitiveness through the effective use of external knowledge from business alliances, and for guiding alliances to manage knowledge transference effectively so as to promote enterprise development.

Keywords: knowledge alliance, knowledge transference, management.

1 Introduction
According to its transferability, knowledge can be divided into explicit knowledge and tacit knowledge. Explicit knowledge can be expressed in formal language, including procedures, mathematical expressions, plans, manuals and so on, and recorded for transfer and sharing. A company whose advantage rests on explicit knowledge is more easily imitated by others, and its advantage is thereby weakened. Of course, a company can also learn the explicit knowledge it needs from other companies and integrate it with existing knowledge into new knowledge. Tacit knowledge is knowledge that cannot be clearly expressed in a systematic or explicit language; it can be grasped but not fully articulated. In business, experience, skills and mental models are important assets of an enterprise and its most core capabilities. Because such knowledge is implicit, it cannot easily be imitated; it is the most lasting source of enterprise competitiveness.
A knowledge alliance is a risk-sharing network in which an enterprise, together with other enterprises, universities and research institutions, shares complementary advantages and risks through various contracts or equity holdings, in order to share knowledge resources, promote knowledge flow and create new knowledge in the course of achieving strategic objectives. Its purpose is to learn and to create knowledge. Alliance partners can not only obtain experience, capabilities and other tacit knowledge that market transactions cannot provide; through complementary knowledge they can also create new knowledge that no single enterprise could, so that all alliance partners benefit.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 143–148, 2011.
© Springer-Verlag Berlin Heidelberg 2011

The concept of knowledge transference was first proposed in 1977 by Teece, the U.S. theorist of technology and innovation management. He believes that the international transference of knowledge can help companies accumulate valuable knowledge and promote technology diffusion, thereby reducing the technology gap between regions.

2 Analysis of Knowledge Transference Using Game Theory


Within a knowledge alliance, members continually transfer knowledge, because differences in their total knowledge create a potential gradient. Transference not only improves the total knowledge of individual members, it also increases the alliance's total knowledge. From an economic point of view, whether knowledge transference occurs depends on the benefits the parties expect from it. Introducing knowledge is accompanied by the consumption of resources, which means knowledge transfer has a price; it takes place only when the expected return from the transferred knowledge is greater than the cost paid in the transference process. In addition, the parties' subjective attitudes and objective conditions also have a major impact on knowledge transference.
For convenience of analysis, this paper analyses knowledge transference between two parties; a game among several parties can be analysed in the same way. Assume the two companies are A and B, with total knowledge UA and UB respectively. In the transference process the absorption capacity of A is a and that of B is b, with 0 ≤ a ≤ 1 and 0 ≤ b ≤ 1. Let U be the new knowledge generated by the alliance; creating U requires integrating the knowledge of the two companies, which can happen in three ways: A obtains knowledge from B; B obtains knowledge from A; or A and B jointly acquire knowledge. Correspondingly, A may occupy the new knowledge exclusively, B may occupy it exclusively, or A and B may share it.
Suppose A transfers knowledge but B does not. The revenue of B is then UB+b(UA+U): B receives additional revenue without transferring, while A transfers knowledge without receiving additional revenue. This situation is a loss for A, albeit a psychological one; denote this loss by K, so the revenue of A is UA-K. Similarly, if B transfers knowledge but A does not, the revenues of A and B are UA+a(UB+U) and UB-K. If A and B both choose to transfer, the revenues are UA+a(UB+U) and UB+b(UA+U). On the contrary, if neither transfers, the revenues are UA and UB. The game payoff matrix of knowledge transference for A and B is shown in Table 1.
If A chooses knowledge transference, B receives the same revenue, UB+b(UA+U), whether it transfers or not. If A chooses non-transference, the revenue UB of B choosing non-transference is greater than the revenue UB-K of transferring, so non-transference is B's strategy. If B chooses knowledge transference, A has the same revenue UA+a(UB+U) whether it transfers or not. If B chooses non-transference, the revenue UA of A choosing non-transference is greater than

Table 1. Game payoff matrix of knowledge transference

A \ B                 B: transference                   B: non-transference
transference          UA+a(UB+U), UB+b(UA+U)            UA-K, UB+b(UA+U)
non-transference      UA+a(UB+U), UB-K                  UA, UB
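Table 1 can be checked mechanically. The sketch below (illustrative numbers; 'T'/'N' abbreviate transference/non-transference, and all names are ours) enumerates the strategy pairs and confirms that exactly (T, T) and (N, N) survive as pure-strategy Nash equilibria under weak best responses:

```python
from itertools import product

def payoffs(sA, sB, UA, UB, a, b, U, K):
    """Revenue pair from Table 1; 'T' = transference, 'N' = non-transference."""
    pA = UA + a * (UB + U) if sB == 'T' else (UA - K if sA == 'T' else UA)
    pB = UB + b * (UA + U) if sA == 'T' else (UB - K if sB == 'T' else UB)
    return pA, pB

def nash_equilibria(UA, UB, a, b, U, K):
    """Pure-strategy equilibria: no player gains by deviating unilaterally."""
    eq = []
    for sA, sB in product('TN', repeat=2):
        pA, pB = payoffs(sA, sB, UA, UB, a, b, U, K)
        altA, _ = payoffs('N' if sA == 'T' else 'T', sB, UA, UB, a, b, U, K)
        _, altB = payoffs(sA, 'N' if sB == 'T' else 'T', UA, UB, a, b, U, K)
        if pA >= altA and pB >= altB:
            eq.append((sA, sB))
    return eq

print(nash_equilibria(UA=10, UB=8, a=0.5, b=0.5, U=4, K=2))
# → [('T', 'T'), ('N', 'N')]
```

This matches the static analysis in the text: both mutual transference and mutual non-transference are equilibria of the one-shot game.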

the revenue UA-K of transferring. Thus (transference, transference) and (non-transference, non-transference) constitute static Nash equilibria. From these equilibria, if one party transfers, the other is indifferent between transferring and not; if one party does not transfer, neither will transfer. From the viewpoint of revenue, however, once the two sides have built a knowledge alliance they will inevitably transfer, because mutual transference gives each side greater revenue than one-sided transference, and also greater than no transference at all. In this state both parties transfer knowledge, which is the ideal state of alliance knowledge transference. Further, the choice of (non-transference, non-transference) is a typical prisoner's dilemma outcome. Since our subject is knowledge transfer within a knowledge alliance, and a knowledge alliance focuses on long-term, even permanent, cooperation rather than one-off cooperation, the game becomes an infinitely repeated game, and (transference, transference) becomes a subgame perfect Nash equilibrium.
There is, however, another case: the lifetime of the knowledge alliance may be limited. When the alliance is about to dissolve and one party can secure constant revenue on its own, that party may adopt a speculative strategy and stop transferring knowledge: with dissolution imminent, even if refusing to transfer provokes the alliance's retaliation, the potential losses are limited.
In the following, the author analyses the parties' alliance strategies in a dynamic game. Knowledge transference in a knowledge alliance is a dynamic game with complete information: if one party's strategy is not to transfer knowledge, it suffers "tit for tat" retaliation from the other party, who then also chooses not to transfer.

Assume game periods n = 0, 1, 2, .... Because future value must be discounted to present value for comparison, assume a discount factor u with 0 < u < 1; a larger u indicates that the companies value the future more, while a smaller u indicates that they value present value more. Since the two companies' revenues and strategies are symmetric, for simplicity the author analyses A's choice; the same conclusions apply to B.
When A always chooses knowledge transference, its utility from period k onward is:

F1 = u^k[UA+a(UB+U)] + u^(k+1)[UA+a(UB+U)] + ... = [u^k/(1-u)][UA+a(UB+U)]

If A chooses non-transference in period k, it will suffer "tit for tat" retaliation from B, and the Nash equilibrium becomes (non-transference, non-transference); in this case A's utility over the remaining periods is F2 = (n-k)UA. If A is always to choose the transference strategy, F1 ≥ F2 must hold, that is:
146 J. Ma, G. Wang, and X. Wang

[u^k/(1-u)][UA+a(UB+U)] ≥ (n-k)UA    (1)

In particular, when n = 2 and k = 1, (1) can be simplified to:

u ≥ UA / (2UA + a(UB+U))    (2)

Inequality (2) is the condition for A to carry out knowledge transference. It holds when its left-hand side is large enough or its right-hand side is small enough. A large left-hand side means that both players attach great weight to future revenue, so both are more likely to choose knowledge transference. A small right-hand side requires analysing the situation of UA, a, UB and U:
UA's impact on knowledge transference. Dividing the numerator and denominator of (2)'s right-hand side by UA gives: 1 / (2 + a(UB+U)/UA).
It can be seen that the larger UA is, the smaller the denominator and the greater the value of this expression. This indicates that the greater A's total knowledge, the more likely A is to withhold knowledge. Indeed, the closer the alliance is to dissolution, the less likely the stronger side is to continue choosing knowledge transference.
UB's impact on knowledge transference. From inequality (2), the larger UB is, the smaller the right-hand side. This shows that the greater the other party's total knowledge, the more likely a company is to transfer knowledge. This is not hard to understand: the greater the partner's amount of knowledge, the greater the revenue obtainable through knowledge transference, and the more willing companies are to carry it out.
a's impact on knowledge transference. From inequality (2), the larger a is, the smaller the right-hand side. This indicates that the greater A's knowledge absorption capacity, the more likely A is to choose knowledge transference. This is also consistent with intuition: greater absorption capacity means greater revenue obtained from knowledge transference, hence a greater likelihood of transferring.
U's impact on knowledge transference. From the right-hand side of inequality (2), the larger U is, the smaller the right-hand side. This suggests that the greater the value of the new knowledge created through knowledge transference, the less likely an alliance party is to refuse to transfer.
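These comparative statics can be verified numerically. A small sketch (the numbers are illustrative, the names ours) computes the threshold discount factor from inequality (2) and checks each monotonicity claim:

```python
def u_threshold(UA, UB, a, U):
    """Minimum discount factor for A to keep transferring, from inequality (2):
    u >= UA / (2*UA + a*(UB + U)). Symbols follow the text."""
    return UA / (2 * UA + a * (UB + U))

base = dict(UA=10, UB=8, a=0.5, U=4)
u0 = u_threshold(**base)
# each assertion mirrors one paragraph of the comparative-statics discussion:
assert u_threshold(**{**base, 'UA': 20}) > u0   # bigger own stock -> less willing
assert u_threshold(**{**base, 'UB': 16}) < u0   # bigger partner stock -> more willing
assert u_threshold(**{**base, 'a': 0.9}) < u0   # better absorption -> more willing
assert u_threshold(**{**base, 'U': 8}) < u0     # more new knowledge -> more willing
```

Writing the threshold as 1/(2 + a(UB+U)/UA) makes all four monotonicities immediate.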

3 Knowledge Transference Management


The author first introduced knowledge concepts briefly, then analysed the enterprise knowledge transference process from the perspective of game theory. The game results show that: the greater the discount factor, the more knowledge the alliance transfers; the more knowledge the counterpart holds, the more knowledge the other party will transfer; the more knowledge one side owns, the less it will transfer; and the stronger one side's knowledge absorption capacity, the more it will transfer. Based on these results, the author proposes the following corporate management strategies for knowledge transference:
(A) To establish a basis for knowledge sharing. This can be achieved through the following channels: information technology and organizational mechanisms; re-engineering business processes to promote the transformation of tacit knowledge into explicit knowledge, reducing the cost of knowledge sharing while expanding its scope; graded authorization and protection of strategically valuable knowledge; and incentives for knowledge transference. Since knowledge transference proceeds only when both transferor and recipient promote it, appropriate organizational incentives and guidance of other members are needed to make them fully aware of the significance of knowledge transference for improving the enterprise's competitiveness.
(B) To integrate the knowledge transference mechanisms of allied enterprises. Common ground between partners is vital to the relationship within the alliance. If their common ground is distant or narrow, their understandings of knowledge transference will diverge and damage to knowledge is hard to avoid. Strategic integration gives both sides a better understanding of the cooperative innovation strategy and narrows the gap between them.
(C) To promote trusting cooperation and strengthen member enterprises' will to transfer knowledge. The degree of mutual confidence is the most important condition affecting knowledge transference. To enhance the transferor's will, the partners should establish transparent cooperative relations and promote mutual trust; the transference, learning and absorption of tacit knowledge in particular depend on openness and transparency in the mechanism design.
(D) All parties should strive to build learning organizations. The total amounts of knowledge on both sides are critical to knowledge transference, and the learning organization is the most effective form for accumulating knowledge. In pursuing this goal, development can be obtained from information systems, intellectual capital management, organizational learning and several other aspects. Balancing the parties' total knowledge in this way creates an alliance situation conducive to knowledge transference.
(E) To integrate the transfer channels and organizational structures that knowledge transference requires. Different companies have different learning cultures and atmospheres; cultivating and integrating a learning culture is an effective way to strengthen the knowledge base and absorptive capacity. Enterprises should establish a long-term knowledge vision and provide an effective knowledge management platform. Attention should also be paid to cultural compatibility between partners, to accelerate knowledge integration, reduce damage to knowledge as much as possible, enhance the efficiency of knowledge transference, and promote cooperative innovation performance. This increases the absorption capacity of both sides and further facilitates knowledge transference.

The Study of Print Quality Evaluation System Using the
Back Propagation Neural Network with Applications to
Sheet-Fed Offset

Taolin Ma, Yang Li, and Yansong Sun

School of Printing and Packaging, Wuhan University, Wuhan, China


mtl1968@whu.edu.cn

Abstract. With the continuous development of the printing industry, print quality has significantly improved. The evaluation of print quality is now mainly objective, supplemented by subjective methods, and has changed from traditional empirical judgment to quantitative scientific analysis, making quality management more scientific, standardized and rational. This paper addresses color print quality evaluation by establishing an evaluation index system, identifying standard data and combining them with BP neural network theory.

Index Terms: Neural network, offset, print quality evaluation.

1 Introduction

Subjective evaluation, objective evaluation and comprehensive evaluation are the three methods of color print quality evaluation. Comprehensive evaluation, which combines data obtained by objective evaluation with partly subjective evaluation of various factors, is the most widely used. Combining subjective psychological impressions with objective data analysis makes the evaluation criteria more scientific, so a print quality evaluation system using the back propagation neural network is more efficient.

2 Overview of Back Propagation (BP) Neural Network


The BP neural network is an algorithm based on the BP artificial neural network model and is the most commonly used, essential part of artificial neural networks.
The BP neural network is also known as a multi-layer feed-forward neural network. It is made of several layers of neurons: an input layer, hidden layers and an output layer. By adjusting the connection weights and the size of the network, a BP neural network can solve non-linear classification problems and approximate any nonlinear function with arbitrary precision. After determining the structure of the BP network, we train it with a sample set; the trained network can then give correct outputs even for inputs that were not among the training samples.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 149–154, 2011.
© Springer-Verlag Berlin Heidelberg 2011

Although the BP network is the most widely used network model, it has two shortcomings: first, the convergence rate of learning is slow; second, training can be trapped in local minima. Improved BP algorithms therefore fall into two categories: heuristic BP algorithms, and BP algorithms combined with numerical optimization techniques.

3 The Construction of the Color Print Quality Evaluation Model Based on BP Neural Network

This article attempts to introduce the BP neural network into print evaluation [1], establishing a color print quality evaluation model based on a BP neural network, in the hope of providing a new way of thinking for the evaluation of printed materials. We confirmed the standards of each printing index, 34 targets in all, classifying print quality into four levels: excellent, good, fair and poor, in accordance with the People's Republic of China industry standard for offset print quality requirements and other authoritative standards.

3.1 General Steps of BP Neural Network Evaluation

(1) Determine the assessment index system [2]. The number of indexes is the number of input nodes of the BP network.
(2) Determine the number of BP network layers. The model used has a three-tier network structure with one input layer, one hidden layer and one output layer.
(3) Determine the evaluation target. The output layer has one node; the evaluation target level is the quality value corresponding to each print.
(4) Normalize and standardize the samples and the evaluation target values.
(5) Initialize the weights and thresholds of the network nodes with random numbers (usually between 0 and 1).
(6) Feed the standardized samples and evaluation targets into the network, and give the corresponding desired outputs.
(7) Forward propagation: calculate the outputs of the output layer nodes.
(8) Calculate the errors of the nodes in each layer.
(9) Back propagation: correct the weights.
(10) Check whether all samples have been input.
(11) Calculate the total error. When it is less than the error limit, training ends; otherwise go to step (6) and continue training.
(12) The trained network can be used for formal evaluation.
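The training loop of steps (5) to (11) can be condensed into a toy implementation. The sketch below (pure Python with our own names and a made-up score-to-grade data set; the paper's actual model uses MATLAB and 34 inputs) trains a one-hidden-layer network with sigmoid activations by forward propagation, error calculation and weight correction:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyBP:
    """A minimal one-hidden-layer BP network following steps (5)-(9)."""
    def __init__(self, n_in, n_hid, seed=42):
        rnd = random.Random(seed)
        # step (5): initialise weights/thresholds with random numbers in (0, 1)
        self.w1 = [[rnd.random() for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [rnd.random() for _ in range(n_hid)]
        self.w2 = [rnd.random() for _ in range(n_hid)]
        self.b2 = rnd.random()

    def forward(self, x):                      # step (7): forward propagation
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train_step(self, x, t, lr=0.8):
        o = self.forward(x)
        # step (8): error terms for the output and hidden nodes
        d_out = (o - t) * o * (1 - o)
        d_hid = [d_out * w * h * (1 - h) for w, h in zip(self.w2, self.h)]
        # step (9): back-propagate and correct the weights
        for j, h in enumerate(self.h):
            self.w2[j] -= lr * d_out * h
        self.b2 -= lr * d_out
        for j, dj in enumerate(d_hid):
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dj * xi
            self.b1[j] -= lr * dj
        return (o - t) ** 2

# toy run: learn to map a quality score to a grade value
net = TinyBP(n_in=1, n_hid=4)
data = [([0.1], 0.2), ([0.35], 0.4), ([0.6], 0.6), ([0.9], 0.8)]
first = sum(net.train_step(x, t) for x, t in data)
for _ in range(2000):                          # steps (10)-(11): repeat epochs
    last = sum(net.train_step(x, t) for x, t in data)
assert last < first                            # total error shrinks with training
```

A real evaluation network would replace the single toy input with the 34 standardized index values and stop on the error limit of step (11).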

3.2 Generating Samples

The selection of learning samples is very important: it is the decisive factor in the generalization ability of the neural network mapping. We randomly generated 300 samples in MATLAB for each grade, 1200 samples in total (or more). To improve the generalization ability of the network with the early stopping method, the random samples were divided into three parts: training samples, validation samples and test samples. The MATLAB Neural Network Toolbox function dividevec is used in this paper.
The dividevec call format is:
[trainV,valV,testV] = dividevec(p,t,valPercent,testPercent)

Here, trainV is the training samples, valV the validation samples and testV the test samples; p is the input vector and t the target vector; valPercent is the fraction of the total sample used for validation, and testPercent is the fraction used for testing.
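For readers without MATLAB, the role of dividevec can be mimicked as follows (a sketch, not the toolbox implementation; the names mirror the call format above):

```python
import random

def dividevec(p, t, val_percent, test_percent, seed=0):
    """Split paired (input, target) samples into train/validation/test parts,
    mirroring the role of MATLAB's dividevec (not its exact algorithm)."""
    idx = list(range(len(p)))
    random.Random(seed).shuffle(idx)
    n_val = int(len(p) * val_percent)
    n_test = int(len(p) * test_percent)
    val, test, train = idx[:n_val], idx[n_val:n_val + n_test], idx[n_val + n_test:]
    pick = lambda ks: ([p[k] for k in ks], [t[k] for k in ks])
    return pick(train), pick(val), pick(test)

p = [[i / 1200] for i in range(1200)]          # 1200 samples, as in the text
t = [0.2 if x[0] < 0.5 else 0.8 for x in p]
trainV, valV, testV = dividevec(p, t, 0.2, 0.2)
assert len(trainV[0]) == 720 and len(valV[0]) == 240 and len(testV[0]) == 240
```

With valPercent = testPercent = 0.2, the 1200 samples split into 720 training, 240 validation and 240 test samples.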

3.3 The Choice to Each Kind of Parameter

Setting the model parameters is an important part of building the evaluation system [3]. The parameters involved and the reasons for choosing them are introduced below.
(1) Number of network layers: because a BP neural network can achieve any nonlinear mapping, this model adopted a three-tier network comprising an input layer, a hidden layer and an output layer.
(2) Activation function: the nonlinear approximation ability of a BP network comes from its activation functions. An S-type (sigmoid) activation function is usually used in the hidden layer, while the output layer activation function can be linear or S-type. In this paper the S-type activation function is used in both the hidden layer and the output layer.
(3) Input layer: we selected 34 evaluation indexes, so the input vector has 34 dimensions and the input layer has 34 nodes.
(4) Output layer: the print quality grades superior, good, qualified and bad correspond to network model output values of 0.2, 0.4, 0.6 and 0.8 respectively.
(5) Initial weights: to make the learning process converge, the weights were initialized randomly in (-1, 1).
(6) Learning algorithm: the choice must consider the performance of the algorithm itself as well as the complexity of the problem, the size of the sample set, the network error target and the type of problem to be solved. Through experiments and analysis, this study used the early stopping method together with the SCG (scaled conjugate gradient) algorithm.
(7) Number of training steps: after extensive testing, we found that combining early stopping with the SCG algorithm trains the network very quickly, so 1000 was selected as the number of training steps.
Once the network parameters have been determined, programs can be written in MATLAB to carry out training and simulation.

4 Experiment

4.1 Experimental Condition

1. Evaluation sample requirements
The sample to be evaluated should carry a control strip, color patches, a stepped scale and other elements that instruments can use for quantitative measurement and control, to facilitate objective evaluation [4].
2. Conditions for subjective evaluation
Measurement of color, its control, and visual evaluation should be carried out under a uniform standard of lighting conditions. In this system, the qualitative indicators of color printing are evaluated under this standard.
(1) Illumination and observation conditions for prints
a) Light source for observing prints: standard illuminant D65.
b) Correlated color temperature of the light source: 6504 K.
c) Color rendering index: the general color rendering index Ra not less than 90; each special color rendering index Ri not less than 80.
d) Illuminance of the observed surface: uniform diffuse lighting on the observed surface, with illuminance ranging from 500 lx to 1500 lx.
e) Uniformity of illuminance: the observed surface has to be lit as uniformly as possible; the difference between the centre and the surrounding illuminance must not be greater than 20%.
f) Color of the surround: the surround of the observed surface should be neutral gray. To avoid color contrast, the background and the substrate of the observed sample should be gray.
g) Observation method: the 0°/45° or 45°/0° illumination and observation geometry recommended by the CIE.
(2) Influence and control of environmental factors
In actual production, the environment is the biggest threat to standard lighting and observation conditions, so observers should try to eliminate its influence.
a) Avoid additional light sources or light beams that distort correct color perception.
b) Avoid strong colors in the observed field or on environmental surfaces.
c) Temperature and humidity control: room temperature 23 ± 2 °C, relative humidity 50% ± 5%.
d) Because subjective impressions play an important role in evaluating a sample, the eyes should be rested for a while before observation.
(3) Objective measuring instruments
a) Spectrophotometer: the SpectroEye spectrophotometer and a control strip are used to measure solid density, dot gain, relative contrast, trapping rate, color grayness, color cast and so on.
b) A loupe with a scale, 40x magnification.

4.2 Experimental Results and Conclusions


To test the applicability and accuracy of color print quality evaluation with the BP network, four color samples were selected for evaluation. Every index [5] of each sample complies with the above conditions and requirements. Various instruments were used to measure the required data, which were then put into the program we wrote to obtain the output value of the BP network model.

Table 1. The samples and model outputs

Sample           I        II       III      IV
Output of model  0.5104   0.3566   0.4558   0.7645

As Table 1 shows, the output value of the first sample is 0.5104, so its quality level is qualified. The model outputs of samples II and III lie in [0.2975, 0.4935], so their level is good. The output value of the fourth sample is 0.7645, greater than 0.7039, so its quality level is disqualified. Although samples II and III are both good, the output value of sample II is smaller, so the quality of sample II is better than that of sample III, which reflects an advantage of the BP network evaluation.
For comparison, the subjective evaluation method was also used. Under standard lighting conditions, several experts with extensive experience evaluated the four samples subjectively. The main items were: checking whether the sheets are clean; checking the highlight, middle and dark tones with a magnifying glass; checking color reproduction with a signal bar or color control strip; checking dot clarity with a magnifying glass; and checking text reproduction. After this series of steps, sample I was judged qualified, sample II excellent, sample III good, and the fourth sample failed. Compared with the results of the BP network evaluation, the BP neural network model is shown to be effective.

5 Conclusions

In this paper we proposed a sheet-fed offset print quality evaluation model based on a BP
neural network and implemented it using the MATLAB neural network toolbox. The
applicability and accuracy of the evaluation model were confirmed through measurement,
evaluation and comparison with subjective evaluation on practical samples.
As future work, we should add more targets to the evaluation index system to make it
more comprehensive, and further tune the parameters of the BP neural network we
established. A follow-up study could try to establish an independent visual evaluation
system that is convenient to use without the MATLAB environment this paper relied on.
154 T. Ma, Y. Li, and Y. Sun

References

[1] Ma, T.: Study of the Method of Evaluating Color Digital Image. In: International Conference
on Computer Science and Software Engineering, ICGC 2008, pp. 225–228 (December 2008)
[2] Otaki, N.: Colour image evaluation system. OKI Technical Review 70(194), 68–73 (2003)
[3] Guan, L., Lin, J., Chen, G., Chen, M.: Study for the Offset Printing Quality Control Expert
System Based on Case Reasoning. IEEE, Los Alamitos
[4] Bohner, M., Sties, M., Bers, K.H.: An automatic measurement device for the evaluation of
the print quality of printed characters. Pattern Recognition 9, 11–19 (1997)
[5] Guan, L., Lin, J., Chen, G., Chen, M.: Study for the Offset Printing Quality Control Expert
System Based on Case Reasoning. IEEE, Los Alamitos
Improved Design of GPRS Wireless Security System
Based on AES

TaoLin Ma1, XiaoLan Sun1, and LiangPei Zhang2

1
School of Printing and Packaging,
2
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote
Sensing, Wuhan University, Wuhan, 430079, China
mtl1968@whu.edu.cn

Abstract. This paper mainly studies how to improve security systems by
analyzing the current situation and defects of existing encryption security
systems. Based on existing studies, an optimized GPRS wireless security system
(GPRS-based WSS) based on AES was designed using the principles of
security system design and basic theories of cryptography. The new system
increases query efficiency, accuracy and security.

Keywords: GPRS-based WSS, Encryption Algorithm, AES, S-boxes.

1 Introduction

With the development of network technology, dangerous factors are increasing in
the transmission of digital information. In China, to solve these issues, much
attention has been paid to building encryption systems that combine theories of
cryptography. Such systems increase query efficiency and accuracy, strike a blow
against criminals, and protect consumers and producers. In [1] an RFID security
system was studied; it is useful because it signs a digital signature for a product's ID
via the SHA-1 algorithm. However, how to prevent the deception of false data from
illegal readers remains an issue to be solved. In addition, [2] designed a GPRS-based
WSS, in which networks combined with encryption technology were used in a
product security system. But an obvious issue is the selection of the encryption algorithm.

2 GPRS-Based Wireless Security System


The composition and operation of the GPRS-based WSS [2] are shown in Fig. 1.
Although the performance of this system is good, it is difficult to guarantee the
system's security given the rapid development of computer technology. Therefore, it is
necessary to set up multi-level security for the system's key parts. In [2], readers
acquire security codes through specific agreements, but there is no difference between
a security code and an ordinary code if the agreements cannot ensure safety, so it is
necessary to adopt multi-level measures. In addition, readers with encrypted data cannot
communicate with servers until ID authentication.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 155–160, 2011.
© Springer-Verlag Berlin Heidelberg 2011

Fig. 1. Composition of GPRS-based WSS

If digital signature technology was


applied in data transmission, all servers need to do is verify the sender's ID. This kind of
certification is more convenient and more accurate. Based on the above analysis, the
system can be constituted in two ways. First, double security is applied between readers
and security codes, while the communication between readers and servers is designed
as in [2]. Second, double security is set up for the communication between readers and
servers, while the security method between readers and security codes remains as in [2].
The safety coefficient of both methods is higher than that of a single-security system.
In this paper, we improve the GPRS-based WSS based on the first scheme.

3 Introduction of Several Commonly Used Algorithms

3.1 Advantages and Disadvantages of Encryption Algorithms


Block cipher encryption, as an important encryption means, has been studied in depth.
Among block ciphers, the representative algorithms are DES, IDEA and AES.
DES. The key size of DES is 56 bits, which is too short. With the emergence of new
attacks, DES faces an actual threat and will gradually be eliminated.
IDEA. IDEA was developed on the basis of DES. With increasing computation speed,
attacks by exhaustive search of IDEA's 128-bit key space become conceivable.
AES. AES combines security, performance, efficiency, ease of implementation and
flexibility [3]. But the security of AES depends heavily on the choice of S-boxes [4]. They
can easily become an attack breakthrough if the S-boxes are inappropriately designed.

3.2 The Necessity of Optimizing AES

Many cryptographers have discovered that existing S-boxes have some fatal
weaknesses: they have short periods and bad distribution [5]. For AES-192 (12
iteration rounds), the number of rounds that can be successfully attacked is at least 6,
and the highest record was 9 in 2005 [3].
Considering the above analysis and the actual application requirements, we select
AES as the basic algorithm in this paper. However, we must use an optimized AES
algorithm to decrease the possibility of a successful attack and ensure the operational
security of the system.

4 Theory of AES Algorithm and Algorithm Optimization


4.1 Theory of AES Algorithm
The basic principle of AES can be described as follows: the plaintext block is treated as
a matrix of bytes in the AES structure. The matrix is encrypted by several rounds of
iteration and finally becomes the cipher text. The key of every round is derived from the
main key by key expansion. Each round iteration covers the SubBytes, ShiftRows,
MixColumns and AddRoundKey operations. Since we will optimize the S-box, we
introduce only the SubBytes transformation.
The SubBytes transformation is a non-linear operation that replaces each byte of the
state via the S-box. The S-box operation is the composition of two transformations, one
of which is an affine transformation. Formula (1) shows the affine transformation; we
use (B5, D3) to perform it [6].

(1)
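For reference, the standard AES affine step can be sketched in Python: each output bit is a fixed XOR of five input bits plus one bit of a constant. Note this uses the standard circulant rows and the standard constant 63 (hex); the paper's formula (1) uses the modified pair (B5, D3), whose full matrix is not reproduced here.

```python
def affine(b: int, c: int = 0x63) -> int:
    """Standard AES affine transformation on one byte.

    Output bit i is b[i] ^ b[i+4] ^ b[i+5] ^ b[i+6] ^ b[i+7]
    (indices mod 8) XORed with bit i of the constant c. The paper's
    (B5, D3) variant changes the circulant rows and constant.
    """
    out = 0
    for i in range(8):
        bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
               ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (c >> i)) & 1
        out |= bit << i
    return out
```

Applied after the GF(2^8) inversion, this yields the standard S-box entries, e.g. affine(0x00) = 0x63 and affine(0x01) = 0x7C.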

4.2 Algorithm Optimization

In this paper, we chose to optimize the S-box to improve AES. Since the SubBytes
transformation is the only nonlinear operation in AES, it is efficient to optimize AES
by optimizing its S-box.
Increasing the Security Bytes. In the 8×8 matrix of (1), the elements in row i+1 are
generated from those in row i. As long as we do not change the invertibility of this matrix,
we can modify it as we wish to improve the S-box. We propose that the given matrix be
split into two 4×8 matrices. In this case, we use (B5, D3) and (8F, D3) to do the affine
transformation, as shown in formula (2). In (2), hex B5 and 8F are written in binary as
10110101 and 10001111 respectively, so the first rows of the two 4×8 matrices
are 10110101 and 10001111. This change destroys the regularity of the matrix and
increases the difficulty of attack by increasing the number of private bytes.

(2)

The change of the matrix breaks the regularity of the original data. Although the
operational complexity of the calculation does not change heavily, the change largely
increases the difficulty of attack. Thus, the security factor of the system has been
increased overall.
Designing S-box with Better Nonlinearity. Many studies show that improving the
nonlinearity of a given S-box is an efficient way to increase its security [7]. We can
improve the nonlinearity by swapping two output vectors in the truth table of the
S-box [8]. To find a new S-box by this method, we need to define some sets of (ω, θ)
pairs for which B(ω, θ) is maximum and near-maximum.

Definition 1. Let b(x) = y be a bijective n×n S-box with Walsh-Hadamard
transform B(ω, θ).
(a) Let WHmax denote the maximum absolute value in the B matrix, and define the
following sets (see (3)):

W1+ = {(ω, θ) : B(ω, θ) = WHmax},  W1− = {(ω, θ) : B(ω, θ) = −WHmax}.  (3)

(b) Define the sets of (ω, θ) for which the WHT magnitude is close to the maximum
(see (4)):

W2+ = {(ω, θ) : B(ω, θ) = WHmax − 2},  W2− = {(ω, θ) : B(ω, θ) = −WHmax + 2},
W3+ = {(ω, θ) : B(ω, θ) = WHmax − 4},  W3− = {(ω, θ) : B(ω, θ) = −WHmax + 4}.  (4)

Further, define W2,3+ = W2+ ∪ W3+ and W2,3− = W2− ∪ W3−.

Theorem 2 (Incremental improvement of a bijective S-box).

Let b(x) = y be a bijective S-box, and let x1 and x2 be distinct input vectors with
corresponding outputs y1 = b(x1) and y2 = b(x2). Let b′(x) be the S-box identical to
b(x) except that b′(x1) = y2 and b′(x2) = y1. Writing Lω(x) = ω·x and Lθ(y) = θ·y for
the linear functions selected by the masks ω and θ, we can get S-boxes whose
nonlinearity is better than that of b(x) if the following conditions are satisfied:

(a) Lω(x1) ≠ Lω(x2) for all (ω, θ) ∈ W1+ ∪ W1−;

(b) Lθ(y1) ≠ Lω(x2) for all (ω, θ) ∈ W1+, and Lθ(y1) ≠ Lω(x1) for all (ω, θ) ∈ W1−;

(c) Lθ(y1) = Lω(x1) for all (ω, θ) ∈ W1+, and Lθ(y1) = Lω(x2) for all (ω, θ) ∈ W1−;

(d) for all (ω, θ) ∈ W2,3+, not all of the following are true:
Lθ(y2) = Lω(x1), Lθ(y1) = Lω(x2), Lθ(y1) ≠ Lω(x1), Lθ(y2) ≠ Lω(x2);

(e) for all (ω, θ) ∈ W2,3−, not all of the following are true:
Lθ(y2) ≠ Lω(x1), Lθ(y1) ≠ Lω(x2), Lθ(y1) = Lω(x1), Lθ(y2) = Lω(x2).

S-boxes with better nonlinearity can be acquired by the above method once the five
conditions are satisfied. The AES algorithm is optimized once its S-box has been
successfully optimized.
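A brute-force sketch of the quantities used above: the Walsh-Hadamard maximum WHmax, the nonlinearity 2^(n−1) − WHmax/2, and the output-swap move of Theorem 2. The search is exhaustive, so it is only feasible for small n; the 4-bit PRESENT cipher S-box below is a well-known external test vector, not an S-box from the paper.

```python
def parity(v: int) -> int:
    return bin(v).count("1") & 1

def wh_max(sbox, n):
    # Largest |B(omega, theta)| over all nonzero output masks theta
    # and all input masks omega (brute force, small n only).
    best = 0
    for theta in range(1, 1 << n):
        for omega in range(1 << n):
            s = sum(1 if parity(theta & sbox[x]) == parity(omega & x) else -1
                    for x in range(1 << n))
            best = max(best, abs(s))
    return best

def nonlinearity(sbox, n):
    return (1 << (n - 1)) - wh_max(sbox, n) // 2

def try_swap(sbox, n, x1, x2):
    # The Theorem 2 move: swap the outputs b(x1) and b(x2), then
    # report the candidate and whether nonlinearity improved.
    cand = list(sbox)
    cand[x1], cand[x2] = cand[x2], cand[x1]
    return cand, nonlinearity(cand, n) > nonlinearity(sbox, n)

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
```

In practice one would scan candidate (x1, x2) pairs and accept swaps that the theorem's conditions (or, as here, a direct nonlinearity computation) certify as improvements, which is the "smart hill climbing" idea of [8].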

4.3 Evaluation of the Optimized Results

In the encryption procedure of AES, the S-box used in SubBytes can be either the
existing S-box or the new S-box.
Inputting the same plaintext into the AES structure with different S-boxes, we get
different cipher texts after N rounds of transformation. Comparing the cipher texts
generated from the same plaintext and performing the related calculations, we can draw
a conclusion: cipher text encrypted via the optimized AES has a stronger anti-attack
characteristic.

5 Optimized GPRS-Based Wireless Security System

In the given GPRS-based WSS, we use the optimized AES to encrypt the product's
serial number and store this security code in the e-label. In addition, the communication
between security readers and servers is encrypted with the optimized AES. By
encrypting the key parts of the common system, we get a new system called the
optimized GPRS-based WSS. At present, applying GPRS wireless networks to security
systems is still a new technology and not yet mature. However, if this technology grows
stronger and is combined with cryptographic theory, it will play an important role in the
anti-counterfeiting field in the future.

6 Conclusions

In our work, we have analyzed the current situation and insecurity of current security
systems. We then selected AES, which is much safer, as the basic algorithm on the
basis of the existing GPRS-based WSS. In addition, the S-box was properly adjusted to
obtain an optimized AES, which was applied to the GPRS-based WSS to form a more
reliable and safer security system. It will perform the product authenticity enquiry
service more exactly and protect the benefits of firms and consumers more effectively.

This work mainly stated some possible methods for optimizing the S-box of AES.
To improve the S-box, we analyzed the theory of AES, and improved S-boxes have
been proposed. They are practical in theory; what remains is to program the procedures
and implement them. Now that we have built a useful theory, putting it into practice is
our future work.

References

1. Ni, W., et al.: Design and Implementation of RFID-based Multi-level Product Security
System. Computer Engineering & Design 15(30) (2009)
2. Jia, Y.: Research and Implementation of GPRS-based Wireless Security System. Dalian
University of Technology, Liaoning (2006) (in Chinese)
3. Liu, N., Guo, D.: AES Algorithm Implemented for PDA Secure Communication with Java
(2007) (in Chinese)
4. Zheng, D., Li, X.: Cryptography: Encryption Algorithms and Protocols. Electronic Industry
Press, Beijing (2009) (in Chinese)
5. Wang, Y.B.: Analysis of Structure of AES and Its S-box. PLA Univ. Sci. Tech. 3(3), 13–17
(2002)
6. Chen, L.: Modern Cryptography. Science Press, Beijing (2002) (in Chinese)
7. Millan, W.: How to Improve the Nonlinearity of Bijective S-boxes. In: Boyd, C., Dawson, E.
(eds.) ACISP 1998. LNCS, vol. 1438, pp. 181–192. Springer, Heidelberg (1998)
8. Millan, W.: Smart Hill Climbing Finds Better Boolean Functions. In: Workshop on Selected
Areas in Cryptology 1997, Workshop Record, pp. 50–63 (1997)
Areas in Cryptology 1997, Workshop Record, pp. 5063 (1997)
Design and Realization of FH-CDMA Scheme for
Multiple-Access Communication Systems

Abdul Baqi, Sajjad Ahmed Soomro, and Safeeullah Soomro

Department of Computer Science and Engineering,


Yanbu University College, Yanbu Al-Sinaiyah, Kingdom of Saudi Arabia
{abdul.baqi,sajjad.soomro,safeeullah.soomro}@yuc.edu.sa

Abstract. In this paper we propose a Frequency Hopping Code Division
Multiple Access (FH-CDMA) scheme. To improve the spreading process of the
spread spectrum modulation system, the conventional pseudo-random code has
been replaced by a chaotic signal owing to its dynamical behaviour. The
spreading code generator has been implemented using discrete integrated
circuits and components, with its output controlled by a conventional PN code.
The proposed circuit has been tested using a real-time voice signal, which has
been transmitted as a frequency hopping signal with different hopping patterns.
An FH-CDMA transmitter and receiver based on the proposed chaotic signal
generator have been experimentally verified. The results of the experimental
investigation are presented in the paper, along with the waveforms obtained at
various check nodes.

1 Introduction
Spread spectrum is a type of modulation that spreads the modulated signal
across the available frequency band, in excess of the minimum bandwidth required to
transmit the modulating signal [1] and [5]. Spreading makes the signal resistant to noise,
interference and eavesdropping. Spread spectrum is commonly used in personal
communication systems, including mobile radio communication and data transmission
over LANs, and has many unique properties that cannot be found in other modulation
techniques.
These include the ability to eliminate multi-path interference, privacy of message
security, multi-user handling capacity and low power spectral density, since the signal is
spread over a large frequency band [2] and [6]. There are two commonly used
techniques to achieve spread spectrum, viz. Direct Sequence Spread Spectrum (DS-SS)
and Frequency Hopping Spread Spectrum (FH-SS). A DS-SS transmitter converts an
incoming data (bit) stream into a symbol stream. Using a digital modulation technique
such as Binary Phase Shift Keying (BPSK) or Quadrature Phase Shift Keying (QPSK),
the transmitter multiplies the message symbols with a pseudo-random (PN) code. This
multiplication increases the modulated signal bandwidth according to the length of the
chip sequence. A Code Division Multiple Access (CDMA) system is implemented via
this coding: each user of a CDMA system is assigned a unique PN code sequence, so
more than one signal can be transmitted at the same time on the same frequency.

Fig. 1. Voice to voltage Converter

In this paper the Frequency Hopping Spread Spectrum (FH-SS) modulation technique
has been used with a new spreading code in which a conventional PN code controls a
typical chaos oscillator. The resultant chaotic signal has a wide frequency range, from a
few kHz to a few MHz (12.34 kHz to 9.313 MHz). The motivation for using a chaotic
signal in place of a conventional PN code is that chaotic systems are nonlinear
dynamical systems with certain distinct characteristics: they can generate highly
complex waveforms even though the number of interacting variables is minimal [3].
For an iterated map, a dynamical system with a single variable can produce chaotic
behaviour, while for a continuous system three coupled differential equations can
produce complicated dynamics. Time series generated from chaotic dynamics have
the following three interesting properties: (i) wide-band spectrum, (ii) noise-like
appearance, and (iii) high complexity. In a chaotic system, trajectories starting from
slightly different initial conditions diverge exponentially in time, which is known as
sensitive dependence on the initial conditions. Because of these distinctive properties,
chaotic systems are widely studied for secure communication and multiple-user
communication applications [4]. In this paper we present the methodology in
Section 2 and the results of our technique in Section 3, and finally we conclude.
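The CDMA principle sketched in this introduction — each user spreads ±1 symbols with a unique code, and despreading correlates the received sum against that code — can be illustrated as follows. The short ±1 Walsh-style codes are illustrative assumptions, not the chaotic spreading codes proposed in this paper.

```python
def spread(symbols, code):
    # Replace each ±1 symbol by (symbol * chip) for every chip of the code.
    return [s * c for s in symbols for c in code]

def despread(chips, code):
    # Correlate each code-length segment with the code; the sign of the
    # correlation recovers the transmitted symbol.
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

# Two users share the channel: their spread signals simply add.
code_a = [1, 1, -1, -1, 1, 1, -1, -1]   # orthogonal Walsh-style codes
code_b = [1, -1, 1, -1, 1, -1, 1, -1]
tx = [a + b for a, b in zip(spread([1, -1, 1], code_a),
                            spread([-1, -1, 1], code_b))]
```

Because the codes are orthogonal, despread(tx, code_a) recovers [1, -1, 1] and despread(tx, code_b) recovers [-1, -1, 1] from the same summed channel signal.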

2 Frequency Hopping Spread Spectrum (FH-SS)


Frequency hopping is a radio transmission technique where the signal is divided into
multiple parts and then sent across the air in a random pattern of jumping, or hopping,
frequencies. When transmitting data, these multiple parts are data packets. The
hopping pattern can change from several times per second to several thousand times per
second.

Fig. 2. Circuit Description of Receiver of Proposed Scheme

Frequency hopping is the easiest spread spectrum modulation to use. Any
radio with a digitally controlled frequency synthesizer can, theoretically, be converted
to a frequency hopping radio. This conversion requires the addition of a pseudo-noise
(PN) code generator to select the frequencies for transmission or reception. Most
hopping systems use uniform frequency hopping over a band of frequencies. This is
not absolutely necessary, if both the transmitter and receiver of the system know in
advance what frequencies are to be skipped. Thus a frequency hopper in two meters
could be made that skips over commonly used repeater frequency pairs. A
frequency hopped system can use analogue or digital carrier modulation and can be
designed using conventional narrow-band radio techniques. De-hopping in the
receiver is done by a synchronized pseudo-noise code generator that drives the
receiver's local oscillator frequency synthesizer. FH-SS splits the available frequency
band into a series of small sub-channels. The transmitter hops from sub-channel to
sub-channel, transmitting short bursts of data on each channel for a predefined period,
referred to as the dwell time (the amount of time spent on each hop). The hopping
sequence is obviously synchronized between transmitter and receiver to enable
communication to occur. FCC regulations define the size of the frequency band, the
number of channels that can be used, and the dwell time and power level of the
transmitter. In frequency hopping spread spectrum a narrowband signal hops from one
frequency to another using a pseudorandom sequence to control the hopping. This
results in the signal lingering at a predefined frequency only for a short period of
time, which limits the possibility of interference from another signal source
generating radiated power at a specific hop frequency.
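The PN-controlled hop selection described above can be sketched as follows: a small maximal-length LFSR (an illustrative polynomial, not the paper's generator) supplies pseudo-random bits, and groups of bits index one of the available sub-channels. A transmitter and receiver seeded identically derive the same hopping pattern, which is what synchronization relies on.

```python
import math

def lfsr(seed: int):
    # 4-bit maximal-length Fibonacci LFSR, taps [4, 3] (x^4 + x^3 + 1),
    # period 15; the polynomial is an illustrative assumption.
    state = seed & 0xF
    while True:
        fb = ((state >> 3) ^ (state >> 2)) & 1
        state = ((state << 1) | fb) & 0xF
        yield state & 1

def hop_sequence(bits, n_channels: int, n_hops: int):
    # Group log2(n_channels) PN bits per hop to select the next channel.
    k = int(math.log2(n_channels))
    seq = []
    for _ in range(n_hops):
        idx = 0
        for _ in range(k):
            idx = (idx << 1) | next(bits)
        seq.append(idx)
    return seq

tx_hops = hop_sequence(lfsr(0b1001), 8, 10)
rx_hops = hop_sequence(lfsr(0b1001), 8, 10)  # same seed -> same pattern
```

Mapping each index to a carrier (e.g. base frequency plus index times channel spacing) then drives the frequency synthesizer during each dwell time.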

2.1 Types of Frequency Hopping Spread Spectrum

Frequency Hopping Spread Spectrum systems are categorized into the following
types:

2.2 Slow Frequency Hopping (SFH)

In an SFH spread system the hop rate fh (the chip rate) is less than the baseband
message bit rate fb. Thus two or more (in several implementations, more than 1000)
baseband bits are transmitted at the same frequency before hopping to the next RF
frequency. The hop duration TH is related to the bit duration Tb by:

TH = K·Tb for K = 1, 2, 3, … and fc = fH = 1/TC.

2.3 Fast Frequency Hopping Spread Spectrum (FFH)

In an FFH spread spectrum system the chipping rate fc (here the chipping rate is the
same as the hopping rate) is greater than the baseband data rate fb. In this case one
message bit of duration Tb is transmitted by two or more frequency-hopped RF signals.
The hop duration, or chip duration (TH = TC), is defined by:

TC = TH = Tb/K for K = 1, 2, 3, … and fc = fh = 1/TC.
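The two timing relations can be checked numerically; the bit duration used below is an arbitrary assumed value, chosen only to illustrate that SFH hops slower than the bit rate while FFH hops faster.

```python
def sfh_hop_duration(Tb, K):
    # Slow FH: one hop carries K message bits, so TH = K * Tb.
    return K * Tb

def ffh_hop_duration(Tb, K):
    # Fast FH: one message bit spans K hops, so TH = Tb / K.
    return Tb / K

Tb = 1000  # assumed bit duration in microseconds
print(sfh_hop_duration(Tb, 4))  # 4000: hop rate is fb/4 (slow)
print(ffh_hop_duration(Tb, 4))  # 250.0: hop rate is 4*fb (fast)
```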

2.4 Advantages of Frequency Hopping Spread Spectrum (FH-SS)

Some of the advantages of Frequency Hopping Spread Spectrum modulation are as
follows:

1. Frequency Hopping Spread Spectrum has the ability to provide diversity in fixed
wireless access applications or slowly moving systems.
2. A frequency hopping system has a relatively short acquisition time compared with a
Direct Sequence system.
3. It can achieve the greatest amount of spreading.
4. It can be programmed to avoid unwanted portions of the spectrum.

The second input to the modulo-two adder is generated by the frequency
synthesizer driven by the chaos oscillator; as such, this signal changes in a
pseudorandom manner. The chaotic signal generator is the heart of the proposed
FH-CDMA system. It is designed around a Wien bridge oscillator in which a digitally
controlled variable resistance technique, using a Linear Feedback Shift Register
(LFSR), decoders and an array of transistors, has been used to select the resistor
values randomly, as shown in Figures 1 and 2. The output of the oscillator thus
produces a sustained analogue signal with varying frequency and amplitude.
This FH-SS signal is regenerated at the receiver by means of the chaos oscillator in
association with a locally generated PN sequence, in a similar fashion to that used at
the transmitter, for proper synchronization between transmitter and receiver. The
FH-SS signal generated at the receiver is modulo-2 added with the received
modulated signal.

Fig. 3. Waveform of output of FH-CDMA Receiver (transmitted voice signal)

The output of the modulo-2 adder is converted into parallel form by using a
serial-to-parallel converter (demultiplexer). The demultiplexer output drives a digital-
to-analogue converter. The resultant analogue signal is amplified and low-pass
filtered to deliver the transmitted information signal at the receiver. The circuit
diagram representation of the proposed scheme is outlined in Figure 2.
The resultant FH-SS signal at the receiving end is shown in Figure 3. The output of
the receiver after de-spreading with the locally generated spreading code is also
shown in the figure. It can be seen that the received signal is similar to the transmitted
signal shown in Figure 2.

3 Experimental Verification and Results

The proposed Frequency Hopping Code Division Multiple Access (FH-CDMA)
scheme has been experimentally tested for its performance by transmitting and
receiving a speech signal over the FH-CDMA system, built using readily available
linear and non-linear ICs. Various waveforms obtained while transmitting and
receiving the speech signal have been recorded for technical observation. The
waveforms obtained at various check points have been found satisfactory and are in
conformity with the theoretical observations; they are shown in Figure 3.

4 Conclusion
Spread spectrum based Code Division Multiple Access (CDMA) systems are
becoming increasingly popular for multi-user communication. In most such
multi-user systems, the given bandwidth (in a given area) has to be divided and
allocated to various communication channels. However, in order for many users to
share the same bandwidth in a given service area, an equal number of unique pseudo-
random codes with good correlation and statistical properties is required. Chaotic
signal generators are generally used for generating pseudorandom codes with good
correlation properties. In this paper a hardware-based chaotic signal generator has been
proposed which can easily be programmed to generate a large number of unique
random codes best suited to a multi-user CDMA system. The proposed programmable
chaotic signal generator has been used to implement an FH-CDMA communication
system and subsequently tested for the transmission and reception of a voice signal.
The results of the experimental verification are presented in the paper and are in
conformity with theoretical observation. The proposed scheme will find a range of
applications in spread spectrum modulation, CDMA, Global Positioning Systems
(GPS), etc. Further, the proposed scheme guarantees adequate security with low
system complexity.

References
1. Goodman, D.J., Henry, P.S., Prabhu, V.K.: Frequency-Hopping multilevel FSK for mobile
radio. Bell Syst. Tech. J. 59(7), 1257–1275 (1980)
2. Einarssen, G.: Address assignment for a Time-Frequency coded Spread Spectrum. Bell
Syst. Tech. J. (7), 1241–1255 (1980)
3. Jiang: A note on Chaotic Secure Communication Systems. IEEE Trans. Circuits Systems:
Fundamental Theory and Applications 49, 92–96 (2002)
4. Bhat, G.M., Sheikh, J.A., Parah, S.A.: On the Design and Realization of Chaotic Spread
Spectrum Modulation Technique for Secure Data Transmission, vol. (14–16), pp. 241–244.
IEEE Xplore (2009)
5. Linnartz, J.P.M.G.: Performance Analysis of Synchronous MC-CDMA in mobile Rayleigh
channels with both Delay and Doppler spreads. IEEE VT 50(6), 1375–1387 (2001)
6. Win, M.Z.: A unified spectral analysis of generalized time-hopping spread-spectrum
signals in the presence of timing jitter. IEEE J. Sel. Areas in Communication 20(9),
1664–1676 (2002)
Design and Implement on Automated Pharmacy System

HongLei Che*, Chao Yun, and JiYuan Zang

School of Mechanical Engineering and Automation,


Beihang University, NO. 37 Xueyuan Road, Haidian District,
Beijing, 100191, P.R. China
sychehonglei@163.com, cyun@vip.sina.com,
zangjy1981@126.com

Abstract. This paper introduces the design of an automated pharmacy system
aimed at storing and dispensing packed drugs. The system provides three main
functions and their implementations. The detailed structural design of the
automatic medicine-input system, the dense medicine-store system and the
medicine-output system is introduced, and the control methods of each actuator
in the system are studied. Finally, the functions of the system software are
designed. The system has been developed and applied in a hospital outpatient
pharmacy, where it is in good working condition.

Keywords: Packed Drugs, Automated pharmacy, Automatic medicine-input
system, Dense medicine-store system, Medicine-output system, Controlling
system.

1 Introduction
The pharmacy is a pivotal department of a hospital. At present, domestic hospital
pharmacies mainly store medicine on fixed shelves, but this access mode has
unavoidable disadvantages: 1) drug storage is scattered and space utilization is very
low; 2) the pharmacist has high labor intensity and low working efficiency; 3) manual
medicine-output easily makes mistakes and causes drug accidents. Therefore, pharmacy
automation is the new trend in hospital pharmacy development, and it is also an
important sign of service and working concept innovation [3].
This paper introduces an automated pharmacy system which mainly consists of an
automatic medicine-input system, a dense medicine-store system, a medicine-output
system and a database management system. The system implements three basic
functions: medicine-input, dense medicine-store and medicine-output.

2 Automated Pharmacy System Ontology Structure


The ontology structure of the automated pharmacy system is shown in Fig. 1; it
includes the automatic medicine-input system, the dense medicine-store system and
the medicine-output system.

* Corresponding author.


Fig. 1. Automated Pharmacy

2.1 Automatic Medicine-Input System

The automatic medicine-input system mainly consists of the medicine-input test
system, the medicine-input transmission system and the medicine-input manipulator.
The medicine-input detection system consists of an operating platform, 3 laser
range-finding sensors and a PISO-813 data acquisition card, as shown in Fig. 2.
Through the three laser range-finding sensors, the data acquisition card collects in real
time the length, width and height of each kit to be added to the dense storage system.
If the collected size matches the size of the kit in the database, the kit is transmitted to
the medicine-input manipulator; otherwise an error message is generated.

Fig. 2. Detection of Medicine-input System
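The size check described above can be sketched as follows; the tolerance value, the kit record and its dimensions are hypothetical values for illustration, not data from the paper.

```python
def verify_kit(measured, expected, tol=1.0):
    # Compare laser-measured (length, width, height) in mm against the
    # database record, within an assumed tolerance of tol mm per axis.
    return all(abs(m - e) <= tol for m, e in zip(measured, expected))

kit_db = {"kit-001": (95.0, 45.0, 22.0)}   # hypothetical database record
reading = (95.4, 44.8, 22.1)               # values from the three sensors
if verify_kit(reading, kit_db["kit-001"]):
    pass  # forward the kit to the medicine-input manipulator
else:
    raise ValueError("kit size mismatch: generate error message")
```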



The medicine-input transmission system consists of a horizontal linear motion unit, a
vertical linear motion unit and a synchronous transmission mechanism, as Fig. 3
shows.

Fig. 3. Transmission of Medicine-input System

The medicine-input manipulator consists of a pedestal body, a stepping motor, a
lifting board, a transmission system and a rotating electromagnet, as shown in Fig. 4.

Fig. 4. Manipulator

The manipulator accepts the tested drugs. The medicine-input transmission system
locates the medicine-input manipulator at the storage position of the dense storage
system, the stepping motor turns a certain angle according to the height of the kit, and
the lifting board rises the corresponding height, driven by the synchronous belt. When
the kits rise higher than the frame former board, gravity makes the drugs slide into
the slot of the frame former board, and then into the slot of medicine storage in the
dense storage system. The stepping motor repeats the above movements until the
lifting board hands out the last kit. In this process, if a drug package stalls due to
friction, the rotating electromagnet acts and drives the dial piece to push it, making
the drug slide successfully.

2.2 Dense Storage Systems

The dense storage system consists of the roller-type slope store pharmacy and the
roller medicine-store slots, as shown in Fig. 5.
The roller-type slope store pharmacy mainly consists of the frame body, support
beams and roller medicine-store slots. The frame body is a 3540 mm × 1440 mm ×
2450 mm cuboid structure composed of aluminum profiles; the support beams are also
constructed of aluminum profiles, and together with the frame body they constitute
the mounting base of the roller medicine-store slots.

Fig. 5. Dense Storage System

The roller medicine-store slot consists of rollers, rolling shafts, parting strips, borders
and beams, as shown in Fig. 6. The beams and borders compose the mounting base of
the roller medicine-store slot; the rollers, 10 mm in diameter, are set onto the rolling
shafts, which are uniformly distributed with a separation of 20 mm. Because the widths
of packed drugs differ, parting strips are also set onto the rolling shafts; the space
between adjacent parting strips constitutes the minimum storage unit for packed drugs,
and fixing the parting strips onto the rolling shafts enhances the overall rigidity of the
roller medicine-store slot.
The design idea is based on the gravity blanking principle: the roller medicine-store
slot is installed in the store pharmacy at a 15° angle, so that drugs entering the store are
driven by their own gravity and automatically slide to the medicine-out opening of the
dense storage system, waiting for medicine-out.
Design and Implement on Automated Pharmacy System 171

Fig. 6. Roller Medicine-store Slot

2.3 Automatic Medicine-Out System

The automatic medicine-out system consists of the medicine-out drivers and the
elevator. The packed drugs in the dense storage system lie on the tilted roller
medicine-store slots; each slot keeps one type of drug, and together the slots
form a matrix arrangement. When no medicine-out action is taking place, the
packed drugs are held reliably by the limit block shafts installed on the
flanges at both ends.
The medicine-out driver consists of an electromagnet and a flap, as shown in
Fig. 7. The flap is mounted on the ejector rod in front of the electromagnet.
When the electromagnet is energized, the ejector rod retracts and makes the
flap rotate about its fixed axis; the front end of the kit is jacked up by the
flap, and once the bottom of the kit is higher than the limit block shaft the
medicine-out action occurs: the kit slides down the slope onto the belt of the
elevator.

Fig. 7. Medicine-out Driver

The elevator consists of guide rails, a belt line, aluminum paddles, a flap,
the transmission system, a photoelectric sensor for detecting drugs, and
self-protection sensors, as shown in Fig. 8. The elevator moves up and down
along two guide rails in the vertical plane; after it is positioned at the
layer of the target medicine-store slot, the medicine-out driver acts and
completes the dispensing. As each kit is ejected it passes through the
detection surface of the photoelectric sensor, which sends the detected signal
to the PLC; the PLC counts the kits and matches the count against the quantity
in the prescription. The ejected kits fall directly onto the belt line; when
all drugs have been taken out, the belt line transports them to the
medicine-out opening, the flap opens, and the drugs are delivered, completing
the dispensing of the prescription.
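The count-and-match step can be illustrated in a few lines. This is a hypothetical sketch, not the system's PLC program: each photoelectric-sensor pulse increments a counter, and dispensing stops once the prescribed quantity is reached.

```python
# Hypothetical sketch of the counting step: sensor pulses are counted and
# compared against the quantity in the prescription (names are illustrative).
def dispense(prescribed_qty, sensor_pulses):
    """Count sensor pulses until the prescribed quantity is matched.

    Returns (dispensed, matched): how many kits were counted, and whether the
    count equals the prescribed quantity.
    """
    dispensed = 0
    for _pulse in sensor_pulses:
        dispensed += 1
        if dispensed == prescribed_qty:
            break
    return dispensed, dispensed == prescribed_qty
```

A mismatch (fewer pulses than prescribed) signals that the prescription was not fully dispensed and the inventory feedback must reflect the actual count.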

Fig. 8. Elevator

If a kit is blocked in the roller medicine-store slot, for example because of
its packaging quality or because the slot surface is not smooth enough, and the
elevator continues to move, the kit will interfere with the elevator's motion
and, in serious cases, damage the elevator and the medicine-store slot. Two
pairs of through-beam photoelectric sensors are therefore installed on the
elevator for protection and detection. When a kit or other object blocks a
detection beam, the control system stops the elevator; once the object is
removed and the beam passes again, the control system allows the elevator to
continue moving.
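The interlock behavior described above amounts to a simple rule: motion is permitted only while both protection beams are unbroken. The sketch below is illustrative only (the sampling representation is an assumption, not the actual control code).

```python
# Illustrative interlock for the two through-beam protection sensors:
# the elevator may move only while both detection beams are clear.
def elevator_may_move(beam1_clear, beam2_clear):
    """Block elevator motion if either protection beam is interrupted."""
    return beam1_clear and beam2_clear

def run(beam_log):
    """Simulate the interlock over a log of (beam1_clear, beam2_clear)
    samples, returning whether motion was permitted at each sample."""
    return [elevator_may_move(b1, b2) for b1, b2 in beam_log]
```

Note that motion resumes automatically once both beams are clear again, matching the behavior described in the text.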

3 Automated Pharmacy System Control System


The control system of the automated pharmacy adopts the functional structure
shown in Fig. 9. The system is divided into four functional modules: the
management level, monitoring level, control level, and executive level.
Management level: the center for prescription information, drug warehousing
information and inventory information. The management level maintains the drug
inventory database. It receives prescription information from the HIS,
constructs the medicine-input list according to the inventory, adds store
addresses and quantities to the prescription drugs on a first-in first-out
basis, and distributes the processed prescription information. After receiving
the fed-back medicine-out counts it updates the inventory information, so the
drug inventory is managed dynamically in real time.
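The first-in first-out allocation mentioned above can be sketched as follows. The data layout (a list of slot address and stock pairs, ordered oldest first) is an assumption for the example, not the paper's database schema.

```python
# Sketch of first-in first-out allocation: store slots for a drug are drained
# oldest batch first, and the consumed quantities are recorded per slot.
def allocate_fifo(slots, qty):
    """slots: list of (slot_address, stock) ordered oldest first.

    Returns (picks, remaining_slots), where picks lists (slot_address, taken).
    """
    picks, remaining = [], []
    for addr, stock in slots:
        if qty > 0:
            take = min(stock, qty)
            picks.append((addr, take))
            qty -= take
            if stock - take > 0:
                remaining.append((addr, stock - take))
        else:
            remaining.append((addr, stock))
    if qty > 0:
        raise ValueError("insufficient stock for prescription")
    return picks, remaining
```

For a prescription of 4 kits with 2 kits in the older slot and 5 in the newer one, the older slot is emptied first and the newer slot retains 3 kits.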
Monitoring level: the middle layer. It exchanges information with the
management level, receives the management level's prescription information,
schedules tasks for each executive device, and sends the control instructions
generated by the monitoring program to the control level. It also communicates
with the control level, reads back the count values, monitors the various
status flags, and performs the logical judgment processing.

[Figure: the hospital information management system (HIS) connects over TCP/IP
to the automated pharmacy service computer (management level), which connects
over TCP/IP to the automated pharmacy IPC (monitoring level). The control
level, linked via RS232, PCI and ISA, comprises a PMAC motion control card, a
PISO813 data acquisition card and a CP1H PLC. The executive level comprises the
servo, stepper and AC motors with their drives, the electromagnets, and the
laser, photoelectric, through-beam, rotating, limit and zero sensors.]
Fig. 9. The Structure of Automated Pharmacy Controlling System

Control level: according to the control instructions sent by the monitoring
level, it calls the corresponding low-level control procedures to drive the
executive-level components.
Executive level: accepts the programmed instructions from the control level,
drives the executive motors to run as required, and performs kit detection,
protection detection, and the system zero and limit detection.

4 Automated Pharmacy System Software System


The software system consists mainly of the automatic medicine-input system and
the automatic medicine-output system; their specific functions are as follows.

4.1 Automatic Medicine-Input Software System

The flow of the automatic medicine-input program is shown in Fig. 10.



[Figure: flow of the medicine-input program. After initialization the system
waits for a medicine-input request; the number of kits to be put into the
roller medicine-store slot is determined, the manipulator prepares to move, and
the kits are placed on the belt line of the automatic medicine-input system.
Laser sensors detect the length, width and height of each kit; if the
dimensions are consistent with the database, the kits are delivered to the
manipulator, the medicine-input transmission system carries the manipulator to
the roller medicine-store slot, the manipulator slides the kits into the slot,
and the manipulator and transmission reset. Medicine-input is then complete.]
Fig. 10. The Program of Automatic Medicine-input

The combination of the medicine-input detection system, the medicine-input
transmission system and the medicine-input manipulator realizes batch drug
supply. During medicine-input, the pharmacist puts the supplied drugs into the
medicine-input detection system. Based on the size information collected by the
laser displacement sensors, correctly placed drugs are conveyed by the
detection system to the medicine-input manipulator. After the manipulator is
carried by the transmission system to the target medicine-store slot, it acts
and supplies the drugs into the slot of the dense storage system.

4.2 Automatic Medicine-Output Software Systems

The combination of the medicine-output drivers and the elevator realizes batch
drug output. During dispensing, the system reads the drug store-position
information from the database, the elevator moves to the location of the drugs,
the medicine-out driver acts, and the counting sensor feeds the quantity of
dispensed drugs back to the database.
After the prescription has been dispensed, the elevator moves to the
medicine-out opening; the elevator belt line and the flap act at the same time
and complete the medicine-out. The flow of the automatic medicine-out program
is shown in Fig. 11.

[Figure: flow of the medicine-output program. After initialization the
prescription information is read and the servo motor performs the positioning
motion; once positioning is complete, the kits are counted and the actual
number is fed back to the database. When all kits of the prescription have
been dispensed, the elevator moves to the kit exit, its belt line runs and its
flap opens; after a delay of 3 seconds the kits slide out of the elevator, the
flap and belt line reset, and medicine-out is complete.]
Fig. 11. The Program of Automatic Medicine-output

5 Conclusions
This paper has introduced the overall structure, the control system and the
software design of the automated pharmacy system. At present the system is
operating in a hospital in good condition, which shows that the design is
reasonable and feasible.

References
1. Liu, X.G., Yun, C., Zhao, X.F., Wang, W., Ma, Y.: Design and application on
automatization device of pharmacy. Journal of Machine Design, 65–67 (2009)
2. Zhao, X.F., Yun, C., Liu, X.G., Wang, W.: Optimization for Scheduling of Auto-pharmacy
System. Computer Engineering, 193–195, 200 (2009)
3. Chen, L.: The Research and Its Implement of a New Type Intelligent Dispensing System
for Chinese Medicine. Sichuan University, Chengdu (2004)
4. Subramanyan, G.S., Yokoe, D.S., Sharnprapai, S., Nardell, E., McCray, E., Platt, R.: Using
Automated Pharmacy Records to Assess the Management of Tuberculosis. Emerging
Infectious Disease 5(6), 788 (1999)
5. Thomsen, C.J.: A Real Life Look at Automated Pharmacy Workflow Systems. National
Association of Chain Drug Stores 1, 29 (2005)
6. Thomas, M.P.: Medication Errors. Clinical Pediatrics 4, 287 (2003)
Research on Digital Library Platform
Based on Cloud Computing

Lingling Han and Lijie Wang

Hebei Energy Institute of Vocation and Technology, Tangshan, Hebei, China


hanlingling2002@126.com, wanglj509@163.com

Abstract. Cloud computing is a new computing model. The emergence and
development of cloud computing have a great effect on the development and
application of digital libraries. Based on an analysis of the problems in
existing digital libraries, a new digital library platform architecture model
based on cloud computing is put forward. The model consists of four layers:
the infrastructure layer, data layer, management layer, and service layer. The
structure and function of each layer are described in detail. The new digital
library platform can solve the problems of library resource storage and
sharing effectively, and provide fast, safe, convenient and efficient services
to users.

Keywords: cloud computing, digital library, security model, service interface.

1 Introduction
With the development of computer, network and information technologies, the
digital library faces great challenges, such as resource storage and sharing,
varied personalized service requirements, and so on. To solve these problems
and bring the digital library into full play, the space and time constraints
of the existing library must be broken; in the new era, the digital library
should deliver people-oriented service. Cloud computing is an effective way to
promote digital library development [1].
The cloud computing concept was first proposed by Google, and IBM, Microsoft
and others have also defined it; there is as yet no unified definition. In a
word, cloud computing is a newly emerging computing model that combines the
merits of parallel computing, distributed computing, grid computing, utility
computing, network storage technologies, virtualization and load balancing.
Its principle is to integrate the computers distributed across a network into
one entity with strong computing ability, and to deliver that computing power
to terminal computers through the SaaS, PaaS, IaaS and MSP business models.
Cloud computing services are managed by a data processing center, which
provides a unified service interface to users and meets users' individual
needs. The cloud computing service model is shown in Fig. 1.

Fig. 1. Cloud computing model

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 176–180, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Since Google proposed the concept in 2006, cloud computing has become a new
research hotspot in the IT area [2]. It is now widely used in digital
libraries, office systems and so on. In 2009, the concept of the cloud
computing library was proposed by Richard Wallis [3]. OCLC has announced that
library management services based on cloud computing will be offered to its
members; besides, the District of Columbia Public Library and Eastern Kentucky
University Library are offering services based on cloud computing. In our
country, cloud computing is still at the stage of theoretical research, and
many scholars have studied it [4-6]. To promote the application of cloud
computing in digital libraries, this paper puts forward a new digital library
model based on cloud computing. The model exploits cloud computing's
distributed storage and computing power, realizes resource sharing, and
improves the service efficiency of the digital library.

2 Digital Library Model Based on Cloud Computing


Today there are a series of problems in digital libraries, such as resources
independent of each other, a low level of information technology, non-uniform
resource formats, and hardware limitations. To solve these problems, this
paper proposes a new digital library platform based on cloud computing, which
offers a unified service interface and provides personalized service to
different terminal users such as computers and PCs. The model of the digital
library platform is shown in Fig. 2.
The digital library platform consists of four layers: the infrastructure
layer, data layer, management layer and service layer. A detailed description
of each layer is given in Fig. 3. Adjacent layers communicate using XML
technology. The platform provides service to users in two ways: through the
user interface and through the service interface. According to their
authority, users can visit the digital library and meet their needs through
these two interfaces. During service, how the service is realized is
transparent to users: users only need to consider what services they want,
regardless of how the services are implemented.

[Figure: users and application programs access the platform through a user
interface and a service interface; inside the platform, the service layer
(service interface, searching, ...), management layer (task and safety
management, ...), data layer (databases, relationships, ...) and
infrastructure layer (hardware, software, ...) are stacked.]

Fig. 2. Digital library platform model

2.1 Infrastructure Layer

The infrastructure layer of the digital library platform is composed of many
public clouds and private clouds, which are integrated through the Internet to
form a huge virtual data center or supercomputer.
With respect to the public cloud, the library's digital resource storage and
the application environment of the data center can be built using IaaS. With
respect to the private cloud, a local library can build its own digital
library platform on the main server and application server provided by the
public cloud. The private cloud can protect the resources that are not allowed
to be visited, while most resources remain open to the other clouds. In this
way the platform not only realizes resource sharing but also ensures the
safety of local resources.

[Figure: the four layers in detail: the service layer (user interface,
application interface), management layer (hardware management, data
management, safety management), data layer (object sets, object-relational
mapping, databases) and infrastructure layer (software, hardware), all built
on the cloud.]

Fig. 3. Structure of the four layers

2.2 Data Layer

The function of the data layer is to convert non-uniform data into unified
resource objects. It includes the databases, the object-relational mapping and
the object sets.

2.2.1 Databases
Different library platforms may use different databases, but most use one of
the following: Oracle, SQL Server, and so on.

2.2.2 Object-Relational Mapping
Different databases have different drivers and forms of data storage, so the
different data forms must be converted into unified objects. The
object-relational mapping performs this conversion.
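The mapping idea can be illustrated minimally: a raw row from any source database is translated into the platform's unified object through a field map. All field names below are invented for the example.

```python
# Minimal illustration of object-relational mapping: rows from differently
# shaped source databases are converted into one unified resource object.
def to_resource_object(row, field_map):
    """Map a raw row (dict) to a unified object.

    field_map maps unified field names to the source column names.
    """
    return {unified: row.get(source) for unified, source in field_map.items()}
```

Two libraries storing titles under different column names can thus both be exposed through the same unified `title` field.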

2.2.3 Object Sets
Object sets include resource files, meta-information data, source data
directories and so on, which together form a set with a uniform format. The
manager of the digital library can complete the work of building the digital
library using the object sets.

2.3 Management Layer

The management layer is the core layer of the digital library platform. Its
function is to manage the hardware in the infrastructure layer, the data
resources in the data layer, and system security.

2.3.1 Hardware Management
Hardware management schedules the hardware in the cloud so that the system
operates efficiently; it includes network control, server clustering, and so
on. It not only parallelizes the computing operations but also deals with most
system failures automatically.

2.3.2 Data Management


L. Richardson proposed seven standard services to be provided by a digital
library platform based on cloud computing, as shown in Fig. 4: the resource
creating service, resource cataloging service, index creating service,
resource searching service, library management, resource browsing service, and
meta-information management service [7]. These seven services are realized by
different events and can complete the creation, selection and deletion of
objects.

[Figure: the seven services arranged around a central Data Management block.]

Fig. 4. Structure of data management

2.3.3 Security Management


To ensure the system's safety, a series of security measures should be taken.
The system security model is shown in Fig. 5.

[Figure: a user with operation authority passes through the access control and
operating-transparency mechanisms and the data encryption mechanism to the
Internet, and then to the cloud, which applies a trust mechanism and
information security assessment.]

Fig. 5. Security model



In the private cloud, data backup, system logs, device monitoring and other
measures are taken to keep local resources safe. Between different clouds, the
system uses information security assessment, a mutual trust mechanism, and
data encryption to protect the security of communications. In addition,
operations are transparent to users: data storage, computing, invalidation and
so on are all isolated from users. The digital library manager can assign
different permissions according to each user's identity.
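The identity-based permission assignment can be sketched as a simple role check. The roles and operation names below are illustrative assumptions, not the paper's actual access-control implementation.

```python
# Hedged sketch of identity-based permission assignment: the manager assigns
# a permission set per role, and each requested operation is checked against it.
ROLE_PERMISSIONS = {
    "admin": {"library_management", "lending_management", "charges", "development"},
    "reader": {"borrow", "schedule", "retrieve", "exchange"},
}

def is_allowed(role, operation):
    """Return True if the given role is permitted to perform the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

An unknown role receives the empty permission set, so every request from it is denied by default.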

2.4 Service Layer

The service layer provides the visiting interface to users. Users with
administrative privileges can perform library management, lending management,
library charges, and application development and expansion. Personal users can
log in to the digital library and enjoy online services such as book
borrowing, book scheduling, document retrieval, and academic exchange.

3 Summary
Cloud computing is an effective new way to build a modern digital library
platform. Based on cloud computing, this paper has presented a new digital
library platform model and given its architecture in detail. The platform
implements resource storage and sharing efficiently and provides users with
fast, convenient and efficient services. This study can serve as a reference
for the design and realization of digital libraries.

References
1. Hu, X.J., Fan, B.S.: Cloud Computing: The Challenges to Library Management. Journal of
Academic Libraries 27(4), 7–12 (2009)
2. Buyya, R.: Cloud computing and emerging IT platforms: Vision, hype, and reality for
delivering computing as the 5th utility. Future Generation Computer Systems 6, 599–616
(2009)
3. Wallis, R.: Cloud Computing, Libraries and OCLC. The Library 2.0 Gang (2009),
http://librarygang.talis.com/2009/05/06/library-20-gang-0509-cloud-computing-libraries-and-oclc/
4. Zhou, X.B., She, K., Ma, J.H.: Composition Approach for Software as a Service Using
Cloud Computing. Journal of Chinese Computer Systems 31(10), 9421953 (2010)
5. Zhang, G.W., He, R., Liu, Y., Li, D.Y.: An Evolutionary Algorithm Based on Cloud
Model. Chinese Journal of Computers 31(7), 1082–1091 (2008)
6. Zheng, P., Cui, L.Z., Wang, H.Y., Xu, M.: A Data Placement Strategy for Data-Intensive
Applications in Cloud. Chinese Journal of Computers 8, 1472–1480 (2010)
7. Richardson, L., Ruby, S.: RESTful Web Services (2010),
http://home.cci.lorg/~cowan/restws.pdf
Research on Nantong University of Radio and TV
Websites Developing Based on ASP and Its Security

Shengqi Jing

Radio and TV University of Nantong, Jiangsu, China

Abstract. This paper first introduces the current research and development of
ASP technology at home and abroad. It then introduces in detail the ASP
technology behind the Nantong Radio and TV University website, its database
technology, network security technology and web-site security. On this basis,
the author presents the site's management, display and query functions, and
then introduces several technologies that must be mastered for web-site
security, including encryption and authentication, firewalls, intrusion
detection, system backup, and so on. Finally the paper gives a conclusion and
analysis. It offers a useful reference for the development and security
research of ASP-based information query systems and has achieved good results.

Keywords: ASP, SQL database, security, Nantong.

1 Introduction
ASP (Microsoft Active Server Pages) is a server-side scripting environment
developed by Microsoft; it is a collection of objects and components. An ASP
file, with the extension .asp, is an executable script embedded in an HTML
document; HTML and ActiveX controls combine to produce dynamic, interactive,
high-performance web server applications. ASP is a replacement for CGI
(Common Gateway Interface) technology. Simply speaking, ASP is a runtime
environment for server-side scripts. In this environment, users can create and
run dynamic, interactive web server applications, such as interactive dynamic
web pages that use HTML forms to collect and process information or handle
uploads and downloads, just as they would with their own CGI programs, but
much more simply. More importantly, ASP uses ActiveX technology based on an
open design environment: users can define and produce their own components,
giving the dynamic web page almost unlimited expansion capacity, something
traditional CGI programs are far from matching. In addition, ASP can use ADO
(ActiveX Data Objects, Microsoft's data access model) to access databases
easily, making the development of WWW-based applications possible [1].
ASP advantages. ASP technology is intuitive and easy to learn; even a beginner
can write reliable code in a short time. It is also powerful and can greatly
speed up project development. This is why ASP, only a few years after its
birth, is already widely used.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 181–186, 2011.
© Springer-Verlag Berlin Heidelberg 2011
ASP's advantages are obvious. First, it is truly browser-independent: the
system runs on the server side and returns only standard HTML to the client
browser, whatever that browser is. Second, it is entirely server-side:
software maintenance and upgrades happen completely on the server, and the
client needs no preparation. Third, the ASP scripting language can be any
script language as long as the corresponding engine is provided; ASP itself
supports VBScript and JavaScript, and developers are free to decide which to
use. Technologies commonly used to access network databases through a browser
include CGI, JDBC and so on, but implementing them is much more complicated
than using ASP; considering the actual needs of the system and the advantages
of ASP, ASP technology was chosen to implement the system [2].
The specific advantages are the following:
(1) Increased development efficiency: ASP provides an easy-to-learn script
language with many built-in objects, which greatly simplifies the development
of web applications and improves development efficiency.
(2) Interactivity: an ASP page is a page with computing power; at run time it
can produce different HTML output in different environments and with different
parameters. Although ASP is a server-side technology, it can also be mixed
with traditional client-side scripts and plug-in controls, dynamically
generating layout scripts and embedded objects so as to build a graphical user
interface in the client browser.
(3) Enhanced security: ASP scripts execute on the server, and the user's
browser receives only the HTML document generated as the result; this reduces
the requirements on the browser and, on the other hand, strengthens the
security of the system.
(4) Cross-platform: the body of ASP is platform-independent HTML plus various
scripts, which need no compilation or linking, so content can be changed at
any time and run directly in various operating environments. ActiveX
components can be developed in a variety of programming languages and are
vendor-independent binary components that run across operating environments
and networks; through HTML and scripts, developers can easily assemble the
various functions of an ASP web application from a set of ActiveX components.
(5) IIS and ASP technologies: considering the interactive features of the
system, IIS and Active Server Pages (ASP) can be used to achieve dynamic,
interactive web design [3].
IIS (Internet Information Server) is the Internet information server provided
with Microsoft Windows systems; its WWW service includes the ASP script server
and so supports ASP technology. IIS features simple installation, powerful
functions, protocol compatibility, and compatibility with other Microsoft
software [1]. IIS together with ASP connects the Web and the database simply
and efficiently, combining HTML, scripts and components to establish an
efficient interactive environment for dynamic web applications. The
interactivity means that the site responds according to the information
submitted by users, with no need to manually update page files to meet
application needs: the data in the database can change at any time, while the
server application running on it does not have to change [4].

2 Establish an Information Inquiry System Website


1. Click "Start" / "Settings" / "Control Panel" / "Administrative Tools" /
"Internet Services Manager" to open the Internet Information Services window.
2. Right-click "Default Web Site" and in the pop-up menu select "New" /
"Virtual Directory" to open the "Virtual Directory Creation Wizard" dialog box.
3. Click the "Next" button and in the dialog's "Alias" text field enter the
alias news.
4. Click "Next" and specify the path of the folder that contains the site for
the virtual directory (virtual directory access permissions take the system
default settings). Click "Next".
5. Click "Finish"; the IIS configuration is complete [4].

Configuration Database. The system uses the Chinese version of SQL Server
2000. The database server and web server can be configured on the same
computer or on two computers; the two configurations are similar. The specific
steps are as follows:
1. On the Windows desktop, select "Start" / "Programs" / "Microsoft SQL
Server" / "Enterprise Manager" to open the SQL Server Enterprise Manager
control panel.
2. Right-click the "Databases" item in the tree menu and choose the "New
Database" item from the pop-up menu.
3. On the "General" tab of the "Database Properties" dialog, enter the
database name news in the "Name" text box and click "OK" to create the empty
database news.
4. Right-click the new database news and in the pop-up menu select "All
Tasks" / "Restore Database".
5. In the "Restore Database" dialog box, select the "From device" option.
6. Click the "Select Device" button and in the "Choose Restore Devices" dialog
select the "Restore from [F]: Disk" option; then enter the path of the
original file in the text box.
7. Click "OK" to complete the database restore.
8. In the articleconn.asp file, set the database connection string:
Connstr = "PROVIDER = SQLOLEDB; DATA SOURCE = (local); UID = sa; PWD = password; DATABASE = news" [4]
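The connection string above is a semicolon-separated list of key=value pairs; the sketch below (illustrative only, not the site's actual code) assembles the same form from its parts.

```python
# Illustrative builder for the SQLOLEDB-style key=value connection string.
def build_connstr(source, uid, pwd, database):
    """Assemble a semicolon-separated OLE DB connection string."""
    parts = {
        "PROVIDER": "SQLOLEDB",
        "DATA SOURCE": source,
        "UID": uid,
        "PWD": pwd,
        "DATABASE": database,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())
```

Building the string from parameters keeps credentials out of page source and makes it easy to switch servers without editing each page.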

2.1 Website Design

A typical information search site should contain at least three functions:
information management, information display and information query.
Design objectives. The information search site should achieve the following
functions:
1. Information management
Add information
Change information
Delete information
2. Information display
Show all information
Display information by category
Query information by keyword
3. Site-wide search by keyword [4]
System Functional Analysis and Design. The information query system is divided
into three modules: the information management module, the information display
module and the information query module. The functional modules are shown in
Fig. 1.

[Figure: information management (add, change, delete); information display
(show all information, display by category, query by keyword); information
query (site-wide search by keyword).]

Fig. 1. Functional Block Diagram

2.2 Database Structure Design

According to the system's functional design requirements and module division,
the information search site mainly stores the information records and their
data structure.

2.2.1 Database Requirements Analysis


An information record includes the following:
record number;
name and type of the information;
description;
size of the information;
time and number of views.

2.2.2 Logical Database Design


The information record table learning is shown in Table 1:

Table 1. Information Record Form

Column name    Data Type      Length   Allow null
articleid      int            4
type           nvarchar       50
title          nvarchar       255
url            nvarchar       255
content        ntext          16
hits           int            4
big            nvarchar       50
vote           nvarchar       50
[from]         nvarchar       50
fromurl        nvarchar       255
dateandtime    smalldatetime  4
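Table 1 can be re-created for experimentation outside SQL Server; the sketch below uses SQLite with adapted types (SQL Server's nvarchar, ntext and smalldatetime have no direct SQLite equivalents), and quotes "from" because it is a reserved word, which is why the original table brackets it as [from]. The sample row is fabricated for the example.

```python
import sqlite3

# Illustrative re-creation of the information record table in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE learning (
        articleid INTEGER PRIMARY KEY,
        type TEXT, title TEXT, url TEXT, content TEXT,
        hits INTEGER, big TEXT, vote TEXT,
        "from" TEXT, fromurl TEXT, dateandtime TEXT
    )
""")
# Insert and read back a sample record using parameter substitution.
conn.execute(
    "INSERT INTO learning (articleid, title, hits) VALUES (?, ?, ?)",
    (1, "Campus news", 0),
)
row = conn.execute(
    "SELECT title, hits FROM learning WHERE articleid = 1"
).fetchone()
```

Parameterized inserts (the `?` placeholders) also illustrate the safe way to pass user input to queries, which matters for the security goals discussed later.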

2.3 Information Management Module

The website management module contains the following sub-modules; the
relationships between the pages are shown in Fig. 2.

2.4 Information Management Login

The module involves three pages: login.asp, chklogin.asp and manage.asp.
login.asp. This page has only two HTML form elements for the user to fill out.
It submits the administrator user name and password, so it does not operate on
any database table.
chklogin.asp. This page verifies the administrator name and password. It has
no HTML form elements to fill out, and because the administrator name and
password are stored in a file, it does not operate on any database table
either.
manage.asp. This is the page seen after a successful administrator login. It
has no HTML form elements to fill out and uses the system's information record
table learning.
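The credential check performed by chklogin.asp can be sketched as follows. This is a hedged illustration, not the site's actual code: the stored record, salt and password are fabricated, and the salted hash shown here is one common way to avoid comparing plain-text passwords.

```python
import hashlib

# Illustrative login check: compare submitted credentials against a stored
# salted-hash administrator record rather than a plain-text password.
def hash_pw(password, salt):
    """Return the hex SHA-256 digest of salt + password."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

# Fabricated administrator record for the example.
ADMIN = {"name": "admin", "salt": "s1"}
ADMIN["pw_hash"] = hash_pw("secret", ADMIN["salt"])

def check_login(name, password):
    """True only if both the name and the salted password hash match."""
    return (name == ADMIN["name"]
            and hash_pw(password, ADMIN["salt"]) == ADMIN["pw_hash"])
```

Storing only the salted hash means that even if the credential file leaks, the original password is not directly exposed.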
Adding information. This module involves three pages: add.asp, save.asp and
art_class1_put.asp.

[Figure: login.asp (administrator login) submits to chklogin.asp (account
audit); on success the administrator reaches the management interface
manage.asp, on failure the site index index.asp. From manage.asp the
administrator can add (add.asp), change (edit.asp) or delete (delete.asp)
information.]

Fig. 2. The relationship between the pages

3 Conclusions
This paper has described in detail the design and implementation of an
ASP-based information query website and the security of the site, which has
both theoretical and practical significance for ASP-based web development.
With this method, the web pages of Nantong University of Radio and TV were set
up.

References
1. Wang, C.H., Xu, H.X.: Site planning, construction and management and maintenance of
tutorials and training. Beijing University Press, Beijing (2008)
2. Zhang, Y.Y.: Site Management Manual. China Water Power Press, Beijing (2010)
3. Tao, T.: Dynamic Web-based ASP technology design and implementation. Fujian PC
(November 2010)
4. Gu, X.M., Zhang, Y.P.: ASP-based security research. Computer Engineering and Design
(August 2004)
5. Kern, T., Kreijger, J., Willcocks, L.: Exploring ASP as sourcing strategy: theoretical perspectives, propositions for practice. Journal of Strategic Information Systems (November 2009)
Analysis of Sustainable Development in Guilin by Using
the Theory of Ecological Footprint

Hao Wang, GuanWen Cheng*, Shan Xu, ZiHan Xu,
XiaoWei Song, WenYuan Wei, HongYuan Fu, and GuoDan Lu

College of Environmental Science and Engineering,
Guilin University of Technology, No.12 Jiangan Road, Guilin, Guangxi, China
jasson1124@163.com, chenggw@glite.edu.cn

Abstract. This study calculates the ecological footprint of Guilin over the past decade using ecological footprint theory. The results show that from 1999 to 2008, per-capita biological capacity remained stable while the per-capita ecological footprint grew from 1.3434 hm2/cap to 2.0499 hm2/cap, so the per-capita ecological deficit increased over the study period. The ecological footprint per ten-thousand yuan of GDP declined by almost 50%. According to these results, the ecosystem of Guilin was in an unsustainable state. It is therefore necessary to accelerate the transformation of the economic growth mode, develop ecological industries and raise biological capacity in Guilin.

Keywords: Ecological footprint, Biological capacity, Sustainable development, Guilin city.

1 Introduction
The natural ecosystem is the material basis on which humans rely for existence, and sustainable development should be conducted within the biological capacity of the ecosystem [1]. With economic development and a growing population, it is important to analyze quantitative data and assess sustainable development in Guilin.
The Ecological Footprint (henceforth EF) was first proposed by the Canadian scholar Rees and his student Wackernagel [2,3]. At present, many domestic and foreign scholars have conducted studies with it and applied it to regional sustainable development [4,5]. This paper uses the EF model and statistical data [6] to calculate and analyze the EF of Guilin, showing the state of biological capacity and resource utilization, and proposing scientific suggestions for the process of sustainable development in Guilin.

2 Overview of Guilin City


Guilin is located in the northeast of the Guangxi Zhuang Autonomous Region. The main terrain is hill country; the north, east and west of the city are mountainous, with higher elevations than the central, southern and northeastern parts, which are plains. The total administrative region covers 27,828 square kilometers. In 2008 there were 5.0832 million
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 187-193, 2011.
© Springer-Verlag Berlin Heidelberg 2011

inhabitants, GDP was 88.303 billion yuan, and the proportion of the three industries was 19.4: 45.2: 35.4. The city has initially established pillar industries such as electronics, rubber, and food processing.

3 Methods
EF refers to the biologically productive geographical space occupied by human beings to provide resources and absorb wastes under the existing living standard. The calculation of EF starts from two assumptions: first, humans can determine their own consumption of resources and energy and the waste they generate; second, these resources and wastes can be converted into the biologically productive areas required to produce and absorb them. Biologically productive areas include the following six items: cropland, forest, grassland, fishing ground, built-up land and fossil fuel land. The formula is:

ef = EF/N = Σ ai·Ai = Σ ai·(Ci/pi)    (1)

where ef denotes the per-capita ecological footprint (hm2/cap); EF is the total ecological footprint of the regional ecosystem; N is the population of Guilin; ai is the equivalence factor; Ai represents one item's per-capita ecological footprint (hm2/cap); Ci stands for one item's per-capita consumption; and pi is that item's global average land productivity (kg/hm2).
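As a concrete illustration, Eq. (1) can be sketched in a few lines of Python; the item set, consumption figures and productivities below are hypothetical placeholders for illustration only, not the paper's data.

```python
def per_capita_ef(items):
    """Per-capita ecological footprint, Eq. (1): ef = sum_i a_i * (C_i / p_i).

    items: iterable of (a_i, C_i, p_i) tuples, where a_i is the equivalence
    factor, C_i the per-capita consumption (kg) and p_i the global average
    productivity (kg/hm2) of item i. Returns ef in hm2/cap.
    """
    return sum(a * (c / p) for a, c, p in items)

# hypothetical two-item example: cropland (grain) and fossil fuel land (coal)
items = [
    (2.8, 400.0, 2744.0),   # cropland: equivalence factor 2.8
    (1.1, 1000.0, 1800.0),  # fossil fuel land: equivalence factor 1.1
]
ef = per_capita_ef(items)
```

Each consumption item is divided by its global average productivity to convert it into area, then weighted by the equivalence factor so that different land types can be summed.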
Biological capacity (henceforth BC) is the ability of a country or region to provide biologically productive areas. The equation is:

bc = BC/N = 0.88·Σ aj·rj·yj    (2)

where BC is the biological capacity; bc denotes the per-capita biological capacity; aj represents the per-capita area of one item; yj stands for the yield factor; rj is the equivalence factor; and N is the population. The yield factor denotes the ratio between the local yield of each item of biologically productive area in a country or region and the world average yield. The yield factors used in this paper are: cropland 1.7, built-up land 1.7, forest 0.9, grassland 0.2, fishing ground 1.0, and fossil fuel land 0 [7].
Equivalence factors are used to standardize the different items of land. The equivalence factors used in this paper are: cropland 2.8, built-up land 2.8, forest 1.1, fossil fuel land 1.1, grassland 0.5 and fishing ground 0.2 [8]. In the calculation of BC, 12% of the area is deducted for biodiversity protection.
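Eq. (2) with the factor values above can be sketched as follows; the yield and equivalence factors are the ones quoted in the text, while the per-capita areas are invented for illustration.

```python
# yield factors y_j and equivalence factors r_j quoted in the text
YIELD = {"cropland": 1.7, "built-up": 1.7, "forest": 0.9,
         "grassland": 0.2, "fishing": 1.0, "fossil": 0.0}
EQUIV = {"cropland": 2.8, "built-up": 2.8, "forest": 1.1,
         "fossil": 1.1, "grassland": 0.5, "fishing": 0.2}

def per_capita_bc(areas):
    """Per-capita biological capacity, Eq. (2): bc = 0.88 * sum_j a_j*r_j*y_j.

    areas: dict mapping land type to per-capita area a_j (hm2/cap); the
    factor 0.88 deducts 12% of the area for biodiversity protection.
    """
    return 0.88 * sum(a * EQUIV[j] * YIELD[j] for j, a in areas.items())

# hypothetical per-capita areas (hm2/cap)
bc = per_capita_bc({"cropland": 0.05, "forest": 0.28, "fossil": 0.1})
```

Note that fossil fuel land contributes nothing to BC (its yield factor is 0), which is why the fossil-fuel column of the BC half of Table 1 is all zeros.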
Comparing the resource and energy consumption of a region with its BC shows whether the EF exceeds the BC. If the EF is greater than the BC, the difference is called an ecological deficit; otherwise it is called an ecological surplus.
The EF per ten-thousand yuan of GDP is the ratio of the per-capita EF to per-capita GDP (in ten-thousand yuan); this indicator reflects the utilization of land resources in a region. The smaller the value, the higher the land productivity.
Setting year and per-capita GDP as independent variables and per-capita EF as the dependent variable, we obtain scatter plots, find the best-fitting curves, and use SPSS to analyze the statistical data and judge whether the fitted curves can characterize the trends of per-capita EF with per-capita GDP and with year.
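The SPSS curve estimation described above can be approximated in plain Python by least squares on ln(y); this stand-in recovers coefficients close to the paper's fitted exponential y = 1.2501e^(0.0491x), with differences due only to rounding. The EF series is the per-capita EF column of Table 1 in the Results section.

```python
import math

def fit_exponential(xs, ys):
    """Least-squares fit of y = a*exp(b*x) via linear regression on ln(y)
    (a stand-in for the SPSS curve estimation used in the paper)."""
    n = len(xs)
    ly = [math.log(y) for y in ys]
    sx, sly = sum(xs), sum(ly)
    sxx = sum(x * x for x in xs)
    sxly = sum(x * l for x, l in zip(xs, ly))
    b = (n * sxly - sx * sly) / (n * sxx - sx * sx)
    a = math.exp((sly - b * sx) / n)
    return a, b

# per-capita EF 1999-2008 (hm2/cap, Table 1), with x = 1..10 for the years
ef_series = [1.3434, 1.3749, 1.4314, 1.4817, 1.5980,
             1.6821, 1.8051, 1.9110, 1.8938, 2.0499]
a, b = fit_exponential(list(range(1, 11)), ef_series)  # a ~ 1.25, b ~ 0.049
```

Taking logarithms turns the exponential model into a straight line, so ordinary least squares on (x, ln y) gives both coefficients in closed form.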

4 Results and Discussion


Table 1 shows the per-capita EF and per-capita BC in Guilin from 1999 to 2008. As can be seen from the table, the per-capita EF rose by 52.6% during the decade (Fig. 1). The proportions of cropland, forest, grassland and fishing ground, which represent the consumption of biological resources, are much higher in the EF than those of fossil fuel land and built-up land, which represent the consumption of energy. This indicates that the consumption of biological resources is greater than the consumption of energy.

Table 1. Per-capita EF and per-capita BC in Guilin from 1999 to 2008 [hm2/cap]

Years  Cropland  Forest  Grassland  Built-up land  Fishing ground  Fossil fuel land  Per-capita EF
1999   1.0803    0.1082  0.0299     0.0008         0.0099          0.1144            1.3434
2000   1.1246    0.1027  0.0318     0.0008         0.0103          0.1048            1.3749
2001   1.1504    0.1179  0.0352     0.0009         0.0107          0.1163            1.4314
2002   1.1489    0.1345  0.0358     0.0009         0.0109          0.1507            1.4817
2003   1.1667    0.1802  0.0404     0.0012         0.0117          0.1977            1.5980
2004   1.2265    0.1846  0.0450     0.0019         0.0123          0.2117            1.6821
2005   1.3221    0.1937  0.0515     0.0021         0.0131          0.2225            1.8051
2006   1.3736    0.2266  0.0553     0.0025         0.0141          0.2388            1.9110
2007   1.2983    0.2343  0.0604     0.0029         0.0150          0.2829            1.8938
2008   1.1599    0.5589  0.0495     0.0034         0.0110          0.2673            2.0499

Years  Cropland  Forest  Grassland  Built-up land  Fishing ground  Fossil fuel land  Per-capita BC
1999   0.2452    0.2756  0.0006     0.0758         0.0026          0                 0.5999
2000   0.2421    0.2886  0.0006     0.0769         0.0026          0                 0.6108
2001   0.2390    0.2835  0.0006     0.0781         0.0026          0                 0.6038
2002   0.2340    0.2836  0.0005     0.0792         0.0026          0                 0.5999
2003   0.2233    0.2836  0.0005     0.0803         0.0026          0                 0.5902
2004   0.2216    0.2769  0.0005     0.0813         0.0026          0                 0.5828
2005   0.2217    0.2774  0.0005     0.0831         0.0026          0                 0.5852
2006   0.2199    0.2751  0.0005     0.0839         0.0025          0                 0.5820
2007   0.2177    0.2722  0.0005     0.0845         0.0025          0                 0.5775
2008   0.2167    0.3066  0.0005     0.0854         0.0025          0                 0.6118
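The ecological deficit discussed below is simply the gap between the two halves of Table 1; for example, using the 1999 and 2008 rows (tiny differences from the rounded figures quoted in the text come from rounding in the table):

```python
# per-capita EF and BC for 1999 and 2008 (hm2/cap), read from Table 1
ef_1999, bc_1999 = 1.3434, 0.5999
ef_2008, bc_2008 = 2.0499, 0.6118

deficit_1999 = ef_1999 - bc_1999            # ~0.7435 hm2/cap
deficit_2008 = ef_2008 - bc_2008            # ~1.4381 hm2/cap
growth = deficit_2008 / deficit_1999 - 1.0  # ~0.93, i.e. roughly 93%
```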

In the consumption of biological resources, the proportions of cropland and forest together exceed 80%, mainly reflecting the consumption of pork, agricultural products (including cereals), fruit and wood. In the consumption of energy, fossil fuels such as coal and coke are larger than other energy sources. This shows that citizens' living conditions, urbanization and industrial development still lag behind more affluent areas: the relatively low level of industrialization and the traditional structure of energy consumption have not changed significantly.
From 1999 to 2008, the EF of all six land types showed an upward trend (Fig. 1). Forest and built-up land grew most rapidly, with average annual growth rates of 17.5% and 16.03% respectively; the growth of forest indicates rising demand for forest and farm produce such as wood, oil crops and fruit, while the growth of built-up land reflects urbanization over the past decade. The slowest-growing types were fishing ground and cropland, with average annual growth rates of 1.08% and 0.71% respectively, reflecting the limitation of natural resources, especially the restriction of the major biological resources.

Fig. 1. The proportion of ecological footprint (total EF by land type, 10^6 hm2, 1999-2008)
During the decade, BC remained at about 0.60 hm2/cap, with cropland and forest providing more than 0.5 hm2/cap, while grassland, fishing ground and built-up land provided less (Table 1). From the perspective of the growth rates of the various land types, built-up land shows a clear increasing trend, with an average annual growth rate of 1.21% driven by urbanization; forest fluctuates weakly, showing that urban ecological construction is in a relatively stable state; cropland shows negative growth (-1.23%), and its average annual rate of decrease is roughly equal to the average annual growth rate of built-up land. Increasing infrastructural land is the main factor in cropland reduction. Strengthening urban and rural planning and reducing the depletion of cropland are pivotal to ensuring BC and achieving sustainable socio-economic development.
Because of the restrictions of self-maintenance and self-regulation and the resource and environmental capacity of the ecosystem, per-capita BC grew slowly in Guilin over the past decade, while per-capita EF increased rapidly. Hence, the per-capita ecological deficit grew from 0.7436 hm2/cap in 1999 to 1.4382 hm2/cap in 2008, an increase of 93.4% (Fig. 2). Guilin also lacks energy and major productive resources such as ironstone and phosphorite. To maintain the sustainable development of the ecosystem and improve people's living standards, it is necessary to import biological resources and energy resources from the external environment.

Fig. 2. The trend of ecological deficit (per-capita biological capacity, per-capita ecological footprint and per-capita ecological deficit, hm2/cap, 1999-2008)
Fig. 3 shows the different trends of per-capita GDP and EF per ten-thousand yuan of GDP. Per-capita GDP increases over the years, while EF per ten-thousand yuan of GDP decreases from 2.3130 hm2 to 1.1801 hm2. With the recent economic restructuring, transformation of the economic growth mode, and technology and equipment upgrades in every walk of life, the efficiency of resource utilization has increased and the resource dependence of each ten-thousand yuan of GDP has decreased, leading EF per ten-thousand GDP to decline significantly.
Per-capita EF forecast with year. Substituting 1 to 10 for the years 1999 to 2008 respectively and analyzing the main statistics with SPSS, the regression equation is y = 1.2501e^(0.0491x), with main statistics R2 = 0.982, F = 442.615 and sig = 0.000; the results show that the relationship between per-capita EF and year is significant (Fig. 4). According to the equation, the per-capita EF of Guilin in the next five years (2009 to 2013) is predicted to be 2.1511, 2.2598, 2.3740, 2.4940 and 2.6170 hm2/cap respectively.
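The five-year forecast can be reproduced from the fitted curve, with x = 11 corresponding to 2009. Because the published coefficients are rounded, the values computed below differ slightly from the paper's figures.

```python
import math

def predict_ef(year):
    """Per-capita EF forecast from the paper's fitted curve
    y = 1.2501*exp(0.0491*x), with x = year - 1998."""
    return 1.2501 * math.exp(0.0491 * (year - 1998))

# forecasts for 2009-2013 (hm2/cap), monotonically increasing
forecasts = {year: predict_ef(year) for year in range(2009, 2014)}
```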
Per-capita EF forecast with economic development. Taking per-capita GDP as the independent variable and per-capita EF as the dependent variable, the best-fitting curve (Fig. 5) is the logarithmic equation y = 0.6617ln(x) + 1.7086. The main statistics from SPSS are R2 = 0.968, F = 245.748 and sig = 0.000, indicating that the relationship between per-capita EF and per-capita GDP is also significant; the fitted equation can interpret and forecast the trend of per-capita EF with per-capita GDP.

Fig. 3. The trend of EF per ten-thousand GDP (per-capita GDP, ten-thousand yuan, and EF per ten-thousand GDP, 1999-2008)

Fig. 4. The trend of EF with year (fitted curve y = 1.2501e^(0.0491x), R2 = 0.982)

Fig. 5. The relationship between per-capita EF and per-capita GDP (fitted curve y = 0.6617ln(x) + 1.7086, R2 = 0.968)
The two results show that per-capita EF grows with both year and per-capita GDP. If the BC of Guilin remains stable, the ecological deficit will grow year after year and the ecosystem will face greater pressure. Growing per-capita GDP reflects improvement in citizens' living standards and growth in the consumption of various resources, but this increasing consumption is satisfied by excessive resource consumption and by importing resources from outside, which is neither conducive to the sustainable development of Guilin nor the best way to curb excessive resource consumption. Therefore, improving the local BC and reducing the EF is imperative.

5 Suggestions
(1) Controlling population size and improving population quality. Guilin has been in a condition of ecological deficit over the past decade, and the deficit has increased constantly. As the population expands year by year, the demand for biological resources and energy will certainly increase, as will the amount of waste; hence the ecological environment will face more pressure. Controlling population is important for decreasing the EF. At the same time, knowledge of ecological environmental protection and sustainable development should be taught to guide reasonable consumption and reduce resource waste in Guilin.
(2) Increasing the level of technological development and promoting industrial restructuring. Science and technology are primary productive forces: the higher the productivity, the stronger the BC. Developing technological productivity is important for improving BC and reducing the ecological deficit. Guilin should strengthen science and technology research, develop local pillar industries, promote technological innovation, and transfer scientific and technological achievements into practical productive forces. Meanwhile, manpower and resources should be concentrated on developing local characteristic and competitive industries, such as motor vehicles and parts, food and beverage processing, rubber, and pharmaceutical and biological products, and on promoting the development of industrial clusters and chains (such as fruit cultivation - drinks and local specialty processing - eco-tourism), thereby improving the ecological footprint of cropland and forest, accelerating economic development and reducing waste emissions.
(3) Strengthening agricultural production and ensuring cropland and forest supply. Cropland consumption occupies a large proportion of the EF in Guilin, so a business model like "company + base + farmers" should be used to accelerate the integration of agricultural cultivation, processing and marketing. Furthermore, the agricultural structure and product structure should be adjusted, the layout of agricultural areas optimized, and the quality and output of major agricultural products such as food, fruits and vegetables improved by reforming low-yielding farmland, changing farming methods, and improving crop varieties and cultivation techniques. Developing three-dimensional planting and breeding, and promoting a planting and breeding mode with biogas as the link among pig raising, methane and fruit (vegetable) growing, will achieve integrated agricultural development and raise farmers' incomes. Forest resources are abundant in Guilin; it is important to exploit and utilize them rationally on the precondition of ecological protection.

(4) Increasing the efficiency of fossil fuels and developing clean energy. Fossil fuels are the main energy consumed in Guilin, but the city has little coal and must bring it in from outside, which contributes to urban air pollution. It is therefore necessary to promote hydropower, solar and other clean energy, and to make the best use of straw, animal waste and other biomass resources to develop biogas energy, achieving waste recycling and accelerating eco-agricultural development. This is beneficial for improving the energy structure and reducing dependence on external energy.

6 Conclusions
During the study period, the EF showed an increasing trend, growing from 1.3434 hm2/cap to 2.0499 hm2/cap, while BC was stable; therefore the per-capita ecological deficit increased. Growing efficiency of resource utilization led EF per ten-thousand GDP to decrease, while predictive analysis showed per-capita EF increasing with year and per-capita GDP. Over the past decade, resource supply could not meet resource consumption in Guilin, and the ecosystem was in a condition of unsustainable development. On the basis of population control, the development of science and technology should be energetically promoted and the economic growth mode transformed to enhance BC; meanwhile, resource and energy consumption patterns should be changed to improve the efficiency of resource utilization, reduce the EF and promote socio-economic sustainable development in Guilin.
Acknowledgements. The work was supported by a Program Sponsored for Educational Innovation Research of University Graduates in Guangxi Province (No. 200910596R01) and by the Guangxi Key Laboratory of Environmental Engineering, Protection and Assessment.

References
1. Xu, Z.-m., Zhang, Z.-q., Cheng, G.-d.: Ecological Economics: theory, method and application. Yellow River Conservancy Press, Zhengzhou (2003)
2. Rees, W.E.: Ecological footprints and appropriated carrying capacity: what urban economics leaves out. Environment and Urbanization 4, 121-130 (1992)
3. Wackernagel, M., Rees, W.E.: Our Ecological Footprint: Reducing Human Impact on the Earth. New Society Publishers, Gabriola Island (1996)
4. Scotti, M., Bondavalli, C., Bodini, A.: Ecological Footprint as a tool for local sustainability: The municipality of Piacenza (Italy) as a case study. Environmental Impact Assessment Review 29, 39-50 (2009)
5. Zhang, H.-Y., Liu, W.-D., Lin, Y.-X., et al.: A modified ecological footprint analysis to a sub-national area: the case study of Zhejiang Province. Acta Ecologica Sinica 29, 2738-2748 (2009)
6. Editorial Board of the Guilin Economic and Social Statistics Yearbook: Guilin Economic and Social Statistics Yearbook. Chinese Statistics Press, Beijing (1999-2008)
7. Xu, Z.-m., Chen, D.-j., Zhang, Z.-q., Cheng, G.-d.: Calculation and analysis on ecological footprints of China. Acta Pedologica Sinica 39, 441-445 (2002)
8. Zhang, Y., Wu, Y.-m.: The eco-environmental sustainable development of the Karst areas in southwest China analyzed from ecological footprint model - a case study in Guangxi region. Journal of Glaciology and Geocryology 28, 293-298 (2006)
Analysis of Emergy and Sustainable Development
on the Eco-economic System of Guilin

ZiHan Xu*, GuanWen Cheng, Hao Wang, HongYuan Fu,
GuoDan Lu, and Ping Qin

Guilin University of Technology, Guilin, P.R. China
693926312@qq.com

Abstract. Using emergy theory, the eco-economic system of Guilin is analyzed through about nine indexes, such as the net emergy yield ratio (EYR), emdollar value (Em$) and emergy investment ratio (EIR). The following conclusions are drawn: (1) The net emergy yield ratio (EYR) of Guilin rose from 5.748 in 2003 to 6.749 in 2008, indicating that the net benefit of the eco-economic system in Guilin has improved and production efficiency has been enhanced. (2) The environment loading ratio (ELR) of Guilin has trended upward, suggesting that environmental pressure has increased. (3) The emergy index for sustainable development (EISD) of Guilin fell from 2.710 in 2003 to 0.865 in 2008, suggesting that the capability for sustainable development has diminished quickly.

Keywords: Emergy analysis, Eco-economic system, Analysis of sustainable development, Guilin city.

The ecological economic system is the basis for humankind's sustainable development; however, with social progress and economic development, eco-economic systems have been damaged to an unprecedented degree, seriously threatening human survival and development. Sustainable development has therefore become a human consensus and one of the main goals for future exploration. The emergy analysis theory proposed by the famous American ecologist H.T. Odum balanced the flows of energy, information and economy for the first time, which helps us to analyze correctly the value of and relationship between nature and humans, environmental resources and the social economy [1-3]. This article uses the emergy analysis method, treating the society, economy and environment of Guilin as a compound eco-economic system, to evaluate the sustainability of Guilin quantitatively and put forward corresponding countermeasures.

1 Situation in Researched Area and Research Method

1.1 General Situation in Researched Area

Guilin is located southwest of the Nanling ridge, in the northeast of the Guangxi Zhuang Autonomous Region; it has typical karst landforms and is one of the famous scenic
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 194-200, 2011.
© Springer-Verlag Berlin Heidelberg 2011

cities and historical cities in the world. At the end of 2008, the city had a land area of 27,800 km2, a population of 5.0832 million, and a production value of 88.302 billion yuan, an increase of 18.64% over the previous year; the structure of the three industries was 19.42: 45.24: 35.34 [4]. Guilin is an emerging industrial city with a good foundation. It has formed electronics, rubber manufacturing, machinery, cotton textiles, pharmacy, coach manufacture, handicrafts, and food and light industry as its pillar industries. The drive to scale up industries and enterprises has been effective, increasing the economy substantially: the city's industrial enterprises above designated size number 765, and industrial output value has reached 35.071 billion yuan.

1.2 Emergy Analysis Method and Procedure

The theory and method of emergy analysis established by H.T. Odum states that every form of energy is derived from solar energy, so solar emergy is used as the benchmark for measuring the energy value of various energies in practical applications. Based on emergy, we can measure and compare the true value of energies of different types and grades; we can also convert different, otherwise incomparable energies to emergy using emergy conversion rates. This study includes the following steps:
(1) Collect the relevant geographical and economic data from the Economic and Social Statistics Yearbook of Guilin from 2003 to 2008.
(2) According to the "energy system language" legend presented by H.T. Odum, determine the boundaries of the whole system, the main energy sources, and the main components of the system, and list the system processes and the relationships among the components.
(3) List the main items of energy input and output, and convert each category and material into a common emergy unit with the corresponding conversion rate to evaluate its contribution and status in the system.
(4) Merge similar and important items.
(5) In accordance with the local characteristics, the emergy analysis table and the system classification, optimize and select the overall emergy indexes.
(6) Propose rational countermeasures based on the results.
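Step (3) above, converting heterogeneous raw flows to a common solar-emergy unit, can be sketched as below; the transformities and raw flows are invented for illustration and are not Guilin's actual inventory.

```python
# hypothetical emergy conversion rates (sej per joule of raw energy)
TRANSFORMITY = {"hydropower": 1.6e5, "raw_coal": 4.0e4}

# hypothetical annual raw flows (joules)
flows = {"hydropower": 2.0e15, "raw_coal": 5.0e16}

# convert each flow to solar emjoules (sej), then sum to the total emergy used
emergy = {item: flows[item] * TRANSFORMITY[item] for item in flows}
total_sej = sum(emergy.values())
```

Because every flow is expressed in the same unit (sej), flows as different as electricity, coal and labor become directly comparable and summable, which is what makes the indexes in Table 2 possible.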

2 Analysis of Research Results


Based on the raw data for the major sources of the eco-economic system and currency flows in Guilin from 2003 to 2008, the summary of emergy flows can be calculated (Table 1). The table shows that the city's resource emergy flow is not high, and about 74% of it comes from non-renewable resources; the resource constraint is very obvious. Meanwhile, based on the statistics in Table 1, the main indicators of economic development in Guilin from 2003 to 2008 (Table 2) are calculated.

2.1 Net Emergy Yield Ratio

The net emergy yield ratio (EYR) is the ratio of the system's output emergy to its economic feedback emergy, an indicator measuring the economic contribution of the system's output and a standard for measuring the system's efficiency. Table 2 shows that the EYR of Guilin has improved overall, though with some fluctuations, which indicates that the net benefit of the eco-economic system in Guilin has increased, and production efficiency and competitiveness have been enhanced for the same input of economic emergy.

Table 1. The major emergy flows of Guilin in 2003-2008

Items (10^20 sej)                                          2003     2004     2005     2006     2007     2008
Emergy flow of local renewable resources: EmR              60.20    60.40    62.90    60.70    62.50    61.60
Emergy flow of local non-renewable resources: EmN          97.53    76.18    105.05   144.87   149.21   173.92
Emergy flow of input renewable resources: EmIR             87.97    95.76    102.90   111.80   121.69   137.22
Emergy flow of input non-renewable resources: EmIN         25.82    26.53    28.18    28.76    36.54    32.75
Emergy flow of input labor: EmIL                           59.40    62.10    65.30    73.60    79.10    92.70
Emergy flow of actually used foreign currency: EmIM        2.48     0.90     1.16     2.06     2.61     4.41
Emergy flow of input: EmI = EmIR+EmIN+EmIL+EmIM            176.67   185.29   197.54   216.23   237.33   267.08
Emergy flow of input production: EmIS = EmIR+EmIN          113.79   122.29   131.08   140.57   158.23   169.98
Emergy flow of currency from brought production: EmISP     52.07    63.55    80.03    101.84   148.87   183.35
Total emergy: EmU = EmR+EmN+EmI                            334.40   321.87   365.49   421.80   449.04   502.60
Emergy flow of output production and labor: EmEO           1003.80  1060.24  1588.95  1440.18  1624.82  1788.64
Emergy flow of waste: EmW                                  11.73    11.42    11.88    11.27    13.12    13.91
Emergy flow of output: EmO = EmEO+EmW                      1015.53  1071.66  1600.83  1451.45  1637.94  1802.55
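Several of the indexes reported in Table 2 can be recomputed directly from the 2003 column of Table 1, which serves as a consistency check (all flows in 10^20 sej; the 2003 emergy exchange ratio EER is taken from Table 2):

```python
# 2003 emergy flows from Table 1 (10^20 sej)
EmR, EmI, EmU, EmO = 60.20, 176.67, 334.40, 1015.53
EmW, EmIR = 11.73, 87.97
EER = 2.185                      # emergy exchange ratio, Table 2

EYR = EmO / EmI                  # net emergy yield ratio
ELR = (EmU - EmR) / EmR          # environment loading ratio
EWI = EmW / (EmR + EmIR)         # waste emergy index
ESI = EYR / ELR                  # emergy sustainable index
EISD = EYR * EER / (ELR + EWI)   # emergy index for sustainable development
```

Each computed value matches the corresponding 2003 entry in Table 2 to three decimal places (EYR 5.748, ELR 4.555, EWI 0.079, ESI 1.262, EISD 2.710).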

2.2 Emdollar Value

The emdollar value (Em$) is the ratio of a country's or region's total annual emergy use to its gross domestic product for that year. Generally speaking, Em$ is higher in less developed regions, whose level of economic development is lower; developing regions that buy many external resources and have higher GDP show lower Em$. The Em$ of Guilin decreased year by year (Table 2) and was significantly lower than Gansu's (11.88×10^12 sej/$) in 2000 and Xinjiang's (14.7×10^12 sej/$) in 1999, but still higher than those of coastal provinces such as Jiangsu (3.02×10^12 sej/$ in 2000) and Fujian (2.76×10^12 sej/$ in 2004) [5-8], showing that the openness and circulation of the economy are not high and economic development is at a low level.

2.3 Emergy Investment Ratio


The emergy investment ratio (EIR) is the ratio of economic feedback emergy to gratuitous emergy from the environment, an indicator of the degree of economic development and the environmental loading level, and also a measure of industrial competitiveness. Currently, the global average EIR is about 2, and it is lower in developing countries. From 2003 to 2008 the EIR of Guilin remained at about 1.1 (Table 2), demonstrating that the level of environmental loading in Guilin is not high, which is conducive to protecting the "Guilin scenery", but that the pace and level of economic development are still low, the scale of industry is small, and competitiveness is weak.

Table 2. Emergy indexes of the ecological economic system of Guilin

Indicators and methods                                               2003      2004      2005      2006      2007      2008
Emergy self-sufficiency ratio: (EmR+EmN)/EmU                         0.472     0.424     0.460     0.487     0.471     0.469
Input emergy ratio: EmI/EmU                                          0.528     0.576     0.540     0.513     0.529     0.531
Renewable resources emergy ratio: EmR/EmU                            0.180     0.188     0.172     0.144     0.139     0.123
Emergy per capita (10^15 sej/person): EmU/P                          6.818     6.517     7.382     8.448     8.899     9.887
Emergy density (10^11 sej/m2): EmU/area (m2)                         12.029    11.578    13.147    15.173    16.153    18.079
Population carrying capacity (person): (EmR+EmI)/(EmU/P)             3474186   3769986   3528041   3278054   3369255   3324365
Emdollar value (10^12 sej/$): Em$ = EmU/GDP($)                       6.964     5.818     5.578     5.323     4.589     3.907
Net emergy yield ratio: EYR = EmO/EmI                                5.748     5.784     8.104     6.713     6.902     6.749
Environment loading ratio: ELR = (EmU-EmR)/EmR                       4.555     4.329     4.811     5.949     6.185     7.159
Waste emergy index: EWI = EmW/(EmR+EmIR)                             0.079     0.073     0.072     0.065     0.071     0.070
Fraction of emergy used from electricity: Emel/EmU                   0.068     0.079     0.074     0.083     0.076     0.072
Emergy exchange ratio: EER = EmIS/EmISP                              2.185     1.924     1.638     1.380     1.018     0.927
Emergy investment ratio: EmI/(EmR+EmN)                               1.120     1.357     1.176     1.052     1.121     1.134
Waste emergy/renewable emergy ratio: EmW/EmR                         0.195     0.189     0.189     0.186     0.210     0.226
Emergy sustainable index: ESI = EYR/ELR                              1.262     1.336     1.684     1.128     1.116     0.943
Emergy index for sustainable development: EISD = EYR·EER/(ELR+EWI)   2.710     2.528     2.718     1.540     1.123     0.865

2.4 Fraction of Emergy Used from Electricity

The fraction of emergy used from electricity (FEE) is the ratio of the amount of electricity to the total amount of emergy used. Guilin's FEE fluctuated widely for many years and decreased noticeably in the last two years (Table 2). Guilin's electrical emergy was clearly lower than Fujian's (19%) in 2004, Jiangsu's (20.8%) in 2000, Gansu's (11.36%) and the levels of other provinces [5,7,8]. This indicates that Guilin's industries were underdeveloped, with a low level of electrification, and that some industries may still use coal. Electronics, rubber, machinery, cotton textiles, pharmacy, coach manufacture, handicrafts and the other pillar industries in Guilin generally use little energy, which shows that the current orientation of industrial development in Guilin is by and large appropriate.

2.5 Emergy Density

Emergy density (ED) is the ratio of the total amount of emergy used in a country or region to its area, which reflects the intensity and level of economic development. The ED of Guilin rose from 12.029×10^11 sej/(m2·a) in 2003 to 18.079×10^11 sej/(m2·a) in 2008 (Table 2), an increase of 50.30%, but a large gap remains with some coastal areas, showing that Guilin's economic development is still underdeveloped and its environmental pressure is relatively limited.

2.6 Emergy Per Capita

Emergy per capita (EPC) is the ratio of the amount of emergy used to the total population of a country or region, used to evaluate people's standard of living; in general, the greater the EPC, the higher the living standard. The EPC of Guilin rose from 6.818×10^15 sej/person in 2003 to 9.887×10^15 sej/person in 2008 (Table 2), showing that the living standard of Guilin's people improved to some extent, but a large gap remains with developed cities such as Guangzhou (13.39×10^15 sej/(person·year)) and Beijing (17.89×10^15 sej/(person·year)) [3,9]. Currently, GDP per capita in Guilin is 17,435 yuan, the per-capita disposable income of urban residents is only 14,636 yuan/year, and the per-capita net income of farmers is 4,465 yuan/year, so people's lives are at a relatively low standard; Guilin is still a less developed region in southwest China.

2.7 Environment Loading Ratio

Environment loading ratio (ELR) is the ratio of the non-renewable-resource input emergy of a system to its renewable-resource input emergy. The ELR of Guilin showed an increasing trend (Table 2). Besides the relatively low emergy of renewable resources in Guilin (Table 1), the other reason is the constantly increasing consumption of non-renewable resources and the emergy of total labor input, especially the consumption of wood, clay, limestone, and other non-renewable resources, resulting in increased environmental pressure.

2.8 Population Carrying Capacity

Population carrying capacity (PCC) is the population that can be supported under the current state of the environment. From 2003 to 2008, the actual population of Guilin rose from 4,904,663 to 5,083,215 (Table 2), but the PCC decreased from 3,474,186 to 3,324,365. In 2008 the population was 1.53 times the PCC, suggesting that the resources, ecology, and environment of Guilin face tremendous population pressure under the
Analysis of Emergy and Sustainable Development on the Eco-economic System of Guilin 199

present development pattern and economic conditions. It is vital for the sustainable development of the region to protect the environment, maintain the existing farmland, and develop high-efficiency agriculture and high-technology industries.
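The 1.53-fold figure quoted above is simply the ratio of the actual population to the PCC; a quick check of the arithmetic:

```python
def population_pressure(population: int, pcc: int) -> float:
    """Ratio of actual population to population carrying capacity (PCC)."""
    return population / pcc

# 2008 figures for Guilin from Table 2.
print(round(population_pressure(5083215, 3324365), 2))  # -> 1.53
```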

2.9 Emergy Index for Sustainable Development

Emergy index for sustainable development (EISD) is the product of the net emergy yield ratio (EYR) and the emergy exchange ratio (EER) divided by the sum of the environment loading ratio (ELR) and the emergy waste index (EWI). The EISD of Guilin fell sharply from 2.710 in 2003 to 0.865 in 2008 (Table 2); the capacity for sustainable development dropped off rapidly. Analyzing the causes: from 2003 to 2008 the EYR of Guilin rose from 5.748 to 6.749, but the corresponding adjustment of the industrial structure and the circular economy did not keep pace; the ELR rose while the EWI changed only slightly, and the EER declined from 2.185 to 0.927, so the EISD fell quickly. With the rapid expansion of the economy, the contradictions among Guilin's economic development, resources, and environment gradually emerged; the keys to sustainable development are speeding up industrial restructuring, developing the circular economy, and building circular industries with local characteristics.
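The definition above amounts to EISD = (EYR × EER) / (ELR + EWI). A minimal sketch checking the 2008 figures quoted in the text (the EWI value is not reported here, so the 0.074 used below is a hypothetical value chosen only to illustrate the formula; it happens to be consistent with the published EISD):

```python
def eisd(eyr: float, eer: float, elr: float, ewi: float) -> float:
    """Emergy index for sustainable development:
    (net emergy yield ratio * emergy exchange ratio)
    divided by (environment loading ratio + emergy waste index)."""
    return (eyr * eer) / (elr + ewi)

# 2008 values from Table 2; ewi=0.074 is an assumed, illustrative figure.
print(round(eisd(eyr=6.749, eer=0.927, elr=7.159, ewi=0.074), 3))
```

With these inputs the function returns approximately 0.865, matching the value reported for 2008.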

3 Conclusions and Recommendations


The emergy analysis shows that the EIR of Guilin is about 1.1, indicating that the speed and level of economic development in Guilin are still low and that economic development relies excessively on external inputs, being mainly quantitative economic expansion. The ELR of Guilin ascended from 4.555 in 2003 to 7.159 in 2008, while its EISD declined sharply from 2.710 in 2003 to 0.865 in 2008, suggesting that in the process of rapid urbanization and new industrialization, resource constraints increased, pressure on the environment rose, and the current approach to development is not sustainable. Accordingly, the following measures are proposed:
(1) Speed up industrial restructuring, optimize the energy structure, and transform the model of economic growth. Weed out technologies and industries with serious pollution, backward processes and equipment, or low technical levels; increase investment in science and technology; strengthen optimization and upgrading in electronics, rubber manufacturing, machinery, cotton textiles, pharmaceuticals, coach manufacturing, and the food and light industries; and vigorously develop the tertiary industry and new industries.
(2) Develop and utilize local ecological resources and land resources reasonably, and build a sustainable resources system. Meanwhile, moderately exploit the local cost-free renewable and non-renewable emergy sources, such as water resources and solar energy, to provide security for the sustainable development of Guilin.
(3) Build the circular economy and an eco-industrial system. Establish ecological industries characterized by clean production and a recycling economy, and rely on science, technology, and policy support to develop ecological agriculture, ecological industry, and ecological services, especially recycling industries with local characteristics, such as the chain of agricultural planting and breeding, processing of drug products, ecological tourism, and special marketing of local products.

Acknowledgment. The work was supported by a Program Sponsored for Educational Innovation Research of University Graduates in Guangxi Province (No. 200910596R01) and the Guangxi Key Laboratory of Environmental Engineering.

References
[1] Lan, S.-f., Qin, P.: Emergy analysis of ecosystems. Chinese Journal of Applied Ecology 12(1), 129-131 (2001)
[2] Odum, H.T.: Guidance of energy, environmental and economic systems. Oriental Press, Beijing (1992); translated by Lan Sheng-fang
[3] Sui, T.-h., Lan, S.-f.: Emergy analysis of Guangzhou urban ecosystem. Chongqing Environmental Science 23(5), 423 (2001)
[4] Statistical Bureau of Guilin: Economic and social statistical yearbook of Guilin. China Statistical Press, Beijing (2004-2009)
[5] Zhao, S., Li, Z.-z.: Study on emergy analysis of Gansu ecological-economic systems. Northwest Botanic Journal 24(3), 464-470 (2004)
[6] Li, H.-t., Liao, Y.-c., Yan, M.-c., et al.: A study on emergy evaluation of Xinjiang ecological-economic systems. Geographica Journal 58(5), 765-772 (2003)
[7] Li, J.-l., Zhang, Z.-l., Zeng, Z.-p.: Study on emergy analysis and sustainable development of the Jiangsu ecological-economic system. China Population, Resources and Environment 13(2), 73-78 (2003)
[8] Yao, C.-s., Zhu, H.-j.: Emergy analysis and assessment of sustainability of the ecological economic system of Fujian province. Fujian Normal University 23(3), 92-97 (2007)
[9] Song, Y.-q., Cao, M.-l., Zhang, L.-x.: Emergy-based comparative analysis of urban ecosystems in Beijing, Tianjin and Tangshan. Ecological Journal 29(11), 5882-5890 (2009)
Multiple Frequency Detection System Design*

Wen Liu, Jun da Hu, and Ji cui Shi

The College of Information Engineering,


Xiangtan University, Xiangtan, China
liuwen345@yahoo.com.cn, hjd112233@126.com,
sjc1106@sina.com

Abstract. In electronics, frequency is one of the most basic parameters, so frequency measurement is particularly important. This frequency detection system comprises a controller subsystem and a PC subsystem. The controller subsystem uses the AT89C51 as its key device. The PC software is developed in Visual Basic 6.0 and includes modules for serial data receiving and processing, display of frequency data and curve drawing, and data storage and query. Tested with Proteus 7.1 and Visual Basic 6.0, the system measures frequencies from 1 Hz to 1 MHz and can display the current channel number and frequency value simultaneously on the digital tube and in the Visual Basic 6.0 interface, while storing the data in an Access 2003 database for later reference. The hardware structure of the system is simple and its operation stable; the PC software is reasonably designed, runs stably, is easy to operate and maintain, and has a certain expansibility.

Keywords: Microcontroller, frequency meter, 89C51, frequency division circuit.

1 Introduction
Measuring frequency is, in simple terms, counting digital pulses within a certain time interval. An effective way to realize intelligent frequency counting is to build the frequency meter around a microcontroller. A traditional frequency meter only detects the frequency of an external signal and displays it. This is effective, but it cannot reflect continuous changes in the signal frequency or track the signal in time, losing a large amount of detected information, which hampers timely analysis and processing of the signal. In this paper, the design combines the microcontroller with a VB6.0 display interface. The measurement steps are: the MCU measures the frequency, the digital tube displays it, the data is transmitted over the serial port, and the PC draws the frequency curve and stores the frequency data.

* Project supported by the Provincial Natural Science Foundation of Hunan, China (Grant No. 09JJ3094); the Research Foundation of the Education Bureau of Hunan Province, China (Grant No. 09B022); the Open Fund Project of a Key Laboratory in Hunan Universities: the electrical control laboratory of Hunan Institute of Engineering; and the Undergraduate Research Study and Innovative Test Program Projects in Hunan Province (No. 09281).

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 201206, 2011.
Springer-Verlag Berlin Heidelberg 2011
202 W. Liu, J. da Hu, and J.c. Shi

2 Frequency Measurement Principle and Scheme Selection

The basic design principle of this frequency measurement system is to display the measured signal frequency directly as a decimal number on the measuring device. It directly and automatically measures the number of pulses per unit time, i.e., the frequency. Frequency is the number of times a periodic signal repeats in a unit of time (1 s). If a periodic signal changes N times within a time interval T, its frequency can be expressed as f = N / T. A pulse-forming circuit converts the measured signal into a pulse signal whose repetition rate equals the measured frequency fx. A time-base signal generator provides a standard time pulse signal; if its period is 1 s, the duration of the gate-control output signal accurately equals 1 s. The gate circuit is controlled by this standard seconds signal: when the seconds signal arrives, the gate opens and the measured pulse signal passes through the gate to the counting and display circuit. At the end of the seconds pulse the gate closes and the counter stops counting. The number of pulses N accumulated by the counter in one second is then the signal frequency, fx = N.
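The gating principle described above (f = N / T, with a 1 s gate so that fx = N) can be sketched as a small host-side simulation; the edge timestamps below are synthetic test data, not measurements from the paper:

```python
def gated_frequency(edge_times_ms, gate_ms=1000):
    """Count rising edges while the gate is open, then apply f = N / T.

    edge_times_ms: timestamps (ms) of the pulse-shaped signal's rising edges.
    gate_ms: gate duration in milliseconds (1000 ms gives fx = N directly).
    """
    n = sum(1 for t in edge_times_ms if 0 <= t < gate_ms)  # pulses through the open gate
    return n * 1000 / gate_ms  # frequency in Hz

# A synthetic 50 Hz signal: one rising edge every 20 ms, for 2 s.
edges = range(0, 2000, 20)
print(gated_frequency(edges))  # 50 edges pass the 1 s gate -> 50.0 Hz
```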

2.1 Optional Plan One

This plan combines hardware and software. The main components are the AT89C51 microcontroller, a 74LS48 BCD-to-seven-segment decoder, a 74HC164 shift register driving the digital display, the counting circuit, LED digital tubes, and some capacitors and resistors. The principle is shown in the following diagram:


Fig. 1. Block diagram of the combined hardware/software frequency measurement scheme

This scheme can detect multi-channel signals: a synchronous gate and a function-switching circuit time-division-multiplex the inputs, and two counters perform the time count and the event count separately. While one channel's data is being displayed, another channel's data is stored in the microcontroller and can be selected via the corresponding keys.

2.2 Optional Plan Two


This is a pure hardware implementation. The main components are the MC4583B bistable flip-flop, the CD4026 counter/decoder chip, the NE556 providing two-stage dual time-base control,
Multiple Frequency Detection System Design 203

the CD4007 dual complementary-symmetry inverter, a 7805 integrated voltage regulator, six digital tubes, and some capacitors, resistors, and so on. The principle is shown in the following diagram. This scheme conditions the input signal and divides its frequency so that the counter's input signal meets TTL level requirements. The output of the gate signal circuit controls the starting, stopping, and clearing of the count. Finally, the number of pulses accumulated by the counter is displayed on seven common-cathode LED digits.


Fig. 2. Block diagram of the pure-hardware frequency measurement scheme

The first scheme is more powerful and has better software features: its principle is simple and its real-time performance good, but achieving high accuracy places relatively high demands on the software. Moreover, a traditional microcontroller cannot directly count a 1 MHz signal: the MCU clock is usually at or below 12 MHz, giving a machine cycle of at least 1 µs, so to measure a 1 MHz signal a frequency-division circuit must be added in front of the microcontroller. This greatly extends the frequency meter's measuring range and widens its applications. The greatest feature of the second scheme is that it is a full hardware circuit: the circuit is stable and accurate and needs no tedious debugging, which greatly reduces the production cycle, but its frequency measurement range is limited. Since this design must measure frequencies up to 1 MHz, a frequency-division circuit is required. In summary, this design adopts the first scheme for the frequency measurement.
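The need for the divider can be made concrete. A standard 8051 counter samples its input once per machine cycle and needs the input high for one cycle and low for another, so at a 12 MHz clock it can count at most about fosc/24 = 500 kHz; a divide-by-N front end (a single D flip-flop gives N = 2) brings a 1 MHz input within range, and the true frequency is recovered by multiplying the count back up. A sketch (the exact divide ratio of the paper's circuit is not stated, so N here is illustrative):

```python
MAX_COUNT_HZ = 12_000_000 / 24  # 8051 counter limit at a 12 MHz clock: 500 kHz

def measured_frequency(input_hz: float, divide_ratio: int) -> float:
    """Model a prescaler in front of the MCU counter: the counter sees
    input_hz / divide_ratio, and the firmware multiplies the count back up."""
    seen_by_counter = input_hz / divide_ratio
    if seen_by_counter > MAX_COUNT_HZ:
        raise ValueError("prescaled signal still too fast for the counter")
    return seen_by_counter * divide_ratio

# A 1 MHz input divided by 2 (one D flip-flop stage) is countable at 500 kHz.
print(measured_frequency(1_000_000, 2))  # -> 1000000.0
```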

3 The Hardware Design of System Simulation


The main component of the frequency meter's data acquisition system is the AT89C51, which measures the signal frequency, counts it, and displays the result. Around it are the divider, displays, and other functional devices. The design can be divided into the following modules: the second-pulse generator module, the microcontroller, the frequency-division circuit, the channel selection unit, and the seven-segment LED display unit. A few basic units are introduced briefly below:

3.1 Signal Input Circuit

The design uses multi-channel signal input, with a switch selecting which signal enters the circuit. The circuit provides eight frequency measurement channels, and a 74151 data selector selects one of the inputs. Since the design requires a measuring range up to 1 MHz, the measured signal must be frequency-divided before being sent into the MCU; the D flip-flop performs this division, so the signal is first processed by the D flip-flop.

3.2 Digital Decoding Unit

This design uses an 8-digit common-cathode LED display to show the measured frequency and the selected channel number. Each digit consists of light-emitting diode segments arranged in a figure 8; by lighting particular segments, the digits 0-9 can be shown. Seven-segment LED displays, also known as LED digital tubes, come in two types: common anode and common cathode. In a common-anode digital tube the anodes are connected to a common positive terminal, and a segment lights when its cathode is driven low. In a common-cathode tube the cathodes are connected to a common negative terminal, and a segment lights when its anode is driven high.
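As an illustration of the common-cathode/common-anode distinction described above, the conventional segment codes for the digits 0-9 can be tabulated (bit order dp g f e d c b a; these are the standard textbook patterns, not necessarily the exact table used in this design's firmware):

```python
# Common-cathode 7-segment codes, bit order (dp)gfedcba:
# a segment lights when its bit is 1 (anode driven high).
SEG_COMMON_CATHODE = [
    0x3F,  # 0
    0x06,  # 1
    0x5B,  # 2
    0x4F,  # 3
    0x66,  # 4
    0x6D,  # 5
    0x7D,  # 6
    0x07,  # 7
    0x7F,  # 8
    0x6F,  # 9
]

def encode_digit(d: int, common_anode: bool = False) -> int:
    """Return the segment byte for digit d; a common-anode display uses
    the complement, since its segments light on a low level."""
    code = SEG_COMMON_CATHODE[d]
    return (~code) & 0xFF if common_anode else code

print(hex(encode_digit(5)))                      # -> 0x6d
print(hex(encode_digit(5, common_anode=True)))   # -> 0x92
```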

4 The Software Design of System Simulation

The core of the frequency meter is the AT89C51, which performs the signal handling, frequency measurement, and related functions, so appropriate firmware must be developed to measure the frequency. The C51 language was used to implement and debug the LED display and counter circuit, the divider circuit, the instructions for sending results over the serial port, and so on. The firmware can be divided into the following modules: the main program, the counting routine, the channel selection routine, the display routine, and the serial port routine.
The PC software is based on VB6.0. Functions were written to receive, over the serial communication link, the information sent from the lower machine, completing the system, including frequency transmission and curve display.
The software modules are as follows.
Main program: completes the setting of the timer, the serial port baud rate, the interrupt types, etc.
Counting routine: the external-interrupt gate time is set to 1 s, and the external pulses are counted during this 1 s; the count obtained is the frequency value of the external signal.
Channel selection routine: P1.6 and P1.7 are set to select the port switches; by choosing a different channel, the routine determines which channel's frequency is measured.
Display routine: displays the measured data by enabling the appropriate digit and calling the appropriate segment code.
Serial port and data saving routine: the measured frequency value is sent via the serial port to the PC. When the PC receives the data it draws the curves, using different colors to correspond to the different frequency values, and saves the frequency values to the database for later queries. The flow diagrams are shown below:


Fig. 3. Flow chart of the counting program
Fig. 4. Serial data transmission flow diagram
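The paper does not specify the byte format of the frames sent from the 89C51 to the VB6.0 host, so as an illustration assume a simple ASCII line of the form "channel,frequency"; a host-side parser for such a hypothetical frame might look like this:

```python
def parse_frame(line: str):
    """Parse a hypothetical 'CH,FREQ' ASCII frame, e.g. '5,10000'
    -> (channel, frequency_hz); returns None for malformed frames."""
    parts = line.strip().split(",")
    if len(parts) != 2:
        return None
    try:
        channel, freq = int(parts[0]), int(parts[1])
    except ValueError:
        return None
    if not 1 <= channel <= 8:   # the design multiplexes eight channels
        return None
    return channel, freq

print(parse_frame("5,10000\r\n"))  # -> (5, 10000)
print(parse_frame("garbage"))      # -> None
```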

Proteus and VB6.0 were used for joint simulation. Testing shows that the system functions normally and meets the requirements. The external input frequency is first measured by the MCU and shown on the seven-segment LED display; it is then sent through the serial port to the PC, and the host computer system quickly and accurately receives the frequency information transmitted by the lower machine. After corresponding processing, the curve is drawn on the corresponding linear axes and the measured frequency data is stored in the database. The simulation results are shown in Figures 5 to 7:

Fig. 5. The lower machine measuring and displaying the fifth channel's frequency
Fig. 6. The corresponding curves depicted on the PC

Fig. 7. Querying the data stored in the database

As can be seen from Fig. 3, microcontroller-based frequency testing is simple and practical, with a wide measuring range and fast response; compared with a traditional frequency counter it achieves higher accuracy and smaller error. From Fig. 6 we can see that, with data sent over the serial port, the measured frequency is displayed on the host

computer, reflecting the real-time frequency more intuitively. Fig. 7 shows that the measured frequency values are stored in the database, which helps with data query and analysis.

5 Conclusion
As the simulation results above show, compared with a traditional multi-channel frequency counter, the MCU-based frequency meter performs better and enhances the overall function of the instrument. Of course, the whole system is still in the testing process and a number of errors were found, such as measurement error on some channels: for example, with a 10000 Hz input on channel 5, the tests read 9998 Hz, but the measured data remained within the allowed error range.
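The deviation quoted above is small: reading 9998 Hz for a 10000 Hz input is a relative error of 0.02%, computed as follows:

```python
def relative_error(measured: float, true_value: float) -> float:
    """Measurement error as a fraction of the true value."""
    return abs(measured - true_value) / true_value

# Channel 5 test case from the text: 10000 Hz input read as 9998 Hz.
print(relative_error(9998, 10000))  # -> 0.0002, i.e. 0.02%
```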

The Law and Economic Perspective of Protecting
the Ecological Environment

Chen Xiuping and Liang Xianyan

Zhongnan University of Economics and Law,


Three Gorges University, Yichang, Hubei
cxp9453@163.com

Abstract. The core of law and economics theory is to obtain the maximum benefits at the minimum cost. Today, when the contradictions between the environment and economic growth are becoming increasingly prominent, how can we obtain the greatest economic performance while lightening the load on the environment? This paper proposes sustainable development as the guiding ideology, coordinated development as the guiding principle, and "prevention first" as the guideline.

Keywords: Law and economics theory, cost, efficiency, environmental protection.

1 The Core of the Law and Economic Theory: Costs and Benefits
Law and economics theory is, from the viewpoint of jurisprudence, the "economic analysis of law". Economic analysis of law is a theory that applies the concepts and research methods of economics to study and understand problems of law; it first arose in America in the 1960s. It was Richard Allen Posner who gathered the major achievements of the theory: his book Economic Analysis of Law accomplished a comprehensive and systematic analysis and summary of it.
Law and economics theory by nature "applies economic theory and econometric methods fully in the analysis of legal systems" [1]. In the eyes of the economic-analysis jurists, "economics is the science of rational choice in a world in which resources are limited relative to human desires". Based on the hypothesis that man is a "maximizer" of self-interest, economists derive the core idea of law and economics from three basic principles (the law of supply and demand, maximizing efficiency, and maximizing value): efficiency, that is, allocating and using resources so that value is maximized [2].
Applying this theory to the legal field leads to the conclusion that maximized wealth is the purpose of the law. All legal activities (including legislative, law-enforcement, and judicial activities, etc.) and all legal systems (the public law system, the private law system, the judicial system, etc.) ultimately aim at the most effective use of natural resources and the maximization of growing social wealth. Therefore, the core of law and economics theory is to obtain the maximum benefit at the minimum cost.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 207213, 2011.
Springer-Verlag Berlin Heidelberg 2011
208 C. Xiuping and L. Xianyan

2 The Emergence of the Ecological Environmental Problem


The ecological environmental problem is the question of how natural change or human activity may throw the ecological system off balance and degrade the ecological environment, bringing adverse effects on the development of human beings and the whole community.
In the early days of human society, ecological environmental problems mainly consisted of the lack of biological resources in local areas caused by human settlement and increased population. Since humanity entered industrial society, natural resource and environmental issues have become more serious, with more pollution problems, the so-called public hazards. Especially after the Second World War, the productive forces improved greatly and the ability to use and transform the environment increased unprecedentedly, while the consumption demand of human society, especially the developed world, kept growing and expanding. Humanity's consumption of resources and discharge of wastes grew unprecedentedly and brought about serious environmental problems. At present, ecological environmental problems have transcended national borders and become unavoidable common problems.
Since the founding of New China, our government has attached great importance to protecting the ecological environment, improved environmental protection work, and intensified environmental legislation and law enforcement, so that ecological environmental conditions have progressively improved. However, China's ecological environmental situation remains grim. For example, environmental pollution is still serious (high sewage discharge, serious water pollution, etc.), and the trend of ecological deterioration has continued (land degradation and reduction, etc.) [3].
Undeniably, the existence of environmental problems is related to how we understand conservation laws, to our too-fast-growing population, and to the government's decisions and acts, but the main reason is that we have been anxious for quick gains: we focused on the development of production and the increase of GNP, sacrificing the ecological environment for economic development. In other words, in pursuing economic benefits we have damaged the ecological environment, and at what great cost? So now we must consider this question: how can we lighten the load on the environment while obtaining the greatest economic performance?

3 Mitigate the Environmental Load and Reap Higher Economic Returns

3.1 Sustainable Development as the Guiding Ideology

3.1.1 Relating to Sustainable Development


In the early 1980s, the United Nations, confronting the three big challenges facing mankind (the North-South question, disarmament and security, and environment and development), set up three senior specialist committees and published three programmatic documents. The three documents independently came to the same conclusion: the world must organize and implement a new strategy of sustainable development. They repeatedly emphasized that sustainable development is the only way for mankind to survive and develop at the end of the 20th century and in the 21st century.
At the United Nations Conference on Environment and Development in June 1992, sustainable development became the most powerful voice. Humanity finally chose sustainable development; this was a historical evolution of human civilization and an important milestone in breaking with the traditional mode of development and opening the development of modern civilization.
The report Our Common Future held that the rapid economic growth human society has achieved has also brought the deterioration of the ecological environment, thereby seriously restricting further economic and social development. Therefore, we must change the traditional mode of development and choose a new development strategy to ensure the persistence of social development.

3.1.2 Sustainable Development at the Present Stage


At this stage, the conflict between economic growth and resources and the environment still exists, and in the stage of rapid industrialization and urbanization it is even more prominent. This is the product of high-speed economic development, and it is also a manifestation of economic underdevelopment. Sustainable development is a new strategy for economic and social development and the only correct choice [4].
At this stage, how should we cope and how should we implement sustainable development? I think we should focus on the following:
First, we should put the transformation of the growth mode in a prominent place and place more emphasis on technological progress. Developing the circular economy is significant for improving the utilization of resources and the environment.
Second, we should put the optimization of the industrial structure in a prominent place and place more emphasis on services, particularly the development of modern service industries.
Third, we should plan industrial development more rationally and analyze long-term market supply and demand trends, to avoid excessive heavy industry.
Fourth, we should pay more attention to coordination between economic growth, the saving of resources, and environmental protection.
Fifth, we should pay more attention to constructing a resource-conserving society, eliminating waste and advocating economy in planning, construction, circulation, consumption, and production.

3.2 Coordinated Development as the Guiding Principle

3.2.1 The Concept of Coordinated Development


Coordinated development in environmental protection usually refers to the mutual coordination and joint development of environmental protection, economic growth, and social progress, and is therefore often referred to as the principle that "environmental protection and economic and social development are coordinated" [5].

3.2.2 The Meaning of Coordinated Development Principle

3.2.2.1 It provides effective measures to prevent environmental and ecological damage. Coordinated development learned the historical lessons of "pollution first, treatment later" and "destruction first, protection later". Coordinated development reviews and analyzes the environmental impact of decision-making at the macroscopic level, fundamentally solving environmental problems and controlling environmental pollution and ecological damage at their policy source.

3.2.2.2 It is an important guarantee for the government to change its policy decision-making and to make policy scientifically. Coordinated development requires changing the mode of thinking in which a single department makes decisions alone: changing decision-making by individuals into decision-making by groups, and changing decision-making by individual departments into decision-making involving all the departments concerned. Overall policy-making changes from trusting in wise leadership to relying on a system of secure constraints and the rule of law. This makes decisions more comprehensive, democratic, and scientific.

3.2.2.3 It is advantageous for changing people's perspectives. Coordinated development is a key measure for maintaining sustainable development; it shifts from single-dimensional development to more complex development. Sustainable-development decision-making asks economic development to make overall plans for population, resources, and the environment, considering not only the economic benefit but also the environmental and social benefits.

3.2.2.4 It facilitates the optimized allocation of resources. Carrying out coordinated development implements the basic state policy of environmental protection and is an important measure for implementing sustainable development. Therefore, we must take important steps in management to establish and improve the mechanism of coordinated development, make it standard and scientific, and rationally and scientifically allocate the limited resources, directing them into places and projects that create good social, economic, and environmental benefits.

3.2.2.5 It facilitates reducing development costs. In the course of industrialization, our country long practiced an extensive, quantity-oriented economic mode; the economy developed at high speed, but at the same time we paid an enormous cost in natural resources and the ecological environment. China's economic development costs are higher than the world average and far higher than those of the developed countries. Coordinated development takes environmental factors and resource values into account; it facilitates reducing development costs through unified planning and a reasonable layout.

3.2.3 The Implementation Approach of Coordinated Development [6]


Today, when the contradictions between the environment and economic growth are becoming increasingly prominent, coordinated progress has been regarded by the international community as the best option for resolving the conflict.

Scientific economic development is not mere expansion; it requires improving productivity and maximizing the utilization of resources, and this is precisely the ultimate goal of environmental protection work.
Environmental protection refers to taking various measures, administrative, economic, scientific and technological, educational, and legal, to rationally use natural resources and prevent pollution and other hazards, so as to protect and improve the living environment and the ecological environment and make them more suitable for human existence and development. That is, in implementing its economic development goals, society must not damage the environment or let it move in the direction of instability and disorder, and especially must not keep destroying the web of life that sustains life. Instead, environmental protection should be taken as a means to improve and promote economic growth, so as to achieve the double purpose of environmental protection and economic development. After all, environmental damage caused by economic growth is neither our desire nor our objective.
If environmental protection is incorporated into the economic growth system and treated as an industry, it can itself promote economic growth [7]. According to a report in China Environment News, the rise of industry in Kunshan produced large quantities of caustic soda (NaOH) and solid waste that polluted the ecological environment. To remedy this pollution, the government led the people of Kunshan to develop a circular economy, actively introducing advanced technology to weave the industrial "links": more than twenty enterprises were organized into specialized "waste-eating" firms that turn industrial waste into new products of high added value. Under these conditions of optimized growth, environmental industries have become a new source of economic growth.
Economic growth in turn accelerates environmental protection, as manifested in the
following aspects:

3.2.3.1 Economic growth enables countries to invest more in protecting the environment. The purpose of economic development is to enhance overall national strength; as the country grows stronger, it can provide the financial and material resources needed to protect and improve the environment and to raise environmental quality. Moreover, as economic development raises people's living standards, their demands for environmental quality grow, their environmental awareness strengthens, and their participation in environmental protection becomes more conscientious, all of which undoubtedly promotes environmental protection.

3.2.3.2 Economic growth optimizes the human living environment [8]. As populations grow and people's material and environmental needs increase, it becomes ever more impossible to keep the environment in its original form. Only with a developed economy can people apply ecological laws in practice, create artificial environments modeled on ecological systems, and turn the places we inhabit into optimized human environments better suited to human life and social development.
In summary, environment and development are mutually reinforcing. If policies are used wisely, environmental protection can be sound in itself and also promote development. Improving environmental quality is itself a goal of human social development.
212 C. Xiuping and L. Xianyan

3.3 Take the "Guidelines on Prevention" as Guidelines

"Prevention first" is short for the policy of "prevention first, combining prevention with control, and comprehensive treatment." That is to say, the state should put the prevention of environmental problems first in environmental protection, taking preventive measures to keep problems from arising or worsening, while also taking active measures to control environmental pollution and ecological damage that is unavoidable or has already occurred.
At present, environmental legislation has moved from passively controlling pollution to actively preventing it. The prevailing view treats prevention before environmental problems appear, rather than control after pollution occurs, as the rule; legislatures have adopted preventive and comprehensive environmental policies and made "prevention first, combining prevention with control, comprehensive treatment" an important principle of environmental legislation.
Prevention means forestalling in advance, to the extent possible at the existing level of technology, all acts that interfere with the environment, so as to reduce the load on the environment to a minimum. In the environmental policy adjustments that countries undertook after the 1980s, the policy of "prevention first" received more and more attention and became an important guideline for national environmental management and legislation.
Our country came to truly value "prevention first" roughly between the 1970s and the early 1980s, recognizing its necessity from the great losses that environmental problems had caused China. According to incomplete statistics from the early 1980s, the economic damage caused by environmental pollution amounted to 690 million Yuan per year, and the partially counted economic loss caused by ecological damage amounted to 765 million Yuan per year; together these reached as much as 955 million Yuan, about 14 percent of the total value of industrial and agricultural output. Such huge economic losses made people realize that unless the environment is addressed at a strategic level and effective measures are taken, it will be utterly impracticable for China to keep its economic development on a sound footing. For this reason, the policy of "prevention first" was embodied in our Environmental Protection Law and in the Law on the Prevention and Control of Atmospheric Pollution.
In short, under the current grim ecological situation, only by taking the concept of sustainable development as our guiding ideology, coordinated development as our guiding principle, and "prevention first" as our guideline can we mitigate the environmental load and reap higher economic returns.

References
[1] Posner, R.A.: Economic Analysis of Law. The Encyclopedia Publishing House of China, Beijing (1997) (preface)
[2] Posner, R.A.: Economic Analysis of Law, 5th edn., New York, pp. 13-15 (1998)
[3] Chen, Q.S.: The Basic Theory of the Environmental Sciences, pp. 36-37. The Environmental Sciences Press of China, Beijing (2004)
[4] Zhang, K.: The Theory of Sustainable Development, p. 74. The Environmental Sciences Press of China, Beijing (1999)
[5] Zhu, B.: The Resources, Environment and Social Development. The Impact of Science on Society 1 (1994)
[6] Chen, Q.S.: The Basic Theory of the Environmental Sciences, p. 136. The Environmental Sciences Press of China, Beijing (2004)
[7] Nie, G.: The Economic Analysis of Our Environmental Protection in the Transition Period, p. 27. The China Economy Press, Beijing (2006)
[8] Li, K.: The Environmental Economics, p. 185. The Environmental Sciences Press of China, Beijing (2003)
Research on the Management Measure for Livestock
Pollution Prevention and Control in China

Yukun Ji1, Kaijun Wang2,*, and Mingxia Zheng2


1
University of Science & Technology Beijing, Beijing, China
j-yk@163.com
2
Tsinghua university, Beijing, China
wkj.jep@gmail.com,zhengmingxia@gmail.com

Abstract. In recent years, the government and various organizations have issued a series of policies and measures to address the problem of livestock pollution. Based on these existing policies and economic measures, this paper analyzes the remaining deficiencies in livestock pollution prevention and control and offers suggestions for addressing them.

Keywords: livestock and poultry pollution, management measure, pollution abatement.

1 Introduction
With the development of China's economy, people's material and cultural standards have risen continuously, and China's livestock breeding industry has grown very fast; the output of meat, eggs, and milk has increased rapidly. At the same time, livestock pollution has become more and more serious and is now a major source of agricultural pollution. To solve this problem, our country has in recent years tried to control the pollution through policy guidance and financial support, with good results. But because livestock pollution prevention started late, management problems remain; for example, the relevant regulations and standards are neither rigorous nor complete. This paper analyzes the development of the livestock industry from the economic and policy perspectives.

2 China's Livestock Industry Development Situation


With the development of China's economy and the continuous rise in people's material and cultural standards, China's livestock breeding industry has developed very fast. In 2003 it accounted for 34% of agricultural production, making China the world's largest producer of meat and eggs; by 2008 the figure had reached 35.5% [1].
As our country promotes new rural construction, the structure of the livestock industry is shifting toward intensive production and larger scale.
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 214-218, 2011.
Springer-Verlag Berlin Heidelberg 2011

According to monitoring by the Ministry of Agriculture, in 2008 farms with an annual pig output above 50 head, beef cattle output above 10 head, or broiler output above 2,000, or with a standing stock of dairy cattle above 20 head or of laying hens above 500, reached intensive-scale levels of 56.0%, 38.0%, 81.6%, 36.1%, and 76.9% respectively; even so, small-scale raising still accounts for more than 80% of all livestock operations [1].
Compared with the growth of livestock raising, the level of pollution prevention is not high. In an inspection of 23 provinces and cities by the environmental protection department, 90% of scaled livestock farms had not undergone environmental impact assessment, and 60% of farms had no pollution control measures. Livestock raising has become the main pollution source in rural regions.
According to the results of the State Council's 2008 pollution census, the livestock breeding industry generates 243 million tons of manure and 163 million tons of liquid waste per year. Its discharges of chemical oxygen demand, total nitrogen, and total phosphorus are 12,682,600 tons, 1,024,800 tons, and 160,400 tons respectively, amounting to 41.87%, 21.67%, and 37.90% of the country's total pollution. Among agricultural pollution sources, livestock farming is the most prominent: its chemical oxygen demand, total nitrogen, and total phosphorus account for 96%, 38%, and 56% respectively of the agricultural-source total [1,4].
3 Policy Measures for Livestock Breeding Industry Pollution
In recent years, as livestock breeding has become more intensive and larger in scale, its pollution problem has grown increasingly serious.

Table 1. The main measures for livestock pollution prevention and control

No.             Standard or regulation
--              Measures for the Management of Livestock Pollution Treatment
GB 18596-2001   Discharge standard for pollutants from livestock and poultry breeding
HJ/T 81-2001    Technical specifications for pollution treatment in the livestock industry
NY/T 1167-2006  Environmental and hygiene control specification for livestock raising sites
NY/T 1168-2006  Specification for livestock excrement treatment
NY/T 1169-2006  Technical specification for environmental control of livestock raising sites
NY/T 1221-2006  Regulations for the operation, maintenance, and safety of biogas projects at scaled livestock farms
NY/T 1222-2006  Design specification for biogas projects at scaled livestock farms
HJ 497-2009     Technical specifications for pollution treatment projects in the livestock industry
--              Implementation of "rewards to promote treatment" for rural environmental pollution control
HJ 588-2010     Technical guideline for agricultural solid pollution control

The government has adopted many measures, standards, and regulations to control livestock pollution. Table 1 lists the main measures taken over the past ten years.
Guidance documents for livestock raising fall into three categories: administrative management, technical specifications, and environmental economics. Those issued so far are mainly technical specifications, which impose restrictions on site selection, scale, project design, environment, pollution management, discharge standards, and so on, and these do serve a function. There are few documents on administrative management or environmental economics, and no dedicated administrative regulation for livestock pollution; even the <Environment Protection Regulation> contains nothing on it. This is because our country began taking these measures rather late, and much work remains to improve them.

3.1 Mismatched Departmental Duties Create Management Problems

The agriculture department takes rural economic development and agricultural structure as the focus of its work; recently it has pushed the industry toward intensification and scale, without seeing the problems this causes. The environmental department focuses on water, air, noise, and waste residue, mainly managing urban and industrial pollution, and has paid little attention to livestock pollution. In a sense, then, the mismatch of government duties is one reason livestock pollution has become serious.

3.2 Lack of Regulation for Small-Scale Livestock Breeding

Recently, the environmental department has strengthened management of large-scale livestock breeding, tightening oversight of large and medium enterprises through the "Three Simultaneities" system and environmental impact assessment. But small-scale operations make up more than 80% of all livestock enterprises, averaging 650 pigs or 22,000 chickens each, while cattle farms with more than 200 head are only 5% of the total [2]. The lack of management of small enterprises is a policy gap.

3.3 Lack of Regulations Encouraging the Industry to Prevent Pollution

Livestock raising is vulnerable to disease and market swings; it is a high-risk, low-profit industry. Our country's main instrument for managing livestock pollution has been penalties, with little supporting protection or incentive policy, so enterprises have no interest in building pollution treatment facilities; even when they build them, they often discharge waste untreated whenever no inspection is under way.

3.4 Lack of Data on Livestock Breeding Pollution

Livestock breeding is the main source of rural pollution, yet no data on the pollution situation or on pollutant discharges can be found in the <Environmental Conservation Yearbook>, the <Chinese Livestock Husbandry Yearbook>, or the <Chinese Agriculture Yearbook>. This impedes the improvement of livestock pollution prevention work and the achievement of our targets.

4 Conclusions and Suggestions

Controlling livestock breeding pollution and improving the environment is long-term work that must proceed step by step. It needs policy guidance, new techniques, and economic support, and it needs management departments, enterprises, and the market to work together; only then can livestock pollution be brought under control. In light of our country's situation, we suggest the following:

4.1 Improve Regulation of the Livestock Breeding Industry

Improve the regulatory system for livestock breeding pollution by adding dedicated regulations for pollution prevention, for example by enacting a <Regulation on Livestock Breeding Pollution Prevention and Control> that makes environmental responsibilities and obligations clear and imposes serious punishment on violating enterprises.
Strengthen the drafting of inspection documents for the livestock breeding industry and clarify the responsibilities of the different departments to avoid administrative management problems. Keeping the whole situation in view, make the treatment of livestock breeding pollution and the improvement of the rural environment the final target, with each department performing its own function.

4.2 Clarify Departmental Functions and Integrate Management of the Industry

Environmental protection is not the work of the environmental protection department alone; it involves the industry, agriculture, commerce, sanitation, planning, and other departments. Every department should play its part in managing pollution prevention, weighing environmental problems in whatever measures it takes, so as to reconcile economic development with environmental protection.
Communication between departments should be strengthened and a horizontal management system built among the environmental protection authorities, so that the livestock breeding industry is managed throughout the chain from plant construction and improvement, extension, and routine inspection to discharge control.

4.3 Strengthen Research on Economical and Practical Pollution Prevention Techniques

The cost of livestock excrement pollution prevention projects is very high and hard for enterprises to bear without government support. It is urgent to strengthen research on pollution prevention techniques, reduce the cost of livestock pollution prevention, and increase treatment efficiency, in particular by increasing investment in techniques that can be turned into productive outputs. Research on livestock pollution should match market demand and aim at economical, practical solutions, so as to promote the development of pollution prevention technology.

4.4 Support Organic Fertilizer Production and Use; Cultivate Markets for Biogas, Biogas Slurry, and Biogas Residue

New biogas projects should be examined and approved strictly: around each project there should be farmland or orchards able to use the biogas slurry and residue, and the piping for the slurry should be finished when the project is built. For projects without enough surrounding land, the slurry and residue should be made into organic fertilizer, and sales channels should be provided.
Organic fertilizer is a good agricultural fertilizer whose benefits accrue mainly to later generations; it has no price advantage over chemical fertilizer. To push forward the construction of biogas projects and establish a sound market environment, the most important step is to strengthen management of the organic fertilizer market. Production and use of organic fertilizer must be encouraged by both policy and economic means.

4.5 Intensify Inspection of the Livestock Breeding Industry

Intensive, large-scale development has made livestock breeding the main pollution source within agriculture, so supervision and management of the industry must be tightened. Strictly enforce environmental impact assessment and the "Three Simultaneities" system for breeding sites, and increase the punishment of those that fail to meet the requirements for pollutant discharge.

4.6 Strengthen Management of Small and Medium Livestock Breeding Operations

Because their cost and individual pollution quantities are low, small and medium livestock farms have long been a blind spot for environmental management, but their prolonged fugitive discharges cause big problems. Such farms are scattered and hard to supervise in real time, so the main measures for them should be guidance and encouragement, together with training in excrement treatment, to reduce the organic load in wastewater. Building fertilizer companies or sewage treatment plants near clusters of small and medium farms can solve their fecal treatment problem.

References
[1] National Bureau of Statistics, Ministry of Environmental Protection: China Statistical Yearbook on Environment. China Statistics Press, Beijing (2009)
[2] Su, Y.: Research of Countermeasures on Waste Treating of Intensive Livestock and Poultry Farms in China. Chinese Journal of Eco-Agriculture 2, 15-18 (2006)
[3] Yang, J.: The Tension Rings a Red Alarm. Environment 4, 4-5 (2002)
[4] Peng, X.: Our Livestock Farming Pollution Treatment Policy and Its Character. Environment Economy 1 (2009)
[5] The Editorial Board of the Chinese Livestock Husbandry Yearbook: Chinese Livestock Husbandry Statistical Yearbook (2008). China Agriculture Press, Beijing (2010)
The Design of Supermarket Electronic Shopping Guide
System Based on ZigBee Communication

Yujie Zhang, Liang Han, and Yuanyuan Zhang

College of Electrical and Information Engineering,


Shaanxi University of Science and Technology,
Xi'an, China

Abstract. Based on an analysis of ZigBee networking technology and its protocol, this paper designs a ZigBee network model with both communication and location features, resting mainly on the location capability of the CC2431 in a ZigBee network, and further proposes a mobile electronic supermarket shopping guide system that can locate desired goods, provide navigation, and offer the latest information on supermarket products.

Keywords: Wireless Networking Technology, Positioning Engine, ZigBee, CC2430, CC2431.

1 Introduction

With economic and social development, large supermarkets have emerged as convenient places to buy necessities, making shopping easier and saving time. However, the enormous size of a supermarket and the growing quantity and variety of goods make it hard for customers to find what they need and to obtain the latest product information. This paper presents a mobile electronic supermarket shopping guide system that can be fitted to a shopping cart to locate desired goods and offer the latest information on supermarket products.

2 ZigBee Technologies

ZigBee is a standard network protocol based on IEEE 802.15.4 wireless communication [1], operating in the 2.4 GHz band with high efficiency at low data rates. A ZigBee network supports two physical device types, the Full-Function Device (FFD) and the Reduced-Function Device (RFD), and consists of a coordinator, routers, and end devices [2]. The coordinator and routers are FFDs, which implement the full set of services defined by the ZigBee protocol and can communicate with any node in the network, while an end device can be either an FFD or a protocol-reduced RFD that communicates only with FFDs.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 219-224, 2011.
Springer-Verlag Berlin Heidelberg 2011

3 Systems and the Network Model

3.1 Location

This wireless location system relies mainly on a CC2431-based ZigBee network and the chip's built-in wireless location engine [3]. There are three types of nodes: the central node, blind nodes, and reference nodes. The central node is the coordinator (FFD), which initiates the network. A blind node, built from an end device (RFD), is the node to be located; it calculates its own current coordinates. Reference nodes are routers (FFDs) whose coordinates within their respective networks are known, and which help the blind node determine its position.
The CC2431 location engine is based on RSSI technology [4]; RSSI is the strength of the wireless signal a node receives. The propagation loss can be calculated from the known transmit strength and the received strength, and then converted into a distance using an empirical model; the node's location follows from existing algorithms and the fixed coordinates of the reference nodes. The blind node receives packets from the reference nodes, obtains each reference node's coordinates and the corresponding RSSI value, and feeds them into the location engine; we only need to write the required parameters into the positioning engine and read out the result once the engine finishes. The theoretical RSSI value is given by equation (1) [5]:
RSSI = -(10 n lg d + A)    (1)
Here the RF parameters A and n describe the network operating environment. A is defined as the absolute value of the received signal strength at 1 m from the transmitter; n is the path-loss exponent, which indicates how fast the signal energy decays as the distance from the transmitter increases; and d is the distance between the transmitter and the receiver.
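Equation (1) can be inverted to recover the distance from a measured RSSI value. The following Python sketch illustrates this; the default values of A and n are illustrative placeholders, not calibrated figures from the paper.

```python
def rssi_to_distance(rssi_dbm, a=45.0, n=2.5):
    """Invert RSSI = -(10*n*lg(d) + A) to estimate the distance d in metres.

    a is |received signal strength| at 1 m and n is the path-loss exponent;
    both defaults are placeholder values for illustration only.
    """
    return 10 ** ((-rssi_dbm - a) / (10.0 * n))

# With these parameters, an RSSI of -45 dBm corresponds to 1 m,
# and -70 dBm to 10 m.
```

In practice A and n must be measured for the deployment environment, since shelving and crowds strongly affect propagation.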

Fig. 1. Location diagram: (a) blind node and reference nodes, with the optimal reference node marked; (b) the blind node fixed in the global grid

The positioning operation uses the "best" reference node, the one with the highest RSSI value. For example, in the area shown in Figure 1(a), a reference node is placed every 30 m in the X and Y directions. First, the blind node finds the reference node with the highest RSSI value; since the engine's horizontal and vertical coordinates each have a maximum span of 63.75 m, this identifies a roughly 64 m x 64 m region centered on the "best" reference node. As the RSSI value of this node is known, the distance d1 from the blind node to it is available. The remaining reference nodes are located and their distances from the blind node (d2 to d8) are calculated in the same way. After this calculation, the blind node's position is fixed in the global grid, as shown in Figure 1(b). Finally, all the fixed coordinates are written into the location engine and the final position is read out.
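The CC2431 performs this computation in silicon and its exact algorithm is not published; the Python sketch below only mirrors the general idea, weighting reference nodes with stronger RSSI (hence smaller estimated distance) more heavily in a centroid estimate. The RF parameters are placeholders, as before.

```python
def estimate_position(readings, a=45.0, n=2.5):
    """Weighted-centroid position estimate from reference-node readings.

    readings: list of (x, y, rssi_dbm) tuples, one per reference node.
    An illustrative stand-in for the CC2431's internal location engine,
    not the chip's actual algorithm; a and n are placeholder RF parameters.
    """
    weighted = []
    for x, y, rssi in readings:
        d = 10 ** ((-rssi - a) / (10.0 * n))  # invert RSSI = -(10*n*lg d + A)
        weighted.append((x, y, 1.0 / max(d, 0.1)))  # nearer node -> larger weight
    total = sum(w for _, _, w in weighted)
    px = sum(x * w for x, _, w in weighted) / total
    py = sum(y * w for _, y, w in weighted) / total
    return px, py
```

Equal readings from the four corners of a 30 m cell place the blind node at the cell center; a stronger reading pulls the estimate toward that reference node.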

3.2 Network Model

The network is built on the mesh topology [6] as its basic network topology, as shown in Figure 2. It establishes and maintains itself without human intervention; each node can communicate with at least one other node, and multi-hop routing is supported.

Fig. 2. Network topology

The network structure of the system is shown in Figure 3. The supermarket is divided into several regions, each of which establishes a mesh-based sub-network. The central node (also called the gateway) initiates the regional sub-network and at the same time handles communication between the server and the wireless network. Each shopping cart is fitted with a mobile device containing a blind node for positioning. Blind nodes can independently join and leave networks and can receive and send data, but cannot act as routers. Reference nodes are installed at specific locations; their coordinates within the network are known, so they can both help blind nodes determine their positions and route data. The central node of each partition aggregates the data and forwards it to the server over a wired link.
On power-up, the mobile device carrying the blind node first joins a sub-network, and the reference nodes announce their own coordinates to the blind node in data packets. The blind node obtains from each packet the reference node's coordinates and the corresponding RSSI value and feeds them into the positioning engine; after the engine's calculation it reads out its current position, the initial coordinates. When the customer enters a query term, the initial coordinates are sent to the server together with the query through the reference nodes and the central node. The server looks up the query to obtain the target coordinates, computes the best path from the two coordinate pairs and the supermarket map, and replies to the mobile device with this path as a set of coordinates. As the customer follows the path, the blind node, which can independently join and leave the different sub-networks, constantly refreshes its current coordinates, achieving the navigation function. In this network model, blind nodes can communicate with the server from any location, so customers can get the latest product information anytime, anywhere in the supermarket.
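The paper does not name the path-planning algorithm the server uses; a breadth-first search over a grid representation of the supermarket map is one simple way to realize the "best path" step, sketched below. The grid encoding (0 for aisle, 1 for shelf) is an assumption made for illustration.

```python
from collections import deque

def best_path(grid, start, goal):
    """Shortest path (in steps) between two cells of a supermarket grid map.

    grid: list of rows, 0 = walkable aisle, 1 = shelf; start/goal: (row, col).
    A hypothetical stand-in for the server's path search, which the paper
    leaves unspecified. Returns the path as a list of coordinates, or None.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:              # walk predecessors back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None  # goal unreachable (blocked by shelves)
```

The returned coordinate list is exactly the kind of path the server would send back to the mobile device for navigation.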

Fig. 3. Network Model: regional sub-networks (region 1, 2, 3), each with a gateway (central node), reference nodes, and blind nodes, connected to the server

4 System Configuration and Implementation

4.1 System Hardware Structure

According to the network model above, the system is divided into four parts: the mobile device, the reference node module, the gateway node module, and the server.
(1) Mobile device: Built around Samsung's ARM9 microprocessor S3C2410 as its core, it comprises a touch screen (four-wire resistive), flash memory, a power supply, a blind node, and other peripherals. The touch screen handles query input, query result display, and the navigation interface; the flash memory stores the map of the supermarket region; and the blind node, designed around the CC2431, performs the positioning. It is shown in Figure 4(a).

Fig. 4. Hardware structure

(2) Reference node (gateway node): This part was made up with the CC2430,
power supply, reset circuit and an antenna minimum system which is the basic unit in
Network communication and positioning system. Structure is shown in Figure 4 (b).

4.2 System Node Software

All nodes in this system are implemented on the OSAL real-time operating system; the OSAL layer provides message management, task synchronization, time management, interrupt management, task management, memory management, power management, and non-volatile storage management services.
(1) Gateway node: It is the coordinator of the positioning system, connected to the server through RS-232. First, it prepares configuration data for the reference nodes and blind nodes and sends it to the corresponding nodes [8]; second, it receives the data fed back by each node and forwards it to the server. The work flow is shown in Figure 5.

Fig. 5. Gateway node software process
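The gateway's role of relaying node feedback to the server over RS-232 implies some framing of the data. The paper does not specify the frame layout, so the sketch below invents a minimal one (node id, node type, coordinates in 0.25 m units as used by the CC2431 engine, RSSI magnitude) purely for illustration.

```python
import struct

# Hypothetical frame layout; the paper does not define the RS-232 protocol.
# <BBhhB: node id, node type, x and y in 0.25 m units, |RSSI| in dBm.
FMT = "<BBhhB"

def pack_report(node_id, node_type, x_m, y_m, rssi_dbm):
    """Encode one node report as the gateway might forward it to the server."""
    return struct.pack(FMT, node_id, node_type,
                       int(round(x_m * 4)), int(round(y_m * 4)),
                       int(-rssi_dbm))

def unpack_report(frame):
    """Decode a report back into engineering units on the server side."""
    node_id, node_type, xq, yq, rssi_mag = struct.unpack(FMT, frame)
    return node_id, node_type, xq / 4.0, yq / 4.0, -rssi_mag
```

A fixed binary layout like this keeps the serial link simple: the server reads a known number of bytes per report and decodes them without any parsing state.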

(2) Reference node: Its own coordinates are known to the positioning system, and it acts as a router. Its service is to provide the blind node with accurate data packets containing its own (X, Y) position coordinates and RSSI value [8]. This node must be configured correctly within its area. The work flow is shown in Figure 6.
(3) Blind node: It is the mobile node in the positioning system and belongs to the end devices. By receiving the coordinates and RSSI values of all reference nodes in the positioning region, it calculates its own coordinates using the positioning algorithm [8]. The work flow is shown in Figure 7.

Fig. 6. Reference node software process          Fig. 7. Blind node software process

5 Conclusions
This paper designs a ZigBee network model with both communication and location features, resting mainly on the CC2431's location capability in a ZigBee network, and further proposes a mobile electronic supermarket shopping guide system that can locate desired goods and offer the latest information on supermarket products. Combining positioning, communication, and computer technology is a new and meaningful field of study with good economic and social benefits: product information need only be updated on the server and can then be checked on the mobile devices. Deploying the system makes shopping more comfortable for customers and facilitates supermarket management.

References
1. Qu, L., Liu, S., Hu, X.: Technology and Application of ZigBee. Press of Beihang University
2. Li, W., Duan, C.: Professional Training on Wireless Network and Wireless Location of ZigBee. Press of Beihang University (2006)
3. Yao, Y., Fu, X.: Network Location of Wireless Sensors Based on CC2431. Information and Electronic Engineering
4. Gao, S., Wu, C., Yang, C., Zhao, H., Chen, Q.: Teaching of ZigBee Technology. Press of Beihang University
5. Chai, J., Yang, L.: Location System of Patients Based on ZigBee. Computer Measurement and Engineering
6. Ren, F., Huang, H., Lin, C.: Wireless Sensor Networks. Journal of Software
7. Sun, T., Yang, Y., Li, L.: Development Status of Wireless Sensor Networks. Application of Electronics
8. Sun, M., Chen, L.: Application of ZigBee in the Field of Wireless Sensor Networks. Modern Electronics Technique
The Research of Flame Combustion Diagnosis System
Based on Digital Image Processing

YuJie Zhang, SaLe Hui, and YuanYuan Zhang

College of Electrical and Information Engineering,


Shaanxi University of Science and Technology, Xi'an, China

Abstract. According to the characteristics of the boiler combustion process, we use image processing techniques to extract features that characterize the flame. To meet the real-time requirements of a boiler combustion diagnosis system, we developed a PC-plus-DSP real-time image acquisition and combustion diagnosis system for a 200 MW boiler. The results show that the system is simple and practical, and can diagnose combustion during boiler operation. The system provides a reliable basis for the safe and economic operation of power station boilers and has promising engineering application prospects.

Keywords: Image acquisition, combustion diagnosis, VC++.

1 Preface
The combustion process inside a power station boiler furnace is a complex physical and chemical process. The flame temperature field distribution and the combustion condition are of vital practical significance for the safe operation of the power station. With the development of computer and electronic technology, flame image monitoring systems built around digital image processing have become mainstream.
Based on the characteristics of the boiler combustion process, this article uses image processing technology to extract features that characterize the flame. In view of the high real-time requirements of combustion diagnosis, we developed a PC-plus-DSP real-time image acquisition and combustion diagnosis system and tested it on a 200 MW power plant boiler. Finally, we present the preliminary results of the study.

2 System Hardware Structure


The system structure is shown in Fig. 1; it is made up of the optical system, the CCD camera, and the image processing and combustion diagnosis system.

2.1 Optical System

The optical system is an optical periscope; we use it to obtain the flame image in the furnace, where a prism changes the direction of the light and an optical fiber guides it out onto the CCD camera's target surface. So that the optical system can work safely in the furnace, a double-layer sleeve-pipe structure is used to lower the temperature with cooling air and to keep the lens free of dust with an air purge. The optical system is installed 30 m up on the boiler, and its field of view can effectively cover the entire furnace cross-section to obtain the flame combustion image of the whole chamber.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 225-230, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2.2 CCD Camera

The CCD camera is a WAT-250D color camera from Japan's WATEC Corporation; its parameters are: 1/3-inch color CCD, horizontal resolution of 450, signal-to-noise ratio of 48 dB, and a 12 V DC power supply.

Fig. 1. System Structure

2.3 Imagery Processing and Combustion Diagnosis System

The image processing and combustion diagnosis system is made up of a PC and a DSP coprocessor inserted into one of the PC's PCI slots. The DSP coprocessor completes image collection, processing, and feature extraction; the PC, as the host, displays the combustion diagnosis result in real time and controls the system.
The system works as follows: the scene is imaged through the optical periscope and passed to the CCD target surface through the image optical fiber. The CCD camera captures the flame image at a rate of 25 frames per second and converts it into a video signal. The DSP coprocessor acquires the real-time image information of the boiler and sends it to the PC over the PCI bus for further processing and combustion diagnosis.

3 Flame Image Characteristic Quantity Extraction


From the combustion flame images taken on the power plant boiler, the system extracts the following features: the flame average gray and gray variance, the effective flame area and its variance, and the high-temperature region area and its variance. These features reflect the combustion characteristics and the burning behavior of the flame, and finally allow the diagnosis to be made.

3.1 Average Gray Iaver

The average gray is defined as the ratio of the mathematical expectation of the gray level to the maximum image gray; it reflects the average light intensity of the flame radiation. Take a threshold r in the image processing, and let p(i) be the probability of the i-th gray level, where r < i < L and L is the maximum image gray. With Iaver denoting the average gray of the image, we have:

    Iaver = (1/m) * Σ_i i * p(i)                (1)

In the formula, m is the maximum gray level of the image.
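As a minimal sketch of formula (1), assuming p(i) is taken from the image's gray-level histogram (the function and parameter names below are ours, not the paper's):

```python
def average_gray(hist, r, m=255):
    # hist[i] = number of pixels with gray level i; r = threshold from the text.
    # p(i) = hist[i] / total pixels; Eq. (1): Iaver = (1/m) * sum_{i>r} i * p(i)
    total = sum(hist)
    expectation = sum(i * n for i, n in enumerate(hist) if i > r)
    return expectation / (total * m)
```

For an image whose pixels all sit at gray level 128 with m = 255, this returns 128/255.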

3.2 Variance
The variance reflects the non-uniformity of the flame temperature distribution: the bigger the variance, the bigger the temperature difference across the temperature field. When the combustion chamber runs at low load, the amount of coal burned is reduced and the heat released by combustion decreases accordingly, so the flame temperature in the furnace and the water-wall surface temperature drop. This slows the combustion reaction and the heat release rate, and delays the ignition of the pulverized-coal air flow compared with the high-load case; it also causes the flame kernel to rise, and the share of pulverized coal burning changes over the space, so the spatial distribution of the furnace temperature field is more even than at high load.

From probability theory, the variance can be defined as follows:

    σ² = Σ_{i=1}^{M} Σ_{j=1}^{N} ( f(i, j) − f̄ )²                (2)

In the formula, M and N are the height and width of the image (in pixels), f(i, j) is the gray value at (i, j), and f̄ is the average gray, whose value is f̄ = Iaver.

If the current combustion is stable, the gray level stays within a threshold band and the variance is comparatively stable. If combustion is unstable and tends toward flameout, the gray value falls and the gray variance changes greatly.
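Formula (2) and the stability criterion just described can be sketched as follows; the thresholds `mean_min` and `var_max` are illustrative tuning parameters, not values from the paper:

```python
def gray_variance(image, f_bar):
    # Eq. (2): sigma^2 = sum over all pixels (i, j) of (f(i,j) - f_bar)^2
    return sum((v - f_bar) ** 2 for row in image for v in row)

def combustion_unstable(mean_gray, variance, mean_min, var_max):
    # Heuristic from the text: a falling gray level or a large
    # variance indicates unstable combustion / flameout risk.
    return mean_gray < mean_min or variance > var_max
```

A uniform image has zero variance; a falling mean gray trips the instability flag.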

3.3 Effective Area of the Flame


As a geometric feature describing the combustion condition, the flame area has been studied extensively in combustion theory. Here the effective flame area corresponds to the projected area perpendicular to the camera direction. After preprocessing the original flame image, we count the total number of pixels in the image at or above a gray level. Since each pixel occupies a roughly uniform patch of the flame, this count essentially represents the size of the projected flame area. The effective flame area is calculated as follows:

    S_i = Σ_{j=1}^{G} L( g_j^i − g_th )                (3)

S_i is the flame area of the i-th sampling, G is the number of image pixels, g_j^i is the gray value of the j-th pixel of the image in the i-th sampling, g_th is a pre-set threshold, and L(x) is the step function, whose definition is:

    L(x) = 1 for x ≥ 0;  L(x) = 0 for x < 0                (4)
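Formulas (3) and (4) amount to counting the pixels that reach the threshold; a minimal sketch (the function names are ours):

```python
def step(x):
    # Eq. (4): L(x) = 1 for x >= 0, else 0
    return 1 if x >= 0 else 0

def flame_area(pixels, g_th):
    # Eq. (3): S_i = sum_j L(g_j - g_th), i.e. the number of
    # pixels whose gray value reaches the threshold g_th
    return sum(step(g - g_th) for g in pixels)
```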

3.4 Area of High-Temperature Region

From combustion theory it is known that, other conditions being equal, the higher the combustion temperature, the more stable the combustion: the flame resists disturbances strongly and burns steadily. Therefore the values of the effective flame area and of the high-temperature region area can reflect combustion stability and serve as a basis for combustion diagnosis.

4 System Software Design


The DSP is programmed in C; debugging and development are done in the CCS integrated development environment, and the code is downloaded into the DSP memory through the JTAG connection. The host PC software is written in VC++ 6.0 and completes the combustion diagnosis and related work.

4.1 DSP Software Design


The flow is: after power-on reset, the DSP loads the program and initializes the system. After a successful self-check, the DSP issues the acquisition signal and carries out the digital image acquisition. When the odd and even fields have both been captured, the two fields are combined into one frame; the program then enters image processing, reads the gray values, filters and de-noises the image, detects edges, and extracts the image features. Finally it transmits the image data and the results to the PC through the PCI interface.

Fig. 2. System Flowchart

4.2 PC Software Design


The PC is the interactive window between the system and the operator. Using the image information processed by the DSP, it completes the combustion diagnosis, the warning output, the real-time display of the flame combustion situation, and the presentation of whole-furnace combustion information and the temperature field. The PC software is developed in VC++ 6.0 and includes a data communication module, a data recording module, and a historical trends module.

5 Test and Result


Using the system described above, we carried out a performance test on a 200 MW utility boiler.

Fig. 3. Flame Original Image Fig. 4. Image Characteristic Quantity Extraction Surface

Fig. 3 is an image taken during start-up while co-firing with bituminous coal. Fig. 4 shows the results of the flame image preprocessing and of the feature computation performed by the DSP.

Fig. 5. Flame Combustion Condition History Tendency Chart

Fig. 5 is the history trend chart of the start-up process recorded by the PC. The left box shows the history trends after image processing: in the chart, A is the flame average gray, B the average-gray variance, C the effective flame area, and D the effective-area variance; the right box holds the settings of the relevant parameters. We can see from this chart that the above flame features reflect the boiler flame combustion condition in real time and can be taken as the basis for combustion diagnosis.

6 Conclusion
Compared with traditional flame detection methods, combustion diagnosis based on image processing has clear advantages. The preliminary experiments indicate that the PC-plus-DSP real-time image acquisition and combustion diagnosis system designed in this article can complete the image processing in real time and carry out the combustion diagnosis effectively.

Design and Research of Virtual Instrument
Development Board

Lin Zhang1, Taizhou Li2, and Zhuo Chen2


1
School of Mechanical and Engineering,
Huazhong University of Science and Technology
2
School of Electronic Information, Wuhan University
776396043@qq.com

Abstract. At present, virtual instrument technology has developed very quickly in the fields of electronic measurement and automatic control. A virtual instrument is a functional instrument combining software and hardware. This article is written from the perspective of exploration and research, and can serve as a reference for the hardware problems encountered in virtual instrument development. The virtual instrument development board, based on the LabVIEW development platform, can implement a virtual oscilloscope, a virtual signal generator, a virtual spectrum analyzer, and so on. The virtual oscilloscope was used to test and analyze real signals, and the output signal of the virtual signal generator was likewise measured and analyzed; the results are satisfactory. The virtual spectrum analyzer places comparatively high demands on the software, so this paper does not implement that feature, but its hardware has been tested successfully.

Keywords: Virtual instrument development board, DSP, FPGA (CPLD), SCM, LabVIEW, Quartus II.

1 The Conception of Virtual Instrument


Like traditional instruments, virtual instruments contain the same three function modules: data acquisition and control, data analysis, and result presentation. A virtual instrument transparently combines computer resources with the testing capabilities of the hardware to realize the functions of an instrument, and simulates conventional devices with various icons and controls on the virtual instrument panel.
Icons implement the instrument power on/off switch; button icons set parameters such as the "magnification" and "channel" of the test signals; various display controls present measurement or analysis results as values or waveforms; mouse and keyboard operations replace the traditional practice on the instrument panel; and flowchart-style graphical programming software implements the measurement of the various signals and the data analysis. An electrical parameter tester based on virtual instruments is composed of hardware and software.
Virtual instrument hardware usually consists of a general-purpose computer and peripheral hardware devices. The general-purpose computer can be a notebook, a desktop computer, or a workstation. The peripheral hardware can be a GPIB system, a VXI system, a PXI system, a data acquisition system, or another system; a hybrid of two or more systems can also be chosen. One of the simplest and cheapest forms is an ISA- or PCI-bus data acquisition card, or a portable data acquisition module based on the RS-232 or USB bus. Virtual instrument software comprises three levels: the operating system, the instrument drivers, and the application software. The operating system can be Windows 9x/NT/2000/XP, SunOS, Linux, and so on. The instrument driver software directly controls the various hardware interfaces and forms the communication link between the application software and the peripheral hardware modules. The application software implements the instrument functions and the virtual instrument panel; the user interacts with the virtual instrument through this panel.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 231-238, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2 The Choice of Virtual Instrument's Development Environment


Currently, there are two main categories of relatively mature software platforms for virtual instrument test-system development. One category is the general visual programming environments: Microsoft's Visual C++ and Visual Basic, Inprise's Delphi and C++ Builder, and so on. The other is the programming environments dedicated to virtual instrument development introduced by a number of companies, mainly Agilent's (formerly HP's, now a separate company) graphical programming environment VEE, NI's LabVIEW graphical programming environment, and the text-based programming environment LabWindows/CVI.
General visual tools such as VB, VC, and Delphi are text-based programming languages with certain graphical features. They are merely graphical environments for text languages, or visually assisted environments, not graphical languages. Developing virtual instrument application software on these platforms requires a great deal of text programming, so the difficulty is higher, upgrades are hard, and efficiency is not high. LabWindows/CVI, a dedicated, instrument-oriented interactive C development platform for virtual instruments, is simple and intuitive to program: it provides automatic code generation and a large amount of VPP-compliant standard instrument driver source code for reference and use, so it is the environment more often used by virtual instrument system integrators to build large multi-bus test systems or complex virtual instruments. Agilent VEE and LabVIEW are graphical programming environments (also called G programming environments); their flowchart-style programming method differs from text languages and is suitable for professionals building small test systems and simple virtual instruments.

3 Design of Virtual Instrument Development Board

3.1 Design of MCU and PC Interface

3.1.1 RS232 Computer Interface with the PC


Because the PC's RS-232 logic levels differ from the MCU interface levels, level conversion is needed before the PC can communicate with the microcontroller.
Design and Research of Virtual Instrument Development Board 233

Fig. 1. MAX232 chip design for applications with PC computer interface

3.1.2 PS/2 Interface
The PS/2 connector pins are:
1: Data (DATA);
2: Not used;
3: Power ground (GND);
4: Power (+5 V);
5: Clock (CLK);
6: Not used.
The PS/2 interface now widely used on PCs is a mini-DIN 6-pin connector. PS/2 devices are divided into master and slave: the master device uses the female socket and the slave device the male plug. The PS/2 keyboards and mice in wide use today work as slave devices. The PS/2 clock and data lines are open-collector and must have external pull-up resistors (generally provided by the master device). Data communication between master and slave is synchronous, serial, and bidirectional, with the clock signal generated by the slave device.

Fig. 2. PS/2 pin
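A PS/2 device shifts each byte out in an 11-bit frame: start bit 0, eight data bits LSB-first, odd parity, stop bit 1. That framing is standard PS/2 behavior, not spelled out in the text above; sampling the data line on each falling clock edge, a host could decode a frame roughly like this:

```python
def decode_ps2_frame(bits):
    # bits: the 11 data-line levels sampled on successive falling clock edges:
    # [start(0), d0..d7 (LSB first), odd parity, stop(1)]
    if len(bits) != 11 or bits[0] != 0 or bits[10] != 1:
        raise ValueError("framing error")
    data = bits[1:9]
    # Odd parity: data bits plus parity bit must contain an odd number of 1s.
    if (sum(data) + bits[9]) % 2 != 1:
        raise ValueError("parity error")
    # Assemble the byte LSB-first.
    return sum(b << i for i, b in enumerate(data))
```

For example, the keyboard make code 0x1C would arrive as the levels [0, 0,0,1,1,1,0,0,0, 0, 1].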

3.2 CPLD Application Circuit Design

3.2.1 CPLD Hardware Design


Because many of the devices on the virtual instrument development board need to work at high speed, a CPLD chip is used. The mainstream CPLD vendor ALTERA was chosen, mainly for the cost-effectiveness of its parts and its free build environment; the company's chips are used in many instrument devices, for example the 7000S-series EPM7128SLC84-6. The CPLD on this development board implements waveform selection and high-speed conversion; meanwhile, a CPLD-to-DSP interface is reserved, so that spectrum analysis, a high-speed oscilloscope, and other functions can be achieved.

Fig. 3. CPLD(EPM7128SLC8415)

3.2.2 Design of CPLD Interface with AD


The AD converter interfaces with the CPLD, which performs data buffering and primary processing. The AD is the 12-bit AD1674, whose minimum conversion time is 10 µs. A less precise but faster 8-bit AD can be chosen for higher conversion speed, while the 12-bit part can be used where higher accuracy is needed; the general precision requirements of the virtual instrument development board can be met. Consequently, this chip can satisfy the application needs and the different requirements of virtual instrument development.

Fig. 4. AD circuit

3.2.3 Design of CPLD Interface with DA

Fig. 5. DA circuit

3.2.4 JTAG Interface


JTAG is the FPGA (CPLD) program download interface, using a ByteBlasterMV parallel download cable. The ByteBlasterMV cable has a standard 25-pin parallel-port connector to the PC, supports a Vcc of 3.3 V or 5.0 V, and allows users to download data from the MAX+PLUS II or Quartus II development software on the PC.

Fig. 6. JTAG Interface

3.3 Low Pass Filter Design

Figure 7 shows the design parameters and circuit diagram of the 30 MHz low-pass filter.

Fig. 7. 30MHz low-pass filter

3.4 Single Chip Design

Figure 8 shows the minimum external circuit of the ATM89S52 single-chip microcontroller. The microcontroller's bidirectional data bus, port P0, is connected to the CPLD's I/O; the one-way read/write control output bus includes Rd and Wr and the address latch signal ALE (Address Lock Enable). Through the RS-232 interface, the microcontroller's serial port implements serial communication, and P1 connects with the DDS chip to complete data transfer on the command and register bus. The relationship between the DDS output frequency f0, the reference clock fr, the phase accumulator length N, and the frequency control word FSW is:

    f0 = fr * FSW / 2^N

The DDS frequency resolution is:

    Δf0 = fr / 2^N

Because the maximum DDS output frequency is limited by the Nyquist sampling theorem:

    fmax = fr / 2

Current DDS products include Qualcomm's Q2334 and Q2368 and AD's AD7008, AD9850, AD9851, etc. This development board uses AD's AD9851. Meanwhile, the SCM needs to perform the data type conversion.

Fig. 8. Basic circuit of the microcontroller
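Given the formulas above and the AD9851's 32-bit phase accumulator, the SCM's "data type conversion" boils down to computing the frequency control word; a sketch follows. The 180 MHz reference in the example assumes the AD9851's 6x clock multiplier on a 30 MHz oscillator, which the paper does not state.

```python
def dds_fsw(f_out, f_ref, n_bits=32):
    # FSW = f_out * 2^N / f_ref, rounded to the nearest integer word
    return round(f_out * (1 << n_bits) / f_ref)

def dds_resolution(f_ref, n_bits=32):
    # delta_f0 = f_ref / 2^N
    return f_ref / (1 << n_bits)
```

For a 1 MHz output from a 180 MHz reference, `dds_fsw(1e6, 180e6)` gives the 32-bit word to load into the AD9851, and `dds_resolution(180e6)` gives a step size of about 0.042 Hz.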



3.5 Reserved Interface with DSP

Fig. 9. Interface with DSP

4 Triangular Wave Generator Design in the Verilog HDL Language

4.1 Keil C Compiler Environment

Cx51 is an optimizing C compiler, with extensions, for the traditional 8051 microprocessor, together with its library references.

Fig. 10. Keil C interface

4.2 Build Successful Window

Fig. 11. Build successful window



5 The Test of Signal Generator


The serial debugging assistant is an indispensable tool when studying serial communication between a PC and an SCM. Since the virtual instrument development board uses serial communication, the serial debugging assistant was applied to test the development board.
Figure 12 shows the tested signal generator waveform.

Fig. 12. Signal generator waveform

References
[1] IEEE 1451.2: A Smart Transducer Interface for Sensors and Actuators - Transducer to Microprocessor Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats. IEEE Standards Department
[2] Andrade, H.A., Kovner, S.: Software synthesis from dataflow models for G and LabVIEW. In: Proceedings of the 32nd Asilomar Conference on Signals, Systems and Computers, vol. 2. IEEE, Pacific Grove (1998)
[3] Goldberg, H.: What is Virtual Instrumentation? IEEE Instrumentation and Measurement Magazine 3(4) (2000)
Substantial Development Strategy of Land
Resource in Zhangjiakou

Yuqiang Sun1, Shengchen Wang2, and Yanna Zhao3


1
Hebei Province Soils and Fertilizers General Station,
Shijiazhuang, China
2
Hebei Province Yuanshi county Agrotechnical Centre, China
3
Shijiazhuang University of Economics, Shijiazhuang, China
syq0401@yahoo.com.cn

Abstract. Land resource is the foundation of human beings' survival and development, as well as a kind of nonrenewable resource, so the substantial development of land resources is very important. This paper analyses the problems of land resource development in Zhangjiakou, and then puts forward some proposals such as classifying the land resource and improving land use efficiency.

Keywords: land resource, substantial development, strategy.

1 Introduction
Marx said that land is the first resource of human beings, the mother of wealth, and the origin of all production and wealth. Indeed, land resource is the foundation and origin of human life and development, and it is non-renewable. However, the per capita land resource of the Chinese people is low, and the growing contradiction between people and land makes people realize the importance and necessity of the substantial development of land resources. Hence, cherishing every inch of land, developing, using, and managing land resources reasonably and scientifically, and achieving the substantial development of land resources is an important task we are now faced with.

2 Main Problems during the Substantial Development of Land Resources in Zhangjiakou

Zhangjiakou lies in the northwest of Hebei Province; it neighbors the capital Beijing in the east, Datong in the west, NC in the south, and Inner Mongolia in the north. The whole city is divided into the two sectors of dam-top and dam-bottom, with 7 districts and 13 counties, a total area of 37,000 km², and a population of 4,600,000. Among these sections, the 4 counties on the dam-top lie toward Inner Mongolia, with an area of 13,800 km² and a population of 1,060,000. There is plow land of 8,630,000 hm², of which only 2,420,000 hm², 28% of the total, is irrigated. Zhangjiakou has a typical continental arid and semi-arid climate, with much sand and strong wind. So the following problems appear during development:

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 239-243, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2.1 Little Land and Worse Environment

Little of the land in Zhangjiakou is covered with plants or well supplied with water, which leads to bare mountains and sandy earth and hinders substantial development. In detail, the problems are the following:
(1) Soil erosion. 63.8% of the basin is eroded: 9.6% with strong erosion (over 10,000 t/(km²·a)), 15.6% with serious erosion (5,000-10,000 t/(km²·a)), 52.4% with medium erosion (500-5,000 t/(km²·a)), and 22.4% with light erosion (under 500 t/(km²·a)). Besides, about 30,000 hm² of topsoil (0-10 cm) is eroded per year.
(2) Lack of water. As 86% of the plow land lacks water, some agricultural activities cannot be carried out as usual.
(3) Shortage of nutrients. Rapidly-available phosphorus in 81% of the plow land is less than 5 mg/kg, total nitrogen in 46% of the land is less than 0.075%, organic matter in 27% of the land is less than 1%, and rapidly-available potassium in 44% of the land is less than 100 mg/kg. As for trace elements: boron in 24% of the land is less than 0.5 mg/kg, molybdenum in 82% is less than 0.15 mg/kg, iron in 51% is less than 4.5 mg/kg, manganese in 81% is less than 5 mg/kg, and zinc in 79% is less than 0.5 mg/kg.
(4) Thin earth. 2.5% of the earth is thinner than 30 cm, which is not good for forests to grow.
(5) More salinity-alkalinity. 7.4% of the land here is salty land.
(6) Poorly productive soil. 22.6% of the soil has a calcium deposition layer, 92% of it within 50 cm of the topsoil; besides, 4% of the soil is red soil, 4.45% of the soil has a sand-and-gravel layer, 1.44% of the topsoil is hardened, and 21% of the topsoil contains more than 10% gravel.

2.2 Unreasonable Land Resource Development

The economy in Zhangjiakou develops rather slowly; it belongs to the poor part of Hebei Province, with poor agriculture. Because of the harsh natural environment, system output far exceeds system input, which is really not balanced. Then, because of the strong wind, abundant sand, and over-ploughed land, 1,200,000 hm² of the land has become desertified, which is 33% of the city's total land and 44.1% of the province's desertified land. Among these sections, the desertification in Kangbao, Zhangbei, Guyuan, and Shangyi is the most serious. According to research in Kangbao, the thickness of the eroded soil has declined by 5-10 cm, and this has become the prominent factor restricting local development. Therefore, after the primitive, simple, and weak balance is broken and before a new one is established, the eco-system loses its self-sustaining ability and will inevitably give rise to a set of vicious cycles.

3 Strategy

3.1 Classify the Land Resource

The first-class land lies in the valley basins along the river bottoms and the irrigation-silted soil by the lakesides, with brown soil and cinnamon soil; it should be the material foundation for solving the grain problem. The third-class land lies in the meadow soil on the hills, with brown calcium soil, chestnut soil, and meadow soil; forage should be planted there to enlarge the grazing capacity and fertilize the soil, which helps obtain much output from little input. The second-class land is the link between the above two sorts. After the reorganization, grazing will be replaced by the barn feeding of animal husbandry, for barn feeding can not only enhance the utilization ratio of forage grass, but also restore soil fertility, decrease harm, and quicken the growth of the various forests; therefore a new, scientific eco-balance will be on the way.

3.2 Extend the Use of Dry Farming

As we know, water is the first problem to solve in Zhangjiakou. There are two ways: opening sources and throttling. The former means developing and using water resources: developing irrigation farming and water conservancy projects, digging wells, and building reservoirs. All these works are helpful on the whole, but at the same time we should face the facts: there is little surface water, heavy water loss, and the groundwater is poor, deeply buried, and hard to extract. Therefore, solving the water problem in a short period is beyond the city's economic ability. Throttling means making full use of the existing water and rain, raising the utilization ratio, and developing dry farming. So developing dry farming suits the natural conditions of Zhangjiakou and is a necessary choice.
At first, basic construction should be carried out to enhance agricultural production. For instance, basic dry farms should be built in the dam-top area; in the dam-bottom hill area, water should be rechanneled and collected; and in the dam-bottom deep-water area, the low-production farms should be transformed. Then we should fertilize the soil and plant crops; spreading farm manure and fertilizing the soil are the basic points, with more than 45 m³ of farm manure and 450 kg of phosphate per acre of land, and advanced fertilizing technology popularized. Meanwhile, building some shelter forests of caragana korshinskii or medlar is necessary. At last, we should reorganize the dry-land planting layout and popularize drought-resisting and drought-enduring varieties. For example, in the dam-top and dam-bottom areas, wheat and naked oats should be controlled, and potatoes and beans enhanced; in the dam-bottom hill area, grains and potatoes should be the main crops.

3.3 Adjust Measures to Local Conditions and Manage Comprehensively

The ecological imbalance of the dam-top area is the most serious in Hebei. Drought,
wind, sand and salinization are so frequent that many fields give only medium or low
yields. Hence, we should maintain the number and quality of the basic farmlands, improve the
low-yield and sandy soils, prevent desertification and degradation of the soil,
develop water-saving agriculture, choose land and crops according to the water available,
and not open up virgin soil aimlessly. In addition, the sandstorms should be managed

in desert areas, and grass should be planted in the agriculture-husbandry transition areas. In
detail:
Firstly, in the northern dam-top plateau, two sand-prevention systems should be built.
On the weak side, the sandy soil within the shelter nets can be improved by
returning plowland to forest and grass; on the strong side, we
can cover the soil with plants and enlarge the coverage rate, so as to build a new
ecological balance and restore the grassland scenery.
Secondly, the main problems of the southeast dam-bottom area are
serious water erosion, a thin soil layer and a shortage of nutrients, though it also has many
advantages. Hence, controlling soil erosion, protecting the soil and fertilizing it
are the principal aims.
Thirdly, the southern dam-bottom area suffers many disadvantages:
sparse vegetation, large stretches of bare loess, serious soil erosion, thin hill soils, dry soil and
little nutrient. It is nevertheless suitable for agriculture; therefore, controlling soil
erosion, protecting the soil, conserving water and fertilizing the soil are the principal aims.
When the hills are used, the coverage rate should be greatly enlarged, and
grain-producing bases with crops, melons and fruits should be built.

3.4 Increase Technology Input and Improve Land-Use Efficiency and Benefit

First and foremost, relying on dry-farming research programs, we should remove
the obstacles to low yields and form a stable comprehensive production capability. At
the same time, drawing on the experience of managing wind, sand and soil, we should master
the key technologies (such as water-saving irrigation on dry farmland and
improved planting techniques) and the matching technologies, and make the best
of them. In addition, we should increase the capital and material input to ease the
contradiction between input and output.
Secondly, we should develop precision farming, protect the basic farmland, practice
multiple cropping and develop land-saving agriculture; popularize water-saving
irrigation technology and develop water-saving agriculture; and make the best of the land
by developing courtyard and cubic economies. What is more, we should introduce
advanced agricultural technologies and set up scientific farming patterns.
Finally, we can enhance land-use efficiency and benefit by improving the
deployment and level of technical personnel and by fostering model technology
households.

4 Conclusion

This article analyzed the causes of the unreasonable development of land resources in
Zhangjiakou and set out some detailed strategies, such as optimal land use,
improved management and greater high-technology input. The author hopes
that these measures can contribute to the sustainable development of land
resources in Zhangjiakou.

References
1. Sun, G., Wang, S.: Scientific and Technological Progress Promoting the Sustainable
Development of Land Resources of TaiHang Mountainous Areas in Hebei Province.
Chinese Agricultural Science Bulletin 18, 504–506 (2009)
2. Li, J.: Issues and Some Strategies of Sustainable Development of Black Soil Resources in
Black Soil Zone in Northeast China. Chinese Agricultural Science Bulletin 4, 32–36 (2005)
3. Lin, D.: Continuous development and utilization of soil resources. Journal of Science of
Teachers College and University 3, 75–78 (2002)
4. Wang, L.-b., Zhang, R.-p.: Mortuary Management and Sustainable Development of Land
Resources. Ecological Economy 7, 43–46 (2008)
Computational Classification of Cloud Forests in
Thailand Using Statistical Behaviors of Weather Data

Peerasak Sangarun, Wittaya Pheera, Krisanadej Jaroensutasinee, and Mullica Jaroensutasinee

Centre of Excellence for Ecoinformatics, School of Science,
Walailak University, Nakhon Si Thammarat, 80160, Thailand
{ppsangarun,wittayapheera,krisanadej,jmullica}@gmail.com

Abstract. This study attempted to characterize the atmosphere of cloud
forests quantitatively in order to classify their typical features. Nine
weather stations were installed at sites of three types: (1) four cloud forest
sites (Duan Hok, Dadfa, Mt. Nom and Doi Inthanon stations), (2) two lowland
rainforest sites (Huilek and Mt. Nan Headquarters stations) and (3) three coastal
sites (Walailak University, Khanom and Nakhon Si Thammarat stations). The
atmospheric data consisted of temperature, percentage relative humidity and
solar radiation collected from July 2006 to April 2010. Bimodal and power-law
distributions were fitted to the data to extract parameters characterizing each
site, and the sites were then classified by cluster analysis. The results showed
that the study sites fall into two groups: (1) cloud forests and (2) lowland
rainforests and coastal areas.

Keywords: cloud forest, atmospheric data, weather station, bimodal distribution, power law distribution.

1 Introduction

Tropical Montane Cloud Forests (TMCFs) occur in a mountainous altitudinal band
frequently enveloped by orographic clouds [3]. These forests obtain extra moisture
from deposited fog water in addition to bulk precipitation [4], [6]. The main
climatic characteristics of cloud forests include frequent cloud presence, usually high
relative humidity and low irradiance [4]. TMCFs are normally found at altitudes between
1,500 and 3,300 m a.s.l., occupying an altitudinal belt of approximately 800 to 1,000 m
at each site [7]. The lowermost occurrence of low-statured cloud forest (300–600 m
a.s.l.) is reported from specific locations such as small islands, where the cloud base may
be very low and the coastal slopes are exposed to both high rainfall and persistent
wind-driven clouds [1]. On small tropical islands, TMCFs can thus be found at lower
altitudes, obtaining extra moisture from deposited fog water in addition to
precipitation [1], [8], [9]. All tropical forests are under threat, but cloud forests are

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 244–250, 2011.
© Springer-Verlag Berlin Heidelberg 2011
uniquely threatened both by human activity and by climate change, which affects temperature,
rainfall and the formation of clouds in mountain areas [2].
Grace and Curran [5] used a bimodal distribution to model the frequency distribution
of daily maximum temperature at southern Australian coastal sites. Their model
could assess the impact of changes in local atmospheric circulation patterns. The present
study attempted to quantify weather variables from automatic weather stations, i.e.
temperature, percentage relative humidity and solar radiation, using bimodal
and power-law distributions, in order to learn more about their
seasonality. The study investigated atmospheric data from nine sites (Fig.
1a and 1b).
The aim was to use weather data monitored by automatic weather
stations at cloud forest sites and at other sites to study and analyze their
atmospheric characteristics. Furthermore, these data may help predict
climate change and global warming in Thailand in the future.

2 Materials and Method


Mt. Nan National Park is situated in Noppitam sub-district, Thasala district and
Sichon district in Nakhon Si Thammarat province, Thailand (Fig. 1a and 1b). Mt. Nan
is part of the Nakhon Si Thammarat mountain range, which lies in a north–south direction
and covers an area of 406 km². Ninety percent of Mt. Nan is primary tropical evergreen forest
and an important watershed of Nakhon Si Thammarat.
There were nine study sites: three coastal sites (Khanom, Walailak University and
Nakhon Si Thammarat), two tropical rainforest sites (Mt. Nan Headquarters and
Huilek) and four cloud forest sites (Dadfa, Duan Hok, Mt. Nom and Doi Intanon)
(Fig. 1a and 1b). The Khanom (KHN), Walailak University (WLU) and Nakhon Si
Thammarat (NST) sites are coastal, at an elevation of 8 m a.s.l. The Mt. Nan
Headquarters (NHQ) and Huilek (HUL) sites are tropical montane rainforests
located within Mt. Nan National Park at elevations of 182 and 234 m a.s.l.,
respectively. The Dadfa (DFC) site is a lower montane cloud forest located in
Khanom district, Nakhon Si Thammarat province, near the coast at an elevation
of 680 m a.s.l. The Mt. Nom (MNC) cloud forest, with its highest peak at 1,270 m a.s.l.,
and the Duan Hok (DHC) cloud forest, with its highest peak at 1,053 m a.s.l., are located
in Mt. Nan National Park. The Doi Intanon (DIC) cloud forest is located in Jomtong
district, Maejam district, Maewang district and Doilo sub-district, Changmai
province. Doi Intanon is a high mountain range running north–south,
part of the Thanonthongchai mountain range, with elevations
of 400–2,565 m a.s.l.

2.1 Data Collection

We installed an automatic weather station (Davis Vantage Pro II
Plus) at each of the nine study sites (Fig. 1a and 1b). The installation dates differed between
sites as follows: (1) KHN (September 2007), (2) WLU (August 2006),
(3) NST (June 2006), (4) NHQ (September 2007), (5) HUL (November 2006), (6)
DFC (January 2009), (7) DHC (March 2007), (8) MNC (January 2009) and (9) DIC
(March 2008). Data loggers recorded all weather data at an interval of 30 min.
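As an illustration of how such 30-min logger records can be aggregated before distribution fitting (the column names and synthetic values below are hypothetical, not the stations' actual data format):

```python
import numpy as np
import pandas as pd

# Build a small synthetic record mimicking a 30-min data logger:
# 96 half-hourly readings = exactly two days of observations.
idx = pd.date_range("2008-01-01", periods=96, freq="30min")
df = pd.DataFrame({
    "temperature": 25 + 3 * np.sin(np.linspace(0, 4 * np.pi, 96)),
    "humidity": 85 + 5 * np.cos(np.linspace(0, 4 * np.pi, 96)),
}, index=idx)

# Aggregate the half-hourly readings to daily means before analysis.
daily = df.resample("D").mean()
```

Resampling to daily statistics in this way yields the per-day values whose relative-frequency histograms are then fitted as described in Sect. 2.2.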

Fig. 1. (a) The map of Thailand with the Doi Intanon (DIC) study site (9), and (b) Mt. Nan National
Park. The numbers represent (1) KHN, (2) WLU, (3) NST, (4) NHQ, (5) HUL, (6) DFC, (7)
DHC, and (8) MNC.

2.2 Data Analysis

In this study, the weather data (temperature, relative humidity and solar radiation)
at the nine study sites were used to fit bimodal and power-law distributions to the
relative-frequency histograms. For temperature, a bimodal distribution was fitted at
all sites except DIC, where three normal subpopulations were needed. For relative
humidity, (1) the bimodal distribution was used at the KHN, WLU, NST and NHQ
sites, and (2) at the remaining sites a power-law distribution was fitted to the first
subpopulation and normal distributions to the second and, where applicable, third
subpopulations. For solar radiation, only data recorded between 0600 and 1800 hours
were analyzed; a power-law distribution was fitted to the first subpopulation and
normal distributions to the second and third subpopulations at all sites.
The bimodal distribution (a mixture of two normal densities with mixing weight p,
means μ1, μ2 and standard deviations σ1, σ2) and the power-law distribution are given
by equations (1) and (2), respectively:

f(x) = p · exp(−(x − μ1)² / 2σ1²) / (σ1√(2π)) + (1 − p) · exp(−(x − μ2)² / 2σ2²) / (σ2√(2π))   (1)

f(x) = C · x^(−α)   (2)
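A minimal sketch of fitting the bimodal form of Eq. (1) to a relative-frequency histogram; the synthetic temperature sample and the starting guesses below are illustrative, not the stations' measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def bimodal(x, p, mu1, s1, mu2, s2):
    """Mixture of two normal densities: p*N(mu1, s1) + (1-p)*N(mu2, s2)."""
    g1 = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    g2 = np.exp(-0.5 * ((x - mu2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
    return p * g1 + (1 - p) * g2

rng = np.random.default_rng(0)
# Synthetic temperatures: a cool rainy-season mode and a warm summer mode.
temps = np.concatenate([rng.normal(19, 1.5, 4000), rng.normal(25, 2.0, 6000)])

# Fit the mixture to the empirical relative-frequency histogram.
freq, edges = np.histogram(temps, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(bimodal, centers, freq, p0=[0.5, 18.0, 1.0, 26.0, 2.0])
p, mu1, s1, mu2, s2 = popt
```

The fitted subpopulation means mu1 and mu2 play the role of the seasonal temperature peaks discussed in Sects. 3 and 4.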

3 Results

The temperature at all sites was bimodally distributed, except at the DIC site
(Fig. 2a-2i), which showed a multimodal distribution with three subpopulations.
Cloud forest sites had lower subpopulation means (μ1 and μ2) than the other sites (Fig. 2a-2i).
The relative humidity at the KHN, WLU, NST and NHQ sites was fitted
by the bimodal distribution (Fig. 3a-3d). The HUL, DFC, DHC and MNC sites were
fitted by the power-law and normal distributions (Fig. 3e-3h). The DIC site
differed from the other sites in being fitted by the power-law and bimodal distributions
(Fig. 3i).

Fig. 2. Temperature distributions with fitted bimodal curves at the nine study sites:
(a-c) coastal sites, (d-e) lowland rainforests and (f-i) cloud forests.

Solar radiation at the KHN, WLU, NST, NHQ and MNC sites was fitted by the
power-law and bimodal distributions (Fig. 4a-4d and 4h). The HUL, DHC and DIC sites were
fitted by the power-law and normal distributions (Fig. 4e, 4g and 4i). The DFC site was fitted
by the power-law distribution alone (Fig. 4f).
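A corresponding sketch for the power-law fit of Eq. (2) to such a histogram; the synthetic radiation sample and the binning choices are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, c, alpha):
    """Eq. (2): relative frequency decaying as a power of the variable."""
    return c * x ** (-alpha)

rng = np.random.default_rng(1)
# Synthetic radiation values with a heavy power-law tail (exponent ~2.5).
radiation = (rng.pareto(1.5, 5000) + 1.0) * 20.0  # arbitrary units

# Histogram on log-spaced bins, then fit c * x**(-alpha) to the densities.
edges = np.logspace(np.log10(20.0), np.log10(1000.0), 30)
freq, _ = np.histogram(radiation, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
mask = freq > 0
popt, _ = curve_fit(power_law, centers[mask], freq[mask], p0=[1.0, 2.0])
c, alpha = popt
```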


Fig. 3. Relative humidity distributions with fitted curves at the nine study sites:
(a-c) coastal sites, (d-e) lowland rainforests and (f-i) cloud forests.

Fig. 4. Solar radiation distributions with fitted curves at the nine study sites:
(a-c) coastal sites, (d-e) lowland rainforests and (f-i) cloud forests.

Cluster analysis of the temperature, relative humidity and solar radiation data
separated the sites into two distinct groups: (1) the DFC, DHC, DIC, HUL and MNC
sites, and (2) the NHQ, NST, KHN and WLU sites (Fig. 5).

Fig. 5. Cluster analysis of temperature, relative humidity and solar radiation data
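The two-group split in Fig. 5 can be reproduced in outline with hierarchical (Ward) clustering; the per-station feature values used here are illustrative stand-ins for the fitted statistics, not the measured values:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

stations = ["DFC", "DHC", "DIC", "HUL", "MNC", "NHQ", "NST", "KHN", "WLU"]
# Rows: [temperature mu1 (deg C), mean humidity (%), mean radiation] -- invented.
features = np.array([
    [21.1, 95.0, 120.0],  # DFC
    [19.2, 96.0, 110.0],  # DHC
    [11.8, 94.0, 115.0],  # DIC
    [22.5, 93.0, 130.0],  # HUL
    [17.8, 95.0, 125.0],  # MNC
    [26.5, 80.0, 300.0],  # NHQ
    [27.0, 78.0, 310.0],  # NST
    [26.8, 79.0, 305.0],  # KHN
    [27.2, 77.0, 315.0],  # WLU
])

# Standardize so each variable contributes equally to the distances.
z = (features - features.mean(axis=0)) / features.std(axis=0)
tree = linkage(z, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
groups = {s: int(l) for s, l in zip(stations, labels)}
```

Cutting the Ward tree at two clusters recovers the cloud-forest/lowland group versus the coastal group when the features differ as strongly as the fitted statistics do here.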

4 Discussion
The bimodal model of the temperature distribution comprises two subpopulations,
corresponding to the mean temperatures of the rainy season and of summer. The DIC
site had three subpopulations, the coolest corresponding to the mean winter temperature.
The solar radiation data likewise separated the study sites into two groups: (1) the
DFC, DHC, DIC and HUL sites, and (2) the MNC, NHQ, NST, KHN and WLU sites
(Fig. 5).
This study showed that MNC, DHC and DFC had lower subpopulation means than the
other sites, which indicates that the peak of the first subpopulation can be used as an
indicator of cloud forest near the equator. It cannot, however, be applied to a
higher-latitude cloud forest like DIC, where the winter (November to February)
temperature is lower than in the other months: the first temperature peak was 6.97 °C,
the second, 11.75 °C, was the mean temperature of the rainy season, and the third,
14.95 °C, was the mean temperature of summer. DFC had a slightly higher temperature
than DHC and MNC (μ1 = 21.12, 19.22 and 17.80 °C, respectively), probably because
DFC is located near the coast at low elevation (about 700 m a.s.l.). These results
support Bruijnzeel's finding that TMCFs can occur at lower altitudes [1], [8], [9].
Cluster analysis classified the atmospheric characteristics of the nine study sites
into two groups: the cloud forest sites (MNC, DFC, DHC and DIC) together with HUL,
and the coastal and lowland sites (NHQ, NST, KHN and WLU). The HUL site behaved
differently from the other lowland sites, probably because of its specific location:
HUL lies in a valley with frequent cloud presence and a stream, and the canopies of
its tall trees are covered with fog in the morning. The MNC study site, in contrast,
sits on a hilltop exposed to solar radiation and lacks canopy cover.
The bimodal distribution of solar radiation yielded the means of the first and second
subpopulations; NHQ, NST, KHN and WLU had higher means than the other sites.

5 Conclusion
The bimodal distribution of temperature can be used to separate forest types. For the
nine study sites, the means and variances of the fitted bimodal distributions group
them into two types: (1) four cloud forest sites (DHC, DFC, MNC and DIC stations) and (2) two
lowland rainforests (HUL and NHQ stations) and three coastal sites (WLU, KHN and
NST stations).
In this study, the relative humidity distributions of the coastal sites (WLU, KHN
and NST) and of NHQ were fitted by the bimodal distribution, while those of the
remaining five sites were fitted by the power-law distribution combined with normal
or bimodal distributions.
The temperature, relative humidity and solar radiation data can be used to
analyze weather variation and distribution. Furthermore, these data could support a
better understanding of weather fluctuations, which tend to change with the warming
global climate.

Acknowledgments. This work was supported in part by PTT Public Company
Limited, the TRF/Biotec special program for Biodiversity Research Training (grants BRT
R351151, BRT T351004 and BRT T351005), Walailak University Fund 05/2552 and
07/2552, WU50602, and the Center of Excellence for Ecoinformatics, the Institute of
Research and Development, Walailak University and NECTEC. We thank the Mt. Nan
National Park staff for their invaluable assistance in the field.

References
1. Bruijnzeel, L.A., Proctor, J.: Hydrology and biogeochemistry of tropical montane cloud forest:
what do we really know? In: Hamilton, L.S., Juvik, J.O., Scatena, F.N. (eds.) Tropical
Montane Cloud Forests, pp. 38–78. Springer, New York (1995)
2. Bubb, P., May, I., Miles, L., Sayer, J.: Cloud Forest Agenda. UNEP-WCMC, Cambridge
(2004)
3. Foster, P.: The potential negative impacts of global climate change on tropical montane
cloud forests. Earth-Science Reviews 55, 73–106 (2001)
4. González-Mancebo, J.M., Romaguera, F., Losada-Lima, A., Suárez, A.: Epiphytic
bryophytes growing on Laurus azorica (Seub.) Franco in three laurel forest areas in Tenerife
(Canary Islands). Acta Oecologica 25, 159–167 (2004)
5. Grace, W., Curran, E.: A binormal model of frequency distributions of daily maximum
temperature. Australian Meteorological Magazine 42, 151–161 (1993)
6. Hamilton, L.S., Juvik, J.O., Scatena, F.N.: The Puerto Rico Tropical Cloud Forest
Symposium: Introduction and Workshop Synthesis. In: Hamilton, L.S., Juvik, J.O., Scatena,
F.N. (eds.) Tropical Montane Cloud Forests, pp. 1–23. Springer, New York (1995)
7. Stadtmüller, T.: Cloud Forests in the Humid Tropics: A Bibliographic Review. The
United Nations University, Tokyo (1987)
8. Still, C.J., Foster, P.N., Schneider, S.H.: Simulating the effects of climate change on tropical
montane cloud forests. Nature 398, 608–610 (1999)
9. Weathers, K.C.: The importance of cloud and fog in the maintenance of ecosystems. Trends
in Ecology and Evolution 14, 214–215 (1999)
Research on Establishing the Early-Warning Index
System of Energy Security in China

Yanna Zhao¹, Min Zhang², and Yuqiang Sun³

¹ School of Management Science & Engineering, Shijiazhuang University of Economics, Shijiazhuang, China
² China Geological Survey, Beijing, China
³ Hebei Province Soils and Fertilizers General Station, Shijiazhuang, China
zhyn1023@yahoo.com.cn

Abstract. Energy security is an important factor affecting the development of
China's economy, and China now faces great challenges in this area. This paper
first defines energy security in terms of three aspects, then analyzes the factors
that affect energy security in China, and finally establishes a scientific
early-warning index system of energy security as a reference for the
macro-management of energy security in China.

Keywords: energy, security, early-warning, index system.

1 Introduction
Energy is the essential material base on which people rely for survival and
production, and it is the artery of the economy. China's minable oil reserves are only 3% of the
world total. In recent years, China's energy security has faced
unprecedented threats. Take oil for example: our oil consumption is second only to
that of America, so the gap between poor self-sufficiency and growing demand has to
be closed by imports. According to surveys, the import share rose from 7.6% in 1995
to more than 50% in 2008, which undoubtedly increases the uncertainty of oil supply.
On the other hand, the crude oil price fluctuates sharply on the international market:
it passed 100 dollars per barrel on February 2nd, 2008 and later reached 147.24
dollars per barrel, raising the cost of our oil trade. Such large fluctuations spread
through the national economy, affecting people's normal life and even posing a threat
to national safety. Energy security has therefore become a sensitive issue.

2 The Basic Concept of Energy Security


With the development of the social economy, people's awareness of energy security
has passed through roughly three periods.
In the 1950s, with technological advancement and global industrialization,
energy consumption increased and great changes took place in the

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 251–256, 2011.
© Springer-Verlag Berlin Heidelberg 2011
consumption structure: the coal-centered system gave way to an oil-centered one, and
since oil deposits are concentrated in a few regions, people began to worry about the
assurance of oil supply. At that time, energy security meant having sufficient
energy reserves, production capacity and smooth selling channels.
In the 1970s, around the time of the fourth Middle East War, the oil-exporting countries
raised the crude oil price, and the most serious global economic crisis since the Second
World War swept the world. Only then did people realize that fluctuations
in energy prices could disrupt national economies, and the concept of energy
security was extended to cover both sufficient supply and stable prices.
With the growth of global energy consumption, a series of problems such as global
warming and air pollution emerged, and people began to realize that
energy use should not threaten human survival and the environment. The safety of
energy use was therefore added to the earlier concept of energy
security.
The concept of energy security thus includes three levels: sufficient
supply, meaning that the energy supply can meet the needs of a country or region;
stable prices, meaning that price fluctuations do not exceed what the country
or region can bear; and safe use, meaning that energy use does not threaten the
environment on which human beings depend. Among these three levels, the first is
the basis and also the most closely watched.

3 The Analysis of the Factors Which Influence Energy Security in China

In the 21st century, with China's industrialization and modernization and its evolving
energy consumption structure, our energy security faces its most serious challenge ever.
On the whole, the factors that influence our energy security fall into four groups:
supply, demand, price and safety of use.

3.1 The Analysis on Energy Supply

1) Energy reserve shortages lead to poor supply-assurance ability

China is a developing country with a large population, short of per-capita energy,
and its energy structure and reserves are unbalanced. According to estimates, our
coal resources are comparatively rich: proven reserves account for 33.8% of the total,
about 90 billion tons remain, and at the present extraction rate they would last about 100
years, so the coal supply is well assured. Our oil resources, in contrast, are scarce:
about 2.3 billion tons remain, the per-capita share is only 8.1%, and the remainder can
be extracted for only about 14 years. Since oil is tied to politics, the military,
daily life and the economy, this poor oil-assurance ability undermines national energy
security.
2) Low technological level reduces the efficiency of energy production
The data above show that China has little oil and a low utilization rate,
which worsens the oil situation. Our extraction technology is poor, relying
mainly on water injection, which achieves a recovery rate of only about
30%, meaning that the other 70% is left underground, whereas with advanced
technology the rate reaches 50% to 60%. Oil productivity in the developed
countries is thus one to two times higher than in our country.
3) Highly concentrated trade routes aggravate the energy supply risk

Since entering the 21st century, our oil imports have grown fast with the economy:
domestic production cannot meet the increasing consumption,
so we rely mainly on imports. In 2007 total imports were 211.394 million
tons, 62% of total consumption, and since 2000 over 50% of our oil imports
have come from the Middle East, with the share rising year by year. Our oil
thus depends heavily on the outside, and the import sources are concentrated in the
Middle East, the most unstable region socially, politically and militarily,
precisely because of its oil. As oil is non-renewable, international conflicts over
oil will not stop unless a new fuel replaces it.

3.2 The Analysis on Energy Demand

As the largest developing country in the world, China has seen its economy develop
quickly in recent years; GDP growth in the 1990s averaged about 9%. But this growth
clearly depends on extensive consumption: energy-intensive industry is the main
strength of our economy and is still expanding. Although technology has improved,
consumption remains too high, and low technological levels mean low productivity
and high energy use. Overall, our energy utilization rate is only about 33%, some 10
percentage points lower than in the developed countries. Our output of
energy-intensive products ranks first in the world: steel output is 502 million tons
and cement 1.388 billion tons, both the world's largest. The unit energy consumption
of our eight major industries is 40% above international levels. The extensive
consumption pattern and the low utilization rate thus drive up demand and widen the
supply gap.

3.3 The Analysis on Energy Price

Oil is not only a key energy source for the national economy but also an important
raw material for the chemical industry, so it is called the food of industry, black
gold and the blood of the economy. Wolfensohn once estimated that if the oil price
stays 10 dollars per barrel higher for one year, world economic growth declines by
0.5%, and developing countries' growth by 0.75%, so the impact of oil price
fluctuations on them is greater. In 2008, fluctuations in the international oil price
confronted our country with serious inflation. Once the oil price rises, a chain of
reactions follows: costs increase, leading to inflation and recession in the national
economy. On the other hand, a price that is too low damages the interests of the oil
industry, which then limits or stops production, creating an oil crisis; once supply
stops, national economic development suffers as well. Whether too high or too low,
the oil price therefore poses a serious threat to our national economic development.

3.4 The Analysis on the Use Safety of Energy

In the mid-1980s, with global warming and declining air quality, environmental
problems again drew attention. Our energy consumption structure is unreasonable:
although many experts call for reducing the share of coal, coal still accounts for
about 70% of total energy consumption. Coal is not a clean energy source; raw coal
in particular burns incompletely, releasing large amounts of CO2 and SO2 and
degrading the environment. SO2 is the main cause of acid rain and CO2 the main
driver of global warming, and since 80% of our coal is burned directly, the pollution
is all the worse. This not only threatens our present environment and international
image but also burdens our descendants and damages our long-term interests.

4 Establishing the Early-Warning Index System of Energy Security in China

Based on the three levels of energy security above, this article builds an
early-warning index system, as shown in the following table:

Table 1. Early-warning index system of energy security in China

Level 1: Degree of energy security
Level 2:
1) Oil reserve-production ratio
2) Oil recovery ratio
3) Concentration ratio of oil imports
4) Dependency of oil consumption on import
5) Energy consumption per GDP
6) Energy consumption elastic index
7) Rate of change in oil price
8) Reducing discharge rate of CO2
9) Reducing discharge rate of SO2

1) Oil reserve-production ratio

Oil reserve-production ratio = remaining oil reserves / oil produced this year

It gives the life span of the remaining oil at the present production scale and
measures the reserve-assurance ability: the higher it is, the stronger the
assurance ability and the safer the energy supply.
2) Oil recovery ratio
It measures the extraction rate of oil and is the ratio of the total oil produced
within the period to the initial oil in place. The higher it is, the higher the recovery
rate; other things being equal, a higher ratio strengthens oil self-sufficiency and
energy security.

3) Concentration ratio of oil imports

Concentration ratio of oil imports = net imports from the three largest supplier
countries / total net imports
It measures the concentration of oil import sources. The higher it is, the more
concentrated the sources; if an emergency blocks an import channel, the risk is
large and energy security suffers.
4) Dependency of oil consumption on import
Dependency of oil consumption on import = import volume / national
consumption
It shows how heavily national consumption depends on foreign supply. The higher it
is, the larger the supply uncertainty and the lower the energy security, and vice
versa.
5) Energy consumption per GDP
Energy consumption per GDP = total energy consumption / GDP (per 10,000 yuan)
It gives the energy consumed in producing 10,000 yuan of GDP within a period
and reflects the economic output per unit of energy: the lower it is, the more
output per unit of energy, the higher the energy efficiency and the safer the
energy supply.
6) Energy consumption elastic index
Energy consumption elastic index = average growth rate of energy
consumption / average growth rate of GDP
It shows the relationship between energy consumption and national economic
development, revealing the constraints between energy development and
socio-economic development, their trends, and the efficiency of energy use at the
same time. The lower it is, the higher the efficiency of energy use and the safer
the energy supply.
7) Rate of change in oil price
Rate of change in oil price = (current price - base price) / base price
It tracks oil price changes in the international market. When it is greater than 0,
the current price has risen against the base price, the cost of oil products and of
the related industries rises, and the effect is negative: the larger the number, the
lower the safety. When it is less than 0, the international oil price has fallen, and
the larger the decline, the higher the safety.
8) Reducing discharge rate of CO2
Reducing discharge rate of CO2 = (previous emissions - current emissions) / previous
emissions
9) Reducing discharge rate of SO2
Reducing discharge rate of SO2 = (previous emissions - current emissions) / previous
emissions
The reducing discharge rates of CO2 and SO2 measure the safety of energy use:
the higher they are, the better the harmful gases are controlled, the smaller the
environmental pollution and the higher the safety of energy use.
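The indices defined above can be collected into a small worked example; all numeric inputs are illustrative (the total-consumption figure in particular is hypothetical), using only the formulas given in the text:

```python
def reserve_production_ratio(remaining_reserves, annual_production):
    """Index 1: years of supply left at the current production rate."""
    return remaining_reserves / annual_production

def import_concentration(top3_net_imports, total_net_imports):
    """Index 3: share of net imports from the three largest suppliers."""
    return top3_net_imports / total_net_imports

def import_dependency(imports, national_consumption):
    """Index 4: share of national consumption met by imports."""
    return imports / national_consumption

def price_change_rate(current_price, base_price):
    """Index 7: relative change of the oil price against a base period."""
    return (current_price - base_price) / base_price

# Illustrative values echoing the text: 2.3 bn tons of reserves, a price move
# from 100 to 147.24 dollars per barrel, and 211.394 Mt imported out of a
# hypothetical 340 Mt consumed (the 0.19 bn t/yr production is also invented).
years_left = reserve_production_ratio(2.3, 0.19)
dependency = import_dependency(211.394, 340.0)
price_move = price_change_rate(147.24, 100.0)
```

Tracking these ratios over time, rather than their one-off values, is what turns the index system into an early-warning instrument.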

5 Conclusion
The problem of energy security has begun to hinder the further development of the
national economy and to threaten social stability. Based on an analysis of national
energy security, this article has built an early-warning index system for it; changes
in the indices can reflect problems in national energy security. The author hopes
that this early-warning index system can help strengthen risk management and
support the sustainable development of the national economy.

References
1. Kong, L.B., Hou, Y.B.: Study on the National Energy Safety Model. Metal Mine 11, 35
(2002)
2. He, Q.: Discussion and Strategy about the Energy Security of China. China Safety Science
Journal 6, 52–57 (2009)
3. Ma, W., Wang, Z., Huang, C.: Some Issues of Energy Security of China and Tentative
Solutions. Studies in International Technology and Economy 4, 7–11 (2001)
4. Gao, J.-l., Ou, X.-y.: Discussion on China's Low-carbon Economy under the Restriction of
Energy Security. Journal of Lanzhou Commercial College 2, 38–43 (2010)
5. Li, S.-x.: Study on the Energy Efficiency Strategy and the Improvement of National Energy
Security. Journal of China University of Geosciences (Social Sciences Edition) 3, 47–50
(2010)
Artificial Enzyme Construction with
Temperature Sensitivity

Tingting Lin1, Jun Lin1, Xin Huang2, and Junqiu Liu2


1
College of Instrumentation & Electrical Engineering,
Jilin University, Changchun, 130061, China
ttlin@jlu.edu.cn
2
State Key Laboratory of Supramolecular Structure and Materials,
Jilin University, Changchun, 130061, China

Abstract. Poly(N-isopropylacrylamide) (PNIPAAm) is one of the most popular
thermoresponsive polymers, showing dramatic and reversible phase-transition
behavior in water. Two strategies for the design of temperature-sensitive
enzyme models based on PNIPAAm derivatives are therefore presented: 1) A
temperature-sensitive block copolymer (PAAm-b-PNIPAAm-Te) with a glutathione
peroxidase-like active site was synthesized via ATRP. 2) An artificial bifunctional
enzyme with both superoxide dismutase (SOD) and glutathione peroxidase (GPx)
activities was constructed by the self-assembly of a porphyrin core bearing four
pendant adamantyl moieties with a β-cyclodextrin-terminated temperature-
sensitive copolymer (β-CD-PEG-b-PNIPAAm-Te) through host–guest interaction
in aqueous solution. With increasing temperature, the PNIPAAm chain becomes
hydrophobic, which changes the self-assembled structure of the polymer and
plays a key role in modulating the catalytic activity. Both new artificial
enzymes were found to exhibit their highest catalytic efficiency close to
physiological temperature.

Keywords: biomimetic, block copolymers, micelles, temperature sensor,


synthesis.

1 Introduction
In biological organisms, overproduction of reactive oxygen species (ROS), such as
superoxide anions, H2O2, organic peroxides, and hydroxyl radicals, can result in a
variety of human diseases, for example ischemia/reperfusion injury, atherosclerosis,
neurodegenerative diseases, cancer, and allergy [1]. The antioxidant enzymes
superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx)
contribute dominantly to cellular antioxidative defense against oxidative
stress in the human body [2]. SOD is a metalloenzyme that catalyzes the dismutation
of the superoxide radical anion to form H2O2 and dioxygen. H2O2 is then detoxified
either to H2O and O2 by catalase (CAT) or to H2O by glutathione peroxidase (GPx) [3].
However, only when an appropriate balance between the activities of these enzymes
is maintained can the optimal protection of cells be achieved [4]. Owing to their
biologically crucial role, considerable effort has been devoted to designing artificial

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 257–262, 2011.
© Springer-Verlag Berlin Heidelberg 2011

enzymes that mimic the properties of antioxidant enzymes in recent years.
Importantly, a desirable antioxidant enzyme model should be regulated by some
environmental stimulus, such as temperature, pH, ionic strength or magnetism, so
that the level of ROS can be controlled as needed.

Fig. 1. Schematic representation of the enzyme models: (A) the smart GPx enzyme model;
(B) preparation of the bifunctional enzyme model

In the last decade, stimuli-responsive polymers have attracted strong scientific and
technical interest due to their reversible properties in response to changes in
environmental factors such as pH or temperature [5]. Among these materials, the
thermally sensitive poly(N-isopropylacrylamide) (PNIPAAm) attracts particular
attention. PNIPAAm is a so-called smart material: it has a volume phase-transition
temperature, or lower critical solution temperature (LCST), at about 32 °C,
meaning that below the LCST the particles are swollen, while above this
temperature they are collapsed [6]. A detailed knowledge and control of the swelling
behavior will also lead to a better physical understanding of the mechanism of the
observed volume phase transition. This knowledge can then be used to design
complex materials. Due to the reversible phase transition, PNIPAAm-based microgels
can be used in many different application fields, for example, as functional materials
for drug delivery, optical filtering and controlled biomolecule recovery.
To construct antioxidant mimics with high catalytic efficiency, we focus on
PNIPAAm in the following aspects: 1) we designed and synthesized a smart GPx
enzyme model, the block copolymer PAAm-b-PNIPAAm-Te, by ATRP, in which the
GPx active site (a tellurium moiety) was incorporated into the PNIPAAm chain (the
designed functional monomers and the substrate structures are given in Fig. 1A). 2)
To further design a smart bifunctional antioxidant enzyme model with both SOD and

GPx activities that operates close to the body's physiological environment, we used a
temperature-sensitive block copolymer (β-CD-PEG-b-PNIPAAm-Te) which can self-
assemble with MnTPyP-M-Ad through host–guest complexation (Fig. 1B).

2 Result
2.1 Stimuli-Responsive Micellization
Fig. 2A shows the temperature dependence of the average hydrodynamic radius <Rh>
for PAAm-b-PNIPAAm-Te in aqueous solution. The critical micellization
temperature of PAAm-b-PNIPAAm-Te was found to be 34 °C, which is higher than
that of PNIPAAm homopolymers. This is because the PNIPAAm block is now
attached to a hydrophilic PAAm block. Furthermore, the actual morphology of the
micellar structure was observed by SEM (Fig. 3A).

Fig. 2. Temperature dependence of the hydrodynamic diameters of the PAAm-b-PNIPAAm-Te
block copolymer (A), determined using a Malvern Zetasizer Nano series instrument; from a to
e, the temperature was 15, 25, 33, 35 and 45 °C, respectively. The same dependence for
β-CD-PEG-b-PNIPAAm-Te (B) was also determined; from a to d, the temperature was 30, 35,
37 and 35 °C, respectively.


Fig. 3. SEM images of PAAm-b-PNIPAAm-Te (A) and β-CD-PEG-b-PNIPAAm-Te (B) on a
silica wafer, recorded on a JEOL FESEM 6700F scanning electron microscope with a primary
electron energy of 3 kV

Fig. 2B shows the temperature dependence of <Rh> for the pseudo-block
copolymer and β-CD-PEG-b-PNIPAAm-Te in aqueous solution. The morphology of
β-CD-PEG-b-PNIPAAm-Te and the star-shaped pseudo-block copolymer was then
characterized directly via SEM, and the average diameters were about 300 and
150 nm, respectively (Fig. 3B).

2.2 Catalytic Behavior

The GPx-like activity of the block copolymer catalyst for the reduction of cumene
hydroperoxide (CUOOH) by 3-carboxy-4-nitrobenzenethiol (TNB, 1) was evaluated
according to a modified method reported by Hilvert et al. [7]. The activities were
given (vide infra) assuming one molecular catalytic center (Te moiety) as one active
site of the enzyme. The tellurium content of the polymer was determined via UV
titration analysis. The reaction was initiated by the addition of hydroperoxide, and the
decrease of the absorbance at 410 nm (pH 7.0) was recorded for a few minutes to
calculate the reaction rate. The relative activities are summarized in Table 1.
Assuming that the rate has a first-order dependence on the concentration of catalyst,
these data suggest that the GPx activity of the block copolymer catalyst is at least
251,000-fold higher than that of PhSeSePh, and also about 7-fold higher than that of
the previously reported selenium-micelle enzyme model [8,9].
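The conversion from an absorbance decay to an initial rate can be sketched as below via the Beer–Lambert law (A = εcl); the extinction coefficient (ε ≈ 13,600 M⁻¹ cm⁻¹ for TNB) and the readings are assumed values for illustration, not data from this paper:

```python
# Hypothetical sketch: least-squares slope of A(t) at 410 nm, converted to a
# rate of thiol consumption. epsilon and the readings are assumed values.

def initial_rate(times_s, absorbances, epsilon=13600.0, path_cm=1.0):
    """Return -d[TNB]/dt in mol L^-1 s^-1 from a linear fit of A vs. t."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_a = sum(absorbances) / n
    slope = (sum((t - mean_t) * (a - mean_a) for t, a in zip(times_s, absorbances))
             / sum((t - mean_t) ** 2 for t in times_s))  # dA/dt, per second
    return -slope / (epsilon * path_cm)                  # Beer-Lambert: c = A/(e*l)

# Simulated readings: absorbance falling by 0.001 per second
rate = initial_rate([0, 30, 60, 90], [0.900, 0.870, 0.840, 0.810])
print(rate)
```

Relative activities such as the 251,000-fold comparison then follow from the ratio of such initial rates, each normalized per catalytic (Te) center.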

Table 1. The Initial Rates for Reduction of Hydroperoxides by the Thiols TNB and NBT in the
Presence of PAAm-b-PNIPAAm-Te

In addition to PAAm-b-PNIPAAm-Te, MnTPyP-M-Ad was assembled with β-CD-
PEG-b-PNIPAAm-Te through host–guest interaction. The SOD activity of the
enzyme model was quantified using the standard xanthine/xanthine oxidase assay
system first developed by McCord and Fridovich [10]. The concentration of the
catalyst which achieves 50% inhibition (IC50) of the reaction, a generally used
indicator for comparing the efficiencies of enzymes and enzyme mimics, was
determined; the model reaches 2.15% of the activity of the native SOD enzyme
(Table 2). The GPx activities were given assuming one molecular catalytic center
(Te moiety) in the enzyme model as one active site of the enzyme, and the tellurium
content of the model was determined by ICP. The relative activities are summarized
in Table 3.
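Reading off an IC50 from inhibition measurements can be sketched as a simple linear interpolation; the inhibition data below are invented for demonstration and are not the paper's measurements:

```python
# Illustrative sketch: interpolate the catalyst concentration giving 50%
# inhibition (IC50) from a sorted inhibition curve. Data are made up.

def ic50(concentrations, inhibitions):
    """Linear interpolation of the concentration at 50% inhibition."""
    points = list(zip(concentrations, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition is not bracketed by the data")

# Inhibition (%) measured at increasing catalyst concentrations (arbitrary units)
print(ic50([0.5, 1.0, 2.0, 4.0], [20.0, 35.0, 55.0, 80.0]))  # 1.75
```

The ratio of the native enzyme's IC50 to the mimic's IC50 then gives the relative activity figure quoted in the text (here, 2.15%).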

Table 2. SOD Activity of Star-shaped pseudo-block Copolymer Catalyst

Table 3. GPx Activity of Star-shaped pseudo-block Copolymer Catalyst

3 Conclusion
By preparing novel block copolymers via ATRP, we have designed and synthesized
two block copolymers (BCPs) which exhibit stable antioxidant catalytic efficiency
with a temperature-responsive characteristic. We anticipate that this study will open
up a new field of artificial enzyme design with environment-responsive functions,
and hope that such smart enzyme mimics can be developed into antioxidant
medicines whose catalytic efficiency is controlled according to the needs of the
human body in the future.

References
1. Mugesh, G., du Mont, W.W., Sies, H.: Chemistry of biologically important synthetic
organoselenium compounds. Chem. Rev. 101, 2125–2179 (2001)
2. Berlett, B.S., Stadtman, E.R.: Protein oxidation in aging, disease, and oxidative stress. J.
Biol. Chem. 272, 20313–20316 (1997)
3. Flohé, L., Loschen, G., Günzler, W.A., Eichele, E.: Glutathione peroxidase, V. The kinetic
mechanism. Hoppe-Seyler's Z. Physiol. Chem. 353, 987–999 (1972)
4. Spector, A.: Oxidative stress-induced cataract: mechanism of action. FASEB J. 9, 1173–
1182 (1995)

5. Stayton, P.S., Shimoboji, T., Long, C., Chilkoti, A., Chen, G., Harris, J.M., Hoffman, A.S.:
Control of protein-ligand recognition using a stimuli-responsive polymer. Nature 378,
472–474 (1995)
6. Karl, K., Alain, L., Wolfgang, E., Thomas, H.: Volume transition and structure of
triethyleneglycol dimethacrylate, ethylenglykol dimethacrylate, and N,N-methylene bis-
acrylamide cross-linked poly(N-isopropyl acrylamide) microgels: a small angle neutron
and dynamic light scattering study. Colloids Surf. 197, 55–67 (2002)
7. Wu, Z.P., Hilvert, D.: Selenosubtilisin as a glutathione peroxidase mimic. J. Am. Chem.
Soc. 112, 5647–5648 (1990)
8. Huang, X., Dong, Z.Y., Liu, J.Q., Mao, S.Z., Xu, J.Y., Luo, G.M., Shen, J.C.: Tellurium-
based polymeric surfactants as a novel seleno-enzyme model with high activity. Macromol.
Rapid Commun. 27, 2101–2106 (2006)
9. Huang, X., Dong, Z.Y., Liu, J.Q., Mao, S.Z., Xu, J.Y., Luo, G.M., Shen, J.C.: Selenium-
mediated micellar catalyst: an efficient enzyme model for glutathione peroxidase-like
catalysis. Langmuir 23, 1518–1522 (2007)
10. McCord, J.M., Fridovich, I.: Superoxide dismutase. An enzymic function for
erythrocuprein (hemocuprein). J. Biol. Chem. 244, 6049–6055 (1969)
An Efficient Message-Attached Password Authentication
Protocol and Its Applications in the Internet of Things

An Wang1, Zheng Li2, and Xianwen Yang2


1
Key Lab of Cryptographic Technology and Information Security Ministry of Education,
Shandong University, 250100 Jinan, China
wanganl@mail.sdu.edu.cn
2
Department of Electronic Technology,
Information Science and Technology Institute, 450004 Zhengzhou, China
{lizhzzdy,yxw200420042004}@163.com

Abstract. Addressing the low efficiency of secure message transmission in the
Internet of things, a new design philosophy is proposed: combining
authentication with message transmission. Accordingly, a secure and efficient
message-attached password authentication protocol is presented. We design a
universal message transmission architecture for the Internet of things,
consisting of clients, a server, and a network node. Experiments show that our
scheme is efficient and convenient and addresses the security problems
discussed in this paper.

Keywords: Information security, the Internet of things, message-attached


password authentication protocol, cryptography engineering.

1 Introduction and Motivation


The Internet of things [1, 2], an Internet for connecting all kinds of things together, is
becoming a central research direction of information security. Many companies have
designed products for it, such as household electrical appliances, street lamps, and
display monitors. These products usually consist of a remote controller/monitor and
some local devices. Following cryptographic protocol terminology [3], we may call
the former the server and the latter the clients. The conventional basic architecture of
the Internet of things is described in Figure 1.
Because the Internet channel is insecure, almost all products of the Internet of
things adopt some security strategy. With the help of hash functions or public-key
cryptographic schemes [3], mutual authentication protocols [3] are used for identity
authentication, and digital signatures are used for verifying the original source of
information. With this method, the former usually requires no fewer than three steps,
and the latter costs one or two transmissions. In most cases in the Internet of things,
the message transferred between server and clients is very short, sometimes even
only one bit (such as a street lamp switch). Completing such a simple task with so
many interactions is undesirable, and the resulting low efficiency is becoming a
serious problem in this field. Therefore, we present the concept of a message-attached
authentication protocol in order to solve this problem.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 263–269, 2011.
© Springer-Verlag Berlin Heidelberg 2011

[Figure: a server (computer with Internet interface) and a client (function module
with Internet interface) connected through a wired/wireless channel]

Fig. 1. Principle of the Internet of things

A new design philosophy for the message transmission scheme of the Internet of
things is proposed in this paper: combining authentication with message
transmission. The outstanding characteristic of our scheme is the reduced amount of
interaction: the number of steps in the whole protocol is decreased to three, and one
signature is saved. How can a message-attached authentication protocol be designed?
We can borrow ideas from existing protocols of the same kind.
The conventional password authentication protocol is one of the simplest, most
convenient and most widely used authentication modes. Lamport [4] first proposed
protecting the user's password with a secure one-way hash function. Hwang and Yeh
[5] proposed a mutual authentication protocol. Although the original authors claimed
that this scheme could resist the known attacks, Chun et al. [6] pointed out that the
protocol was vulnerable to denial-of-service (DoS) attacks. Peyravian and Jeffries [7]
also proposed a hash-based authentication scheme and declared that it could resist
DoS and off-line password-guessing attacks. In 2008, Wang et al. [8] proposed a
mutual anonymous password authentication scheme and declared that its efficiency
was equivalent to that of Hwang et al.'s protocol while being more secure.
We find that, due to its use of public-key cryptography, the efficiency of Wang's
protocol is much lower than that of Hwang et al.'s. Besides, its resistance to DoS
attacks is not very good, and the so-called weak-password attack has not really been
eliminated. Therefore, an efficient message-attached password authentication
protocol based on hash functions is proposed, and its security and efficiency are
evaluated in detail.
Based on the new message-attached password authentication protocol, we design a
universal system architecture for the Internet of things. Experiments show that the
protocol given in this paper, with high efficiency, can resist a variety of known
attacks and achieve the security requirements of the Internet of things.

2 Efficient Message-Attached Password Authentication Protocol


A new message-attached password authentication protocol is proposed in this section
to address the security problems of Wang's scheme [8] and the security requirements
of password authentication. The new proposal greatly improves efficiency, remedies
Wang's security deficiencies, and resists various known attacks.

2.1 New Scheme

Initial Conditions and Symbols.


S / C: the server / client;
M: a message from S to C or from C to S;
IDC: the identity of user C (unique);
PWC: the password of user C;
H(·): a strongly collision-resistant hash function;
RC, RS: random numbers generated by C and S;
SKC: the symmetric key between C and S;
comma in a message: concatenation.

User Register. When user C registers at the server S, C passes H(PWC), the hash
value of the password PWC, to the server S. Then S establishes the password verifier
shown in Table 1.

Table 1. Password Verifier

User ID    Password Hash Value

ID1        H(PW1)
ID2        H(PW2)
...        ...
IDn        H(PWn)

Furthermore, S generates the symmetric key SKC and sends it to C, and both sides
store the key securely.

Implementation

Step 1: C or S submits the communication request to the other participant.


Step 2: S sends a random number RS to C.
Step 3: C generates a random number RC, calculates H(H(PWC), RS, RC, IDC, SKC,
MC) with the key SKC, and then sends the hash value to S together with RC, IDC, MC.
Step 4: After receiving the data from C, S looks up the password verifier and gets
H(PWC) based on IDC. Then S retrieves SKC, calculates the hash value of H(PWC), RS,
RC, IDC, SKC, MC, and verifies whether this value matches the received hash value. If
it matches, S authenticates C successfully, saves the message MC, and the protocol
continues; otherwise S refuses the authentication request from C.
Step 5: S retrieves the key SKC, calculates the hash value of RS, RC, IDC, SKC, MS,
and then sends this value to C together with MS.
Step 6: C retrieves the key SKC to calculate the hash value of RS, RC, IDC, SKC, MS,
and verifies whether this value matches the received hash value. If it matches, C
authenticates S successfully, saves the message MS, and notifies S that the protocol is
accomplished; otherwise C refuses the authentication request from S.
266 A. Wang, Z. Li, and X. Yang

C → S (or S → C): request a communication

S → C: RS
C → S: RC, IDC, MC, H(H(PWC), RS, RC, IDC, SKC, MC)
S verifies C and saves MC
S → C: MS, H(RS, RC, IDC, SKC, MS)
C verifies S and saves MS
Fig. 2. Message-attached password authentication protocol
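The six steps above can be sketched in Python, with SHA-256 standing in for H(·); key distribution, transport, and error handling are omitted, and all concrete names and values are illustrative assumptions:

```python
# Hedged sketch of the message-attached protocol, both roles in one process.
import hashlib
import os

def H(*parts: bytes) -> bytes:
    """H(a, b, ...) with comma as the splice operator, as in the protocol."""
    return hashlib.sha256(b",".join(parts)).digest()

# Registration: S stores (ID_C, H(PW_C)) in its verifier and shares SK_C with C.
ID_C, PW_C, SK_C = b"client-1", b"secret-pw", os.urandom(16)
verifier = {ID_C: H(PW_C)}

# Steps 2-3: S sends R_S; C answers with R_C, ID_C, M_C and an authentication tag.
R_S, R_C, M_C = os.urandom(16), os.urandom(16), b"switch lamp on"
tag_C = H(H(PW_C), R_S, R_C, ID_C, SK_C, M_C)

# Step 4: S looks up H(PW_C) in the verifier and recomputes the tag.
assert H(verifier[ID_C], R_S, R_C, ID_C, SK_C, M_C) == tag_C  # C authenticated

# Steps 5-6: S answers with M_S and a tag; C verifies it the same way.
M_S = b"ack"
tag_S = H(R_S, R_C, ID_C, SK_C, M_S)
assert H(R_S, R_C, ID_C, SK_C, M_S) == tag_S                  # S authenticated
```

Note how the messages M_C and M_S ride inside the authentication hashes themselves, which is what saves the separate signature and extra round trip that the conventional approach would need.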

2.2 Security Analysis

Denial of Service Attack. When an attacker mounts a denial-of-service attack on the
server S, S discovers the user's illegality in Step 4. S does not need to do any
complex calculation before that and only needs to receive and send data three times,
while the verification step needs just one table lookup and one hash computation.
RS can be reused when authentication fails, so the overhead of random number
generation can be completely ignored under a DoS attack. Therefore this protocol
resists DoS attacks very well.

Replay Attack. The preimage of the hash used to verify identity contains the random
number chosen by the verifier, so the preimage never repeats as long as the random
number does not repeat. Previously used hash values therefore recur only with a
negligible probability, equal to the probability of a collision in the hash algorithm.
Thus this protocol resists replay attacks very well.

Password Guess Attack. Password-guessing attacks can be divided into two kinds:
online and offline. Online password guessing can be prevented by limiting login
attempts. The success probability of an offline attack is again equivalent to the
probability of a collision of the hash algorithm. So the protocol resists password-
guessing attacks, since this probability is negligible.

Stealing Verifier Attack. In the new proposal, nothing other than the users' identities
and the hash values of the passwords is stored in the verifier kept by S. The
symmetric key is not stored, so an attacker cannot generate the right hash value even
if he obtains the password verifier, let alone extract any other useful information. For
the management of the symmetric key, USB keys and other technologies can be
adopted to ensure storage security.

Forge Server Attack. If an attacker wants to impersonate the server S, he must use
the key SKC to calculate the hash value of RS, RC, IDC, SKC, MS, or attempt to replay
a previous hash value. Obviously the latter is infeasible. Although the preimage of
the hash operation is transported over a public, insecure channel, it is impossible for
the attacker to obtain the key SKC. As a result, the attacker cannot impersonate the
server to communicate with users.

Forge Client Attack. Because replay is infeasible, the attacker can only try to
calculate the hash value of H(PWC), RS, RC, IDC, SKC, MC to impersonate C. But
even if he steals the verifier and gets H(PWC), he cannot generate the right hash value
without the key. Consequently it is also impossible to impersonate the client.

2.3 Performance Evaluation

In this paper we implement a password authentication protocol based on hash
functions and the message-attached technique, which saves at least one
authentication of the message source and one communication round. Since no public
key is used, it is far more efficient than Wang's authentication protocol, which is
based on public-key cryptography. Because the server does not incur any large
overhead before it authenticates the client, the proposal in this paper resists DoS
attacks better than Wang's. In addition, the use of hash functions avoids symmetric
encryption algorithms, which improves the overall efficiency greatly [9].

2.4 Contrast

In this part, Hwang's scheme of 2002 [5], Peyravian's scheme of 2006 [7], Wang's
scheme of 2008 [8] and the new proposal in this paper are compared in terms of
security and efficiency. The security details are given in Table 2, in which "Yes"
means the scheme resists the attack and "No" means the resistance to the attack fails.
Table 3 gives the efficiency comparison of the four schemes.
As shown in the tables, the scheme proposed in this paper is the most efficient,
meets the security needs, and is better than the other three. We take a security system
for the Internet of things as an example and discuss the specific implementation and
application in the following.

Table 2. Security Contrast

Hwang Peyravian Wang Our Scheme


DoS No Yes No Yes
Replay No Yes Yes Yes
Password Guess Yes Yes Yes Yes
Verifier Lost No No Yes Yes
Forge S Yes Yes Yes Yes
Forge C Yes No Yes Yes

Table 3. Efficiency Contrast

Hwang Peyravian Wang Our Scheme


Without PKC Yes Yes No Yes
Hash Function Yes Yes Yes Yes
Number of (practical) Steps 4 4 4 3
Number of Authentications 4 4 4 2

3 System Architecture and Implementation


Figure 3 shows the universal system architecture of the Internet of things, consisting
of three parts: the devices, which are the clients of our protocol; the central controller
or monitor of the whole system, which is the server of our protocol; and a network
node, which transfers messages between the server and some clients. Usually, the
channel between the remote server and the local node is a wired network, but in the
local network the devices can interact with the node through either a wired or a
wireless channel.

[Figure: controller/monitor (remote server: central controller, computer, Internet
interface), network node (local node: cryptographic controller, wired and wireless
network interfaces), and devices (local clients: function module, function interface,
wireless interface), linked by wired and wireless channels]

Fig. 3. Universal system architecture

We implemented a street-lamp system in the fashion of the Internet of things. In a
control center, a computer can transfer a command for switching some lamp on or
off. The message is attached to our password authentication protocol, which is
carried out between the control center and the concrete lamps. A further test showed
that client and server carry out our scheme exactly; that is to say, an adversary cannot
successfully mount the above attacks on our street-lamp system.

4 Conclusions
In this paper, we proposed a message-attached password authentication protocol and
a universal system architecture for the Internet of things. This scheme can also solve
some problems of similar products, such as low efficiency and complex procedures.
It is therefore of great significance and practical value to the future development of
the Internet of things. Next, we will continue to study the various needs of the users
of the Internet of things and design special schemes for different users. Moreover, as
people are highly dependent on mobile phones [10], we can design more function-
rich and practical products of the Internet of things controlled by mobile phones to
provide people with more convenient services.

Acknowledgments. Supported by the National Natural Science Foundation of China
(NSFC Grant No. 61072047), the Innovation Scientists and Technicians Troop
Construction Projects of Zhengzhou City (096SYJH21099), and the Open Project
Program of the Key Lab of Cryptologic Technology and Information Security
(Shandong University), Ministry of Education, China. Moreover, I am very grateful
to the anonymous referees for their helpful suggestions.

References
1. International Telecommunication Union (ITU): ITU Internet Reports, The Internet of
Things (2005)
2. European Research Projects on the Internet of Things (CERP-IoT) Strategic Research
Agenda (SRA): Internet of Things Strategic Research Roadmap (2009)
3. Menezes, A.J., Van Oorschot, P.C., Vanstone, S.A.: Handbook of Applied Cryptography.
CRC Press, Boca Raton (1997)
4. Lamport, L.: Password Authentication with Insecure Communication. Communications
of the ACM 24, 770–772 (1981)
5. Hwang, J., Yeh, T.: Improvement on Peyravian-Zunic's Password Authentication
Schemes. IEICE Transactions on Communications E85-B(4), 823–825 (2002)
6. Chun, L., Hwang, T.: A Password Authentication Scheme with Secure Password Updating.
Computers & Security 22(1), 68–72 (2003)
7. Peyravian, M., Jeffries, C.: Secure Remote User Access over Insecure Networks.
Computer Communications 29(5/6), 660–667 (2006)
8. Wang, B., Zhang, H., Wang, Z., Wang, Y.: A Secure Mutual Password Authentication
Scheme with User Anonymity. Geomatics and Information Science of Wuhan
University 33(10), 1073–1075 (2008)
9. Koc, C.K.: Cryptographic Engineering. Springer, Heidelberg (2008)
10. Google: Google Projects for Android (2010),
http://code.google.com/intl/en/android
Research on Simulation and Optimization Method for
Tooth Movement in Virtual Orthodontics

Zhanli Li and Guang Yang

College of Computer Science and Technology,


Xi'an University of Science and Technology,
Xi'an, China
lizl@xust.edu.cn

Abstract. The path planning of tooth movement in a virtual orthodontics
treatment system is analyzed. The objective of path planning and the
constraints on tooth movement are formulated, and a mathematical model is
established. The A* algorithm is selected to solve the path planning of tooth
movement. The goal is defined as minimizing the sum of all teeth's moving
distances and rotation angles while satisfying the physiological constraints;
many constraints must therefore be considered in the system, and a new
heuristic function is designed to obtain better routes. Finally, a staged
algorithm iterates out the data of each stage of tooth movement. Based on this
model and method, an experiment was carried out in MATLAB, and the results
show that the method is reliable and effective.

Keywords: Tooth movement, A* algorithm, Open table, Closed table.

1 Introduction

People's living conditions have greatly improved with economic progress, and
people gradually pay more attention to their appearance. In recent years, with the
development of image processing technology, computer technology, and virtual
reality technology, virtual surgery has become a hot research topic [1].
Virtual orthodontics can help doctors design the treatment and can output data.
Tooth path planning is an important part of virtual orthodontics: it simulates tooth
movement in the dental treatment, and the doctor can view the whole process and its
results in the virtual treatment.

2 Description of the Path Planning for Tooth Movement

Path planning for tooth movement is to form a movement path from the initial
position to the target position for each tooth. According to the

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 270–275, 2011.
© Springer-Verlag Berlin Heidelberg 2011

characteristics of tooth movement, the movement of each tooth is restricted by the
adjacent teeth. This is a very complex non-linear constrained optimization problem.

2.1 Objective of Path Planning

Assume there are n teeth and m stages of tooth movement from the initial position to
the final position (m is determined by the doctor according to the maximum moving
distance d_0 of a tooth and experience). There are two kinds of movement for each
tooth in every stage: translation and rotation. The first objective is to minimize the
sum of all teeth's translational distances:

f_A = \sum_{i=1}^{n} \sum_{j=1}^{m} d_{ij}                                    (1)

d_{ij} = \sqrt{(x_{ij}-x_{i,j-1})^2 + (y_{ij}-y_{i,j-1})^2 + (z_{ij}-z_{i,j-1})^2}    (2)

where d_ij is the movement distance of tooth i in stage j, and (x_ij, y_ij, z_ij) are the
coordinates of tooth i in stage j. The second objective is to minimize the sum of the
teeth's rotation angles:

\theta_{ij} = \arccos\left(\vec{L}_{ij} \cdot \vec{L}_{i,j-1}\right)          (3)

f_B = \sum_{i=1}^{n} \sum_{j=1}^{m} \theta_{ij}                               (4)

where θ_ij is the rotation angle of tooth i in stage j, and L_ij is the direction vector
(assumed normalized) of tooth i in stage j.

2.2 Constraints of Tooth Movement

During tooth movement, teeth may collide with each other. Let f_c be the number of
collisions; the no-collision constraint can be expressed as:

g_1 = f_c = \sum_{i=0}^{n-1} c_i = 0                                          (5)

where c_i indicates whether tooth i collides with tooth i-1: its value is 1 if a collision
occurs and 0 otherwise. In addition, according to medical experts' experience, the translation

distance or rotation angle should not be too large in any single stage of the
orthodontic process. The d_ij and θ_ij in the mathematical model must satisfy the
following constraints, where d_0 and θ_0 are the maximum moving distance and
maximum rotation angle of each stage:

g_2 = d_{ij} - d_0 \le 0
                                                                              (6)
g_3 = \theta_{ij} - \theta_0 \le 0
3 Principle of Algorithm
The A* algorithm is a typical heuristic intelligent algorithm. Because it contains a
prediction function that estimates the cost of the path still to be traveled, it
incorporates heuristic information and performs better than purely local search
algorithms [3].

3.1 The Improved Heuristic Function of A* Algorithm

According to the A* heuristic framework, we offer a new fitness function for this problem that combines movement distance and rotation angle:

    f = g + h / \cos\alpha + \sin\beta    (7)

The above formula is the new fitness function based on the A* algorithm combined with the constraints of tooth movement, where g represents the distance the tooth has already moved, measured in the diagonal (grid) mode. The angle \alpha (0° \le \alpha \le 90°) is formed between the vector from the current node's parent to the current node and the vector from the initial point to the target point, as in Fig. 1; \beta (0° \le \beta \le 90°) is the angle between the current motion vector and its previous motion vector, as in Fig. 2. \beta reflects the rotary movement of each tooth, and (h / \cos\alpha + \sin\beta) is the new heuristic function.

Fig. 1. The angle \alpha (between the parent-to-current vector and the initial-to-target vector)    Fig. 2. The angle \beta (between the previous and current motion vectors)
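Eq. (7) is cheap to evaluate per node. The sketch below is a hypothetical helper (the names are ours), taking the angles in degrees:

```python
import math

def fitness(g, h, alpha_deg, beta_deg):
    """f = g + h / cos(alpha) + sin(beta), per Eq. (7).

    g: path cost already travelled; h: heuristic distance estimate;
    alpha: angle between the parent-to-current vector and the
           initial-to-target vector (0..90 degrees);
    beta:  angle between the current and previous motion vectors
           (0..90 degrees).
    """
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    return g + h / math.cos(alpha) + math.sin(beta)
```

As alpha approaches 90 degrees, h / cos(alpha) grows without bound, so candidate moves nearly perpendicular to the initial-to-target direction are heavily penalized, while sin(beta) adds at most 1 for sharp turns.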


Research on Simulation and Optimization Method for Tooth Movement 273

3.2 Analysis and Determination of Tooth Collision Points

According to the features of the A* algorithm, we first need to determine the collision points in the process of tooth movement.

First we consider whether the motion vectors of adjacent teeth lie in one plane. If they do, we take the diagonal length d of the bounding box as the diameter of the projection circle. We then calculate the angle formed by the adjacent teeth's motion vectors. If this angle is less than 45 degrees, we calculate the distance D between the center points of the projection circles and compare it with a threshold, which is generally the sum of the radii of the adjacent projection circles; if the actual distance D is less than this threshold, a collision has occurred. If the routes are not in one plane, we calculate the line perpendicular to the two motion vectors; if its intersection points lie between the two motion vectors, those intersection points are exactly the collision points; otherwise we assume for the moment that no collision occurs. At this time tooth p1 can move in the plane formed by points p1, p2, p1', and tooth p2 can move in the plane formed by points p1, p2, p2'. The areas Q and Q' are the two teeth's collision areas, as follows:

Fig. 3. In one plane    Fig. 4. In different planes

Fig. 5. The collision area of tooth p1    Fig. 6. The collision area of tooth p2

In this system, the number d determines the size of the projection circle, and the projection circle area is exactly the collision area in the grid environment. Based on the collision area and the A* algorithm, the system searches for the final path. The flow chart for determining collision points is as follows:

The procedure starts by splitting the tooth model, building the bounding boxes, and obtaining the initial coordinates, along with the arch wire and the target coordinates. It then judges whether the adjacent routes are in one plane.

If they are in one plane, it calculates the diagonal length of the bounding box and tests whether the angle between the motion vectors is less than 45 degrees. If not, no collision occurs and movement continues. If so, it calculates the diameter d, the distance D between the two circle centers, and the sum r1 + r2; if D <= r1 + r2, a collision occurs and the circle area is exactly the collision area, otherwise no collision occurs and movement continues.

If they are not in one plane, it calculates the line perpendicular to the two motion vectors and tests whether the intersection points lie between the motion vectors. If they do, a circle is made based on d and the intersection points, and this circle is the collision area; if they do not, no collision occurs and movement continues.

Fig. 7. The flow chart of collision points
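The coplanar branch of Fig. 7 can be sketched as follows. This is an illustrative reading with our own function names, where each projection circle is given by its center and radius r = d/2:

```python
import math

def motion_angle(v1, v2):
    # angle between two motion vectors, in degrees
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def coplanar_collision(c1, r1, c2, r2, v1, v2):
    """Coplanar branch of Fig. 7: a collision is flagged when the adjacent
    motion vectors form an angle below 45 degrees and the projection
    circles (centers c1, c2; radii r1, r2) overlap, i.e. D <= r1 + r2."""
    if motion_angle(v1, v2) >= 45.0:
        return False                      # divergent moves: keep moving
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
    return dist <= r1 + r2
```

For instance, two nearly parallel moves with circle centers 1.5 apart and radii 1.0 each are flagged as a collision, while perpendicular moves are not.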

4 Implementation and Results


The partial results of path planning, displayed in MATLAB, are shown as follows (panels: initial state, process 1, process 2, process 3, process 4, process 5).


5 Conclusions
Path planning for tooth movement is a complex, parallel, multi-objective, nonlinear problem. In this article, the A* algorithm is used to solve the path-planning problem of tooth movement. The results show that the model and the method are effective. This is a new attempt at both theory and application in the virtual orthodontics system.

References
1. Sen, Y.: Orthodontic diagnosis and treatment plan. World Publishing Company, Singapore
(2002)
2. Gao, H., Yan, Y., Qi, P., et al.: Three-dimensional digital dental cast analysis and diagnosis
system. Computer Aided Design and Computer Graphics
3. Gao, Q.J., Yu, Y.S., Hu, D.: The feasibility of improving the path search and optimization
based on A* algorithm. College of China Civil Aviation (August 2005)
4. Huizhong puzzle of Beijing Science and Technology Co. Ltd., Client programming about
online game / Information Ministry software and integrated circuit promotion center
5. Wang, W.: Principles and Applications of Artificial Intelligence. Electronic Industry Press, Beijing
6. Lihui, Z., Zhang, L., Hou, M.: The application of A* algorithm in game routing. Inner
Mongolia Normal University (February 2009)
An Interval Fuzzy C-means Algorithm Based on Edge
Gradient for Underwater Optical Image Segmentation

Shilong Wang, Yuru Xu, and Lei Wan

National Key Laboratory of Science and Technology on Autonomous Underwater Vehicle,


Harbin Engineering University, Harbin, China
wangshilong@hrbeu.edu.cn

Abstract. In the underwater environment, underwater images have a low S/N ratio and fuzzy detail due to scattering and absorption by suspended matter and by the water itself, as well as uneven lighting. If traditional methods are used to process underwater images directly, satisfactory results are unlikely. As its mission, the vision system of an autonomous underwater vehicle (AUV) should process information about objects in the complex environment rapidly and exactly, so that the AUV can use the obtained result for its next task. Therefore, aiming at fast clustering while providing high-quality segmentation of underwater images, a novel interval fuzzy c-means algorithm based on edge gradient for underwater image segmentation is proposed. Experimental results indicate that the novel algorithm obtains better segmentation results, reduces the processing time of each image, enhances efficiency, and satisfies the strict real-time requirements of an AUV.

Keywords: underwater image, image segmentation, autonomous underwater


vehicle (AUV), fuzzy C-means, real-time effectiveness.

1 Introduction

An autonomous underwater vehicle (AUV) is an important carrier for ocean exploration and seabed survey services. Underwater target detection, search, and identification are the prerequisites for an AUV to execute certain tasks and have become the key to realizing autonomy [1]. Therefore, the underwater vision system is particularly important, and image information processing capacity is the essential premise for dynamically sensing the environment and quickly locating and tracking objects.
Underwater images are highly sensitive to various noises and other interference [2]. Underwater image segmentation is one of the classic topics in the computer vision research field and is also one of the difficulties in underwater image processing. The thousands of existing image segmentation algorithms mostly target specific problems; no general algorithm can be applied to all image segmentation [3]. Compared with traditional hard segmentation algorithms, fuzzy segmentation has drawn increasing attention because it keeps more of the original image information. Especially Fuzzy C-means (FCM), proposed by Dunn [4] and promoted by Bezdek [5], has been widely used in various areas of image segmentation [6,7].

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 276-283, 2011.
Springer-Verlag Berlin Heidelberg 2011
An Interval Fuzzy C-means Algorithm Based on Edge Gradient 277

In the actual operation of underwater robots, the vibration of imaging equipment and other electronic interference cause measurement errors in the pixel information; that is to say, the actual data obtained are inaccurate. In order to study clustering under this imprecision, the most basic form of such data, interval-valued data, is adopted for analysis.
Using weighted interval numbers, Ishibuchi et al. designed interval neural networks [8]. Z. Lv et al. gave a similarity measure method for interval numbers with Gaussian distributions [9]. In the area of unsupervised classification, many scholars have conducted in-depth discussions. Fan Jiu-lun et al. proposed an interval-valued fuzzy c-means (FCM) clustering algorithm to deal with unsupervised classification problems [10]. Francisco et al. presented a fuzzy clustering algorithm for symbolic data [11]. Intuitionistic fuzzy set clustering algorithms are discussed in detail by Z. Xu et al. [12]. Antonio Irpino et al. [13] proposed a dynamic clustering algorithm based on the Wasserstein distance for interval-valued data. Marie-Helene et al. [14] used belief functions for interval-valued data clustering.
However, the various algorithms above are based on simulated data. In this paper, an interval segmentation algorithm for underwater images is put forward for the first time, and in order to improve the timeliness of calculation, it is combined with the edge gradient for image segmentation. Comparative experiments show that the new algorithm achieves better quality and higher timeliness and can be applied to actual AUV tasks. Although the new algorithm is designed for underwater images, its principles also have high reference value for other types of image segmentation, in which the knowledge about interval clustering segmentation can be fully tapped.

2 FCM Segmentation Algorithm Based on Interval Boundary

2.1 Theory Introduction


Suppose I(R^+) = { \bar{x} | \bar{x} = [x^-, x^+] \subset R^+ }. For \bar{x} \in I(R^+), \bar{x} is called an interval number, where x^- and x^+ are the left and right endpoints of the interval, respectively. The interval median and interval size are defined as follows:

    \dot{x} = (x^- + x^+)/2,    \ddot{x} = x^+ - x^-    (1)

Let an image be of size M x N, let f(x, y) be the gray value at position (x, y), f \in {0, 1, ..., L-1}, where L is the maximum gray level. Taking (x, y) as the center, its a x a neighborhood is as follows (here, a = 5):
278 S. Wang, Y. Xu, and L. Wan

    [ f(x-2, y-2)  f(x-1, y-2)  f(x, y-2)  f(x+1, y-2)  f(x+2, y-2) ]
    [ f(x-2, y-1)  f(x-1, y-1)  f(x, y-1)  f(x+1, y-1)  f(x+2, y-1) ]
    [ f(x-2, y)    f(x-1, y)    f(x, y)    f(x+1, y)    f(x+2, y)   ]    (2)
    [ f(x-2, y+1)  f(x-1, y+1)  f(x, y+1)  f(x+1, y+1)  f(x+2, y+1) ]
    [ f(x-2, y+2)  f(x-1, y+2)  f(x, y+2)  f(x+1, y+2)  f(x+2, y+2) ]

In the a x a region, the mean and standard deviation are denoted by E_{x,y} and \sigma_{x,y}, respectively, and then the following definitions are given:

Definition 1: Define the interval characteristics of any point in an image as follows: the interval median is \dot{x} = E_{x,y}, the interval size is \ddot{x} = 2\sigma_{x,y}, and the pixel value at this point is [f(x, y) - \sigma_{x,y}, f(x, y) + \sigma_{x,y}].

Definition 2: Let the pixels at any two points in the image be represented as [f(x_1, y_1) - \sigma_{x_1,y_1}, f(x_1, y_1) + \sigma_{x_1,y_1}] and [f(x_2, y_2) - \sigma_{x_2,y_2}, f(x_2, y_2) + \sigma_{x_2,y_2}]. Only when f(x_1, y_1) - \sigma_{x_1,y_1} = f(x_2, y_2) - \sigma_{x_2,y_2} are the two pixels equal.

Definition 3: By interval median and interval size, the distance between two pixels is defined as follows:

    D = \sqrt{(\dot{x}_1 - \dot{x}_2)^2 + (\ddot{x}_1 - \ddot{x}_2)^2} = \sqrt{(E_{x_1,y_1} - E_{x_2,y_2})^2 + (\sigma_{x_1,y_1} - \sigma_{x_2,y_2})^2}    (3)
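Definitions 1 and 3 can be sketched in Python as follows (illustrative names, not from the paper; the population standard deviation is used for sigma):

```python
import math

def interval_features(img, x, y, a=5):
    """Mean E and standard deviation sigma over the a x a neighborhood of
    pixel (x, y); img is a 2-D list of gray values (Definition 1)."""
    half = a // 2
    vals = [img[yy][xx]
            for yy in range(y - half, y + half + 1)
            for xx in range(x - half, x + half + 1)]
    mean = sum(vals) / len(vals)
    sigma = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return mean, sigma

def interval_distance(p1, p2):
    """Definition 3: D = sqrt((E1 - E2)^2 + (sigma1 - sigma2)^2),
    with each pixel represented as an (E, sigma) pair."""
    (e1, s1), (e2, s2) = p1, p2
    return math.sqrt((e1 - e2) ** 2 + (s1 - s2) ** 2)
```

On a uniform 5 x 5 patch the features are simply (gray value, 0), and the distance then reduces to the plain gray-level difference.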


Definition 4: Let the cluster number of FCM clustering be c and the total number of pixels of the image be n = M x N. If the following conditions are satisfied:

    (1) \sum_{i=1}^{c} \mu_{ik} = 1, k = 1, 2, ..., n;    (2) \sum_{k=1}^{n} \mu_{ik} > 0, i = 1, 2, ..., c,

then U = (\mu_{ik})_{c x n} is called a fuzzy membership matrix, U is called a (c, n) fuzzy partition, and U_f(c, n) denotes the whole set of (c, n) fuzzy partitions.

Definition 5: Define the objective function of interval FCM clustering as:

    J(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{n} (\mu_{ik})^m || v_i - \bar{x}_k ||^2

where V = {v_1, v_2, ..., v_c} is the set of cluster centers and \mu_{ik} expresses the membership of the k-th pixel interval value \bar{x}_k to the i-th class center v_i.

2.2 Introduction of the Gradient Operator

The edge is an important feature for image understanding and pattern recognition, and it preserves feature information while effectively reducing the amount of data to be processed. The first derivative can be used to detect whether a point is an edge, highlighting the details of the image. By gradient sharpening [15], the gradient of the image at position (x, y) can be obtained:

    \nabla f = [G_x, G_y]^T = [\partial f / \partial x, \partial f / \partial y]^T    (4)

Definition 6: Define the image gradient as follows:

    |\nabla f| = \sqrt{(E_{i,j} - E_{i+1,j})^2 + (\sigma_{i,j} - \sigma_{i+1,j})^2 + (E_{i,j} - E_{i,j+1})^2 + (\sigma_{i,j} - \sigma_{i,j+1})^2}    (5)

where (i+1, j) and (i, j+1) are, respectively, the right and lower neighbors of (i, j) in the four-neighborhood. A specified threshold is used to determine the validity of the value; that is, if |\nabla f| is greater than the threshold, the gradient takes the place of the gray value; otherwise, the gray value of the point remains unchanged.
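Definition 6 can be computed directly from the interval-median map E and the corresponding sigma map; a minimal sketch (our own naming):

```python
import math

def interval_gradient(E, S, i, j):
    """Eq. (5): gradient magnitude at (i, j) from the interval-median map E
    and the sigma map S, using the right and lower neighbors."""
    return math.sqrt((E[i][j] - E[i + 1][j]) ** 2 + (S[i][j] - S[i + 1][j]) ** 2
                     + (E[i][j] - E[i][j + 1]) ** 2 + (S[i][j] - S[i][j + 1]) ** 2)
```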

2.3 Steps of the Algorithm

Step 1: By Definition 1, each original pixel is converted to an interval value.
Step 2: According to Definition 6, the gradient is calculated.
Step 3: Randomly choose the initial clustering centers V^(0) = {v_1^(0), v_2^(0), ..., v_c^(0)}, where v_i = {E_i, \sigma_i}, i = 1, 2, ..., c.
Step 4: Use Definition 3 to calculate the distance between every pixel interval value and the cluster centers; the membership \mu_{ik} of the k-th pixel interval value \bar{x}_k to the i-th clustering center v_i can be expressed as follows:

    for any i, k with d_{jk} > 0 for all j:    \mu_{ik} = 1 / \sum_{j=1}^{c} (d_{ik} / d_{jk})^{2/(m-1)}    (6)

    for any i, j, k: if d_{jk} = 0, then \mu_{jk} = 1 and \mu_{ik} = 0 for i \ne j    (7)

Step 5: Update the cluster center matrix:

    v_i = \sum_{k=1}^{n} \mu_{ik}^m \bar{x}_k / \sum_{k=1}^{n} \mu_{ik}^m, i = 1, 2, ..., c    (8)

Step 6: Use Definition 5 to calculate the objective function J and compare it with the previously calculated value of J. If the difference is less than the threshold \varepsilon, the loop ends; continue to Step 7. Otherwise go back to Step 4 to continue the loop.
Step 7: Post-processing: with the obtained cluster centers and fuzzy membership matrix, perform the fuzzy clustering based on the interval value of every pixel identified in Step 1 and obtain the segmentation results.
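The update steps (Eqs. (6)-(8)) can be sketched in Python as follows. This is an illustrative reading (our own naming), with each pixel represented as a (median, size) pair and the standard FCM handling of zero distances:

```python
import math

def interval_distance(p, q):
    # Definition 3: Euclidean distance between (median, size) pairs
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def update_memberships(data, centers, m=2.0):
    """Eqs. (6)-(7): membership u_ik of interval pixel x_k in cluster i."""
    U = []
    for vi in centers:
        row = []
        for xk in data:
            d_ik = interval_distance(vi, xk)
            if d_ik == 0.0:
                row.append(1.0)          # Eq. (7): zero distance, full membership
            elif any(interval_distance(vj, xk) == 0.0 for vj in centers):
                row.append(0.0)          # another center coincides with x_k
            else:
                s = sum((d_ik / interval_distance(vj, xk)) ** (2.0 / (m - 1.0))
                        for vj in centers)
                row.append(1.0 / s)      # Eq. (6)
        U.append(row)
    return U

def update_centers(data, U, m=2.0):
    """Eq. (8): v_i = sum_k u_ik^m x_k / sum_k u_ik^m (componentwise)."""
    centers = []
    for row in U:
        w = [u ** m for u in row]
        t = sum(w)
        centers.append((sum(wk * xk[0] for wk, xk in zip(w, data)) / t,
                        sum(wk * xk[1] for wk, xk in zip(w, data)) / t))
    return centers
```

Alternating these two updates until J changes by less than epsilon implements the loop of Steps 4-6.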

3 Experimental Results and Analysis


To validate the feasibility and effectiveness of the edge-gradient-based FCM algorithm, we compared it with the traditional FCM and with the algorithm proposed in literature [16] for underwater image segmentation.

3.1 Comparison of Accuracy of Segmentation Results

The experiments were run on a computer with the XP operating system, a 2.60 GHz CPU, and 2 GB of memory. For four types of representative underwater targets (image size 576 x 768), the three algorithms were used for the segmentation experiments; the iteration stop threshold was set to 1e-9, the clustering number to 2, and a to 7. The segmentation results are shown in Figs. 1-4 (a-d correspond to the original image, the traditional algorithm, the algorithm in literature [16], and the new algorithm, respectively).

Fig. 1. The three-prism segmentation results

Fig. 2. The four-prism segmentation results

Fig. 3. The ellipsoid segmentation results

Fig. 4. The multi-objects segmentation results



Figs. 1-2 show that the segmentation results of the three algorithms all reflect the clustering effect to different degrees, and all perform well on images captured in a low-light environment. However, it can be seen from the segmentation results of the three-prism that the traditional FCM method fails to separate the complete region of the target; the algorithm proposed in literature [16] gives a better result, but the noise above the target is still present, while the new algorithm not only obtains a clear edge but also, in the four-prism segmentation, clearly distinguishes details such as the cable in the lower right of the image.

For targets with seriously uneven light and heavy noise, it can be seen from Figs. 3-4 that the object cannot be segmented from the background by the traditional FCM clustering algorithm; using the algorithm in literature [16], only a little object information can be segmented out when processing the ellipsoid image, and for the multi-object image with more serious noise it is very difficult to obtain the whole outline, whereas the new algorithm clearly recovers most of the edge information of the target; what's more, the cable above the ellipsoid is visible in the multi-object segmentation result.

In summary, the segmentation results in Figs. 1-4 sufficiently show that the new algorithm obtains good segmentation results on images with insufficient light as well as on those with heavy noise and seriously uneven illumination. With its good accuracy, high robustness, and broad applicability, the new algorithm has irreplaceable advantages in segmenting images captured in complex environments with noise that is difficult to eliminate.

3.2 Analysis and Comparison of Timeliness

For each of the four types of targets, 50 images captured in a pool were taken as samples. For all three algorithms, the number of clusters was set to 2 and the threshold to \varepsilon = 1.0 x 10^-9. The average time consumption for a single image is given in Table 1.

Table 1. Single-image time consumption with different algorithms

target           three-prism  four-prism  ellipsoid  multi-objects
FCM              1.094        1.109       1.513      1.969
literature [16]  0.844        0.765       1.032      0.906
new algorithm    0.11         0.109       0.141      0.201

It can be seen from Table 1 that, for the four types of underwater image segmentation, the new algorithm saves about 10 times the time consumption of the traditional FCM algorithm, and its timeliness is 4-8 times better than that of the algorithm in literature [16], which fully shows that the new algorithm can meet the timeliness requirements for an AUV executing practical underwater tasks.

4 Conclusions
Through analysis and research on the FCM algorithm, and considering the inevitable errors in the imaging process, an interval fuzzy c-means algorithm based on edge gradient is proposed for four types of images taken in a pool; some definitions and parameters are given, and the membership matrix and clustering center matrix are modified accordingly. Experimental results show that, compared with the traditional FCM and the algorithm in literature [16], the new algorithm's segmentation quality is obviously better; especially for images with serious noise and uneven light, the segmentation result is highly robust. Moreover, the computing speed of the new algorithm is quicker than that of the other two algorithms. Above all, the new algorithm not only obtains good-quality segmentation but also improves timeliness, which meets the requirements for an AUV to complete its special mission [17] and provides a strong guarantee for feature extraction and target tracking.

References
1. Yuan, X.H., Qiu, C.C., et al.: Vision System Research for Autonomous Underwater Vehicle. In: Proceedings of the IEEE International Conference on Intelligent Processing Systems, vol. 2, pp. 1465-1469 (1997)
2. Wang, S.-L., Wan, L., Tang, X.-D.: A modified fast fuzzy C-means algorithm based on the spatial information for underwater image segmentation. In: ICCDA 2010, vol. 1, pp. 1524-1528 (2010)
3. Zhang, M.-j.: Image Segmentation. Science Press, Beijing (2001)
4. Dunn, J.C.: A fuzzy relative of the ISODATA process and its use in detecting compact, well-separated clusters. J. Cybern. 3, 32-57 (1974)
5. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York (1981)
6. Szilagyi, L., Benyo, Z., Szilagyi, S.M., et al.: MR Brain Image Segmentation Using an Enhanced Fuzzy C-Means Algorithm. In: Proc. of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Cancun, Mexico, vol. 1, pp. 724-726 (2003)
7. Rezaee, M.R., Zwet, P., Lelieveldt, B., et al.: A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering. IEEE Trans. on Image Processing 9(7), 1238-1248 (2000)
8. Ishibuchi, H., Tanaka, H.: An Architecture of Neural Networks with Interval Weights and Its Application to Fuzzy Regression Analysis. Fuzzy Sets and Systems 57(1), 27-39 (1993)
9. Lv, Z., Chen, C., Li, W.: A new method for measuring similarity between intuitionistic fuzzy sets based on normal distribution functions. In: Fourth International Conference on Fuzzy Systems and Knowledge Discovery, Haikou, China, vol. 2, pp. 108-113 (2007)
10. Fan, J.-l., Pei, J.-h., Xie, W.-x.: Interval fuzzy c-means clustering algorithm. In: Fuzzy Sets Theory and Applications: The Ninth Annual Meeting of the Committee on Fuzzy Systems and Fuzzy Mathematics in China, pp. 127-213 (1998)
11. de Carvalho, F.d.A.T.: Fuzzy c-means clustering methods for symbolic interval data. Pattern Recognition Letters 28(4), 423-437 (2007)
12. Xu, Z., Chen, J., Wu, J.: Clustering algorithm for intuitionistic fuzzy sets. Information Sciences 178(19), 3775-3790 (2008)

13. Irpino, A., Verde, R.: Dynamic clustering of interval data using a Wasserstein-based distance. Pattern Recognition Letters 29(11), 1648-1658 (2008)
14. Masson, M.-H., Denoeux, T.: Clustering interval-valued proximity data using belief functions. Pattern Recognition Letters 25(2), 163-171 (2004)
15. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn., pp. 97-99. Electronic Industry Press, Beijing (2002)
16. Wang, S.-l., Wang, L., Tang, X.-d.: An improved fuzzy C-means algorithm based on gray-scale histogram for underwater image segmentation. In: CCC 2010, vol. 1, pp. 2778-2783 (2010)
17. Balasuriya, A., Ura, T.: Vision-based underwater cable detection and following using AUVs. In: Proceedings of the Oceans 2002 Conference and Exhibition, pp. 1582-1587. IEEE, Piscataway (2002)
A Generic Construction for Proxy Cryptography

Guoyan Zhang

School of Computer Science and Technology,


Shandong University
guoyanzhang@sdu.edu.cn

Abstract. A proxy cryptosystem allows the original decryptor to delegate his decryption capability to a proxy decryptor. Due to the extensive applications of proxy cryptography, some schemes have been presented, but there is no general model for proxy cryptography. In this paper, we give the first general model and security model for proxy-protected anonymous proxy cryptography, in which only the proxy decryptor can decrypt the ciphertexts intended for the original decryptor. Finally, we give one concrete scheme following the model as an example.

Keywords: Proxy Cryptography, Proxy-Protected, DBDH.

1 Introduction

The need to delegate cryptographic operations led to the introduction of proxy cryptography. M. Mambo, K. Usuda, and E. Okamoto first invented the notion of the proxy cryptosystem [1] in 1997. In their scheme, the original decryptor can delegate his decryption capability to the proxy decryptor, but the proxy decryptor can recover the plaintext only after the ciphertext has been transformed into another ciphertext. In comparison to proxy signatures, only a few research efforts have been put into the delegation of decryption. In 1998, M. Blaze, G. Bleumer, and M. Strauss [2] proposed the notion of atomic proxy cryptography. In this method, the original decryptor and the delegated decryptor together publish a transformation key by which a semi-trusted intermediary transforms ciphertext encrypted for the original decryptor directly into ciphertext that can be decrypted by the delegated decryptor. In order to relieve the burden of transformation, a series of ciphertext-transformation-free proxy cryptosystems, in which the proxy (or proxies) can perform the decryption operation without ciphertext transformation [3,4,5], have also been studied. Recent related works on proxy re-encryption [6,7,8] still need the proxy to transform the ciphertext.

Because we cannot always trust a person, revoking a delegation is an important issue in proxy cryptography, especially in proxy decryption, and there has been no efficient method to solve this problem. In this paper, we introduce a model with implicit delegation; that is to say, the encryptor need not check the validity of the delegation, because if the decryption power of a proxy decryptor is revoked, he cannot obtain the plaintext. Furthermore, in our model, the channel between the original decryptor and the proxy decryptor need not be secure or authenticated.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 284289, 2011.
Springer-Verlag Berlin Heidelberg 2011
A Generic Construction for Proxy Cryptography 285

2 Our Generic Model and Attack Model

2.1 The Generic Model

Definition 1. A proxy-protected anonymous proxy decryption (PPAPD) scheme includes the following five algorithms:

Setup: Given the security parameter 1^k, the delegater runs the key generation algorithm IBEGen of an id-based encryption scheme to get the master secret key SK_O as his secret key and the master public parameter PK_O as his public key.

Delegation Algorithm: The delegation algorithm includes two phases: user secret key generation and partial proxy private key derivation.

Secret Key Generation: Taking the public parameter PK_O as input, the delegatee randomly picks a secret key SK_P and computes the corresponding public key PK_P. Then he chooses an existentially unforgeable signature scheme S = (Gen_sign, Sign, Verify) and computes a signature \sigma for the public key PK_P. Finally, he sends (\sigma, PK_P) together with the verifying public key to the delegater.

Partial Proxy Private Key Derivation: Assume the proxy time is t. Given the tuple (\sigma, PK_P), the public key PK_O, the secret key SK_O, and the proxy time t, the delegater first checks the validity of the public key PK_P and of the signature \sigma; if either is invalid, he aborts. Otherwise, he runs the private key extraction algorithm IBEExtract of the IBE scheme and gets the partial proxy private key SK_pp. Then he sends SK_pp to the delegatee.

Encrypt: Taking the public keys PK_O and PK_P, the proxy time t, and the message M, the probabilistic encryption algorithm Enc returns a ciphertext C on message M.

Decrypt: On receiving the ciphertext C, the delegatee obtains the plaintext M using the secret key SK_P and the partial proxy private key SK_pp, or outputs the special symbol \perp indicating that the ciphertext is invalid.

2.2 The Attack Model

In our notion, we consider all potential actions of the adversary. There are two types of adversaries: the Type I adversary represents outside adversaries who are not delegated; the Type II adversary represents the delegatee.

Definition 2. A proxy-protected anonymous proxy decryption (PPAPD) scheme is secure against adaptive chosen ciphertext attack (IND-CCA) if no probabilistic polynomial-time adversary has a non-negligible advantage in either Game 1 or Game 2.
286 G. Zhang

Game 1
This game is for the Type I adversary. Taking a security parameter 1^k, the challenger runs the Setup algorithm to get the delegater's secret key SK_O and public key PK_O, and he gives PK_O to the adversary, keeping SK_O secret.

The adversary can query two oracles: the Partial Proxy Private Key Oracle and the Decryption Oracle.

Partial Proxy Private Key Oracle: On receiving the query <PK_P, SK_P, t_i, \sigma = Sign(PK_P)>:
The challenger checks the validity of the public key and the signature \sigma; if either is invalid, he aborts. Otherwise, he searches the PartialProxyPrivateKeyList for a tuple <PK_P, SK_PP, t_i>; if it exists, he sends SK_PP to the adversary. Otherwise, the challenger runs the Partial-Proxy-Private-Key-Derivation algorithm to get SK_PP, adds the tuple <PK_P, SK_PP, t_i> to the PartialProxyPrivateKeyList, and sends SK_PP to the adversary.
Decryption Oracle: On receiving the query <PK_P, SK_P, C_i, t_i>:
The challenger checks the validity of the public key; if it is invalid, he aborts. Otherwise, he searches the PartialProxyPrivateKeyList for a tuple <PK_P, SK_PP, t_i>; if it exists, he decrypts the ciphertext C_i using SK_PP and SK_P and sends the plaintext M to the adversary. Otherwise, the challenger runs the Partial-Proxy-Private-Key-Derivation algorithm to get SK_PP, adds the tuple <PK_P, SK_PP, t_i> to the PartialProxyPrivateKeyList, decrypts the ciphertext C_i using SK_PP and SK_P to get M, and sends the plaintext M to the adversary.
Challenge: The adversary generates a challenge request <PK_{P*}, SK_{P*}, t_i*, M_0, M_1>, where t_i* is the proxy time and M_0, M_1 are equal-length plaintexts. If the public key PK_{P*} is valid, the challenger picks a random bit b \in {0,1} and sets C* = Enc(M_b, t_i*, PK_{P*}, PK_O). It sends C* to the adversary.

The adversary can make polynomially many further queries, and the challenger responds as in the second step.

At the end of the game, the adversary outputs b' \in {0,1} and wins the game if b' = b; furthermore, there are two restrictions: the adversary has never queried the partial proxy private key oracle on the tuple <PK_{P*}, SK_{P*}, t_i*>, nor the decryption oracle on the tuple <C*, t_i*, PK_{P*}, SK_{P*}>.
Game 2
This game is for the Type II adversary. Taking a security parameter 1^k, the challenger runs the Setup algorithm to get the delegater's secret key SK_O and public key PK_O, and he gives (PK_O, SK_O) to the adversary.

The adversary can query one oracle: the decryption oracle.

Decryption Oracle: On receiving the query <PK_P, SK_P, C_i, t_i>:
The challenger checks the validity of the public key; if it is invalid, he aborts. Otherwise, he searches the PartialProxyPrivateKeyList for a tuple <PK_P, SK_PP, t_i>; if it exists, he decrypts the ciphertext C_i using SK_PP and SK_P and sends the plaintext M to the adversary. Otherwise, the challenger runs the Partial-Proxy-Private-Key-Derivation algorithm to get SK_PP, adds the tuple <PK_P, SK_PP, t_i> to the PartialProxyPrivateKeyList, decrypts the ciphertext C_i using SK_PP and SK_P to get M, and sends the plaintext M to the adversary.
Challenge: The adversary generates a challenge request <PK_{P*}, SK_{P*}, t_i*, M_0, M_1>, where t_i* is the proxy time and M_0, M_1 are equal-length plaintexts. If the public key PK_{P*} is valid, the challenger picks a random bit b \in {0,1} and sets C* = Enc(M_b, t_i*, PK_{P*}, PK_O). It sends C* to the adversary.

The adversary can make polynomially many further queries, and the challenger responds as in the second step.

At the end of the game, the adversary outputs b' \in {0,1} and wins the game if b' = b; furthermore, there is one restriction: the adversary has never queried the decryption oracle on the tuple <C*, t_i*, PK_{P*}>.

3 The Proxy-Protected Proxy Decryption Scheme from Our Generic Model

3.1 The Construction

Let (G, G_T) be bilinear map groups of prime order p > 2^k and let e: G x G -> G_T denote a bilinear map. H_1: {0,1}* -> {0,1}^n and H_2: {0,1}* -> {0,1}^n are two collision-resistant hash functions.

Setup(1^k, n): The original decryptor chooses g as a generator of G, sets g_1 = g^\alpha for random \alpha \in Z_p^*, and picks a group element g_2 \in G and vectors (u, u_1, ..., u_n), (v, v_1, ..., v_n) \in G^{n+1}. These vectors define the following hash functions:

    F_u(W_1) = u \prod_{j=1}^{n} u_j^{i_j},    F_v(W_2) = v \prod_{j=1}^{n} v_j^{w_j},

where W_1 = i_1 i_2 ... i_n and W_2 = w_1 w_2 ... w_n are bit strings. The original decryptor's public key is PK_O = (G, G_T, e, g, g_1, g_2, u, u_1, u_2, ..., u_n, v, v_1, v_2, ..., v_n), and the original decryptor's secret key is SK_O = \alpha.
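To illustrate how F_u maps a bit string W_1 = i_1 i_2 ... i_n into the group, here is a toy Python sketch over the multiplicative group Z_p^* with made-up parameters (a real instantiation of this scheme needs a pairing-friendly elliptic-curve group, which plain modular arithmetic cannot provide):

```python
import random

# Toy parameters (illustrative only; far too small for any security)
p = 467        # a small prime
g = 2
n = 8

random.seed(1)
u = pow(g, random.randrange(1, p - 1), p)
us = [pow(g, random.randrange(1, p - 1), p) for _ in range(n)]

def F_u(bits):
    # F_u(W1) = u * prod_{j : i_j = 1} u_j  (mod p), with W1 = i_1 ... i_n
    acc = u
    for uj, b in zip(us, bits):
        if b:
            acc = (acc * uj) % p
    return acc
```

F_u of the all-zero string is just u; setting one bit multiplies in the corresponding u_j, which is exactly the Waters-style selection the formula describes.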

Delegation algorithm:
Secret key generation: The proxy decryptor P randomly picks x ∈ Z_p* and
computes the public key PK_P = (X, Y) = (g_1^x, g_2^x). He runs Gen_sign of a secure
signature scheme S = (Gen_sign, Sign, Verify) to get the signature key pair (sk, vk),
and he runs Sign to get the signature σ on (X, Y). Finally he sends ((X, Y), σ, vk) to
the original decryptor O.
Partial proxy private key derivation: Given ((X, Y), σ, vk), the original decryptor
O verifies: e(X, g_2) = e(Y, g_1) and Verify(σ, (X, Y), vk) = 1. If either fails, O outputs
invalid. Otherwise, assuming the proxy time is t, he chooses a random number r ∈ Z_p*
and computes W_1 = H_1(X, Y, t), SK_PP = (d_1, d_2) = (g_2^α F_u(W_1)^r, g^r). O sends
SK_PP = (d_1, d_2) to the proxy decryptor P.

Encrypt: Assume the proxy time is t. To encrypt a message m ∈ G_T,
parse PK_P = (X, Y) and check the validity of the public key by the
equation e(X, g_2) = e(Y, g_1). If it holds, choose s ∈ Z_p* and compute the ciphertext as
follows:

C = (C_0, C_1, C_2, C_3) = (m · e(X, Y)^s, g^s, F_u(W_1)^s, F_v(W_2)^s),

where W_2 = H_2(C_0, C_1, C_2, PK_P).

Decrypt: The proxy decryptor P first computes W_2 = H_2(C_0, C_1, C_2, PK_P) and
checks the validity of the ciphertext by the following equation:
e(C_1, F_u(W_1) F_v(W_2)) = e(g, C_2 C_3). If the equation doesn't hold, he outputs invalid.
Otherwise, he decrypts the ciphertext: m = C_0 / (e(d_1, C_1) / e(C_2, d_2))^{x^2}.
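As a sanity check on the algebra above, the correctness of the decryption equation can be simulated by representing every group element through its discrete-log exponent modulo the group order, so that the pairing becomes exponent multiplication. This is a toy model only, assuming nothing beyond the equations above: there are no real pairing groups or hash functions here, it is not secure, and all variable names are illustrative.

```python
# Toy exponent-arithmetic check of the scheme's correctness equation.
# Each group element g^a is represented by its exponent a mod p, and the
# pairing e(g^a, g^b) = gt^(a*b) becomes multiplication of exponents.
import random

p = 2**61 - 1  # a prime standing in for the group order

def pair(a, b):
    # e(g^a, g^b) -> exponent of the result in G_T
    return (a * b) % p

alpha = random.randrange(1, p)   # SK_O
beta  = random.randrange(1, p)   # discrete log of g2
x     = random.randrange(1, p)   # proxy decryptor's secret
fu    = random.randrange(1, p)   # discrete log of Fu(W1)
fv    = random.randrange(1, p)   # discrete log of Fv(W2)
r, s  = random.randrange(1, p), random.randrange(1, p)
mu    = random.randrange(1, p)   # plaintext m = gt^mu

# Keys: X = g1^x = g^(alpha*x), Y = g2^x = g^(beta*x)
X, Y = (alpha * x) % p, (beta * x) % p
# Partial proxy key: d1 = g2^alpha * Fu(W1)^r, d2 = g^r
d1, d2 = (alpha * beta + fu * r) % p, r
# Ciphertext: C0 = m * e(X,Y)^s (in G_T), C1 = g^s, C2 = Fu^s, C3 = Fv^s
C0 = (mu + pair(X, Y) * s) % p
C1, C2, C3 = s, (fu * s) % p, (fv * s) % p

# Validity check: e(C1, Fu(W1)*Fv(W2)) == e(g, C2*C3)
assert pair(C1, (fu + fv) % p) == pair(1, (C2 + C3) % p)
# Decrypt: m = C0 / (e(d1, C1) / e(C2, d2))^(x^2)
recovered = (C0 - (pair(d1, C1) - pair(C2, d2)) * pow(x, 2, p)) % p
assert recovered == mu
```

The key identity exercised is e(d_1, C_1) / e(C_2, d_2) = e(g_2, g_1)^s, from which raising to x^2 yields exactly the masking factor e(X, Y)^s.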

3.2 Security Analysis

Theorem 1. The proxy-protected anonymous proxy decryption scheme is IND-CCA
secure against a Type I adversary assuming the Decision Bilinear Diffie-Hellman
problem is hard.

Theorem 2. The proxy-protected anonymous proxy decryption scheme is IND-CCA
secure against a Type II adversary assuming the Decision Bilinear Diffie-Hellman
problem is hard.
A Generic Construction for Proxy Cryptography 289

4 Conclusion
In this paper, we give a generic proxy-protected anonymous proxy cryptography
model together with its security model. Our model captures all the properties
introduced by Lee et al., and the accompanying security model gives the schemes a
precise security guarantee. In particular, in our model the encrypter need not verify
the validity of the delegated decryptor's public key. Furthermore, the original
decryptor can run the same cryptographic operations as the delegated decryptor, but
he cannot impersonate any delegated decryptor. Thus this model protects the privacy
of both the delegated decryptor and the original decryptor. To the best of our
knowledge, this is the first generic model in the literature satisfying these properties.
Finally, we give a concrete proxy decryption scheme as an example.

Acknowledgement. This work is supported by the National Natural Science
Foundation of China (No. 60873232), the Open Research Fund of the Key Laboratory
of Computer Network and Information Integration (Southeast University), Ministry of
Education, China (No. K93-9-2010-10), the Shandong Natural Science Foundation
(No. Y2008A22) and the Shandong Postdoctoral Special Fund for Innovative
Research (No. 200902022).

References
1. Mambo, M., Okamoto, E.: Proxy cryptosystem: Delegation of the power to decrypt
ciphertexts. IEICE Trans. Fundamentals E80-A, 54-63 (1997)
2. Blaze, M., Bleumer, G., Strauss, M.: Divertible protocols and atomic proxy cryptography.
In: EUROCRYPT 1998. LNCS, vol. 1403, pp. 127-144. Springer, Heidelberg (1998)
3. Mu, Y., Varadharajan, V., Nguyen, K.Q.: Delegation decryption. In: Walker, M. (ed.)
Cryptography and Coding 1999. LNCS, vol. 1746, pp. 258-269. Springer, Heidelberg
(1999)
4. Sarkar, P.: HEAD: Hybrid Encryption with Delegated Decryption Capability. In: Canteaut,
A., Viswanathan, K. (eds.) INDOCRYPT 2004. LNCS, vol. 3348, pp. 230-244. Springer,
Heidelberg (2004)
5. Wang, L., Cao, Z., Okamoto, E., Miao, Y., Okamoto, T.: Transformation-free proxy
cryptosystems and their applications to electronic commerce. In: Proceedings of the
International Conference on Information Security (InfoSecu 2004), pp. 92-98. ACM Press,
New York (2004)
6. Canetti, R., Hohenberger, S.: Chosen-ciphertext secure proxy re-encryption. In: ACM
Conference on Computer and Communications Security, pp. 185-194 (2007)
7. Libert, B., Vergnaud, D.: Tracing Malicious Proxies in Proxy Re-Encryption. In: Galbraith,
S.D., Paterson, K.G. (eds.) Pairing 2008. LNCS, vol. 5209, pp. 332-353. Springer,
Heidelberg (2008)
8. Weng, J., Deng, R.H., Chu, C., Ding, X., Lai, J.: Conditional Proxy Re-Encryption Secure
against Chosen-Ciphertext Attack. In: Proc. of the 4th International Symposium on ACM
Symposium on Information, Computer and Communications Security (ASIACCS 2009),
pp. 322-332 (2009)
VCAN-Controller Area Network Based Human Vital
Sign Data Transmission Protocol

Atiya Azmi1,2, Nadia Ishaque2, Ammar Abbas2, and Safeeullah Soomro1


1
Department of Computer Science and Engineering,
Yanbu University College, Yanbu Al-Sinaiyah, Kingdom of Saudi Arabia
{atiya.azmi,safeeullah.soomro}@yuc.edu.sa
2
Sir Syed University of Engineering and Technology, Pakistan
{ammarabbas,786.nadya}@gmail.com

Abstract. Vital sign diagnostic data is high-risk, critical data that requires
time-constrained message transmission. Continuous real-time monitoring of
patients allows prompt detection of adverse events and ensures a better response
to emergency medical situations. This research work proposes a solution for
acquiring human vitals and transferring this data to a remote monitoring station
in real time over the Controller Area Network (CAN) protocol. CAN is already
used as an industrial standardized field bus in a wide range of embedded system
applications where real-time data acquisition is required. Data aggregation and
a few amendments to CAN are proposed for better utilization of the available
bandwidth in the context of patient vital sign monitoring. The results show that
the proposed solution provides efficient bandwidth utilization with a sufficient
number of monitored patients. Even with a high frame rate per patient per
second, an adequate number of patients can be accommodated.

Keywords: Controller Area Network, Human Vital signs, Data Aggregation,


Real Time Monitoring.

1 Introduction
Real-time patient monitoring is the most important part of post-operative or
emergency medical aid provision. Vital sign monitoring data forms messages whose
untimely delivery can lead to life-threatening damage or serious injury. The vital
signs considered for monitoring consist of Heart Rate (HR), Systolic Blood Pressure
(SBP), Diastolic Blood Pressure (DBP), Electrocardiogram (EKG), Oxygen
Saturation (SPO2) and Body Temperature (Temp) [1]. Different mediums and
standards have been proposed by researchers to accomplish this task [2] [3] [4]. This
research work proposes a simplified solution that senses, stores, processes and
transmits human vitals using the Controller Area Network protocol with some
proposed amendments to the frame format. The CAN protocol is selected for its cost
effectiveness, robustness, minimal error rate and utilization in time-critical
applications [5] [6]. The solution lets medical staff enter each patient's context data at
the bedside in order to set ranges for the vital sign alarm values.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 290296, 2011.
Springer-Verlag Berlin Heidelberg 2011
VCAN-Controller Area Network Based Human Vital Sign Data Transmission Protocol 291

2 System Descriptions
The proposed system is divided into six major units. Fig. 1 illustrates the proposed
system, which uses the Controller Area Network (CAN) protocol for transferring
patient vital sign data. Following is a brief description of each unit.

Fig. 1. Block diagram of proposed system

1. Sensing and Digitizing Unit (SDU). The Sensing and Digitizing Unit (SDU)
provides the interface to the sensors attached to the patients for monitoring current
vital signs. It receives data from different sensors and digitizes the input if
needed; analog-to-digital conversion is required for most of the sensors
monitoring the vitals. The SDU receives input from the Context Data Input Unit
(CDU) to change the sampling rate, per the requirement of the doctor or medical
staff, for an individual patient.
2. Bed Side Display Unit (BDU). The patient bedside display unit (BDU) is
available at each patient's bedside. It displays the current vitals of the patient for
visiting doctors and support staff.
3. Context Data Input Unit (CDU). Every patient is treated differently according to
his specific disease and condition. Sometimes certain vital sign values are
acceptable for a particular patient but would be alarming for others, requiring
immediate medical attention. The Patient Context Data Input Unit (CDU) allows
doctors and supporting staff to enter, for each of the patient's vital signs, the range
that should be considered alarming for him, according to his current medical
condition. The CDU provides these input parameters to the Processing and
Aggregation Unit (PAU). The given parameters are used to set an alarm if the
current vital sign data is not within the specified range.
4. Processing and Aggregation Unit (PAU). The Processing and Aggregation Unit
(PAU) is the most important unit of the proposed system; the major processing
tasks are performed here. The PAU gets the sensor data from the SDU and buffers
it for processing. The buffered data is checked against the parameters provided by
the CDU. If the current data value for any vital sign is not in the normal range, the
PAU triggers an alarm at the patient's BDU and sets the Ro bit of the CAN frame.
Another major task of the PAU is that it aggregates the data of the six sensors into
one data set (56 bits) and provides it to the CAN interface as a single value along
with the Ro bit value. This reduces the control-bit overhead that arises when data
from each individual sensor is sent separately.
292 A. Azmi et al.

This Ro bit is utilized by the CAN receiver to set an alarm at the remote
monitoring site.
5. Controller Area Network Interface Unit (CANI). The Controller Area Network
Interface Unit (CANI) gets the aggregated data from the PAU as a single 56-bit
value. The CANI adds control bits according to the CAN protocol and also sets the
Ro bit as given by the PAU. Finally, the CAN PDU is sent on the CAN bus to the
remote monitoring station.
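The aggregation step performed by the PAU can be sketched as straightforward bit packing, using the per-signal bit widths from Table 1 (8 bits each for Temp, HR, SPO2, SBP and DBP, and 16 bits for EKG). The field order and the raw encodings below are illustrative assumptions, not the paper's exact layout.

```python
# Minimal sketch of the PAU aggregation step: pack six vital-sign readings
# into one 56-bit value (= 7 CAN data bytes). Field order is an assumption.
WIDTHS = [("temp", 8), ("hr", 8), ("spo2", 8), ("sbp", 8), ("dbp", 8), ("ekg", 16)]

def pack_vitals(vitals: dict) -> int:
    """Aggregate six readings into a single 56-bit integer for the CAN frame."""
    agg = 0
    for name, width in WIDTHS:
        value = vitals[name]
        assert 0 <= value < (1 << width), f"{name} out of range"
        agg = (agg << width) | value
    return agg  # fits in 8*5 + 16 = 56 bits

def unpack_vitals(agg: int) -> dict:
    """Inverse of pack_vitals, used by the receiver at the monitoring station."""
    out = {}
    for name, width in reversed(WIDTHS):
        out[name] = agg & ((1 << width) - 1)
        agg >>= width
    return out

sample = {"temp": 12, "hr": 72, "spo2": 98, "sbp": 80, "dbp": 65, "ekg": 240}
assert pack_vitals(sample) < (1 << 56)
assert unpack_vitals(pack_vitals(sample)) == sample
```

Sending one aggregated 56-bit payload instead of six separate frames is what saves the per-frame control-bit overhead described above.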

3 Benefits of Using CAN

Transferring patient vital sign monitoring data imposes different network protocol
requirements than transferring large blocks of data on a communication network:
the data should be sent and received in real time, with a minimal error rate and high
reliability. The following facts significantly support the use of the CAN protocol
for this purpose:
CAN has been utilized and proven efficient in time-critical applications that
require guaranteed timely delivery of data for processing and decision
making [5].
The error rate of a CAN protocol network is minimal [5].
CAN provides deterministic and secure data exchange [6].
Fault confinement is another important aspect of CAN: it automatically drops
a faulty node from the network and prevents any node from bringing down the
network [7].
Ethernet and other commonly used protocols have much higher overhead in terms
of control bits and management messages than the CAN protocol.
CAN supports a wired medium for signal transmission, which reduces the
possibility of interference. A wired medium is generally recommended for use
within hospital limits, where numerous interfering pieces of equipment are present.
CAN is an industrial standardized protocol (ISO 11898) with a maximum speed of
1 Mb/s and can be implemented in real-time systems [5].
Plug-and-play support in CAN provides the opportunity to add new nodes (patients)
at run time [5], [7].

4 CAN Protocol
Controller Area Network (CAN) is a serial communication protocol introduced by
Bosch [7] in the 1980s. Variants of CAN are utilized as low-level messaging protocols
by many vendors and researchers [8][9][10] for different time-critical applications.
This research work is based on the standard CAN 2.0A protocol.

Fig. 2. CAN2.0A Frame Format



Standard CAN 2.0A: CAN 2.0A uses a very simple frame with minimal control data
bits. Fig. 2 shows the frame format of CAN 2.0A [7].
Start of Frame (SOF): marks the start of the message and also helps
synchronization.
Identifier: an 11-bit identifier that identifies the CAN module. The lower the binary
value of the identifier, the higher the priority for bus access.
Remote Transmission Request (RTR): used to request data from a particular
node.
Identifier Extension (IDE): defines whether the extended or the standard CAN
identifier is used.
Reserved bit (Ro): a reserved bit that can be utilized for any new functionality.
Data Length Code (DLC): defines how many bytes of data are contained in the
CAN frame.
DATA: CAN 2.0A can accommodate up to 8 bytes of data per frame.
Cyclic Redundancy Check (CRC): a 16-bit CRC field used for error detection.
ACK: 2 ACK bits utilized for indicating an error-free message.
End of Frame (EOF): 7 bits that mark the end of the CAN frame.
Inter-Frame Space (IFS): required before the next CAN frame arrives.
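From the field sizes listed above, the nominal length of a CAN 2.0A data frame can be computed directly. Bit stuffing is load-dependent and excluded here; the field widths follow the Bosch CAN 2.0A specification (the 16-bit CRC field above is 15 CRC bits plus a delimiter, and the 2 ACK bits are the slot plus a delimiter).

```python
# Sketch: nominal CAN 2.0A data-frame length from the field sizes above
# (bit stuffing excluded; IFS counted separately).
FIELDS = {
    "SOF": 1, "Identifier": 11, "RTR": 1, "IDE": 1, "r0": 1,
    "DLC": 4, "CRC": 15, "CRC delimiter": 1,
    "ACK slot": 1, "ACK delimiter": 1, "EOF": 7,
}
IFS = 3  # inter-frame space required before the next frame

def frame_bits(data_bytes: int) -> int:
    """Total bits on the bus for one data frame (without stuffing or IFS)."""
    assert 0 <= data_bytes <= 8, "CAN 2.0A carries at most 8 data bytes"
    return sum(FIELDS.values()) + 8 * data_bytes

# The proposed system aggregates six vitals into 56 bits = 7 data bytes:
print(frame_bits(7))        # 44 overhead bits + 56 data bits = 100
print(frame_bits(7) + IFS)  # 103 bits including inter-frame space
```

Multiplying this frame length by the per-patient frame rate gives the per-patient bandwidth figures discussed in the results section.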

5 Vital Sign Data

The main vital signs considered by all physicians are heart rate, blood oxygenation
level, temperature, and blood pressure. Apart from these standard signs, the EKG is
additional diagnostic medical data that is considered important in most cases
[10]. Vital signs indicate the body's functionality level and help in assessing any
abnormalities from a person's normal and healthy state. The normal measurement
ranges of these vital signs change with the person's age, sex and medical
condition. The nursing staff is responsible for entering this patient context data into
the bedside monitoring device and setting the alarm value ranges for each vital sign.

Table 1. Human Vital Sign Data Ranges

Vital Sign   Range          Max. Data Values   No. of Bits
Temp         96-108 F       13                 8
Heart Rate   30-200 bpm     171                8
SPO2         0-100%         101                8
SBP          40-250         211                8
DBP          15-200         186                8
EKG          30-300         270                16

In case of any abnormalities, the necessary treatment or medical procedure can be
adopted for a particular patient. Table 1 illustrates the vital sign ranges that are
usually supported by most vendors and the corresponding number of bits utilized to
represent each range.

6 Proposed Changes in the Controller Area Network Protocol:
The VCAN

The following are the minor changes and enhancements recommended in the CAN
protocol to make efficient use of it for real-time transmission of human vital sign data
to a remote patient monitoring station. We call this enhanced protocol the VCAN.

7 Results and Discussions

This section presents a quantitative analysis of the maximum number of patients,
bandwidth utilization with an increasing number of patients, and the bandwidth
requirement per patient. The results are based on the VCAN protocol described in
Section 6.

Fig. 3. Bandwidth requirements per patient in kbps: approximately 24, 28 and 46 kbps
at 256, 300 and 500 frames per patient per second respectively

Fig. 3 illustrates the bandwidth requirement per patient: the frame rate per patient is
on the X-axis and the bandwidth in kilobits per second (kbps) on the Y-axis.
Possible rates of 256, 300 and 500 frames per second are presented; these rates were
selected because the ECG/EKG is sampled by different devices at different rates [10].
The result shows that the bandwidth requirement for a single user will be close to
24 kbps, 28 kbps and 46 kbps with 256, 300 and 500 frames respectively sent per
user per second.
Figures 4 and 5 show bandwidth utilization with an increasing number of patients.
Figure 6 depicts the number of patients that can be monitored by our system with the
proposed amendments to the CAN 2.0A protocol. Under ideal conditions, with availability of

Fig. 4. Maximum bandwidth utilization (1 Mbps): per-patient rates of 24, 28 and 46 kbps
versus number of users

Fig. 5. Bandwidth utilization with consideration of 20% losses (800 kbps): per-patient rates of
24, 28 and 46 kbps versus number of users

maximum bandwidth, i.e. 1 Mbps, the maximum numbers of patients whose vital sign
data can be transferred to the remote monitoring location are 39, 33 and 20, with
bandwidth requirements of 24 kbps, 28 kbps and 46 kbps respectively. These data
rates, as defined earlier, depend upon the frame rate selected per patient.
Figure 6 also shows the decreased number of patients when a 20% bandwidth loss
due to line errors and other overheads is considered. Even then, the given solution
would support 33, 28 and 17 patients respectively. That is still a high number of
patients that can be supported with high frame rates in real time.

Fig. 6. Maximum number of patients accommodated by the system with 1 Mbps and with
800 kbps of available bandwidth: 39, 33 and 20 patients at 24, 28 and 46 kbps per patient under
1 Mbps; 33, 28 and 17 under 800 kbps

References
1. Ahmed, A., Riedl, A., Naramore, W.J., Chou, N.Y., Alley, M.S.: Scenario-based Traffic
Modeling for Data Emanating from Medical Instruments in Clinical Environment. In: 2009
World Congress on Computer Science and Information Engineering. IEEE Computer
Society, Los Alamitos (2009)
2. Keong, H.C., Yuce, M.R.: Analysis of a Multi-Access Scheme and Asynchronous
Transmit-Only UWB for WBANs. In: Annual International Conference of the IEEE,
EMBC (2009)
3. Farshchi, S., Pterev, A., Nuyujukian, P.H., Mody, I., Judy, J.W.: Bi-Fi: An Embedded
Sensor/System Architecture for Remote Biological Monitoring. IEEE Transactions on
Information Technology in Biomedicine 11(6) (November 2007)
4. Wang, P.: The Real-Time Monitoring System for In-Patient Based on Zigbee. In: Second
International Symposium on Intelligent Information Technology Application (2006)
5. Chen, H., Tian, J.: Research on the Controller Area Network. In: IEEE International
Conference on Networking and Digital Society (2009)
6. Klehmet, U., Herpel, T., Hielscher, K., German, R.: Real Time Guarantees for CAN Traffic.
In: VTC Spring 2008, pp. 3037-3041. IEEE, Los Alamitos (2008)
7. Bosch: CAN Specification Version 2.0. Robert Bosch GmbH, Stuttgart, Germany (1991)
8. Misbahuddin, S., Al-Holou, N.: Efficient Data Communication Techniques for Controller
Area Network (CAN) Protocol. In: Proceedings of the ACS/IEEE International Conference
on Computer Systems and their Applications, Tunis, Tunisia (July 2003)
9. Cenesiz, N., Esin, M.: Controller Area Network (CAN) for Computer Integrated
Manufacturing Systems. Journal of Intelligent Manufacturing 15, 481-489 (2004)
10. Zubairi, J.A., Misbahuddin, S., Tassudduq, I.: Emergency Medical Data Transmission
Systems and Techniques. In: Handbook of Research on Advances in Health Informatics
Study on the Some Labelings of Complete
Bipartite Graphs

WuZhuang Li, GuangHai Li, and QianTai Yan

Dept. of Computer Science, Anyang Normal University,
Anyang, Henan 455002, P.R. China
lwz@aynu.edu.cn

Abstract. If the vertex set V of G = <V, E> can be divided into two non-empty sets
X and Y with X ∪ Y = V and X ∩ Y = ∅, such that the two endpoints of every edge
belong to X and Y respectively, then G is called a bipartite graph. If for all x_i ∈ X,
y_j ∈ Y we have (x_i, y_j) ∈ E, then G is called a complete bipartite graph; if |X| = m,
|Y| = n, G is denoted K_{m,n}. In this paper the graceful labeling, k-graceful
labeling, odd graceful labeling and odd strongly harmonious labeling of K_{m,n}
are given.

Keywords: complete bipartite graphs, graceful labelings, k-graceful labelings,
odd graceful labelings, odd strongly harmonious labelings.

1 Introduction
Graph theory is a branch of mathematics, and especially an important branch of
discrete mathematics. It has been applied in many different fields in the modern
world, such as physics, chemistry, astronomy, geography and biology, as well as in
computer science and engineering.
This paper mainly researches graph labeling. Graph labeling traces its origin to the
famous conjecture, presented by A. Rosa in 1966, that all trees are graceful. A vertex
labeling is a mapping from the vertex set into the integer set; according to the
different requirements on the mapping, many variations of graph labeling have
evolved. In 1988, F. Harary introduced the notion of an (integral) sum graph. The sum
graph was generalized to the mod sum graph by Bolland, Laskar, Yurner and Domke
in 1990. The concepts of sum graph and integral sum graph have been extended to
hypergraphs by Sonntag and Teichert in 2000.
Bipartite graphs are widely applied, but not all bipartite graphs are graceful graphs;
therefore, it is necessary to research their gracefulness further. Based on the
conjecture put forward by Professor Ma Kejie that the crown of complete bipartite
graphs is k-graceful (m ≤ n, k ≥ 2), the conjecture is proved in this paper by a
construction method for (m = 1 or 2, k ≥ 2) and (m ≥ 3, k ≥ (m − 2)(n − 1)), and the
demonstration extends the range of k-graceful research.
Along with the development of the computer, the labelings of graphs are applied
more and more extensively in realms such as networks and telecommunications.
Over the years many kinds of labelings of graphs have been developed; among
them, research on graceful labelings and

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 297301, 2011.
Springer-Verlag Berlin Heidelberg 2011
298 W. Li, G. Li, and Q. Yan

harmonious labelings is the most active. This text discusses a type of special graph
and its various labelings. In this paper the graceful labeling, k-graceful labeling, odd
graceful labeling and odd strongly harmonious labeling are given.

2 Lemma

Definition 1. Let G = <V, E> be a simple graph. If for each v ∈ V there exists a
nonnegative integer f(v) satisfying:
1) for all u, v ∈ V with u ≠ v, f(u) ≠ f(v);
2) max{ f(v) | v ∈ V } = |E|;
3) for all e_1, e_2 ∈ E with e_1 ≠ e_2, f(e_1) ≠ f(e_2), where f(e) = |f(u) − f(v)| for e = uv;
then G is called a graceful graph, and f is called a graceful labeling.

Definition 2. Let G = <V, E> be a simple graph. G is called k-graceful if there exists an
injection f : V(G) → {0, 1, 2, ..., |E| + k − 1} such that the induced mapping
f* : E(G) → {k, k + 1, ..., |E| + k − 1} becomes a bijection, where f* is defined by
f*(uv) = |f(u) − f(v)| for e = uv ∈ E. Then G is called a k-graceful graph and f a
k-graceful labeling.

Definition 3. Let G = <V, E> be a simple graph. G is called odd graceful if there exists an
injection f : V → {0, 1, 2, ..., 2|E| − 1} such that the induced mapping
f* : E(G) → {1, 3, 5, ..., 2|E| − 1} becomes a bijection, where f* is defined by
f*(uv) = |f(u) − f(v)| for e = uv. Then G is called an odd graceful graph and f an odd
graceful labeling.

Definition 4. Let G = <V, E> be a simple graph. If there exists an injection
f : V → {0, 1, 2, ..., 2|E| − 1} such that the induced mapping
f* : E(G) → {1, 3, 5, ..., 2|E| − 1} becomes a bijection, where f* is defined by
f*(uv) = f(u) + f(v) for e = uv ∈ E, then G is called an odd strongly harmonious graph
and f an odd strongly harmonious labeling.

Definition 5. If the vertex set V of G = <V, E> can be divided into two non-empty sets X
and Y with X ∪ Y = V and X ∩ Y = ∅, such that the two endpoints of every edge
belong to X and Y respectively, then G is called a bipartite graph. If for all x_i ∈ X,
y_j ∈ Y, (x_i, y_j) ∈ E, then G is called a complete bipartite graph; if |X| = m, |Y| = n,
G is denoted K_{m,n}.

Below we discuss the graceful labeling, k-graceful labeling, odd graceful labeling
and odd strongly harmonious labeling.

3 Main Result and Proof

In the bipartite graph K_{m,n}, the nodes are marked
Study on the Some Labelings of Complete Bipartite Graphs 299

X = {x_1, x_2, ..., x_m}, Y = {y_1, y_2, ..., y_n},
with (x_i, y_j) ∈ E(K_{m,n}) for (i = 1, 2, ..., m; j = 1, 2, ..., n).

Theorem 1. K_{m,n} is a graceful graph.

Proof. Give K_{m,n} the labeling f as follows:

f(x_i) = i − 1, (i = 1, 2, ..., m),
f(y_j) = jm, (j = 1, 2, ..., n).

We prove that f is a graceful labeling.

(1) The maximum of f(x_i) (i = 1, 2, ..., m) is m − 1, and the minimum of f(y_j)
(j = 1, 2, ..., n) is m; hence f : V(K_{m,n}) → {0, 1, ..., mn} is an injection.

(2) The labeling of the edges is as follows:

f*(x_i y_j) = f(y_j) − f(x_i) = jm − i + 1.

When i ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n}, the values f*(x_i y_j) run over {1, 2, ..., mn},
so f* : E(K_{m,n}) → {1, 2, ..., mn} is a bijection.

Hence we have proved that f is a graceful labeling.
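Theorem 1's labeling can be verified computationally for small m and n. The sketch below takes the edge label as the absolute difference of the endpoint labels, per Definition 1:

```python
# Quick check of Theorem 1's labeling for K_{m,n}:
# f(x_i) = i - 1, f(y_j) = j*m; the edge labels |f(x) - f(y)| should be a
# bijection onto {1, ..., m*n}.
def graceful_check(m: int, n: int) -> bool:
    fx = [i - 1 for i in range(1, m + 1)]
    fy = [j * m for j in range(1, n + 1)]
    vertex_labels = fx + fy
    edge_labels = {abs(a - b) for a in fx for b in fy}
    return (len(set(vertex_labels)) == m + n               # f is injective
            and max(vertex_labels) == m * n                # labels lie in {0,...,mn}
            and edge_labels == set(range(1, m * n + 1)))   # edge bijection

assert all(graceful_check(m, n) for m in range(1, 7) for n in range(1, 7))
```

For example, for K_{2,3} the vertex labels are {0, 1} and {2, 4, 6}, and the edge differences are exactly {1, 2, 3, 4, 5, 6}.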

Theorem 2. K_{m,n} is a k-graceful graph.

Proof. Give K_{m,n} the labeling f as follows:

f(x_i) = i − 1, (i = 1, 2, ..., m),
f(y_j) = k + m − 1 + (j − 1)m, (j = 1, 2, ..., n).

We prove that f is a k-graceful labeling.

(1) Clearly, f : V(K_{m,n}) → {0, 1, ..., k + |E| − 1} (|E| = mn) is an injection.

(2) f*(x_i y_j) = f(y_j) − f(x_i) = k + m − i + (j − 1)m.

When i ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n},

f*(x_i y_j) ∈ {k, k + 1, ..., k + |E| − 1},

so f* : E(K_{m,n}) → {k, k + 1, ..., k + |E| − 1} is a bijection.

By (1) and (2), we have proved that f is a k-graceful labeling.


Theorem 3. K_{m,n} is an odd graceful graph.

Proof. Give K_{m,n} the labeling f as follows:

f(x_i) = 2(i − 1), (i = 1, 2, ..., m),
f(y_j) = 2(m − 1) + 1 + 2m(j − 1) = 2mj − 1, (j = 1, 2, ..., n).

We prove that f is an odd graceful labeling.

(1) Among the f(x_i) the maximum is 2(m − 1); among the f(y_j) the minimum is
2(m − 1) + 1. Hence f : V(K_{m,n}) → {0, 1, ..., 2mn − 1} is an injection.

(2) f*(x_i y_j) = f(y_j) − f(x_i) = 2(mj − i) + 1.

When i ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n},

f*(x_i y_j) ∈ {1, 3, 5, ..., 2|E| − 1},

so f* : E(K_{m,n}) → {1, 3, 5, ..., 2|E| − 1} is a bijection.

Hence we have proved that f is an odd graceful labeling.

Theorem 4. K_{m,n} is an odd strongly harmonious graph.

Proof. Give K_{m,n} the labeling f as follows:

f(x_i) = 2(i − 1), (i = 1, 2, ..., m),
f(y_j) = 1 + 2m(j − 1), (j = 1, 2, ..., n).

We prove that f is an odd strongly harmonious labeling.

(1) The f(x_i) are even numbers and the f(y_j) are odd numbers, so
f : V(K_{m,n}) → {0, 1, 2, ..., 2|E| − 1} is an injection.

(2) f*(x_i y_j) = f(x_i) + f(y_j) = 2i + 2m(j − 1) − 1.

When i ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n} (and |E| = mn),
f* : E(K_{m,n}) → {1, 3, 5, ..., 2|E| − 1} is a bijection.

Hence we have proved that f is an odd strongly harmonious labeling.
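Likewise, the labelings of Theorems 3 and 4 can be checked for small m and n: the odd graceful edge labels are absolute differences and the odd strongly harmonious edge labels are sums, and both should cover exactly the odd numbers {1, 3, ..., 2mn − 1}.

```python
# Computational check of Theorems 3 and 4 for K_{m,n}.
def odd_labels(m: int, n: int) -> set:
    return set(range(1, 2 * m * n, 2))  # {1, 3, ..., 2mn - 1}

def odd_graceful_check(m: int, n: int) -> bool:
    # Theorem 3: f(x_i) = 2(i-1), f(y_j) = 2mj - 1; edge labels are differences.
    fx = [2 * (i - 1) for i in range(1, m + 1)]
    fy = [2 * m * j - 1 for j in range(1, n + 1)]
    return {abs(a - b) for a in fx for b in fy} == odd_labels(m, n)

def odd_harmonious_check(m: int, n: int) -> bool:
    # Theorem 4: f(x_i) = 2(i-1), f(y_j) = 1 + 2m(j-1); edge labels are sums.
    fx = [2 * (i - 1) for i in range(1, m + 1)]
    fy = [1 + 2 * m * (j - 1) for j in range(1, n + 1)]
    return {a + b for a in fx for b in fy} == odd_labels(m, n)

assert all(odd_graceful_check(m, n) and odd_harmonious_check(m, n)
           for m in range(1, 6) for n in range(1, 6))
```

For K_{2,2}, for instance, Theorem 3 gives vertex labels {0, 2} and {3, 7} with edge differences {1, 3, 5, 7}, and Theorem 4 gives {0, 2} and {1, 5} with edge sums {1, 3, 5, 7}.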

4 Conclusion
Bipartite graphs are widely applied. In this paper, by a construction method we have
given the graceful labeling, k-graceful labeling, odd graceful labeling and odd
strongly harmonious labeling of the complete bipartite graph K_{m,n}.

References
1. Ma, K.: Graceful Graphs. Beijing University Press, Beijing (1991)
2. Slater, P.J.: On k-graceful graphs. In: Proc. of the 13th S.E. Conference on Combinatorics,
Graph Theory and Computing, Boca Raton, pp. 52-57 (1982)
3. Maheo, M., Thuillier, H.: On d-graceful graphs. Ars Combinatoria 13(1), 181-192 (1982)
4. Gallian, J.A.: A dynamic survey of graph labeling. The Electronic Journal of Combinatorics
(July 2000)
5. Ma, K.J.: Graceful Graphs. Peking University Press, Beijing (1991) (in Chinese)
6. Slater, P.J.: On k-graceful graphs. In: Proc. of the 13th S.E. Conference on Combinatorics,
Graph Theory and Computing, Boca Raton, pp. 52-57 (1982)
7. Kotzig, A.: Recent results and open problems in graceful graphs. Congressus Numerantium 44,
197-219 (1984)
An Effective Adjustment on Improving the Process of
Road Detection on Raster Map*

Yang Li1,**, Xiao-dong Zhang1, and Yuan-lu Bao 2


1
Beijing Jiaotong University, Beijing, China
liyang@bipt.edu.cn
2
University of Science and Technology of China, Hefei, China

Abstract. This research aims to improve the whole process of recognizing and
extracting the road layer from a raster map, and to make the whole process a
closed feedback system with measurable output, controllable state and an
adaptively optimized recognition sub-process. The scheme builds an integrated
system whose output road layer is measurable, by selecting two (or more)
comparatively good identification methods with complementary mechanisms
and connecting them in parallel as the controlled sub-processes of the integrated
system. Based on the approach of microscopic identification and macroscopic
elimination, the system first performs an initial recognition of the road layer and
finally withdraws the road map layer completely via re-clustering of some
output layers and feedback control. For a standard municipal transportation
map, the road and region threshold conditions are determined; through
re-clustering of the noise, the whole map can be completely divided into the
two parts of road and non-road (region). The feedback re-clustering strategies
for recognition of the transportation road network are based on the core
characteristics of road constitution, and the convergence of the feedback
re-clustering of the road layer guarantees the optimization of the map layer.

Keywords: nonlinear adjusting, Recognition Vector Map, Adaptive Control.

1 Introduction
The digital age requires digital communications and brings China the task of mass
production and updating of traffic vector maps. The current transportation system,
due to population concentration and a high degree of modernization, has several
serious problems. The role of GPS traffic management, vehicle monitoring, and the
development and practice of personal navigation technology in alleviating traffic
congestion and maintaining social security has become apparent. All applications
involving GPS navigation systems for traffic management or control depend on road
location; in a sense, the development of vector traffic GIS has become the critical
point and bottleneck of advancing traffic management.
Digital maps in a Geographic Information System (GIS) can be divided into two
categories: grid (raster image) maps and vector maps. A vector map is obtained from the grid

*
This work was supported in part by the National Natural Science Foundation of China under
Grant 60974092.
**
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 302308, 2011.
Springer-Verlag Berlin Heidelberg 2011
An Effective Adjustment on Improving the Process of Road Detection on Raster Map 303

map by computer processing. Improving the process of extracting road network
traffic information from grid maps has thus become the basis of the automatic
production of vector maps. In the current literature, various methods have been
proposed to recognize the road network of a grid map; many are original, and some
are also very effective for particular maps. However, from the perspective of system
analysis, the whole identification process is an open-loop system: one needs to
manually observe the road layer and manually adjust the road sampling threshold.
Making the road information a significant output variable of the system and turning
the whole road network recognition process into a feedback system is critical for
solving these problems, improving the identification process for traffic grid road map
information, and achieving automatic recognition. We therefore improve the whole
identification process for traffic grid information, making it a closed-loop system
with measurable output, controllable targets and an adaptively optimized recognition
process.

2 Basic Principle
The comprehensive road map information identification system is shown in Fig. 1.
The basic idea is to adjust the recognition threshold according to the output of the
road layer. The realization process is shown in Fig. 2. Denoting the road layer by R,
the non-road area layer by A and the whole map by M, the closed-loop control
objectives are measure(R ∩ A) → 0 and measure(R ∪ A) → M: the road recognition
subsystem output should cover all roads, and the area recognition subsystem output
should cover all non-road area. In other words, optimizing the output of the
comprehensive road map information identification system directly corresponds to
improving the recognition accuracy of the output road information.

Fig. 1. Schematic of the initial design of the comprehensive road map information
identification system

304 Y. Li, X.-d. Zhang, and Y.-l. Bao

First, the raster traffic map image undergoes conventional pre-processing before identification, including the elimination of obvious noise points and the adjustment of color, brightness and contrast. After this conventional raster processing, the image is passed to the road-network layer recognition process and to the non-road block layer recognition process, respectively. The recognition algorithm for the road-network layer is based mainly on road color, network-wide connectivity and parallel-edge features; recognition of the non-road block layer is based mainly on region color, simply connected shapes and closed borders. Both sub-processes are treated as controlled weak pattern-recognition processes, and the sensitive adjustment range of their evaluation-function thresholds is determined.

Fig. 2. Initial design of the identification process of the comprehensive road-map information identification system
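The closed-loop idea of adjusting the sampling threshold from the road-layer output can be sketched as follows. This is a minimal illustrative assumption: the recognition function and the validity test below are placeholders, not the paper's algorithms.

```python
# Hypothetical sketch of the closed-loop threshold adjustment described above.
# recognize_roads() and road_layer_is_valid() stand in for the paper's
# road-recognition subsystem and its connectivity-based validity check.

def recognize_roads(image, threshold):
    # placeholder: mark pixels darker than the threshold as road
    return [[1 if px < threshold else 0 for px in row] for row in image]

def road_layer_is_valid(road_layer, min_road_pixels=4):
    # placeholder validity check: enough road pixels were found
    return sum(map(sum, road_layer)) >= min_road_pixels

def closed_loop_extract(image, threshold=50, step=10, max_iter=20):
    """Adjust the sampling threshold from the road-layer output (feedback)."""
    for _ in range(max_iter):
        roads = recognize_roads(image, threshold)
        if road_layer_is_valid(roads):
            return roads, threshold
        threshold += step          # feedback: relax the threshold and retry
    return roads, threshold

image = [[30, 200, 35], [40, 210, 45], [32, 205, 38]]
roads, th = closed_loop_extract(image, threshold=20)
```

The point of the loop is that the road-layer output itself drives the threshold, rather than a human observer.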

For comprehensive identification of the road-network layer, connectivity is the key factor that decides correct identification and feedback control. Starting only from classical system-design theory, which considers external inputs and simple output feedback, it is difficult to control the internal processes of the comprehensive identification system. The simple partition of the comprehensive identification distinguishes two factors, the road and the region; these two factors (equivalent to the two dominant poles of the system) simplify the handling of the process.
Taking the recognition system's internal processes into account, the recognition result of each sub-process is in fact always composed of three elements: road pixel blocks, region pixel blocks and noise pixel blocks. The processing inside the system is improved by inserting a noise re-clustering step into the middle of each process. Each noise re-clustering step consists of:
Noise pixel blocks are assigned to the road or to a region;
The result of the re-clustering, namely the connectivity of the center lines of the road-network layer, the width of feeder roads and other measurable characteristics, is used to
An Effective Adjustment on Improving the Process of Road Detection on Raster Map 305

distinguish and determine the reasonableness of the road network, i.e., whether it is the final output result;
If the road network does not meet the specification, the irrational part of the pixel blocks is used to modify the normalized map from before this noise clustering, changing noise that should not have been assigned to the road into region;
A new round of noise re-clustering sub-processes is started.
A diagram of the comprehensive road-map information identification system using this synthesis is shown in Figure 3.

Fig. 3. The comprehensive road-map information identification system, taking the system's internal processing into account

3 Extraction Process for Road Identification on Traffic Maps

The comprehensive road-map information identification system is shown in Figure 4. In general, the improved extraction process for identifying roads on a city raster traffic map is divided into the following sub-processes:

Fig. 4. Improved extraction process for identifying roads on a city raster traffic map

3.1 Image Pre-processing Sub-process

Characteristic module matching of the printed map eliminates markings specific to it that are not conducive to road identification, yielding the pre-processed color map T1. From then on, T1 serves as the input source for road identification and extraction.

3.2 Traffic Map Normalization Sub-process

This sub-process implements the re-clustering of the road R and the region A. A color traffic map containing all the noise N0 (covering the white on the road layer and the gray on the area layer) is obtained as the normalized map T2k, (k = 1, 2, ..., n).
The normalization algorithm for roads and regions proposes a separate closed-loop road-feature recognition algorithm and a closed-loop region-feature recognition algorithm, normalizing the shapes of the road layer and the region layers. The method, based on the gray-scale map, achieves automatic normalization of the urban traffic map together with recognition. A block diagram of the realization is shown in Figure 5.

Fig. 5. Schematic of the automatic normalization algorithm with recognition

3.3 Noise Re-clustering Sub-process

The noise N0 is eliminated by identifying or deciding, for each noise point, its conversion to road or area, i.e., noise-to-road re-clustering and noise-to-area re-clustering. This is repeated until the noise set Nn is the empty set and the complete clustering into road R and region A is obtained as the black-and-white binary image T2k+1, (k = 1, 2, ..., n).
The criterion used is a complete re-clustering rule for the eight neighboring pixels based on directional extension features, comprising unbiased and biased clustering criteria. Unbiased clustering criteria alone cannot remove the noise completely, whereas with the biased clustering criteria the noise can be removed completely. As shown in Figure 6, Tin = R ∪ A ∪ N and Tout = R ∪ A, which constitutes the complete clustering process.

Fig. 6. Complete re-clustering of noise based on the directional extension features of the eight neighboring pixels
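The re-clustering rule can be sketched as follows — a minimal illustrative assumption that assigns each noise pixel to road or region by a vote over its eight neighbors, with ties biased toward the road class; the paper's actual criteria additionally use directional extension features.

```python
# Hypothetical sketch: re-cluster noise pixels (N) into road (R) or region
# (A) by a vote over the eight neighbors. Repeats until no noise remains,
# mirroring the "until the noise set Nn is empty" loop in the text.
R, A, N = 'R', 'A', 'N'

def recluster_noise(grid):
    h, w = len(grid), len(grid[0])
    while any(N in row for row in grid):
        new = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                if grid[y][x] != N:
                    continue
                neigh = [grid[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]
                # biased criterion (assumption): ties go to the road class
                new[y][x] = R if neigh.count(R) >= neigh.count(A) else A
        grid = new
    return grid

grid = [[R, N, A],
        [R, N, A],
        [R, N, A]]
out = recluster_noise(grid)
```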

3.4 Closed-Loop Feedback Sub-process to Improve Road Recognition

Based on the road center-line violation criterion applied to the binary image T2k+1, the corresponding noise blocks of the normalized map T2k are changed into region, giving the feedback-improved color map T2k+2, which is either output or passed to the next round of the normalization and noise re-clustering processes. The loop ends when no road center-line violations remain in the binary image T2k+1.
The closed-loop feedback process first extracts the roads from the normalized map T2, then performs noise re-clustering, and finally feeds T3 into the road-irregularity sub-identification introduced in this section. The updated map T2k is noise re-clustered again, constituting the closed-loop feedback road identification process. Each closed-loop feedback round creates two new maps, T2k and T2k+1, (k = 1, 2, ..., n). The road-irregularity sub-identification and closed-loop feedback process can be described as follows: first, the binary image T2k+1 is thinned to obtain the road center lines, which are overlaid on the original map to form T2k; then, based on the center-line irregularity criteria, the content of the corresponding noise blocks of the normalized map T2k is changed, giving the feedback-improved color map T2k+2, which is output to the next round's normalization sub-process of the closed-loop process. When no road center-line violations occur in the binary image T2k+1, the cycle ends and the final result T2n+1 is output. A schematic of the complete closed-loop feedback road identification process is shown in Figure 7.

Fig. 7. Schematic of the closed-loop feedback road extraction process
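The iteration above — check center-line violations, correct the normalized map, repeat — can be sketched generically. The violation test and the correction step below are toy placeholders standing in for the paper's thinning-based criterion and noise-block correction.

```python
# Hypothetical sketch of the closed-loop feedback loop of Section 3.4.
# centerline_violations() and correct_map() are placeholders for the
# paper's thinning-based violation criterion and noise-block correction.

def centerline_violations(binary_map):
    # toy criterion for the sketch: an odd road-pixel count is a "violation"
    return binary_map.count(1) % 2

def correct_map(normalized_map):
    # toy correction: flip one noise block into region (drop a road pixel)
    i = normalized_map.index(1)
    return normalized_map[:i] + [0] + normalized_map[i + 1:]

def closed_loop_feedback(t2, max_rounds=10):
    for _ in range(max_rounds):
        if centerline_violations(t2) == 0:
            return t2                 # no violations: final result T2n+1
        t2 = correct_map(t2)          # feedback: update the normalized map
    return t2

result = closed_loop_feedback([1, 1, 1])
```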

3.5 Conventional Road Treatment and Network Integrity Verification

The road-network layer is output as a black-and-white binary bitmap. Since the previous cycles have corrected the road network completely without changing the normalization threshold, if a problem is found during the verification process, the feedback process is started, the relevant thresholds of the normalized maps are adjusted, and a new identification process is begun from the original map.

4 Effect and Conclusions of Automatic Road Information Identification

The improved recognition process for raster road-traffic map information has been implemented in the corresponding software RoadExtr.exe, which obtained software copyright registration (Registration No. 2005SR09787). The important results, which can be validated in the field, are as follows: for all the major cities (including Hong Kong and Taipei) in the "China Highway Traffic Atlas" CD-ROM, 39 color city maps were processed by automatic extraction of the road transport network, with consistent results; the total time spent was 4 hours.
With a digital map platform and GPS tracks of part of a city's roads, a vector map of the city's traffic with basic position calibration can be produced within 20 minutes, starting from automatic identification of the raster map, using the automatic digital-map generation and calibration platform developed independently by our laboratory.
The appended maps of Hefei, with the generation and calibration instructions for the city road network and the automatic identification and extraction process and results, show that the comprehensive road-map information identification system platform we constructed achieves the desired effect of automatic road information identification and is a success.

References
1. Li, D.: Digital Earth and the Three S Technology. GIS Forum (2001), http://www.gischina.com
2. Fan, C., Li, Z., Ye, X., Gu, W.: The Edge of the Road Approach Recognition System and Its Real-time Implementation. Signal Processing 14(4), 337–345 (1998)
3. Zhang, W., Huang, X., Bao, Y., Shi, J.: Automatic Generation of Vector Electronic Map. Microelectronics and Computer 16(4), 30–32 (1999)
4. Shen, Q., Tang, L.: Introduction to Pattern Recognition. National Defense University Press (1991)
5. Guo, J., Yao, Z., Bao, Y., Zhang, W.: An Automatic Correction Algorithm of Vector Map. China Image and Graphics 4(5), 423–426 (1999)
6. Cherkassky, B.V., Goldberg, A.V., Radzik, T.: Shortest Paths Algorithms: Theory and Experimental Evaluation. Technical Report 93-1480, Computer Science Department, Stanford University
7. Zhang, Z.: A Feed Close Loop Road Extraction of Color City Map. Journal of Engineering Graphics 26(4), 21–26 (2005)
8. Hai, T.: Recognition and Extraction of Road from Color Raster Traffic Map Image. Journal of Computer Aided Design & Computer Graphics 17(9), 2010–2014 (2005)
9. Li, J., Taylor, G., Kidner, D.B.: Accuracy and Reliability of Map-matched GPS Coordinates. Computers & Geosciences 31, 241–251 (2005)
10. Haibach, F.G., Myrick, M.L.: Precision in Multivariate Optical Computing. Appl. Opt. 43(10), 212–217 (2004)
Multi-objective Optimized PID Controller for Unstable
First-Order Plus Delay Time Processes

Gongquan Tan, Xiaohui Zeng, Shuchuan Gan, and Yonghui Chen

School of Automation and Electronic Information Engineering,


Sichuan University of Science & Engineering, 643000, Zigong, China
tgq77@126.com, xh-z@sohu.com,
{g.s.c,cyh0418}@163.com

Abstract. A conventional proportional integral derivative (PID) controller has three independent adjustable parameters, which can be used to meet the requirements of closed-loop systems in the aspects of stability, speed and accuracy. For an unstable first-order plus delay time (FOPDT) process, the parameters of the PID controller are optimized with the maximum complementary sensitivity, the damping ratio and the integral gain, which are specially selected for the above requirements. Simulation results show that the proposed approach is effective for unstable FOPDT processes with different time-delay ratios, and the corresponding closed-loop systems have proper robust stability and response performance.

Keywords: Unstable process, FOPDT, PID controller, Multi-objective optimization.

1 Introduction
In the rectification, chemical, pulp and paper-making industries, PID controllers are widely used in about 97% of control loops. However, the practical application of PID controllers does not meet expectations: more than 30% of controllers operate in manual mode, and for 65% of control loops the control errors are smaller in manual mode than in automatic mode.
Many tuning methods for PID controllers have been proposed in recent years, most of them for self-regulating processes. Controlling unstable processes is much more difficult: besides the problem of stabilizing the control system, the achievable performance for unstable processes may differ considerably from that of stable processes [1-3].
The goals of unstable process control should include stabilizing the unstable poles, good servo tracking, and rejection of unknown disturbances. In practice there are two difficulties in achieving these targets: one is the trade-off between servo tracking and disturbance rejection, the other between robustness and performance. The former can be dealt with effectively by a two-degree-of-freedom control structure. Reference [3] introduces a method for obtaining the weighted coefficients of the proportional and derivative actions for a known PID. Reference [4] introduces a two-loop control structure equivalent to the conventional single-loop PID, and meanwhile

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 309–315, 2011.
© Springer-Verlag Berlin Heidelberg 2011
310 G. Tan et al.

elaborates the partition coefficients for the two loops. Solution methods for the latter problem include the robust method, the optimization method, internal model control (IMC), the improved Smith design method, and the direct design method. Commonly, dynamic performance is not good when a robust criterion is the only objective, and the stability margin is not good if only an integral performance index is used. In the IMC, Smith and direct design methods, an adjustable parameter provides the compromise between robust performance and dynamic performance, which is a great advantage; but the realization structures become relatively complex when applied to the control of unstable plus-delay-time processes, and choosing the value of the adjustable parameter is also a problem.
This paper proposes a multi-objective optimized PID control method for unstable first-order plus delay time (UFOPDT) processes in industry.

2 Problem Description
The structure of the two-loop control system is shown in Figure 1, in which r, d, u, y, and e are the set-point signal, disturbance signal, control variable, controlled variable, and error signal, respectively. The controller C(s) consists of three elements A(s), B(s) and Kb, and P(s) is the process model; they are expressed as

A(s) = \frac{K_a (1 + sT_a)}{s}, \quad K_b B(s) = K_b (1 + sT_b)   (1)

P(s) = \frac{K_p}{T_p s - 1} e^{-L_p s}   (2)

where Ka, Kb and Kp are static gains, Ta, Tb and Tp are time constants, and Lp is the delay time. Tr = Tp/Lp is the ratio of the lag time to the delay time.
The controller C(s) can be made equivalent to a weighted PID controller, that is

u = K_c \left( a + \frac{1}{sT_i} + b T_d s \right) r - K_c \left( 1 + \frac{1}{sT_i} + T_d s \right) y
  = \left( aK_c + \frac{K_i}{s} + b K_d s \right) r - \left( K_c + \frac{K_i}{s} + K_d s \right) y   (3)

where Kc, Ki, and Kd are the proportional, integral and derivative gains, Ti and Td are the integral and derivative time constants, and a and b are the
Fig. 1. The structure of the two-loop control system


Multi-objective Optimized PID Controller for Unstable FOPDT Processes 311

proportional and derivative weighted coefficients, respectively. The parameters of equations (1) and (3) are related as follows:

K_c = K_a (T_a + T_b) + K_b, \quad K_i = K_a, \quad K_d = (K_a T_a + K_b) T_b,
a = \frac{T_a + T_b}{T_a + T_b + K_b / K_a}, \quad b = \frac{T_a}{T_a + K_b / K_a}   (4)
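As a sketch, the parameter relationships of equation (4) can be computed directly; this is a plain transcription of the formulas, not code from the paper.

```python
# Convert the two-loop controller parameters (Ka, Ta, Tb, Kb) of equation (1)
# into the weighted-PID parameters (Kc, Ki, Kd, a, b) of equation (3),
# following equation (4).
def two_loop_to_weighted_pid(Ka, Ta, Tb, Kb):
    Kc = Ka * (Ta + Tb) + Kb
    Ki = Ka
    Kd = (Ka * Ta + Kb) * Tb
    a = (Ta + Tb) / (Ta + Tb + Kb / Ka)
    b = Ta / (Ta + Kb / Ka)
    return Kc, Ki, Kd, a, b

# With Kb = 0 the weights reduce to a = b = 1 (plain series PID, equation (6)):
Kc, Ki, Kd, a, b = two_loop_to_weighted_pid(Ka=2.0, Ta=1.0, Tb=0.5, Kb=0.0)
```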

The performance of a closed-loop system covers three aspects: stability, speed and accuracy. Methods such as IMC adjust a filter time constant to satisfy all three requirements, which is quite simple and direct, but the disadvantages are also obvious: first, the control structures for open-loop unstable processes with time delay are very complicated; second, choosing the adjustable filter time constant is itself a problem. Considering the generality and simplicity of PID controllers and the experience of engineers in parameter adjustment, the design method introduced in this paper is based directly on the PID controller. A PID controller has three adjustable parameters, so three performance indexes can obviously be chosen as optimization targets.
For stability, the Nyquist criterion is adopted; that is, the maximum complementary sensitivity Mt, the maximum sensitivity Ms, or the combined maximum sensitivity Mst is selected. The Mt index describes the stability and expected response of a system more representatively than phase and gain margins [6]. Reference [2] derives that the maximum complementary sensitivity of an unstable process with time delay has a lower bound, that is

M_t = \|T(s)\|_\infty = \left\| \frac{L(s)}{1 + L(s)} \right\|_\infty = \sup_\omega |T(j\omega)| \ge e^{1/T_r}   (5)

where T(s) is the complementary sensitivity function and ‖·‖∞ is the H∞ norm. The smaller Tr is, the greater the lower bound of Mt, and the lower the robustness of the system. For a stable process the recommended value is Mt = 1.3 [7], but it is hard to determine the value of Mt for an unstable process with time delay.
In references [8] and [9], the integral gain Ki is recommended as the index of disturbance rejection. Reference [10] proves that the integral gain and the integral absolute error (IAE) behave similarly for a stable process, but the integral gain is easier to use.
Moreover, we expect the load-step response of the closed-loop system to be almost free of oscillations; in our experience, this type of response can be guaranteed by the damping ratio ζ.
In conclusion, the maximum complementary sensitivity Mt, the damping ratio ζ and the integral gain Ki are selected as the performance and robustness indexes describing the closed-loop system, and as the optimization indexes for the three parameters of the PID controller.

3 Optimization of PID Controllers

Assume the inner feedback element Kb = 0; then the controller C(s) is the series-form PID, that is

C(s) = \frac{K_a (1 + sT_a)(1 + sT_b)}{s}   (6)

With a 1/1 Pade approximation of the time delay, the loop transfer function L(s) is

L(s) = P(s)C(s) = \frac{K_p K_a (1 + sT_a)(1 + sT_b)(2 - sL_p)}{s (sT_p - 1)(2 + sL_p)} = \frac{K_r (T_{ar}\bar{s} + 1)(T_{br}\bar{s} + 1)(2 - \bar{s})}{\bar{s} (T_r\bar{s} - 1)(2 + \bar{s})}   (7)

where K_r = K_p K_a L_p, T_r = T_p / L_p, T_{ar} = T_a / L_p, T_{br} = T_b / L_p, \bar{s} = s L_p, and \omega_r = \omega_n L_p.
Rearranging 1 + L(s) = 0, the characteristic factor can be written as

\phi(s) = (T_x s + 1)\left( \frac{s^2}{\omega_n^2} + \frac{2\zeta s}{\omega_n} + 1 \right)   (8)

where the coefficient relationships between equations (7) and (8) satisfy

K_r = \omega_r^2 \cdot \frac{(2T_r - 1 - 2\zeta\omega_r T_r)(T_{br}\omega_r + 2) + (2T_{br} - 1 + 2\zeta\omega_r T_{br})(T_r\omega_r + 2)}{(T_{br}\omega_r + 2)^2 + (\omega_r - 2\zeta\omega_r T_{br} + 4\zeta)(\omega_r - 2\zeta\omega_r T_{br} - 2\omega_r^2 T_{br})}   (9)

T_{ar} = \frac{T_r\omega_r^2 + 2}{K_r \omega_r (T_{br}\omega_r + 2)} + \frac{2\zeta\omega_r T_{br} + 4\zeta}{\omega_r (T_{br}\omega_r + 2)}   (10)
2 2

Because Kr is proportional to the integral gain Ki, it is reasonable to select an ω_r that maximizes Kr. From equation (9), the ω_r that maximizes Kr can be calculated mathematically for given ζ and T_br; the corresponding T_ar is then obtained from equation (10). In other words, for given ζ and T_br, the optimal parameters K_r = K_rm and T_ar = T_arm are obtained, that is

K_{rm}(T_{br}) = \sup_{\omega_r} K_r(T_{br}, \omega_r), \quad T_{arm}(T_{br})   (11)

This fixes the loop function, so Mt can be calculated from equation (5); Mt is then a function of the single variable T_br. We select T_br = T_bm to optimize the robustness index Mt, that is

M_{tm} = \inf_{T_{br}} M_t(T_{br})   (12)
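As a numerical sketch (not the paper's implementation), the maximum complementary sensitivity of equation (5) for the Pade-approximated loop of equation (7) can be approximated by a frequency-grid search over |T(jω)|; the controller values passed in below are arbitrary illustrative numbers.

```python
import numpy as np

# Loop transfer function of equation (7) in the normalized variable s_bar = s*Lp.
def loop_gain(w, Kr, Tar, Tbr, Tr):
    s = 1j * w
    return Kr * (Tar * s + 1) * (Tbr * s + 1) * (2 - s) / (s * (Tr * s - 1) * (2 + s))

# Mt = sup_w |L/(1+L)|, approximated on a log-spaced frequency grid.
def max_complementary_sensitivity(Kr, Tar, Tbr, Tr):
    w = np.logspace(-3, 2, 5000)
    L = loop_gain(w, Kr, Tar, Tbr, Tr)
    return float(np.max(np.abs(L / (1 + L))))

Mt = max_complementary_sensitivity(Kr=3.0, Tar=1.0, Tbr=0.3, Tr=8.0)
```

The two-level search of equations (11)-(12) then wraps this evaluation: an inner search over ω_r for Kr, and an outer search over T_br minimizing Mt.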

In conclusion, for a UFOPDT process described by equation (2), once the damping ratio ζ is selected properly, the three parameters K_m = K_rm(T_bm), T_am = T_arm(T_bm) and T_bm of the series PID controller of equation (6) can be acquired through the optimization of the integral gain and of the maximum complementary sensitivity. The three targets of stability, speed and accuracy of the closed-loop system are thereby achieved.
As an example, this method is used to optimize the PID parameters at ζ = 0.707 for UFOPDT processes with different Tr. In this case, three formulas obtained by numerical fitting of a large number of optimal parameter values are as follows

K_m = 2\ln(3.126 - 0.1528/T_r^2 - 8^{-T_r})
T_{am} = (0.6926 + 0.8609\,T_r - 0.41914\,T_r^2)/K_m
T_{bm} = 2\ln(0.7492 - 0.008822\,T_r + 0.07364\,\sqrt{T_r})   (13)
These formulas were obtained under the condition Kb = 0, which would make the control variable u contain a derivative term of the set-point signal r. This term can be moved equivalently to the feedback channel; that is, there is an appropriate K_b = K_bm satisfying

K_{bm} T_{bm} = K_d = K_m T_{am} T_{bm} \;\Rightarrow\; K_{bm} = K_m T_{am}   (14)

So the parameters of the final set-point weighted PID (SW-PID) controller of equation (3) are

K_c = \frac{K_m (T_{am} + T_{bm})}{K_p}, \quad T_i = (T_{am} + T_{bm}) L_p,
T_d = \frac{T_{am} T_{bm}}{T_{am} + T_{bm}} L_p, \quad a = \frac{T_{bm}}{T_{am} + T_{bm}}, \quad b = 0   (15)

4 Simulations
The proportional and derivative weighted coefficients of all the PID controllers listed below are set uniformly from equation (15), except for the improved IMC, which is realized according to its complex structure in reference [3].
4.1 Example 1: An UFOPDT with Large Tr

An unstable first-order plus delay time process with large Tr is expressed as P(s) = e^{-s}/(8s - 1). Lee PID [11] (λ=2L, Kc=5.552, Ti=6.4645, Td=0.2843, Ki=0.8588, Mt=1.6323), Wang PID [1] (Ms=1.6, Kc=5.4582, Ti=5.8705, Td=0.1318, Ki=0.9298, Mt=1.8691) and modified IMC [12] (λ=2L, Mt=2.5003) are selected for comparison with the proposed PID (Kc=6.19, Ti=4.6398, Td=0.2276, Ki=1.3341, Mt=1.9163). The step responses of the control system for a set-point step at 0 s and a load step at 40 s are shown in Figure 2.


Fig. 2. Step responses of an UFOPDT with large Tr

4.2 Example 2: An UFOPDT with Medium Tr

An unstable first-order plus delay time process with medium Tr is expressed as P(s) = e^{-0.4s}/(s - 1). Lee PID [11] (λ=2L, Kc=2.1681, Ti=3.9753, Td=0.1368, Ki=0.5454, Mt=2.4535), Wang PID [1] (Ms=1.6, Kc=2.0484, Ti=2.7727, Td=0.0785, Ki=0.7388, Mt=3.6623) and modified IMC [12] (λ=2L, Mt=2.9498) are selected for comparison with the proposed PID (Kc=2.4644, Ti=2.5172, Td=0.1287, Ki=0.979, Mt=2.7867). The step responses of the control system for a set-point step at 0 s and a load step at 40 s are shown in Figure 3.
As shown in Figures 2 and 3, the proposed method gives a more appropriate trade-off between robustness and performance.


Fig. 3. Step responses of an UFOPDT with medium Tr

4.3 Example 3: An UFOPDT with Small Tr

Consider an UFOPDT with small Tr [2], P(s) = e^{-1.2s}/(s - 1). PID controllers introduced in many references, for example Wang PID [1], cannot be used in this situation. Lee PID [11] (λ=2L, Kc=1.21, Ti=37.9706, Td=0.5832, Ki=0.0319, Mt=13.209), Huang PID [2] (Kc=1.1716, Ti=52.724, Td=0.5188, Ki=0.0222, Mt=11.3067) and modified IMC [12] (λ=2L, Mt=10.7886) are selected for comparison with the proposed PID (Kc=1.1727, Ti=56.8756, Td=0.5039, Ki=0.0206, Mt=12.2816). The step responses of the control system for a set-point step at 0 s and a load step at 40 s are shown in Figure 4.
Because this process is very difficult to control and the maximum sensitivity of the closed-loop system under model perturbation is large, robust stability is the primary expectation. As shown in Figure 4, the proposed method also has better robust stability than the others.


Fig. 4. Step responses of an UFOPDT with small Tr

5 Conclusions
For unstable first-order processes with time delay, the damping ratio is used to constrain the response waveform, while the integral gain, which represents disturbance rejection, and the maximum complementary sensitivity, which represents robust stability, are optimized for the PID controller. Using the damping ratio to obtain a smooth system response has a clear physical meaning and is easy to understand and apply. Because the damping ratio can be given appropriately, the proposed method is a two-degree search approach: it shortens the optimization time and is worth applying over a large range of time-delay ratios. The simulations confirm these results.

References
1. Wang, Y., Xu, X.: A PID Controller for Unstable Processes Based on Sensitivity. Academic Journal of Shanghai University of Technology 31(2), 125–128 (2009)
2. Huang, H.P., Chen, C.C.: Control-system Synthesis for Open-loop Unstable Process with Time Delay. IEE Proc. Control Theory & Appl. 144(4), 334–346 (1997)
3. Chen, C.-C., Huang, H.-P., Liaw, H.-J.: Set-Point Weighted PID Controller Tuning for Time-Delayed Unstable Processes. Ind. Eng. Chem. Res. 47, 6983–6990 (2008)
4. Kaya, I., Tan, N., Atherton, D.P.: A Refinement Procedure for PID Controllers. Electrical Engineering 88, 215–221 (2006)
5. Jeng, J.-C., Huang, H.-P.: Model-Based Auto-tuning Systems with Two-Degree-of-Freedom Control. Journal of Chinese Institute of Chemical Engineers 37(1), 95–102 (2006)
6. Thirunavukkarasu, I., George, V.I., Saravana Kumar, G., Ramakalyan, A.: Robust Stability and Performance Analysis of Unstable Process with Dead Time Using Mu Synthesis. ARPN Journal of Engineering and Applied Science 4(2), 15 (2009)
7. Luyben, W.L.: Simple Method for Tuning SISO Controllers in Multivariable Systems. Ind. Eng. Chem., Process Des. Dev. 25, 654–660 (1986)
8. Åström, K.J., Hägglund, T.: Advanced PID Control. Instrument Society of America, North Carolina (2006)
9. Tan, W., Liu, J., Chen, T., Marquez, H.J.: Comparison of Some Well-known PID Tuning Formulas. Computers and Chemical Engineering 30, 1416–1423 (2006)
10. Chen, Y., Zeng, X., Tan, G.: Auto-tuning of Optimal PI Controller Satisfying Sensitivity Value Constraints. Advanced Materials Research 204-210, 1938–1943 (2011)
11. Lee, Y., Lee, J., Park, S.: PID Controller Tuning for Integrating and Unstable Processes with Time Delay. Chemical Engineering Science 55, 3481–3493 (2000)
12. Zhu, H., Shao, H.: The Control of Open-loop Unstable Processes with Time Delay Based on Improved IMC. Control and Decision 20(7), 727–731 (2005)
Water Quality Evaluation for the Main Inflow
Rivers of Nansihu Lake

Yang Liyuan1, Shen Ji2, Liu Enfeng2, and Zhang Wei1

1 College of Resources and Environment, University of Jinan, Jinan 250022, China
youngliyuan@126.com
2 State Key Laboratory of Lake Science and Environment, Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, Nanjing 210008, China

Abstract. Based on water quality monitoring data from four inflow rivers of Nansihu Lake in 2006, 2007 and 2008, the water quality was evaluated using the comprehensive water quality identification index method, with dissolved oxygen, CODMn, biochemical oxygen demand and ammonia nitrogen as indicators. The results show that the rivers have been in a state of serious pollution: the monitoring section of the Si River was in Grade III of the water quality standard, and the others were in Grades IV and V. The evaluation results offer a scientific basis for environmental control of the inflow rivers of Nansihu Lake.

Keywords: Water quality assessment, Water quality identification index, Nansihu Lake.

1 Introduction
At present, water eutrophication and water quality deterioration are very serious and have become a global environmental problem. In the past two decades, the water quality deterioration of lakes in China has become increasingly serious, as a large amount of human activity has had a negative impact on the lake environment. Water eutrophication has become one of the major environmental problems perplexing China's economic development.
Nansihu Lake is located in the southwest of Shandong Province. It consists of four sub-lakes, Nanyanghu Lake, Dushanhu Lake, Zhaoyanghu Lake and Weishanhu Lake, and belongs to the Sihe River system in the Huaihe River basin. It covers an area of 1266 km² with a north-south length of 126 km. It is divided into the Upper and Lower Lake by a dam built in 1960. North of the dam is the Upper Lake, with a catchment area of 2.69×10⁴ km², accounting for 88.4% of the whole catchment area; south of the dam is the Lower Lake, with a catchment area of 0.35×10⁴ km², accounting for only 11.6% of the whole area [1].
At present, pollution problems in rivers and lakes across the country continue to appear, bringing serious negative impacts. For lakes, the vast majority of non-point source pollutants enter through the inflow rivers [2], and the water quality of the inflow rivers is closely related to that of the lake, so it is essential to evaluate the water quality of the rivers entering the lake. In this paper, the method of the

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 316–323, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Water Quality Evaluation for the Main Inflow Rivers of Nansihu Lake 317

comprehensive water quality identification index, applied to a survey of representative organic pollution indicators, was used to evaluate the water quality of four inflow rivers of Nansihu Lake, in order to provide a scientific basis for pollution prevention and control.

2 Material and Method

2.1 Monitoring Data Sources

To monitor the water quality of the inflow rivers, 25 monitoring sections were identified according to the basin environment, the pollution emissions along the rivers, and the hydrological characteristics. All collected samples were sent to the Hydrology and Water Resources Survey Bureau of Jining for analysis. The detection methods followed the "Quality Standard of Surface Water Environment" (GB3838-2002) and "Water and Wastewater Monitoring and Analysis Methods" (Fourth Edition). As the inflow rivers with large catchment areas are mainly located around the Upper Lake, the cross-section data of four major inflow rivers entering the Upper Lake were selected from 2006 to 2008 as the analysis objects: the Malou monitoring section of the Baimahe River, the Xiyao monitoring section of the Dongyuhe River, the Yingou monitoring section of the Sihe River, and the Yulou monitoring section of the Zhuzhaoxinhe River.
According to the monitoring data and the actual pollution situation of the inflow rivers, a group of organic pollution indicators closely related to water quality was selected as the evaluation indices: DO, CODMn, NH3-N and BOD5. The evaluation criterion followed the "Quality Standard of Surface Water Environment" (GB3838-2002).

2.2 Water Quality Identification Index

In recent years, many water quality evaluation methods have been proposed [3-6], such as the fuzzy comprehensive evaluation method, the nutrition status index method, and the artificial neural network method. These methods have certain advantages in water quality evaluation, but they can neither identify the indicators that fail to reach the standard nor quantify the difference between the water quality status and the functional-area objectives. The comprehensive water quality identification index, a new method proposed in 2003, overcomes these shortcomings and has been widely used in water quality assessment.
2.2.1 Single-Factor Water Quality Identification Index
The single-factor water quality identification index p is composed of one integer, a decimal point, and two significant figures after the decimal point. It can be expressed as

p_i = X_1.X_2\,X_3

where X1 represents the water quality grade of indicator i; X2 represents the position of the monitored value within the range of grade X1; X3 represents the comparison between the water quality grade and the target grade of the functional area.
318 Y. Liyuan et al.

Determination of X1.X2:
(1) When the water quality is between Class I and Class V, for the general
indicators (except pH, dissolved oxygen and water temperature):

X1.X2 = a + (ρi − ρa,lower) / (ρa,upper − ρa,lower)

For DO:

X1.X2 = a + (ρa,upper − ρi) / (ρa,upper − ρa,lower)

Where: ρi represents the measured concentration of indicator i; ρa,upper and
ρa,lower represent the upper and lower limits of the class-a standard interval of
indicator i; a = 1, 2, 3, 4, 5, and the value of a is determined by comparing the
monitored value with the national standard.
(2) When the water quality is equal to or worse than Class V, a = 6. For the
general indicators (except pH, dissolved oxygen and water temperature):

X1.X2 = a + (ρi − ρV,upper) / ρV,upper

For DO:

X1.X2 = a + m (ρDO,V − ρi) / ρDO,V

Where: ρV,upper is the Class V upper limit of indicator i; ρDO,V is the Class V
standard value for DO; m is a correction factor of the formula, generally taken
as 4.
Determination of X3: X3 = |X1 − f1|.
Where: f1 represents the objective class of the water environmental function area.
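The calculation rules above can be sketched in Python (not part of the original paper). The function name and interface are hypothetical, and the class limits in the example are the commonly cited GB3838-2002 CODMn limits (2, 4, 6, 10, 15 mg/L), which should be verified against the standard; the DO variant of the formula is omitted for brevity.

```python
def single_factor_index(conc, limits, objective_class):
    """Single-factor water quality identification index P = X1.X2X3 for a
    'general' indicator (higher concentration means worse quality).

    conc:            measured concentration of the indicator (mg/L)
    limits:          upper bounds of Classes I..V under the standard
    objective_class: objective class f1 of the water function area
    Returns (X1.X2 as a float, X3 as an integer).
    """
    bounds = [0.0] + list(limits)          # interval edges for Classes I..V
    for a in range(1, 6):                  # water quality within Class I..V
        lo, up = bounds[a - 1], bounds[a]
        if conc <= up:
            x1x2 = a + (conc - lo) / (up - lo)
            break
    else:                                  # worse than Class V: a = 6
        x1x2 = 6 + (conc - limits[-1]) / limits[-1]
    x3 = max(int(x1x2) - objective_class, 0)   # grades worse than objective
    return x1x2, x3

# Illustrative CODMn limits (Classes I..V); verify against GB3838-2002.
CODMN_LIMITS = (2.0, 4.0, 6.0, 10.0, 15.0)
print(single_factor_index(5.0, CODMN_LIMITS, 3))   # -> (3.5, 0)
```

For instance, a CODMn reading of 5.0 mg/L falls in the Class III interval (4, 6], halfway through it, giving X1.X2 = 3.5, and X3 = 0 against a Grade III objective.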

2.2.2 Comprehensive Water Quality Identification Index


The comprehensive water quality identification index consists of the average of the
single-factor water quality identification indices, the number of participating
indicators worse than the functional area objective, and the comparison between the
comprehensive water quality class and the functional area standard.

The formula: Iwq = X1.X2X3X4

Where: X1.X2 = (ΣPi)/n represents the average of the single-factor water quality
identification indices; n represents the number of evaluation indices; X3
represents the number of participating indicators worse than the functional area
objective; X4 represents the comparison between the comprehensive water quality
class and the functional area standard.
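A matching Python sketch (hypothetical names, not from the paper) of how the comprehensive index can be assembled from the single-factor values. Reading X4 as the number of grades by which the comprehensive class exceeds the objective class is an assumption based on the description above.

```python
def comprehensive_index(single_x1x2, objective_class):
    """Comprehensive water quality identification index I_wq = X1.X2 X3 X4.

    single_x1x2:     list of single-factor X1.X2 values for the n indicators
    objective_class: objective class f1 of the water function area
    Returns (average X1.X2, X3, X4).
    """
    n = len(single_x1x2)
    x1x2 = sum(single_x1x2) / n                       # average of the P_i
    # X3: number of indicators whose class is worse than the objective.
    x3 = sum(1 for p in single_x1x2 if int(p) > objective_class)
    # X4 (assumed reading): grades by which the comprehensive class
    # exceeds the functional area objective.
    x4 = max(int(x1x2) - objective_class, 0)
    return x1x2, x3, x4
```

With a Grade III objective and single-factor values 2.6, 6.13 and 8.15, the average is about 5.63 (comprehensive Class V), two indicators are worse than the objective (X3 = 2), and the comprehensive class exceeds the objective by two grades (X4 = 2).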
Water Quality Evaluation for the Main Inflow Rivers of Nansihu Lake 319

2.2.3 Determination of Water Quality Type


The level of comprehensive water quality identification index in the inflow rivers of
Nansihu Lake can be determined as follows (Tab.1)[4]:

Table 1. Determination of comprehensive water quality levels

X1.X2                  Comprehensive water quality level
1.0 ≤ X1.X2 ≤ 2.0      Grade I
2.0 < X1.X2 ≤ 3.0      Grade II
3.0 < X1.X2 ≤ 4.0      Grade III
4.0 < X1.X2 ≤ 5.0      Grade IV
5.0 < X1.X2 ≤ 6.0      Grade V
6.0 < X1.X2 ≤ 7.0      Worse than Grade V but not black-odor
X1.X2 > 7.0            Worse than Grade V and black-odor
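The level determination of Table 1 can be expressed as a simple lookup; this Python helper (hypothetical, not from the paper) uses the Grade I-V labels of the table.

```python
def comprehensive_grade(x1x2):
    """Map the averaged index X1.X2 to the comprehensive water quality level
    of Table 1 (boundaries as given in the paper)."""
    if x1x2 <= 2.0:
        return "Grade I"
    if x1x2 <= 3.0:
        return "Grade II"
    if x1x2 <= 4.0:
        return "Grade III"
    if x1x2 <= 5.0:
        return "Grade IV"
    if x1x2 <= 6.0:
        return "Grade V"
    if x1x2 <= 7.0:
        return "Worse than Grade V but not black-odor"
    return "Worse than Grade V and black-odor"

# The values reported later in Table 4 map back to the grades given there,
# e.g. 4.92 -> Grade IV, 3.41 -> Grade III, 5.62 -> Grade V.
```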

3 Results and Analysis

3.1 Single-Factor Water Quality Identification Indices of Selected Sections from
2006 to 2008

Single-factor identification indices were calculated from the annual averages of
the water quality indicators at the monitoring sections (Table 2, Table 3).

Table 2. Monitoring results of selected sections from 2006 to 2008 Unit: mg/L

Monitoring Monitoring
River Name DO CODMn NH3-N BOD5
Time Section
Baimahe River Malou 6.57 10.61 2.61
Dongyuhe River Xiyao 5.80 9.64 1.14
Year 2006
Sihe River Yingou 6.98 9.10 0.54
Zhuzhaoxinhe River Yulou 6.60 15.82 6.19
Baimahe River Malou 6.77 7.74 2.14 4.53
Dongyuhe River Xiyao 6.01 7.24 1.10 4.59
Year 2007
Sihe River Yingou 6.39 6.40 0.50 4.43
Zhuzhaoxinhe River Yulou 6.82 8.21 2.86 12.42
Baimahe River Malou 6.09 6.20 0.76 3.68
Dongyuhe River Xiyao 6.27 8.20 0.37 4.18
Year 2008
Sihe River Yingou 6.80 6.75 0.38 4.42
Zhuzhaoxinhe River Yulou 7.49 6.64 3.31 7.09
Note: BOD5 was not sampled in 2006, so no values are reported for that year.

As can be seen from Table 3, during the period 2006-2008 only DO met the standards
of the water environment function at all sections, and the range of its
single-factor water quality identification indices was small; for this indicator,
single-factor evaluation placed these rivers in Grade II or III of the water
quality standard. The remaining three indicators exceeded the standards to varying
degrees. For example, at the Yulou monitoring section of Zhuzhaoxinhe River in
2006, the single-factor water quality identification index of CODMn was 6.13, so
the single-factor evaluation of this river was worse than Grade V, a difference of
three grades from the functional area objective. For NH3-N and BOD5, the
concentrations differed widely among the rivers. The single-factor water quality
identification indices in Baimahe River and Zhuzhaoxinhe River were large,
indicating poor water quality. The concentration of BOD5 seriously exceeded the
standard in Zhuzhaoxinhe River during 2007 and 2008; its single-factor evaluation
was Grade V or worse than Grade V.

Table 3. Single-factor water quality identification indices of selected sections from 2006 to 2008

Monitoring Monitoring
River Name DO CODMn NH3-N BOD5
Time Section
Baimahe River Malou 2.60 5.71 6.32
Dongyuhe River Xiyao 3.20 4.91 4.31
Year 2006
Sihe River Yingou 2.30 4.81 3.10
Zhuzhaoxinhe River Yulou 2.60 6.13 8.15
Baimahe River Malou 2.50 4.40 6.02 4.30
Dongyuhe River Xiyao 3.00 4.31 4.21 4.31
Year 2007
Sihe River Yingou 2.70 4.11 2.00 4.21
Zhuzhaoxinhe River Yulou 2.50 5.82 6.43 6.23
Baimahe River Malou 2.90 4.00 3.50 3.70
Dongyuhe River Xiyao 2.80 4.61 2.60 4.01
Year 2008
Sihe River Yingou 2.50 4.21 2.70 4.21
Zhuzhaoxinhe River Yulou 2.00 4.21 6.73 5.72

3.2 Comprehensive Water Quality Identification Indices of Selected Sections from
2006 to 2008

Comprehensive water quality identification indices were calculated from the
single-factor water quality identification indices (Table 4).

Table 4. Comprehensive water quality evaluation

Monitoring Comprehensive Water Quality Identification Index


River Name
Section Year 2006 Year 2007 Year 2008
Baimahe River Malou 4.920 (IV) 4.310 (IV) 3.500 (III)
Dongyuhe River Xiyao 4.121 (IV) 4.031 (IV) 3.520 (III)
Sihe River Yingou 3.410 (III) 3.320 (III) 3.420 (III)
Zhuzhaoxinhe River Yulou 5.622 (V) 5.232 (V) 4.731 (IV)

From the values of the comprehensive water quality identification index in Table 4,
the water quality of Baimahe River in 2008 was Grade III, meeting the standard of
the functional area. The water quality of Dongyuhe River was one level inferior to
the functional area standard in 2006 and 2007; although it reached Grade III in
2008, two indicators still exceeded the standards. The water quality of Sihe River
was the best of the four rivers: although individual indicators were not in
compliance, the comprehensive water quality level reached Grade III. Zhuzhaoxinhe
River had the worst and most polluted water quality; its comprehensive water
quality level was one to two levels inferior to the standard, and on average two or
three of the four indicators involved in the overall assessment were non-compliant.
All of this can help establish the direction and intensity of treatment for
pollution control in the inflow rivers of Nansihu Lake.

3.3 Analysis of Change Trends of Water Quality Identification Indices from 2006 to
2008

From Fig. 1, the development trend of the four indicators in Baimahe River was good
during the years from 2006 to 2008; in particular, the content of NH3-N decreased
substantially, which changed the single-factor evaluation from worse than Grade V
to Grade III. The single-factor water quality identification indices in Sihe River
were small and changed little; the water quality was quite stable. In Dongyuhe
River, the concentrations of DO and BOD5 changed little, the concentration of
NH3-N decreased significantly in 2008, and the content of CODMn declined slightly
in 2007 but rebounded in 2008. Except for DO, the other three indicators were
higher than

Fig. 1. Tendency of single-factor water quality identification indices from 2006 to
2008: (a) Baimahe River, (b) Sihe River, (c) Dongyuhe River, (d) Zhuzhaoxinhe River

the standard in Zhuzhaoxinhe River. Although they all showed downward trends,
because of their large baseline values the single-factor evaluations were still one
or two grades inferior to the functional area objectives.
As can be seen from Fig. 2, the comprehensive water quality identification indices
of Sihe River were the smallest; its comprehensive evaluation was Grade III,
reaching the standard of the functional area. The water quality of the other three
rivers, from good to bad, was in the order Dongyuhe River, Baimahe River,
Zhuzhaoxinhe River. The water quality of these three rivers improved over the three
years, especially in Baimahe River, whose water quality changed from Grade IV in
2006 to Grade III in 2008. Although the water quality of Zhuzhaoxinhe River
improved somewhat, as with the single-factor evaluation its large baseline values
meant the water quality still did not meet the functional area standard, so this
river was the most seriously polluted.

Fig. 2. Tendency of comprehensive water quality identification indices from 2006 to 2008

4 Conclusion
During the period 2006 to 2008, only DO met the standards of the water environment
function at all sections; for this indicator, single-factor evaluation placed these
rivers in Grade II or III of the water quality standard. The remaining three
indicators exceeded the standards to varying degrees.
Water quality in all four rivers showed good development trends from 2006 to 2008,
especially in Baimahe River, whose water quality improved from Grade IV in 2006 to
Grade III in 2008. Although the water quality of Zhuzhaoxinhe River improved
somewhat, it still did not meet the functional area standard because of its large
baseline values, so this river remained the most seriously polluted.

Acknowledgments. The study was financially supported by the National Natural
Science Foundation of China (Grants No. 40672076 and No. 40702058).

References
1. Yang, L., Shen, J., Liu, E., et al.: Characteristic of nutrients distribution from recent
sediment in Lake Nansihu. Lake Sci. 19(4), 390-396 (2007)
2. Song, H., Lv, X., Li, X.: Application of fuzzy comprehensive evaluation in water quality
assessment for the west inflow of Taihu Lake. Journal of Safety and Environment 6(1),
87-91 (2006)

3. Guo, M.: Application of mark index method in water quality assessment of river.
Environmental Science and Management 31(7), 175-178 (2006)
4. Xu, Z.: Comprehensive water quality identification index for environmental quality
assessment of surface water. Journal of Tongji University: Natural Science 33(4), 482-488
(2005)
5. Fan, Z., Wang, L., Chen, L., et al.: Application of water quality identification index to
environmental quality assessment of Dianshan Lake. Journal of Shanghai Ocean
University 18(3), 314-320 (2009)
6. Chang, H., Che, Q.: Study on methods of evaluation for water eutrophication. Journal of
Anhui Agri. Sci. 35(32), 10407-10409 (2007)
Software Piracy: A Hard Nut to Crack
A Problem of Information Security

Bigang Hong

Department of Economics and Management,


Shaoyang University,
Shaoyang, China
Hong6210@126.com

Abstract. Software piracy is becoming increasingly rampant in the world. In


order to effectively check this trend, we analyse the present situation in both
developing economies and developed economies. Through a comparison, we
conclude that software piracy can be psychologically, economically,
technologically and institutionally based. Results show that the age of an
individual, a country's stage of development and the quality of governance have
the largest impact on the incidence of software piracy. Mature age and greater
economic and political freedoms are shown to reduce piracy. Greater diffusion of
the Internet and of computer technologies, other things being equal, actually
promotes the legal use of software. Higher access prices help to reduce piracy.
Overall, psychological, economic, institutional and technological factors may all
influence software piracy to some extent. Finally, some suggestions are offered.

Keywords: software piracy, Internet, Causes, Measures.

1 Introduction

Easier Internet access in recent years has enabled software firms to distribute
their products online instead of through traditional distribution channels. At the
same time, these cheap and easy channels have made the protection of intellectual
property both more difficult and more imperative. On the one hand, software
products are hard to protect due to their peculiar attributes, so the traditional
instruments of protecting intellectual property, such as patents, seem ill-suited.
On the other hand, modern technologies make piracy easier in terms of its speed,
the absence of geographic constraints, and the absence of the need for a middleman.
The Internet is a double-edged sword when it comes to software distribution.
Negligible distribution cost is true not only for legitimate firms but also for
pirate firms, which take advantage of the anonymity of the virtual world, causing
large losses for legitimate firms; this in turn has a negative impact on the
industry itself and on the economy as a whole.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 324-329, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Software Piracy: A Hard Nut to Crack__A Problem of Information Security 325

2 Rampant Software Piracy Situation


Software is one of the most valuable technologies of the Information Age, running
everything from PCs to the Internet. Yet, because software has become such an
important productivity tool, the illegal copying and distribution of it (software
piracy) persists globally. Software piracy refers to the unauthorized copying or
distribution of copyrighted software, including copying, downloading, sharing,
selling or installing multiple copies of copyrighted software on either personal
or office computers. As Business Software Alliance (BSA) statistics show,
unauthorized copying is rampant all over the world (see Fig. 1). The magnitude of
software piracy can be gauged from the fact that nearly half of the software
acquired globally is pirated. North America has the lowest software piracy rate,
21% in both 2008 and 2009. Asia-Pacific, Latin America and Central/Eastern Europe
have much higher rates: 61% and 59%, 65% and 63%, and 66% and 64%, respectively,
in 2008 and 2009.
The situation is even bleaker when we look at individual countries inside the
regions (Table 1). A number of factors contribute to regional differences in
piracy: the strength of intellectual property protection, the availability of
pirated software, cultural differences, etc. All the nations in the top 20 with
the highest software piracy can be counted as developing countries, with Georgia
ranked first at 95%. The most interesting finding is that the least developed
countries of the world are missing from BSA's list. This can be attributed in the
first place to the fact that those countries severely lack computers; i.e., no
computer, no software. Developing countries typically do not have much internal
reason to enforce copyright: their national cultural industries are weak and the
trade balance is distorted towards the rich countries. This is even truer for
software copyright, which is sometimes seen to benefit only multinational
companies. Combined with the strong cultural tradition of copying found in certain
parts of the world, especially in Asia, this has resulted in the widespread
violations of software copyright described in the statistics above.
conventional opinion suggests that piracy reduces the sale of legitimate software, thus

(Regions: 1 Asia-Pacific, 2 Central/Eastern Europe, 3 Latin America, 4 Middle
East/Africa, 5 North America, 6 Western Europe, 7 European Union, 8 Worldwide)
Source: Based on the Seventh Annual BSA/IDC Global Software Piracy Study

Fig. 1. Percentage of Software Piracy in 2008&2009


326 B. Hong

hurting software producers' profits. However, several articles on the economics of


software piracy actually argue in favor of piracy. They claim that software piracy
might not be harmful for software that exhibits positive network externalities. The
reason is that for those kinds of software, the larger the user base, the higher utility a
user gets from using it. Because pirate software increases the size of user base,
consumers get higher utilities, and hence they are willing to pay a higher price. The
higher prices then lead to higher profits for software producers, meaning piracy is
actually beneficial. Still some argue that piracy protection raises the cost of pirating,
causing some would-be pirates to buy and others to do without the product. The
resulting smaller user base produces a lower software value and may actually reduce
profits. For certain types of software, where the word-of-mouth interaction among
users and potential users is critical to the growth of the user base over time, pirates
play an important role in converting potential users into users of the software, many
of whom legally purchase the software. One study demonstrated this modeling
approach by analyzing the diffusion of spreadsheets and word processors in the
United Kingdom. The results indicated that since the late 1980s, out of every
seven software
users, six had pirated copies. On the other hand, the pirates significantly influenced
the potential users to adopt this software. In fact, they contributed to generating more
than 80% of the unit sales for these two types of software. This may explain to some
extent the unwillingness of the developing nations in implementing strict piracy
protection and the rampant software piracy in these nations.

3 Causes for Piracy on Personal Computers


Motives are regarded as the basic causes and determinants of all behaviours which are
not haphazard, trivial, or purely habitual. Motives can be psychologically,
economically, institutionally, and technologically based.
In 2009, consumers continued to spend on PCs and software despite the soft
economy. PC shipments to consumers rose 17%, compared with a 15% drop in
shipments to businesses, governments and schools. More than half of all new PCs
shipped went to consumers.
Furthermore, consumers were much more active installing new software than
businesses, governments or schools. More than three quarters of all software shipped
in 2009 went to consumers. This shift in the PC market has an impact on piracy, as
software piracy rates are generally higher on consumer PCs than in other segments
of the market. Software piracy therefore constitutes a hard nut to crack in the
emerging economies in the years to come. Here we analyze the motivations for
software piracy, with the aim of working out more effective measures accordingly.
First, motivation can be psychologically based, when the individual seeks
affirmation of personal values. Individuals may aim to gain group acceptance and,
subsequently, status. Intrinsically motivated behaviour is of two kinds. First, an
individual may seek to behave in ways which allow them to feel competent and
self-determining. The second kind of intrinsically motivated behaviour involves
conquering challenges: when individuals conquer the challenges they encounter,
they feel a sense of achievement. According to Maslow, there is a basic desire to
know, to be aware of reality, to get the facts, to satisfy curiosity. This factor
may well explain the fact that many crackers of software are not paid for their
activities.

Secondly, there are economically based motives. Economic factors concern the
potential purchasing power of an individual or a firm. Other things being the
same, both individuals and firms in wealthier nations are better able to afford
legal software and thus find the piracy of software relatively less attractive. In
addition, the opportunity cost of being detected and punished for the possession
of pirated software should be higher in more affluent societies. This is why
software piracy is more rampant in developing countries. Higher prices of legal
software make the production of pirated software more attractive.
The third factor is institutional. The importance of institutional aspects in
economic decision-making has been well recognized in the literature, and the
piracy of software is no exception. Other things being the same, more politically
free nations might have lower rates of piracy because their citizens have access
to an established and equitable system of rule of law, and freely elected
representatives make it less likely that politicians are able to hand out undue
favors. The greater fear of exposure that software pirates face in politically
free nations supports this argument. More economically free nations may experience
lower piracy rates because enhanced competition lowers the prices of legal
software, making illegal software less attractive.
Fourthly, technological capabilities influence the propensity of pirates to copy
software. Software products are hard to protect due to their peculiar attributes
and, thus, the traditional instruments of protecting intellectual property, such
as patents, seem ill-suited for these purposes. Moreover, modern technologies make
piracy easier in terms of its speed, the absence of geographic constraints, and
the absence of the need for a middleman. In other words, it is easy for someone in
a remote location to write or sell pirated software and ship it almost costlessly
and instantaneously via the Internet to almost any part of the world.

4 Ways Out in the Campaign


Widespread and rampant software piracy leads to lost opportunities for businesses
and related sectors. Information technology is the driving force of economic and
social progress. Through visionary and balanced public policy, governments can
promote a thriving technology industry for domestic and global economic benefit.
Through its policy activities, BSA aims to establish a dynamic international
market in which software industry innovation, growth and prosperity can take
place. Around the world, together with governments and multilateral organizations,
the Business Software Alliance strongly advocates software innovation, effective
intellectual property rights protection and network security, and aims to reduce
trade barriers and address other emerging technology policy questions. Given the
psychological, economic, institutional and technological motivations for software
piracy and the differences across nations, adequate actions can be taken to check
software piracy, which would have important implications for software producers
and, more generally, for technology policy and eventually the pace of
technological change and economic development.
Generally, there are two kinds of controls against software piracy. The first is
preventive means. The purpose of preventive controls is to increase the costs of
piracy through copy protection schemes, including software encryption and hardware
locks. Protection through preventive means can be achieved by the following

steps: (1) increasing transaction costs: software incurs high installation and
implementation costs; (2) increasing learning costs: learning how to use a
specific piece of software is costly and can be difficult.
The second is deterrent means. In contrast to preventive means, these do not
directly increase the cost of pirating software; the target is achieved through
the perceived threat or fear of sanctions. Deterrent means include
government-to-government negotiations, educational campaigns, and legal activity
related to expanding domestic copyright laws and enforcing them. Although allowing
piracy may help firms gain market share in the early stage of competition, it does
not necessarily lead to higher profits. Hence, we believe that software firms are
better off protecting their products.
The software protection environment is vital in the campaign against piracy. The
environment first contains two main measurements: good laws of a nation covering
software piracy, and effective enforcement of the laws by the government. Legal
institutions are crucial in terms of outlining the set of rules for legal transactions and
punishments for illegal acts. The strength and efficacy of these institutions would
deter pirates. Legal institutions could affect both the buyers and sellers of pirated
software. Stronger property rights protection, other things being the same, would
deter pirates. Because of the attributes of software, the Internet has no boundaries,
which makes it the ideal tool for distributing software products globally. However,
nations have boundaries and software protection environments vary tremendously
across nations. Different legal and economic systems lead to different evolution paths
for software protection environments in different countries. Thus, international
coordination is required to make sure there is no safe haven for pirates anywhere
in the world.
Socio-cultural differences across nations might crucially affect illegal
activities in general and software piracy in particular. For instance, cultural
norms might dictate attitudes towards the protection of intellectual property.
Besides shaping individual behavior, such differences can also affect government
policies towards the protection of proprietary information.

5 Conclusion
According to BSA, installations of unlicensed software on PCs dropped in 54 of the
111 individual economies studied, and rose in only 19 in 2009. It is clear that anti-
piracy education and enforcement campaigns spearheaded in recent years by the
software industry, national and local governments, and law enforcement agencies
continue to have a positive impact in driving legal purchases and use of PC software.
It is well recognized that a country's stage of development and the quality of
governance may have a major impact on the incidence of software piracy. Greater
economic prosperity makes legal software more affordable on the one hand, and
increases the opportunity cost associated with illegal acts on the other. More
prosperous nations might also have better and stricter monitoring mechanisms to
check piracy. The risk of exposure by a free press in politically free nations is
a strong negative incentive for software pirates. Consequently, the major step to
take in the campaign against global software piracy is to increase public

education and awareness. Reducing software piracy often requires a fundamental
shift in the public's attitude toward it. Public education is critical.
Governments can increase public awareness of the importance of respecting creative
works by informing businesses and the public about the risks associated with using
pirated software and by encouraging and rewarding the use of legitimate products.
This is crucial for the developed countries, and for the developing economies as
well in the long term. As a newly emerging economy, China still has a long way to
go, for its own development and for the responsibility it should bear as a large
economy.

References
1. Yang, D., Sonmez, M.: Economic and cultural impact on intellectual property violations:
A study of software piracy. Journal of World Trade 41, 731-750 (2007)
2. Piquero, N.L., Piquero, A.R.: Democracy and intellectual property: Examining trajectories
of software piracy. Annals of the American Academy of Political and Social Science 605,
104-127 (2006)
3. Greenstein, S., Prince, J.: The diffusion of the internet and the geography of the digital
divide in the United States. National Bureau of Economic Research, working paper #12182
(2006)
Study of Bedrock Weathering Zone Features
in Suntuan Coal Mine

XiaoLong Li, DuoXi Yao, and JinXiang Yang

School of Earth and Environment


Anhui University of Science and Technology, Huainan, China
xlli@aust.edu.cn

Abstract. Lithological characteristics, mineral composition, physical properties


of water and physical and mechanical properties, weathering zone distribution and
other factors of the bedrock weathering zone in the coal-seam roof determine the
strength of its watertight performance, while this performance directly affects
the sealing of the overlying loose surface-water aquifers as well as the
possibility of collapse into the roadway. Therefore, proving and correctly
evaluating the hydrogeological conditions of the bedrock weathering zone at the
working face has important practical significance for setting a reasonable safety
coal-rock pillar height and upper mining limit, increasing the recovery rate of
coal resources, and fully developing and utilizing coal resources. It also
provides a reliable basis for the mine to achieve sustained normal safety
production.

Keywords: coal-seam roof; bedrock weathering zone; mineral component; lithological
characters.

1 Introduction
In recent years many researchers at home and abroad have extensively studied the
lithological characters, distribution rules and watertight performance of bedrock
weathering zones. But compared with the attention focused on the Cenozoic bottom
aquifer ('the fourth below'), such study is quite out of proportion. The bedrock
weathering zone of the coal-seam roof is deeply buried and differs greatly in its
hydrological and engineering geology features from unweathered bedrock, and
drilling in this area is inconvenient for study and in most cases yields imperfect
core. The lithological characteristics, mineral composition, physical properties
of water and physical and mechanical properties, weathering zone distribution and
other factors of the bedrock weathering zone in the coal-seam roof determine the
strength of its watertight performance, while this performance directly affects
the sealing of the overlying loose surface-water aquifers as well as the
possibility of collapse into the roadway. Therefore, proving and correctly
evaluating the hydrogeological conditions of the bedrock weathering zone at the
working face has important practical significance for setting a reasonable safety
coal-rock pillar height and upper mining limit, increasing the recovery rate of
coal resources, and fully developing and utilizing coal resources. It also
provides a reliable basis for the mine to achieve sustained normal safety
production.
The 7211 working face is the fourth fully-mechanized working face of the first
area in the 72 coal seam of the 81 mining area in Suntuan Coal Mine, and its
original
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 330-335, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Study of Bedrock Weathering Zone Features in Suntuan Coal Mine 331


design cap elevation is -250 m, its floor elevation is -315 m, and its dispersion
is 65 m. Its mineable seam is mainly the 72 coal seam. The faults inside the
working face, F10 (fall head 10-120 m), F10-1 (fall head 0-5 m) and DF10 (fall
head 0-8 m), strike northeast, and F10 crosses the working face obliquely. A large
safety coal pillar height was set in the original design, which locks up a great
deal of coal resources and causes serious resource loss, so its techno-economic
rationality has been seriously questioned. The locked-up resources are
characterized by a high degree of exploration, shallow burial depth, a complete
production system and good mining technical conditions. Therefore, whether the
coal pillar height is reasonable has become one of the theoretical and technical
difficulties to be solved in Suntuan Coal Mine. According to the data of the 7211
working face and the eleven boreholes around it, the Cenozoic loose bed thickness
is 203.00-210.85 m with an average of 205.97 m, and the bedrock surface elevation
is -176.70 to -184.99 m with an average of -179.89 m. Generally, the bedrock
surface elevation of the working face decreases gradually from southwest to
northeast.

2 Data and Methods


2.1 Weathering Zone Lithological Characters and Mineral Components
From the core obtained from drill hole 09 SHUI 3 in the working face, it can be seen that the zone mainly comprises sandstones and mudstones.
Sandstone: amaranth, fine grained, mainly composed of quartz and feldspar with calcium cement; cross-bedded, with medium sorting. The rock fabric is seriously damaged: because of strong weathering the core is broken and has softened to a muddy state. Its bottom part is clay-gray, fine grained, also mainly composed of quartz and feldspar with calcium cement; it shows bedded stratification and medium sorting.
Mudstone: clay-gray, partly light gray, massive, soft and easily broken by weathering. The bottom mudstone is clay-gray or amaranth, massive, with a soft, easily broken core.
Six rock samples from hole 09 SHUI 3 were taken for X-ray diffraction and scanning electron microscopy to analyze the rock-soil components and micro-structure. The results are shown in Table 1 and Figs. 1 and 2. The minerals are mainly quartz and feldspar, filled with clay minerals that are mainly kaolin (halloysite) and kaolinite.

Table 1. Weathering zone sample XRD qualitative analysis result table

hole number  position     lithology   sample depth /m   kaolin   kaolinite          quartz             mica   chlorite   calcite   others
09 SHUI 3    bedrock      sandstone   206.80~207.00     medium   less than medium   more than medium   -      -          little    -
             weathering   sandstone   209.50~209.70     medium   medium             less than medium   -      -          -         -
             zone         mudstone    214.00~214.20     -        medium             more than medium   tiny   -          -         -
                          mudstone    215.50~215.70     medium   medium             less than medium   -      -          little    -
                          mudstone    217.30~217.50     -        tiny               more than medium   -      -          little    -
332 X. Li, D. Yao, and J. Yang

[XRD pattern: intensity (counts) versus 2-theta; identified phases: quartz (SiO2), halloysite, kaolinite 1A, polylithionite and kaolinite 1Md]
Fig. 1. XRD spectrum of the weathered sandy mudstone sample with corresponding phase intensities

Fig. 2. Scanning electron microscope image of the weathered sandy mudstone sample

2.2 Water-Physical Properties and Physico-Mechanical Properties


No mechanical test was conducted on hole 09 SHUI 3. However, by comparing the water-physical and physico-mechanical properties of the weathering zone with those of mining area 102, and by referencing the experimental analysis data of holes 09 SHUI 1 and 09 SHUI 2, we conclude that its rock has low compressive strength and is soft, but has good expansibility and a certain waterproof ability.

2.3 Distribution Characters in the Weathering Zone

The thickness of the zone is determined by many factors, such as lithology and the degree of fracture development. The wind oxidation (weathering) zone depths at the various inspection holes of Suntuan Coal Mine are shown in Table 2.

Table 2. Various inspection holes wind oxidation zone depth data sheet

Hole number   Strong wind oxidation zone          Weak wind oxidation zone            total
              bottom depth /m    thickness /m     bottom depth /m    thickness /m
ZHU JIAN
FU JIAN
FENG JIAN

According to these data, the weathering zone depth of the field is about 15 m and the oxidation zone depth about 25 m. The rocks are mainly khaki, motley and taupe. In weathered rock the fractures are often full of crevice water, whose amount mainly depends on the degree of connectivity and on the development of micro-fractures. The rocks are soft, mostly fragmented, with developed fractures and low strength.
Based on statistical analysis of the data of the 7211 working face and the eleven drill holes around it, the study draws up the bedrock surface paleo-topographic map, the elevation contour map and the weathering zone distribution plan, as shown in Table 3 and Figs. 3 and 4.
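Contour maps like Figs. 3 and 4 can be produced by gridding the scattered drill-hole data. The sketch below uses simple inverse-distance weighting; the hole coordinates and individual elevations are hypothetical placeholders (only the elevation range follows the text), since the real 7211-face hole positions are not listed in the paper.

```python
import numpy as np

def idw_grid(xy, z, xi, yi, power=2.0, eps=1e-12):
    """Inverse-distance-weighted gridding of scattered drill-hole values z
    located at coordinates xy onto the rectangular grid (xi, yi)."""
    gx, gy = np.meshgrid(xi, yi)
    grid = np.empty(gx.shape)
    for idx in np.ndindex(gx.shape):
        d = np.hypot(xy[:, 0] - gx[idx], xy[:, 1] - gy[idx])
        if d.min() < eps:               # grid node falls exactly on a hole
            grid[idx] = z[d.argmin()]
        else:
            w = 1.0 / d ** power        # closer holes weigh more
            grid[idx] = np.sum(w * z) / np.sum(w)
    return grid

# Hypothetical drill-hole coordinates (m) and bedrock surface elevations (m);
# only the elevation range follows the text, the positions are placeholders.
holes = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 300.0],
                  [400.0, 300.0], [200.0, 150.0]])
elev = np.array([-176.70, -179.20, -181.00, -184.99, -179.89])

zi = idw_grid(holes, elev, np.linspace(0, 400, 9), np.linspace(0, 300, 7))
```

The gridded surface `zi` can then be contoured at fixed elevation intervals to obtain an approximation of the paleo-topographic map.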

Table 3. Statistical table of the weathering zone thickness exposed by the 7211 working face and nearby drill holes

Number          Hole number   Bedrock surface elevation /m   Bedrock weathering thickness /m
1
2
3
4
5
6
7
8
9
10              GOUSHUI
11              SHUI
Average value
minimum
maximum

Fig. 3. 7211 bedrock surface ancient landform chart

Fig. 4. 7211 bedrock surface elevation isograms

3 Conclusions
(1) The minerals are mainly quartz and feldspar, filled with clay minerals that have a certain waterproof ability.
(2) The paleo-landform is mainly low in the southeast and high in the northwest, which has an obvious influence on the change of weathering zone thickness: as the terrain goes from high to low, the thickness of the weathering zone slightly increases. Gravel layers and claypans of different thicknesses are deposited in the lower areas of the paleo-landform, and both have good water-resisting properties.

(3) From the lithology, water-physical properties and physico-mechanical properties of the weathering zone at the working face, it can be seen that its rock has low compressive strength and is soft, but has good expansibility and a certain waterproof ability. The zone can resist the development of the caving zone and the water-flowing fractured zone, which is very beneficial for reducing the waterproof coal-rock pillar.

References
1. He, J.: Study of engineering-geological features of the weathered zone of Wugou Coal Mine base rock. Mining Engineering 24(11) (June 2008)
2. Xuan, Y.-q.: Study on the weathered damage attributes of rock and the law of reduction for coal column protection. Chinese Journal of Rock Mechanics and Engineering 24(11) (June 2005)
3. Xuan, Y.: The possibility study of increasing the upper limit in a combined working face under the thick loose layer containing water. Journal of Anhui University of Science and Technology 20(1), 42-45 (2000)
4. State Bureau of Coal Industry: Retaining Coal Column of the Building, Water Body, the Railway and the Main Mine Tunnel and Exploitation Rules of the Coal Pressed. China Coal Industry Publishing House, Beijing (2000)
5. Yang, B.: The testing study of the key technology of safely mining the coal seam in the weathered zone. Journal of China Coal Society 28(6), 608-612 (2003)
6. Yang, B.: Examination and study of retaining sands-resisting coal column under the moderate water-bearing layer. Journal of China Coal Society 27(4), 342-346 (2002)
Mercury Pollution Characteristics in the
Soil around Landfill

JinXiang Yang, MingXu Zhang, and XiaoLong Li

School of Earth and Environment,
Anhui University of Science and Technology,
Huainan, Anhui, China
jxyang@aust.edu.cn

Abstract. In recent years, the landfilling and incineration of waste have come to be regarded as a new source of mercury pollution and have received widespread attention from domestic and foreign scientists. This paper studies the mercury pollution characteristics of the soil around a landfill, taking a landfill in Huainan City as an example and using an AMA 254 Advanced Mercury Analyzer produced by the US LECO Company. The results show that the mercury distribution in the surface soil around the landfill follows certain laws: S50 (0.0411 ppm) > S20 (0.0345 ppm) > S100 (0.0257 ppm), and the distribution of the surface soil in different directions is WS2 (0.0525 ppm) > W (0.0441 ppm) > WS1 (0.0255 ppm) > ES (0.0218 ppm), where WS1 and WS2 denote the two southwest sampling lines. The vertical distribution is as follows: the mercury content in the southeast increases with increasing depth, while in the other directions it decreases with increasing depth.

Keywords: landfill, soil, mercury.

1 Introduction

After the outbreak of Minamata disease, which shocked the world in the 1950s, mercury, as a global pollutant, has attracted widespread attention and study. In recent years, the landfilling and incineration of waste have come to be regarded as a new source of mercury pollution and have received widespread attention from domestic and foreign scientists and the relevant government departments.
European and other developed countries attach great importance to the mercury pollution and release problems of landfills. Since the 1980s, the U.S. EPA and other agencies have carried out research work in this area. Research staff of Oak Ridge National Laboratory (ORNL) surveyed the atmospheric mercury concentrations of several landfills in Florida, USA; the results showed that the downwind mercury concentration in the atmosphere is usually 30 to 40 times the upwind concentration or more [1-2]. Li Zhonggen and others analyzed mercury (Hg) in the waste and soil at four municipal solid waste (MSW) landfills in Guiyang and Wuhan; the results showed that there is an Hg pollution risk associated with MSW landfilling [3]. Ding Zhen, Wang Wenhua and others made determinations with an AMA 254 Automatic Solid/Liquid Mercury Analyzer on topsoil samples from the Laogang Landfill and showed that mercury is released in the form of gaseous mercury [4]-[7].

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 336340, 2011.
Springer-Verlag Berlin Heidelberg 2011

2 Materials and Methods

2.1 Materials

Soil samples were collected from the Datong landfill in Huainan City.
The principle of distribution: according to the geographical and terrain conditions of the landfill, taking it as the center, we laid one sampling line each in the west (W) and southeast (ES) and two lines in the southwest (WS1, WS2). On each line we arranged three points at different distances from the landfill, namely 20 m (S20), 50 m (S50) and 100 m (S100), and at each point, taking 20 cm as a profile unit, we set three sections: 0-20 cm, 20-40 cm and 40-60 cm.
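The layout described above can be written down as a small enumeration; a sketch follows (the labels W/ES/WS1/WS2 are our own shorthand for the four sampling lines):

```python
from itertools import product

# Sampling design: four lines radiating from the landfill, three distances
# per line and three depth sections per point, as described in the text.
lines = ["W", "ES", "WS1", "WS2"]
distances_m = [20, 50, 100]               # S20, S50, S100
sections_cm = [(0, 20), (20, 40), (40, 60)]

samples = [
    {"line": l, "distance_m": d, "depth_cm": s}
    for l, d, s in product(lines, distances_m, sections_cm)
]
print(len(samples))  # 36 sampling units in the full design
```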

2.2 Methods

(1) Sample pretreatment
The soil samples were put in a ventilated, cool and dry place to air-dry naturally. After drying, stones, plant roots and other debris were picked out of the samples; then, by quartering, the excess was discarded so as to retain about 300 g. Finally the samples were ground in an agate mortar, passed through a 200-mesh nylon screen, and sealed in zip-lock bags ready for determination.
(2) Determination of mercury
The mercury content was determined with an AMA 254 Advanced Mercury Analyzer produced by the US LECO Company. This instrument is a unique atomic absorption spectrometer that uses an element-specific mercury lamp to emit light at a wavelength of 253.7 nm and a silicon UV diode detector to detect the mercury content. The method does not require sample digestion, and the results are not affected by the matrix.

3 Results and Analysis

3.1 Analysis of Mercury Content in Surface Soil

The analysis results for the mercury content in the surface soil near the landfill are shown in Table 1; statistical analysis with EXCEL gives the results shown in Figure 1.
From Table 1 and Figure 1 it can be seen that the mercury distribution in the surface soil around the landfill follows certain laws. The mercury concentration in the surface soil at 50 m from the garbage dump is higher than at 20 m and 100 m, the order being S50 (0.0411 ppm) > S20 (0.0345 ppm) > S100 (0.0257 ppm); the mercury concentrations in different directions are WS2 (0.0525 ppm) > W (0.0441 ppm) > WS1 (0.0255 ppm) > ES (0.0218 ppm). Nevertheless, the surface soil is not contaminated by mercury.
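The averages quoted above can be reproduced directly from Table 1. In the sketch below, WS1 and WS2 are our labels for the two southwest lines, and None marks the WS2 value at 100 m, which is not reported in the table:

```python
# Mercury contents (ppm) from Table 1; None marks the value not reported
# for the second southwest line (WS2) at 100 m.
hg = {
    "W":   [0.0352, 0.0630, 0.0342],
    "WS1": [0.0284, 0.0305, 0.0177],
    "WS2": [0.0550, 0.0499, None],
    "ES":  [0.0193, 0.0208, 0.0253],
}

def mean(vals):
    vals = [v for v in vals if v is not None]
    return sum(vals) / len(vals)

# direction averages (rightmost column of Table 1)
dir_avg = {line: mean(v) for line, v in hg.items()}
# distance averages (bottom row of Table 1): S20, S50, S100
dist_avg = [mean([hg[line][i] for line in hg]) for i in range(3)]
# the overall value 0.0360 is the mean of the four direction averages
overall = mean(list(dir_avg.values()))
```

Rounding these to four decimals reproduces every average in Table 1.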

Table 1. Mercury content in surface soil near the landfill (Unit: ppm)

Sampling Point   Mercury Content of Sampling Sites at Different Distances   Average
Position         S20        S50        S100                                 Value
W                0.0352     0.063      0.0342                               0.0441
WS1              0.0284     0.0305     0.0177                               0.0255
WS2              0.055      0.0499     -                                    0.0525
ES               0.0193     0.0208     0.0253                               0.0218
Average Value    0.0345     0.0411     0.0257                               0.0360


Fig. 1. Mercury content distribution diagram in surface soil near the landfill

3.2 Vertical Analysis of Mercury Content

According to the mercury content in the different soil sections, EXCEL was used for the vertical analysis; the results are shown in Figure 2.
From Figure 2 it can be seen that the mercury content at the sampling points in most directions decreases with increasing depth; the exceptions are the three sampling points in the southeast, where the content increases with increasing depth, and an anomaly occurring at the 50 m point in the southwest.


Fig. 2. Vertical analysis diagram of mercury content in the soil

4 Conclusions
(1) The mercury distribution in the surface soil around the landfill follows certain laws: S50 (0.0411 ppm) > S20 (0.0345 ppm) > S100 (0.0257 ppm). The main reason may be that mercury in particulate and gaseous form contaminates the surrounding soil through wind migration, with an influence range of about 50 m. Secondly, the mercury distribution of the surface soil in different directions is WS2 (0.0525 ppm) > W (0.0441 ppm) > WS1 (0.0255 ppm) > ES (0.0218 ppm). The main reason may be the dominant wind direction in Huainan, which is southeast in summer and northeast in winter.
(2) The vertical distribution of mercury in the soil around the landfill is as follows: the mercury content in the southeast increases with increasing depth. The main reason may be that the underlying soil there is coal gangue, which makes the mercury distribution in the soil show this trend in the southeast. The mercury content at the sampling points in the other directions decreases with increasing depth, which shows that mercury pollution in the soil around the landfill is mainly concentrated in the topsoil.
Because the work time was short and the amount of data is limited, the study of the mercury pollution characteristics is not comprehensive, and other heavy-metal pollution in the soil near the landfill has not yet been analyzed. Therefore, further research is needed in future work.

References
1. Pukkala, E., Pönkä, A.: Increased incidence of cancer and asthma in houses built on a former dump area. Environ. Health Perspect. 109(11), 1121-1125 (2001)
2. Vrijheid, M., et al.: Chromosomal congenital anomalies and residence near hazardous waste landfill sites. The Lancet 359(9303), 320-322 (2002)
3. Li, Z.-g., Feng, X.-b., Tang, S.-l., et al.: Distribution characteristics of mercury in the waste, soil and plants at municipal solid waste landfills. Earth and Environment 34(4), 11-18 (2006) (in Chinese)
4. Ding, Z.-h., Wang, W.-h., Tang, Q.-h., et al.: Release of mercury from Laogang Landfill, Shanghai. Earth and Environment 33(1), 6-10 (2005) (in Chinese)
5. Bartolacci, S., Buiatti, E., Pallante, V., et al.: A study on mortality around six municipal solid waste landfills in Tuscany Region. Epidemiol. Prev. 29(5-6 suppl.), 53-56 (2005)
6. Tang, Q.-h., Ding, Z.-h., Wang, W.-h.: Pollution and transference of mercury in the soil-plant system of different landfill units. Shanghai Environmental Sciences 22(11), 768-775 (2003) (in Chinese)
7. Chang, Q.-s., Ma, X.-q., Wang, Z.-y., et al.: Pollution characteristics and evaluation of heavy metals in municipal rubbish landfill sites. Journal of Fujian Agriculture and Forestry University (Natural Science Edition) 36(2), 194-197 (2007) (in Chinese)
The Research on Method of Detection for
Three-Dimensional Temperature of the Furnace
Based on Support Vector Machine

Yang Yu1, Jinxing Chen1, Guohua Zhang2, and Zhiyong Tao2


1
School of Information Science & Engineering,
Shenyang Ligong University,
110159 Shenyang, China
2
Shenyang Artillery Academy of PLA,
110162 Shenyang, China
yusongh@126.com, zhao.y.w@hotmail.com

Abstract. SVM is an emerging, widely applicable learning technology based on Statistical Learning Theory. In view of the variety of the changes of the burning flame in a large-scale boiler, and in order to grasp the status of the furnace flame in real time, this paper introduces the basic method of obtaining the flame-image temperature with an SVM network, discusses the SVM network model thoroughly, and puts forward an improved SVM network method for calculating the three-dimensional temperature field, which has many advantages over traditional optimization methods. The simulation studies and experimental results show that using the SVM network to test the three-dimensional temperature field in the furnace is feasible.

Keywords: SVM, Three-dimensional temperature field, Test.

1 Introduction

The flame combustion process in a boiler furnace takes place in a large space, fluctuates constantly, and is a physical and chemical process with obvious three-dimensional characteristics. The stability of the furnace flame directly affects the safety of production. Therefore, testing the three-dimensional temperature field in the furnace has important practical significance [1-2].
In this paper, a CCD (charge-coupled device) camera is used as a sensor for the two-dimensional radiation energy distribution of the furnace, receiving the three-dimensional heat radiation signal in the furnace and extending earlier research based on the BP algorithm. The RGB three-color signal values received by the CCD camera under visible light serve as the three inputs of the SVM network; 50 groups of visible-light RGB three-color signal values from blackbody furnace calibration, with the corresponding temperature T values as the output, are used to train the SVM network, and an SVM temperature network is thereby established. The prediction results of the SVM network temperature model are analyzed, and the well-trained SVM network is used to obtain the two-dimensional flame radiation temperature image, which is combined with the regularization method to complete the three-dimensional
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 341346, 2011.
Springer-Verlag Berlin Heidelberg 2011

combustion flame temperature field reconstruction. Finally, the trained SVM is simulated and analyzed, verifying that the results are ideal, which also proves that using an SVM network to obtain the three-dimensional temperature distribution is feasible.

2 Principle of SVM Network Temperature Measurement


The basic idea of SVM can be summarized as: first transform the input space by a
nonlinear transformation into a high dimensional space, and then strike a new space in
the optimal linear separating hyperplane, which is a nonlinear transformation inner
product by defining the appropriate function to achieve.
Training sample set of input space dimension q , In the input space to feature space,

{ ( x )}
l
j represents a set of nonlinear transformation, l that the dimension of feature
j =1

space. Act as decision-making side of this hyperplane can be defined as Eq1 a feature
space.
l

( x) + b = 0
j =1
j j (1)

With $w = [b, w_1, w_2, \ldots, w_l]^T$ and $\varphi(x) = [1, \varphi_1(x), \varphi_2(x), \ldots, \varphi_l(x)]^T$, the decision hyperplane can be further simplified as

$$w^T \varphi(x) = 0 \qquad (2)$$

At this point, the inner product in the feature space can be replaced by the kernel function

$$K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j) \qquad (3)$$

and the dual form of the objective function becomes

$$Q(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \qquad (4)$$

If $\alpha_i^*$ is the optimal solution, the other conditions of the algorithm are unchanged, but

$$w^* = \sum_{i=1}^{n} \alpha_i^* y_i \varphi(x_i) \qquad (5)$$

The corresponding decision function becomes

$$f(x) = \mathrm{sgn}\left( \sum_{i=1}^{n} \alpha_i^* y_i K(x_i, x) + b^* \right) \qquad (6)$$

The SVM output is a linear combination of the middle-layer nodes, each of which corresponds to the inner product of the input sample and one support vector. The decision function thus obtained has a form similar to a neural network, and it is therefore called a support vector network, as shown in Figure 1.
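As a sketch of Eq. (6), the decision function can be evaluated directly once the support vectors, multipliers and bias are known. The support vectors and coefficients below are made-up toy values, not quantities from the paper, and the Gaussian kernel the paper adopts later is used for K:

```python
import numpy as np

def rbf_kernel(xs, x, sigma=1.0):
    # Gaussian kernel K(x_i, x) = exp(-||x_i - x||^2 / sigma^2)
    return np.exp(-np.sum((xs - x) ** 2, axis=-1) / sigma ** 2)

def svm_decision(x, support_vecs, alphas, ys, b, sigma=1.0):
    """Eq. (6): f(x) = sgn( sum_i alpha_i* y_i K(x_i, x) + b* )."""
    return np.sign(np.dot(alphas * ys, rbf_kernel(support_vecs, x, sigma)) + b)

# Toy support vectors and multipliers, made up for illustration only:
sv = np.array([[0.0, 0.0], [1.0, 1.0]])
alpha = np.array([0.5, 0.5])
y = np.array([-1.0, 1.0])
print(svm_decision(np.array([0.9, 0.9]), sv, alpha, y, b=0.0))  # -> 1.0
```

A query near the positive support vector (1, 1) is classified as +1; one near (0, 0) as -1.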


Fig. 1. Schematic diagram of support vector machines

3 SVM Network Model of Temperature

3.1 Sample Selection and Processing

Using a blackbody furnace and its temperature control system, thermal radiation images were captured at different temperatures. A total of 50 groups of red (R), green (G) and blue (B) three-color signal values and the corresponding temperatures T were obtained; the sample data are listed in Table 1.

Table 1. Blackbody calibration furnace sample data of 50 groups

R 700 135 141 154 173 180 182 186 183 190 192 193 194 194 198 200 201 201 204 206 208 210 211 214 216
G 52 58 62 75 92 98 56 65 103 79 84 85 90 98 103 104 107 112 118 115 148 112 115 139 152
B 1 3 3 9 13 14 28 33 37 39 38 40 36 11 19 28 11 12 50 43 20 13 17 32 55
T/C 700 732 781 797 876 884 894 905 913 927 930 938 945 952 962 971 980 984 994 1008 1011 1020 1022 1033 1049
R 217 220 221 225 227 234 235 237 240 242 247 249 252 255 252 255 255 255 255 255 255 255 255 255 255
G 155 160 166 170 185 134 197 200 176 146 146 150 167 171 183 190 198 201 207 215 220 223 227 228 231
B 57 59 58 60 61 23 64 66 55 36 28 32 51 61 75 77 80 81 82 83 84 85 86 87 89
T/C 1052 1068 1072 1090 1105 1125 1138 1149 1152 1167 1178 1182 1198 1205 1216 1227 1249 1255 1260 1271 1283 1305 1316 1321 1330

3.2 The Choice of Kernel Function

The kernel function selected in this article is the Gaussian kernel function (also known as the RBF kernel function), represented as

$$K(x, x_i) = \exp\left( -\frac{\|x - x_i\|^2}{\sigma^2} \right) \qquad (7)$$

where $\sigma$ is the width of the Gaussian distribution.

3.3 SVM Network Training

The SVM network chosen here has 3 input nodes and 1 output node. Using the characteristics of the SVM network, we construct a 3-layer SVM network: three input-layer nodes, namely the R, G and B color values, and one output-layer node, the two-dimensional temperature T of the flame radiation image. The network training data are listed in Table 1.
The kernel parameter (g) and the penalty factor (c) are two important parameters of the SVM network. In the training process, the range of the c value was set to -10 to 10 and the range of the g value to -5 to 5. The SVM parameter optimization diagram obtained by training is shown in Figure 2.

Fig. 2. SVM parameter optimization map
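The (c, g) search can be sketched as an exhaustive scan over exponent grids. The sketch below uses kernel ridge regression as a lightweight stand-in for the SVR training the paper performs, and the RGB-to-temperature data are synthetic, not the Table 1 calibration samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 50 blackbody calibration samples: temperature
# rises roughly linearly with the normalized RGB signal (NOT the paper's data).
X = rng.uniform(0.0, 1.0, size=(50, 3))
T = 700.0 + 600.0 * X.mean(axis=1) + rng.normal(0.0, 5.0, 50)

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cv_error(c_exp, g_exp, k=5):
    """k-fold cross-validation error of kernel ridge regression, used here
    as a lightweight stand-in for SVR; penalty C = 2**c_exp (regularization
    lambda = 1/C) and kernel width gamma = 2**g_exp."""
    lam, gamma, err = 1.0 / 2.0 ** c_exp, 2.0 ** g_exp, 0.0
    folds = np.array_split(rng.permutation(50), k)
    for f in folds:
        tr = np.setdiff1d(np.arange(50), f)
        w = np.linalg.solve(rbf(X[tr], X[tr], gamma) + lam * np.eye(len(tr)), T[tr])
        err += np.mean((rbf(X[f], X[tr], gamma) @ w - T[f]) ** 2)
    return err / k

# Exhaustive scan over the same exponent ranges as in the paper:
# c in [-10, 10], g in [-5, 5]
grid = [(c, g) for c in range(-10, 11, 2) for g in range(-5, 6)]
best = min(grid, key=lambda p: cv_error(*p))
```

The pair minimizing the cross-validation error plays the role of the optimum marked in Figure 2.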

3.4 SVM Network Model Predictions of Temperature


In the SVM network training is completed, its prediction accuracy for the simulation
analysis. Specific simulation results shown in Figure 3.

(a) SVM network relative prediction error; (b) SVM network absolute prediction error

Fig. 3. The simulation results of prediction accuracy

Figure 3(a) shows the relative prediction error curve of the SVM network; it can be seen that the relative prediction error is less than 0.4%, so the prediction accuracy is very high. Figure 3(b) is the absolute prediction error graph of the SVM network; it can be seen that the absolute prediction error is less than 3 °C. In sum, the trained SVM network has good prediction accuracy.
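The two error measures plotted in Fig. 3 are straightforward to compute. The predicted and true temperatures below are hypothetical illustrations, not the paper's 50 samples:

```python
import numpy as np

# Hypothetical true vs. predicted temperatures (°C), for illustration only
T_true = np.array([700.0, 900.0, 1100.0, 1300.0])
T_pred = np.array([701.5, 898.0, 1102.0, 1297.5])

abs_err = T_pred - T_true          # absolute error, as in Fig. 3(b)
rel_err = abs_err / T_true         # relative error, as in Fig. 3(a)
print(np.abs(abs_err).max(), np.abs(rel_err).max())
```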

4 Simulation Experiments and Results


Color CCD camera for the furnace flame combustion at a particular moment shown in
Figure 4. On Figure 4 from the installation of heating in the north of the city center of
Beijing, Shunyi 1 # 90 tons of large coal-fired hot water boiler. Flame color CCD
camera detector arrangement with the boiler corners. Monitoring system grate furnace
combustion space is within the area above 2.1m. By the trained SVM network to
strike a flame temperature part of the image area (first 10 lines of the first 10
columns) cell corresponds to the temperature shown in Table 2.

Fig. 4. Furnace flame picture obtained by the CCD camera

Table 2. Temperatures (in °C) of part of the flame image area (the first 10 rows and first 10 columns) obtained by the SVM method

Fig. 5. Three-dimensional surface chart of the third layer of the flame temperature field obtained by the SVM-based network

The regularization method [6] was used to complete the reconstruction of the three-dimensional temperature field in the furnace. The three-dimensional surface corresponding to the third layer of the boiler furnace combustion flame temperature is shown in Figure 5.

5 Conclusion
A CCD camera is used as a sensor for the two-dimensional radiation energy distribution, receiving the heat radiation signal from the three-dimensional furnace. According to a new radiation imaging model of the relationship between the two-dimensional flame temperature image and the three-dimensional combustion temperature equation, the regularization reconstruction method completes the reconstruction of the three-dimensional combustion flame temperature field. To obtain the two-dimensional flame temperature distribution from the radiant energy image, the SVM temperature measurement network is introduced and trained with 50 groups of blackbody furnace calibration sample data. The trained SVM network is then used to obtain the two-dimensional flame radiation temperature image, and, combined with the regularization method, the reconstruction of the three-dimensional combustion flame temperature field is completed. Through this three-dimensional reconstruction, the safe, economical and clean operation of the boiler is further realized, and the result is of great significance for guiding the optimization of boiler combustion.

References
1. Zhou, H.C., Han, S.D., Sheng, F., Zheng, C.-G.: Numerical simulation on a visualization monitoring method of three-dimensional temperature distribution in a furnace. Power Engineering 23, 2154-2156 (2003)
2. Chun, H.: Principles and techniques of visual furnace flame detection. Science Press, Beijing (2005)
3. Wang, S.Z.: Support vector machines and application. PhD thesis, Harbin Institute of Technology, 19-24 (2009)
4. Deng, N.Y., Tian, Y.J.: A new data mining method - support vector machine. Science Press, Beijing (2004)
5. Peng, B.: Support Vector Machine Theory and Engineering Application. Xidian University Press, Xi'an (2008)
6. Zhou, H.C., Han, S.D., Shen, G.F., et al.: Visualization of three-dimensional temperature distributions in a large-scale furnace via regularized reconstruction from radiative energy images: numerical studies. J. of Quantitative Spectroscopy and Radiative Transfer 72, 361-383 (2002)
Study on Wave Filtering of Photoacoustic Spectrometry
Detecting Signal Based on Mallat Algorithm

Yang Yu1, Shuo Wu1, Guohua Zhang2, and Peixin Sun2


1
School of Information Science & Engineering,
Shenyang Ligong University, 110159 Shenyang, China
2
Shenyang Artillery Academy of PLA, 110162 Shenyang, China
yusongh@126.com, zhao.y.w@hotmail.com

Abstract. Starting from the concept of the wavelet, this paper analyses the peculiarities of wavelets and applies the Mallat algorithm of the wavelet method to process the photoacoustic signals of water content in oil. The Mallat algorithm is a kind of multi-resolution wavelet transform: the large-scale space roughly corresponds to the signal profile, while the small-scale space corresponds to the signal details. The algorithm is especially suitable for processing seismic signals, photoacoustic signals, voice signals and other non-stationary signals, and ideal noise removal and filtering effects have been achieved for the photoacoustic signals with this method.

Keywords: Mallat algorithm, optical wavelet transform, filtering, noise removal.

1 Introduction
With technological progress, there are more and more signal processing methods. The choice of signal processing method is closely connected with the characteristics of the signal to be tested, so even the same method can produce hugely different processing effects for different signals. To achieve the expected processing effects, researchers are always trying to find effective methods for processing all kinds of signals. Signal processing has evolved from analog to digital and from deterministic signals to random signals, and it is striding towards an age of signal processing that takes non-stationary and non-Gaussian signals as its main study objects and nonlinear and uncertain characteristics as its main study features.

2 The Wavelet Algorithm Study
The basic idea of wavelet analysis is to use a group of functions with local characteristics to represent or approximate other functions. This group of functions, called the wavelet function system, is formed by a series of dilations and translations of the wavelet function, and the wavelet transform analyzes signals on the basis of these dilations and translations of the wavelet. The wavelet function group is $\{\psi_{a,b}\}$, and its expression is Eq. (1).

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 347352, 2011.
Springer-Verlag Berlin Heidelberg 2011

$$\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\, \psi\!\left(\frac{t-b}{a}\right), \quad a, b \in R,\; a \neq 0 \qquad (1)$$

In Eq. (1), $a$ is a dilation (scale) factor and $b$ is a translation factor.
Suppose $f(t) \in L^2(R)$; the continuous wavelet transform is

$$W_f(a,b) = \langle f, \psi_{a,b} \rangle = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{+\infty} f(t)\, \overline{\psi\!\left(\frac{t-b}{a}\right)}\, dt \qquad (2)$$

The inverse formula, by which $f(t)$ is recomposed from $W_f(a,b)$, is

$$f(t) = \frac{1}{C_\psi} \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} W_f(a,b)\, \psi_{a,b}(t)\, \frac{da\, db}{a^2} \qquad (3)$$

where the admissibility constant $C_\psi$ is

$$C_\psi = \int_{-\infty}^{+\infty} \frac{|\hat{\psi}(\omega)|^2}{|\omega|}\, d\omega \qquad (4)$$

The continuous wavelet transform is mainly used for theoretical analyses and derivations; in the actual signal processing course we need the discrete wavelet transform more. The discretization here refers to the continuous scale factor $a$ and translation factor $b$, not to the time variable $t$. First the scale factor $a$ is discretized to obtain the dyadic wavelet and the dyadic wavelet transform; then the time-center factor $b$ is discretized in the dyadic integral way.
The continuous wavelet function is

$$\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t-b}{a}\right), \quad b \in R,\; a \in R^+ \qquad (5)$$

In Eq. (5), discretize the scale and translation parameters of the continuous wavelet transform. With $j$ and $k$ integers, the discrete wavelet is

$$\psi_{j,k}(t) = \frac{1}{\sqrt{a_0^j}}\, \psi\!\left(\frac{t - k b_0 a_0^j}{a_0^j}\right) = a_0^{-j/2}\, \psi\!\left(a_0^{-j} t - k b_0\right) \qquad (6)$$

The corresponding discrete wavelet transform is

$$W_f(j,k) = \langle f, \psi_{j,k} \rangle = a_0^{-j/2} \int_{-\infty}^{+\infty} f(t)\, \overline{\psi\!\left(a_0^{-j} t - k b_0\right)}\, dt \qquad (7)$$

If there exist positive constants $A$ and $B$ with $0 < A \le B < \infty$ such that, for every $f(t) \in L^2(R)$,

$$A \|f\|^2 \le \sum_j \sum_k \left| \langle f, \psi_{j,k} \rangle \right|^2 \le B \|f\|^2 \qquad (8)$$

then $\{\psi_{j,k}\}$ forms a wavelet frame of $L^2(R)$, in which $A$ and $B$ are the frame bounds.
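A direct numerical discretization of Eq. (2), summing f(t) against the scaled and shifted wavelet, can be sketched as follows; the Mexican-hat (Ricker) wavelet and the test signal are illustrative choices, not taken from the paper:

```python
import numpy as np

def mexican_hat(t):
    # Ricker ("Mexican hat") wavelet, an admissible real wavelet
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def cwt(f, dt, scales):
    """Direct discretization of Eq. (2):
    W_f(a, b) = |a|^(-1/2) * sum_t f(t) psi((t - b)/a) dt."""
    t = np.arange(len(f)) * dt
    out = np.empty((len(scales), len(f)))
    for i, a in enumerate(scales):
        for j, b in enumerate(t):
            out[i, j] = np.sum(f * mexican_hat((t - b) / a)) * dt / np.sqrt(a)
    return out

t = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * 10 * t)               # a 10 Hz test oscillation
W = cwt(f, t[1] - t[0], np.array([0.005, 0.02, 0.08]))
```

The scale whose center frequency matches the oscillation responds most strongly, which is exactly the localization property the section describes.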

3 The Wavelet Mallat Algorithm
Multi-resolution signal analysis at different scales shows the signal features under different resolutions, so the essence of multi-resolution analysis is to decompose the signal on different space layers. The large scale space corresponds to the rough appearance of the signal, and the small scale space corresponds to its fine details. By constantly changing the scale spaces, we can observe the signal from coarse to fine in the respective scale spaces; this is the idea of multi-resolution analysis. The essence of multi-resolution analysis is a series of subspaces of $L^2(R)$ meeting specific requirements, which can be defined as follows: a multi-resolution analysis of the space $L^2(R)$ is a space sequence $\{V_j, j \in Z\}$ meeting requirements of monotonicity, asymptotic completeness, scalability and stable translation invariance. From the definition of multi-resolution analysis, we obtain the standard orthogonal basis of the $V_j$ space:

$$\varphi_{j,n}(x) = 2^{j/2}\, \varphi\!\left(2^j x - n\right), \quad n \in Z \qquad (9)$$

Because $\varphi(x) \in V_0 \subset V_1$ and $\{\sqrt{2}\, \varphi(2x - n), n \in Z\}$ is a standard orthogonal basis of $V_1$, there must be a coefficient sequence $\{h_n, n \in Z\} \in L^2(Z)$ such that $\varphi(x) = \sum_n h_n \sqrt{2}\, \varphi(2x - n)$; this is the scaling (two-scale) equation of the wavelet.
Suppose $\varphi(t)$ and $\psi(t)$ are the scaling function and the wavelet of the function $f(t)$ approximated at resolution $2^j$; the discrete approximation $C_j f(t)$ and the detail part $D_j f(t)$ can be expressed as $C_j f(t) = \sum_k c_{j,k}\, \varphi_{j,k}(t)$ and $D_j f(t) = \sum_k d_{j,k}\, \psi_{j,k}(t)$, in which $c_{j,k}$ and $d_{j,k}$ are the scaling coefficient and the wavelet coefficient at resolution $2^j$.
According to the Mallat algorithm, $C_j f(t)$ is resolved as the sum of the rough image $C_{j-1} f(t)$ and the detail image $D_{j-1} f(t)$:

$$C_j f(t) = C_{j-1} f(t) + D_{j-1} f(t) \qquad (10)$$

Substituting Eq. 11 and Eq. 12 into Eq. 10 gives Eq. 13:

C_{j−1} f ( t ) = Σ_{m=−∞}^{+∞} c_{j−1,m} φ_{j−1,m} ( t )    (11)

D_{j−1} f ( t ) = Σ_{m=−∞}^{+∞} d_{j−1,m} ψ_{j−1,m} ( t )    (12)
350 Y. Yu et al.

Σ_{m=−∞}^{+∞} c_{j−1,m} φ_{j−1,m} ( t ) + Σ_{m=−∞}^{+∞} d_{j−1,m} ψ_{j−1,m} ( t ) = Σ_{m=−∞}^{+∞} c_{j,m} φ_{j,m} ( t )    (13)

The tower-style (pyramidal) decomposition of the Mallat algorithm can then be
written as:

C^0 ( n ) → C^1 ( n ) → ··· → C^{J−1} ( n ) → C^J ( n )
        ↘ D^1 ( n )   ↘ D^2 ( n )   ···   ↘ D^J ( n )    (14)

C^0 ( n ) is the measured data of the experiment; C^j ( n ) and D^j ( n ) are called the discrete
approximation and the discrete detail. C^j ( n ) = C^{j−1} ( n ) · H = Σ_{l∈Z} h ( l − 2n ) C^{j−1} ( l )
is the low-frequency component of the original data, and
D^j ( n ) = C^{j−1} ( n ) · G = Σ_{l∈Z} g ( l − 2n ) C^{j−1} ( l )
is the high-frequency component; J is
the maximum decomposition level.
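As an illustration, one decimated step of the tower decomposition above can be sketched as follows; the orthonormal Haar filter pair is an assumption chosen for concreteness, not the filter used in the paper.

```python
import numpy as np

def mallat_step(c, h, g):
    """One decimated Mallat step (Eq. 14):
    C_j(n) = sum_l h(l - 2n) C_{j-1}(l),  D_j(n) = sum_l g(l - 2n) C_{j-1}(l)."""
    n_out = len(c) // 2                 # each step halves the data length
    c_next = np.zeros(n_out)
    d_next = np.zeros(n_out)
    for n in range(n_out):
        for k, (hv, gv) in enumerate(zip(h, g)):
            pos = 2 * n + k             # index l = 2n + k, filter tap h(k)
            if pos < len(c):
                c_next[n] += hv * c[pos]
                d_next[n] += gv * c[pos]
    return c_next, d_next

# orthonormal Haar analysis pair (illustrative assumption)
h = [1 / np.sqrt(2), 1 / np.sqrt(2)]
g = [1 / np.sqrt(2), -1 / np.sqrt(2)]

c0 = np.array([1.0, 1.0, 4.0, 4.0])    # point count must be a multiple of 2
c1, d1 = mallat_step(c0, h, g)         # c1: local means, d1: local differences
```

With the orthonormal Haar pair the step preserves energy: the squared norms of c1 and d1 sum to that of c0, which is a quick sanity check of the filter indexing.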
H = { h ( l ) , l ∈ Z } and G = { g ( l ) , l ∈ Z } here denote the discrete
low-pass and high-pass filters, G being the mirror filter of H. Though this kind of
algorithm is very convenient, it requires that the number of points of the original
data C^(0) ( n ) be a multiple of 2^N, and the number of data points is halved after
each decomposition. The analysis result of the optical spectrum then becomes
discontinuous, which brings some difficulties for the analysis of the optoacoustic
signal result. Therefore this article adopts the improved Mallat algorithm to filter
the optoacoustic signals, with the expressions:

C^(j) ( n ) = Σ_{l=0}^{l(j)−1} h^(j) ( l ) C^(j−1) ( n − l )    (15)

D^(j) ( n ) = Σ_{l=0}^{l(j)−1} g^(j) ( l ) C^(j−1) ( n − l )    (16)

4 The Noise Removal Example Analysis of the Optoacoustic Signals


This article applies the improved Mallat algorithm to the optoacoustic signal
processing of detecting water in oil by optoacoustic spectroscopy. The object of
optoacoustic spectroscopic detection is the process in which a substance absorbs
light, becomes excited, and returns to its original state through non-radiative
transitions. Therefore photons that have been absorbed, annihilated or have
interacted can also be detected and analyzed.

The detecting platform for optoacoustic spectroscopic detection of the water
content in oil consists of a laser, chopper, preamplifier, ARM9 data processing module,
filter, phase-locked amplifier, microphone, and so on. Using this experimental
platform, a great deal of oil-in-water optoacoustic data was collected; because of the
huge amount, only a part of the data is listed in Table 1.

Table 1. The Oil-in-Water Optoacoustic Data (unit: V)

Collection no. | Optoacoustic data | Collection no. | Optoacoustic data
 1 | 0.957031 | 12 | 0.969238
 2 | 0.957071 | 13 | 0.97168
 3 | 0.95459  | 14 | 0.96923
 4 | 0.949707 | 15 | 0.974141
 5 | 0.952158 | 16 | 0.974121
 6 | 0.952153 | 17 | 0.97168
 7 | 0.959473 | 18 | 0.981445
 8 | 0.959493 | 19 | 0.966797
 9 | 0.964356 | 20 | 0.966787
10 | 0.961911 | 21 | 0.959473
11 | 0.961914 | 22 | 0.952148

This article adopts Eq. 15 and Eq. 16 to filter and denoise the collected
optoacoustic data: h^(j) is the coefficient sequence of the low-pass filter in the
computation; g^(j) is the coefficient sequence of the high-pass filter; l(j) is the
number of data points of the filter. Each time the number of decompositions increases
by one, the value of l(j) doubles correspondingly, which makes this improved
algorithm more suitable for processing the optoacoustic signals.
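The undecimated recursion of Eqs. 15 and 16 can be sketched as below. The averaging/differencing filter pair, the zero-padding at the record boundary, and the exact dilation rule are illustrative assumptions; the paper does not specify them.

```python
import numpy as np

def atrous_level(c_prev, h, g, j):
    """One level of the improved Mallat algorithm (Eqs. 15-16): the filter taps
    are spread by a hole spacing of 2**(j-1), so the effective filter length l(j)
    doubles at each level while the data length stays constant (no decimation)."""
    step = 2 ** (j - 1)                  # "a trous" spacing at level j
    n = len(c_prev)
    c_next = np.zeros(n)
    d_next = np.zeros(n)
    for i in range(n):
        for k, (hv, gv) in enumerate(zip(h, g)):
            pos = i - k * step           # samples before the record are zero-padded
            if 0 <= pos < n:
                c_next[i] += hv * c_prev[pos]
                d_next[i] += gv * c_prev[pos]
    return c_next, d_next

h = [0.5, 0.5]                           # averaging (low-pass) pair, assumed
g = [0.5, -0.5]                          # differencing (high-pass) pair, assumed

# a few samples on the scale of the Table 1 data
signal = np.array([0.957, 0.957, 0.954, 0.950, 0.952, 0.952, 0.959, 0.959])
c1, d1 = atrous_level(signal, h, g, j=1)
c2, d2 = atrous_level(c1, h, g, j=2)
# len(c2) == len(signal): the filtered record stays continuous
```

With this filter pair c1[i] + d1[i] reproduces signal[i] away from the boundary, so the low/high split loses no information.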
The optoacoustic data curve before filtering is shown in Fig.1.

Fig. 1. The Optoacoustic Optical Spectrum Data Curves before Filtering

The oil-in-water optoacoustic signals are collected by the microphone and amplified
to about 0.9 V by the preamplifier. Observing Fig. 1, we can see that many noise
curves are mixed into this signal curve. The noise curves strongly influence the
transformed results, and this influence ultimately lowers the precision of the
optoacoustic spectroscopy experiment results.

The optoacoustic data curves obtained after the Mallat algorithm filtering are shown
in Fig. 2. The optoacoustic curves after the wavelet Mallat algorithm filtering are much
clearer: the noise curves have less influence on the optoacoustic curve result, so
these curves reduce the influence on the transformed water-content-in-oil results and
help considerably to enhance the precision of the experiment results.

Fig. 2. The Optoacoustic Optical Spectrum Data Curves after Filtering

The transform from the optoacoustic curves to the water content in oil is realized by
Eq. 17:
S = C α P_L N    (17)

In Eq. 17, S is the voltage value of the optoacoustic signal; P_L is the
output optical power of the semiconductor laser; N is the volume fraction of the
water content in the sample oil; α is the optical absorption coefficient of water;
C is the cell constant of the photoacoustic cell and depends on factors such as the
sensitivity of the microphone and the geometric structure of the cell.
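For illustration, Eq. 17 can be inverted for the water volume fraction N; the numerical values of C, α and P_L below are placeholders, not calibrated constants from this experiment.

```python
def water_content(S, C, alpha, P_L):
    """Invert Eq. 17, S = C * alpha * P_L * N, for the volume fraction N."""
    return S / (C * alpha * P_L)

# hypothetical calibration constants, chosen only to make the arithmetic concrete
N = water_content(S=0.96, C=100.0, alpha=0.4, P_L=1.2)   # -> 0.02
```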
As Fig. 1 and Fig. 2 show, the actual filtering experiment proves that the Mallat
algorithm based on the orthogonal wavelet transform has very good filtering and
noise-removal effects for optoacoustic signal processing. This filtering and
noise-removal processing lays a foundation for processing the experiment platform
results, and the example also proves that this wavelet transform algorithm is
practical for optoacoustic spectroscopic detection.

Ontology-Based Context-Aware Management for
Wireless Sensor Networks

Keun-Wang Lee and Si-Ho Cha*

Dept. of Multimedia Science, Chungwoon University


San 29, Namjang-ri, Hongseong, Chungnam, 350-701, South Korea
{kwlee,shcha}@chungwoon.ac.kr

Abstract. Wireless sensor networks (WSNs) support popular military and
civilian applications such as surveillance, monitoring, disaster recovery, home
automation and many others. All these WSN applications require some form of
self-managing, autonomous computing without any human interference.
Recently, ontology has become a promising technology for intelligent context-
aware network management, and it may cope with the various conditions of WSNs.
This paper describes an ontology-based management model for context-aware
management of WSNs. It provides autonomous self-management of WSNs
using ontology-based context representation, as well as reasoning mechanisms.

Keywords: Ontology, Context-Aware, Self-Management, Network Management,
Wireless Sensor Network.

1 Introduction
Network management is the process of managing, monitoring, and controlling the
behavior of a network. Wireless sensor networks (WSNs) pose unique challenges for
network management that make traditional network management techniques
impractical [1]. WSN consists of a large number of small sensor nodes having
sensing, data processing, communication, and ad-hoc functions. WSNs have critical
applications in the scientific, medical, commercial, and military domains. Examples
of these applications include environmental monitoring, smart homes and offices,
surveillance, and intelligent transportation systems [2]. WSNs are characterized by
densely deployed nodes, frequently changing network topology, variable traffic, and
unstable sensor nodes of very low power, computation, and memory constraints.
These unique characteristics and constraints present numerous challenging problems
in the management of WSNs which are not encountered in the traditional wireless
networks [3][4].
The management of WSNs needs to be lightweight, autonomous, intelligent, and
robust [5]. A network management system needs to handle a certain amount of
control messages to cope with various conditions of the network. With WSNs, it is
extremely important to minimize the signaling overhead since the sensor nodes have
limited battery life, storage, and processing capability. Given the dynamic nature of

* Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 353–358, 2011.
© Springer-Verlag Berlin Heidelberg 2011

WSNs, an adaptive management framework that autonomously reacts to the changes
in the network condition is required. To this end, we previously proposed and
implemented the sensor network management (SNOWMAN) framework [6], which employs
the policy-based management (PBM) approach to let the sensor nodes autonomously
organize and manage themselves. The SNOWMAN framework showed smaller
energy consumption for network management and longer network lifetime than the
existing schemes for networks of practical size. In the SNOWMAN framework, sensor
nodes are managed by a policy information base (PIB), which lets the sensor nodes
respond dynamically to specific events by making local decisions themselves using
the policies.
However, it may not cope with various conditions of WSNs. Therefore, it is
necessary to introduce more intelligent and context-aware management mechanisms
so that the sensor nodes can adapt themselves to changing situations. These days,
ontology has become a promising technology and it seems that it may be used in the
intelligent context-aware approach for network management [7]. With the advance of
context-aware computing, there is an increasing need for developing context models
to facilitate context representation and reasoning.
Ontologies allow the network management system to organize the information
gathered from various sensors and infer from the information and find out the status
of the network. Therefore, the appropriate management actions for WSN can be
automated. In addition, ontologies can automate information sharing and processing
among WSN's members, and allow WSN management systems to interwork between
them by modeling and organizing WSN environments clearly.
This paper presents an ontology-based context-aware management model to
automatically manage WSNs by applying ontology-based languages, and provides a
possible scenario to implement autonomic self-management using the proposed
ontology-based model. The rest of the paper is organized as follows. Section 2
discusses the application of ontology and related languages, and Section 3 introduces
the proposed ontology-based model. The context reasoning is discussed in Section 4.
Finally, in Section 5, conclusions are made including the future research.

2 Ontology and Semantic Web


Ontologies are used in knowledge management and artificial intelligence to solve
questions related to semantics, with current relevance in the Semantic Web. Ontology
is a formal representation of knowledge by a set of concepts within a domain and the
relationships between those concepts. Ontology provides a vocabulary of classes and
relations to describe a domain, stressing knowledge sharing and knowledge
representation. It can therefore be used to describe and reason about the properties of
a domain. The use of ontologies to represent and reason about information related to
network management will be important [7].
Web Ontology Language (OWL) [8] is the Web standard language to represent
ontologies. The OWL is based on Resource Description Framework (RDF) [9] and
RDF Schema [10]. Currently, OWL is evaluated as the most powerful reasoning
languages, and is also the most widely used. The RDF is a framework for representing

information on the Web, and has an abstract syntax that reflects a simple graph-based
data model. OWL enables the definition of domain ontologies and sharing of domain
vocabularies and RDF provides data model specifications and XML-based
serialization syntax. OWL is an object-oriented approach and a domain is described in
terms of classes and properties. OWL can be used to specify management
information, and can be employed in modeling and reasoning about context
information in WSNs. OWL provides three increasingly expressive sub-languages
designed for use by specific communities of implementers and users. OWL Lite
supports those users that primarily need a classification hierarchy and simple
constraints. OWL DL supports users who want more expressiveness. OWL DL (DL
for Description Logic) generally includes all OWL language constructs, but they can
be only used under certain restrictions. OWL Full is meant for users who want
maximum expressiveness and the syntactic freedom of RDF with no computational
guarantees. In this paper, OWL DL will be used to create an ontology model to
represent and interpret facts, because it represents a good compromise between
expressivity and computational complexity [11].

3 Ontology-Based Context-Aware Management Model


The proposed ontology-based context-aware management model for WSNs includes a
Context Manager, a Context Reasoner, an Ontology Manager, one or
more Sensor/Actuator Managers, a Context Repository, a Rule
Repository, and a large number of sensors and/or actuators, as shown in Fig. 1.
All components except the sensors and actuators are located in the base station of the WSN.

Fig. 1. Overview of the Ontology-Based Context-Aware Management Model



The Context Manager collects data from heterogeneous context providers
such as sensor nodes. The Ontology Manager manages the context repository for
context awareness. The Context Broker manages queries from the manager and
defines the knowledge and rule files. The Context Repository is the knowledge
base for context model and instance storage, and the Logical Rules used by
the Context Reasoner infer the situation. The ontology-based context model and
instances are stored in the Context Repository, which is a MySQL relational
database. The Sensor Manager and the Actuator Manager read the status
information from the various sensor nodes and execute the management actions on the
sensors or the WSN through various actuators, respectively. The Ontology Manager,
the Context Manager, and the Context Reasoner are based on the Jena
framework [12] and the Jess rule engine [13].
Fig. 2 illustrates a partial definition of a specific ontology for managing WSNs. The
context model is structured as a set of abstract entities, each describing a physical or
conceptual object, including Location, Activity and CompEntity
(computational entity), as well as a set of abstract sub-classes. A specialized object of
the SensorParameter class is added for each specific sensing parameter which is
monitored (e.g., Temperature, Humidity, Acoustic). Measurement
values are used to determine network or target status by comparing these values with
a set of parameter ranges (ParameterRange). Each range is specified in terms of
upper and lower thresholds and a related activity level; when a measured value falls out
of the thresholds, a corresponding management activity is then triggered.
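The out-of-range logic attached to ParameterRange can be sketched as follows; the class and field names mirror the ontology of Fig. 2, but this Python shape is only an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class ParameterRange:
    range_id: str
    lower: float
    upper: float
    activity: str          # management activity triggered when out of range

def check_parameter(value, prange):
    """Return the triggered activity if the measured value falls outside
    the [lower, upper] thresholds, else None (no management action)."""
    if value < prange.lower or value > prange.upper:
        return prange.activity
    return None

# hypothetical range: temperatures outside [-10, 60] trigger a Notification
temp_range = ParameterRange("TemperatureRange", lower=-10.0, upper=60.0,
                            activity="Notification")
triggered = check_parameter(85.0, temp_range)   # out of range
quiet = check_parameter(20.0, temp_range)       # within range, no action
```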

Fig. 2. Context-Aware Management Ontology



When one or more measurement values fall out of the thresholds, the WSN's status
is updated with associations to current OutOfRangeDataParameters and the
corresponding Activity events. An activity can be triggered by abnormal
SensorParameters. When the system has automatically discerned that a critical
situation has occurred, due to abnormal sensor parameter values and/or potentially
dangerous environmental conditions, proper intervention actions should be planned.

4 Context Reasoning
In this section, we describe context reasoning based on the proposed management
model to demonstrate the key feature of the ontology-based context model. In the
context model, we represent contexts in first-order predicate calculus. A context
has the form Predicate(subject, value), in which Predicate is the set
of predicate names (for example, has status, has sensor ID, or is owned by),
subject is the set of subject names (for example, a sensor, location, or object), and
value is the set of all values of subjects (for example, a temperature value, vibration
value, or gas leak). Temperature(sensor12, 100) means that the temperature
of sensor12 is 100 °F.
The ontology reasoning mechanism of the proposed ontology-based context-aware
management supports RDF Schema and OWL DL. The OWL reasoning
system supports constructs for describing properties and concepts (classes),
including relationships between classes. RDF Schema rule sets are needed to
perform RDF Schema reasoning, for example: (?a rdfs:subClassOf ?b),
(?b rdfs:subClassOf ?c) → (?a rdfs:subClassOf ?c). User-
defined rules provide the logic inferences over the ontology base. We have defined a
set of rules in order to determine whether an alarm has to be triggered and which alarm
level should be activated, according to measurement values and corresponding thresholds.
For instance, the following rule activates an alarm when both conditions occur: the
perceived value from an acoustic sensor is higher than 100 kHz and the sensed value
from a seismic sensor is higher than 10 Hz: (?A rdf:type
AcousticValue) ^ (?A hasMeasResult ?B) ^ greaterThan(?B, 100) ^ (?C
rdf:type SeismicValue) ^ (?C hasMeasResult ?D) ^ greaterThan(?D, 10)
→ (?SensorStatus hasActivity Alarm).
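Outside a rule engine, this alarm rule amounts to a conjunction of two existential conditions over the measurement facts; the tuple encoding below is an illustrative assumption, not the Jena/Jess representation.

```python
def alarm_triggered(facts):
    """Fire the alarm when some acoustic measurement exceeds 100 (kHz)
    AND some seismic measurement exceeds 10 (Hz): both conditions required."""
    acoustic_high = any(kind == "AcousticValue" and value > 100
                        for kind, value in facts)
    seismic_high = any(kind == "SeismicValue" and value > 10
                       for kind, value in facts)
    return acoustic_high and seismic_high

facts = [("AcousticValue", 120.0), ("SeismicValue", 12.5)]
fired = alarm_triggered(facts)                    # both thresholds exceeded
calm = alarm_triggered([("AcousticValue", 120.0),
                        ("SeismicValue", 3.0)])   # seismic below 10 Hz
```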
As previously mentioned, our context reasoner is built using the Jena framework [12],
which supports rule-based inference over OWL/RDF graphs. It provides a
programmatic environment for RDF, RDF Schema, OWL and SPARQL, and includes a
rule-based inference engine. Its heart is the RDF API, which supports the creation,
manipulation, and querying of RDF graphs. The current RDF API provides a query
primitive: a method for extracting the subset of all the triples that a selector object
selects from a graph. A simple selector class selects all the triples matching a given pattern.

5 Conclusion
In this paper, an ontology-based context model for autonomous sensor management of
WSNs has been proposed. The paper employed OWL to describe the context
ontologies because it is more expressive than other ontology languages, and showed a
context reasoning example that implements autonomic self-management by using the
proposed ontology-based model in the WSN. The ontology-based context model will
be feasible and necessary for reasoning and automatic management in WSN
environments.
Future research activities will be devoted to implementing and evaluating the proposed
ontology-based context model and logic-based context reasoning schemes in real
WSN environments.

References
1. Lee, W.L., Datta, A., Cardell-Oliver, R.: Network Management in Wireless Sensor Networks. School of Computer Science & Software Engineering, University of Western Australia, http://www.csse.uwa.edu.au/~winnie/Network_Management_in_WSNs_.pdf
2. Sheetal, S.: Autonomic Wireless Sensor Networks. University of Southern California, http://www-scf.usc.edu/~sheetals/publications/AutonomicWSN.doc
3. Mini, R.A.F., Loureiro, A.A.F., Nath, B.: The Distinctive Design Characteristic of a Wireless Sensor Network: the Energy Map. Computer Communications 27(10), 935–945 (2004)
4. Zhao, F., Guibas, L.: Wireless Sensor Networks: An Information Processing Approach. Morgan Kaufmann, Elsevier (2004)
5. Phanse, H.S., DaSilva, L.A., Midkiff, S.F.: Design and Demonstration of Policy-based Management in a Multi-hop Ad Hoc Network. Ad Hoc Networks 3(3), 389–401 (2005)
6. Cha, S.-H., Lee, J.-E., Jo, M., Youn, H.Y., Kang, S., Cho, K.-H.: Policy-based Management for Self-Managing Wireless Sensor Networks. IEICE Transactions on Communications E90-B(11), 3024–3033 (2007)
7. Xu, H., Xiao, D.: A Common Ontology-based Intelligent Configuration Management Model for IP Network Devices. In: First International Conference on Innovative Computing, Information and Control (ICICIC 2006), pp. 385–388. IEEE, Beijing (2006)
8. Semantic Web Standard: Web Ontology Language (OWL). W3C OWL Working Group (2007), http://www.w3.org/2004/OWL/
9. Semantic Web Standard: Resource Description Framework (RDF). W3C RDF Working Group (2004), http://www.w3.org/2004/RDF/
10. W3C Recommendation: RDF Vocabulary Description Language 1.0: RDF Schema (2004), http://www.w3.org/TR/rdf-schema/
11. Kim, J.: An Ontology Model and Reasoner to Build an Autonomic System for U-Health Smart Home. Master thesis, POSTECH (2009)
12. Jena framework, http://jena.sourceforge.net/
13. Jess rule engine, http://www.jessrules.com/
An Extended Center-Symmetric Local Ternary
Patterns for Image Retrieval

Xiaosheng Wu1 and Junding Sun1,2


1
School of Computer Science and Technology,
Henan Polytechnic University, Jiaozuo, China
2
Provincial opening laboratory for Control Engineering key disciplines,
Jiaozuo, China
{wuxs,sunjd}@hpu.edu.cn

Abstract. A new texture spectrum descriptor for region description is proposed
in this paper, as an extension of the center-symmetric local ternary pattern
(CS-LTP). Different from CS-LTP, the central pixel of the region is considered
in the definition of the extended center-symmetric local ternary pattern
(eCS-LTP). Without adding to the dimension of CS-LTP, the proposed operator
contains more information about the region. The two methods, CS-LTP and eCS-LTP,
were tested on two commonly used texture image databases in the context of image
retrieval, and the experimental results show that eCS-LTP gives better
performance than CS-LTP.

Keywords: local binary pattern, center-symmetric local binary pattern,
center-symmetric local ternary pattern, extended center-symmetric local
ternary pattern.

1 Introduction
Because of their simplicity and robustness to variations in illumination and orientation
in texture analysis and pattern recognition, the local binary pattern (LBP) [1] and its
extensions [2-4] have been widely used in many fields, such as face recognition [5],
image retrieval [6] and medical image analysis [7], etc.
The LBP and its extensions are simple descriptors which generate a binary code for
a pixel neighbourhood. The original LBP operator produces an 8-bit code (a long
histogram of 256 bins) for an 8-neighbourhood, which makes it difficult to use for interest
region description. To alleviate this problem, Heikkilä et al. introduced the
center-symmetric local binary pattern (CS-LBP) for region description in [8], which is
defined according to the differences of opposing pixels around a given pixel and
produces a 4-bit code (a 16-bin histogram). In [9], Gupta et al. presented the center-
symmetric local ternary pattern (CS-LTP) for region description based on CS-LBP.
Different from CS-LBP, CS-LTP only uses the diagonal comparisons and produces a
histogram of 9 bins for each spatial bin. However, in both CS-LTP and CS-LBP, the
central pixel of a region is ignored, so they cannot describe the gradient of the
neighborhood efficiently. CS-LBP was improved in our previous work [10]. In
this paper, CS-LTP is further extended to enhance its efficiency by considering
the central pixel.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 359–364, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2 Center-Symmetric Local Binary Pattern


The CS-LBP was proposed in [8]. Different from LBP, CS-LBP does not compare the
gray level of each pixel with the center pixel, but the gray-level difference of the pairs
of opposite pixels in a region. For an 8-neighborhood (see Fig. 1), the definition of CS-
LBP is given as follows:

CS_LBP( T ) = Σ_{i=0}^{3} s ( p_i − p_{i+4} ) 2^i ,  with s ( x ) = 1 if x > T and 0 otherwise    (1)

where p_i and p_{i+4} ( i = 0, 1, 2, 3 ) correspond to the gray levels of center-symmetric
pairs of pixels, and the threshold T is set to obtain robustness on flat image regions.
Since only 4 comparisons are made, CS-LBP generates a 4-bit code; that is to say, it
produces a 16-bin histogram, which makes it more effective for region description
than the original local binary patterns.

p5 p6 p7
p4 pc p0
p3 p2 p1

Fig. 1. 8-neighborhood. Fig. 2. Illustrative diagram for the CS-LTP operator (D = 2).
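Eq. 1 can be sketched per neighbourhood as follows, with the neighbour ordering of Fig. 1; the threshold value used here is a free parameter, not one taken from the paper.

```python
def cs_lbp(p, T=0.01):
    """CS-LBP code (Eq. 1) for one 8-neighbourhood p = [p0, ..., p7]:
    threshold the four centre-symmetric differences p_i - p_{i+4}."""
    code = 0
    for i in range(4):
        if p[i] - p[i + 4] > T:          # s(x) = 1 when x > T
            code |= 1 << i
    return code                          # 4-bit code -> 16-bin histogram

# neighbours p0..p7 of some pixel; only the (p0, p4) pair differs strongly
neigh = [0.9, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.2]
code = cs_lbp(neigh, T=0.01)             # -> 1 (only bit 0 set)
```

A region descriptor is then the 16-bin histogram of these codes over all pixels of the region.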

3 Center-Symmetric Local Ternary Pattern


The center-symmetric local ternary pattern was introduced in [9]. In its definition,
only the diagonal comparisons are used to generate the CS-LTP code for a given
pixel, so CS-LTP produces a histogram of 9 bins for each spatial bin.
Mathematically, the CS-LTP at a neighboring distance D is given as:

CS_LTP( D, T ) = f ( p5 − p1 ) + f ( p7 − p3 ) · 3    (2)

with f ( x ) = 2 if x > T; 0 if x < −T; 1 otherwise.

Fig. 2 gives an example for D = 2.

4 Extended Center-Symmetric Local Ternary Pattern


Though CS-LTP is an effective extension of CS-LBP, the gradient information is not
considered entirely because the center pixel pc is ignored. In order to fully
use the information of the region, an extended center-symmetric local ternary pattern
(eCS-LTP) is introduced in this paper. In the new definition, the relations between
the center pixel and the diagonal pairs of pixels are considered together, instead of the
gray-level difference between the diagonal pairs as in CS-LTP. Mathematically, the
new operator eCS-LTP is defined as follows:

eCS_LTP( D, T ) = g ( p5 − pc , pc − p1 ) + g ( p7 − pc , pc − p3 ) · 3    (3)

with g ( x, y ) = 2 if x > T and y > T; 0 if x < −T and y < −T; 1 otherwise.

Though 4 comparisons are made in the definition of eCS-LTP, it still
produces a 9-bin histogram, as CS-LTP does.
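Eq. 3 can be sketched directly; the neighbour indexing follows Fig. 1, with pc the centre pixel, and the threshold value is a free parameter chosen for illustration.

```python
def ecs_ltp(pc, p, T=0.01):
    """eCS-LTP code (Eq. 3): ternary comparison of the diagonal neighbours
    p5/p1 and p7/p3 against the centre pixel pc, encoded in base 3."""
    def g(x, y):
        if x > T and y > T:
            return 2
        if x < -T and y < -T:
            return 0
        return 1
    return g(p[5] - pc, pc - p[1]) + g(p[7] - pc, pc - p[3]) * 3

# centre pixel and neighbours p0..p7 (only p1, p3, p5, p7 are used)
p = [0.0, 0.1, 0.0, 0.1, 0.0, 0.9, 0.0, 0.9]
code = ecs_ltp(pc=0.5, p=p, T=0.05)      # -> 8, the largest of the 9 codes (0..8)
```

Both diagonal directions rise monotonically through the centre here, so each g term returns 2 and the code reaches its maximum; a flat region returns the middle code 4.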

5 Experimental Results
Two image databases commonly used for research purposes are employed as test beds,
and the χ² distance is chosen as the measurement criterion. In the experiments, we chose
D = 1 for CS-LTP and eCS-LTP.

χ²( H1 , H2 ) = Σ_{i=1}^{K} ( h1i − h2i )² / ( h1i + h2i )    (4)

where H1 and H2 are the texture spectrum histograms of two images.

For the two databases, P_N (the precision of the first N retrieved images) and R_N
(the recall of the first N retrieved images) are chosen as the evaluation criteria [11],
defined as:

P_{i,N} ( q ) = Σ_{j=1}^{N} δ( I_j , R_i ) / N ,  R_{i,N} ( q ) = Σ_{j=1}^{N} δ( I_j , R_i ) / |R_i|    (5)

where R_i denotes the i-th texture class, which contains |R_i| similar images, q is a
query image, I_1 , I_2 , ... , I_j , ... , I_N are the first N retrieved images, and

δ( I_j , R_i ) = 1 if I_j ∈ R_i, and 0 otherwise.
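The χ² distance of Eq. 4 and the P_N / R_N criteria of Eq. 5 can be sketched as below; the tiny histograms and image identifiers are made-up illustrations, not data from the experiments.

```python
def chi_square(h1, h2):
    """Chi-square distance between two texture-spectrum histograms (Eq. 4);
    bins where both counts are zero contribute nothing."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def precision_recall(retrieved, relevant, N):
    """P_N and R_N (Eq. 5): hits among the first N retrieved images,
    divided by N (precision) and by the relevant-class size (recall)."""
    hits = sum(1 for img in retrieved[:N] if img in relevant)
    return hits / N, hits / len(relevant)

d = chi_square([4, 0, 2], [2, 0, 2])      # only the first bin contributes
p, r = precision_recall(retrieved=["a", "b", "c", "d"],
                        relevant={"a", "c", "e", "f"}, N=4)
```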

5.1 The Performance on Image Set 1

Image Set 1 was chosen from the CUReT database. This database includes 45 classes
and each class has 20 texture images (http://www.robots.ox.ac.uk/~vgg/research/).
Fig. 3 shows the 45 example textures.

Fig. 3. Example images in set 1

This experiment employs each image in the first 20 classes as a query image, and
400 retrieval results are obtained. Fig. 4 (a) and (b) present the comparison graphs of
the average results of the 400 queries. The retrieval results show that eCS-LTP produces
better performance than CS-LTP.

Fig. 4. Comparison on the first database: (a) average recall graph, (b) average precision graph



5.2 The Performance on Image Set 2


Image Set 2 includes one hundred and nine 640640 pixel gray level texture images
downloaded from http://www.ux.uis.no/~tranden/brodatz.html. Each image is then
partitioned into 16 non-overlapping images (160160) which are considered as
similar images in this experiment. Therefore, we can get 1744 texture images. Fig. 5
shows the 109 example texture images in Set 2.

Fig. 5. Example images in set 2

Fig. 6. Comparison on the second database: (a) average recall graph, (b) average precision graph



This experiment employs each image in the first 30 classes as a query image, and
480 retrieval results are obtained. Fig. 6 presents the comparison graphs of the average
results of the 480 queries. The retrieval results also show that eCS-LTP produces better
average recall and precision than CS-LTP.

6 Conclusion
This paper points out the shortcoming of CS-LTP and proposes an extended CS-LTP
operator. Because the central pixel is considered, more information about the region
is fused into the new descriptor. The new operator also keeps the robustness
characteristics of CS-LTP and CS-LBP, such as simplicity, robustness to variations in
illumination and orientation, and a short-dimension histogram. The two methods, CS-
LTP and eCS-LTP, were compared in image retrieval experiments, with two
commonly used texture image database sets chosen as test beds. Experimental
results show that the proposed method is very robust and achieves a significant
performance improvement over the CS-LTP operator.

Acknowledgment. This work was supported by the Key Project of the Chinese Ministry
of Education (210128), the Internal Cooperation Science Foundation of Henan
Province (084300510065, 104300510066), and the Provincial Opening Laboratory for
Control Engineering Key Disciplines (KG2009-14).

References
1. Ojala, T., Pietikäinen, M., Harwood, D.: A Comparative Study of Texture Measures with Classification based on Feature Distributions. Pattern Recognition 1, 51–59 (1996)
2. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution Gray-scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 7, 971–987 (2002)
3. Zhou, H., Wang, R.S., Wang, C.: A Novel Extended Local-Binary-Pattern Operator for Texture Analysis. Information Sciences 22, 4314–4325 (2008)
4. Guo, Z.H., Zhang, L., Zhang, D.: Rotation Invariant Texture Classification using LBP Variance (LBPV) with Global Matching, vol. 3, pp. 706–719 (2010)
5. Tan, X., Triggs, B.: Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions. IEEE Transactions on Image Processing 6, 1635–1650 (2010)
6. Sun, J.D., Wu, X.S.: Content-based Image Retrieval based on Texture Spectrum Descriptors. Journal of Computer-Aided Design & Computer Graphics 3, 516–520 (2010) (in Chinese)
7. Loris, N., Sheryl, B., Alessandra, L.: A Local Approach based on a Local Binary Patterns Variant Texture Descriptor for Classifying Pain States. Expert Systems with Applications 37, 7888–7894 (2010)
8. Heikkilä, M., Pietikäinen, M., Schmid, C.: Description of Interest Regions with Local Binary Patterns. Pattern Recognition 3, 425–436 (2009)
9. Gupta, R., Patil, H., Mittal, A.: Robust Order-based Methods for Feature Description. In: CVPR, pp. 334–341 (2010)
10. Sun, J.D., Wu, X.S.: Image Retrieval Based on an Improved CS-LBP Descriptor. In: IEEE International Conference on Information Management and Engineering, pp. 115–117 (2010)
11. Ru, L.Y., Peng, X., Su, Z., et al.: Feature performance evaluation in content-based image retrieval. Journal of Computer Research and Development 11, 1560–1566 (2003) (in Chinese)
Comparison of Photosynthetic Parameters and Some
Physiological Indices of 11 Fennel Varieties

Mingyou Wang*, Beilei Xiao, and Lixia Liu

Agronomy Department,
Dezhou University
Dezhou 253023,
China
nwmy_sddz@163.com

Abstract. Net photosynthetic rate (Pn), relative electrical conductivity and
MDA content of 11 fennel varieties were measured using a LI-6400 portable
photosynthesis system. The results showed that the Pn of XJ0710 differed
distinctly from that of the other varieties, among which there was no obvious
difference. The relative electrical conductivity and MDA content, however,
differed obviously among the varieties, and under environmental stress the
varieties showed different stress resistance. This study provides a reference
for selecting good fennel varieties.

Keywords: Net photosynthetic rate, Relative electrical conductivity, MDA,
Comparison.

1 Introduction

Fennel is an herb of the family Apiaceae (genus Foeniculum) whose place of origin is
the Mediterranean. It has been cultivated for more than 1000 years and can be found
throughout our country. Fennel is used as a vegetable, in traditional Chinese medicine
and as a flavoring. In this report, we studied the differences in physiological
characters by comparing the photosynthetic rate and other indices of 11 fennel
varieties, in order to select fine varieties and offer a valuable reference.

2 Materials and Methods

2.1 Experimental Field

The experimental field is located at Dezhou University, within east longitude
115°45′–117°36′ and north latitude 36°24′25″–38°0′32″. Dezhou city is in a
warm temperate zone, and has a continental monsoon climate with four distinct

*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 365–369, 2011.
© Springer-Verlag Berlin Heidelberg 2011


seasons. The annual average temperature is 12.9 °C, the highest temperature on
record is 43.4 °C, and the lowest on record is -27 °C. The average annual rainfall is
547.5 mm, and the average frost-free period is 208 days.

2.2 Materials

The materials are 11 fennel varieties: DC0410, DC0415, DC0503, LX0611, LY0901,
NJ0712, PY0811, QY0811, XJ0603, XJ0710, XJ0712.

2.3 The Methods

Net photosynthetic rate (Pn), intercellular CO2 concentration (Ci), stomatal
conductance (Gs) and transpiration rate (Tr) were measured on leaves from the same
position of the 11 fennel varieties with a LI-6400 photosynthesis system. The
relative conductivity (L) and malondialdehyde (MDA) content were determined by the
methods of Zhao et al. (2000). All measurements were repeated at least 3 times.
Analysis of variance and multiple comparisons were performed with SPSS (13.0).
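The variance analysis delegated to SPSS can be illustrated with a minimal pure-Python one-way ANOVA F statistic. The function below is a sketch, and the three-replicate group values in the demo are invented placeholders, not the measured data.

```python
# Minimal one-way ANOVA F statistic, mirroring the SPSS analysis described
# above: F = (between-group mean square) / (within-group mean square).

def one_way_anova_F(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Three replicates per variety, as in the experiment (values invented):
F, df_b, df_w = one_way_anova_F([[23.5, 23.8, 23.8],
                                 [20.6, 20.8, 20.7],
                                 [18.5, 18.7, 18.7]])
```

A large F relative to the critical F(df_b, df_w) value corresponds to the 5% and 1% significance letters reported in the tables below.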

3 Results and Discussion


3.1 Pn and Tr of 11 Different Fennel Varieties

Pn and Tr of the 11 fennel varieties were examined and showed obvious
differences between varieties. As shown in Table 1, the Pn of LY0901 is the
highest, about 23.7 μmol CO2·m-2·s-1, and the Pn of XJ0710 is the lowest, about
Table 1. Pn and Tr of 11 fennel varieties

            Pn                    Test of              Tr                     Test of
Variety     (μmol CO2·m-2·s-1)    significance  Variety  (mmol H2O·m-2·s-1)   significance
                                  5%    1%                                    5%    1%
LY0901      23.7                  a     A       LY0901   4.723                a     A
QY0811      23.33                 a     A       XJ0712   4.257                a     A
XJ0712      22.8                  ab    A       XJ0603   4.257                a     A
XJ0603      22.233                ab    A       DC0503   4.183                a     A
DC0410      20.767                ab    A       QY0811   4.113                a     AB
NJ0712      20.7                  ab    A       NJ0712   4.1                  a     AB
DC0415      20.7                  ab    A       LX0611   3.77                 a     AB
LX0611      20.567                ab    A       PY0811   3.69                 ab    AB
PY0811      20.3                  ab    A       DC0415   3.577                ab    AB
DC0503      20.067                ab    A       DC0410   3.503                ab    AB
XJ0710      18.633                b     B       XJ0710   2.547                b     B

18.633 μmol CO2·m-2·s-1. The ranking of Pn is: LY0901 > QY0811 > XJ0712 >
XJ0603 > DC0410 > NJ0712 > DC0415 > LX0611 > PY0811 > DC0503 > XJ0710.
The Tr of LY0901 is the highest, about 4.723 mmol H2O·m-2·s-1, and the Tr of
XJ0710 is the lowest, which is consistent with the Pn values. The ranking of Tr is:
LY0901 > XJ0712 > XJ0603 > DC0503 > QY0811 > NJ0712 > LX0611 > PY0811 >
DC0415 > DC0410 > XJ0710.

3.2 Ci and Pn/Ci of 11 Different Fennel Varieties

From Table 2 we can see that there is no obvious difference in Ci among the
fennel varieties. The ranking of Ci is: DC0503 > PY0811 > NJ0712 > LX0611 >
LY0901 > DC0410 > XJ0710 > XJ0712 > XJ0603 > QY0811 > DC0415. The Pn/Ci
value of QY0811 is the highest, about 8.7×10-2, and that of XJ0710 is the lowest,
about 6.5×10-2, among the 11 fennel varieties. The Pn/Ci values show obvious
differences among the varieties, except among the three varieties DC0503, LX0611
and XJ0710. The ranking of Pn/Ci is: QY0811 > LY0901 > XJ0712 > XJ0603 >
PY0811 > DC0415 > DC0410 > NJ0712 > DC0503 > LX0611 > XJ0710.

Table 2. Ci and Pn/Ci of 11 fennel varieties

            Ci                    Test of                              Test of
Variety     (μmol CO2·mol-1)      significance  Variety  Pn/Ci (10-3)  significance
                                  5%    1%                             5%    1%
DC0503      303                   a     A       QY0811   87            a     A
PY0811      289.667               a     A       LY0901   83            ab    A
NJ0712      288.667               a     A       XJ0712   82            ab    A
LX0611      288                   a     A       XJ0603   81            ab    A
LY0901      288                   a     A       PY0811   77            ab    A
DC0410      284.667               a     A       DC0415   76            ab    A
XJ0710      282.333               a     A       DC0410   73            ab    A
XJ0712      279.667               a     A       NJ0712   72            ab    A
XJ0603      279                   a     A       DC0503   67            b     A
QY0811      271.333               a     A       LX0611   65            b     A
DC0415      270.667               a     A       XJ0710   65            b     A
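The Pn/Ci column of Table 2 is simply the ratio of the two measured quantities; the small sketch below recomputes it for three varieties from the rounded table values (an assumed reading of the column, not the authors' computation).

```python
# Pn/Ci from Tables 1 and 2: net photosynthetic rate divided by
# intercellular CO2 concentration. Values are copied from the tables.
pn = {"QY0811": 23.33, "LY0901": 23.7, "XJ0710": 18.633}     # Table 1
ci = {"QY0811": 271.333, "LY0901": 288.0, "XJ0710": 282.333}  # Table 2

ratios = {v: pn[v] / ci[v] for v in pn}
# Ranking by Pn/Ci, highest first:
order = sorted(ratios, key=ratios.get, reverse=True)
```

The resulting order (QY0811 first, XJ0710 last) matches Section 3.2; because the table values are rounded three-replicate means, the recomputed ratios can differ from the printed Pn/Ci column in the last digit.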

3.3 Gs of 11 Different Fennel Varieties

There is no obvious difference in Gs among the fennel varieties (Table 3).
The ranking of Gs is: LY0901 > DC0503 > XJ0712 > PY0811 >
NJ0712 > XJ0603 > LX0611 > QY0811 > XJ0710 > DC0410 > DC0415. This result
indicates that Gs is not the factor causing the difference between XJ0710 and the
other varieties.

Table 3. Gs of 11 fennel varieties

Variety     Gs (mol H2O·m-2·s-1)    Test of significance
                                    5%    1%
LY0901      644                     a     A
DC0503      608.333                 a     A
XJ0712      521.667                 a     A
PY0811      520.667                 a     A
NJ0712      511.333                 a     A
XJ0603      500.667                 a     A
LX0611      493.33                  a     A
QY0811      491.333                 a     A
XJ0710      480.667                 a     A
DC0410      466.667                 a     A
DC0415      403.33                  a     A

3.4 Relative Conductivity and MDA of 11 Different Fennel Varieties


Relative membrane permeability (RMP) of the plant cell is an important index of
stress tolerance. The change of relative conductivity (L) reflects the relative
membrane permeability of the cell under cold stress [3-4]. A low relative
conductivity indicates little harm to the plant [5]. As shown in Table 4, the L of
NJ0712 is the highest and that of QY0811 is the lowest, about 6% and 2.7%
respectively.

Table 4. Relative conductivity and MDA content in 11 fennel varieties

            Relative           Test of                MDA content      Test of
Variety     conductivity (%)   significance  Variety  (μmol·g-1 FW)    significance
                               5%    1%                                5%    1%
NJ0712      6                  a     A       PY0811   7.323            a     A
DC0410      5                  b     B       DC0415   6.347            b     B
PY0811      4.1                c     C       QY0811   6.345            b     B
XJ0710      4                  c     C       NJ0712   5.213            c     C
XJ0712      3.7                d     CD      DC0503   5.072            d     C
LX0611      3.4                de    DE      DC0410   4.88             e     D
DC0503      3.4                de    DE      XJ0712   4.847            e     D
XJ0603      3.345              de    DE      PY0811   4.408            f     E
LY0901      3.3                e     DE      XJ0603   4.232            g     EF
DC0415      3.1                e     EF      XJ0710   4.179            g     F
QY0811      2.7                f     F       LX0611   3.593            h     G

MDA is a lipid peroxidation product that is induced to a higher level when plants
are exposed to a highly osmotic environment, and it can serve as an indicator of
increased oxidative damage [6]. The MDA content of PY0811 is the highest, about
7.323 μmol·g-1 FW, and that of LX0611 is the lowest, about 3.593 μmol·g-1 FW. Both
L and MDA differ obviously among the fennel varieties. These results indicate that
the fennel varieties have different stress tolerance.

4 Discussion and Conclusions


RMP is another index of plant tolerance to stresses such as low temperature,
drought and salt. When membrane permeability increases, RMP also increases,
accompanied by ion leakage out of the cell. MDA is a lipid peroxidation product
that is induced to a higher level when plants are exposed to a highly osmotic
environment, and it can serve as an indicator of increased oxidative damage [7-8].
The accumulation of MDA may damage the membrane and the cell.
Among the 11 fennel varieties, the Pn of XJ0710 differed distinctly from that of
the other varieties, among which there was no obvious difference. The L and MDA of
the 11 varieties, however, differed obviously. This shows that different varieties
have different stress tolerance, which offers groundwork for selecting good
varieties for agriculture.

Acknowledgement. The authors gratefully acknowledge the help from Gao
Fangsheng and Zhang Hong at Dezhou University, China.
This work was supported by grants from the Important Program of Shandong
Thoroughbred Engineering: Packing up, screening and utilization of fennel cultivar
resources in Dezhou city (2009).

References
1. Jansen, P.C.M.: Spices, Condiments and Medicinal Plants in Ethiopia, Their Taxonomy and
Agricultural Significance, pp. 20–29. Pudoc, Wageningen (1981)
2. Li, H.S., Sun, Q., Zhao, S.J.: Experiment Principle and Technology of Plant Physiology and
Biochemistry, pp. 134–137, 258–260. Higher Education Press, Beijing (2000) (in Chinese)
3. Wang, S.G., Zhang, H.Y., Guo, Y., Sun, X.L., Pi, Y.Z.: Summary of biomembrane and
fruit tree cold resistance. Tianjin Agricultural Sciences 6(1), 37–40 (2000) (in Chinese)
4. Huang, L.Q., Li, Z.H.: Advances in the research of cold-resistance in landscape plants.
Hunan Forestry Science & Technology 31(5), 19–21 (2004) (in Chinese)
5. Li, H.S., Sun, Q., Zhao, S.J.: Experiment Principle and Technology of Plant Physiology and
Biochemistry, pp. 134–137, 260. Higher Education Press, Beijing (2000) (in Chinese)
6. Wise, R.R., Naylor, A.W.: Chilling-enhanced photooxidation: evidence for the role of
singlet oxygen and superoxide in the breakdown of pigments and endogenous antioxidants.
Plant Physiol. 83, 278–282 (1987)
7. Chinta, S., Lakshmi, A., Giridarakumar, S.: Changes in the antioxidant enzyme efficacy in
two high yielding genotypes of mulberry (Morus alba L.) under NaCl salinity. Plant
Sci. 161, 613–619 (2001)
8. Verslues, P.E., Batelli, G., Grillo, S., Agius, F., Kim, Y.S., Zhu, J.H., et al.: Interaction of
SOS2 with nucleoside diphosphate kinase 2 and catalases reveals a point of connection
between salt stress and H2O2 signaling in Arabidopsis thaliana. Mol. Cell Biol. 27, 7771–
7780 (2007)
Effect of Naturally Low Temperature Stress on Cold
Resistance of Fennel Varieties Resource

Beilei Xiao, Mingyou Wang*, and Lixia Liu

Agronomy Department, Dezhou University,


Dezhou 253023, China
nwmy_sddz@163.com


Abstract. After exposure to naturally low temperature (-4~0 °C), the
changes of relative electrical conductivity and MDA content of 11 fennel
varieties were measured, and the relationship between cold resistance and these
changes was studied. The results showed that the relative electrical conductivity
and MDA content were higher than those of the controls under naturally low
temperature stress, and the varieties showed increases of different amplitude;
the differences were significant. The comprehensive comparison indicated that
the cold resistance of XJ0712 was the strongest, while that of XJ0710 was
the weakest.

Keywords: Fennel, Naturally low temperature, Relative electrical conductivity,


MDA, Cold resistance.

1 Introduction

Low temperature is a key environmental factor affecting plant growth,
development and geographic distribution. It is reported that low temperature causes
agricultural losses of hundreds of billions [1], and these losses will grow as
extreme weather becomes more frequent with global warming [2]. Therefore, studying
plant physiological mechanisms under low temperature has become one of the research
focuses in this field. In this report, we studied the changes of relative electrical
conductivity and MDA under low temperature stress. These results can offer a
reference for screening fennel varieties resistant to low temperature.

2 Materials and Methods

2.1 Experimental Field

The experimental field is located at Dezhou University, within east longitude
115°45′–117°36′ and north latitude 36°24′25″–38°0′32″. Dezhou city is in a
warm temperate zone, and has a continental monsoon climate with four distinct
seasons. The annual average temperature is 12.9 °C, and the highest temperature on
record is

*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 370–374, 2011.
© Springer-Verlag Berlin Heidelberg 2011


43.4 °C, the lowest on record is -27 °C. The average annual rainfall is
547.5 mm, and the average frost-free period is 208 days. In March every year, cold
waves, gales and frost are the main hazardous weather in Dezhou city.

2.2 The Materials


The materials are 11 fennel varieties: DC0410, DC0415, DC0503, LX0611, LY0901,
NJ0712, PY0811, QY0811, XJ0603, XJ0710, XJ0712.

2.3 The Methods


Leaf samples of the 11 fennel varieties were collected at 8:00–9:00 am on 23 March
2010, a day with temperatures of 5–15 °C. Further samples were collected at the
same time on 24 March 2010, after a light snow during the night. The relative
conductivity (L), damage degree and malondialdehyde (MDA) content were determined
by the methods of Zhao et al. (2000) [3]. All measurements were repeated at least
3 times. Analysis of variance and multiple comparisons were performed with SPSS (13.0).
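The conductivity-based indices can be sketched as follows. The formulas are the common plant-physiology definitions (initial over boiled conductivity for L, and a control-normalized difference for damage degree); they are assumed rather than quoted from reference [3], and all numbers are illustrative, not the measured data.

```python
# Standard conductivity-based stress indices (assumed formulas, common in
# plant-physiology protocols; not quoted from the cited manual).

def relative_conductivity(c_initial, c_boiled):
    """L (%) = conductivity before boiling / conductivity after boiling."""
    return c_initial / c_boiled * 100.0

def damage_degree(L_stress, L_control):
    """Membrane damage degree (%) from stressed and control L values (%)."""
    return (L_stress - L_control) / (100.0 - L_control) * 100.0

# Illustrative numbers only:
L_ck = relative_conductivity(12.0, 400.0)   # control leaf, L = 3.0 %
L_lt = relative_conductivity(36.0, 400.0)   # stressed leaf, L = 9.0 %
dd = damage_degree(L_lt, L_ck)              # about 6.2 %
```

Boiling fully disrupts the membranes, so the boiled conductivity serves as the 100%-leakage reference in the ratio.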

3 Results
3.1 The Changes of Relative Conductivity of 11 Fennel Varieties After Cold
Stress
Relative membrane permeability (RMP) of the plant cell is an important index of
stress tolerance. The change of relative conductivity (L) reflects the relative
membrane permeability of the cell under cold stress. A low relative conductivity
indicates little harm to the plant [4-5]. As shown in Figure 1, the relative
membrane permeability increased after cold stress compared with the controls. The
relative conductivity change and damage degree of the PY0811 variety were the least
compared with the other 10
[Figure 1: bar chart of relative electric conductivity (%) of the 11 fennel
varieties under CK and naturally low temperature treatments]

Fig. 1. Changes of low temperature stress on the relative electric conductivity of leaf in
fennel

varieties (Table 1), and the second least was XJ0712. On the contrary, the relative
conductivity change and damage degree of the XJ0710 variety were the highest. These
results indicate that PY0811 has the highest cold tolerance and XJ0710 the lowest
among the 11 fennel varieties.

Table 1. Changes of low temperature stress on the relative electric conductivity of leaf in
fennel and membrane damage degree

            Increase of    Test of                Damage          Test of
Variety     RMP (%)        significance  Variety  degree (%)      significance
                           5%     1%                              5%     1%
XJ0710      387.4          a      A      XJ0710   16.3            a      A
XJ0603      251.5          b      B      XJ0603   8.7             b      B
DC0415      152.7          c      C      NJ0712   7.8             b      BC
DC0410      125.7          cd     CD     DC0410   6.7             bc     BCD
NJ0712      121.4          cd     CD     DC0415   4.9             cd     CDE
LX0611      105.9          cde    CD     LX0611   3.5             de     DE
QY0811      105.8          cde    CD     QY0811   2.9             de     DE
LY0901      82.5           de     CD     LY0901   2.8             de     E
DC0503      78.5           de     CD     DC0503   2.8             de     E
XJ0712      70.3           de     CD     XJ0712   2.7             de     E
PY0811      53.5           e      D      PY0811   2.3             e      E
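The "Increase of RMP" column in Table 1 appears to be the percentage rise of relative conductivity over the control; the sketch below works under that assumption, and the conductivity pair is invented to reproduce a value close to the 387.4% reported for XJ0710 (the actual measured pair is not listed in the text).

```python
# Assumed definition of the "Increase of RMP" column:
# (stressed L - control L) / control L * 100.

def rmp_increase(L_stress, L_control):
    return (L_stress - L_control) / L_control * 100.0

# Illustrative: a control conductivity of 4.0 % rising to 19.5 % under
# stress gives an increase of 387.5 %, close to the Table 1 entry for XJ0710.
increase = rmp_increase(19.5, 4.0)
```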

3.2 The Changes of MDA of 11 Different Fennel Varieties

MDA is a lipid peroxidation product that is induced to a higher level when plants
are exposed to a highly osmotic environment, and it can serve as an indicator of
increased oxidative damage [6]. MDA is therefore often used as an indicator of the
response to cold stress [5]. As Figure 2 shows, MDA contents increased obviously after

[Figure 2: bar chart of MDA content (μmol/g FW) of the 11 fennel varieties
under CK and naturally low temperature treatments]

Fig. 2. Changes of low temperature stress on MDA content of leaf in fennel



cold stress. The ranking of the MDA increase is: XJ0710 (92.7%) > LX0611
(50.8%) > PY0811 (40.8%) > DC0503 (39.3%) > NJ0712 (35.1%) > QY0811 (31.3%) >
DC0410 (28.4%) > LY0901 (24%) > DC0415 (16.6%) > XJ0603 (14.5%) > XJ0712 (8.6%).
This result illustrates that XJ0710 suffered greater oxidative damage than any
other variety under the same cold stress treatment, while XJ0712 suffered the least.

4 Discussion and Conclusions


The plant cell plasma membrane is a differentially permeable membrane: some
particles can pass through it, but macromolecules cannot. When the cell membrane is
damaged, however, macromolecules can pass out of the cell, harming plant growth [7].
RMP is therefore another index of plant stress tolerance. When membrane
permeability increases, RMP also increases, accompanied by ion leakage out of the
cell [8-9]. The more the relative electrical conductivity increases, the lower the
cold tolerance [11]. From Figure 1 we can see that PY0811 was damaged the least,
XJ0712 less, and XJ0710 the most, which is consistent with PY0811 having high cold
tolerance and XJ0710 low cold tolerance.
MDA is one of the main products of lipid peroxidation, and its content and
changes partly reflect plant cold tolerance [10-11]. The changes in Figure 2
indicate that XJ0712 has the highest cold tolerance and XJ0710 the lowest. In a
comprehensive analysis of all the data, XJ0712 has the highest cold tolerance and
XJ0710 the lowest among the 11 fennel varieties.

Acknowledgement. The authors gratefully acknowledge the help from Hong Zhang
and Fangsheng Gao at Dezhou University, China. This work was supported by
grants from the Important Program of Shandong Thoroughbred Engineering: Packing
up, screening and utilization of fennel cultivar resources in Dezhou city (2009).

References
1. Deng, J.M., Jian, L.C.: Advances of studies on plant freezing tolerance mechanism:
freezing tolerance gene expression and its function. Chinese Bulletin of Botany 18(5),
521–530 (2001)
2. Bertrand, A., Castonguay, Y.: Plant adaptations to overwintering stresses and implications
of climate change. Canadian Journal of Botany 81, 1145–1152 (2003)
3. Li, H.S., Sun, Q., Zhao, S.J.: Experiment Principle and Technology of Plant Physiology and
Biochemistry, pp. 134–137, 258–260. Higher Education Press, Beijing (2000)
4. Wise, R.R., Naylor, A.W.: Chilling-enhanced photooxidation: evidence for the role of
singlet oxygen and superoxide in the breakdown of pigments and endogenous
antioxidants. Plant Physiol. 83, 278–282 (1987)
5. Huang, L.Q., Li, Z.H.: Advances in the research of cold-resistance in landscape plants.
Hunan Forestry Science & Technology 31(5), 19–21 (2004) (in Chinese)
6. Zhu, J.K.: Plant salt tolerance. Trends Plant Sci. 6, 66–71 (2001)
7. Prasad, T.K.: Role of catalase in inducing chilling tolerance in pre-emergent maize
seedlings. Plant Physiol. 114(4), 1369–1376 (1997)
8. Wahid, A., Shabbir, A.: Induction of heat stress tolerance in barley seedlings by pre-
sowing seed treatment with glycinebetaine. Plant Growth Regul. 46, 133–141 (2005)
9. Wahid, A., Perveen, M., Gelani, S., Basra, S.M.: Pretreatment of seed with H2O2
improves salt tolerance of wheat seedlings. J. Plant Physiol. Mol.
Biol. 31, 183–189 (2005)
10. Chinta, S., Lakshmi, A., Giridarakumar, S.: Changes in the antioxidant enzyme efficacy in
two high yielding genotypes of mulberry (Morus alba L.) under NaCl salinity. Plant
Sci. 161, 613–619 (2001)
11. Verslues, P.E., Batelli, G., Grillo, S., Agius, F., Kim, Y.S., Zhu, J.H., et al.: Interaction of
SOS2 with nucleoside diphosphate kinase 2 and catalases reveals a point of connection
between salt stress and H2O2 signaling in Arabidopsis thaliana. Mol. Cell Biol. 27,
7771–7780 (2007)
Survey on the Continuing Physical Education in the
Cities around the Taihu Lake

JianQiang Guo

ChangZhou University,
Changzhou 213164 China
Jqguo5986@163.com

Abstract. Based on literature and a questionnaire survey, this paper argues that
for the Physical Education (PE) of on-the-job adult students in colleges in the
cities around Tai Lake (Taihu Lake), the teaching plan, training goal and
teaching material contents must satisfy the physical, psychological and health
needs of on-the-job adult students, meet the demands of social sports, and meet
the needs of the overall development of the discipline.

Keywords: The cities around Tai Lake, Continuing sports education,
Teaching situation, Survey.

Preface

Continuing sports education aims to cultivate the interest of on-the-job adult
students in physical exercise; to maintain, recover and enhance their physique;
to improve their working efficiency; and to establish lifelong exercise habits.
Continuing sports education is a part of school physical education and the link
between school sports and social sports. In recent years, with the continuous
development of our society, the reform of the education system and the gradually
expanding scale of adult higher education, how to improve college adult sports
teaching and let students gain all-round development in the process of
re-education has become a problem worth paying attention to.

1 The Object of Study and Methods

1.1 The Object of Study: students who receive the continuing sports education in the
high schools, city circle around the Tai Lake(suzhou ,wuxi, Changzhou,huzhou,
jiaxing), which open the continuing sports education classes.
1.2 Research methods:
1.2.1 The questionnaire: sending out 900 questionnaires and withdrawing available
824 ones in the city circle around the Tai Lake(suzhou ,wuxi, Changzhou, huzhou,
jiaxing).The recovery is 91.6%.
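The quoted recovery rate follows directly from the two counts:

```python
# Questionnaire recovery rate: valid returned / distributed, as a percentage.
sent, returned = 900, 824
rate = returned / sent * 100
rounded = round(rate, 1)  # 91.6, as reported above
```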

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 375–380, 2011.
© Springer-Verlag Berlin Heidelberg 2011

1.2.2 Mathematical statistics: the valid questionnaires were processed by
computer and the statistics were handled with the statistical software SPSS 11.5.
1.2.3 Literature review: relevant materials on continuing sports education in
China were consulted.

2 Results and Analysis


2.1 Structure survey on the origins of students who receive continuing sports
education in colleges in the city circle around Tai Lake.

Table 1. Origin structure statistics of continuing sports education students

Gender      Matriculate: college      Technical secondary school    High school
            number    proportion      number    proportion          number    proportion
Male        —         —               —         —                   —         —
Female      —         —               —         —                   —         —
Aggregate   —         —               —         —                   —         —

[The numeric cells of this table are illegible in the source.]

The city circle around Tai Lake lies in a developed economic region and has
distinctive geographical features. In this area, engineering education and higher
vocational schools are well developed, and, relatively speaking, a significant
number of technical school students have the desire to study further. As Table 1
shows, 73.2% of the students hold a technical secondary school or junior college
degree, which proves this point. There are many universities in the city circle
around Tai Lake, each with numerous subordinate schools, in which students
continue their physical education and physical exercise. The admission of
full-time students to continuing education focuses on students who have finished
high school or who are willing to receive higher engineering education.
2.2 The survey on the students' P.E. learning motivation and interest

Table 2. Survey of sports participation motivation of continuing sports education students
of different genders (%)

Gender     Total   Enhance       Master sports   Improve       Entertainment   Cure
                   physique      skills          ability                       diseases
           n       n  prop.      n  prop.        n  prop.      n  prop.        n  prop.
Male       —       —  —          —  —            —  —          —  —            —  —
Female     —       —  —          —  —            —  —          —  —            —  —
Aggregate  —       —  —          —  —            —  —          —  —            —  —

[The numeric cells of this table are illegible in the source.]

Table 2 shows that students who receive continuing education take part in sports
exercise mainly to enhance their physique (30.5%) and to master sports skills and
technology (18.5%). Therefore, teaching should take a variety of forms, such as
basic sports classes, special improvement classes and sports health classes. In
selecting teaching material contents, ball games and bodybuilding should be
important parts, and hierarchical teaching should be carried out; otherwise, the
enthusiasm of students for learning sports will be severely damaged [1].
Students' value orientations in sports activities focus on two aspects:
"enhancing physique" and "entertainment, a peaceful mind and inner tranquility".
The low choice rate of "improving the ability to adapt to society" reflects a weak
awareness of the social, cultural and psychological functions of sports; the
sports consciousness of contemporary college students needs to be guided toward a
comparatively complete set of sports values. This article therefore brings
socialization into the physical education curriculum system and emphasizes this
function of sports. In addition, the rather low choice rate of "mastering sports
skills and technology" also shows that a gap remains between the traditional
guiding ideology of the sports curriculum and the main needs of students.
2.3 The survey on the sports that students like

Table 3. Sports projects that continuing education students like in PE class

Gender   Projects (in order of preference)
Male     Basketball, table tennis, badminton, football, martial arts, bodybuilding,
         volleyball
Female   Aerobics, table tennis, badminton, martial arts, fitness, volleyball

[The counts and percentages of this table are illegible in the source.]

According to Table 3, the surveyed students who are receiving continuing
education like the following sports courses most: basketball, aerobics, table
tennis, badminton, football, martial arts, bodybuilding and volleyball. All of
these appear among the top 10 sports in the report on the survey of the current
situation of mass sports in China published by the national sports administration
in 2002 [2].
2.4 The survey on the teaching form of continuing education
According to Table 4, among the students receiving continuing education in the
city circle around Tai Lake, boys pay little attention to the teaching form, while
girls' requirements for the teaching form differ considerably: mixed classes or
small classes are required. This is directly linked to the origin structure of the
continuing students in colleges in this area. Relatively speaking, the girls are
younger; the majority of them come from junior colleges, technical secondary
schools or high schools and are rather shy. Their preferences in

Table 4. Teaching and class forms in continuing sports teaching

Gender       Closed    Divided        Elective   Foundation/      Club
(number n)   class     small class    course     quality class
Male         —         —              —          —                —
Female       —         —              —          —                —
Proportion   —         —              —          —                —

[The numeric cells of this table are illegible in the source.]

teaching form are: elective classes (54.5%), club-type classes (26.7%) and basic
classes (17.6%). The reasons are as follows. The elective teaching mode emphasizes
sports ability and is organized by project content, so students have more choices;
its advantage is that it can fully meet continuing students' demand for individual
development and is beneficial to cultivating sports and fitness ability and
interest. The guiding ideology of the club teaching mode is to fully exert
students' subjective initiative and attach importance to cultivating students'
interest in sports; its main purpose is to improve students' sports ability and
train lifelong physical exercise habits. In the basic class, the students'
initiative is greatly depressed: students cannot make choices on their own, which
also hinders the development of individuality.

3 Conclusions
3.1 The students receiving continuing education in the city circle around Tai
Lake are mainly at the level of junior colleges and engineering schools; because
of the geographical position, junior colleges and technical secondary schools are
the main sources of matriculates. The continuing sports education curriculum
should consider the characteristics of the students' origin. We suggest that PE
class time be increased from the ordinary 70 hours to about 100 hours, spread over
four semesters, consistent with the time of ordinary programs. Plans should be
formulated to ensure that continuing students have more sports time and cultivate
the habit of regular physical exercise. By increasing teaching hours and teaching
the necessary sports knowledge, varying from person to person, students can master
at least one exercise of lifelong sports value. The content of continuing PE
should combine physiological, psychological and social sports needs, and colleges
should arrange sports lessons according to reality. Featuring interest, safety,
entertainment, confrontation and a general audience, aerobic exercise programs can
help students keep fit and strong, develop their own interests, vent their
emotions, learn the necessary physical theory knowledge and 1-2 sports skills, and
prepare themselves not only for lifelong physical exercise but also for
participation in social sports activities [3].

3.2 According to the continuing students' motivation for participating in sports
activities in the city circle around Tai Lake, to make them actually develop the
habit of sports exercise we should, first, strengthen external factors: building a
campus sports culture; cultivating sports backbones with the ability to organize
and inspire people; developing strong administrative measures to keep continuing
students participating in sports; and strengthening the construction of sports
facilities and new sports hardware. Second, we should strengthen the teaching of
sports knowledge, technology and skills, enabling students to really benefit from
sports and arousing their intrinsic motivation, which makes their physical
exercise habit stronger [4].
3.3 Sports course projects should be set reasonably according to the students'
preferences. While paying attention to exercise for enhancing physique, education
in the basic theoretical knowledge of physical education should be strengthened,
enabling students to understand the human body and its development and the
influence of sports on that development, and helping them choose content adapted
to the needs of modern society and useful for their future, such as the basic laws
of the value and function of sports, sports and nutrition, health care, and basic
knowledge of human body measurement and evaluation and of the environment.
3.4 The continuing sports education students in this area are uneven in level,
with a wide age span and, at different levels, different physical and mental
characteristics, interests, hobbies and gender differences. Teaching should be
based on the practical differences of students and adopt small groups. The
teaching goal of continuing sports education should be built on the concept of
students' lifelong education and implemented on the basis of reforming the
teaching system; only by arousing students' enthusiasm in the teaching process can
the quality of current sports teaching achieve a substantial breakthrough. The
teaching process should implement "humanistic" thinking: teachers must establish
the modern educational concept of taking students as the center, starting from the
students and serving the students in everything, thus creating an orderly, caring,
hard-working and harmonious classroom teaching atmosphere [5].
3.5 Establish a suitable standard for evaluating achievement in sports education. Achievement evaluation leads the direction of course assessment, so in the process of achievement evaluation we need to combine the characteristics of adult education and evaluate scientifically from the angle of cultivating ability. Each course should be combined with the characteristics of its subject. We should do well in links such as homework and should not linger on the surface. With the change of ideas in P.E. teaching, a new form of examination has become the sure way forward; the test should be divided into three parts: inspection of technical theory, technology assessment and teaching analysis.

Acknowledgments. The paper is supported by the 2010 liberal arts development fund projects of Changzhou University (No. ZMF100200449).
380 J. Guo

References
[1] Wu, T.P.: Research on the sport motivation of adult college students. Chinese Sports Science and Technology 37(suppl.), 19-20 (2001)
[2] Report on the survey of the present situation of mass sport in China. The State General Administration of Sports
[3] Cheng, D.F., Wang, Q.: On sports teaching for employed adult students. Chinese Adult Education 10, 164-165 (2006)
[4] Wang, Q.J., et al.: On the selection of sports teaching models for adult education in Zhejiang. Journal of Ningbo University (Education Science Edition) 26(5), 108-109
[5] Chen, Q.S.: On the characteristics of and countermeasures for sports education in adult education. Journal of Adult Education Sports 25(1), 3-4 (1993)
The Gas Seepage and Migration Law of Mine Fire Zone
under the Positive Pressure Ventilation

HaiYan Wang, ZhenLong Zhang, DanDan Jiang, and FeiYin Wang

Resources and Safety Engineering School,


China University of Mining and Technology (Beijing),
Xueyuan Road D11, 100083 Beijing, China
vipwhy@vip.sina.com

Abstract. A mathematical model of the fire developing tendency of a mine fire zone under positive pressure ventilation was established and validated on the basis of the particularly serious "9.20" fire accident of the Fuhua coal mine in Heilongjiang province. The law of the influence of positive pressure ventilation on the developing tendency of the fire zone, and its implications for work safety, were then obtained. Under positive pressure ventilation, a high-temperature fire source has an obvious tendency to spread toward the gob, but with the development of the fire, the oxygen-seeking (aerotactic) movement of combustion makes the high-temperature fire source zone also spread slowly toward the mine roadway. As the fire develops, a lot of smoke flows into the roadway, and the expansion of the high-temperature gas accelerates the gas flow in the roadway. The acceleration directly increases the spread and diffusion velocity of gas in the ventilation system, which aggravates the accident.

Keywords: positive pressure ventilation, mine fire zone, fire spread, smoke.

1 Introduction
The main fans of a mine have three common ventilation modes: positive pressure ventilation, negative pressure ventilation and hybrid ventilation. In a forced-ventilation mine, the working main fan forces fresh air into the roadway, making the absolute pressure underground greater than that outside the mine, or than the atmospheric pressure outside the air duct at the same elevation. The relative pressure is positive, so forced ventilation is also called positive pressure ventilation [1]. The influence of positive pressure ventilation on the spread of a fire zone in the mine is mainly reflected in its influence on the migration of smoke flows and high-temperature points in a mine fire.

2 The Mathematical Model of Fire Developing Tendency in the


Mine Fire Zone under Positive Pressure Ventilation

2.1 Roadway Flow Field Model

The gas flow in the roadway can be considered as an unsteady three-dimensional


turbulent field along with the heat and mass transfer process. The fluid flow within

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 381-389, 2011.
© Springer-Verlag Berlin Heidelberg 2011

the roadway obeys the equations of motion of the flow field and the laws of heat and mass conservation; taking these as the starting point, the basic equations of chemical fluid dynamics are established [2], namely

continuity equation:
$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_j)}{\partial x_j} = 0 \; ; \quad (1)$$

momentum equation ($i$ direction):
$$\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_j u_i)}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left(\mu \frac{\partial u_i}{\partial x_j}\right) + I_u \; ; \quad (2)$$

energy equation:
$$\frac{\partial (\rho h)}{\partial t} + \frac{\partial (\rho u_j h)}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left(\Gamma_h \frac{\partial h}{\partial x_j}\right) + I_h \; ; \quad (3)$$

component equation (component $l$):
$$\frac{\partial (\rho m_l)}{\partial t} + \frac{\partial (\rho u_j m_l)}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left(\Gamma_l \frac{\partial m_l}{\partial x_j}\right) + I_l \; . \quad (4)$$

Where: $u$ - velocity, m·s⁻¹; $h$ - enthalpy, J·kg⁻¹; $m_l$ - mass fraction of component $l$; $\Gamma$ - transport coefficient; $I$ - source term; $\rho$ - gas density, kg·m⁻³.
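Equations (1)-(4) all share one generic transport form, which is why a single discretization scheme can march every field. As a hedged illustration (a minimal sketch, not the authors' solver), the following marches a 1-D scalar — think of a smoke mass fraction — with first-order upwind advection and central diffusion, assuming constant density, velocity and transport coefficient on a periodic grid; all names and values are illustrative.

```python
import math

# Illustrative sketch of the generic transport form shared by Eqs. (1)-(4):
#   dphi/dt + u*dphi/dx = nu*d2phi/dx2   (constant rho, u, Gamma assumed).
def transport_1d(phi, u=1.0, nu=0.01, dx=0.1, dt=0.01, steps=200):
    """Explicit upwind (advection) + central (diffusion) update, u > 0, periodic."""
    n = len(phi)
    phi = list(phi)
    for _ in range(steps):
        new = []
        for i in range(n):
            adv = -u * (phi[i] - phi[i - 1]) / dx                        # upwind
            dif = nu * (phi[(i + 1) % n] - 2 * phi[i] + phi[i - 1]) / dx**2
            new.append(phi[i] + dt * (adv + dif))
        phi = new
    return phi

x = [i * 0.1 for i in range(100)]
phi0 = [math.exp(-(xi - 2.0) ** 2) for xi in x]   # initial smoke pulse
phi = transport_1d(phi0)
```

On a periodic domain with no source, the scheme conserves the total of the scalar exactly, while numerical and physical diffusion lower the peak as it advects — the qualitative behavior the component equation (4) describes for smoke.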

2.2 Combustion Model in Mine Fire Zone

Goaf, top-coal caving regions and coal pillars in a coal mine belong to dual porous media with both pores and fractures. The wind flow in the roadway penetrates into the coal, and the oxygen in the air undergoes physical adsorption, chemical adsorption and chemical reactions with the coal molecules, producing a large amount of heat. Under suitable heat accumulation conditions, the coal continues to be oxidized and heated, and when the temperature reaches the ignition point, coal combustion occurs. According to the mass and energy conservation laws, the mathematical models for the spontaneous combustion of loose coal, such as the coal in the gob, can be obtained.
The effect of buoyancy cannot be ignored during combustion in the fire zone. Therefore, the seepage flow equation [3] of air leakage in the fire zone is

$$\nabla h + \beta \rho_0 g \Delta T \,\vec{k} = (A + B|\vec{v}|)\,\vec{v} \; . \quad (5)$$

Where: $\vec{v}$ - seepage velocity of the air flow, m·s⁻¹; $|\vec{v}|$ - modulus of the seepage velocity, m·s⁻¹; $\nabla$ - Hamiltonian operator; $h$ - total pressure, $h = p + \rho_0 g z$, Pa; $p$ - leakage wind pressure (taken positive here), Pa; $z$ - vertical coordinate, m; $\rho_0$ - gas density of the fresh wind flow in the roadway, kg·m⁻³; $g$ - acceleration of gravity, m·s⁻²; $\beta$ - gas expansion coefficient; $\Delta T$ - temperature difference, $\Delta T = T - T_0$, K; $T$ - loose coal temperature, K; $T_0$ - temperature of the fresh wind flow in the roadway, K; $\rho$ - gas density, kg·m⁻³; $\vec{k}$ - vertical upward unit vector; $A$ - linear drag coefficient, $A = n\nu/\kappa$; $B$ - quadratic drag coefficient, $B = \varphi n D_m/\kappa$; $\nu$ - kinematic viscosity coefficient, m²·s⁻¹; $n$ - porosity; $D_m$ - average diameter of the porous media skeleton, m; $\kappa$ - permeability; $\varphi$ - geometric shape coefficient.
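In one dimension, and neglecting the buoyancy term, Eq. (5) reduces to the scalar Forchheimer-type balance G = (A + B·v)·v between the driving pressure gradient G and the seepage speed v, which is solvable in closed form. The sketch below is illustrative only; the values of G, A and B are invented, not mine data.

```python
import math

def seepage_speed(G, A, B):
    """Positive root of B*v**2 + A*v - G = 0 (1-D Forchheimer balance)."""
    if B == 0.0:
        return G / A                      # pure Darcy limit
    return (-A + math.sqrt(A * A + 4.0 * B * G)) / (2.0 * B)

v = seepage_speed(G=50.0, A=1.0e4, B=2.0e5)   # illustrative values
# The computed speed must satisfy the original balance:
residual = (1.0e4 + 2.0e5 * v) * v - 50.0
```

As B → 0 the result reduces to the Darcy limit v = G/A; the quadratic drag term always yields a somewhat smaller speed than the Darcy estimate.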

The combustion process in the fire zone is a dynamic development process of heat release accompanied by heat dissipation. When the heat release rate is greater than the dissipation rate, the coal temperature rises and eventually leads to combustion. According to the energy conservation equation [4], the temperature field distribution can be written as follows,

$$\rho_e C_e \frac{\partial T}{\partial t} = \nabla\cdot(\lambda_e \nabla T) - n \rho_g C_g (\vec{v}\cdot\nabla T) + q(T) \; . \quad (6)$$

Where: $T$ - coal temperature, K; $t$ - time variable, s; $\rho_e$ - loose coal density, kg·m⁻³; $C_e$ - heat capacity of loose coal, J·kg⁻¹·K⁻¹; $\lambda_e$ - thermal conductivity coefficient of loose coal, W·m⁻¹·K⁻¹; $\rho_g$ - gas density, kg·m⁻³; $C_g$ - gas specific heat, J·kg⁻¹·K⁻¹; $q(T)$ - heat release intensity of coal, J·s⁻¹·m⁻³.
Oxygen is a necessary condition for spontaneous combustion. Oxygen migration is a complex process in which oxygen in the coal is simultaneously consumed and diluted under convective diffusion and molecular diffusion. According to the mass transfer law [5], oxygen migration satisfies

$$\frac{\partial C}{\partial t} + \nabla\cdot(\vec{v}\, C) = \nabla\cdot(D \nabla C) - V(T) \; . \quad (7)$$

Where: $C$ - oxygen concentration, mol·m⁻³; $D$ - gas diffusion coefficient, m²·s⁻¹; $V(T)$ - oxygen consumption rate of coal oxidation, mol·m⁻³·s⁻¹.
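The argument above — combustion occurs when heat release outruns dissipation — can be illustrated with a zero-dimensional sketch coupling the q(T) source of Eq. (6) to the V(T) sink of Eq. (7). An Arrhenius-type rate is assumed for V(T) (the paper does not specify one), and every parameter value below is an illustrative assumption, not Fuhua mine data.

```python
import math

def simulate(h_loss, steps=5000, dt=0.1):
    """March coal temperature T and oxygen concentration C in time (0-D sketch)."""
    T, T0, C = 300.0, 300.0, 9.4          # K, K, mol/m^3 (illustrative)
    E_over_R = 7000.0                     # activation temperature, K (assumed)
    k0 = 1.0e7                            # pre-exponential factor, 1/s (assumed)
    Q = 3.0e5                             # heat released per mole O2, J/mol (assumed)
    rhoC = 1.0e6                          # volumetric heat capacity, J/(m^3 K)
    for _ in range(steps):
        V = C * k0 * math.exp(-E_over_R / T)    # oxygen sink, as in Eq. (7)
        q = Q * V                               # heat source q(T), as in Eq. (6)
        T += dt * (q - h_loss * (T - T0)) / rhoC
        C = max(C - dt * V, 0.0)
    return T, C

T_free, C_free = simulate(h_loss=0.0)     # poor heat dissipation
T_cool, C_cool = simulate(h_loss=1.0e4)   # strong heat dissipation
```

With weak dissipation the coal self-heats while oxygen is consumed; with strong dissipation the temperature stays near ambient — the two regimes the heat-balance argument contrasts.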

2.3 Coal-Oxygen Chemical Reaction Model

Coal is a complex organic macromolecule composed of various functional groups and chemical bonds. Its structure is very complicated, which determines the complexity of the chemical reaction between coal and oxygen, involving processes such as adsorption (both physical and chemical) and chemical reaction. The coal-oxygen reaction includes the general coal-oxygen combustion reaction and the spontaneous combustion reaction of coal, and the coal-oxygen reaction considered in this paper is more complex than general combustion reactions. Owing to the complexity of the spontaneous combustion process of coal, to simplify the study the paper considers the elements of coal involved in combustion to be C, H and O, and divides the chemical reaction of spontaneous combustion of coal into four steps including the decomposition of coal, gasification and coke combustion,
$$\mathrm{C}_a\mathrm{H}_b\mathrm{O}_c \rightarrow a_1\,\mathrm{C} + a_2\,\mathrm{CO_2} + a_3\,\mathrm{CO} + \frac{b}{2}\,\mathrm{H_2}$$
$$\mathrm{C} + \mathrm{O_2} \rightarrow \mathrm{CO_2} \quad (8)$$
$$2\,\mathrm{CO} + \mathrm{O_2} \rightarrow 2\,\mathrm{CO_2}$$
$$2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} \; .$$

Where: $a$, $b$, $c$, $a_1$, $a_2$ and $a_3$ are coefficients, among which $a_1 + a_2 + a_3 = a$ and $2a_2 + a_3 = c$. The ratio of $a$, $b$ and $c$ can be obtained by elementary analysis. On the basis of reference [6], the ratio of $a_1$, $a_2$ and $a_3$ can be taken as 213:18:10.
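The element balances behind Eq. (8) can be checked mechanically: the carbon balance fixes a = a1 + a2 + a3 and the oxygen balance fixes c = 2·a2 + a3, while b only sets the H2 yield. The value of b below is hypothetical; only the a1:a2:a3 = 213:18:10 split comes from reference [6].

```python
def decompose(a1, a2, a3, b):
    """Return (a, c, h2) consistent with CaHbOc -> a1 C + a2 CO2 + a3 CO + (b/2) H2."""
    a = a1 + a2 + a3        # carbon balance
    c = 2 * a2 + a3         # oxygen balance
    return a, c, b / 2.0

# Split from reference [6]; b = 100 is a hypothetical placeholder for the
# value an elementary analysis of the actual coal would supply.
a, c, h2 = decompose(213, 18, 10, b=100)
```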

2.4 Boundary Conditions

In terms of solid boundaries and artificial boundaries, the physical boundary conditions during the process of spontaneous combustion include seepage field boundary conditions and concentration field boundary conditions.

The seepage field boundary conditions are as follows:

first kind boundary (given constant pressure):
$$\Phi = P \; ; \quad (9)$$

second kind boundary (given flow):
$$-K\,\mathrm{grad}\,\Phi = m \; . \quad (10)$$

third kind boundary: given wind pressure value or given air volume.

The concentration field boundary conditions are as follows:

first kind boundary (given concentration on the boundary):
$$C = f_1 \; ; \quad (11)$$

second kind boundary (given concentration dispersion flux on the boundary):
$$\left. -D_{ij}\,\frac{\partial C}{\partial x_j}\, n_i \right|_s = f_2 \; ; \quad (12)$$

third kind boundary (given gas flux on the boundary):
$$\left. \left( C v_i - D_{ij}\,\frac{\partial C}{\partial x_j} \right) n_i \right|_s = f_3 \; . \quad (13)$$

Where: $\Phi$ - variable; $K$ - coefficient of permeability; $m$ - mass flux; $f_1$, $f_2$, $f_3$ - concentration functions.
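The three boundary kinds of Eqs. (11)-(13) are the classical Dirichlet, Neumann and Robin conditions. As a hedged sketch (illustrative values, not the accident simulation), the following solves a steady 1-D diffusion problem with a first-kind condition at the left end and a second-kind (fixed dispersive flux) condition at the right; the converged profile is the straight line C(x) = f1 - (f2/D)·x.

```python
def solve_dirichlet_neumann(f1, f2, D=1.0, L=1.0, n=21, iters=20000):
    """Steady 1-D diffusion: C(0) = f1 (first kind), -D*dC/dx(L) = f2 (second kind)."""
    dx = L / (n - 1)
    C = [f1] * n
    for _ in range(iters):                      # simple relaxation sweep
        new = C[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (C[i - 1] + C[i + 1])   # discrete Laplace interior
        new[n - 1] = new[n - 2] - f2 * dx / D      # one-sided flux condition
        C = new
    return C

# Illustrative numbers only (f1 echoes the CO-fraction scale of the paper):
C = solve_dirichlet_neumann(f1=0.011, f2=0.002, D=1.0)
```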

3 Case Selection and Conditions Set

3.1 Fuhua Coal Mine 9.20 Fire Accident

The accident happened during the repeated mining of the No.11 coal seam. The repeated mining thickness was 2 to 8 meters, and the inclination angle of the coal seam 20 degrees. The coal seam roof and floor were composed of sandstone. The mine was a low-gas coal mine. The volatile matter of the coal was 30 percent; the coal dust was explosive, with an explosion index of 70 percent. The spontaneous ignition tendency of the coal seam was identified as grade one, with a spontaneous combustion period of 6 to 8 months. The ventilation system was central parallel forced ventilation, with the auxiliary shaft taking in air and the main shaft returning air. At 2:28 on September 20, 2008, the mine had a particularly serious fire accident, causing the death of 31 people.
The direct cause of the accident was that the second blind air shaft of the mine was arranged at the bottom of the mined No.11 coal seam. The coal pillar at the top of the roadway broke and was exposed to air leakage, which caused spontaneous combustion with a temperature rise due to coal oxidation and led to the fire. Figure 1 is a schematic diagram of the ignition point. Based on this fire incident, this paper analyzes the gas seepage and flow features of smoke in the goaf and the mine ventilation system after the accident, providing theoretical support for avoiding similar incidents in the future.

Fig. 1. Schematic diagram of ignition point

3.2 Physical Model and Initial and Boundary Conditions

According to the basic conditions of ventilation in the accident mine, the paper selects as the computational domain the coal pillar of the fire area, the upper goaf, the lower coal and rock, and the ventilation system. As the accident occurred in the upper areas of the second blind air shaft and the fourth crossheading, no spontaneous combustion occurred in the 01 face area, the second old blind main shaft, the second new blind main shaft, the sixth crossheading, the upper areas of the 02 and 03 drifting faces, or the areas near the two faces. The roadways of the above areas are mainly circulation areas of gaseous products; therefore, in the paper, the mine ventilation system beyond the crossing of the second blind air shaft and the fourth crossheading is simplified. A hexahedral structured grid is adopted for meshing, with about 3.45 million cells. Taking into account the complex flow-field structure in the spontaneous combustion area, local mesh refinement is applied to the fire zone and some other parts. The simplified physical model is shown in Fig. 2.

Basic parameters and boundary conditions are as follows: the normal air inlet speed of the second blind air shaft is 3 m·s⁻¹, the porosity of the coal pillar is 0.2, and the porosity of the gob is 0.2 to 0.3. The seepage field boundaries of the gob and the face boundary conditions follow Eq. (10), namely a given penetration mass flow of 0.01 g·m⁻². The gas concentration field boundary takes the second kind boundary condition (Eq. (12)), namely zero normal flux. The outlet of the blind main shaft adopts a pressure-outlet boundary condition. The side wall of the roadway below the second blind air shaft along the fourth crossheading is impermeable, as is the side wall of the second new blind air shaft.

Fig. 2. Simplified physical model

4 Validation of Model of Fire Developing Tendency in the Mine


Fire Zone under Positive Pressure Ventilation and Analysis of
Simulation Results

4.1 Validation of the Mathematical Model

Table 1 shows the data and phenomena measured and observed at various locations by the rescue team during the emergency-rescue process after the accident. Fig. 3 is the simulated carbon monoxide concentration contour of the center sections along the length direction of the second blind air shaft and the fourth crossheading at the stage of full development of spontaneous combustion. As can be seen from the figure, the second blind air shaft, the fourth crossheading and the second new blind main shaft are full of dense smoke, and the CO concentration stabilizes between 0.0105 and 0.011. The simulation result approximately matches the actual monitoring results and observations; the error arises mainly from the simplification of the ventilation network. Through the above analysis it can be seen that the simulation results

Table 1. Carbon monoxide concentration and temperature measured before the adjustment of the ventilation system during the accident rescue, and observed phenomena

Location                                                                        | CO volume fraction | Temperature | Phenomena
To 5 m below the main shaft                                                     | 0.00125            | -           | -
To 240 m of the second new blind main shaft                                     | -                  | -           | Dense smoke
To the bend at the shaft bottom, bypassing to 95 m of the second old blind shaft | 0.007             | -           | Moderate smoke throughout the whole tunnel
To 80 m below the +180 crossheading                                             | 0.014              | 35 °C       | -

basically agree with the development state of the accident, thus indicating the
accuracy and reliability of the mathematical model of fire developing tendency in the
mine fire zone under positive pressure ventilation.

4.2 Analysis of Numerical Simulation Results and Enlightenment to Safe


Production

Fig. 3 is the carbon monoxide volume fraction contour along the center section of the second blind air shaft and the fourth crossheading in the fire accident. The figure shows that, due to the compression of mechanical wind pressure under positive pressure ventilation, most gas in the fire zone flows into the goaf, and the fire has an evident tendency to spread upward from the goaf and the coal pillar. The carbon monoxide volume fraction contour during the initial period of the fire shows that the volume fractions of carbon monoxide in the second blind air shaft and the fourth crossheading are almost zero, and that carbon monoxide enters the upper part of the goaf. But as the fire develops, after the fire center forms, the carbon monoxide gradually spreads from the fire center toward the roadway and enters the second blind air shaft, while a lot of smoke begins to appear in the ventilation system. The volume fraction of carbon monoxide near the second blind air shaft and the fourth crossheading remains at about 0.01, which is close to the concentration values actually measured by the rescue crew during the rescue work.

Fig. 4 is a velocity contour through the center section of the second blind air shaft and the fourth crossheading during the development of the fire. The figure shows that, because the smoke entering the roadway at the initial stage of the fire is scanty, it has little effect on the airflow inside the roadway. But with the development of the fire, a lot of smoke flows into the roadway, and the expansion of the high-temperature gas speeds up the gas flow in the roadway. The acceleration directly increases the spread and diffusion rate of smoke in the ventilation system, resulting in the aggravation of the accident.

Fig. 3. Volume fraction contour of carbon monoxide at the section x=1.3, z=61.72

Fig. 4. Velocity contour at the section x=1.3, z=61.72

Fig. 5. Temperature contour on the section x=1.3, z=61.72

On the basis of the above simulation results and the description of the accident process, the following aspects deserve attention in fire monitoring for mine safety:

Firstly, for areas with crushed coal on the side wall of the roadway and serious air leakage, effective measures to prevent and control spontaneous combustion and air leakage should be adopted in time, so as to remove the major hidden danger of spontaneous combustion. Positive pressure ventilation cannot prevent spontaneous combustion in the goaf from spreading back against the airflow, which is why the fire accident occurred in the Fuhua coal mine.

Secondly, when carbon monoxide is used as a warning index gas for mine spontaneous combustion, attention should be paid to the influence of air volume and airflow on its measured concentration. Positive pressure ventilation presses most of the gases produced by spontaneous combustion into the goaf, significantly reducing the concentration of carbon monoxide in the return airway, which increases the difficulty of gas monitoring and timely warning of spontaneous combustion. Furthermore, as air leakage through the coal in the goaf is serious, a lot of fire smoke containing carbon monoxide seeps directly from gaps in the coal, which also greatly increases the difficulty of early warning. This was another major reason why the fire accident was not warned of in time. It should also be noted that under positive pressure ventilation spontaneous combustion may still spread in the reverse direction, as the combustion zone spreads toward available oxygen.
Thirdly, the misconception of using a carbon monoxide concentration of 24 ppm as the warning index value should be corrected. The air volume in the mine's return airflow was 1150 m³·min⁻¹. The concentration of carbon monoxide gradually increased from 2 ppm on September 1st to 7 ppm on September 14th, and reached about 10 ppm before the accident happened, which showed signs of spontaneous combustion but aroused no vigilance. Taking 24 ppm, the limit at which toxic and harmful gases injure health, as the warning index of spontaneous combustion lost the opportunity for timely warning. The problem exists in many mines in China but still receives no attention.

5 Conclusion
Firstly, the model of fire developing tendency in a mine fire zone under positive pressure ventilation was established, including the roadway flow field, fire zone combustion, the coal-oxygen chemical reaction model and the boundary condition models.

Then, based on the particularly serious "9.20" fire accident of the Fuhua coal mine in Heilongjiang province, the paper verified the correctness of the above mathematical model, and obtained the law of the influence of positive pressure ventilation on the fire zone and its implications for safety in production.

Acknowledgement. Funded by the 11th Five-Year Plan National Science and Technology Infrastructure Program (2006BAK03B05), the Youth Fund of CUMTB (2009QE05) and the National Innovation Experiment Program for University Students (101108z).

References
1. Zhang, G.S.: Ventilation Security. China University of Mining and Technology Press, Xuzhou (2007)
2. Wang, H.Y.: Study on Three-Dimensional Smoke Flowing Theory and the Application Technology of VR in Passage Fire. China University of Mining and Technology, Beijing (2004)
3. Peng, B., Reynolds, R.G.: Cultural algorithms: Knowledge learning in dynamic environments. In: Proceedings of the 2004 Congress on Evolutionary Computation, pp. 1751-1758. World Scientific Publishing Co. Pte Ltd., Singapore (2004)
4. Anderson, J.D., Jr.: Computational Fluid Dynamics: The Basics with Applications. Tsinghua University Press, Beijing (2002)
5. Incropera, F.P., DeWitt, D.P., Bergman, T.L., et al.: Fundamentals of Heat and Mass Transfer. John Wiley & Sons, New York (2007)
6. Krause, U., Schmidt, M., Lohrer, C.: Computations on the coupled heat and mass transfer during fires in bulk materials, coal deposits and waste dumps. In: Proceedings of the COMSOL Multiphysics Users Conference, Frankfurt, pp. 1-6 (2005)
Study on Continued Industry's Development Path of Resource-Based Cities in Heilongjiang Province*

Ying Zhu and Jiehua Lv

Northeast Forestry University,


College of Economics and Management,
Harbin, China
dresszhu@qq.com

Abstract. The resource-based cities in Heilongjiang Province occupy a very important position. Most of these cities and regions face a dilemma of relative decline because their natural resources have plunged, which directly affects their development. Therefore, the industrial transformation of resource-based cities is the way to realize the sustainable development of the regional economy.

Keywords: sustainable development, resource-based cities, industry development.

1 Introduction
Heilongjiang is a province rich in resources, and its resource-based cities occupy a very important position. In the past these cities made significant contributions to national construction and to regional economic and social development, but this caused a series of "shocks", and recently most of these cities and regions have entered the middle-late period of resource exploitation. Because natural resources have plummeted, enterprise economics have declined and the ecological environment has worsened, the development of these cities and regions has slowed, and they face a predicament of relative decline. Choosing an appropriate and sustainable development path for the resource-based cities of Heilongjiang has become one of the important strategic topics.

2 Development Conditions of Resource-Based Cities in


Heilongjiang
Heilongjiang is the province with the most resource-based cities; there are 14: Yichun, Jixi, Hegang, Shuangyashan, Qitaihe, Daqing, Heihe, Wudalianchi, Shangzhi, Hailin, Muling, Ning'an and Hulin. There

* The article is subsidized by the philosophy and social science project of Heilongjiang province in 2010 (item number: 10B046) and the Heilongjiang post-doctorate scientific research foundation (item number: LBH-Q09178).

Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 390-395, 2011.
© Springer-Verlag Berlin Heidelberg 2011

are seven prefecture-level cities among them: Yichun, Jixi, Hegang, Shuangyashan, Qitaihe, Daqing and Heihe. According to the type of resources they can be divided into coal cities, a petroleum city and logging cities: 4 coal cities (Jixi, Hegang, Shuangyashan, Qitaihe); 1 petroleum city (Daqing); and 9 logging cities (Yichun, Heihe, Wudalianchi, Shangzhi, Hailin, Muling, Ning'an, Hulin). These cities mostly evolved on the basis of the exploration and exploitation of mineral and forest resources rather than forming naturally. Because of the need to develop mineral resources, the state injected large amounts of human, financial and material resources in a relatively short period, and places with only a few village households, or deserted land, suddenly became cities.
Resource-based cities in Heilongjiang province are mostly in a period of recession. Industrial transformation and development in these cities has been slow; oil, coal and forest resources have declined; the industrial structure built on natural resources is single; the scale of follow-on industry remains small; and growth cannot make up for the irreparable decline of mining.

3 Problems Facing the Industrial Development of Resource-Based Cities in Heilongjiang

The industrial development of these resource-based cities faces the following main problems:

3.1 Insufficient Follow-Up Resource Reserves and a Worsening Ecological Environment

As for petroleum resources, the Daqing oilfield's remaining recoverable reserves are about 539 million tons; with an annual output of about 45 million tons, even under the precondition of actively implemented protective mining, the surplus can last only about 13 years. As for forest resources, the forested areas of the Greater Khingan Mountains and Yichun have been significantly reduced, and total standing volume, tree quality and recoverable increment in volume have all dropped substantially. As for coal resources, the four big coal cities of Jixi, Hegang, Shuangyashan and Qitaihe have produced coal for 68 years; of the existing 33 mines, 16 have already exhausted their resources. Long-term resource exploitation in the resource-based cities has been disconnected from environmental protection, and the ecological environment has been seriously damaged. Besides the common "three wastes", different types of cities suffer different pollution. For example, in the coal cities, ground subsidence and coal mining take up a lot of land, and waste blocks rivers and discharges dust, causing serious environmental pollution. In the petroleum city, oil extraction has destroyed vegetation, bringing soil desertification and serious salinization in the plain region. In the logging cities, long-term excessive tree cutting has lowered the forest coverage rate and gradually weakened the role of forest shelter belts. The destruction of the ecological environment not only affects the health and quality of life of local residents but also greatly restricts the cities' success in attracting investment.

3.2 Simple Industrial Structure and Strong Dependence on Resources

Economic development in the resource-based cities of Heilongjiang relies too much on resource exploitation and rough machining, pursuing only the expansion of resource product output while other industries remain underdeveloped, so the leading industry of these cities is single, and the economy and public finance rely heavily on the resource industry. The four coal cities have always taken coal exploitation and processing as their main force; the proportion of the oil industry in Daqing's GDP has been more than 80%; and the logging cities basically develop by eating into forest capital, staying at low value-added links such as primary processing. The structure of the first, second and third industries is unbalanced: for example, in 2008 the three industries of Daqing accounted for 11.8%, 85.1% and 3.1% respectively. Most resource-based cities still take the second industry as the main body while the first and third industries lag behind. The single industrial structure leaves the cities' economic development short of momentum, and as resources are exhausted, economic conditions decline and even deteriorate.

3.3 Low Industrial Correlation and Low Economic Benefit

The leading industries of the resource-based cities are mainly state-owned enterprises whose investment, production and sales were all fixed by the state, so the resource industries and the city formed relatively independent operating systems. The resource industry is slow to radiate its advantages and drive related industries, its diffusion effect is poor, and industrial linkage is low, which limits the role of the resource industry in promoting the local economy and makes the urban economy over-reliant on it. To obtain larger benefits, resource industries often pay more attention to the development of reserves; they generally belong to upstream industries and ignore the deeper processing that can produce larger added value, so the industrial chain is short, causing deformity or recession of the urban economy. In addition, the large assets of the resource industry, including equipment and basic production facilities, are highly specific; over time this part of enterprise assets becomes sunk costs difficult to use for other purposes. Therefore, once formed, the industrial structure of a resource-based city has strong rigidity, which severely weakens the city's ability to change its industrial structure.

3.4 Acute Shortage of Funds and Scarce Talent Resources

Because of the lack of early industrial transformation and technological upgrading, most resource-based cities are still in the primary state of outputting processed resource products with low profit and weak market competitiveness, which is the basic reason for the cities' lack of financial resources. In addition, because of disadvantages such as geographical location, natural environment and living conditions, resource-based cities often see more talent flowing out than in. Talent is the foundation of regional innovation; the shortage of intellectual resources weakens these cities' innovation capacity and makes technical renovation of state-owned enterprises difficult. Even when new high-tech projects are introduced, the lack of talent prevents good projects from achieving good benefits, and new projects cannot be started.

4 Analysis of the Continued Industry Development Path of Resource-Based Cities in Heilongjiang

4.1 Guarantee the Development of the Original Leading Industry

A city's development cannot abandon its foundation. To ensure the sustainable development of the resource industry, resource-based cities must take the continued dominance of the original leading industry as a prerequisite; it is the forceful guarantee of sustained, healthy and steady urban economic development. The oil industry has always been an important pillar of Daqing's economic development, so Daqing's petrochemical enterprises should fully support the construction of a large petrochemical base and strive to make the petrochemical industry the city's biggest leading industry. Forestry has been the most important pillar of the logging cities' economy; in the future they should continue to promote a complete industrial system of forest cultivation, lumbering and processing. The coal cities of eastern Heilongjiang, while exploiting coal resources, may actively expand coal washing and coal-to-electricity industries.

4.2 Actively Developing Green Industry

Developing the green industry can not only bring new points of economic growth but also reduce damage to the natural environment and remove the greatest hazard, the deterioration of the environment. Daqing can develop a green food industry of dairy cows, pigs, beef, mutton and geese, and focus on developing wetland landscape and green hot-spring tourism projects. The logging cities should actively develop wild fruit drinks, fresh edible fungi, fruit milk, mineral water, quick-frozen products, pollution-free rice and other green foods, and use their abundant tourism resources to develop the characteristic forest tourism of northern Heilongjiang. The coal cities of eastern Heilongjiang should also build tourism projects around Xingkai Lake, the Wusuli River, the Wandashan forests, the Zhenbaodao wetland, the Longjiang gorges, the Weihe wetland and other city attractions.

4.3 Focus on the Development of High-Tech Industries

Science and technology are the primary productive force, and the weapon that resource-based cities must make good use of for sustainable development. High-tech industry is characterized by high technological content, outstanding competitiveness, great development potential, high returns and high risk. Resource-based cities should strive to substitute high and new technology for resource consumption, shifting economic growth from an extensive to an intensive mode. Developing high-tech industry will accelerate the economy's evolution toward low consumption, high efficiency and freedom from pollution. In practice, Daqing should focus on developing biological and chemical pharmaceuticals, and strive to develop energy-saving, environmentally friendly and special new building materials. The forestry city can further develop a complete pharmaceutical industry in both traditional Chinese and Western medicine, giving priority to pills, tablets, capsules, suppositories and oral liquid dosage forms. The coal cities can adopt high-tech forms to develop pithead power generation, coal chemical, coal metallurgy and coal-based building material industries, seeking higher value returns from limited resources and extending the industrial chain.
394 Y. Zhu and J. Lv

4.4 To Develop the Service Industry

The development level of the service industry is a measure of a city's economic development level, as well as an important symbol of, and an important basis for judging, its degree of civilization. For a long time, the service industries of Heilongjiang's resource-based cities have lagged behind, and there is still a large gap between them and the requirements of economic development and improved living standards. Actively developing the service industry and raising its proportion in the economy has therefore become an urgent matter. Daqing should constantly upgrade its tertiary industries, replacing traditional tertiary industries with emerging ones and turning from "using oil to support the city" into "using the city to sustain the city".

4.5 Providing Government Guarantees for Resource-Based Cities' Continued Industry Development

The government is the organizer of, and the responsible party for, the strategic development of continued industries in resource-based cities. First, it can provide institutional guarantees: the government can formulate laws and regulations for the development of continued industries, as well as specific investment policies, financing policies and industrial policies that tilt in their favor. Second, the government can provide financial aid. Adjusting and upgrading the industrial structure and gradually phasing in substitute and successor industries require a great deal of money; the shortage of funds can hardly be fully solved by the market mechanism alone, so financial intervention by the central and local governments becomes inevitable. Third, the government can provide planning and guidance for resource-based cities. These cities have made great contributions to the development of the national economy, and national planning and guidance play a stabilizing and orienting role in their development. Of course, planning and guidance are phased measures, required only during a certain period of resource-based cities' development.

Cable Length Measurement Systems Based on Time
Domain Reflectometry

Jianhui Song*, Yang Yu, and Hongwei Gao

School of Information Science and Engineering, Shenyang Ligong University,


Shenyang, 110159, P.R. China
hitsong@126.com

Abstract. Based on the principle of time domain reflectometry (TDR) cable


length measurement, the principle error of the TDR cable length measurement
is analyzed, and methods for reducing the cable length measurement error are discussed.
Two cable length measurement systems based on the TDR principle are
designed. The high-precision time interval measurement module is the core of
the first measurement system. The sampling oscilloscope with a computer is the
core of the second measurement system. The experimental results show that the
measurement systems developed in this paper can achieve high cable length
measuring precision.

Keywords: TDR, time interval measurement, measurement system.

1 Introduction
The issue of measuring precision in the wire and cable market becomes more and
more prominent. It is significant to measure the cable length precisely, rapidly and
economically. Compared with the traditional measurement methods, time domain
reflectometry (TDR) technology has the advantages of being non-destructive, portable and highly precise, which make it an ideal cable length measurement method [1-3]. In order to
achieve high cable length measuring precision and study the key technologies of the
cable length measurement based on TDR, two cable length measurement systems
based on different platforms are developed.

2 The TDR Cable Length Measurement Theory


TDR is a very useful measuring technology based on high-speed pulse technology.
The cable length measuring principle is very simple. The test voltage pulse is injected
into one end of the cable, and the pulse will be reflected at the end of the cable. By
measuring the time interval between the injection pulse and the reflection pulse, the
cable length can be obtained, assuming the propagation velocity is constant [4,5]. The formula
for length measurement is
l = v·t/2    (1)

*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 396-401, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Cable Length Measurement Systems Based on Time Domain Reflectometry 397

where l is the cable length, v the signal propagation velocity, and t the time interval between the injection pulse and the reflection pulse.
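As a quick numerical check of formula (1), the round-trip relation can be evaluated directly. This is a minimal sketch (our own, not the authors' code); the velocity and delay values are illustrative, chosen to match the 2×10^8 m/s and 105 m scale used later in Section 4:

```python
def tdr_length(v, t):
    """Cable length from formula (1): the pulse travels down the cable
    and back, so the one-way length is half of v * t."""
    return v * t / 2.0

# Illustrative values: v = 2e8 m/s and a round-trip delay of
# 1.05 microseconds correspond to roughly a 105 m cable.
print(tdr_length(2e8, 1.05e-6))
```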
According to the linear superposition law of error propagation, the cable length error
can be expressed as formula (2)
Δl = (1/2)·(t·Δv + v·Δt)    (2)

where v is the signal propagation velocity in the cable, t the time interval between the injection pulse and the reflection pulse, Δv the propagation velocity error, and Δt the time interval measuring error.
The direct pulse counting method is the most basic method of time interval measurement, and nearly all other time interval measurement methods build on it; many combine direct pulse counting with other techniques to achieve high measuring precision. Therefore, the TDR cable length measuring accuracy based on the principle of direct pulse counting time interval measurement is analyzed. According to the principle of direct pulse counting, t = N/f, so Δt can be expressed as formula (3)

Δt = ΔN/f - (N/f^2)·Δf    (3)

where f is the count pulse frequency, N the count pulse number, Δf the count pulse frequency error, and ΔN the count pulse number error.
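The quantization inherent in direct pulse counting can be made concrete with a small illustration (a sketch under assumed values: the 1.2 GHz count clock from Section 4 and a 1.05 µs round-trip interval; not the authors' code):

```python
f = 1.2e9          # count pulse frequency (Hz), as in Section 4
v = 2e8            # assumed propagation velocity (m/s)
t_true = 1.05e-6   # a round-trip interval of 1.05 microseconds

# Direct pulse counting quantizes the interval to whole clock periods.
N = round(t_true * f)           # counted pulses
t_meas = N / f                  # measured interval
dt_quant = 1.0 / f              # +/-1 count is one clock period
dl_quant = v * dt_quant / 2.0   # resulting single-shot length resolution

print(N, t_meas, dt_quant, dl_quant)
```

A single ±1 count at 1.2 GHz corresponds to roughly 0.08 m of length resolution here, which is why averaging over n measurements matters in the uncertainty analysis that follows.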
Therefore, formula (2) can be expressed as formula (4)

Δl = l·(Δv/v) + l·(ΔN/N) - l·(Δf/f)    (4)

The cable length is measured n times. Taking the random measuring errors into account and applying the law of random error accumulation, the average measuring error can be calculated by formula (5):

Δl̄ = l·(Δv/v) + (l/n)·Σ_{i=1}^{n} (ΔN_i/N_i) - l·(Δf/f)    (5)
The ±1 quantization error can be expressed as:

1/N_1 = 1/N_2 = ... = 1/N_n ≈ 1/N    (6)
398 J. Song, Y. Yu, and H. Gao

At present, the temperature stability of crystal oscillators is better than 10^-9, so the count pulse frequency error can be ignored. At this point the TDR cable length measuring uncertainty u_l can be expressed as

u_l = sqrt[ (l·u_v/v)^2 + (v/(2f·√n))^2 ]    (7)

where u_v is the propagation velocity standard uncertainty, u_f the reference frequency standard uncertainty, and n the number of measurements.
It can be seen from formula (7) that, when the velocity is constant, the count pulse frequency and the number of measurements are the key factors in the TDR cable length measuring uncertainty.
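Formula (7) is straightforward to evaluate numerically. The sketch below (our own, with illustrative values from Section 4, and u_v set to zero to isolate the quantization term) shows the uncertainty floor set by the count clock and the averaging:

```python
import math

def tdr_uncertainty(l, u_v, v, f, n):
    """Combined standard uncertainty per formula (7): a velocity term
    plus a +/-1-count quantization term averaged over n measurements."""
    return math.hypot(l * u_v / v, v / (2.0 * f * math.sqrt(n)))

# Illustrative values from Section 4: l = 105 m, v = 2e8 m/s,
# f = 1.2 GHz, n = 256; u_v = 0 isolates the quantization floor.
print(tdr_uncertainty(105.0, 0.0, 2e8, 1.2e9, 256))
```

With these numbers the quantization floor is about 5 mm, illustrating how a fast count clock and repeated measurements jointly drive the uncertainty down.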

3 The TDR Cable Length Measurement Systems


In order to achieve high cable length measuring precision and study the key
technologies of the cable length measurement based on TDR, two cable length
measurement systems based on different platforms are developed. The schematic
diagram and photo of measurement system I and II are shown in Fig. 1 and Fig. 2.
The first measurement system is composed of signal transmitting and receiving modules, a high precision time interval measurement module, a temperature measurement module, a display circuit, a control panel and a microprocessor. A signal of appropriate amplitude and pulse width, chosen according to the actual measurement situation, is transmitted into the cable under test. The time interval between the transmitted pulse and the reflected pulse is then shaped into a gate signal and measured by the high precision time interval measurement module. The time interval measurement result is sent to the microprocessor and filtered to eliminate gross errors. According to the ambient temperature measured by the temperature measurement module, the propagation velocity is compensated. Finally, the length of the cable under test is calculated and sent to the display circuit.

Fig. 1. Schematic diagram and photo of measurement system I



[Block diagram: a signal generator and a sampling oscilloscope connected to the cable under test, with a temperature measurement module and a computer]
Fig. 2. Photograph of measurement system II

The second measurement system is composed of a signal generator, a sampling oscilloscope, a temperature measurement module and a computer. A signal of appropriate amplitude and pulse width, generated by the signal generator according to the actual measurement situation, is transmitted into the cable under test. The waveform data of the transmitted and reflected pulses are collected by the sampling oscilloscope and sent to the computer for processing. First, the measurement data are filtered by the system software; then the reflected wave is identified and detected by the reflected wave recognition algorithm. According to the ambient temperature measured by the temperature measurement module, the propagation velocity is compensated. Finally, the length of the cable under test is calculated.

4 Experiment and Analysis


A 105.00 m length of RVV 300/300V PVC sheathed flexible cable is measured by measurement system I. The count frequency of the high precision time interval measurement module is 1.2 GHz. The cable length is measured 256 times, with the propagation velocity taken as 2×10^8 m/s. Analysis of the measurement data yields nine distinct measurement results, whose histogram is shown in Fig. 3.
The standard deviation of the TDR cable length measurement system is

s_l = sqrt( Σ_{i=1}^{256} Δl_i^2 / (N - 1) ) = 0.12 (m)    (8)

The standard deviation of the arithmetic mean of the cable length measurements is

s(l̄) = s_l / √N = 0.01 (m)    (9)

With the coverage factor k equal to 2, the expanded uncertainty of the cable length measurement is

U_l = k·s(l̄) = 2·s(l̄) = 0.02 (m)    (10)

It can be seen from formulas (8) to (10) that the measurement system developed in this paper achieves high-precision cable length measurement.
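The statistics of formulas (8) to (10) can be reproduced from raw repeat readings. The sketch below (ours) uses made-up sample values standing in for the paper's 256 measurements:

```python
import math
import statistics

def length_statistics(samples, k=2):
    """Sample standard deviation (formula (8)), standard deviation of
    the arithmetic mean (formula (9)), and expanded uncertainty with
    coverage factor k (formula (10))."""
    s = statistics.stdev(samples)          # formula (8)
    s_mean = s / math.sqrt(len(samples))   # formula (9)
    return s, s_mean, k * s_mean           # formula (10)

# Made-up repeat readings (metres), for illustration only.
samples = [103.50, 103.42, 103.58, 103.50, 103.42, 103.58, 103.50, 103.50]
s, s_mean, U = length_statistics(samples)
print(s, s_mean, U)
```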

[Histogram: number of occurrences of the measurement results versus cable length (m), over nine bins from 103.25 m to 103.92 m]
Fig. 3. Random error of cable length measurement system

5 Conclusions
Based on the principle of time domain reflectometry (TDR) cable length measurement, two cable length measurement systems based on the TDR principle are designed. The
high-precision time interval measurement module is the core of the first measurement
system. The sampling oscilloscope with a computer is the core of the second
measurement system. The experimental results show that the measurement systems
developed in this paper can achieve high cable length measuring precision.

References
1. Dodds, D.E., Shafique, M., Celaya, B.: TDR and FDR Identification of Bad Splices in Telephone Cables. In: 2006 Canadian Conference on Electrical and Computer Engineering, pp. 838-841 (2007)
2. Du, Z.F.: Performance Limits of PD Location Based on Time-Domain Reflectometry. IEEE Transactions on Dielectrics and Electrical Insulation 4(2), 182-188 (1997)
3. Pan, T.W., Hsue, C.W., Huang, J.F.: Time-Domain Reflectometry Using Arbitrary Incident Waveforms. IEEE Transactions on Microwave Theory and Techniques 50(11), 2558-2563 (2002)
4. Langston, W.L., Williams, J.T., Jackson, D.R.: Time-Domain Pulse Propagation on a Microstrip Transmission Line Excited by a Gap Voltage Source. In: IEEE MTT-S International Microwave Symposium Digest, pp. 1311-1314 (2006)
5. Buccella, C., Feliziani, M., Manzi, G.: Detection and Localization of Defects in Shielded Cables by Time-Domain Measurements with UWB Pulse Injection and Clean Algorithm Postprocessing. IEEE Transactions on Electromagnetic Compatibility 46(4), 597-605 (2004)
The Cable Crimp Levels Effect on TDR Cable Length
Measurement System

Jianhui Song*, Yang Yu, and Hongwei Gao

School of Information Science and Engineering, Shenyang Ligong University,


Shenyang, 110159, P.R. China
hitsong@126.com

Abstract. Time domain reflectometry (TDR) technology is an ideal cable length


measurement method. The traveling wave propagation velocity is the key to the
measuring accuracy of the TDR cable length measurement system. The cable
crimp levels effect on traveling wave propagation velocity is analyzed
theoretically, and corresponding experiments are performed. The experimental results match the theoretical analysis and show that the law of the traveling wave propagation velocity influenced by cable crimp levels differs for different types of cable; however, as long as the state of the cable under test is uniform, the velocity error can be controlled within a small range, ensuring the

Keywords: TDR, propagation velocity, cable crimp level.

1 Introduction
All kinds of wires and cables have been widely used with the rapid development of
the national economy. The wire and cable industry has developed rapidly, gradually expanding its production scale and market share. Cable is an important
commodity, and its length measurement precision is strictly formulated in the national
standard. However, the issue of measuring precision in the wire and cable market
becomes more and more prominent. It is significant to measure the cable length
precisely, rapidly and economically. Compared with the traditional measurement
methods, time domain reflectometry (TDR) technology has the advantages of being non-destructive, portable and highly precise, which make it an ideal cable length measurement method [1-3].
It is found in practice that the measuring accuracy of the TDR cable length
measurement system can be influenced by the cable crimp levels. In order to reduce
the cable crimp levels effect on the measuring accuracy of the TDR cable length
measurement system, the cable crimp levels effect on traveling wave propagation
velocity is analyzed theoretically, and the corresponding experiment is done.

2 Relationship between Velocity and Distributed Capacitance of Cable
According to the theory of parallel-plate capacitors, the capacitance of a capacitor can be calculated as formula (1):

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 402407, 2011.
Springer-Verlag Berlin Heidelberg 2011
The Cable Crimp Levels Effect on TDR Cable Length Measurement System 403

C = εDl/d    (1)

where D is the width of the capacitor plates, l the length of the plates, d the distance between the two plates, and ε the dielectric constant.
According to the cable length measurement principle of capacitance conversion, a cable with an open end is regarded as a capacitor, and the two cores of the cable as its two plates. The effective width of the cable core can be approximately interpreted as D in formula (1), the length of the cable as l, and the distance between the cable cores as d. For a two-core cable with fixed internal structure (fixed ε, D and d), the capacitance between the two wires of the cable is proportional to the cable length. That is,
C_1/C_2 = l_1/l_2    (2)

The cable length can thus be obtained by measuring the cable capacitance per unit length [4].
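The proportionality of formula (2) gives a direct length estimate from two capacitance readings. A small sketch (the function name is ours, for illustration), using the curled-state values that appear later in Table 1:

```python
def length_from_capacitance(c_unknown, c_ref, l_ref):
    """Formula (2): for a fixed cable construction the capacitance is
    proportional to length, so l_unknown = l_ref * C_unknown / C_ref."""
    return l_ref * c_unknown / c_ref

# Curled-state equivalent capacitances from Table 1 (nF), using the
# 30.00 m piece as the reference length.
print(length_from_capacitance(4.3892, 1.5183, 30.00))
```

The estimate lands within a few centimetres of the true 86.75 m, provided both readings are taken with the cable in the same state.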
At high frequencies, the propagation velocity of electromagnetic waves can be calculated as formula (3) [5-7]:

v = 1/√(L_0·C_0)    (3)

where L_0 is the distributed inductance per unit cable length and C_0 the distributed capacitance per unit cable length.
It can be seen from formula (3) that the wave velocity is inversely proportional to the square root of the distributed capacitance per unit length. The relationship between the velocity and the distributed capacitance can be expressed as

v ∝ 1/√C_0    (4)

The distance between adjacent insulated cores decreases, and the distributed capacitance of the cable increases, as the cable is curled; therefore the velocity decreases as the cable is curled. However, as long as the state of the cable under test is uniform, its distributed capacitance is also uniform, and the velocity error can be kept within a small range.
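The relation of formulas (3) and (4) can be checked numerically. The sketch below (ours) assumes a distributed inductance L0 (the paper gives none) together with the distributed capacitances from Table 1:

```python
import math

def wave_velocity(l0, c0):
    """Formula (3): v = 1 / sqrt(L0 * C0), with L0 in H/m and C0 in F/m."""
    return 1.0 / math.sqrt(l0 * c0)

L0 = 5.3e-7   # assumed distributed inductance (H/m); not from the paper
v_straight = wave_velocity(L0, 48.717e-12)  # straightened C0, Table 1
v_curl = wave_velocity(L0, 50.610e-12)      # curled C0, Table 1

# Curling raises C0, so the velocity drops by the factor sqrt(C0s/C0c).
print(v_curl < v_straight, v_curl / v_straight)
```

The velocity ratio depends only on the capacitance ratio, not on the assumed L0, which is why the crimp effect can be discussed from capacitance measurements alone.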

3 Experiment and Analysis


In order to further study the law of the traveling wave propagation velocity influenced by cable crimp levels, measurements were performed on RVV 300/300V PVC insulated cable. The equivalent and distributed capacitances of different cable lengths in the curled and straightened states are shown in Table 1.

Table 1. Equivalent capacitances and distributed capacitances of different cable lengths in the curled and straightened states

Cable under   Curl                                 Straighten
test (m)      Equivalent (nF)   Distributed (pF)   Equivalent (nF)   Distributed (pF)
30.00         1.5183            50.610             1.4615            48.717
86.75         4.3892            50.595             4.2257            48.711

Table 2. Equivalent capacitance ratio and length ratio of the 86.75 m and 30.00 m cable lengths in the curled and straightened states

Curl                                Straighten
Capacitance ratio   Length ratio    Capacitance ratio   Length ratio
2.8909              2.8917          2.8913              2.8917

Table 3. Velocity and relative error for different cable lengths and cable crimp degrees

Cable under test (m)   Cable state   Velocity (m/s)   Relative error (%)
30.00                  Straighten    1.9234×10^8      1.84
                       Curl          1.8886×10^8
86.75                  Straighten    1.9258×10^8      1.94
                       Curl          1.8892×10^8

The equivalent capacitance ratio and length ratio of the 86.75 m and 30.00 m cable lengths in the curled and straightened states are shown in Table 2. The ambient temperature was 17.0 °C. It can be seen from Table 2 that although the equivalent capacitances of the cable differ under different conditions, the capacitance between the two wires remains proportional to the cable length. In the same state, the longer the cable, the greater its equivalent capacitance, while the distributed capacitance per unit length stays the same. Comparing the curled and straightened states, both the equivalent capacitance and the distributed capacitance are bigger in the curled state, because the distance between the adjacent insulated wire cores decreases as the cable is curled.
The velocity and relative error for different cable lengths and different cable crimp degrees are shown in Table 3. It can be seen from Table 3 that the relative error between the velocities in the curled and straightened states is large. The velocity relative error at the same crimp degree for the 86.75 m and 30.00 m cable lengths is shown in Table 4. It can be seen from Table 3 and Table 4 that the crimp degree of RVV 300/300V PVC

sheathed cable has a great impact on the velocity. However, as long as the state of the cable under test is uniform, the velocity can be controlled within a small range.
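The relative errors of Table 3 can be reproduced by taking the velocity change relative to the curled-state value, an interpretation consistent with the tabulated numbers (the helper below is ours, for illustration):

```python
def crimp_relative_error(v_straight, v_curl):
    """Percentage velocity difference between the straightened and
    curled states, taken relative to the curled-state velocity."""
    return 100.0 * (v_straight - v_curl) / v_curl

# 30.00 m and 86.75 m RVV cables, velocities from Table 3 (m/s).
print(round(crimp_relative_error(1.9234e8, 1.8886e8), 2))  # -> 1.84
print(round(crimp_relative_error(1.9258e8, 1.8892e8), 2))  # -> 1.94
```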
The measurements were also performed on RVVP multi-core shielded cable. The length of the three-core cable is 40.73 m, that of the seven-core cable 41.57 m, and that of the eight-core cable 43.18 m. The ambient temperature was 18.0 °C. The equivalent and distributed capacitances of the different cable cores in the curled and straightened states are shown in Table 5, and the velocity relative errors in Table 6.
It can be seen from Table 5 and Table 6 that the crimp degree of RVVP multi-core shielded cable has little impact on the velocity. Nevertheless, the state of the cable under test should be kept uniform to control the velocity error within a small range.


The measurements were performed on SYV75-5-1 coaxial cable. The length is 156.06 m and the ambient temperature was 18.0 °C. The equivalent and distributed capacitances of the coaxial cable in the curled and straightened states are shown in Table 7, and the coaxial cable velocity relative error in the two states is shown in Table 8.

Table 4. Velocity relative error of the same cable crimp degree with 86.75m and 30.00m cable
length

Cable state Velocity relative error(%)

Straighten 0.1

Curl 0.03

Table 5. Equivalent capacitances and distributed capacitances of different cable cores in the curled and straightened states

Cable under   Straighten                           Curl
test          Equivalent (nF)   Distributed (pF)   Equivalent (nF)   Distributed (pF)
Three cores   5.7440            141.03             5.7890            142.14
Seven cores   4.4257            106.46             4.4891            107.99
Eight cores   4.7321            109.60             4.7709            110.50

Table 6. Velocity relative error of different cable crimp degrees

Cable under test      Three cores          Seven cores          Eight cores
                      Straighten   Curl    Straighten   Curl    Straighten   Curl
Velocity (10^8 m/s)   1.7098       1.7071  1.7463       1.7440  1.7030       1.6997
Relative error (%)    0.16                 0.13                 0.19



Table 7. Equivalent capacitances and distributed capacitances of the coaxial cable in the curled and straightened states

Cable under    Straighten                           Curl
test           Equivalent (nF)   Distributed (pF)   Equivalent (nF)   Distributed (pF)
Coaxial cable  11.993            76.848             12.022            77.034

Table 8. Coaxial cable velocity relative error of different cable crimp degrees

Cable under test     Straighten    Curl
Velocity (m/s)       1.9815×10^8   1.9811×10^8
Relative error (%)   0.02

It can be seen from Table 7 and Table 8 that the crimp degree of SYV75-5-1 coaxial cable has very little impact on the velocity. Nevertheless, the state of the cable under test should still be kept uniform to control the velocity error within a small range.

4 Conclusions
The cable crimp levels' effect on the traveling wave propagation velocity is studied. The experimental results show that the crimp degree of RVV 300/300V PVC sheathed cable has a great impact on the velocity, the crimp degree of RVVP multi-core shielded cable has little impact on the velocity, and the crimp degree of SYV75-5-1 coaxial cable has very little impact on the velocity.

References
1. Dodds, D.E., Shafique, M., Celaya, B.: TDR and FDR Identification of Bad Splices in Telephone Cables. In: 2006 Canadian Conference on Electrical and Computer Engineering, pp. 838-841 (2007)
2. Du, Z.F.: Performance Limits of PD Location Based on Time-Domain Reflectometry. IEEE Transactions on Dielectrics and Electrical Insulation 4(2), 182-188 (1997)
3. Pan, T.W., Hsue, C.W., Huang, J.F.: Time-Domain Reflectometry Using Arbitrary Incident Waveforms. IEEE Transactions on Microwave Theory and Techniques 50(11), 2558-2563 (2002)
4. Shan, L.D., Meng, W.S., Zhang, L.H., Wang, Y.M.: Apply the Principle of Capacity Conversion of Capacitor and Quick Determination the Inside Break Point of Cable. Metal World (4), 46-48 (2006)
5. Mugala, G., Eriksson, R.: Measurement Technique for High Frequency Characterization of Semi-Conducting Materials in Extruded Cables. IEEE Transactions on Dielectrics and Electrical Insulation 11(3), 471-480 (2004)
6. Oussalah, N., Zebboudj, Y.: Analytic Solutions for Pulse Propagation in Shielded Power Cable for Symmetric and Asymmetric PD Pulses. IEEE Transactions on Dielectrics and Electrical Insulation 14(5), 1264-1270 (2007)
7. Oussalah, N., Zebboudj, Y., Boggs, S.A.: Partial Discharge Pulse Propagation in Shielded Power Cable and Implications for Detection Sensitivity. IEEE Electrical Insulation Magazine 23(6), 5-10 (2007)
The Clustering Algorithm Based on the Most Similar
Relation Diagram

Wei Hong Xu1, Min Zhu1, Ya Ruo Jiang1, Yu Shan Bai2, and Yan Yu2
1
College of Computer Science, Sichuan University, Chengdu, China
2
College of Education, Tianjin Normal University, Tianjin, China
weihong.xu.scu@gmail.com
yuyantj@yahoo.com.cn

Abstract. The MSRD (Most Similar Relation Diagram) of a dataset is a


weighted undirected graph constructed from an initial dataset. In the MSRD, each datum, represented by a vertex, is connected with its MSD (most similar data), and each MSRG (most similar relation group), represented by a sub-graph, is connected with its MSG (most similar group) by connecting the most similar pairs of data between the two sub-graphs. The clustering algorithm based on the MSRD involves two stages: constructing the MSRD of the dataset and cutting the diagram into sub-graphs (clusters). In this paper, we develop a package of methods for the latter stage and apply them to some synthesized and real datasets. The performance verifies the validity of these methods and demonstrates that MSRD-based clustering is a universal and rich algorithm.

Keywords: data mining, clustering algorithm, weighted graph, the most similar
relation diagram (MSRD), the most similar data (MSD), the most similar group
(MSG).

1 Introduction

In recent years, cluster analysis has become an important tool in data mining and has been extensively studied and applied in many fields such as pattern recognition, customer segmentation, similarity search and trend analysis. Though thousands of clustering algorithms have been presented, new algorithms continue to appear [1] owing to demands in practice and theory. Some recent trends, e.g. ensemble methods [2], semi-supervised methods [3], methods for clustering large-scale datasets, multi-way clustering methods [4], and kernel and spectral methods [5], have developed notably.
Recently, Yan Yu et al. published a clustering algorithm based on the MSRD (Most
Similar Relation Diagram) [6]. This algorithm consists of two stages: constructing the
MSRD of the dataset and cutting the MSRD into clusters. In literature [6], the first
stage has been presented in detail, but the second stage seems to have not been
explored sufficiently. In this paper we develop a package of methods of cutting the
diagram into clusters and apply some of the methods on some synthesized and real
dataset.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 408-415, 2011.
© Springer-Verlag Berlin Heidelberg 2011
The Clustering Algorithm Based on the Most Similar Relation Diagram 409

2 Terms and Definitions


Definition 1. Let G = G(V, E) be a graph, where V = {vn | n = 1, ..., N} is the set of vertexes. Then vh is the most similar vertex (MSV) of vi if h = arg minj {dij}, where dij is the dissimilarity between vi and vj; vi, vj ∈ V; j = 1, 2, ..., h, ..., N; j ≠ i.
A vertex vi and one of its MSVs vj is called a most similar pair (MSP) of vertexes and is denoted by vi-vj.
According to Definition 1, each vertex has at least one MSV in a graph, but it is not true that a vertex must be the MSV of another vertex; some vertexes are not the MSV of any vertex.
The edge connecting a vertex and its MSV is called a 1st level edge, denoted by eij(1).
Suppose Gi = Gi(Vi, Ei) is a sub-graph of G(V, E), with Ei = {eij(1) | j = 1, ..., J}, where J is the number of edges of Gi. Then Gi is called a 1st level sub-graph, denoted by Gi(1).

Definition 2. Let (1) G be cut into M sub-graphs, G = {Gm | m = 1, ..., M}; (2) Gi ∩ Gj = ∅ (Gi, Gj ∈ G; i, j = 1, 2, ..., M; i ≠ j); (3) Vi = {vip | p = 1, ..., P} and Vj = {vjq | q = 1, ..., Q} (Vi ⊆ Gi; Vj ⊆ Gj). Then Gl is the most similar graph (MSG) of Gi, and vlh ∈ Gl and vik ∈ Gi are the most similar pair (MSP) of vertexes between Gl and Gi, if l, h, k = arg minj,p,q wpq (wpq is the weight of e(vlp, viq)); j = 1, ..., l, ..., M; p = 1, ..., h, ..., P; q = 1, ..., k, ..., Q; j ≠ i.
The edge between the two vertexes of an MSP, vik and vlh, in 1st level sub-graphs is called a 2nd level edge, denoted by eij(2).
According to Definition 2, any sub-graph has at least one MSG in a partition, but sometimes a sub-graph is not the MSG of any sub-graph.
Definition 3. Suppose there is a sub-graph Gi(l) = Gi(Vi(l), Ei(l)), l = 2, 3, ..., L; Ei(l) = {eij(1), ..., eik(l-1) | j = 1, 2, ..., J; k = 1, 2, ..., K}. Then Gi is an mth level sub-graph, denoted by Gi(m), if m = arg maxl l.
Definition 4. Suppose there is a graph G = (V, E), V = {vn | n = 1, ..., N}, E = E(eij), where eij is the edge between vi and vj (i, j = 1, 2, ..., N; i ≠ j). Then G is called a Most Similar Relation Graph (MSRG) if every edge connects an MSP, vi-vj.

3 The Clustering Algorithm Based on the MSRD


The clustering algorithm based on the MSRD involves two stages: (1) constructing the
MSRD of the dataset; (2) cutting the MSRD into clusters.

3.1 Constructing the MSRD of the Dataset

The procedure of constructing the MSRD of a dataset is as follows:


Define and compute the dissimilarities between each vertex and the others, and find the MSVs of every vertex following Definition 1. In this paper we use the Euclidean distance to measure the dissimilarity between a pair of vertexes.
410 W.H. Xu et al.


Fig. 1. (a) The coordinate graph of dataset 1; (b) The four 1st level MSRG of dataset 1; (c) The
2nd level MSRG of dataset 1; (d)The MSRD of dataset 1; (e) The r-MSRD of dataset 1

Connect every vertex (or node) to its MSV by an edge. By doing so, a certain number of sub-graphs are composed; following the definitions above, these sub-graphs are all 1st level MSRGs, and all the edges are 1st level edges.
According to Definition 2, find the MSGs of every 1st level sub-graph and the MSP of vertexes between the two sub-graphs. Connect the two vertexes of every MSP, so that each 1st level sub-graph is connected to its MSGs to form a certain number of 2nd level sub-graphs; following the definitions above, these sub-graphs are 2nd level MSRGs. Iterate this process, merging lower level sub-graphs into higher level ones, until all MSRGs are aggregated into one graph, the MSRD of the dataset. Finally, write the weight value of every edge beside it.
By now the construction of the MSRD is accomplished.
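The construction procedure above can be sketched in code. The following is a minimal illustration only, not the authors' implementation: it assumes Euclidean dissimilarity and tracks sub-graph membership with a small union-find structure; all function and variable names are ours.

```python
import numpy as np

def build_msrd(X):
    """Minimal sketch of MSRD construction: connect every vertex to its
    most similar vertex (level 1), then repeatedly connect each sub-graph
    to another sub-graph through the most similar pair (MSP) of vertexes
    until one graph remains.  Returns edges as (i, j, level) tuples."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # dissimilarities
    np.fill_diagonal(d, np.inf)

    parent = list(range(n))                    # union-find over sub-graphs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = []
    for i in range(n):                         # level-1 edges: vertex -> its MSV
        j = int(np.argmin(d[i]))
        edges.append((i, j, 1))
        parent[find(i)] = find(j)

    level = 1
    while len({find(i) for i in range(n)}) > 1:
        level += 1
        for g in {find(i) for i in range(n)}:  # merge each sub-graph with its MSG
            members = [i for i in range(n) if find(i) == g]
            others = [i for i in range(n) if find(i) != g]
            if not members or not others:
                continue                       # already merged this round
            i, j = min(((a, b) for a in members for b in others),
                       key=lambda p: d[p[0], p[1]])
            edges.append((i, j, level))
            parent[find(i)] = find(j)
    return edges
```

For a dataset with two well-separated pairs of points, this produces four level-1 edges and one level-2 edge joining the two 1st level sub-graphs, mirroring the merging described above.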
As an example, we synthesized a simple two-dimensional dataset of 20 points called dataset 1. Fig. 1(a) is the coordinate graph of dataset 1. Fig. 1(b) shows the four 1st level sub-graphs and Fig. 1(c) shows the 2nd level sub-graphs of dataset 1. The numerals in Fig. 1(b) and (c) are the IDs of some data. The MSRD of dataset 1 constructed following the procedure above is shown in Fig. 1(d). Comparing Fig. 1(a) with (d), the relation between a dataset and its MSRD can be seen.
From Fig. 1(b) it can be seen that the 20 vertexes are connected by 1st level edges into four 1st level sub-graphs: G1(1){1, 2, 3, 4, 5, 6, 7, 8}, G2(1){9, 10, 11, 12, 13, 14, 15, 16}, G3(1){17, 18} and G4(1){19, 20}. Then, G1(1) and G2(1) are connected by the 2nd level edge e14,17 to form a 2nd level MSRG G1(2). G1(2){G1(1), G2(1), G3(1), G4(1)} is the highest level sub-graph and is also the MSRD of dataset 1.

3.2 Cutting the MSRD into Clusters


By cutting off a certain number of edges in the MSRD, a partition of the dataset can be generated. Usually the edges with the highest weights in the MSRD should be chosen to cut off if the Euclidean distance is the measure of dissimilarity. Therefore, if different approaches are adopted in weighting the edges, different edges will be selected to cut off and different partitions will be generated. There may be a variety of methods for weighting the edges, depending on the factors taken into account, such as the number of clusters, the similarity between the clusters, the densities of the data, the shape of the clusters, the uniformity of the partition, and the size or volume of the clusters. Suppose the weight of an edge eij is wij. We propose a package of methods for weighting the edges of the MSRD:
The Clustering Algorithm Based on the Most Similar Relation Diagram 411
The d-MSRD. wij = dij, where dij is the dissimilarity between xi and xj (or between vi and vj).
The l-MSRD. wij = lij, where lij is the level of eij.
The r-MSRD. wij = rij.

The fragment containing vi and vj in an MSRD can be one of the three cases shown in Fig. 2. In each case, the rij of eij is calculated according to formula (1). Case (a) is the normal one and formula (1) can be used directly. In case (b), take djk = ∞; in case (c), take djk = (djm + djl)/2; rij is then also calculated according to formula (1).

Fig. 2. The fragment cases in an MSRD

rij = (dij - dhi) + (dij - djk),   if (dij - dhi) > 0 and (dij - djk) > 0
rij = (dij - dhi),                 if (dij - dhi) > 0 and (dij - djk) ≤ 0      (1)
rij = 0,                           if (dij - dhi) ≤ 0 and (dij - djk) ≤ 0

The u-MSRD. wij = uij. The uniformity uij of eij is calculated according to formula (2):

uij = (N′ij · N″ij) / (N′ij + N″ij)      (2)

where N′ij and N″ij are the numbers of data in the two clusters respectively generated if eij is cut off.
The double-factor methods, such as the ld-MSRD, ud-MSRD, lr-MSRD and ur-MSRD, where wij is defined by wij = lij·dij, wij = uij·dij, wij = lij·rij and wij = uij·rij respectively.
The triple-factor methods, such as the uld-MSRD and ulr-MSRD, where wij is defined by wij = uij·lij·dij and wij = uij·lij·rij respectively.
Obviously, partitions with different features can be obtained by selecting different methods. For example, the d-method generates partitions in which the gaps between the clusters are as big as possible, so it favors partitions containing isolated nodes; see Fig. 3(a). It is interesting to notice that the partition generated by the d-method is identical to that generated by the single method of Hierarchical clustering. For instance, the partition in Fig. 3(a) is also generated by the single method of Hierarchical clustering.
The l-method cuts the high level edges off to prevent the lower level MSRGs from dividing. Therefore, the partition produced by the l-method is more uniform than that produced by the d-method; see Fig. 3(b). The u-method can generate partitions with high uniformity; the partition in Fig. 3(c) demonstrates this conclusion. The r-method can guarantee that the data points in a manifold fall in the same cluster; see Fig. 3(b).

(a) by the d-method; (b) by the l-, r-, lr-, ur- and ulr-methods; (c) by the u-method; (d) by the ud-method

Fig. 3. The partitions generated by different methods of MSRD

The double- or triple-factor methods take multiple factors into account so that they can generate partitions with multiple features. Fig. 3(d) is the partition based on the ud-method; it is more uniform than (a), but the gaps between some clusters are narrower than those of (a).
These arguments show that the MSRD method is a rich clustering method, for it can generate a variety of partitions with different features, some of which cannot be generated by K-means or Hierarchical clustering, such as the partition in Fig. 3(b).
That the partition in Fig. 3(b) is generated by the l-, r-, u-, lr-, ur- and ulr-methods of MSRD does not mean that these methods always generate the same partitions for any dataset. Whether the different methods generate different partitions varies with the characteristics of the datasets.
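The cutting stage itself is simple once the edges are weighted: remove the k-1 highest-weight edges and label the connected components of what remains. The sketch below is our own illustration, assuming the weighted MSRD is supplied as an edge list forming a spanning tree over the n vertexes, so that cutting k-1 edges yields exactly k clusters; all names are ours.

```python
from collections import defaultdict

def cut_msrd(n, weighted_edges, k):
    """Cut the k-1 highest-weight edges of an n-vertex MSRD (given as a
    list of (i, j, w) spanning-tree edges) and return one cluster label
    per vertex via the connected components of the remaining graph."""
    kept = sorted(weighted_edges, key=lambda e: e[2])[:len(weighted_edges) - (k - 1)]
    adj = defaultdict(list)
    for i, j, _ in kept:
        adj[i].append(j)
        adj[j].append(i)
    labels, next_label = [-1] * n, 0
    for s in range(n):                 # depth-first labelling of components
        if labels[s] != -1:
            continue
        stack, labels[s] = [s], next_label
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if labels[w] == -1:
                    labels[w] = next_label
                    stack.append(w)
        next_label += 1
    return labels
```

For instance, cutting a 4-vertex chain whose middle edge carries the largest weight into k = 2 clusters separates the two end pairs.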

4 The Clustering of the Synthesized Dataset


Two artificial datasets called dataset 2 and dataset 3 are used to test our algorithm. Their coordinate graphs are shown in Fig. 4(a) and 4(b).
There is no doubt that when the weight of e(v1, v13) is larger than that of e(v1, v2) (see Fig. 4(a)), it is an easy task for many existing algorithms to separate the two ellipses from each other. But when the weight of e(v1, v13) is smaller than that of e(v1, v2) (see Fig. 4(b)), many existing algorithms cannot separate the two ellipses. The partitions generated by k-means and the 7 methods of Hierarchical clustering shown in Fig. 4(c), (d), (e), (f) and (g) demonstrate this argument. However, the r-, u- and ur-methods of MSRD can generate the partition shown in Fig. 4(h).

(a) d(1,2) < d(1,13); (b) d(1,2) > d(1,13); (c) by ward; (d) by complete, average and centroid; (e) by single; (f) by weighted and median; (g) by k-means; (h) by r-MSRD

Fig. 4. (a) and (b) the coordinate graphs; (c), (d), (e), (f), (g) the partitions of dataset 3 by different methods; (h) the partition of dataset 3 by the r-MSRD method

The partitions of dataset 4 generated by k-means, the 7 methods of Hierarchical clustering and MSRD are shown in Fig. 5. Only MSRD can separate the two ellipses (see Fig. 5(d)).
Fig. 6 shows the coordinate graph and the partitions of dataset 5 by different methods. It is clear that the partition generated by MSRD is more reasonable and acceptable than the others. This example demonstrates that the MSRD can detect clusters of different size, shape and density simultaneously.
What we must stress is that the partitions generated by k-means and Hierarchical clustering in Figs. 4, 5 and 6 can also be generated by our MSRD method. This shows the universality of our method. But some partitions generated by our MSRD methods, such as the partitions in Fig. 3(d), Fig. 4(h) and Fig. 5(d), cannot be generated by k-means or Hierarchical clustering. This demonstrates the richness of our method.

Fig. 5. The partitions of dataset 4 generated by k-means (a), by the single method of Hierarchical clustering (b), by the other 6 methods of Hierarchical clustering (c), and by r-MSRD (d)

Fig. 6. The coordinate graph (a), and the four-cluster partitions of dataset 5 by k-means (b), the single method of Hierarchical clustering (c), the ward method of Hierarchical clustering (d) and ulr-MSRD (e)

5 The Clustering of the Real Dataset


In this section we test our methods by clustering two real datasets, Iris and Wine, and compare the performance with K-means and Hierarchical clustering.
The clustering results of Iris and Wine obtained by the ulr-MSRD method are given in Table 1 and Table 2, together with the clustering results obtained by K-means and the Single and Ward methods of Hierarchical clustering for comparison. In the tables, Ni is the number of data in cluster i; Nj is the number of data in the category or class j labelled by the dataset provider; Ni,j is the number of data belonging to category or class j that are partitioned into cluster i. It can be seen from Tables 1 and 2 that the partition of Iris

Table 1. The clustering results of Iris by four methods

Category | Nj | K-means (Ni, Ni,j) | Single (Ni, Ni,j) | Ward (Ni, Ni,j) | ulr-MSRD (Ni, Ni,j)
1        | 50 | 50, 50             | 50, 50            | 50, 50          | 50, 50
2        | 50 | 62, 48             |  1,  0            | 64, 49          | 47, 46
3        | 50 | 38, 36             | 99, 49            | 36, 35          | 53, 49

Table 2. The clustering results of Wine by four methods

Class | Nj | K-means (Ni, Ni,j) | Single (Ni, Ni,j) | Ward (Ni, Ni,j) | ulr-MSRD (Ni, Ni,j)
1     | 59 | 62, 59             | 174, 59           | 64, 59          | 69, 59
2     | 71 | 65, 59             |   3,  3           | 58, 58          | 55, 55
3     | 48 | 51, 48             |   1,  0           | 56, 48          | 54, 48

generated by the ulr-method of MSRD is better than that generated by K-means and the Ward method of Hierarchical clustering. However, the performances of ulr-MSRD, K-means and Hierarchical Ward in clustering Wine do not differ much.

6 Conclusion

In this paper we developed a package of clustering methods based on the MSRD and applied them to some real and synthesized datasets. The performance verified the validity of these methods and demonstrated that clustering based on the MSRD can detect both spherical and non-spherical clusters simultaneously, and has the capacity to distinguish clusters of different sizes, with different densities, having different shapes, or belonging to different manifolds. Therefore, it is more universal and richer than K-means and Hierarchical clustering.

Acknowledgments. The authors would like to thank Professor Pilian He in advance


for his helpful suggestion to improve this paper. We are grateful to the editors and
anonymous reviewers for their constructive comments. This work is supported by the Science and Technology Department of Sichuan Province, China (2009JY0038).

References
1. Jain, A.K.: Data clustering: 50 years beyond K-means. Pattern Recognition Letters 31, 651–666 (2010)
2. Fred, A., Jain, A.K.: Data clustering using evidence accumulation. In: Proc. Internat. Conf. Pattern Recognition, ICPR (2002)
3. Chapelle, O., Schölkopf, B., Zien, A. (eds.): Semi-Supervised Learning. MIT Press, Cambridge (2006)
4. Bekkerman, R., El-Yaniv, R., McCallum, A.: Multi-way distributional clustering via pairwise interactions. In: Proc. 22nd Internat. Conf. Machine Learning, pp. 41–48 (2005)
5. Filippone, M., Camastra, F., Masulli, F., Rovetta, S.: A survey of kernel and spectral methods for clustering. Pattern Recognition 41, 176–190 (2008)
6. Yu, Y., Bai, Y.S., Xu, W.H., et al.: A Clustering Method Based on the Most Similar Relation Diagram of Datasets. In: 2010 IEEE International Conference on Granular Computing, San Jose, California, August 14-16, pp. 598–603 (2010)
Study of Infrared Image Enhancement
Algorithm in Front End

Rongtian Zheng, Jingxin Hong, and Qingwei Liao

The Advanced Digital Processing Laboratory, Xiamen University,


Xiamen, Fujian, China, 361005
hjx@xmu.edu.cn

Abstract. In this paper, we propose a linear enhancement algorithm for original infrared images. The algorithm chooses the transform threshold based on a feature of infrared detectors: a majority of the normal pixels concentrate in one peak of the histogram, while the minority of non-operating pixels and some noise lie elsewhere. It needs little memory and little time. The experimental results show that the image details can be revealed effectively by this algorithm. When the sky and the earth appear in one infrared image, the grayscale range assigned to the target by common linear enhancement is not wide enough, which leads to low contrast of the target; we therefore propose an improved algorithm, segmentation enhancement. The results show that the improved algorithm is better than the common linear enhancement method.

Keywords: original infrared image, infrared detector, linear enhancement, front-end processing, grayscale.

1 Introduction
An original infrared image collected from an infrared detector has several characteristics: it contains non-operating pixels, it has low contrast, and its gray levels concentrate in a small range of the histogram, so the contrast of the target is too low to reveal the image details. In order to see the infrared image details and improve image quality, enhancing the original infrared image at the front end is necessary.
The remainder of the paper is organized as follows: in section 2, traditional
infrared image enhancement methods are briefly exposed. In section 3, the proposed
method is introduced. In section 4, we present the improved algorithm. In section 5,
we give details of the experiment results. In section 6, we present our conclusions.

2 Traditional Algorithm
Traditional infrared image enhancement usually uses histogram equalization (HE). This method smooths the histogram through a specific gray-level transform function and then revises the original image according to the new histogram. The essence of HE is to expand those pixels that have large gray-quantity probabilities to neighboring gray-level pixels and compress those pixels whose gray-quantity

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 416–422, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Study of Infrared Image Enhancement Algorithm in Front End 417

probabilities are small [1]. Because the target in an original infrared image occupies a small grayscale range, when HE is used in infrared image processing the image quality worsens considerably and we cannot recognize the target from the background, so the traditional HE method does not suit these conditions.
A linear enhancement algorithm can expand the target pixels in the original infrared image and compress, or even remove, the noise; its key is choosing the gray-level transform thresholds. Previous approaches include using histogram statistics to obtain the 1/32 and 31/32 mean values of the histogram as the transform thresholds [2], intensifying image contrast using segmented histogram nonlinear extension [3], an infrared image contrast enhancement algorithm based on the discrete stationary wavelet transform (DSWT) and a non-linear gain operator [4], and a conventional piecewise linear gray transformation based self-adaptive contrast enhancement algorithm [5]. These methods can achieve good performance in back-end processing of original infrared images, but they do not suit front-end enhancement.

3 Infrared Image Linear Enhancement Based on Features of the Infrared Detector
In this paper, we use the infrared detector UL04171 produced by ULIS [6]; its operating temperature range is -10 °C to 50 °C, its output voltage swings between 1.0 V and 4.2 V, and its responsivity is 3 mV/K, so we need 14 bits for the voltage dynamic range. Generally, the gray-level range of computer screens is 0 to 255, and human gray-level resolution is much lower. An original infrared image of objects at normal temperature collected by the infrared detector cannot be seen directly. The essence of the linear enhancement is to remove redundant pixels from the histogram of the original infrared image, then transfer the grayscale to another gray-level range between 0 and 255. The linear transform formula is as follows:

Xout = 0,                                        if Xin < X1
Xout = (Ymax - Ymin) / (X2 - X1) · (Xin - X1),   if X1 ≤ Xin ≤ X2      (1)
Xout = Ymax,                                     if Xin > X2
where Xout and Xin are the output and input of the function, Ymax and Ymin are the largest and smallest grayscales of the result image, and X2 and X1 are the maximum and minimum transform thresholds chosen from the histogram of the original infrared image; choosing X2 and X1 is the algorithm proposed in this paper. The linear enhancement function is illustrated in Fig. 1.
Suppose first that there were no non-operating pixels in the infrared detector; then the histogram of the original infrared image would look like Fig. 2. Here, we simply regard the beginning and the end of the peak as X1 and X2 in formula (1) to enhance the image, and the result will be good. But in reality an infrared detector has some non-operating pixels; the UL04171 has about 2%. The real-time histogram of an original infrared image consists of a main peak and some small peaks. The main peak comes from the majority of normal pixels; the small peaks result from the non-operating pixels and noise.
418 R. Zheng, J. Hong, and Q. Liao

Fig. 1. The linear enhancement function

Fig. 2. Ideal histogram

Fig. 3. Linear enhancement algorithm flowchart



Based on this phenomenon, we propose the linear enhancement based on the features of the infrared detector as follows:
(1). Calculate the histogram of the original infrared image.
(2). From the total number of pixels and the ratio of non-operating pixels, calculate the number of non-operating pixels.
(3). Remove half of the non-operating pixels calculated in step (2) from each side of the histogram calculated in step (1), and obtain the transform thresholds X1 and X2 of formula (1).
(4). Transform the original image according to formula (1).
The algorithm is illustrated in Fig. 3.
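The steps above amount to trimming half of the estimated non-operating pixels from each tail of the gray-level distribution. A minimal sketch, with our own naming, is shown below; sorting the pixel values is equivalent to walking the histogram inward from both ends.

```python
import numpy as np

def choose_thresholds(img, dead_ratio=0.02):
    """Estimate the number of non-operating pixels from the detector's
    known ratio (about 2% for the UL04171) and drop half of them from
    each tail of the sorted gray-level distribution; the surviving
    extremes become X1 and X2 of formula (1)."""
    s = np.sort(np.asarray(img).ravel())
    half = int(s.size * dead_ratio) // 2       # dead pixels trimmed per tail
    return int(s[half]), int(s[s.size - half - 1])
```

On a synthetic 14-bit image whose normal pixels span 1000 to 1979 and whose dead pixels sit at 0 and 16000, the returned thresholds bracket only the normal population.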

4 An Improved Algorithm: Segmentation Enhancement Based on Linear Enhancement
Generally, the performance of the linear enhancement proposed above is good enough, but when the sky and the earth appear in one infrared image at the same time, using common linear enhancement to process the image results in low contrast of the target.
Through a series of experiments, we found that when there are two main peaks in one image, formed by the target and the sky, the grayscale range of the target in the result image enhanced by the linear method is not wide enough, which leads to low contrast in the result image.
In this paper, we use an improved algorithm based on the enhancement algorithm proposed by Chen and Ji [7] to segment the image into the sky section and the earth section, and then enhance the two parts by the linear enhancement method separately. Its steps are as follows:
(1). Calculate the histogram of the original infrared image.
(2). From the total number of pixels and the ratio of non-operating pixels, calculate the number of non-operating pixels.
(3). Remove half of the non-operating pixels from each side of the histogram calculated in step (1), and obtain the transform thresholds X1 and X2 of formula (1).
(4). Divide the histogram data between X1 and X2 into two segments at a threshold T obtained by the Otsu method.
(5). Enhance the two parts by the linear enhancement method separately.

If the original infrared image is enhanced by the method above with both parts given the full output range, the grayscale of the target in the result image reaches the maximum and the sky has the same grayscale as the target, which produces a boundary between the target and the sky. If we distribute the grayscale unevenly instead, for example assigning the full grayscale range to the target and little or even no grayscale range to the sky, there is no such boundary in the result image, and the quality of the result image is better than with common linear enhancement.
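Steps (4) and (5) can be sketched as below. This is our own illustration: the Otsu split and the per-segment stretch follow the steps above, but the choice of which segment is "earth" and which is "sky", and the output ranges (0-255 and 0-100, matching the Fig. 7(d) configuration), are illustrative assumptions, as is all naming.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Plain Otsu: the threshold maximising between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:t] * centers[:t]).sum() / w0
        m1 = (p[t:] * centers[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[t]
    return best_t

def segmented_enhance(img, x1, x2, earth_range=(0, 255), sky_range=(0, 100)):
    """Split [x1, x2] at the Otsu threshold T and stretch the two
    segments onto separate output ranges (here: lower segment gets the
    full range, upper segment a narrower one)."""
    img = np.clip(np.asarray(img, dtype=np.float64), x1, x2)
    t = otsu_threshold(img.ravel())
    out = np.empty_like(img)
    for lo, hi, (ylo, yhi) in [(x1, t, earth_range), (t, x2, sky_range)]:
        m = (img >= lo) & (img <= hi)
        out[m] = ylo + (yhi - ylo) * (img[m] - lo) / max(hi - lo, 1e-9)
    return out.astype(np.uint8)
```

With this uneven distribution the lower segment spans the whole 0-255 output range while the upper segment stays within 0-100, so the two populations never share a bright boundary level.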

5 Experiment Result
Fig. 4 shows the original infrared image transformed to grayscale 0-255 and the histogram of the original infrared image. From this figure, we cannot see any image information when the original infrared image is given no enhancement. In the histogram of the original infrared image, a majority of pixels concentrate in a main peak.

Fig. 5 is the result image enhanced by the proposed method. From this figure, we
can see the image details appearing.

Fig. 4. Original infrared image and the histogram of original infrared image

Fig. 5. The result image enhanced by the proposed method

Fig. 6 is the result image enhanced by the linear enhancement method when the sky and the earth appear in one infrared image at the same time; we can see that the contrast of the earth in the result image is low.

Fig. 6. The result image enhanced by the linear enhancement method



Fig. 7 shows the result images enhanced by the linear enhancement algorithm and its improved version. Fig. 7(a) is the result image enhanced by the common linear algorithm; the image is too white and the contrast of the earth is low. Fig. 7(b) is enhanced by the improved linear algorithm with the earth and the sky both assigned the same grayscale range 0-255; compared with (a), the image details are increased greatly and the composition of the image is better. Fig. 7(c) is enhanced by the improved linear algorithm with the earth assigned grayscale 0 to 255 but the sky assigned no grayscale range (0); the image details are as good as in Fig. 7(b), and there is no pitting and no boundary between the target and the sky, so the visual effect is improved. Fig. 7(d) is enhanced by the improved linear algorithm with the earth assigned grayscale 0 to 255 and the sky assigned the grayscale range 0 to 100; its details are the best of these four result images, with no pitting and no boundary between the target and the sky, while keeping the sky information.

Fig. 7. The result images enhanced by the linear enhancement algorithm and its improved algorithm

6 Conclusion
This paper proposes an algorithm for choosing the transform threshold for linear enhancement based on the features of the infrared detector. The algorithm is easy to implement and fast, and the experimental results show that the image details can be revealed effectively by it. Since the grayscale range of the target in the result image enhanced by the linear method is not wide enough when the sky and the earth appear in one infrared image at the same time, which leads to low contrast of the result image, we propose an improved algorithm that enhances the original infrared image while both keeping background information and enlarging the grayscale range of the target. The result images show that this improved algorithm achieves good performance.

Acknowledgment. This work was supported by the Major Science and Technology
special project of Fujian Province, P.R.China (No.2009HZ0003-1).

References
1. Zhang, Y.: Image Processing, 2nd edn. Tsinghua University Press, Beijing (2006)
2. Wang, Y.H., Wang, D., Hu, Y.M., Zhang, T.: The FPGA-based Real-Time Image Linear Contrast Enhancement Algorithm. Microcomputer Information, China (2007)
3. Wu, Z., Wang, Y.: An Image Enhancement Algorithm Based on Histogram Nonlinear Transform. Acta Photonica Sinica, China (2010)
4. Zhang, C.-J., Yang, F., Wang, X.-D., Zhang, H.-R.: An Efficient Nonlinear Algorithm for Contrast Enhancement of Infrared Image. In: Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, August 18-21 (2005)
5. Li, Y., Zhou, J., Ding, W.: A Novel Contrast Enhancement Algorithm for Infrared Laser Images. IEEE, Los Alamitos (2009)
6. ULIS: UL 04 17 1 NTC05011-1 Issue 1, 640 x 480 LWIR, 29 11 05
7. Chen, Z., Ji, S.: Enhancement Algorithm of Infrared Images based on Otsu and Plateau Histogram Equalization. Laser & Infrared, China (2010)
Influence of Milling Conditions on the Surface Quality in
High-Speed Milling of Titanium Alloy

Xiaolong Shen, Laixi Zhang, and Chenggao Ren

Hunan Industry Polytechnic, Changsha, 410208, China


ShenXL64@163.com, Lxzhzh@yahoo.com.cn, Renchenggao@yahoo.com.cn

Abstract. This study focused on the machined surface quality of a titanium alloy under different milling conditions, including milling speed, tool rake angle and cooling method. The surface quality was investigated in terms of residual stress and surface roughness. The results show that compressive residual stresses are generated on the machined surface under all milling conditions. The compressive residual stresses in both directions decreased and the surface roughness increased with increasing milling speed. The compressive residual stresses increased with increasing tool rake angle. The lowest surface roughness was obtained when the rake angle was 8°. Under the condition of dry milling, the highest compressive residual stresses were obtained, approximately 350 MPa. The highest surface roughness was obtained when the oil mist coolant was used. Water cooling was the best cooling method with an uncoated cemented carbide tool in high-speed milling of the titanium alloy.

Keywords: High-speed milling, Titanium alloy, Milling condition, Surface roughness, Residual stress.

1 Introduction
Titanium alloys have been widely used in aerospace and other industrial applications because of their high mechanical resistance at low density and their excellent corrosion resistance, even at high temperatures. However, titanium alloys are difficult to machine due to their high-temperature strength, relatively low modulus of elasticity, low thermal conductivity and high chemical reactivity [1-2]. Conventional milling speeds range from 30 to 100 m min-1 when sintered carbide tools are used in the machining of titanium alloys, resulting in low productivity [3].
High-speed machining is widely appreciated in industry for its high material removal rate, low milling force, high machining accuracy and high surface quality. All these advantages have led to the application of high-speed machining technology to the machining of titanium alloys [4].
Many researchers have studied the surface quality of milled titanium alloys. Che-Haron [5] investigated the surface quality of rough turning of the titanium alloy Ti6Al4V with uncoated carbide. The results showed that the machined surface experienced microstructure alteration and an increase in microhardness in the top white layer of the surface. Machined surfaces showed severe plastic deformation and hardening after prolonged machining time with worn tools, especially when machining
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 423–429, 2011.
© Springer-Verlag Berlin Heidelberg 2011
424 X. Shen, L. Zhang, and C. Ren

under dry milling conditions. A metallurgical analysis of chips obtained from high-speed machining of the titanium alloy Ti6Al4V was performed by Puerta Velásquez [6]. The titanium phase was observed in all chips for all tested milling speeds. No evidence of phase transformation was found in the shear bands. Ezugwu [7] investigated the effect of machining parameters on the residual stress of the titanium alloy IMI-834 in milling. The residual stresses were found to be compressive in nature at milling speeds of 11~56 m min-1, and a linear relationship could not explain the variation of the residual stresses with respect to the milling parameters. Ge [8] reported experimental evidence that the surface roughness was less than Ra 0.44 μm when milling a gamma titanium aluminide alloy at speeds of 60~240 m min-1. The workpiece surface had a maximum microhardness of approximately 600 HV (0.100), and the maximum hardened layer was confined to 180 μm below the surface. Che-Haron's study showed that the surface roughness values recorded were typically less than 1.5 μm Ra when milling a gamma titanium aluminide alloy at high speed. Microhardness evaluation of the subsurface indicated a hardened layer to a depth of 300 μm. Initial residual stress analysis showed that the surface contained compressive stresses of more than 500 MPa. Tool flank wear and milling speed have the greatest effect on residual stress. Ezugwu investigated the chip formation mechanism and surface quality as well as the milling process characteristics of titanium alloy in high-speed milling. A high milling speed with more milling teeth is beneficial to the reduction of milling forces, enlargement of the machining stability region, suppression of the temperature increase, and improvement of fatigue resistance as well as surface roughness.
The aim of this paper is to investigate the influence of milling speed, tool rake angle and cooling method on surface quality in high-speed milling of a titanium alloy. The paper provides an experimental and theoretical basis for optimizing high-speed milling parameters and controlling the surface quality of titanium alloy. Consequently, it can lead to machining procedures that improve component fatigue life and machinability.

2 Experimental Procedure
2.1 Experimental Material
The workpiece material used in all the experiments was the alpha-beta titanium alloy TC11. The nominal chemical composition of the workpiece material conforms to the following specification (wt.%): 6.42 Al; 3.29 Mo; 1.79 Zr; 0.23 Si; 0.025 C; 0.096 O; 0.003 H; 0.077 Fe; 0.004 N; balance Ti. The mechanical properties of the tested material at room temperature and high temperature are shown in Table 1. The workpiece dimensions were 80 mm × 50 mm × 30 mm.

Table 1. Mechanical properties of TC11 titanium alloy

Room temperature mechanical properties:
  Tensile strength σb/MPa: 1128
  Yield strength σ0.2/MPa: 1030
  Elongation δ/%: 12
  Shrinkage ψ/%: 35
  Impact value αk/J·cm-2: 44.1

High temperature mechanical properties:
  Test temperature/°C: 500
  Tensile strength σb/MPa: 765
  Rupture strength σ100/MPa: 667
Influence of Milling Conditions on the Surface Quality in High-Speed Milling 425

2.2 Milling Conditions

All the machining experiments were carried out on a three-axis Mikron HSM 800 high-speed milling center with an iTNC 530 controller. The milling tools used were K44 uncoated cemented carbide milling cutters with four teeth. The diameter of the cutter was 10 mm, and the helix angle and relief angle were 30° and 10°, respectively. The milling mode was down milling.
Three milling speeds were selected, 376.8 m/min, 471 m/min and 565.2 m/min, while the feed per tooth, milling depth and milling width were kept constant at 0.05 mm/tooth, 0.2 mm and 10 mm, respectively. Meanwhile the tool rake angle used was 14°. The water coolant was used.
Three tool rake angles, 4°, 8° and 14°, were used to investigate the influence of rake angle on surface quality. Meanwhile the milling speed, feed per tooth, milling depth and milling width were kept constant at 251 m/min, 0.05 mm/tooth, 0.2 mm and 10 mm, respectively. The coolant used was oil mist.
In addition, three cooling methods, dry, water and oil mist, were compared.

2.3 Measurement of Surface Quality

After each processing step, the surface residual stress was measured by X-ray diffraction. The residual stress was measured at three locations in both the feed and stepover directions, and then the average value was calculated. The surface roughness of the machined surface after each test was measured using a contact-type profilometer (Taylor Hobson Form Talysurf 120). The evaluation length was set at 5.6 mm and the sampling length was fixed at 0.8 mm. The instrument was calibrated using a standard calibration block prior to the measurements. The surface roughness was measured by positioning the stylus in the feed direction. The surface roughness was taken at five locations, repeated twice at each point on the face of the machined surface, and then the average value was obtained.

3 Results and Discussion

3.1 Influence of Milling Speed on Surface Quality

Fig. 1 shows the influence of milling speed on the surface residual stresses. It is observed that all surface residual stresses are compressive, and increasing the milling speed tends to decrease the compressive residual stresses on the surface in both directions. The results show that the trend of the residual stresses is from compressive toward tensile with increasing milling speed. This trend is expected because as milling speed increases, machining becomes more adiabatic, so the temperature rise softens the metal, thus reducing the milling forces. Fig. 2 shows the influence of milling speed on surface roughness. It can be seen that an increase of milling speed causes an increase of surface roughness. This is probably due to rapid tool wear at the milling edge close to the nose.

Fig. 1. The influence of milling speed on residual stress

Fig. 2. The influence of milling speed on surface roughness

3.2 Influence of Rake Angle on Surface Quality

Fig. 3 shows the influence of rake angle on the surface residual stresses. It is observed
that all the surface residual stresses are compressive, and the magnitude of the
compressive stresses on the surface increases in both directions with increasing rake
angle. Fig. 4 shows the influence of rake angle on surface roughness. It can be seen that
the surface roughness decreases slightly when the rake angle is changed from 4° to 8°,
but increases markedly when the rake angle is increased from 8° to 14°.
Influence of Milling Conditions on the Surface Quality in High-Speed Milling 427

Fig. 3. The influence of rake angle on residual stress [plot: residual stress (MPa, 0 to -300) vs. rake angle (4-14°), x and y directions]

Fig. 4. The influence of rake angle on surface roughness [plot: surface roughness Ra (0.3-0.8 μm) vs. rake angle (4-14°)]

3.3 Influence of Cooling Method on Surface Quality

Fig. 5 shows the influence of the cooling method on the surface residual stresses. It is
observed that all the surface residual stresses are compressive, and the highest
compressive stresses on the surface in both directions appeared under the dry milling
condition; the magnitude of the highest compressive stress is about 350 MPa in the feed
direction. The lowest compressive stresses on the surface in both directions appeared
under the water cooling condition. Fig. 6 shows the influence of the cooling method on
surface roughness. It can be seen that the highest surface roughness appeared under the
oil mist cooling condition and the lowest under the water cooling condition, where the
surface roughness is only about 0.271 μm. Considering residual stresses and surface
roughness together, the best cooling method is water cooling.

Fig. 5. The influence of cooling method on residual stress [plot: residual stress (MPa, 0 to -350) for dry, water and oil mist cooling, x and y directions]

Fig. 6. The influence of cooling method on surface roughness [plot: surface roughness Ra (0-0.8 μm) for dry, water and oil mist cooling]

4 Conclusion

An experimental investigation of the influence of milling conditions (milling speed, tool
rake angle and cooling method) on the surface quality, in terms of residual stress and
surface roughness, in high-speed milling of a titanium alloy with an uncoated cemented
carbide tool was carried out. The results show that the milling speed, rake angle and
cooling method have an important influence on residual stress and surface roughness.
The compressive residual stresses are higher in the feed direction than in the stepover
direction. The best surface quality is obtained when the rake angle is 8°, and water
cooling is the best cooling method.

Acknowledgements. This project is supported by the Scientific Research Fund of Hunan
Provincial Education Department (No. 10C0113) and the 2010 College Research Project of
Hunan Industry Polytechnic (Grant No. GYKYZ201004).

References
[1] Shen, X.L., Zhang, L.X., Ren, C.G., Zhou, Z.X.: Research on Design and Application of
Control System in Machine Tool Modification. Adv. Mater. Res. 97-101, 2053-2057 (2010)
[2] Shen, X.L., Luo, Y.X., Zhang, L.X., Long, H.: Natural frequency computation method of
nonlocal elastic beam. Adv. Mater. Res. 156-157, 1582-1585 (2011)
[3] Su, Y., He, N., Li, L., et al.: An experimental investigation of effects of cooling/lubrication
conditions on tool wear in high-speed end milling of Ti-6Al-4V. Wear 261, 760-766 (2006)
[4] Shen, X.L., Zhang, L.X., Long, H., Zhou, Z.X.: Analysis and Experimental Investigation of
Chatter Suppression in High-speed Cylindrical Grinding. Appl. Mech. Mater. 34-35,
1936-1940 (2010)
[5] Che-Haron, C.H., Jawaid, A.: The effect of machining on surface integrity of titanium alloy
Ti-6% Al-4% V. J. Mater. Proc. Technol. 166, 188-192 (2005)
[6] Puerta Velásquez, J.D., Bolle, B., Chevrier, P., et al.: Metallurgical study on chips obtained
by high speed machining of a Ti-6 wt.% Al-4 wt.% V alloy. Mater. Sci. Eng. A 452-453,
469-474 (2007)
[7] Ezugwu, E.O., Wang, Z.M.: Titanium alloys and their machinability: a review. J. Mater.
Proc. Technol. 68, 262-274 (1997)
[8] Ge, Y.F., Fu, Y.C., Xu, J.H.: Experimental Study on High Speed Milling of γ-TiAl Alloy.
Key Eng. Mater. 339, 6-10 (2007)
Molecular Dynamics Simulation Study on the
Microscopic Structure and the Diffusion Behavior of
Methanol in Confined Carbon Nanotubes

Hua Liu, Xiaofeng Yang*, Chunyan Li, and Jianchao Chen

Department of Physics, North University of China, Taiyuan,
Shanxi Province 030051, People's Republic of China
Mobile: 13453153691
liuhua8486200@163.com,
yangxf@nuc.edu.cn

Abstract. Molecular dynamics simulation was performed to study the conformation
and diffusion properties of methanol molecules in (8, 8), (9, 9) and (10, 10)
single-walled carbon nanotubes under ambient conditions. We present the radial
distribution functions and the diffusion of methanol for the different pipe
diameters. The results show that the methanol molecules confined inside the
(10, 10) carbon nanotubes have an extremely highly ordered structure, while the
methanol molecules confined in tubes of smaller diameter are mostly clustered
near the central axis. The diffusion of the methanol molecules inside the carbon
nanotubes shows a strong dependence on the pipe diameter and is of the Knudsen
type.

Keywords: confined, carbon nanotubes, methanol molecules, molecular dynamics
simulation.

1 Introduction
In recent years, the technique of molecular dynamics (MD) simulation has played an
important role in studying the properties of confined molecules [1,2]. Water molecules
confined in carbon nanotubes (CNTs) have become a focus of research [3,4]; in
particular, hydrogen bonding has a crucial influence on the physical and chemical
properties of the water molecules [5,6]. Like water molecules, methanol molecules can
form hydrogen bonds. Yu et al. [7] investigated the microstructural and dynamic
properties of methanol by MD simulation and showed that the average intermolecular
distance and the diffusion coefficients increase with temperature, and that the
randomness of the molecular distribution in the carbon nanotube (CNT) also increases.
Sun et al. [8] also examined the microstructure of methanol; their results show that
each methanol molecule forms on average approximately two hydrogen bonds, which
suggests that there is not only one preferred orientation of methanol molecules with
respect to each other. In this paper, we explore the dependence of the microscopic
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 430-436, 2011.
Springer-Verlag Berlin Heidelberg 2011
Molecular Dynamics Simulation Study on the Microscopic Structure 431

structure of the methanol on the nanotube diameter and observe whether similar
phenomena appear in the distribution. We study the RDFs and the diffusion properties
of methanol in CNTs of different diameters by MD simulation.

2 Methods
We roll a graphite sheet in the arm-chair mode to construct nanotubes of different
diameters [9,10]. The (8, 8), (9, 9), and (10, 10) single-walled CNTs were chosen;
their diameters are 1.085 nm, 1.22 nm, and 1.356 nm, respectively, according to the
formula d = (√3·a/π)·(m² + mn + n²)^(1/2), where a is the C-C distance, fixed at
0.142 nm. The CNTs, each of which contains 16 methanol molecules, are 7.739 nm long
throughout our simulations.
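The diameter formula can be checked numerically; the short sketch below (the helper name is ours) reproduces the three diameters quoted above:

```python
import math

def cnt_diameter_nm(m, n, a=0.142):
    """Diameter of an (m, n) carbon nanotube,
    d = (sqrt(3)*a/pi) * sqrt(m^2 + m*n + n^2), with a the C-C bond length in nm."""
    return math.sqrt(3) * a / math.pi * math.sqrt(m * m + m * n + n * n)

for m in (8, 9, 10):
    print(f"({m}, {m}) CNT: d = {cnt_diameter_nm(m, m):.3f} nm")
```

For the armchair tubes (m = n) this gives 1.085, 1.220 and 1.356 nm, consistent with the values used throughout the paper.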
A considerable number of potential models have been developed to describe the
interactions of methanol molecules, namely, TIPS [11], OPLS [12], the Haughney rigid
three-site model [13], the flexible model [14], and the polarizable model [15]. The OPLS model
can accurately predict the properties of methanol molecule at ambient temperature.
The intermolecular interaction between a pair of methanol molecules is given by

u(r_i, r_j) = q_i·q_j / r_ij + 4ε_ij [ (σ_ij/r_ij)¹² - (σ_ij/r_ij)⁶ ]    (1)

Here i and j label the sites in the methanol molecules, ε_ij and σ_ij are the
Lennard-Jones energy and radius parameters, respectively, r_ij is the distance between
sites i and j, and q_i is the charge at site i. The Lennard-Jones energy and radius
parameters of the cross terms are calculated on the basis of the Lorentz-Berthelot
combining rules: σ_ij = (σ_i + σ_j)/2 and ε_ij = (ε_i·ε_j)^(1/2). The force field
parameters are listed in Table 1. In the methanol model the O-H and O-Me distances are
0.912×10⁻¹⁰ m and 1.430×10⁻¹⁰ m, respectively, and the H-O-Me angle is 108.5°.
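The combining rules can be applied directly to the Table 1 parameters; a minimal sketch (the helper name is ours), shown here for the C(CNT)-O cross term:

```python
import math

def lorentz_berthelot(eps_i, sig_i, eps_j, sig_j):
    """Cross-term LJ parameters: geometric mean for epsilon, arithmetic mean for sigma."""
    return math.sqrt(eps_i * eps_j), 0.5 * (sig_i + sig_j)

# parameters from Table 1: (epsilon in kJ/mol, sigma in nm)
C_CNT = (0.293, 0.355)
O = (0.711, 0.307)

eps_co, sig_co = lorentz_berthelot(*C_CNT, *O)
print(f"C(CNT)-O cross term: eps = {eps_co:.3f} kJ/mol, sigma = {sig_co:.3f} nm")
```

The resulting σ = 0.331 nm is the cross-term radius used later in the discussion of the radial density maximum.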
The simulations were conducted in the canonical ensemble (NVT) at a temperature of
298 K with a Nose-Hoover thermostat. The long-range electrostatic interactions were
treated with the Ewald summation method. All simulations were carried out with an
integration time step of 0.5 fs and a cutoff distance of 1.0 nm for the short-range
interactions. The various equilibrium and dynamical properties were calculated over a
period of 360 ps, after the systems were equilibrated for 140 ps.

Table 1. Potential model parameters

Atom symbol    ε/(kJ·mol⁻¹)   σ/nm    Partial charge (e)
C (CNT)        0.293          0.355    0.000
CH3            0.866          0.376    0.4246
O              0.711          0.307   -1.1215
H              0              0        0.6969
432 H. Liu et al.

3 Results and Discussion


The radial distribution function (RDF) can well depict the microscopic structure of a
system. The O-H and O-O radial distribution functions, g_O-H(r) and g_O-O(r), for the
different diameters at 298 K are shown in Fig. 1 and Fig. 2.

Fig. 1. O-H radial distribution functions [plot: g(r) vs. r (0-1.0 nm) for d = 1.085, 1.22 and 1.356 nm]

Fig. 2. O-O radial distribution functions [plot: g(r) vs. r (0-1.0 nm) for d = 1.085, 1.22 and 1.356 nm]

As can be seen from Fig. 1 and Fig. 2, the first peaks in the O-H radial distribution
functions show a similar trend to those in the O-O radial distribution functions.
Furthermore, the height of the first peak of the radial distribution functions
increases, and that of the first minimum decreases, with larger diameters. The
positions of the first peaks remain almost unchanged with the size of the diameter,
but the positions of the first minima shift to the right with increasing diameter.
This result indicates that the hydrogen-bonding structure of methanol is enhanced
significantly by increasing the diameter. The aggregation densities of the
oxygen-hydrogen and oxygen-oxygen bonds increase with larger diameters; the densest
regions of the O-H and O-O radial distribution functions are gathered at 0.20 nm and
0.29 nm, respectively. This suggests that the structure gradually changes from loose
disorder to a compact, regular arrangement. One of the most important curves is the
O-H radial distribution function. In this section the influence of the different
diameters on the hydrogen-bond structure is investigated by calculating the average
coordination numbers. We use the following geometric definition of a hydrogen bond
between methanol molecules: the oxygen-oxygen distance is less than 0.35 nm, the
hydrogen-oxygen distance is less than 0.245 nm, and the oxygen-oxygen-hydrogen angle
is less than 30°. The number of hydrogen bonds and the coordination
numbers are related to each other. The coordination numbers n_OH(r) are obtained by
integrating the corresponding O-H pair distribution function g_OH(r):
n_OH = 4πρ ∫₀^(r_min) g_OH(r) r² dr    (2)

In eq. (2), ρ is the macroscopic number density of the methanol molecules. The
integration is usually restricted to the position of the first minimum in the pair
distribution function; the O-H radial distribution functions all have their first
trough at 0.284 nm. An MD simulation of water confined in CNTs was performed by
Gordillo and Marti [16]. It is seen that the hydrogen bonds of the confined
water were damaged by the confinement; this is reflected in a decrease of the average
number of hydrogen bonds and of the coordination numbers among the water molecules.
Since methanol molecules form a similar hydrogen-bonded structure, it is of interest
whether they exhibit the same phenomenon.
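The geometric hydrogen-bond criterion above can be expressed as a small predicate; a sketch with the cutoffs from the text (the example distances and angles are made up):

```python
def is_hydrogen_bond(d_oo_nm, d_oh_nm, angle_ooh_deg):
    """Geometric H-bond criterion used in the text: O-O distance < 0.35 nm,
    H-O distance < 0.245 nm, and O-O-H angle < 30 degrees."""
    return d_oo_nm < 0.35 and d_oh_nm < 0.245 and angle_ooh_deg < 30.0

# a typical hydrogen-bonded pair vs. a pair that is too far apart
print(is_hydrogen_bond(0.28, 0.19, 12.0))   # True
print(is_hydrogen_bond(0.40, 0.30, 12.0))   # False
```

In an MD analysis this predicate would be evaluated for every O-H pair within the cutoff at each saved frame, and the per-molecule average gives the hydrogen-bond count reported in Table 2.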

Table 2. Coordination numbers in different diameters of carbon nanotubes

d/nm    1.085   1.22    1.356
N_OH    2.28    2.32    2.56

The diameters of the CNTs and the corresponding coordination numbers are listed in
Table 2. The coordination numbers increase with larger diameters, and the number of
hydrogen bonds follows the same law.

Fig. 3. The radial density distributions of sites of methanol molecules in the (10, 10)
CNT obtained from MD simulation [plot: <ρ> vs. r (0-0.8 nm), with the wall of the CNT
marked]; the inset shows the longitudinal section of the methanol molecules in the CNT

In Fig. 3 the coordinate origin represents the central axis of the carbon nanotube.
The density maximum lies at a distance of 0.36 nm from the pipe wall. The confinement
effect on the methanol weakens with larger CNT diameters: more and more methanol
molecules, which connect to form a ring, tend to locate near the tube wall, while few
molecules remain near the tube axis, away from the wall.
from the wall. In eq. (1), the Lennard-Jones energy and radius parameters of the
cross-interactions are calculated on the basis of the Lorentz-Berthlot combining rules.
Electrostatic interactions can be ignored on account of low-energy. The derivative of
the Lennard-Jones energy reach conclusion: r = 2 .According to the potential
6

model parameters of the methanol molecule: = 0.331nm. The value of the radius is
0.36nm.The consequence corresponds with the simulated result (Fig 3). We can
conclude that the interaction energy of the methanol molecule which is away from the
wall at a distance of 0.36nm is lowest. The molecules which locate in a relatively
stable state are gathered here. Inner layer represents methanol and outer layer
indicates CNTs as shown in the illustration of Fig 3.The methanol in (8, 8) (9, 9)
434 H. Liu et al.

CNTs gathered near the central axis of the rube due to the strength of the Lennard-
Jones energy. The limited effect of molecules obvious which appear mission-shaped
or liner arrangement
As can be seen from Fig. 4, the heights of the first characteristic peaks decrease
monotonically from the CH3-CNT to the O-CNT to the H-CNT radial distribution
functions: the CH3-CNT peak is the highest and the H-CNT peak the lowest. This can
probably be attributed to the space restriction of the confined pipe diameter. The
characteristic peaks of the (10, 10) CNT are lower than the corresponding peaks of the
(8, 8) and (9, 9) CNTs, which means that the degree of order in the former is
generally lower than in the latter; this is because the tube space increases and the
confinement effect weakens. In order to better understand the relationship between the
diffusion coefficient and the channel diameter, the diffusion coefficient was obtained
from the MD simulations by two methods.

Fig. 4. The first characteristic peaks of the CH3-CNT, O-CNT and H-CNT radial
distribution functions for the different diameters [plot: g(r) vs. r (1.05-1.40 nm)]

Fig. 5. The mean square displacement (MSD) of methanol molecules in the axial
direction in the CNTs [plot: MSD (nm²) vs. t (0-500 ps) for d = 1.085, 1.22, 1.356 nm]

In an MD simulation the diffusion coefficient can be obtained either from the axial
velocity autocorrelation function via the Green-Kubo relations or from the mean square
displacement via the Einstein expression. We use both routes to calculate the
self-diffusion coefficient. The self-diffusivity is conveniently defined using the
Einstein expression [17]:
D_S = lim_(t→∞) (1/(6Nt)) Σ_(i=1)^(N) ⟨|r_i(t) - r_i(0)|²⟩    (3)

Or, equivalently, using the generalized Green-Kubo relations,

D = (1/(3N)) Σ_(i=1)^(N) ∫₀^∞ ⟨u_i(t)·u_i(0)⟩ dt    (4)

Here, r_i(t) and u_i(t) are the position and velocity of the tagged particle,
respectively. With the Green-Kubo relation the velocity autocorrelation function
decays to zero within a short time, but at longer times numerical instability makes
the curve oscillate; eq. (3) does not suffer from this problem, so in this paper we
adopt eq. (3) to calculate the diffusion coefficients of the methanol. In the pores
considered, the methanol molecules collide with the wall, and the particle-particle
collision rate is lower than the particle-wall collision rate. Diffusion in this
regime, calculated with the following formula, is called Knudsen diffusion:
D_K = (2/3) · r · (8RT/(πM))^(1/2)    (5)

where r denotes the pore radius, R the gas constant, T the temperature, and M the
molar mass of the diffusing species.

Table 3. Diffusion coefficients of methanol calculated with the Einstein method and
with the Knudsen equation

d/nm               1.085   1.22   1.356
D_S/10⁻⁹ m²s⁻¹     4.63    5.61   6.30
D_K/10⁻⁹ m²s⁻¹     4.88    5.73   6.34

Table 3 shows that the diffusion coefficients of the methanol molecules obtained by
the two approaches increase gradually with diameter: the smaller the diameter, the
smaller the diffusion coefficient. The methanol molecules in the CNT form an ordered
structure because of the methanol-methanol and methanol-wall interactions, which
hinder the spread of the methanol molecules in the CNT. The diffusion coefficients
increase as the collision frequency between the methanol and the pipe wall decreases
with larger diameters. The average diffusion coefficient obtained with eq. (3) is
5.513×10⁻⁹ m²/s, and that calculated with eq. (5) is 5.65×10⁻⁹ m²/s; the relative
error between the two methods is 2.43%. Therefore, the diffusion of the methanol
molecules in the different diameters of the CNTs is Knudsen diffusion.

4 Conclusions
We have carried out extensive research on the diffusion of methanol molecules in
CNTs by molecular dynamics simulation, offering a microscopic description of the
relevant static and dynamic properties. It is found that the confinement effect plays
a critical role in the RDF and the diffusion coefficient of the methanol. An
enhancement of the number of hydrogen bonds and of the coordination numbers of the
methanol molecules with increasing diameter has been deduced from the RDFs; the
coordination numbers of the methanol are shown in Table 2. Furthermore, the diffusion
of the methanol molecules inside the carbon nanotubes shows a strong dependence on
the pipe diameter and is of the Knudsen type.

References
1. Zhang, X.R., Wang, W.C.: Journal of Chemistry 60, 1396 (2002)
2. Lv, L.H., Wang, Q., Li, Y.C.: Journal of Chemistry 61, 1232 (2003)
3. Hummer, G., Rasaiah, J.C., Noworyta, J.P.: Nature 414, 188 (2001)
4. Wang, J., Zhu, Y., Zhou, J., Lu, X.H.: Journal of Chemistry 61, 1891 (2003)
5. Netz, P.A., Starr, F.W., Stanley, H.E., et al.: Static and dynamic properties of stretched
water. J. Chem. Phys. 115, 344-348 (2001)
6. Netz, P.A., Starr, F.W., Stanley, H.E., et al.: Relation between structural and dynamical
anomalies in supercooled water. Physica A 314, 470-476 (2002)
7. Yu, L., Tang, Y., Liu, J., Liu, J.: Molecular Dynamics Simulation of Liquid Methanol.
Chemical Industry and Engineering 7(26), 338-341 (2009)
8. Sun, W., Chen, Z., Huang, S.Y.: Molecular dynamics simulation of the microstructure of
liquid methanol. 11(33), 51-53 (2005)
9. Ayappa, K.G.: Langmuir 14, 880 (1998)
10. Zhang, F.J.: Chem. Phys. 111, 9082 (1999)
11. Jorgensen, W.L.: Microstructure and Diffusive Properties of Liquid Methanol. J. Am.
Chem. Soc. 102, 543-549 (1980)
12. Jorgensen, W.L.: Optimized intermolecular potential functions for liquid alcohols. J. Phys.
Chem. 90, 1276-1284 (1986)
13. Haughney, M., Ferrario, M., McDonald, I.R.: Molecular dynamics simulation of liquid
methanol. J. Phys. Chem. 91, 4934-4940 (1987)
14. Haughney, M., Ferrario, M., McDonald, I.R.: Molecular dynamics simulation of liquid
methanol with a flexible three-site model. J. Phys. Chem. 91, 4334-4341 (1987)
15. Skaf, M.S., Fonseca, T., Ladanyi, B.M.: Wave vector dependent dielectric relaxation in
hydrogen-bonding liquids: a molecular dynamics study of methanol. J. Chem. Phys. 98,
8929-8945 (1993)
16. Gordillo, M.C., Marti, J.: Chem. Phys. Lett. 329, 341 (2000)
17. Allen, M.P., Tildesley, D.J.: Computer Simulation of Liquids. Oxford University
Press, Oxford (1987)
Spoken Emotion Recognition Using Radial Basis
Function Neural Network

Shiqing Zhang1, Xiaoming Zhao2, and Bicheng Lei1


1 School of Physics and Electronic Engineering, Taizhou University,
318000 Taizhou, China
2 Department of Computer Science, Taizhou University,
318000 Taizhou, China
{tzczsq,tzxyzxm,leibicheng}@163.com

Abstract. Recognizing human emotion from speech signals, i.e., spoken


emotion recognition, is a new and interesting subject in artificial intelligence
field. In this paper we present a new method of spoken emotion recognition
based on radial basis function neural networks (RBFNN). The acoustic features
related to human emotion expression are extracted from speech signals and then
fed into RBFNN for emotion classification. The performance of RBFNN on
spoken emotion recognition task is compared with several existing methods
including linear discriminant classifiers (LDC), K-nearest-neighbor (KNN), and
C4.5 decision tree. The experimental results on emotional Chinese speech
corpus demonstrate the promising performance of RBFNN.

Keywords: Spoken emotion recognition, Feature extraction, Radial basis function
neural networks.

1 Introduction
Affective computing [1], which aims at understanding and modeling human emotions,
is currently a very active topic within the engineering community. Speech is one of
the most powerful, natural and immediate means for human beings to communicate
their emotions, and it is thus a main vehicle of human emotion expression.
Recognizing human emotions from speech signals, called spoken emotion
recognition, has attracted extensive research interests within artificial intelligence
field due to its important applications to human-computer interaction [2], call centers
[3], and so on.
A typical spoken emotion recognition system comprises two parts: feature extraction
and emotion classification. Feature extraction extracts the emotion-relevant features
from speech signals. Emotion classification maps feature vectors onto emotion classes
through a classifier learned from data examples. After feature extraction, the
accuracy of emotion recognition relies heavily
on the use of a good pattern classifier. So far, the widely used emotion classification
methods, such as linear discriminant classifiers (LDC), K-nearest-neighbor (KNN),
C4.5 decision tree, have been applied for spoken emotion recognition [4-7]. Radial
basis function neural networks (RBFNN) [8, 9], as a representative neural network, have

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 437-442, 2011.
Springer-Verlag Berlin Heidelberg 2011
438 S. Zhang, X. Zhao, and B. Lei

become a well-known method for classification; their main advantages are computational
simplicity, backed by well-developed mathematical theory, and robust generalization,
powerful enough for real-time, real-life tasks. Motivated by the little attention paid
so far to the performance of RBFNN on the spoken emotion recognition task, in this
study we employ RBFNN to conduct emotion recognition experiments on Chinese speech.
This paper is structured as follows. The basic principle of RBFNN for
classification is reviewed briefly in Section 2. The emotional speech corpus and
feature extraction are described in Section 3. Section 4 analyzes the experiment
results. Finally, the conclusions are given in Section 5.

2 Radial Basis Function Neural Networks for Classification


When used for classification, a radial basis function neural network (RBFNN) is a
three-layer feed-forward network that consists of one input layer, one hidden layer
and one output layer, as shown in Fig. 1. The input layer receives the input data;
each input neuron corresponds to a component of an input vector x. The hidden layer is
used to cluster the input data and extract features. It consists of n neurons and one
bias neuron; each input neuron is fully connected to the hidden layer neurons except
the bias one. Each hidden layer neuron computes a kernel function, and a Gaussian
radial basis function is a good choice for the hidden layer.

Fig. 1. The basic framework of RBFNN

The Gaussian radial basis function used for the hidden layer is defined as

y_i = exp(-‖x - c_i‖² / (2σ_i²)),  i = 1, 2, …, n;   y_0 = 1    (1)
Spoken Emotion Recognition Using Radial Basis Function Neural Network 439

where c_i and σ_i represent the center and the width of the i-th hidden neuron,
respectively, and ‖·‖ denotes the Euclidean distance. The weight vector between the
input layer and the i-th hidden layer neuron corresponds to the center c_i. The closer
x is to c_i, the higher the value the Gaussian function produces.
The output layer consists of m neurons corresponding to the possible classes of the
problem. Each output layer neuron is fully connected to the hidden layer and
computes a linear weighted sum of the outputs of the hidden neurons as follows:
z_j = Σ_(i=0)^(n) y_i·w_ij,   j = 1, 2, …, m    (2)

where w_ij is the weight between the i-th hidden layer neuron and the j-th output
layer neuron.
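Eqs. (1) and (2) amount to the following forward pass; a minimal sketch in which the centers, widths and weights are made-up illustrative values (in practice they are learned from data):

```python
import math
import numpy as np

def rbf_forward(x, centers, sigmas, W):
    """Forward pass of the RBFNN: Gaussian hidden units per eq. (1) with a bias
    unit y0 = 1, followed by a linear output layer per eq. (2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared Euclidean distances to centers
    y = np.exp(-d2 / (2.0 * sigmas ** 2))     # hidden activations y1..yn
    y = np.concatenate(([1.0], y))            # prepend the bias neuron y0
    return y @ W                              # outputs z1..zm

# toy setup: 2 hidden neurons, 2 outputs (all values illustrative only)
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.array([1.0, 1.0])
W = np.array([[0.0, 0.0],    # bias weights
              [1.0, 0.0],    # hidden neuron 1 -> outputs
              [0.0, 1.0]])   # hidden neuron 2 -> outputs

z = rbf_forward(np.array([0.0, 0.0]), centers, sigmas, W)
print(z)   # first output is 1.0, since x coincides with the first center
```

In the classification setting of this paper, x would be the 48-dimensional acoustic feature vector and the m outputs would score the four emotion classes.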

3 Experiment Setup

3.1 Emotional Speech Corpus

The emotional Chinese speech corpus reported in our study [10] was used for the
experiments. The corpus was collected from 20 different Chinese dialogue episodes of a
TV talk-show. In each talk-show, two or three persons discuss problems such as typical
social issues, family conflicts, inspiring deeds, etc. Owing to the spontaneous and
unscripted manner of the episodes, the emotional expressions can be considered
authentic. Because of the limited topics, the speech corpus covers four common
emotions: angry, happy, sad and neutral. In total the corpus contains 800 emotional
utterances from 53 different speakers (16 male / 37 female), collected in a
speaker-independent manner, with about 200 utterances per emotion. All utterances were
recorded at a sample rate of 16 kHz with 16-bit resolution and stored on computer in
monophonic Windows WAV format.

3.2 Feature Extraction

It has been found that speech prosody and voice quality features are closely related
to the expression of human emotion in speech [3, 10]. The popular prosody features
comprise pitch, intensity and duration, and the representative voice quality features
include the first three formants (F1, F2, F3), spectral energy distribution, the
harmonics-to-noise ratio (HNR), pitch irregularity (jitter) and amplitude irregularity
(shimmer). These prosody and voice quality features are extracted for each utterance
in the corpus. Typical statistical parameters such as the mean, standard deviation
(std), median, quartiles, and so on, are computed for each extracted feature. The
extracted acoustic features, 48 in total, are as follows.
Prosody features:
(1-10)Pitch: maximum, minimum, range, mean, std (standard deviation), first
quartile, median, third quartile, inter-quartile range, mean-absolute-slope.
(11-19)Intensity: maximum, minimum, range, mean, std, first quartile, median,
third quartile, inter-quartile range.

(20-25)Duration: total-frames, voiced-frames, unvoiced-frames, ratio of voiced to


unvoiced frames, ratio of voiced-frames to total-frames, ratio of unvoiced-frames to
total-frames (20ms/ frame).
Voice quality features:
(26-37)First three formants F1-F3: mean of F1, std of F1, median of F1, bandwidth
of median of F1, mean of F2, std of F2, median of F2, bandwidth of median of F2,
mean of F3, std of F3, median of F3, bandwidth of median of F3.
(38-41)Spectral energy distribution in 4 different frequency bands: band energy
from 0 Hz to 500 Hz, band energy from 500 Hz to 1000 Hz, band energy from 2500
Hz to 4000 Hz, band energy from 4000 Hz to 5000 Hz.
(42-46)Harmonics-to-noise-ratio: maximum, minimum, range, mean, std.
(47) Jitter: pitch perturbation in the vocal cord vibration.
Jitter is calculated with equation (3), in which T_i is the i-th peak-to-peak interval
and N is the number of intervals:

Jitter(%) = [ Σ_(i=2)^(N-1) |2T_i - T_(i-1) - T_(i+1)| ] / [ Σ_(i=2)^(N-1) T_i ]    (3)
(48) Shimmer: cycle-to-cycle perturbation of the energy.
Shimmer is calculated similarly to jitter, as shown in equation (4), in which E_i is
the i-th peak-to-peak energy value and N is the number of intervals:

Shimmer(%) = [ Σ_(i=2)^(N-1) |2E_i - E_(i-1) - E_(i+1)| ] / [ Σ_(i=2)^(N-1) E_i ]    (4)

4 Experiment Results
All the extracted acoustic features were first normalized by a mapping to [0, 1].
Three typical classification methods, i.e., linear discriminant classifiers (LDC),
K-nearest-neighbor (KNN) and the C4.5 decision tree, were used for emotion
recognition, and their performance was compared with RBFNN. Note that the number of
nearest neighbors for the KNN method is set to 1, since this gives its best
performance.
For all emotion classification experiments, we performed 10-fold stratified
cross-validation over the data sets so as to achieve more reliable experimental
results. In
other words, each classification model is trained on nine tenths of the total data and
tested on the remaining tenth. This process is repeated ten times, each with a different
partitioning seed, in order to account for variance between the partitions.
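The stratified partitioning can be sketched as a per-class round-robin fold assignment (a simplified illustration, not the authors' exact procedure):

```python
from collections import defaultdict

def stratified_folds(labels, k=10):
    """Assign each sample a fold id 0..k-1 so that every class is spread
    evenly across the k folds (round-robin within each class)."""
    fold_of = [0] * len(labels)
    seen = defaultdict(int)          # samples seen so far per class
    for i, y in enumerate(labels):
        fold_of[i] = seen[y] % k
        seen[y] += 1
    return fold_of

# toy corpus: 4 emotions x 20 utterances each
labels = ["angry", "happy", "sad", "neutral"] * 20
folds = stratified_folds(labels, k=10)
print([folds.count(f) for f in range(10)])   # 8 samples per fold, 2 per class
```

Each model is then trained on nine folds and tested on the held-out fold, so every fold preserves the class balance of the full data set; shuffling within each class before assignment would implement the different partitioning seeds.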
The emotion recognition results of the four classification methods, LDC, KNN, C4.5
and RBFNN, are presented in Table 1. From these results we can observe that RBFNN
performs best, with the highest accuracy of 83.3%, outperforming LDC, KNN and C4.5.
This demonstrates that RBFNN has the best generalization ability of the four
classification methods. Note that LDC and KNN both reach an accuracy of about 75%,
indicating that LDC performs very similarly to KNN on this classification task.
Additionally, C4.5 performs better than LDC and KNN, with a recognition accuracy
of 80%.
To further explore the recognition results of different kinds of emotions when
using RBFNN classifier, Table 2 gives the confusion matrix of emotion recognition

results obtained with RBFNN. As shown in Table 2, neutral is classified best, with an
accuracy of 91.2%, and angry is discriminated with an accuracy of 85.6%, while the
other two emotions are classified with relatively lower accuracies: 80% for happy and
76.4% for sad. This can be explained by the observation that happy is frequently
confused with angry, and sad with happy, since their acoustic parameters overlap to a
great extent.

Table 1. Emotion recognition results for different classification methods

Methods LDC KNN C4.5 RBFNN

Accuracy (%) 75.2 75.1 80.0 83.3

Table 2. Confusion matrix of emotion recognition results with RBFNN

Emotion   Angry   Happy   Sad    Neutral
Angry     85.6    10.3    4.1    0
Happy     13.7    80.0    5.2    1.1
Sad       2.8     13.6    76.4   7.2
Neutral   0       6.5     2.3    91.2

5 Conclusions
In this paper, a new method based on RBFNN is presented for spoken emotion
recognition. To effectively compare the performance of RBFNN on spoken emotion
recognition task, three well-known emotion classification methods, i.e., LDC, KNN,
and C4.5, were used. From the experiment results on emotional Chinese speech
corpus, we can conclude that RBFNN performs best with an average recognition
accuracy of 83.3%, due to its good generalization ability. It is worth pointing out
that this study focuses on recognizing only four emotions: angry, happy, sad and
neutral. However, in real-world scenarios other common emotions, such as disgust,
fear and surprise, also occur. An interesting subject for future work is therefore to
extend our emotional Chinese speech corpus and to identify additional emotion
categories such as disgust, fear and surprise.
Acknowledgments. This work is supported by Zhejiang Provincial Natural Science
Foundation of China (Grant No.Z1101048, No. Y1111058).

References
1. Picard, R.: Affective Computing. MIT Press, Cambridge (1997)
2. Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., Votsis, G., Kollias, S., Fellenz, W.,
Taylor, J.G.: Emotion Recognition in Human-Computer Interaction. IEEE Signal
Processing Magazine 18(1), 32–80 (2001)
3. Lee, C.M., Narayanan, S.S.: Toward Detecting Emotions in Spoken Dialogs. IEEE
Transactions on Speech and Audio Processing 13(2), 293–303 (2005)
4. Dellaert, F., Polzin, T., Waibel, A.: Recognizing emotion in speech. In: Proceedings of
the 4th International Conference on Spoken Language Processing, Philadelphia, PA, USA,
pp. 1970–1973 (1996)
5. Petrushin, V.: Emotion in speech: recognition and application to call centers. In:
Proceedings of 1999 Artificial Neural Networks in Engineering, New York, pp. 7–10
(1999)
6. Yacoub, S., Simske, S., Lin, X., Burns, J.: Recognition of emotions in interactive
voice response systems. In: Proceedings of EUROSPEECH 2003, Geneva, Switzerland,
pp. 729–732 (2003)
7. Lee, C.C., Mower, E., Busso, C., Lee, S., Narayanan, S.S.: Emotion recognition using a
hierarchical binary decision tree approach. In: Proceedings of INTERSPEECH 2009,
Brighton, United Kingdom, pp. 320–323 (2009)
8. Park, J., Sandberg, I.W.: Universal approximation using radial-basis-function networks.
Neural Computation 3(2), 246–257 (1991)
9. Er, M.J., Wu, S., Lu, J., Toh, H.L.: Face recognition with radial basis function (RBF)
neural networks. IEEE Transactions on Neural Networks 13(3), 697–710 (2002)
10. Zhang, S.: Emotion Recognition in Chinese Natural Speech by Combining Prosody and
Voice Quality Features. In: Sun, F., Zhang, J., Tan, Y., Cao, J., Yu, W. (eds.) ISNN 2008,
Part II. LNCS, vol. 5264, pp. 457–464. Springer, Heidelberg (2008)
Facial Expression Recognition Using Local Fisher
Discriminant Analysis

Shiqing Zhang1, Xiaoming Zhao2, and Bicheng Lei1


1
School of Physics and Electronic Engineering, Taizhou University,
318000 Taizhou, China
2
Department of Computer Science, Taizhou University,
318000 Taizhou, China
{tzczsq,tzxyzxm,leibicheng}@163.com

Abstract. In this paper a new facial expression recognition method based on


Local Fisher Discriminant Analysis (LFDA) is proposed. LFDA is used to
extract the low-dimensional discriminant embedded data representations from
the original high-dimensional local binary patterns (LBP) features. The K-
nearest-neighbor (KNN) classifier with the Euclidean distance is adopted for
facial expression classification. The experimental results on the popular JAFFE
facial expression database demonstrate that the best accuracy based on LFDA is
up to 85.71%, outperforming both Principal Component Analysis (PCA) and
Linear Discriminant Analysis (LDA).

Keywords: Facial expression recognition, Local binary patterns, Local Fisher


discriminant analysis.

1 Introduction
Facial expression is one of the most powerful, natural, and immediate means for
human beings to communicate their emotions and intentions. Automatic facial
expression recognition has attracted much attention over the past two decades due to
its important applications in human-computer interaction, data-driven animation,
video indexing, etc.
An automatic facial expression recognition system involves two crucial parts:
facial feature representation and classifier design. Facial feature representation is to
extract a set of appropriate features from original face images for describing faces.
Two main types of approaches to extracting facial features are found in the
literature: geometry-based methods and appearance-based methods [1]. In a
geometric feature extraction system, the shape and location of various face
components are considered. Geometry-based methods require accurate and reliable
facial feature detection, which is difficult to achieve in real-time applications. In
contrast, in appearance-based methods, image filters are applied either to the whole
face image (holistic features) or to specific regions of the face (local features) to
extract the appearance changes in
the face image. Up to now, Principal Component Analysis (PCA) [2], Linear
Discriminant Analysis (LDA) [3], and Gabor wavelet analysis [4] have been applied
to either the whole-face or specific face regions to extract the facial appearance

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 443–448, 2011.
© Springer-Verlag Berlin Heidelberg 2011

changes. Recently, Local Binary Patterns (LBP) [5], originally proposed for texture
analysis [6], have been successfully applied as a local feature extraction method in
facial expression recognition [7, 8]. No matter which kind of facial features is used,
their dimensionality remains high. It is therefore desirable to perform dimensionality
reduction on the high-dimensional facial features in order to extract low-dimensional
embedded data representations for facial expression recognition.
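The basic 8-neighbour LBP operator mentioned above can be sketched as follows: each pixel is encoded by thresholding its 3×3 neighbourhood at the centre value, and the image (or each block of it) is described by the histogram of the resulting 8-bit codes. This numpy sketch shows only the core operator; the 2478-dimensional descriptor used later in the paper comes from the extended, block-based uniform-LBP variant of [7, 8]:

```python
# Basic 8-neighbour LBP: threshold the 3x3 neighbourhood at its centre
# and read the comparison bits as an 8-bit code; the histogram of the
# codes is the texture descriptor.
import numpy as np

def lbp_codes(img):
    """8-bit LBP code for every interior pixel of a 2-D grayscale array."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256)
    return hist / hist.sum()
```

Because the codes depend only on sign comparisons against the centre pixel, the descriptor is invariant to monotonic illumination changes, which is the tolerance property the text refers to.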
In recent years, a new supervised dimensionality reduction method called Local
Fisher Discriminant Analysis (LFDA) [9] has been proposed to overcome the
limitation of LDA. LFDA effectively combines the ideas of LDA and locality
preserving projection (LPP) [10], that is, LFDA maximizes between-class separability
and preserves within-class local structure at the same time. Thus, LFDA is capable of
extracting the low-dimensional discriminant embedded data representations.
Motivated by the scarcity of studies on LFDA for facial expression recognition,
in this work we use LFDA to extract the low-dimensional discriminant embedded
data representations from high-dimensional LBP features on the facial expression
recognition task. We compare LFDA with PCA and LDA to verify the effectiveness
of LFDA for facial expression recognition. We conduct facial expression recognition
experiments on the popular JAFFE [11] facial expression database.
The remainder of this paper is organized as follows. In Section 2 Local Fisher
Discriminant Analysis (LFDA) is reviewed briefly. The popular JAFFE facial
expression database is introduced in Section 3. Section 4 presents the experiment
results. Finally, the conclusions are given in Section 5.

2 Local Fisher Discriminant Analysis

Let $x_i \in \mathbb{R}^D$ ($i = 1, 2, \ldots, n$) be $D$-dimensional samples and $l_i \in \{1, 2, \ldots, c\}$ be the associated class labels, where $n$ is the number of samples and $c$ is the number of classes. Let $n_l$ be the number of samples in class $l$:

$$\sum_{l=1}^{c} n_l = n \quad (1)$$

Let $y_i \in \mathbb{R}^d$ ($1 \le d \le D$) be the low-dimensional representation of a sample $x_i$, where $d$ is the dimension of the embedding space. Local Fisher Discriminant Analysis (LFDA) [9] finds a transformation matrix $T$ such that the embedded representation $y_i$ of a sample $x_i$ is given by

$$y_i = T^T x_i \quad (2)$$

where $T^T$ denotes the transpose of the matrix $T$.

Let $S^{(lw)}$ and $S^{(lb)}$ be the local within-class scatter matrix and the local between-class scatter matrix:

$$S^{(lw)} = \frac{1}{2} \sum_{i,j=1}^{n} W^{(lw)}_{i,j} (x_i - x_j)(x_i - x_j)^T \quad (3)$$

$$S^{(lb)} = \frac{1}{2} \sum_{i,j=1}^{n} W^{(lb)}_{i,j} (x_i - x_j)(x_i - x_j)^T \quad (4)$$

$$W^{(lw)}_{i,j} = \begin{cases} A_{i,j}/n_l & \text{if } l_i = l_j = l \\ 0 & \text{if } l_i \ne l_j \end{cases} \quad (5)$$

$$W^{(lb)}_{i,j} = \begin{cases} A_{i,j}\,(1/n - 1/n_l) & \text{if } l_i = l_j = l \\ 1/n & \text{if } l_i \ne l_j \end{cases} \quad (6)$$

where $A$ is an affinity matrix between $x_i$ and $x_j$. Using the local scaling heuristic, $A$ is defined as

$$A_{i,j} = \exp\left(-\left\| x_i - x_j \right\|^2 / (\sigma_i \sigma_j)\right) \quad (7)$$

where $\sigma_i$ is the local scaling around $x_i$, defined by $\sigma_i = \| x_i - x_i^{(k)} \|$, and $x_i^{(k)}$ is the $k$-th nearest neighbor of $x_i$. The heuristic choice $k = 7$ has been shown to give the best performance.

The LFDA transformation matrix $T_{LFDA}$ is defined as

$$T_{LFDA} = \mathop{\arg\max}_{T \in \mathbb{R}^{D \times d}} \left[ \operatorname{tr}\left( T^T S^{(lb)} T \left( T^T S^{(lw)} T \right)^{-1} \right) \right] \quad (8)$$
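A numpy-only sketch of the LFDA computation (local-scaling affinity, local scatter matrices, then the top generalized eigenvectors) is given below. This is illustrative code under the definitions of this section, not Sugiyama's reference implementation, and the small regularization constant is an assumption for numerical stability:

```python
# Sketch of LFDA: Eqs. (5)-(8) with the local scaling heuristic of Eq. (7).
import numpy as np

def lfda(X, labels, d, k=7):
    """Return a D x d transformation matrix T; embed with Y = X @ T."""
    labels = np.asarray(labels)
    n, D = X.shape
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Local scaling: sigma_i = distance from x_i to its k-th nearest neighbour.
    sigma = np.sort(dist, axis=1)[:, min(k, n - 1)]
    A = np.exp(-dist**2 / (np.outer(sigma, sigma) + 1e-12))
    same = labels[:, None] == labels[None, :]
    nl = np.array([np.sum(labels == l) for l in labels])  # class size per sample
    Ww = np.where(same, A / nl[:, None], 0.0)                         # Eq. (5)
    Wb = np.where(same, A * (1.0 / n - 1.0 / nl[:, None]), 1.0 / n)   # Eq. (6)

    def scatter(W):
        # (1/2) sum_ij W_ij (x_i - x_j)(x_i - x_j)^T, in Laplacian form.
        return X.T @ (np.diag(W.sum(axis=1)) - W) @ X

    Slw, Slb = scatter(Ww), scatter(Wb)
    # Top-d generalized eigenvectors of S_lb v = lambda S_lw v.
    vals, vecs = np.linalg.eig(np.linalg.solve(Slw + 1e-9 * np.eye(D), Slb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:d]].real
```

The Laplacian form of the scatter matrices avoids the explicit double sum over pairs, which is the standard way such weighted pairwise scatters are computed.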

3 Facial Expression Database

The popular JAFFE facial expression database [11] used in this study contains 213
images of female facial expressions. Each image has a resolution of 256×256 pixels.
The head is almost in frontal pose. The number of images corresponding to each of
the 7 categories of expression (neutral, happiness, sadness, surprise, anger, disgust
and fear) is almost the same. A few of them are shown in Fig. 1.

Fig. 1. Examples of facial images from JAFFE database



As done in [12], we normalized the faces to a fixed distance of 55 pixels between
the two eyes. Automatic face registration can be achieved by a robust real-time face
detector based on a set of rectangular Haar-like features [13]. From the results of
automatic face detection, such as face location, face width and face height, two
square bounding boxes for the left eye and the right eye are created. The locations of
the two eyes can then be quickly worked out from the centers of these two bounding
boxes. Based on the locations of the two eyes, facial images of 110×150 pixels were
cropped from the original frames. Fig. 2 shows the process of locating the two eyes
and the final cropped image. No further alignment of facial features, such as
alignment of the mouth, was performed in our work.

Fig. 2. (a) Locating the two eyes; (b) the final cropped image of 110×150 pixels
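The crop geometry described above can be sketched as a small helper that converts the detected eye centers into a scale factor and crop origin. The placement of the eyes inside the crop (centered horizontally, about one third from the top) is an assumed layout for illustration; the paper does not give exact margins:

```python
# Geometry of the normalization step: scale the face so the inter-ocular
# distance becomes 55 pixels, then place a 110 x 150 crop around the eyes.
def crop_box(left_eye, right_eye, eye_dist_target=55, crop_w=110, crop_h=150):
    """Return (scale, x0, y0): crop img_scaled[y0:y0+crop_h, x0:x0+crop_w]."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    scale = eye_dist_target / eye_dist
    # Eye midpoint after scaling the image by `scale`.
    mx, my = (lx + rx) / 2 * scale, (ly + ry) / 2 * scale
    x0 = int(round(mx - crop_w / 2))   # eyes centred horizontally (assumed)
    y0 = int(round(my - crop_h / 3))   # eyes ~1/3 from the crop top (assumed)
    return scale, x0, y0
```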

4 Experimental Results
Since LBP is tolerant to illumination changes and computationally simple, we
adopted LBP for facial image representation in facial expression recognition. The
K-nearest-neighbor (KNN) classifier with the Euclidean distance was used to conduct
the facial expression recognition experiments owing to its simplicity, and the
parameter K was set to 1 for its best performance. To compare with LFDA, PCA and
LDA were used for dimensionality reduction. The reduced feature dimension is
limited to the range [2, 20]. For all facial expression recognition experiments, a
10-fold stratified cross-validation scheme was performed and the average recognition
results are reported.
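The evaluation pipeline of this section, reduce the feature dimension and classify with a 1-nearest-neighbor rule under the Euclidean distance, can be sketched as follows. PCA stands in here for whichever reduction method (PCA, LDA or LFDA) is being tested; this is illustrative, not the experiment code:

```python
# Dimensionality reduction (PCA) followed by 1-NN classification.
import numpy as np

def pca_fit(X, d):
    """Return (mean, W): project new data with (X_new - mean) @ W."""
    mean = X.mean(axis=0)
    cov = np.cov((X - mean).T)                        # D x D covariance
    vals, vecs = np.linalg.eigh(cov)
    return mean, vecs[:, np.argsort(vals)[::-1][:d]]  # top-d directions

def knn1_predict(train_X, train_y, test_X):
    """1-nearest-neighbour labels under the Euclidean distance (K = 1)."""
    preds = [train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]
             for x in test_X]
    return np.array(preds)
```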

Table 1. The best accuracy for different methods with corresponding reduced dimension

Methods Baseline PCA LDA LFDA

Dimension 2478 17 6 11

Accuracy (%) 80.95 78.57 83.81 85.71

The recognition results of the three dimension reduction methods, i.e., PCA, LDA
and LFDA, are given in Fig. 3. It should be pointed out that the reduced feature
number of LDA is limited to the range [2, 6], because LDA can find at most 6 (one
less than the 7 expression categories) meaningful embedded features due to the rank
deficiency of the between-class scatter matrix [3]. The best accuracy of each method,
with the corresponding reduced dimension, is presented in Table 1. Note that the
"Baseline" method denotes the result obtained on the original 2478-dimensional
LBP features without any dimensionality reduction. As shown in Fig. 3 and Table 1,
LFDA obtains the highest accuracy of 85.71% with 11 embedded features,
outperforming PCA, LDA and the Baseline. This indicates that LFDA is capable of
extracting the most discriminant low-dimensional embedded representations for
facial expression recognition. In addition, LDA performs better than PCA, since
LDA is a supervised reduction method and can extract low-dimensional embedded
data representations with higher discriminating power than PCA.
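The c−1 dimension limit noted above can be checked numerically: the between-class scatter matrix of c classes is a sum of c rank-one terms whose weighted mean deviations cancel, so its rank is at most c−1 (6 for the 7 expression categories). The data below are synthetic, for illustration only:

```python
# Synthetic check that rank(S_b) <= c - 1 for c classes (here c = 7).
import numpy as np

rng = np.random.RandomState(0)
c, per_class, D = 7, 20, 50
X = rng.randn(c * per_class, D)
labels = np.repeat(np.arange(c), per_class)

mean = X.mean(axis=0)
Sb = np.zeros((D, D))
for l in range(c):
    ml = X[labels == l].mean(axis=0)
    # Rank-one contribution of class l; the weighted deviations (m_l - mean)
    # sum to zero, which caps the total rank at c - 1.
    Sb += per_class * np.outer(ml - mean, ml - mean)

rank = np.linalg.matrix_rank(Sb)
```

For generic data the bound is attained, so LDA yields exactly c−1 = 6 usable directions here.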

[Figure 3 plots recognition accuracy (%) against the reduced dimension (2–20) for LDA, PCA and LFDA.]

Fig. 3. Performance comparisons of different used dimension reduction methods

5 Conclusions
In this paper, we presented a new method for facial expression recognition based on
LFDA. To verify the effectiveness of LFDA, two well-known dimensionality
reduction methods, PCA and LDA, were used for comparison. The experiment
results on the popular JAFFE facial expression database indicate that LFDA
performs best, obtaining the highest accuracy of 85.71% with 11 embedded features.
This is attributed to LFDA's better ability, compared with PCA and LDA, to extract
low-dimensional embedded data representations for facial expression recognition.

Acknowledgments. This work is supported by Zhejiang Provincial Natural Science


Foundation of China (Grant No.Z1101048, No. Y1111058).

References
1. Tian, Y., Kanade, T., Cohn, J.: Facial expression analysis. In: Handbook of Face
Recognition, pp. 247–275 (2005)
2. Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Maui, USA, pp. 586–591 (1991)
3. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. fisherfaces: Recognition
using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine
Intelligence 19(7), 711–720 (1997)
4. Lyons, M.J., Budynek, J., Akamatsu, S.: Automatic classification of single facial images.
IEEE Transactions on Pattern Analysis and Machine Intelligence 21(12), 1357–1362
(1999)
5. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant
texture classification with local binary patterns. IEEE Transactions on Pattern Analysis
and Machine Intelligence 24(7), 971–987 (2002)
6. Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with
classification based on featured distributions. Pattern Recognition 29(1), 51–59 (1996)
7. Shan, C., Gong, S., McOwan, P.: Robust facial expression recognition using local
binary patterns. In: IEEE International Conference on Image Processing (ICIP), Genoa,
pp. 370–373 (2005)
8. Shan, C., Gong, S., McOwan, P.: Facial expression recognition based on Local Binary
Patterns: A comprehensive study. Image and Vision Computing 27(6), 803–816 (2009)
9. Sugiyama, M.: Dimensionality reduction of multimodal labeled data by local Fisher
discriminant analysis. Journal of Machine Learning Research 8, 1027–1061 (2007)
10. He, X., Niyogi, P.: Locality preserving projections. In: Advances in Neural Information
Processing Systems (NIPS), pp. 153–160. MIT Press, Cambridge (2003)
11. Lyons, M., Akamatsu, S., Kamachi, M., Gyoba, J.: Coding facial expressions with Gabor
wavelets. In: Third IEEE International Conference on Automatic Face and Gesture
Recognition, Nara, Japan, pp. 200–205 (1998)
12. Tian, Y.: Evaluation of face resolution for expression analysis. In: First IEEE Workshop
on Face Processing in Video, Washington, USA, p. 82 (2004)
13. Viola, P., Jones, M.: Robust real-time face detection. International Journal of Computer
Vision 57(2), 137–154 (2004)
Improving Tracking Performance of PLL Based on
Wavelet Packet De-noising Technology

YinYin Li*, XiaoSu Xu, and Tao Zhang

Key Laboratory of Micro-inertial Instrument and Advanced


Navigation Technology, Ministry of Education,
Southeast University, Nanjing 210096, China
Liyinyin19870107@126.com,
xxs@seu.edu.cn, ztandyy@163.com

Abstract. The phase-locked loop (PLL) is used to track an incoming signal and
provide accurate carrier phase measurements in GPS receivers. However, PLL
performance is affected by thermal noise and dynamic stress. To resolve the
conflict between reducing PLL noise and tolerating dynamic stress, some
compromises must be made in PLL design. This paper proposes a wavelet
packet de-noising technique to resolve this conflict. The technique effectively
suppresses the effect of the noise and allows the PLL bandwidth to be
broadened. The performance is evaluated with tests on real signals. The results
show that this method outperforms the ordinary method in terms of
Signal-to-Noise Ratio (SNR) and Relative Mean-Square Error (RMSE).

Keywords: GPS, Software Receiver, PLL, wavelet packet, soft-threshold,


de-noising.

1 Introduction
The robustness of GPS receivers has gained more and more attention in recent years.
For GPS receivers, a phase-locked loop provides the carrier phase measurement for
accurate positioning. Unfortunately, the PLL tracking performance is often affected
by many errors, mainly the thermal noise and dynamic stress [1]. To tolerate dynamic
stress, the conventional method is to broaden PLL bandwidth and reduce the pre-
integration time. However, in order to reduce the thermal noise, a narrow bandwidth
and longer pre-integration time are required. Actually, some compromise must be
made to resolve this conflict [2].
This paper proposes an algorithm based on wavelet packet de-noising technology
to improve the tracking performance of the PLL. Due to their flexibility, software
receivers are valuable and convenient for implementing and evaluating such
algorithms; the proposed algorithm was implemented in a GPS software receiver
developed in C/C++. The paper is organized as follows: first, the basic structure of a
software GPS receiver is illustrated; second, basic concepts of the PLL and of

*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 449–456, 2011.
© Springer-Verlag Berlin Heidelberg 2011

wavelet packet de-noising are introduced, together with a detailed description of how
to implement wavelet packet de-noising in a PLL; finally, test results and conclusions
are presented.

2 Software GPS Receiver Structure

The structure of the software receiver used in this paper is shown in Fig. 1. A GPS
software receiver realizes baseband signal processing and the navigation solution in
software, on top of a generalized, modularized hardware platform, viz., the front-end
[3]. Implementing these functions in software gives the software receiver good
compatibility and flexibility, so that different algorithms can be implemented and
evaluated in the receiver.

[Block diagram: IF signal input → signal acquisition → signal tracking → bit synchronization, frame synchronization and parity check → decoding of the navigation message → position solution.]

Fig. 1. The Structure of a Software GPS Receiver

The principle of a software receiver is as follows. First, the receiver reads the IF
data. Second, the acquisition module acquires the rough code phase and Doppler
frequency of each satellite. Third, the tracking module performs carrier tracking and
code tracking based on the parameters from the second step. Then bit and frame
synchronization, parity checking and decoding are performed in sequence [4], and
the position is calculated from the navigation message. Carrier tracking is
accomplished by the PLL, which synthesizes a replica carrier matching the
frequency and phase of the satellite signal. The performance of the PLL therefore
plays an important role in the GPS receiver.

3 Basic Principle of PLL

A PLL is a control loop which synchronizes its output signal $u_o(t)$ (generated by a
voltage-controlled oscillator) with a reference or input signal $u_i(t)$ in frequency as
well as in phase. The PLL generates a control signal, related to the phase error, that
drives the VCO toward the frequency of the input signal, until the output frequency
of the PLL is exactly the same as that of the input signal and the error signal $u_d(t)$
between the oscillator's output and the reference is zero, or remains constant.
Through this control mechanism, the PLL continuously adjusts the phase of the
output signal to lock to the phase of the reference signal [5]. A typical PLL block
diagram is shown in Fig. 2. It consists of three basic functional components: a
discriminator or phase detector (PD), a loop filter (LF) and a voltage-controlled
oscillator (VCO). The LF filters the output of the PD and generates the control
signal $u_f(t)$ that drives the VCO to generate the output signal.


Fig. 2. A Typical PLL Block Diagram
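The PD → LF → VCO loop of Fig. 2 can be sketched in discrete time for a complex input tone. The proportional-plus-integral loop filter and its gains below are illustrative assumptions, not the receiver's actual loop design:

```python
# Discrete-time sketch of the PD -> LF -> VCO (NCO) loop in Fig. 2,
# tracking a complex input tone. Loop gains kp/ki are illustrative.
import numpy as np

def pll_track(samples, fs, kp=0.2, ki=0.02):
    """Process complex samples; return the final NCO frequency (Hz)."""
    phase = 0.0   # NCO phase (rad)
    integ = 0.0   # integral branch of the loop filter (rad/sample)
    for s in samples:
        err = np.angle(s * np.exp(-1j * phase))  # phase detector output u_d
        integ += ki * err                        # loop filter u_f (integral part)
        phase += integ + kp * err                # NCO: advance by freq + prop. term
    return integ * fs / (2 * np.pi)
```

For a small frequency offset the loop pulls the NCO onto the input tone within a few thousand samples, which is the locking behavior the text describes.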

4 Principle of Wavelet Packet De-noising

A signal $f(t)$ can be decomposed into coefficients in another space spanned by a wavelet packet basis. The signal $f(t)$ can then be expressed as a linear superposition over that basis:

$$f(t) = \sum_{\gamma} a_{\gamma} \psi_{\gamma}(t) \quad (1)$$

where $\{\psi_{\gamma}(t)\}$ are the basis functions of the other space. The characteristics of the signal can be abstracted from the coefficients $a_{\gamma}$, so processing of the signal can be replaced by processing of the coefficients [6]. In this paper the wavelet packet decomposition is used: it transforms the signal in the time domain into coefficients in the inner product space of the wavelet packet. Define the following notation: $\phi(x)$ is a scaling function and $\psi(x)$ is the corresponding wavelet function; $\{V_k\}$ is a multi-resolution (scale) space generated by $\phi(x)$; $\{W_k\}$ is a wavelet space generated by $\psi(x)$, where $W_{k-1}$ fills the difference between $V_{k-1}$ and $V_k$. The Lebesgue space $L^2(\mathbb{R})$ can then be decomposed as [7]:

$$L^2(\mathbb{R}) = \cdots \oplus W_{-2} \oplus W_{-1} \oplus W_0 \oplus W_1 \oplus W_2 \oplus \cdots \quad (2)$$

and $\{2^{k/2}\psi(2^k t - l) : l \in \mathbb{Z}\}$ is a set of bases of $W_k$. The wavelet packet transform can be carried out as follows. Define a sequence of functions [8]:

$$\mu_{2n}(x) = 2^{j/2} \sum_{k} h_k\, \mu_n(2^{j/2} x - k), \qquad
\mu_{2n+1}(x) = 2^{j/2} \sum_{k} g_k\, \mu_n(2^{j/2} x - k) \quad (3)$$

where $j, k \in \mathbb{Z}$ ($\mathbb{Z}$ is the integer set), $j$ is a scale parameter, $k$ is a time or location parameter, $n \in \mathbb{N}$ ($\mathbb{N}$ is the non-negative integer set), $h_k$ is a low-pass filter coefficient and $g_k$ is a high-pass filter coefficient. Moreover, $\{h_k\}$ and $\{g_k\}$ form a pair of conjugate mirror filters, related by:

$$\sum_{n \in \mathbb{Z}} h_{n-2k}\, h_{n-2l} = \delta_{k,l}, \qquad \sum_{n \in \mathbb{Z}} h_n = \sqrt{2}, \qquad g_k = (-1)^k h_{1-k} \quad (4)$$

Then the two-scale equations of the wavelet packet transform are obtained:

$$\mu_{2n}(x) = \sqrt{2} \sum_{k} h_k\, \mu_n(2x - k), \qquad
\mu_{2n+1}(x) = \sqrt{2} \sum_{k} g_k\, \mu_n(2x - k) \quad (5)$$

The signal can be expressed in terms of the wavelet packet basis functions as:

$$f(t) = \sum_{n,j} C_n^j(k)\, 2^{k/2} \mu_n(2^{k/2} t - j) \quad (6)$$

where

$$C_n^j(k) = 2^{k/2} \int_{\mathbb{R}} f(t)\, \mu_n(2^{k/2} t - j)\, dt \quad (7)$$

Based on the theory of local maxima of the wavelet transform, the characteristic information is concentrated in a few coefficients. De-noising can therefore be performed by keeping the characteristic coefficients and thresholding the others. After thresholding, the modified coefficients are used to reconstruct the signal. The reconstruction of a discrete signal is given by [8]:

$$C_m^j(k) = \sum_{n} \tilde{h}_{k-2n}\, C_{2m}^{j+1}(n) + \sum_{n} \tilde{g}_{k-2n}\, C_{2m+1}^{j+1}(n) \quad (8)$$

where $\tilde{h}_{k-2n}$ and $\tilde{g}_{k-2n}$ are obtained by reversing the order of $h_{k-2n}$ and $g_{k-2n}$.

Wavelet packet decomposition can decompose the signal into different frequency bands at different levels. Given sufficient decomposition levels and data samples, the beginning and the end of a frequency band can be resolved [6]. In general, wavelet packet de-noising proceeds in the following steps.

First, decompose the signal on the selected wavelet packet basis. The signal is decomposed into several layers of wavelet packet coefficients arranged as a tree [6], whose structure is shown in Fig. 3.

Fig. 3. The structure of the wavelet packet tree

Second, compute the best tree based on the Shannon entropy, i.e., compute the best wavelet packet basis.

Third, compute a threshold and apply soft thresholding to the coefficients. The threshold can be calculated as [8]:

$$\lambda = \sqrt{2 \log_2 N} \quad (9)$$

There are two kinds of threshold functions, hard and soft. In this paper the soft threshold function is selected. It is defined as [8]:

$$\hat{W}_{j,k} = \begin{cases} \operatorname{sign}(W_{j,k})\,\left(\left|W_{j,k}\right| - \lambda\right), & \left|W_{j,k}\right| \ge \lambda \\ 0, & \left|W_{j,k}\right| < \lambda \end{cases} \quad (10)$$

Fourth, reconstruct the signal using the modified detail coefficients.
The reconstruction algorithm is as formula (8).
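The de-noising steps can be sketched with numpy using the Haar filter pair and a fixed-depth packet tree. This is an illustrative sketch: the best-basis (entropy) step is omitted, and the Haar basis and the overridable threshold are assumptions, not the paper's exact configuration:

```python
# Haar-based sketch of wavelet packet de-noising: full packet
# decomposition to a fixed depth, soft thresholding of every leaf,
# and reconstruction (best-basis search omitted for brevity).
import numpy as np

def haar_split(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (h) branch
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (g) branch
    return a, d

def haar_merge(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wp_denoise(x, levels=2, lam=None):
    """De-noise x (length divisible by 2**levels) by wavelet packet
    soft thresholding; lam=None computes the threshold from the length."""
    x = np.asarray(x, dtype=float)
    if lam is None:
        lam = np.sqrt(2 * np.log2(len(x)))
    nodes = [x]
    for _ in range(levels):                 # step 1: build the packet tree
        nodes = [part for node in nodes for part in haar_split(node)]
    nodes = [np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # step 3: soft thr.
             for c in nodes]
    for _ in range(levels):                 # step 4: merge pairs back up
        nodes = [haar_merge(nodes[i], nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]
```

With the threshold forced to zero the transform is perfectly invertible, which is a quick sanity check on the tree bookkeeping; with a nonzero threshold the small (noise-dominated) coefficients are zeroed and the reconstruction is smoother.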

5 Applying Wavelet De-noising Technique for PLL


The purpose of applying the wavelet packet de-noising technique in the PLL is to
reduce the noise level before the loop filter. The block diagram is shown in Fig. 4:


Fig. 4. Applying Wavelet De-noising Technique in the PLL


454 Y. Li, X. Xu, and T. Zhang

The main function of the wavelet packet de-noising is to reduce the noise level
within the bandwidth of the loop filter. The loop filter can remove noise outside its
bandwidth, but noise within the bandwidth still passes through and degrades the
NCO tracking performance. It should be noted that the wavelet packet de-noising
technique only reduces the noise level rather than eliminating the noise entirely; the
remaining noise still affects the NCO tracking performance, but at a lower level [2].
The wavelet packet de-noising technique may also cut off some useful signal
components that are smaller than the threshold. This drawback makes the PLL take
longer to lock, owing to the loss of some control signal content, and distorts the
PLL output. However, the reduction in noise produces a smaller phase error, which
helps the PLL stay locked and smooths the output [6].

6 Test Results

The performance of the proposed algorithm is assessed using real IF data acquired
by an RF front-end. The front-end's parameters are as follows: the IF (intermediate
frequency) is 4.123968 MHz and the sampling frequency is 16.367667 MHz. The
software receiver is simulated with VC++ 6.0.
The algorithm processes the outputs of the discriminator. The performance is
evaluated in terms of the signal-to-noise ratio (SNR) and the Relative Mean-Square
Error (RMSE). The formulas of SNR and RMSE are as follows [9]:

$$SNR = 10 \log_{10} \frac{\sum_{i=1}^{N} I(i)^2}{\sum_{i=1}^{N} \left( I(i) - I_d(i) \right)^2} \quad (11)$$

$$RMSE = \left[ \sum_{i=1}^{N} \left( I(i) - I_d(i) \right)^2 \right] \Big/ \sum_{i=1}^{N} I_d^2(i) \quad (12)$$

where $I(i)$ and $I_d(i)$ denote the original signal and the de-noised signal, respectively.
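The two quality measures can be transcribed directly, reading SNR as the signal-to-error energy ratio in dB and RMSE as the error energy relative to the energy of the de-noised signal:

```python
# SNR (dB) and relative mean-square error between an original signal I
# and its de-noised version Id.
import numpy as np

def snr_db(I, Id):
    I = np.asarray(I, dtype=float)
    err = I - np.asarray(Id, dtype=float)
    return 10.0 * np.log10(np.sum(np.square(I)) / np.sum(np.square(err)))

def rmse_relative(I, Id):
    Id = np.asarray(Id, dtype=float)
    err = np.asarray(I, dtype=float) - Id
    return np.sum(np.square(err)) / np.sum(np.square(Id))
```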
Doppler frequencies from the ordinary PLL and the modified PLL are compared in
Fig. 5 and Fig. 6. The tested data are 36000 ms long; parts of them are selected for
display for brevity. Both figures show that the data from the proposed algorithm
have less noise than those from the ordinary PLL, but with a small offset. The results
comply with the theory above. The slope of the curves represents the rate of change
of the Doppler frequency.

Fig. 5. The difference of Doppler frequency between modified PLL and ordinary PLL for satellite 9

Fig. 6. The difference of Doppler frequency between modified PLL and ordinary PLL for satellite 18

The SNR and RMSE results of the proposed algorithm are given in Table 1. In
general, they comply with the results in the figures: the wavelet de-noising
algorithm provides an improvement in both SNR and RMSE.

Table 1. Doppler frequency improvement in SNR and RMSE

Satellite number SNR(dB) RMSE


9 64.3340 0.00060687
18 72.5939 0.00023459

7 Conclusion
The wavelet de-noising technique in the PLL reduces the noise within the
bandwidth of the loop filter, and therefore a less noisy tracking output can be
obtained from the NCO. The proposed method is thus effective in improving PLL
tracking performance. However, the bias between the de-noised signal and the
original signal is related to the soft-threshold function and still affects the
performance of the PLL; how to improve the threshold function will be the focus of
future work. Besides, operating the software in real time instead of post-processing
is also a key issue to be considered in later research.

References
1. Ward, P.W., Betz, J.W., Hegarty, C.J.: Satellite Signal Acquisition, Tracking, and Data
Demodulation. In: Understanding GPS: Principles and Applications, pp. 153–241. Artech
House, Inc., Norwood
2. Lian, P., Lachapelle, G., Ma, C.L.: Improving tracking performance of PLL in high
dynamics applications. In: ION NTM, pp. 1042–1052 (2005)
3. Tsui, J.B.Y.: Fundamentals of Global Positioning System Receivers: A Software
Approach. John Wiley & Sons Inc., Chichester (2000)
4. Cai, B.-G., Shang, G.W., Wang, J., Liu, H.-C.: Design and Realization of Software
Receiver Based on GPS Positioning Algorithm. In: International Conference on
Information Science and Engineering, pp. 2030–2033. IEEE, Los Alamitos (2009)
5. Gardner, F.M.: Phase Lock Techniques, 3rd edn. John Wiley & Sons, Inc., USA (2005)
6. Qian, H.-m., Ma, J.-c., Li, Z.-y.: Fiber optical gyro de-noising based on wavelet packet
soft-threshold algorithm. Journal of Chinese Inertial Technology 15(5), 602–605 (2007)
7. Daubechies, I.: Orthonormal bases of compactly supported wavelets. Commun. Pure
Appl. Math. 41, 909–996 (1988)
8. Peng, Y., Weng, X.H.: Thresholding-based Wavelet Packet Methods for Doppler Ultrasound
Signal Denoising. In: IFMBE Proceedings, APCMBE 2008, vol. 19, pp. 408–412 (2008)
9. Yu, W.X., Zhang, Q.: Signal De-noising in Wavelet Packet Based on an Improved
Threshold Function. Communications Technology 43(6), 7–9 (2010)

This project was supported by the National Natural Science Foundation of China
(No. 60874092, 50575042, 60904088).
Improved Algorithm of LED Display Image Based on
Composed Correction

Xi-jia Song1,2, Xi-qiang Ma1,2, Wei-ya Liu1, and Xi-feng Zheng1

1 Changchun Institute of Optics, Fine Mechanics and Physics,
Chinese Academy of Sciences, 130033 Changchun, China
2 Graduate School of the Chinese Academy of Sciences,
100039 Beijing, China
songxijia2009@gmail.com

Abstract. In order to enhance the display quality of the LED display image, an
improved algorithm based on composed correction is proposed. First, the
correction principle and the development status of current correction
technologies for the LED display image are introduced, and the two main
correction technologies are analyzed. Second, the theory of composed
correction is developed by constructing a mathematical model. Third, the
algorithm is implemented in VHDL, a hardware description language, and
verified on an experimental platform. Experimental results show that this
algorithm is able to enhance the display quality of the LED display image
significantly.

Keywords: Improved algorithm; LED display image; composed correction.

1 Introduction
LED display panels are widely used on various occasions such as stage
backgrounds, traffic guidance, live sports broadcasts and so on. The reason the LED
display panel has attracted widespread attention and developed rapidly is that it has
many advantages: high brightness, low operating voltage, low power consumption,
and long life. Although the LED display panel has so many advantages, its
development is still constrained by some unfavorable factors. One of them is the
non-uniformity among pixels that appears after the LED display panel has worked
for a period of time, because different pixels accumulate different working times
[1-6].
To solve the above problem, there are two main correction technologies:
correction based on a CCD and correction based on a dedicated video processor.
CCD-based correction has been used with LED display panels for many years, and
this technology is already mature. Video-processor-based correction has been
developed only in recent years, and its cost is high because the dedicated video
processor is expensive. Therefore, an improved algorithm of the LED display image
based on composed correction is proposed [7-10].

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 457463, 2011.
Springer-Verlag Berlin Heidelberg 2011
458 X.-j. Song et al.

2 Mathematical Model of LED Display Image


A two-dimensional function f(x, y) is used to denote the LED display image. At
particular coordinates (x, y), for a monochromatic LED display panel, the value of
f(x, y) represents the grayscale level of the pixel at (x, y); for a full-color LED
display panel, f(x, y) consists of three components, fR(x, y), fG(x, y) and fB(x, y),
because one pixel is composed of three sub-pixels: red, green and blue. The
values of fR(x, y), fG(x, y) and fB(x, y) represent the grayscale levels of these sub-
pixels respectively. In the following, unless stated otherwise, a pixel is assumed
to be composed of three sub-pixels, because this paper focuses on the full-color
LED display panel; the monochromatic LED display panel can be viewed as a
special case of the full-color one.
From the LED light-emitting principle, f(x, y) is proportional to the luminous
intensity of the LED. Therefore, f(x, y) must be bounded and non-negative, as
expressed in (1).

    0 ≤ fR(x, y) ≤ Lmax
    0 ≤ fG(x, y) ≤ Lmax                                              (1)
    0 ≤ fB(x, y) ≤ Lmax

In (1), Lmax denotes the highest grayscale level of a pixel. The coordinates of the
LED display image are subject to the constraints shown in Fig. 1.
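The constraint (1) amounts to a per-channel clamp of the grayscale data. The following NumPy sketch is illustrative only; the maximum level 255 is a hypothetical value for Lmax, not one taken from the paper:

```python
import numpy as np

L_MAX = 255  # hypothetical maximum grayscale level Lmax

def clamp_channels(f_r, f_g, f_b, l_max=L_MAX):
    """Enforce constraint (1): every channel value lies in [0, l_max]."""
    return tuple(np.clip(np.asarray(ch), 0, l_max) for ch in (f_r, f_g, f_b))

r, g, b = clamp_channels([-5, 100, 300], [0, 128, 255], [20, 260, -1])
```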

Fig. 1. Coordinate constraints of the LED display image. Each cell denotes one pixel and (0, 0)
denotes the pixel in the first row and the first column. The resolution of the LED display
panel is V×H, meaning that the panel has H rows and V columns of pixels.
Improved Algorithm of LED Display Image Based on Composed Correction 459

3 Mathematical Description of LED Display Image


The LED display image can be described as a matrix F because it is itself a digital
array. Each pixel of the LED display panel corresponds to one element of the matrix
F. It should be noted that the size of the matrix F is not V×H but (3V)×H, since F is
composed of three sub-matrices FR, FG and FB.

    F = [ FR : FG : FB ]                                             (2)

In (2), FR, FG and FB are expressed in (3), (4) and (5) respectively.

    FR = [ fR(0, 0)      fR(0, 1)       …  fR(0, V−1)
           fR(1, 0)      fR(1, 1)       …  fR(1, V−1)
           ⋮             ⋮                 ⋮
           fR(H−1, 0)    fR(H−1, 1)     …  fR(H−1, V−1) ]            (3)

    FG = [ fG(0, V)      fG(0, V+1)     …  fG(0, 2V−1)
           fG(1, V)      fG(1, V+1)     …  fG(1, 2V−1)
           ⋮             ⋮                 ⋮
           fG(H−1, V)    fG(H−1, V+1)   …  fG(H−1, 2V−1) ]           (4)

    FB = [ fB(0, 2V)     fB(0, 2V+1)    …  fB(0, 3V−1)
           fB(1, 2V)     fB(1, 2V+1)    …  fB(1, 3V−1)
           ⋮             ⋮                 ⋮
           fB(H−1, 2V)   fB(H−1, 2V+1)  …  fB(H−1, 3V−1) ]           (5)
In addition to the above representation, the LED display image can be described as
a vector, as shown in (6).

    f = [ fR  fG  fB ]                                               (6)

In (6), fR, fG and fB are expressed in (7), (8) and (9) respectively.

    fR = [ f0  f1  …  fV−1 ]                                         (7)

    fG = [ fV  fV+1  …  f2V−1 ]                                      (8)

    fB = [ f2V  f2V+1  …  f3V−1 ]                                    (9)

Each element in the vectors fR, fG and fB can be expressed as in (10).


460 X.-j. Song et al.

    fi = [ f(0, i)  f(1, i)  …  f(H−1, i) ]T    (i = 0, 1, …, 3V−2, 3V−1)    (10)
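The block structure of F in (2)–(5) and the column vectors fi of (10) can be illustrated with a small hypothetical 2×3 panel; this NumPy sketch is an illustration of the notation, not part of the paper's implementation:

```python
import numpy as np

H, V = 2, 3  # hypothetical panel: H rows, V columns of full-color pixels

# one H x V grayscale matrix per color channel
F_R = np.arange(H * V).reshape(H, V)   # red sub-pixels
F_G = F_R + 10                         # green sub-pixels
F_B = F_R + 20                         # blue sub-pixels

# (2): F = [FR : FG : FB] -- the three sub-matrices side by side, H x 3V in total
F = np.hstack([F_R, F_G, F_B])

# (10): f_i collects column i over all rows, for i = 0 .. 3V-1
f_cols = [F[:, i] for i in range(3 * V)]
```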

4 Experimental Results
The algorithm is implemented in VHDL, a hardware description language. The
program has four modules: data_in, data_pro, data_driv and correction_end.
Data_in is a module which receives the data from sources such as a display card or
video processor. Data_pro is a module which processes the data provided by data_in
based on composed correction. Data_driv is a module which transfers the data to the
LED display panel. Correction_end is a module which controls the correction process.

The main code of this program is


Library ieee;
Use ieee.std_logic_1164.all;
Use ieee.std_logic_unsigned.all;
Entity Composed_Correction is
  Generic(M:integer:=8;
          N:integer:=8;
          K:integer:=16;
          L:integer:=8);
  Port(pr,pg,pb: in  std_logic_vector(M-1 downto 0);
       ctr:      in  std_logic_vector(N-1 downto 0);
       qr,qg,qb: out std_logic_vector(K-1 downto 0);
       ena:      out std_logic_vector(L-1 downto 0));
End Composed_Correction;
Architecture a of Composed_Correction is
  -- data_in, data_pro, data_driv and correction_end are assumed to be
  -- declared as components elsewhere (declarations omitted in the paper)
  Signal mr,mg,mb: std_logic_vector(M-1 downto 0);
  Signal nr,ng,nb: std_logic_vector(N-1 downto 0);
Begin
  U1: data_in        port map(pr,pg,pb,mr,mg,mb);
  U2: data_pro       port map(mr,mg,mb,ctr,nr,ng,nb);
  U3: data_driv      port map(nr,ng,nb,qr,qg,qb);
  U4: correction_end port map(ctr,ena);
End a;
The program is verified on an experimental platform.

The algorithm based on composed correction has been applied in real projects, and
the following figures, Fig. 3, Fig. 4, Fig. 5 and Fig. 6, show the LED display
results before and after using this algorithm.

Fig. 2. The experimental platform is composed of the data transform unit, the control unit, the
drive unit and the display unit

(a) (b)

Fig. 3. (a) is the LED display result in red before correcting and (b) is after correcting

(a) (b)
Fig. 4. (a) is the LED display result in green before correcting and (b) is after correcting

(a) (b)
Fig. 5. (a) is the LED display result in white before correcting and (b) is after correcting

(a) (b)
Fig. 6. (a) is the LED display result in white before correcting and (b) is after correcting

5 Conclusion
The improved algorithm of the LED display image based on composed correction is
able to reduce the non-uniformity among pixels that appears after the LED display
panel has worked for a period of time. The experimental results show that this
algorithm significantly enhances the display quality of the LED display image.

References
1. Goh, J.C., Chung, H.J., Jang, J., et al.: A new pixel circuit for active matrix organic light
emitting diodes. IEEE Transactions on Electron Devices, 544-546 (2002)
2. Critchley, B.R., Blaxtan, W.P., Eckersley, B.: Picture quality in large-screen projectors
using the digital micro-mirror. Society for Information Display, 199-202 (2008)
3. Winkler, S.: Issues in vision modeling for perceptual video quality assessment. Signal
Processing, 231-252 (1999)
4. Cheng, W.S., Zhao, J.: Correction method for pixel response non-uniformity of CCD. Opt.
Precision Eng., 103-108 (2008)
5. Chang, F., Wang, R.-g., Zheng, X.-f., et al.: Algorithm of reducing the non-uniformity of
images in LED display panel. IEEE Transactions on Power Electronics, 169-172 (2010)
6. Vovk, U., Likar, B.: A review of methods for correction of intensity inhomogeneity in
MRI. IEEE Transactions on Medical Imaging, 405-411 (2007)
7. Leemput, K.V., Maes, F., Vandermeulen, D., et al.: Automated model-based bias field
correction of MR images of the brain. IEEE Transactions on Medical Imaging, 885-889
(1999)
8. Pinson, M.H.: A new standardized method for objectively measuring video quality.
IEEE Transactions on Broadcasting, 135-139 (2004)
9. Burt, P.J., Adelson, E.H.: The Laplacian pyramid as a compact image code. IEEE
Transactions on Communications, 532-540 (1983)
10. Ro, Y.M., Huh, Y., Kim, M.: Visual content adaptation according to user perception
characteristics. IEEE Transactions on Multimedia, 435-445 (2005)
The Development Process of Multimedia Courseware
Using Authoware and Analysis of Common Problem

LiMei Fu

DaLian Neusoft Institute of Information


DaLian, China

Abstract. With the rapid development of modern education technology,
multimedia courseware shows its advantages in teaching. Based on the author's
experience in developing multimedia courseware, this paper systematically
introduces the principles of multimedia courseware, its development tools, and
the development process. It also introduces some common problems encountered
when developing multimedia courseware, and finally analyzes these problems in
detail. Actual teaching results show that using multimedia courseware can
stimulate students' interest in learning and achieve a good teaching effect.

Keywords: Authorware, Multimedia Courseware, Development process,
Analysis of common problems.

1 Introduction
With the rapid development of modern education technology, which takes computer
technology and network technology as its core, schools pay more and more attention
to using computers to develop multimedia courseware. Multimedia courseware shows
its advantages in teaching. A multimedia courseware can be considered as a teaching
system composed of six basic parts: cover and introduction, knowledge content,
exercises, jump relations, navigation strategy, and the interface. Combining the
author's practical experience in developing a C programming courseware, this paper
introduces the development process of courseware using Authorware and summarizes
some problems encountered during development. Finally, solutions to these problems
are presented.

2 The Development Principles of Courseware


The design of multimedia courseware should follow the regularities of education and
teaching and reflect the specialized training goal. Meanwhile, the courseware should
combine teaching science with practicality. It should also have distinct characteristics
and value for dissemination, as in [1].
(1) Teaching. Firstly, the design of the courseware should be visually intuitive,
which helps students understand the knowledge. Secondly, the presentation of the
content should be lively and interesting. Finally, the courseware should have
practical applicability, so that it can be applied to the daily learning of most students.
(2) Scientific. The content of the courseware should be scientifically sound. At the
same time, the materials should be organized tightly around the syllabus, and the
content should adapt to the needs of the teaching object.
(3) Technical. The courseware should run in an independent environment with
quick speed and continuous animation. It should be fault-tolerant and
trouble-free, and its interface should be friendly and strongly interactive.
(4) Artistic. The pictures of the courseware should be concise, artistic, and
unified. The use of images, animation, and text fonts should be reasonable,
and font sizes should be easily legible.

3 The Introduction of Developing Tools Authorware

Authorware is a product of the Macromedia company. As a multimedia
development tool, it provides an intuitive authoring environment based on flow
charts and design icons. Developers can easily organize various multimedia
materials together and specify their presentation forms, which gives developers
complete control of the program flow. Using Authorware, users can develop
complex multimedia application systems with strong interactivity.

4 The Development Process of C Programming Courseware


Any courseware's success depends on careful planning. The development of a good
courseware is generally divided into four stages, as in [2]:
(1) The collection of related materials.
(2) The development of the courseware: using multimedia production tools to
organize all kinds of materials and build the courseware.
(3) The testing and debugging of the courseware: after completion, the
courseware must be examined thoroughly in order to make corrections.
(4) The packaging and release of the courseware.
Before developing the C programming courseware, good hierarchical relationships
should be designed first, as in [3]. According to the characteristics and content
of the C programming course, we designed the hierarchical relationships of the
courseware as Figure 1 shows.
As an example, Figure 1 only lists the array part; the others are omitted. First,
we should build the overall framework of the program. The building process is as
follows: drag a group icon to the flow line as the beginning of the whole
courseware; this group icon includes an AVI file. To prevent illegal use of the
courseware, we set a password for entering it: to use the courseware, the right
password must be input. Then drag another group icon as the core part of the whole
program; this group icon contains all the content of the courseware, including the
lectures. Figure 2 shows the basic structure of the courseware.
466 L. Fu

Fig. 1. The hierarchical relationships of courseware

Fig. 2. The framework of courseware

The second step is to design the core part of the courseware. Based on the framework
and functions of the courseware, we determine the main menu, submenus, and
corresponding buttons. The design should make it convenient to switch
between one part and another, and should provide buttons to return to the home
page and to exit the courseware system. In order to make the courseware more
attractive, the main menu and the submenus of the chapters are all produced in
Flash. The design process of the main menu is as follows: first insert a Flash
animation into the flow line, and then drag an interaction icon to the flow line.
Below the interaction icon, many group icons can be attached. Among the many types
of interactive response, we choose "hotspot response" as the interaction. In those
group icons, we can use all kinds of icons to finish the development of the
courseware: display icons to show the content of the courseware and all kinds of
pictures, sound icons to play sounds, and movie icons to play movies. The return
button and exit button use calculation icons to return to the main menu and exit
the system. In order to jump correctly, each icon must be named accurately and
uniquely in the whole process; icons with the same name would cause jump commands
to run wrongly during program execution. The program flow is shown in Figure 3.
The initial interface displays the main menu of the big chapters, and the submenus
display the menus within chapters. Clicking the main menu enters the submenu
interface.

Fig. 3. The structure of the core courseware

Fig. 4. The structure of the statements part

In the third step, each part of the teaching content is refined and divided into
several independent units of knowledge, and every knowledge unit should be made
clear. According to the different knowledge units, the design gives the
corresponding interfaces. Pages can be connected through a "next page" button.
When a submenu reaches its last page, it should return to the main menu of the
chapter via a "return" button. In any state of a submenu, the courseware can be
exited by pressing the "exit" button at any time. Figure 5 is the structure chart
of the statements part. In order to complete the corresponding functions, different
types of icons can be dragged into the group icons.
The fourth step is to complete the tests of each unit, which can be done quickly
using Authorware "knowledge objects". There are two kinds of questions: multiple
choice and true/false. The feedback function is very important for testing: when
students answer correctly, encouraging feedback should be designed, such as a
smiling face or an encouraging expression; when students answer wrongly, the test
system should explain the reasons. In order to record each student's learning
situation, the system provides a learning diary function. Finally, the system also
provides more learning resources through the course resource part.

5 The Analysis of Common Problems Encountered in the

Development Process
In the development process of courseware using Authorware, we often encounter
many problems. They may appear in the development phase or in the packaging
phase, as in [4].

5.1 The Analysis of Common Problems during the Development Phase

(1) Importing files of other formats

To make an excellent multimedia courseware, it is usually necessary to import files
generated by other software into Authorware. For example, for the main menu and
the various submenus we use Flash files, which make the interfaces more attractive.
Sometimes it is also necessary to call executable files; for instance, we can make
an executable file with Flash to display the execution process of a for loop,
which would be difficult to realize with Authorware itself. In Authorware, an
executable file can be launched with the JumpOutReturn function, whose format is
JumpOutReturn("program", ["document"][,"creator"]). This function starts an
executable file and, after it finishes running, returns to the Authorware
environment. In addition, this function can connect Authorware with other
applications, such as Notepad.
PPT is a common file format for courseware. To run a PPT, click the "Insert"
menu and insert an "OLE object". After inserting the PPT, some attributes can be
set through the "OLE object" menu under the "Edit" menu.

(2) The connection of materials

Whether pictures, video, or sound, materials can be brought into Authorware in
two ways: external links and embedding. If a picture is not very big, embedding
will display it quickly; in this case, it is best to put these pictures into the
library after completion. If external links are used, it will be convenient to
modify the materials and the data size of the application will be reduced, which
improves the efficiency of operation.

(3) The use of interaction icons

The most powerful feature of courseware development with Authorware is
interactivity. It has rich interactive control modes: besides the usual buttons,
pull-down menus, hotspots, and hot objects, Authorware also provides conditional,
target-area, text-entry, keypress, and tries-limit interactions. These interactions
make it convenient to develop courseware, so using all sorts of interactions is a
good way to attract students to study when developing multimedia courseware.

5.2 The Analysis of Common Problems during the Packaging Stage

(1) Missing DLL file error

After packaging, when you run the executable file you may see an error message
saying that a DLL is missing. This is because Authorware adopts an open program
structure in which many program functions exist as external plug-ins, such as
Xtras and UCDs. Therefore, all kinds of external support files must be released
together with the production. The solution is to find these files under the
Authorware installation directory and copy them to the package directory. Other
similar situations can be solved by the same method.

(2) Missing Xtra error

In order to run the packaged files normally, the Xtra files used in the courseware
must also be copied to the package folder; otherwise some materials cannot be
shown normally. Xtra files are external files that extend Authorware's functions.
This error can be solved by copying all the Xtras under the Authorware
installation directory to the package directory.
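Both the DLL and the Xtra fixes reduce to copying a support folder from the installation into the package directory, which can be scripted. The sketch below is a generic helper, not part of the paper; the Authorware installation path in the comment is hypothetical and must be adjusted:

```python
import shutil
from pathlib import Path

def copy_support_files(install_dir, package_dir, subdir="Xtras"):
    """Copy a support folder (e.g. Xtras, or a folder of UCD/DLL files)
    from the Authorware installation into the packaged directory."""
    src = Path(install_dir) / subdir
    dst = Path(package_dir) / subdir
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return dst

# e.g. copy_support_files(r"C:\Program Files\Macromedia\Authorware", r"D:\package")
```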
(3) When running the packaged file, an error message appears: "Where is
movie name.avi?"
The cause of this error is that the program calls AVI animation files. These files
are usually large, and Authorware cannot include them automatically when packaging.
The solution is to copy the AVI files to the package directory.

6 Conclusion
This paper has expounded the development process of a multimedia courseware using
Authorware, based on the author's experience in developing the C programming
courseware. It has also analyzed common problems encountered during the
development process. Practical teaching shows that teaching with multimedia
courseware achieves a good teaching effect.

References
1. Koper, E.J.R.: A method for the development of multimedia courseware. The British
Journal of Educational Technology 26(2) (May 1995)
2. Flori, R.E.: Computer-Aided Instruction in Dynamics: Does it Improve Learning? In:
Proceedings Frontiers in Education, 24th Annual Conference, Session 3D1, pp. 77-81,
San Jose, CA, November 2-6 (1994)
3. Ning, J.X.: How to make the multimedia courseware for teaching. Education Teaching
BBS (March 2011)
4. Zhou, D.: The analysis of common problem using Authoware. Natural Sciences Journal of
Harbin Normal University 17(2) (2001)
Design of Ship Main Engine Speed Controller Based on
Expert Active Disturbance Rejection Technique

Weigang Pan1, Guiyong Yang2, Changshun Wang1, and Yingbing Zhou1


1
Department of Information Engineering, Shandong Jiaotong University, Jinan, Shandong
Province, China
panweigang1980@163.com
2
Beyond the NC Electronics Co., Ltd. Inspur Group, Jinan City, Shandong Province China
yanggy@Inspur.com

Abstract. Applying the mathematical model of nonlinear ship main engine control

and the wave disturbances to the design of an electronic governor, and considering
the uncertainty of the model parameters and the characteristics of the servo-system,
gives the model correspondingly unmatched uncertainty. In order to overcome this
difficulty, an active disturbance rejection nonlinear control strategy is proposed,
and an expert system is used to modify the parameters of the ADRC online, which
improves the ADRC's adaptive capacity. A ship main engine expert ADRC
controller is designed. The simulation results of ship main engine (ME) speed
tracking and keeping show that the controller adapts well to the system
nonlinearity and is strongly robust to parameter perturbations of the ship and
environmental disturbances. The speed switching is fast and smooth, thus
achieving high-accuracy ship ME speed control.

Keywords: Ship main engine, Expert, Active disturbance rejection controller.

1 Introduction
A ship's main engine performance and service life depend very much on the
performance of its speed regulation. At home and abroad, most advanced ship speed
control systems use digital governors, which mainly employ the PID algorithm. Since
ship ME speed control processes are nonlinear and time-varying, and the environment
is uncertain, it is very difficult to obtain an optimal control system performance
index with the traditional PID algorithm: PID is usually applied in control systems
with constant coefficients and little environmental interference; otherwise, the
control system cannot guarantee optimal performance, and may even become unstable.
This paper designs a new type of ship ME speed controller using the expert ADRC
technique, and simulation studies indicate that it achieves good results under
parameter perturbations of the ship and environmental disturbances.

2 Ship ME Model
Main Engine speed control system structure is shown in Fig.1.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 470475, 2011.
Springer-Verlag Berlin Heidelberg 2011
Design of Ship Main Engine Speed Controller 471

In this paper, the MAN B&W S60M large low-speed diesel engine is used; its
mathematical model is as follows:

    k·T1·ṅf(t) + k·nf(t) = s(t)                                      (1)

Fig. 1. ME Block diagram of speed control system

A DC servo motor is used as the implementing agency; its model is as follows:

    T2²·s̈(t) + 2·T2·ṡ(t) + s(t) = ns(t)                              (2)
The measurement accuracy of the speed detection unit directly affects the system's
regulation accuracy. A magnetic sensor is commonly used on board, whose
mathematical model can be regarded as a proportional link.
In the actual design, the ship main engine uses a pure-delay first-order inertia
model, given in (3). This not only simplifies the design work but also achieves
satisfactory results.

    G(s) = k·e^(−τs) / (Ts + 1)                                      (3)
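A discrete-time sketch of model (3) can be written with Euler integration plus a transport-delay buffer. The sketch below is illustrative, not the paper's implementation; the default numerical values reuse the nominal S60M parameters quoted in Section 5 (T = 12.4 s, k = 98.5, τ = 0.0656 s):

```python
def simulate_fopdt(u, dt, T=12.4, k=98.5, tau=0.0656):
    """Simulate G(s) = k*exp(-tau*s)/(T*s + 1) over the input samples u."""
    delay_steps = int(round(tau / dt))
    buf = [0.0] * (delay_steps + 1)       # transport-delay line for the input
    y, out = 0.0, []
    for uk in u:
        buf.append(uk)
        delayed = buf.pop(0)              # input delayed by tau
        y += dt / T * (k * delayed - y)   # Euler step of T*dy/dt + y = k*u
        out.append(y)
    return out

step = simulate_fopdt([1.0] * 5000, dt=0.1)   # 500 s unit-step response
```

With a unit step input, the output rises toward the static gain k, as expected of a first-order lag.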
In addition, the disturbance problem in ship ME control is very complex, being
related to water depth, load, wind, current, waves and so on. If the ship sails in
rough seas, the propeller is sometimes lifted out of the water and the engine load
suddenly decreases; if the fuel supply cannot be reduced in time, the engine speed
will suddenly rise too high, and the engine may even race. On the other hand, when
the load suddenly increases, particularly during low-speed operation, if the fuel
supply cannot be increased in time, the engine may stall. These disturbances,
which have a large influence on the navigation of the ship, must be considered in
the design of the controller.

3 Design of the Expert ADRC Controller for the Ship ME

For a system that can be expressed in the series formation illustrated in Fig. 3, a
controller can be designed with two ADRC units, which constitute a double-closed-
loop control system as shown in Fig. 2, where ADRC1 is the controller of the ship
ME system (second-order) and ADRC2 is the controller of the servo-system
(first-order).

Fig. 2. Double-ring ship ME controller based on ADRC

472 W. Pan et al.

Fig. 3. Single-ring ship ME controller based on ADRC

For the ship ME system shown in Fig. 2, the conventional design method generally
yields the double-ADRC system of Fig. 2, where ADRC1 controls the ship ME system
(second-order) and ADRC2 the servo-system (first-order); ns is the expected ME
speed, sc is the expected displacement of the servo mechanism, nf is the actual
speed of the ME, and s is the actual displacement of the servo. The controller
parameters are tuned in order from the inner loop to the outer loop: ADRC2 has 11
parameters and ADRC1 has 15 parameters, so the parametric design is difficult.
The ultimate goal of the ship ME controller is to maintain speed accuracy and
tracking, so the servo mechanism and the ship ME mathematical model can be
considered as a whole, which greatly simplifies the controller design. This design
is shown in Fig. 3, where the ADRC is second-order.
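A minimal discrete sketch of a second-order ADRC loop of the kind used here (a linear extended state observer plus PD-type feedback with disturbance compensation) is given below. It is an illustration only: the toy double-integrator plant, the gain b0, the observer gains and the step size are all hypothetical choices, not the paper's tuned design.

```python
def eso_step(z, y, u, b0, betas, dt):
    """Linear extended state observer: z1 tracks y, z2 its derivative,
    z3 the total (unmodeled) disturbance."""
    z1, z2, z3 = z
    e = z1 - y
    z1 += dt * (z2 - betas[0] * e)
    z2 += dt * (z3 - betas[1] * e + b0 * u)
    z3 += dt * (-betas[2] * e)
    return [z1, z2, z3]

def adrc_control(ref, z, kp, kd, b0):
    """PD feedback on the observer states plus disturbance rejection term."""
    return (kp * (ref - z[0]) - kd * z[1] - z[2]) / b0

# toy double-integrator plant x'' = u + d with an unknown constant disturbance d
kp, kd, b0, dt = 4.0, 3.0, 1.0, 0.01       # hypothetical controller gains
betas = (60.0, 1200.0, 8000.0)             # hypothetical observer gains
x = [0.0, 0.0]                             # plant state: position, velocity
z = [0.0, 0.0, 0.0]                        # observer state
u = 0.0
for _ in range(3000):                      # 30 s of simulated time
    y = x[0]
    z = eso_step(z, y, u, b0, betas, dt)
    u = adrc_control(1.0, z, kp, kd, b0)   # track a unit speed reference
    x[1] += dt * (u - 0.5)                 # d = -0.5 is the unmodeled disturbance
    x[0] += dt * x[1]
```

After the transient, the observer state z3 converges to the disturbance estimate and the output settles on the reference despite the unmodeled load, which is the mechanism the expert system's parameter sets exploit.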

4 Parameter Setting of the Expert ADRC Controller for Ship ME

Speed Tracking

Since the other parameters are easy to design, the expert system is used to
optimize the five parameters {kp, kd, β01, β02, β03}. This makes parameter
selection easier than manual adjustment based on experience.
1) The design of the expert ADRC
The controller's parameters must be self-tuned before operation. In the parameter-
tuning stage, based on analysis of the approximate model of the controlled object,
the inference mechanism of the expert system selects the best set of ADRC
parameters from the knowledge base and embeds it into the ADRC control system,
finally achieving control of the uncertain object (Fig. 4).

Fig. 4. Expert system parameter self-tuning schematic



2) Knowledge acquisition
The basic task of knowledge acquisition is to acquire knowledge and create a
sound, complete and effective knowledge base for the expert system, to meet the
needs of problem solving in the field. This is the key to the controller design:
the correctness of the knowledge base directly affects the control accuracy of
the controller.
The system has two main methods of knowledge acquisition.
The first is through knowledge engineers: in the process of acquiring knowledge,
the knowledge engineers coordinate with experts in the field to generate the
expert knowledge base. The ADRC controller is strongly robust and can be said to
be "universal" within a certain class of objects, but an ADRC controller with
fixed parameters cannot control all objects. Research on controlled objects with
a pure-delay first-order inertia model found that the constant of proportionality
can be normalized to 1 (this can be implemented in software), the time constant is
generally 1-1000 s, and the delay time is generally 0-800 s. The controller was
therefore designed with ten sets of ADRC parameters (Table 1) to control the
object. The parameter selection criteria are: an uncertain object can always be
controlled by one set of ADRC parameters from the knowledge base, with control
accuracy meeting the requirements (good rapidity and less than 15% overshoot);
for short-delay objects, the ADRC parameters in the knowledge base should be as
small as possible; for long-delay, large-inertia objects, the ADRC parameters
should be as large as possible in order to meet the requirements of high-precision
control.
The second method applies when control fails: the user manually adjusts the
intelligent controller parameters and thereby improves the content of the
knowledge base.

Table 1. Ten sets of second-order ADRC parameters

No.   kp    kd   β01   β02   β03
1     0.1   80    5    25    80
2     0.5   90   10    30    40
3     1.0   50   15    25    70
4     1.5   30   20    25    30
5     2.0   40   25    30    50
6     3     60   30    30    10
7     4     70   35    30    20
8     5     30   40    25    40
9     6     80   45    25    80
10    8     20   50    30    30

3) Knowledge base
All the ADRC parameters (Table 1) and the approximate models of the controlled
objects are stored in the knowledge base.
4) Inference mechanism
The inference mechanism is the core of the controller design, equivalent to the
brain of an expert. It analyzes the approximate mathematical model of the
controlled object, uses the integral-of-squared-error index

    ISE = ∫₀^∞ e²(t) dt

as the basis for parameter selection, and simulates online with the parameters
from the knowledge base; the parameter set with the best performance index
obtained through online simulation is then embedded in the ADRC control
system to achieve control of the uncertain object.
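The inference step — simulate each stored parameter set against the approximate plant model and keep the one with the smallest ISE — can be sketched as follows. For brevity the full ADRC simulation is replaced by a hypothetical proportional loop around the first-order plant model, and only the kp values of the first four rows of Table 1 are used; none of this stands for the paper's actual inference code:

```python
def ise(candidate, plant_T=12.4, plant_k=98.5, dt=0.1, t_end=200.0):
    """Integral-of-squared-error of a toy proportional loop around the
    first-order plant; stands in for the full closed-loop ADRC simulation."""
    kp = candidate["kp"]
    y, acc = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                         # unit-step reference
        acc += e * e * dt                   # accumulate the ISE integral
        u = kp * e
        y += dt / plant_T * (plant_k * u - y)
    return acc

# kp values of the first rows of Table 1 (kd and beta01..beta03 omitted here)
knowledge_base = [
    {"no": 1, "kp": 0.1}, {"no": 2, "kp": 0.5},
    {"no": 3, "kp": 1.0}, {"no": 4, "kp": 1.5},
]
best = min(knowledge_base, key=ise)
```

In this reduced setting the fourth parameter set wins, which matches the set the expert system selects in Section 5.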

5 Simulation Results
A simulation experiment is carried out for a ship whose ME is the MAN B&W
S60M, with nominal parameters:
T = 12.4 s, k = 98.5, τ = 0.0656 s.
The ADRC parameters obtained by the usual ADRC tuning principles are:
r = 100, h = 30, δ01 = 0.02, δ02 = 0.04, α1 = 0.5, α2 = 0.25, δ1 = 0.05, δ2 = 0.1.
Through the expert system, the fourth set of parameters in the expert database is
used:
kp = 1.5, kd = 30, β01 = 20, β02 = 25, β03 = 30.
The sampling period is set as 0.1 s and the simulation time as 500 s.
Due to limited space, only some simulation results under typical conditions are
shown in Fig. 5 and Fig. 6.

Fig. 5. Simulation result of normal start (revolution in r/min versus time in s;
PID versus expert ADRC)

Fig. 6. Simulation result of unloading 25% load (revolution in r/min versus time
in s; PID versus expert ADRC)



From these we can see that:

1) With the nominal model, the speed response of the expert ADRC controller is
faster and smoother than that of the PID controller.
2) When the load changes due to disturbance, the speed response remains
relatively smooth and accurate.
These results indicate that the expert ADRC controller is strongly robust to the
severe nonlinearity and the parameter and condition uncertainties of the ship ME.
The speed switching is fast and smooth, thus achieving the goal of high-accuracy
ship ME speed control.

6 Conclusion and Prospect

This paper has presented a design of an expert ADRC for nonlinear ship ME
control. Simulation results show that this controller is strictly robust to the
nonlinear characteristics of the ship, system disturbances, and system
uncertainty. The process of speed switching is fast and smooth, so it works well
as a ship ME speed controller.

Acknowledgements. This work is supported by the Natural Science Foundation of

Shandong Province (Project Numbers: 2009ZRB019B2, ZR2009FL013 and
ZR2010FL014) and the Shandong Jiaotong College Research and Innovation Teams.

Design of Ship Course Controller Based on Genetic
Algorithm Active Disturbance Rejection Technique

Hairong Xiao1,2, Weigang Pan1, and Yaozhen Han1


1
Department of Information Engineering,
Shandong Jiaotong University, Jinan, Shandong Province, China
panweigang1980@163.com
2
School of Control Science and Engineering,
Shandong University, Jinan, Shandong Province, China

Abstract. Because of strong nonlinearity and uncertainty, the dynamic constraints of autopilots, and the effects of wave disturbances, designing a high-performance ship course controller is always difficult. To solve this difficulty, an active disturbance rejection nonlinear control strategy is proposed, and a genetic algorithm is used to modify the parameters of the ADRC online, which improves the ADRC's adaptive capacity. A ship course genetic algorithm ADRC controller is designed. The simulation results of ship course tracking and keeping show that the controller adapts well to the system nonlinearity and is strongly robust to parameter perturbations of the ship and to environmental disturbances.

Keywords: Ship course, Genetic algorithm, Active disturbance rejection


controller.

1 Introduction
A ship autopilot navigates along the designed optimal course, saving much energy, time and manpower. Since the 1920s, PID-based autopilots have been designed and used for ship course control. In the 1970s, modern control methods, such as self-adaptive control, were introduced to ship course control. However, the traditional PID controller is so sensitive to high-frequency disturbances that it causes frequent steering operations, and it lacks sufficient adaptability to the dynamic conditions of ship and sea. Furthermore, the self-adaptive autopilot, with its high cost and difficult parameter regulation, in addition to the ship's nonlinearity, cannot guarantee a good control effect. This paper designs a new type of ship course controller using a genetic algorithm ADRC technique, and simulation studies indicate that it achieves good results under parameter perturbations of the ship and environmental disturbances.

2 Ship Steering Model and Disturbance Model


2.1 Ship Motion Mathematical Model and Rudder Model
Ship motion can be described by a state-space model or by an input-output model. The former description can handle the ship's multi-parameter problem and has an accurate
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 476–481, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Design of Ship Course Controller 477

reflection of wind and wave disturbances, but its calculation is very complex. The latter is called the responding ship model method. This method, which eliminates the transverse excursion, is mainly based on the ship's dynamic behaviour, i.e. the relation between the rudder angle δ and the course angular velocity ψ̇. The differential equation can still include the nonlinearity, and can even reflect the wind and wave disturbances as an equivalent rudder angle δd which, together with the rudder angle δ, is taken into the ship model. The linear Nomoto responding ship model is Tψ̈ + ψ̇ = Kδ.
Ship course motion is essentially nonlinear: the model parameters are perturbed by changes of water depth, navigation speed and loading weight, and also by wind and wave disturbances. Besides, the auto-rudder is a dynamic system restrained by the maximum rudder angle and the maximum rudder angular velocity. Taking all of the above into account, the ship course motion model is usually based on an improved two-dimensional Nomoto model that includes uncertain items. The model is described as follows:

(T + ΔT)ψ̈ + (K + ΔK)H(ψ̇) = (K + ΔK)(δ + δd)
H(ψ̇) = (α + Δα)ψ̇ + (β + Δβ)ψ̇³
(TE + ΔTE)δ̇ + δ = (KE + ΔKE)δc                (1)
|δ̇| ≤ δ̇max ,  |δ| ≤ δmax

where
ψ̈, ψ̇: ship course angular acceleration (rad/s²) and angular velocity (rad/s);
δ̇, δ: steering autopilot angular velocity (rad/s) and angle (rad);
δc, δd: steering autopilot control angle (rad) and disturbance equivalent angle (rad);
T, ΔT, TE, ΔTE: ship time constant (s), steering autopilot time constant (s) and their perturbations;
K, ΔK, KE, ΔKE: ship course control gain (s⁻¹), rudder control gain (dimensionless) and their perturbations;
α, Δα, β, Δβ: nonlinear coefficients and their perturbations (s²/rad²);
δ̇max, δmax: maximum rudder angular velocity (rad/s) and maximum rudder angle (rad).
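As a quick plausibility check, the nominal model above can be integrated numerically. The sketch below (plain Python with Euler integration, our own illustrative construction and not code from the paper; the numeric values are the nominal ship parameters quoted later in the simulation section) drives the model with a constant 5° rudder command:

```python
import math

# Nominal parameters quoted in the simulation section of this paper
K, T = 0.48, 216.58              # course control gain (1/s) and ship time constant (s)
alpha, beta = 9.16, 10814.30     # nonlinear coefficients (s^2/rad^2)
TE, KE = 2.5, 1.0                # rudder servo time constant (s) and gain
rate_max = 0.052356              # max rudder angular velocity (rad/s)
ang_max = 0.61082                # max rudder angle (rad)

def step(state, delta_c, delta_d=0.0, h=0.1):
    """One Euler step of the improved Nomoto model of Eq. (1).
    state = (psi, r, delta): course (rad), turn rate (rad/s), rudder (rad)."""
    psi, r, delta = state
    # rudder servo: TE*delta_dot + delta = KE*delta_c, with rate and angle limits
    delta_dot = (KE * delta_c - delta) / TE
    delta_dot = max(-rate_max, min(rate_max, delta_dot))
    delta = max(-ang_max, min(ang_max, delta + h * delta_dot))
    # ship dynamics: T*r_dot + K*H(r) = K*(delta + delta_d), H(r) = alpha*r + beta*r**3
    r += h * K * ((delta + delta_d) - (alpha * r + beta * r ** 3)) / T
    psi += h * r
    return (psi, r, delta)

state = (0.0, 0.0, 0.0)
for _ in range(6000):            # 600 s at h = 0.1 s
    state = step(state, delta_c=math.radians(5))
print(math.degrees(state[1]))    # steady turn rate (deg/s) for a 5 deg rudder
```

At steady state the turn rate satisfies αr + βr³ = δ, which for a 5° rudder gives roughly 0.5°/s with these coefficients.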

2.2 Disturbance Model

Different methods are used in different publications, but in this paper only a uniform stochastic wave disturbance is used, given as follows:

δd = 4.58H3 + 3.44H4                (2)

where H3 and H4 are two independent pseudo-random variables subject to the uniform distribution U(0,1).
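Sampling the disturbance of Eq. (2) is a one-liner; the sketch below (plain Python, our own illustration) also checks the sample mean against the theoretical value (4.58 + 3.44)/2:

```python
import random

def wave_disturbance(rng=random):
    """One sample of the uniform stochastic wave disturbance of Eq. (2):
    an equivalent rudder angle built from two independent U(0,1) draws."""
    H3, H4 = rng.random(), rng.random()
    return 4.58 * H3 + 3.44 * H4

samples = [wave_disturbance() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))   # close to the theoretical mean (4.58 + 3.44)/2 = 4.01
```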
478 H. Xiao, W. Pan, and Y. Han

3 Design of Genetic Algorithm ADRC Controller for Ship Course


Tracking

3.1 The Ship Course ADRC Controller Design

The ultimate goal of the designed ship course controller is to maintain the course, so the servo mechanism and the ship course mathematical model can be considered as a whole, which greatly simplifies the controller design. The design is shown in Fig. 1, where the ADRC is second-order.

[Block diagram: ADRC → rudder → ship]

Fig. 1. Single ring ship course controller based on ADRC

3.2 Design of Genetic Algorithm ADRC

The performance of an ADRC depends heavily on the selection of its parameters, which is mainly determined by experiment. Extensive simulation research shows that an ADRC controller can be designed by the separability principle, i.e. the TD, the ESO and the error feedback part are designed individually and then combined into the complete ADRC. Among the parameters, α1, α2, δ1, δ2, δ01 and δ02 are invariable, and the three ESO gains can be generated automatically; only kp and kd need to be set manually, which is inconvenient in actual operation and when parameters change. Therefore a genetic-algorithm-based method is proposed in which kp and kd are adjusted automatically. In this paper, a genetic algorithm controller is designed that optimally approximates kp and kd according to e1 and e2, which are taken as its inputs. The active disturbance rejection parameters are modified online under the genetic algorithm rules in order to satisfy the requirements at different times and improve the control performance of the ADRC. Based on the analysis above, the structure of the genetic algorithm ADRC is shown in Fig. 2.
When solving optimization problems with genetic algorithms, the encoding scheme, the genetic operators and the setting of the fitness function are very important, because the fitness function is not only the criterion for judging an individual's adaptive capacity but also the key to ensuring that the best individual is the optimal solution of the problem. The fitness function is closely related to the specific research question; it should reflect the essence of the problem under study and also be easy to compute.

[Block diagram: a Genetic Algorithm block adjusts kp and kd of the NLSEF; the errors e1 = v1 − z1 and e2 = v2 − z2 feed the NLSEF, whose output u0 is combined with z3/b to form u applied to the object; the ESO forms z1, z2, z3 from u and the output y]
Fig. 2. Genetic Algorithm ADRC
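The ESO/NLSEF structure of Fig. 2 can be sketched in a few lines. The code below is our own illustrative construction, not the paper's implementation: it uses a linear-gain ESO, Han's fal function in the NLSEF, and a double-integrator sanity check; b0, the fal δ values and the plant are assumptions. It shows how z1, z2, z3 are formed and where kp and kd enter:

```python
def fal(e, a, d):
    """Han's nonlinear gain: linear inside |e| <= d, fractional power outside."""
    if abs(e) <= d:
        return e / d ** (1.0 - a)
    return abs(e) ** a * (1.0 if e > 0 else -1.0)

class ADRC2:
    """Second-order ADRC in the shape of Fig. 2: a linear-gain extended state
    observer (ESO) plus an NLSEF whose gains kp, kd are the two parameters
    the genetic algorithm tunes. Numeric defaults are illustrative."""
    def __init__(self, kp=1.7, kd=28.0, beta=(20.0, 26.0, 28.0), b0=1.0, h=0.01):
        self.kp, self.kd, self.b0, self.h = kp, kd, b0, h
        self.b1, self.b2, self.b3 = beta
        self.z1 = self.z2 = self.z3 = 0.0

    def observe(self, y, u):
        """One ESO step: z1 tracks y, z2 tracks dy/dt, z3 the total disturbance."""
        e = self.z1 - y
        self.z1 += self.h * (self.z2 - self.b1 * e)
        self.z2 += self.h * (self.z3 - self.b2 * e + self.b0 * u)
        self.z3 += self.h * (-self.b3 * e)

    def control(self, v1, v2):
        """NLSEF plus disturbance compensation: u = (u0 - z3) / b0."""
        e1, e2 = v1 - self.z1, v2 - self.z2
        u0 = self.kp * fal(e1, 0.5, 0.05) + self.kd * fal(e2, 0.25, 0.1)
        return (u0 - self.z3) / self.b0

# Sanity check of the ESO alone: a double integrator y'' = u + w with an
# unknown constant disturbance w; z3 should converge to w.
w, y, dy, u = 0.5, 0.0, 0.0, 0.0
obs = ADRC2()
for _ in range(3000):              # 30 s at h = 0.01 s
    obs.observe(y, u)
    y += obs.h * dy                # plant stepped with the same Euler scheme
    dy += obs.h * (u + w)
print(round(obs.z3, 3))            # → 0.5: the ESO has estimated the disturbance
```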

1) Determination of optimal objective function


According to the optimization problem of the ADRC, in order to comprehensively evaluate the dynamic and static performance of the control system (e.g. response time, settling time, overshoot and steady-state error) and to follow the minimum-energy principle, the objective function is chosen as the time-weighted integral of the absolute error (the ITAE performance index) plus the absolute value of the control effort, namely:

J(e) = ∫₀ᵗ ( τ·|y − v₀| + |u| ) dτ                (3)

Its reciprocal is taken as the fitness function; the optimal control parameters are the controller parameters corresponding to the largest fitness.
2) Encoding and genetic manipulation
Encoding: individuals are encoded as real numbers, which avoids the trouble of the usual binary encoding and decoding. {kp, kd}, composed of the ADRC parameters, is selected as the individual representation.
Genetic manipulation: a simple single-point crossover is used. Mutation is adaptive: when the fitness is high, the mutation rate is reduced; when the fitness is low, the mutation rate is increased. Common proportional (roulette) selection combined with the optimal-retention policy is chosen. The optimal-retention policy guarantees that the best individual obtained so far will not be destroyed by crossover or mutation; together with the other selection methods it is an important guarantee of reaching the optimal value.
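The encoding and genetic operations above can be sketched end-to-end. In the toy below (our own illustration: the plant is a PD-controlled double integrator standing in for the ship loop, and the ranges, rates and weights are invented), individuals are real-coded {kp, kd}, crossover is single-point, mutation is adaptive, and the elite is retained:

```python
import random

def cost(kp, kd, h=0.05, steps=200):
    """ITAE-plus-control-effort index in the spirit of Eq. (3), evaluated on a
    PD-controlled double integrator tracking v0 = 1 (a stand-in plant)."""
    y = dy = 0.0
    J = 0.0
    for k in range(steps):
        u = kp * (1.0 - y) - kd * dy
        J += ((k * h) * abs(1.0 - y) + 0.01 * abs(u)) * h
        dy += h * u
        y += h * dy
    return J

def evolve(pop_size=20, gens=16, rng=None):
    """Real-coded GA over {kp, kd}: roulette selection, single-point crossover,
    adaptive mutation and optimal retention (elitism)."""
    rng = rng or random.Random(1)
    pop = [[rng.uniform(0.1, 10.0), rng.uniform(0.1, 5.0)] for _ in range(pop_size)]
    best = min(pop, key=lambda g: cost(*g))
    for _ in range(gens):
        fits = [1.0 / (1e-9 + cost(*g)) for g in pop]   # fitness = 1 / J
        fmax = max(fits)

        def pick():                                      # roulette-wheel selection
            r = rng.uniform(0.0, sum(fits))
            for g, f in zip(pop, fits):
                r -= f
                if r <= 0.0:
                    return g
            return pop[-1]

        nxt = [min(pop, key=lambda g: cost(*g))[:]]      # keep the elite untouched
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            child = [a[0], b[1]]                         # single-point crossover
            # adaptive mutation: mutate more when the child's fitness is low
            rate = 0.05 if 1.0 / (1e-9 + cost(*child)) > 0.5 * fmax else 0.3
            if rng.random() < rate:
                i = rng.randrange(2)
                child[i] = max(0.1, child[i] + rng.gauss(0.0, 0.5))
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=lambda g: cost(*g))
    return best

kp, kd = evolve()
```

The elite copy at the head of each new population is what the optimal-retention policy amounts to in code: the best individual found so far is never crossed over or mutated.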

4 Simulation Results
A simulation experiment is carried out on a ship whose nominal parameters in the Nomoto nonlinear motion model are:

K = 0.48 s⁻¹, T = 216.58 s, α = 9.16 s²/rad², β = 10814.30 s²/rad², TE = 2.5 s, KE = 1 s⁻¹;
δ̇max = 0.052356 rad/s (3°/s), δmax = 0.61082 rad (35°).

The parameters of the ADRC, set by the usual ADRC tuning principles, are: r = 100, h = 30, δ01 = 0.02, δ02 = 0.04, α1 = 0.5, α2 = 0.25, δ1 = 0.05, δ2 = 0.1, β01 = 20, β02 = 26, β03 = 28. The ranges of the optimized parameters are kp = 0.1–10 and kd = 5–100. With the genetic algorithm, the optimal parameters evolved after 16 generations are kp = 1.7 and kd = 28.
The sample period is set to 0.1 s and the simulation time to 500 s.
Owing to limited space, only some simulation results under typical conditions are given in Figs. 3–4.

[Plots: rudder angle (°) and heading angle (°) versus time (0-500 s)]

Fig. 3. Simulation results of expected course change from 4° to 8° (nominal parameters)

[Plots: rudder angle (°) and heading angle (°) versus time (0-500 s)]

Fig. 4. Simulation results of expected course 6° (rational spectrum method disturbance)

From these figures we can see that:

1) Both the dynamic accuracy (course switching) and the static accuracy (course keeping) are very high when the course conversion angle is small.
2) With the nominal model, the course response is highly accurate and the auto-rudder response is very smooth with a small rudder turning volume. When the disturbance is applied, the auto-rudder acts more frequently, but remains smooth and very accurate.
3) Stochastic parameter perturbations and the unmodeled dynamics of the controlled object have little influence on the auto-rudder and the ship course response.

5 Conclusion and Prospect


This paper presents a design of an optimal ADRC for ship course nonlinear control. Simulation results show that the controller is strongly robust to the nonlinear characteristics of the ship, to parameter perturbations, to system disturbances, and to system uncertainties such as unmodeled dynamics. The process of course switching is fast and smooth, and the rudder operates with high precision and very low energy consumption. So it works well as a ship course controller.

Acknowledgements. This work is supported by Natural Science Foundation of


Shandong Province (Project Number: 2009ZRB019B2), Natural Science Foundation
of Shandong Province (Project Number: ZR2009FL013), Natural Science Foundation
of Shandong Province (Project Number: ZR2010FL014), Shandong Jiaotong College
Research and Innovation Teams.

References
1. Han, J.Q.: From PID technique to active disturbance rejection control technique. Control Engineering of China 9(3), 13–18 (2002)
2. Han, J.Q.: Auto disturbances rejection control technique. Frontier Science 1, 24–31 (2007)
3. Zhao, X.R., Zheng, Y., Hou, Y.H.: The rational spectrum modeling of the ocean wave and its simulation method. Journal of System Simulation 4(2), 33–39 (1992)
4. Hu, Y.H., Jia, X.L.: Theory and Application of Predictive Control of Ships Subject to State Constraints. Control and Decision 17(4), 542–547 (2000)
5. Zhou, Y.B., Pan, W.G., Xiao, H.R.: Design of Ship Course Controller Based on Fuzzy Adaptive Active Disturbance Rejection Technique. In: 2010 International Conference on Automation and Logistics, vol. 1, pp. 232–236 (2010)
A CD-ROM Management Device with Free Storage,
Automatic Disk Check Functions

Daohe Chen1,2, Xiaohong Wang1,3, and Wenze Li1,2

1
Science and Technology Projects in Guangdong Province
2
Department of Computer Science and Information Management,
Shengda Economics Trade & Management College of Zhengzhou,
Zhengzhou, China
chendaohe2006@yahoo.com.cn
3
College of Automation Science and Technology,
South China University of Technology,
Guangzhou, China
xhwang@scut.edu.cn

Abstract. With the development of information resources, there are more and more CD-ROM resources. Traditional disc management, storage and search methods cannot meet the current requirements of fast-paced, informationized and standardized CD-ROM management. Based on this demand, we developed a distributed intelligent disc management system with random access, fuzzy search and automatic pop-up functions.

Keywords: intelligence disc management system, random access, fuzzy search,


automatic pop-up.

0 Introduction

With the development of information resources, there are more and more CD-ROM resources in libraries, archives, television and radio stations, hospitals, schools, military units, enterprises and institutions. Large numbers of discs need special storage systems. The original methods of manual management, storage and search by number in CD cases, boxes, drawers and cabinets cannot meet the current requirements of fast-paced, informationized and standardized CD-ROM management. Therefore, designing an intelligent CD-ROM management device is increasingly becoming a pressing need.

1 The Overall Structure of CD-ROM Management Device

The system consists of two parts: the first is the disc storage mechanism, which achieves free disc storage and automatic access; the second is the disc information management and sharing system.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 482–488, 2011.
© Springer-Verlag Berlin Heidelberg 2011
A CD-ROM Management Device with Free Storage, Automatic Disk Check Functions 483

The CD storage organization is divided into a mechanical part and an electrical control part. The electrical control part scans the disc storage positions, accurately positions the stepper motors, and communicates with the outside world through the Internet. The CD information management system mainly completes disc information collection, editing, publishing, and so on. The system holds the conventional CD-ROM information, and also scans and stores the directory structure information of the discs, providing users with a fuzzy-find function over directory and file-header keywords. Another important part of the information management system is network sharing: the contents of a disc can be previewed, the CD directory structure scanned, and the CD information uploaded and shared with others. The overall structure is shown in Fig. 1.

Fig. 1. Overall structure of CD-ROM management device
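As an illustration of the fuzzy-find idea over stored directory structures (not the paper's actual implementation), the sketch below matches a keyword against disc paths using substring matching plus difflib similarity; the catalog contents and the 0.6 cutoff are invented for the example:

```python
import difflib

# Hypothetical stored directory structures, keyed by disc number
catalog = {
    "CD-0001": ["design/report2009.doc", "design/drawings/unit.dwg"],
    "CD-0002": ["lectures/control_systems.ppt", "lectures/notes.txt"],
}

def fuzzy_find(keyword, cutoff=0.6):
    """Return the disc numbers whose directory structure matches the keyword,
    either as a substring of a path or as a close match to a path component."""
    hits = []
    for disc, paths in catalog.items():
        for p in paths:
            parts = p.lower().replace("/", " ").replace(".", " ").split()
            close = difflib.get_close_matches(keyword.lower(), parts, cutoff=cutoff)
            if keyword.lower() in p.lower() or close:
                hits.append(disc)
                break
    return hits

print(fuzzy_find("lecture"))   # → ['CD-0002']
```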

2 Design of CD-ROM Management Hardware

2.1 Design of Mechanical Parts in Modular Unit Storage Structure

In practice, the numbers of discs that different users need to store and manage differ greatly. For universality, a design that is convenient to expand is necessary, so this device adopts a modular unit storage structure.
Each modular storage unit is mainly composed of an axial motor, a screw, a dial, a dial drive motor, a rail and brackets. The axial motor is a stepper motor connected to the screw. By controlling the stepper motor, the screw rotates in the commanded direction and drives the dial drive motor and dial in axial movement, so the dial can reach any storage position along the axis. The dial drive motor (a low-power stepper motor) and the dial together make up the disc-check body: when it arrives at the designated location, the dial drive motor turns the dial to release the disc. Why do the axial motor and the dial drive motor use stepper motors? Compared with DC and AC motors, the biggest advantage of the stepper motor is that its control is simple. As long as it does not lose steps, it can be used in open-loop control, and over a short travel it has high accuracy. Stepper motor control is not only simple but also avoids the more
484 D. Chen, X. Wang, and W. Li

complicated position detection devices, such as rotary transformers, inductosyns and gratings, simplifying the system and greatly reducing interference with other electrical systems.
The CD storage grid has a "W"-shaped profile; the dial only needs to push the disc from the storage stowed position to the disc-taking stowed position. To reduce the load on the screw, the dial drive motor and dial are held on a rail through the bracket, and both rail ends are fixed to brackets. The axial motor and the dial drive motor are controlled by the main control circuit, as shown in Fig. 2.

Fig. 2. Mechanical design of modular unit storage structure

2.2 The Detection Circuit Design of Random Deposition

The disc management device has a specially designed detection circuit for random deposition. When users need to deposit a disc, they can choose any empty position according to the indicator lights.
The circuit consists of two main parts: storage position detection and indication. Storage position detection is achieved with photoelectric sensor pairs installed at the entrance of each disc storage slot. When the light path is blocked by a disc, the sensor output changes level, identifying an occupied position. The main control circuit reads the optocoupler level signals through a parallel-in/serial-out shift register. The sensor pairs are arranged spatially in a delta shape, which reduces the limitation that the thickness of the emitter and receiver tubes imposes on the slot size. The indicator lamps are ultra-bright LEDs, driven by the main control circuit through a serial-in/parallel-out shift register, as shown in Fig. 3.
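The read-out path can be sketched in software. The snippet below is our own behavioural model (74HC165/74HC595-style registers are an assumption; the paper does not name parts): it latches the optocoupler levels through a parallel-in/serial-out register, finds the free slots, and builds the LED word for the serial-in/parallel-out register:

```python
def read_piso(bits):
    """Simulate clocking a parallel-in/serial-out shift register (74HC165
    style): latch the optocoupler levels, then shift them out MSB first.
    bits[i] == 1 means slot i's light path is blocked (a disc is present)."""
    word = 0
    for b in bits:                 # one clock pulse per slot
        word = (word << 1) | (b & 1)
    return word

def empty_slots(bits):
    """Slot indices whose optocoupler is not blocked (free positions)."""
    return [i for i, b in enumerate(bits) if b == 0]

def led_pattern(free):
    """Bit pattern to push into the serial-in/parallel-out register so the
    indicator LED of every free slot is lit."""
    word = 0
    for i in free:
        word |= 1 << i
    return word

occupancy = [1, 0, 1, 1, 0, 0, 1, 0]      # example state of 8 storage slots
free = empty_slots(occupancy)
print(free)                                # → [1, 4, 5, 7]
print(bin(led_pattern(free)))              # → 0b10110010
```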

Fig. 3. Deposition detection circuit



2.3 Hardware Structure of the Main Control Circuit


From the perspective of actual use, the system adopts a two-level control scheme. The first level is the distributed control units, which consist of embedded control systems; they can either work as separate systems or be combined with a general server into a large network-based system. The second level consists of the master server, which completes the information system management and external network communication.
To achieve stepper motor control, human-computer interaction and network communication, and with availability and scalability in mind from the start, the distributed control unit uses an embedded computer, an industrial PC on the PC/104 bus, as its control center. The PC/104 single-board computer serves as the core processor communicating with the bottom control panel, and also provides the user interface, completes the human-computer interaction, and communicates with the background master server through the network. The PC/104 single-board computer uses a compact stacked connection structure suitable for embedded applications; with its small size, low power consumption, good vibration resistance, high reliability and flexible configuration, it has been widely used in industrial control.
In the system design, stepper motor control is a key part. Rather than a complex hardware design, we focus mainly on the interface circuit and the control software. In the interface circuit we use a CPLD (Complex Programmable Logic Device) as the key device. The computer connects to the interface circuit through standard PC/104 interfaces. The interface board consists of the CPLD and the necessary peripheral circuits. It receives action commands sent by the CPU, converts these instructions into the corresponding pulse and direction signals, and directs the stepper motor movement through the stepper motor driver. At the same time, it receives signals from the field and sends part of them to the CPU, completing the control of the whole system. Here the CPLD is an Altera EPM7128, and the development software is MAX+plus II, as shown in Fig. 4.

Fig. 4. Control unit structure

The control of the system combines software and hardware. The computer only sends pulse-count and direction instructions to the interface circuit, so the software occupies very little storage. There is no strict timing constraint on the program, and the computer can freely perform other tasks between motor steps, achieving more motion control of the stepper motor.
As the core of the interface circuit, the pulse generating circuit is shown in Fig. 5. The circuit consists of data latches, counters, data comparators and the necessary logic. The pulse input in the figure is the continuous pulse signal sent by the pulse source. It is

controlled by the "start" signal and the "send end" signal: only when the "start" signal is asserted and the "send end" signal is invalid (equal to 1) can the pulse signal from the pulse source enter the counter and be output to the stepper motor driver.

Fig. 5. Pulse circuit

The pulse sending flow chart is shown in Fig. 6. First, the CPU sends a clear signal to the counter through the expansion port of the CPLD, and then writes the pre-planned number of pulses into the data latch. After latching, the latch outputs the number of pulses in hexadecimal. Of the two inputs of the data comparator, the latch input holds the number of pulses while the counter input is zero, so the two data are unequal and the comparator output is 0; after negation it is sent to the AND gate, and then the "start" signal is sent through the expansion port. In this case the pulse signal from the pulse source passes through the AND gate to the pulse output port and to the counter's pulse input port. As pulses are output, the counter counts. When the count equals the data in the latch, the comparator output becomes 1; after negation it blocks the output pulses through the AND gate, and the pulse sending process is over.

Fig. 6. Pulse flow


A CD-ROM Management Device with Free Storage, Automatic Disk Check Functions 487

When the CPU detects the "send end" signal, it can repeat the above process for the next pulse transmission. If a large number of pulses must be sent, they can be sent over multiple cycles, or the widths of the latches, counters and data comparators in the CPLD can be increased.
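The latch/counter/comparator handshake described above can be modelled in a few lines (a behavioural sketch of the logic, not the CPLD implementation):

```python
def send_pulses(n_planned):
    """Software model of the pulse circuit of Fig. 5: a latch holds the planned
    pulse count, a counter counts emitted pulses, and the comparator gates the
    pulse source off once they match ("send end")."""
    latch = n_planned          # CPU writes the pulse count into the data latch
    counter = 0                # counter was cleared by the CPU beforehand
    emitted = 0
    start = True               # CPU raises the "start" signal
    while start and counter != latch:   # comparator output gates the AND gate
        emitted += 1           # one pulse reaches the driver and the counter
        counter += 1
    send_end = (counter == latch)       # comparator asserts "send end"
    return emitted, send_end

pulses, done = send_pulses(200)
print(pulses, done)            # → 200 True
```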

Fig. 7. Block diagram of lifting-speed circuit

When a stepper motor starts or stops at a high frequency, or its speed changes abruptly, it will lose steps or even fail to start, so the program must provide a lifting-speed (acceleration/deceleration) function. Many lifting-speed curves are discussed in the literature; the block diagram of the lifting-speed circuit is shown in Fig. 7. This part of the circuit consists of a frequency divider, a frequency synthesizer circuit, a data latch and a data selector. After the crystal pulse signal passes through the frequency divider and the pulse synthesis circuit, pulse signals of several frequencies are produced and fed to the data inputs of the data selector. The CPU sends the corresponding control signal to the data selector through the data latch, and the output port then emits pulses of the selected frequency. This pulse signal serves as the pulse source for the pulse circuit described above. For process control, in the pulse flow chart one only needs to set the pulse frequency at any step before the "start" signal is sent.
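A minimal lifting-speed profile, linear ramp up, constant speed, linear ramp down, can be sketched as follows (the frequencies and step counts are invented for illustration; a real design would take them from the motor's torque curve):

```python
def ramp_profile(f_start, f_max, steps_accel, steps_total):
    """Step-pulse frequency table: ramp from f_start up to f_max over
    steps_accel steps, hold, then ramp back down, so the motor is never
    started or stopped at a high frequency."""
    assert 2 * steps_accel <= steps_total
    profile = []
    for k in range(steps_total):
        if k < steps_accel:                       # ramp up
            f = f_start + (f_max - f_start) * k / steps_accel
        elif k >= steps_total - steps_accel:      # ramp down
            f = f_start + (f_max - f_start) * (steps_total - 1 - k) / steps_accel
        else:                                     # constant speed
            f = f_max
        profile.append(f)
    return profile

prof = ramp_profile(500.0, 4000.0, steps_accel=50, steps_total=300)
```

In the flow above, each entry of this table is what the CPU would write to the data selector before raising the "start" signal for the corresponding batch of pulses.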

3 Software Design of CD-ROM Management Device

The information management system mainly implements disc information input, editing, updating and deletion, and CD-ROM information sharing over the network. The system has two main views: the general user view and the system administrator view. The management procedures are divided into three parts: user management, administration and other management. User management mainly provides user registration, changes of registration information, data browsing and query, and data download. The administration section includes system settings, data storage and modification, user management, announcements, administrator password changes, and user issue and message processing. Other management includes counting, statistics, user messages and user requests, and provides and manages personal CD data. The system uses ASP + C#.NET + SQL Server as the development tools and, considering the needs of actual use, combines the C/S (Client/Server) and B/S (Browser/Server) structures. Operations with large data volumes, such as uploading image files, use the C/S implementation; operations with small data volumes, such as adding, modifying and deleting disc information and user management, use the B/S implementation.
The distributed unit runs the WinCE operating system; by writing low-level driver functions for the ISA bus, it communicates with the lower control board. A small embedded database is installed on the WinCE system; together with the master server it forms a networked database structure kept synchronized with the master server. When information on the server is modified, the server notifies the distributed units to trigger a database update event.

4 Experimental Results

The project has successfully developed a pilot prototype of a multi-layer disc management unit. It allows a CD-ROM to be retrieved through a web page by disc name, number, type, content and other related information, and uses the disc directory structure scanning function to save the directory structure information and automatically associate it with the disc number. This solves the shortcomings of existing technology, which saves information in a single form and uses storage space inefficiently. The modular structure achieves unlimited levels of expansion and provides a reference for the design of similar products.

5 Conclusion

As disc capacities increase, CD-ROM management will become more intelligent and user-friendly. Future improvements will focus on network management, remote query and information sharing, making disc management more efficient and more convenient.

Acknowledgment. This work was supported by Science and Technology Projects in


Guangdong Province (2009B010900018).

References

1. Xia, M.: Research and Design of Library CD-ROM. Suzhou University (2002)
2. Zhen, T.: Analysis and Design of CD-ROM Electronic Records Management System Based on Disc Database. Xi'an University of Architecture & Technology (2002)
3. Zha, Y.: CD-ROM Library File Cache Management System. Wuhan University (2004)
4. Xu, D.Y.: Principles of Disc Storage System. National Defence Industry Press (January 2000)
An Efficient Multiparty Quantum Secret Sharing with
Pure Entangled Two Photon States*

Run-hua Shi1,2 and Hong Zhong1,2


1
Key Laboratory of Intelligent Computing & Signal Processing of Ministry of Education,
Anhui University, Hefei, 230039, China
hfsrh@sina.com
2
School of Computer Science and Technology,
Anhui University, Hefei, 230039, China
zhongh@mail.ustc.edu.cn

Abstract. We present an efficient multiparty quantum secret sharing scheme using the dense coding method. The implementation of this scheme only needs pure entangled two-photon pairs as quantum resources, where each pure entangled two-photon pair can carry more than one bit of classical information. Thus it obtains a higher efficiency than other schemes with pure entangled states.

Keywords: quantum information, quantum cryptography, quantum secret


sharing.

1 Introduction
Secret sharing has been studied extensively in the classical setting and was extended to the quantum setting. The first quantum secret sharing (QSS) protocol was presented by Hillery, Bužek and Berthiaume [1] in 1999. Afterwards, many studies focused on QSS in both theoretical [2-4] and experimental [5-6] aspects. At present, most QSS schemes split a (classical or quantum) secret by quantum mechanics principles. These QSS schemes fall into two categories: those that split only a classical secret and those that share a quantum secret (i.e., an arbitrary unknown quantum state). In this paper, we mainly focus on the former.
To date, many QSS schemes for sharing a classical secret have been proposed. According to the quantum resources they use, these QSS schemes can be divided into three categories: QSS with single photons [7-10], QSS with maximally entangled states [11-14], and QSS with non-maximally entangled states (i.e., pure entangled states) [15, 16]. For the first kind, with single photons, the most difficult problem is obtaining ideal single photons, which requires high-precision experimental equipment. For the second kind, with maximally entangled states, the generation and distribution of maximally entangled multi-particle states is a difficult task. For the third kind of QSS with the pure

*
This work was supported by the Natural Science Foundation of Anhui Province (No.
11040606M141), Research Program of Anhui Province Education Department (No.
KJ2010A009) and the 211 Project of Anhui University.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 489–495, 2011.
© Springer-Verlag Berlin Heidelberg 2011
490 R.-h. Shi and H. Zhong

entangled states, the advantage is ease of implementation, but the drawback is low efficiency; for example, each pure entangled state carries only one bit of classical information in References [15, 16].
In this paper, we present an efficient multiparty quantum secret sharing scheme with pure entangled two-photon states. In our scheme, the sender takes pure entangled two-qubit states as the quantum resource, instead of pure entangled three- or multi-qubit states, which makes our scheme more convenient in practical applications than the scheme in Ref. [15], since generating three- or multi-qubit states is more difficult than generating two-qubit states. In particular, the total efficiency of our QSS scheme is higher than that of the existing QSS schemes of the third type [15, 16].

2 Dense Coding with Pure Entangled Two-Photon States


Dense coding was first proposed by Bennett and Wiesner in 1992 [17]. Utilizing entanglement properties, dense coding can transmit more than one bit of information by manipulating only one of the two particles in an entangled state. In this section, we present a dense coding method with pure entangled two-photon states, in which the receiver first performs a projective measurement and then applies a generalized measurement in order to discriminate the sender's unitary operators with some probability.
Suppose that the sender Alice and the receiver Bob each obtain one photon of a pure entangled two-photon state |Ψ⟩ab, where Alice owns photon a and Bob keeps photon b with him. Without loss of generality, we assume that the two-photon pair (a, b) is in the state

|Ψ⟩ab = α|00⟩ab + β|11⟩ab ,                (1)

where α² + β² = 1 and α ≥ β > 0 (α, β taken real).

Alice first applies any one of the four unitary operators {U1, U2, U3, U4} to photon a and then sends photon a to Bob, where

U1 = I = |0⟩⟨0| + |1⟩⟨1| ,  U2 = σz = |0⟩⟨0| − |1⟩⟨1| ,
U3 = σx = |0⟩⟨1| + |1⟩⟨0| ,  U4 = iσy = |0⟩⟨1| − |1⟩⟨0| .                (2)

After receiving photon a from Alice, the state of Bob's two-photon pair (a, b) is one of the following four cases:

(I ⊗ I)|Ψ⟩ab = α|00⟩ab + β|11⟩ab = |Ψ⟩ab ,
(σz ⊗ I)|Ψ⟩ab = α|00⟩ab − β|11⟩ab = |Ψ'⟩ab ,
(σx ⊗ I)|Ψ⟩ab = α|10⟩ab + β|01⟩ab = |Φ⟩ab ,
(iσy ⊗ I)|Ψ⟩ab = β|01⟩ab − α|10⟩ab = |Φ'⟩ab .                (3)
An Efficient Multiparty Quantum Secret Sharing with Pure Entangled Two Photon States 491

Obviously, the above four states are not mutually orthogonal, so these states cannot be
distinguished with certainty. But, they can be distinguished with some probability of
success.
In order to distinguish the four states, Bob first performs a projection onto the
subspaces spanned by the basis states $\{|00\rangle, |11\rangle\}$ and $\{|01\rangle, |10\rangle\}$ with the
projection operators $P_1 = |00\rangle\langle 00| + |11\rangle\langle 11|$ and $P_2 = |01\rangle\langle 01| + |10\rangle\langle 10|$. It is
obvious that $P_1$ and $P_2$ are mutually orthogonal, and the following equations hold:

$\langle\Psi|P_1|\Psi\rangle = 0$,  $\langle\Psi'|P_1|\Psi'\rangle = 0$,  $\langle\Phi|P_2|\Phi\rangle = 0$,  $\langle\Phi'|P_2|\Phi'\rangle = 0$.  (4)

That is, if Bob obtains $P_1$, then he knows that the state of the two-photon pair (a, b)
is either $|\Phi\rangle_{ab}$ or $|\Phi'\rangle_{ab}$; if he gets $P_2$, the state is either $|\Psi\rangle_{ab}$ or $|\Psi'\rangle_{ab}$.
Without loss of generality, we assume that Bob obtains $P_1$. Thus, Bob knows that
the state of the two-photon pair (a, b) is either $|\Phi\rangle_{ab}$ or $|\Phi'\rangle_{ab}$, but he cannot tell
with certainty which of the two it is. In order to learn whether the state is $|\Phi\rangle_{ab}$ or
$|\Phi'\rangle_{ab}$, Bob performs a generalized measurement on his two-photon entangled state
with the corresponding POVM elements in the subspace $\{|00\rangle, |11\rangle\}$:

$M_1 = \frac{1}{2}\begin{pmatrix} \beta^2/\alpha^2 & \beta/\alpha \\ \beta/\alpha & 1 \end{pmatrix}$,  $M_2 = \frac{1}{2}\begin{pmatrix} \beta^2/\alpha^2 & -\beta/\alpha \\ -\beta/\alpha & 1 \end{pmatrix}$,  $M_3 = \begin{pmatrix} 1 - (\beta/\alpha)^2 & 0 \\ 0 & 0 \end{pmatrix}$.  (5)

Here $\sum_{i=1}^{3} M_i = I$. If Bob obtains $M_1$, then his two-photon state is $|\Phi\rangle_{ab}$ because
$\langle\Phi'|M_1|\Phi'\rangle = 0$; if he gets $M_2$, then the state is $|\Phi'\rangle_{ab}$ because $\langle\Phi|M_2|\Phi\rangle = 0$.
However, if he gets $M_3$, the outcome is completely inconclusive and Bob cannot
obtain any information. Obviously, the probability of successfully distinguishing
$|\Phi\rangle_{ab}$ from $|\Phi'\rangle_{ab}$ is $2\beta^2$ [18]. Similarly, for the case of $P_2$ one can show that the
success probability is also $2\beta^2$. So, the average amount of information transmitted
from Alice to Bob is $1 + 1 \times 2\beta^2 = 1 + 2\beta^2$ bits. Thus, if Alice encodes the four
unitary operators $U_1$, $U_2$, $U_3$ and $U_4$ as 00, 01, 10 and 11, respectively, Bob can
determine whether the coded information lies in {00, 01} or in {10, 11} with 100%
probability by using the projective measurement; furthermore, he can identify it as
00 (10) or 01 (11) with success probability $2\beta^2$ by using the generalized
measurement. In particular, he can distinguish the two bits of Alice's coded
information with 100% probability of success when $\alpha = \beta = 1/\sqrt{2}$.
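As a numerical sanity check (an illustration we add here, not part of the original scheme; real coefficients are used for simplicity), the following NumPy sketch builds the projector of Eq. (4) and the POVM of Eq. (5) and verifies the unambiguous-discrimination properties claimed above, including the $2\beta^2$ success probability:

```python
import numpy as np

# Two-qubit computational basis kets |00>, |01>, |10>, |11>
e = np.eye(4)

alpha, beta = np.sqrt(0.7), np.sqrt(0.3)     # alpha^2 + beta^2 = 1, alpha >= beta

phi   = alpha * e[0] + beta * e[3]           # |Phi>   (Alice applied U1)
phi_p = alpha * e[0] - beta * e[3]           # |Phi'>  (Alice applied U2)
psi   = alpha * e[2] + beta * e[1]           # |Psi>   (Alice applied U3)

# Step 1: projection onto the {|00>,|11>} subspace, Eq. (4)
P1 = np.outer(e[0], e[0]) + np.outer(e[3], e[3])
assert np.isclose(phi @ P1 @ phi, 1.0)       # P1 always fires on |Phi>, |Phi'>
assert np.isclose(psi @ P1 @ psi, 0.0)       # ... and never on |Psi>, |Psi'>

# Step 2: POVM of Eq. (5), written in the {|00>, |11>} subspace,
# where |Phi> and |Phi'> become the 2-vectors (alpha, +/-beta)
r = beta / alpha
M1 = 0.5 * np.array([[r**2,  r], [ r, 1.0]])
M2 = 0.5 * np.array([[r**2, -r], [-r, 1.0]])
M3 = np.array([[1.0 - r**2, 0.0], [0.0, 0.0]])
f, fp = np.array([alpha, beta]), np.array([alpha, -beta])

assert np.allclose(M1 + M2 + M3, np.eye(2))  # completeness: M1 + M2 + M3 = I
assert np.isclose(fp @ M1 @ fp, 0.0)         # M1 never fires on |Phi'> ...
assert np.isclose(f @ M2 @ f, 0.0)           # ... nor M2 on |Phi>: unambiguous
assert np.isclose(f @ M1 @ f, 2 * beta**2)   # conclusive with probability 2*beta^2
```

For a maximally entangled pair ($r = 1$) the element $M_3$ vanishes and the discrimination always succeeds, matching the 100% case noted above.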

3 Multiparty Quantum Secret Sharing with Pure Entangled Two-Photon States
In this section, we propose an efficient multiparty quantum secret sharing scheme
with pure entangled two-photon states by using the dense coding method above. We
first consider the three-party case. Suppose that there are three parties, say, Alice,
Bob and Charlie, where Alice is the sender and Bob and Charlie are the two agents.
Alice first prepares a pure entangled two-photon pair (h, t), which is in either
$|\Phi\rangle_{ht} = \alpha|00\rangle_{ht} + \beta|11\rangle_{ht}$ or $|\Psi\rangle_{ht} = \alpha|01\rangle_{ht} + \beta|10\rangle_{ht}$, where
$|\alpha|^2 + |\beta|^2 = 1$ (suppose that $\alpha \ge \beta$). Then Alice holds the photon h in hand and sends the photon t to Bob. After
receiving the photon t , Bob randomly chooses a unitary operation U i ( i = 1, 2, 3 or
4) and applies it on the photon t . Furthermore, Bob transmits the photon t to
Charlie. After receiving the photon t , Charlie also does the same things as Bob, that
is, he encodes his secret on the photon t by applying the unitary operation U i ( i = 1,
2, 3 or 4), and sends back the photon t to Alice. After receiving the photon t sent by
Charlie, Alice performs the projective measurement and the generalized measurement
described in the previous section in order to distinguish the state of the entangled
two-photon pair (h, t). From her measurement outcomes, Alice can extract the secret
information shared among all agents; that is, she can establish a secret key shared among
all agents.
Now, let us describe the principle of our three-party QSS scheme in detail as
follows:
(1) Alice, Bob and Charlie first agree that the four unitary operations $U_1$, $U_2$, $U_3$
and $U_4$ represent the two-bit strings 00, 01, 10 and 11, respectively.
(2) Alice prepares an ordered sequence of N photon pairs. Each photon pair (h, t) is
randomly in one of the two pure entangled states $|\Phi\rangle_{ht}$ and $|\Psi\rangle_{ht}$
($|\Phi\rangle_{ht} = \alpha|00\rangle_{ht} + \beta|11\rangle_{ht}$ and $|\Psi\rangle_{ht} = \alpha|01\rangle_{ht} + \beta|10\rangle_{ht}$).
Then she divides the photons into two sequences,
$[P_1(h), P_2(h), \ldots, P_N(h)]$ and $[P_1(t), P_2(t), \ldots, P_N(t)]$, which are denoted as $S_h$ and
$S_t$, respectively. Furthermore, Alice prepares 3k (k << N) decoy photons by
measuring the photons x in some pure photon pairs (x, y) and operating on the
remaining photons y with the two unitary operations $U_i$ (i = 1, 3) and the H
operation as in References [15, 16], randomly inserts these decoy photons into
the sequence $S_t$, and makes a record of the insertion positions and the
measurement bases of the decoy photons.
(3) Alice sends the new sequence St (including N + 3k photons) to Bob over the
quantum channel and retains the remaining sequence S h . Alice has to confirm
that Bob has actually received the sequence St via classical communication.
(4) After being notified that Bob has received the sequence $S_t$, Alice performs an
eavesdropping check of this transmission: Alice announces the positions of k
photons of all decoy photons and their corresponding measurement bases. Bob

measures these decoy photons according to Alice's announcements and tells
Alice his measurement outcomes. Alice compares the measurement outcomes of
Bob with the initial states of these decoy photons and analyzes the security of the
transmissions. If the error rate is higher than the threshold determined by the
channel noise, Alice cancels this protocol and restarts; or else they continue to the
next step.
(5) Bob randomly chooses one of the four local unitary operations U i ( i = 1,2,3,4 ) to
encrypt each of the remaining photons in the sequence St , say U b , and then he
sends the new sequence St (including N + 2k photons) to Charlie over the
quantum channel. Alice has to confirm that Charlie has actually received the
sequence St via classical communication.
(6) After being notified that Charlie has received the sequence $S_t$, Alice performs
an eavesdropping check of this transmission with the help of Bob. If the error rate
is higher than the threshold determined by the channel noise, Alice cancels this
protocol and restarts; otherwise they continue to the next step.
(7) Charlie randomly chooses one of the four local unitary operations U i
( i = 1,2,3,4 ) to encrypt each of the remaining photons in the sequence St , say
U c , and then sends back the new sequence St (including N + k photons) to
Alice over the quantum channel.
(8) After receiving the sequence St from Charlie, Alice performs an eavesdropping
check of this transmission with the help of Bob and Charlie. If the error rate is
higher than the threshold determined by the channel noise, Alice cancels this
protocol and restarts; or else they continue to the next step.
(9) Alice first performs a projective measurement with the corresponding projection
operators $\{P_1, P_2\}$ and then a generalized measurement with the
corresponding POVM elements $\{M_1, M_2, M_3\}$ on the photon pair ($P_i(h)$,
$P_i(t)$) in the two sequences $S_h$ and $S_t$ for $i = 1, 2, \ldots, N$, respectively. From the
measured outcome of each entangled photon pair, Alice can infer one bit or two
bits of the coded information shared between Bob and Charlie. If she gets $M_3$ in the
stage of performing the generalized measurements, Alice announces one bit of
classical information 0 to Bob and Charlie, which shows that only the first bit
of $\tilde{U}_b \oplus \tilde{U}_c$ will be the shared secret information, where $\tilde{U}_b$ and $\tilde{U}_c$ denote the
coded information of the corresponding local unitary operations of Bob and
Charlie, respectively; otherwise, Alice announces one bit of classical information
1 to Bob and Charlie, which shows that the whole two bits of $\tilde{U}_b \oplus \tilde{U}_c$ will be
the shared secret information. Alice transforms her measured outcome sequence
into the classical bit string $K_A$. From Alice's announcements, Bob and Charlie can
transform their local unitary operation sequences into the classical bit strings $K_B$
and $K_C$.
In this three-party QSS scheme, Bob and Charlie can collaborate to infer the shared
secret key of Alice, since $K_A$, $K_B$ and $K_C$ satisfy the relationship $K_A = K_B \oplus K_C$,
where the average bit length of $K_A$ is $N(1 + 2\beta^2)$. The generalization of this three-party
QSS scheme to the case with n agents can be implemented in a simple way by
modifying the processes of the two-agent case. We describe it after step 5
discussed above.
(6') Charlie randomly chooses one of the four local unitary operations U i
( i = 1,2,3,4 ) to encrypt each of the remaining photons in the sequence St , say
U c , and then sends the sequence St to the next agent, Dick.
(7') Then Alice and Dick complete the eavesdropping check of this transmission,
same as that between Alice and Charlie. If the transmission is secure, Dick
performs one of the four local unitary operations U i ( i = 1,2,3,4 ) to encrypt each
of the remaining photons in the sequence St , say U d , and then sends the
sequence St to the next agent.
(8') After repeating step 7' n − 2 times, the sequence $S_t$ is received securely by
Alice from the last agent, say Zach. Then Alice and Zach complete the
eavesdropping check of this transmission.
(9') After the eavesdropping check, Alice extracts the secret key K A by the same
procedures in step 8 discussed above.
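The key relationship $K_A = K_B \oplus K_C$ holds because the Pauli operators of Eq. (2) compose, up to a global phase, by XOR of their two-bit codes. The following NumPy sketch (an illustration we add, not part of the protocol; it idealizes Alice's decoding as a conclusive measurement outcome) simulates Bob's and Charlie's encodings on the travelling photon t and checks the relation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Dense-coding alphabet of Eq. (2): two-bit codes 00, 01, 10, 11 -> U1..U4
I   = np.eye(2)
sz  = np.diag([1.0, -1.0])                  # sigma_z
sx  = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma_x
isy = np.array([[0.0, 1.0], [-1.0, 0.0]])   # i * sigma_y
U = [I, sz, sx, isy]

alpha, beta = np.sqrt(0.8), np.sqrt(0.2)
e = np.eye(4)                                # basis |00>, |01>, |10>, |11> of (h, t)
phi = alpha * e[0] + beta * e[3]             # initial |Phi>_ht

# The four states reachable by acting on the travelling photon t (second qubit)
codes = [alpha * e[0] + beta * e[3],         # 00: |Phi>
         alpha * e[0] - beta * e[3],         # 01: |Phi'>
         alpha * e[1] + beta * e[2],         # 10: |Psi>
         beta  * e[2] - alpha * e[1]]        # 11: |Psi'>

def decode(state):
    # Alice's ideal decoding, assuming the conclusive (2*beta^2) branch
    # of the generalized measurement; phases are absorbed by abs().
    return int(np.argmax([abs(c @ state) for c in codes]))

K_B, K_C, K_A = [], [], []
for _ in range(200):
    b, c = map(int, rng.integers(4, size=2))   # Bob's and Charlie's 2-bit secrets
    op = np.kron(I, U[c] @ U[b])               # Bob encodes first, then Charlie
    K_A.append(decode(op @ phi))
    K_B.append(b)
    K_C.append(c)

# The three key strings satisfy K_A = K_B XOR K_C on the 2-bit codes
assert all(a == (x ^ y) for a, x, y in zip(K_A, K_B, K_C))
```

In a real run Alice would additionally announce, per pair, whether one or two bits are usable, as described in step (9).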

4 Discussion and Summary


Like that in Ref. [16], our QSS protocol is secure against the intercept-resend attack
and the collusion attack. Furthermore, the security of the proposed protocol still
depends on the process for setting up the quantum channels. To set up the secure
quantum channels, we use the decoy-photon technique as in References [15, 16].
In addition, the total efficiency for QSS can be calculated as $\eta_t = b_s/(q_t + b_t)$,
where $b_s$ is the number of bits of the secret information to be shared, $q_t$
is the number of qubits that are transmitted and used as the quantum channel (except
for those chosen for security checking) and $b_t$ is the number of public classical bits. In our QSS
scheme, $b_s = N(1 + 2\beta^2)$, $q_t = N$ and $b_t = N$, so $\eta_t = (1 + 2\beta^2)/2$. However, in
Zhou et al.'s scheme [15], $b_s = N$, $q_t = mN$ and $b_t = 0$, so $\eta_t = 1/m$, where m is
the number of agents ($m \ge 2$). It is obvious that the total efficiency of our QSS
scheme is higher than that of their scheme.
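The efficiency comparison can be made concrete with a small sketch (ours, for illustration; the helper names `eta_ours` and `eta_zhou` are our own, not from the paper):

```python
def eta_ours(beta_sq):
    """Total efficiency of the proposed scheme: b_s = N(1 + 2*beta^2), q_t = b_t = N."""
    return (1 + 2 * beta_sq) / 2

def eta_zhou(m):
    """Total efficiency of Zhou et al.'s scheme [15]: b_s = N, q_t = m*N, b_t = 0."""
    return 1.0 / m

# Even for a weakly entangled pair (beta^2 = 0.1) and the smallest number of
# agents (m = 2), the proposed scheme stays ahead: 0.6 > 0.5.
assert eta_ours(0.1) > eta_zhou(2)
assert eta_ours(0.5) == 1.0   # maximally entangled pair: 2 bits per transmitted qubit
```

The gap widens as m grows, since `eta_zhou` falls as 1/m while `eta_ours` never drops below 1/2.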
In summary, we have presented a secure multiparty quantum secret sharing
scheme. The implementation of the scheme only needs pure entangled two-photon
pairs as quantum resources, where each transmitted photon can carry more than one
bit of information by the dense coding method (the average amount of information is
$1 + 2\beta^2$ bits). With present techniques, pure entangled two-photon pairs may be
among the optimal entangled quantum resources. Thus, we can deduce that this
scheme is feasible.

References
1. Hillery, M., Buzek, V., Berthiaume, A.: Quantum secret sharing. Phys. Rev. A 59(3), 1829–1834 (1999)
2. Cleve, R., Gottesman, D., Lo, H.K.: How to share a quantum secret. Phys. Rev. Lett. 83(3), 648–651 (1999)
3. Gottesman, D.: Theory of quantum secret sharing. Phys. Rev. A 61(4), 042311(1-8) (2000)
4. Sudhir, K.S., Srikanth, R.: Generalized quantum secret sharing. Phys. Rev. A 71(1), 012328(1-6) (2005)
5. Tittel, W., Zbinden, H., Gisin, N.: Experimental demonstration of quantum secret sharing. Phys. Rev. A 63(4), 042301(1-6) (2001)
6. Lance, A.M., Symul, T., Bowen, W.P., et al.: Tripartite quantum state sharing. Phys. Rev. Lett. 92(17), 177903(1-4) (2004)
7. Zhang, Z.J.: Multiparty quantum secret sharing of secure direct communication. Phys. Lett. A 342, 60–66 (2005)
8. Zhang, Z.J., Li, Y., Man, Z.X.: Multiparty quantum secret sharing. Phys. Rev. A 71(4), 044301(1-4) (2005)
9. Deng, F.G., Zhou, H.Y., Long, G.L.: Bidirectional quantum secret sharing and secret splitting with polarized single photons. Phys. Lett. A 337(4-6), 329–334 (2005)
10. Wang, T.Y., Wen, Q.Y., Chen, X.B., et al.: An efficient and secure multiparty quantum secret sharing scheme based on single photons. Opt. Commun. 281(24), 6130–6134 (2008)
11. Zhang, Z.J., Man, Z.X.: Multiparty quantum secret sharing of classical messages based on entanglement swapping. Phys. Rev. A 72(2), 022303(1-4) (2005)
12. Deng, F.G., Long, G.L., Zhou, H.Y.: An efficient quantum secret sharing scheme with Einstein-Podolsky-Rosen pairs. Phys. Lett. A 340, 43–50 (2005)
13. Deng, F.G., Li, X.H., Li, C.Y., et al.: Multiparty quantum secret splitting and quantum state sharing. Phys. Lett. A 354(3), 190–195 (2006)
14. Shi, R.H., Huang, L.S., Yang, W., Zhong, H.: Quantum secret sharing between multiparty and multiparty with Bell states and Bell measurements. Sci. China Phys. Mech. Astron. 53, 2238–2244 (2010)
15. Zhou, P., Li, X.H., Liang, Y.J., et al.: Multiparty quantum secret sharing with pure entangled states and decoy photons. Physica A 381, 164–169 (2007)
16. Shi, R.H., Zhong, H.: Multiparty quantum secret sharing with the pure entangled two-photon states. Quantum Information Processing (2011), doi:10.1007/s11128-011-0239-9
17. Bennett, C.H., Wiesner, S.J.: Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states. Phys. Rev. Lett. 69, 2881–2884 (1992)
18. Luo, C.L., Ouyang, X.F.: Controlled dense coding via generalized measurement. International Journal of Quantum Information 7(1), 365–372 (2009)
Problems and Countermeasures of Educational
Informationization Construction in
Colleges and Universities

Jiaguo Luo1 and Jie Yu2


1
Faculty of Liberal Arts and Law,
Jiangxi University of Science and Technology,
Ganzhou, Jiangxi, China
luojg@126.com
2
Faculty of Foreign Studies,
Jiangxi University of Science and Technology,
Ganzhou, Jiangxi, China
yujiebrenda@163.com

Abstract. Setting up a digital, intelligent and networked platform based on
modern information technology is of great importance: it benefits the building of
a learning community that abounds in information, in which students have ready
access to that information and can be cultivated into creative talents. After a
detailed analysis of the technical characteristics and development tendency of
educational informationization, this paper puts forward some suggestions on the
problems of informationization construction.

Keywords: higher education, educational modernization, informationization construction.

1 Introduction
Promoting educational modernization through educational informationization and
changing the traditional education pattern with information technology have become
the inevitable trend of education development. The level of educational
informationization has become an important mark of a nation's educational level.

2 The Promoting Effects of Educational Informationization on Educational Modernization

2.1 The Connotation of Educational Informationization

Educational informationization, which refers to setting up a digital, intelligent and
networked platform based on modern information technology, is the process of
promoting the development and reform of education as modern information
technology is deeply applied in the educational domain.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 496–500, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2.2 The Technical Features of Educational Informationization

(1) Multimedia Teaching Content and an Intelligent Teaching Process
By integrating text, figures, images and sound, multimedia technology handles
various kinds of media information and makes the teaching content structured,
dynamic and graphic. Meanwhile, the combination of IT and cognitive science brings
about the automation of computer-aided thinking and mental labor. For instance,
through an intelligent teaching system, students can study, review, and take simulated
tests and self-assessment tests.
(2) Networked and Virtualized Information Transmission
Information transmission becomes network-based and virtualized, connecting
global educational resources into a whole for large numbers of learners. Meanwhile,
the virtual worlds created by computer simulation establish new teaching patterns
such as online teaching, distance learning and virtual laboratories. Learners perceive
the objective world and acquire the relevant skills through virtual reality.
(3) Digitalized Learning Resources and Varied Teaching Pattern Reforms
The digitalization of information processing and transmission enables the network
to collect a wealth of teaching resources, such as multimedia resource databases,
library information databases and dynamic comprehensive information databases.
Learners can choose relevant learning resources according to their different
conditions, purposes and phases, breaking the limitations of time, space and the
teaching management of the traditional class education system. This will promote
mass, global and lifelong education.

3 Problems and Countermeasures of Information Technology Construction for Higher Education

3.1 Educational Information Technology Construction Focusing on Quality-Developed Education

(1) Establish a Curricula System Adapted to the Information Economy Society.
Curricula compose an implementation blueprint for the construction of training
objectives, which is the most important basis for carrying out quality-developed
education. Therefore, the construction of educational informationization should focus
on the reform of curricula so as to establish a curricula system for quality-developed
education adapted to the information economy society.
(2) Construct Educational Informationization Conducive to the Reform of Personnel
Training Modes. Quality-developed education aims at guiding students to change
from the traditional passive receiving, understanding and grasping of knowledge to
actively acquiring, processing and applying knowledge by making full use of
information networks in their exploration of knowledge, along with increasingly
stronger self-learning capability. Therefore, during the construction of educational
informationization, the emphasis of training students should be shifted from learning
Internet knowledge to grasping learning capability through the Internet, while the
teaching functions of a school should be expanded into an open learning community,
so that new personnel training modes such as lifelong learning and learning
organizations can be formed under the circumstances of educational
informationization.

3.2 Campus Network in Close Connection with Teaching to Highlight Various Characteristics

3.2.1 Functions of Campus Network Curricula


(1) Regarding the functions of campus network curricula, attention should be paid to
interactive features and the creation of learning circumstances. Although campus
networks basically cover the sectors of teaching, research, management, information
exchange and distance teaching and learning, most existing campus networks are
used mainly for information exchange and management (some fast-developing
schools have realized OA office automation). Therefore, much of their teaching
potential has not yet been realized, especially in network curricula. Network
multimedia curricula make it possible to share equipment, teachers, and research and
development resources. Therefore, while establishing the functions of campus
network curricula, attention should be paid to interactive features and the creation of
learning circumstances so that, while satisfying the basic requirements and teaching
objectives of traditional curricula, a complete system of knowledge can be clearly
reflected, and students can be guided to consciously and effectively conduct their
operations and exercises while reasonably assessing their self-learning results.
Besides, network curricula should reflect learning as the core by stressing students'
independent learning and designing as well as the inspiration and mobilization of
their learning motivation.
(2) Campus Network Curricula Playing an Important Role in a School's Teaching
System. Based on certain educational purposes, network curricula should be a
combined whole that demonstrates content and implements the related teaching
activities through the Internet. At present, campus network curricula are delivered
basically in a face-to-face oriented mode, an education dissemination form carried out
under a school's educational circumstances on the campus platform, applying
teacher-led instruction supplemented by network curricula, whose goal is students'
systematic learning of curricula knowledge.
Precisely because it is a supplement to classroom teaching, colleges and
universities generally do not attach enough importance to campus network curricula;
moreover, most of them can currently provide only theoretical study or practical
simulation, which cannot substitute for experiments, practice and other practical
activities. Therefore, we believe that, during the construction of the higher
educational network, the curricula network platform should be enhanced to play its
role in the construction of the campus network.
(3) The campus curricula network should be constructed in combination with the
construction of the campus network, quality courses and the item pool. Since both
computer multimedia technology and network technology are already very mature, it
is entirely achievable that, based on the integrated use of B/S-model Web application
development technology, JavaScript, VBScript, ASP and other technologies, the
establishment of the campus network and the construction of quality courses,
multimedia curricula and the item pool can be carried out on a combined basis. In this
way, it can not only solve the problems of fragmented separate functional modules,
disintegrated applications and disunified interfaces, but also significantly save the
construction cost of educational informationization.
Currently, a number of principal universities featured by richer financial resources
and stronger scientific R&D strength have mostly developed their campus
information platforms according to their own actual circumstances, while some
weaker institutions generally purchase from software companies, which often results
in a disjoint between the development party and the user and hence increases their
costs later on. Therefore, we recommend these schools to establish strategic
cooperation with companies, by which their original separate functional modules can
be fully integrated to achieve a comprehensive upgrade of their campus network,
such as constructing a campus one-card-through system by mutually beneficial
cooperation with banks or companies. Besides, a united campus network construction
can be implemented across multiple universities to reduce cost, improve resource
utilization, and achieve resource sharing among university communities.

3.2.2 Constructing the Campus Network by Considering Both Future and Real Circumstances, and Implementing It in Phases or Blocks
(1) Establish information centers, digital libraries and multimedia information
databases;
(2) Build electronic classrooms, electronic lesson-preparation rooms and
educational teaching resource pools in every school;
(3) Construct office automation, distance teaching and learning, and external
access (establishing a close combination of regional education networks and the
campus network), and so on.

3.2.3 Highlighting Every University's Own Characteristics, in Particular Constructing Along with the Campus E-commerce Network
With the growth of information network technology, campus network construction
has matured and campus e-commerce activities have arisen quite naturally on
campus. These activities have increasingly shown their charm: they promote campus
network construction, provide enriched and convenient services to the campus life of
students and teachers, and enhance the understanding of students majoring in
business, economics and management. Therefore, utilizing the existing conditions to
accelerate the development of campus e-commerce, promote the construction of the
e-learning platform, and create a learning atmosphere for developing e-business
talent has become a key research subject of digital campus construction.

3.3 The Fund and Management Problems of Information Construction

(1) Strive for financial support from higher levels and receive donations from social
communities to increase available funds. Colleges and universities can set up
resource-searching groups responsible for collecting all kinds of outstanding teaching
materials available on the market according to teaching needs. For example, regular
contracts can be signed with Internet companies, such as Teaching Resources
Networks of Chinese Universities and others of this kind, to buy network teaching
materials and resources.
(2) Virtuous cycles of educational cooperation can be created with enterprises,
educational institutions and financial institutions, such as mirroring some of the
Internet's high-level, high-quality educational resource libraries to establish complete
or partial images so as to obtain newer and more specialized resources and links. A
school's WWW server can be linked with a number of the Internet's free educational
teaching resource sites to directly achieve the sharing of teaching resources.
(3) Try to gain non-stated donations and achieve the replacement of equipment
with the manufacturers.

4 Conclusion
The process of educational informationization is one of continuous application of
information science, in which many problems and phenomena will necessarily occur
and remain for us to recognize and solve, which will in turn promote the development
of educational informationization theories.

Synthesis and Characterization of Eco-friendly
Composite: Poly(Ethylene Glycol)-Grafted Expanded
Graphite/Polyaniline

Mincong Zhu1, Xin Qing1, Kanzhu Li1, Wei Qi1, Ruijing Su1, Jun Xiao2,
Qianqian Zhang2, Dengxin Li1,*, Yingchen Zhang1,2,3, and Ailian Liu3
1
College of Environmental Science and Engineering, Donghua University,
Shanghai 201620, P.R. China
2
College of Textiles, Zhongyuan University of Technology,
Henan 450007, P.R. China
3
Langsha Holding Group Co., LTD, Zhejiang 322000, P.R. China
lidengxin@dhu.edu.cn, yczhang2002@163.com

Abstract. In this paper, the synthesis and characterization of poly(ethylene glycol)-grafted
expanded graphite/polyaniline (PEG-grafted EG/PANi) as a novel eco-friendly
composite material are reported. EG prepared from expandable graphite was
first used as the substrate for in-situ polymerization in the presence of aniline
(An) to obtain EG/PANi, which was then graft-polymerized with as-prepared
PEG-grafted PANi (PEG-g-PANi) under mild conditions. Structural
characteristics of the products were evaluated by infrared spectroscopy (IR),
X-ray diffraction (XRD) and scanning electron microscopy (SEM) analysis. The
experimental results show that the PEG-grafted EG/PANi composite was
synthesized by a facile method.

Keywords: expanded graphite, PANi, PEG, EG/PANi, PEG-g-PANi, PEG-grafted EG/PANi, eco-friendly composite.

1 Introduction
Expanded graphite (EG), a kind of modified graphite that contains an abundant nano-
and micro-pore structure, is an important raw material for the production of flexible
graphite sheets, which have been widely used as gaskets, thermal insulators, fire-resistant
composites, etc. [1-3] EG keeps a layered structure similar to natural graphite
flakes but with larger interlayer spacing, and has a higher specific surface area than
carbon powder and carbon nanotubes. [4,5] Therefore, polymers can easily be grafted
onto the surface of EG nanosheets.
Polyaniline (PANi) is a well-known and preferred conducting polymer. Its
synthesis route is straightforward, since it simply involves only one step of oxidation
of aniline (An), and, most of all, its monomer is quite inexpensive. Due to its easy
synthesis, environmental stability and simple doping/dedoping chemistry, PANi has
been put to extensive use in many commercial applications such as lightweight

* Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 501–506, 2011.
© Springer-Verlag Berlin Heidelberg 2011
battery electrodes, sensors, electromagnetic shielding devices, and anticorrosion
coatings. [6,7] Polyethylene glycol (PEG), a synthetic long-chain flexible macromolecule,
is both hydrophilic and hydrophobic; this dual nature leads to numerous and various
interactions with very different components, which has been demonstrated to make PEG an
outstanding material on the basis of intermolecular associations aimed at either stabilizing
structures in soft materials or strengthening fragile ones. [8] In the present work, we
report on the synthesis and characterization of the PEG-grafted EG/PANi composite
under mild conditions.

2 Experimental
2.1 Materials and Reagents
Commercially available expandable graphite was purchased from Qingdao Nanshu
Hongda Graphite Co., Ltd. Dodecylbenzenesulfonic acid (DBSA) was bought from
TCI (Shanghai) Development CO., Ltd (China) and isophorone diisocyanate (IPDI)
was obtained from Shanghai Spring Chemical Technology Co., Ltd (China). Other
reagents used, such as aniline (An), ammonium persulfate (APS), NaOH, HCl,
N,N-dimethylformamide (DMF), thionyl chloride (SOCl2), tetrabutyl ammonium bromide
(TBAB), 4-aminophenol and polyethylene glycol (PEG) with molecular weight
MW = 1000, were all of analytical grade and bought from Sinopharm Chemical Reagent
Company (Shanghai, China). All reagents were used without further purification
and all solutions were prepared with distilled water.

2.2 Synthesis of PEG-Grafted EG/PANi Composite


Firstly, EG was prepared in our laboratory by microwave irradiation of the commercial expandable graphite at 1000 W for 60 s in an EM-3011EB1 microwave oven (Sanyo Inc., China), according to the method reported by B. Tryba et al. [9].
Secondly, as in our previous work [10], an EG/PANi composite with an EG/An mass ratio of mEG/An = 1.0 was synthesized according to the method reported by C. Xiang et al. [11].
Thirdly, the synthetic method used to prepare PEG-g-PANi in our laboratory is based on, with modifications, the method reported by P. Wang and K.L. Tan [12]. Thus, 50 g PEG-1000, 0.6 mL DMF and 8 mL SOCl2 were mixed in a round-bottom flask, and the reaction was carried out in a water bath at 80 °C for 6 h. Then, 16 g NaOH, 11 g 4-aminophenol and 0.54 g TBAB were mixed and poured into the above mixture, and the resulting slurry was reacted in a water bath at 60 °C for 48 h. The product, denoted PEG-g-PANi, was filtered, washed with distilled water, and dried under vacuum. The PEG-g-PANi was then dissolved in 1 M HCl solution; the resulting solution is denoted HCl/PEG-g-PANi.
Finally, 1.0 g EG/PANi, 60 mL of the HCl/PEG-g-PANi solution and 60 mL An-HCl (0.01 mol An dissolved in 1 M HCl solution) were mixed in a beaker in an ice bath at 0 °C. The mixture was stirred mechanically for 1 h. Subsequently, 60 mL APS solution was added dropwise to the emulsion, and the reaction was continued in the ice bath at 0 °C for another 72 h. The suspension was filtered, washed with distilled water, and dried under vacuum; the resultant product is denoted the PEG-grafted EG/PANi composite.
Synthesis and Characterization of Eco-friendly Composite 503

2.3 Characterization

Infrared (IR) spectra were recorded on a Tensor 27 infrared spectrophotometer (Bruker Inc., Germany) over the wavenumber range from 4000 to 500 cm-1; samples were prepared as KBr pellets. The crystal structure of the products was studied using a D/max2550PC X-ray diffractometer (Rigaku Inc., Japan) with a Cu anode (Cu Kα1 radiation, λ = 1.54056 Å) in the range 20° ≤ 2θ ≤ 80° at 40 kV. Scanning electron micrographs of the products were obtained using a JSM-5600LV scanning electron microscope (JEOL Inc., Japan).

3 Results and Discussion

3.1 IR Spectra

Fig. 1 shows the IR spectra of (a) EG, (b) EG/PANi (mEG/An = 1.0), (c) EG-g-PEG and (d) PEG-grafted EG/PANi. The spectra in Fig. 1a and Fig. 1c show the same broad absorption peak at 3448.6 cm-1, assigned to hydrogen-bonded O-H stretching vibration. After in-situ polymerization of PANi, new peaks appear in the spectra. Fig. 1b shows the spectrum of EG/PANi, in which the new peaks appear on the lower-wavenumber side, between 500 and 1000 cm-1. The peak at 3498.7 cm-1 is attributed to O-H stretching vibration, and the PANi gives absorption bands at 1573.8, 1386.7, 1186.1, 1124.4, 1070.4 and 1008.7 cm-1, as shown in Fig. 1b. Several peaks appear in both the spectrum of PEG-grafted EG (Fig. 1c) and that of PEG-grafted EG/PANi (Fig. 1d): the broad peak at 3448.6 cm-1 is attributed to the O-H bond; the peak at 2925.9 cm-1 corresponds to saturated C-H stretching absorption; and the absorption at 1451.2 cm-1 is attributed to the vibration of the CH2 group in PEG.

Fig. 1. IR spectra for products: (a) EG, (b) EG/PANi (mEG/An = 1.0), (c) EG-g-PEG, (d) PEG-grafted EG/PANi

3.2 XRD Analysis

The XRD patterns of (a) EG, (b) EG/PANi (mEG/An = 1.0), (c) EG-g-PEG and (d) PEG-grafted EG/PANi are shown in Fig. 2. As shown in Fig. 2a, all the peaks can be indexed to graphite, in good agreement with the literature values (JCPDS Card No. 75-2078). Fig. 2b shows the diffraction pattern of the as-prepared EG/PANi composite, in which there is one diffraction peak located at 2θ = 18.78°. Fig. 2c shows the diffraction pattern of the as-prepared PEG-g-EG composite, in which there is one diffraction peak located at 2θ = 21.08° together with a broad band centered at 2θ = 14–24°, which reveals that the sample is partially crystallized.
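As a quick cross-check on such peak positions, Bragg's law λ = 2d sin θ converts a 2θ value into an interplanar spacing d, using the Cu Kα1 wavelength quoted in Section 2.3. A minimal sketch (the 26.5° input is an assumed illustrative value for the graphite (002) reflection, not a datum from this work):

```python
import math

WAVELENGTH = 1.54056  # Cu K-alpha1 wavelength in angstroms (Section 2.3)

def d_spacing(two_theta_deg):
    """Interplanar spacing d (angstroms) from Bragg's law: lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

print(round(d_spacing(26.5), 2))   # graphite (002) near 2-theta = 26.5 deg -> 3.36
print(round(d_spacing(18.78), 2))  # the EG/PANi peak reported above -> 4.72
```

Larger 2θ angles correspond to smaller spacings, so the 21.08° peak of PEG-g-EG sits at a smaller d than the 18.78° peak of EG/PANi.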

Fig. 2. X-ray diffraction patterns of products: (a) EG, (b) EG/PANi (mEG/An = 1.0), (c) EG-g-PEG, (d) PEG-grafted EG/PANi

3.3 Morphology Observation

Fig. 3 shows SEM images of (a) EG, (b) EG/PANi (mEG/An = 1.0), (c) EG-g-PEG and (d) PEG-grafted EG/PANi. The micrograph of EG (Fig. 3a) reveals that the interlayer spacing of the graphite flakes is greatly increased, leading to a highly porous structure and a high surface area. As shown in Fig. 3b, significant changes in morphology are seen in the EG/PANi composite synthesized by in-situ polymerization: PANi is grafted not only on the surface but also in the interspaces of EG, and is absorbed into the pores of EG, giving a composite in which no individual phase can be distinguished. As shown in Fig. 3c, PEG likewise is grafted not only on the surface but also in the interspaces of EG. As shown in Fig. 3d, the PEG-grafted EG/PANi composite shows good dispersion, which indicates that the PEG chains reduce the van der Waals forces between the composite flakes.

Fig. 3. SEM images of products: (a) EG (×50.0k), (b) EG/PANi (mEG/An = 1.0) (×50.0k), (c) EG-g-PEG (×50.0k), (d) PEG-grafted EG/PANi (×5.0k)

4 Conclusions
In this work, EG was modified by in-situ polymerization in the presence of An to prepare the EG/PANi composite. Subsequently, the EG/PANi composite was graft-polymerized with the as-prepared PEG-g-PANi under routine conditions. The SEM, IR and XRD observations demonstrate that the surface of the EG flakes was successfully modified by both PANi and PEG-g-PANi; the PEG-grafted EG/PANi composite was prepared successfully.

Acknowledgement. This work was supported by the Innovation Foundation of Donghua University for Doctoral Candidates (BC20101217) and the Shanghai Leading Academic Discipline Project (B604). The financial support of the China Postdoctoral Science Foundation (20100480569) and the Postdoctoral Science Foundation of Langsha is also gratefully acknowledged.

References
1. Chung, D.D.L.: Exfoliation of graphite. J. Mater. Sci. 22, 4190–4198 (1987)
2. Furdin, G.: Exfoliation process and elaboration of new carbonaceous materials. Fuel 77, 479–485 (1998)
3. Inagaki, M., Tashiro, R., Washino, Y., Toyoda, M.: Exfoliation process of graphite via intercalation compounds with sulfuric acid. J. Phys. Chem. Solids 65, 133–137 (2004)
4. Celzard, A., Mareche, J.F., Furdin, G., Puricelli, S.: Electrical conductivity of anisotropic expanded graphite-based monoliths. J. Phys. D: Appl. Phys. 33, 3094–3101 (2000)
5. Chen, G.H., Weng, W.G., Wu, D.J., Wu, C.L.: PMMA/graphite nanosheets composite and its conducting properties. Eur. Polym. J. 39, 2329–2335 (2003)
6. Shimano, J.Y., MacDiarmid, A.G.: Polyaniline, a dynamic block copolymer: key to attaining its intrinsic conductivity. Synth. Met. 123, 251–262 (2001)
7. Ahmad, R., Kumar, R.: Conducting Polyaniline/Iron Oxide Composite: A Novel Adsorbent for the Removal of Amido Black 10B. J. Chem. Eng. Data 55(9), 3489–3493 (2010)
8. Chawla, K., Lee, S., Lee, B.P., Dalsin, J.L., Messersmith, P.B., Spencer, N.D.: A novel low-friction surface for biomedical applications: Modification of poly(dimethylsiloxane) (PDMS) with polyethylene glycol (PEG)-DOPA-lysine. J. Biomed. Mater. Res. 90, 742–749 (2008)
9. Tryba, B., Morawski, A.W., Inagaki, M.: Preparation of exfoliated graphite by microwave irradiation. Carbon 43, 2417–2419 (2005)
10. Zhu, M.C., Qi, W., Mao, Y.J., Hu, Y., Qing, X., Li, K.Z., Yang, T., Zhang, Y.C., Li, D.X., Chai, H.M.: Synthesis, characterization and adsorption property of expanded graphite-conducting polymer composite. Accepted by Adv. Mater. Res. (2011)
11. Xiang, C., Li, L.C., Jin, S.Y., Zhang, B.Q., Qian, H.S., Tong, G.X.: Expanded graphite/polyaniline electrical conducting composites: Synthesis, conductive and dielectric properties. Mater. Lett. 64, 1313–1315 (2010)
12. Wang, P., Tan, K.L.: Synthesis and Characterization of Poly(ethylene glycol)-Grafted Polyaniline. Chem. Mater. 13(2), 581–587 (2001)
The Features of a Sort of Five-Variant Wavelet Packet
Bases in Sobolev Space*

Yujuan Hu1, Qingjiang Chen2,**, and Lang Zhao2


1 School of Education, Nanyang Institute of Technology, Nanyang 473000, P.R. China
nysslt88@126.com
2 School of Science, Xi'an University of Architecture and Technology, Xi'an 710055, China
qjchen66xytu@126.com

Abstract. Wavelet packets have been the focus of active research for twenty years, both in theory and in applications. In this work, the notion of orthogonal nonseparable five-variant wavelet packets is introduced, and a new approach for designing them by an iteration method is presented. We prove that the five-variant wavelet packets possess the orthogonality property, give three orthogonality formulas for these wavelet packets, and show how to construct nonseparable five-variant wavelet packet bases. The orthogonal five-dimensional wavelet packets may have arbitrarily high regularity.

Keywords: Nonseparable, five-variant wavelet packets, Sobolev space, iterative method, bracket product, time-frequency representations.

1 Introduction
The concept of wavelet packets has received much research attention, and evidence has shown that they can be used to improve the localization of wavelet bases in the frequency domain. Although the Fourier transform has been a major tool in analysis for over a century, it has a serious shortcoming for signal analysis: it hides in its phases information concerning the moment of emission and the duration of a signal. The main feature of the wavelet transform is to hierarchically decompose general functions, such as a signal or a process, into a set of approximation functions with different scales. Wavelet packets [1], owing to their good properties, have attracted considerable attention and can be widely applied in science and engineering [2,3]. Coifman, R.R. and Meyer, Y. first introduced the notion of orthogonal wavelet packets. Chui, C.K. and Li, Chun [4] generalized the concept of orthogonal wavelet packets to the case of non-orthogonal wavelet packets, so that wavelet packets can be employed for spline wavelets and so on. The introduction of biorthogonal wavelet packets is attributed

*
Foundation item: This research is supported by the Natural Science Foundation of Shaanxi Province (Grant No. 2009JM1002) and by the Science Research Foundation of the Education Department of Shaanxi Provincial Government (Grant No. 11JK0468).
** Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 507512, 2011.
Springer-Verlag Berlin Heidelberg 2011

to Cohen and Daubechies [5]. The introduction of the notion of nontensor product wavelet packets is attributed to Shen [6]. Since the majority of information is multidimensional, many researchers have taken interest in multivariate wavelet theory. The classical method for constructing multivariate wavelets is the tensor product of some univariate wavelets; however, this method has obvious defects, such as scarcity of design freedom. The objective of this paper is to generalize the concept of univariate orthogonal wavelet packets to orthogonal five-variate wavelet packets.

2 Notations and Preliminaries

We start from the following notations. Z and Z₊ stand for the integers and the nonnegative integers, respectively. Let R be the set of all real numbers; Rⁿ denotes the n-dimensional Euclidean space, and L²(Rⁿ) the space of square integrable functions on Rⁿ. Set x = (x₁, x₂, …, xₙ) ∈ Rⁿ, v = (v₁, v₂, …, vₙ), ω = (ω₁, ω₂, …, ωₙ), and z_ℓ = e^{−iω_ℓ/2}, where ℓ = 1, 2, …, n and n ≥ 2 is an integer. The inner product of arbitrary g(x), f(x) ∈ L²(Rⁿ) and the Fourier transform of f(x) are defined, respectively, by

⟨g, f⟩ := ∫_{Rⁿ} g(x) \overline{f(x)} dx,   f̂(ω) := ∫_{Rⁿ} f(x) e^{−ix·ω} dx,

where x·ω = x₁ω₁ + x₂ω₂ + … + xₙωₙ and \overline{f(x)} denotes the complex conjugate of f(x).

The space H^s(Rⁿ) is a Hilbert space equipped with the inner product

⟨h, g⟩_{H^s(Rⁿ)} := (1/(2π)ⁿ) ∫_{Rⁿ} ĥ(ω) \overline{ĝ(ω)} (1 + |ω|^{2s}) dω,  h, g ∈ L²(Rⁿ).   (1)

The bracket product of compactly supported functions g(x), f(x) ∈ L²(Rⁿ) is given by

[g, f](ω) := Σ_{u∈Zⁿ} ⟨g, f(· − u)⟩ e^{−iu·ω} = Σ_{u∈Zⁿ} ĝ(ω + 2πu) \overline{f̂(ω + 2πu)}.   (2)

We are interested in wavelet bases for the Sobolev space H^s(Rⁿ), where s is a positive integer. In this case, we obtain

⟨h, g⟩_{H^s(Rⁿ)} = ⟨h, g⟩ + ⟨h^{(s)}, g^{(s)}⟩,  h, g ∈ L²(Rⁿ).   (3)

By elementary algebra, there are 2ⁿ elements μ₀, μ₁, …, μ_{2ⁿ−1} in Z₊ⁿ = {(k₁, k₂, …, kₙ) : k_s ∈ Z₊, s = 1, 2, …, n} such that Zⁿ = ⋃_{d∈Γ₀} (μ_d + 2Zⁿ) and (μ₁ + 2Zⁿ) ∩ (μ₂ + 2Zⁿ) = ∅, where Γ₀ = {μ₀, μ₁, …, μ_{2ⁿ−1}} denotes the aggregate of all the different representative elements of the quotient group Zⁿ/(2Zⁿ), μ₀ = 0 is the null element of Zⁿ, and μ₁, μ₂ denote two arbitrary distinct elements of Γ₀. Let Γ = Γ₀ − {0}; Γ and Γ₀ serve as index sets in what follows.



Definition 1. A sequence {η_k(x)}_{k∈Zⁿ} ⊂ L²(Rⁿ) is called an orthonormal set if

⟨η_u, η_v⟩ = δ_{u,v},  u, v ∈ Zⁿ,   (4)

where δ_{u,v} is the generalized Kronecker symbol, i.e., δ_{u,v} = 1 when u = v and δ_{u,v} = 0 otherwise.

Definition 2. A sequence {η_v : v ∈ Zⁿ} ⊂ W, where W is a separable Hilbert space, is a frame for W if there exist two positive real numbers C, D such that

∀ ϕ ∈ W,  C ‖ϕ‖² ≤ Σ_v |⟨ϕ, η_v⟩|² ≤ D ‖ϕ‖².   (5)

A sequence {η_v} ⊂ W is a Bessel sequence if (only) the upper inequality of (5) holds; if the upper inequality of (5) holds only for all ϕ in a subspace U ⊆ W, then {η_v} is a Bessel sequence with respect to (w.r.t.) U. For a sequence c = {c(v)} ∈ ℓ²(Zⁿ), we define its discrete-time Fourier transform as the function in L²([0,1]ⁿ) given by

Fc(ω) = C(ω) = Σ_{v∈Zⁿ} c(v) e^{−2πi v·ω}.   (6)

Note that the discrete-time Fourier transform is Zⁿ-periodic. Let T_v stand for integer translation of a function ψ(x) ∈ L²(Rⁿ), i.e., (T_v ψ)(x) = ψ(x − v), and set ψ_{j,v} = 2^{nj/2} ψ(2^j x − v). Let φ(x) ∈ L²(Rⁿ) and let V₀ = \overline{span}{T_v φ : v ∈ Zⁿ} be a closed subspace of L²(Rⁿ).
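For a finitely supported sequence, the discrete-time Fourier transform in (6) reduces to a finite sum, and its 1-periodicity in each coordinate can be checked directly. A one-dimensional numerical sketch (the sequence values are arbitrary illustrative data, not taken from the paper):

```python
import numpy as np

def dtft(c, omega):
    """Discrete-time Fourier transform of a sequence c supported on 0..len(c)-1."""
    v = np.arange(len(c))
    return complex(np.sum(c * np.exp(-2j * np.pi * v * omega)))

c = np.array([1.0, 2.0, 3.0])
print(dtft(c, 0.0).real)                  # value at omega = 0 is the sum of c: 6.0
print(abs(dtft(c, 0.3) - dtft(c, 1.3)))   # 1-periodicity: the difference is ~0
```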

3 Five-Dimensional Multiresolution Analysis

The multiresolution analysis method is an important approach to obtaining wavelets and wavelet packets. We introduce the notion of a multiresolution analysis of L²(R⁵). Set n = 5 and let φ(x) ∈ L²(R⁵) satisfy the refinement equation

φ(x) = 32 Σ_{v∈Z⁵} b(v) φ(2x − v),   (7)

where {b(v)}_{v∈Z⁵} is a real-valued sequence and φ(x) is called a scaling function. Taking the Fourier transform of both sides of the refinement equation (7), we have

φ̂(ω) = B(z₁, z₂, z₃, z₄, z₅) φ̂(ω/2),   (8)

where

B(z₁, z₂, z₃, z₄, z₅) = Σ_{v∈Z⁵} b(v) z₁^{v₁} z₂^{v₂} z₃^{v₃} z₄^{v₄} z₅^{v₅}.   (9)

Define the subspaces U_l ⊂ L²(R⁵) (l ∈ Z) by

U_l = clos_{L²(R⁵)} ⟨2^{5l/2} φ(2^l · − n) : n ∈ Z⁵⟩.   (10)

We say that φ(x) in (7) generates a multiresolution analysis {U_j}_{j∈Z} of L²(R⁵) if the sequence {U_j}_{j∈Z} defined in (10) satisfies the following: (a) U_l ⊂ U_{l+1}, ∀ l ∈ Z; (b) ⋂_{l∈Z} U_l = {0} and ⋃_{l∈Z} U_l is dense in L²(R⁵); (c) h(x) ∈ U_l ⟺ h(2x) ∈ U_{l+1}, ∀ l ∈ Z; (d) the family {2^{5l/2} φ(2^l · − n) : n ∈ Z⁵} is a Riesz basis for U_l (l ∈ Z).

Let W_l (l ∈ Z) denote the orthogonal complement of U_l in U_{l+1}, and assume that there exists a vector-valued function Ψ(x) = (ψ₁(x), ψ₂(x), …, ψ₃₁(x))ᵀ (see [7]) whose components form a Riesz basis for W_l, i.e.,

W_l = clos_{L²(R⁵)} ⟨ψ_μ(2^l · − u) : μ = 1, 2, …, 31; u ∈ Z⁵⟩,  l ∈ Z.   (11)

By (11), it is clear that ψ₁(x), ψ₂(x), …, ψ₃₁(x) ∈ W₀ ⊂ U₁. Therefore there exist thirty-one real-valued sequences {d^{(μ)}(v)} (μ = 1, 2, …, 31, v ∈ Z⁵) such that

ψ_μ(x) = 32 Σ_{v∈Z⁵} d^{(μ)}(v) φ(2x − v),  μ ∈ Γ,   (12)

where Γ = {1, 2, 3, …, 30, 31} and Γ₀ = Γ ∪ {0}. Formula (12) can be written in the frequency domain as

ψ̂_μ(ω) = D^{(μ)}(z₁, z₂, z₃, z₄, z₅) φ̂(ω/2),  μ ∈ Γ,   (13)

where the symbol of the real-valued sequence {d^{(μ)}(v)} (μ ∈ Γ, v ∈ Z⁵) is

D^{(μ)}(z₁, z₂, z₃, z₄, z₅) = Σ_{v∈Z⁵} d^{(μ)}(v) z₁^{v₁} z₂^{v₂} z₃^{v₃} z₄^{v₄} z₅^{v₅}.   (14)

A five-variant scaling function φ(x) ∈ L²(R⁵) is an orthogonal one if

⟨φ(·), φ(· − u)⟩ = δ_{0,u},  u ∈ Z⁵.   (15)

We say that ψ₁(x), ψ₂(x), …, ψ₃₁(x) are orthogonal five-dimensional wavelets associated with the scaling function φ(x) if they satisfy

⟨φ(·), ψ_μ(· − v)⟩ = 0,  μ ∈ Γ, v ∈ Z⁵,   (16)

⟨ψ_λ(·), ψ_μ(· − v)⟩ = δ_{λ,μ} δ_{0,v},  λ, μ ∈ Γ, v ∈ Z⁵.   (17)

4 The Traits of Nonseparable Five-Variant Wavelet Packets

We are now ready to introduce the following notation: Ψ₀(x) = φ(x), Ψ_μ(x) = ψ_μ(x) for μ ∈ Γ, and d^{(0)}(v) = b(v), v ∈ Z⁵. We are now in a position to discuss the traits of orthogonal nonseparable five-variant wavelet packets.


Definition 3. A family of five-variant functions {Ψ_{32n+μ}(x) : n = 0, 1, 2, …; μ = 0, 1, 2, …, 31} is called a nonseparable five-dimensional wavelet packet with respect to the orthogonal scaling function Ψ₀(x) = φ(x), where Γ₀ = {0, 1, 2, …, 31} and

Ψ_{32n+μ}(x) = 32 Σ_{v∈Z⁵} d^{(μ)}(v) Ψ_n(2x − v),  μ ∈ Γ₀.   (18)

Taking the Fourier transform of both sides of (18) yields

Ψ̂_{32n+μ}(ω) = D^{(μ)}(z₁, z₂, z₃, z₄, z₅) Ψ̂_n(ω/2),   (19)

where μ ∈ Γ₀ and D^{(μ)}(z₁, z₂, z₃, z₄, z₅) = D^{(μ)}(ω/2) = Σ_{v∈Z⁵} d^{(μ)}(v) z₁^{v₁} z₂^{v₂} z₃^{v₃} z₄^{v₄} z₅^{v₅}.

Lemma 1 [6]. Assume that φ(x) is an orthogonal five-variant scaling function and that B(z₁, z₂, z₃, z₄, z₅) is the symbol of the sequence {b(v)} defined in (9). Then

Σ_{σ∈{0,1}⁵} |B((−1)^{σ₁} z₁, (−1)^{σ₂} z₂, (−1)^{σ₃} z₃, (−1)^{σ₄} z₄, (−1)^{σ₅} z₅)|² = 1,

where the sum runs over all 32 sign patterns σ = (σ₁, …, σ₅) ∈ {0,1}⁵.
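The identity in Lemma 1 can be checked numerically for a concrete mask. A minimal sketch using the separable five-variate Haar-type mask b(v) = 1/32 on v ∈ {0,1}⁵ (chosen here purely as an illustrative example, not the mask studied in this paper), whose symbol is B(z₁, …, z₅) = (1/32)∏_ℓ(1 + z_ℓ); the 32 terms of the lemma correspond to all sign patterns (±z₁, …, ±z₅):

```python
import numpy as np
from itertools import product

def haar_symbol(z):
    """Symbol B(z1,...,z5) of the five-variate Haar-type mask b(v) = 1/32 on {0,1}^5."""
    return np.prod(1.0 + z) / 32.0

rng = np.random.default_rng(7)
omega = rng.uniform(0.0, 2.0 * np.pi, size=5)
z = np.exp(-1j * omega / 2.0)    # z_l = exp(-i*omega_l/2), as in Section 2

# Sum |B|^2 over all 32 coordinatewise sign patterns.
total = sum(abs(haar_symbol(np.array(signs) * z)) ** 2
            for signs in product((1.0, -1.0), repeat=5))
print(round(total, 10))          # the 32-term sum of Lemma 1 -> 1.0
```

Analytically each coordinate contributes (|1 + z_ℓ|² + |1 − z_ℓ|²)/4 = 1 to the product, so the sum is exactly 1 for every ω.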

Lemma 3 [6]. If Ψ_μ(x) (μ = 0, 1, …, 31) are orthogonal wavelet functions associated with φ(x), then for all λ, μ ∈ {0, 1, 2, …, 31},

Σ_{σ∈{0,1}⁵} D^{(λ)}((−1)^{σ₁} z₁, …, (−1)^{σ₅} z₅) \overline{D^{(μ)}((−1)^{σ₁} z₁, …, (−1)^{σ₅} z₅)} = δ_{λ,μ},   (20)

where D^{(0)} = B and the sum again runs over all 32 sign patterns σ ∈ {0,1}⁵.
Theorem 1 [6]. For every u ∈ Z⁵, n ∈ Z₊ and λ, μ ∈ {0, 1, 2, …, 30, 31}, we have

⟨Ψ_{32n+λ}(·), Ψ_{32n+μ}(· − u)⟩ = δ_{λ,μ} δ_{0,u}.   (21)

Theorem 2. For every k ∈ Z⁵ and m, n ∈ Z₊, we have

⟨Ψ_m(·), Ψ_n(· − k)⟩ = δ_{m,n} δ_{0,k}.   (22)


Proof. For the case m = n, (22) follows from Theorem 1. For m ≠ n with m, n ∈ Γ₀, (22) can also be established from Theorem 1. Assume now that m ≠ n and at least one of {m, n} does not belong to Γ₀. Write m = 32m₁ + λ₁ and n = 32n₁ + μ₁, where m₁, n₁ ∈ Z₊ and λ₁, μ₁ ∈ Γ₀.

Case 1. If m₁ = n₁, then λ₁ ≠ μ₁. By (14), (19) and (20), (22) holds, since

(2π)⁵ ⟨Ψ_m(·), Ψ_n(· − k)⟩ = ∫_{R⁵} Ψ̂_{32m₁+λ₁}(ω) \overline{Ψ̂_{32n₁+μ₁}(ω)} e^{ik·ω} dω
= ∫_{R⁵} D^{(λ₁)}(z₁, …, z₅) Ψ̂_{m₁}(ω/2) \overline{Ψ̂_{n₁}(ω/2)} \overline{D^{(μ₁)}(z₁, …, z₅)} e^{ik·ω} dω
= ∫_{[0,2π]⁵} δ_{λ₁,μ₁} e^{ik·ω} dω = 0.

Case 2. If m₁ ≠ n₁, set m₁ = 32m₂ + λ₂ and n₁ = 32n₂ + μ₂, where m₂, n₂ ∈ Z₊ and λ₂, μ₂ ∈ Γ₀. If m₂ = n₂, then λ₂ ≠ μ₂, and similarly to Case 1 we have ⟨Ψ_m(·), Ψ_n(· − k)⟩ = 0; that is, the proposition follows in this case. If m₂ ≠ n₂, we write m₂ = 32m₃ + λ₃ and n₂ = 32n₃ + μ₃ once more, where m₃, n₃ ∈ Z₊ and λ₃, μ₃ ∈ Γ₀. After finitely many steps (denoted by r) we obtain m_r, n_r ∈ Γ₀ and λ_r, μ_r ∈ Γ₀. If m_r = n_r, then λ_r ≠ μ_r, and (22) follows as in Case 1. If m_r ≠ n_r, then, similarly to Lemma 1, we conclude that

⟨Ψ_m(·), Ψ_n(· − k)⟩ = (1/(2π)⁵) ∫_{[0,2π]⁵} {∏_{ρ=1}^{r+1} D^{(λ_ρ)}(ω/2^ρ)} · O · \overline{{∏_{ρ=1}^{r+1} D^{(μ_ρ)}(ω/2^ρ)}} e^{ik·ω} dω = 0.
Corollary 1. For n ∈ Z₊ and v ∈ Z⁵, we have ⟨Ψ_n(·), Ψ_n(· − v)⟩ = δ_{0,v}.
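The proof of Theorem 2 repeatedly peels a base-32 digit off each index, m = 32m₁ + λ₁, m₁ = 32m₂ + λ₂, and so on, and relies on this process terminating after finitely many steps. A minimal sketch of that bookkeeping (a plain radix-32 expansion, shown only to illustrate the decomposition used in the proof):

```python
def packet_digits(m, base=32):
    """Repeatedly split m = base*m1 + mu (mu in Gamma_0) until the quotient is 0."""
    digits = []
    while m > 0:
        m, mu = divmod(m, base)
        digits.append(mu)        # mu plays the role of lambda_rho in the proof
    return digits or [0]         # least-significant digit first

print(packet_digits(1000))       # 1000 = 32*31 + 8 -> [8, 31]
```

Because the quotient strictly decreases at every step, the recursion in the proof always reaches indices inside Γ₀ after finitely many iterations.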

References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Li, S., et al.: A theory of generalized multiresolution structure and pseudoframes of translates. J. Fourier Anal. Appl. 7(1), 23–40 (2001)
4. Chen, Q., et al.: Existence and characterization of orthogonal multiple vector-valued wavelets with three-scale. Chaos, Solitons & Fractals 42(4), 2484–2493 (2009)
5. Shen, Z.: Nontensor product wavelet packets in L²(R^s). SIAM J. Math. Anal. 26(4), 1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information Technology Journal 8(8), 1275–1280 (2009)
8. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-valued wavelets. Chaos, Solitons & Fractals 41(2), 951–961 (2009)
The Features of Multiple Affine Fuzzy Quaternary Frames in Sobolev Space*

Hongwei Gao

Department of Mathematics, Yulin University,


Yulin 719000, P.R. China
sxxa66zxcv@126.com

Abstract. The concept of a generalized quaternary multiresolution structure (GQMS) of L²(R⁴) is formulated, and a class of multiple affine quaternary pseudoframes for subspaces of L²(R⁴) is introduced. The construction of a GQMS for Paley-Wiener subspaces of L²(R⁴) is studied, and a sufficient condition for the existence of a pyramid decomposition scheme is presented based on such a GQMS. A sort of affine quaternary frame is constructed by virtue of the pyramid decomposition scheme and the Fourier transform. We show how to obtain new orthonormal bases for the space L²(R⁴) from these wavelet packets.

Keywords: Sobolev space, separable Hilbert space, explicit frame bounds, projection operator, quaternary, wavelet frames, frame analysis.

1 Introduction and Notations

Frame theory plays an important role in modern time-frequency analysis. It has developed very fast over the last twenty years, especially in the context of wavelets and Gabor systems. In her celebrated paper [1], Daubechies constructed a family of compactly supported univariate orthogonal scaling functions, and their corresponding orthogonal wavelets, with dilation factor 2. Since then, wavelets with compact support have been widely and successfully used in various applications such as image compression and signal processing. Frame theory has been one of the powerful tools for research into wavelets. To study some deep problems in nonharmonic Fourier series, Duffin and Schaeffer [2] introduced the notion of frames for a separable Hilbert space in 1952; basically, they abstracted the fundamental notion that Gabor had used for studying signal processing [3]. These ideas did not seem to generate much general interest outside of nonharmonic Fourier series, however (see Young [4]), until the landmark paper of Daubechies, Grossmann, and Meyer [5] in 1986. After this groundbreaking work, the theory of frames began to be more widely studied, both in theory and in applications [6-8] such as signal processing, image processing, data compression and sampling theory. The notion of Frame Multiresolution Analysis (FMRA), as described by [6], generalizes the notion of MRA by allowing non-exact affine frames; however, the subspaces at different resolutions in an FMRA are still generated by a frame formed by translates and dilates of a single function. Inspired by [5] and [7], we introduce the notion of a Generalized Quaternary Multiresolution Structure (GQMS) of L²(R⁴) generated by several

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 513518, 2011.
Springer-Verlag Berlin Heidelberg 2011

functions of integer translates in L²(R⁴). We demonstrate that the GQMS has a pyramid decomposition scheme and obtain a frame-like decomposition based on such a GQMS. It also leads to new constructions of affine frames of L²(R⁴). Since the majority of information is multidimensional, many researchers have taken interest in multivariate wavelet theory. The classical method for constructing multivariate wavelets is to obtain separable multivariate wavelets by means of the tensor product of some univariate wavelet frames. It is significant to investigate nonseparable multivariate wavelet frames and pseudoframes.
Let ℋ be a separable Hilbert space. We recall that a sequence {η_v}_{v∈Z⁴} ⊆ ℋ is a frame for ℋ if there exist positive real numbers L, M such that

∀ h ∈ ℋ,  L ‖h‖² ≤ Σ_v |⟨h, η_v⟩|² ≤ M ‖h‖².   (1)

A sequence {η_v}_{v∈Z⁴} is called a Bessel sequence if only the upper inequality of (1) holds; if the upper inequality of (1) holds only for all elements g ∈ Q ⊆ ℋ, then {η_v}_{v∈Z⁴} is a Bessel sequence with respect to (w.r.t.) Q. If {η_v} is a frame, there exists a dual frame {η*_v} such that

∀ h ∈ ℋ,  h = Σ_v ⟨h, η_v⟩ η*_v = Σ_v ⟨h, η*_v⟩ η_v.   (2)
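The frame inequality (1) is easy to illustrate in a finite-dimensional toy model. A minimal sketch with the three-vector "Mercedes-Benz" frame of R² (an illustrative stand-in only; the frames of this paper live in L²(R⁴)), which is tight with L = M = 3/2:

```python
import numpy as np

# Three unit vectors at 120-degree spacing: a tight frame for R^2.
angles = np.pi / 2 + 2.0 * np.pi * np.arange(3) / 3.0
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)

h = np.array([0.7, -1.3])
frame_sum = float(np.sum((frame @ h) ** 2))   # sum_v |<h, eta_v>|^2
print(abs(frame_sum - 1.5 * float(h @ h)))    # both bounds met with equality: ~0
```

For a tight frame the two bounds in (1) coincide, and the dual frame in (2) is just the frame itself rescaled by 1/L.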

The Fourier transform of an integrable function h(x) ∈ L²(R⁴) is defined by

Fh(ω) = ĥ(ω) = ∫_{R⁴} h(x) e^{−2πi x·ω} dx,  ω ∈ R⁴.   (3)

For a sequence c = {c(k)} ∈ ℓ²(Z⁴), we define its discrete-time Fourier transform as the function in L²([0,1]⁴) given by

Fc(ω) = C(ω) = Σ_{v∈Z⁴} c(v) e^{−2πi v·ω}.   (4)
For s > 0, we denote by H^s(R⁴) the Sobolev space of all quaternary functions h(x) such that

∫_{R⁴} |ĥ(ω)|² (1 + ‖ω‖^{2s}) dω < +∞.

The space H^s(R⁴) is a Hilbert space equipped with the inner product

⟨h, g⟩_{H^s(R⁴)} := (1/(2π)⁴) ∫_{R⁴} ĥ(ω) \overline{ĝ(ω)} (1 + |ω|^{2s}) dω,  h, g ∈ L²(R⁴).   (5)

We are interested in wavelet bases for the Sobolev space H^s(R⁴), where s is a positive integer. In this case, we obtain

⟨h, g⟩_{H^s(R⁴)} = ⟨h, g⟩ + ⟨h^{(s)}, g^{(s)}⟩,  h, g ∈ L²(R⁴).   (6)

Suppose that h(x) is a function in the Sobolev space H^s(R⁴). For j ∈ Z, u ∈ Z⁴, setting h_{j,u}(x) = 4^j h(2^j x − u), we have

‖h_{j,u}‖²_{H^s(R⁴)} = ‖h_{j,u}‖²_{L²(R⁴)} + 4^{js} ‖(h^{(s)})_{j,u}‖²_{L²(R⁴)}.

Note that the discrete-time Fourier transform is Z⁴-periodic. Let r be a fixed positive integer, and let J be a finite index set, i.e., J = {1, 2, …, r}. We consider the case of multiple generators, which yield multiple pseudoframes for subspaces.

Definition 1. Let {T_v φ_j} and {T_v φ̃_j} (j ∈ J, v ∈ Z⁴) be two sequences in a subspace M ⊆ L²(R⁴). We say that {T_v φ_j} forms a multiple pseudoframe for M with respect to (w.r.t.) {T_v φ̃_j} (j ∈ J, v ∈ Z⁴) if

∀ f ∈ M,  f = Σ_{j∈J} Σ_{v∈Z⁴} ⟨f, T_v φ_j⟩ T_v φ̃_j,   (7)

where the translate operator is defined by (T_v φ)(x) = φ(x − v), v ∈ Z⁴, for a function φ(x) ∈ L²(R⁴). It is important to note that {T_v φ_j} and {T_v φ̃_j} (j ∈ J, v ∈ Z⁴) need not be contained in M. Consequently, the positions of {T_v φ_j} and {T_v φ̃_j} are not generally interchangeable, i.e., there may exist f ∈ M such that

Σ_{j∈J} Σ_{v∈Z⁴} ⟨f, T_v φ̃_j⟩ T_v φ_j ≠ Σ_{j∈J} Σ_{v∈Z⁴} ⟨f, T_v φ_j⟩ T_v φ̃_j = f.

Definition 2. A generalized quaternary multiresolution structure (GQMS) {V_j, φ_j, φ̃_j} is a sequence of closed linear subspaces {V_j}_{j∈Z} of L²(R⁴) together with 2r elements φ_j, φ̃_j ∈ L²(R⁴), j ∈ J, such that (a) V_j ⊂ V_{j+1}, ∀ j ∈ Z; (b) ⋂_{j∈Z} V_j = {0} and ⋃_{j∈Z} V_j is dense in L²(R⁴); (c) h(x) ∈ V_j if and only if Dh(x) ∈ V_{j+1}, ∀ j ∈ Z, where Dφ(x) = φ(2x) for φ(x) ∈ L²(R⁴); (d) g(x) ∈ V₀ implies T_v g(x) ∈ V₀ for all v ∈ Z⁴; (e) {T_v φ_j : j ∈ J, v ∈ Z⁴} forms a multiple pseudoframe for V₀ with respect to {T_v φ̃_j : j ∈ J, v ∈ Z⁴}.

2 Construction of GQMS of Paley-Wiener Subspace

A necessary and sufficient condition for the construction of a multiple pseudoframe for Paley-Wiener subspaces of L²(R⁴) is presented as follows.

Theorem 1. Let h_j ∈ L²(R⁴) (j ∈ J) be such that |ĥ_j| > 0 a.e. on a connected neighbourhood of 0 in [−1/2, 1/2)⁴, and |ĥ_j| = 0 a.e. otherwise. Define Δ = ⋂_{j∈J} {ω ∈ R⁴ : |ĥ_j(ω)| ≥ c > 0} and

V₀ = PW_Δ = {f ∈ L²(R⁴) : supp(f̂) ⊆ Δ},   (8)

V_n = {h(x) ∈ L²(R⁴) : h(x/2ⁿ) ∈ V₀},  n ∈ Z.   (9)

Then, for h̃_j ∈ L²(R⁴), {T_v h_j : j ∈ J, v ∈ Z⁴} is a multiple pseudoframe for V₀ with respect to {T_v h̃_j : j ∈ J, v ∈ Z⁴} if and only if

Σ_{j∈J} \overline{ĥ_j(ω)} F h̃_j(ω) χ_Δ(ω) = χ_Δ(ω)  a.e.,   (10)

where χ_Δ is the characteristic function of Δ.
Proof. For all f ∈ PW_Δ, consider

F(Σ_{j∈J} Σ_{v∈Z⁴} ⟨f, T_v h_j⟩ T_v h̃_j)(ω) = Σ_{j∈J} Σ_{v∈Z⁴} ⟨f, T_v h_j⟩ F(T_v h̃_j)(ω)
= Σ_{j∈J} Σ_{v∈Z⁴} ( ∫_{R⁴} f̂(γ) \overline{ĥ_j(γ)} e^{2πi v·γ} dγ ) F h̃_j(ω) e^{−2πi v·ω}
= Σ_{j∈J} Σ_{v∈Z⁴} ( ∫_{[0,1]⁴} Σ_{n∈Z⁴} f̂(γ + n) \overline{ĥ_j(γ + n)} e^{2πi v·γ} dγ ) F h̃_j(ω) e^{−2πi v·ω}
= Σ_{j∈J} F h̃_j(ω) Σ_{n∈Z⁴} f̂(ω + n) \overline{ĥ_j(ω + n)} = Σ_{j∈J} f̂(ω) \overline{ĥ_j(ω)} F h̃_j(ω),

where we have used the facts that f̂ ≠ 0 only on [−1/2, 1/2)⁴ and that Σ_{n∈Z⁴} f̂(· + n) \overline{ĥ_j(· + n)}, j ∈ J, is a 1-periodic function. Therefore

Σ_{j∈J} \overline{ĥ_j} F h̃_j χ_Δ = χ_Δ  a.e.

is a necessary and sufficient condition for {T_v h_j : j ∈ J, v ∈ Z⁴} to be a multiple pseudoframe for V₀ with respect to {T_v h̃_j : j ∈ J, v ∈ Z⁴}.
Theorem 2. Let h_j, h̃_j ∈ L²(R⁴) (j ∈ J) have the properties specified in Theorem 1, so that (10) is satisfied. Assume V_n is defined by (9). Then {V_n, h_j, h̃_j} forms a GQMS.

Proof. We need to verify the axioms in Definition 2. The inclusion V_e ⊂ V_{e+1} follows from the fact that V_e defined by (9) is equivalent to PW_{2^e Δ}. Condition (b) is satisfied because the union of all band-limited signal spaces is dense in L²(R⁴), while their intersection is the trivial space. Condition (c) is an immediate consequence of (9). For condition (d) to be proved, if f ∈ V₀, then f = Σ_{j∈J} Σ_{k∈Z⁴} ⟨f, T_k h_j⟩ T_k h̃_j. By a change of variable, for n ∈ Z⁴,

f(t − n) = Σ_{j∈J} Σ_{v∈Z⁴} ⟨f(·), h_j(· − v)⟩ h̃_j(t − v − n) = Σ_{j∈J} Σ_{v∈Z⁴} ⟨f(· − n), h_j(· − v)⟩ h̃_j(t − v),

that is, T_n f = Σ_{j∈J} Σ_{v∈Z⁴} ⟨T_n f, T_v h_j⟩ T_v h̃_j. Moreover, F(T_n f) has support in Δ for arbitrary n ∈ Z⁴. Therefore T_n f ∈ V₀.

Example 1. Let h_j ∈ L²(R⁴) (j ∈ J) be such that

ĥ_j(ω) = 1/r  a.e. on ‖ω‖ ≤ 1/8;  ĥ_j(ω) = (3 − 16‖ω‖)^{1/r}  a.e. on 1/8 < ‖ω‖ < 3/16;  ĥ_j(ω) = 0  otherwise,

for j ∈ J. Choose Δ = {ω ∈ R⁴ : |ĥ_j(ω)| ≥ 1/r} = [−1/4, 1/4]⁴, define V₀ = PW_Δ, and select h̃_j ∈ L²(R⁴) such that

F h̃_j(ω) = 1  a.e. on ‖ω‖ ≤ 1/8;  F h̃_j(ω) = 5 − 16‖ω‖  a.e. on 1/8 < ‖ω‖ < 5/16;  F h̃_j(ω) = 0  otherwise.

Then, since Σ_{j∈J} ĥ_j(ω) F h̃_j(ω) = 1 a.e. on Δ, by Theorem 1, {T_k h_j} and {T_k h̃_j} form a pair of multiple pseudoframes for V₀ = PW_Δ.

3 The Characters of Nonseparable Quaternary Wavelet Packets

To construct wavelet packets, we introduce the following notation: a = 2, Λ₀(x) = h(x), Λ_μ(x) = ψ_μ(x), b^{(0)}(k) = b(k), b^{(μ)}(k) = q^{(μ)}(k), where μ ∈ Γ₀ = {0, 1, 2, …, 15}. We are now in a position to introduce orthogonal quaternary nonseparable wavelet packets.

Definition 3. A family of functions {Λ_{16n+μ}(x) : n = 0, 1, 2, 3, …; μ ∈ Γ₀} is called a nonseparable quaternary wavelet packet with respect to the orthogonal quaternary scaling function Λ₀(x), where

Λ_{16n+μ}(x) = Σ_{k∈Z⁴} b^{(μ)}(k) Λ_n(2x − k),  μ ∈ Γ₀.   (11)

Taking the Fourier transform of both sides of (11) gives the corresponding frequency-domain relation Λ̂_{16n+μ}(ω) = B^{(μ)}(ω/2) Λ̂_n(ω/2), where B^{(μ)}(ω) = Σ_{k∈Z⁴} b^{(μ)}(k) e^{−iω·k} is the symbol of {b^{(μ)}(k)}.

Theorem 3 [6]. If {Λ_{16n+μ}(x) : n = 0, 1, 2, 3, …; μ ∈ Γ₀} is a nonseparable quaternary wavelet packet with respect to the orthogonal scaling function Λ₀(x), then

⟨Λ_m(·), Λ_n(· − k)⟩ = δ_{m,n} δ_{0,k},  m, n ∈ Z₊, k ∈ Z⁴.   (12)

4 The Multiple Affine Fuzzy Quarternary Frames

We begin by introducing the concept of pseudoframes of translates.

Definition 4. Let $\{T_v f, v \in \mathbb{Z}^4\}$ and $\{T_v \tilde{f}, v \in \mathbb{Z}^4\}$ be two sequences in $L^2(\mathbb{R}^4)$, and let $U$ be a closed subspace of $L^2(\mathbb{R}^4)$. We say $\{T_v f, v \in \mathbb{Z}^4\}$ forms an affine pseudoframe for $U$ with respect to $\{T_v \tilde{f}, v \in \mathbb{Z}^4\}$ if

$\forall\, \Gamma(x) \in U, \quad \Gamma(x) = \sum_{v \in \mathbb{Z}^4} \langle \Gamma, T_v f \rangle\, T_v \tilde{f}(x).$  (13)

Define an operator $K : U \to \ell^2(\mathbb{Z}^4)$ by

$\forall\, \Gamma(x) \in U, \quad K\Gamma = \{\langle \Gamma, T_v f \rangle\},$  (14)

and define another operator $S : \ell^2(\mathbb{Z}^4) \to W$ such that

$\forall\, c = \{c(k)\} \in \ell^2(\mathbb{Z}^4), \quad Sc = \sum_{v \in \mathbb{Z}^4} c(v)\, T_v \tilde{f}.$  (15)

Theorem 4. Let $\{T_v f\}_{v \in \mathbb{Z}^4} \subset L^2(\mathbb{R}^4)$ be a Bessel sequence with respect to the subspace $U \subset L^2(\mathbb{R}^4)$, and let $\{T_v \tilde{f}\}_{v \in \mathbb{Z}^4}$ be a Bessel sequence in $L^2(\mathbb{R}^4)$. Let $K$ be defined by (14), let $S$ be defined by (15), and let $P$ be the orthogonal projection from $L^2(\mathbb{R}^4)$ onto $U$. Then $\{T_v f\}_{v \in \mathbb{Z}^4}$ is a pseudoframe of translates for $U$ with respect to $\{T_v \tilde{f}\}_{v \in \mathbb{Z}^4}$ if and only if

$SKP = P.$  (16)
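A finite-dimensional analogue makes the condition of Theorem 4 concrete. The sketch below is an illustrative assumption, not the paper's setting: the translates $T_v f$ are replaced by three vectors spanning a 2-D subspace $U$ of $\mathbb{R}^4$, and the dual system by the canonical (pseudoinverse) dual; composing synthesis, analysis, and the projection then reproduces the projection.

```python
import numpy as np

# Redundant system of "frame" vectors (rows) for U = span{e1, e2} in R^4.
F = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])

# Orthogonal projection P onto U (the row space of F), via an orthonormal basis.
U_basis = np.linalg.svd(F.T)[0][:, :2]
P = U_basis @ U_basis.T

K = F                     # analysis operator:  K x = (<x, f_v>)_v
S = np.linalg.pinv(F)     # synthesis with the canonical dual system

# Pseudoframe condition: synthesis after analysis reproduces the projection.
assert np.allclose(S @ K @ P, P)
print("pseudoframe condition holds")
```

Here `np.linalg.pinv(F) @ F` is exactly the orthogonal projector onto the row space of `F`, which is why the redundancy of the system does no harm.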
518 H. Gao

Proof. The convergence of all summations in (7) and (8) follows from the assumptions that the family $\{T_v f\}_{v \in \mathbb{Z}^4}$ is a Bessel sequence with respect to the subspace $U$ and the family $\{T_v \tilde{f}\}_{v \in \mathbb{Z}^4}$ is a Bessel sequence in $L^2(\mathbb{R}^4)$; with this, the proof of the theorem is straightforward.

Theorem 5 [8]. Let $\phi(x)$, $\tilde{\phi}(x)$, $\psi_\mu(x)$ and $\tilde{\psi}_\mu(x)$ ($\mu \in \Lambda$) be functions in $L^2(\mathbb{R}^4)$. Assume that the conditions of Theorem 1 are satisfied. Then, for any $\Gamma \in L^2(\mathbb{R}^4)$ and $n \in \mathbb{Z}$,

$\sum_{k \in \mathbb{Z}^4} \langle \Gamma, \phi_{n,k} \rangle\, \tilde{\phi}_{n,k}(x) = \sum_{\mu=1}^{15} \sum_{v=-\infty}^{n-1} \sum_{k \in \mathbb{Z}^4} \langle \Gamma, \psi_{\mu:v,k} \rangle\, \tilde{\psi}_{\mu:v,k}(x),$  (17)

$\Gamma(x) = \sum_{\mu=1}^{15} \sum_{v=-\infty}^{+\infty} \sum_{k \in \mathbb{Z}^4} \langle \Gamma, \psi_{\mu:v,k} \rangle\, \tilde{\psi}_{\mu:v,k}(x), \quad \Gamma \in L^2(\mathbb{R}^4).$  (18)

Consequently, if $\{\psi_{\mu:v,k}\}$ and $\{\tilde{\psi}_{\mu:v,k}\}$ ($\mu \in \Lambda$, $v \in \mathbb{Z}$, $k \in \mathbb{Z}^4$) are also Bessel sequences, then they form a pair of affine frames for $L^2(\mathbb{R}^4)$.
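A toy analogue of the frame expansion (18) can be checked in $\mathbb{R}^2$: the "Mercedes-Benz" system of three unit vectors (our illustrative example, not from the paper) is a tight frame with frame bound $3/2$, so every $x$ is recovered as $x = \tfrac{2}{3}\sum_k \langle x, e_k\rangle e_k$.

```python
import numpy as np

# Three unit vectors at 120-degree spacing: a tight frame for R^2.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
E = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # frame vectors as rows

x = np.array([0.7, -1.3])
coeffs = E @ x                        # analysis: <x, e_k>
x_rec = (2.0 / 3.0) * (E.T @ coeffs)  # synthesis with the dual frame (2/3) e_k
assert np.allclose(x_rec, x)
print(x_rec)
```

The factor $2/3$ is the reciprocal of the frame bound; for a non-tight frame the dual vectors would no longer be scalar multiples of the $e_k$.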

References
1. Daubechies, I.: The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inform. Theory 36(5), 961-1005 (1990)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896-910 (2007)
3. Zhang, N., Wu, X.: Lossless compression of color mosaic images. IEEE Trans. Image Processing 15(6), 1379-1388 (2006)
4. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information Technology Journal 8(8), 1275-1280 (2009)
5. Shen, Z.: Nontensor product wavelet packets in $L^2(\mathbb{R}^s)$. SIAM J. Math. Anal. 26(4), 1061-1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676-1683 (2009)
7. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-valued wavelets. Chaos, Solitons & Fractals 41(2), 951-961 (2009)
8. Li, S., et al.: A theory of generalized multiresolution structure and pseudoframes of translates. J. Fourier Anal. Appl. 7(1), 23-40 (2001)
Characters of Orthogonal Nontensor Product Trivariate
Wavelet Wraps in Three-Dimensional Besov Space*

Jiantang Zhao1,** and Qingjiang Chen2


1
College of Math. & Inform. Science,
Xianyang Normal University, Xianyang 712000, China
zas123qwe@126.com
2
School of Science,
Xi'an University of Architecture and Technology, Xi'an 710055, China
qjchen66xytu@126.com

Abstract. Compactly supported wavelet bases for Sobolev spaces are investigated. Starting from a pair of compactly supported refinable functions with multiscale dilation factor in the space $L^2(\mathbb{R}^3)$ meeting a very mild condition, we provide a general approach for constructing wavelet bases, which generalizes univariate wavelets in Hilbert space. The notion of orthogonal nontensor product trivariate wavelet wraps is proposed by virtue of an iteration method. Their orthogonality characters are investigated using a time-frequency analysis method. Three orthogonality formulas concerning these wavelet wraps are obtained. New orthonormal bases of the space $L^2(\mathbb{R}^3)$ are drawn from these wavelet wraps. A procedure for designing a class of orthogonal vector-valued compactly supported wavelet functions is also presented.

Keywords: Besov space, nontensor product, subdivision operator, trivariate wavelet wraps, Bessel sequence, orthonormal bases, functional analysis method.

1 Introduction
Although the Fourier transform has been a major tool in analysis for over a century, it has a serious shortcoming for signal analysis: it hides in its phases information concerning the moment of emission and the duration of a signal. In her celebrated paper [1], Daubechies constructed a family of compactly supported univariate orthogonal scaling functions and their corresponding orthogonal wavelets with dilation factor 2. Since then, wavelets with compact support have been widely and successfully used in various applications such as image compression and signal processing. Biorthogonal wavelets, in both the univariate case and the multivariate case, have been extensively studied in the literature. With symmetry and many other desired properties,

*
Foundation item: This work is supported by the Science Research Foundation of Education
Department of Shaanxi Provincial Government (Grant No: 11JK0513 and Grant
No:11JK0468).
**
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 519524, 2011.
Springer-Verlag Berlin Heidelberg 2011
520 J. Zhao and Q. Chen

biorthogonal wavelets have been found to be more efficient and useful in many applications than orthogonal ones. Wavelet analysis has developed into a new branch of mathematics over the past twenty years, and its applications involve many areas in natural science and engineering technology. The main advantage of wavelets is their time-frequency localization property. Many signals in areas like music, speech, images, and video can be efficiently represented by wavelets that are translations and dilations of a single function, called the mother wavelet, with bandpass property. Wavelet packets, owing to their good properties, have attracted considerable attention, and they can be widely applied in science and engineering [2,3]. Coifman and Meyer first introduced the notion of orthogonal wavelet packets, which were used to decompose wavelet components. Chui and Li [4] generalized the concept of orthogonal wavelet packets to non-orthogonal wavelet packets, so that wavelet packets can be employed in the case of spline wavelets and so on. Tensor product multivariate wavelet packs were constructed by Coifman and Meyer; the introduction of the notion of nontensor product wavelet packs is attributed to Shen [5]. Since the majority of information is multidimensional, many researchers are interested in the investigation of multivariate wavelet theory. The tensor product approach, however, has obvious defects, such as scarcity of design freedom. Therefore, it is significant to investigate nonseparable multivariate wavelet theory. Since there is little literature on biorthogonal wavelet packs, it is necessary to investigate them.
In the following, we introduce some notation. $\mathbb{Z}$ and $\mathbb{Z}_+$ denote all integers and all nonnegative integers, respectively, $\mathbb{R}$ denotes the real numbers, $\mathbb{R}^3$ denotes the 3-dimensional Euclidean space, and $L^2(\mathbb{R}^3)$ denotes the space of square integrable functions on $\mathbb{R}^3$. Let $x = (x_1, x_2, x_3) \in \mathbb{R}^3$, $\omega = (\omega_1, \omega_2, \omega_3) \in \mathbb{R}^3$, $k = (k_1, k_2, k_3) \in \mathbb{Z}^3$, and $y_\ell = e^{-i\omega_\ell / a}$ ($\ell = 1, 2, 3$), where $a \in \mathbb{Z}$, $a \ge 2$. In the following, $\omega \cdot x = \omega_1 x_1 + \omega_2 x_2 + \omega_3 x_3$. For any $g(x) \in L^2(\mathbb{R}^3)$, the Fourier transform of $g(x)$ is defined by

$\hat{g}(\omega) = \int_{\mathbb{R}^3} g(x)\, e^{-i \omega \cdot x}\, dx.$  (1)

2 The Trivariate Multiresolution Analysis

First, we introduce the multiresolution analysis of the space $L^2(\mathbb{R}^3)$. Wavelets can be constructed by means of multiresolution analysis. In particular, the existence theorem [8] for higher-dimensional wavelets with an arbitrary dilation matrix has been given. Let $g(x) \in L^2(\mathbb{R}^3)$ satisfy the following refinement equation:

$g(x) = a^3 \sum_{n \in \mathbb{Z}^3} b_n\, g(ax - n),$  (2)

where $\{b_n\}_{n \in \mathbb{Z}^3}$ is a real number sequence with only finitely many nonzero terms, and $g(x)$ is called a scaling function. Formula (2) is called a two-scale refinement equation. In the frequency domain, (2) can be written as

$\hat{g}(\omega) = B(y_1, y_2, y_3)\, \hat{g}(\omega / a),$  (3)

where

$B(y_1, y_2, y_3) = \sum_{n \in \mathbb{Z}^3} b_n\, y_1^{n_1} y_2^{n_2} y_3^{n_3}.$  (4)
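The refinement relation (3) can be checked numerically in the 1-D analogue with $a = 2$ (an illustrative reduction of the trivariate case): for the Haar scaling function $g = \chi_{[0,1)}$ one has $\hat{g}(\omega) = (1 - e^{-i\omega})/(i\omega)$, the mask is $\{b_0, b_1\} = \{1/2, 1/2\}$, and its symbol is $B(y) = (1+y)/2$ with $y = e^{-i\omega/2}$.

```python
import numpy as np

def g_hat(w):
    # Fourier transform of the Haar scaling function, chi_[0,1)
    return (1 - np.exp(-1j * w)) / (1j * w)

def B(y):
    # symbol of the Haar mask {1/2, 1/2}
    return (1 + y) / 2

w = np.linspace(0.1, 20.0, 500)          # avoid w = 0 (removable singularity)
lhs = g_hat(w)
rhs = B(np.exp(-1j * w / 2)) * g_hat(w / 2)
assert np.allclose(lhs, rhs)
print("max error:", np.max(np.abs(lhs - rhs)))
```

The identity holds exactly in this case, since $(1+e^{-i\omega/2})(1-e^{-i\omega/2}) = 1-e^{-i\omega}$.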

Define a subspace $X_j \subset L^2(\mathbb{R}^3)$ ($j \in \mathbb{Z}$) by

$X_j = \mathrm{clos}_{L^2(\mathbb{R}^3)} \langle a^{3j/2}\, g(a^j x - u) : u \in \mathbb{Z}^3 \rangle.$  (5)

Definition 2. We say that $g(x)$ in (2) generates a multiresolution analysis $\{X_j\}_{j \in \mathbb{Z}}$ of $L^2(\mathbb{R}^3)$ if the sequence $\{X_j\}_{j \in \mathbb{Z}}$ defined in (5) satisfies the following properties: (i) $X_j \subset X_{j+1}$, $\forall j \in \mathbb{Z}$; (ii) $\bigcap_{j \in \mathbb{Z}} X_j = \{0\}$ and $\bigcup_{j \in \mathbb{Z}} X_j$ is dense in $L^2(\mathbb{R}^3)$; (iii) $f(x) \in X_k \Leftrightarrow f(ax) \in X_{k+1}$, $\forall k \in \mathbb{Z}$; (iv) the family $\{g(a^j x - u) : u \in \mathbb{Z}^3\}$ forms a Riesz basis for the space $X_j$.


Let $Y_k$ ($k \in \mathbb{Z}$) denote the complementary subspace of $X_k$ in $X_{k+1}$, and assume that there exists a vector-valued function $\Psi(x) = (\psi_1(x), \psi_2(x), \ldots, \psi_{a^3-1}(x))$ whose translates constitute a Riesz basis for $Y_j$, i.e.,

$Y_j = \mathrm{clos}_{L^2(\mathbb{R}^3)} \langle \psi_{\ell : j, u} : \ell = 1, 2, \ldots, a^3 - 1;\ u \in \mathbb{Z}^3 \rangle,$  (6)

where $j \in \mathbb{Z}$ and $\psi_{\ell : j, u}(x) = a^{3j/2}\, \psi_\ell(a^j x - u)$, $\ell = 1, 2, \ldots, a^3 - 1$, $u \in \mathbb{Z}^3$. From condition (6) it is obvious that $\psi_1(x), \psi_2(x), \ldots, \psi_{a^3-1}(x)$ are in $Y_0 \subset X_1$. Hence there exist $a^3 - 1$ real number sequences $\{q_n^{(\ell)}\}$ ($\ell = 1, 2, \ldots, a^3 - 1$, $n \in \mathbb{Z}^3$) such that

$\psi_\ell(x) = a^3 \sum_{n \in \mathbb{Z}^3} q_n^{(\ell)}\, g(ax - n).$  (7)

In the frequency domain, (7) can be written as

$\hat{\psi}_\ell(\omega) = Q^{(\ell)}(y_1, y_2, y_3)\, \hat{g}(\omega / a), \quad \ell = 1, 2, \ldots, a^3 - 1,$  (8)

where the symbol of the sequence $\{q_k^{(\ell)}\}$ ($\ell = 1, 2, \ldots, a^3 - 1$, $k \in \mathbb{Z}^3$) is

$Q^{(\ell)}(y_1, y_2, y_3) = \sum_{n \in \mathbb{Z}^3} q_n^{(\ell)}\, y_1^{n_1} y_2^{n_2} y_3^{n_3}.$  (9)

A trivariate scaling function $g(x) \in L^2(\mathbb{R}^3)$ is called orthogonal if

$\langle g(\cdot), g(\cdot - n) \rangle = \delta_{0,n}, \quad n \in \mathbb{Z}^3.$  (10)

We say $\Psi(x) = (\psi_1(x), \psi_2(x), \ldots, \psi_{a^3-1}(x))^T$ is a vector of orthogonal trivariate wavelets associated with the scaling function $g(x)$ if

$\langle g(\cdot), \psi_\ell(\cdot - u) \rangle = 0, \quad \ell = 1, \ldots, a^3 - 1,\ u \in \mathbb{Z}^3,$  (11)

$\langle \psi_\ell(\cdot), \psi_\mu(\cdot - u) \rangle = \delta_{\ell,\mu}\, \delta_{0,u}, \quad \ell, \mu = 1, \ldots, a^3 - 1,\ u \in \mathbb{Z}^3.$  (12)


3 The Characters of Nontensor Product Trivariate Wavelet Wraps

To construct wavelet wraps, we introduce the following notation: $\Lambda_0(x) = g(x)$, $\Lambda_\mu(x) = \psi_\mu(x)$, $b^{(0)}(k) = b(k)$, $b^{(\mu)}(k) = q^{(\mu)}(k)$, where $\mu = 1, 2, \ldots, a^3 - 1$. We are now in a position to introduce orthogonal nontensor product trivariate wavelet wraps.

Definition 3. A family of functions $\{\Lambda_{a^3 n + \mu}(x) : n = 0, 1, 2, 3, \ldots;\ \mu = 0, 1, \ldots, a^3 - 1\}$ is called a nontensor product trivariate wavelet wrap with respect to the orthogonal scaling function $\Lambda_0(x)$, where

$\Lambda_{a^3 n + \mu}(x) = \sum_{k \in \mathbb{Z}^3} b^{(\mu)}(k)\, \Lambda_n(ax - k),$  (13)

with $\mu = 0, 1, 2, \ldots, a^3 - 1$. Taking the Fourier transform of both sides of (13) yields

$\hat{\Lambda}_{a^3 n + \mu}(\omega) = B^{(\mu)}(y_1, y_2, y_3)\, \hat{\Lambda}_n(\omega / a),$  (14)

where

$B^{(\mu)}(y_1, y_2, y_3) = B^{(\mu)}(\omega / a) = \sum_{k \in \mathbb{Z}^3} b^{(\mu)}(k)\, y_1^{k_1} y_2^{k_2} y_3^{k_3}.$  (15)
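The recursion (13) splits each packet $\Lambda_n$ into $a^3$ children via the filters $b^{(\mu)}$. A 1-D discrete sketch (an illustrative reduction, not the trivariate construction itself) applies the analysis version of that split to a signal with the orthogonal Haar filter pair, producing the full packet tree:

```python
import numpy as np

b0 = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass filter
b1 = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass filter

def split(c):
    # one packet split: filter with each branch, then downsample by 2
    low = np.convolve(c, b0[::-1])[1::2]
    high = np.convolve(c, b1[::-1])[1::2]
    return low, high

def packet_tree(signal, levels):
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        nodes = [part for c in nodes for part in split(c)]
    return nodes  # 2**levels leaf packets, indexed like n in (13)

leaves = packet_tree([4.0, 2.0, 6.0, 8.0], levels=2)
# Orthogonal filters preserve energy across the full packet tree.
energy = sum(float(np.sum(leaf ** 2)) for leaf in leaves)
assert np.isclose(energy, 4.0**2 + 2.0**2 + 6.0**2 + 8.0**2)
print(leaves)
```

The energy check is the discrete counterpart of the orthogonality formulas below: because each split is orthogonal, the leaf coefficients carry exactly the energy of the input.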

Lemma 1 [6]. Let $g(x) \in L^2(\mathbb{R}^3)$. Then $g(x)$ is orthogonal if and only if

$\sum_{k \in \mathbb{Z}^3} |\hat{g}(\omega + 2k\pi)|^2 = 1.$  (16)
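Criterion (16) can be verified numerically in the 1-D Haar case (again an illustrative analogue): there $|\hat{g}(\omega)|^2 = \mathrm{sinc}^2(\omega/2)$ with $\mathrm{sinc}(t) = \sin(t)/t$, and the periodized sum over the lattice equals 1.

```python
import numpy as np

def g_hat_sq(w):
    # |g_hat(w)|^2 for the Haar scaling function; np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(w / (2 * np.pi)) ** 2

w = np.linspace(-np.pi, np.pi, 7)
k = np.arange(-20000, 20001)
total = g_hat_sq(w[:, None] + 2 * np.pi * k[None, :]).sum(axis=1)
assert np.allclose(total, 1.0, atol=1e-3)
print(total)
```

The truncation of the lattice sum to $|k| \le 20000$ leaves an $O(1/k)$ tail, hence the loose tolerance; the exact sum is identically 1.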

Lemma 2 [6]. Assume that $g(x)$ is a semiorthogonal scaling function, $B(y_1, y_2, y_3)$ is the symbol of the sequence $\{b(k)\}$ defined in (4), and set $a = 2$. Then we have

$|B(y_1, y_2, y_3)|^2 + |B(-y_1, y_2, y_3)|^2 + |B(y_1, -y_2, y_3)|^2 + |B(y_1, y_2, -y_3)|^2 + |B(-y_1, -y_2, y_3)|^2 + |B(-y_1, y_2, -y_3)|^2 + |B(y_1, -y_2, -y_3)|^2 + |B(-y_1, -y_2, -y_3)|^2 = 1.$  (17)

Lemma 3 [6]. If $\psi_\mu(x)$ ($\mu = 0, 1, \ldots, 7$) are orthogonal wavelet functions associated with $g(x)$, then for $a = 2$ and $\lambda, \mu \in \{0, 1, \ldots, 7\}$, we have

$\sum_{j=0}^{1} \big\{ B^{(\lambda)}((-1)^j y_1, (-1)^j y_2, (-1)^j y_3)\, \overline{B^{(\mu)}((-1)^j y_1, (-1)^j y_2, (-1)^j y_3)} + B^{(\lambda)}((-1)^{j+1} y_1, (-1)^j y_2, (-1)^j y_3)\, \overline{B^{(\mu)}((-1)^{j+1} y_1, (-1)^j y_2, (-1)^j y_3)} + B^{(\lambda)}((-1)^j y_1, (-1)^{j+1} y_2, (-1)^j y_3)\, \overline{B^{(\mu)}((-1)^j y_1, (-1)^{j+1} y_2, (-1)^j y_3)} + B^{(\lambda)}((-1)^j y_1, (-1)^j y_2, (-1)^{j+1} y_3)\, \overline{B^{(\mu)}((-1)^j y_1, (-1)^j y_2, (-1)^{j+1} y_3)} \big\} = \delta_{\lambda,\mu}.$  (18)

For an arbitrary positive integer $n \in \mathbb{Z}_+$, expand it as

$n = \sum_{j=1}^{\infty} \nu_j\, 8^{j-1}, \quad \nu_j \in \{0, 1, 2, 3, \ldots, 7\}.$  (19)
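The expansion (19) is simply the base-8 digit expansion of $n$; a short helper (hypothetical name, for illustration only) makes the digits $\nu_1, \nu_2, \ldots$ explicit:

```python
def base8_digits(n):
    """Return [v_1, v_2, ...] with n = sum_j v_j * 8**(j-1), v_j in {0,...,7}."""
    digits = []
    while n > 0:
        n, v = divmod(n, 8)
        digits.append(v)
    return digits or [0]

assert base8_digits(100) == [4, 4, 1]   # 100 = 4 + 4*8 + 1*64
assert sum(v * 8**j for j, v in enumerate(base8_digits(2021))) == 2021
```

These digits are exactly the indices $\nu_j$ selecting the symbols $B^{(\nu_j)}$ in Lemma 4 below.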

Lemma 4 [7]. Let $n \in \mathbb{Z}_+$ be expanded as in (19). Then we have

$\hat{\Lambda}_n(\omega) = \prod_{j=1}^{\infty} B^{(\nu_j)}\big(e^{-i\omega_1/2^j}, e^{-i\omega_2/2^j}, e^{-i\omega_3/2^j}\big)\, \hat{\Lambda}_0(0).$

Theorem 1 [8]. For $n \in \mathbb{Z}_+$ and $k \in \mathbb{Z}^3$, we have

$\langle \Lambda_n(\cdot), \Lambda_n(\cdot - k) \rangle = \delta_{0,k}.$  (20)

Theorem 2. For every $k \in \mathbb{Z}^3$ and $m, n \in \mathbb{Z}_+$, we have

$\langle \Lambda_m(\cdot), \Lambda_n(\cdot - k) \rangle = \delta_{m,n}\, \delta_{0,k}.$  (21)

Proof. For the case $m = n$, (21) follows from Theorem 1. For $m \neq n$ with $m, n \in \Omega_0 = \{0, 1, \ldots, 7\}$, the result (21) can be established from (12). In what follows, assume that $m \neq n$ and at least one of $\{m, n\}$ does not belong to $\Omega_0$. Rewrite $m$, $n$ as $m = 8m_1 + \lambda_1$, $n = 8n_1 + \mu_1$, where $m_1, n_1 \in \mathbb{Z}_+$ and $\lambda_1, \mu_1 \in \Omega_0$.

Case 1. If $m_1 = n_1$, then $\lambda_1 \neq \mu_1$. By (14), (16) and (18), (21) follows, since

$(2\pi)^3 \langle \Lambda_m(\cdot), \Lambda_n(\cdot - k) \rangle = \int_{\mathbb{R}^3} \hat{\Lambda}_{8m_1+\lambda_1}(\omega)\, \overline{\hat{\Lambda}_{8n_1+\mu_1}(\omega)}\, e^{ik\cdot\omega}\, d\omega$
$= \int_{\mathbb{R}^3} B^{(\lambda_1)}(z_1, z_2, z_3)\, \hat{\Lambda}_{m_1}(\omega/2)\, \overline{\hat{\Lambda}_{n_1}(\omega/2)}\, \overline{B^{(\mu_1)}(z_1, z_2, z_3)}\, e^{ik\cdot\omega}\, d\omega$
$= \int_{[0,4\pi]^3} B^{(\lambda_1)}(z_1, z_2, z_3) \sum_{s \in \mathbb{Z}^3} \hat{\Lambda}_{m_1}(\omega/2 + 2s\pi)\, \overline{\hat{\Lambda}_{m_1}(\omega/2 + 2s\pi)}\, \overline{B^{(\mu_1)}(z_1, z_2, z_3)}\, e^{ik\cdot\omega}\, d\omega$
$= \int_{[0,2\pi]^3} \delta_{\lambda_1,\mu_1}\, e^{ik\cdot\omega}\, d\omega = 0,$

where $z_\ell = e^{-i\omega_\ell/2}$.

Case 2. If $m_1 \neq n_1$, write $m_1 = 8m_2 + \lambda_2$, $n_1 = 8n_2 + \mu_2$, where $m_2, n_2 \in \mathbb{Z}_+$ and $\lambda_2, \mu_2 \in \Omega_0$. If $m_2 = n_2$, then $\lambda_2 \neq \mu_2$; similar to Case 1, it holds that $\langle \Lambda_m(\cdot), \Lambda_n(\cdot - k) \rangle = 0$, that is, the proposition follows in this case. If $m_2 \neq n_2$, write $m_2 = 8m_3 + \lambda_3$, $n_2 = 8n_3 + \mu_3$ once more, where $m_3, n_3 \in \mathbb{Z}_+$ and $\lambda_3, \mu_3 \in \Omega_0$. Thus, after finitely many steps (denoted by $r$), we obtain $m_r, n_r \in \Omega_0$ and $\lambda_r, \mu_r \in \Omega_0$. If $m_r = n_r$, then $\lambda_r \neq \mu_r$; similar to Case 1, (21) follows. If $m_r \neq n_r$, then, similar to Lemma 1, we conclude that

$\langle \Lambda_m(\cdot), \Lambda_n(\cdot - k) \rangle = \frac{1}{(2\pi)^3} \int_{\mathbb{R}^3} \hat{\Lambda}_{8m_1+\lambda_1}(\omega)\, \overline{\hat{\Lambda}_{8n_1+\mu_1}(\omega)}\, e^{ik\cdot\omega}\, d\omega$
$= \frac{1}{(2\pi)^3} \int_{[0,2^{r+1}\pi]^3} \Big\{ \prod_{\ell=1}^{r} B^{(\lambda_\ell)}\big(\tfrac{\omega}{2^\ell}\big) \Big\} \cdot 0 \cdot \overline{\Big\{ \prod_{\ell=1}^{r} B^{(\mu_\ell)}\big(\tfrac{\omega}{2^\ell}\big) \Big\}}\, e^{ik\cdot\omega}\, d\omega = 0.$

For simplicity, we introduce the dilation operator $(D\Lambda)(x) = \Lambda(2x)$, where $\Lambda(x) \in L^2(\mathbb{R}^3)$, and set $D\Gamma = \{D\Lambda(x) : \Lambda(x) \in \Gamma\}$, where $\Gamma \subset L^2(\mathbb{R}^3)$. For any $\alpha \in \mathbb{Z}_+^3$, define

$\Gamma_\alpha = \Big\{ h(x) : h(x) = \sum_{k \in \mathbb{Z}^3} c_k\, \Lambda_\alpha(x - k),\ \{c_k\} \in \ell^2(\mathbb{Z}^3) \Big\},$  (22)

where the family $\{\Lambda_\alpha(x), \alpha \in \mathbb{Z}_+^3\}$ are the trivariate wavelet packets with respect to the orthogonal function $g(x)$. Then, for arbitrary $\alpha \in \mathbb{Z}_+^3$, the space $D\Gamma_\alpha$ can be orthogonally decomposed into the spaces $\Gamma_{2\alpha+\mu}$, $\mu \in \Omega_0$, i.e., $D\Gamma_\alpha = \bigoplus_{\mu \in \Omega_0} \Gamma_{2\alpha+\mu}$. For arbitrary $j \in \mathbb{Z}_+$, define the set

$\Delta_j = \{\alpha = (\alpha_1, \alpha_2, \alpha_3) \in \mathbb{Z}_+^3 \setminus \{0\} : 2^{j-1} \le \alpha_\ell \le 2^j - 1,\ \ell = 1, 2, 3\}.$

Theorem 3 [6]. The family $\{\Lambda_\alpha(\cdot - k) : \alpha \in \Delta_j,\ k \in \mathbb{Z}^3\}$ forms an orthogonal basis of $D^j W_0$. In particular, $\{\Lambda_\alpha(\cdot - k) : \alpha \in \mathbb{Z}_+^3,\ k \in \mathbb{Z}^3\}$ constitutes an orthogonal basis of the space $L^2(\mathbb{R}^3)$.

References
1. Daubechies, I.: The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inform. Theory 36(5), 961-1005 (1990)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896-910 (2007)
3. Zhang, N., Wu, X.: Lossless compression of color mosaic images. IEEE Trans. Image Processing 15(6), 1379-1388 (2006)
4. Chen, Q., et al.: Existence and characterization of orthogonal multiple vector-valued wavelets with three-scale. Chaos, Solitons & Fractals 42(4), 2484-2493 (2009)
5. Shen, Z.: Nontensor product wavelet packets in $L^2(\mathbb{R}^s)$. SIAM J. Math. Anal. 26(4), 1061-1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676-1683 (2009)
7. Chen, Q., Cao, H., Shi, Z.: Construction and decomposition of biorthogonal vector-valued wavelets with compact support. Chaos, Solitons & Fractals 42(5), 2765-2778 (2009)
8. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-valued wavelets. Chaos, Solitons & Fractals 41(2), 951-961 (2009)
9. Yang, S., Cheng, Z., Wang, H.: Construction of biorthogonal multiwavelets. J. Math. Anal. Appl. 276(1), 1-12 (2002)
Research on Computer Education and Education
Reform Based on a Case Study*

Jianhong Sun1,**, Qin Xu2, Yingjiang Li1, and JunSheng Li1


1
Engineering College of Honghe University,
Yunnan Mengzi, 661100, China
{Sparkhonghe,hhlijsh}@gmail.com
2
Faculty of Medicine,
Chu Xiong Medical and Pharmaceutical College,
Yunnan Chu Xiong, 675005, China

Abstract. Education reform is currently an issue of widespread concern in China. As the education model has remained unchanged for dozens of years, some disadvantages ill-suited to the present era have gradually become apparent. To keep abreast of new technology, computer science, as one of the fastest developing disciplines and one that itself deeply impacts modern education, needs timely education reform more than other fields. In this paper, we analyze the issues existing in computer education based on a case study, and present countermeasures for addressing these issues.

Keywords: education, computer education, education reform, teaching method.

1 Introduction
Nowadays, there is widespread concern over whether China's education is a success or not. Opinions on this hot topic vary from person to person. Some people hold the optimistic view that China's education is successful, because many Chinese students receive offers from top universities in Britain, America, and other countries on full scholarships, and achieve among the highest test scores in the world in reading comprehension, writing, math, and science in their second language, English. However, other people ask why China's schools have not cultivated world-leading scientists with creativity. In any case, China's education model has remained unchanged for dozens of years, and some disadvantages ill-suited to the present era have gradually become apparent. There is a consensus on both sides that education needs reform.
It has not been long since the computer entered education as an independent discipline. Therefore, "computer science education research is an emergent area and is still giving rise to a literature" [1]. However, to keep abreast of new technology, computer science, as one of the fastest developing disciplines and one that itself also impacts

*
Pecuniary aid of Honghe University Discipline Construction Fund (081203).
Pecuniary aid of Honghe University Bilingual Course Construction Fund (SYKC0802).
**
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 525529, 2011.
Springer-Verlag Berlin Heidelberg 2011

modern education deeply, needs timely education reform more than other fields. In this paper, we analyze the issues existing in computer education based on a case study, and present countermeasures for addressing these issues.

2 Background and the Existing Problems

Honghe University (HU) is a developing university, upgraded from a specialized postsecondary normal college in 2003. It is one of the 1090 universities of China. HU serves a diverse population of full- and part-time, national and international students. In the 2009-2010 academic year, more than ten thousand students were enrolled in 12 undergraduate colleges with 36 undergraduate specialties [2]. Cultivating applied talents for society is the goal of HU, and to this end HU's administration is promoting instructional technology as a crucial part of higher education for faculty and students, as other universities do. In particular, as computer science is one of the most rapidly developing disciplines, the reform of computer education must be able to keep up with the development of the discipline.
According to our statistics, more than 80% of students develop a relatively unsophisticated website as their diploma project every year; only a few outstanding students can develop more complicated application software. In any case, the computer education of HU is facing some crucial issues, as follows:
- Weak sense of responsibility and teamwork spirit. These students are potential programmers, and it is a serious problem if they lack teamwork spirit. It is well known that a good team can excel at producing high-quality work, boosting productivity, and inspiring company loyalty. Therefore, strong teamwork spirit is a primary requirement of any company recruiting employees. It is widely believed that one person cannot finish a big project such as the Windows operating system, even if that person is a talent. In addition, a sense of responsibility is part of teamwork spirit; every team member should be responsible for his work with a strong sense of responsibility.
- Lack of initiative and capability for self-study and hands-on work. Obviously, most students have become used to doing their work in the absolutely fixed pattern of behavior learned from their teachers. It is fairly rigid, just as a computer executes a program step by step. These students usually feel flummoxed when they meet a new question and do not know how to find a solution by self-study. However, they usually get good scores in examinations, because the examinations do not go beyond the scope of the outline.
- Narrow focus. Most students attain knowledge only from classes and teachers, and cannot make good use of the Internet to extend the range of their knowledge. Limited by the scope of the outline and by time, the knowledge students acquire in the classroom is fairly limited. They still retain the thinking mode of primary and middle school: study only the knowledge the teacher teaches in class, get a good result in the exam, and that is enough. Few of them are aware of the importance of extra-curricular learning.
Research on Computer Education and Education Reform Based on a Case Study 527

3 Issues and Solutions Analysis

To change this situation, we feel obliged to deal with these problems in order to guarantee the quality of education. What, then, are the major causes of the problems above? After careful research, some causes have been identified, as follows:

3.1 The Textbook Issue

By way of contrast, the impact of a textbook on teaching is much larger in a Chinese class than in an American class, because the role of the textbook is significantly different: in China, teachers usually carry out the teaching plan in strict accordance with one textbook and do not advocate mentioning knowledge outside the book's scope; in America, textbooks work only as reference books, and teaching is not limited to one book [3]. In other words, most teachers in China have become used to textbook-directed rather than knowledge-oriented teaching. A single textbook will narrow a student's focus if he lacks the initiative and capability for self-study to extend the range of his knowledge through the Internet or other books. Therefore, we should change the traditional way and try to select a couple of valuable reference books for the students. It will be better if the teachers offer guidance for reading at the same time.

3.2 Teaching Methods and Examination Forms Reform

The teacher plays an important role and is an important factor in education. The teaching method, examination forms, professional skills and knowledge, and other personal characteristics of teachers, such as pursuing spirit and communication and presentation skills, are all fairly important in teaching activity. Among these factors, teaching methods, examination forms, and professional skills and knowledge are the ones on which a consensus can be reached and which are worth discussing. We are not alone in holding the idea that the traditional teaching method, textbook-directed and examination-oriented, should be reformed. References [4], [5], [6] have already discussed the disadvantages of the traditional teaching method and have respectively offered their own suggestions for teaching reform.
Based on our case study, some proposals are, in our opinion, as follows:
To start with, teachers should keep updating their professional skills and knowledge. Teaching a subject, the teacher should master the related skills and knowledge concerning that subject: the development history, the current technology, achievements and hotly discussed topics, and even future trends of development. Computer science is one of the fastest developing subjects; to be a good computer teacher, one must stay up to date in professional skills and knowledge.
Next, as the proverb says, "Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime." As a teacher, it is important to understand that teaching students how to solve a question is much better than just telling them the answer to the question.
In addition, a teacher must understand that he/she is a leader who guides students in how to learn and how to solve problems, not just a knowledge transfer between textbooks and

students in class. Teaching students how to use Internet resources and reference books to extend their professional knowledge will make better use of the limited class time.
Finally, improve the forms of teaching and the form and content of examinations, and make them serve their purpose better. Panel discussions and group projects are useful for training students' teamwork spirit and communication skills. Presentations are helpful for training students to express their opinions. The examination should cover every phase of the previous teaching forms, and the content of homework, projects, and examinations should be related to practical application. This will help enhance students' sense of accomplishment and interest in learning.

3.3 Understand the Program Objectives of Computer Education Clearly

In 2003, when HU was upgraded from a specialized postsecondary normal college to a university, new program objectives were proposed: "The Bachelor of Engineering (Engineering in Computer Science and Technology) program aims to prepare graduates to gain a professional qualification in the field of computing and entry-level positions as IT professionals. The program provides a broad and thorough study of the essential knowledge and practical skills required by graduates preparing for a career in the IT profession." Every teacher should understand the program objectives clearly, because they are the guidance for designing the outline and teaching plan of their courses.
It is well known that the computer can be used in literally any field, limited only by the imagination of its programmers and the ingenuity of its associated hardware engineers. Experiments and projects related to various practical applications will ensure that students understand this point as well. Therefore, a teacher should attempt to design more experiments and projects related to practical applications for his/her courses. This will also help enhance students' sense of accomplishment and interest in learning.

4 Some Existing Difficulties

The traditional teaching methods and educational concepts formed over the years are still stamped upon people's minds and are not easy to change. Furthermore, better implementation of these new concepts in education reform needs the support of superior departments. Therefore, some difficulties are unavoidable. They are:
Firstly, the administration department of HU expects standardized instruction: standard examinations, standard handout formats, etc., in order to pass the National Evaluation of Undergraduate Teaching Quality (NEUTQ). The NEUTQ has a very deep influence on the development prospects of every college or university, so the administration department has to lay much stress on it. Only in a few elective courses do teachers have the choice to teach in their own way. It is already beyond the scope of our capabilities to change this situation.
Secondly, many teachers seem reluctant to change the existing teaching model. Education reform not only requires much time but also demands much energy; few teachers are willing to do it unless it must be done.

Thirdly, from primary school to college or university, students have become used to the traditional teaching method. In fact, teaching methods such as panel discussions, group projects, and presentations are difficult to bring to the desired effect. This situation cannot be changed by reforming just several courses unless the whole education system undergoes a significant revolution.

5 Discussion
Many well-known educationalists and scholars have realized that the traditional Chinese education model has many problems that need to be changed. Qian Xuesen, a well-known physicist of China, asked on his deathbed: why have Chinese schools not nurtured outstanding elitists? As more and more people pay attention to this issue, the existing problems are likely to be solved soon.

References
1. Fincher, S., Petre, M.: Computer Science Education Research, p. 1. Taylor & Francis Group, London (2004)
2. Sun, J.H., Zhu, Y.B., Fu, J.W., Xu, H.C.: The Impact of Computer Based Education on Learning Method. In: 2010 International Conference on Education and Sport Education, ESE 2010, Wuhan, China, vol. 1, pp. 76-79 (2010)
3. Zhu, Y.X., Zhen, H.: Comparison of Choice of Textbooks in Universities between China and the United States. Higher Educational Research in Areas of Communications (6), 104-105 (2004) (in Chinese)
4. Han, H.Y.: Researching on Teaching Method of Undergraduate Education. Journal of Jilin Business and Technology College 25(3), 86-88 (2009) (in Chinese)
5. Duan, H., Wang, S.: Deepen Education Reform Innovative Teaching Modes. China University Teaching 4, 35-37 (2009) (in Chinese)
6. Deng, Y., Shang, Q.: A Comparison of Chinese and American College Textbooks, Teaching Methods and Curriculums. Journal of Zhanjiang Normal College 22(10), 34-37 (2001) (in Chinese)
The Existence and Uniqueness for a Class of Nonlinear
Wave Equations with Damping Term

Bo Lu and Qingshan Zhang

Department of Mathematics,
Henan Institute of Science and Technology,
Xinxiang 453003, P.R. China
cheersnow@163.com,qingshan11@yeah.net

Abstract. In this paper, the existence and uniqueness of the global generalized solution and the global classical solution are studied by the Galerkin method. The class of nonlinear wave equations describes the propagation of long waves with viscosity in a medium with dispersive effect. It also governs the problem of the longitudinal vibration of a 1-D elastic rod.

Keywords: nonlinear wave equation with damping term, initial boundary value problems, global generalized solution, global classical solution.

1 Introduction
In this paper we are concerned with the following initial boundary value problem:

$u_{tt} - 2b\, u_{xxt} + \alpha u_{xxxx} = (u_x^n)_x, \quad x \in (0, 1),\ t > 0,$  (1.1)

$u(0, t) = u(1, t) = 0, \quad u_{xx}(0, t) = u_{xx}(1, t) = 0, \quad t \ge 0,$  (1.2)

$u(x, 0) = \varphi(x), \quad u_t(x, 0) = \psi(x), \quad x \in [0, 1],$  (1.3)

where $\alpha > 0$ and $b > 0$ are constants and $u(x, t)$ is the unknown function. The subscripts $x$ and $t$ indicate the partial derivatives with respect to $x$ and $t$, respectively, $n \in \mathbb{N}$, and $\varphi(x)$ and $\psi(x)$ are given initial value functions defined on $[0, 1]$.
Equation (1.1) is connected with many equations. For example, in the study of a weakly nonlinear analysis of elasto-plastic-microstructure models for the longitudinal motion of an elasto-plastic bar in [1], there arises the model equation

$u_{tt} + \alpha u_{xxxx} = \beta (u_x^2)_x,$  (1.4)

where $u(x, t)$ is the longitudinal displacement and $\alpha > 0$, $\beta \neq 0$ are any real numbers. Moreover, the special solution of equation (1.4), its instability, and the instability of the ordinary strain solution were studied in [1]. In [2], [3], the authors studied the problem of determining solutions of the generalized equation of (1.4). The authors proved the existence and uniqueness of the global generalized

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 530533, 2011.
Springer-Verlag Berlin Heidelberg 2011

solution and the global classical solution of several initial boundary value problems by the contraction mapping principle, and gave sufficient conditions for the non-existence of the solution.
Because equation (1.4) can describe the propagation of waves in a medium with dispersive effect, it is meaningful [4] to study the corresponding nonlinear wave equations with a viscous damping term.
The papers [5], [6] studied the equation with the viscous damping term, proved the global existence and asymptotic properties of the solution of the initial boundary value problem, and gave some sufficient conditions for the blow-up of the solution.
In this paper, we prove the existence and uniqueness of the global generalized solution and the global classical solution of the initial boundary value problem (1.1)-(1.3) by the Galerkin method.

2 Existence and Uniqueness of the Global Generalized Solution of the Problem (1.1)-(1.3)

We prove the existence and uniqueness for the problem (1.1)-(1.3) by the Galerkin method and a compactness theorem in this section. First of all, we study the initial boundary value problem (1.1)-(1.3).
Let {y_i(x)} be the orthonormal basis in L²[0, 1] composed of the eigenfunctions of the eigenvalue problem

    y'' + λy = 0,  x ∈ (0, 1),    (2.1)
    y(0) = y(1) = 0,

corresponding to the eigenvalues λ_i (i = 1, 2, ...), where ' = d/dx.
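For problem (2.1) the eigenfunctions and eigenvalues are explicit: y_i(x) = √2 sin(iπx) with λ_i = (iπ)². A quick numerical check of the orthonormality in L²[0, 1] (an illustrative sketch, not part of the paper):

```python
import math

def y(i, x):
    # Eigenfunctions of y'' + lambda*y = 0, y(0) = y(1) = 0,
    # normalized in L^2[0, 1]: y_i(x) = sqrt(2)*sin(i*pi*x).
    return math.sqrt(2.0) * math.sin(i * math.pi * x)

def inner(f, g, m=2000):
    # Midpoint-rule approximation of the L^2[0, 1] inner product (f, g).
    return sum(f((k + 0.5) / m) * g((k + 0.5) / m) for k in range(m)) / m

# Orthonormality: (y_i, y_j) is approximately the Kronecker delta.
for i in range(1, 4):
    for j in range(1, 4):
        val = inner(lambda x: y(i, x), lambda x: y(j, x))
        assert abs(val - (1.0 if i == j else 0.0)) < 1e-6
```

The midpoint rule is exact to roundoff here because the integrands are cosines of integer half-periods over [0, 1].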
Let

    u_N(x, t) = Σ_{i=1}^{N} α_{Ni}(t) y_i(x)    (2.2)

be the Galerkin approximate solution of the problem (1.1)-(1.3), where the α_{Ni}(t) are undetermined functions and N is a natural number. Substituting the approximate solution u_N(x, t) into Eq. (1.1), multiplying both sides by y_s(x) and integrating over (0, 1), we obtain

    (u_Ntt - 2b u_Nxxt + α u_Nxxxx, y_s) = ((u_Nx^n)_x, y_s),  s = 1, 2, ..., N,    (2.3)

where (·, ·) denotes the inner product of L²[0, 1].
Substituting the approximate solution u_N(x, t) and the approximations of the initial value functions into the initial conditions (1.3), we get

    α_{Ns}(0) = φ_s,  α'_{Ns}(0) = ψ_s,  s = 1, 2, ..., N.    (2.4)

In order to prove the existence of the global generalized solution for the problem (1.1)-(1.3), we make a series of estimates for the approximate solution u_N(x, t).
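To make the construction concrete, the projected system (2.3)-(2.4) can be integrated numerically with the sine basis. The sketch below assumes the equation form u_tt - 2b u_xxt + α u_xxxx = (u_x^n)_x used in (2.3); the values of α, b, n and the initial data are illustrative assumptions, not taken from the paper:

```python
import math

# Sine basis y_s(x) = sqrt(2)*sin(s*pi*x), eigenvalues lambda_s = (s*pi)^2.
N, n, a, b = 3, 3, 1.0, 0.5            # a stands for alpha (assumed values)
lam = [(s * math.pi) ** 2 for s in range(1, N + 1)]
m = 64                                  # quadrature points
xs = [(k + 0.5) / m for k in range(m)]

def rhs(alpha):
    # ((u_x^n)_x, y_s) = -(u_x^n, y_s') after integration by parts.
    out = []
    for s in range(1, N + 1):
        acc = 0.0
        for x in xs:
            ux = sum(alpha[i] * math.sqrt(2) * (i + 1) * math.pi
                     * math.cos((i + 1) * math.pi * x) for i in range(N))
            ysp = math.sqrt(2) * s * math.pi * math.cos(s * math.pi * x)
            acc += ux ** n * ysp
        out.append(-acc / m)
    return out

def deriv(state):
    # Projection of (1.1): alpha'' = -2b*lam*alpha' - a*lam^2*alpha + rhs.
    alpha, vel = state[:N], state[N:]
    f = rhs(alpha)
    acc = [-2 * b * lam[s] * vel[s] - a * lam[s] ** 2 * alpha[s] + f[s]
           for s in range(N)]
    return vel + acc

def rk4_step(y0, dt):
    k1 = deriv(y0)
    k2 = deriv([v + dt / 2 * k for v, k in zip(y0, k1)])
    k3 = deriv([v + dt / 2 * k for v, k in zip(y0, k2)])
    k4 = deriv([v + dt * k for v, k in zip(y0, k3)])
    return [v + dt / 6 * (p + 2 * q + 2 * r + s)
            for v, p, q, r, s in zip(y0, k1, k2, k3, k4)]

# phi = 0.01*y_1, psi = 0, i.e. alpha_{N1}(0) = 0.01, all others 0 (2.4).
state = [0.01] + [0.0] * (2 * N - 1)
for _ in range(400):                    # dt = 2.5e-4, integrate to t = 0.1
    state = rk4_step(state, 2.5e-4)
assert 0.005 < state[0] < 0.008         # damped but still bounded
```

With b > 0 the small-data solution decays, consistent with the a priori estimates that follow.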
532 B. Lu and Q. Zhang

Lemma 2.1 [6]. Suppose that n ≥ 3 and n is an odd number, φ ∈ H²[0, 1], ψ ∈ L²[0, 1], φ_x ∈ L^{n+1}[0, 1], and φ(x) and ψ(x) satisfy the boundary conditions (1.2). Then for any N, the initial value problem (2.3), (2.4) has a global classical solution α_{Ns} ∈ C²[0, T] (s = 1, 2, ..., N). Moreover, the following estimate holds:

    ‖u_N(·, t)‖²_{H²} + ‖u_Nt(·, t)‖² ≤ C₁(T),  t ∈ [0, T],    (2.5)

where, here and in the sequel, C₁(T) and C_i(T) (i = 2, 3, ...) are constants which depend only on T.

Lemma 2.2 [6]. Suppose that the conditions of Lemma 2.1 hold, φ ∈ H⁴[0, 1], and ψ ∈ H²[0, 1]. Then the approximate solution of the problem (1.1)-(1.3) satisfies the following estimate:

    ‖u_N(·, t)‖²_{H⁴} + ‖u_Nt(·, t)‖²_{H²} + ‖u_Ntt(·, t)‖² ≤ C₃(T),  t ∈ [0, T].    (2.6)

Theorem 2.1. Suppose that n ≥ 3 and n is an odd number. If φ ∈ H⁴[0, 1], ψ ∈ H²[0, 1], and φ(x) and ψ(x) satisfy the boundary conditions (1.2), then the initial boundary value problem (1.1)-(1.3) admits a unique global generalized solution

    u ∈ C([0, T]; H⁴[0, 1]) ∩ C¹([0, T]; H²[0, 1]) ∩ C²([0, T]; L²[0, 1]),    (2.7)

where u(x, t) satisfies the boundary conditions (1.2) in the generalized sense and the initial conditions (1.3) in the classical sense.
Proof: From (2.6) we know that

    u_N ∈ C([0, T]; H⁴[0, 1]),  u_Nt ∈ C([0, T]; H²[0, 1]),  u_Ntt ∈ C([0, T]; L²[0, 1]).

Using the Sobolev embedding theorem [7] we have

    ∂_x^s u_N ∈ C([0, T] × [0, 1]),  0 ≤ s ≤ 3,   ∂_x^s u_Nt ∈ C([0, T] × [0, 1]),  0 ≤ s ≤ 1.

According to the Ascoli-Arzelà theorem, we can select a subsequence, still denoted by {u_N(x, t)}, such that there exists a function u(x, t) and, as N → ∞, the subsequence {u_N(x, t)} converges uniformly to the limiting function u(x, t) in [0, T] × [0, 1]. The corresponding subsequence of derivatives {u_Nx(x, t)} also converges uniformly to u_x(x, t) in [0, T] × [0, 1]. According to the compactness principle, the subsequences {∂_x^s u_N(x, t)} (0 ≤ s ≤ 4), {∂_x^s u_Nt(x, t)} (0 ≤ s ≤ 2), and {u_Ntt(x, t)} converge weakly to ∂_x^s u(x, t) (0 ≤ s ≤ 4), ∂_x^s u_t(x, t) (0 ≤ s ≤ 2), and u_tt(x, t) in L²([0, T] × [0, 1]), respectively. Hence u(x, t) satisfies (2.7).


Thus u(x, t) satisfies the boundary conditions (1.2) in the generalized sense, and it satisfies the initial conditions (1.3) in the classical sense. Therefore, u(x, t) is the generalized solution of the problem (1.1)-(1.3). It is easy to prove the uniqueness of the solution of the problem (1.1)-(1.3). This completes the proof of the theorem.

3 Existence and Uniqueness of the Global Classical Solution of the Problem (1.1)-(1.3)

Lemma 3.1 [6]. Suppose that the conditions of Lemma 2.2 hold, φ ∈ H⁷[0, 1], and ψ ∈ H⁵[0, 1]. Then the approximate solution of the problem (1.1)-(1.3) satisfies the following estimate:

    ‖u_N(·, t)‖²_{H⁷} + ‖u_Nt(·, t)‖²_{H⁵} + ‖u_Ntt(·, t)‖²_{H³} + ‖u_Nttt(·, t)‖²_{H¹} ≤ C₄(T),  t ∈ [0, T].    (3.1)

Theorem 3.1. Suppose that n ≥ 3 and n is an odd number. If φ ∈ H⁷[0, 1], ψ ∈ H⁵[0, 1], and φ(x) and ψ(x) satisfy the boundary conditions (1.2), then the initial boundary value problem (1.1)-(1.3) admits a unique global classical solution

    u ∈ C([0, T]; C⁴[0, 1]) ∩ C¹([0, T]; C²[0, 1]) ∩ C²([0, T]; C[0, 1]),    (3.2)

where u(x, t) satisfies the initial and boundary value conditions (1.2) and (1.3) in the classical sense.

4 Conclusion

By the methods of Sections 2 and 3, we can obtain corresponding results for the other initial boundary value problems of the equation.

References
1. An, L.J., Peirce, A.: A weakly nonlinear analysis of elasto-plastic-microstructure models. SIAM J. Appl. Math. 55(1), 136–155 (1995)
2. Chen, G.W., Yang, Z.J.: Existence and nonexistence of global solutions for a class of nonlinear wave equations. Math. Meth. Appl. Sci. 23, 615–631 (2000)
3. Zhang, H.W., Chen, G.W.: Potential well method for a class of nonlinear wave equations of fourth-order. Acta Mathematica Scientia 23A(6), 758–768 (2003)
4. Guenther, R.B., Lee, J.W.: Partial Differential Equations of Mathematical Physics and Integral Equations. Prentice Hall, NJ (1988)
5. Yang, Z.: Global existence, asymptotic behavior and blow-up of solutions for a class of nonlinear wave equations with dissipative term. J. Differential Equations 187, 520–540 (2003)
6. Chen, G.W., Lu, B.: The initial boundary value problems for a class of nonlinear wave equations with damping term. J. Math. Anal. Appl. 351, 1–15 (2009)
7. Maz'ja, V.G.: Sobolev Spaces. Springer, New York (1985)
Research on the Distributed Satellite Earth Measurement
System Based on ICE Middleware

Jun Zhou, Wenquan Feng, and Zebin Sun

School of Electronics and Information Engineering,


Beijing University of Aeronautics and Astronautics, Beijing, 100191, China
jerrychou1983@sina.com

Abstract. Satellite earth measurement is one of the crucial steps during the development of satellites, and also plays an important role in system validation and performance evaluation. This paper presents a distributed satellite earth measurement system based on ICE (Internet Communication Engine) middleware, which guarantees high scalability and flexible configuration by decoupling the correlative relationship between the message distributors and subscribers in the system, and thereby realizes dynamic location and load balancing. The system implementation and test results prove that this system performs better in distributed deployment, flexibility and network bandwidth occupancy than traditional systems.

Keywords: Satellite earth measurement, Internet communication engine, middleware, distributed.

1 Introduction
Satellite earth measurement is one of the crucial steps during the development of
satellites, and also plays an important role in system validation and performance
evaluation. Satellite earth measurement system itself is a sophisticated giant system
whose design and implementation process are affected by many factors. First of all, during its application, the measuring units of the system have to constantly and periodically transfer all sorts of results to different service units, so the information exchange follows a distributing-subscribing model. Second, the transmission of different data reports may have different requirements in transfer efficiency, safety and reliability, which requires the information exchange to effectively support those differences. In addition, in view of the diversity of measuring environments, the system has to operate on multiple platforms. But in the current system, the tight coupling between measuring units and service units has severely limited the scalability of the system, so the different needs of the service units cannot be met. Meanwhile, the current system has poor interoperability and is hard to deploy practically on a large scale.
An information subscribe/publish system based on ICE (Internet Communication Engine) middleware provides a new solution to the above-mentioned problems. ICE middleware gives the system high scalability and flexible configuration by decoupling the association between publishers and subscribers, which lays a realistic foundation for the distributed design of a large satellite earth measurement system.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 534–541, 2011.
Springer-Verlag Berlin Heidelberg 2011
Based upon the above analysis, this paper presents a new design of a distributed satellite earth measurement system based on ICE middleware. Compared with the traditional one, the new system is effectively simplified, realizes dynamic location and load balancing of the target server, and performs much better in distributed deployment, flexibility and network bandwidth occupancy.

2 The Distributed Design Principle Based upon ICE Middleware


Middleware technology is a very important supporting technology for establishing a distributed system and is widely applied in such designs. But existing middleware technologies have defects in cross-platform support, performance, interoperability and convenience of development to different degrees. As an object-oriented middleware platform, ICE provides tools, general interfaces and library support on different operating systems and programming languages for establishing object-oriented distributed application systems; ICE also provides a simple object model, simple but powerful general interfaces, an efficient and compact protocol, and plentiful call-dispatch modes, which suggests a new approach for the design of the satellite earth measurement system. Firstly, ICE provides a subscribe/publish service, ICEStorm, which decouples the tight coupling between information publishers and subscribers and therefore guarantees the system's high scalability and flexibility. The traditional information subscribe/publish mode is shown in Fig. 1.

[Figure: one publisher sending messages directly to three subscribers]

Fig. 1. Traditional information subscribe/publish mode

Under the traditional information subscribe/publish mode, publishers periodically provide information to subscribers, which demands that publishers manage subscribers' registration information, monitor data transmission, and recover from errors. But in practical operation there are many publishers and subscribers. When subscribers subscribe to different information, publishers have to spend a lot of resources managing and maintaining the above-mentioned details, which severely limits the scalability and flexibility of the system. In the ICE architecture, ICEStorm acts as a broker between publishers and subscribers. When publishers are ready to publish a new message, they no longer pay attention to subscribers and simply send a request to the ICEStorm server, which is fully in charge of transferring the message to subscribers. On the other side, subscribers just have to interact with the ICEStorm server to subscribe, cancel a subscription and obtain the information they are interested in. With this logic, publishers and subscribers can concentrate on their own application logic, which greatly simplifies the subscribe/publish process. The subscribe/publish mode based on ICEStorm is shown in Fig. 2.
[Figure: publisher sending to the ICEStorm server, which forwards to three subscribers]

Fig. 2. Subscribe/publish mode based on ICEStorm
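The decoupling that ICEStorm provides can be illustrated with a minimal in-process broker sketch in Python (an illustrative mock, not the ICE API; the topic name is an assumption):

```python
class Broker:
    """Minimal stand-in for a publish/subscribe broker such as ICEStorm."""
    def __init__(self):
        self.subscribers = {}          # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # The publisher only talks to the broker; it never sees subscribers.
        for callback in self.subscribers.get(topic, []):
            callback(message)

broker = Broker()
received = []
broker.subscribe("telemetry", received.append)
broker.subscribe("telemetry", lambda m: received.append(m.upper()))
broker.publish("telemetry", "frame-001")
assert received == ["frame-001", "FRAME-001"]
```

Adding or removing a subscriber never touches publisher code, which is exactly the scalability argument made above.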

Information distribution can also be categorized by topics. Subscribers choose the topics they are interested in, and only information belonging to those topics is distributed to the particular subscribers. This structure is therefore especially suitable for large-scale heterogeneous application environments. Moreover, ICE provides a distributed management service for locating and activating ICE application programs.
Fig.3 presents an example of topic federation. Topic T1 has links to T2 and T3, as
indicated by the arrows. The subscribers S1 and S2 receive all messages published on
T2, as well as those published on T1. Subscriber S3 receives messages only from T1,
and S4 receives messages from both T3 and T1.

Fig. 3. Topic federation
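The federation in Fig. 3 can be sketched as topics that forward each message to their linked topics' subscribers (an illustrative Python mock mirroring the T1-T3 example above, not the ICE API):

```python
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []      # callbacks
        self.links = []            # downstream federated topics

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Deliver to our own subscribers, then forward over each link.
        for callback in self.subscribers:
            callback(message)
        for linked in self.links:
            linked.publish(message)

t1, t2, t3 = Topic("T1"), Topic("T2"), Topic("T3")
t1.links = [t2, t3]                # T1 has links to T2 and T3

inbox = {s: [] for s in ("S1", "S2", "S3", "S4")}
t2.subscribe(inbox["S1"].append)   # S1 and S2 subscribe to T2
t2.subscribe(inbox["S2"].append)
t1.subscribe(inbox["S3"].append)   # S3 subscribes to T1 only
t3.subscribe(inbox["S4"].append)   # S4 subscribes to T3

t1.publish("m1")                   # published on T1: everyone receives it
t2.publish("m2")                   # published on T2: only S1 and S2 see it
assert inbox["S1"] == ["m1", "m2"] and inbox["S3"] == ["m1"]
assert inbox["S4"] == ["m1"]
```

This reproduces the delivery pattern described in the text: S1 and S2 see T1 and T2 traffic, S3 sees only T1, and S4 sees T1 and T3.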

The location service consists of a registry and an arbitrary number of nodes. They cooperate to manage the information and server processes that make up the application, and provide redundancy by running the same service on different servers. Thus, when one of the servers cannot provide the service, the other servers in the same group can provide the same service to the client.

3 Distributed Satellite Earth Measurement System Based on ICE Middleware

Analyzing the demands of the system and integrating the advantages that ICE has in information subscribe/publish and distributed management services, this paper puts forward an implementation solution for the distributed satellite earth measurement system based on ICE middleware, which is shown in Fig. 4.
[Figure: measuring-unit servers with their interfaces and software modules (VISA-DTEA), connected over the network through ICE middleware to several consumers]

Fig. 4. Distributed satellite earth measurement system based on ICE middleware

The function modules of the system are deployed in a distributed fashion and systematically integrated based on ICE middleware so as to support the system's measurement tasks. The front-end measuring units mainly include three types: a series of intelligent devices providing hardware interface functions, computers with specific software functions, and computer software controlling standard commercial instruments. In practical application, the system deploys one or more measuring units according to changes of the measuring targets. The front-end measuring units measure all kinds of orders, signals and data, such as satellite remote sensing, satellite remote control and satellite position, through multiple measuring interfaces, produce data reports on various topics by digesting the measurement data, and distribute them to different service units to process the relevant tasks. Based on ICE middleware technology, the system decouples the association between the service units of the satellite earth measurement system, thereby achieves flexible configuration of the system and efficient distribution of the data, and eventually realizes dynamic location and load balancing.
In the aforesaid system, a safe and effective event message communication mechanism is a crucial link in the design of the satellite earth measurement system. The design of the system's event message transfer module is shown in Fig. 5.

[Figure: the event message transfer module, comprising the ICE general interface, a receiving thread pool, a distributing thread pool, QoS management, network connection management, an event buffer and a memory pool]

Fig. 5. The design of the system's event message transfer module


The functions of the modules are as follows:

- ICE general interface: realizes standardized communication in a heterogeneous environment.
- Receiving thread pool management: manages the thread pool that receives events, processing concurrent events according to their priorities.
- Distributing thread pool management: manages the thread pool that distributes events.
- Network connection management: limits the number of concurrently sent and received events.
- Event buffer management: publishers send event messages to the buffer zone, which sequentially forwards them to the sending threads.
- QoS management: manages the QoS parameters of information transmission.
- Memory management: unified allocation and release of memory.

Based on the aforesaid function design, the process of the event message transmission is shown in Fig. 6.

[Figure: the event message transmission flow between the message sender and receiver through their interfaces, the event buffer zone, the thread pools, QoS management, the network connections, the ICE general interface and the ICEStorm server]

Fig. 6. The process of the event message transmission

In Fig. 6, before the sending end sends a message, the receiving end has to call a user interface bound to its unique identity and establish a network connection with ICEStorm to receive messages. After the connection is established, the receiving end enters the waiting-for-data state and returns received data to the upper application through a callback function.
The distribution end calls the user interface to publish a message and sends event messages to the event buffer zone in priority order; when there is an idle thread in the distributing thread pool, the event message is picked out of the event buffer zone, and the transfer parameters are managed by QoS. When the data is ready, it waits for the connection management module to allocate a network connection handle. During this period, the connection management module first decides whether there is an idle or multiplexed connection, and returns a network connection handle when there is one. The general transmission interface publishes messages to the ICEStorm server by calling the handle.
After receiving an event message, the ICEStorm server first traverses the subscribers relevant to the topic of the message and sequentially forwards the message to each subscriber. After receiving the event message, the receiver waits for the receiving thread pool to allocate a processing thread, processes the data correspondingly after acquiring the handle of the receiving thread, and eventually sends it to the callback functions that the subscribers registered to receive it.
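The buffered, thread-pooled sending path described above can be sketched with a priority queue and worker threads (a simplified illustration; the pool size, priorities and message names are assumptions):

```python
import queue
import threading

sent = []                              # messages handed to the "network"
send_lock = threading.Lock()
event_buffer = queue.PriorityQueue()   # the event buffer zone

def distribute_worker():
    # A distributing-thread-pool worker: take events from the buffer
    # (lowest priority value first) and hand them to the transmitter.
    while True:
        priority, message = event_buffer.get()
        if message is None:            # shutdown sentinel
            event_buffer.task_done()
            break
        with send_lock:
            sent.append(message)       # stands in for the ICEStorm publish
        event_buffer.task_done()

pool = [threading.Thread(target=distribute_worker) for _ in range(2)]
for t in pool:
    t.start()

for i, prio in enumerate([5, 1, 3]):   # enqueue events with priorities
    event_buffer.put((prio, f"report-{i}"))
event_buffer.join()                    # wait until the buffer drains
for _ in pool:
    event_buffer.put((99, None))       # stop the workers
for t in pool:
    t.join()

assert sorted(sent) == ["report-0", "report-1", "report-2"]
```

The queue plays the role of the event buffer zone, and the worker threads play the role of the distributing thread pool.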
As an example, we take a topic T1 to briefly demonstrate the event message publish/subscribe process of the satellite earth measurement system. The publisher's implementation process is as follows:

- First acquire a proxy of the topic manager;
- Acquire a proxy of topic T1; if the topic does not exist, create it, otherwise acquire the existing topic;
- Acquire the publisher object proxy of topic T1;
- Finally, collect and publish the event message.

The subscriber's implementation process is as follows:

- First acquire a proxy of the topic manager;
- Create an object adapter as the host of the topic interface;
- Instantiate the servant and activate it with the object adapter;
- Subscribe to topic T1;
- Process the messages of the reports until it is closed;
- Cancel the subscription to topic T1.

4 System Test

In order to verify the validity of the system design and test its performance, this paper established a testing environment, shown in Fig. 7.

[Figure: a server connected through a TP-LINK router to four clients]

Fig. 7. Testing environment

The testing environment is a 100M LAN established with a TP-LINK SOHO router. There are 5 computers, all with the same Windows XP Professional SP3 operating system, a dual-core processor, 2 GB of memory and a 100M/1000M adaptive network card. One of the 5 computers acts as the server, sending data to the others. Each of the other 4 client computers opens 4-6 clients, which makes 20 clients in all. The server sends orders to the clients at a rate of 10 frames/s. The byte length of every order is 30K. The test time is 24*3 hours.
The testing result is shown in Table 1.

Table 1.

duration          Volume (K/m)             Feedback time (s)   Packet loss rate   Reported time (s)   QoS
The 1st 24 hours  10*20*60*30 = 12000*30   0.00098             0.01%              <0.003              100%
The 2nd 24 hours  10*20*60*30 = 12000*30   0.00094             0.0089%            <0.003              100%
The 3rd 24 hours  10*20*60*30 = 12000*30   0.00099             0.0093%            <0.003              100%

The test results show that the volume of the order transmission is 10*20*60 = 12000 frames/m and the volume of the data transferred is 12000*30K = 360000 K/m. After inspecting the log information of the 24*3-hour test, we find that the server operates normally and sends orders correctly; its feedback time is less than 0.001 s; the packet loss rate is under 0.01%, with a report within 0.003 s after a packet is lost; and command packets with QoS are delivered 100% of the time. The results are far beyond the satellite earth measurement system's performance indexes and meet the requirements of the design.
In order to test the event message transmission performance in a rough network environment, we limited the network speed to 1M. The server sends orders to the clients at a rate of 10 frames/s. The byte length of every order is 5K. The test time is 24*3 hours. The volume of the order transmission is 10*20*60 = 12000 frames/m; the volume of the data transferred is 12000*5K = 60000 K/m.
The results show that the system can reach the satellite earth measurement system's performance indexes even in a relatively rough network environment and when fully loaded.
In this paper, we use a single-ended subscriber timing method to test throughput. The test case contains one publisher and one subscriber; the publisher publishes events incessantly and the subscriber continuously receives them. If the n-th event arrives at time Tn, the throughput is calculated as:

    T = EVENT_SIZE * n / (Tn - T1)

Fig. 8. Throughput test
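The throughput formula above amounts to dividing the bytes carried by the n events by the elapsed time; a short Python sketch (the timestamps are illustrative):

```python
def throughput(event_size, timestamps):
    # T = EVENT_SIZE * n / (Tn - T1), with n events received at the
    # given timestamps (in seconds); result in bytes per second.
    n = len(timestamps)
    return event_size * n / (timestamps[-1] - timestamps[0])

# 4 events of 30 KB received over 3 s -> 40 KB/s
assert throughput(30_000, [0.0, 1.0, 2.0, 3.0]) == 40_000.0
```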


The results are shown in Fig. 8. When the events are small, the throughput increases linearly with the event size; but due to network bandwidth and environmental factors such as hardware constraints, the throughput reaches its limit at about 9 MB/s.

5 Conclusion

Analyzing the needs of satellite earth measurement, this paper put forward a satellite earth measurement system based on ICE middleware. Relying on the technological advantages of ICE middleware, the system decoupled the association between the service units of the satellite earth measurement system, accomplished flexible configuration of the system and highly efficient data distribution, and realized dynamic location and load balancing of the target server. Eventually, the system test effectively verified the validity and performance of the solution.

References
1. ZeroC: Distributed Computing with Ice. Revision 3.4 (June 2010), http://zeroc.com/doc/index.html
2. Joseph, H.Y.D.: Space Telecommunications Systems Engineering. Plenum Press, New York (1983)
3. Chen, Y.-y.: Satellite Radio Monitoring and Control Technology. China Astronautics Publishing House (2007)
4. Tesauro, G.: Reinforcement Learning in Autonomic Computing: A Manifesto and Case Studies. IEEE Internet Computing 11, 22–30 (2007)
5. Gokul, S., Cristiana, A.: Towards end-to-end quality of service: controlling I/O interference in shared storage servers. In: Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware. Springer-Verlag New York, Inc., Leuven (2008)
6. Welch, V., Siebenlist, F., Foster, I., Bresnahan, J., Czajkowski, K., Gawor, J., Kesselman, C., Meder, S., Pearlman, L., Tuecke, S.: Security for grid services. In: The Twelfth IEEE International Symposium on High Performance Distributed Computing, Seattle, Washington, pp. 48–57 (June 2003)
7. Blanquer, J.M., Batchelli, A., Schauser, K.E., Wolski, R.: Quorum: Flexible Quality of Service for Internet Services. In: Proceedings of the Symposium on Networked Systems Design and Implementation, NSDI 2005 (2005)
The Analysis and Optimization of KNN Algorithm
Space-Time Efficiency for Chinese Text Categorization

Ying Cai and Xiaofei Wang

Dept. of Computer Science and Technology,


Beijing Information Science & Technology University
Beijing, 100101, P. R. China
ycai@bistu.edu.cn, wang_xfout@126.com

Abstract. The performance of any text classification algorithm is reflected in two aspects: the reliability of the classification results and the efficiency of the algorithm. We analyze the space-time efficiency of the different stages of the traditional KNN algorithm for Chinese text classification while ensuring the reliability of the classification, and we optimize the efficiency and practical feasibility of the algorithm in aspects including feature extraction, feature weighting and similarity computation.

Keywords: KNN Algorithm, Space-Time Efficiency, Text Categorization, Feature, Feature Vector, Similarity.

1 Introduction

With the growth of Web resources and the popularization of electronic text, people have begun to pursue efficient and reliable methods of information processing in response to the knowledge explosion and other issues brought about by the rapid development of the information technology industry. Many scholars now study text classification technology.
Text classification assigns a text to one or more pre-defined classes based on its content [1]. Current text classification methods can be divided into rule-based classification and statistics-based classification, the latter including Decision Trees, K-nearest neighbor (KNN), Support Vector Machines, Bayes, etc. [2]. However, the performance of any classifier is reflected in two aspects, namely, the reliability and the efficiency of the classification algorithm. Chinese text classification demands more reliability and efficiency because of its inherently complicated and confusing word meanings, language forms and other characteristics.
The optimization of a text classification algorithm usually targets the reliability of the algorithm itself and is very difficult to implement, while the time and space efficiency of the classification algorithm is easily ignored; or, when only the efficiency of a certain stage is optimized, the advantage of the whole traditional classification algorithm is dispersed and weakened. Therefore, we analyze the time and space efficiency of the different stages in order to get a reasonable proportion of
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 542–550, 2011.
Springer-Verlag Berlin Heidelberg 2011
space-time according to the process of the traditional KNN algorithm. This yields an optimized algorithm that is efficient and feasible in practical application while ensuring the reliability of the classification.

2 KNN Algorithm

2.1 KNN Algorithm Overview

The traditional KNN algorithm is a simple and effective non-parametric algorithm. It is outstanding in precision and recall, but its main problem is the high dimension of the feature space [3]. KNN is a lazy learning method: with the large amount of sample similarity calculation, classification time is nonlinear; training is fast but classification is slow. The efficiency of a KNN classifier is also strongly affected by the distribution of the training data, and its computational cost is unbearable in a general computer environment [4].
The traditional KNN algorithm is one of the most useful classifiers for Chinese text. Its performance analysis and optimization deserve study and exploration.

2.2 Traditional KNN Algorithm Process

The basic idea of the traditional KNN algorithm can be expressed as follows. According to the traditional vector space model, text features are formalized as weighted feature vectors [5]. For a given text to be classified, calculate its similarity (distance) to each text in the training set. Then select the K texts in the training set nearest to the text to be classified, and determine the category of the new text according to the categories of these K texts [6]. The algorithm flow is shown in Fig. 1.

Fig. 1. Traditional KNN Algorithm Flow
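The process above can be sketched in a few lines of Python (a generic illustration with cosine similarity and majority voting; the toy vectors and labels are assumptions, not data from the paper):

```python
import math
from collections import Counter

def cosine(u, v):
    # Cosine similarity between two sparse feature vectors stored as dicts.
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv)

def knn_classify(doc, training, k=3):
    # training: list of (feature_vector, label) pairs.
    neighbors = sorted(training, key=lambda tl: cosine(doc, tl[0]),
                       reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [({"ball": 1.0, "goal": 0.8}, "sports"),
         ({"ball": 0.9, "team": 0.7}, "sports"),
         ({"vote": 1.0, "law": 0.6}, "politics")]
assert knn_classify({"ball": 1.0, "goal": 0.5}, train, k=3) == "sports"
```

Each classification scans the whole training set, which is exactly the "training fast but classification slow" behavior analyzed below.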


544 Y. Cai and X. Wang

2.3 The Principles of the Traditional KNN Algorithm

The design and implementation of the KNN algorithm optimization should follow a few principles to improve the space-time efficiency of the algorithm:
(1) Store intermediate results in disk files.
(2) Minimize the number of disk file accesses.
(3) Use a hash table as the basic storage structure.

3 KNN Algorithm Space-Time Efficiency Analysis

We analyze the space-time efficiency of the traditional KNN algorithm in three stages: feature extraction, feature vector computation and similarity computation.

3.1 Feature Items Analysis

The handling of feature items in the traditional KNN algorithm has defects in both time and space.
First, evaluation functions are currently in widespread use for extracting feature items. But an evaluation function increases the extraction accuracy only within a limit, while its time cost is comparable to the cost of computing plain text similarity; it is too time-consuming and adds to the burden of processing the training corpus.
Second, feature extraction does not place strict requirements on space resources; however, large-scale sets of feature items will greatly increase the computational complexity of the subsequent algorithms, since in the feature vector computation stage each feature item is treated as one dimension of the vector.

3.2 Feature Vectors Analysis

Transforming a document, by a simple and accurate method, into a format a computer can process is the basis of text classification [7]. The classic formalization of a text is the feature vector (W1, W2, W3, ..., Wn), where Wi is the weight of the i-th feature item. The text is formalized by assigning weights to its features to form the feature vector.
Feature vectors are numerical compositions of weights. Feature item weighting is based mainly on the following two empirical observations [1]:
(1) The more often a lexical item appears in a text, the more it is related to the subject of that text.
(2) The more texts a lexical item appears in, the worse its power to discriminate between texts.
The traditional KNN algorithm uses the tfidf (term frequency-inverse document frequency) weighting formula [7]; that is, the weight of feature item w_k in text t_k is:

    tfidf(w_k, t_k) = #(w_k, t_k) * lg(N / #w_k)    (1)


#(w_k, t_k) is the number of times the feature item w_k appears in the text t_k, N is the total number of texts, and #w_k is the number of texts in which the feature item w_k appears.
The dimension of the feature vectors is generally high, and there are two common ways to store a feature vector:
(1) Using fixed-length dimensions to store the text feature vector. This takes up a lot of storage space, but makes similarity calculation easy and seeking fast.
(2) Using variable-length dimensions to store the text feature vector, saving only the feature items that really occur in the text. This needs little space, but similarity calculation is not easy and seek time is large.
The tfidf weighting algorithm is as follows.
for (text_i = first_text to N)
    for (i_tag_j = i_first_tag to text_i.length)
    begin
        #t_j++;
        for (text_k = first_text to N)
            if (search i_tag_j in text_k) #w_j++;
    end
return #t * log(N / #w);
The time complexity of the above algorithm is O(n³) and its space complexity is O(1). Repeatedly opening the disk file in the inner loop costs so much time that it becomes the bottleneck of the weighting algorithm. Consider separating out the inner loop, or cutting the inner cycle down to a constant level.
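The suggested optimization, precomputing the document frequencies in one pass so the inner scan over the corpus disappears, can be sketched as follows (an illustrative Python version, not the paper's implementation):

```python
import math
from collections import Counter

def tfidf_weights(corpus):
    # corpus: list of tokenized texts.  One pass builds the document
    # frequencies #w, removing the O(n) inner scan over all texts.
    n_texts = len(corpus)
    doc_freq = Counter()
    for text in corpus:
        doc_freq.update(set(text))              # count each term once per text
    weights = []
    for text in corpus:
        tf = Counter(text)                      # #(w, t)
        weights.append({w: c * math.log10(n_texts / doc_freq[w])
                        for w, c in tf.items()})
    return weights

corpus = [["a", "b", "a"], ["b", "c"], ["c", "c", "d"]]
w = tfidf_weights(corpus)
assert abs(w[0]["a"] - 2 * math.log10(3.0)) < 1e-12
assert abs(w[1]["b"] - math.log10(1.5)) < 1e-12
```

This reduces the weighting step from O(n³) to a cost linear in the total number of term occurrences, at the price of the doc_freq hash table.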

3.3 Similarity Analysis

We calculate the similarity between the test text and the training corpus to reflect how similar they are and to provide data support for the classification. We use the cosine formula to calculate similarity in Chinese text categorization [8]. The similarity between texts t_i and t_j is:

    sim(t_i, t_j) = Σ_{k=1}^{M} (w_ik * w_jk) / sqrt(Σ_{k=1}^{M} w_ik² * Σ_{k=1}^{M} w_jk²)    (2)

where w_ik is the weight of the k-th feature item in text t_i and M is the total number of feature items.
Because the traditional KNN algorithm needs to calculate the similarity with each training text, classification costs a lot of time [9]; the traditional KNN algorithm is inefficient at classification. The test corpus here is a single text. The basic design idea for the similarity computation is to trade space for time. The traditional similarity algorithm is as follows.
for(weight_i=first_weight to M) put into hashtable ha;
for(text_j=first_text to N)
for(j_weight_k=j_first_weight to M)
if(search j_weight_k in ha) sim_j();
546 Y. Cai and X. Wang

The time complexity of this algorithm is O(n^2) and its space complexity is O(n). But if a conventional algorithm is used to calculate the similarity over a large number of texts, the time complexity rises to O(n^3), so controlling the time consumption is particularly important.
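Formula (2) applied to the variable-length (hash-table) representation can be sketched as follows; the dictionaries map feature items to weights, and the function name is illustrative:

```python
import math

def cosine_sim(wi, wj):
    """Cosine similarity of formula (2) for two sparse weight vectors
    stored as {feature_item: weight} dictionaries."""
    dot = sum(w * wj[k] for k, w in wi.items() if k in wj)
    ni = math.sqrt(sum(w * w for w in wi.values()))
    nj = math.sqrt(sum(w * w for w in wj.values()))
    if ni == 0 or nj == 0:
        return 0.0
    return dot / (ni * nj)
```

Iterating over the shorter dictionary and probing the other by key keeps the cost proportional to the number of stored features rather than the full dimension M.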

4 Optimization Scheme of KNN Algorithm and Test

Based on the results of the KNN algorithm space-time efficiency analysis, we design and test optimization schemes for each stage: the feature-item extraction optimization scheme, the feature-item weighting optimization scheme, and the similarity-calculation optimization scheme.

4.1 Features Extraction Optimization and Test

Feature items are the most important resource in the KNN classification algorithm. Even if the evaluation function is ignored, a well-built stop-word list can also reduce the dimension of the feature vectors and ensure the time efficiency of the algorithm. At present there is no uniform standard for stop-word lists [10]. Different extraction methods construct different stop-word lists, which affect the space-time performance of the classification algorithm. The feature extraction program supporting different item filters is designed as follows.
if (word.length() > lower_limit && word.length() < upper_limit)
    if (word.trait == n or word.trait == v) tag_hash.put(word);
Different word-frequency and part-of-speech combinations form different feature-item extraction schemes. The test results, obtained on 500 training texts, are shown in Table 1.

Table 1. Testing Result of Features

No. | Word Frequency | Speech Part | Feature Extraction Time (ms) | Feature Storage Space (KB) | Feature Vector Dimension
 1  | >99   | noun       | 3407 | 416 | 206846
 2  | >99   | noun, verb | 3250 | 660 | 228281
 3  | >499  | noun       | 3594 | 205 | 1884
 4  | >499  | noun, verb | 3719 | 380 | 3228
 5  | >999  | noun       | 3453 | 154 | 841
 6  | >999  | noun, verb | 3609 | 293 | 1495
 7  | <1000 | noun       | 3500 | 265 | 212241
 8  | <1000 | noun, verb | 3531 | 371 | 233114
 9  | <500  | noun       | 3469 | 214 | 211198
10  | <500  | noun, verb | 3578 | 284 | 231381

The test results of the different schemes are compared in Figure 2.



4.2 Optimization of Feature Items with Weight and Test

We design the tfidf optimization algorithm as follows.


for (text_i = first_text to N)
    for (i_tag_j = i_first_tag to text_i.length) #t_j++;
for (tag_group_i = first_tag_group to tag.group)
    for (text_j = first_text to N)
        if (search tag_group_i in text_j) #w_group_i++;
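The grouped counting above can be sketched in Python; here the corpus fits in memory, so the point is only to make visible that each corpus pass resolves one whole group of feature items, shrinking the number of passes as the group size grows (all names are hypothetical):

```python
def grouped_doc_freq(corpus, group_size):
    """Count document frequencies #w in groups of feature items: one
    pass over the corpus per group, instead of one pass per item."""
    features = sorted({w for doc in corpus for w in doc})
    doc_sets = [set(doc) for doc in corpus]
    df = {}
    for start in range(0, len(features), group_size):
        group = features[start:start + group_size]
        for doc in doc_sets:               # one corpus pass per group
            for w in group:
                if w in doc:
                    df[w] = df.get(w, 0) + 1
    return df
```

Regardless of the group size the counts are identical; only the number of corpus passes (and the auxiliary counter storage) changes, which is the trade-off measured in Table 2.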

The test results, obtained on 500 training texts, are shown in Table 2.

Fig. 2. Feature Analysis

Table 2. Testing Result of Feature Weighting

Size of feature set | Computing time (ms) | Auxiliary storage space
     1 | 78739100 | 4
   100 |   821850 | 400
  1000 |   110950 | 4000
 10000 |    15953 | 40000
 50000 |     7609 | 200000
100000 |     5422 | 400000
200000 |     5000 | 800000

Fig. 3. Analysis of Feature Weighting

As the group size increases, the time complexity of the algorithm tends to O(n^2) and the space complexity tends to O(n). When the block size is 1, the optimized algorithm is identical to the traditional one. When the block size is greater than 100, the time cost is reduced while the space grows only slightly, which the system can easily bear. Therefore, the optimization algorithm is feasible and effective.

4.3 Optimization Scheme of Similarity Calculation

If the test corpus is composed of the full text corpus, we consider trading space for time: all the training feature vectors are loaded into memory, from which the similarities can likewise be obtained. The optimized algorithm's time complexity is O(n^2) and its space complexity increases to O(n^2), but this is still within what the system can withstand.
for (train_text_i = train_first_text to N)
    for (i_weight_k = i_first_weight to M)
        put into hashtable hash_train[];
for (test_text_j = test_first_text to N)
    for (j_weight_k = j_first_weight to M)
        if (search j_weight_k in hash_train[]) sim_j();
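A minimal sketch of this space-for-time scheme, with the training vectors resident in memory as Python dictionaries standing in for hash_train[]; the interface is an assumption for illustration, not taken from the paper:

```python
import math

def batch_similarity(train_vectors, test_vector):
    """All training feature vectors stay resident in memory, so one pass
    over the stored hash tables yields the cosine similarity of the test
    text to every training text without re-reading the corpus."""
    nt = math.sqrt(sum(w * w for w in test_vector.values()))
    sims = []
    for vec in train_vectors:          # resident training vectors
        dot = sum(w * test_vector[k] for k, w in vec.items()
                  if k in test_vector)
        nv = math.sqrt(sum(w * w for w in vec.values()))
        sims.append(dot / (nv * nt) if nv and nt else 0.0)
    return sims
```

The O(n^2) memory cost is exactly the set of resident vectors; the per-test cost drops to one in-memory scan.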

5 Classifier Design and Test


After the optimization of the traditional KNN algorithm, the space-time efficiency is improved and more reasonable. But this high efficiency must be based on ensuring reliability.

5.1 Classifier Design

Select the K Neighbors. Select the K training texts with the largest similarity as the K neighbors of the current test text. In practical problems it is difficult to determine the value of K; it is often only roughly estimated from experience, and this estimation may cause a decline in the accuracy of the KNN algorithm.
Scoring by Category. Accumulate the similarities of the K neighbors to the test text by class; the accumulated value is the class score [11].
Classification Proposed. According to the principle of the KNN algorithm, the test text should be assigned to the class with the highest score.
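The three steps above (neighbor selection, per-class score accumulation, and the class proposal) can be sketched as one function; the similarities and labels are assumed to be precomputed, and the names are illustrative:

```python
def knn_classify(similarities, labels, k):
    """Take the k training texts with the largest similarity, accumulate
    similarity per class, and return the highest-scoring class."""
    ranked = sorted(zip(similarities, labels), reverse=True)[:k]
    scores = {}
    for sim, label in ranked:
        scores[label] = scores.get(label, 0.0) + sim
    return max(scores, key=scores.get)
```

Score accumulation (rather than a plain vote count) lets nearer neighbors weigh more, which softens the sensitivity to the roughly estimated K.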

5.2 Performance Indexes

The classification proposal is the direct basis for evaluating the classification performance of the classifier. The following performance test indexes are generally used.
1) Accuracy rate = number of texts correctly assigned to a particular class / actual number of texts assigned to that class.
2) Recall rate = number of texts correctly assigned to a particular class / actual number of texts that should be assigned to that class.

3) The standard F-measure is given by formula (3), where p is the accuracy rate, r is the recall rate, and β is the weight of precision relative to recall. The F1 measure is obtained when β is 1.

F = (1 + β^2) p r / (β^2 p + r)    (3)
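Formula (3) as a small helper, assuming p and r lie in [0, 1]:

```python
def f_measure(p, r, beta=1.0):
    """F = (1 + beta^2) * p * r / (beta^2 * p + r); beta=1 gives F1."""
    if p == 0.0 and r == 0.0:
        return 0.0
    return (1.0 + beta ** 2) * p * r / (beta ** 2 * p + r)
```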

5.3 Classifier Testing

The Sougou corpus was selected, which involves nine categories: financial business, information technology, food hygiene, sports, tourism, education exams, employment workplace, culture arts, and military weapons (abbreviated in Table 3, in this order, as FB, IT, FH, SE, TV, EE, EW, CA, MW), with 1990 papers in each category. The dictionary contains 275,613 words, excluding 68,767 stop words. 900 test texts, 5% of the total corpus, were chosen. The classifier test results are shown in Table 3.

Table 3. Testing Result of Classifier

K    | Performance index | FB   | IT   | FH   | SE   | TV   | EE   | EW   | CA    | MW
20   | Precision         | 91.1 | 83.7 | 81.0 | 94.1 | 78.0 | 69.9 | 79.8 | 85.3  | 92.4
     | Recall            | 92.0 | 82.0 | 85.0 | 95.0 | 78.0 | 86.0 | 79.0 | 58.0  | 97.0
     | F1                | 91.5 | 82.8 | 82.9 | 94.5 | 78.0 | 77.1 | 79.4 | 69.1  | 94.6
100  | Precision         | 89.3 | 82.7 | 83.3 | 96.0 | 86.1 | 72.7 | 80.8 | 88.6  | 91.5
     | Recall            | 92.0 | 81.0 | 85.0 | 96.0 | 87.0 | 88.0 | 80.0 | 62.0  | 97.0
     | F1                | 96.5 | 81.8 | 84.2 | 96.0 | 86.6 | 79.6 | 80.4 | 72.9  | 94.2
400  | Precision         | 87.3 | 81.8 | 84.2 | 98.0 | 85.3 | 72.1 | 81.2 | 89.6  | 91.7
     | Recall            | 89.0 | 81.0 | 85.0 | 96.0 | 87.0 | 88.0 | 82.0 | 60.0  | 97.0
     | F1                | 88.1 | 81.4 | 84.6 | 97.0 | 86.1 | 79.3 | 81.6 | 71.9  | 94.3
800  | Precision         | 84.0 | 81.1 | 96.5 | 99.0 | 85.3 | 72.9 | 77.0 | 87.0  | 100.0
     | Recall            | 89.0 | 77.0 | 82.0 | 95.0 | 87.0 | 86.0 | 87.0 | 60.0  | 99.0
     | F1                | 86.4 | 79.0 | 88.7 | 96.9 | 86.1 | 78.9 | 81.7 | 71.0  | 99.5
1500 | Precision         | 80.9 | 81.5 | 87.1 | 99.0 | 85.2 | 74.1 | 77.2 | 100.0 | 100.0
     | Recall            | 89.0 | 75.0 | 81.0 | 95.0 | 86.0 | 86.0 | 88.0 | 61.0  | 99.0
     | F1                | 84.8 | 78.1 | 83.9 | 96.9 | 85.6 | 79.6 | 82.2 | 75.8  | 99.5

The performance test results above were produced automatically by the classifier, without manual checking. The results show that the value of K has little influence on the classifier. The implementation and testing environment was a computer with a 2 GHz CPU, 3 GB of main memory, the Windows XP operating system, and the Java language compiler. Thus the efficiency of the optimized KNN algorithm can be achieved in a common PC environment: the average feature-vector weighting time is 63 ms per article, the average similarity-computation time is 4043 ms per article, and the classification time of the traditional KNN classifier is 125 ms per article.

6 Conclusion

In this paper, we analysed the time and space efficiency of the conventional KNN algorithm in the Chinese text classification process. We presented a set of detailed efficiency optimization schemes that preserve the reliability of the classification, covering feature-item extraction, feature-item weighting, and similarity calculation. The test results satisfied the expected goals.

Acknowledgment. The research is supported by the General program of science and


technology development project of Beijing Municipal Education Commission under
Grant No.KM201010772012, Funding Project for Academic Human Resources
Development in Institutions of Higher Learning under the Jurisdiction of Beijing
Municipality Grant No.PHR201007131 and Beijing Municipal Organization
Department Project talent under Grant No.2010D005007000003.

References
1. A Survey on Automated Text Categorization,
http://wenku.baidu.com/view/64589a4bcf84b9d528ea7a45.html
2. Guo, G.D., Wang, H., Bell, D., Bi, Y.X., Greer, K.: A KNN Model-based Approach and Its Application in Text Categorization. J. Computer Science, 986–996 (2003)
3. Yang, Y.M., Pedersen, J.O.: A Comparative Study on Feature Selection in Text Categorization. In: 14th Int'l Conf. on Machine Learning (ICML 1997), pp. 412–420. Morgan Kaufmann Publishers, San Francisco (1997)
4. Vries, A.D., Mamoulis, N., Nes, N.: Efficient KNN search on vertically decomposed data. In: 2002 ACM SIGMOD International Conference on Management of Data, pp. 322–333. ACM Press, Madison (2002)
5. Sun, R.Z.: An Improved KNN Algorithm for Text Classification. J. Computer Knowledge and Technology 6(1), 174–175 (2010)
6. Ma, J.B., Li, J., Teng, G.F., Wang, F., Zhao, Y.: The Comparison Studies on the Algorithms of KNN and SVM for Chinese Text Classification. Journal of Agricultural University of HeBei 31(3), 120–123 (2008)
7. Wang, X.Q.: Research of KNN Classification Method Based on Parallel Genetic Algorithm. Journal of Southwest China Normal University 35(2), 103–106 (2010)
8. Zhu, G.H., Cheng, C.P.: An Improved k-Nearest Neighbor Classification Method. Journal of HeNan Institute of Engineering, 65–67 (2008)
9. Liu, B., Yang, L., Yuan, F.: Improved KNN Method and Its Application in Chinese Text Classification. Journal of Xihua University 27(2), 33–36 (2008)
10. Zhou, Q.Q., Sun, B.D., Wang, Y.: Study on New Pretreatment Method for Chinese Text Classification System. J. Application Research of Computers (2), 85–86 (2005)
11. He, F., Lin, Y.L.: Summary of Improving KNN Text Classification Algorithm. J. FuJian Computer (3), 33–36 (2005)
Research of Digital Character Recognition Technology
Based on BP Algorithm

Xianmin Wei

Computer and Communication Engineering School of Weifang University


Weifang, China
wfxyweixm@126.com

Abstract. This paper describes the digital character recognition process and its steps. An artificial neural network trained with a back-propagation algorithm using a momentum term and an adaptive learning rate is used to train on, and identify, both ideal signals and noisy signals containing the digit characters. Comparing the results shows that when the same network is trained with both the ideal signals and the noisy signals, the system becomes more fault-tolerant.

Keywords: Neural network, BP algorithm, noisy digital character recognition.

1 Introduction

Digital recognition technology is an important research direction in the field of image processing and one of the hot areas of computer application. It consists of on-line handwriting recognition and off-line handwriting recognition. In an on-line system, pen-lift and pen-drop events, the spatial location of each pixel of the handwritten figures, and the time between strokes are recorded; in processing this information, the system extracts feature information according to certain rules, the recognition module then compares and identifies the feature information against a feature library, and finally the result is converted into computer code. Off-line recognition, compared with the on-line case, has no stroke information; it is more difficult but more widely used, for example in bank notes, business reports, financial statements, statistical reports, and other form systems, and it is both a focus and a difficulty of current research. This article describes how to use the neural-network back-propagation algorithm (BP algorithm) for off-line handwritten digit recognition.

2 Simple Process of Hand-Written Numbers with BP Algorithm

The simple digital identification process using the BP algorithm consists of "preprocessing" and "BP character recognition," as shown in Figure 1.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 551–555, 2011.
Springer-Verlag Berlin Heidelberg 2011
552 X. Wei

Fig. 1. BP Number Recognition

The premise of digital identification is to convert the visual image into a binary image for computer processing, using a given-threshold method that maps each pixel of the image to one of two colors according to a certain standard. However, the fonts of binary images are often blurred, or scattered with messy white or dark dots, causing some difficulty in identification; gradient sharpening is therefore used to sharpen the image, so that the blurred image becomes clear, which also helps to remove noise.
Identification is determined only by the characteristics of each digital character, so the sharpened binary image needs to be split into individual characters, and the characters thinned. The commonly used shelling algorithm removes black points layer by layer from the boundary until a set is found that coincides with the boundary (of thickness 1 or 2). In order to extract the characteristics of any character, the digital characters are also normalized: the size of the character is transformed to a uniform size, and the character position (rotation, translation) is corrected. Many people believe that normalizing each character image into a 5 × 9-pixel binary image is ideal, because the smaller the image, the higher the recognition rate and the faster the network training. In fact, a 5 × 9-pixel map is too small for the character images to be identified: after normalization much of the image information is lost, and recognition accuracy is not high. Experimental results show that normalizing a character image into a 10 × 18-pixel binary image is the real ideal. From the split and processed characters, the feature vectors that best embody the character's characteristics are extracted and fed into the BP network for training. Then the feature vectors generated from the samples to be identified are fed into the trained BP network, and the characters can be identified. Commonly used feature-vector extraction methods include pixel-by-pixel extraction, frame feature extraction, and vertical-projection statistics. This experiment uses the pixel-by-pixel extraction method.
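The given-threshold binarisation described above amounts to one comparison per pixel; a minimal sketch, where the threshold value and the row-of-lists image representation are illustrative assumptions:

```python
def binarize(gray, threshold=128):
    """Map each pixel of a grayscale image (rows of 0-255 values)
    to one of the two colors 0 and 1 by the given threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```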

3 BP Neural Network for Number Identification

3.1 BP Neural Network Structure and Description

The BP network is a multilayer feedforward network with one-way transmission. In addition to the input and output nodes, the network has one or more layers of hidden nodes; nodes within one layer are not coupled to each other. The input signal passes from the input-layer nodes through the hidden layer(s) in turn and then spreads to the output nodes; the output of each layer affects only the next layer. The node unit characteristic (transfer function) is usually of sigmoid type, with a slope parameter a; by changing the parameter a, sigmoid functions with different slopes are obtained.

The basic idea of the BP algorithm is: for an input sample, after the weight, threshold, and activation-function operations, an output is obtained; it is then compared with the expected output of the sample, and any deviation is back-propagated from the output, adjusting the weights and thresholds so that the network output gradually approaches the expected output. The BP algorithm is thus based on the steepest-descent method and inherits its disadvantages: it falls into local minima easily, converges slowly, and can oscillate. Using a momentum method when adjusting the weights accelerates the convergence rate and to some extent reduces the probability of falling into a local minimum, but cannot completely overcome these shortcomings. To further speed up convergence, an adaptive learning rate is also used.
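The two remedies described (a momentum term plus an adaptive learning rate) can be sketched on a one-dimensional error surface E(w) = w^2 rather than a full BP network; the adaptation constants (1.05, 0.7) and the rate cap are illustrative assumptions, not values from the paper:

```python
def descend(grad, w0, lr=0.1, alpha=0.5, steps=200):
    """Steepest descent with a momentum term and an adaptive learning
    rate: the rate grows (up to a cap) while the error keeps falling
    and shrinks when it rises; the previous step is blended in via
    the momentum coefficient alpha."""
    w, dw_prev, prev_err = w0, 0.0, float("inf")
    for _ in range(steps):
        err = w * w                      # E(w) = w^2 for this demo
        lr = min(lr * 1.05, 0.2) if err < prev_err else lr * 0.7
        prev_err = err
        dw = -lr * grad(w) + alpha * dw_prev   # momentum term
        w, dw_prev = w + dw, dw
    return w

w_min = descend(lambda w: 2.0 * w, w0=1.0)   # dE/dw = 2w
```

The momentum term smooths the step direction, while the adaptive rate backs off whenever the error rises, the same interplay used in the network training below.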

3.2 Design and Training of Neural Network

The goal is to identify the 10 numeric characters from 0 to 9. Each character is divided into a 5 × 7 grid of small pieces and digitized, represented by a vector. Ten input vectors, each containing 35 elements, are defined as an input vector matrix; each vector represents a character, with the value 1 at the corresponding positions and 0 elsewhere. There are two types of input data: signals in the ideal state, and randomly generated noisy signals. For fast network training, the initial learning rate is selected between 0.01 and 0.7. The connection weights are initialized to random numbers between -1 and 1, and the initial value of the expected distortion is a random number between 0 and 1.
The network distinguishes the numeric characters by outputting a 10-element output vector; for example, the vector corresponding to the character 1 has the value 1 in its first position and 0 in the following positions. Once the input and output are determined, the network structure can be designed. Layer 1 is the input layer; based on the above analysis of the data to be identified, the input layer has 35 nodes. Layer 2 is the hidden layer; the conventional rule sets the number of nodes to twice that of the input layer, but here the number was determined empirically by testing the system error of different structures, giving 10 hidden nodes, as shown in Table 1.

Table 1. Training and Test Error with Noisy Signals versus Hidden-Layer Size

Number of hidden neurons | Training error | Test error
 5 | 0.099121 | 0.308258
10 | 0.098804 | 0.129052
15 | 0.099700 | 0.225840

Layer 3 is the output layer; the target output vector contains 10 data values, so this layer has 10 nodes. The activation functions of the hidden layer and the output layer are both of sigmoid (S-type) form; the network structure is shown in Figure 2.

Fig. 2. Logarithmic S-function network structure

In searching for a suitable training method, it was found that as the number of samples increases, the training results of plain BP or of adaptive-learning-rate BP alone are not ideal, but the BP training algorithm with both an adaptive learning rate and a momentum term works well, so this function is used to train the neural network. In order for the network to tolerate faults in the input vectors, the best way is to train it with both ideal and noisy signals. In this study, the network was first trained with 15 groups of ideal signals; second, the same network was trained with 15 noisy signals followed by 15 groups of ideal signals. Ten levels of increasing noise were used, obtained by adding to the ideal character signals noise with mean 0 and standard deviation from 0.05 to 0.5. The change of the network training error is shown in Figure 3.

Fig. 3. Error changes of the training process without noise

The curves show that training converges quickly. Meanwhile, using noisy signals of different levels, 100 tests were run for each of the 10 digits from 0 to 9; the curve of the network's recognition error rate versus the noise level is shown in Figure 4.

Fig. 4. Recognition error rate curve



The dotted line in Figure 4 is the recognition error rate of the network trained without noise; the solid line is that of the network trained with noise. Figure 4 shows that training with noise greatly improves the network's fault tolerance.

4 Experimental Results and Analysis


In the BP neural network identification method, the whole character is taken directly as the input of the neural network. 500 digit characters were selected, of which 200 are training samples and the remaining 300 are test data. The test results are in Table 2.
The results show that character recognition based on the neural network method has strong fault tolerance and a strong adaptive learning ability; it recognizes well.

Table 2. Experimental Results

Item | Total samples | Identified number | Error recognition number | Rejection number | Recognition rate | Error rate | Rejection rate
Training samples | 200 | 200 | 0 | 0 | 100% | 0% | 0%
Test samples | 300 | 282 | 9 | 9 | 94% | 3% | 3%

Acknowledgments. This paper is funded by 2011 Natural Science Foundation of


Shandong Province, its project number is 2011ZRA07003.

References
1. Bian, Z.: Pattern recognition. Tsinghua University Press, Beijing (2002)
2. Yang, S.: Image pattern recognition technology with VC++. Tsinghua University Press, Beijing (2005)
3. Chen, Y.: Feedforward network pattern recognition preprocessing method applied to handwritten digit recognition. Institute of Semiconductors, Chinese Academy of Sciences, Beijing (1995)
4. Yang, Y.: Handwritten digit recognition based on neural networks. East China Geological University 26(4), 383–386 (2003)
5. Roth, M.W.: Survey of Neural Network Technology for Automatic Target Recognition. IEEE Trans. Neural Networks 1(1), 28–43 (1993)
Image Segmentation Based on D-S Evidence
Theory and C-means Clustering

Xianmin Wei

Computer and Communication Engineering,


School of Weifang University,
Weifang, China
wfxyweixm@126.com

Abstract. On the basis of Dempster-Shafer (D-S) evidence theory, this paper gives a multi-source information fusion method and applies it to the classification of bamboo image texture. D-S classification of image data requires the user to supply training samples, so a method is proposed for obtaining the training samples automatically from the image features with the C-means clustering algorithm. First, the image is divided into several regions; wavelet decomposition is applied to each region to remove its edges, and the mean energy of the remaining smooth area is calculated as the feature value. The C-means clustering algorithm then classifies the smooth regions, the feature values and classes are labeled as D-S training samples, and finally the trained classifier segments the image. Experimental results show that the proposed method achieves good segmentation results.

Keywords: Dempster-Shafer evidence theory, texture, C-means clustering.

1 Introduction
Dempster-Shafer evidence theory is an expansion of the classical form of probability theory: the trust (belief) function of evidence is associated with upper and lower probability values, belief and plausibility functions are used to explain the multi-valued mapping, and on this basis uncertain information is handled, forming the theory of evidence. Classification of image texture cannot be separated from the selection and extraction of image features, and a single feature cannot meet the requirements of image target classification and recognition. To this end, the extracted image texture features are fused using D-S evidence theory, and the final decision strategy classifies the image texture.
This paper first decomposes the full image into more detailed sub-images by wavelet decomposition, then applies mean-shift convergence of the centers to reduce the amount of data, and then performs fuzzy C-means clustering on the converged centers, making the texture characteristics of the image more concentrated.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 556–561, 2011.
Springer-Verlag Berlin Heidelberg 2011
Image Segmentation Based on D-S Evidence Theory and C-means Clustering 557

2 Texture Feature Extraction


To extract the texture features, the image is first analysed with wavelets; after clustering, the texture features of the image are more prominent.

2.1 The Multi-layer Structure of Wavelet Decomposition

Wavelet decomposition decomposes an image layer by layer into an approximation signal and horizontal, vertical, and diagonal detail components; the detail components can then be further decomposed, down to n layers. The decomposition diagram is shown in Figure 1.

Fig. 1. Schematic diagram of full wavelet decomposition ((a) Original, (b) wavelet
decomposition, (c) wavelet frame)

2.2 Mean Shift Algorithm

The mean shift algorithm estimates the cluster centers from the texture density gradient and can perform unsupervised cluster classification. It moves a sample point in feature space toward the local mean until it converges to a specific location, which is taken as the center of the texture. For any point, the mean shift procedure is applied within a region of given radius centered at the point, until convergence is achieved, as shown in Figure 2.
The kernel density estimate used by the mean shift algorithm is:

f(x) = (1 / (n h^d)) Σ_{i=1}^{n} K((x - x_i) / h)    (1)

In (1), x is any point, n is the number of sample points, h is the window radius (bandwidth), d is the dimension of the feature space, and K is the kernel, whose (Epanechnikov) form is:

K(x) = ((d + 2) / (2 C_d)) (1 - x^T x)  if x^T x < 1,  and 0 otherwise    (2)

where C_d is the volume of the unit d-dimensional sphere.
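The mean shift convergence described above can be illustrated in one dimension with a flat (uniform) kernel; the window radius h and the sample set are hypothetical:

```python
def mean_shift_point(x, points, h, iters=50):
    """Repeatedly replace x by the mean of all samples within the
    window radius h, until it settles at a local density maximum."""
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= h]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:        # converged
            break
        x = new_x
    return x
```

Points started anywhere inside one cluster drift to that cluster's center, which is how the algorithm concentrates the feature-space data before clustering.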



Fig. 2. Data Convergence points diagram of mean shift algorithm

2.3 Fuzzy C-means Clustering

For any finite set of data patterns {x_m; m = 1, ..., M}, a number of clusters C (2 ≤ C ≤ M), and a fuzzy weight index w, 1 < w < ∞, the elements u_cm of the fuzzy partition matrix are initialized, where c = 1, ..., C and m = 1, ..., M. Fuzzy C-means clustering is achieved by minimizing an objective function over the membership matrix U and the cluster centers V. The initial matrix U(0) is of size C × M. The following steps are performed:
(1) calculate the cluster centers using v_c = Σ_m (u_cm)^w x_m / Σ_m (u_cm)^w;
(2) update U(l), applying u_cm = 1 / Σ_{j=1}^{C} (||x_m - v_c|| / ||x_m - v_j||)^{2/(w-1)};
(3) if ||U(l+1) - U(l)|| ≤ ε, the loop ends, where ε is the convergence threshold; the objective function is then approximately minimal. Otherwise return to the first step and continue.
The algorithm has been shown to have good convergence.
The weight index w determines the degree of fuzziness of the clustering: if w is too small, the clustering effect is poor; if w is too large, the clusters are unclear. Experiments show that with w = 2 the clustering effect is good.
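A compact one-dimensional sketch of the two update steps, restricted to two clusters and using a deterministic, slightly biased initial partition instead of the usual random one so the run is reproducible; all names and values are illustrative:

```python
def fcm(data, w=2.0, iters=100, eps=1e-6):
    """Fuzzy C-means for two clusters on 1-D data: centers from the
    w-powered memberships, memberships from the relative distances to
    the centers, looping until the partition matrix stops changing."""
    n, c = len(data), 2
    # deterministic biased initial partition; each column sums to 1
    u = [[0.6 if (m < n // 2) == (k == 0) else 0.4 for m in range(n)]
         for k in range(c)]
    for _ in range(iters):
        v = [sum(u[k][m] ** w * x for m, x in enumerate(data)) /
             sum(u[k][m] ** w for m in range(n)) for k in range(c)]
        change = 0.0
        for m, x in enumerate(data):
            d = [abs(x - v[k]) or 1e-12 for k in range(c)]  # guard d=0
            for k in range(c):
                new = 1.0 / sum((d[k] / d[j]) ** (2.0 / (w - 1.0))
                                for j in range(c))
                change = max(change, abs(new - u[k][m]))
                u[k][m] = new
        if change < eps:
            break
    return v

centers = fcm([0.0, 0.1, 10.0, 10.1])
```

With w = 2 and well-separated data the two centers settle near the means of the two groups, matching the behavior the paper relies on.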

3 D-S Evidence Theory Classification of Bamboo Cells


3.1 D-S Evidence Theory
D-S evidence theory was proposed by Dempster in 1967 and expanded and developed by Shafer, so it is also known as D-S theory. Evidence theory deals with the problems of uncertainty and data redundancy. It uses the belief function as a measure, builds confidence functions constrained by the probabilities of events, and uses information-fusion decision methods to eliminate uncertainty and redundancy.
In this paper, the basic probability assignment function is constructed as follows. First, texture images of various a priori texture categories are chosen to build a library of training sub-sample models. Then, for each texture image block to be identified, each feature f_i is extracted and matched against the corresponding features of the texture images in the model library, and the correlation coefficients P_i(j) are calculated, where j is the class number in the texture model library. Finally, the basic probability assignment m_i(j) of texture j for feature f_i is constructed from P_i(j). The method is as follows: take the maximum correlation coefficient between f_i and the texture characteristics of class j:
(3)

The distribution coefficient of feature i and the correlated texture is:
(4)

It is weighted by a reliability factor. In order to use the D-S combination rule, the correlation coefficients must be transformed into basic probability functions. Feature i assigns target j the basic probability value m_i(j); from P_i(j), the basic probability of feature f_i for texture j is formed:

(5)

Feature i assigns the recognition framework Θ the basic probability value m_i(Θ), that is, the uncertainty probability of the feature:

(6)

From these, the basic probability value of each single feature (the image gray value and the GLCM entropy) can be calculated, and then the fused basic probabilities of the features. From the basic probability assignment and the belief function, the likelihood (plausibility) function can be calculated, yielding the confidence interval of the evidence; the difference between the two expresses the degree of ignorance about the proposition.

3.2 Classification Rules

The basic probability assignment is analysed for decision-making with rule-based methods, and the following four rules are determined: a) the target class should have the maximum basic probability value; b) the difference between the basic probability value of the target class and those of the other classes must be greater than a threshold ε1, meaning the evidence's levels of support for the different classes must remain sufficiently far apart; c) the uncertainty probability m_i(Θ) must be smaller than a threshold ε2, i.e., the evidence's degree of ignorance or uncertainty about the target class cannot be too large; d) the basic probability value of the target class must be greater than the uncertainty probability m_i(Θ); if very little is known about a target, it cannot be classified.
In summary, the basic steps of applying D-S evidence theory to image texture classification are: extract the image texture features (the image gray value and the GLCM entropy) and match them against the texture model library, computing the correlation coefficients; compute for each piece of evidence the basic probability assignment function, belief function, and likelihood function; use the combination rule to fuse all the evidence, obtaining the combined basic probability assignment, belief, and likelihood functions; according to the decision rules, choose the hypothesis with the largest combined evidence.
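The fusion step can be sketched with Dempster's rule of combination for two basic probability assignments; focal elements are represented as frozensets over the frame of texture classes, and the masses below are illustrative:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments over the
    same frame: intersect focal elements, multiply masses, and
    renormalise away the mass assigned to the empty (conflict) set."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}
```

Repeated pairwise application fuses any number of evidence sources, after which the decision rules above pick the class with the largest combined mass.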

4 Experimental Results

The processing target is a cross-sectional screen of bamboo cells. Figure 3(a) is the original image of a crystallized vascular bundle in a cross-section of a bamboo stalk; Figure 3(b) is the initial segmentation result of edge detection based on phase consistency; Figure 3(c) is the result of region-growing segmentation of the area based on the edge information and morphological characteristics.

Fig. 3. The process of image segmentation ((a) single-crystal graph of vascular bundle, (b) segmentation based on phase coherence, (c) extraction based on shape feature)
Comparing the original image of Figure 3(a) with the processing result of Figure 3(c) shows that the result is in general satisfactory: the vascular area is split out, the amount of crystalline fiber-cap information lost is small, and more complete information about the target area is preserved, guaranteeing the accuracy of feature extraction.

5 Conclusion
Because the details of natural image textures are rich and diverse, applying D-S evidence theory or fuzzy C-means clustering alone often cannot meet the requirements of image segmentation. The segmentation method described here uses the mean shift algorithm to concentrate the information of each region, then applies D-S evidence theory with the detected borders to provide candidate regions and region models for region growing. Seed points are selected automatically from the boundary information to complete the final segmentation, and the regions of interest are analysed from the region characteristics. Experimental results show that the method segments image texture robustly, and the segmentation results basically agree with the judgment of the human visual system.

Acknowledgments. This work was funded by the 2011 Natural Science Foundation of Shandong Province (project no. 2011ZRA07003).

References
1. Ma, W.M., Chow, E., Tommy, W.S.: A new shifting grid clustering algorithm. Pattern Recognition 37(3), 503–514 (2004)
2. Pilevar, A.H., Sukumar, M.: GCHL: A grid-clustering algorithm for high-dimensional very large spatial data bases. Pattern Recognition Letters 26(7), 999–1010 (2005)
3. Nanni, M., Pedreschi, D.: Time-Focused clustering of trajectories of moving objects. Journal of Intelligent Information Systems 27(3), 267–289 (2006)
4. Birant, D., Kut, A.: An algorithm for clustering spatial-temporal data. Data & Knowledge Engineering 60(1), 208–221 (2007)
5. Cai, W.L., Chen, S.C., Zhang, D.Q.: Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recognition 40(3), 825–833 (2007)
6. Sun, J., Liu, J., Zhao, L.: Clustering algorithm. Journal of Software 19(1), 48–61 (2008)
7. Ke, Y., Zhang, J., Sun, J., Zhang, Y., Zhou, X.: Combined with support vector machines and C-means clustering for image segmentation. Computer Application 26(9), 2082–2083 (2006)
8. Tian, J., Li, Y., Cao, R.: A Markov process and the evidence theory based multi-source image fusion segmentation method. Microelectronics & Computer 23(2), 27–34 (2007)
9. Wang, Y., Han, J.: Dempster-Shafer theory of evidence based iris image classification method. Journal of Xi'an Jiaotong University 39(8), 829–831 (2005)
Time-Delay Estimation Based on Multilayer Correlation

Hua Yan, Yang Zhang, and GuanNan Chen

School of Information Science and Engineering,


Shenyang University of Technology, Shenyang 110870, China
yanhua_01@163.com, zhangyang_0504@163.com

Abstract. Time-delay estimation is a key technique for acoustic temperature field measurement. In order to acquire a stable acoustic time-of-flight at low SNR, a time-delay estimation method combining multilayer cross-correlation and multilayer auto-correlation (MC method for short) is proposed. The cross-correlation method and the second correlation method are described briefly for comparison. Theoretical analysis and simulation research in MATLAB show that the MC method obtains the highest estimation precision among these three methods when the signal-to-noise ratio is low.

Keywords: time-delay estimation, multilayer cross-correlation, multilayer auto-correlation, low SNR.

1 Introduction
Temperature field measurement based on acoustic time-of-flight (TOF) tomography [1] has been used in atmospheric monitoring and heat management. It has many advantages, such as nondestructive and noncontact sensing and quick response. Time-delay estimation is a key technique for acoustic temperature field measurement.
Time-delay estimation is an important signal processing problem and has received a significant amount of attention during the past decades in various applications, including radar, sonar, radio navigation, wireless communication, acoustic tomography, etc. [1]. The signals received at two spatially separated microphones in the presence of noise can be modeled by

    r1(t) = s(t) + n1(t),   r2(t) = s(t − D) + n2(t),   (0 ≤ t ≤ T)    (1)

where r1(t) and r2(t) are the outputs of the two microphones, s(t) is the source signal, n1(t) and n2(t) represent the additive noises, T denotes the observation interval, and D is the time-delay between the two received signals.
Time-delay estimation is not an easy task because of various noises and the short observation interval. There are many algorithms to estimate the time-delay D. Cross-correlation (CC) is one of the basic algorithms. Many methods, such as the second correlation (SC) method [2] and the generalized correlation method [3], develop from this algorithm.
There are two forms of correlation: auto-correlation and cross-correlation. The cross-correlation function is a measure of the similarities or shared properties between two signals. It can be used to detect or recover signals buried in noise, for example the

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 562–567, 2011.
© Springer-Verlag Berlin Heidelberg 2011

detection of radar return signals and delay measurements. The auto-correlation function involves only one signal; it is a special form of the cross-correlation function and is used in similar applications. In order to acquire a stable acoustic time-of-flight at low SNR, a time-delay estimation method combining multilayer cross-correlation and multilayer auto-correlation (the MC method for short) is proposed in this paper and compared with the cross-correlation method and the second correlation method.

2 The Principle of Time-Delay Estimation Based on Correlation

2.1 The Cross-Correlation (CC) Method

The CC method cross-correlates the microphone outputs r1(t) and r2(t), and takes the time argument corresponding to the maximum peak of the output as the estimated time-delay. The CC method can be modeled by:

    D_CC = arg max_τ [Rc(τ)]

    Rc(τ) = E[r1(t) r2(t + τ)] = E{[s(t) + n1(t)][s(t − D + τ) + n2(t + τ)]}
          = Rss(τ − D) + Rn1s(τ − D) + Rsn2(τ) + Rn1n2(τ)    (2)

The signal and the noise are assumed to be uncorrelated; thus Rn1s(τ − D) and Rsn2(τ) are zero. If the additive noises are also assumed uncorrelated, Rn1n2(τ) is zero. However, Rn1n2(τ) usually cannot be neglected in practice because of the existing correlation between the two noises. So we have

    Rc(τ) = Rss(τ − D) + Rn1n2(τ) = RS1(τ − D) + N1(τ)    (3)

where Rss(τ), also written RS1(τ), is the auto-correlation of the source signal s(t); RS1(τ − D) reaches its maximum when τ = D; and Rn1n2(τ), also written N1(τ), is the cross-correlation of the noises n1(t) and n2(t). RS1(τ − D) and N1(τ) can be thought of as the signal component and noise component of Rc(τ), respectively.
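The CC estimator of Eq. (2) can be sketched numerically as follows (a Python/NumPy translation for illustration; the random signal, noise level, and 37-sample delay are made up, not the paper's chirp setup):

```python
import numpy as np

def cc_delay(r1, r2, fs):
    """Cross-correlation (CC) delay estimate, Eq. (2): the lag of the
    highest peak of the cross-correlation, converted to seconds."""
    rc = np.correlate(r2, r1, mode="full")   # lag k maps to index k + len(r1) - 1
    return (np.argmax(rc) - (len(r1) - 1)) / fs

# Illustrative example: a noisy, 37-sample-delayed copy of a random signal.
rng = np.random.default_rng(0)
fs = 1000.0
s = rng.standard_normal(4000)
r1 = s + 0.5 * rng.standard_normal(4000)
r2 = np.roll(s, 37) + 0.5 * rng.standard_normal(4000)
print(cc_delay(r1, r2, fs))  # close to 0.037 s
```

Correlating r2 against r1 places the peak at a positive lag equal to the delay of r2, matching the sign convention of Eq. (1).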

2.2 Second Correlation (SC) Method

The auto-correlation of r2(t) can be expressed as

    R1(τ) = E[r2(t) r2(t + τ)] = E{[s(t − D) + n2(t)][s(t − D + τ) + n2(t + τ)]}
          = Rss(τ) + Rn2s(τ − D) + Rsn2(τ + D) + Rn2n2(τ)    (4)

Assuming the signal and the noise to be uncorrelated, we have

    R1(τ) = Rss(τ) + Rn2n2(τ) = RS1(τ) + N1'(τ)    (5)

where Rn2n2(τ), also written N1'(τ), is the auto-correlation of the noise n2(t). RS1(τ) and N1'(τ) can be thought of as the signal component and noise component of R1(τ), respectively.
The SC method cross-correlates R1(t) and Rc(t), and takes the time argument corresponding to the maximum peak of the output as the estimated time-delay. The SC method can be modeled by:

    D_SC = arg max_τ [Rs(τ)]

    Rs(τ) = E[R1(t) Rc(t + τ)] = E{[RS1(t) + N1'(t)][RS1(t − D + τ) + N1(t + τ)]}    (6)

Assuming the signal and the noise to be uncorrelated, we have

    Rs(τ) = E[RS1(t) RS1(t − D + τ)] + E[N1'(t) N1(t + τ)] = RS2(τ − D) + N2(τ)    (7)

where RS2(τ) is the auto-correlation of the signal RS1(τ); RS2(τ − D) reaches its maximum when τ = D; and N2(τ) is the cross-correlation of the noises N1'(t) and N1(t). RS2(τ − D) and N2(τ) can be thought of as the signal component and noise component of Rs(τ), respectively.
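The SC estimator can be sketched by stacking two correlation layers, mirroring Eqs. (4)-(7) (a Python/NumPy sketch; the toy signal parameters are illustrative, not the paper's):

```python
import numpy as np

def sc_delay(r1, r2, fs):
    """Second correlation (SC) delay estimate: correlate the auto-correlation
    R1 of r2 with the cross-correlation Rc of r1 and r2, Eqs. (4)-(7)."""
    xcorr = lambda a, b: np.correlate(a, b, mode="full")
    rc = xcorr(r2, r1)         # Rc: cross-correlation, peak near lag +D
    r1_auto = xcorr(r2, r2)    # R1: auto-correlation of r2, peak at lag 0
    rs = xcorr(rc, r1_auto)    # Rs: second correlation, peak near lag +D
    return (np.argmax(rs) - (len(r1_auto) - 1)) / fs

# Illustrative example: random signal, 37-sample true delay.
rng = np.random.default_rng(1)
fs = 1000.0
s = rng.standard_normal(2000)
r1 = s + 0.5 * rng.standard_normal(2000)
r2 = np.roll(s, 37) + 0.5 * rng.standard_normal(2000)
print(sc_delay(r1, r2, fs))  # close to 0.037 s
```

Because R1 peaks at lag 0 and Rc at lag D, their cross-correlation again peaks at lag D, while the two noise components are averaged down once more.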

2.3 Multi-Correlation (MC) Method

Correlation is an effective means of de-noising and increasing the signal-to-noise ratio. Therefore we try to estimate the time-delay at low SNR by multilayer correlation.
The auto-correlation of R1(t) can be expressed as

    R2(τ) = E[R1(t) R1(t + τ)] = E{[RS1(t) + N1'(t)][RS1(t + τ) + N1'(t + τ)]}    (8)

Assuming the signal and the noise to be uncorrelated, we have

    R2(τ) = E[RS1(t) RS1(t + τ)] + E[N1'(t) N1'(t + τ)] = RS2(τ) + N2'(τ)    (9)

where RS2(τ) is the auto-correlation of the signal RS1(τ) and N2'(τ) is the auto-correlation of the noise N1'(t). RS2(τ) and N2'(τ) can be thought of as the signal component and noise component of R2(τ), respectively.
The MC method cross-correlates R2(t) and Rs(t), and takes the time argument corresponding to the maximum peak of the output as the estimated time-delay. The MC method can be modeled by:

    D_MC = arg max_τ [Rm(τ)]

    Rm(τ) = E[R2(t) Rs(t + τ)] = E{[RS2(t) + N2'(t)][RS2(t − D + τ) + N2(t + τ)]}    (10)

Assuming the signal and the noise to be uncorrelated, we have

    Rm(τ) = E[RS2(t) RS2(t − D + τ)] + E[N2'(t) N2(t + τ)] = RS3(τ − D) + N3(τ)    (11)

where RS3(τ) is the auto-correlation of the signal RS2(τ); RS3(τ − D) reaches its maximum when τ = D; and N3(τ) is the cross-correlation of the noises N2'(t) and N2(t). RS3(τ − D) and N3(τ) can be thought of as the signal component and noise component of Rm(τ), respectively.
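The MC estimator adds one more correlation layer on top of the SC pipeline, mirroring Eqs. (8)-(11) (a Python/NumPy sketch with illustrative parameters, not the paper's chirp setup):

```python
import numpy as np

def mc_delay(r1, r2, fs):
    """Multilayer correlation (MC) delay estimate: a third correlation layer
    on top of the SC pipeline, as in Eqs. (8)-(11)."""
    xcorr = lambda a, b: np.correlate(a, b, mode="full")
    rc = xcorr(r2, r1)                  # Rc: cross-correlation, peak near lag +D
    r1_auto = xcorr(r2, r2)             # R1: auto-correlation of r2, peak at lag 0
    rs = xcorr(rc, r1_auto)             # Rs: second correlation, peak near lag +D
    r2_auto = xcorr(r1_auto, r1_auto)   # R2: auto-correlation of R1, peak at lag 0
    rm = xcorr(rs, r2_auto)             # Rm: third correlation, peak near lag +D
    return (np.argmax(rm) - (len(r2_auto) - 1)) / fs

# Illustrative example: random signal, 37-sample true delay, strong noise.
rng = np.random.default_rng(3)
fs = 1000.0
s = rng.standard_normal(1000)
r1 = s + 0.5 * rng.standard_normal(1000)
r2 = np.roll(s, 37) + 0.5 * rng.standard_normal(1000)
print(mc_delay(r1, r2, fs))  # close to 0.037 s
```

Each added layer sharpens the signal peak relative to the noise floor, which is the intuition behind the MC method's advantage at low SNR.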

3 Fast Implementation of CC, SC, and MC Methods


The above correlation operations are implemented in discrete time. Sampling r1(t) and r2(t) simultaneously gives two data sequences r1(n) and r2(n), each containing N data points. The cross-correlation R12(j) between r1(n) and r2(n) can be expressed as

    R12(j) = (1/N) Σ_{n=0}^{N−1} r1(n) r2(n + j)    (12)

For longer data sequences, correlation operations can be sped up by using the correlation theorem and the fast Fourier transform as follows [4]:

    R12(j) = IFFT{[FFT(r1(n))]* · FFT(r2(n))}    (13)

where FFT and IFFT denote the fast Fourier transform and the inverse fast Fourier transform, respectively, and * is the conjugate operator.
The fast implementation of the CC, SC and MC methods is given in Fig. 1, where REAL means taking the real part and MAX means finding the peak position.

Fig. 1. The fast implementation of CC, SC and MC methods
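The frequency-domain shortcut of Eq. (13) can be sketched as follows (Python/NumPy; the zero-padding to 2N is an implementation choice added here to suppress circular wrap-around, not something the paper states):

```python
import numpy as np

def fast_xcorr(r1, r2):
    """Cross-correlation via the correlation theorem, Eq. (13):
    IFFT(conj(FFT(r1)) * FFT(r2)), then the REAL step of Fig. 1.
    Zero-padding to 2N (an implementation choice) suppresses wrap-around."""
    nfft = 2 * len(r1)
    spec = np.conj(np.fft.fft(r1, nfft)) * np.fft.fft(r2, nfft)
    return np.real(np.fft.ifft(spec))

# The peak index gives the delay of r2 relative to r1
# (indices past nfft/2 correspond to negative lags).
rng = np.random.default_rng(2)
s = rng.standard_normal(2048)
r2 = np.roll(s, 25)
print(int(np.argmax(fast_xcorr(s, r2))))  # 25
```

For sequences of length N this costs O(N log N) instead of the O(N²) of direct correlation, which is why Fig. 1 builds all three methods on this primitive.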

4 Simulation Research
In order to avoid the influence of acoustic travel-distance measuring error and so on,
the acoustic travel-time estimation method based on multilayer correlation is verified

using MATLAB simulation data. The acoustic source signal s(t) is a linear swept-frequency cosine signal generated by chirp(t,f0,t1,f1), and the acoustic signal at the receiving point can be written as s(t − τ). In this paper, f0 = 200 Hz, f1 = 850 Hz, t1 = T/2 = N/2/fs, N = 25000 or 50000, fs = 250 kHz, and τ = 0.014412 s, where N is the number of samples and fs is the sample frequency.
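The simulated signal pair can be reproduced as follows (Python/NumPy; the linear chirp is written out explicitly rather than via MATLAB's chirp(), and the noise level shown is illustrative, the paper's SNRs range down to −19 dB):

```python
import numpy as np

fs = 250_000            # sample frequency, Hz
N = 25_000              # number of samples
f0, f1 = 200.0, 850.0   # sweep start and end frequencies, Hz
t1 = N / 2 / fs         # sweep end time, as in the paper
tau = 0.014412          # true acoustic travel time, s

t = np.arange(N) / fs
beta = (f1 - f0) / t1                                    # sweep rate, Hz/s
s = np.cos(2 * np.pi * (f0 * t + 0.5 * beta * t ** 2))   # linear chirp s(t)

d = int(round(tau * fs))                                 # delay in samples
delayed = np.concatenate([np.zeros(d), s])[:N]           # received signal s(t - tau)

rng = np.random.default_rng(0)
noisy = delayed + 0.1 * rng.standard_normal(N)           # illustrative noise level
print(d)  # 3603
```

Note that τ = 0.014412 s corresponds to an exact integer delay of 3603 samples at fs = 250 kHz, so the correlation peak is not smeared across neighbouring lags.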
Simulation research shows that the acoustic travel-time estimate is stable and exact if the noise is weak. If the noise is not weak, the estimate fluctuates slightly. The standard deviation (std) and the relative root-mean-square error (R_RMSE) are used to assess the stability and accuracy of the travel-time estimates. They are defined as follows:

    std = sqrt( Σ_{i=1}^{n} (τi − τ̄)² / (n − 1) ),   τ̄ = (1/n) Σ_{i=1}^{n} τi

    R_RMSE = [ sqrt( (1/n) Σ_{i=1}^{n} (τi − τ)² ) / τ ] × 100%    (14)

where τ is the actual acoustic travel-time, τi is the i-th estimate of the acoustic travel-time, and n is the number of measurements (estimates); in this paper, n = 100.
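The two quality measures can be computed directly (Python/NumPy; the sample estimates below are made-up illustrative values, not the paper's measurements):

```python
import numpy as np

def std_and_rrmse(estimates, tau):
    """Sample standard deviation and relative RMSE (in percent) of a set
    of travel-time estimates against the true travel time tau."""
    est = np.asarray(estimates, dtype=float)
    n = est.size
    std = np.sqrt(np.sum((est - est.mean()) ** 2) / (n - 1))
    r_rmse = np.sqrt(np.mean((est - tau) ** 2)) / tau * 100.0
    return std, r_rmse

# Illustrative estimates scattered around the true value 0.014412 s.
std, r_rmse = std_and_rrmse([0.01441, 0.01442, 0.01440, 0.01443], 0.014412)
print(std, r_rmse)
```

The std measures the spread of the estimates around their own mean (stability), while R_RMSE measures their deviation from the true value (accuracy), matching how Tables 1 and 2 are read.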
The estimation results when Gaussian white noise and colored noise are added are given in Table 1 and Table 2, respectively. The colored noise is obtained by feeding the white noise through a band-rejection filter. The system function of the filter is
    H(z) = 1 / (1 − 2z⁻¹ + 1.227z⁻² − 0.6192z⁻³)    (15)
The following can be found from Table 1 and Table 2:
1) The stability and accuracy of time-delay estimation decrease as the SNR decreases, or as the number of samples decreases.
2) Among the CC, SC and MC methods, the MC method has the best stability and accuracy of time-delay estimation.

Table 1. Estimation results when Gaussian white noise is added

                     CC method            SC method            MC method
          SNR   std       R-RMSE(%)  std       R-RMSE(%)  std       R-RMSE(%)
 25000    -5    1.78e-5   0.1233     5.46e-6   0.0382     3.08e-6   0.0215
 samples  -10   3.14e-5   0.2180     1.22e-5   0.0846     7.49e-6   0.0518
          -15   4.88e-5   0.3370     3.52e-5   0.2437     2.71e-5   0.1875
          -18   7.24e-5   0.5026     5.31e-5   0.3677     5.69e-5   0.3948
          -19   2.38e-4   1.6632     7.33e-5   0.5086     7.22e-5   0.5001
 50000    -5    1.41e-5   0.0980     4.28e-6   0.0296     2.63e-6   0.0182
 samples  -10   2.89e-5   0.2001     1.13e-5   0.0785     6.46e-6   0.0447
          -15   4.26e-5   0.2950     3.48e-5   0.2411     2.34e-5   0.1622
          -18   6.32e-5   0.4476     4.87e-5   0.3407     4.67e-5   0.3231
          -19   6.34e-5   0.4390     5.26e-5   0.3650     4.96e-5   0.3434

Table 2. Estimation results when colored noise is added

                     CC method            SC method            MC method
          SNR   std       R-RMSE(%)  std       R-RMSE(%)  std       R-RMSE(%)
 25000    -5    2.08e-5   0.1442     1.32e-5   0.0928     8.04e-6   0.0556
 samples  -10   2.04e-5   0.1411     1.13e-5   0.0786     8.03e-6   0.0560
          -15   2.30e-5   0.1591     1.06e-5   0.0736     7.39e-6   0.0512
          -18   2.10e-5   0.1454     1.16e-5   0.0816     8.61e-6   0.0609
          -19   2.30e-5   0.1622     1.27e-5   0.0879     8.96e-6   0.0623
 50000    -5    1.60e-5   0.1106     8.13e-6   0.0562     5.89e-6   0.0409
 samples  -10   1.79e-5   0.1247     8.84e-6   0.0611     5.69e-6   0.0394
          -15   1.77e-5   0.1231     8.87e-6   0.0618     5.96e-6   0.0412
          -18   1.88e-5   0.1318     8.48e-6   0.0585     6.44e-6   0.0456
          -19   1.87e-5   0.1304     1.01e-5   0.0701     5.78e-6   0.0399

5 Conclusion
In order to acquire a stable acoustic time-of-flight at low SNR, a time-delay estimation method combining multilayer cross-correlation and multilayer auto-correlation (the MC method) is proposed and compared with the cross-correlation (CC) method and the second correlation (SC) method. Theoretical analysis and simulation research in MATLAB show that the MC method obtains the highest estimation precision among these three methods when the signal-to-noise ratio is low.

Acknowledgments. The work is supported by the Natural Science Foundation of China (60772054), the Specialized Research Fund for the Doctoral Program of Higher Education (20102102110003) and the Shenyang Science and Technology Plan (F10213100).

References
1. Ostashev, V.E., Voronovich, A.G.: An Overview of Acoustic Travel-Time Tomography in the Atmosphere and its Potential Applications. Acta Acust. United Ac. 87, 721–730 (2001)
2. Gedalyahu, K., Eldar, Y.C.: Time-delay Estimation from Low-rate Samples: A Union of Subspaces Approach. IEEE T. Signal Proces. 58, 3017–3031 (2010)
3. Tang, J., Xing, H.Y.: Time Delay Estimation Based on Second Correlation. Computer Engineering 33, 265–269 (2007) (in Chinese)
4. Knapp, C.H., Carter, G.C.: The Generalized Correlation Method for Estimation of Time Delay. IEEE T. Acoust., Speech, Signal Processing ASSP-24, 320–327 (1976)
5. Ifeachor, E.C., Jervis, B.W.: Digital Signal Processing: A Practical Approach, 2nd edn. Publishing House of Electronics Industry, Beijing (2003)
Applying HMAC to Enhance Information Security for
Mobile Reader RFID System

Fu-Tung Wang1, Tzong-Dar Wu2, and Yu-Chung Lu3

1 Department of Computer Science & Information Engineering,
Ching Yun University, 32097 Taoyuan, Taiwan
2 Department of Electrical Engineering,
National Taiwan Ocean University, 20224 Keelung, Taiwan
3 eSECURE Technology, Inc., 32068 Taoyuan, Taiwan
futung@cyu.edu.tw

Abstract. This study proposes a data protection framework to enhance information security for mobile reader RFID systems. An HMAC scheme is integrated into the communication protocol between tag and reader, taking into account the security requirements and the limited computation capability and memory space of passive RFID systems. The system life cycle comprises initial, maintain and revoke phases. The security is then analyzed and a prototype is developed. The results demonstrate that the proposed approach meets the system security requirements and suits mobile reader RFID systems.

Keywords: RFID, HMAC, weakness, security controls.

1 Introduction
RFID is a contact-less automatic identification technology which combines information and communication technologies. It has become very popular and has been applied in various business processes. The most common RFID application types are asset management, asset tracking, automated payment, access control, and supply chain management [1-3]. An RFID system is comprised of tags, readers, and application systems. In the system structure shown in Fig. 1, the tag stores detailed information about the customer and the owner company. The RFID readers read or write data to tags and transfer the data to the application system over a radio frequency interface.

Fig. 1. System structure diagram

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 568–573, 2011.
© Springer-Verlag Berlin Heidelberg 2011

The configuration of an RFID system can be categorized as passive or active. In the present mobile era, active-type applications are more attractive to most companies. However, only a few studies focusing on active-type RFID systems have been found [4-11]. The situation we face is that ubiquitous computing is accompanied by new weaknesses that invite information attacks. Moreover, owing to the widely distributed item sites in the application environment, the owning company cannot manage its assets satisfactorily. In the general case, an information attack exploits some weakness and disables the service. Potential security controls are identified to mitigate the corresponding risks, following the Guidelines for RFID Systems [12]. The system security requirements highlighted in [13] are listed in Table 1.
The remainder of this paper is organized as follows: Section 2 describes the proposed security protocol, Section 3 presents the system implementation and discussion, and Section 4 contains the conclusions.

Table 1. Security requirement traceable mapping

Security control: 5.2.7 Non-revealing identifier formats
  Countermeasure scheme: Specific security model
  Description: Data stored in RFID tags are assigned and coded using a specific format and algorithm.

Security control: 5.3.1 Authentication and data integrity
  Countermeasure scheme: HMAC (password-based authentication and hash function)
  Description: Authentication techniques based on cryptography often provide an integrity service for the data included in the transaction. This is used to prevent unauthorized reading from or writing to tags.

Security control: 5.3.3 Tag data protection
  Countermeasure scheme: DES
  Description: Includes encrypting the data on tags.

2 Proposed Security Protocol


The scope of this study is restricted to mobile reader RFID systems. The back-end system is assumed to be well protected because a well-defined information security infrastructure has been deployed. Since tags are embedded in each item, and the items are numerous and scanned at very close range, passive tags make the most sense given their low cost.
The proposed system includes a mobile device and a server end. The mobile device provides the functions of operator logon, tag registration, and secret-key modification. The initialization part of the authentication module is already included in the program; it is invoked when the reader communicates with a tag. The server end mainly performs database management and deals with tag management. All the information of the corresponding tags and the operation records are recorded by the server.
The proposed framework implements the engineering-aspect controls concerning the authentication and privacy issues [14], using a two-phase method to address these problems. In turn, DES is employed to offer safer data storage and transfer. The HMAC scheme and the proposed protocol are used as an authentication scheme to enhance system security. The system life cycle comprises initial, maintain and revoke phases.

1) Initial phase: First, the tag embedded in the item is processed in the factory; the secret keys belonging to each tag are specified and stored in the database. The related HMAC is generated and written to the tags; the detailed steps are depicted in Fig. 2.
2) Maintain phase: Third-party personnel are responsible for deciding whether the tag is legal by checking the HMAC. If and only if the HMAC matches, the maintainer is allowed to proceed to the next step and write the encrypted data to the tag. The detailed operation process is depicted in Fig. 3.
3) Revoke phase: If and only if the HMAC matches, the tag can be revoked by following the same procedure as shown in Fig. 3.
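A minimal sketch of the HMAC check underlying all three phases (Python; the per-tag key derivation, the field being authenticated, and the SHA-256 hash choice are illustrative assumptions — the paper does not specify them):

```python
import hmac
import hashlib

MASTER_KEY = b"company-master-secret"   # hypothetical company secret

def tag_key(tag_id: bytes) -> bytes:
    """Derive a per-tag secret key (illustrative: HMAC of the tag ID under
    the master key, so the keys of different tags look unrelated)."""
    return hmac.new(MASTER_KEY, tag_id, hashlib.sha256).digest()

def provision(tag_id: bytes) -> bytes:
    """Initial phase: compute the HMAC to be written onto the tag."""
    return hmac.new(tag_key(tag_id), tag_id, hashlib.sha256).digest()

def verify(tag_id: bytes, stored_mac: bytes) -> bool:
    """Maintain/revoke phases: the tag is legal iff the HMACs match."""
    return hmac.compare_digest(provision(tag_id), stored_mac)

mac = provision(b"TAG-0001")
print(verify(b"TAG-0001", mac))   # True
print(verify(b"TAG-0002", mac))   # False
```

A constant-time comparison (compare_digest) is used in the sketch so that the check itself does not leak key material through timing.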

Fig. 2. The initial phase flow chart

Fig. 3. The maintain process flow chart



3 Discussion and System Implementation

As mentioned previously, low computation ability and limited memory space are severe limitations of most RFID systems, so either a more efficient cryptography scheme or a more intelligent protocol design is required to improve security and system performance. The HMAC approach to RFID is attractive because of its lower computing time and memory-space savings compared with a digital signature scheme. The proposed mechanism achieves two-way authentication between tag and reader, since the tag identification number is treated as a unique number and only a legal operational reader keeps the correct key for the corresponding tag. An illegal reader does not have permission to read the tag information, because such a reader fails the validation process of the HMAC check. The hash function ensures the integrity of the tag identification number and the company's secret key information. Moreover, security is further enhanced because the data sent from the tag is encrypted with DES. Finally, each tag has a different secret key and the keys have no obvious relationship, so a hacker cannot deduce the keys of other tags even after breaking the key of a certain tag, and can hardly reveal the data stored in a tag.
Based on the proposed system architecture and security framework, a patrol prototype system was developed with Microsoft Visual Studio 2005. The server management interface is shown in Fig. 4 and the mobile device interface is shown in Fig. 5. The patrol-related data are shown in Fig. 6. After a legal tag is verified, the current time and the responsible staff identification number are written to the tag.

Fig. 4. The server logon interface



Fig. 5. The mobile device logon interface

Fig. 6. The patrol data recording interface

4 Conclusion
Integrating the RFID technique into asset management is an innovative application, and a mobile reader based system is preferred: it requires fewer human resources and analyzes business data in time. A framework comprising HMAC authentication and DES encryption has been proposed to protect information security. The prototype system demonstrates that the proposed approach can meet the system security requirements. Furthermore, the proposed method provides flexibility in the use of memory space. There is no doubt that lower computation cost and greater memory savings are two factors favoring the successful implementation of RFID. The proposed protocol is valuable for deploying a real RFID-based asset management system, and the security framework could also be used for other mobile reader RFID system deployments.

References
1. Chan, S.Y., Luan, S.W., Teng, J.H., Tsai, M.C.: Design and implementation of a RFID-based power meter and outage recording system. In: IEEE International Conference on Sustainable Energy Technologies, pp. 750–754 (2008)
2. Rieback, M.R., Crispo, B., Tanenbaum, A.S.: The Evolution of RFID Security. IEEE Pervasive Computing, 62–69 (2006)
3. Wu, M.Y., Ke, C.K., Tzeng, W.L.: Applying Context-Aware RBAC to RFID Security Management for Application in Retail Business. In: IEEE Asia-Pacific Conference on Service Computing, pp. 1208–1212 (2008)
4. Ding, Z.H., Li, J.T., Feng, B.: A Taxonomy Model of RFID Security Threats. In: IEEE International Conference on Communication Technology, pp. 765–768 (2008)
5. Schaberreiter, T., Wieser, C., Sanchez, I., Riekki, J., Roning, J.: An Enumeration of RFID Related Threats. In: IEEE Second International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, pp. 381–389 (2008)
6. Sharif, V.P.: A Critical Analysis of RFID Security Protocols. In: IEEE International Conference on Advanced Information Networking and Applications, pp. 1357–1362 (2008)
7. Chien, H.Y.: Secure access control schemes for RFID systems with anonymity. In: 2006 International Workshop on Future Mobile and Ubiquitous Information Technologies, pp. 96–99 (May 2006)
8. Zhai, J., Park, C., Wang, G.: Hash-Based RFID Security Protocol Using Randomly Key-Changed Identification Procedure. In: Gavrilova, M.L., Gervasi, O., Kumar, V., Tan, C.J.K., Taniar, D., Laganà, A., Mun, Y., Choo, H. (eds.) ICCSA 2006. LNCS, vol. 3983, pp. 296–305. Springer, Heidelberg (2006)
9. Avoine, G., Oechslin, P.: A Scalable and Provably Secure Hash-Based RFID Protocol. In: Communications Workshops on IEEE Pervasive Computing, pp. 110–114 (2005)
10. Gao, X., Xiang, Z., Wang, H., Shen, J., Huang, J., Song, S.: An Approach to Security and Privacy of RFID System for Supply Chain. In: IEEE International Conference on E-Commerce Technology for Dynamic E-Business, pp. 164–168 (2004)
11. Kang, S., Lee, I.: A Study on New Low-Cost RFID System with Mutual Authentication Scheme in Ubiquitous. In: IEEE International Conference on Multimedia and Ubiquitous Engineering, pp. 527–530 (2008)
12. Karygiannis, T., Eydt, B., Barber, G., Bunn, L., Phillips, T.: Guidelines for Securing Radio Frequency Identification (RFID) Systems. NIST Special Publication 800-98 (2007)
13. Wang, F.-T., Wu, T.-D.: Information Security Study on RFID Based Power Meter System. In: IEEE International Conference on Information Management and Engineering, Chengdu, China, pp. 317–320 (2010)
14. Maiwald, E.: Network Security: A Beginner's Guide. McGraw-Hill, New York (2003)
Analysis Based on Generalized Regression Neural
Network to Oil Atomic Emission Spectrum Data of a
Type Diesel Engine

ChunHui Zhang, HongXiang Tian, and Tao Liu

College of Naval Architecture and Power,


Naval Univ. of Engineering,
Wuhan Hubei 430033, China

Abstract. In order to deeply mine the information in oil atomic emission spectrum data, a simulation model and a prediction model of the Cu concentration of a six-cylinder diesel engine were established by applying a Generalized Regression Neural Network. Seven different working conditions were set up and sixty-nine oil samples were taken from the engine. The results show that the absolute errors of the simulated values of the 69 samples are within the acceptable accuracy indices, and the absolute errors of the predicted values of the 19 samples are lower than the acceptable accuracy indices. This proves that the Cu concentration can be predicted effectively via the Generalized Regression Neural Network algorithm.

Keywords: Generalized Regression Neural Network (GRNN), Diesel Engine, Atomic Emission Spectrum, Wear elements.

1 Introduction

Modern mechanical equipment and power plants commonly run on oil films of several microns, so the lubricating oil carries abundant wear information. Analysis of the physical and chemical performance targets, the particles, and the concentrations of elements can reveal the wear condition, oil quality decay, and contamination of a machine [1]. Oil monitoring is a technique for obtaining information about the lubricant and wear, predicting faults, and ascertaining the reasons, types and locations of faults by analyzing the performance change of the lubricant and the wear debris it carries from the inspected equipment [2]. There are many common methods for deeply mining the information in oil atomic emission spectrum data, including grey system theory [3], factor analysis [4], maximum entropy principle analysis [5], correlation coefficient analysis [6], regression analysis [7] and support vector machines [8], etc. This paper aims at setting up a relation between the concentration of wear elements of a diesel engine and its loads, cylinder clearances, and run-time since the oil was renewed, by applying a Generalized Regression Neural Network (GRNN) to the oil atomic emission spectrum data of a 6-cylinder diesel engine.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 574–580, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2 General Regression Neural Network Theory


General Regression Neural Network was developed by Donald F. Specht in 1991,
called GRNN for short[9]. GRNN have a strong ability of the nonlinear mapping,
flexible network structure and a high degree of fault tolerance, be employed for solving
nonlinear problems. GRNN have a higher advantage in approximation of function and
learning speed than RBF and be convergence to optimization regression plane finally.
Whats more, GRNN can deal with unstable date too.
The GRNN topology consists of input layer, hidden layer (radial basis layer) and
output layer, the internal structure of a GRNN is depicted in figure1[10].

Fig. 1. Internal structure of GRNN

As shown in Fig. 1, X is the input vector, R is the dimension of the input vector, and S1 and S2 are the numbers of neurons in the two layers. The number of neurons in the hidden layer equals the number of training samples. The weight function of the hidden layer is the Euclidean distance, which calculates the distance from the input vector of length R to each of the Q training vectors of length R, expressed by ‖dist‖; b1 is the threshold vector of the hidden layer. In general, the transfer function of the hidden layer is the Gaussian function:

    Ri(x) = exp( −‖x − di‖² / (2σi²) )    (1)

In equation (1), x is the input sample, di is the center of the i-th hidden-layer node, and σi is the smoothing factor which determines the shape of the basis function at the i-th hidden node. The output of the hidden layer is

    A1 = exp( −(‖dist‖ · b1)² / (2σ²) )    (2)

The output layer is a pure linear layer. Its weight function is a normalized dot product, expressed by nprod, which calculates n2 of the network: the elements of n2 are the quotients of the dot products of vector A1 with each row of the weight matrix W2, divided by the sum of the elements of vector A1. The output of the network is A2 = purelin(n2). Because the learning of the network depends only on the data samples, the results largely avoid the influence of subjective assumptions.
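The forward pass described in this section is essentially a normalized radial-basis (Nadaraya-Watson) regression, which can be sketched as follows (Python/NumPy; the function names and toy data are illustrative, not the paper's oil-spectrum data):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """GRNN forward pass: Gaussian radial-basis weights (hidden layer)
    followed by a normalized dot product (output layer)."""
    # squared Euclidean distances from x to every training vector
    d2 = np.sum((X_train - x) ** 2, axis=1)
    a1 = np.exp(-d2 / (2.0 * sigma ** 2))      # hidden-layer activations, Eq. (1)
    return np.dot(a1, y_train) / np.sum(a1)    # normalized dot product output

# Toy regression: noisy samples of y = x^2 on [0, 3].
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 3, size=(200, 1))
y_train = (X_train[:, 0] ** 2) + 0.05 * rng.standard_normal(200)

pred = grnn_predict(X_train, y_train, np.array([2.0]), sigma=0.1)
print(pred)   # near 4.0
```

The one hidden neuron per training sample and the normalization by the sum of activations correspond directly to the ‖dist‖/radial-basis and nprod/purelin stages of Fig. 1; sigma plays the role of the SPREAD parameter discussed in Section 3.2.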

3 Data Analysis and Discussion

3.1 Experimental Data and Instrument

The SPECTROIL M spectrometer used in the experiments is an emission spectrometer specially designed for analyzing lubricating oil elements. The instrument has high precision and good repeatability and can analyze 21 elements at once.
The emission spectrometer was calibrated by complete normalization and daily normalization before the experiments. MATLAB 2010a is employed to analyze the emission spectrum data.
The emission spectrum data analyzed in this paper come from reference [11]. The diesel engine has seven different operating conditions, arranged by updating the pistons of the diesel engine and altering the clearance between cylinders and new pistons. 69 oil samples taken from the engine under different loads and operating times for the different operating conditions were analyzed by the SPECTROIL M spectrometer. The number, the clearance between cylinder and piston, and the code of each condition are shown in Table 1 [12]:

Table 1. Seven Different Conditions of Diesel Engine

No.  Clearance Between Cylinder-piston                                  Code of Cylinder-piston
1    0.7mm at the second cylinder-piston, 0.6mm at the fifth,           0000001
     0.87mm at the sixth; other cylinder-pistons are normal.
2    0.7mm at the second cylinder-piston, 0.87mm at the sixth;          0000010
     other cylinder-pistons are normal.
3    0.7mm at the second cylinder-piston; other cylinder-pistons        0000100
     are normal.
4    0.87mm at the sixth cylinder-piston; other cylinder-pistons        0001000
     are normal.
5    0.7mm at the second cylinder-piston, 0.6mm at the fifth;           0010000
     other cylinder-pistons are normal.
6    0.6mm at the fifth cylinder-piston; other cylinder-pistons         0100000
     are normal.
7    Normal clearance at every cylinder-piston.                         1000000
Analysis Based on GRNN to Oil Atomic Emission Spectrum Data 577

3.2 Establishing the GRNN Model

According to a property of the GRNN, the number of neurons in the input layer equals the dimension of the input vector, and the number of neurons in the output layer equals the dimension of the output vector. There are 21 elements in the oil atomic emission spectrum data. Among them, Fe, Cr, Pb, Cu and Al are related to the wear of the diesel engine; Na, Ca, Mg, Ba, P, B and Zn are related to the additives of the lubricating oil; Mg, B, Na, Si, Ca and P are related to exotic contamination; elements whose concentrations are below the sensitivity of the instrument, such as Ag, Ti and V, are disturbing elements; and component elements of the lubricating oil itself, such as C and H, are not commonly analyzed in the experiment. The concentration of Cu, which adequately reflects the wear condition of the engine, is chosen as the output data of the neural network, so the output layer has a single node. The input vector is a matrix composed of the clearance between cylinder and piston, the code of the cylinder-piston and the operating time, so the input layer has 9 nodes. A GRNN model is then established by the MATLAB statement newgrnn(P, T, SPREAD), where the input vectors are represented by P, the output vectors by T, and the spread rate of the radial basis function by SPREAD. The transfer functions of the hidden layer and the output layer are a Gaussian function and a pure linear function respectively, so the only user-set parameter of the GRNN is SPREAD. A great number of experiments show that the network approximates the sample data smoothly and the prediction error on the sample data is small when SPREAD equals 10.
The concentration of Cu is simulated by the trained GRNN. The simulated Cu concentrations are shown in Figure 2:

Fig. 2. The predicting result of Cu by GRNN method


578 C.H. Zhang, H.X. Tian, and T. Liu

3.3 Error Analysis

Figure 2 shows that most of the samples are simulated successfully. The SPECTROIL M spectrum instrument prescribes the acceptable accuracy indices of Cu when the standard concentrations of Cu are 0, 5, 10, 30, 50, 100 and 300 ppm, as shown in Table 2.
The MATLAB cubic interpolation function interp1(x, y, xi, 'cubic') is employed to estimate the accuracy indices of Cu for the 69 samples, with the standard concentrations and accuracy indices of Table 2 used as the input vectors x and y of the function.

Table 2. Acceptable Accuracy Indices for Wear Metals-mean of Cu

Standard concentration /ppm   0      5      10     30     50     100    300
Accuracy Indices /ppm         0.92   1.61   2.44   5.91   9.43   18.2   53.5

The absolute errors of the simulated Cu concentrations have been obtained by comparing the simulated values with the observed values. Comparing the absolute errors with the acceptable accuracy indices obtained from the cubic interpolation reveals the quality of the simulation. The result shows that the absolute errors of all 69 samples are lower than the acceptable accuracy indices, as shown in Table 3.
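The interpolation and overrun check described above can be sketched in Python (SciPy's interp1d plays the role of MATLAB's interp1; the helper names are our own, and the tabulated values are those of Table 2):

```python
import numpy as np
from scipy.interpolate import interp1d

# Table 2: standard Cu concentrations (ppm) and acceptable accuracy indices.
std_conc = np.array([0, 5, 10, 30, 50, 100, 300], dtype=float)
acc_idx = np.array([0.92, 1.61, 2.44, 5.91, 9.43, 18.2, 53.5])

# Equivalent of MATLAB's interp1(x, y, xi, 'cubic').
acc_of = interp1d(std_conc, acc_idx, kind='cubic')

def overrun(observed_ppm, simulated_ppm):
    """Return 'Y' if |simulated - observed| exceeds the interpolated
    acceptable accuracy index at the observed concentration, else 'N'."""
    return 'Y' if abs(simulated_ppm - observed_ppm) > acc_of(observed_ppm) else 'N'
```

For sample 11 of Table 4 (observed 7.8 ppm, simulated 7.29 ppm) this yields 'N', matching the paper's verdict.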

Table 3. Simulation Efficiency of Cu Concentration

No.  Absolute Error (ppm)  Accuracy Indices (ppm)  Overrun (Y/N)  |  No.  Absolute Error (ppm)  Accuracy Indices (ppm)  Overrun (Y/N)
1 -0.01 1.91 N 36 0.05 1.4 N
2 -0.08 1.9 N 37 0.06 1.4 N
3 0.36 1.83 N 38 0.04 1.42 N
4 0.48 1.9 N 39 0.04 1.42 N
5 0.1 1.96 N 40 0.06 1.43 N
6 0.54 1.91 N 41 0.01 1.43 N
7 -0.43 2.08 N 42 0 1.39 N
8 0.5 1.95 N 43 0 1.43 N
9 0.31 1.98 N 44 0.03 1.43 N
10 0.41 1.95 N 45 0.34 1.43 N
11 -0.28 2.06 N 46 0.26 1.69 N
12 0.22 1.98 N 47 0.03 1.66 N
13 -0.17 2.05 N 48 0.05 1.66 N
14 0.19 2.01 N 49 0 1.51 N
15 -0.4 2.12 N 50 0.03 1.52 N
16 -0.79 2.18 N 51 0.02 1.54 N
17 0.05 2.03 N 52 0.06 1.55 N
18 -0.23 2.08 N 53 0.05 1.51 N
19 0.15 2.01 N 54 0.1 1.11 N
20 -0.63 2.15 N 55 0.09 1.14 N
21 0.18 2.01 N 56 0.02 1.18 N
22 -0.15 2.12 N 57 0.02 1.18 N
23 -0.13 2.12 N 58 0.02 1.19 N

Table 3. (continued)

24 0 1.77 N 59 0.02 1.21 N


25 0 1.73 N 60 0.02 1.21 N
26 -0.04 1.75 N 61 0.07 1.22 N
27 -0.05 1.75 N 62 0.13 1.25 N
28 -0.05 1.75 N 63 0.22 1.03 N
29 0.25 1.7 N 64 0.13 1.05 N
30 -0.14 1.81 N 65 0.06 1.07 N
31 -0.04 1.8 N 66 0.15 1.09 N
32 0.07 1.78 N 67 0.14 1.09 N
33 -0.04 1.8 N 68 0 1.09 N
34 0.01 1.36 N 69 0 1.09 N
35 -0.05 1.42 N

3.4 Prediction and Analysis

According to the constantly changing operating-condition parameters (including running time and load) and machine parameter (clearance between cylinder and piston), an appropriate model is set up based on the GRNN algorithm to predict the Cu concentration. 50 random samples were taken as the training set, and the remaining 19 samples were predicted; the prediction results are shown in Table 4:

Table 4. Predicting Efficiency of Cu Concentration

No.  Observation value (ppm)  Prediction value (ppm)  Absolute Error (ppm)  Accuracy Indices (ppm)  Overrun (Y/N)
11 7.8 7.29 -0.51 2.06 N
12 7.3 7.3 0 1.98 N
13 7.7 7.31 -0.39 2.05 N
14 7.5 7.65 0.15 2.01 N
15 8.1 7.66 -0.44 2.12 N
28 5.9 5.95 0.05 1.75 N
29 5.6 5.95 0.35 1.7 N
30 6.3 6.18 -0.12 1.81 N
38 3.7 3.62 -0.08 1.42 N
39 3.7 3.7 0 1.42 N
40 3.8 3.7 -0.1 1.43 N
46 5.5 5.02 -0.48 1.69 N
47 5.3 5.29 -0.01 1.66 N
53 4.3 4.6 0.3 1.51 N
59 2.2 2.28 0.08 1.21 N
60 2.2 2.28 0.08 1.21 N
61 2.3 2.5 0.2 1.22 N
68 1.3 1.17 -0.13 1.09 N
69 1.3 1.18 -0.12 1.09 N

The data shown in Table 4 suggest that the absolute errors of all 19 samples are lower than the acceptable accuracy indices.

4 Conclusions
Based on the Generalized Regression Neural Network algorithm, a simulation model has been set up for the relation between the Cu concentration in the lubricating oil of a diesel engine and its loads, cylinder clearances and running time since the oil was renewed. The simulation model has proved to be effective, since the absolute errors of the simulated values of all 69 samples are lower than the acceptable accuracy indices.
For the seven working conditions, the Cu concentrations of the 19 random samples were predicted accurately through the predicting model based on the Generalized Regression Neural Network analysis.

References

1. Toms, L.A., Toms, A.M.: Machinery Oil Analysis. Society of Tribologists & Lubrication Engineers, 120 (2008)
2. Yan, X.: Development and Think of Oil Monitor Technique. Lubrication Engineering 14(7), 68 (1999)
3. Wang, L., Li, L.: The Sampling Time Prediction of Oil Analysis for Power-shift Steering Transmission. Lubrication Engineering 35(8), 84–87 (2010)
4. Liu, T., Tian, H.X., Guo, W.Y.: Application of Factor Analysis to a Type Diesel Engine SOA. In: 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, March 13-14, pp. 612–615 (2010)
5. Huo, H., Li, Z., Xia, Y.: Application of Maximum Entropy Probability Concentration Estimation Approach to Constituting Oil Monitoring Diagnostic Criterions. Tribology International 39, 528–532 (2006)
6. Tian, H.X., Ming, T.F., Liu, Y.: Comparison among Oil Spectral Data of Six Types of Marine Engine. In: International Conference on Transportation Engineering, Chengdu, pp. 43–48 (2009)
7. Zhou, P., Liu, D.F., Shi, X., Li, G.: Threshold Setting of Oil Spectral Analysis Based on Robust Regression. Lubrication Engineering 35(5), 85–88 (2010)
8. Fan, H.B., Zhang, Y.T., Ren, G.Q., Luo, H.F.: Study on Prediction Model of Oil Spectrum Based on Support Vector Machines. Lubrication Engineering 183(11), 148–150 (2006)
9. Shi, F., Wang, X.C., Yu, L.: Analysis of 30 Cases of MATLAB Neural Networks, pp. 73–80. Press of Beijing University of Aeronautics and Astronautics, Beijing (2010)
10. Yan, W.J.: Research on Discrimination of Tongue Diseases with Near Infrared Spectroscopy. Infrared Technology 32(8), 487–490 (2010)
11. Liu, T.: Study in the Field of Oil Atomic Emission Spectrum Data Mining, pp. 18–20. Naval University of Engineering, Wuhan (2009)
12. Liu, T., Tian, H.X., Guo, W.Y.: Application of Factor Analysis to a Type Diesel Engine SOA. In: 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, March 13-14, pp. 612–615 (2010)
Robust Face Recognition Based on KFDA-LLE
and SVM Techniques

GuoQiang Wang* and ChunLing Gao

Department of Computer and Information Engineering,


Luoyang Institute of Science and Technology,
471023 Luoyang, Henan, P.R. China
{wgq2211,gclcsd}@163.com

Abstract. Locally Linear Embedding (LLE) is a recently proposed algorithm for non-linear dimensionality reduction and manifold learning. However, it may not be optimal for classification problems. In this paper, an improved version of LLE, namely KFDA-LLE, is proposed using the kernel Fisher discriminant analysis (KFDA) method, combined with an SVM classifier for the face recognition task. First, the input training samples are projected into a low-dimensional space by LLE. Then KFDA is introduced to find the optimal projection direction. Finally, an SVM classifier is used for face recognition. Experimental results on a face database demonstrate that the extended LLE method is more efficient and robust.

Keywords: Face Recognition, Manifold learning, Locally Linear Embedding,


Kernel Fisher Discriminant Analysis (KFDA), Support Vector Machines.

1 Introduction
Face recognition has been researched extensively in the past decade due to the recent
emergence of applications such as secure access control, visual surveillance, content-
based information retrieval, and advanced human and computer interaction. However,
facial expression, occlusion and lighting conditions also change the overall
appearance of faces. The high degree of variability in those factors makes face
recognition a challenging task. To recognize human faces efficiently, dimensionality
reduction is an important and necessary operation for multi-dimensional data. The
objective of dimensionality reduction is to obtain a more compact representation of
the original data, a representation that nonetheless captures all the information
necessary for higher-level decision-making. Wang et al. [1] present four reasons for
reducing the dimensionality of observation data: (1) To compress the data to reduce
storage requirements; (2) To extract features from data for face recognition; (3) To
eliminate noise; and (4) To project data to a lower-dimensional space so as to be able
to discern the data distribution. For face recognition, classical dimensionality reduction methods include Principal Component Analysis (PCA) [2], Independent Component Analysis [3], and Linear Discriminant Analysis [4].

*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 581–587, 2011.
© Springer-Verlag Berlin Heidelberg 2011
582 G. Wang and C. Gao

Recently, neuroscientists have suggested that humans perceive objects in a manifold way. A manifold, in brief, is a topological space that is locally Euclidean. Roweis and Saul [5] proposed a non-linear dimensionality reduction method, Locally Linear Embedding (LLE), for learning the global structure of nonlinear manifolds by exploiting the local symmetries of linear reconstructions. Although LLE has demonstrated excellent results for exploratory analysis and visualization of multivariate data, it is suboptimal from the perspective of pattern classification. In this paper, we propose a method for face recognition by extending LLE with Kernel Fisher Discriminant Analysis (KFDA). First, the input training samples are projected into a low-dimensional space. Then KFDA is introduced to find the optimal projection direction. Finally, SVM classifiers are used for face recognition. Experimental results demonstrate the effectiveness and robustness of the proposed recognition approach.

2 KFDA-LLE
LLE maps its inputs into a single global coordinate system of lower dimension, attempting to discover nonlinear structure in high-dimensional data by exploiting the local symmetries of linear reconstructions. Although capable of generating highly nonlinear embeddings, its optimizations do not involve local minima.
The LLE transformation algorithm is based on simple geometric intuitions. The input data consist of N points x_i ∈ R^D, i ∈ [1, N], each of dimensionality D, obtained by sampling an underlying manifold. Provided there is sufficient data (such that the manifold is well sampled), each data point and its neighbors are expected to lie on or near a locally linear patch of the manifold. Linear coefficients that reconstruct each data point from its neighbors are used to characterize the local geometry of these patches. As output, it provides N points y_i ∈ R^d, i ∈ [1, N], where d ≪ D. A brief description of the LLE algorithm is as follows:
Stage I, the cost function to be minimized is defined as:
ε(W) = Σ_{i=1}^{N} ‖ x_i − Σ_{j=1}^{N} W_ij x_ij ‖²    (1)

Given X = [x_1, x_2, ..., x_N], the dimension of each x_i is D. For one vector x_i and weights W_ij that sum to 1, this gives the contribution:

ε^i(W) = ‖ Σ_{j=1}^{K} W_ij (x_i − x_ij) ‖² = Σ_{j=1}^{K} Σ_{m=1}^{K} W_ij W_im C^i_{jm}    (2)

where C^i is the K × K matrix:

C^i_{jm} = (x_i − x_ij)^T (x_i − x_im)    (3)

Stage II, the weight matrix W is fixed and new d-dimensional vectors y_i are sought which minimize another cost function:

Φ(Y) = Σ_{i=1}^{N} ‖ y_i − Σ_{j=1}^{N} W_ij y_ij ‖²    (4)

The W_ij can be stored in an N × N sparse matrix M; re-writing equation (4) then gives:

Φ(Y) = Σ_{i=1}^{N} Σ_{j=1}^{N} M_ij y_i^T y_j    (5)
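The two stages above can be condensed into a short NumPy sketch (a didactic implementation under the usual LLE assumptions, not the authors' code; the regularization of the local Gram matrix is a common practical addition):

```python
import numpy as np

def lle(X, K=5, d=2, reg=1e-3):
    """Minimal LLE (Roweis & Saul): reconstruction weights, then embedding."""
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        # K nearest neighbours of x_i (excluding the point itself).
        idx = np.argsort(np.linalg.norm(X - X[i], axis=1))[1:K + 1]
        # Local Gram matrix C^i_jm = (x_i - x_j)^T (x_i - x_m), Eq. (3).
        Z = X[i] - X[idx]
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(K)   # regularise for stability
        w = np.linalg.solve(C, np.ones(K))   # minimise Eq. (2) s.t. sum = 1
        W[i, idx] = w / w.sum()
    # Stage II: bottom eigenvectors of M = (I - W)^T (I - W), Eqs. (4)-(5).
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                  # skip the constant eigenvector
```

A production implementation with the same interface is available as sklearn.manifold.LocallyLinearEmbedding.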

To improve the standalone classification performance of LLE, one needs to combine LLE with some discrimination criterion. KFDA-LLE is similar to the original LLE in the first two steps: the original facial vector x ∈ R^D is transformed into the feature vector y ∈ R^d with d ≪ D. The main difference is that in our algorithm the feature vector y is then nonlinearly mapped into a high-dimensional space, after which an FLD method is applied, globally forming a nonlinear mapping, i.e., KFDA. The computation in the high-dimensional space can be facilitated by a Mercer kernel function.
The main idea of KFDA is to first map the feature vectors y into a high-dimensional space F by a nonlinear mapping Φ. Fisher linear discriminant analysis (FLD) can then be performed in F. Thus, we define the within-class scatter matrix S_w^Φ and the between-class scatter matrix S_b^Φ for the mapped training samples respectively as follows:
S_w^Φ = Σ_{i=1}^{c} Σ_{j=1}^{n_i} (Φ(g_j) − μ_i^Φ)(Φ(g_j) − μ_i^Φ)^T    (6)

S_b^Φ = Σ_{i=1}^{c} n_i (μ_i^Φ − μ^Φ)(μ_i^Φ − μ^Φ)^T    (7)

where μ_i^Φ is the mean vector of class i and μ^Φ is the total mean vector in the mapped space. If S_w^Φ is nonsingular, the optimal projection W_opt is chosen as the matrix with orthonormal columns which maximizes the ratio of the determinant of the between-class scatter matrix to that of the within-class scatter matrix, i.e.,

W_opt = arg max_W |W^T S_b^Φ W| / |W^T S_w^Φ W|    (8)

According to the theory of reproducing kernels [6], any solution w must lie in the space spanned by {Φ(g_1), ..., Φ(g_n)}, i.e.,

w = Σ_{i=1}^{n} α_i Φ(g_i) = Φα    (9)

where we define Φ as [Φ(g_1), Φ(g_2), ..., Φ(g_n)] and a coefficient vector α as [α_1, α_2, ..., α_n]^T. Substituting Eq. (6), Eq. (7) and Eq. (9) into Eq. (8), it follows that

w^T S_b^Φ w = α^T K_b α    (10)

w^T S_w^Φ w = α^T K_w α    (11)

where

ξ_j = [k(g_1, g_j), ..., k(g_n, g_j)]^T    (12)

K_b = Σ_{i=1}^{c} n_i (m_i − m)(m_i − m)^T    (13)

K_w = Σ_{i=1}^{c} Σ_{j=1}^{n_i} (ξ_j − m_i)(ξ_j − m_i)^T    (14)

m_j = (1/n_j) [ Σ_{i=1}^{n_j} k(g_1, g_i), ..., Σ_{i=1}^{n_j} k(g_n, g_i) ]^T    (15)

m = (1/n) [ Σ_{i=1}^{n} k(g_1, g_i), ..., Σ_{i=1}^{n} k(g_n, g_i) ]^T    (16)

Then the following eigenvalue problem is obtained:

A = arg max_A |A^T K_b A| / |A^T K_w A| = [α_1, ..., α_{c−1}]    (17)

To deal with the singularity of the within-scatter matrix K_w that one often encounters in classification problems, we can use a regularization strategy, adding a multiple of the identity matrix to the within-scatter matrix, i.e., K_w = K_w + μI, where μ is a small number. This also makes the eigenvalue problem numerically more stable. Another viable approach to the singularity problem, using PCA, could also be adopted [3]. This method performs a linear dimensionality reduction removing the null space to obtain a matrix of full rank.
For a new testing sample z, its projection onto the optimal discriminant vector w in the feature space F is

w^T Φ(z) = Σ_{k=1}^{n} α_k Φ(g_k)^T Φ(z) = Σ_{k=1}^{n} α_k k(g_k, z) = α^T [k(g_1, z), ..., k(g_n, z)]^T    (18)
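Putting the kernel scatter matrices, the regularized eigenproblem and the projection of Eq. (18) together, a minimal NumPy sketch might look as follows (the RBF kernel choice and the function names are our assumptions, not the authors' code):

```python
import numpy as np

def kfda(G, labels, gamma=1.0, mu=1e-3):
    """Kernel Fisher discriminant on low-dimensional feature vectors g_i.

    Builds the kernel between/within scatter matrices K_b, K_w of
    Eqs. (13)-(14), regularises K_w with mu*I, and solves the generalised
    eigenproblem of Eq. (17).  Returns (A, project) where project(z)
    implements Eq. (18): A^T [k(g_1, z), ..., k(g_n, z)]^T.
    """
    n = G.shape[0]
    classes = np.unique(labels)
    # RBF (Gaussian) kernel between two sets of row vectors.
    k = lambda A, B: np.exp(-gamma * np.square(A[:, None, :] - B[None, :, :]).sum(-1))
    K = k(G, G)                          # column j is the vector xi_j of Eq. (12)
    m = K.mean(axis=1)                   # total kernel mean, Eq. (16)
    Kb = np.zeros((n, n))
    Kw = np.zeros((n, n))
    for c in classes:
        cols = np.where(labels == c)[0]
        mi = K[:, cols].mean(axis=1)     # class kernel mean, Eq. (15)
        Kb += len(cols) * np.outer(mi - m, mi - m)   # Eq. (13)
        D = K[:, cols] - mi[:, None]
        Kw += D @ D.T                                # Eq. (14)
    Kw += mu * np.eye(n)                 # regularisation K_w + mu*I
    vals, vecs = np.linalg.eig(np.linalg.solve(Kw, Kb))
    order = np.argsort(-vals.real)[:len(classes) - 1]
    A = vecs[:, order].real              # [alpha_1, ..., alpha_{c-1}]
    project = lambda z: A.T @ k(G, z[None, :])[:, 0]   # Eq. (18)
    return A, project
```

On well-separated classes the projected within-class scatter is small relative to the between-class separation, which is exactly the criterion of Eq. (17).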

3 Support Vector Machines

SVMs [6][7] perform pattern recognition for two-class problems by finding the decision surface which minimizes the structural risk of the classifier. This is equivalent to determining the separating hyperplane that has maximum distance to the closest samples of the training set. Considering the problem of separating a set of training samples belonging to two separable classes, (x_1, y_1), ..., (x_n, y_n), where x_i ∈ R^d, y_i ∈ {−1, +1}, an optimal hyperplane can be obtained by solving a constrained optimization problem, and the decision function can be written as:

f(x) = sgn( Σ_{i=1}^{n} α_i* y_i k(x_i, x) + b* )    (19)

where α_i* (i = 1, ..., n) and b* are the optimal solutions to the optimization problem. Commonly used kernels k(x, y) include polynomial kernels, radial basis function kernels, sigmoid kernels, etc.
The SVM approach was originally developed for binary classification problems, but in many practical applications a multi-class pattern recognition problem has to be solved. There are usually two strategies [8] to solve multi-class problems using binary SVM classifiers. The first is one-against-one: a binary classifier is calculated for each possible pair of classes, each trained on the subset of the training set containing only examples of the two involved classes. If N represents the total number of classes, all N(N−1)/2 classifiers are combined through a majority voting scheme to estimate the final classification. The other is one-against-rest: N different classifiers are constructed, one for each class, and the l-th classifier is trained on the whole training set to classify the members of class l against the rest. In the classification stage, the classifier with the maximal output defines the estimated class label of the current input vector. Since the former requires too many computations, we adopt the latter for our face recognition task.
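The one-against-rest strategy just described can be sketched generically; for brevity the binary SVM is replaced here by a regularized least-squares scorer (an assumption of this sketch, not the paper's classifier, and the function names are our own):

```python
import numpy as np

def train_one_vs_rest(X, y, fit_binary):
    """One-against-rest: train one binary scorer per class (class c vs.
    everything else) and classify by the maximal output."""
    classes = np.unique(y)
    scorers = {c: fit_binary(X, np.where(y == c, 1.0, -1.0)) for c in classes}
    def predict(x):
        return max(classes, key=lambda c: scorers[c](x))
    return predict

def ridge_scorer(X, t, lam=1e-3):
    """Stand-in for a binary SVM: regularised least squares on +/-1 targets."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ t)
    return lambda x: float(np.append(x, 1.0) @ w)
```

Any binary trainer with the same (X, targets) interface, including a kernel SVM, can be plugged in as fit_binary.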

4 Experimental Results
We test both the original and the improved LLE method against the Eigenface and Fisherface methods using face database B of the BVC2005 [9]. This face database contains 2000 face images corresponding to 100 distinct subjects. The images were acquired under varying illumination conditions, different facial expressions, diverse backgrounds and certain pose changes. Each subject has twenty images of size 640×480 in RGB color. Since the images have large backgrounds, to reduce their adverse effect the original images have been cropped to the facial contours at size 100×100. The cropped images of a few subjects in the database are shown in Figure 1.

Fig. 1. Face images from database B of the BVC2005

In this experiment, a pre-processing stage is first applied to normalize all of the images. Illumination compensation was performed using both conventional histogram equalization and the phase-only image obtained via the FFT [10]. The samples in the database are randomly divided into two disjoint sets: ten images per subject for training and the remaining ten for testing. All images are projected into a reduced space and recognition is performed with an SVM classifier. To reduce the fluctuation of results caused by the randomness of the data split, the experiment was performed 10 times and the results averaged. For all methods, samples are projected into a subspace spanned by the c−1 largest eigenvectors. The experimental results are shown in Table 1, from which we can see that the performance of our approach is better than that of the other methods, as the kernel method in KFDA-LLE effectively extracts nonlinear features for classification.

Table 1. The experimental results on test database using the different methods

Method Parameter Reduced Space Face Recognition Rate


Eigenface+SVM NA 32 80.45%
Fisherface+SVM NA 37 85.02%
LLE+SVM K=10 42 82.67%
KFDA-LLE+SVM K=10 35 91.25%

5 Conclusion
In this paper, an improved version of LLE, namely KFDA-LLE, is proposed for face recognition. KFDA-LLE is capable of identifying the underlying structure of high-dimensional data and discovering the embedding space of same-class data nonlinearly. KFDA is then used to find an optimal projection direction for classification. In the face recognition experiments, KFDA-LLE serves as a feature extraction process and is compared with LLE and two other well-established subspace methods, all combined with an SVM classifier. Experimental results on the face database show that KFDA-LLE outperforms LLE and is highly competitive with the two baseline methods for face recognition.

References
1. Wang, J., Zhang, C.S., Kou, Z.B.: An Analytical Mapping for LLE and Its Application in Multi-pose Face Synthesis. In: The 14th British Machine Vision Conference (2003)
2. Turk, M.A., Pentland, A.P.: Face Recognition Using Eigenfaces. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 586–591 (1991)
3. Bartlett, M.S., Movellan, J.R., Sejnowski, T.J.: Face Recognition by ICA. IEEE Transactions on Neural Networks 13(6), 1450–1463 (2002)
4. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7), 711–720 (1997)
5. Roweis, S.T., Saul, L.K.: Nonlinear Dimensionality Reduction by LLE. Science 290, 2323–2326 (2000)
6. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (1995)
7. Li, Y.F., Ou, Z.Y., Wang, G.Q.: Face Recognition Using Gabor Features and Support Vector Machines. In: Zhou, T.H., Bloom, T., Schaffert, J.C., Gairing, M., Atkinson, R., Moss, E., Scheifler, R. (eds.) CLU. LNCS, vol. 114, pp. 114–117. Springer, Heidelberg (1981)
8. Schwenker, F.: Hierarchical Support Vector Machines for Multi-Class Pattern Recognition. In: Fourth International Conference on Knowledge-Based Intelligent Engineering Systems & Allied Technologies, vol. 2, pp. 561–565 (2000)
9. The First Chinese Biometrics Verification Competition, http://www.sinobiometrics.com
10. Kovesi, P.: Symmetry and Asymmetry from Local Phase. In: The 10th Australian Joint Conf. on A.I. (1997)
An Improved Double-Threshold Cooperative
Spectrum Sensing

DengYin Zhang and Hui Zhang

Key Lab of Broadband Wireless Communication and Sensor Network Technology,


(Nanjing University of Posts and Telecommunications),
Ministry of Education, Nanjing 210003, China
{Zhangdy,Y001090436}@njupt.edu.cn

Abstract. Cognitive radio is becoming a hot research topic in the wireless communication field, and spectrum sensing is one of its most critical technologies. In double-threshold cooperative spectrum sensing, the fusion center collects the local decisions and observational values of the secondary users and then makes the final decision on whether the primary user is present. When collecting the observational values, the traditional method does not consider that the reliabilities of different cognitive users differ, which has a certain impact on the detection performance. To address this shortfall, we introduce a reliability factor based on the signal-to-noise ratio and propose an improved double-threshold cooperative spectrum sensing method. Simulation results show that the proposed scheme significantly improves the spectrum sensing performance compared with the conventional method.

Keywords: cognitive radio, double threshold, cooperative spectrum sensing,


reliability factor.

1 Introduction
With the expansion of broadband multimedia services, radio spectrum resources are becoming scarcer and scarcer, and the contradiction between the supply and demand of spectrum is becoming more and more acute. The emergence of cognitive radio alleviates this contradiction. Spectrum sensing is one of the most critical technologies in cognitive radio: the detection of spectrum holes can improve the utilization of wireless spectrum resources. In a cognitive radio network (CRN), a cognitive user (Secondary User, SU) can only use a frequency band which is not used by the licensed primary user (Primary User, PU). In order to limit interference to the PU, the SU should exit the band immediately if the PU appears and switch to other unused spectrum to continue communicating. How the SU can detect the existence of the PU timely and reliably is a major challenge; therefore, spectrum sensing is very important in cognitive radio technology.
The existing spectrum sensing methods include single-user detection and collaborative detection [1]. Single-user detection mainly includes matched filter

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 588–594, 2011.
© Springer-Verlag Berlin Heidelberg 2011

detection, cyclostationary feature detection and energy detection. In practical applications of cognitive radio, the wireless environment is very complex and changeable; shadowing, hidden terminals, multipath, noise uncertainty and other factors negatively affect the detection performance of a single user. Cooperative spectrum sensing can reduce this impact and lower the requirements on cognitive equipment, and it achieves higher detection accuracy; therefore, it is currently a hot research topic. Data fusion algorithms in cooperative sensing mainly include hard-decision collaboration [2], soft-decision collaboration [3] and hybrid-decision collaboration.
In [4], a detection method using a double threshold in cooperative spectrum sensing was proposed to reduce the communication traffic. Part of the cognitive users send their energy values to the fusion center by soft decision, the rest send their results by hard decision, and the fusion center makes a final decision on whether the PU is absent. But this method does not consider that the channel conditions of different SUs differ and their detection reliabilities vary, which affects the detection performance. To solve this problem, an improved double-threshold cooperative detection scheme which improves the detection performance is proposed.

2 The Traditional Double-Threshold Cooperative Spectrum Sensing Method
In the following model there are two thresholds λ1 and λ2, and Yi denotes the collected energy value of the i-th SU. If the energy value exceeds λ2, the user reports H1, meaning that the PU is present. If the energy value is less than λ1, the decision H0 is made, meaning that there is no PU. If Yi lies between λ1 and λ2, the SU sends the collected energy value to the fusion center, which makes the further judgment [5].

Fig. 1. The double-threshold detection mode: decision H0 for Yi < λ1, the energy value Yi is reported for λ1 ≤ Yi ≤ λ2, and decision H1 for Yi > λ2

Assume for simplicity that each SU performs energy detection independently and has identical detection thresholds. If Yi satisfies λ1 < Yi < λ2, the i-th SU sends Yi to the fusion center; otherwise, it reports its local decision Li. We use Ri to denote the information that the fusion center receives from the i-th SU; it is given by:

Ri = { Li,  Yi < λ1 or Yi > λ2
     { Yi,  otherwise                                   (1)

and

Li = { 0 (decide H0),  Yi < λ1
     { 1 (decide H1),  Yi > λ2                          (2)

Without loss of generality, assume that the fusion center receives K local decisions and N−K energy detection values from the N cognitive users. The fusion center then makes an upper decision according to the N−K energy values:

L0 = { 0 (decide H0),  Σ_{i=1}^{N−K} Yi < λ
     { 1 (decide H1),  otherwise                        (3)

where λ is the energy detection threshold of the fusion center, set according to the desired false alarm probability. These N−K cognitive users could not distinguish between the presence and the absence of the PU, so the fusion center collects their observational values and makes the upper decision instead of local decisions by the users themselves [5].
The fused energy value collected by the fusion center follows the distribution given below [6]:

Y ~ { χ²_{2(N−K)u},        H0
    { χ²_{2(N−K)u}(2η),    H1                           (4)

where η = Σ_{i=1}^{N−K} γ_i represents the sum of the SNRs of the N−K cognitive users; a further decision can then be made according to energy detection [7].
The fusion center makes the final decision based on the OR rule [8]:

D = { 1 (decide H1),  if any local or upper decision equals 1
    { 0 (decide H0),  otherwise                         (5)

In double-threshold detection, some cognitive users cannot make their local decisions directly, so they send their detection values to the fusion center, which compares the fused energy value with the threshold for the further judgment. However, this method does not consider that different cognitive users have different detection reliabilities caused by their different geographical locations and environments, which affects the final detection performance.

3 An Improved Double-Threshold Cooperative Spectrum Sensing Method
Because the cognitive users are in different environments, the shadow fading they experience differs, which leads to different SNRs; hence the credibility of the information detected by each cognitive user cannot be treated as equal. To solve this problem, we propose a weighted judgment method using a reliability factor based on SNR values. The reliability factor was defined in [9], and in this paper we introduce it into double-threshold cooperative detection. Its basic idea [9] is to set the weight of each SU according to its received SNR: the higher the SNR, the bigger the weight, and the lower the SNR, the smaller the weight. The contribution of each SU to the overall decision is thus determined by its detection performance, which helps to fuse the detection values of the SUs better and improves the accuracy of cooperative spectrum sensing. First, each SU sends its measured SNR to the fusion center; the fusion center then calculates its weight factor according to the following formula and makes its decision based on both the weight factors and the detection values.

W_i = γ_i / Σ_{j=1}^{N−K} γ_j                           (6)

In traditional double-threshold collaborative sensing the channel conditions of the cognitive users differ, so their detection reliabilities also differ. In this paper the fusion center does not simply compute the sum of all detection values but forms a weighted sum according to the weight factors; whether the PU is present is then determined by comparing the weighted energy sum with the threshold, and the decision H0 or H1 is made. The mathematical description of the weighted coordination algorithm is:

Y_w = Σ_{i=1}^{N−K} W_i Y_i                             (7)

The energy value Y_w is the sum of the products of the detected energy values Y_i and the weight factors W_i, where Y_i denotes the collected energy value of the i-th SU, W_i its weight factor, and N−K the number of such cognitive users. Since each Y_i follows an independent chi-square distribution with 2TW degrees of freedom, Y_w likewise follows a chi-square-type distribution, given by [10]:

Y_w ~ { χ²_{2(N−K)u},         H0
      { χ²_{2(N−K)u}(2η_w),   H1                        (8)

And

Y_w ~ χ²_{2u} under H0,  Y_w ~ χ²_{2u}(2 Σ_{i=1}^{N−K} W_i γ_i) under H1 , (9)

where γ_i is the SNR of the i-th SU, u = TW, T is the spectrum detection time, W is
the bandwidth of the band-pass signal in energy detection, and W_1, W_2, ..., W_{N−K}
are the weighting factors of the cognitive users respectively.
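As a non-authoritative sketch of the weighted fusion described above, the following Python snippet runs a small Monte Carlo experiment: each SU draws its energy from the chi-square distributions of Eqs. (8)-(9), and the fusion center forms the SNR-weighted sum of Eq. (7) and compares it with a threshold. The function name, the SNR grid and the way the threshold is chosen (to give roughly a 10% false-alarm rate) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_energy_fusion(snr_lin, u=5, n_trials=20000):
    """SNR-weighted soft fusion of energy detectors (Monte Carlo sketch).

    Each SU reports Y_i ~ chi2(2u) under H0 and chi2(2u, 2*snr_i) under H1,
    as in Eqs. (8)-(9); the fusion center compares Y_w = sum(W_i * Y_i)
    with a threshold chosen here for roughly a 10% false-alarm rate.
    """
    snr_lin = np.asarray(snr_lin, dtype=float)
    w = snr_lin / snr_lin.sum()                    # reliability weights, Eq. (6)
    n = len(snr_lin)
    y0 = rng.chisquare(2 * u, size=(n_trials, n))  # energies under H0
    y1 = rng.noncentral_chisquare(2 * u, 2 * snr_lin, size=(n_trials, n))
    yw0, yw1 = y0 @ w, y1 @ w                      # weighted sums, Eq. (7)
    lam = np.quantile(yw0, 0.9)                    # fusion threshold
    return np.mean(yw1 > lam), np.mean(yw0 > lam)  # (Qd, Qf)

snrs_db = np.arange(-15, -5)                       # -15:1:-6 dB, as in Sect. 4
qd, qf = weighted_energy_fusion(10 ** (snrs_db / 10))
print(f"Qd = {qd:.3f} at Qf = {qf:.3f}")
```

At these low SNRs the detection gain over the false-alarm rate is modest, but the weighted sum consistently yields Qd > Qf, which is the qualitative behavior the simulation figures illustrate.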
592 D. Zhang and H. Zhang

To facilitate the analysis, we introduce two parameters Δ_{0,i} and Δ_{1,i} to
represent the probability of λ1 < Y_i < λ2 for the i-th SU under the hypotheses H0
and H1 respectively; then we have:

Δ_{0,i} = P(λ1 < Y_i < λ2 | H0) . (10)

Δ_{1,i} = P(λ1 < Y_i < λ2 | H1) . (11)

According to the calculation method of traditional double-threshold detection, we can
get the probabilities of detection, missing and false alarm for each SU:

P_{d,i} = P(Y_i > λ2 | H1) = Q_u(√(2γ_i), √λ2) . (12)

P_{m,i} = P(Y_i < λ1 | H1) = 1 − Δ_{1,i} − P_{d,i} . (13)

P_{f,i} = P(Y_i > λ2 | H0) = Γ(u, λ2/2) / Γ(u) . (14)

Using Q_d, Q_m and Q_f to denote the cooperative probabilities of detection, missing
and false alarm respectively, and comparing the weighted sum Y_w with the fusion
threshold λ, we have:

Q_d = P(Y_w > λ | H1) = Q_u(√(2 Σ_{i=1}^{N−K} W_i γ_i), √λ) . (15)

Q_m = 1 − Q_d . (16)

Q_f = P(Y_w > λ | H0) = Γ(u, λ/2) / Γ(u) . (17)

where Γ(·, ·) is the incomplete Gamma function and Q_u(·, ·) is the generalized
Marcum Q-function.

4 Simulation Results and Analysis

In the preceding sections we analyzed the performance of the improved double-threshold
spectrum sensing; we now verify it by MATLAB simulation. It can be clearly seen that
this method performs better.
Parameters are given as follows: AWGN channel, … = 0.1, the number of users N = 10,
u = 5, and the SNR of the i-th SU γ_i = −15:1:−6 dB. The simulation results are shown
in the following figures:
An Improved Double-Threshold Cooperative Spectrum Sensing 593

Fig. 2. Detection probability vs. false alarm probability.  Fig. 3. Missing probability vs. false alarm probability.

As can be seen in the figures, the improved double-threshold collaborative detection
performs better than the traditional method, with a higher detection probability and a
lower missing probability.
The reason the improved method improves the detection performance is that the
cognitive users influence the fusion center differently according to their reliability
factors. Considering this, we introduce the reliability factors when the cognitive users
send their detection values to the fusion center. The simulation results show that this
improved method can significantly improve the cooperative spectrum sensing
capability of the cognitive radio network.

5 Conclusions
In this paper, we have proposed an improved double-threshold collaborative detection
method. In the traditional method, the cognitive users that send their detection values
to the fusion center have different channel conditions, which makes their detection
performance vary; the improved method fully accounts for this situation, which the
traditional method does not. The simulation results clearly show that the performance
of the improved method is significantly better.

Acknowledgments. The work was partially supported by Swedish Research Links
[No. 348-2008-6212], the National Natural Science Foundation of China [61071093],
China's Project 863 [2010AA701202], and SRF for ROCS, SEM [NJ209002].

References
1. Akyildiz, I.F., Lee, W.-Y., Vuran, M.C., et al.: Next generation/dynamic spectrum
access/cognitive radio wireless networks: A survey. Computer Networks 50(13), 2127–2159
(2006)
2. Zhou, X., Ma, J., Li, G.Y., Kwon, Y.H., Soong, A.C.K.: Probability based combination for
cooperative spectrum sensing. IEEE Trans. Commun. 58(2), 463–466 (2010)

3. Meng, J., Yin, W., Li, H., Hossain, E., Han, Z.: Collaborative spectrum sensing from
sparse observations using matrix completion for cognitive radio networks. In: Proc. IEEE
2010 International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
(March 2010)
4. Sun, C.-h., Zhang, W., Letaief, K.B.: Cooperative spectrum sensing for cognitive radios
under bandwidth constraints. In: Proc. IEEE WCNC (2007)
5. Zhu, J.: Double Threshold Energy Detection of Cooperative Spectrum Sensing in
Cognitive Radio. In: 3rd International Conference on Cognitive Radio Oriented Wireless
Networks and Communications, CrownCom 2008, May 15-17 (2008)
6. Digham, F.F., Alouini, M.S., Simon, M.K.: On the energy detection of unknown signals
over fading channels. IEEE Transactions on Communications 55(1), 21–24 (2007)
7. Urkowitz, H.: Energy Detection of Unknown Deterministic Signals. Proceedings of the
IEEE 55(4), 523–531 (1967)
8. Varshney, P.K.: Distributed Detection and Data Fusion. Springer, New York (1997)
9. Visser, F.E., Janssen, G.J.M., Pawelczak, P.: Multinode Spectrum Sensing Based on
Energy Detection for Dynamic Spectrum Access. In: Vehicular Technology Conference,
pp. 1394–1398 (May 2008)
10. Bin Shahid, M.I., Kamruzzaman, J.: Weighted Soft Decision for Cooperative Sensing in
Cognitive Radio Networks. In: 16th IEEE International Conference on Networks, pp. 1–6
(2008)
Handwritten Digit Recognition Based on Principal
Component Analysis and Support Vector Machines

Rui Li and Shiqing Zhang

School of Physics and Electronic Engineering, Taizhou University


318000 Taizhou, China
{lirui,zhangshiqing}@tzc.edu.cn

Abstract. Handwritten digit recognition has always been a challenging task in
the pattern recognition area. In this paper we explore the performance of support
vector machines (SVM) and principal component analysis (PCA) on handwritten
digit recognition. The performance of SVM on this task is compared with three
typical classification methods, i.e., linear discriminant classifiers (LDC), the
nearest neighbor (1-NN), and the back-propagation neural network (BPNN). The
experimental results on the popular MNIST database indicate that SVM achieves
the best performance, with an accuracy of 89.7% on 10-dimensional embedded
features, outperforming the other methods.

Keywords: Handwritten digits recognition, Principal component analysis,


Support vector machines.

1 Introduction
Handwritten digit recognition is an active topic in pattern recognition area due to its
important applications to optical character recognition, postal mail sorting, bank
check processing, form data entry, and so on.
The performance of character recognition largely depends on the feature extraction
approach and the classifier learning scheme. For feature extraction of character
recognition, various approaches, such as stroke direction feature, the statistical
features and the local structural features, have been presented [1, 2]. Following
feature extraction, it is usually necessary to reduce the dimensionality of the features,
since the original features are high-dimensional. Principal component analysis [3] is a
fundamental multivariate data analysis method, widely used for reducing the
dimensionality of an existing data set and extracting important information. The task
of classification is to partition the feature space into regions corresponding to source
classes or assign class confidences to each location in the feature space. At present,
the representative statistical learning techniques [4] including linear discriminant
classifiers (LDC) and the nearest neighbor (1-NN), and neural network [5], have been
widely used for handwritten digit recognition. Support vector machines (SVM) [6]
became a popular classification tool due to its strong generalization capability, which
was successfully employed in various real-world applications. In the present study we
employ PCA to extract the low-dimensional embedded data representations and
explore the performance of SVM for handwritten digit recognition.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 595–599, 2011.
© Springer-Verlag Berlin Heidelberg 2011

2 Principal Component Analysis


Principal Component Analysis (PCA) [3] is a basis transformation to diagonalize an
estimate of the covariance matrix of the data set. PCA can be applied to represent the
input digit images by projecting them onto a low-dimensional space constituted by a
small number of basis images derived by finding the most significant eigenvectors of
the covariance matrix.
In order to find a linear mapping M that maximizes the objective function
trace(M^T cov(X) M), PCA solves the following eigenproblem:

cov(X) M = λ M (1)

where cov(X) is the sample covariance matrix of the data X. The d principal
eigenvectors of the covariance matrix form the linear mapping M, and the low-
dimensional data representations are computed by Y = XM.
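The PCA mapping of Eq. (1) can be sketched in a few lines of NumPy. The random matrix below merely stands in for the digit image features, and the function name `pca_embed` is our own choice rather than anything from the paper.

```python
import numpy as np

def pca_embed(X, d):
    """Solve cov(X) M = M diag(lambda), Eq. (1), and return Y = X_centered M."""
    Xc = X - X.mean(axis=0)            # center the data
    C = np.cov(Xc, rowvar=False)       # sample covariance matrix cov(X)
    vals, vecs = np.linalg.eigh(C)     # eigendecomposition (symmetric matrix)
    order = np.argsort(vals)[::-1]     # eigenvalues in descending order
    M = vecs[:, order[:d]]             # the d principal eigenvectors
    return Xc @ M                      # low-dimensional representation Y = XM

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))         # random stand-in for image features
Y = pca_embed(X, 10)
print(Y.shape)                         # (200, 10)
```

By construction the first embedded coordinate carries at least as much variance as the second, which is the "most significant eigenvectors" property the section describes.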

3 Support Vector Machines


Support vector machines (SVM) [6] are based on the statistical learning theory of
structural risk minimization and quadratic programming optimization. The main idea
is to transform the input vectors to a higher-dimensional space by a nonlinear
transform, in which an optimal hyperplane separating the data can be found.
Given the training data set (x_1, y_1), ..., (x_l, y_l), y_i ∈ {−1, 1}, a nonlinear
transform z = φ(x) is used to make the training data linearly separable. A weight w
and offset b satisfying the following criteria are to be found:

w^T z_i + b ≥ 1,  y_i = 1
w^T z_i + b ≤ −1, y_i = −1 (2)

The above procedure can be summarized as:

min_{w,b} Φ(w) = (1/2) w^T w (3)
subject to y_i (w^T z_i + b) ≥ 1, i = 1, 2, ..., n
If the sample data are not linearly separable, the following function should be
minimized:

Φ(w) = (1/2) w^T w + C Σ_{i=1}^{l} ξ_i (4)

where ξ_i can be understood as the classification error and C is the penalty
parameter for this term.
l
By using Lagrange method, the decision function of w0 = i yi zi will be
i =1
l
f = sgn[ i yi ( z T zi ) + b] (5)
i =0

From functional theory, a non-negative symmetric function K(u, v) uniquely defines
a Hilbert space H, where K is the reproducing kernel of the space H:

K(u, v) = Σ_i φ_i(u) φ_i(v) (6)

This stands for an inner product of a feature space:

z_i^T z = φ(x_i)^T φ(x) = K(x_i, x) (7)

Then the decision function can be written as:

f = sgn[ Σ_{i=1}^{l} α_i y_i K(x_i, x) + b ] (8)

The development of an SVM classification model depends on the selection of the
kernel function. Several kernel functions, such as the linear, polynomial, radial basis
function (RBF) and sigmoid kernels, can be used in SVM models.
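As an illustration of the decision function of Eq. (8), the snippet below evaluates f(x) = sgn(Σ α_i y_i K(x_i, x) + b) with an RBF kernel on hand-picked toy support vectors. The α_i and b here are chosen by hand for illustration, not obtained from the quadratic program of Eqs. (3)-(4).

```python
import numpy as np

def rbf_kernel(U, V, gamma=0.5):
    """K(u, v) = exp(-gamma * ||u - v||^2): an RBF kernel, cf. Eqs. (6)-(7)."""
    d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def svm_decision(X, sv, alpha, y, b, gamma=0.5):
    """f(x) = sgn(sum_i alpha_i y_i K(x_i, x) + b), Eq. (8)."""
    K = rbf_kernel(sv, X, gamma)   # kernel between support vectors and queries
    return np.sign((alpha * y) @ K + b)

# toy support vectors; alpha and b are picked by hand for illustration
sv = np.array([[0.0, 0.0], [1.0, 1.0]])
alpha = np.array([1.0, 1.0])
y = np.array([-1.0, 1.0])
queries = np.array([[0.9, 0.9], [0.1, 0.0]])
print(svm_decision(queries, sv, alpha, y, b=0.0))   # class of each query
```

A query near the positive support vector gets label +1 and one near the negative support vector gets −1, mirroring how a trained SVM classifies through its kernel expansion.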

4 Experiment Study

4.1 MNIST Database

The popular MNIST database of handwritten digits, which has been widely used for
the evaluation of classification and machine learning algorithms, is used in our
experiments. The MNIST database, available from the web site
http://yann.lecun.com/exdb/mnist, has a training set of 60000 examples and a test set
of 10000 examples. It is a subset of a larger set available from NIST. The original
black-and-white images from NIST were size-normalized to fit in a 20×20 pixel box
while preserving their aspect ratio. The images were centered in a 28×28 image by
computing the center of mass of the pixels and translating the image so as to position
this point at the center of the 28×28 field. In our experiments, for computational
simplicity we randomly selected 3000 training samples and 1000 testing samples for
handwritten digit recognition. Some samples from the MNIST database are shown in
Fig. 1.

4.2 Experimental Results and Analysis

To verify the performance of SVM on the handwritten digit recognition task, three
typical methods, i.e., linear discriminant classifiers (LDC), the nearest neighbor
(1-NN) and the back-propagation neural network (BPNN) as a representative neural
network, were compared with SVM. For the BPNN method, the number of hidden
layer nodes is 30. We employed the LIBSVM package, available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm, to implement the SVM algorithm with an
RBF kernel, kernel parameter optimization, and the one-versus-one strategy for the
multi-class classification problem. The RBF kernel was used for its better performance
compared with other kernels. For simplicity, the feature dimension of the original grey
image features (28×28 = 784) is reduced to 10 as an illustration for evaluating the
performance of SVM.
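The experimental pipeline (reduce to 10 dimensions with PCA, then classify) can be sketched as follows. Since downloading MNIST is outside the scope of this sketch, two synthetic Gaussian classes in 784 dimensions stand in for the digit images, and 1-NN stands in for the compared classifiers; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# two synthetic Gaussian classes in 784-D stand in for 28x28 digit images
n_per, dim = 150, 784
means = rng.normal(size=(2, dim))
X = np.vstack([m + 0.8 * rng.normal(size=(n_per, dim)) for m in means])
labels = np.repeat([0, 1], n_per)
perm = rng.permutation(2 * n_per)
X, labels = X[perm], labels[perm]
Xtr, ytr, Xte, yte = X[:200], labels[:200], X[200:], labels[200:]

# PCA to 10 dimensions, fitted on the training split only
mu = Xtr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
P = Vt[:10].T                                   # top-10 principal directions
Ztr, Zte = (Xtr - mu) @ P, (Xte - mu) @ P

# 1-NN classification in the 10-D embedded space
d2 = ((Zte[:, None, :] - Ztr[None, :, :]) ** 2).sum(-1)
pred = ytr[d2.argmin(axis=1)]
acc = (pred == yte).mean()
print(f"1-NN accuracy on embedded features: {acc:.2f}")
```

Because the class-mean direction dominates the variance, the 10-D embedding preserves the separation and the classifier scores near 100% on this easy synthetic stand-in; real MNIST digits overlap far more, which is why the paper's accuracies sit near 80-90%.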

Fig. 1. Some samples from the MNIST database

Table 1. Handwritten Digits Recognition Results with 10-Dimensional Embedded Features

Methods LDC 1-NN BPNN SVM

Accuracy (%) 77.6 82.7 84.8 89.7

Table 2. Confusion matrix of Handwritten Digits Recognition Results with SVM

Digits 0 1 2 3 4 5 6 7 8 9
0 90 0 0 0 0 5 0 0 1 0
1 0 110 1 0 0 0 0 0 4 0
2 0 0 84 1 2 0 1 1 0 0
3 0 0 5 108 0 3 0 0 7 0
4 0 0 3 0 69 1 1 0 1 12
5 2 0 3 2 2 88 0 0 1 1
6 2 0 0 0 1 0 84 0 1 0
7 0 2 1 0 1 0 0 101 0 6
8 2 0 2 2 0 4 1 0 75 3
9 0 2 0 0 6 1 0 2 4 88

Table 1 presents the recognition results of the four classification methods, including
LDC, 1-NN, BPNN and SVM. From the results in Table 1, we can observe that SVM
performs best and achieves the highest accuracy of 89.7% with 10-dimensional
embedded features, followed by BPNN, 1-NN and LDC. This demonstrates that SVM
has the best generalization ability among the four classification methods used. The
recognition accuracies for BPNN, 1-NN and LDC are 84.8%, 82.7% and 77.6%,
respectively.
To further explore the recognition results for the different handwritten digits with
SVM, the confusion matrix of the recognition results is presented in Table 2. As
shown in Table 2, three digits, i.e., 1, 3 and 7, are discriminated well, while the other
digits are classified less accurately.

5 Conclusions
In this paper, we performed dimensionality reduction with PCA on the grey digit
image features and explored the performance of four different classification methods,
i.e., LDC, 1-NN, BPNN and SVM, for handwritten digit recognition on the popular
MNIST database. The experimental results demonstrate that SVM achieves the best
performance, with an accuracy of 89.7% on 10-dimensional reduced features, due to
its good generalization ability. In our future work, it will be an interesting task to
study the performance of other, more advanced dimensionality reduction techniques
than PCA on handwritten digit recognition.

Acknowledgments. This work is supported by Zhejiang Provincial Natural Science


Foundation of China (Grant No. Y1111058).

References
1. Trier, O.D., Jain, A.K., Taxt, T.: Feature extraction methods for character recognition - a
survey. Pattern Recognition 29(4), 641–662 (1996)
2. Lauer, F., Suen, C.Y., Bloch, G.: A trainable feature extractor for handwritten digit
recognition. Pattern Recognition 40(6), 1816–1824 (2007)
3. Partridge, M., Calvo, R.: Fast dimensionality reduction and simple PCA. Intelligent Data
Analysis 2(3), 292–298 (1998)
4. Jain, A.K., Duin, R.P.W., Mao, J.: Statistical pattern recognition: a review. IEEE
Transactions on Pattern Analysis and Machine Intelligence 22(1), 4–37 (2000)
5. Kang, M., Palmer-Brown, D.: A modal learning adaptive function neural network applied
to handwritten digit recognition. Information Sciences 178(20), 3802–3812 (2008)
6. Vapnik, V.: The nature of statistical learning theory. Springer, New York (2000)
Research on System Stability with Extended Small
Gain Theory Based on Transfer Function

Yuqiang Jin* and Qiang Ma

Department of Training, Naval Aeronautical and Astronautical University,


264001 Yantai, China
{naau301yqj,maqiang1024}@126.com

Abstract. Considering the situation in which the controlled object is described
by a linear transfer function, an extended small gain theory is proposed and
applied to the analysis of system stability. In particular, a comparison between
two stable systems is studied, which is useful for the controller design of linear
systems. It is worth pointing out that this method can also be applied to some
general nonlinear systems after a simple transformation, so it is still an
important improvement of the small gain theory although only the linear
transfer function situation is studied here.

Keywords: Small gain theory, Transfer function, Stability, Control.

1 Introduction

Robust control has been researched by many scholars in recent years because system
designers must consider not only stability but also robustness. Many control methods
are integrated with robust control to handle system uncertainties, such as adaptive
control, neural network control and sliding mode control [1-5].
Small gain theory is one of the most important theories on system robustness in the
control field. It is very revealing about the essence of the robustness and stability of
control systems [4-7], and it can guide the design of controllers well. In particular, it
has mature and systematic conclusions for linear systems, and it can be extended to
nonlinear systems in some special situations.
In this paper, a comparison between two stable systems is studied for linear
controlled objects, and an extended small gain theory is proposed that can be
extended to a large family of complex systems.

2 Main Conclusion

The main conclusion of this paper can be stated as the following Theorem 1.

* Corresponding author.


Theorem 1: If a controlled object described by the transfer function G(s) = B(s)/A(s)
is stable with a controller described by the transfer function H(s) = D(s)/C(s), then
the controlled object G1(s) = G(s)G2(s) is stable with the controller
H1(s) = D(s)/(k0 C(s)), where G2(s) = k0 / ((k1 s + 1)(k2 s + 1)).

3 Preliminary Knowledge

To keep things simple, we consider a general kind of linear system with negative
feedback, whose closed-loop transfer function can be written as

C(s) = G(s) / (1 + G(s)H(s)) . (1)

Assumption 1: The controlled object is stable, i.e., A(s) has no unstable roots.
Assumption 2: The controller is realizable, i.e., C(s) has no unstable roots.
4 Proof of the Theorem


In this situation, the transfer function of the closed-loop system can be written as

C1(s) = G1(s) / (1 + G1(s)H1(s)) . (2)

By analogy with the original system, assume that the new system is unstable; then
C1(s) has an unstable pole.
602 Y. Jin and Q. Ma

Where

1 + G1(s)H1(s) = 1 + G(s)G2(s) D(s)/(k0 C(s))
= 1 + G(s)D(s) / ((k1 s + 1)(k2 s + 1) C(s))
= 1 + B(s)D(s) / ((k1 s + 1)(k2 s + 1) A(s)C(s)) . (3)
= [B(s)D(s) + (k1 s + 1)(k2 s + 1) A(s)C(s)] / ((k1 s + 1)(k2 s + 1) A(s)C(s))

Then the equation B(s)D(s) + (k1 s + 1)(k2 s + 1)A(s)C(s) = 0 has a root
s = a > 0.
Because the original system is stable, the equation
B(s)D(s) + A(s)C(s) = 0 has no unstable roots. Define

B(s)D(s) + A(s)C(s) = F(s) , (4)

and

B(s)D(s) + (k1 s + 1)(k2 s + 1)A(s)C(s) = J(s) , (5)

where A(s)C(s) has no unstable roots; then, without loss of generality, we assume
that A(0)C(0) > 0.
If F(0) > 0 and F(s) > F(0) > 0 for s > 0, then obviously
J(0) = F(0) > 0 and, for s > 0, J(s) − F(s) = [(k1 s + 1)(k2 s + 1) − 1] A(s)C(s) > 0.
If F(0) < 0, we can write F(s) as

F(s) = k(τ1 s + 1)(τ2 s + 1) + (τ3 s + 1)(τ4 s + 1)(τ5 s + 1) . (6)

Because k < 0 and the order of the denominator polynomial is higher than the order
of the numerator polynomial, we have

F(+∞) > 0 . (7)

Together with F(0) < 0, this means that F(s) has a positive real root, which
contradicts the stability of the original system. Hence F(0) > 0 must hold, and so the
new system is stable.
If we consider the more general situation G2(s) = k0(k3 s + 1) / ((k1 s + 1)(k2 s + 1))
with the controller H1(s) = D(s)/(k0 C(s)), the denominator polynomial of the
closed-loop system can be written as

(k3 s + 1)B(s)D(s) + (k1 s + 1)(k2 s + 1)A(s)C(s) = J(s) . (8)

And the denominator polynomial of the original system can be written as

F(s) = k(τ1 s + 1)(τ2 s + 1) + (τ3 s + 1)(τ4 s + 1)(τ5 s + 1)
= B(s)D(s) + A(s)C(s) . (9)

Because the original system is stable, we have k > 0, F(0) > 0 and, for any s > 0,
F(s) > 0, A(s)C(s) > 0, B(s)D(s) > 0, A(0)C(0) > 0, B(0)D(0) > 0.
Then J(0) = F(0) > 0 and
J(s) − F(s) = [(k3 s + 1) − 1]B(s)D(s) + [(k1 s + 1)(k2 s + 1) − 1]A(s)C(s) .
For s > 0, [(k1 s + 1)(k2 s + 1) − 1]A(s)C(s) > 0, and the term
[(k3 s + 1) − 1]B(s)D(s) is positive provided that both the controlled object and the
controller have no unstable roots; then J(s) − F(s) > 0, so the system is stable.

If the system is a non-minimum phase system, the small gain theorem is needed to
cope with it. Assume a transfer function defined as

G2(s) = k0 (k3 s + 1) / ((k1 s + 1)(k2 s + 1)) . (10)

Because it is a non-minimum phase system, we compute its infinity norm as

||G2(s)||∞ = γ3 > k0 . (11)

Assume the infinity norm of the original controlled object is

||G(s)||∞ = γ1 . (12)
604 Y. Jin and Q. Ma

For controller, solve its infinite norm as

H (s)
= 2 (13)
.

According to small gain theorem, we have

G( s)
H ( s)
= 1 2 < 1 (14)
.

If the new system stable, then it should satisfy the following formula

G ( s )G2 ( s )
H1 ( s )
<1 (15)
,

and

G( s)G2 ( s)
H1 (s)
= G(s)G2 ( s)
H1 ( s)
< G( s) G2 ( s) H1 ( s) . (16)
= G(s) 3 H1 (s)
<1

So we can design

k
H1 ( s ) = H ( s) (17)
3 ,
Where k < 1 .
Then we have
k
H1 ( s ) = H (s) (18)

3
.

Then it means that

G ( s )G2 ( s )
H1 ( s )
< G ( s ) 3 H1 ( s )

k
= G( s) 3 H ( s) (19)
3

= k G (s)
H ( s)
<1
.

Obviously, the system is stable now. This means that the stability of the new system
is guaranteed as long as the gain of the original controller is scaled appropriately.
For some simple situations such as G2(s) = k0(k3 s + 1)/((k1 s + 1)(k2 s + 1)), it is
easy to compute the infinity norm directly. For example, the infinity norm of
G2(s) = k0/((k1 s + 1)(k2 s + 1)) is k0, which is the simple situation of this theorem.

5 Conclusion

Because H1(s) need not be linear for the small gain theorem, we can design a
nonlinear control law for a simple system and then multiply it by the corresponding
gain, obtaining a stable control law for a complex system. Likewise, G2(s) need not
be linear for the small gain theorem, so the theorem can be applied in some nonlinear
situations.
In summary, the small gain theorem has been extended for the situation in which the
controlled object is a linear transfer function, and it can be applied to a large family
of complex systems through the comparison method. It has also been pointed out that
the comparison method can be applied in some nonlinear situations.

References
1. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive
synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
2. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new
hyperchaotic system with input uncertainties and unknown parameters. Commun.
Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
3. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class
of Nonlinear System with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185,
989–1002 (2007)
4. Hu, M., Xu, Z., Zhang, R.: Parameters identification and adaptive full state hybrid
projective synchronization of chaotic (hyper-chaotic) systems. Phys. Lett. A 361, 231–237
(2007)
5. Gao, T., Chen, Z., Yuan, Z.: Adaptive synchronization of a new hyperchaotic system with
uncertain parameters. Chaos Solitons Fractals 33, 922–928 (2007)
6. Elabbasy, E.M., Agiza, H.N., El-Dessoky, M.M.: Adaptive synchronization of a
hyperchaotic system with uncertain parameter. Chaos Solitons Fractals 30, 1133–1142
(2006)
7. Tang, F., Wang, L.: An adaptive active control for the modified Chua's circuit. Phys. Lett.
A 346, 342–346 (2005)
Research on the Chattering Problem with VSC of
Supersonic Missiles Based on Intelligent Materials

Junwei Lei*, Jianhong Shi, Guorong Zhao, and Guoqiang Liang

Department of Control Engineering,


Naval Aeronautical and Astronautical University,
264001 Yantai, China
{leijunwei,zhaoguorong302,liangguoqiang1024}@126.com,
zhld2002@163.com

Abstract. Considering a simplified pitch-channel model of a missile control
system, a variable structure control law (VSCL) is designed using the overload
and angular velocity signals. In particular, an analysis is given for the
equivalent control in the presence of uncertainties. An integral action is
introduced to compensate for the uncertainties, and a soft function is adopted to
solve the chattering problem of VSC. Finally, the chattering problem and the
roles of the integral action and the soft function are thoroughly analyzed. The
conclusion shows that the dead zone of the soft function is an important factor
affecting the chattering phenomenon, and the relationship between the
chattering problem and the system gain is revealed.

Keywords: Chattering, VSC, Missile, Uncertain, Stability.

1 Introduction
Because the sliding mode of a VSC system is insensitive to outer interference and
system uncertainties, the VSC method is an ideal robust control method. Many
control methods are integrated with robust control to handle system uncertainties,
such as adaptive control, neural network control and sliding mode control [1-5].
With the development of computer technology and the automation industry, the
realization of control algorithms requires computers, so discrete control algorithms
are widely used. However, a discrete control law cannot produce an ideal sliding
mode; it can only produce a quasi-sliding mode, and because of this, the chattering
and accuracy problems are more prominent in the discrete control field [6-9].
There are many factors causing the above problems. In this paper, a simplified
pitch-channel model of a missile control system is studied based on sliding mode
control with overload and angular velocity signals, and the essential reason for the
chattering problem is studied. The integral adaptive strategy and the soft function are
also analyzed and adopted to reduce chattering. As a side note, chattering caused by
the discrete computer algorithm or by the choice of simulation step is neglected in
this paper.
* Corresponding author.


2 Model Description
Taking the pitch-channel model of an anti-ship missile as an example, the dynamics
of the missile pitch channel can be written as follows:

dα/dt = ω_z − (1/(mv))(P sin α + Y − mg cos θ)
dω_z/dt = M_z / J_z = m_z q S L / J_z (1)
n_y = (P sin α + Y) / (mg)

The definitions of the symbols can be found in ref. [6].


A theorem in [6] shows that any linear feedback control law that stabilizes the linear
approximation of a nonlinear system also stabilizes the original nonlinear system,
provided that the linear approximate system is asymptotically stable. So we can study
the linear approximate system and find a control law that stabilizes the original
nonlinear system. The linear approximate system can be described as follows:

dα/dt = ω_z − a34 α − a35 δ_z
dω_z/dt = a24 α + a22 ω_z + a25 δ_z (2)
n_y = (v/g) a34 α + (v/g) a35 δ_z

3 Control Law Design

We define a new variable e = n_y − n_y^d and, considering that it is difficult to
measure the attack angle, we choose the sliding mode as follows:

S = c1 e + c2 ∫ e dt + c3 ω_z . (3)

The derivative of the sliding mode is

dS/dt = c1 de/dt + c2 e + c3 dω_z/dt
= l1 ω_z + l2 α + l3 δ_z + l4 dδ_z/dt + l5 + l6 Δ1(α, t) + l7 Δ2(α, t) , (4)

where the gains l1, l2 and l3 are constants determined by c1, c2, c3, the aerodynamic
coefficients a22, a24, a25, a34, a35 and the ratio v/g (Eqs. (5)-(6)), l6 and l7 weight
the uncertainties Δ1 and Δ2, and

l4 = c1 (v/g) a35 ,  l5 = −c1 dn_y^d/dt − c2 n_y^d . (7)

Because it is impossible to exactly remove the equivalent control term caused by the
system uncertainties, we use an integral action to remove it and compensate for the
uncertainties. As a side note, if we instead increase the gain of the system to eliminate
the equivalent control term, the chattering will also increase.
So we consider using an integrator to approximate the equivalent control term. For
the standard system, we have

ds/dt = l1 ω_z + l2 n_y + v + l5 . (8)

If we design the control law as v = −l1 ω_z − l2 n_y − l5 − k1 sgn(s) − k2 s, the
system is stable. Alternatively, if we design the control law as

v = −k1 sgn(s) − k2 s − kd sgn(s),  kd = max(|l1 ω_z + l2 n_y + l5|) , (9)

then s ds/dt ≤ 0; the system is also stable and s → 0.

4 System Analysis
We consider using an integrator to approximate the equivalent control term such that
s → 0. We define a new variable e_k = kd − k̂d, where k̂d is an estimate of kd, and
design

v = −k1 sgn(s) − k2 s − k̂d sgn(s) . (10)

It holds that

s ds/dt ≤ −k1 |s| − k2 s^2 − k̂d |s| + kd |s| = −k1 |s| − k2 s^2 + e_k |s| . (11)

Indeed,

s ds/dt = s(l1 ω_z + l2 n_y + l5 − k1 sgn(s) − k2 s − k̂d sgn(s))
= −k1 |s| − k2 s^2 − k̂d |s| + (l1 ω_z + l2 n_y + l5) s
≤ −k1 |s| − k2 s^2 − k̂d |s| + kd |s| (12)
= −k1 |s| − k2 s^2 + e_k |s| .

We take kd = max(|l1 ω_z + l2 n_y + l5|), which is bounded. Designing the adaptive
law dk̂d/dt = |s| gives de_k/dt = −|s| and e_k de_k/dt = −e_k |s|, which cancels the
term e_k |s| in the Lyapunov analysis; hence, with dk̂d/dt = |s|, the system is
guaranteed to be stable. Now we consider how to reduce the chattering of the system.
We keep an estimate of the nearly constant equivalent control term in the control
law as

v = −k1 sgn(s) − k2 s − k̂d sgn(s) + sat_keq(k1d) , (13)

where k1d = ∫ s dt is used to estimate the equivalent control term. Then we have

s ds/dt = s(l1 ω_z + l2 n_y + l5 − k1 sgn(s) − k2 s − k̂d sgn(s) + sat_keq(k1d))
≤ −k1 |s| − k2 s^2 + e_k |s| + |sat_keq(k1d)| |s| . (14)

If we choose k1 > keq, the equivalent control term can be estimated properly, and
because this estimate is introduced, the chattering of the system will be reduced.

Undoubtedly, chattering exists as long as the sign function is present, so we use a
continuous function instead of the sign function.
For the standard system ds/dt = l1 ω_z + l2 n_y + v + l5, the control law
v = −l1 ω_z − l2 n_y − l5 − k1 sgn(s) − k2 s makes the system stable.
A soft function is used in place of the sign function:

v = −l1 ω_z − l2 n_y − l5 − k1 s/(|s| + δ) − k2 s . (15)

Then it holds that

s ds/dt = s(l1 ω_z + l2 n_y + v + l5) = −k1 s^2/(|s| + δ) − k2 s^2 < 0 . (16)

But considering the uncertainties, design the control law as

v = −k1 sgn(s) − k2 s − kd sgn(s),  kd = max(|l1 ω_z + l2 n_y + l5|) . (17)

So the system is stable. Considering the soft-function version, design

v = −k1 s/(|s| + δ) − k2 s − kg s/(|s| + δ),  kd = max(|l1 ω_z + l2 n_y + l5|) . (18)

Then it holds that

s ds/dt = s(l1 ω_z + l2 n_y + l5 − k1 s/(|s| + δ) − k2 s − kg s/(|s| + δ))
< −k1 s^2/(|s| + δ) − k2 s^2 + kd |s| − kg s^2/(|s| + δ) . (19)

It is easy to prove that

kd |s| − kg s^2/(|s| + δ) = |s| (kd − kg |s|/(|s| + δ))
= |s| [(kd − kg)|s| + kd δ] / (|s| + δ) . (20)

Then choosing kg = kd gives

kd |s| − kg s^2/(|s| + δ) = kd δ |s| / (|s| + δ) ≤ (1/2) kd (δ |s|)^{1/2} , (21)

so it is easy to prove that

s ds/dt < −k1 s^2/(|s| + δ) − k2 s^2 + kd δ |s|/(|s| + δ)
= −k2 s^2 − |s| (k1 |s| − kd δ) / (|s| + δ) . (22)

So the stable region is |s| > kd δ / k1, and the dead zone is small enough as long as
δ is small enough. Increasing k1 also reduces the dead zone.
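The trade-off analyzed above, namely that the sign function chatters while the soft function s/(|s| + δ) trades a small dead zone for smooth control, can be illustrated on a scalar sliding variable ds/dt = u + d(t) with a bounded disturbance. The disturbance, the gains and the switch-counting metric below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate(soft, k1=2.0, delta=0.01, dt=1e-3, T=5.0):
    """Euler simulation of ds/dt = u + d(t), d(t) = 0.5 sin(2 pi t), |d| < k1."""
    n = int(T / dt)
    s = 1.0
    u_hist = np.empty(n)
    for i in range(n):
        t = i * dt
        d = 0.5 * np.sin(2 * np.pi * t)
        if soft:
            u = -k1 * s / (abs(s) + delta)   # soft function, cf. Eq. (15)
        else:
            u = -k1 * np.sign(s)             # plain sign function
        s += dt * (u + d)
        u_hist[i] = u
    switches = int(np.sum(np.sign(u_hist[1:]) != np.sign(u_hist[:-1])))
    return abs(s), switches

err_sgn, sw_sgn = simulate(soft=False)
err_soft, sw_soft = simulate(soft=True)
print(f"sign: |s|={err_sgn:.4f}, {sw_sgn} control sign changes")
print(f"soft: |s|={err_soft:.4f}, {sw_soft} control sign changes")
```

The sign-function controller flips its output nearly every step once the trajectory reaches the sliding surface, while the soft-function controller changes sign only when the disturbance does; in exchange, the soft version only guarantees convergence into the dead zone |s| ≤ kd δ/k1 rather than to zero.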

5 Conclusions
The roles of the integral equivalent control and the soft function in reducing the
chattering of a simplified pitch-channel missile model have been studied, and the
relationship between the system gain and the dead zone of the soft function has been
analyzed.

References
1. Gao, W., Hung, J.C.: Variable Structure Control of Nonlinear Systems: A New Approach.
IEEE Trans. Indus. Electron. 40(1) (February 1993)
2. Polycarpou, M.M., Ioannou, P.A.: A Robust Adaptive Nonlinear Control Design.
Automatica 32(3), 423–442 (1996)
3. Kim, S.-H., Kim, Y.-S., Song, C.: A robust adaptive nonlinear control approach to missile
autopilot design. Contr. Engin. Prac. 12, 149–154 (2004)
4. Tang, F., Wang, L.: An adaptive active control for the modified Chua's circuit. Phys. Lett.
A 346, 342–346 (2005)
5. Elabbasy, E.M., Agiza, H.N., El-Dessoky, M.M.: Adaptive synchronization of a
hyperchaotic system with uncertain parameter. Chaos Solitons Fractals 30, 1133–1142
(2006)
6. Qian, X., Zhao, Y.: Flying Mechanics of Missiles. Beijing University of Engineering Press,
Beijing (2000)
7. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive
synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
8. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new
hyperchaotic system with input uncertainties and unknown parameters. Commun.
Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
9. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class
of Nonlinear System with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185,
989–1002 (2007)
Research on Backstepping Nussbaum Gain
Control of Missile Overload System

Jianhong Shi*, Guorong Zhao, Junwei Lei, and Guoqiang Liang

Department of Control Engineering,


Naval Aeronautical and Astronautical University,
264001 Yantai, China
zhld2002@163.com,
{zhaoguorong302,leijunwei,liangguoqiang1024}@126.com

Abstract. The overload control of a missile system is researched under the condition that the control direction is unknown. A Nussbaum gain strategy is adopted to solve the unknown control direction problem for the linear model of missile motion in the pitch channel. Backstepping technology and integral action are used to cope with the system uncertainty. A novel type of control law, which consists only of the overload signal and the angle velocity of the missile, is constructed, and the stability of the whole system is guaranteed by a whole Lyapunov function. The defect of the proposed method is that the dynamics of the control fin are not considered.

Keywords: Backstepping, Uncertainty, Nussbaum Gain, Supersonic Missile, Overload Control.

1 Introduction
Backstepping is a well-known method widely used in the design of missile control systems[1-5]. The control systems of missiles described by differential equations have strong nonlinearities and time-varying characteristics.
In this paper, the linear model of missile pitch-channel motion is studied based on previous research work. The input coefficient is assumed to be unknown, which is usually the case in real missile flight control, and the Nussbaum gain method is used to solve the unknown control direction problem. An uncertain linear supersonic missile system is controlled where only the overload and the angle velocity need to be measurable. Most important of all, no aerodynamic coefficient needs to be known, owing to the adoption of integral action.

2 Model Description
Without considering the first-order dynamics of the actuator, the linear model for the missile motion in the pitch channel is given by

*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 612–615, 2011.
© Springer-Verlag Berlin Heidelberg 2011

$$\begin{aligned} \dot{\alpha} &= \omega_z - a_{34}\alpha - a_{35}\delta_z \\ \dot{\omega}_z &= -a_{22}\omega_z + a_{24}\alpha + a_{25}\delta_z \\ n_y &= \frac{v}{g}\, a_{34}\alpha + \frac{v}{g}\, a_{35}\delta_z . \end{aligned} \qquad (1)$$
The control objective is to design a control $u$ such that the overload $n_y$ of the system tracks the desired value $n_y^d$, where the sign of $a_{25}$ is unknown.

With some transformations, the above model can be written as

$$\begin{aligned} \dot{n}_y &= \frac{v}{g}\, a_{34}\omega_z - \frac{v}{g}\, a_{34}^2 f(n_y, \delta_z) - \frac{v}{g}\, a_{34} a_{35}\delta_z + \frac{v}{g}\, a_{35}\dot{\delta}_z \\ \dot{\omega}_z &= -a_{22}\omega_z + a_{24} f(n_y, \delta_z) + a_{25}\delta_z , \end{aligned} \qquad (2)$$

where $f(n_y, \delta_z)$ denotes $\alpha$ solved from the above output equation.

3 Stability Analysis

Considering the above subsystem, define the tracking error $e_1 = n_y - n_y^d$ and a new variable

$$f_1 = -\frac{v}{g}\, a_{34}^2 f(n_y, \delta_z) - \frac{v}{g}\, a_{34} a_{35}\delta_z + \frac{v}{g}\, a_{35}\dot{\delta}_z . \qquad (3)$$
Assume that there exist two parameters $d_{11}$ and $d_{10}$ such that

$$|f_1| \le d_{11}|e_1| + d_{10} . \qquad (4)$$

Then design the virtual control $\omega_z^d$ as

$$\omega_z^d = -k_{11}\,\mathrm{sign}(a_{34})\, e_1 - k_{12}\,\mathrm{sign}(a_{34}) \int e_1 \, dt . \qquad (5)$$

Then it holds that

$$e_1\dot{e}_1 = -\frac{v}{g}\,|a_{34}|\, k_{11} e_1^2 - \frac{v}{g}\,|a_{34}|\, k_{12}\, e_1 \int e_1 \, dt + f_1 e_1 + \frac{v}{g}\, a_{34}\, e_1 e_2 , \qquad (6)$$

where $e_2$ is defined as $e_2 = \omega_z - \omega_z^d$.
Also define a new variable

$$f_2 = -a_{22}\omega_z + a_{24} f(n_y, \delta_z) - \dot{\omega}_z^d . \qquad (7)$$

Similarly, assume that there exist two parameters $d_{21}$ and $d_{20}$ such that

$$|f_2| \le d_{21}|e_2| + d_{20} . \qquad (8)$$

If the sign of $a_{25}$ were known, the virtual control for this subsystem could be designed in the same way as

$$\delta_z^{dd1} = -k_{21}\,\mathrm{sign}(a_{25})\, e_2 - k_{22}\,\mathrm{sign}(a_{25}) \int e_2 \, dt . \qquad (9)$$

Since the sign of $a_{25}$ is unknown, a Nussbaum gain strategy is adopted to solve the unknown input coefficient problem; the virtual control can be designed as follows:

$$\delta_z = N(k)\, \delta_z^{dd2} , \qquad (10)$$

where

$$\delta_z^{dd2} = k_{21} e_2 + k_{22} \int e_2 \, dt , \qquad (11)$$

and the Nussbaum function is designed as

$$N(k) = k^2 \cos(k) , \qquad (12)$$

and the tuning law of the Nussbaum gain is designed as

$$\dot{k} = e_2\, \delta_z^{dd2} . \qquad (13)$$

Also, the following equation holds:

$$e_2\dot{e}_2 = f_2 e_2 - k_{21} e_2^2 - k_{22}\, e_2 \int e_2 \, dt + e_2\,\big(1 + a_{25} N(k)\big)\, \delta_z^{dd2} . \qquad (14)$$

Choose a Lyapunov function

$$V = \frac{1}{2} e_1^2 + \frac{1}{2} e_2^2 . \qquad (15)$$

It is easy to prove that

$$\dot{V} \le \big(1 + a_{25} N(k)\big)\, \dot{k} . \qquad (16)$$

Integrating both sides of the inequality yields

$$V(t) - V(0) \le \big(k(t) - k(0)\big) + a_{25} \int_{k(0)}^{k(t)} N(k)\, dk . \qquad (17)$$

By contradiction (the apagoge method), assume that $k(t)$ becomes unbounded in finite time, so that $k(t) \to \infty$ as $t \to t_n$. With the help of the Nussbaum gain function characteristics, it is easy to show that the above inequality is then contradicted. Hence $k(t)$ is bounded on finite time intervals and the system is stable.
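The Nussbaum mechanism argued above can be demonstrated on a scalar toy plant $\dot{x} = b\,u$ whose gain sign $b$ is never used by the controller. The sketch below uses the standard Nussbaum function $k^2\cos k$ and the scalar analogue of the control and tuning laws (all values are illustrative, not the missile parameters):

```python
import math

def nussbaum_demo(b=-1.0, dt=1e-4, T=20.0):
    """Scalar plant x' = b*u with the sign of b unknown to the controller.
    Control u = N(k)*x with Nussbaum gain N(k) = k^2 cos(k);
    tuning law k' = x^2 (the scalar analogue of the e2 * delta_dd2 law)."""
    x, k = 1.0, 0.0
    for _ in range(int(T / dt)):
        u = k * k * math.cos(k) * x      # Nussbaum-modulated feedback
        x += dt * b * u                  # plant step (Euler)
        k += dt * x * x                  # monotone gain adaptation
    return x, k

for b in (1.0, -1.0):
    x_end, k_end = nussbaum_demo(b=b)
    print(b, x_end, k_end)  # x converges toward 0 and k stays bounded, for either sign
```

The gain $k$ sweeps through the lobes of $k^2\cos k$ until it lands in a region where the feedback has the stabilizing sign, exactly as the contradiction argument predicts.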

4 Conclusions
The main contribution of this paper can be summarized as follows. A Nussbaum gain strategy is adopted to solve the unknown control direction problem for the linear model of missile motion in the pitch channel. Also, backstepping technology and integral action are applied to guarantee that the whole system is stable while only the overload and the angle velocity need to be measured. The defect of this paper is that the dynamics of the control fin are not considered; this will be thoroughly considered in our future work.

References
1. Elabbasy, E.M., Agiza, H.N., El-Dessoky, M.M.: Adaptive synchronization of a hyperchaotic system with uncertain parameter. Chaos Solitons Fractals 30, 1133–1142 (2006)
2. Qian, X., Zhao, Y.: Flying Mechanics of Missiles. Beijing University of Engineering Press, Beijing (2000)
3. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
4. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new hyperchaotic system with input uncertainties and unknown parameters. Commun. Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
5. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class of Nonlinear Systems with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185, 989–1002 (2007)
Adaptive Control of Supersonic Missiles
with Unknown Input Coefficients

Jinhua Wu*, Junwei Lei, Wenjin Gu, and Jianhong Shi

Department of Control Engineering,


Naval Aeronautical and Astronautical University,
264001 Yantai, China
{wujinhua1024,leijunwei,guwenjin301}@126.com,
zhld2002@163.com

Abstract. Based on previous work, a Nussbaum gain controller is proposed to solve the unknown input direction problem of a large family of supersonic missiles. Considering the first-order dynamic characteristic of the actuator, the uncertainties of the nonlinear missile system, which are assumed to satisfy the so-called bounded uncertainty condition, are handled by adopting integral action and backstepping technology, and the unknown control direction problem of the supersonic missile is solved by adopting the Nussbaum gain method. A novel analysis with a Lyapunov function and a Nussbaum gain function is completed, and the stability of the whole system is guaranteed by the construction of the Lyapunov function.

Keywords: Adaptive, Nussbaum Gain, Missile, Control.

1 Introduction
Adaptive backstepping controllers have been researched in many papers[1-5]. The uncertainties in the missile pitch-plane model considered in [2] consist of uncertain parameters and unknown nonlinear functions, where the unknown functions represent the model error or the time variation of the system, but the input coefficient is assumed to be positive. In fact, the sign of the input coefficient may be unknown, or it may change unexpectedly under some complex flight conditions.
In this paper, a Nussbaum gain strategy is proposed to control the uncertain supersonic missile described in [2]. Compared with the above adaptive methods, the unknown control direction problem is solved by adopting the Nussbaum gain control technology.

2 Model Description
The nonlinear model for the missile motion in the pitch plane is adopted from Hull and Qu[2]. Considering the first-order dynamics of the actuator, the equations of motion in the pitch plane are given by
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 616–620, 2011.
© Springer-Verlag Berlin Heidelberg 2011

$$\begin{aligned} \dot{\alpha} &= \frac{QS}{mV}\left[C_z(\alpha, M_m) + B_z \delta\right] + q \\ \dot{q} &= \frac{QSd}{I_{yy}}\left[C_m(\alpha, M_m) + B_m \delta\right] \\ \dot{\delta} &= a u - a \delta , \end{aligned} \qquad (1)$$
where $\alpha$, $q$ and $\delta$ are the angle of attack, pitch rate and control fin deflection angle, respectively; $m$, $V$, $I_{yy}$, $Q$, $S$ and $d$ are the mass, velocity, pitching moment of inertia, dynamic pressure, reference area and reference length, respectively; $M_m$ is the Mach number; and $u$ and $a$ are the control input and actuator bandwidth, respectively. The aerodynamic coefficients in Eq. (1) are represented as functions of the Mach number and the angle of attack; the expressions of these functions can be found in Ref. [1].
Taking the uncertainties of the aerodynamics into consideration, we can rewrite Eq. (1) as

$$\begin{aligned} \dot{x}_1 &= f_1(x_1) + \Delta f_1(x_1) + x_2 + [g_1 + \Delta g_1]\, x_3 \\ \dot{x}_2 &= f_2(x_1) + \Delta f_2(x_1) + [g_2 + \Delta g_2]\, x_3 , \end{aligned} \qquad (2)$$

where $x_1$ denotes $\alpha$ and $x_2$ stands for $q$; the definitions of $\Delta_1(x_1,t)$, $f_1(x_1)$, $g_1$, $\Delta g_1$, $\Delta_2(x_1,t)$, $f_2(x_1)$, $g_2$, $\Delta g_2$ and $b_2$ can be found in Ref. [1].
With the same analysis as in [1], we can get the inequalities

$$|\Delta f_1(x_1)| < \left( |\Delta C_1| + |\Delta C_1 p_2| + |C_1 \Delta p_2| \right)\left[ z_1(x_1) + z_2(x_1) M_m \right] \qquad (3)$$

and

$$|\Delta f_2(x_1)| < \left( |\Delta C_2| + |\Delta C_2 p_3| + |C_2 \Delta p_3| \right)\left[ m_1(x_1) + m_2(x_1) M_m \right] . \qquad (4)$$

The main assumption of this paper is Assumption 1 below.

Assumption 1: The uncertainties of the system satisfy the following triangular bounds condition: $|\Delta_i(x,t)| \le \theta_i\, p_i(x)$, where $p_i(x)$ are known smooth functions and $\theta_i$ are unknown constant parameters.

Definition 1: $N(\zeta)$ is a Nussbaum-type function if it has the following characteristics:

$$\limsup_{s \to \infty} \frac{1}{s} \int_0^s N(x)\, dx = +\infty \qquad (5)$$


and

$$\liminf_{s \to \infty} \frac{1}{s} \int_0^s N(x)\, dx = -\infty . \qquad (6)$$

3 Design of Nussbaum Gain Controller


Considering the above subsystem, a new type of Nussbaum gain controller can be designed as follows.
First, define a new variable $e_1 = x_1 - x_1^d$; then the first-order subsystem can be written as

$$\dot{e}_1 = f_1(x_1) + \Delta f_1(x_1) + x_2 + [g_1 + \Delta g_1]\, x_3 - \dot{x}_1^d . \qquad (7)$$

Design an integral virtual controller for the first-order subsystem as follows:

$$x_2^d = -f_1(x_1) - k_1 e_1 - k_2 \int e_1 \, dt . \qquad (8)$$

Define

$$\bar{f} = \Delta f_1(x_1) + [g_1 + \Delta g_1]\, x_3 - \dot{x}_1^d \qquad (9)$$

and assume

$$|\bar{f}| \le d_1 |e_1| + d_2 , \qquad (10)$$

where $d_1$ and $d_2$ are positive constants.

Then define a new variable $e_2 = x_2 - x_2^d$, so that the second-order subsystem can be written as

$$\dot{e}_2 = f_2(x_1) + \Delta f_2(x_1) + [g_2 + \Delta g_2]\, x_3 - \dot{x}_2^d . \qquad (11)$$

Because the sign of $g_2$ is unknown, we use a Nussbaum gain control method to solve the uncertain input coefficient problem. First, we design a virtual control as

$$x_3^{dd} = -f_2(x_1) - k_3 e_2 - k_4 \int e_2 \, dt . \qquad (12)$$



Define

$$\bar{g} = \Delta f_2(x_1) - \dot{x}_2^d \qquad (13)$$

and assume

$$|\bar{g}| \le d_3 |e_2| + d_4 , \qquad (14)$$

where $d_3$ and $d_4$ are positive constants. It is obvious that if $[g_2 + \Delta g_2]\, x_3 = x_3^{dd}$, the system is stable for large enough gains. Because of the existence of the unknown input coefficient, a Nussbaum gain is adopted to cope with it as follows.
Design the control as

$$x_3 = N(\zeta)\, x_3^{dd} , \qquad (15)$$

where

$$N(\zeta) = \zeta^2 \cos(\pi \zeta / 4) , \qquad (16)$$

and the adaptive tuning law of the Nussbaum gain is designed as

$$\dot{\zeta} = e_2\, x_3^{dd} . \qquad (17)$$

Choose a Lyapunov function

$$V = \frac{1}{2} e_1^2 + \frac{1}{2} e_2^2 . \qquad (18)$$
It is easy to prove that

$$\dot{V} \le e_2\, (g_2 + \Delta g_2)\, N(\zeta)\, x_3^{dd} + e_2\, x_3^{dd} , \qquad (19)$$

so it satisfies

$$\dot{V} \le \left[1 + (g_2 + \Delta g_2)\, N(\zeta)\right] \dot{\zeta} . \qquad (20)$$

Integrating both sides of the inequality, it holds that

$$V(t) - V(0) \le \big(\zeta(t) - \zeta(0)\big) + (g_2 + \Delta g_2) \int_{\zeta(0)}^{\zeta(t)} N(\zeta)\, d\zeta . \qquad (21)$$

By contradiction (the apagoge method), we assume that $\zeta(t)$ becomes unbounded in finite time, so that $\zeta(t) \to \infty$ as $t \to t_n$. With the help of the Nussbaum gain function characteristics, it is easy to show that the above inequality is then contradicted. So $\zeta(t)$ is bounded on finite time intervals, and it is easy to prove that the system is stable.

4 Conclusions
A novel adaptive controller is designed with a Nussbaum gain strategy, and the unknown input coefficient problem of the supersonic missile is solved in this paper. The uncertainties are also handled by adopting integral action and backstepping technology, so the stability of the whole system is guaranteed with the help of the Lyapunov stability theorem.

References
1. Kim, S.-H., Kim, Y.-S., Song, C.: A robust adaptive nonlinear control approach to missile autopilot design. Contr. Eng. Prac. 12, 149–154 (2004)
2. Hull, R.A., Qu, Z.: Design and evaluation of robust nonlinear missile autopilot from a performance perspective. In: Proc. of the ACC, pp. 189–193 (1995)
3. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
4. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new hyperchaotic system with input uncertainties and unknown parameters. Commun. Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
5. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class of Nonlinear Systems with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185, 989–1002 (2007)
The Fault Diagnostic Model Based on MHMM-SVM
and Its Application

FengBo Zhu*, WenQuan Wu, ShanLin Zhu, and RenYang Liu

Naval University of Engineering,


717, Jiefang Road, Wuhan, China
623016756@qq.com

Abstract. A new method of incipient fault diagnosis for analog circuits based on MHMM-SVM is presented. The MHMM has the ability to deal with continuous dynamic signals and is well suited to describing samples of the same kind, while the SVM is based on structural risk minimization and is adept at discriminating between different kinds. The two models are complementary, and the proposed method fuses them. First, the dimensionality of the experimental samples is reduced, and the samples are made more separable, with the LDA technique; second, the MHMM-SVM model is built from the samples. Finally, from experimental results compared with those of MHMM-based and SVM-based diagnostic methods, the conclusion can be drawn that the method has clear advantages for incipient fault diagnosis.

Keywords: MHMM-SVM, analog circuit, incipient faults, LDA.

1 Introduction

Because of the variety of electronic equipment faults, component degradation and structural complexity, fault detection and localization in electronic systems have long been hard problems. Current computational methods, such as fuzzy neural networks and genetic algorithms, are mostly effective when equipment suffers hard faults (complete or partial failure), but they perform poorly for incipient fault diagnosis and for predicting soft faults, in which a parameter drifts out of tolerance without causing complete failure[7]. However, soft faults usually arouse a chain reaction that may result in the breakdown of the whole equipment; therefore, looking for a way to diagnose and forecast incipient faults is significant for keeping the whole equipment from breaking down.
The HMM is a kind of doubly stochastic process that is extensively applied in signal processing and pattern recognition; it has the ability to deal with continuous dynamic signals and has already been applied successfully in speech understanding and face recognition[1]. At home and abroad, some experts
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 621–627, 2011.
© Springer-Verlag Berlin Heidelberg 2011

have started to apply it to fault diagnosis and localization, with better results than traditional methods such as neural networks[2,3,5]. The SVM is a machine learning method that comes from statistical learning theory for data classification; it has high classification accuracy and good generalization ability, and it has already been applied extensively in the fault diagnosis field[2,4]. The MHMM is well suited to describing samples of the same category, while the SVM is adept at discriminating between different categories[1,2]. The two merits are complementary, and the method of this paper fuses the two models to deal with incipient fault samples. From experimental results compared with MHMM-based and SVM-based diagnostic methods, the conclusion can be drawn that the method has clear advantages for incipient fault diagnosis.

2 The Model of MHMM-SVM

2.1 The Model of the Gaussian Mixture HMM (MHMM)

The MHMM is an extension of the HMM for the case where the observation sequence is a continuous-density signal. A continuous-density HMM presumes that the observed feature vectors follow some probability density, and the quality of such a model depends on how physically realistic the assumed distribution is. Generally speaking, frequently used probability distribution functions can hardly describe the distribution of a set of data on their own; therefore, the Gaussian mixture HMM is presented, which approximates the actual distribution of the feature vectors by several Gaussian distributions, called a Gaussian Mixture Model (GMM), each with its own mean and dispersion.
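The MHMM likelihood used for scoring can be sketched with the scaled forward algorithm over Gaussian-mixture emissions. The following is a minimal 1-D illustration; the state count, mixture parameters and sequences are toy values, not the ones trained in Section 3:

```python
import numpy as np

def gmm_pdf(x, weights, means, stds):
    """Density of a 1-D Gaussian mixture at scalar x."""
    w, m, s = (np.asarray(a, dtype=float) for a in (weights, means, stds))
    return float(np.sum(w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))))

def log_likelihood(obs, pi, A, emissions):
    """Scaled forward algorithm: returns log P(obs | pi, A, GMM emissions)."""
    alpha = pi * np.array([gmm_pdf(obs[0], *e) for e in emissions])
    loglik = 0.0
    for x in obs[1:]:
        c = alpha.sum()
        loglik += np.log(c)                  # accumulate the scaling factors
        alpha = (alpha / c) @ A * np.array([gmm_pdf(x, *e) for e in emissions])
    return loglik + np.log(alpha.sum())

pi = np.array([1.0, 0.0])                    # left-to-right: start in state 0
A = np.array([[0.5, 0.5],
              [0.0, 1.0]])
emissions = [((0.5, 0.5), (0.0, 1.0), (0.5, 0.5)),   # state 0: modes near 0 and 1
             ((0.5, 0.5), (4.0, 5.0), (0.5, 0.5))]   # state 1: modes near 4 and 5
seq_good = [0.1, 0.9, 4.2, 5.1]   # consistent with state 0 -> state 1
seq_bad  = [4.2, 5.1, 0.1, 0.9]   # reversed: a left-to-right model cannot go back
print(log_likelihood(seq_good, pi, A, emissions) >
      log_likelihood(seq_bad, pi, A, emissions))  # True
```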

2.2 Support Vector Machines (SVM)

The SVM is a learning technique developed by Vapnik and his co-workers on the basis of statistical learning theory. It has remarkable advantages in dealing with small-sample, nonlinear and high-dimensional problems, and it rests on the VC-dimension theory and structural risk minimization of statistical learning theory. For a limited set of sample information, the SVM also provides the best compromise between the model's complexity (its learning accuracy on the given training samples) and its learning ability (the ability to identify arbitrary samples without error) in order to obtain the best generalization ability.
Structural risk minimization (SRM) denotes minimizing the sum of the empirical risk and the confidence risk. Machine learning is essentially an approximation of the real model of the problem. The empirical risk denotes the error of the classifier on the given samples, and the confidence risk is related to two factors. One is the number of samples: obviously, the larger the number of samples and the higher the accuracy of the learning outcome, the lower the confidence risk. The other is the VC dimension of the classification function (its complexity): simply put, the larger the VC dimension, the worse the generalization ability and the higher the confidence risk.
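The SRM trade-off just described is exactly what a soft-margin linear SVM minimizes: a complexity penalty $\frac{\lambda}{2}\|w\|^2$ plus the empirical hinge loss. A minimal pedagogical sketch via subgradient descent (toy data and hyperparameters; not the GA-optimized SVM of [4]):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize lam/2*||w||^2 (complexity / confidence term) plus the mean
    hinge loss (empirical risk) by subgradient descent. y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                    # points violating the margin
        if viol.any():
            grad_w = lam * w - (y[viol, None] * X[viol]).mean(axis=0)
            grad_b = -y[viol].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two linearly separable clouds
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.4, (50, 2)), rng.normal(3.0, 0.4, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(acc)  # near 1.0 on this separable data
```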

2.3 The Major Process of MHMM-SVM Used for Fault Diagnosis

Fault diagnosis based on the MHMM is decided by the maximum likelihood value output by the model. For a data sample with tolerance, some of its MHMM models may be quite similar, and relying on the likelihood alone to classify the samples can lead to errors. Similarly, fault diagnosis based on the SVM classifies only according to the signal features at fixed times, ignoring the sequential features, which can also lead to erroneous judgments. Therefore, fault diagnosis based on MHMM-SVM is presented. The major process of this model is as follows:

[The figure shows the diagnostic flow: the training and testing samples are first classified by the MHMM into the normal state and fault states one to three; the two most similar states are kept and the unlike states are removed; the retained pair is then classified by the SVM to give the final result.]

Fig. 1. Flow chart of fault diagnosis based on MHMM-SVM
The whole process contains four parts: feature extraction, diagnosis of MHMM,
diagnosis of SVM and making a judgment.
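The decision fusion step can be sketched as follows: the MHMM stage keeps the two states whose log-likelihoods are closest to the top, and the pair-trained SVM makes the final call. The stub below uses the two log-likelihood values quoted later in Section 3.2; the other scores and the injected classifier callable are hypothetical stand-ins for the trained models:

```python
def diagnose(loglik, svm_classifiers, features=None):
    """loglik: {state: MHMM log-likelihood of the test sequence}.
    svm_classifiers: {(s1, s2): callable} -- one binary SVM per state pair.
    Step 1 keeps the two most likely states; step 2 lets their pair-SVM decide."""
    (s1, _), (s2, _) = sorted(loglik.items(), key=lambda kv: kv[1], reverse=True)[:2]
    pair = (s1, s2) if (s1, s2) in svm_classifiers else (s2, s1)
    return svm_classifiers[pair](features)

# Illustrative scores; -612.02 / -612.86 are the nearly tied R1 states from Sec. 3.2
loglik = {"normal": -700.0, "R1 up": -612.02, "R1 down": -612.86, "C2 up": -630.66}
svms = {("R1 up", "R1 down"): lambda feats: "R1 down"}  # stand-in for a trained SVM
print(diagnose(loglik, svms))  # -> R1 down
```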

3 The Application of MHMM-SVM to Incipient Fault Diagnosis of Analog Circuits

In this paper, a typical active-filter circuit is taken as an example; the tolerance of each component is 5%:
Fig. 2. Circuit of the active filter



If the value of a component deviates from its nominal value by 20%, this is called an incipient soft fault.
A Monte Carlo analysis is conducted for one of the states of the circuit in Multisim:

Fig. 3. Analyzing based on Monte Carlo Method

As depicted in the figure, the effect of incipient parameter changes on the circuit outputs is most obvious in the frequency range from 10 kHz to 100 kHz. The values at the following frequencies are selected to compose a 9-dimensional vector that serves as the original data: 8 kHz, 10 kHz, 15 kHz, 20 kHz, 30 kHz, 50 kHz, 60 kHz, 80 kHz and 100 kHz. One hundred original vectors are obtained from one hundred Monte Carlo runs for each state.

3.1 The Original Data Processing Based on Linear Discriminant Analysis (LDA)

The original samples collected from the circuit are fairly redundant and complex, which would not only affect the results but also increase the training complexity if used to train the MHMM-SVM model directly. With the LDA technique, however, the original data can be transformed into a lower-dimensional space, which raises the processing speed while keeping high accuracy.
LDA is a supervised dimension-reduction method used in classification, widely applied in face recognition at the moment. Based on the Fisher criterion, it searches for the best projection vector that maps a high-dimensional sample into a low-dimensional space so as to obtain the maximum between-class scatter and the minimum within-class scatter of the projected samples. That is to say, after dimensionality reduction, samples of the same class are gathered together and different classes are separated as far as possible[6].
Through dimensionality reduction by LDA, the two columns of data with the largest eigenvalues are selected to plot the figure. As is obviously shown, not only

[Scatter plot of the projected samples; the legend lists the rise (S) and drop (J) states of R1, R3, R4, C1 and C2.]

Fig. 4. The samples after dimensionality reduction by LDA

dimensionality reduction is realized but also the data obtain a preliminary classification to some extent. When R2 goes beyond its normal range by 20%, the outputs are exactly the same as in the normal state, so R2 does not affect the results. Therefore, there are eleven states, as follows: R1 rises by 20%, R1 drops by 20%, R3 rises by 20%, R3 drops by 20%, C1 rises by 20%, C1 drops by 20%, C2 rises by 20%, C2 drops by 20%, R4 rises by 20%, R4 drops by 20%, and the normal state. The original feature vectors are reduced to five-dimensional ones, which are used as the data samples.

3.2 The Training and Results of the MHMM-SVM Model

Fifty of the one hundred feature vectors serve as the training samples. Five of them are chosen at random to compose an observation sequence, so fifty observation sequences are obtained. In the same way, the other fifty feature vectors are used as testing samples. Eleven MHMM models are trained, one for each state of the circuit.
The feature vector is modeled as a mixture of three Gaussian components. The number of MHMM states is set to five and the structure is of the left-to-right type. The initial state is the normal one, so the initial matrix is [1,0,0,0,0]. The state transition matrix is [0.5,0.5,0,0,0; 0,0.5,0.5,0,0; 0,0,0.5,0.5,0; 0,0,0,0.5,0.5; 0,0,0,0,1]. The K-means algorithm is used on the observation sequences to determine the initial parameters of the Gaussian mixture HMM. The model is trained by the EM algorithm, where the number of iterations is set to 50 and the fluctuation tolerance is 1e-4. Figure 5 shows the iteration for the state in which R1 rises by 20%.

Fig. 5. The convergence process of the log-likelihood

The SVM model is mainly used to classify the two categories whose log-likelihood values are the most similar in the MHMM judgment. Therefore, fifty-five binary classifiers should be trained; see, for example, Fig. 6 and Fig. 7.

Fig. 6. Binary classification of the two R1 states    Fig. 7. Binary classification of an R1 state and a C2 state



When a testing sequence has been judged by the MHMM model, the two states whose log-likelihood values are the most similar are obtained. Subsequently, the testing sequence is put into the SVM classifier trained for those two states for the final partition. At last, the fault diagnosis results are obtained, as Table 1 shows:

Table 1. Fault diagnosis results

[Table of detection rates of MHMM, SVM and MHMM-SVM for each of the eleven states: normal, R1↑, R1↓, R3↑, R3↓, R4↑, R4↓, C1↑, C1↓, C2↑ and C2↓; the numeric entries did not survive extraction.]

It can be seen from Table 1 that the fault detection rate of MHMM-SVM is higher than that of the single MHMM or SVM model, and that MHMM is better than SVM to a certain extent. For highly separable samples, such as the normal state, all three models have a good fault detection rate. But for heavily overlapping samples, such as the changes of R1 and C2, the results differ widely. The separation between the two R1 samples in Fig. 4 is large, which favors the SVM classification, but their MHMM log-likelihood values are -612.02 and -612.86, which are almost the same; on the contrary, the separation between the R1 and C2 samples is low, which hampers the SVM classification, but the MHMM log-likelihood values are -612.02 and -630.66, so the classification can be carried out easily. Therefore, the combination of the two models classifies heavily overlapping samples better.

4 Conclusion
The MHMM-SVM method for incipient fault diagnosis of electronic devices has clear advantages over methods based on the single MHMM or SVM model. Based on a comprehensive analysis of the advantages of both models, namely the ability of the MHMM to describe correlated sequences and the ability of the SVM to maximize the between-class distance, this paper bears out the practicability and feasibility of combining the two models.

References
1. Shi, D.C., Han, L.Y., Yu, M.H.: Automatic audio stream classification based on hidden Markov model and support vector machine. Journal of Changchun University of Technology (Natural Science Edition) 29(2), 178–182 (2008)
2. Liu, X.M., Qiu, J., Liu, G.J.: HMM-SVM Based Mixed Diagnostic Model and Its Application. Acta Aeronautica et Astronautica Sinica 26(4), 497–500 (2005)
3. Xu, L., Wang, H.J.: Study on Fault Prognostic and Health Management for Electronic System. University of Electronic Science and Technology of China, Chengdu (2009)
4. Chen, S.J., Lian, K., Wang, H.J.: Method for Analog Circuit Fault Diagnosis Based on GA Optimized SVM. Journal of University of Electronic Science and Technology of China 38(4), 554–558 (2009)
5. Alpaydin, E.: Introduction to Machine Learning. China Machine Press, Beijing (2009)
6. Zhang, A.N., Zhuang, Z.M.: A Study on the Method of Face Recognition Based on Optimized LDA and RBF Neural Network. Shantou University (2007)
7. Yang, S., Hu, M., Wang, H.: Study on Soft Fault Diagnosis of Analog Circuits. The Electronics and Computer 25(1), 1–8 (2008)
Analysis of a Novel Electromagnetic Bandgap Structure
for Simultaneous Switching Noise Suppression

Hua Yang*, ShaoChang Chen, Qiang Zhang, and WenTing Zheng

Naval University of Engineering, 717, Jiefang Road.Wuhan, China


wenwenoffice@126.com

Abstract. Electromagnetic bandgap (EBG) structures have been successfully applied to suppress the simultaneous switching noise (SSN) between the power and ground planes. Aimed at the relatively narrow bandwidth and the poor low-frequency performance of the conventional uniplanar compact EBG (UC-EBG) structure, a novel EBG structure formed by adding spiral-shaped metal strips to the conventional UC-EBG is proposed. The new EBG structure significantly enhances the equivalent inductance between neighboring unit cells and changes the character of the stop-band. Simulation results show that the new EBG structure can achieve a bandwidth of 4.35 GHz. This stop-band is 47.5% wider than that of the conventional UC-EBG structure, and 17.5% wider than that of the reference EBG structure in the literature. Excellent SSN suppression at a depth of -40 dB is achieved between 0.18 and 4.53 GHz.

Keywords: PCB, Electromagnetic Bandgap (EBG), Simultaneous Switching Noise (SSN), power/ground plane.

1 Introduction
Today, the reliability of digital circuit systems is more and more challenged by the ceaseless increase of the system clock frequency, the growing size of digital ICs, and the sharp increase of component density on the printed circuit board (PCB). The power layer and ground layer in a multilayer PCB are actually equivalent to a pair of parallel plates forming a resonator at high frequencies. Simultaneous switching noise (SSN, or delta-I noise), which can lead to significant signal integrity (SI) problems and electromagnetic interference (EMI) issues [1-3], comes into being through the high impedance at resonance. Favorable signal integrity and electromagnetic compatibility (EMC) are important aspects of the reliable design of high-speed digital circuit systems, so in order to improve the reliability of a digital circuit system, the SSN in the power/ground planes must be suppressed effectively.
A typical method of SSN suppression is to use decoupling capacitors between the power plane and the ground plane[4]. However, because of its high-frequency parasitic parameters, a decoupling capacitor loses its ability to suppress SSN at frequencies higher than 600 MHz[5]. In order to restrain the SSN between the power plane and the ground plane in the high-frequency region, a novel concept of mitigating SSN using
*
Corresponding author.

S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 628–634, 2011.
© Springer-Verlag Berlin Heidelberg 2011

electromagnetic bandgap (EBG) structures was introduced[6]. However, the stop-band of that EBG structure is too narrow to satisfy the real needs of SSN suppression. Much research has focused on broadening the stop-band of EBG structures: a multi-via mushroom EBG was proposed in [7], but it is complicated and costly to manufacture. Two improved uniplanar compact EBG structures were studied in [8] and [9]. The simulation results show that the EBG structure proposed in [8] can achieve a stop-band wider than that of the conventional UC-EBG structure, with the SSN in the stop-band effectively suppressed at a depth of -40 dB, but the SSN in the low-frequency region is still suppressed ineffectively. Therefore, how to broaden the stop-band so as to realize SSN suppression over a wide band from low to high frequencies has become an important issue for engineers in recent years.
In this paper, we focus on wideband SSN suppression in high-speed digital PCBs. Based on the mechanism and equivalent circuit of the conventional UC-EBG structure, a novel UC-EBG structure formed by adding spiral-shaped inductances is proposed to broaden the stop-band. To verify its performance, 3-D electromagnetic field simulation is performed using the commercial software Ansoft SIwave 3.0.

2 UC-EBG Structure and Its Mechanism for SSN Suppression


There are two kinds of mechanisms by which a stop-band is formed in EBG structures: one is Bragg scattering, which includes dielectric drilling and ground-plane etching; the other is local resonance, which includes the mushroom EBG structure and the UC-EBG structure. Fig. 1 shows the model of the conventional UC-EBG structure. Compared with the mushroom EBG, the UC-EBG structure consists only of metal patches and metal strips [10].


Fig. 1. Conventional UC-EBG structure: (a) unit cell; (b) equivalent LC circuit of adjacent unit cells

In UC-EBG structures, the resonance of the periodic unit cells plays an important role in forming the stop-band. The unit structure can be analyzed by an equivalent LC circuit, and its electromagnetic characteristics can likewise be described by an equivalent inductance and an equivalent capacitance.
630 H. Yang et al.

Fig. 1(b) shows the equivalent LC circuit of adjacent unit cells of the UC-EBG structure. The equivalent capacitance C arises from the gaps between adjacent unit cells, and the equivalent inductance L arises from the current on the narrow metal strips between adjacent unit cells. The center frequency of the stop-band and the 3 dB bandwidth can be approximately expressed by the following formulas [11]:

    ω0 = 1/√(LC)                    (1)

    BW = Δω/ω0 = (1/η)√(L/C)        (2)

where ω0 is the center frequency, Δω is the width of the stop-band, and η = 120π is the wave impedance of free space.
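As a quick numerical check of formulas (1) and (2), the sketch below evaluates the center frequency and relative bandwidth of an equivalent parallel LC pair. The component values are purely illustrative and are not taken from the structures studied in this paper:

```python
import math

ETA = 120 * math.pi  # free-space wave impedance, about 377 ohms

def stopband(L, C):
    """Center angular frequency (formula (1)) and relative bandwidth
    (formula (2)) of the equivalent parallel LC resonator."""
    omega0 = 1.0 / math.sqrt(L * C)   # formula (1)
    bw = math.sqrt(L / C) / ETA       # formula (2), BW = delta_omega / omega0
    return omega0, bw

# Illustrative (hypothetical) values: L = 2 nH, C = 1 pF
omega0, bw = stopband(2e-9, 1e-12)
print(omega0 / (2 * math.pi) / 1e9, bw)  # center frequency in GHz, fractional BW
```

Note that both quantities depend on L and C only through the products LC and L/C, which is what makes the design trade-off in Section 3 possible.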
Actually, the UC-EBG structure is equivalent to a series of parallel LC pairs, and it presents a high impedance around the resonance frequency of the power/ground planes. The UC-EBG structure thus supplies a high-impedance gateway, equivalent to a band-stop filter, for SSN propagating along the power/ground planes; in this way, the SSN is confined to the local cell and cannot propagate and be excited along the power/ground pair. As a result, the resonance between the power/ground planes is diminished, and the impedance around the resonance frequency is also reduced. This is the mechanism by which the UC-EBG structure suppresses SSN.

Fig. 2. The 3×3 conventional UC-EBG structure

In order to illustrate the mechanism of SSN suppression, an example of the conventional UC-EBG structure is shown in Fig. 2. The test PCB has four metal layers with a size of 90×90 mm: the top and bottom layers are signal layers, the second layer is the power layer, and the third layer is the ground layer. 3×3 unit cells are etched in the power plane. The simulation parameters are set as follows: a = 30 mm, w = 28.5 mm, h = 6.75 mm, s = g = 1.5 mm, the thickness of the dielectric between the power and ground layers is 0.8 mm, and the ports are located at (45 mm, 45 mm) and (75 mm, 75 mm).
Fig. 3 shows the simulated S12 parameters of the conventional UC-EBG structure in Fig. 2. Based on a -40 dB suppression level, the simulation results show that the conventional UC-EBG structure has an obvious stop-band ranging from 0.83 GHz to 3.78 GHz, a bandwidth of 2.95 GHz. Compared with the continuous plane, the conventional UC-EBG structure suppresses the SSN in its stop-band to a depth of -40 dB; thus, the UC-EBG structure can effectively suppress the SSN between the power and ground planes. However, the SSN at frequencies below 0.83 GHz is still ineffectively suppressed, and the relative width of the stop-band is still narrow. A new EBG structure is therefore proposed in Section 3 to resolve these problems.
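The band edges quoted here (0.83–3.78 GHz at the -40 dB level) are read off the simulated |S12| curve. A small helper like the one below, which is our own illustrative sketch rather than a SIwave feature, can extract such band edges from exported frequency/|S12| samples:

```python
def widest_stopband(freq_ghz, s12_db, threshold_db=-40.0):
    """Return (f_low, f_high) of the widest contiguous frequency run where
    |S12| stays at or below threshold_db, or None if no such run exists."""
    best, start, end = None, None, None
    for f, s in zip(freq_ghz, s12_db):
        if s <= threshold_db:
            if start is None:
                start = f            # entering a suppressed band
            end = f
        elif start is not None:
            if best is None or end - start > best[1] - best[0]:
                best = (start, end)  # leaving a band; keep it if widest so far
            start = None
    if start is not None and (best is None or end - start > best[1] - best[0]):
        best = (start, end)          # band extends to the last sample
    return best

# Synthetic example: -50 dB between 1 and 3 GHz, -20 dB elsewhere
freqs = [i / 10 for i in range(51)]  # 0.0 ... 5.0 GHz
s12 = [-50.0 if 1.0 <= f <= 3.0 else -20.0 for f in freqs]
print(widest_stopband(freqs, s12))   # -> (1.0, 3.0)
```

The same routine applies unchanged to the comparisons in Sections 3 and 4, since all bandwidth figures in this paper are defined against the same -40 dB threshold.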

Fig. 3. S12 parameters of conventional UC-EBG and continuous plane

3 New EBG Structure Design


From formulas (1) and (2), we can draw the following conclusions for the new UC-EBG structure design: the bandwidth of the EBG structure increases with the equivalent inductance and decreases with the equivalent capacitance, so enlarging the equivalent inductance broadens the bandwidth while also lowering the center frequency. A typical method of increasing the equivalent inductance is to add a uniplanar compact spiral-shaped inductance [12]. In this paper, we use this method to enlarge the relative bandwidth of the stop-band.
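The trade-off stated above can be made concrete: by formulas (1) and (2), scaling the equivalent inductance by a factor k multiplies the relative bandwidth by √k while dividing the center frequency by √k. A minimal sketch with arbitrary illustrative values:

```python
import math

def center_and_bw(L, C, eta=120 * math.pi):
    # formulas (1) and (2): omega0 = 1/sqrt(LC), BW = sqrt(L/C)/eta
    return 1.0 / math.sqrt(L * C), math.sqrt(L / C) / eta

L, C = 1e-9, 1e-12   # illustrative baseline values
k = 4.0              # inductance enlarged 4x (e.g. by longer spiral strips)

f0, bw = center_and_bw(L, C)
f0k, bwk = center_and_bw(k * L, C)

print(bwk / bw, f0 / f0k)  # both ratios equal sqrt(k) = 2.0
```

This is why the spiral strips widen the stop-band and pull its lower edge down at the same time, as confirmed by the simulations in Section 4.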


Fig. 4. Improved UC-EBG unit cells: (a) new unit cell; (b) unit cell from [8]

Fig. 4(a) shows the new EBG structure proposed in this paper. This UC-EBG structure consists of metal patches and spiral-shaped metal strips, with the spiral-shaped strips lying outside the patches. The metal strips increase the connection length between adjacent unit cells; as a result, the equivalent inductance of the EBG unit cells is enlarged. This measure improves the characteristics of the stop-band and enlarges the bandwidth. In order to demonstrate the SSN-suppression performance of the new UC-EBG structure, we take the improved UC-EBG structure proposed in [8] for comparison. The reference structure is shown in Fig. 4(b).

Fig. 5. The 3×3 new UC-EBG structure

4 Simulation Results
To achieve good SSN suppression, we propose a new UC-EBG structure formed by adding spiral-shaped inductance. We use the commercially available software Ansoft SIwave 3.0 to verify its performance. The test PCB also has four metal layers with a size of 90×90 mm, and 3×3 unit cells are etched in the power plane. The simulation parameters are set as follows: a = 30 mm, w = 28.5 mm, h = 6.85 mm, s = 0.5 mm, d = 0.6 mm, the thickness of the dielectric between the power and ground layers is 0.8 mm, and the two ports are located at (45 mm, 45 mm) and (75 mm, 75 mm).

Fig. 6. S12 parameters of new UC-EBG and conventional UC-EBG
Fig. 6 shows the simulated S12 parameters of the new EBG structure in Fig. 5. They show that the new UC-EBG structure performs better than the conventional UC-EBG structure of the same size. Based on a -40 dB suppression level, the stop-band of the new EBG structure ranges from 0.18 GHz to 4.53 GHz, a bandwidth of 4.35 GHz, which is 47.5% wider than that of the conventional UC-EBG structure. Meanwhile, the lower corner frequency of the stop-band is 650 MHz lower than that of the conventional UC-EBG structure, which means that the SSN in the low-frequency region is effectively suppressed by the new EBG structure. The SSN suppression is also very stable, since S12 stays below -60 dB within the stop-band.

Because the spiral-shaped metal strips slightly weaken the equivalent capacitance between the unit cells, the center frequency of the new EBG structure is shifted about 50 MHz toward the high-frequency region. Despite this, compared with the conventional UC-EBG structure, the new EBG structure proposed in this paper effectively suppresses the SSN from low to high frequencies.

Fig. 7. S12 parameters of the new EBG and the reference EBG from [8]

Fig. 7 shows the simulated S12 parameters of the new EBG structure compared with the reference EBG structure of the same size. Based on a -40 dB suppression level, the reference EBG structure has a stop-band ranging from 0.42 GHz to 4.12 GHz, a bandwidth of 3.70 GHz. Through the simulations, we find that the new UC-EBG structure with added spiral-shaped metal strips has a bandwidth 17.5% wider than the reference, and the lower corner frequency of its stop-band is 240 MHz lower than that of the reference structure. Therefore, compared with the reference EBG structure in [8], the new UC-EBG structure provides not only lower-frequency SSN suppression but also a wider stop-band.
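The relative-bandwidth comparisons above follow directly from the quoted -40 dB band edges; a quick arithmetic check using only the numbers reported in this paper (note the second figure rounds to 17.6%, quoted in the text as 17.5%):

```python
def bandwidth(f_low_ghz, f_high_ghz):
    """Stop-band width in GHz from its -40 dB corner frequencies."""
    return f_high_ghz - f_low_ghz

bw_new = bandwidth(0.18, 4.53)   # proposed structure: 4.35 GHz
bw_conv = bandwidth(0.83, 3.78)  # conventional UC-EBG: 2.95 GHz
bw_ref = bandwidth(0.42, 4.12)   # reference structure [8]: 3.70 GHz

print(round((bw_new / bw_conv - 1) * 100, 1))  # percent wider than conventional, ~47.5
print(round((bw_new / bw_ref - 1) * 100, 1))   # percent wider than reference, ~17.6
```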

5 Conclusion
In this paper, based on the conventional UC-EBG structure, we proposed a new UC-EBG structure formed by adding spiral-shaped metal strips to the metal patches, in order to effectively suppress the SSN in the power/ground planes. The -40 dB stop-band of the proposed EBG structure ranges from 0.18 GHz to 4.53 GHz; this stop-band is 47.5% wider than that of the conventional UC-EBG structure and 17.5% wider than that of the reference EBG structure in [8]. The SSN in the low-frequency region is also effectively suppressed by the new EBG structure. The SSN-suppression performance was verified by simulation using commercially available software, and good performance was achieved. The proposed EBG structure performs better than the reference structure in the literature for SSN suppression.

References
1. Abhari, R., Eleftheriades, G.V.: Metallo-dielectric electromagnetic band-gap structures for suppression and isolation of the parallel-plate noise in high-speed circuits. IEEE Trans. Microw. Theory Tech. 51(6), 1629–1639 (2003)
2. Kamgaing, T., Ramahi, O.M.: Design and modeling of high-impedance electromagnetic surfaces for switching noise suppression in power planes. IEEE Trans. Electromagnetic Compatibility 47(3), 479–489 (2005)
3. Shahparnia, S., Ramahi, O.M.: Electromagnetic interference (EMI) reduction from printed circuit boards (PCB) using electromagnetic band-gap structures. IEEE Trans. Electromagnetic Compatibility 46(4), 580–586 (2006)
4. Power Integrity and Ground Bounce Simulation of High Speed PCBs. Empowering Profitability Worldwide Workshop. Ansoft (2002)
5. Ricchiuti, V.: Power-supply decoupling on fully populated high-speed digital PCBs. IEEE Trans. Electromagnetic Compatibility 43, 671–676 (2001)
6. Chen, G., Melde, K., Prince, J.: The applications of EBG structures in power/ground plane pair SSN suppression. In: IEEE 13th Topical Meeting on Electrical Performance of Electronic Packaging and Systems, pp. 207–210. IEEE Press, Portland (2004)
7. Zhang, M.-S., Li, Y.-S., Jia, C., et al.: A power plane with wideband SSN suppression using a multi-via electromagnetic bandgap structure. IEEE Microwave and Wireless Components Letters 17(4), 307–309 (2007)
8. Du, J.-Y., Kim, J.-M., et al.: Analysis of separately arranged patterns for suppression of simultaneous switching noise. In: Proceedings of the Asia-Pacific Microwave Conference, pp. 55–58. IEEE Press, Yokohama (2006)
9. Kim, B., Kim, D.-W.: Improvement of simultaneous switching noise suppression of power plane using localized spiral-shaped EBG structure and λ/4 open stubs. In: Proceedings of the Asia-Pacific Microwave Conference, pp. 1–4. IEEE Press, Bangkok (2007)
10. Yang, F., Ma, K., Qian, Y., et al.: A uniplanar compact photonic-bandgap (UC-PBG) structure and its applications for microwave circuits. IEEE Trans. Microwave Theory and Technol. 47, 1509–1514 (1999)
11. Sievenpiper, D.: High-impedance electromagnetic surfaces. Ph.D. dissertation, Dept. Electrical Engineering, Univ. California, Los Angeles, CA (1999)
12. Li, Y., Fan, M.Y., Feng, Z.H.: A spiral electromagnetic bandgap (EBG) structure and its application in microstrip antenna arrays. In: Proceedings of the Asia-Pacific Microwave Conference, p. 4. IEEE Press, Suzhou (2005)
Author Index

Abbas, Ammar I-290 Chen, Jianchao I-430


An, Jianwei II-81 Chen, Jie IV-481
Ao, Hongyan I-131 Chen, Jifei II-458
Azmi, Atiya I-290 Chen, Jinpeng II-579
Chen, Jinxing I-341
Bai, Lifeng IV-454 Chen, Jr-Shian V-190, V-245
Bai, QingHua IV-288 Chen, JunHua V-512
Bai, Wanbei III-202 Chen, Kuikui III-202
Bai, Yan II-102 Chen, Li III-303
Bai, Yu Shan I-408 Chen, Min V-314, V-329
Bao, Yan III-107 Chen, Pengyun III-89
Bao, Yuan-lu I-302 Chen, Qing V-407
Baqi, Abdul I-161 Chen, Qingjiang I-29, I-507, I-519,
Bi, TingYan V-144, V-149 IV-59
Bi, Weimin IV-486 Chen, Rong III-299
Bingquan, Liu III-398 Chen, Rongrong V-590
Bo, Cheng V-533 Chen, Ruey-Maw V-245, V-356
Chen, ShaoChang I-628
Cai, JinBao II-77, V-128 Chen, Sheng-wei III-322
Cai, Ying I-542 Chen, Shouhui III-170, V-581
Cao, Gang IV-493 Chen, Shun-Chieh V-362
Cao, Lijuan I-131 Chen, TongJi IV-106
Cha, Si-Ho I-353 Chen, Weiqiang V-16, V-160
Chai, Baofen III-133 Chen, Wen-Jong V-381
Chang, Bau-Jung V-112 Chen, Xia IV-197
Chang, Liwu III-36 Chen, Xiaohong IV-444
Chang, Yung-Fu V-381 Chen, Xiaohui II-572, II-579
Changhua, Li III-63 Chen, Xilun II-197
Che, HongLei I-167 Chen, Xin IV-11, IV-313, V-32
Chen, Bin III-112 Chen, XingWen II-291
Chen, BingWen I-55 Chen, Yaqi II-611
Chen, Chih-Sheng III-333, IV-367 Chen, Yi-Chun V-139
Chen, Chin-Pin V-309, V-381 Chen, Yin-Chih V-413, V-458
Chen, Daohe I-482 Chen, Yonghui I-309
Chen, Dezhao I-78 Chen, YongPing IV-160
Chen, Fuh-Gwo V-139, V-190, V-245, Chen, YouRong III-545, IV-221
V-356 Chen, Yun V-496
Chen, Gang III-299 Chen, Yuzhen II-496
Chen, GuanNan I-562 Chen, Zhuo I-231
Chen, Hsuan-Yu V-309 Cheng, Ching-Hsue V-190
Chen, I-Ching IV-525 Cheng, Fenhua III-522
Chen, Jia V-175, V-543, V-548 Cheng, Genwei II-176
Chen, Jia-Ling IV-376 Cheng, GuanWen I-187, I-194
Chen, Jianbao IV-324 Cheng, JuHua IV-221

Cheng, Zhongqing II-526, II-532 Fang, WenJie IV-99


Chi, Yali III-436 Fangyang, Zhang II-267
Cho, Ying-Chieh V-335, V-348 Feng, Junhong V-154
Choi, Seung Ho II-34 Feng, Ming III-373
Chou, Hung-Lieh V-190 Feng, Ruan II-267
Chou, Tzu-Chuan IV-279 Feng, Wenquan I-534
Chu, Huiling V-123 Fenghua, Wen III-398
Chu, Yuping V-227 Fu, Chuanyun II-211, II-284
Chun, Hung-Kuan IV-367 Fu, Hong V-604
Chunlin, Xie III-95 Fu, HongYuan I-187, I-194
Cui, JianSheng V-472 Fu, LiMei I-464
Cui, Jingjing IV-352 Fu, Tsu-Hsun IV-595
Cui, Yu-Xiang IV-549 Fu, Yao V-32
Fu, Zheng IV-589
da Hu, Jun I-201 Fuqiang, Yue II-586
Dai, Lu IV-481
Dai, Shixun IV-481 Gan, Shuchuan I-309
Dai, YanHui V-299 Gao, Caixia IV-148, IV-175
Dai, ZhiRong III-378 Gao, ChunLing I-581
Deng, Pei-xi III-73 Gao, Dong Juan IV-165
Deng, Rui V-48 Gao, Fuxiang I-99
Deng, YingZhen V-43 Gao, Guangping III-124
Ding, Wen V-180 Gao, Hongwei I-396, I-402, I-513
Ding, Xiangqian III-57 Gao, Jianming III-160
Ding, Yingqiang V-429, V-489 Gao, Long IV-47
Dong, Jianwei V-227 Gao, Rencai II-477
Dong, Lu I-118 Gao, Zhen IV-298
Dong, Yang IV-428 Gao, Zhongshe III-310
Dong, Yanyan IV-454 Ge, LingXiao III-545
Dong, Zaopeng III-89 Gong, Ke V-396
Dong, Zhen III-156 Gong, Li III-202
Dongping, Liu V-533, V-585 Gong, Ping II-205
Du, Qiulei IV-423 Gong, Xiaoxin III-226
Du, Ruizhong II-466 Gong, Yuanpeng III-57
Du, Xuan III-78 Gong, Zheng III-11
Du, Yang V-512 Gou, ShuangXiao III-441
Du, ZhiTao V-596, V-604 Gu, Chunmiao V-118
Du, Zhong II-380 Gu, Huijuan IV-404
Duan, Li IV-1 Gu, JunJie II-354
Duan, Zhongyan II-617 Gu, Lefeng III-402
Gu, Wenjin I-616
Enfeng, Liu I-316 Gua, MeiHua V-372
Guan, Chen-zhi III-299
Fan, Jihui II-176 Guan, Yong III-214
Fan, Li Ping IV-319 Gui, Jiangsheng V-185
Fan, LiPing IV-509 Guo, Fabin V-81
Fan, Xiaobing V-105 Guo, Guofa IV-554, IV-560
Fang, Fang IV-66 Guo, Honglin I-22
Fang, HaiNing II-440 Guo, Jiannjong III-609
Fang, Liang IV-470 Guo, JianQiang I-375, IV-17, V-565

Guo, Lili III-138 Hsiao, Yu-Lin V-309


Guo, LongFei IV-197 Hsieh, Kai-ju IV-362
Guo, Ping IV-357 Hsu, Chien-Min III-457
Guo, Qinan V-429 Hsueh, Sung-Lin III-457
Guo, Tongying III-328 Hu, Chengyu II-131
Guo, Xiaowei II-15 Hu, Dongfang IV-122
Guo, Zhenghong III-568 Hu, Feihui I-137
Guojian, Zu II-386, II-434 Hu, Jian-Kun V-462
Guoxing, Peng II-424 Hu, Jinhai V-554
Guozhen, Shen II-405 Hu, Lihua IV-248
Hu, Lin III-587
Haisheng, Liu IV-602 Hu, Shueh-Cheng IV-525
Han, Fengyan III-328 Hu, Tao III-100
Han, Gangtao V-429 Hu, Wujin V-554
Han, Guiying III-293 Hu, XiaoHong II-1
Han, Guohua III-339 Hu, YangCheng II-507
Han, Houde II-392 Hu, Yujuan I-507
Han, Jianhai IV-122 Hu, YunAn II-592
Han, Juntao III-214 Hu, Zhongyong III-481
Han, Liang I-219 Hua, Qian IV-191
Han, Lingling I-176 Huanchang, Qin II-398
Han, Qiyun II-279 Huang, Chia-Hung V-458
Han, Ran I-49 Huang, Chiung-En V-139
Han, Yaozhen I-476 Huang, He IV-465, IV-520
Han, Yumei V-442, V-452 Huang, Hua II-305
Han, ZhiGang I-123 Huang, Jiayin I-78
Hao, Chen IV-170, V-392 Huang, Jie III-100
Hao, Hanzhou III-107 Huang, Jinqiao IV-554, IV-560
Hao, Hui III-568 Huang, Liqun II-205, V-207
Hao, Li V-273 Huang, Min IV-418, IV-433
Hao, Shangfu III-568 Huang, Ming-Yuan V-386
Haob, JinYan V-372 Huang, Ning III-532
He, Bo IV-399 Huang, Shih-Yen V-356
He, Jianxin IV-258 Huang, Wanwu II-611
He, Jing II-572 Huang, Wun-Min IV-367
He, Jinxin V-257 Huang, Xin I-257
He, Lianhua V-287 Huang, Yong IV-143
He, Xingran II-222 Huang, Yongping II-222
He, Xiong II-40 Hui, Peng III-143
He, Xuwen II-40 Hui, SaLe I-225
He, Yong II-341 Huo, GuoQing V-596
He, Yongxiu III-156 Huo, XiaoJiang III-451
He, Yueqing II-192
Hong, Bigang I-324 Ishaque, Nadia I-290
Hong, Cao II-446
Hong, Jingxin I-416 Jaroensutasinee, Krisanadej I-244
Hong, Qingxi III-207 Jaroensutasinee, Mullica I-244
Hong, Xu V-167 Ji, ChangPeng II-348
Hongtao, Yang V-533, V-585 Ji, DongYu II-560, II-566
Hou, Xia II-197 Ji, Jun II-392

Ji, Shen I-316 Lei, Liu IV-428


Ji, Xiang II-598 Li, Caihong III-385
Ji, Yukun I-214 Li, Chaoliang I-93
Jia, ChangQing II-253 Li, Chunyan I-430
Jia, GuangShe III-508 Li, Dengxin I-501, II-89
Jia, NianNian II-253 Li, Dong II-482
Jia, XiaoGe IV-393, V-559 Li, Fo-Lin IV-549
Jia, Yue II-452 Li, GuangHai I-297
Jia, Zhi-Ping II-162 Li, Haiyan V-539
Jian, Gao IV-111 Li, Hongmei IV-243
Jiang, Annan V-437 Li, Ji III-78
Jiang, Chaoyong V-282 Li, Jiejia III-328
Jiang, DanDan I-381 Li, Jiming III-277
Jiang, HaiBo II-526 Li, Jing II-279, III-481, III-592
Jiang, Lerong IV-133 Li, Jinxiang III-445
Jiang, Libo IV-444 Li, JunSheng I-525
Jiang, Nan V-267 Li, Jushun III-496
Jiang, Wen V-484 Li, Kanzhu I-501
Jiang, Wenwen III-363, III-368 Li, Kunming IV-324
Jiang, Xiaowei IV-330 Li, Li III-78
Jiang, Yan I-62 Li, Lulu I-131
Jiang, Yan-hu III-299 Li, Luyi III-347
Jiang, Ya Ruo I-408 Li, Maihe II-380
Jin, Shengzhen III-214 Li, Min II-291
Jin, Yuqiang I-600, II-11 Li, Minghui IV-268, V-94
Jin, Yuran V-227 Li, Nan III-31, IV-433
Jin, Yushan II-222 Li, Ning II-197
Jing, Shengqi I-181 Li, Ping IV-288, IV-335, IV-346
Jingjing, Ma II-429 Li, Qiong V-262
Jingmeng, Sun V-585 Li, Rui I-595
Jinguo, Li III-468 Li, SanPing III-562, IV-481
Jin-jie, Yao III-143 Li, Shi-de V-53
Jong, Gwo-Jia V-413, V-458 Li, Shizhuan V-506
Junrui, Li III-242 Li, Taizhou I-231
Jushun, Li III-474 Li, Weidong V-167
Li, Wenze I-482
Kang, Cui III-516 Li, WuZhuang I-297
Kang, Zhiqiang V-570 Li, Xiaojuan III-214
Konigsberg, Zvi Retchkiman I-8 Li, XiaoLong I-330, I-336
Kuai, JiCai III-118 Li, Xiaolu III-402
Kuang, Aimin IV-530 Li, XiaoMin II-23
Li, Xin V-448
Lee, Ching-Yi V-309 Li, Xiqin II-113
Lee, In Jeong II-68 Li, XiZuo III-293, IV-388
Lee, Keun-Wang I-353 Li, Xuehua V-528
Lee, Seung Eun II-150, II-169 Li, Xueqin III-249
Lei, Bangjun II-572, II-579 Li, Yang I-149, I-302
Lei, Bicheng I-437, I-443 Li, Yanru II-526, II-532
Lei, Junwei I-606, I-612, I-616, II-7, Li, Yan-Xiao II-335
II-243, II-248 Li, Yaqin III-445

Li, Ye III-89 Liu, Lixia I-365, I-370


Li, Yibin III-385 Liu, Liyuan II-40
Li, Ying III-402 Liu, Meihua III-138
Li, Yingjiang I-525 Liu, Pengtao II-131
Li, YinYin I-449 Liu, Qiao I-99
Li, Yong IV-143, IV-465, IV-520 Liu, RenYang I-621
Li, YongGan I-36, IV-52 Liu, Ruikai II-222
Li, Yongqing II-598 Liu, Ruixiang V-133
Li, Zhanli I-270 Liu, Tao I-574
Li, Zhan-ping IV-22 Liu, TianHui IV-6
Li, Zheng I-263 Liu, Tsung-Min V-335, V-348
Li, Zhongxu IV-577 Liu, Weijie IV-186
Liang, Guoqiang I-606, I-612, II-7 Liu, Wei-ya I-457
Liang, Tsang-Lang V-335, V-348 Liu, Wen I-201
Liang, Xiaoteng I-72 Liu, XianHua V-22
Liao, Chin-Wen V-320 Liu, Xiaojuan II-47, II-53
Liao, Qingwei I-416 Liu, Xiaosheng I-137
Liao, Yingjie V-543, V-548 Liu, XiaoYan II-125
Lijun, Cai III-468 Liu, Xingliang II-380
Lili, Dong III-63 Liu, Xuewu I-93
Lin, Hui II-40 Liu, Yanfei V-185
Lin, Jun I-257 Liu, Yang V-88
Lin, Sho-yen V-320 Liu, Yongfang III-392
Lin, Tingting I-257 Liu, Yongqi V-133
Lin, Wei-Jhih IV-367 Liu, Yunjing II-96
Lin, Yaw-Ling IV-525 Liu, Yuqiang I-118
Lin, Ying II-77, V-128 Liu, Zhenwei IV-577
Lin, Yong Zheng III-21 Liu, Zhen-ya III-299
Lin, Yufei II-15 Liu, Zhihai IV-47
Liu, Ailian I-501 Liu, Zhihui I-83
Liu, Anping III-315, V-287 Liu, Zhong IV-99
Liu, BanTeng III-545, IV-221 Liu, ZhongJing III-451
Liu, Bin IV-554, IV-560 Lixia, Song IV-602
Liu, Bing II-113 Liyuan, Yang I-316
Liu, Chang III-254 Llinares, Manuel Bernal IV-428
Liu, Dan IV-186 Loglo, S. IV-340
Liu, Dong III-254 Lou, Xian V-175
Liu, Fei II-341 Lu, Bo I-530, II-496
Liu, Haimei V-496 Lu, GuoDan I-187, I-194
Liu, Haisheng V-402 Lu, Ning-hai III-73
Liu, Hongzhi IV-36, IV-181 Lu, Quan I-78
Liu, Hua I-430 Lu, YiRen V-22
Liu, Huanbin IV-262 Lu, Yu-Chung I-568
Liu, Jian V-105 Luo, Jiaguo I-496, III-46
Liu, June III-430 Luo, Min III-283
Liu, Jun-Lan II-335 Luo, Ping I-42
Liu, Junqiu I-257 Luo, Qian V-202
Liu, Lei II-598 Luo, Ruwei IV-133
Liu, LiHong V-65, V-596 Luo, Tao III-156
Liu, LingXia II-230, II-236 Luo, Tieqing III-527

Luo, Xiaoting IV-298 Peng, Lingyi I-93


Luo, Xin II-107 Peng, Ting II-211
Lv, HeXin IV-536 Peng, Yanfei V-448
Lv, Jiehua I-390 Peng, Zhaoyang V-293
Lv, TingJie IV-197 Pheera, Wittaya I-244

Ma, Bitao II-552 Qi, Bingchen IV-308


Ma, Bole II-137 Qi, ShuHua V-267
Ma, Dongjuan I-1 Qi, Wei I-501
Ma, F.Y. III-468 Qi, Weiwei II-211, II-284
Ma, Guang III-202 Qi, XiaoXuan IV-6
Ma, Hongji III-226 Qi, Yun V-22
Ma, Jibin I-143 Qian, Manqiu IV-117
Ma, Liang IV-543, V-314, V-329 Qiao, Jinxia V-452
Ma, MingLi III-532 Qiao, Shan Ping III-21
Ma, Qiang I-600, II-11 Qiao, XinXin III-232
Ma, TaoLin I-149, I-155 Qiao, Ying-xu II-473
Ma, Wenxin III-485 Qin, Ping I-194
Ma, Xiaoyan III-481 Qin, QianQing I-55
Ma, Xi-qiang I-457 Qin, Yanhong IV-566, V-11
Ma, Yi V-221 Qing, Xin I-501
Mao, PanPan II-253 Qiong, Li I-111
Mei, BaiShan IV-93
Meng, Haoyu V-303 Rao, Congjun III-430
Meng, Jian V-133 Ren, Chenggao I-423
Meng, Xiankun IV-268, V-94 Ren, GuangHui IV-504
Meng, Yi III-303 Ren, Honghong III-568
Ming, Xiao IV-111 Ren, Liying V-376
Mu, Dinghong V-105, V-554 Ren, Wen-shan II-452
Ren, Xiaozhong IV-122
Nai, Changxin I-118 Ren, YanLing II-354
Niu, JiaYing II-440 Ru, Wang III-63
Niu, Jingchun III-260 Rui, Yannian III-226

Okawa, Yoshikuni IV-308 Sang, Junyong IV-170, V-392


Ou, Jun V-314, V-329, V-506 Sangarun, Peerasak I-244
Ou, Xiaoxia III-598, III-604 Sarula IV-340
Shan, Rong V-518
Pan, Hongli II-380 Shanjie, Wu IV-602
Pan, Kailing III-502 Shao, Yongni II-341
Pan, Weigang I-470, I-476 Shao-lin, Liu III-143
Pan, Xinyu IV-404 Shen, Bo V-423
Pan, Yang IV-143 Shen, HongYan V-472
Pang, Yingbo V-484 Shen, Huailiang II-490
Park, Hyung Chul II-157 Shen, Limin III-260
Pei, Yulong II-211, II-284 Shen, Wei II-513
Pei, Yun V-314, V-329 Shen, Xiangxing IV-154, IV-486
Peng, Chen-Tzu III-557 Shen, Xiaolong I-423
Peng, Guobin II-186, II-192 Shen, XiuGuo IV-210
Peng, Kang IV-572 Shen, Zhipeng IV-459

Shi, Jianhong I-606, I-612, I-616, II-7 Sun, Yuqiang I-239, I-251, III-165
Shi, Ji cui I-201 Sun, Yuzhou III-36
Shi, Jinliang V-376 Sun, Zebin I-534
Shi, Ming-wang III-73 Sun, Zhongmin III-160
Shi, Penghui II-89 Suo, Zhilin V-442
Shi, Run-hua I-489
Shi, ShengBo IV-536 Tai, David Wen-Shung V-309
Shi, Yaqing IV-303 Tai, David W.S. IV-376, V-362
Shu, Yang IV-602, V-402 Takatera, Masayuki II-28
Song, Ani V-81 Tan, Gangping II-458
Song, HaiYu IV-388 Tan, Gongquan I-309
Song, Huazhu V-477 Tan, Ran III-378
Song, Jianhui I-396, I-402 Tan, Wenan IV-243
Song, Jianwei V-100 Tan, Wentao V-213
Song, Lixia V-402 Tan, Wenxue IV-215
Song, XiaoWei I-187 Tan, Xianghua IV-514
Song, Xi-jia I-457 Tan, Xilong V-213
Song, Yunxia IV-11, IV-313, IV-418, Tan, YaKun IV-233
IV-433 Tan, Zhen-hua V-221, V-233
Soomro, Safeeullah I-161, I-290 Tan, ZhuWen V-43, V-48
Soomro, Sajjad Ahmed I-161 Tang, Xian III-288, IV-293
Su, Ruijing I-501, II-89 Tang, Zhihao I-22
Su, Te-Jen V-386 Tao, Yi II-545
Su, YangNa III-68 Tao, Zhiyong I-341
Su, Yen-Ju V-362 Teng, Yusi IV-88
Sui, Li-ping V-467 Tian, DaLun I-123
Suk, Yong Ho II-34 Tian, Dan III-283
Sun, Chunling V-576 Tian, Daqing IV-474
Sun, Dong III-156 Tian, Hongjuan II-367
Sun, Hongqi IV-439 Tian, HongXiang I-574
Sun, Jianhong I-525 Tian, Hua V-489
Sun, Jie III-516 Tian, Jian V-53
Sun, Jinguang V-448 Tian, JianGuo IV-393, V-559
Sun, Junding I-359 Tian, Qiming V-303
Sun, Li IV-470 Tian, Xianzhi III-26
Sun, Peixin I-347 Tian, Yinlei IV-52, IV-59
Sun, Qiudong III-485 Tian, Yu II-176, II-380
Sun, Qiuye IV-577 Tian, Yuan V-43, V-48
Sun, Shusen V-185 Tien, Li-Chu V-320
Sun, Shuying V-418 Tong, XiaoJun V-88
Sun, Wei III-160 Tsai, Chang-Shu III-333, IV-367
Sun, Weiming II-243, II-248 Tsai, Chung-Hung III-333
Sun, Xiao IV-274 Tsai, Hsing-Fen IV-371
Sun, XiaoLan I-155 Tsai, Tzu-Chieh III-557
Sun, XiuYing III-592 Tu, Fei IV-399
Sun, Yakun I-118
Sun, Yansong I-149 Wan, Benting IV-583
Sun, Ying II-513 Wan, Guo-feng III-322
Sun, Yu II-192 Wan, Lei I-276
Sun, Yuei-Jyun V-386 Wang, An I-263

Wang, Baojin III-373 Wang, Shi-Xu V-240


Wang, Bing IV-42 Wang, Shouhong IV-313
Wang, Botao IV-11, IV-418 Wang, Shufang III-1
Wang, Changshun I-470 Wang, Shuying IV-449
Wang, ChenChen IV-233 Wang, Sida V-251
Wang, Cheng III-424, IV-262 Wang, Te-Shun IV-595
Wang, Chenglong IV-47 Wang, Tong IV-243, IV-248
Wang, Cuiping V-342 Wang, Wei IV-154
Wang, CunRui V-267 Wang, WeiLi V-59
Wang, DaoYang III-221 Wang, Weiwei III-373
Wang, Dongai III-138 Wang, WenWei I-55
Wang, Enqiang II-89 Wang, Xianliang I-62
Wang, FeiYin I-381 Wang, Xiaodong III-57
Wang, Fengwen II-96 Wang, XiaoFei II-216
Wang, Fengzhu II-102 Wang, Xiaofei I-542
Wang, Fu-Tung I-568 Wang, Xiaohong I-482
Wang, Fuzhong IV-175 Wang, XiaoHua V-180
Wang, Gai I-143 Wang, Xiaoling III-496
Wang, Guilan III-84 Wang, Xiaotian II-500
Wang, GuoQiang I-581 Wang, Xinyu II-243, II-248
Wang, Haichen III-328 Wang, Xiping IV-215
Wang, HaiYan I-381 Wang, XueFeng III-303
Wang, HanQing III-575 Wang, Xueyan I-143
Wang, Hao I-187, I-194 Wang, Ying III-527, V-570
Wang, Huiling V-213 Wang, YingXia IV-17, V-565
Wang, Huiping III-598 Wang, Yong III-6
Wang, Jian IV-248, IV-404 Wang, YuMei IV-148, IV-203
Wang, Jing I-72 Wang, YunWu III-516
Wang, Junxiang V-437 Wang, Yuqin III-339
Wang, Kaijun I-214 Wang, Yuting IV-459
Wang, Lan III-418 Wang, ZhangQuan III-545
Wang, Lei IV-77 Wang, Zhongsheng III-112
Wang, LiangBo III-532 Wei, JianMing II-592
Wang, Lijie I-176 Wei, Li III-468
Wang, Linan V-251, V-257 Wei, Min II-298
Wang, LingFen IV-274 Wei, Wanying IV-474
Wang, Long IV-22 Wei, WenYuan I-187
Wang, Meijuan IV-303 Wei, Xianmin I-551, I-556
Wang, Minggang V-548 Wei, YuHang V-48
Wang, Mingyou I-365, I-370 Wei, Zhang I-316, IV-111
Wang, Mingzhe V-81 Wei, ZhenGang V-180
Wang, Qibing II-361 Weiwei, Wu III-468
Wang, Ray IV-376 Wen, Xinling II-119
Wang, Renshu II-102 Weng, Chunying V-76
Wang, Ruijin II-113 Weng, Pu-Dong IV-279
Wang, Runsheng V-570 Wu, Chunmei III-15
Wang, Sangen II-380 Wu, Guo-qing V-467
Wang, Shengchen I-239 Wu, Hui V-100
Wang, Shiheng I-42 Wu, Jianan II-513
Wang, Shilong I-276 Wu, Jinhua I-616

Wu, Lingyi II-259 Xing, Chong II-513


Wu, MeiYu II-348 Xing, YueHong IV-93
Wu, Peng IV-572, V-202 Xiong, Jieqiong IV-181
Wu, Qingxiu V-314, V-329, V-506 Xiong, Sujuan II-611
Wu, QinMing II-482 Xiuping, Chen I-207
Wu, Shianghau III-609 Xu, Dan III-237, IV-238
Wu, Shufang IV-138 Xu, DaWei II-1
Wu, Shui-gen III-299 Xu, Hang II-373
Wu, Shuo I-347 Xu, HongSheng III-418, III-490
Wu, Tao V-240 Xu, Huidong IV-382
Wu, Tsung-Cheng IV-279 Xu, Jian V-233
Wu, Tzong-Dar I-568 Xu, Jie IV-93
Wu, Wei IV-227 Xu, Junfeng III-36
Wu, WenQuan I-621 Xu, LiXiang II-125
Wu, XianLiang IV-412 Xu, Maozeng V-396
Wu, Xiaoli V-452 Xu, Meng IV-238
Wu, Xiaosheng I-359 Xu, Qin I-525
Wu, Xinlin III-413 Xu, Shan I-187
Wu, YanYun IV-210 Xu, Wei Hong I-408
Wu, Yong III-11, III-436 Xu, Xiang I-15
Wu, Youbo II-545 Xu, XiaoSu I-449
Wu, Youneng II-186 Xu, Xinhai II-15
Wu, Yu-jun II-373 Xu, Yun III-424
Wu, Zheng-peng I-49 Xu, Yuru I-276
Wu, ZhenGuo V-43 Xu, Zhao-di III-254
Wu, ZhiLu IV-504 Xu, Zhe II-259
Xu, Zhenbo V-590
Xi, Jinju IV-215 Xu, ZiHan I-187, I-194
Xi, Wei III-1 Xue, Anke II-259
Xia, Fangyi I-105 Xue, Yanxin III-378
Xia, Lu IV-82 Xue, Yongyi IV-1
Xiang, LiPing III-575 Xue, Yunfeng III-485
Xiang, Zhang III-63
Xianyan, Liang I-207 Yali, He II-424
Xiao, Beilei I-365, I-370 Yan, Hua I-562
Xiao, Guang V-123 Yan, JingJing IV-197
Xiao, Hairong I-476 Yan, Mao V-277
Xiao, Jun I-501 Yan, QianTai I-297
Xiao, Li III-315 Yan, Renyuan V-76
Xiao, Zhihong V-523 Yan, Wang III-242
Xiaoting, Luo V-501 Yan, Wenying III-485
Xie, ChunLi IV-274, IV-388 Yan, Yujie II-552
Xie, Jin II-125 Yan, ZheQiong V-543
Xie, Kefan V-496 Yang, Bin V-462
Xie, Min IV-572, V-202 Yang, Chung-Ping IV-371
Xie, Wu III-124 Yang, Guang I-270, V-207
Xie, Zhibin III-352, III-357, Yang, Guang-ming V-221, V-233
III-363, III-368 Yang, Guiyong I-470
Xin, Jiang IV-127 Yang, Heng II-310
Xin, Lu II-40 Yang, Hong-guo II-473

Yang, Hua I-628 Yun, Chao I-167


Yang, Huan-wen IV-549 Yunming, Zhou III-180
Yang, Jianguo I-83
Yang, JingPo V-472 Zang, JiYuan I-167
Yang, JinXiang I-330, I-336 Zang, Shengtao I-72
Yang, Li V-477 Zeng, Qingliang IV-47
Yang, LiFeng IV-499 Zeng, Xiaohui I-309
Yang, Min IV-11, IV-418 Zhai, Changhong III-197
Yang, Peng II-272 Zhan, Hongdan I-99
Yang, Rui III-73 Zhang, Changyou IV-27
Yang, Shouyi V-489 Zhang, Chengbao II-58, II-63
Yang, Xianwen I-263 Zhang, ChunHui I-574
Yang, Xiao V-233 Zhang, Chunhui II-40
Yang, XiaoFeng I-430 Zhang, Cuixia IV-382
Yang, Xihuai V-167 Zhang, Dejia V-303
Yang, Xu-hong II-28, II-373 Zhang, DengYin I-588
Yang, YaNing II-291 Zhang, Deyu V-1
Yang, Ying V-11 Zhang, DongMin IV-210
Yang, Yuan IV-486 Zhang, FaJun IV-99
Yang, Yunchuan II-176 Zhang, Fan IV-233
Yang, Zhi II-354 Zhang, Feng-Qin II-335
Yang, Zhifeng IV-154, IV-486 Zhang, Gui I-123
Yang, ZhongShan V-273 Zhang, Guohua I-341, I-347
Yao, Benxian III-221 Zhang, Guoyan I-284
Yao, DuoXi I-330 Zhang, Hongliang II-216
Yao, Jie V-196 Zhang, Hui I-588
Yao, Qiong IV-165 Zhang, Huimin III-124
Yao, ShanHua IV-412 Zhang, Huiying III-267
Yaozhong, Wang III-398 Zhang, JianKai V-299
Ye, Xin IV-36 Zhang, Jie II-205, V-154, V-207
Yi, Jiang IV-127 Zhang, Jimin III-283
Yi, Zheng IV-71 Zhang, Jin III-522, III-527
Yin, Di III-508 Zhang, Jing II-520
Yin, Guofu IV-474 Zhang, Jingjing II-40
Yin, Yang IV-66 Zhang, JinGuang II-440
Yin, ZhenDong IV-504 Zhang, Jingyu IV-572
Yong, Wu IV-66 Zhang, JinYu III-562
Yu, Dehai V-437 Zhang, Jun IV-191
Yu, Hongqin IV-258 Zhang, Junhua I-72
Yu, Jie I-496 Zhang, Laixi I-423
Yu, Meng III-57 Zhang, Lei III-352, III-357
Yu, Yan I-408 Zhang, Li III-133
Yu, Yang I-341, I-347, I-396, I-402 Zhang, Lian-feng V-467
Yu, Yen-Chih V-413 Zhang, Liang II-605
Yuan, Deling V-123 Zhang, LiangPei I-155
Yuan, Jiazheng IV-382 Zhang, Lichen II-316, II-323, II-329
Yuan, Pin V-175 Zhang, Lin I-231
Yuan, Yi IV-143 Zhang, Linli III-315
Yuan, Zhanliang III-237 Zhang, Linxian IV-335
Yufeng, Wu II-411 Zhang, Mei V-70

Zhang, Min I-251, II-305, III-165
Zhang, MingXu I-336
Zhang, Nan IV-308
Zhang, Qiang I-628
Zhang, Qianqian I-501
Zhang, Qimin I-1
Zhang, Qingshan I-530, II-496, IV-514
Zhang, Qiuyan II-310
Zhang, Rui II-598, III-41, V-59, V-65
Zhang, RuiLing III-490
Zhang, Shaomei I-72
Zhang, Shiqing I-437, I-443, I-595
Zhang, Shui-Ping II-335
Zhang, Songtao III-52
Zhang, Suohuai IV-27
Zhang, Tao I-449
Zhang, Wanfang III-175
Zhang, Wangjun II-539
Zhang, Wei III-170, V-581
Zhang, Wenbo V-1
Zhang, Wenfeng II-89
Zhang, Wenju V-202
Zhang, Xia V-396
Zhang, Xiaochen II-81
Zhang, Xiao-dong I-302
Zhang, XiaoJing III-273
Zhang, Xiaojing IV-253
Zhang, XiaoMei II-605
Zhang, Xin II-15
Zhang, XingPing IV-233
Zhang, XiPing IV-93
Zhang, XiuLi V-6
Zhang, Xiushan II-137
Zhang, Xuetong II-259
Zhang, Yan V-196
Zhang, Yang I-562
Zhang, Yi-Lai II-305
Zhang, Yingchen I-501
Zhang, Yonghong IV-382
Zhang, Yongmei III-339
Zhang, YuanYuan I-219, I-225
Zhang, YueHong V-273
Zhang, Yue-Ling II-335
Zhang, Yuhua III-160
Zhang, YuJie I-219, I-225
Zhang, Zhan IV-148, IV-203
Zhang, Zhengwei III-385
Zhang, ZhenLong I-381
Zhang, ZhenYou V-59, V-65
Zhang, Zhenzheng III-502
Zhang, Zifei II-137
Zhao, Chen III-150
Zhao, DanDan IV-274, IV-388
Zhao, Guorong I-606, I-612
Zhao, Hong III-129
Zhao, Jian III-532
Zhao, Jiantang I-519
Zhao, JiYin II-291
Zhao, Lang I-507
Zhao, Languang III-328
Zhao, Lei V-462
Zhao, Liang III-373
Zhao, Lin V-22
Zhao, Ling III-21
Zhao, Pengyuan II-466
Zhao, Song II-216, II-310
Zhao, Wen I-131
Zhao, Xiaoming I-437, I-443
Zhao, Xiaoping IV-583
Zhao, Xin III-41
Zhao, Yanna I-239, I-251, III-165
Zhao, YaQin IV-504
Zhao, Yisong IV-1
Zhao, Yun V-38
Zhao, Yunpeng II-532
Zhao, Zheng-gang II-452
Zhe, Zhao III-180
Zhen, Gao V-501
Zheng, Bin V-133
Zheng, Fanglin III-347
Zheng, Mingxia I-214
Zheng, Qun III-260
Zheng, Rongtian I-416
Zheng, WenTing I-628
Zheng, Xi-feng I-457
Zheng, Yanlin III-347
Zheng, Yeming III-1
Zheng, Yongsheng V-118
Zheng, Yuge II-367
Zheng, Yuhuang III-552
Zhi, Lin IV-127
Zhishui, Zhong II-418
Zhong, Guoqing V-26
Zhong, Hong I-489
Zhong, Luo V-477
Zhou, Chang IV-99
Zhou, Defu III-445
Zhou, Fang III-11
Zhou, Guixiang IV-514
Zhou, Hu I-83
Zhou, Hui II-500
Zhou, Jianguo IV-577
Zhou, Jun I-534
Zhou, Kai xi II-192
Zhou, Rui-jin V-467
Zhou, ShuKe I-29
Zhou, Xu II-513
Zhou, Xuexin IV-465
Zhou, Yingbing I-470
Zhou, You II-513
Zhou, Yu-cheng IV-22
Zhou, Yunming III-187, III-193
Zhu, Bo V-477
Zhu, Dengsheng II-341
Zhu, Dongbi III-407
Zhu, FengBo I-621, II-23
Zhu, Haibo III-207
Zhu, Jiajun III-580
Zhu, Mao IV-154
Zhu, Min I-408
Zhu, Mincong I-501, II-89
Zhu, Peifen IV-530
Zhu, Pei-jun V-196
Zhu, QingSheng III-538
Zhu, ShanLin I-621
Zhu, Wen-Xing II-162
Zhu, XiaoFang II-23
Zhu, XuFang II-23
Zhu, Xunzhi III-193
Zhu, YiHao III-232
Zhu, Ying I-390
Zhu, Yu-Ling IV-549
Zhu, Zhiliang V-221
Zhuang, Shuying III-587
Zong, Xueping III-46
Zou, Kun I-15
Zou, XianLin III-538
Zuo, Long II-452