Volume Editors
Song Lin
International Science & Education Researcher Association
Wuhan Branch, No.1, Jiangxia Road, Wuhan, China
E-mail: 1652952307@qq.com
Xiong Huang
International Science & Education Researcher Association
Wuhan Branch, No.1, Jiangxia Road, Wuhan, China
E-mail: 499780828@qq.com
The International Science & Education Researcher Association (ISER) puts its focus on the study and exchange of academic achievements of international teaching and research staff. It also promotes educational reform in the world. In addition, it serves as an academic discussion and communication platform, which is beneficial for education and scientific research, aiming to stimulate the interest of all researchers.
The CSEE-TMEI conference is an integrated event concentrating on the fields of computer science, environment, ecoinformatics, and education. The goal of the conference is to provide researchers working in these fields with a forum to share new ideas, innovations, and solutions. CSEE 2011-TMEI 2011 was held during August 21-22 in Wuhan, China, and was co-sponsored by the International Science & Education Researcher Association, Beijing Gireida Education Co. Ltd, and Wuhan University of Science and Technology, China. Renowned keynote speakers were invited to deliver talks, giving all participants a chance to discuss their work with the speakers face to face.
In these proceedings, you can learn more about the fields of computer science, environment, ecoinformatics, and education from the contributions of researchers from around the world. The main role of the proceedings is to serve as a means of exchanging information for those working in this area.
The Organizing Committee made a great effort to meet the high standards of Springer's Communications in Computer and Information Science (CCIS) series. Firstly, poor-quality papers were rejected after being reviewed by anonymous referees. Secondly, meetings were held periodically for reviewers to exchange opinions and suggestions. Finally, the organizing team held several preliminary sessions before the conference. Through the efforts of numerous people and departments, the conference was very successful.
During the organization, we received help from different people, departments, and institutions. Here, we would like to extend our sincere thanks to the publisher of the CCIS series, Springer, for their kind and enthusiastic help and support of our conference. Secondly, the authors should also be thanked for their submissions. Thirdly, the hard work of the Program Committee, the Program Chairs, and the reviewers is greatly appreciated.
In conclusion, it was the team effort of all these people that made our conference such a success. We welcome any suggestions that may help improve the conference and look forward to seeing all of you at CSEE 2012-TMEI 2012.
Honorary Chairs
Chen Bin Beijing Normal University, China
Hu Chen Peking University, China
Chunhua Tan Beijing Normal University, China
Helen Zhang University of Munich, Germany
Organizing Chairs
ZongMing Tu Beijing Gireida Education Co. Ltd, China
Jijun Wang Beijing Spon Technology Research Institution,
China
Quan Xiang Beijing Prophet Science and Education
Research Center, China
Publication Chairs
Song Lin International Science & Education Researcher
Association, China
Xiong Huang International Science & Education Researcher
Association, China
Co-sponsored by
International Science & Education Researcher Association, China
VIP Information Conference Center, China
Reviewers
Chunlin Xie Wuhan University of Science and Technology, China
Lin Qi Hubei University of Technology, China
Xiong Huang International Science & Education Researcher
Association, China
Gang Shen International Science & Education Researcher
Association, China
Xiangrong Jiang Wuhan University of Technology, China
Li Hu Linguistic and Linguistic Education
Association, China
Moon Hyan Sungkyunkwan University, Korea
Guang Wen South China University of Technology, China
Jack H. Li George Mason University, USA
Marry. Y. Feng University of Technology Sydney, Australia
Feng Quan Zhongnan University of Finance and
Economics, China
Peng Ding Hubei University, China
Song Lin International Science & Education Researcher
Association, China
XiaoLie Nan International Science & Education Researcher
Association, China
Zhi Yu International Science & Education Researcher
Association, China
Xue Jin International Science & Education Researcher
Association, China
Zhihua Xu International Science & Education Researcher
Association, China
Wu Yang International Science & Education Researcher
Association, China
Qin Xiao International Science & Education Researcher
Association, China
Weifeng Guo International Science & Education Researcher
Association, China
Li Hu Wuhan University of Science and Technology, China
Zhong Yan Wuhan University of Science and Technology, China
Haiquan Huang Hubei University of Technology, China
Xiao Bing Wuhan University, China
Brown Wu Sun Yat-Sen University, China
Convergence of the Stochastic Age-Structured Population System with Diffusion
1 Introduction
The theories of stochastic partial differential equations have been extensively many
areas, such as economics, finance and several areas of science and engineering and so
on. There have been mach researched in deterministic age-structured population with
diffusion system and discussed the existence, uniqueness, stability regularity and
localization of the solution of this system [1-3].
In recent years, it has become necessary to consider the random behavior of the birth-death process and the effects of stochastic environmental noise on age-structured population systems. Most papers are concerned with stochastic population systems. When the random element is considered, many results have been obtained for stochastic age-structured population systems. For instance, Zhang discussed the existence and uniqueness of solutions for a stochastic age-structured population system with diffusion [4]. When the diffusion of the population is not considered, Zhang studied the existence, uniqueness, and exponential stability of a stochastic age-dependent population system, and numerical analysis for stochastic age-dependent populations has been studied in [5-8]. Interest has also been growing in the study of stochastic differential equations with jumps, which are extensively used to model many of the phenomena arising in these areas [9-10].
In general, most stochastic age-structured population systems with Poisson jumps have no analytic solutions, so numerical approximation schemes are invaluable tools for exploring their properties. In this paper, a numerical analysis for the stochastic age-structured population system described by Eq. (1) is developed. The first contribution is to show that the semi-implicit Euler approximate solutions converge to the analytic solution. The second contribution is to consider the diffusion term div(P∇u). In particular, our results extend those in [6-8].
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 1-7, 2011.
© Springer-Verlag Berlin Heidelberg 2011
2 D. Ma and Q. Zhang
Let $Q = (0, A) \times (0, T)$, and let
$$V = \Big\{ \varphi \,\Big|\, \varphi \in L^2(O),\ \frac{\partial \varphi}{\partial x_i} \in L^2(O),\ \text{where } \frac{\partial \varphi}{\partial x_i} \text{ are generalized partial derivatives} \Big\}.$$
Then $V'$ is the dual space of $V$. We denote by $\|\cdot\|$ and $\|\cdot\|'$ the norms in $V$ and $V'$, respectively; by $\langle \cdot, \cdot \rangle$ the duality product between $V$ and $V'$; and by $(\cdot, \cdot)$ the scalar product in $H$. Let $(\Omega, \mathcal{F}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous, while $\mathcal{F}_0$ contains all $P$-null sets). In this paper, we consider the convergence of the stochastic system with diffusion $k\,\mathrm{div}(P\nabla u)$:

$$
\begin{cases}
\dfrac{\partial P}{\partial t} + \dfrac{\partial P}{\partial r} = k\,\mathrm{div}(P \nabla u) - \mu(r,t,x) P + f(r,t,x,P) + g(r,t,x,P)\dfrac{dW_t}{dt} + h(r,t,x,P)\dfrac{dN_t}{dt}, & \text{in } Q,\\[4pt]
P(0,t,x) = \displaystyle\int_0^A \beta(r,t,x) P(r,t,x)\, dr, & \text{in } (0,T),\\[4pt]
P(r,0,x) = P_0(r,x), & \text{in } (0,A),\\[4pt]
P(r,t,x) = 0, & \text{on } \Sigma_A = (0,A) \times (0,T),\\[4pt]
u(t,x) = \displaystyle\int_0^A P(r,t,x)\, dr, &
\end{cases}
\tag{1}
$$

where $\mathrm{div}$ is the divergence operator, $W(t)$ is a Wiener process, $N_t$ is a Poisson process, and $h(r,t,x,P)$ is its jump coefficient; $\mu(r,t,x)$ and $\beta(r,t,x)$ denote the mortality and fertility rates.
For system (1), the discrete-time semi-implicit Euler approximation on $t$ is defined by the iterative scheme

$$
\begin{aligned}
Q_{n+1} = Q_n &+ (1-\theta)\Big[{-\frac{\partial Q_n}{\partial r}} + k\,\mathrm{div}(Q_n \nabla u) - \mu(r,t,x) Q_n + f(r,t,x,Q_n)\Big]\Delta t\\
&+ \theta\Big[{-\frac{\partial Q_{n+1}}{\partial r}} + k\,\mathrm{div}(Q_{n+1} \nabla u) - \mu(r,t,x) Q_{n+1} + f(r,t,x,Q_{n+1})\Big]\Delta t\\
&+ g(r,t,x,Q_n)\,\Delta W_n + h(r,t,x,Q_n)\,\Delta N_n.
\end{aligned}
$$

Here $\theta \in [0,1]$, $Q_n$ is the approximation to $P(t_n, r, x)$ for $t_n = n\Delta t$, the time increment is $\Delta t = T/N$, the Brownian motion increment is $\Delta W_n = W(t_{n+1}) - W(t_n)$, and the Poisson process increment is $\Delta N_n = N(t_{n+1}) - N(t_n)$.
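To make the time stepping concrete, the following minimal sketch applies one semi-implicit (theta) Euler step per iteration to a scalar linear jump SDE. The equation, function name, and parameters here are illustrative stand-ins, not the age-structured system (1) itself, whose implicit step would additionally require a spatial solver.

```python
import numpy as np

def theta_euler_jump(mu, sigma, gamma, lam, x0, T, N, theta=0.5, seed=0):
    """Semi-implicit (theta) Euler path for the scalar linear jump SDE
    dX = mu*X dt + sigma*X dW + gamma*X dN, with N a Poisson process of
    rate lam. The drift is treated implicitly with weight theta."""
    rng = np.random.default_rng(seed)
    dt = T / N
    x = np.empty(N + 1)
    x[0] = x0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        dN = rng.poisson(lam * dt)               # Poisson increment
        # explicit part of the drift plus the stochastic increments ...
        expl = x[n] * (1.0 + (1.0 - theta) * mu * dt + sigma * dW + gamma * dN)
        # ... then solve the (here linear) implicit equation for x[n+1]
        x[n + 1] = expl / (1.0 - theta * mu * dt)
    return x

# With sigma = gamma = 0 the scheme reduces to the deterministic theta
# method, so the endpoint should approach x0 * exp(mu * T) as N grows.
path = theta_euler_jump(mu=-1.0, sigma=0.0, gamma=0.0, lam=0.0,
                        x0=1.0, T=1.0, N=1000)
```

With theta = 0.5 this is the trapezoidal (Crank-Nicolson-type) variant; theta = 0 recovers the fully explicit Euler scheme.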
Convergence of the Stochastic Age-Structured Population System with Diffusion
$$Z_1(t) = Z_1(t,r,x) = \sum_{k=0}^{N-1} Q_k\, I_{[k\Delta t,\,(k+1)\Delta t]}, \qquad Z_2(t) = Z_2(t,r,x) = \sum_{k=0}^{N-1} Q_{k+1}\, I_{[k\Delta t,\,(k+1)\Delta t]},$$
where $I_G$ is the indicator function of the set $G$ and $Z_1(t_k) = Z_2(t_{k-1}) = Q_k = Q(t_k, r, x)$.
To establish the convergence theorem we shall use the following assumptions:

(i) (Lipschitz condition) There exists a positive constant $K$ such that, for all $P_1, P_2 \in C$,
$$|f(r,t,x,P_1) - f(r,t,x,P_2)| \vee |g(r,t,x,P_1) - g(r,t,x,P_2)| \vee |h(r,t,x,P_1) - h(r,t,x,P_2)| \le K |P_1 - P_2|, \quad \text{a.e. } t;$$

(ii) $\mu(r,t,x)$ and $\beta(r,t,x)$ are continuous in $Q$ and such that
$$0 \le \mu_0 \le \mu(r,t,x) < \infty, \qquad 0 \le \beta(r,t,x) \le \beta_0 < \infty, \qquad 0 < k_0 \le k(r,t) \le k;$$

(iii) $f(r,t,x,0) = 0$, $g(r,t,x,0) = 0$, and $\|\nabla u\| < k$.
In this section, we will provide some lemmas and theorems which are necessary for the proof of the convergence of $Q_t$ to the analytic solution $P_t$ of this system. (We only discuss the continuous-time iterative scheme; the discrete-time scheme is similar.)
From the scheme, the energy estimate begins with

$$
\begin{aligned}
|Q_t|^2 \le |Q_0|^2 \;&-\; 2\int_0^t \Big\langle \frac{\partial Q_s}{\partial r},\, Q_s \Big\rangle ds \;+\; 2\int_0^t \big\langle k\,\mathrm{div}(Q_s \nabla u),\, Q_s \big\rangle ds \\
&- 2\int_0^t\!\!\int_O \mu\big((1-\theta) Z_1(s) + \theta Z_2(s)\big) Q_s \, dr\,dx\,ds \;+\; 2\int_0^t\!\!\int_O g(r,s,x,Z_1)\, Q_s \, dr\,dx\,dW_s \\
&+ 2\int_0^t\!\!\int_O \big((1-\theta) f(r,s,x,Z_1) + \theta f(r,s,x,Z_2)\big) Q_s \, dr\,dx\,ds.
\end{aligned}
$$

For the diffusion term,

$$
2\int_0^t \big\langle k\,\mathrm{div}(Q_s \nabla u),\, Q_s \big\rangle ds
\le 2k\int_0^t \|\nabla u\|^2\, ds + 2k\int_0^t |Q_s|^2\, \|\nabla u\|^2\, ds + k\int_0^t |Q_s|^2\, ds.
$$

By the assumptions and the properties of the operator, there exist $k_1 = (2k^3+1)k$ and $k_2 = 2k^3 k$ such that

$$
2\int_0^t \big\langle k\,\mathrm{div}(Q_s \nabla u),\, Q_s \big\rangle ds \le k_1 \int_0^t |Q_s|^2\, ds + k_2. \tag{$\star$}
$$
Applying ($\star$) and $-2\int_0^t \langle \partial Q_s/\partial r,\, Q_s\rangle\, ds \le A^2 \int_0^t |Q_s|^2\, ds$, we have

$$
\begin{aligned}
|Q_t|^2 \le |Q_0|^2 &+ A^2 \int_0^t |Q_s|^2\, ds + k_1 \int_0^t |Q_s|^2\, ds + k_2 + 2\beta_0 \int_0^t |Z_1|^2\, ds + 2\beta_0 \int_0^t |Z_2|^2\, ds \\
&+ 2\int_0^t |f(r,s,x,Z_1(s))|^2\, ds + 2\int_0^t |f(r,s,x,Z_2(s))|^2\, ds + (1 + \mu_0 + \beta_0)\int_0^t |Q_s|^2\, ds \\
&+ 2\int_0^t\!\!\int_O g(r,s,x,Z_1)\, Q_s\, dr\,dx\,dW_s + \int_0^t \|g(r,s,x,Z_1)\|_2^2\, ds + \int_0^t |h(r,s,x,Z_1(s))|^2\, ds \\
&+ \int_0^t |h(r,s,x,Z_1(s))|^2\, ds + 2\int_0^t\!\!\int_O h(r,s,x,Z_1)\, Q_s\, dr\,dx\,d\tilde N_s.
\end{aligned}
$$

Taking expectations and suprema, the martingale terms

$$
2E\sup_{0\le t_1\le t}\Big|\int_0^{t_1}\!\!\int_O g(r,s,x,Z_1)\, Q_s\, dr\,dx\,dW_s\Big|, \qquad
2E\sup_{0\le t_1\le t}\Big|\int_0^{t_1}\!\!\int_O h(r,s,x,Z_1)\, Q_s\, dr\,dx\,d\tilde N_s\Big|, \qquad
E\sup_{0\le t_1\le t}\int_0^{t_1} |h(r,s,x,Z_1)|^2\, dN_s
$$

are estimated by the Burkholder-Davis-Gundy inequality:

$$
2E\sup_{0\le t_1\le t}\Big|\int_0^{t_1}\!\!\int_O g(r,s,x,Z_1)\, Q_s\, dr\,dx\,dW_s\Big|
\le \frac{1}{8} E\Big[\sup_{0\le t_1\le t} |Q_s|^2\Big] + K_1 \int_0^t \|g(r,s,x,Z_1)\|_2^2\, ds
\le \frac{1}{8} E\Big[\sup_{0\le t_1\le t} |Q_s|^2\Big] + K_2 K_1 \int_0^t E|Z_1|^2\, ds.
$$

In the same way, we obtain

$$
2E\sup_{0\le t_1\le t}\Big|\int_0^{t_1}\!\!\int_O h(r,s,x,Z_1)\, Q_s\, dr\,dx\,d\tilde N_s\Big|
\le \frac{1}{8} E\Big[\sup_{0\le t_1\le t} |Q_s|^2\Big] + K_2 K_2 \int_0^t E|Z_1|^2\, ds,
\qquad \text{since } |Z_i| \le \sup_{0\le t_1\le t} |Q_s|,\ i = 1, 2.
$$

Collecting the estimates gives

$$
E\sup_{0\le t_1\le t} |Q_t|^2
\le \Big(A^2 + \mu_0 + 1 + 5\beta_0 + (5 + 2K_1 + 2 + 3K_2) K_2\Big) \int_0^t E\sup_{0\le t_1\le t} |Q_s|^2\, ds + \frac{3}{2} E|Q_0|^2 + k_2,
$$

and the boundedness of $E\sup_{0\le t_1\le t} |Q_t|^2$ follows from the Gronwall inequality.
Theorem 3.2. Under the assumptions, for each $t \in [0, T]$,
$$E\big[|Q_t - Z_1(t)|^2\big] \le C_2\, \Delta t, \qquad E\big[|Q_t - Z_2(t)|^2\big] \le C_3\, \Delta t.$$
Proof. For arbitrary $t \in [0,T]$ there exists $k$ such that $t \in [k\Delta t, (k+1)\Delta t]$, so we have

$$
\begin{aligned}
|Q_t - Z_1(t)|^2 \le\ & 6\Delta t \int_{k\Delta t}^{t} \Big|\frac{\partial Q_s}{\partial r}\Big|^2 ds + 6k^2\Delta t\, k_1 \int_{k\Delta t}^{t} |Q_s|^2\, ds + 6k^2 \Delta t\, k_2 \\
&+ 6\mu^2 \Delta t \int_{k\Delta t}^{t} |(1-\theta) Z_1(s) + \theta Z_2(s)|^2\, ds + 12\mu^2 \Delta t \int_{k\Delta t}^{t} |h(r,s,x,Z_1(s))|^2\, ds \\
&+ 12\Delta t \int_{k\Delta t}^{t} |f(r,s,x,Z_1(s))|^2\, ds + 12\Delta t \int_{k\Delta t}^{t} |f(r,s,x,Z_2(s))|^2\, ds \\
&+ 6\Big|\int_{k\Delta t}^{t} g(r,s,x,Z_1(s))\, dW_s\Big|^2 + 12\Big|\int_{k\Delta t}^{t} h(r,s,x,Z_1(s))\, d\tilde N_s\Big|^2.
\end{aligned}
$$
As before, the martingale terms are bounded via the Burkholder-Davis-Gundy inequality:

$$
2E\sup_{0\le t_1\le t}\Big|\int_0^{t_1}\!\!\int_O \big(g(r,s,x,P_s) - g(r,s,x,Z_1)\big)(P_s - Q_s)\, dr\,dx\,dW_s\Big|
\le \frac{1}{8} E\Big[\sup_{0\le t_1\le t} |P_s - Q_s|^2\Big] + K_2 K_5 \int_0^t E|P_s - Z_1|^2\, ds,
$$

$$
2E\sup_{0\le t_1\le t}\Big|\int_0^{t_1}\!\!\int_O \big(h(r,t,x,P_s) - h(r,s,x,Z_1)\big)(P_s - Q_s)\, dr\,dx\,d\tilde N_s\Big|
\le \frac{1}{8} E\Big[\sup_{0\le t_1\le t} |P_s - Q_s|^2\Big] + K_6 K_2 \int_0^t E|P_t - Z_1|^2\, ds.
$$

Applying Theorem 3.2, we have

$$
E\sup_{0\le t_1\le t} |P_t - Q_t|^2 \le C_5\, \Delta t + C_6 \int_0^t E\sup_{0\le t_1\le t} |P_t - Q_t|^2\, dt,
$$

and the convergence of the semi-implicit Euler approximation follows from the Gronwall inequality.
References
1. Hernandez, G.E.: Age-density dependent population dispersal in R^N. Mathematical Biosciences 149, 37-56 (1998)
2. Hernandez, G.E.: Existence of solutions in a population dynamics problem. J. Appl. Math. 509, 43-48 (1986)
3. Hernandez, G.E.: Localization of age-dependent anti-crowding populations. Q. Appl. Math. 53, 35 (1995)
4. Zhang, Q., Han, C.Z.: Existence and uniqueness for a stochastic age-structured population system with diffusion. Science Direct 32, 2197-2206 (2008)
5. Zhang, Q., Liu, W., Nie, Z.: Existence, uniqueness and exponential stability of stochastic age-dependent population. Appl. Math. Comput. 154, 183-201 (2004)
6. Zhang, Q., Han, C.Z.: Convergence of numerical solutions to stochastic age-structured population system with diffusion. Applied Mathematics and Computation 07, 156 (2006)
7. Zhang, Q.: Exponential stability of numerical solutions to a stochastic age-structured population system with diffusion. Journal of Computational and Applied Mathematics 220, 22-33 (2008)
8. Zhang, Q., Han, C.Z.: Numerical analysis for stochastic age-dependent population equations. Appl. Math. Comput. 176, 210-223 (2005)
9. Gardon, A.: The order of approximations for solutions of Itô-type stochastic differential equations with jumps. Stochastic Analysis and Applications 38, 753-769 (2004)
Parallel Computer Processing Systems Are
Better Than Serial Computer Processing
Systems
1 Introduction
Serial computer processing systems are characterized by executing software using a single central processing unit (CPU), while parallel computer processing systems use multiple CPUs simultaneously. Some of the arguments that have been used to explain why parallel is better than serial are: saving time and/or money and solving larger problems. Besides that, there are limits to serial computer processing systems due to transmission speeds, limits to miniaturization, and economic limitations. However, we would like to be more precise and give a definitive and unquestionable formal proof to justify the claim that parallel computer processing systems are better than serial computer processing systems. The main objective and contribution of this paper consists in using a formal and mathematical approach to prove that parallel computer processing systems are better than serial computer processing systems (better with respect to saving time and/or money and being able to solve larger problems). This is achieved thanks to the theory of Lyapunov stability and max-plus algebra applied to discrete event systems modeled with timed Petri nets. The paper is organized as follows. Sections 2 and 3 provide the mathematical results utilized in the paper about Lyapunov theory for discrete event systems modeled with Petri nets and max-plus algebra (for a detailed exposition see [1] and [2]). In Section 4, the solution to the stability problem for discrete event systems modeled with timed Petri nets using a Lyapunov, max-plus algebra approach is given. Section 5 applies the theory presented in the
$$\Delta v = u^T A \Phi \le 0 \tag{1}$$

$$A \Phi \le 0 \tag{3}$$

Remark 1. Notice that since the state space of a TPN (timed Petri net) is contained in the state space of the same, now untimed, PN, stability of the PN implies stability of the TPN.

$$\Delta v = A^T u \le 0 \tag{4}$$
$$A^+ = \bigoplus_{k=1}^{\infty} A^{\otimes k},$$
where the element $[A^+]_{ji}$ gives the maximal weight of any path from $j$ to $i$. If in addition one wants to add the possibility of staying at a node, then one must include the matrix $E$ in the definition of the matrix $A^+$, giving rise to its Kleene star representation, defined by
$$A^* = \bigoplus_{k=0}^{\infty} A^{\otimes k}. \tag{5}$$

Lemma 2. Let $A \in \mathbb{R}^{n \times n}_{\max}$ be such that any circuit in the communication graph $G(A)$ has average circuit weight less than or equal to $0$. Then it holds that
$$A^* = \bigoplus_{k=0}^{n-1} A^{\otimes k}. \tag{6}$$
$$A \otimes v = \lambda \otimes v \tag{7}$$

$$\lambda(A) = \max_{p \in C(A)} \frac{|p|_w}{|p|_1} \tag{8}$$
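The maximum cycle mean of formula (8) can be evaluated by brute force for small matrices using max-plus matrix powers: every circuit of length k contributes a diagonal entry of the k-th max-plus power. The function names and the example matrix below are illustrative choices of ours, not material from [2].

```python
import numpy as np

NEG_INF = -np.inf  # the max-plus "zero" element, usually written epsilon

def mp_mul(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), NEG_INF)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def mp_eigenvalue(A):
    """Maximum cycle mean lambda(A) = max over circuits p of |p|_w / |p|_1,
    read off the diagonals of the first n max-plus powers of A."""
    n = A.shape[0]
    P, lam = A.copy(), NEG_INF
    for k in range(1, n + 1):
        lam = max(lam, np.max(np.diag(P)) / k)  # circuits of length k
        P = mp_mul(P, A)
    return lam

# Two-node example: loop at node 1 has weight 3, loop at node 2 has
# weight 1, and the circuit 1 -> 2 -> 1 has mean (2 + 0) / 2 = 1.
A = np.array([[3.0, 2.0], [0.0, 1.0]])
```

For this matrix the maximum cycle mean is 3, attained by the self-loop at the first node.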
With any timed event Petri net, matrices $A_0, A_1, \dots, A_M \in \mathbb{N}^n \times \mathbb{N}^n$ can be defined by setting $[A_m]_{jl} = a_{jl}$, where $a_{jl}$ is the largest of the holding times with respect to all places between transitions $t_l$ and $t_j$ with $m$ tokens, for $m = 0, 1, \dots, M$, with $M$ equal to the maximum number of tokens with respect to all places. Let $x_i(k)$ denote the $k$th time that transition $t_i$ fires; then the vector $x(k) = (x_1(k), x_2(k), \dots, x_m(k))^T$, called the state of the system, satisfies the $M$th-order recurrence equation
$$x(k) = \bigoplus_{m=0}^{M} A_m \otimes x(k-m), \quad k \ge 0.$$
Now, assuming that all the hypotheses of Theorem 5 are satisfied, and setting $\hat x(k) = (x^T(k), x^T(k-1), \dots, x^T(k-M+1))^T$, the recurrence can be expressed as
$$\hat x(k+1) = \hat A \otimes \hat x(k), \quad k \ge 0,$$
which is known as the standard autonomous equation.
Definition 6. A TPN is said to be stable if all the transitions fire with the same proportion, i.e., if there exists $q \in \mathbb{N}$ such that
$$\lim_{k\to\infty} \frac{x_i(k)}{k} = q, \quad i = 1, \dots, n. \tag{9}$$

Lemma 3. Consider the recurrence relation $x(k+1) = A \otimes x(k)$, $k \ge 0$, with $x(0) = x_0 \in \mathbb{R}^n$ arbitrary, $A$ an irreducible matrix, and $\lambda \in \mathbb{R}$ its eigenvalue. Then
$$\lim_{k\to\infty} \frac{x_i(k)}{k} = \lambda, \quad i = 1, \dots, n. \tag{10}$$
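Lemma 3 can be checked numerically: iterating the recurrence $x(k+1) = A \otimes x(k)$ and dividing by the iteration count approaches the max-plus eigenvalue in every component. The matrix below is an illustrative example of ours whose maximum cycle mean is 3.

```python
import numpy as np

def mp_matvec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A_ij + x_j)."""
    return np.array([np.max(A[i, :] + x) for i in range(A.shape[0])])

# Irreducible example; its maximum cycle mean (the eigenvalue) is 3,
# attained by the self-loop of weight 3 at the first node.
A = np.array([[3.0, 2.0], [0.0, 1.0]])
x = np.zeros(2)
K = 200
for _ in range(K):
    x = mp_matvec(A, x)
rates = x / K  # x_i(K) / K should approach the eigenvalue lambda = 3
```

After a transient of a few steps, both components grow by exactly 3 per iteration, in line with formula (10).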
Now, starting with an unstable TPN and collecting the results given by Proposition 2, the discussion above on recurrence equations for TPNs, the previous Lemma 3, and Theorem 3, the solution to the problem is obtained. In this section, the main objective of this manuscript, which consists in giving a precise and definitive answer to the question of why parallel computer processing systems are preferred to serial computer processing systems (better with respect to saving time and/or money and being able to solve larger problems), is presented.
Fig. 1.
d: the problem has been solved. The places (which represent the states of the serial computer processing system) are: A: problem loading, P: the problems are waiting for a solution, B: the problem is being solved, I: the CPU of capacity Cd is idle. The holding times associated to the places A and I are Ca and Cd respectively (with Ca > Cd).

Remark 2. Notice that Ca, the size of q, is the time it takes for a problem to be completely loaded into the computer in order to be solved; larger problems will have larger Ca's, while Cd, the capacity of the CPU, is the time it takes the CPU to reset.
The incidence matrix that represents the PN model is
$$A = \begin{pmatrix} 0 & 1 & 0 & 0\\ 0 & -1 & 1 & -1\\ 0 & 0 & -1 & 1 \end{pmatrix}.$$
Therefore, since there does not exist a strictly positive m-vector $\Phi$ such that $A\Phi \le 0$, the sufficient condition for stability is not satisfied. Moreover, the PN (TPN) is unbounded since, by the repeated firing of q, the marking in P grows indefinitely, i.e., the amount of problems that require a solution accumulates. However, by taking $u = [k, k, k]^T$ with $k > 0$ (but unknown), we get that $A^T u \le 0$. Therefore, the PN is stabilizable, which implies that the TPN is stable. Now, let us proceed to determine the exact value of k. From the TPN model we obtain that
$$\hat A = A_0 \oplus A_1 = \begin{pmatrix} Ca & \varepsilon \\ Ca & Cd \end{pmatrix}. \qquad\text{Therefore,}\qquad \lambda(\hat A) = \max_{p \in C(\hat A)} \frac{|p|_w}{|p|_1} = \max\{Ca, Cd\} = Ca.$$
This means that in order for the TPN to be stable and work properly, the speed at which the serial computer processing system works has to be equal to Ca or, being more precise, all the transitions must fire at the same speed as the problems arrive, i.e., the problems have to be solved as soon as they are loaded into the computer, which is attained by setting k = Ca. In particular, transition s, which is related to the execution time of the CPU, has to be fired at a speed equal to Ca.
Fig. 2.
Remark 3. The previous analysis is easily extended to the case with n CPUs, obtaining that u = [Ca, Ca/n, Ca/n, ..., Ca/n], which translates into the condition that the transitions s1, s2, ..., sn have to be fired at a speed equal to Ca/n.

Summary 7. The parallel computer processing system works properly if transitions s1, s2, ..., sn fire at a speed equal to Ca/n, which implies that the execution frequency of the CPUs has to be equal to Ca/n.
5.3 Comparison
As a result of Summaries (6) and (7), the following facts are deduced:
1. (Saving time) It is possible to solve a problem of size Ca with one CPU, which takes time Ca, or there is the option of solving a problem of size nCa (or n problems of size Ca each) using n CPUs, which will take the same time as with one CPU.
2. (Saving money) In order to execute a program, there is the option of purchasing one CPU that costs Ca or n CPUs that cost Ca/n. This is significant for large Ca.
3. (Solving larger problems) If Ca increases due to the fact that the problem to be solved becomes larger, then this will result in an increment in the CPU's execution frequency. As a consequence, the serial computer processing option becomes expensive and/or slow. This is also true for the parallel computer processing alternative; however, by distributing Ca among the n CPUs, the economic and/or time impact turns out to be much lower.
References
1. Retchkiman, Z.: Stability theory for a class of dynamical systems modeled with
Petri nets. International Journal of Hybrid Systems 4(1) (2005)
2. Heidergott, B., Olsder, G.J., van der Woude, J.: Max Plus at Work. Princeton
University Press, Princeton (2006)
3. Baccelli, F., Cohen, G., Olsder, G.J., Quadrat, J.P.: Synchronization and Linearity,
Web-edition (2001)
Smooth Path Algorithm Based on A* in Games
1 Introduction
In games, we often use a regular grid diagram to represent the game map; these grids divide the game map, at a certain proportion or resolution, into small cells, each cell called a node. Based on a grid game map, pathfinding's main purpose is to find the shortest and lowest-cost path according to the different terrain and obstacles. Many games use the A* algorithm as their pathfinding strategy, such as typical RTS and RPG games. Due to the characteristics of game software itself, its pathfinding algorithm has additional requirements: the searching time should be short, the path found should be smooth and realistic, etc. Therefore, the standard A* algorithm needs many improvements before being used in games. Aiming at the special requirements of pathfinding in games, this paper analyses various improvement methods and proposes a smooth path generation algorithm.
16 X. Xu and K. Zou
the starting point to the goal in this diagram. This search process is called state space search. Common state space search methods include depth-first search (DFS) and breadth-first search (BFS). BFS first searches the initial state's layer, and then the next layer, until the goal is found. DFS searches one branch in a certain order, and then another branch, until the goal is found. A big flaw of both breadth-first and depth-first search is that they exhaustively enumerate a given state space; this can be adopted when the state space is small, but is undesirable when the state space is very big or unpredictable, since they will have much lower efficiency and may even fail to complete pathfinding. In this case we should use heuristic pathfinding. Heuristic pathfinding estimates each search position in the state space, obtains the lowest-cost position, and continues searching from this position toward the target. This may omit many useless path searches and improve efficiency. In heuristic pathfinding, the estimated cost of a position is very important, and using different heuristic functions will produce different effects [2]. The heuristic function's general form is as follows:

f(n) = g(n) + h(n) (1)

Here, g(n) is the actual cost from the starting point to node n, and h(n) is the estimated cost from node n to the goal. g(n) is known: it can be calculated by reverse tracking from node n to the starting point in accordance with the pointers to the parents, then accumulating all the edge costs on the path. So the heuristic information of f(n) relies mainly on the function h(n) [3]. According to certain known conditions of the state space, the heuristic search will select the node with minimum cost to expand, then continue searching from this node, until the goal is reached or the search fails; nodes that are not expanded need not be searched. The design of the function h(n) directly impacts whether this heuristic algorithm qualifies as an A* algorithm [4].
The function h(n) in the A* algorithm usually adopts the classic Manhattan heuristic function: take the absolute difference of the abscissas of the current node and the goal, the absolute difference of their ordinates, and add the two absolute values together. Its primary drawback is that in an eight-way pathfinder this method is inadmissible, so it is not guaranteed to find the shortest path. Also, because it is overweighted compared to g, it will overpower any small modifiers that you try to add to the calculation of g, like, say, a turning penalty or an influence map modifier. So we need to find a more suitable heuristic function. When choosing a heuristic function, we still need to consider the amount of calculation; therefore we should make a compromise between the precision of the function and its computational cost. Here we adopt an improved heuristic function shown below:

h(n) = max(fabs(dest.x - current.x), fabs(dest.y - current.y)) (2)

This heuristic function satisfies the admissibility condition and guarantees finding the shortest path from the starting point to the goal.
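A minimal grid A* using the heuristic of formula (2) can be sketched as follows. The grid encoding and function names are our own illustrative choices, and diagonal steps are given unit cost so that h(n) = max(|dx|, |dy|) stays admissible.

```python
import heapq

def astar(grid, start, goal):
    """Grid A* with 8-way movement of unit step cost and the heuristic of
    formula (2): h(n) = max(|dx|, |dy|). grid[y][x] == 1 marks an obstacle;
    nodes are (x, y) tuples. Returns the node list from start to goal."""
    H, W = len(grid), len(grid[0])
    h = lambda p: max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))
    open_heap = [(h(start), 0, start)]           # entries are (f, g, node)
    parent, best_g = {start: None}, {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:                          # rebuild via parent pointers
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = cur[0] + dx, cur[1] + dy
                if not (0 <= nx < W and 0 <= ny < H) or grid[ny][nx]:
                    continue
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    parent[(nx, ny)] = cur
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # goal unreachable

free = [[0] * 5 for _ in range(5)]
path = astar(free, (0, 0), (4, 4))
```

On the empty 5x5 map the optimal route is four diagonal steps, so the returned path contains five nodes.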
In order to improve search efficiency, we preprocess the unreachable areas in the game map. These areas may be separated by a bar obstacle, for example the far side of a river, or may be surrounded by walls, etc. For such terrain, the A* algorithm would probe all neighbour nodes around the unreachable destination until failure, wasting a lot of time. Instead, we put all unreachable nodes into an unreachable list beforehand, and check whether the destination is in the unreachable list before pathfinding. For the Open table, we adopt a binary heap to enhance the efficiency of the algorithm. The specific algorithm flow chart is shown below:
Fig. 1. A* algorithm flow chart, adopting binary heaps for the Open table and preprocessing the unreachable areas in the game map before searching
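The unreachable-list preprocessing can be sketched as a flood fill that labels the region reachable from a given cell; a destination outside that region is rejected before A* ever runs. The 4-way connectivity and the names below are illustrative assumptions, not the paper's exact implementation.

```python
from collections import deque

def reachable_from(grid, start):
    """Flood-fill the set of cells reachable from `start` (4-way here for
    simplicity). grid[y][x] == 1 marks an obstacle; cells outside the
    returned set can be rejected before running A*."""
    H, W = len(grid), len(grid[0])
    seen, q = {start}, deque([start])
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < W and 0 <= ny < H and not grid[ny][nx] \
                    and (nx, ny) not in seen:
                seen.add((nx, ny))
                q.append((nx, ny))
    return seen

# A full-height wall in column 2 splits the map, so (4, 0) is
# unreachable from (0, 0) and would be rejected immediately.
grid = [[0, 0, 1, 0, 0] for _ in range(3)]
region = reachable_from(grid, (0, 0))
```

Checking `destination in region` is an O(1) lookup, replacing a doomed exhaustive search.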
The path generated by the A* algorithm is a set of discrete nodes {N1, N2, N3, ..., NP-1, NP}. If a game character moves along these nodes directly, it will encounter many twists, the path is too long, and following it is quite time-consuming. We can adopt a key-point optimization strategy, namely select a limited number of key points to represent the whole path node set. There are many methods to select the key points: we can select each direction-change point as a key point and omit all other nodes. Alternatively, we can scan the original path node set {N1, N2, N3, ..., NP-1, NP}: if the segment between any two nodes does not cross any obstacle (assuming the grid map is known), then all nodes between these two nodes can be omitted, and only the two endpoints are preserved. After such processing, the path generated by the A* algorithm is constituted by a limited number of key points; the path length is much reduced, the tracking time drops sharply, and the result is convenient for further smooth path design (the experimental results are shown below) [5].
Fig. 2. The left figure shows the node sequence generated by the A* algorithm; the right figure shows the key-point node sequence after optimization
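The second key-point selection method described above (dropping every node between two nodes whose connecting segment is obstacle-free) can be sketched as follows. The dense-sampling line-of-sight test is our own simplification, not the paper's exact check.

```python
def line_of_sight(grid, a, b):
    """True if the straight segment a-b crosses no blocked cell, sampled
    densely along the segment (a simple, conservative check)."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0)) * 2 + 1
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + (x1 - x0) * t)
        y = round(y0 + (y1 - y0) * t)
        if grid[y][x]:
            return False
    return True

def prune_keypoints(grid, path):
    """Greedy key-point selection: from each kept node, jump to the
    farthest later node still in line of sight, dropping the nodes
    in between."""
    keep, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(grid, path[i], path[j]):
            j -= 1
        keep.append(path[j])
        i = j
    return keep

free = [[0] * 6 for _ in range(6)]
zigzag = [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 5)]
key = prune_keypoints(free, zigzag)
```

On an obstacle-free map every intermediate node is in line of sight, so only the two endpoints survive as key points.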
After finishing the key-point optimization, we can apply Catmull-Rom splines to interpolate the key points; by adding several interpolation points between the key points, the whole path looks smoother.
Catmull-Rom splines are a family of cubic interpolating splines formulated such that the tangent at each point $p_i$ is calculated using the previous and next points on the spline, $\tau(p_{i+1} - p_{i-1})$. The geometry matrix is given by

$$p(u) = \begin{bmatrix} 1 & u & u^2 & u^3 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 & 0\\ -\tau & 0 & \tau & 0\\ 2\tau & \tau-3 & 3-2\tau & -\tau\\ -\tau & 2-\tau & \tau-2 & \tau \end{bmatrix} \begin{bmatrix} p_{i-2}\\ p_{i-1}\\ p_i\\ p_{i+1} \end{bmatrix} \tag{3}$$

Catmull-Rom splines have C1 continuity, local control, and interpolation, but do not lie within the convex hull of their control points.
Smooth Path Algorithm Based on A* in Games 19
Note that the tangent at point p0 is not clearly defined; oftentimes we set it to (p1 - p0), although this is not necessary for the assignment (you can just assume the curve does not interpolate its endpoints). The parameter τ is known as tension, and it affects how sharply the curve bends at the (interpolated) control points (Figure 4). It is often set to 0.5, but you can use any reasonable value.
The Catmull-Rom formula requires four input point coordinates; the result is the point approximately a fraction u of the way from the second point to the third point. When τ = 0.5, the computation formula is as follows [6]:

$$
\begin{aligned}
p(u) = {}& p_{i-2}\,(-0.5u + u^2 - 0.5u^3) + p_{i-1}\,(1 - 2.5u^2 + 1.5u^3)\\
&+ p_i\,(0.5u + 2u^2 - 1.5u^3) + p_{i+1}\,(-0.5u^2 + 0.5u^3). \hspace{4em} (4)
\end{aligned}
$$
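Formula (4) can be evaluated directly. In the sketch below (function names are ours), the first and last key points are duplicated so the curve also passes through the path's endpoints, matching the note above about the undefined end tangents.

```python
def catmull_rom(p0, p1, p2, p3, u):
    """Evaluate the tau = 0.5 Catmull-Rom blend of formula (4): the curve
    runs from p1 (u = 0) to p2 (u = 1), using p0 and p3 only for tangents."""
    b0 = -0.5 * u + u * u - 0.5 * u ** 3
    b1 = 1.0 - 2.5 * u * u + 1.5 * u ** 3
    b2 = 0.5 * u + 2.0 * u * u - 1.5 * u ** 3
    b3 = -0.5 * u * u + 0.5 * u ** 3
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth(points, samples=8):
    """Insert `samples` interpolated points between consecutive key points,
    duplicating the endpoints so the curve interpolates them as well."""
    pts = [points[0]] + list(points) + [points[-1]]
    out = []
    for i in range(1, len(pts) - 2):
        for s in range(samples):
            out.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1],
                                   pts[i + 2], s / samples))
    out.append(points[-1])
    return out

curve = smooth([(0.0, 0.0), (2.0, 3.0), (5.0, 3.0), (7.0, 0.0)])
```

Because the blend weights at u = 0 are (0, 1, 0, 0), every key point itself lies on the smoothed curve.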
Fig. 5. Catmull-Rom smooth path production. The left figure shows the key points (marked as red circles); the right figure shows the smooth path.
5 Conclusion
This paper analyzed the standard A* algorithm and proposed a kind of improved strategy. The algorithm adopts an admissible heuristic function and performs key-point optimization on the node sequence; based on the key-point sequence, the paper realizes a kind of smooth path generation using Catmull-Rom splines. The generated search path is reflected in the game's actual path effect and embodies a certain intelligence and humanization. In terms of execution efficiency, however, it pays the price of additional storage space and CPU time. Future games need more intelligent, more user-friendly game characters; we therefore hope that better algorithms can be found to solve the problems in game pathfinding.
References
1. Tao, Z.H., Hang, C.Y.: Path finding using A* algorithm. Micro Computer Information 23(17), 238-240 (2007)
2. Lester, P.: A* pathfinding for beginners (2005), http://www.policyalmanac.org/games/aStarTutorial.htm
3. Heping, C., Qianshao, Z.: Application and implementation of A* algorithms in the game map pathfinding. Computer Applications and Software 22(12), 118-120 (2005)
4. Lester, P.: Using binary heaps in A* pathfinding (2003), http://www.policyalmanac.org/games/binaryHeaps.htm
5. Wei, S., Zhengda, M.: Smooth path design for mobile service robots based on improved A* algorithm. Journal of Southeast University (Natural Science Edition) 40 sup(I) (September 2010)
6. Deloura, M.: Game Programming Gems. Charles River Media, London (2000)
7. Higgins, D.F.: Pathfinding design architecture. In: AI Game Programming Wisdom. Charles River Media, London (2002)
The Features of Biorthogonal Binary Poly-scale Wavelet
Packs in Bidimensional Function Space
the case of spline wavelets and so on. Tensor product multivariate wavelet packs have been constructed by Coifman and Meyer. The introduction of the notion of non-tensor-product wavelet packs is attributed to Shen [5]. Since the majority of information is multidimensional, many researchers have interested themselves in the investigation of multivariate wavelet theory. But there exist obvious defects in this method, such as scarcity of designing freedom. Therefore, it is significant to investigate nonseparable multivariate wavelet theory. Nowadays, since there is little literature on biorthogonal wavelet packs, it is necessary to investigate biorthogonal wavelet packs.
In the following, we introduce some notation. $\mathbb{Z}$ and $\mathbb{Z}_+$ denote all integers and all nonnegative integers, respectively. $\mathbb{R}$ denotes all real numbers. $\mathbb{R}^2$ denotes the 2-dimensional Euclidean space. $L^2(\mathbb{R}^2)$ denotes the square integrable function space. Let $x = (x_1, x_2) \in \mathbb{R}^2$, $\omega = (\omega_1, \omega_2) \in \mathbb{R}^2$, $k = (k_1, k_2) \in \mathbb{Z}^2$, $z_1 = e^{-i\omega_1/2}$, $z_2 = e^{-i\omega_2/2}$,
$$\langle f, g \rangle = \int_{\mathbb{R}^2} f(x)\,\overline{g(x)}\, dx, \qquad \hat f(\omega) = \int_{\mathbb{R}^2} f(x)\, e^{-i\omega\cdot x}\, dx,$$
$$\langle f_n, \tilde f_v \rangle = \delta_{n,v}\, I_s, \quad n, v \in \mathbb{Z}^2. \tag{1}$$
A theorem [8] for higher-dimensional wavelets with arbitrary dilation matrices has
been given. Let $f(x)\in L^2(R^2)$ satisfy the following refinement equation:

$$f(x)=m^2\sum_{k\in Z^2} b_k\, f(mx-k), \eqno(2)$$

where $\{b(n)\}_{n\in Z^2}$ is a finitely supported real sequence and $f(x)$ is called a
scaling function. Formula (2) is called the two-scale refinement equation. The
frequency form of (2) can be written as

$$\hat f(\omega)=B(z_1,z_2)\,\hat f(\omega/m), \eqno(3)$$

where

$$B(z_1,z_2)=\sum_{(n_1,n_2)\in Z^2} b(n_1,n_2)\, z_1^{n_1} z_2^{n_2}. \eqno(4)$$

Define a subspace $V_j\subset L^2(R^2)$ $(j\in Z)$ by

$$V_j=\mathrm{clos}_{L^2(R^2)}\,\mathrm{span}\{\, m^j f(m^j x-k): k\in Z^2 \,\}. \eqno(5)$$
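As a concrete instance of the refinement equation (2) with dilation $m=2$, the separable Haar scaling function, the indicator of the unit square, satisfies (2) with the four coefficients $b_k=1/4$, $k\in\{0,1\}^2$, so that $B(1,1)=1$. The sketch below is our own illustration, not part of the paper; it verifies the identity pointwise on a grid.

```python
import numpy as np

def f(x, y):
    # 2-D Haar scaling function: indicator of the unit square [0,1)^2
    return ((0 <= x) & (x < 1) & (0 <= y) & (y < 1)).astype(float)

# Refinement equation (2) with m = 2: f(x) = m^2 * sum_k b_k f(mx - k),
# where b_k = 1/4 for k in {0,1}^2, i.e. B(z1,z2) = (1+z1)(1+z2)/4 and B(1,1) = 1.
xs, ys = np.meshgrid(np.linspace(-0.5, 1.5, 101), np.linspace(-0.5, 1.5, 101))
rhs = 4 * sum(0.25 * f(2 * xs - k1, 2 * ys - k2)
              for k1 in (0, 1) for k2 in (0, 1))
assert np.array_equal(f(xs, ys), rhs)   # (2) holds at every grid point
```

The four dilated copies have disjoint supports, which is why the identity holds exactly here; for other masks the check would only hold up to the cascade-algorithm limit.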
$$\hat g_\lambda(\omega)=Q^{(\lambda)}(z_1,z_2)\,\hat f(\omega/m),\qquad \lambda=1,2,\ldots,m^2-1. \eqno(8)$$
$$\langle f(\cdot),f(\cdot-k)\rangle=\delta_{0,k},\qquad k\in Z^2, \eqno(10)$$

$$\langle f(\cdot),g_\lambda(\cdot-k)\rangle=0,\qquad \lambda,\ k\in Z^2, \eqno(11)$$

where $\lambda=0,1,2,3$. By taking the Fourier transform of both sides of (12), we have

$$\hat h_{4n+\lambda}(\omega)=B^{(\lambda)}(z_1,z_2)\,\hat h_n(\omega/2), \eqno(14)$$

where

$$B^{(\lambda)}(z_1,z_2)=B^{(\lambda)}(\omega/2)=\sum_{k\in Z^2} b^{(\lambda)}(k)\, z_1^{k_1} z_2^{k_2}, \eqno(15)$$

$$\sum_{k\in Z^2}|\hat f(\omega+2k\pi)|^2=1, \eqno(16)$$

$$1=|B(z_1,z_2)|^2+|B(-z_1,z_2)|^2+|B(z_1,-z_2)|^2+|B(-z_1,-z_2)|^2. \eqno(17)$$
26 Z. Tang and H. Guo
Proof. If $f(x)$ is an orthogonal bivariate function, then $\sum_{k\in Z^2}|\hat f(\omega+2k\pi)|^2=1$. Therefore, by Lemma 1 and formula (2), we obtain

$$1=\sum_{k\in Z^2}\Big|B\big(e^{-i(\omega_1/2+k_1\pi)},e^{-i(\omega_2/2+k_2\pi)}\big)\,\hat f\big(\omega/2+k\pi\big)\Big|^2$$
$$=|B(z_1,z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi)|^2+|B(-z_1,z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi+(\pi,0))|^2$$
$$+\,|B(z_1,-z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi+(0,\pi))|^2+|B(-z_1,-z_2)|^2\sum_{k\in Z^2}|\hat f(\omega/2+2k\pi+(\pi,\pi))|^2$$
$$=|B(z_1,z_2)|^2+|B(-z_1,z_2)|^2+|B(z_1,-z_2)|^2+|B(-z_1,-z_2)|^2.$$

This completes the proof of Lemma 2. Similarly, Lemma 3 follows from (3), (8), and (13).
Lemma 3. If $\psi_\lambda(x)$ $(\lambda=0,1,2,3)$ are orthogonal wavelet functions associated with $h(x)$, then

$$\sum_{j=0}^{1}\Big\{B^{(\lambda)}\big((-1)^j z_1,(-1)^j z_2\big)\,\overline{B^{(\sigma)}\big((-1)^j z_1,(-1)^j z_2\big)}$$
$$+\,B^{(\lambda)}\big((-1)^{j+1} z_1,(-1)^j z_2\big)\,\overline{B^{(\sigma)}\big((-1)^{j+1} z_1,(-1)^j z_2\big)}\Big\}=\delta_{\lambda,\sigma},\qquad \lambda,\sigma\in\{0,1,2,3\}. \eqno(18)$$
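For the separable Haar filters, the orthogonality identities of Lemma 2 and Lemma 3 can be checked numerically. The symbols $B^{(\lambda)}(z_1,z_2)$ below are the standard tensor-product Haar choices, used purely as our own illustration:

```python
import itertools
import numpy as np

# 1-D Haar filter symbols; tensor products give the four 2-D symbols B^(lam).
B1d = [lambda z: (1 + z) / 2,    # low-pass  B^(0)
       lambda z: (1 - z) / 2]    # high-pass B^(1)

def B(lam, z1, z2):
    # lam in {0,1,2,3} encodes the pair (lam1, lam2) in base 2
    return B1d[lam // 2](z1) * B1d[lam % 2](z2)

rng = np.random.default_rng(0)
w1, w2 = rng.uniform(0, 2 * np.pi, 2)
z1, z2 = np.exp(-1j * w1), np.exp(-1j * w2)

for lam, sig in itertools.product(range(4), repeat=2):
    # sum over the four sign patterns (+-z1, +-z2), cf. (17)-(18)
    s = sum(B(lam, e1 * z1, e2 * z2) * np.conj(B(sig, e1 * z1, e2 * z2))
            for e1 in (1, -1) for e2 in (1, -1))
    assert abs(s - (1.0 if lam == sig else 0.0)) < 1e-12
```

The loop confirms that the diagonal sums equal 1 (Lemma 2) and the cross sums vanish (Lemma 3) at a random point on the torus.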
Proof. Formula (20) follows from (10) for $n=0$. Assume formula (20) holds for
$0\le n<4^{r_0}$ ($r_0$ a positive integer), and consider the case $4^{r_0}\le n<4^{r_0+1}$.
By the induction assumption together with Lemma 1, Lemma 3, and Lemma 4, we have
$$(2\pi)^2\langle h_n(\cdot),h_n(\cdot-k)\rangle=\int_{R^2}|\hat h_n(\omega)|^2\exp\{ik\omega\}\,d\omega$$
$$=\sum_{j\in Z^2}\int_{4j_1\pi}^{4(j_1+1)\pi}\!\!\int_{4j_2\pi}^{4(j_2+1)\pi}\big|B^{(\lambda)}(z_1,z_2)\big|^2\,\big|\hat h_{[n/4]}(\omega/2)\big|^2\,e^{ik\omega}\,d\omega$$
$$=\int_0^{4\pi}\!\!\int_0^{4\pi}\big|B^{(\lambda)}(z_1,z_2)\big|^2\sum_{j\in Z^2}\big|\hat h_{[n/4]}(\omega/2+2\pi j)\big|^2\,e^{ik\omega}\,d\omega$$
$$=\int_0^{4\pi}\!\!\int_0^{4\pi}\big|B^{(\lambda)}(z_1,z_2)\big|^2\,e^{ik\omega}\,d\omega=\int_0^{2\pi}\!\!\int_0^{2\pi} e^{ik\omega}\,d\omega=(2\pi)^2\,\delta_{0,k}.$$
Furthermore, for $\lambda_1\ne\mu_1$,
$$(2\pi)^2\langle h_m(\cdot),h_n(\cdot-k)\rangle=\int_{R^2}\hat h_{4m_1+\lambda_1}(\omega)\,\overline{\hat h_{4n_1+\mu_1}(\omega)}\exp\{ik\omega\}\,d\omega$$
$$=\int_{[0,4\pi]^2}B^{(\lambda_1)}(z_1,z_2)\sum_{s\in Z^2}\hat h_{m_1}(\omega/2+2\pi s)\,\overline{\hat h_{n_1}(\omega/2+2\pi s)}\,\overline{B^{(\mu_1)}(z_1,z_2)}\,e^{ik\omega}\,d\omega$$
$$=\int_{[0,2\pi]^2}\delta_{\lambda_1,\mu_1}\exp\{ik\omega\}\,d\omega=O.$$

In the general case,
$$(2\pi)^2\langle h_m(\cdot),h_n(\cdot-k)\rangle=\int_{R^2}\hat h_{4m_1+\lambda_1}(\omega)\,\overline{\hat h_{4n_1+\mu_1}(\omega)}\,e^{ik\omega}\,d\omega$$
$$=\int_{[0,2^{r+1}\pi]^2}\prod_{\sigma=1}^{r}\big\{B^{(\lambda_\sigma)}(\omega/2^\sigma)\big\}\;O\;\prod_{\sigma=1}^{r}\overline{\big\{B^{(\mu_\sigma)}(\omega/2^\sigma)\big\}}\,e^{ik\omega}\,d\omega=O.$$
References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Zhang, N., Wu, X.: Lossless Compression of Color Mosaic Images. IEEE Trans. Image Processing 15(16), 1379–1388 (2006)
4. Chen, Q., et al.: A study on compactly supported orthogonal vector-valued wavelets and wavelet packets. Chaos, Solitons & Fractals 31(4), 1024–1034 (2007)
5. Shen, Z.: Nontensor product wavelet packets in $L^2(R^s)$. SIAM J. Math. Anal. 26(4), 1061–1074 (1995)
Abstract. The rise of frame theory in applied mathematics is due to the flexibility
and redundancy of frames. Structured frames are much easier to construct than
structured orthonormal bases. In this work, the notion of the ternary generalized
multiresolution structure (TGMS) of subspace $L^2(R^3)$ is proposed, which is a
generalization of ternary frame multiresolution analysis. The biorthogonality
property is characterized by means of an iteration method and a variable-separation
approach, and the biorthogonality formulas concerning these wavelet packets are
established. The construction of a TGMS of a Paley-Wiener subspace of $L^2(R^3)$
is studied. The pyramid decomposition scheme is obtained based on such a TGMS,
and a sufficient condition for its existence is provided. A procedure for designing
a class of orthogonal vector-valued finitely supported wavelet functions is proposed
by virtue of the multiresolution analysis method.
1 Introduction
Every frame (or Bessel sequence) determines a synthesis operator, the range of which
is important for a number of applications. The main advantage of wavelet functions is
their time-frequency localization property. The construction of wavelet functions is an
important aspect of wavelet analysis, and the multiresolution analysis approach is one
of the important ways of designing all sorts of wavelet functions. There exist a great
many kinds of scalar scaling functions and scalar wavelet functions. Although the
Fourier transform has been a major tool in analysis for over a century, it has a serious
shortcoming for signal analysis in that it hides in its phases the information concerning
the moment of emission and duration of a signal. Frame theory has been one of the
powerful tools for research into wavelets. Duffin and Schaeffer introduced the notion of frames
* Foundation item: The research is supported by the Natural Scientific Foundation of Shaanxi
Province (Grant No. 2009JM1002), and by the Science Research Foundation of the Education
Department of Shaanxi Provincial Government (Grant No. 11JK0468).
** Corresponding author.
30 S. Zhou and Q. Chen
for a separable Hilbert space in 1952. Later, Daubechies, Grossmann, Meyer, Benedetto,
and Ron revived the study of frames in [1,2], and since then frames have become
the focus of active research, both in theory and in applications such as signal
processing, image processing, and sampling theory. The rise of frame theory in
applied mathematics is due to the flexibility and redundancy of frames, where
robustness, error tolerance, and noise suppression play a vital role [3,4]. The concept
of frame multiresolution analysis (FMRA), as described in [2], generalizes the notion
of MRA by allowing non-exact affine frames. Inspired by [2] and [5], we introduce the
notion of a trivariate generalized multiresolution structure (TGMS) of $L^2(R^3)$,
which has a pyramid decomposition scheme. It also leads to new constructions of
affine frames of $L^2(R^3)$. Sampling theorems play a basic role in digital signal
processing: they ensure that continuous signals can be represented and processed by
their discrete samples. The classical Shannon Sampling Theorem asserts that
band-limited signals can be exactly represented by their uniform samples as long as
the sampling rate is not less than the Nyquist rate. Wavelet wraps, owing to their good
properties, have attracted considerable attention and can be widely applied in science
and engineering. Since the majority of information is multidimensional, many
researchers have taken an interest in multivariate wavelet theory. However, the tensor
product method has obvious defects, such as scarcity of design freedom. Therefore, it
is significant to study nonseparable multivariate wavelet theory. Since there is little
literature on biorthogonal wavelet wraps, it is necessary to research biorthogonal
wavelet wraps.
We start from some notation. $Z$ and $Z_+$ denote all integers and all nonnegative
integers, respectively. $R$ denotes all real numbers and $R^3$ the 3-dimensional
Euclidean space. Let $L^2(R^3)$ be the square integrable function space on $R^3$. Set
$Z^3=\{(z_1,z_2,z_3): z_r\in Z,\ r=1,2,3\}$ and $Z_+^3=\{(z_1,z_2,z_3): z_r\in Z_+,\ r=1,2,3\}$.
Let $U$ be a separable Hilbert space and $\Lambda$ an index set. We say that a sequence
$\{\eta_v\}_{v\in\Lambda}\subseteq U$ is a frame for $U$ if there exist positive real
constants $L_1,L_2$ such that

$$\forall\,\varphi\in U,\qquad L_1\|\varphi\|^2\le\sum_{v\in\Lambda}|\langle\varphi,\eta_v\rangle|^2\le L_2\|\varphi\|^2, \eqno(1)$$

and in that case there is a dual frame $\{\eta_v^*\}$ such that

$$\forall\,\varphi\in U,\qquad \varphi=\sum_{v\in\Lambda}\langle\varphi,\eta_v\rangle\,\eta_v^*=\sum_{v\in\Lambda}\langle\varphi,\eta_v^*\rangle\,\eta_v. \eqno(2)$$
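The frame inequality (1) and the dual-frame expansion (2) can be illustrated on a small finite example: for the three unit vectors at 120° in $R^2$ (a classical tight frame), the optimal bounds $L_1,L_2$ are the extreme eigenvalues of the frame operator. The sketch is our own illustration, not part of the paper:

```python
import numpy as np

# Three unit vectors at 120-degree spacing: the "Mercedes-Benz" frame for R^2.
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)])        # columns are frame vectors

# Frame operator S = sum_v eta_v eta_v^T; its eigenvalues are the optimal
# frame bounds L1 (smallest) and L2 (largest) in inequality (1).
S = F @ F.T
L1, L2 = np.linalg.eigvalsh(S)
assert np.allclose([L1, L2], 1.5)       # a tight frame: L1 = L2 = 3/2

# Reconstruction with the canonical dual frame eta*_v = S^{-1} eta_v, cf. (2):
phi = np.array([0.3, -1.2])
dual = np.linalg.inv(S) @ F
assert np.allclose(phi, sum((phi @ F[:, v]) * dual[:, v] for v in range(3)))
```

For a tight frame the dual is just a rescaling, which is the redundancy/flexibility the abstract refers to.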
The Traits of Dual Multiple Ternary Fuzzy Frames of Translates 31
Note that the discrete-time Fourier transform is 1-periodic. Let $T_{va}\,g(x)$ stand for
integer translates of a function $g(x)\in L^2(R^3)$, i.e., $(T_{va}g)(x)=g(x-va)$, and
$g_{n,va}(x)=2^n g(2^n x-va)$, where $a$ is a positive real constant. Let $f(x)\in L^2(R^3)$
and let $V_0=\overline{\mathrm{span}}\{T_{va}f: v\in Z^3\}$ denote a closed subspace of
$L^2(R^3)$. Assume that $\Phi(\omega):=\sum_{v\in Z^3}|\hat f(\omega+v)|^2\in L^\infty[0,1]^3$.
By [3], the sequence $\{T_{va}f(x)\}_{v\in Z^3}$ is a frame for $V_0$ if and only if there
exist positive constants $L_1$ and $L_2$ such that

$$L_1\le\Phi(\omega)\le L_2\quad\text{a.e.}\ \omega\in[0,1]^3\setminus N,\qquad N=\{\omega\in[0,1]^3:\Phi(\omega)=0\}. \eqno(4)$$

$$\forall\,h(x)\in Y,\qquad h(x)=\sum_{v\in Z^3}\langle h,\varphi_{va}\rangle\,\tilde\varphi_{va}(x). \eqno(5)$$

Define an operator $K: Y\to\ell^2(Z^3)$ by

$$\forall\,h(x)\in Y,\qquad Kh=\{\langle h,\varphi_{va}\rangle\}. \eqno(6)$$
Proof. The convergence of all summations in (7) and (8) follows from the assumptions
that the family $\{\varphi_{va}\}_{v\in Z^3}$ is a Bessel sequence with respect to the
subspace $Y$ and that the family $\{\tilde\varphi_{va}\}_{v\in Z^3}$ is a Bessel sequence
in $L^2(R^3)$, from which the proof of the theorem is straightforward.
The filter functions associated with a TGMS are presented as follows. Define filter
functions $D_0(\omega)$ and $\tilde D_0(\omega)$ by $D_0(\omega)=\sum_{s\in Z^3} d_0(s)\,e^{-2\pi i s\omega}$
and $\tilde D_0(\omega)=\sum_{s\in Z^3}\tilde d_0(s)\,e^{-2\pi i s\omega}$ for the sequences
$d_0=\{d_0(s)\}$ and $\tilde d_0=\{\tilde d_0(s)\}$, respectively, wherever the sums are
defined. Let $\{d_0(s)\}$ be such that $D_0(0)=2$ and $D_0(\omega)\ne 0$ in a neighborhood
of 0. Assume also that $|D_0(\omega)|\le 2$. Then there exists $f(x)\in L^2(R^3)$ (see [3])
such that

$$f(x)=2\sum_{s\in Z^3} d_0(s)\,f(2x-sa). \eqno(12)$$

There exists an analogous scaling relationship for $\tilde f(x)$ under the same conditions
on $\tilde d_0$, i.e.,

$$\tilde f(x)=2\sum_{s\in Z^3}\tilde d_0(s)\,\tilde f(2x-sa). \eqno(13)$$
$$G_{2n+\mu}(x)=\sum_{k\in Z^3} Q_k^{(\mu)}\,G_n(2x-k),\qquad \mu\in\Omega_0. \eqno(15)$$

Lemma 1 [4]. Let $F(x),\tilde F(x)\in L^2(R^3,C^v)$. Then they are biorthogonal if and only if

$$\sum_{k\in Z^3}\hat F(\omega+2k\pi)\,\hat{\tilde F}(\omega+2k\pi)^{*}=I_v. \eqno(16)$$

$$\hat G_{2n+\mu}(2\omega)=Q^{(\mu)}(\omega)\,\hat G_n(\omega),\qquad \mu\in\Omega_0, \eqno(17)$$

$$\hat{\tilde G}_{2n+\mu}(2\omega)=\tilde Q^{(\mu)}(\omega)\,\hat{\tilde G}_n(\omega),\qquad \mu\in\Omega_0, \eqno(18)$$

$$\sum_{\sigma\in\Omega_0} Q^{(\mu)}\big((\omega+2\sigma\pi)/2\big)\,\tilde Q^{(\nu)}\big((\omega+2\sigma\pi)/2\big)^{*}=\delta_{\mu,\nu}\,I_v. \eqno(19)$$

$$[G_m(\cdot),\tilde G_n(\cdot-k)]=\delta_{m,n}\,\delta_{0,k}\,I_v,\qquad k\in Z^3. \eqno(22)$$

$$=\int_{[0,2\pi]^3}\delta_{\lambda_1,\mu_1}\,I_v\exp\{ik\omega\}\,d\omega=O.$$
4 Conclusion
References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
YongGan Li*
Office of Financial Affairs, Henan Quality Polytechnic, Pingdingshan 467000, P.R. China
txxpds@126.com
Abstract. Frame theory has been the focus of active research for twenty years,
both in theory and in applications. In this paper, the notion of the bivariate
generalized multiresolution structure (BGMS) of subspace $L^2(R^2)$, which is a
generalization of frame multiresolution analysis, is proposed. The biorthogonality
traits of wavelet wraps are researched by using a time-frequency analysis approach
and a variable-separation approach. The construction of a BGMS of a Paley-Wiener
subspace of $L^2(R^2)$ is studied. The pyramid decomposition scheme is obtained
based on such a BGMS, and a sufficient condition for its existence is provided. A
procedure for designing a class of orthogonal vector-valued finitely supported
wavelet functions is proposed by virtue of filter bank theory and matrix theory.
The Characteristics of Multiple Affine Oblique Binary Frames 37
Let $\Omega_0=\{\mu_0,\mu_1,\ldots,\mu_{m^2-1}\}$ denote the aggregate of all the
different representative elements in the quotient group $Z^2/(mZ^2)$, where
$\mu_0=\{0\}$ is the null element of $Z^2$, and let $\mu_1,\mu_2$ denote two arbitrary
distinct elements, so that $(\mu_1+mZ^2)\cap(\mu_2+mZ^2)=\emptyset$. Set
$\Omega=\Omega_0-\{0\}$.
$$\langle\varphi_n,\tilde\varphi_v\rangle=\delta_{n,v}\,I_s,\qquad n,v\in Z^2, \eqno(1)$$

$$\forall\,\varphi\in W,\qquad C\|\varphi\|^2\le\sum_{v}|\langle\varphi,\eta_v\rangle|^2\le D\|\varphi\|^2. \eqno(2)$$

A sequence $\{\eta_v: v\in Z^2\}\subseteq W$ is a Bessel sequence if (only) the upper
inequality of (2) holds. If the upper inequality of (2) holds only for all $\varphi\in W$,
the sequence $\{\eta_v\}\subseteq W$ is a Bessel sequence with respect to (w.r.t.) $W$.
If $\{f_v\}$ is a frame, there exists a dual frame $\{f_v^*\}$ such that

$$\forall\,\varphi\in W,\qquad \varphi=\sum_{v}\langle\varphi,f_v\rangle\,f_v^*=\sum_{v}\langle\varphi,f_v^*\rangle\,f_v. \eqno(3)$$
$$F_c(\omega)=C(\omega)=\sum_{v\in Z^2} c(v)\,e^{-2\pi i v\omega}. \eqno(4)$$
Note that the discrete-time Fourier transform is 1-periodic. Let $T_{va}\varphi(x)$ stand
for integer translates of a function $\varphi(x)\in L^2(R^2)$, i.e.,
$(T_{va}\varphi)(x)=\varphi(x-va)$, and $\varphi_{n,va}(x)=4^n\varphi(4^n x-va)$, where $a$
is a positive real constant. Let $\varphi(x)\in L^2(R^2)$ and

$$\forall\,f(x)\in U,\qquad f(x)=\sum_{v\in Z^2}\langle f,T_{va}\varphi\rangle\,T_{va}\tilde\varphi(x). \eqno(6)$$

Define an operator $K: U\to\ell^2(Z^2)$ by

$$\forall\,f(x)\in U,\qquad Kf=\{\langle f,T_{va}\varphi\rangle\}. \eqno(7)$$
Theorem 1. Let $\{T_{va}\varphi\}_{v\in Z^2}\subseteq L^2(R^2)$ be a Bessel sequence with
respect to the subspace $U\subseteq L^2(R^2)$, and let $\{T_{va}\tilde\varphi\}_{v\in Z^2}$
be a Bessel sequence in $L^2(R^2)$. Assume that $K$ is defined by (7) and $S$ by (8), and
that $P$ is a projection from $L^2(R^2)$ onto $U$. Then $\{T_{va}\varphi\}_{v\in Z^2}$ is a
pseudoframe of translates for $U$ with respect to $\{T_{va}\tilde\varphi\}_{v\in Z^2}$ if
and only if

$$KSP=P. \eqno(9)$$

Proof. The convergence of all summations in (7) and (8) follows from the assumptions
that the family $\{T_{va}\varphi\}_{v\in Z^2}$ is a Bessel sequence with respect to the
subspace $U$ and that the family $\{T_{va}\tilde\varphi\}_{v\in Z^2}$ is a Bessel sequence
in $L^2(R^2)$, from which the proof of the theorem is straightforward.
We say that a bivariate generalized multiresolution structure (BGMS) $\{V_n, f(x), \tilde f(x)\}$
of $L^2(R^2)$ is a sequence of closed linear subspaces $\{V_n\}_{n\in Z}$ of $L^2(R^2)$ and
two elements $f(x),\tilde f(x)\in L^2(R^2)$ such that (i) $V_n\subset V_{n+1}$, $n\in Z$;
(ii) $\bigcap_{n\in Z}V_n=\{0\}$.

Proposition 1 [6]. Let $f\in L^2(R^2)$ satisfy $|\hat f|>0$ a.e. on a connected
neighbourhood of the origin.

Proposition 2 [5]. Let $\{T_{va}f\}_{v\in Z^2}$ be a pseudoframe of translates for $V_0$
with respect to $\{T_{va}\tilde f\}_{v\in Z^2}$. Define $V_n$ by

$$V_n=\{\varphi(x)\in L^2(R^2): \varphi(x/4^n)\in V_0\},\qquad n\in Z. \eqno(13)$$
The filter functions associated with a BGMS are presented as follows. Define filter
functions $D_0(\omega)$ and $\tilde D_0(\omega)$ by $D_0(\omega)=\sum_{s\in Z^2} d_0(s)\,e^{-2\pi i s\omega}$
and $\tilde D_0(\omega)=\sum_{s\in Z^2}\tilde d_0(s)\,e^{-2\pi i s\omega}$ for the sequences
$d_0=\{d_0(s)\}$ and $\tilde d_0=\{\tilde d_0(s)\}$, respectively, wherever the sums are
defined. Let $\{d_0(s)\}$ be such that $D_0(0)=2$ and $D_0(\omega)\ne 0$ in a neighborhood
of 0. Assume also that $|D_0(\omega)|\le 2$. Then there exists $f(x)\in L^2(R^2)$ (see [3])
such that

$$f(x)=2\sum_{s\in Z^2} d_0(s)\,f(4x-sa). \eqno(14)$$

There exists an analogous scaling relationship for $\tilde f(x)$ under the same conditions
on $\tilde d_0$, i.e.,

$$\tilde f(x)=2\sum_{s\in Z^2}\tilde d_0(s)\,\tilde f(4x-sa). \eqno(15)$$
where $\lambda=0,1,2,3$. By taking the Fourier transform of both sides of (12), we have

$$\hat h_{4n+\lambda}(\omega)=B^{(\lambda)}(z_1,z_2)\,\hat h_n(\omega/2), \eqno(17)$$

$$B^{(\lambda)}(z_1,z_2)=B^{(\lambda)}(\omega/2)=\sum_{k\in Z^2} b^{(\lambda)}(k)\,z_1^{k_1} z_2^{k_2}, \eqno(18)$$

$$\sum_{k\in Z^2}|\hat f(\omega+2k\pi)|^2=1, \eqno(19)$$

$$1=|B(z_1,z_2)|^2+|B(-z_1,z_2)|^2+|B(z_1,-z_2)|^2+|B(-z_1,-z_2)|^2. \eqno(20)$$
$$\sum_{j=0}^{1}\Big\{B^{(\lambda)}\big((-1)^j z_1,(-1)^j z_2\big)\,\overline{B^{(\sigma)}\big((-1)^j z_1,(-1)^j z_2\big)}$$
$$+\,B^{(\lambda)}\big((-1)^{j+1} z_1,(-1)^j z_2\big)\,\overline{B^{(\sigma)}\big((-1)^{j+1} z_1,(-1)^j z_2\big)}\Big\}=\delta_{\lambda,\sigma},\qquad \lambda,\sigma\in\{0,1,2,3\}. \eqno(21)$$
References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Li, S., et al.: A theory of generalized multiresolution structure and pseudoframes of translates. J. Fourier Anal. Appl. 7(1), 23–40 (2001)
4. Chen, Q., et al.: A study on compactly supported orthogonal vector-valued wavelets and wavelet packets. Chaos, Solitons & Fractals 31(4), 1024–1034 (2007)
5. Shen, Z.: Nontensor product wavelet packets in $L^2(R^s)$. SIAM J. Math. Anal. 26(4), 1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information Technology Journal 8(8), 1275–1280 (2009)
8. Yang, S., Cheng, Z., Wang, H.: Construction of biorthogonal multiwavelets. J. Math. Anal. Appl. 276(1), 1–12 (2002)
Generation and Characteristics of Vector-Valued
Quarternary Wavelets with Poly-scale Dilation Factor*
R.R. Coifman and Y. Meyer first introduced the notion of orthogonal wavelet packets,
which were used to decompose wavelet components. C.K. Chui and C. Li [4]
generalized the concept of orthogonal wavelet packets to the case of nonorthogonal
wavelet packets, so that wavelet packets can be employed in the case of the spline
wavelets and so on. Tensor product multivariate wavelet packs have been constructed
by Coifman and Meyer. The notion of nontensor product wavelet packs is attributed
to Z. Shen [5]. Since the majority of information is multidimensional, many
researchers have taken an interest in multivariate wavelet theory. Therefore, it is
significant to investigate nonseparable multivariate wavelet theory.
In the following, we introduce some notation. $Z$ and $Z_+$ denote all integers and
all nonnegative integers, respectively. $R$ denotes all real numbers and $R^4$ the
4-dimensional Euclidean space; $L^2(R^4)$ denotes the square integrable function space.
Let $x=(x_1,x_2,x_3,x_4)\in R^4$, $\omega=(\omega_1,\omega_2,\omega_3,\omega_4)\in R^4$,
$k=(k_1,k_2,k_3,k_4)\in Z^4$, and define

$$\langle f,g\rangle=\int_{R^4} f(x)\,\overline{g(x)}\,dx,\qquad \hat f(\omega)=\int_{R^4} f(x)\,e^{-i\omega\cdot x}\,dx,$$

$$\langle\varphi_n,\tilde\varphi_v\rangle=\delta_{n,v}\,I_s,\qquad n,v\in Z^4. \eqno(1)$$
A theorem [8] for higher-dimensional wavelets with arbitrary dilation matrices has been
given. Let $\varphi(x)\in L^2(R^4)$ satisfy the following refinement equation:

$$\varphi(x)=m^4\sum_{k\in Z^4} b_k\,\varphi(mx-k), \eqno(2)$$

where $\{b(n)\}_{n\in Z^4}$ is a finitely supported real sequence and $\varphi(x)\in L^2(R^4)$
is called a scaling function. Formula (2) is called the two-scale refinement equation.
The frequency form of (2) can be written as

$$\hat\varphi(\omega)=B(z_1,z_2,z_3,z_4)\,\hat\varphi(\omega/m), \eqno(3)$$

$$B(z_1,z_2,z_3,z_4)=\sum_{n\in Z^4} b(n)\,z_1^{n_1} z_2^{n_2} z_3^{n_3} z_4^{n_4}. \eqno(4)$$

Define a subspace $V_j\subseteq L^2(R^4)$ $(j\in Z)$ by

$$V_j=\mathrm{clos}_{L^2(R^4)}\,\mathrm{span}\{\, m^{2j}\varphi(m^j x-k): k\in Z^4 \,\}. \eqno(5)$$
(iii) $h(x)\in V_k \Leftrightarrow h(mx)\in V_{k+1}$, $k\in Z$; (iv) the family
$\{\varphi(m^j x-n): n\in Z^4\}$ forms a Riesz basis for the spaces $V_j$.
Let $U_k$ $(k\in Z)$ denote the complementary subspace of $V_j$ in $V_{j+1}$, and assume
that there exists a vector-valued function $\Psi(x)=\{\psi_1(x),\psi_2(x),\ldots,\psi_{m^4-1}(x)\}$
that constitutes a Riesz basis for $U_k$, i.e.,

$$U_j=\mathrm{clos}_{L^2(R^4)}\,\mathrm{span}\{\,\psi_{\lambda;j,n}: \lambda=1,2,\ldots,m^4-1;\ n\in Z^4 \,\}, \eqno(6)$$
$$\hat\psi_\lambda(\omega)=Q^{(\lambda)}(z_1,z_2,z_3,z_4)\,\hat\varphi(\omega/m),\qquad \lambda=1,2,\ldots,m^4-1, \eqno(8)$$

where the symbol of the sequence $\{q_k^{(\lambda)}\}$ $(\lambda=1,2,\ldots,m^4-1,\ k\in Z^4)$ is

$$Q^{(\lambda)}(z_1,z_2,z_3,z_4)=\sum_{n\in Z^4} q_n^{(\lambda)}\,z_1^{n_1} z_2^{n_2} z_3^{n_3} z_4^{n_4}. \eqno(9)$$
$$\langle\varphi(\cdot),\varphi(\cdot-k)\rangle=\delta_{0,k},\qquad k\in Z^4, \eqno(10)$$

$$\langle\varphi(\cdot),\psi_\lambda(\cdot-k)\rangle=0,\qquad \lambda,\ k\in Z^4, \eqno(11)$$

$$\langle\psi_\lambda(\cdot),\psi_\mu(\cdot-n)\rangle=\delta_{\lambda,\mu}\,\delta_{0,n},\qquad \lambda,\mu,\ n\in Z^4. \eqno(12)$$
$$H_{4n+\mu}(x)=\sum_{k\in Z^4} Q_k^{(\mu)}\,H_n(4x-k). \eqno(14)$$

$$\sum_{k\in Z^4}\hat F(\omega+2k\pi)\,\hat{\tilde F}(\omega+2k\pi)^{*}=I_v. \eqno(15)$$

$$\hat H_{4n+\mu}(\omega)=Q^{(\mu)}(\omega/4)\,\hat H_n(\omega/4),\qquad \mu\in\Omega_0, \eqno(16)$$

$$\hat{\tilde H}_{4n+\mu}(4\omega)=\tilde Q^{(\mu)}(\omega)\,\hat{\tilde H}_n(\omega),\qquad \mu\in\Omega_0, \eqno(17)$$

$$Q^{(\mu)}(\omega)=\frac{1}{4^4}\sum_{k\in Z^4} Q_k^{(\mu)}\exp\{-ik\omega\},\qquad \mu\in\Omega_0, \eqno(18)$$
46 P. Luo and S. Wang
$$\tilde Q^{(\mu)}(\omega)=\frac{1}{4^4}\sum_{k\in Z^4}\tilde Q_k^{(\mu)}\exp\{-ik\omega\},\qquad \mu\in\Omega_0. \eqno(19)$$

$$\sum_{\sigma\in\Omega_0} Q^{(\mu)}\big((\omega+2\sigma\pi)/4\big)\,\tilde Q^{(\nu)}\big((\omega+2\sigma\pi)/4\big)^{*}=\delta_{\mu,\nu}\,I_v. \eqno(20)$$
Proof. Since the space $R^4$ admits the partition $R^4=\bigcup_{u\in Z^4}\big([0,2\pi]^4+2u\pi\big)$, we have

$$=\sum_{k\in Z^4}\int_{[0,2\pi]^4+2k\pi} Q^{(\mu)}(\omega)\,\hat H(\omega)\,\hat H(\omega)^{*}\,\tilde Q^{(\nu)}(\omega)^{*}\,e^{i4k\omega}\,d\omega$$

$$=\frac{1}{(2\pi)^4}\int_{[0,8\pi]^4} Q^{(\mu)}(\omega/4)\,\tilde Q^{(\nu)}(\omega/4)^{*}\,e^{ik\omega}\,d\omega
=\frac{1}{(2\pi)^4}\int_{[0,2\pi]^4}\delta_{\mu,\nu}\,I_u\,e^{i4k\omega}\,d\omega=0,\qquad k\in Z^4,\ \mu\ne\nu.$$
$$[H_m(\cdot),\tilde H_n(\cdot-k)]=\delta_{m,n}\,\delta_{0,k}\,I_v,\qquad k\in Z^4. \eqno(28)$$

$$(2\pi)^4[H_m(\cdot),\tilde H_n(\cdot-k)]=\int_{R^4}\hat H_{4m_1+\mu_1}(\omega)\,\hat{\tilde H}_{4n_1+\nu_1}(\omega)^{*}\exp\{ik\omega\}\,d\omega$$

$$=\int_{[0,8\pi]^4} Q^{(\mu_1)}(\omega/4)\Big\{\sum_{u\in Z^4}\hat H_{m_1}(\omega/4+2u\pi)\,\hat{\tilde H}_{n_1}(\omega/4+2u\pi)^{*}\Big\}\tilde Q^{(\nu_1)}(\omega/4)^{*}\,e^{ik\omega}\,d\omega$$

$$=\int_{[0,2\pi]^4}\delta_{\mu_1,\nu_1}\,I_v\exp\{ik\omega\}\,d\omega=O.$$
$$16^4[H_m(\cdot),\tilde H_n(\cdot-k)]=\int_{R^4}\hat H_m(\omega)\,\hat{\tilde H}_n(\omega)^{*}\,e^{ik\omega}\,d\omega
=\int_{R^4}\hat H_{4m_1+\mu_1}(\omega)\,\hat{\tilde H}_{4n_1+\nu_1}(\omega)^{*}\exp\{ik\omega\}\,d\omega$$

$$=\int_{[0,2\cdot 4^{r}\pi]^4}\prod_{l=1}^{r}\big\{Q^{(\mu_l)}(\omega/4^{l})\big\}\Big\{\sum_{u\in Z^4}\hat H(\omega/4^{l}+2u\pi)\,\hat{\tilde H}(\omega/4^{l}+2u\pi)^{*}\Big\}\prod_{l=1}^{r}\big\{\tilde Q^{(\nu_l)}(\omega/4^{l})\big\}^{*}\,e^{ik\omega}\,d\omega$$

$$=\int_{[0,2\cdot 4^{r}\pi]^4}\prod_{l=1}^{r}\big\{Q^{(\mu_l)}(\omega/4^{l})\big\}\;O\;\prod_{l=1}^{r}\big\{\tilde Q^{(\nu_l)}(\omega/4^{l})\big\}^{*}\exp\{ik\omega\}\,d\omega=O.$$
4 Conclusion
References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons & Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Zhang, N., Wu, X.: Lossless Compression of Color Mosaic Images. IEEE Trans. Image Processing 15(16), 1379–1388 (2006)
4. Chen, Q., et al.: A study on compactly supported orthogonal vector-valued wavelets and wavelet packets. Chaos, Solitons & Fractals 31(4), 1024–1034 (2007)
5. Shen, Z.: Nontensor product wavelet packets in $L^2(R^s)$. SIAM J. Math. Anal. 26(4), 1061–1074 (1995)
Abstract. Under the axiomatic system of buffer operators in grey system theory,
two new strengthening buffer operators based on strictly monotone functions are
established. Meanwhile, their properties and the inherent relations among them
are studied. The problem that there are contradictions between quantitative
analysis and qualitative analysis in the pretreatment of vibration data sequences
is resolved effectively. A practical example shows their validity and practicability.
1 Basic Concept

Definition 1. Assume that a sequence of data representing a system's behavior is
given, $X=(x(1),x(2),\ldots,x(n))$; then
(1) $X$ is called a monotonically increasing sequence if $\forall k=1,2,\ldots,n-1$, $x(k)<x(k+1)$;
(2) $X$ is called a monotonically decreasing sequence if $\forall k=1,2,\ldots,n-1$, $x(k)>x(k+1)$;
(3) $X$ is called a vibration sequence if $\exists\,k,k'\in\{1,2,\ldots,n-1\}$ such that $x(k)<x(k+1)$ and $x(k')>x(k'+1)$.
Let $M=\max_{1\le k\le n} x(k)$ and $m=\min_{1\le k\le n} x(k)$; then $M-m$ is called the amplitude of $X$.

Axiom 1. Axiom of Fixed Points. Assume that $X$ is a sequence of raw data and $D$ is a
sequence operator; then $D$ must satisfy $x(n)d=x(n)$.
*
Corresponding author.
50 R. Han and Z.-p. Wu
Under the Axiom of Fixed Points, which constrains the sequence operator, $x(n)$ in the
raw data is never changed.

Axiom 2. Axiom on Sufficient Usage of Information. When a sequence operator is
applied, all the information contained in each datum $x(k)$ $(k=1,2,\ldots,n)$ of the
raw data sequence $X$ should be sufficiently used, and any effect of each entry $x(k)$
$(k=1,2,\ldots,n)$ should also be directly reflected in the sequence produced by the
operator.

Axiom 2 requires that any sequence operator be defined using only the information
contained in the given raw data sequence.

Axiom 3. Axiom of Analytic Representations. Each $x(k)d$ $(k=1,2,\ldots,n)$ can be
described by a uniform, elementary analytic expression in $x(1),x(2),\ldots,x(n)$.

Axiom 3 requires that the procedure by which the operator is applied to the raw data
be clear, standardized, and as simple as possible, so that computation is
straightforward, in particular on a computer.
Definition 3. All sequence operators satisfying the three axioms above are called
buffer operators, and $XD$ is called a buffer sequence.

Definition 4. Assume $X$ is a sequence of raw data and $D$ is an operator applied to
$X$, where $X$ is a monotonically increasing sequence, a monotonically decreasing
sequence, or a vibration sequence. If the buffer sequence $XD$ increases or decreases
more rapidly, or vibrates with a bigger amplitude, than the original sequence $X$, the
buffer operator $D$ is termed a strengthening operator.

Theorem 1. (1) When $X$ is a monotonically increasing sequence and $XD$ is a buffer
sequence, $D$ is a strengthening operator if and only if $x(k)d\le x(k)$ $(k=1,2,3,\ldots,n)$;
(2) When $X$ is a monotonically decreasing sequence and $XD$ is a buffer sequence,
$D$ is a strengthening operator if and only if $x(k)d\ge x(k)$ $(k=1,2,3,\ldots,n)$;
(3) When $X$ is a vibration sequence and $D$ is a strengthening operator with buffer
sequence $XD$, then
$$\max_{1\le k\le n} x(k)\le\max_{1\le k\le n} x(k)d,\qquad \min_{1\le k\le n} x(k)\ge\min_{1\le k\le n} x(k)d.$$
That is, the data in a monotonically increasing sequence shrink when a strengthening
operator is applied, and the data in a monotonically decreasing sequence expand when
a strengthening operator is applied [1].
$$x(k)d_1=g_i\!\left(\frac{f_i(x(k))}{f_i(x(n))}\right)x(k),\qquad k=1,\ldots,n;$$
when $X$ is a monotonically increasing sequence, a monotonically decreasing sequence,
or a vibration sequence, $D_1$ is a strengthening buffer operator.
A Kind of New Strengthening Buffer Operators and Their Applications 51
Proof: It is easily proved that $D_1$ satisfies the three axioms of buffer operators, so
$D_1$ is a buffer operator. We prove that $D_1$ is a strengthening buffer operator.
(1) When $X$ is a monotonically increasing sequence: because $0\le x(k)\le x(n)$ and
$f_i$ is a strictly monotonically increasing function with $f_i>0$, $i=1,2,\ldots,n$, we
have $0\le f_i(x(k))\le f_i(x(n))$, hence $f_i(x(k))/f_i(x(n))\le 1$ and
$g_i\big(f_i(x(k))/f_i(x(n))\big)\le 1$, so that $x(k)d_1\le x(k)$ and
$$\max_{1\le k\le n}\{x(k)d_1\}\le\max_{1\le k\le n}\{x(k)\}.$$
$$x(k)d_2=g_i\!\left(\Big[\prod_{i=k}^{n}\frac{f_i(x(i))}{f_i(x(n))}\Big]^{\frac{1}{\,n-k+1\,}}\right)x(k),\qquad k=1,\ldots,n;$$
when $X$ is a monotonically increasing sequence or a monotonically decreasing
sequence, $D_2$ is a strengthening buffer operator.

Proof: It is easily proved that $D_2$ satisfies the three axioms of buffer operators, so
$D_2$ is a buffer operator. We prove that $D_2$ is a strengthening buffer operator.
(1) When $X$ is a monotonically increasing sequence: because $0\le x(k)\le x(n)$ and
$f_i$ is a strictly monotonically increasing function with $f_i>0$, $i=1,2,\ldots,n$, we
have $0\le f_i(x(k))\le\cdots\le f_i(x(n))$, so that
$$\Big[\frac{f_i(x(k))}{f_i(x(n))}\Big]^{\frac{1}{\,n-k+1\,}}\le 1
\qquad\text{and}\qquad
\Big[\prod_{i=k}^{n}\frac{f_i(x(i))}{f_i(x(n))}\Big]^{\frac{1}{\,n-k+1\,}}\le 1,$$
hence $x(k)d_2\le x(k)$.
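The two operators can be sketched in code under the concrete choices $f_i(x)=x^2$, $g_i(x)=x^{0.5}$ used in Sec. 3. The forms of $D_1$ and $D_2$ below follow our reading of the formulas above and should be taken as an assumption; the checks confirm the fixed-point axiom and Theorem 1(1):

```python
import numpy as np

f = lambda x: x ** 2      # strictly increasing with f > 0 (the Sec. 3 choice)
g = lambda x: x ** 0.5

def D1(X):
    # x(k)d1 = g( f(x(k)) / f(x(n)) ) * x(k)   -- reconstructed form (assumption)
    X = np.asarray(X, dtype=float)
    return g(f(X) / f(X[-1])) * X

def D2(X):
    # x(k)d2 = g( [ prod_{i=k}^{n} f(x(i))/f(x(n)) ]^{1/(n-k+1)} ) * x(k)
    X = np.asarray(X, dtype=float)
    n = len(X)
    out = np.empty(n)
    for k in range(n):
        ratios = f(X[k:]) / f(X[-1])    # terms i = k, ..., n (n-k+1 of them, 1-indexed)
        out[k] = g(np.prod(ratios) ** (1.0 / len(ratios))) * X[k]
    return out

# An illustrative monotonically increasing sequence (not the paper's data):
X = np.array([132.4, 144.6, 156.3, 173.7, 190.2, 216.7])
for D in (D1, D2):
    XD = D(X)
    assert np.isclose(XD[-1], X[-1])   # Axiom 1: the fixed point x(n) is unchanged
    assert np.all(XD <= X + 1e-12)     # Theorem 1(1): x(k)d <= x(k), i.e. D strengthens
```

With $f(x)=x^2$ and $g(x)=\sqrt{x}$, $D_1$ reduces to multiplying each $x(k)$ by $x(k)/x(n)$, which makes the shrinking behaviour on increasing data easy to see.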
3 Case Study
We take Per capita power consumption (Unit :kwh) for example to prove the effect of
strengthening buffer operator in this paper on GM(1,1) prediction model. Per capita
power consumption of China from 2000 to 2006 is chosen as a sequence of raw data
(Table1).
prudent monetary policy for economic growth, injecting vigor, cause the power
demand increased sharply. Therefore, this article takes China's per capita power
consumption of raw data sequence from 2000 to 2005 as modeling data, the data in
2006 as a model test data. Calculating per capita consumption growth power is as
follows: 9.215%, 8.091%, 11.132%, 9.499, 13.933%, 15.090%, an annual average
increase rate of 9.468%.
We test the primary data sequence for quasi-smoothness: when $t\ge 2003$, the smooth
ratios are 0.401, 0.313, 0.272, 0.246, which lie in (0, 0.5). The smooth ratio is
decreasing, so the quasi-smooth condition is satisfied, and the accumulated primary
data sequence obeys a quasi-exponential law. But it is clear that the earlier data grow
somewhat slowly while the later data grow quickly. Therefore, it is best to smooth the
original data sequence to weaken the impact of the shock disturbance and highlight
the laws of the data.
Take $f_i(x)=x^2$, $g_i(x)=x^{0.5}$, $i=1,2,\ldots,n$ to construct the buffer
operators, and apply the second-order buffer operators of this article to strengthen
the raw data. Then we build the prediction model and compare it with the raw data
sequence (Table 2).
The GM(1,1) model built directly from the raw data sequence without a buffer
operator is
$$\hat x(2000+t)=1317.848\,e^{0.102t}-1185.448.$$
The GM(1,1) model built by the strengthening buffer operator sequence XD1 is
From Table 2, all the predictive relative errors of the strengthened buffer sequences
obtained by applying the second-order buffer operators $D_1$ and $D_2$ are smaller
than the predictive relative error of the raw data sequence in the model. The
predictive relative error obtained with $D_2$ is the smallest: its prediction is 241.23,
closest to the observation 249.4. Its one-step predictive error is only 3.28%, i.e., its
predictive accuracy is the highest.
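The GM(1,1) fitting used in this comparison can be sketched as follows. The data below are illustrative, not the paper's per-capita series; the least-squares step and the time-response function are the standard GM(1,1) recipe:

```python
import numpy as np

def gm11(x0):
    """Fit a GM(1,1) model to a raw sequence x0 and return a forecaster.

    The 1-AGO time response is x1_hat(t) = (x0[0] - b/a) e^{-a t} + b/a,
    and raw-value forecasts are first differences of x1_hat.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # first-order accumulation (1-AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background values
    # Least squares for the grey differential model x0(k) = -a z1(k) + b
    Bmat = np.column_stack([-z1, np.ones(len(z1))])
    (a, b), *_ = np.linalg.lstsq(Bmat, x0[1:], rcond=None)

    def predict(t):                          # raw-value forecast at step t >= 1
        x1_hat = lambda s: (x0[0] - b / a) * np.exp(-a * s) + b / a
        return x1_hat(t) - x1_hat(t - 1)

    return a, b, predict

# Illustrative, roughly exponential data (not the paper's series):
x0 = np.array([100.0, 110.0, 122.0, 135.0, 149.0, 165.0])
a, b, predict = gm11(x0)
assert a < 0                                 # negative development coefficient: growth
assert predict(6) > predict(5)               # forecasts keep growing exponentially
```

The fitted raw-data model in the text, $\hat x(2000+t)=1317.848\,e^{0.102t}-1185.448$, has exactly this time-response form with $-a=0.102$.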
4 Conclusion
Based on the recent literature, two new strengthening buffer operators based on
strictly monotone functions are established. The example shows that the predictive
accuracy is increased by the strengthened buffer sequences obtained by applying the
second-order buffer operators $D_1$ and $D_2$.
References
1. Liu, S.-f., Dang, Y.-g., Fang, Z.-g.: Grey system theory and its application. Science Press, Beijing (2004)
2. Liu, S.-f.: The three axioms of buffer operator and their applications. The Journal of Grey System 3(1), 39–48 (1991)
3. Liu, S.-f.: Buffer operator and its application. Theories and Practices of Grey System 2(1), 45–50 (1992)
4. Liu, S.-f.: The trap in the prediction of a shock disturbed system and the buffer operator. Journal of Huazhong University of Science and Technology 25(1), 25–27 (1997)
5. Dang, Y.-g., Liu, S.-f., Liu, B., et al.: Study on the strengthening buffer operators. Chinese Journal of Management Science 12(2), 108–111 (2004)
Infrared Target Detection Based on Spatially Related
Fuzzy ART Neural Network
Abstract. In order to solve the ghost, halo-effect, and low signal-to-noise-ratio
problems more effectively, this paper presents a spatially related fuzzy ART neural
network. We introduce a laterally-inspirited learning mode into the background
modeling stage. First, we combine the region-based feature with the intensity-based
feature to train the spatially related fuzzy ART neural network using the
laterally-inspirited learning mode. Then two spatially related fuzzy ART neural
networks are configured in a master-slave pattern to build the background models and
detect the infrared targets alternately. Experiments have been carried out, and the
results demonstrate that the proposed approach is robust to noise and can eliminate
the ghosts and the halo effect effectively. It can detect the targets effectively without
much post-processing.
1 Introduction
Visual surveillance is a hot topic in computer vision. In recent years, thanks to the
improvement of infrared technology and the drop in its cost, thermal imagery has
been widely used in the surveillance field. In comparison with visible imagery,
thermal imagery has many benefits, such as all-day working ability, robustness to
illumination change, no shadows, etc. However, thermal imagery has its own
difficulties, such as a lower signal-to-noise ratio, a lower resolution, an uncalibrated
white-black polarity change, as well as the halo effect that appears around very hot or
cold objects [1]. A comparison between visible and thermal imagery can be found in
Lin's review [2].
Many detection approaches have been proposed for thermal imagery. Some
approaches are based on the target's appearance [3–5]; others are based on the
target's motion [1,6].
A fuzzy ART neural network approach was proposed to detect targets in our
previous work [7]. It is capable of learning the total number of categories adaptively
and is stable, but it has some shortcomings. It does not take advantage of the strong
spatial correlation of thermal imagery. It exploits only the time-domain information, and
56 B. Chen, W. Wang, and Q. Qin
utilizes only one simple feature to build the background models; moreover, its
background-model updating strategy is not comprehensive. In this paper, we exploit
and combine the spatial information with the temporal information, and propose a
novel infrared target detection approach based on a spatially related fuzzy ART
neural network. The remainder of this paper is organized as follows: Sec. 2 introduces
the laterally-inspirited learning mode and the master-slave working pattern, and
presents the detection approach comprehensively. The experimental evaluation of our
approach is described in Sec. 3. Finally, Sec. 4 gives conclusions and further research
directions.
Several frames ahead ($F_t$, $t=0,\ldots,N$) are allocated to build the background
model, so we can obtain the sample data as in formula 1.
The background modeling stage includes two sub-stages: the initial-background-model sub-stage and the build-background-model sub-stage.
Initial Background Model Sub-stage. This sub-stage focuses on initializing the fuzzy ART neural network and filtering the sample data.
A fuzzy ART neural network is initialized and trained with the sample data. Taking advantage of the strong spatial correlation of infrared images, we combine the region-based feature (the median of the pixel's neighborhood) with the intensity-based feature (the pixel's own intensity) to train the fuzzy ART neural network (as in formula 3), where M denotes the size of the pixel's neighborhood and P(i, j) denotes the input feature vector.
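The combined feature described above can be sketched in a few lines (a Python illustration of ours, not the authors' code; the function name and the grayscale-array layout are assumptions):

```python
import numpy as np

def pixel_feature(frame, i, j, M=3):
    """Combine the region-based feature (median of the M x M neighborhood)
    with the intensity-based feature (the pixel's own intensity)."""
    r = M // 2
    patch = frame[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
    region = np.median(patch)        # region-based feature
    intensity = frame[i, j]          # intensity-based feature
    return np.array([region, intensity], dtype=float)

frame = np.array([[10, 10, 10],
                  [10, 50, 10],
                  [10, 10, 10]], dtype=float)
print(pixel_feature(frame, 1, 1))    # median of the patch is 10, centre is 50
```

The clipping at the array borders is our choice; the paper does not say how boundary pixels are handled.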
After training, we can obtain each category's information, such as its appearing frequency, and each sample datum's category, and then filter the categories according to their properties. If a category belongs to the background, its appearing frequency must be larger than the others' or than a threshold value; so we set a threshold θ: if a category's frequency is larger than θ, the category belongs to the background; otherwise it belongs to the non-background and must be deleted. In the same way, the sample data are filtered according to their categories. By means of filtering, the proposed approach can suit the real situation in which foreground objects appear during the background modeling period.
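The frequency-threshold filtering step might look like the following sketch (our illustration; treating the threshold as a fraction of all samples is an assumption, since the paper does not state the normalization):

```python
from collections import Counter

def filter_background_categories(sample_categories, theta):
    """Keep only categories whose appearing frequency exceeds the threshold
    theta (here a fraction of all samples); the rest are treated as
    non-background and deleted, and their sample data are filtered out."""
    counts = Counter(sample_categories)
    total = len(sample_categories)
    background = {c for c, n in counts.items() if n / total > theta}
    kept = [c for c in sample_categories if c in background]
    return background, kept

cats = [0, 0, 0, 0, 1, 0, 0, 2, 0, 0]   # category 0 dominates the samples
bg, kept = filter_background_categories(cats, theta=0.2)
print(bg)                               # only the frequent category survives
```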
Build Background Model Sub-stage. This sub-stage focuses on retraining the fuzzy ART neural network and refining the background model.
Owing to the thermal imaging mechanism, a thermal image has strong spatial correlation within a local region, so the background models should also reflect this property. In order to exploit and fuse the spatial and temporal correlations effectively, we introduce the laterally-inspired learning mode as follows:
According to the category of P_t(i, j), for each location, its initialized fuzzy ART neural network and the initialized fuzzy ART neural networks in its neighborhood are retrained, as in formula 4.
{FART(i ± k, j ± k) | k = 0, 1, …, (R − 1)/2} are retrained with P_t(i, j), if P_t(i, j) belongs to the background;
nothing is done, if P_t(i, j) belongs to the foreground.    (4)
where R denotes the size of the laterally-inspired region and FART(i, j) denotes the fuzzy ART neural network at location (i, j).
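Formula 4 can be sketched as follows (our illustration; the actual fuzzy ART training step is abstracted behind a stub class with a `train` method, so only the laterally-inspired dispatch logic is shown):

```python
class FuzzyART:
    """Minimal stand-in network for illustration: records training calls."""
    def __init__(self):
        self.trained_with = []

    def train(self, feature):
        self.trained_with.append(feature)

def laterally_inspired_retrain(farts, i, j, feature, is_background, R=3):
    """Formula 4: when pixel (i, j) is background, retrain the fuzzy ART
    networks at (i, j) and at its neighbors within the laterally-inspired
    region of size R; when it is foreground, do nothing."""
    if not is_background:
        return
    r = (R - 1) // 2
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            net = farts.get((i + di, j + dj))
            if net is not None:          # skip locations outside the image
                net.train(feature)

farts = {(i, j): FuzzyART() for i in range(3) for j in range(3)}
laterally_inspired_retrain(farts, 1, 1, feature=[0.4, 0.6], is_background=True)
print(len(farts[(0, 0)].trained_with))   # neighbor networks were retrained once
```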
Retraining the fuzzy ART neural networks with the laterally-inspired learning mode associates the background models with spatial information. Furthermore, combined with the region-based feature, it can exploit the spatial correlation effectively and make the detection more stable and robust to noise. We name this type of neural network the spatially related fuzzy ART neural network.
W_j^{t+1} = β (P_t(i, j) ∧ W_j^t) + (1 − β) W_j^t,    (5)
where ∧ denotes the fuzzy AND (component-wise minimum) and β is the learning rate.
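The weight update of formula 5 is straightforward to express (a sketch of ours; with the fast-learning setting β = 1 used later in the experiments it reduces to the component-wise minimum):

```python
import numpy as np

def fuzzy_art_update(W, P, beta=1.0):
    """Fuzzy ART learning rule (formula 5):
    W_new = beta * (P ^ W) + (1 - beta) * W,
    where ^ is the fuzzy AND (component-wise minimum)."""
    return beta * np.minimum(P, W) + (1 - beta) * W

W = np.array([0.6, 0.9])
P = np.array([0.4, 1.0])
print(fuzzy_art_update(W, P, beta=1.0))   # fast learning: component-wise min
```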
3) When the alternation time interval T passes, the slave network B becomes the master network and begins to detect targets, while the master network A is reset and established as the slave network. Each time the alternation interval T passes, the master and slave networks swap.
3 Experimental Results
To evaluate the performance of the proposed approach, we tested it on the OTCBVS Benchmark Dataset Collection [9]. Example images from the five original thermal sequences are shown in the first row of Fig. 1. We can see from these images that the thermal images have low resolution, that the halo effect appears around objects, and that the targets' temperature is not always higher than the environment's.
In our experimental evaluation, all detection results obtained by any approach are raw detection results without post-processing. To measure the performance quantitatively, we manually segmented the person regions and took them as the ground truth against which the automatic detection results are compared. Some segmented silhouettes are shown in the second row of Fig. 1.
We compare the spatially related fuzzy ART neural network (SRFART) with three other approaches: the single weighted Gaussian approach (SWG) [1], the codebook approach (CB) [10], and the fuzzy ART neural network (FART) [7]. The single weighted Gaussian approach is an improved version of the single Gaussian model. The codebook approach is one of the prominent detection approaches. Regarding parameter settings: for the single weighted Gaussian approach, we set the standard deviation σ = 5 and the detection threshold T = 8; for the codebook approach, we set the light-control parameters α = 0.7 and β = 1.3; for the FART approach and the proposed approach, we set the choice parameter α = 0, the learning rate parameter β = 1, and the vigilance parameter ρ = 0.85 for seq. 1, 2, 5 and ρ = 0.75 for seq. 3, 4; for the proposed approach, we set the alternation time interval T = 10 s, the size of the pixel's neighborhood M = 3, and the size of the laterally-inspired region R = 3.
We show the silhouette results obtained by each detection approach on representative images (from the five sequences) in Fig. 1. From the third row, we can see that the detection results obtained by the codebook approach contain much noise and even ghosts, as in seq. 4 and 5. From the fourth row, we can see that the results of the single weighted Gaussian approach also contain much noise, and that the halo effect is very pronounced and severely impairs the detection results, as in sequences 3 and 4. From the fifth row, we can see that the results of the fuzzy ART neural network approach contain some noise (seq. 1) and that it cannot detect the targets effectively when the contrast between target and environment is low (seq. 5). In contrast, from the last row, we can see that the results of the proposed approach contain less noise and no ghosts; the proposed approach detects most portions of people's bodies even when the contrast between target and environment is low, and it can eliminate most of the halo and extract the people effectively.
Infrared Target Detection Based on Spatially Related Fuzzy ART Neural Network 59
Fig. 1. Visual comparisons of detection results of the four approaches across different images
Fig. 2. Visual comparison of detection results with and without the master-slave pattern
Fig. 2 shows a visual comparison of detection results with and without the master-slave pattern. Image (a) in Fig. 2 is one example image from the background sample frames. We can see that a person is standing in front of the door and that the door is open. After a while, the person goes into the house and the door is closed, as seen in image (b). Image (c) shows the ground-truth detection result for image (b). From image (d) we can see that, when the master-slave pattern is not used, the detection result obviously contains two ghosts (false objects). In contrast, when the master-slave pattern is used, the background model can reflect the real situation as accurately as possible and eliminates the ghosts effectively.
In order to evaluate the detection results objectively and quantitatively, the F1 metric [8] is adopted. The F1 metric, also known as the Figure of Merit or F-measure, is the weighted harmonic mean of precision and recall, as defined in formula 6. Recall, also known as the detection rate, gives the percentage of detected true positives relative to the total number of true positives in the ground truth; precision, also known as the positive prediction rate, gives the percentage of detected true positives relative to the total number of detected items. A higher F1 value indicates better detection performance.
F1 = 2 · Recall · Precision / (Recall + Precision)    (6)
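Formula 6 can be computed directly from true-positive, false-positive, and false-negative counts (a sketch of ours, not the evaluation code used in the paper):

```python
def f1_score(tp, fp, fn):
    """F1 (formula 6): harmonic mean of precision and recall.
    recall    = tp / (tp + fn)  -- detected true positives vs. ground truth
    precision = tp / (tp + fp)  -- detected true positives vs. all detections"""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return 2 * recall * precision / (recall + precision)

print(f1_score(tp=80, fp=20, fn=20))   # precision = recall = 0.8 -> F1 = 0.8
```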
The F1 measurements of each approach for the five sequences in Fig. 1 are shown in Table 1. Comparing the measurements in Table 1, we can see that, due to the presence of halo and ghosts, the F1 values of the single weighted Gaussian and codebook approaches are poor, and that, due to the low contrast between target and environment, the F1 value of the fuzzy ART neural network is low, as in seq. 5. In contrast, the spatially related fuzzy ART neural network approach performs well even when halo and ghosts exist.
Generally speaking, the proposed approach is robust to noise and can reflect the real situation as accurately as possible. It can eliminate the halo and the ghosts and detect the targets effectively without much post-processing.
4 Conclusions
We presented a novel infrared target detection approach based on a spatially related fuzzy ART neural network. It associates the background models with spatial information through the laterally-inspired learning mode, and it reflects the real situation as accurately as possible by adopting the master-slave working pattern. The approach detects targets more stably and is robust to noise; it can eliminate the halo and ghosts and detect the targets effectively without much post-processing.
In the implementation of the proposed approach, we did not take account of processing speed; in the end, the detection speed is only 2 fps. We will therefore study how to speed the model up in future work.
References
1. Davis, J.W., Sharma, V.: Background subtraction in thermal imagery using contour saliency. International Journal of Computer Vision 71, 161–181 (2007)
2. Lin, S.-S.: Review: Extending visible band computer vision techniques to infrared band images. Technical Report, University of Pennsylvania (2001)
3. Li, Z., Bo, W., Ram, N.: Pedestrian detection in infrared images based on local shape features. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE Press, Minneapolis (2007)
4. Kai, J., Michael, A.: Feature based person detection beyond the visible spectrum. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 30–33. IEEE Press, Miami (2009)
5. Stephen, O.H., Ambe, F.: Detecting people in IR border surveillance video using scale-invariant image moments. In: Proceedings of SPIE, p. 73400L. SPIE, Orlando (2009)
6. Fida, E.B., Thierry, B., Bertrand, V.: Fuzzy foreground detection for infrared videos. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–6. IEEE Press, Anchorage (2008)
7. Chen, B., Wang, W., Qin, Q.: Infrared target detection based on fuzzy ART neural network. In: The Second International Conference on Computational Intelligence and Natural Computing, pp. 240–243. IEEE Press, Wuhan (2010)
8. Lucia, M., Alfredo, P.: A self-organizing approach to background subtraction for visual surveillance applications. IEEE Transactions on Image Processing 17, 1168–1177 (2008)
9. Object Tracking and Classification in and Beyond the Visible Spectrum, http://www.cse.ohio-state.edu/otcbvs-bench/
10. Kim, K., Chalidabhongse, T.H., Harwood, D., et al.: Real-time foreground-background segmentation using codebook model. Real-Time Imaging 11, 172–185 (2005)
A Novel Method for Quantifying the Demethylation
Potential of Environmental Chemical Pollutants
1 Introduction
Before chemical compounds can be used, evaluation of their safety is essential.
Currently, assessment of contamination of genetic material, such as DNA mutation,
focuses mainly on detecting direct damage. However, many pollutants that may be
deemed safe by existing safety evaluation protocols can have indirect effects. For
example, pollutants may cause DNA methylation and activate other epigenetic
mechanisms that can lead to slight changes in long-term biological traits but may not
cause genetic mutations, chromosomal aberrations, or other genetic damage [1; 2].
* Corresponding author.
Low-dose pollutant exposure can cause DNA methylation changes, and long-term,
mild, recurring methylation changes can begin the process of damaging health. Thus,
developing a tool to assess indirect damage is essential. [3; 4]
Currently, research on the epigenetic toxicity of environmental pollutants is in its infancy. [5] Some researchers have proposed the use of serum biochemical analysis,
histopathological evaluation, and analysis of the DNA methylation status in healthy
animals treated with chronic and/or sub-chronic pollutant exposure in vivo. [6; 7]
However, such a detection system has not been developed, and there is an urgent need
to establish a stable and practical detection method to evaluate the epigenetic toxicity
of pollutants. [8; 9]
Our approach to quickly evaluating the epigenetic toxicity of pollutants was to use the non-toxic enhanced green fluorescent protein (EGFP) to build a screening system, which we call quantifying the demethylation potential (QDMP). The basic principles of the system are as follows. The DNA methyltransferase M.SssI was used to artificially modify the commercial green fluorescent protein plasmid (EGFP) in vitro so that the EGFP gene promoter was highly methylated. [10] The methylated EGFP plasmid was transfected into the human hepatoma cell line HepG-2, and successfully transfected HepG-2 cells were selected through several rounds of screening of plate cultures. [11] The selected HepG-2 cells containing the methylated EGFP plasmid were used as the target, and the known demethylating agents arsenate and hydralazine were used as positive controls. Next, the
heavy metals Cd and Ni, atmospheric particulate matter extracts, and contaminated
shellfish sample extracts in a gradient of doses were tested using the screening
system. [12; 13; 14] For each pollutant type, gene promoter methylation, cell
expression of EGFP mRNA, and green fluorescence intensity of EGFP in cells were
measured to establish a linear relationship between these parameters and pollutant
dose. In this way, the level of EGFP gene promoter methylation was linked to the
level of cell green fluorescence intensity, so that the former could be calculated from
the latter. Thus, the lower the level of methylation of the EGFP gene promoter of
cultured cells, the stronger the green fluorescence. [15] When pollutants and HepG-2
cells transfected with hypermethylated pEGFP-C3 were co-cultured, those with a
strong DNA demethylation function exhibited significantly enhanced green
fluorescence; such pollutants were considered to have low-methylation epigenetic toxicity. [16] In contrast, the lack of a significant increase in green fluorescence was indicative of weak DNA demethylation potential, and such compounds were deemed not to have low-methylation epigenetic toxicity (Fig. 1).
DNA (500 ng) digested with the restriction enzymes MspI and HpaII was treated with sodium bisulfite as previously reported (Gonzalgo, 2002; Xianliang, 2006). [17]
The human liver cancer cell line HepG-2 was purchased from the China Type Culture
Collection (Chinese Academy of Medical Sciences, Beijing, China) and grown in
DMEM supplemented with 10% fetal bovine serum (FBS; JRH Bioscience, San
Antonio, TX, USA) and 1% NEAA (Chinese Academy of Medical Sciences).
The 599 bp CMV promoter of the pEGFP-C3 plasmid was transfected into HepG-2 cells using the FuGENE HD transfection reagent (F. Hoffmann-La Roche Ltd, Basel, Switzerland). This reagent (3, 4, 5, 6, or 7 µl) was pipetted directly into the medium containing the diluted pEGFP-C3 DNA (0.02 µg/µl) without allowing contact with the walls of the plastic tubes. The transfection reagent:DNA complex was incubated for 15 min at room temperature, and then the transfection complex was added to the cells in a drop-wise manner. The wells were swirled to ensure distribution over the entire plate surface. Cell growth was best and the fluorescence was brightest when 7 µl of the transfection reagent was used.
Cells were seeded at a density of 3 × 10^5 cells/10-cm dish on day 0. On day 1, the medium was changed to one containing 5-aza-dC (Amresco Co. Ltd., USA), which was freshly dissolved in PBS and filtered through a 0.2-µm filter. On day 4, cells were harvested and genomic DNA was obtained by serial extraction with phenol/chloroform and ethanol precipitation. Total RNA was extracted with the ISOGEN kit (QIAGEN Co. Ltd., Germany). HepG-2 cells, into which the hypermethylated EGFP-C3 plasmid gene had been introduced by homologous recombination, were exposed to heat at 37 °C for 6 h on days 2, 3, and 4 because this treatment effectively induced higher expression of hypermethylated EGFP-C3 mRNA.
In the next step, competent bacteria were prepared using E. coli strain DH5α. After the PCR product was purified, it was ligated into the pGEM-T vector, transformed into the competent bacteria, and screened via blue-white screening. The product was separated by 1.5% agarose gel electrophoresis and photographed using a gel imaging system camera. Positive clones were screened by PCR amplification, the bacteria were cultured with shaking at 37 °C, and the clones were confirmed using the universal primers T7 and SP6 by sequencing at the Beijing Genomics Institute of Bacteria Biotechnology Company.
Fig. 2. Representative bisulfite sequencing results for the treated group and the untreated group. Note: of the nine CG target sites in the sequence, three are methylated sites and six are non-methylated sites; the chart below shows the methylation status of all nine loci.
Figure 2 shows a representative sequence diagram and methylation pattern, which show that the methylated plasmid was constructed well: after treatment, 90.4% of the CG sites in the plasmid EGFP gene promoter were methylated, and the hypermethylation status of the EGFP gene promoter could be expressed quantitatively.
Fig. 3. Real-time quantitative PCR detection results for the cell samples and melting curves of the PCR products
There was a good linear relationship between the 5-aza-dC dose gradient and the mean fluorescence intensity: y = 10.402x + 6.0334; R² = 0.829. This result shows that eukaryotic cells containing the green fluorescent protein gene vector exhibit a methylation-sensitive dose-response relationship. The intensity of cellular green fluorescence can thus be used as an indicator of the presence of certain pollutants. [18]
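A dose-response line of this kind can be fitted in outline as follows (a sketch of ours on made-up illustrative points, not the study's data; the study's own slope, intercept, and R² are those quoted above):

```python
import numpy as np

# Hypothetical dose / mean-fluorescence pairs, for illustration only
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
fluor = np.array([6.0, 11.5, 16.0, 27.5, 47.0])

slope, intercept = np.polyfit(dose, fluor, 1)      # least-squares y = ax + b
pred = slope * dose + intercept
r2 = 1 - np.sum((fluor - pred) ** 2) / np.sum((fluor - fluor.mean()) ** 2)
print(round(slope, 3), round(intercept, 3), round(r2, 3))
```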
The sensitivity of the assay system was tested by adding various doses of 5-aza-dC. Demethylation of the CMV promoter of pEGFP-C3 was clearly observed at doses of 0.1 µM or higher. Furthermore, fluorescence intensity was induced in a dose-dependent manner. We also examined the appearance of green fluorescence before and after the addition of 5-aza-dC. Under a fluorescence microscope, significant fluorescence was observed in the cytoplasm after the addition of 0.0008 µM of 5-aza-dC.
4 Conclusions
A system to assay for demethylating agents was established using the CMV promoter of the plasmid pEGFP-C3. To our knowledge, this is the first assay system that uses an endogenous CMV promoter. Methylation stability is high for endogenous sequences, and CGIs (CpG islands) in promoter regions have higher fidelity than those outside them [19]. The mechanisms involved in maintaining and monitoring the methylated status of CGIs are expected to function well for the CMV promoter of the plasmid pEGFP-C3. Therefore, accurate estimation of epimutagens should be possible with this system.
To establish a sensitive detection system, it was necessary to use a methylated CMV promoter that responds to low doses of demethylating agents (i.e., 5-aza-dC). In this study, we used the CMV promoter of the plasmid pEGFP-C3 because our recent studies [20] identified it as one that met the requirements. The CMV promoter was demethylated by 5-aza-dC at doses as low as 0.01 µM in parental HepG-2 cells. This represents high sensitivity, considering that laboratory use of 5-aza-dC is between 0.1 and 10 µM. Demethylation of the CMV promoter and expression of the introduced EGFP were observed after addition of 5-aza-dC at doses of 0.1 µM or higher. Fluorescence of the EGFP product was detected by fluorescence microscopy after addition of 1 µM 5-aza-dC. In summary, we established a detection system for demethylating agents using an endogenous CMV promoter; this system is expected to allow accurate detection of epimutagens.
References
1. Eriksen, T.A., Kadziola, A., Larsen, S.: Binding of cations in Bacillus subtilis phosphoribosyldiphosphate synthetase and their role in catalysis. Protein Sci. 11, 271–279 (2002)
2. Zoref, E., Vries, A.D., Sperling, O.: Mutant feedback-resistant phosphoribosylpyrophosphate synthetase associated with purine overproduction and gout. Phosphoribosylpyrophosphate and purine metabolism in cultured fibroblasts. J. Clin. Invest. 56, 1093–1099 (1975)
3. Becker, M.A., Smith, P.R., Taylor, W., Mustafi, R., Switzer, R.L.: The genetic and functional basis of purine nucleotide feedback-resistant phosphoribosylpyrophosphate synthetase superactivity. J. Clin. Invest. 96, 2133–2141 (1995)
4. Reichard, J.F., Schnekenburger, M., Puga, A.: Long term low-dose arsenic exposure induces loss of DNA methylation. Biochem. Biophys. Res. Commun. 352, 188–192 (2007)
5. Olaharski, A.J., Rine, J., Marshall, B.L., et al.: The flavoring agent dihydrocoumarin reverses epigenetic silencing and inhibits sirtuin deacetylases. PLoS Genet. 1(6), e77 (2005)
6. Birnbaum, L.S., Fenton, S.E.: Cancer and developmental exposure to endocrine disruptors. Environ. Health Perspect. 111, 389–394 (2003)
7. Salnikow, K., Zhitkovich, A.: Genetic and epigenetic mechanisms in metal carcinogenesis and cocarcinogenesis: nickel, arsenic, and chromium. Chem. Res. Toxicol. 21, 28–44 (2008)
8. Tang, W.Y., Newbold, R., Mardilovich, K., et al.: Persistent hypomethylation in the promoter of nucleosomal binding protein 1 (Nsbp1) correlates with overexpression of Nsbp1 in mouse uteri neonatally exposed to diethylstilbestrol or genistein. Endocrinology 149, 5922–5931 (2008)
9. Reik, W., Dean, W., Walter, J.: Epigenetic reprogramming in mammalian development. Science 293, 1089–1093 (2001)
10. Bombail, V., Moggs, J.G., Orphanides, G.: Perturbation of epigenetic status by toxicants. Toxicol. Lett. 149, 51–58 (2004)
11. Feil, R.: Environmental and nutritional effects on the epigenetic regulation of genes. Mutat. Res. 600, 46–57 (2006)
12. Wu, C., Morris, J.R.: Genes, genetics, and epigenetics: a correspondence. Science 293, 1103–1105 (2001)
13. Feinberg, A.P., Ohlsson, R., Henikoff, S.: The epigenetic progenitor origin of human cancer. Nat. Rev. Genet. 7, 21–33 (2006)
14. Suzuki, M.M., Bird, A.: DNA methylation landscapes: provocative insights from epigenomics. Nat. Rev. Genet. 9, 465–476 (2008)
15. Barreto, G., Schaefer, A., Marhold, J., et al.: Gadd45a promotes epigenetic gene activation by repair-mediated DNA demethylation. Nature 445, 671–675 (2007)
16. Wade, P.A., Archer, T.K.: Epigenetics: environmental instructions for the genome. Environ. Health Perspect. 114, A140–A141 (2006)
17. Schmelz, K., Sattler, N., Wagner, M., et al.: Induction of gene expression by 5-aza-2′-deoxycytidine in acute myeloid leukemia (AML) and myelodysplastic syndrome (MDS) but not epithelial cells by DNA-methylation-dependent and -independent mechanisms. Leukemia 19, 103–111 (2005)
18. Olaharski, A.J., Rine, J., Marshall, B.L., et al.: The flavoring agent dihydrocoumarin reverses epigenetic silencing and inhibits sirtuin deacetylases. PLoS Genet. 1, e77 (2005)
19. Appanah, R., Dickerson, D.R., Goyal, P., et al.: An unmethylated 3′ promoter-proximal region is required for efficient transcription initiation. PLoS Genet. 3, e27 (2007)
20. Okochi-Takada, E., Ichimura, S., Kaneda, A., et al.: Establishment of a detection system for demethylating agents using an endogenous promoter CpG island. Mutat. Res. 568, 187–194 (2004)
21. Wang, X., et al.: High-throughput assay of DNA methylation based on methylation-specific primer and SAGE. Biochem. Biophys. Res. Commun. 341, 749–754 (2006)
22. Brunori, C., Ipolyi, I., Massanisso, P., Morabito, R.: New trends in sample preparation methods for the determination of organotin compounds in marine matrices. Handbook of Environmental Chemistry, Part O(5), 51–70 (2006)
23. Barreto, G., Schaefer, A., Marhold, J., et al.: Gadd45a promotes epigenetic gene activation by repair-mediated DNA demethylation. Nature 445, 671–675 (2007)
24. Cheetham, S., Tang, M.J., Mesak, F., et al.: SPARC promoter hypermethylation in colorectal cancers can be reversed by 5-aza-2′-deoxycytidine to increase SPARC expression and improve therapy response. Br. J. Cancer 98, 1810–1819 (2008)
Study of Quantitative Evaluation of the Effect of Prestack
Noise Attenuation on Angle Gather
1 Introduction
Preserved-amplitude processing of seismic data is crucial for achieving reliable prestack characterization and prestack inversion results. Although the importance of AVO forward-modeling and inversion techniques has been shown by Jonathan E. Downton et al. (2006) [1] and Heidi Anderson Kuzma et al. (2005) [2], false abnormal responses arise in AVO analysis because of shortcomings of prestack noise attenuation and energy compensation methods, which easily lead to energy inconsistencies between near and far offsets. The subsequent inversion and attribute-analysis processing then becomes ambiguous (multi-solution), so the inversion results cannot truly reflect subsurface lithology and physical property changes, which is not conducive to lithologic inversion, reservoir prediction, or fluid discrimination; see, for example, Gary Mavko et al. (1998) [3], Yinbin Liu et al. (2003) [4], and Fatti, J.L. et al. (1994) [5].
The information from different angles has to be used in the extraction of prestack attributes and in prestack inversion. For large-angle gathers, because of the characteristics of angle gathers and the impact of oil/gas AVO, random noise, and coherent noise, the denoising methods may differ from the conventional methods used in CSP and CMP gathers. Because real data are complex and direct quantitative evaluation is not easy to achieve, it is both theoretically necessary and practically feasible to establish a specific theoretical model and quantitatively study the effect of prestack noise attenuation on angle gathers. We now carry out a quantitative evaluation for the random noise, coherent noise, and surface waves that always occur in real data.
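Adding random noise at a prescribed signal-to-noise ratio, as done in the experiments below, might be sketched like this (our illustration; the amplitude-RMS definition of S/N is an assumption, since the paper does not state its convention):

```python
import numpy as np

def add_noise_at_snr(signal, snr, rng=None):
    """Add zero-mean Gaussian noise scaled so that the RMS signal-to-noise
    ratio of the result equals `snr` (e.g. 1 or 0.5 in the tests below)."""
    rng = np.random.default_rng(0) if rng is None else rng
    rms_signal = np.sqrt(np.mean(signal ** 2))
    noise = rng.standard_normal(signal.shape)
    noise *= rms_signal / (snr * np.sqrt(np.mean(noise ** 2)))
    return signal + noise

trace = np.sin(np.linspace(0, 20, 1000))   # toy trace standing in for a gather
noisy = add_noise_at_snr(trace, snr=0.5)   # the paper's heavy-noise case
```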
Fig. 1. Horizontal layer model. Left) Velocity model. Right) Shot gather.
Figure 2, left, shows the CMP gather before and after NMO and the angle gather without random noise and with noise at signal-to-noise ratios of 1 and 0.5. When the S/N is 1, random noise affects the events only slightly, and the AVO phenomenon can be clearly observed; after attenuating the noise, the small amount of residual random noise has little effect on the events. When the S/N is 0.5, random noise affects the events severely, and we cannot observe the AVO phenomenon; after attenuating the noise, the large amount of residual random noise still affects the angle gather to a large extent.
In order to see the changes in the angle gather more clearly, we selectively stacked angles 1-13, 13-26, and 27-39 of the angle gather into one trace each and repeated it five times (Figure 3). When the S/N is 1, the AVO phenomenon of small reflection coefficients is difficult to distinguish visually, the middle-angle information in the deeper formation is largely polluted, and the far-angle information is also greatly affected. After attenuating the noise, the middle-angle gather in the deeper formation is clearly improved, but the distortion of the contaminated weak signal slightly increases. When the S/N is 0.5, only strong reflection information can be distinguished; the far-angle, middle-angle, and near-angle gathers are all heavily polluted. After attenuating the noise, the angle-gather section is greatly improved as a whole, but the contaminated events of the far-angle, middle-angle, and near-angle gathers are not.
Fig. 2. Stacking angle gather. From left to right: without random noise; before attenuating random noise (S/N = 1); after attenuating random noise (S/N = 1); before attenuating random noise (S/N = 0.5); after attenuating random noise (S/N = 0.5).
Fig. 3. Stacking angle gather. From left to right: without random noise; before attenuating random noise (S/N = 1); after attenuating random noise; before attenuating random noise (S/N = 0.5); after attenuating random noise.
Fig. 4. Comparison of AVA curve of the fourth layer. Left) S/N=1. Right) S/N=0.5.
Figure 4 shows a comparison of the AVA curves of the fourth layer when the S/N is 1 and 0.5. We can see that, as the signal-to-noise ratio decreases, the jitter phenomenon becomes increasingly serious and the reflection coefficient shows a very significant jump.
Fig. 5. From left to right: CMP gather after NMO and angle gather before and after attenuating
coherent noise, stacking angle gather before and after attenuating coherent noise
Figure 6 shows a comparison of the AVA curves of the fourth layer before and after attenuating coherent noise. We can clearly observe the main interference point of the coherent noise, where the reflection coefficient shows a very significant jump.
Fig. 7. From left to right: CMP gather after NMO and angle gather before and after attenuating
surface wave, stacking angle gather before and after attenuating surface wave
5 Conclusions
Through the analysis, we draw the following conclusions:
1) By adding random noise of different signal-to-noise ratios and attenuating it, we find that the anti-noise ability of angle-gather stacking is much better than that of stacking the CMP gather. For the original angle gather, the threshold signal-to-noise ratio is 1; for angle-gather stacking, the minimum signal-to-noise ratio can be as low as 0.5.
2) By adding coherent noise and attenuating it, we find that the effect of coherent noise on the angle gather is smaller than that on the CMP gather. After attenuating the coherent noise, the stacked angle gather improves, and only a small amount of high-frequency glitches remain. The larger the apparent velocity of the coherent noise, the more serious its effect on the angle gather.
3) From attenuating the surface wave, we find that the surface wave has strong energy and low frequency, and even a small amount of residue affects the angle gather severely. As the surface wave mainly occurs at near offsets, its effect on near offsets is larger than on far offsets. After suppressing the surface wave, there is energy loss in the near-angle gather, which should be compensated.
References
1. Downton, J.E., Ursenbach, C.: Linearized amplitude variation with offset (AVO) inversion with supercritical angles. Geophysics 71(5), E49–E55 (2006)
2. Kuzma, H.A., Rector, J.W.: The Zoeppritz equations, information theory and support vector machines. In: SEG/Houston 2005 Annual Meeting, pp. 1701–1705 (2005)
3. Mavko, G., Mukerji, T.: A rock physics strategy for quantifying uncertainty in common hydrocarbon indicators. Geophysics 63(6), 1997–2008 (1998)
4. Liu, Y., Schmitt, D.R.: Amplitude and AVO responses of a single thin bed. Geophysics 68(4), 1161–1168 (2003)
5. Fatti, J.L., Vail, P.J., Smith, G.C., et al.: Detection of gas in sandstone reservoirs using AVO analysis: A 3-D seismic case history using the Geostack technique. Geophysics 59(5), 1362–1376 (1994)
6. Xu, Y., Xia, J., Miller, R.D.: Numerical investigation of implementation of air-earth boundary by acoustic-elastic boundary approach. Geophysics 72(5), 147–153 (2007)
7. Saenger, E.H., Bohlen, T.: Finite difference modeling of viscoelastic and anisotropic wave propagation using the rotated staggered grid. Geophysics 69, 583–591 (2004)
A User Model for Recommendation Based on Facial
Expression Recognition
1 Center for Studies of Information Resources, Wuhan University, Wuhan, China
2 Research Center for China Science Evaluation, Wuhan University, Wuhan, China
mrluquan@sina.com, chendezhao0023@163.com, hjy1120@126.com
1 Introduction
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 78–82, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Section 1 introduces the paper; Section 2 provides the user model; Section 3 gives the method for building the user model, including the t and α values; Section 4 introduces the updating mechanism of the model; Section 5 summarizes the paper.
When a user browses a topic of information, he may react to it: he may click it or store it, and his facial expression may also reflect his interest. In this user model, emotion is important for obtaining the user's interest. A user's emotion can be recognized and expressed: reference [6] researched affective visualization, and reference [7] built an affective model for emotion. The user model proposed in this paper is combined with such an affective model, and the user's emotion is added to the model.
This user model can be described as:

U = {T, C, W, t, α}   (1)

Ti = {(C, W), ti, αi}   (2)

Formula (1) describes the elements of the model. T denotes the topics the user is interested in; C represents the characteristics of a topic; W represents the weights related to those characteristics; t represents the number of times the user model has changed; α is the value of an affective function that reflects the user's emotion through facial expression recognition, serving as an emotional sign of the degree of interest. Every topic Ti has a series of (C, W) pairs and its own ti and αi.
Fig. 1. Two-dimensional emotion space (one axis running from very passive to very active)
The result the affective function computes is based on a coordinate that decides the emotion of the expression. A user's emotion is continuous and can be classified along two dimensions [8], as shown in Figure 1, and the user's emotion can be mapped into this space. In this model it is not necessary to recognize the specific emotion, only its degree, such as positive or negative, because a recommendation system just needs to know whether the user is interested in the topic and to what degree. So the emotion is mapped onto the emotion dimension: 0 expresses a very negative emotion, 1 a very positive emotion, and 0.5 represents neutral.
TF(ti) represents the number of times the term ti occurs in the document, and DF(ti) represents the number of documents in which ti occurs.
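The paper does not show the weighting formula itself; a conventional TF-IDF weight built from these two statistics can be sketched as follows (the function name and the toy numbers are ours, not from the paper):

```python
import math

def tfidf_weight(tf, df, n_docs):
    """Assumed TF-IDF weight for a topic characteristic: term frequency
    TF(ti) scaled by the inverse document frequency built from DF(ti)."""
    return tf * math.log(n_docs / df)

# Toy numbers: the term occurs 5 times in the document and in 10 of 1000 documents.
print(round(tfidf_weight(tf=5, df=10, n_docs=1000), 3))  # -> 23.026
```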
t represents the number of times topic T has updated; it reflects the frequency with which the user's interest changes. If t is increasing, we may infer that the user is paying more attention to the topic. At the same time, t should record the time at which the value changed; both the number of updates and the times of those updates are referred to when the model eliminates topics.
To decide the topics, the system should also gather the user's facial expressions, which are used to analyze the user's emotion. The affective function computes the user's emotion, which serves as a sign of the user's interest: it returns an emotional value between 0 and 1, where different values represent different emotions in different degrees. Affective computing depends on facial expression recognition, which decides the emotion by analyzing facial characteristics such as the structure and shape of the face [9]. The function then computes the emotion, and the result is presented as a potential value, decided by the expression potential formula:
K(e, Ex) = 1 / (1 + a·‖e − Exc‖²)   (5)

Here ‖·‖² denotes the squared distance norm between expressions, a is a constant that controls how fast the potential of the basic expression Ex fades, and Exc represents the center of the expression class. The expression potential can be used to classify expressions, and each expression is then mapped to an emotion value between 0 and 1.
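A minimal sketch of how formula (5) could drive classification and the mapping onto the [0, 1] emotion dimension; the expression centers, their emotion values and the constant a are illustrative assumptions, not values from the paper:

```python
def potential(e, center, a=1.0):
    """Expression potential K(e, Ex) = 1 / (1 + a * ||e - Exc||^2), Eq. (5);
    e and center are expression feature vectors."""
    dist2 = sum((x - c) ** 2 for x, c in zip(e, center))
    return 1.0 / (1.0 + a * dist2)

# Hypothetical basic-expression centers with their emotion values in [0, 1]
# (0 = very negative, 0.5 = neutral, 1 = very positive).
CENTERS = {"sad": ([0.0, 0.0], 0.0), "neutral": ([0.5, 0.5], 0.5), "happy": ([1.0, 1.0], 1.0)}

def emotion_value(e):
    """Classify the expression by its strongest potential, then return the
    emotion value of that class."""
    label = max(CENTERS, key=lambda k: potential(e, CENTERS[k][0]))
    return CENTERS[label][1]

print(emotion_value([0.9, 0.8]))  # -> 1.0 (closest to the "happy" center)
```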
The affective function can express the user's real emotion, avoiding the problem of traditional models that a user may click a web page or browse a topic's information while not actually being interested in it.
After the user model is built, it is updated in the subsequent learning. Updating includes two aspects: one is to update the model itself, such as adjusting the values Ci, Wi, α and t and adding new topics; the other is to eliminate some topics when the storage is not large enough.
When a topic is already in the user model, the model computes similarity based on SVM theory and then updates Ci, Wi, α and t. Every time the topic updates, the t related to the topic is incremented:

t = t + 1   (6)
When a new topic is found, it is added to the model if the storage can hold it; otherwise some topics are eliminated. The elimination strategy considers α and t. If the emotion value given by α is low, the user's interest is not strong, so the topic has priority for elimination. If t is low, the topic is not updated constantly and the user does not browse its information; it may be context information that the user stops paying attention to after a period of time. From the point of view of information lifecycle management theory, such a topic has no value or low value [10], so it is chosen for elimination. If t is high but the model has not updated for a long time, the topic may be out of date and can also be chosen; how long counts as "a long time" is relative.
When deciding which topic to eliminate, the user model should consider both t and α, and adjust itself to reflect the user's interest.
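The elimination strategy described above can be sketched as follows; the record fields, score weights and staleness threshold are illustrative assumptions rather than values from the paper:

```python
import time

def choose_topic_to_eliminate(topics, now=None, stale_after=30 * 24 * 3600):
    """Pick the topic with the weakest claim to stay: low emotion value (alpha),
    low update count (t), or no update for a long (relative) time."""
    now = now or time.time()

    def score(rec):
        staleness = 1.0 if now - rec["last_update"] > stale_after else 0.0
        # Lower score = better candidate for elimination.
        return rec["alpha"] + 0.1 * rec["t"] - staleness

    return min(topics, key=lambda name: score(topics[name]))

topics = {
    "sports": {"alpha": 0.9, "t": 12, "last_update": time.time()},
    "lottery": {"alpha": 0.2, "t": 1, "last_update": time.time() - 60 * 24 * 3600},
}
print(choose_topic_to_eliminate(topics))  # -> "lottery"
```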
5 Summary
This paper introduces our user model and details its construction and application. In personalized services the user's emotion is also important, but the traditional user model does not express it, so our model takes advantage of facial expression recognition to build a user model that contains the user's emotion. The model utilizes the affective function value α as a sign of the user's interest. At the same time, it provides t to reflect how frequently a topic changes; t is also used to decide whether the user still pays attention to the topic. The t and α values are important in updating the model and in recommending information to users. As users require more accurate information and affective computing develops, user models will adopt the emotion factor and be applied to recommendation systems.
References
1. Ying, X.: The Research on User Modeling for Internet Personalized Services. National University of Defence Technology (2003)
2. Rucker, J., Polanco, M.J.: Siteseer: Personalized Navigation for the Web. Communications of the ACM 40(3), 73–75 (1997)
3. Joachims, T., Freitag, D., Mitchell, T.: WebWatcher: A tour guide for the world wide web. In: Artificial Intelligence, Japan (August 1998)
4. Yan, D., Liu, M., Xu, Y.: Toward Fine-grained User Preference Modeling Based on Domain Ontology. Journal of the China Society for Scientific and Technical Information 29(3), 442–443 (2010)
5. Li, S.: The Representation and Update for User Profile in Personalized Service. Journal of the China Society for Scientific and Technical Information 29(1), 67–71 (2010)
6. Zhang, S., Huang, Q.: Affective Visualization and Retrieval for Music Video. IEEE Transactions on Multimedia 12(6) (2010)
7. Qin, Y., Zhang, X.: A HMM-Based Fuzzy Affective Model for Emotional Speech Synthesis. In: 2nd International Conference on Signal Processing Systems (2010)
8. Li, J.: Study on Mapping Method of Image Features and Emotional Semantics. Taiyuan University of Technology (2008)
9. Skelley, J.P.: Experiments in Expression Recognition. Master's thesis, Massachusetts Institute of Technology, EECS (2005)
10. Rief, T.: Information lifecycle management. Computer Technology Review 23(8), 38–39 (2003)
An Improved Sub-Pixel Location Method for Image
Measurement
1 Introduction
Machine vision based measurement is one of the key technologies in manufacturing: it can perform shape and dimensional inspection to ensure that parts lie within the required tolerances [1]. To improve the precision of image measurement, many scholars have put forward effective sub-pixel location algorithms [2-6]. As an actual measurement system must meet requirements of precision, efficiency and reliability, it should not be too complicated. The interpolation-based sub-pixel edge detection method has been widely used in practice for its fast calculation speed; it uses an interpolation function to approximately restore the one-dimensional continuous light intensity.
However, because of the discrete distribution of pixel points in a digital image, the traditional Gaussian interpolation algorithms are only available in the horizontal, vertical and diagonal (45°, 135°) directions, so certain errors are inevitably generated when locating edges of arbitrary direction. Consequently, the algorithm must be improved to raise the sub-pixel location precision.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 83–92, 2011.
© Springer-Verlag Berlin Heidelberg 2011
84 H. Zhou, Z. Liu, and J. Yang
The position of the maximum gray value difference is the boundary that discriminates the background from the object. Because of the integration effects and optical diffraction of the optical components, as well as the aberrations of the optical system, the gray values change gradually in the image, whereas in reality the change should be drastic. The classic edge extraction principle considers that the maximum difference marks the edge of an image object.
According to the square aperture sampling theorem, optical components always integrate the light intensity projected onto the photosensitive surface over a fixed-size area at a fixed interval, and the outputs are the gray values in an image. As the integration time and integration area are fixed, the outputs depend only on the light intensity distribution on the surface. A pixel's gray value output can be expressed as:
f(i, j) = ∫_{j−1/2}^{j+1/2} ∫_{i−1/2}^{i+1/2} g(x, y) dx dy   (1)
Here f(i, j) is the pixel gray value and g(x, y) is the light intensity distribution of the continuous image. Theoretically, the variation of edge gray values should follow a Gaussian distribution, as shown in Fig. 2, and the vertex of the curve is the precise location of the edge point.
The expression of the Gaussian curve is:

y = (1/(√(2π)·σ)) · exp(−(x − μ)² / (2σ²))   (2)
After a logarithmic operation the equation becomes conic (a parabola). To simplify the calculation, we can therefore fit a parabola to the logarithms of the values to obtain the vertex coordinate; that is, we use a conic instead of the Gaussian curve to improve efficiency.
Suppose the conic form is y = Ax² + Bx + C; the gray value output for each pixel is then:

y(n) = ∫_{n−1/2}^{n+1/2} (Ax² + Bx + C) dx   (4)
Let the index of the point with the maximum grayscale difference be 0 and its value f0; this position can be calculated by the classic operator mentioned above. The two points beside the maximum point are indexed −1 and 1, with values f−1 and f1. From (4) we get the gray values as follows:
f−1 = ∫_{−3/2}^{−1/2} (Ax² + Bx + C) dx = [Ax³/3 + Bx²/2 + Cx]_{−3/2}^{−1/2} = (13/12)A − B + C   (5)

f0 = (1/12)A + C   (6)

f1 = (13/12)A + B + C   (7)
Combining equations (5) to (7), we obtain the expressions of A, B and C:

A = (1/2)(f1 + f−1 − 2f0),  B = (1/2)(f1 − f−1),  C = (13/12)f0 − (1/24)f1 − (1/24)f−1   (8)
This solution results from taking logarithms of the Gaussian curve, so the pixel gray values in (9) should be replaced by their logarithms, which gives the vertex position:

x = (ln f1 − ln f−1) / (2·(2·ln f0 − ln f1 − ln f−1))   (10)
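Equation (10) translates directly into code; for samples drawn from an ideal Gaussian the log-domain parabola recovers the vertex exactly (the function and variable names here are ours):

```python
import math

def subpixel_peak(f_m1, f0, f1):
    """Sub-pixel offset of a Gaussian-shaped peak from the three samples
    f(-1), f(0), f(1), via parabola fitting in the log domain (Eq. 10)."""
    num = math.log(f1) - math.log(f_m1)
    den = 2.0 * (2.0 * math.log(f0) - math.log(f1) - math.log(f_m1))
    return num / den

# Example: samples of exp(-(x - mu)^2 / (2*sigma^2)) with true vertex mu = 0.3.
mu, sigma = 0.3, 1.0
g = lambda x: math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
print(round(subpixel_peak(g(-1), g(0), g(1)), 6))  # -> 0.3
```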
For the discrete distribution of a digital image's pixel points, the traditional Gaussian interpolation algorithms can only be performed in the horizontal, vertical and diagonal (45°, 135°) directions. In most cases edges run in arbitrary directions, so the location accuracy inevitably decreases.

Fig. 3 shows an object edge of the original image. Fig. 4(a) shows the trend curve of gray value change in the horizontal direction, and Fig. 4(b) in the gradient direction. As can be seen from the figures, the gray values change more drastically in the normal direction than in the other directions, which means that edge location in the normal direction will be more accurate.
Fig. 4. (a) Gray value trend in the horizontal direction; (b) gray value trend in the gradient direction
3 Algorithm Optimization
After obtaining the edge location of the object with the LoG operator, we use the Hough transform to get the slope of the boundary tangent line, from which the corresponding normal can be obtained. Suppose the angle of the edge normal to the horizontal axis is θ, as shown in Fig. 5. Taking an edge point located with pixel precision as the center, rotate the coordinate system so that the normal line becomes the x-axis and the edge direction becomes the y-axis, then do Gaussian curve fitting in the gradient direction to obtain the sub-pixel location more accurately.
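The rotation step can be sketched as follows: from the Hough-derived tangent slope we take the perpendicular as the normal direction and generate sample positions along it in the image frame (the function name and the sample point are illustrative):

```python
import math

def normal_samples(x0, y0, theta, offsets):
    """Points along the edge normal through (x0, y0); theta is the angle of
    the normal to the horizontal axis, offsets are signed distances in pixels."""
    return [(x0 + d * math.cos(theta), y0 + d * math.sin(theta)) for d in offsets]

# Tangent slope 3.0994 (the Hough line of Sect. 4) -> direction (1, 3.0994);
# the normal is perpendicular to it, i.e. along (3.0994, -1).
theta = math.atan2(-1.0, 3.0994)
pts = normal_samples(440.0, 455.0, theta, [-1.0, 0.0, 1.0])
print(pts[1])  # -> (440.0, 455.0): zero offset stays at the center point
```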
f(x) = f(x0) + f′(x0)(x − x0) + (1/2!)·f″(x0)(x − x0)² + … + (f⁽ⁿ⁾(x0)/n!)·(x − x0)ⁿ + Rn(x)   (12)

In the formula, Rn(x) = (f⁽ⁿ⁺¹⁾(ξ)/(n + 1)!)·(x − x0)ⁿ⁺¹, which is the Lagrange remainder.
In general, a second-order Taylor series is sufficient to approximate the original function, and this paper uses the Taylor series as the interpolation function. As shown in Fig. 7, an interpolation point is surrounded by 4 pixels, and we can choose any 3 of them (C(4,3) = 4 combinations) for Lagrange interpolation; finally, we take the average as the output gray value for the point.
Fig. 7. An interpolation point (x, y) surrounded by the four pixels (0,0), (0,1), (1,0) and (1,1)
Suppose the gray values at the points (0, 0), (0, 1), (1, 0) and (1, 1) are y0, y1, y2 and y3 respectively. Second-order Lagrange interpolation over the points (0, 0), (0, 1) and (1, 0) gives f0(x, y):

f0(x, y) = y0·(x − x1)(x − x2)/((x0 − x1)(x0 − x2)) + y1·(x − x0)(x − x2)/((x1 − x0)(x1 − x2)) + y2·(x − x0)(x − x1)/((x2 − x1)(x2 − x0))   (13)
Similarly, Lagrange interpolation over the points (0, 0), (0, 1) and (1, 1) gives f1(x, y):

f1(x, y) = y0·(x − x1)(x − x3)/((x0 − x1)(x0 − x3)) + y1·(x − x0)(x − x3)/((x1 − x0)(x1 − x3)) + y3·(x − x0)(x − x1)/((x3 − x1)(x3 − x0))   (14)
Lagrange interpolation over the points (0, 1), (1, 0) and (1, 1) gives f2(x, y):

f2(x, y) = y1·(x − x2)(x − x3)/((x1 − x2)(x1 − x3)) + y2·(x − x1)(x − x3)/((x2 − x1)(x2 − x3)) + y3·(x − x2)(x − x1)/((x3 − x1)(x3 − x2))   (15)
Lagrange interpolation over the points (0, 0), (1, 0) and (1, 1) gives f3(x, y):

f3(x, y) = y0·(x − x2)(x − x3)/((x0 − x2)(x0 − x3)) + y2·(x − x0)(x − x3)/((x2 − x0)(x2 − x3)) + y3·(x − x2)(x − x0)/((x3 − x0)(x3 − x2))   (16)
Finally, the gray value f(x, y) of the point (x, y) is calculated as:

f(x, y) = (1/4)·(f0(x, y) + f1(x, y) + f2(x, y) + f3(x, y))   (17)
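A sketch of the averaging scheme of equations (13)-(17), reading each three-pixel subset as the interpolating plane through those samples (a simplification of the paper's formulation; at the cell center the average coincides with bilinear interpolation):

```python
def plane_value(p, q, r, x, y):
    """Value at (x, y) of the plane z = a + b*x + c*y through three samples."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = p, q, r
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    b = ((z1 - z0) * (y2 - y0) - (z2 - z0) * (y1 - y0)) / det
    c = ((x1 - x0) * (z2 - z0) - (x2 - x0) * (z1 - z0)) / det
    return z0 + b * (x - x0) + c * (y - y0)

def averaged_gray(x, y, y0, y1, y2, y3):
    """Average the four 3-of-4 corner interpolations, as in Eq. (17).
    Corners: (0,0)->y0, (0,1)->y1, (1,0)->y2, (1,1)->y3."""
    c = [(0, 0, y0), (0, 1, y1), (1, 0, y2), (1, 1, y3)]
    subsets = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (0, 2, 3)]  # Eqs. (13)-(16)
    vals = [plane_value(c[i], c[j], c[k], x, y) for i, j, k in subsets]
    return sum(vals) / 4.0

print(averaged_gray(0.5, 0.5, 10, 20, 30, 40))  # -> 25.0 (bilinear value at the center)
```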
After obtaining the gray values of the interpolation points along the gradient direction in the new coordinate system, we can relocate the edge points with sub-pixel precision in the gradient direction based on the Gaussian interpolation algorithm.
4 Experiment Results
In order to verify the algorithm, we adopted a standard gauge, which has high-quality straight edges, for the sub-pixel location experiment. Fig. 8(a) illustrates a standard gauge with a working length of 20 mm (the long side is the working face) and Fig. 8(b) shows the gray value data of the gradual edge region.
Fig. 8. Original image of the gauge and gray values of the partial edge
Table 1. The list of edge point coordinates by different edge extraction methods
Execute the LoG edge extraction (σ = 1, thresh = 70) to get the edge binary image. A Hough transform on the left side edge gives the line equation y = 3.0994x − 909.3564. To precisely locate the edge point, rotate the coordinate system about the LoG zero crossing point so that the edge becomes the y-axis and the normal direction becomes the x-axis.
Conduct sub-pixel location according to the algorithm mentioned above; notice that the operation is only carried out for the zero crossing points along the normal direction. Table 1 lists the edge point coordinates at pixel precision, at sub-pixel precision by traditional Gaussian interpolation, and at sub-pixel precision by the improved algorithm.
Fig. 9 shows the 2D curves drawn from the positions of the sub-pixel points. We can find that the curve at pixel-level precision is shaped like a zigzag; the traditional Gaussian interpolation method remedies such errors to some extent, but because it is not carried out along the edge normal it inevitably has some errors compared with the real edge. The improved algorithm performs a second sampling along the edge normal and uses the restored gray values for Gaussian fitting, so its curve lies closest to the real edge.
Fig. 10. Ring gauge of 20
Fig. 11. Comparison of edge point curves (pixel accuracy edge, traditional sub-pixel edge and improved sub-pixel edge, in x-y pixel coordinates)
An Improved Sub-Pixel Location Method for Image Measurement 91
Because the standard gauge boasts high-quality straight and curved edges, we can figure out the sub-pixel location precision by checking the shape errors of the object edges in the image. The shape errors of the gauge are shown in Table 2, and we can find that the new algorithm obviously improves the precision of sub-pixel location.
5 Summary
Edge detection and sub-pixel precision location of edge points are the basis of image measurement. Classical Gaussian interpolation location achieves high speed but low precision. For arbitrarily oriented edges, Gaussian interpolation along the edge gradient can improve the location precision: to restore the gray values in the gradient direction, we perform second-order Lagrange interpolation and then the Gaussian interpolation for sub-pixel relocation. Experiments prove that the improved sub-pixel interpolation algorithm achieves much better precision than the traditional algorithm.
References
1. Steger, C., Ulrich, M., Wiedemann, C.: Machine Vision Algorithms and Applications, pp. 1–2. Tsinghua University Press, Beijing (2008)
2. Qu, Y.D.: A fast subpixel edge detection method using Sobel-Zernike moment operator. Image and Vision Computing 23, 11–17 (2005)
3. Tabatabai, A.J., Mitchell, O.R.: Edge location to sub-pixel values in digital imagery. IEEE Trans. Pattern Anal. Machine Intell. PAMI-6(2), 188–201 (1984)
4. van Assen, H.C., Egmont-Petersen, M., Reiber, J.H.C.: Accurate object localization in gray level images using the center of gravity measure: accuracy versus precision. IEEE Transactions on Image Processing 11(12), 1379–1384 (2002)
5. Malamas, E.N., Petrakis, E.G.M., Zervakis, M., et al.: A survey on industrial vision systems, applications and tools. Image and Vision Computing 21, 171–188 (2003)
6. Li, Y., Pang, J.-x.: Sub-pixel edge detection based on spline interpolation of D2 and LoG operator. Journal of Huazhong University of Science and Technology 28(3), 77–79 (2000)
The Dynamic Honeypot Design and Implementation
Based on Honeyd
1 Hunan University of Commerce, Beijin College
2 Hunan First Normal University
3 School of Computer, Hunan University of Commerce, Changsha, China
{12870595,494680234,522396825}@qq.com
1 Preface
Along with the rapid development of Internet technology, network information security faces serious threats. Current network security technologies mainly use passive defense methods, but these methods find it very tough to deal with the complex and changeable attacks from hackers. Since passive defense modes are difficult to deal with such attacks, we must turn the defensive measures from passive into active, and this is also our new research topic. In this context, we put forward a kind of active-defense network security technology: the honeypot. A honeypot system elaborately prepares network resources for hackers; it is a strictly monitored network deception system. The system aims at attracting hacker attacks by offering real or simulated networks and services, and at collecting information and analyzing the attack behavior and process during the attacks. In this way, we can grasp the hackers' motivations and goals and repair the security holes of the attacked system in advance, so that such attacks can be avoided.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 93–98, 2011.
© Springer-Verlag Berlin Heidelberg 2011
94 X. Liu, L. Peng, and C. Li
After Honeyd receives packets, the central packet dispatcher checks the IP packet length and verifies the packet checksum. Honeyd mainly responds to three Internet protocols, ICMP, TCP and UDP; packets of other protocols are logged and then discarded. Before packets are processed, the central packet dispatcher looks up the honeypot configuration corresponding to the packet's destination address; if no corresponding configuration is found, the system uses a default configuration. Once the configuration is determined, the packet and its configuration are handed to the specific protocol handler.
The main purpose of probing the surrounding environment is to learn about the network environment, which is a necessary condition for configuring the honeypot system; that is, to solve the allocation problem one must first know the surrounding network environment.
According to the above active detection and passive fingerprinting designs, we can approximately determine the kind of operating system and obtain the basic situation of the environment. The design is shown in Figure 2:

Fig. 2. Design of active detection combined with passive fingerprinting
The system simulates virtual honeypots through Honeyd, which is configured by templates defined in the honeyd.config file. The system creates a default host template, used to handle packets that match no other template; in addition, it creates a Windows operating system template, an XP operating system template and a router template.
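A minimal template file in this style might look as follows (the personalities, ports and addresses here are illustrative assumptions, not the paper's actual configuration):

```
create default
set default default tcp action block

create windows
set windows personality "Microsoft Windows XP Professional SP1"
set windows default tcp action reset
add windows tcp port 80 open
add windows tcp port 139 open
bind 192.168.1.10 windows

create router
set router personality "Cisco IOS 11.3 - 12.0(11)"
add router tcp port 23 open
bind 192.168.1.1 router
```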
Honeyd usually has two layers of data control: firewall data control and router data control. Firewall data control mainly uses a firewall to control the honeypot's outbound connections. The firewall adopts a "loose in, strict out" strategy: a rational threshold is set on the number of outbound connections a honeypot machine may initiate, and generally allowing 5 to 10 outbound connections is appropriate; this will not arouse the intruder's suspicion and also prevents the honeypot system from becoming a tool for the intruder to attack other systems. Routing control is completed by routers, basically using the router's packet-filtering access control on outgoing packets, lest the honeypot be used to attack other parts of the network; it is mainly used to prevent DoS attacks, IP spoofing and other deceptive attacks. In this system we adopt a gateway instead. The advantage of using a gateway is that it has no network address, so the control operations are more covert and not easy for hackers to perceive. We adopt the rc.firewall script developed by the Honeynet Project for configuration and implementation, and use IPTables for restriction. IPTables is the open-source firewall bundled with Linux; it can drop a packet as needed, allow only a certain number of new connections in a given period, or even discard all packets to completely isolate the honeypot system. Each time a new outbound connection is initiated, the firewall counts it; when the total limit is reached, IPTables blocks any further connection launched from the honeypot. Then IPTables resets itself, again allowing the permitted number of outbound connections in each time period. The script sets the number of outbound TCP, UDP, ICMP or other arbitrary IP connections allowed per hour; when an intruder's outbound packets reach the specified value, all outward connections are automatically cut off to reduce the network risk.
Data capture is the key of the honeypot system. We need the captured data to determine the intruders' behavior and motivation and, in order to determine what an intruder did after gaining access, the captured data must provide the intruder's keystroke records and attack effects.
Snort is a lightweight intrusion detection system with three working modes: sniffer, packet recorder, and network intrusion detection system; we mainly use Snort's intrusion detection mode. The configuration above makes Snort output the collected data to a local MySQL database called Snortdb, with user name Snortoper and password Password; at the same time, packets are recorded in Tcpdump format in the Snort.log file.
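The configuration file itself is not reproduced in the paper, but classic Snort output directives of roughly the following form would produce this behavior (the database name, user and password are those quoted in the text; the host and file name are assumptions):

```
output database: log, mysql, user=Snortoper password=Password dbname=Snortdb host=localhost
output log_tcpdump: snort.log
```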
Sebek is a kernel-based data capture tool that can covertly capture all activities on the honeypot. Sebek has a great advantage in capturing encrypted packets: no matter what kind of encrypted data arrives at the destination host, it must be decrypted through system calls before it can take effect, and Sebek captures it there. The captured data is then wrapped with Sebek's own protocol and sent over the network to the Sebek server. The Sebek client uses hiding techniques so that intruders do not feel they are being monitored, which is convenient for capturing real data; the client packs the captured data into UDP packets and sends them to the network through the NIC driver, avoiding detection by any sniffer the intruder may have installed. Sebek consists of two parts, the client and the server. The client captures data from the honeypot and outputs it to the network for the server to collect. The server can collect data in two ways: the first is to capture active packets directly from the network, the second is to read packets from files saved in Tcpdump format. The collected data can be uploaded to a relational database, and keystroke records can also be displayed instantly.
Log recording mainly records the data captured by the honeypot host. Its main function is to collect and record hackers' behavior, for the future analysis of the tools, strategies and purposes of their attacks, or to provide evidence for lawsuits against hackers' crimes. To ensure the security of the captured attack data, we design a log backup programme in the system. The honeypot host is a real host running the Linux Red Hat 9.0 operating system, whose Syslog function is powerful: Syslog can send the information generated by the system kernel and tools. We configure its syslog.conf file so that the log messages collected by the virtual honeypots are transferred to a log server; by modifying syslog.conf, the local log information is transferred to a remote log server for backup. Finally, we can analyze the captured hacker information, learn the hackers' means and methods, and take corresponding defensive measures against their attacks. With this, the dynamic honeypot based on Honeyd is basically realized.
5 Summary
References
1. Fu, X., Yu, W., Cheng, D., et al.: On Recognizing Virtual Honeypots and Countermeasures. In: The 2nd IEEE International Symposium on Dependable, Autonomic and Secure Computing, vol. 9, pp. 220–230 (2006)
2. Leita, C., Mermoud, K., Dacier, M.: ScriptGen: an automated script generation tool for honeyd. In: 21st Annual Computer Security Applications Conference, vol. 9, pp. 125–135 (2008)
3. Zhang, F., Zhou, S., Qin, Z., et al.: Honeypot: a supplemented active defense system for network security. In: Proceedings of the Fourth International Conference, PDCAT 2009, vol. 8, pp. 231–235 (2003)
4. Kreibich, C., Crowcroft, J.: Honeycomb - Creating Intrusion Detection Signatures Using Honeypots (EB/PDF), http://www.sigcomm.org/I-IotNets-II/papershaoneycomb.pal.2010
5. Dornseif, M., Holz, T., Mathes, J., Weisemoller, I.: Measuring Security Threats with Honeypot Technology, 21–29 (2009)
6. Kwong, L., Yan: Virtual honeynets revisited. In: SMC Information Assurance Workshop, Proceedings from the Sixth Annual IEEE, vol. 9, pp. 230–240 (2010)
Research of SIP DoS Defense Mechanism Based on
Queue Theory
1 Introduction
SIP was proposed in 1999 by the IETF as a signaling protocol based on the IP network environment, and it has been widely used in NGN at present. SIP is vulnerable to DoS attack because it runs on the open IP network environment. For example, attackers can create false messages that have forged source addresses and Via fields, set the attacked SIP proxy server as the request initiator, and send these messages to large numbers of SIP users; the spoofed users will then send many DoS attack messages to the attacked server.

Therefore, research on the DoS detection and defense of SIP has become a hot point at present and also a problem that urgently needs to be solved in NGN deployment. According to the definition of VoIPSA, the DoS attack problems of a SIP system can be divided into five categories: request flooding, malformed requests and messages, QoS abuse, spoofed messages, and call hijacking [1]. This paper mainly researches the flooding problems of SIP and, combining the M/M/1/K mathematical model, proposes a SIP DoS attack defense scheme based on queueing theory. At last, the paper simulates and analyzes the performance of the scheme.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 99–104, 2011.
© Springer-Verlag Berlin Heidelberg 2011
100 F. Gao, Q. Liu, and H. Zhan
Assume that the time the server needs to process a unit of message is Q seconds, that the size of messages obeys an exponential distribution with mean L, and that the average service rate of the SIP system is μ with average service time S; thus:

S = 1/μ = L·Q   (1)

For convenience of analysis, we assume that the service time and the interarrival time still obey exponential distributions while the system is under DoS attack [2]. We also assume that the whole system has a single core processing unit and that the cache size is K; that is, at most K request messages wait to be processed, and the rest of the arriving messages are discarded [3]. So we can analyze the whole system using the M/M/1/K model.
For the M/M/1/K system, writing ρ = λ/μ, we can obtain the probability p0 that the queue length of the system is 0:

p0 = 1 / (Σ_{i=0}^{k} ρ^i)   (2)
The probability that a message is discarded when the system is in statistical equilibrium is:

pk = ρ^k · p0   (3)
The average waiting time of a message, given that no messages are lost, is:

W = ρ·p0·(1 − k·ρ^(k−1) + (k − 1)·ρ^k) / (μ·(1 − ρ)²)   (ρ ≠ 1)
W = (k − 1)·k·p0 / (2μ)   (ρ = 1)   (4)
The average delay of a message equals its waiting time in the queue plus the average service time, so the average response time of the system is:

R = W(λ, L, K) + S(L)   (5)

The response time of the system is thus related to three parameters: the arrival rate λ of the request messages, the average message size L, and the system cache size K. Normally the value of K is fixed, so the response time depends on λ and L.
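Equations (2)-(5) can be combined into a small calculator; with a large cache K the waiting time approaches the plain M/M/1 value ρ/(μ(1 − ρ)), which gives a convenient sanity check (the function and parameter names are ours):

```python
def mm1k_metrics(lam, mu, k):
    """M/M/1/K performance sketch: empty probability p0 (Eq. 2), loss
    probability pk (Eq. 3), and response time R = W + 1/mu (Eqs. 4-5)."""
    rho = lam / mu
    p0 = 1.0 / sum(rho ** i for i in range(k + 1))
    pk = (rho ** k) * p0  # discard probability in equilibrium
    if abs(rho - 1.0) < 1e-12:
        W = (k - 1) * k * p0 / (2.0 * mu)
    else:
        W = rho * p0 * (1 - k * rho ** (k - 1) + (k - 1) * rho ** k) / (mu * (1 - rho) ** 2)
    return p0, pk, W + 1.0 / mu

# rho = 0.5, K = 50: waiting time is close to the M/M/1 value 1.0,
# so the response time is about waiting 1.0 + service 1.0.
p0, pk, R = mm1k_metrics(lam=0.5, mu=1.0, k=50)
print(round(R, 6))  # -> 2.0
```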
For example, if an illegal INVITE message blocks the head of the queue, the legal INVITE messages behind it will not be able to obtain service, so the call cannot be established according to the normal process, which finally leads to session failure by timeout. Even if an illegal INVITE message does not block the head of the queue, a large number of illegal messages blocking near the head may cause newly arriving session response messages to be rejected or discarded, and the calls of legal users will also time out because no response is received [4], as shown in Fig. 1.
Fig. 1. The blocked queue of INVITE messages. The former is the situation of blocking the head of the queue and the latter is blocking within the queue.
Fig. 2. Sketch maps of the single queue and the priority queue. The former is the situation of the single queue and the latter is the priority queue.
In order to reduce the influence of INVITE flooding on the proxy server, we consider a scheme that introduces a priority queue. A SIP server generally uses a FIFO queue to process its message queue; under a DoS attack this produces the problem described above. If we adopt a priority queue instead, INVITE messages are assigned low priority and put into the low-priority queue, while non-INVITE messages get high priority and are put into the high-priority queue. The two queues each follow the FIFO principle, but messages of lower priority are processed only when the higher-priority queue is empty, as shown in Fig. 2.
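The two-level dispatching rule described above can be sketched as follows; the message representation and the per-queue capacity are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

class PriorityDispatcher:
    """Two-level non-preemptive dispatch: INVITE messages go to the
    low-priority queue, all other messages to the high-priority queue;
    the low queue is served only when the high queue is empty."""
    def __init__(self, cache_per_queue):
        self.high = deque()
        self.low = deque()
        self.cap = cache_per_queue    # K/2 slots per queue

    def enqueue(self, msg):
        q = self.low if msg["method"] == "INVITE" else self.high
        if len(q) >= self.cap:
            return False              # buffer full: message is discarded
        q.append(msg)
        return True

    def next_message(self):
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

Under a flood of INVITE messages, responses and other non-INVITE traffic keep flowing through the high-priority queue instead of being starved behind the flood.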
Thus we can assign the server's original cache resources to the two queues, dividing the resources according to actual conditions. For convenience of analysis, this paper assigns the cache resources to the two queues equally, giving two M/M/1/(K/2) queues [5].
The priority queue adopts a non-preemptive priority rule: a message that is receiving service is allowed to finish without interruption, even if a higher-priority message arrives. Every priority class has an independent queue. When the server becomes free, the first message of the non-empty queue with the highest priority is the first to receive service. Below we give the corresponding average delay equations for every priority class. For the M/M/1/(K/2) system with multiple priority levels, we first define the parameters as follows: q_{wi} is the average length of the queue of priority i; W_i is the average waiting time of priority i; p_{0i} is the probability that the length of the message
102 F. Gao, Q. Liu, and H. Zhan
This paper discusses only the situation of two priority levels. The average delay of the high-priority messages is easily obtained as

W_1 = T_R + \frac{1}{\mu_1} q_{w1}    (8)
For the messages of low priority, the expression for the average delay is generally similar to that for high priority, except that a further message may be waiting in the queue when a high-priority message arrives at the server. In this case the expression is

W_2 = T_R + \frac{1}{\mu_1} q_{w1} + \frac{1}{\mu_2} q_{w2} + \rho_1 W_2    (9)
According to the P-K formula, the average residual service time is

T_R = \frac{1}{2} \sum_{i=1}^{n} \lambda_i \overline{X_i^2}    (10)
For the system with two priority levels, the average response time is

R = \frac{\lambda_1 R_1 + \lambda_2 R_2}{\lambda_1 + \lambda_2}    (12)
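Combining Eqs. (8)-(10) with Little's law (q_{wi} = \lambda_i W_i) yields the familiar closed forms for non-preemptive priority waiting times. The sketch below assumes exponential service, so \overline{X_i^2} = 2/\mu_i^2, and ignores the finite buffer for simplicity; it is an illustration, not the authors' simulation program.

```python
def priority_wait(lam1, lam2, mu1, mu2):
    """Mean waiting times under non-preemptive priority, a sketch of
    Eqs. (8)-(10) assuming exponential service (E[X_i^2] = 2/mu_i^2)
    and infinite buffers."""
    # Eq. (10): mean residual service time seen by an arrival
    TR = 0.5 * (lam1 * 2 / mu1**2 + lam2 * 2 / mu2**2)
    rho1 = lam1 / mu1
    rho2 = lam2 / mu2
    # Substituting q_wi = lam_i * W_i into Eqs. (8) and (9) gives:
    W1 = TR / (1 - rho1)
    W2 = TR / ((1 - rho1) * (1 - rho1 - rho2))
    return W1, W2
```

For example, with \lambda_1 = 2, \lambda_2 = 3 and \mu_1 = \mu_2 = 10 msg/s, the low-priority class waits twice as long as the high-priority class, illustrating how the scheme shelters non-INVITE traffic.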
Fig. 4. The performance comparison between three priority queues and two priority queues
The simulation parameters are set as follows: the average arrival rate of messages \lambda, i.e., the attacking factor, takes values between 0 msg/s and 10 msg/s, and the average service rate of SIP transaction processing is 10 msg/s. Running the simulation program, we observe clearly that as \lambda grows, the average response time of messages grows roughly exponentially. The attacking factor therefore has a huge effect on the response time of the system. When the attacking factor exceeds 8, the response time at the target node increases greatly; the target node may even become unable to respond to any message, so the system collapses. To compare the three-level queue and the two-level queue accurately, we obtained the curves of the two schemes under the same parameters, shown in Fig. 4. The comparative curves show clearly that the response time of the three-level queue is far better than that of the two-level queue.
5 Conclusion
Although the M/M/1/K model established in this paper addresses only the INVITE flooding attack, it captures the advantages of detection-based defense mechanisms to a certain extent and is easy to implement. Each existing SIP DoS attack defense mechanism has its own advantages and disadvantages; the key to analyzing the performance of a scheme is to establish an effective mathematical analysis model. This paper verified the effectiveness of the proposed DoS attack defense scheme by simulating the ARS model based on queuing theory. In future research, better results should be obtainable by coupling the scheme with suitable INVITE-message discarding algorithms.
References
1. Rosenberg, J., Schulzrinne, H., Handley, M., et al.: SIP: Session Initiation Protocol. RFC
3261 (2002)
2. Yin, Q.: The research on SIP DoS attack defense mechanism. The Journal of Chongqing University of Posts and Telecommunications 20(4), 471–474 (2008)
3. Zhang, G., Fischer-Hübner, S., Ehlert, S.: Blocking attacks on SIP VoIP proxies caused by external processing. Telecommunication Systems 45(1), 61–76 (2009)
4. Ormazabal, G., Nagpal, S., Yardeni, E., Schulzrinne, H.: Secure SIP: A Scalable
Prevention Mechanism for DoS Attacks on SIP Based VoIP Systems. In: Schulzrinne, H.,
State, R., Niccolini, S. (eds.) IPTComm 2008. LNCS, vol. 5310, pp. 107–132. Springer,
Heidelberg (2008)
5. El-moussa, F., Mudhar, P., Jones, A.: Overview of SIP Attacks and Countermeasures. In:
Weerasinghe, D. (ed.) ISDF 2009. LNICST, vol. 41, pp. 82–91. Springer, Heidelberg
(2010)
Research on the Use of Mobile Devices in Distance EFL
Learning
Fangyi Xia
Abstract. This research focuses on exploring the possible use of mobile devices
in distance learning of English as a Foreign Language. Firstly, it studies mobile
devices application in language teaching. Secondly, it analyzes the current
problems of distance EFL learners in China. Then, it makes suggestions on how
to use mobile devices in distance EFL learning in China. Finally, it finds that
mobile learning could partly address some of the current problems of distance
EFL learners, such as lack of constant exposure to learning contents and lack of
big chunk of time for study and revision etc. Some effective ways are to
represent learning contents on mobile devices, create an online submission
system for oral and written assignment and design a mobile interactive quiz
system.
1 Introduction
With the rapid development of wireless mobile technology, mobile, portable and handheld devices such as mobile phones, personal digital assistants (PDAs), and MP3/MP4 players have become very popular in people's daily life, and some have functions as powerful as personal computers. They are now regarded as ideal tools for students with little time, because such students can access increasingly sophisticated content no matter where they are, whenever they have time to study.
Research in the field of mobile learning has been on the rise in recent years. Many studies and projects have explored the possibilities of mobile phones for educational use. As Mohamed Ally believes, mobile learning is transforming the delivery of education and training. This study finds that mobile learning has growing significance in distance education. The slogan of COL's (Commonwealth of Learning) Lifelong Learning for Farmers initiative says: "Mobile phones: not just a tool for talking, but also a tool for learning." However, in China, the country with the world's greatest number of cell phones, most people still use mobile phones only as a tool for talking or recreation, not as a tool for learning. Mobile learning enables learning anywhere at any time, which matches the mission of the Radio & TV Universities (RTVUs), the largest provider of open and distance learning (ODL) in China. Nevertheless, mobile learning has not found widespread use at RTVUs, apart from a few studies focusing on model exploration. This research will explore the use of
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 105–110, 2011.
Springer-Verlag Berlin Heidelberg 2011
106 F. Xia
present, most of the distance learners of EFL in China are in-service adults and part-time students, so they encounter more difficulties than full-time students at regular universities.
Most of the students at Radio & TV universities in China are working adults who play many roles at once and have many conflicting tasks to deal with. They have to take care of work, family, study and so on, so they are always pressed for time. They cannot always give top priority to study or sit down to learn English for even one whole hour without interruption. They have only sporadic bits of time every day that could be used for study, and many of them do not even know how to make full use of such fragments, so they end up with no time left for study, which constitutes a great impediment to language learning.
Radio & TV universities have a hybrid delivery mode, partly online and partly in the classroom. In the limited hours of classroom teaching, tutors bombard students with a huge amount of information and content which the students cannot digest at the time. Repeated exposure to the learning materials is needed to digest and internalize what was learned in the classroom, but the learning materials are in textbooks or online. The working adult students are often on the move, traveling from home to workplace, from workplace to their children's schools, or from city to city on business. It is almost impossible for them to carry those thick and heavy books about with them, and access to a computer and the Internet is not always available to every one of them everywhere. Without constant exposure to the learning contents, what they have learned in the classroom slips out of their memory quickly. As a result, many of the students are constantly in the agony of learning and then forgetting, over and over.
A large part of the students are poorly motivated. They do not want to work hard to acquire knowledge so as to work better and live better, or to give themselves a sense of achievement or satisfaction. Their only purpose in registering with the university is to get a diploma, which could be helpful for promotion or other ends. Thus they do not care about what they can learn in the process, but focus their attention on examinations; they like exam-oriented teaching very much.
These problems together explain why distance EFL learning is not as effective as desired. Something has to be done to solve some of them so as to enhance the delivery of distance English language courses.
Reading materials such as the passages and articles from the textbook should be redesigned in formats that can be read on mobile devices, such as plain text (.txt). Listening materials should be provided in MP3 or other formats so that they can be retrieved and played on mobile devices. Focal language points, such as words, phrases, collocations and sentence structures, could be packaged in small chunks so that students can use bits of time to review and recite them. In this way, students can store these learning materials in their phones and access them anywhere at any time.
Since mobile devices are with their users almost round the clock, this will increase distance learners' exposure to the learning materials, thus enhancing their language sensitivity, which is of great help to language learning.
An assignment submission system should be designed for students to submit their oral and written assignments online. This system should be very easy to use, similar to an email system. Students can read the assignment requirements online, do the assignment, and submit it to the system, all of which could be done either on their computers or on their mobile phones, whichever is handy. For oral assignments, students can record their own voice responses to the oral tasks and send them back to the teacher for marking, evaluation and feedback. Tutors could retrieve and download students' submitted assignments through a computer or a mobile phone, mark them, and submit their feedback to the system.
An interactive quiz system should be designed to offer a range of interactive exercises and diagnostic quizzes that help with examination preparation. The quizzes should be delivered in small chunks, and students should get immediate feedback after submitting their answers. The system should record the students' points. It would be more appealing to students if it were game-like; students could earn bonus points toward their continuous assessment by answering the quiz questions.
Students can make full use of their bits of time to review what they have learned from the textbook and tutorials and test themselves even when they have only five minutes. The next time a learner accesses the quiz system, it will start where it stopped last time.
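A resume feature of this kind can be sketched very simply; the file-based storage and the field names below are hypothetical illustrations, not part of the system the paper proposes.

```python
import json
import os

# Persist each learner's last question index and accumulated points so
# that a quiz session can restart where it stopped last time.
def save_progress(path, learner_id, question_index, points):
    state = {}
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
    state[learner_id] = {"question": question_index, "points": points}
    with open(path, "w") as f:
        json.dump(state, f)

def load_progress(path, learner_id):
    # Unknown learners start at the first question with zero points.
    if not os.path.exists(path):
        return {"question": 0, "points": 0}
    with open(path) as f:
        state = json.load(f)
    return state.get(learner_id, {"question": 0, "points": 0})
```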
5 Conclusion
China has the greatest number of mobile phone users in the world, but mobile phones' features and capabilities are not fully explored and utilized, which leads to a great waste of mobile resources. While many countries are researching the use of mobile devices in education, the largest provider of ODL in China, the Radio & TV universities, should take the initiative to research, explore and develop mobile learning technology for use in delivering its distance courses. Only by doing so can they live up to their promise of learning anywhere at any time.
While much previous research has focused on vocabulary learning and practice, this research shifts the focus to review and examination preparation. It finds that the portability of mobile devices could be used to address some of the current problems of distance EFL learners, such as lack of constant exposure to the learning contents and lack of large blocks of time. Mobile learning could become a daily reality for distance EFL learners through learning-content representation on mobile devices, an online submission system for oral and written assignments, and a mobile interactive quiz system.
Mobile learning enables students to review what they have learned and prepare for the exams in a more leisurely, relaxed and effective way. It is expected that students would be better prepared for the exams, more confident, and likely to achieve better results than otherwise.
References
1. Ally, M. (ed.): Mobile Learning: Transforming the Delivery of Education and Training (2009)
2. Brown, E. (ed.): Mobile Learning Explorations at the Stanford Learning Lab. Speaking of
Computers, vol. 55. Board of Trustees of the Leland Stanford Junior University, Stanford
(2001)
3. Cavus, N., Ibrahim, D.: m-Learning: An Experiment in Using SMS to Support Learning New English Language Words. British Journal of Educational Technology 40(1), 78–91 (2009)
4. Cui, G., Wang, S.: Adopting Cell Phones in EFL Teaching and Learning. Journal of Educational Technology Development and Exchange 1(1), 69–80 (2008)
5. Levy, M., Kennedy, C.: Learning Italian via Mobile SMS. In: Kukulska-Hulme, A.,
Traxler, J. (eds.) Mobile Learning: A Handbook for Educators and Trainers. Taylor and
Francis, London (2005)
6. Prensky, M.: What Can You Learn From A Cell Phone? Almost Anything (2005),
Information on http://innovateonline.info/pdf/vol_issue5/
What_Can_You_Learn_from_a_Cell_PhoneAlmostAnything.pdf
7. Thornton, P., Houser, C.: Using Mobile Phones in English Education in Japan. Journal of Computer Assisted Learning 21, 217–228 (2005)
Flood Risk Assessment Based on the Information
Diffusion Method
Li Qiong
1 Introduction
Flood disasters are more and more frequent in our country. In ordinary flood risk assessment, probability statistics is the main tool used to estimate the exceedance probability of hydrological variables. This method has the advantage that its theory is mature and its application is easy, but when it comes to solving practical problems its feasibility and reliability suffer because fuzzy uncertainty is not considered. When faced with a small-sample problem, results based on classical statistical methods can be very unreliable. In fact it is rather difficult to collect long sequences of extremum data, and the sample is often small. We can therefore use a fuzzy mathematical method for comprehensive disaster risk evaluation. This paper uses information diffusion, a fuzzy mathematics method, to establish a flood risk assessment model for small samples, and then applies it successfully to flood risk analysis in Henan Province.
2 Information Diffusion
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 111–117, 2011.
Springer-Verlag Berlin Heidelberg 2011
112 L. Qiong
f_i(u_j) = \exp\left[ -\frac{(x_i - u_j)^2}{2h^2} \right], \quad u_j \in U    (1)
h = \begin{cases} 0.8146(b-a), & n = 5;\\ 0.5690(b-a), & n = 6;\\ 0.4560(b-a), & n = 7;\\ 0.3860(b-a), & n = 8;\\ 0.3362(b-a), & n = 9;\\ 0.2986(b-a), & n = 10;\\ 2.6851(b-a)/(n-1), & n \ge 11 \end{cases}    (2)
Let

C_i = \sum_{j=1}^{m} f_i(u_j)    (3)

We obtain a normalized information distribution on U determined by x_i, shown in Eq. (4):

\mu_{x_i}(u_j) = \frac{f_i(u_j)}{C_i}    (4)
For each monitoring point u_j, summing all normalized information, we obtain the information gain at u_j that comes from the given sample X. The information gain is shown in Eq. (5):

q(u_j) = \sum_{i=1}^{n} \mu_{x_i}(u_j)    (5)
q(u_j) means that, by the information diffusion technique, we infer there are q(u_j) sample points (generally not an integer) in terms of statistical averaging at the monitoring point u_j. Obviously q(u_j) is usually not a positive integer, but it is certainly a number not less than zero. Let

Q = \sum_{j=1}^{m} q(u_j)    (6)

where Q is the sum of the sample sizes of all the q(u_j). Theoretically Q = n, but due to numerical calculation error there is a slight difference between Q and n. Therefore we can estimate the frequency at u_j as

p(u_j) = \frac{q(u_j)}{Q}    (7)

The frequency value can be taken as the estimate of the probability. The exceedance probability is

P(u_j) = \sum_{k=j}^{m} p(u_k)    (8)
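The whole estimation procedure of Eqs. (1)-(8) can be sketched in a few lines of NumPy; this is an illustrative implementation of the steps above, not the authors' code.

```python
import numpy as np

def exceedance_probability(sample, u):
    """Normal information diffusion estimate of the exceedance
    probability P(u_j) over monitoring points u (Eqs. (1)-(8))."""
    x = np.asarray(sample, dtype=float)
    u = np.asarray(u, dtype=float)
    n = len(x)
    a, b = x.min(), x.max()
    # Eq. (2): diffusion coefficient h from the sample size
    coef = {5: 0.8146, 6: 0.5690, 7: 0.4560, 8: 0.3860, 9: 0.3362, 10: 0.2986}
    h = coef[n] * (b - a) if n in coef else 2.6851 * (b - a) / (n - 1)
    # Eq. (1): diffuse the information of each observation over u
    f = np.exp(-((x[:, None] - u[None, :]) ** 2) / (2 * h ** 2))
    # Eqs. (3)-(4): normalize so each observation carries unit information
    mu = f / f.sum(axis=1, keepdims=True)
    # Eqs. (5)-(7): information gain q(u_j) and frequency estimate p(u_j)
    q = mu.sum(axis=0)
    p = q / q.sum()
    # Eq. (8): exceedance probability, summing p from u_j upward
    return np.cumsum(p[::-1])[::-1]
```

Called with the 32 disaster degree values and the 41 monitoring points of the application below, this yields the exceedance probability curve of Figure 1.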
3 Application Example
To raise the grade resolution of flood disaster loss, a new model, the projection pursuit (PP) model [5], is used for evaluating the grade of flood disaster, and the flood degree values are calculated as in Table 2.
Based on the disaster degree values of the 32 samples (see Table 2), the sample point set is X = {x_1, x_2, ..., x_32}. The universe of discourse of the disaster degree values, namely the monitoring point set, is taken as U = {u_1, u_2, ..., u_41} = {0, 0.1, 0.2, ..., 4.0}.
Fig. 1. The exceedance probability curves of flood to disaster degree value based on information
diffusion and frequency analysis
The result in Figure 1 illustrates the risk estimation, i.e., the probability of exceeding a given disaster degree value. From Figure 1 we know the risk estimate is 0.2745 when the disaster index is 3.5; in other words, in Henan Province, floods exceeding a degree value of 3.5 (extreme floods) occur every 3.64 years. Similarly, the probability of floods exceeding degree 2.5 (large floods) is 0.5273, meaning Henan Province suffers floods exceeding that intensity every 1.90 years. This indicates the serious flood situation in Henan Province in terms of both frequency and intensity. In Figure 1 the estimated curve is compared with the frequency analysis based on the results of Jin et al. [5]. Figure 1 shows that our results are consistent with those of frequency analysis, which also means that normal information diffusion is useful for analyzing the probabilistic risk of flood disaster. Because flood disasters are fuzzy events with incomplete data, the proposed method is better than the frequency method for analyzing flood disaster risk.
4 Conclusion
Floods occur frequently in China and cause great property losses and casualties. In order to implement compensation and disaster reduction plans, the losses caused by flood disasters are critically important information for flood disaster managers. This study develops a method of flood disaster risk assessment based on the information diffusion method, and it can easily be extended to other natural disasters. Tests show that the method is reliable and the results are consistent with the real values.
References
1. Huang, C.F.: Integration degree of risk in terms of scene and application. Stochastic Environmental Research and Risk Assessment 23(4), 473–484 (2009)
2. Huang, C.F.: Information diffusion techniques and small-sample problem. Internat. J. Information Technol. Decision Making 1(2), 229–249 (2002)
3. Huang, C.F.: Risk Assessment of Natural Disaster: Theory & Practice, pp. 86–98. Science Press, Beijing (2005)
4. Huang, C.F., Shi, Y.: Towards Efficient Fuzzy Information Processing - Using the Principle of Information Diffusion. Physica-Verlag (Springer), Heidelberg (2002)
5. Jin, J.L., Zhang, X.L., Ding, J.: Projection Pursuit Model for Evaluating Grade of Flood Disaster Loss. Systems Engineering - Theory & Practice 22(2), 140–144 (2002)
6. Chen, S.Y.: Theory and model of variable fuzzy sets and its application. Dalian University of
Technology Press, Dalian (2009)
7. Chen, S.Y.: Fuzzy recognition theory and application for complex water resources system
optimization. Jilin University Press, Changchun (2002)
8. Chen, S.Y.: Theory and model of engineering variable fuzzy sets - mathematical basis for fuzzy hydrology and water resources. Journal of Dalian University of Technology 45(2), 308–312 (2005)
9. Jin, J.L., Jin, B.M., Yang, X.H., Ding, J.: A practical scheme for establishing grade model of flood disaster loss. Journal of Catastrophology 15(2), 1–6 (2000)
Dielectric Characteristics of Chrome Contaminated Soil
1 Introduction
At present, nearly all chromium-contaminated soil is due to chromium residue, a hazardous waste containing Cr6+ generated in the process of chromium salt production. The amount of untreated chromium residue is still up to 400 million tons, spread across more than 20 provinces of the country [1]. Some chromium salt enterprises closed down after ceasing chromium production, leaving the chromium waste residue piled directly in the open air without any windproof, waterproof or leakproof facilities; this is how most of the domestic chromium-contaminated sites formed. Some contaminated sites are near surface water, and the leached Cr6+ has caused serious groundwater, surface water and soil pollution [1-3].
The first step in restoring contaminated sites is to monitor the pollution accurately. Currently, the basic way to monitor contaminated soil and groundwater is to collect samples for physical and chemical analysis; however, this traditional
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 118–122, 2011.
Springer-Verlag Berlin Heidelberg 2011
method consumes much time and incurs high cost [4-6]. There are obvious shortcomings: the number of samples is limited, so it is difficult to achieve a thorough understanding of the pollution conditions; geological drilling can destroy the original distribution and concentration of pollutants in the ground and easily make the pollutants migrate to deeper layers; and the monitoring period is very long, unsuitable for long-term monitoring.
Because of the above problems, a rapid and effective monitoring method needs to be developed for chromium-contaminated sites. Geophysical studies show that the dielectric properties, as very important parameters, can be measured rapidly and non-destructively [7], so the dielectric properties of contaminated soil may be used to evaluate underground pollution conditions [8]. The dielectric constant, also known as the permittivity, is a dimensionless factor that characterizes insulating material properties; it measures the degree of charge polarization in an external electric field. Previous studies of the dielectric properties of soil have investigated the relationship between dielectric properties and water content or soil contaminants [9-11].
There is no report on the complex dielectric constant of chromium-contaminated soil. This study therefore investigates the relationship between the complex dielectric constant and the physicochemical properties (such as water content, pollutant concentration and void ratio) of chromium-contaminated soil at different frequencies.
Random distribution methods were used in soil sampling. The main pollutants at the contaminated sites are Na2CrO4 and Na2Cr2O7. To simulate the contaminated soil at the sites, Na2CrO4 was added to unpolluted soil to make the soil samples. Each soil sample was about 100 g after quartering. Table 1 shows the physical properties of the soil samples, and Table 2 shows the leachate concentrations of the soil samples.
Table 1. Physical properties of soil sample (columns: water content w/%, pH, CEC/(cmol/kg), TOC/%, particle fractions sand/silt/clay, density in g/cm3, temperature T)
The pollutant concentration, water content and frequency are the main factors affecting the complex dielectric constant. A series of soil samples was prepared with different concentrations of chromium contamination (50, 100, 150, 200, 500 and 1000 mg/kg) and different water contents (8%, 15% and 25%).
In the frequency range of 10 MHz to 200 MHz, the real and imaginary parts of the soil samples' dielectric constants are measured by an Agilent E5061A microwave network analyzer.
3 Experimental Results
Figure 1 shows the complex dielectric constants of soil samples with different water contents. Both the real and imaginary parts decrease as the frequency increases. When the frequency is less than 50 MHz, the real and imaginary parts decline sharply; above 50 MHz, the downward trend is smooth. The imaginary part is strongly affected by water content, while the real part is not affected significantly.
Based on the above analysis, and considering the influence of water content, void ratio and other factors on the soil dielectric constant, the real part can be represented by extending Topp's formula [9]:

\theta_v = m\varepsilon' + n\varepsilon'^2 + p\varepsilon'^3 + k\rho_b    (1)

\varepsilon'' = K_1 \theta_v w + K_2    (3)
Therefore, taking both the real part and the imaginary part into account, the dielectric method can be used in the evaluation of chromium-polluted soil.
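As a sketch of how the coefficients m, n, p and k of Eq. (1) could be fitted from measurements, the least-squares example below uses synthetic data; the coefficient values are invented for illustration, since the paper does not list them here.

```python
import numpy as np

# Illustrative least-squares fit of the extended-Topp form of Eq. (1),
# theta_v = m*eps + n*eps**2 + p*eps**3 + k*rho_b, on synthetic data.
# The "true" coefficients below are made up for demonstration only.
rng = np.random.default_rng(0)
eps = rng.uniform(5, 30, 50)          # real part of dielectric constant
rho_b = rng.uniform(1.2, 1.8, 50)     # dry bulk density, g/cm^3
true = (0.03, -5e-4, 4e-6, 0.01)      # hypothetical (m, n, p, k)
theta = true[0]*eps + true[1]*eps**2 + true[2]*eps**3 + true[3]*rho_b

# Design matrix: one column per term of Eq. (1)
A = np.column_stack([eps, eps**2, eps**3, rho_b])
m, n, p, k = np.linalg.lstsq(A, theta, rcond=None)[0]
```

With real laboratory data (water contents and chromium concentrations as in the measurement series above), the same regression would estimate the coefficients of Eqs. (1) and (3).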
122 Y. Sun et al.
4 Conclusions
Under different water contents and pollution concentrations, both the real and imaginary parts of the soil samples' dielectric constants show significant dispersion. The complex dielectric constant changes dramatically when the frequency is below 50 MHz; therefore 10-50 MHz is the suitable frequency range for dielectric constant measurement.
Since the real part is strongly affected by the water content, the imaginary part is necessary for evaluating chromium pollution in the soil, and the complex dielectric method can be used in chromium pollution monitoring.
References
1. Gu, C., Shan, Z., Wang, R.: Investigation on pollution of chromic slag to local soil. Mining Safety & Environmental Protection 32, 18–20 (2005) (in Chinese)
2. Li, J., Zhu, J., Xie, M.: Chromium and health. Trace Elements Science 4, 8–10 (1997) (in Chinese)
3. Zhang, H., Wang, X., Chen, C.: Study on the polluting property of chrome residue contaminated sites in plateau section. Chinese Journal of Environmental Engineering 4, 915–918 (2010) (in Chinese)
4. Cheng, Y., Yang, J., Zhao, Z.: Status and development of environmental geophysics. Progress in Geophysics 22, 1364–1369 (2007) (in Chinese)
5. Li, Z., Nai, C., Nian, N.: Application and prospect of physical exploring technology for solid waste. Environmental Science & Technology 29, 93–95 (2006) (in Chinese)
6. Kaya, A., Fang, H.Y.: Identification of contaminated soils by dielectric constant and electrical conductivity. Journal of Environmental Engineering 123, 169–177 (1997)
7. Campbell, J.E.: Dielectric properties and influence of conductivity in soils at one to fifty megahertz. Soil Science Society of America Journal 54, 332–341 (1990)
8. Thevanayagam, S.: Environmental soil characterization using electric dispersion. In: Proceedings of the ASCE Specialty Conference Geoenvironment 2000, pp. 137–150. ASCE, New York (2000)
9. Topp, G.C., Davis, J.L., Annan, A.P.: Electromagnetic determination of soil water content: measurements in coaxial transmission lines. Water Resources Research 16, 574–582 (1980)
10. Dobson, M.C., Ulaby, F., Hallikainen, M., et al.: Microwave dielectric behavior of wet soil. Part II: Dielectric mixing models. IEEE Transactions on Geoscience and Remote Sensing 23, 35–46 (1985)
11. Arulanandan, K.: Electrical dispersion in relation to soil structure. Journal of the Soil Mechanics and Foundations Division 99, 1113–1133 (1973)
12. Francesco, S., Giancarlo, P., Raffaele, P.: A strategy for the determination of the dielectric permittivity of a lossy soil exploiting GPR surface measurements and a cooperative target. Journal of Applied Geophysics 67, 288–295 (2009)
Influences of Climate on Forest Fire
during the Period from 2000 to 2009
in Hunan Province
Abstract. Climate change affects the dynamics of forest fire. This paper analyzes the influence of climate on forest fire in Hunan Province using multiple linear regression and correlation analysis. The results show that the average affected forest area and the forest loss caused by fire have a significant linear relationship with climate and can be expressed by multiple regression equations. From the meteorological factors, we can forecast the trend of forest fire development in most areas of Hunan. During the special freezing weather, ice pressure snapped trees and created a large amount of fuel, which led to a remarkable increase in forest fire occurrence. The correlation coefficient between the ice thickness during the freezing disaster and the frequency of forest fire in the March after the disaster reached 0.798, a significant correlation.
1 Introduction
Climate is an important factor in the dynamics of forest fire. Climate change can alter the frequency of fire occurrence, the burned area and the fire cycle, and can change the structure and function of the forest landscape [1~3]. With population growth, the influence of human activities on forest fire is increasing, but climate is still the dominant factor in forest fire dynamics [4,5]. Since the early 1900s, the global average surface temperature has increased by 0.74 °C, and the rate of warming over the past 50 years is almost twice that over the past 100 years [6]. As the globe warms, extreme weather events such as El Niño episodes, droughts, floods, thunderstorms, hail, storms, high temperatures and sandstorms occur more often and more intensely. Under extreme weather conditions, the behavior of forest fire is key to research on the relationship between climate and forest fire. Scientists in various countries have studied the relationship between abnormal climate and forest fire; the results show that, along with global warming, the accelerating
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 123–130, 2011.
Springer-Verlag Berlin Heidelberg 2011
124 Z. Han, D. Tian, and G. Zhang
frequency of extreme weather has changed forest fire danger and increased the possibility of forest fires, including extraordinarily serious ones, in extreme weather areas [7~12].
The study of the relationship between climate change and forest fire has great practical significance for forestry production and is an important basis for national and local long-term forest management strategies [13].
Hunan Province has abundant forest resources and frequent forest fires. In recent years the annual temperature has risen, rainfall has declined gradually and rare freezing weather has occurred in Hunan. These factors have increased forest fires and reduced industrial and agricultural production, seriously threatening people's lives and property. This paper takes Hunan Province as the research area, studies and quantifies the relations between various meteorological factors and forest factors, and explores the occurrence and development of fires in different counties and cities of Hunan under climatic influence, which is of great significance for the proper management of forest fire.
climate, concentrated rainfall and rich sunlight resources. The yearly mean temperature is around 16-18.5 °C, the annual average duration of sunshine is 1250-1850 hours, and the yearly precipitation is 1200-1700 mm.
Hunan covers a total area of 211,875 square kilometers and is characterized by
abundant mountains and hills. It is an important agricultural province in southern
China. Hunan is troubled by frequent forest fires, with 18,782 fires during 2000–2009.
Owing to the large agricultural population, productive fire is the main cause of forest
fire, accounting for 62% of the total number; unproductive fire accounts for 31%; and
lightning and other natural causes account for few fires, only 20 during 2000–2009.
Winter and spring are the forest fire seasons; fires occur most frequently from
February to April.
vector, and assume that Y and X have a linear regression relationship; namely, at the
point X, the expectations (means) of Y are:

y1 = β01 + β11x1 + β21x2 + … + βm1xm,
y2 = β02 + β12x1 + β22x2 + … + βm2xm, (2)
⋮
yp = β0p + β1px1 + β2px2 + … + βmpxm.

The formula is called the linear regression equation of Y on X [13].
Taking each of the 101 counties of Hunan Province as a unit, we selected four
indicators as dependent variables: the frequency of forest fire, the average area of
affected forest (hm²/time), the average forest tree loss (m³/time), and the average
economic loss (ten thousand RMB/time); and seven indicators as independent
variables: annual average temperature, annual precipitation, annual relative humidity,
annual average wind speed, annual sunshine hours, stumpage stock and regional
altitude. The paper extracted 280 sets of data from the statistical data and performed
multiple linear regression analysis, regression effect testing and prediction error
testing.
There are some exceptions in the statistical data, such as the incidental data (forest
fires increased sharply) during the special freezing climate period and the data from
areas with very few fires (average annual number of forest fires of 0 to 3), including
Hengyang, Changsha, Xiangyin, Yuanjiang, Anxiang and Huarong. As these data
cannot reflect the inherent relationship between meteorological factors and forest fire
factors under normal climatic conditions, and thus affect the accuracy of the model,
they were removed beforehand.
In early 2008, Hunan was struck by a freezing disaster, and the number of forest fires
increased dramatically afterward. Climate data such as temperature, relative humidity
and ice thickness during the disaster, together with the regional forest fire data for
February and March, were collected and studied. Correlation and comparative
analyses of the February and March forest fire data against the meteorological data
during the disaster were carried out respectively.
The frequency of forest fire, the average burned forest area, the average forest loss
and the average economic loss are denoted Y1, Y2, Y3 and Y4 respectively; x1, x2,
x3, x4, x5, x6 and x7 stand for annual average temperature, annual precipitation,
annual sunshine hours, annual average relative humidity, annual average wind speed,
altitude and stumpage stock. The relationships between forest fire factors and
meteorological and environmental factors calculated by multiple linear regression
analysis are shown as follows:
(1) Regression equation of forest fire number and meteorological factors:

Y1 = 42.6291 + 5.433x1 − 0.0075x2 − 0.008x3 + 0.2502x4 − 4.9669x5 − 0.0046x6 (3)
The average difference between the predicted and the actual sampling data for the
model of average affected forest area and meteorological factors is 0.183 (Table 1).
The mean difference between the predicted and the actual sampling data for the
model of average forest loss and meteorological factors is 7.5242 (Table 2). The
results show that the predicted and actual values are close to each other, and
regression models (4) and (5) can accurately predict the average affected forest area
and the average forest loss.
Table 1. Validation of regression equation for average area of forest damage and meteorological
data
Table 2. Validation of regression equation for average forest loss and meteorological data
In recent years, under the influence of global climate change, the frequency of special
climate events has been increasing in Hunan, which has a significant impact on forest
fire. In particular, in early 2008 (from 13 January to 3 February), low-temperature
freezing weather covered a large area of Hunan, causing the probability of forest fire
to increase (Fig. 1).
Correlation analysis between the fire frequency of the affected areas and the
temperature during the ice disaster in February and March 2008 gives a correlation
coefficient between each region's fires and the temperature of 0.095 in February and
0.352 in March. At the same time, the historical data show that the minimum
temperature in the Changsha area reached −9.5 °C on 31 January 1969 and −11.3 °C
on 9 February 1972, and the minimum temperature of Chenzhou reached −9 °C on
11 January 1955 [14]. According to the historical record, the temperature of each
district in those periods was below that of the 2008 disaster, yet neither such
large-scale ice freezing nor an increase in forest fire frequency caused by large-scale
forest damage occurred. This indicates that temperature alone is not the key element
in the tree losses caused by wildfires after the ice disaster.
Fig. 1. The number of forest fires in previous years (monthly average) and in 2008 (each month)
The results of the correlation analysis between the fire frequency of the affected areas
and the relative humidity in February and March 2008 show that the correlation
coefficient between each region's fires and the relative humidity is 0.171 in February
and 0.268 in March. This correlation is not significant either.
The sharp rise in the frequency of forest fire after the freezing disaster is attributed to
the ice crystals that formed and accumulated under the special climate conditions of
the freeze disaster. These mechanically wrecked branches and trunks, forming large
amounts of combustible fuel. Ice crystals are created under the specific climate
condition of nearly saturated relative humidity combined with moderately low
temperature over a long time. Therefore, we can study the influence of climate on
forest fire during the freeze disaster by analyzing the relationships among ice crystals,
climate and forest fire.
Correlation analysis between the ice thickness of each area and the climate conditions
shows that the correlation coefficient between the ice thickness and the number of
days with relative humidity ≥ 85% is 0.59; with relative humidity ≥ 85% and
temperature below 0 °C, 0.648; and with relative humidity ≥ 85% and temperature of
−1 to 0 °C, 0.75.
The number of forest fires in February and March 2008 and the thickness of ice
during the freezing disaster were used to analyze their correlation respectively
(Tab. 3). The results show that in February the correlation coefficient between the
number of forest fires and the ice thickness during the freezing disaster is 0.596,
while in March it is 0.798; the correlation is significant.
The above analyses indicate that the freezing disaster influences forest fire mainly
through the formation of ice crystals under an appropriate climatic condition (air
relative humidity ≥ 85% and temperature kept at −1 to 0 °C). Under this condition,
the longer the climatic period lasts, the thicker the ice becomes. Ice accumulation
damages trees and forms combustible fuel, and the probability of forest fire then
increases correspondingly; the ice thickness and the number of forest fires after the
disaster are positively correlated.
Influences of Climate on Forest Fire during the Period from 2000 to 2009 129
Table 3. Thickness of ice accumulated around electric wires and the number of forest fires
during the 2008 ice disaster

Ice thickness (mm)          341   335   250   174   148    34
Forest fires in February     70    65    36    55    48    59
Forest fires in March       376   167   232   190   102    46
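The correlation coefficients reported above are Pearson coefficients and can be computed directly from the Table 3 columns. The March value reproduces the reported 0.798; the February value computed from these six columns alone comes out lower than the reported 0.596, so the paper's February figure presumably draws on regions beyond those shown in Table 3.

```python
import numpy as np

# The three rows of Table 3 (six regions; region names are not given in the source).
ice_mm    = np.array([341, 335, 250, 174, 148, 34], dtype=float)
fires_feb = np.array([ 70,  65,  36,  55,  48, 59], dtype=float)
fires_mar = np.array([376, 167, 232, 190, 102, 46], dtype=float)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    da, db = a - a.mean(), b - b.mean()
    return float((da * db).sum() / np.sqrt((da**2).sum() * (db**2).sum()))

r_feb = pearson(ice_mm, fires_feb)
r_mar = pearson(ice_mm, fires_mar)
print(f"ice vs February fires: r = {r_feb:.3f}")
print(f"ice vs March fires:    r = {r_mar:.3f}")   # reproduces the reported 0.798
```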
After the freezing disaster, the number of forest fires in Hunan in February was
1,030; it rose sharply to 2,744 by March. According to the correlation analysis
between the ice thickness and the forest fire numbers in February and March, the
impact of the freezing disaster on the number of forest fires is delayed. Counting the
forest fires of February and March by ten-day periods: from 1 to 10 February the
number of forest fires was 85; from 11 to 20 February and from 21 to 29 February,
472 and 473 respectively. From 1 to 10 March, the number of forest fires reached
2,300, 84% of the forest fires in March. It can be concluded that 1–10 March was the
peak period of forest fire affected by the freezing disaster, and the lag effect is about
one month. After the freezing disaster, the climate warmed and human activities
gradually resumed; coupled with the large amount of combustible fuel formed during
the freezing disaster period, a large number of forest fires finally occurred.
Fitting the ice thickness against the number of forest fires in March in the affected
areas during the ice storm, with y the number of fires and x the ice thickness, the
fitted curve is y = 2.9674x^0.7672 at a confidence level of 95%, with R² = 0.8241.
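The fitted power curve and its R² can be reproduced by linear least squares in log-log space on the Table 3 ice-thickness and March fire counts; a sketch:

```python
import numpy as np

# Fit y = a * x**b via ln y = ln a + b ln x, using the Table 3 data.
x = np.array([341, 335, 250, 174, 148, 34], dtype=float)   # ice thickness, mm
y = np.array([376, 167, 232, 190, 102, 46], dtype=float)   # March fires

b, ln_a = np.polyfit(np.log(x), np.log(y), 1)   # slope, intercept
a = np.exp(ln_a)

resid = np.log(y) - (ln_a + b * np.log(x))
r2 = 1.0 - resid.var() / np.log(y).var()         # R^2 in log space

print(f"y = {a:.4f} * x^{b:.4f}, R^2 = {r2:.4f}")
```

Running this recovers coefficients close to the paper's y = 2.9674x^0.7672 and R² = 0.8241.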
4 Conclusion
As more than 93% of fires in Hunan are man-made, there is no significant linear
relationship between the climate and the number of forest fires. However, the average
forest fire affected area and the average forest fire loss have a linear relationship with
meteorological factors, which can be expressed by multiple regression equations.
According to the meteorological factors, we can forecast the trend of forest fire
development under general climatic conditions in most areas of Hunan. The model
does not, however, apply to areas with very few forest fires or to frozen areas.
The special freezing climate largely affects forest fire through the ice crystals formed
under the appropriate climate condition. Ice accumulation damages trees and forms
combustible fuel, and the probability of forest fire then increases correspondingly; the
number of forest fires finally surged greatly in February and March after the freezing
disaster. The correlation coefficient between the thickness of ice during the freezing
disaster and the number of forest fires in March is 0.798.
The impact of the freezing disaster on forest fire is delayed, with a delay period of
about one month. Therefore, using this delay time to reinforce the removal of
combustible fuel and standardize forestry activities after the disaster is important and
effective for forest fire management.
Acknowledgment. This work was carried out under the Natural Science Foundation
program of Hunan Province, "Studies on the Influences of Global Climate Change on
the Spatial-temporal Pattern of Forest Fire in Hunan."
References
1. Florent, M., Serge, R., Richard, J.: Simulating climate change impacts on fire frequency and
vegetation dynamics in a Mediterranean-type ecosystem. Global Change Biology 8(5),
423–437 (2002)
2. Donald, M., Gedalof, Z., David, L., Peterson, Philip, M.: Climatic change, wildfire, and
conservation. Conservation Biology 18(4), 890–902 (2004)
3. Tian, X., Shu, L., Wang, Y.: Forest fire and climatic change review. World Forestry
Research 19(5), 38–42 (2006)
4. Mollicone, D., Eva, H.D., Achard, F.: Ecology: human role in Russian wild fires.
Nature 440, 436–437 (2006)
5. Shu, L., Tian, X.: The forest fire conditions in recent 10 years. World Forestry
Research 11(6), 31–36 (1998)
6. WMO: Press Release No. 805, http://www.wmo.int/pages/index_zh.html
7. Williams, A.A.J., Karoly, D.J.: Extreme fire weather in Australia and the impact of the El
Niño Southern Oscillation. Australian Meteorological Magazine 48, 15–22 (1999)
8. Wang, S.: The study of the cause regularity of forest fire on a big time scale and
medium-term prediction theory. Technology Metallurgica (2), 47–50 (1994)
9. Zhao, F., Shu, L., Tian, X.: Changes in dryness of forest combustible fuel in the
Daxing'anling forest region of Inner Mongolia under climate warming. Ecological
Journal 29(4), 1914–1920 (2009)
10. Ding, Y., Ren, G., Shi, G.: National Climate Change Assessment Report (I): Chinese
climate change history and future trends. Climate Change Review 2(1), 3–8 (2006)
11. Zhang, T.: The impact analysis and suggestions of the Guangdong freeze disaster on forest
fire. Forestry Survey Planning 33(5), 79–84 (2008)
12. Zhao, F., Shu, L.: The study of the impact of climatic abnormalities on forest fire. Forest
Fire Prevention (1), 21–22 (2007)
13. Yuan, Z., Song, S.: Multiple Statistical Analysis. Science Press, Beijing (2009)
14. Liu, X., Tan, Z., Yuan, Y.: The cause of Hunan freezing disaster weather damage to
forest. Forestry Science 44(11), 134–140 (2008)
Numerical Simulation for Optimal Harvesting Strategies
of Fish Stock in Fluctuating Environment
1 College of Life Science and Biotechnology, Dalian Ocean University, Dalian, China, 116024
2 Institute of Mechanical Engineering, Dalian Ocean University, Dalian, China, 116024
{zhaowen,clj}@dlou.edu.cn
Abstract. The population size of a fish stock is affected by the variability of its
environment, both biological and economic. The classical logistic growth equation
is applied to simulate fish population dynamics. Environmental variation is
included in the optimization of harvest to obtain a relation in which the maximum
sustainable yield and biomass vary as the environment varies. The fluctuating
environment is characterized by variation of the intrinsic growth rate and the
environmental carrying capacity. The stochastic properties of the environmental
variables are simplified as normal distributions. The influence of the stochastic
properties of environmental variables on the population size of the fish stock is
discussed. The resulting relation can be applied to the management of fisheries at
optimum levels in a fluctuating environment.
1 Introduction
Fish growth is a major process of fish biology and is part of the information necessary
to estimate stock size and fishing mortality in stock assessment models. Modeling the
growth of fishes, crustaceans and mollusks has received considerable attention in
studies of population dynamics and management of wild and cultivated species [1].
Knowledge of the size of a fish population is fundamentally important in the
management of fisheries. The measurement of these populations is difficult and fishery
scientists have developed a large body of both theory and practical experience that
bears on the problem[2]. Bousquet investigated the biological reference points, such as
the maximum sustainable yield (MSY), in a common Schaefer (logistic) surplus
production model in the presence of a multiplicative environmental noise. This type of
model is used in fisheries stock assessment as a firsthand tool for biomass modeling[3].
Die reviewed some methods proposed to calculate the maximum sustainable yield [4].
Wang used the Novikov theorem and the projection operator method to obtain
analytic expressions for the stationary probability distribution, the relaxation time,
and the normalized correlation function of this system [5]. Bio-economic fisheries models,
depicting the economic and biological conditions of the fishery, are widely used for the
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 131–136, 2011.
© Springer-Verlag Berlin Heidelberg 2011
132 L. Li et al.
Where x0 is the population size at time t = 0. The main features of the logistic model
are characterized as follows:

1) The optimal size of the fish population, at which the population has the maximum
growth rate:

    xopt = K / 2. (3)

Where xopt represents the optimal size of the fish population.

2) The maximum growth rate, which is also called the maximum sustainable yield
(MSY):

    (dx/dt)max = rK / 4. (4)

3) The relative growth rate declines with increasing population size, and equals zero
when the population size reaches its maximum:

    (dx/dt) / x = r(1 − x/K). (5)
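Features (3) and (4) can be checked numerically by scanning the logistic growth rate over a grid of population sizes, here using the parameter values from the Fig. 4 caption (Ka = 1000, ra = 0.75):

```python
import numpy as np

# Logistic growth dx/dt = r*x*(1 - x/K); locate the maximum numerically.
r, K = 0.75, 1000.0

x = np.linspace(0.0, K, 100001)          # population sizes from 0 to K
growth = r * x * (1.0 - x / K)           # dx/dt along the grid

x_opt = x[np.argmax(growth)]             # expect K/2 = 500
msy = growth.max()                       # expect r*K/4 = 187.5

print(f"x_opt = {x_opt:.1f}, MSY = {msy:.1f}")
```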
Numerical Simulation for Optimal Harvesting Strategies 133
(Figure: relative growth rate versus relative population size)
The rate of change of the fish stock, dx/dt, is determined by natural reproductive
dynamics and harvesting [7]:

    dx/dt = f(x, t) − h(e, x, t). (6)

Where f(x, t) is the natural growth rate of the fish stock, which depends on the current
size of the population x, and h(e, x, t) is the quantity harvested per unit of time. The
net growth rate dx/dt is obtained by subtracting the rate of harvest h(e, x, t) from the
rate of natural growth f(x, t).

The rate of harvest h(e, x, t) is assumed proportional to the aggregate standardized
fishing effort e and the biomass of the stock x; that is [8]

    h(e, x, t) = q e(t) x(t). (7)

Where q is the catchability coefficient. Once average fishing power has been
calculated, the standardized fishing effort is computed as [9]

    e(t) = P n. (8)
The discounted net revenue to be maximized is

    max ∫₀^∞ [p h − C] exp(−δt) dt = ∫₀^∞ [p q e(t) x(t) − c e(t)] exp(−δt) dt. (10)
    eopt = r(1 − xopt/K)/q = r/(2q). (12)
From the above analysis we find that the fish stock has a maximum growth rate when
the population size equals half of the environmental carrying capacity. Maximum
sustainable yield was postulated by Schaefer for a fishery on an isolated population
of fish growing according to the logistic law. When a fished population is found to be
below half of the pristine population (i.e., the population prior to fishing), the
population is termed overfished. When a population is overfished, the catch per unit
effort (CPUE) decreases as the fishing effort increases.
The logistic equation changes when the system is affected by external factors such as
temperature, drugs or radiotherapy. Under equilibrium conditions in a deterministic
environment, the relationship between yield and biomass is a parabola, and there is
one maximum sustainable yield located at the top of the parabola, where h = rK/4 and
e = r/(2q). In a fluctuating environment, the environmental changes affect both the
carrying capacity and the rate of increase, and there is a family of parabolas relating
yield and biomass [11].
Fig. 2. Probability density of the intrinsic growth rate r (SDr = 0.025 and SDr = 0.05)

Fig. 3. Probability density of the environmental carrying capacity k (SDk = 20 and SDk = 40)
(Figure: population size versus time; two curves, SDk = 60, SDr = 0.075 and SDk = 40,
SDr = 0.050)
Fig. 4. Variation of population size versus time in fluctuating environment (Ka=1000, ra=0.75,
x0=200)
Suppose that the intrinsic growth rate and the environmental carrying capacity follow
normal distributions. Fig. 2 shows the stochastic distribution of the intrinsic growth
rate, Fig. 3 shows that of the environmental carrying capacity, and Fig. 4 shows the
variation of population size versus time in a fluctuating environment.
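A minimal sketch of the simulation behind Fig. 4, assuming per-step normal draws of r and K and an explicit Euler step of the logistic equation. The means and x0 follow the Fig. 4 caption; the step size, standard deviations and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

ra, Ka, x0 = 0.75, 1000.0, 200.0     # means and initial size (Fig. 4 caption)
sd_r, sd_k = 0.05, 40.0              # assumed standard deviations
dt, steps = 0.1, 200                 # simulate t in [0, 20]

x = x0
trajectory = [x]
for _ in range(steps):
    r = rng.normal(ra, sd_r)         # fluctuating intrinsic growth rate
    K = rng.normal(Ka, sd_k)         # fluctuating carrying capacity
    x += r * x * (1.0 - x / K) * dt  # explicit Euler step of the logistic equation
    trajectory.append(x)

trajectory = np.array(trajectory)
print(f"final population size: {trajectory[-1]:.1f}")
```

As in Fig. 4, the trajectory rises from x0 and then fluctuates around the mean carrying capacity rather than settling at a fixed value.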
From Fig. 4, we can conclude that the variation of population size versus time in a
fluctuating environment is stochastic.
5 Conclusion
Fishery systems have very complex interactions between resource stocks and factors
such as labor, the fluctuating environment and the capital used to harvest fish stocks.
A stochastic simulation technique is used to describe the influence of the highly
variable marine environment. The complexity of fisheries management stems from
the dynamic nature of the marine environment and the numerous interest groups with
different objectives. The underlying properties of the marine environment affect the
variation of the intrinsic growth rate, the environmental carrying capacity and the
optimal harvest strategies.
References
1. Hernandez-Llamas, A., David, A.: Estimation of the von Bertalanffy, Logistic, Gompertz
and Richards curves and a new growth model. Marine Ecology Progress Series 282,
237–244 (2004)
2. Swierzbinski, J.: Statistical methods applicable to selected problems in fisheries biology
and economics. Marine Resource Economics 1, 209–234 (1985)
3. Bousquet, N., Duchesne, T., Rivest, L.-P.: Redefining the maximum sustainable yield for
the Schaefer population model including multiplicative environmental noise. Journal of
Theoretical Biology 254, 65–75 (2008)
4. Die, D.J., Caddy, J.F.: Sustainable yield indicators from biomass: are there appropriate
reference points for use in tropical fisheries? Fisheries Research 32, 69–79 (1997)
5. Wang, C.-Y., Gao, Y., Wang, X.-W.: Dynamical properties of a logistic growth model
with cross-correlated noises. Physica A 390, 1–7 (2011)
6. Tsoularis, A., Wallace, J.: Analysis of logistic growth models. Mathematical
Biosciences 179, 21–55 (2002)
7. Jensen, A.L.: Maximum harvest of a fish population that has the smallest impact on
population biomass. Fisheries Research 57, 89–91 (2002)
8. Eisenack, K., Kropp, J.: Assessment of management options in marine fisheries by
qualitative modeling techniques. Marine Pollution Bulletin 43, 215–224 (2001)
9. Arnason, R.: Endogenous optimization fisheries models. Annals of Operations
Research 94, 219–230 (2000)
10. Sun, L., Xiao, H., Li, S.: Forecasting fish stock recruitment and planning optimal
harvesting strategies by using neural network. Journal of Computers 4, 1075–1082 (2009)
11. Jensen, A.L.: Harvest in a fluctuating environment and conservative harvest for the Fox
surplus production model. Ecological Modelling 182, 1–9 (2005)
The Graphic Data Conversion from AutoCAD to
GeoDatabase
1 Introduction
As AutoCAD software has powerful functions for graphics drawing and editing, most
companies use it to draw digital topographic maps, so most vector data are in CAD
format [1]. To a large extent, these CAD data are also an important source of GIS
data, and many companies have now established their own fundamental geographic
information systems. In order to satisfy GIS application requirements and reduce the
cost of buying GIS data [2], they hope their CAD data can be applied to a
fundamental geographic information system. To achieve the goal of applying CAD
data to a GIS system, we must complete the conversion from CAD data to GIS data.
The conversion process includes two parts: graphic data conversion and attribute data
conversion. Graphic data conversion is the more complex and important part, so to
solve this problem we use the ArcEngine components provided by ESRI together
with ObjectARX components to develop a data conversion tool from CAD to
GeoDatabase in the .NET environment.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 137–142, 2011.
© Springer-Verlag Berlin Heidelberg 2011
138 X. Liu and F. Hu
the .NET environment; then read the point, straight line, polyline and other graphic
elements of the DWG files using the ObjectARX development kit in the .NET
environment, and acquire the conversion data; finally, based on the conversion data
obtained, create the corresponding features in the relevant feature classes of the
GeoDatabase programmatically. This is the conversion process for graphic data. The
conversion interface is shown in Figure 1.
radius. Actually, a circle is a special arc whose central angle equals 360°, so their
conversion method is the same.

The conversion of ellipses and elliptical arcs: you must obtain the conversion data,
which mainly include the coordinates of the centre point, the starting angle, the
central angle, the rotation angle, the semi-major axis and the elliptic axial ratio. Apart
from the rotation angle, these conversion data can be obtained from the graphic
elements of the ellipse or elliptical arc through ObjectARX; the rotation angle must
therefore be calculated by a separate algorithm. Because an ellipse is a special
elliptical arc whose starting angle equals zero, the two use the same conversion
method.
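One way to compute the rotation angle is from the major-axis direction vector that ObjectARX exposes for an ellipse: the rotation is the angle of that vector measured counter-clockwise from the positive X axis. The sketch below is our illustrative reconstruction; the function name and signature are not from the paper.

```python
import math

def ellipse_rotation_deg(major_axis_x: float, major_axis_y: float) -> float:
    """Angle of the major-axis vector, counter-clockwise from +X, in [0, 360).

    Assumes the major axis is given as a direction vector from the
    ellipse center (as retrieved via ObjectARX).
    """
    return math.degrees(math.atan2(major_axis_y, major_axis_x)) % 360.0

print(ellipse_rotation_deg(1.0, 1.0))    # ~45 degrees
print(ellipse_rotation_deg(-2.0, 0.0))   # ~180 degrees
```

Using atan2 rather than atan keeps the quadrant information, so axes pointing left or down still yield the correct angle.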
After the above conversion work, we can see the effect of converting AutoCAD
graphic data to GeoDatabase. The AutoCAD graphic data and the GIS graphic data
converted into the GeoDatabase clearly maintain a high degree of consistency; thus,
we achieve lossless conversion of the AutoCAD graphic data. The effect is shown in
Figures 2 and 3.
The Graphic Data Conversion from AutoCAD to GeoDatabase 141
5 Conclusion
This paper mainly introduced how to use ArcEngine components and ObjectARX
components to realize graphic data conversion from AutoCAD to GIS in the .NET
environment, and solved some problems in graphic data conversion, such as
geometric distortion of graphic elements, coordinate location dislocation, and
polygon areas appearing as negative values. Finally, we achieve the consistency and
integrity of the graphic data before and after conversion, and accomplish lossless
graphic data conversion from AutoCAD to GIS, which will promote the application
of GIS technology.
References
1. Weiler, K.J.: Boundary graph operators for non-manifold geometric modeling topology
representations. In: Geometric Modeling for CAD Applications. Elsevier Science,
Amsterdam (1988)
2. Li, S.: Research based on the conversion of topographic map and GeoDatabase.
Geographic Space Information (2), 26–28 (2010)
3. Erden, T., Coskun, M.Z.: Analyzing shortest and fastest paths with GIS and determining
algorithm running time. Visual Information and Information Systems, 269–278
4. Yao, J., Tawfik, H., Fernando, T.: A GIS based virtual urban simulation environment.
In: Computational Science, ICCS 2006, pp. 60–68 (2006)
5. Qin, H., Cui, H., Sun, J.: The development training course of Autodesk. Chemical Industry
Press, Beijing (2008)
6. Tse, R.O.C., Gold, C.: TIN meets CAD: extending the TIN concept in GIS. In:
Computational Science, ICCS 2002, pp. 135–144 (2002)
7. Liu, R., Liu, N., Su, G.: Combination and application of graphic data and relational
database. Acta Geodaetica et Cartographica Sinica 29(4), 229–333 (2000)
8. Sun, X.: DWG data format and information transmission of graphical file. Journal of
Xi'an University of Science and Technology (4), 372–374 (2001)
9. Ebadi, H., Ahmadi, F.F.: On-line integration of photogrammetry and GIS to generate
fully structured data for GIS. Innovations in 3D Geo Information Systems, Part 2, 85–93
(2006)
10. Bordogna, G., Pagani, M., Psaila, G.: Spatial SQL with customizable soft selection
conditions. STUDFUZZ, vol. 203, pp. 323–346 (2006)
11. Song, Z., Zhou, S., Wan, B., Wei, L., Li, G.: Research for CAD data integrated in GIS.
Bulletin of Surveying and Mapping, Beijing (2008)
12. Frischknecht, S., Kanani, E.: Automatic interpretation of scanned topographic maps: a
raster-based approach. In: Chhabra, A.K., Tombre, K. (eds.) GREC 1997. LNCS,
vol. 1389, pp. 207–220. Springer, Heidelberg (1998)
Research on Knowledge Transference
Management of Knowledge Alliance
Abstract. Knowledge alliances are a necessity for new-era businesses, and
knowledge transference largely affects an alliance's decision-making and
behaviors. Based on a two-company form of alliance, the paper presents a game
analysis of knowledge transference, and through this analysis shows how
knowledge alliances can manage knowledge transference and make management
decisions. This paper investigates how to manage a knowledge alliance well.
The findings have important application value for guiding enterprises to enhance
their competitiveness through the effective use of external knowledge from
business alliances, and for guiding alliances to manage knowledge transfer
effectively in order to promote enterprise development.
1 Introduction
According to its transferability, knowledge can be divided into explicit knowledge
and tacit knowledge. Explicit knowledge can be expressed in formal language,
including procedures, mathematical expressions, plans, manuals, etc., and recorded
for transfer and sharing. Companies whose advantages lie in explicit knowledge are
more easily imitated by others, and those advantages are then relatively weakened.
Of course, companies can also learn the explicit knowledge they need from other
companies and integrate it with existing knowledge into new knowledge. Tacit
knowledge refers to knowledge that cannot be clearly expressed by a system or in
explicit language; it can be grasped only in meaning, not in words. In business,
experience, skills and mental models are important assets of enterprises and also the
core capabilities of companies. Because such knowledge is often implicit, it cannot
easily be imitated; it is the most lasting source of competitiveness for enterprises.
A knowledge alliance is a risk-sharing network in which an enterprise, together with
other enterprises, universities and research institutions, shares complementary
advantages and risks through various contracts or equity shares, in order to share
knowledge resources, promote knowledge flow and create new knowledge in the
process of achieving strategic objectives. The purpose is to learn and create
knowledge. Alliance partners can obtain experience, ability and other tacit knowledge
that market transactions cannot provide; moreover, through complementary
knowledge they can create new knowledge that a single enterprise cannot, so that all
alliance partners benefit.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 143–148, 2011.
© Springer-Verlag Berlin Heidelberg 2011
144 J. Ma, G. Wang, and X. Wang
The concept of knowledge transference was first proposed in 1977 by Teece, a U.S.
theorist of technology and innovation management. He believes that the international
transference of knowledge can help companies accumulate valuable knowledge
and promote technology diffusion, thereby reducing the technology gap between
regions.
companies are UA and UB. In the process of knowledge transference, the absorption
capacity of A is a and that of B is b, with 0 ≤ a ≤ 1 and 0 ≤ b ≤ 1. U is the knowledge
gain generated by the alliance, conditional on the integration of the two companies'
knowledge; there are three possibilities: A obtains knowledge from B; B obtains
knowledge from A; or A and B jointly acquire knowledge. In the first case A
occupies the knowledge exclusively, in the second B occupies it exclusively, and in
the third A and B share it.
Suppose A transfers knowledge but B does not; then the income of B is
UB + b(UA + U). B receives additional revenue without transferring knowledge,
while A transfers knowledge but receives no additional revenue; this situation is thus
some loss to A, though the loss is psychological. Suppose the loss is K; then the
revenue of A is UA − K. Similarly, if B transfers knowledge but A does not, the
revenues of A and B are UA + a(UB + U) and UB − K. If both A and B transfer, the
revenues of A and B are UA + a(UB + U) and UB + b(UA + U). On the contrary, if
neither A nor B transfers, the revenues are UA and UB. The game payoff matrix of
knowledge transference for A and B is shown in Table 1.
If A selects knowledge transference, then for B the choice of transferring or not
yields the same payoff, UB + b(UA + U). If A selects non-transference, then the
payoff UB of B selecting non-transference is greater than the payoff UB − K of
transference, so non-transference is B's strategy. If B selects knowledge transference,
then A has the same payoff UA + a(UB + U) whether it transfers or not. If B selects
non-transference, then the payoff UA of A selecting non-transference is greater than
Research on Knowledge Transference Management of Knowledge Alliance 145
Table 1. Game payoff matrix of knowledge transference for A and B

A \ B              transference                    non-transference
transference       UA + a(UB+U), UB + b(UA+U)      UA − K, UB + b(UA+U)
non-transference   UA + a(UB+U), UB − K            UA, UB
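The payoff structure described above can be checked numerically; the parameter values below are hypothetical, chosen only to satisfy 0 ≤ a, b ≤ 1 and K > 0.

```python
# Payoff computation for the two-firm knowledge-transference game.
UA, UB = 10.0, 8.0    # stand-alone knowledge values of A and B (hypothetical)
U = 6.0               # new knowledge created by the alliance (hypothetical)
a, b = 0.6, 0.5       # absorption capacities of A and B (hypothetical)
K = 2.0               # psychological loss of a one-sided transferor (hypothetical)

T, N = "transference", "non-transference"

def payoff(sa, sb):
    """Payoffs (A, B) for strategy pair (sa, sb), following the table above."""
    pa = UA + (a * (UB + U) if sb == T else 0.0) - (K if (sa, sb) == (T, N) else 0.0)
    pb = UB + (b * (UA + U) if sa == T else 0.0) - (K if (sa, sb) == (N, T) else 0.0)
    return pa, pb

for sa in (T, N):
    for sb in (T, N):
        print(f"A: {sa:16s} B: {sb:16s} -> {payoff(sa, sb)}")

# Best-response check matching the analysis: B never strictly gains by
# transferring, so (non-transference, non-transference) is an equilibrium.
assert payoff(N, N)[1] > payoff(N, T)[1]
assert payoff(T, N)[1] == payoff(T, T)[1]
```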
u^k / (1 − u) · [UA + a(UB + U)] ≥ (n − k)·UA (1)
References
1. Hu, Y., Liu, X.: Game analysis of knowledge sharing in knowledge alliance. Science &
Technology Progress and Policy, 143–145 (April 2009)
2. Wang, Y.: Research on knowledge alliance cooperation innovation based on game
theory. Master's thesis, Dalian University of Technology, pp. 26–27 (2009)
3. Wang, Z., Li, J.: Introduction to Game Theory. China Renmin University Press, Beijing
(2004)
The Study of Print Quality Evaluation System Using the
Back Propagation Neural Network with Applications to
Sheet-Fed Offset
1 Introduction
Subjective evaluation, objective evaluation, and comprehensive evaluation are the three methods of color print quality evaluation. Comprehensive evaluation is the most widely used: it is based on data obtained by objective evaluation, combined with partly subjective evaluation of a variety of factors. Combining subjective psychological impressions with objective data analysis makes the evaluation criteria more scientific, so a print quality evaluation system using a Back Propagation (BP) neural network is more efficient.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 149–154, 2011.
© Springer-Verlag Berlin Heidelberg 2011
150 T. Ma, Y. Li, and Y. Sun
Although the BP network is the most widely used network model, it has two shortcomings: first, its convergence rate is not fast; second, it can become trapped in local minima. Improved BP algorithms therefore fall into two categories: the first refers to heuristic BP algorithms that use heuristic information; the other combines the BP algorithm with numerical optimization techniques.
This article introduces the BP neural network to print evaluation [1], establishing a color print quality evaluation model based on the BP neural network, in the hope of offering a new way of thinking for the evaluation of printed materials. We confirmed the standards of each printing index, of which there are 34 in this paper, and divided print quality into four levels: excellent, good, qualified, and unqualified, in accordance with the offset print quality requirements of the industry standards of the People's Republic of China and other authoritative standards.
(1) Determine the assessment index system [2]. The number of indexes is the number of input nodes of the BP network.
(2) Determine the number of layers of the BP network. The model used here has a three-tier structure with one input layer, one hidden layer, and one output layer.
(3) Determine the evaluation target. The output layer has one node, and the evaluation target level is the quality value corresponding to each print.
(4) Normalize and standardize the samples and the evaluation target values.
(5) Initialize the weights and thresholds of the network nodes with random numbers (usually between 0 and 1).
(6) Feed the standardized samples and evaluation targets into the network and give the corresponding desired outputs.
(7) Forward propagation: calculate the outputs of the layers' nodes.
(8) Calculate the errors of the layers' nodes.
(9) Back propagation: correct the weights.
(10) Check whether all samples have been input.
(11) Calculate the total error. When it is less than the error limit, training of the network ends; otherwise the process returns to step (6) and training continues.
(12) The trained network can then be used for formal evaluation.
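The loop in steps (5)–(11) can be sketched outside MATLAB as well. The following Python/numpy sketch uses made-up sample data, an assumed hidden-layer size, and plain gradient-descent weight correction (the SCG training used later in the paper is more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-ins for 60 standardized samples of the 34 indexes (step (4))
X = rng.random((60, 34))
t = rng.choice([0.2, 0.4, 0.6, 0.8], 60)    # desired outputs, one per sample

n_hid = 10                                  # hidden-layer size: an assumption
W1 = rng.random((34, n_hid))                # step (5): random initial weights
W2 = rng.random((n_hid, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, err_limit, history = 0.5, 1e-2, []

for epoch in range(1000):
    h = sigmoid(X @ W1)                     # step (7): forward propagation
    y = sigmoid(h @ W2)
    e = y[:, 0] - t                         # step (8): output-layer errors
    history.append(0.5 * np.sum(e ** 2))
    if history[-1] < err_limit:             # step (11): error-limit check
        break
    d_out = e[:, None] * y * (1 - y)        # step (9): back-propagation
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)         # correct the weights
    W1 -= lr * X.T @ d_hid / len(X)
```

Each pass through the loop corresponds to one presentation of all samples; the total squared error recorded in `history` falls as training proceeds.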
Samples were generated in MATLAB for each grade, 1200 in total (or more). To improve the network's generalization ability with the early-termination method, the random samples were divided into three parts: training samples, validation samples, and test samples. This paper uses the dividevec function of the MATLAB neural network toolbox.
The dividevec function call format is:
[trainV,valV,testV] = dividevec(p,t,valPercent,testPercent)
Here trainV is the training samples, valV the validation samples, and testV the test samples; p is the input vectors and t the target vectors; valPercent is the percentage of the total samples used for validation, and testPercent the percentage used for testing.
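A rough Python analogue of this call, splitting (input, target) pairs at random into the three sets (the shuffling and rounding details of the real dividevec may differ):

```python
import random

def dividevec(p, t, val_percent, test_percent, seed=0):
    """Randomly split (input, target) pairs into training, validation,
    and test sets, mimicking the interface of MATLAB's dividevec."""
    idx = list(range(len(p)))
    random.Random(seed).shuffle(idx)
    n_val = int(len(p) * val_percent)
    n_test = int(len(p) * test_percent)
    val, test = idx[:n_val], idx[n_val:n_val + n_test]
    train = idx[n_val + n_test:]
    pick = lambda ix: ([p[i] for i in ix], [t[i] for i in ix])
    return pick(train), pick(val), pick(test)

# 1200 generated samples, 20% for validation and 20% for testing
p = list(range(1200)); t = list(range(1200))
trainV, valV, testV = dividevec(p, t, 0.2, 0.2)
```

With 1200 samples and 20%/20% splits this yields 720 training, 240 validation, and 240 test samples.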
Setting the model parameters is important work in building the evaluation system [3]. We introduce the parameters involved and the reasons for choosing them.
(1) Number of network layers: because a BP neural network can achieve any nonlinear mapping, this model adopts a three-tier network with an input layer, a hidden layer, and an output layer.
(2) Activation function: because the nonlinear approximation ability of a BP network is provided by the activation function, an S-type (sigmoid) activation function is usually used in the hidden layer, while the output layer can be linear or S-type. We use the S-type activation function in both the hidden layer and the output layer.
(3) Input layer: we selected 34 evaluation indexes, so the dimension of the input vector is 34 and the input layer has 34 nodes.
(4) Output layer: the model output values corresponding to the print quality grades excellent, good, qualified, and unqualified are 0.2, 0.4, 0.6, and 0.8 respectively.
(5) Initial weights: to make the learning process converge, we initialized the weights randomly in (-1, 1).
(6) Learning algorithm: in selecting the learning algorithm we must consider the performance of the algorithm itself, the complexity of the problem, the size of the sample set, the network error target, and the type of problem to be solved. Through experiments and analysis, this study used the early-termination method together with the SCG (scaled conjugate gradient) algorithm.
(7) Number of training steps: after extensive testing, we found that combining the early-termination method with the SCG algorithm trains the network very quickly, so we selected 1000 training steps.
When the network parameters have been identified, we can write programs in
MATLAB to carry out training and simulation.
4 Experiment
Table 1. Model output for the four samples

Sample            I        II       III      IV
Output of model   0.5104   0.3566   0.4558   0.7645
As Table 1 shows, the output value of the first sample is 0.5104, so its quality level is qualified. The model outputs of samples II and III lie in [0.2975, 0.4935], so their level is good. The output value of the fourth sample is 0.7645, which is greater than 0.7039, so its quality level is unqualified. Although samples II and III are both good, the output value of sample II is smaller, so the quality of sample II is better than that of sample III, which reflects an advantage of BP network evaluation.
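The grading rule applied in this comparison can be written out directly; the cut points 0.2975, 0.4935, and 0.7039 are the boundary values quoted above:

```python
def grade(y):
    """Map a model output value to a print-quality level using the
    boundary values quoted in the text."""
    if y < 0.2975:
        return "excellent"
    if y <= 0.4935:
        return "good"
    if y <= 0.7039:
        return "qualified"
    return "unqualified"

outputs = [0.5104, 0.3566, 0.4558, 0.7645]  # the four model outputs in Table 1
grades = [grade(y) for y in outputs]
```

Applied to the four outputs of Table 1, this yields qualified, good, good, and unqualified, as stated in the text.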
For comparison, we also used the subjective evaluation method. Under standard lighting conditions, several experts with extensive experience evaluated the four samples subjectively. The main items are as follows: check whether the sheets are clean; check the high, medium, and dark tones with a magnifying glass; check the color replication with a signal bar or color control strip; check the clarity of dots with a magnifying glass; and check the replication of text. After this series of checks, sample I was rated qualified, sample II excellent, sample III good, and the fourth sample unqualified. Compared with the results of the BP network evaluation, we can see that the BP neural network model is effective.
5 Conclusions
In this paper we proposed a sheet-fed offset print quality evaluation model based on a BP neural network and implemented it using the MATLAB neural network toolbox. The applicability and accuracy of the evaluation model were confirmed through measurement, evaluation, and comparison with subjective evaluation of practical samples.
As future work, we plan to supplement the evaluation index system with more targets to make it more comprehensive, and to tune the parameters of the BP neural network further. A follow-up study could try to establish an independent visual evaluation system that works without the MATLAB environment used in this paper.
References
[1] Ma, T.: Study of the Method of Evaluating Color Digital Image. In: International Conference on Computer Science and Software Engineering, ICGC 2008, pp. 225–228 (December 2008)
[2] Otaki, N.: Colour image evaluation system. OKI Technical Review 7.0(194), 68–73 (2003)
[3] Guan, L., Lin, J., Chen, G., Chen, M.: Study for the Offset Printing Quality Control Expert System Based on Case Reasoning. IEEE, Los Alamitos
[4] Bohner, M., Sties, M., Bers, K.H.: An automatic measurement device for the evaluation of the print quality of printed characters. Pattern Recognition 9, 11–19 (1997)
[5] Guan, L., Lin, J., Chen, G., Chen, M.: Study for the Offset Printing Quality Control Expert System Based on Case Reasoning. IEEE, Los Alamitos
Improved Design of GPRS Wireless Security System
Based on AES
1 School of Printing and Packaging,
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing,
Wuhan University, Wuhan, 430079, China
mtl1968@whu.edu.cn
1 Introduction
With the development of network technology, dangerous factors are increasing in the transmission of digital information. In China, much attention has been paid to building encryption systems based on cryptographic theory to solve these issues. Such systems increase query efficiency and accuracy, strike a blow against counterfeiters, and protect consumers and producers. In [1] an RFID security system was studied; it is useful because it signs a digital signature for a product's ID via the SHA-1 algorithm. However, how to prevent the deception of false data from illegal readers remains an unsolved issue. In addition, [2] designed a GPRS-based wireless security system (WSS) in which a combination of networks and encryption technology was used for a product security system, but an obvious issue is the selection of the encryption algorithm.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 155–160, 2011.
© Springer-Verlag Berlin Heidelberg 2011
156 T. Ma, X. Sun, and L. Zhang
Many cryptographers have discovered that existing S-boxes have some fatal weaknesses: they have short periods and bad distribution [5]. For AES-192 (12 iteration rounds), the number of rounds that can be successfully attacked is at least 6, and the highest record, set in 2005, is 9 [3].
Considering the above analysis and the actual application requirements, we select AES as the basic algorithm in this paper. However, we must use an optimized AES algorithm to decrease the possibility of a successful attack and ensure the operational security of the system.
(1)
are 10110101 and 10001111. This change can destroy the regularity of the matrix and increase the difficulty of attack by increasing the number of private bytes.
(2)
The change of the matrix has broken the regularity of the original data. Although the operational complexity of the calculation programs did not change greatly, the change has largely increased the difficulty of attack. Thus, the security factor of the system has been increased overall.
Designing an S-box with Better Nonlinearity. Many studies show that improving the nonlinearity of a given S-box is an efficient way to increase its security [7]. We can improve the nonlinearity by swapping two output vectors in the S-box's truth table [8]. To find a new S-box by this method, we need to define some sets of (α, β) pairs for which the Walsh–Hadamard transform (WHT) value B̂(α, β) is maximum and near-maximum.

(a) Defining sets of (α, β) for which the WHT magnitude is maximum (see (3)):

W1+ = {(α, β) : B̂(α, β) = WHmax},  W1− = {(α, β) : B̂(α, β) = −WHmax}.  (3)

(b) Defining sets of (α, β) for which the WHT magnitude is close to the maximum (see (4)):

W2+ = {(α, β) : B̂(α, β) = WHmax − 2},  W2− = {(α, β) : B̂(α, β) = −WHmax + 2},
W3+ = {(α, β) : B̂(α, β) = WHmax − 4},  W3− = {(α, β) : B̂(α, β) = −WHmax + 4}.  (4)

Further, we define W2,3+ = W2+ ∪ W3+ and W2,3− = W2− ∪ W3−.
A swap of the two output vectors then has to satisfy five conditions, (a)–(e), which prescribe which of the linear-function values L1(x), L2(x), L1(y), and L2(y) must be equal or unequal over the sets W1+, W1−, W2+, W2−, W2,3+, and W2,3−; the full statement of these conditions is given in [7] and [8].
We can acquire S-boxes with better nonlinearity by the above method once those five conditions are satisfied. The AES algorithm is optimized once the S-box has been successfully optimized.
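Nonlinearity, the quantity this method improves, can be computed directly from the WHT spectra of the S-box's component functions. A minimal sketch, using the 4-bit PRESENT S-box as a small well-known example (the same code applies to an 8-bit S-box with n = 8):

```python
def walsh_spectrum(signs):
    """Fast Walsh-Hadamard transform of a +/-1 sequence of length 2^n."""
    w = list(signs)
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                a, b = w[j], w[j + h]
                w[j], w[j + h] = a + b, a - b
        h *= 2
    return w

def nonlinearity(sbox, n):
    """S-box nonlinearity: 2^(n-1) - WHmax/2, where WHmax is the largest
    WHT magnitude over all nonzero output masks beta."""
    wh_max = 0
    for beta in range(1, 1 << n):
        # component Boolean function parity(beta & sbox[x]), as +/-1 values
        signs = [1 - 2 * (bin(sbox[x] & beta).count("1") & 1)
                 for x in range(1 << n)]
        wh_max = max(wh_max, max(abs(w) for w in walsh_spectrum(signs)))
    return (1 << (n - 1)) - wh_max // 2

# 4-bit PRESENT S-box, a standard small example (not the AES S-box)
PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
nl = nonlinearity(PRESENT, 4)
```

Swapping two entries of the table and recomputing the nonlinearity is one step of the search described in [8].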
In the encryption procedure of AES, the S-box used in SubBytes can be either the existing S-box or the new S-box.
Inputting the same plaintext into AES structures with different S-boxes, we get different ciphertexts after N rounds of transformation. Comparing the ciphertexts generated from the same plaintext and performing the related calculations, we can draw a conclusion: ciphertext encrypted via the optimized AES has a stronger anti-attack characteristic.
6 Conclusions
In our work, we have analyzed the current situation and the insecurity of current security systems. We then selected AES, which is much safer, as the basic algorithm on the basis of the existing GPRS-based WSS. In addition, the S-box has been properly adjusted to obtain an optimized AES, which has been applied to the GPRS-based WSS to form a more reliable and safer security system. It will perform product authenticity enquiry services more exactly and protect the interests of firms and consumers more effectively.
This work mainly stated some possible methods for optimizing the S-box of AES. To improve the S-box, we analyzed the theory of AES, and improved S-boxes have been proposed. They are practical in theory; what remains is to program and implement them. Now that we have built a useful theory, putting it into practice is our future work.
References
1. Ni, W., et al.: Design and Implementation of RFID-based Multi-level Product Security System. Computer Engineering & Design 15(30) (2009)
2. Jia, Y.: Research and Implementation of GPRS-based Wireless Security System. Dalian Technology University, Liaoning (2006) (in Chinese)
3. Liu, N., Guo, D.: AES Algorithm Implemented for PDA Secure Communication with Java (2007) (in Chinese)
4. Zheng, D., Li, X.: Cryptography: encryption algorithm and agreement. Electronic Industry Press, Beijing (2009) (in Chinese)
5. Wang, Y.B.: Analysis of Structure of AES and Its S-box. PLA Univ. Sci. Tech. 3(3), 13–17 (2002)
6. Chen, L.: Modern Cryptography. Science Press, Beijing (2002) (in Chinese)
7. Millan, W.: How to Improve the Nonlinearity of Bijective S-boxes. In: Boyd, C., Dawson, E. (eds.) ACISP 1998. LNCS, vol. 1438, pp. 181–192. Springer, Heidelberg (1998)
8. Millan, W.: Smart Hill Climbing Finds Better Boolean Functions. In: Workshop on Selected Areas in Cryptology 1997, Workshop Record, pp. 50–63 (1997)
Design and Realization of FH-CDMA Scheme for
Multiple-Access Communication Systems
1 Introduction
Spread Spectrum is a type of modulation that spreads the modulated signal across the available frequency band, in excess of the minimum bandwidth required to transmit the modulating signal [1], [5]. Spreading makes the signal resistant to noise, interference, and eavesdropping. Spread Spectrum is commonly used in personal communication systems, including mobile radio communication and data transmission over LANs, and has many unique properties that cannot be found in other modulation techniques.
These include the ability to eliminate multi-path interference, privacy of message security, multi-user handling capacity, and low power spectral density, since the signal is spread over a large frequency band [2], [6]. There are two commonly used techniques to achieve spread spectrum: Direct Sequence Spread Spectrum (DS-SS) and Frequency Hopping Spread Spectrum (FH-SS). A DS-SS transmitter converts an incoming data (bit) stream into a symbol stream. Using a digital modulation technique such as Binary Phase Shift Keying (BPSK) or Quadrature Phase Shift Keying (QPSK), the transmitter multiplies the message symbols with a pseudo-random (PN) code. This multiplication increases the modulated signal bandwidth according to the length of the chip sequence. A Code Division Multiple Access (CDMA) system is implemented via these codes: each user in a CDMA system is assigned a unique PN code sequence, so more than one signal can be transmitted at the same time on the same frequency. In this paper the Frequency Hopping Spread Spectrum (FH-SS)
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 161–166, 2011.
© Springer-Verlag Berlin Heidelberg 2011
162 A. Baqi, S.A. Soomro, and S. Soomro
modulation technique has been used with a new spreading code in which a conventional PN code controls a typical chaos oscillator. The resultant chaotic signal has a wide frequency range, from a few kHz to a few MHz (12.34 kHz to 9.313 MHz). The motivation for using a chaotic signal in place of a conventional PN code is that chaotic systems are nonlinear dynamical systems with certain distinct characteristics: they can generate highly complex waveforms even though the number of interacting variables is minimal [3].
For an iterated map, a dynamical system with a single variable can produce chaotic behaviour, while for a continuous system, three coupled differential equations can produce complicated dynamics. Time series generated from chaotic dynamics have three interesting properties: (i) wide-band spectrum, (ii) noise-like appearance, and (iii) high complexity. In a chaotic system, trajectories starting from slightly different initial conditions diverge exponentially in time, which is known as sensitive dependence on initial conditions. Because of these distinctive properties, chaotic systems are widely studied for secure communication and multiple-user communication applications [4]. In this paper we present the methodology in Section 2 and the results obtained with our technique in Section 3, and finally we draw conclusions.
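The properties listed above are easy to reproduce with the simplest chaotic system, an iterated map. A sketch using the logistic map (an illustration only; the paper's generator is an analogue oscillator circuit):

```python
def logistic_series(x0, r=3.99, n=200):
    """Iterate the logistic map x -> r*x*(1-x); r and the length n are
    illustrative choices placing the map in its chaotic regime."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_series(0.200000000)
b = logistic_series(0.200000001)   # tiny perturbation of the initial condition
divergence = max(abs(x - y) for x, y in zip(a, b))
```

A perturbation of 10^-9 in the initial condition grows to an order-one difference within the series, demonstrating the sensitive dependence on initial conditions described above.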
Frequency hopping is the easiest spread spectrum modulation to use: any radio with a digitally controlled frequency synthesizer can, in principle, be converted to a frequency hopping radio. The conversion requires the addition of a pseudo-noise (PN) code generator to select the frequencies for transmission or reception. Most hopping systems use uniform frequency hopping over a band of frequencies. This is not absolutely necessary if both the transmitter and receiver know in advance which frequencies are to be skipped; thus a frequency hopper in the two-meter band could be made that skips over commonly used repeater frequency pairs. A frequency hopped system can use analogue or digital carrier modulation and can be designed using conventional narrow-band radio techniques. De-hopping in the receiver is done by a synchronized pseudo-noise code generator that drives the receiver's local oscillator frequency synthesizer. FH-SS splits the available frequency band into a series of small sub-channels. The transmitter hops from sub-channel to sub-channel, transmitting short bursts of data on each channel for a predefined period, referred to as the dwell time (the amount of time spent on each hop). The hopping sequence must be synchronized between transmitter and receiver to enable communication. FCC regulations define the size of the frequency band, the number of channels that can be used, and the dwell time and power level of the transmitter. In frequency hopping spread spectrum, a narrowband signal hops from one frequency to another using a pseudorandom sequence to control the hopping. The signal therefore lingers at a given frequency only for a short period, which limits the possibility of interference from another source radiating power at a specific hop frequency.
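The shared pseudorandom hop schedule can be sketched as follows; the channel count, band edge, and channel spacing are illustrative numbers (not values from the paper), and Python's random module stands in for the PN generator:

```python
import random

def hop_sequence(n_hops, n_channels=64, f_low_hz=902e6, spacing_hz=500e3, seed=7):
    """Generate a pseudorandom sequence of hop frequencies from a shared
    seed; transmitter and receiver using the same seed stay synchronized."""
    pn = random.Random(seed)   # stand-in for the PN code generator
    return [f_low_hz + pn.randrange(n_channels) * spacing_hz
            for _ in range(n_hops)]

tx = hop_sequence(100)
rx = hop_sequence(100)         # same seed: the receiver de-hops correctly
```

Because both ends derive the schedule from the same generator state, the receiver can tune to each sub-channel for exactly one dwell time, which is the synchronization requirement described above.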
Frequency Hopping Spread Spectrum systems are categorized into the following types.
In a slow frequency hopping (SFH) system the hop rate fh (the chip rate) is less than the baseband message bit rate fb; thus two or more (in several implementations, more than 1000) baseband bits are transmitted at the same frequency before hopping to the next RF frequency. The hop duration TH is related to the bit duration Tb by:
In a fast frequency hopping (FFH) system the chipping rate fc (the chipping rate is the same as the hopping rate) is greater than the baseband data rate fb; in this case one message bit of duration Tb is transmitted over two or more frequency-hopped RF signals. The hop duration, or chip duration (TH = TC), is defined by:
The second input to the modulo-two adder is generated by the frequency synthesizer driven by the chaos oscillator; as such, this signal changes in a pseudorandom manner. The chaotic signal generator is the heart of the proposed FH-CDMA system. It is designed around a Wien bridge oscillator in which a digitally controlled variable resistance technique, using a Linear Feedback Shift Register (LFSR), decoders, and an array of transistors, randomly selects the resistor values, as shown in Figures 1 and 2. The output of the oscillator thus produces a sustained analogue signal with varying frequency and amplitude.
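The LFSR at the core of this arrangement is simple to model. A minimal Fibonacci LFSR sketch with assumed parameters (a 4-bit register with taps at positions 4 and 3, which is maximal-length; the register length and taps of the actual hardware are not specified in the paper):

```python
def lfsr_states(seed=0b0001, taps=(4, 3), nbits=4):
    """Fibonacci LFSR: shift left, feeding back the XOR of the tapped bits.
    Returns the state sequence up to and including the return to the seed."""
    state, states = seed, []
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1   # XOR the tapped bits
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        states.append(state)
        if state == seed:                  # sequence has wrapped around
            return states

period = len(lfsr_states())
```

A maximal-length n-bit LFSR cycles through all 2^n - 1 nonzero states before repeating (here 15), which is what makes it useful for pseudorandom selection of the resistor values.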
This FH-SS signal is regenerated at the receiver by means of a chaos oscillator in association with a locally generated PN sequence, in a similar fashion to the transmitter, to give proper synchronization between transmitter and receiver. The FH-SS signal generated at the receiver is modulo-2-added with the received modulated signal. The output of the modulo-2 adder is converted into parallel form by a serial-to-parallel converter (demultiplexer). The demultiplexer output drives a digital-to-analogue converter; the resultant analogue signal is amplified and low-pass filtered to deliver the transmitted information signal at the receiver. The circuit diagram of the proposed scheme is outlined in Figure 2.
The resultant FH-SS signal at the receiving end is shown in Figure 3, together with the output of the receiver after de-spreading with the locally generated spreading code. It can be seen that the received signal is similar to the transmitted signal shown in Figure 2.
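The modulo-2 spreading and de-spreading round trip can be sketched with bit lists (illustrative data; in the real system the code is the chaos-derived FH-SS sequence):

```python
import random

rng = random.Random(42)
data = [rng.randrange(2) for _ in range(64)]       # message bits
code = [rng.randrange(2) for _ in range(64)]       # spreading code, shared
                                                   # by transmitter and receiver
spread = [d ^ c for d, c in zip(data, code)]       # modulo-2 add at transmitter
recovered = [s ^ c for s, c in zip(spread, code)]  # modulo-2 add at receiver
```

Because XOR is its own inverse, adding the same synchronized code a second time at the receiver recovers the original data exactly, which is why synchronization between the two code generators is essential.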
The proposed Frequency Hopping Code Division Multiple Access (FH-CDMA) scheme has been experimentally tested for its performance by transmitting and receiving a speech signal over the FH-CDMA system, built using readily available linear and non-linear ICs. The waveforms obtained at the various check points while transmitting and receiving the speech signal were recorded for technical observation; they were found satisfactory and are in conformity with the theoretical observations. These waveforms are shown in Figure 3.
4 Conclusion
Spread Spectrum based Code Division Multiple Access (CDMA) systems are becoming increasingly popular for multi-user communication. In most such multi-user systems, the given bandwidth (in a given area) must be divided and allocated to various communication channels. To share the same bandwidth among many users in a given service area, however, an equal number of unique pseudo-random codes with good correlation and statistical properties is required. Chaotic signal generators are generally used for generating pseudorandom codes with good correlation properties. In this paper a hardware-based chaotic signal generator has been proposed which can easily be programmed to generate a large number of unique random codes well suited to a multi-user CDMA system. The proposed programmable chaotic signal generator has been used to implement an FH-CDMA communication system and subsequently tested for the transmission and reception of a voice signal. The results of the experimental verification are presented in the paper and are in conformity with the theoretical observations. The proposed scheme will find a range of applications in Spread Spectrum modulation, CDMA, Global Positioning Systems (GPS), etc. Further, it guarantees adequate security with low system complexity.
References
1. Goodman, D.J., Henry, P.S., Prabhu, V.K.: Frequency-Hopping multilevel FSK for mobile radio. Bell Syst. Tech. J. 59(7), 1257–1275 (1980)
2. Einarssen, G.: Address assignment for a Time-Frequency coded Spread Spectrum. Bell Syst. Tech. J. (7), 1241–1255 (1980)
3. Jiang: A note on Chaotic Secure Communication System. IEEE Trans. Circuits and Systems I: Fundamental Theory and Applications 49, 92–96 (2002)
4. Bhat, G.M., Sheikh, J.A., Parah, S.A.: On the Design and Realization of Chaotic Spread Spectrum Modulation Technique for Secure Data Transmission, vol. (14-16), pp. 241–244. IEEE Xplore (2009)
5. Linnartz, J.P.M.G.: Performance Analysis of Synchronous MC-CDMA in mobile Rayleigh channels with both Delay and Doppler spreads. IEEE VT 50(6), 1375–1387 (2001)
6. Win, M.Z.: A unified spectral analysis of generalized time-hopping spread-spectrum signals in the presence of timing jitter. IEEE J. Sel. Areas in Communications 20(9), 1664–1676 (2002)
Design and Implement on Automated Pharmacy System
Abstract. This paper introduces the research and design of an automated pharmacy system aimed at storing and retrieving packed drugs. The system has three main functions, whose implementations are presented: the detailed structural design of the automatic medicine-input system, the dense medicine-store system, and the medicine-output system, together with research on the control methods of each actuator in the system. Finally, the functions of the system software are designed. The system has been developed and applied in a hospital outpatient pharmacy and is in good working condition.
1 Introduction
The pharmacy is a pivotal department of a hospital. At present, medicine storage in domestic hospital pharmacies mainly relies on fixed shelves, but this mode has unavoidable disadvantages: 1) drug storage is scattered and space utilization is very low; 2) pharmacists face high labor intensity and low working efficiency; 3) manual medicine output easily leads to mistakes and can cause drug accidents. Hospital pharmacy automation is therefore the new trend in pharmacy development, and it is also an important sign of innovation in service and working concepts [3].
This paper introduces an automated pharmacy system which mainly consists of an automatic medicine-input system, a dense medicine-store system, a medicine-output system, and a database management system. The system implements three basic functions: medicine input, dense medicine storage, and medicine output.
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 167–175, 2011.
© Springer-Verlag Berlin Heidelberg 2011
168 H. Che, C. Yun, and J. Zang
Fig. 4. Manipulator
The manipulators accept drugs that have been tested; the medicine-input transmission system positions the medicine-input manipulator at the storage number of the dense storage system, the stepping motor turns through a certain angle according to the height of the kit, and the lifting board rises by the corresponding height, driven by the synchronous belt. When the kits rise higher than the front board of the frame, gravity makes the drugs slide into the slot of the front board and then into the medicine storage slot of the dense storage system. The stepping motor repeats these movements until the lifting board has handed out the last kit. If, in this process, a drug fails to slide because of friction, the rotary electromagnet operates and drives the dial piece to flick it, making the drug slide down successfully.
The dense storage system consists of a roller-type slope medicine store and roller medicine-store slots, as shown in Fig. 5.
The roller-type slope store mainly consists of the frame body, support beams, and roller medicine-store slots. The frame body is a 3540 mm x 1440 mm x 2450 mm cubic structure composed of aluminum sections; the support beams are constructed of extruded aluminum and, together with the frame body, constitute the installation matrix of the roller medicine-store slots.
A roller medicine-store slot consists of rollers, rolling shafts, parting strips, borders, and beams, as shown in Fig. 6. The beams and borders compose the installation matrix of the slot; rollers of 10 mm diameter are set onto the rolling shafts, which are uniformly distributed at a separation of 20 mm. Because the widths of packed drugs differ, parting strips are set onto the rolling shafts; the space between adjacent parting strips constitutes the minimum storage unit for packed drugs, and the parting strips also enhance the overall rigidity of the roller medicine-store slot.
The design is based on the gravity blanking principle: the roller medicine-store slot is installed in the store at a 15° angle, so drugs slide automatically under their own gravity after entering the store, down to the medicine-out opening of the dense storage system, where they wait for output.
The automatic medicine-out system consists of the medicine-out drivers and elevators. The packed drugs in the dense storage system are placed in the tilted roller medicine-store slots; each slot keeps the same type of drug, forming a matrix arrangement as a whole. When there is no medicine-out action, the packed drugs are reliably retained by the fixed block shaft installed in the flanges at both ends.
The medicine-out driver consists of an electromagnet and a flap, as shown in Fig. 7. The flap is installed on the ejector rod in front of the electromagnet. When the electromagnet is energized, the ejector rod contracts, driving the flap to rotate around the fixed axis; the front end of the kit is jacked up by the flap, and when the bottom of the kit rises above the limit block shaft, the medicine-out action takes place. The kit then slides down the slope onto the belt of the elevator.
The elevator consists of guide rails, a belt line, aluminum paddles, a flap, a transmission system, a photoelectric sensor for detecting drugs, and a self-protection sensor, as shown in Fig. 8. The elevator moves vertically along the two guide rails; after positioning itself at the layer of the medicine store, the medicine-out driver acts and completes the medicine output. Drugs taken out pass through the detection surface of the photoelectric sensor, which sends signals to the PLC to count them and match the count against the number of drugs in the prescription. The drugs fall directly onto the belt line; when all drugs have been taken, the belt line transports them to the medicine-out opening, the flap opens, the drugs are sent out, and the deployment of the prescription drugs is complete.
Fig. 8. Elevator
When drugs are blocked in the roller medicine-store slot because of package quality, insufficient smoothness of the slot surface, or other causes, a kit can interfere with the elevator's movement if the elevator continues to move, and in serious cases this can damage the elevator and the medicine-store slot. Two groups of bijective photoelectric sensors are therefore installed on the elevator for protection and detection. When kits or other objects block the detection light rays of the sensors, the control system stops the elevator's movement; when the objects are removed and the light rays can pass, the control system lets the elevator continue to move.
Control system architecture (four levels). Management level: the automated pharmacy service computer, connected over TCP/IP. Monitoring level: the automated pharmacy IPC, connected to the lower level via RS232, PCI, and ISA. Controlling level: PMAC motion control card, CP1H PLC, and PISO813 data acquisition card. Executive level: laser sensors, photoelectric sensors, bijective sensors, limit-level and zero-level sensors, rotating electromagnets, stepper motors, servo motors with servo drives, and AC motors.
Controlling level: according to the control instructions sent by the monitoring level, it calls the corresponding bottom-level control procedures to control the executive-level parts.
Executive level: accepts the controlling level's instructions and drives the executive-level motors to run as required; it also performs kit detection, protection detection, and system zero and limit detection.
[Figure: automatic medicine-in program flow. Beginning → initialization → wait for a medicine-in request → determine the number of kits to be put into the roller medicine-store slot → the manipulator prepares to move → the kits are placed on the belt line of the automatic medicine-input system → check whether the kit dimensions are consistent with the database (if not, re-check) → deliver the kits to the manipulator → the medicine-input transmission system carries the manipulator to the roller medicine-store slot → the manipulator lets the kits slide into the slot → the manipulator and medicine-input transmission reset.]
The combination of the medicine-out driver and the elevator realizes batch drug output. During medicine-out, the system reads the drug storage-bit information from the database, the elevator moves to the drug's location, the medicine-out driver acts, and the counting sensor feeds the quantity of dispensed drugs back to the database.
After the prescription has been picked, the elevator moves to the medicine-out opening; the elevator belt line and the flap act at the same time to complete the medicine-out. The automatic medicine-out program flow is shown in Fig. 11 as follows.
Design and Implement on Automated Pharmacy System 175
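The medicine-out cycle just described (read each drug's storage slot from the database, move the elevator there, count sensor pulses against the prescription) can be sketched as follows. All names and data structures are hypothetical; the real system runs on a PMAC motion control card and a CP1H PLC.

```python
# Illustrative sketch of the batch medicine-out cycle: for each drug, look up
# its storage slot, move the elevator there, dispense, and count photoelectric
# sensor pulses until the prescribed quantity is reached.

def dispense_prescription(prescription, store_db):
    """prescription: {drug: quantity}; store_db: {drug: (layer, column) slot}."""
    counts = {drug: 0 for drug in prescription}
    for drug, need in prescription.items():
        slot = store_db[drug]          # elevator positions itself at this slot
        while counts[drug] < need:     # each pass of a kit through the
            counts[drug] += 1          # photoelectric sensor adds one pulse
    return counts                      # quantities fed back to the database

done = dispense_prescription({"aspirin": 2, "ibuprofen": 1},
                             {"aspirin": (3, 5), "ibuprofen": (1, 2)})
print(done)
```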
5 Conclusions
This paper introduces the overall structure, the control system, and the software design of an automated pharmacy system. The system is currently operating in a hospital in good condition, which shows that the design is reasonable and feasible.
Research on Digital Library Platform
Based on Cloud Computing
1 Introduction
With the development of computer, network, and information technologies, the digital library faces great challenges, such as resource storage and sharing and diverse personal service requirements. To solve these problems and bring the digital library into full play, the existing library's space and time constraints must be broken. In the new era, the digital library should provide humanistic service. Cloud computing is an effective way to promote digital library development [1].
The concept of cloud computing was first proposed by Google; IBM, Microsoft, and others have also defined it, and there is still no unified definition. In short, cloud computing is a newly emerging computing model that combines the merits of parallel computing, distributed computing, grid computing, utility computing, network storage technologies, virtualization, and load balancing. Its principle is to integrate computers distributed over the network into one entity, a computer system with strong capability, and to deliver computing power to terminal computers through the SaaS, PaaS, IaaS, and MSP business models. Cloud computing services are managed by a data processing center, which provides a unified service
Fig. 1. Cloud computing model
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 176–180, 2011.
© Springer-Verlag Berlin Heidelberg 2011
interface to users and meets users' personal needs. The cloud computing service model is shown in Fig. 1.
Since Google proposed the concept in 2006, cloud computing has become a hot research topic in the IT area [2]. It is now widely used in digital libraries, office systems, and so on. In 2009 the concept of the cloud computing library was proposed by Richard Wallis [3]. OCLC has announced that library management services based on cloud computing will be provided to its members. Besides, the District of Columbia Public Library and the Eastern Kentucky University Library are offering services based on cloud computing. In China, cloud computing is still at the stage of theoretical research, and many scholars are studying it [4-6]. To promote the application of cloud computing in the digital library, this paper puts forward a new digital library model based on cloud computing. The model exploits cloud computing's distributed storage and computing power, realizes resource sharing, and improves the serving efficiency of the digital library.
[Figure: platform architecture. Service layer; management layer (application interface, hardware management, data management, safety management); data layer (object sets, object-relational mapping, databases); cloud computing infrastructure.]
The infrastructure layer of the digital library platform is composed of many public and private clouds, which are integrated through the Internet to form a huge virtual data center or supercomputer. With the public cloud, the library's digital resource storage and the data center's application environment can be built using IaaS. With the private cloud, a local library can build its digital library platform on the main server and application server provided by the public cloud; the private cloud can also protect certain resources.
The function of the data layer is to convert non-uniform data into unified resource objects. It includes databases, object-relational mapping, and object sets.
2.2.1 Databases
Different library platforms may use different databases, but most use one of the following: Oracle, SQL Server, and so on.
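The data layer's conversion of non-uniform data into unified resource objects can be sketched as a simple mapping step. The field names, source formats, and `ResourceObject` type below are invented for illustration; a real platform would use an ORM against Oracle or SQL Server.

```python
# Minimal sketch of the data layer: rows from heterogeneous databases are
# mapped onto one unified resource object handed to the upper layers.
from dataclasses import dataclass

@dataclass
class ResourceObject:
    title: str
    author: str
    source_db: str

def to_resource(row: dict, source: str) -> ResourceObject:
    # each source database uses its own column names (assumed here)
    mapping = {"oracle": ("TITLE", "AUTHOR"), "sqlserver": ("book_name", "writer")}
    title_col, author_col = mapping[source]
    return ResourceObject(title=row[title_col], author=row[author_col], source_db=source)

r1 = to_resource({"TITLE": "Dream of the Red Chamber", "AUTHOR": "Cao Xueqin"}, "oracle")
r2 = to_resource({"book_name": "Journey to the West", "writer": "Wu Cheng'en"}, "sqlserver")
print(r1.title, "|", r2.title)
```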
The management layer is the core layer of the digital library platform. Its function is to manage the hardware in the infrastructure layer, the data resources in the data layer, and system security.
In the private cloud, data backup, system logging, device monitoring, and so on are employed to keep local resources safe. Between different clouds, the system uses information security assessment, mutual trust mechanisms, and data encryption to protect the security of communications. Moreover, operations are transparent to users: data storage, computing, invalidation, and similar operations are all hidden from users, and the digital library manager can assign different permissions according to users' identities.
The service layer provides the access interface to users. Users with administrative privileges can carry out library management, lending management, library charging, and application development and expansion. Personal users can log in to the digital library and enjoy online services such as book borrowing, book reservation, document retrieval, and academic exchange.
3 Summary
Cloud computing is an effective new way to build a modern digital library platform. Based on cloud computing, this paper presents a new digital library platform model and gives its architecture in detail. The platform implements resource storage and sharing efficiently and provides users with fast and convenient services. This study can serve as a reference for the design and realization of digital libraries.
References
1. Hu, X.J., Fan, B.S.: Cloud Computing: The Challenges to Library Management. Journal of Academic Libraries 27(4), 7–12 (2009)
2. Buyya, R.: Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems 25(6), 599–616 (2009)
3. Wallis, R.: Cloud Computing, Libraries and OCLC. The Library 2.0 Gang (2009), http://librarygang.talis.com/2009/05/06/library-20-gang-0509-cloud-computing-libraries-and-oclc/ (accessed May 15, 2009)
4. Zhou, X.B., She, K., Ma, J.H.: Composition Approach for Software as a Service Using Cloud Computing. Journal of Chinese Computer Systems 31(10), 1942–1953 (2010)
5. Zhang, G.W., He, R., Liu, Y., Li, D.Y.: An Evolutionary Algorithm Based on Cloud Model. Chinese Journal of Computers 31(7), 1082–1091 (2008)
6. Zheng, P., Cui, L.Z., Wang, H.Y., Xu, M.: A Data Placement Strategy for Data-Intensive Applications in Cloud. Chinese Journal of Computers 8, 1472–1480 (2010)
7. Richardson, L., Ruby, S.: RESTful Web Services (2010), http://home.cci.lorg/~cowan/restws.pdf
Research on Nantong University of Radio and TV
Websites Developing Based on ASP and Its Security
Shengqi Jing
Abstract. This paper first introduces the current research and development of ASP technology at home and abroad. It then introduces, in the context of the Nantong University of Radio and TV website, ASP technology, database technology, network security technology, and Web station security. Based on this research, the functions of management, display, query, and so on are presented, together with several technologies that must be mastered for Web station security, including encryption and authentication, firewalls, intrusion detection, and system backup. Finally the paper gives a conclusion and analysis. It provides a reference for the development and security research of ASP-based information query systems and has achieved good results.
1 Introduction
ASP (Microsoft Active Server Pages) is Microsoft's development of a service-side
scripting environment, which is a collection of objects and components. ASP file is an
executable script embedded in HTML documents, HTML and Active Control will
combine to produce and implement dynamic, interactive, high-performance Web
server applications with the extension. Asp. ASP technology is a replacement for CGI
(Common Gateway Interface, Common Gateway Interface) technology. Simply
speaking, ASP is a server-side script in the operating environment, through such an
environment, users can create and run dynamic, interactive Web server applications,
such as interactive dynamic web pages, including the use of HTML forms to collect and
process information, upload and downloads, as users use their own CGI programs in
the same, but it is much simpler than the CGI. More importantly, ASP uses ActiveX
technology is based on an open design environment, users can define and produce their
own components to join, so White has the dynamic web page with almost unlimited
expansion capacity, which is the traditional CGI programs are far less than other place.
In addition, ASP can use ADO (Active Date Object, Microsoft, a new data access
model) easy access to the database, so that the development of applications based on
WWW is possible [1].
ASP's advantages. ASP technology is intuitive and easy to learn; even a beginner can write reliable code in a short time. It also provides powerful functionality, and
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 181–186, 2011.
© Springer-Verlag Berlin Heidelberg 2011
it can greatly speed up project development. This is why ASP, though born only a few years ago, is already widely used.
ASP's advantages are very obvious. First, it is truly browser-independent: the system runs on the server side and returns only standard HTML to the client browser, regardless of which browser the client uses. Second, it is completely server-side: software maintenance and upgrades happen entirely on the server, and the client needs no preparation. Third, the ASP scripting language can be any script language as long as the appropriate engine is provided; ASP itself supports VBScript and JavaScript, and developers are free to choose. Common technologies for accessing network databases through a browser include CGI, JDBC, and so on, but they are much more complicated to implement than ASP. Considering the actual needs of the system and the advantages above, ASP technology was chosen to implement the system [2].
The specific advantages are the following:
(1) Higher development efficiency: ASP provides an easy-to-learn script language with many built-in objects, which greatly simplifies the development of Web applications and improves development efficiency.
(2) Interactivity: an ASP page is a page with computing power; at run time it can produce different HTML output for different parameters. Although ASP is a server-side technology, it can also be mixed with traditional client-side scripting and plug-in controls, dynamically generating client-side scripts and embedded objects that build a graphical user interface in the browser.
(3) Enhanced security: ASP scripts execute on the server, and the user's browser receives only the HTML documents generated from the results. This reduces the requirements on the browser and, at the same time, strengthens the security of the system.
(4) Cross-platform operation: the main body of ASP is platform-independent HTML and various scripts, which need no compilation or linking, so content can be changed at any time and run directly in various operating environments. ActiveX components can be developed in a variety of programming languages, are vendor-independent, and execute as binary components across operating environments and networks; through HTML and script, developers can easily assemble the various functions of ActiveX components into ASP Web applications.
(5) IIS and ASP technologies: considering the interactive features of the system, IIS and Active Server Pages (ASP) can be used to achieve dynamic, interactive Web design [3].
IIS (Internet Information Server) is the Internet information server provided with Microsoft Windows systems; its WWW service includes the ASP script server that supports ASP technology. IIS is simple to install, powerful, protocol-compliant, and compatible with other Microsoft software. Together, IIS and ASP provide a simple and efficient connection between the Web and databases; combined with HTML, scripts, and other components, they establish an efficient interactive environment for dynamic Web applications. Interactivity here means responding to information submitted by users without manually updating files on the page. Database data can change at any time without changing the server application running on it [4].
System Functional Analysis and Design. The information query system is divided into three modules, shown in Figure 1:
1. Information management: add and delete information.
2. Information display: show all information, or display information selected by keyword query.
3. Station-wide query by keyword [4].
[Fig. 1. Function modules of the information query system.]
The article table structure is as follows:

Column name   Data type      Length
articleid     int            4
type          nvarchar       50
title         nvarchar       255
url           nvarchar       255
content       ntext          16
hits          int            4
big           nvarchar       50
vote          nvarchar       50
[from]        nvarchar       50
fromurl       nvarchar       255
dateandtime   smalldatetime  4
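The table above can be created with a DDL statement along these lines. This sketch uses SQLite for portability and maps the SQL Server types to SQLite affinities; on the actual SQL Server backend the types would be nvarchar/ntext/smalldatetime as listed, and the sample row is invented.

```python
# Sketch of the article table as a DDL statement (SQLite stand-in for the
# paper's SQL Server schema; original types noted in comments).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE article (
        articleid    INTEGER PRIMARY KEY,   -- int, 4 bytes
        type         TEXT,                  -- nvarchar(50)
        title        TEXT,                  -- nvarchar(255)
        url          TEXT,                  -- nvarchar(255)
        content      TEXT,                  -- ntext
        hits         INTEGER DEFAULT 0,     -- int, 4 bytes
        big          TEXT,                  -- nvarchar(50)
        vote         TEXT,                  -- nvarchar(50)
        "from"       TEXT,                  -- nvarchar(50); reserved word, quoted
        fromurl      TEXT,                  -- nvarchar(255)
        dateandtime  TEXT                   -- smalldatetime
    )""")
conn.execute("INSERT INTO article (title, url) VALUES (?, ?)",
             ("Campus news", "http://example.invalid/news/1"))   # invented row
print(conn.execute("SELECT title, hits FROM article").fetchone())
```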
[Figure: administrator login flow. The administrator logs in through login.asp; chklogin.asp audits the account and, depending on the result (Yes/No), grants or denies access.]
3 Conclusions
This paper describes in detail the design and implementation of an ASP-based information query website and the security of the site, which has both theoretical and practical significance for ASP-based Web development. With these methods, the Web pages of Nantong University of Radio and TV were set up.
References
1. Wang, C.H., Xu, H.X.: Site planning, construction and management and maintenance of
tutorials and training. Beijing University Press, Beijing (2008)
2. Zhang, Y.Y.: Site Management Manual. China Water Power Press, Beijing (2010)
3. Tao, T.: Dynamic Web-based ASP technology design and implementation. Fujian PC
(November 2010)
4. Gu, X.M., Zhang, Y.P.: ASP-based security research. Computer Engineering and Design
(August 2004)
5. Kern, T., Kreijger, J., Willcocks, L.: Exploring ASP as sourcing strategy: theoretical perspectives, propositions for practice. Journal of Strategic Information Systems (November 2009)
Analysis of Sustainable Development in Guilin by Using
the Theory of Ecological Footprint
1 Introduction
The natural ecosystem is the material basis on which humans rely for existence, and sustainable development should be conducted within the biological capacity of the ecosystem [1]. With economic development and a growing population, it is important to analyze quantitative data and interpret the sustainable development of Guilin.
The Ecological Footprint (henceforth EF) was first proposed by the Canadian scholar Rees and his student Wackernagel [2,3]. At present, many domestic and foreign scholars have studied it extensively and applied it to regional sustainable development [4,5]. This paper uses the EF model and statistical data [6] to calculate and analyze the EF of Guilin. The results reveal the state of biological capacity and resource utilization, and scientific suggestions are proposed for the process of sustainable development in Guilin.
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 187–193, 2011.
© Springer-Verlag Berlin Heidelberg 2011
188 H. Wang et al.
inhabitants, 88.303 billion yuan of GDP, and a three-industry production ratio of 19.4 : 45.2 : 35.4 in 2008. The city has initially established pillar industries such as electronics, rubber, and food processing.
3 Methods
EF refers to the biologically productive geographical space occupied by human beings to provide resources and absorb waste under the existing living standard. The calculation of EF starts from two assumptions, described as follows: first, humans can determine their own consumption of resources and energy and their own waste output; second, these resources and wastes can be converted into the biologically productive areas that produce and absorb them. Biologically productive areas include the following six items: cropland, forest, grassland, fishing ground, built-up land, and fossil fuel land. The formula is:
ef = EF / N = Σ aᵢAᵢ, with Aᵢ = Cᵢ / pᵢ    (1)

where ef denotes the per-capita ecological footprint (hm²/cap); EF is the total ecological footprint of the regional ecosystem; N is the population of Guilin; aᵢ is the equivalence factor; Aᵢ represents the per-capita ecological footprint of consumption item i (hm²/cap); Cᵢ stands for the per-capita consumption of item i; and pᵢ is the global average productivity of item i (kg/hm²).
Biological capacity (henceforth BC) is the ability of a country or region to provide biologically productive areas. The equation is:

bc = BC / N = 0.88 Σ aⱼrⱼyⱼ    (2)

where BC is the total biological capacity; bc denotes the per-capita biological capacity; aⱼ represents the per-capita area of item j; yⱼ stands for the yield factor; rⱼ is the equivalence factor; and N is the population.
The yield factor denotes the deviation of the local yield of each item of biologically productive area from the world average yield. The yield factors used in this paper are: cropland 1.7, built-up land 1.7, forest 0.9, grassland 0.2, fishing ground 1.0, and fossil fuel land 0 [7].
Equivalence factors are used to standardize the different items of land. The equivalence factors used in this paper are: cropland 2.8, built-up land 2.8, forest 1.1, fossil fuel land 1.1, grassland 0.5, and fishing ground 0.2 [8]. In the calculation of BC, 12% of the area is deducted for biodiversity protection, hence the factor 0.88 in Eq. (2).
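Eqs. (1) and (2) can be worked through directly with the yield and equivalence factors quoted above. The per-capita area figures below are invented solely to show the arithmetic; they are not Guilin's actual data.

```python
# Worked sketch of the EF and BC formulas using the paper's factors.
equivalence = {"cropland": 2.8, "built-up": 2.8, "forest": 1.1,
               "fossil fuel": 1.1, "grassland": 0.5, "fishing": 0.2}
yield_factor = {"cropland": 1.7, "built-up": 1.7, "forest": 0.9,
                "grassland": 0.2, "fishing": 1.0, "fossil fuel": 0.0}

def per_capita_ef(area):
    """Eq. (1): ef = sum(a_i * A_i), A_i in hm2/cap."""
    return sum(equivalence[k] * area[k] for k in area)

def per_capita_bc(area):
    """Eq. (2): bc = 0.88 * sum(a_j * r_j * y_j), 12% deducted for biodiversity."""
    return 0.88 * sum(area[k] * equivalence[k] * yield_factor[k] for k in area)

demand = {"cropland": 0.30, "forest": 0.10, "grassland": 0.05,
          "fishing": 0.02, "built-up": 0.02, "fossil fuel": 0.25}  # assumed hm2/cap
supply = {"cropland": 0.08, "forest": 0.30, "grassland": 0.02,
          "fishing": 0.01, "built-up": 0.01, "fossil fuel": 0.0}   # assumed hm2/cap

ef, bc = per_capita_ef(demand), per_capita_bc(supply)
print(f"ef = {ef:.3f} hm2/cap, bc = {bc:.3f} hm2/cap")
print("ecological deficit" if ef > bc else "ecological surplus")
```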
Comparing the resource and energy consumption of a region with its BC shows whether the EF exceeds the BC. If the EF is greater than the BC, the region runs an ecological deficit; otherwise it has an ecological surplus.
EF per ten-thousand-yuan GDP is the ratio of the EF to the region's gross domestic product (GDP); this indicator reflects the efficiency of land resource utilization in a region. The smaller the value, the higher the land productivity.
Setting year and per-capita GDP as independent variables and per-capita EF as the dependent variable, we draw scatter plots, find the best-fitting curves, and use SPSS to analyze the statistical data and judge whether the fitted curves characterize the trends of per-capita EF with per-capita GDP and year.
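The curve fitting described above (best fit of the form y = A·exp(B·x), as reported in Fig. 4) can be reproduced without SPSS by an ordinary log-linear least-squares fit. The data below are synthetic, generated from an assumed exponential trend purely to demonstrate the procedure.

```python
# Log-linear least-squares fit of y = A * exp(B * x) using only the stdlib.
import math

xs = list(range(1, 11))                          # years 1..10 (e.g. 1999..2008)
ys = [1.25 * math.exp(0.05 * x) for x in xs]     # assumed trend, noise-free

# Fit ln(y) = ln(A) + B*x by simple linear regression.
lys = [math.log(v) for v in ys]
n = len(xs)
mx, my = sum(xs) / n, sum(lys) / n
B = sum((x - mx) * (l - my) for x, l in zip(xs, lys)) / sum((x - mx) ** 2 for x in xs)
A = math.exp(my - B * mx)

print(f"fitted: y = {A:.4f} * exp({B:.4f} * x)")  # recovers A = 1.25, B = 0.05
```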
[Figure: EF of Guilin by land type, 1999-2008 (cropland, forest, fishing ground, grassland, fossil fuel land, built-up land); vertical axis EF (10⁶ hm²).]
products such as timber, oil crops, and fruit have increased; the growth of built-up land reflects urbanization over the past decade. The slowest-growing lands are fishing ground and cropland, with average annual growth rates of 1.08% and 0.71% respectively; this shows the limitation of natural resources, especially the constraint of major biological resources.
During the decade, BC remained at about 0.60 hm²/cap, with cropland and forest together providing more than 0.5 hm²/cap, while grassland, fishing ground, and built-up land provided less (Table 1). From the growth rates of the various lands, built-up land shows a clear increasing trend, with an average annual growth rate of 1.21% driven by urbanization; forest fluctuates weakly, showing that urban ecological construction is in a relatively stable state; and cropland shows negative growth (-1.23%), with an average annual decrease rate roughly equal to the average annual growth rate of built-up land. Growing infrastructural land is the main cause of cropland reduction. Strengthening urban and rural planning and reducing cropland loss are pivotal to ensuring BC and achieving sustainable socio-economic development.
Because of the limits of the ecosystem's self-sustaining and self-regulating ability and of its resource and environmental capacity, per-capita BC grew slowly in Guilin over the past decade, while per-capita EF increased rapidly. Hence the per-capita ecological deficit grew from 0.7436 hm²/cap in 1999 to 1.4382 hm²/cap in 2008, an increase of 93.4% (Fig. 2). Guilin lacks energy and major productive resources such as iron ore, phosphorite, and other minerals.
[Fig. 2. Per-capita ecological deficit of Guilin, 1999-2008.]
[Figure: EF per ten-thousand-yuan GDP of Guilin, 1999-2008.]
[Fig. 4. The trend of per-capita EF with year, fitted by y = 1.2501e^(0.0491x), R² = 0.968.]
[Fig. 5. The relationship between per-capita EF and per-capita GDP (ten thousand yuan), R² = 0.982.]
5 Suggestions
(1) Controlling population size and improving population quality. Guilin has been in ecological deficit for the past decade, and the deficit has increased constantly. As the population expands year by year, the demand for biological resources and energy, as well as the amount of waste, will certainly increase, putting more pressure on the ecological environment. Controlling population is therefore important for decreasing the EF. At the same time, education in ecological environmental protection and sustainable development should be promoted, which helps guide reasonable consumption and reduce resource waste in Guilin.
(2) Increasing the level of technological development and promoting industrial restructuring. Science and technology are primary productive forces: the higher the productivity, the stronger the BC. Developing technological productivity is therefore important for improving BC and reducing the ecological deficit. Guilin should strengthen scientific and technological research, develop local pillar industries, promote technological innovation, and transfer scientific and technological achievements into practical productive forces. Meanwhile, manpower and resources should be concentrated on local characteristic and competitive industries, such as motor vehicles and parts, food and beverage processing, rubber, and pharmaceutical and biological products, promoting industrial clusters and chains (such as fruit cultivation - drinks and local specialty processing - eco-tourism), improving the ecological footprint of cropland and forest, accelerating economic development, and reducing waste emissions.
(3) Strengthening agricultural production and ensuring cropland and forest supply. Cropland consumption occupies a large proportion of the EF in Guilin, so a business model such as "company + base + farmers" should be used to accelerate the integration of agricultural cultivation, processing, and marketing. Furthermore, the agricultural and product structures should be adjusted, the layout of agricultural areas optimized, and the quality and output of major agricultural products such as grain, fruits, and vegetables improved by reforming low-yield farmland, changing farming methods, and improving crop varieties and cultivation techniques. Three-dimensional planting and breeding should be developed, promoting a planting-and-breeding mode with biogas as the link among pigs, methane, and fruit (or vegetables), to achieve integrated agricultural development and help people prosper. Forest resources are abundant in Guilin; it is important to exploit and utilize them rationally under the precondition of ecological protection.
(4) Increasing the efficiency of fossil fuels and developing clean energy. Fossil fuels are Guilin's main energy consumption, but the city has little coal and must bring it in from outside, which causes urban air pollution. It is therefore necessary to promote hydropower, solar, and other clean energy, and to make the best of straw, animal waste, and other biomass resources to develop biogas, achieving waste recycling and accelerating eco-agricultural development. This will improve the energy structure and reduce dependence on external energy.
6 Conclusions
In the study period the EF showed an increasing trend, growing from 1.3434 hm²/cap to 2.0499 hm²/cap, while BC remained stable; therefore the per-capita ecological deficit increased. Growing efficiency of resource utilization led to a decrease in EF per ten-thousand-yuan GDP, and predictive analysis indicates that per-capita EF will continue to increase with year and per-capita GDP. Over the past decade, resource supply could not meet resource consumption in Guilin, and the ecosystem was in a condition of unsustainable development. On the basis of population control, the development of science and technology should be energetically promoted and the economic growth mode transformed to enhance BC; meanwhile, resource and energy consumption patterns should be changed to improve the efficiency of resource utilization, reduce the EF, and promote socio-economic sustainable development in Guilin.
Acknowledgements. This work was supported by a Program Sponsored for Educational Innovation Research of University Graduates in Guangxi Province (No. 200910596R01) and by the Guangxi Key Laboratory of Environmental Engineering, Protection and Assessment.
References
1. Xu, Z.-m., Zhang, Z.-q., Cheng, G.-d.: Ecological Economics: Theory, Method and Application. Yellow River Conservancy Press, Zhengzhou (2003)
2. Rees, W.E.: Ecological footprints and appropriated carrying capacity: what urban economics leaves out. Environment and Urbanization 4, 121–130 (1992)
3. Wackernagel, M., Rees, W.E.: Our Ecological Footprint: Reducing Human Impact on the Earth. New Society Publishers, Gabriola Island (1996)
4. Scotti, M., Bondavalli, C., Bodini, A.: Ecological Footprint as a tool for local sustainability: The municipality of Piacenza (Italy) as a case study. Environmental Impact Assessment Review 29, 39–50 (2009)
5. Zhang, H.-Y., Liu, W.-D., Lin, Y.-X., et al.: A modified ecological footprint analysis to a sub-national area: the case study of Zhejiang Province. Acta Ecologica Sinica 29, 2738–2748 (2009)
6. Editorial Board of the Guilin Economic and Social Statistics Yearbook: Guilin Economic and Social Statistics Yearbook. China Statistics Press, Beijing (1999-2008)
7. Xu, Z.-m., Chen, D.-j., Zhang, Z.-q., Cheng, G.-d.: Calculation and analysis on ecological footprints of China. Acta Pedologica Sinica 39, 441–445 (2002)
8. Zhang, Y., Wu, Y.-m.: The eco-environmental sustainable development of the Karst areas in southwest China analyzed from ecological footprint model: a case study in Guangxi region. Journal of Glaciology and Geocryology 28, 293–298 (2006)
Analysis of Emergy and Sustainable Development
on the Eco-economic System of Guilin
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 194–200, 2011.
© Springer-Verlag Berlin Heidelberg 2011
cities and historical cities in the world. At the end of 2008 the city had a land area of 27,800 km², a population of 5.0832 million, and a production value of 88.302 billion yuan, up 18.64% over the previous year, with a three-industry structure of 19.42 : 45.24 : 35.34 [4]. Guilin is an emerging industrial city with a good foundation; it has formed electronics, rubber manufacturing, machinery, textiles, pharmacy, bus manufacturing, handicrafts, and food and light industry as its pillar industries. The scaling-up of industries and enterprises has been effective and has driven substantial economic growth: the city has 765 industrial enterprises above designated size, with an industrial output value of 35.071 billion yuan.
The theory and method of emergy analysis established by H.T. Odum state that every form of energy is derived from solar energy, so solar emergy is used as the benchmark for measuring the value of the various energies in real applications. Based on emergy, we can measure and compare the true value of energies of different types and grades, and transform otherwise incomparable energies into emergy through emergy conversion rates (transformities). This study comprises the following steps:
(1) Collect the relevant geographical and economic data from the Economic and Social Statistical Yearbook of Guilin, 2003-2008.
(2) Following the "energy system language" legend presented by H.T. Odum, determine the system boundary, the main energy sources, and the main components of the system, and list the system processes and the relationships among the components.
(3) List the main items of energy input and output, and convert each category of energy and material into a common emergy unit with the corresponding conversion rate, to evaluate its contribution to and status in the system.
(4) Merge similar and important items.
(5) In accordance with local characteristics, the emergy analysis table, and the system classification, optimize and select the overall emergy indexes.
(6) Propose rational countermeasures based on the results.
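Step (3), converting heterogeneous inputs to a common solar emergy unit, is a multiplication of each item's annual quantity by its transformity (solar emergy conversion rate, sej per unit). The transformities and quantities below are illustrative placeholders, not the study's actual coefficients.

```python
# Sketch of emergy conversion: raw annual flows (J) times assumed
# transformities (sej/J) give solar emergy in solar emjoules (sej).

def to_emergy(items, transformity):
    """items: {name: annual quantity}; transformity: {name: sej per unit}."""
    return {name: qty * transformity[name] for name, qty in items.items()}

inputs = {"sunlight_J": 5.0e20, "rain_chemical_J": 2.0e16, "coal_J": 8.0e15}
tau = {"sunlight_J": 1.0, "rain_chemical_J": 1.8e4, "coal_J": 4.0e4}  # assumed

emergy = to_emergy(inputs, tau)
total = sum(emergy.values())
print(f"total emergy used: {total:.3e} sej")
```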
The net emergy yield ratio (EYR) is the ratio of the system's output emergy to the economic feedback emergy; it measures the economic contribution of the
196 Z. Xu et al.
system's output and is also a standard for measuring the system's efficiency. Table 2 shows that the EYR of Guilin has improved overall, though with some fluctuations, indicating that the net benefit of Guilin's eco-economic system has increased and that production efficiency and competitiveness have been enhanced under the same input of economic emergy.
Emdollar value (Em$) is the ratio of the annual gross emergy used in a country or region to that year's gross domestic product. Generally, the Em$ of less developed regions is higher and their level of economic development lower; developing regions that buy in large amounts of external resources have higher GDP and therefore lower Em$. The Em$ of Guilin decreased year by year (Table 2) and was significantly lower than Gansu's (11.88x10^12 sej/$) in 2000 and Xinjiang's (14.7x10^12 sej/$) in 1999, but still higher than that of coastal provinces such as Jiangsu's 3.02x10^12 sej/$ in 2000 and Fujian's 2.76x10^12 sej/$ in 2004 [5-8], showing that the openness and circulation of the economy are not high and that economic development remains at a low level.
Table 2. Emergy indicators and methods for Guilin, 2003-2008

Indicator (formula)                                        2003     2004     2005     2006     2007     2008
Emergy self-sufficiency ratio: (EmR+EmN)/EmU               0.472    0.424    0.460    0.487    0.471    0.469
Input emergy ratio: EmI/EmU                                0.528    0.576    0.540    0.513    0.529    0.531
Renewable resources emergy ratio: EmR/EmU                  0.180    0.188    0.172    0.144    0.139    0.123
Emergy per capita: EmU/P (10^15 sej/person)                6.818    6.517    7.382    8.448    8.899    9.887
Emergy density: EmU/area (10^11 sej/m^2)                   12.029   11.578   13.147   15.173   16.153   18.079
Population carrying capacity: (EmR+EmI)/(EmU/P) (person)   3474186  3769986  3528041  3278054  3369255  3324365
Emdollar value: Em$ = EmU/GDP (10^12 sej/$)                6.964    5.818    5.578    5.323    4.589    3.907
Net emergy yield ratio: EYR = EmO/EmI                      5.748    5.784    8.104    6.713    6.902    6.749
Environment loading ratio: ELR = (EmU-EmR)/EmR             4.555    4.329    4.811    5.949    6.185    7.159
Waste emergy index: EWI = EmW/(EmR+EmIR)                   0.079    0.073    0.072    0.065    0.071    0.070
Fraction of emergy used from electricity: Emel/EmU         0.068    0.079    0.074    0.083    0.076    0.072
Emergy exchange ratio: EER = EmIS/EmISP                    2.185    1.924    1.638    1.380    1.018    0.927
Emergy investment ratio: EmI/(EmR+EmN)                     1.120    1.357    1.176    1.052    1.121    1.134
Waste emergy/renewable emergy ratio: EmW/EmR               0.195    0.189    0.189    0.186    0.210    0.226
Emergy sustainable index: ESI = EYR/ELR                    1.262    1.336    1.684    1.128    1.116    0.943
Emergy index for sustainable development:
  EISD = EYR*EER/(ELR+EWI)                                 2.710    2.528    2.718    1.540    1.123    0.865
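The derived indices in Table 2 compose directly from the tabulated primaries, e.g. ESI = EYR/ELR and EISD = EYR*EER/(ELR+EWI). As a check, the 2003 values reproduce the tabulated results:

```python
# Recomputing two derived indices from the 2003 column of Table 2.
eyr, elr, eer, ewi = 5.748, 4.555, 2.185, 0.079  # 2003 values from Table 2

esi = eyr / elr                    # emergy sustainable index, ESI = EYR/ELR
eisd = eyr * eer / (elr + ewi)     # EISD = EYR*EER/(ELR+EWI)

print(f"{esi:.3f}")    # 1.262, matching the table
print(f"{eisd:.3f}")   # 2.710, matching the table
```

The same two lines applied to the 2008 column give the 0.943 and 0.865 values discussed below.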
Fraction of emergy used from electricity (FEE) is the ratio of electricity emergy to the total emergy used. The FEE of Guilin fluctuated considerably over the years but decreased noticeably in the most recent two years (Table 2). Guilin's electrical emergy was obviously lower than Fujian's (19%) in 2004, Jiangsu's (20.8%) in 2000, Gansu's (11.36%) and the levels of other provinces [5,7,8]. This indicates that Guilin's industries were underdeveloped, the level of electrification was low, and some industries may still rely on coal. Electronics, rubber, machinery, cotton textiles, pharmaceuticals, coach manufacturing, handicrafts and the other pillar industries of Guilin generally use relatively little energy, showing that the current orientation of industrial development in Guilin is by and large appropriate.
Emergy density (ED) is the ratio of the total emergy used in a country or region to its area, reflecting the intensity and level of economic development. The ED of Guilin rose from 12.029x10^11 sej/(m^2.a) in 2003 to 18.079x10^11 sej/(m^2.a) in 2008 (Table 2), an increase of 50.30%, but a large gap remains with some coastal areas, showing that the economic development of Guilin is still at a low level and its environmental pressure relatively limited.
Emergy per capita (EPC) is the ratio of the emergy used to the total population of a country or region and is used to evaluate people's standard of living: the greater the EPC, the higher the per capita emergy. The EPC of Guilin rose from 6.818x10^15 sej/person in 2003 to 9.887x10^15 sej/person in 2008 (Table 2), showing that the living standard of Guilin's people improved to some extent, though a large gap remains with developed cities such as Guangzhou (13.39x10^15 sej/(person.year)) and Beijing (17.89x10^15 sej/(person.year)) [3,9]. Currently, GDP per capita in Guilin is 17,435 yuan, per capita disposable income of urban residents is only 14,636 yuan/year, and per capita net income of farmers is 4,465 yuan/year, so living standards are still relatively low and Guilin remains an underdeveloped region in southwest China.
Emergy index for sustainable development (EISD) is the product of the net emergy yield ratio (EYR) and the emergy exchange ratio (EER) divided by the sum of the environment loading ratio (ELR) and the waste emergy index (EWI). The EISD of Guilin decreased sharply from 2.710 in 2003 to 0.865 in 2008 (Table 2), so the capability for sustainable development dropped off rapidly. Analyzing the causes: from 2003 to 2008 the EYR of Guilin rose from 5.748 to 6.749, but the corresponding adjustment of the industrial structure and the circular economy did not keep up, the ELR rose while the EWI fell only slightly, and the EER declined from 2.185 to 0.927, so the EISD fell quickly. With the rapid expansion of the economy, the contradictions among Guilin's economic development, resources and environment gradually emerged; the keys to sustainable development are to speed up industrial restructuring, develop the circular economy, and build circular industries with local characteristics.
References
[1] Lan, S.-f., Qin, P.: Emergy analysis of ecosystem. Chinese Journal of Applied
Ecology 12(1), 129131 (2001)
[2] Odum, H.T.: Guidance of energy, environmental and economic system. Oriental Press,
Beijing (1992); translated by Lan sheng-fang
[3] Sui, T.-h., Lan, S.-f.: Emergy analysis of Guangzhou urban ecosystem. Chongqing
Environmental Science 23(5), 423 (2001)
[4] Statistical bureau of Guilin. Economic and social statistical yearbook of Guilin. China
Statistical Press, Beijing (2004-2009)
[5] Zhao, S., Li, Z.-z.: Study on emergy analysis of Gansu ecological-economic systems.
Northwest Botanic Journal 24(3), 464470 (2004)
[6] Li, H.-t., Liao, Y.-c., Yan, M.-c., et al.: A study on emergy evaluation of Xinjiang
ecological-economics systems. Geographica Journal 58(5), 765772 (2003)
[7] Li, J.-l., Zhang, Z.-l., Zeng, Z.-p.: Study on emergy analysis and sustainable development of
Jiangsu ecological-economics system. China Population Resources and
Environment 13(2), 7378 (2003)
[8] Yao, C.-s., Zhu, H.-j.: Emergy analysis and assessment of sustainability on the ecological
economic System of Fujian province. Fujian Normal University 23(3), 9297 (2007)
[9] Song, Y.-q., Cao, M.-l., Zhang, L.-x.: Beijing, Emergy-based comparative analysis of urban
ecosystem in Beijing, Tianjin and Tangshan. Ecological Journal 29(11), 58825890 (2009)
Multiple Frequency Detection System Design*
1 Introduction
Measuring frequency is, in its simplest form, counting digital pulses within a fixed time interval. An effective way to realize intelligent frequency measurement is to use a microcontroller in the design of the frequency meter. A traditional frequency meter just detects the frequency of an external signal and simply displays it. This method works, but it cannot reflect continuous changes in signal frequency or track signals in time; a large amount of detected signal information is lost, which harms timely analysis and processing of the signal. In this paper, the design combines a microcontroller with a VB6.0 display interface. The measurement steps are: the MCU measures the frequency, digit tubes display it, the data are transmitted over the serial port, and the PC draws the frequency curve and stores the frequency data.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 201-206, 2011.
(c) Springer-Verlag Berlin Heidelberg 2011
202 W. Liu, J. da Hu, and J.c. Shi
[Figure: system block diagram; decoded labels include "microcontroller" and "system time".]
The program can detect multi-channel signals: the synchronizing gate and the function-switching circuit form a time-division multiplexing circuit, with two counters performing the time count and the event count separately. While one channel's data are displayed, another channel's data are stored in the microcontroller and can be selected with the corresponding keys.
[Figure labels, decoded: time base, gate channel, unit.]
The first scheme is more powerful and has better software features: the principle is simple and the real-time behavior of the circuit is good, but achieving high accuracy places relatively high demands on the software. Moreover, a traditional microcontroller cannot directly count a 1 MHz signal: the clock frequency of a microcontroller is usually under 12 MHz, giving a machine cycle of at least 1 us, so measuring 1 MHz requires adding a frequency-divider circuit in front of the microcontroller. This greatly extends the measuring range of the frequency meter and widens its field of application. The greatest feature of the second scheme is that it is a fully hardware circuit: stable, accurate, and free of tedious debugging, greatly reducing the production cycle, but its frequency measurement range is limited. Since this design must measure frequencies up to 1 MHz, a divider circuit must be applied; in conclusion, this design chooses the first scheme to implement the frequency measurement.
The design uses a multi-channel signal input: a switch selects the signal fed into the circuit, which provides eight frequency-measurement channels, and a 74151 data selector chooses one of the inputs. The design requires a measuring range up to 1 MHz, so the signal must be divided before it is sent to the MCU; a D flip-flop provides this frequency-division function, so the signal is first processed by the D flip-flop.
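The divide-by-2 behaviour of a D flip-flop with its inverted output fed back to D can be sketched as follows; this is a minimal Python simulation of the prescaler idea, not the C51 firmware of the paper.

```python
# Sketch of the divide-by-2 behaviour of a D flip-flop with /Q fed back to D:
# the output toggles once per input rising edge, so f_out = f_in / 2.
def divide_by_two(edges):
    """Given a count of input rising edges, return the output rising edges."""
    q = 0
    out_edges = 0
    for _ in range(edges):
        q ^= 1            # /Q fed back to D: Q toggles on every clock edge
        if q == 1:        # a 0->1 transition of Q is one output rising edge
            out_edges += 1
    return out_edges

# A 1 MHz input over a 1 s gate yields 500 kHz after one divider stage,
# within reach of a conventional 12 MHz-clocked microcontroller counter.
print(divide_by_two(1_000_000))  # 500000
```

Cascading k such stages divides by 2^k, which is how the measured count is later scaled back to the true input frequency.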
This design uses an 8-bit common-cathode LED display to show the measured frequency and the selected channel number. Each digit consists of light-emitting diode segments arranged in a figure 8; lighting particular segments displays the digits 0-9. LED seven-segment displays, also known as LED digital tubes, come in two types: common anode and common cathode. In a common-anode digital tube the anodes are tied together to the positive supply, and a segment lights when its cathode is driven low; in a common-cathode tube the cathodes are tied together, and a segment lights when its anode is driven high.
The core of the frequency meter is the AT89C51, which performs signal handling, frequency measurement and related functions, so appropriate firmware must be developed to measure the frequency. The C51 language was used to write and debug the LED display, the counter circuit, the divider circuit, and the instructions for sending results over the serial port. The firmware is divided into the following modules: the main program, the counting routine, the channel-selection routine, the display routine, and the serial-port routine.
The PC software is based on VB6.0. The corresponding routines receive, through serial communication, the information sent by the lower machine and complete the system functions, including receiving the frequency and displaying the curves.
The software design comprises the main program, the counting routine, the channel-selection routine, the display routine, and the serial-port routine.
Main program: completes the timer setup, the serial-port baud-rate settings, the interrupt types, and so on.
Counting routine: the gate time is set to 1 s by timer interrupt; during that 1 s the external pulses are counted, and the count is the frequency of the external signal in Hz.
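The counting routine's gate-time measurement can be sketched in Python; the pulse timestamps below are simulated, not read from the real MCU counters.

```python
# Sketch of gate-time frequency measurement: the measured frequency is the
# number of input pulses counted during a fixed gate interval, divided by
# that interval. Timestamps are simulated, not taken from hardware.
def measure_frequency(pulse_times, gate_start, gate_seconds):
    """Count pulses inside the gate window and derive the frequency in Hz."""
    count = sum(gate_start <= t < gate_start + gate_seconds for t in pulse_times)
    return count / gate_seconds

# Simulate a 1 kHz signal: one pulse every 1 ms for 2 s.
pulses = [i * 0.001 for i in range(2000)]
print(measure_frequency(pulses, gate_start=0.0, gate_seconds=1.0))  # 1000.0
```

With a 1 s gate, as in this design, the count itself equals the frequency in Hz; a longer gate trades update rate for resolution.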
Channel-selection routine: P1.6 and P1.7 select the port switch from top to bottom; by choosing a different channel, the routine determines which channel's frequency is measured.
Display routine: displays the measured data by enabling the appropriate digit position and calling the appropriate segment code.
Serial-port and data-saving routine: the measured frequency value is sent over the serial port to the PC. When the PC receives the data it depicts the curves, drawing them in different colors corresponding to the frequency values, and saves the values to a database for later queries. The flow diagrams are shown below:
[Flow-chart residue, decoded: Fig. 3 steps: start, initialize, 1 s interruption, count, interrupt service routine, end. Fig. 4 steps: start, send commands, check whether data is returned, draw curve, end.]
Fig. 3. Program flow chart of the counting program.  Fig. 4. Serial data transmission flow diagram.
Proteus and VB6.0 were used for joint simulation. Testing showed that the system functions normally and meets the requirements. The external input frequency is first measured by the MCU and displayed on the seven-segment LED display, then sent through the serial port to the PC; the host system quickly and accurately receives the frequency information transmitted by the lower machine. After corresponding processing, it draws the measured data on the appropriate linear axes and stores them in the database. The simulation is shown in Figures 5 to 7:
Fig. 5. Local machine measuring and displaying the fifth channel frequency.  Fig. 6. Corresponding curves depicted on the PC.
The curves drawn on the host computer reflect the real-time frequency more intuitively. Figure 7 shows the measured frequency values stored in the database, which helps with data query and analysis.
5 Conclusion
As can be seen from the simulation results above, compared with a traditional multi-channel frequency counter, the MCU-based frequency meter has better performance and enhances the overall functionality of the instrument. Of course, the whole system is still being tested and some errors were found, such as measurement error in the channels: for example, with a 10000 Hz input on channel 5, the tests showed 9998 Hz, but the measured data are within the allowed error range.
The Law and Economic Perspective of Protecting
the Ecological Environment
Abstract. The core of law and economics theory is to obtain the maximum benefits at the minimum cost. Today, with the contradictions between the environment and economic growth becoming increasingly prominent, how can we obtain the greatest economic performance while lightening the load on the environment? This paper proposes sustainable development as the guiding ideology, coordinated development as the guiding principle, and "guidelines on prevention" as the guideline.
1 The Core of the Law and Economic Theory: Costs and Benefits
Law and economics theory is, from the viewpoint of jurisprudence, the "economic analysis of law". Economic analysis of law is a theory that applies the concepts and methods of economics to study and understand problems of law; it first arose in America in the 1960s. It was Richard Allen Posner who collected the major achievements of the theory: his book Economic Analysis of Law accomplished a comprehensive and systematic analysis and summary of it.
Law and economics theory by nature "applies economic theory and economic methods fully in the analysis of legal systems" [1]. In the eyes of the economic-analysis jurists, "economics is the science of our rational choices in a world where resources are limited relative to human desires". Based on the hypothesis that man maximizes his self-interest, economists derive the core idea of law and economics theory from three basic principles (the law of supply and demand, maximizing efficiency, and maximizing value): efficiency, that is, allocating and using resources so that value is maximized [2].
Applying this theory to the legal field leads to a conclusion: maximized wealth is the purpose of the law. All legal activities (including legislative, law-enforcement and judicial activities, etc.) and all legal systems (the public law system, the private law system, the judicial system, etc.) ultimately aim at the most effective use of natural resources and the maximization of growing social wealth. Therefore, the core of law and economics theory is to obtain the maximum effectiveness at the minimum cost.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 207-213, 2011.
(c) Springer-Verlag Berlin Heidelberg 2011
208 C. Xiuping and L. Xianyan
documents. The three documents reach the same conclusion: the world must organize and implement a new strategy of sustainable development. They repeatedly emphasize that sustainable development is the only way for mankind to survive and develop at the end of the 20th century and in the 21st century.
At the June 1992 session of the United Nations Conference on Environment and Development, sustainable development became the most powerful voice. Humanity finally chose sustainable development; this was a historical evolution of human civilization and an important milestone in breaking with the traditional mode of development and opening the development of modern civilization.
The report Our Common Future holds that the rapid economic growth human society has gained has also brought deterioration of the ecological environment, thereby seriously restricting further economic and social development. Therefore, we must change the traditional mode of development and choose a new development strategy to ensure the persistence of social development.
We think we should focus on the following:
First, we should put transforming the growth mode in a prominent place and lay more emphasis on technological progress. Developing the circular economy is significant for improving the utilization of resources and the environment.
Second, we should put optimizing the industrial structure in a prominent place and lay more emphasis on developing services, particularly modern service industries.
Third, we should plan industrial development more rationally and analyze long-term market supply and demand trends, to avoid an excessively heavy industrial structure.
Fourth, we should pay more attention to coordination between economic growth, resource conservation and environmental protection.
Fifth, we should pay more attention to building a resource-conserving society, eliminating waste and advocating economy in planning, construction, circulation, consumption and production.
3.2.3.1 Economic growth enables more countries to make efforts to protect the environment. The purpose of economic development is to enhance overall national strength; as the country grows stronger, it can provide the financial and material resources necessary to protect and improve the environment and to promote improvement of environmental quality. With economic development and the improvement of people's living standards, demands on environmental quality become higher, environmental awareness becomes stronger, and environmental protection becomes more conscientious, which will no doubt promote environmental protection.
References
[1] Posner, R.A.: Economic Analysis of Law. The Encyclopedia Publishing House of China, Beijing (1997) (preface)
[2] Posner, R.A.: Economic Analysis of Law, 5th edn., New York, pp. 13-15 (1998)
[3] Chen, Q.S.: The basic theory of the environmental sciences, pp. 36-37. The Environmental Sciences Press of China, Beijing (2004)
[4] Zhang, K.: The theory of sustainable development, p. 74. The Environmental Sciences Press of China, Beijing (1999)
[5] Zhu, B.: The resources, environmental and social development. The Impact of Science to Social 1 (1994)
[6] Chen, Q.S.: The basic theory of the environmental sciences, p. 136. The Environmental Law Press of China, Beijing (2004)
[7] Nie, G.: The economic analysis of our environmental protection in the transition period, p. 27. The China Economy Press, Beijing (2006)
[8] Li, K.: The environmental economics, p. 185. The Environmental Sciences Press of China, Beijing (2003)
Research on the Management Measure for Livestock
Pollution Prevention and Control in China
1 Introduction
With the development of China's economy, people's material and cultural level has risen continuously, and China's livestock breeding industry has developed very fast; for example, meat, egg and milk output has increased rapidly. At the same time, however, livestock pollution has become more and more serious and is now a main agricultural pollution source. To solve this problem, in recent years our country has tried to control the pollution through policy guidance and financial support, with good results. But because livestock pollution prevention started late, there are still management problems; for example, the related regulations and standards are neither strict nor complete. This paper analyzes the development of the livestock industry from the economic and policy perspectives.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 214-218, 2011.
(c) Springer-Verlag Berlin Heidelberg 2011
production of more than 50 head, cattle production of more than 10 head, chicken production of more than 2,000 birds, a dairy-cattle stock of more than 20 head, and a laying-hen stock of more than 500 birds have reached 56.0%, 38.0%, 81.6%, 36.1% and 76.9% of the intensive level respectively; we can see that small-scale livestock raising still accounts for more than 80% of intensive raising [1].
Compared with the scale of livestock raising, the level of pollution prevention is not high. The environmental protection department inspected 23 provinces and cities: 90% of scale livestock farms had no environmental impact assessment, and 60% of farms had no anti-pollution measures. Livestock raising has become the main pollution source in rural regions.
In 2008, from the State Council pollution inspection results, we can see that the livestock breeding industry generates 243 million tons of manure a year and 163 million tons of liquid waste a year. Among them, chemical oxygen demand, total nitrogen and total phosphorus discharges are 12,682,600 tons, 1,024,800 tons and 160,400 tons, accounting for 41.87%, 21.67% and 37.90% of the whole country's total pollution. Among agricultural pollution sources, livestock farming is the most prominent: its chemical oxygen demand, total nitrogen and total phosphorus account for 96%, 38% and 56% of the agricultural source respectively [1,4].
3 Pollution Control Measures for the Livestock Breeding Industry
In recent years, with the growth of intensive and scale livestock breeding, its scale has become bigger and bigger and the pollution problem has grown serious.
3.1 Divided Duties among Government Departments Cause Management Problems
The agriculture department takes rural economic development and agricultural structure as the main points of its work; recently it has adjusted the industrial structure toward intensification and scale, but has not seen the problems this causes. The environment department focuses on water, air, noise and waste residue, mainly managing big-city and industrial pollution, and has paid little attention to livestock pollution. So, in a sense, the mismatch of government duties is one reason livestock pollution has become serious.
3.2 Lack of Regulation for the Small-Scale Livestock Breeding Industry
Recently, the environment department has strengthened management of the large-scale livestock breeding industry, increasing supervision of big and middle-scale companies through the "Three Simultaneities" system and environmental impact assessment, but small-scale operations account for more than 80% of all livestock companies, averaging 650 pigs or 22,000 chickens each; cattle farms with more than 200 head are only 5% of the total [2]. The lack of management of small companies is a policy problem.
3.3 Lack of Regulation to Encourage the Livestock Breeding Industry to Prevent Pollution
Livestock raising is affected by disease and the market and is a high-risk, low-profit industry. Recently our country's main management tool for livestock pollution has been penalties, lacking related protection and supporting policy, so companies have no interest in building pollution-treatment equipment; even when they build it, they often discharge waste when no inspection is under way.
Livestock breeding is the main pollution source in the countryside, but no data on the pollution situation or pollutant discharge can be found in the Environmental Conservation Yearbook, the Chinese Livestock Husbandry Yearbook or the Chinese Agriculture Yearbook; this impedes improvement of livestock pollution prevention work and the achievement of our targets.
Improve the regulation system for livestock breeding pollution: add special regulations for livestock pollution prevention, for example a Regulation for Livestock Breeding Pollution Prevention, make clear the responsibilities and obligations for environmental protection, and punish illegal companies seriously.
Strengthen the drafting of inspection documents for the livestock breeding industry and clarify the responsibility of each department, to avoid administrative management problems. Focus on the whole situation; take the treatment of livestock pollution and the improvement of the village environment as the final target, with each department performing its own function.
4.2 Clarify Department Functions and Integrate the Management of the Livestock Breeding Industry
Environmental protection is not only the work of the environmental protection department; it relates to industry, agriculture, commerce, sanitation, planning departments and so on. Every department should perform its function in managing pollution prevention, consider environmental problems when taking any measures, and strengthen pollution prevention work, so as to resolve the tension between economic development and environmental protection.
Strengthen communication between departments and build a horizontal management system among environmental protection departments, so that the livestock breeding industry is under management from plant construction, improvement and extension through normal inspection to discharge control.
4.3 Strengthen Research on Economical and Practical Measures for Pollution Prevention in the Livestock and Poultry Industry
The cost of livestock excrement pollution-prevention projects is very high, and it is difficult for companies to bear without government support. It is urgent to strengthen research on pollution-prevention techniques, to reduce the cost of livestock pollution prevention and increase treatment efficiency, particularly to increase investment in pollution-prevention techniques that can be turned into productive capacity. Livestock pollution research should match market demand and be economical and practical, to promote the development of pollution-prevention technology.
218 Y. Ji, K. Wang, and M. Zheng
4.4 Support Organic Fertilizer Production and Use; Manage the Exchange Market for Biogas, Biogas Slurry and Biogas Residue
New biogas projects should be strictly examined and approved: around a biogas project there should be farmland or fruit forest to use the biogas slurry and residue, and the piping for biogas slurry should be finished at the same time as the project is built. Secondly, projects without enough land should make the biogas slurry and residue into organic fertilizer and be provided with channels for selling it.
Organic fertilizer is a good agricultural fertilizer, but its benefits appear slowly and it has no price advantage over chemical fertilizer. To push forward the building of biogas projects and establish a good market environment, the most important thing is to strengthen management of the organic fertilizer market; it is necessary to encourage the production and use of organic fertilizer through policy and economic means.
Because their cost and individual pollution quantity are low, medium and small-scale livestock farms have always been a blind spot for environmental management, but long-term fugitive discharge will cause big problems. The medium and small-scale livestock breeding industry is spread widely and is difficult to monitor in real time, so our main measures for it are guidance and encouragement, teaching excrement treatment to reduce the organic load in wastewater. Building fertilizer companies or sewage treatment plants around medium and small-scale livestock farms can resolve their fecal treatment problem.
References
[1] National Bureau of Statistics, Ministry of Environmental Protection:China Statistical
Yearbook on Environment. China Statistics Concern, Peking (2009)
[2] Su, Y.: Research of Countermeasures on Waste Treating of Intensive Livestock and
Poultry Farms in China. Chinese Ecosystem Agriculture College Journal 2, 1518 (2006)
[3] Yang, J.: The Tension Rings A Red Alarm. Environment 4, 45 (2002)
[4] Peng, X.: Our Livestock Farming Pollution Treatment Policy and Its Character.
Environment Economy 1 (2009)
[5] Chinese The Editorial Board of The Livestock Husbandry Yearbook: Chinese Livestock
Husbandry Statistical Yearbook (2008), Peking, Chinese Agriculture, Book Concern (2010)
The Design of Supermarket Electronic Shopping Guide
System Based on ZigBee Communication
1 Introduction
With the development of the economy and society, the emergence of large supermarkets provides people with a convenient place to buy necessities; to some extent it facilitates purchasing and saves time. However, the enormous size of supermarkets and the increasing quantity and variety of goods make it inconvenient for customers to find what they need and to get the latest product information. This paper presents a mobile electronic supermarket shopping-guide system that can be fitted to a shopping cart to locate desired goods and offer the latest information on supermarket products.
2 ZigBee Technologies
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 219-224, 2011.
(c) Springer-Verlag Berlin Heidelberg 2011
220 Y. Zhang, L. Han, and Y. Zhang
3.1 Location
The wireless location system depends mainly on a CC2431 ZigBee network environment and the chip's built-in wireless location engine [3]. There are three types of nodes: the central node, blind nodes and reference nodes. The central node is the coordinator (FFD), which initiates the network. A blind node, built from an end device (RFD), is the node to be located; it calculates its current coordinates. Reference nodes are routers (FFD) whose coordinates are known within their networks and which help the blind node determine its position.
The CC2431 location engine is based on RSSI technology [4]; RSSI is the strength of the wireless signal a node receives. The propagation loss can be calculated from the known transmit strength and the received strength, then converted to a distance using empirical models; the location of the node follows from existing algorithms and the fixed coordinates. The blind node receives packets from the reference nodes, obtains each reference node's coordinates and the corresponding RSSI value, and sends them to the location engine. We only need to write the required parameters into the engine; the result can be read out once the engine completes. The theoretical RSSI value is given by equation (1) [5].
RSSI = −(10·n·lg d + A)    (1)
Here the RF parameters A and n describe the network operating environment. The RF
parameter A is defined as the absolute value of the received signal strength at 1 m
from the transmitter. The RF parameter n is the path-loss exponent, which indicates how
quickly the signal energy decays as the distance between transmitter and receiver
increases. d is the distance between the transmitter and the receiver.
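Equation (1) can be inverted to turn a measured RSSI into a distance estimate. Below is a minimal Python sketch; the values of A and n are illustrative placeholders, since in a real deployment both would be calibrated on site.

```python
import math

def rssi_to_distance(rssi_dbm, a=45.0, n=2.5):
    """Invert RSSI = -(10*n*lg(d) + A) to estimate the distance d in metres.

    a (A): |received signal strength| at 1 m from the transmitter, in dB.
    n: path-loss exponent for the deployment environment.
    Both defaults are example values, not calibrated figures.
    """
    return 10 ** ((-rssi_dbm - a) / (10.0 * n))

# At exactly 1 m the model gives RSSI = -A, so the estimate returns 1 m.
print(rssi_to_distance(-45.0))  # -> 1.0
```

The same function with a weaker reading, e.g. `rssi_to_distance(-70.0)`, returns 10 m under these example parameters, showing how each 10·n dB of extra loss multiplies the estimated distance by ten.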
Fig. 1. Positioning layout, panels (a) and (b), showing the blind node, the reference
nodes, and the optimal ("best") reference node
The positioning operation uses the "best" reference node, i.e., the one with the
highest RSSI value. For example, in the area shown in Fig. 1(a), a reference node is
placed every 30 m in the X and Y directions. First, the reference node with the highest
RSSI value is found. Since the horizontal and vertical coordinates of each reference
node have a maximum length of 63.75 m, a roughly 64 m × 64 m range centred on the
"best" reference node is identified. As the RSSI value of this node is known, the
distance d1 between the blind node and this node is available. The remaining reference
nodes are located in the same way and their distances from the blind node (d2 to d8)
are calculated. After this calculation, the blind node's position is fixed in the
global grid, as shown in Fig. 1(b). Finally, all the fixed coordinates are fed into the
location engine and the final position is read out.
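The CC2431's location engine is proprietary, so the step that combines the distances d1 to d8 into a position is not spelled out. As one plausible stand-in (an assumption of this sketch, not the engine's actual algorithm), a weighted centroid gives nearer reference nodes more influence:

```python
def weighted_centroid(refs, dists):
    """Estimate a blind node's (x, y) from reference-node coordinates and
    RSSI-derived distances, weighting nearer references more heavily.
    A simple stand-in for the CC2431's proprietary location engine."""
    weights = [1.0 / max(d, 1e-6) for d in dists]  # avoid division by zero
    total = sum(weights)
    x = sum(w * rx for w, (rx, _) in zip(weights, refs)) / total
    y = sum(w * ry for w, (_, ry) in zip(weights, refs)) / total
    return x, y

refs = [(0, 0), (30, 0), (0, 30), (30, 30)]   # reference grid, 30 m spacing
dists = [21.2, 21.2, 21.2, 21.2]              # equidistant -> grid centre
print(weighted_centroid(refs, dists))          # -> (15.0, 15.0)
```

With equal distances the weights cancel and the estimate falls at the centre of the reference square, as expected.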
The network is built on a mesh [6] basic network topology, as shown in Fig. 2. It
establishes and maintains itself without human intervention. Each node can communicate
with at least one other node, and multi-hop routing is supported.
The network structure of the system is shown in Fig. 3. The supermarket is divided into
several regions, and each region establishes a sub-network based on the mesh topology.
The central node (also known as the gateway) initiates the regional sub-network and at
the same time handles communication between the server and the wireless network. Each
shopping cart is fitted with a mobile device containing a blind node to perform
positioning. Blind nodes can independently join and leave networks and can receive and
send data, but they do not act as routers. In addition, reference nodes are installed
at specific locations; since their coordinates within the network are known, they can
help the blind nodes determine their positions and also route data. The data collected
by the central node of each partition are forwarded to the server over a wired link.
After power-up, the mobile device carrying the blind node first joins the sub-network.
Each reference node announces its own coordinates to the blind node by sending data
packets. The blind node receives these packets, obtains the reference-node coordinates
and the corresponding RSSI values, and feeds them into the positioning engine; once the
engine finishes its calculation, the blind node's current position, i.e., the initial
coordinates, can be read out. After the customer enters a query term, the initial
coordinates are sent to the server together with the query via the reference nodes and
the central node. The server looks up the query term to obtain the target coordinates,
calculates the best path from these two coordinate pairs and the supermarket map, and
replies to the mobile device with this path as a set of coordinates. The customer then
follows the path; because the blind node can independently join and leave the different
sub-networks, it can continually refresh its current coordinates, achieving the
navigation function. In this network model, blind nodes can communicate with the server
from any location, so customers can get the latest product information anytime,
anywhere in the supermarket.
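The paper does not specify how the server computes the best path. As one plausible sketch, a breadth-first search over a hypothetical grid model of the supermarket floor (0 = aisle, 1 = shelf) yields a shortest route as the set of coordinates the server would return:

```python
from collections import deque

def best_path(grid, start, goal):
    """Breadth-first search over a supermarket floor grid (0 = aisle,
    1 = shelf). Returns the shortest list of (row, col) coordinates from
    the cart's current position to the target product, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:       # walk predecessors back to start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(best_path(grid, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

In practice the server would also re-plan whenever the blind node's refreshed coordinates drift off the returned route.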
Fig. 3. Network structure: server, gateway (central node), reference nodes, and blind
nodes
(1) Central node (gateway node): this part consists of a CC2430, a power supply, a
reset circuit and an antenna minimum system, and is the basic unit of network
communication and of the positioning system. Its structure is shown in Fig. 4(b).
All nodes in this system are implemented on the OSAL real-time operating system; the
OSAL layer provides message management, task synchronization and time management.
Fig. 5. Central node software process: receive PC data or node data, check whether the
calibration is correct, compute the calibration value, and send the result to the node
or out through the serial port
(2) Reference node: its own coordinates are known, and it acts as a router in the
positioning system. Its service is to provide the blind node with data packets
containing its position (X, Y) coordinates and its RSSI value [8]. This node must be
configured correctly within the area. Its work flow chart is shown in Fig. 6.
(3) Blind node: this is the mobile node in the positioning system and belongs to the
terminal devices. By receiving the coordinates and RSSI values of all reference nodes
in the positioning region, it can calculate its own coordinates using the positioning
algorithm [8]. Its work flow chart is shown in Fig. 7.
Fig. 6. Reference node software process
Fig. 7. Blind node software process
5 Conclusions
This paper designs a ZigBee network model with both communication and location
features, resting mainly on the CC2431's ZigBee network location capability, and
further proposes a mobile electronic supermarket shopping guide system that can locate
desired goods and offer the latest supermarket product information. Combining
positioning technology, communication technology and computer technology is a new and
meaningful field of study with good economic and social benefits: we only need to
update the relevant goods information on the server and check it on the mobile devices.
Deploying the system can make shopping more comfortable for customers and facilitate
the management of the supermarket.
References
1. Qu, L., Liu, S., Hu, X.: Technology and Application of ZigBee. Press of Beihang
University
2. Li, W., Duan, C.: Professional Training on Wireless Network and Wireless Location of
ZigBee. Press of Beihang University (2006)
3. Yao, Y., Fu, X.: Network Location of Wireless Sensor Based on CC2431. Information
and Electronic Engineering
4. Gao, S., Wu, C., Yang, C., Zhao, H., Chen, Q.: Teaching of ZigBee Technology. Press
of Beihang University
5. Chai, J., Yang, L.: Location System of Patients Based on ZigBee. Measuring and
Engineering of Computer
6. Ren, F., Huang, H., Lin, C.: Wireless Sensor Networks. Journal of Software
7. Sun, T., Yang, Y., Li, L.: Development Status of Wireless Sensor Network.
Application of Electronics
8. Sun, M., Chen, L.: Application of ZigBee in the Field of Wireless Sensor Network.
Modern Electronics Technique
The Research of Flame Combustion Diagnosis System
Based on Digital Image Processing
1 Preface
The combustion process inside a power-station boiler furnace is a complex physical and
chemical process. The flame temperature field distribution and the combustion condition
are of vital practical significance for the safe operation of the power station. With
the development of computer and electronic technology, flame image monitoring systems
built around digital image processing are becoming mainstream.
Based on the characteristics of the boiler combustion process, this article uses image
processing technology to extract features that characterize the flame. In view of the
high real-time requirements of combustion diagnosis, we developed a PC-plus-DSP
real-time image acquisition and combustion diagnosis system and tested it on a 200 MW
power plant boiler. Finally, we present the preliminary results of the study.
The optical system is an optical periscope; we use it to capture the flame image in the
furnace. After a prism changes the direction, an optical fibre carries the image out
onto the CCD camera's target surface. In order for the optical system to work safely in
the furnace, the
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 225–230, 2011.
© Springer-Verlag Berlin Heidelberg 2011
226 Y. Zhang, S. Hui, and Y. Zhang
The CCD camera is the Japanese WATEC Corporation's WAT-250D colour camera, with the
following parameters: 1/3-inch colour CCD, horizontal resolution of 450 TV lines,
signal-to-noise ratio of 48 dB, and a 12 V DC power supply.
The image processing and combustion diagnosis system consists of a PC and a DSP
coprocessor inserted into one of the PC's PCI slots. The DSP coprocessor performs image
collection, processing and feature extraction, while the PC, as host, displays the
combustion diagnosis results in real time and controls the system.
The system works as follows: the scene is imaged through the optical periscope and
passed to the CCD target surface through the image fibre. The CCD camera captures the
flame image at 25 frames per second and converts it into a video signal. The DSP
coprocessor acquires the real-time image information of the boiler and sends it to the
PC through the PCI bus for further processing and combustion diagnosis.
The average gray is defined from the mathematical expectation of the gray levels; it
reflects the average light intensity of the flame radiation. Take a threshold value r
in the image processing step, and let p(i) be the probability of the ith gray level,
where r < i < L and L is the maximum gray level of the image. With I_aver denoting the
average gray of the image, we have:

    I_aver = (1/m) Σ_i i·p(i)    (1)
3.2 Variance
The variance reflects the non-uniformity of the flame temperature distribution: the
bigger the variance, the bigger the temperature differences across the temperature
field. When the combustion chamber operates at low load, the amount of coal burned is
reduced, so the heat released by combustion decreases; the flame temperature in the
furnace and the water-wall surface temperature therefore drop, which slows the
combustion reaction and the heat release rate and delays ignition of the
pulverized-coal air stream compared with the high-load situation. This causes the flame
kernel to rise and changes the spatial share of pulverized-coal combustion, so the
furnace's temperature field is spatially more uniform than in the high-load situation.
Based on probability theory, the variance can be defined as follows:

    σ² = (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} [f(i, j) − f̄]²    (2)
In the formula, M and N are the height and width of the image (in pixels), f(i, j) is
the gray value at (i, j), and f̄ is the average gray, with f̄ = I_aver.
If the current combustion is stable, the gray level should stay within a threshold band
and the variance should be relatively stable. If combustion is unstable and tends
towards flameout, this shows up as a falling gray value and a large change in the gray
variance.
Let S_i be the flame area at the ith sampling, G the number of image pixels, g_j the
gray value of the jth pixel of the image at the ith sampling, and g_th the preset
threshold; then

    S_i = Σ_{j=1..G} L(g_j − g_th)    (3)

where L(x) is the step function, defined as:

    L(x) = 1 for x ≥ 0; L(x) = 0 for x < 0    (4)
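Equations (3) and (4) amount to counting the pixels whose gray value reaches the threshold. A minimal sketch, using the reconstructed form of equation (3):

```python
def step(x):
    """Step function L(x) of equation (4)."""
    return 1 if x >= 0 else 0

def flame_area(pixels, g_th):
    """Effective flame area: the count of pixels whose gray value reaches
    the preset threshold g_th, per the reconstructed equation (3)."""
    return sum(step(g - g_th) for g in pixels)

# Three of these five pixel values reach the threshold 128.
print(flame_area([10, 200, 180, 90, 250], g_th=128))  # -> 3
```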
From combustion theory it is known that, other conditions being equal, the higher the
combustion temperature, the more stable the combustion: the flame's ability to resist
disturbances is stronger. Therefore the flame's effective area and the area of the
high-temperature region reflect combustion stability and can serve as a basis for
combustion diagnosis.
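The stability criterion described above (a falling average gray together with a large gray variance suggesting flameout) can be sketched as a toy decision rule. The threshold values here are purely illustrative, not taken from the paper.

```python
def diagnose(gray_values, gray_min, var_max):
    """Toy combustion-stability check: flag 'unstable' when the average
    gray falls below gray_min or the gray variance exceeds var_max.
    Both thresholds are illustrative placeholders."""
    n = len(gray_values)
    mean = sum(gray_values) / n
    var = sum((g - mean) ** 2 for g in gray_values) / n
    return "stable" if mean >= gray_min and var <= var_max else "unstable"

print(diagnose([150, 152, 149, 151], gray_min=100, var_max=50))  # -> stable
print(diagnose([60, 140, 55, 145], gray_min=100, var_max=50))    # -> unstable
```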
[Figure: DSP software flow — start, initialize (with error handling), image
acquisition, wait until an acquisition is completed, image processing, transfer the
results of image processing]
warning output, real-time display of the flame combustion situation, information on
combustion in the entire chamber, the temperature field, and so on. The PC software is
developed with VC++ 6.0 and includes a data communications module, a data recording
module and a historical trends module.
Fig. 3. Flame original image
Fig. 4. Image characteristic quantity extraction interface
Fig. 3 is an image taken while co-firing with bituminous coal during start-up. Fig. 4
shows the results of flame image pre-processing and the characteristic quantities
computed by the DSP.
Fig. 5 is the history trend chart of the start-up process recorded by the PC. The left
box shows the history trend after image processing: in the chart, A is the flame
average gray, B is the average gray variance, C is the flame effective area, and D is
the effective area variance; the box on the right holds the settings for the relevant
parameters. From this chart we can see that the above flame characteristic quantities
reflect the boiler flame combustion condition in real time and can be taken as the
basis for combustion diagnosis.
6 Conclusion
Compared with traditional flame detection methods, combustion diagnosis based on image
processing has incomparable advantages. Preliminary experiments indicate that the
PC-plus-DSP real-time image acquisition and combustion diagnosis system designed in
this article can complete image processing in real time and carry out combustion
diagnosis effectively.
Design and Research of Virtual Instrument
Development Board
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 231–238, 2011.
© Springer-Verlag Berlin Heidelberg 2011
232 L. Zhang, T. Li, and Z. Chen
system, VXI system, PXI system, data acquisition system, or other systems; you can also
combine two or more of them into a hybrid system. One of the simplest and cheapest
forms is an ISA- or PCI-bus data acquisition card, or a portable data acquisition
module on the RS-232 or USB bus. Virtual instrument software comprises three levels:
the operating system, instrument drivers, and application software. The operating
system can be Windows 9x/NT/2000/XP, SunOS, Linux, and so on. The instrument driver
software consists of the drivers that directly control the various hardware interfaces;
it provides the communication link between the application software and the peripheral
hardware modules. The application software implements the instrument's functions and
its virtual front panel, through which the user interacts with the virtual instrument.
3.1.2 PS/2 Interface
Figure 1. MCU and PC interface circuits
Figure 2. PS/2 connector pin definition
Pin assignment: 1: data line (DATA); 2: not used; 3: power ground (GND); 4: power
(+5 V); 5: clock (CLK); 6: not used.
The PS/2 interface now widely used on PCs is a mini-DIN 6-pin connector. PS/2 devices
are divided into master and slave: the master device carries the female socket and the
slave device the male plug. The PS/2 keyboards and mice in wide use today work as slave
devices. The PS/2 clock and data lines are open-collector, so they must have external
pull-up resistors (generally fitted on the master-device side). Data communication
between master and slave is synchronous, serial and bidirectional, with the clock
signal generated by the slave device.
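A detail not spelled out above: each standard PS/2 transfer is an 11-bit frame — a start bit (0), 8 data bits LSB-first, an odd parity bit, and a stop bit (1) — sampled on the slave-generated clock edges. A sketch of decoding one captured frame:

```python
def decode_ps2_frame(bits):
    """Decode one 11-bit PS/2 frame (start=0, 8 data bits LSB-first,
    odd parity, stop=1), as sampled on the falling clock edges.
    Returns the data byte, or raises on a framing/parity error."""
    if len(bits) != 11 or bits[0] != 0 or bits[10] != 1:
        raise ValueError("framing error")
    data = bits[1:9]
    byte = sum(b << i for i, b in enumerate(data))   # LSB first
    if (sum(data) + bits[9]) % 2 != 1:               # odd parity check
        raise ValueError("parity error")
    return byte

# 0x1C is the PS/2 scan-code-set-2 'make' code for the key 'A'.
frame = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1]
print(hex(decode_ps2_frame(frame)))  # -> 0x1c
```

On the MCU side the same logic would run bit-by-bit in a clock-edge interrupt rather than on a captured list, but the framing and parity rules are identical.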
Fig. 3. CPLD (EPM7128SLC84-15)
Fig. 4. AD circuit
one can choose a lower-precision, higher-speed 8-bit ADC for faster conversion, while a
12-bit ADC can be used where higher accuracy is required; general precision
requirements can be met on the virtual instrument development board. Consequently, this
chip can satisfy the needs of applications with different requirements for virtual
instrument development.
Fig. 5. DA circuit
Figure 7 shows the design parameters and circuit diagram of the 30 MHz filter.
Substantial Development Strategy of Land
Resource in Zhangjiakou
Abstract. Land resource is the foundation of human survival and development, as well as
a kind of non-renewable resource, so the sustainable development of land resources is
very important. This paper analyses problems in the development of land resources in
Zhangjiakou and then puts forward proposals such as classifying the land resources and
improving land-use efficiency.
1 Introduction
Marx said that land is the first resource for human beings, the mother of wealth, and
the origin of all production and wealth. Indeed, land resource is the basis and origin
of human life and development, and it is non-renewable. However, China's per capita
land resource is low, and the growing contradiction between people and land makes
people realize the importance and necessity of the sustainable development of land
resources. Hence, cherishing every inch of land, developing, using and managing land
resources reasonably and scientifically, and achieving the sustainable development of
land resources is one of the important tasks we now face.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 239–243, 2011.
© Springer-Verlag Berlin Heidelberg 2011
240 Y. Sun, S. Wang, and Y. Zhao
Little of the land in Zhangjiakou is covered with vegetation or well supplied with
water, which leads to bare, sandy soils and hinders sustainable development. In detail,
the problems are as follows. (1) Soil erosion: 63.8% of the basin is eroded, of which
9.6% suffers strong erosion (over 10,000 t/(km²·a)), 15.6% serious erosion
(5,000-10,000 t/(km²·a)), 52.4% medium erosion (500-5,000 t/(km²·a)), and 22.4% light
erosion (below 500 t/(km²·a)); besides, about 30,000 hm² of topsoil (0-10 cm) is eroded
per year. (2) Lack of water: as 86% of the plough land is short of water, some
agricultural activities cannot be carried out as usual. (3) Shortage of nutrients:
rapidly available phosphorus in 81% of the plough land is less than 5 mg/kg, total
nitrogen in 46% of the land is less than 0.075%, organic matter in 27% of the land is
less than 1%, and rapidly available potassium in 44% of the land is less than
100 mg/kg. As for trace elements, boron in 24% of the land is less than 0.5 mg/kg,
molybdenum in 82% is less than 0.15 mg/kg, iron in 51% is less than 4.5 mg/kg,
manganese in 81% is less than 5 mg/kg, and zinc in 79% is less than 0.5 mg/kg. (4) Thin
soil: 2.5% of the soil is thinner than 30 cm, which is unfavourable for forest growth.
(5) Salinization and alkalization: 7.4% of the land is salty. (6) Poorly structured
soil: 22.6% of the soil has a calcium deposition layer, 92% of it within 50 cm of the
topsoil; besides, 4% of the soil is red soil, 4.45% has a sand and gravel layer, 1.44%
of the topsoil is hardened, and 21% of the topsoil contains more than 10% gravel.
The economy of Zhangjiakou develops rather slowly; it is one of the poor regions of
Hebei Province, with weak agriculture. Because of the harsh natural environment, there
is much system output and little system input, which is seriously unbalanced.
Moreover, because of the strong wind, abundant sand and over-ploughed land,
1,200,000 hm² of the land has undergone desertification, which is 33% of the city's
total land and 44.1% of the province's desertified area. Among these areas, the
desertification in Kangbao, Zhangbei, Guyuan and Shangyi is the most serious. According
to research in Kangbao, the thickness of the eroded soil has declined by 5-10 cm, and
this has become the prominent factor restricting local development. Therefore, once the
original simple, weak balance is broken and before a new one is established, the
eco-system loses its self-sustaining ability, which will inevitably give rise to a
vicious cycle.
3 Strategy
The first-class land lies in the valley basins of river bottoms and the
irrigation-silted, brown and cinnamon soils at lake sides; it should be the material
foundation for solving the grain problem. The third-class land lies in the meadow soil
on the hills and the brown calcium, chestnut and meadow soils; forage should be planted
there to enlarge the grazing capacity and fertilize the soil, which helps achieve much
output with little input. The second-class land is the link between the above two
sorts. After this reorganization, grazing in animal husbandry should be replaced with
barn feeding, for barn feeding can not only raise the utilization ratio of forage
grass, but also boost soil fertility, reduce damage and quicken the growth of forests;
thus a new scientific eco-balance will be on the way.
As we know, water is the first problem to solve in Zhangjiakou. There are two ways:
opening sources and throttling. The former means developing and using water: developing
irrigation farming and water conservancy projects, digging wells and building
reservoirs. All these works are helpful on the whole, but at the same time we should
recognize the facts: there is little surface water, heavy water loss, and poor
groundwater that is deeply buried and difficult to extract. Therefore, solving the
water problem in a short period is beyond the city's economic ability. Throttling, on
the other hand, means making full use of the existing water and rainfall, raising the
utilization ratio, and developing dry farming. Developing dry farming suits the natural
conditions of Zhangjiakou and is a necessary choice.
First, basic construction should be carried out to enhance agricultural production. For
instance, basic dry farms should be built in the dam-top area; in the dam-bottom hill
area, water should be rechannelled and collected; and in the dam-bottom deep-water
area, the low-yield farms should be transformed. Next, we should fertilize the soil and
plant vegetation; in doing so, spreading farm manure and fertilizing the soil should be
the basic points: more than 45 m³ of farm manure and 450 kg of phosphate per acre of
land, with advanced fertilizing technology popularized. Meanwhile, it is necessary to
build shelter forests and plant Caragana korshinskii or medlar. Finally, we should
reorganize the layout of dry-land crops and popularize drought-resisting and
drought-enduring varieties. For example, in the dam-top and dam-bottom areas, wheat and
naked oats should be restricted while potatoes and beans are expanded; in the
dam-bottom hill area, grains and potatoes should be the mainstay.
The ecological imbalance of the dam-top area is the most serious in Hebei. Drought,
wind and sand, and salinization are so frequent that much of the land is medium- or
low-yield field. Hence we should maintain the quantity and quality of the basic
farmland, improve the low-yield and sandy soils, prevent desertification and
degradation of the soil, develop water-saving agriculture, choose the land and crops
according to the available water, and not open up virgin soil aimlessly. In addition,
sandstorms should be managed
in the desert areas, and grass should be planted in the agriculture-husbandry areas. In
detail:
Firstly, in the northern dam-top plateau, two sand-prevention systems should be built.
To address the disadvantages, the sandy soil within the nets can be improved by
returning plough land to forests and grasses; making use of the advantages, we can
cover the soil with plants and enlarge the coverage rate, so as to build a new
eco-balance and restore the grassland scenery.
Secondly, the main problems in the southeast dam-bottom area are serious water erosion,
a thin soil layer and nutrient shortage, but there are also many advantages. Hence,
preventing soil erosion, protecting the soil and fertilizing it are the prominent aims.
In addition, the southern dam-bottom area has many disadvantages: sparse vegetation,
large areas of bare yellow land, serious soil erosion, thin hill soil, dry soil and
little nutrient. Meanwhile, it is suitable for agriculture; therefore preventing soil
erosion, protecting the soil, conserving water and fertilizing the soil are the
prominent aims. When the hills are used, the coverage rate should be greatly enlarged,
and a grain-producing base with crops, melons and fruits should be built.
3.4 Enlarge Technology Input and Improve Land-Use Efficiency and Benefit
First and foremost, relying on dry-farming research, we should remove the obstacles to
low yields and form a stable comprehensive production capability. At the same time,
summarizing the experience of managing wind, sand and soil, we should master the key
technologies (such as water-saving irrigation on dry farmland and improved planting
techniques) and the matching technologies, and make the best of them. In addition, we
should increase the input of capital and materials to ease the contradiction between
input and output.
Secondly, we should develop precision farming, protect the basic farmland, practice
multiple cropping and develop land-saving agriculture; popularize water-saving
irrigation technology and develop water-saving agriculture; make the best use of the
land and develop courtyard and cubic economies. What is more, we should bring in
advanced agricultural technologies and set up scientific farming patterns.
Finally, we can enhance land-use efficiency and benefit by improving the deployment and
level of technical personnel and fostering technology-model households.
4 Conclusion
This article analysed the causes of the unreasonable development of land resources in
Zhangjiakou and set out some detailed strategies, such as optimal land use, better
management and greater high-technology input. The author hopes that these measures can
contribute to the sustainable development of land resources in Zhangjiakou.
Computational Classification of Cloud Forests in
Thailand Using Statistical Behaviors of Weather Data
1 Introduction
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 244–250, 2011.
© Springer-Verlag Berlin Heidelberg 2011
We installed an automatic weather station (Davis Vantage Pro II Plus) at each of nine
study sites (Fig. 1a and 1b). The installation periods differed between sites, as
follows: (1) KHN (September 2007), (2) WLU (August 2006), (3) NST (June 2006), (4) NHQ
(September 2007), (5) HUL (November 2006), (6) DFC (January 2009), (7) DHC (March
2007), (8) MNC (January 2009) and (9) DIC (March 2008). A data logger stored all
weather data at 30-minute intervals.
Fig. 1. (a) The map of Thailand and the Doi Inthanon (DIC) study site (9) and (b) Mt.
Nan National Park. The numbers represent (1) KHN, (2) WLU, (3) NST, (4) NHQ, (5) HUL,
(6) DFC, (7) DHC, and (8) MNC.
In this study, the weather data (temperature, relative humidity and solar radiation) at
the nine study sites were fitted with bimodal and power-law distributions of relative
frequency. For the temperature data, a bimodal distribution was fitted to the histogram
at all sites except DIC, which required three normal components. For the relative
humidity data, we proceeded as follows: (1) the bimodal distribution was used for the
KHN, WLU, NST and NHQ sites, and (2) elsewhere the power-law distribution was fitted to
the first subpopulation and normal distributions to the second and, where applicable,
third subpopulations. For solar radiation, only data recorded between 06:00 and 18:00
were analysed; the power-law distribution was fitted to the first subpopulation and
normal distributions to the second and third subpopulations at all sites.
The bimodal distribution and the power-law distribution are given by equations (1) and
(2), respectively:

    f(x) = A1·exp(−(x − μ1)² / (2σ1²)) + A2·exp(−(x − μ2)² / (2σ2²))    (1)

    f(x) = C·x^(−α)    (2)
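The two models can be written down directly and then fitted to the relative-frequency histograms, e.g. with a least-squares routine such as scipy.optimize.curve_fit (tooling assumed here, not stated in the paper). A pure-Python sketch of the model functions, with illustrative parameter values:

```python
import math

def bimodal(x, a1, mu1, s1, a2, mu2, s2):
    """Two-component Gaussian mixture of equation (1); the six parameters
    would in practice be fitted to the histogram."""
    g1 = a1 * math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2))
    g2 = a2 * math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2))
    return g1 + g2

def power_law(x, c, alpha):
    """Power-law model of equation (2)."""
    return c * x ** (-alpha)

# The fitted peaks sit at the component means mu1 and mu2 -- in the
# paper's interpretation, the rainy- and summer-season mean temperatures.
# Parameter values below are illustrative only.
print(bimodal(24.0, 0.2, 24.0, 1.5, 0.1, 31.0, 2.0))
```

Evaluated at x = mu1 the first component contributes its full amplitude a1, which is why the histogram peaks identify the subpopulation means.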
Computational Classification of Cloud Forests in Thailand 247
3 Results
Temperature at all sites was bimodally distributed except at the DIC site, which showed
a multimodal distribution with three subpopulations (Fig. 2a-2i). The cloud forest
sites had lower values of the fitted subpopulation parameters than the other sites
(Fig. 2a-2i).
Relative humidity at the KHN, WLU, NST and NHQ sites was fitted by the bimodal
distribution (Fig. 3a-3d); the HUL, DFC, DHC and MNC sites were fitted by the power-law
and normal distributions (Fig. 3e-3h). The DIC site differed from the other sites in
that it was fitted by the power-law and bimodal distributions (Fig. 3i).
Fig. 2. Temperature distribution with bimodal curve at nine study sites. (a-c) coastal sites,
(d-e) lowland rainforests and (f-i) cloud forests
Solar radiation at the KHN, WLU, NST, NHQ and MNC sites was fitted by the power law and bimodal distributions (Fig. 4a–4d and 4h). The HUL, DHC and DIC sites were fitted by the power law and normal distributions (Fig. 4e, 4g and 4i). The DFC site was fitted by the power law distribution alone (Fig. 4f).
Fig. 3. The percentage of relative humidity distribution with bimodal curve at nine study sites.
(a-c) coastal sites, (d-e) lowland rainforests and (f-i) cloud forests
Fig. 4. Solar radiation distribution with bimodal curve at nine study sites. (a-c) coastal sites,
(d-e) tropical forests and (f-i) cloud forests
The results from cluster analysis of temperature, relative humidity and solar radiation identified two distinct groups of forests: (1) the DFC, DHC, DIC, HUL and MNC study sites, and (2) the NHQ, NST, KHN and WLU sites (Fig. 5).
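The grouping step can be sketched as a minimal single-linkage agglomerative clustering in Python. The per-site feature values below are illustrative placeholders, not the paper's measurements, and a real analysis would standardize the features first:

```python
import math

# Hypothetical per-site features: (mean temperature, mean RH, mean solar radiation).
# Illustrative values only -- not the paper's measured data.
sites = {
    "KHN": (27.5, 78, 480), "WLU": (27.0, 80, 470), "NST": (27.8, 77, 490),
    "NHQ": (26.5, 82, 450), "HUL": (20.1, 93, 300), "DFC": (21.1, 94, 290),
    "DHC": (19.2, 95, 280), "MNC": (17.8, 96, 310), "DIC": (11.0, 97, 260),
}

# Single-linkage agglomerative clustering down to two groups
clusters = [[name] for name in sites]
while len(clusters) > 2:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(math.dist(sites[p], sites[q])
                    for p in clusters[i] for q in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    clusters[i] += clusters.pop(j)  # merge the two closest clusters

print([sorted(c) for c in clusters])
```

With these placeholder features the two resulting groups separate the cloud-forest-like sites from the coastal-like sites, mirroring the dendrogram structure of Fig. 5.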
Fig. 5. Cluster analysis of temperature, relative humidity and solar radiation data
4 Discussion
The bimodal model of temperature distribution is composed of two subpopulations, corresponding to the mean temperatures in the rainy and summer seasons. The DIC site had three subpopulations, the coolest of which corresponded to the mean temperature in winter.
The solar radiation data also grouped the study sites into two distinct sets: (1) the DFC, DHC, DIC and HUL study sites, and (2) the MNC, NHQ, NST, KHN and WLU sites (Fig. 5).
This study showed that MNC, DHC and DFC had lower subpopulation means than the other sites. This indicates that the peak of the first subpopulation can be used as an indicator of cloud forest located near the equator. On the other hand, this could not be applied to a high-latitude cloud forest such as DIC, because in winter (November to February) the temperature was lower than in the other months. The first temperature peak was 6.97 °C; the second peak, 11.75 °C, was the mean temperature in the rainy season; and the third peak, 14.95 °C, was the mean temperature in summer. It was found that DFC had a slightly higher temperature than DHC and MNC, with means of 21.12, 19.22 and 17.80 °C, respectively. This could be due to the fact that DFC is located near a coastal area at low elevation (i.e. 700 m a.s.l.). These results support Bruijnzeel's finding that TMCFs can occur at lower altitudes [1], [8], [9].
The atmospheric characteristics of the nine study sites could be classified by cluster analysis into two groups of forests: cloud forest sites (MNC, DFC, DHC and DIC) and coastal sites (NHQ, NST, KHN and WLU). The HUL site differed from the other sites. This may result from the influence of its specific location: HUL is located in a valley with frequent cloud presence and a water stream, and the canopies of its tall trees are covered with fog in the morning. The MNC study site is located at the top of a hill that is exposed to solar radiation and lacks canopy cover.
The bimodal distribution of solar radiation showed the means of the first and second subpopulations. NHQ, NST, KHN and WLU had higher means than the other sites.
5 Conclusion
Bimodal distribution of temperature can be used to separate forest types. For the nine study sites, the mean and variance of the bimodal distribution can group them into two types: (1) four cloud forest sites (the DHC, DFC, MNC and DIC stations), and (2) two lowland rainforests (the HUL and NHQ stations) and three coastal sites (the WLU, KHN and NST stations).
In this study, the relative humidity distribution was fitted by the power law and bimodal distributions at four study sites, among them the coastal WLU, KHN and NST sites, and by the power law and normal distributions at the other five study sites.
The temperature, relative humidity and solar radiation data could be used to analyze weather variation and distribution. Further, these data could be generalized for a better understanding of weather fluctuations, which tend to change under the warming global climate.
References
1. Bruijnzeel, L.A., Proctor, J.: Hydrology and biochemistry of tropical montane cloud forest: what do we really know? In: Hamilton, L.S., Juvik, J.O., Scatena, F.N. (eds.) Tropical Montane Cloud Forest, pp. 38–78. Springer, New York (1995)
2. Bubb, P., May, I., Miles, L., Sayer, J.: Cloud Forest Agenda. UNEP-WCMC, Cambridge (2004)
3. Foster, P.: The potential negative impacts of global climate change on tropical montane cloud forests. Earth-Science Reviews 55, 73–106 (2001)
4. González-Mancebo, J.M., Romaguera, F., Losada-Lima, A., Suárez, A.: Epiphytic bryophytes growing on Laurus azorica (Seub.) Franco in three laurel forest areas in Tenerife (Canary Islands). Acta Oecologica 25, 159–167 (2004)
5. Grace, W., Curran, E.: A binormal model of frequency distributions of daily maximum temperature. Australian Meteorology Management 42, 151–161 (1993)
6. Hamilton, L.S., Juvik, J.O., Scatena, F.N.: The Puerto Rico Tropical Cloud Forest Symposium: Introduction and Workshop Synthesis. In: Hamilton, L.S., Juvik, J.O., Scatena, F.N. (eds.) Tropical Montane Cloud Forests, pp. 1–23. Springer, New York (1995)
7. Stadtmüller, T.: Cloud forests in the humid tropics: a bibliographic review, pp. 1–81. The United Nations University, Tokyo (1987)
8. Still, C.J., Foster, P.N., Schneider, S.H.: Simulating the effects of climate change on tropical montane cloud forests. Nature 398, 608–610 (1999)
9. Weathers, K.C.: The importance of cloud and fog in the maintenance of ecosystems. Trends in Ecology and Evolution 14, 214–215 (1999)
Research on Establishing the Early-Warning Index
System of Energy Security in China
1 Introduction
Energy is an essential material base on which people rely for survival and production; it is the artery of the economy. China's minable oil reserves are only 3% of the world total. In recent years, China's energy security has faced unprecedented threats. Take oil for example: our oil consumption is second only to that of America, so we have to bridge the gap between poor self-sufficiency and growing demand through imports. According to surveys, the share of imports rose from 7.6% in 1995 to more than 50% in 2008, which undoubtedly increased the uncertainty of the oil supply. On the other hand, crude oil prices fluctuate on the international market: for instance, the price passed 100 dollars per barrel on February 2nd, 2008, and later reached 147.24 dollars per barrel. This raised the cost of our oil trade; moreover, large fluctuations spread through the national economy, affecting not only people's daily life but also national security. Energy security has therefore become a sensitive issue.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 251256, 2011.
Springer-Verlag Berlin Heidelberg 2011
252 Y. Zhao, M. Zhang, and Y. Sun
consumption system, and the coal-centered system shifted to an oil-centered one; because oil deposits are concentrated unevenly on the earth, this raised people's concern about the assurance of the oil supply. At that time, energy security referred to whether there were sufficient energy reserves, adequate production capacity and smooth distribution channels.
In the 1970s, around the time of the fourth Middle East War, when the oil-exporting countries raised the crude oil price, the most serious global economic crisis since the Second World War swept the world. Only then did people realize that fluctuations in energy prices could affect the operation of national economies, and the concept of energy security was extended to cover both sufficient supply and stable prices.
With the growth of global energy consumption, a series of problems such as global warming and air pollution have emerged, and people have begun to realize that energy use should not threaten human survival and the environment. The safety of energy use has therefore been incorporated into the earlier concept of energy security.
The concept of energy security mainly includes the following three levels: sufficient supply, meaning that the energy supply can meet the needs of a country or region; stable prices, meaning that price fluctuations do not exceed what the country or region can bear; and safe use, meaning that energy use does not threaten the environment on which human beings depend. Among these three levels, the first is the foundation and also the one most focused on.
30%, which means the other 70% is left underground. In fact, with advanced technology the recovery rate reaches 50% to 60%, so oil productivity in developed countries is one to two times higher than in our country.
3) Highly concentrated trade routes lead to a more serious energy supply risk
Entering the 21st century, our oil imports have grown rapidly with economic growth. Because domestic production cannot meet the increasing oil consumption demand, we rely mainly on imports. In 2007, total imports amounted to 211.394 million tons, 62% of our total consumption. Since 2000, we have imported over 50% of our oil from the Middle East, and the share has grown year by year. Our oil supply thus depends heavily on the outside, and the import sources are concentrated in the Middle East, the most unstable region socially, politically and militarily, largely because of its oil. As oil is not renewable, unless a new fuel replaces it, international conflicts over oil will not stop.
Oil is not only an important energy source in the national economy but also an important raw material for the chemical industry, so it can be called the food of industry, "black gold" and the blood of the economy. Wolfensohn has said that if the oil price rises by 10 dollars per barrel for a year, the world economic growth rate will decline by 0.5%, and that of developing countries by 0.75%, so the influence of oil price fluctuations on them is more obvious. In 2008, fluctuations in the international oil price at one point confronted our country with serious inflation. Once the oil price surges, a chain of reactions follows: costs increase, leading to inflation and recession in the national economy. On the other hand, a price that is too low damages the interests of the oil industry, which then limits or stops production, giving rise to an oil crisis; once supply stops, national economic development is also affected. So whether too high or too low, the oil price poses a serious threat to our national economic development.
In the mid-1980s, with global warming and declining air quality, environmental problems once again came into focus. Our consumption structure is unreasonable: although many experts call for reducing the share of coal, coal still accounts for about 70% of total energy consumption. Coal is an impure energy source; in particular, raw coal that is not completely burned releases large amounts of CO2 and SO2 and worsens the environment. SO2 is the main cause of acid rain and CO2 the leading driver of global warming, and as 80% of coal is burned directly, the pollution is aggravated further. This not only threatens our present environment and harms our international image, but will also pass on to our descendants and damage our long-term interests.
Level 1: Degree of energy security
Level 2:
1) Oil reserve-production ratio
2) Oil recovery ratio
3) Concentration ratio of oil imports
4) Dependency of oil consumption on imports
5) Energy consumption per GDP
6) Energy consumption elasticity index
7) Rate of change in oil price
8) Rate of reduction in CO2 discharge
9) Rate of reduction in SO2 discharge
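A composite early-warning score over the nine Level-2 indices could be computed along the following lines. The weights, normalized index values and warning thresholds in this Python sketch are purely illustrative assumptions, not values from this article:

```python
# Hypothetical composite early-warning score from the nine Level-2 indices.
# Each index is assumed pre-normalized to [0, 1], where 1 = safest.
indices = {
    "oil_reserve_production_ratio": 0.7,
    "oil_recovery_ratio": 0.5,
    "import_concentration": 0.3,
    "import_dependency": 0.4,
    "energy_per_gdp": 0.6,
    "consumption_elasticity": 0.5,
    "oil_price_change": 0.4,
    "co2_reduction_rate": 0.6,
    "so2_reduction_rate": 0.7,
}
weights = {k: 1 / len(indices) for k in indices}  # equal weights, for the sketch only

score = sum(weights[k] * indices[k] for k in indices)
# Illustrative thresholds for the warning level
level = "safe" if score >= 0.6 else "warning" if score >= 0.4 else "alarm"
print(round(score, 3), level)
```

In a real application the weights would be set by expert judgment or a method such as AHP, and each raw index would first be normalized against its historical range.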
5 Conclusion
The problem of energy security has begun to hinder the further development of the national economy and to threaten social stability. This article has built a set of energy security early-warning indices based on an analysis of national energy security. Changes in the indices can reflect problems in national energy security. The author hopes that the early-warning index system built in this article can contribute to strengthening risk management and realizing the sustainable development of the national economy.
References
1. Kong, L.B., Hou, Y.B.: Study on the National Energy Safety Model. Metal Mine 11, 3–5 (2002)
2. He, Q.: Discussion and Strategy about the Energy Security of China. China Safety Science Journal 6, 52–57 (2009)
3. Ma, W., Wang, Z., Huang, C.: Some Issues of Energy Security of China and Tentative Solutions. Studies in International Technology and Economy 4, 7–11 (2001)
4. Gao, J.-l., Ou, X.-y.: Discussion on China's Low-carbon Economy under the Restriction of Energy Security. Journal of Lanzhou Commercial College 2, 38–43 (2010)
5. Li, S.-x.: Study on the Energy Efficiency Strategy and the Improvement of National Energy Security. Journal of China University of Geosciences (Social Sciences Edition) 3, 47–50 (2010)
Artificial Enzyme Construction with
Temperature Sensitivity
1 Introduction
In biological organisms, overproduction of reactive oxygen species (ROS), such as superoxide anions, H2O2, organic peroxides, and hydroxyl radicals, can result in a variety of human diseases, for example ischemia/reperfusion injury, atherosclerosis, neurodegenerative diseases, cancer, and allergy [1]. The antioxidant enzymes superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx) contribute predominantly to the cellular antioxidative defense against oxidative stress in the human body [2]. SOD is a metalloenzyme that catalyzes the dismutation of the superoxide radical anion to form H2O2 and dioxygen. H2O2 is then detoxified either to H2O and O2 by catalase (CAT) or to H2O by glutathione peroxidase (GPx) [3]. However, only when an appropriate balance between the activities of these enzymes is maintained can optimal protection of cells be achieved [4]. Owing to their biologically crucial role, considerable effort has been devoted to designing artificial
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 257262, 2011.
Springer-Verlag Berlin Heidelberg 2011
258 T. Lin et al.
Fig. 1. Schematic representation of the enzyme models: (A) the smart GPx enzyme model; (B) preparation of the bifunctional enzyme model
In the last decade, stimuli-responsive polymers have attracted strong scientific and technical interest due to their reversible properties in response to changes in environmental factors such as pH or temperature [5]. Among these materials, the thermally sensitive poly(N-isopropylacrylamide) (PNIPAAm) attracts particular attention. PNIPAAm is also called a smart material. It has a volume phase-transition temperature, or lower critical solution temperature (LCST), at about 32 °C: below the LCST the particles are swollen, and above this temperature they are collapsed [6]. Detailed knowledge and control of the swelling behavior will also lead to a better physical understanding of the mechanism of the observed volume phase transition. This knowledge can then be used to design complex materials. Due to the reversible phase transition, PNIPAAm-based microgels can be used in many different application fields, for example as functional materials for drug delivery, optical filtering and controlled biomolecule recovery.
To construct antioxidant mimics with high catalytic efficiency, we focus on PNIPAAm in the following respects: 1) we designed and synthesized a smart GPx enzyme model, the block copolymer PAAm-b-PNIPAAm-Te, by ATRP, in which the GPx active site (a tellurium moiety) was incorporated into the PNIPAAm chain (the designed functional monomers and the substrate structures are given in Fig. 1A); 2) to further design a smart bifunctional antioxidant enzyme model with both SOD and
2 Results
2.1 Stimuli-Responsive Micellization
Fig. 2A shows the temperature dependence of the average hydrodynamic radius <Rh> for PAAm-b-PNIPAAm-Te in aqueous solution. The critical micellization temperature of PAAm-b-PNIPAAm-Te was found to be 34 °C, which is higher than that of PNIPAAm homopolymers. This is due to the fact that PNIPAAm is now attached to a hydrophilic PAAm block. Furthermore, the actual morphology of the micellar structure was observed by SEM (Fig. 3A).
Fig. 2B shows the temperature dependence of <Rh> for the pseudo-block copolymer and β-CD-PEG-b-PNIPAAm-Te in aqueous solution. The morphologies of β-CD-PEG-b-PNIPAAm-Te and the star-shaped pseudo-block copolymer were then characterized directly by SEM; the average diameters were about 300 and 150 nm, respectively (Fig. 3B).
The GPx-like activity of the block copolymer catalyst for the reduction of cumene hydroperoxide (CUOOH) by 3-carboxy-4-nitrobenzenethiol (TNB, 1) was evaluated according to a modified method reported by Hilvert et al. [7]. The activities are given (vide infra) assuming one catalytic center (Te moiety) per molecule as one active site of the enzyme. The tellurium content of the polymer was determined by UV titration analysis. The reaction was initiated by the addition of hydroperoxide, and the decrease of the absorbance at 410 nm (pH 7.0) was recorded for a few minutes to calculate the reaction rate. The relative activities are summarized in Table 1. Assuming that the rate has a first-order dependence on the concentration of catalysts, these data suggest that the GPx activity of the block copolymer catalyst is at least 251,000-fold higher than that of PhSeSePh, and about 7-fold higher than that of the previously reported selenium-micelle enzyme model [8,9].
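The initial-rate calculation from the recorded absorbance decrease can be sketched as follows. The sample readings are invented for illustration, and the molar absorptivity used for TNB (about 13,600 M⁻¹ cm⁻¹ near 410 nm) is a commonly cited literature value, not a figure from this paper:

```python
# Hypothetical sketch: extract the initial rate from the decrease of TNB
# absorbance at 410 nm after adding hydroperoxide. Readings are illustrative.
times = [0, 10, 20, 30, 40, 50, 60]          # seconds after initiation
absorbance = [1.00, 0.94, 0.88, 0.83, 0.77, 0.72, 0.66]

# Least-squares slope of A(t) over the first minute (simple linear regression)
n = len(times)
mean_t = sum(times) / n
mean_a = sum(absorbance) / n
slope = (sum((t - mean_t) * (a - mean_a) for t, a in zip(times, absorbance))
         / sum((t - mean_t) ** 2 for t in times))

eps_tnb = 13600          # M^-1 cm^-1, assumed literature value for TNB, 1 cm path
rate = -slope / eps_tnb  # initial rate in M/s (Beer-Lambert law)
print(f"{rate:.2e} M/s")
```

Comparing such rates at the same nominal concentration of catalytic Te centers gives the relative activities reported in Table 1.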
Table 1. The Initial Rates for Reduction of Hydroperoxides by Thiol TNB and NBT in the
Presence of PAAm-Te-b-PNIPAAm
3 Conclusion
By preparing novel block copolymers via ATRP, we have designed and synthesized two block copolymers (BCPs) which exhibit stable antioxidant catalytic efficiency with a temperature-responsive characteristic. We anticipate that this study will open up a new field in the design of artificial enzymes with environment-responsive functions, and hope that such smart enzyme mimics can be developed into antioxidant medicines whose catalytic efficiency is controlled according to the needs of the human body.
References
1. Mugesh, G., du Mont, W.W., Sies, H.: Chemistry of biologically important synthetic organoselenium compounds. Chem. Rev. 101, 2125–2179 (2001)
2. Berlett, B.S., Stadtman, E.R.: Protein oxidation in aging, disease, and oxidative stress. J. Biol. Chem. 272, 20313–20316 (1997)
3. Flohé, L., Loschen, G., Günzler, W.A., Eichele, E.: Glutathione peroxidase, V. The kinetic mechanism. Hoppe. Seylers. Z. Physiol. Chem. 353, 987–999 (1972)
4. Spector, A.: Oxidative stress-induced cataract: mechanism of action. FASEB. J. 9, 1173–1182 (1995)
5. Stayton, P.S., Shimoboji, T., Long, C., Chilkoti, A., Chen, G., Harris, J.M., Hoffman, A.S.: Control of protein-ligand recognition using a stimuli-responsive polymer. Nature 378, 472–474 (1995)
6. Karl, K., Alain, L., Wolfgang, E., Thomas, H.: Volume transition and structure of triethyleneglycol dimethacrylate, ethylenglykol dimethacrylate, and N,N-methylene bis-acrylamide cross-linked poly(N-isopropyl acrylamide) microgels: a small angle neutron and dynamic light scattering study. Colloids. Surf. 197, 55–67 (2002)
7. Wu, Z.P., Hilvert, D.: Selenosubtilisin as a glutathione peroxidase mimic. J. Am. Chem. Soc. 112, 5647–5648 (1990)
8. Huang, X., Dong, Z.Y., Liu, J.Q., Mao, S.Z., Xu, J.Y., Luo, G.M., Shen, J.C.: Tellurium-based polymeric surfactants as a novel seleno-enzyme model with high activity. Macromol. Rapid. Commun. 27, 2101–2106 (2006)
9. Huang, X., Dong, Z.Y., Liu, J.Q., Mao, S.Z., Xu, J.Y., Luo, G.M., Shen, J.C.: Selenium-mediated micellar catalyst: an efficient enzyme model for glutathione peroxidase-like catalysis. Langmuir 23, 1518–1522 (2007)
10. McCord, J.M., Fridovich, I.: Superoxide dismutase. An enzymic function for erythrocuprein (hemocuprein). J. Biol. Chem. 244, 6049–6055 (1969)
An Efficient Message-Attached Password Authentication
Protocol and Its Applications in the Internet of Things
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 263269, 2011.
Springer-Verlag Berlin Heidelberg 2011
264 A. Wang, Z. Li, and X. Yang
Fig. System architecture: the server (computer) and the client (function module) communicate through wired/wireless Internet interfaces over a channel.
A new design philosophy for the message transmission scheme of the Internet of Things is proposed in this paper: combining authentication with message transmission. The outstanding characteristic of our scheme is the decrease in interaction: the number of steps in the whole protocol is reduced to three, and one signature is saved. How should a message-attached authentication protocol be designed? We can borrow ideas from existing protocols of the same kind.
The conventional password authentication protocol is one of the simplest, most convenient and most widely used authentication modes. Lamport [4] first proposed encrypting the user's password with a secure one-way hash function. Hwang and Yeh [5] proposed a mutual authentication protocol; although the original authors claimed that this scheme could resist the known attacks, Chun et al. [6] pointed out that the protocol was vulnerable to denial-of-service (DoS) attacks. Peyravian and Jeffries [7] also put forward a hash-based authentication proposal and claimed that it could resist DoS and off-line password guessing attacks. In 2008, Wang et al. [8] proposed a mutual anonymous password authentication scheme and claimed that its efficiency was equivalent to that of Hwang et al.'s protocol while being more secure.
We find that, due to the use of public key cryptography, the efficiency of Wang et al.'s protocol is much lower than that of Hwang et al.'s; besides, its resistance to DoS is not very good, and the so-called weak-password attack is not actually eliminated. Accordingly, an efficient message-attached password authentication protocol based on hash functions is proposed, and its security and efficiency are evaluated in detail.
Based on the new message-attached password authentication protocol, we design a universal system architecture for the Internet of Things. Experiments show that the protocol given in this paper is highly efficient, resists a variety of known attacks and fully meets the security requirements of the Internet of Things.
User Register. When the user C registers at the server S, C passes H(PWC), the hash value of the password PWC, to the server S. Then S establishes the password verifier, which is shown in Table 1. Furthermore, S generates the symmetric key SKC, imparts its value to C, and both sides store the key securely.
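The registration step described above can be sketched in Python as follows. The function and table names (`register`, `verifier_table`) are illustrative, and SHA-256 is an assumed choice of hash, since the paper only writes H(·):

```python
import hashlib
import secrets

# Hypothetical sketch of registration: the client sends H(PW_C), the server
# stores it in the password verifier table and issues a symmetric key SK_C.
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # assumed hash choice

verifier_table = {}  # server side: ID_C -> H(PW_C); SK_C is NOT stored here
key_store = {}       # SK_C, kept securely by both sides (e.g. on a USB key)

def register(id_c: str, pw_c: str) -> bytes:
    verifier_table[id_c] = H(pw_c.encode())  # server never learns PW_C itself
    sk_c = secrets.token_bytes(32)           # server generates SK_C
    key_store[id_c] = sk_c
    return sk_c                              # imparted to the client

sk = register("client-01", "correct horse battery staple")
print(len(sk), "client-01" in verifier_table)
```

Note that storing only H(PWC) rather than the password itself is what later limits the damage of a stolen-verifier attack.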
Implementation
Denial-of-Service Attack. When an attacker mounts a denial-of-service attack on the server S, S can discover the user's illegality in Step 4. S does not need to do complex calculations before that and just needs to receive and send data three times. Meanwhile, the verification step needs only one table look-up and one hash calculation. RS can be reused when authentication fails, so the overhead of random number generation can be completely ignored under a DoS attack. Therefore this protocol resists DoS attacks very well.
Replay Attack. The preimage of the hash used to verify identity contains the random number chosen by the verifier, so the preimage will never repeat as long as the random number does not repeat. In other words, a previously used hash value will recur only with negligible probability, equal to the probability of a collision in the hash algorithm. Therefore this protocol resists replay attacks very well.
Password Guessing Attack. Password guessing attacks can be divided into two kinds: online and offline. The online attack can be prevented by limiting login attempts. The success probability of the offline attack is again equivalent to that of a hash collision, which is negligible, so the protocol resists password guessing attacks.
Stealing-Verifier Attack. In the new proposal, the verifier stored by S contains nothing but the users' identities and the hash values of their passwords. The symmetric key is not stored, so even if an attacker obtains the password verifier, he cannot generate the right hash value, let alone obtain any other useful information. For the management of the symmetric key, a USB key or similar technology can be adopted to ensure storage security.
Forge-Server Attack. If an attacker wants to impersonate the server S, he must use the key SKC to calculate the hash value of RS, RC, IDC, SKC, MS, or attempt to replay a previous hash value. Obviously the latter is impossible. Although the preimage of the hash operation is transmitted over a public, insecure channel, it is impossible for the attacker to obtain the key SKC. As a result, the attacker cannot impersonate the server to communicate with users.
Forge-Client Attack. Because replay is infeasible, the attacker can only calculate the hash value of H(PWC), RS, RC, IDC, SKC, MC to impersonate C. But even if he can steal the verifier and obtain H(PWC), he will not be able to generate the right hash value without the key. Consequently it is also impossible to impersonate the client.
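The two verification hashes named above can be sketched as follows. The concatenation order, the separator and the SHA-256 choice are assumptions, since the text only lists the preimage components:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of the server-side and client-side proofs.
def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()  # assumed construction

id_c = b"client-01"
h_pw = hashlib.sha256(b"password").digest()      # H(PW_C), held by the server
sk_c = secrets.token_bytes(32)                   # shared symmetric key SK_C
rs, rc = secrets.token_bytes(16), secrets.token_bytes(16)  # nonces RS, RC
ms, mc = b"switch lamp 7 on", b"ack"             # attached messages MS, MC

# Server proves knowledge of SK_C over (RS, RC, ID_C, SK_C, MS);
# the client's proof additionally binds H(PW_C).
server_proof = H(rs, rc, id_c, sk_c, ms)
client_proof = H(h_pw, rs, rc, id_c, sk_c, mc)

# Each verifier recomputes the hash and compares in constant time
print(hmac.compare_digest(server_proof, H(rs, rc, id_c, sk_c, ms)),
      hmac.compare_digest(client_proof, H(h_pw, rs, rc, id_c, sk_c, mc)))
```

Because fresh nonces RS and RC enter both preimages, an eavesdropper who records either proof cannot replay it in a later session.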
2.4 Contrast
In this part, Hwang's scheme of 2002 [5], Peyravian's scheme of 2006 [7], Wang's scheme of 2008 [8] and the new proposal in this paper are compared in terms of security and efficiency. The security details are given in Table 2, in which "Yes" means the scheme resists the attack and "No" means it does not. Table 3 gives the efficiency comparison of the four schemes.
As shown in the tables, the scheme proposed in this paper is the most efficient while meeting the security requirements, and is better than the other three. In the following we take a security system of the Internet of Things as an example and discuss the specific implementation and application.
Fig. Client-side architecture: a device (local client) containing a function module, with wired Internet and wireless network/wireless interfaces connected over their channels.
We implemented a system of street lamps in the fashion of the Internet of Things. In a control center, a computer can send a command to switch a given lamp on or off. The message is attached to our password authentication protocol, which is carried out between the control center and the individual lamps. Further testing shows that client and server can carry out our scheme exactly as designed; that is to say, an adversary cannot complete any attack on our system of street lamps.
4 Conclusions
In this paper, we proposed a message-attached password authentication protocol and a universal system architecture for the Internet of Things. This scheme can also solve some problems of similar products, such as low efficiency and complex procedures, so it is of great significance and practical value to the future development of the Internet of Things. Next, we will continue to study the various needs of users of the Internet of Things and design special schemes for different users. Moreover, as people are highly dependent on mobile phones [10], we can design more function-rich and practical Internet of Things products controlled by mobile phones to provide people with more convenient services.
References
1. International Telecommunication Union (ITU): ITU Internet Reports, The Internet of Things (2005)
2. European Research Projects on the Internet of Things (CERP-IoT), Strategic Research Agenda (SRA): Internet of Things: Strategic Research Roadmap (2009)
3. Menezes, A.J., Van Oorschot, P.C., Vanstone, S.A.: Handbook of Applied Cryptography. CRC Press, Boca Raton (1997)
4. Lamport, L.: Password Authentication with Insecure Communication. Communications of the ACM 24, 770–772 (1981)
5. Hwang, J., Yeh, T.: Improvement on Peyravian-Zunic's Password Authentication Schemes. IEICE Transactions on Communications E85-B(4), 823–825 (2002)
6. Chun, L., Hwang, T.: A Password Authentication Scheme with Secure Password Updating. Computers & Security 22(1), 68–72 (2003)
7. Peyravian, M., Jeffries, C.: Secure Remote User Access over Insecure Networks. Computer Communications 29(5/6), 660–667 (2006)
8. Wang, B., Zhang, H., Wang, Z., Wang, Y.: A Secure Mutual Password Authentication Scheme with User Anonymity. Geomatics and Information Science of Wuhan University 33(10), 1073–1075 (2008)
9. Koc, C.K.: Cryptographic Engineering. Springer, Heidelberg (2008)
10. Google: Google Projects for Android (2010), http://code.google.com/intl/en/android
Research on Simulation and Optimization Method for
Tooth Movement in Virtual Orthodontics
1 Introduction
People's living conditions have been greatly improved with economic progress, and people gradually pay more attention to their appearance. In recent years, with the development of image processing, computer technology and virtual reality technology, virtual surgery has become a hot research topic [1].
Virtual orthodontics can help clinicians design treatment and can output data. Tooth path planning is an important part of virtual orthodontics: it simulates tooth movement during dental treatment, so that the clinician can view the whole process and result of the virtual treatment.
Tooth path planning forms a movement path from the initial position to the target position for each tooth. According to
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 270275, 2011.
Springer-Verlag Berlin Heidelberg 2011
Assume there are N teeth and M stages of tooth movement from the initial position to the final position (M is determined by the doctor according to the teeth's maximum moving distance d_0 and experience). There are two kinds of movement for each tooth in every stage: one is translation and the other is rotation. The first objective is to minimize the sum of all teeth's translational distances:

f_A = Σ_{i=1}^{n} Σ_{j=1}^{m} d_ij    (1)
θ_ij = arccos( (L_ij · L_i,j−1) / (|L_ij| |L_i,j−1|) )    (3)

f_B = Σ_{i=1}^{n} Σ_{j=1}^{m} θ_ij    (4)

where θ_ij is the rotation angle of tooth i in stage j and L_ij is the direction vector of tooth i in stage j.
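The objectives in Eqs. (1), (3) and (4) can be computed as sketched below; the translation distances and direction vectors are illustrative values for two teeth and two stages:

```python
import math

# Sketch of the objectives for n = 2 teeth and m = 2 stages.
# d[i][j]: translation of tooth i in stage j (mm); L[i][j]: direction of tooth i
# in stage j. All numbers are illustrative.
d = [[0.2, 0.3], [0.1, 0.25]]
L = [[(1, 0, 0), (1, 1, 0)], [(0, 1, 0), (0, 1, 1)]]

def angle(u, v):
    # theta = arccos(u . v / (|u| |v|)), as in Eq. (3)
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(dot / (math.dist(u, (0, 0, 0)) * math.dist(v, (0, 0, 0))))

f_A = sum(sum(row) for row in d)                       # Eq. (1): total translation
f_B = sum(angle(L[i][j], L[i][j - 1])                  # Eq. (4): total rotation
          for i in range(len(L)) for j in range(1, len(L[i])))

print(round(f_A, 2), round(math.degrees(f_B), 1))
```

With these values each tooth turns 45° once, so f_B sums to 90° and f_A to 0.85 mm.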
In the process of tooth movement, teeth may collide with each other. Let f_c be the number of collisions; the no-collision constraint can be expressed as follows:

g_1 = f_c = Σ_{i=0}^{n−1} c_i = 0    (5)

where c_i indicates whether tooth i collides with tooth i−1: its value is 1 if a collision occurs and 0 otherwise. In addition, according to medical experts' experience, the translation distance and rotation angle should not be too large in each step of the orthodontic process. The d_ij and θ_ij in the mathematical model must satisfy the following constraints, where d_0 and θ_0 are the maximum moving distance and maximum rotation angle per stage:

g_2 = d_ij − d_0 ≤ 0    (6)
g_3 = θ_ij − θ_0 ≤ 0

272 Z. Li and G. Yang
3 Principle of Algorithm
The A* algorithm is a typical heuristic intelligent algorithm: because it contains a prediction function to evaluate the path ahead, it incorporates heuristic information and outperforms other local search algorithms [3]. Based on the A* heuristic, we propose a new fitness function for this problem that combines moving distance and rotation angle, as follows:
h
f =g+ + sin (7)
cos
The above formula is the new fitness function based on the A* algorithm combined
with the constraints of tooth movement. Where g represents the distance which teeth
has moved and suppose the distance as diagonal mode. The
angle ( 0 0 90 0 )is composed by the vector which is formed by the current
node and its parents node and the other vector is formed between the initial point
and the target point, such as figure 1, ( 0 90 )is the vector which is formed
0 0
between the current motion vector and its previous motion vector, such as figure 2.
The reflects the characteristic of rotary movement of each teeth, which the formula
( h + sin ) just is the new heuristic function.
cos
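A minimal sketch of grid A* using this modified fitness follows. The tie-breaking, the guard for cos α near zero, and the small bounded grid are our own assumptions; the paper does not specify these details:

```python
import heapq
import math

def angle(u, v):
    """Angle between two 2-D vectors; 0 for a zero vector."""
    nu, nv = math.hypot(*u), math.hypot(*v)
    if nu == 0 or nv == 0:
        return 0.0
    c = max(-1.0, min(1.0, (u[0] * v[0] + u[1] * v[1]) / (nu * nv)))
    return math.acos(c)

def a_star(start, goal, blocked=frozenset(), size=10):
    base = (goal[0] - start[0], goal[1] - start[1])  # initial-to-target vector
    # frontier entries: (f, g, node, move that reached the node)
    frontier = [(0.0, 0.0, start, (0, 0))]
    best = {start: 0.0}
    while frontier:
        _, g, node, prev_move = heapq.heappop(frontier)
        if node == goal:
            return g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (1, -1), (-1, 1), (-1, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if nxt in blocked or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            ng = g + math.hypot(dx, dy)          # diagonal-move distance
            if ng >= best.get(nxt, math.inf):
                continue
            best[nxt] = ng
            h = math.hypot(goal[0] - nxt[0], goal[1] - nxt[1])
            a = angle((dx, dy), base)            # alpha
            b = angle((dx, dy), prev_move)       # beta
            ca = max(math.cos(a), 1e-6)          # guard against cos(alpha) = 0
            f = ng + h / ca + math.sin(b)        # Eq. (7)
            heapq.heappush(frontier, (f, ng, nxt, (dx, dy)))
    return math.inf
```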
[Figs. 1-2 (not reproduced): the initial point, parent point, current point, and target point, together with the previous and current motion vectors.]
distance between the center points of adjacent projection circles. If the actual distance D is less than D′, a collision has occurred. If the routes are not in one plane, we additionally compute the line perpendicular to the two motion vectors; if its intersection points lie between the two motion vectors, those intersection points are exactly the collision points; otherwise we assume for the time being that no collision occurs. In this case tooth p1 can move in the plane formed by points p1, p2, p1′, and tooth p2 can move in the plane formed by points p1, p2, p2′. The areas Q and Q′ are the two teeth's collision areas, as follows:
Fig. 5. The collision area of tooth p1    Fig. 6. The collision area of tooth p2
In this system, the number d determines the size of the projection circle, and the projection circle area is exactly the collision area in the grid environment. Based on the collision area and the A* algorithm, the system searches for the final path; the flow chart for computing the collision points is as follows:
[Flow chart (not reproduced).]
References
1. Sen, Y.: Orthodontic diagnosis and treatment plan. World Publishing Company, Singapore
(2002)
2. Gao, H., Yan, Y., Qi, P., et al.: Three-dimensional digital dental cast analysis and diagnosis
system. Computer Aided Design and Computer Graphics
3. Gao, Q.J., Yu, Y.S., Hu, D.: The feasibility of improving the path search and optimization
based on A* algorithm. College of China Civil Aviation (August 2005)
4. Huizhong puzzle of Beijing Science and Technology Co. Ltd., Client programming about
online game / Information Ministry software and integrated circuit promotion center
5. Wang, W.: Principles and Applications of Artificial Intelligence. Electronic Industry Press,
Beijing
6. Lihui, Z., Zhang, L., Hou, M.: The application of A* algorithm in game routing. Inner
Mongolia Normal University (February 2009)
An Interval Fuzzy C-means Algorithm Based on Edge
Gradient for Underwater Optical Image Segmentation
1 Introduction
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 276–283, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Fuzzy c-means (FCM), proposed by Dunn [4] and promoted by Bezdek [5], has been widely used in many areas of image segmentation [6-7]. In the actual operation of underwater robots, vibration of the imaging equipment and other electronic interference cause measurement error in the pixel information; that is, the actual data obtained are imprecise. To study clustering under this imprecision, the most basic form of imprecise data, interval-valued data, is adopted for analysis.
Using weighted interval numbers, Ishibuchi et al. designed an interval neural network [8]. Z. Lv et al. gave a similarity measure for interval numbers with a Gaussian distribution [9]. In the area of unsupervised classification, many scholars have conducted in-depth studies. Fan Jiu-lun et al. proposed an interval-valued fuzzy c-means (FCM) clustering algorithm for unsupervised classification problems [10]. Francisco et al. presented a fuzzy clustering algorithm for symbolic data [11]. Intuitionistic fuzzy set clustering algorithms are discussed in detail by Z. Xu et al. [12]. Antonio Irpino et al. [13] proposed a dynamic clustering algorithm based on the Wasserstein distance for interval-valued data. Marie-Helene Masson et al. [14] applied belief functions to interval-valued data clustering.
However, all the algorithms above are based on simulated data. This paper is the first to put forward an interval segmentation algorithm for underwater images, combined with the edge gradient to improve computational timeliness. Comparative experiments show that the new algorithm achieves better quality and higher timeliness and can be applied to actual AUV tasks. Although the new algorithm is developed for underwater images, its principles also offer high reference value for other types of image segmentation, so the knowledge gained about interval clustering segmentation can be fully exploited.
$\bar{x} = \dfrac{x^- + x^+}{2}, \qquad \Delta x = x^+ - x^-$  (1)
$\begin{pmatrix}
f(x-2,y-2) & f(x-1,y-2) & f(x,y-2) & f(x+1,y-2) & f(x+2,y-2)\\
f(x-2,y-1) & f(x-1,y-1) & f(x,y-1) & f(x+1,y-1) & f(x+2,y-1)\\
f(x-2,y)   & f(x-1,y)   & f(x,y)   & f(x+1,y)   & f(x+2,y)\\
f(x-2,y+1) & f(x-1,y+1) & f(x,y+1) & f(x+1,y+1) & f(x+2,y+1)\\
f(x-2,y+2) & f(x-1,y+2) & f(x,y+2) & f(x+1,y+2) & f(x+2,y+2)
\end{pmatrix}$  (2)
Edges are important features for image understanding and pattern recognition; they preserve feature information while effectively reducing the amount of data to be processed.
The first derivative can be used to detect whether a point lies on an edge and to highlight the details of the image. By gradient sharpening [15], the gradient of the image at position (x, y) can be obtained:

$\nabla f = [G_x \; G_y]^T = \left[\dfrac{\partial f}{\partial x} \; \dfrac{\partial f}{\partial y}\right]^T$  (4)
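For instance, Gx and Gy can be approximated by central differences on a grayscale image; the paper's exact discrete operator is not specified in this excerpt, so the following is only an illustrative sketch:

```python
def gradient(img, x, y):
    """Central-difference approximation of [Gx, Gy] at interior pixel (x, y);
    img is a 2-D list indexed as img[y][x]."""
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return gx, gy

def gradient_magnitude(img, x, y):
    gx, gy = gradient(img, x, y)
    return (gx * gx + gy * gy) ** 0.5
```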
Definition 6: Define image gradient as follows:
For any i, k, if $d_{ik} > 0$, then

$\mu_{ik} = 1 \Big/ \sum_{j=1}^{c} \left(\dfrac{d_{ik}}{d_{jk}}\right)^{\frac{2}{m-1}}$  (6)

For any i, j, k, if $d_{jk} = 0$, then

$\mu_{jk} = \begin{cases} 1, & j = k \\ 0, & j \ne k \end{cases}$  (7)
Step 5: update the cluster center matrix:

$v_i = \sum_{k=1}^{n} \mu_{ik}^{m} x_k \Big/ \sum_{k=1}^{n} \mu_{ik}^{m}, \quad i = 1, 2, \ldots, c$  (8)
Step 6: calculate the objective function J by Definition 5 and compare it with the previously computed value of J. If the difference is less than the threshold ε, the loop ends and we continue to Step 7; otherwise go back to Step 4 and continue the loop.
Step 7: post-processing: using the obtained cluster centers and fuzzy membership matrix, perform fuzzy clustering on the interval value of every pixel identified in Step 1 and obtain the segmentation results.
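The update loop of Steps 4-7 can be sketched for ordinary scalar data as follows. The paper applies it to interval-valued pixels with a gradient-based distance; the plain absolute-value distance and all names here are our own simplifying assumptions:

```python
import numpy as np

def fcm(X, c=2, m=2.0, eps=1e-9, max_iter=100, seed=0):
    """Minimal FCM loop: center update (Eq. 8), membership update (Eqs. 6-7),
    and the objective-difference stopping test of Step 6."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # fuzzy membership matrix, columns sum to 1
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1)        # Eq. (8): cluster centers
        D = np.abs(X[None, :] - V[:, None])  # distances d_ik
        with np.errstate(divide="ignore", invalid="ignore"):
            inv = D ** (-2.0 / (m - 1.0))    # Eq. (6) rewritten as d^(-2/(m-1))
            U_new = inv / inv.sum(axis=0)
        zero = (D == 0)
        # Eq. (7): if some d_ik = 0, put full membership on that cluster
        U = np.where(zero.any(axis=0), zero.astype(float), U_new)
        J = float((U ** m * D ** 2).sum())   # objective function J
        if abs(J_prev - J) < eps:            # Step 6 convergence test
            break
        J_prev = J
    return V, U
```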
280 S. Wang, Y. Xu, and L. Wan
The experiments were run on a computer with the XP operating system, a 2.60 GHz CPU, and 2 GB of memory. For four types of representative underwater targets (image size 576 × 768), the three algorithms were applied in segmentation experiments, with the iteration stop threshold set to 1e-9, the number of clusters set to 2, and a taken as 7. The segmentation results are shown in Figs. 1-4 (a-d are, respectively, the original image, the traditional algorithm, the algorithm of literature [16], and the new algorithm).
Figs. 1-2 show that the segmentation results of the three algorithms all reflect the clustering effect to different degrees, and all perform well on images captured in low-light environments. However, the segmentation results for the three-prism show that the traditional FCM method fails to separate the complete region of the target; the algorithm of literature [16] does better, but noise above the target is still present; the new algorithm not only obtains a clear edge but, in the four-prism segmentation, can also clearly distinguish details such as the cable in the lower right of the image.
For targets with seriously uneven light and heavy noise, Figs. 3-4 show that the object cannot be segmented from the background by the traditional FCM clustering algorithm. With the algorithm of literature [16], only a little object information can be segmented out when processing the ellipsoid image, and for multiple objects with more serious noise it is very difficult to obtain the whole outline. The new algorithm, in contrast, clearly recovers most of the target's edge information; what is more, the cable above the ellipsoid is visible in the segmentation result for the multi-object image.
In summary, the segmentation results in Figs. 1-4 sufficiently show that the new algorithm obtains good segmentation results on images with insufficient light as well as on images with heavy noise and seriously uneven illumination. With its good accuracy, high robustness, and broad applicability, the new algorithm has irreplaceable advantages in segmenting images captured in complex environments with noise that is difficult to eliminate.
For each of the four types of targets, 50 images captured in a pool were taken as samples. For all three algorithms the number of clusters was set to 2 and the threshold to ε = 1.0 × 10⁻⁹; the average time consumption per image is given in Table 1.
Table 1 shows that, across the four types of underwater image segmentation, the new algorithm saves about 10 times the computation time of the traditional FCM algorithm, and its timeliness is improved 4-8 times over the algorithm of literature [16], which fully shows that the new algorithm can meet the timeliness needs of an AUV executing practical underwater tasks.
4 Conclusions
Through analysis and research on the FCM algorithm, and considering the inevitable errors in the imaging process, an interval fuzzy c-means algorithm based on edge gradient is proposed for four types of images taken in a pool; some definitions and parameters are given, and the membership matrix and clustering center matrix are modified accordingly. Experimental results show that, compared with the traditional FCM and the algorithm of literature [16], the new algorithm yields obviously better segmentation quality, and for images with serious noise and uneven light the segmentation result is highly robust. Moreover, the computing speed of the new algorithm is faster than that of the other two algorithms. Above all, the new algorithm not only obtains good-quality segmentation but also improves timeliness, which meets the requirements for an AUV completing special missions [17] and provides a strong guarantee for feature extraction and target tracking.
References
1. Yuan, X.H., Qiu, C.C., et al.: Vision System Research for Autonomous Underwater Vehicle. In: Proceedings of the IEEE International Conference on Intelligent Processing Systems, vol. 2, pp. 1465–1469 (1997)
2. Wang, S.-L., Wan, L., Tang, X.-D.: A modified fast fuzzy C-means algorithm based on the spatial information for underwater image segmentation. In: ICCDA 2010, vol. 1, pp. 1524–1528 (2010)
3. Zhang, M.-j.: Image Segmentation. Science Press, Beijing (2001)
4. Dunn, J.C.: A fuzzy relative of the ISODATA process and its use in detecting compact, well-separated clusters. J. Cybern. 3, 32–57 (1974)
5. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York (1981)
6. Szilagyi, L., Benyo, Z., Szilagyi, S.M., et al.: MR Brain Image Segmentation Using an Enhanced Fuzzy C-Means Algorithm. In: Proc. of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Cancun, Mexico, vol. 1, pp. 724–726 (2003)
7. Rezaee, M.R., Zwet, P., Lelieveldt, B., et al.: A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering. IEEE Trans. on Image Processing 9(7), 1238–1248 (2000)
8. Ishibuchi, H., Tanaka, H.: An Architecture of Neural Networks with Interval Weights and Its Application to Fuzzy Regression Analysis. Fuzzy Sets and Systems 57(1), 27–39 (1993)
9. Lv, Z., Chen, C., Li, W.: A new method for measuring similarity between intuitionistic fuzzy sets based on normal distribution functions. In: Fourth International Conference on Fuzzy Systems and Knowledge Discovery, Haikou, China, vol. 2, pp. 108–113 (2007)
10. Fan, J.-l., Pei, J.-h., Xie, W.-x.: Interval fuzzy c-means clustering algorithm. In: Fuzzy Sets Theory and Applications: The Ninth Annual Meeting of the Committee on Fuzzy Systems and Fuzzy Mathematics in China, pp. 127–213 (1998)
11. de Carvalho, F. de A.T.: Fuzzy c-means clustering methods for symbolic interval data. Pattern Recognition Letters 28(4), 423–437 (2007)
12. Xu, Z., Chen, J., Wu, J.: Clustering algorithm for intuitionistic fuzzy sets. Information Sciences 178(19), 3775–3790 (2008)
13. Irpino, A., Verde, R.: Dynamic clustering of interval data using a Wasserstein-based distance. Pattern Recognition Letters 29(11), 1648–1658 (2008)
14. Masson, M.-H., Denoeux, T.: Clustering interval-valued proximity data using belief functions. Pattern Recognition Letters 25(2), 163–171 (2004)
15. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn., pp. 97–99. Electronic Industry Press, Beijing (2002)
16. Wang, S.-l., Wang, L., Tang, X.-d.: An improved fuzzy C-means algorithm based on gray-scale histogram for underwater image segmentation. In: CCC 2010, vol. 1, pp. 2778–2783 (2010)
17. Balasuriya, A., Ura, T.: Vision-based underwater cable detection and following using AUVs. In: Proceedings of the Oceans 2002 Conference and Exhibition, pp. 1582–1587. IEEE, Piscataway (2002)
A Generic Construction for Proxy Cryptography
Guoyan Zhang
Keywords: Proxy Cryptography, Proxy-Protected, DBDH.
1 Introduction
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 284–289, 2011.
© Springer-Verlag Berlin Heidelberg 2011
In our notion, we consider all potential actions of the adversary. There are two types of adversaries: the Type I adversary represents outside adversaries who are not delegated, and the Type II adversary represents the delegatee.
Definition 2. A proxy-protected anonymous proxy decryption (PPAPD) scheme is secure against adaptive chosen-ciphertext attack (IND-CCA) if no probabilistic polynomial-time adversary has a non-negligible advantage in either Game 1 or Game 2.
Game 1. This game is for the Type I adversary. Given a security parameter 1^k, the challenger runs the Setup algorithm to get the delegatee's secret key SK_O and public key PK_O; he gives PK_O to the adversary, keeping SK_O secret.
The adversary can query two oracles: the Partial-Proxy-Private-Key Oracle and the Decryption Oracle.
Partial-Proxy-Private-Key Oracle: on receiving the query <PK_P, SK_P, t_i, σ = Sign(PK_P)>:
The challenger checks the validity of the public key and the signature σ; if either is invalid, he aborts. Otherwise, he searches the PartialProxyPrivateKeyList for a tuple <PK_P, SK_PP, t_i>; if it exists, he sends SK_PP to the adversary.
Otherwise, the challenger runs the Partial-Proxy-Private-Key-Derivation algorithm to get SK_PP, adds the tuple <PK_P, SK_PP, t_i> to the PartialProxyPrivateKeyList, and sends SK_PP to the adversary.
Decryption Oracle: on receiving the query <PK_P, SK_P, C_i, t_i>:
The challenger checks the validity of the public key; if it is invalid, he aborts. Otherwise, he searches the PartialProxyPrivateKeyList for a tuple <PK_P, SK_PP, t_i>; if it exists, he decrypts the ciphertext C_i using SK_PP and SK_P and sends the plaintext M to the adversary.
Otherwise, the challenger runs the Partial-Proxy-Private-Key-Derivation algorithm to get SK_PP, adds the tuple <PK_P, SK_PP, t_i> to the PartialProxyPrivateKeyList, decrypts the ciphertext C_i using SK_PP and SK_P to get M, and sends the plaintext M to the adversary.
Challenge: The adversary sends the challenger a request <PK_{P*}, SK_{P*}, t_i*, M_0, M_1>, where t_i* is the proxy time and M_0, M_1 are equal-length plaintexts. If the public key PK_{P*} is valid, the challenger picks a random bit b ∈ {0, 1} and sets C* = Enc(M_b, t_i*, PK_{P*}, PK_O). It sends C* to the adversary.
The adversary can make polynomially many further queries, and the challenger responds as in the second step.
At the end of the game, the adversary outputs b′ ∈ {0, 1} and wins the game if b′ = b, subject to two restrictions: the adversary has never queried the partial proxy private key oracle on the tuple <PK_{P*}, SK_{P*}, t_i*> or the decryption oracle on the tuple <C*, t_i*, PK_{P*}, SK_{P*}>.
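The challenge phase can be caricatured as follows; the one-time-pad "encryption" is only a stand-in for the scheme's Enc (which this excerpt does not specify), and all names are illustrative:

```python
import secrets

class Game1Challenger:
    """Toy challenge phase: encrypt M_b for a secret random bit b; the
    adversary wins if it guesses b. NOT the paper's pairing-based scheme."""

    def challenge(self, m0: bytes, m1: bytes) -> bytes:
        assert len(m0) == len(m1), "M0, M1 must be equal-length plaintexts"
        self._b = secrets.randbelow(2)                 # random bit b
        pad = secrets.token_bytes(len(m0))             # placeholder for Enc
        chosen = m0 if self._b == 0 else m1
        return bytes(x ^ y for x, y in zip(chosen, pad))

    def wins(self, b_guess: int) -> bool:
        return b_guess == self._b
```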
Game 2. This game is for the Type II adversary. Given a security parameter 1^k, the challenger runs the Setup algorithm to get the delegatee's secret key SK_O and public key PK_O, and he gives (PK_O, SK_O) to the adversary.
Delegation algorithm:
Secret key generation: the proxy decryptor P randomly picks x ∈ Z_p^*, and
4 Conclusion
In this paper, we give a generic proxy-protected anonymous proxy cryptography construction together with its security model. Our model not only captures all the properties introduced by Lee et al., but also provides a corresponding security model that gives the schemes a precise security guarantee. In particular, in our model the encrypter need not verify the validity of the delegated decryptor's public key. Furthermore, the original decryptor can run the same cryptographic operations as the delegated decryptor, but he cannot impersonate any delegated decryptor. Thus the model protects the privacy of the delegated decryptor as well as that of the original decryptor. This is the first generic model in the literature satisfying the above properties. Finally, we give a concrete proxy decryption scheme as an example.
References
1. Mambo, M., Okamoto, E.: Proxy cryptosystems: Delegation of the power to decrypt ciphertexts. IEICE Trans. Fundamentals E80-A, 54–63 (1997)
2. Blaze, M., Bleumer, G., Strauss, M.: Divertible protocols and atomic proxy cryptography. In: EUROCRYPT 1998. LNCS, vol. 1403, pp. 127–144. Springer, Heidelberg (1998)
3. Mu, Y., Varadharajan, V., Nguyen, K.Q.: Delegation decryption. In: Walker, M. (ed.) Cryptography and Coding 1999. LNCS, vol. 1746, pp. 258–269. Springer, Heidelberg (1999)
4. Sarkar, P.: HEAD: Hybrid Encryption with Delegated Decryption Capability. In: Canteaut, A., Viswanathan, K. (eds.) INDOCRYPT 2004. LNCS, vol. 3348, pp. 230–244. Springer, Heidelberg (2004)
5. Wang, L., Cao, Z., Okamoto, E., Miao, Y., Okamoto, T.: Transformation-free proxy cryptosystems and their applications to electronic commerce. In: Proceedings of the International Conference on Information Security (InfoSecu 2004), pp. 92–98. ACM Press, New York (2004)
6. Canetti, R., Hohenberger, S.: Chosen-ciphertext secure proxy re-encryption. In: ACM Conference on Computer and Communications Security, pp. 185–194 (2007)
7. Libert, B., Vergnaud, D.: Tracing Malicious Proxies in Proxy Re-Encryption. In: Galbraith, S.D., Paterson, K.G. (eds.) Pairing 2008. LNCS, vol. 5209, pp. 332–353. Springer, Heidelberg (2008)
8. Weng, J., Deng, R.H., Chu, C., Ding, X., Lai, J.: Conditional Proxy Re-Encryption Secure against Chosen-Ciphertext Attack. In: Proc. of the 4th ACM Symposium on Information, Computer and Communications Security (ASIACCS 2009), pp. 322–332 (2009)
VCAN-Controller Area Network Based Human Vital
Sign Data Transmission Protocol
Abstract. Vital sign diagnostic data is considered high-risk critical data that requires time-constrained message transmission. Continuous real-time monitoring of patients allows prompt detection of adverse events and ensures better response in emergency medical situations. This work proposes a solution for acquiring human vitals and transferring the data to a remote monitoring station in real time over the Controller Area Network (CAN) protocol. CAN is already used as an industrial standardized field bus in a wide range of embedded-system applications where real-time data acquisition is required. Data aggregation and a few amendments to CAN are proposed for better utilization of the available bandwidth in the context of patients' vital sign monitoring. The results show that the proposed solution provides efficient bandwidth utilization for a sufficient number of monitored patients: even at a high frame rate per patient per second, an adequate number of patients can be accommodated.
1 Introduction
Real-time patient monitoring is the most important part of post-operative or emergency medical aid. Vital sign monitoring data forms messages whose untimely delivery can lead to life-threatening damage or serious injury. The vital signs considered for monitoring are Heart Rate (HR), Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP), Electrocardiogram (EKG), Oxygen Saturation (SPO2), and Body Temperature (Temp) [1]. Different media and standards have been proposed by researchers to accomplish this task [2] [3] [4]. This work proposes a simplified solution that senses, stores, processes, and transmits human vitals using the Controller Area Network protocol with some proposed amendments to the frame format. The CAN protocol is selected for its cost effectiveness, robustness, minimal error rate, and use in time-critical applications [5] [6]. The solution also lets medical staff enter each patient's context data at the bedside in order to set the ranges for the vital sign alarm values.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 290–296, 2011.
© Springer-Verlag Berlin Heidelberg 2011
2 System Descriptions
The proposed system is divided into six major units. Fig. 1 illustrates the proposed system using the Controller Area Network (CAN) protocol for transferring patient vital sign data. A brief description of each unit follows.
1. Sensing and Digitizing Unit (SDU). The SDU provides the interface to the sensors attached to the patients for monitoring current vital signs. It receives data from the different sensors and digitizes the input if needed; analog-to-digital conversion is required for most of the sensors monitoring the vitals. The SDU receives input from the Context Data Input Unit (CDU) to change the sampling rate for an individual patient as required by the doctor or medical staff.
2. Bed Side Display Unit (BDU). The BDU is available at each patient's bedside. It displays the patient's current vitals for visiting doctors and support staff.
3. Context Data Input Unit (CDU). Every patient is treated differently according to his specific disease and condition: some vital sign values are acceptable for a particular patient but alarming for others, requiring immediate medical attention. The CDU lets doctors and support staff enter, for each of a patient's vital signs, the range that should be considered alarming given his current medical condition. The CDU provides these input parameters to the Processing and Aggregation Unit (PAU); the given parameters are utilized by the SDU for setting an alarm if the current vital sign data is not within the specified range.
4. Processing and Aggregation Unit (PAU). The PAU is the most important unit of the proposed system; the major processing tasks are performed here. The PAU gets the sensor data from the SDU and buffers it for processing. The buffered data is checked against the parameters provided by the CDU; if the current value of any vital sign is outside the normal range, the PAU triggers an alarm at the patient's BDU and sets the Ro bit for the CAN frame. The PAU's other major task is aggregating the data of the six sensors into one data set (56 bits) and providing it to the CAN interface as a single value along with the Ro bit value. This reduces the control-bit overhead that arises when data from each individual
292 A. Azmi et al.
sensor is sent separately. The Ro bit is used by the CAN receiver to set an alarm at the remote monitoring site.
5. Controller Area Network Interface Unit (CANI). The CANI gets the aggregated data from the PAU as a single 56-bit value. It adds control bits according to the CAN protocol and also sets the Ro bit as given by the PAU. Finally the CAN PDU is sent over the CAN bus to the remote monitoring station.
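The PAU's aggregation step might look like the sketch below. Only the 56-bit total (five 8-bit fields plus a 16-bit EKG field) comes from the paper; the field order and raw integer encodings are our own assumptions:

```python
import struct

def pack_vitals(temp, hr, spo2, sbp, dbp, ekg):
    """Pack five unsigned bytes plus one unsigned 16-bit value into a single
    7-byte (56-bit) payload for the CAN data field."""
    return struct.pack(">BBBBBH", temp, hr, spo2, sbp, dbp, ekg)

def unpack_vitals(payload):
    """Inverse of pack_vitals, as the remote monitoring station would apply."""
    return struct.unpack(">BBBBBH", payload)

payload = pack_vitals(98, 72, 97, 120, 80, 150)
assert len(payload) == 7                           # 56 bits of data
assert unpack_vitals(payload) == (98, 72, 97, 120, 80, 150)
```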
4 CAN Protocol
Controller Area Network (CAN) is a serial communication protocol introduced by BOSCH [7] in 1980. Variants of CAN are used as a low-level messaging protocol by many vendors and researchers [8][9][10] for different time-critical applications. This work is based on the standard CAN 2.0A protocol.
Standard CAN 2.0A: CAN 2.0A uses a very simple frame with minimal control bits. Fig. 1 shows the frame format of CAN 2.0A [7].
Start of Frame (SOF): marks the start of a message and also helps synchronization.
Identifier: an 11-bit identifier that identifies the CAN module; the lower the binary value of the identifier, the higher the priority for bus access.
Remote Transmission Request (RTR): used to request data from a particular node.
Identifier Extension (IDE): indicates whether the extended or the standard CAN identifier is used.
Reserved bit (Ro): a reserved bit that can be utilized for any new functionality.
Data Length Code (DLC): defines how many bytes of data are contained in the CAN frame.
DATA: CAN 2.0A can accommodate up to 8 bytes of data per frame.
Cyclic Redundancy Check (CRC): a 16-bit CRC used for error detection.
ACK: a 2-bit field used to indicate an error-free message.
End of Frame (EOF): 7 bits marking the end of the CAN frame.
Inter-Frame Space (IFS): required before the next CAN frame arrives.
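Summing the field widths gives a rough frame size for a CAN 2.0A data frame carrying the 56-bit aggregated payload. The widths not stated above (SOF, RTR, IDE, Ro, DLC, IFS) are taken from the CAN 2.0A specification and are assumptions relative to this excerpt; bit stuffing is ignored, so treat this as an estimate only:

```python
# Field widths in bits; CRC is 16 per the text above, IFS assumed 3.
FIELDS = {
    "SOF": 1, "Identifier": 11, "RTR": 1, "IDE": 1, "Ro": 1, "DLC": 4,
    "Data": 56, "CRC": 16, "ACK": 2, "EOF": 7, "IFS": 3,
}
frame_bits = sum(FIELDS.values())
assert frame_bits == 103                 # bits per frame, before stuffing
# at 256 frames/s this is roughly 26.4 kbps before the paper's amendments
assert round(256 * frame_bits / 1000, 1) == 26.4
```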
Vital Sign   Data Range    Max. No. of Values   Bits
Temp         96-108 F      13                   8
Heart Rate   30-200 bpm    171                  8
SPO2         0-100%        101                  8
SBP          40-250        211                  8
DBP          15-200        186                  8
EKG          30-300        270                  16
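The bit widths in this table follow from the number of distinct values each vital sign can take, rounded up to whole bytes; a quick check, assuming simple binary coding of the value counts:

```python
import math

def bits_needed(value_count):
    """Minimum bits to encode value_count distinct values; the table above
    rounds these up to byte boundaries (8 or 16 bits)."""
    return math.ceil(math.log2(value_count))

assert bits_needed(171) <= 8      # heart rate: 171 values fit in one byte
assert bits_needed(211) <= 8      # SBP
assert bits_needed(270) > 8       # EKG needs more than one byte
assert bits_needed(270) <= 16
```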
[Fig. 3 (bar chart, not reproduced): bandwidth per patient in kbps (24, 28, 46) for frame rates of 256, 300, and 500 frames per patient.]
Fig. 3 illustrates the bandwidth requirement per patient: the X-axis shows the frame rate per patient and the Y-axis the bandwidth in kilobits per second (kbps). Possible rates of 256, 300, and 500 frames per second are presented; these rates were selected because ECG/EKG is sampled by different devices at different rates [10]. The result shows that the bandwidth requirement for a single user is close to 24 kbps, 28 kbps, and 40 kbps at 256, 300, and 500 frames per second, respectively.
Figures 4 and 5 show bandwidth utilization with an increasing number of patients. Figure 6 depicts the number of patients that can be monitored by our system with the proposed amendments to the CAN 2.0A protocol. In the ideal condition, with availability of
[Figs. 4-5 (charts, not reproduced): bandwidth in kbps versus number of users, for per-patient rates of 24, 28, and 46 kbps.]
the maximum bandwidth, i.e., 1 Mbps, the maximum number of patients whose vital sign data can be transferred to the remote monitoring location is 39, 33, and 20 for bandwidth requirements of 24 kbps, 28 kbps, and 40 kbps respectively. These data rates, as defined earlier, depend upon the frame rate selected per patient.
Figure 6 also shows the decreased number of patients when a 20% bandwidth loss due to line errors and other overheads is considered: the given solution would still support 33, 28, and 17 patients respectively. That is still a high number of patients that can be supported at high frame rates in real time.
Fig. 6. Maximum number of patients accommodated by the system with 1 Mbps and with 800 kbps of available bandwidth (39, 33, and 20 patients at 1 Mbps; 33, 28, and 17 at 800 kbps)
References
1. Ahmed, A., Riedl, A., Naramore, W.J., Chou, N.Y., Alley, M.S.: Scenario-based Traffic Modeling for Data Emanating from Medical Instruments in Clinical Environment. In: 2009 World Congress on Computer Science and Information Engineering. IEEE Computer Society, Los Alamitos (2009)
2. Keong, H.C., Yuce, M.R.: Analysis of a Multi-Access Scheme and Asynchronous Transmit-Only UWB for WBANs. In: Annual International Conference of the IEEE, EMBC (2009)
3. Farshchi, S., Pterev, A., Nuyujukian, P.H., Mody, I., Judy, J.W.: Bi-Fi: An Embedded Sensor/System Architecture for Remote Biological Monitoring. IEEE Transactions on Information Technology in Biomedicine 11(6) (November 2007)
4. Wang, P.: The Real-Time Monitoring System for In-Patients Based on Zigbee. In: Second International Symposium on Intelligent Information Technology Application (2006)
5. Chen, H., Tian, J.: Research on the Controller Area Network. In: IEEE International Conference on Networking and Digital Society (2009)
6. Klehmet, U., Herpel, T., Hielscher, K., German, R.: Real Time Guarantees for CAN Traffic. In: VTC Spring 2008, pp. 3037–3041. IEEE, Los Alamitos (2008)
7. Bosch: CAN Specification Version 2.0. Robert Bosch GmbH, Stuttgart, Germany (1991)
8. Misbahuddin, S., Al-Holou, N.: Efficient Data Communication Techniques for Controller Area Network (CAN) Protocol. In: Proceedings of the ACS/IEEE International Conference on Computers and Their Applications, Tunis, Tunisia (July 2003)
9. Cenesiz, N., Esin, M.: Controller Area Network (CAN) for Computer Integrated Manufacturing Systems. Journal of Intelligent Manufacturing 15, 481–489 (2004)
10. Zubairi, J.A., Misbahuddin, S., Tassudduq, I.: Emergency Medical Data Transmission Systems and Techniques. In: Handbook of Research on Advances in Health Informatics
Study on the Some Labelings of Complete
Bipartite Graphs
Abstract. If the vertex set V of G = <V, E> can be divided into two non-empty sets X and Y with X ∪ Y = V and X ∩ Y = ∅, such that the two endpoints of every edge belong to X and Y separately, then G is called a bipartite graph. If for all x_i ∈ X and y_j ∈ Y we have (x_i, y_j) ∈ E, then G is called a complete bipartite graph; if |X| = m and |Y| = n, G is denoted K_{m,n}. In this paper the graceful labeling, k-graceful labeling, odd graceful labeling, and odd strongly harmonious labeling of K_{m,n} are given.
1 Introduction
Graph theory is a branch of mathematics, especially an important branch of discrete mathematics. It has been applied in many different fields in the modern world, such as physics, chemistry, astronomy, geography, and biology, as well as in computer science and engineering.
This paper mainly researches graph labeling. Graph labeling traces its origin to the famous conjecture, presented by A. Rosa in 1966, that all trees are graceful. A vertex labeling is a mapping from the vertex set into the integers; according to the different requirements placed on the mapping, many variations of graph labeling have evolved. In 1988, F. Harary introduced the notion of an (integral) sum graph. Sum graphs were generalized to mod sum graphs by Boland, Laskar, Turner, and Domke in 1990. The concepts of sum graph and integral sum graph were extended to hypergraphs by Sonntag and Teichert in 2000.
Bipartite graphs are widely applied, but not all bipartite graphs are graceful graphs; it is therefore necessary to research their gracefulness further. Based on the conjecture put forward by Professor Ma Kejie that the crown of complete bipartite graphs is k-graceful, the conjecture is proved by a constructive method for (m = 1 or 2, k ≥ 2) and (m ≥ 3, k ≥ (m − 2)(n − 1)) in this paper, and the demonstration extends the range of k-graceful research.
With the development of computers, the labelings of graphs have found ever wider application in areas such as networks and telecommunications. Over the years many kinds of graph labelings have been developed; among them, research on graceful labelings and
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 297–301, 2011.
© Springer-Verlag Berlin Heidelberg 2011
298 W. Li, G. Li, and Q. Yan
harmonious labelings is the most active. This paper considers a special class of graphs and discusses its various labelings: the graceful labeling, k-graceful labeling, odd graceful labeling, and odd strongly harmonious labeling are given.
2 Lemma
Definition 1. Let G = <V, E> be a simple graph. G is called graceful if each v in V can be
assigned a nonnegative integer f(v) satisfying:
1) for all u, v in V, u != v implies f(u) != f(v);
2) max{ f(v) | v in V } = |E|;
such that the induced mapping f*: E(G) -> {1, 2, ..., |E|}, defined by f*(uv) = |f(u) - f(v)|
for every edge e = uv in E, is a bijection. Then f is called a graceful labeling of G.
Definition 2. Let G = <V, E> be a simple graph. G is called k-graceful if there exists an
injection f: V(G) -> {0, 1, 2, ..., |E| + k - 1} such that the induced mapping
f*: E(G) -> {k, k + 1, ..., |E| + k - 1}, defined by f*(uv) = |f(u) - f(v)| for every edge
e = uv in E, is a bijection. Then G is called a k-graceful graph and f is called a
k-graceful labeling.
Definition 3. Let G = <V, E> be a simple graph. G is called odd graceful if there exists an
injection f: V -> {0, 1, 2, ..., 2|E| - 1} such that the induced mapping
f*: E(G) -> {1, 3, 5, ..., 2|E| - 1}, defined by f*(uv) = |f(u) - f(v)| for every edge e = uv,
is a bijection. Then G is called an odd graceful graph and f is called an odd graceful labeling.
Definition 4. Let G = <V, E> be a simple graph. If there exists an injection
f: V -> {0, 1, 2, ..., 2|E| - 1} such that the induced mapping f*: E(G) -> {1, 3, 5, ..., 2|E| - 1},
defined by f*(uv) = f(u) + f(v) for every edge e = uv in E, is a bijection, then G is called an
odd strongly harmonious graph and f is called an odd strongly harmonious labeling.
Definition 5. If the vertex set V of G = <V, E> can be divided into two nonempty sets X
and Y with X ∪ Y = V and X ∩ Y = ∅, such that the two endpoints of every edge belong to
X and Y separately, then G is called a bipartite graph. If (xi, yj) belongs to E for every
xi in X and yj in Y, then G is called a complete bipartite graph; if |X| = m and |Y| = n,
G is denoted K(m,n).
3 Main Results

We now discuss the graceful labeling, k-graceful labeling, odd graceful labeling and
odd strongly harmonious labeling of K(m,n). Let

X = {x1, x2, ..., xm}, Y = {y1, y2, ..., yn}, (xi, yj) in E(K(m,n)) (i = 1, 2, ..., m; j = 1, 2, ..., n).

For the k-graceful labeling, define

f(xi) = i - 1 (i = 1, 2, ..., m), f(yj) = k + m - 1 + (j - 1)m (j = 1, 2, ..., n),

so that the induced edge labels satisfy f*(xi, yj) in {k, k + 1, ..., k + |E| - 1}.

For the odd graceful labeling, define

f(xi) = 2(i - 1) (i = 1, 2, ..., m), f(yj) = 2(m - 1) + 1 + 2m(j - 1) = 2mj - 1 (j = 1, 2, ..., n).

In f(xi) the maximum value is 2(m - 1), while in f(yj) the minimum value is 2(m - 1) + 1,
and f*(xi, yj) in {1, 3, 5, ..., 2|E| - 1}.

For the odd strongly harmonious labeling, define f(xi) = 2(i - 1) (i = 1, 2, ..., m) and

f(yj) = 1 + 2m(j - 1) (j = 1, 2, ..., n),

so that f*(xi, yj) = f(xi) + f(yj) runs through {1, 3, 5, ..., 2|E| - 1}.
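The labeling formulas above can be checked mechanically for small instances. The following sketch (our own verification code, not part of the paper) builds the k-graceful and odd graceful labelings of K(m,n) and confirms that the induced edge labels hit exactly the required sets:

```python
# Verify the K(m,n) labelings sketched above on small instances.
# The constructions follow the paper's formulas; the function names are ours.

def k_graceful_labels(m, n, k):
    """k-graceful labeling: f(x_i) = i-1, f(y_j) = k + m - 1 + (j-1)m."""
    fx = {i: i - 1 for i in range(1, m + 1)}
    fy = {j: k + m - 1 + (j - 1) * m for j in range(1, n + 1)}
    return fx, fy

def odd_graceful_labels(m, n):
    """Odd graceful labeling: f(x_i) = 2(i-1), f(y_j) = 2mj - 1."""
    fx = {i: 2 * (i - 1) for i in range(1, m + 1)}
    fy = {j: 2 * m * j - 1 for j in range(1, n + 1)}
    return fx, fy

def edge_labels(fx, fy):
    """Induced labels |f(y_j) - f(x_i)| over all edges of K(m,n), sorted."""
    return sorted(abs(fy[j] - fx[i]) for i in fx for j in fy)

m, n, k = 3, 4, 2                       # a small instance; |E| = mn = 12
fx, fy = k_graceful_labels(m, n, k)
assert edge_labels(fx, fy) == list(range(k, k + m * n))      # {k, ..., k+|E|-1}

fx, fy = odd_graceful_labels(m, n)
assert edge_labels(fx, fy) == list(range(1, 2 * m * n, 2))   # {1, 3, ..., 2|E|-1}
```

The same pattern extends to the odd strongly harmonious case by replacing the absolute difference with the sum f(x_i) + f(y_j).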
4 Conclusion
Bipartite graphs are widely applied. In this paper the conjecture is proved by a
constructive method, and the graceful labeling, k-graceful labeling, odd graceful
labeling and odd strongly harmonious labeling of complete bipartite graphs are given.
Study on the Some Labelings of Complete Bipartite Graphs 301
References
1. Ma, K.: Graceful Graphs. Beijing University Press, Beijing (1991) (in Chinese)
2. Slater, P.J.: On k-graceful graphs. In: Proc. of the 13th S.E. Conference on Combinatorics,
Graph Theory and Computing, Boca Raton, pp. 52–57 (1982)
3. Maheo, M., Thuillier, H.: On d-graceful graphs. Ars Combinatoria 13(1), 181–192 (1982)
4. Gallian, J.A.: A dynamic survey of graph labeling. The Electronic Journal of Combinatorics
(July 2000)
5. Kotzig, A.: Recent results and open problems in graceful graphs. Congressus Numerantium 44,
197–219 (1984)
An Effective Adjustment on Improving the Process of
Road Detection on Raster Map*
1 Introduction
The digital age requires digital communication, and it has made the mass production
and updating of traffic vector maps an urgent task in China. Owing to population
concentration and a high degree of modernization, the current transportation system
faces several serious problems. The role of GPS traffic management, vehicle monitoring,
and personal navigation technology in alleviating traffic congestion and maintaining
social security has become apparent. All applications involving GPS navigation,
traffic management, or traffic control depend on road location; in a sense, the
development of GIS vector traffic maps has become the critical bottleneck of modern
traffic management.
The digital maps in a Geographic Information System (GIS) can be divided into two
categories: grid (raster) maps and vector maps. A vector map is obtained from the grid
* This work was supported in part by the National Natural Science Foundation of China under
Grant 60974092.
** Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 302–308, 2011.
© Springer-Verlag Berlin Heidelberg 2011
map by computer processing. Improving the process of extracting road network
information from grid traffic maps has become the basis for the automatic production
of vector maps. The literature has proposed various methods for recognizing road
networks on grid maps; many are original, and they are effective for some maps.
However, from the perspective of system analysis, the whole identification process is
an open-loop system: the road layer must be observed manually and the road sampling
threshold adjusted manually. The problems to be solved are how to make the road
information a measurable output of the system and how to turn the whole road-network
recognition process into a feedback system; solving them improves the identification
of road information in traffic grid maps and is critical for achieving automatic
recognition. The improved process turns traffic-map identification into a closed-loop
system with a measurable output and a controllable target, in which the recognition
process can be adaptively optimized.
2 Basic Principle
The comprehensive road-map information identification system is shown in Figure 1.
The basic idea is to adjust the recognition threshold according to the output of the
road layer. The realization process is shown in Figure 2. The control objectives of
the closed-loop system are to drive the road measure and the area measure toward
their targets: the output R of the road recognition subsystem should approach the set
of all roads, and the output A of the area recognition subsystem should approach the
set of all non-road area. In other words, optimizing the output of the comprehensive
identification system directly corresponds to improving the recognition accuracy of
the output road information.
Fig. 1. Schematic of the initial design of the comprehensive road-map information
identification system

Fig. 2. Identification-error process of the initial design of the comprehensive
road-map information identification system
The connectivity of the comprehensively identified road-network layer is the key factor
deciding correct identification and feedback control. Starting only from classical
system design theory, and considering only external inputs and simple output feedback,
it is difficult to control the internal processes of the comprehensive identification
system. In the simple formulation, comprehensive identification distinguishes two
classes: the road and the region. These two factors (equivalent to the two dominant
poles of the system) simplify the handling process.
Taking the recognition system's internal processes into account, the image produced by
each sub-process is in fact always composed of three elements: road pixel blocks,
region pixel blocks, and noise pixel blocks. The processing within the system
therefore needs to be improved by inserting a noise re-clustering step in the middle
of each process. Each noise re-clustering step does the following:

Noise pixel blocks are assigned to the road or to the region.

The result of re-clustering, namely the connectivity of the center lines of the
road-network layer, the width of feeder roads and other measurable characteristics,
is used to judge the reasonableness of the road network and to decide whether the
final result is output.

If the road network does not meet the specifications, the unreasonable pixel blocks
are used to modify the normalized map preceding this noise clustering, correcting
assignments that wrongly placed road pixels into the region.

A new round of noise re-clustering sub-processes is then started.
A diagram of the comprehensive road-map information identification system is shown in
Figure 3.
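The noise re-clustering loop described above can be sketched in code. The version below is a much-simplified toy of our own (a plain majority vote over the 8-neighborhood, not the paper's direction-based criteria); all names are placeholders:

```python
# Toy noise re-clustering: each noise pixel 'N' is merged into road 'R' or
# region 'A' by a majority vote over its 8-neighborhood, iterating until no
# noise pixel changes. The paper's actual criteria use extension directions.

ROAD, AREA, NOISE = "R", "A", "N"

def recluster(grid):
    h, w = len(grid), len(grid[0])
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if grid[y][x] != NOISE:
                    continue
                votes = {ROAD: 0, AREA: 0}
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                            if grid[ny][nx] in votes:
                                votes[grid[ny][nx]] += 1
                if votes[ROAD] or votes[AREA]:
                    grid[y][x] = ROAD if votes[ROAD] >= votes[AREA] else AREA
                    changed = True
    return grid

T_in = [list("AARAA"),
        list("ANRNA"),
        list("AARAA")]
T_out = recluster([row[:] for row in T_in])
assert all(NOISE not in row for row in T_out)   # T_out = R ∪ A, no noise left
```

In the paper's terms, this takes T_in = R ∪ A ∪ N to T_out = R ∪ A once the noise set is empty.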
Fig. 3. The comprehensive road-map information identification system, taking the
system's internal processes into account

Fig. 4. Improved extraction process for identifying roads in city grid traffic maps
Feature-module matching eliminates the printed-map markings that are not conducive to
road identification, yielding the pretreated color map T1. From then on, T1 serves as
the input source for road identification and extraction.
306 Y. Li, X.-d. Zhang, and Y.-l. Bao
Noise N0 is eliminated by identifying each noise point and converting it to road or
region, i.e., by noise-road re-clustering and noise-region re-clustering. When the
noise set Nn becomes empty, the complete clustering into road R and region A is
obtained as the black-and-white binary image T2k+1 (k = 1, 2, ..., n).
The criterion used is complete re-clustering of noise pixels based on the extension
directions of the eight neighboring pixels, comprising unbiased and biased clustering
criteria. Unbiased clustering criteria alone cannot completely remove the noise,
whereas with the biased clustering criteria the noise can be completely removed. As
shown in Figure 6, T_in = R ∪ A ∪ N and T_out = R ∪ A, which constitutes the complete
clustering process.
Fig. 6. Complete re-clustering of noise based on the extension directions of the eight
neighboring pixels
The road-network layer is output as a black-and-white binary bitmap. If the previous
cycle has normalized the thresholds and the road network is completely corrected
without further changes, the result is output; if problems are found during testing,
the feedback process starts, the relevant thresholds of the normalized map are
adjusted, and a new identification process begins from the original map.
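The closed-loop threshold adjustment can be sketched as follows. This is a hedged toy of our own: extract_roads and road_quality stand in for the paper's recognition and connectivity-checking subsystems, and the target measure is an invented placeholder.

```python
# Sketch of the feedback threshold adjustment described above.

def extract_roads(image, threshold):
    """Placeholder recognition step: binarize the image by a threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def road_quality(road_layer):
    """Placeholder measurable output: fraction of pixels classified as road."""
    total = sum(len(row) for row in road_layer)
    return sum(map(sum, road_layer)) / total

def detect_with_feedback(image, threshold, target=0.3, tol=0.05, step=8,
                         max_iter=50):
    """Adjust the sampling threshold until the output measure is acceptable."""
    for _ in range(max_iter):
        roads = extract_roads(image, threshold)
        q = road_quality(roads)
        if abs(q - target) <= tol:        # road network deemed reasonable
            return roads, threshold
        threshold += step if q > target else -step   # feedback adjustment
    return roads, threshold

roads, th = detect_with_feedback([[0, 50, 200, 255], [10, 60, 210, 250]], 128)
```

A real system would replace road_quality with the connectivity and road-width checks of the re-clustering step, but the loop structure is the same: measure the output, compare with the target, adjust the threshold, and re-identify.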
References
1. Li, D.: Digital Earth and the "3S" technology. GIS Forum (2001),
http://www.gischina.com
2. Fan, C., Li, Z., Ye, X., Gu, W.: The edge of the road approach recognition system and its
real-time implementation. Signal Processing 14(4), 337–345 (1998)
3. Zhang, W., Huang, X., Bao, Y., Shi, J.: Automatic generation of vector electronic map.
Microelectronics and Computer 16(4), 30–32 (1999)
4. Shen, Q., Tang, L.: Introduction to Pattern Recognition. National Defense University Press
(1991)
5. Guo, J., Yao, Z., Bao, Y., Zhang, W.: An automatic correction algorithm of vector map.
China Image and Graphics 4(5), 423–426 (1999)
6. Cherkassky, B.V., Goldberg, A.V., Radzik, T.: Shortest paths algorithms: Theory and
experimental evaluation. Technical Report 93-1480, Computer Science Department,
Stanford University
7. Zhang, Z.: A feed close loop road extraction of color city map. Journal of Engineering
Graphics 26(4), 21–26 (2005)
8. Hai, T.: Recognition and extraction of road from color raster traffic map image. Journal
of Computer Aided Design & Computer Graphics 17(9), 2010–2014 (2005)
9. Li, J., Taylor, G., Kidner, D.B.: Accuracy and reliability of map-matched GPS coordinates.
Computers & Geosciences 31, 241–251 (2005)
10. Haibach, F.G., Myrick, M.L.: Precision in multivariate optical computing. Appl.
Opt. 43(10), 212–217 (2004)
Multi-objective Optimized PID Controller for Unstable
First-Order Plus Delay Time Processes
1 Introduction
In the rectification, chemical, and pulp and paper industries, PID controllers are
widely used, appearing in 97% of control loops. However, the practical application of
PID controllers does not meet expectations: more than 30% of controllers operate in
manual mode, and for 65% of control loops the control errors are smaller in manual
operation than in automatic mode.
Many tuning methods for PID controllers have been proposed in recent years, most of
them for self-regulating processes. Controlling unstable processes is much more
difficult: besides the problem of stabilizing the control system, the achievable
performance for unstable processes may differ markedly from that of stable
processes [1-3].
The goals of unstable process control should include stabilizing the unstable poles,
good servo tracking, and elimination of unknown disturbances. In practice there are
two difficulties in achieving these targets: one is the trade-off between servo
tracking and disturbance rejection, the other between robustness and performance. The
former can be handled effectively with a two-degree-of-freedom control structure.
Reference [3] introduces a method for obtaining the weighting coefficients of the
proportional and derivative actions for a known PID. Reference [4] introduces a
two-loop control structure equivalent to the conventional single-loop PID, and meanwhile
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 309315, 2011.
Springer-Verlag Berlin Heidelberg 2011
310 G. Tan et al.
elaborates the partition coefficients for the two loops. Solution methods for the
latter problem include robust methods, optimization methods, internal model control
(IMC), improved Smith design, and direct design, among others. In general, dynamic
performance is poor when a robust criterion is the only objective, and the stability
margin is poor when only an integral performance index is used. In the IMC, Smith,
and direct design methods, an adjustable parameter provides a compromise between
robustness and dynamic performance, which is a great advantage; but the realization
structures become relatively complex when applied to the control of unstable plus
delay time processes, and the choice of the adjustable parameter value is itself a
problem.
In this paper, a multi-objective optimized PID control method for unstable first-order
plus delay time (UFOPDT) processes in industry is proposed.
2 Problem Description
The structure of the two-loop control system is shown in Figure 1, in which r, d, u,
y, and e are the set-point signal, disturbance signal, control variable, controlled
variable, and error signal, respectively. The controller C(s) consists of three
elements A(s), B(s) and Kb, and P(s) is the process model; they are expressed as

A(s) = Ka(1 + sTa)/s,  B(s) = 1 + sTb   (1)

P(s) = Kp e^(-sLp) / (sTp - 1)   (2)

where Ka, Kb and Kp are static gains, Ta, Tb and Tp are time constants, and Lp is the
delay time. Tr = Tp/Lp is the ratio of lag time to delay time.
Controller C(s) is equivalent to the weighted PID controller

u = Kc(a + 1/(sTi) + bTd s) r - Kc(1 + 1/(sTi) + Td s) y
  = (aKc + Ki/s + bKd s) r - (Kc + Ki/s + Kd s) y   (3)

where Kc, Ki, and Kd are the proportional, integral and derivative gains, Ti and Td
are the integral and derivative time constants, and a and b are the set-point
weighting coefficients of the proportional and derivative actions.

Fig. 1. Structure of the two-loop control system (signals r, e, u, y and d; blocks
A(s), B(s), P(s) and Kb; the dashed box is C(s))

Kc = Ka(Ta + Tb) + Kb,  Ki = Ka,  Kd = (Ka Ta + Kb) Tb,
a = (Ta + Tb) / (Ta + Tb + Kb/Ka),  b = Ta / (Ta + Kb/Ka)   (4)
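The equivalence between the two-loop controller and the weighted PID of equations (3) and (4) can be checked numerically. The sketch below is our own check, under the reading of Figure 1 in which u = B(s)[A(s)e - Kb y]; the numeric values of Ka, Kb, Ta, Tb are arbitrary examples:

```python
# Numeric check that the two-loop controller matches the weighted PID of
# equation (3) with the gains of equation (4), at a few complex frequencies.

Ka, Kb, Ta, Tb = 1.5, 0.8, 2.0, 0.5   # arbitrary example values

def A(s): return Ka * (1 + s * Ta) / s
def B(s): return 1 + s * Tb

# Equation (4)
Kc = Ka * (Ta + Tb) + Kb
Ki = Ka
Kd = (Ka * Ta + Kb) * Tb
a = (Ta + Tb) / (Ta + Tb + Kb / Ka)
b = Ta / (Ta + Kb / Ka)

for s in (0.1 + 0.3j, 1 + 2j, 5 - 1j):
    # y-channel: B(s)(A(s) + Kb)  ==  Kc + Ki/s + Kd*s
    assert abs(B(s) * (A(s) + Kb) - (Kc + Ki / s + Kd * s)) < 1e-9
    # r-channel: B(s)A(s)  ==  a*Kc + Ki/s + b*Kd*s
    assert abs(B(s) * A(s) - (a * Kc + Ki / s + b * Kd * s)) < 1e-9
```

Since both sides are rational functions of s, agreement at a handful of points (beyond the degrees involved) confirms the algebraic identity.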
Mt = ||T(s)||∞ = sup_ω |T(jω)| >= e^(1/Tr),  with T(s) = L(s) / (1 + L(s))   (5)

where T(s) is the complementary sensitivity function and ||·||∞ is the H∞ norm. The
smaller Tr is, the greater the lower bound of Mt, and the weaker the robustness of the
system. For a stable process the recommended value is Mt = 1.3 [7], but it is hard to
determine the value of Mt for an unstable process with time delay.
In references [8] and [9], the integral gain Ki is recommended as the index of
disturbance rejection. In reference [10] it is proved that the integral gain and the
integral absolute error (IAE) behave similarly for a stable process, but the integral
gain is easier to compute.

Moreover, we expect the load-step response of the closed-loop system to show almost no
oscillation. In our experience, this type of response can be guaranteed by the damping
ratio ξ.

In conclusion, the maximum complementary sensitivity Mt, the damping ratio ξ and the
integral gain Ki are selected as the performance and robustness indexes describing the
closed-loop system, and also as the optimization indexes for the three parameters of
the PID controller.
φ(s) = (Tx s + 1)(s²/ωn² + 2ξs/ωn + 1)   (8)

where the coefficient relationships between equation (7) and equation (8) satisfy

Tar = (Tr ωr + 2ξ) / (Kir (Tbr ωr + 2ξ)) + (2ξ ωr Tbr + 4ξ²) / (ωr² (Tbr ωr + 2ξ))   (10)
With the loop function obtained, Mt can be calculated from equation (5); obviously Mt
is a function of the variable Tbr. We can select Tbr = Tbm to optimize the robustness
index Mt, that is,
Km = 2 ln(3.126 - 0.1528/Tr - 2.8/Tr²)
Tam = (0.6926 + 0.8609/Tr - 0.41914/Tr²) / Km   (13)
Tbm = 2 ln(0.7492 - 0.008822 Tr + 0.07364 √Tr)
These formulas are obtained under the condition Kb = 0, which would leave the control
variable u with a derivative term of the set-point signal r. This term can be moved
equivalently to the feedback channel; that is, there is an appropriate Kb = Kbm that
satisfies the equivalence. The parameters of the final set-point weighted PID (SW-PID)
controller of equation (3) are then

Kc = Km(Tam + Tbm) / Kp,  Ti = (Tam + Tbm) Lp,
Td = (Tam Tbm / (Tam + Tbm)) Lp,  a = Tbm / (Tam + Tbm),  b = 0   (15)
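Equation (15) maps the optimized pair (Km, Tam, Tbm) directly to the SW-PID parameters. A small sketch (the numeric inputs below are arbitrary placeholders, not values produced by the paper's optimization):

```python
# SW-PID parameters of equation (15) from the optimized (Km, Tam, Tbm).

def sw_pid(Km, Tam, Tbm, Kp, Lp):
    Kc = Km * (Tam + Tbm) / Kp
    Ti = (Tam + Tbm) * Lp
    Td = Tam * Tbm / (Tam + Tbm) * Lp
    a = Tbm / (Tam + Tbm)
    b = 0.0
    return Kc, Ti, Td, a, b

Kc, Ti, Td, a, b = sw_pid(Km=1.33, Tam=3.2, Tbm=1.44, Kp=1.0, Lp=1.0)
Ki = Kc / Ti   # integral gain follows directly; here Ki = Km/(Kp*Lp)
```

Note that Ki = Kc/Ti = Km/(Kp·Lp), so the optimized Km fixes the disturbance-rejection index immediately.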
4 Simulations
The proportional and derivative weighting coefficients of all the PID controllers
listed below are set uniformly from equation (15), except for the improved IMC, which
is realized according to its complex structure in reference [3].

An unstable first-order plus delay time process with large Tr is expressed as
P(s) = e^(-s)/(8s - 1). The Lee PID [11] (λ = 2L, Kc = 5.552, Ti = 6.4645, Td = 0.2843,
Ki = 0.8588, Mt = 1.6323), the Wang PID [1] (Ms = 1.6, Kc = 5.4582, Ti = 5.8705,
Td = 0.1318, Ki = 0.9298, Mt = 1.8691), and the modified IMC [12] (λ = 2L, Mt = 2.5003)
are compared with the proposed PID (Kc = 6.19, Ti = 4.6398, Td = 0.2276, Ki = 1.3341,
Mt = 1.9163). The step responses of the control system for a set-point step at 0 s and
a load step at 40 s are shown in Figure 2.
Fig. 2. Step responses for the large-Tr process (set-point step at 0 s, load step at
40 s): step, proposed PID, Lee PID, modified IMC, Wang PID
An unstable first-order plus delay time process with medium Tr is expressed as
P(s) = e^(-0.4s)/(s - 1). The Lee PID [11] (λ = 2L, Kc = 2.1681, Ti = 3.9753,
Td = 0.1368, Ki = 0.5454, Mt = 2.4535), the Wang PID [1] (Ms = 1.6, Kc = 2.0484,
Ti = 2.7727, Td = 0.0785, Ki = 0.7388, Mt = 3.6623), and the modified IMC [12]
(λ = 2L, Mt = 2.9498) are compared with the proposed PID (Kc = 2.4644, Ti = 2.5172,
Td = 0.1287, Ki = 0.979, Mt = 2.7867). The step responses of the control system for a
set-point step at 0 s and a load step at 40 s are shown in Figure 3.

As shown in Figures 2 and 3, the proposed method achieves a more appropriate trade-off
between robustness and performance.
Fig. 3. Step responses for the medium-Tr process: step, proposed PID, Lee PID,
modified IMC, Wang PID
[Step-response plot comparing step, proposed PID, Lee PID, modified IMC, and Huang PID]
5 Conclusions
For unstable first-order processes with time delay, the damping ratio is used to
constrain the response waveform, while the integral gain, representing disturbance
rejection, and the maximum complementary sensitivity, representing robust stability,
are optimized for the PID controller. Using the damping ratio to enforce a smooth
response has a clear physical meaning and is easy to understand and apply. Because the
damping ratio can be given directly, the proposed method reduces to a two-dimensional
search, which shortens the optimization time; the method is worth applying over a
large range of lag-to-delay ratios. The simulations confirm these results.
References
1. Wang, Y., Xu, X.: A PID Controller for Unstable Processes Based on Sensitivity.
Academic Journal of Shanghai University of Technology 31(2), 125–128 (2009)
2. Huang, H.P., Chen, C.C.: Control-system Synthesis for Open-loop Unstable Process with
Time Delay. IEE Proc. Control Theory & Appl. 144(4), 334–346 (1997)
3. Chen, C.-C., Huang, H.-P., Liaw, H.-J.: Set-Point Weighted PID Controller Tuning for
Time-Delayed Unstable Processes. Ind. Eng. Chem. Res. 47, 6983–6990 (2008)
4. Kaya, I., Tan, N., Atherton, D.P.: A Refinement Procedure for PID Controllers. Electrical
Engineering 88, 215–221 (2006)
5. Jeng, J.-C., Huang, H.-P.: Model-Based Auto-tuning Systems with Two-Degree-of-Freedom
Control. Journal of Chinese Institute of Chemical Engineers 37(1), 95–102 (2006)
6. Thirunavukkarasu, I., George, V.I., Saravana Kumar, G., Ramakalyan, A.: Robust Stability
and Performance Analysis of Unstable Process with Dead Time Using Mu Synthesis.
ARPN Journal of Engineering and Applied Science 4(2), 1–5 (2009)
7. Luyben, W.L.: Simple Method for Tuning SISO Controller in Multivariable Systems. Ind.
Eng. Chem., Process Des. Dev. 25, 654–660 (1986)
8. Åström, K.J., Hägglund, T.: Advanced PID Control. Instrument Society of America, North
Carolina (2006)
9. Tan, W., Liu, J., Chen, T., Marquez, H.J.: Comparison of Some Well-known PID Tuning
Formulas. Computers and Chemical Engineering 30, 1416–1423 (2006)
10. Chen, Y., Zeng, X., Tan, G.: Auto-tuning of Optimal PI Controller Satisfying Sensitivity
Value Constraints. Advanced Materials Research 204–210, 1938–1943 (2011)
11. Lee, Y., Lee, J., Park, S.: PID Controller Tuning for Integrating and Unstable Processes
with Time Delay. Chemical Engineering Science 55, 3481–3493 (2000)
12. Zhu, H., Shao, H.: The Control of Open-loop Unstable Processes with Time Delay Based
on Improved IMC. Control and Decision 20(7), 727–731 (2005)
Water Quality Evaluation for the Main Inflow
Rivers of Nansihu Lake
1 Introduction
At present, water eutrophication and water quality deterioration are very serious and
have become a global environmental problem. In the past two decades, water quality
deterioration of lakes in China has become increasingly serious, as a large number of
human activities have had a negative impact on the lake environment. Water
eutrophication has become one of the major environmental problems hindering China's
economic development.
Nansihu Lake is located in the southwest of Shandong Province and belongs to the Sihe
River system in the Huaihe River basin. It consists of four sub-lakes: Nanyanghu Lake,
Dushanhu Lake, Zhaoyanghu Lake and Weishanhu Lake. It covers an area of 1266 km² with
a north-south length of 126 km, and is divided into the Upper Lake and the Lower Lake
by a dam built in 1960. North of the dam is the Upper Lake, with a catchment area of
2.69×10⁴ km², accounting for 88.4% of the whole catchment area; south of the dam is
the Lower Lake, with a catchment area of 0.35×10⁴ km², accounting for only 11.6% of
the whole area [1].
At present, pollution problems in rivers and lakes across the country continue to
appear, bringing serious negative impacts. For lakes, the vast majority of non-point
source pollutants enter the lake through inflow rivers [2], and the water quality of
inflow rivers is closely related to that of the lakes, so it is essential to evaluate
the water quality of the rivers flowing into the lake. In this paper, the method of
the single-factor and comprehensive water quality identification index is used to
evaluate the water quality of the main inflow rivers of Nansihu Lake.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 316–323, 2011.
© Springer-Verlag Berlin Heidelberg 2011
In order to monitor the water quality of the inflow rivers, 25 monitoring sections
were identified according to the basin environment, pollution discharges along the
rivers, and hydrological characteristics. All collected samples were sent to the
Hydrology and Water Resources Survey Bureau of Jining for analysis. The detection
methods followed the Quality Standard of Surface Water Environment (GB3838-2002) and
Water and Wastewater Monitoring and Analysis Methods (Fourth Edition). As the inflow
rivers with large catchment areas are mainly located around the Upper Lake, the
2006–2008 cross-section data of four major rivers flowing into the Upper Lake were
selected as the analysis objects: the Malou monitoring section of the Baimahe River,
the Xiyao monitoring section of the Dongyuhe River, the Yingou monitoring section of
the Sihe River and the Yulou monitoring section of the Zhuzhaoxinhe River.

According to the monitoring data and the actual pollution situation of the rivers, a
group of organic pollution indicators closely related to water quality was selected as
evaluation indices: DO, CODMn, NH3-N and BOD5. The evaluation criterion followed the
Quality Standard of Surface Water Environment (GB3838-2002).
The comprehensive water quality identification index is computed as

Iwq = X1.X2X3X4,  with X1.X2 = (1/n) Σ Pi

where Pi is the single-factor water quality identification index of the i-th
indicator, so that X1.X2 is the average of the single-factor indices; n is the number
of evaluation indices; X3 is the number of participating indicators whose quality is
worse than the functional-area objective; and X4 is the comparison result between the
comprehensive water quality type and the functional-area standard.
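The index computation above can be sketched as follows. This is our own illustration: the sample single-factor values are invented, and the grading function collapses the two "worse than Grade V" cases (with and without black odor) into one label:

```python
# Sketch of the comprehensive water quality identification index I_wq.

def comprehensive_index(single_factor, n_exceeding, x4):
    """X1.X2 = mean of single-factor indices P_i; X3 = count of indicators
    worse than the functional-area objective; X4 compares the overall type
    with the functional-area standard."""
    x1_x2 = sum(single_factor) / len(single_factor)
    return x1_x2, int(n_exceeding), int(x4)

def grade(x1_x2):
    """Simplified grade lookup by the integer part of X1.X2."""
    if x1_x2 < 6.0:
        return "Grade " + "I II III IV V".split()[int(x1_x2) - 1]
    return "worse than Grade V"

# invented sample values for DO, CODMn, NH3-N, BOD5
x1_x2, x3, x4 = comprehensive_index([2.5, 5.8, 6.4, 6.2], 3, 1)
print(f"I_wq = {x1_x2:.1f}{x3}{x4} -> {grade(x1_x2)}")
```

The printed form concatenates the digits in the paper's notation: the first figure is the average single-factor index, the next the count of exceeding indicators, and the last the functional-area comparison.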
Table 1. Determination of the comprehensive water quality level from X1.X2

X1.X2                  Comprehensive water quality level
1.0 <= X1.X2 <= 2.0    Grade I
2.0 <  X1.X2 <= 3.0    Grade II
3.0 <  X1.X2 <= 4.0    Grade III
4.0 <  X1.X2 <= 5.0    Grade IV
5.0 <  X1.X2 <= 6.0    Grade V
6.0 <  X1.X2 <= 7.0    Worse than Grade V but not black-odorous
X1.X2 > 7.0            Worse than Grade V and black-odorous
Table 2. Monitoring results of selected sections from 2006 to 2008 (unit: mg/L)

Monitoring Time  River Name          Monitoring Section  DO    CODMn  NH3-N  BOD5
Year 2006        Baimahe River       Malou               6.57  10.61  2.61   -
                 Dongyuhe River      Xiyao               5.80  9.64   1.14   -
                 Sihe River          Yingou              6.98  9.10   0.54   -
                 Zhuzhaoxinhe River  Yulou               6.60  15.82  6.19   -
Year 2007        Baimahe River       Malou               6.77  7.74   2.14   4.53
                 Dongyuhe River      Xiyao               6.01  7.24   1.10   4.59
                 Sihe River          Yingou              6.39  6.40   0.50   4.43
                 Zhuzhaoxinhe River  Yulou               6.82  8.21   2.86   12.42
Year 2008        Baimahe River       Malou               6.09  6.20   0.76   3.68
                 Dongyuhe River      Xiyao               6.27  8.20   0.37   4.18
                 Sihe River          Yingou              6.80  6.75   0.38   4.42
                 Zhuzhaoxinhe River  Yulou               7.49  6.64   3.31   7.09

Note: BOD5 was not measured in 2006.
320 Y. Liyuan et al.
As can be seen from Table 3, during 2006–2008 only DO met the raw-water environmental
function standard at all sections, and the range of variation of its single-factor
water quality identification index was small; by single-factor evaluation these rivers
were in Grade II or III of the water quality standard for DO. The remaining three
indicators exceeded the standard to varying degrees. For example, at the Yulou
monitoring section of the Zhuzhaoxinhe River in 2006 the single-factor identification
index of CODMn was 6.13, so the single-factor evaluation was worse than Grade V, three
grades below the functional-area objective. For NH3-N and BOD5, the concentrations
differed greatly among rivers: the single-factor identification indices of the Baimahe
River and the Zhuzhaoxinhe River were large, indicating poor water quality. The BOD5
concentration seriously exceeded the standard in the Zhuzhaoxinhe River in 2007 and
2008, with single-factor evaluations of Grade V or worse than Grade V.
Table 3. Single-factor water quality identification indices of selected sections from 2006 to 2008

Monitoring Time  River Name          Monitoring Section  DO    CODMn  NH3-N  BOD5
Year 2006        Baimahe River       Malou               2.60  5.71   6.32   -
                 Dongyuhe River      Xiyao               3.20  4.91   4.31   -
                 Sihe River          Yingou              2.30  4.81   3.10   -
                 Zhuzhaoxinhe River  Yulou               2.60  6.13   8.15   -
Year 2007        Baimahe River       Malou               2.50  4.40   6.02   4.30
                 Dongyuhe River      Xiyao               3.00  4.31   4.21   4.31
                 Sihe River          Yingou              2.70  4.11   2.00   4.21
                 Zhuzhaoxinhe River  Yulou               2.50  5.82   6.43   6.23
Year 2008        Baimahe River       Malou               2.90  4.00   3.50   3.70
                 Dongyuhe River      Xiyao               2.80  4.61   2.60   4.01
                 Sihe River          Yingou              2.50  4.21   2.70   4.21
                 Zhuzhaoxinhe River  Yulou               2.00  4.21   6.73   5.72
Fig. 1. Tendency of single-factor water quality identification indices from 2006 to
2008 (bar charts of DO, CODMn, NH3-N and BOD5 per year): (a) Baimahe River,
(b) Sihe River, (c) Dongyuhe River, (d) Zhuzhaoxinhe River
the standard in the Zhuzhaoxinhe River. Although all indices showed downward trends,
because of their large starting values the single-factor evaluations were still one or
two grades below the functional-area objectives.

As can be seen from Fig. 2, the comprehensive water quality identification indices of
the Sihe River were the smallest; its comprehensive evaluation was Grade III, meeting
the functional-area standard. The water quality of the other three rivers, from good
to bad, was in the following order: Dongyuhe River, Baimahe River, Zhuzhaoxinhe River.
The water quality of these three rivers improved over the three years, especially in
the Baimahe River, whose water quality changed from Grade IV in 2006 to Grade III in
2008. Though the water quality of the Zhuzhaoxinhe River improved somewhat, as in the
single-factor evaluation its large starting values meant that it still did not meet
the functional-area standard, so this river was the most seriously polluted.
Fig. 2. Tendency of comprehensive water quality identification indices from 2006 to
2008 (left: indices of the four rivers for each year; right: yearly trend for each river)
4 Conclusion
During 2006–2008, only DO met the raw-water environmental function standard at all
sections; by single-factor evaluation these rivers were in Grade II or III for DO,
while the remaining three indicators exceeded the standard to varying degrees.

The water quality of all four rivers showed an improving trend from 2006 to 2008,
especially the Baimahe River, whose water quality changed from Grade IV in 2006 to
Grade III in 2008. Though the water quality of the Zhuzhaoxinhe River improved
somewhat, it still did not meet the functional-area standard because of its large
starting values, so this river was the most seriously polluted.
References
1. Yang, L., Shen, J., Liu, E., et al.: Characteristic of nutrients distribution from recent
sediment in Lake Nansihu. Lake Sci. 19(4), 390–396 (2007)
2. Song, H., Lv, X., Li, X.: Application of fuzzy comprehensive evaluation in water quality
assessment for the west inflow of Taihu Lake. Journal of Safety and Environment 6(1),
87–91 (2006)
3. Guo, M.: Application of mark index method in water quality assessment of river.
Environmental Science and Management 31(7), 175–178 (2006)
4. Xu, Z.: Comprehensive water quality identification index for environmental quality
assessment of surface water. Journal of Tongji University: Natural Science 33(4), 482–488
(2005)
5. Fan, Z., Wang, L., Chen, L., et al.: Application of water quality identification index to
environmental quality assessment of Dianshan Lake. Journal of Shanghai Ocean
University 18(3), 314–320 (2009)
6. Chang, H., Che, Q.: Study on methods of evaluation for water eutrophication. Journal of
Anhui Agri. Sci. 35(32), 10407–10409 (2007)
Software Piracy: A Hard Nut to Crack
__A Problem of Information Security
Bigang Hong
1 Introduction
Easier Internet access in recent years has enabled software firms to distribute their
products online instead of through traditional distribution channels. At the same
time, these cheap and easy channels have made the protection of intellectual property
more difficult and more imperative. On the one hand, software products are hard to
protect because of their peculiar attributes, so traditional instruments of
intellectual property protection such as patents seem ill-suited. On the other hand,
modern technologies make piracy easier in terms of speed, absence of geographic
constraints, and the absence of any need for a middleman. The Internet is a
double-edged sword when it comes to software distribution: negligible distribution
cost holds not only for legitimate firms but also for pirate firms, which take
advantage of the anonymity of the virtual world, causing large revenue losses for
legitimate firms and, in turn, harming the industry itself and the economy as a whole.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 324–329, 2011.
© Springer-Verlag Berlin Heidelberg 2011
[Figure: bar chart of software piracy rates in 2008 and 2009 across eight regions; vertical axis 0–70.]
Source: Based on the Seventh Annual BSA/IDC Global Software Piracy Study
5 Conclusion
According to BSA, installations of unlicensed software on PCs dropped in 54 of the
111 individual economies studied, and rose in only 19 in 2009. It is clear that anti-
piracy education and enforcement campaigns spearheaded in recent years by the
software industry, national and local governments, and law enforcement agencies
continue to have a positive impact in driving legal purchases and use of PC software.
It is well recognized that a country's stage of development and the quality of its governance may have a major impact on the incidence of software piracy. Greater economic prosperity makes legal software more affordable on the one hand and increases the opportunity cost of illegal acts on the other. More prosperous nations may also have better and stricter monitoring mechanisms to check piracy. The risk of exposure by a free press in politically free nations will surely act as a negative incentive to software pirates. Consequently, the major step to take in the campaign against global software piracy is to increase public
education and awareness. Reducing software piracy often requires a fundamental shift in the public's attitude toward it. Public education is critical. Governments can increase public awareness of the importance of respecting creative works by informing businesses and the public about the risks associated with using pirated software and by encouraging and rewarding the use of legitimate products. This is crucial for the developed countries, and for the developing economies as well in the long term. As a newly emerging economy, China still has a long way to go, for its own development and for the responsibility it should bear as a large economy.
References
1. Yang, D., Sonmez, M.: Economic and cultural impact on intellectual property violations: A study of software piracy. Journal of World Trade 41, 731–750 (2007)
2. Piquero, N.L., Piquero, A.R.: Democracy and intellectual property: Examining trajectories of software piracy. Annals of the American Academy of Political and Social Science 605, 104–127 (2006)
3. Greenstein, S., Prince, J.: The diffusion of the internet and the geography of the digital divide in the United States. National Bureau of Economic Research, working paper #12182 (2006)
Study of Bedrock Weathering Zone Features
in Suntuan Coal Mine
1 Introduction
In recent years, many researchers at home and abroad have extensively studied the lithological characters, distribution rules, and water-resisting performance of bedrock weathering zones. Compared with the attention focused on the Cenozoic bottom aquifer ('the fourth below'), however, such work remains quite out of proportion. The coal seam is deeply buried beneath the bedrock weathering zone of the coal-seam roof, whose hydrological and engineering-geological features differ greatly from those of the unweathered rock, and study of this area by drilling is quite inconvenient, since in most cases only imperfect core is recovered. The lithological characteristics, mineral composition, hydro-physical properties, physical and mechanical properties, and distribution of the weathering zone in the coal-seam roof bedrock determine the strength of its water-resisting performance, and that performance directly controls whether the overlying loose-bed surface-water aquifers are sealed off, as well as the possibility of collapse into the roadway. Therefore, proving and correctly evaluating the hydrogeological conditions of the bedrock weathering zone at the working surface has important practical significance for reasonably setting the safety coal-rock pillar height and the upper mining limit, increasing the recovery rate of coal resources, and fully developing and utilizing them. It also provides a reliable basis for sustained, normal, and safe mine production.
The 7211 working surface is the fourth fully mechanized working face of the first area in the 72 coal seam of the 81 mining area in Suntuan Coal Mine, and its original
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 330–335, 2011.
© Springer-Verlag Berlin Heidelberg 2011
design cap elevation is -250 m, its floor elevation is -315 m, and the difference is 65 m. Its mineral seam is mainly the 72 coal seam. The faults inside the working surface, F10 (throw 10–120 m), F10-1 (throw 0–5 m), and DF10 (throw 0–8 m), spread to the northeast, and F10 crosses the working surface obliquely. A large coal-pillar height was set when the face was designed, which locks up a great deal of coal resources and causes serious resource loss, and its technical and economic rationality has been seriously questioned. The locked-up resource has a high degree of prospecting, shallow burial depth, a complete production system, and good mining technical conditions. Therefore, whether the coal-pillar height is reasonable has become one of the theoretical and technical difficulties to be solved in Suntuan Coal Mine. The data of the 7211 working surface and the eleven drill holes around it show that the Cenozoic loose-bed thickness is 203.00–210.85 m, with an average of 205.97 m, and the bedrock surface elevation is -176.70 to -184.99 m, with an average of -179.89 m. Generally, the bedrock surface elevation of the working surface decreases gradually from southwest to northeast.
[Figure: XRD pattern; vertical axis Intensity (Counts), horizontal axis 2-Theta; legend lists Quartz (SiO2), Halloysite, Kaolinite, Polylithionite, and Kaolinite-Md.]
Fig. 1. XRD mineral-composition intensity chart of a weathered sandy mudstone sample
The thickness of the zone is determined by many factors, such as lithology and the degree of fracture development. The wind-oxidation zone depths of the various inspection holes in Suntuan Coal Mine are shown in Table 2.
Table 2. Wind-oxidation zone depth data for the various inspection holes
According to these data, the weathering zone depth of the field is 15 m and the oxidation zone depth is 25 m. The rocks are mainly khaki, motley, and taupe. In decayed rock the fractures are often full of crevice water, the amount depending mainly on the degree of connectivity and the development of micro-fractures. The rocks are soft and mostly fragmented, with developed fractures and low strength.
Based on statistical analysis of the data from the 7211 working surface and the eleven drill holes around it, the study draws up the ancient topographic map of the bedrock surface, the elevation contour map, and the weathering zone distribution plan, as shown in Table 3 and Figures 3 and 4.
Table 3. 7211 working surface and nearby drill hole exposition weathering zone thickness
statistical table
[Table columns: Number; Hole number; Bedrock surface elevation (m); Bedrock weathering thickness (m). Rows list the individual drill holes together with the average, minimum, and maximum values; the numeric entries are not recoverable.]
334 X. Li, D. Yao, and J. Yang
3 Conclusions
(1) The minerals are mainly quartz and feldspar filled with clay minerals, which have a certain water-resisting ability.
(2) The ancient landform is mainly low in the southeast and high in the northwest, which has an obvious influence on changes in the thickness of the weathering zone. As the terrain descends from north to south, the thickness of the weathering zone slightly increases. Gravel layers and claypans of different thicknesses are deposited in the lower areas of the ancient landform, and both have good water-resisting properties.
References
1. He, J.: Study of engineering-geological features of weathered zone of Wugou Coal Mine's base rock. Mining Engineering 24(11) (June 2008)
2. Xuan, Y.-q.: Study on the weathered damage attributes of rock and the law of reduction for coal column protection. Chinese Journal of Rock Mechanics and Engineering 24(11) (June 2005)
3. Xuan, Y.: The possibility study of increasing the upper limit in combined working face under the thick loose layer containing water. Journal of Anhui University of Science and Technology 20(1), 42–45 (2000)
4. State Bureau of Coal Industry: Retaining Coal Column of the Building, Water Body, the Railway and the Main Mine Tunnel and Exploitation Rules of the Coal Pressed. China Coal Industry Publishing House, Beijing (2000)
5. Yang, B.: The testing study of the key technology safely mining coal seam in weathered zone. Journal of China Coal Society 28(6), 608–612 (2003)
6. Yang, B.: Examination and study of retaining sands resisting coal column under the moderate water-bearing layer. Journal of China Coal Society 27(4), 342–346 (2002)
Mercury Pollution Characteristics in the
Soil around Landfill
1 Introduction
After the outbreak of Minamata disease, which shocked the world in the 1950s, mercury, as a global pollutant, has attracted widespread attention and study. In recent years, landfill and incineration of waste have come to be considered a new source of mercury pollution, receiving widespread attention from domestic and foreign scientists and the relevant government departments.
Europe and other developed countries attach great importance to the mercury pollution and release problems of landfills. Since the 1980s, the U.S. EPA and other agencies have carried out research in this area. Researchers at Oak Ridge National Laboratory (ORNL) surveyed atmospheric mercury concentrations at several landfills in Florida, USA; the results showed that atmospheric mercury concentrations downwind are usually 30 to 40 times those upwind, or more [1-2]. Li Zhonggen and others analyzed mercury (Hg) in waste and soil at four municipal solid waste (MSW) landfills in Guiyang and Wuhan City; the results showed that there is an Hg pollution risk associated with MSW landfilling [3]. Ding Zhen, Wang Wenhua, and others measured topsoil samples from the Laogang Landfill with an AMA 254 Automatic Solid/Liquid Mercury Analyzer and showed that mercury is released in gaseous form [4-7].
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 336–340, 2011.
© Springer-Verlag Berlin Heidelberg 2011
2.1 Materials
2.2 Methods
The analysis results for mercury content in the surface soil near the landfill are shown in Table 1; after statistical analysis with EXCEL software, the results are shown in Figure 1.
From Table 1 and Figure 1, it can be seen that the mercury content distribution in the surface soil around the landfill follows certain patterns. The mercury concentration in the surface soil 50 m from the garbage dump is higher than that at 20 m and 100 m, in the order S50 (0.0411 ppm) > S20 (0.0345 ppm) > S100 (0.0257 ppm), and the mercury concentrations in the surface soil near the landfill in the different directions are WS (0.0525 ppm) > W (0.0441 ppm) > WS (0.0255 ppm) > ES (0.0218 ppm). Nevertheless, the surface soil is not contaminated by mercury.
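The distance ordering quoted above can be reproduced with a short Python sketch; the concentration values are exactly those reported in the text, and the snippet simply ranks them:

```python
# Surface-soil mercury concentrations (ppm) reported in the text,
# keyed by distance from the garbage dump.
by_distance = {"S20": 0.0345, "S50": 0.0411, "S100": 0.0257}

# Rank the distances from highest to lowest mercury content.
ranking = sorted(by_distance, key=by_distance.get, reverse=True)
print(ranking)  # ['S50', 'S20', 'S100'] — matches S50 > S20 > S100
```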
338 J. Yang, M. Zhang, and X. Li
Table 1. Mercury content in surface soil near the landfill (Unit: ppm)
Fig. 1. Mercury content distribution diagram in surface soil near the landfill
According to the mercury content in the different soil sections, a vertical analysis was performed with EXCEL; the results are shown in Figure 2.
From Figure 2, it can be seen that the mercury content at the sampling points in the other directions decreases as depth increases, except that the mercury content at the three sampling points in the southeast increases with depth, and an anomaly occurs at the 50 m point in the southwest.
Fig. 2. [Line chart: mercury content (ppm, 0–0.14) versus soil depth profile (0-20, 20-40, 40-60) for sampling points WS20, WS50, WS100, W20, W50, W100, ES20, ES50, ES100]
4 Conclusions
(1) The mercury content distribution in the surface soil around the landfill follows certain patterns: S50 (0.0411 ppm) > S20 (0.0345) > S100 (0.0257). The main reason may be that mercury, in particulate and gaseous form, contaminates the surrounding soil through wind-borne migration, with an influence range of about 50 m. Secondly, the mercury distribution of the surface soil in the different directions is WS (0.0525 ppm) > W (0.0441 ppm) > WS (0.0255 ppm) > ES (0.0218 ppm). The main reason may be the dominant wind direction in Huainan, which is southeast in summer and northeast in winter.
(2) The vertical distribution of mercury content in the soil around the landfill is as follows: the mercury content in the southeast increases with increasing depth. The main reason may be that the underlying soil there is coal gangue, which makes the mercury distribution in the soil show this trend in the southeast. The mercury content at the sampling points in the other directions decreases as depth increases, which shows that mercury pollution in the soil around the landfill is concentrated mainly in the topsoil.
Because the working time was short and the amount of data is limited, this study of mercury pollution characteristics is not comprehensive, and pollution by other heavy metals in the soil near the landfill has not yet been analyzed. The research therefore needs further study in the future.
References
1. Pukkala, E., Pönkä, A.: Increased incidence of cancer and asthma in houses built on a former dump area. Environ. Health Perspect. 109(11), 1121–1125 (2001)
2. Vrijheid, M., et al.: Chromosomal congenital anomalies and residence near hazardous waste landfill sites. The Lancet 359(9303), 320–322 (2002)
3. Li, Z.-g., Feng, X.-b., Tang, S.-l., et al.: Distribution characteristics of mercury in the waste, soil and plant at municipal solid waste landfills. Earth and Environment 34(4), 11–18 (2006) (in Chinese)
4. Ding, Z.-h., Wang, W.-h., Tang, Q.-h., et al.: Release of mercury from Laogang Landfill, Shanghai. Earth and Environment 33(1), 6–10 (2005) (in Chinese)
5. Bartolacci, S., Buiatti, E., Pallante, V., et al.: A study on mortality around six municipal solid waste landfills in Tuscany Region. Epidemiol. Prev. 29(5-6 suppl.), 53–56 (2005)
6. Tang, Q.-h., Ding, Z.-h., Wang, W.-h.: Pollution and transference of mercury in soil-plant system of different landfill units. Shanghai Environmental Sciences 22(11), 768–775 (2003) (in Chinese)
7. Chang, Q.-s., Ma, X.-q., Wang, Z.-y., et al.: Pollution characteristics and evaluation of heavy metals in municipal rubbish landfill sites. Journal of Fujian Agriculture and Forestry University (Natural Science Edition) 36(2), 194–197 (2007) (in Chinese)
The Research on Method of Detection for
Three-Dimensional Temperature of the Furnace
Based on Support Vector Machine
1 Introduction
The flame combustion process in a boiler furnace takes place in a large space, fluctuates constantly, and involves physical and chemical processes with obvious three-dimensional characteristics. The stability of the furnace flame directly affects production safety. Therefore, measuring the three-dimensional temperature field in the furnace has important practical significance [1-2].
In this paper, a CCD (charge-coupled device) camera is used as a sensor for the two-dimensional radiation-energy distribution of the furnace, receiving the three-dimensional heat-radiation signal in the furnace, and further research is conducted on the basis of the BP algorithm. The RGB three-color signal values received by the CCD camera under visible light serve as the three inputs of an SVM network, and 50 groups of blackbody-furnace-calibrated RGB three-color signal values, with the corresponding temperature values T as outputs, are used to train the SVM network, establishing the SVM temperature network. The prediction results of the SVM network model are analyzed, and the well-trained SVM network is used to obtain the two-dimensional flame radiation-temperature image, which, combined with the regularization method, completes the three-dimensional
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 341–346, 2011.
© Springer-Verlag Berlin Heidelberg 2011
342 Y. Yu et al.
combustion flame temperature field reconstruction. Finally, simulation and analysis of the trained SVM verify that the results are ideal, which also proves that using an SVM network to obtain the three-dimensional temperature distribution is feasible.
Let $\{\varphi_j(x)\}_{j=1}^{l}$ represent a set of nonlinear transformations, where $l$ is the dimension of the feature space. The hyperplane acting as the decision surface in this feature space can be defined as Eq. (1):

$$\sum_{j=1}^{l} \omega_j \varphi_j(x) + b = 0 \qquad (1)$$

$$K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j) \qquad (3)$$

If $\alpha_i^*$ is the optimal solution, the other conditions of the algorithm remain unchanged, but

$$\omega^* = \sum_{i=1}^{n} \alpha_i^* y_i \varphi(x_i) \qquad (5)$$

$$f(x) = \operatorname{sgn}\left( \sum_{i=1}^{n} \alpha_i^* y_i K(x_i, x) + b^* \right) \qquad (6)$$
The SVM output is a linear combination of the middle-layer nodes, each of which corresponds to the inner product of the input sample and one support vector. Since the resulting decision function has a form similar to that of a neural network, it is called a support vector network, as shown in Figure 1.
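The decision function of Eq. (6) can be sketched directly in Python. The support vectors, the multipliers $\alpha_i y_i$, and the bias below are hypothetical values chosen only to illustrate the $\operatorname{sgn}(\sum_i \alpha_i y_i K(x_i, x) + b)$ computation with an RBF kernel:

```python
import numpy as np

def rbf_kernel(x, xi, sigma=1.0):
    # Gaussian (RBF) kernel: K(x, xi) = exp(-||x - xi||^2 / sigma^2)
    return np.exp(-np.sum((x - xi) ** 2) / sigma ** 2)

def svm_decision(x, support_vectors, alpha_y, b):
    # Eq. (6): f(x) = sgn( sum_i alpha_i* y_i K(x_i, x) + b* )
    s = sum(ay * rbf_kernel(xi, x) for xi, ay in zip(support_vectors, alpha_y))
    return np.sign(s + b)

# Hypothetical support vectors and multipliers (illustrative only):
sv = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
alpha_y = [1.0, -1.0]   # alpha_i * y_i for each support vector
b = 0.0

print(svm_decision(np.array([0.1, 0.1]), sv, alpha_y, b))   # point near the +1 support vector
print(svm_decision(np.array([1.9, 1.9]), sv, alpha_y, b))   # point near the -1 support vector
```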
Fig. 1. [Structure of the support vector network: inputs $x_1, \dots, x_m$ feed kernel nodes $K(x_1, x), \dots, K(x_h, x)$, whose outputs are weighted by $\alpha_i y_i$ and summed into a $\operatorname{sgn}(\cdot)$ output node]
Using a blackbody furnace and its temperature control system, thermal radiation images were captured at different temperatures. A total of 50 groups of red (R), green (G), and blue (B) signal values and the corresponding temperatures T were obtained; the sample data are listed in Table 1.
R 700 135 141 154 173 180 182 186 183 190 192 193 194 194 198 200 201 201 204 206 208 210 211 214 216
G 52 58 62 75 92 98 56 65 103 79 84 85 90 98 103 104 107 112 118 115 148 112 115 139 152
B 1 3 3 9 13 14 28 33 37 39 38 40 36 11 19 28 11 12 50 43 20 13 17 32 55
T/°C 700 732 781 797 876 884 894 905 913 927 930 938 945 952 962 971 980 984 994 1008 1011 1020 1022 1033 1049
R 217 220 221 225 227 234 235 237 240 242 247 249 252 255 252 255 255 255 255 255 255 255 255 255 255
G 155 160 166 170 185 134 197 200 176 146 146 150 167 171 183 190 198 201 207 215 220 223 227 228 231
B 57 59 58 60 61 23 64 66 55 36 28 32 51 61 75 77 80 81 82 83 84 85 86 87 89
T/°C 1052 1068 1072 1090 1105 1125 1138 1149 1152 1167 1178 1182 1198 1205 1216 1227 1249 1255 1260 1271 1283 1305 1316 1321 1330
The kernel function selected in this article is the Gaussian kernel function (also known as the RBF kernel function), represented as:

$$K(x, x_i) = \exp\left( -\frac{\| x - x_i \|^2}{\sigma^2} \right) \qquad (7)$$

In the formula, $\sigma$ is the width of the Gaussian distribution.
The SVM network chosen here has 3 input nodes and 1 output node. Using the characteristics of the SVM network, we construct a 3-layer SVM network: the input layer has three nodes, namely the R, G, and B color values, and the output layer has one node, the two-dimensional temperature T of the flame radiation image. The training data are listed in Table 1.
The kernel parameter (g) and the penalty factor (c) are the two important parameters of the SVM network. In the training process, the range of the c value was set to -10 to 10 and the range of the g value to -5 to 5. The SVM parameter-optimization diagram obtained by training the network is shown in Figure 2.
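The training and parameter search described above can be sketched with scikit-learn's SVR, assuming the library is available. The (R, G, B, T) rows are transcribed from Table 1; normalizing the inputs to [0, 1] and reading the c and g ranges as base-2 exponents are assumptions, since the text states neither:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# A few (R, G, B) -> T samples transcribed from Table 1, inputs scaled to [0, 1].
X = np.array([
    [135, 58, 3], [141, 62, 3], [154, 75, 9], [173, 92, 13],
    [180, 98, 14], [216, 152, 55], [227, 185, 61], [255, 231, 89],
]) / 255.0
y = np.array([732, 781, 797, 876, 884, 1049, 1105, 1330])

# Penalty factor c in [-10, 10] and kernel parameter g in [-5, 5],
# interpreted here as base-2 exponents (an assumption).
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [2.0 ** k for k in range(-10, 11, 4)],
                "gamma": [2.0 ** k for k in range(-5, 6, 2)]},
    cv=2,
)
grid.fit(X, y)

pred = grid.predict(np.array([[190, 79, 39]]) / 255.0)[0]  # Table 1 lists T = 927 for this sample
print(grid.best_params_, round(pred, 1))
```

With all 50 calibration samples, the same grid search would select the (c, g) pair that the paper's Figure 2 visualizes.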
Fig. 3. (a) Relative prediction error of the SVM network; (b) absolute prediction error of the SVM network, both plotted over the 50 samples
Figure 3(a) shows the relative prediction error curve of the SVM network; from the curve it can be seen that the relative prediction error is less than 0.4%, so the prediction accuracy is very high. Figure 3(b) shows the absolute prediction error of the SVM network; from the figure, the absolute prediction error is less than 3 °C. In sum, the trained SVM network has good prediction accuracy.
Table 2. Flame-image region temperatures (unit: °C) obtained by the SVM method for part of the image (the first 10 × 10 regions)
Fig. 5. Three-dimensional surface chart of the flame temperature of the third layer, obtained by the SVM-based network
5 Conclusion
The CCD cameras use two-dimensional radiation energy distribution as a sensor, to
receive from the three-dimensional furnace heat radiation signal, according to a new
model of radiation imaging two-dimensional temperature images of flame and three-
dimensional relationship between combustion temperature equation, the
regularization of the reconstruction methods dimensional combustion flame
temperature field reconstruction. Two-dimensional image of the flame temperature
distribution of radiant energy to strike, the introduction of SVM temperature
measurement network, with 50 groups of blackbody calibration furnace sample data
for training SVM network. After the network with the trained SVM two-dimensional
images of flame radiation temperature for the strike, combined with the regular
method to complete the three-dimensional combustion flame temperature field
reconstruction. Through the combustion flame temperature field reconstruction of
three-dimensional, and further realized the boiler safety, economic, clean operation, to
guide the optimization of boiler combustion is of great significance.
References
1. Zhou, H.C., Han, S.D., Sheng, F., Zheng, C.-G.: Numerical simulation on a visualization monitoring method of three-dimensional temperature distribution in furnace. Power Engineering 23, 2154–2156 (2003)
2. Chun, H.: Principles and techniques of visual furnace flame detection. Science Press, Beijing (2005)
3. Wang, S.Z.: Support vector machines and application. PhD thesis, Harbin Institute of Technology, 19–24 (2009)
4. Deng, N.Y., Tian, Y.J.: A new data mining method – support vector machine. Science Press, Beijing (2004)
5. Peng, B.: Support Vector Machine Theory and Engineering Application. Xidian University Press, Xi'an (2008)
6. Zhou, H.C., Han, S.D., Shen, G.F., et al.: Visualization of three-dimensional temperature distributions in a large-scale furnace via regularized reconstruction from radiative energy images: Numerical studies. J. of Quantitative Spectroscopy and Radiative Transfer 72, 361–383 (2002)
Study on Wave Filtering of Photoacoustic Spectrometry
Detecting Signal Based on Mallat Algorithm
1 Introduction
With technological progress, more and more signal processing methods have become available. The choice of a signal processing method is closely connected with the characteristics of the signal to be tested, so even the same method can produce hugely different results on different signals. To achieve the expected processing effect, researchers are always trying to find effective methods for processing all kinds of signals. Signal processing has evolved from analog to digital and from deterministic to random signals, and it is now striding toward an age in which non-stationary and non-Gaussian signals are the main objects of study and nonlinearity and uncertainty are the main features of interest.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 347–352, 2011.
© Springer-Verlag Berlin Heidelberg 2011
348 Y. Yu et al.
$$\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\, \psi\!\left( \frac{t-b}{a} \right), \quad a, b \in R,\ a \neq 0 \qquad (1)$$

In Eq. (1), $a$ is a scale (dilation) factor and $b$ is a translation factor.

Suppose $f(t) \in L^2(R)$; the continuous wavelet transform is:

$$W_f(a,b) = \langle f, \psi_{a,b} \rangle = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{+\infty} f(t)\, \overline{\psi\!\left( \frac{t-b}{a} \right)}\, dt \qquad (2)$$

$$f(t) = \frac{1}{C_\psi} \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} W_f(a,b)\, \psi_{a,b}(t)\, \frac{da\, db}{a^2} \qquad (3)$$

The $C_\psi$ expression is:

$$C_\psi = \int \frac{|\hat{\psi}(\omega)|^2}{|\omega|}\, d\omega \qquad (4)$$
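As a quick numerical check of Eq. (1), the following Python sketch (using a Mexican-hat mother wavelet as an illustrative choice; the text fixes no particular ψ) verifies that the 1/√|a| factor preserves the energy of ψ under scaling and translation:

```python
import numpy as np

def psi(t):
    # Mexican-hat mother wavelet (illustrative choice).
    return (1 - t**2) * np.exp(-t**2 / 2)

def psi_ab(t, a, b):
    # Eq. (1): psi_{a,b}(t) = |a|^(-1/2) * psi((t - b) / a)
    return abs(a) ** -0.5 * psi((t - b) / a)

# Dense grid wide enough to cover the scaled/shifted wavelet's support.
t = np.linspace(-60.0, 60.0, 240001)
dt = t[1] - t[0]
e_mother = np.sum(psi(t) ** 2) * dt               # energy of psi
e_scaled = np.sum(psi_ab(t, 3.0, 5.0) ** 2) * dt  # energy of psi_{3,5}
print(e_mother, e_scaled)  # equal: the normalization preserves energy
```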
The continuous wavelet transform is mainly used for theoretical analyses and derivations; in actual signal processing we more often need the discrete wavelet transform. Here the so-called discretization refers to discretizing the continuous scale factor $a$ and translation factor $b$, not the time variable $t$. First the scale factor $a$ is discretized, giving the binary (dyadic) wavelet and the binary wavelet transform; then the time-shift factor $b$ is discretized in the binary integral way.

The continuous wavelet function is:

$$\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left( \frac{t-b}{a} \right), \quad b \in R,\ a \in R^+ \qquad (5)$$

In Eq. (5), the scale and translation parameters of the continuous wavelet transform are discretized. With $j$ and $k$ integers, the discrete wavelet is:

$$\psi_{j,k}(t) = \frac{1}{\sqrt{a_0^j}}\, \psi\!\left( \frac{t - k b_0 a_0^j}{a_0^j} \right) = a_0^{-j/2}\, \psi\!\left( a_0^{-j} t - k b_0 \right) \qquad (6)$$

$$A \| f \|^2 \leq \sum_j \sum_k \left| \langle f, \psi_{j,k} \rangle \right|^2 \leq B \| f \|^2 \qquad (8)$$

Therefore $\psi_{j,k}$ forms a wavelet frame of $L^2(R)$, in which $A$ and $B$ are the frame bounds.
$$\varphi_{j,n}(x) = 2^{j/2}\, \varphi\!\left( 2^{j} x - n \right), \quad n \in Z \qquad (9)$$

Because $\varphi(x) \in V_0 \subset V_1$ and there is the standard orthonormal basis $\{ \sqrt{2}\, \varphi(2x - n),\ n \in Z \}$, there must exist a coefficient sequence $\{ h_n, n \in Z \} \in L^2(Z)$ such that $\varphi(x) = \sum_n h_n \sqrt{2}\, \varphi(2x - n)$; this is the scaling (two-scale) equation of the wavelet.

Suppose $\varphi(t)$ and $\psi(t)$ are the scaling function and the wavelet of the function $f(t)$ approximated at resolution $2^j$. The discrete approximation $C_j f(t)$ and the detail part $D_j f(t)$ can be expressed as $C_j f(t) = \sum_k c_{j,k}\, \varphi_{j,k}(t)$ and $D_j f(t) = \sum_k d_{j,k}\, \psi_{j,k}(t)$, in which $c_{j,k}$ and $d_{j,k}$ are the scaling coefficients and wavelet coefficients at resolution $2^j$.

According to the Mallat algorithm, $C_j f(t)$ is decomposed as the sum of the coarse image $C_{j-1} f(t)$ and the detail image $D_{j-1} f(t)$:

$$C_j f(t) = C_{j-1} f(t) + D_{j-1} f(t) \qquad (10)$$
Substituting Eq. (11) and Eq. (12) into Eq. (10) gives Eq. (13):

$$C_{j-1} f(t) = \sum_{m=-\infty}^{\infty} c_{j-1,m}\, \varphi_{j-1,m}(t) \qquad (11)$$

$$D_{j-1} f(t) = \sum_{m=-\infty}^{\infty} d_{j-1,m}\, \psi_{j-1,m}(t) \qquad (12)$$

$$\sum_{m=-\infty}^{\infty} c_{j-1,m}\, \varphi_{j-1,m}(t) + \sum_{m=-\infty}^{\infty} d_{j-1,m}\, \psi_{j-1,m}(t) = \sum_{m=-\infty}^{\infty} c_{j,m}\, \varphi_{j,m}(t) \qquad (13)$$
The tower-style (pyramid) decomposition of the Mallat algorithm can then be expressed as:

$$C^0(n) \to C^1(n) \to \cdots \to C^{J-1}(n) \to C^J(n) \qquad (14)$$
$$\searrow D^1(n) \quad \searrow D^2(n) \quad \cdots \quad \searrow D^{J-1}(n) \quad \searrow D^J(n)$$

$C^0(n)$ is the measured data of the experiment; $C^j(n)$ and $D^j(n)$ are called the discrete approximation and the discrete detail. $C^j(n) = C^{j-1}(n) * H = \sum_{l \in Z} h(l - 2n)\, C^{j-1}(l)$ is the low-frequency component of the original data, $D^j(n) = C^{j-1}(n) * G = \sum_{l \in Z} g(l - 2n)\, C^{j-1}(l)$ is the high-frequency component, and $J$ is the maximum decomposition level.

$H = \{ h(l), l \in Z \}$ and $G = \{ g(l), l \in Z \}$ are the mirror filters of the discrete filtering. Though this kind of algorithm is very convenient, it requires that the number of points of the original data $C^{(0)}(n)$ be a multiple of $2^N$, and the number of data points halves after each decomposition. The analysis result of the optical spectrum then becomes discontinuous, which brings difficulties for analyzing the photoacoustic signal. Therefore this article adopts the improved Mallat algorithm to filter the photoacoustic signals, with the expressions:

$$C^{(j)}(n) = \sum_{l=0}^{l(j)-1} h^{(j)}(l)\, C^{(j-1)}(n - l) \qquad (15)$$

$$D^{(j)}(n) = \sum_{l=0}^{l(j)-1} g^{(j)}(l)\, C^{(j-1)}(n - l) \qquad (16)$$
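One level of the classic decimated decomposition can be sketched in Python with the Haar filter pair (an illustrative choice; the paper does not name its filters), showing the halving of the data length that the text criticises:

```python
import numpy as np

# Haar analysis filters (an illustrative choice of H and G).
h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass
g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass

def mallat_step(c):
    # One level of C^j(n) = sum_l h(l - 2n) C^{j-1}(l), likewise D^j(n).
    # The output length halves, which is the behaviour the text criticises.
    n = np.arange(len(c) // 2)
    approx = np.array([h @ c[2 * k:2 * k + 2] for k in n])
    detail = np.array([g @ c[2 * k:2 * k + 2] for k in n])
    return approx, detail

c0 = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])  # C^0(n), 2^3 points
c1, d1 = mallat_step(c0)
print(len(c1), len(d1))  # 4 4 — each level halves the data length
```

Because the Haar pair is orthonormal, the signal energy splits exactly between the approximation and the detail.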
The detection platform for measuring the water content in oil by photoacoustic spectroscopy consists of a laser, a chopper, a preamplifier, an ARM9 data-processing module, a filter, a lock-in amplifier, a microphone, and so on. A great deal of water-in-oil photoacoustic data was collected on this experimental platform; because of the huge amount, only part of the data is listed in Table 1.
Table 1. Collected photoacoustic data (unit: V)
Collection number | Photoacoustic data | Collection number | Photoacoustic data
1 0.957031 12 0.969238
2 0.957071 13 0.97168
3 0.95459 14 0.96923
4 0.949707 15 0.974141
5 0.952158 16 0.974121
6 0.952153 17 0.97168
7 0.959473 18 0.981445
8 0.959493 19 0.966797
9 0.964356 20 0.966787
10 0.961911 21 0.959473
11 0.961914 22 0.952148
This article adopts Eq. (15) and Eq. (16) to filter and denoise the collected photoacoustic data. $h^{(j)}$ is the coefficient of the low-pass filter in the computation; $g^{(j)}$ is the coefficient of the high-pass filter; $l(j)$ is the number of data points of the filter. Each time the number of decompositions increases by one, the value of $l(j)$ doubles correspondingly, which makes this improved algorithm more suitable for processing photoacoustic signals.
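The improved, undecimated step of Eqs. (15) and (16) can be sketched as follows. The level-j filters are built here from the Haar pair by inserting zeros so that the filter length l(j) doubles per level; this construction is an assumption, since the paper does not specify its filters. The input samples are transcribed from Table 1:

```python
import numpy as np

# Photoacoustic samples from Table 1 (unit: V), in collection order.
data = np.array([
    0.957031, 0.957071, 0.95459, 0.949707, 0.952158, 0.952153,
    0.959473, 0.959493, 0.964356, 0.961911, 0.961914, 0.969238,
    0.97168, 0.96923, 0.974141, 0.974121, 0.97168, 0.981445,
    0.966797, 0.966787, 0.959473, 0.952148,
])

def improved_step(c, j):
    # Eqs. (15)/(16): filtering without downsampling. The level-j filters are
    # the Haar pair with 2^(j-1) - 1 zeros inserted, so l(j) doubles per level
    # (a hypothetical filter construction for illustration).
    zeros = 2 ** (j - 1) - 1
    h = np.zeros(2 ** j); h[0] = h[zeros + 1] = 1 / np.sqrt(2)             # low-pass
    g = np.zeros(2 ** j); g[0], g[zeros + 1] = 1 / np.sqrt(2), -1 / np.sqrt(2)  # high-pass
    low = np.convolve(c, h)[: len(c)]   # C^(j)(n), same length as the input
    high = np.convolve(c, g)[: len(c)]  # D^(j)(n)
    return low, high

low1, high1 = improved_step(data, j=1)
print(len(low1) == len(data))  # True — no halving, unlike the classic Mallat step
```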
The photoacoustic data curve before filtering is shown in Fig. 1.
The photoacoustic data curves obtained after Mallat-algorithm filtering are shown in Fig. 2 and are much clearer. The noise has less influence on the photoacoustic curve result, which reduces its influence on the conversion to water content in oil and greatly benefits the precision of the experimental results.
The conversion from the photoacoustic curves to water content in oil is realized by Eq. (17).
$$S = C P_L \alpha N \qquad (17)$$

In Eq. (17), $S$ is the voltage value of the photoacoustic signal; $P_L$ is the output light power of the semiconductor laser; $N$ is the volume fraction of the water content in the sample oil; $\alpha$ is the optical absorption factor of water; and $C$ is the cell constant of the photoacoustic cell, which depends on factors such as the sensitivity of the microphone and the geometric structure of the cell.
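Eq. (17) can be inverted to recover the water content N from a measured voltage. The constants below are hypothetical placeholders for illustration only; the real values of C, P_L, and α come from calibration:

```python
# Hypothetical constants (illustration only); real values of the cell constant,
# laser output, and absorption factor must come from calibration.
C_cell = 120.0   # photoacoustic cell constant (arbitrary units)
P_L = 0.02       # laser output (arbitrary units)
alpha = 0.5      # water optical absorption factor (arbitrary units)

def water_content(S):
    # Invert Eq. (17), S = C * P_L * alpha * N, for the volume fraction N.
    return S / (C_cell * P_L * alpha)

S = 0.957031  # a measured photoacoustic voltage from Table 1 (V)
print(water_content(S))
```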
As Fig. 1 and Fig. 2 show, the actual filtering experiment proves that the Mallat algorithm based on the orthogonal wavelet transform has very good filtering and denoising effects for photoacoustic signal processing. This filtering and denoising lays a foundation for processing the results of the experimental platform, and the example also proves that this wavelet transform algorithm is practical for photoacoustic spectroscopic detection.
References
1. Zheng, J.: Research on retrieving and analysis of the characteristic information of
photoacoustic signal. South China University of Technology
2. Han, L., Tian, F.: Optical implementation of Mallat algorithm. Optics and Precision
Enginering (8) (2008)
3. Li, C.H., Hu, B.J.: Radar Target Signal Denoising Based on Wavelet Mallat. Algoritlunm
Mechatronics (16) (2010)
4. Liu, J.C.: Research on application of seismic signal de-noising based on Mallat algorithm.
HuNan University
Ontology-Based Context-Aware Management for
Wireless Sensor Networks
Abstract. Wireless sensor networks (WSNs) support popular military and civilian applications such as surveillance, monitoring, disaster recovery, home automation, and many others. All these WSN applications require some form of self-management and autonomous computing without any human interference. Recently, ontology has become a promising technology for intelligent context-aware network management, as it can cope with the various conditions of WSNs. This paper describes an ontology-based management model for context-aware management of WSNs. It provides autonomous self-management of WSNs using ontology-based context representation, as well as reasoning mechanisms.
1 Introduction
Network management is the process of managing, monitoring, and controlling the
behavior of a network. Wireless sensor networks (WSNs) pose unique challenges for
network management that make traditional network management techniques
impractical [1]. A WSN consists of a large number of small sensor nodes having sensing, data processing, communication, and ad-hoc networking functions. WSNs have critical
applications in the scientific, medical, commercial, and military domains. Examples
of these applications include environmental monitoring, smart homes and offices,
surveillance, and intelligent transportation systems [2]. WSNs are characterized by
densely deployed nodes, frequently changing network topology, variable traffic, and
unstable sensor nodes of very low power, computation, and memory constraints.
These unique characteristics and constraints present numerous challenging problems
in the management of WSNs which are not encountered in the traditional wireless
networks [3][4].
The management of WSNs needs to be lightweight, autonomous, intelligent, and
robust [5]. A network management system must handle a certain amount of
control messages to cope with the various conditions of the network. In WSNs it is
extremely important to minimize this signaling overhead, since the sensor nodes have
limited battery life, storage, and processing capability. Given the dynamic nature of
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 353–358, 2011.
© Springer-Verlag Berlin Heidelberg 2011
354 K.-W. Lee and S.-H. Cha
RDF is designed to represent information on the Web and has an abstract syntax that
reflects a simple graph-based data model. OWL enables the definition of domain
ontologies and the sharing of domain vocabularies, while RDF provides the data model
specification and an XML-based serialization syntax. OWL takes an object-oriented
approach in which a domain is described in terms of classes and properties.
OWL can be used to specify management
information, and can be employed in modeling and reasoning about context
information in WSNs. OWL provides three increasingly expressive sub-languages
designed for use by specific communities of implementers and users. OWL Lite
supports those users that primarily need a classification hierarchy and simple
constraints. OWL DL (DL for Description Logic) supports users who want more
expressiveness; it generally includes all OWL language constructs, but they can
only be used under certain restrictions. OWL Full is meant for users who want
maximum expressiveness and the syntactic freedom of RDF, with no computational
guarantees. In this paper, OWL DL is used to create an ontology model to
represent and interpret facts, because it represents a good compromise between
expressivity and computational complexity [11].
(Context ontology excerpt: a ContextEntity class with the property hasThresholdParameter, among others.)
When one or more measurement values fall out of the thresholds, the WSN's status
is updated with associations to current OutOfRangeDataParameters and the
corresponding Activity events. An activity can be triggered by abnormal
SensorParameters. When the system has automatically discerned that a critical
situation has occurred, due to abnormal sensor parameter values and/or potentially
dangerous environmental conditions, proper intervention actions should be planned.
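The threshold-checking behaviour described above can be sketched procedurally; the threshold table, function names, and status encoding below are hypothetical, with only the out-of-range parameters and the Alarm-style activity taken from the text.

```python
# Hypothetical thresholds per sensed parameter: (min, max).
THRESHOLDS = {"temperature": (0.0, 90.0), "vibration": (0.0, 10.0)}

def out_of_range_parameters(measurements):
    """Return the parameters whose values fall outside their thresholds."""
    out = []
    for name, value in measurements.items():
        lo, hi = THRESHOLDS[name]
        if not (lo <= value <= hi):
            out.append(name)
    return out

def update_status(measurements):
    """Associate the WSN status with out-of-range parameters and an activity."""
    bad = out_of_range_parameters(measurements)
    return {"out_of_range": bad, "activity": "Alarm" if bad else "Normal"}
```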
4 Context Reasoning
In this section, we describe context reasoning based on the proposed management
model to demonstrate the key feature of the ontology-based context model. In the
context model, we represent contexts in first-order predicate calculus. A context
has the form Predicate(subject, value), in which Predicate is drawn from the set
of predicate names (for example, has status, has sensor ID, or is owned by),
subject from the set of subject names (for example, a sensor, location, or object),
and value from the set of all values of subjects (for example, a temperature value,
vibration value, or gas leak). Temperature(sensor12, 100) means that the temperature
of sensor12 is 100 °F.
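A minimal sketch of this Predicate(subject, value) representation, assuming a simple in-memory set of (predicate, subject, value) facts (the storage format is not specified in the paper):

```python
# In-memory context base: a set of (predicate, subject, value) facts.
context_base = set()

def assert_context(predicate, subject, value):
    """Add a Predicate(subject, value) fact to the context base."""
    context_base.add((predicate, subject, value))

def query(predicate, subject):
    """Return all values asserted for a given predicate and subject."""
    return [v for (p, s, v) in context_base if p == predicate and s == subject]

# "Temperature(sensor12, 100)": the temperature of sensor12 is 100 °F.
assert_context("Temperature", "sensor12", 100)
```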
The ontology reasoning mechanism of the proposed ontology-based context-aware
management supports RDF Schema and OWL DL. The OWL reasoning
system supports constructs for describing properties and concepts (classes),
including relationships between classes. RDF Schema rule sets are needed to
perform RDF Schema reasoning, for example: (?a rdfs:subClassOf ?b),
(?b rdfs:subClassOf ?c) → (?a rdfs:subClassOf ?c). User-defined
rules provide the logic inferences over the ontology base. We have defined a
set of rules to determine whether an alarm has to be triggered, and which alarm level
should be activated, according to measurement values and corresponding thresholds.
For instance, the following rule activates an alarm when both of the following
conditions hold: the value perceived by an acoustic sensor is higher than 100 kHz
and the value sensed by a seismic sensor is higher than 10 Hz: (?A rdf:type
AcousticValue) ∧ (?A hasMeasResult ?B) ∧ greaterThan(?B, 100) ∧ (?C
rdf:type SeismicValue) ∧ (?C hasMeasResult ?D) ∧ greaterThan(?D, 10)
→ (?SensorStatus hasActivity Alarm).
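The alarm rule can also be checked procedurally. This sketch assumes facts are stored as (subject, predicate, object) triples and hard-codes the two thresholds from the rule; it is an illustration, not the rule engine the authors use.

```python
def fires_alarm(triples):
    """Naive evaluation of the alarm rule over (subject, predicate, object)
    triples: an acoustic value above 100 and a seismic value above 10
    together trigger the alarm."""
    def measured_above(value_class, threshold):
        # Entities typed as the given measurement class...
        entities = {s for (s, p, o) in triples
                    if p == "rdf:type" and o == value_class}
        # ...with a measurement result above the threshold.
        return any(s in entities and p == "hasMeasResult" and o > threshold
                   for (s, p, o) in triples)
    return measured_above("AcousticValue", 100) and measured_above("SeismicValue", 10)
```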
As previously mentioned, our context reasoner is built using the Jena framework [12],
which supports rule-based inference over OWL/RDF graphs. Jena provides a
programmatic environment for RDF, RDF Schema, OWL, and SPARQL, and includes a
rule-based inference engine. Its heart is the RDF API, which supports the creation,
manipulation, and querying of RDF graphs. The current RDF API provides a query
primitive: a method for extracting the subset of triples in a graph that a selector
object selects. A simple selector class selects all the triples matching a given pattern.
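Jena itself is a Java framework; as an analogy to its query primitive, a pattern-matching selector over a list of triples can be sketched in a few lines of Python (the graph representation here is an assumption):

```python
def select(graph, pattern):
    """Return the triples in `graph` matching the (s, p, o) pattern;
    None in any position acts as a wildcard, as in a simple selector."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```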
5 Conclusion
In this paper an ontology-based context model for autonomous sensor management of
WSNs has been proposed. The paper employed OWL to describe the context
ontologies because it is more expressive than other ontology languages, and showed
a context reasoning example that implements autonomic self-management using the
proposed ontology-based model in a WSN. The ontology-based context model is
feasible and necessary for reasoning and automatic management in WSN
environments.
Future research activities will be devoted to implementing and evaluating the proposed
ontology-based context model and logic-based context reasoning schemes in real
WSN environments.
References
1. Lee, W.L., Datta, A., Cardell-Oliver, R.: Network Management in Wireless Sensor
Networks. School of Computer Science & Software Engineering, University of Western
Australia, http://www.csse.uwa.edu.au/~winnie/Network_Management_in_WSNs_.pdf
2. Sheetal, S.: Autonomic Wireless Sensor Networks. University of Southern California,
http://www-scf.usc.edu/~sheetals/publications/AutonomicWSN.doc
3. Mini, R.A.F., Loureiro, A.A.F., Nath, B.: The Distinctive Design Characteristic of a
Wireless Sensor Network: the Energy Map. Computer Communications 27(10), 935–945
(2004)
4. Zhao, F., Guibas, L.: Wireless Sensor Networks: An Information Processing Approach.
Morgan Kaufmann, Elsevier (2004)
5. Phanse, H.S., DaSilva, L.A., Midkiff, S.F.: Design and Demonstration of Policy-based
Management in a Multi-hop Ad Hoc Network. Ad Hoc Networks 3(3), 389–401 (2005)
6. Cha, S.-H., Lee, J.-E., Jo, M., Youn, H.Y., Kang, S., Cho, K.-H.: Policy-based
Management for Self-Managing Wireless Sensor Networks. IEICE Transactions on
Communications E90-B(11), 3024–3033 (2007)
7. Xu, H., Xiao, D.: A Common Ontology-based Intelligent Configuration Management
Model for IP Network Devices. In: First International Conference on Innovative
Computing, Information and Control (ICICIC 2006), pp. 385–388. IEEE, Beijing (2006)
8. Semantic Web Standard: Web Ontology Language (OWL). W3C OWL Working Group
(2007), http://www.w3.org/2004/OWL/
9. Semantic Web Standard: Resource Description Framework (RDF). W3C RDF Working
Group (2004), http://www.w3.org/2004/RDF/
10. W3C Recommendation: RDF Vocabulary Description Language 1.0 RDF Schema
(2004), http://www.w3.org/TR/rdf-schema/
11. Kim, J.: An Ontology Model and Reasoner to Build an Autonomic System for U-Health
Smart Home. Master thesis, POSTECH (2009)
12. Jena framework, http://jena.sourceforge.net/
13. Jess rule engine, http://www.jessrules.com/
An Extended Center-Symmetric Local Ternary
Patterns for Image Retrieval
1 Introduction
Because of their simplicity and robustness to variations in illumination and orientation
in texture analysis and pattern recognition, the local binary pattern (LBP) [1] and its
extensions [2-4] have been widely used in many fields, such as face recognition [5],
image retrieval [6], and medical image analysis [7].
The LBP and its extensions are simple descriptors which generate a binary code for
a pixel neighbourhood. The original LBP operator produces an 8-bit code (a long
histogram of 256 bins) for an 8-neighbourhood, which makes it ill-suited to interest
region description. To alleviate this problem, Heikkilä et al. introduced the
center-symmetric local binary pattern (CS-LBP) for region description in [8], which is
defined according to the differences of opposing pixels around a given pixel and
produces a 4-bit code (a 16-bin histogram). In [9], Gupta et al. presented the center-
symmetric local ternary pattern (CS-LTP) for region description based on CS-LBP.
Different from CS-LBP, CS-LTP uses only the diagonal comparisons and produces a
histogram of 9 bins for each spatial bin. However, both CS-LTP and CS-LBP ignore
the central pixel of a region, and therefore cannot describe the gradient of
the neighborhood efficiently. CS-LBP was improved in our previous work [10]. In
this paper, CS-LTP is further extended, by taking the central pixel into account, to
enhance its efficiency.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 359–364, 2011.
© Springer-Verlag Berlin Heidelberg 2011
360 X. Wu and J. Sun
The 3×3 neighbourhood of a central pixel pc is labelled

p5 p6 p7
p4 pc p0
p3 p2 p1

and the extended operator is defined as

eCS-LTP(D, T) = g(p5 − pc, pc − p1) + g(p7 − pc, pc − p3) · 3

where

           2, if x > T and y > T
g(x, y) =  0, if x < −T and y < −T                      (3)
           1, otherwise.

Though 4 comparisons are made in the definition of eCS-LTP, it clearly
produces a 9-bin histogram, as CS-LTP does.
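A sketch of the eCS-LTP code of a single pixel follows; the pairing of diagonally opposite pixels with the centre, the −T lower bound in g, and all names are reconstructions and assumptions from the (partially garbled) definition of Eq. (3), not a verified reference implementation.

```python
def g(x, y, T):
    """Ternary comparison of Eq. (3); the -T bound is an assumption."""
    if x > T and y > T:
        return 2
    if x < -T and y < -T:
        return 0
    return 1

def ecs_ltp(neigh, pc, T):
    """eCS-LTP code for one pixel: neigh = [p0..p7] around central pixel pc.
    Two ternary digits combined base-3 give a code in 0..8 (9-bin histogram)."""
    return g(neigh[5] - pc, pc - neigh[1], T) + g(neigh[7] - pc, pc - neigh[3], T) * 3
```

A flat region yields the "no change" code 4 (both ternary digits equal 1).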
5 Experimental Results
Two image databases commonly used for research purposes are employed as test beds,
and the χ2 distance is chosen as the similarity measure. In the experiments, we chose
D = 1 for CS-LTP and eCS-LTP.
P_{i,N}(q) = (1/N) Σ_{j=1}^{N} δ(I_j, R_i),   R_{i,N}(q) = (1/|R_i|) Σ_{j=1}^{N} δ(I_j, R_i)        (5)

where R_i denotes the i-th texture class, which contains |R_i| similar images, q is a
query image, I_1, I_2, …, I_j, …, I_N are the first N retrieved images, and

             0, if I_j ∉ R_i
δ(I_j, R_i) =
             1, if I_j ∈ R_i.
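The precision and recall measures of Eq. (5) can be computed directly; the function and variable names here are illustrative:

```python
def precision_recall_at_n(retrieved, relevant, n):
    """P and R after the first n retrieved images: the fraction of the
    first n results that belong to the relevant class, and the fraction
    of the relevant class that has been retrieved."""
    hits = sum(1 for img in retrieved[:n] if img in relevant)
    return hits / n, hits / len(relevant)
```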
362 X. Wu and J. Sun
Image Set 1 was chosen from the CUReT database, which includes 45 classes
with 20 texture images per class (http://www.robots.ox.ac.uk/~vgg/research/).
Fig. 3 shows the 45 example textures.
This experiment employs each image in the first 20 classes as a query image, so
400 retrieval results are obtained. Fig. 4 (a) and (b) present the comparison graphs of
the average results of the 400 queries. The retrieval results show that eCS-LTP
performs better than CS-LTP.
[Fig. 4. Average recall (R) and precision curves of CS-LTP and eCS-LTP versus the number of retrieved images N (N = 4 to 40) on Image Set 1.]
[Figure: average recall (R) and precision (P) curves of CS-LTP and eCS-LTP versus the number of retrieved images N (N = 3 to 30) for the second image set.]
This experiment employs each image in the first 30 classes as a query image, so
480 retrieval results are obtained. Fig. 6 presents the comparison graph of the average
results of the 480 queries. The retrieval results also show that eCS-LTP achieves better
average recall and precision than CS-LTP.
6 Conclusion
This paper points out a shortcoming of CS-LTP and proposes an extended CS-LTP
operator. By taking the central pixel into account, more information of a region
is fused into the new descriptor. The new operator also keeps the desirable
characteristics of CS-LTP and CS-LBP, such as simplicity, robustness to variations in
illumination and orientation, and a low-dimensional histogram. The two methods CS-
LTP and eCS-LTP were compared in image retrieval experiments, with two
commonly used texture image databases chosen as test beds. Experimental
results show that the proposed method is very robust and achieves significant
performance improvement over the CS-LTP operator.
Acknowledgment. This work was supported by the Key Project of the Chinese Ministry
of Education (210128), the Internal Cooperation Science Foundation of Henan
Province (084300510065, 104300510066), and the Provincial Opening Laboratory for
Control Engineering Key Disciplines (KG2009-14).
References
1. Ojala, T., Pietikäinen, M., Harwood, D.: A Comparative Study of Texture Measures with
Classification based on Feature Distributions. Pattern Recognition 1, 51–59 (1996)
2. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution Gray-scale and Rotation Invariant
Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis
and Machine Intelligence 7, 971–987 (2002)
3. Zhou, H., Wang, R.S., Wang, C.: A Novel Extended Local-Binary-Pattern Operator for
Texture Analysis. Information Sciences 22, 4314–4325 (2008)
4. Guo, Z.H., Zhang, L., Zhang, D.: Rotation Invariant Texture Classification using LBP
Variance (LBPV) with Global Matching, vol. 3, pp. 706–719 (2010)
5. Tan, X., Triggs, B.: Enhanced Local Texture Feature Sets for Face Recognition under Difficult
Lighting Conditions. IEEE Transactions on Image Processing 6, 1635–1650 (2010)
6. Sun, J.D., Wu, X.S.: Content-based Image Retrieval based on Texture Spectrum Descriptors.
Journal of Computer-Aided Design & Computer Graphics 3, 516–520 (2010) (in Chinese)
7. Loris, N., Sheryl, B., Alessandra, L.: A Local Approach based on a Local Binary Patterns
Variant Texture Descriptor for Classifying Pain States. Expert Systems with
Applications 37, 7888–7894 (2010)
8. Heikkilä, M., Pietikäinen, M., Schmid, C.: Description of Interest Regions with Local
Binary Patterns. Pattern Recognition 3, 425–436 (2009)
9. Gupta, R., Patil, H., Mittal, A.: Robust Order-based Methods for Feature Description. In:
CVPR, pp. 334–341 (2010)
10. Sun, J.D., Wu, X.S.: Image Retrieval Based on an Improved CS-LBP Descriptor. In: IEEE
International Conference on Information Management and Engineering, pp. 115–117 (2010)
11. Ru, L.Y., Peng, X., Su, Z., et al.: Feature performance evaluation in content-based image
retrieval. Journal of Computer Research and Development 11, 1560–1566 (2003) (in Chinese)
Comparison of Photosynthetic Parameters and Some
Physiological Indices of 11 Fennel Varieties
Agronomy Department,
Dezhou University
Dezhou 253023,
China
nwmy_sddz@163.com
1 Introduction
The experimental field is located at Dezhou University, between east
longitude 115°45′–117°36′ and north latitude 36°24′25″–38°0′32″. Dezhou city is in a
warm temperate zone, and has a continental monsoon climate with four distinct
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 365–369, 2011.
© Springer-Verlag Berlin Heidelberg 2011
366 M. Wang, B. Xiao, and L. Liu
seasons. The annual average temperature is 12.9 °C; the historical maximum
temperature is 43.4 °C and the historical minimum is −27 °C. Average annual rainfall
is 547.5 mm, and the average frost-free period is 208 days.
2.2 Materials
The materials are 11 fennel varieties: DC0410, DC0415, DC0503, LX0611, LY0901,
NJ0712, PY0811, QY0811, XJ0603, XJ0710, XJ0712.
Net photosynthetic rate (Pn), intercellular CO2 concentration (Ci), stomatal
conductance (Gs), and transpiration rate (Tr) were measured on leaves from the same
positions of the 11 fennel varieties with a photosynthesis system (LI-6400). The
relative conductivity (L) and malondialdehyde (MDA) content were examined by the
methods of Zhao et al. (2000). All measurements were repeated at least three times.
Analysis of variance and multiple comparisons were performed with SPSS 13.0.
Table 1. Pn and Tr of the 11 fennel varieties (letters denote tests of significance at the 5% and 1% levels)

Variety   Pn (µmolCO2·m−2·s−1)   5%   1%      Variety   Tr (mmolH2O·m−2·s−1)   5%   1%
LY0901    23.7      a    A                    LY0901    4.723    a    A
QY0811    23.33     a    A                    XJ0712    4.257    a    A
XJ0712    22.8      ab   A                    XJ0603    4.257    a    A
XJ0603    22.233    ab   A                    DC0503    4.183    a    A
DC0410    20.767    ab   A                    QY0811    4.113    a    AB
NJ0712    20.7      ab   A                    NJ0712    4.1      a    AB
DC0415    20.7      ab   A                    LX0611    3.77     a    AB
LX0611    20.567    ab   A                    PY0811    3.69     ab   AB
PY0811    20.3      ab   A                    DC0415    3.577    ab   AB
DC0503    20.067    ab   A                    DC0410    3.503    ab   AB
XJ0710    18.633    b    B                    XJ0710    2.547    b    B
Comparison of Photosynthetic Parameters and Some Physilogical Indices 367
From Table 2 we can see that there is no obvious difference in Ci among the
fennel varieties. The Ci values rank as: DC0503 > PY0811 > NJ0712 > LX0611 >
LY0901 > DC0410 > XJ0710 > XJ0712 > XJ0603 > QY0811 > DC0415. The Pn/Ci
value of QY0811 is the highest, about 8.7×10−2, and that of XJ0710 is the lowest,
about 6.5×10−2, among the 11 fennel varieties. The Pn/Ci results differ
obviously among the fennel varieties except for the three varieties
DC0503, LX0611 and XJ0710. The Pn/Ci values rank as: QY0811 > LY0901 >
XJ0712 > XJ0603 > PY0811 > DC0415 > DC0410 > NJ0712 > DC0503 > LX0611 >
XJ0710.
Table 2. Ci and Pn/Ci of the 11 fennel varieties (letters denote tests of significance at the 5% and 1% levels)

Variety   Ci (µmolCO2·mol−1)   5%   1%      Variety   Pn/Ci (10−3)   5%   1%
DC0503    303        a   A                  QY0811    87    a    A
PY0811    289.667    a   A                  LY0901    83    ab   A
NJ0712    288.667    a   A                  XJ0712    82    ab   A
LX0611    288        a   A                  XJ0603    81    ab   A
LY0901    288        a   A                  PY0811    77    ab   A
DC0410    284.667    a   A                  DC0415    76    ab   A
XJ0710    282.333    a   A                  DC0410    73    ab   A
XJ0712    279.667    a   A                  NJ0712    72    ab   A
XJ0603    279        a   A                  DC0503    67    b    A
QY0811    271.333    a   A                  LX0611    65    b    A
DC0415    270.667    a   A                  XJ0710    65    b    A
There is no obvious difference in Gs among the fennel varieties (Table 3).
The Gs values rank as: LY0901 > DC0503 > XJ0712 > PY0811 >
NJ0712 > XJ0603 > LX0611 > QY0811 > XJ0710 > DC0410 > DC0415. This result
indicates that Gs is not the factor causing the difference between XJ0710 and the other
varieties.
368 M. Wang, B. Xiao, and L. Liu
Table 3. Gs of the 11 fennel varieties (letters denote tests of significance at the 5% and 1% levels)

Variety   Gs (molH2O·m−2·s−1)   5%   1%
LY0901    644        a   A
DC0503    608.333    a   A
XJ0712    521.667    a   A
PY0811    520.667    a   A
NJ0712    511.333    a   A
XJ0603    500.667    a   A
LX0611    493.33     a   A
QY0811    491.333    a   A
XJ0710    480.667    a   A
DC0410    466.667    a   A
DC0415    403.33     a   A
MDA is a lipid peroxidation product that can be induced to a higher level when
plants are exposed to a highly osmotic environment, and can be an indicator of
increased oxidative damage [6]. The MDA content of fennel variety PY0811 is the
highest, about 7.323 µmol·g−1 FW, and that of LX0611 is the lowest, about
3.593 µmol·g−1 FW. Both L and MDA differ obviously among the fennel varieties.
These results indicate that the fennel varieties have different stress tolerance.
References
1. Jansen, P.C.M.: Spices, condiments and medicinal plants in Ethiopia, their taxonomy and
agricultural significance, pp. 20–29. Pudoc, Wageningen (1981)
2. Li, H.S., Sun, Q., Zhao, S.J.: Experiment principle and technology of plant physiology and
biochemistry, pp. 134–137, 258–260. Higher Education Press, Beijing (2000) (in Chinese)
3. Wang, S.G., Zhang, H.Y., Guo, Y., Sun, X.L., Pi, Y.Z.: Summary of biomembrane and
fruit tree cold resistance. Tianjin Agricultural Sciences 6(1), 37–40 (2000) (in Chinese)
4. Huang, L.Q., Li, Z.H.: Advances in the research of cold-resistance in landscape-plants.
Hunan Forestry Science & Technology 31(5), 19–21 (2004) (in Chinese)
5. Li, H.S., Sun, Q., Zhao, S.J.: Experiment principle and technology of plant physiology and
biochemistry, vol. 260, pp. 134–137. Higher Education Press, Beijing (2000) (in Chinese)
6. Wise, R.R., Naylor, A.W.: Chilling-enhanced photooxidation: evidence for the role of
singlet oxygen and superoxide in the breakdown of pigments and endogenous antioxidants.
Plant Physiol. 83, 278–282 (1987)
7. Chinta, S., Lakshmi, A., Giridarakumar, S.: Changes in the antioxidant enzyme efficacy in
two high yielding genotypes of mulberry (Morus alba L.) under NaCl salinity. Plant
Sci. 161, 613–619 (2001)
8. Verslues, P.E., Batelli, G., Grillo, S., Agius, F., Kim, Y.S., Zhu, J.H., et al.: Interaction of
SOS2 with nucleoside diphosphate kinase 2 and catalases reveals a point of connection
between salt stress and H2O2 signaling in Arabidopsis thaliana. Mol. Cell Biol. 27, 7771–
7780 (2007)
Effect of Naturally Low Temperature Stress on Cold
Resistance of Fennel Varieties Resource
Abstract. After exposure to naturally low temperature (−4 to 0 °C), the
changes in relative electrical conductivity and MDA content of 11 fennel
varieties were tested. The experiment studied the relationship between cold
resistance and the changes in relative electrical conductivity and MDA content
of the varieties. The results showed that the relative electrical conductivity and
MDA content were higher than those of CK under naturally low temperature
stress, with significantly different increases among the varieties. The
comprehensive comparison indicated that the cold resistance of XJ0712
was the strongest, while that of XJ0710 was the worst.
1 Introduction
Low temperature is a key environmental factor affecting plant growth,
development, and geographic distribution. It is reported that low temperature causes
agricultural losses of hundreds of billions [1], and the losses will increase as
extreme weather becomes more frequent with global warming [2]. Therefore, studying
plant physiological mechanisms under low temperature has become one of the research
focuses in this field. In this report, we studied the changes in relative electrical
conductivity and MDA under low temperature stress. These results can offer a
reference for screening good fennel varieties that resist low temperature.
The experimental field is located at Dezhou University, between east
longitude 115°45′–117°36′ and north latitude 36°24′25″–38°0′32″. Dezhou city is in a
warm temperate zone, and has a continental monsoon climate with four distinct
seasons. The annual average temperature is 12.9 °C; the historical maximum temperature is
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 370–374, 2011.
© Springer-Verlag Berlin Heidelberg 2011
43.4 °C and the historical minimum is −27 °C. Average annual rainfall is
547.5 mm, and the average frost-free period is 208 days. In March every year, cold
waves, gales, and frost are the main harmful weather in Dezhou city.
We selected leaf samples of the 11 fennel varieties at 8:00–9:00 am on 23 March
2010; the temperature that day was 5–15 °C. Other samples were collected at the
same time on 24 March 2010, after a light snow in the night. The relative conductivity
(L), damage degree, and malondialdehyde (MDA) content were examined by the methods of
Zhao et al. (2000) [3]. All measurements were repeated at least three times. Analysis of
variance and multiple comparisons were performed with SPSS 13.0.
3 Results
3.1 The Changes of Relative Conductivity of 11 Fennel Varieties After Cold
Stress
Relative membrane permeability (RMP) of plant cells is an important index of stress
tolerance. The change of relative conductivity (L) reflects the relative membrane
permeability of cells under cold stress; low relative conductivity indicates low harm
to the plant [4-5]. As shown in Figure 1, the relative membrane
permeability increased after cold stress compared with the controls. The relative
conductivity change and damage degree of the PY0811 variety were the least compared
with the other 10
[Fig. 1. Relative electric conductivity (%) of the 11 fennel varieties under control (CK) and naturally low temperature conditions.]
372 B. Xiao, M. Wang, and L. Liu
varieties (Table 1), and the second least was XJ0712. On the contrary, the relative
conductivity change and damage degree of the XJ0710 variety were the highest. These
results indicate that PY0811 has the highest cold tolerance and XJ0710 has the
lowest cold tolerance among the 11 fennel varieties.
Table 1. Changes of relative electric conductivity of fennel leaves and membrane damage
degree under low temperature stress (letters denote tests of significance at the 5% and 1% levels)

Variety   Increase of RMP (%)   5%    1%       Variety   Damage degree (%)   5%    1%
XJ0710    387.4    a    A                      XJ0710    16.3   a    A
XJ0603    251.5    b    B                      XJ0603    8.7    b    B
DC0415    152.7    c    C                      NJ0712    7.8    b    BC
DC0410    125.7    cd   CD                     DC0410    6.7    bc   BCD
NJ0712    121.4    cd   CD                     DC0415    4.9    cd   CDE
LX0611    105.9    cde  CD                     LX0611    3.5    de   DE
QY0811    105.8    cde  CD                     QY0811    2.9    de   DE
LY0901    82.5     de   CD                     LY0901    2.8    de   E
DC0503    78.5     de   CD                     DC0503    2.8    de   E
XJ0712    70.3     de   CD                     XJ0712    2.7    de   E
PY0811    53.5     e    D                      PY0811    2.3    e    E
MDA is a lipid peroxidation product that can be induced to a higher level when
plants are exposed to a highly osmotic environment, and can be an indicator of
increased oxidative damage [6]. MDA is often considered an indicator of the response
to cold stress [5]. As shown in Figure 2, MDA contents increased obviously after
cold stress.

[Fig. 2. MDA content (µmol/g FW) of the 11 fennel varieties under control (CK) and naturally low temperature conditions.]

The MDA increases of the varieties rank as: XJ0710 (92.7%) > LX0611
(50.8%) > PY0811 (40.8%) > DC0503 (39.3%) > NJ0712 (35.1%) > QY0811 (31.3%) >
DC0410 (28.4%) > LY0901 (24%) > DC0415 (16.6%) > XJ0603 (14.5%) > XJ0712 (8.6%).
This result illustrates that XJ0710 suffered the greatest oxidative damage of any
variety under the same cold stress treatment, while XJ0712 suffered the least.
Acknowledgement. The authors gratefully acknowledge the help from Hong Zhang
and Fangsheng Gao at Dezhou University, China. This work was supported by
grants from the Important Program of Shandong thoroughbred engineering: Packing
up, screening and utilization of fennel cultivar resources in Dezhou city (2009).
References
1. Deng, J.M., Jian, L.C.: Advances of studies on plant freezing tolerance mechanism:
freezing tolerance gene expression and its function. Chinese Bulletin of Botany 18(5),
521–530 (2001)
2. Bertrand, A., Castonguay, Y.: Plant adaptations to overwintering stresses and implications
of climate change. Canadian Journal of Botany 81, 1145–1152 (2003)
3. Li, H.S., Sun, Q., Zhao, S.J.: Experiment principle and technology of plant physiology and
biochemistry, pp. 134–137, 258–260. Higher Education Press, Beijing (2000)
4. Wise, R.R., Naylor, A.W.: Chilling-enhanced photooxidation: evidence for the role of
singlet oxygen and superoxide in the breakdown of pigments and endogenous
antioxidants. Plant Physiol. 83, 278–282 (1987)
5. Huang, L.Q., Li, Z.H.: Advances in the research of cold-resistance in landscape-plants.
Hunan Forestry Science & Technology 31(5), 19–21 (2004) (in Chinese)
6. Zhu, J.K.: Plant salt tolerance. Trends Plant Sci. 6, 66–71 (2001)
7. Prasad, T.K.: Role of catalase in inducing chilling tolerance in pre-emergent maize
seedlings. Plant Physiol. 114(4), 1369–1376 (1997)
8. Wahid, A., Shabbir, A.: Induction of heat stress tolerance in barley seedlings by pre-
sowing seed treatment with glycinebetaine. Plant Growth Regul. 46, 133–141 (2005)
9. Wahid, A., Perveen, M., Gelani, S., Basra, S.M.: Pretreatment of seed with H2O2
improves salt tolerance of wheat gene ZmPP2C of Zea mays roots. J. Plant Physiol. Mol.
Biol. 31, 183–189 (2005)
10. Chinta, S., Lakshmi, A., Giridarakumar, S.: Changes in the antioxidant enzyme efficacy in
two high yielding genotypes of mulberry (Morus alba L.) under NaCl salinity. Plant
Sci. 161, 613–619 (2001)
11. Verslues, P.E., Batelli, G., Grillo, S., Agius, F., Kim, Y.S., Zhu, J.H., et al.: Interaction of
SOS2 with nucleoside diphosphate kinase 2 and catalases reveals a point of connection
between salt stress and H2O2 signaling in Arabidopsis thaliana. Mol. Cell Biol. 27,
7771–7780 (2007)
Survey on the Continuing Physical Education in the
Cities around the Taihu Lake
JianQiang Guo
ChangZhou University,
Changzhou 213164 China
Jqguo5986@163.com
Abstract. Based on documentary materials and a questionnaire survey, this
paper argues that, for the Physical Education (PE) attended by on-the-job adult
students in colleges among the cities around the Tai Lake (Taihu Lake), the
teaching plans, training goals, and teaching material contents must satisfy the
physical, psychological, and health needs of on-the-job adult students, meet the
demands of social sports, and meet the needs of the development of the
discipline.
Keywords: the cities around Tai Lake, continuing sports education,
teaching situation, survey.
Preface
1.1 The object of study: students who receive continuing sports education in the
colleges of the city circle around the Tai Lake (Suzhou, Wuxi, Changzhou, Huzhou,
Jiaxing) that open continuing sports education classes.
1.2 Research methods:
1.2.1 Questionnaire: 900 questionnaires were sent out in the city circle around the
Tai Lake (Suzhou, Wuxi, Changzhou, Huzhou, Jiaxing), and 824 valid ones were
returned, a recovery rate of 91.6%.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 375–380, 2011.
© Springer-Verlag Berlin Heidelberg 2011
1.2.2 Mathematical statistics: the statistics from the valid questionnaires were
processed by computer using the statistical software SPSS 11.5.
1.2.3 Literature review: relevant materials on continuing sports education in China
were consulted.
The city circle around the Tai Lake lies in a developed economic region and has
distinctive geographical features. In this area, engineering education and higher
vocational schools are well developed. Relatively speaking, a significant number
of technical school students have the desire to study further; the fact shown in
Table 1 that 73.2% of students hold a technical secondary school or senior college
degree proves this point. There are many universities in the city circle around Tai
Lake, each with numerous subordinate schools, where students continue their physical
education and physical exercise. The admission of full-time students for continuing
education focuses on students who have finished high school or who are willing to
receive higher education in engineering.
2.2 The survey on the students' P.E. learning motivation and interest
[Table 2. Students' motivations by gender (male / female / aggregate), giving n and proportion for: total, enhanced physique, mastering sports skills, improving ability, entertainment, and curing diseases.]
Table 2 shows that students who receive continuing education take sports exercise
to enhance their physique (30.5%) and to master sports skills and
technology (18.5%). Therefore, the teaching should take a variety of forms, such as
basic sports classes, special improvement classes, and sports health classes. In the
selection of teaching material contents, ball games and bodybuilding should be
important parts. As to the teaching form, hierarchical teaching should be carried out;
otherwise, students' enthusiasm for learning sports will be severely damaged [1].
Students' value orientations in participating in sports activities focus on two
aspects: "enhanced physique" and "entertainment and peaceful mind and inner
tranquility". Their value orientations also reflect a low choice rate for the social,
cultural, and psychological function of sports of "improving the ability to adapt to
society". This shows that the sports consciousness of contemporary college students
needs to be guided toward a comparatively complete sports value. This article
classifies the socialization course into the physical education curriculum system and
emphasizes this function of sports. In addition, the low choice rate of "mastering
sports skills and technology" also shows that the traditional guiding ideology of the
sports curriculum and students' main needs are not yet well aligned.
2.3 The survey on the sports that students like
According to Table 3, the surveyed students who are receiving continuing
education like the following sports courses most: basketball, aerobics,
table tennis, badminton, football, martial arts, bodybuilding, and volleyball. These
were all among the top 10 sports in the report of the China mass sports present
situation investigation published by the national sports administration in 2002 [2].
2.4 The survey on the teaching form of continuing education
According to Table 4, among the students receiving continuing education in the
city circle around Tai Lake, boys pay little attention to the teaching form, while
girls' requirements in the teaching form differ a lot: mixed classes or small classes
are required. This is directly linked to the origin structure of continuing students
in high schools in this city circle. Relatively speaking, the girls are younger; the
majority come from junior colleges, technical secondary schools and high schools
and are physically shy. Their preferences in
378 J. Guo
teaching form are: elective classes (54.5%), club-type classes (26.7%) and basic
classes (17.6%). The reasons are as follows. The elective teaching mode emphasizes
sports ability and is classified by project content, so students have more choices;
its advantage is that it can fully meet continuing students' demand for personal
development and helps cultivate sports and fitness ability and interests. The
guiding ideology of the club teaching mode is to fully exert students' subjective
initiative and attach importance to cultivating students' interest in sports; its main
purpose is to improve students' sports ability and foster the habit of life-long
physical exercise. In the basic class, by contrast, students' initiative is greatly
depressed: they cannot make choices on their own, which also hinders the
development of individuality.
3 Conclusions
3.1 The students receiving continuing education in the city circle around Tai Lake
are mainly at the level of colleges and engineering schools; at the same time, owing
to the geographical position, junior colleges and technical secondary schools are the
main sources of matriculation. The continuing sports education curriculum should
consider the characteristics of the students' origin. We suggest that P.E. class time
be increased appropriately from the ordinary 70 hours to 100 hours, spread over
four semesters, consistent with the schedule of ordinary students. Plans should be
formulated to ensure that continuing students have more sports time, cultivating
the habit of regular physical exercise. Increasing the teaching hours, teaching the
necessary sports knowledge and varying instruction from person to person enable
students to master at least one exercise of lifelong sports value. The content of
continuing P.E. should combine physiological, psychological and social sports
needs, and colleges should arrange sports lessons according to reality. Aerobic
exercise programs featuring interest, safety, entertainment, confrontation and a
general audience can help students keep fit and strong, develop their own interests,
vent their emotions, learn the necessary physical theory knowledge and master one
or two sports skills, so that they are prepared not only for lifelong physical
exercise but also for participation in social sports activities [3].
References
[1] Wu, T.P.: Research on the sport motivation of adult college students. Chinese Sports Science and Technology 37(suppl.), 19–20 (2001)
[2] Report on the investigation of the present situation of mass sports in China. The State General Administration of Sports (2002)
[3] Cheng, D.F., Wang, Q.: Sports teaching for employed adult students. Chinese Adult Education 10, 164–165 (2006)
[4] Wang, Q.J., et al.: Selection of the adult sports teaching model in Zhejiang education. Journal of Ningbo University (Education Science Edition) 26(5), 108–109
[5] Chen, Q.S.: On the characteristics and countermeasures of sports education in adult education. Journal of Adult Education Sports 25(1), 3–4 (1993)
The Gas Seepage and Migration Law of Mine Fire Zone
under the Positive Pressure Ventilation
Keywords: positive pressure ventilation, mine fire zone, fire spread, smoke.
1 Introduction
The main fans of a mine have three common ventilation modes: positive pressure
(forced) ventilation, negative pressure ventilation and hybrid ventilation. In a
forced-ventilation mine, the main fan forces fresh air into the roadway, making
the absolute pressure underground greater than the atmospheric pressure outside
the mine or outside the air duct at the same elevation. The relative pressure is
positive, which is why forced ventilation is called positive pressure ventilation [1].
The influence of positive pressure ventilation on the spread of a fire zone in the
mine lies mainly in its influence on the migration of smoke currents and
high-temperature points under positive pressure ventilation in the mine fire.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 381–389, 2011.
© Springer-Verlag Berlin Heidelberg 2011
382 H. Wang et al.
the roadway follows the relevant field equations of motion, the laws of heat and
mass conservation and the basic equations of chemical fluid dynamics, taking them
as the starting point [2], namely

continuity equation
\[
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_j)}{\partial x_j} = 0 ; \tag{1}
\]

energy equation
\[
\frac{\partial (\rho h)}{\partial t} + \frac{\partial (\rho u_j h)}{\partial x_j} = \frac{\partial}{\partial x_j}\left( \Gamma_h \frac{\partial h}{\partial x_j} \right) + I_h . \tag{3}
\]
Goaf, top-coal caving regions and coal pillars in a coal mine are dual-porosity
media with both pores and fractures. The airflow in the roadway penetrates into
the coal, and the oxygen in the air undergoes physical adsorption, chemical
adsorption and chemical reactions with the coal molecules, producing a large
amount of heat. Under suitable heat-accumulation conditions, the coal continues
to be oxidized and heated, and when the temperature reaches the ignition point,
coal combustion results. According to mass and energy conservation, mathematical
models for the spontaneous combustion of loose coal, such as the coal in the gob,
can be obtained.
The effect of buoyancy cannot be ignored during the combustion of the fire zone.
Therefore, the seepage flow equation [3] of air leakage in the fire zone follows:

\[
-\nabla h + \beta \rho_0 g \Delta T \, \vec{k} = (A + B|\vec{v}|)\,\vec{v} , \tag{5}
\]

where \(\vec{v}\) is the permeability velocity of the air flow, m·s⁻¹; \(|\vec{v}|\) the modulus of the
permeability velocity, m·s⁻¹; ∇ the Hamiltonian operator; h the total pressure,
h = p + ρ₀gz, Pa; p the leakage wind pressure (taken positive here), Pa; z the
vertical coordinate, m; ρ₀ the gas density of the fresh airflow in the roadway,
kg·m⁻³; g the acceleration of gravity, m·s⁻²; β the gas expansion coefficient; ΔT the
temperature difference, ΔT = T − T₀, K; T the loose coal temperature, K; T₀ the
temperature of the fresh airflow in the roadway, K; ρ the gas density, kg·m⁻³;
\(\vec{k}\) the vertical upward unit vector; A the linear drag coefficient, A = nν/k; B the
quadratic drag coefficient, B = nβ′ρD_m/k; ν the kinematic viscosity coefficient,
m²·s⁻¹; n the porosity; D_m the average diameter of the porous media skeleton, m;
k the permeability; β′ the geometric figure coefficient.
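In one dimension with a prescribed driving gradient, Eq. (5) reduces to a scalar quadratic in the seepage velocity, (A + B|v|)v = G. The sketch below solves that quadratic for the positive root; the numerical values of A, B and the driving term G are hypothetical illustrations, not values from the paper.

```python
import math

def seepage_velocity(G, A, B):
    """Solve the 1-D non-Darcy drag law (A + B*|v|)*v = G for v >= 0.

    G -- magnitude of the driving term (pressure gradient plus buoyancy), Pa/m
    A -- linear drag coefficient
    B -- quadratic drag coefficient
    """
    if B == 0:
        return G / A  # pure Darcy limit
    # positive root of B*v^2 + A*v - G = 0
    return (-A + math.sqrt(A * A + 4.0 * B * G)) / (2.0 * B)

# Hypothetical illustrative values (not taken from the paper):
A, B = 1.0e6, 5.0e7
G = 20.0  # driving gradient, Pa/m
v = seepage_velocity(G, A, B)
print(f"permeability velocity v = {v:.3e} m/s")
```

For small driving gradients the quadratic term is negligible and the result approaches the Darcy value G/A, which gives a quick sanity check.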
\[
\rho_e C_e \frac{\partial T}{\partial t} = \nabla \cdot (\lambda_e \nabla T) - n \rho_g C_g (\vec{v} \cdot \nabla T) + q(T) . \tag{6}
\]

Where: T is the coal temperature, K; t the time variable, s; ρ_e the loose coal
density, kg·m⁻³; C_e the heat capacity of loose coal, J·kg⁻¹·K⁻¹; λ_e the thermal
conductivity coefficient of loose coal, W·m⁻¹·K⁻¹; ρ_g the gas density, kg·m⁻³;
C_g the gas specific heat, J·kg⁻¹·K⁻¹; q(T) the heat release intensity of coal, J·s⁻¹·m⁻³.
Oxygen is a necessary condition for spontaneous combustion. Oxygen migration is
a complex process accompanied by the consumption and dilution of oxygen in coal
under simultaneous convective diffusion and molecular diffusion. According to the
mass transfer law [5], oxygen migration satisfies

\[
\frac{\partial C}{\partial t} + \nabla \cdot (\vec{v} C) = \nabla \cdot (D \nabla C) - V(T) . \tag{7}
\]
In terms of solid boundaries and artificial boundaries, the physical boundary
conditions during the process of spontaneous combustion include seepage field
boundary conditions and concentration field boundary conditions.

The seepage field boundary conditions are as follows:

first kind boundary - given constant pressure,
\[
h = P ; \tag{9}
\]

third kind boundary - given wind pressure value or given air volume.

The concentration field boundary conditions are as follows:

first kind boundary - given concentration on the boundary,
\[
C = f_1 ; \tag{11}
\]
The accident happened during the repeated mining of the No. 11 coal seam. The
repeated mining thickness was 2 to 8 meters, and the inclination angle of the coal
seam was 20 degrees. The coal seam roof and floor were composed of sandstone.
The mine was a low-gas coal mine. The volatile matter of the coal was 30 percent;
the coal dust was explosive, with an explosion index of 70 percent. The spontaneous
ignition tendency of the coal seam was identified as grade one, with a combustion
stage of 6 to 8 months. The ventilation system was central parallel forced
ventilation, with the auxiliary shaft taking in air and the main shaft returning air.
At 2:28 on September 20, 2008, the mine had a particularly serious fire accident,
causing the death of 31 people. The direct cause of the accident was that the second
blind air shaft of the mine was arranged at the bottom of the mined No. 11 coal
seam. The coal pillar at the top of the roadway broke, was exposed to air and
suffered air leakage, which caused spontaneous combustion with rising temperature
due to coal oxidation and led to the fire incident. Figure 1 is a schematic diagram
of the ignition point. Based on this fire incident, this paper analyzes the gas
seepage and flow features of smoke in the goaf and mine ventilation system after
the accident, providing theoretical support for avoiding similar incidents in the
future.
According to the basic conditions of ventilation in the accident mine, the paper
selects the coal pillar in the fire area, the upper goaf, the lower coal and rock
and the ventilation system for modeling. As the accident occurred in the upper
areas of the second blind air shaft and the fourth crossheading, no spontaneous
combustion occurred in the 01 face area, the second old blind main shaft, the
second new blind main shaft, the sixth crossheading, or the upper areas of the 02
and 03 drifting faces and areas near the two faces. The roadways of the above
areas are mainly circulation areas of gaseous products; therefore, the mine
ventilation system beyond the crossing of the second blind air shaft and the fourth
crossheading is simplified. A hexahedral structured grid is adopted for meshing,
with about 3.45 million grid cells. Taking into account the complex structure of
the flow field in the spontaneous combustion area, local mesh refinement is applied
in the fire region and some other parts. The simplified physical model is shown in
Fig. 2.
The basic parameters and boundary conditions are as follows: the normal air inlet
speed of the second blind air shaft is 3 m·s⁻¹, the porosity of the coal pillar is 0.2,
and the porosity of the gob is 0.2 to 0.3. The seepage field boundaries of the gob
and the face boundary conditions are shown in Eq. (10), namely, the given
penetration mass flow is 0.01 g·m⁻². The gas concentration field boundary
condition takes the second kind boundary condition (Eq. (12)), namely normal
no-penetration. The outlet of the blind main shaft adopts pressure-outlet boundary
conditions. The side wall of the roadway below the second blind air shaft along the
fourth crossheading is impermeable, and the side wall of the second new blind air
shaft is impermeable.
Table 1 shows the mine data and phenomena explored and observed by the mine
rescue team at various locations after the accident during the emergency-rescue
process. Fig. 3 is a simulated concentration contour of carbon monoxide on the
center sections along the length direction of the second blind air shaft and fourth
crossheading at the stage of full development of spontaneous combustion. As can
be seen from the figure, the second blind air shaft, fourth crossheading and second
new blind main shaft are full of dense smoke, and the concentration of CO settles
between 0.0105 and 0.011. The simulation result closely approaches the actual
monitoring result and observation; the error arises mainly from the simplification
of the ventilation network. Through the above analysis it can be seen that the
simulation results basically agree with the development state of the accident, thus
indicating the accuracy and reliability of the mathematical model of fire developing
tendency in the mine fire zone under positive pressure ventilation.

Table 1. Concentration of carbon monoxide and temperature measured before the
adjustment of the ventilation system during the accident rescue, and observed
phenomena
Fig. 3 is the volume fraction contour of carbon monoxide in the direction of the
center section of the second blind air shaft and fourth crossheading in the fire
accident. The figure shows that, due to the compression of mechanical wind
pressure under positive pressure ventilation, most gas in the fire zone flows into
the goaf, and the fire has an evident tendency to spread upwards from the goaf
and the coal pillar part. The trend of the carbon monoxide volume fraction contour
during the initial period of the fire reveals that the volume fractions of carbon
monoxide in the second blind air shaft and fourth crossheading are almost zero,
and carbon monoxide enters the upper part of the goaf. But as the fire develops,
after the fire center forms, the carbon monoxide gradually spreads from the fire
center towards the roadway and enters the second blind air shaft, while a lot of
smoke begins to appear in the ventilation system. The volume fraction of carbon
monoxide near the second blind air shaft and fourth crossheading remains at about
0.01, which is close to the concentration values actually measured by the rescue
crew during the rescue work.
Fig. 4 is a velocity contour through the center section of the second blind air shaft
and fourth crossheading during the development of the fire. The figure shows that,
as the smoke entering the roadway at the initial stage of the fire is scarce, it has
little effect on the airflow inside the roadway. But as the fire develops, a lot of
smoke flows into the roadway, and the expansion of high-temperature gas speeds
up the gas flow rate in the roadway. This acceleration directly increases the spread
and diffusion rate of smoke in the ventilation system, aggravating the accident.
Fig. 3. Volume fraction contour of carbon monoxide at the section x=1.3, z=61.72
On the basis of the above simulation results and the description of the accident
process, the following aspects should be of concern for fire monitoring in mine
safety:

Firstly, for areas with crushed coal on the side wall of the roadway and serious air
leakage, effective measures to prevent and control spontaneous combustion and air
leakage should be adopted in time, so as to remove the major hidden danger of
spontaneous combustion. Positive pressure ventilation cannot prevent the reverse
spread of spontaneous combustion near the intake in the goaf; that is why the fire
accident occurred in the Fuhua coal mine.
Secondly, for carbon monoxide as a warning index gas of mine spontaneous
combustion, attention should be paid to the impact of the air volume and airflow.
Positive pressure ventilation presses most gases produced by spontaneous
combustion into the goaf, significantly reducing the concentration of carbon
monoxide in the return airway, which increases the difficulty of gas monitoring
and timely warning of spontaneous combustion. Furthermore, the air leakage of
coal in the goaf is serious, so a lot of fire smoke containing carbon monoxide seeps
directly from the gaps in the coal, which also greatly increases the difficulty of
early warning. This is also one of the major reasons why the fire accident was not
warned of in time. It should also be noted that positive pressure ventilation may
still allow the reverse spread of spontaneous combustion, since the combustion
spreads toward the oxygen it absorbs.
Thirdly, the misconception of using a carbon monoxide concentration of 24 ppm
as the warning index value should be corrected. The mine's return airflow had an
air volume of 1150 m³·min⁻¹. The concentration of carbon monoxide gradually
increased from 2 ppm on September 1st to 7 ppm on September 14th, and reached
about 10 ppm before the accident happened, which showed signs of spontaneous
combustion but raised no vigilance. Taking 24 ppm, the limit at which carbon
monoxide becomes injurious to health, as the warning index of spontaneous
combustion lost the opportunity for timely warning. This problem exists in many
mines in China but still gets no attention.
5 Conclusion
Firstly, the model of fire developing tendency in the mine fire zone under positive
pressure ventilation was established, including the roadway flow field, fire zone
combustion, the chemical reaction model between coal and oxygen, and boundary
condition models.

Then, based on the prototype of the particularly serious "9.20" fire accident of
the Fuhua mine in Heilongjiang province, the paper verified the correctness of the
above mathematical model and obtained the law of the influence of positive
pressure ventilation on the fire zone and its implications for safety in production.
References
1. Zhang, G.S.: Ventilation Security. China University of Mining and Technology Press,
Xuzhou (2007)
2. Wang, H.Y.: Study on Three-Dimensional Smoke Flowing Theory and the Application
Technology of VR in Passage Fire. China University of Mining and Technology, Beijing
(2004)
3. Peng, B., Reynolds, R.G.: Cultural Algorithms: Knowledge Learning in Dynamic
Environments. In: Proceedings of the 2004 Congress on Evolutionary Computation, pp. 1751–1758.
World Scientific Publishing Co. Pte Ltd., Singapore (2004)
4. Anderson, J.D., Jr.: Computational Fluid Dynamics: The Basics with Applications.
Tsinghua University Press, Beijing (2002)
5. Incropera, F.P., DeWitt, D.P., Bergman, T.L., et al.: Fundamentals of Heat and Mass Transfer.
John Wiley & Sons, New York (2007)
6. Krause, U., Schmidt, M., Lohrer, C.: Computations on the Coupled Heat and Mass Transfer
during Fires in Bulk Materials, Coal Deposits and Waste Dumps. In: Proceedings of the
COMSOL Multiphysics Users Conference, Frankfurt, pp. 1–6 (2005)
Study on Continued Industry's Development Path of
Resource-Based Cities in Heilongjiang Province*
1 Introduction
Heilongjiang is a province rich in resources, and resource-based cities occupy a
very important position in it. In the past these cities made significant contributions
to national construction and regional economic and social development, but a
series of "shocks" followed: recently most of these cities and regions have entered
the middle-to-late period of resource exploitation. Because natural resources have
plummeted, enterprise economic benefits have declined and the ecological
environment has worsened, the development of these cities and regions has slowed,
and they face the predicament of relative decline. Choosing an appropriate and
sustainable development path for the resource-based cities of Heilongjiang has
become one of the important strategic topics.
* The article is subsidized by the philosophy and social science project of Heilongjiang
province in 2010 (item number: 10B046) and the Heilongjiang post-doctorate scientific
research foundation (item number: LBH-Q09178).
Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 390–395, 2011.
© Springer-Verlag Berlin Heidelberg 2011
are seven regional cities, namely Yichun, Jixi, Hegang, Shuangyashan, Qitaihe,
Daqing and Heihe. According to the types of resources, they can be divided into
coal cities, petroleum cities and logging cities: 4 coal cities (Jixi, Hegang,
Shuangyashan, Qitaihe); 1 petroleum city (Daqing); 9 logging cities (Yichun,
Heihe, Wudalianchi, Shangzhi, Hailin, Muling, Ning'an, Hulin). These cities mostly
grew out of the exploration and exploitation of mineral and forest resources,
evolving on the basis of mining and forestry rather than forming naturally. To
develop the mineral resources, the state injected large amounts of manpower,
funds and material in a relatively short period, so that a few families in a village
or a deserted place suddenly became a city.
Resource-based cities in Heilongjiang province are mostly in a period of recession.
The industrial transformation of these resource-based cities is slow. Oil, coal and
forest resources have declined. The resource-based industrial structure is single,
the scale of successor industries remains small, and the decline of mining brings a
loss of growth that is hard to repair.
3.3 Low Industrial Correlation Degree and Low Economic Benefit
3.4 Prominent Contradiction between Funds Supply and Demand, and Scarcity
of Talent Resources
Due to the lack of early industrial transformation and technology upgrading, most
resource-based cities are still at the primary stage of outputting processed resource
products with low profit and little market competitiveness, which is the basic
reason for these cities' lack of financial resources. In addition, because of their
geographical location, natural environment and living conditions, resource-based
cities often see talent outflows exceeding inflows. Talent is the foundation of
regional innovation; the shortage of intellectual resources affects these cities'
innovating capacity, and the technical renovation of state-owned enterprises is
hampered. Even when a new high-tech project is introduced, the lack of talent
prevents a good project from reaching good benefits, and new projects cannot be
started.
A city's development cannot abandon its foundation. For resource-based cities,
ensuring the sustainable development of the resource industry, with the original
dominant industry as a prerequisite, is the forceful guarantee of sustained, healthy
and smooth urban economic development. The oil industry has been the important
pillar supporting Daqing's economic development, so Daqing's petrochemical
enterprises should fully support the construction of a large petrochemical base and
strive to make the petrochemical industry Daqing's biggest leading industry.
Forestry is the most important pillar of the logging cities' economic development;
in the future they should continue to promote a complete industrial system of
forest cultivation, lumbering and processing. The coal cities may, while exploiting
coal resources, actively expand coal chemical and coal-to-electricity industries.
Developing green industry can not only bring new economic growth points but
also reduce damage to the natural environment and remove the greatest hazard of
environmental deterioration. Daqing can develop a green food industry of beef,
mutton, pork, dairy and goose products, and focus on developing wetland
landscape and green hot-spring tourism projects. The logging cities should actively
develop wild fruit drinks, fresh edible fruit, fruit milk, mineral water, quick-frozen
products and pollution-free green rice, and use their abundant tourism resources
to develop the characteristic forest tourism of northern Heilongjiang province.
The coal cities should also build tourism projects around Xingkai Lake, the Wusuli
River, the Wandashan forests, the Zhenbaodao wetland, the Longjiang gorges and
the Weihe wetland.
References
1. Bhattacharjee, Y.: Industry Shrinks Academic Support. Science 312, 671 (2006)
2. Fritsch, M., Brixy, U., Falck, O.: The Effect of Industry, Region, and Time on New
Business Survival: A Multi-Dimensional Analysis. Review of Industrial
Organization 28(3), 285–306 (2006)
3. Bale, H.E.: Industry, innovation and social values. Science and Engineering Ethics 11(1),
31–40 (2005)
4. Wang, S., Jiang, L.: Economic transformation capacities and developmental
countermeasures of coal-resource-based counties of China. Chinese Geographical
Science 20(2), 184–192 (2010)
5. Pichon, C.L., Gorges, G., Bot, P., Baudry, J., Goreaud, F., et al.: A Spatially Explicit
Resource-Based Approach for Managing Stream Fishes in Riverscapes. Environmental
Management 37(3), 322–335 (2006)
1 Introduction
The issue of measuring precision in the wire and cable market is becoming more
and more prominent. It is significant to measure cable length precisely, rapidly and
economically. Compared with traditional measurement methods, time domain
reflectometry (TDR) technology has the advantages of non-destruction, portability
and high precision, making it an ideal cable length measurement method [1~3].
In order to achieve high cable length measuring precision and study the key
technologies of TDR-based cable length measurement, two cable length
measurement systems based on different platforms are developed.
*
Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 396–401, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Cable Length Measurement Systems Based on Time Domain Reflectometry 397
The cable length is measured n times. Taking into account the random measuring
errors and according to the law of random error accumulation, the average
measuring error can be calculated by formula (5):

\[
\overline{\Delta l} = \frac{\Delta v}{v} l + \frac{1}{n} \sum_{i=1}^{n} \frac{\Delta N_i}{N_i} l + \frac{\Delta f}{f} l . \tag{5}
\]
The ±1 quantization error can be expressed as:

\[
\frac{1}{N_1} = \frac{1}{N_2} = \cdots = \frac{1}{N_n} \approx \frac{1}{N} . \tag{6}
\]
398 J. Song, Y. Yu, and H. Gao
At present, the temperature stability of crystal oscillators is better than 10⁻⁹;
therefore the count pulse frequency error can be ignored. At this point the TDR
cable length measuring uncertainty u_l can be expressed as

\[
u_l = \sqrt{\left( l \frac{u_v}{v} \right)^2 + \left( \frac{v}{2f} \right)^2 \frac{1}{n}} . \tag{7}
\]
[Figure: block diagram of the measurement system, comprising a signal generator, sampling oscilloscope, temperature measurement module, computer, and the cable under test]
\[
s_l = \sqrt{\frac{\sum_{i=1}^{N} (l_i - \bar{l})^2}{N - 1}} = 0.12\ (\mathrm{m}) . \tag{8}
\]
\[
s(\bar{l}) = \frac{s_l}{\sqrt{N}} = 0.01\ (\mathrm{m}) . \tag{9}
\]

The coverage factor k is 2, so the expanded uncertainty of the cable length
measurement is

\[
U_l = k\, s(\bar{l}) = 2 s(\bar{l}) = 0.02\ (\mathrm{m}) . \tag{10}
\]

It can be seen from formulas (8) to (10) that the measurement system developed
in this paper can achieve high-precision cable length measurement.
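The evaluation chain of Eqs. (8)-(10), from sample standard deviation through standard deviation of the mean to expanded uncertainty with coverage factor k = 2, can be sketched as follows. The sample readings below are hypothetical, since the paper reports only the summary statistics:

```python
import math

def length_uncertainty(samples, k=2.0):
    """Type-A uncertainty of repeated cable-length measurements.

    Returns (sample standard deviation s_l, standard deviation of the
    mean s(l_bar), expanded uncertainty U_l = k * s(l_bar)), mirroring
    Eqs. (8)-(10) of the paper.
    """
    n = len(samples)
    mean = sum(samples) / n
    s_l = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))  # Eq. (8)
    s_mean = s_l / math.sqrt(n)                                        # Eq. (9)
    return s_l, s_mean, k * s_mean                                     # Eq. (10)

# Hypothetical readings centred near 103.5 m (the paper's own raw data
# are not listed, only the summary values 0.12 m / 0.01 m / 0.02 m):
samples = [103.42, 103.50, 103.58, 103.50, 103.42, 103.58, 103.50, 103.50]
s_l, s_mean, U_l = length_uncertainty(samples)
print(f"s_l = {s_l:.3f} m, s(l_bar) = {s_mean:.3f} m, U_l = {U_l:.3f} m")
```

Note how the standard deviation of the mean shrinks with the square root of the number of repetitions, which is why the paper's 0.12 m sample scatter yields a 0.01 m uncertainty of the mean.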
[Figure: histogram of the measurement results - number of occurrences versus cable length (m), over the range 103.25 m to 103.92 m]
5 Conclusions
Two cable length measurement systems based on the principle of time domain
reflectometry (TDR) are designed. The high-precision time interval measurement
module is the core of the first system; a sampling oscilloscope with a computer is
the core of the second. The experimental results show that the measurement
systems developed in this paper can achieve high cable length measuring precision.
References
1. Dodds, D.E., Shafique, M., Celaya, B.: TDR and FDR Identification of Bad Splices in
Telephone Cables. In: 2006 Canadian Conference on Electrical and Computer Engineering,
pp. 838–841 (2007)
2. Du, Z.F.: Performance Limits of PD Location Based on Time-Domain Reflectometry. IEEE
Transactions on Dielectrics and Electrical Insulation 4(2), 182–188 (1997)
3. Pan, T.W., Hsue, C.W., Huang, J.F.: Time-Domain Reflectometry Using Arbitrary Incident
Waveforms. IEEE Transactions on Microwave Theory and Techniques 50(11), 2558–2563
(2002)
4. Langston, W.L., Williams, J.T., Jackson, D.R.: Time-Domain Pulse Propagation on a
Microstrip Transmission Line Excited by a Gap Voltage Source. In: IEEE MTT-S
International Microwave Symposium Digest, pp. 1311–1314 (2006)
5. Buccella, C., Feliziani, M., Manzi, G.: Detection and Localization of Defects in Shielded
Cables by Time-Domain Measurements with UWB Pulse Injection and Clean Algorithm
Postprocessing. IEEE Transactions on Electromagnetic Compatibility 46(4), 597–605
(2004)
The Cable Crimp Levels Effect on TDR Cable Length
Measurement System
1 Introduction
All kinds of wires and cables are widely used with the rapid development of the
national economy. The wire and cable industry has developed rapidly and
gradually expanded its production scale and market share. Cable is an important
commodity, and its length measurement precision is strictly stipulated in the
national standard. However, the issue of measuring precision in the wire and cable
market is becoming more and more prominent. It is significant to measure cable
length precisely, rapidly and economically. Compared with traditional
measurement methods, time domain reflectometry (TDR) technology has the
advantages of non-destruction, portability and high precision, making it an ideal
cable length measurement method [1~3].

It is found in practice that the measuring accuracy of the TDR cable length
measurement system can be influenced by the cable crimp level. In order to reduce
this effect, the cable crimp level's influence on the traveling wave propagation
velocity is analyzed theoretically, and the corresponding experiment is performed.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 402–407, 2011.
© Springer-Verlag Berlin Heidelberg 2011
\[
C = \frac{\varepsilon D l}{d} , \tag{1}
\]

where D is the width of the capacitor plates, l is the length of the capacitor plates,
d is the distance between the two plates of the capacitor, and ε is the dielectric
constant.

According to the cable length measurement principle of capacitance conversion,
an open-ended cable is regarded as a capacitor, and the two cores of the cable are
regarded as the two plates of the capacitor. The effective width of the cable core
can be approximately interpreted as D in formula (1), the length of the cable as l,
and the distance between the cable cores as d. For a two-core cable whose internal
structure, ε, D and d are fixed, the capacitance between the two wires of the cable
is proportional to the cable length. That is,

\[
\frac{C_1}{C_2} = \frac{l_1}{l_2} . \tag{2}
\]
The cable length can be obtained by measuring the cable capacitance per unit
length [4].
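As a sketch of this proportionality (Eq. (2)), the length of an unknown cable can be inferred from a reference cable of known length and capacitance. The values below are taken from Table 1 of this paper (curled state, equivalent capacitance in nF):

```python
# Sketch of Eq. (2): C is proportional to l, so an unknown length follows
# from the capacitance ratio against a reference cable of known length.
# Reference values are from Table 1 (curled state).
REF_LENGTH_M = 30.00
REF_CAP_NF = 1.5183

def length_from_capacitance(cap_nf):
    """Infer cable length from its measured equivalent capacitance, Eq. (2)."""
    return REF_LENGTH_M * cap_nf / REF_CAP_NF

# Second cable in Table 1; its actual length was 86.75 m.
unknown = length_from_capacitance(4.3892)
print(f"estimated length: {unknown:.2f} m")
```

The estimate lands within a few centimetres of the true 86.75 m length, consistent with the capacitance and length ratios reported in Table 2.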
At high frequencies, the propagation velocity of electromagnetic waves can be
calculated by formula (3) [5~7]:

\[
v = \frac{1}{\sqrt{L_0 C_0}} , \tag{3}
\]

where L₀ is the distributed inductance per unit cable length and C₀ is the
distributed capacitance per unit cable length.

It can be seen from formula (3) that the wave velocity is inversely proportional to
the square root of the distributed capacitance per unit length. The relationship
between the velocity and the distributed capacitance can be expressed as

\[
v \propto \frac{1}{\sqrt{C_0}} . \tag{4}
\]
The distance between adjacent insulated cores decreases and the distributed
capacitance of the cable increases as the cable is curled; therefore, the velocity
decreases as the cable is curled. However, as long as the state of the cable under
test is uniform, the distributed capacitance of the cable is also uniform, and the
velocity can be controlled within a small range.
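As a quick consistency check of Eq. (4), the curl-to-straighten velocity ratio predicted from the distributed capacitances in Table 1 can be compared with the measured velocities in Table 3 for the 30.00 m cable:

```python
import math

# Consistency check of Eq. (4): v is proportional to 1/sqrt(C0), so the
# curl/straighten velocity ratio predicted from the distributed
# capacitances (Table 1) should match the measured ratio (Table 3).
C0_CURL = 50.610      # distributed capacitance, curled, pF (Table 1, 30.00 m)
C0_STRAIGHT = 48.717  # distributed capacitance, straightened, pF

predicted = math.sqrt(C0_STRAIGHT / C0_CURL)   # v_curl / v_straight from Eq. (4)
measured = 1.8886e8 / 1.9258e8                 # velocities from Table 3, 30.00 m

print(f"predicted ratio {predicted:.4f}, measured ratio {measured:.4f}")
```

The two ratios agree to within about 0.05 percent, supporting the paper's claim that the curl-induced velocity change is driven by the change in distributed capacitance.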
Table 1. Equivalent capacitance and distributed capacitance of different cable lengths in the curled and straightened states

Cable under test (m) | Curl: equivalent capacitance (nF) | Curl: distributed capacitance (pF) | Straighten: equivalent capacitance (nF) | Straighten: distributed capacitance (pF)
30.00 | 1.5183 | 50.610 | 1.4615 | 48.717
86.75 | 4.3892 | 50.595 | 4.2257 | 48.711

Table 2. Equivalent capacitance ratio and length ratio of the 86.75 m and 30.00 m cables in the curled and straightened states

Curl: capacitance ratio | Curl: length ratio | Straighten: capacitance ratio | Straighten: length ratio
2.8909 | 2.8917 | 2.8913 | 2.8917
Table 3. Velocity and relative error for different cable lengths and cable crimp degrees

Cable length (m) | Velocity, curl (m/s) | Velocity, straighten (m/s) | Relative error (%)
30.00 | 1.8886 × 10^8 | 1.9258 × 10^8 | 1.84
86.75 | 1.8892 × 10^8 | — | 1.94
The equivalent capacitance ratio and length ratio of the 86.75 m and 30.00 m cables
in the curled and straightened states are shown in Table 2. The ambient temperature
is 17.0 °C. It can be seen from Table 2 that although the values of the equivalent
capacitance of the cable differ under different conditions, the capacitance between
the two wires of the cable is still proportional to the cable length. In the same
state, the equivalent capacitance of the cable is greater the longer the cable, but
the distributed capacitance per unit cable length is always the same. Comparing the
distributed capacitance in the curled and straightened states, it can be found that
both the equivalent capacitance and the distributed capacitance are bigger in the
curled state than in the straightened state. This is because the distance between
adjacent insulated wire cores decreases as the cable is curled.
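As a quick consistency check, the ratios in Table 2 can be recomputed directly from the Table 1 capacitances (a sketch; the variable names are ours):

```python
# Verify, from the paper's Table 1 data, that the equivalent capacitance
# ratio tracks the length ratio in both cable states (formula (2)).

lengths = (30.00, 86.75)
equiv_capacitance_nf = {        # from Table 1
    "curl": (1.5183, 4.3892),
    "straighten": (1.4615, 4.2257),
}

length_ratio = lengths[1] / lengths[0]   # Table 2 reports 2.8917
for state, (c_short, c_long) in equiv_capacitance_nf.items():
    cap_ratio = c_long / c_short
    # Table 2 reports 2.8909 (curl) and 2.8913 (straighten)
    assert abs(cap_ratio - length_ratio) / length_ratio < 0.001
```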
The velocity and relative error for different cable lengths and cable crimp degrees
are shown in Table 3. It can be seen from Table 3 that the relative error of velocity
between the curled and straightened states is big. The velocity relative error for
the same cable crimp degree with the 86.75 m and 30.00 m cable lengths is shown in
Table 4.
The Cable Crimp Levels Effect on TDR Cable Length Measurement System 405
It can be seen from Table 3 and Table 4 that the cable crimp degree of RVV 300/300V
PVC sheathed cable has a great impact on the velocity. However, as long as the state
of the cable under test is uniform, the velocity can be kept within a small range.
The measurements were performed on RVVP multi-core shielded cable. The length of the
three-core cable is 40.73 m, that of the seven-core cable is 41.57 m, and that of the
eight-core cable is 43.18 m. The ambient temperature is 18.0 °C. The equivalent
capacitances and distributed capacitances of the different cable cores in the curled
and straightened states are shown in Table 5. The velocity relative error is shown
in Table 6.
It can be seen from Table 5 and Table 6 that the cable crimp degree of RVVP multi-core
shielded cable has little impact on the velocity. However, the state of the cable
under test should still be kept uniform to control the velocity error within a small
range.
The measurements were performed on SYV75-5-1 coaxial cable. The length is 156.06 m.
The ambient temperature is 18.0 °C. The equivalent capacitance and distributed
capacitance of the coaxial cable in the curled and straightened states are shown in
Table 7. The coaxial cable velocity relative error in the curled and straightened
states is shown in Table 8.
Table 4. Velocity relative error for the same cable crimp degree with 86.75 m and 30.00 m
cable lengths

State | Relative error (%)
Straighten | 0.1
Curl | 0.03
Table 5. Equivalent capacitances and distributed capacitances of different cable cores in
the curled and straightened states

Cable under test | Straighten: equivalent capacitance (nF) | Straighten: distributed capacitance (pF) | Curl: equivalent capacitance (nF) | Curl: distributed capacitance (pF)
Three cores | 5.7440 | 141.03 | 5.7890 | 142.14
Seven cores | 4.4257 | 106.46 | 4.4891 | 107.99
Eight cores | 4.7321 | 109.60 | 4.7709 | 110.50
Table 7. Equivalent capacitance and distributed capacitance of the coaxial cable in the
curled and straightened states

Cable under test | Straighten: equivalent capacitance (nF) | Straighten: distributed capacitance (pF) | Curl: equivalent capacitance (nF) | Curl: distributed capacitance (pF)
Coaxial cable | 11.993 | 76.848 | 12.022 | 77.034
Table 8. Coaxial cable velocity relative error for different cable crimp degrees
It can be seen from Table 7 and Table 8 that the cable crimp degree of SYV75-5-1
coaxial cable has very little impact on the velocity. However, the state of the cable
under test should still be kept uniform to control the velocity error within a small
range.
4 Conclusions
The effect of cable crimp level on the traveling-wave propagation velocity has been
studied. The experimental results show that the cable crimp degree of RVV 300/300V PVC sheathed
cable has a great impact on the velocity. The cable crimp degree of RVVP multi-core
shielded cable has little impact on the velocity. The cable crimp degree of SYV75-5-1
coaxial cable has very little impact on the velocity.
References
1. Dodds, D.E., Shafique, M., Celaya, B.: TDR and FDR Identification of Bad Splices in
Telephone Cables. In: 2006 Canadian Conference on Electrical and Computer Engineering,
pp. 838–841 (2007)
2. Du, Z.F.: Performance Limits of PD Location Based on Time-Domain Reflectometry. IEEE
Transactions on Dielectrics and Electrical Insulation 4(2), 182–188 (1997)
3. Pan, T.W., Hsue, C.W., Huang, J.F.: Time-Domain Reflectometry Using Arbitrary Incident
Waveforms. IEEE Transactions on Microwave Theory and Techniques 50(11), 2558–2563
(2002)
4. Shan, L.D., Meng, W.S., Zhang, L.H., Wang, Y.M.: Apply the Principle of Capacity
Conversion of Capacitor and Quick Determination the Inside Break Point of Cable. Metal
World (4), 46–48 (2006)
5. Mugala, G., Eriksson, R.: Measurement Technique for High Frequency Characterization of
Semi-Conducting Materials in Extruded Cables. IEEE Transactions on Dielectrics and
Electrical Insulation 11(3), 471–480 (2004)
6. Oussalah, N., Zebboudj, Y.: Analytic Solutions for Pulse Propagation in Shielded Power
Cable for Symmetric and Asymmetric PD Pulses. IEEE Transactions on Dielectrics and
Electrical Insulation 14(5), 1264–1270 (2007)
7. Oussalah, N., Zebboudj, Y., Boggs, S.A.: Partial Discharge Pulse Propagation in Shielded
Power Cable and Implications for Detection Sensitivity. IEEE Electrical Insulation
Magazine 23(6), 5–10 (2007)
The Clustering Algorithm Based on the Most Similar
Relation Diagram
Wei Hong Xu1, Min Zhu1, Ya Ruo Jiang1, Yu Shan Bai2, and Yan Yu2
1 College of Computer Science, Sichuan University, Chengdu, China
2 College of Education, Tianjin Normal University, Tianjin, China
weihong.xu.scu@gmail.com
yuyantj@yahoo.com.cn
Keywords: data mining, clustering algorithm, weighted graph, the most similar
relation diagram (MSRD), the most similar data (MSD), the most similar group
(MSG).
1 Introduction
In recent years, cluster analysis has become an important tool in data mining, and has
been extensively studied and applied to many fields such as pattern recognition,
customer segmentation, similarity search and trend analysis. Though thousands of
clustering algorithms have been presented, some new algorithms still continue to
appear [1] due to demands in practice and theory. Some algorithms of recent trends,
e.g. ensemble methods [2], semi-supervised methods [3], methods for clustering
large-scale datasets, multi-way clustering methods [4], as well as kernel and spectral
methods [5], have developed notably.
Recently, Yan Yu et al. published a clustering algorithm based on the MSRD (Most
Similar Relation Diagram) [6]. This algorithm consists of two stages: constructing the
MSRD of the dataset and cutting the MSRD into clusters. In literature [6], the first
stage has been presented in detail, but the second stage seems to have not been
explored sufficiently. In this paper we develop a package of methods for cutting the
diagram into clusters and apply some of them to several synthesized and real
datasets.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 408–415, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Definition 1. Let G = G(V, E) be a graph, with V = {vn | n = 1, …, N} the set of
vertexes. Then vh is the most similar vertex (MSV) of vi if h = arg minj {dij}, where
dij is the dissimilarity between vi and vj; vi, vj ∈ V; j = 1, 2, …, h, …, N; j ≠ i.
A vertex vi and one of its MSVs vj are called a most similar pair (MSP) of vertexes,
denoted by vi↔vj.
According to Definition 1, each vertex has at least one MSV in a graph, but it is not
true that every vertex must be the MSV of another vertex: some vertexes are not the
MSV of any vertex.
The edge connecting a vertex and its MSV is called a 1st level edge, denoted by
eij(1).
Suppose Gi = Gi(Vi, Ei) is a sub-graph of G(V, E) with Ei = {eij(1) | j = 1, …, J},
where J is the number of edges of Gi. Then Gi is called a 1st level sub-graph,
denoted by Gi(1).
Definition 2. Let (1) G be cut into M sub-graphs, G = {Gm | m = 1, …, M};
(2) Gi ∩ Gj = ∅ (Gi, Gj ∈ G; i, j = 1, 2, …, M; i ≠ j); (3) Vi = {vip | p = 1, …, P};
Vj = {vjq | q = 1, …, Q} (Vi ⊂ Gi; Vj ⊂ Gj). Then Gl is the most similar graph (MSG)
of Gi, and vlh ∈ Gl and vik ∈ Gj are the most similar pair (MSP) of vertexes between
Gl and Gi, if l, h, k = arg minj,p,q wpq, where wpq is the weight of e(vlp, viq);
j = 1, …, l, …, M; p = 1, …, h, …, P; q = 1, …, k, …, Q; j ≠ i.
The edge between the two vertexes of an MSP, vik and vlh, in 1st level sub-graphs is
called a 2nd level edge, denoted by eij(2).
According to Definition 2, any sub-graph has at least one MSG in a partition, but a
sub-graph is not necessarily the MSG of any other sub-graph.
Definition 3. Suppose there is a sub-graph Gi(l) = Gi(Vi(l), Ei(l)), l = 2, 3, …, L,
with Ei(l) = {eij(1), …, eik(l−1) | j = 1, 2, …, J; k = 1, 2, …, K}. Then Gi is an
m-th level sub-graph, denoted by Gi(m), if m = arg maxl l.
Definition 4. Suppose there is a graph G = (V, E), V = {vn | n = 1, …, N}, E = E(eij),
where eij is the edge between vi and vj (i, j = 1, 2, …, N; i ≠ j). Then G is called
the Most Similar Relation Graph (MSRG) if vi↔vj for every edge eij.
Fig. 1. (a) The coordinate graph of dataset 1; (b) the four 1st level MSRGs of dataset 1;
(c) the 2nd level MSRG of dataset 1; (d) the MSRD of dataset 1; (e) the r-MSRD of dataset 1
Connect every vertex (or node) to its MSV by an edge. By doing so, a certain number
of sub-graphs are composed. Following the definitions above, these sub-graphs are all
1st level MSRGs, and all the edges are 1st level edges.
According to Definition 2, find the MSGs of every 1st level sub-graph and the MSP of
vertexes between the two sub-graphs. Connect the two vertexes of every MSP so that
each 1st level sub-graph is connected to its MSGs, forming a certain number of 2nd
level sub-graphs. Following the definitions above, these sub-graphs are 2nd level
MSRGs. Iterate this process, merging the lower level sub-graphs into higher level
sub-graphs, until all MSRGs are aggregated into one graph: the MSRD of the dataset.
At last, assign each edge its weight value, written beside it.
By now the construction of the MSRD is accomplished.
As an example, we synthesized a simple two-dimensional dataset of 20 data points
called dataset 1. Fig. 1(a) is the coordinate graph of dataset 1. Fig. 1(b) shows the
four 1st level sub-graphs and Fig. 1(c) shows the 2nd level sub-graphs of dataset 1.
The numerals in Fig. 1(b) and (c) are the IDs of some data. The MSRD of dataset 1
constructed following the procedure above is shown in Fig. 1(d). Comparing Fig. 1(a)
with (d), the relation between a dataset and its MSRD can be seen.
From Fig. 1(b) it can be seen that the 20 vertexes are connected by 1st level edges
into four 1st level sub-graphs: G1(1){1, 2, 3, 4, 5, 6, 7, 8}, G2(1){9, 10, 11, 12,
13, 14, 15, 16}, G3(1){17, 18} and G4(1){19, 20}. Then G1(1) and G2(1) are connected
by the 2nd level edge e14,17 to form a 2nd level MSRG G1(2). G1(2){G1(1), G2(1),
G3(1), G4(1)} is the highest level sub-graph and is also the MSRD of dataset 1.
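Stage one of the construction described above — one edge from each vertex to its MSV, then reading off the 1st level sub-graphs as connected components — can be sketched as follows; the dataset and helper names are illustrative, not from the paper:

```python
import math

def msv_edges(points):
    """One edge per vertex, to its most similar vertex (nearest neighbour
    under Euclidean dissimilarity), per Definition 1."""
    edges = set()
    for i, p in enumerate(points):
        h = min((j for j in range(len(points)) if j != i),
                key=lambda j: math.dist(p, points[j]))
        edges.add(frozenset((i, h)))
    return edges

def components(n, edges):
    """Connected components of the MSV edges = the 1st level sub-graphs."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for e in edges:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())

# Two well-separated toy groups -> two 1st level sub-graphs.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
subgraphs = components(len(pts), msv_edges(pts))
print(len(subgraphs))  # 2
```

Higher levels follow the same pattern, with the nearest pair of vertexes *between* sub-graphs supplying the merging edges.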
data, the shape of the clusters, the uniformity of the partition, the size or volume
of the clusters, etc. Supposing the weight of an edge eij is wij, we propose a package
of methods for weighting the edges of the MSRD:
The d-MSRD. wij = dij, where dij is the dissimilarity between xi and xj (or between
vi and vj).
The l-MSRD. wij = lij, where lij is the level of eij.
The r-MSRD. wij = rij.
The fragments containing vi↔vj in an MSRD can be one of the three cases in Fig. 2. In
each case, the rij of eij is calculated according to formula (1). Case (a) is the
normal one, and formula (1) can be used directly. In case (b), take djk = ∞; in case
(c), take djk = (djm + djl)/2; rij is then also calculated according to formula (1).
The u-MSRD. wij = uij. The uniformity uij of eij is calculated according to formula
(2), where N′ij and N″ij are the numbers of data in the two clusters respectively
generated if eij is cut off.
The double factor methods. For the ld-MSRD, ud-MSRD, lr-MSRD and ur-MSRD, the wij is
defined by wij = lijdij, wij = uijdij, wij = lijrij and wij = uijrij respectively.
The triple factor methods. For the uld-MSRD and ulr-MSRD, the wij is defined by
wij = uijlijdij and wij = uijlijrij respectively.
Obviously, partitions with different features can be obtained by selecting different
methods. For example, the d-method generates partitions in which the gaps between the
clusters are as big as possible, so it favors partitions containing isolated nodes;
see Fig. 3(a). It is interesting to notice that the partition generated by the
d-method is exactly the same as that generated by the single-linkage method of
hierarchical clustering. For instance, the partition in Fig. 3(a) is also generated
by the hierarchical single-linkage method.
The l-method cuts the high level edges off to prevent the lower level MSRGs from
dividing. Therefore, the partition produced by the l-method is more uniform than that
produced by the d-method; see Fig. 3(b). The u-method can generate partitions with
high uniformity; the partition in Fig. 3(c) demonstrates this. The r-method can
guarantee that the data points in a manifold fall in the same cluster; see Fig. 3(b).
Fig. 3. Partitions generated (a) by the d-method; (b) by the l-, r-, lr-, ur- and
ulr-methods; (c) by the u-method; (d) by the ud-method
The double or triple factor methods take multiple factors into account, so they can
generate partitions with multiple features. Fig. 3(d) is the partition based on the
ud-method; it is more uniform than (a), but the gaps between some clusters are
narrower than those of (a).
These arguments support that the MSRD method is a rich clustering method, for it can
generate a variety of partitions with different features, some of which cannot be
generated by K-means or hierarchical clustering, such as the partition in Fig. 3(b).
That the partition in Fig. 3(b) is generated by the l-, r-, u-, lr-, ur- and
ulr-methods of the MSRD does not mean that these methods always generate the same
partitions for any dataset. Whether different methods generate different partitions
varies with the characteristics of the datasets.
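The cutting stage can be sketched as below. This is our own minimal interpretation — keep the lightest edges of the weighted MSRD and drop the heaviest until the desired number of clusters remains — not the paper's exact procedure, and the graph representation and names are ours:

```python
def cut_into_clusters(n, weighted_edges, k):
    """weighted_edges: list of (weight, i, j) over n vertexes.
    Keeping the n-k lightest edges of a tree-like diagram leaves k
    connected components; return a cluster label per vertex."""
    kept = sorted(weighted_edges)[: n - k]
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for _, i, j in kept:
        parent[find(i)] = find(j)
    labels = {}
    return [labels.setdefault(find(v), len(labels)) for v in range(n)]

# A 5-vertex diagram whose weights could come from any of the d-, l-,
# u-, r- or combined methods; edge (2, 3) is conspicuously heavy.
edges = [(1.0, 0, 1), (1.2, 1, 2), (9.0, 2, 3), (1.1, 3, 4)]
print(cut_into_clusters(5, edges, 2))  # [0, 0, 0, 1, 1]
```

Swapping in a different weighting (dij, lij, uij, rij or their products) changes which edges are cut and hence the character of the partition, which is the point of the method package above.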
Fig. 4. (a) and (b) the coordinate graphs, with d(1,2)<d(1,13) and d(1,2)>d(1,13)
respectively; (c), (d), (e), (f), (g) the partitions of dataset 3 by ward; by
complete, average and centroid; by single; by weighted and median; and by k-means,
respectively; (h) the partition of dataset 3 by the r-MSRD method
Fig. 5. The partitions of dataset 4 generated by k-means (a), by single linkage of
hierarchical clustering (b), by the other six methods of hierarchical clustering (c),
and by the r-MSRD (d)
Fig. 6. The coordinate graph (a), and the four-cluster partitions of dataset 5 by
k-means (b), single linkage of hierarchical clustering (c), ward of hierarchical
clustering (d), and the ulr-MSRD (e)
generated by the ulr-method of the MSRD is better than that generated by K-means and
ward of hierarchical clustering. However, the performances of the ulr-MSRD, K-means
and hierarchical ward in clustering the Wine dataset are not much different.
6 Conclusion
In this paper we developed a package of clustering methods based on the MSRD and
applied them to some real and synthesized datasets. The performance verified the
validity of these methods and demonstrated that clustering based on the MSRD can
detect both spherical and non-spherical clusters simultaneously, and has the capacity
to distinguish clusters of different sizes, with different densities, having
different shapes, or belonging to different manifolds. Therefore, it is more
universal and richer than K-means and hierarchical clustering.
References
1. Jain, A.K.: Data clustering: 50 years beyond K-means. Pattern Recognition Letters 31,
651–666 (2010)
2. Fred, A., Jain, A.K.: Data clustering using evidence accumulation. In: Proc. Internat. Conf.
Pattern Recognition, ICPR (2002)
3. Chapelle, O., Schölkopf, B., Zien, A. (eds.): Semi-Supervised Learning. MIT Press,
Cambridge (2006)
4. Bekkerman, R., El-Yaniv, R., McCallum, A.: Multi-way distributional clustering via
pairwise interactions. In: Proc. 22nd Internat. Conf. Machine Learning, pp. 41–48 (2005)
5. Filippone, M., Camastrab, F., Masulli, F., Rovetta, S.: A survey of kernel and spectral
methods for clustering. Pattern Recognition 41, 176–190 (2008)
6. Yu, Y., Bai, Y.S., Xu, W.H., et al.: A Clustering Method Based on the Most Similar
Relation Diagram of Datasets. In: 2010 IEEE International Conference on Granular
Computing, San Jose, California, August 14-16, pp. 598–603 (2010)
Study of Infrared Image Enhancement
Algorithm in Front End
1 Introduction
The original infrared image collected from an infrared detector has several
characteristic features: non-operating pixels, low contrast, gray levels concentrated
in a small grayscale range of the histogram, and a target contrast too low to reveal
the image details. In order to see the infrared image details and improve image
quality, enhancing the original infrared image in the front end is necessary.
The remainder of the paper is organized as follows: in section 2, traditional
infrared image enhancement methods are briefly exposed. In section 3, the proposed
method is introduced. In section 4, we present the improved algorithm. In section 5,
we give details of the experiment results. In section 6, we present our conclusions.
2 Traditional Algorithm
Traditional infrared image enhancement usually uses histogram equalization (HE).
This method smoothes the histogram through a specific gray-level transforming
function and then revises the original image according to the new histogram. The
essence of HE is to expand those pixels that have large gray quantity probabilities
into neighboring gray-level pixels and to compress those pixels whose gray quantity
probabilities are small [1].
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 416–422, 2011.
© Springer-Verlag Berlin Heidelberg 2011
According to the features of the infrared image — the target in the original image
occupies only a small grayscale range — when HE is used in infrared image processing,
the image quality worsens considerably and the target cannot be recognized from the
background, so the traditional HE method does not suit these conditions.
Linear enhancement algorithms can expand the target pixels in the original infrared
image and compress, or even remove, the noise; their key is the choice of gray-level
transforming thresholds. Examples include obtaining the 1/32 and 31/32 mean values of
the histogram as the transform thresholds using histogram statistics [2]; intensifying
image contrast using segmented histogram nonlinear extension [3]; an infrared image
contrast enhancement algorithm based on the discrete stationary wavelet transform
(DSWT) and a non-linear gain operator [4]; and a self-adaptive contrast enhancement
algorithm based on a conventional piecewise linear grey transformation [5]. These
methods can achieve good performance in back-end processing of the original infrared
image, but they do not suit front-end enhancement.
Xout = 0,  Xin < X1
Xout = (Ymax − Ymin)(Xin − X1)/(X2 − X1),  X1 ≤ Xin ≤ X2    (1)
Xout = Ymax,  Xin > X2
where Xout and Xin are the output and input of the function, Ymax and Ymin are the
largest and smallest grayscales of the result image, and X2 and X1 are the max and min
transform thresholds chosen from the histogram of the original infrared image;
choosing X2 and X1 is the algorithm proposed in this paper. The linear enhancement
function is illustrated in Fig. 1.
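A minimal sketch of formula (1), assuming the thresholds X1 and X2 are already known (NumPy is used for the per-pixel arithmetic; the toy input values are ours):

```python
import numpy as np

def linear_enhance(img, x1, x2, y_min=0, y_max=255):
    """Piecewise linear stretch of formula (1): below x1 -> 0, above x2
    -> y_max, and [x1, x2] stretched linearly onto [y_min, y_max]."""
    img = img.astype(np.float64)
    out = (img - x1) * (y_max - y_min) / (x2 - x1) + y_min
    out[img < x1] = 0
    out[img > x2] = y_max
    return out.astype(np.uint8)

raw = np.array([[100, 120], [140, 160]], dtype=np.uint16)  # toy raw data
enhanced = linear_enhance(raw, 110, 150)
print(enhanced)  # [[0 63] [191 255]]
```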
If we suppose there are no non-operating pixels in the infrared detector, the
histogram of the original infrared image looks like Fig. 2. Here, we simply take the
beginning and the end of the aiguille as X1 and X2 in formula (1) to enhance the
image, and the result will be good. But, in reality, an infrared detector has some
non-operating pixels; the UL04171 has about 2%. The real-time histogram of the
original infrared image consists of a main aiguille and some small aiguilles. The
main aiguille comes from the majority of normal pixels; the small aiguilles result
from non-operating pixels and noise.
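The threshold-selection idea — take the beginning and end of the main aiguille while ignoring the small aiguilles caused by non-operating pixels — can be sketched as below; the bin-count cutoff is our own assumption, not the paper's exact rule:

```python
import numpy as np

def thresholds_from_histogram(img, bins=16384, min_fraction=0.001):
    """Pick X1, X2 as the first and last histogram bins that hold at
    least min_fraction of all pixels; sparsely populated bins from
    non-operating pixels and noise are thereby ignored."""
    hist, edges = np.histogram(img, bins=bins, range=(0, bins))
    significant = np.nonzero(hist >= min_fraction * img.size)[0]
    x1 = edges[significant[0]]
    x2 = edges[significant[-1] + 1]
    return x1, x2

# Synthetic raw frame: normal pixels in a narrow band, ~0.1% bad pixels
# scattered over the full range (detector size and values are ours).
rng = np.random.default_rng(0)
frame = rng.integers(8000, 8200, size=(240, 320))
frame.flat[:100] = rng.integers(0, 16384, size=100)
x1, x2 = thresholds_from_histogram(frame)
print(x1, x2)  # brackets the main aiguille near 8000-8200
```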
418 R. Zheng, J. Hong, and Q. Liao
Fig. 1. The linear enhancement function, mapping the input range [X1, X2] onto the
output range 0-255
Fig. 2. Histogram (number of pixels versus grayscale) of the data of the original
infrared image
When the original infrared image is enhanced by the method above, the grayscale of
the target in the result image can reach the maximum, but the sky then has the same
grayscale as the target, which leads to a boundary between the target and the sky. If
we use an uneven distribution of the grayscale instead — for example, in the
processing step, distribute all grayscales to the target and little or no grayscale
to the sky — there is no such boundary in the result image, and the quality of the
result image is better than with common linear enhancement.
5 Experimental Results
Fig. 4 shows the original infrared image transformed to grayscale 0-255 and the
histogram of the original infrared image. From this figure, we cannot see any image
information if the original infrared image is not enhanced. On the histogram of the
original infrared image, the majority of pixels are located in a main aiguille.
420 R. Zheng, J. Hong, and Q. Liao
Fig. 5 is the result image enhanced by the proposed method. From this figure, we
can see the image details appearing.
Fig. 4. Original infrared image and the histogram of original infrared image
Fig. 6 is the result image enhanced by the linear enhancement method when the sky and
earth appear in one infrared image at the same time; we can see that the contrast of
the earth in the result image is low.
Fig. 7 shows the result images enhanced by the linear enhancement algorithm and its
improved algorithm. Fig. 7(a) is the result image enhanced by the common linear
algorithm; the image is too white and the earth contrast is low. Fig. 7(b) is
enhanced by the improved linear algorithm with the earth and sky both distributed
over the same grayscale 0-255; compared with (a), the image details are increased
greatly and the arrangement of the image is better. Fig. 7(c) is enhanced by the
improved linear algorithm with the earth distributed over grayscale 0 to 255 but the
sky distributed no grayscale (0); the image details are as good as in (b), but there
are no pitting and no boundary between the target and sky, so the visual effect is
improved. Fig. 7(d) is enhanced by the improved linear algorithm with the earth
distributed over grayscale 0 to 255 and the sky distributed over the grayscale range
0 to 100; its details are the best of these four result images, with no pitting, no
boundary between the target and sky, and the sky information kept.
Fig. 7. The result images (a)-(d) enhanced by the linear enhancement algorithm and
its improved algorithm
422 R. Zheng, J. Hong, and Q. Liao
6 Conclusion
This paper proposes an algorithm for choosing the transform thresholds for linear
enhancement based on the features of the infrared detector. The algorithm is easily
implementable and fast, and the experimental results show that image details can be
revealed effectively by it. Since the grayscale of the target is not large enough in
the result image enhanced by the linear method when the sky and earth appear in one
infrared image at the same time, which leads to low contrast of the result image, we
also propose an improved algorithm that enhances the original infrared image while
both keeping the background information and enlarging the grayscale of the target;
the result images show that this improved algorithm achieves good performance.
Acknowledgment. This work was supported by the Major Science and Technology
special project of Fujian Province, P.R.China (No.2009HZ0003-1).
References
1. Zhang, Y.: Image Processing, 2nd edn. Tsinghua University Press, Beijing (2006)
2. Wang, Y.H., Wang, D., Hu, Y.M., Zhang, T.: The FPGA-based Real-Time Image Linear
Contrast Enhancement Algorithm. Microcomputer Information, China (2007)
3. Wu, Z., Wang, Y.: An Image Enhancement Algorithm Based on Histogram Nonlinear
Transform. Acta Photonica Sinica, China (2010)
4. Zhang, C.-J., Yang, F., Wang, X.-D., Zhang, H.-R.: An Efficient Non-Linear Algorithm for
Contrast Enhancement of Infrared Image. In: Proceedings of the Fourth International
Conference on Machine Learning and Cybernetics, Guangzhou, August 18-21 (2005)
5. Li, Y., Zhou, J., Ding, W.: A Novel Contrast Enhancement Algorithm for Infrared Laser
Images. IEEE, Los Alamitos (2009)
6. Chen, Z., Ji, S.: Enhancement Algorithm of Infrared Images based on Otsu and Plateau
Histogram Equalization. Laser & Infrared, China (2010)
7. UL 04 17 1 NTC05011-1 Issue 1 640 x 480 LWIR 29 11 05.pdf
Influence of Milling Conditions on the Surface Quality in
High-Speed Milling of Titanium Alloy
Abstract. The study was focused on the machined surface quality of titanium
alloy under different milling conditions, including milling speed, tool rake angle
and cooling method. The surface quality was investigated in terms of residual
stress and surface roughness. The results show that the compressive residual
stresses are generated on the machined surface under all milling conditions. The
compressive residual stresses in both directions decreased and the surface
roughness increased with the milling speed increasing. The compressive residual
stresses increased with the tool rake angle increasing. The lowest surface
roughness was obtained when the rake angle was 8°. Under the condition of dry
milling, the highest compressive residual stresses were obtained, approximately
350 MPa. The highest surface roughness was obtained when the oil mist coolant
was used. Water cooling was the best cooling method with an uncoated cemented
carbide tool in high-speed milling of titanium alloy.
1 Introduction
Titanium alloys have been widely used in aerospace and other industrial applications
because of their elevated mechanical resistance at low density and their excellent
corrosion resistance, even at high temperatures. However, titanium alloys are
difficult to machine due to their high-temperature strength, relatively low modulus
of elasticity, low thermal conductivity and high chemical reactivity [1-2].
Conventional milling speeds range from 30 to 100 m min-1 when sintered carbide tools
are used in the machining of titanium alloys, resulting in low productivity [3].
High speed machining is widely appreciated in industry for its high material removal
rate, low milling force, high machining accuracy and high surface quality. All these
advantages have led to the application of high-speed machining technology in the
machining of titanium alloys [4].
Many researchers have studied the surface quality of milled titanium alloy.
Che-Haron [5] carried out research on the surface quality of rough turning of
titanium alloy Ti6Al4V with uncoated carbide. The results showed that the machined
surface experienced microstructure alteration and an increase in microhardness in the
top white layer of the surface. Machined surfaces showed severe plastic deformation
and hardening after prolonged machining time with worn tools, especially when
machining under dry milling conditions.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 423–429, 2011.
© Springer-Verlag Berlin Heidelberg 2011
424 X. Shen, L. Zhang, and C. Ren
A metallurgical analysis of chips obtained from high-speed machining of titanium
alloy Ti6Al4V was performed by Puerta Velásquez [6]. The
titanium phase was observed in all chips for all tested milling speeds. No evidence
of phase transformation was found in the shear bands. Ezugwu [7] investigated the
effect of machining parameters on the residual stress of titanium alloy IMI-834 in
milling. The residual stresses were found to be compressive in nature at milling
speeds of 11~56 m min-1, and a linear relationship could not explain the variation of
the residual stresses with respect to the milling parameters. Ge [8] reported
experimental evidence that the surface roughness was less than Ra 0.44 μm when
milling gamma titanium aluminide alloy at speeds of 60~240 m min-1. The workpiece
surface had a maximum microhardness of approximately 600 HV (0.100), and the depth of
the maximum hardened layer was confined to 180 μm below the surface. Che-Haron's
study showed that the surface roughness values recorded were typically less than
1.5 μm Ra when milling gamma titanium aluminide alloy at high speed. Microhardness
evaluation of the subsurface indicated a hardened layer to a depth of 300 μm. Initial
residual stress analysis showed that the surface contained compressive stresses of
more than 500 MPa. Tool flank wear and milling speed have the greatest effect on
residual stress. Ezugwu investigated the chip formation mechanism and surface quality
as well as the milling process characteristics of titanium alloy in high-speed
milling. A high milling speed with more milling teeth is beneficial for reducing the
milling forces, enlarging the machining stability region and depressing the
temperature increase, and improves fatigue resistance as well as surface roughness.
The aim of this paper is to investigate the influence of milling speed, tool rake
angle and cooling method on surface quality in high-speed milling of titanium alloy.
The paper provides an experimental and theoretical basis for optimizing high-speed
milling parameters and controlling the surface quality of titanium alloy.
Consequently, it can lead to machining procedures that improve component fatigue life
and machinability.
2 Experimental Procedure
2.1 Experimental Material
The workpiece material used in all the experiments was the alpha-beta titanium alloy
TC11. The nominal chemical composition of the workpiece material conforms to the
following specification (wt.%): 6.42 Al; 3.29 Mo; 1.79 Zr; 0.23 Si; 0.025 C; 0.096 O;
0.003 H; 0.077 Fe; 0.004 N; balance Ti. The mechanical properties of the tested
material at room temperature and high temperature are shown in Table 1. The workpiece
dimensions were 80 mm × 50 mm × 30 mm.
Table 1. Mechanical properties of TC11 at room temperature and high temperature

Room temperature mechanical properties | | High temperature mechanical properties |
Tensile strength σb/MPa | 1128 | Test temperature/°C | 500
Yield strength σ0.2/MPa | 1030 | Tensile strength σb/MPa | 765
Elongation δ/% | 12 | Rupture strength σ100/MPa | 667
Shrinkage ψ/% | 35 | |
Impact value αk/J·cm-2 | 44.1 | |
Influence of Milling Conditions on the Surface Quality in High-Speed Milling 425
All the machining experiments were carried out on a three-axis Mikron HSM 800 high
speed milling center with an iTNC 530 controller. The milling tools used were K44
uncoated cemented carbide milling cutters with four teeth. The diameter of the cutter
was 10 mm, and the helix angle and relief angle were 30° and 10°, respectively. The
milling mode was down milling.
Three different milling speeds were selected: 376.8 m/min, 471 m/min and 565.2 m/min;
the feed per tooth, milling depth and milling width were maintained constant at
0.05 mm/tooth, 0.2 mm and 10 mm, respectively. Meanwhile the tool rake angle used was
14°, and water coolant was used.
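The quoted milling speeds are consistent with round spindle-speed settings for the 10 mm cutter via the standard relation vc = πDn/1000 (vc in m/min, D in mm, n in rpm) — a check we add here, not a statement from the paper:

```python
import math

def spindle_rpm(vc_m_per_min, diameter_mm):
    """Spindle speed n from cutting speed vc: vc = pi * D * n / 1000."""
    return 1000.0 * vc_m_per_min / (math.pi * diameter_mm)

for vc in (376.8, 471.0, 565.2):
    # close to 12000, 15000 and 18000 rpm respectively
    # (exact multiples if pi is taken as 3.14)
    print(vc, round(spindle_rpm(vc, 10.0)))
```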
Three tool rake angles, 4°, 8° and 14°, were used to investigate the influence of
rake angle on surface quality. Meanwhile the milling speed, feed per tooth, milling
depth and milling width were maintained constant at 251 m/min, 0.05 mm/tooth, 0.2 mm
and 10 mm, respectively. The coolant used was oil mist.
In addition, three different cooling methods including dry, water and oil mist were
selected.
After each processing step, the surface residual stress was measured by X-ray
diffraction. The residual stress was measured at three locations in both the feed and
stepover directions, and the average value was calculated. The surface roughness of
the machined surface after each test was measured using a contact-type profilometer
(Taylor Hobson Form Talysurf 120). The evaluation length was set at 5.6 mm and the
sampling length was fixed at 0.8 mm. The instrument was calibrated against a standard
calibration block prior to the measurements. The surface roughness was measured by
positioning the stylus in the feed direction. Readings were taken at five locations
on the face of the machined surface, repeated twice at each point, and the average
value was calculated.
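The Ra definition and the averaging scheme described above can be sketched as follows. This is an illustrative computation, not the Talysurf's internal algorithm, and the reading values are hypothetical:

```python
import numpy as np

def roughness_ra(profile_um):
    """Arithmetic mean roughness Ra: mean absolute deviation of the
    profile from its mean line."""
    z = np.asarray(profile_um, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Five locations, each measured twice, then averaged (hypothetical Ra values in um)
readings = np.array([[0.81, 0.83], [0.79, 0.80], [0.84, 0.82],
                     [0.80, 0.81], [0.78, 0.79]])
mean_ra = readings.mean()
```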
Fig. 1 shows the influence of milling speed on surface residual stresses. All surface
residual stresses are compressive, and increasing the milling speed tends to decrease
the magnitude of the compressive residual stresses on the surface in both directions;
the trend of the residual stresses is from compressive toward tensile as the milling
speed increases. This trend is expected because, as milling speed increases, machining
becomes more adiabatic, so the temperature rise softens the metal and thus reduces the
milling forces. Fig. 2 shows the influence of milling speed on surface roughness. An
increase of milling speed causes an increase of surface roughness, probably due to
rapid tool wear at the milling edge closer to the nose.
426 X. Shen, L. Zhang, and C. Ren
Fig. 1. Influence of milling speed vc (350-600 m/min) on surface residual stress (MPa) in the x and y directions
Fig. 2. Influence of milling speed on surface roughness Ra (µm)
Fig. 3 shows the influence of rake angle on surface residual stresses. All the surface
residual stresses are compressive, and the magnitude of the compressive stresses on
the surface in both directions increases with increasing rake angle. Fig. 4 shows the
influence of rake angle on surface roughness. The surface roughness decreases slightly
when the rake angle is changed from 4° to 8°; however, it increases remarkably when
the rake angle is increased from 8° to 14°.
Fig. 3. Influence of rake angle (4°-14°) on surface residual stress (MPa) in the x and y directions
Fig. 4. Influence of rake angle on surface roughness Ra (µm)
[Figures: surface residual stress (MPa, x and y directions) and surface roughness Ra (µm) for the dry, water, and oil mist cooling methods]
4 Conclusion
An experimental investigation of the influence of milling conditions, namely milling
speed, tool rake angle and cooling method, on the surface quality in terms of residual
stress and surface roughness in high-speed milling of a titanium alloy with an uncoated
cemented carbide tool was carried out. The results show that the milling speed, rake
angle and cooling method have an important influence on residual stress and surface
roughness. Compressive residual stresses are higher in the feed direction than in the
stepover direction. The best surface quality is obtained when the rake angle is 8°,
and water cooling is the best cooling method.
References
[1] Shen, X.L., Zhang, L.X., Ren, C.G., Zhou, Z.X.: Research on Design and Application of
Control System in Machine Tool Modification. Adv. Mater. Res. 97-101, 2053–2057 (2010)
[2] Shen, X.L., Luo, Y.X., Zhang, L.X., Long, H.: Natural frequency computation method of
nonlocal elastic beam. Adv. Mater. Res. 156-157, 1582–1585 (2011)
[3] Su, Y., He, N., Li, L., et al.: An experimental investigation of effects of cooling/lubrication
conditions on tool wear in high-speed end milling of Ti-6Al-4V. Wear 261, 760–766 (2006)
[4] Shen, X.L., Zhang, L.X., Long, H., Zhou, Z.X.: Analysis and Experimental Investigation of
Chatter Suppression in High-speed Cylindrical Grinding. Appl. Mech. Mater. 34-35,
1936–1940 (2010)
[5] Che-Haron, C.H., Jawaid, A.: The effect of machining on surface integrity of titanium alloy
Ti-6% Al-4% V. J. Mater. Proc. Technol. 166, 188–192 (2005)
[6] Puerta Velásquez, J.D., Bolle, B., Chevrier, P., et al.: Metallurgical study on chips obtained
by high speed machining of a Ti-6 wt.% Al-4 wt.% V alloy. Mater. Sci. Eng. A 452-453,
469–474 (2007)
[7] Ezugwu, E.O., Wang, Z.M.: Titanium alloys and their machinability: a review. J. Mater.
Proc. Technol. 68, 262–274 (1997)
[8] Ge, Y.F., Fu, Y.C., Xu, J.H.: Experimental Study on High Speed Milling of γ-TiAl Alloy.
Key Eng. Mater. 339, 6–10 (2007)
Molecular Dynamics Simulation Study on the
Microscopic Structure and the Diffusion Behavior of
Methanol in Confined Carbon Nanotubes
1 Introduction
In recent years, molecular dynamics (MD) simulation has played an important role in
studying the properties of confined molecules [1,2]. Water molecules confined in
carbon nanotubes (CNTs) have become a focus of research [3,4]. In particular,
hydrogen bonding has a crucial influence on the physical and chemical properties of
water molecules [5,6]. Like water molecules, methanol molecules can form hydrogen
bonds. Yu et al. [7] investigated the microstructural and dynamic properties of
methanol by MD simulation, and showed that the average distance between molecules
and the diffusion coefficients increase with temperature, and that the randomness of
the molecular distribution in the carbon nanotube (CNT) also increases. Sun et al. [8]
also studied the microstructure of methanol. Their results show that the average
number of hydrogen bonds formed by each methanol molecule is approximately 2, which
suggests that there is more than one preferred orientation of methanol molecules with
respect to each other. In this paper, we explore the dependence of the microscopic
structure and the diffusion behavior of methanol confined in CNTs on the tube diameter.
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 430436, 2011.
Springer-Verlag Berlin Heidelberg 2011
2 Methods
We construct nanotubes of different diameters by rolling a graphite sheet in the
arm-chair mode [9,10]. The (8, 8), (9, 9), and (10, 10) single-walled CNTs were
chosen; their diameters are 1.085 nm, 1.22 nm, and 1.356 nm, respectively, according
to the formula d = a√(3(m² + mn + n²))/π, where a is the C-C distance, fixed at
0.142 nm. The length of each CNT was 7.739 nm throughout our simulations, and each
tube contained 16 methanol molecules.
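As a quick check of the diameter formula, the following sketch evaluates d for the three arm-chair tubes used here (the function name is ours):

```python
from math import pi, sqrt

A_CC = 0.142  # C-C bond length in nm (value fixed in the text)

def cnt_diameter(n, m, a=A_CC):
    """Diameter of an (n, m) CNT: d = a * sqrt(3 * (n^2 + n*m + m^2)) / pi (nm)."""
    return a * sqrt(3 * (n ** 2 + n * m + m ** 2)) / pi

for n in (8, 9, 10):
    print(f"({n},{n}) CNT: d = {cnt_diameter(n, n):.3f} nm")
```

This reproduces the 1.085 nm, 1.22 nm, and 1.356 nm values quoted above.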
A considerable number of potential models have been developed to describe the
interactions of methanol molecules, namely, TIPS [11], OPLS [12], the Haughney rigid
three-site model [13], a flexible model [14], and a polarizable model [15]. The OPLS
model can accurately predict the properties of the methanol molecule at ambient
temperature.
The intermolecular interaction between a pair of methanol molecules is given by
u(r_i, r_j) = q_i q_j / r_ij + 4ε_ij [ (σ_ij/r_ij)¹² − (σ_ij/r_ij)⁶ ]   (1)
Here i and j label the sites in the methanol molecules; ε_ij and σ_ij are the
Lennard-Jones energy and radius, respectively; r_ij is the distance between sites i
and j; and q_i is the charge at site i. The Lennard-Jones energy and radius
parameters of the cross-terms are calculated on the basis of the Lorentz-Berthelot
combining rules: σ_ij = (σ_i + σ_j)/2 and ε_ij = (ε_i ε_j)^(1/2). The force-field
parameters are listed in Table 1. In the model of methanol the O-H and O-Me
distances are 0.912×10⁻¹⁰ m and 1.430×10⁻¹⁰ m, respectively, and the H-O-Me angle
is 108.5°.
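The site-site interaction of eq. (1) with the Lorentz-Berthelot combining rules can be sketched as follows. The function name and the Coulomb prefactor (in kJ·mol⁻¹·nm·e⁻², distances in nm) are our illustrative choices, not taken from the paper:

```python
from math import sqrt

# Coulomb prefactor in kJ mol^-1 nm e^-2 (an assumed unit convention)
KE = 138.935

def lj_coulomb(r, qi, qj, eps_i, eps_j, sig_i, sig_j):
    """Site-site interaction of eq. (1): Coulomb term plus Lennard-Jones,
    with the Lorentz-Berthelot combining rules applied to the cross-terms."""
    sig = 0.5 * (sig_i + sig_j)    # sigma_ij = (sigma_i + sigma_j) / 2
    eps = sqrt(eps_i * eps_j)      # epsilon_ij = sqrt(epsilon_i * epsilon_j)
    sr6 = (sig / r) ** 6
    return KE * qi * qj / r + 4.0 * eps * (sr6 ** 2 - sr6)
```

With zero charges, the function reduces to the pure Lennard-Jones term, whose minimum at r = 2^(1/6)·σ has depth −ε.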
The simulations were conducted in the canonical ensemble (NVT) at a temperature of
298 K with a Nose-Hoover thermostat. The long-range electrostatic interactions were
treated with the Ewald summation method. All simulations were carried out with an
integration time step of 0.5 fs and a cutoff distance of 1.0 nm for the short-range
interactions. The various equilibrium and dynamical properties were calculated over a
period of 360 ps, after the systems were equilibrated for 140 ps.
Fig. 1. O-H radial distribution functions g(r) for d = 1.085 nm, 1.22 nm, and 1.356 nm
Fig. 2. O-O radial distribution functions g(r) for d = 1.085 nm, 1.22 nm, and 1.356 nm
As can be seen from Fig. 1 and Fig. 2, the first peaks in the O-H radial distribution
functions show a similar trend to that of the O-O radial distribution functions.
Furthermore, the height of the first peak of the radial distribution functions
increases, and that of the first minimum decreases, with larger diameters. The
positions of the first peaks remain almost unchanged by the diameter, but the
positions of the first minimum shift to the right with increasing diameter. This
result indicates that the hydrogen-bonding structure of methanol is enhanced
significantly by increasing diameter. The aggregation densities of the
oxygen-hydrogen and oxygen-oxygen bonds increase with larger diameters. The densest
regions of the O-H and O-O radial distribution functions are gathered at 0.20 nm and
0.29 nm, respectively. This suggests that the structure gradually changes from loose
disorder to a closely packed, regular arrangement. One of the most important curves
is the O-H radial distribution function. In this section the influence of the
different diameters on the hydrogen-bond structure is investigated by calculating the
average coordination numbers. We use the following geometric definition: a hydrogen
bond is formed between methanol molecules if the oxygen-oxygen distance is less than
0.35 nm, the hydrogen-oxygen distance is less than 0.245 nm, and the
oxygen-oxygen-hydrogen angle is less than 30°. The number of hydrogen bonds and the
coordination numbers are related to each other. The coordination numbers n_OH(r) are
obtained by integrating the corresponding O-H pair distribution function g_OH(r):
n_OH(r) = 4πρ ∫_0^{r_min} g_OH(r) r² dr   (2)
The diameters of the CNTs and the corresponding coordination numbers are listed in
Table 2. The tendency of the coordination numbers indicates that the axial dispersion
coefficient increases with larger diameters, and the number of hydrogen bonds also
follows this law.
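A minimal sketch of the geometric hydrogen-bond criterion and of evaluating eq. (2) by trapezoidal quadrature follows; the function names and the number density ρ in the usage are our assumptions:

```python
import numpy as np

def is_hydrogen_bond(d_oo, d_oh, angle_ooh_deg):
    """Geometric criterion from the text: O-O < 0.35 nm,
    H...O < 0.245 nm and O-O-H angle < 30 degrees."""
    return d_oo < 0.35 and d_oh < 0.245 and angle_ooh_deg < 30.0

def coordination_number(r, g_oh, rho, r_min):
    """Eq. (2): n_OH = 4*pi*rho * integral_0^{r_min} g_OH(r) r^2 dr,
    evaluated with the composite trapezoidal rule."""
    mask = r <= r_min
    rr = r[mask]
    integrand = g_oh[mask] * rr ** 2
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(rr)) / 2.0
    return 4.0 * np.pi * rho * integral
```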
Fig. 3. The radial density distributions of sites of methanol molecules in the (10, 10)
CNT obtained from MD simulation; the illustration shows the longitudinal section of
the methanol molecules in the CNT
The coordinate origin in Fig. 3 represents the central axis of the carbon nanotube.
The position of the density maximum is at a distance of 0.36 nm from the tube wall.
The confinement effect on the methanol weakens with larger CNT diameters. More and
more methanol molecules, which connect to form a ring, tend to locate near the tube
wall, while there are few molecules near the tube axis, away from the wall. In
eq. (1), the Lennard-Jones energy and radius parameters of the cross-interactions are
calculated on the basis of the Lorentz-Berthelot combining rules. Electrostatic
interactions can be ignored here on account of their low energy. Setting the
derivative of the Lennard-Jones energy to zero gives the minimum at r = 2^(1/6)σ.
With the potential model parameter of the methanol molecule, σ = 0.331 nm, this gives
a radius of about 0.37 nm, which is close to the simulated value of 0.36 nm (Fig. 3).
We can conclude that the interaction energy of a methanol molecule at a distance of
0.36 nm from the wall is lowest, and the molecules gathered here are in a relatively
stable state. The inner layer represents methanol and the outer layer indicates the
CNT, as shown in the illustration of Fig. 3. The methanol in the (8, 8) and (9, 9)
434 H. Liu et al.
CNTs gathered near the central axis of the tube due to the strength of the
Lennard-Jones energy. The confinement effect on the molecules is obvious: they appear
in a chain-shaped or linear arrangement.
As can be seen from Fig. 4, the first characteristic peak decreases monotonically
across the CH3-CNT, O-CNT, and H-CNT radial distribution functions: the first
characteristic peak of the CH3-CNT radial distribution function is the highest, and
that of the H-CNT radial distribution function is the lowest. This can probably be
attributed to the spatial restriction of the confined pipe diameter. The
characteristic peak of the (10, 10) CNT is lower than the corresponding peaks of the
(8, 8) and (9, 9) CNTs, which means that the degree of order in the former is
generally lower than in the latter; this is because the tube space increases and the
confinement effect weakens. In order to better understand the relationship between
the diffusion coefficient and the channel diameter, MD simulation was used to obtain
the diffusion coefficient by two kinds of methods.
Fig. 4. The first characteristic peaks of the CH3-CNT, O-CNT, and H-CNT radial distribution functions for the different diameters (d = 1.085 nm, 1.22 nm, 1.356 nm)
Fig. 5. The mean square displacement (MSD) of methanol molecules in the axial direction in CNTs
The diffusion coefficient can be obtained either from the axial velocity
autocorrelation function via the Green-Kubo relations or from the mean square
displacement via the Einstein expression. We use both the Einstein formula and the
Green-Kubo relations to calculate the self-diffusion coefficient. The self-diffusivity
is conveniently defined using the Einstein expression [17]
D_S = lim_{t→∞} (1/(6Nt)) Σ_{i=1}^{N} ⟨|r_i(t) − r_i(0)|²⟩   (3)
Here, r_i(t) and u_i(t) are the position and velocity of the tagged particle,
respectively. When the Green-Kubo equation is adopted, the velocity autocorrelation
function reaches zero in a short time; however, as time passes, numerical instability
makes the curve oscillate. Eq. (3) does not suffer from this problem, so in this paper it
would be best to adopt the eq. (3) to calculate the diffusion coefficients of the
methanol. In the confined tubes, methanol molecules collide with the wall, and the
rate of particle-to-particle collisions is lower than that of particle-wall
collisions. Diffusion in this regime, calculated with the following formula, is
called Knudsen diffusion:
D_K = (2/3) r (8RT/(πM))^(1/2)   (5)
where r denotes the pore radius, R the gas constant, T the temperature, and M the
molar mass of the diffusing component.
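Both routes to the diffusion coefficient can be sketched as follows; the methanol molar mass value, the pore-radius value, and the linear MSD-slope fit are our illustrative assumptions:

```python
import numpy as np

R = 8.314            # gas constant, J mol^-1 K^-1
M_METHANOL = 0.032   # molar mass of methanol in kg/mol (assumed value)

def knudsen_d(pore_radius_m, T=298.0, M=M_METHANOL):
    """Eq. (5): D_K = (2/3) * r * sqrt(8*R*T / (pi*M)), in m^2/s."""
    return (2.0 / 3.0) * pore_radius_m * np.sqrt(8.0 * R * T / (np.pi * M))

def einstein_d(t, msd, dim=1):
    """Eq. (3) via the long-time MSD slope: D = slope / (2 * dim),
    with dim = 1 for diffusion along the tube axis."""
    slope, _ = np.polyfit(t, msd, 1)
    return slope / (2.0 * dim)

# Knudsen diffusivity for roughly the (10, 10) tube radius (~0.678 nm)
dk = knudsen_d(0.678e-9)
```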
Table 3. Diffusion coefficients of methanol calculated separately using the Einstein
method and the Knudsen equation
4 Conclusions
We have carried out extensive research on the diffusion of methanol molecules in CNTs
by molecular dynamics simulation, offering a microscopic description of the relevant
static and dynamic properties. It is found that the confinement effect plays a
critical role in the RDFs and the diffusion coefficient of methanol. An enhancement
of the number of hydrogen bonds and of the coordination numbers of the methanol
molecules with increasing diameter has been deduced from the RDFs; the coordination
numbers are shown in Table 2. It is also seen that the diffusion of the methanol
molecules located in the carbon nanotubes shows a strong dependence on the pipe
diameter and is of the Knudsen type.
References
1. Zhang, X.R., Wang, W.C.: Journal of Chemistry 60, 1396 (2002)
2. Lv, L.H., Wang, Q., Li, Y.C.: Journal of Chemistry 61, 1232 (2003)
3. Hummer, G., Rasaiah, J.C., Noworyta, J.P.: Nature 414, 188 (2001)
4. Wang, J., Zhu, Y., Zhou, J., Lu, X.H.: Journal of Chemistry 61, 1891 (2003)
5. Netz, P.A., Starr, F.W., Stanley, H.E., et al.: Static and dynamic properties of stretched
water. J. Chem. Phys. 115, 344–348 (2001)
6. Netz, P.A., Starr, F.W., Stanley, H.E., et al.: Relation between structural and dynamical
anomalies in supercooled water. Physica A 314, 470–476 (2002)
7. Yu, L., Tang, Y., Liu, J., Liu, J.: Molecular Dynamics Simulation of Liquid Methanol.
Chemical Industry and Engineering 7(26), 338–341 (2009)
8. Sun, W., Chen, Z., Huang, S.Y.: Liquid methanol microstructure of molecular dynamics
simulation. 11(33), 51–53 (2005)
9. Ayappa, K.G.: Langmuir 14, 880 (1998)
10. Zhang, F.J.: Chem. Phys. 111, 9082 (1999)
11. Jorgensen, W.L.: Microstructure and Diffusive Properties of Liquid Methanol. J. Am.
Chem. Soc. 102, 543–549 (1980)
12. Jorgensen, W.L.: Optimized intermolecular potential functions for liquid alcohols. J. Phys.
Chem. 90, 1276–1284 (1986)
13. Haughney, M., Ferrario, M., McDonald, I.R.: Molecular dynamics simulation of liquid
methanol. J. Phys. Chem. 91, 4934–4940 (1987)
14. Haughney, M., Ferrario, M., McDonald, I.R.: Molecular dynamics simulation of liquid
methanol with a flexible three-site model. J. Phys. Chem. 91, 4334–4341 (1987)
15. Skaf, M.S., Fonseca, T., Ladanyi, B.M.: Wave vector dependent dielectric relaxation in
hydrogen-bonding liquids: a molecular dynamics study of methanol. J. Chem. Phys. 98,
8929–8945 (1993)
16. Gordillo, M.C., Marti, J.: Chem. Phys. Lett. 329, 341 (2000)
17. Allen, M.P., Tildesley, D.J.: Computer Simulation of Liquids. Oxford University Press,
Oxford (1987)
Spoken Emotion Recognition Using Radial Basis
Function Neural Network
1 Introduction
Affective computing [1], which aims at understanding and modeling human emotions,
is currently a very active topic within the engineering community. Speech is one of
the most powerful, natural and immediate means for human beings to communicate
their emotions, and is thus a main vehicle of human emotion expression. Recognizing
human emotions from speech signals, called spoken emotion recognition, has attracted
extensive research interest within the artificial intelligence field due to its
important applications to human-computer interaction [2], call centers [3], and so on.
A typical spoken emotion recognition system comprises two parts: feature extraction
and emotion classification. Feature extraction involves extracting the relevant
features from speech signals with respect to emotions. Emotion classification maps
feature vectors onto emotion classes through a classifier learned from data examples.
After feature extraction, the accuracy of emotion recognition relies heavily on the
use of a good pattern classifier. So far, widely used classification methods such as
linear discriminant classifiers (LDC), K-nearest-neighbor (KNN), and C4.5 decision
trees have been applied to spoken emotion recognition [4-7]. The radial basis
function neural network (RBFNN) [8, 9], as a representative neural network, has
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 437442, 2011.
Springer-Verlag Berlin Heidelberg 2011
438 S. Zhang, X. Zhao, and B. Lei
become a well-known method for classification, since its main advantages are
computational simplicity, support by well-developed mathematical theory, and robust
generalization powerful enough for real-time, real-life tasks. Motivated by the
little attention paid so far to the performance of RBFNN on the spoken emotion
recognition task, in this study we employ RBFNN to conduct emotion recognition
experiments on Chinese speech.
This paper is structured as follows. The basic principle of RBFNN for
classification is reviewed briefly in Section 2. The emotional speech corpus and
feature extraction are described in Section 3. Section 4 analyzes the experiment
results. Finally, the conclusions are given in Section 5.
The Gaussian radial basis function used for the hidden layer is defined as
y_i = exp(−‖x − c_i‖² / (2σ_i²)) for i = 1, 2, …, n, and y_0 = 1 for the bias unit i = 0   (1)
where c_i and σ_i represent the center and the width of the neuron, respectively, and
‖·‖ denotes the Euclidean distance. The weight vector between the input layer and the
i-th hidden layer neuron corresponds to the center c_i. The closer x is to c_i, the
higher the value the Gaussian function produces.
The output layer consists of m neurons corresponding to the possible classes of the
problem. Each output layer neuron is fully connected to the hidden layer and
computes a linear weighted sum of the outputs of the hidden neurons as follows:
z_j = Σ_{i=0}^{n} w_ij y_i,  j = 1, 2, …, m   (2)
where wij is the weight between the i -th hidden layer neuron and the j -th output
layer neuron.
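Eqs. (1) and (2) together define the RBFNN forward pass, which can be sketched as follows; the toy network sizes are hypothetical:

```python
import numpy as np

def rbfnn_forward(x, centers, widths, W):
    """Forward pass of eqs. (1)-(2): Gaussian hidden units plus a
    constant bias unit y_0 = 1, then a linear output layer.
    centers: (n, d); widths: (n,); W: (n+1, m), first row is the bias."""
    d2 = np.sum((centers - x) ** 2, axis=1)      # ||x - c_i||^2
    y = np.exp(-d2 / (2.0 * widths ** 2))        # eq. (1), i = 1..n
    y = np.concatenate(([1.0], y))               # bias unit y_0 = 1
    return y @ W                                 # eq. (2): z_j, j = 1..m

# Hypothetical toy network: 2-D input, 3 hidden units, 4 emotion classes
rng = np.random.default_rng(0)
z = rbfnn_forward(np.zeros(2),
                  rng.normal(size=(3, 2)),   # centers c_i
                  np.ones(3),                # widths sigma_i
                  rng.normal(size=(4, 4)))   # weights w_ij, incl. bias row
```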
3 Experiment Setup
The emotional Chinese speech corpus reported in our previous study [10] was used for
the experiments. The corpus was collected from 20 different Chinese dialogue episodes
of a TV talk-show. In each talk-show, two or three persons discuss problems such as
typical social issues, family conflicts, inspiring deeds, etc. Due to the spontaneous
and unscripted manner of the episodes, the emotional expressions can be considered
authentic. Because of the limited topics, the speech corpus covers four kinds of
common emotion: angry, happy, sad and neutral. The corpus contains in total 800
emotional utterances from 53 different speakers (16 male / 37 female),
speaker-independent, with about 200 utterances for each of the four emotions. All
utterances were recorded at a sample rate of 16 kHz with 16-bit resolution and stored
on computer in mono-phonic Windows WAV format.
It has been found that speech prosody and voice quality features are closely related
to the expression of human emotion in speech [3, 10]. The popular prosody features
include pitch, intensity and duration, and the representative voice quality features
include the first three formants (F1, F2, F3), spectral energy distribution,
harmonics-to-noise ratio (HNR), pitch irregularity (jitter) and amplitude
irregularity (shimmer). These popular prosody and voice quality features are
extracted for each utterance of the emotional speech corpus. Some typical statistical
parameters, such as mean, standard deviation (std), median, quartiles, and so on, are
computed for each extracted feature. The extracted acoustic features, 48 in total,
are presented as follows.
Prosody features:
(1-10)Pitch: maximum, minimum, range, mean, std (standard deviation), first
quartile, median, third quartile, inter-quartile range, mean-absolute-slope.
(11-19)Intensity: maximum, minimum, range, mean, std, first quartile, median,
third quartile, inter-quartile range.
440 S. Zhang, X. Zhao, and B. Lei
4 Experiment Results
All extracted acoustic features were first normalized by mapping to [0, 1]. Three
typical classification methods, i.e., linear discriminant classifiers (LDC),
K-nearest-neighbor (KNN) and the C4.5 decision tree, were used for emotion
recognition, and their performance was compared with RBFNN. Note that the number of
nearest neighbors for the KNN method is set to 1 due to its best performance. For all
emotion classification experiments, we performed 10-fold stratified cross validation
over the data sets so as to achieve more reliable experiment results. In other words,
each classification model is trained on nine tenths of the total data and tested on
the remaining tenth. This process is repeated ten times, each with a different
partitioning seed, in order to account for variance between the partitions.
The emotion recognition results of the four classification methods, LDC, KNN, C4.5
and RBFNN, are presented in Table 1. From the results in Table 1, we can observe that
RBFNN performs best with the highest accuracy of 83.3%, outperforming LDC, KNN and
C4.5. This demonstrates that RBFNN has the best generalization ability among the four
classification methods. Note that LDC and KNN both get an accuracy of about 75%,
indicating that LDC is very close to KNN on this classification task. Additionally,
C4.5 performs better than LDC and KNN, with a recognition accuracy of 80%.
To further explore the recognition results for the different kinds of emotions when
using the RBFNN classifier, Table 2 gives the confusion matrix of the emotion
recognition results obtained with RBFNN. As shown in Table 2, neutral is best
classified with an accuracy of 91.2%, and angry is discriminated with an accuracy of
85.6%. The other two emotions, happy and sad, could only be classified with
relatively lower accuracies: 80% for happy and 76.4% for sad. This can be explained
by the fact that happy is highly confused with angry, since their acoustic parameters
are similar to a great extent.
5 Conclusions
In this paper, a method based on RBFNN is presented for spoken emotion recognition.
To effectively evaluate the performance of RBFNN on the spoken emotion recognition
task, three well-known emotion classification methods, i.e., LDC, KNN, and C4.5, were
used for comparison. From the experiment results on the emotional Chinese speech
corpus, we can conclude that RBFNN performs best, with an average recognition
accuracy of 83.3%, due to its good generalization ability. It is worth pointing out
that in this study we focus on recognizing only four emotions: angry, happy, sad and
neutral. However, in real-world scenarios some other common emotions, such as
disgust, fear and surprise, also exist. Therefore, in the future it is an interesting
subject to extend our emotional Chinese speech corpus and identify more emotion
categories such as disgust, fear, surprise and so on.
Acknowledgments. This work is supported by Zhejiang Provincial Natural Science
Foundation of China (Grant No.Z1101048, No. Y1111058).
References
1. Picard, R.: Affective computing. MIT Press, Cambridge (1997)
2. Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., Votsis, G., Kollias, S., Fellenz, W.,
Taylor, J.G.: Emotion Recognition in Human-Computer Interaction. IEEE Signal
Processing Magazine 18(1), 32–80 (2001)
3. Lee, C.M., Narayanan, S.S.: Toward Detecting Emotions in Spoken Dialogs. IEEE
Transactions on Speech and Audio Processing 13(2), 293–303 (2005)
4. Dellaert, F., Polzin, T., Waibel, A.: Recognizing emotion in speech. In: Proceedings of the
4th International Conference on Spoken Language Processing, Philadelphia, PA, USA,
pp. 1970–1973 (1996)
5. Petrushin, V.: Emotion in speech: recognition and application to call centers. In:
Proceedings of 1999 Artificial Neural Networks in Engineering, New York, pp. 7–10
(1999)
6. Yacoub, S., Simske, S., Lin, X., Burns, J.: Recognition of emotions in interactive
voice response systems. In: Proceedings of EUROSPEECH 2003, Geneva, Switzerland,
pp. 729–732 (2003)
7. Lee, C.C., Mower, E., Busso, C., Lee, S., Narayanan, S.S.: Emotion recognition using a
hierarchical binary decision tree approach. In: Proceedings of INTERSPEECH 2009,
Brighton, United Kingdom, pp. 320–323 (2009)
8. Park, J., Sandberg, I.W.: Universal approximation using radial-basis-function networks.
Neural Computation 3(2), 246–257 (1991)
9. Er, M.J., Wu, S., Lu, J., Toh, H.L.: Face recognition with radial basis function (RBF)
neural networks. IEEE Transactions on Neural Networks 13(3), 697–710 (2002)
10. Zhang, S.: Emotion Recognition in Chinese Natural Speech by Combining Prosody and
Voice Quality Features. In: Sun, F., Zhang, J., Tan, Y., Cao, J., Yu, W. (eds.) ISNN 2008,
Part II. LNCS, vol. 5264, pp. 457–464. Springer, Heidelberg (2008)
Facial Expression Recognition Using Local Fisher
Discriminant Analysis
1 Introduction
Facial expression is one of the most powerful, natural, and immediate means for
human beings to communicate their emotions and intentions. Automatic facial
expression recognition has attracted much attention over the past two decades due to
its important applications to human-computer interaction, data-driven animation,
video indexing, etc.
An automatic facial expression recognition system involves two crucial parts: facial
feature representation and classifier design. Facial feature representation extracts
a set of appropriate features from original face images for describing faces. Mainly
two types of approaches to extracting facial features are found: geometry-based
methods and appearance-based methods [1]. In geometric feature extraction, the shape
and location of various face components are considered. The geometry-based methods
require accurate and reliable facial feature detection, which is difficult to achieve
in real-time applications. In contrast, in the appearance-based methods, image
filters are applied either to the whole face image (holistic features) or to some
specific regions of the face (local features) to extract the appearance changes in
the face image. Up to now, Principal Component Analysis (PCA) [2], Linear
Discriminant Analysis (LDA) [3], and Gabor wavelet analysis [4] have been applied
to either the whole face or specific face regions to extract the facial appearance
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 443448, 2011.
Springer-Verlag Berlin Heidelberg 2011
444 S. Zhang, X. Zhao, and B. Lei
changes. Recently, Local Binary Patterns (LBP) [5], originally proposed for texture
analysis [6], have been successfully applied as a local feature extraction method in
facial expression recognition [7, 8]. No matter which kind of facial features is
used, the dimensionality of the facial features is still high. Therefore, it is
desirable to perform dimensionality reduction on high-dimensional facial features in
order to extract low-dimensional embedded data representations for facial expression
recognition.
In recent years, a new supervised dimensionality reduction method called Local Fisher
Discriminant Analysis (LFDA) [9] has been proposed to overcome the limitations of
LDA. LFDA effectively combines the ideas of LDA and locality preserving projection
(LPP) [10]; that is, LFDA maximizes between-class separability and preserves
within-class local structure at the same time. Thus, LFDA is capable of extracting
low-dimensional discriminant embedded data representations. Motivated by the
deficiency of studies on LFDA for facial expression recognition, in this work we use
LFDA to extract the low-dimensional discriminant embedded data representations from
high-dimensional LBP features on the facial expression recognition task. We compare
LFDA with PCA and LDA to verify the effectiveness of LFDA for facial expression
recognition. We conduct facial expression recognition experiments on the popular
JAFFE [11] facial expression database.
The remainder of this paper is organized as follows. In Section 2 Local Fisher
Discriminant Analysis (LFDA) is reviewed briefly. The popular JAFFE facial
expression database is introduced in Section 3. Section 4 presents the experiment
results. Finally, the conclusions are given in Section 5.
Let n_l denote the number of samples in class l, so that
Σ_l n_l = n   (1)
The embedded representation of a sample x_i is
y_i = T^T x_i   (2)
LFDA defines the local within-class scatter matrix S^(lw) and the local between-class
scatter matrix S^(lb) as
S^(lw) = (1/2) Σ_{i,j=1}^{n} W_ij^(lw) (x_i − x_j)(x_i − x_j)^T   (3)
S^(lb) = (1/2) Σ_{i,j=1}^{n} W_ij^(lb) (x_i − x_j)(x_i − x_j)^T   (4)
W_ij^(lw) = A_ij / n_l  if l_i = l_j = l;  0  if l_i ≠ l_j   (5)
W_ij^(lb) = A_ij (1/n − 1/n_l)  if l_i = l_j = l;  1/n  if l_i ≠ l_j   (6)
where A is an affinity matrix between x_i and x_j. Using the local scaling heuristic,
A is defined as
A_ij = exp(−‖x_i − x_j‖² / (σ_i σ_j))   (7)
where σ_i is the local scaling around x_i, defined by σ_i = ‖x_i − x_i^(k)‖, and
x_i^(k) is the k-th nearest neighbor of x_i. A heuristic choice of k = 7 has been
shown to give the best performance.
The LFDA transformation matrix T_LFDA is defined as the maximizer of the local Fisher
criterion, T_LFDA = arg max_T tr[(T^T S^(lw) T)^(−1) T^T S^(lb) T].
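The weight matrices of eqs. (5)-(7) can be constructed as in the following sketch; the helper name and the toy data are our assumptions:

```python
import numpy as np

def lfda_weights(X, labels, k=7):
    """Local-scaling affinity (eq. 7) and the LFDA weight matrices
    W^(lw) (eq. 5) and W^(lb) (eq. 6)."""
    n = len(X)
    # pairwise Euclidean distances
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # sigma_i: distance to the k-th nearest neighbour (local scaling)
    sigma = np.sort(D, axis=1)[:, min(k, n - 1)]
    A = np.exp(-D ** 2 / (sigma[:, None] * sigma[None, :]))
    same = labels[:, None] == labels[None, :]
    # n_l for the class of each sample; for same-class pairs the row
    # value equals the shared class size
    counts = np.array([np.sum(labels == l) for l in labels])
    Wlw = np.where(same, A / counts[:, None], 0.0)
    Wlb = np.where(same, A * (1.0 / n - 1.0 / counts[:, None]), 1.0 / n)
    return Wlw, Wlb

# Hypothetical toy data: two well-separated classes of two samples each
X = np.array([[0.0], [0.1], [5.0], [5.1]])
labels = np.array([0, 0, 1, 1])
Wlw, Wlb = lfda_weights(X, labels)
```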
The popular JAFFE facial expression database [11] used in this study contains 213
images of female facial expressions. Each image has a resolution of 256×256 pixels.
The head is almost in frontal pose. The number of images corresponding to each of
the 7 categories of expression (neutral, happiness, sadness, surprise, anger, disgust
and fear) is almost the same. A few of them are shown in Fig. 1.
Fig. 2. (a) Location of the two eyes; (b) the final cropped image of 110×150 pixels
4 Experimental Results
Since LBP is tolerant to illumination changes and computationally simple, we
adopted LBP for facial image representation in facial expression recognition. The
K-nearest-neighbor (KNN) classifier with the Euclidean distance was used to conduct
the facial expression recognition experiments owing to its simplicity, and the
parameter K was set to 1, which gave the best performance. For comparison with
LFDA, PCA and LDA were used for dimensionality reduction. The reduced feature
dimension is limited to the range [2, 20]. For all facial expression recognition
experiments, a 10-fold stratified cross-validation scheme was performed and the
average recognition results are reported.
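The evaluation protocol just described (1-NN with the Euclidean distance under 10-fold stratified cross-validation) can be sketched as follows. The fold-assignment details are assumptions for illustration; in the actual experiments the inputs would be LBP feature vectors extracted from the face images.

```python
import numpy as np

def one_nn_stratified_cv(X, y, n_folds=10, seed=0):
    """Stratified k-fold cross-validation with a 1-NN Euclidean classifier
    (a sketch of the paper's evaluation protocol)."""
    rng = np.random.default_rng(seed)
    folds = np.empty(len(y), dtype=int)
    for c in np.unique(y):                      # stratify: spread each class over folds
        idx = rng.permutation(np.where(y == c)[0])
        folds[idx] = np.arange(len(idx)) % n_folds
    accs = []
    for f in range(n_folds):
        tr, te = folds != f, folds == f
        # 1-NN: predict the label of the closest training sample
        d = ((X[te][:, None, :] - X[tr][None, :, :]) ** 2).sum(-1)
        pred = y[tr][d.argmin(axis=1)]
        accs.append((pred == y[te]).mean())
    return float(np.mean(accs))
```

The same routine can be run after projecting X with PCA, LDA or LFDA to compare reduced dimensions.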
Table 1. The best accuracy for different methods with corresponding reduced dimension

Method      Baseline   PCA   LDA   LFDA
Dimension   2478       17    6     11
The recognition results of the three dimension reduction methods, i.e.,
PCA, LDA and LFDA, are given in Fig. 3. It should be pointed out that the reduced
feature number of LDA is restricted to the range [2, 6], because LDA can find at
most 6 (one less than the 7 categories of expression) meaningful embedded
features due to the rank
Facial Expression Recognition Using Local Fisher Discriminant Analysis 447
deficiency of the between-class scatter matrix [3]. The best accuracy of each
method, with the corresponding reduced dimension, is presented in Table 1. Note
that the Baseline method denotes the result obtained on the original 2478-
dimensional LBP features without any dimensionality reduction. As shown in Fig. 3
and Table 1, LFDA obtains the highest accuracy of 85.71% with 11 embedded
features, outperforming PCA, LDA and the Baseline. This indicates that LFDA is
capable of extracting the most discriminant low-dimensional embedded
representations for facial expression recognition. In addition, LDA performs better
than PCA, since LDA is a supervised reduction method and can extract low-
dimensional embedded data representations with higher discriminating power than
PCA.
Fig. 3. Recognition accuracy (%) versus reduced dimension (2-20) for PCA, LDA and LFDA
5 Conclusions
In this paper, we presented a new method of facial expression recognition based on
LFDA. To verify the effectiveness of LFDA, two well-known dimensionality
reduction methods, PCA and LDA, were compared with it. The experimental results
on the popular JAFFE facial expression database indicate that LFDA performs best,
obtaining the highest accuracy of 85.71% with 11 embedded features. This is
attributed to LFDA's better ability, relative to PCA and LDA, to extract the
low-dimensional embedded data representations needed for facial expression
recognition.
References
1. Tian, Y., Kanade, T., Cohn, J.: Facial expression analysis. In: Handbook of Face
Recognition, pp. 247–275 (2005)
2. Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Maui, USA, pp. 586–591 (1991)
3. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. fisherfaces: Recognition
using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine
Intelligence 19(7), 711–720 (1997)
4. Lyons, M.J., Budynek, J., Akamatsu, S.: Automatic classification of single facial images.
IEEE Transactions on Pattern Analysis and Machine Intelligence 21(12), 1357–1362
(1999)
5. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray scale and rotation invariant
texture analysis with local binary patterns. IEEE Transactions on Pattern Analysis and
Machine Intelligence 24(7), 971–987 (2002)
6. Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with
classification based on featured distributions. Pattern Recognition 29(1), 51–59 (1996)
7. Shan, C., Gong, S., McOwan, P.: Robust facial expression recognition using local
binary patterns. In: IEEE International Conference on Image Processing (ICIP), Genoa,
pp. 370–373 (2005)
8. Shan, C., Gong, S., McOwan, P.: Facial expression recognition based on Local Binary
Patterns: A comprehensive study. Image and Vision Computing 27(6), 803–816 (2009)
9. Sugiyama, M.: Dimensionality reduction of multimodal labeled data by local Fisher
discriminant analysis. Journal of Machine Learning Research 8, 1027–1061 (2007)
10. He, X., Niyogi, P.: Locality preserving projections. In: Advances in Neural Information
Processing Systems (NIPS), pp. 153–160. MIT Press, Cambridge (2003)
11. Lyons, M., Akamatsu, S., Kamachi, M., Gyoba, J.: Coding facial expressions with Gabor
wavelets. In: Third IEEE International Conference on Automatic Face and Gesture
Recognition, Nara, Japan, pp. 200–205 (1998)
12. Tian, Y.: Evaluation of face resolution for expression analysis. In: First IEEE Workshop
on Face Processing in Video, Washington, USA, p. 82 (2004)
13. Viola, P., Jones, M.: Robust real-time face detection. International Journal of Computer
Vision 57(2), 137–154 (2004)
Improving Tracking Performance of PLL Based on
Wavelet Packet De-noising Technology
1 Introduction
The robustness of GPS receivers has gained more and more attention in recent years.
For GPS receivers, a phase-locked loop provides the carrier phase measurement for
accurate positioning. Unfortunately, the PLL tracking performance is often affected
by many errors, mainly the thermal noise and dynamic stress [1]. To tolerate dynamic
stress, the conventional method is to broaden PLL bandwidth and reduce the pre-
integration time. However, in order to reduce the thermal noise, a narrow bandwidth
and longer pre-integration time are required. Actually, some compromise must be
made to resolve this conflict [2].
This paper proposes an algorithm based on wavelet packet de-noising technology
to improve the tracking performance of the PLL. Owing to their flexibility, software
receivers are quite valuable and convenient for executing and evaluating such an
algorithm, and the proposed algorithm was implemented in a GPS software receiver
developed in C/C++. The paper is organized as follows: first, the basic structure of
the software GPS receiver is illustrated; second, some basic concepts of the PLL and
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 449456, 2011.
Springer-Verlag Berlin Heidelberg 2011
450 Y. Li, X. Xu, and T. Zhang
The structure of the software receiver used in this paper is shown in Fig. 1. The GPS
software receiver realizes base-band signal processing and the navigation solution in
software, on top of a generalized and modularized hardware platform, viz., the
front-end [3]. Achieving these functions in software gives the software receiver good
compatibility and flexibility, so different algorithms can be executed and evaluated
in the receiver.
Fig. 1. Structure of the software GPS receiver: the IF signal input is processed by
signal acquisition and signal tracking, followed by bit/frame synchronization, parity
check and decoding of the navigation message, and finally the navigation (position)
solution
The principle of a software receiver is as follows. First, the receiver reads the IF
data. Second, the acquisition module acquires the rough code phase and Doppler
frequency of each satellite. Third, the tracking module implements carrier tracking
and code tracking based on the parameters from the second step. Then,
synchronization, parity check and decoding are carried out in sequence [4], and the
position message is calculated. Carrier tracking is accomplished by the PLL, which
synthesizes a replica carrier matching the frequency and phase of the satellite
signal. The performance of the PLL therefore plays an important role in the GPS
receiver.
A PLL consists of a phase discriminator (PD), a loop filter (LF) and a voltage
controlled oscillator (VCO). The LF filters the result from the PD and generates the
control signal u_f(t) that drives the VCO to generate the output signal.
A signal f(t) can be expanded on a set of basis functions:

    f(t) = Σ_γ a_γ φ_γ(t)                                                  (1)

where {φ_γ(t)} are the basis functions of another space. The characteristics of the
signal can be abstracted from the coefficients a_γ, so processing of the signal can be
replaced by processing of a_γ [6]. In this paper, the wavelet packet decomposition
is used: the wavelet packet transform maps the signal in the time domain into
coefficients in the inner product space of the wavelet packets. Define the following
notation: φ(x) is a scaling function and ψ(x) is the corresponding wavelet function;
{V_k} is a multi-resolution space, also called a scale space, generated by φ(x);
{W_k} is a wavelet space generated by ψ(x), where W_{k+1} fills the difference
between V_{k+1} and V_k. The Lebesgue space L²(R) can then be decomposed
as [7]:

    L²(R) = ... ⊕ W_2 ⊕ W_1 ⊕ W_0 ⊕ W_{−1} ⊕ W_{−2} ⊕ ...                  (2)
The filter coefficients h_n and g_n satisfy

    Σ_{n∈Z} h_{n−2k} h_{n−2l} = δ_{k,l},   Σ_{n∈Z} h_n = √2,
    g_k = (−1)^k h_{1−k}                                                   (4)

and the wavelet packet basis functions μ_n(x) are generated recursively by the
two-scale relations

    μ_{2n}(x) = √2 Σ_k h_k μ_n(2x − k)                                     (5)

    μ_{2n+1}(x) = √2 Σ_k g_k μ_n(2x − k)                                   (6)
The signal can then be expressed in terms of these wavelet packet basis functions.
Based on the theory of local maxima of the wavelet transform, the characteristic
information of the signal is concentrated in a few coefficients, so de-noising can be
done by keeping the characteristic coefficients and thresholding the others; after
thresholding, the modified coefficients are used to reconstruct the signal [8].
The de-noising procedure is as follows. First, the signal is decomposed into the
wavelet packet tree. Second, the best tree, i.e., the best wavelet packet basis, is
computed based on the Shannon entropy. Third, a threshold λ is computed and
soft-thresholding is applied to the coefficients. The threshold can be calculated as
follows [8]:

    λ = σ √(2 ln N)                                                        (9)

where N is the number of coefficients and σ is the noise level. There are two kinds
of threshold function, hard and soft; in this paper the soft threshold function was
selected. It is defined as [8]:

    Ŵ_{j,k} = sign(W_{j,k}) (|W_{j,k}| − λ),   |W_{j,k}| ≥ λ
    Ŵ_{j,k} = 0,                               |W_{j,k}| < λ               (10)
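The two thresholding pieces can be written compactly. This is an illustrative sketch: the rendering of Eq. (9) as the universal threshold, and the median-absolute-deviation estimate of the noise level σ, are assumptions rather than details confirmed by the paper.

```python
import numpy as np

def soft_threshold(w, lam):
    """Soft-threshold function of Eq. (10): shrink coefficients toward zero,
    zeroing those with magnitude below lam."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def universal_threshold(coeffs):
    """Threshold in the spirit of Eq. (9), lambda = sigma * sqrt(2 ln N),
    with sigma estimated from the median absolute deviation (an assumption)."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(len(coeffs)))
```

Soft thresholding is continuous at ±λ, which is why it produces smoother reconstructions than hard thresholding.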
Fig. Structure of the modified PLL with the wavelet packet de-noising module
(signals: u_I(k), S_I(k), u_Q(k), S_Q(k), u_d(k), u_i(k), u_f(k), u_O(k))
The main function of the wavelet packet de-noising is to reduce the noise level
within the bandwidth of the loop filter. The loop filter could filter out the noise
beyond the bandwidth of the loop filter, but the noise within the bandwidth still could
pass through and affect the NCO tracking performance. It should be noted that the
wavelet packet de-noising technique can only reduce the noise level rather than
eliminate the noise totally. The remaining noise still can affect the NCO tracking
performance but at a lower level [2].
The wavelet packet de-noising technique may also cut off useful signal components
that are smaller than the threshold. This disadvantage makes the PLL take more
time to lock, due to the loss of some control signals, and distorts the PLL output.
However, the reduced noise produces a smaller phase error, which helps the
PLL stay locked and smooths the output [6].
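The decompose, threshold, reconstruct cycle discussed above can be sketched minimally with the Haar filter pair. This is an illustration only: the paper's best-basis selection by Shannon entropy is omitted, a full decomposition to a fixed level is an assumption, and the input length must be divisible by 2^level.

```python
import numpy as np

def haar_wp_denoise(x, level, lam):
    """Minimal wavelet packet de-noising sketch with the Haar filters
    h = [1, 1]/sqrt(2), g = [1, -1]/sqrt(2)."""
    def split(v):                       # one application of Eqs. (5)-(6)
        v = v.reshape(-1, 2)
        return (v[:, 0] + v[:, 1]) / np.sqrt(2), (v[:, 0] - v[:, 1]) / np.sqrt(2)

    def merge(a, d):                    # exact inverse of split (orthonormal filters)
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        return out

    # full wavelet packet tree: split every node at each level
    nodes = [np.asarray(x, float)]
    for _ in range(level):
        nodes = [half for v in nodes for half in split(v)]
    # soft-threshold every packet coefficient (Eq. (10))
    nodes = [np.sign(v) * np.maximum(np.abs(v) - lam, 0.0) for v in nodes]
    # reconstruct by merging sibling pairs back up the tree
    for _ in range(level):
        nodes = [merge(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]
```

With lam = 0 the routine reconstructs the input exactly, which is a convenient sanity check of the filter pair.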
6 Test Results
The performance of the proposed algorithm is assessed using real IF data acquired
by an RF front-end. The front-end's parameters are as follows: the IF (intermediate
frequency) is 4.123968 MHz and the sampling frequency is 16.367667 MHz. The
software receiver is simulated based on VC++ 6.0.
The algorithm processes the outputs of the discriminator. The performance is
evaluated in terms of the signal-to-noise ratio (SNR) and the relative mean-square
error (RMSE). The formulas of SNR and RMSE are as follows [9]:
    SNR = 10 log_10 [ Σ_{i=1}^{N} I(i)² / Σ_{i=1}^{N} (I(i) − I_d(i))² ]   (11)

    RMSE = Σ_{i=1}^{N} (I(i) − I_d(i))² / Σ_{i=1}^{N} I_d(i)²              (12)
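The two metrics translate directly into code. Note that the exact forms of Eqs. (11) and (12) are reconstructions of garbled source text, so the numerator/denominator placement here is an assumption consistent with common de-noising practice.

```python
import numpy as np

def snr_db(I, Id):
    """SNR in the spirit of Eq. (11): signal power over residual power, in dB."""
    return 10.0 * np.log10((I ** 2).sum() / ((I - Id) ** 2).sum())

def rmse_rel(I, Id):
    """Relative mean-square error in the spirit of Eq. (12)."""
    return ((I - Id) ** 2).sum() / (Id ** 2).sum()
```

Both functions take the original discriminator output I and the de-noised trace Id as NumPy arrays of equal length.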
where I(i) and I_d(i) denote the original signal and the de-noised signal,
respectively. The Doppler frequencies from the ordinary PLL and the modified PLL
are compared in Fig. 5 and Fig. 6. The tested data are 36,000 ms long; parts are
selected for display for brevity. Both figures show that the data from the proposed
algorithm have less noise than those of the ordinary PLL, but with a little offset.
The results comply with the theory above. The slope of the curves represents the
change rate of the Doppler frequency.
Fig. 5. The difference of Doppler frequency between modified PLL and ordinary PLL for satellite 9
Fig. 6. The difference of Doppler frequency between modified PLL and ordinary PLL for satellite 18
The SNR and RMSE improvements of the proposed algorithm are shown in
Table 1. In general, they comply with the results in the figures: the wavelet packet
de-noising algorithm provides some improvement in both SNR and RMSE.
7 Conclusion
The wavelet packet de-noising technique in the PLL can reduce the noise within the
bandwidth of the loop filter, and therefore a less noisy tracking output can be
obtained from the NCO, so the proposed method is effective in improving the PLL
tracking performance. However, the bias between the de-noised signal and the
original signal is related to the soft-threshold function and still affects the
performance of the PLL. Improving the threshold function will be the focus of
future work. Besides, operating the software in real time instead of post-processing
is also a key issue to be considered in later research.
References
1. Ward, P.W., Betz, J.W., Hegarty, C.J.: Satellite Signal Acquisition, Tracking, and Data
Demodulation. In: Understanding GPS: Principles and Applications, pp. 153–241. Artech
House, Inc., Norwood
2. Lian, P., Lachapelle, G., Ma, C.L.: Improving tracking performance of PLL in high
dynamics applications. In: ION NTM, pp. 1042–1052 (2005)
3. Tsui, J.B.-Y.: Fundamentals of Global Positioning System Receivers: A Software
Approach. John Wiley & Sons, Inc., Chichester (2000)
4. Cai, B.-G., Shang, G.W., Wang, J., Liu, H.-C.: Design and Realization of Software
Receiver Based on GPS Positioning Algorithm. In: International Conference on
Information Science and Engineering, pp. 2030–2033. IEEE, Los Alamitos (2009)
5. Gardner, F.M.: Phaselock Techniques, 3rd edn. John Wiley & Sons, Inc., USA (2005)
6. Qian, H.-m., Ma, J.-c., Li, Z.-y.: Fiber optical gyro de-noising based on wavelet packet
soft-threshold algorithm. Journal of Chinese Inertial Technology 15(5), 602–605 (2007)
7. Daubechies, I.: Orthonormal bases of compactly supported wavelets. Communications on
Pure and Applied Mathematics 41, 909–996 (1988)
8. Peng, Y., Weng, X.H.: Thresholding-based Wavelet Packet Methods for Doppler Ultrasound
Signal Denoising. In: IFMBE Proceedings, APCMBE 2008, vol. 19, pp. 408–412 (2008)
9. Yu, W.X., Zhang, Q.: Signal De-noising in Wavelet Packet Based on an Improved
Threshold Function. Communications Technology 43(6), 7–9 (2010)
Abstract. In order to enhance the display quality of the LED display image, an
improved algorithm for the LED display image based on composed correction is
proposed. First, the correction principle and the development status of current
correction technologies for the LED display image are introduced, and the two
main correction technologies are analyzed. Second, the theory of composed
correction is developed by constructing a mathematical model. Third, the
algorithm is implemented in VHDL, a hardware description language, and
verified on an experimental platform. Experimental results show that the
algorithm is able to enhance the display quality of the LED display image
significantly.
1 Introduction
The LED display panel has been used widely on various occasions such as stage
backgrounds, traffic guidance and live sports events. The LED display panel has
received widespread attention and developed rapidly because of its many
advantages: high brightness, low voltage, low power consumption and long life.
Despite these advantages, its development is still constrained by some unfavorable
factors. One of them is the non-uniformity among pixels that appears after the
LED display panel has worked for a period of time, because different pixels have
different working durations [1-6].
To solve the above problem, there are two main correction technologies:
correction based on a CCD and correction based on a dedicated video processor.
CCD-based correction has been used in LED display panels for many years and is
already mature. Video-processor-based correction has been developed only in
recent years, and its cost is high because the dedicated video processor is
expensive. Therefore, an improved algorithm for the LED display image based on
composed correction is proposed [7-10].
458 X.-j. Song et al.
    0 ≤ f_R(x, y) ≤ L_max
    0 ≤ f_G(x, y) ≤ L_max                                                  (1)
    0 ≤ f_B(x, y) ≤ L_max
In (1), L_max denotes the highest grayscale level of a pixel. The coordinates of the
LED display image should satisfy the constraints shown in Fig. 1.
Fig. 1. Coordinate constraints of the LED display image. Each cell denotes one pixel,
and (0, 0) denotes the pixel in the first row and the first column. The resolution of
the LED display panel is V×H, meaning that the panel has H rows and V columns
of pixels.
Improved Algorithm of LED Display Image Based on Composed Correction 459
The LED display image can be described as a matrix F because it is itself a digital
array; each pixel of the LED display panel corresponds to an element of F. It
should be noted that the size of F is not V×H but (3V)×H, since F is composed of
three sub-matrices F_R, F_G and F_B:

    F = [F_R : F_G : F_B]                                                  (2)

In (2), F_R, F_G and F_B are expressed in (3), (4) and (5), respectively.
    f = [f_R  f_G  f_B]                                                    (6)

    f_R = [f_0  f_1  ...  f_{V−1}]                                         (7)

    f_G = [f_V  f_{V+1}  ...  f_{2V−1}]                                    (8)

    f_B = [f_{2V}  f_{2V+1}  ...  f_{3V−1}]                                (9)

    f_i = [f(0, i)  f(1, i)  ...  f(H−1, i)]^T,   i = 0, 1, ..., 3V−1      (10)
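The channel-stacked matrix of Eqs. (2) and (6)-(10) can be sketched in NumPy. The H-rows-by-3V-columns memory layout chosen here is an assumption made so that each f_i of Eq. (10) is simply a column slice.

```python
import numpy as np

def build_display_matrix(fr, fg, fb):
    """Concatenate the R, G, B channel matrices (each H x V, range-limited per
    Eq. (1)) into F = [F_R : F_G : F_B] with H rows and 3V columns."""
    assert fr.shape == fg.shape == fb.shape, "all channels must share H x V"
    return np.hstack([fr, fg, fb])

def column(F, i):
    """Column vector f_i = [f(0, i), ..., f(H-1, i)]^T of Eq. (10)."""
    return F[:, i]
```

A per-pixel correction algorithm can then iterate over the 3V columns of F, treating each color channel uniformly.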
4 Experimental Results
The algorithm is implemented in VHDL, a hardware description language. The
program has four modules: data_in, data_pro, data_driv and correction_end.
Data_in receives the data from sources such as a display card or video processor.
Data_pro processes the data provided by data_in based on the composed
correction. Data_driv transfers the data to the LED display panel. Correction_end
controls the processing of the correction.
The algorithm based on composed correction has been applied in real projects;
the following figures, Fig. 3 to Fig. 6, show the LED display results before and
after using the algorithm.
Fig. 2. The experimental platform consists of the data transform unit, the control
unit, the drive unit and the display unit
Fig. 3. (a) The LED display result in red before correcting; (b) after correcting

Fig. 4. (a) The LED display result in green before correcting; (b) after correcting

Fig. 5. (a) The LED display result in white before correcting; (b) after correcting

Fig. 6. (a) The LED display result in white before correcting; (b) after correcting
5 Conclusion
The improved algorithm for the LED display image based on composed correction
is able to reduce the non-uniformity among pixels that appears after the LED
display panel has worked for a period of time. The experimental results show that
the algorithm enhances the display quality of the LED display image significantly.
References
1. Goh, J.C., Chung, H.J., Jang, J., et al.: A new pixel circuit for active matrix organic light
emitting diodes. IEEE Transactions on Electron Devices, 544–546 (2002)
2. Critchley, B.R., Blaxtan, W.P., Eckersley, B.: Picture quality in large-screen projectors
using the digital micro-mirror. Society for Information Display, 199–202 (2008)
3. Winkler, S.: Issues in vision modeling for perceptual video quality assessment. Signal
Processing, 231–252 (1999)
4. Cheng, W.S., Zhao, J.: Correction method for pixel response non-uniformity of CCD. Opt.
Precision Eng., 103–108 (2008)
5. Chang, F., Wang, R.-g., Zheng, X.-f., et al.: Algorithm of reducing the non-uniformity of
images in LED display panel. IEEE Transactions on Power Electronics, 169–172 (2010)
LiMei Fu
1 Introduction
With the rapid development of modern education technology, which takes computer
technology and network technology as its core, schools pay more and more
attention to using computers to develop multimedia courseware, and multimedia
courseware shows its advantages in teaching. A multimedia courseware can be
considered as a teaching system consisting of six basic parts: the cover and
introduction, the knowledge content, the exercises, the jump relations, the
navigation strategy and the interface. Combining the author's practical experience
in developing a C programming courseware, this paper introduces the development
process of courseware using Authorware, summarizes some problems encountered
during development, and finally presents solutions to these problems.
The Development Process of Multimedia Courseware Using Authoware 465
(2) Scientific. The content of the courseware should be scientifically sound, the
materials should closely follow the syllabus, and the content should be adapted
to the needs of the teaching objects.
(3) Technical. The courseware should run in an independent environment with
fast speed and smooth animation; it should be fault-tolerant and trouble-free,
and its interface should be friendly and highly interactive.
(4) Artistic. The pictures of the courseware should be concise, artistic and
unified; the use of images, animation and text fonts should be reasonable,
and the font size should be easily readable.
The second step is to design the core part of the courseware. Based on the
framework and functions of the courseware, we determine its main menu, submenus
and corresponding buttons. The design should make it convenient to switch between
parts, and should provide buttons for returning to the home page and exiting the
courseware system. To make the courseware more attractive, the main menu and
the submenus of the chapters are all produced in Flash. The design process of the
main menu is as follows: first insert a Flash animation into the flow line, and then
drag an interaction icon onto the flow line. Below the interaction icon, many group
icons can be dragged. Among the available types of interactive response, we choose
"hotspot response". In the group icons, all kinds of icons can be used to finish the
development of the courseware: display icons to show the content of the courseware
and pictures, sound icons to play audio, and movie icons to play movies. The
return and exit buttons use calculation icons to return to the main menu and exit
the system. In order to jump correctly, each icon must be named accurately and
uniquely in the whole process; icons with the same name would cause the jump
commands to run wrongly during program execution. The program flow is shown
in Fig. 3: the initial interface displays the main menu of the chapters, the submenus
display the menus within each chapter, and clicking the main menu enters the
submenu interface.
The Development Process of Multimedia Courseware Using Authoware 467
Fig. 3. The structure of the core courseware
In the third step, each part of the teaching content is refined and divided into
several independent knowledge units, each of which should be clearly defined.
According to the different knowledge units, the design provides corresponding
interfaces. Pages are connected through a "next page" button; when a submenu
reaches its last page, the "return" button brings the user back to the main menu of
the chapter, and in any submenu state the courseware can be exited at any time by
pressing the "exit" button. Figure 5 is the structure chart of the statements part.
To complete the corresponding functions, different types of icons can be dragged
into the group icons.
The fourth step is to complete the tests of each unit, which can be done quickly
using Authorware knowledge objects. There are two kinds of questions: multiple
choice and true/false. The feedback function is very important for the testing: when
students answer correctly, an encouraging feedback should be designed, such as a
smiling face or an encouraging expression; when students answer wrongly, the test
system should explain the reasons. In order to record each student's learning
situation, the system provides a learning diary function. Finally, the system also
provides more learning resources through the course resource part.
6 Conclusion
This paper expounds the development process of a multimedia courseware using
Authorware, based on the author's experience in developing the courseware of
C programming. It also analyzes common problems encountered during the
development process. Practical teaching shows that teaching with the multimedia
courseware has achieved good effects.
Design of Ship Main Engine Speed Controller Based on
Expert Active Disturbance Rejection Technique
1 Introduction
A ship's main engine (ME) performance and service life depend heavily on the
performance of its speed regulation. At home and abroad, most advanced ship speed
control systems use digital governors, which mainly employ the PID algorithm.
Because the ship ME speed control process is nonlinear and time-varying and its
environment is uncertain, it is very difficult to obtain an optimal control
performance index with the traditional PID algorithm: PID is usually applied to
control systems with constant coefficients and little environmental interference,
and otherwise the control system cannot guarantee optimal performance and may
even become unstable. This paper designs a new type of ship ME speed controller
using the expert active disturbance rejection control (ADRC) technique, and
simulation studies indicate that it achieves good results under ship parameter
perturbations and environmental disturbances.
2 Ship ME Model
Main Engine speed control system structure is shown in Fig.1.
In this paper, the MAN B&W S60M large low-speed diesel engine is used; its
mathematical model is

    k T_1 n_f'(t) + k n_f(t) = s(t)                                        (1)
Since the other parameters are easy to design, the expert system is used to optimize
the five parameters {k_p, k_d, β_01, β_02, β_03}. Selecting the parameters this way
is easier than manual adjustment based on experience.
1) The design of the expert ADRC
The controller must tune its parameters before operation. In the parameter tuning
stage, based on an analysis of the approximate model of the controlled object, the
inference mechanism of the expert system selects the best set of ADRC parameters
from the knowledge base and embeds it into the ADRC control system; the final
aim is control of the uncertain object (Fig. 4).
2) Knowledge acquisition
The basic task of knowledge acquisition is to obtain knowledge and create a sound,
complete and effective knowledge base for the expert system, meeting the needs of
problem solving in the field. This is the key to the controller design: the correctness
of the knowledge base directly affects the control accuracy of the controller.
The system has two main methods of knowledge acquisition.
The first is by knowledge engineers: in the process of acquiring knowledge, the
knowledge engineers coordinate with experts in the field to generate the expert
knowledge base. The ADRC controller has strong robustness and can be said to be
"universal" for a certain class of objects, but an ADRC controller with fixed
parameters is not able to control all objects. Research on controlled objects with a
pure-delay, first-order inertia model found that, with the proportionality constant
normalized to 1 (which can be implemented in software), the time constant is
generally 1-1000 s and the delay time is generally 0-800 s. Ten sets of ADRC
parameters (Table 1) were therefore designed to control such objects. The
parameter selection criteria are: the uncertain objects can always be controlled by
one set of ADRC parameters from the knowledge base, and the control accuracy
meets the requirements (good rapidity, less than 15% overshoot); for short-delay
objects, the ADRC parameter sets in the knowledge base should be as few as
possible; for long-delay, large-inertia objects, the ADRC parameter sets in the
knowledge base should be as many as necessary to meet the requirements of
high-precision control.
The second method applies when control fails: the user manually adjusts the
intelligent controller parameters and thereby improves the content of the
knowledge base.
No.   k_p   k_d   β_01   β_02   β_03
1     0.1   80    5      25     80
2     0.5   90    10     30     40
3     1.0   50    15     25     70
4     1.5   30    20     25     30
5     2.0   40    25     30     50
6     3     60    30     30     10
7     4     70    35     30     20
8     5     30    40     25     40
9     6     80    45     25     80
10    8     20    50     30     30
3) Knowledge base
All the ADRC parameter sets (Table 1) and the approximate models of the
controlled objects are stored in the knowledge base.
4) Inference mechanism
The inference mechanism is the core of the controller design, equivalent to the
brain of an expert. It analyzes the approximate mathematical model of the
controlled object and uses the square
474 W. Pan et al.
error integral index ISE = ∫_0^∞ e²(t) dt as the basis for parameter selection. It
simulates online with the parameter sets from the knowledge base and then embeds
the set with the best performance index into the ADRC control system to achieve
control of the uncertain object.
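The inference step just described, simulating each knowledge-base parameter set on the approximate plant model and keeping the one with the smallest ISE, can be sketched as below. The `simulate` callback and the discrete-sum approximation of the ISE integral are illustrative assumptions; a real implementation would run the full ADRC loop for each candidate set.

```python
import numpy as np

def select_params(param_sets, simulate, dt=0.1):
    """Pick the parameter set with the smallest ISE = integral of e^2(t) dt,
    approximated as a discrete sum. 'simulate' is a hypothetical callback that
    returns the error trace e(t) for a given parameter set."""
    def ise(e):
        return float((np.asarray(e) ** 2).sum() * dt)
    scores = [ise(simulate(p)) for p in param_sets]
    return param_sets[int(np.argmin(scores))]
```

Iterating over the ten rows of Table 1 in this way reproduces the expert system's choice of the best-performing set for a given plant model.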
5 Simulation Results
The simulation experiment is carried out for a ship whose ME is the MAN B&W
S60M; the nominal parameters are T_1 = 12.4 s, k = 98.5, τ = 0.0656 s.
The ADRC parameters obtained by the standard ADRC tuning principles are
r = 100, h = 30, β_01 = 0.02, β_02 = 0.04, α_1 = 0.5, α_2 = 0.25, δ_1 = 0.05,
δ_2 = 0.1.
By the expert system, the fourth set of parameters in the expert database is used:
k_p = 1.5, k_d = 30, β_01 = 20, β_02 = 25, β_03 = 30.
The periodic sample time is set to 0.1 s and the simulation time to 500 s.
Due to limited space, only some simulation results under typical conditions are
shown in Fig. 5 and Fig. 6.
Fig. 5. Engine revolution (r/min) versus time (s) under PID and expert ADRC control

Fig. 6. Engine revolution (r/min) versus time (s) under PID and expert ADRC control
Design of Ship Course Controller Based on Genetic
Algorithm Active Disturbance Rejection Technique
1 Introduction
A ship autopilot navigates along the designed optimal course, saving much energy,
time and manpower. Since the 1920s, PID-based autopilots have been designed and
used in ship course control, and in the 1970s modern control methods, such as
self-adaptive control, were introduced. However, the traditional PID controller is so
sensitive to high-frequency disturbance that it causes frequent steering operations
and lacks sufficient adaptability to the dynamic conditions of the ship and the sea.
Furthermore, the self-adaptive steering gear, with its high cost and the difficulty of
parameter regulation, in addition to the ship's nonlinearity, cannot guarantee a
good control effect. This paper designs a new type of ship course controller using
the genetic algorithm ADRC technique, and simulation studies indicate that it
achieves good results under ship parameter perturbations and environmental
disturbances.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 476–481, 2011.
© Springer-Verlag Berlin Heidelberg 2011
reflection of the wind and wave disturbances, but its calculation is very complex. The
latter is called the responding ship model method. This method, which eliminates the
transverse excursion, is mainly based on the ship's dynamic behaviour. The
differential equation can still include the nonlinear disturbances and can even
reflect the wind and wave disturbance as an equivalent rudder angle δd, which is
taken into the ship model together with the rudder angle δ. The linear Nomoto
responding ship model is: Tψ̈ + ψ̇ = Kδ.
Ship course motion is essentially nonlinear, and its model parameters are perturbed
by changes in water depth, navigation speed, loading weight and also wind and wave
disturbances. Besides, the auto-rudder is a dynamic system restrained by the
maximum rudder angle and the maximum rudder angular velocity. Taking all of the
above into account, the ship course motion model is usually based on an improved
two-dimensional Nomoto model that includes uncertain terms. The model is
described as follows:
(T + ΔT)ψ̈ + (K + ΔK)H(ψ̇) = (K + ΔK)(δ + δd)
H(ψ̇) = (α + Δα)ψ̇ + (β + Δβ)ψ̇³
(TE + ΔTE)δ̇ + δ = (KE + ΔKE)δc        (1)
|δ̇| ≤ δ̇max, |δ| ≤ δmax
where
ψ̈, ψ̇: ship course angular acceleration (rad/s²) and angular velocity (rad/s);
δ̇, δ: steering autopilot angular velocity (rad/s) and angle (rad);
δc, δd: steering autopilot control angle (rad) and disturbance equivalent angle (rad);
T, ΔT, TE, ΔTE: ship time constant (s), steering autopilot time constant (s) and their perturbations;
K, ΔK, KE, ΔKE: ship course control gain (s⁻¹), rudder control gain (dimensionless) and their perturbations;
α, Δα, β, Δβ: nonlinear coefficients and their perturbations (s²/rad²);
δ̇max, δmax: maximum rudder angular velocity (rad/s) and maximum rudder angle (rad).
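As a concrete illustration of model (1), here is a minimal Euler-integration sketch with rudder-angle and rudder-rate saturation. All numeric parameters are assumed placeholders (the nominal values are not stated at this point), and the perturbation terms ΔT, ΔK, Δα, Δβ are set to zero.

```python
import math

# Illustrative-only integration of the improved Nomoto model (1) with rudder
# angle and rudder-rate saturation. Every numeric value below is an assumed
# placeholder, not the paper's nominal parameter set.
T, Kg = 107.0, 0.18          # ship time constant (s) and gain (1/s), assumed
ALPHA, BETA = 1.0, 0.4       # nonlinear coefficients of H(psi_dot), assumed
D_MAX = math.radians(35)     # max rudder angle
DDOT_MAX = math.radians(3)   # max rudder rate
TE, KE = 2.5, 1.0            # steering-gear time constant and gain, assumed
DT = 0.1

def step(psi, r, delta, delta_c, delta_d=0.0):
    """One Euler step: psi = heading, r = psi_dot, delta = actual rudder angle."""
    # steering gear: TE * delta_dot + delta = KE * delta_c, rate/angle limited
    ddot = (KE * delta_c - delta) / TE
    ddot = max(-DDOT_MAX, min(DDOT_MAX, ddot))
    delta = max(-D_MAX, min(D_MAX, delta + DT * ddot))
    # ship dynamics: T * r_dot + H(r) = Kg * (delta + delta_d),
    # with H(r) = ALPHA * r + BETA * r**3
    H = ALPHA * r + BETA * r ** 3
    r += DT * (Kg * (delta + delta_d) - H) / T
    psi += DT * r
    return psi, r, delta

psi = r = delta = 0.0
for _ in range(3000):        # 300 s with a constant 10-degree rudder order
    psi, r, delta = step(psi, r, delta, math.radians(10))
```

A constant rudder order produces a steady turn: the rudder settles at the ordered angle and the yaw rate settles where H(r) balances the rudder moment.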
Different disturbance models appear in different papers; in this paper, only a
uniform stochastic wave disturbance is used.
The ultimate goal of the designed ship course controller is to maintain the course,
so the servo mechanism and the ship course mathematical model can be treated as a
whole, which greatly simplifies the controller design. This design is shown in Fig. 1,
where the ADRC is second-order.
The performance of the ADRC depends heavily on the selection of its parameters,
which is mainly done by experiment. Extensive simulation studies show that an
ADRC controller can be designed by the separability principle, namely by designing
the TD, the ESO and the error-feedback part individually and combining them into a
complete ADRC. Among the parameters, six are fixed constants, the three ESO
parameters can be generated automatically, and only k_p and k_d need to be set
manually, which is inconvenient in actual operation and when parameters change.
Therefore a Genetic Algorithm based method is proposed in which k_p and k_d are
adjusted automatically. In this paper, a Genetic Algorithm controller is designed that
optimally approximates k_p and k_d automatically according to e1 and e2, which are
taken as inputs. The ADRC parameters are modified on-line under the Genetic
Algorithm rules in order to satisfy the requirements at different times and improve
the control performance of the ADRC. Based on the analysis above, the structure of
the designed Genetic Algorithm ADRC is shown in Fig. 2.
When solving optimization problems with genetic algorithms, the encoding scheme,
the genetic operators and the choice of the fitness function are very important,
because the fitness function is not only the criterion by which an individual's
adaptive capacity is judged, but also the key to ensuring that the best individual
solves the optimization problem. The fitness function is closely related to the
specific research question: it should reflect the essence of the problem under study
and also be easy to compute.
[Fig. 2. Structure of the Genetic Algorithm ADRC: a Genetic Algorithm block adjusts k_p and k_d of the NLSEF; the reference signals v1, v2 are compared with the ESO states z1, z2 to form the errors e1, e2; the NLSEF output u0 is compensated through 1/b and b to give the control u applied to the object with output y, while z3 feeds back the estimated disturbance]
Its reciprocal is the fitness function; the optimal control parameters are the
controller parameters corresponding to the largest fitness value.
2) Encoding and genetic operations
Encoding: individuals are encoded as real numbers, which avoids the encoding and
decoding trouble of the common binary representation. {k_p, k_d}, composed of the
ADRC parameters, is selected as the individual representation.
Genetic operations: a simple single-point crossover is used. Mutation is adaptive:
when the fitness is high, the mutation rate is reduced; when the fitness is low, the
mutation rate is increased. The common proportional selection and the
optimal-retention policy are chosen. The optimal-retention policy guarantees that the
best individual obtained so far will not be destroyed by crossover and mutation;
combined with the other selection methods, it is an important guarantee of reaching
the optimal value.
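The encoding and genetic operations described above can be sketched as a small real-coded GA. The fitness function here, the reciprocal of a toy tracking-error measure with a known minimum, is a stand-in for the actual control performance index, and the search ranges are assumed for illustration.

```python
import random

# Minimal sketch of the tuning loop: real-coded individuals {kp, kd},
# single-point crossover, adaptive mutation (rate falls as fitness rises)
# and optimal retention (elitism). The error measure is a toy stand-in,
# NOT the paper's performance index; BOUNDS are assumed search ranges.
random.seed(1)
BOUNDS = [(0.1, 10.0), (1.0, 60.0)]        # assumed ranges for kp, kd

def error_measure(ind):
    kp, kd = ind
    # toy quadratic "control error" with a known minimum at kp=1.5, kd=30
    return (kp - 1.5) ** 2 + ((kd - 30.0) / 10.0) ** 2

def fitness(ind):
    return 1.0 / (1.0 + error_measure(ind))   # reciprocal form, larger = better

def evolve(pop_size=30, gens=60):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        best = pop[0][:]                       # optimal retention (elitism)
        weights = [fitness(p) for p in pop]    # proportional selection
        nxt = [best]
        while len(nxt) < pop_size:
            a, b = random.choices(pop, weights=weights, k=2)
            cut = random.randint(1, len(BOUNDS) - 1)  # single-point crossover
            child = a[:cut] + b[cut:]
            # adaptive mutation: low rate for fit children, high for unfit
            rate = 0.05 + 0.4 * (1.0 - fitness(child))
            for i, (lo, hi) in enumerate(BOUNDS):
                if random.random() < rate:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.5)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Elitism makes the best fitness non-decreasing over generations, so the returned pair lands close to the toy optimum.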
4 Simulation Results
The simulation experiment is carried out on a ship whose nominal parameters of the
nonlinear Nomoto ship motion model are:
[Simulation figures: rudder angle (°) and heading angle (°) versus time (s) over 0–500 s, for two typical conditions]
References
1. Han, J.Q.: From PID technique to active disturbance rejection control technique. Control Engineering of China 9(3), 13–18 (2002)
2. Han, J.Q.: Auto disturbances rejection control technique. Frontier Science 1, 24–31 (2007)
3. Zhao, X.R., Zheng, Y., Hou, Y.H.: The rational spectrum modeling of the ocean wave and its simulation method. Journal of System Simulation 4(2), 33–39 (1992)
4. Hu, Y.H., Jia, X.L.: Theory and Application of Predictive Control of Ships Subject to State Constraints. Control and Decision 17(4), 542–547 (2000)
5. Zhou, Y.B., Pan, W.G., Xiao, H.R.: Design of Ship Course Controller Based on Fuzzy Adaptive Active Disturbance Rejection Technique. In: 2010 International Conference on Automation and Logistics, vol. 1, pp. 232–236 (2010)
A CD-ROM Management Device with Free Storage,
Automatic Disk Check Functions
Supported by Science and Technology Projects in Guangdong Province.
1 Department of Computer Science and Information Management,
Shengda Economics Trade & Management College of Zhengzhou,
Zhengzhou, China
chendaohe2006@yahoo.com.cn
2 College of Automation Science and Technology,
South China University of Technology,
Guangzhou, China
xhwang@scut.edu.cn
Abstract. With the development of information resources, there are more and
more CD-ROM resources. Traditional disc management, storage and search
methods cannot meet the current requirements of fast-paced CD-ROM
management informationization and standardization. Based on this demand, we
studied a distributed intelligent disc management system with random access,
fuzzy search and automatic pop-up functions.
0 Introduction
With the development of information resources, there are more and more CD-ROM
resources in libraries, archives, television stations, radio stations, hospitals, schools,
military units, enterprises and institutions. Large numbers of discs require special
storage systems. The original methods of manual management and numbered storage
and search in CD cases, boxes, drawers and cabinets cannot meet the current
requirements of fast-paced CD-ROM management informationization and
standardization. Therefore, designing an intelligent CD-ROM management device is
increasingly becoming a pressing need.
The system consists of two parts: the first is the disc storage mechanism, which
achieves free disc storage and automatic access; the second is the disc information
management and sharing system.
Actually, the numbers of discs that different users need to store and manage differ
greatly. For the sake of universality, a conveniently expandable design is necessary,
so this device uses a modular unit storage structure.
Each modular storage unit is mainly composed of an axial motor, a screw, a dial, a
dial drive motor, a rail and bracket components. The axial motor is a stepper motor
connected to the screw. By controlling the stepper motor, the screw rotates in the
commanded direction and drives the dial drive motor and the dial in axial movement.
The dial drive motor (a low-power stepper motor) and the dial together make up the
disc-check body. When it arrives at the designated location, the dial drive motor
drives the dial to release the disc. Why do the axial motor and the dial drive motor
use stepper motors? Compared with DC and AC motors, the stepper motor's biggest
advantage is that its control is simple. Provided it does not lose steps, it can be used
in open-loop control, and over a short travel it achieves high accuracy. Using stepper
motor control is not only simple but also avoids the more
484 D. Chen, X. Wang, and W. Li
The disc management device has a specially designed detection circuit for random
deposition. When a user needs to deposit a disc, an empty position can be chosen at
random, guided by the indicator light.
The circuit consists of two main parts: storage-position detection and indication.
Optocoupler sensors installed at the entrance of each disc storage slot detect the
storage position: when a sensor's beam is blocked by a disc, its output level changes,
identifying the occupied position. The main control circuit reads the optocoupler
level signals through a parallel-in/serial-out shift register. The sensors are arranged
spatially in a delta shape, which reduces the limitation that the emitter and receiver
tube thickness places on the storage slot size. The indicator lamps are ultra-bright
LEDs, driven by the main control circuit through a serial-in/parallel-out shift
register, as shown in Figure 3.
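The two shift-register paths described above can be illustrated behaviourally. This is not device firmware: the register width and bit ordering are assumed (74HC165/74HC595-style parts are a typical choice), and '1' is taken to mean a blocked beam, i.e. an occupied slot.

```python
# Behavioural sketch (not device code) of reading optocoupler levels through
# a parallel-in/serial-out shift register and driving lamps through a
# serial-in/parallel-out register. Widths and bit order are assumptions.

def piso_read(parallel_inputs):
    """Latch the optocoupler levels, then shift them out one bit per clock."""
    bits = list(parallel_inputs)          # latched on the load edge
    out = []
    for _ in range(len(bits)):
        out.append(bits[0])               # serial output pin
        bits = bits[1:] + [0]             # one clock pulse shifts the chain
    return out

def sipo_write(bit_stream, width=8):
    """Shift lamp bits in serially, then latch them onto parallel outputs."""
    reg = [0] * width
    for b in bit_stream:
        reg = reg[1:] + [b]               # each clock shifts one stage
    return reg                            # latched parallel outputs

levels = [1, 0, 1, 1, 0, 0, 1, 0]         # 1 = beam blocked = disc present
stream = piso_read(levels)
empty = [1 - b for b in stream]           # lamp on where the slot is empty
lamps = sipo_write(empty)
```

Reading the chain recovers the slot occupancy bit-for-bit, and the complemented word lights exactly the empty positions.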
Here, an Altera EPM7128 CPLD is used, with MAX+plus II as the development
software, as shown in Figure 4.
The system control combines software and hardware. The computer only needs to
send pulse and direction instructions to the interface circuit, so the software occupies
very little storage. Since program execution is not time-critical, the computer can
freely perform other tasks between motor steps, allowing more elaborate motion
control of the stepper motor.
As the core of the interface circuit, the pulse generating circuit is shown in Figure 5.
The circuit consists of data latches, counters, data comparators and the necessary
logic. The pulse input in the figure is the continuous pulse signal sent by the pulse
source. It is controlled by the "start" signal and the "send end" signal: only when the
"start" signal is active and the "send end" signal is inactive can the pulse signal from
the pulse source enter the counter and be output to the stepper motor drive.
The pulse-sending flow chart is shown in Figure 6. First, the CPU sends a clear
signal to the counter through the expansion port of the CPLD, then writes the
planned number of pulses to the data latch. After latching, the latch outputs the pulse
count in hexadecimal. The data comparator has two inputs: the latch supplies the
planned pulse count and the counter supplies zero, so the two inputs are unequal and
the comparator output is zero; after negation it is sent to the AND gate, and the
"start" signal is then sent through the expansion port. The pulse source's signal can
now pass through the AND gate to the pulse output port and the counter's pulse
input. As pulses are output, the counter counts; when the count equals the data in the
latch, the comparator output becomes 1, and after negation it blocks the AND gate,
ending the pulse-sending process.
When the CPU detects the send-end signal, it can repeat the above process for the
next pulse transmission. If a large number of pulses must be sent, this can be done
over multiple cycles, or by widening the CPLD latches, counters and data
comparator.
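The latch/counter/comparator gating just described can be mirrored in software to make the timing explicit. This is a cycle-level behavioural sketch of the logic in Figure 5, not the CPLD implementation.

```python
# Cycle-level sketch of the pulse-gating logic: the latch holds the planned
# pulse count, the counter counts sent pulses, and the comparator (negated)
# closes the AND gate once the two are equal.

def send_pulses(planned, source_cycles):
    latch = planned                 # CPU writes the pulse count to the latch
    counter = 0                     # cleared by the CPU before starting
    sent = []
    start = 1
    for clk in range(source_cycles):
        send_end = 1 if counter == latch else 0   # comparator: equal -> 1
        gate_open = start and not send_end        # negated, ANDed with start
        if gate_open:
            sent.append(clk)        # pulse passes to the motor drive...
            counter += 1            # ...and to the counter input
    return sent

pulses = send_pulses(planned=5, source_cycles=20)
```

However long the source keeps clocking, exactly the planned number of pulses pass the gate.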
If a stepper motor is started or stopped at too high a frequency, or its speed changes
abruptly, it will lose steps or even fail to start, so the program must provide
speed-ramping (acceleration and deceleration) functions. Many ramping curves are
discussed in the literature; the ramping circuit block diagram is shown in Figure 7.
This part of the circuit consists of a frequency divider, a frequency synthesizer
circuit, a data latch and a data selector. After the crystal pulse signal passes through
the frequency divider and pulse synthesis circuit, several pulse signals of different
frequencies are produced and fed to the data inputs of the data selector. The CPU
sends the corresponding control signal to the data selector through the data latch, so
the output port delivers pulses of the selected frequency. This pulse signal serves as
the pulse source for the pulse circuit described above. For process control, the pulse
flow chart only needs to set the pulse frequency at any step before the start signal is
sent.
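The ramping idea can be sketched as a schedule generator: the CPU steps the data selector through the available divider frequencies so the motor accelerates before cruising and decelerates before stopping. The frequency table and segment length below are assumed values for illustration.

```python
# Sketch of the lifting-speed (ramp) schedule: the divider/synthesizer offers
# a small set of pulse frequencies, and the CPU selects them in sequence.
# FREQS_HZ and the pulses-per-segment value are assumed, not from the text.
FREQS_HZ = [500, 1000, 2000, 4000]      # available selector inputs (assumed)

def ramp_schedule(total_pulses, seg=50):
    """Return (frequency, pulse_count) segments: up-ramp, cruise, down-ramp."""
    up = [(f, seg) for f in FREQS_HZ[:-1]]
    down = [(f, seg) for f in reversed(FREQS_HZ[:-1])]
    ramp_pulses = sum(n for _, n in up + down)
    cruise = total_pulses - ramp_pulses
    if cruise < 0:
        raise ValueError("trip too short for this ramp")
    return up + [(FREQS_HZ[-1], cruise)] + down

sched = ramp_schedule(1000)
```

The schedule conserves the total pulse count, so the motor still lands exactly on the target position.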
Server is used as the development tool; considering the needs of actual use, a
combined C/S (Client/Server) and B/S (Browser/Server) structure is adopted.
Operations with large data volumes, such as image file uploads, use the C/S
implementation. Operations with small data volumes, such as adding, modifying and
deleting disc information and user management, use the B/S implementation.
The distributed unit uses the WinCE operating system and communicates with the
lower control board by writing low-level ISA-bus driver functions. A small
embedded database is installed on the WinCE system; together with the main server
it forms a networked database structure kept synchronized with the main server.
When information on the server is modified, the server notifies the distributed unit
to trigger a database update.
4 Experimental Results
The project has successfully developed a multi-layer disc management pilot
prototype. It allows a CD-ROM to be accessed through a web page by disc name,
number, type, content and other related information, and uses a disc directory
structure scanning function to save the directory information and associate it
automatically with the disc number. This overcomes the shortcomings of existing
technology, which saves information in only one form and uses space inefficiently.
The modular structure is designed to allow unlimited expansion, providing a
reference for the design of similar products.
5 Conclusion
With the increase in disc capacity, CD-ROM management will become more
intelligent and user-friendly. Future improvements will focus on network
management, remote query and information sharing, making disc management more
efficient and convenient.
References
1. Xia, M.: Research and Design of Library CD-ROM. Suzhou University (2002)
2. Zhen, T.: Analysis and Design of CD-ROM Electronic Records Management System Based on Disc Database. Xi'an University of Architecture and Technology (2002)
3. Zha, Y.: CD-ROM Library File Cache Management System. Wuhan University (2004)
4. Xu, D.Y.: Principles of Disc Storage System. National Defence Industry Press (January 2000)
An Efficient Multiparty Quantum Secret Sharing with
Pure Entangled Two Photon States*
1 Introduction
Secret sharing has been studied extensively in the classical setting and was extended
to the quantum setting. The first quantum secret sharing (QSS) protocol was presented
by Hillery, Buzek and Berthiaume [1] in 1999. Afterwards, many studies focused on
QSS in both theoretical [2-4] and experimental [5-6] aspects. At present, most QSS
protocols split a (classical or quantum) secret by quantum-mechanical principles, and
they can be divided into two categories: those that split a classical secret and those
that share a quantum secret (i.e., an arbitrary unknown quantum state). In this paper,
we mainly focus on the former.
To date, many QSS schemes for sharing a classical secret have been proposed.
According to the quantum resources used, these QSS schemes can be divided into
three categories: QSS with single photons [7-10], QSS with maximally entangled
states [11-14] and QSS with non-maximally entangled states (i.e., pure entangled
states) [15, 16]. For the first kind, with single photons, the most difficult problem is
obtaining ideal single photons, which requires high-precision experimental
equipment. For the second kind, with maximally entangled states, the generation and
distribution of maximally entangled multi-particle states is a difficult task. For the
third kind of QSS with the pure
* This work was supported by the Natural Science Foundation of Anhui Province (No.
11040606M141), the Research Program of the Anhui Province Education Department (No.
KJ2010A009) and the 211 Project of Anhui University.
490 R.-h. Shi and H. Zhong
entangled states, the advantage is that they are easy to implement, but the drawback
is low efficiency. For example, each pure entangled state carries only one bit of
classical information in References [15, 16].
In this paper, we present an efficient multiparty quantum secret sharing scheme with
pure entangled two-photon states. In our scheme, the sender uses pure entangled
two-qubit states as the quantum resource instead of pure entangled three- or
multi-qubit states, which makes our scheme more practical than the scheme in Ref.
[15], since generating three- or multi-qubit states is more difficult than generating
two-qubit states. In particular, the total efficiency of our QSS scheme is higher than
that of the existing third type of QSS schemes [15, 16].
|ψ⟩ab = α|00⟩ab + β|11⟩ab ,   (1)
where α² + β² = 1 and α ≠ β.
Alice first applies any one of the four unitary operators {U1, U2, U3, U4} to photon
a and then sends photon a to Bob, where
U1 = I = |0⟩⟨0| + |1⟩⟨1| ,  U2 = σ_z = |0⟩⟨0| − |1⟩⟨1| ,
U3 = σ_x = |0⟩⟨1| + |1⟩⟨0| ,  U4 = iσ_y = |0⟩⟨1| − |1⟩⟨0| .   (2)
After receiving photon a from Alice, the state of Bob's two-photon pair (a, b) is one
of the following four cases:
(I ⊗ I)|ψ⟩ab = α|00⟩ab + β|11⟩ab = |ψ⟩ab ,
(σ_z ⊗ I)|ψ⟩ab = α|00⟩ab − β|11⟩ab = |ψ′⟩ab ,
(σ_x ⊗ I)|ψ⟩ab = α|10⟩ab + β|01⟩ab = |φ⟩ab ,
(iσ_y ⊗ I)|ψ⟩ab = α|10⟩ab − β|01⟩ab = |φ′⟩ab .   (3)
Obviously, the above four states are not mutually orthogonal, so these states cannot be
distinguished with certainty. But, they can be distinguished with some probability of
success.
In order to distinguish the four states, Bob first performs a projection onto the
subspaces spanned by the basis states {|00⟩, |11⟩} and {|01⟩, |10⟩}, with projection
operators P1 = |00⟩⟨00| + |11⟩⟨11| and P2 = |01⟩⟨01| + |10⟩⟨10|. It is obvious that P1
and P2 are mutually orthogonal, and the following equations hold:
⟨φ|P1|φ⟩ab = 0 , ⟨φ′|P1|φ′⟩ab = 0 , ⟨ψ|P2|ψ⟩ab = 0 , ⟨ψ′|P2|ψ′⟩ab = 0 .   (4)
That is, if Bob obtains P1, then he knows that the state of the two-photon pair (a, b)
is either |ψ⟩ab or |ψ′⟩ab; if he gets P2, the state is either |φ⟩ab or |φ′⟩ab.
Without loss of generality, we assume that Bob obtains P1. Thus, Bob knows that the
state of the two-photon pair (a, b) is either |ψ⟩ab or |ψ′⟩ab, but he cannot know
which. To determine whether the state is |ψ⟩ab or |ψ′⟩ab, Bob performs a generalized
measurement on his two-photon entangled state with the corresponding POVM
elements in the subspace {|00⟩, |11⟩}:
M1 = (1/(2α²)) [[β², αβ], [αβ, α²]] ,
M2 = (1/(2α²)) [[β², −αβ], [−αβ, α²]] ,
M3 = [[1 − (β/α)², 0], [0, 0]] .   (5)
Here Σ(i=1..3) Mi = I. If Bob obtains M1, then his two-photon state is |ψ⟩ab, because
⟨ψ′|M1|ψ′⟩ab = 0. However, if he gets M3 the outcome is completely inconclusive
and Bob cannot obtain any information. Obviously, the success probability of
distinguishing |ψ⟩ab and |ψ′⟩ab is 2β² [18]. Similarly, for the case of P2 one can
show that the
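The discrimination step can be checked numerically. The sketch below assumes real coefficients with α ≥ β > 0, represents states as 2-vectors in the ordered basis {|00⟩, |11⟩}, and uses β² = 0.3 as an arbitrary example; it verifies that M1 never fires on |ψ′⟩ab and fires on |ψ⟩ab with probability 2β².

```python
import numpy as np

# Numerical check of the POVM of Eq. (5), restricted to the {|00>, |11>}
# subspace. Assumptions: alpha, beta real with alpha >= beta > 0; the value
# beta^2 = 0.3 is an arbitrary example, not from the text.
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)

psi   = np.array([alpha,  beta])   # alpha|00> + beta|11>
psi_p = np.array([alpha, -beta])   # alpha|00> - beta|11>

M1 = np.array([[beta**2,  alpha*beta], [ alpha*beta, alpha**2]]) / (2*alpha**2)
M2 = np.array([[beta**2, -alpha*beta], [-alpha*beta, alpha**2]]) / (2*alpha**2)
M3 = np.eye(2) - M1 - M2           # inconclusive element

p_succ = psi @ M1 @ psi            # probability that M1 fires on |psi>
```

The check confirms the elements sum to the identity, M3 is positive semidefinite (which needs α ≥ β), and the conclusive probability equals 2β².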
References
1. Hillery, M., Buzek, V., Berthiaume, A.: Quantum secret sharing. Phys. Rev. A 59(3), 1829–1834 (1999)
2. Cleve, R., Gottesman, D., Lo, H.K.: How to share a quantum secret. Phys. Rev. Lett. 83(3), 648–651 (1999)
3. Gottesman, D.: Theory of quantum secret sharing. Phys. Rev. A 61(4), 042311(1-8) (2000)
4. Sudhir, K.S., Srikanth, R.: Generalized quantum secret sharing. Phys. Rev. A 71(1), 012328(1-6) (2005)
5. Tittel, W., Zbinden, H., Gisin, N.: Experimental demonstration of quantum secret sharing. Phys. Rev. A 63(4), 042301(1-6) (2001)
6. Lance, A.M., Symul, T., Bowen, W.P., et al.: Tripartite quantum state sharing. Phys. Rev. Lett. 92(17), 177903(1-4) (2004)
7. Zhang, Z.J.: Multiparty quantum secret sharing of secure direct communication. Phys. Lett. A 342(1-2), 60–66 (2005)
8. Zhang, Z.J., Li, Y., Man, Z.X.: Multiparty quantum secret sharing. Phys. Rev. A 71(4), 044301(1-4) (2005)
9. Deng, F.G., Zhou, H.Y., Long, G.L.: Bidirectional quantum secret sharing and secret splitting with polarized single photons. Phys. Lett. A 337(4-6), 329–334 (2005)
10. Wang, T.Y., Wen, Q.Y., Chen, X.B., et al.: An efficient and secure multiparty quantum secret sharing scheme based on single photons. Opt. Commun. 281(24), 6130–6134 (2008)
11. Zhang, Z.J., Man, Z.X.: Multiparty quantum secret sharing of classical messages based on entanglement swapping. Phys. Rev. A 72(2), 022303(1-4) (2005)
12. Deng, F.G., Long, G.L., Zhou, H.Y.: An efficient quantum secret sharing scheme with Einstein-Podolsky-Rosen pairs. Phys. Lett. A 340(1-4), 43–50 (2005)
13. Deng, F.G., Li, X.H., Li, C.Y., et al.: Multiparty quantum secret splitting and quantum state sharing. Phys. Lett. A 354(3), 190–195 (2006)
14. Shi, R.H., Huang, L.S., Yang, W., Zhong, H.: Quantum secret sharing between multiparty and multiparty with Bell states and Bell measurements. Sci. China Phys. Mech. Astron. 53, 2238–2244 (2010)
15. Zhou, P., Li, X.H., Liang, Y.J., et al.: Multiparty quantum secret sharing with pure entangled states and decoy photons. Physica A 381, 164–169 (2007)
16. Shi, R.H., Zhong, H.: Multiparty quantum secret sharing with the pure entangled two-photon states. Quantum Information Processing (2011), doi:10.1007/s11128-011-0239-9
17. Bennett, C.H., Wiesner, S.J.: Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states. Phys. Rev. Lett. 69, 2881–2884 (1992)
18. Luo, C.L., Ouyang, X.F.: Controlled dense coding via generalized measurement. International Journal of Quantum Information 7(1), 365–372 (2009)
Problems and Countermeasures of Educational
Informationization Construction in
Colleges and Universities
1 Introduction
Promoting educational modernization through educational informationization and
changing the traditional education pattern through information technology have
become the inevitable trend of educational development. The level of educational
informationization has become an important mark of a nation's educational level.
(1) The Multimedia of the Teaching Content and the Intelligent Teaching Process
By integrating text, figures, images and sound, it handles various kinds of media
information and makes the teaching content structured, dynamic and graphic.
Meanwhile, the combination of IT and cognitive science brings about the automation
of computer thinking and mental labor. For instance, through an intelligent teaching
system, students can study, review, and take simulated tests and self-assessment
tests.
(2) The Network and Virtualization of Information Transmission
Information transmission becomes network-based and virtualized, connecting the
global educational resources into a whole for large numbers of learners. Meanwhile,
the virtual world created by computer simulation sets up new teaching patterns for
online teaching, distance learning and virtual laboratories. Learners perceive the
objective world and acquire related skills through Virtual Reality.
(3) The Digitalization of the Learning Resources and the Variety of the Teaching
Pattern Reforms
The digitalization of information processing and transmission enables the network to
collect plenty of teaching resources, such as multimedia resource databases, library
information databases, dynamic comprehensive information databases and so on.
Learners can choose relevant learning resources according to their different
conditions, purposes and phases, breaking the limitations of time, space and the
teaching management of the traditional class education system. This will promote
mass, global and lifelong education.
an open learning community so that the new personnel training modes of lifelong
learning, learning organization, etc., can be formed under the circumstances of
educational informationization.
(1) Strive for financial support from higher levels and receive donations from the
community to increase available funds. Colleges and universities can set up
resource-searching groups responsible for collecting all kinds of outstanding teaching materials
500 J. Luo and J. Yu
available from the market according to teaching needs. For example, regular contracts
can be signed with Internet companies, such as Teaching Resources Networks of
Chinese Universities and others of such kind to buy network teaching materials and
resources.
(2) Virtuous cycles of educational cooperation can be created with enterprises,
educational institutions and financial institutions, for example by mirroring all or
part of some high-level, high-quality educational resource libraries on the Internet so
as to obtain newer and more specialized resources and links. A school's WWW
server can also be linked directly to a number of free educational teaching resource
sites on the Internet to share teaching resources.
(3) Try to gain non-state donations and achieve equipment replacement with the
manufacturers.
4 Conclusion
The process of educational informationization is one of continuous application of
information science, in which many problems and phenomena will necessarily occur
and remain for us to recognize and solve, which will in turn promote the
development of educational informationization theories.
References
[1] Cai, Y.: The Connotation of the Concept of Educational Informationization: from a Perspective of Sociology. Journal of Educational Science of Hunan Normal University 4(1), 18–22 (2005)
[2] Lu, P., Ge, N., Liu, Q.: The Gateway to the Future of China: the Process of Informationization in China. Nanjing University Press, Nanjing (1998)
[3] Wang, H., et al.: The Modern Distant Learning which Adapts to the Knowledge Economy. Higher Education Research (5) (1999)
[4] Fu, D.: The Purpose, Content and Significance of Educational Informationization. The Study of Educational Technology (4) (2000)
[5] Luo, J., Yang, J., et al.: Construct the Platform of Campus Network Course and Accelerate the Construction of Course. Heilongjiang Researches on Higher Education (10), 146–147 (2006)
[6] Zhang, B., Luo, J.: The Study of Multimedia Courseware Resource Database Based on the Campus Network. Journal of Jiangxi University of Science and Technology (3), 36–38 (2007)
[7] Luo, J., Yang, J., et al.: Several Problems in the Network Construction of Campus Electronic Commerce. Market Modernization, 11-3518/TS (20), 122–123 (2006)
[8] Chen, F.: Introduction Remarks on the Socialization of Technology: a Sociology Study of Technology. Renmin University of China Press, Peking (1995)
Synthesis and Characterization of Eco-friendly
Composite: Poly(Ethylene Glycol)-Grafted Expanded
Graphite/Polyaniline
Mincong Zhu1, Xin Qing1, Kanzhu Li1, Wei Qi1, Ruijing Su1, Jun Xiao2,
Qianqian Zhang2, Dengxin Li1,*, Yingchen Zhang1,2,3, and Ailian Liu3
1 College of Environmental Science and Engineering, Donghua University,
Shanghai 201620, P.R. China
2 College of Textiles, Zhongyuan University of Technology,
Henan 450007, P.R. China
3 Langsha Holding Group Co., Ltd, Zhejiang 322000, P.R. China
lidengxin@dhu.edu.cn, yczhang2002@163.com
1 Introduction
Expanded graphite (EG), a modified graphite containing abundant nano- and
micro-pore structures, is an important raw material for the production of flexible
graphite sheets, which have been widely used as gaskets, thermal insulators,
fire-resistant composites, etc. [1-3] EG keeps a layered structure similar to natural
graphite flakes but with larger interlayer spacing, and has a higher specific surface
area than carbon powder and carbon nanotubes. [4,5] Therefore, polymers can easily
be grafted onto the surface of EG nanosheets.
Polyaniline (PANi) is a well-known and preferred conducting polymer: its synthesis
route is straightforward, involving only a single oxidation step of aniline (An), and
above all its monomer is quite inexpensive. Due to its easy synthesis, environmental
stability and simple doping/dedoping chemistry, PANi has seen extensive use in
many commercial applications such as lightweight
* Corresponding author.
502 M. Zhu et al.
2 Experimental
2.1 Materials and Reagents
Commercially available expandable graphite was purchased from Qingdao Nanshu
Hongda Graphite Co., Ltd. Dodecylbenzenesulfonic acid (DBSA) was bought from
TCI (Shanghai) Development CO., Ltd (China) and isophorone diisocyanate (IPDI)
was obtained from Shanghai Spring Chemical Technology Co., Ltd (China). The
other reagents used, such as aniline (An), ammonium persulfate (APS), NaOH, HCl,
N,N-dimethylformamide (DMF), thionyl chloride (SOCl2), tetrabutyl ammonium
bromide (TBAB), 4-aminophenol and polyethylene glycol (PEG, molecular weight
MW = 1000), were all of analytical grade and bought from Sinopharm Chemical
Reagent Company (Shanghai, China). All reagents were used without further
purification and all solutions were prepared with distilled water.
2.3 Characterization
3.1 IR Spectra
Fig. 1 shows the IR spectra of (a) EG, (b) EG/PANi (m(EG/An) = 1.0), (c) EG-g-PEG
and (d) PEG-grafted EG/PANi. The spectra in Fig. 1a and Fig. 1c show the same
broad absorption peak at 3448.6 cm-1, due to hydrogen-bonded O-H stretching
vibration. After in-situ polymerization of PANi, new peaks appeared in the spectra.
Fig. 1b shows the spectrum of EG/PANi, in which new peaks appeared at lower
wavenumbers, between 500 and 1000 cm-1. The peak at 3498.7 cm-1 is attributed to
O-H stretching vibration. The PANi gave absorption bands at 1573.8, 1386.7,
1186.1, 1124.4, 1070.4 and 1008.7 cm-1, as shown in Fig. 1b. Some peaks appeared
both in the spectra of PEG-grafted EG (Fig. 1c) and PEG-grafted EG/PANi (Fig. 1d):
the broad peak at 3448.6 cm-1 was attributed to the O-H bond; the peak at 2925.9
cm-1 corresponded to saturated C-H stretching absorption; and the absorption at
1451.2 cm-1 was attributed to the vibration absorption of the CH2 group in PEG.
Fig. 1. IR spectra of the products: (a) EG, (b) EG/PANi ((mEG/An) = 1.0), (c) EG-g-PEG,
(d) PEG-grafted EG/PANi
The XRD patterns of (a) EG, (b) EG/PANi ((mEG/An) = 1.0), (c) EG-g-PEG and (d)
PEG-grafted EG/PANi are shown in Fig. 2. As shown in Fig. 2a, all the patterns can
be easily indexed as graphite materials, in good agreement with the literature
values (JCPDS Card number 75-2078). Fig. 2b shows the diffraction pattern of the
as-prepared EG/PANi composite, in which there is one diffraction peak located at
2θ = 18.78°. Fig. 2c shows the diffraction pattern of the as-prepared PEG-g-EG
composite, in which there is one diffraction peak located at 2θ = 21.08° and a
broad band centered at 2θ = 14–24°, which reveals that the sample is partially
crystallized.
Fig. 2. X-ray diffraction patterns of products: (a) EG, (b) EG/PANi ((mEG/An)=1.0), (c) EG-g-
PEG, (d) PEG-grafted EG/PANi
Fig. 3 shows SEM images of (a) EG, (b) EG/PANi ((mEG/An) = 1.0), (c) EG-g-PEG
and (d) PEG-grafted EG/PANi. The micrograph of EG, shown in Fig. 3a, reveals that
the interlayers of the graphite flakes are separated by an increased distance,
which leads to a highly porous structure and a high surface area. PANi is grafted
not only on the surface but also in the interspaces of EG. As shown in Fig. 3b,
significant changes in morphology are seen in the EG/PANi composite synthesized by
in-situ polymerization. PANi is absorbed into the pores of EG, which yields
composites in which no individual phase can be distinguished. As shown in Fig. 3c,
PEG is likewise grafted not only on the surface but also in the interspaces of EG.
As shown in Fig. 3d, the PEG-grafted EG composite showed good dispersion, which
indicates that the PEG chains can reduce the van der Waals forces between composite
flakes.
Synthesis and Characterization of Eco-friendly Composite 505
Fig. 3. SEM images of the products: (a) EG (×50.0k), (b) EG/PANi ((mEG/An) = 1.0)
(×50.0k), (c) EG-g-PEG (×50.0k), (d) PEG-grafted EG/PANi (×5.0k)
4 Conclusions
In this work, EG was modified by in-situ polymerization in the presence of An to
prepare the EG/PANi composite. Subsequently, the EG/PANi composite was
graft-polymerized with the as-prepared PEG-g-PANi under routine conditions. The
SEM, IR and XRD observations demonstrate that the surface of the EG flakes was
successfully modified by both PANi and PEG-g-PANi. The PEG-grafted EG/PANi
composite was prepared successfully.
References
1. Chung, D.D.L.: Exfoliation of graphite. J. Mater. Sci. 22, 4190–4198 (1987)
2. Furdin, G.: Exfoliation process and elaboration of new carbonaceous materials. Fuel 77,
479–485 (1998)
3. Inagaki, M., Tashiro, R., Washino, Y., Toyoda, M.: Exfoliation process of graphite via
intercalation compounds with sulfuric acid. J. Phys. Chem. Solids 65, 133–137 (2004)
4. Celzard, A., Mareche, J.F., Furdin, G., Puricelli, S.: Electrical conductivity of anisotropic
expanded graphite-based monoliths. J. Phys. D Appl. Phys. 33, 3094–3101 (2000)
5. Chen, G.H., Weng, W.G., Wu, D.J., Wu, C.L.: PMMA/graphite nanosheets composite and
its conducting properties. Eur. Polym. J. 39, 2329–2335 (2003)
6. Shimano, J.Y., MacDiarmid, A.G.: Polyaniline, a dynamic block copolymer: key to
attaining its intrinsic conductivity. Synth. Met. 123, 251–262 (2001)
7. Ahmad, R., Kumar, R.: Conducting Polyaniline/Iron Oxide Composite: A Novel
Adsorbent for the Removal of Amido Black 10B. J. Chem. Eng. Data 55(9), 3489–3493
(2010)
8. Chawla, K., Lee, S., Lee, B.P., Dalsin, J.L., Messersmith, P.B., Spencer, N.D.: A novel
low-friction surface for biomedical applications: Modification of poly(dimethylsiloxane)
(PDMS) with polyethylene glycol (PEG)-DOPA-lysine. J. Biomed. Mater. Res. 90,
742–749 (2008)
9. Tryba, B., Morawski, A.W., Inagaki, M.: Preparation of exfoliated graphite by microwave
irradiation. Carbon 43, 2417–2419 (2005)
10. Zhu, M.C., Qi, W., Mao, Y.J., Hu, Y., Qing, X., Li, K.Z., Yang, T., Zhang, Y.C., Li, D.X.,
Chai, H.M.: Synthesis, characterization and adsorption property of expanded
graphite-conducting polymer composite. Accepted by Adv. Mater. Res. (2011)
11. Xiang, C., Li, L.C., Jin, S.Y., Zhang, B.Q., Qian, H.S., Tong, G.X.: Expanded
graphite/polyaniline electrical conducting composites: Synthesis, conductive and dielectric
properties. Mater. Lett. 64, 1313–1315 (2010)
12. Wang, P., Tan, K.L.: Synthesis and Characterization of Poly(ethylene glycol)-Grafted
Polyaniline. Chem. Mater. 13(2), 581–587 (2001)
The Features of a Sort of Five-Variant Wavelet Packet
Bases in Sobolev Space*
Abstract. Wavelet packets have been the focus of active research for twenty
years, both in theory and in applications. In this work, the notion of orthogonal
nonseparable five-variant wavelet packets is introduced, and a new approach for
designing them by an iteration method is presented. We prove that the five-variant
wavelet packets possess the orthogonality property, give three orthogonality
formulas for the wavelet packets, and show how to construct nonseparable
five-variant wavelet packet bases. The orthogonal five-dimensional wavelet packets
may have arbitrarily high regularity.
1 Introduction
The concept of wavelet packets has received much research attention, and evidence
has shown that they can be used to improve the localization of wavelet bases in the
frequency domain. Although the Fourier transform has been a major tool in analysis
for over a century, it has a serious drawback for signal analysis in that it hides
in its phases information concerning the moment of emission and the duration of a
signal. The main feature of the wavelet transform is to hierarchically decompose
general functions, such as a signal or a process, into a set of approximation
functions with different scales. Wavelet packets [1], owing to their good
properties, have attracted considerable attention and can be widely applied in
science and engineering [2,3]. Coifman and Meyer first introduced the notion of
orthogonal wavelet packets. Chui and Li [4] generalized the concept of orthogonal
wavelet packets to the case of non-orthogonal wavelet packets, so that wavelet
packets can be employed in the case of spline wavelets and so on. The introduction
of biorthogonal wavelet packets is due
*
Foundation item: This research is supported by the Natural Science Foundation of
Shaanxi Province (Grant No. 2009JM1002) and by the Science Research Foundation of
the Education Department of Shaanxi Provincial Government (Grant No. 11JK0468).
**
Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 507–512, 2011.
© Springer-Verlag Berlin Heidelberg 2011
508 Y. Hu, Q. Chen, and L. Zhao
to Cohen and Daubechies [5]. The notion of nontensor product wavelet packets is due
to Shen [6]. Since the majority of information is multidimensional, many
researchers are interested in multivariate wavelet theory. The classical method for
constructing multivariate wavelets is the tensor product of univariate wavelets.
However, this method has obvious defects, such as scarcity of design freedom. The
objective of this paper is to generalize the concept of univariate orthogonal
wavelet packets to orthogonal five-variate wavelet packets.
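The tensor-product construction mentioned above can be made concrete in a small numerical sketch (a hypothetical illustration, not part of this paper): from a univariate orthonormal low-pass/high-pass filter pair — the Haar pair is assumed here for simplicity — outer products give the four separable 2-D filters, and in the five-variate case one would form all 2^5 = 32 such products.

```python
import numpy as np

# Univariate Haar filters: an orthonormal low-pass / high-pass pair.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)
g = np.array([1.0, -1.0]) / np.sqrt(2.0)

# Tensor (separable) construction: outer products give the four 2-D filters
# (LL approximation, LH/HL/HH details).
F = np.stack([np.outer(a, b).ravel() for a in (h, g) for b in (h, g)])

# The flattened 2-D filters form an orthonormal set.
print(np.allclose(F @ F.T, np.eye(4)))  # True
```

The sketch also illustrates the scarcity of design freedom: every separable filter is forced to be a product of the same two univariate filters.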
We start from the following notation. Z and N stand for the integers and the
nonnegative integers, respectively. Let R be the set of all real numbers, and let
R^n denote the n-dimensional Euclidean space. By L²(R^n) we denote the space of
square integrable functions on R^n. Set x = (x₁, x₂, ..., x_n) ∈ R^n,
v = (v₁, v₂, ..., v_n), ω = (ω₁, ω₂, ..., ω_n), z_ℓ = e^{-iω_ℓ/2}, where
ℓ = 1, 2, ..., n and n ∈ N, n ≥ 2. The inner product of arbitrary
g(x), f(x) ∈ L²(R^n) and the Fourier transform of f(x) are defined, respectively, as

⟨g, f⟩ := ∫_{R^n} g(x) \overline{f(x)} dx,   f̂(ω) := ∫_{R^n} f(x) e^{-ix·ω} dx,
⟨h, f⟩_{H^s(R^n)} := (2π)^{-n} ∫_{R^n} ĥ(ω) \overline{f̂(ω)} (1 + |ω|²)^s dω,  h, f ∈ L²(R^n). (1)

[ĝ, f̂](ω) := Σ_{u∈Z^n} ⟨g, f(· − u)⟩ e^{−iuω} = Σ_{u∈Z^n} ĝ(ω + 2πu) \overline{f̂(ω + 2πu)}. (2)
⟨h, f⟩_{H^s(R^n)} = ⟨h, f⟩ + ⟨h^{(s)}, f^{(s)}⟩,  h, f ∈ L²(R^n). (3)
⟨φ(· − u), φ(· − v)⟩ = δ_{u,v},  u, v ∈ Z^n, (4)
Fc(ω) = C(ω) = Σ_{v∈Z^n} c(v) e^{−2πivω}. (6)
Note that the discrete-time Fourier transform is Z^n-periodic. Let T_v φ(x) stand for
the translate φ(x − v).
φ(x) = 32 Σ_{v∈Z⁵} b(v) φ(2x − v), (7)

φ̂(ω) = B(z₁, z₂, z₃, z₄, z₅) φ̂(ω/2), (8)

B(z₁, z₂, z₃, z₄, z₅) = Σ_{v∈Z⁵} b(v) z₁^{v₁} z₂^{v₂} z₃^{v₃} z₄^{v₄} z₅^{v₅}. (9)

Define a subspace U_l ⊂ L²(R⁵) (l ∈ Z) by

U_l = clos_{L²(R⁵)} ⟨ 2^{5l/2} φ(2^l · − n) : n ∈ Z⁵ ⟩. (10)
ψ_μ(x) = 32 Σ_{v∈Z⁵} d^{(μ)}(v) φ(2x − v),  μ ∈ Γ, (12)

ψ̂_μ(ω) = D^{(μ)}(z₁, z₂, z₃, z₄, z₅) φ̂(ω/2),  μ ∈ Γ, (13)

D^{(μ)}(z₁, z₂, z₃, z₄, z₅) = Σ_{n∈Z⁵} d^{(μ)}(n) z₁^{n₁} z₂^{n₂} z₃^{n₃} z₄^{n₄} z₅^{n₅}. (14)

Taking the Fourier transform of both sides yields

Φ̂_{32n+μ}(ω) = D^{(μ)}(z₁, z₂, z₃, z₄, z₅) Φ̂_n(ω/2), (19)

where μ ∈ Γ₀ and D^{(μ)}(z₁, z₂, z₃, z₄, z₅) = D^{(μ)}(ω/2) = Σ_{v∈Z⁵} d^{(μ)}(v) z₁^{v₁} z₂^{v₂} z₃^{v₃} z₄^{v₄} z₅^{v₅}.
Σ_{ε∈{1,−1}⁵} |P(ε₁z₁, ε₂z₂, ε₃z₃, ε₄z₄, ε₅z₅)|² = 1,

where the sum runs over all 32 sign patterns ε = (ε₁, ε₂, ε₃, ε₄, ε₅).
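The sign-pattern identity above is the five-variate analogue of the classical univariate quadrature-mirror condition |P(z)|² + |P(−z)|² = 1. As a hedged numerical sketch, the univariate case can be checked directly for the Haar symbol P(z) = (1 + z)/2 (an illustrative choice, not a filter from this paper):

```python
import numpy as np

# Haar symbol P(z) = (1 + z)/2 (illustrative, not from this paper).
def P(z):
    return (1.0 + z) / 2.0

omega = np.linspace(0.0, 2.0 * np.pi, 1001)
z = np.exp(-1j * omega)

# Univariate analogue of the 32-term sign-pattern sum: two sign patterns.
total = np.abs(P(z)) ** 2 + np.abs(P(-z)) ** 2
print(np.allclose(total, 1.0))  # True on the whole unit circle
```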
Λ_{μ,ν} = Σ_{j=0}^{1} D^{(μ)}(y₁, y₂, (−1)^j z₃, (−1)^j z₄, (−1)^j z₅) \overline{D^{(ν)}(y₁, y₂, (−1)^j z₃, (−1)^j z₄, (−1)^j z₅)} + ···
Proof. For the case m = n, (22) follows from Theorem 1. For m ≠ n with
m, n ∈ Γ₀, (22) can also be established from Theorem 1. Assume now that m ≠ n and
at least one of {m, n} does not belong to Γ₀, and rewrite m, n as
m = 32m₁ + μ₁, n = 32n₁ + ν₁, where m₁, n₁ ∈ Z₊ and μ₁, ν₁ ∈ Γ₀.

Case 1. If m₁ = n₁, then μ₁ ≠ ν₁. By (14), (16) and (18), (22) holds, since

(2π)⁵ ⟨Φ_m(·), Φ_n(· − k)⟩ = ∫_{R⁵} Φ̂_{32m₁+μ₁}(ω) \overline{Φ̂_{32n₁+ν₁}(ω)} exp{−ikω} dω
= ∫_{R⁵} D^{(μ₁)}(z₁, ..., z₅) Φ̂_{m₁}(ω/2) \overline{Φ̂_{n₁}(ω/2)} \overline{D^{(ν₁)}(z₁, ..., z₅)} e^{−ikω} dω
= ∫_{[0,2π]⁵} Λ_{μ₁,ν₁} exp{−ikω} dω = O.

Case 2. If m₁ ≠ n₁, we write m₁ = 32m₂ + μ₂, n₁ = 32n₂ + ν₂, where
m₂, n₂ ∈ Z₊ and μ₂, ν₂ ∈ Γ₀. If m₂ = n₂, then μ₂ ≠ ν₂; similarly to Case 1, we
have ⟨Φ_m(·), Φ_n(· − k)⟩ = O, that is, the proposition follows in this case. If
m₂ ≠ n₂, we write m₂ = 32m₃ + μ₃, n₂ = 32n₃ + ν₃ once more, where m₃, n₃ ∈ Z₊ and
μ₃, ν₃ ∈ Γ₀. Thus, after finitely many steps (denoted by r), we obtain
m_r, n_r ∈ Γ₀ and μ_r, ν_r ∈ Γ₀. If m_r = n_r, then μ_r ≠ ν_r, and (22) follows
similarly to Case 1. If m_r ≠ n_r, then, similarly to Lemma 1, we conclude that
⟨Φ_m(·), Φ_n(· − k)⟩
= (2π)^{−5(r+1)} ∫_{[0,2π]⁵} Π_{σ=1}^{r} B^{(μ_σ)}(ω/2^σ) · O · Π_{σ=1}^{r} \overline{B^{(ν_σ)}(ω/2^σ)} e^{−ikω} dω = O.
References
1. Telesca, L., et al.: Multiresolution wavelet analysis of earthquakes. Chaos, Solitons &
Fractals 22(3), 741–748 (2004)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian
space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Li, S., et al.: A theory of generalized multiresolution structure and pseudoframes of
translates. J. Fourier Anal. Appl. 7(1), 23–40 (2001)
4. Chen, Q., et al.: Existence and characterization of orthogonal multiple vector-valued
wavelets with three-scale. Chaos, Solitons & Fractals 42(4), 2484–2493 (2009)
5. Shen, Z.: Nontensor product wavelet packets in L2(R^s). SIAM J. Math. Anal. 26(4),
1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-
dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information
Technology Journal 8(8), 1275–1280 (2009)
8. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-
valued wavelets. Chaos, Solitons & Fractals 41(2), 951–961 (2009)
The Features of Multiple Affine Fuzzy Quaternary Frames
in Sobolev Space
Hongwei Gao
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 513–518, 2011.
© Springer-Verlag Berlin Heidelberg 2011
514 H. Gao
functions of integer translates in L²(R⁴). We demonstrate that the GQMS has a
pyramid decomposition scheme and obtain a frame-like decomposition based on such a
GQMS. It also leads to new constructions of affine frames of L²(R⁴). Since the
majority of information is multidimensional, many researchers are interested in
multivariate wavelet theory. The classical method for constructing multivariate
wavelets obtains separable multivariate wavelets by means of the tensor product of
univariate wavelet frames. It is therefore significant to investigate nonseparable
multivariate wavelet frames and pseudoframes.
Let H be a separable Hilbert space. We recall that a sequence {η_v}_{v∈Z⁴} ⊆ H is
a frame for H if there exist positive real numbers L, M such that

∀ h ∈ H,  L ‖h‖² ≤ Σ_{v} |⟨h, η_v⟩|² ≤ M ‖h‖². (1)

A sequence {η_v}_{v∈Z⁴} is called a Bessel sequence if only the upper inequality of
(1) holds. If the upper inequality of (1) holds only for all elements g ∈ Q, the
sequence {η_v}_{v∈Z⁴} is a Bessel sequence with respect to (w.r.t.) Q. If {η_v} is
a frame, there exists a dual frame {η*_v} such that

∀ h ∈ H,  h = Σ_{v} ⟨h, η_v⟩ η*_v = Σ_{v} ⟨h, η*_v⟩ η_v. (2)
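The frame inequality (1) and the dual-frame reconstruction (2) can be illustrated in a finite-dimensional analogue (a sketch under the assumption that the Hilbert space is R² rather than L²(R⁴)): three unit vectors at 120° spacing — the "Mercedes-Benz" frame — form a tight frame with L = M = 3/2, and the canonical dual η*_v = S⁻¹η_v, with S the frame operator, reconstructs every vector.

```python
import numpy as np

# Three unit vectors at 120-degree spacing: a tight frame for R^2, L = M = 3/2.
angles = np.pi / 2 + 2.0 * np.pi * np.arange(3) / 3.0
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are eta_v

rng = np.random.default_rng(0)
h = rng.standard_normal(2)

# Frame inequality (1), here with equality: sum_v |<h, eta_v>|^2 = (3/2) ||h||^2.
coeff_energy = np.sum((frame @ h) ** 2)
print(np.isclose(coeff_energy, 1.5 * h @ h))  # True

# Canonical dual eta*_v = S^{-1} eta_v reconstructs h exactly, as in (2).
S = frame.T @ frame                 # frame operator
dual = frame @ np.linalg.inv(S)
print(np.allclose(dual.T @ (frame @ h), h))   # True
```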
h(x) ∈ H^s(R⁴) such that

∫_{R⁴} |ĥ(ω)|² (1 + ‖ω‖^{2s}) dω < +∞.

The space H^s(R⁴) is a Hilbert space equipped with the inner product

⟨h, g⟩_{H^s(R⁴)} := (2π)^{−4} ∫_{R⁴} ĥ(ω) \overline{ĝ(ω)} (1 + |ω|²)^s dω,  h, g ∈ L²(R⁴). (5)
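The weight (1 + |ω|²)^s in (5) makes the H^s norm nondecreasing in s for a fixed function. A minimal one-dimensional numerical sketch (the Gaussian test function and the truncated frequency grid are assumptions for illustration):

```python
import numpy as np

# Truncated frequency grid and a Gaussian test function (both assumptions).
omega = np.linspace(-40.0, 40.0, 4001)
h_hat = np.exp(-omega ** 2 / 2.0)
d_omega = omega[1] - omega[0]

def hs_norm_sq(s):
    # One-dimensional analogue of (5) with g = h: the squared H^s norm.
    return np.sum(np.abs(h_hat) ** 2 * (1.0 + omega ** 2) ** s) * d_omega / (2.0 * np.pi)

norms = [hs_norm_sq(s) for s in (0.0, 1.0, 2.0)]
print(norms[0] < norms[1] < norms[2])  # True: the norm grows with s
```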
∀ Γ ∈ H,  Γ = Σ_{j∈J} Σ_{v∈Z⁴} ⟨Γ, T_v ξ_j⟩ T_v ξ_j, (7)

Σ_{j∈J} Σ_{v∈Z⁴} ⟨Γ, T_v ξ_j⟩ T_v h_j = Σ_{j∈J} Σ_{v∈Z⁴} ⟨Γ, T_v ξ_j⟩ T_v ξ_j = Γ.
|Λ_j(ω)| ≥ c > 0, j ∈ J}, and

V₀ = PW_Ω = { f ∈ L²(R⁴) : supp( f̂ ) ⊆ Ω }. (8)

V_n = { φ(x) ∈ L²(R⁴) : φ(x/2ⁿ) ∈ V₀ },  n ∈ Z, (9)

Σ_{j∈J} ĥ_j(ω) \overline{Λ_j(ω)} χ_Ω(ω) = χ_Ω(ω) a.e., (10)

where χ_Ω is the characteristic function of Ω.
Proof. For all f ∈ PW_Ω, consider

F( Σ_{j∈J} Σ_{v∈Z⁴} ⟨f, T_v ξ_j⟩ T_v ξ_j ) = Σ_{j∈J} Σ_{v∈Z⁴} ⟨f, T_v ξ_j⟩ F(T_v ξ_j)
= Σ_{j∈J} Σ_{v∈Z⁴} ( ∫_{R⁴} f̂(ω) \overline{ξ̂_j(ω)} e^{2πivω} dω ) ξ̂_j(ω) e^{−2πivω}
= Σ_{j∈J} Σ_{v∈Z⁴} ( ∫_{[0,1]⁴} Σ_{n∈Z⁴} f̂(ω + n) \overline{ξ̂_j(ω + n)} e^{2πivω} dω ) ξ̂_j(ω) e^{−2πivω}
= Σ_{j∈J} ( Σ_{n∈Z⁴} f̂(ω + n) \overline{ξ̂_j(ω + n)} ) ξ̂_j(ω),

since Σ_{n∈Z⁴} f̂(ω + n) \overline{ξ̂_j(ω + n)}, j ∈ J, is a 1-periodic function. Therefore

Σ_{j∈J} ξ̂_j \overline{ξ̂_j} = χ_Ω,  a.e.,
Example 1. Let ξ_j ∈ L²(R⁴) be such that

ξ̂_j(ω) = { 1, a.e. ‖ω‖ ≤ 1/8;  (3 − 16‖ω‖)^{1/r}, a.e. 1/8 < ‖ω‖ < 3/16;  0, otherwise },

ĥ(ω) = { 1, a.e. ‖ω‖ ≤ 1/8;  5 − 16‖ω‖, a.e. 1/8 < ‖ω‖ < 5/16;  0, otherwise },

for j ∈ J. Choose Ω = { ω ∈ R⁴ : |ξ̂_j(ω)| ≥ 1/r } = [−1/4, 1/4]⁴, and define V₀ = PW_Ω,
By taking the Fourier transform of both sides of (13), we have
Theorem 3 [6]. If {Φ_{16n+μ}(x) : n = 0, 1, 2, 3, ...; μ ∈ Γ} is a nonseparable
quaternary wavelet packet system with respect to the orthogonal scaling function
Φ₀(x), then

⟨Φ_m(·), Φ_n(· − k)⟩ = δ_{m,n} δ_{0,k},  m, n ∈ Z₊, k ∈ Z⁴. (12)
Proof. The convergence of all summations in (7) and (8) follows from the
assumptions that the family {T_v ξ}_{v∈Z⁴} is a Bessel sequence with respect to the
subspace H, and that the family {T_v η}_{v∈Z⁴} is a Bessel sequence in L²(R⁴),
with which the proof of the theorem is straightforward.
Theorem 5 [8]. Let φ(x), ψ(x), φ̃(x) and ψ̃(x) be functions in L²(R⁴).

Σ_{k∈Z⁴} ⟨ζ, Φ_{n,k}⟩ Φ_{n,k}(x) = Σ_{μ=1}^{15} Σ_{v=−∞}^{n−1} Σ_{k∈Z⁴} ⟨ζ, Ψ_{μ:v,k}⟩ Ψ_{μ:v,k}(x). (17)

ζ(x) = Σ_{μ=1}^{15} Σ_{v=−∞}^{+∞} Σ_{k∈Z⁴} ⟨ζ, Ψ_{μ:v,k}⟩ Ψ_{μ:v,k}(x),  ζ(x) ∈ L²(R⁴). (18)
References
1. Daubechies, I.: The wavelet transform, time-frequency localization and signal analysis.
IEEE Trans. Inform. Theory 39, 961–1005 (1990)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian
space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Zhang, N., Wu, X.: Lossless Compression of Color Mosaic Images. IEEE Trans. Image
Processing 15(16), 1379–1388 (2006)
4. Chen, Q., Wei, Z.: The characteristics of orthogonal trivariate wavelet packets. Information
Technology Journal 8(8), 1275–1280 (2009)
5. Shen, Z.: Nontensor product wavelet packets in L2(R^s). SIAM J. Math. Anal. 26(4),
1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-
dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-
valued wavelets. Chaos, Solitons & Fractals 41(2), 951–961 (2009)
8. Li, S., et al.: A theory of generalized multiresolution structure and pseudoframes of
translates. J. Fourier Anal. Appl. 7(1), 23–40 (2001)
Characters of Orthogonal Nontensor Product Trivariate
Wavelet Wraps in Three-Dimensional Besov Space*
1 Introduction
Although the Fourier transform has been a major tool in analysis for over a
century, it has a serious drawback for signal analysis in that it hides in its
phases information concerning the moment of emission and the duration of a signal.
In her celebrated paper [1], Daubechies constructed a family of compactly supported
univariate orthogonal scaling functions and their corresponding orthogonal wavelets
with dilation factor 2. Since then, wavelets with compact support have been widely
and successfully used in various applications such as image compression and signal
processing. Scaling biorthogonal wavelets in both the univariate and the
multivariate case have been extensively studied in the literature. With symmetry
and many other desired properties,
*
Foundation item: This work is supported by the Science Research Foundation of the
Education Department of Shaanxi Provincial Government (Grant Nos. 11JK0513 and
11JK0468).
**
Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 519–524, 2011.
© Springer-Verlag Berlin Heidelberg 2011
520 J. Zhao and Q. Chen
scaling biorthogonal wavelets have been found to be more efficient and useful in
many applications than orthogonal ones. Wavelet analysis has developed into a new
branch of research over the past twenty years, with applications in many areas of
natural science and engineering technology. The main advantage of wavelets is their
time-frequency localization property. Many signals in areas like music, speech,
images, and video can be efficiently represented by wavelets that are translations
and dilations of a single function, called the mother wavelet, with bandpass
property. Wavelet packets, owing to their good properties, have attracted
considerable attention and can be widely applied in science and engineering [2,3].
Coifman and Meyer first introduced the notion of orthogonal wavelet packets, which
were used to decompose wavelet components. Chui and Li [4] generalized the concept
of orthogonal wavelet packets to the case of non-orthogonal wavelet packets, so
that wavelet packets can be employed in the case of spline wavelets and so on.
Tensor product multivariate wavelet packets were constructed by Coifman and Meyer,
and the notion of nontensor product wavelet packets is due to Shen [5]. Since the
majority of information is multidimensional, many researchers are interested in
multivariate wavelet theory. The tensor product method has obvious defects, such as
scarcity of design freedom; therefore, it is significant to investigate
nonseparable multivariate wavelet theory. Moreover, since there is little
literature on biorthogonal wavelet packets, it is necessary to investigate
biorthogonal wavelet packets.
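The wavelet packet idea recalled above — splitting the high-pass branch as well as the low-pass branch of the decomposition — can be sketched with the univariate Haar filters (a hypothetical illustration, not the construction of this paper):

```python
import numpy as np

# One level of a Haar wavelet packet split: BOTH the low-pass and the
# high-pass branch are split again, unlike the plain wavelet transform.
def split(x, f):
    # filter the even/odd samples and downsample by 2
    return f[0] * x[0::2] + f[1] * x[1::2]

def merge(lo, hi):
    # inverse of one Haar split
    x = np.empty(2 * lo.size)
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass

x = np.arange(8.0)
lo, hi = split(x, h), split(x, g)
# packet tree: split each branch once more -> 4 subbands
bands = [split(lo, h), split(lo, g), split(hi, h), split(hi, g)]
x_rec = merge(merge(bands[0], bands[1]), merge(bands[2], bands[3]))
print(np.allclose(x_rec, x))  # True: perfect reconstruction
```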
In the following, we introduce some notation. Z and Z₊ denote the integers and the
nonnegative integers, respectively. R denotes the real numbers, and R³ denotes the
3-dimensional Euclidean space. L²(R³) denotes the space of square integrable
functions. Let x = (x₁, x₂, x₃) ∈ R³, ω = (ω₁, ω₂, ω₃) ∈ R³,
k = (k₁, k₂, k₃) ∈ Z³, and y_ℓ = e^{−iω_ℓ/a}, where ℓ = 1, 2, 3.

f̂(ω) = ∫_{R³} f(x) e^{−ix·ω} dx. (1)

g(x) = a³ Σ_{n∈Z³} b_n g(ax − n), (2)

where {b_n}_{n∈Z³} is a real number sequence with only finitely many nonzero terms,
and g(x) is called the scaling function. Formula (2) is called the two-scale
refinement equation, and its frequency form can be written as
ĝ(ω) = B(y₁, y₂, y₃) ĝ(ω/a), (3)

where

B(y₁, y₂, y₃) = Σ_{n∈Z³} b_n y₁^{n₁} y₂^{n₂} y₃^{n₃}. (4)
Define a subspace X_j ⊂ L²(R³) (j ∈ Z) by

X_j = clos_{L²(R³)} ⟨ a^{3j/2} g_ν(a^j · − u) : ν = 1, 2, ..., a³ − 1; u ∈ Z³ ⟩. (5)
ψ̂_ν(ω) = Q^{(ν)}(y₁, y₂, y₃) ĝ(ω/a),  ν = 1, 2, ..., a³ − 1. (8)
⟨g(·), ψ_ν(· − u)⟩ = 0,  u ∈ Z³, (11)

where ν = 0, 1, 2, ..., a³ − 1. Taking the Fourier transform of both sides of (13)
yields

Φ̂_{a³n+ν}(ω) = B^{(ν)}(y₁, y₂, y₃) Φ̂_n(ω/a), (14)

where

B^{(ν)}(y₁, y₂, y₃) = B^{(ν)}(ω/a) = Σ_{k∈Z³} b^{(ν)}(k) y₁^{k₁} y₂^{k₂} y₃^{k₃}. (15)
Σ_{k∈Z³} |ĝ(ω + 2kπ)|² = 1. (16)

Σ_{ε∈{1,−1}³} |B(ε₁y₁, ε₂y₂, ε₃y₃)|² = 1, (17)

where the sum runs over all sign patterns ε = (ε₁, ε₂, ε₃).
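Condition (16) expresses the orthonormality of the integer translates of the scaling function. A univariate sanity check (using the Haar scaling function g = χ_[0,1), an illustrative assumption) truncates the lattice sum numerically:

```python
import numpy as np

# |g^(omega)|^2 = sinc^2(omega/2) for the Haar scaling function g = chi_[0,1).
def g_hat_sq(omega):
    u = omega / 2.0
    out = np.ones_like(u)
    nz = u != 0
    out[nz] = (np.sin(u[nz]) / u[nz]) ** 2
    return out

omega = np.array([0.3, 1.7, 2.9])
k = np.arange(-20000, 20001)
total = g_hat_sq(omega[:, None] + 2.0 * np.pi * k[None, :]).sum(axis=1)
print(np.allclose(total, 1.0, atol=1e-4))  # True up to the truncated tail
```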
Λ_{μ,ν} = Σ_{j=0}^{1} { B^{(μ)}((−1)^j y₁, (−1)^j y₂, (−1)^j y₃) \overline{B^{(ν)}((−1)^j y₁, (−1)^j y₂, (−1)^j y₃)}
+ B^{(μ)}((−1)^{j+1} y₁, (−1)^j y₂, (−1)^j y₃) \overline{B^{(ν)}((−1)^{j+1} y₁, (−1)^j y₂, (−1)^j y₃)}
+ B^{(μ)}((−1)^j y₁, (−1)^{j+1} y₂, (−1)^j y₃) \overline{B^{(ν)}((−1)^j y₁, (−1)^{j+1} y₂, (−1)^j y₃)}
+ B^{(μ)}((−1)^j y₁, (−1)^j y₂, (−1)^{j+1} y₃) \overline{B^{(ν)}((−1)^j y₁, (−1)^j y₂, (−1)^{j+1} y₃)} } = δ_{μ,ν}. (18)
For an arbitrary positive integer n ∈ Z₊, expand it as

n = Σ_{j=1}^{∞} ε_j 8^{j−1},  ε_j ∈ {0, 1, 2, 3, ..., 7}. (19)
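The expansion (19) is simply the base-8 digit expansion of n; the proof below repeatedly peels off the lowest digit via m = 8m₁ + μ₁. A small sketch (the helper name is hypothetical):

```python
def base8_digits(n):
    """Digits eps_j (lowest first) in n = sum_j eps_j * 8**(j-1)."""
    digits = []
    while n > 0:
        n, r = divmod(n, 8)  # peel off the lowest digit: n = 8*n1 + r
        digits.append(r)
    return digits or [0]

digits = base8_digits(1234)
print(digits)                                                 # [2, 2, 3, 2]
print(sum(d * 8 ** j for j, d in enumerate(digits)) == 1234)  # True
```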
(2π)³ ⟨Φ_m(·), Φ_n(· − k)⟩ = ∫_{R³} Φ̂_{8m₁+μ₁}(ω) \overline{Φ̂_{8n₁+ν₁}(ω)} e^{−ikω} dω
= ∫_{R³} B^{(μ₁)}(z₁, z₂, z₃) Φ̂_{m₁}(ω/2) \overline{Φ̂_{n₁}(ω/2)} \overline{B^{(ν₁)}(z₁, z₂, z₃)} exp{−ikω} dω
= ∫_{[0,4π]³} B^{(μ₁)}(z₁, z₂, z₃) Σ_{s∈Z³} Φ̂_{m₁}(ω/2 + 2πs) \overline{Φ̂_{n₁}(ω/2 + 2πs)} \overline{B^{(ν₁)}(z₁, z₂, z₃)} e^{−ikω} dω
= ∫_{[0,2π]³} Λ_{μ₁,ν₁} exp{−ikω} dω = O.
Case 2. Provided that m₁ ≠ n₁, we write m₁ = 8m₂ + μ₂, n₁ = 8n₂ + ν₂, where
m₂, n₂ ∈ Z₊ and μ₂, ν₂ ∈ Γ₀. If m₂ = n₂, then μ₂ ≠ ν₂; similarly to Case 1, it
holds that ⟨Φ_m(·), Φ_n(· − k)⟩ = O, that is, the proposition follows in this
case. If m₂ ≠ n₂, we write m₂ = 8m₃ + μ₃, n₂ = 8n₃ + ν₃ once more, where
m₃, n₃ ∈ Z₊ and μ₃, ν₃ ∈ Γ₀. Thus, after finitely many steps (denoted by r), we
obtain m_r, n_r ∈ Γ₀ and μ_r, ν_r ∈ Γ₀. If m_r = n_r, then μ_r ≠ ν_r, and (3.11)
follows similarly to Case 1. If m_r ≠ n_r, then, similarly to Lemma 1, we conclude
that
⟨Φ_m(·), Φ_n(· − k)⟩ = (2π)^{−3} ∫_{R³} Φ̂_{8m₁+μ₁}(ω) \overline{Φ̂_{8n₁+ν₁}(ω)} e^{−ikω} dω
= (2π)^{−3(r+1)} ∫_{[0,2π]³} Π_{σ=1}^{r} B^{(μ_σ)}(ω/2^σ) · O · Π_{σ=1}^{r} \overline{B^{(ν_σ)}(ω/2^σ)} e^{−ikω} dω = O.
where the family {Φ_α(x), α ∈ Z₊³} are trivariate wavelet packets with respect to
the orthogonal function g(x). Then, for arbitrary α ∈ Z₊³, the space D_α can be
orthogonally decomposed into the spaces D_{2α+μ}, μ ∈ Γ₀, i.e.,
D_α = ⊕_{μ∈Γ₀} D_{2α+μ}. For arbitrary j ∈ Z₊, define the set

Δ_j = { α = (α₁, α₂, α₃) ∈ Z₊³ \ {0} : 2^{j−1} ≤ α_l ≤ 2^j − 1, l = 1, 2, 3 }.

Theorem 3 [6]. The family {Φ_α(· − k), α ∈ Δ_j, k ∈ Z³} forms an orthogonal basis
of D_j W₀. In particular, {Φ_α(· − k), α ∈ Z₊³, k ∈ Z³} constitutes an orthogonal
basis of the space L²(R³).
References
1. Daubechies, I.: The wavelet transform, time-frequency localization and signal analysis.
IEEE Trans. Inform. Theory 39, 961–1005 (1990)
2. Iovane, G., Giordano, P.: Wavelet and multiresolution analysis: Nature of Cantorian
space-time. Chaos, Solitons & Fractals 32(4), 896–910 (2007)
3. Zhang, N., Wu, X.: Lossless Compression of Color Mosaic Images. IEEE Trans. Image
Processing 15(16), 1379–1388 (2006)
4. Chen, Q., et al.: Existence and characterization of orthogonal multiple vector-valued
wavelets with three-scale. Chaos, Solitons & Fractals 42(4), 2484–2493 (2009)
5. Shen, Z.: Nontensor product wavelet packets in L2(R^s). SIAM J. Math. Anal. 26(4),
1061–1074 (1995)
6. Chen, Q., Qu, X.: Characteristics of a class of vector-valued nonseparable higher-
dimensional wavelet packet bases. Chaos, Solitons & Fractals 41(4), 1676–1683 (2009)
7. Chen, Q., Cao, H., Shi, Z.: Construction and decomposition of biorthogonal vector-valued
wavelets with compact support. Chaos, Solitons & Fractals 42(5), 2765–2778 (2009)
8. Chen, Q., Huo, A.: The research of a class of biorthogonal compactly supported vector-
valued wavelets. Chaos, Solitons & Fractals 41(2), 951–961 (2009)
9. Yang, S., Cheng, Z., Wang, H.: Construction of biorthogonal multiwavelets. J. Math. Anal.
Appl. 276(1), 1–12 (2002)
Research on Computer Education and Education
Reform Based on a Case Study*
1 Introduction
Nowadays there is widespread concern over whether Chinese education is a success,
and opinions on this hot topic vary from person to person. Some people hold the
optimistic view that Chinese education is successful, because many Chinese students
can get offers from top universities in Britain, America or other countries on full
scholarships, and can achieve the highest test scores in the world in reading
comprehension, writing, math, and science in their second language, English.
However, some people question why Chinese schools have not cultivated world-leading
scientists with creativity. In any case, Chinese education has remained largely
unchanged for decades, and disadvantages ill-suited to the present era have
gradually become apparent. Both sides agree that education needs reform.
Computer science entered education as an independent discipline not long ago;
therefore, "computer science education research is an emergent area and is still
giving rise to a literature" [1]. However, to keep abreast of new technology,
computer science, one of the fastest-developing disciplines, which itself also impacts
*
Supported by the Honghe University Discipline Construction Fund (081203) and the
Honghe University Bilingual Course Construction Fund (SYKC0802).
**
Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 525529, 2011.
Springer-Verlag Berlin Heidelberg 2011
526 J. Sun et al.
modern education deeply, needs timely education reform more than other disciplines.
In this paper, we analyze the issues existing in computer education based on a case
study, and present countermeasures addressing these issues.
By contrast, a textbook has a much larger impact on teaching in Chinese classes
than in American classes, because the role of the textbook differs significantly:
in China, teachers usually carry out the teaching plan in strict accordance with
one textbook and do not advocate mentioning knowledge outside the scope of the
book; in America, textbooks only work as reference books, and teaching is not
limited to one book [3]. In other words, most teachers in China are used to
textbook-directed rather than knowledge-oriented teaching. A textbook will narrow a
student's focus if he lacks the initiative and self-study ability to extend his
knowledge through the Internet or other books. Therefore, we should change the
traditional way and select a couple of valuable reference books for the students.
It will be better if the teachers offer reading guidance at the same time.
students in class. Teaching students how to use Internet resources and reference
books to extend their professional knowledge will make better use of the limited
class time.
Finally, we should improve the forms of teaching and the form and content of
examinations so that they serve their purpose better. Panel discussions and group
projects are useful for training students' teamwork spirit and communication
skills, and presentations help train students to express their opinions.
Examinations should cover every phase of the preceding teaching forms, and the
content of homework, projects and examinations should be related to practical
applications. This will help enhance students' sense of accomplishment and interest
in learning.
Thirdly, from primary school to college or university, students have become used to
the traditional teaching method. In practice, teaching methods such as panel
discussions, group projects and presentations therefore have difficulty achieving
the desired effect. This situation cannot be changed by reforming just a few
courses; it requires a significant revolution of the whole education system.
5 Discussion
Many well-known educationalists and scholars have realized that the traditional
Chinese education model has many problems that need to be changed. Qian Xuesen, a
well-known physicist in China, asked on his deathbed: why have Chinese schools not
nurtured outstanding elites? As more and more people pay attention to this issue,
the existing problems are likely to be solved soon.
References
1. Fincher, S., Petre, M.: Computer science education research, p. 1. Taylor & Francis Group,
London (2004)
2. Sun, J.H., Zhu, Y.B., Fu, J.W., Xu, H.C.: The Impact of Computer Based Education on
Learning Method. In: 2010 International Conference on Education and Sport Education,
ESE 2010, Wuhan, China, vol. 1, pp. 76–79 (2010)
3. Zhu, Y.X., Zhen, H.: Comparison of Choice of Textbooks in Universities between
China and the United States. Higher Educational Research in Areas of Communications (6),
104–105 (2004) (in Chinese)
4. Han, H.Y.: Researching on Teaching Method of Undergraduate Education. Journal of Jilin
Business and Technology College 25(3), 86–88 (2009) (in Chinese)
5. Duan, H., Wang, S.: Deepen Education Reform Innovative Teaching Modes. China
University Teaching 4, 35–37 (2009) (in Chinese)
6. Deng, Y., Shang, Q.: A Comparison of Chinese and American College Textbooks,
Teaching Methods and Curriculums. Journal of Zhanjiang Normal College 22(10), 34–37
(2001) (in Chinese)
The Existence and Uniqueness for a Class of Nonlinear
Wave Equations with Damping Term
Department of Mathematics,
Henan Institute of Science and Technology,
Xinxiang 453003, P.R. China
cheersnow@163.com, qingshan11@yeah.net
Abstract. In this paper, the existence and uniqueness of the global generalized
solution and the global classical solution are studied by the Galerkin method. The
class of nonlinear wave equations describes the propagation of long waves with
viscosity in a medium with dispersive effect. It can also govern the problem of the
longitudinal vibration of a 1-D elastic rod.
Keywords: nonlinear wave equation with damping term, initial boundary value
problems, global generalized solution, global classical solution.
1 Introduction
In this paper we are concerned with the following initial boundary value problem

u(x, 0) = φ(x),  u_t(x, 0) = ψ(x),  x ∈ [0, 1], (1.3)

where α > 0 and b > 0 are constants and u(x, t) is the unknown function. The
subscripts x and t indicate partial derivatives with respect to x and t,
respectively, n ∈ N, and φ(x) and ψ(x) are given initial value functions defined on
[0, 1].
Equation (1.1) is connected with many equations. For example, in the study of a
weakly non-linear analysis of elasto-plastic-microstructure models for the
longitudinal motion of an elasto-plastic bar in [1], there arises the model equation

where u(x, t) is the longitudinal displacement and α > 0, β ≠ 0 are any real
numbers. Moreover, the special solution of equation (1.4), its instability, and the
instability of the ordinary strain solution were studied in [1]. In [2], [3], the
authors studied the problem of determining solutions of the generalized form of
equation (1.4), and proved the existence and uniqueness of the global generalized
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 530533, 2011.
Springer-Verlag Berlin Heidelberg 2011
solution and the global classical solution of several initial boundary value problems
by the contraction mapping principle, and give the sufficient conditions of the non-
existence of the solution.
Because equation (1.4) can describe the propagation of waves in a medium
with a dispersive effect, it is meaningful [4] to study the following nonlinear wave
equations with a viscous damping term.
The papers [5], [6] studied the equation with a viscous damping term, proved the
global existence and asymptotic properties of the solution of the initial boundary value
problem, and gave some sufficient conditions for the blow-up of the solution.
In this paper, we are going to prove the existence and uniqueness of the global
generalized solution and the global classical solution of the initial boundary value
problem (1.1)-(1.3) by the Galerkin method.
Let λ_i and y_i(x) be the eigenvalues and corresponding eigenfunctions of the eigenvalue problem
    y″ + λy = 0,  x ∈ (0, 1),
    y(0) = y(1) = 0,                  (2.1)
and let
    u_N(x, t) = Σ_{i=1}^{N} α_{Ni}(t) y_i(x)
be the Galerkin approximate solution of the problem (1.1)-(1.3), where the α_{Ni}(t) are
undetermined functions and N is a natural number. Substituting the approximate solution
u_N(x, t) into Eq. (1.1), multiplying both sides by y_s(x),
and integrating over (0, 1), we obtain
where (·, ·) denotes the inner product of L²[0, 1].
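As an illustrative sketch only (Eq. (1.1) itself is not reproduced in this excerpt), the following code carries out the same Galerkin steps for a stand-in damped linear wave equation u_tt = u_xx − b·u_t with u(0, t) = u(1, t) = 0: the basis functions are the eigenfunctions of problem (2.1), the initial coefficients are inner products with the basis, and the resulting decoupled ODE system is integrated in time. The stand-in equation, function names, and step sizes are assumptions, not the paper's computation.

```python
import numpy as np

def galerkin_coeffs(phi, psi, N, b=0.5, dt=1e-3, steps=2000):
    """Galerkin sketch for a stand-in damped wave equation
    u_tt = u_xx - b*u_t on (0, 1) with u(0, t) = u(1, t) = 0.

    Basis: eigenfunctions y_i(x) = sqrt(2)*sin(i*pi*x) of problem (2.1).
    Projection onto y_i decouples the system: a_i'' = -(i*pi)^2 a_i - b*a_i'.
    Returns the coefficients a_i and velocities a_i' at the final time.
    """
    x = np.linspace(0.0, 1.0, 2001)
    dx = x[1] - x[0]
    basis = [np.sqrt(2.0) * np.sin(i * np.pi * x) for i in range(1, N + 1)]
    # initial coefficients: a_i(0) = (phi, y_i), a_i'(0) = (psi, y_i)
    a = np.array([np.sum(phi(x) * y) * dx for y in basis])
    v = np.array([np.sum(psi(x) * y) * dx for y in basis])
    lam = np.array([(i * np.pi) ** 2 for i in range(1, N + 1)])
    for _ in range(steps):            # semi-implicit Euler time stepping
        v += dt * (-lam * a - b * v)
        a += dt * v
    return a, v
```

With damping b > 0 the modal energy ½(a_i′² + λ_i a_i²) decays in time, mirroring the kind of a priori bound stated in the lemmas below.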
Substituting the approximate solution u_N(x, t) and the approximations of the initial
value functions into Eq. (1.1) and the initial conditions (1.3), we get
In order to prove the existence of the global generalized solution of the problem
(1.1)-(1.3), we make a series of estimates for the approximate solution u_N(x, t).
532 B. Lu and Q. Zhang
ψ(x) ∈ L²[0, 1], and φ(x) and ψ(x) satisfy the boundary conditions (1.2). Then for any
N, the initial value problem (2.3), (2.4) has a global classical solution α_Ns ∈ C²[0, T],
and the following estimate holds:
    ‖u_N(·, t)‖²_{H²} + ‖u_Nt(·, t)‖² ≤ C₁(T),  t ∈ [0, T],   (2.5)
where here and in the sequel C₁(T) and C_i(T) (i = 2, 3, ...) are constants which depend
only on T.
Lemma 2.2 [6]. Suppose that the conditions of Lemma 2.1 hold, φ ∈ H⁴[0, 1], and
ψ ∈ H²[0, 1]. Then the approximate solution of the problem (1.1)-(1.3) satisfies the
following estimate:
and φ(x) and ψ(x) satisfy the boundary conditions (1.2). Then the initial boundary value
problem (1.1)-(1.3) admits a unique global generalized solution u(x, t),
which satisfies the boundary conditions (1.2) in the generalized sense
and the initial conditions (1.3) in the classical sense.
Proof: From (2.6) we know that
Lemma 3.1 [6]. Suppose that the conditions of Lemma 2.2 hold, φ ∈ H⁷[0, 1], and
ψ ∈ H⁵[0, 1]. Then the approximate solution of the problem (1.1)-(1.3) satisfies the
following estimate:
    ‖u_N(·, t)‖²_{H⁷} + ‖u_Nt(·, t)‖²_{H⁵} + ‖u_Ntt(·, t)‖²_{H³}
        + ‖u_Nttt(·, t)‖²_{H¹} ≤ C₄(T),  t ∈ [0, T],   (3.1)
and φ(x) and ψ(x) satisfy the boundary conditions (1.2). Then the initial boundary value
problem (1.1)-(1.3) admits a unique global classical solution.
4 Conclusion
By the methods of Sections 2 and 3, we can obtain corresponding results for other
initial boundary value problems of the equation.
References
1. An, L.J., Peirce, A.: A weakly nonlinear analysis of elasto-plastic-microstructure models.
SIAM J. Appl. Math. 55(1), 136–155 (1995)
2. Chen, G.W., Yang, Z.J.: Existence and nonexistence of global solutions for a class of
nonlinear wave equations. Math. Meth. Appl. Sci. 23, 615–631 (2000)
3. Zhang, H.W., Chen, G.W.: Potential well method for a class of nonlinear wave equations
of fourth-order. Acta Mathematica Scientia 23A(6), 758–768 (2003)
4. Guenther, R.B., Lee, J.W.: Partial Differential Equations of Mathematical Physics and
Integral Equations. Prentice Hall, NJ (1988)
5. Yang, Z.: Global existence, asymptotic behavior and blow-up of solutions for a class of
nonlinear wave equations with dissipative term. J. Differential Equations 187, 520–540
(2003)
6. Chen, G.W., Lu, B.: The initial boundary value problems for a class of nonlinear wave
equations with damping term. J. Math. Anal. Appl. 351, 1–15 (2009)
7. Maz'ja, V.G.: Sobolev Spaces. Springer, New York (1985)
Research on the Distributed Satellite Earth Measurement
System Based on ICE Middleware
Abstract. Satellite earth measurement is one of the crucial steps during the
development of satellites, and also plays an important role in system validation
and performance evaluation. This paper presents a distributed satellite earth
measurement system based on ICE (Internet Communications Engine)
middleware, which guarantees high scalability and flexible configuration by
decoupling the relationship between the message distributors and
subscribers in the system, and thereby realizes dynamic location and load
balancing. The system implementation and test results show that this system
performs better in distributed deployment, flexibility, and network bandwidth
occupancy than traditional systems.
1 Introduction
Satellite earth measurement is one of the crucial steps during the development of
satellites, and also plays an important role in system validation and performance
evaluation. The satellite earth measurement system itself is a sophisticated giant system
whose design and implementation are affected by many factors. First of all,
during operation, the measuring units of the system have to constantly and
periodically transfer all sorts of results to different service units, so this
information exchange follows a publish/subscribe model. Second, the transmission of
different data reports may have different requirements in efficiency,
safety, and reliability, which the information exchange must effectively support.
In addition, in view of the diversity of measuring environments,
the system has to operate on multiple platforms. In the current system, however, the tight
coupling between measuring units and service units has severely limited
the scalability of the system, so the different needs of the service units cannot be met.
Meanwhile, the current system has poor interoperability and is hard to deploy
practically on a large scale.
Information publish/subscribe systems based on ICE (Internet Communications
Engine) middleware provide a new solution to the above problems. ICE
middleware gives the system high scalability and flexible configuration by
decoupling the association between publishers and subscribers, which lays a realistic
foundation for the distributed design of a large satellite earth measurement system.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 534541, 2011.
Springer-Verlag Berlin Heidelberg 2011
Research on the Distributed Satellite Earth Measurement System 535
Based on the above analysis, this paper presents a new design of a distributed
satellite earth measurement system based on ICE middleware. Compared with
the traditional one, the new system is effectively simplified, realizes dynamic location
and load balancing of the target server, and performs much better in distributed
deployment, flexibility, and network bandwidth occupancy.
(Figure: one publisher distributing events to multiple subscribers)
The service consists of a registration/positioning service and arbitrary nodes. They
cooperate to manage the information and service processes that make up the application,
and provide redundancy for the same service running on different servers. Thus, when one
of the servers cannot provide a service, the other servers in the same group can
provide the same service to the client.
Analyzing the demands of the system and integrating the advantages that ICE has in
information publish/subscribe and distributed management services, this paper puts
forward an implementation solution for the distributed satellite earth measurement
system based on ICE middleware, shown in Fig. 4.
(Figure: servers connected over the network through the ICE middleware layer — general
Ice interface, distributing thread pool, event buffer, and memory pool)
Based on the aforesaid function design, the process of event message transmission
is shown in Fig. 6.
(Figure 6: message sender and receiver interfaces, event buffer zone, distributing
thread pool, QoS management, and network connections to the general Ice interfaces of
the IceStorm server)
In Fig. 6, before the sending end sends a message, the receiving end has to call a
user interface bound to its unique identity and establish a network connection with
IceStorm to receive messages. After the connection is established, the receiving end
enters the waiting-for-data state, and returns received data to the upper application
through a callback function.
The distribution end calls a user interface to publish messages and sends event
messages to the event buffer zone in order of priority; when there is an idle thread
in the distributing thread pool, the event message is picked out of the event
buffer zone and its transfer parameters are managed by QoS. When the data is
ready, it waits for the connection management module to allocate a network
connection handle. During this period, the connection management module first decides
whether there is an idle or multiplexed connection, and returns a network
connection handle when there is one. The general transmission interface publishes
messages to the IceStorm server by calling these handles.
After receiving an event message, the IceStorm server first traverses the subscribers
relevant to the theme of the message, and sequentially forwards the message to each
subscriber. After receiving the event message, the receiver waits for the receiving
thread pool to allocate a processing thread, processes the data after
acquiring the thread handle, and eventually delivers it to the callback
functions that the subscribers registered.
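The event flow described above can be reduced to a minimal in-memory sketch of the publish/subscribe pattern. This is not the real IceStorm API; `TopicBroker` and its methods are hypothetical names used only to illustrate how a broker decouples publishers from subscribers:

```python
from collections import defaultdict

class TopicBroker:
    """Hypothetical in-memory stand-in for a topic broker such as IceStorm.

    Publishers address a topic name only; the broker forwards each event to
    every callback registered for that topic, so publishers and subscribers
    never hold references to each other (the decoupling described above)."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Bind a subscriber callback to a topic (theme)."""
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        """Traverse the subscribers of this theme and forward the event."""
        for cb in self._subs[topic]:
            cb(event)
```

With this shape, a publisher of theme T1 never learns who listens; adding or removing subscribers requires no change on the publishing side.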
We take a theme T1 as an example below, in order to briefly demonstrate the
event message publish/subscribe process of the satellite earth measurement
system. The publisher's implementation process is as follows:
4 System Test
In order to verify the validity of the design of the system and test its performances,
this paper established a testing environment, shown as Fig.7.
(Figure 7: test environment — server and client computers connected through a router)
Each of the 4 client computers opens 4-6 clients, making 20 clients in all. The server
sends orders to the clients at a rate of 10 frames/s. The byte length of every order is 30K.
The test time is 24*3 hours.
The testing result is shown in Table 1.
Table 1.
The test results show that the volume of order transmission is
10*20*60 = 12000 frames/min and the volume of data transferred is
12000*30K = 360000K/min. After inspecting the log information of the 24*3 hour test, we
find that the server operates normally and sends orders correctly; its feedback time is less
than 0.001 s; the packet loss rate is under 0.01%, with a report within 0.003 s after a
packet loss; and command packets with QoS are delivered 100% of the time. The results are far
beyond the satellite earth measurement system's performance indexes and meet the
design requirements.
In order to test event message transmission performance under a rough network
environment, we limited the network speed to 1M. The server sends orders to the clients
at a rate of 10 frames/s. The byte length of every order is 5K. The test time is 24*3
hours. The volume of order transmission is 10*20*60 = 12000 frames/min; the
volume of data transferred is 12000*5K = 60000K/min.
The results show that the system can reach the satellite earth measurement
system's performance indexes even under a relatively rough network environment
at full load.
In this paper, we use the single-ended subscriber timing method to test throughput. The
test case contains one publisher and one subscriber: the publisher publishes events
incessantly, and the subscriber continuously receives them. If n events are received in
time Tn, the throughput is calculated as:
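The throughput formula itself did not survive extraction; the following small helper sketches it under the natural assumption throughput = n × event size / Tn:

```python
def throughput_mb_per_s(n_events, event_size_bytes, t_n_seconds):
    """Throughput sketch: n events of a given byte size received in T_n
    seconds; returns MB/s with 1 MB = 1024*1024 bytes (assumed form)."""
    return n_events * event_size_bytes / t_n_seconds / (1024 * 1024)
```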
The results, shown in Figure 8, indicate that when the events are small, throughput
increases linearly with event size; but due to network bandwidth, hardware, and other
environmental constraints, the throughput reaches a limit of about 9 MB/s.
5 Conclusion
Analyzing the needs of satellite earth measurement, this paper put forward a
satellite earth measurement system based on ICE middleware. Relying on the
technological advantages of ICE middleware, the system decoupled the
association between the service units of the satellite earth measurement system,
accomplished flexible configuration of the system and highly efficient data
distribution, and realized dynamic location and load balancing of the target server.
Finally, the system test effectively verified the validity and performance of the
solution.
References
1. ZeroC: Distributed Computing with Ice. Revision 3.4 (June 2010),
http://zeroc.com/doc/index.html
2. Joseph, H.Y.D.: Space Telecommunications Systems Engineering. Plenum Press, New
York (1983)
3. Chen, Y.-y.: Satellite Radio Monitoring and Control Technology. China Astronautics
Publishing House (2007)
4. Tesauro, G.: Reinforcement Learning in Autonomic Computing: A Manifesto and Case
Studies. IEEE Internet Computing 11, 22–30 (2007)
5. Gokul, S., Cristiana, A.: Towards end-to-end quality of service: controlling I/O
interference in shared storage servers. In: Proceedings of the 9th ACM/IFIP/USENIX
International Conference on Middleware, Springer-Verlag New York, Inc., Leuven (2008)
6. Welch, V., Siebenlist, F., Foster, I., Bresnahan, J., Czajkowski, K., Gawor, J., Kesselman,
C., Meder, S., Pearlman, L., Tuecke, S.: Security for grid services. In: The Twelfth IEEE
International Symposium on High Performance Distributed Computing, Seattle,
Washington, pp. 48–57 (June 2003)
7. Blanquer, J.M., Batchelli, A., Schauser, K.E., Wolski, R.: Quorum: Flexible Quality of
Service for Internet Services. In: Proceedings of the Symposium on Networked Systems
Design and Implementations, NSDI 2005 (2005)
The Analysis and Optimization of KNN Algorithm
Space-Time Efficiency for Chinese Text Categorization
Abstract. The performance of any text classification algorithm is reflected
in the reliability of its classification results and the efficiency of the
algorithm. We analyze the space-time efficiency of the different stages of the
traditional KNN algorithm process for Chinese text classification while
ensuring classification reliability, and we optimize the efficiency and
practical feasibility of the algorithm in several aspects, including feature
extraction, feature weighting, and similarity computation.
1 Introduction
With the growth of Web resources and the popularization of electronic text, people have
begun to pursue efficient and reliable methods of information processing in
response to the explosion of knowledge and other issues brought about by the rapid
development of the information technology industry. Many scholars are now concerned
with text classification technology.
Text classification assigns a text to one or more pre-defined classes based on its
content [1]. Current text classification methods can be divided
into rule-based classification and statistics-based classification, the latter including
Decision Trees, K-nearest neighbor (KNN), Support Vector Machines, Bayes, etc. [2].
However, the performance of any classifier is reflected in two aspects, namely
the reliability and the efficiency of the classification algorithm. Chinese text
classification requires more reliability and efficiency because of its inherent
complexity, the confusable meanings of words, its language forms, and other
characteristics.
Text classification algorithms are often optimized for the reliability of the
algorithm itself, which is very difficult to implement, while the time and space
efficiency of the classification algorithm is easily ignored. Alternatively,
efficiency optimization at a single stage disperses and weakens the advantage of
the whole traditional classification algorithm. Therefore we analyze the efficiency
of time and space at the different stages in order to get a reasonable proportion of
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 542550, 2011.
Springer-Verlag Berlin Heidelberg 2011
space-time according to the process of the traditional KNN algorithm, obtaining an
optimized algorithm that is efficient and feasible in practical application while
ensuring classification reliability.
2 KNN Algorithm
The basic idea of the traditional KNN algorithm can be expressed as follows. According
to the traditional vector space model, text features are formalized as weighted feature
vectors [5]. For a given text to be classified, calculate its similarity (distance) to
each text in the training set. Then select the K training texts nearest to the
text to be classified. Determine the category of
the new text [6] according to the categories of those K texts. The algorithm flow is
shown in Figure 1.
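A minimal sketch of this flow (hedged: the paper's own implementation is not reproduced here, and `sim` stands for any similarity function on weighted feature vectors):

```python
from collections import Counter

def knn_classify(test_vec, train_vecs, train_labels, k, sim):
    """KNN flow from the text: compute the similarity between the text to be
    classified and every training text, keep the K most similar training
    texts, and vote on the category of the new text.

    sim(u, v) is any similarity function on weighted feature vectors."""
    ranked = sorted(range(len(train_vecs)),
                    key=lambda i: sim(test_vec, train_vecs[i]),
                    reverse=True)
    votes = Counter(train_labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]
```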
The design and implementation of the KNN algorithm optimization should follow
a few principles to improve the space-time efficiency of the algorithm.
(1) Store intermediate results in disk files.
(2) Minimize the number of disk file accesses.
(3) Use hash tables as the basic storage structure.
We analyze the space-time efficiency of the traditional KNN algorithm, divided into
three stages: feature extraction, feature vector computation, and similarity
computation.
The treatment of feature items in the traditional KNN algorithm has both time and
space defects.
First, evaluation functions are currently in widespread use for extracting feature items.
But an evaluation function increases extraction accuracy only within a limited range,
while its time and calculation costs are the same as those of plain text similarity;
it is too time-consuming and adds to the burden of the training corpus.
Second, feature extraction places no strict requirements on space resources;
however, large-scale text feature entries will greatly increase the
computational complexity of subsequent algorithms. In the feature vector computation
stage, each feature item is calculated as one dimension of the vector.
The basis of text classification is converting a document into a format the
computer can process, using a simple and accurate method [7]. The classic
formalization of a text is a feature vector of the form (W1, W2, W3, ..., Wn),
where Wi is the weight of the i-th feature item. The text is formalized by
assigning weights to features to form feature vectors.
Feature vectors are composed numerically of weights. Feature items are weighted
based mainly on the following two observations [1].
(1) The more often a lexical item appears in a text, the more it is related to the subject of that text.
(2) The more texts a lexical item appears in, the worse its discrimination
between texts.
The traditional KNN algorithm uses the tf-idf (term frequency-inverse document frequency)
weighting formula [7]; that is, the weight of feature item w_k in text t_k is:
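The weighting formula itself is not reproduced above. The following sketch shows the standard tf-idf form, tf × log(N/df); whether the paper uses exactly this variant is an assumption:

```python
import math

def tfidf_weight(tf, df, n_docs):
    """tf-idf weight of a feature item: term frequency tf times the inverse
    document frequency log(N / df), where df is the number of training
    documents containing the item and N is the corpus size."""
    return tf * math.log(n_docs / df)
```

A term that appears in every document gets idf = log(1) = 0, reflecting observation (2) above.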
We calculate the similarity between the test text and the training corpus to reflect how
similar they are and to provide data support for the classification. We use the cosine
formula to calculate similarity in Chinese text categorization [8].
Between texts t_i and t_j, the similarity is:
    sim(t_i, t_j) = Σ_{k=1}^{M} (w_ik · w_jk) / ( √(Σ_{k=1}^{M} w_ik²) · √(Σ_{k=1}^{M} w_jk²) )   (2)
where w_ik is the weight of the k-th feature item in text t_i and M is the total number
of feature items.
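Equation (2) can be transcribed directly; this sketch assumes the texts are given as dense M-dimensional weight vectors:

```python
import math

def sim(ti, tj):
    """Equation (2): cosine similarity of two texts given as M-dimensional
    weight vectors; returns 0.0 for a zero vector to avoid division by zero."""
    num = sum(wik * wjk for wik, wjk in zip(ti, tj))
    den = (math.sqrt(sum(w * w for w in ti)) *
           math.sqrt(sum(w * w for w in tj)))
    return num / den if den else 0.0
```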
Because the traditional KNN algorithm needs to calculate the similarity with each
training text, the training process costs a lot of time unless simplified [9];
the traditional KNN classification algorithm therefore has low efficiency. The testing
corpus here is a single text. The basic design idea of the similarity computation is
to replace time with space. The traditional similarity algorithm is as follows.
for (weight_i = first_weight to M) put weight_i into hashtable ha;
for (text_j = first_text to N)
    for (j_weight_k = j_first_weight to M)
        if (search j_weight_k in ha) sim_j();
546 Y. Cai and X. Wang
Then, with the results of the KNN algorithm space-time efficiency analysis, we design and
test space-time efficiency optimization schemes for every stage, mainly
including a feature item extraction optimization scheme, a feature item weighting
optimization scheme, and a similarity calculation optimization scheme.
Feature item data are the largest resource in the KNN classification algorithm. Even if
the evaluation function is ignored, a well-built stop-word table can
reduce the dimension of the feature vectors and ensure the time efficiency of the
algorithm. At present there is no uniform standard for stop-word tables [10].
Different extraction methods construct different stop-word tables, which affects
the space-time performance of the classification algorithm. The feature extraction
program supporting different item filters is designed as follows.
if (word.length() > lower_limit && word.length() < upper_limit)
    if (word.trait == n || word.trait == v) tag_hash.put(word);
Choosing different word frequencies and word combinations forms different feature item
extraction schemes. The test results are shown in Table 1, using 500 training corpus
documents. Further test results are shown in Table 2, also using 500 training corpus documents.
As the group size increases, the time complexity of the algorithm tends to O(n²) and the
space complexity tends to O(n). When the block size is 1, the optimized algorithm is
the same as the traditional one. When the block size is greater than 100, the time cost
is reduced, space increases slightly, and the system can easily bear it. Therefore, the
optimized algorithm is feasible and effective.
We consider using space instead of time when the test corpus is composed of many
texts. We send all the training feature vectors into memory and can still get the
similarity. The optimized algorithm's time complexity is O(n²) and its space complexity
increases to O(n²), but it is still bearable within the system.
for (train_text_i = train_first_text to N)
    for (i_weight_k = i_first_weight to M)
        put i_weight_k into hashtable hash_train[i];
for (test_text_j = test_first_text to N)
    for (j_weight_k = j_first_weight to M)
        if (search j_weight_k in hash_train[]) sim_j();
Select the K Neighbors. Select the K training texts with the greatest similarity as the
K neighbors of the current test text. In practical problems, it is difficult to determine
the value of K; it is often only roughly estimated from experience. This kind of
estimate may cause a decline in the accuracy of the KNN algorithm.
Scoring by Category. Accumulate the similarities of the K neighbors per category;
the accumulated value is the category's score [11].
Proposing the Classification. According to the principle of the KNN algorithm, the text
is assigned to the class with the highest score.
F = (1 + β²)pr / (β²p + r)   (3)
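Equation (3) with β = 1 gives the F1 values reported in Table 3; a small helper (the parameter names are ours):

```python
def f_measure(p, r, beta=1.0):
    """Equation (3): F = (1 + beta^2) * p * r / (beta^2 * p + r).
    With beta = 1 this reduces to the F1 score used in Table 3."""
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```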
The Sogou corpus was selected, involving nine categories: finance and business,
information technology, food and hygiene, sports, tourism, education and exams,
employment and workplace, culture and arts, and military weapons. There are 1990 papers
in each category. The dictionary contains 275,613 words, 68,767 words excluding stop
words. We choose 900 test texts, which is 5% of the total corpus. The classifier test
results are shown in Table 3.
K      Performance index   FB    IT    FH    SE    TV    EE    EW    CA     MW
20     Precision           91.1  83.7  81.0  94.1  78.0  69.9  79.8  85.3   92.4
20     Recall              92.0  82.0  85.0  95.0  78.0  86.0  79.0  58.0   97.0
20     F1                  91.5  82.8  82.9  94.5  78.0  77.1  79.4  69.1   94.6
100    Precision           89.3  82.7  83.3  96.0  86.1  72.7  80.8  88.6   91.5
100    Recall              92.0  81.0  85.0  96.0  87.0  88.0  80.0  62.0   97.0
100    F1                  96.5  81.8  84.2  96.0  86.6  79.6  80.4  72.9   94.2
400    Precision           87.3  81.8  84.2  98.0  85.3  72.1  81.2  89.6   91.7
400    Recall              89.0  81.0  85.0  96.0  87.0  88.0  82.0  60.0   97.0
400    F1                  88.1  81.4  84.6  97.0  86.1  79.3  81.6  71.9   94.3
800    Precision           84.0  81.1  96.5  99.0  85.3  72.9  77.0  87.0   100.0
800    Recall              89.0  77.0  82.0  95.0  87.0  86.0  87.0  60.0   99.0
800    F1                  86.4  79.0  88.7  96.9  86.1  78.9  81.7  71.0   99.5
1500   Precision           80.9  81.5  87.1  99.0  85.2  74.1  77.2  100.0  100.0
1500   Recall              89.0  75.0  81.0  95.0  86.0  86.0  88.0  61.0   99.0
1500   F1                  84.8  78.1  83.9  96.9  85.6  79.6  82.2  75.8   99.5
The performance test results above are statistics produced automatically by the classifier
without manual checking. The results show that the K value has little influence on the
classifier. The implementation and testing environment was a computer with a 2 GHz CPU,
3 GB main memory, the Windows XP operating system, and a Java language compiler. Thus,
the efficiency of the traditional KNN algorithm can be verified in a common PC
environment. The weighted average time for feature vectors is 63 ms/article, the average
similarity time 4043 ms/article, and the traditional KNN classifier's classification
time 125 ms/article.
6 Conclusion
In this paper, we analyzed the time and space efficiency of the conventional KNN
algorithm in the Chinese text classification process. We presented a set of detailed
efficiency optimization schemes that preserve classification reliability,
including a feature item extraction optimization scheme, a feature item weighting
optimization scheme, and a similarity calculation optimization scheme. Test results
met the expected results.
References
1. A Survey on Automated Text Categorization,
http://wenku.baidu.com/view/64589a4bcf84b9d528ea7a45.html
2. Guo, G.D., Wang, H., Bell, D., Bi, Y.X., Greer, K.: A KNN Model-based Approach and
Its Application in Text Categorization. J. Computer Science, 986–996 (2003)
3. Yang, Y.M., Pedersen, J.O.: A Comparative Study on Feature Selection in Text
Categorization. In: 14th International Conference on Machine Learning (ICML 1997), pp. 412–420.
Morgan Kaufmann Publishers, San Francisco (1997)
4. Vries, A.D., Mamoulis, N., Nes, N.: Efficient KNN search on vertically decomposed data.
In: 2002 ACM SIGMOD International Conference on Management of Data, pp. 322–333.
ACM Press, Madison (2002)
5. Sun, R.Z.: An Improved KNN Algorithm for Text Classification. J. Computer Knowledge
and Technology 6(1), 174–175 (2010)
6. Ma, J.B., Li, J., Teng, G.F., Wang, F., Zhao, Y.: The Comparison Studies on the Algorithm
of KNN and SVM for Chinese Text Classification. Journal of Agricultural University of
HeBei 31(3), 120–123 (2008)
7. Wang, X.Q.: Research of KNN Classification Method based on Parallel Genetic
Algorithm. Journal of Southwest China Normal University 35(2), 103–106 (2010)
8. Zhu, G.H., Cheng, C.P.: An Improved k-Nearest Neighbor Classification Method. Journal
of HeNan Institute of Engineering, 65–67 (2008)
9. Liu, B., Yang, L., Yuan, F.: Improved KNN Method and Its Application in Chinese Text
Classification. Journal of Xihua University 27(2), 33–36 (2008)
10. Zhou, Q.Q., Sun, B.D., Wang, Y.: Study on New Pretreatment Method for Chinese Text
Classification System. J. Application Research of Computers (2), 85–86 (2005)
11. He, F., Lin, Y.L.: Summary of Improving KNN text classification algorithm. J. FuJian
Computer (3), 33–36 (2005)
Research of Digital Character Recognition Technology
Based on BP Algorithm
Xianmin Wei
Abstract. This paper describes the digital character recognition process and
its steps. An artificial neural network with a momentum term and an adaptive
learning rate back-propagation algorithm is used to train on and identify both
ideal signals and noisy signals containing digital characters. Comparison of
the results shows that training the same network with both the ideal signal
and the noise signal makes the system more fault-tolerant.
1 Introduction
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 551555, 2011.
Springer-Verlag Berlin Heidelberg 2011
552 X. Wei
The premise of digital identification is to convert the visual image into a
binary image by computer processing, which uses a given-threshold method to
map the pixels of the image into two colors according to a certain standard. However,
the fonts of binary images are blurred in many cases, or spread with messy white or
dark dots, causing some difficulties in identification; gradient sharpening methods are
used to sharpen the image, so that the blurred image becomes clear, which also plays a
role in removing noise.
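A minimal sketch of the given-threshold binarization step (the threshold value and the nested-list image representation are our assumptions, not the paper's):

```python
def binarize(gray, threshold=128):
    """Given-threshold binarization: map every pixel of a grayscale image
    (a list of rows of 0-255 intensity values) to the two colors 0 and 1."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```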
Identification is determined only by the characteristics of each digital character, so
the binary image, after sharpening, needs to be split into individual characters for
character refinement. A shelling algorithm is commonly used: black points are removed
layer by layer from the boundary until a skeleton is found that coincides with the
boundary (thickness of 1 or 2). In order to extract the characteristics of any
character, the digital characters are also normalized: the size of each character is
transformed to a uniform size, and the character position (rotation, translation) is
corrected. Many people believe that normalizing each character image into a 5 × 9 pixel
binary image is ideal, because the smaller the image, the higher the recognition rate
and the faster the network training. In fact, compared with the character images to be
identified, a 5 × 9 pixel map is too small: after normalization, much image information
is lost, and recognition accuracy is not high. Experimental results show that
normalizing a character image into a 10 × 18 pixel binary image is the real ideal. From
the split and processed characters, the feature vectors that best embody the
characteristics of the characters are extracted and fed into the BP network for
training. Then the feature vectors extracted from the samples to be identified are fed
into the trained BP network, and the characters can be identified. Commonly used feature
vector extraction methods include pixel-by-pixel extraction, frame feature extraction,
vertical projection statistics, and others. This experiment uses the pixel-by-pixel
extraction method.
The basic idea of the BP algorithm is: for an input sample, after operations with the
weights, thresholds, and activation function, an output is obtained, which is then
compared with the expected output for the sample; if there is any deviation, the
deviation is back-propagated from the output layer to adjust the weights and
thresholds, so that the network output gradually approaches the desired output. The BP
algorithm is thus based on the steepest descent method, which has inherent
disadvantages: it falls into local minima easily, converges slowly, and causes
oscillation. Using the momentum method when adjusting the weights accelerates the
convergence rate and to some extent reduces the probability of falling into a local
minimum, but cannot completely overcome these shortcomings. To speed up
the convergence rate, an adaptive learning rate is also used.
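The update rule described above, a momentum term plus an error-driven adaptive learning rate, can be sketched as a single weight-update step. The adaptation constants (1.05, 0.7) and momentum 0.9 are conventional illustrative choices, not values from the paper:

```python
def train_step(w, grad, velocity, lr, prev_err, err,
               momentum=0.9, inc=1.05, dec=0.7):
    """One weight update with a momentum term and an adaptive learning rate:
    grow lr when the training error fell, shrink it when the error rose,
    then take a momentum-damped gradient step. Constants are illustrative."""
    lr = lr * inc if err < prev_err else lr * dec
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity, lr
```

In a full trainer this step would be applied to every weight matrix after each epoch's error is measured.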
Table 1. Training and Test Errors of the Hidden Layer with Noisy Signals
Layer 3 is the output layer; the target output vector contains 10 data values, showing
that the layer has 10 nodes. The hidden layer and output layer activation functions are
the Sigmoid (S-type) function; the network structure is shown in Figure 2.
Looking for a suitable training method, we found that with an increasing number of
samples, the training results of plain BP or adaptive-learning-rate BP alone are not
ideal, but the BP training algorithm with both an adaptive learning rate and a momentum
term works well; therefore this function is used to train the neural network. In
order to give the network a certain fault tolerance to input vectors, the best way is to
train it with both ideal and noisy signals. In this study, the network was first trained
with 15 groups of ideal signals; then the same network was trained with 15 noisy signals
followed by 15 groups of ideal signals. Ten kinds of increasingly noisy signals were
obtained by adding to the ideal character signals noise with mean 0 and standard
deviation from 0.05 to 0.5. Changes in the network training
error are shown in Figure 3.
Observing these curves shows that the training targets can be reached
quickly.
Meanwhile, using different levels of noise signals, 100 tests were made for each of the
10 digits from 0 to 9; the network identification error rate curve against the noise
level is shown in Figure 4.
Dotted line in Figure 4 is the error recognition rate curve without the error training
network, solid line is the error recognition rate curve with the network trained by the
error. It can be seen that from Figure 4 the network training error tolerance is greatly
improved.
Image Segmentation Based on D-S Evidence
Theory and C-means Clustering
Xianmin Wei
1 Introduction
Dempster-Shafer (D-S) evidence theory is an extension of the classical form of probability theory: the belief function of a piece of evidence is associated with upper and lower probability values, and belief and plausibility functions are used to explain a multi-valued mapping; on this basis, uncertain information is handled to form the theory of evidence. Classification of image texture cannot be separated from the selection and extraction of image features, and a single feature cannot meet the requirements of image target classification and recognition. To this end, D-S evidence theory is applied to the extracted image texture features for information fusion, and the final decision-making strategy performs the image texture classification.
This paper first decomposes the full image into more detailed sub-images by wavelet decomposition, then applies mean-shift convergence to the cluster centers to reduce the amount of data, and then performs fuzzy C-means clustering on the converged centers, making the texture characteristics of the image more concentrated.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 556–561, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Fig. 1. Schematic diagram of full wavelet decomposition ((a) Original, (b) wavelet
decomposition, (c) wavelet frame)
The mean shift algorithm estimates cluster centers from the texture density gradient and can perform unsupervised cluster classification. It moves a sample point in the feature space toward the local average until it converges to a specific location, which is taken as the center of the texture. For any point, the mean shift algorithm processes the points within a region of given radius centered on that point in order to achieve convergence, as shown in Figure 2.
The mean shift algorithm formulas are:
m(x) = (1/k) Σ_{x_i ∈ S_h(x)} x_i  (1)
x ← m(x), repeated until ‖m(x) − x‖ approaches zero  (2)
where S_h(x) is the region of radius h centered on x and k is the number of samples within it.
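The mean shift iteration described above can be sketched as follows. This is an illustrative implementation with a flat kernel, not the paper's code; the data, radius, and function name are our own choices for the demo.

```python
import numpy as np

def mean_shift_point(x, data, radius):
    """Iterate the mean-shift update: move x to the mean of the points
    within `radius`, until it converges to a density mode."""
    for _ in range(100):
        near = data[np.linalg.norm(data - x, axis=1) < radius]
        new_x = near.mean(axis=0)
        if np.linalg.norm(new_x - x) < 1e-6:
            break
        x = new_x
    return x

# two well-separated blobs; a start point near the first blob should
# converge to that blob's centre
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
mode = mean_shift_point(np.array([0.5, 0.5]), data, radius=1.5)
```

Each update replaces the point by the average of its neighbors within the window, so the point climbs the sampled density until it settles at a mode — the texture center in the text above.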
For any finite data set {x_m; m = 1, …, M}, let the number of clusters be C (2 ≤ C ≤ M) and the fuzziness weighting exponent be w (1 < w < ∞); initialize the C × M fuzzy partition matrix U(0), with elements u_cm, c = 1, …, C, m = 1, …, M. Fuzzy C-means clustering is achieved by minimizing an objective function over the membership matrix U and the cluster centers V, performing the following steps:
1) calculate the cluster centers using v_c = Σ_m (u_cm)^w x_m / Σ_m (u_cm)^w;
2) update U(l) using u_cm = 1 / Σ_{k=1}^{C} ( ‖x_m − v_c‖ / ‖x_m − v_k‖ )^{2/(w−1)};
3) if ‖U(l+1) − U(l)‖ < ε, where ε is the convergence threshold, the loop ends and the objective function is close to its minimum; otherwise return to step 1) and continue. The algorithm has been shown to have good convergence.
The weighting exponent w determines the degree of fuzziness of the clustering: if w is too small, the clustering effect is poor; if w is too large, the clusters become indistinct. Experiments show that w = 2 gives a good clustering effect.
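The three-step loop above can be sketched as a compact implementation. This is a minimal illustration, not the paper's code; the regularizing constant added to the distances and the toy data are our own assumptions.

```python
import numpy as np

def fcm(X, C=2, w=2.0, eps=1e-5, max_iter=200, seed=0):
    """Fuzzy C-means sketch following the steps above: w is the fuzziness
    exponent, eps the convergence threshold on the membership matrix U."""
    rng = np.random.default_rng(seed)
    M = len(X)
    U = rng.random((C, M))
    U /= U.sum(axis=0)                                  # columns sum to 1
    for _ in range(max_iter):
        Uw = U ** w
        V = (Uw @ X) / Uw.sum(axis=1, keepdims=True)    # step 1: centres
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (w - 1)))              # step 2: update U
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < eps:               # step 3: converged?
            U = U_new
            break
        U = U_new
    return U, V

# two tight groups of points at (0,0) and (4,4)
X = np.vstack([np.zeros((20, 2)), np.ones((20, 2)) * 4])
U, V = fcm(X)
```

With w = 2 (the value the experiments above favor) the memberships fall off as the inverse squared distance ratio, and the two centres land on the two groups.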
correlation coefficient P_i(j), where j denotes the class number in the texture model library; finally, the basic probability assignment m_i(j) of texture class j is constructed from P_i(j) and the feature f_i. The procedure is as follows, with j* the texture class whose correlation coefficient with f_i is maximal:
(3)
(4)
(5)
Feature i assigns to the recognition framework Θ the remaining basic probability value, i.e., the uncertainty of the feature:
(6)
From these, the basic probability values of a single feature (the image gray value and the GLCM entropy) can be calculated, and then the basic probabilities of the fused features. From the basic probability assignment, the belief function and the plausibility function can be computed, yielding the confidence interval of the evidence; the difference between the two expresses the level of ignorance about a proposition.
For the analysis of and decision-making on the basic probability assignment, a rule-based method is used, with the following four rules: a) the target class should have the maximum basic probability value; b) the difference between the basic probability value of the target class and those of the other classes must be greater than a threshold ε1, meaning that the levels of support the evidence gives to the different classes must be kept sufficiently far apart; c) the uncertainty probability m_i(Θ) must be smaller than a threshold ε2, so that the level of ignorance or uncertainty of the evidence about the target classes is not too large; d) the basic probability value of the target class must be greater than the uncertainty probability m_i(Θ), i.e., if very little is known about a target, it cannot be classified.
In summary, the basic steps of applying D-S evidence theory to image texture classification are as follows: extract the image texture features (the image gray value and the GLCM entropy) and match them against the texture model base, calculating the correlation coefficients; calculate, for each piece of evidence, the basic probability assignment, the belief function, and the plausibility function; use the combination rule to fuse all the evidence and obtain the combined basic probability assignment, belief function, and plausibility function; and, according to the decision rules, choose the hypothesis with the largest combined evidence.
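The fusion step above uses Dempster's rule of combination. A minimal sketch for two pieces of evidence over a two-class frame is given below; the class names, the mass values, and the use of 'theta' for the whole frame are illustrative assumptions, not values from the paper.

```python
# Sketch of Dempster's rule of combination for two basic probability
# assignments over the frame {'vascular', 'other'}; 'theta' stands for the
# whole frame (the uncertainty mass m(Θ)).
def combine(m1, m2):
    def meet(a, b):
        if a == 'theta':
            return b
        if b == 'theta':
            return a
        return a if a == b else None        # None = empty intersection
    combined = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += pa * pb         # mass lost to conflict
            else:
                combined[c] = combined.get(c, 0.0) + pa * pb
    k = 1.0 - conflict                      # normalisation factor
    return {s: v / k for s, v in combined.items()}

m_gray = {'vascular': 0.6, 'other': 0.1, 'theta': 0.3}   # gray-value evidence
m_glcm = {'vascular': 0.5, 'other': 0.2, 'theta': 0.3}   # GLCM-entropy evidence
fused = combine(m_gray, m_glcm)
```

Two independent sources that both lean toward the same class reinforce each other: the fused mass on that class exceeds either input mass, while the residual uncertainty m(Θ) shrinks.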
4 Experimental Results
The processing target is a bamboo stem cross-section: Figure 3(a) is the original image of the crystallized vascular bundles in the cross-section; Figure 3(b) is the initial segmentation result of edge detection based on phase congruency; and Figure 3(c) is the segmentation result of region growing based on the edge information and morphological characteristics.
Comparing the original image of Figure 3(a) with the processing result of Figure 3(c) shows that the result is in general satisfactory. It not only segments the vascular area but also loses only a small amount of information about the crystalline fiber caps; the more complete information retained about the target area guarantees the accuracy of the subsequent feature extraction.
5 Conclusion
In view of the rich and diverse detail of natural image textures, applying D-S evidence theory or fuzzy C-means clustering alone cannot, in many cases, meet the requirements of image segmentation. The segmentation method described in this paper uses the mean shift algorithm to concentrate the regional information, then applies D-S evidence theory to test the borders, providing candidate regions and region models for region growing. Seed points are selected automatically using the boundary information to complete the final segmentation, and region-based analysis is performed on the characteristics of the regions of interest. Experimental results show that the method is robust for image texture segmentation, and the segmentation results obtained basically agree with the judgment of the human visual system.
Time-Delay Estimation Based on Multilayer Correlation
1 Introduction
Temperature field measurement based on acoustic time-of-flight (TOF) tomography [1] has been used in atmospheric monitoring and heat management. It has many advantages, such as being nondestructive, noncontact, and quick in response. Time-delay estimation is a key technique in acoustic temperature field measurement.
Time-delay estimation is an important signal processing problem and has received a significant amount of attention during the past decades in various applications, including radar, sonar, radio navigation, wireless communication, and acoustic tomography [1].
The signals received at two spatially separated microphones in the presence of noise can be modeled by
r1(t) = s(t) + n1(t),  r2(t) = s(t − D) + n2(t),  0 ≤ t ≤ T  (1)
where r1(t) and r2(t) are the outputs of the two microphones, s(t) is the source signal, n1(t) and n2(t) represent the additive noises, T denotes the observation interval, and D is the time delay between the two received signals.
Time-delay estimation is not an easy task because of the various noises and the short observation interval. There are many algorithms to estimate the time delay D. Cross-correlation (CC) is one of the basic algorithms, and many methods, such as the second correlation (SC) method [2] and the generalized correlation method [3], are developed from it.
There are two forms of correlation: auto-correlation and cross-correlation. The cross-correlation function is a measure of the similarities or shared properties between two signals; it can be used to detect or recover signals buried in noise.
The CC method cross-correlates the microphone outputs r1(t) and r2(t) and takes the time argument corresponding to the maximum peak of the output as the estimated time delay:
D_CC = arg max_τ R_c(τ)  (2)
The signal and the noise are assumed to be uncorrelated, so R_{n1s}(τ − D) and R_{sn2}(τ) are zero. The additive noises are also assumed to be uncorrelated, so R_{n1n2}(τ) is zero. In practice, however, R_{n1n2}(τ) usually cannot be neglected, because some correlation between the two noises exists. So we have
R_c(τ) = R_ss(τ − D) + R_{n1n2}(τ) = R_{S1}(τ − D) + N_1(τ)  (3)
where R_ss(τ), also written R_{S1}(τ), is the auto-correlation of the source signal s(t); R_{S1}(τ − D) reaches its maximum when τ = D; and R_{n1n2}(τ), also written N_1(τ), is the cross-correlation of the noises n1(t) and n2(t). R_{S1}(τ) and N_1(τ) can be thought of as the signal component and the noise component of R_c(τ), respectively.
where R_{n2n2}(τ), also written N'_1(τ), is the auto-correlation of the noise n2(t); R_{S1}(τ) and N'_1(τ) can be thought of as the signal component and the noise component of R_1(τ), respectively.
The SC method cross-correlates R_1(t) and R_c(t) and takes the time argument corresponding to the maximum peak of the output as the estimated time delay:
D_SC = arg max_τ R_s(τ)
R_s(τ) = E[R_1(t) R_c(t + τ)] = E{[R_{S1}(t) + N'_1(t)][R_{S1}(t − D + τ) + N_1(t + τ)]}  (6)
For longer data sequences, correlation operations can be sped up by using the correlation theorem and the fast Fourier transform as follows [4], where FFT and IFFT denote the fast Fourier transform and the inverse fast Fourier transform, respectively, and * is the conjugate operator.
The fast implementation of the CC, SC, and MC methods is given in Fig. 1, where REAL means taking the real part and MAX means locating the peak position.
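The FFT-based correlation just described can be sketched as follows for the basic CC method: correlate via the frequency domain, take the real part, and read the delay off the peak position. The test-signal parameters below (sampling rate, sweep range, delay) are our own choices for the demo, not the paper's simulation values.

```python
import numpy as np

def delay_by_cc(r1, r2, fs):
    """Estimate the delay of r2 relative to r1 from the peak of their
    cross-correlation, computed via zero-padded FFTs (correlation theorem)."""
    n = len(r1)
    spec = np.fft.fft(r2, 2 * n) * np.conj(np.fft.fft(r1, 2 * n))
    cc = np.real(np.fft.ifft(spec))
    # map FFT bin index to a signed lag in samples
    lags = np.concatenate([np.arange(n + 1), np.arange(-n + 1, 0)])
    return lags[np.argmax(cc)] / fs

# a swept-frequency test signal, in the spirit of the chirp used later
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * (10 * t + 40 * t ** 2))
d = 23                                    # true delay in samples
r2 = np.concatenate([np.zeros(d), s[:-d]])
est = delay_by_cc(s, r2, fs)
```

Because a swept-frequency signal has a sharp auto-correlation peak, the maximum of the correlation lands at the true lag; a pure sinusoid would give ambiguous, periodic peaks.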
566 H. Yan, Y. Zhang, and G.N. Chen
4 Simulation Research
In order to avoid the influence of acoustic travel-distance measurement errors and the like, the acoustic travel-time estimation method based on multilayer correlation is verified using MATLAB simulation data. The acoustic source signal s(t) is a linear swept-frequency cosine signal generated by chirp(t, f0, t1, f1), and the acoustic signal at the receiving point can be written as s(t − τ). In this paper, f0 = 200 Hz, f1 = 850 Hz, t1 = T/2 = N/(2fs), N = 25000 or 50000, fs = 250 kHz, and τ = 0.014412 s, where N is the number of samples and fs is the sampling frequency.
Simulation research shows that the acoustic travel-time estimate is stable and exact if the noise is weak; if the noise is not weak, the estimate fluctuates slightly. The standard deviation (std) and the relative root-mean-square error (R_RMSE) are used to assess the stability and accuracy of the travel-time estimates. They are defined as follows.
std = sqrt( Σ_{i=1}^{n} (τ_i − τ̄)² / (n − 1) ),  τ̄ = (1/n) Σ_{i=1}^{n} τ_i
R_RMSE = sqrt( (1/n) Σ_{i=1}^{n} (τ_i − τ)² ) / τ × 100%  (14)
where τ is the actual acoustic travel-time, τ_i is the ith estimate of the acoustic travel-time, and n is the number of measurements (estimates); in this paper, n = 100.
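The two assessment statistics defined above translate directly into code. The sample estimates below are illustrative values of our own, chosen near the paper's true travel-time of 0.014412 s.

```python
import math

# Sketch of the std and R_RMSE statistics defined in equation (14):
# std measures the spread of the n estimates, R_RMSE their error
# relative to the true travel-time tau.
def std_and_rrmse(estimates, tau):
    n = len(estimates)
    mean = sum(estimates) / n
    std = math.sqrt(sum((t - mean) ** 2 for t in estimates) / (n - 1))
    rrmse = math.sqrt(sum((t - tau) ** 2 for t in estimates) / n) / tau * 100
    return std, rrmse

# illustrative estimates around the true value tau = 0.014412 s
std, rrmse = std_and_rrmse([0.0144, 0.0145, 0.0143, 0.0144], 0.014412)
```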
The estimation results when Gaussian white noises and colored noises are added are given in Table 1 and Table 2, respectively. The colored noise is obtained by feeding the white noise through a band-rejection filter whose system function is
H(z) = 1 / (1 − 2z⁻¹ + 1.227z⁻² − 0.6192z⁻³)  (15)
The following can be found from Tables 1–4:
1) The stability and accuracy of the time-delay estimation decrease as the SNR decreases or as the number of samples decreases.
2) Among the CC, SC, and MC methods, the MC method has the best stability and accuracy of time-delay estimation.
5 Conclusion
In order to acquire a stable acoustic time-of-flight at low SNR, a time-delay estimation method combining multilayer cross-correlation and multilayer auto-correlation (the MC method) is proposed and compared with the cross-correlation (CC) method and the second correlation (SC) method. Theoretical analysis and MATLAB simulation show that the MC method obtains the highest estimation precision of the three methods when the signal-to-noise ratio is low.
References
1. Ostashev, V.E., Voronovich, A.G.: An Overview of Acoustic Travel-Time Tomography in the Atmosphere and its Potential Applications. Acta Acust. United Ac. 87, 721–730 (2001)
2. Gedalyahu, K., Eldar, Y.C.: Time-delay Estimation from Low-rate Samples: A Union of Subspaces Approach. IEEE T. Signal Proces. 58, 3017–3031 (2010)
3. Tang, J., Xing, H.Y.: Time Delay Estimation Based on Second Correlation. Computer Engineering 33, 265–269 (2007) (in Chinese)
4. Knapp, C.H., Carter, C.G.: The Generalized Correlation Method for Estimation of Time Delay. IEEE T. Acoust., Speech, Signal Processing ASSP-21, 320–327 (1976)
5. Ifeachor, E.C., Jervis, B.W.: Digital Signal Processing: A Practical Approach, 2nd edn. Publishing House of Electronics Industry, Beijing (2003)
Applying HMAC to Enhance Information Security for
Mobile Reader RFID System
1 Introduction
RFID is a contactless automatic identification technology which combines information and communication technologies. It has become very popular and has been applied in various business processes; the most common RFID application types are asset management, asset tracking, automated payment, access control, and supply chain management [1-3]. An RFID system is comprised of tags, readers, and application systems. In the system structure shown in Fig. 1, the tag stores detailed information about the customer and the owner company; the RFID readers read or write data to tags and transfer the data to the application system over a radio frequency interface.
RFID system configurations can be categorized as passive and active types. In the present mobile era, active-type applications are more attractive to most companies; however, only a few studies focusing on active-type RFID systems can be found [4-11]. The situation we face is that ubiquitous computing is accompanied by new weaknesses which invite information attacks. Moreover, owing to the widely distributed item sites in the application environment, the owning company cannot manage its assets satisfactorily. In the general case, an information attack exploits some weakness and disables the service. The potential security controls to mitigate the corresponding risks are identified following the Guidelines for RFID Systems [12]. The system security requirements have been highlighted in [13], as listed in Table 1.
The remainder of this paper is organized as follows: Section 2 describes the proposed security protocol; Section 3 presents the system implementation and discussion; and Section 4 contains the conclusions.
1) Initial phase: First, the tag embedded in an item is processed in the factory; the secret keys belonging to each tag are specified and stored in the database. The related HMAC is generated and written to the tag; the detailed steps are depicted in Fig. 2.
2) Maintain phase: Third-party personnel are responsible for deciding whether a tag is legal by checking the HMAC. If and only if the HMAC matches is the maintainer allowed to proceed to the next step and write the encrypted data to the tag. The detailed operation process is depicted in Fig. 3.
3) Revoke phase: If and only if the HMAC matches, the tag can be revoked, following the same procedure as shown in Fig. 3.
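The HMAC check that gates the maintain and revoke phases can be sketched with the standard library. The key, tag ID, and field layout below are illustrative assumptions of ours, not the paper's protocol details; the paper does not specify the hash function, so SHA-256 is assumed here.

```python
import hashlib
import hmac

# Hedged sketch of the tag-authentication idea above: the backend derives an
# HMAC over the tag's ID and data with the tag's per-tag secret key; a
# maintainer may write only if the recomputed HMAC matches the stored one.
def make_tag_hmac(secret_key: bytes, tag_id: bytes, data: bytes) -> bytes:
    return hmac.new(secret_key, tag_id + data, hashlib.sha256).digest()

def verify_tag(secret_key, tag_id, data, stored_mac) -> bool:
    expected = make_tag_hmac(secret_key, tag_id, data)
    return hmac.compare_digest(expected, stored_mac)  # constant-time compare

key = b"per-tag secret from the database"
mac = make_tag_hmac(key, b"TAG-0001", b"asset record v1")
ok = verify_tag(key, b"TAG-0001", b"asset record v1", mac)
bad = verify_tag(key, b"TAG-0001", b"tampered", mac)
```

Using a constant-time comparison avoids leaking the position of the first mismatching byte, which matters when the verifier is reachable over the air interface.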
Fig. 3. Operation flow of the maintain/revoke phase: request the tag data; check that the tag data is read correctly (otherwise show a fail message); calculate the HMAC and get the target information (on mismatch, show a fail message); write the encrypted (maintain/revoke) data to the tag; refresh the database; show a success message; end.
4 Conclusion
Integrating RFID techniques into asset management is an innovative application, and a mobile-reader-based system is preferred: it requires fewer human resources and can analyze business data in time. A framework comprising HMAC authentication and DES encryption has been proposed to protect information security. The prototype system demonstrates that the proposed approach can meet the system security requirements. Furthermore, the proposed method provides flexibility in the use of memory space. There is no doubt that lower computation cost and greater memory safety are two factors favoring the successful implementation of RFID. The proposed protocol is valuable for deploying a real RFID-based asset management system, and the security framework could also be used in other mobile-reader RFID system deployments.
References
1. Chan, S.Y., Luan, S.W., Teng, J.H., Tsai, M.C.: Design and implementation of a RFID-based power meter and outage recording system. In: IEEE International Conference on Sustainable Energy Technologies, pp. 750–754 (2008)
2. Rieback, M.R., Crispo, B., Tanenbaum, A.S.: The Evolution of RFID Security. IEEE Pervasive Computing, 62–69 (2006)
3. Wu, M.Y., Ke, C.K., Tzeng, W.L.: Applying Context-Aware RBAC to RFID Security Management for Application in Retail Business. In: IEEE Asia-Pacific Conference on Service Computing, pp. 1208–1212 (2008)
4. Ding, Z.H., Li, J.T., Feng, B.: A Taxonomy Model of RFID Security Threats. In: IEEE International Conference on Communication Technology, pp. 765–768 (2008)
5. Schaberreiter, T., Wieser, C., Sanchez, I., Riekki, J., Roning, J.: An Enumeration of RFID Related Threats. In: IEEE Second International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, pp. 381–389 (2008)
6. Sharif, V.P.: A Critical Analysis of RFID Security Protocols. In: IEEE International Conference on Advanced Information Networking and Applications, pp. 1357–1362 (2008)
7. Chien, H.Y.: Secure access control schemes for RFID systems with anonymity. In: 2006 International Workshop on Future Mobile and Ubiquitous Information Technologies, pp. 96–99 (May 2006)
8. Zhai, J., Park, C., Wang, G.: Hash-Based RFID Security Protocol Using Randomly Key-Changed Identification Procedure. In: Gavrilova, M.L., Gervasi, O., Kumar, V., Tan, C.J.K., Taniar, D., Laganà, A., Mun, Y., Choo, H. (eds.) ICCSA 2006. LNCS, vol. 3983, pp. 296–305. Springer, Heidelberg (2006)
9. Avoine, G., Oechslin, P.: A Scalable and Provably Secure Hash-Based RFID Protocol. In: Communications Workshops on IEEE Pervasive Computing, pp. 110–114 (2005)
10. Gao, X., Xiang, Z., Wang, H., Shen, J., Huang, J., Song, S.: An Approach To Security and Privacy of RFID System for Supply Chain. In: IEEE International Conference on E-Commerce Technology for Dynamic E-Business, pp. 164–168 (2004)
11. Kang, S., Lee, I.: A Study on New Low-Cost RFID System with Mutual Authentication Scheme in Ubiquitous. In: IEEE International Conference on Multimedia and Ubiquitous Engineering, pp. 527–530 (2008)
12. Karygiannis, T., Eydt, B., Barber, G., Bunn, L., Phillips, T.: Guidelines for Securing Radio Frequency Identification (RFID) Systems. NIST Special Publication 800-98 (2007)
13. Wang, F.-T., Wu, T.-D.: Information Security Study on RFID Based Power Meter System. In: IEEE International Conference on Information Management and Engineering, Chengdu, China, pp. 317–320 (2010)
14. Maiwald, E.: Network Security: A Beginner's Guide. The McGraw-Hill Companies, New York (2003)
Analysis Based on Generalized Regression Neural
Network to Oil Atomic Emission Spectrum Data of a
Type Diesel Engine
1 Introduction
Modern mechanical equipment and power plants commonly run on oil films of several microns, and the lubricating oil carries abundant wear information. Analysis of the physical and chemical performance targets and of the grain sizes and concentrations of elements can reveal the wear condition, oil quality decay, and contamination of a machine [1]. Oil monitoring is a technique for obtaining information on the lubricant and on wear, predicting faults and ascertaining the reasons, types, and locations of faults by analyzing changes in the performance of the lubricant and in the wear debris carried by the monitored equipment [2]. There are many common methods for deeply mining the information in oil atomic emission spectrum data, including grey system theory [3], factor analysis [4], maximum entropy principle analysis [5], correlation coefficient analysis [6], regression analysis [7], and support vector machines [8]. This paper aims at setting up a relation between the concentration of wear elements in a diesel engine and its loads, cylinder clearances, and run time since the oil was renewed, by applying a Generalized Regression Neural Network (GRNN) to the oil atomic emission spectrum data of a 6-cylinder diesel engine.
Fig. 1. GRNN structure: an input vector x of dimension R; a radial basis hidden layer of S1 neurons with weight matrix W1 (S1 × R), distance weight function dist, and bias b1; and a pure linear output layer of S2 neurons with weight matrix W2 (S2 × S1) and normalized dot-product weight function nprod.
R_i(x) = exp( −‖x − d_i‖² / (2σ_i²) )  (1)
In equation (1), x is the input sample, d_i is the center of the ith hidden-layer node, and σ_i is the smoothing factor which determines the shape of the basis function at the ith hidden node. The output A¹ of the hidden layer is as follows. The output layer is a pure linear layer whose weight function is a normalized dot product, denoted nprod, which calculates
576 C.H. Zhang, H.X. Tian, and T. Liu
A¹ = exp( −(dist · b¹)² / (2σ²) )  (2)
the net input n² of the network. The elements of n² are the quotients of the dot products of the vector A¹ with each row of the weight matrix W², divided by the sum of the elements of the vector A¹. The network output is A² = purelin(n²). Because the learning of the network depends entirely on the data samples, the results largely avoid the influence of subjective assumptions.
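The GRNN forward pass described above — a radial basis hidden layer followed by a normalized dot-product output — can be sketched compactly. This is a minimal one-output illustration of ours (equivalent to a Nadaraya-Watson estimator); the toy data and smoothing factor are assumptions, not the paper's samples.

```python
import numpy as np

# Minimal GRNN sketch: the hidden layer applies a radial basis to the
# distance from each training sample (eq. (1)); the output layer is the
# normalised dot product (nprod) of the hidden activations with the targets.
def grnn_predict(X_train, y_train, x, sigma=0.3):
    d2 = np.sum((X_train - x) ** 2, axis=1)
    a1 = np.exp(-d2 / (2 * sigma ** 2))        # radial-basis hidden layer
    return (a1 @ y_train) / a1.sum()           # normalised dot product

# training samples of y = x^2; the prediction at x = 2 should be near 4
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])
pred = grnn_predict(X, y, np.array([2.0]))
```

Because the output is a weighted average of the training targets, a GRNN needs no iterative training; the smoothing factor σ plays the role described for equation (1).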
Figure 2 shows that most of the samples are simulated successfully. The SPECTROIL M spectrometer prescribes acceptable accuracy indices for Cu at standard Cu concentrations of 0, 5, 10, 30, 50, 100, and 300 ppm, as shown in Table 2. The MATLAB cubic interpolation function interp1(x, y, xi, 'cubic') is employed to estimate the accuracy indices of Cu for the 69 samples, with the standard concentrations and accuracy indices of Table 2 used as the input vectors x and y of the function.
Standard concentration /ppm: 0, 5, 10, 30, 50, 100, 300
The absolute errors of the simulated Cu concentrations are obtained by comparing the simulated values with the observed values. Comparing these absolute errors with the acceptable accuracy indices obtained from the cubic interpolation reveals the quality of the simulation: all 69 samples' absolute errors are lower than the acceptable accuracy indices, as shown in Table 3.
The data shown in Table 4 suggest that the 19 samples' absolute errors are lower than the acceptable accuracy indices.
4 Conclusions
Based on the Generalized Regression Neural Network algorithm, a simulation model has been set up for the relation between the Cu concentration in the lubricating oil of a diesel engine and its loads, cylinder clearances, and run time since the oil was renewed. The simulation model has proved effective, as the absolute errors of the simulated values for the 69 samples are lower than the acceptable accuracy indices.
For the seven working conditions, the Cu concentrations of 19 randomly chosen samples were predicted accurately by the predicting model based on the Generalized Regression Neural Network analysis.
References
1. Toms, L.A., Toms, A.M.: Machinery Oil Analysis. Society of Tribologists & Lubrication Engineers, 1–20 (2008)
2. Yan, X.: Development and Thinking of Oil Monitoring Technique. Lubrication Engineering 14(7), 6–8 (1999)
3. Wang, L., Li, L.: The Sampling Time Prediction of Oil Analysis for Power-shift Steering Transmission. Lubrication Engineering 35(8), 84–87 (2010)
4. Liu, T., Tian, H.X., Guo, W.Y.: Application of Factor Analysis to a Type Diesel Engine SOA. In: 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, March 13-14, pp. 612–615 (2010)
5. Huo, H., Li, Z., Xia, Y.: Application of maximum entropy probability concentration estimation approach to constituting oil monitoring diagnostic criterions. Tribology International 39, 528–532 (2006)
6. Tian, H.X., Ming, T.F., Liu, Y.: Comparison among oil spectral data of six types of marine engine. In: International Conference on Transportation Engineering, Chengdu, pp. 43–48 (2009)
7. Zhou, P., Liu, D.F., Shi, X., Li, G.: Threshold Setting of Oil Spectral Analysis Based on Robust Regression. Lubrication Engineering 35(5), 85–88 (2010)
8. Fan, H.B., Zhang, Y.T., Ren, G.Q., Luo, H.F.: Study on Prediction Model of Oil Spectrum Based on Support Vector Machines. Lubrication Engineering 183(11), 148–150 (2006)
9. Shi, F., Wang, X.C., Yu, L.: Analysis of 30 cases of MATLAB neural network, pp. 73–80. Press of Beijing University of Aeronautics and Astronautics, Beijing (2010)
10. Yan, W.J.: Research on Discrimination of Tongue Diseases with Near Infrared Spectroscopy. Infrared Technology 32(8), 487–490 (2010)
11. Liu, T.: Study in the Field of Oil Atomic Emission Spectrum Data Mining, pp. 18–20. Naval University of Engineering, Wuhan (2009)
12. Liu, T., Tian, H.X., Guo, W.Y.: Application of Factor Analysis to a Type Diesel Engine SOA. In: 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, March 13-14, pp. 612–615 (2010)
Robust Face Recognition Based on KFDA-LLE
and SVM Techniques
1 Introduction
Face recognition has been researched extensively in the past decade due to the recent emergence of applications such as secure access control, visual surveillance, content-based information retrieval, and advanced human-computer interaction. However, facial expression, occlusion, and lighting conditions change the overall appearance of faces, and the high degree of variability in these factors makes face recognition a challenging task. To recognize human faces efficiently, dimensionality reduction is an important and necessary operation for multi-dimensional data. The objective of dimensionality reduction is to obtain a more compact representation of the original data, one that nonetheless captures all the information necessary for higher-level decision-making. Wang et al. [1] present four reasons for reducing the dimensionality of observation data: (1) to compress the data to reduce storage requirements; (2) to extract features from the data for face recognition; (3) to eliminate noise; and (4) to project the data to a lower-dimensional space so as to discern the data distribution. For face recognition, classical dimensionality reduction methods include Principal Component Analysis (PCA) [2], Independent Component Analysis [3], and Linear Discriminant Analysis [4].
582 G. Wang and C. Gao
2 KFDA-LLE
LLE maps its inputs into a single global coordinate system of lower dimension, attempting to discover nonlinear structure in high-dimensional data by exploiting the local symmetries of linear reconstructions. Its optimizations do not involve local minima, though it is capable of generating highly nonlinear embeddings.
The LLE algorithm is based on simple geometric intuitions. The input data consist of N points x_i ∈ R^D, i ∈ [1, N], each of dimensionality D, obtained by sampling an underlying manifold. Provided there is sufficient data (such that the manifold is well sampled), each data point and its neighbors are expected to lie on or near a locally linear patch of the manifold. The linear coefficients that reconstruct each data point from its neighbors are used to characterize the local geometry of these patches. As output, it provides N points y_i ∈ R^d, i ∈ [1, N], where d << D. A brief description of the LLE algorithm is as follows:
Stage I, the cost function to be minimized is defined as:
ε(W) = Σ_{i=1}^{N} ‖ x_i − Σ_{j=1}^{N} W_ij x_ij ‖²  (1)
ε^i(W) = ‖ Σ_{j=1}^{K} W_ij (x_i − x_j) ‖² = Σ_{j=1}^{K} Σ_{m=1}^{K} W_ij W_im C^i_jm  (2)
Stage II, the weight matrix W is fixed and new d-dimensional vectors y_i are sought which minimize another cost function:
Φ(Y) = Σ_{i=1}^{N} ‖ y_i − Σ_{j=1}^{N} W_ij y_ij ‖²  (4)
The W_ij can be stored in an N × N sparse matrix M; re-writing equation (4) then gives:
Φ(Y) = Σ_{i=1}^{N} Σ_{j=1}^{N} M_ij y_i^T y_j  (5)
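The two LLE stages above can be sketched in a short numpy implementation. This is an illustrative version of ours, not the paper's code: the regularization of the local Gram matrix is a standard practical detail not stated in the text, and the arc data is a toy example.

```python
import numpy as np

# Sketch of the two LLE stages: solve the local reconstruction weights
# (Stage I), then embed via the bottom eigenvectors of M (Stage II).
def lle(X, K=4, d=1):
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        nbrs = np.argsort(np.sum((X - X[i]) ** 2, axis=1))[1:K + 1]
        Z = X[nbrs] - X[i]                   # neighbours centred on x_i
        C = Z @ Z.T                          # local Gram matrix, cf. eq. (2)
        C += np.eye(K) * 1e-3 * np.trace(C)  # regularise (standard detail)
        w = np.linalg.solve(C, np.ones(K))
        W[i, nbrs] = w / w.sum()             # reconstruction weights sum to 1
    I = np.eye(N)
    M = (I - W).T @ (I - W)                  # quadratic form of eq. (5)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                  # skip the constant eigenvector

# points along a circular arc: a 1-D manifold embedded in 2-D
t = np.linspace(0, 3, 20)
X = np.column_stack([np.cos(t), np.sin(t)])
Y = lle(X).ravel()
```

Because the rows of W sum to one, the constant vector is an exact null vector of M; the embedding is read from the next eigenvectors, which here recover the arc-length ordering of the points.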
algorithm, the feature vector y is nonlinearly mapped into a high-dimensional space, and an FLD method is then utilized, globally forming a nonlinear mapping, i.e., KFDA. The computation in the high-dimensional space can be facilitated by a Mercer kernel function.
The main idea of KFDA is first to map the feature vectors y into a high-dimensional space F by a nonlinear mapping Φ. Fisher linear discriminant analysis (FLD) can then be performed in F. Thus, we define the within-class scatter matrix S_w^Φ and the between-class scatter matrix S_b^Φ for the mapped training samples respectively as follows:
S_w^\Phi = \sum_{i=1}^{c} \sum_{j=1}^{n_i} \big(\Phi(g_j) - \mu_i\big)\big(\Phi(g_j) - \mu_i\big)^T    (6)

S_b^\Phi = \sum_{i=1}^{c} n_i (\mu_i - \mu)(\mu_i - \mu)^T    (7)
where \mu_i is the mean vector of class i and \mu is the total mean vector in the mapped
space. If S_w^\Phi is nonsingular, the optimal projection W_{opt} is chosen as the
matrix with orthonormal columns that maximizes the ratio of the determinant of the
between-class scatter matrix to that of the within-class scatter matrix, i.e.,

W_{opt} = \arg\max_{W} \frac{|W^T S_b^\Phi W|}{|W^T S_w^\Phi W|}    (8)
According to the theory of reproducing kernels [6], any solution w must lie in the
space spanned by \{\Phi(g_1), \ldots, \Phi(g_N)\}, i.e.,

w = \sum_{i=1}^{n} \alpha_i \Phi(g_i)    (9)

w^T S_w^\Phi w = \alpha^T K_w \alpha    (11)

where

K_b = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T    (13)

K_w = \sum_{i=1}^{c} \sum_{j=1}^{n_i} (\kappa_j - m_i)(\kappa_j - m_i)^T    (14)

with \kappa_j = (k(g_1, g_j), \ldots, k(g_n, g_j))^T, and

m_j = \frac{1}{n_j} \Big( \sum_{i \in \mathrm{class}\, j} k(g_1, g_i), \ldots, \sum_{i \in \mathrm{class}\, j} k(g_n, g_i) \Big)^T    (15)

m = \frac{1}{n} \Big( \sum_{i=1}^{n} k(g_1, g_i), \ldots, \sum_{i=1}^{n} k(g_n, g_i) \Big)^T    (16)
Then the following eigenvalue problem is obtained:

A_{opt} = \arg\max_{A} \frac{|A^T K_b A|}{|A^T K_w A|}, \quad A = [\alpha_1, \ldots, \alpha_{c-1}]    (17)
Robust Face Recognition Based on KFDA-LLE and SVM Techniques 585
To deal with the singularity of the within-class scatter matrix K_w that one often
encounters in classification problems, we can use a regularization strategy, adding a
multiple of the identity matrix to the within-class scatter matrix, i.e., K_w \leftarrow K_w + \epsilon I,
where \epsilon is a small number. This also makes the eigenvalue problem numerically more
stable. Another viable approach to the singularity problem, using PCA, could also be
adopted [3]. This method performs a linear dimensionality reduction that removes the
null space to obtain a matrix of full rank.
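The regularized kernel Fisher eigenproblem described above can be sketched as follows; the helper `kfda_directions`, the RBF kernel, and the toy blobs are illustrative assumptions, not the authors' code:

```python
import numpy as np

def kfda_directions(K, labels, eps=1e-3):
    """Sketch of kernel FDA with the regularization K_w <- K_w + eps*I.
    K: n x n kernel matrix; labels: length-n class labels."""
    n = K.shape[0]
    classes = np.unique(labels)
    m = K.mean(axis=1)                      # total mean vector, cf. eq. (16)
    Kb = np.zeros((n, n))
    Kw = np.zeros((n, n))
    for c in classes:
        idx = np.where(labels == c)[0]
        mi = K[:, idx].mean(axis=1)         # class mean vector, cf. eq. (15)
        Kb += len(idx) * np.outer(mi - m, mi - m)
        for j in idx:
            Kw += np.outer(K[:, j] - mi, K[:, j] - mi)
    Kw += eps * np.eye(n)                   # regularization against singularity
    # leading eigenvectors of Kw^{-1} Kb give the alpha vectors, cf. eq. (17)
    vals, vecs = np.linalg.eig(np.linalg.solve(Kw, Kb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:len(classes) - 1]].real

# toy usage with an RBF kernel on two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(4, 1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-0.5 * D2)
A = kfda_directions(K, y)
print(A.shape)  # (20, 1): c - 1 = 1 discriminant direction
```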
For a new testing sample z, its projection onto the optimal discriminant vector w in
the feature space F is

w^T \Phi(z) = \sum_{k=1}^{n} \alpha_k \Phi(g_k)^T \Phi(z) = \sum_{k=1}^{n} \alpha_k k(g_k, z) = \alpha^T [k(g_1, z), \ldots, k(g_n, z)]^T    (18)
f(x) = \mathrm{sgn}\Big( \sum_{i=1}^{n} \alpha_i^* y_i k(x_i, x) + b^* \Big)    (19)

where \alpha_i^* (i = 1, \ldots, n) and b^* are the optimal solutions of the optimization problem;
the classifier with the maximal output defines the estimated class label of the current
input vector. Since the former requires too many computations, we adopt the latter for
our face recognition task.
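Evaluating the decision function (19) for a given test point reduces to one kernel evaluation per support vector; the sketch below uses an RBF kernel and made-up (untrained) multipliers for illustration:

```python
import numpy as np

def svm_decision(alphas, ys, b, support_vecs, x, gamma=0.5):
    """Evaluate f(x) = sgn(sum_i alpha_i y_i k(x_i, x) + b), eq. (19),
    with an RBF kernel k(u, v) = exp(-gamma * ||u - v||^2)."""
    k = np.exp(-gamma * ((support_vecs - x) ** 2).sum(axis=1))
    return int(np.sign(alphas * ys @ k + b))

# toy usage: two support vectors with hand-picked multipliers
alphas = np.array([1.0, 1.0])
ys = np.array([1.0, -1.0])
sv = np.array([[0.0, 0.0], [1.0, 1.0]])
label = svm_decision(alphas, ys, 0.0, sv, np.array([0.0, 0.0]))
print(label)  # 1: the query point sits on the positive support vector
```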
4 Experimental Results
We test both the original and the improved LLE method against the Eigenface and
Fisherface methods using face database B of BVC2005 [9]. This face database contains
2000 face images of 100 distinct subjects. The images were acquired under varying
illumination conditions, different facial expressions, diverse backgrounds, and certain
pose changes. Each subject has twenty 640×480 RGB images. Since the images
contain large background regions, to reduce their adverse effect the original images
were cropped to 100×100 regions containing the facial contours. The cropped images
of a few subjects in the database are shown in Figure 1.
Table 1. Experimental results on the test database using the different methods
5 Conclusion
In this paper, an improved version of LLE, namely KFDA-LLE, is proposed for face
recognition. KFDA-LLE is capable of identifying the underlying structure of
high-dimensional data and nonlinearly discovering the embedding space of same-class
data. KFDA is then used to find an optimal projection direction for classification. In
the face recognition experiments, KFDA-LLE serves as a feature extraction process
and is compared with LLE and two other well-established subspace methods, all
combined with an SVM classifier. Experimental results on the face database show
that KFDA-LLE outperforms LLE and is highly competitive with the two baseline
methods for face recognition.
References
1. Wang, J., Zhang, C.S., Kou, Z.B.: An Analytical Mapping for LLE and Its Application in
Multi-pose Face Synthesis. In: The 14th British Machine Vision Conference (2003)
2. Turk, M.A., Pentland, A.P.: Face Recognition Using Eigenfaces. In: Proc. IEEE Conf.
Computer Vision and Pattern Recognition, pp. 586–591 (1991)
3. Bartlett, M.S., Movellan, J.R., Sejnowski, T.J.: Face recognition by ICA. IEEE
Transactions on Neural Networks 13(6), 1450–1463 (2002)
4. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: Recognition
using class specific linear projection. IEEE Trans. on Pattern Analysis and Machine
Intelligence 19(7), 711–720 (1997)
5. Roweis, S.T., Saul, L.K.: Nonlinear Dimensionality Reduction by LLE. Science 290,
2323–2326 (2000)
6. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (1995)
7. Li, Y.F., Ou, Z.Y., Wang, G.Q.: Face Recognition Using Gabor Features and Support
Vector Machines. In: Zhou, T.H., Bloom, T., Schaffert, J.C., Gairing, M., Atkinson, R.,
Moss, E., Scheifler, R. (eds.) CLU. LNCS, vol. 114, pp. 114–117. Springer, Heidelberg
(1981)
8. Schwenker, F.: Hierarchical Support Vector Machines for Multi-Class Pattern
Recognition. In: Fourth International Conference on Knowledge-Based Intelligent
Engineering Systems & Allied Technologies, vol. 2, pp. 561–565 (2000)
9. The First Chinese Biometrics Verification Competition,
http://www.sinobiometrics.com
10. Kovesi, P.: Symmetry and Asymmetry From Local Phase. In: The 10th Australian Joint
Conf. on A.I. (1997)
An Improved Double-Threshold Cooperative
Spectrum Sensing
1 Introduction
With the expansion of broadband multimedia services, radio spectrum resources are
becoming increasingly scarce, and the contradiction between the supply of and the
demand for spectrum is becoming more and more acute. The emergence of cognitive
radio alleviates this contradiction. Spectrum sensing is one of the most critical
technologies in cognitive radio: the detection of spectrum holes can improve the
utilization of wireless spectrum resources. In a cognitive radio network (CRN), a
cognitive user (Secondary User, SU) may only use frequency bands that are not being
used by the licensed primary user (PU). To limit interference to the PU, the SU should
exit the band immediately if the PU appears and switch to another free spectrum band
to continue communicating. How the SU can detect the presence of the PU in a timely
and reliable manner is a major challenge; therefore, spectrum sensing is very important
in cognitive radio technology.
The existing spectrum sensing methods fall into single-user detection and
collaborative detection [1]. Single-user detection mainly includes matched filter
Assume that each SU performs energy detection independently and, for simplicity,
has identical detection thresholds \lambda_1 and \lambda_2. If Y_i satisfies \lambda_1 < Y_i < \lambda_2, the i-th SU
sends Y_i to the fusion center; otherwise, if Y_i \le \lambda_1 or Y_i \ge \lambda_2, it reports its local
decision L_i. We use R_i to denote the information that the fusion center receives from
the i-th SU, which is given by:

R_i = \begin{cases} L_i, & Y_i \le \lambda_1 \ \text{or}\ Y_i \ge \lambda_2 \\ Y_i, & \text{otherwise} \end{cases}    (1)
590 D. Zhang and H. Zhang
and the local decision is

L_i = \begin{cases} 0, & Y_i \le \lambda_1 \\ 1, & Y_i \ge \lambda_2 \end{cases}    (2)
Without loss of generality, we assume that the fusion center receives K local
decisions and N−K energy detection values from the N cognitive users. The fusion
center then makes an upper decision according to the N−K energy detection values,
which is given by:

D_s = \begin{cases} 0, & Y = \sum_{i=1}^{N-K} Y_i < \lambda \\ 1, & Y \ge \lambda \end{cases}    (3)

where \lambda is the energy detection threshold of the fusion center, chosen according to
the desired false alarm probability. These N−K cognitive users cannot distinguish
between the presence and the absence of the PU by themselves, so the fusion center
collects their observation values and makes an upper decision instead of their local
decisions [5].
The fused energy value collected by the fusion center follows the distribution given
below [6]:

Y \sim \begin{cases} \chi^2_{2u(N-K)}, & H_0 \\ \chi^2_{2u(N-K)}(2\gamma), & H_1 \end{cases}    (4)

where \gamma = \sum_{i=1}^{N-K} \gamma_i represents the sum of the SNRs of the N−K cognitive users;
we can make a further decision according to energy detection [7].
The fusion center makes a final decision based on the OR-rule [8]:

D = \begin{cases} 1\ (H_1), & \sum_{i=1}^{K} L_i \ge 1 \\ 0\ (H_0), & \text{otherwise} \end{cases}    (5)
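The reporting rule (1)–(2) and the OR-rule (5) can be sketched end to end; the function name `fuse`, the soft-decision threshold `lam_fc`, and the toy energies are assumptions for illustration:

```python
import numpy as np

def fuse(energies, lam1, lam2, lam_fc):
    """Double-threshold fusion sketch. SUs with lam1 < Y < lam2 forward
    raw energies; the others forward hard 0/1 decisions. Final decision:
    OR-rule over local decisions plus a threshold test on the summed
    forwarded energies."""
    energies = np.asarray(energies, float)
    confident = (energies <= lam1) | (energies >= lam2)
    local = energies[confident] >= lam2          # hard local decisions L_i
    forwarded = energies[~confident]             # raw energy values Y_i
    soft = forwarded.sum() >= lam_fc if forwarded.size else False
    return int(local.any() or soft)              # OR-rule, cf. eq. (5)

# three SUs: one decides 0, one forwards its energy, one decides 1
print(fuse([0.5, 1.8, 2.6], lam1=1.0, lam2=2.5, lam_fc=3.0))  # 1
```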
In double-threshold detection, some cognitive users cannot make their local decisions
directly, so they send their detection values to the fusion center, which compares the
fused energy value with a threshold for further judgment. However, the traditional
scheme does not consider that different cognitive users have different detection
reliabilities, caused by their different geographical locations and environments, which
affects the final detection performance.
Since the credibility of the information detected by each cognitive user is not equal,
to solve this problem we propose in this paper a weighted judgment method using a
reliability factor based on SNR values. The reliability factor was defined in [9], and in
this paper we introduce it into double-threshold cooperative detection. The basic idea
of the reliability factor [9] is to set weights using the SNR value received by each SU:
the higher the SNR, the bigger the weight, and the lower the SNR, the smaller the
weight. The contribution of each SU to the overall decision can thus be determined
according to its detection performance, which helps to fuse the detection values of the
SUs better and improves the accuracy of cooperative spectrum sensing. First, each SU
sends its measured SNR value to the fusion center, which calculates its weight factor
according to the following formula. The fusion center then makes a decision based on
both the weight factors and its own detection values.
W_i = \frac{\gamma_i}{\sum_{j=1}^{N-K} \gamma_j}    (6)

Y_w = \sum_{i=1}^{N-K} W_i Y_i    (7)

The energy value Y_w is the sum of the products of the detected energy values Y_i and
the weight factors W_i, where Y_i denotes the collected energy value of the i-th SU, W_i
denotes the weight factor of the i-th SU, and N−K is the number of these cognitive
users. Since each Y_i follows an independent chi-square distribution with 2TW degrees
of freedom, Y_w likewise follows a chi-square-type distribution, which can be given
by [10]:

Y_w \sim \begin{cases} \chi^2_{2u(N-K)}, & H_0 \\ \chi^2_{2u(N-K)}(2\gamma_w), & H_1 \end{cases}    (8)

and

\gamma_w = \sum_{i=1}^{N-K} W_i \gamma_i    (9)

where \gamma_i is the SNR of the i-th SU, u = TW, T is the spectrum detection time, and W
is the bandwidth of the band-pass signal used for energy detection. W_1, W_2, \ldots, W_{N-K}
are the weighting factors of the cognitive users.
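One plausible reading of the SNR-based weighting in eqs. (6)–(7) is a normalized weight W_i = γ_i / Σ_j γ_j applied to the forwarded energies; the sketch below makes that assumption explicit:

```python
import numpy as np

def weighted_energy(energies, snrs):
    """SNR-based reliability weighting sketch: W_i = snr_i / sum(snr),
    then Y_w = sum_i W_i Y_i. Higher-SNR users contribute more."""
    w = np.asarray(snrs, float)
    w = w / w.sum()                       # normalized weight factors
    return float(w @ np.asarray(energies, float))

# the user with SNR 3.0 dominates the fused value
print(round(weighted_energy([2.0, 4.0], [1.0, 3.0]), 2))  # 3.5
```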
To facilitate the analysis, we introduce two parameters \Delta_{0,i} and \Delta_{1,i} to represent the
probability of \lambda_1 < Y_i < \lambda_2 for the i-th SU under hypotheses H_0 and H_1 respectively;
then we have:

\Delta_{0,i} = P(\lambda_1 < Y_i < \lambda_2 \mid H_0)    (10)

\Delta_{1,i} = P(\lambda_1 < Y_i < \lambda_2 \mid H_1)    (11)

P_{d,i} = P(Y_i \ge \lambda_2 \mid H_1) = Q_u(\sqrt{2\gamma_i}, \sqrt{\lambda_2})    (12)

P_{m,i} = P(Y_i \le \lambda_1 \mid H_1) = 1 - \Delta_{1,i} - P_{d,i}    (13)

P_{f,i} = P(Y_i \ge \lambda_2 \mid H_0)    (14)
Using Q_d, Q_m, and Q_f to denote the cooperative probabilities of detection, missed
detection, and false alarm respectively, the OR-rule gives:

Q_d = 1 - \Big[\prod_{i=1}^{K} (1 - P_{d,i})\Big]\big(1 - P(Y_w \ge \lambda \mid H_1)\big)    (15)

Q_m = 1 - Q_d    (16)

Q_f = 1 - \Big[\prod_{i=1}^{K} (1 - P_{f,i})\Big]\big(1 - P(Y_w \ge \lambda \mid H_0)\big)    (17)
Above, we analyzed the performance of the improved double-threshold spectrum
sensing; now we verify this performance using MATLAB simulation. It can be clearly
seen that the method performs better.
The parameters are as follows: AWGN channel; the probability parameters for the
two thresholds are set to 0.1; the number of users is N = 10; u = 5; the SNR of the i-th
SU is taken as -15:1:-6 dB. The simulation results are shown in the following figures:
Fig. 2. Detection probability vs. false alarm probability
Fig. 3. Missing probability vs. false alarm probability
The figures show that the improved double-threshold collaborative detection performs
better than the traditional method, with a higher detection probability and a lower
missing probability.
The reason the improved double-threshold collaborative detection method improves
the detection performance is that the cognitive users influence the fusion center
differently according to their different reliability factors. Considering this, we
introduce the reliability factors when the cognitive users send their detection values to
the fusion center. The simulation results show that this improved method can
significantly improve the cooperative spectrum sensing capability in a cognitive radio
network.
5 Conclusions
In this paper, we have proposed an improved double-threshold collaborative detection
method. In the traditional method, the cognitive users that send their detection values
to the fusion center have different channel conditions, so their detection performance
varies; the improved method fully accounts for this situation, which the traditional
method does not. The simulation results clearly show that the performance of the
improved method is significantly better.
References
1. Akyildiz, I.F., Lee, W.-Y., Vuran, M.C., et al.: Next generation/dynamic spectrum
access/cognitive radio wireless networks: A survey. Computer Networks 50(13), 2127–2159
(2006)
2. Zhou, X., Ma, J., Li, G.Y., Kwon, Y.H., Soong, A.C.K.: Probability based combination for
cooperative spectrum sensing. IEEE Trans. Commun. 58(2), 463–466 (2010)
3. Meng, J., Yin, W., Li, H., Hossain, E., Han, Z.: Collaborative spectrum sensing from
sparse observations using matrix completion for cognitive radio networks. In: Proc. IEEE
2010 International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
(March 2010)
4. Sun, C.-h., Zhang, W., Letaief, K.B.: Cooperative spectrum sensing for cognitive radios
under bandwidth constraints. In: Proc. IEEE WCNC (2007)
5. Zhu, J.: Double Threshold Energy Detection of Cooperative Spectrum Sensing in
Cognitive Radio. In: 3rd International Conference on Cognitive Radio Oriented Wireless
Networks and Communications, CrownCom 2008, May 15-17 (2008)
6. Digham, F.F., Alouini, M.S., Simon, M.K.: On the energy detection of unknown signals
over fading channels. IEEE Transactions on Communications 55(1), 21–24 (2007)
7. Urkowitz, H.: Energy Detection of Unknown Deterministic Signals. Proceedings of the
IEEE 55(4), 523–531 (1967)
8. Varshney, P.K.: Distributed detection and data fusion. Springer, New York (1997)
9. Visser, F.E., Janssen, G.J.M., Pawelczak, P.: Multi-node Spectrum Sensing Based on
Energy Detection for Dynamic Spectrum Access. In: Vehicular Technology Conference,
pp. 1394–1398 (May 2008)
10. Bin Shahid, M.I., Kamruzzaman, J.: Weighted Soft Decision for Cooperative Sensing in
Cognitive Radio Networks. In: 16th IEEE International Conference on Networks, pp. 1–6
(2008)
Handwritten Digit Recognition Based on Principal
Component Analysis and Support Vector Machines
1 Introduction
Handwritten digit recognition is an active topic in the pattern recognition area due to its
important applications to optical character recognition, postal mail sorting, bank
check processing, form data entry, and so on.
The performance of character recognition largely depends on the feature extraction
approach and the classifier learning scheme. For feature extraction in character
recognition, various approaches, such as stroke direction features, statistical features,
and local structural features, have been presented [1, 2]. Following feature extraction,
it is usually necessary to reduce the dimensionality of the features since
the original features are high-dimensional. Principal component analysis [3] is a
fundamental multivariate data analysis method and widely used for reducing the
dimensionality of the existing data set and extracting important information. The task
of classification is to partition the feature space into regions corresponding to source
classes or assign class confidences to each location in the feature space. At present,
the representative statistical learning techniques [4] including linear discriminant
classifiers (LDC) and the nearest neighbor (1-NN), and neural network [5], have been
widely used for handwritten digit recognition. Support vector machines (SVM) [6]
have become a popular classification tool due to their strong generalization capability
and have been successfully employed in various real-world applications. In the present study we
employ PCA to extract the low-dimensional embedded data representations and
explore the performance of SVM for handwritten digit recognition.
596 R. Li and S. Zhang
\mathrm{cov}(X)\, M = M \Lambda    (1)

where \mathrm{cov}(X) is the sample covariance matrix of the data X and \Lambda is the diagonal
matrix of eigenvalues. The d principal eigenvectors of the covariance matrix form the
linear mapping M, and the low-dimensional data representations are then computed
by Y = XM.
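The PCA mapping of eq. (1) and Y = XM can be sketched with NumPy's symmetric eigensolver; `pca_map` is an illustrative helper, not the authors' implementation:

```python
import numpy as np

def pca_map(X, d):
    """Eq. (1) sketch: eigenvectors of cov(X) form the mapping M,
    then Y = X_centred @ M gives the d-dimensional representations."""
    Xc = X - X.mean(axis=0)                       # centre the data
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    M = vecs[:, np.argsort(-vals)[:d]]            # top-d principal eigenvectors
    return Xc @ M

Y = pca_map(np.random.default_rng(2).normal(size=(50, 8)), d=3)
print(Y.shape)  # (50, 3)
```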
where \xi_i can be understood as the classification error and C is the penalty parameter
for this term.
By using the Lagrange method, with w_0 = \sum_{i=1}^{l} \alpha_i y_i z_i, the decision function will be

f(z) = \mathrm{sgn}\Big[ \sum_{i=1}^{l} \alpha_i y_i (z^T z_i) + b \Big]    (5)
4 Experiment Study
The popular MNIST database of handwritten digits, which has been widely used for
evaluation of classification and machine learning algorithms, is used for our
experiments. The MNIST database of handwritten digits, available from the web site:
http://yann.lecun.com/exdb/mnist, has a training set of 60000 examples, and a test set
of 10000 examples. It is a subset of a larger set available from NIST. The original
black and white images from NIST were size-normalized to fit in a 20×20 pixel box
while preserving their aspect ratio. The images were centered in a 28×28 image by
computing the center of mass of the pixels and translating the image so as to position
this point at the center of the 28×28 field. In our experiments, for computational
simplicity we randomly selected 3000 training samples and 1000 testing samples for
handwritten digits recognition. Some samples from the MNIST database are shown in
Fig.1.
Digits 0 1 2 3 4 5 6 7 8 9
0 90 0 0 0 0 5 0 0 1 0
1 0 110 1 0 0 0 0 0 4 0
2 0 0 84 1 2 0 1 1 0 0
3 0 0 5 108 0 3 0 0 7 0
4 0 0 3 0 69 1 1 0 1 12
5 2 0 3 2 2 88 0 0 1 1
6 2 0 0 0 1 0 84 0 1 0
7 0 2 1 0 1 0 0 101 0 6
8 2 0 2 2 0 4 1 0 75 3
9 0 2 0 0 6 1 0 2 4 88
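The table above is a confusion matrix over the 1000 test samples (rows: true digit, columns: predicted digit); its trace divided by the total reproduces the reported accuracy of 89.7%:

```python
import numpy as np

# Confusion matrix from the table above (rows: true digit, cols: predicted)
cm = np.array([
    [90,   0,  0,   0,  0,  5,  0,   0,  1,  0],
    [0,  110,  1,   0,  0,  0,  0,   0,  4,  0],
    [0,    0, 84,   1,  2,  0,  1,   1,  0,  0],
    [0,    0,  5, 108,  0,  3,  0,   0,  7,  0],
    [0,    0,  3,   0, 69,  1,  1,   0,  1, 12],
    [2,    0,  3,   2,  2, 88,  0,   0,  1,  1],
    [2,    0,  0,   0,  1,  0, 84,   0,  1,  0],
    [0,    2,  1,   0,  1,  0,  0, 101,  0,  6],
    [2,    0,  2,   2,  0,  4,  1,   0, 75,  3],
    [0,    2,  0,   0,  6,  1,  0,   2,  4, 88],
])
accuracy = np.trace(cm) / cm.sum()   # correct predictions over all samples
print(accuracy)  # 0.897 on the 1000 test samples
```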
5 Conclusions
In this paper, we performed dimensionality reduction with PCA on the grey digit
image features and explored the performance of four commonly used classification
methods, i.e., LDC, 1-NN, BPNN, and SVM, for handwritten digit recognition on the
popular MNIST database. The experimental results on the MNIST database
demonstrate that SVM achieves the best performance, with an accuracy of 89.7%
using 10-dimensional reduced features, owing to its good generalization ability. In
future work, it will be an interesting task to study the performance of dimensionality
reduction techniques more advanced than PCA on handwritten digit recognition.
References
1. Trier, O.D., Jain, A.K., Taxt, T.: Feature extraction methods for character recognition – a
survey. Pattern Recognition 29(4), 641–662 (1996)
2. Lauer, F., Suen, C.Y., Bloch, G.: A trainable feature extractor for handwritten digit
recognition. Pattern Recognition 40(6), 1816–1824 (2007)
3. Partridge, M., Calvo, R.: Fast dimensionality reduction and simple PCA. Intelligent Data
Analysis 2(3), 292–298 (1998)
4. Jain, A.K., Duin, R.P.W., Mao, J.: Statistical pattern recognition: a review. IEEE
Transactions on Pattern Analysis and Machine Intelligence 22(1), 4–37 (2000)
5. Kang, M., Palmer-Brown, D.: A modal learning adaptive function neural network applied
to handwritten digit recognition. Information Sciences 178(20), 3802–3812 (2008)
6. Vapnik, V.: The nature of statistical learning theory. Springer, New York (2000)
Research on System Stability with Extended Small
Gain Theory Based on Transfer Function
1 Introduction
Robust control has been studied by many scholars in recent years, because system
designers must consider not only stability but also robustness. Many control methods,
such as adaptive control, neural network control, and sliding mode control, are
integrated with robust control to handle system uncertainties [1-5].
Small gain theory is one of the most important theories on system robustness in the
control field. It reveals the essence of the robustness and stability of control
systems [4-7] and can guide controller design well. In particular, it has mature and
systematic conclusions for linear systems, and it can be extended to nonlinear systems
in some special situations.
In this paper, a comparison between two stable systems is studied for linear
controlled objects, and an extended small gain theorem is proposed that can be applied
to a large family of complex systems.
2 Main Conclusion
The main conclusion of this paper can be summarized as the following Theorem 1.
Theorem 1: If a controlled object described by the transfer function G(s) = \frac{B(s)}{A(s)} is
stable with a controller described by the transfer function H(s) = \frac{D(s)}{C(s)}, then the
controlled object G_1(s) = G(s)G_2(s) is stable with the controller
H_1(s) = \frac{D(s)}{k_0 C(s)}, where G_2(s) = \frac{k_0}{(k_1 s + 1)(k_2 s + 1)}.
3 Preliminary Knowledge
C(s) = \frac{G(s)}{1 + G(s)H(s)}    (1)

Assumption 1: The controlled object is stable. This means that A(s) has no unstable
roots.
Assumption 2: The controller is realizable. This means that C(s) has no unstable
roots.
C_1(s) = \frac{G_1(s)}{1 + G_1(s)H_1(s)}    (2)

By analogy, assume that the new system is unstable; then there is an unstable pole in

C_1(s) = \frac{G_1(s)}{1 + G_1(s)H_1(s)}
602 Y. Jin and Q. Ma
where

1 + G_1(s)H_1(s) = 1 + G(s)G_2(s)\frac{D(s)}{k_0 C(s)}
= 1 + \frac{G(s)D(s)}{(k_1 s + 1)(k_2 s + 1)C(s)}
= 1 + \frac{B(s)D(s)}{(k_1 s + 1)(k_2 s + 1)A(s)C(s)}
= \frac{B(s)D(s) + (k_1 s + 1)(k_2 s + 1)A(s)C(s)}{(k_1 s + 1)(k_2 s + 1)A(s)C(s)}    (3)

and define

B(s)D(s) + A(s)C(s) = F(s)    (4)
Because k < 0 and the order of the denominator polynomial is higher than that of the
numerator polynomial, we have F(\infty) > 0. Recalling that F(0) < 0, it follows that
F(s) has a positive real root; this contradicts the stability of the original system. So we
prove that the new system is stable.
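The argument above reduces closed-loop stability to the location of the roots of the characteristic polynomial B(s)D(s) + A(s)C(s); a quick numerical check can be sketched as follows (the helper name and the toy G, H are assumptions for illustration):

```python
import numpy as np

def closed_loop_stable(num_G, den_G, num_H, den_H):
    """Locate the roots of B(s)D(s) + A(s)C(s) = 0, the numerator of
    1 + G(s)H(s); coefficient lists are highest-order first."""
    char = np.polyadd(np.polymul(num_G, num_H),
                      np.polymul(den_G, den_H))
    return bool(np.all(np.roots(char).real < 0))  # all poles in left half-plane

# G(s) = 1/(s+1) with H(s) = 2: characteristic polynomial s + 3, stable
print(closed_loop_stable([1.0], [1.0, 1.0], [2.0], [1.0]))  # True
# G(s) = 1/(s-1) with H(s) = 0.5: root at s = 0.5, unstable
print(closed_loop_stable([1.0], [1.0, -1.0], [0.5], [1.0]))  # False
```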
If we consider the general situation with G_2(s) = \frac{k_0 (k_3 s + 1)}{(k_1 s + 1)(k_2 s + 1)} and the
controller H_1(s) = \frac{D(s)}{k_0 C(s)}, the denominator polynomial of the closed-loop system
can be written accordingly.
If the system is a non-minimum phase system, then the small gain theorem is needed
to handle it. We assume a transfer function defined as

G_2(s) = \frac{k_0 (k_3 s + 1)}{(k_1 s + 1)(k_2 s + 1)}    (10)

\| G_2(s) \|_\infty = \gamma_3 > k_0    (11)

\| G(s) \|_\infty = \gamma_1    (12)
\| H(s) \|_\infty = \gamma_2    (13)

\| G(s) \|_\infty \, \| H(s) \|_\infty = \gamma_1 \gamma_2 < 1    (14)
If the new system is stable, then it should satisfy the following formula:

\| G(s)G_2(s)H_1(s) \|_\infty < 1    (15)

and

\| G(s)G_2(s)H_1(s) \|_\infty \le \| G(s) \|_\infty \, \| G_2(s) \|_\infty \, \| H_1(s) \|_\infty = \gamma_3 \| G(s) \|_\infty \, \| H_1(s) \|_\infty < 1    (16)
So we can design

H_1(s) = \frac{k}{\gamma_3} H(s)    (17)

where k < 1. Then we have

H_1(s) = \frac{k}{\gamma_3} H(s)    (18)

and

\| G(s)G_2(s)H_1(s) \|_\infty \le \| G(s) \|_\infty \, \gamma_3 \, \| H_1(s) \|_\infty = \| G(s) \|_\infty \, \gamma_3 \, \frac{k}{\gamma_3} \| H(s) \|_\infty = k \, \| G(s) \|_\infty \, \| H(s) \|_\infty < 1    (19)
Obviously, the system is now stable. This means that the new system is guaranteed to
be stable as long as the gain of the original controller is scaled properly.
For some simple situations, such as G_2(s) = \frac{k_0 (k_3 s + 1)}{(k_1 s + 1)(k_2 s + 1)}, it is easy to
compute the infinity norm. For example, the infinity norm of
G_2(s) = \frac{k_0}{(k_1 s + 1)(k_2 s + 1)} is k_0. So this is the simple case of the theorem.
5 Conclusion
Because H_1(s) need not be linear for the small gain theorem, we can design a
nonlinear control law for a simple system and then multiply by the corresponding
gain, obtaining a stable control law for a complex system. Likewise, G_2(s) need not
be linear, so the theorem can be applied in some nonlinear situations.
In summary, the small gain theorem has been extended for the case where the
controlled object is a linear transfer function, and it can be applied to a large family of
complex systems via the comparison method. It has also been pointed out that the
comparison method can be applied in some nonlinear situations.
References
1. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive
synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
2. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new
hyperchaotic system with input uncertainties and unknown parameters. Commun.
Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
3. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class
of Nonlinear System with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185,
989–1002 (2007)
4. Hu, M., Xu, Z., Zhang, R.: Parameters identification and adaptive full state hybrid
projective synchronization of chaotic (hyper-chaotic) systems. Phys. Lett. A 361, 231–237
(2007)
5. Gao, T., Chen, Z., Yuan, Z.: Adaptive synchronization of a new hyperchaotic system with
uncertain parameters. Chaos Solitons Fractals 33, 922–928 (2007)
6. Elabbasy, E.M., Agiza, H.N., El-Dessoky, M.M.: Adaptive synchronization of a
hyperchaotic system with uncertain parameter. Chaos Solitons Fractals 30, 1133–1142
(2006)
7. Tang, F., Wang, L.: An adaptive active control for the modified Chua's circuit. Phys. Lett.
A 346, 342–346 (2005)
Research on the Chattering Problem with VSC of
Supersonic Missiles Based on Intelligent Materials
1 Introduction
Because the sliding mode of a VSC system is invariant, i.e., independent of external
interference and system uncertainties, the VSC method is an ideal robust control
method. Many control methods, such as adaptive control, neural network control, and
sliding mode control, are integrated with robust control to handle system
uncertainties [1-5].
With the development of computer technology and industrial automation, control
algorithms are implemented on computers, so discrete control algorithms are widely
used. But a discrete control law cannot produce an ideal sliding mode; it can only
produce a quasi-sliding mode. Because of this, the chattering and accuracy problems
are more prominent in the discrete control setting [6-9].
Many factors cause the above problems. In this paper, a simplified pitch-channel
model of a missile control system is studied based on sliding mode control with
overload and angular velocity signals, and the essential reason for the chattering
problem is investigated. The integral adaptive strategy and the soft function are also
analyzed and adopted to reduce chattering. As a side note, chattering caused by the
discrete computer algorithm or by the choice of simulation step is neglected in this
paper.
2 Model Description
Taking the pitch channel model of an anti-ship missile as an example, the
aerodynamics of the missile pitch channel system can be written as follows:

\dot{\alpha} = \omega_z - \frac{1}{mv}\big(P \sin\alpha + Y - mg\cos\theta\big)
\dot{\omega}_z = \frac{M_z}{J_z}, \quad M_z = m_z q S L
n_y = \frac{P \sin\alpha + Y}{mg}    (1)

Linearizing gives:

\dot{\alpha} = \omega_z - a_{34}\alpha - a_{35}\delta_z
\dot{\omega}_z = a_{24}\alpha + a_{22}\omega_z + a_{25}\delta_z
n_y = \frac{v}{g} a_{34}\alpha + \frac{v}{g} a_{35}\delta_z    (2)

The sliding surface is chosen as

S = c_1 e + c_2 \dot{e} + c_3 \omega_z    (4)

whose derivative can be written as

\dot{S} = l_1 \omega_z + l_2 \alpha + l_3 \delta_z + l_4 \dot{\delta}_z + l_5 \Delta_1(\alpha, t) + l_6 \Delta_2(\alpha, t) + l_7 e
608 J. Lei et al.
where

l_1 = \frac{v}{g}(c_1 a_{34} + c_3 a_{22}), \quad l_2 = \frac{g}{v a_{34}}\Big(c_3 a_{24} - c_1 \frac{v}{g} a_{34} + c_2\Big)    (5)

l_3 = \frac{v}{g}\Big(c_1 a_{34}\frac{v a_{35}}{g} + c_3 a_{25} - c_1 a_{34} a_{35} - c_3 a_{24}\frac{v a_{35}}{g a_{34}}\Big)    (6)

l_4 = c_1 \frac{v}{g} a_{35}, \quad l_5 = c_1 \dot{n}_y^d - c_2 n_y^d    (7)

The sliding dynamics can then be written in the standard form

\dot{s} = l_1 \omega_z + l_2 n_y + v + l_5    (8)
4 System Analysis
We consider using an integrator to approximate the equivalent control term so
that s \to 0. Then we define a new variable e_k = k_d - \hat{k}_d, where \hat{k}_d = \int s\,dt is used
to estimate the equivalent control term. Then we have

\dot{V} = -k_1 |s| - k_2 s^2 - \hat{k}_d s + (l_1 \omega_z + l_2 n_y + l_5)s
\le -k_1 |s| - k_2 s^2 - \hat{k}_d s + \hat{k}_d s + \mathrm{sat}_{k_{eq}}(\hat{k}_d)s    (14)

If we choose k_1 > k_{eq}, the equivalent control term can be estimated properly.
Moreover, because of the introduction of this estimate, the chattering of the system is
reduced.
Undoubtedly, chattering always exists as long as the sign function is present, so we
use a continuous function instead of the sign function.
For the standard system \dot{s} = l_1 \omega_z + l_2 n_y + v + l_5, we design the control law
v = -l_1 \omega_z - l_2 n_y - l_5 - k_1 \mathrm{sgn}(s) - k_2 s to make the system stable. Replacing the
sign function by the soft function gives

v = -l_1 \omega_z - l_2 n_y - l_5 - k_1 \frac{s}{|s| + \delta} - k_2 s    (15)
Then it holds that

s\dot{s} = s(l_1 \omega_z + l_2 n_y + v + l_5) = -k_1 \frac{s^2}{|s| + \delta} - k_2 s^2 < 0    (16)
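The soft function s/(|s| + δ) used in eq. (15) can be sketched directly; δ is an assumed small constant:

```python
import numpy as np

def soft(s, delta=0.05):
    """Soft (saturation-like) function s/(|s| + delta), used in place of
    sgn(s) to attenuate chattering: it is continuous through s = 0 and
    approaches ±1 for |s| >> delta."""
    return s / (np.abs(s) + delta)

# smooth at the origin, near ±1 away from it, unlike sgn
print(round(soft(0.0), 3), round(soft(1.0), 3))  # 0.0 0.952
```

The price for smoothness is the dead zone analyzed below: inside a band of width proportional to δ the switching term no longer dominates.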
v = -k_1 \mathrm{sgn}(s) - k_2 s - \hat{k}_d \mathrm{sgn}(s), \quad \hat{k}_d = |{-l_1 \omega_z - l_2 n_y - l_5}|    (17)

So the system is stable. Considering the soft function, design

v = -k_1 \frac{s}{|s| + \delta} - k_2 s - k_g \frac{s}{|s| + \delta}, \quad k_d = \max(|{-l_1 \omega_z - l_2 n_y - l_5}|)    (18)
Then it holds that

s\dot{s} = s\Big(l_1 \omega_z + l_2 n_y + l_5 - k_1 \frac{s}{|s| + \delta} - k_2 s - k_g \frac{s}{|s| + \delta}\Big)
< -k_1 \frac{s^2}{|s| + \delta} - k_2 s^2 + k_d |s| - k_g \frac{s^2}{|s| + \delta}    (19)

k_d |s| - k_g \frac{s^2}{|s| + \delta} = |s|\Big(k_d - \frac{k_g |s|}{|s| + \delta}\Big) = |s|\,\frac{(k_d - k_g)|s| + k_d \delta}{|s| + \delta}    (20)

Taking k_g = k_d,

k_d |s| - k_d \frac{s^2}{|s| + \delta} = |s|\,\frac{k_d \delta}{|s| + \delta} \le \frac{1}{2} k_d (\delta |s|)^{1/2}    (21)

s\dot{s} < -k_1 \frac{s^2}{|s| + \delta} - k_2 s^2 + \frac{k_d \delta |s|}{|s| + \delta} = -k_2 s^2 - \frac{|s|(k_1 |s| - k_d \delta)}{|s| + \delta}    (22)
So the stability region is |s| > k_d \delta / k_1, and the dead zone is small enough provided
\delta is small enough. Also, if we increase k_1, the dead zone is reduced.
5 Conclusions
The roles of the integral equivalent control and the soft function in reducing the
chattering of a class of simplified pitch-channel missile models have been studied,
and the relationship between the system gain and the dead zone of the soft function
has been analyzed.
References
1. Gao, W., Hung, J.C.: Variable Structure Control of Nonlinear Systems: A New Approach.
IEEE Trans. Indus. Electro. 40(1) (February 1993)
2. Polycarpou, M.M., Ioannou, P.A.: A Robust Adaptive Nonlinear Control Design.
Automatica 32(3), 423–442 (1996)
3. Kim, S.-H., Kim, Y.-S., Song, C.: A robust adaptive nonlinear control approach to missile
autopilot design. Contr. Engin. Prac. 12, 149–154 (2004)
4. Tang, F., Wang, L.: An adaptive active control for the modified Chua's circuit. Phys. Lett.
A 346, 342–346 (2005)
5. Elabbasy, E.M., Agiza, H.N., El-Dessoky, M.M.: Adaptive synchronization of a
hyperchaotic system with uncertain parameter. Chaos Solitons Fractals 30, 1133–1142
(2006)
6. Qian, X., Zhao, Y.: Flying Mechanics of Missiles. Beijing University of Engineering Press,
Beijing (2000)
7. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive
synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
8. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new
hyperchaotic system with input uncertainties and unknown parameters. Commun.
Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
9. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class
of Nonlinear System with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185,
989–1002 (2007)
Research on Backstepping Nussbaum Gain
Control of Missile Overload System
1 Introduction
Backstepping is a well-known method widely used in the design of missile control
systems [1-5]. Missile control systems described by differential equations have strong
nonlinearities and time-varying characteristics.
In this paper, the linear model of missile pitch-channel motion is studied based on
previous research work. The input coefficient is assumed to be unknown, which is
usually the case in real missile flight control, and the Nussbaum gain method is used
to solve the unknown control direction problem. An uncertain linear supersonic
missile system is controlled where only the overload and angular velocity need to be
measurable. Most importantly, no aerodynamic coefficient needs to be known, owing
to the adoption of an integral action.
2 Model Description
Without considering the first-order dynamics of the actuator, the linear model of the missile motion in the pitch channel is given by
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 612–615, 2011.
© Springer-Verlag Berlin Heidelberg 2011
$$\dot{\alpha} = \omega_z - a_{34}\alpha - a_{35}\delta_z,$$
$$\dot{\omega}_z = a_{22}\omega_z + a_{24}\alpha + a_{25}\delta_z,$$
$$n_y = \frac{v}{g}a_{34}\alpha + \frac{v}{g}a_{35}\delta_z. \qquad(1)$$
The control objective is to design a control u such that the overload of the system, n_y, can track the desired value n_y^d, where the sign of a_{25} is unknown. Differentiating the output equation and substituting \alpha = f(n_y, \delta_z) gives
$$\dot{n}_y = \frac{v}{g}a_{34}\omega_z - \frac{v}{g}a_{34}^{2}f(n_y,\delta_z) - \frac{v}{g}a_{34}a_{35}\delta_z + \frac{v}{g}a_{35}\dot{\delta}_z,$$
$$\dot{\omega}_z = a_{22}\omega_z + a_{24}f(n_y,\delta_z) + a_{25}\delta_z. \qquad(2)$$
3 Stability Analysis
Define the lumped uncertainty
$$f_1 = -\frac{v}{g}a_{34}^{2}f(n_y,\delta_z) - \frac{v}{g}a_{34}a_{35}\delta_z + \frac{v}{g}a_{35}\dot{\delta}_z. \qquad(3)$$
Assume that there exist two parameters d_{11} and d_{10} such that
$$|f_1| \le d_{11}|e_1| + d_{10}. \qquad(4)$$
Then it holds that
$$e_1\dot{e}_1 = -\frac{v}{g}a_{34}k_{11}e_1^{2} - \frac{v}{g}a_{34}k_{12}e_1\!\int e_1\,dt + f_1e_1 + \frac{v}{g}a_{34}e_1e_2, \qquad(6)$$
614 J. Shi et al.
where e_2 is defined as e_2 = \omega_z - \omega_{zd}.
Also define a new variable as
Similarly, assume that there exist two parameters d_{21} and d_{20} such that
$$|f_2| \le d_{21}|e_2| + d_{20}. \qquad(8)$$
If the sign of a_{22} were known, the virtual control for this subsystem could be designed in the same way. Since the sign of a_{22} is unknown, a Nussbaum gain strategy is adopted to solve the unknown input coefficient problem, and the virtual control can be designed as follows:
$$\delta_z = N(k)\,\bar{\delta}_{zd}, \qquad(10)$$
where the Nussbaum function N(k) and the adaptation law for its argument k are designed as
$$N(k) = k^{2}\cos(k), \qquad(12)$$
$$\dot{k} = e_2\,\bar{\delta}_{zd}. \qquad(13)$$
$$e_2\dot{e}_2 = f_2e_2 - k_{21}a_{22}e_2^{2} - k_{22}a_{22}e_2\!\int e_2\,dt + \big(1 + a_{25}N(k)\big)e_2\bar{\delta}_{zd}. \qquad(14)$$
$$V = \frac{1}{2}e_1^{2} + \frac{1}{2}e_2^{2}. \qquad(15)$$
$$\dot{V} \le \big(1 + a_{25}N(k)\big)\dot{k}. \qquad(16)$$
$$V(t) \le V(0) + \big(k(t) - k(0)\big) + a_{25}\int_{k(0)}^{k(t)} N(k)\,dk. \qquad(17)$$
By contradiction, assume that k(t) becomes unbounded in finite time, i.e., k(t) \to \infty as t \to t_n. By the characteristic properties of the Nussbaum gain function, it is easy to show that the above inequality is then contradicted. Hence k(t) is bounded on any finite time interval and the system is stable.
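The boundedness argument hinges on the two-sided oscillation of the running average of the Nussbaum function. A minimal numerical sketch (using the closed-form antiderivative of k² cos k; the sampling points are illustrative choices, not values from the paper) shows that (1/s)∫₀ˢ N(x)dx takes arbitrarily large values of both signs:

```python
import math

def avg_nussbaum(s):
    """(1/s) * integral_0^s x^2 cos(x) dx, via the closed-form antiderivative
    F(x) = x^2 sin(x) + 2x cos(x) - 2 sin(x)."""
    F = s * s * math.sin(s) + 2.0 * s * math.cos(s) - 2.0 * math.sin(s)
    return F / s

# Sample the running average near the peaks and troughs of sin(s).
peaks = [avg_nussbaum(2 * n * math.pi + math.pi / 2) for n in range(1, 20)]
troughs = [avg_nussbaum(2 * n * math.pi + 3 * math.pi / 2) for n in range(1, 20)]

print(max(peaks), min(troughs))
```

Along these sampling points the average grows roughly like +s and −s respectively, which is exactly the property that makes the integral inequality (17) contradictory if k(t) were unbounded.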
4 Conclusions
The main contribution of this paper can be summarized as follows. A Nussbaum gain strategy is adopted to solve the unknown control direction problem for the linear model of missile motion in the pitch channel. In addition, backstepping and integral action are applied to guarantee that the whole system is stable while only the overload and the angular velocity need to be measured. A limitation of this paper is that the dynamics of the control fin are not considered; they will be treated thoroughly in our future work.
References
1. Elabbasy, E.M., Agiza, H.N., El-Dessoky, M.M.: Adaptive synchronization of a hyperchaotic system with uncertain parameter. Chaos Solitons Fractals 30, 1133–1142 (2006)
2. Qian, X., Zhao, Y.: Flying Mechanics of Missiles. Beijing University of Engineering Press, Beijing (2000)
3. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
4. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new hyperchaotic system with input uncertainties and unknown parameters. Commun. Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
5. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class of Nonlinear System with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185, 989–1002 (2007)
Adaptive Control of Supersonic Missiles
with Unknown Input Coefficients
1 Introduction
Adaptive backstepping controllers have been studied in many papers [1-5]. The uncertainties in the missile pitch-plane model considered in [2] consist of uncertain parameters and unknown nonlinear functions, where the unknown functions represent the model error or the time-varying behavior of the system; however, the input coefficients are assumed to be positive. In fact, the sign of an input coefficient may be unknown, or it may change unexpectedly under some complex flight conditions.
In this paper, a Nussbaum gain strategy is proposed to control the uncertain supersonic missile described in [2]. Compared with the adaptive method above, the unknown control direction problem is solved by adopting the Nussbaum gain control technique.
2 Model Description
The nonlinear model for the missile motion in the pitch plane is adopted from Hull and Qu [2]. Considering the first-order dynamics of the actuator, the equations of motion in the pitch plane are given by
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 616–620, 2011.
© Springer-Verlag Berlin Heidelberg 2011
$$\dot{\alpha} = \frac{QS}{mV}\big[C_z(\alpha, M_m) + B_z\delta\big] + q,$$
$$\dot{q} = \frac{QSd}{I_{yy}}\big[C_m(\alpha, M_m) + B_m\delta\big], \qquad \dot{\delta} = au - a\delta, \qquad(1)$$
where \alpha, q and \delta are the angle of attack, pitch rate and control-fin deflection angle, respectively; m, V, I_{yy}, Q, S and d are the mass, velocity, pitching moment of inertia, dynamic pressure, reference area and reference length, respectively; M_m is the Mach number; and u and a are the control input and the actuator bandwidth, respectively. The aerodynamic coefficients in Eq. (1) are represented as functions of the Mach number and the angle of attack; the expressions of these functions can be found in [1].
Taking uncertainties of the aerodynamics into consideration, we can rewrite Eq. (1) as
$$\dot{x}_1 = f_1(x_1) + \Delta f_1(x_1) + x_2 + [g_1 + \Delta g_1]x_3,$$
$$\dot{x}_2 = f_2(x_1) + \Delta f_2(x_1) + [g_2 + \Delta g_2]x_3. \qquad(2)$$
The Nussbaum function N(·) used below is chosen to satisfy
$$\liminf_{s\to\infty}\frac{1}{s}\int_0^s N(x)\,dx = -\infty. \qquad(6)$$
Design an integral virtual controller for the first-order subsystem, and define the corresponding error variable and lumped uncertainty f. Assume that
$$|f| \le d_1|e_1| + d_2. \qquad(10)$$
Define
$$g = f_2(x_1) - \dot{x}_{2d}, \qquad(13)$$
and assume that
$$|g| \le d_3|e_3| + d_4. \qquad(14)$$
$$x_3 = N(\varsigma)\,\bar{x}_{3d}, \qquad(15)$$
where
$$N(\varsigma) = \varsigma^{2}\cos\!\big(\pi\varsigma^{2}/4\big), \qquad(16)$$
$$\dot{\varsigma} = e_2\,\bar{x}_{2d}. \qquad(17)$$
$$V = \frac{1}{2}e_1^{2} + \frac{1}{2}e_2^{2}. \qquad(18)$$
It is easy to prove that the derivative of V along the trajectories then satisfies
$$\dot{V} \le \big[1 + (g_2 + \Delta g_2)N(\varsigma)\big]\dot{\varsigma}. \qquad(20)$$
$$V(t) \le V(0) + \big(\varsigma(t) - \varsigma(0)\big) + (g_2 + \Delta g_2)\int_{\varsigma(0)}^{\varsigma(t)} N(\varsigma)\,d\varsigma. \qquad(21)$$
620 J. Wu et al.
By contradiction, we assume that \varsigma(t) becomes unbounded in finite time, so that \varsigma(t) \to \infty as t \to t_n. By the characteristic properties of the Nussbaum gain function, it is easy to show that the above inequality is then contradicted. Hence \varsigma(t) is bounded on any finite time interval, and it follows that the system is stable.
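The backstepping construction above, stripped of the aerodynamic terms, the actuator state and the unknown input sign, can be sketched on a two-state strict-feedback system; the gains k1 = k2 = 2, the initial condition and the Euler time step are illustrative assumptions, not values from the paper:

```python
# Backstepping on a simplified strict-feedback system
#   x1' = x2,   x2' = u
# Virtual control alpha1 stabilizes the x1-subsystem; the actual
# control u drives x2 to alpha1 and cancels the cross-coupling.
k1, k2 = 2.0, 2.0
dt, T = 0.01, 10.0
x1, x2 = 1.0, 1.0                # initial state

for _ in range(int(T / dt)):
    e1 = x1                      # tracking error (target x1 = 0)
    alpha1 = -k1 * e1            # virtual control for the x1-subsystem
    e2 = x2 - alpha1             # deviation of x2 from the virtual control
    # u cancels e1 and the derivative of alpha1 (= -k1 * x2), then damps e2,
    # giving e1' = e2 - k1*e1 and e2' = -k2*e2 - e1 in closed loop.
    u = -k2 * e2 - e1 - k1 * x2
    x1, x2 = x1 + dt * x2, x2 + dt * u

print(abs(x1), abs(x2))
```

With V = e1²/2 + e2²/2 the closed loop gives V' = −k1 e1² − k2 e2², so both states decay; the Euler simulation shows them shrinking to near zero by t = 10.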
4 Conclusions
A novel adaptive controller is designed with a Nussbaum gain strategy, and the unknown input coefficient problem of the supersonic missile is solved in this paper. In addition, the uncertainties are coped with through the adoption of integral action and the backstepping technique, so the whole system is guaranteed to be stable with the help of the Lyapunov stability theorem.
References
1. Kim, S.-H., Kim, Y.-S., Song, C.: A robust adaptive nonlinear control approach to missile autopilot design. Contr. Eng. Prac. 12, 149–154 (2004)
2. Hull, R.A., Qu, Z.: Design and evaluation of robust nonlinear missile autopilot from a performance perspective. In: Proc. of the American Control Conference, pp. 189–193 (1995)
3. Lei, J., Wang, X., Lei, Y.: How many parameters can be identified by adaptive synchronization in chaotic systems? Phys. Lett. A 373, 1249–1256 (2009)
4. Lei, J., Wang, X., Lei, Y.: A Nussbaum gain adaptive synchronization of a new hyperchaotic system with input uncertainties and unknown parameters. Commun. Nonlinear Sci. Numer. Simul. 14, 3439–3448 (2009)
5. Wang, X., Lei, J., Lei, Y.: Trigonometric RBF Neural Robust Controller Design for a Class of Nonlinear System with Linear Input Unmodeled Dynamics. Appl. Math. Comput. 185, 989–1002 (2007)
The Fault Diagnostic Model Based on MHMM-SVM
and Its Application
Abstract. A new method of incipient fault diagnosis for analog circuits based on MHMM-SVM is presented. The MHMM can deal with continuous dynamic signals and is well suited to describing samples of the same class, while the SVM is based on structural risk minimization and is adept at separating different classes. The two models complement each other, and the proposed method fuses them. First, the dimensionality of the experimental samples is reduced, and the samples are made more easily separable, using the LDA technique; second, the MHMM-SVM model is built from these samples. Finally, from experimental results compared with MHMM-based and SVM-based diagnostic methods, the conclusion can be drawn that the method has clear advantages for incipient fault diagnosis.
1 Introduction
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 621–627, 2011.
© Springer-Verlag Berlin Heidelberg 2011
622 F. Zhu et al.
have started to apply it to fault diagnosis and localization, with better results than traditional methods such as neural networks [2,3,5]. The SVM is a machine learning method that originates from statistical classification theory; it has high classification accuracy and good generalization ability, and it has already found extensive application in the fault diagnosis field [2,4]. The MHMM is well suited to describing samples of the same category, while the SVM is adept at separating different categories [1,2]. Their merits are complementary, and the method in this paper fuses them to deal with samples of incipient faults. From experimental results compared with MHMM-based and SVM-based diagnostic methods, the conclusion can be drawn that the method has clear advantages for incipient fault diagnosis.
SVM is a learning technique developed by Vapnik and his team, based on statistical learning theory. It has remarkable advantages in dealing with small-sample, nonlinear and high-dimensional problems, and it rests on the VC-dimension theory and the structural risk minimization principle of statistical learning theory. SVM provides the best compromise between model complexity (that is, the learning accuracy on the given training samples) and learning ability (that is, the ability to identify arbitrary samples faultlessly) for a limited set of sample information, in order to obtain the best generalization ability.
Structural risk minimization (SRM) denotes minimizing the sum of the empirical risk and the confidence risk. Machine learning is essentially an approximation of the real model of the problem. The empirical risk denotes the error of the classifier on the given samples, while the confidence risk depends on two factors. One is the number of samples: obviously, the larger the number of samples and the higher the accuracy of the learning outcome, the smaller the confidence risk. The other is the VC dimension of the classification function (the complexity of the hypothesis): simply put, the larger the VC dimension, the worse the generalization ability and the higher the confidence risk.
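The SRM trade-off can be made concrete with a minimal linear SVM trained by subgradient descent on the regularized hinge loss, where the ||w||² term plays the role of the confidence-risk (complexity) penalty and the hinge term is the empirical risk. The synthetic data, step size and regularization constant below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Linear SVM via subgradient descent on
#   L(w, b) = lam/2 * ||w||^2 + mean(max(0, 1 - y * (w.x + b)))
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),   # class -1 cloud
               rng.normal(2.0, 0.5, (50, 2))])   # class +1 cloud
y = np.array([-1.0] * 50 + [1.0] * 50)

w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    active = margins < 1.0                        # margin violators
    # subgradient of the averaged hinge loss plus the L2 penalty
    grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / len(X)
    grad_b = -y[active].sum() / len(X)
    w, b = w - lr * grad_w, b - lr * grad_b

acc = float(((X @ w + b) * y > 0).mean())
print(acc)
```

Shrinking `lam` trades a wider margin (lower confidence risk) for lower empirical risk, which is the SRM balance in miniature.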
[Figure: flow of the diagnosis. Training and testing samples are first classified by the MHMM over the circuit states (normal state, fault one, fault two, fault three); the two states with the most similar likelihoods are retained and the most unlike ones removed; the SVM classifier then decides between the two retained states to give the result.]
Fig. 1. Flow chart of fault diagnosis based on MHMM-SVM
The whole process contains four parts: feature extraction, diagnosis of MHMM,
diagnosis of SVM and making a judgment.
In this paper, the typical circuit of an active filter is taken as an example; the tolerance of each component is 5%:
[Figure: schematic of the active filter circuit under test.]
If the value of a component deviates from its nominal value by more than 20%, this is called an incipient soft fault. A Monte Carlo analysis is conducted on one of the states of the circuit using Multisim.
3.1 The Original Data Processing Based on Linear Discriminant Analysis (LDA)
The original samples collected from the circuit are highly redundant and complex, which would not only affect the results but also increase the training complexity if they were used to train the MHMM-SVM model directly. With the LDA technique, however, the original data can be transformed into a lower-dimensional space, which increases the processing speed while maintaining high accuracy.
LDA is a supervised dimensionality reduction method used in classification, which is widely applied in face recognition at present. It is based on the Fisher criterion, searching for the best projection vector that maps a high-dimensional sample into a low-dimensional space so as to obtain the maximum between-class scatter and the minimum within-class scatter of the projected samples. That is to say, in the reduced space, samples of the same class are gathered together while samples of different classes are separated as far as possible [6].
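In the two-class case the Fisher criterion reduces to projecting onto w = Sw⁻¹(m₁ − m₀). A small numpy sketch on synthetic two-dimensional data (the two clusters stand in for two circuit states; all numbers are illustrative) shows the projected classes separating:

```python
import numpy as np

# Two-class Fisher discriminant: w = Sw^{-1} (m1 - m0) maximizes the
# between-class scatter relative to the within-class scatter.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 0.7, (200, 2))   # samples of "state A"
X1 = rng.normal(3.0, 0.7, (200, 2))   # samples of "state B"

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# within-class scatter matrix
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
w = np.linalg.solve(Sw, m1 - m0)
w /= np.linalg.norm(w)

gap = (m1 - m0) @ w                                   # projected mean separation
proj = np.concatenate([X0 @ w - m0 @ w, X1 @ w - m1 @ w])
spread = np.sqrt((proj ** 2).mean())                  # pooled projected spread
print(gap, spread)
```

After projection the class-mean gap is several times the within-class spread, which is what makes the reduced samples easier for the downstream MHMM-SVM stage to separate.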
Through dimensionality reduction by LDA, the two columns of data with the largest eigenvalues are selected to plot the figure. [Figure: scatter plot of the first two LDA components for the circuit states.] As the figure clearly shows, not only is dimensionality reduction realized, but the data also obtain a preliminary classification
to some extent. When R2 deviates beyond the normal range by 20%, the outputs are exactly the same as the normal ones, so this case does not affect the results. Therefore, there are eleven states, as follows: R1 rises by 20%, R1 drops by 20%, R3 rises by 20%, R3 drops by 20%, C1 rises by 20%, C1 drops by 20%, C2 rises by 20%, C2 drops by 20%, R4 rises by 20%, R4 drops by 20%, and the normal state. The original feature vectors are reduced to five-dimensional ones, which are used as data samples.
Fifty of the one hundred feature vectors serve as training samples. Five of them are chosen at random to compose an observation sequence, so fifty observation sequences are obtained. In the same way, the other fifty feature vectors are used as testing samples. Eleven MHMMs have to be trained, one for each of the eleven states of the circuit.
Each feature vector is modeled by a mixture of three Gaussians. The number of MHMM states is set to five and the structure is left-to-right. The initial state is normal, so the initial distribution is [1,0,0,0,0]. The state transition matrix is [0.5,0.5,0,0,0; 0,0.5,0.5,0,0; 0,0,0.5,0.5,0; 0,0,0,0.5,0.5; 0,0,0,0,1]. The K-means algorithm is applied to the observation sequences to initialize the Gaussian parameters of the mixture HMM. The model is trained by the E-M algorithm with the number of iterations set to 50 and a convergence tolerance of 1e-4. Figure 5 shows the iterations for the state in which R1 rises by 20%.
[Figure 5: E-M training curve for the state of R1 rising by 20%.]
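The likelihood evaluation that each trained MHMM performs can be sketched with the forward algorithm, using the initial distribution and left-to-right transition matrix stated above; for brevity the Gaussian-mixture emissions are replaced by a toy discrete emission table, which is an illustrative assumption:

```python
import itertools
import numpy as np

# Left-to-right HMM from the text: initial distribution [1,0,0,0,0]
# and the banded 0.5/0.5 transition matrix.
pi = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
A = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
B = np.array([[0.9, 0.1],   # toy P(symbol | state), 2 symbols, 5 states
              [0.7, 0.3],
              [0.5, 0.5],
              [0.3, 0.7],
              [0.1, 0.9]])
obs = [0, 1, 1]

def forward_likelihood(obs):
    """P(obs | model) via the forward recursion alpha_{t+1} = (alpha_t A) * B[:, o]."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Cross-check against brute-force summation over all 5^T state paths.
brute = sum(
    pi[p[0]] * B[p[0], obs[0]]
    * np.prod([A[p[t - 1], p[t]] * B[p[t], obs[t]] for t in range(1, len(obs))])
    for p in itertools.product(range(5), repeat=len(obs))
)
print(forward_likelihood(obs), brute)
```

In the paper's setting this (log-)likelihood, computed once per trained state model, is what ranks the eleven candidate circuit states.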
The SVM model is mainly used to classify the two categories whose log-likelihood values are the most similar in the MHMM judgment. Therefore, fifty-five binary classifiers need to be trained; examples are shown in Fig. 6 and Fig. 7.
When a testing sequence has been judged by the MHMM, the two states whose log-likelihood values are the most similar are obtained. Subsequently, the testing sequence is passed to the SVM classifier trained for those two states for the final discrimination. Finally, the fault diagnosis result is obtained, as Table 1 shows:
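The decision fusion just described (MHMM scoring, retention of the two most similar states, then a pairwise SVM verdict) can be sketched as follows; the state names, log-likelihood values and the stub classifier are illustrative stand-ins, not the paper's trained models:

```python
def diagnose(loglik_by_state, pairwise_classifier):
    """Keep the two states with the highest MHMM log-likelihood, then let
    the binary classifier trained for that pair make the final call."""
    best_two = sorted(loglik_by_state, key=loglik_by_state.get, reverse=True)[:2]
    pair = tuple(sorted(best_two))
    return pairwise_classifier[pair]()

scores = {"normal": -650.0, "R1 up": -612.02, "C2 up": -630.66}
classifiers = {("C2 up", "R1 up"): lambda: "R1 up"}  # stub for a trained binary SVM
result = diagnose(scores, classifiers)
print(result)
```

In the real system the stub would be replaced by the SVM trained on samples of exactly those two states, so each binary classifier only ever sees the hard, nearly tied cases.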
It can be seen from Table 1 that the fault detection rate of MHMM-SVM is higher than that of the single MHMM or SVM model, and MHMM is somewhat better than SVM. For well-separated samples, such as the normal state, all three models achieve a good fault detection rate. But for heavily overlapping samples, such as the changes of R1 and C2, the results differ widely. The sample separation between the two R1 states is large in Fig. 4, which favors the SVM classification, but the corresponding MHMM log-likelihood values are -612.02 and -612.86, which are almost the same; on the contrary, the sample separation between R1 and C2 is low, which hurts the SVM classification, but the MHMM log-likelihood values are -612.02 and -630.66, so the classification can be carried out easily. Therefore, the combination of the two models gives better classification results for heavily overlapping samples.
4 Conclusion
The MHMM-SVM method for incipient fault diagnosis of electronic devices has clear advantages over methods based on the single MHMM or SVM model. Based on a comprehensive analysis of the advantages of both models, that is, the ability of the MHMM to describe correlated sequences and the ability of the SVM to maximize the between-class distance, this paper bears out the practicability and feasibility of combining the two models.
References
1. Shi, D.C., Han, L.Y., Yu, M.H.: Automatic audio stream classification based on hidden Markov model and support vector machine. Journal of Changchun University of Technology (Natural Science Edition) 29(2), 178–182 (2008)
2. Liu, X.M., Qiu, J., Liu, G.J.: HMM-SVM Based Mixed Diagnostic Model and Its Application. Acta Aeronautica et Astronautica Sinica 26(4), 497–500 (2005)
3. Xu, L., Wang, H.J.: Study on Fault Prognostic and Health Management for Electronic System. University of Electronic Science and Technology of China, Chengdu (2009)
4. Chen, S.J., Lian, K., Wang, H.J.: Method for Analog Circuit Fault Diagnosis Based on GA Optimized SVM. Journal of University of Electronic Science and Technology of China 38(4), 554–558 (2009)
5. Alpaydin, E.: Introduction to Machine Learning. China Machine Press, Beijing (2009)
6. Zhang, A.N., Zhuang, Z.M.: A Study on the Method of Face Recognition Based on Optimized LDA and RBF Neural Network. Shantou University (2007)
7. Yang, S., Hu, M., Wang, H.: Study on Soft Fault Diagnosis of Analog Circuits. The Electronics and Computer 25(1), 1–8 (2008)
Analysis of a Novel Electromagnetic Bandgap Structure
for Simultaneous Switching Noise Suppression
1 Introduction
Today, the reliability of digital circuit systems faces more and more challenges from the ceaseless increase of system clock frequencies, the enlarged size of digital ICs, and the sharp increase of component density on the printed circuit board (PCB). The power layer and ground layer in a multilayer PCB actually form a parallel-plate resonator at high frequencies. Simultaneous switching noise (SSN, or delta-I noise), which can lead to significant signal integrity (SI) problems and electromagnetic interference (EMI) issues [1-3], comes into being through the high impedance at resonance. Good signal integrity and electromagnetic compatibility (EMC), however, are essential in the reliable design of high-speed digital circuit systems. In order to improve the reliability of a digital circuit system, the SSN in the power/ground planes must be suppressed effectively.
A typical method for SSN suppression is to use decoupling capacitors between the power plane and the ground plane [4]. However, because of its own high-frequency parasitic parameters, a decoupling capacitor loses its ability to suppress SSN at frequencies above 600 MHz [5]. In order to restrain the SSN between the power plane and the ground plane in the high-frequency region, a novel concept of mitigating SSN using
* Corresponding author.
S. Lin and X. Huang (Eds.): CSEE 2011, Part I, CCIS 214, pp. 628–634, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Fig. 1. Conventional UC-EBG structure. (a) unit cell (b) the equivalent LC circuit in adjacent unit
cells
In UC-EBG structures, the resonance of the periodic unit cells plays an important role in forming the stop-band. The unit structure can be analyzed by an equivalent LC circuit, and its electromagnetic characteristics can also be described by an equivalent inductance and an equivalent capacitance.
630 H. Yang et al.
Fig. 1(b) shows the equivalent LC circuit between adjacent unit cells of the UC-EBG structure. The equivalent capacitance, represented by the parameter C, comes from the gaps between adjacent unit cells, and the equivalent inductance, represented by the parameter L, comes from the current on the narrow metal strips between adjacent unit cells. The center frequency of the stop-band and the 3 dB bandwidth can be approximately expressed by the following formulas [11]:
$$\omega_0 = \frac{1}{\sqrt{LC}}, \qquad(1)$$
$$BW = \frac{\Delta\omega}{\omega_0} = \frac{1}{\eta}\sqrt{\frac{L}{C}}. \qquad(2)$$
The parameter \omega_0 represents the center frequency, \Delta\omega represents the width of the stop-band, and \eta = 120\pi is the wave impedance of free space.
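Formulas (1) and (2) can be evaluated directly; the unit-cell inductance and capacitance values below are illustrative assumptions, not extracted from the paper's geometry:

```python
import math

# Evaluating Eqs. (1)-(2) for an assumed equivalent L and C of one unit cell.
L = 2e-9              # equivalent inductance, henry (illustrative)
C = 1e-12             # equivalent capacitance, farad (illustrative)
eta = 120 * math.pi   # wave impedance of free space, ohm

omega0 = 1.0 / math.sqrt(L * C)      # Eq. (1): center frequency, rad/s
f0 = omega0 / (2 * math.pi)          # center frequency in Hz
bw_rel = math.sqrt(L / C) / eta      # Eq. (2): relative stop-band width

print(f0 / 1e9, bw_rel)
```

This makes the design lever visible: lengthening the connecting strips raises L, which lowers the center frequency (Eq. 1) and widens the relative bandwidth (Eq. 2), exactly the effect the spiral strips exploit in Section 3.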
Actually, the UC-EBG structure is equivalent to a series of parallel LC pairs; it shows a high-impedance character around the resonance frequency of the power/ground planes. The UC-EBG structure provides a high-impedance barrier, equivalent to a band-stop filter, for the SSN as it propagates along the power/ground planes. In this way, the SSN is restricted to the local cell and cannot propagate and be excited along the power/ground pair. As a result, the resonance between the power/ground planes is diminished, and the impedance around the resonance frequency is also reduced. This is the mechanism of SSN suppression by the UC-EBG structure.
The conventional UC-EBG structure has an obvious stop-band ranging from 0.83 GHz to 3.78 GHz, a bandwidth of 2.95 GHz. Compared with a continuous plane, the conventional UC-EBG structure can suppress the SSN to a depth of -40 dB within its stop-band; thus, the UC-EBG structure can effectively suppress the SSN between the power and ground planes. However, the SSN at frequencies below 0.83 GHz is still not effectively suppressed, and the relative width of the stop-band is still narrow. Therefore, a new EBG structure is proposed to resolve these problems in Section 3.
Fig. 4. Improved UC-EBG unit cells. (a) new unit cell (b) unit cell in literature[8]
Fig. 4(a) shows the new EBG structure proposed in this paper. This UC-EBG structure consists of metal patches and spiral-shaped metal strips, with the spiral strips on the outside of the patches. The metal strips increase the connection length between adjacent unit cells; as a result, the equivalent inductance of the EBG unit cells is enlarged. This measure improves the characteristics of the stop-band and enlarges the bandwidth. In order to show the excellent SSN-suppression performance of the new UC-EBG structure, we take the improved UC-EBG structure proposed in [8] for comparison. The reference structure is shown in Fig. 4(b).
4 Simulation Results
To achieve good SSN suppression, we propose a new UC-EBG structure by adding spiral-shaped inductance. We use the commercially available software Ansoft SIwave 3.0 to verify the performance of the new UC-EBG structure. The test PCB also has four metal layers with a size of 90×90 mm, and 3×3 unit cells are etched in the power plane. The parameters for simulation are set as follows: a = 30 mm,
Fig. 7 shows the simulated S12 parameters of the new EBG structure and of the reference EBG structure of the same size. Based on the -40 dB suppression level, the reference EBG structure has a stop-band ranging from 0.42 GHz to 4.12 GHz, a bandwidth of 3.70 GHz. Through the simulations, we find that the new UC-EBG structure obtained by adding spiral-shaped metal strips has a bandwidth 17.5% wider than the reference, and the lower corner frequency of its stop-band is 240 MHz below that of the reference structure. Therefore, compared with the reference EBG structure in [8], the new UC-EBG structure provides not only lower-frequency SSN suppression but also a wider stop-band.
5 Conclusion
In this paper, based on the conventional UC-EBG structure, we proposed a new UC-EBG structure formed by adding spiral-shaped metal strips to the metal patches to effectively suppress the SSN in power/ground planes. The -40 dB stop-band of the proposed EBG structure ranges from 0.18 GHz to 4.53 GHz; this stop-band is 47.5% wider than that of the conventional UC-EBG structure and 17.5% wider than that of the reference EBG structure in [8]. The SSN in the low-frequency region is also effectively suppressed by the new EBG structure. The excellent SSN-suppression performance is verified with commercially available simulation software, and good performance is achieved. The proposed EBG structure outperforms the reference structure in the literature for SSN suppression.
References
1. Abhari, R., Eleftheriades, G.V.: Metallo-dielectric electromagnetic band-gap structures for suppression and isolation of the parallel-plate noise in high-speed circuits. IEEE Trans. Microw. Theory Tech. 51(6), 1629–1639 (2003)
2. Kamgaing, T., Ramahi, O.M.: Design and modeling of high-impedance electromagnetic surfaces for switching noise suppression in power planes. IEEE Trans. Electromagnetic Compatibility 47(3), 479–489 (2005)
3. Shahparnia, S., Ramahi, O.M.: Electromagnetic interference (EMI) reduction from printed circuit boards (PCB) using electromagnetic band-gap structures. IEEE Trans. Electromagnetic Compatibility 46(4), 580–586 (2006)
4. Power Integrity and Ground Bounce Simulation of High Speed PCBs. Empowering Profitability Worldwide Workshop. Ansoft (2002)
5. Ricchiuti, V.: Power-Supply Decoupling on Fully Populated High-Speed Digital PCBs. IEEE Trans. Electromagnetic Compatibility 43, 671–676 (2001)
6. Chen, G., Melde, K., Prince, J.: The applications of EBG structures in power/ground plane pair SSN suppression. In: IEEE 13th Topical Meeting on Electrical Performance of Electronic Packaging and Systems, pp. 207–210. IEEE Press, Portland (2004)
7. Zhang, M.-S., Li, Y.-S., Jia, C., et al.: A Power Plane With Wideband SSN Suppression Using a Multi-Via Electromagnetic Bandgap Structure. IEEE Microwave and Wireless Components Letters 17(4), 307–309 (2007)
8. Du, J.-Y., Kim, J.-M., et al.: Analysis of Separately Arranged Patterns for Suppression of Simultaneous Switching Noise. In: Proceedings of the Asia-Pacific Microwave Conference, pp. 55–58. IEEE Press, Yokohama (2006)
9. Kim, B., Kim, D.-W.: Improvement of Simultaneous Switching Noise Suppression of Power Plane using Localized Spiral-Shaped EBG Structure and λ/4 Open Stubs. In: Proceedings of the Asia-Pacific Microwave Conference, pp. 1–4. IEEE Press, Bangkok (2007)
10. Yang, F., Ma, K., Qian, Y., et al.: A uniplanar compact photonic-bandgap (UC-PBG) structure and its applications for microwave circuits. IEEE Trans. Microwave Theory and Technol. 47, 1509–1514 (1999)
11. Sievenpiper, D.: High-impedance electromagnetic surfaces. Ph.D. dissertation, Dept. of Electrical Engineering, Univ. of California, Los Angeles, CA (1999)
12. Li, Y., Fan, M.Y., Feng, Z.H.: A spiral electromagnetic bandgap (EBG) structure and its application in microstrip antenna arrays. In: Proceedings of the Asia-Pacific Microwave Conference, p. 4. IEEE Press, Suzhou (2005)
Author Index
Shi, Jianhong I-606, I-612, I-616, II-7 Sun, Yuqiang I-239, I-251, III-165
Shi, Ji cui I-201 Sun, Yuzhou III-36
Shi, Jinliang V-376 Sun, Zebin I-534
Shi, Ming-wang III-73 Sun, Zhongmin III-160
Shi, Penghui II-89 Suo, Zhilin V-442
Shi, Run-hua I-489
Shi, ShengBo IV-536 Tai, David Wen-Shung V-309
Shi, Yaqing IV-303 Tai, David W.S. IV-376, V-362
Shu, Yang IV-602, V-402 Takatera, Masayuki II-28
Song, Ani V-81 Tan, Gangping II-458
Song, HaiYu IV-388 Tan, Gongquan I-309
Song, Huazhu V-477 Tan, Ran III-378
Song, Jianhui I-396, I-402 Tan, Wenan IV-243
Song, Jianwei V-100 Tan, Wentao V-213
Song, Lixia V-402 Tan, Wenxue IV-215
Song, XiaoWei I-187 Tan, Xianghua IV-514
Song, Xi-jia I-457 Tan, Xilong V-213
Song, Yunxia IV-11, IV-313, IV-418, Tan, YaKun IV-233
IV-433 Tan, Zhen-hua V-221, V-233
Soomro, Safeeullah I-161, I-290 Tan, ZhuWen V-43, V-48
Soomro, Sajjad Ahmed I-161 Tang, Xian III-288, IV-293
Su, Ruijing I-501, II-89 Tang, Zhihao I-22
Su, Te-Jen V-386 Tao, Yi II-545
Su, YangNa III-68 Tao, Zhiyong I-341
Su, Yen-Ju V-362 Teng, Yusi IV-88
Sui, Li-ping V-467 Tian, DaLun I-123
Suk, Yong Ho II-34 Tian, Dan III-283
Sun, Chunling V-576 Tian, Daqing IV-474
Sun, Dong III-156 Tian, Hongjuan II-367
Sun, Hongqi IV-439 Tian, HongXiang I-574
Sun, Jianhong I-525 Tian, Hua V-489
Sun, Jie III-516 Tian, Jian V-53
Sun, Jinguang V-448 Tian, JianGuo IV-393, V-559
Sun, Junding I-359 Tian, Qiming V-303
Sun, Li IV-470 Tian, Xianzhi III-26
Sun, Peixin I-347 Tian, Yinlei IV-52, IV-59
Sun, Qiudong III-485 Tian, Yu II-176, II-380
Sun, Qiuye IV-577 Tian, Yuan V-43, V-48
Sun, Shusen V-185 Tien, Li-Chu V-320
Sun, Shuying V-418 Tong, XiaoJun V-88
Sun, Wei III-160 Tsai, Chang-Shu III-333, IV-367
Sun, Weiming II-243, II-248 Tsai, Chung-Hung III-333
Sun, Xiao IV-274 Tsai, Hsing-Fen IV-371
Sun, XiaoLan I-155 Tsai, Tzu-Chieh III-557
Sun, XiuYing III-592 Tu, Fei IV-399
Sun, Yakun I-118
Sun, Yansong I-149 Wan, Benting IV-583
Sun, Ying II-513 Wan, Guo-feng III-322
Sun, Yu II-192 Wan, Lei I-276
Sun, Yuei-Jyun V-386 Wang, An I-263