
INTERNATIONAL JOURNAL of COMPUTERS

Editor-in-Chief: Prof. Metin Demiralp, Istanbul Technical University, Turkey

ISSN: 1998-4308

Editorial Board:
Prof. Wasfy B. Mikhael, University of Central Florida, USA
Prof. Irwin W. Sandberg, The University of Texas at Austin, USA
Prof. Lotfi A. Zadeh, University of California, Berkeley, USA
Prof. Angel Kuri-Morales, Instituto Tecnologico Autonomo de Mexico, Mexico
Prof. Colin Fyfe, University of the West of Scotland, UK
Prof. Remi Leandre, Universite de Bourgogne, France
Prof. Yan Wu, Georgia Southern University, USA
Prof. Brian J. McCartin, Kettering University, USA
Prof. Jiancheng Guan, Fudan University, China
Prof. Melvin A. Breuer, University of Southern California, Los Angeles, CA, USA
Prof. Maria I. Garcia-Planas, Universitat Politecnica de Catalunya, Spain
Prof. M. Nasseh Tabrizi, East Carolina University, USA
Prof. Irma Becerra-Fernandez, Florida International University, USA

Year 2011 (previous volumes: 2007, 2008, 2009, 2010)

All papers of the journal were peer reviewed by two independent reviewers. Acceptance was granted when both reviewers' recommendations were positive.

Journal's Policy: Authors can send their papers by email to NAUN Journals regardless of whether they have attended a NAUN conference or not. The NAUN Journals are open access journals: the full PDF files of the papers are permanently open to everybody, without any restrictions, and neither authors nor readers pay any kind of registration fee, publication fee or "donation". The Editors-in-Chief, assisted by the members of the Editorial Boards, are the absolute decision makers for the acceptance or rejection of papers; the final decision is made jointly by the guest editors and the Editors-in-Chief on the basis of the peer-review reports. Submitted papers must not be under consideration by any other journal or publication.

Topics: Computer Languages and Programming * Distributed and Parallel Processing * Distributed Systems * E-commerce and E-governance * Event Driven Programming * Expert Systems * High Performance Computing * Human Computer Interaction * Information Retrieval * Information Systems * Knowledge Data Engineering * Mobile Computing * Multimedia Applications * Natural Language Processing * Parallel and Distributed Computing * Pattern Recognition * Performance Evaluation * Programming Languages * Grid Computing * Reconfigurable Computing Systems * Security & Cryptography * Software Engineering & CASE * Education * Technology Management * Theoretical Computer Science * Ubiquitous Computing * Wireless Sensor Networks * Sensors * Computer Architecture & VLSI * Computer Architecture and Embedded Systems * Computer Games * Computer Graphics & Virtual Reality * Signal and Image Processing * Databases * Digital Libraries * Digital Systems and Logic Design * Software Engineering * Compilers and Interpreters * Computer Animation

Paper Title, Authors, Abstract (Issue 1, Volume 5, 2011)

Information Retrieval and Information Extraction in Web 2.0 Environment
Nikola Vlahovic
Pages 1-9
Abstract: With the rise of the Web 2.0 paradigm, new trends in information retrieval (IR) and information extraction (IE) can be observed. The significance of IR and IE as fundamental methods of acquiring new and up-to-date information is crucial for efficient decision making. Social aspects of modern information retrieval are gaining in importance over technical aspects. The main reason for this trend is that IR and IE services are becoming more and more widely available to end users who are not information professionals but regular users. New methods that rely primarily on user interaction and communication also show similar success in IR and IE tasks. Web 2.0 has an overall positive impact on IR and IE, as it is based on a more structured data platform than the earlier web. Moreover, new tools are being developed for online IE services that make IE more accessible even to users without technical knowledge and background. The goal of this paper is to review these trends and put them into the context of the improvements and potential that IR and IE have to offer to knowledge engineers, information workers, and also typical Internet users.

Tiny Programming Language to Improve Assembly Generation for Automation Equipments
Jose Metrolho, Monica Costa, Fernando Reinaldo Ribeiro
Pages 10-17
Abstract: The development time in industrial informatics systems, within industry environments, is a very important issue for competitiveness. The usage of adequate target-specific programming languages is very important because they can facilitate and improve developers' productivity, allowing solutions to be expressed in the idiom and at the level of abstraction of the problem domain. In this paper we present a target-specific programming language, which was designed to improve the design cycle of code generation for an industrial embedded system. The native assembly code, the new language structure and its constructs are presented in the paper. The proposed target-specific language is expressed using words and terms that are related to the target's domain, and consequently it is now easier to program, understand and validate the desired code. The efficiency of the language is also demonstrated by comparing code described using the new language against the previously used code. The design cycle is improved with the usage of the target-specific language because both description and debug time are significantly reduced with this new software tool. This is also a case of university-industry partnership.
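
The tiny-language entry above describes the benefit of a target-specific language translated into native assembly but, being an abstract, gives no code. The sketch below is only an illustration of that idea in Python: the statement forms (SET, WAIT, PULSE) and the output mnemonics (MOV, DLY, OUT) are invented here and are not taken from the paper.

# Hypothetical mini-translator: made-up domain-level statements are mapped
# to made-up assembly-like mnemonics, purely to illustrate the idea.
def translate(source: str) -> list[str]:
    """Translate domain-level statements into assembly-like lines."""
    asm = []
    for lineno, raw in enumerate(source.splitlines(), start=1):
        line = raw.split("#")[0].strip()          # drop comments and blank lines
        if not line:
            continue
        op, *args = line.split()
        if op == "SET" and len(args) == 2:        # SET <port> <value>
            asm.append(f"MOV  {args[0]}, {args[1]}")
        elif op == "WAIT" and len(args) == 1:     # WAIT <milliseconds>
            asm.append(f"DLY  {args[0]}")
        elif op == "PULSE" and len(args) == 1:    # PULSE <port>
            asm.append(f"OUT  {args[0]}, 1")
            asm.append(f"OUT  {args[0]}, 0")
        else:
            raise SyntaxError(f"line {lineno}: unknown statement {line!r}")
    return asm

if __name__ == "__main__":
    program = """
    SET  CONVEYOR 1   # start the conveyor
    WAIT 500          # let it settle
    PULSE EJECTOR     # fire the ejector once
    """
    print("\n".join(translate(program)))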

Solving Multiobjective Optimization under Bounds by Genetic Algorithms
Anon Sukstrienwong
Pages 18-25
Abstract: For complex engineering optimization problems, several parameters are required to be controlled within a specific interval in which something can operate or act efficiently. Most researchers reduce the objective vector to a single objective and are interested in the set known as the Pareto optimal solutions. This paper, however, is concerned with the application of genetic algorithms to solve multi-objective problems in which some objectives are requested to be balanced within their objective bounds. The proposed approach, called genetic algorithms for objective boundary (the GAsOB scheme), searches for possible solutions to such multi-objective problems. An elitist technique is employed to enhance the efficiency of the algorithm. The experimental results are compared with the results derived by a linear search technique and by traditional genetic algorithms over the search space. The results show that the GAsOB scheme generates solutions efficiently with customization of the number of eras and the immigration rate.

Image Authentication and Recovery Using BCH Error-Correcting Codes
Jose Antonio Mendoza Noriega, Brian M. Kurkoski, Mariko Nakano Miyatake, Hector Perez Meana
Pages 26-33
Abstract: In this paper an image authentication and recovery algorithm is proposed in which the modified areas of an image are detected and, in addition, an approximation of the original image, called a digest image Cdig, is recovered. Two different watermarks are used. One semi-fragile watermark w1 is used for the authentication phase. The second watermark wdig is obtained by compressing the digest image Cdig using an arithmetic code; then redundancy is added by applying a BCH error-correcting code (ECC). Finally both watermarks are embedded in the integer wavelet transform (IWT) domain. The proposed scheme is evaluated from different points of view: watermark imperceptibility, payload, detection of the tampered area and robustness against some non-intentional attacks. Experimental results show the system detects accurately where the image has been modified, and it is able to resist large modifications; for example, the system can tolerate modifications close to 10% of the total pixels of the watermarked image and recover 100% of the digest image. The watermarked image and recovered digest image have good quality, with average PSNR of 39.88 dB and 28.63 dB, respectively, using ECC rate 0.34. The proposed system is also robust to noise insertion. It is able to tolerate close to 5% errors produced by salt-and-pepper noise insertion, while recovering 100% of the digest image.

A Face Recognition Algorithm using Eigenphases and Histogram Equalization
Kelsey Ramirez-Gutierrez, Daniel Cruz-Perez, Jesus Olivares-Mercado, Mariko Nakano-Miyatake, Hector Perez-Meana
Pages 34-41
Abstract: This paper proposes a face recognition algorithm based on histogram equalization methods. These methods allow standardizing the face illumination, reducing in this way the variations for further feature extraction; the features are extracted using the image phase spectrum of the histogram-equalized image together with principal components analysis. The proposed scheme allows a reduction of the amount of data without much information loss. Evaluation results show that the proposed feature extraction scheme, when used together with a support vector machine (SVM), provides a recognition rate higher than 97% and a verification error lower than 0.003%.
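
The face recognition entry above chains histogram equalization, the phase spectrum of the equalized image, and principal components analysis before an SVM. Below is a minimal numpy sketch of that chain, under the assumption of 8-bit greyscale crops and a plain SVD-based PCA; the authors' actual implementation details are not given in the abstract.

import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Spread the grey-level histogram of an 8-bit image over [0, 255]."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())   # normalized CDF
    return cdf[img].astype(np.uint8)                            # map pixels via CDF

def phase_features(img: np.ndarray) -> np.ndarray:
    """Use the phase of the 2D Fourier spectrum as a feature vector."""
    return np.angle(np.fft.fft2(img.astype(float))).ravel()

def pca_project(features: np.ndarray, n_components: int) -> np.ndarray:
    """Project feature vectors (one per row) onto their first principal axes."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

if __name__ == "__main__":
    # A toy 'gallery' of random 32x32 crops stands in for real face images.
    rng = np.random.default_rng(0)
    faces = rng.integers(0, 256, size=(10, 32, 32), dtype=np.uint8)
    feats = np.vstack([phase_features(histogram_equalize(f)) for f in faces])
    reduced = pca_project(feats, n_components=5)
    print(reduced.shape)   # (10, 5) -> reduced inputs for an SVM classifier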

The Chinese as Second Language Multidimensional Computerized Adaptive Testing System Construction
Hsuan-Po Wang, Bor-Chen Kuo, Rih-Chang Chao, Ya-Hsun Tsai
Pages 42-49
Abstract: With the rising demand for Chinese as a Second Language (CSL) learning, the Chinese Proficiency Test has become more and more popular recently. There are several major proficiency tests with paper-and-pencil (P&P) formats for Chinese learners, including Taiwan's Test of Proficiency-Huayu (TOP-Huayu), the mainland's Hanyu Shuiping Kaoshi (HSK), and America's Scholastic Assessment Test (SAT). In this study, the Common European Framework of Reference (CEFR) is applied and the CSL Proficiency Index is used as a guideline to develop a multidimensional computerized adaptive testing (MCAT) system for enhancing the CSL proficiency test. This research collected empirical data via a computer-based test (CBT), followed by developing and conducting a simulation study on an MCAT system. The proposed system provides a framework that uses item response theory (IRT) as the ability scoring method and applies it to the MCAT process. In addition, this research also evaluates the effectiveness of the MCAT system. There were 658 empirical records collected from Grace Christian College in the Philippines in September 2009. The results indicate that the recommended CSL MCAT system should apply MAP as the ability estimation method. The interface of the MCAT system is also presented in this research.

A Non-Secure Information Systems and the Isolation Solution
Tai-Hoon Kim
Pages 50-57
Abstract: In this paper, we define Intrusion Confinement through isolation to address this security issue and its importance, and finally present an isolation protocol. Security has emerged as the biggest threat to information systems. System protection mechanisms such as access controls can be fooled by authorized but malicious users, masqueraders, and trespassers. As a result, serious damage can be caused either because many intrusions are never detected or because the average detection latency is too long.

Path-Bounded Finite Automata on Four-Dimensional Input Tapes
Yasuo Uchida, Takao Ito, Makoto Sakamoto, Ryoju Katamune, Kazuyuki Uchida, Hiroshi Furutani, Michio Kono, Satoshi Ikeda, Tsunehiro Yoshinaga
Pages 58-65
Abstract: M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing, and investigated their pattern recognition abilities in 1967. Since then, many researchers in this field have been investigating many properties of automata on two- or three-dimensional tapes. The question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from the theoretical and practical standpoints. Recently, due to the advances in many application areas such as computer animation, motion image processing, virtual reality systems, and so forth, it has become increasingly apparent that the study of four-dimensional pattern processing is of crucial importance. Thus, the study of four-dimensional automata, i.e., automata with the time axis, as a computational model of four-dimensional pattern processing has also been meaningful. On the other hand, the comparative study of the computational powers of deterministic and nondeterministic computations is one of the central tasks of complexity theory. This paper investigates the computational power of nondeterministic computing devices with restricted nondeterminism. There are only a few results measuring the computational power of restricted nondeterminism. In general, there are three possibilities for measuring the amount of nondeterminism in a computation. In this paper, we consider the possibility of counting the number of different nondeterministic computation paths on any input. In particular, we deal with seven-way four-dimensional finite automata with multiple input heads operating on four-dimensional input tapes.
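
The path-bounded automata entry above measures nondeterminism by counting the number of different computation paths on an input. That notion can be illustrated on an ordinary one-dimensional NFA; the seven-way four-dimensional machines of the paper are far more general, and the toy automaton below is invented purely to show what is being counted.

# Toy illustration of path-bounded nondeterminism: count how many distinct
# computation paths a small NFA has on a given input. A k-path-bounded
# automaton never has more than k such paths on any input.
NFA = {
    # (state, symbol) -> set of possible next states
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
    ("q2", "b"): {"q2"},
}
START, ACCEPT = "q0", {"q2"}

def count_paths(word: str, state: str = START) -> tuple[int, int]:
    """Return (#computation paths, #accepting paths) from `state` on `word`."""
    if not word:
        return 1, int(state in ACCEPT)
    successors = NFA.get((state, word[0]), set())
    if not successors:
        return 1, 0                      # a stuck branch ends one rejecting path
    total = accepting = 0
    for nxt in successors:
        t, a = count_paths(word[1:], nxt)
        total += t
        accepting += a
    return total, accepting

if __name__ == "__main__":
    for w in ["abb", "aabb", "ba"]:
        print(w, count_paths(w))         # e.g. "abb" -> (2, 1): two paths, one accepting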

A Relationship between Marker and Inkdot for Four-Dimensional Automata
Yasuo Uchida, Takao Ito, Makoto Sakamoto, Ryoju Katamune, Kazuyuki Uchida, Hiroshi Furutani, Michio Kono, Satoshi Ikeda, Tsunehiro Yoshinaga
Pages 66-73
Abstract: A multi-marker automaton is a finite automaton which keeps marks as pebbles in the finite control, and cannot rewrite any input symbols but can make marks on its input, with the restriction that only a bounded number of these marks can exist at any given time. Improving the picture recognizability of the finite automaton is the reason why the multi-marker automaton was introduced. On the other hand, a multi-inkdot automaton is a conventional automaton capable of dropping an inkdot on a given input tape as a landmark, but unable to pick it up again. Due to the advances in many application areas such as moving image processing, computer animation, and so on, it has become increasingly apparent that the study of four-dimensional pattern processing is of crucial importance. Thus, we think that the study of four-dimensional automata as a computational model of four-dimensional pattern processing is also meaningful. This paper deals with marker versus inkdot over four-dimensional input tapes, and investigates some of their properties.

Effort and Cost Allocation in Medium to Large Software Development Projects
Kassem Saleh
Pages 74-79
Abstract: The proper allocation of financial and human resources to the various software development activities is a very important and critical task contributing to the success of the software project. To provide a realistic allocation, the manager of a software development project should account for the various activities needed to ensure the completion of the project with the required quality, on time and within budget. In this paper, we provide guidelines for cost and effort allocation based on typical software development activities using existing requirements-based estimation techniques.

A Halftoning-Based Multipurpose Image Watermarking with Recovery Capability
Carlos Santiago-Avila, Mario Gonzalez-Lee, Mariko Nakano-Miyatake, Hector Perez-Meana
Pages 80-87
Abstract: Nowadays digital watermarking has become an important technique, because with computational tools digital contents can be copied and/or modified easily. At the beginning, digital watermarking was used for either copyright protection or content authentication. However, in many situations both purposes (copyright protection and content authentication) are required to be satisfied at the same time. A watermarking scheme that satisfies both purposes is called a multipurpose watermarking scheme. In this paper, a novel multipurpose watermarking scheme is proposed, in which a self-embedding technique based on halftoning is used for content authentication and recovery, and a binary pattern is embedded into the halftone image using a quantization-based embedding method for copyright protection. Experimental results show favorable performance of the proposed algorithm.
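
The multipurpose watermarking entry above builds its self-embedding on a halftone of the image. Halftoning by Floyd-Steinberg error diffusion is a standard way to obtain such a binary image; the sketch below shows only that step, not the authors' embedding or authentication logic.

import numpy as np

def floyd_steinberg_halftone(gray: np.ndarray) -> np.ndarray:
    """Convert a greyscale image (0..255) into a 0/1 halftone by diffusing the
    quantization error to not-yet-processed neighbours."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 255 if img[y, x] >= 128 else 0
            out[y, x] = 1 if new else 0
            err = img[y, x] - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

if __name__ == "__main__":
    gradient = np.tile(np.linspace(0, 255, 64), (64, 1))   # toy test image
    print(floyd_steinberg_halftone(gradient).mean())        # roughly 0.5 black/white mix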

A Logical Approach to Image Recognition with Spatial Constraints
R. K. Fedorov, A. O. Shigarov
Pages 88-95
Abstract: In this paper an approach to recognizing objects in images is proposed. The approach is based on logical inference in CLP Prolog using structural descriptions of objects. Searching for the edges of objects in an image is performed as unification of the built-in predicate line satisfying a set of constraints defined by the description. The structural description is presented as rules of CLP Prolog.

Extended Residual Aggregated Risk Assessment - A Tool for Managing Effectiveness of the IT Audit
Traian Surcel, Cristian Amancei, Ana-Ramona Bologa, Alexandra Florea, Razvan Bologa
Pages 96-105
Abstract: This paper proposes an audit methodology which aims to identify key risks that arise during the IT audit within an organization and presents the impact of the identified risks. This involves evaluating the organization's tolerance to IT systems unavailability, identifying auditable activities and subtasks, identifying key risk factors and the associated weights, evaluating and classifying the significant risks identified, conducting audit procedures based on questionnaires and tests, and assessing the remaining aggregate risk that was not reduced by effective controls. Verifying the existence of compensating controls and the possibility of their implementation in an iterative manner, followed by a reassessment of covered risks after each iteration, eventually yields an insignificant remaining aggregate risk. The development of the audit mission has to be correlated with the corporate governance requirements, quality assurance and the marketing of the audit function. The results obtained are evaluated by taking into consideration the confidentiality and integrity of the resources involved.

The Effect of Organizational Readiness on CRM and Business Performance
Cristian Dutu, Horatiu Halmajan
Pages 106-114
Abstract: CRM is a business strategy which aims to create value for both the organization and its customers through initiating and maintaining customer relationships. As a core strategy, CRM is based on using a marketing information system and the company's IT infrastructure. CRM technology plays an important role in creating customer knowledge, which is the core of any CRM initiative. The CRM strategy will not yield the expected results without the proper use of information technology in the CRM processes. Organisational CRM readiness is related to the level of available technological resources which may be oriented towards CRM implementation. This paper examines the direct outcomes of CRM activities, as well as the relationship between these outcomes and business performance. We also analysed the effect of the level of organisational CRM readiness on the degree to which companies implemented CRM activities. We conducted a survey of 82 companies operating in the Western region of Romania, which revealed that CRM implementation generates superior business performance.
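
A small numeric sketch of the weighted residual-risk aggregation described in the IT-audit entry above: each auditable activity carries a risk score and a weight, effective controls remove part of that risk, and what remains is aggregated. The activity names, scores and the linear reduction rule are assumptions made for illustration, not the paper's methodology.

# Hypothetical residual aggregated risk calculation. Risk scores are 0..10,
# weights sum to 1, and each control removes a fraction of its activity's risk.
activities = [
    # (name, risk score, weight, control effectiveness 0..1)
    ("backup and restore",      8.0, 0.40, 0.75),
    ("user access management",  6.0, 0.35, 0.50),
    ("change management",       4.0, 0.25, 0.20),
]

def residual_aggregate(items) -> float:
    """Weighted sum of the risk that effective controls did not reduce."""
    return sum(risk * (1.0 - eff) * weight for _, risk, weight, eff in items)

if __name__ == "__main__":
    print(f"residual aggregated risk: {residual_aggregate(activities):.2f} / 10")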

Extending a Method of Describing System Management Operations to Energy-Saving Operations in Data Centers
Matsuki Yoshino, Michiko Oba, Norihisa Komoda
Pages 115-122
Abstract: The authors propose a method for describing system management operations based upon patterns identified by analyzing operations in data centers. Combined with a CMS (Configuration Management System) as defined in ITIL (Information Technology Infrastructure Library), it is possible to calculate the energy consumption of an information system managed by operations described with the proposed method. To demonstrate the effectiveness of the method, examples of saving energy in operations described by the proposed method are shown, together with an example of calculating the energy savings.

Vehicle Track Control
Debnath Bhattacharyya, Tai-Hoon Kim
Pages 123-131
Abstract: Lane Design for Optimal Traffic (LDOT) is considered an effective tool to improve the level of traffic services. It integrates newly emerged IT technologies with traditional traffic engineering. By providing the traffic partners with better communications, LDOT can significantly boost traffic management and operations. Meanwhile, however, the deployment of LDOT applications often involves a huge amount of investment, which may be discouraging in a challenging economy like the present one. Therefore, how to increase the cost-effectiveness of LDOT systems is a widely concerning issue, and there has been only limited research effort on the optimization of LDOT systems. Lane Design for Speed Optimization (LDSO) presents a new critical lane analysis as a guide for designing speed optimization to serve rush-hour traffic demands. Physical design and speed optimization are identified, and methods for evaluation are provided. The LDSO analysis technique is applied to the proposed design and speed optimization plan. Lane Design for Speed Optimization can robustly boost speed management and operations; therefore, how to increase the speed optimization of the lane is a widely concerning issue, and there has been only limited research effort on the optimization of LDSO systems. Design of Non Accidental Lane (DNAL) presents a new optimal lane analysis as a guide for designing a non-accidental lane to serve better utilization of the lane. The accident factors adjust the base model estimates for individual geometric design element dimensions and for traffic control features. The DNAL analysis technique is applied to the proposed design and speed optimization plan. Design of Non Accidental Lane can robustly manage operations on the lane to avoid accidents. Therefore, how to increase speed optimization with a non-accidental zone of the lane is a widely concerning issue, and there has been only limited research effort on the optimization of DNAL systems.
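
The data-centre entry above states that, combined with a CMS, the described operations allow the energy consumption of the managed system to be calculated. A rough sketch of such a calculation is given below; the configuration items, power figures and operation format are invented for illustration and are not the authors' model.

# Hypothetical energy estimate: a CMS-like table gives the power draw of each
# configuration item, and a described operation sequence says for how many
# hours each item runs in a given mode. All numbers are invented.
POWER_WATTS = {                      # per configuration item and mode
    "app-server": {"active": 350, "standby": 90, "off": 0},
    "db-server":  {"active": 480, "standby": 120, "off": 0},
}

operations = [                       # (item, mode, duration in hours)
    ("app-server", "active", 16), ("app-server", "standby", 8),
    ("db-server", "active", 20), ("db-server", "off", 4),
]

def energy_kwh(ops) -> float:
    """Sum watt-hours over all described operations and convert to kWh."""
    return sum(POWER_WATTS[item][mode] * hours for item, mode, hours in ops) / 1000.0

if __name__ == "__main__":
    print(f"estimated daily consumption: {energy_kwh(operations):.1f} kWh")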

Faster Facility Location and Hierarchical Clustering
J. Skala, I. Kolingerova
Pages 132-139
Abstract: We propose several methods to speed up the facility location, and the single link and the complete link clustering algorithms. The local search algorithm for the facility location is accelerated by introducing several space partitioning methods and a parallelisation on the CPU of a standard desktop computer. The influence of the cluster size on the speedup is documented. The paper further presents the computation of the single link and the complete link clustering on the GPU using the CUDA architecture.

Paper Title, Authors, Abstract (Issue 2, Volume 5, 2011)

A Question Answering System on Domain Specific Knowledge with Semantic Web Support
Borut Gorenjak, Marko Ferme, Milan Ojstersek
Pages 141-148
Abstract: In today's world the majority of information is accessible via the World Wide Web. A common way to access this information is through information retrieval applications like web search engines. We already know that web search engines flood their users with an enormous amount of data from which they cannot figure out the essential and most important information. These disadvantages can be reduced with question answering systems. The basic idea of question answering systems is to be able to provide answers to a specific question written in natural language. The main goal of question answering systems is to find a specific answer. This paper presents the architecture of our ontology-driven system that uses semantic descriptions of processes, databases and web services for a question answering system in the Slovenian language.

Using Geographic Information System for Wind Parks Software Solutions
Adela Bara, Anda Velicanu, Ion Lungu, Iuliana Botha
Pages 149-156
Abstract: A Geographic Information System can be used in order to store, analyze and predict data regarding wind parks. Such data can refer to the natural factors that can affect the wind turbines, the placement of the turbines or their power capacity. In this paper we discuss the possibility to manage wind parks in Romania, based on the wind speed and altitude of different regions.

A Phased Migration Strategy to Integrate the New Data Acquisition System into the Laguna Verde Nuclear Power Plant
Ramon Montellano-Garcia, Ilse Leal-Aulenbacher, Hector Bernal-Maldonado
Pages 157-165
Abstract: This paper focuses on the strategy applied in the gradual integration of a new data acquisition system with the online Plant Process Computer of the Laguna Verde Nuclear Power Plant. Because the data acquisition modules needed to be replaced, the need for a New Acquisition System arose. The issue of whether or not to embark on a complete or modular replacement of its elements required careful consideration. At Laguna Verde, we opted for a phased migration approach, considering two main aspects: that plant monitoring must remain online during the whole process, because it is required for plant operation, and that human machine interfaces and the computations design basis must be maintained, in order to minimize regulatory impact. The core of a phased migration strategy

hinges on a flexible modular system capable of accepting data streams from multiple data acquisition systems and computers and consolidating this data for their presentation in control room displays and in the power plant historical archive. This paper describes the methodology that was applied to integrate the new data acquisition system into the legacy system, which is based on a real-time mechanism and historical data stream transfer. Java Interrogation of an Homogeneous System of Inheritance Knowledge Bases by Client-Server Technology Nicolae Tandareanu Abstract: The subject developed in this paper is connected by the remote interrogation of a knowledge base. We suppose we have a collection of the same kind of knowledge bases, namely, extended inheritance knowledge bases. We use the client-server technology to query each element of such a system of knowledge bases. To implement the application we used Java technology. The reasoning process is based on an inference engine. The mechanism of this engine is based on the extended inheritance presented in [18], [22] and [23]. A methodological 166-174 description is given based on Java technology. Both the server and client side of the application are presented step by step. The way of presentation is divided into stages, each stage is well defined according to the proposed tasks. Each step of the presentation can be easily modified and adapted by a person who wants to write his/her own application to query a knowledge base by client-server technology. The use of the extended inheritance knowledge bases can be explained by the fact that the inference engine in this case is easier to write than the inference engine for other methods of knowledge representation. The last section enumerates several developing directions. Stereoscopy in Objects Motion Parameters Determination A. Zak Abstract: Computer vision is the science and technology of machines which are able to extract information from an image that is necessary to solve some task. As a scientific discipline, computer vision is concerned with the theory that extract information from images. It must be noticed that computer vision is still very strong and fast developing discipline because of technology expansion especially computers and cameras. The image data can take many forms, such as video 175-182 sequences or views from multiple cameras which is in interesting of this paper. Paper presents method of calculation of objects movement parameters in threedimensional space using system which ensure stereoscopic vision. There was described the algorithm of movement discovering and moving object tracking, including methods of separate and actualization of background, method of distinguishing moving objects and its positions calculation on acquired pictures. Next the methods of objects coordinate calculation in three dimensional space basis on data retrieved from stereoscopic image computation was discussed in detail. More over the problem of images rectification and stereovision system

calibration was in detail discussed. At last the method of movements parameters calculation in 3D space was described. At the end of the paper some chosen results of research which were conducted in laboratory conditions were presented. The Impact of Software Quality on Maintenance Process Anas Bassam Al-Badareen, Mohd Hasan Selamat, Marzanah A. Jabar, Jamilah Din, Sherzod Turaev Abstract: The software is always required to be developed and maintained a quality to the rapid progresses in industry, technology, economy, and other fields. Software maintenance is considered as one of the main issues in software development life cycle that is required efforts and resources more than other phase. Studies estimated that the cost of software maintenance rapidly increased that 183-190 reached the 90% of the total cost of software development life cycle. Therefore, it is considered as an economic impact in information system community. Several researches are intended to estimate and reduce the cost of this task. This study introduces a model of software maintenance process that emphasizes the impact of the software quality on the maintenance process. The study presents the process of the software maintenance, and then discussed the quality characteristics that affect these tasks. Furthermore, the evaluation criteria for these factors are discussed. Reusable Software Component Life Cycle Anas Bassam Al-Badareen, Mohd Hasan Selamat, Marzanah A. Jabar, Jamilah Din, Sherzod Turaev Abstract: In order to decrease the time and effort of the software development process and increase the quality of the software product significantly, software engineering required new technologies. Nowadays, most software engineering design is based on reuse of existing system or components. Also, it is become a main development approach for business and commercial systems. The concept of reusability is widely used in order to reduce cost, effort, and time of software development. Reusability also increases the productivity, maintainability, 191-199 portability, and reliability of the software products. That is the reusable software components are evaluated several times in other systems before. The problems faced by software engineers is not lack of reuse, but lack of widespread, systematic reuse. They know how to do it, but they do it informally. Therefore, strong attention must be given to this concept. This study aims to propose a systematic framework considers the reusability through software life cycle from two sides, build-for-reuse and build-by-reuse. Furthermore, the repository of reusable software components is considered, and the evaluation criteria from both sides are proposed. Finally, an empirical validation is conducted by apply the developed framework on a case study. Extending XML Conditional Schema Representations with WordNet Data Nicolae Tandareanu, Mihaela Colhon, Cristina Zamfir

200-209

Abstract: Conditional Knowledge Representation and Reasoning represents a new brand of KR&R, for which several formalisms have been developed. In this paper we define XML Language Specifications for a graph-based representation formalism of such knowledge enriched with WordNet linguistic knowledge. Our task is to detect when pairs of words (in our formalism they are named objects) could be linked by means of is_a and part_of relationships. Smart Human Face Detection System Iyad Aldasouqi, Mahmoud Hassan Abstract: Digital Image Processing (DIP) is a multidisciplinary science that borrows principles from diverse fields such as optics, surface physics, visual psychophysics, computer science and mathematics. Some of image processing applications can be finding in: astronomy, ultrasonic imaging, remote sensing, video communications and microscopy. Face detection/recognition has attracted much attention and its research has rapidly increased in many potential applications in computer, communication and automatic access control system. Furthermore, 210-217 face detection as a first step is an important part of face recognition. Since the image has lots of variations in appearance, face detection is not straightforward, such as pose variation, occlusion, image orientation, illuminating condition and others. The full face detection and gender recognition system is made up of a series of connected components. There are much software that can facilitate the detection process such as: Matlab, Labview, C and others. In this paper we propose a fast algorithm for detecting human faces in color images using HSV color model without sacrificing the speed of detection. The proposed algorithm has been tested on various real images and its performance is found to be quite satisfactory. An Approach for 3D Object Recognition of Universal Goods Bernd Scholz-Reiter, Hendrik Thamer, Claudio Uriarte Abstract: Today, unloading processes of standard container units are mainly executed manually. An automatic unloading system could automate this labor and time intensive process step. The crucial challenge in developing such a system is the object recognition of goods with undefined shape and size. The development and the successful market launch of the Paketroboter has shown the feasibility of the correct detection of cubic goods inside a standard container unit. Nevertheless, there exists no established system that is able to unload universal packaged goods. 218-225 The requirements for a suitable object recognition system for goods with undefined shapes are very high. In the case of an high error rate, the automatic unloading process has to be aborted or a manually intervention is necessary. This paper presents a concept that aims to develop an object recognition system for classification and pose detection of universal packaged goods inside a standard container unit. In order to classify different packaged goods inside a less lighted container unit significant sensor data is required. On the basis of the sensor data, the object recognition system detects all goods and calculates suitable 3D gripping points for the manipulator unit. Therefore, range images from Time-of-Flight

cameras and simulated images are used for image analysis. GPU-Based Translation-Invariant 2D Discrete Wavelet Transform for Image Processing Dietmar Wippig, Bernd Klauer Abstract: The Discrete Wavelet Transform (DWT) is applied to various signal and image processing applications. However the computation is computational expense. Therefore plenty of approaches have been proposed to accelerate the computation. Graphics processing units (GPUs) can be used as stream processor to speed up the calculation of the DWT. In this paper, we present a implementation of the translation-invariant wavelet transform using consumer level graphics hardware. As 226-234 our approach was motivated by infrared image processing our implementation focuses on gray-level images, but can be also used in color image processing applications. Our experiments show, that the computation performance of the DWT could be significantly improved. However, initialisation and data transfer times are still a problem of GPU implementations. They could dramatically reduce the achievable performance, if they cannot be hided by the application. This effect was also observed integrating our implementation in wavelet-based edge detection and wavelet denoising. Text Analysis with Sequence Matching Marko Ferme, Milan Ojstersek Abstract: This article describes some common problems faced in natural language processing. The main problem consist of a user given sentence, which has to be matched against an existing knowledge base, consisting of semantically described words or phrases. Some main problems in this process are outlined and the most common solutions used in natural language processing are overviewed. A sequence matching algorithm is introduced as an alternative solution and its advantages over 235-242 the existing approaches are explained. The algorithm is explained in detail where the longest subsequences discovery algorithm is explained first. Then the major components of the similarity measure are defined and the computation of concurrence and dispersion measure is presented. Results of the algorithms performance on a test set are then shown and different implementations of algorithm usage are discussed. The work is concluded with some ideas for the future and some examples where our approach can be practically used. Grid Learning Classifiers - A Web Based Interface Manuel Filipe Santos, Wesley Mathew, Henrique Santos Abstract: The toolkit for learning classifier system for grid data mining is a 243-251 communication channel between remote users and gridclass system. Gridclass system is the system for grid data mining, grid computing approach in the distributed data mining. This toolkit is a web based system therefore end users can set the configuration of each node in the grid environment and execute the grid

class system from the remote location. Mainly, configuration module of the toolkit is designed for the sUpervised Classifier System (UCS) as a data mining algorithm. Toolkit has three fundamental functions such as creating new project, updating the project, and executing the project. Initially, user has to define the project based on the complexity of the problem to the system. While creating a new project all the data and configuration information about all nods are stored in the file under a user defined project name. The updating phase user can makes changes in the configuration file or replace the training data for new experiments. There are two sub functions in the phase of execution: do the execution of gridclass system and do the comparison and evaluation of the performance of the different executions. Toolkit can store the global model and related local models and it testing accuracies in the server system. The main focus of this work is to improve the performance of learning classifier system: therefore an attempt is made to compare the performance of learning classifier system with different configurations, which has a significant role. The ROC graph is the best option to represent the performance of classifier system. Accuracy under the curve (ACU) is a numerical value to represent the ROC curve. Therefore, users can easy to measure the performance of global model with the help of AUC. Other objective of this work is to provide friendly environment to the end users and gives better facilities to evaluate the performance of the global model. Colour Image Segmentation Using Relative Values of RGB in Various Illumination Circumstances Chiunhsiun Lin, Ching-Hung Su, Hsuan Shu Huang, Kuo-Chin Fan Abstract: We propose a novel colour segmentation algorithm can work in various illumination circumstances. The proposed colour segmentation algorithm operates directly on RGB colour space without the need of colour space transformation and it is very robust to various illumination conditions. Our approach can be employed 252-261 in various domains (e.g., human skin colour segmentation, the maturity of tomatoes). Furthermore, our approach has the benefits of being insensitive to rotation, scaling, and translation. In addition, the system can be applied to different applications, for example, colour segmentation for fruits (vegetables) quality control by merely changing the values of the parameters ( , 1, 2, 1, 2). Experimental results demonstrate the practicability of our proposed approach in colour segmentation. Research on the Real-time 3D Image Processing System using Facial Feature Tracking Jae-gu Song, Yohwan So, Eunseok Lee, Seoksoo Kim Abstract: This research is on the real-time 3D image processing system using facial feature tracking and how the system works. When transferring an input 2D image to a 3D stereoscopic image, this system provides real-time 3D synthetic images. It also provides measures to trace a face in the input image in order to distinguish a person from the background and includes measures to digitize 262-269

positional values within the face by tracking colors and facial feature points. The real-time 3D image processing system in this study that uses facial feature tracking is the preprocessing system for special effects. Firstly, it allows users to utilize basic positional values obtained from a person and a background in the collected images when applying special effects. Secondly, by checking a 3D stereoscopic image in real-time, users can verify composition and image effects prior to application of special effects. Lastly, data that successfully generated facial areas can be constantly improved and used as foundation to standardize facial area detecting data as well as to create the plug-in. Modified Progressive Strategy for Multiple Proteins Sequence Alignment Gamil Abdel-Azim, Mohamed Ben Othman, Zaher Abo-Eleneen Abstract: One of the important research topics of bioinformatics is the Multiple proteins sequence alignment. Since the exact methods for MSA have exponential time complexity, the heuristic approaches and the progressive alignment are the most commonly used in multiple sequences alignments. In this paper, we propose a modified progressive alignment strategy. Choosing and merging the most closely sequences is one of the important steps of the progressive alignment strategy. This depends on the similarity between the sequences. To measure that similarity we need to define a distance. In this paper, we construct a distance matrix. The elements of a row of this matrix correspond to the distance between a sequence to other sequences. A guide tree is built using the distance matrix. For each sequence 270-280 we define a descriptor which is called also feature vector. The elements of the distance matrix are calculated based on the distance between the descriptors of the sequences. The descriptor reduces the dimension of the sequence then yields to a faster calculation of distance matrix and also to obtain preliminary distance matrix without pairwise alignment in the first step. The principle contribution in this paper is the modification of the first step of the basic progressive alignment strategy ie the computation of the distance matrix which yields to a new guide tree. Such guide tree is simple to implement and gives good result's performance. A comparison between the results got from the proposed strategy and from the ClastalW over the database BAliBASE 3.0 is analyzed and reported. The Results of our testing in all dataset show that the proposed strategy is as good as Clustalw in most cases. Rule Based Bi-Directional Transformation of UML2 Activities into Petri Nets A. Spiteri Staines Abstract: Many modern software models and notations are graph based. UML 2 activities are important notations for modeling different types of behavior and system properties. In the UML 2 specification it is suggested that some forms of 281-288 activity types are based on Petri net formalisms. Ideally the mapping of UML activities into Petri nets should be bi-directional. The bi-directional mapping needs to be simplified and operational. Model-to-Model mapping in theory offers the advantage of fully operational bi-directional mapping between different models or formalisms that share some common properties. However in reality this is not easily

achievable because not all the transformations are similar. Previous work was presented where it was shown how Triple Graph Grammars are useful to achieve this mapping. UML 2 activities have some common properties with Petri nets. There are exceptions which require some special attention. In this paper a simple condensed rule based solution for complete bi-directional mapping or transforming UML 2 activities into Petri nets is presented. The solution should be operational, and can be represented using different notations. A practical example is used to illustrate the bi-directional transformation possibility and conclusions are explained. Rewriting Petri Nets as Directed Graphs A. Spiteri Staines Abstract: This work attempts to understand some of the basic properties of Petri nets and their relationships to directed graphs. Different forms of directed graphs are widely used in computer science. Normally various names are given to these structures. E.g. directed acyclical graphs (DAGs), control flow graphs (CFGs), task graphs, generalized task graphs (GTGs), state transition diagrams (STDs), state machines, etc. Some structures might exhibit bisimilarity. The justification for this work is that Petri nets are based on graphs and have some similarities to them. Transforming Petri nets into graphs opens up a whole set of new interesting 289-297 possible experimentations. Normally this is overlooked. Directed Graphs have a lot of theory and research associated with them. This work could be further developed and used for Petri net evaluation. The related works justifies the reasoning how and why Petri nets are obtained or supported using graphs. The transformation approach can be formal or informal. The main problem tackled is how graphs can be obtained from Petri nets. Possible solutions that use reduction methods to simplify the Petri net are presented. Different methods to extract graphs from the basic or fundamental Petri net classes are explained. Some examples are given and the findings are briefly discussed. Detection of Pornographic Digital Images Jorge A. Marcial-Basilio, Gualberto Aguilar-Torres, Gabriel Sanchez-Perez, L. Karina Toscano-Medina, Hector M. Perez-Meana Abstract: In this paper a novel algorithm to detect explicit content or pornographic images is proposed using the transformation from the RGB model color to the YCbCr or HSV color model, moreover using the skin detection the image is segmented, finally the percentage of pixels that was detected as skin tone is 298-305 calculated. The results obtained using the proposed algorithm are compared with two software solutions, Parabens Porn Detection Stick and FTK Explicit Image Detection, which are the most commercial software solutions to detect pornographic images. A set of 800 images, which 400 pornographic images and 400 natural images, is used to test each system. The proposed algorithm carried out identify up to 68.87% of the pornographic images, and 14.25% of false positives, the Parabens Porn Detection Stick achieved 71.5% of recognizing but with 33.5% of false positives, and FTK Explicit Image Detection achieved 69.25% of

effectiveness for the same set of images but 35.5% of false positives. Finally the proposed algorithm works effectively to carry out the main goal which is to apply this method to forensic analysis or pornographic images detection on storage devices. Paper Title, Authors, Abstract (Issue 3, Volume 5, 2011) Unhealthy Poultry Carcass Detection Using Genetic Fuzzy Classifier System Reza Javidan, Ali Reza Mollaei Pages

Abstract: In this paper automatic unhealthy detection of poultries in slaughter houses is discussed and a new real-time approach based on genetic fuzzy classifier for classification of textural images of poultries is proposed. In the presented method, after segmentation of the image into the object (poultry) and background, the size (area), shape (elongation) and the color of the object are calculated as 307-313 features. Then, these crisp values are converted to their normalized fuzzy equivalents, between 0 and 1. A fuzzy rule base system is then used for inferring that the poultry is normal or not. The parameters of the fuzzy rule based system are optimized using genetic algorithm. Finally, if the output of the optimized fuzzy classifier system shows any abnormality, the carcass of the poultry should be omitted from the slaughter. Experimental results on real data show the effectiveness of the proposed method. Conceptual Model of Mobile Services in the Travel and Tourism Industry Antonio Portolan, Krunoslav Zubrinic, Mario Milicevic Abstract: Today, in a time of economic crisis, companies in all economic sectors should reevaluate their strategies to achieve the necessary market success. Recent studies show that the potential customers would rather spend their earnings on domestic equipment and electronic devices like laptops and mobile phones, than on vacations and traveling. This behavior generates huge losses for the travel industry 314-321 and tourism. The potential solution for that problem is to connect the mobile industry with the travel and tourism in a way that will encourage customers to travel more and enjoy the time by using interactive and helpful content. In this paper we discuss the possibility of mobile device integration in the travel and tourism industry and its impact on potential customer groups. At the end of paper, a conceptual model of mobile services integration in the current travel and tourism industry is presented. Computational Technologies for Accreditation in Higher Education Aboubekeur Hamdi-Cherif Abstract: Academic accreditation and assessment in Higher Education (A3-HE) is, 322-331 above all, a social status meant to acknowledge that an institution or program is following recognized and requested quality criteria issued from common good practice. In a previous work, we described the main processes involved in A3- HE. Two main issues were reported. First, heavy and tedious paperwork characterizes

actual academic processes. Second, subjective judgments might interfere with the processes. Indeed, both the internal self-examination undergone by institutions / programs and the external reviewing processes made by recognized accrediting bodies are prone to errors and subjective biases as they are largely based on rules of thumb human judgments despite the presence of standards. In this paper, we describe a set of computational technologies to address these issues. Emphasis is made on technologies spanning (crude) data, information, refined information including decision support, ultimately leading to the most refined and expensive piece of information, i.e., knowledge and its discovery in large and diversified databases over the Web, based on cloud computing solutions. A human-machine interactive knowledge-based learning control system for A3-HE is our far-reaching goal. However, the A3-HE processes are too complex to be addressed by computerized systems alone. As a result, scaling up to real-life applications still require much time to reach tangible implementations. Terminator for E-mail Spam - A Fuzzy Approach Revealed P. Sudhakar, G. Poonkuzhali, K. Thiagarajan, K.Sarukesi Abstract: In this information technology world, the highest degree of communication happens through e-mails. Realistically most of the inboxes are flooded with spam e-mails as most of transactions through this internet is affected by Passive attacks and Active attacks. Several algorithms exist in the e-world to defend against spam e-mails. But the fulfilment of accuracy in deducting spam email is still oscillating between 80-90%. This clearly shows the necessity for improvement in spam control algorithms on various projections. In this proposed work a new solvent was chosen in the fuzzy word to combat against spam emails. Various fuzzy rules are created for spam e-mails and every e-mail is enforced to pass through fuzzy rule filter for identifying spam. Results of the each fuzzy rule for the input emails are derived to classify the e-mail to be spam or consent. An Automatic Method to Generate the Emotional Vectors of Emoticons Using Blog Articles Sho Aoki, Osamu Uchida Abstract: In recent years, reputation analysis and opinion mining services using the articles written in personal blogs, message boards, and community web sites such as Facebook, MySpace, and Twitter have been developed. To improve the accuracy of the reputation analysis and the opinion mining, we have to extract emotions or 346-353 reactions of writers of documents accurately. And now, graphical emoticons (emojis in Japanese) are often used in blogs and SNSs in Japan, and in many cases these emoticons have the role of modalities of writers of blog articles or SNS messages. That is, to estimate emotions represented by emoticons is important for reputation analysis and opinion mining. In this study, we propose a methodology for automatically generating the emotional vectors of graphical emoticons automatically using the collocation relationship between emotional words and emoticons which is derived from many blog articles. The experimental results show

332-345

the effectiveness of the proposed method. National Healthcare Information System Integration: A Service Oriented Approach Usha Batra, Saurabh Mukharjee Abstract: Healthcare in our home country, India is a cause of concern even after 63 years of Independence. There is a need to create world-class medical infrastructure in India and to make it more accessible and affordable to a large cross section of our people. Introduction of information technology in healthcare system may eventually enhance the overall quality of national standards. The success in current healthcare system requires reengineering of healthcare infrastructure for India. For this, there is a high requirement in India to invest in IT infrastructure to provide interoperability in healthcare information system. Also, integration of IT with healthcare system may lead to open connectivity at all levels (i.e. InPatient and OutPatient care), ensuring that patient information is available anytime and right at the point of care, eliminating unnecessary delay in treatment, avoiding replication 354-361 of test reports, improving more informed decisions and hence leading to improved quality of care.With this intent, this paper attempts to present software design patterns for Service Oriented Architecture (SOA) and its related technologies for integrating both intra and inter enterprise stovepipe applications in healthcare enterprise to avoid replication of business processes and data repositories. We aim to develop a common virtual environment for intra and inter enterprise wide applications in National Healthcare Information System (NHIS). The ultimate goal is to present a systematic requirement driven approach for building an Enterprise Application Integration (EAI) solution using the Service Oriented Architecture and Message Oriented Middleware (MOM) principles. We aim to discuss the design concept of Enterprise Application Integration for integration of a healthcare organization and its business partners to communicate with each other in a heterogeneous network in a seamless way. A Method to Extract Unsteadiness of Concept Attributes Based on Weblog Yosuke Horiuchi, Osamu Uchida Abstract: Concept bases are composed of a collection of concept attributes and used for multiple purposes such as improving efficiency of information retrieval and making commonsensical judgments using computers recently. To construct concept bases, the data of the dictionaries is generically used. However, concept attributes are not always static, that is, some of them shift by the influence of 362-369 various events and incidents. For example, it is to be expected that the attributes of the sports in the concept attribute of the country holding some sports event are stronger than usual time, or they are append to the concept attribute of the country. In this study, we consider the application of weblogs to extract the fluctuations of concept attributes. Many of articles of weblogs are influenced by the news, and the number of documents of weblogs is very large. Then, in this study, we propose a new method to extract the influence of various events and incidents to attributes by regarding the tags given to an article as an attribute of the words in the article, and

verify the effectiveness of our method by an experiment. Semantic Search Itinerary Recommender System Liviu Adrian Cotfas, Andreea Diosteanu, Stefan Daniel Dumitrescu, Alexandru Smeureanu Abstract: In this paper we present a novel approach based on Natural Language Processing and hybrid multi-objective genetic algorithms for developing mobile tourism itinerary recommender systems. The proposed semantic matching technique allows users to find Points of Interest POIs that match highly specific 370-377 preferences. Moreover, it can also be used to further filter results from traditional recommender techniques, such as collaborative filtering, and only requires a minimal initial input from the user to generate relevant recommendations. A hybrid multi-objective genetic algorithm has been developed in order to allow the tourists to easily choose between several Pareto optimal itineraries computed in near realtime. Furthermore, the proposed system is easy to use, thus it can be stated that our solution is both complex and at the same time user-oriented. Symbolic Neural Networks for Clustering Higher-Level Concepts Kieran Greer Abstract: Previous work has described linking mechanisms and how they might be used in a cognitive model that could even begin to think [6][7][8]. One key problem is enabling the system to autonomously form its own concept structures from the information that is presented. This is particularly difficult if the information is unstructured, for example, individual concept values being presented in 378-386 unstructured groups. This paper suggests an addition to the current model that would allow it to filter the unstructured information to form higher-level concept chains that would represent something in the real world. The new architecture also starts to resemble a traditional feedforward neural network, suggesting what future directions the research might take. This extended version of the paper includes results from some clustering tests, considers applications for the model and takes a closer look at the intelligence side of things. An Autonomous Fuzzy-controlled Indoor Mobile Robot for Path Following and Obstacle Avoidance Mousa T. AL-Akhras, Mohammad O. Salameh, Maha K. Saadeh, Mohammed A. ALAwairdhi Abstract: This paper provides the design and implementation details of an 387-395 autonomous, battery-powered robot that is based on Fuzzy Logic. The robot is able to follow a pre-defined path and to avoid obstacles, after avoiding an obstacle, the robot returns back to the path. The proposed system is divided into two main modules for path following and for obstacle avoidance and line search. Path following controller is responsible for following a pre-defined path, when an obstacle is detected, obstacle avoidance and line search controller is called to avoid

Symbolic Neural Networks for Clustering Higher-Level Concepts
Kieran Greer
Abstract: Previous work has described linking mechanisms and how they might be used in a cognitive model that could even begin to think [6][7][8]. One key problem is enabling the system to autonomously form its own concept structures from the information that is presented. This is particularly difficult if the information is unstructured, for example when individual concept values are presented in unstructured groups. This paper suggests an addition to the current model that would allow it to filter the unstructured information into higher-level concept chains that represent something in the real world. The new architecture also starts to resemble a traditional feedforward neural network, suggesting what future directions the research might take. This extended version of the paper includes results from some clustering tests, considers applications for the model and takes a closer look at the intelligence aspects. (Pages 378-386)

An Autonomous Fuzzy-controlled Indoor Mobile Robot for Path Following and Obstacle Avoidance
Mousa T. AL-Akhras, Mohammad O. Salameh, Maha K. Saadeh, Mohammed A. AL-Awairdhi
Abstract: This paper provides the design and implementation details of an autonomous, battery-powered robot based on Fuzzy Logic. The robot is able to follow a pre-defined path and to avoid obstacles; after avoiding an obstacle, it returns to the path. The proposed system is divided into two main modules, one for path following and one for obstacle avoidance and line search. The path-following controller is responsible for following the pre-defined path; when an obstacle is detected, the obstacle-avoidance and line-search controller is called to avoid the obstacle and then return to the path. When the robot finds the path again, the path-following controller takes over once more. A LEGO Mindstorms NXT robot was used to realise the proposed design, and the detailed design steps are provided for readers who are interested in replicating it. Fuzzy Logic was employed to implement the path following: the Fuzzy Logic controller takes the robot's light and ultrasonic sensor readings as input and sends commands to the motors to control the robot's speed and direction. An extensive set of experiments covering both simple and complicated scenarios for path following and obstacle avoidance was conducted, and the results demonstrate the effectiveness of the system. Images of these scenarios are provided for reference, and links to uploaded videos are given in the paper for interested readers. (Pages 387-395)
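
The behaviour of such a fuzzy path-following controller can be sketched in a few lines. The version below is a deliberately simplified, hypothetical rule base with a single input (the line-position error from the light sensor) and a single output (a steering command); the actual controller in the paper also uses ultrasonic readings, runs on the LEGO NXT hardware, and its membership functions and rules may be quite different.

    # Illustrative sketch only: a toy fuzzy steering rule base. Breakpoints, rules
    # and the single "light error" input are assumptions, not the paper's design.

    def tri(x: float, a: float, b: float, c: float) -> float:
        """Triangular membership function with peak at b and feet at a and c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


    def fuzzy_steering(error: float) -> float:
        """Map line-position error in [-1, 1] (negative = line is to the left)
        to a steering command in [-1, 1] (negative = steer left)."""
        # Fuzzify: how strongly is the error "left", "centre" or "right"?
        left = tri(error, -1.5, -1.0, 0.0)
        centre = tri(error, -1.0, 0.0, 1.0)
        right = tri(error, 0.0, 1.0, 1.5)

        # Rule base: drifted left -> steer left; centred -> straight; drifted right -> steer right.
        # Defuzzify with a weighted average of the rule outputs (-1, 0, +1).
        weights = left + centre + right
        if weights == 0.0:
            return 0.0
        return (left * -1.0 + centre * 0.0 + right * 1.0) / weights


    for e in (-0.8, -0.2, 0.0, 0.5):
        print(f"error={e:+.1f} -> steering={fuzzy_steering(e):+.2f}")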

B2B Process Integration using Service Oriented Architecture through Web Services
Adrian Besimi
Abstract: B2B electronic commerce for Small and Medium Enterprises (SMEs) faces obstacles due to the nature of the processes involved. SMEs encounter various barriers to entering the B2B market: lack of understanding, lack of finances, and lack of IT experts who can create customized applications for standardized B2B e-commerce, such as ebXML. They also have difficulty choosing the appropriate channel for communicating B2B messages, whether a public e-marketplace or their own private Web Services. COTS software is expensive, so this paper proposes a solution based on the ebXML framework and Web Services as a middleware layer that does most of the work for these companies, while also offering private Web Services for use by the public e-marketplace. This Service Oriented Architecture can further be used by external partners to integrate the B2B processes into their own enterprise systems. (Pages 396-403)

Solving the Protein Folding Problem Using a Distributed Q-Learning Approach
Gabriela Czibula, Maria-Iuliana Bocicor, Istvan-Gergely Czibula
Abstract: The determination of the three-dimensional structure of a protein, the so-called protein folding problem, from the linear sequence of its amino acids is one of the greatest challenges of bioinformatics, and an important research direction due to its numerous applications in medicine (drug design, disease prediction) and genetic engineering (cell modelling, modification and improvement of the functions of certain proteins). In this paper we introduce a distributed reinforcement-learning-based approach for solving the bidimensional protein folding problem, an NP-complete problem that consists of predicting the bidimensional structure of a protein from its amino acid sequence. Our model is based on a distributed Q-learning approach. The experimental evaluation of the proposed system has provided encouraging results, indicating the potential of our proposal. The advantages and drawbacks of the proposed approach are also emphasized. (Pages 404-413)
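
To make the reinforcement-learning formulation concrete, the sketch below runs plain, single-agent tabular Q-learning on the two-dimensional hydrophobic-polar (HP) lattice model, where the reward is the number of non-consecutive H-H neighbour contacts of the final fold. The sequence, the learning constants, the absolute up/down/left/right action set and the HP reward itself are illustrative assumptions; the paper's approach is distributed and may differ in its state encoding, reward and agent organisation.

    # Illustrative sketch only: single-agent tabular Q-learning on the 2D HP
    # lattice folding model (assumed setup, not the paper's distributed system).
    import random
    from collections import defaultdict

    SEQ = "HPHPPHHPHH"                      # hypothetical example HP sequence
    MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
    ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 2000

    Q = defaultdict(float)                  # Q[(state, action)], state = moves so far


    def fold_reward(moves: str) -> float:
        """Place the chain on the lattice; return H-H contacts, or -1 on collision."""
        pos, positions = (0, 0), [(0, 0)]
        for m in moves:
            dx, dy = MOVES[m]
            pos = (pos[0] + dx, pos[1] + dy)
            if pos in positions:            # self-intersection: invalid fold
                return -1.0
            positions.append(pos)
        contacts = 0
        for i, (x, y) in enumerate(positions):
            for j in range(i + 2, len(positions)):   # skip chain neighbours
                if SEQ[i] == SEQ[j] == "H" and abs(x - positions[j][0]) + abs(y - positions[j][1]) == 1:
                    contacts += 1
        return float(contacts)


    for _ in range(EPISODES):
        state = ""
        while len(state) < len(SEQ) - 1:    # one move per residue after the first
            action = (random.choice(list(MOVES)) if random.random() < EPSILON
                      else max(MOVES, key=lambda a: Q[(state, a)]))
            next_state = state + action
            done = len(next_state) == len(SEQ) - 1
            reward = fold_reward(next_state) if done else 0.0
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in MOVES)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # Greedy rollout of the learned policy.
    state = ""
    while len(state) < len(SEQ) - 1:
        state += max(MOVES, key=lambda a: Q[(state, a)])
    print("best fold found:", state, "reward:", fold_reward(state))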

Data Loss Prevention for Confidential Web Contents and Security Evaluation with BAN Logic
Yasuhiro Kirihata, Yoshiki Sameshima, Takashi Onoyama, Norihisa Komoda
Abstract: Since the enforcement of the Private Information Protection Law of Japan, the protection of confidential information has been one of the most significant issues for enterprises and organizations. However, many incidents of confidential information leakage still occur, this has become a serious issue for industry, and so far there is no effective countermeasure to prevent it. In this paper, we propose a web content protection system that realizes the protection of confidential web contents. The system provides a special viewer application for viewing the encrypted content data and prohibits copying and taking snapshots of the displayed confidential data. By adopting a dynamic encryption methodology based on an intermediate encryption proxy, it is possible to protect web contents that are generated dynamically by web applications. By applying our approach to a conventional web system, system administrators can manage the distribution of confidential information and prevent it from being leaked outside the office. We describe the system architecture and implementation details, and we evaluate the security of the system implementation and of the internal authentication protocol with BAN logic. (Pages 414-422)

Formal Verification of Embedded Software based on Software Compliance Properties and Explicit Use of Time
Miroslav Popovic, Ilija Basicevic
Abstract: The complexity of embedded software running in modern large-scale distributed systems is becoming so high that it is hardly manageable by humans. Formal methods and their supporting tools offer effective means for mastering this complexity, and therefore they remain an important subject of intensive research and development in both industry and academia. This paper contributes to the overall R&D efforts in the area by proposing a method, and supporting tools, for the formal verification of a class of embedded software that may be modeled as a collection of distributed finite state machines. The method is based on model checking certain properties of embedded software models with the Cadence SMV tool. These properties are systematically derived from the compliance test suites normally defined by the relevant standards for compliance software testing, and we therefore refer to them as compliance software properties. Another specific feature of our approach is that we enable the explicit use of time within the software properties being verified, which makes these properties more expressive and brings them closer to the system properties analyzed in other engineering disciplines. The supporting tools enable the generation of these models from high-level design models and/or from the target source code, for example in C/C++. We demonstrate the usability of the proposed method in a case study: the formal verification of distributed embedded software actually used in real telephone switches and call centers. (Pages 423-430)
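
The idea of a compliance property with explicit time can be illustrated independently of any particular model checker. The sketch below checks one such hypothetical property, "every request must be answered by a response within a given time bound", against a recorded event trace; the paper instead verifies properties of this kind on software models with the Cadence SMV model checker, so the property shape, event names and bound here are assumptions for illustration only.

    # Illustrative sketch only: checking one timed compliance property against a
    # recorded event trace of a finite state machine (not the paper's SMV flow).
    from typing import List, Tuple

    Event = Tuple[float, str]   # (timestamp, event name)


    def holds_request_response(trace: List[Event], bound: float) -> bool:
        """Property: every 'request' is followed by a 'response' within `bound` time units."""
        pending: List[float] = []                       # timestamps of unanswered requests
        for time, name in trace:
            if any(time - t > bound for t in pending):  # an old request went unanswered too long
                return False
            if name == "request":
                pending.append(time)
            elif name == "response" and pending:
                pending.pop(0)                          # a response answers the oldest request
        return not pending                              # leftover requests count as violations here


    good = [(0.0, "request"), (2.5, "response"), (3.0, "request"), (6.0, "response")]
    bad = [(0.0, "request"), (9.0, "response")]
    print(holds_request_response(good, bound=5.0))      # True
    print(holds_request_response(bad, bound=5.0))       # False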

Using UML Diagrams for Object Oriented Implementation of an Interactive Software for Studying the Circle
A. Iordan, M. Panoiu, I. Muscalagiu, R. Rob
Abstract: This paper presents the steps required for the object-oriented implementation of a computer system used in the study of the circle. The modeling of the system is achieved through specific UML diagrams representing the stages of analysis, design and implementation, so that the system is described in a clear and concise manner. The software is very useful to both students and teachers, because mathematics, and geometry in particular, is difficult for most students to understand. (Pages 431-439)

A Novel Approach to Analyzing Natural Child Body Gestures using Dominant Image Level Technique (DIL)
Mahmoud Z. Iskandarani
Abstract: A novel approach to child body gesture analysis is presented and discussed. The developed technique allows the monitoring and analysis of a child's behavior based on the correlation between head, hand and body poses. The DIL technique produces several organized maps resulting from image conversion and pixel redistribution, thereby lumping the child's individual gestures into computable matrices, which are fed to an intelligent analysis system. The obtained results show that the technique is capable of classifying the child's presented body pose and of modelling the child's body gestures under various conditions. (Pages 440-448)



