

A J2ME-Based Wireless Automated Video Surveillance System Using Motion Detection Method

Ashwin S, Sathiya Sethuram A, Varun A, Vasanth P

Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore - 641105

sashwinamrita@gmail.com, s_sethuram2004@yahoo.co.in, varun007amrita@gmail.com, vasanthrpjan1@yahoo.co.in

Abstract

A low-cost automated mobile phone-based wireless video surveillance solution using a motion detection method is proposed in this paper. The proposed solution can be applied not only to various security systems but also to environmental surveillance. Firstly, the basic principle of motion detection is given. Limited by the memory and computing capacity of a mobile phone, a cross-correlation technique adapted to these constraints is presented. Then, a self-adaptive background model that updates automatically and in a timely manner to adapt to the slow and slight changes of the natural environment is detailed. The current captured image is divided into four quadrants, and when the cross-correlation of one or more quadrants reaches a certain threshold, a moving object is considered to be in the current view; the mobile phone then automatically notifies the user by sending the corresponding image through MMS (Multimedia Messaging Service). The proposed method can be implemented in an embedded system with little memory consumption and storage space, so it is feasible for mobile phones, and the proposed solution can be used to construct a mobile security monitoring system with low-cost hardware and equipment. Based on J2ME (Java 2 Micro Edition) technology, a prototype system was developed using JSR 135 (Java Specification Request 135: Mobile Media API) and JSR 120 (Java Specification Request 120: Wireless Messaging API), and the test results show the effectiveness of the proposed solution.

1. Introduction

The increasing need for intelligent video surveillance in public, commercial and family applications makes automated video surveillance systems one of the main current application domains in computer vision. Intelligent video surveillance systems deal with the real-time monitoring of persistent and transient objects within a specific environment. Intelligent surveillance systems have evolved to the third generation, known as automated wide-area video surveillance systems. Combining computer vision technology, the distributed system is autonomous and can also be controlled by remote terminals. A low-cost intelligent wireless security and monitoring solution using motion detection technology is presented in this paper. The system has good mobility, which makes it a useful supplement to traditional monitoring systems. Limited by the memory and computing capacity of a mobile phone, a cross-correlation technique suitable for mobile phones is presented. In order to adapt to the slow and slight changes of the natural environment, a self-adaptive background model that is updated automatically and in a timely manner is detailed.

2. Motion Detection Method

The mobile phone's camera is switched on and images are captured every 0.25 seconds. A background template is constructed first and is updated periodically to account for gradual changes such as weather and lighting. Each acquired image is compared with the template using the cross-correlation method, and if any suspicious movement is found, the user is notified by an MMS containing the motion-detected image.

Figure 1: Motion Detection Method
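As a rough outline of this pipeline (the class and method names below are ours, not the paper's; the individual steps are detailed in Sections 2.1-2.3 and Section 3):

// Illustrative outline of the surveillance loop: capture a frame every 0.25 s,
// compare it with the background template, send an MMS alert on motion, then
// update the template. Device-specific pieces are left abstract.
import java.util.Timer;
import java.util.TimerTask;

public abstract class SurveillanceLoop extends TimerTask {

    private int[] template;   // background template gray values (Section 2.1)

    // Device-specific pieces, implemented with MMAPI/WMA as in Section 3.
    protected abstract int[] captureGrayFrame();          // grayscale pixels of a new frame
    protected abstract byte[] lastSnapshotPng();          // encoded image for the MMS
    protected abstract void sendMmsAlert(byte[] image);   // notify the owner

    // Motion detection pieces described in Sections 2.2 and 2.3.
    protected abstract double minQuadrantCorrelation(int[] frame, int[] template);
    protected abstract void updateTemplate(int[] template, int[] frame);

    public void start() {
        new Timer().schedule(this, 0, 250);   // one frame every 0.25 seconds
    }

    public void run() {
        int[] frame = captureGrayFrame();
        if (template == null) {
            template = frame;                 // simplified: the prototype averages 10 frames
            return;
        }
        if (minQuadrantCorrelation(frame, template) < 0.9) {   // example alert threshold
            sendMmsAlert(lastSnapshotPng());
        }
        updateTemplate(template, frame);
    }
}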

2.1 Background Template Construction

Before moving objects can be identified, a background template must be built. Since the foreground cannot be removed, the ideal background image cannot be retrieved directly. However, the moving objects do not stay in the same location in each image of a real-time video sequence, so the gray values of the pixels at the same location in each frame of the sequence are averaged to represent the gray value of the pixel at that location in the approximate background. To simplify, the approximate background is also called the "background template". In our prototype, the first 10 captured images are used to calculate the background template.
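As a rough illustration of this averaging step, the following sketch (our own code, not from the paper) assumes each captured frame has already been decoded into a grayscale pixel array of identical size:

// Sketch of background template construction: average the gray value at each
// pixel position over the first frames. The frames array is assumed to hold at
// least INIT_FRAMES grayscale frames of identical length.
public final class BackgroundTemplate {

    private static final int INIT_FRAMES = 10;   // first 10 images, as in the prototype

    public static int[] build(int[][] frames) {
        int length = frames[0].length;
        int[] template = new int[length];
        for (int p = 0; p < length; p++) {
            int sum = 0;
            for (int f = 0; f < INIT_FRAMES; f++) {
                sum += frames[f][p];
            }
            template[p] = sum / INIT_FRAMES;     // averaged gray value for this position
        }
        return template;
    }
}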

2.2 Correlation Technique

In this technique, each captured image is subdivided into four equal parts. A two-dimensional cross-correlation between each sub-image and its corresponding part in the background template is calculated using the formula given below:

r = \frac{\sum_{m}\sum_{n}(A_{mn}-\bar{A})(B_{mn}-\bar{B})}{\sqrt{\left(\sum_{m}\sum_{n}(A_{mn}-\bar{A})^{2}\right)\left(\sum_{m}\sum_{n}(B_{mn}-\bar{B})^{2}\right)}}

where

r = correlation value

A = matrix representing the pixel values of the first image

B = matrix representing the pixel values of the second image, such that size(A) = size(B)

\bar{A} = average of A

\bar{B} = average of B

This process produces four values ranging from -1 to 1, depending on the difference between the two correlated images. The value is 1 if there is no motion, and it decreases as the level of motion increases. The goal of this division is to achieve more sensitivity: the minimum of the four correlation values indicates the quadrant with maximum motion. If the value lies between 0.9 and 0.8, the motion is classified as low level; if it is nearer to 0.5, it is medium level; otherwise it is high level.

For the purpose of correlating, the first two images are considered. These images are resized to matrices of equal order to make quadrant division easier. Comparing pixel values in RGB form is difficult and complex, as each pixel consists of three values (red, green and blue intensity at that point), so the images are converted to grayscale, where pixel comparison is simpler. These two images are then divided into four equal quadrants. The first quadrant of the first image is compared with the corresponding quadrant of the second image, and the pixel values of these two quadrants are used to obtain the correlation value. Similarly, the other three quadrants are compared and the formula is applied to get the remaining three values. These four values are checked against the no-motion correlation value (r = 1); if a value is less than 1, the type of motion is determined from the range it falls in. Among the four values obtained, the minimum value represents maximum motion. This entire process is repeated for all remaining consecutive images, and the value of r is calculated in each case.
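The following sketch (our own code, not from the paper) shows this computation for one pair of corresponding quadrants, again assuming grayscale pixel arrays; the classification boundaries are one reading of the ranges given above:

// Sketch: 2-D correlation coefficient r between two equally sized gray-level
// blocks, plus the simple motion classification described in the text.
public final class MotionDetector {

    public static double correlation(int[] a, int[] b) {
        double meanA = 0, meanB = 0;
        for (int i = 0; i < a.length; i++) {
            meanA += a[i];
            meanB += b[i];
        }
        meanA /= a.length;
        meanB /= b.length;

        double num = 0, denA = 0, denB = 0;
        for (int i = 0; i < a.length; i++) {
            double da = a[i] - meanA;
            double db = b[i] - meanB;
            num  += da * db;
            denA += da * da;
            denB += db * db;
        }
        return num / Math.sqrt(denA * denB);
    }

    // Classify motion from the minimum quadrant correlation; the exact cut-offs
    // are our interpretation of the ranges stated in the paper.
    public static String classify(double rMin) {
        if (rMin >= 0.9) return "no significant motion";
        if (rMin >= 0.8) return "low level motion";
        if (rMin >= 0.5) return "medium level motion";
        return "high level motion";
    }
}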


Figure 2: Basic Steps of Correlation Network

2.3 Background Template Update

Because sunlight changes very slowly, the background template must be updated in a timely manner; otherwise the foreground can no longer be correctly identified. One is added to a pixel value in the background template if the corresponding pixel value in the current frame is greater than that in the template, and one is subtracted if it is less. This algorithm is more efficient than the moving average algorithm because it uses only addition and subtraction operations and does not need much memory.



Written in terms of pixels, the update rule is:

Pixel_background_k = Pixel_background_k + 1, if Pixel_k^j > Pixel_background_k

Pixel_background_k = Pixel_background_k - 1, if Pixel_k^j < Pixel_background_k

where Pixel_k^j is pixel k in frame j and Pixel_background_k is the corresponding pixel in the background template; the two pixels have the same location in their frames. With this method, the background template can adjust automatically to environmental changes.
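A minimal sketch of this update, under the same grayscale-array assumption as the earlier sketches:

// Nudge each template pixel one gray level toward the current frame, as described above.
public final class BackgroundUpdater {

    public static void update(int[] template, int[] frame) {
        for (int p = 0; p < template.length; p++) {
            if (frame[p] > template[p]) {
                template[p]++;        // scene slowly became brighter at this pixel
            } else if (frame[p] < template[p]) {
                template[p]--;        // scene slowly became darker at this pixel
            }
        }
    }
}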

Figure 3: Background Template / Captured Image

3. J2ME Technology

In this paper, we have implemented a prototype on mobile phones based on J2ME technology. Java Platform, Micro Edition (Java ME) is the most ubiquitous application platform for mobile devices across the globe. It provides a robust, flexible environment for applications running on a broad range of embedded devices, such as mobile phones, PDAs, TV set-top boxes and printers. Applications based on Java ME software are portable across a wide range of devices.

3.1 Mobile Media API (JSR-135)

The Mobile Media API (MMAPI) is an API specification for Java ME platform CDC and CLDC devices such as mobile phones. Depending on how it is implemented, the API allows applications to play and record sound and video, and to capture still images. MMAPI was developed under the Java Community Process as JSR 135. The API is based around four main types of classes in the javax.microedition.media package: the Manager, the Player, the PlayerListener and various types of Control.

3.1.1 Getting a Video Capture Player


The first step in taking pictures (officially called video capture) in a MIDlet is obtaining a Player from the Manager:

Player mPlayer = Manager.createPlayer("capture://video");

The Player needs to be realized in order to obtain the resources that are needed to take pictures:

mPlayer.realize();

3.1.2 Showing the Camera Video

The video coming from the camera can be displayed on the screen either as an Item in a Form or as part of a Canvas. A VideoControl makes this possible. To get a VideoControl, just ask the Player for it:

VideoControl mVideoControl = (VideoControl)mPlayer.getControl("VideoControl");

3.1.3 Capturing an Image

Once the camera video is shown on the device, capturing an image is easy. All you need to do is call VideoControl's getSnapshot() method. The getSnapshot() method returns an array of bytes, which is the image data in the format you requested. The default image format is PNG (Portable Network Graphic).

byte[] raw = mVideoControl.getSnapshot(null);
Image image = Image.createImage(raw, 0, raw.length);
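Putting the three steps of Section 3.1 together, a capture sequence might look like the following sketch (display and error handling are simplified; the class and method names are ours):

// Minimal MMAPI capture sequence: obtain and realize a Player, attach the
// viewfinder to a Form, start the camera and grab one PNG-encoded frame.
import java.io.IOException;

import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.Image;
import javax.microedition.lcdui.Item;
import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;

public final class CameraCapture {

    public static Image takeSnapshot(Form form) throws IOException, MediaException {
        // 1. Obtain and realize a video capture Player.
        Player player = Manager.createPlayer("capture://video");
        player.realize();

        // 2. Obtain a VideoControl and place the viewfinder in the Form.
        VideoControl vc = (VideoControl) player.getControl("VideoControl");
        Item viewfinder = (Item) vc.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);
        form.append(viewfinder);
        vc.setVisible(true);

        // 3. Start the camera and take a snapshot (PNG by default).
        player.start();
        byte[] raw = vc.getSnapshot(null);

        // 4. Convert the encoded bytes into an LCDUI Image.
        return Image.createImage(raw, 0, raw.length);
    }
}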

3.2 Wireless Message API (JSR-120)

The J2ME Wireless Toolkit supports the Wireless Messaging API (WMA) with a sophisticated simulation environment. WMA 1.1 (JSR 120) enables MIDlets to send and receive Short Message Service (SMS) or Cell Broadcast Service (CBS) messages. WMA 2.0 (JSR 205) includes support for MMS messages as well.

3.2.1 Creating a Message Connection

To create a client MessageConnection, just call Connector.open(), passing a URL that specifies a valid WMA messaging protocol:

MessageConnection mc = (MessageConnection) Connector.open(addr);

3.2.2 Creating and Sending a Text Message

Since the connection is a client connection, the destination address will already be set by the implementation (the address is taken from the URL that was passed when the client connection was created). Before sending the text message, the method populates the outgoing message by calling setPayloadText().

TextMessage tmsg = (TextMessage) mc.newMessage(MessageConnection.TEXT_MESSAGE);

tmsg.setPayloadText(msg);

mc.send(tmsg);
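Since the prototype notifies the user with the captured image rather than plain text, the image would be sent as an MMS using WMA 2.0 (JSR 205). The following sketch is our own illustration of that step; the destination URL, application ID and content labels are placeholders:

// Sketch: send the captured PNG bytes as an MMS via WMA 2.0 (JSR 205).
import java.io.IOException;

import javax.microedition.io.Connector;
import javax.wireless.messaging.MessageConnection;
import javax.wireless.messaging.MessagePart;
import javax.wireless.messaging.MultipartMessage;
import javax.wireless.messaging.SizeExceededException;

public final class MmsAlert {

    public static void sendImage(byte[] png, String dest)
            throws IOException, SizeExceededException {
        // dest is an MMS URL such as "mms://+919999999999:com.example.surveillance" (placeholder)
        MessageConnection mc = (MessageConnection) Connector.open(dest);
        try {
            MultipartMessage mm =
                (MultipartMessage) mc.newMessage(MessageConnection.MULTIPART_MESSAGE);
            mm.setSubject("Motion detected");

            // Attach the snapshot as a message part; content id/location are arbitrary labels.
            mm.addMessagePart(new MessagePart(png, "image/png", "snapshot", "snapshot.png", null));

            mc.send(mm);
        } finally {
            mc.close();
        }
    }
}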

4. Prototype

In the prototype system, if the difference between the real-time image and the template reaches a predefined correlation threshold, moving objects are considered to have appeared, and the handset sends out an alert MMS. Since the device has good mobility, it can be placed anywhere, including areas not covered by other surveillance systems, and it can be deployed rapidly in an emergency.

Figure 4: System Architecture

5. System Requirements

A Java-enabled mobile phone with at least a 1.3 MP camera.

A mobile phone with video capturing facility.

Any embedded platform equipped with a camera and supporting JSR 135 (MMAPI) and JSR 120 (WMA) can run this system.

An MMS-enabled SIM card.

6. Expected Results

The proposed system can be implemented on any Java-enabled phone using the Sun Java Wireless Toolkit 2.5 and Java ME (Micro Edition) SDK 3.0. The Java application created in this environment is installed on the mobile phone used for surveillance. The owner carries another mobile phone on an MMS-enabled network. When a suspicious object is detected, the corresponding image is sent to the owner's phone through MMS; the owner's mobile number is stored in the surveillance phone.

7. Conclusion

The motion detection method using cross-correlation led to the development of an autonomous system that also minimizes network traffic. With good mobility, the system can be deployed rapidly in an emergency and can be a useful supplement to traditional monitoring systems. With the help of J2ME technology, the differences between various hardware platforms are minimized: any embedded platform equipped with a camera and supporting JSR 135/JSR 120 can run this system without any changes to the application. The system can also be extended to a distributed wireless network in which many terminals work together, reporting to a control center and receiving commands from it. Thus, a low-cost wide-area intelligent video surveillance system can be built. Furthermore, with the development of embedded hardware, more complex digital image processing algorithms can be used to provide more kinds of applications in the future.

8. Acknowledgements

We would like to thank the authors Renzo Perfetti and Daniele Casali, and our tutors Smitha H and Karthi R, for their many helpful suggestions.

9. References

1. Tao Xia, Chaoqiang Liu, Hui Li, "An Efficient Moving Object Detection and Description," Centre for Wavelets, Approximation and Information Processing, and Temasek Laboratories, National University of Singapore.

2. Lizhong Xu, Zhong Wang, Huibin Wang, Aiye Shi, Chenming Li, "A J2ME-Based Wireless Intelligent Video Surveillance System Using Moving Object Recognition Technology," College of Computer and Information Engineering, Hohai University, Nanjing, P. R. China 210098.

3. P. Latha, L. Ganesan, N. Ramaraj, P. V. Hari Venkatesh, "Detection of Moving Images Using Neural Network."

4. Renzo Perfetti, Daniele Casali, Giovanni Costantini, "Moving Object Detection."

5. O. Javed and M. Shah, "Tracking and Object Classification for Automated Surveillance," Proc. European Conf. on Computer Vision (ECCV), 2002.

6. M. Valera and S. A. Velastin, "Intelligent Distributed Surveillance Systems: A Review," IEE Proceedings - Vision, Image and Signal Processing, April 2005, vol. 152, no. 2, pp. 192-204.

7. M. Piccardi, "Background Subtraction Techniques: A Review," IEEE International Conference on Systems, Man and Cybernetics, Oct. 2004, vol. 4, pp. 3099-3104.
