
IMPROVING FACE RECOGNITION PERFORMANCE USING RBPCA MAXLIKE AND INFORMATION FUSION

Abstract
We have implemented an efficient system to recognize faces from images in the presence of near real-time variations. Our approach was essentially to implement and verify the Eigenfaces for Recognition algorithm with maximum-likelihood matching, which solves the recognition problem for 2-D face images using principal component analysis.

Existing System
A face is a complex, multidimensional visual pattern, and developing a computational model for face recognition is difficult. One existing approach is a face representation using shapes derived from the masked Trace transform, hereafter simply called the shape Trace transform (STT). The Trace transform is a very rich representation of an image, and in order to use it directly for recognition, one has to produce a much simplified version of it.
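To make the idea concrete, a very rough sketch of a Trace-transform-like computation is given below. The sampling scheme and the choice of the sum as the trace functional are illustrative assumptions on our part, not the formulation used in the STT work.

// Hypothetical sketch: for each line orientation theta and signed distance rho from the
// image centre, sample the grayscale image along that line and apply a simple functional
// (here the sum of samples). Method name and parameters are assumptions.
public static double[][] traceTransform(double[][] img, int nTheta, int nRho) {
    int h = img.length, w = img[0].length;
    double cx = w / 2.0, cy = h / 2.0;
    double maxRho = Math.hypot(cx, cy);
    double[][] t = new double[nTheta][nRho];
    for (int ti = 0; ti < nTheta; ti++) {
        double theta = Math.PI * ti / nTheta;
        double cos = Math.cos(theta), sin = Math.sin(theta);
        for (int ri = 0; ri < nRho; ri++) {
            double rho = -maxRho + 2 * maxRho * ri / (nRho - 1);
            double sum = 0.0;
            // walk along the line at distance rho from the centre, perpendicular to (cos, sin)
            for (double s = -maxRho; s <= maxRho; s += 1.0) {
                int x = (int) Math.round(cx + rho * cos - s * sin);
                int y = (int) Math.round(cy + rho * sin + s * cos);
                if (x >= 0 && x < w && y >= 0 && y < h) {
                    sum += img[y][x];
                }
            }
            t[ti][ri] = sum; // trace functional: line integral approximated by a sum
        }
    }
    return t;
}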

In our approach, a shape is represented by a discrete set of points obtained from an edge detector; we refer to this set as the set of edge pixels.

In other words, at each edge pixel we compute a coarse histogram of the relative coordinates of all the other edge points in the image with respect to that pixel.
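As an illustration of this histogram, here is a minimal sketch; the log-polar binning and the edge-point array layout are assumptions on our part, not the exact descriptor of the existing system.

// Hypothetical sketch: a coarse log-polar histogram of the relative coordinates of all
// other edge points with respect to one reference edge pixel. Bin counts are illustrative.
public static int[] relativeHistogram(int[][] edgePoints, int refIndex,
                                      int nRadialBins, int nAngleBins) {
    int[] hist = new int[nRadialBins * nAngleBins];
    int rx = edgePoints[refIndex][0];
    int ry = edgePoints[refIndex][1];
    for (int i = 0; i < edgePoints.length; i++) {
        if (i == refIndex) continue;
        double dx = edgePoints[i][0] - rx;
        double dy = edgePoints[i][1] - ry;
        double r = Math.log1p(Math.hypot(dx, dy));   // coarse log-radial coordinate
        double a = Math.atan2(dy, dx) + Math.PI;     // angle mapped to [0, 2*pi]
        int rBin = Math.min((int) r, nRadialBins - 1);
        int aBin = Math.min((int) (a / (2 * Math.PI) * nAngleBins), nAngleBins - 1);
        hist[rBin * nAngleBins + aBin]++;            // one vote per other edge point
    }
    return hist;
}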

Proposed System
The proposed methodology is a combination of two stages: feature extraction using Regularization Block-Based Principal Component Analysis (RBPCA), and recognition using a feed-forward back-propagation neural network. The proposed technique encodes and decodes face images, emphasizing the significant local and global features. Preprocessing consists of image size normalization, histogram equalization, and format conversion of the images.
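A minimal preprocessing sketch, assuming standard Java 2D and an 8-bit grayscale target (the fixed target size and method names are our assumptions):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class Preprocess {
    public static BufferedImage normalize(BufferedImage src, int width, int height) {
        // 1. Size normalization and grayscale conversion in one step
        BufferedImage gray = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = gray.createGraphics();
        g.drawImage(src, 0, 0, width, height, null);
        g.dispose();

        // 2. Histogram equalization on the 8-bit gray levels
        Raster raster = gray.getRaster();
        int[] hist = new int[256];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                hist[raster.getSample(x, y, 0)]++;

        int total = width * height;
        int[] lut = new int[256];
        int cum = 0;
        for (int v = 0; v < 256; v++) {
            cum += hist[v];
            lut[v] = Math.round(255f * cum / total); // cumulative distribution mapped to [0, 255]
        }

        WritableRaster out = gray.getRaster();
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                out.setSample(x, y, 0, lut[raster.getSample(x, y, 0)]);
        return gray;
    }
}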

One neural network is used for each person in the database, with face descriptors used as inputs, covering both positive and negative examples. A new test image (from the test dataset) is taken for recognition, and its face descriptor is calculated from the eigenfaces found before.
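A hedged sketch of the descriptor computation, assuming the test image, the average face, and the eigenfaces are stored as flattened pixel vectors in the same layout as the code later in this document (the method name is an assumption):

// Project the mean-subtracted test image onto each eigenface; the resulting weight
// vector is the face descriptor fed to that person's neural network.
public static double[] faceDescriptor(double[] testImage, double[] avgFace,
                                      double[][] eigenFaces) {
    int length = testImage.length;     // N^2 pixels
    int k = eigenFaces.length;         // number of eigenfaces kept
    double[] descriptor = new double[k];
    for (int e = 0; e < k; e++) {
        double w = 0.0;
        for (int pix = 0; pix < length; pix++) {
            // weight = dot product of (test image - average face) with eigenface e
            w += (testImage[pix] - avgFace[pix]) * eigenFaces[e][pix];
        }
        descriptor[e] = w;
    }
    return descriptor;
}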

MODULES:

User Login
Input Face Image
Image Storage (RBPCA MaxLike)
Face Feature Extraction (Information Fusion)
Feature Matching
Comparison/Decision

USER LOGIN:

The User Login module provides authentication of the user. It checks whether the user is authorized to access the resources by comparing the username and password entered by the user with the information stored in the database.
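A minimal sketch of such a check, assuming a JDBC connection and a users table with username and password columns (these names are assumptions, not taken from the project's database schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public static boolean authenticate(Connection con, String user, String pass) throws Exception {
    // Look for a row whose username and password match the values entered by the user.
    String sql = "SELECT 1 FROM users WHERE username = ? AND password = ?";
    try (PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setString(1, user);
        ps.setString(2, pass);
        try (ResultSet rs = ps.executeQuery()) {
            return rs.next();   // a matching row means the credentials are valid
        }
    }
}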

import java.awt.Font;
import java.io.File;
import javax.swing.UIManager;

public UserAuthentication() {
    super();
    try {
        // Use the native Windows look and feel for the login window
        UIManager.setLookAndFeel("com.sun.java.swing.plaf.windows.WindowsLookAndFeel");
    } catch (Exception e) {
        e.printStackTrace();
    }
}

// This method initializes the login frame
private void initialize() {
    this.setSize(new java.awt.Dimension(800, 600));
    this.setContentPane(getJPanel());
    this.setTitle("Face Recognition");
    this.setVisible(true);
}

USER NAME:

private JTextField getUsername() {
    if (username == null) {
        username = new JTextField();
        username.setBounds(new java.awt.Rectangle(165, 60, 118, 25));
    }
    return username;
}

PASSWORD:

private JPasswordField getPassword() {
    if (password == null) {
        password = new JPasswordField();
        password.setBounds(new java.awt.Rectangle(166, 105, 113, 27));
    }
    return password;
}

IMAGE STORAGE (RBPCA MAXLIKE)

package EigenFaces;

import Jama.*;

public class EigenFaceComputation {

    private final static int MAGIC_NR = 11;

    public static FaceBundle submit(double[][] face_v, int width, int height, String[] id, boolean debug) {
        int length = width * height;
        int nrfaces = face_v.length;
        int i, j, col, rows, pix, image;
        double temp = 0.0;
        double[][] faces = new double[nrfaces][length];

        ImageFileViewer simple = new ImageFileViewer();
        simple.setImage(face_v[0], width, height);

        /* Compute the average face over all faces. 1xN^2 */
        double[] avgF = new double[length];
        for (pix = 0; pix < length; pix++) {
            temp = 0;
            for (image = 0; image < nrfaces; image++) {
                temp += face_v[image][pix];
            }
            avgF[pix] = temp / nrfaces;
        }
        simple.setImage(avgF, width, height);

        /* Compute the difference: subtract the average face from each face. */
        for (image = 0; image < nrfaces; image++) {
            for (pix = 0; pix < length; pix++) {
                face_v[image][pix] = face_v[image][pix] - avgF[pix];
            }
        }

        /* Copy our face vectors (MxN^2). We will use them later. */
        for (image = 0; image < nrfaces; image++) {
            System.arraycopy(face_v[image], 0, faces[image], 0, length);
        }

        /* Build the covariance matrix. It is MxM (nrfaces x nrfaces). */
        Matrix faceM = new Matrix(face_v, nrfaces, length);
        Matrix faceM_transpose = faceM.transpose();
        Matrix covarM = faceM.times(faceM_transpose);
        double[][] z = covarM.getArray();

        /* Compute eigenvalues and eigenvectors. Both are MxM. */
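The listing is cut off at the eigen-decomposition step. As a hedged continuation sketch (not the project's original code), the eigenvectors of the MxM covariance matrix could be obtained with Jama and mapped back to image-space eigenfaces using the standard Turk-Pentland trick:

        /* Continuation sketch (assumed, not from the original source). */
        EigenvalueDecomposition eig = covarM.eig();
        Matrix eigVectors = eig.getV();                 // columns are eigenvectors of the MxM covariance matrix
        double[] eigValues = eig.getRealEigenvalues();  // corresponding eigenvalues

        /* Map the MxM eigenvectors back to image space: each eigenface is a linear
           combination of the mean-subtracted training faces, U = V^T * A (M x N^2). */
        Matrix eigenFacesM = eigVectors.transpose().times(faceM);
        double[][] eigenFaces = eigenFacesM.getArray();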
