FINGERPRINT AND IRIS FUSION BASED RECOGNITION USING RBF NEURAL NETWORK

GAWANDE U.1*, ZAVERI M.2, KAPUR A.3
1Department of Computer Technology, Yeshwantrao Chavan College of Engineering, Nagpur- 441 110, MS, India.
2Department of Computer Engineering, Sardar Vallabhbhai National Institute of Technology, Surat- 395 007, Gujarat, India.
3Nagar Yuwak Shikshan Sanstha, Nagpur- 440 022, MS, India.
* Corresponding Author : ujwallgawande@yahoo.co.in

Received : 30-03-2013     Accepted : 25-04-2013     Published : 02-05-2013
Volume : 4     Issue : 1       Pages : 142 - 148
J Signal Image Process 4.1 (2013):142-148

Cite - MLA : GAWANDE U., et al "FINGERPRINT AND IRIS FUSION BASED RECOGNITION USING RBF NEURAL NETWORK." Journal of Signal and Image Processing 4.1 (2013):142-148.

Cite - APA : GAWANDE U., ZAVERI M., KAPUR A. (2013). FINGERPRINT AND IRIS FUSION BASED RECOGNITION USING RBF NEURAL NETWORK. Journal of Signal and Image Processing, 4 (1), 142-148.

Cite - Chicago : GAWANDE U., ZAVERI M., and KAPUR A. "FINGERPRINT AND IRIS FUSION BASED RECOGNITION USING RBF NEURAL NETWORK." Journal of Signal and Image Processing 4, no. 1 (2013):142-148.

Copyright : © 2013, GAWANDE U., et al, Published by Bioinfo Publications. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

Abstract

Multimodal biometric systems are widely used in security applications, chiefly for authentication and identification purposes, for example, authorized access control, terrorist identification, and unique identity for human beings. Multimodal biometric systems overcome various limitations of unimodal biometric systems, such as non-universality, and offer lower false acceptance and higher genuine acceptance rates. In this paper, we propose a feature-level fusion framework for combining Iris and Fingerprint features, as these traits contain the most prominent features. We derive a single multimodal template by fusing the unimodal templates using the Mahalanobis distance measure. For recognition, the proposed algorithm uses a Radial Basis Function Neural Network (RBFNN) trained on the generated fused multimodal template. The proposed algorithm is evaluated using a standard database and a real database. The simulation results demonstrate that the proposed multimodal biometric system provides a much better recognition rate than the unimodal biometric systems.

Keywords

Multimodal, Feature, Fusion, Classifier, Block sum, Mahalanobis, Radial Basis Function Neural Network.

Introduction

Biometric technologies have established their importance in various security, access control and monitoring applications. Many new techniques have been attempted to improve system performance [1-5], and research into performance enhancement continues. Password-based security systems are prone to theft and cracking [2,3]. For this reason, biometrics has turned out to be a viable alternative to traditional identification methods in several application areas [1]. Biometrics is the science of identifying an individual on the basis of physiological or behavioral traits, and it has gained acceptance as a legitimate method for establishing an individual's identity [6]. Biometric systems possess numerous advantages: a biometric trait is difficult to share or reproduce; the system is user friendly, since there is no need to remember long, random passwords; a trait cannot be obtained by direct observation; and it safeguards against repudiation by the user. A number of biometric characteristics are employed today, including Fingerprint, DNA, Iris pattern, retina, thermogram, face, gait, hand geometry, palm-vein pattern, smell, keystroke dynamics, signature, and voice [7-10].
Biometric systems that utilize a single trait for recognition are affected by practical issues such as noise in the sensor data, non-universality and/or lack of distinctiveness of the biometric trait [11]. Multimodal biometric systems overcome some of these issues by consolidating the evidence acquired from several sources [12,13]. Combining traits is believed to enhance system performance by providing more valuable information to the classifier. Biometric traits are acquired from multiple sources to recognize a person; the different characteristics can be examined by a single system, or by separate systems that function independently and whose decisions are merged together [14]. Multibiometric systems are being widely adopted in many large-scale identification applications, and a number of software and hardware multibiometric products have been introduced in the market [15,16].
Reliability and accuracy are improved remarkably by multibiometric systems, but little attention has been paid to time complexity. This motivates the proposed approach of training the multiple biometric templates with an RBFNN classifier, chosen because it has a simple structure and fast convergence and can be easily trained.
This paper focuses on the feature-level fusion approach for creating the multibiometric template for two reasons: (i) it provides synergetic (complementary) information for identification, and (ii) it retains more of the raw information than decision-level fusion [9]. This work investigates the feasibility of creating a single multibiometric template using a similarity measure; the Mahalanobis distance is used for this purpose in our proposed method. Generating a single multibiometric template is significant because the traits being fused have different feature representations. In this paper, we describe an approach for multimodal recognition that proceeds in the following steps: 1) extraction of Iris features by the block sum method and of Fingerprint features by the Haar wavelet method; 2) fusion of the unimodal templates into a single representation; 3) training of the RBFNN with the fused feature vector and subsequent classification; and 4) training of the RBFNN with the two unimodal feature sets of Iris and Fingerprint for comparison with the fused feature vector. We analyze the performance of the proposed system in terms of Genuine Acceptance Rate (GAR), False Acceptance Rate (FAR), and training and testing time.
The rest of the paper is organized as follows. Section 2 provides background and a comparison of various multimodal biometric schemes proposed in the literature. The fusion framework for the multibiometric system and the associated algorithms are introduced in Section 3. Section 4 presents the implementation details and performance evaluation of the proposed multibiometric system. In Section 5 the results are discussed with a comparative evaluation. Section 6 concludes the paper.

Literature Review

In recent times, multimodal biometric fusion techniques have attracted considerable attention, as the synergic information between different modalities can enhance recognition performance. A voluminous literature deals with techniques for making features more informative [17-20]. Many researchers have demonstrated that fusion is effective, because the fused information provides much better discrimination than the individual sources. In most cases, the fusion process can be broadly categorized into three groups: fusion at the feature level, fusion at the match level and fusion at the decision level [13,20,23,31]. Fusion at the feature level combines the feature sets of multiple modalities. Because the feature set holds richer information about the raw biometric data than the match score or the final decision, integration at the feature level is expected to offer good recognition results. However, feature-level fusion is hard to accomplish because the modalities and their representations differ. The information obtained from different modalities may be so heterogeneous that fusion becomes difficult and yields little benefit, and differences in feature space and scaling make it harder still to homogenize features from different biometrics [21]. The fused feature vector may also have a higher dimension than the unimodal features; in the worst case, a fused vector built from two modalities ends up with twice the dimension of the unimodal features.
A brief review of recent research follows. Unimodal Iris, unimodal palmprint, and multibiometric (Iris and palmprint) systems were presented in [22]. The fusion operates on matching scores: each system provides a matching score indicating the similarity of the feature vector to the template vector. A multimodal biometric system using Fingerprint and Iris features was proposed in [23]. The Fingerprint features were extracted by the minutiae method and the Iris features by template encoding, each matched using Hamming distance; the two decisions were ANDed to produce the final decision. Ratha, et al. [24] proposed a unimodal distortion-tolerant Fingerprint authentication technique based on a graph representation: using the Fingerprint minutiae features, a weighted graph of minutiae was constructed for both the query Fingerprint and the reference Fingerprint.
A frequency-based approach producing a homogeneous biometric vector that integrates Iris and Fingerprint data was worked out in [25]; a Hamming-distance-based matching algorithm was then applied to the unified homogeneous vector. Nagar, et al. [26] proposed a feature-level fusion framework to simultaneously protect multiple templates of a user as a single secure sketch, implemented with two well-known biometric cryptosystems, namely fuzzy vault and fuzzy commitment. The approach suggested in [27] recommended multichannel Gabor filters to capture local texture information of the Iris, from which a fixed-length feature vector was generated. Aguilar, et al. [28] worked on multibiometrics using a combination of the fast Fourier transform (FFT) and Gabor filters to enhance Fingerprint imaging, followed by a novel recognition stage using local features and statistical parameters. They used the Fingerprints of both thumbs; each Fingerprint was processed separately and the unimodal results were combined to obtain the final fused result.
Yang and Ma [3] used Fingerprint, palm print, and hand geometry for personal identity verification. Unlike other multimodal biometric systems, these three biometric features can be taken from the same image of the hand. They established identity by matching-score fusion, first fusing the Fingerprint and palm-print features and then fusing the score of this multimodal system with that of the unimodal hand geometry. The approach suggested by Liu, et al. [29] showed improved data fusion for face, Fingerprint and Iris images; their work was based on the eigenface and Gabor wavelet methods, and the new fusion system they recommended exhibited improved performance. Basha, et al. [30] implemented fusion of Iris and Fingerprint using adaptive rank-level fusion directly at the verification stage. Teddy Ko [9] worked on fusion of Fingerprint, face and Iris; various possibilities of multimodal biometric fusion, the evaluation of image quality, and their influence on identification accuracy were discussed in [9]. Baig, et al. [31] worked on a state-of-the-art framework for multimodal biometric identification that is adaptable to any kind of biometric system; small memory requirements and fast processing were benefits of the system proposed in [31]. Their framework fused Iris and Fingerprint, with classification based on a single Hamming distance.
Jagadeesan, et al. [32] prepared a secured cryptographic key on the basis of Iris and Fingerprint features: minutiae points were extracted from the Fingerprint and texture properties from the Iris, feature-level fusion was then employed, and a 256-bit cryptographic key was the outcome. Improved authentication and security through the 256-bit encryption was claimed as part of their results [32]. An authentication method presented by Nageshkumar, et al. [10] focuses on a multimodal biometric system with two features, face and palmprint; the integrated feature vector made person authentication more robust, with the final assessment made by fusion at the matching-score level, the unimodal scores being fused after matching. Maurer and Baker [33] presented a fusion architecture based on Bayesian belief networks for Fingerprint and voice, with the features modeled using statistical distributions.
The present work is based on fusion of features from Fingerprint and Iris. The fusion process implemented here is based on the Mahalanobis distance and has the advantage of reducing the fused feature vector size, which addresses the main issue (high dimensionality) of feature-level fusion. Extraction of features from the two modalities, their fusion by a distinct process, and classification using the RBFNN form the core of this work. The novelty of the work lies in creating a single template from two biometric modalities and in using the RBFNN for recognition.

Proposed Multimodal Biometric System

The proposed approach implements an innovative idea: fusing the features of two different modalities, Fingerprint and Iris. We extract Fingerprint features by a Haar wavelet based method; the Haar wavelet is preferred for processing the input images in real time because it has few filter coefficients. Iris features are extracted by a block sum based method, chosen for its accurate, reduced-size feature vector. It extracts global features of the Iris, and global information can quickly raise performance through the calculation of simple features. The features so obtained are fused on the basis of a similarity measure, for which we use the Mahalanobis distance. This distance measure exploits the correlation between features, and because it incorporates the feature covariance, no prior normalization of the features is necessary, which is a great advantage of this metric. To evaluate multimodal against unimodal performance, three independent training sessions are carried out with the RBFNN: one for the Fingerprint features, a second for the Iris features, and a third using the fused template generated from both the Fingerprint and Iris feature sets.
The feature extraction for Fingerprint and Iris is explained in detail below.

Fingerprint Feature Extraction

Before feature extraction, preprocessing is required to enhance the Fingerprint image. Enhancement starts with normalization of the input Fingerprint image so that it has a pre-specified mean and variance, followed by histogram equalization (HE). The basic idea of HE is to map the gray levels based on the probability distribution of the input gray levels; it flattens and stretches the dynamic range of the image's histogram [23]. This improves the overall contrast of the image, as shown in [Fig-1], and produces an enhanced Fingerprint image suitable for feature extraction.
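The paper does not give an implementation of this step; the following is a minimal numpy sketch of plain histogram equalization as described above (the pre-normalization to a specified mean and variance is omitted, and the function name is ours).

```python
import numpy as np

def histogram_equalize(img):
    """Histogram equalization of an 8-bit grayscale fingerprint image.

    Maps each gray level through the normalized cumulative histogram,
    flattening and stretching the image's dynamic range.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)         # gray-level mapping table
    return lut[img]
```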
The texture details in the Fingerprint image can be analyzed at different resolutions using its multiscale wavelet decomposition. Haar wavelets can capture sharp discontinuities in the spatial gray-level texture. In this approach, the enhanced images are decomposed to four levels by the Haar wavelet. The 4th-level decomposition offered the most discriminative information, and therefore all the coefficients from this decomposition were employed to obtain the 1x60 feature vector.

Haar Wavelet Transform

The wavelet transform is a mathematical tool based on multi-level function decomposition. After applying the wavelet transform, a signal can be described by wavelet coefficients that represent its characteristics. If the image has distinct features at some frequency and direction, the corresponding sub-images carry larger energies in the wavelet transform. For this reason the wavelet transform has been widely used in signal processing, pattern recognition and texture recognition [34]. Applying the wavelet transform transfers the vital information of the original image into a compressed image without much loss of information.
The Haar wavelet transform, one of the most popular wavelets, is applied for feature extraction from the Fingerprint. Its chief benefits are ease of implementation and good behavior on images with nonlinear intensity.
If f(x, y) represents an image signal, its Haar wavelet transform is equivalent to applying two 1D filters (in the x- and y-directions). As shown in [Fig-2], LL represents the low-frequency (approximation) coefficients, HL the high-frequency coefficients in the horizontal direction, LH the high-frequency coefficients in the vertical direction, and HH the diagonal high-frequency coefficients. After the first decomposition, the LL quarter, i.e. the approximation component, is submitted for the next decomposition. The decomposition is carried out four times in this manner, as shown in [Fig-2], halving the array size along both the x and y directions at each level. The original 160x96 image is thus reduced to 10x6 after the fourth decomposition, and a single 1x60 feature vector is extracted from this image by row-wise serialization. This is treated as the extracted Fingerprint feature vector.
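As a concrete illustration, here is a minimal numpy sketch of the four-level decomposition, keeping only the approximation (LL) band at each level, which is what yields the 10x6 image and the 1x60 vector above. The orthonormal Haar filter normalization is our assumption; the paper does not specify this implementation detail.

```python
import numpy as np

def haar_ll(img):
    """One level of the 2D Haar transform; returns only the LL
    (approximation) band, i.e. low-pass filtering along x and y."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 2.0   # orthonormal Haar low-pass

def fingerprint_feature(img):
    """Four-level Haar decomposition of the 160x96 enhanced fingerprint.

    The 4th-level approximation has shape 10x6; row-wise serialization
    gives the 1x60 fingerprint feature vector.
    """
    ll = img.astype(np.float64)
    for _ in range(4):             # 160x96 -> 80x48 -> 40x24 -> 20x12 -> 10x6
        ll = haar_ll(ll)
    return ll.ravel()              # shape (60,)
```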

Iris Feature Extraction

Preprocessing of the Iris image is required before feature extraction and comprises localization, segmentation and normalization. Localization and segmentation are performed by Canny edge detection [35] and the circular Hough transform [35], which detect the Iris boundaries and deduce their radii and centers. This separates the two circles of the Iris, the Iris/sclera boundary and the Iris/pupil boundary, as shown in [Fig-3]. An important feature of the Canny operator is its optimality on noisy images; the exact boundary is obtained with the Canny edge detection algorithm, performed in both the vertical and horizontal directions as suggested by Wildes, et al. [36].
We apply the Hough transform first to the Iris/sclera boundary and then to the Iris/pupil boundary for more accurate segmentation. This step stores the radius and the x, y center parameters of the inner and outer circles. If the maximum of the Hough space is less than a threshold, the eyelids are taken as non-occluding. Eyelashes are isolated simply by thresholding, since they are darker than the other elements of the eye. The extracted Iris region is mapped into a rectangular block by remapping each point within the region to polar coordinates using the rubber sheet model developed by Daugman [35]. A strip of 240x20 pixels is separated out for further processing.
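A minimal OpenCV sketch of this localization and normalization pipeline is given below. cv2.HoughCircles applies a Canny edge detector internally (param1 is its upper threshold); the radius bounds and thresholds here are illustrative assumptions, not values from the paper, and the function names are ours.

```python
import cv2
import numpy as np

def localize_iris(eye_img):
    """Locate the iris/sclera and iris/pupil boundaries with the circular
    Hough transform. Returns (x, y, r) for the outer and inner circles;
    the strongest circle in each radius band is taken."""
    blurred = cv2.GaussianBlur(eye_img, (7, 7), 1.5)
    outer = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                             param1=100, param2=30, minRadius=80, maxRadius=150)
    inner = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                             param1=100, param2=30, minRadius=20, maxRadius=70)
    return outer[0, 0], inner[0, 0]

def rubber_sheet(eye_img, outer, inner, width=240, height=20):
    """Daugman rubber-sheet model: remap the annular iris region to a
    fixed 240x20 rectangular strip in (angle, radius) coordinates."""
    xo, yo, ro = outer
    xi, yi, ri = inner
    strip = np.zeros((height, width), dtype=eye_img.dtype)
    for col in range(width):
        theta = 2 * np.pi * col / width
        for row in range(height):
            frac = row / float(height)
            # interpolate linearly between the pupil and iris boundaries
            x = (1 - frac) * (xi + ri * np.cos(theta)) + frac * (xo + ro * np.cos(theta))
            y = (1 - frac) * (yi + ri * np.sin(theta)) + frac * (yo + ro * np.sin(theta))
            strip[row, col] = eye_img[int(round(y)), int(round(x))]
    return strip
```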

Block Sum Method

Block sum is a new technique of global feature extraction [37]. The method is block based: a number of blocks joined together form the entire normalized image, as shown in [Fig-4]. The normalized 240x20 Iris strip is divided into 60 non-overlapping blocks of size 16x5, yielding a reduced-size feature vector (1x60) for further recognition. A representative value of each block is computed as the sum of the pixel intensities it contains:

B_k = sum over (i,j) in block_k of I(i, j),   k = 1, 2, ..., 60

where I(i, j) is the intensity at pixel (i, j) of the normalized strip.
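A minimal numpy sketch of the block-sum extraction follows; the 16x5 blocks tile the 240x20 strip as 15 columns by 4 rows, and the function name is ours.

```python
import numpy as np

def iris_feature(strip):
    """Block-sum features from the 240x20 normalized iris strip.

    The strip is tiled into 60 non-overlapping 16x5 blocks
    (15 along the width x 4 along the height); the sum of pixel
    intensities in each block is its representative value.
    """
    h, w = strip.shape                       # (20, 240)
    blocks = strip.reshape(h // 5, 5, w // 16, 16)
    sums = blocks.sum(axis=(1, 3))           # shape (4, 15) of block sums
    return sums.ravel().astype(np.float64)   # 1x60 feature vector
```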

Fusion of Iris and Fingerprint Feature Vector

The Fingerprint and Iris features are extracted as explained earlier. These two feature vectors are then combined into a new feature vector by the proposed feature-level fusion technique, an innovative method that distinguishes this work from others reported in the literature. Two different features are extracted from the Fingerprint (F) and Iris (I) images. The extracted features are homogeneous: each vector has 1x60 elements. The fusion process combines the characteristics of these two features pragmatically to generate a feature vector of 1x60 elements. Most feature fusion in the literature is performed serially or in parallel and ultimately yields a high-dimensional vector, which is the major problem of feature-level fusion. The proposed algorithm generates a fused vector of the same size as the unimodal ones, thus eliminating the problem of high dimensionality. The process is as follows:
1. Features F and I of the query images are obtained.
2. The nearest matches for the query feature vectors of Fingerprint and Iris are selected from the 4x100 stored reference feature vectors of Fingerprints and Irises using the Mahalanobis distance.
3. The Mahalanobis distance (Md) between a sample x and a sample y is calculated as

Md(x, y)^2 = (x - y)' S^(-1) (x - y)

where S is the within-group covariance matrix. In this paper we assume a diagonal covariance matrix, which allows the distance to be calculated using only the mean and the variance. The reference vector at minimum distance is considered the most similar vector.
4. Difference vectors for the query vectors of Fingerprint and Iris are generated by taking, element by element, the numerical difference between each query vector and its nearest matching reference vector (most similar vector).
5. A new fused vector of 1x60 is generated by averaging corresponding elements of the two difference vectors.
This fused vector is used to train the RBFNN, as sketched below.
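The following numpy sketch puts the fusion steps together. The per-feature variance vectors for the diagonal covariance are assumed to be estimated from the stored reference vectors of each trait, a detail the paper leaves open, and the function names are ours.

```python
import numpy as np

def diag_mahalanobis(x, refs, var):
    """Squared Mahalanobis distances from query x to every row of refs,
    under a diagonal covariance given by the per-feature variances var."""
    d = refs - x
    return (d * d / var).sum(axis=1)

def fuse(f_query, i_query, f_refs, i_refs):
    """Steps 2-5 above: nearest reference per trait, difference vectors,
    and their element-wise average as the single 1x60 fused template.

    f_refs and i_refs are the 400x60 reference matrices (4 per user,
    100 users) for fingerprint and iris respectively.
    """
    f_var = f_refs.var(axis=0) + 1e-12   # assumed variance estimate
    i_var = i_refs.var(axis=0) + 1e-12
    f_near = f_refs[np.argmin(diag_mahalanobis(f_query, f_refs, f_var))]
    i_near = i_refs[np.argmin(diag_mahalanobis(i_query, i_refs, i_var))]
    return ((f_query - f_near) + (i_query - i_near)) / 2.0
```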

RBF Neural Network

A radial basis function (RBF) is a real-valued function whose value depends only on the distance from the origin (or a chosen center). The RBFNN behaves as a nearest-neighbor classifier and uses Gaussian transfer functions with radial symmetry. With center-weight vector W = [W1, W2, W3, ..., Wn] and input x = [x1, x2, x3, ..., xn], the network implements a real-valued input-output map f(x). The internal representation of the hidden neurons is radially symmetric. The RBF network generally contains three layers, input, hidden and output, as shown in [Fig-5].
This thin architecture has a great advantage in computing speed over nets with multiple hidden layers such as the PNN. MATLAB's newrb function iteratively creates a radial basis network one neuron at a time; neurons are added to the network until the RMS (root mean square) error falls below the stipulated limit (0.00469 in our case). The network adjusts itself so that processing an input yields an output close to the target [38].
The number of neurons in the input layer equals the number of components of the input vector. The units of the hidden layer correspond to the clustering centers of the training sample set; the number of hidden neurons and their positions are adjustable. The number of neurons in the output layer equals the number of classes. In this work, the database contains 500 Fingerprint and 500 Iris images obtained from 100 persons at different times; 400 of these feature sets are used in the training process, and the rest are used in testing. For the fused feature vector of size 1x60, the RBFNN has 60-216-100 neurons in the input, hidden and output layers. If the query vector belongs to the nth class, the nth neuron of the output layer outputs 1 and all other neurons output 0. The weights between the input and hidden layers are fixed at 1, while those between the hidden and output layers are adjustable.
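The authors train with MATLAB's newrb; the sketch below is a simplified numpy stand-in that fixes the hidden centers in advance (e.g. at cluster centers of the training set) and fits the hidden-to-output weights by least squares, rather than adding neurons one at a time until the RMS error target of 0.00469 is reached. The class and its parameters are our own illustrative choices.

```python
import numpy as np

class RBFNN:
    """Gaussian RBF network with structure 60-H-100: hidden units are
    radial basis centers; output weights are fit by least squares to
    one-hot class targets."""

    def __init__(self, centers, sigma):
        self.centers = centers          # (H, 60) hidden-unit centers
        self.sigma = sigma              # Gaussian spread

    def _hidden(self, X):
        # Gaussian activation of each hidden unit for each input row
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, labels, n_classes):
        T = np.eye(n_classes)[labels]   # one-hot targets, (N, n_classes)
        H = self._hidden(X)             # (N, H) hidden activations
        self.W, *_ = np.linalg.lstsq(H, T, rcond=None)

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.W, axis=1)

# Usage sketch: net = RBFNN(centers, sigma=1.0)
#               net.fit(train_X, train_labels, n_classes=100)
#               classes = net.predict(test_X)
```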

Experimental Details

To evaluate the proposed algorithm we performed a large number of experiments, using the CASIA database for Iris and real Fingerprints. Fingerprint and Iris images were paired arbitrarily, and the pairing was maintained throughout the experimentation. The Fingerprint database of the same finger (acquired with a Digital Persona 4500 Fingerprint reader) was obtained from a set of 100 users; similarly, Iris images of 100 users were obtained from the CASIA database. For each user, 4 Iris images and 4 Fingerprint images are stored as reference images, and one Iris image and one Fingerprint image are stored as query images. The assumed mutual independence of the biometric traits allows us to randomly pair users from the two sets. An experiment was also carried out for 50 impostor cases. The biometric data captured from every user is compared with that of all users in the database, leading to one genuine acceptance. For unimodal biometric recognition, 4 features per user were used in training and one image per user for testing the performance of the system; for multimodal recognition, four fused features per user were used for training and one for testing. This separation of the database into training and test sets was used to find the average Genuine Acceptance Rate (GAR), False Acceptance Rate (FAR), and training and testing time. [Table-1] depicts the network parameters for the RBFNN.
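Under this protocol, GAR and FAR reduce to simple acceptance counts over the 100 genuine and 50 impostor queries; the sketch below is our own formulation of these standard definitions, not code from the paper.

```python
def gar_far(genuine_accepted, impostor_accepted):
    """GAR: percentage of genuine queries correctly accepted (100 queries here).
    FAR: percentage of impostor queries wrongly accepted (50 queries here).
    Both arguments are lists of booleans, one entry per query."""
    gar = 100.0 * sum(genuine_accepted) / len(genuine_accepted)
    far = 100.0 * sum(impostor_accepted) / len(impostor_accepted)
    return gar, far
```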
The query images of 100 genuine cases were tested on the system. Among the three feature vectors tried, the fused feature turned out to be the best input to the RBFNN. The results are tabulated in [Table-2]; they show that the highest recognition rate, 92%, is achieved by the feature-level fusion technique. The Fingerprint and Iris systems were also operated as unimodal systems for comparison: the next best result of 88% is obtained for the unimodal Iris with block sum feature extraction, while Fingerprint feature extraction by Haar wavelet exhibited the lowest performance, 84% accuracy. Recognition performance is thus significantly poorer for unimodal biometrics than for multimodal. Fifty samples were tested for false acceptance with each feature input method; the best possible result, 0% FAR, is achieved by the fused feature vector.
Training was carried out for 100 individual persons with four reference images each, making 400 pairs of Fingerprint and Iris; effectively the network was trained on 400 sample pairs, and 100 samples were tested to find the performance of the system. The times in seconds required for training and testing, for unimodal and multimodal, are tabulated in [Table-3]. The minimum training time is for the Iris with the block sum based method (4.86s); the best performing method, the fused feature, stands second at 5.30s, while Fingerprint feature extraction with the Haar wavelet based method consumed the most time, 5.97s. All 100 query images were tested with the RBFNN classifier. The mean testing time per person is tabulated in [Table-3]: the fused feature required an execution time of 0.12s, the minimum among the three input features.
The results clearly indicate that the combination of the fused feature and the RBFNN is the best performer: it has the highest GAR, 92%, an FAR of 0%, and the least testing time, 0.12s. In training time it is the second best, consuming 5.30s per person, slightly behind the minimum of 4.86s; since training is carried out only once, this overhead is tolerable.

Discussion and Comparison

The literature shows that the majority of published work on multimodal biometric systems is based on matching-score-level or decision-level fusion. This paper presents a feature-level fusion method for a multimodal biometric system based on Fingerprints and Irises. The proposed approach to Fingerprint and Iris feature extraction, fusion and classification by RBFNN has been tested for unimodal as well as multimodal identification using the real Fingerprint database and the CASIA Iris database. In greater detail, the proposed approach performs Fingerprint feature extraction with the Haar wavelet based method and Iris feature extraction with the block sum method; these codified features form the unified template. The proposed feature-level fusion exhibits improved performance compared to the fusion methods used by other researchers, and from [Table-2] it can readily be concluded that feature-level fusion performs better than the unimodal biometric systems.
In a similar attempt, Jagadeesan, et al. [32] proposed an efficient approach for generating a secured cryptographic key from Iris and Fingerprint modalities so as to provide better security. Their approach comprised three modules: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. First, minutiae points and texture properties were extracted from the Fingerprint and Iris images, respectively. The extracted features were combined by their innovative method to obtain a fused multi-biometric template, from which a 256-bit secure cryptographic key was generated. For experimentation they employed Fingerprint images obtained from publicly available sources (we used our real Fingerprint database instead) and Iris images from the CASIA Iris Database. Training and testing aspects were not covered in [32]; we go one step further by training their proposed key with the RBFNN in order to compare our results with their technique. The comparison of FAR and FRR is tabulated in [Table-4].
The training and testing time per person for [32] is tabulated in [Table-5] .
The experimental results demonstrate the effectiveness of the proposed approach and suggest that feature-level fusion gives better performance than the individual traits, a conclusion also supported by the results obtained with the technique of Jagadeesan, et al. [32]. The results of our fusion method are superior: for feature-level fusion trained by RBFNN we found GAR = 92%, FAR = 0%, a training time of 5.30s and a testing time of 0.12s, against GAR = 90%, FAR = 0%, a training time of 5.41s and a testing time of 0.22s for Jagadeesan's technique. This comparison supports the claim of superiority of our proposed method of feature-level fusion.

Conclusion

In this paper a novel algorithm for feature-level fusion and recognition using an RBFNN has been proposed and compared with unimodal biometric systems. The simulation results clearly show the advantage of feature-level fusion of multiple biometric modalities over single-biometric identification. The proposed fused-template generation also outperforms cryptographic key generation from multimodal biometrics: the recognition rate using the fused template is better than that of the cryptographic-key-based system, with an improvement of 2% in FRR and a faster response time. From the experimental results it can be concluded that feature-level fusion produces better recognition than the individual modalities, and the proposed method appears strong enough to enhance the performance of multimodal biometrics. The work can be extended to other biometric modalities, and performance analysis on noisy databases may also be undertaken.

References

[1] Jang-Hee Yoo, Jong-Gook Ko, Sung-Uk Jung, Yun-Su Chung, Ki-Hyun Kim, Ki-Young Moon and Kyoil Chung (2007) Report of ETRI-Information Security Research Division, 1-6.
[2] Ross A. and Jain A. (2003) Pattern Recognition Letters, 24(13), 2115-2125.
[3] Yang F. and Ma B. (2007) 4th IEEE International Conference on Image and Graphics, Jinhua, 689-693.
[4] Cui J., Li J.P. and Lu X.J. (2008) IEEE International Conference on Apperceiving Computing and Intelligence Analysis, Chengdu, China, 66-69.
[5] Dahel S.K. and Xiao Q. (2003) Information Assurance Workshop, IEEE Systems, Man and Cybernetics Society, Ottawa, Ont., Canada, 170-173.
[6] Ross A. and Jain A.K. (2004) 12th European Signal Processing Conference, Vienna, Austria, 1221-1224.
[7] Rabuzin K., Baca M. and Malekovic M. (2007) Journal of Software, 2(4), 24-31.
[8] Baca M. and Rabuzin K. (2005) 28th International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO, 205-210.
[9] Teddy Ko (2005) 34th Applied Imagery and Pattern Recognition Workshop, Arlington, VA, 88-95.
[10] Nageshkumar M., Mahesh P.K. and Shanmukha Swamy M.N. (2009) International Journal of Computer Science, 2, 49-53.
[11] Jain A.K. and Ross A. (2004) Communications of the ACM, 47(1), 34-40.
[12] Hong L., Jain A.K. and Pankanti S. (1999) IEEE Workshop on Automatic Identification Advanced Technologies, New Jersey, USA, 59-64.
[13] Jain A., Nandakumar K. and Ross A. (2005) Pattern Recognition, 38(12), 2270-2285.
[14] Camlikaya E., Kholmatov A. and Yanikoglu B. (2008) Biometric Technology for Human Identification, 6944(5), 1-9.
[15] Cogent Systems, 3M Cogent - Fusion - Multi-Modal Biometric Handheld Device.
[16] Trak M., MetaMatcher - A Multi-Biometric Matching Architecture.
[17] Wang R. and Bhanu B. (2006) IEEE International Conference on Pattern Recognition, Istanbul, Turkey, 586-589.
[18] Jing X., Yao Y., Zhang D., Yang J. and Li M. (2007) Pattern Recognition, 40(11), 3209-3224.
[19] Zhang T., Li X., Tao D. and Yang J. (2008) Pattern Recognition, 41(3), 805-813.
[20] Yan Yan and Yu-Jin Zhang (2008) 19th International Conference on Pattern Recognition, Beijing, China.
[21] Ross A. and Govindarajan R. (2004) Biometric Consortium Conference (BCC).
[22] Subbarayudu V.C. and Prasad M.V.N.K. (2008) 1st IEEE International Conference on Emerging Trends in Engineering Technology, 635-640.
[23] Besbes F., Trichili H. and Solaiman B. (2008) 3rd International IEEE Conference on Information and Communication Technologies: From Theory to Applications, Syria, 1-5.
[24] Ratha N.K., Bolle R.M., Pandit V.D. and Vaish V. (2008) 5th IEEE Workshop on Applications of Computer Vision, 29-34.
[25] Conti V., Militello C., Sorbello F. and Vitabile S. (2010) IEEE Transactions on Systems, Man and Cybernetics, Part C, 40(4), 388-395.
[26] Nagar A., Nandakumar K. and Jain A.K. (2012) IEEE Transactions on Information Forensics and Security, 7(1), 255-279.
[27] Ma L., Wang Y. and Zhang D. (2004) IEEE Transactions on Image Processing, 13(6), 739-750.
[28] Aguilar G., Sanchez G., Toscano K., Nakano M. and Perez H. (2007) International Conference on Intelligent and Advanced Systems, Kuala Lumpur, 145-150.
[29] Liu L., Feng Gu X., Jian Ping Li, Jie Lin, Jin Xin Shi and Yuan Yuan Huang (2009) International Conference on Apperceiving Computing and Intelligent Analysis, Chengdu, China, 112-115.
[30] Basha A.J., Palanisamy V. and Purusothman T. (2010) International Conference on Computational Intelligence and Computing Research, Coimbatore, India, 1-8.
[31] Baig A., Bouridane A., Kurugollu F. and Qu G. (2009) International Journal of Bio-Science and Bio-Technology, 1(1), 47-58.
[32] Jagadeesan A., Thillaikkarasi T. and Duraiswamy K. (2011) European Journal of Scientific Research, 49(4), 488-502.
[33] Maurer D.E. and Baker J.P. (2008) Pattern Recognition, 41(3), 821-832.
[34] Huang Y.P., Luo S.W. and Chen E.Y. (2002) IEEE 1st International Conference on Machine Learning and Cybernetics, Beijing, China, 1, 450-454.
[35] Daugman J. (2004) IEEE Transactions on Circuits and Systems for Video Technology, 14(1), 21-39.
[36] Wildes R.P., Asmuth J.C., Green G.L., Hsu S.C., Kolczynski R.J., Matey J.R. and McBride S.E. (1994) Second IEEE Workshop on Applications of Computer Vision, 252-258.
[37] Gawande U., Zaveri M. and Kapur A. (2011) Biometric Technology Today, 4, 8-10.
[38] Jing X.Y., Zhang D., Yong-Fang Yao and Miao Li (2008) Neurocomputing, 71(13), 3044-3048.

Images
Fig. 1- Preprocessing of Fingerprint Image (a) Input Fingerprint Image, (b) Histogram equalized Image
Fig. 2- (a) Wavelet decomposition (b) Four-level wavelet decomposition
Fig. 3- Iris Localization, segmentation and normalization
Fig. 4- Dividing the normalized image into cells
Fig. 5- RBFNN Architecture
Table 1- Network Parameters for RBFNN
Table 2- Average GAR (%) and FAR (%) for unimodal and feature level fusion technique
Table 3- Training and testing time (mean) in seconds
Table 4- Comparison of proposed approach with other literature approach [32] (FAR & FRR)
Table 5- Training and testing time (mean) in seconds for [32]