RESEARCH Open Access

An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation

Meng-Hui Lim and Andrew Beng Jin Teoh*

* Correspondence: bjteoh@yonsei.ac.kr. School of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, South Korea

Abstract

Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy-protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first and second criteria simultaneously is not feasible, and that a tradeoff between them is inevitable. In this article, we propose an effective fixed bit allocation-based discretization approach which involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.

Keywords: biometric discretization, quantization, feature selection, linearly separable subcode encoding

1. Introduction

Binary representation of biometrics has been receiving an increasing amount of attention and demand in the last decade, ever since biometric security schemes were widely proposed. Security applications such as biometric-based cryptographic key generation schemes [1-7] and biometric template protection schemes [8-13] require biometric features to be presented in binary form before they can be implemented in practice. However, as security is the concern, these applications require the binary biometric representation to be:

• Discriminative: The binary representation of each user ought to be highly representative and distinctive, so that it can be derived as reliably as possible upon every query request of a genuine user, and will neither be misrecognized as another user's nor be extractable by any non-genuine user.

• Informative: The information, or uncertainty, contained in the binary representation of each user should be adequately high. The use of a huge number of equal-probable binary outputs creates a huge key space, which leaves an attacker clueless when guessing the correct output during a brute-force attack. This is extremely essential in security provision, as a malicious impersonation could take place in a straightforward manner if the correct key can be obtained by the adversary with an overwhelming probability. Entropy is a common measure of uncertainty, and it is usually a biometric system specification. Denoting the entropy of a binary representation by $L$, it can be related to the $N$ possible outputs with probabilities $p_i$, $i \in \{1, \ldots, N\}$, by $L = -\sum_{i=1}^{N} p_i \log_2 p_i$. If the outputs are equal-probable, the resultant entropy is maximal, that is, $L = \log_2 N$. Note that the current encryption standard, the advanced encryption standard (AES), specifies keys of up to 256-bit entropy, signifying that at least $2^{256}$ possible outputs are required to withstand a brute-force attack at the current state of the art.
With consistent technological advancement, adversaries will become more and more powerful, owing to the growing capability of computers. Hence, it is of utmost importance to derive highly informative binary strings to cope with rising encryption standards in the future.

• Privacy-protective: To avoid devastating consequences upon compromise of the irreplaceable biometric features of a user, the auxiliary information used for bit-string regeneration must not be correlated with the raw or projected features. In the case of system compromise, such non-correlation of the auxiliary information should be guaranteed to impede any adversarial reverse-engineering attempt at obtaining the raw features. Otherwise, it would be no different from storing the biometric features in the clear in the system database.

To date, only a handful of biometric modalities, such as iris [14] and palm print [15], have their features represented in binary form upon an initial feature-extraction process. Many others remain represented in the continuous domain after feature extraction. Therefore, an additional process is needed in a biometric system to transform these inherently continuous features into a binary string (per user), known as the biometric discretization process. Figure 1 depicts the general block diagram of a binary string generator that employs a biometric discretization scheme.

Figure 1. A biometric discretization-based binary string generator.

In general, most biometric discretization can be decomposed into two essential components, which can alternatively be described as a two-stage mapping process:

• Quantization: The first component can be seen as a continuous-to-discrete mapping process. Given a set of feature elements per user, every one-dimensional feature space is first constructed and segmented into a number of non-overlapping intervals, each of which is associated with a decimal index.

• Encoding: The second component can be regarded as a discrete-to-binary mapping process, where the resultant index of each dimension is mapped to a unique n-bit binary codeword of an encoding scheme. The codeword output of every feature dimension is then concatenated to form the final bit string of a user. The discretization performance is finally evaluated in the Hamming domain.

These two components are governed by a static or a dynamic bit allocation algorithm, determining whether the quantity of binary bits allocated to every dimension is fixed or varied, respectively. Besides, if the (genuine or/and imposter) class information is used in determining the cut points (interval boundaries) of the non-overlapping quantization intervals, the discretization is known as supervised discretization [1,3,16]; otherwise, it is referred to as unsupervised discretization [7,17-19].
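To make the two-stage mapping concrete, here is a minimal sketch (Python; the background samples, interval count, and query value are all invented for illustration). It builds S equal-probable intervals from background samples, quantizes a feature value to an interval index, and encodes the index as an n-bit codeword; with S equal-probable intervals, the per-dimension entropy of the formula above is $\log_2 S$ bits.

```python
import numpy as np

# Hypothetical background samples for one feature dimension (illustrative).
rng = np.random.default_rng(0)
background = rng.normal(loc=0.0, scale=1.0, size=10_000)

S = 4  # number of non-overlapping intervals (an illustrative choice)
# Equal-probable cut points: interior quantiles of the background samples,
# so that each interval captures 1/S of the background probability mass.
cut_points = np.quantile(background, [i / S for i in range(1, S)])

def quantize(v, cuts):
    """Continuous-to-discrete: index of the interval containing v."""
    return int(np.searchsorted(cuts, v))

def encode_dbr(index, n_bits):
    """Discrete-to-binary: label the index with an n-bit codeword."""
    return format(index, f"0{n_bits}b")

v = 0.35                          # a query feature value
idx = quantize(v, cut_points)     # interval index in {0, ..., S-1}
bits = encode_dbr(idx, n_bits=2)  # per-dimension entropy: log2(S) = 2 bits
print(idx, bits)
# The final string concatenates the codewords of all feature dimensions,
# and matching is evaluated in the Hamming domain.
```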
On the other hand, information about the constructed intervals of each dimension is stored as helper data during enrolment, so as to assist in reproducing the same binary string of each genuine user during the verification phase. However, in line with the security and privacy requirements of the binary representation, it is important that such helper data, upon compromise, leak neither helpful information about the output binary string (a security concern) nor the biometric features themselves (a privacy concern).

1.1 Previous works

Over the last decade, numerous biometric discretization techniques for producing a binary string from a given set of features of each user have been reported. These schemes are based on either a fixed bit-allocation principle (assigning a fixed number of bits to each feature dimension) [4-7,10,13,16,20] or a dynamic bit-allocation principle (assigning a different number of bits to each feature dimension) [1,3,17-19,21].

Monrose et al. [4,5], Teoh et al. [6], and Verbitsky et al. [13] partition each feature space into two intervals (labeled '0' and '1') based on a prefixed threshold. Tuyls et al. [12] and Kevenaar et al. [9] used a similar 1-bit discretization technique, but instead of fixing the threshold, the mean of the background probability density function (pdf, modeling inter-class variation) is selected as the threshold in each dimension. Further, reliable components are identified based on either the training bit statistics [12] or a reliability (RL) function [9], so that unreliable dimensions can be excluded from bit extraction.

Kelkboom et al. analytically expressed the genuine and imposter bit-error probabilities [22] and subsequently modeled a discretization framework [23] to analytically estimate the genuine and imposter Hamming distance probability mass functions (pmf) of a biometric system. This model is based upon a static 1-bit equal-probable discretization under the assumption that both intra-class and inter-class variations are Gaussian distributed.

Han et al. [20] proposed a discretization technique to extract a 9-bit pin from each user's fingerprint impressions. The discretization derives the first 6 bits from six pre-identified reliable/stable minutiae: if a minutia belongs to a bifurcation, a bit '0' is assigned; otherwise, if it is a ridge ending, a bit '1' is assigned. The last 3 bits are derived by a single-bit discretization on each of three triangular features. If such a biometric password/pin were used directly as a cryptographic key in security applications, it would be too short to survive brute-force attacks, as an adversary would require at most $2^9 = 512$ attempts to crack it.

Hao and Chan [3] and Chang et al. [1] employed multi-bit supervised user-specific biometric discretization schemes, each with a different interval-handling technique. Both schemes initially fix the position of the genuine interval of each dimension around the modeled pdf of the jth user, $[\mu_j - k\sigma_j,\ \mu_j + k\sigma_j]$, and then construct the remaining intervals with a constant width of $2k\sigma_j$ within every feature space. Here, $\mu_j$ and $\sigma_j$ denote the mean and standard deviation (SD) of the user pdf, respectively, and $k$ is a free parameter.
As for the boundary portions at both ends of each feature space, Hao and Chan unfold every feature space arbitrarily to include all the remaining possible feature values in forming the leftmost and rightmost boundary intervals. All the constructed intervals are then labeled with direct binary representation (DBR) codewords (e.g., $3_{10} \rightarrow 011_2$, $4_{10} \rightarrow 100_2$, $5_{10} \rightarrow 101_2$). Chang et al., on the other hand, extend each feature space to account for extra equal-width intervals so as to form $2^n$ intervals, matching the entire set of $2^n$ codeword labels of an n-bit DBR encoding scheme.

Although both these schemes are able to generate binary strings of arbitrary length, they turn out to be greatly inefficient, since the ad-hoc interval-handling strategies may result in a considerable leakage of entropy which jeopardizes the security of the users. In particular, the non-feasible labels of all extra intervals (including the boundary intervals) would allow an adversary to eliminate the corresponding codeword labels from her or his output-guessing range after observing the helper data, or after reliably identifying the 'fake' intervals. Apart from this security issue, another critical problem with these two schemes is the potential exposure of the exact location of each genuine user pdf. Based on the knowledge that the user pdf is located at the center of the genuine interval, the constructed intervals serve as a clue to the adversary as to where the user pdf could be located. As a result, the possible locations of the user pdf could be reduced to the number of quantization intervals in that dimension, potentially facilitating a malicious privacy-violation attempt.

Chen et al. [16] demonstrated a likelihood-ratio-based multi-bit biometric discretization scheme which is likewise supervised and user-specific. The quantization scheme first constructs the genuine interval to accommodate the likelihood ratio (LR) detected in that dimension and creates the remaining intervals in an equal-probable (EP) manner, so that the background probability mass is equally distributed among the intervals. The leftmost and rightmost boundary intervals with insufficient background probability mass are wrapped into a single interval that is tagged with a common codeword label from the binary reflected gray code (BRGC) encoding scheme [24] (e.g., $3_{10} \rightarrow 010_2$, $4_{10} \rightarrow 110_2$, $5_{10} \rightarrow 111_2$). This discretization scheme suffers from the same privacy problem as the previous supervised schemes, owing to the genuine interval being constructed from user-specific information.

Yip et al. [7] presented an unsupervised, non-user-specific, multi-bit discretization scheme based on equal-width interval quantization and BRGC encoding. This scheme adopts the entire BRGC code for labeling and is therefore free from the entropy-loss problem. Furthermore, since it does not make use of the user pdf to determine the cut points of the quantization intervals, this scheme does not suffer from the aforementioned privacy problem.

Teoh et al. [18,19] developed a bit-allocation approach based on unsupervised equal-width quantization with a BRGC-encoding scheme to compose a long binary string per user by assigning a different number of bits to each feature dimension according to the SD of each estimated user pdf.
In particular, the intention is to assign a larger quantity of binary bits to discriminative dimensions and a smaller quantity otherwise. In other words, the larger the SD of a user pdf is detected to be, the fewer bits are assigned to that dimension, and vice versa. Nevertheless, the bit allocation is not decided based on the actual position of the pdf itself in the feature space. Although this scheme is invulnerable to the privacy weakness, such a deciding strategy gives a less accurate bit allocation: a user pdf falling across an interval boundary may result in an undesired intra-class variation in the Hamming domain and thus should not be prioritized for bit extraction. Another concern is that the pure SD might not be a promising discriminative measure.

Chen et al. [17] introduced another dynamic bit-allocation approach by considering the detection rate (DR), the user probability mass captured by the genuine interval, as their bit-allocation measure. The scheme, known as DR-optimized bit allocation (DROBA), employs an equal-probable quantization-interval construction with BRGC encoding. Similar to Teoh et al.'s dynamic bit-allocation scheme, this scheme assigns more bits to more discriminative feature dimensions and vice versa. Recently, Chen et al. [21] developed a similar dynamic bit-allocation algorithm based on optimizing a different bit-allocation measure: the area under the FRR curve. Given the bit-error probability, the scheme allocates bits dynamically to every feature component in a similar way as DROBA, except that the analytic area under the FRR curve for Hamming distance evaluation is minimized instead of the DR being maximized.

1.2 Motivation and contributions

It has recently been shown that DBR- and BRGC-encoding-based discretization cannot guarantee a discriminative performance when a large per-dimensional entropy requirement is imposed [25]. The reason lies in the indefinite feature mapping of DBR and BRGC codes from a discrete to a Hamming space, which prevents the actual distance dissimilarity from being maintained in the Hamming domain. As a result, feature points from multiple different intervals may be mapped to DBR or BRGC codewords which share a common Hamming distance from a reference codeword, as illustrated by the 3-bit discretization instance in Figure 2.

Figure 2. An indefinite discrete-to-binary mapping from each discrete-labelled quantization interval to a 3-bit BRGC codeword. The label g(b) in each interval on the continuous feature space reads as 'index number (associated codeword)'.

For this reason, regardless of how discriminative the extracted (real-valued) features may be, deriving discriminative and informative binary strings with DBR or BRGC encoding is not practically feasible. The linearly separable subcode (LSSC) [25] has been put forward to resolve this performance-entropy tradeoff by introducing bit redundancy to maintain the performance accuracy when a high entropy requirement is imposed. Although the resultant LSSC-extracted binary strings require a larger bit length in addressing an 8-interval discretization problem, as exemplified in Figure 3, mapping discrete elements to the Hamming space becomes completely definite.

This article focuses on discretization based on the fixed bit-allocation principle. We extend the study of [25] to tackle the open problem of generating desirable binary strings that are simultaneously highly discriminative, informative, and privacy-protective by means of LSSC-based discretization.
Specifically, we adopt a discriminative feature extraction with a further feature selection to extract discriminative feature components; an unsupervised quantization approach to offer promising privacy protection; and an LSSC encoding to achieve large entropy without having to sacrifice the actual classification performance of the discriminative feature components. Note that the preliminary idea of this article appeared in the context of global discretization [26] for achieving strong security and privacy protection with high training efficiency. In general, the significance of our contribution is three-fold:

a) We propose a fixed bit-allocation-based discretization approach to extract, from each given set of user-specific features, a binary representation which is able to fulfill all the required criteria.

b) As required by our approach, we study empirically various discriminative measures that have been put forward for feature selection and identify the reliable ones among them.

c) We identify and analyze factors that influence the improvements resulting from discriminative selection based on the respective measures.

The structure of this article is organized as follows. In the next section, the efficiency of using LSSC over BRGC and DBR for encoding is highlighted. In Section 3, detailed descriptions of our approach to generating a desirable binary representation are given and elaborated. In Section 4, experimental results justifying the effectiveness of our approach are presented. Finally, concluding remarks are provided in Section 5.

2. The emergence of LSSC

2.1 The security-performance tradeoff of DBR and BRGC

Two common encoding schemes adopted for discretization before LSSC was introduced are DBR and BRGC. DBR has each of its decimal indices directly converted into its binary equivalent, while BRGC is a special code that restricts the Hamming distance between every consecutive pair of codewords to unity. Depending on the required size $S$ of a code, the lengths of both DBR and BRGC are commonly selected to be $n_{DBR} = n_{BRGC} = \lceil \log_2 S \rceil$. Instances of DBR and BRGC with different lengths ($n_{DBR}$ and $n_{BRGC}$, respectively) and sizes $S$ are shown in Table 1. Here, the length of a code refers to the number of bits in which the codewords are represented, while the size of a code refers to the number of elements in the code. The codewords are indexed from 0 to $S-1$. Note that each codeword index corresponds to the quantization-interval index as well.

Conventionally, a tradeoff between discretization performance and entropy is inevitable when DBR or BRGC is adopted as the encoding scheme. The rationale behind this was identified to be the indefinite discrete-to-binary mapping behavior during the discretization process, since the employment of an encoding scheme in general affects only how each index of the quantization intervals is mapped to a unique binary codeword.
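Both codes are easy to reproduce. The sketch below (Python; the helper names are ours, not the paper's) generates DBR and BRGC codewords and prints each codeword's Hamming distance from codeword 0, which makes the many-to-one distance pattern discussed next directly visible.

```python
def dbr(index, n):
    """n-bit direct binary representation of a codeword index."""
    return format(index, f"0{n}b")

def brgc(index, n):
    """n-bit binary reflected Gray code; consecutive codewords differ in 1 bit."""
    return format(index ^ (index >> 1), f"0{n}b")

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

n = 3
for i in range(2 ** n):
    d, g = dbr(i, n), brgc(i, n)
    print(i, d, hamming(dbr(0, n), d), g, hamming(brgc(0, n), g))
# Distinct indices collapse onto the same Hamming distance from "000":
# e.g., DBR indices 1, 2, and 4 all lie one bit away from index 0,
# although they lie 1, 2, and 4 intervals away in the index domain.
```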
More precisely, one may notice that multiple DBR as well as BRGC codewords share a common Hamming distance with respect to any reference codeword in the code for $n_{DBR}, n_{BRGC} \ge 2$, possibly mapping imposter feature elements that are initially well separated from a genuine feature element in the index space much nearer than they should be in the Hamming space.

Table 1. A collection of $n_{DBR}$-bit DBRs and $n_{BRGC}$-bit BRGCs for S = 8 and 16, with [τ] indicating the codeword index.

[τ]    DBR, n=3 (S=8)   DBR, n=4 (S=16)   BRGC, n=3 (S=8)   BRGC, n=4 (S=16)
[0]    000              0000              000               0000
[1]    001              0001              001               0001
[2]    010              0010              011               0011
[3]    011              0011              010               0010
[4]    100              0100              110               0110
[5]    101              0101              111               0111
[6]    110              0110              101               0101
[7]    111              0111              100               0100
[8]                     1000                                1100
[9]                     1001                                1101
[10]                    1010                                1111
[11]                    1011                                1110
[12]                    1100                                1010
[13]                    1101                                1011
[14]                    1110                                1001
[15]                    1111                                1000

Taking 4-bit DBR-based discretization as an example, the interval labelled '1000', located 8 intervals away from the reference interval '0000', is eventually mapped to only one Hamming distance away in the Hamming space. Worse for BRGC, the interval '1000' is located even further (15 intervals) away from interval '0000'. As a result, imposter feature components might be misclassified as genuine in the Hamming domain, and eventually the discretization performance would be greatly impeded by such an imprecise discrete-to-binary map. In fact, this defective phenomenon becomes more critical as the required entropy, or equivalently $S$, increases [25].

2.2 LSSC

The linearly separable subcode (LSSC) [25] was put forward to tackle the aforementioned inability of DBR and BRGC to fully preserve the separation of feature points in the index domain when the eventual distance evaluation is performed in the Hamming domain. This code utilizes redundancy to augment the separability in the Hamming space, enabling a one-to-one correspondence between every non-reference codeword and the Hamming distance incurred with respect to every possible reference codeword.

Figure 3. A definite discrete-to-binary mapping from each discrete-labelled quantization interval to a 7-bit LSSC codeword. The label g(b) in each interval on the continuous feature space reads as 'index number (associated codeword)'.

Let $n_{LSSC}$ denote the code length of an LSSC. An LSSC contains $S = n_{LSSC} + 1$ codewords, which is a subset of the $2^{n_{LSSC}}$ codewords in total. The construction of an LSSC proceeds as follows: beginning with an arbitrary $n_{LSSC}$-bit codeword, say the all-zero codeword, the next $n_{LSSC}$ codewords are sequentially derived by complementing a bit at a time from the lowest-order (rightmost) to the highest-order (leftmost) bit position. The resultant $n_{LSSC}$-bit LSSCs fulfilling S = 4, 8, and 16 are shown in Table 2.

Table 2. A collection of $n_{LSSC}$-bit LSSCs for S = 4, 8, and 16, where [τ] denotes the codeword index.

[τ]    n=3 (S=4)   n=7 (S=8)   n=15 (S=16)
[0]    000         0000000     000000000000000
[1]    001         0000001     000000000000001
[2]    011         0000011     000000000000011
[3]    111         0000111     000000000000111
[4]                0001111     000000000001111
[5]                0011111     000000000011111
[6]                0111111     000000000111111
[7]                1111111     000000001111111
[8]                            000000011111111
[9]                            000000111111111
[10]                           000001111111111
[11]                           000011111111111
[12]                           000111111111111
[13]                           001111111111111
[14]                           011111111111111
[15]                           111111111111111

The amount of bit disagreement, or equivalently the Hamming distance, between any pair of codewords is exactly the corresponding positive index difference. For a 3-bit LSSC, as an example, the Hamming distance between codewords '111' and '001' is 2, which equals the difference between the codeword indices 3 and 1.
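The construction rule above reduces to a one-liner; this sketch (Python, illustrative) builds an $n_{LSSC}$-bit LSSC and verifies that the Hamming distance between any two codewords equals their index difference.

```python
def lssc(n):
    """Build the (n+1)-codeword linearly separable subcode of length n.

    Codeword i has its i lowest-order bits set, i.e., one bit is complemented
    at a time from the rightmost to the leftmost position.
    """
    return [format((1 << i) - 1, f"0{n}b") for i in range(n + 1)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

code = lssc(7)  # S = 8 codewords for an 8-interval discretization
assert all(
    hamming(code[i], code[j]) == abs(i - j)
    for i in range(len(code))
    for j in range(len(code))
)
print(code)  # ['0000000', '0000001', '0000011', ..., '1111111']
```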
In general, it is not difficult to observe that neighbouring codewords have a smaller Hamming distance than distant codewords. Thus, unlike DBR and BRGC, LSSC ensures that every distance in the index space is thoroughly preserved in the Hamming space, despite the large bit redundancy a system might need to afford. As reported in [25], increasing the entropy per dimension has a trivial effect on discretization performance under LSSC, on the condition that the quantity of quantization intervals constructed in each dimension is not too small. Instead, the entropy now becomes a function of the bit redundancy incurred.

3. Desirable bit string generation and the appropriate discriminative measures

In the literature review, we have seen that user-specific information (i.e., the user pdf) should not be utilized to define the cut points of the quantization intervals, so as to avoid reducing the possible locations of the user pdf to the quantity of intervals in each dimension. Therefore, strong privacy protection basically limits the choice of quantization to unsupervised techniques. Furthermore, the entropy-performance independence of LSSC encoding allows promising performance to be preserved regardless of how large the entropy per dimension is made, and correspondingly how finely each dimension of the feature space is segmented. Therefore, if we are able to extract discriminative feature components for discretization, deriving discriminative, informative, and privacy-protective bit strings becomes entirely possible. Our strategy can generally be outlined in the following four fundamental steps:

i. [Feature extraction] Employ a discriminative feature extractor ℑ(·) (e.g., Fisher's linear discriminant analysis (FDA) [27] or eigenfeature regularization and extraction (ERE) [28]) to ensure that D quality features are extracted from R raw features;

ii. [Feature selection] Select the $D_{fs}$ ($D_{fs} < D < R$) most discriminative feature components from the total of D dimensions according to a discriminative measure χ(·);

iii. [Quantization] Adopt an unsupervised equal-probable quantization scheme Q(·) to achieve strong privacy protection; and

iv. [Encoding] Employ LSSC encoding ℰ_LSSC(·) to maintain the discriminative performance while satisfying an arbitrary entropy requirement imposed on the resultant binary string.

This approach initially obtains a set of discriminative feature components in steps (i) and (ii), and then produces an informative user-specific binary string (with large entropy) while maintaining the prior discriminative performance in steps (iii) and (iv).
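A minimal end-to-end sketch of steps (ii)-(iv) might look as follows (Python; the toy data, dimension counts, and the placeholder selection measure, an inverse-variance rule akin to the SD measure of Section 3.1.4, are our own assumptions rather than the paper's exact algorithm, which Figure 4 specifies).

```python
import numpy as np

def lssc(n):
    """(n+1)-codeword linearly separable subcode of length n."""
    return [format((1 << i) - 1, f"0{n}b") for i in range(n + 1)]

def ep_cut_points(background, S):
    """Unsupervised equal-probable cut points from background samples."""
    return np.quantile(background, [i / S for i in range(1, S)])

def discretize(user_features, background, D_fs, S):
    """Steps (ii)-(iv): select D_fs dimensions, EP-quantize, LSSC-encode."""
    # (ii) Feature selection with a placeholder measure: inverse per-dimension
    # user variance (smaller spread = more stable), akin to the SD rule.
    measure = 1.0 / (np.var(user_features, axis=0) + 1e-12)
    selected = np.argsort(measure)[::-1][:D_fs]

    code = lssc(S - 1)                    # n_LSSC = S - 1 bits per dimension
    template = np.mean(user_features, axis=0)
    bits = []
    for d in selected:
        cuts = ep_cut_points(background[:, d], S)   # (iii) quantization
        idx = int(np.searchsorted(cuts, template[d]))
        bits.append(code[idx])                      # (iv) encoding
    return "".join(bits), selected

# Toy data: 6 enrolment samples of D = 10 features, plus a background pool.
rng = np.random.default_rng(1)
background = rng.normal(size=(5000, 10))
user = rng.normal(loc=0.3, scale=0.2, size=(6, 10))

bitstring, dims = discretize(user, background, D_fs=4, S=4)
print(dims, bitstring)   # 4 dims x (S-1) = 3 bits each -> a 12-bit string
```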
The privacy protection is offered by the unsupervised quantization in step (iii), where the correlation of the helper data with the user-specific data is insignificant. This makes our four-step approach capable of producing discriminative, informative, and privacy-protective binary biometric representations. Among the steps, the implementations of (i), (iii), and (iv) are fairly straightforward. The only uncertainty lies in the appropriate discriminative measure and the corresponding parameter $D_{fs}$ in step (ii) for attaining absolute superiority. Note that step (ii) is embedded particularly to compensate for the restrictive performance entailed by the employment of unsupervised quantization. Here, we introduce a number of discriminative measures that can be adopted for discretization and study their relative merits in the next section.

3.1 Discriminative measures χ(·) for feature selection

The discriminativeness of each feature component is closely related to the well-known Fisher's linear discriminant criterion [27], where the discriminant criterion is defined as the ratio of between-class variance (inter-class variation) to within-class variance (intra-class variation). Suppose that we have J users enrolled in a biometric system, where each of them is represented by a total of D ordered feature elements $v_{ji}^1, v_{ji}^2, \ldots, v_{ji}^D$ upon feature extraction from each measurement. In view of potential intra-class variation, the dth feature element of the jth user can be modeled from a set of measurements by a user pdf, denoted by $f_j^d(v)$, where $d \in \{1, 2, \ldots, D\}$, $j \in \{1, 2, \ldots, J\}$, and $v \in V^d$, the dth feature space. On the other hand, owing to inter-class variation, the dth feature element of the measurements of the entire population can be modeled by a background pdf, denoted by $f^d(v)$. Both distributions are assumed to be Gaussian according to the central limit theorem. That is, the dth-dimensional background pdf has mean $\mu^d$ and SD $\sigma^d$, while the jth user's dth-dimensional user pdf has mean $\mu_j^d$ and variance $(\sigma_j^d)^2$.

3.1.1. Likelihood ratio (χ = LR)

The idea of using the LR to achieve optimal FAR/FRR performance in static discretization was first exploited by Chen et al. [16]. The LR of the jth user in the dth-dimensional feature space is generally defined as

$$\mathrm{LR}_j^d(v) = \frac{f_j^d(v)}{f^d(v)} \quad (1)$$

under the assumption that the entire population is sufficiently large (excluding a single user should not have any significant effect on the background distribution). In their scheme, the cut points $v_1, v_2 \in V^d$ of the jth user's genuine interval $\mathrm{int}_j^d$ in the dth-dimensional feature space are chosen based on a prefixed threshold $t$, such that

$$\mathrm{int}_j^d = \left\{ [v_1, v_2] \in V^d \mid \mathrm{LR}_j^d(v) \ge t \right\} \quad (2)$$

The remaining intervals are then constructed equal-probably, that is, with reference to the portion of the background distribution captured by the genuine interval. Since different users will have different intervals constructed in each feature dimension, this discretization approach turns out to be user-specific.
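Under the Gaussian assumptions above, eqs. (1)-(2) can be evaluated numerically; the sketch below (Python with SciPy; the means, SDs, and threshold are invented for illustration) locates the genuine interval of one dimension as the region where the LR stays above t.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical one-dimensional Gaussian models (illustrative numbers).
mu_bg, sd_bg = 0.0, 1.0      # background pdf  f^d(v)
mu_j,  sd_j  = 0.4, 0.15     # user pdf        f_j^d(v)
t = 2.0                      # likelihood-ratio threshold

v = np.linspace(-4, 4, 8001)
lr = norm.pdf(v, mu_j, sd_j) / norm.pdf(v, mu_bg, sd_bg)   # eq. (1)

inside = v[lr >= t]                                        # eq. (2)
v1, v2 = inside.min(), inside.max()
print(f"genuine interval: [{v1:.3f}, {v2:.3f}]")

# The remaining intervals would then be built equal-probably from the
# background mass outside [v1, v2], making the construction user-specific.
```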
In fact, the LR can be used to assess the discriminativity of each feature component efficiently: $\max f_j^d(v)$ is inversely proportional to $\sigma_j^d$ (because $\int f_j^d(v)\,dv = 1$), and hence to the dth-dimensional intra-class variation, while $f^d(v)$ is inversely proportional to the dth-dimensional inter-class variation. This implies

$$\mathrm{LR}_j^d = \max\!\left( \frac{f_j^d(v)}{f^d(v)} \right) \propto \max\!\left( \frac{\text{inter-class variation}}{\text{intra-class variation}} \right), \quad j \in \{1, 2, \ldots, J\},\ d \in \{1, 2, \ldots, D\} \quad (3)$$

Therefore, adopting the $D_{fs}$ dimensions with maximum LR is equivalent to selecting the $D_{fs}$ feature elements with maximum inter- over intra-class variation.

3.1.2. Signal-to-noise ratio (χ = SNR)

The signal-to-noise ratio (SNR) could be another alternative discriminative measure, since it captures both intra-class and inter-class variations. This measure was first used for feature selection by a user-specific 1-bit RL-based discretization scheme [12] to sort the feature elements identified as reliable. However, instead of using the default average intra-class variance to define the SNR, we adopt the user-specific intra-class variance to compute a user-specific SNR for each feature component, for improved precision:

$$\mathrm{SNR}_j^d = \frac{(\sigma^d)^2}{(\sigma_j^d)^2} = \frac{\text{inter-class variance}}{\text{intra-class variance}}, \quad j \in \{1, 2, \ldots, J\},\ d \in \{1, 2, \ldots, D\} \quad (4)$$

3.1.3. Reliability (χ = RL)

Reliability was employed by Kevenaar et al. [9] to sort the discriminability of the feature components in their user-specific 1-bit discretization scheme; it can thus be implemented in a straightforward manner in our study. This measure is defined by

$$\mathrm{RL}_j^d = \frac{1}{2}\left( 1 + \mathrm{erf}\left( \frac{|\mu_j^d - \mu^d|}{\sqrt{2(\sigma_j^d)^2}} \right) \right) \propto \max\!\left( \frac{\text{inter-class variation}}{\text{intra-class variation}} \right), \quad j \in \{1, 2, \ldots, J\},\ d \in \{1, 2, \ldots, D\} \quad (5)$$

where erf is the error function. The RL measure produces a higher value when a feature element has a larger difference between $\mu_j^d$ and $\mu^d$ relative to $\sigma_j^d$. As a result, a high RL measurement indicates a high discriminating power of a feature component.

3.1.4. Standard deviation (χ = SD)

In dynamic discretization, the number of bits allocated to a feature dimension indicates how discriminative the user-specific feature component is detected to be. Usually, a more discriminative feature component is assigned a larger quantity of bits, and vice versa. The pure user-specific SD measure $\sigma_j^d$, signifying intra-class variation, was adopted by Teoh et al. as a bit-allocation measure [18,19] and hence may serve as a potential discriminative measure.

3.1.5. Detection rate (χ = DR)

Finally, unlike all the above measures, which depend solely on the statistical distributions in determining the discrimination of the feature components, the DR could be another efficient discriminative measure for discretization that takes an additional factor into account: the position of the user pdf with respect to the constructed genuine interval (the interval that captures the largest portion of the user pdf) in each dimension. This measure, as adopted by Chen et al. in their dynamic bit-allocation scheme [17], is defined as the area under the user pdf enclosed by the genuine interval upon the respective interval construction in that dimension.
It can be described mathematically by

$$\delta_j^d(S^d) = \int_{\mathrm{int}_j^d} f_j^d(v)\,dv \quad (6)$$

where $\delta_j^d$ denotes the jth user's DR in the dth dimension and $S^d$ denotes the number of constructed intervals in the dth dimension.

To select $D_{fs}$ discriminative feature dimensions properly, schemes employing the LR, SNR, RL, and DR measures should take the dimensions with the $D_{fs}$ largest measurements,

$$\{d_i \mid i = 1, \ldots, D_{fs}\} = \operatorname*{arg\,max}_{D_{fs}\ \text{max values}} \left[ \chi(v_{j1}^1, v_{j2}^1, \ldots, v_{jI}^1), \ldots, \chi(v_{j1}^D, v_{j2}^D, \ldots, v_{jI}^D) \right], \quad d_1, \ldots, d_{D_{fs}} \in [1, D],\ D_{fs} < D \quad (7)$$

where I denotes the number of training measurements per user, while schemes employing the SD measure should adopt the dimensions with the $D_{fs}$ smallest measurements:

$$\{d_i \mid i = 1, \ldots, D_{fs}\} = \operatorname*{arg\,min}_{D_{fs}\ \text{min values}} \left[ \chi(v_{j1}^1, v_{j2}^1, \ldots, v_{jI}^1), \ldots, \chi(v_{j1}^D, v_{j2}^D, \ldots, v_{jI}^D) \right], \quad d_1, \ldots, d_{D_{fs}} \in [1, D],\ D_{fs} < D \quad (8)$$

We shall empirically identify the discriminative measures that can be reliably employed in the next section.

3.2 Discussions and a summary of our approach

In a biometric-based cryptographic key generation application, there is usually an entropy requirement L imposed on the binary output of the discretization scheme. Based on a fixed bit-allocation principle, L is equally divided among D dimensions for typical equal-probable discretization schemes, and among $D_{fs}$ dimensions for our feature-selection approach. Since the entropy per dimension l is logarithmically proportional to the number of equal-probable intervals S (or $l_{fs}$ and $S_{fs}$ for our approach) constructed in each dimension, this can be written as

$$l = L/D = \log_2 S \quad \text{for a typical EP discretization scheme} \quad (9)$$

or

$$l_{fs} = \lceil L/D_{fs} \rceil = \lceil lD/D_{fs} \rceil = \log_2 S_{fs} \quad \text{for our approach} \quad (10)$$

Denoting by n the bit length of each one-dimensional binary output, the actual bit length N of the final bit string is simply N = Dn; for LSSC-encoding-based schemes, where $n_{LSSC} = 2^l - 1$ bits, and for our approach, where $n_{LSSC(fs)} = 2^{l_{fs}} - 1$ bits, the actual bit lengths $N_{LSSC}$ and $N_{LSSC(fs)}$ can respectively be described by

$$N_{LSSC} = D\,n_{LSSC} = D(2^l - 1) \quad (11)$$

and

$$N_{LSSC(fs)} = D_{fs}\,n_{LSSC(fs)} = D_{fs}(2^{l_{fs}} - 1) \quad (12)$$

With the above equations, we illustrate the algorithmic description of our approach in Figure 4. Here, g and d* are dimensional variables, and || denotes the binary concatenation operator.

Figure 4. Our fixed-bit-allocation-based discretization approach.

4. Experiments and analysis

4.1. Experiment set-up

Two popular face datasets are selected to evaluate the experimental discretization performance in this section:

FERET: The employed dataset is a subset of the FERET face dataset [29], in which the images were collected under varying illumination conditions and facial expressions. It contains a total of 1800 images, with 12 images for each of 150 users.

FRGC: The adopted dataset is a subset of the FRGC dataset (version 2) [30], containing a total of 2124 images, with 12 images for each of 177 identities. The images were taken under controlled illumination conditions.

For both datasets, proper alignment is applied to the images based on standard face landmarks. Owing to possible strong variation in hair style, only the face region is extracted for recognition by cropping the images to a size of 30 × 36 for the FERET dataset and 61 × 73 for the FRGC dataset. Finally, histogram equalization is applied to the cropped images. Half of each identity's images are used for training, while the remaining half are used for testing.
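Returning briefly to Section 3.1, the five measures and the selection rules (7)-(8) can be estimated from such enrolment samples roughly as follows (a Python sketch under Gaussian fits; the equal-probable interval construction used for the DR is simplified here, so this is an approximation rather than the authors' exact implementation).

```python
import numpy as np
from scipy.stats import norm
from scipy.special import erf

def measures(user, background, S):
    """Per-dimension LR, SNR, RL, SD, and DR estimates under Gaussian fits.

    user:       I x D enrolment samples of one user
    background: M x D samples of the population
    S:          number of equal-probable intervals per dimension (for DR)
    """
    mu_j, sd_j = user.mean(axis=0), user.std(axis=0) + 1e-12
    mu_b, sd_b = background.mean(axis=0), background.std(axis=0)

    # eq. (3): LR approximated at the user mean (where the user pdf peaks).
    lr = norm.pdf(mu_j, mu_j, sd_j) / norm.pdf(mu_j, mu_b, sd_b)
    snr = sd_b**2 / sd_j**2                                        # eq. (4)
    rl = 0.5 * (1 + erf(np.abs(mu_j - mu_b) / np.sqrt(2 * sd_j**2)))  # eq. (5)
    sd = sd_j                                                      # Section 3.1.4

    # eq. (6): user mass captured by the EP interval containing the user mean.
    dr = np.empty(user.shape[1])
    for d in range(user.shape[1]):
        cuts = np.quantile(background[:, d], [i / S for i in range(1, S)])
        idx = int(np.searchsorted(cuts, mu_j[d]))
        lo = -np.inf if idx == 0 else cuts[idx - 1]
        hi = np.inf if idx == S - 1 else cuts[idx]
        dr[d] = norm.cdf(hi, mu_j[d], sd_j[d]) - norm.cdf(lo, mu_j[d], sd_j[d])
    return lr, snr, rl, sd, dr

def select(measure, D_fs, largest=True):
    """Eqs. (7)-(8): indices of the D_fs largest (or smallest) measurements."""
    order = np.argsort(measure)
    return order[::-1][:D_fs] if largest else order[:D_fs]
```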
For measuring the system's false acceptance rate (FAR), each image of the corresponding user is matched against that of every other user according to its corresponding image index, while for the false rejection rate (FRR) evaluation, each image is matched against every other image of the same user, for every user. In the subsequent experiments, the equal error rate (EER), the error rate at which FAR = FRR, is used to compare the discretization performance among the different discretization schemes, since it is a quick and convenient way to compare their performance accuracy. Basically, the performance is considered better when the EER is lower.

The experiments can be divided into three parts. The first part identifies the reliable discriminative feature-selection measures among those listed in the previous section. The second part examines the performance of our approach and illustrates that replacing LSSC with a DBR- or BRGC-encoding scheme in our approach achieves a much poorer performance when high entropy is imposed, because of the conventional performance-entropy tradeoff of DBR- and BRGC-encoding-based discretization. The last part scrutinizes and reveals how one can attain a reliable estimate of the parameter $D_{fs}$ in achieving the highest possible discretization performance.

The experiments were carried out based on two different dimensionality-reduction techniques, ERE [28] and FDA [27], and two different datasets, FRGC and FERET. In the first two parts of the experiments, the 4453 raw dimensions of the FRGC images and the 1080 raw dimensions of the FERET images were both reduced to D = 100 dimensions. For the last part, the raw dimensions of the images from both datasets were reduced to D = 50 and 100 dimensions for analytic purposes. Note that EP quantization was employed in all parts of the experiments.

4.2. Performance assessment

4.2.1. Experiment Part I: Identification of reliable feature-selection measures

Based on the fixed bit-allocation principle, n bits are assigned equally to each of the D feature dimensions. A Dn-bit binary string is then extracted for each user by concatenating the n-bit binary outputs of the individual dimensions. Since DBR as well as BRGC is a code comprising the entire set of $2^n$ n-bit codewords for labelling $S = 2^n$ intervals in every dimension, the single-dimensional entropy l can be deduced from (9) as

$$l = \log_2 S = \log_2 2^n = n \quad (13)$$

The total entropy L is then equal to the length of the binary string:

$$L = \sum_{d=1}^{D} l = \sum_{d=1}^{D} n = Dn \quad (14)$$

Note that L = 100, 200, 300, and 400 correspond to n = 1, 2, 3, and 4, respectively, for each baseline scheme (D = 100). For the feature-selection-based discretization schemes to provide the same amount of entropy (with $n_{fs}$ and $l_{fs}$ denoting the number of bits and the entropy of each selected dimension, respectively), we have

$$L = \sum_{d=1}^{D_{fs}} l_{fs} = \sum_{d=1}^{D_{fs}} n_{fs} = D_{fs}\,n_{fs} \quad (15)$$

With this, L = 100, 200, 300, and 400 correspond to $l_{fs} = n_{fs}$ = 2, 4, 6, and 8, respectively, for $D_{fs}$ = 50. This implies that the number of segments in each selected feature dimension is now larger than in the usual case by a factor of $2^{n_{fs}-n}$.
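The bookkeeping in eqs. (13)-(15), together with the LSSC code lengths derived next, reduces to a few lines; this sketch (Python) tabulates, for each target entropy L, the per-dimension entropy, the LSSC codeword length, and the total string length for the baseline (D = 100) and the feature-selection variant (D_fs = 50), reproducing the figures quoted in the following paragraph.

```python
import math

def lssc_budget(L, D):
    """Per-dimension entropy l, LSSC length n = 2^l - 1, total bits D*n."""
    l = math.ceil(L / D)          # entropy per dimension, eqs. (9)-(10)
    n = 2 ** l - 1                # LSSC codeword length
    return l, n, D * n            # actual bit length, eqs. (11)-(12)

for L in (100, 200, 300, 400):
    base = lssc_budget(L, D=100)
    fs = lssc_budget(L, D=50)
    print(f"L={L}: baseline l={base[0]}, n={base[1]}, N={base[2]}; "
          f"selection l_fs={fs[0]}, n={fs[1]}, N={fs[2]}")
# L=100: baseline (1, 1, 100)   vs selection (2, 3, 150)
# L=400: baseline (4, 15, 1500) vs selection (8, 255, 12750)
```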
For the LSSC-encoding scheme, which utilizes longer codewords than DBR and BRGC in each dimension to fulfil a system-specified entropy requirement, the relation between the bit length $n_{LSSC}$ and the single-dimensional entropy l can be described by

$$n_{LSSC} = S - 1 = 2^l - 1 \quad (16)$$

and for our approach, from (10), we have

$$n_{LSSC(fs)} = 2^{l_{fs}} - 1 = 2^{\lceil L/D_{fs} \rceil} - 1 \quad (17)$$

For the baseline discretization scheme of EP + LSSC with D = 100, $L = Dl = D \log_2(n_{LSSC} + 1) = 100 \log_2(n_{LSSC} + 1)$. Thus, L = {100, 200, 300, 400} corresponds to l = {1, 2, 3, 4} and $n_{LSSC}$ = {1, 3, 7, 15}, and the actual length of the extracted bit string is $D\,n_{LSSC}$ = {100, 300, 700, 1500}. For the feature-selection schemes with $D_{fs}$ = 50, where $L = D_{fs}\,l_{fs} = D_{fs} \log_2(n_{LSSC(fs)} + 1) = 50 \log_2(n_{LSSC(fs)} + 1)$, L = {100, 200, 300, 400} corresponds to $l_{fs}$ = {2, 4, 6, 8} and $n_{LSSC(fs)}$ = {3, 15, 63, 255}, and the actual length of the extracted bit string becomes $D_{fs}\,n_{LSSC(fs)}$ = {150, 750, 3150, 12750}. The implication here is that when a particularly large entropy specification is imposed on a feature-selection scheme, a much longer LSSC-generated bit string will always be required.

Figure 5 illustrates the EER performance of the (I) EP + DBR, (II) EP + BRGC, and (III) EP + LSSC discretization schemes adopting the different discriminative-measure-based feature selections, with respect to that of the baseline (discretization without feature selection, where $D_{fs}$ = D), based on the (a) FERET and (b) FRGC datasets. 'Max' and 'Min' in each subfigure refer to whether the $D_{fs}$ largest or smallest measurements were adopted for each feature-selection method, as described in (7) and (8).

A great discretization performance achieved by a feature-selection scheme basically implies a reliable measure for estimating the discriminativity of the features. In all the subfigures, it can be noticed that the discretization schemes that select features based on the LR, RL, and DR measures give the best performance among the feature-selection schemes. RL seems to be the most reliable discriminative measure, followed by LR and DR. In contrast, SNR and SD turn out to be poor discriminative measures that cannot guarantee any improvement over the baseline scheme.

When the LSSC encoding in our four-step approach (see Section 3) is replaced with DBR in Figure 5Ia, Ib, and with BRGC in Figure 5IIa, IIb, the RL-, LR-, and DR-based feature-selection schemes manage to outperform the respective baseline scheme at low L. However, in most cases, these DBR- and BRGC-encoding-based [...]
[...] between any pair of interval indices is not equal to the Hamming distance incurred between the [...]

Figure 6. EER and ROC performances of EP + DBR, EP + BRGC, and EP + LSSC discretizations with reliable feature selection ($D_{fs}$ = 50). ERE feature extraction and the FERET and FRGC datasets were adopted. The baseline is referred to as the reference scheme without feature-selection capability (discretization [...])

[...] discriminative feature selection can reliably be preserved. Therefore, along with the employment of an unsupervised quantization approach, binary strings that fulfil all three desired criteria (discriminative, informative, and privacy-protective) can potentially be derived. From both the EER and ROC plots in Figure 6, the performance curves of LSSC-encoding-based discretizations with LR-, RL-, and DR-based feature selection [...]

[...] Experiments, based on two different numbers of (I) FDA- and (II) ERE-extracted features and two different numbers of users, were evaluated [...] dimensions to be utilized to avoid any bit pattern being similarly repeated among other users. Taking the performance curves in Figure 7Ia, IIa as an instance, using $D_{fs}$ = 5 to represent 60 users and 200 users is apparently not as effective as using $D_{fs}$ = 12, even though $D_{fs}$ = [...]

[...] dataset; and 75 and 150 users for the FRGC dataset) and the number of extracted dimensions (D = 50 for FDA; and D = 100 for ERE) to observe the performance of the discretization schemes in relation to [...]

[...] in the following aspects:

• BRGC- and DBR-encoding schemes are not appropriate for generating highly discriminative, informative, and privacy-protective bit strings, owing to their inability to uphold the perfect discrete-to-binary mapping behavior for performance preservation when a high entropy requirement is imposed.

• Since the LSSC-encoding scheme is able to maintain the discriminativity of [...] components and drive it into a stable state (with insignificant fluctuations) irrespective of how high the entropy requirement could be, this encoding scheme appears to be extremely useful when it comes to discriminative and informative bit-string generation.

• Our approach integrates high-quality feature extraction, discriminative feature selection, unsupervised quantization, and LSSC encoding to address [...]

[...] the performance, security, and privacy criteria of a binary representation. Among the five discriminative measures in our evaluation, LR, RL, and DR [...]

5. Conclusion

In this article, we have proposed a four-step approach to generate highly discriminative, informative, and privacy-protective binary representations based on a fixed-bit-allocation principle. The four steps include discriminative feature extraction, discriminative feature selection, unsupervised quantization, and LSSC encoding. Although our binary strings are capable of fulfilling the desired criteria, they could be significantly longer than those of any typical static bit-allocation approach, due to the employment of LSSC encoding and feature selection, thus requiring advanced storage and processing capabilities of the biometric system. We have investigated a couple of existing measures to [...]

[...] should not be set too small, to avoid an inefficient user-representation problem and an enormous bit-redundancy overhead. Also, it should not be fixed too large, to avoid trivial improvement relative to the baseline. [...]

Acknowledgements

This study was supported by the Korea Science and Engineering [...]

References

[...] Workshop on Automatic Identification Advanced Technologies (AutoID '05), 21–26 (2005)

10. J-P Linnartz, P Tuyls, New shielding functions to enhance privacy and prevent misuse of biometric templates, in 4th International Conference on Audio and Video Based Person Authentication (AVBPA 2004), LNCS 2688, 238–250 (2003)

11. ABJ Teoh, A Goh, DCL Ngo, Random multispace quantisation as an analytic mechanism for Biohashing of biometric and random identity inputs. IEEE Trans Pattern Anal Mach Intell. 28(12), 1892–1901

[...] Bowyer, J Chang, K Hoffman, J Marques, J Min, W Worek, Overview of the Face Recognition Grand Challenge, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR '05), 1, 947–954 (2005)

doi:10.1186/1687-6180-2011-107
Cite this article as: Lim and Teoh: An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation. EURASIP Journal on Advances in Signal Processing 2011, 2011:107.
