
edge sharpening in fingerprint image processing

Status
Not open for further replies.

I don't think Hough can be applied.
Really?

Also, can I get some help regarding Gabor filters? I am clueless as to what they are.

Thanks.

Added after 1 hour 17 minutes:

Hi, if anyone has gone through Raymond Thai's thesis, could you please tell me about equations 2.10 and 2.11 (the Gaussian smoothing equations)? What do the uw and vw mean?

Thanks.
 


That part just smooths out the values of the orientation field with a Gaussian mask. Equations 2.10 and 2.11 simply represent a 2D filtering process, that's all. So vw and uw should just be v and u; I think they are typos. Otherwise w would have to be 1, 2, ... = the number of pixels skipped in the filtering, which doesn't make much sense.
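To make the smoothing step concrete, here is a minimal pure-Python sketch, not Thai's code; the replicate-border handling and the normalized box/Gaussian mask passed in are my own assumptions. Orientation is only defined modulo pi, so the angles are doubled into vector components, each component is filtered with the mask, and the angle is recovered with atan2:

```python
import math

def smooth_orientation(theta, kernel):
    """Smooth an orientation field (2D list of angles in radians) by
    filtering the doubled-angle vector components, in the spirit of
    eqs. 2.10/2.11. `kernel` is a normalized 2D mask (odd size)."""
    h, w = len(theta), len(theta[0])
    k = len(kernel) // 2
    phi_x = [[math.cos(2 * theta[i][j]) for j in range(w)] for i in range(h)]
    phi_y = [[math.sin(2 * theta[i][j]) for j in range(w)] for i in range(h)]

    def conv(img, i, j):
        s = 0.0
        for u in range(-k, k + 1):          # u, v index the mask -- the
            for v in range(-k, k + 1):      # "uw, vw" of the thesis
                # replicate borders by clamping (my assumption)
                ii = min(max(i + u, 0), h - 1)
                jj = min(max(j + v, 0), w - 1)
                s += kernel[u + k][v + k] * img[ii][jj]
        return s

    return [[0.5 * math.atan2(conv(phi_y, i, j), conv(phi_x, i, j))
             for j in range(w)] for i in range(h)]
```

A constant orientation field passes through unchanged, which is a quick sanity check that the doubling and halving are consistent.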
 

OK, I get it.
Just one more thing: in equation 2.6, i.e. the orientation estimation equation, Vy is calculated by multiplying the squared terms of the Sobel gradients, but in some papers I see it calculated by taking the difference of the squared Sobel gradients. Which should I follow?

Thanks.

Added after 3 hours 54 minutes:

I went through it again; Raymond Thai calculates the local orientation pixel-wise, while the other paper does it block-wise. So maybe that is the difference?
These are the points:
1. Divide the input fingerprint image into non-overlapping blocks of size W x W.
2. Compute the gradients Gx(i, j) and Gy(i, j) at each pixel (i, j) using the Sobel or Marr-Hildreth operator.
3. The least-squares estimate of the local orientation of the block centered at (i, j) is
(in the formula)..

Correct me if I am wrong, but is this how it works: you divide the fingerprint into equal-sized blocks, and in each block the pixels that make up the ridges (i.e. the foreground pixels) are oriented at a particular predominant angle. There is one such angle per block, and together these make up the local orientation field. And if a block holds a minutia point, then that minutia is oriented at the block's angle?

Have I understood it right?
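For reference, the least-squares orientation estimate in those steps can be sketched as below. I am assuming the common doubled-angle formulation (Vx accumulates the cross term 2*Gx*Gy and Vy the difference Gx^2 - Gy^2); papers differ in which symbol gets which sum, so treat the exact naming as an assumption rather than Thai's notation:

```python
import math

def block_orientation(gx, gy, i, j, W):
    """Least-squares estimate of the dominant ridge orientation in the
    W x W window centered at (i, j), given precomputed gradient images
    gx, gy (2D lists). Doubled-angle form: summing 2*Gx*Gy and
    Gx^2 - Gy^2 lets opposite gradient directions reinforce rather
    than cancel, since ridge orientation is only defined modulo pi."""
    h, w = len(gx), len(gx[0])
    vx = vy = 0.0
    for u in range(i - W // 2, i + W // 2 + 1):
        for v in range(j - W // 2, j + W // 2 + 1):
            if 0 <= u < h and 0 <= v < w:   # ignore out-of-image pixels
                vx += 2.0 * gx[u][v] * gy[u][v]
                vy += gx[u][v] ** 2 - gy[u][v] ** 2
    return 0.5 * math.atan2(vx, vy)   # orientation in (-pi/2, pi/2]
```

Whether the window slides pixel by pixel (Thai) or jumps block by block (the other paper) only changes which (i, j) you evaluate this at; the estimate itself is the same.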
 

1.Divide the input fingerprint image into non-overlapping blocks of size W x W

You misunderstood. It's just a basic fact that to find the gradient at a point (i, j), it is necessary to use the neighboring pixels around it. In general it is easiest to use a square window (W x W) "centered at (i, j)" <--- the operator is applied in a square window slid across the image, not by partitioning the image into blocks.

You might want to look at some books on image processing. I saw some uploaded here.

Which operator to use to find the gradient? It doesn't matter, as long as the resulting gradient estimate is good.

I think it is important to understand what the author was doing, but not necessarily to follow it exactly step by step.
 

Well, if you don't partition the image into blocks, then how will you get the orientation map diagram shown in Raymond Thai's report? I am so confused.
Yes, maybe I should refer to some books.
Where are they, actually?
 

Hi me2please, I downloaded the book you posted a while back, but I was never able to find a DjVu reader. Can you name some, or any other way to open the file?
 

www.lizardtech.com

I didn't mean that it doesn't use neighboring pixels. I meant that it doesn't partition the image into "non-overlapping blocks". The operator uses a square neighborhood, but it is applied at every point, not just from one block to the next non-overlapping block.

I didn't mean to suggest a book on fingerprint image processing. I meant fundamental image processing. Seriously!
 

Thanks, I got the browser plugin.

Yes, I need to develop my basics. OK, but I do understand now that pixel operations use a local neighborhood: if a window is used, it is centered on every pixel. So the orientation operation is done the same way, determining the orientation angle for each and every pixel in the image with a local neighborhood window centered at that pixel.

Subsequently, the ridge frequency is also found the same way for each pixel in the image. AND, to move further, the Gabor filter is applied at each pixel, using the data available for that pixel.

Please don't tell me I am wrong this time too... :roll:
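That is the usual pipeline. A minimal sketch of one even-symmetric Gabor mask follows, assuming the standard form (a Gaussian envelope rotated into the ridge frame, multiplied by a cosine at the local ridge frequency); the mask size and sigma values here are illustrative placeholders, not values from the thesis:

```python
import math

def gabor_kernel(theta, freq, size=9, sigma_x=4.0, sigma_y=4.0):
    """Even-symmetric Gabor mask tuned to ridge orientation `theta`
    (radians) and ridge frequency `freq` (cycles per pixel). During
    enhancement, one such mask is evaluated at each pixel using that
    pixel's locally estimated orientation and frequency."""
    k = size // 2
    mask = []
    for x in range(-k, k + 1):
        row = []
        for y in range(-k, k + 1):
            # rotate coordinates into the ridge frame
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
            row.append(env * math.cos(2 * math.pi * freq * xr))
        mask.append(row)
    return mask
```

The filter reinforces ridges running at angle theta with the expected spacing 1/freq and suppresses everything else, which is why it needs the per-pixel orientation and frequency maps computed first.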
 

Hi,
You can also try the Canny edge detector; I think it performs better than the usual Sobel-type operators. Also, there is a book specifically on fingerprint recognition by Maltoni available in the EDA books upload/download section; I think it might be helpful to you.
pimr
 

pimr said:
You can also try the Canny edge detector; I think it performs better than the usual Sobel-type operators. Also, there is a book specifically on fingerprint recognition by Maltoni in the EDA books upload/download section.

I tried the Laplacian and Sobel, but it was not successful, so I dropped the whole edge detection process and moved on to ridge extraction. To that end I am trying Gabor now.
 

Hi,

I have a doubt regarding the Sobel operator. Should the matrix be divided by 4? Like:

|1 0 -1|
|2 0 -2| * (1/4)
|1 0 -1|
 

No!!! There is no requirement for the division. That kind of normalization matters when a filter's coefficients sum to something nonzero, because then the output picks up a scaled copy of the local brightness and the whole image gets offset. The Sobel coefficients sum to 0, so there is no such offset, and the 1/4 factor would only scale the gradient magnitude by a constant. You really only need that division for averaging filters.

Regards,
asymbian.
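The point above can be checked numerically: because the Sobel coefficients sum to zero, a flat region gives exactly zero response with or without the 1/4 factor. A small pure-Python check (the helper name `convolve_at` is mine, not from any library):

```python
def convolve_at(img, kernel, i, j):
    """Correlate an odd-sized 2D `kernel` with `img` at pixel (i, j).
    Assumes (i, j) is far enough from the border for the kernel to fit."""
    k = len(kernel) // 2
    return sum(kernel[u + k][v + k] * img[i + u][j + v]
               for u in range(-k, k + 1) for v in range(-k, k + 1))

sobel_x = [[1, 0, -1],
           [2, 0, -2],
           [1, 0, -1]]

# Coefficients sum to zero, so a flat region produces zero output
# regardless of any constant scale factor on the mask.
flat = [[100] * 5 for _ in range(5)]
assert sum(sum(row) for row in sobel_x) == 0
assert convolve_at(flat, sobel_x, 2, 2) == 0

# On an actual intensity step the response is nonzero (the edge),
# and dividing by 4 would merely shrink it by a constant factor.
edge = [[0, 0, 10, 10, 10] for _ in range(5)]
assert convolve_at(edge, sobel_x, 2, 2) == -40
```

Dividing by 4 is harmless if you want gradient values in a particular range, but it changes nothing about where edges are detected.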
 

OK, I got that.

Now I have a problem with minutiae matching. I have two maps now, a query map and a template map, and for each minutia its spatial position and angle. I referred to this:

CHOOSING A COMMON REFERENCE POINT:
It is suggested that the template minutiae should be used as reference points. They should be tried as reference points one by one, starting with the one closest to the center of the image. The matching should continue until the number of matching minutiae is higher than the system threshold, or until enough minutiae have been tested to conclude that it will be impossible to reach the threshold. The matcher should then move on to use the next template minutia as reference point. Whenever a template minutia is tried as reference point, the position and angle of the reference minutia should be used to align the second minutia set before the matching continues.

It says to align the second image with respect to a minutia's angle and position, but what point in the second image should be taken for the alignment?
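One common reading of that passage: each minutia of the query set is tried in turn as the point that gets mapped onto the current template reference minutia, i.e. you loop over (template reference, query reference) pairs. A hedged sketch of the rigid alignment step itself, under that assumption (function and tuple layout are mine; angles in radians):

```python
import math

def align(minutiae, ref, target):
    """Rigidly transform a list of minutiae ((x, y, angle) tuples) so
    that minutia `ref` from this set lands on minutia `target` from the
    other set: rotate about `ref` by the angle difference, then
    translate `ref` onto `target`. After this, matching reduces to
    counting minutiae that fall within position/angle tolerances."""
    da = target[2] - ref[2]
    out = []
    for (x, y, a) in minutiae:
        rx, ry = x - ref[0], y - ref[1]       # coordinates relative to ref
        xr = rx * math.cos(da) - ry * math.sin(da) + target[0]
        yr = rx * math.sin(da) + ry * math.cos(da) + target[1]
        out.append((xr, yr, a + da))
    return out
```

So there is no single special point in the second image; every query minutia is a candidate reference, and the pairing that yields the most matches (or first exceeds the threshold) wins.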
 

Well, can anyone tell me how to implement this Sobel edge detection in VHDL or Verilog, and on what basis?

With regards,
 

I have a question about fingerprint matching.

We have a problem in matching:

We have a database of fingerprints taken by a Veridicom sensor with size 256 x 300, and we apply the following functions to the fingerprint to get the features:
1. Orientation field
2. Binarization
3. Segmentation
4. Thinning
5. Feature extraction

Our features are the distance between each minutia and its nearest 6 minutiae, and the angle difference between each minutia and those same nearest 6 minutiae. In our matching technique we compare each distance around a minutia of the online fingerprint with the 6 distances of each template minutia of the same type, by subtracting the distance from each of the 6 distances. We do the same for the angle differences. If at least 3 minutiae have close distances and close angle differences, we say that the minutia in the online print matches the one in the template.

Our problem is the threshold for the distance differences and for the angle differences. We have tried many methods to get that threshold and failed. Can anyone help us find the threshold, or suggest another matching technique with given values for the thresholds? We will deliver our graduation project soon.
 

I need a MATLAB implementation of Raymond Thai's thesis on fingerprints. Please help me, I have to submit my degree project.
**broken link removed**
 

Not sure whether this will be applicable or pedantic, but it looks like everyone is concentrating on edge detection. If instead you take your data, transform it into the spatial frequency domain, pass it through a highpass filter, and then inverse-transform it back, you will get an automatic enhancement of the edges.
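A tiny 1D illustration of that idea (a 2D image works the same way, with a 2D transform): zero the low-frequency DFT bins and inverse-transform, and what remains is concentrated at the sharp transitions. Pure-Python DFT for self-containment; the cutoff value is illustrative:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT; returns the real part (spectrum kept symmetric)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def highpass_enhance(signal, cutoff):
    """Zero all frequencies with |k| < cutoff (both spectrum halves,
    to keep conjugate symmetry) and transform back. The residual is
    dominated by edges, since smooth regions live at low frequencies."""
    X = dft(signal)
    N = len(X)
    for k in range(N):
        if min(k, N - k) < cutoff:
            X[k] = 0
    return idft(X)
```

On a step signal the output has zero mean and strictly less energy than the original deviations, with what remains clustered around the step, which is exactly the edge-enhancement effect described.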
 
