Welcome to EDAboard.com

normalization of histogram

Status
Not open for further replies.

mamech

hello

I am a newbie in the world of image processing. I want to build a simple classifier using neural networks in Scilab. I created 3 photos: one of a circle, one of a rectangle, and one of a triangle. I wanted to get a descriptive value for these images, so I converted them to binary and computed their histograms. The problem is that the histogram values are not good enough to be fed to a neural network. For example, I got these histograms for the three images:
imhis1 =

2930.
315502.

imhis2 =

3504.
314928.

imhis3 =

4160.
314272.

I tried to give these inputs as they are to the neural network toolbox, and I got an error related to a singularity problem. I concluded that this is because the numbers are so huge. I know that I am missing something. Should I divide the histogram by a factor? I read about something called histogram normalization; does it have something to do with this?


Code:
// load the three shape images (SIVP)
im1=imread('E:\Science\Courses\Artificial Intelligience\Learning - Neural Networks\Image Recognition\shape1.jpg');
im2=imread('E:\Science\Courses\Artificial Intelligience\Learning - Neural Networks\Image Recognition\shape2.jpg');
im3=imread('E:\Science\Courses\Artificial Intelligience\Learning - Neural Networks\Image Recognition\shape3.jpg');
// convert to binary images with a threshold of 0.2
im1b=im2bw(im1,0.2);
im2b=im2bw(im2,0.2);
im3b=im2bw(im3,0.2);
// 2-bin histograms (black and white pixel counts); no trailing ';' so the results are echoed
imhis1=imhist(im1b,2)
imhis2=imhist(im2b,2)
imhis3=imhist(im3b,2)
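To answer the normalization question in general terms: dividing each bin count by the total number of pixels turns the raw counts into fractions in [0, 1], which is a much better scale for a neural-network input. A minimal sketch of that idea in NumPy (not Scilab — the function name and toy image are just for illustration):

```python
import numpy as np

def normalized_hist(binary_img):
    """Two-bin histogram of a 0/1 image, scaled to fractions that sum to 1."""
    counts = np.array([np.sum(binary_img == 0),
                       np.sum(binary_img == 1)], dtype=float)
    return counts / counts.sum()  # divide by the total pixel count

# A toy 4x4 "image" with 3 foreground pixels out of 16
img = np.array([[0, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]])
print(normalized_hist(img))  # fractions of black and white pixels
```

The same one-liner applies to the Scilab output above: dividing each `imhis` vector by the image's pixel count (here 2930 + 315502 = 318432) gives values on the order of 0.01–0.99 instead of hundreds of thousands.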
 

Using the colour histogram (here, of a grayscale/binary image) as the feature for image matching is, in my personal opinion, not a suitable criterion for a classifier, unless either the size and "colour" of the pictures differ considerably, or you need the fastest possible algorithm. In fact, I think histogram equalization is nothing more than an image-enhancement tool to emphasize some features of the image, but no more than that. Obviously this is a personal opinion; in image processing there are no general rules, and in your particular case you can check whether it meets your needs, who knows... By the way, why not use the raw image itself as input to the neural network?
 

The images are fairly simple; I just want to test my understanding of image classification, so I wanted to check it with an example (I have attached some of the photos I used).
How do I use the raw image with the neural network? Do you mean the pixels themselves become the input to the network?

shape3.jpg

shape2.jpg
 

the images are fairly simple

Okay, but when using only the colour histogram as a classifier: if both pictures above have the same colour and the same perimeter (i.e. the same linear length), then, if I'm not mistaken, they would produce exactly the same histogram, because there would be the same number of pixels with the same "colour".
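That ambiguity is easy to demonstrate. A quick NumPy sketch (not Scilab; the shapes are made up for illustration): two clearly different binary shapes with the same number of foreground pixels produce identical two-bin histograms, so the histogram alone cannot tell them apart.

```python
import numpy as np

def hist2(img):
    """Two-bin histogram: [black pixel count, white pixel count]."""
    return [int(np.sum(img == 0)), int(np.sum(img == 1))]

# A hollow 3x3 square outline: 8 foreground pixels on a 5x5 grid
square_outline = np.zeros((5, 5), dtype=int)
square_outline[1:4, 1:4] = 1
square_outline[2, 2] = 0

# Two horizontal bars: also 8 foreground pixels, very different shape
bars = np.zeros((5, 5), dtype=int)
bars[0, :4] = 1
bars[4, :4] = 1

print(hist2(square_outline))  # [17, 8]
print(hist2(bars))            # [17, 8] -- identical, yet the shapes differ
```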

How do I use the raw image with the neural network? Do you mean the pixels themselves become the input to the network?

Obviously that would require a lot of processing resources and time; sorry if I wasn't clear. When I said the raw image, I was rather referring to the clustered image: split the whole picture into smaller "squares", then optionally apply some maths to dilate the lines and take the average colour of each square, and only after that apply the neural learning process.
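The tiling-and-averaging step above can be sketched in a few lines of NumPy (not Scilab; the function name, block size, and toy image are assumptions for illustration). Each tile's mean becomes one feature, so an 8x8 image with 4x4 tiles yields just 4 inputs instead of 64:

```python
import numpy as np

def block_features(img, block=4):
    """Split img into block x block tiles and return each tile's mean value."""
    h, w = img.shape
    h -= h % block
    w -= w % block                      # crop so the tiles divide evenly
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3)).ravel()  # one feature per tile, in [0, 1] for binary input

img = np.zeros((8, 8))
img[2:6, 2:6] = 1                       # a filled 4x4 square in the centre
print(block_features(img, block=4))     # 4 features: the fill fraction of each quadrant
```

Unlike a global histogram, these features preserve coarse spatial layout, which is what lets the network distinguish shapes with equal pixel counts.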
 

Ah, thanks, I get it, even if I do not yet know how to do it with SIVP (the Scilab image and video processing toolbox).
By the way, the histogram I used is not a colour histogram. I converted the image to a binary image first and took its histogram (a binary intensity histogram). Its values were not the same across the images, and I could use them as input to the neural network, but only after scaling them to a reasonable range.
 

