
Image Compression idea

Not open for further replies.


Junior Member level 3
Jun 2, 2005
Regarding image compression, I thought of an idea. Please reply and tell me whether it already exists, or whether it is useless.
The idea: I divide the whole image into blocks. In each block I determine all the distinct color values present, enter those values into a table, and link each one to a shorter symbol with fewer bits than the original value; then I replace the values in the block with their corresponding symbols. There is no loss of data, so the precision of the image is maintained. It fails when all the values in a block are different, but in most cases that will not happen.
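The per-block table scheme described above can be sketched to estimate whether it actually saves bits. This is a minimal illustration of the poster's idea (the function name and the 8-bit color assumption are mine, not from the thread): each block stores a table of its distinct values plus one short index per pixel.

```python
import math

def block_palette_size(block):
    """Estimate compressed size (in bits) of one block under the
    proposed scheme: store a table of the distinct color values,
    then replace each pixel with a shorter index into that table.
    Assumes 8-bit color values; names here are illustrative."""
    palette = sorted(set(block))
    index_bits = max(1, math.ceil(math.log2(len(palette))))
    table_bits = len(palette) * 8          # one 8-bit entry per distinct color
    data_bits = len(block) * index_bits    # one short index per pixel
    return table_bits + data_bits

# A 16-pixel block with only 3 distinct values compresses well:
block = [10, 10, 10, 200, 200, 10, 10, 55, 55, 10, 10, 10, 200, 10, 10, 10]
print(block_palette_size(block), "bits vs", len(block) * 8, "bits raw")
# → 56 bits vs 128 bits raw
```

As the poster notes, the scheme breaks down when every value in the block is distinct: the table overhead then makes the output larger than the raw block.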


I think Huffman codes are a little bit similar to what you have described.

I don't understand your description exactly, but it seems like a form of waveform encoding... read up on it if you're interested.

And it's Huffman encoding

MirekCz said:
I don't understand your description exactly, but it seems like a form of waveform encoding... read up on it if you're interested.

And it's Huffman encoding

Sorry for the misspelling - Huffman.

Excerpt from Robert Sedgewick's book "Algorithms" :

"The Huffman code achieves economy in space by encoding
frequently used characters with as few bits as possible so that the total number
of bits used for the message is minimized....

The first step is to count the frequency of each character within the
message to be encoded...

The next step is to build a “coding tree” from the bottom up according
to the frequencies. (character)"
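The bottom-up tree construction Sedgewick describes can be sketched in a few lines of Python (a minimal illustration, not code from the book): count frequencies, repeatedly merge the two least frequent subtrees, then read the codes off the tree paths.

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Build Huffman codes from character frequencies: frequent
    characters get short codes so the total bit count is minimized."""
    freq = Counter(message)
    # Heap entries are (frequency, tiebreaker, tree); a tree is either
    # a single character or a (left, right) pair of subtrees.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (t1, t2)))
        tie += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
# The frequent character 'a' gets a shorter code than the rare 'c'.
```

Because the codes come from tree paths, no code is a prefix of another, so the encoded bit stream can be decoded unambiguously.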


Actually, your idea can be used on all kinds of files, not only images, because all of them store data under some encoding (e.g. ASCII). As far as I understand, you are trying to optimize that encoding. A simple way to check the efficiency of your idea is to compare it against another lossless compression algorithm; for example, try WinZip. I do not know for sure, but those tools probably use approaches similar to yours.

Regarding your question on source coding (compression): Lempel-Ziv coding is generally more efficient than Huffman coding.
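One member of the Lempel-Ziv family, LZW, can be sketched briefly (a minimal illustration, not the exact variant the poster has in mind): unlike Huffman, the dictionary of previously seen strings is built on the fly from the data itself, so no frequency table has to be stored alongside the output.

```python
def lzw_compress(data):
    """Minimal LZW sketch: grow a dictionary of byte strings as the
    input is scanned, emitting one dictionary code per longest match."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single bytes
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                        # extend the current match
        else:
            out.append(dictionary[w])     # emit code for longest match
            dictionary[wc] = len(dictionary)  # learn the new string
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_compress(b"abababab")
# → [97, 98, 256, 258, 98]: 5 codes for 8 bytes, since repeated
#   substrings like "ab" and "aba" enter the dictionary and are reused.
```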

As for image compression, the most popular codec is JPEG2000, which uses the wavelet transform. A typical image compression book will introduce many of the methods in use.

Opt for JPEG2000...the best encoding technique for still images as of now.
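The wavelet idea behind JPEG2000 can be shown with the Haar transform, the simplest wavelet (JPEG2000 itself uses the 5/3 and 9/7 filters, not plain Haar; this is only a sketch of the principle):

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: split an even-length
    signal into pairwise averages (coarse shape) and pairwise
    differences (fine detail)."""
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

avg, det = haar_step([10, 12, 11, 9, 100, 102, 101, 99])
# avg → [11.0, 10.0, 101.0, 100.0]
# det → [-1.0, 1.0, -1.0, 1.0]
```

Smooth image regions produce small detail coefficients, which quantize to zero and compress very well; that concentration of energy in few coefficients is what makes wavelet codecs effective.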

I think the author should learn more about lossless and lossy compression methodology.
See for example

Proceedings of SPIE
Volume 3970 -- Media Processors 2000
Computationally efficient lossless image coder
Parthasarathy Sriram and Subramania I. Sudharsanan
pp. 50-58

Friend, a good introduction is "Introduction to Data Compression," 2nd Edition, by Sayood.

