Even with compression, the overhead of compressing 100 bytes would outweigh the savings. In general, a compressor prepends a block of data to the output, holding information about the compression method and block identifiers. With only 100 bytes of input it is difficult to make the overall result smaller. Compression works best on large data sets, where the header is small compared to the part being reduced.
The best case is when the 100 bytes are all the same value; the worst case is when they are completely random. As 'htg' states, you need to know more about the properties of the data before even considering whether compression is worth the effort.
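A quick sketch of both cases, using Python's zlib as a stand-in for whatever compressor you had in mind (the exact sizes will vary by library, but the shape of the result won't):

```python
import os
import zlib

# Best case: 100 identical bytes compress to far fewer than 100,
# since the run collapses almost entirely.
uniform = b"\x00" * 100
packed_uniform = zlib.compress(uniform)
print(len(uniform), "->", len(packed_uniform))

# Worst case: 100 random bytes are essentially incompressible, and the
# container overhead (zlib header, checksum, deflate block framing)
# makes the "compressed" result LARGER than the input.
random_data = os.urandom(100)
packed_random = zlib.compress(random_data)
print(len(random_data), "->", len(packed_random))
```

Run it a few times: the uniform buffer shrinks dramatically, while the random buffer always grows by the fixed container overhead.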
Brian.