Wavelet-Based Image Compression Using Subband Threshold
ABSTRACT
Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight at every level. This minimum-weight subband in each level, which contributes least to image reconstruction, undergoes a threshold process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to observe the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.
INTRODUCTION:
Image compression is a technique of encoding an image so that it can be stored or transmitted using as few bits as possible. Presently, the most common compression methods for still images fall into two categories: Discrete Cosine Transform (DCT) based techniques and methods based on the wavelet transform. The widely used JPEG standard achieves compression by applying the DCT to the image, whereas wavelet-based methods generally use the discrete wavelet transform (DWT) for this purpose.
With recent developments in wavelet compression, this method has emerged as an efficient coding approach for still image compression, outperforming today's DCT-based JPEG standard. This state-of-the-art compression technique is accomplished in three stages: 1) wavelet transform, 2) zerotree coding and 3) entropy coding. The wavelet transform decomposes the image into several multi-resolution subbands in an octave manner, and the original image can be perfectly reconstructed from them. This multi-level decomposition is performed using two-dimensional wavelet filters (basis functions), among which the Haar and Daubechies filters are very popular. The appropriate choice of filters for the transform is very important in compression schemes to achieve high coding efficiency. The splitting of a subband into four subbands at the next higher level using the wavelet transform is shown in Figure 1.
Among the wavelet coding schemes, Shapiro was the first to develop the Embedded Zerotree Wavelet (EZW) coding scheme in 1993 [1]. It exploits dependencies among the subbands produced by the wavelet decomposition and uses zerotrees to achieve high compression. For successive approximation quantization, the coefficients in the subbands are scanned in a pre-determined order and their values are compared with an octavely decreasing threshold. A higher compression ratio is achieved using a variable-length coding method that depends on the previously coded EZW data. Sometimes adaptive arithmetic coding is used for further compression, at a cost of additional complexity and computation time.
WAVELET IMAGE COMPRESSION:
Image compression can be implemented using a variety of algorithms, such as transform-based schemes, vector quantization and subband coding. The selection of an image compression algorithm depends mostly on the achievable compression ratio and the quality of the reconstructed images. Wavelet transform based coding is an emerging field for image compression with high coding efficiency. Recently, a new wavelet-based image compression scheme, JPEG 2000, has been standardized, indicating that wavelet compression is a promising compression scheme for the future. This section presents an overview of wavelet image compression and then describes in detail a typical wavelet transform algorithm together with the Embedded Zerotree Wavelet coding scheme.
1. Wavelet Transform
The wavelet transform provides a compact multi-resolution representation of the image. It has an excellent energy compaction property, which makes it well suited to exploiting redundancy in an image to achieve compression.
The discrete wavelet transform (DWT) can be implemented using a two-channel wavelet filter bank in a recursive fashion. For an image, the 2-D DWT is generally computed using a separable approach. The input image is first scanned in the horizontal direction and passed through the lowpass and highpass decomposition filters. The filtered data is then sub-sampled by two in the horizontal direction, producing low-frequency and high-frequency components along that direction. This output is next scanned in the vertical direction, the same filters are applied again, and the result is sub-sampled by two vertically. The transform thus generates the four subbands LL, LH, HL and HH, each one fourth the size of the original image. Most of the energy is concentrated in the low-frequency subband LL, which represents a down-sampled, low-resolution version of the original image, whereas the higher frequency subbands contain the detail information of the image in the horizontal, vertical and diagonal directions, respectively. After one level of transformation, the image can be decomposed further by applying the 2-D DWT to the LL subband in the same manner. This iterative process results in multiple levels of transformation, with the energy further compacted into a few low-frequency coefficients. An example of a three-level decomposition of an image into subbands using the wavelet transform is illustrated in Figure 2(a), whereas Figure 2(b) shows the parent/child relationship among levels.
Figure 2. (a) Three-level wavelet decomposition; (b) relationship between higher- and lower-level coefficients (parents/children)
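As a minimal illustrative sketch of this recursive decomposition (not the authors' implementation), the steps above can be reproduced with the PyWavelets library; the placeholder image and the choice of the Haar wavelet here are assumptions:

    # Sketch: three-level separable 2-D DWT and perfect reconstruction with PyWavelets.
    import numpy as np
    import pywt

    image = np.random.rand(256, 256)                        # stand-in for a grayscale image

    # wavedec2 returns [LL_3, (details level 3), (details level 2), (details level 1)],
    # where each details tuple holds the horizontal, vertical and diagonal subbands.
    coeffs = pywt.wavedec2(image, wavelet='haar', level=3)

    LL3 = coeffs[0]
    print("coarse approximation:", LL3.shape)               # 32x32 for a 256x256 input
    for details in coeffs[1:]:
        print("detail subband shapes:", [band.shape for band in details])

    # The original image is perfectly reconstructed from the subbands.
    reconstructed = pywt.waverec2(coeffs, wavelet='haar')
    print("perfect reconstruction:", np.allclose(image, reconstructed))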
2. Embedded Zerotree Coding
After performing the wavelet transform on the input image, the EZW encoder progressively quantizes the coefficients using a form of bit-plane coding to create an embedded representation of the image. The EZW coder uses the relationship between higher- and lower-level coefficients (parents and children) of the same orientation in the wavelet decomposition to gain its coding efficiency. The coding is performed in two passes applied recursively: (1) the dominant pass and (2) the subordinate pass. In the dominant pass, the magnitudes of the wavelet coefficients are compared with a threshold value T to determine the significant data, i.e., coefficients whose absolute value exceeds the threshold. As the scanning progresses from low to high spatial frequencies, a two-bit symbol is used to encode the sign and position of every significant coefficient. Positive significant (POS) and negative significant (NEG) coefficients exceed the current threshold, which starts at the highest power of two below the peak wavelet coefficient value. If a wavelet coefficient is insignificant but has a significant descendant, it is marked as an isolated zero (IZ) and must be coded separately. Otherwise the coefficient is classified as a zerotree root (ZTR), meaning that the coefficient and all of its descendants are insignificant and can be coded with just one symbol. The inclusion of the ZTR symbol greatly increases coding efficiency, as it allows the encoder to exploit the inter-scale correlation in an image. The subordinate pass follows, in which all significant coefficients detected so far are refined using the Successive Approximation Quantization (SAQ) approach: one additional bit of precision is added to each significant coefficient by encoding its next lower bit as a zero or a one. Once all the coefficients have been coded for a given threshold T, the process is repeated for the remaining insignificant coefficients with the threshold halved. Overall, the encoder transmits an embedded bitstream ordered by importance according to the threshold magnitude, consisting of these four symbols (indicating the type of significance) together with the refinement bits of all significant coefficients at each level. The EZW technique can be enhanced with entropy coding before transmission to achieve further compression.
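A minimal sketch of the dominant-pass symbol classification and of the starting threshold described above (not the authors' code); the helper names, the simplified descendant test and the ">=" significance convention are assumptions:

    # Sketch of EZW dominant-pass symbol classification. The subband scan order
    # and the handling of already-significant coefficients are omitted for brevity.

    def classify(coeff, threshold, has_significant_descendant):
        """Return the dominant-pass symbol for one wavelet coefficient."""
        if abs(coeff) >= threshold:
            return 'POS' if coeff >= 0 else 'NEG'   # significant: encode the sign
        if has_significant_descendant:
            return 'IZ'                             # insignificant, but its subtree is not
        return 'ZTR'                                # whole subtree insignificant: one symbol

    def initial_threshold(coeffs):
        """Highest power of two below the peak coefficient magnitude."""
        peak = max(abs(c) for c in coeffs)
        T = 1
        while T * 2 <= peak:
            T *= 2
        return T

    print(initial_threshold([3, -7, 21, 14]))   # -> 16
    print(classify(-21, 16, False))             # -> 'NEG'
    print(classify(5, 16, True))                # -> 'IZ'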
Figure 3. Block diagram of a wavelet based image coding algorithm
3. PROPOSED ALGORITHM:
The development of EZW (Embedded Zerotree Wavelet) image coding has attracted great attention among researchers. It is one of the most popular wavelet-based compression algorithms and is widely used in a number of applications. This paper concentrates on the EZW algorithm and proposes an algorithm that is essentially an extension of it. The image is first decomposed into subbands using the wavelet transform; a recursive transformation method is used for multi-level decomposition. The output data is then pre-processed before undergoing zerotree compression. A block diagram of the wavelet-based image coding algorithm is shown in Figure 3.
The main objective of the new algorithm is to improve the compression ratio of an image with minimal loss during reconstruction. The algorithm concentrates on the pre-processing stage of compression and removes some of the data in the transformed image that contributes little to image reconstruction but requires more bits during compression. Exploiting the tradeoff between compression ratio and reconstructed image quality, it eliminates some of the least important image data to achieve further compression with a slight reduction in image quality.
The algorithm is based on the observation that a subband with low coefficient values has little effect on the output compared to subbands with higher values. The higher the values, the stronger the dependence of the reconstruction on that subband, which corresponds to sharp edges in that direction; the lower the values, the smaller (or negligible) the dependence on that subband. In the wavelet-transformed image, the coefficients of the LL subband represent a low-resolution version of the image, whereas the high-frequency subbands contain the detail information in the horizontal, vertical and diagonal directions, respectively. These detail subbands contribute less to image reconstruction, since their coefficients are mostly zero (or close to zero), with only a few large values corresponding to edge and texture information in the image. We propose an algorithm that reduces this least important data in order to attain further compression: the high-valued coefficients are retained, while the low-valued coefficients of the minimum-weight subband are eliminated.
For this purpose, we first find the minimum-weight subbands, and then apply a threshold to eliminate the low-valued data in them that contributes negligibly to image reconstruction. The algorithm uses a weight calculation to select one minimum-weight subband per level. The weight of each subband is calculated by summing the absolute values of all coefficients in the subband. Out of the three detail subbands (LHi, HLi and HHi), the one with the minimum weight at each level is marked as the minimum-weight subband, i.e., one minimum-weight subband is obtained for each level.
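An illustrative sketch of this weight calculation (not the authors' code), again using PyWavelets with a placeholder image; the wavelet choice, the level numbering and the mapping of PyWavelets' (horizontal, vertical, diagonal) detail order onto the paper's HL/LH/HH labels are assumptions:

    # Sketch: compute subband weights and select the minimum-weight detail subband
    # at each level. Weight = sum of absolute coefficient values, as in the text.
    import numpy as np
    import pywt

    image = np.random.rand(256, 256)                        # placeholder grayscale image
    coeffs = pywt.wavedec2(image, wavelet='haar', level=3)  # [LL, coarsest details, ..., finest details]

    names = ('HL', 'LH', 'HH')   # horizontal, vertical, diagonal details (paper's labels)
    min_subbands = []            # one (level_index, subband_name) pair per level

    for lvl, details in enumerate(coeffs[1:], start=1):     # lvl = 1 is the coarsest level here
        weights = {name: float(np.abs(band).sum()) for name, band in zip(names, details)}
        min_name = min(weights, key=weights.get)
        min_subbands.append((lvl, min_name))
        print(f"level {lvl}: weights={weights}, minimum-weight subband={min_name}")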
After identifying the required subband in each level, the algorithm reduces the data present in these subbands according to its importance for reconstruction. Since most of the values are close to zero, the coefficients of the minimum-weight detail subband undergo a threshold process in the transform domain to eliminate the low-valued data in that subband. Coefficients whose magnitude is greater than the set threshold are retained, while those at or below it are set to zero, resulting in only a small loss in picture quality. In our experiments, different threshold values were used to show their effect on the compressed output and the reconstructed image. Zerotree coding is then applied to the thresholded data for compression. The removal of low-valued coefficients from the minimum-weight subbands results in a higher compression ratio with a slight loss in decoded PSNR. The results show that the algorithm achieves better coding efficiency at the cost of a negligible loss in picture quality.
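A minimal, self-contained sketch of the threshold step applied to a single detail subband (the array values are made up; the paper's experiments use T = 2 and T = 5):

    # Sketch: zero out low-valued coefficients in a minimum-weight subband.
    # Coefficients with magnitude at or below T are discarded; others are kept.
    import numpy as np

    def threshold_subband(band, T):
        out = band.copy()
        out[np.abs(out) <= T] = 0.0
        return out

    band = np.array([[0.7, -1.5,  6.0],
                     [3.1, -0.2, -8.4],
                     [1.9,  4.8,  0.0]])

    print(threshold_subband(band, T=2))
    # Only 6.0, 3.1, -8.4 and 4.8 survive; the near-zero values are discarded.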
4. EXPERIMENTAL RESULTS
The proposed algorithm was implemented in software and computer simulation results were obtained. Three different 256x256 8-bit grayscale images, Lena, Barbara and Baboon, were used for the experiments. Experiments showed that a three-level wavelet decomposition achieves the best results in terms of compression ratio and reconstructed PSNR. Therefore, the input image was decomposed with a three-level wavelet transform using the Daubechies 9/7 biorthogonal wavelet filters. The wavelet-transformed data then underwent a pre-processing stage, in which the weight of each detail subband was calculated to find the minimum-weight subband in every level. The absolute values of all coefficients in a subband were summed to obtain the subband weight, and of the three detail subbands in each level, the one with the minimum weight was marked as the minimum-weight subband. During the experiments, the minimum-weight subband was found to be the diagonal detail subband (HH) most of the time, showing that the diagonal subband offers the least contribution to image reconstruction; a horizontal (HL) or vertical (LH) subband was marked as the minimum only a few times. A lossy threshold process was then applied to the minimum-weight subband to remove its low-valued data in order to achieve further compression. In the experiments, threshold values of 2 and 5 were used separately to eliminate coefficients equal to or below them.
The embedded zerotree coding algorithm was implemented after the pre-processing stage for compression. The proposed algorithm used a z-scan coding approach to compress the significant coefficients, starting from the highest level and moving on to the next lower level in a z-fashion. Adaptive arithmetic coding was also applied at the end for further compression, at a cost of additional complexity and computation time.
The effects of different threshold values on the compression ratio and the quality of the reconstructed image were observed. As the threshold is increased, the PSNR of the reconstructed image decreases while the compression ratio rises; in general, a low threshold value performs better, with a negligible drop in PSNR. The algorithm applied the threshold to the minimum-weight subband and generated a smaller coded bitstream (higher compression) with some loss in PSNR at reconstruction. Comparative results of the compressed data size and output PSNR of the proposed method and the EZW method are shown in Table 1, whereas the results of further compression using adaptive arithmetic coding are shown in Table 2. In the tables, "bytes" denotes the compressed output size in bytes and "PSNR" denotes the reconstructed PSNR in decibels (dB); HL, LH and HH denote the horizontal, vertical and diagonal subbands, respectively. These results show that the modified algorithm gives further compression at the cost of a slight decrease in reconstructed PSNR. Figure 4 shows different image outputs of the wavelet-based compression.
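For reference, a minimal sketch of the PSNR figure of merit reported in the tables, assuming 8-bit images and using placeholder arrays (not taken from the paper):

    # Sketch: PSNR between an original and a reconstructed 8-bit grayscale image.
    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
        if mse == 0:
            return float('inf')                  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    a = np.random.randint(0, 256, (256, 256), dtype=np.uint8)          # "original"
    b = np.clip(a + np.random.randint(-2, 3, a.shape), 0, 255).astype(np.uint8)  # "reconstructed"
    print(f"PSNR: {psnr(a, b):.2f} dB")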
5. CONCLUSION
The proposed method exploits the tradeoff between compression ratio and output PSNR, and reduces the least important data in order to attain further compression. After applying the threshold, a better compression ratio is achieved compared to the original EZW coder, with only a slight reduction in PSNR during reconstruction.
REFERENCES
[1] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3445-3462, Dec. 1993.
[2] P. N. Topiwala (Ed.), "Wavelet Image and Video Compression", Kluwer Academic Publishers, 1998.