
The Need For Image Compression Information Technology Essay

Digital image compression reduces the size of images by means of various algorithms and standards. The two common techniques are lossless compression and lossy compression. Lossless compression, as the name indicates, produces no loss in image quality; it is used where the quality and accuracy of the image are extremely important and cannot be compromised, for example in technical drawings and medical images. Lossy compression produces a minor loss of quality in the output image. This loss is almost invisible and hard to identify, so the technique finds use where a minor alteration or loss of quality causes no problem, as in photographs. Different methods and algorithms are used in lossless and lossy compression [1].

Digital image compression has numerous applications, ranging from compressing images for personal use to compressing more crucial images such as medical images. It saves a great deal of storage space and is therefore used extensively for photographs, technical drawings, medical imaging, artworks, maps and so on. Images reduced in size can be sent, uploaded or downloaded in much less time, which makes sharing them a lot easier [1] [6].


Previous research


The objective of this research is to study various methods of digital image compression and to implement image compression using the DCT and the wavelet transform in MATLAB. The goal is a comparative analysis of the two methods in terms of compression ratio, image quality, signal-to-noise ratio and mean square error.
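The property that makes the DCT useful for compression is energy compaction. The implementation in this research is in MATLAB; the following Python sketch (function name and sample values are illustrative, not from the source) computes an orthonormal 1-D DCT-II of a smooth row of pixels and shows that almost all of the signal energy falls into the first two coefficients:

```python
import math

def dct2(x):
    """Orthonormal 1-D DCT-II of a sequence of samples."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

# A smooth 8-sample "row of pixels" -- typical of photographic image data.
row = [100, 102, 104, 106, 108, 110, 112, 114]
coeffs = dct2(row)

# The transform is orthonormal, so total energy is preserved...
energy = sum(v * v for v in row)
# ...but almost all of it compacts into the first two coefficients,
# leaving the rest near zero and cheap to quantize or discard.
compacted = coeffs[0] ** 2 + coeffs[1] ** 2
print(compacted / energy)   # > 0.999
```

Because the DCT concentrates energy this way, the remaining near-zero coefficients can be coarsely quantized or dropped with little visible effect, which is the basis of JPEG-style lossy compression.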



An uncompressed image of 1024 pixels x 1024 pixels x 24 bits requires 3 MB of storage and about 7 minutes for transmission over a high-speed 64 kbit/s ISDN line. If the image is compressed at a 10:1 compression ratio, the storage requirement is reduced to about 300 KB and the transmission time drops to about 40 seconds. Seven 1 MB images can be compressed and transferred to a floppy disk in less time than it takes to send one of the original files, uncompressed, over an AppleTalk network [3].
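These figures follow directly from the bit counts; a quick arithmetic check (the image dimensions, line rate and ratio are taken from the paragraph above, and the 10:1 transmission time works out to roughly 40 seconds at 64 kbit/s):

```python
# Storage and transmission arithmetic for a 1024 x 1024 x 24-bit image
# sent over a 64 kbit/s line, compressed 10:1.
bits = 1024 * 1024 * 24                       # bits in the uncompressed image
size_mb = bits / 8 / 2**20                    # uncompressed size: 3.0 MB
tx_minutes = bits / 64_000 / 60               # transmission time: ~6.6 min

ratio = 10
compressed_kb = bits / ratio / 8 / 1024       # ~307 KB ("300 KB")
compressed_seconds = bits / ratio / 64_000    # ~39 s

print(size_mb, tx_minutes, compressed_kb, compressed_seconds)
```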

In a distributed environment, large image files remain a major bottleneck within systems. Compression is an important component of the solutions available for creating file sizes of manageable and transmittable dimensions. Increasing the bandwidth is another method, but its cost sometimes makes it a less attractive solution. Platform portability and performance are important in selecting the compression/decompression technique to be employed. The easiest way to reduce the size of an image file is to reduce the size of the image itself: by shrinking the image, fewer pixels need to be stored and the file consequently takes less time to load [3] [6].

The figures in Table 1.1 show the qualitative transition from simple text to full-motion video data and the disk space, transmission bandwidth, and transmission time needed to store and transmit such uncompressed data.

Table 1.1 Multimedia data types and uncompressed storage space, transmission bandwidth, and transmission time required. The prefix kilo- denotes a factor of 1000 rather than 1024.


While image compression has increased the efficiency of sharing and viewing personal images, it offers the same benefits to just about every industry in existence. Image compression was most commonly used in the data storage, printing and telecommunications industries; in its digital form it is also at work in fax transmission, satellite remote sensing and high-definition television [4] [6].

In certain industries, the archiving of large numbers of images is required. A good example is the health industry, where the constant scanning and/or storage of medical images and documents takes place. Image compression offers many benefits here, as information can be stored without placing large loads on system servers. Depending on the type of compression applied, images can be compressed to save storage space, or to send to multiple physicians for examination. Conveniently, these images can be decompressed when they are ready to be viewed, retaining the original high quality and detail that medical imagery demands [5] [6].

Image compression is also useful to any organization that requires the viewing and storing of images to be standardized, such as a chain of retail stores or a federal government agency. In the retail store example, the introduction and placement of new products or the removal of discontinued items can be much more easily completed when all employees receive, view and process images in the same way. Federal government agencies that standardize their image viewing, storage and transmitting processes can eliminate large amounts of time spent in explanation and problem solving. The time they save can then be applied to issues within the organization, such as the improvement of government and employee programs [5] [6].

In the security industry, image compression can greatly increase the efficiency of recording, processing and storage. However, in this application it is imperative to determine whether one compression standard will benefit all areas. For example, in a video networking or closed-circuit television application, several images at different frame rates may be required. Time is also a consideration, as different areas may need to be recorded for various lengths of time. Image resolution and quality also become considerations, as does network bandwidth, and the overall security of the system [4] [6].

Museums and galleries consider the quality of reproductions to be of the utmost importance. Image compression, therefore, can be applied very effectively where accurate representations of museum or gallery items are required, such as on a Web site. Detailed images that offer shorter download times and easy viewing benefit all types of visitors, from the student to the discriminating collector. Compressed images can also be used in museum or gallery kiosks for the education of visitors. In a library scenario, students and enthusiasts from around the world can view and enjoy a multitude of documents and texts without having to incur travelling or lodging costs to do so [5] [6].

Regardless of industry, image compression has virtually endless benefits wherever improved storage, viewing and transmission of images are required, and with the many image compression programs available today, there is sure to be one that fits your requirements [4] [6].


There are various software packages and tools available on the market for studying and simulating digital image processing.

1) CVIPtools-

It is a software package for the exploration of computer vision and image processing. One of the primary purposes of the CVIPtools development is to allow students, faculty and other researchers to explore the power of computer processing of digital images. The Windows version of CVIPtools, developed at the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville under the continuing direction of Dr. Scott E Umbaugh, is currently available with the textbook Digital Image Processing and Analysis: Human and Computer Vision Applications with CVIPtools, Second Edition. Different versions of CVIPtools are written in C, C++, C# and VB.NET [7].

2) Lispix-

Lispix is a public domain image analysis program for Microsoft Windows (PC), written and maintained by David Bright. It is useful for processing and analyzing images and stacks of images or data cubes; image pixels can be bit, integer, real, complex or color. Most of Lispix is written in Common Lisp, a language well suited to large programming projects and explorative programming; its dynamic semantics distinguishes it from languages such as C and Ada. Common Lisp features automatic memory management, an interactive incremental development environment, a module system, a large number of powerful data structures, a large standard library of useful functions, a sophisticated object system supporting multiple inheritance and generic functions, an exception system, user-defined types, and a macro system that allows programmers to extend the language. Windows-specific code, written in Allegro Common Lisp, is segregated into a separate folder and is comparatively small. Lispix is designed to be portable; previous versions have also run on the Macintosh. Source code is available on request [8].

3) MacLispix-

MacLispix is a freely available image processing program that runs on the Macintosh. It is written in Macintosh Common Lisp, hence its name. It does some of the same things that NIH Image and some commercial programs do; however, it is primarily a special-purpose research tool for the Microanalysis Research Group at NIST. Even so, many of the tools incorporated into MacLispix have been useful to other researchers. MacLispix's features include stacks (movies, depth profiles, cropping and saving of large data sets), groups (coordinated measurements, color overlays, scatter diagrams), pixel types (bit, byte, integer, RGB, real, complex), statistical measurements (signal/noise determination) and special-purpose widgets (diffraction analysis, segmentation with blobbing and measurement, registration, and principal component analysis) [9].

4) ImageJ-

ImageJ is a public domain, open source Java image processing program inspired by NIH Image for the Macintosh. It runs, either as an online applet or as a downloadable application, on any computer with a Java 1.4 or later virtual machine. Downloadable distributions are available for Windows, Mac OS, Mac OS X and Linux. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images, and it can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations. ImageJ was designed with an open architecture that provides extensibility via Java plugins. Custom acquisition, analysis and processing plugins can be developed using ImageJ's built-in editor and Java compiler, and user-written plugins make it possible to solve almost any image processing or analysis problem. ImageJ is being developed on Mac OS X using its built-in editor and Java compiler, plus the BBEdit editor and the Ant build tool. The source code is freely available. The author, Wayne Rasband ([email protected]), is at the Research Services Branch, National Institute of Mental Health, Bethesda, Maryland, USA [10].

5) NIH image-

NIH Image is a public domain image processing and analysis program for the Macintosh. It was developed at the Research Services Branch (RSB) of the National Institute of Mental Health (NIMH), part of the National Institutes of Health (NIH). It has been superseded by ImageJ, a Java program inspired by NIH Image that runs on the Macintosh, Linux and Windows. Image can acquire, display, edit, enhance, analyze and animate images. It reads and writes TIFF, PICT, PICS and MacPaint files, providing compatibility with many other applications, including programs for scanning, processing, editing, publishing and analyzing images. It supports many standard image processing functions, including contrast enhancement, density profiling, smoothing, sharpening, edge detection, median filtering, and spatial convolution with user defined kernels [11].

6) SIP tool-

The 'Signal and Image Processing Tool' is a multimedia software environment for demonstrating and developing signal and image processing techniques. It has been used at Cal Poly for three years. A key feature is extensibility via C/C++ programming. The tool has a minimal learning curve, making it amenable to weekly student projects. The software distribution includes multimedia demonstrations ready for classroom or laboratory use. SIPTool programming assignments strengthen the skills needed for lifelong learning by requiring students to translate mathematical expressions into a standard programming language in order to create an integrated processing system [12].

7) MATLAB-image processing toolbox-

Image Processing Toolbox provides a comprehensive set of reference-standard algorithms, functions, and apps for image processing, analysis, visualization, and algorithm development. You can perform image enhancement, image deblurring, feature detection, noise reduction, image segmentation, geometric transformations, and image registration. Many toolbox functions are multithreaded to take advantage of multicore and multiprocessor computers. Image Processing Toolbox supports a diverse set of image types, including high dynamic range, gigapixel resolution, embedded ICC profile, and tomographic. Visualization functions let you explore an image, examine a region of pixels, adjust the contrast, create contours or histograms, and manipulate regions of interest (ROIs). With toolbox algorithms you can restore degraded images, detect and measure features, analyze shapes and textures, and adjust color balance [13].



1) Lossless Compression- If the decompressed image is an exact replica of the original, the compression is called lossless image compression: no information is lost, and decompression reproduces a file identical to the original. Lossless compression is used in applications where exact reproduction of the original image is essential. It breaks the original file into smaller segments that can be stored for future use or transmitted to a remote location, and by reassembling those segments the original image can be reproduced without any loss of information. Because image quality cannot be compromised, lossless compression yields relatively low compression ratios (around 2:1) [15]. Lossless compression methods may be categorized according to the type of data they are designed to compress; common methods are Run-Length Encoding (RLE) and LZW. Lossless compression is required for text and data files, such as bank records and text articles, and is necessary for many high-performance applications, such as geophysics, telemetry, nondestructive evaluation and medical imaging, which require exact recovery of the original image [14].
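Run-Length Encoding, the simpler of the two lossless methods named above, replaces each run of identical pixel values with a (value, count) pair. A minimal sketch (function names and sample pixel values are illustrative):

```python
def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values as [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([p, 1])     # start a new run
    return runs

def rle_decode(runs):
    """Exactly reverse the encoding -- no information is lost."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 255, 0, 0, 255, 255]
runs = rle_encode(row)
print(runs)                        # [[255, 4], [0, 2], [255, 2]]
assert rle_decode(runs) == row     # lossless: bit-for-bit reconstruction
```

Because decoding exactly reverses encoding, the round trip is lossless; the method pays off only when the image contains long runs of equal values, which is why RLE suits graphics and binary images better than photographs.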

2) Lossy Compression- If the decompressed image is not an exact match of the original, the compression is called lossy image compression: some image information is lost. Lossy compression can be used in applications where exact reproduction of the original image is not critical but a high compression ratio is desirable. Because of the information loss, lossy methods give much higher compression ratios (up to about 50:1) than lossless methods. JPEG is the best-known lossy compression standard and is widely used to compress still images stored on compact disc. Lossy compression is most commonly applied to multimedia data (audio, video and still images), especially in applications such as streaming media and Internet telephony [15].


Data compression is defined as the process of encoding data so that fewer bits are required to represent an image of a given quality. This reduction is possible when the original image contains some type of redundancy. Digital image compression is a field that focuses on methods for reducing the total number of bits required to represent an image, which can be achieved by eliminating the different types of redundancy that exist in the image pixel values. In general, three basic redundancies exist in digital images, as follows [6] [15].

1) Psycho-visual Redundancy:

This is the redundancy that arises because the human eye does not perceive all image information with equal sensitivity. Eliminating some of the less perceptually important information may therefore be acceptable [15].

2) Inter-pixel Redundancy:

This is the redundancy corresponding to statistical dependencies among pixels, especially between adjacent pixels. Most 2-D intensity arrays are spatially correlated, so information is unnecessarily repeated in the representation. This repeated information is redundant and can be exploited for compression [15].

3) Coding Redundancy:

Coding redundancy is present when the code words used to represent image intensities are longer than they need to be, for instance when symbols with very different probabilities are all assigned codes of the same length. Using a small number of bits for frequent symbols and a larger number of bits for rare symbols reduces the overall length of the encoded data; variable-length codes built this way still represent the symbols losslessly [6].
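Huffman coding (not covered in the source, but the classic way of exploiting coding redundancy) builds exactly such a variable-length prefix code from symbol frequencies. A minimal sketch, with illustrative sample data:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code in which frequent symbols get shorter codewords."""
    freq = Counter(symbols)
    # Heap entries: (frequency, tiebreak id, {symbol: codeword-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {s: "0" for s in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

data = "aaaaaaabbbccd"             # 'a' occurs 7 times, 'd' only once
code = huffman_code(data)
encoded_bits = sum(len(code[s]) for s in data)
print(code, encoded_bits)          # 22 bits vs 26 for a fixed 2-bit code
```

For the sample string, 'a' receives a 1-bit codeword while 'd' receives 3 bits, so the 13 symbols encode in 22 bits instead of the 26 a fixed 2-bit-per-symbol code would need.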


Figure-1: Functional block diagram of a general image compression system [6]

Figure-1 shows the basic block diagram of a general image compression system, which is composed of two distinct functional components: 1) an encoder and 2) a decoder. The encoder performs compression, and the decoder performs the reverse operation, decompression. An input image F(x, y) is fed to the encoder, which creates a compressed representation of it. This representation is stored for later use or transmitted to a remote location. When the compressed image is fed to the decoder, a reconstructed image F^(x, y) is produced. F(x, y) and F^(x, y) may or may not be identical: if they are exactly the same, the system is a lossless compression system; if not, it is a lossy one [6].


The encoder is designed to remove the redundancies described in section 2.3 through a series of three different operations.

The mapper is the first stage of the encoding process; it transforms F(x, y) into a format with reduced temporal and spatial redundancy. This operation is usually reversible and may or may not reduce the amount of data required to represent the image. The mapper is also called a transformer, because it maps the image pixels into a set of coefficients that can then be quantized and encoded.

The second stage, the quantizer, reduces the accuracy of the mapper's output in accordance with a pre-established fidelity criterion, the aim being to keep redundant information out of the compressed representation. This operation is irreversible, so it must be omitted in order to achieve error-free (lossless) compression.
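The irreversibility is easy to see with a uniform scalar quantizer, the simplest possible example (the step size and coefficient value below are illustrative; real codecs such as JPEG use perceptually tuned quantization tables rather than a single step):

```python
# Uniform scalar quantization of one transform coefficient.
step = 16
coeff = 137                  # a coefficient produced by the mapper
q = round(coeff / step)      # quantizer output passed to the symbol coder: 9
restored = q * step          # the best the decoder can recover: 144
error = restored - coeff     # irreversible loss introduced by this stage: 7
print(q, restored, error)
```

The decoder only ever sees q, so the original value 137 cannot be recovered; this is exactly why no inverse-quantizer block appears in the decoder.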

The final stage of the encoder is the symbol coder, which generates a fixed- or variable-length code to represent the quantizer output and maps that output in accordance with the code. A variable-length code is often used instead of a fixed-length one: the most frequently occurring quantizer output values are assigned the shortest code words, minimizing coding redundancy. This operation is reversible.


The decoder consists of two segments, a symbol decoder and an inverse mapper, which perform the reverse operations of the symbol encoder and the mapper in the encoder section. No inverse quantizer block is included, because quantization causes an irreversible loss of information.

The symbol decoder converts the codes back into coefficients, which are given as input to the inverse mapper. The inverse mapper converts these coefficients back into image pixels.


The most common image file formats for cameras, printing, scanning, and internet use are JPG, TIF, PNG, and GIF [16].

1) JPG-

Digital cameras and web pages normally use JPG files because JPG compresses the data to a much smaller file, albeit with a lossy method. The degree of compression is selectable: a smaller file size with higher compression sacrifices image quality, while a larger file size with less compression preserves good quality. Photo images have continuous tones, meaning that adjacent pixels often have very similar colors; graphic images are usually not continuous-tone, and although gradients are possible in graphics, they are seen less often. JPG is good for photo images, and the worst possible choice for most graphics or text unless high quality settings are used [16].

2) TIF-

TIF is a lossless format with an optional LZW compression mode, and is considered the highest-quality format for commercial and professional work. TIF is the most versatile and universal format across platforms (Mac, Windows, Unix, etc.), except that web pages do not display TIF files. Many special file formats, such as camera RAW files and fax files, are based on the TIF format. TIF supports data up to 48 bits. TIF files of photo images are generally quite large; uncompressed TIFF files are about the same size in bytes as the image occupies in memory [16].

3) GIF-

GIF always uses lossless LZW compression, but it is always an indexed-color file of at most 8 bits and 256 colors, which is poor for 24-bit color photos. GIF is still very good for web graphics with a limited number of colors. For graphics of only a few colors, GIF can be much smaller than JPG, with clearer, purer colors. Graphics generally use solid colors instead of graduated shades, which limits their color count drastically and is ideal for GIF's indexed color. GIF files offer optimum compression (the smallest files) for solid-color graphics, because objects of one exact color compress very efficiently under LZW [16].

4) PNG-

PNG can replace GIF today, and it also offers many of TIF's options, such as indexed or RGB color and 1- to 48-bit depth. PNG was invented more recently than the other formats and was designed to bypass possible LZW compression patent issues with GIF; being modern, it also offers RGB color modes and 16-bit depth. An additional feature of PNG is transparency for 24-bit RGB images. PNG files are normally a little smaller than LZW-compressed TIF or GIF. PNG incorporates special preprocessing filters that can greatly improve lossless compression efficiency, especially for the gradient data typical of 24-bit photographic images; this filtering makes PNG a little slower than other formats to read and write. PNG is another good choice for lossless-quality work, although it is less used than TIF or JPG [16].


2.4.1 Measurement for lossy compression method

Lossy compression methods result in some loss of information in the compressed image: redundant or perceptually less important information in the original is discarded, so some distortion is always present. There is a trade-off between the quality of the compressed image and the compression ratio, and distortion measurement parameters are therefore needed to quantify both the quality of the reconstructed image and the compression ratio [15].

1) Compression Ratio-

If b is the size of the original image and b' is the size of the compressed image, then the compression ratio is defined as the ratio of the size of the original image to the size of the compressed image [6] [15]:

C = b / b'

where C = compression ratio.

2) Root Mean Square Error (RMSE)-

Root mean square error expresses the information loss as a mathematical quantity. The mean square error is the average of the squared differences between the compressed image and the original image, and RMSE is its square root [6] [15].

Let F(x, y) be the input or original image and F^(x, y) be the approximation of it that results from compressing and subsequently decompressing the input. If the images are of size M x N, then the RMSE between F(x, y) and F^(x, y) is the square root of the squared error averaged over the M x N array:

RMSE = sqrt( (1/(M*N)) * sum over x, y of [F^(x, y) - F(x, y)]^2 )

3) Peak Signal to Noise Ratio (PSNR)-

PSNR is the ratio between the maximum possible power of a signal (the original image) and the power of the corrupting noise (the error introduced by compression). It is usually expressed on the logarithmic decibel (dB) scale, and is the most commonly used measure of reconstruction quality in image compression [16].

Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, where higher is better. PSNR is computed by measuring the pixel difference between the original image and compressed image. Values for PSNR range between infinity for identical images, to 0 for images that have no commonality. PSNR decreases as the compression ratio increases for an image [16].
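Both measures are straightforward to compute from the pixel differences. A minimal sketch (the pixel values are illustrative toy data; a real comparison would iterate over the full M x N image):

```python
import math

def rmse(original, compressed):
    """Root mean square error between two equal-size images (flat pixel lists)."""
    n = len(original)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(original, compressed)) / n)

def psnr(original, compressed, peak=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(original, compressed)
    if e == 0:
        return float("inf")
    return 20 * math.log10(peak / e)

f  = [52, 55, 61, 66]       # original pixels
fh = [50, 56, 60, 68]       # reconstructed pixels after lossy compression
print(rmse(f, fh))          # ~1.58
print(psnr(f, fh))          # ~44 dB
```

For identical images the error is zero and the PSNR is infinite; as compression becomes more aggressive the RMSE grows and the PSNR falls, matching the 30-50 dB range quoted above.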


Various compression methods have been developed to address the major challenges faced by digital imaging over the last two decades. These methods are broadly classified into two main classes: 1) lossy compression methods and 2) lossless compression methods.


















