Project Topics

Engineering Projects

Published on Feb 11, 2016


Data compression is the reduction or elimination of redundancy in a data representation in order to save storage and communication costs. Data compression techniques can be broadly classified into two categories: lossless and lossy schemes. In lossless methods, the exact original data can be recovered, while in lossy schemes only a close approximation of the original data can be obtained.

Lossless methods are also called entropy coding schemes, since no information content is lost during compression. Digital images require an enormous amount of storage space.
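The idea behind entropy coding can be illustrated with a small software sketch (not part of the project's hardware design): the Shannon entropy of a message gives the average number of bits per symbol that an ideal lossless entropy coder needs, so redundant data compresses well while uniformly varied data does not.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Average bits per symbol required by an ideal entropy coder."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A highly redundant message needs well under 8 bits per byte...
print(shannon_entropy(b"aaaaaaabbb"))  # ~0.88 bits/symbol

# ...while four equally likely symbols need exactly 2 bits each.
print(shannon_entropy(b"abcd"))  # 2.0 bits/symbol
```

Practical lossless coders such as the Huffman coding used in JPEG baseline approach this bound with integer-length codes.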

This work designs a VLSI architecture for the JPEG baseline image compression standard. The architecture exploits the principles of pipelining and parallelism to the maximum extent in order to obtain high speed. The architectures for the discrete cosine transform and the entropy encoder are based on efficient algorithms designed for high-speed VLSI.

For example, a color image with a resolution of 1024 x 1024 picture elements (pixels) at 24 bits per pixel requires about 3.15 Mbytes in uncompressed form. Very high-speed designs of efficient compression techniques will significantly help in meeting this storage challenge. In recent years, a working group known as the Joint Photographic Experts Group (JPEG) has defined an international standard for the coding and compression of continuous-tone still images.
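The 3.15-Mbyte figure quoted above follows directly from the image dimensions:

```python
# Uncompressed size of a 1024 x 1024, 24-bit color image (as in the text).
width, height, bits_per_pixel = 1024, 1024, 24

size_bytes = width * height * bits_per_pixel // 8
size_mbytes = size_bytes / 1e6

print(size_bytes)   # 3145728 bytes
print(size_mbytes)  # ~3.15 Mbytes, matching the figure quoted above
```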

This standard is commonly referred to as the JPEG standard. Its primary aim is an image compression algorithm that is application-independent and amenable to VLSI implementation.

In this project, we propose an efficient single-chip VLSI architecture for the JPEG baseline compression standard. The architecture fully exploits the principles of pipelining and parallelism to achieve high speed.

The JPEG baseline algorithm consists mainly of two parts: (i) Discrete Cosine Transform (DCT) computation and (ii) entropy encoding. The architecture for entropy encoding is based on a hardware algorithm designed to yield the maximum clock speed.
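The DCT stage can be modelled in software with a direct 2-D DCT-II over the 8 x 8 pixel blocks that JPEG baseline uses (a naive reference sketch only; the project's VLSI architecture would use a fast, pipelined DCT factorisation rather than this O(N^4) form):

```python
import math

N = 8  # JPEG baseline operates on 8 x 8 pixel blocks

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (reference model, not the hardware)."""
    def c(k):
        # Normalisation: sqrt(1/N) for the DC term, sqrt(2/N) otherwise.
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat (constant) block puts all its energy in the DC coefficient F(0,0);
# every AC coefficient is (numerically) zero.
flat = [[100] * N for _ in range(N)]
coeffs = dct2(flat)
print(coeffs[0][0])  # 800.0 (the DC coefficient)
```

Concentrating energy into a few low-frequency coefficients like this is what makes the subsequent quantization and entropy-encoding stages effective.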


Image compression is an important topic in commercial, industrial, and academic applications. Whether in commercial photography or industrial imaging, digital pixel information can comprise considerably large amounts of data.

Management of such data can involve significant overhead in computational complexity, storage, and data processing. Typical access speeds for storage media are inversely proportional to their capacity. Through data compression, such tasks can be optimized.




Simulation: ModelSim 5.8c

Synthesis: Xilinx 9.1