Journal articles - International Journal of Image, Graphics and Signal Processing

Total articles: 1056

2D Convolution Operation with Partial Buffering Implementation on FPGA

Arun Mahajan, Paramveer Gill

Research article

In modern digital systems, digital image processing and digital signal processing applications form an integral part of the system design. Many designers have proposed and implemented various resource- and speed-efficient approaches in the recent past. An important aspect of designing any digital system is its memory efficiency. An image consists of many pixels, and each pixel holds a value from 0 to 255, which requires 8 bits to represent. A large memory is therefore required to process an image, and the number of pixels grows with the size of the image. A buffering technique is used to read the pixel data from the image and process the data efficiently. In the work presented in this paper, different window sizes are compared on the basis of timing efficiency and area utilization. An optimum window size must be selected so as to minimize resource usage and maximize speed. Results show the comparison of various window operations on the basis of performance parameters. In future work, other window operations along with convolution filters, such as the Adaptive Median filter, can be implemented by changing the row and column values of the window size.
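The windowed (partially buffered) convolution described above can be sketched in software as follows. This is an illustrative reference model, not the paper's FPGA implementation: only `kh` rows are held at a time, mimicking a row buffer, and all names are my own.

```python
import numpy as np

def convolve2d_buffered(image, kernel):
    """Windowed 2D convolution sketch: only `kh` rows are buffered at a
    time, mimicking a hardware row buffer.  (This computes correlation,
    i.e. no kernel flip, which coincides with convolution for the
    symmetric kernels typical of image filters.)"""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        row_buffer = image[r:r + kh, :]        # partial buffer: kh rows only
        for c in range(out.shape[1]):
            window = row_buffer[:, c:c + kw]   # current kh x kw window
            out[r, c] = np.sum(window * kernel)
    return out
```

A larger window improves filtering quality but multiplies the per-pixel work and the buffer size, which is exactly the area/speed trade-off the paper measures.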

Free

3D Brain Tumors and Internal Brain Structures Segmentation in MR Images

P. Narendran, V.K. Narendira Kumar, K. Somasundaram

Research article

The main topic of this paper is to segment brain tumors, their components (edema and necrosis) and internal structures of the brain in 3D MR images. For tumor segmentation we propose a framework that combines the region-based and boundary-based paradigms. In this framework, we segment the brain using a method adapted for pathological cases and extract some global information on the tumor by symmetry-based histogram analysis. We propose a new and original method that combines region and boundary information in two phases: initialization and refinement. The method relies on symmetry-based histogram analysis. The initial segmentation of the tumor is refined using boundary information from the image. We use a deformable model which is further constrained by the fused spatial relations of the structure. The method was also evaluated on 10 contrast-enhanced T1-weighted images to segment the ventricles, caudate nucleus and thalamus.

Free

3D Face Recognition based on Radon Transform, PCA, LDA using KNN and SVM

P. S. Hiremath, Manjunatha Hiremath

Research article

Biometrics (or biometric authentication) refers to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Three-dimensional (3D) human face recognition is emerging as a significant biometric technology. Research interest in 3D face recognition has increased during recent years due to the availability of improved 3D acquisition devices and processing algorithms. Three-dimensional face recognition also helps to resolve some of the issues associated with two-dimensional (2D) face recognition. In previous research works, there are several methods for face recognition using range images that are limited to the data acquisition and pre-processing stage only. In the present paper, we propose a 3D face recognition algorithm which is based on the Radon transform, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The Radon transform (RT) is a fundamental tool to normalize 3D range data. The PCA is used to reduce the dimensionality of the feature space, and the LDA is used to optimize the features, which are finally used to recognize the faces. The experimentation has been done using three publicly available databases, namely, the Bosphorus, Texas and CASIA 3D face databases. The experimental results show that the proposed algorithm is efficient in terms of accuracy and detection time, in comparison with other methods based on PCA only and RT+PCA. It is observed that 40 eigenfaces of PCA and 5 LDA components lead to an average recognition rate of 99.20% using the SVM classifier.
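The dimensionality-reduction step in such a pipeline can be sketched with a plain SVD-based PCA. This is a generic illustration of the PCA stage only (not the authors' implementation, and without the Radon or LDA stages); the function name and the synthetic data are my own.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project row-vector samples onto their top principal components.
    Illustrative sketch of the PCA dimensionality-reduction stage."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores in reduced space
```

In the paper's setting, each row of `X` would be a feature vector derived from a normalized range image; the reduced scores would then be passed to LDA and finally to a KNN or SVM classifier.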

Free

A 1-V 10-bit 16.83-fJ/Conversion-step Mixed Current Mode SAR ADC for WSN

Dipak S. Marathe, Uday P. Khot

Research article

This paper proposes a 10-bit mixed current mode low power SAR ADC for sensor node applications. The different entities of the successive approximation register (SAR) analog-to-digital converter (ADC) circuit take a hybrid, or mixed mode, approach: a voltage mode regenerative comparator, mixed SAR logic, and a current mode digital-to-analog converter (DAC). The speed limitation and kick-back noise of a dynamic comparator are resolved using a duty-cycle-controlled regenerative comparator. The mixed mode SAR logic partitions the design into a synchronous ring counter and an asynchronous output register. The ring counter shifts data on the common clock tick, while the output register exchanges data asynchronously using handshake signals, resulting in a low power SAR. The current mode switching function in the DAC reduces the asynchronous switching effect, resulting in low energy conversion per step. Overall, the proposed mixed SAR ADC consumes 41.6 µW and achieves an SFDR of 69.3 dB at 10 MS/s and a 1 V supply voltage. It is designed and simulated in the 0.18 µm TSMC CMOS process.

Free

A 3-Level Secure Histogram Based Image Steganography Technique

G V Chaitanya, D Vamsee Krishna, L Anjaneyulu

Research article

Steganography is an art that involves communication of secret data in an appropriate carrier, e.g., images, audio or video, with the goal of hiding the very existence of the embedded data so as not to arouse an eavesdropper's suspicion. In this paper, a steganographic technique with a high level of security and a data hiding capacity close to 20% of the cover image data has been developed. An adaptive and matched bit replacement method is used, based on the sensitivity of the Human Visual System (HVS) at different intensities. The proposed algorithm ensures that the generated stego image has a PSNR greater than 38.5 and is also resistant to visual attack. A three-level security is infused into the algorithm, which makes data retrieval from the stego image possible only with all the right keys.
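The underlying bit-replacement idea can be illustrated with a minimal one-bit-per-pixel LSB embed/extract pair. This is a simplified sketch only: the paper's adaptive, HVS-matched scheme varies the number of replaced bits per pixel and adds three key layers, none of which are reproduced here; all names are my own.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit sequence in the least-significant bits of pixel values
    (fixed 1 bit per pixel; a minimal sketch of bit replacement)."""
    stego = cover.flatten().copy()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b     # clear LSB, insert message bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits message bits from the stego image."""
    return [int(p) & 1 for p in stego.flatten()[:n_bits]]
```

Since only the LSB changes, each pixel differs from the cover by at most 1, which is what keeps the PSNR of the stego image high.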

Free

A Brief Review on Different Driver's Drowsiness Detection Techniques

Anis-Ul-Islam Rafid, Amit Raha Niloy, Atiqul Islam Chowdhury, Nusrat Sharmin

Research article

Driver drowsiness is a major factor in a huge number of vehicle accidents. Driver drowsiness detection systems have recently been valued highly and applied in various fields, such as driver visual attention monitoring and driver activity tracking. Drowsiness can be detected through a driver face monitoring system. Nowadays smartphone-based applications have developed rapidly and are thus also used for driver safety monitoring. In this paper, a detailed review of driver drowsiness detection techniques implemented on smartphones is presented. The review also focuses on giving insight into recent and state-of-the-art techniques. The advantages and limitations of each have been summarized. A comparative study of recently implemented smartphone-based approaches and commonly used desktop-based approaches is also provided. Most importantly, this paper helps others choose better techniques for effective drowsiness detection.

Free

A CLB priority based power gating technique in field programmable gate arrays

Abhishek Nag, Sambhu Nath Pradhan

Research article

In this work, an autonomous power gating technique is introduced at the coarse level in the Field Programmable Gate Array (FPGA) architecture to minimize leakage power. One of the major disadvantages of FPGAs is the unnecessary power dissipation associated with unused logic/inactive blocks. In this approach, inactive blocks in an FPGA are automatically cut off from the power supply based on a CLB priority algorithm. Contrary to previous approaches, our method introduces gating into both the logic blocks and the routing resources of an FPGA at the same time. The proposed technique divides the FPGA fabric into clusters of CLBs and associated routing resources and introduces power gating separately for each cluster during runtime. The FPGA prototype has been developed in Cadence Virtuoso Spectre at 45 nm technology, and the layout of the proposed power gated FPGA has also been developed. Simulation carried out for a '4 CLB' prototype results in a maximum of 55% power reduction. The area overhead is 1.85% for the '4 CLB' FPGA prototype and tends to reduce with an increase in the number of CLBs. The area overhead of a '128 CLB' FPGA prototype is only 0.058%, considering 4 sleep transistors. As an extension of the proposed gating in the '4 CLB' prototype, two techniques for an '8 CLB' prototype are also evaluated in this paper, each having its own advantages. Due to the wake-up time associated with power gated blocks, delay tends to increase; the wake-up time, however, reduces with an increase in sleep transistor width.

Free

A Case Study in Key Measuring Software

Naeem Nematollahi, Richard Khoury

Research article

In this paper, we develop and study a new algorithm to recognize and precisely measure keys for the ultimate purpose of physically duplicating them. The main challenge comes from the fact that the proposed algorithm must use a single picture of the key obtained from a regular desktop scanner without any special preparation. It does not use the special lasers, lighting systems, or camera setups commonly used for the purpose of key measuring, nor does it require that the key be placed in a precise position and orientation. Instead, we propose an algorithm that uses a wide range of image processing methods to discover all the information needed to identify the correct key blank and to find precise measures of the notches of the key shank from the single scanned image alone. Our results show that our algorithm can correctly differentiate between different key models and can measure the dents of the key with a precision of a few tenths of a millimeter.

Free

A Chaos-based Pseudorandom Permutation and Bilateral Diffusion Scheme for Image Encryption

Weichuang Guo, Junqin Zhao, Ruisong Ye

Research article

A great many chaos-based image encryption schemes have been proposed in the past decades. Most of them use the permutation-diffusion architecture at the pixel level, which has been proved insufficiently secure, as they do not depend on the plain-image and so usually cannot resist chosen/known plain-image attacks. In this paper, we propose a novel image encryption scheme comprising one permutation process and one diffusion process. In the permutation process, the image is expanded to twice its size by dividing the plain-image into two parts: one consisting of the higher 4 bits and one consisting of the lower 4 bits. The permutation operations are performed row-by-row and column-by-column to increase the speed of the permutation process. The chaotic cat map is utilized to generate chaotic sequences, which are quantized to shuffle the expanded image. The chaotic sequence for the permutation process depends on the plain-image and the cipher keys, resulting in good key sensitivity and plain-image sensitivity. To achieve a stronger avalanche effect and a larger key space, a chaotic Bernoulli shift map based bilateral (i.e., horizontal and vertical) diffusion function is applied as well. The security and performance of the proposed image encryption scheme have been analyzed, including histograms, correlation coefficients, information entropy, key sensitivity analysis, key space analysis, differential analysis, encryption rate analysis, etc. All the experimental results suggest that the proposed image encryption scheme is robust and secure and can be used for secure image and video communication applications.
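The cat map mentioned above is the classic chaotic permutation used in such schemes. The sketch below applies Arnold's cat map, (x, y) → (x + y, x + 2y) mod N, to shuffle pixel positions of a square image; it illustrates only the generic permutation idea, not the paper's row/column variant or its plain-image-dependent keying.

```python
import numpy as np

def cat_map_permute(img, iterations=1):
    """Shuffle pixel positions with Arnold's cat map
    (x, y) -> (x + y, x + 2y) mod N.  The map matrix has determinant 1,
    so it is a bijection: every pixel value is moved, none is lost."""
    n = img.shape[0]                     # square image assumed
    out = img
    for _ in range(iterations):
        shuffled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                shuffled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = shuffled
    return out
```

A permutation alone only relocates pixel values, which is why schemes like this one pair it with a diffusion stage (here, the Bernoulli-shift-based bilateral diffusion) that actually changes the values.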

Free

A Chaotic Lévy flight Approach in Bat and Firefly Algorithm for Gray level image Enhancement

Krishna Gopal Dhal, Iqbal Quraishi, Sanjoy Das

Research article

Recently, nature-inspired metaheuristic algorithms have been applied in the image enhancement field to enhance low contrast images in a controlled manner. The Bat algorithm (BA) and the Firefly algorithm (FA) are two of the most powerful metaheuristic algorithms. In this paper these two algorithms have been implemented with the help of a chaotic sequence and Lévy flight. One of them is FA via Lévy flight, where the step size of the Lévy flight is taken from a chaotic sequence. In the Bat algorithm, the local search is done via Lévy flight with a chaotic step size. The chaotic sequence exhibits the ergodicity property, which helps in better searching. These two algorithms have been applied to optimize the parameters of a parameterized high boost filter. Entropy and the number of edge pixels of the image have been used as the objective criteria for measuring the goodness of image enhancement. The fitness criterion has been maximized in order to obtain an enhanced image with better contrast. From the experimental results it is clear that BA with chaotic Lévy flight outperforms FA with chaotic Lévy flight.
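The chaotic-step Lévy flight can be sketched as follows. The abstract does not say which chaotic map is used, so the logistic map here is my assumption, and the step generator is the standard Mantegna approximation for Lévy-stable steps; all function names are illustrative.

```python
import math
import random

def logistic_map(x):
    """One iterate of the logistic map x -> 4x(1-x): a simple chaotic
    sequence (assumed here; the paper only says 'chaotic sequence')."""
    return 4.0 * x * (1.0 - x)

def levy_step(beta, u, v):
    """Mantegna-style Lévy step from two standard-normal draws u, v."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return (u * sigma) / abs(v) ** (1 / beta)

def chaotic_levy_walk(x0, chaos0, steps, beta=1.5, seed=0):
    """1-D random walk whose Lévy steps are scaled by a chaotic sequence,
    mimicking the 'chaotic step size' local search described above."""
    rng = random.Random(seed)
    pos, c = x0, chaos0
    for _ in range(steps):
        c = logistic_map(c)                       # chaotic step-size factor
        pos += c * levy_step(beta, rng.gauss(0, 1), rng.gauss(0, 1))
    return pos
```

The heavy tail of the Lévy distribution produces occasional long jumps (global exploration), while the ergodic chaotic factor keeps the step sizes well spread over their range.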

Free

A Color-Texture Based Segmentation Method To Extract Object From Background

Saka Kezia, I. Santi Prabha, Vakulabharanam Vijaya Kumar

Research article

Extraction of flower regions from a complex background is a difficult task, and it is an important part of flower image retrieval and recognition. Image segmentation denotes a process of partitioning an image into distinct regions. A large variety of segmentation approaches for images have been developed. Image segmentation plays an important role in image analysis. According to several authors, segmentation terminates when the observer's goal is satisfied. For this reason, a unique method that can be applied to all possible cases does not yet exist. This paper studies flower image segmentation in complex backgrounds. Based on the differences in visual characteristics between the flower and the surrounding objects, flowers from different backgrounds are separated into a single set of flower image pixels. The segmentation methodology for flower images consists of five steps. Firstly, the original image in RGB space is transformed into Lab color space. In the second step, the 'a' component of the Lab color space is extracted. Then segmentation by two-dimensional Otsu automatic thresholding of the 'a' channel is performed. Based on the color segmentation result and the texture differences between the background image and the required object, we extract the object by the gray level co-occurrence matrix for texture segmentation. The GLCMs essentially represent the joint probability of occurrence of grey levels for pixels with a given spatial relationship in a defined region. Finally, the segmentation result is corrected by mathematical morphology methods. The algorithm was tested on a plague image database and the results prove to be satisfactory. The algorithm was also tested on medical images for nucleus segmentation.
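The thresholding step can be illustrated with the classic one-dimensional Otsu criterion, which picks the threshold maximizing the between-class variance. Note the paper uses a two-dimensional Otsu variant on the 'a' channel; this 1-D sketch shows only the underlying criterion, and the names are my own.

```python
import numpy as np

def otsu_threshold(gray):
    """Classic 1-D Otsu threshold: exhaustively pick the level t that
    maximizes the between-class variance w0*w1*(m0-m1)^2."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total              # background weight
        w1 = 1.0 - w0                            # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * hist[:t]).sum() / (w0 * total)
        m1 = (np.arange(t, 256) * hist[t:]).sum() / (w1 * total)
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal channel such as 'a' for a flower against foliage, this lands the threshold in the valley between the two intensity clusters.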

Free

A Comparative Analysis of Image Scaling Algorithms

Chetan Suresh, Sanjay Singh, Ravi Saini, Anil K Saini

Research article

Image scaling, a fundamental task of numerous image processing and computer vision applications, is the process of resizing an image by pixel interpolation. Image scaling leads to a number of undesirable image artifacts such as aliasing, blurring and moiré. However, the image quality improves with an increase in the number of pixels considered for interpolation. This poses a quality-time trade-off in which high quality output must often be compromised in the interest of computation complexity. This paper presents a comprehensive study and comparison of different image scaling algorithms. The performance of the scaling algorithms has been reviewed on the basis of the number of computations involved and image quality. The search table modification to the bicubic image scaling algorithm greatly reduces the computational load by avoiding massive cubic and floating point operations without significantly losing image quality.
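As a concrete instance of pixel interpolation, the sketch below implements bilinear scaling, the 2×2-neighborhood method that sits between nearest-neighbor and bicubic in the quality-time trade-off discussed above. It is a plain reference implementation, not the paper's optimized one, and all names are my own.

```python
import numpy as np

def scale_bilinear(img, new_h, new_w):
    """Resize a grayscale image with bilinear interpolation: each output
    pixel is a distance-weighted blend of its 2x2 source neighborhood."""
    h, w = img.shape
    out = np.zeros((new_h, new_w))
    for i in range(new_h):
        for j in range(new_w):
            y = i * (h - 1) / max(new_h - 1, 1)   # source-space coordinates
            x = j * (w - 1) / max(new_w - 1, 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out
```

Bicubic interpolation extends the same idea to a 4×4 neighborhood with cubic weights, which is exactly where the paper's look-up-table modification saves the repeated cubic evaluations.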

Free

A Comparative Analysis of Lossless Compression Algorithms on Uniformly Quantized Audio Signals

Sankalp Shukla, Ritu Gupta, Dheeraj Singh Rajput, Yashwant Goswami, Vikash Sharma

Research article

This paper analyses the performance of various lossless compression algorithms applied to uniformly quantized audio signals. The purpose of this study is to shed light on a new way of audio signal compression using lossless compression algorithms. The audio signal is first transformed into text by employing uniform quantization with different step sizes. This text is then compressed using lossless compression algorithms, which include run-length encoding (RLE), Huffman coding, arithmetic coding and Lempel-Ziv-Welch (LZW) coding. The performance of the various lossless compression algorithms is analyzed based on four main parameters, viz., compression ratio, signal-to-noise ratio (SNR), compression time and decompression time. The analysis of the aforementioned parameters has been carried out after uniformly quantizing the audio files using different step sizes. The study shows that LZW coding can be a potential alternative to the MP3 lossy audio compression algorithm for compressing audio signals effectively.
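The quantize-then-losslessly-compress pipeline can be sketched with the simplest of the four coders, RLE. This illustrative pair of steps assumes nothing about the paper's exact text encoding; names and step sizes are my own.

```python
def quantize(samples, step):
    """Uniformly quantize audio samples to integer levels with the
    given step size (the lossy part of the pipeline)."""
    return [round(s / step) for s in samples]

def rle_encode(symbols):
    """Run-length encode a symbol sequence as (symbol, count) pairs:
    one of the four lossless coders compared in the paper."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, n) for s, n in runs]

def rle_decode(runs):
    """Invert rle_encode exactly: the coding stage is lossless."""
    return [s for s, n in runs for _ in range(n)]
```

A coarser quantization step lowers the SNR but produces longer runs of identical levels, which is precisely the compression-ratio/SNR trade-off the paper measures across step sizes.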

Free

A Comparative Evaluation of Feature Extraction and Similarity Measurement Methods for Content-based Image Retrieval

S.M. Mohidul Islam, Rameswar Debnath

Research article

Content-based image retrieval is the popular approach to image data searching because, in this case, the searching process analyses the actual contents of the image rather than the metadata associated with it. It is not clear from prior research which feature or which similarity measure performs better among the many available alternatives, or what their best combinations are in content-based image retrieval. We performed a systematic and comprehensive evaluation of several visual feature extraction methods as well as several similarity measurement methods for this case. A feature vector is created after color and/or texture and/or shape feature extraction. Then similar images are retrieved using different similarity measures. From the experimental results, we found that color moment and wavelet packet entropy features are the most effective, whereas color autocorrelogram, wavelet moment and invariant moment features show poor results. As similarity measures, the cosine and correlation measures are robust in most cases, and Standardized L2 in a few cases; on average, the city block measure retrieves more similar images, whereas the L1 and Mahalanobis measures are less effective in most cases. This is the first such system to be informed by a rigorous comparative analysis of a total of six features and twelve similarity measures.
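Two of the measures highlighted above can be written down directly; the sketch below shows cosine similarity and the city-block (Manhattan) distance over feature vectors. These are textbook definitions, not the paper's code, and the names are my own.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1 for parallel
    vectors, 0 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def city_block(a, b):
    """City-block (Manhattan) distance: sum of absolute coordinate
    differences between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

In retrieval, each database image's feature vector is scored against the query's; images are then ranked by descending similarity (cosine) or ascending distance (city block).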

Free

A Comparative Study between Bandelet and Wavelet Transform Coupled by EZW and SPIHT Coder for Image Compression

Beladgham Mohammed, Habchi Yassine, Moulay Lakhdar Abdelmouneim, Taleb-Ahmed Abdelmalik

Research article

The second generation bandelet transform is a new method based on capturing the complex geometric content of an image; we use this transform to study medical and satellite images compressed using the bandelet transform coupled with the SPIHT coder. The goal of this paper is to examine the capacity of the proposed transform to offer an optimal representation of image geometry. We are interested in compressed medical images; in order to evaluate the compression algorithm, we compared our results with those obtained by applying the bandelet transform in the satellite image field. We conclude that the results obtained are very satisfactory for the medical image domain.

Free

A Comparative Study between X_Lets Family for Image Denoising

Beladgham Mohamed, Habchi Yassine, Moulay Lakhdar Abdelmouneim, Abdesselam Bassou, Taleb-Ahmed Abdelmalik

Research article

Finding a good representation is a problem in image processing. To this end, our work focuses on developing and proposing new transforms which can represent the edges of an image more efficiently. Among these transforms we find the wavelet and ridgelet transforms; both are not optimal for images with complex geometry, so we replace these two classical transforms with a more effective transform named the bandelet transform, which is appropriate for the analysis of edges in images and can preserve the high-frequency detail information of a noisy image. Denoising is one of the most interesting and widely investigated topics in the image processing area. In order to eliminate noise, we exploit in this paper the geometrical advantages offered by the bandelet transform to solve the problem of image denoising. To determine which transform yields the highest visual image quality, a comparison is made between the bandelet, curvelet, ridgelet and wavelet transforms; after determining the best transform, we determine which type of image is best suited to it. Numerically, we show that the bandelet transform can significantly outperform the others and gives good performance for medical images of type TOREX, and this is justified by a higher PSNR value for gray images.

Free

A Comparative Study in Wavelets, Curvelets and Contourlets as Denoising Biomedical Images

Mohamed Ali HAMDI

Research article

A special member of the emerging family of multiscale geometric transforms is the contourlet transform, which was developed in the last few years in an attempt to overcome the inherent limitations of traditional multiscale representations such as curvelets and wavelets. The biomedical images were denoised first using wavelets, then curvelets and finally the contourlet transform, and the results are presented in this paper. It has been found that the contourlet transform outperforms the curvelet and wavelet transforms in terms of signal-to-noise ratio.

Free

A Comparative Study of Feature Extraction Methods in Images Classification

Seyyid Ahmed Medjahed

Research article

Feature extraction is an important step in image classification. It allows the content of images to be represented as faithfully as possible. In this paper, we present a comparison protocol for several feature extraction techniques under different classifiers. We evaluate the performance of feature extraction techniques in the context of image classification, using both binary and multiclass classification. The performance analyses are conducted in terms of classification accuracy rate, recall, precision, f-measure and other evaluation measures. The aim of this research is to identify the feature extraction technique that most improves the classification accuracy rate and provides the most informative classification data. We analyze the models obtained by each feature extraction method under each classifier.

Free

A Comparative Study of Soft Biometric Traits and Fusion Systems for Face-based Person Recognition

Samuel Ezichi, Ijeoma J.F. Ezika, Ogechukwu N. Iloanusi

Research article

Soft biometrics is not a unique trait in itself, but it is valuable in enhancing the performance of the unique traits used in biometric recognition systems. In this paper, we perform a comparative analysis of soft biometric traits and fusion schemes for improving face recognition systems. Specifically, we present an analysis of the performance of such systems as a function of the fusion strategy used and the soft biometric feature employed. We outline the strengths and weaknesses of the biometric features employed in fused face and soft biometric systems. The analysis presented in this work is significant and differs from existing works in that the performance profiles of a wider variety of soft biometric traits are compared over the major metrics of permanence, ease of collection and distinctiveness.

Free

A Comparative Study of Wavelet Thresholding for Image Denoising

Arun Dixit, Poonam Sharma

Research article

Image denoising using the wavelet transform has been successful because the wavelet transform generates a large number of small coefficients and a small number of large coefficients. The basic denoising algorithm using the wavelet transform consists of three steps: first, the wavelet transform of the noisy image is computed; then, thresholding is performed on the detail coefficients in order to remove noise; and finally, the inverse wavelet transform of the modified coefficients is taken. This paper reviews state-of-the-art methods of image denoising using wavelet thresholding. An experimental analysis of the wavelet-based methods VisuShrink, SureShrink, BayesShrink, ProbShrink, BlockShrink and NeighShrinkSURE is performed. These wavelet-based methods are also compared with spatial domain methods such as the median filter and the Wiener filter. Results are evaluated on the basis of peak signal-to-noise ratio and the visual quality of images. In the experiments, the wavelet-based methods performed better than the spatial domain methods. In the wavelet domain, recent methods such as ProbShrink, BlockShrink and NeighShrinkSURE performed better than the other wavelet-based methods.
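The thresholding step shared by all these methods can be written down directly. The sketch below shows soft thresholding together with the universal (VisuShrink-style) threshold sigma*sqrt(2 ln n); the compared methods differ mainly in how they pick the threshold, which this illustration does not reproduce.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients: zero anything with
    magnitude below t and shrink the rest toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def visu_threshold(noise_sigma, n):
    """Universal (VisuShrink) threshold sigma * sqrt(2 ln n) for a
    signal of n coefficients with noise standard deviation sigma."""
    return noise_sigma * np.sqrt(2.0 * np.log(n))
```

In the three-step pipeline above, `soft_threshold` is applied to the detail subbands between the forward and inverse wavelet transforms; small (mostly-noise) coefficients vanish while large (mostly-signal) ones survive, slightly shrunken.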

Free

Journal