Journal articles - International Journal of Image, Graphics and Signal Processing

All articles: 870

2D Convolution Operation with Partial Buffering Implementation on FPGA

Arun Mahajan, Paramveer Gill

Research article

In modern digital systems, digital image processing and digital signal processing applications form an integral part of the system design. Many designers have proposed and implemented various resource- and speed-efficient approaches in the recent past. An important aspect of designing any digital system is its memory efficiency. An image consists of pixels, and each pixel holds a value from 0 to 255, which requires 8 bits to represent. A large memory is therefore required to process an image, and as the image grows the number of pixels increases accordingly. A buffering technique is used to read the pixel data from the image and process the data efficiently. In the work presented in this paper, different window sizes are compared on the basis of timing efficiency and area utilization. An optimum window size must be selected so as to reduce resource usage and maximize speed. Results show the comparison of various window operations on the basis of performance parameters. In future work, other window operations, along with convolution filters such as the adaptive median filter, can be implemented by changing the row and column values of the window size.
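As a point of reference for the window operation described above, the following Python sketch (an illustration for this listing, not the authors' FPGA implementation) slides a k x k window over an image while buffering only k rows at a time; the kernel and window size are assumptions chosen for illustration.

import numpy as np

def windowed_op_buffered(image, kernel):
    """Slide a k x k window over the image, buffering only k rows at a time.
    This is the correlation form; flip the kernel for true convolution."""
    k = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1), dtype=np.float64)
    for r in range(h - k + 1):
        row_buffer = image[r:r + k, :].astype(np.float64)  # partial buffer: k rows only
        for c in range(w - k + 1):
            window = row_buffer[:, c:c + k]
            out[r, c] = np.sum(window * kernel)
    return out

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # 8-bit pixels, 0..255
kernel = np.ones((3, 3)) / 9.0                               # 3x3 averaging window
result = windowed_op_buffered(img, kernel)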

Free

3D Brain Tumors and Internal Brain Structures Segmentation in MR Images

P. Narendran, V. K. Narendira Kumar, K. Somasundaram

Research article

The main topic of this paper is the segmentation of brain tumors, their components (edema and necrosis) and internal structures of the brain in 3D MR images. For tumor segmentation we propose a framework that combines region-based and boundary-based paradigms. In this framework, the brain is first segmented using a method adapted to pathological cases, and some global information on the tumor is extracted by symmetry-based histogram analysis. We propose a new and original method that combines region and boundary information in two phases: initialization and refinement. The initial segmentation of the tumor, obtained from the symmetry-based histogram analysis, is refined using boundary information of the image. We use a deformable model that is further constrained by the fused spatial relations of the structure. The method was also evaluated on 10 contrast-enhanced T1-weighted images to segment the ventricles, caudate nucleus and thalamus.

Free

3D Face Recognition based on Radon Transform, PCA, LDA using KNN and SVM

P. S. Hiremath, Manjunatha Hiremath

Research article

Biometrics (or biometric authentication) refers to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Three-dimensional (3D) human face recognition is emerging as a significant biometric technology. Research interest in 3D face recognition has increased in recent years due to the availability of improved 3D acquisition devices and processing algorithms. Three-dimensional face recognition also helps to resolve some of the issues associated with two-dimensional (2D) face recognition. In previous research, several methods for face recognition using range images are limited to the data acquisition and pre-processing stages only. In the present paper, we propose a 3D face recognition algorithm based on the Radon transform, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The Radon transform (RT) is a fundamental tool used to normalize the 3D range data. PCA is used to reduce the dimensionality of the feature space, and LDA is used to optimize the features, which are finally used to recognize the faces. The experimentation has been done using three publicly available databases, namely the Bosphorus, Texas and CASIA 3D face databases. The experimental results show that the proposed algorithm is efficient in terms of accuracy and detection time, in comparison with other methods based on PCA only and on RT+PCA. It is observed that 40 eigenfaces from PCA and 5 LDA components lead to an average recognition rate of 99.20% using an SVM classifier.
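A minimal sketch of the PCA, then LDA, then SVM chain described in the abstract, using scikit-learn; the random arrays stand in for real range-image data and the Radon-transform normalization step is omitted, so this illustrates only the classification stage under assumed data shapes.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# X: flattened range-image features, y: subject labels (random data stands in
# for the real 3D face databases used in the paper).
rng = np.random.default_rng(0)
X = rng.random((200, 1024))
y = rng.integers(0, 10, 200)

# 40 principal components, then 5 LDA discriminants, then an SVM classifier,
# mirroring the PCA -> LDA -> SVM chain described above.
model = make_pipeline(PCA(n_components=40),
                      LinearDiscriminantAnalysis(n_components=5),
                      SVC(kernel='linear'))
model.fit(X, y)
print(model.score(X, y))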

Free

A 3-Level Secure Histogram Based Image Steganography Technique

G V Chaitanya, D Vamsee Krishna, L Anjaneyulu

Research article

Steganography is an art that involves the communication of secret data in an appropriate carrier, e.g. images, audio or video, with the goal of hiding the very existence of the embedded data so as not to arouse an eavesdropper's suspicion. In this paper, a steganographic technique with a high level of security and a data hiding capacity close to 20% of the cover image data has been developed. An adaptive and matched bit replacement method is used, based on the sensitivity of the Human Visual System (HVS) at different intensities. The proposed algorithm ensures that the generated stego image has a PSNR greater than 38.5 dB and is also resistant to visual attack. A three-level security scheme is infused into the algorithm, which makes data retrieval from the stego image possible only when all the right keys are available.
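The bit-replacement idea can be illustrated with a plain (non-adaptive) LSB embedding and a PSNR check; the HVS-matched adaptive replacement and the three key levels are not reproduced here, so treat this as a generic sketch rather than the proposed algorithm.

import numpy as np

def embed_lsb(cover, bits):
    """Replace the least significant bit of the first pixels with the payload bits."""
    flat = cover.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
payload = np.random.randint(0, 2, 2000, dtype=np.uint8)
stego = embed_lsb(cover, payload)
print(psnr(cover, stego))   # plain 1-bit LSB replacement typically stays well above 50 dB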

Free

A CLB priority based power gating technique in field programmable gate arrays

Abhishek Nag, Sambhu Nath Pradhan

Research article

In this work, an autonomous power gating technique is introduced at the coarse level in a Field Programmable Gate Array (FPGA) architecture to minimize leakage power. One of the major disadvantages of FPGAs is the unnecessary power dissipation associated with unused logic/inactive blocks. In this approach, inactive blocks in an FPGA are automatically cut off from the power supply, based on a CLB priority algorithm. Our method focuses on introducing gating into both the logic blocks and the routing resources of an FPGA at the same time, contrary to previous approaches. The proposed technique divides the FPGA fabric into clusters of CLBs and associated routing resources and introduces power gating separately for each cluster at runtime. The FPGA prototype has been developed in Cadence Virtuoso Spectre at the 45 nm technology node, and the layout of the proposed power-gated FPGA has also been developed. Simulation has been carried out for a '4 CLB' prototype and results in a maximum of 55% power reduction. The area overhead is 1.85% for the '4 CLB' FPGA prototype and tends to reduce with an increase in the number of CLBs. The area overhead of a '128 CLB' FPGA prototype is only 0.058%, considering 4 sleep transistors. As an extension of the proposed gating in the '4 CLB' prototype, two techniques for an '8 CLB' prototype are also evaluated in this paper, each having its own advantages. Due to the wake-up time associated with power-gated blocks, delay tends to increase. The wake-up time, however, reduces with an increase in sleep transistor width.

Free

A Case Study in Key Measuring Software

Naeem Nematollahi, Richard Khoury

Research article

In this paper, we develop and study a new algorithm to recognize and precisely measure keys for the ultimate purpose of physically duplicating them. The main challenge comes from the fact that the proposed algorithm must use a single picture of the key obtained from a regular desktop scanner without any special preparation. It does not use the special lasers, lighting systems, or camera setups commonly used for the purpose of key measuring, nor does it require that the key be placed in a precise position and orientation. Instead, we propose an algorithm that uses a wide range of image processing methods to discover all the information needed to identify the correct key blank and to find precise measures of the notches of the key shank from the single scanned image alone. Our results show that our algorithm can correctly differentiate between different key models and can measure the dents of the key with a precision of a few tenths of a millimeter.

Free

A Chaos-based Pseudorandom Permutation and Bilateral Diffusion Scheme for Image Encryption

Weichuang Guo, Junqin Zhao, Ruisong Ye

Research article

A great many chaos-based image encryption schemes have been proposed in the past decades. Most of them use the permutation-diffusion architecture at the pixel level, which has been shown to be insufficiently secure, since such schemes do not depend on the plain-image and therefore usually cannot resist chosen/known plain-image attacks. In this paper, we propose a novel image encryption scheme comprising one permutation process and one diffusion process. In the permutation process, the image is expanded by dividing the plain-image into two parts: one consisting of the higher 4 bits and one consisting of the lower 4 bits. The permutation operations are done row-by-row and column-by-column to increase the speed of the permutation process. The chaotic cat map is utilized to generate chaotic sequences, which are quantized to shuffle the expanded image. The chaotic sequence for the permutation process depends on the plain-image and the cipher keys, resulting in good key sensitivity and plain-image sensitivity. To achieve a stronger avalanche effect and a larger key space, a chaotic Bernoulli shift map based bilateral (i.e., horizontal and vertical) diffusion function is applied as well. The security and performance of the proposed image encryption scheme have been analyzed, including histograms, correlation coefficients, information entropy, key sensitivity analysis, key space analysis, differential analysis, encryption rate analysis, etc. All the experimental results suggest that the proposed image encryption scheme is robust and secure and can be used for secure image and video communication applications.
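To illustrate the permutation stage, the sketch below applies the standard Arnold cat map to a square image; the paper's bit-plane splitting, key-dependent quantized sequences and Bernoulli-map diffusion are not reproduced, so this is a generic cat-map shuffle rather than the proposed cipher.

import numpy as np

def arnold_cat_map(image, iterations=1):
    """Permute pixel positions of a square N x N image with the Arnold cat map
    (x, y) -> (x + y, x + 2y) mod N."""
    n = image.shape[0]
    out = image.copy()
    for _ in range(iterations):
        shuffled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                shuffled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = shuffled
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
scrambled = arnold_cat_map(img, iterations=3)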

Free

A Chaotic Lévy flight Approach in Bat and Firefly Algorithm for Gray level image Enhancement

Krishna Gopal Dhal, Iqbal Quraishi, Sanjoy Das

Research article

Recently, nature-inspired metaheuristic algorithms have been applied in the image enhancement field to enhance low-contrast images in a controlled manner. The Bat algorithm (BA) and the Firefly algorithm (FA) are among the most powerful metaheuristic algorithms. In this paper these two algorithms have been implemented with the help of chaotic sequences and Lévy flight. One of them is FA via Lévy flight, where the step size of the Lévy flight is taken from a chaotic sequence. In the Bat algorithm the local search is done via Lévy flight with a chaotic step size. A chaotic sequence exhibits the ergodicity property, which helps in better searching. These two algorithms have been applied to optimize the parameters of a parameterized high-boost filter. The entropy and the number of edge pixels of the image have been used as the objective criteria for measuring the goodness of image enhancement. The fitness criterion has been maximized in order to obtain an enhanced image with better contrast. From the experimental results it is clear that BA with chaotic Lévy flight outperforms FA with chaotic Lévy flight.
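A small sketch of a Lévy-flight step whose scale is driven by a chaotic logistic map, which is the general mechanism the abstract describes; the Bat/Firefly update rules and the high-boost filter objective are omitted, and the parameter values are assumptions chosen for illustration.

import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1):
    """Draw Lévy-distributed steps using Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, size)
    v = np.random.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def logistic(x, r=4.0):
    """Chaotic logistic map used here as the step-size scale."""
    return r * x * (1 - x)

x = 0.7                      # chaotic state
position = np.zeros(2)       # candidate solution (e.g., two filter parameters)
for _ in range(10):
    x = logistic(x)          # chaotic step scale in (0, 1)
    position = position + x * levy_step(size=2)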

Free

A Color-Texture Based Segmentation Method To Extract Object From Background

Saka Kezia, I. Santi Prabha, Vakulabharanam Vijaya Kumar

Research article

Extraction of flower regions from a complex background is a difficult task; it is an important part of flower image retrieval and recognition. Image segmentation denotes the process of partitioning an image into distinct regions. A large variety of segmentation approaches for images have been developed, and image segmentation plays an important role in image analysis. According to several authors, segmentation terminates when the observer's goal is satisfied. For this reason, a unique method that can be applied to all possible cases does not yet exist. This paper studies flower image segmentation against complex backgrounds. Based on the differences in visual characteristics between the flower and the surrounding objects, flowers from different backgrounds are separated into a single set of flower image pixels. The segmentation methodology for flower images consists of five steps. First, the original image in RGB space is transformed into the Lab color space. In the second step, the 'a' component of the Lab color space is extracted. Then segmentation by two-dimensional Otsu automatic thresholding on the 'a' channel is performed. Based on the color segmentation result and the texture differences between the background image and the required object, we extract the object using the gray-level co-occurrence matrix (GLCM) for texture segmentation. The GLCMs essentially represent the joint probability of occurrence of grey levels for pixels with a given spatial relationship in a defined region. Finally, the segmentation result is corrected by mathematical morphology methods. The algorithm was tested on a plague image database and the results prove to be satisfactory. The algorithm was also tested on medical images for nucleus segmentation.
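The color and texture steps of such a pipeline can be sketched with scikit-image (recent versions that expose graycomatrix); note that a one-dimensional Otsu threshold is used here in place of the paper's two-dimensional Otsu, and the test image is a stand-in rather than a flower image.

import numpy as np
from skimage import data
from skimage.color import rgb2lab
from skimage.filters import threshold_otsu
from skimage.feature import graycomatrix, graycoprops
from skimage.morphology import binary_opening, disk

rgb = data.astronaut()                       # stand-in RGB image
lab = rgb2lab(rgb)
a_channel = lab[:, :, 1]                     # 'a' component of the Lab space

mask = a_channel > threshold_otsu(a_channel) # 1D Otsu threshold on the 'a' channel
mask = binary_opening(mask, disk(3))         # morphological clean-up of the mask

# GLCM texture descriptor of the gray image (computed here on the whole image).
gray = rgb.mean(axis=2).astype(np.uint8)
glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
print(graycoprops(glcm, 'contrast'))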

Free

A Comparative Analysis of Image Scaling Algorithms

Chetan Suresh, Sanjay Singh, Ravi Saini, Anil K Saini

Research article

Image scaling, a fundamental task of numerous image processing and computer vision applications, is the process of resizing an image by pixel interpolation. Image scaling leads to a number of undesirable image artifacts such as aliasing, blurring and moiré. However, as the number of pixels considered for interpolation increases, the image quality improves. This poses a quality-time trade-off in which high-quality output must often be compromised in the interest of computational complexity. This paper presents a comprehensive study and comparison of different image scaling algorithms. The performance of the scaling algorithms has been reviewed on the basis of the number of computations involved and image quality. The search-table modification to the bicubic image scaling algorithm greatly reduces the computational load by avoiding massive cubic and floating-point operations without significantly losing image quality.
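A quick way to see the quality difference between interpolation orders, assuming SciPy's ndimage.zoom as the resampler; this illustrates the quality comparison only, not the search-table bicubic implementation discussed in the paper.

import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Downscale, then upscale back with different interpolation orders and compare
# the reconstruction quality (order 0 = nearest, 1 = bilinear, 3 = bicubic).
small = ndimage.zoom(img, 0.5, order=3)
for order, name in [(0, 'nearest'), (1, 'bilinear'), (3, 'bicubic')]:
    restored = ndimage.zoom(small, 2.0, order=order)
    print(name, psnr(img, restored[:64, :64]))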

Free

A Comparative Study between Bandelet and Wavelet Transform Coupled by EZW and SPIHT Coder for Image Compression

Beladgham Mohammed, Habchi Yassine, Moulay Lakhdar Abdelmouneim, Taleb-Ahmed Abdelmalik

Research article

The second generation bandelet transform is a new method based on capturing the complex geometric content of an image. We use this transform to study medical and satellite images compressed using the bandelet transform coupled with the SPIHT coder. The goal of this paper is to examine the capacity of the proposed transform to offer an optimal representation of image geometry. We are particularly interested in compressed medical images; in order to assess the compression algorithm, we compared our results with those obtained by applying the bandelet transform to satellite images. We conclude that the results obtained are very satisfactory for the medical image domain.

Free

A Comparative Study between X_Lets Family for Image Denoising

Beladgham Mohamed, Habchi Yassine, Moulay Lakhdar Abdelmouneim, Abdesselam Bassou, Taleb-Ahmed Abdelmalik

Research article

Finding a good representation is a fundamental problem in image processing. For this reason, our work focuses on developing and proposing new transforms that can represent the edges of an image more efficiently. Among these transforms we find the wavelet and ridgelet transforms; both of these classical transforms are not optimal for images with complex geometry, so we replace them with another, more effective transform named the bandelet transform. This transform is appropriate for the analysis of image edges and can preserve the high-frequency detail information of a noisy image. Denoising is one of the most interesting and widely investigated topics in the image processing area. In order to eliminate noise, we exploit in this paper the geometrical advantages offered by the bandelet transform to solve the problem of image denoising. To determine which transform yields the best visual image quality, a comparison is made between the bandelet, curvelet, ridgelet and wavelet transforms; after determining the best transform, we determine which type of image is best suited to it. Numerically, we show that the bandelet transform can significantly outperform the others and gives good performance for medical images of type TOREX, which is justified by a higher PSNR value for gray images.

Free

A Comparative Study in Wavelets, Curvelets and Contourlets as Denoising Biomedical Images

Mohamed Ali HAMDI

Research article

The contourlet transform is a special member of the emerging family of multiscale geometric transforms, developed in the last few years in an attempt to overcome the inherent limitations of traditional multiscale representations such as curvelets and wavelets. The biomedical images were denoised using first the wavelet, then the curvelet and finally the contourlet transform, and the results are presented in this paper. It has been found that the contourlet transform outperforms the curvelet and wavelet transforms in terms of signal-to-noise ratio.

Free

A Comparative Study of Feature Extraction Methods in Images Classification

Seyyid Ahmed Medjahed

Research article

Feature extraction is an important step in image classification. It allows the content of images to be represented as faithfully as possible. In this paper, we present a protocol for comparing several feature extraction techniques under different classifiers. We evaluate the performance of feature extraction techniques in the context of image classification, using both binary and multiclass classification. The performance analyses are conducted in terms of classification accuracy rate, recall, precision, F-measure and other evaluation measures. The aim of this research is to identify the feature extraction technique that most improves the classification accuracy rate and provides the most informative data for classification. We analyze the models obtained by each feature extraction method under each classifier.
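A comparison protocol of this kind can be sketched with scikit-learn: two stand-in "feature extractors" are crossed with two classifiers and scored by cross-validated accuracy. The extractors, classifiers and dataset below are placeholders, not the ones studied in the paper.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)   # stand-in image dataset

# Two illustrative "feature extractors" (raw pixels vs. PCA features) crossed
# with two classifiers; each combination is scored by cross-validated accuracy.
extractors = {'raw': StandardScaler(),
              'pca': make_pipeline(StandardScaler(), PCA(n_components=30))}
classifiers = {'svm': SVC(), 'knn': KNeighborsClassifier()}

for ename, extractor in extractors.items():
    for cname, clf in classifiers.items():
        pipe = make_pipeline(extractor, clf)
        score = cross_val_score(pipe, X, y, cv=5, scoring='accuracy').mean()
        print(ename, cname, round(score, 3))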

Free

A Comparative Study of Wavelet Thresholding for Image Denoising

Arun Dixit, Poonam Sharma

Research article

Image denoising using the wavelet transform has been successful because the wavelet transform generates a large number of small coefficients and a small number of large coefficients. The basic denoising algorithm using the wavelet transform consists of three steps: first the wavelet transform of the noisy image is computed, then thresholding is performed on the detail coefficients in order to remove noise, and finally the inverse wavelet transform of the modified coefficients is taken. This paper reviews state-of-the-art methods of image denoising using wavelet thresholding. An experimental analysis of the wavelet-based methods VisuShrink, SureShrink, BayesShrink, ProbShrink, BlockShrink and NeighShrinkSURE is performed. These wavelet-based methods are also compared with spatial domain methods such as the median filter and the Wiener filter. Results are evaluated on the basis of Peak Signal-to-Noise Ratio and the visual quality of the images. In the experiments, the wavelet-based methods perform better than the spatial domain methods. In the wavelet domain, recent methods such as ProbShrink, BlockShrink and NeighShrinkSURE performed better than the other wavelet-based methods.
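The three steps listed above map directly onto PyWavelets; the sketch below implements a universal (VisuShrink-style) threshold with soft thresholding of the detail bands, with the wavelet, decomposition level and noise model chosen for illustration.

import numpy as np
import pywt

def denoise_visushrink(noisy, wavelet='db4', level=2):
    """Three-step wavelet denoising: decompose, soft-threshold details, reconstruct."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Noise estimate from the finest diagonal detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(noisy.size))   # universal (VisuShrink) threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, threshold, mode='soft') for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

clean = np.zeros((64, 64)); clean[16:48, 16:48] = 200.0
noisy = clean + np.random.normal(0, 20, clean.shape)
denoised = denoise_visushrink(noisy)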

Free

A Comprehensive Image Steganography Tool using LSB Scheme

Sahar A. El_Rahman

Research article

Transmitting data has become fast and easy these days due to the development of the Internet, which is now the most important medium for both confidential and non-confidential communication. Security is the major concern for such communication, and steganography is the art of hiding and transmitting secret messages through carriers without their existence being exposed. This paper presents a secured model for communication using image steganography. The main contribution is a Java-based tool called IMStego that hides information in images using the Least Significant Bit (LSB) algorithm (1-LSB) and a modified least significant bit algorithm, i.e. the Least Significant 2 Bits algorithm (2-LSB). IMStego is a comprehensive security utility that provides user-friendly functionality with an interactive graphical user interface and integrated navigation capabilities. It provides the user with two operations: hiding secret data in images and extracting hidden data from images using the 1-LSB or 2-LSB algorithm. The IMStego tool hides secret information in static color images in BMP and PNG formats.
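The 2-LSB idea can be sketched in a few lines (here in Python rather than the Java of IMStego): two payload bits replace the two least significant bits of each pixel and are read back the same way. The pixel ordering and payload layout are assumptions for illustration only.

import numpy as np

def hide_2lsb(cover, data_bits):
    """Store two payload bits in the two least significant bits of each pixel value."""
    pairs = data_bits.reshape(-1, 2)
    flat = cover.flatten().copy()
    values = pairs[:, 0] * 2 + pairs[:, 1]
    flat[:len(values)] = (flat[:len(values)] & 0xFC) | values
    return flat.reshape(cover.shape)

def extract_2lsb(stego, n_bits):
    """Read back n_bits payload bits from the two LSBs of the first pixels."""
    flat = stego.flatten()[:n_bits // 2]
    values = flat & 0x03
    return np.column_stack((values >> 1, values & 1)).reshape(-1)

cover = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
bits = np.random.randint(0, 2, 64, dtype=np.uint8)
stego = hide_2lsb(cover, bits)
assert np.array_equal(extract_2lsb(stego, 64), bits)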

Free

A Comprehensive Survey on Human Skin Detection

Mohammad Reza Mahmoodi, Sayed Masoud Sayedi

Research article

Human skin detection is one of the most widely used algorithms in the vision literature and has been exploited, both directly and indirectly, in numerous applications. This area has received a great deal of attention, specifically in face analysis and human detection/tracking/recognition systems. There are several challenges, mainly arising from nonlinear illumination, camera characteristics, imaging conditions, and intra-personal variation. During the last twenty years, researchers have struggled to overcome these challenges, resulting in the publication of hundreds of papers. The aim of this paper is to survey applications, color spaces, methods and their performance, compensation techniques and benchmarking datasets on the topic of human skin detection, covering related research from more than the last two decades. In this paper, the different difficulties and challenges involved in the task of finding skin pixels are discussed. Skin segmentation algorithms are mainly based on color information, so an in-depth discussion of the effectiveness of different color spaces is provided. In addition, standard evaluation metrics and datasets make the comparison of methods both possible and reasonable. These databases and metrics are investigated and suggested for future studies. Reviewing most existing techniques will not only ease future studies, but will also help in developing better methods. These methods are classified and illustrated in detail. The variety of applications in which skin detection has been either fully or partially used is also presented.
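Rule-based color-space skin detection, one family of methods covered by the survey, can be illustrated with a BT.601 RGB-to-YCbCr conversion and commonly quoted Cb/Cr ranges; the thresholds are illustrative only, and none of the compensation or learning-based methods from the survey are reproduced.

import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion for uint8 images."""
    rgb = rgb.astype(np.float64)
    y  =  0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return y, cb, cr

def skin_mask(rgb):
    """Rule-based skin mask with commonly quoted Cb/Cr ranges (illustrative only)."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

image = np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8)
mask = skin_mask(image)
print(mask.mean())   # fraction of pixels classified as skin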

Free

A Compressed Representation of Mid-Crack Code with Huffman Code

Sohag Kabir

Research article

Contour representation of binary objects is increasingly used in image processing and pattern recognition. Chain code and crack code are popular methods of contour encoding. However, with these methods, accurate estimates of geometric features such as the area and perimeter of objects are difficult to obtain. Mid-crack code, another contour encoding method, can help to obtain more accurate estimates of the geometric features of objects. Although a considerable reduction in image size is obtained by fixed-length mid-crack code, more efficient encoding is possible by applying a variable-length encoding technique. In this paper, a compressed mid-crack code is proposed based on the Huffman code. Experiments performed on different images show that the proposed representation reduces the number of bits required to encode the contour of an image compared with the classical mid-crack code.
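A Huffman table over a symbol sequence is the core of the proposed compression; the sketch below builds one with a heap for a hypothetical mid-crack move sequence and compares the encoded length with a 3-bit fixed-length code (the move sequence and the 3-bit assumption are ours, not taken from the paper).

import heapq
from collections import Counter
from itertools import count

def huffman_codes(symbols):
    """Build a Huffman code table from a sequence of symbols."""
    freq = Counter(symbols)
    tie = count()                     # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {s: ''}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in t1.items()}
        merged.update({s: '1' + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Hypothetical mid-crack move sequence (symbols 0..7 for the possible moves).
moves = [0, 0, 2, 2, 4, 4, 6, 6, 0, 2, 0, 0, 0, 2]
table = huffman_codes(moves)
encoded = ''.join(table[m] for m in moves)
print(table, len(encoded), 'bits vs', 3 * len(moves), 'bits fixed-length')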

Free

A Connected Domain Analysis Based Color Localization Method and Its Implementation in Embedded Robot System

Fei Guo, Ji-cai Deng, Dong-bo Zhou

Research article

A target localization method based on color recognition and connected component analysis is presented in this paper. The raw image is converted to the HSI color space through a lookup table and binarized, followed by a line-by-line scan to find all the connected domains. By setting an appropriate threshold on the size of each connected domain, most pseudo-targets can be discarded and the coordinates of the target can be calculated at the same time. The main advantage of this method is the absence of an extra filtering process, so the real-time performance of the whole system is greatly improved. Another merit is that the frame-difference concept is introduced to avoid manually presetting the upper and lower bounds for binarization. Thirdly, the localization step is combined with target enumeration, which further simplifies the implementation. Experiments on our ARM system demonstrate its capability of tracking multiple targets at a mean frame rate of 15 FPS, which satisfies the requirements of real-time video processing on embedded robot systems.
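Connected-component filtering by size with centroid extraction can be sketched with SciPy; the random blob mask stands in for the HSI color-thresholded frame, and the size threshold is an illustrative value rather than the one used on the ARM system.

import numpy as np
from scipy import ndimage

# Binary mask that would come from color thresholding in HSI space
# (random blobs stand in for the real camera frame).
mask = np.random.random((120, 160)) > 0.995
mask = ndimage.binary_dilation(mask, iterations=3)

labels, n = ndimage.label(mask)                      # connected-component labelling
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))

min_size = 20                                        # size threshold to drop pseudo-targets
keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
centroids = ndimage.center_of_mass(mask, labels, keep)
print(len(keep), 'targets at', centroids)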

Free

A Dynamic Object Identification Protocol for Intelligent Robotic Systems

Akash Agrawal, Palak Brijpuria

Research article

Robotics has enabled the reduction of human intervention in most mission-critical applications. For this to happen, the foremost requirement is the identification of objects and their classification. This study aims at building a humanoid robot capable of identifying objects based on the characters on their labels. Traditionally this is facilitated by the analysis of a correlation value. However, relying only on this parameter is highly error-prone. This study enhances the efficiency of object identification by using image segmentation and thresholding methods. We have introduced a pre-processing stage for images before subjecting them to the correlation coefficient test. It was found that the proposed method gave better recognition rates than the conventional way of testing one image for correlation with another. The obtained results were statistically analysed using the ANOVA test suite. The correlation values with respect to the characters were then fed to the robot to uniquely identify a given image, pick the object using its arm and then place the object in the appropriate container.
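A minimal sketch of the correlation test with a thresholding pre-processing stage, assuming a Pearson correlation on equal-sized images; the template, noise level and threshold are placeholders rather than the study's actual data.

import numpy as np

def binarize(image, threshold=128):
    """Simple global thresholding as an illustrative pre-processing step."""
    return (image >= threshold).astype(np.float64)

def correlation(a, b):
    """Pearson correlation coefficient between two equal-sized images."""
    a, b = a.ravel(), b.ravel()
    return float(np.corrcoef(a, b)[0, 1])

template = np.random.randint(0, 256, (32, 32))          # stored character template
candidate = np.clip(template + np.random.normal(0, 40, template.shape), 0, 255)

raw_score = correlation(template, candidate)             # correlation on raw pixels
pre_score = correlation(binarize(template), binarize(candidate))  # after thresholding
print(raw_score, pre_score)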

Free
