Projects
 
Markov Random Field Model-Based Edge-Directed Image Interpolation
Downloads: 15

Abstract
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical approach. Instead of explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. The weighting vectors are used implicitly to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
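The paper's own GR energy (built from the length-16 weighting vectors) is not reproduced here; the following is only a minimal sketch of the general MAP-MRF machinery it relies on: simulated annealing over the unknown high-resolution pixels with a simple quadratic smoothness clique. The function name `upsample_mrf_sa` and all parameter values are illustrative.

```python
import numpy as np

def upsample_mrf_sa(lr, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    """Toy 2x interpolation by simulated annealing on a simple MRF
    smoothness energy (not the paper's GR energy)."""
    rng = np.random.default_rng(seed)
    hr = np.repeat(np.repeat(lr.astype(float), 2, axis=0), 2, axis=1)
    known = np.zeros(hr.shape, dtype=bool)
    known[::2, ::2] = True                      # low-resolution samples stay fixed

    def local_energy(img, y, x):
        # quadratic 4-neighbour clique potentials around (y, x)
        e, v = 0.0, img[y, x]
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                e += (v - img[ny, nx]) ** 2
        return e

    t = t0
    for _ in range(iters):
        y = int(rng.integers(0, hr.shape[0]))
        x = int(rng.integers(0, hr.shape[1]))
        if known[y, x]:
            continue
        old = hr[y, x]
        e_old = local_energy(hr, y, x)
        hr[y, x] = np.clip(old + rng.normal(0.0, 8.0), 0.0, 255.0)
        e_new = local_energy(hr, y, x)
        # Metropolis rule: keep downhill moves, occasionally accept uphill ones
        if e_new > e_old and rng.random() >= np.exp((e_old - e_new) / t):
            hr[y, x] = old                      # reject the uphill move
        t *= cooling
    return hr
```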
Texture Analysis and Classification with Linear Regression Model Based on Wavelet Transform
Downloads: 12

Abstract
The wavelet transform, as an important multiresolution analysis tool, has already been commonly applied to texture analysis and classification. Nevertheless, it ignores the structural information while capturing the spectral information of the texture image at different scales. In this paper, we propose a texture analysis and classification approach with the linear regression model based on the wavelet transform. This method is motivated by the observation that there exists a distinctive correlation between the sample images, belonging to the same kind of texture, at different frequency regions obtained by the 2-D wavelet packet transform. Experimentally, it was observed that this correlation varies from texture to texture. The linear regression model is employed to analyze this correlation and extract texture features that characterize the samples. Therefore, our method considers not only the frequency regions but also the correlation between these regions. In contrast, the pyramid-structured wavelet transform (PSWT) and the tree-structured wavelet transform (TSWT) do not consider the correlation between different frequency regions. Experiments show that our method significantly improves the texture classification rate in comparison with multiresolution methods, including PSWT, TSWT, the Gabor transform, and some recently proposed methods derived from these. Index Terms: linear regression, texture analysis, texture classification, wavelet transform.
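As a rough illustration of the correlation idea, the sketch below (using PyWavelets; the `db4` wavelet and level-2 packet decomposition are arbitrary choices) computes per-subband energies and fits a least-squares line between every pair of frequency regions across the training samples of one texture class, stacking the line parameters as the class signature. Function names are placeholders, not the authors' code.

```python
import numpy as np
import pywt

def subband_energies(img, wavelet="db4", level=2):
    """Mean energy of each 2-D wavelet packet node at the given level."""
    wp = pywt.WaveletPacket2D(data=img.astype(float), wavelet=wavelet,
                              mode="symmetric", maxlevel=level)
    return np.array([np.mean(np.square(node.data))
                     for node in wp.get_level(level, order="natural")])

def pairwise_regression_signature(samples, wavelet="db4", level=2):
    """Fit E_j ~ a*E_i + b over the samples of one texture class for every
    pair (i, j) of frequency regions; stack the line parameters as features."""
    E = np.vstack([subband_energies(s, wavelet, level) for s in samples])
    feats = []
    for i in range(E.shape[1]):
        for j in range(i + 1, E.shape[1]):
            a, b = np.polyfit(E[:, i], E[:, j], 1)   # least-squares line
            feats.extend([a, b])
    return np.array(feats)
```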
A New Watermarking Scheme for Color Images Captured By Mobile Phone Cameras
Downloads: 10

Abstract
A new frequency-domain watermarking scheme for color images captured by mobile phone cameras is proposed. The proposed technique embeds personal mobile phone numbers inside the image. The aim of the scheme is to protect the copyright ownership of the image. Each bit of the decimal digits is inserted into one low-frequency coefficient of one of the DCT blocks of the host image. A DCT coefficient selection (DCS) process is applied to increase invisibility; this process finds the coefficient with the maximum magnitude. Different embedding locations are selected depending on the spatial frequencies of the host image. The proposed algorithm achieves high PSNR values and is found to be robust against JPEG compression and different image manipulation algorithms.
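A hedged sketch of the block-DCT embedding idea follows. It uses a generic parity/quantisation rule on one fixed low-frequency coefficient per 8×8 block rather than the paper's DCS selection, and all parameter values (`coeff`, `strength`, the phone number) are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, coeff=(2, 1), strength=8.0):
    """Embed one bit by forcing the parity of a quantised low-frequency
    DCT coefficient of an 8x8 block."""
    c = dctn(block, norm="ortho")
    q = int(np.round(c[coeff] / strength))
    if q % 2 != bit:
        q += 1
    c[coeff] = q * strength
    return idctn(c, norm="ortho")

def embed_number(img, phone="9876543210", **kw):
    """Spread the 4-bit codes of the phone digits over successive blocks."""
    bits = [int(b) for d in phone for b in format(int(d), "04b")]
    out, k = img.astype(float).copy(), 0
    for y in range(0, img.shape[0] - 7, 8):
        for x in range(0, img.shape[1] - 7, 8):
            if k == len(bits):
                return out
            out[y:y + 8, x:x + 8] = embed_bit(out[y:y + 8, x:x + 8], bits[k], **kw)
            k += 1
    return out
```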
Authentication Using Graphical Passwords: Effects of Tolerance and Image Choice
Downloads: 15

Abstract
Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. We have developed one such system, called PassPoints, and evaluated it with human users. The results of the evaluation were promising with respect to memorability of the graphical password. In this study we expand our human factors testing by studying two issues: the effect of tolerance, or margin of error, in clicking on the password points, and the effect of the image used in the password system. In our tolerance study, results show that accurate memory for the password is strongly reduced when using a small tolerance (10 × 10 pixels) around the user's password points. This may occur because users fail to encode the password points in memory in the precise manner that is necessary to remember the password over a lapse of time. In our image study we compared user performance on four everyday images. The results indicate that there were few significant differences in performance across the images. This preliminary result suggests that many images may support memorability in graphical password systems.
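The tolerance test itself is simple; a minimal sketch (the tolerance value and the point format are illustrative, not PassPoints' actual implementation) is:

```python
def within_tolerance(clicks, stored, tol=10):
    """Accept a PassPoints-style attempt only if every click lands inside a
    tol x tol box centred on the corresponding stored point."""
    if len(clicks) != len(stored):
        return False
    half = tol / 2.0
    return all(abs(cx - sx) <= half and abs(cy - sy) <= half
               for (cx, cy), (sx, sy) in zip(clicks, stored))

# Example: a 10 x 10 pixel window around each of the stored points
print(within_tolerance([(101, 52), (200, 198)], [(100, 50), (203, 200)]))
```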
Semantic Texton forests for image categorization and segmentation
Downloads: 14

Abstract
We propose semantic texton forests, efficient and powerful new low-level features. These are ensembles of decision trees that act directly on image pixels, and therefore do not need the expensive computation of filter-bank responses or local descriptors. They are extremely fast to both train and test, especially compared with k-means clustering and nearest-neighbor assignment of feature descriptors. The nodes in the trees provide (i) an implicit hierarchical clustering into semantic textons, and (ii) an explicit local classification estimate. Our second contribution, the bag of semantic textons, combines a histogram of semantic textons over an image region with a region prior category distribution. The bag of semantic textons is computed over the whole image for categorization, and over local rectangular regions for segmentation. Including both histogram and region prior allows our segmentation algorithm to exploit both textural and semantic context. Our third contribution is an image-level prior for segmentation that emphasizes those categories that the automatic categorization believes to be present. We evaluate on two datasets including the very challenging VOC 2007 segmentation dataset. Our results significantly advance the state-of-the-art in segmentation accuracy, and furthermore, our use of efficient decision forests gives at least a five-fold increase in execution speed.
Image Classification using Random Forests and Ferns
Downloads: 12

Abstract
We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al. [16] from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM, for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.
Blood Vessel Segmentation from Color Retinal Images using Unsupervised Texture Classification
Downloads: 14

Abstract
Automated blood vessel segmentation is an important issue for assessing retinal abnormalities and diagnosing many diseases. The segmentation of vessels is complicated by huge variations in local contrast, particularly in the case of the minor vessels. In this paper, we propose a new method of texture-based vessel segmentation to overcome this problem. We use the Gaussian and L*a*b* perceptually uniform color spaces, together with the original RGB, for texture feature extraction on retinal images. A bank of Gabor energy filters is used to analyze the texture features, from which a feature vector is constructed for each pixel. The fuzzy C-means (FCM) clustering algorithm is used to classify the feature vectors into vessel or non-vessel based on the texture properties. From the FCM clustering output we attain the final segmented image after a post-processing step. We compare our method with hand-labeled ground truth segmentations of five images and achieve 84.37% sensitivity and 99.61% specificity.
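A compact sketch of the pipeline is given below, using a generic Gabor energy filter bank (scikit-image) and a plain fuzzy C-means loop; the paper's specific color spaces, filter parameters, and post-processing are not reproduced, and all names and values are illustrative.

```python
import numpy as np
from skimage.filters import gabor

def gabor_energy_features(gray, freqs=(0.1, 0.2, 0.3), n_orient=4):
    """Per-pixel Gabor energy responses stacked into feature vectors."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            re, im = gabor(gray, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.hypot(re, im))          # Gabor energy
    return np.stack(feats, axis=-1).reshape(-1, len(freqs) * n_orient)

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means; returns the membership matrix (n_samples x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

# Vessel / non-vessel labels would be the argmax of the memberships,
# followed by the paper's post-processing step.
```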
Vision Processing for Real time 3-D Data Acquisition Based on Coded Structured Light
Downloads: 14

Abstract
Structured light vision systems have been successfully used for accurate measurement of 3D surfaces in computer vision. However, their applications have so far been mainly limited to scanning stationary objects, since tens of images have to be captured for recovering one 3D scene. This paper presents an idea for real-time acquisition of 3D surface data by a specially coded vision system. To achieve 3D measurement for a dynamic scene, the data acquisition must be performed with only a single image. A principle of uniquely color-encoded pattern projection is proposed to design a color matrix for improving the reconstruction efficiency. The matrix is produced by a special code sequence and a number of state transitions. A color projector is controlled by a computer to generate the desired color patterns in the scene. The unique indexing of the light codes is crucial here for color projection, since it is essential that each light grid be uniquely identified by incorporating local neighborhoods so that 3D reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe such a vision processing method for fast 3D data acquisition. Practical experimental performance is provided to analyze the efficiency of the proposed methods.
Sub sampling Image Compression using Al-Alaoui Back propagation Algorithm
Downloads: 14

Abstract
With the advances in wireless communications and embedded systems, efficient storage and transmission of images and video over limited bandwidth is required, and novel image compression techniques need to be investigated. An artificial neural network subsampling image compression method using the Al-Alaoui backpropagation algorithm is presented [1-5]. The Al-Alaoui algorithm is a weighted mean-square-error (MSE) approach to pattern recognition. It employs cloning of the erroneously classified samples to increase the population of their corresponding classes. Using the Al-Alaoui backpropagation, the obtained simulation results show a faster convergence rate, zero misclassified pixels, and an improvement in PSNR of around 2 dB.
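One illustrative reading of the cloning idea (not the exact Al-Alaoui weighted-MSE update) is to duplicate the misclassified samples between training passes so that the next pass weights them more heavily; a sketch with scikit-learn, where all names and parameters are placeholders, is:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_with_cloning(X, y, epochs=30, seed=0):
    """Backpropagation training in which erroneously classified samples are
    cloned between passes, approximating a weighted-MSE emphasis on errors."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=seed)
    classes = np.unique(y)
    Xc, yc = X, y
    for _ in range(epochs):
        clf.partial_fit(Xc, yc, classes=classes)
        wrong = clf.predict(X) != y
        Xc = np.vstack([X, X[wrong]])        # clone the errors
        yc = np.concatenate([y, y[wrong]])
    return clf
```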
Semi-supervised SVM batch mode active learning for image retrieval
Downloads: 12

Abstract
Active learning has been shown as a key technique for improving content-based image retrieval (CBIR) performance. Among various methods, support vector machine (SVM) active learning is popular for its application to relevance feedback in CBIR. However, the regular SVM active learning has two main drawbacks when used for relevance feedback. First, SVM often suffers from learning with a small number of labeled examples, which is the case in relevance feedback. Second, SVM active learning usually does not take into account the redundancy among examples, and therefore could select multiple examples in relevance feedback that are similar (or even identical) to each other. In this paper, we propose a novel scheme that exploits both semi-supervised kernel learning and batch mode active learning for relevance feedback in CBIR. In particular, a kernel function is first learned from a mixture of labeled and unlabeled examples. The kernel will then be used to effectively identify the informative and diverse examples for active learning via a min-max framework. An empirical study with relevance feedback of CBIR showed that the proposed scheme is significantly more effective than other state-of-the-art approaches.
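As a generic stand-in for the batch-mode criterion (not the paper's semi-supervised kernel or its exact min-max formulation), the sketch below greedily picks unlabeled images that lie close to the SVM boundary while penalising similarity to images already chosen in the batch; all parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def select_batch(clf, X_pool, k=5, lam=0.5):
    """Greedy batch selection mixing uncertainty (small |decision value|)
    with diversity (low kernel similarity to already selected items).
    `clf` is a fitted binary SVC; `X_pool` holds the unlabeled features."""
    margin = np.abs(clf.decision_function(X_pool))   # distance to boundary
    K = rbf_kernel(X_pool)
    chosen = []
    for _ in range(k):
        redund = K[:, chosen].max(axis=1) if chosen else np.zeros(len(X_pool))
        score = -margin - lam * redund
        if chosen:
            score[chosen] = -np.inf                  # do not pick twice
        chosen.append(int(np.argmax(score)))
    return chosen
```

The selected indices would be shown to the user for relevance labeling and then folded back into the training set for the next feedback round.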
Feature based wavelet shrinking algorithm for image denoising
Downloads: 15

Abstract
A selective wavelet shrinkage algorithm for digital image denoising is presented. The performance of this method is an improvement upon other methods proposed in the literature, and the method is algorithmically simple, yielding large computational savings. The improved performance and computational speed of the proposed wavelet shrinkage algorithm are presented and experimentally compared with established methods. The denoising method incorporated in the proposed algorithm involves a two-threshold validation process for real-time selection of wavelet coefficients. The two-threshold criteria select wavelet coefficients based on their absolute value, spatial regularity, and regularity across multiresolution scales. The proposed algorithm takes image features into consideration in the selection process. Statistically, most images have regular features resulting in connected subband coefficients. Therefore, the resulting subbands of wavelet-transformed images in large part do not contain isolated coefficients. In the proposed algorithm, coefficients are selected according to their magnitude, and only subsets of those selected coefficients which exhibit spatially regular behavior remain for image reconstruction. Therefore, two thresholds are used in the coefficient selection process. The first threshold is used to distinguish coefficients of large magnitude and the second is used to distinguish coefficients of spatial regularity. The performance of the proposed wavelet denoising technique is an improvement upon several other established wavelet denoising techniques, and it is computationally efficient enough to facilitate real-time image-processing applications.
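A rough sketch of a two-criterion wavelet selection (magnitude plus a crude spatial-regularity check on the 8-neighbourhood) is given below using PyWavelets; the thresholds and the regularity test are ad hoc stand-ins for the paper's validation process.

```python
import numpy as np
import pywt

def two_threshold_denoise(img, wavelet="db4", level=3, t_mag=20.0, t_reg=2):
    """Keep a detail coefficient only if its magnitude exceeds t_mag AND at
    least t_reg of its 8 neighbours also exceed t_mag (illustrative rule)."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        kept = []
        for band in detail:
            big = np.abs(band) > t_mag
            # count "large" neighbours of every coefficient (wrap-around edges)
            nb = sum(np.roll(np.roll(big, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
            kept.append(np.where(big & (nb >= t_reg), band, 0.0))
        out.append(tuple(kept))
    return pywt.waverec2(out, wavelet)
```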
Restoration of DWI Data Using a Rician LMMSE Estimator
Downloads: 12

Abstract
This paper introduces and analyzes a linear minimum mean square error (LMMSE) estimator using a Rician noise model and its recursive version (RLMMSE) for the restoration of diffusion weighted images. A method to estimate the noise level based on local estimations of mean or variance is used to automatically parameterize the estimator. The restoration performance is evaluated using quality indexes and compared to alternative estimation schemes. The overall scheme is simple, robust, fast, and improves estimations. Filtering diffusion weighted magnetic resonance imaging (DW-MRI) with the proposed methodology leads to more accurate tensor estimations. Real and synthetic datasets are analyzed.
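The non-recursive estimator has a closed form; the sketch below follows the standard LMMSE expression for Rician magnitude data, with local sample moments taken over a sliding window. The formula is quoted from the general LMMSE literature rather than re-derived here, and the paper's recursive variant and automatic noise estimation are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rician_lmmse(M, sigma, win=5):
    """Closed-form LMMSE estimate of the underlying amplitude from a Rician
    magnitude image M with known noise level sigma (illustrative sketch)."""
    M = M.astype(float)
    m2 = uniform_filter(M ** 2, win)               # local E[M^2]
    m4 = uniform_filter(M ** 4, win)               # local E[M^4]
    var_m2 = np.maximum(m4 - m2 ** 2, 1e-12)
    K = np.clip(1.0 - 4.0 * sigma ** 2 * (m2 - sigma ** 2) / var_m2, 0.0, 1.0)
    a2 = m2 - 2.0 * sigma ** 2 + K * (M ** 2 - m2)  # LMMSE estimate of A^2
    return np.sqrt(np.maximum(a2, 0.0))
```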
Efficient Nonlocal Means for Denoising of Textural Patterns
Downloads: 12

Abstract
This paper contributes two novel techniques in the context of image restoration by nonlocal filtering. First, we introduce an efficient implementation of the nonlocal means filter based on arranging the data in a cluster tree. The structuring of the data allows for a fast and accurate preselection of similar patches. In contrast to previous approaches, the preselection is based on the same distance measure as used by the filter itself. It allows for large speedups, especially when the search for similar patches covers the whole image domain, i.e., when the filter is truly nonlocal. However, also in the windowed version of the filter, the cluster tree approach compares favorably to previous techniques in terms of quality versus computational cost. Second, we suggest an iterative version of the filter that is derived from a variational principle and is designed to yield nontrivial steady states. It proves to be particularly useful for restoring regular, textured patterns.
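For orientation, the windowed nonlocal means baseline that the paper accelerates is available in scikit-image; the cluster-tree preselection and the iterative variant are not part of this library call, so the snippet is only a reference point, with parameter values chosen arbitrarily.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def nlm_baseline(img):
    """Plain windowed nonlocal means on a grayscale float image."""
    sigma = float(np.mean(estimate_sigma(img)))
    return denoise_nl_means(img, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
```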
Indexing of Satellite Images with Different Resolutions by Wavelet Features
Downloads: 15

Abstract
Space agencies are rapidly building up massive image databases. A particularity of these databases is that they are made of images with different, but known, resolutions. In this paper, we introduce a new scheme allowing us to compare and index images with different resolutions. This scheme relies on a simplified acquisition model of satellite images and uses continuous wavelet decompositions. We establish a correspondence between scales which permits us to compare wavelet decompositions of images having different resolutions. We validate the approach through several matching and classification experiments, and we show that taking the acquisition process into account yields better results than just using scaling properties of wavelet features.
Discriminative Analysis of Lip Motion Features for Speaker Identification and Speech-Reading
Downloads: 12

Abstract
There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications. This paper proposes using explicit lip motion information, instead of or in addition to lip intensity and/or geometry information, for speaker identification and speech-reading within a unified feature selection and discrimination analysis framework, and addresses two important issues: 1) Is using explicit lip motion information useful, and, 2) if so, what are the best lip motion features for these two applications? The best lip motion features for speaker identification are considered to be those that result in the highest discrimination of individual speakers in a population, whereas for speech-reading, the best features are those providing the highest phoneme/word/phrase recognition rate. Several lip motion feature candidates have been considered including dense motion features within a bounding box about the lip, lip contour motion features, and combination of these with lip shape features. Furthermore, a novel two-stage, spatial, and temporal discrimination analysis is introduced to select the best lip motion features for speaker identification and speech-reading applications. Experimental results using a hidden-Markov-model-based recognition system indicate that using explicit lip motion information provides additional performance gains in both applications, and lip motion features prove more valuable in the case of speech-reading application.
Texture Analysis and Segmentation Using Modulation Features, Generative Models, and Weighted Curve Evolution
Downloads: 12

Abstract
In this work we approach the analysis and segmentation of natural textured images by combining ideas from image analysis and probabilistic modeling. We rely on AM-FM texture models and specifically on the Dominant Component Analysis (DCA) paradigm for feature extraction. This method provides a low-dimensional, dense and smooth descriptor, capturing essential aspects of texture, namely scale, orientation, and contrast. Our contributions are at three levels of the texture analysis and segmentation problems: First, at the feature extraction stage we propose a regularized demodulation algorithm that provides more robust texture features and explore the merits of modifying the channel selection criterion of DCA. Second, we propose a probabilistic interpretation of DCA and Gabor filtering in general, in terms of Local Generative Models. Extending this point of view to edge detection facilitates the estimation of posterior probabilities for the edge and texture classes. Third, we propose the weighted curve evolution scheme that enhances the Region Competition/ Geodesic Active Regions methods by allowing for the locally adaptive fusion of heterogeneous cues. Our segmentation results are evaluated on the Berkeley Segmentation Benchmark, and compare favorably to current state-of-the-art methods.
Reversible Integer Color Transform
Downloads: 13

Abstract
In this correspondence, we introduce a systematic algorithm that can convert any 3 × 3 color transform into a reversible integer-to-integer transform. We also discuss ways to improve accuracy and reduce implementation complexity. We derive the integer RGB-to-KLA, IV1V2, YCbCr, DCT, YUV, and YIQ transforms that are optimal in accuracy.
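As one concrete example of a 3 × 3 color transform made reversible with integer rounding, the well-known JPEG 2000 reversible color transform is sketched below; it is shown for illustration only and is not the paper's own derivation.

```python
import numpy as np

def rct_forward(rgb):
    """JPEG 2000 reversible color transform (integer, exactly invertible)."""
    R, G, B = (rgb[..., i].astype(np.int32) for i in range(3))
    Y = (R + 2 * G + B) >> 2          # floor((R + 2G + B) / 4)
    Cb = B - G
    Cr = R - G
    return np.stack([Y, Cb, Cr], axis=-1)

def rct_inverse(ycc):
    Y, Cb, Cr = (ycc[..., i] for i in range(3))
    G = Y - ((Cb + Cr) >> 2)          # exact inverse of the floor above
    R = Cr + G
    B = Cb + G
    return np.stack([R, G, B], axis=-1)
```

The floor in the forward transform cancels exactly in the inverse, so the round trip reproduces the original integer RGB values bit for bit.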
Expansion Embedding Techniques for Reversible Watermarking
Downloads: 25

Abstract
Reversible watermarking enables the embedding of useful information in a host signal without any loss of host information. Tian's difference-expansion technique is a high-capacity, reversible method for data embedding. However, the method suffers from undesirable distortion at low embedding capacities and lack of capacity control due to the need for embedding a location map. We propose a histogram shifting technique as an alternative to embedding the location map. The proposed technique improves the distortion performance at low embedding capacities and mitigates the capacity control problem. We also propose a reversible data-embedding technique called prediction-error expansion. This new technique better exploits the correlation inherent in the neighborhood of a pixel than the difference-expansion scheme. Prediction-error expansion and histogram shifting combine to form an effective method for data embedding. The experimental results for many standard test images show that prediction-error expansion doubles the maximum embedding capacity when compared to difference expansion. There is also a significant improvement in the quality of the watermarked image, especially at moderate embedding capacities.
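The core prediction-error expansion step is compact; a minimal per-pixel sketch (with a simplistic two-neighbour predictor, and without the overflow handling or the histogram shifting that the paper adds) is:

```python
def pee_embed_pixel(x, left, up, bit):
    """Embed one bit into a pixel by prediction-error expansion, using the
    mean of the left and upper neighbours as a (simplistic) predictor."""
    pred = (int(left) + int(up)) // 2
    pe = int(x) - pred                 # prediction error
    return pred + (pe << 1) + bit      # expanded error carries the bit

def pee_extract_pixel(xw, left, up):
    """Recover the embedded bit and the original pixel value exactly."""
    pred = (int(left) + int(up)) // 2
    pe_w = int(xw) - pred
    bit = pe_w & 1
    return bit, pred + (pe_w >> 1)
```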
Blind and Semi-Blind Deblurring of Natural Images
Downloads: 0

Abstract
A method for blind image deblurring is presented. The method only makes weak assumptions about the blurring filter and is able to undo a wide variety of blurring degradations. To overcome the ill-posedness of the blind image deblurring problem, the method includes a learning technique which initially focuses on the main edges of the image and gradually takes details into account. A new image prior, which includes a new edge detector, is used. The method is able to handle unconstrained blurs, but also allows the use of constraints or of prior information on the blurring filter, as well as the use of filters defined in a parametric manner. Furthermore, it works in both single-frame and multiframe scenarios. The use of constrained blur models appropriate to the problem at hand, and/or of multiframe scenarios, generally improves the deblurring results. Tests performed on monochrome and color images, with various synthetic and real-life degradations, with and without noise, in single-frame and multiframe scenarios, showed good results, both in subjective terms and in terms of the increase in signal-to-noise ratio (ISNR) measure. In comparison with other state-of-the-art methods, our method yields better results, and is shown to be applicable to a much wider range of blurs.
Combining Local Filtering and Multiscale Analysis for Edge, Ridge, and Curvilinear Objects Detection
Downloads: 0

Abstract
This paper presents a general method for detecting curvilinear structures, like filaments or edges, in noisy images. This method relies on a novel technique, the feature-adapted beamlet transform (FABT), which is the main contribution of this paper. It combines the well-known beamlet transform (BT), introduced by Donoho, with local filtering techniques in order to improve both the detection performance and the accuracy of the BT. Moreover, as the desired feature detector is chosen to belong to the class of steerable filters, our transform requires only O(N log N) operations, where N = n² is the number of pixels. Besides providing a fast implementation of the FABT on discrete grids, we present a statistically controlled method for curvilinear object detection. To extract significant objects, we propose an algorithm in four steps: 1) compute the FABT, 2) normalize beamlet coefficients, 3) select meaningful beamlets thanks to a fast energy-based minimization, and 4) link beamlets together in order to get a list of objects. We present an evaluation on both synthetic and real data, and demonstrate substantial improvements of our method over classical feature detectors.
Image Thumbnails That Represent Blur and Noise
Downloads: 0

Abstract
The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since the standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise generating component improves the results for noisy images, but degrades the results for textured images. The blur generating component of the new thumbnails may always be used to advantage. The decision to use the noise generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.
Generic Lossless Visible Watermarking
Downloads: 0

Abstract
A novel method for generic visible watermarking with a capability of lossless image recovery is proposed. The method is based on the use of deterministic one-to-one compound mappings of image pixel values for overlaying a variety of visible watermarks of arbitrary sizes on cover images. The compound mappings are proved to be reversible, which allows for lossless recovery of original images from watermarked images. The mappings may be adjusted to yield pixel values close to those of desired visible watermarks. Different types of visible watermarks, including opaque monochrome and translucent full color ones, are embedded as applications of the proposed generic approach. A two-fold monotonically increasing compound mapping is created and proved to yield more distinctive visible watermarks in the watermarked image. Security protection measures by parameter and mapping randomizations have also been proposed to deter attackers from illicit image recoveries. Experimental results demonstrating the effectiveness of the proposed approach are also included.
Automatic Color Based Reassembly of Fragmented Images and Paintings
Downloads: 0

Abstract
The problem of reassembling image fragments arises in many scientific fields, such as forensics and archaeology. In the field of archaeology, the pictorial excavation findings are almost always in the form of painting fragments. The manual execution of this task is very difficult, as it requires a great amount of time, skill, and effort. Thus, the automation of such work is very important and can lead to faster, more efficient painting reassembly and to a significant reduction in the human effort involved. In this paper, an integrated method for automatic color-based 2-D image fragment reassembly is presented. The proposed 2-D reassembly technique is divided into four steps. Initially, the image fragments which are probably spatially adjacent are identified using techniques employed in content-based image retrieval systems. The second operation is to identify the matching contour segments for every retained couple of image fragments, via a dynamic programming technique. The next step is to identify the optimal transformation in order to align the matching contour segments. Many registration techniques have been evaluated to this end. Finally, the overall image is reassembled from its properly aligned fragments. This is achieved via a novel algorithm, which exploits the alignment angles found during the previous step. In each stage, the most robust algorithms having the best performance are investigated and their results are fed to the next step. We have experimented with the proposed method using digitally scanned images of actual torn pieces of paper image prints and produced very satisfactory reassembly results.
Active Reranking for Web Image Search
Downloads: 0

Abstract
Image search reranking methods usually fail to capture the user's intention when the query term is ambiguous. Therefore, reranking with user interactions, or active reranking, is highly demanded to effectively improve the search performance. The essential problem in active reranking is how to target the user's intention. To complete this goal, this paper presents a structural information based sample selection strategy to reduce the user's labeling efforts. Furthermore, to localize the user's intention in the visual feature space, a novel local-global discriminative dimension reduction algorithm is proposed. In this algorithm, a submanifold is learned by transferring the local geometry and the discriminative information from the labelled images to the whole (global) image database. Experiments on both synthetic datasets and a real Web image search dataset demonstrate the effectiveness of the proposed active reranking scheme, including both the structural information based active sample selection strategy and the local-global discriminative dimension reduction algorithm.
Distributed Image Coding for Digital Image Recovery From the Print-Scan Channel
Downloads: 0

Abstract
A printed digital photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a method for approximating the original digital image by combining a scan of the printed photograph with digital auxiliary information kept together with the print. We formulate and solve the approximation problem using a Wyner-Ziv coding framework. During encoding, the Wyner-Ziv auxiliary information consists of a small amount of digital data composed of a number of sampled luminance pixel blocks and a number of sampled color pixel values to enable subsequent accurate registration and color-reproduction during decoding. The registration and color information is augmented by an additional amount of digital data encoded using Wyner-Ziv coding techniques that recovers residual errors and lost high spatial frequencies. The decoding process consists of scanning the printed photograph, together with a two step decoding process. The first decoding step, using the registration and color auxiliary information, generates a side-information image which registers and color corrects the scanned image. The second decoding step uses the additional Wyner-Ziv layer together with the side-information image to provide a closer approximation of the original, reducing residual errors and restoring the lost high spatial frequencies. The experimental results confirm the reduced digital storage needs when the scanned print assists in the digital reconstruction.
On-line Learning of Mutually Orthogonal Subspaces for Face Recognition by Image Sets
Downloads: 0

Abstract
We address the problem of face recognition by matching image sets. Each set of face images is represented by a subspace (or linear manifold) and recognition is carried out by subspace-to-subspace matching. In this paper, 1) a new discriminative method that maximises orthogonality between subspaces is proposed. The method improves the discrimination power of the subspace-angle-based face recognition method by maximising the angles between different classes. 2) We propose a method for on-line updating of the discriminative subspaces as a mechanism for continuously improving recognition accuracy. 3) A further enhancement called the locally orthogonal subspace method is presented to maximise the orthogonality between competing classes. Experiments using 700 face image sets have shown that the proposed method outperforms relevant prior art and effectively boosts its accuracy by online learning. It is shown that the method for online learning delivers the same solution as the batch computation at far lower computational cost, and the locally orthogonal method exhibits improved accuracy. We also demonstrate the merit of the proposed face recognition method on portal scenarios of the Multiple Biometric Grand Challenge.
Misalignment-Robust Face Recognition
Downloads: 0

Abstract
Subspace learning techniques for face recognition have been widely studied in the past three decades. In this paper, we study the problem of general subspace-based face recognition under the scenarios with spatial misalignments and/or image occlusions. For a given subspace derived from training data in a supervised, unsupervised, or semi-supervised manner, the embedding of a new datum and its underlying spatial misalignment parameters are simultaneously inferred by solving a constrained ℓ1-norm optimization problem, which minimizes the ℓ1 error between the misalignment-amended image and the image reconstructed from the given subspace along with its principal complementary subspace. A byproduct of this formulation is the capability to detect the underlying image occlusions. Extensive experiments on spatial misalignment estimation, image occlusion detection, and face recognition with spatial misalignments and/or image occlusions all validate the effectiveness of our proposed general formulation for misalignment-robust face recognition.
Efficient Compression of Encrypted Grayscale Images
Downloads: 0

Abstract
Lossless compression of encrypted sources can be achieved through Slepian-Wolf coding. For encrypted real-world sources, such as images, the key to improve the compression efficiency is how the source dependency is exploited. Approaches in the literature that make use of Markov properties in the Slepian-Wolf decoder do not work well for grayscale images. In this correspondence, we propose a resolution progressive compression scheme which compresses an encrypted image progressively in resolution, such that the decoder can observe a low-resolution version of the image, study local statistics based on it, and use the statistics to decode the next resolution level. Good performance is observed both theoretically and experimentally.
Multiscale AM-FM Demodulation and Image Reconstruction Methods With Improved Accuracy
Downloads: 0

Abstract
We develop new multiscale amplitude-modulation frequency-modulation (AM-FM) demodulation methods for image processing. The approach is based on three basic ideas: (i) AM-FM demodulation using a new multiscale filterbank, (ii) new, accurate methods for instantaneous frequency (IF) estimation, and (iii) multiscale least-squares AM-FM reconstructions. In particular, we introduce a variable-spacing local linear phase (VS-LLP) method for improved IF estimation and compare it to an extended quasi-local method and the quasi-eigenfunction approximation (QEA). It turns out that the new VS-LLP method is a generalization of the QEA method in which the best integer spacing between the samples is chosen adaptively as a function of frequency. We also introduce a new quasi-local method (QLM) for IF and instantaneous amplitude (IA) estimation and discuss some of its advantages and limitations. The new IF estimation methods lead to significantly improved estimates. We present different multiscale decompositions to show that the proposed methods can be used to reconstruct and analyze general images.
Registering a MultiSensor Ensemble of Images
Downloads: 0

Abstract
Many registration scenarios involve aligning more than just two images. These image sets-called ensembles-are conventionally registered by choosing one image as a template, and every other image is registered to it. This pairwise approach is problematic because results depend on which image is chosen as the template. The issue is particularly acute for multisensor ensembles because different sensors create images with different features. Also, pairwise methods use only a fraction of the available data at a time. In this paper, we propose a maximum-likelihood clustering method that registers all the images in a multisensor ensemble simultaneously. Experiments involving rigid-body and affine transformations show that the clustering method is more robust and accurate than competing pairwise registration methods. Moreover, the clustering results can be used to form a rudimentary segmentation of the image ensemble.
A Robust Fuzzy Local Information C-Means Clustering Algorithm
Downloads: 0

Abstract
This paper presents a variation of the fuzzy c-means (FCM) algorithm that provides image clustering. The proposed algorithm incorporates the local spatial information and gray level information in a novel fuzzy way. The new algorithm is called fuzzy local information C-means (FLICM). FLICM can overcome the disadvantages of the known fuzzy c-means algorithms and at the same time enhances the clustering performance. The major characteristic of FLICM is the use of a fuzzy local (both spatial and gray level) similarity measure, aiming to guarantee noise insensitiveness and image detail preservation. Furthermore, the proposed algorithm is fully free of the empirically adjusted parameters (a, λg, λs, etc.) incorporated into all other fuzzy c-means algorithms proposed in the literature. Experiments performed on synthetic and real-world images show that the FLICM algorithm is effective and efficient, providing robustness to noisy images.
Adaptive Kernel-Based Image Denoising Employing Semi-Parametric Regularization
Downloads: 0

Abstract
The main contribution of this paper is the development of a novel approach, based on the theory of Reproducing Kernel Hilbert Spaces (RKHS), for the problem of noise removal in the spatial domain. The proposed methodology has the advantage that it is able to remove any kind of additive noise (impulse, Gaussian, uniform, etc.) from any digital image, in contrast to the most commonly used denoising techniques, which are noise dependent. The problem is cast as an optimization task in an RKHS, by taking advantage of the celebrated Representer Theorem in its semi-parametric formulation. The semi-parametric formulation, although known in theory, has to our knowledge so far found limited application. However, in the image denoising problem, its use is dictated by the nature of the problem itself. The need for edge preservation naturally leads to such a modeling. Examples verify that in the presence of Gaussian noise the proposed methodology performs well compared to wavelet-based techniques and outperforms them significantly in the presence of impulse or mixed noise.
Adaptive Color Feature Extraction Based on Image Color Distributions
Downloads: 0

Abstract
This paper proposes an adaptive color feature extraction scheme that considers the color distribution of an image. Based on the binary quaternion-moment-preserving (BQMP) thresholding technique, the proposed extraction methods, fixed cardinality (FC) and variable cardinality (VC), are able to extract color features that preserve the color distribution of an image up to the third moment and to substantially reduce the distortion incurred in the extraction process. In addition to utilizing the earth mover's distance (EMD) as the distance measure for our color features, we also devise an efficient and effective distance measure, comparing histograms by clustering (CHIC). Moreover, the efficient implementation of our extraction methods is explored. With slight modification of the BQMP algorithm, our extraction methods are equipped with the capability of exploiting the concurrent property of hardware implementation. The experimental results show that our hardware implementation can achieve approximately a two-order-of-magnitude improvement over the software implementation. It is noted that minimizing the distortion incurred in the extraction process can enhance the accuracy of subsequent image applications, and we evaluate the usefulness of the new extraction methods by applying them to content-based image retrieval (CBIR). Our experimental results show that the proposed extraction methods can enhance the average retrieval precision rate by 25% over that of a traditional color feature extraction method.
   
 