Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/129703



Image analysis by the method of moments using Piecewise Continuous Basis Functions (PCBF) | Análisis de imágenes mediante el método de los momentos usando funciones de base continuas a intervalos (PCBF)

Authors: Domínguez, Sergio
Keywords: Content based image retrieval; Image analysis; Invariant descriptors; Method of moments; Orthonormal basis
Issue Date: 2015
Publisher: Elsevier España
Citation: RIAI - Revista Iberoamericana de Automática e Informática Industrial 12: 69-78 (2015)
Abstract: Copyright © 2015 CEA. Published by Elsevier España, S.L. Invariants generated from moments previously extracted from an image appear frequently in the literature as one of the most powerful means of describing images, and more precisely shapes. In this paper, the use of Piecewise Continuous Basis Functions (PCBF) is proposed as an alternative to the bases traditionally used in the method of moments, all of them continuous, such as the well-known Zernike, Legendre or Tchebichev bases. The use of discontinuous bases is justified by the discontinuous nature of the objects under analysis, namely images: it is well known that the contours of visible objects appear as discontinuities in the series of luminance values as we cross from one side of a border to the other. Analyzing such discontinuous objects by means of continuous functions can lead to undesired results, such as the Gibbs phenomenon, which can be avoided simply by shifting to discontinuous bases for the analysis, yielding better approximations of the described object. Additionally, as shown in this paper, the proposed bases can easily generate rotation invariants, a very desirable feature for a shape descriptor, given that the orientation a shape will have in an image is not known in advance. Translation and scale invariance are obtained by means of a simple normalization process. Tests confirming this hypothesis are also presented, starting with an analysis of the behavior of the proposed invariants in noisy environments, which allows fixing the number of invariants that have to be extracted. Once this description length has been determined, new experiments assess the performance of the proposed invariants in a content based retrieval task, in both noise-free and noisy environments, with images corrupted by different Gaussian noise intensities.
Results confirm the hypothesis that these descriptors are very well suited to this task, showing that they can achieve results similar to those obtained with the continuous reference basis, Zernike's, but with a description that is roughly 40% shorter.
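To illustrate the general idea the abstract describes, the following is a minimal sketch of moment-based rotation invariants. It uses complex geometric moments, a standard construction in the method-of-moments literature, not the paper's PCBF basis: under a rotation of the image about its centroid, the complex moment c_pq picks up only a phase factor, so its magnitude |c_pq| is a rotation invariant. All names here are illustrative, not taken from the paper.

```python
import numpy as np

def complex_moment(img, p, q):
    """Central complex moment c_pq = sum z^p * conj(z)^q * f(x, y),
    with pixel coordinates z = x + iy centered on the image centroid.
    Centering on the centroid provides translation invariance."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00
    z = (xs - cx) + 1j * (ys - cy)
    return (z ** p * np.conj(z) ** q * img).sum()

def rotation_invariants(img, orders=((1, 1), (2, 0), (2, 1))):
    """Rotating the image by angle t maps c_pq -> exp(i(p - q)t) * c_pq,
    so the magnitudes |c_pq| are unchanged by rotation."""
    return [abs(complex_moment(img, p, q)) for p, q in orders]

# A simple rectangular test shape; a 90-degree rotation maps pixels
# exactly onto grid positions, so the invariants should match closely.
shape = np.zeros((8, 8))
shape[1:4, 2:7] = 1.0
inv_a = rotation_invariants(shape)
inv_b = rotation_invariants(np.rot90(shape))
```

Here `inv_a` and `inv_b` agree to numerical precision, while the raw complex moments themselves differ in phase. The paper's descriptors differ in the basis used (piecewise continuous rather than polynomial), but the invariance mechanism described in the abstract is of this kind, with scale invariance added by normalization.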
Identifiers: doi: 10.1016/j.riai.2014.11.006; issn: 1697-7920
Appears in Collections: (CAR) Artículos
Files in This Item:
File: accesoRestringido.pdf | Size: 15,38 kB | Format: Adobe PDF

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.