Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/129703


Image analysis by the method of moments using Piecewise Continuous Basis Functions (PCBF) | Análisis de imágenes mediante el método de los momentos usando funciones de base continuas a intervalos (PCBF)

Author: Domínguez, Sergio
Keywords: Content based image retrieval; Image analysis; Orthonormal basis; Invariant descriptors; Method of moments
Publication date: 2015
Publisher: Elsevier España
Citation: RIAI - Revista Iberoamericana de Automatica e Informatica Industrial 12: 69-78 (2015)
Abstract: Copyright © 2015 CEA. Published by Elsevier España, S.L. Invariants generated from moments previously extracted from an image appear frequently in the literature as one of the most powerful means of describing images, and more precisely shapes. In this paper, the use of Piecewise Continuous Basis Functions (PCBF) is proposed as an alternative to the bases traditionally used in the method of moments, all of them continuous, such as the well-known Zernike, Legendre or Tchebichev bases. The use of discontinuous bases is justified by the inherently discontinuous nature of the objects under analysis, namely images: it is well known that the contours of visible objects appear as discontinuities in the luminance values as we cross from one side of a border to the other. Analyzing such discontinuous objects by means of continuous functions can lead to undesired results, such as the Gibbs phenomenon, which can be avoided simply by shifting to discontinuous bases for the analysis, yielding better approximations of the described object. Additionally, as shown in this paper, the proposed bases can easily generate rotation invariants, a very desirable feature for a shape descriptor, given that the orientation of the shape in an image is not known in advance. Translation and scale invariance are obtained by means of a simple normalization process. Tests confirming this hypothesis are presented as well, starting with an analysis of the behavior of the proposed invariants in noisy environments, which allows fixing the number of invariants that have to be extracted. Once this description length has been determined, new experiments are carried out to assess the performance of the proposed invariants in a content-based retrieval task, both in noise-free and noisy environments, with images corrupted by Gaussian noise of different intensities.
Results confirm the hypothesis that these descriptors are very well suited for this task, showing that they can achieve results similar to those obtained with the continuous reference basis, Zernike's, but with a description that is roughly 40% shorter.
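The paper's PCBF construction itself is not reproduced here, but the "simple normalization process" the abstract refers to for translation and scale invariance is typically the standard one based on central moments. A minimal sketch of that standard normalization (function names are illustrative, not taken from the paper):

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw geometric moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    return float((xs ** p * ys ** q * img).sum())

def normalized_central_moments(img, order=3):
    """Central moments mu_pq taken about the centroid (translation invariance),
    then scale-normalized as eta_pq = mu_pq / m00^((p + q)/2 + 1)."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00   # centroid x
    cy = raw_moment(img, 0, 1) / m00   # centroid y
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    xs = xs - cx
    ys = ys - cy
    eta = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            if p + q >= 2:  # orders 0 and 1 are trivial after centering
                mu = float((xs ** p * ys ** q * img).sum())
                eta[(p, q)] = mu / m00 ** ((p + q) / 2 + 1)
    return eta
```

With this normalization, a shape and a translated, rescaled copy of it yield (up to discretization error) the same eta values; rotation invariance is what the basis-specific construction, such as the PCBF combinations proposed in the paper, must then supply.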
Identifiers: doi: 10.1016/j.riai.2014.11.006; issn: 1697-7920
Appears in collections: (CAR) Artículos
Files in this item:
accesoRestringido.pdf (15,38 kB, Adobe PDF)

NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.