Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/179760
Visual Semantic Re-ranker for Text Spotting

Authors: Sabir, Ahmed; Moreno-Noguer, Francesc; Padró, Lluís
Keywords: Text spotting; Deep learning; Semantic visual context
Issue Date: 2018
Publisher: Springer Nature
Citation: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 884-892 (2018)
Series: Lecture Notes in Computer Science
Abstract: Many current state-of-the-art methods for text recognition are based on purely local information and ignore the semantic correlation between text and its surrounding visual context. In this paper, we propose a post-processing approach that improves the accuracy of text spotting by exploiting the semantic relation between the text and the scene. We initially rely on an off-the-shelf deep neural network that provides a series of text hypotheses for each input image. These text hypotheses are then re-ranked by their semantic relatedness with the objects in the image. As a result of this combination, the performance of the original network is boosted at a very low computational cost. The proposed framework can be used as a drop-in complement for any text-spotting algorithm that outputs a ranking of word hypotheses. We validate our approach on the ICDAR'17 shared task dataset.
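The re-ranking idea in the abstract can be illustrated with a minimal sketch: combine each hypothesis's recognizer confidence with its semantic relatedness to a detected scene object, then sort. The embedding vectors, the linear combination, and the `alpha` weight below are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of re-ranking text hypotheses by visual-semantic context.
# Toy word vectors stand in for real embeddings; alpha is an assumed weight.
import math

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank(hypotheses, context_vec, embeddings, alpha=0.5):
    """Re-rank (word, recognizer_score) pairs by blending the recognizer
    confidence with semantic relatedness to the scene context vector."""
    scored = []
    for word, score in hypotheses:
        vec = embeddings.get(word, [0.0] * len(context_vec))
        relatedness = cosine(vec, context_vec)
        scored.append((word, alpha * score + (1 - alpha) * relatedness))
    return sorted(scored, key=lambda p: p[1], reverse=True)

# Toy example: a detected "coffee" object makes "latte" more plausible
# than the visually similar string "lathe", despite a lower raw score.
embeddings = {
    "latte": [0.9, 0.1],
    "lathe": [0.1, 0.9],
    "coffee": [1.0, 0.0],
}
hyps = [("lathe", 0.55), ("latte", 0.50)]
ranked = rerank(hyps, embeddings["coffee"], embeddings)
```

Because the re-ranker only touches the recognizer's output list, it works as a drop-in post-processing step for any text-spotting pipeline that emits ranked word hypotheses.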
Description: Paper presented at the 23rd Iberoamerican Congress, CIARP 2018, held in Madrid, 19-22 November 2018.
Publisher version (URL): https://doi.org/10.1007/978-3-030-13469-3_102
Appears in Collections: (IRII) Libros y partes de libros
Files in This Item:
Visual Semantic_Sabir.pdf (963,56 kB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.