Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/108858
Title: Real-Time fusion of visual images and laser data images for safe navigation in outdoor environments

Authors: García-Alegre Sánchez, María C.; Martín, David; Guinea García-Alegre, Domingo M.; Guinea Díaz, Domingo
Keywords: Real-Time fusion; Visual images; Laser data images; Safety; Location; Environment perception
Publication date: Jun-2011
Citation: Maria C. Garcia-Alegre, David Martin, D. Miguel Guinea and Domingo Guinea (2011). Real-Time Fusion of Visual Images and Laser Data Images for Safe Navigation in Outdoor Environments. In: Sensor Fusion - Foundation and Applications, Ciza Thomas (Ed.), InTech, ISBN: 978-953-307-446-7, DOI: 10.5772/16690. Available from: http://www.intechopen.com/books/sensor-fusion-foundation-and-applications/real-time-fusion-of-visual-images-and-laser-data-images-for-safe-navigation-in-outdoor-environments
Abstract: [EN] In recent years, two-dimensional laser range finders mounted on vehicles have become a fruitful solution for meeting safety and environment-recognition requirements (Keicher & Seufert, 2000), (Stentz et al., 2002), (DARPA, 2007). They provide accurate real-time range measurements over large angular fields at a fixed height above the ground plane, and enable robots and vehicles to perform a variety of tasks more confidently by fusing images from visual cameras with range data (Baltzakis et al., 2003). Lasers have traditionally been used in industrial surveillance applications to detect unexpected objects and persons in indoor environments. In the last decade, laser range finders have moved from indoor to outdoor rural and urban applications for 3D imaging (Yokota et al., 2004), vehicle guidance (Barawid et al., 2007), autonomous navigation (Garcia-Pérez et al., 2008), and object recognition and classification (Lee & Ehsani, 2008), (Edan & Kondo, 2009), (Katz et al., 2010).

Unlike industrial applications, which deal with simple, repetitive and well-defined objects, camera-laser systems on board off-road vehicles require advanced real-time techniques and algorithms to deal with dynamic, unexpected objects. Natural environments are complex and loosely structured, with great differences among consecutive scenes and scenarios. Vision systems still present severe drawbacks caused by lighting variability, which depends on unpredictable weather conditions. Camera-laser object feature fusion and classification remains a challenge within the paradigm of artificial perception and mobile robotics in outdoor environments, in the presence of dust, dirt, rain, and extreme temperature and humidity. Task-driven, real-time perception of relevant objects is a central issue for deciding subsequent actions in safe unmanned navigation. In comparison with industrial automation systems, the precision required in object location is usually low, as is the speed of most rural vehicles, which operate in bounded and loosely structured outdoor environments.

To this aim, the current work focuses on the development of algorithms and strategies for fusing 2D laser data and visual images, to accomplish real-time detection and classification of unexpected objects close to the vehicle and thereby guarantee safe navigation. The class information can then be integrated within the global navigation architecture, in control modules such as stop, obstacle avoidance, tracking or mapping.

Section 2 includes a description of the commercial vehicle, the robot-tractor DEDALO, and the vision systems on board. Section 3 addresses some drawbacks of outdoor perception. Section 4 analyses the proposed fusion method for laser data and visual images, focused on reducing the visual image to the region of interest wherein objects are detected by the laser. Section 5 describes two segmentation methods for extracting the reduced area of the visual image (ROI) resulting from the fusion process. Section 6 presents the colour-based classification results for the largest segmented object in the region of interest. Section 7 outlines some conclusions; acknowledgements and references appear in Section 8 and Section 9.
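As an illustration of the kind of laser-to-image fusion the abstract outlines, the following minimal Python sketch projects close-range 2D laser returns into a camera image and crops the surrounding region of interest. It is not the chapter's code: the intrinsics K, the laser-to-camera transform (R, t), and the laser_roi helper with its max_range and margin parameters are hypothetical placeholders chosen for a generic pinhole-camera setup.

    # Illustrative sketch (not the authors' implementation): project 2D laser
    # returns into a camera image and crop the ROI around nearby objects.
    import numpy as np

    K = np.array([[700.0,   0.0, 320.0],   # hypothetical pinhole intrinsics
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)                          # hypothetical laser-to-camera rotation
    t = np.array([0.0, 0.3, 0.0])          # hypothetical laser-to-camera offset (m)

    def laser_roi(ranges, angles, image, max_range=5.0, margin=40):
        """Project laser returns closer than max_range into the image and
        return the cropped ROI around them (None if nothing projects)."""
        near = ranges < max_range
        if not np.any(near):
            return None
        # Scan points in the laser frame: the scan plane sits at a fixed
        # height above the ground, so z = 0 in that frame.
        pts = np.stack([ranges[near] * np.cos(angles[near]),
                        ranges[near] * np.sin(angles[near]),
                        np.zeros(near.sum())], axis=1)
        cam = pts @ R.T + t                # laser frame -> camera frame
        cam = cam[cam[:, 2] > 0.1]         # keep points in front of the camera
        if cam.size == 0:
            return None
        pix = cam @ K.T
        pix = pix[:, :2] / pix[:, 2:3]     # perspective division -> pixel coords
        h, w = image.shape[:2]
        u0 = int(max(pix[:, 0].min() - margin, 0))
        u1 = int(min(pix[:, 0].max() + margin, w))
        # The laser samples a single height, so the ROI is widened more
        # generously in the vertical direction to cover the whole object.
        v0 = int(max(pix[:, 1].min() - 3 * margin, 0))
        v1 = int(min(pix[:, 1].max() + 3 * margin, h))
        return image[v0:v1, u0:u1]

The cropped ROI would then feed the segmentation and colour-based classification steps that Sections 5 and 6 of the chapter describe.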
Publisher's version: http://dx.doi.org/10.5772/16690
URI: http://hdl.handle.net/10261/108858
DOI: 10.5772/16690
ISBN: 978-953-307-446-7
Appears in collections: (CAR) Libros y partes de libros
This item is licensed under a Creative Commons License.