Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/4297
Title: What is the invisible web? A crawler perspective
Author: Arroyo, Natalia
Keywords: Invisible web
Crawlers
Cybermetrics
Web invisible
Cybermetría
Issue Date: 2004
Citation: AoIR-ASIST 2004 Workshop on Web Science Research Methods, Brighton (UK)
Abstract: The invisible Web, also known as the deep Web or dark matter, is an important problem for Webometrics because of difficulties of conceptualization and measurement. The invisible Web has been defined as the part of the Web that cannot be indexed by search engines, including databases and dynamically generated pages. Some authors have recognized that this is a rather subjective concept that depends on the point of view of the observer: what is visible to one observer may be invisible to others. In the generally accepted definition of the invisible Web, only the point of view of search engines has been taken into account. Search engines are considered the eyes of the Web, both for measuring and for searching. In addition to commercial search engines, other tools have also been used for quantitative studies of the Web, such as commercial and academic crawlers. Commercial crawlers are programs developed by software companies for purposes other than Webometrics, such as Web site management, but they can also be used to crawl Web sites and report on their characteristics (size, hypertext structure, embedded resources, etc.). Academic crawlers are programs developed by academic institutions to measure Web sites for Webometric purposes. In this paper, Sherman and Price’s “truly invisible Web” is studied from the point of view of crawlers. The truly invisible Web consists of pages that cannot be indexed for technical reasons. Crawler parameters differ significantly from those of search engines, because different design purposes result in different technical specifications. In addition, large differences among crawlers in their coverage of the Web have been demonstrated in previous investigations. Both aspects are clarified through an experiment in which different Web sites, including diverse file formats and built with different types of Web programming, are analyzed, on a set date, with seven commercial crawlers (Astra SiteManager, COAST WebMaster, Microsoft Site Analyst, Microsoft Content Analyzer, WebKing, Web Trends and Xenu) and an academic crawler (SocSciBot). Each Web site had previously been copied to a hard disk, using a file-retrieving tool, in order to compare it with the data obtained by the crawlers. The results are reported and analyzed in detail to produce a definition and classification of the invisible Web for commercial and academic crawlers.
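To make the crawler's point of view concrete, the minimal sketch below (not part of the original paper; the seed URL, page limit, and same-site rule are illustrative assumptions) shows a breadth-first crawler in Python that only discovers pages reachable through static hyperlinks. Content that can only be reached through forms, scripts, or database queries, or that sits inside file formats the crawler does not parse, never enters its frontier, which is the sense in which such pages are "truly invisible" to it.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags in a static HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed, max_pages=50):
    """Breadth-first crawl that only follows static hyperlinks.

    Pages reachable solely through forms, scripts, or database queries
    are never discovered: they are invisible from this crawler's
    point of view.
    """
    seen = {seed}
    queue = deque([seed])
    found = []
    while queue and len(found) < max_pages:
        url = queue.popleft()
        try:
            with urlopen(url, timeout=10) as response:
                content_type = response.headers.get("Content-Type", "")
                if "text/html" not in content_type:
                    # Non-HTML formats (PDF, etc.) are recorded but not parsed,
                    # so any links inside them stay undiscovered.
                    found.append(url)
                    continue
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable or forbidden pages are simply skipped
        found.append(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute, _ = urldefrag(urljoin(url, link))
            if absolute.startswith(seed) and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return found


if __name__ == "__main__":
    for page in crawl("http://example.com/"):
        print(page)

Comparing the list returned by such a crawler with a full copy of the site on disk, as the experiment described above does with commercial and academic crawlers, makes the uncrawled remainder measurable.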
URI: http://hdl.handle.net/10261/4297
Appears in Collections: (CCHS-IEDCYT) Comunicaciones congresos
Files in This Item:
File        Description    Size        Format
R-18.pdf                   32.97 kB    Adobe PDF


NOTE: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.