dc.identifier.citation: AoIR-ASIST 2004 Workshop on Web Science Research Methods, Brighton (UK)
dc.description.abstract: The invisible Web, also known as the deep Web or dark matter, is an important problem for Webometrics due to difficulties of conceptualization and measurement. The invisible Web has been defined as the part of the Web that cannot be indexed by search engines, including databases and dynamically generated pages. Some authors have recognized that this is a rather subjective concept that depends on the point of view of the observer: what is visible for one observer may be invisible for others. In the generally accepted definition of the invisible Web, only the point of view of search engines has been taken into account. Search engines are considered to be the eyes of the Web, both for measuring and for searching. In addition to commercial search engines, other tools have also been used for quantitative studies of the Web, such as commercial and academic crawlers. Commercial crawlers are programs developed by software companies for purposes other than Webometrics, such as Web site management, but they can also be used for crawling Web sites and reporting on their characteristics (size, hypertext structure, embedded resources, etc.). Academic crawlers are programs developed by academic institutions for measuring Web sites for Webometric purposes. In this paper, Sherman and Price's "truly invisible Web" is studied from the point of view of crawlers. The truly invisible Web consists of pages that cannot be indexed for technical reasons. Crawler parameters differ significantly from those of search engines, because different design purposes result in different technical specifications. In addition, large differences among crawlers in their coverage of the Web have been demonstrated in previous investigations. Both aspects are clarified through an experiment in which different Web sites, including diverse file formats and built with different types of Web programming, are analyzed, on a set date, with seven commercial crawlers (Astra SiteManager, COAST WebMaster, Microsoft Site Analyst, Microsoft Content Analyzer, WebKing, Web Trends and Xenu) and an academic crawler (SocSciBot). Each Web site had previously been copied to a hard disk using a file-retrieving tool, in order to compare the local copies with the data obtained by the crawlers. The results are reported and analyzed in detail to produce a definition and classification of the invisible Web for commercial and academic crawlers.
dc.title: What is the invisible web? A crawler perspective
Appears in Collections: (CCHS-IEDCYT) Comunicaciones congresos
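
The abstract describes crawler-based invisibility at a conceptual level. As a rough, hypothetical illustration only (not part of the original record, and not the commercial or academic crawlers used in the study), the sketch below shows a minimal link-following crawler that flags URLs as "invisible" on simple technical grounds such as query strings, non-HTML file extensions, or retrieval errors; the extension list and page limit are assumptions chosen for illustration.

    # Minimal illustrative sketch; the classification criteria are assumptions,
    # not the definition produced in the paper.
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    # Assumed examples of formats a naive crawler cannot parse for further links.
    NON_HTML_EXTENSIONS = (".pdf", ".doc", ".ppt", ".zip", ".swf")

    class LinkExtractor(HTMLParser):
        """Collects href values from <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=50):
        """Crawl one site and split discovered URLs into indexable vs. invisible."""
        seen, queue = set(), [start_url]
        indexable, invisible = [], []
        site = urlparse(start_url).netloc
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen or urlparse(url).netloc != site:
                continue  # stay within the site under study
            seen.add(url)
            parts = urlparse(url)
            if parts.query or parts.path.lower().endswith(NON_HTML_EXTENSIONS):
                invisible.append(url)   # dynamically generated or non-parsable
                continue
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except Exception:
                invisible.append(url)   # unreachable for technical reasons
                continue
            indexable.append(url)
            parser = LinkExtractor()
            parser.feed(html)
            queue.extend(urljoin(url, link) for link in parser.links)
        return indexable, invisible

    # Example use: indexable, invisible = crawl("http://example.org/")

Real crawlers differ precisely in where they draw these lines (handling of query strings, scripts, embedded resources, robots.txt, file formats), which is why the paper compares several of them against local copies of the same sites.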