An effective and efficient Web content extractor for optimizing the crawling process
Date
2014
Journal Title
Journal ISSN
Volume Title
Publisher
Wiley
Access Rights
info:eu-repo/semantics/closedAccess
Abstract
Classical Web crawlers use only hyperlink information in the crawling process, whereas focused crawlers download only Web pages relevant to a given topic by exploiting word information before a page is downloaded. Web pages, however, contain additional information that can be useful for crawling. We have developed a crawler, iCrawler (intelligent crawler), whose backbone is a Web content extractor that automatically pulls content out of seven different blocks of a Web page: menus, links, main texts, headlines, summaries, additional necessaries, and unnecessary texts. The extraction process consists of two steps that invoke each other to obtain information from the blocks. The first step learns which HTML tags refer to which blocks using a decision tree learning algorithm. Guided by these numerous sources of information, the crawler becomes considerably effective, achieving a relatively high accuracy of 96.37% in our block-extraction experiments. In the second step, the crawler extracts content from the blocks using string matching functions. These functions, together with the tag-to-block mapping learned in the first step, give iCrawler considerable time and storage efficiency. More specifically, iCrawler performs the second step 14 times faster than the first, and it decreases storage costs significantly, by 57.10%, compared with texts obtained through classical HTML stripping. Copyright (c) 2013 John Wiley & Sons, Ltd.
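The second step described above (routing page text into blocks via a learned tag-to-block mapping) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `TAG_TO_BLOCK` mapping is a hypothetical stand-in for what the decision-tree step would learn, and the block names are simplified versions of the seven blocks named in the abstract.

```python
from html.parser import HTMLParser

# Hypothetical mapping from HTML tags to blocks; in iCrawler this
# mapping is learned in the first step by a decision tree.
TAG_TO_BLOCK = {
    "nav": "menu",
    "a": "link",
    "p": "main_text",
    "h1": "headline",
    "h2": "headline",
    "footer": "unnecessary",
}

class BlockExtractor(HTMLParser):
    """Second-step sketch: collect page text into blocks by tag."""

    def __init__(self):
        super().__init__()
        self.blocks = {}   # block name -> list of text fragments
        self.stack = []    # currently open tags

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        # Pop back to the matching open tag (tolerates sloppy HTML).
        while self.stack:
            if self.stack.pop() == tag:
                break

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        # The innermost enclosing tag with a known block wins.
        for tag in reversed(self.stack):
            block = TAG_TO_BLOCK.get(tag)
            if block:
                self.blocks.setdefault(block, []).append(text)
                return

html_doc = "<h1>Title</h1><p>Body text with <a href='/x'>a link</a>.</p>"
extractor = BlockExtractor()
extractor.feed(html_doc)
print(extractor.blocks)
# → {'headline': ['Title'], 'main_text': ['Body text with', '.'], 'link': ['a link']}
```

Because the mapping lookup is a plain dictionary access per text node, this step avoids any learning-time cost at crawl time, which is consistent with the abstract's observation that the extraction step runs much faster than the learning step.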
Description
Keywords
Web Content Extraction, Web Crawling, Classification, Intelligent Systems, Searching Strategies
Source
Software-Practice & Experience
WoS Q Value
Q3
Scopus Q Value
Q2
Volume
44
Issue
10