    A web crawler (also known as a web spider or web robot) is a program which browses the World Wide Web in a methodical, automated manner. Other less frequently used names for web crawlers are ants, automatic indexers, bots, and worms (Kobayashi and Takeda, 2000).

    Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which will index the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a website, such as checking links or validating HTML code, and to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for spam).

    A web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
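    To make the seed-and-frontier loop concrete, here is a minimal breadth-first crawler sketch in Python using only the standard library. It illustrates the loop described above, not any particular production crawler, and it omits politeness, robots.txt handling and error recovery.

        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        from urllib.request import urlopen

        class LinkParser(HTMLParser):
            """Collect the href targets of <a> tags."""
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        def crawl(seeds, max_pages=50):
            frontier = deque(seeds)      # the crawl frontier
            seen = set(seeds)            # URLs already queued or visited
            while frontier and len(seen) < max_pages:
                url = frontier.popleft() # FIFO order gives a breadth-first crawl
                try:
                    html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except (OSError, ValueError):
                    continue             # skip unreachable or malformed URLs
                parser = LinkParser()
                parser.feed(html)
                for link in (urljoin(url, href) for href in parser.links):
                    if link.startswith("http") and link not in seen:
                        seen.add(link)   # enqueue each newly discovered URL
                        frontier.append(link)
            return seen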

    Crawling policies
    There are two important characteristics of the Web that generate a scenario in which web crawling is very difficult: its large volume and its rate of change, as there are a huge number of pages being added, changed and removed every day. Also, network speed has improved less than current processing speeds and storage capacities.

    The large volume implies that the crawler can only download a fraction of the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time the crawler is downloading the last pages from a site, it is very likely that new pages have been added to the site, or that pages have already been updated or even deleted.

    As Edwards et al. note, "Given that the bandwidth for conducting crawls is neither infinite nor free it is becoming essential to crawl the Web in a not only scalable, but efficient way if some reasonable measure of quality or freshness is to be maintained." (Edwards et al., 2001). A crawler must carefully choose at each step which pages to visit next.

    The behavior of a web crawler is the outcome of a combination of policies:

    A selection policy that states which pages to download.
    A re-visit policy that states when to check for changes to the pages.
    A politeness policy that states how to avoid overloading websites.
    A parallelization policy that states how to coordinate distributed web crawlers.
    Selection policy
    Given the current size of the Web, even large search engines cover only a portion of the publicly available content; a study by Lawrence and Giles (Lawrence and Giles, 2000) showed that no search engine indexes more than 16% of the Web. As a crawler always downloads just a fraction of the Web pages, it is highly desirable that the downloaded fraction contains the most relevant pages, and not just a random sample of the Web.

    This requires a metric of importance for prioritizing Web pages. The importance of a page is a function of its intrinsic quality, its popularity in terms of links or visits, and even of its URL (the latter is the case of vertical search engines restricted to a single top-level domain, or search engines restricted to a fixed Website). Designing a good selection policy has an added difficulty: it must work with partial information, as the complete set of Web pages is not known during crawling.

    Cho et al. (Cho et al., 1998) made the first study on policies for crawling scheduling. Their data set was a 180,000-page crawl of the stanford.edu domain, on which crawling simulations were run with different strategies. The ordering metrics tested were breadth-first, backlink count and partial PageRank calculations. One of the conclusions was that if the crawler wants to download pages with high PageRank early in the crawling process, then the partial PageRank strategy is the best, followed by breadth-first and backlink count. However, these results are for just a single domain.

    Najork and Wiener (Najork and Wiener, 2001) performed an actual crawl on 328 million pages, using breadth-first ordering. They found that a breadth-first crawl captures pages with high PageRank early in the crawl (but they did not compare this strategy against other strategies). The explanation given by the authors for this result is that "the most important pages have many links to them from numerous hosts, and those links will be found early, regardless of on which host or page the crawl originates".

    Abiteboul et al. (Abiteboul et al., 2003) designed a crawling strategy based on an algorithm called OPIC (On-line Page Importance Computation). In OPIC, each page is given an initial sum of "cash", which is distributed equally among the pages it points to. It is similar to a PageRank computation, but it is faster and is done in only one step. An OPIC-driven crawler downloads first the pages in the crawling frontier with the higher amounts of "cash". Experiments were carried out on a 100,000-page synthetic graph with a power-law distribution of in-links. However, there was no comparison with other strategies nor experiments on the real Web.
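    The following short sketch shows one reading of the OPIC idea on a toy, in-memory graph; the data structures and the skip-refetch simplification are mine, not Abiteboul et al.'s implementation.

        def opic_order(graph, seeds, steps):
            """graph: dict mapping each URL to its list of out-link URLs."""
            cash = {u: 1.0 / len(seeds) for u in seeds}   # cash held by frontier pages
            credit = {}                                   # total cash each page has received
            order = []
            for _ in range(steps):
                if not cash:
                    break
                url = max(cash, key=cash.get)   # fetch the richest page first
                amount = cash.pop(url)
                credit[url] = credit.get(url, 0.0) + amount
                order.append(url)
                out = graph.get(url, [])
                for v in out:                   # pay the cash out equally to out-links
                    cash[v] = cash.get(v, 0.0) + amount / len(out)
                # note: dangling pages (no out-links) swallow their cash here;
                # the real algorithm redistributes it (e.g. to a virtual page)
            return order, credit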

    Boldi et al. (Boldi et al., 2004) used simulation on subsets of the Web of 40 million pages from the .it domain and 100 million pages from the WebBase crawl, testing breadth-first against random ordering and an omniscient strategy. The winning strategy was breadth-first, although a random ordering also performed surprisingly well. One problem is that the WebBase crawl is biased towards the crawler used to gather the data. They also showed how poorly PageRank calculations carried out on the partial subgraphs of the Web obtained during crawling approximate the actual PageRank.

    Baeza-Yates et al. (Baeza-Yates et al., 2005) used simulation on two subsets of the Web of 3 million pages from the .gr and .cl domains, testing several crawling strategies. They showed that both the OPIC strategy and a strategy that uses the length of the per-site queues are better than breadth-first crawling, and that it is also very effective to use a previous crawl, when available, to guide the current one.

    Restricting followed links
    A crawler may only want to seek out HTML pages and avoid all other MIME types. In order to request only HTML resources, a crawler may make an HTTP HEAD request to determine a web resource's MIME type before requesting the entire resource with a GET request. To avoid making numerous HEAD requests, a crawler may alternatively examine the URL and only request the resource if the URL ends with .html, .htm or a slash. This strategy may cause numerous HTML web resources to be unintentionally skipped.

    Some crawlers may also avoid requesting any resources that have a "?" in them (are dynamically produced) in order to avoid spider traps which may cause the crawler to download an infinite number of URLs from a website.
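    A sketch of the three filtering heuristics just described (the suffix check, the "?" check, and the HEAD-based MIME check), using only the Python standard library; the helper names are illustrative.

        from urllib.parse import urlparse
        from urllib.request import Request, urlopen

        def looks_like_html(url):
            """Cheap guess from the URL alone: .html, .htm or a trailing slash."""
            path = urlparse(url).path
            return path == "" or path.endswith((".html", ".htm", "/"))

        def is_dynamic(url):
            """Dynamically produced URLs ("?"), a common source of spider traps."""
            return "?" in url

        def is_html_by_head(url):
            """More reliable, at the cost of an extra request: ask the server
            for the MIME type with HTTP HEAD before issuing the full GET."""
            req = Request(url, method="HEAD")
            with urlopen(req, timeout=10) as resp:
                return resp.headers.get_content_type() == "text/html"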

    Path-ascending crawling
    Some crawlers intend to download as many resources as possible from a particular web site. Cothey (Cothey, 2004) introduced a path-ascending crawler that would ascend to every path in each URL that it intends to crawl. For example, when given a seed URL of http://foo.org/a/b/page.html, it will attempt to crawl /a/b/, /a/, and /. Cothey found that a path-ascending crawler was very effective in finding isolated resources, or resources for which no inbound link would have been found in regular crawling.
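    A minimal sketch of path ascension for one seed URL, assuming the ancestor paths are simply appended to the frontier:

        from urllib.parse import urlparse, urlunparse

        def ascending_urls(url):
            """Return the URL plus every ancestor path, down to the site root."""
            parts = urlparse(url)
            segments = [s for s in parts.path.split("/") if s]
            urls = [url]
            # drop the last path segment repeatedly: /a/b/page.html -> /a/b/ -> /a/ -> /
            for i in range(len(segments) - 1, -1, -1):
                path = "/" + "/".join(segments[:i]) + ("/" if i else "")
                urls.append(urlunparse((parts.scheme, parts.netloc, path, "", "", "")))
            return urls

        # ascending_urls("http://foo.org/a/b/page.html") yields that URL plus
        # http://foo.org/a/b/, http://foo.org/a/ and http://foo.org/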

    Focused crawling
    The importance of a page for a crawler can also be expressed as a function of the similarity of a page to a given query. Web crawlers that attempt to download pages that are similar to each other are called focused crawlers or topical crawlers. Focused crawling was first introduced by Chakrabarti et al. (Chakrabarti et al., 1999).

    The main problem in focused crawling is that, in the context of a web crawler, we would like to be able to predict the similarity of the text of a given page to the query before actually downloading the page. A possible predictor is the anchor text of links; this was the approach taken by Pinkerton (Pinkerton, 1994) in a crawler developed in the early days of the Web. Diligenti et al. (Diligenti et al., 2000) propose using the complete content of the pages already visited to infer the similarity between the driving query and the pages that have not been visited yet. The performance of focused crawling depends mostly on the richness of links in the specific topic being searched, and focused crawling usually relies on a general Web search engine to provide starting points.
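    As a toy illustration of anchor-text prediction, one can rank the links of an already-fetched page by the word overlap between their anchor text and the driving query; real focused crawlers use far stronger similarity measures, so treat this only as the shape of the idea.

        def anchor_score(anchor_text, query):
            """Fraction of the query words that appear in the anchor text."""
            anchor = set(anchor_text.lower().split())
            terms = set(query.lower().split())
            return len(anchor & terms) / len(terms) if terms else 0.0

        def prioritize(links, query):
            """links: list of (url, anchor_text) pairs from a fetched page."""
            return sorted(links, key=lambda lk: anchor_score(lk[1], query),
                          reverse=True)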

    Crawling the Deep Web
    A vast amount of web pages lie in the deep or invisible web. These pages are typically only accessible by submitting queries to a database, and regular crawlers are unable to find these pages if there are no links that point to them. Google’s Sitemap Protocol and mod_oai (Nelson et al., 2005) are intended to allow discovery of these deep-web resources.

    Re-visit policy
    The Web has a very dynamic nature, and crawling a fraction of the Web can take a long time, usually measured in weeks or months. By the time a web crawler has finished its crawl, many events could have happened. These events can include creations, updates and deletions.

    From the search engine's point of view, there is a cost associated with not detecting an event, and thus having an outdated copy of a resource. The most used cost functions, introduced in (Cho and Garcia-Molina, 2000), are freshness and age.

    Freshness: This is a binary measure that indicates whether the local copy is accurate or not. The freshness of a page p in the repository at time t is defined as:

        F_p(t) = \begin{cases} 1 & \text{if } p \text{ is up-to-date at time } t \\ 0 & \text{otherwise} \end{cases}

    Age: This is a measure that indicates how outdated the local copy is. The age of a page p in the repository at time t is defined as:

        A_p(t) = \begin{cases} 0 & \text{if } p \text{ is not modified at time } t \\ t - \text{modification time of } p & \text{otherwise} \end{cases}

    [Figure: Evolution of freshness and age in Web crawling]

    Coffman et al. (Coffman et al., 1998) worked with a definition of the objective of a web crawler that is equivalent to freshness, but used a different wording: they propose that a crawler must minimize the fraction of time pages remain outdated. They also noted that the problem of web crawling can be modeled as a multiple-queue, single-server polling system, in which the web crawler is the server and the websites are the queues. Page modifications are the arrivals of customers, and switch-over times are the intervals between page accesses to a single website. Under this model, the mean waiting time for a customer in the polling system is equivalent to the average age for the web crawler.

    The objective of the crawler is to keep the average freshness of pages in its collection as high as possible, or to keep the average age of pages as low as possible. These objectives are not equivalent: in the first case, the crawler is just concerned with how many pages are out-dated, while in the second case, the crawler is concerned with how old the local copies of pages are.

    Two simple re-visiting policies were studied by Cho and Garcia-Molina (Cho and Garcia-Molina, 2003):

    Uniform policy: This involves re-visiting all pages in the collection with the same frequency, regardless of their rates of change.

    Proportional policy: This involves re-visiting more often the pages that change more frequently. The visiting frequency is directly proportional to the (estimated) change frequency.

    (In both cases, the repeated crawling order of pages can be done either at random or with a fixed order.)

    Cho and Garcia-Molina proved the surprising result that, in terms of average freshness, the uniform policy outperforms the proportional policy in both a simulated Web and a real Web crawl. The explanation for this result comes from the fact that, when a page changes too often, the crawler will waste time by trying to re-crawl it too fast and still will not be able to keep its copy of the page fresh.

    To improve freshness, the crawler should penalize the elements that change too often (Cho and Garcia-Molina, 2003a). The optimal re-visiting policy is neither the uniform policy nor the proportional policy: the optimal method for keeping average freshness high includes ignoring the pages that change too often, while the optimal method for keeping average age low is to use access frequencies that increase monotonically (and sub-linearly) with the rate of change of each page. In both cases the optimum is closer to the uniform policy than to the proportional policy: as Coffman et al. (Coffman et al., 1998) note, "in order to minimize the expected obsolescence time, the accesses to any particular page should be kept as evenly spaced as possible".

    Explicit formulas for the re-visit policy are not attainable in general; they are obtained numerically, as they depend on the distribution of page changes. Cho and Garcia-Molina (2003a) show that the exponential distribution is a good fit for describing page changes, while Ipeirotis et al. (2005) show how to use statistical tools to discover the parameters that affect this distribution. Note that the re-visiting policies considered here regard all pages as homogeneous in terms of quality ("all pages on the Web are worth the same"), which is not a realistic scenario, so further information about Web page quality should be included to achieve a better crawling policy.
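    The effect is easy to check numerically under the usual modeling assumption that each page changes as a Poisson process with rate lam, in which case a page re-visited every I time units has expected freshness (1 - e^(-lam*I)) / (lam*I). The change rates below are invented for illustration; the comparison, not the numbers, is the point.

        import math

        def expected_freshness(lam, interval):
            """Average freshness of a Poisson(lam) page re-visited every `interval`."""
            x = lam * interval
            return (1.0 - math.exp(-x)) / x

        rates = [0.1, 0.5, 1.0, 2.0, 8.0]   # changes per day, one page each
        budget = len(rates)                 # total re-visits per day

        # Uniform policy: every page is re-visited once a day.
        uniform = sum(expected_freshness(lam, 1.0) for lam in rates) / len(rates)

        # Proportional policy: re-visit frequency proportional to change rate.
        total = sum(rates)
        proportional = sum(expected_freshness(lam, total / (budget * lam))
                           for lam in rates) / len(rates)

        print(f"uniform:      {uniform:.3f}")       # ~0.59, the higher freshness
        print(f"proportional: {proportional:.3f}")  # ~0.39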

    Politeness policy
    As noted by Koster (Koster, 1995), the use of Web crawlers is useful for a number of tasks, but comes with a price for the general community. The costs of using Web crawlers include:

    Network resources, as crawlers require considerable bandwidth and operate with a high degree of parallelism during a long period of time.
    Server overload, especially if the frequency of accesses to a given server is too high.
    Poorly written crawlers, which can crash servers or routers, or which download pages they cannot handle.
    Personal crawlers that, if deployed by too many users, can disrupt networks and Web servers.
    A partial solution to these problems is the robots exclusion protocol, also known as the robots.txt protocol (Koster, 1996), which is a standard for administrators to indicate which parts of their Web servers should not be accessed by crawlers. This standard does not include a suggestion for the interval of visits to the same server, even though this interval is the most effective way of avoiding server overload. A non-standard extension to robots.txt uses a "Crawl-delay:" parameter to indicate the number of seconds to wait between requests, and some commercial search engines such as MSN and Yahoo adhere to this interval.
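    For instance, a robots.txt file using the non-standard extension might look like the following (the disallowed path is illustrative):

        User-agent: *
        Crawl-delay: 10
        Disallow: /cgi-bin/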

    The first proposal for the interval between connections was given in (Koster, 1993): 60 seconds. However, if pages were downloaded at this rate from a website with more than 100,000 pages over a perfect connection with zero latency and infinite bandwidth, it would take more than two months to download that website alone, while only a fraction of the resources of that Web server would be used. This does not seem acceptable.

    Cho (Cho and Garcia-Molina, 2003) uses 10 seconds as the interval between accesses, and the WIRE crawler (Baeza-Yates and Castillo, 2002) uses 15 seconds as the default. The Mercator web crawler (Heydon and Najork, 1999) follows an adaptive politeness policy: if it took t seconds to download a document from a given server, the crawler waits 10t seconds before downloading the next page. Dill et al. (Dill et al., 2002) use 1 second.
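    A sketch of that adaptive rule, with the download function supplied by the caller and k = 10 reproducing the Mercator setting:

        import time
        from urllib.parse import urlparse

        def polite_fetch(url, fetch, host_last, k=10.0):
            """Wait k times the duration of the previous download from the
            same host; host_last is a dict shared across calls."""
            host = urlparse(url).netloc
            if host in host_last:
                finished, duration = host_last[host]
                delay = finished + k * duration - time.monotonic()
                if delay > 0:
                    time.sleep(delay)
            start = time.monotonic()
            page = fetch(url)                    # caller-supplied downloader
            end = time.monotonic()
            host_last[host] = (end, end - start)
            return page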

    Anecdotal evidence from access logs shows that access intervals from known crawlers vary between 20 seconds and 3–4 minutes. It is worth noticing that even a very polite crawler, taking all the safeguards to avoid overloading Web servers, receives some complaints from Web server administrators. Brin and Page note that: "... running a crawler which connects to more than half a million servers (...) generates a fair amount of email and phone calls. Because of the vast number of people coming on line, there are always those who do not know what a crawler is, because this is the first one they have seen." (Brin and Page, 1998).

    Parallelization policy
    Main article: Distributed web crawling
    A parallel crawler is a crawler that runs multiple processes in parallel. The goal is to maximize the download rate while minimizing the overhead from parallelization and to avoid repeated downloads of the same page. To avoid downloading the same page more than once, the crawling system requires a policy for assigning the new URLs discovered during the crawling process, as the same URL can be found by two different crawling processes. Cho and Garcia-Molina (Cho and Garcia-Molina, 2002) studied two types of policies:

    Dynamic assignment: With this type of policy, a central server assigns new URLs to different crawlers dynamically. This allows the central server to, for instance, dynamically balance the load of each crawler.

    With dynamic assignment, typically the systems can also add or remove downloader processes. The central server may become the bottleneck, so most of the workload must be transferred to the distributed crawling processes for large crawls.

    There are two configurations of crawling architectures with dynamic assignments that have been described by Shkapenyuk and Suel (Shkapenyuk and Suel, 2002):

    A small crawler configuration, in which there is a central DNS resolver and central queues per website, and distributed downloaders.
    A large crawler configuration, in which the DNS resolver and the queues are also distributed.
    Static assignment: With this type of policy, there is a fixed rule stated from the beginning of the crawl that defines how to assign new URLs to the crawlers.

    For static assignment, a hashing function can be used to transform URLs (or, even better, complete website names) into a number that corresponds to the index of the corresponding crawling process. As there are external links that will go from a website assigned to one crawling process to a website assigned to a different crawling process, some exchange of URLs must occur.
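    A minimal sketch of such a hashing function, hashing the website name rather than the full URL so that a whole site stays with one process:

        import hashlib
        from urllib.parse import urlparse

        def assign(url, num_crawlers):
            """Map a URL to the index of the crawling process that owns it."""
            host = urlparse(url).netloc.lower()
            digest = hashlib.sha1(host.encode("utf-8")).digest()
            return int.from_bytes(digest[:8], "big") % num_crawlers

        # A URL discovered by process i but owned by process j != i is put in
        # an outgoing batch for j, e.g. assign("http://www.example.com/a", 4)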

    To reduce the overhead due to the exchange of URLs between crawling processes, the exchange should be done in batch, several URLs at a time, and the most cited URLs in the collection should be known by all crawling processes before the crawl (e.g.: using data from a previous crawl) (Cho and Garcia-Molina, 2002).

    An effective assignment function must have three main properties: each crawling process should get approximately the same number of hosts (balancing property); if the number of crawling processes grows, the number of hosts assigned to each process must shrink (contra-variance property); and the assignment must be able to add and remove crawling processes dynamically. Boldi et al. (Boldi et al., 2004) propose using consistent hashing, which replicates the buckets so that adding or removing a bucket does not require re-hashing the whole table, to achieve all of the desired properties.
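    A compact consistent-hashing sketch in the spirit of that proposal (the replica count and hash choice are mine):

        import bisect
        import hashlib

        def _point(key):
            """Map a string to a position on the hash ring."""
            return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

        class ConsistentHash:
            def __init__(self, crawlers, replicas=100):
                # each crawler owns `replicas` buckets spread around the ring
                self._ring = sorted((_point(f"{c}#{i}"), c)
                                    for c in crawlers for i in range(replicas))
                self._points = [p for p, _ in self._ring]

            def owner(self, host):
                """The crawler whose bucket follows the host on the ring."""
                i = bisect.bisect(self._points, _point(host)) % len(self._ring)
                return self._ring[i][1]

        ch = ConsistentHash(["crawler-0", "crawler-1", "crawler-2"])
        # ch.owner("www.example.com") -> the owning crawler; adding or removing
        # a crawler moves only the hosts that fall in its buckets' arcs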

    Web crawler architectures

    [Figure: High-level architecture of a standard web crawler]

    A crawler must have a good crawling strategy, as noted in the previous sections, but it also needs a highly optimized architecture.

    Shkapenyuk and Suel (Shkapenyuk and Suel, 2002) noted that: "While it is fairly easy to build a slow crawler that downloads a few pages per second for a short period of time, building a high-performance system that can download hundreds of millions of pages over several weeks presents a number of challenges in system design, I/O and network efficiency, and robustness and manageability."

    Web crawlers are a central part of search engines, and details of their algorithms and architecture are kept as business secrets. When crawler designs are published, there is often an important lack of detail that prevents others from reproducing the work. There are also emerging concerns about "search engine spamming", which prevent major search engines from publishing their ranking algorithms.


    URL normalization
    Crawlers usually perform some type of URL normalization in order to avoid crawling the same resource more than once. The term URL normalization, also called URL canonicalization, refers to the process of modifying and standardizing a URL in a consistent manner. Several types of normalization may be performed, including conversion of URLs to lowercase, removal of "." and ".." segments, and adding trailing slashes to the non-empty path component (Pant et al., 2004).
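    A sketch of these steps with the Python standard library; the trailing-slash heuristic (treat a last segment without a dot as a directory) is an assumption for illustration, and real canonicalizers handle more cases, such as default ports and percent-encoding.

        from urllib.parse import urlparse, urlunparse

        def normalize(url):
            p = urlparse(url)
            segments = []
            for seg in p.path.split("/"):
                if seg == "..":
                    if segments:
                        segments.pop()     # ".." cancels the previous segment
                elif seg not in (".", ""):
                    segments.append(seg)   # "." and empty segments are dropped
            path = "/" + "/".join(segments)
            if segments and "." not in segments[-1]:
                path += "/"                # illustrative directory heuristic
            return urlunparse((p.scheme.lower(), p.netloc.lower(),
                               path, "", p.query, ""))

        # normalize("HTTP://Example.COM/a/./b/../c") -> "http://example.com/a/c/"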

    Crawler identification
    Web crawlers typically identify themselves to a web server by using the User-agent field of an HTTP request. Website administrators typically examine their web servers’ log and use the user agent field to determine which crawlers have visited the web server and how often. The user agent field may include a URL where the website administrator may find out more information about the crawler. Spambots and other malicious web crawlers are unlikely to place identifying information in the user agent field, or they may mask their identity as a browser or other well-known crawler.
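    For example, a polite crawler might identify itself as follows (the bot name and contact URL are placeholders):

        from urllib.request import Request, urlopen

        req = Request("http://www.example.com/",
                      headers={"User-agent":
                               "ExampleBot/1.0 (+http://www.example.com/bot.html)"})
        # with urlopen(req, timeout=10) as resp:
        #     page = resp.read()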

    It is important for web crawlers to identify themselves so website administrators can contact the owner if needed. In some cases, crawlers may be accidentally trapped in a crawler trap or they may be overloading a web server with requests, and the owner needs to stop the crawler. Identification is also useful for administrators that are interested in knowing when they may expect their web pages to be indexed by a particular search engine.

    Examples of web crawlers
    The following is a list of published crawler architectures for general-purpose crawlers (excluding focused web crawlers), with a brief description that includes the names given to the different components and outstanding features:

    RBSE (Eichmann, 1994) was the first published web crawler. It was based on two programs: the first program, "spider", maintains a queue in a relational database, and the second program, "mite", is a modified www ASCII browser that downloads the pages from the Web.

    WebCrawler (Pinkerton, 1994) was used to build the first publicly-available full-text index of a sub-set of the Web. It was based on lib-WWW to download pages, and another program to parse and order URLs for breadth-first exploration of the Web graph. It also included a real-time crawler that followed links based on the similarity of the anchor text with the provided query.

    World Wide Web Worm (McBryan, 1994) was a crawler used to build a simple index of document titles and URLs. The index could be searched by using the grep Unix command.

    Google Crawler (Brin and Page, 1998) is described in some detail, but the reference covers only an early version of its architecture, which was written in C++ and Python. The crawler was integrated with the indexing process, because text parsing was done both for full-text indexing and for URL extraction. A URL server sends lists of URLs to be fetched by several crawling processes. During parsing, the URLs found were passed to the URL server, which checked whether each URL had been seen before; if not, the URL was added to its queue.

    CobWeb (da Silva et al., 1999) uses a central "scheduler" and a series of distributed "collectors". The collectors parse the downloaded Web pages and send the discovered URLs to the scheduler, which in turn assigns them to the collectors. The scheduler enforces a breadth-first search order with a politeness policy to avoid overloading Web servers. The crawler is written in Perl.

    Mercator (Heydon and Najork, 1999) is a modular web crawler written in Java. Its modularity arises from the use of interchangeable "protocol modules" and "processing modules". Protocol modules are related to how the Web pages are acquired (e.g. by HTTP), and processing modules are related to how the Web pages are processed. The standard processing module just parses the pages and extracts new URLs, but other processing modules can be used to index the text of the pages or to gather statistics from the Web.

    WebFountain (Edwards et al., 2001) is a distributed, modular crawler similar to Mercator but written in C++. It features a "controller" machine that coordinates a series of "ant" machines. After repeatedly downloading pages, a change rate is inferred for each page, and a non-linear programming method must be used to solve the equation system that maximizes freshness. The authors recommend using this crawling order in the early stages of the crawl and then switching to a uniform crawling order, in which all pages are visited with the same frequency.

    PolyBot [Shkapenyuk and Suel, 2002] is a distributed crawler written in C++ and Python, which is composed of a "crawl manager", one or more "downloaders" and one or more "DNS resolvers". Collected URLs are added to a queue on disk, and processed later to search for seen URLs in batch mode. The politeness policy considers both third and second level domains (e.g.: www.example.com and www2.example.com are third level domains) because third level domains are usually hosted by the same Web server.

    WebRACE (Zeinalipour-Yazti and Dikaiakos, 2002) is a crawling and caching module implemented in Java, and used as a part of a more generic system called eRACE. The system receives requests from users for downloading Web pages, so the crawler acts in part as a smart proxy server. The system also handles requests for "subscriptions" to Web pages that must be monitored: when the pages change, they must be downloaded by the crawler and the subscriber must be notified. The most outstanding feature of WebRACE is that, while most crawlers start with a set of "seed" URLs, WebRACE is continuously receiving new starting URLs to crawl from.

    UbiCrawler (Boldi et al., 2004) is a distributed crawler written in Java with no central process. It is composed of a number of identical "agents", and the assignment function is calculated using consistent hashing of the host names. There is zero overlap, meaning that no page is crawled twice, unless a crawling agent crashes (in which case another agent must re-crawl the pages of the failed agent). The crawler is designed to achieve high scalability and to be tolerant of failures.

    FAST Crawler (Risvik and Michelsen, 2002) is the crawler used by the FAST search engine, and a general description of its architecture is available. It is a distributed architecture in which each machine holds a "document scheduler" that maintains a queue of documents to be downloaded by a "document processor" that stores them in a local storage subsystem. Each crawler communicates with the other crawlers via a "distributor" module that exchanges hyperlink information.

    In addition to the specific crawler architectures listed above, there are general crawler architectures published by Cho (Cho and Garcia-Molina, 2002) and Chakrabarti (Chakrabarti, 2003).

    Open-source crawlers
    DataparkSearch is a crawler and search engine released under a GPL license.

    GNU Wget is a command-line operated crawler written in C and released under the GPL. It is typically used to mirror web and FTP sites.

    GRUB (acquired by Looksmart, no longer operational) was a distributed crawling project using an open architecture.

    Heritrix, the Internet Archive crawler (Burner, 1997), is a crawler designed for archiving periodic snapshots of a large portion of the Web. It uses several processes in a distributed fashion, and a fixed number of websites are assigned to each process. The inter-process exchange of URLs is carried out in batches with a long time interval between exchanges, as this is a costly process. The Internet Archive crawler also has to deal with the problem of changing DNS records, so it keeps a historical archive of hostname-to-IP mappings.

    ht://Dig includes a Web crawler in its indexing engine.

    HTTrack uses a web crawler to create a mirror of a website for off-line viewing. It is written in C and released under the GPL.

    Larbin is a web crawler written in C++ by Sébastien Ailleret.

    Nutch is a crawler written in Java and released under an Apache License. It can be used in conjunction with the Lucene text indexing package.

    WebBase is a crawler used by the Stanford WebBase Project.

    WebSPHINX (Miller and Bharat, 1998) is composed of a Java class library that implements multi-threaded Web page retrieval and HTML parsing, and a graphical user interface to set the starting URLs, to extract the downloaded data and to implement a basic text-based search engine.

    WIRE (Baeza-Yates and Castillo, 2002) is a web crawler written in C++ and released under the GPL. It includes several policies for scheduling page downloads and a module for generating reports and statistics on the downloaded pages, so it has been used for Web characterization.

    See also
    Data mining
    Distributed web crawling
    Google
    Macurious
    PageRank
    Spambot
    Spider trap
    References
    Abiteboul, S., Preda, M., and Cobena, G. (2003). "Adaptive on-line page importance computation". In Proceedings of the twelfth international conference on World Wide Web: 280-290.
    Baeza-Yates, R. and Castillo, C. (2002). Balancing volume, quality and freshness in web crawling. In Soft Computing Systems – Design, Management and Applications, pages 565–572, Santiago, Chile. IOS Press Amsterdam.
    Baeza-Yates, R., Castillo, C., Marin, M. and Rodriguez, A. (2005). Crawling a Country: Better Strategies than Breadth-First for Web Page Ordering. In Proceedings of the Industrial and Practical Experience track of the 14th conference on World Wide Web, pages 864–872, Chiba, Japan. ACM Press.
    Boldi, P., Codenotti, B., Santini, M., and Vigna, S. (2004a). UbiCrawler: a scalable fully distributed Web crawler. Software, Practice and Experience, 34(8):711–726.
    Boldi, P., Santini, M., and Vigna, S. (2004b). Do your worst to make the best: Paradoxical effects in pagerank incremental computations. In Proceedings of the third Workshop on Web Graphs (WAW), volume 3243 of Lecture Notes in Computer Science, pages 168-180, Rome, Italy. Springer.
    Brin, S. and Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7):107–117.
    Burner, M. (1997). Crawling towards eternity – building an archive of the World Wide Web. Web Techniques, 2(5).
    Castillo, C. (2004). Effective Web Crawling. PhD thesis, University of Chile.
    Chakrabarti, S. (2003). Mining the Web. Morgan Kaufmann Publishers. ISBN 1558607544
    Chakrabarti, S., van den Berg, M., and Dom, B. (1999). Focused crawling: a new approach to topic-specific web resource discovery. Computer Networks, 31(11–16):1623–1640.
    Cho, J., Garcia-Molina, H., and Page, L. (1998). "Efficient crawling through URL ordering". In Proceedings of the seventh conference on World Wide Web.
    Cho, J. and Garcia-Molina, H. (2000). Synchronizing a database to improve freshness. In Proceedings of ACM International Conference on Management of Data (SIGMOD), pages 117-128, Dallas, Texas, USA.
    Cho, J. and Garcia-Molina, H. (2002). Parallel crawlers. In Proceedings of the eleventh international conference on World Wide Web, pages 124–135, Honolulu, Hawaii, USA. ACM Press.
    Cho, J. and Garcia-Molina, H. (2003). Effective page refresh policies for web crawlers. ACM Transactions on Database Systems, 28(4).
    Cho, J. and Garcia-Molina, H. (2003). Estimating frequency of change. ACM Transactions on Internet Technology, 3(3).
    Cothey, V. (2004). "Web-crawling reliability". Journal of the American Society for Information Science and Technology 55 (14).
    Diligenti, M., Coetzee, F., Lawrence, S., Giles, C. L., and Gori, M. (2000). Focused crawling using context graphs. In Proceedings of 26th International Conference on Very Large Databases (VLDB), pages 527-534, Cairo, Egypt.
    Dill, S., Kumar, R., Mccurley, K. S., Rajagopalan, S., Sivakumar, D., and Tomkins, A. (2002). Self-similarity in the web. ACM Trans. Inter. Tech., 2(3):205–223.
    Eichmann, D. (1994). The RBSE spider: balancing effective search against Web load. In Proceedings of the First World Wide Web Conference, Geneva, Switzerland.
    Coffman, E. G., Liu, Z., and Weber, R. R. (1998). Optimal robot scheduling for Web search engines. Journal of Scheduling, 1(1):15–29.
    Edwards, J., McCurley, K. S., and Tomlin, J. A. (2001). "An adaptive model for optimizing performance of an incremental web crawler". In Proceedings of the Tenth Conference on World Wide Web: 106-113.
    Heydon, A. and Najork, M. (1999). Mercator: A scalable, extensible Web crawler. World Wide Web Conference, 2(4):219–229.
    Ipeirotis, P., Ntoulas, A., Cho, J., Gravano, L. (2005) Modeling and managing content changes in text databases. In Proceedings of the 21st IEEE International Conference on Data Engineering, pages 606-617, April 2005, Tokyo.
    Kobayashi, M. and Takeda, K. (2000). "Information retrieval on the web". ACM Computing Surveys 32 (2): 144-173.
    Koster, M. (1993). Guidelines for robots writers.
    Koster, M. (1995). Robots in the web: threat or treat ? ConneXions, 9(4).
    Koster, M. (1996). A standard for robot exclusion.
    Lawrence, S. and Giles, C. L. (2000). Accessibility of information on the web. Intelligence, 11(1), 32–39.
    McBryan, O. A. (1994). GENVL and WWWW: Tools for taming the web. In Proceedings of the First World Wide Web Conference, Geneva, Switzerland.
    Miller, R. and Bharat, K. (1998). Sphinx: A framework for creating personal, site-specific web crawlers. In Proceedings of the seventh conference on World Wide Web, Brisbane, Australia. Elsevier Science.
    Najork, M. and Wiener, J. L. (2001). Breadth-first crawling yields high-quality pages. In Proceedings of the Tenth Conference on World Wide Web, pages 114–118, Hong Kong. Elsevier Science.
    Nelson, M. L. , Van de Sompel, H. , Liu, X., Harrison, T. L. and McFarland, N. (2005). "mod_oai: An Apache module for metadata harvesting". In Proceedings of the 9th European Conference on Research and Advanced Technology for Digital Libraries (ECDL 2005): 509.
    Pant, G., Srinivasan, P., Menczer, F. (2004). "Crawling the Web". Web Dynamics: Adapting to Change in Content, Size, Topology and Use, edited by M. Levene and A. Poulovassilis, 153-178.
    Pinkerton, B. (1994). Finding what people want: Experiences with the WebCrawler. In Proceedings of the First World Wide Web Conference, Geneva, Switzerland.
    Risvik, K. M. and Michelsen, R. (2002). Search Engines and Web Dynamics. Computer Networks, vol. 39, pp. 289–302, June 2002.
    Shkapenyuk, V. and Suel, T. (2002). Design and implementation of a high performance distributed web crawler. In Proceedings of the 18th International Conference on Data Engineering (ICDE), pages 357-368, San Jose, California. IEEE CS Press.
    da Silva, A. S., Veloso, E. A., Golgher, P. B., Ribeiro-Neto, B. A., Laender, A. H. F., and Ziviani, N. (1999). Cobweb – a crawler for the Brazilian web. In Proceedings of String Processing and Information Retrieval (SPIRE), pages 184–191, Cancun, Mexico. IEEE CS Press.
    Zeinalipour-Yazti, D. and Dikaiakos, M. D. (2002). Design and implementation of a distributed crawler and filtering processor. In Proceedings of the Fifth Next Generation Information Technologies and Systems (NGITS), volume 2382 of Lecture Notes in Computer Science, pages 58–74, Caesarea, Israel. Springer.

    Retrieved from "https://secure.wikimedia.org/wikipedia/en/wiki/Web_crawler"
    Definition of web crawlers
    A web crawler is also known in Chinese as a web robot (網絡機器人) or web spider (網絡蜘蛛); its English names include spider, crawler, bot, robot and wanderer. Reference [15] defines the term in both a narrow and a broad sense: in the narrow sense, a spider is a software program that traverses the information space of the Internet by following hypertext links and retrieving hypertext documents over the HTTP protocol; in the broad sense, a spider is any software program that automatically retrieves Web documents using the standard HTTP protocol. The world's first "robot" program for monitoring the growth of the Internet was the World Wide Web Wanderer, developed by Matthew Gray; at first it was used only to count the number of servers on the Internet, and it was later extended to collect site domain names. In contrast to the Wanderer, Martijn Koster created ALIWEB in October 1993 [16]. ALIWEB did not use a "robot" program; instead, it relied on websites actively submitting their own information to build its link index, much like the Yahoo directory we know today. By the end of 1993, a number of search engines based on robot programs had begun to emerge, the best known being JumpStation [17], The World Wide Web Worm [18] and the Repository Based Software Engineering (RBSE) spider [19]. However, JumpStation and the WWW Worm simply ranked search results in the order in which matches were found in their databases, with no notion of relevance at all; RBSE was the first engine to introduce the degree of keyword-string matching into the ranking of search results [20]. Crawler-based engines followed in 1994; the earliest full-text-indexing crawler program was the 1994 Repository Based Software Engineering (RBSE) spider, after which spiders sprang up like mushrooms.

    Uses of web crawlers and related protocols
    Statistically, the main applications of spiders fall into the following five areas:


    1. Personal search: topic-based web page crawling
    2. Page collection: gathering pages for search engines and similar services
    3. Web statistics: counting the number of hosts on the network, the number of pages per host, and so on
    4. Site maintenance: checking for dead links
    5. Web archiving: collecting pages for archives and similar institutions, usually within a specific domain


    Spiders are a mixed blessing. When a spider visits a target website and its pages, it increases the load on the site's server and network, and a potentially more serious problem is that it may spread content that the site owner never intended to publish. The purpose of the robots.txt protocol is to tell a spider explicitly which parts of a site it may crawl and which parts it must not crawl.

    Robots.txt is a gentlemen's agreement [21]: it relies entirely on spider owners voluntarily complying with the protocol and cannot be enforced by law. There are currently two ways to implement it. The first is to add a robots meta declaration to the page itself: a tag of the form <meta name="robots" content="..."> placed between the <head> and </head> elements of the HTML document. The content attribute may take the following values:

    Index: the page may be crawled and indexed;
    Follow: the links on the page may be followed;
    Noindex: the page must not be indexed;
    Nofollow: the links on the page must not be followed.
    If index and noindex form group A and follow and nofollow form group B, the rule is that values from the same group must not be combined, while values from different groups may be combined freely.
    Meta tags are convenient to use, but they have limitations: some sites only want to reject or welcome particular spiders, and writing a tag into every page is a lot of work. It is therefore worth knowing the second form of this gentlemen's agreement, the robots.txt file. This convention places a file named robots.txt in the root directory of the website: if your domain is http://www.example.com, the file lives at http://www.example.com/robots.txt; if the domain is http://example.com, it lives at http://example.com/robots.txt. The file format is as follows:

    # introduces a comment
    User-agent: the value may be * (meaning all spiders) or the name of a specific spider, e.g. badspider
    Disallow: / (meaning all directories) or a particular directory or file

    For example:

        # robots.txt for http://www.example.com/
        User-agent: *
        Disallow: /cyberworld/map/

        # cybermapper may go anywhere
        User-agent: cybermapper
        Disallow:

    The file above means that for all spiders, /cyberworld/map/ is withheld from indexing, while the spider named cybermapper is allowed to visit any directory; the net result is that nothing is withheld from cybermapper and one directory is withheld from everyone else. Within the Disallow section, each Disallow line may name at most one directory; if there are several directories, they must be written as several lines, e.g.:

        User-agent: *
        Disallow: /tmp/
        Disallow: /private/