Web crawler

A web crawler or spider is a computer program that automatically fetches the contents of web pages. Starting from one or more pages, it follows the links it finds to reach more pages. The program then analyses each page's content, for example to index it by certain search terms. Search engines commonly use web crawlers to build their indexes.[1]
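
As a rough illustration (not part of the original article), the sketch below shows this fetch-and-follow loop in Python using only the standard library. The start URL and the max_pages limit are placeholder assumptions; a real crawler would also honour robots.txt, rate limits, and politeness rules.

# Minimal crawler sketch: fetch pages breadth-first and follow links.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10):
    """Fetch pages breadth-first, following links, up to max_pages."""
    queue = deque([start_url])
    seen = set()
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to load
        # "Analyse" step: here we only extract links; a search engine
        # would also index the page text by its search terms.
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith(("http://", "https://")):
                queue.append(absolute)
    return seen


if __name__ == "__main__":
    for page in crawl("https://example.com/"):
        print(page)

The queue gives breadth-first order, and the seen set stops the crawler from fetching the same page twice, which matters because pages often link to each other.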

References

  1. Masanès, Julien (February 15, 2007). Web Archiving. Springer. p. 1. ISBN 978-3-540-46332-0.