
    crawler

    A crawler is a program that extracts data from web pages and writes it to a database. Crawlers are also known as robots or spiders, because they search automatically and their path through the web resembles a spider's web.
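
    The fetch-and-store step can be illustrated with a minimal sketch using only Python's standard library. The URL, database file and table schema below are placeholders, not part of any particular crawler; a real crawler would add politeness rules, error handling and proper content parsing.

        # Fetch one page and write the extracted data into a database.
        # Hypothetical sketch: URL, database file and schema are placeholders.
        import sqlite3
        import urllib.request

        URL = "https://example.com/"  # placeholder seed page

        # Fetch the raw HTML of a single page.
        with urllib.request.urlopen(URL) as response:
            html = response.read().decode("utf-8", errors="replace")

        # Write the retrieved data into a database.
        conn = sqlite3.connect("crawl.db")
        conn.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, html TEXT)")
        conn.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", (URL, html))
        conn.commit()
        conn.close()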

    Spiders usually reach new websites via hyperlinks embedded in pages that have already been indexed. The retrieved content is then cached, analysed and, if necessary, indexed according to the search engine's algorithm. The indexed data then appears in the search engine's results.
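
    This link-following behaviour can be sketched as a simple breadth-first loop, again with only the standard library. The seed URL, the page limit and the bare visited set standing in for a real index are all illustrative assumptions.

        # Sketch of the link-following loop: fetch a page, extract its
        # hyperlinks and queue any page not yet visited.
        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        from urllib.request import urlopen

        class LinkExtractor(HTMLParser):
            """Collect the href targets of all <a> tags on a page."""
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        def crawl(seed, max_pages=10):
            frontier = deque([seed])  # URLs waiting to be visited
            visited = set()           # fetched URLs; stands in for a real index
            while frontier and len(visited) < max_pages:
                url = frontier.popleft()
                if url in visited:
                    continue
                try:
                    with urlopen(url) as resp:
                        html = resp.read().decode("utf-8", errors="replace")
                except (OSError, ValueError):
                    continue          # skip unreachable or malformed URLs
                visited.add(url)
                parser = LinkExtractor()
                parser.feed(html)
                for href in parser.links:
                    frontier.append(urljoin(url, href))  # resolve relative links
            return visited

        print(crawl("https://example.com/"))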

    Using special web analysis tools, web crawlers can analyse information such as page views and links and compile or compare data, as in data mining. Websites that are not linked to from any other page cannot be discovered by crawlers and therefore cannot be found through search engines.
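
    To make that last point concrete, here is a sketch of such a link analysis over a toy in-memory link graph (the page names are invented): it counts inbound links per page and shows that a page nothing links to can never be reached from the seed.

        # Toy link analysis: page -> pages it links to (invented data).
        link_graph = {
            "home":   ["about", "blog"],
            "about":  ["home"],
            "blog":   ["about", "home"],
            "orphan": ["home"],   # links out, but nothing links to it
        }

        # Count inbound links per page (a basic data-mining aggregation).
        inbound = {page: 0 for page in link_graph}
        for targets in link_graph.values():
            for target in targets:
                inbound[target] = inbound.get(target, 0) + 1

        # Pages reachable by following links from the seed page.
        reachable, stack = set(), ["home"]
        while stack:
            page = stack.pop()
            if page not in reachable:
                reachable.add(page)
                stack.extend(link_graph.get(page, []))

        print(inbound)                      # {'home': 3, 'about': 2, 'blog': 1, 'orphan': 0}
        print(set(link_graph) - reachable)  # {'orphan'}: invisible to the crawler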


    Sources:

    https://www.techtarget.com/whatis/definition/crawler

    https://www.myrasecurity.com/en/knowledge-hub/crawler/