NWA-PCUG Newsletter Article, December 2005
Indexing the Web: Spiders, Web Crawlers & Bots
by Brian K. Lewis, Ph.D.,
Sarasota PC Monitor, Sarasota FL PC Users Group
http://www.spcug.org


Have you ever wondered how search engines such as Google manage to answer your queries so rapidly? How can they search the web that fast, usually finding the words you ask for in less than a second? Well, they don't. The searching actually goes on constantly, 24/7, and the mechanism they use is just a modification of what you use for browsing the web.

Although you may have heard about spiders, web crawlers and web bots, they don't actually traverse the web any more than your web browser does (Internet Explorer, Firefox or whatever browser you use). Instead, they download web pages, which are then scanned, and the significant words are added to an index.

To simplify the terminology, I will refer to all the web-searching programs as “spiders”. (It takes less space and is easier to type.) These spiders are programs designed to find web addresses (URLs) and to download the pages. Some also do the indexing of the words on the page. However, Google uses a separate indexing program and stores the downloaded pages for future reference. Now if a single spider were being used to locate and download pages, the task would really be impossible. While they were graduate students at Stanford, Sergey Brin and Lawrence Page, the originators of Google, published a paper describing a system that used three spiders, each keeping about 300 connections open simultaneously. With a fourth spider added, they could download about 600 pages per second. This paper referred to the prototype that became the commercial Google enterprise. Even with the prototype system they were able to download and index 24 million pages in a week. Their current methodology is proprietary and has not been made public, but it is probably a significant improvement over their prototype system.

We can use the original Google system as a model of what a search engine could use to prepare the index and database of web pages that you access when you send a query. The first step is to send a list of URLs to the spider to download. This is done by a server that maintains the master list of URLs. The spider downloads each page and also follows any hyperlinks it contains to other pages. The addresses of pages linked from the original list are sent back to the server, which checks whether they are already on the list; if not, they are added. (Not every spider uses a URL server.) The spider continues crawling the web until it reaches a dead end, that is, a page with no further links.
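
To make that loop concrete, here is a rough sketch in Python of what one crawl cycle might look like. The names (crawl, LinkExtractor), the seed list and the page limit are my own illustration for the example, not Google's actual code.

# A minimal sketch of the crawl loop described above: download a page,
# pull out its links, and send any new addresses back to the URL list.
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag on a downloaded page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=10):
    frontier = deque(seed_urls)      # URLs waiting to be downloaded
    seen = set(seed_urls)            # URLs already known to the "URL server"
    pages = {}                       # downloaded page text, keyed by URL
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue                 # skip pages that fail to download
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)          # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)                 # new address goes on the list
                frontier.append(absolute)
    return pages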

As I mentioned earlier, a spider isn't just working with one page; it has hundreds of connections open to different pages. Given that there are billions of pages on the Web, even with thousands of spiders collecting information, only a small fraction of the entire web is scanned. Some web sites, such as those with news or other rapidly changing information, are visited hourly. Every spider has a re-visitation policy that determines how frequently a page will be revisited and checked for changes.
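
For illustration only, here is a hedged sketch of how a spider might keep many connections open at once, using a simple thread pool from Python's standard library. Real crawlers use more sophisticated asynchronous I/O; the 300-connection figure simply mirrors the number quoted earlier.

# A sketch of downloading many pages in parallel, one worker per open connection.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return url, response.read()
    except (OSError, ValueError):
        return url, None             # record the failure and move on

def fetch_many(urls, connections=300):
    with ThreadPoolExecutor(max_workers=connections) as pool:
        return dict(pool.map(fetch, urls))   # URL -> raw page bytes (or None)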

There is another general policy that is usually programmed into these spiders, called the “politeness” policy. It is used to prevent the overloading of web sites. After all, there is a finite limit to bandwidth, and it would be possible to overwhelm a web site with visits from multiple spiders in a short period of time. This policy provides for an interval of time to elapse between accesses by a spider; the interval seems to vary from 20 seconds to 3-4 minutes. This applies where multiple pages need to be downloaded from a single server. Revisiting indexed and stored web sites occurs at less frequent intervals.
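
As a rough illustration, a per-site politeness delay could be tracked the way the sketch below does. The 20-second default is taken from the range mentioned above, and the class name is mine, not any particular crawler's.

# A minimal sketch of a politeness policy: wait a fixed interval between
# requests to the same host before downloading another page from it.
import time
from urllib.parse import urlparse

class PolitenessPolicy:
    def __init__(self, delay_seconds=20.0):
        self.delay = delay_seconds
        self.last_visit = {}          # host -> time of the most recent request

    def wait_if_needed(self, url):
        host = urlparse(url).netloc
        now = time.monotonic()
        earliest = self.last_visit.get(host, 0.0) + self.delay
        if now < earliest:
            time.sleep(earliest - now)   # pause until the interval has elapsed
        self.last_visit[host] = time.monotonic()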

However, even this politeness policy is sometimes inadequate. Frequent visits by spiders may result in complaints being sent to the owner of the spider. So it is also possible to add code to a web page that asks the spider not to access or download a page or pages. This can be done by adding meta tags in the page header or by placing a robots.txt file in the root directory of the web site. This is especially appropriate for game pages. These pages use a dynamic format that changes when pages are viewed or links are followed. When a spider downloads these pages, the game program may respond as if a very high-speed player were logged on. This can create problems for the program and may result in crashing the game server. So we now have the robot exclusion protocol being used by owners of web pages that do not want their pages included in the search engine indexing.
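
Python's standard library happens to include a parser for this protocol, so a hedged sketch of how a well-behaved spider could honor it might look like the following; the example rules and URLs are hypothetical.

# A sketch of checking the robot exclusion protocol before downloading a page.
# A page can make the same request individually with a header tag such as
# <meta name="robots" content="noindex, nofollow">.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /game/",      # ask all spiders to stay out of the game pages
]
parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("MySpider", "http://www.example.com/game/board.html"))  # expected: False
print(parser.can_fetch("MySpider", "http://www.example.com/about.html"))       # expected: True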

In the original Google system the web pages were sent to another program, referred to as the indexer. This program sorts through every word on the page and stores them in a database, with the exception of simple words such as “a”, “an” and “the”. However, simply entering the words into a database is not sufficient. Each word has to be tied to the particular page it came from, its location on that page, and a relative ranking of its importance. The frequency with which a word appears on the page, as well as its position on the page, may be used in determining its weight or relative rank; words in the title or near the top of the page may be ranked as more important. So each stored word includes the URL and a calculated weight in an encoded format.
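
A much simplified sketch of such an indexer is shown below; the stop-word list and the weighting rule (boosting words near the top of the page) are stand-ins for whatever the real system used.

# A simplified indexer: every significant word is stored with the URL it came
# from, its position on the page, and a crude weight.
from collections import defaultdict

STOP_WORDS = {"a", "an", "the"}

def index_page(url, text, index):
    words = text.lower().split()
    for position, word in enumerate(words):
        if word in STOP_WORDS:
            continue
        weight = 2.0 if position < 20 else 1.0   # words near the top count more
        index[word].append((url, position, weight))
    return index

index = defaultdict(list)
index_page("http://www.example.com/", "The spider crawls the web ...", index)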

The word database is then indexed to speed the retrieval of the information. This is usually done by building a hash table. Hashing evens out the alphabetical sections so that it takes no longer to find a word starting with “z” than one starting with a more popular letter like “m”. It also separates the index from the actual entry for the word, which improves the efficiency of storage. The indexing and the hash table also speed the overall retrieval of the information. The complete web page is stored in a separate location. Once the indexing process is completed, the information is available for your query.
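
Continuing the sketch above, Python's dictionary is itself a hash table, so a lookup takes the same time whether the word starts with “z” or “m”, and the full pages can live in a separate structure; the names here are illustrative.

# The word index is a hash table, so lookups do not scan alphabetically, and
# the complete pages are stored separately, keyed by URL.
stored_pages = {"http://www.example.com/": "The spider crawls the web ..."}

print(hash("zebra"), hash("mouse"))       # hash values decide where entries live

for url, position, weight in index.get("spider", []):   # constant-time lookup
    print(url, position, weight, stored_pages.get(url, "")[:40])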

Given the size of the web and the continuing changes to web pages, the spider's search is never ending, and it is likely that every page will never be indexed. Another consequence of the size of the web and the time required for the crawling process is that broken links will always occur. If a page is not re-visited frequently, it may remain in the index and the database long after it has been removed from its server. Or the URL may have changed and the new location not yet been crawled. So, the process is not perfect by any means.

The other aspect of searching the web is the design of the query you want to submit to a search engine. As I'm sure you know, you can simply list a few keywords in the search engine and hope you will get a useful result. Many times you will also get results that have no relationship to the information you are seeking. In some of these cases, you need to try the advanced search or learn to use Boolean operators. Those most frequently used are:

    AND – all the terms joined by “AND” must appear in the pages or documents.
    OR – at least one of the terms joined by “OR” must appear in the pages or documents.
    NOT – the term or terms following “NOT” must not appear in the pages or documents.
    Quotation marks – words between quotation marks must appear as a phrase.
    Followed By – one of the terms must be followed by the other.
    Near – one of the terms must be within a specified number of words of the other.

Generally, search engines can use these Boolean operators to provide results more closely aligned with the topic you are trying to locate. The sketch below shows how a few of them could be applied to the word index described earlier.
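
Here is a small sketch, continuing the earlier indexing example, of how AND, OR and NOT could be treated as set operations on the lists of pages stored for each word. It illustrates the idea only, not how any particular search engine implements it.

# Boolean operators as set operations over the pages stored for each word.
def urls_for(word, index):
    return {url for url, _position, _weight in index.get(word.lower(), [])}

def boolean_and(a, b, index):
    return urls_for(a, index) & urls_for(b, index)   # pages containing both terms

def boolean_or(a, b, index):
    return urls_for(a, index) | urls_for(b, index)   # pages containing either term

def boolean_not(a, b, index):
    return urls_for(a, index) - urls_for(b, index)   # pages with a but without b

print(boolean_and("spider", "web", index))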

Like everything else related to computers, web indexing and searching are not static technologies. The search engine companies are researching “natural language” queries such as those handled by “Ask Jeeves”. Currently, these queries can accommodate only relatively simple phrases, but there is heavy competition to develop an engine that can handle much more complex queries. Another area being pursued is “concept-based” searching, which would use a form of statistical analysis to determine whether a page fits your query. And, as you may have read, Google has plans to put the content of the world's libraries on the web.

Just imagine what it would be like if we didn't have these search engines to help us find information on the web. So good searching and I hope you find what you are looking for.

Dr. Lewis is a former university & medical school professor. He has been working with personal computers for more than thirty years. He can be reached via e-mail: bwsail at yahoo.com. There is no restriction against any non-profit group using this article as long as it is kept in context with proper credit given the author. The Editorial Committee of the Association of Personal Computer User Groups (APCUG), an international organization of which this group is a member, brings this article to you.



