[SEO Manual Chapter 2d]



To deliver the relevant content people want, successful search engines follow a three-part process, much of it behind the scenes:


1. Web Crawling

Search engines use web crawlers (often called spiders or web bots) to retrieve data from publicly available websites.

These spiders are essentially automated web browsers that follow the links available on each page. When a spider finds a page, it retrieves that page's contents.

Web bots and spiders will revisit pages they have previously crawled. SEOs welcome this as it encourages continuous indexing of the site. To encourage frequent indexing, SEOs strive to add fresh content to the site as often and as regularly as possible.
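The crawling process described above amounts to a breadth-first traversal of linked pages. Here is a minimal sketch of that idea; to keep it self-contained, it uses a small in-memory stand-in for the web (the URLs and page contents are invented for illustration), where a real crawler would issue HTTP requests:

```python
from html.parser import HTMLParser
from collections import deque

# A tiny in-memory "web": URL -> HTML content (hypothetical pages for illustration)
PAGES = {
    "/":      '<html><body><a href="/about">About</a><a href="/blog">Blog</a></body></html>',
    "/about": '<html><body><a href="/">Home</a></body></html>',
    "/blog":  '<html><body><a href="/">Home</a><a href="/about">About</a></body></html>',
}

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start_url):
    """Breadth-first crawl: fetch a page, store its contents, queue its links."""
    seen = set()
    queue = deque([start_url])
    crawled = {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        html = PAGES[url]           # a real crawler would issue an HTTP GET here
        crawled[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        queue.extend(parser.links)  # follow every link found on the page
    return crawled

pages = crawl("/")
```

Starting from a single seed URL, the loop discovers and retrieves every reachable page exactly once, which is why fresh pages linked from already-crawled pages get picked up on revisits.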


The Search Engine Gears at Work

The search engine process involves three steps: crawl the web, index the content, and deliver search results.

2. Indexing Search Data

As web crawlers gather data for search engines, the next process comes into play.

Search engines take the collected data and index it. The purpose of indexing is to facilitate faster and more accurate data retrieval.

In other words, indexing allows search engines to deliver faster and more relevant results – by intelligently parsing and storing the collected data. Just as a book’s index makes it easier to find relevant topics or words, search engine indexes make it easier to find relevant search results.

The first search engines relied heavily on meta-tags, especially keyword meta-tags, to index websites for relevancy. As commercial sites started to game those keyword meta-tags, search engines shifted to full-text indexing. This approach tried to read and parse the contents of a page to determine search relevancy, which is why it’s important to have target keywords on the page itself.
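The book-index analogy above maps directly to the classic inverted index data structure: each word points to the set of pages containing it, so a lookup is a single step rather than a scan of every page. A minimal sketch (the URLs and page texts are invented for illustration; real full-text indexes also store positions, weights, and much more):

```python
from collections import defaultdict

# Hypothetical crawled pages: URL -> plain text content
DOCUMENTS = {
    "/":      "seo manual for search engine optimization",
    "/about": "about the web1media seo team",
    "/blog":  "search engine blog posts and seo tips",
}

def build_index(documents):
    """Builds an inverted index: each word maps to the set of URLs containing it."""
    index = defaultdict(set)
    for url, text in documents.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

index = build_index(DOCUMENTS)
# Finding every page that mentions a word is now one dictionary access,
# instead of re-reading the full text of every crawled page.
matches = index["seo"]
```

This is also why on-page keywords matter under full-text indexing: a word that never appears in a page's content never gets an entry pointing to that page.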


3. Delivering Search Results

When a user runs a search query, the search engine generates a search engine results page (SERP), with links to entries that the search engine deems relevant to the search query.

So how does a search engine such as Google determine relevance and SERP position? Each relies on its own search algorithm.

All search engines keep their search algorithms secret. Although Google, Bing and others do drop hints, SEOs can only estimate how those search algorithms operate through tests, research and reading search engine patent filings.
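While the real ranking algorithms are secret and weigh hundreds of signals, the basic shape of result delivery can be sketched as: score each indexed page against the query terms, then sort by score to build the SERP. The toy example below uses an invented inverted index and scores pages simply by how many query terms they contain; this is a stand-in for a real ranking function, not a description of any actual engine:

```python
# Hypothetical inverted index (word -> set of URLs), as an indexer might build it
INDEX = {
    "seo":    {"/", "/blog"},
    "search": {"/", "/about", "/blog"},
    "tips":   {"/blog"},
}

def search(query, index):
    """Ranks pages by how many of the query's terms each page contains."""
    scores = {}
    for term in query.lower().split():
        for url in index.get(term, set()):
            scores[url] = scores.get(url, 0) + 1
    # Highest score first; ties broken alphabetically for a stable ordering
    return sorted(scores, key=lambda url: (-scores[url], url))

serp = search("seo tips", INDEX)
```

Here a page matching both query terms outranks a page matching only one — a crude echo of the relevance judgments that real, far more sophisticated algorithms make.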




Next SEO Manual post:  Positive Ranking Factors


The Web1Media SEO Manual is an advanced search engine optimization (SEO) guide developed by Web1Media and serialized for public use. The SEO tactics, strategies and principles discussed in this SEO manual have been gleaned from our own experiences in the SEO, SEM and online marketing arena. If you would like to cite any portion of this SEO guide for non-profit use, we would be honored and simply request a citation link.

Please don’t forget to subscribe to our RSS feed to receive our upcoming blog posts. More importantly, because SEO is so dynamic, we encourage and invite any comments, corrections or questions you may have in the comment section below.