Month: March 2012

Anatomy of the SERP

[SEO Manual Chapter 2g]

Experienced SEOs can skip this section, but search engine results pages (SERPs) are where the results of our SEO efforts play out. Using a late-2011 SERP for the query "health insurance agents" in Dallas, TX, we point out key portions that all SEOs should understand:

1. Universal search – Google and other major search engines offer specialized search engines for images, news, video and even shopping. The main SERP tries to incorporate elements from all of those search engine types to provide what Google calls universal search results.
2. Location – Google and Bing are increasingly focusing their top pages on localized results. Search engine users now have the option of checking local results for practically any city in the world.
3. Related search – Google provides related search terms (and their results).
4. More search tools – Click this link to access results from dictionaries, translated foreign pages that Google deems relevant to this search, and results adjusted by reading level.
5. Top-3 ads – This is prime real estate for PPC ad placements and accounts for a large chunk of Google's ad revenue.
6. Sidebar ads – The sidebar typically contains 7 PPC ad entries. In addition, Google increasingly adds more PPC ads to the bottom of the page.
7. Related searches – To assist users who haven't quite found what they're looking for, Google...

Read More

Negative Search Ranking Factors

[SEO Manual Chapter 2f]

In addition to the positive search ranking factors reviewed in the preceding blog post, there are also negative factors that can lower your rankings or incur a penalty from Google. Needless to say, your SEO program must audit for these negative factors and fix any instances of them (a rough audit sketch follows this list). These negative factors include the following:

Hidden text – hiding text from human view (but not from search engines), often by making the font color the same as the background.
Cloaking – delivering different content to search engines than to human visitors.
Keyword stuffing – using keywords so frequently that it actually diminishes readability and content quality.
Doorway pages – pages created specifically to attract target keywords while offering no value to visitors (example: a department store creates separate pages for every variation of keywords related to dresses, with little or no unique content on any of them).
Automatic redirects – a legitimate function, except when used with doorway pages and domains.
Duplicate content – technically speaking, there is NO penalty for having duplicate content; it's just that content considered duplicate receives little or no ranking from Google.
Purchasing links – although Google can assess a penalty, it tends to simply negate the value of purchased backlinks.
Over-optimization – a site that is too optimized can actually get penalized....
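To make that audit concrete, here is a minimal sketch of how one might screen a single page for two of the factors above: hidden text (inline styles where the font color matches the background) and keyword stuffing (excessive keyword density). It is only an illustration, not a definitive implementation: the density threshold, the sample HTML and the target keyword are assumptions made for the example, and a real audit would also need to check external CSS, off-screen positioning, and the other factors listed.

```python
# Hypothetical audit sketch for two negative factors: hidden text (inline
# styles where color equals background) and keyword stuffing (excessive
# keyword density). Thresholds, sample HTML and keyword are assumptions.
import re
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text and counts inline styles where color == background."""

    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.hidden_text_suspects = 0

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        color = re.search(r"(?<![-\w])color\s*:\s*([^;]+)", style)
        background = re.search(r"background(?:-color)?\s*:\s*([^;]+)", style)
        if color and background and color.group(1).strip() == background.group(1).strip():
            self.hidden_text_suspects += 1

    def handle_data(self, data):
        self.text_parts.append(data)


def keyword_density(text, keyword):
    """Rough share of the words in `text` accounted for by `keyword` occurrences."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = text.lower().count(keyword.lower())
    return hits * len(keyword.split()) / len(words)


def audit_page(html, keyword, density_threshold=0.05):
    """Return simple red flags for hidden text and keyword stuffing."""
    parser = TextExtractor()
    parser.feed(html)
    density = keyword_density(" ".join(parser.text_parts), keyword)
    return {
        "hidden_text_suspects": parser.hidden_text_suspects,
        "keyword_density": round(density, 3),
        "keyword_stuffing_suspected": density > density_threshold,
    }


if __name__ == "__main__":
    sample = (
        '<p style="color:#fff; background-color:#fff">health insurance agents</p>'
        "<p>Our health insurance agents help you compare health insurance plans.</p>"
    )
    print(audit_page(sample, "health insurance agents"))
```

Even a crude density check like this is only a red flag, not a verdict; the point is simply to catch pages that repeat a phrase so often that readability suffers.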

Read More

Positive Search Ranking Factors

[SEO Manual Chapter 2e]

In Google's case, we know that their search algorithm considers more than 200 factors when determining SERP position for a webpage on any particular search query. Here are some of the elements that SEOs polled by WebmasterWorld listed as probable "positive" factors used by Google's algorithm (a purely illustrative scoring sketch follows this list):

Back Links (inbound): Age of web page linking to you
Back Links (inbound): Age of website linking to you
Back Links (inbound): ALT tag of images linking
Back Links (inbound): Anchor text of link
Back Links (inbound): Authority links (NYTimes, Harvard, Wikipedia, etc.)
Back Links (inbound): Authority of top-level domain (.edu, .gov)
Back Links (inbound): Country-specific top-level domain
Back Links (inbound): Location of link on linking page (footer, body, etc.)
Back Links (inbound): Location of server for linking website
Back Links (inbound): Quality of web page linking to you
Back Links (inbound): Quality of website linking to you
Back Links (inbound): Relevancy of the linking page's content to you
Back Links (inbound): Title attribute of link
Back Links (inbound): Uniqueness of class C IP address
Back Links (internal cross-links): Anchor text of FIRST text link
Back Links (internal cross-links): Location of link on page
Back Links (internal cross-links): Number of internal links to page
Domain: Age
Domain: History
Domain: IP address of hosted domain
Domain: Keywords in domain name
Domain: Location of host server
Onsite Content:...
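Since the full list of 200-plus factors and their weights has never been published, the best we can do is illustrate the general idea of many signals being combined into one score per page and query. The sketch below is purely hypothetical: the factor names echo the list above, but the weights, the normalization, and the sample values are invented for the example and are in no way Google's actual formula.

```python
# Purely illustrative: Google's real algorithm and weights are not public.
# The factor names echo the list above; the weights and sample values are
# invented to show how many signals could be combined into one score.
ILLUSTRATIVE_WEIGHTS = {
    "inbound_link_authority": 0.35,    # quality/authority of pages linking in
    "inbound_anchor_relevance": 0.25,  # anchor text matching the query
    "onsite_content_relevance": 0.30,  # page content matching the query
    "domain_age": 0.05,
    "page_load_speed": 0.05,
}


def illustrative_score(signals):
    """Weighted sum of signals normalized to 0-1; NOT Google's formula."""
    return sum(ILLUSTRATIVE_WEIGHTS[name] * value
               for name, value in signals.items()
               if name in ILLUSTRATIVE_WEIGHTS)


if __name__ == "__main__":
    page_signals = {
        "inbound_link_authority": 0.8,
        "inbound_anchor_relevance": 0.6,
        "onsite_content_relevance": 0.7,
        "domain_age": 0.9,
        "page_load_speed": 0.5,
    }
    print(round(illustrative_score(page_signals), 3))  # 0.71
```

The takeaway is not the numbers but the structure: no single factor decides a ranking, so an SEO program has to work across many of them at once.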

Read More

Health Searches: Google Just Became a Doctor?

It probably wouldn't surprise you to know that I'm a Trekker — as in Star Trek'er. For the uninitiated, "Trekkie" tends to apply to fans of the original series, while "Trekker" applies to fans of "Star Trek: The Next Generation" and the spin-offs it spawned. [I'm not sure if these terms still hold, however, now that the most recent movie "reimagines" Captain Kirk's Enterprise, while the last series spin-off was actually a prequel to Captain Kirk's era. But I digress…] One of the reasons I loved the series was the gadgets and ideas the writers imagined for the future, many of which have since come true (such as the cell phone). One such idea was introduced in Star Trek: Voyager: the holographic physician, which used a huge medical database to diagnose and treat wounded and sick individuals. We still don't have a holographic doctor, but Google seems to have tried launching the next best thing. Last month, Google announced that it now offers possible diagnoses based on symptoms you search for on Google's search engine. [For more information, check out http://insidesearch.blogspot.com/2012/02/improving-health-searches-because-your.html] For example, if you type "chest pain right side" into Google, you'll see possible diagnoses featured at the very top of the main column. In this case, they provide links to webpages regarding the following: heart attack, stress, angina, pleurisy, gallstones. Okay, maybe it...

Read More

How Search Engines Work

[SEO Manual Chapter 2d]

To deliver the relevant content people want, successful search engines basically follow a three-part process, much of it behind the scenes (a toy sketch follows this excerpt):

1. Web Crawling
Search engines use web crawlers (often called spiders or web bots) to retrieve data from publicly available websites. These spiders are basically automated web browsers that follow available links to pages. Once the crawler finds a page, it retrieves the contents of that page. Web bots and spiders also revisit pages they have previously crawled. SEOs welcome this, as it encourages continuous indexing of the site. To encourage frequent indexing, SEOs strive to add fresh content to the site as often and as regularly as possible.

2. Indexing Search Data
As web crawlers gather data for search engines, the next process comes into play: search engines take the collected data and index it. The purpose of indexing is to facilitate faster and more accurate data retrieval. In other words, indexing allows search engines to deliver faster and more relevant results by intelligently parsing and storing the collected data. Just as a book's index makes it easier to find relevant topics or words, search engine indexes make it easier to find relevant search results. The first search engines relied heavily on meta tags, especially keyword meta tags, to index websites for relevancy. As...
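To make the crawl-and-index cycle concrete, here is a toy sketch in which a "spider" follows links across a tiny in-memory "web" and builds an inverted index mapping each word to the pages that contain it. The three example pages and URLs are made up for the illustration; a real crawler fetches pages over HTTP, respects robots.txt, and handles vastly more data.

```python
# Toy illustration of the crawl-and-index cycle: a "spider" follows links
# between pages and an inverted index maps each word to the pages that
# contain it. The three pages below are invented for the example.
import re
from collections import defaultdict
from html.parser import HTMLParser

# Stand-in for publicly available websites (URL -> HTML).
TOY_WEB = {
    "http://example.com/": '<a href="http://example.com/plans">Compare health plans</a>',
    "http://example.com/plans": 'Health insurance plans in Dallas. '
                                '<a href="http://example.com/agents">Find agents</a>',
    "http://example.com/agents": "Health insurance agents near you.",
}


class LinkAndTextParser(HTMLParser):
    """Pulls out hyperlinks (for the spider to follow) and visible text (to index)."""

    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.text.append(data)


def crawl_and_index(seed):
    """Follow links from `seed` and build an inverted index: word -> set of URLs."""
    frontier, seen = [seed], set()
    index = defaultdict(set)
    while frontier:
        url = frontier.pop()
        if url in seen or url not in TOY_WEB:
            continue
        seen.add(url)
        parser = LinkAndTextParser()
        parser.feed(TOY_WEB[url])      # a real spider would fetch the page over HTTP here
        for word in re.findall(r"[a-z0-9]+", " ".join(parser.text).lower()):
            index[word].add(url)
        frontier.extend(parser.links)  # follow available links to new pages
    return index


if __name__ == "__main__":
    index = crawl_and_index("http://example.com/")
    print(sorted(index["agents"]))  # pages on the toy web that mention "agents"
```

The inverted index is what makes retrieval fast: at query time the engine looks words up in the index rather than re-reading every page, which is exactly the book-index analogy used above.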

Read More