How to get your Pages out of the Google Supplemental Index

Why do web pages go Supplemental?

This article about the Google Supplemental index follows on from the SEO blog entries: What is the Google Supplemental Index and Checking for Supplemental Pages.

So why does your page feature in the Supplemental Index rather than Google’s main index? There are various reasons, all of which are fairly easy to recognise, and despite Google suggesting otherwise in the Google Webmaster Guidelines, affected pages are not too difficult to have re-indexed in the main index, or removed altogether.

How to get out of the Google Supplemental Index

The main reasons why a page is placed in the Supplemental Index are:

  • Identical head tags. If all of the pages on your site share the same page title and the same meta description, there is a higher chance of pages finding their way into the Supplemental Index. The remedy is a simple one: ensure that every title tag and meta description is unique and focused on the content of the page it appears on (see the first example after this list).
  • Insufficient content on the page. If your page is heavily graphics-based, or you simply haven’t taken the time to write enough content, it is more likely to go supplemental. The remedy is, again, simple: make sure you have enough readable content on each page and that the content matches the key terms in the head tags. Having text-based footer navigation on every page not only helps PageRank to be distributed evenly throughout the site, but also helps prevent this issue, as the links in the footer all count towards the page content (see the second example after this list).
  • Duplicate pages. The Google spiders hate duplicate content, as Google doesn’t want to fill its index with pages that essentially feature the same content. It’s often the case that all of the pages carrying duplicate content make it into the main Google index for a short time before the duplication is discovered and the pages are moved to the Supplemental Index. Remedy this by removing the duplicate pages and using 301 redirects in an htaccess file to point them at a single page that carries the content (see the third example after this list). Better still, don’t use duplicate content in the first place!
  • The web page doesn’t have any incoming links, or has too few quality links to warrant regular spidering. Footer navigation solves this problem, as every page on the site is then linked directly from every other page. If the site is too big to feature all of the pages in the footer, make sure you have a site map that lists all of the pages, and that the site map is linked to from the footer navigation (again, see the second example after this list). By the way, I don’t mean a Google sitemap, I mean a sitemap on the website itself – I don’t trust Google sitemaps as far as I could chuck one!
    • It’s sometimes the case that the pages that end up in the Supplemental Index were never intended to be part of the site, as with the link I featured in yesterday’s article. That was an old search engine positioning report uploaded to the server for the site owner to see. I didn’t link to it from anywhere other than the email he received to say the report was ready. Unfortunately, he saw fit to link to it on a forum, Google indexed it from there, and it ended up in the Supplemental Index. I have now put in a permanent 301 redirect to another page on the site, and the next time Google spiders the page it should hopefully disappear from the supplementals.
    • There is also the case where a web page used to have a link pointing to it but no longer does – usually because the linking page has been taken down, or replaced in the linking structure by a new page on the subject with a different URL. Again, use a permanent 301 redirect to get the page out of the Supplemental Index.
  • URL too long, with too many parameters. Web pages with long URLs that include multiple directories, long numbers and dynamic characters such as question marks, equals signs and ampersands are an indication to Google that the site is dynamic, and there is a good chance of duplicate content on the page, as there is with a lot of shopping cart sites, particularly if your online store uses an off-the-shelf e-commerce product. The remedy is ideally to have an SEO write a bespoke, spider-friendly shopping cart for you, or to implement mod_rewrite rules on your existing cart so that the URLs look more natural to the Google spiders (see the fourth example after this list).
  • Code-heavy pages. If your web page is not W3C-compliant and the designer’s coding is long and heavy, with a poor code-to-content ratio, pages can go supplemental simply because the page takes so damn long to load. Again, this is simple to remedy: have your site code optimised by someone who knows what they are doing.
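
To illustrate the first point, here is a minimal sketch of a unique, page-focused head section. The page name and wording are hypothetical and would obviously need to reflect your own content:

    <head>
      <!-- A unique, page-specific title rather than one shared across the site -->
      <title>Handmade Oak Dining Tables | Example Furniture Co.</title>
      <!-- A description written for this page alone, matching its content -->
      <meta name="description" content="Handmade oak dining tables, built to order in sizes from four to twelve seats.">
    </head>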
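
For the footer navigation and on-site sitemap points, a simple text footer might look like the sketch below; the page names are hypothetical:

    <!-- Plain text links: they count towards the page content and give
         every page a direct crawl path, including a link to the sitemap -->
    <div id="footer">
      <a href="/index.html">Home</a> |
      <a href="/services.html">Services</a> |
      <a href="/articles.html">Articles</a> |
      <a href="/contact.html">Contact</a> |
      <a href="/sitemap.html">Site Map</a>
    </div>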
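
For the 301 redirects mentioned in several of the points above, here is a sketch of the htaccess syntax on an Apache server; the file paths are hypothetical:

    # .htaccess - permanently redirect retired or duplicate pages
    # to the single page that now carries the content
    Redirect 301 /old-duplicate-page.html http://www.example.com/consolidated-page.html
    Redirect 301 /reports/old-positioning-report.html http://www.example.com/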
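
Finally, a rough sketch of the mod_rewrite approach for dynamic shopping cart URLs. The parameter names and URL pattern are assumptions and would need to match your own cart:

    # .htaccess - serve natural-looking URLs and map them
    # internally onto the cart's dynamic query string
    RewriteEngine On
    # /products/blue-widget is fetched internally as /cart.php?cat=products&item=blue-widget
    RewriteRule ^products/([a-z0-9-]+)$ /cart.php?cat=products&item=$1 [L]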

So there you have it: three articles to tell you everything you need to know about the Google Supplemental Index – what it is, how to recognise whether your pages are in it, and how to take the necessary steps to get your pages out of it.

None of the techniques mentioned gives overnight results; it can sometimes take a couple of months before a page stops being supplemental, due to irregular caching and the like – but it’s worth doing, so do it!
