Technical Guidelines Continued

Continuing from the previous blog post, today we discuss the rest of the technical guidelines laid out in Google's Webmaster Guidelines. All web designers and optimisers are advised to adhere to these guidelines.

Use Robots.txt - The robots.txt file tells search engine spiders which of your pages may be crawled. This is essential if you want specific folders to stay out of search engines' indexes. If the file is missing, spiders will crawl all pages by default unless instructed otherwise by robots meta tag commands. It is the first file spiders look for and should be included on every site. More information can be found at
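As a rough illustration of how spiders interpret these rules, the sketch below uses Python's standard `urllib.robotparser` module to parse a minimal robots.txt. The `/private/` folder and example.com URLs are hypothetical placeholders, not part of the guidelines themselves.

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt: block every spider from a hypothetical /private/ folder
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Pages outside the disallowed folder remain crawlable
print(parser.can_fetch("*", "https://example.com/index.html"))      # True
# Pages inside the disallowed folder are excluded
print(parser.can_fetch("*", "https://example.com/private/a.html"))  # False
```

Running a quick check like this against your own robots.txt is a cheap way to confirm the file excludes exactly the folders you intend and nothing more.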

Use An SEO Friendly CMS - If you have a dynamic site built on a content management system, make sure the system allows your pages to be crawled easily. Check that it creates new pages when content is added; some systems use frame sets that prevent sites from being properly indexed. Make sure it produces search engine friendly URLs, and use URL rewrites if it does not.
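To make the idea of a "friendly" URL concrete, here is a small Python sketch that maps a hypothetical dynamic, query-string URL onto a readable, keyword-bearing path. The `/product.php` URL shape and the `id`/`name` parameters are invented for illustration; a real rewrite would typically live in the web server or CMS configuration rather than application code.

```python
import re
from urllib.parse import urlparse, parse_qs

def friendly_url(dynamic_url: str) -> str:
    """Turn a hypothetical dynamic product URL into a crawler-friendly path."""
    parts = urlparse(dynamic_url)
    params = parse_qs(parts.query)
    # Build a lowercase, hyphen-separated slug from the product name
    slug = re.sub(r"[^a-z0-9]+", "-", params["name"][0].lower()).strip("-")
    return f"/products/{slug}-{params['id'][0]}/"

print(friendly_url("/product.php?id=42&name=Blue Widget"))
# /products/blue-widget-42/
```

The rewritten path carries the product keywords in the URL itself, which is easier for spiders to index and for users to read than an opaque query string.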

Check Browser Compatibility - Different web browsers will not necessarily display your site in the same way. Check your site in all major browsers, including IE, Firefox, Chrome and Safari, to catch these problems. Code that validates against W3C standards will reduce the likelihood of discrepancies, but manual checks should still be carried out.

