Development Mistakes That Affect SEO

10:30 on Mon, 23 Nov 2009 | SEO

Here is a small collection of developer mistakes that can have big consequences for your site's performance in the search engines. Web design and development companies all work in different ways, some with more SEO knowledge than others, and some simply make a lot of mistakes. Below is a guide to some of the ways these errors can affect your site.

The noindex, nofollow meta tag
 
There are a number of ways you can inadvertently trip up with this tag, especially when getting a brand new website off the ground. Meta tags provide search engine spiders with a range of information about a page, and one that is often overlooked is the meta robots directive. The tag in question tells all search engines not to index your pages and not to follow the links on them.

The noindex, nofollow meta tag looks like this: <meta name="robots" content="noindex, nofollow" />

The noindex part basically means 'don't index this page'. One of the goals of a good search marketing strategy is to ensure all your pages are indexed. This is particularly important for large sites, as it will help with your long tail traffic (unless you're tactically removing various pages - see Pete's search engine spider post for more info).

A number of dynamic websites are built with global includes, which let you control parts of your site from one file. If this line of code is inadvertently added to a header file that is pulled into every page, it will affect your entire site.

The nofollow part turns all links on the page into nofollowed links. Various free SEO tools highlight nofollowed links in the browser by changing the appearance of the link. Stopping your own site from passing relevance to other pages is also a big hindrance to your search engine rankings.

Extra care should be taken when switching from testing to live versions of your site – double checking meta tags in header includes.
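
If you want a belt-and-braces check on top of eyeballing the page source, a short script can do the looking for you. Below is a minimal Python sketch, assuming the third-party "requests" library and a handful of placeholder URLs (swap in your own key pages): it fetches each page and shouts if a robots meta tag containing noindex or nofollow is still in place.

# A quick-and-dirty post-launch check for the noindex, nofollow meta tag.
# Assumes the third-party "requests" library; the URLs below are placeholders.
import re

import requests

PAGES_TO_CHECK = [
    "http://www.example.com/",
    "http://www.example.com/about/",
    "http://www.example.com/products/",
]

META_TAG = re.compile(r"<meta[^>]*>", re.IGNORECASE)

for url in PAGES_TO_CHECK:
    html = requests.get(url, timeout=10).text
    for tag in META_TAG.findall(html):
        lowered = tag.lower()
        # Only interested in the robots meta tag (quoted attributes assumed).
        if 'name="robots"' in lowered or "name='robots'" in lowered:
            if "noindex" in lowered or "nofollow" in lowered:
                print(f"WARNING: {url} still carries {tag.strip()}")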

Temporary Web Space
Many web development companies will begin building their websites in a test environment. This differs from company to company: some will work on local servers which allow testing of the site, while some web hosting packages come with their own temporary web space. If you're using this temporary (but live) space for site testing, be careful. I've seen cases in the past where a client has been allowed to populate their CMS on the temporary web space, and once the site goes live it's been uploaded to its correct live location. The problem is that if there are still rogue URLs pointing to the temporary web space, the search engine spiders will follow these links and start indexing the temporary site. This can cause huge duplicate content issues.

There are a number of ways to fix this issue. Add your temporary web space to the various Webmaster Tools services from the major search engines (Google, Yahoo! and Bing) and request removal of all URLs. Also add the noindex, nofollow meta tag to the header of each page on the temporary space. Another way of checking once your site goes live is to use ye olde Xenu Link Sleuth. This piece of software will crawl your website in a similar way to a search engine robot and will alert you to any issues on the site such as broken links.
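
If you would rather script that kind of crawl yourself, here is a rough Python sketch of the same idea: walk the live site and flag any link that still points at the temporary web space. It assumes the third-party "requests" library, skips error handling and polite crawl delays, and both hostnames are made-up placeholders.

# Crawl the live site and flag links that still point at the temporary space.
# Hostnames are hypothetical; no error handling or crawl politeness included.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

import requests

LIVE_START = "http://www.example.com/"       # placeholder live home page
STAGING_HOST = "temp.hosting-company.com"    # placeholder temporary web space


class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)


live_host = urlparse(LIVE_START).netloc
seen, queue = set(), [LIVE_START]

while queue:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)

    collector = LinkCollector()
    collector.feed(requests.get(url, timeout=10).text)

    for href in collector.links:
        absolute = urljoin(url, href)
        host = urlparse(absolute).netloc
        if host == STAGING_HOST:
            print(f"Rogue link to temporary space: {absolute} (found on {url})")
        elif host == live_host and absolute not in seen:
            queue.append(absolute)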

Live development testing environment
There are other situations where the meta noindex, nofollow tag can cause problems. Some web developers will work on their live testing site with the noindex, nofollow tag applied. This isn't bad working practice in itself, but it does mean you need to take extra care: it can be all too easy to forget to remove the tag when the all-important 'go live' deadline arrives.

Once again, proceed with caution when putting your site live.

Robots.txt
Never underestimate the power of the robots.txt file. This little fella can cause problems without you even knowing it. It sits in the root of your web folder and kindly advises the major search engine spiders which content on the site they have permission to crawl. Your site isn't required to have one, but it does come in handy for keeping bots out of areas of your site which you don't want crawled. The extreme of this being:

User-agent: *
Disallow: /

These two small lines of code tell the search engine robots not to crawl any of your content. This file can easily and inadvertently be uploaded from the testing environment to the live location of your site. You'll only have a little time to rectify the issue if it goes unspotted: after the file goes live, search engines like Google will slowly start de-indexing your pages, at a rate depending on how many pages your site has and how many are currently indexed. The implications of losing not just your deep pages but your home page can be very damaging to your existing and future search traffic.
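
A post-launch sanity check for this one only takes a few lines. The Python sketch below uses only the standard library and a placeholder domain: it downloads the live robots.txt and warns if it blocks robots from the home page, which is exactly what the two lines above would do.

# Check whether the live robots.txt blocks crawlers from the home page.
# Standard library only; the domain is a placeholder.
from urllib.robotparser import RobotFileParser

SITE = "http://www.example.com"

parser = RobotFileParser(SITE + "/robots.txt")
parser.read()  # fetches and parses the live robots.txt

# can_fetch() answers: may a crawler with this user-agent fetch this URL?
if not parser.can_fetch("*", SITE + "/"):
    print("WARNING: robots.txt blocks all robots from the home page")
else:
    print("robots.txt allows crawling of the home page")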

So how can you avoid all these errors? Firstly, make sure you've employed a web agency with a handle on SEO, or get someone in to oversee the build. Make sure measures are in place to prevent unauthorised changes to site files. Using a browser plugin which highlights nofollowed links can also help make you aware of problems. Failing that, consider a professional SEO audit to see how you can improve the usability and search effectiveness of your site.

 

 
