24th Feb 2017
Here is a small collection of developer mistakes that can have big consequences for your site’s performance in the search engines. Web design and development companies all work in different ways, some with more SEO knowledge than others, and some simply make a lot of mistakes. Below is a guide to some of the ways these errors can affect your site.
The noindex, nofollow meta tag
There are a number of ways you can inadvertently trip up with this tag, especially when getting a brand new website off the ground. One of the easiest ways to fall at the first hurdle is the often overlooked meta robots directive. Meta tags supply a range of information about a page to search engine spiders, and the meta robots tag in question tells all search engines not to index the page and not to follow any of its links.
The noindex, nofollow meta tag looks like this: <meta name="robots" content="noindex, nofollow" />
The noindex part basically means ‘don’t index this page’. One of the goals of a good search marketing strategy is to ensure all your pages are indexed. This is particularly important for large sites, as it helps with your long-tail traffic.
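For comparison, the opposite directive explicitly allows indexing and link following. Search engines assume this behaviour by default when no robots meta tag is present, so the tag is usually unnecessary, but it looks like this:

```html
<!-- Explicitly allow indexing and link following.
     This is also the default if no robots meta tag is present. -->
<meta name="robots" content="index, follow" />
```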
A number of dynamic websites are built with global includes, which let you control parts of your site from a single file. If this line of code is inadvertently added to your header include, it will appear on every page and affect your entire site.
The nofollow part turns all links on the page into nofollowed links, telling search engines not to pass ranking signals through them. Various free online SEO tools and browser plugins highlight nofollowed links by changing their appearance in the browser. Preventing your own site from passing relevance between its pages is a big hindrance to your search engine rankings.
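For reference, the page-level meta tag has the same effect as adding rel="nofollow" to every individual link on the page. A single nofollowed link (the URL here is just a placeholder) looks like this:

```html
<!-- Search engines are asked not to pass ranking signals
     through this one link -->
<a href="https://example.com/" rel="nofollow">Example link</a>
```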
Temporary Web Space
Many web development companies begin building websites in a test environment, and this differs from company to company. Some work on local servers that allow testing of the site; some web hosting packages come with their own temporary web space. If you’re using this temporary (but live) space for site testing, be careful. I’ve seen cases where a client has been allowed to populate their CMS on the temporary web space, and once the site goes live it’s uploaded to its correct live location. The problem is that if there are still rogue links pointing at the temporary web space URLs, the web spiders will follow them and start indexing the temporary site. This can cause huge duplicate content issues.
There are a number of ways to fix this issue. Add your temporary web space to the various Webmaster Tools services from the major search engines (Google, Yahoo! and Bing) and request removal of all URLs. Also add the noindex, nofollow meta tag to the header of each page on the temporary space. Another way of checking once your site goes live is to use ye olde Xenu Link Sleuth. This piece of software crawls your website much like a search engine robot and alerts you to issues on the site such as broken links.
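Another common fix, assuming the temporary space runs on an Apache server with mod_rewrite enabled, is a permanent (301) redirect from the temporary hostname to the live one. A sketch, with placeholder domain names:

```apache
# .htaccess on the temporary web space (both hostnames are placeholders)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^temp\.example-host\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

A 301 tells spiders the temporary URLs have moved permanently, so any indexed temporary pages are consolidated into the live site rather than competing with it.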
The robots.txt file
Never underestimate the power of the robots.txt file. This little fella can cause problems without you even knowing it. It sits in the root of your web folder and kindly advises the major search engine spiders which content on the site they have permission to crawl. It’s not required for your site, but it does come in handy for disallowing bots from areas you don’t want crawled. The extreme of this is blocking every crawler from your entire site.
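That extreme case, a robots.txt which blocks all crawlers from everything, is only two lines long:

```
# Applies to all crawlers; blocks the entire site from being crawled
User-agent: *
Disallow: /
```

Left in place after launch (it is often used deliberately on test sites), this single file will stop the search engines crawling your live site altogether.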
So how can you avoid all these errors? Firstly, make sure you’ve employed a web agency with a handle on SEO, or get someone in to oversee the build. Make sure measures are in place to avoid unauthorised changes to site files. Using a browser plugin which highlights nofollowed links can also help make you aware of problems. Failing that, consider a professional SEO audit to see how you can improve the usability and search effectiveness of your site.