17 of the Best SEO Chrome Extensions

As specialists in search engine optimisation, we use a wide range of tools to help us complete our daily tasks as efficiently as possible. There are plenty of SEO tools available to marketers: some good, some great and many not so good.

In this blog post I will share with you my top 17 SEO extensions for Google Chrome. The extensions listed below are in no particular order – I find it impossible to compare one against another, as they all do something spectacular and are all very useful in their own way.


The Power of Title Tags – Onsite SEO Tips

Bart Simpson Learns About Title Tags

In this new and exciting series of onsite SEO tips, we are going to educate wannabe SEO rock stars, in a short and easy-to-digest manner, on how Google weighs the various onsite ranking factors.

If you are an intermediate SEO specialist, don't worry – we will be providing you with hints and tips on the more technical side of SEO in the not too distant future. For now we are going to focus on helping the new generation.

Why Google?

The reason we are focusing on Google and not Bing or Yahoo is simple: Google controls the lion's share of the search market in all English-speaking countries. Until that changes, any search engine marketer worth their salt will be focusing on ranking websites in Google.

Ranking Signals

It's well documented in the industry that Google has over 200 ranking signals in its algorithm. In my opinion the figure must be climbing closer to the 300 mark as Google adds more weight to social signals and schema formats. Anyhow, that hypothesis can wait for another day.

The Title Tag

The title tag is one of the 200+ ranking signals in Google's algorithm and, provided all the other elements are in place, is the single most important piece of 'on page' information on a page.

What the title tag does and should do

Your title tag defines, to both users and search engines, what the page is about. It also provides the clickable link to your page within search engine results. As such it must relate to the appropriate keywords and help inform readers of what the page contains.

Title tag length

Google displays title tags in its search engine results pages, so they should be formatted to fit. If your title tag is too long, Google will truncate it. Google determines the length of title tags by pixels, but you don't need to worry about that; as a rule of thumb, keep your title tags under 70 characters and you should be fine.

Keywords in title tags

Search engines also consider the title tag element to be the most important place to identify keywords. Rankings are heavily influenced by keyword usage in the title tag.

Google also passes more weight to keywords placed at the beginning of title tags. Best practice is to place one or two generic keyword phrases first, followed by your brand or website name.

Google best handles keyword phrases in title tags when they are segmented by a pipe or a hyphen.
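Putting the guidance above together – keyword phrases first, brand name last, segmented by a pipe – a title tag might look like this (the keyword phrases and brand name below are purely illustrative):

```html
<title>Flower Delivery | Cheap Flowers Online | Example Brand</title>
```

At 54 characters including spaces, it sits comfortably under the 70-character rule of thumb.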

How a title tag should look

Good title tag example
As you can see, the title tag above is short, sweet and handsome, and gives a clear indication to Google and users of what the content will be about should they decide to click through to the page via search results.

How a title tag should not look

Poor title tag example
The example above amuses me. The chap who wrote the article titled 'Title Tag Best Practices' fails to optimise his own title tag. He even mentions in his article:

Length: Title tags should be a maximum of 70 characters long, including spaces.

Yet his own title tag is 140 characters in length???

This is definitely a good example of what not to do.

Keynotes

  • Keep your title tag under 70 characters in length
  • Place important traffic-driving keywords at the beginning of the tag
  • Limit keyword usage to two keyword phrases
  • Place your brand or website name at the end of the title tag
  • Segment keyword phrases with a pipe or hyphen

So that's it, ladies, gentlemen and future SEO rock stars. Keep your title tags short, crisp and lean, and they will be picked up with enthusiasm by the Google gods.

Should you need further assistance in optimising title tags, or have any questions, don't hesitate to get in touch or leave a comment below.

Tesco Rank No1 in Google For Interflora


The cheeky wee monkeys (Tesco) look to be coining it in by inserting Interflora into their title tag. They now rank No1 in Google for the search term "Interflora UK" and No2 for the generic brand term "Interflora".

Now, we have no idea when this was inserted into the title tag, but after looking at the Wayback Machine (page last crawled 16th Jan) we can see the previous title tag was "Tesco Flowers Delivered to Your Door".

Now it reads “Tesco Flowers From Interflora Delivered To Your Door”

Tesco certainly did stop and smell the roses!

Interflora Penalised by Google

Common SEO Mistakes & Crawling Issues

Don't get caught with your pants down with the most common SEO mistakes!

On the back of Matt Cutts releasing a new webmaster video listing the top 5 SEO mistakes webmasters make, I thought I would elaborate on the first point Matt talks about in his video.

The first mistake Matt talks about is probably the biggest and most common SEO mistake out there: making a website un-crawlable to search engines.

We at Boyd Digital have come across a lot of websites in our time that for one reason or another could not be crawled by search engine bots.

The most common mistakes are relatively easy for an experienced search engine optimiser to spot. However, we understand these mistakes can easily slip through the net for webmasters and developers who specialise in management and development, not SEO.

That’s why we are always glad to help webmasters identify the problems when their traffic volumes plummet.

Below are the most recent scenarios we have come across in the wonderful world of SEO.

Blocked by robots.txt File

Search engines blocked by the robots.txt file. This is probably the most common issue when it comes to search engines being blocked from crawling websites.

This usually happens when a new website goes live. When developers are in the process of building a new website, they intentionally block all bots from crawling the test site (the new website before it goes live).

When the new website gets launched without an SEO consultant or SEO expert directly involved in the migration and launch, more often than not the robots.txt file stays the same, blocking search bots from crawling the website.

What the robots.txt file looks like when it is set up to block all bots

User-agent: *
Disallow: /


What the file looks like when it allows all bots to crawl all sections of a website

User-agent: *
Disallow:

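A middle ground is also possible: you can block bots from a specific area, such as a staging directory, while leaving the rest of the site crawlable. The directory name below is purely illustrative:

```
User-agent: *
Disallow: /staging/
```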

Blocked by Canonical Tags

Search engines blocked from crawling deep pages of a website due to incorrect use of canonical tags.

Canonical tags are mainly used to eradicate duplicate content issues on websites.

If Google finds pages with identical content they may only index and display one version in search results.

Their algorithms select the page they think best answers the user’s query. This does not always result in the most appropriate page being selected by Google hence the use of canonical tags.

Webmasters can specify a canonical page to search engines by adding a <link> element with the attribute rel="canonical".

This basically informs Google that there may be duplicate versions of the page in question, telling it to ignore the pages containing the tag and focus on indexing and ranking the page the canonical tag points to.

For example, many websites we come across have four versions of the homepage.

Root domain:

www.example.com

Duplicate versions

example.com

www.example.com/home

example.com/home

Google sees each of the pages above as an individual page. By inserting the following canonical tag in the head section of each version:

<link rel="canonical" href="http://www.example.com" />

you inform Google to ignore the duplicate versions and focus on indexing and ranking the root domain page.

The same rules apply to the deep pages of websites.
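For example, if a deep product page is reachable both with and without a sorting parameter, placing the same canonical tag in the head of both versions tells Google which URL to index (the URLs below are illustrative):

```html
<!-- On both /products/red-roses and /products/red-roses?sort=price -->
<link rel="canonical" href="http://www.example.com/products/red-roses" />
```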

Canonical tags are great at eradicating duplicate content issues. However, improper use of canonical tags can seriously harm a website's visibility in Google.

In this instance we identified incorrect use of canonical tags: a developer had mistakenly inserted a canonical tag pointing back to the root domain URL in the footer area of the website.

Because the footer is present on every page, the code appeared on all pages of the website. This instructed Google to ignore all deep pages. As a result, all deep pages vanished from the Google SERPs, hundreds of generic keyword rankings were lost, and thousands of visitors per day with them.
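To illustrate, the offending markup looked something like this – a canonical tag to the homepage sitting inside the sitewide footer template, so it was emitted on every page (a simplified reconstruction, not the client's actual code):

```html
<footer>
  <!-- A canonical tag belongs in the <head>, and only on genuine duplicates.
       Placed in a sitewide footer template, it is output on every page
       and points them all back at the homepage. -->
  <link rel="canonical" href="http://www.example.com" />
</footer>
```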

Unfortunately, the client waited three months before outsourcing the investigation work to Boyd Digital. Within 15 minutes of us taking on the work the issue was identified and rectified.

Blocked by 302 Redirects

This issue is a first for me and one of the most unusual instances I have come across in regards to search engines being blocked from crawling a website.

The site in question operates in the UK alcohol industry and runs the ineffective age verification process required not by law, but by the voluntary industry body, the European Forum for Responsible Drinking.

How the age verification page works

  1. Enter any page on a participating website
  2. An age verification splash page / pop-up box appears
  3. Type in your date of birth
  4. If you are of drinking age, Bob's your uncle
  5. You may now enter the alcohol website

In this instance, instead of adding a pop-up function like most alcohol websites, a 302 temporary redirect was placed on each page of the website. The redirect took all visitors to the age verification page. Once a user's details were entered, they would be allowed access to the website.

This works well from a user perspective but is catastrophic from an SEO perspective. Each time a search bot tried to enter any given page on the website, it was redirected to the age verification page.
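In HTTP terms, every request – human or bot – received a response along these lines (the URL below is illustrative):

```
HTTP/1.1 302 Found
Location: http://www.example.com/age-verification
```

Because the redirect was sitewide, the only content a bot ever received was the age verification page.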

Search bots are very clever, but they cannot fill in a date of birth form. In the eyes of search bots, the only page that existed on the website was the age verification page.

When we came across the redirect issue and informed the client, they were none the wiser. The website had been like that for years, resulting in untold numbers of paying customers being unable to find the website.

Conclusion

The web crawling issues we came across can be embarrassing for people involved in developing or managing websites.

We have come across serious players in the industry who have failed to spot the schoolboy errors mentioned above, and what's even more worrying, it took weeks, months and even years for the crawling issues above to be identified and rectified.

This never goes down well with clients who invest tens of thousands of pounds in new and improved cash cows, only for traffic and cash flow to fall through the floor while people scratch their heads wondering what's happened.

Our advice to webmasters and agencies involved in managing websites or launching new websites is to hire an experienced SEO company or SEO consultant to help manage the SEO aspects of existing websites, or to help manage the migration process from old to new websites.

Don’t get caught with your pants down!

Drop us a line!