Google's New Quality Guidelines and What They Mean

Today, content and making a page 'user friendly' are more important than ever under the current 'SEO rules' that Google uses. This of course only covers on-page SEO (linking is another kettle of fish), but it is an area that covers a lot of ground.

With this in mind (and acknowledging that I do not know it all) I contacted an SEO guru to find out if I had all the bases covered. I'm glad to say that I had, but the reply I got back did highlight the fact that Google has just changed its 'Quality Guidelines'.

But back to my question. I wanted to know more about how content and UX are graded, especially because, before a page is ever visited by a human (and can hence provide Google with data via the Chrome browser about time on page etc., if it wanted to take that into account), Google MUST have a means of calculating the 'value' of the content and how user friendly it is.

To me this is the 'egg' part of the chicken-and-egg story: a page not yet seen by Google or anyone else is analysed and given a 'value' rating. That rating is then used as a basis for any later search-related ranking procedure, pages with 'higher value' ratings being more likely to get a position at the top of the SERPs.

I listed the signals that I think Google uses, these being:

  • Title of the page
  • Description of the page (not truly used, but a lot of poor ones can degrade an entire site's quality, or so I have been told)
  • Header tags on the page (although these are not as powerful as they once were, and many a site breaks the 'rules' about using them and still gets high ranks)
  • Bold, italics, lists
  • Words used (more on this later)
  • Links out to relevant / useful sites (although I have seen comments from SEO professionals who say this is not a useful signal)
  • Embedding videos
  • Using images with descriptive file names and ALT text, and geotagging them for local SEO

Plus, on the UX side (an audit sketch covering both lists follows below):

  • The speed of the page
  • Using whitespace
  • Not allowing too many adverts at the head of the page
  • Ensuring that the above-the-fold area is not just images (the use of carousels is said, by some, to be harmful, yet they are used extensively and many sites still get a high rank)
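
To make those checks tangible, here is a minimal on-page audit sketch, assuming Python with the requests and BeautifulSoup libraries. The URL is a placeholder, and the checks simply mirror the items in the two lists above; this is my own illustration, not a reproduction of Google's tests.

```python
# A rough on-page signal audit (an illustration, not Google's algorithm).
import requests
from bs4 import BeautifulSoup

def audit_page(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta.get("content", "").strip() if meta else ""
    images = soup.find_all("img")

    return {
        "title": title,
        "title_length": len(title),
        "has_meta_description": bool(description),
        "h1_count": len(soup.find_all("h1")),
        "images": len(images),
        "images_missing_alt": sum(1 for img in images if not img.get("alt")),
        "word_count": len(soup.get_text(" ", strip=True).split()),
    }

if __name__ == "__main__":
    # Hypothetical URL - replace with a page of your own.
    print(audit_page("https://www.example.com/"))
```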

The Words Used on the page:

Here I pointed out that, as Google uses a computer program to analyse any page, it must in turn rely on a lot of TRUE / FALSE checks, which leads on to the words used in the content. To me this is an important point: it would take a committee of 'experts' viewing a page to tell whether it was truly good and useful (and they would surely disagree in many cases), and as that is simply not how Google works (even with the power of RankBrain), it MUST be making its decisions at a far lower, more 'mechanical' level.

The problem of knowing which words to use has been overcome by the use of LSI-style related terms and the reverse engineering of pages that already rank well, and from the reply I got back, I would say that this is still the case today.
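
To illustrate the 'reverse engineering' side of this, here is a crude sketch in Python that pulls the pages already ranking for a phrase and counts the terms they have in common. The URLs and the tiny stop-word list are placeholders, and this is my own simplification of the idea rather than a description of any particular tool.

```python
# A crude sketch of 'reverse engineering' the vocabulary of pages that already rank.
import re
from collections import Counter

import requests
from bs4 import BeautifulSoup

STOPWORDS = {"the", "and", "for", "that", "with", "this", "are", "you", "your"}

def page_terms(url):
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True).lower()
    return {w for w in re.findall(r"[a-z]{3,}", text) if w not in STOPWORDS}

def shared_terms(urls, top_n=20):
    counts = Counter()
    for url in urls:
        counts.update(page_terms(url))  # count how many of the pages use each term
    return counts.most_common(top_n)

# Hypothetical top-ranking pages for a target phrase.
print(shared_terms(["https://example.com/page-a", "https://example.org/page-b"]))
```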

Of course, if you want to 'get a message' across to Google about what a page is all about, with some specific keyword phrases in mind, you simply CANNOT stuff the page with those target words; that is a dangerous method these days.

Google’s Quality Guideline Update

It must be said that these guidelines ARE NOT A PART of the SEO algorithm, but they are important as they form a part of the 'feedback' process that Google uses when evaluating its own SERPs listings…

The way it Works…

We know a fair bit about the way Google rates pages for any given term, and we also know that Google is constantly changing these rules. In the past, they had to keep changing the rules because SEO professionals were constantly 'taking advantage' of anomalies in the algorithm, but today, with Google's more holistic approach (also known as Semantic SEO), I believe that the changes they make are all about presenting the best possible results.

Google however has a problem here, as they need some way of checking that they are getting it right…

This is where their army of human evaluators comes in. They have been around for many years of course, and were responsible for the rule set that Google used to highlight sites whose general quality is low. The sort of thing they found was that sites which use a lot of duplicate meta data or titles, or have a lot of pages with 'thin content' (a low word count), tend to provide a poor user experience and are basically not worth Google's time to include in the results.

In order to help these evaluators, Google provided them with an aide-mémoire listing all the things that should be checked on a site's pages. We will cover this in more detail later in this post.

So, how does Google use the results of the human evaluators?

Of course, they don't give you the full picture, but looking at it logically, if the human evaluators rate a page as being of the Highest quality AND that page is NOT listed in the results for a relevant term, then the algorithm may well need some work. The same would be true if pages considered to be of Low quality WERE in the rankings.

So, even though you cannot directly influence the part of the 'quality assessment' that is carried out by human evaluators rather than by Google's computed rule set, you can help Google get it right.

This is important because, if a human evaluator rates a page on your site (or a page like it) highly, this feedback process should eventually ensure that your page gets the best possible rank…

The Google Quality Rules

There is a very detailed blog post on this, and you can also download the full details from Google if you want. But to help, the information below (taken from a part of the post mentioned) will enable you to ensure that all of your pages are of the highest quality.

 

Page Quality Ratings

Overall Page Quality Rating

Google has completely rewritten this part of their guidelines, expanding this section from the very brief version it had before.

Old version:

The overall Page Quality rating scale offers five rating options: Lowest, Low, Medium, High, and Highest.

New version:

At a high level, here are the steps of Page Quality rating:

  1. Understand the true purpose of the page. Websites or pages without any beneficial purpose, including pages that are created with no attempt to help users, or pages that potentially spread hate, cause harm, or misinform or deceive users, should receive the Lowest rating. No further assessment is necessary.
  2. Otherwise, the PQ rating is based on how well the page achieves its purpose using the criteria outlined in the following sections on Lowest, Low, Medium, High, and Highest quality pages.

Here you can see that Google is putting the focus on the beneficial purpose of the page.

 

Page Quality Rating: Most Important Factors

Google's changes to this section yet again put the focus on the purpose of the page, but also bring in the 'reputation of the creator' of the content.

Here are the changes to this section, with Google's additions shown in italics in the original:

Here are the most important factors to consider when selecting an overall Page Quality rating:

  • The Purpose of the Page
  • Expertise, Authoritativeness, Trustworthiness: This is an important quality characteristic. Use your research on the additional factors below to inform your rating.
  • Main Content Quality and Amount: The rating should be based on the landing page of the task URL.
  • Website Information/information about who is responsible for the Main Content: Find information about the website as well as the creator of the MC.
  • Website Reputation/reputation about who is responsible for the Main Content: Links to help with reputation research will be provided.


Expertise, Authoritativeness and Trustworthiness (E-A-T)

Again there are some significant changes here. First, the instances where Google referred to "high quality" have now been changed to "high EAT".

Here we believe Google is directing its human evaluators to look beyond simple quality and consider other aspects that contribute to the value of that content.

So, Google has added this new part:

Remember that the first step of PQ rating is to understand the true purpose of the page. Websites or pages without some sort of beneficial purpose, including pages that are created with no attempt to help users, or pages that potentially spread hate, cause harm, or misinform or deceive users, should receive the Lowest rating.

For all other pages that have a beneficial purpose, the amount of expertise, authoritativeness, and trustworthiness (E-A-T) is very important. Please consider:

  • The expertise of the creator of the MC.
  • The authoritativeness of the creator of the MC, the MC itself, and the website.
  • The trustworthiness of the creator of the MC, the MC itself, and the website.

Later in the section, they make some changes specific to the content creators in several key areas, including medical, news, science and financial sites.

Here are those changes, with the additions shown in italics in the original:

  • High E-A-T medical advice should be written or produced by people or organizations with appropriate medical expertise or accreditation. High E-A-T medical advice or information should be written or produced in a professional style and should be edited, reviewed, and updated on a regular basis.
  • High E-A-T news articles should be produced with journalistic professionalism—they should contain factually accurate content presented in a way that helps users achieve a better understanding of events. High E-A-T news sources typically have published established editorial policies and robust review processes (example 1, example 2).
  • High E-A-T information pages on scientific topics should be produced by people or organizations with appropriate scientific expertise and represent well-established scientific consensus on issues where such consensus exists.
  • High E-A-T financial advice, legal advice, tax advice, etc., should come from trustworthy sources and be maintained and updated regularly.
  • High E-A-T advice pages on topics such as home remodeling (which can cost thousands of dollars and impact your living situation) or advice on parenting issues (which can impact the future happiness of a family) should also come from "expert" or experienced sources that users can trust.
  • High E-A-T pages on hobbies, such as photography or learning to play a guitar, also require expertise.

Here you can see that Google is placing a lot of emphasis on the content creators as well, this being all the more important for YMYL (Your Money or Your Life) sites.

 

High Quality Pages

Characteristics of High Quality Pages

Google has also expanded this section; the new wording about titles gets its first mention here, along with more on the beneficial purpose of a page. Changes/additions are in italics in the original.

High quality pages exist for almost any beneficial purpose, from giving information to making people laugh to expressing oneself artistically to purchasing products or services online.

What makes a High quality page? A High quality page should have a beneficial purpose and achieve that purpose well.  In addition, High quality pages have the following characteristics:

  • High level of Expertise, Authoritativeness, and Trustworthiness (E-A-T).
  • A satisfying amount of high quality MC, including a descriptive or helpful title.
  • Satisfying website information and/or information about who is responsible for the website. If the page is primarily for shopping or includes financial transactions, then it should have satisfying customer service information.
  • Positive website reputation for a website that is responsible for the MC on the page. Positive reputation of the creator of the MC, if different from that of the website.

 

This is all very useful stuff, but hidden in the text is the interesting phrase 'A satisfying amount of high quality MC, including a descriptive or helpful title'. This is important as it highlights the fact that there is no set number of words, and that titles need to be descriptive and relevant (clickbait titles could well result in penalisation).

The Highest Quality Pages

Again, beneficial purpose is added as a requirement for a highest quality page.

They have also added "and quantity of MC" as a marker for the distinction between High and Highest quality. This does raise a question about whether all content lengths are really considered equal in the eyes of Google. Both Gary Illyes and John Mueller have stated that you don't need to write an essay for a piece of content that doesn't need it, and that you should write as much as is needed to answer the question the title presents. But here, quantity of the main content is something raters should specifically look for when deciding whether a page is Highest quality or only High quality.

And we see yet another reference to the need for a "very positive reputation of the creator of the main content, if different from that of the website."

But they have removed references to this on pages for stores or other financial transactions.

Here is the old version:

Highest pages are very satisfying pages that achieve their purpose very well. The distinction between High and Highest is based on the quality of MC as well as the level of EAT and reputation of the website.

What makes a page Highest quality? A Highest quality page may have the following characteristics:

  • Very high level of Expertise, highly Authoritative, and highly Trustworthy for the purpose of the page (EAT), including the EAT of the publisher and/or individual author for news articles and information pages on YMYL topics.
  • A satisfying amount of high quality MC.
  • Highly satisfying website information and/or information about who is responsible for the website or for stores and pages involving financial transactions, highly satisfying customer service reputation is very important.
  • Very positive website reputation for a website that is responsible for the MC on the page.

And the updated version:

Highest quality pages are created to serve a beneficial purpose and achieve their purpose very well. The distinction between High and Highest is based on the quality and quantity of MC, as well as the level of reputation and E-A-T.

What makes a page Highest quality? In addition to the attributes of a High quality page, a Highest quality page must have at least one of the following characteristics:

  • Very high level of Expertise, Authoritativeness, and Trustworthiness (E-A-T).
  • A very satisfying amount of high or highest quality MC.
  • Very positive website reputation for a website that is responsible for the MC on the page. Very positive reputation of the creator of the MC, if different from that of the website.


And for Low Quality Pages…

This entire section on low quality pages has been updated.  Some was removed as it was replaced with something more concise, while other areas were expanded, particularly around reputation and beneficial content.

Low Quality Pages

The first paragraph has been updated completely.

This was removed:

Low quality pages are unsatisfying or lacking in some element that prevents them from achieving their purpose well. These pages lack expertise or are not very trustworthy/authoritative for the purpose of the page.

And it was changed to this:

Low quality pages may have been intended to serve a beneficial purpose. However, Low quality pages do not achieve their purpose well because they are lacking in an important dimension, such as having an unsatisfying amount of MC, or because the creator of the MC lacks expertise for the purpose of the page.

Here is the reference to beneficial purpose once again.  But this time it also concedes that sometimes these pages were intended to serve a beneficial purpose but something on the page – or missing from it – means it is still low quality.

Google has removed the possibility that some pages meeting their "low quality" criteria might not be considered Low. Now, raters must always rate a page as Low – or Lowest – if one or more of the listed characteristics applies.

Here is what the section used to be:

If a page has one of the following characteristics, the Low rating is usually appropriate:

  • The author of the page or website does not have enough expertise for the topic of the page and/or the website is not trustworthy or authoritative for the topic. In other words, the page/website is lacking EAT.
  • The quality of the MC is low.
  • There is an unsatisfying amount of MC for the purpose of the page.
  • MC is present, but difficult to use due to distracting/disruptive/misleading Ads, other content/features, etc.
  • There is an unsatisfying amount of website information for the purpose of the website (no good reason for anonymity).
  • The website has a negative reputation.

And here is the new revised version:

If a page has one or more of the following characteristics, the Low rating applies:
● An inadequate level of Expertise, Authoritativeness, and Trustworthiness (E-A-T).
● The quality of the MC is low.
● There is an unsatisfying amount of MC for the purpose of the page.
● The title of the MC is exaggerated or shocking.
● The Ads or SC distracts from the MC.
● There is an unsatisfying amount of website information or information about the creator of the MC for the purpose of the page (no good reason for anonymity).
● A mildly negative reputation for a website or creator of the MC, based on extensive reputation research.

If a page has multiple Low quality attributes, a rating lower than Low may be appropriate.

Note that it no longer includes the reference that anonymity for some content might be appropriate.

Lacking Expertise, Authoritativeness, or Trustworthiness (E-A-T)

This section has been completely rewritten, and was formerly section 6.5.

Removed:

Some topics demand expertise for the content to be considered trustworthy. YMYL topics such as medical advice, legal advice, financial advice, etc. should come from authoritative sources in those fields, must be factually accurate, and must represent scientific/medical consensus within those fields where such consensus exists. Even everyday topics, such as recipes and house cleaning, should come from those with experience and everyday expertise in order for the page to be trustworthy.

You should consider who is responsible for the content of the website or content of the page you are evaluating. Does the person or organization have sufficient expertise for the topic? If expertise, authoritativeness, or trustworthiness is lacking, use the Low rating.

Revised:

Low quality pages often lack an appropriate level of E-A-T for the purpose of the page. Here are some examples:

  • The creator of the MC does not have adequate expertise in the topic of the MC, e.g. a tax form instruction video made by someone with no clear expertise in tax preparation.
  • The website is not an authoritative source for the topic of the page, e.g. tax information on a cooking website.
  • The MC is not trustworthy, e.g. a shopping checkout page that has an insecure connection.

 

User Generated Content Guidelines

Google also made some slight changes to the user-generated content section, which now specifically includes references to social networking pages, video sharing sites, and wiki-type sites.

Old version:

User-generated websites span the Page Quality rating spectrum. Note that in some cases, contributors choose their own topics with no oversight and may have very poor writing skills or no expertise in the topic of the page. Contributors may be paid per article or word, and may even be eligible for bonuses based on the traffic to their pages. Depending on the topic, pages on these websites may not be trustworthy.

New version:

Note: Websites with user-generated content span the Page Quality rating spectrum. Please pay careful attention to websites that allow users to publish content with little oversight, such as social networking pages, video sharing websites, volunteer-created encyclopedias, article sharing websites, forums, etc. Depending on the topic, pages on these websites may lack E-A-T.

The user-generated content section is noteworthy because Google isn't automatically discounting user-generated content as Low or Lowest, but rather treating it as something that warrants further investigation before rating it. There are plenty of examples of high quality user-generated content, but it seems the majority is definitely lacking in quality and E-A-T.

Google has also changed the note at the end from "Important: Lacking appropriate EAT is sufficient reason to give a page a Low quality rating." to "Important: The Low rating should be used if the page lacks appropriate E-A-T for its purpose." So Google now ties E-A-T to the purpose of the specific page.

 

Low Quality Main Content

This section has been significantly reduced, although some of it was incorporated into new individual sections Google has added to the guidelines, so just because something is noted as removed here doesn't mean it was removed entirely. But we also get new guidance on clickbait-style titles versus the actual content, which Google now wants its human evaluators to rate as Low.

They entirely removed this part, which was an example used to illustrate types of low quality content, as well as the differentiation between professional websites and those from hobbyists:

One of the most important criteria in PQ rating is the quality of the MC, which is determined by how much time, effort, expertise, and talent/skill have gone into the creation of the page, and also informs the EAT of the page.

Consider this example: Most students have to write papers for high school or college. Many students take shortcuts to save time and effort by doing one or more of the following:

  • Buying papers online or getting someone else to write for them.
  • Including inaccurate information, such as making things up, stretching the truth, or creating a false sense of doubt about well-established facts.
  • Writing quickly with no drafts or editing.
  • Failing to cite sources, or making up sources where none exist.
  • Filling the report with large pictures or other distracting content.
  • Copying the entire report from an encyclopedia, or paraphrasing content by changing words or sentence structure here and there.
  • Using commonly known facts, for example, "Argentina is a country. People live there. Argentina has borders."
  • Using a lot of words to communicate only basic ideas or facts, for example, "Pandas eat bamboo. Pandas eat a lot of bamboo. Bamboo is the best food for a Panda bear."

 

Here Google points out that the content of some webpages is similarly created. So, where you find content like this, it should be rated as Low quality if it was created without adequate time, effort, expertise, or talent/skill. Inaccurate or misleading information presented as fact is also a reason for a Low or even Lowest quality rating. Pages with low quality MC do not achieve their purpose well.

 

Keep in mind that we have very different standards for pages on large, professionally-produced business websites than we have for small amateur, hobbyist, or personal websites. The quality of MC we expect for a large online store is very different than what we might expect for a small local business website.

All Page Quality ratings should be made in the context of the purpose of the page and the type of website.

Important: Low quality MC is a sufficient reason to give a page a Low quality rating.

The much-abbreviated version of this section adds specifics about clickbait:

The quality of the MC is an important consideration for PQ rating. We will consider content to be Low quality if it is created without adequate time, effort, expertise, or talent/skill. Pages with low quality MC do not achieve their purpose well.

In addition, please examine the title on the page. The title of the page should describe the content.

Exaggerated or shocking titles can entice users to click on pages in search results. If pages do not live up to the exaggerated or shocking title or images, the experience leaves users feeling surprised and confused. Here is an example of a page with an exaggerated and shocking title: “Is the World about to End? Mysterious Sightings of 25ft Sea Serpents Prompt Panic!” as the title for an article about the unidentified remains of one small dead fish on a beach. Pages with exaggerated or shocking titles that do not describe the MC well should be rated Low.

Important: The Low rating should be used if the page has Low quality MC.

 

Unsatisfying Amount of Main Content

Here there is only a small change, but it does make an evaluator aware that the amount of content has to be judged against the purpose of the page.

Old version:

Important: An unsatisfying amount of MC is a sufficient reason to give a page a Low quality rating.

New version:

Important: The Low rating should be used if the page has an unsatisfying amount of MC for the purpose of the page.

 

Lack of Purpose Pages

This is a very important area, with Google stating that "Some pages fail to achieve their purpose so profoundly that the purpose of the page cannot be determined. Such pages serve no real purpose for users."

Pages that Fail to Achieve Their Purpose

This is another section that was reorganized and rewritten.  Here is the updated version:

Lowest E-A-T

One of the most important criteria of PQ rating is E-A-T. Expertise of the creator of the MC, and authoritativeness or trustworthiness of the page or website, is extremely important for a page to achieve its purpose well.

If the E-A-T of a page is low enough, users cannot or should not use the MC of the page. This is especially true of YMYL topics. If the page is highly inexpert, unauthoritative or untrustworthy, it fails to achieve its purpose.

Important: The Lowest rating should be used if the page is highly inexpert, unauthoritative, or untrustworthy.

No/Little Main Content

Pages exist to share their MC with users. The following pages should be rated Lowest because they fail to achieve their purpose:
● Pages with no MC.
● Pages with a bare minimum of MC that is unhelpful for the purpose of the page.

Lowest Quality Main Content

The Lowest rating applies to any page with Lowest Quality MC. Lowest quality MC is content created with such insufficient time, effort, expertise, talent, and/or skill that it fails to achieve its purpose. The Lowest rating should also apply to pages where users cannot benefit from the MC, for example:

  • Informational pages with demonstrably inaccurate MC.
  • The MC is so difficult to read, watch, or use, that it takes great effort to understand and use the page.
  • Broken functionality of the page due to lack of skill in construction, poor design, or lack of maintenance.

Have high standards and think about how typical users in your locale would experience the MC on the page. A page may have value to the creator or participants in the discussion, but few to no general users who view it would benefit from the MC.

Copied Main Content

An interesting part they removed from the beginning of this section is the comment that "Every page needs Main Content."

They also combined the two sections "Copied Main Content" and "More About Copied Content", although the wording is nearly identical.

They did remove the following:

If all or most of the MC on the page is copied, think about the purpose of the page. Why does the page exist? What value does the page have for users? Why should users look at the page with copied content instead of the original source?

That is a curious part to remove, since those questions are a valid way to determine whether the content has any value despite being copied or syndicated.

Auto-Generated Main Content

This section was renamed from “Automatically-Generated Main Content”, perhaps to change it to match industry lingo.

This section is primarily the same, but added “Another way to create MC with little to no time, effort, or expertise is to create pages (or even entire websites)” to the first paragraph.


Conclusion

There is a lot here as you can see, but for me the main point is that a page should be USEFUL and be WORTH READING.

Curiously though, the guidelines do not state that copied content is necessarily a bad thing. I read this as follows: if a page uses content from another site but then goes on to add value, that page should not be down-rated.

They also point out that there is no firm guideline on the amount of content that should be considered too low. BUT they do state that the quantity of content can be used to help identify pages of the highest quality…

I do hope that this information helps, and thanks again for the work done by Jennifer Slegg.


What SEO Strategists Need to Know About the Google AI in 2018

The search engines have matured a great deal since their humble beginnings, and the SEO strategists of the future must become fully aware of their increasing capabilities if they hope to stay in the engines' good graces and be blessed with plenty of high-value traffic.

Last year saw the full arrival of Semantic Search and the prospect of the Google AI behaving much less like a directory machine and more like an insightful guide. Rather than taking search queries as mere entries to be matched with identical entries, this thinking machine now has the capacity to look for the meanings behind words and provide more meaningful results to its users.

This contributes to Google’s primary goal of providing their users with the results they are truly looking for. It was the creation of the Knowledge Graph database that set the foundation for Google’s latest Semantic Search technology, which allows the results provided to have more meaning by rethinking what truly is relevant to the query and possible interpretations of the query.

The Google AI will apply more than simply the massive database it holds on just about everything on the planet (the Knowledge Graph); more importantly, it will apply the many ways in which the information it holds on all these topics is interconnected. This means that the results for a query about "Elizabeth Taylor" also include notable and relevant details like her birthday, her unique eye colour or her first great performance.

Highly meaningful results will be the flavour of the day as the Google AI accesses more properties on the entries it contains. These results will be far more accurate for the user, built on the understanding that things are connected to other things in very important ways. It will be the subtle connections that make or break the strategy.

What This Will Mean for You When Planning Your SEO Strategies in 2018

The Google AI of the future will rely heavily on elements like anchor text, meta tags, substantive content and links to form these precise results. But the real trick this year will be learning how to advance on all fronts at once.

It will be essential to consider SEO as a crucial weapon in a much larger arsenal of marketing efforts that will be used in combination to reach your goals. Then you must consider every small adjustment and action taken by your campaign as a part of a greater semantic identity you are creating as you go.

From web design to social networks to the blogs you choose to engage with, everything will contribute either to a credible and successful campaign or to a flop. Here we will see the essential importance of marrying your SEO and digital marketing efforts in one seamless strategy.

There will be three crucial areas of your marketing campaign to focus on: creating Authority, building Trust and gaining Relevance. The quality of the methods applied in boosting these three areas will decide the final success of your venture.

 

The media has not got a clue about artificial intelligence (AI). Or technology. ‘Robots are coming for your job’ is a popular cry, but the next day it’s fears about AI starting World War III.

Not only do robots and AI have very little to do with each other, but AI is at a very early stage. What’s more, it can be split into several separate technologies.

The masses are being misled into fearing automation and a nebulous super-intelligence, but it’s those with a working knowledge of how AI works – and how it can be exploited – that will be best prepared for the future of work.

What is AI?

There is no precise answer to this question, but it's got nothing to do with robot overlords. AI is a field of computer science that examines whether we can teach a computer to 'think'.

AI as a phrase has been around since 1956 when it was coined by American computer scientist John McCarthy, six years after English mathematician Alan Turing had published a paper called ‘Computing machinery and intelligence’ in 1950.

AI is generally split into various subsets that try to emulate specific things that humans do. Speech recognition mimics hearing, natural language processing mimics writing and speaking, image recognition and face scanning mimic sight, and machine learning mimics thinking.

That’s a lot of different, often unrelated technologies; AI is an umbrella term, and certainly not a general purpose technology.

Why is AI so hyped up?

Research into AI is currently riding the wave of increased computing power and big data. Together they make AI both possible and imperative; as a society we now produce way too much data to ever process ourselves or get any insight from. Collected data is growing 40% a year, and it’s mostly going to waste.

The existence of all this data also means that AI software has enough information not only to work with, but to learn from. Is this AI’s big moment? Venture capitalists and technology giants such as Amazon, Google, Facebook, Microsoft and Apple think so, and are investing heavily in research.

It’s these companies that have unimaginably huge data sets collected in the last few decades, and a vested interest in automating tasks on that data. Together they’re becoming the arbiters of AI know-how, so it’s AI techniques developed by Google et al. that are being used by scientists to trawl through data to get new insights.

There’s about to be an AI-powered knowledge explosion.

Supervised machine learning

Machine learning is the act of computer scientists training a computer to do something. It's about automating repetitive tasks: essentially, training a computer to recognize patterns and categorize data.

The classic example is image recognition or 'AI vision': give a computer a large number of images containing labeled objects, and the computer can learn to identify them automatically. In doing so, the computer creates what AI researchers call a neural network, a web of connections loosely modelled on basic processes in the human brain.
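
To make 'training on labelled examples' concrete, here is a minimal supervised-learning sketch using scikit-learn's small handwritten-digit dataset. It is only an illustration of the general technique described above, not anything Google actually runs.

```python
# A minimal supervised machine learning example: labelled images in, predictions out.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 greyscale images of handwritten digits, each labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network learns the patterns from the labelled examples...
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# ...and can then label images it has never seen before.
print("accuracy on unseen images:", model.score(X_test, y_test))
```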

However, creating a neural network like this takes a lot of human labor, and also a lot of processing power. Google AI and the University of Texas recently used this approach on a labeled data set of signals from the Kepler space telescope to discover two exoplanets that earlier analyses had missed.

It’s also being used to identify cracks in reactors, and even help engineers at the UK’s Joint European Torus facility capture and deploy nuclear fusion energy.

This is supervised machine learning, and while it’s getting better at not forgetting, its usefulness at predicting patterns in data is hamstrung by the data it is fed.

Read more here: http://www.techradar.com/news/why-we-should-embrace-ai-not-fear-it

 

Understanding Google Webmaster Guidelines

The Google webmaster guidelines are a collection of best-practice guidelines that will help Google to better understand your website, and help to ensure that your site ranks as well as possible without being mistaken for spam or otherwise suffering from penalties or ranking issues.

These guidelines can be used to help people to understand how their site should be structured, how it should look, and the content on it, as well as how link building should be carried out.

If you are hiring someone to build your website for you, then the guidelines can help to ensure that the site is structured properly, and can be a useful set of instructions for any web developer.

What’s in the Guidelines

Google Webmaster Guidelines include content advice and advice about how a site should be structured. Google wants webmasters to ensure that there are no broken links and no issues with poorly written content. Pages that load slowly or that have broken navigation will frustrate users, and Google does not want to send people to pages like that, because it knows that for many users the 'experience' that Google offers includes the sites it sends people to – so the search engine looks better when those sites perform well.
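
Checking for broken links, at least, is easy to automate yourself. Here is a small sketch, assuming Python with requests and BeautifulSoup; the start URL is a placeholder, and this is simply one way to run the check, not something the guidelines prescribe.

```python
# A small broken-link check for a single page (the URL below is a placeholder).
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def broken_links(page_url):
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    broken = []
    for a in soup.find_all("a", href=True):
        link = urljoin(page_url, a["href"])
        if not link.startswith("http"):
            continue  # skip mailto:, tel:, #anchors, etc.
        try:
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            broken.append((link, status))
    return broken

print(broken_links("https://www.example.com/"))
```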

Google also wants webmasters to have pages that load quickly, and that are easy to use on both desktop and mobile devices. This means that the site needs to be hosted not too far from where most of the traffic will come from, so that it loads promptly.

Google requires sites to be accurate, useful and full of unique content. It frowns upon sites that contain thin content (e.g. sites made for nothing more than hosting ads) and sites that contain a lot of duplicate content. You will need to make sure that your site provides users with information that is useful and up to date, and that was written for your site. Copying content will not help you in the long run.

There are also some guidelines for link building. If you are trying to build up a lot of incoming links then you will most likely find that your site will move up in the rankings – as long as those links are relevant and high quality. Get a lot of spammy links, or get caught buying links from third parties, and you will lose some of that ranking. The reason for this is that Google wants to maintain the integrity of its index. If you are buying links, then you are not earning them through ‘votes’ and Google may think that your site is not actually worthy of those links in reality.

This issue has led to something called ‘negative SEO’ where rival webmasters get people to link to another webmaster’s site from ‘bad neighbourhoods’ – for example having a site that sells trainers linked to from a gambling website. This used to work, but now it is possible for webmasters to disavow links that they do not want to be associated with – so you get none of the ‘useful’ link benefits, but you also don’t get penalized for those links existing.
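
If you do find links you want to distance yourself from, the disavow tool accepts a plain text file listing domains and URLs. Below is a minimal sketch of building such a file in Python; the domains and URL shown are hypothetical examples, not recommendations.

```python
# A minimal sketch of writing a disavow file for Google's disavow links tool.
# The domains and URL below are hypothetical examples.
bad_domains = ["spammy-casino.example", "link-farm.example"]
bad_urls = ["http://old-directory.example/listing?id=123"]

lines = ["# Links we did not build and do not want counted"]
lines += [f"domain:{d}" for d in bad_domains]  # disavow every link from a domain
lines += bad_urls                              # or disavow individual URLs

with open("disavow.txt", "w") as f:
    f.write("\n".join(lines) + "\n")
```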

It is well worth taking the time to learn the Google guidelines. They will stand you in good stead when you are ready to promote your website, and they serve as a good framework for getting your site off to a smooth start, so that the SEO is easier when you are finally ready to do it and you won't fall into obscurity through no fault of your own.

 

When former head of web spam Matt Cutts was at Google, he spent a lot of time communicating with webmasters/site owners about updates. We knew what was coming, when it might be coming, and how severe it would possibly be.

If you woke up in the morning and your traffic had fallen off a proverbial cliff, you could go to Twitter and, based on what Cutts was posting, usually determine whether Google had run an update. You could even tell how severe the rollout was, as Cutts would typically give you the percentage of queries affected.

Although some believe Cutts was more about misinformation than information, when it came to updates, most would agree he was on point.

So if a site fell off that cliff, you could learn from Cutts what happened, what the update was named, and what it affected. This gave you starting points for what to review so that you could fix the site and bring it back into line with Google’s guidelines.

Why the help?

Cutts seemed to understand there was a need for the webmaster. After all, Google’s Search is not their product — the sites they return from that search are the product.

Without someone translating Google’s desires to site owners, those sites would likely not meet those guidelines very well. This would result in a poor experience for Google users. So, that transfer of knowledge between Google, SEOs and site owners was important. Without it, Google would be hard-pressed to find a plethora of sites that meet its needs.

Then, things changed. Matt Cutts left to go to the US Digital Service — and with his departure, that type of communication from Google ended, for the most part.

While Google will still let webmasters know about really big changes, like the mobile-first index, they’ve stopped communicating much detail about smaller updates. And the communication has not been in such an easily consumable format as Cutts tweeting update metrics.

In fact, very little is said today about smaller updates. It has gotten to the point where they stopped naming all but a very few of these changes.

Google communication in 2017

Right now, the Google spokespeople who primarily communicate with SEOs/webmasters are Gary Illyes and John Mueller. This is not a critique of them, as they communicate in the way Google has asked them to communicate.

Indeed, they have been very helpful over the past few years. Mueller holds Webmaster Central Office Hours Hangouts to help answer questions in long form. Illyes answers similar questions in short form on Twitter and attends conferences, where he participates in various AMA (Ask Me Anything) sessions with interviewers.

All this is helpful and appreciated… but unfortunately, it is not the same.

Highly specific information is difficult to find, and questioners are often met with more vagueness than specifics, which can at times feel frustrating. Google has become obtuse in how it communicates with digital marketers, and that seems to be directed by internal company processes and policies.

This lack of algorithmic specificity and update confirmation is how we wound up with Phantom.

Welcome, Phantom

Google has many algorithms, as any SEO knows. Some, like Penguin and Panda, have been rolled into Google’s core algorithm and run in (quasi-) real time, while others, like the interstitial penalty, still run, well, when they run.

Big updates such as Penguin have always been set apart from the day-to-day changes of Google. There are potentially thousands of tweaks to core algorithms that run every year and often multiple times a day.

However, day-to-day changes affect sites much differently than massive algorithm updates like Panda, Penguin, Pigeon, Pirate, Layout, Mobilegeddon, Interstitial, and on and on. One is a quiet rain, the other a typhoon. One is rarely noticed, the other can be highly destructive.

Now, Google is correct in that webmasters don't need to know about these day-to-day changes unless someone dials an algorithm up or down too much. You might not ever even notice them. However, there are other algorithm updates that cause enough disruption in rankings for webmasters to wonder, "Hey Google, what happened?"

This was true for an algorithm update that became known as Phantom.

Phantom?

There was a mysterious update in 2013 that SEO expert Glenn Gabe named “Phantom.” While it seemed to be focused on quality, it was not related to Panda or Penguin. This was new, and it affected a large number of sites.

When “Phantom” ran, it was not a minor tweak. Sites, and the sites that monitor sites, would show large-scale ranking changes that only seem to happen when there is a major algorithm update afoot.

Now, there was one occasion on which Google acknowledged that Phantom existed. Aside from that, however, Google has not named it, acknowledged it, or even denied it when SEOs believed it ran. Over time, this string of unknown quality updates all became known as Phantom.

The word “Phantom” came from the idea that we didn’t know what it was; we just knew that some update that was not Panda caused mass fluctuations and was related to quality.

Not Panda quality updates

The changes introduced by Phantom were not one set of changes like Panda or Penguin, which typically target the same items. However, the changes were not completely disparate and had the following in common:

  • They were related to site quality.
  • They were not Panda.
  • They were all found in the Quality Raters Guide.

We don’t use the word “Phantom” anymore, but from 2013 to 2016, large-scale changes that were quality related and not Panda were commonly called Phantom. (It was easier than “that update no one admits exists, but all indicators tell us is there.”)

Read more: https://searchengineland.com/the-trouble-with-fred-283343


Research into the FRED Google update, confirming why sites lost rankings.

The Fred update by Google caused quite a ripple in the SEO world, with many sites losing ranks, and hence traffic – up to 90% of it in some cases. I have been doing quite a bit of digging and have asked some gurus some pointed questions about what has happened and why.

The overall thoughts on the matter are that Google penalised sites that had poor content, or ones that were simply there to make money and not give anything back to the visitor in the form of useful data or information.

User Experience is Another Factor

Other thoughts on the matter were more to do with the user experience that a page gave its visitors. Here the sites said to have been hit included those that placed the copy below the fold of the screen or, in some cases, had very slow load times.

However, in some cases sites were hit that were not just ‘out to make money’, but that seem to have been ‘lumped in’ with those that do because of the lack of content on their page.

Having a Lot of Links Did Not Save Sites

There was also talk that Fred checked on the quality of the links to sites too. This may turn out to be the case; further research is needed on this matter. However, what we can say is that sites that fell foul of Fred's on-page quality checks were not saved by having a lot of links. Instead, their positions were taken by sites with inferior linking profiles, both at page and domain level.

This research only covers 9 sites, so it can hardly be said to be definitive, but the evidence so far all points in the same direction. Further research into the sites that were affected but did not fit the profile of sites that 'should have been affected' (by Fred) is the next step. More on the 'efficiency' of Fred later.

The FRED Data

In each case, the sites that held a first page rank for a given term before Fred were compared with the sites that hold the first page now (for that term). The sites that had lost their first page rank (they had to have held a position of 7 or better pre-Fred) were then checked, with a view to seeing what could have caused them to lose their rank and whether this fitted the profile of sites that Fred 'should have hit'.

The phrases checked covered a range of topics, from 'iqf fruit' to 'chemical companies', so they should be diverse enough to give some firm data. The checks themselves are simple enough to reproduce; see the sketch below.
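
Most of the analysis below comes down to counting the words of visible copy on each affected page. Here is a rough sketch of such a check in Python; the URL is a placeholder, and this is my own illustration rather than the exact tool used for the research.

```python
# A rough visible word count for a page (a sketch, not the exact research tool).
import requests
from bs4 import BeautifulSoup

def visible_word_count(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # drop elements that are not part of the page copy
    return len(soup.get_text(" ", strip=True).split())

# Hypothetical page that lost its first-page rank.
print(visible_word_count("https://www.example.com/page-that-dropped"))
```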

Search Phrase ‘iqf fruit’

[Screenshot: Google results for the term before and after the Fred update]

Here two sites lost their first page rank:

Site 1: [Screenshot of the affected page – not enough text for Fred]

This site had lost a rank of 5, and when checked we saw that the actual page shown when you clicked the link was https://www.thespruce.com/what-does-iqf-mean-995719 – a page not even on the stated domain, something that is sure to annoy Google to start with. Furthermore, this page had very thin content and seemed to exist only to provide a place for Google Ads and other advertisements. Being a prime target for Fred, it is not surprising to see that it was hit.

Site 2: [Screenshot of the fruitbycrops site – content too thin]

Again a site with very thin content – just 155 words, with an advert at the very top – again a prime target for Fred.

 

Search Phrase ‘chemical companies’

[Screenshot: Google results for the term before and after the Fred update]

Again, two sites were affected:

Site 1: [Screenshot of the affected page]

This is a big website with a lot of links – some 222,000 to the domain (although only 3 to the page in question). The reason it lost its ranks seems to come down to the fact that the page was just not related enough to the term, the topic being covered by just one short item on the page.

Site 2: [Screenshot of the penalised site. Was this site penalised because its copy was not ‘good enough’? That seems the most likely explanation.]

Another page that held just a small amount of what I would call ‘filler text’, not really ‘saying anything’ (at least in my view), the total length being just 251 words. Again, a prime target for the Fred update.

 

Search Phrase ‘welding supplies uk’

[Screenshot: Google results for the term before and after the Fred update]

Two sites were affected here:

Site 1: [Screenshot of the Weldingshop site, one of many hit by the Fred update]

This site is not that bad in reality, although some may think it a bit old fashioned, and it is certainly not as bad as many that do hold onto first page ranks. The most likely cause of the page's loss of rankings is that the main copy is only 340 words long, which leads me to believe that the length of the copy falls below the ‘satisfactory’ level laid down in the Google Quality Guidelines.

Site 2: [Screenshot – too little copy, and what there is sits below the fold: possible reasons for the site being hit by Fred]

This page lost a rank of 7, the amount of copy again being the likely cause of the drop: only 270 words on the page, all of it sitting below the fold – a factor that Google stated (back in 2012) degrades the value of any copy.

Search Phrase ‘metal fabricators’

[Screenshot: Google results for the term before and after the Fred update]

Three sites lost their ranks for this phrase:

Site 1: [Screenshot – too few words for Fred? Another site hit, more than likely due to the small amount of copy]

Yet another page that lost its ranks, apparently down to the lack of content, the copy amounting to just 154 words.

Site 2: [Screenshot – a page with over 600 words, but with the text below the fold, which could have caused a Fred hit]

This site had a rank of 4 before Fred, and does have a fair number of words, over 600 in all. However, 90% of it is below the fold on the screen and this looks to be the reason for the drop.

Site 3: [Screenshot – yet another site hit by Google’s Fred]

This page lost its 6th position; it is again a ‘low volume of copy’ casualty, the copy amounting to just 170 words.

 

Conclusion

In all cases we can see that the sites affected by Fred did seem to fit the patterns suggested by the gurus and by other research: they mostly had very thin copy, or ‘hid’ the copy below the fold of the page.

The next step is to see whether the pages we currently look after, SEO-wise, that also suffered a drop in rankings fit this pattern too.

Watch out for another report on this later in April.

Smart Scientific SEO Strategies for 2017

It’s been a fair few weeks since we managed to post anything on our blog, and frankly I’m amazed at how fast the year has gone so far, at the rate at which things seem to be changing, and at the amount of really useful software that has become available.

The post we’ve highlighted today (see below) comes from a series published by a well-respected web design and SEO company called AimInternet. It is certainly a useful piece, and it highlights the fact that the information in Google’s Webmaster Tools (now called Google Search Console) is very useful indeed. The main reason I say this is that Google (for reasons of privacy, they say…) stopped reporting in Analytics the keyword phrases used by visitors to a site. You can tell they come from Google, but not what search words they used – all very annoying when trying to work out which words are converting and which are resulting in a high bounce rate.

Google Search Console fills this gap to a degree, in that it gives you a good idea of the phrases being used, the number of times a phrase has resulted in someone seeing a Google listing for the site (impressions), the click-through rate (very useful, as it gives you an idea of whether your title and meta description are well tuned to attract clicks), as well as the average position in Google. But it does not tell you how visitors behave once they arrive – whether they stay or ‘bounce’.
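
If you want to work with these numbers in bulk rather than in the web interface, the same query data can be pulled via the Search Console API. Here is a minimal sketch in Python; the property URL and the credentials file are placeholders for your own setup.

```python
# A minimal sketch of pulling query data from the Google Search Console API.
# 'credentials.json' and the siteUrl are placeholders for your own property.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "credentials.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("webmasters", "v3", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2017-01-01",
        "endDate": "2017-03-31",
        "dimensions": ["query"],
        "rowLimit": 100,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"], row["ctr"], row["position"])
```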

You can start extrapolating the data to make some intelligent guesses about what is going on (there is software that will do this for you) but they are only guesses (you could always run an Adwords campaign to check, but that is another story).

But to get back to what the article is about.

Scientific Organic Search Strategy

In the article AimInternet mention that they had increased the ‘number of keywords present’, by which I think they mean the number of different search phrases (or ‘queries’ in Google Search Console speak) associated with a site. They made a big difference (something that we too pride ourselves on being able to achieve), increasing the number of associated phrases from 300 to 800. What this really means is that the ‘footprint’ of the site on Google has more than doubled, so it is more likely to be seen and thus get a click! All very good.

The process by which they reached this point is covered in earlier posts, and no doubt they follow the same ‘scientific’ path as we do. If they do, they will first carry out research to find the words being used by people searching for their customers’ services and products. Then they will weave these into the site and construct content that supports the drive for rankings for the chosen target phrases.

What they ‘might’ not do is to check on the sites that currently have the best positions for these target phrases and then ‘Reverse Engineer’ them. By following that path you ‘know’ the words that Google likes to see and can thus use them in the content. This system also gives you a list of all the similar words and phrases that should be used, which avoids keyword stuffing and gets the ‘message’ across to Google in the way that we know it likes.

Add some links (that themselves have to be intelligently added – there is software that helps with that now too) and the site WILL, like Eagle, be associated with more query phrases, get better rankings and thus more traffic.

But the trick is in carrying out each of these phases in a controlled scientific manner…

One very interesting point that Aim made is that once you have a list of the phrases that Google associates with a site, you should build on this and write content (about these phrases) that makes the site that bit more interesting and helpful. This will not only cement your position with Google but will no doubt improve the rankings for the site and, more importantly, give your readers more reasons to come back for more, and even, hopefully, buy from you.

They also make the point that visitors don’t always come in through the front door (the home page) so you should make your interior pages interesting too. This is not really new though, in that most of the pages on a site should be doing their best to engage with viewers by providing useful content, each page targeting a different set of keyphrases.

So a very interesting article.

To read the whole post on A Smart Organic Search Strategy please click the link

How We Use A Smart Organic Search Strategy To Get Our Clients On The First Page Of Google

This week we expand on looking at how to get your website on the first page of Google by using a smart organic search strategy.

In our last blog, we looked at the importance of getting on the first page of Google. And, we examined how our methods of using local marketing tools are driving traffic to the homepage – and producing fantastic results – for a client of ours. This week, we’ll expand on part of that methodology – using an organic search strategy to drive traffic to particular product pages or blog pages which then link through to specific product pages. We also do this via Adwords, although this is something we’ll look at in more detail in following blogs.

What Is An Organic Search Strategy?

In brief, an organic search strategy consists of finely keyworded product pages or blogs, which get picked up by Google each time one is published on a website. At this point, you might be thinking “I’ve already got all the information about the products or services I offer on one page of my site so I’ve nailed it, right?” or “I make rubber plugs, why the heck do I need a blog about those, who is going to read it?!”.

OK, so you might not be totally wrong about the last point (but hey, you never know, there might just be a rubber plug enthusiast out there who would LOVE to read your blog about them!).

Getting back to business…

Creating separate product pages on your site and posting blogs is all part of your organic search strategy. Simply, doing so creates more pages on your website containing the relevant keywords that you want your website to be found for, which Google can then index. The more relevant and unique pages and content you have on your site, the more shots on target you have at being shown on the first page of Google.

The important things to note here are relevant and unique. Google is smart and will penalise your site if you post up a load of duplicate pages and content. The same goes if you keyword stuff your posts and pages (that is, if you make your content unintelligible by jamming in too many keyword phrases).

We won’t go into it here but recommend that you take some time to familiarise yourself with good content practice. That includes following referencing protocols if you are using content from another site. For example, you might choose to do a blog post which rounds up the “5 best things about rubber plugs” and which uses information from other websites. That’s absolutely fine, but just remember to acknowledge and reference your sources correctly.

Why Do This?

How many pages are currently on your website? Probably not that many. So, if you currently have one page that discusses your 10 different products, by separating them out into individual pages you add 10 extra pages to your site virtually overnight. You'll be able to expand the content around each product, and so the mentions of the relevant keywords, too. So, whereas on the original page you may have only listed the types of product you sell, you can now go into more detail about each one on its own page. This naturally allows for an articulate way of including more of your desired keywords on your site – avoiding the extreme no-no practice of keyword stuffing.

Google likes new and relevant content. Each page becomes a new way for traffic to come to your site. Of course, once the core pages of your site are done, it's likely that you won't be updating those that often. Which is why, as part of any organic search strategy, we advise our clients to do regular blogging. And, in the case of blogging, the more regularly you post, the better.

Employing an organic search strategy such as this might mean that traffic enters your website not via the traditional route of arriving at the homepage. Instead it might enter on a product page or a blog post page written around a specific topic, which then links to a product page. Typically, we notice that customers will land on one of the product pages of our clients' websites, because of the organic search that we've set up for the client.

If you’re in the pressed parts trade you might do a search in Google for “copper plating”. Google will take into account your location (it gets this information from your settings) and present to you the most relevant results. Let’s say you’re Midlands based, as is EC Williams.

As a result of this search, people enter EC Williams' site on the Copper Plating product page. Once on the page, you are presented with all of the information you need about "copper plating" along with some important trust points about the company. Our analysis shows that, having landed on this entry point, people then navigate to other pages on the site. From this example in particular, we can see that "zinc plating" is the next most popular page. Once on the website, this alternative page is easily found in the navigation bar, under "Plating Services". From our research, most people stay on the "zinc plating" page, as they've found what they want. But, if they want more depth, they'll go on to "zinc nickel plating".

The point of this is that once on the EC Williams’ website, the customer is presented with everything they need to make a purchasing decision. And, if you were that person looking for a company who were experts in the field of coating pressed-parts, then, bingo – you just found them.

Straight away, serious buying customers get a snapshot of relevant information once they are on the site. Because of the trade they're in (pressed parts), they become interested in making an enquiry straight away. We've measured this extensively on EC Williams' site, plus many others, and know that it works. You need to make it easy for your customers to find information on your site, and this method works by doing just that. Everything has to be there for the user so that they're not having to look for things too much.

How Organic Search Strategy Works

Most people will find you through a long-tail keyword search. These are keywords that tend to be more specific. Your website content should be driven by the keywords that your SEO advisor gives you. They need to advise your outsourced content providers of these keywords so that they can write content around them.

Take a look at www.eagleplastics.co.uk. They are another client of ours. Again, you can see that, similar to www.ecwilliams.co.uk, everything a customer requires is there, easy to find, on the homepage, above the fold.

From an SEO perspective, when we started working with Eagle Plastics, the number of keywords we had to work with was much less than it is now. The site was receiving much less traffic than it does today, which meant that there were nowhere near as many clicks or impressions being recorded. This impacted the number of keywords being presented to us by Google. At the time we were only getting about 300 keywords presented, yet a year or so on, Google is now presenting 800 keywords.

This is a result of the organic search strategy we have implemented, like the one discussed earlier. Traffic gets signposted to the Eagle Plastics website based around these 800 keywords. And, now we have more of those, we can start creating content based on different keywords and keyword phrases.

Through testing the blogs, we are able to determine which keyword phrases are the most successful by analysing which ones have the best impressions.
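
As a rough illustration of that testing step (assuming you have exported the 'Queries' report from Google Search Console as a CSV file), a few lines of Python can rank the phrases by impressions. The file name and column headings below are assumptions, so adjust them to match your own export.

import csv

# Placeholder file name and column headings - match them to your own export.
with open("search_console_queries.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Rank the query phrases by impressions (stripping any thousands separators).
rows.sort(key=lambda r: int(r["Impressions"].replace(",", "")), reverse=True)

print("Impressions  Clicks  Query")
for r in rows[:20]:
    print(f'{r["Impressions"]:>11}  {r["Clicks"]:>6}  {r["Query"]}')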

On Eagle Plastics, “High Impact Polystyrene” is a key term for them. We know that this keyword phrase works well for them so we use it regularly in their blog headlines, in the h2 sub-headers and throughout the blog text. Of course though, we ensure we use it professionally and never keyword stuff.

As a result of this organic search strategy, we are providing more content to Google. This is recognised by them and results in Google starting to suggest more keywords which are relevant. We then create content based around these suggested keywords and their variations. As we post regular content which uses those keywords, Google views this as quality content and so provides us with even more relevant keywords. We then use these to continue to push the search and content strategy. The result is more traffic. But more than that, in getting more traffic, Google rewards you for quality content. And so it continues…

As little as five years ago, most searches were conducted using two keywords. Today people use an average of five words per keyword search term. What was once a keyword search for "plugs" is now a more unique phrase such as "the best luxury rubber plugs". As you can see, the one-word keyword has become a keyword phrase made up of multiple words. Searches are now more unique and these long-tail keyword phrases more specific.

Ultimately, it’s important to remember that every keyword search represents an intent by someone to find some information out. Long-tail keywords help you to better address that user intent by creating unique tailored content.

Statistics show that of 3 billion searches a day, 20% are unique. That's a heck of a lot of unique searches (around 600 million a day) – and to get displayed on the first page of Google, you need a successful organic search strategy to be found amongst all of that noise.

SEO Ho Ho – Search Engine Optimisation in 2016 – Xmas Message

The year is nearly at an end and Xmas has been and gone, but there is still a lot of cheer in the air and pleasant memories of all the festivities to boot (amongst them our company Xmas card – see the image below – which went down very well with our customers).

[Image: SOM's 'SEO Ho Ho' Xmas card]

But there are other reasons to be grateful about 2016, in that in my view Google has made some really good moves to make the results fairer and more accurate, the latest Penguin update really sorting things out.

This has been somewhat of a relief to SOM as we have been 'preaching' what we call 'Proper', 'Scientific' SEO. What we mean by this is that we research the words that people are searching for in a market area, then find the words that Google 'wants to see' for these phrases so that they can be incorporated into the copy. Then we add some relevant links (with a natural anchor text and source type mix) and hey presto, things start to happen.

The best part of this is that it is all totally 'Google legal' and can never, in our view, be subject to any penalties that Google may dream up at some time. We can say this as all we are trying to do is to make sure that any site we optimise offers some of the best information there is on a given subject, and of course we make sure that there are enough links to the site's pages so that Google thinks the same. We call this link building programme 'priming the pump', as once the site gets traffic, the links will start building organically. Link building is still required in many cases, but perhaps only because others are trying to get their sites' rankings higher too…

As to the blog post we have included below, we certainly agree about the rise of AI and believe that Google searchers have for some time been 'rats in the Google maze', in that Google has been analysing what we click on and what sites we like, thus getting closer and closer to its goal of truly understanding the real intent behind a given search term.

The other interesting thing raised here is the increased importance that mobile search is being given, not really surprising when you realise that people are accessing the web using mobile devices more and more these days.

For 2017 we see it as more of the same, Google getting cleverer and cleverer at spotting the good sites (the ones that deserve rankings) from the ones that don’t, all of which means you just have to ‘Do SEO properly’ or suffer the consequences…

To see the full article on SEO in 2016 and some predictions for 2017 please click the link.

What we’ve learned about SEO in 2016?

Since the inception of the search engine, SEO has been an important, yet often misunderstood industry. For some, these three little letters bring massive pain and frustration. For others, SEO has saved their business. One thing is for sure: having a clear and strategic search strategy is what often separates those who succeed from those who don’t.

As we wrap up 2016, let’s take a look at how the industry has grown and shifted over the past year, and then look ahead to 2017.

A growing industry

It was only a few years ago when the internet was pummeled with thousands of “SEO is Dead” posts. Well, here we are, and the industry is still as alive as ever. SEO’s reputation has grown over the past few years, due in great part to the awesome work of the real pros out there. Today, the industry is worth more than $65 billion. Companies large and small are seeing how a good search strategy has the power to change their business.

As search engines and users continue to evolve, SEO is no longer just an added service brought to you by freelance web designers. With the amount of data, knowledge, tools and experience out there, SEO has become a power industry all on its own.

Over the course of the year, my agency alone has earned a number of new contracts from other agencies that are no longer able to provide their own search efforts. A large divide between those that can deliver SEO and those that can’t is beginning to open up across the board.

The rise of AI

Artificial intelligence (AI) is now prevalent in many of our lives. Google, IBM, Amazon and Apple are very active in developing and using Artificial Narrow Intelligence (ANI). ANI can be used to automate repetitive tasks, like looking up product details, shipping dates and order histories and performing countless other customer requests.

The consumer is becoming more and more comfortable with this technology and has even grown to trust its results. Sundar Pichai, Google CEO, announced during his Google I/O keynote that 20 percent of queries on its mobile app and on Android devices are voice searches.

RankBrain, Google's machine-learning artificial intelligence system, is now among the top three ranking signals for Google's search algorithm. Why? Google handles more than 3.5 billion searches per day, and 16 to 20 percent of those are unique queries that have never been searched before. To handle this, the team at Google has harnessed the power of machine learning to help deliver better results.

While we can’t “control” RankBrain, what we can do is learn more about how Google is using it and then help the tool by creating good content that earns shares and links, building connections with others in our niche or related niches, and building trust in very targeted topics.

We are still in the beginning stages of this technology, but as more and more homes become equipped with smart tools like Amazon Echo and Google Home, we can be sure that these tech giants will use the knowledge they gain from voice search to power their AI technology.

The “Google Dance”

Every so often, Google likes to surprise us with a major algorithm update that has a significant impact on search results — some years we get one, and other years we get a little more.

While they do make nearly 500 tweaks to the algorithm each year, some are big enough to garner more attention. Let's look back at four of 2016's most memorable updates.

Mobile-friendly algorithm boost

A little under a year after “Mobilegeddon,” an event marked by the launch of Google’s mobile-friendly ranking algorithm, the search giant announced that it would soon be increasing the effects of this algorithm to further benefit mobile-friendly sites on mobile search. That boost rolled out on May 12, 2016, though the impact was not nearly as significant as when the mobile-friendly ranking algorithm initially launched.

Penguin 4.0

While this ended up being a two-phase rollout, Penguin 4.0 made its entrance on September 23, 2016. This has been considered the “gentler” Penguin algorithm, which devalues bad links instead of penalizing sites. The second phase of Penguin 4.0 was the recovery period, in which sites impacted by previous Penguin updates began to finally see a recovery — assuming steps were taken to help clean up their link profiles.

“Possum”

While this update was never confirmed by Google, the local SEO community noted a major shake-up in local pack and Google Maps results in early September 2016.

Fellow Search Engine Land columnist Joy Hawkins noted that this was quite possibly the largest update seen in the local SEO world since Pigeon was released in 2014. Based on her findings, she believes the update's goal was "to diversify the local results and also prevent spam from ranking as well."

Divided index

As mobile search continues to account for more and more of the global share of search queries, Google is increasingly taking steps to become a mobile-first company. In November, Google announced that it was experimenting with using a mobile-first index, meaning that the mobile version of a website would be considered the “default” version for ranking purposes instead of the desktop version:

“To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results.”

The time to say goodbye to 2016 is fast approaching, and I am truly excited to see what 2017 has in store for the world of SEO!

95% of websites are HURTING their Own Google Rankings

We have checked hundreds of websites over the years and the sad fact is that 95% of them are actually doing things that will make it harder (or impossible) to get rankings on Google.


Is Your Site One of the 95%?

The question that you (as a business website owner) might well be asking is: is MY site one of the 95%? Of course, you may not be bothered, thinking that your site's 'job' is just to 'be there' when someone wants to check up on you. But that is really a waste; your site could be doing so much more than just sitting back, waiting for the occasional visitor…

Brochure Sites

Brochure sites are sites that are just meant to act, well, as an online brochure, a means to impart information about a business to anyone who is interested. They are often only visited by people who, having heard about a company (or maybe having met someone at a networking event), want a bit more information before they contact them for a quote etc.

A Wasted Marketing Opportunity?

This is a good way of using the power of the Internet (it saves on a lot of brochure printing for a start), BUT is it also a wasted opportunity? The thing is, here you have a website full of (hopefully) interesting stuff about your business, the services that you offer and 'what makes you special', and yet no great effort is being made to get more people to read it all. This must be a wasted opportunity, as any one of those visitors (that the site is not getting) could be a potential customer…

So What Are These Sites Doing Wrong?

The fact is that there are many ways that business sites are 'getting it wrong' when it comes to getting Google to 'like' them, and thus give their pages a prominent position for a given search term. Some of them are quite basic mistakes too, and could easily be fixed with a few clicks (and a little bit of thought).

Some Examples of the Mistakes Sites Make

The Title Tag

You may not notice this one (although Google always does), as it is a bit hidden, but if you take a look at the top of your Internet Browser window, you will see the 'Title' information for the page you are looking at. In many cases you will see words like 'Home' or 'About Us'. Whilst not incorrect (as you would be looking at the Home or About Us page), they are not really very informative to the very 'person' you want to impress, and that of course is Google.

Think about it, would not a phrase like ‘IT Support Services | Computer Repairs’ ‘tell’ Google a bit more than the word ‘Home’? It really is a no brainer and so very easy to fix….

The Meta Description

When you look at a page you don't see this at all (not even at the top of the Browser); it is only visible in Google's search results, under the Title and URL of a site. This might make you think that it is worthless from an SEO point of view, but you would be wrong. It is true that the words in the Description do not have a lot of clout SEO-wise, but if you leave the field empty, or use the same one on many pages, you run the risk of making the site appear 'lazy' as far as Google is concerned, and that 'black mark' could make all the difference when Google has to decide which site to list for a phrase you want to be found for.

Again, a few clicks on the keyboard can make the problem go away.

The Elevator Speech

Another thing you should bear in mind is that a good Description can make all the difference when it comes to getting that all important click from the Google search results. Think of this 160 character text block as your ‘elevator’ speech and create one that would make someone just have to click through to your site, as it is only then that you get a chance to start that dialogue that could result in a sale or enquiry.
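
If you want to check these two tags quickly, the sketch below (a simple illustration, not a definitive audit tool) fetches a page, looks at its Title and Meta Description, and flags a generic Title or a missing or over-long Description. The URL is a placeholder and the script assumes the third-party 'requests' and 'beautifulsoup4' packages.

import requests
from bs4 import BeautifulSoup

URL = "https://www.example.co.uk/"   # placeholder - point this at one of your own pages

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")

title = soup.title.get_text(strip=True) if soup.title else ""
desc_tag = soup.find("meta", attrs={"name": "description"})
description = desc_tag.get("content", "").strip() if desc_tag else ""

# A Title of 'Home' or 'About Us' tells Google nothing about your services.
if title.lower() in ("", "home", "about us"):
    print("Title is missing or generic - describe the page's services instead.")
else:
    print(f"Title ({len(title)} chars): {title}")

# A missing Description risks the page looking 'lazy'; an over-long one gets cut short.
if not description:
    print("No meta description found.")
elif len(description) > 170:
    print(f"Description is {len(description)} chars - likely to be truncated in the results.")
else:
    print(f"Description ({len(description)} chars): {description}")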

The Header Tags

This is another of those things that you will probably not have noticed (and yes, you guessed it, Google is looking at this too), other, that is, than that the text might look a bit bigger. But why is the correct use of Header tags important? To explain this I need to give you a bit of a history lesson, it all starting with the way that documents are constructed. This actually goes back to the time when newspapers were laid out using lead type, as the editors had to be able to tell the people who were laying out the type which bits were important, that is, what words (like the headlines) needed to be big. This was all done using a 'Header Tag' number ranging from 1 to 6 (or something similar).

This rule set was reused when the code that describes how a page would be displayed on word processors and screens was written, it again being used to control how words would be displayed. This in turn fed through to the language that controls printers and also, most lately, to how web pages are rendered by Browsers, this of course being HTML.

The Advent of CSS Styles

In the early days of the Internet there were in fact only a few ways you could control how big the words on a page were, these Header tags being one of them. Today of course you can control the font, size and colour of the text on your webpages using CSS Styles, but the importance of the Header tag lives on, as Google still uses these to work out which words on a web page it should take more notice of, something that is vitally important when trying to get your page to the top of the results.

A Problem With Web Designers

It must be said that most sites use these Header tags, but the problem is they are often used incorrectly, the majority of web designers still using them to control the size of text, often compounding the issue by then using them for such terms as 'Home', 'Contact Us' or 'Blog'. Highlighting words like these to Google is useless; it is far better to use the tags to point out to Google those words that you want to be found for, like 'IT Support Prices' or 'Best Anti Virus Software'.

Putting this right is a little harder than both of the above, but it is still not that big a job and makes your site that bit better in Google’s eyes and thus that bit more likely to get a good listing in their results.
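
A quick way to see how your own Header tags are being used is sketched below: it lists every H1 to H6 on a page and flags more than one H1, or headers wasted on navigation words. Again, the URL and the list of navigation words are placeholders, and the 'requests' and 'beautifulsoup4' packages are assumed to be installed.

import requests
from bs4 import BeautifulSoup

URL = "https://www.example.co.uk/"                              # placeholder
NAVIGATION_WORDS = {"home", "blog", "contact us", "about us"}   # placeholder list

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")

h1_count = 0
for tag in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
    text = tag.get_text(strip=True)
    if tag.name == "h1":
        h1_count += 1
    note = "  <-- wasted on navigation" if text.lower() in NAVIGATION_WORDS else ""
    print(f"{tag.name.upper()}: {text}{note}")

if h1_count != 1:
    print(f"Found {h1_count} H1 tags - aim for exactly one, built around a phrase you want to be found for.")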

Links – The Popularity Voting System of the Internet

Whilst the majority of the power that links bestow comes from links to a site from other sites (so-called 'backlinks', as they link back to you), the links FROM a webpage to other sites and the INTERNAL links within a site are also important. The first tells Google that you are a part of the community that makes up your market place (as well as pointing it at some other valuable resources, which Google likes to see), whilst the second type helps Google understand what each of your pages is about, as well as helping people move about your site. As Google rates sites that offer the best 'user experience' higher than others, such internal links can only help.
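
As a simple illustration of this, the sketch below separates the links found on a page into internal links and links out to other sites, so you can see at a glance whether either group is missing. The URL is a placeholder; 'requests' and 'beautifulsoup4' are assumed.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

URL = "https://www.example.co.uk/"        # placeholder
site_host = urlparse(URL).netloc

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")

internal, outbound = [], []
for a in soup.find_all("a", href=True):
    href = urljoin(URL, a["href"])        # resolve relative links against the page URL
    if urlparse(href).netloc == site_host:
        internal.append(href)
    else:
        outbound.append(href)

print(f"{len(internal)} internal links and {len(outbound)} links out to other sites")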

Incoming Links

Whilst the links to a site cannot be put right by making changes to the site itself, they are a vital part of the 'battle' to get a site listed on Google, accounting for about 40% of the marks that Google allocates when deciding which site to list for which term. However, the fact is that the majority of sites either don't have any (or enough) links, or have the wrong sort. Both of these can really hinder a site's chances of getting a first page (or any) ranking. Fixing them can take a long time and a lot of work though, and has to be done very CAREFULLY.

 

SEMANTIC SEO and the Words on the Page

Semantic SEO is all about making sure that Google understands what a site is all about, thereby ensuring that its 'meaning' is fully comprehended. This is easier to do than you might think, the major thing to get right being to make sure you use the right words on the page. The right words, of course, are the words that Google wants to see. The good news is that Google will tell you what these words are; all you have to do is ask in the right way, this being done by 'Reverse Engineering' the top pages on Google…

Writing the Right Copy

Armed with these words and phrases, and a good understanding of the subject (it helps if you are a genuine expert), you can then write the right copy, adding some images and, if you can, audio and video components as you go. Sprinkle in some internal and external links at the same time and you have gone a long, long way towards cracking this particular nut.

 

Polishing the Spitfire

You may not believe it, but it is said that back in World War 2 they used to polish the photo reconnaissance Spitfires (as well as painting them pink so that they were harder to spot in the dawn or dusk skies) just so that they could gain a few mph, something that could make all the difference – life or death in this instance – when being chased by enemy fighters.

If you follow the guidance above and fix any of the items mentioned, it will in effect polish your website a little, perhaps gaining just enough extra 'speed' to get your site onto Page 1 of Google and thus win the extra traffic that could make all the difference to your business.

 

Need Help With the Polishing?

However, if you need help with the polishing, even if it’s just some assistance in finding out what bits to polish the hardest, please do give us a call. We are here to help and offer a lot of free advice and assistance.

WHAT IS SCIENTIFIC SEO?

First a bit of history about Search Engine Optimisation

SEO can trace its history way back to 1994 when the early pioneers discovered that they could use the Internet to drive traffic to their sites and hence sell their goods. As this idea became more accepted, people started competing with each other for traffic and that meant that they had to ‘convince’ the Search Engine of the day to list their site for appropriate terms.

The Search Engine of the Day has changed over the years, Alta Vista, Ask Jeeves and Yahoo all having been top dog at some point. However, today the big player is Google and thus that is the engine everyone wants to get listings on, and that of course means you have to understand the rules.

 

The Rules of The Old SEO

The rules that the Search Engines use have altered drastically over the years, as they have become more and more sophisticated. At the start it was easy to 'trick' the Engines: all you needed to do was to stuff the pages with your keywords and get some links to the site (Google's first ranking algorithm was based on something called PageRank, which is basically all about the number of links to a site – and not much else).

These ‘old’ rules however had one big problem, in that the SEO professionals of the day kept finding ways around them and thus the Engines had to keep taking steps to close these ‘holes’ in their rule sets.

This process escalated over the years, especially after 2010, until Google decided that enough was enough and settled on a whole new approach, one that could not be tricked and relied on one thing: perceived quality.

 

The New SEO and Perceived Quality

Today, with the advent of something called ‘SEMANTIC SEO’ (the meaning of a site, what it is really all about), things are a lot different, it being all about the quality of the content of a site.

But Why use the term Perceived Quality?

I use this term as I believe that there are limits to what Google can do, in that its computer algorithms cannot 'really' decide what is genuine 'quality' content and what is not. Also, as mentioned above, links had, and still have, a vital role to play in how Google decides which site to list for what. But it cannot always tell if these links are 'real' or have been created artificially, thus in all cases Google looks at a page or site and decides (using its rule sets) whether it is quality or not.

This is why I say it is the quality that Google perceives in a site that is important. So how can you convince Google that your content is good enough to get a top ranking??

The Rules of the NEW SEO in Detail

Many changes have taken place in the world of SEO since 1994, but all of them are based on four things, one of which has only recently come to the fore.

The Four Things SEO is and was Based Upon

Site Construction

The way a site is built is important, as if it is constructed in the wrong way then Google cannot (or may just not want to be bothered to) find all the pages in the site. Also, if the site is built in such a way that it is very slow, or is not mobile friendly, then Google will downgrade the site in various ways.

One thing that does not cause so much of a problem today is that of the ‘Code to Text’ ratio (the amount of code that is used to build a site versus the number of words visible to the visitor). In the old days, too much ‘construction code’ was an issue, but today, with the advent of WordPress and the like, Google has been ‘forced’ to ignore this area, virtually all sites being very code heavy.
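
For the curious, the 'Code to Text' ratio itself is easy to estimate: the sketch below compares the size of the raw HTML downloaded with the amount of visible text it produces. The URL is a placeholder and the same third-party packages are assumed; treat the number as a rough indicator only, since (as noted above) Google now largely ignores it.

import requests
from bs4 import BeautifulSoup

URL = "https://www.example.co.uk/"        # placeholder
html = requests.get(URL, timeout=10).text

soup = BeautifulSoup(html, "html.parser")
for tag in soup(["script", "style"]):
    tag.decompose()
visible_text = soup.get_text(" ", strip=True)

# Rough 'Code to Text' ratio: visible characters as a share of everything downloaded.
ratio = len(visible_text) / max(len(html), 1) * 100
print(f"Visible text makes up {ratio:.1f}% of the {len(html)} characters downloaded")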

You MUST however ensure that the site can easily be navigated, a failure in that department being very serious indeed. Plus you should also use a fair number of internal links (not just the navigation) to highlight to Google what each page is about.

Words, Pictures and Videos

This is the area most affected by the new SEMANTIC SEO, it being vitally important to use all the ‘right’ words in a page. Gone are the days of just stuffing a page with the words you want to be found for. Today you need to understand what words Google wants to see and then make sure you include them in the copy, also making sure that you include pictures and where possible audio and video content on the page.

Reverse Engineering is the Key

This is where reverse engineering can help, the idea being that if you know what words are being used on the top pages (for a given term) then by including them (using correct grammar of course, as this is also checked) you must be getting closer to the perfect page.

Links

In the early days of SEO, links were vitally important; in fact they could, all by themselves, get a page listed. However, today things have changed a lot. Links are still important, counting for some 40% of the reason for a site getting a rank, but they are not as all-powerful as they used to be.

Google is Watching You

Besides not being as important as they used to be, the links to a site are now carefully checked by Google. Their aim? To make sure that the links to a site are 'natural' and not all built by an SEO company (although they know, of course, that the practice goes on all the time).

This checking is carried out by a process that Google has labelled 'Penguin'. Basically this checks a site's linking structure to see if it complies with the 'rules' and is hence seen to be natural. Here the number of links using the domain or URL of the site as the anchor text (the bit we humans click on) is checked, as is the number of links using 'money words' (the terms that a site wants to be found for) and the number of 'noise' links, like 'see this site' or 'click here'. If the balance is not right, or the links seem to have been created too fast, then a site can be heavily penalised.
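
Penguin's actual rules are not public, but the rough sketch below shows the kind of profile check you can run yourself on the anchor texts from a backlink report: it sorts each anchor into the brand/URL, 'money word' or 'noise' groups described above and prints the percentages. The brand terms, money terms and sample anchors are all placeholder data.

from collections import Counter

# Placeholder data - swap in the anchor texts from your own backlink report.
BRAND_TERMS = ["example", "example.co.uk"]                      # brand / URL anchors
MONEY_TERMS = ["it support services", "computer repairs"]       # target 'money' phrases

anchors = [
    "www.example.co.uk", "Example Ltd", "click here",
    "IT support services", "see this site", "computer repairs",
]

def classify(anchor):
    text = anchor.lower()
    if any(term in text for term in BRAND_TERMS):
        return "brand / URL"
    if any(term in text for term in MONEY_TERMS):
        return "money words"
    return "noise ('click here', 'see this site', ...)"

profile = Counter(classify(a) for a in anchors)
for group, count in profile.items():
    print(f"{group:45s} {count / len(anchors):5.0%}")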

This means that a site’s links have to be built very carefully over time and not all in a rush.

Social Media

This is very new in SEO terms and the amount of 'power' that social media chit-chat and comments on Facebook and Twitter provide is not fully understood. In my view, the importance of Social Media is more to do with other marketing channels, but nevertheless, obtaining links via things like 'Social Bookmarks' can be useful.

Putting it All Together – Scientific SEO

So, what does all this mean?? Basically, it means that you must

 

  1. Find the words you want your site to be found for – KEYWORD RESEARCH
  2. Find the words you need to include in the copy of the page(s) using Reverse Engineering – CONTENT RESEARCH
  3. Build the links to the site, CAREFULLY
  4. If you can, get some Social Media comments going (more important for sites selling direct to the public than for B2B sites)
  5. Monitor the progress and make changes to improve matters further

 

 

I hope this helps you understand how the matter of SEO has to be approached today.

What Google Wants…

So What Does Google Want?

If the full answer to this question was indeed understood, you can bet just about every site that wanted top rankings would make changes to the way their site looked and worked in pretty short order, the prize, that top place on the first page of Google being worth a lot of money…

But of course, Google won’t tell anyone just what they want, instead they just give out information about some of the things they want to see and as importantly, don’t want to see. Whilst the knowledge that is imparted is useful, it only gives us a part of the picture.

 

[Image: Google's RankBrain]

A Vital Point – Google Often Ignores Its Own Rules

The biggest problem, from my point of view (as an SEO professional), is that Google not only don't tell you the rules, they also don't keep to the ones you do know about. This makes applying any scientific approach to the process difficult; it's just like trying to find the boiling point of water when someone is altering the air pressure all the time. One time water will boil at 100°C, whilst at another it will boil at 90°C…

You can see this 'not following their own rules' phenomenon all the time (if you know what to look for), with sites that break the rules still enjoying top ranking positions. This does make life difficult, but it does not invalidate the data you can obtain by checking a site's linking or page structure, as it is more than possible (especially when it comes to links) that Google has yet to impose some form of penalty, the site then potentially losing the rankings it currently has.

Some of the Known SEO Rules

Keyword Stuffing

In the early days of the web, it was quite easy to trick the Search Engines into providing a first page rank simply by using the target words over and over again… Things have moved on now though, and if you try this trick today you will (more than likely) get worse rankings, not better ones.

Status – Avoid…

Use of Title Tag

The Title tag is not shown on the page itself, but is shown in the Browser window and is used by Google to 'understand' what topic the page in question is all about. It also, importantly, forms the phrase that is seen when a site is listed by a Search Engine, so it is something that needs to be carefully chosen.

Status – Use wisely (best to keep to 65-70 characters)

Meta Description Tag

Like the Title tag, the words in this area are not shown on the actual page; instead they are used in the Search Engine listings and are, to all intents and purposes, an 'elevator speech'. Their effect on SEO is very limited, except that if the same text is used on lots of pages it is believed that they may have a negative effect.

Status – Ensure that each page of your website has a unique 'elevator speech' that is 165-170 characters long.

Header Tags

These tags have a long history, their use dating back to the days when newspapers were printed using lead type in blocks. More recently, they formed a part of the PostScript language that allowed computers to communicate with printers. They were then subsumed into HTML and, at the very start of the Internet, were the only way of creating bigger text on the screen. There are 6 Header styles, from H1 (the most important) to H6 (the least).

Google has stated in the past that it uses the text within these tags (<H1>the text</H1>) as a pointer to what the page is about, but now that CSS styles are used to control the size of the text on pages, there is some debate as to whether Google also treats any BIG text as important.

Status – Use, but only for important phrases (not for Navigation) and only have one H1 tag.

Word Count

There is evidence that the top pages for many search terms are ones that have over 1,000 words of copy, although this 'requirement' can go up and down depending on the level of competition. The most important factor here is to use the 'right words' on the page (these are best found by reverse engineering the top sites for any term) and to use as many of them as you can. Size is important here for two reasons: the first is (as explained above) that Google likes lots of words (words are its food after all), but there is another, equally important, reason to have a lot of text.

This second reason is based on the fact that obtaining traffic for 'long tail' searches can be great for business, such search terms (normally 4 words or longer) often being used by people who are nearer the end of the buying process and thus more likely to convert.

Status – Try to create pages that are 1,000 words or longer which contain relevant words and terms
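
If you want a quick idea of where a page stands against that figure, the sketch below counts the visible words on a page and flags anything under 1,000. The URL is a placeholder and the 'requests' and 'beautifulsoup4' packages are assumed.

import requests
from bs4 import BeautifulSoup

URL = "https://www.example.co.uk/services/"   # placeholder

soup = BeautifulSoup(requests.get(URL, timeout=10).text, "html.parser")
for tag in soup(["script", "style"]):
    tag.decompose()

words = soup.get_text(" ", strip=True).split()
print(f"{len(words)} visible words on the page")
if len(words) < 1000:
    print("Thin on copy - consider expanding it with the terms found by reverse engineering.")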

Tabs and Accordions (Copy Triggered by User Interaction)

One of the reasons that pages are often too light on copy is that the site owner (and the designer) rightly points out that a page that looks like a 'wall of text' is likely to be off-putting to viewers and would therefore increase bounce rates and reduce conversion rates.

There is a way of placing the text on the page so that Google can read it, but at the same time 'hiding' it from viewers until they want to see it. There are various methods of doing this, but in every case it is a user action that causes the text to be made visible. This is not treated as hiding text (in the old days people used white text on a white background, would you believe), something that Google frowns upon and which could get a site banned, but it is a practice that Google has reportedly said it is not altogether happy about.

I find this stance of Google's somewhat strange, as they also want sites to offer the best possible 'user experience', and it makes me feel that Google wants to have its cake and eat it too. But as I don't believe they are actively penalising sites that use this in their interface, it seems the best way of providing Google with the words it needs whilst giving users the best way of assimilating the site's message.

Status – Use with caution.

Internal Links and Links From Your Site

The power of links to sites is well understood, the right type and number enhancing the possibility of a site getting better rankings. However, it is not so widely understood that the links from a site also have their place. The reason that they are important is that they 'prove' to Google that the site is a part of the wider community (in that market area), as well as potentially helping users locate other relevant information.

Internal links also have a role in that they allow users to move through a site in an easier way than just using the navigation system. Used carefully these link types can really assist in improving both the ‘user experience’ and Search Engine rankings.

Status – Do implement links to relevant sites, the more powerful the better. Also, consider what internal  links you could place on your pages.

 

Links To Your Site.

Links to a site are still very powerful, accounting for at least 50% of the reason that a site is selected by Google for a ranking and form a VITAL part of any plan to get better Search Engine listings.

There are however some important factors to bear in mind…

  • Ensure that the links come from a wide number of locations / sites
  • Make sure that the anchor text used contains no more than 25% of ‘money phrases’
  • Check to see that the number of ‘other phrases’ is high, at least 30-40%.
  • Remember that a site is more than just a home page, links to internal pages also being needed
  • Plus when building links, make sure that you don’t build too many too quickly..

 

There is more to SEO of course, so please do see our site for more details and assistance.

What is the aim of Search Engine Optimisation?

The aim of Search Engine Optimisation is, obviously, to get traffic (the right sort of traffic) from the Search Engines, this being achieved by making a website more 'attractive' to Google, Bing etc., so that for certain phrases the site is listed when someone searches for that phrase.

[Image: SEO – The Old and the NEW]

The process of making a site appeal to the Engines is well understood, the rules dictating where you place the words that are important to the site's SEO being ones that Google, for example, is happy to share. Some of the things that you should NOT do have also been shared, but the content of a site is only half the story, the power of links still being one that cannot be ignored.

It is this latter point that is to a degree strange, it being an effect of the very start of Google, when its PageRank algorithm powered the way by which sites were graded. PageRank was based on the idea that sites that had lots of links MUST be good (why otherwise would people take the time to create the links?). It was not just the number of links that counted though: the PageRank system looked at the page that held the link and at what pages linked to it, then checked the links to that page, and so on. I am not sure as to the 'depth' that Google went to here, but it was quite deep and in the beginning it worked quite well.

 The Start of The SEO ‘Battle’.

As soon as people became aware of how the PageRank system worked, and bearing in mind the pot of gold that this form of marketing seemed to offer, companies sprang up offering services that were designed to create links, thus 'fooling' Google into thinking that a site was more popular than it really was.

Besides the links, Google also (at this early stage) took only a cursory look at the words on the pages, being heavily influenced by the simple inclusion of a phrase, hence the start of pages that were 'stuffed' with strings of words, no real effort being made to make the page appeal to anyone other than the Search Engines themselves.

Once this ‘war’ started, Google began to fight back, their systems starting to spot and penalise sites that stuffed their pages with the words they wanted to be found for, while also starting to check on the linking structures of sites in greater and greater detail, both of course with the idea of stamping out the ‘cheating’ that was going on.

Like most wars, both sides got cleverer and cleverer, one thinking of ways to get around the checks and rules that were created, the other trying to combat the attempts, one of the results being the birth of two of Google’s animals, Panda and Penguin.

Google’s Penguin – The Link Checker

As mentioned above, at the start it was links that mattered more than anything else, it being said by some that they could get a blank HTML page ranked if they created enough links to it. Google of course tightened its rule sets to try to counter such practices, in the end deciding to run periodic checks on the links to a site, the rule set being named ‘Penguin’.

Penguin's aim is to ensure that the linking structure looks 'normal' (that is, one that has not been manipulated too much), and there are many checks that we know it runs (and many that I suspect we do not), these including the type of sites the links come from and the words used as the 'anchor text' (the bit you click on). Failure to keep your linking structure looking 'normal' could result in an automatic penalty, one that could cause a site to lose rankings and potentially to be removed from the listings entirely.

However, the real change is not so much about checking the links, but the way that Google evaluate sites in an overall manner.

Google’s Panda

Besides links, it is the power of the content that Google measures. In the beginning, it was quite easy to 'fool' Google by simply including the words you wanted to be listed for; the quality of the site was not important. Of course Google, who wanted to make money from advertising, could not allow these poor quality sites to dominate its rankings, as that would cause people to switch to another Engine, and with Google competing against the likes of Yahoo and Ask Jeeves this was important…

What Google needed was a system by which they could ensure that the pages they listed first were relevant and offered the information or service which people needed and wanted. Poor quality sites with little (or copied) content were not wanted…

Thus the Panda rule set was born, its job being to sniff out sites and pages that were of poor quality, this including sites that were not updated frequently enough, or seemed not to be 'bothered' enough even to create proper Meta Descriptions and Titles – in short, sites that appeared to be 'lazy'.

Panda also checked for copied and duplicated content as well as looking for pages that were ‘thin’ on words (less than 250), at the same time giving points to sites that included videos and images as well as links to interesting and relevant sites.

Then Came (or Comes) the Semantic Web

Both Penguin and Panda (in their various forms – they kept being altered to make them tougher to fool) were attempts by Google to ensure that the sites they listed were the very best (for any given search phrase), but they were not perfect and time after time the SEO community found ways around them.

Google could see this was going to be a never-ending battle, so turned their attention to creating a set of rules that could not be fooled, this rule set looking not just at pages and links, but at the overall 'meaning' of the site (this is what Semantic SEO is all about): what it is really about, what problems it is trying to solve and what services it provides.

The Current State of Play

Once Google has perfected this rule set, it is pretty certain that only the best sites and pages will appear at the top of its rankings. BUT, and it is still a BIG BUT, they are far from achieving this at the moment, and will be for at least the very near future: you only have to look at the top listings to see that in many cases there are far better sites that should be occupying those coveted first page positions.

This of course is extremely annoying for any site owner who KNOWS their site deserves better treatment, but until Google really does implement this strategy in full, all you can do is your best to promote your site, and that means carrying out work both on and off the page.

SEO What You Need To Do

There are two main areas that need to be handled in the right manner: one is making sure that the pages are full of useful content, the other making sure that the site has enough links so that Google 'believes in it' enough to rank it. This is, as you can imagine, quite a wide topic, but one that we cover in great detail.

So, please see the other pages of this site for more details on what Search Engine Optimisation is all about and the services that we offer at Serendipity Online Marketing Ltd