SEO is Changing, Not Dying

I came across another interesting post today, discussing (yet again) whether or not SEO is dying.

It is well worth a read and is shown in part below; the full article can be reached by clicking the link.

There is a lot there, but nothing really that new. What it does say, and this is important, is that Google are now taking more and more note of the way people interact with a site, whether they stay on the pages and move through the site, or whether they simply ‘bounce’ back to the Google results list.

This is all worth talking about, but it is not new (see our blog on Semantic SEO and the feedback loop) at all. Still, the article is worth a read and does cover the important issue of checking your Analytics to see if visitors ‘like what they see’.

As old-style SEO techniques head out to pasture, data analytics becomes a core capability for organisations’ web strategies

The changing field of search engine optimisation (SEO) means IT professionals involved in website development increasingly need to take on the role of data analyst in order to fully understand a website’s audience and consumer behaviour.

Internet search engines rank websites based on internal algorithms that determine how relevant a website is for each user’s search request. Websites that have undergone SEO will naturally rank higher than others, as the search engine’s algorithm will determine that this is the more appropriate website.

As users rarely go beyond the first page of an internet search, typically the first 10 results, a high search engine ranking is crucial for a company to remain competitive in today’s digital markets.

Google has recently updated its algorithm that determines how it ranks search engine results. Previously, these results were based on a website’s keyword descriptions and meta-tags. These were embedded within each page of the website by SEO specialists, so that the websites with the most appropriate keywords would appear higher in the results for relevant internet searches.

Following this update to its algorithm, Google now also takes into account a website’s audience behaviour and each user’s online preferences when it comes to ranking search results. For example, Google will make a note of the different subjects written in Gmail and take this into account when relevant internet searches are conducted in the future.

Theoretically, this is good news for everyone. If someone is interested in what you do, and your content matches their search parameters, then they are more likely to see your page.

“If you want to see an unadulterated results page, look at something you Google a lot while logged in, then switch to incognito mode and try the same search,” says Dave Convery, content manager at Simple-talk.com. “You will almost certainly see some different pages, and probably a different order to your results.”

Google’s RankBrain – Is It Really That New?

I came across an article about Google making a big announcement about something called ‘RankBrain’, which is stated to be all about ‘machine learning’, or the process by which a computer program ‘learns’ what is the right and wrong way to ‘do things’.

Google’s RankBrain (public domain image from Pixabay)

This sort of thing is not new at all, computer devices that can learn the route through a maze being one example. As soon as they recognise the ‘maze’, they can quickly navigate their way through.

Google have been using this sort of system for years in one form or another anyway. Take the huge test they ran on sites around the world. In this test, they rated sites for ‘usefulness’ and at the same time checked and noted certain information about the sites’ pages.

Later, they checked to see if there was any correlation between the sites rated as ‘not very useful’ and the way certain elements, like the Meta Title, were used.

It was no surprise when they found that the ‘poor’ sites demonstrated a lack of detail in some areas. Using this data, Google could therefore, with some certainty, deduce how useful a site is likely to be simply by comparing it with the list of characteristics that the ‘poor’ sites demonstrated.
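The checklist-comparison idea described above can be sketched in code. This is purely illustrative: the characteristic names, penalty weights and scoring scale are all invented for the example, not anything Google has published.

```python
# Hypothetical sketch: rate a page by comparing it against a checklist of
# characteristics that 'poor' sites tended to share. Signals and weights
# are invented for illustration.

POOR_SITE_SIGNALS = {
    "missing_meta_title": 3,          # 'poor' sites often lacked a Meta Title
    "duplicate_meta_description": 2,
    "no_heading_tags": 2,
    "thin_content": 3,                # very little text on the page
}

def likely_usefulness(page_signals):
    """Return a rough usefulness score: start from 10 and subtract a
    penalty for each 'poor site' characteristic the page exhibits."""
    score = 10
    for signal, penalty in POOR_SITE_SIGNALS.items():
        if page_signals.get(signal, False):
            score -= penalty
    return max(score, 0)

# A page missing its Meta Title and with thin content scores poorly.
print(likely_usefulness({"missing_meta_title": True, "thin_content": True}))  # 4
```

The point of the sketch is just that, once the ‘poor site’ characteristics are known, a score can be deduced without any human rating the site.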

This is machine learning, but it seems that the new system will be taking things one step further. At the moment you only see the sites that are marked well ‘out of 10’, the really poor ones not getting (or even being considered for) a ranking.

But with the new algorithm, more sites may be considered when it comes to matching search terms with sites. Alternatively, by learning which sites are best received for any phrase (that is, where the visitor does not bounce back to the listings after visiting a site), Google may in future be able to list sites in a better way.

Rats in Google’s Maze

However, if it is the latter, then even this is not new: Google has for some time been using its users as ‘rats in a maze’.

The process is really quite simple, if vast (Google handles some 3.5 billion searches a day), and goes like this:-

(1) Google has already matched sites to phrases to some degree; in other words, it has a method whereby, using current data, it can return a list of sites (very quickly) for any search term.

Users are thus presented with a list of sites to choose from for any search term. This list will be subtly different for each user because of ‘personalisation’ and Factor X – this being the way in which Google includes new sites every now and then to test users’ reactions. This Factor X is important for (3).

(2) Google then sees what people click on from the listings. If a site/page gets few or no clicks (a poor Click Through Rate, or CTR), it may be removed from the listings for certain terms. However, if it does get clicks, Google then checks to see if people ‘bounce’ back to Google to try another site.

(3) If they bounce back in enough numbers, Google ‘knows’ that the site is not a good match for that phrase. Good matches are allowed to remain in the listings; bad ones are removed. This is why the addition of some new sites in the listings is important, as it widens the pool of data and helps Google to ‘understand’ what is ‘behind’ a user’s search term and why it is being used, this in turn being deduced from what seems to satisfy the request.

Of course this takes time, many thousands of searches being required before any decision can be made. Google, however, has the time and the resources to do this, and the entire process must be considered a form of ‘machine learning’, Google ‘learning’ to tell what is a good match and what is not.
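The three steps above can be sketched as a small simulation. Everything here is an assumption for illustration – the thresholds (`min_ctr`, `max_bounce`), the site names and the statistics are invented, and Google’s real system is certainly far more involved.

```python
# A minimal sketch of the 'rats in a maze' feedback loop from steps (1)-(3).
# All thresholds and data are invented for illustration.

def update_listings(listings, click_stats, min_ctr=0.01, max_bounce=0.8):
    """Keep a site in the listings for a phrase only if enough users click it
    (CTR) and enough of those users stay rather than bouncing back."""
    kept = []
    for site in listings:
        stats = click_stats.get(site, {"impressions": 0, "clicks": 0, "bounces": 0})
        if stats["impressions"] == 0:
            kept.append(site)      # no data yet: keep testing it (Factor X)
            continue
        ctr = stats["clicks"] / stats["impressions"]
        bounce_rate = stats["bounces"] / stats["clicks"] if stats["clicks"] else 1.0
        if ctr >= min_ctr and bounce_rate <= max_bounce:
            kept.append(site)      # a good match for the phrase: keep it
        # otherwise the site is dropped from the listings for this phrase

    return kept

listings = ["site-a.com", "site-b.com", "site-c.com"]
stats = {
    "site-a.com": {"impressions": 1000, "clicks": 120, "bounces": 30},  # good match
    "site-b.com": {"impressions": 1000, "clicks": 5, "bounces": 5},     # poor CTR
    # site-c.com is a freshly added test site with no data yet
}
print(update_listings(listings, stats))  # ['site-a.com', 'site-c.com']
```

Run over thousands of searches, a loop like this gradually separates the good matches from the bad – which is all ‘machine learning’ really means here.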

All of this means that RankBrain may not be that new after all.

The article that sparked off mine is included in part below. For the full article on Google’s RankBrain please click the link.

All About The New Google RankBrain Algorithm

Google’s using a machine learning technology called RankBrain to help deliver its search results.

Here’s what we know about it so far.

Yesterday, news emerged that Google was using a machine-learning artificial intelligence system called “RankBrain” to help sort through its search results.

Wondering how that works and fits in with Google’s overall ranking system? Here’s what we know about RankBrain.

The information covered below comes from three sources. First, the Bloomberg story that broke the news about RankBrain yesterday (see also our write-up of it). Second, additional information that Google has now provided directly to Search Engine Land. Third, our own knowledge and best assumptions in places where Google isn’t providing answers. We’ll make clear where any of these sources are used, when deemed necessary, apart from general background information.

What Is RankBrain?

RankBrain is Google’s name for a machine-learning artificial intelligence system that’s used to help process its search results, as was reported by Bloomberg and also confirmed to us by Google.

What Is Machine Learning?

Machine learning is where a computer teaches itself how to do something, rather than being taught by humans or following detailed programming.

What Is Artificial Intelligence?

True artificial intelligence, or AI for short, is where a computer can be as smart as a human being, at least in the sense of acquiring knowledge both from being taught and from building on what it knows and making new connections.

True AI exists only in science fiction novels, of course. In practice, AI is used to refer to computer systems that are designed to learn and make connections.

How’s AI different from machine learning? In terms of RankBrain, it seems to us they’re fairly synonymous. You may hear them both used interchangeably, or you may hear machine learning used to describe the type of artificial intelligence approach being employed.

So RankBrain Is The New Way Google Ranks Search Results?

No. RankBrain is part of Google’s overall search “algorithm,” a computer program that’s used to sort through the billions of pages it knows about and find the ones deemed most relevant for particular queries.

What’s The Name Of Google’s Search Algorithm?

It’s called Hummingbird, as we reported in the past. For years, the overall algorithm didn’t have a formal name. But in the middle of 2013, Google overhauled that algorithm and gave it a name, Hummingbird.

So RankBrain Is Part Of Google’s Hummingbird Search Algorithm?

That’s our understanding. Hummingbird is the overall search algorithm, just like a car has an overall engine in it. The engine itself may be made up of various parts, such as an oil filter, a fuel pump, a radiator and so on. In the same way, Hummingbird encompasses various parts, with RankBrain being one of the newest.

In particular, we know RankBrain is part of the overall Hummingbird algorithm because the Bloomberg article makes clear that RankBrain doesn’t handle all searches, as only the overall algorithm would.

Hummingbird also contains other parts with names familiar to those in the SEO space, such as Panda, Penguin and Payday designed to fight spam, Pigeon designed to improve local results, Top Heavy designed to demote ad-heavy pages, Mobile Friendly designed to reward mobile-friendly pages and Pirate designed to fight copyright infringement.

I Thought The Google Algorithm Was Called “PageRank”

PageRank is part of the overall Hummingbird algorithm that covers a specific way of giving pages credit based on the links from other pages pointing at them.

PageRank is special because it’s the first name that Google ever gave to one of the parts of its ranking algorithm, way back at the time the search engine began in 1998.