Forum Moderators: Robert Charlton & goodroi


Feb 7, 2017 Google Algorithm Update - studies suggest it was Phantom

         

Robert Charlton

11:16 pm on Feb 20, 2017 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Two early studies of Google's February 7, 2017 algorithm update both suggest this was principally a "Phantom" quality update. As discussed in our Feb Google Updates thread [webmasterworld.com...] ...and also noted in both articles... the update was major, accompanied by wide ranking swings. As both studies note, Phantom refreshes cyclically, like the old Panda.

The articles are by Glenn Gabe for his company G-Squared Interactive, and by Daniel Furch for Searchmetrics. We've discussed Phantom studies by both companies in this forum previously. Here's a link that references several of the threads...

Nov 2015 Phantom updates: user-engagement factors + Panda
Dec 2015
https://www.webmasterworld.com/google/4780955.htm [webmasterworld.com]

Here are the two recent articles...

The February 7, 2017 Google Algorithm Update – Analysis and Findings From A Significant Core Ranking Update
February 15, 2017 - by Glenn Gabe
[gsqi.com...]

Google Phantom V Update: There and Back Again?!
February 15th, 2017 - Daniel Furch
[blog.searchmetrics.com...]

Both describe Phantom as an algo that's Panda-like but not quite the same. It currently appears to update on roughly six-month cycles and, like Panda, to require a data refresh before a site can recover. Both involve what can be broadly described as site "quality"... i.e., usability and user experience, satisfying the intent of the user and the query.

Gabe refines this more specifically, I feel, than he has in previous studies...
Basically, don’t just look at content quality. There’s more to it. Understand the barriers you are presenting to users and address those. For example, aggressive ad placement, deception for monetization purposes, broken UX elements, autoplay video and audio, aggressive popups and interstitials, and more.

He emphatically points to the relationship of these updates to the Raters Guidelines. In particular, he cautions against "aggressive monetization", and references section 6.3.3 in the Page Quality Ratings section.

I'm thinking that while Google might have a hard time measuring "user experience" as an algo factor, "Needs Met", involving user intent, might be a more usable metric for them.

Both articles mention "relevancy", with Searchmetrics discussing "brand names and short head keywords" as problematic. Because of the long-cycle nature of this algo, pages in the gray area are susceptible to a roller-coaster ride for certain keywords. I can see how this would relate to Needs Met, which raises the bar considerably for ranking on broad generic keywords. Both articles, and the Quality Rater Guidelines, are worth some study. See also, in this forum...

Google Quality Rater Guidelines Update March 28
April 2016 etseq
https://www.webmasterworld.com/google/4799052.htm [webmasterworld.com]

30K_a_month

3:20 pm on Jul 24, 2017 (gmt 0)

5+ Year Member Top Contributors Of The Month



This is no phantom. I got fked by this update, no others, just this one, and I am still stuffed. Phantom, my ass!

martinibuster

4:16 pm on Jul 24, 2017 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



The so-called studies have consistently been wrong. They repeatedly avoided contradictory cases in order to hammer the idea that this was about "quality", even in the face of contradictory feedback from large, well-known brands.

Additionally, Google has consistently stated there was no update focused on spam. Google has been upfront that they continually tweak their core algorithm, but that these changes are not updates in the sense of Panda, Penguin, and Hummingbird.

Every single bump has been a core algorithm update, but none of them was an update focused on "catching" low-quality sites. In fact, Mueller has even been quoted as saying there is no specific algorithmic process to catch those so-called quality issues.

All of the theories surrounding Phantom were false.

Most frustrating for those interested in what really happened is when statements are taken out of context. All it takes is for a generic word like "quality" to be uttered and it is held up as confirmation that the update was about "site quality", when the term was only used in the most general way.

Here is an example of a statement from Mueller that is twisted to mean something it was never meant to mean:

Essentially, if you are following the Google guidelines and you are doing things technically right, then that sounds to me like there might just be just quality issues with regards to your site. Things that you can improve overall when it comes to the quality of the site.

Which means there is no simple kind of answer, like there is no meta tag that would make your web site higher quality. It is just in general, you probably need to take a step back, get random people who are interested in that topic to review your site compared to other sites to kind of go through a survey to see what you can be doing better to improve the quality of your site overall. And ideally, don’t just like tweak things to kind of subtly improve the quality to see if you can get a little higher. Really take a stab at it and try to figure out what you can do to make a significant dent in the quality of the site overall.


Mueller states that there is no "simple kind of answer" as to why the site under discussion lost rankings. Yet the SEO industry rips the word "quality" out of context and uses it to keep justifying a false Phantom Update theory. The word "quality" was used in a broad and general manner. To interpret that statement in any other way is to change its meaning.

[edited by: martinibuster at 4:44 pm (utc) on Jul 24, 2017]

Jori

4:23 pm on Jul 24, 2017 (gmt 0)

10+ Year Member Top Contributors Of The Month



Still searching for common points between us. Just coincidence? A random penalty, just for testing purposes? Machine (wrong) learning?

martinibuster

4:46 pm on Jul 24, 2017 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Still searching for common points between us.


Own-baby blindness. Everyone's baby looks like a winner to its parents. It can be difficult for a site owner or the in-house staff to diagnose their own site.

GiovanniB

6:21 am on Jul 25, 2017 (gmt 0)

5+ Year Member



My two cents: most of the websites I've seen affected after 7 February have some sort of blog or magazine with a lot of poor content published at short intervals, articles under 300 words, three to four pieces every week.
We managed to fix one by expanding most of the content, but one site doesn't make a case.
So, Phantom or not, I'll try with another website and let you know.
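A minimal sketch of the kind of thin-content triage described above, assuming a flat list of article URLs; the URLs and the 300-word threshold are hypothetical, and a real audit would pull URLs from the site's sitemap and tune the threshold to its own content:

```python
# Flag articles whose visible text falls under a word-count threshold.
# Hypothetical URLs and threshold; adjust both for a real audit.
import requests
from bs4 import BeautifulSoup

ARTICLE_URLS = [
    "https://example.com/blog/post-1",
    "https://example.com/blog/post-2",
]
WORD_THRESHOLD = 300

def word_count(url: str) -> int:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop script/style blocks so only rendered text is counted.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return len(soup.get_text(separator=" ").split())

for url in ARTICLE_URLS:
    count = word_count(url)
    if count < WORD_THRESHOLD:
        print(f"THIN ({count} words): {url}")
```

Pages flagged this way are candidates for expansion, consolidation, or removal, which is essentially the choice the posters in this thread describe making.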

Jori

8:38 am on Jul 25, 2017 (gmt 0)

10+ Year Member Top Contributors Of The Month



@GiovanniB : my site is one article / month, always above 1000 words, online since 2008. Updated content, and real quality for the user. No ads everywhere.

85% less traffic.

But, it's true, I had 3 "polemic" pages, where everybody is trying to get their slice of the pie. For me, they were just informational pages about that specific aspect of my main subject, but maybe, just maybe, it's toxic content. A bit as if I had #*$! on a children's website.

If the penalty is algorithmic, it makes sense to me. Anyway, I reduced the number of those toxic pages and reduced the internal links with exact-match anchors. Let's wait and see...

Shaddows

9:29 am on Jul 25, 2017 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



85% less traffic...
If the penalty is algorithmic

Traffic loss != Penalty

It is really important to understand that Google doesn't care about you or your sites. That statement swings both ways: they neither hate nor love you, and are neither for you nor against you.

Google makes lots of changes all the time. Sometimes a small pebble makes big ripples: Google doesn't announce a change, but one is observed by the SEO community. There may not be a commonality among the affected sites, because the small change was not aimed at any particular outcome. It was just a minor, probably AI-led, change to assumptions, methodology, or bias.

Google wants to match a particular person, in a particular "state" or "mood", with a particular set of sites, of which one or more satisfies their intent. Now, Google will show a selection of sites to hedge their bets as to intent.

If Google makes a minor tweak to their assumptions about User Intent, or introduces a minor statistical bias TOWARDS a certain type of site, or both, then a lot of traffic will be reallocated via the resulting change in SERPs. The ranking-monitor tools will light up. SEOs will sniff an update. But nothing much will have changed, and certainly nothing that can be deduced from analysing only negatively affected sites.

User Intent and Personalisation are the big challenges to producing actionable information, and the SEO sites are broadly silent on them, preferring to attribute everything to quality.

rdbseo

1:12 pm on Jul 25, 2017 (gmt 0)

5+ Year Member



For us it was 100% a content issue. I'm an in-house SEO, and we took a step back and evaluated all our content. We've since cleaned it all up (republished, deleted, and noindexed pages) and returned better than ever.

You can read more about our strategy in my answers in this thread - [webmasterworld.com...]
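For the "noindexed" part of that kind of cleanup, the standard mechanism is the robots meta tag (or an equivalent X-Robots-Tag HTTP header). Here is a minimal sketch, assuming static HTML files under a hypothetical directory; the paths are illustrative, and on a CMS you would use its built-in noindex setting instead:

```python
# Insert <meta name="robots" content="noindex"> into the <head> of flagged pages.
# Directory and file list are hypothetical examples.
from pathlib import Path
from bs4 import BeautifulSoup

THIN_PAGES = ["blog/post-1.html", "blog/post-2.html"]
SITE_ROOT = Path("public_html")

for rel_path in THIN_PAGES:
    path = SITE_ROOT / rel_path
    soup = BeautifulSoup(path.read_text(encoding="utf-8"), "html.parser")
    # Skip pages that already carry a robots directive.
    if soup.head and not soup.head.find("meta", attrs={"name": "robots"}):
        tag = soup.new_tag("meta", attrs={"name": "robots", "content": "noindex"})
        soup.head.insert(0, tag)
        path.write_text(str(soup), encoding="utf-8")
```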

Shaddows

1:47 pm on Jul 25, 2017 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



For us it was 100% a content issue.
The report in the other thread describes exactly what John Mueller recommended in MB's quote above:
And ideally, don’t just like tweak things to kind of subtly improve the quality to see if you can get a little higher. Really take a stab at it and try to figure out what you can do to make a significant dent in the quality of the site overall.

Totally redefining your content on its own terms is a different order of change than fiddling with content to escape a perceived "false positive" based on a "penalty" resulting from a "quality update".

Scare quotes denote false assumptions.
This 39-message thread spans 2 pages.