Two early studies of Google's February 7, 2017 algorithm update both suggest this was principally a "Phantom" quality update. As discussed in our Feb Google Updates thread [webmasterworld.com...] ...and also noted in both articles... the update was major, accompanied by wide ranking swings. As both studies note, Phantom refreshes cyclically, like the old Panda.
The articles are by Glenn Gabe for his company G-Squared Interactive, and by Daniel Furch for Searchmetrics. We've discussed Phantom studies by both companies in this forum previously. Here's a link that references several of those threads...
Nov 2015 Phantom updates: user-engagement factors + Panda (Dec 2015) https://www.webmasterworld.com/google/4780955.htm [webmasterworld.com]
Here are the two recent articles...
The February 7, 2017 Google Algorithm Update – Analysis and Findings From A Significant Core Ranking Update, February 15, 2017, by Glenn Gabe [gsqi.com...]
Google Phantom V Update: There and Back Again?!, February 15, 2017, by Daniel Furch [blog.searchmetrics.com...]
Both describe Phantom as an algo that's Panda-like but not quite the same. It currently appears to update on roughly six-month cycles and, like Panda, requires a data refresh before a site can recover. Both involve what can broadly be described as site "quality"... i.e., usability and user experience, satisfying the intent of the user and the query.
Gabe refines this more specifically, I feel, than he has in previous studies...
Basically, don’t just look at content quality. There’s more to it. Understand the barriers you are presenting to users and address those. For example, aggressive ad placement, deception for monetization purposes, broken UX elements, autoplay video and audio, aggressive popups and interstitials, and more.
He emphatically points to the relationship between these updates and the Quality Rater Guidelines. In particular, he cautions against "aggressive monetization", and references section 6.3.3 in the Page Quality Rating section.
I'm thinking that while Google might have a hard time measuring "user experience" as an algo factor, "Needs Met", which involves user intent, might be a more usable metric for them.
Both articles mention "relevancy", with Searchmetrics discussing "brand names and short head keywords" as problematic. Because of this algo's long update cycle, pages in the gray area are susceptible to a roller-coaster ride for certain keywords. I can see how this would relate to Needs Met, which raises the bar considerably for ranking on broad generic keywords. Both articles, and the Quality Rater Guidelines, are worth some study. See also, in this forum...
Google Quality Rater Guidelines Update, March 28 (April 2016, et seq.) https://www.webmasterworld.com/google/4799052.htm [webmasterworld.com]