A Google 'speed' timeline for those interested:
* 2009: Google releases 'PageSpeed' to compete with (and complement) Yahoo's YSlow.
* 2010: Google announces speed as a new ranking signal albeit of lesser weight than relevance and initially affecting less than 1% of queries.
Note: desktop-focussed.
* 2014: Google announces it is now actually rendering pages to replicate the real user experience, and recommends that CSS and JS be unblocked to aid ranking.
Note: for many sites this adds considerable page 'weight' to what Google receives.
* 2015: Google affixes 'mobile friendly' label to pages (not sites) meeting specified criteria in mobile query returns. Intent/relevance still stated to be stronger signals than being 'mobile friendly'.
Note: 'mobile friendly' (initially at least) being a very low bar.
* 2015: Google tests a white on red 'Slow' label and a black on yellow triangle '!' followed by light grey 'Slow to load' text. No information provided as to threshold(s) involved.
* 2015: Google launches AMP (Accelerated Mobile Pages).
* 2016: Google announces 'Mobile-first Indexing' as an ongoing experiment.
* 2017: Google confirms 'Mobile-first Indexing' is being rolled out.
* 2018: Google announces PageSpeed Insights now includes data from the Chrome User Experience Report.
* 2018: Google announces forthcoming 'Speed Update' to start July 2018.
In the penultimate announcement/point above,
'Real-world data in PageSpeed Insights'
[developers.googleblog.com] by Mushan Yang and Xiangyu Luo, Google Developers, 09-January-2018, there are a few interesting tidbits to savour:
The PSI report now has several different elements:
* The Speed score categorizes a page as being Fast, Average, or Slow. This is determined by looking at the median value of two metrics: First Contentful Paint (FCP) and DOM Content Loaded (DCL). If both metrics are in the top one-third of their category, the page is considered fast.
* The Optimization score categorizes a page as being Good, Medium, or Low by estimating its performance headroom. The calculation assumes that a developer wants to keep the same appearance and functionality of the page.
* The Page Load Distributions section presents how this page's FCP and DCL events are distributed in the data set. These events are categorized as Fast (top third), Average (middle third), and Slow (bottom third) by comparing to all events in the Chrome User Experience Report.
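The Speed score rule above can be sketched in a few lines. To be clear, this is my own illustrative reconstruction, not Google's code: the announcement only spells out the 'both metrics in the top third means Fast' case, so the Slow/Average handling and every cutoff value below are assumptions.

```python
def classify(value_ms, fast_cutoff_ms, slow_cutoff_ms):
    """Place one metric's median into a third of the CrUX distribution.

    fast_cutoff_ms / slow_cutoff_ms stand in for the (unpublished)
    boundaries between the top/middle and middle/bottom thirds.
    """
    if value_ms <= fast_cutoff_ms:
        return "Fast"      # top third
    if value_ms <= slow_cutoff_ms:
        return "Average"   # middle third
    return "Slow"          # bottom third


def speed_score(fcp_ms, dcl_ms, cutoffs):
    """Combine FCP and DCL. Per the announcement, a page is Fast only
    when BOTH metrics land in the top third; treating 'either metric in
    the bottom third' as Slow is my assumption for the remaining cases.
    """
    fcp = classify(fcp_ms, *cutoffs["fcp"])
    dcl = classify(dcl_ms, *cutoffs["dcl"])
    if fcp == "Fast" and dcl == "Fast":
        return "Fast"
    if fcp == "Slow" or dcl == "Slow":
        return "Slow"
    return "Average"


# Purely illustrative cutoffs -- Google publishes no thresholds.
cutoffs = {"fcp": (1000, 2500), "dcl": (1500, 3500)}
print(speed_score(900, 1400, cutoffs))   # both top third -> Fast
print(speed_score(900, 4000, cutoffs))   # one bottom third -> Slow
```

The interesting design point is that the thresholds are relative (thirds of the observed CrUX distribution), not absolute millisecond targets, so a page's label can shift as the wider web speeds up or slows down.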
My takeaways:
* Google is waiting on real world (Chrome UX Report) input from each site before categorising a page's speed.
---this appears to be a page level rather than site level ranking signal.
* my initial thought was that Google might dampen the 'Slow' aka bottom third sites, however that seems at odds with the update affecting only a small percentage of queries. I also note that the intent of the search query is still a very strong signal, so a slow page may still rank highly if it has great, relevant content.
This leads me to wonder if, once again, we are seeing a Google private/public data differential, perhaps similar to TBPR (Toolbar PageRank), which was much less granular than the actual internal values. I suspect this is likely, and one reason is that it allows Google to warn the bottom 'Slow' third without giving away any actual threshold. They could, for instance, 'hit' one or two percent as an incentive, and if the response is not to their liking, hit 5% or 10% or whatever until they get their point across.
And they get to hold 'intent/relevance' as a trump card for slow sites they can't afford to exclude...
Hosting quality, ISP connection/service quality, and the geolocation of server and client may suddenly become matters of greater concern. Among other things.