This trajectory of Google hoarding rather than referring traffic, of transitioning from a search engine to an answer machine, is not new. Some of us here on WebmasterWorld have been speaking to this topic for over a decade. And a few of us have been building solutions.
Of course, just as no two sites or business models are the same, so too there is no cookie-cutter, one-size-fits-all solution. However, for a perceptive person... perhaps a way that fits them may be found.
First, before I continue, let me get the mandatory overarching disclaimer out of the way: there is more to search traffic than Google, search traffic is typically the worst-converting traffic, and Google is usually the worst of that. There, said it. Back to the question at hand.
From NickMNS’s linked post, the bit that resonated with me was:
There is no longer a business case for creating linear or static informational websites, where one simply provides content, e.g. like a book. Instead, information will need to be provided to the user in a more interactive or dynamic way, for example as visualizations or customized content. This will make it impossible for Google to use the information to feed its systems and will make the website content more appealing to the user.
And from the above OP:
So then the question is, will limiting the information provided to Google limit its ability to rank and understand the nature of the website?
Just as Google has been down the personalisation rabbit hole, so too have some webdevs. For an insight into my journey see, from May 2017,
Analytics is THE engine of change [webmasterworld.com], and
especially the additional links in my second post in that thread. The critical light bulb moment for me is summed up in the following:
Instead of thinking structure/semantics -> content -> presentation/behaviour for each target, also remove structure/semantics from the 'page' such that a given URL is totally amorphous. Think instead: context -> content -> structure/semantics -> presentation/behaviour.
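To make that reordering concrete before getting to the practice, here is a minimal TypeScript sketch; the names and shapes are my illustration, not code from that thread:

```typescript
// Hypothetical sketch of "context first": the URL is amorphous, and the
// request context is resolved before any content or structure is chosen.

interface VisitorContext {
  isBot: boolean;           // search-engine crawler?
  isReturn: boolean;        // identified return visitor (cookie, login, ...)
  referrer: string | null;  // select referrers may earn variant content
}

interface ContentChunk {
  kind: "text" | "slideshow" | "video" | "interview";
  body: string;
}

// Step 1: context -> content. Which chunks are chosen depends entirely
// on who is asking, not on the URL alone.
function resolveContent(ctx: VisitorContext): ContentChunk[] {
  const base: ContentChunk[] = [{ kind: "text", body: "Default article text" }];
  if (ctx.isBot) return base; // bots get the default only
  if (ctx.isReturn || ctx.referrer === "partner.example") {
    base.push({ kind: "slideshow", body: "Extra rich content" });
  }
  return base;
}

// Step 2: content -> structure/semantics (wrap chunks in semantic HTML).
function applyStructure(chunks: ContentChunk[]): string {
  return chunks
    .map((c) => `<section data-kind="${c.kind}">${c.body}</section>`)
    .join("\n");
}

// Step 3: structure -> presentation/behaviour (theme, scripts, etc.).
function applyPresentation(html: string): string {
  return `<main class="themed">${html}</main>`;
}

function render(ctx: VisitorContext): string {
  return applyPresentation(applyStructure(resolveContent(ctx)));
}
```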
What this means, in practice, is that for each page (URL) there is default content, typically text with graphics/images, that is served to first-time visitors and SE bots. Basically much like what everyone has published or is publishing, except of much higher quality/value, of course. :)
However, for identified return visitors, or at any time for visitors arriving from select referrers, contextually different content may be shown.
Further, for all human traffic past the landing page, additional content or variations on content, e.g. slideshows, videos, interviews, may be added to a page.
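At the HTTP layer, the serving decision might look something like the following Express sketch. To be clear: the bot user-agent pattern, cookie name, and referrer list are placeholders I've made up for illustration, not my actual setup.

```typescript
import express from "express";

const app = express();

// Placeholder classifiers; a real setup would be far more careful.
const BOT_UA = /googlebot|bingbot|duckduckbot/i;
const SELECT_REFERRERS = ["news.example", "forum.example"];

app.get("/article/:slug", (req, res) => {
  const ua = req.get("user-agent") ?? "";
  const referrer = req.get("referer") ?? "";
  const returning = (req.headers.cookie ?? "").includes("visited=1");

  // SE bots get the default, fully indexable page.
  if (BOT_UA.test(ua)) {
    res.send(renderDefault(req.params.slug));
    return;
  }

  // Mark human visitors so later requests can be personalised.
  res.setHeader("Set-Cookie", "visited=1; Path=/; Max-Age=31536000");

  if (returning || SELECT_REFERRERS.some((r) => referrer.includes(r))) {
    res.send(renderPersonalised(req.params.slug));
  } else {
    res.send(renderDefault(req.params.slug)); // first-timers: default too
  }
});

// Stubs standing in for the real content/templating layer.
function renderDefault(slug: string): string {
  return `<html><body><h1>${slug}</h1><p>Default text + images.</p></body></html>`;
}
function renderPersonalised(slug: string): string {
  return renderDefault(slug) + "<!-- plus slideshows, videos, interviews -->";
}

app.listen(3000);
```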
SE bots get quality content that allows full indexing of those pages I want to share with them. Human visitors get at least that, often presented/combined in a personalised context, and frequently with additional rich content.
SE bots get fed the bog-standard sitemap; human visitors get to mix and match, go down other routes and back and over and all about the content in a highly variable way. It’s all about how content is chunked, associated, and linked. And about stomping on SE bots’ toes as appropriate.
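As a rough sketch of that chunk-and-link idea (again, invented names, not my real code): bots see a fixed, curated sitemap, while humans get cross-links computed per visit:

```typescript
// Only the pages I choose to share with SE bots go in the sitemap.
const SHARED_WITH_BOTS = ["/article/alpha", "/article/beta"];

function sitemapXml(): string {
  const urls = SHARED_WITH_BOTS
    .map((u) => `<url><loc>https://example.com${u}</loc></url>`)
    .join("");
  return `<?xml version="1.0" encoding="UTF-8"?><urlset>${urls}</urlset>`;
}

// Humans get variable navigation: e.g. suggest chunks this visitor
// hasn't been down yet, so no two journeys need look alike.
function relatedLinks(currentPath: string, alreadySeen: string[]): string[] {
  return SHARED_WITH_BOTS.filter(
    (u) => u !== currentPath && !alreadySeen.includes(u)
  );
}
```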
And it works. So far. Google-search-referred traffic has, with a couple of short exceptions, been up YoY every month for two decades.
Google loves me, yes they do. Although they do know that much is blocked from their crawl and disallowed from their public index, and they do complain now and again. :)
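For the curious, the standard mechanics behind that are robots.txt Disallow (blocking the crawl) and the X-Robots-Tag: noindex header (keeping fetched pages out of the public index). A minimal Express sketch, with placeholder paths:

```typescript
import express from "express";

const app = express();

// Crawl control: tell well-behaved bots not to fetch these areas at all.
app.get("/robots.txt", (_req, res) => {
  res
    .type("text/plain")
    .send(["User-agent: *", "Disallow: /members/", "Disallow: /variants/"].join("\n"));
});

// Index control: even if a page is fetched, keep it out of the public index.
app.use("/members", (_req, res, next) => {
  res.setHeader("X-Robots-Tag", "noindex, nofollow");
  next();
});

app.listen(3000);
```

One subtlety worth noting: a noindex directive is only seen if the page can be crawled, so in practice you pick crawl-blocking or index-blocking per area rather than relying on both for the same URL.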
Just a mo while I knock my wooden head...