Kameron Jenkins over at Moz has a decent article on the subject of 'content comprehensiveness', which I won't link to (you should be able to find it easily enough :)) as it is a marketing wrapper for a Moz tool offering. And I'm on record a good many times saying that most/all SEO-centric tools are 5 to 10 years past their best-use date. And no, I'm not making this unlinked mention in order to talk about comprehensive content, of which I happen to be a practitioner.
Instead I'm going to introduce (as I can't remember it being discussed on WebmasterWorld before) graphing entities: a process that, while still not really available as an inexpensive off-the-shelf SME tool set, began replacing ye olde keyword tools about a decade ago, certainly upon Google's acquisition of MetaWeb, and was in full flower by 2015 among those webdevs with the interest and skills.
I'm sure you must have read about Google's use of named entities, and should have heard about Facebook's and Twitter's use of social graphs, etc. Six of one, half a dozen of the other. Put really simply, one connects entities through shared attributes, and the graph can be bounded as tightly or as loosely as desired. Far more comprehensive and far richer than any keyword tool. And much more in keeping with the current state of search parameters. And with best-in-class content.
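The core idea above is simple enough to sketch in a few lines. The entities and attributes below are invented examples; the only real technique shown is linking entities by shared attributes, with a threshold to bound the graph tighter or looser:

```python
# Minimal sketch: connect entities through shared attributes.
# All entity names and attributes here are invented for illustration.

from itertools import combinations

# Each entity is described by a set of attributes.
entities = {
    "espresso":   {"coffee", "hot", "caffeinated", "italian"},
    "cappuccino": {"coffee", "hot", "caffeinated", "milk", "italian"},
    "cold brew":  {"coffee", "cold", "caffeinated"},
    "gelato":     {"cold", "dessert", "italian"},
}

def connect(entities, min_shared=1):
    """Link every pair of entities sharing at least min_shared attributes.

    Raising min_shared bounds the graph tighter; lowering it, looser.
    """
    edges = {}
    for a, b in combinations(sorted(entities), 2):
        shared = entities[a] & entities[b]
        if len(shared) >= min_shared:
            edges[(a, b)] = shared
    return edges

loose = connect(entities, min_shared=1)  # any shared attribute links
tight = connect(entities, min_shared=3)  # only strongly related entities
```

Notice what even this toy version surfaces that a keyword tool never would: "gelato" and "espresso" connect (shared Italian-ness) at a loose bound and drop away at a tight one. That tunable bound is the whole game.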
Of course things change, and graph databases have become multidimensional: most commonly along geo-spatial (location), temporal (when, or between whens), and (social) network (relationship strength) lines, although I've seen as many as eleven dimensions. Increasingly there is a combining of structured, semi-structured, and unstructured data, although too many results seem to be more data quagmire than data functional!
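What "multidimensional" means in practice: each relationship carries extra dimensions, and queries can bound along any combination of them. A hedged sketch (all the names, places, and numbers are invented; a real graph database does this with indexes, not list scans):

```python
# Sketch of multidimensional graph edges: each relationship carries
# geo-spatial, temporal, and network-strength dimensions, and a query
# can bound along any of them. All data below is invented.

from dataclasses import dataclass

@dataclass
class Edge:
    src: str
    dst: str
    place: str       # geo-spatial dimension
    year: int        # temporal dimension
    strength: float  # (social) network dimension, 0.0 to 1.0

edges = [
    Edge("alice", "bob",   "london", 2019, 0.9),
    Edge("alice", "carol", "paris",  2012, 0.4),
    Edge("bob",   "carol", "london", 2021, 0.7),
]

def query(edges, place=None, since=None, min_strength=0.0):
    """Filter relationships along any combination of dimensions."""
    return [
        e for e in edges
        if (place is None or e.place == place)
        and (since is None or e.year >= since)
        and e.strength >= min_strength
    ]

# Strong, recent, London-based relationships only:
recent_london = query(edges, place="london", since=2015, min_strength=0.6)
```

Each added dimension is just another axis to bound on; the quagmire sets in when the dimensions are combined without a clear question in mind.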
In practice a semantic metadata web is coming into its own, something to which many/most webdevs are oblivious. Some of you may be familiar with RDF (Resource Description Framework), a progenitor of what has been developing. Most of what is happening, outside academic circles, is at the enterprise level; another indication that the web is tilting toward the behemoths with the resources.
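For those who haven't met RDF: it models everything as subject-predicate-object triples, and queries are pattern matches over them. A bare-bones in-memory sketch (the triples are invented examples; real RDF work would use a proper store and SPARQL, not this):

```python
# RDF in miniature: facts as (subject, predicate, object) triples,
# queried by pattern matching where None acts as a wildcard.
# The triples themselves are invented examples.

triples = {
    ("Rome",  "isCapitalOf",  "Italy"),
    ("Rome",  "hasPopulation", "2.8M"),
    ("Paris", "isCapitalOf",  "France"),
}

def match(triples, s=None, p=None, o=None):
    """Return triples matching the pattern; None matches anything."""
    return {
        (ts, tp, to) for ts, tp, to in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    }

capitals = match(triples, p="isCapitalOf")  # every capital fact
about_rome = match(triples, s="Rome")       # everything known about Rome
```

That's the progenitor idea in a nutshell: once facts are triples, entity graphs fall out of them for free.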
Beyond going past keywords to entities, multidimensional semantic graph databases enable content comprehensiveness at a realtime contextual level. Comprehensive. Contextual. Realtime. What I've seen slowly coming onto the web in the last few years is steadily gaining momentum, albeit what is casually obvious is a serene swan whose madly paddling legs go unseen beneath the surface.
There are resources that most webdevs, if they even consider them, investigate manually, such as the hundreds of open datasets around the world that can be integrated with one's graph database (I use a map/reduce implementation, the MapReduce-MPI Library; others prefer Hadoop) to produce truly outstanding timely-plus-backgrounder content. No, it's not simple, and it's not easy; the web is a quarter century old and increasingly complex. Enterprise was slow to get on board, but the past decade has shifted the web dramatically as they assert dominance by virtue of size and investment. Get over it. Be nimble, be quick, and be aware.
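For anyone who's only heard the buzzword: the map/reduce pattern itself is small. A plain-Python sketch of its shape (this is not the MapReduce-MPI Library or Hadoop, just the pattern they distribute across machines; the "dataset" records are invented stand-ins for open-data rows):

```python
# The map/reduce pattern in plain Python: map each raw record to
# (key, value) pairs, then group by key and combine. Real frameworks
# distribute these two phases across many machines.

from collections import defaultdict

# Invented stand-ins for open-dataset records:
records = [
    "museum london free",
    "museum paris ticketed",
    "gallery london free",
]

def map_phase(record):
    """Emit (key, value) pairs from one raw record."""
    for word in record.split():
        yield (word, 1)

def reduce_phase(mapped):
    """Group values by key and combine them."""
    grouped = defaultdict(int)
    for key, value in mapped:
        grouped[key] += value
    return dict(grouped)

# Shuffle all mapped pairs together, then reduce:
counts = reduce_phase(pair for rec in records for pair in map_phase(rec))
```

Swap the toy word-count for entity extraction over a few hundred open datasets and you have the backbone of the integration I'm describing; the pattern doesn't change, only the scale does.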
You think it is harder for smaller webdevs now than last year? Than a decade ago? You are quite correct. And it isn't going to get any easier or simpler; rather the converse. Can one still compete? Yes, no, maybe. It depends. How much are you willing and able to adapt to the changing environment? How capable are you of understanding what is changing and where that change leads? Are you ahead of the curve? Behind? In the mob? Which one?
Note: I prefer to be outside the box. iamlost, after all.