@goodroi thanks for clarifying.
I agree technology is evolving and one needs to keep up with it; like it or not, evolution waits for no one.
I have been saying for some time now that keyword rankings are meaningless, as there is no longer a one-to-one relationship between a keyword and a search result, and there is no way to know what the relationship is. To this point, there was a post on SERoundtable yesterday from Bing saying exactly that, whereas Google's position was a little more nuanced:
[seroundtable.com...]
The inlinks.net service is compelling. But there are two points that worry me.
- First, if one uses Google's NLP solution as a guide: when I take text from my website and paste it in, the relevancy score and the entities it discovers appear much lower than expected. But if I then look at the actual traffic, GSC keywords, and landing pages, it is absolutely clear that Google has no issue understanding the nature of my website (contrary to what the NLP tool suggests). There is no doubt that Google uses NLP, but exactly how, and in combination with what, is not clear. For a service such as the one offered by inlinks.net to be truly effective, it must have some means of determining this.
- Second, if one starts to "design" or write content to match the assumptions of a statistical algorithm (like NLP), and that algorithm is in turn trained on the content, then we will essentially cause the algorithm to overfit itself, producing garbage results and garbage content. But that state may still be a long way off.
To bring the two points together: inlinks.net states that it uses search results as a guide to determine what successful content looks like. Essentially, its algorithm appears designed to make your content look more like your competitors' content, and thus at some point all the content will simply be a copy of itself. Basically, it is a filter that produces more of the same. The service does the exact opposite of evolution; it causes stagnation.
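The homogenization effect is easy to sketch with a toy simulation (the "content score" numbers, the number of sites, and the optimization rate here are all hypothetical stand-ins, not anything from a real tool): if every site keeps nudging its content toward the average of what currently ranks, the spread between sites collapses toward zero.

```python
import random
import statistics

random.seed(1)

# Ten hypothetical sites, each with a 1-D "content score"
# (a crude stand-in for a content profile or embedding).
sites = [random.uniform(0.0, 10.0) for _ in range(10)]

def optimize_round(contents, rate=0.3):
    """Each site moves part of the way toward the average of the
    current results -- the 'write like the winners' strategy."""
    target = statistics.fmean(contents)
    return [c + rate * (target - c) for c in contents]

print(f"spread before optimizing: {max(sites) - min(sites):.4f}")
for _ in range(30):
    sites = optimize_round(sites)
print(f"spread after 30 rounds:   {max(sites) - min(sites):.4f}")
```

After a few dozen rounds the sites are statistically indistinguishable: every page has been optimized into a copy of the average page.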
This is a problem with machine learning in general. ML works great as long as it operates at a level where it does not impact the system it is interpreting. But once it reaches a scale where it can measurably impact the system, and the results of the algorithm are fed back into it, the system will implode.
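That feedback loop can be demonstrated with a minimal toy model (purely illustrative, not any real ranking system): fit a simple statistical model to a corpus, regenerate the corpus from the fitted model, and repeat. Because each fit is made from a finite sample, a little diversity is lost every generation, and the distribution steadily collapses.

```python
import random
import statistics

random.seed(0)

def retrain(samples):
    """Fit a Gaussian to the samples (the 'model'), then replace
    the corpus with samples drawn from that fitted model."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

# Generation 0: a diverse "corpus" with standard deviation 1.0.
corpus = [random.gauss(0.0, 1.0) for _ in range(100)]
print(f"generation    0 stddev: {statistics.pstdev(corpus):.4f}")

# Feed the model's own output back in as training data, repeatedly.
for _ in range(1000):
    corpus = retrain(corpus)
print(f"generation 1000 stddev: {statistics.pstdev(corpus):.4f}")
```

The standard deviation decays toward zero: the system converges on ever-narrower output until the "content" carries almost no information, which is the implosion described above.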