Forum Moderators: martinibuster
It seemed simple to me at first...something I didn't question. Then I began to think of how ludicrous that might be.
How many genres of sites are there? Billions? Or just the main ones you'd see in an online directory? If a humorous blog links to a news article, is Google like "hmmm...that blog is hilarious, but this news story is about as interesting as watching a history lecture by Ben Stein....LINK DISCOUNTED!"
Another reason I think it might be bologna is that even the best automated sentiment analysis software is nowhere close to being able to determine (with any great degree of certainty) if somebody is saying something positive or negative about something. Language is just too confusing, and I would assume it is the same deal with websites and Google. All Google has to go off of is the saturation of certain keywords on a site...something that is not necessarily indicative of a site's content.
So my question is....does GOOGLE actually know or care if a link is placed on a relevant website? OR, is the main reason to place a link on a relevant website meant to pass the human eye test?
It's easier to digest if you think about the AdSense technology and how they can match ads to content on the fly, as well as calculate estimated clicks etc.
Does Google deprecate links? I believe they do, based on my experience.
Here's the discussion from 2003 [webmasterworld.com] where I reported what I had heard.
[edited by: martinibuster at 9:08 pm (utc) on Dec. 10, 2009]
If you consistently aim at that, the overall signals of relevance that develop within your link profile will reinforce your own theme with pretty good clarity.
Not every link needs to be perfectly, obviously relevant, as long as your overall link profile is sending enough signals that help the spiders decide what your page is about.
Still, I disagree that they can't tell. You'd think they can't tell a lot of things, until you see a visual link graph: non-intuitive stuff becomes very, very clear. And it doesn't have to be about genres. If your site is about SEO and the word "SEO" appears on the pages that link to you, that doesn't seem to be very difficult to decipher. Going from that obvious step to determining whether a page has information 'related' to SEO doesn't seem like a stretch.
Still, I think the jury's out. I wouldn't call it complete BS. And even if it is, remember the hand review.
After that, a whack of totally off-topic links from blogrolls will get you what you want and keep you there.
Realistically hand reviews are the antidote here, but it would seem that the justice never gets handed down.
I would love to setup a controlled experiment around this and publish the results so that Google could take a serious look at it, but much of WebmasterWorld probably already knows what the results would be.
Google, in my opinion, has a very difficult time knowing how to apply value from links, determine relevance, and apply devaluation. Often good sites get hit incorrectly.
Still, I am a big believer in relevant-only links, and hopefully one day Google will be too :)
Trophy terms stick out like canines proverbials and are sure to be flagged by all and sundry. So stick to relevant links and the rest in moderation so that you don't provoke a reviewer to penalise you.
You may not pass the editorial hand checks if they come along.
And, conversely, you might pass an editorial check, and likely seal in your tenure in the top positions.
Which is, unfortunately, why we see so much of it happening in Google. Guys are willing to take the bet that they won't be dropped. Not a chance I would like to take!
It seems to me that everybody agrees that you still should stick to "relevant" sites, but the jury is still out as to exactly how and if Google will penalize links that aren't.
Martinibuster: I like the AdSense theory. However, I regularly get AdSense links that are off-the-wall, which makes sense (Google has billions of these things running). If anything, I think they sometimes show Google's inability to make an exact determination of page content.
Wheel: What sort of link graph are you talking about? Just curious about these.
CainIV: I agree with you, a test would be great. It is also a shame that good sites are getting hit incorrectly.
I guess I should mention why I asked this initially. One of the basic tactics I use to linkbuild is searching the web for instances of my keywords appearing in-content.
Sometimes my keywords will show up in a random high PR blog post that is not necessarily in my site's vertical. But of course I want the link! I can't imagine that Google would have the time, resources, or reliable enough algorithms to discount a link like that with any certainty.
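The prospecting tactic described above can be sketched in a few lines. This is just an illustration of the idea (scan candidate pages for in-content keyword mentions); the URLs and page texts are made up, and a real version would fetch live pages and strip navigation/boilerplate first.

```python
import re

# Sketch of the link-prospecting tactic: scan a set of candidate pages
# for in-content mentions of a keyword. All data here is invented.

def in_content_mentions(pages, keyword):
    """Return the URLs of pages whose text mentions the keyword."""
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b", re.IGNORECASE)
    return [url for url, text in pages.items() if pattern.search(text)]

candidates = {
    "http://example.com/blog/travel-story": "I packed my Blue Widgets and...",
    "http://example.com/news/markets": "Stocks fell sharply today.",
}
print(in_content_mentions(candidates, "blue widgets"))
```

Each hit is a page already talking about your keyword in its body copy, which is exactly the kind of in-content placement the post describes, regardless of the site's overall vertical.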
Relevant may or may not work now. Irrelevant may or may not work now. But if you were to make a bet, which one do you think has a high probability of being only 'what works now'?
There's nothing wrong with what you're doing. Just appreciate the risk - Google doesn't want you to do what you're doing. Do they at some point take the time to figure it out - and if they do what happens to your site?
Everyone's giddy with paid links, automated, etc. But every few years everyone figures this stuff out and goes nuts on it as the new, easy way to rank. And it's like predicting a stock market crash. I can keep predicting Google's going to catch on and wipe a lot of sites off the face of the earth - and if I wait long enough I'll be right :). Remember the directory crash? Remember when recips got killed? I remember. It's been a while so maybe some folks are new enough to not remember. But it's pretty catastrophic when it happens - entire swaths of large sites gone from the serps for a long time if not forever.
Do they want you to develop relevant links? I believe they do.
That's not a reflection of Google's inability to determine relevance. It's a reflection of Google's ability to predict the highest paying click. In the case of AdSense, what's going on is they are ALSO making a best guess as to what will get the highest paying click or the most clicks.
The basic relevance matching is solid, which is why I use AdSense as an example. The odd occurrences where they appear to get it wrong don't have to do with matching content to content; they have to do with estimating the revenue to be earned from showing a high-clickthrough or high-paying ad. I have seen a number of cases where a high CPC caused an ad to be displayed in the wrong context. Remove CPC from the equation and the matching is solid.
The task of juggling the relevancy of ads to content then factoring CPC etc. is still a good example of how smart Google is. Doing this for links is, I think, easier because there is no CPC/auction calculation to skew the results.
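The trade-off described above can be shown with a toy model: if an ad is chosen by estimated revenue (click probability times cost per click), a high-CPC ad can beat a better-matched, cheaper one. All numbers and names below are invented for illustration; this is not AdSense's actual formula.

```python
# Toy model of the ad-selection trade-off: estimated revenue is
# (click probability x cost per click), so a high-CPC ad can outrank
# a better-matched, cheaper one. All numbers here are invented.

def expected_revenue(relevance, cpc):
    # Assume click probability roughly tracks topical relevance.
    return relevance * cpc

ads = [
    {"name": "well-matched ad", "relevance": 0.9, "cpc": 0.20},
    {"name": "off-topic high-CPC ad", "relevance": 0.3, "cpc": 1.50},
]
best = max(ads, key=lambda ad: expected_revenue(ad["relevance"], ad["cpc"]))
print(best["name"])  # the high-CPC ad wins despite the weaker match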
And Wheel...you are making me scared! I guess I wasn't around to see the directory and reciprocal links get smashed, I sure hope my efforts don't end up the same. Luckily I have acquired many different types of links, which I think will help cushion any blow that may come in the future.
Thanks guys :)
Alternatively, it's just integrating your website into the fabric of the web via links from as many diverse places as one can. Trying to beat Google by thinking about it much more than that is, I think, a futile game long term. In the short term one can still get easy rankings and make some cash.
Link building is important. I would not just skip getting a link. I might put it off until things slow down on the related websites.
#1 - search figures out your site's primary topic.
#2 - search figures out how many of the keywords it believes are related to that topic your site covers.
#3 - search assigns your site a score.
#4 - search slightly increases your score if incoming links have a good score AND are also highly related.
That's overly simplistic, but the message often missed is that everything is related. If your site is about widgets but you cover none of the subjects normally related to widgets, you start off with a low score, and incoming links from those related subjects provide less benefit.
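The four scoring steps above can be sketched roughly in code. To be clear, every weight, threshold, and function name below is invented for illustration; this is a hypothetical model of the poster's description, not Google's actual algorithm.

```python
# Hypothetical sketch of the four scoring steps described above.
# All weights, thresholds, and data are invented; this is not
# Google's actual algorithm.

def topic_coverage_score(site_pages, related_keywords):
    """Step 2: fraction of topically related keywords the site covers."""
    covered = {kw for kw in related_keywords
               if any(kw in page.lower() for page in site_pages)}
    return len(covered) / len(related_keywords)

def site_score(site_pages, related_keywords, inbound_links):
    """Steps 3-4: base score from coverage, small boost from relevant links."""
    base = topic_coverage_score(site_pages, related_keywords)
    # Step 4: only links that are both high quality AND on-topic add anything.
    link_boost = sum(0.01 for link in inbound_links
                     if link["quality"] > 0.7 and link["relevance"] > 0.7)
    return base + link_boost

pages = ["blue widgets for sale", "widget repair guide", "history of widgets"]
related = ["widgets", "widget repair", "widget prices", "widget reviews"]
links = [{"quality": 0.9, "relevance": 0.9},   # strong page, on-topic
         {"quality": 0.9, "relevance": 0.1}]   # strong page, off-topic
print(site_score(pages, related, links))
```

Note how the off-topic link contributes nothing even though its "quality" is high, which is the point the poster is making about coverage and link relevance working together.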
Every page on your site is a standalone page in terms of its merits, but it also plays a role in your overall site. If all of your pages cover topics related to your main topic (as judged by Google), you get an excellent score. If your site is overly diluted, or you have 100 pages fighting for the same keyword when only ONE can be considered about that keyword (and only one returned in search for it), you get a lower "score" overall (even if individual pages get great scores) and pass/receive less cumulative value.
The days of a link being a link are over. If you want to build the perfect site, you can think of it as a Christmas tree. Your main keyword is the angel up top, and you branch out from there with related keywords. One page per keyword with spot-on page titles, and no "check out this super cool new pink widget" stuff if "pink widget" is the content; those six extra words serve only to confuse, unless you really think someone will ever type all that in verbatim.
How many pages on your site have variations of the same keyword? Wouldn't a single page about that keyword be easier to use as a visitor?
I hope this doesn't come off as sarcasm, but is that really the main reason to place a link on a relevant site for most webmasters? Whenever I asked about this topic in the past (I'm a fanboy of link traffic, because G's moods can't kill it ;)), it seemed that everyone agreed that whatever you do, the benefit from a link usually lies in the SE traffic it brings, in comparison to which the value of the actual link traffic usually pales.
I would say, unfortunately, the main reason to get a link on relevant sites is probably more that you don't want to piss off G :-)
How does Google determine if YOUR URL is relevant to the user's search phrase? They look at on-page factors, like title, h1, content, etc., to see if the search phrase, slight variations of it, partial matches for it, and terms related to it appear in the various page elements. Then they look at off-page factors, like the link text of the inbound links and how much PR/link juice is being passed in via those links.
So why wouldn't they do the same to determine if the page linking to your URL is relevant to the search phrase? Look at on-page factors on the page linking to you, like title, h1, content, etc., to see if the search phrase, slight variations of it, partial matches for it, and terms related to it appear in the various elements of that page. Then they look at off-page factors for the page linking to you, like the link text of the inbound links to the page linking to you and how much PR/link juice is being passed in via those links.
In other words, all they really have to do is see how each of the pages linking to you ranks for the same keyword phrase they are ranking your URL for. The higher they rank, the more relevant... assuming their algorithm already ranks URLs for the SERPs based on relevance to the user's search phrase.
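That heuristic (a linking page's own rank for the target phrase as a proxy for link relevance) can be illustrated with a tiny sketch. The scoring function and the cutoff of 100 are assumptions made up for this example; nothing here queries a real search engine.

```python
# Hypothetical illustration of the idea above: treat a linking page's own
# rank for the target phrase as a proxy for how relevant that link is.
# The linear falloff and the max_rank cutoff are invented assumptions.

def link_relevance(linking_page_rank, max_rank=100):
    """1.0 for a page ranking #1 for the same phrase, 0.0 if unranked."""
    if linking_page_rank is None or linking_page_rank > max_rank:
        return 0.0
    return 1.0 - (linking_page_rank - 1) / max_rank

print(link_relevance(1))     # near-perfect topical match
print(link_relevance(90))    # barely counts
print(link_relevance(None))  # unranked page counts for nothing
```

The appeal of this model is that it reuses machinery the engine already has: if the ranking algorithm already measures relevance to a phrase, no separate "genre detector" is needed to judge the linking page.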