
Google SEO News and Discussion Forum

It's All About Links, or Is It?
TheMadScientist
msg:4527956
11:06 pm on Dec 13, 2012 (gmt 0)

Alright, well I've noticed some interesting discussions in a couple of relatively 'general advice' threads that strayed a bit OT, so after a bit of discussion with some of the members involved I thought we should have a dedicated thread about what I'm noticing...

The first are some 'key points' from the 'Sandbox Effect' thread:
[webmasterworld.com...]

These domains are ranking exclusively on their backlinks (and rapid acquisition thereof).

In the past, it was common to see brand new domains soar to the top rapidly, then move down in rankings; but, I've watched some of these domains rank well for several months now.

-crobb305

Based on what you're saying, it almost sounds like we're seeing a 'shift' to visitor behavior having a greater influence than it has previously.

-TMS

Another interesting characteristic of a few of the sites I am talking about is that some lack useful site navigation ... There are no links to internal pages (except for a "contact" link). The domain was registered 4 months ago. Furthermore, the internal pages contain no links to each other, only a link back to the homepage...

I don't understand what it is about the site that makes it "useful" to a visitor (or what metric could define it as such), unless an immediate click to an affiliate link equals 'visitor satisfied' (or the visitor returns from the affiliate site only to click on the next affiliate link in the list, and so on).

-crobb305

But in the case you're talking about (immediate click to an affiliate link), to Google, the visitor 'disappeared' and did not return to the results, which would almost have to be interpreted as 'visitor satisfied' by an algo, even if it doesn't make complete sense to us WRT 'the site clicked' satisfying the searcher, because we know it really didn't.

-TMS

How might the algorithm interpret affiliate links opening with target="_blank" when it comes to visitor behavior?

-crobb305

It can't, because it doesn't know ... You might know, I might know, but the algo knows 'when the visitor returned to the results', 'what the visitor searched for upon returning' and 'whether the visitor blocked the site upon returning', and that's it, nothing else.

-TMS

I have the same observation. I am looking after a site that sells circle and diamond widgets only, in a certain geographical area. The site decided to create a page on square widgets, reviewing the square widgets in the same geographical area. The page contained 15 different square widgets with a photo, main features and a dofollow link to the square widget manufacturer (there is only one of these for each square widget)...

After creating the page, it initially ranked at the bottom of the second page for the keyword square widgets geo-area. Then over the course of the next 6 months it started to climb, ending at #1, and it has held this position for the last 2 years. The bounce rate of this page is 70%+.

-aakk9999


And from the recent '404 / Broken Link' thread:
[webmasterworld.com...]

Just did a test of this on a #*$!ed website. When you serve a blank page with a 200 OK header, Google drops the page from the keyword index entirely in my test...

I was just experimenting; I thought the keyword would still rank on a blank page due to inbound links. But I was wrong.
-seoskunk
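
For anyone who wants to reproduce seoskunk's condition, here is a minimal sketch using only the Python standard library (the port and handler name are arbitrary choices of mine): a URL that answers 200 OK but delivers an empty body, as opposed to a 404/410 that explicitly says "gone".

from http.server import BaseHTTPRequestHandler, HTTPServer

class BlankPage(BaseHTTPRequestHandler):
    def do_GET(self):
        # 200 OK with a zero-length body -- the condition tested above,
        # rather than a 404/410 status that would signal removal outright
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", "0")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), BlankPage).serve_forever()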


What I 'get' out of these two discussions is:
Either Google is missing the 'spammy backlinks' that crobb305 and a couple of us who have seen the sites in question noted. Those should, in my opinion, be easily detected algorithmically, because I'm fairly certain I could code it (a toy sketch of such a check follows below), so thinking the programmers at Google have not thought of and figured out how to detect that type of link makes my head hurt a bit...

Or (when I put it together with the test posted by seoskunk in the 404 / Broken Link thread): it seems to be a fairly definite indication that page content (or lack of content) can override inbound link text, which, as tedster noted, could be to help prevent Google bombing, but could also indicate a bit of a shift in importance for rankings, from 'links as the way to go' to other on-page and visitor behavior metrics having enough weight to override inbound links, even those that may be determined algorithmically to be 'questionable' or spammy.
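
On the first possibility, here is the kind of deliberately naive check meant above; the 180-day window, the 25-links-a-day cap, and the function itself are invented purely for illustration, not anything Google has confirmed:

def suspicious_link_velocity(age_days: int, backlink_count: int,
                             young_days: int = 180,
                             cap_per_day: float = 25.0) -> bool:
    """Flag a young domain that is acquiring links abnormally fast."""
    velocity = backlink_count / max(age_days, 1)  # links per day of life
    return age_days < young_days and velocity > cap_per_day

# A 4-month-old domain with 10,000 discovered backlinks:
# suspicious_link_velocity(120, 10_000) -> velocity ~83/day -> True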

So, I think the question I'd like to 'start things off with' is:

If you look at pages ranking well in whatever niche you're in that don't appear like they should be where they are, and you forget about the links (just ignore them and look at anything else you can think of), what other things do you see that you think could cause the page to rank where it is?

To me personally it looks like 'link value' can definitely be overridden by other factors, so I think it would be great to discuss what other factors people think could be in play if we 'throw out the links' to a page.

 

TheMadScientist
msg:4527992
3:24 am on Dec 14, 2012 (gmt 0)

I should add ... I don't think it's something as simple as 'page title', it's too easy to manipulate and Google changes those.

Descriptions? Possible effect on click-thru rate, if it's actually the one you wrote, but Google has also been known to change those, so I don't think that 'counts' for much if anything ... I'm pretty sure it doesn't count for anything, at least singularly*, but could it have an influence when it 'correctly matches' a combination of something else(s)? I'm not going to rule it out totally as 'having an influence', but it has to be in combination with something else to be counted for more than almost nothing (it definitely is not what's going to override link weight/inbound link text on its own).

* I haven't tested for a while, but it didn't 'count' last I heard about a test being done on it.

Maybe it would be good to list things we can 'throw out' too, like:

Meta Keywords, we know that's not the answer.
Meta Description, not very likely as the answer, toss it for this question, unless you can think of how or see a way it's 'mixed in' with other factors.

Page Title, not on its own for sure, but in combination with [something(s) ... what something(s)?]...

H1 / Headings Overall, not on their own, but in combination with [something(s) ... what something(s)?]...

Bounce Rate, as a whole, toss it, it's been refuted in too many places and in too many situations, but a subset* of bounce rate, 'click-thru, click-back to results, take another action', is much more 'telling' than bounce rate itself, so I think it's something to explore personally...

* Hopefully we can agree to call this subset 'click back, reclick' for the sake of discussion and just remember it's actually a bit more than that they would have to use.

Also, I should correct what the algo can 'see', because I didn't note a couple of 'technical details' in my reply to crobb305 in the other thread. What the algo can 'see' is:

The algo knows 'when the visitor returned to the results', 'what did the visitor do upon returning: search for something else, click a different result, close the window, etc.', 'time between actions (search again, click a different result, closed window)' and 'did the visitor block the site upon returning' - It's possible I've missed some other action in the preceding, but the point is, it doesn't 'know' what link a visitor clicked on your page, so it can't tell if it was internal or external.
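
To make those limits concrete, here is a hypothetical sketch that scores a result click using only the events just listed; the event names, the 30-second cutoff, and the scoring rule are all assumptions for discussion, not a known implementation:

from dataclasses import dataclass
from typing import Optional

@dataclass
class SerpClick:
    seconds_until_return: Optional[float]  # None = never came back
    next_action: Optional[str]             # "reclick", "requery", "block", ...

def scored_as_satisfied(c: SerpClick, quick: float = 30.0) -> bool:
    """Score a result click from the result-page side only."""
    if c.seconds_until_return is None:
        return True    # visitor 'disappeared' -- even via an affiliate link
    if c.next_action == "block":
        return False   # explicit negative signal on returning
    if c.seconds_until_return < quick and c.next_action in ("reclick", "requery"):
        return False   # fast click-back, then tried something else
    return True        # a slow return is ambiguous; lean positive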


I guess another way to phrase the question I would like to explore is:

What other factor(s) - it's likely something tough to manipulate and possibly a group of factors - could be overriding the inbound links and causing pages to rank higher than it seems they should?

tedster
msg:4527997
4:13 am on Dec 14, 2012 (gmt 0)

A couple impressions here - these are definitely not conclusions.

One factor that seems to work for a month or two is freshness. Fresh pages often rank well with almost no backlink support. However, that doesn't create a sustained effect.

I also think the query phrase itself is a factor. It needs to be a pretty strong phrase for it to get the "full treatment" from Google's complete algorithm - especially in the trust/authority area. Google doesn't call their semantic component "phrase-based indexing" for nothing.

We saw this when Panda first rolled out and then, according to Google, it was later extended further down the long tail. Now where that threshold might sit is another question.

martinibuster
msg:4528006
5:47 am on Dec 14, 2012 (gmt 0)

...what other things do you see that you think could cause the page to rank where it is?


I think User Intent has been a factor in some of the ranking shuffles members have been seeing. This means that the old way of ranking, what I call Micro SEO, used to be links, title tag, H1, anchors, domain name as anchor text, etc. The new way is to classify sites according to how they relate to user queries. Typical classifications can be commercial, educational, scientific, community, news, reference, etc. That's Macro.

So instead of displaying whatever wins the Micro SEO Arms Race (link/anchor/H1, etc.), Google selects from a pool of web pages determined to be a certain kind of web page that relates to a certain kind of user intent. The Arms Race factors of H1, anchor text and all that become less important because understanding the query and matching the answer to it supersedes those factors.
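
A toy sketch of that two-step selection, with made-up labels and URLs: classify pages into 'macro' types, keep only the pool matching the query's inferred intent, and let the 'micro' factors compete only inside the survivors.

# Hypothetical page-type labels; a real classifier would assign these.
PAGES = {
    "shop.example/buy-blue-widgets": "commercial",
    "widgetpedia.example/widget-history": "reference",
    "widgetforum.example/thread-123": "community",
}

def macro_pool(pages: dict[str, str], query_intent: str) -> list[str]:
    """Select candidates by intent class before any 'micro' ranking."""
    return [url for url, kind in pages.items() if kind == query_intent]

# macro_pool(PAGES, "commercial") -> ["shop.example/buy-blue-widgets"]
# Title, H1 and anchor text would only matter *within* the surviving pool.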

...To me personally it looks like 'link value' can definitely be overridden by other factors...


Not just the link value, as I noted above. The ENTIRE range of traditional "micro" SEO ranking factors can, in my opinion, be considered as depreciated by a step, with the elevation of "macro" ranking factors like user intent, understanding a web page without relying on keywords, etc.

Some may see it as a war on SEO. You would be damning yourself to blindness to see it in those terms. See it for what it is, then try to match it.

...what other factors people think could be in play if we 'throw out the links' to a page.


I don't think it's a matter of throwing out the links as a ranking factor. I think it's a matter of factoring in user intent, factoring in stemming to show web pages that match the concept of the question even if the keywords aren't in it. Then you have to factor in that the pool of sites to be displayed may shrink for certain queries. Have you noticed for some queries how it seems Google is displaying a limited number of results and apparently not showing what they feel are irrelevant results?

I think Google's stemming is throwing sites into the mix that might not rank because of keywords or even backlinks/anchor.

The above issues I raised are giving birth to some oddball "whiteboard" posts that try to identify traditional citation, co-citation and co-occurrence based reasons for some SERPs that aren't really relying on traditional Micro SEO factors. I think it's time to erase the whiteboards and blackboards, because those old school Micro SEO factors have been superseded to a certain extent. Which is why using those old ways to describe what's happening keeps coming up short.
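
A small illustration of the stemming point above, using NLTK's Porter stemmer (assumes NLTK is installed) with an oversimplified containment rule of my own:

from nltk.stem import PorterStemmer

_stem = PorterStemmer().stem

def stemmed_containment(query: str, page_text: str) -> bool:
    """True when every stemmed query term appears, stemmed, in the page."""
    q = {_stem(w) for w in query.lower().split()}
    p = {_stem(w) for w in page_text.lower().split()}
    return q <= p

# stemmed_containment("running shoes", "we review every run shoe made")
# -> True: "running"/"run" and "shoes"/"shoe" collapse to the same stems,
# so the page can enter the mix without containing the exact keywords.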

Zivush
msg:4528012
6:20 am on Dec 14, 2012 (gmt 0)

I think a site is taken as a whole and inbound links are part of the picture.
One page's ranking affects the other pages' rankings, and the overall authority of a site affects its pages' SERPs.
Links to one page can affect another page with zero inbound links.
When you post a page on WSJ, it gets a different 'treatment' by Google than if the same page were posted on your site.
As for user experience and returning visits, G is far from accurate, but they seem to be trying to go in this direction, hopefully.

TheMadScientist
msg:4528013
6:29 am on Dec 14, 2012 (gmt 0)

Great info tedster & MB, thank you both ... This is definitely the direction I wanted the discussion to go.

When you post a page on WSJ, it gets a different 'treatment' by Google than if it were posted on your site.

But, looking further, why? How? What are the determining factors?

We know G started with a group of 'seed sites' from one of their patent applications (I don't remember off the top of my head which one) BUT even at that, there have to be reliable signals that can be detected via algo to actually do the ranking and make the determination the WSJ is a better choice for the results than [example.com].
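
For the seed-site mechanics, here is a compressed sketch in the spirit of TrustRank-style biased PageRank; the damping value, iteration count, and seed-only teleportation are standard in the literature, but nothing here is Google's actual code (and dangling-node mass is ignored for brevity):

def propagate_trust(outlinks: dict[str, list[str]], seeds: set[str],
                    damping: float = 0.85, iters: int = 20) -> dict[str, float]:
    """Biased PageRank: teleport mass goes only to the seed sites."""
    pages = set(outlinks) | {t for ts in outlinks.values() for t in ts}
    score = {p: (1 / len(seeds) if p in seeds else 0.0) for p in pages}
    for _ in range(iters):
        nxt = {p: ((1 - damping) / len(seeds) if p in seeds else 0.0)
               for p in pages}
        for src, targets in outlinks.items():
            for t in targets:
                nxt[t] += damping * score[src] / len(targets)
        score = nxt
    return score  # higher = better connected to the trusted seed set

Sites near the seeds end up with high but differing scores, which is one way WSJ, CNN and the NY Times could all be 'trusted' and still not tie for #1.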

Where I'd like to go, rather than just saying a page on the WSJ site is treated differently, is discussing why the page on the WSJ is treated differently. There have to be some 'determining factors' that make a distinction between the WSJ site and, say, CNN or the NY Times, or all 3 would rank #1 for the same queries when they have a page on the topic, which just 'doesn't work'.

And, for one query the WSJ may rank higher than the NY Times, but for another it could be reversed, so there must be variables ... What are they? What differentiates one from the other for a specific query, but not every query?

We know Wikipedia and Amazon rank very well for many queries, but what are the characteristics they exhibit that are different from other sites? The algo isn't a person who 'knows what Amazon is', so there has to be some set of determining factors that makes it rank very well for many queries, but not #1 for every query it has a page for.

IOW: You can say 'well, Google likes them', but even if that's the case, there has to be something algorithmically determined to keep them close to the top. Because they're not always at the top, we know they're not 'programmed as #1 and everything else comes second', which means there must be factors in play they are solid at exhibiting; but at the same time they are also not 'guaranteed' a top spot, or they would be the first result for every query they have a page on.

BTW: Zivush, thanks for sharing and sorry if it seems like I'm picking on you a bit, it's not really 'picking on you', but more pushing for a deeper discussion on things, because I've been reading for years how 'Google likes this site', but the results are determined by an algo which doesn't 'like' or 'dislike' anything, but rather uses variables to make a determination, so I want us to take a bit longer look at what they are or could be (besides links), because MB is right, it's a bit '2002ish' around here sometimes.

BeeDeeDubbleU
msg:4528040
8:58 am on Dec 14, 2012 (gmt 0)

I think Google's stemming is throwing sites into the mix that might not rank because of keywords or even backlinks/anchor.

Is there any evidence of this, MB?
deadsea
msg:4528068
11:46 am on Dec 14, 2012 (gmt 0)

I think Google's stemming is throwing sites into the mix that might not rank because of keywords or even backlinks/anchor.

Is there any evidence of this, MB?


MB's post is very insightful. His observations match mine. In my opinion, "bounceback rate" is the biggest driver of rankings: how many users return to Google unsatisfied, looking for another site or modifying their query.

It's not just stemming that is throwing additional sites into the mix. Google is also putting in sites that have only a subset of the keywords you are searching for. So much so that using Google search without "Verbatim" turned on is painful to me. This has only been the case for about 2 years now.

The personalization of the SERPs based on location, history, and profile also throws a ton of new sites into the SERPs.

We know that Google is trying out more sites than ever before. You visit a SERP on two different computers and you are almost guaranteed not to see the same listings these days.

Sgt_Kickaxe
msg:4528105
3:43 pm on Dec 14, 2012 (gmt 0)

Think stock market: keywords are assigned into groupings much like stocks are assigned a sector. Every grouping has related keywords, much like stock sectors have different flavors of companies within them. If your site is about "technology" then it is expected to cover the sector well, so if Google finds your site lacking in coverage it probably doesn't do as well. Similarly, if you cover the same thing redundantly to excess, you may fracture the strength of any given keyword set your pages maintain.

Without seeing a list of the specific words you can't possibly get it 100% right, nor should you, since that would also look questionable. So the solution? Just write for people and don't focus on keywords, save for the page title and making sure you have at least *some* keywords related to the subject.
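
The stock-sector analogy boils down to a coverage ratio; a toy version with an invented term list:

WIDGET_SECTOR = {"circle", "diamond", "square", "manufacturer",
                 "installation", "repair", "pricing", "review"}

def sector_coverage(site_text: str, sector_terms: set[str]) -> float:
    """Fraction of the sector's vocabulary the site actually covers."""
    words = set(site_text.lower().split())
    return len(words & sector_terms) / len(sector_terms)

# A site mentioning only "circle" and "diamond" widgets covers 2/8 = 0.25
# of its sector; adding honest pages on repair, pricing and reviews raises
# the score without keyword-stuffing any single page.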

---------------------------------------------------

Other possibility - Google is tracking webmasters, and the people working on the site are currently highly trusted? It's not hard to figure out who the site owner is; they have your IP etc. from the first moment you ever log into any Google product (Gmail, AdSense, AdWords etc.) and they see which sites you frequent via the +1 button, Analytics, AdSense etc. It wouldn't take very long to get an idea about what type of person you are. Been on a blackhat forum a little too often? Hmmmm

Heck, have you been in webmaster related forums too often? That might work against you too. Maybe the site owners aren't into SEO? lol. Toss these into the 'other factors' bin, though; don't weigh them too highly.

It could be as simple as: Pages gain history as they age, both good and bad, and the new site simply has no negative signals, yet.

martinibuster
msg:4528106
3:46 pm on Dec 14, 2012 (gmt 0)

personalization of the SERPs based on location


That's a good example of the serps shedding traditional SEO ranking factors and using a different set of data to solve a user intent problem.

Vanessa Fox wrote an article this past summer [searchengineland.com] that to me was the clearest summarization of how Google is shedding traditional ranking factors like links etc. She identifies a few areas as examples where supplemental data is injected, where the ranking factors can change, as well as where it doesn't work. Here are three examples she gave:

* Searcher Intent

* Correcting Misspellings

* Synonyms

canuckseo
msg:4528158
5:40 pm on Dec 14, 2012 (gmt 0)

I think synonyms and related searches, especially where the number of competing sites is low, play a bigger part for those low competition searches. I've seen sites rank for phrases that they aren't optimized for simply because the page that ranks is similar to the search phrase. There is no anchor text pointing to that page for that phrase, but there are other links with similar anchor text pointing to other pages on the site.

Freshness is a factor - I've seen sites I've freshly optimized rank highly for phrases. It depends on the site - a new site that is freshly optimized doesn't keep its ranking as long, while an older, more established site has kept its rankings for new phrases merely on the optimization alone, for some fairly competitive local terms. I haven't looked at backlinks for the older site, but obviously because of its age it has links, even if they are of the non-anchor-text variety.

So while I personally think in general terms it is a mix of content and links, the ratio of importance depends. I.e., in non-competitive searches links aren't as important; pages can rank competitively on the optimization and content alone. With competitive searches, links and content are equally important.

So what I'm saying is there is no hard and fast rule - sometimes links play a more important part in overall rankings and sometimes they do not.

scooterdude
msg:4528254
9:59 pm on Dec 14, 2012 (gmt 0)

OK, I'll throw my outlandish theory into the mix.

Starting with Panda, Google has been manually creating the "seed" set of websites others have referred to, but they've also assembled a fairly massive list of webmasters, linked those webmasters to their attributable output via GWMT, crawling tools and manual sleuthing, assigned ranking scores +ve and -ve, then applied this to the output of the other algos.

What about webmasters not "identified", you ask? Well, in writing an algo, the +ve and -ve bias would probably have reduced impact for queries where none of the identified have a presence.

I think they've been telling us this for a while; who was talking about brands being the answer to a certain open sewer...

scooterdude
msg:4528256
10:06 pm on Dec 14, 2012 (gmt 0)



So instead of displaying whatever wins the Micro SEO Arms Race (link/anchor/H1, etc.), Google selects from a pool of web pages determined to be a certain kind of web page that relates to a certain kind of user intent. The Arms Race factors of H1, anchor text and all that become less important because understanding the query and matching the answer to it supersedes those factors.


I think they've been doing the above since I've had an interest in SEO; it's probably mandatory for a search engine to be able to pre-process probable or historical queries.

Their document classification comments always suggested this to me

tedster
msg:4528266
11:14 pm on Dec 14, 2012 (gmt 0)

With regard to synonyms, it's important to recognize that something more complex is going on. Because of phrase-based indexing [webmasterworld.com], an entire phrase can have a "synonym," rather than just an individual word. The same applies to co-occurring vocabulary in general. We're talking about 2, 3, 4 word phrases being treated as a semantic unit in some cases.
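
To make the 'phrase as a semantic unit' point concrete, here is a toy expansion where whole 2-4 word units, not individual words, carry the synonym; the phrase table is invented stand-in data for whatever the phrase-based index actually holds:

# A made-up phrase table standing in for phrase-based index data.
PHRASE_SYNONYMS = {
    ("used", "car", "prices"): ("second", "hand", "auto", "values"),
    ("cell", "phone", "plans"): ("mobile", "tariffs"),
}

def expand(tokens: list[str]) -> list[tuple[str, ...]]:
    """Return the query plus synonyms of any 2-4 word unit inside it."""
    out = [tuple(tokens)]
    for n in (2, 3, 4):
        for i in range(len(tokens) - n + 1):
            unit = tuple(tokens[i:i + n])
            if unit in PHRASE_SYNONYMS:
                out.append(PHRASE_SYNONYMS[unit])
    return out

# expand(["used", "car", "prices"]) also yields
# ("second", "hand", "auto", "values") -- the whole phrase has the
# synonym, not any individual word in it.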

nickreynolds
msg:4528273
11:40 pm on Dec 14, 2012 (gmt 0)

they've also assembled a fairly massive list of webmasters, linked those webmasters to their attributable output via GWMT, crawling tools and manual sleuthing, assigned ranking scores +ve and -ve, then applied this to the output of the other algos


My experience would definitely suggest to me that some of my sites have been hit simply because they are mine.

If what I think scooterdude is suggesting is correct, I find this rather worrying!

TheMadScientist
msg:4528297
2:39 am on Dec 15, 2012 (gmt 0)

they've also assembled a fairly massive list of webmasters, linked those webmasters to their attributable output via GWMT, crawling tools and manual sleuthing, assigned ranking scores +ve and -ve, then applied this to the output of the other algos

Why would they do that?
^ That is a serious question, btw, because I don't understand the reasoning behind it, since 'who did what' isn't what keeps them in business; providing visitors with satisfying answers to their queries is.

Their job is to present their visitors with the results that satisfy the visitor's query, so why would they care who built a site any more than they really care who the original author of the content is? They don't simply go by discovery date for that determination, or give authors a way to submit content or 'new original content URLs' via WMT and then send the bot out to 'discover' the new content posted...

Even more to the point:
Why would they need to do that?

I think it's much more likely that the group of sites has something in common that doesn't line up with what the algo is looking for than that anyone at Google ever cared who actually built the site, let alone wrote an algo to 'do anything' because of it.

If the same person built or worked on all the sites and that person is missing some of the optimization points we're talking about in this thread, then it's entirely possible a group of sites is not ranking well because of the person who worked on them. But they don't need a special algo or to figure out 'who done it' for that; it'll take care of itself. It could definitely look like they're 'out to get you', though, if you're missing some of the new keys to optimization and don't know what those keys are, or don't understand them well enough to implement them for some reason...

scooterdude
msg:4528305
3:24 am on Dec 15, 2012 (gmt 0)

What percentage of the websites controlled by www members are big corporates, and what percentage are sole traders and hobbyists?


The language used by the relevant team has been targeted at individuals for as long as I have listened to them; it's only on our side that we try to depersonalise it.

Think of any cop show you ever saw, with the big star saying 'I know he did it'; nothing else gets in the way of that till he gets his man, cuffed or shot :)

I work with algos all day long and can make them do wonderful things; the algo is but a tool in my hand, and it is my mind that pushes it in a particular direction.

Divorcing the algo from its operators is, IMHO, to misunderstand the art of the possible in programming.

What is the intent and motivation of the programmer?

TheMadScientist
msg:4528311
3:33 am on Dec 15, 2012 (gmt 0)

Divorcing the algo from its operators is, IMHO, to misunderstand the art of the possible in programming.

What is the intent and motivation of the programmer?

Excellent point!

TheMadScientist
msg:4528333
4:45 am on Dec 15, 2012 (gmt 0)

What I also see in our niche is that the big-time advertisers are also showing up much, much higher in the natural SERPs.... even when searching for a very long hand-written phrase that we have on our page and they do not.... coincidence?

-Bewenched

But it sounds to me like you're still thinking search is 'keyword based' rather than, for lack of a better way of explaining it, 'definition based'. So if the competing site has a phrase that's 'algorithmically defined' to equate to essentially the same thing as your phrase is about, then I'm not surprised at all, especially if you click on the page(s) from the competing site(s) to see if they contain the phrase more often than you click on your own site for the same query or type of queries ... That has nothing to do with AdWords spend and everything to do with the new algo we're dealing with.

-TMS

Okay, the preceding from the zombie traffic thread [webmasterworld.com...] has made me wonder: how many people really don't understand the new algo and are in some ways crushing (or at least suppressing) their own rankings by constantly clicking on the competition for the phrases they want to rank for?

I wasn't sure where to put it or start discussing it without getting off topic, and I sort of think this thread is the best place for it...

tedster
msg:4528340
5:23 am on Dec 15, 2012 (gmt 0)

What percentage of the websites controlled by www members are big corporates, and what percentage are sole traders and hobbyists?

I work with both types of sites, and as far as I know, an SEO worker can never actually "control" a big corporate site. You can guide and train the various internal teams involved - and as the company gains some confidence in you, you can get more rapid deployment of your ideas.

But even then, unless your contract comes from way up at the C-level, you'll have some strong limitations. Even in-house SEOs for big corporates are up against it.

how many people... are in some ways crushing their own rankings by constantly clicking on the competition for the phrases they want to rank for?

The search phrases must be relatively low volume for that to have any significant effect, no?

TheMadScientist
msg:4528342
5:45 am on Dec 15, 2012 (gmt 0)

The search phrases must be relatively low volume for that to have any significant effect, no?

Well, probably, but not 100% necessarily, because if the clicks were relatively 'in line' and constant from general searchers (#1 gets a standard N%, #2 gets a standard N2%), but when you search you constantly click on competing sites, your clicks could have more of an impact than it would seem, especially with personalization and query intent grouping.

Think about it this way ... All things being 'normal', #1 usually gets a higher % of clicks than #2, and #2 more than #3. But say you search on 100 terms and click on the competition for those terms but not your site ... You're inadvertently 'voting for the competition' as being 'better overall' for those queries than your site. And since they're looking at more than a single page these days, I would say your clicks (or actually lack of clicks) could matter a bit more than we might think, especially if your query intent type is 'weighted' to the terms you're trying to rank for, so your overall behavior is given more influence for the 'query intent type' you're trying to rank for than, say, 'average joe surfer' who has a wider variety of query intent types.

I'm not sure if I'm explaining what I'm thinking very well, so I might try again later on, but I'm thinking the cumulative clicks over multiple related queries for the same 'query intent', on even a weekly or monthly basis, could over time have more of an influence than we might think, especially in a 'tie break' type situation where #3 is getting to the 'behavior threshold' of moving up and your behavior, not only for one search but across multiple queries at least, indicates it's a 'more satisfying' result than your own for the same query intent type.

And if you 'clear your cookies' so they 'don't know who you are', or otherwise disguise your behavior, then you're 'just the average joe'; but if you happen to visit the competition via the results daily or weekly, then you're a visitor a day (or week) who likes every site for all the queries you make, except yours...

It depends to some extent on how they track and what they weight, but I wouldn't rule out the idea it could have an influence, especially over multiple queries over time where your site is left out of the clicks.

I mean, just to stay on 'a level playing field' in spite of your click behavior, there has to be one searcher who 'counteracts' your clicks and likes your site but no one else's for the same queries, doesn't there? So I think if you 'weight the field' against yourself on a regular basis across multiple queries, you're at best not doing yourself any favors...
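
A back-of-envelope version of that arithmetic, with an assumed (invented) baseline click curve: on a low-volume phrase, one person's routine clicks visibly 'vote for the competition', while at higher volumes the same 30 clicks drown in the denominator, which is tedster's point above.

def click_shares(clicks: dict[int, int]) -> dict[int, float]:
    """Observed share of clicks by SERP position."""
    total = sum(clicks.values())
    return {pos: round(n / total, 2) for pos, n in clicks.items()}

organic = {1: 35, 2: 17, 3: 10}              # assumed top-heavy curve
print(click_shares(organic))                 # {1: 0.56, 2: 0.27, 3: 0.16}

# Add one webmaster's 30 daily checkup clicks on the #3 competitor:
skewed = {1: 35, 2: 17, 3: 10 + 30}
print(click_shares(skewed))                  # {1: 0.38, 2: 0.18, 3: 0.43}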

martinibuster
msg:4528345
6:26 am on Dec 15, 2012 (gmt 0)

...crushing their own rankings by constantly clicking on the competition for the phrases they want to rank for?


Why would anyone click on a competitor's site in a keyword search? To see what has changed from five minutes ago, an hour ago, a day ago? Who does that? I don't think anyone repeatedly checks the SERPs then repeatedly clicks through to the site. What is the motivation? What would they be expecting to find?

[edited by: martinibuster at 6:35 am (utc) on Dec 15, 2012]

TheMadScientist
msg:4528348
6:35 am on Dec 15, 2012 (gmt 0)

I know of people who do it all the time ... I've actually Skyped with site owners who will say 'search for [blah], now click on the 3rd result down and have a look' rather than just sending the link via IM ... Go figure.

martinibuster
msg:4528353
6:48 am on Dec 15, 2012 (gmt 0)

That's different from what you described. That's not a regular clickthrough scenario. Does that happen every day? How often does the click through happen from the same IP?

If it's an occasional thing, as in once or thrice a month or less, I can't see that crushing the SERPs.

[edited by: martinibuster at 6:50 am (utc) on Dec 15, 2012]

TheMadScientist
msg:4528355
6:50 am on Dec 15, 2012 (gmt 0)

For some I've worked with before it was a daily routine...
(That's why I brought it up)

martinibuster
msg:4528356
6:51 am on Dec 15, 2012 (gmt 0)

What is the motivation? What were they expecting to find on the click through?

TheMadScientist
msg:4528358
6:52 am on Dec 15, 2012 (gmt 0)

They just checked the competition's pages every day for different queries to see any updates, changes to pages, advertisers, etc. It was part of what they did.

martinibuster
msg:4528359
7:00 am on Dec 15, 2012 (gmt 0)

Wow, that just seems like madness. Just my opinion, but I think repetitive clicks from a single IP or range have less influence/pop than clicks from a wide range of IP addys would. It's logical to do it that way for many reasons; I'm sure you know what I mean.

superclown2
msg:4528387
10:55 am on Dec 15, 2012 (gmt 0)

has made me wonder how many people really don't understand the new algo and are in some ways crushing (or at least suppressing) their own rankings by constantly clicking on the competition for the phrases they want to rank for?


This is a factor I've long suspected and it is a logical one. Are there any figures that support it?

Robert Charlton
msg:4528398
12:23 pm on Dec 15, 2012 (gmt 0)

I'm a little confused by this thread, as it seems to want to look at where the algo's going in broad conceptual terms, and I applaud that intention... but it's constantly drifting into minutiae, and I'm wondering why.

I'm not seeing core terms like "user engagement" in this discussion, and it should be there (engagement obviously influenced by intent). Is it because this factor is by now taken for granted, or, for some reason, has this discussion just not gotten to it yet? We've been alluding to user engagement fairly steadily in this forum for over two years now, and I take it as almost a given. It certainly explains a lot of rankings I've been seeing. Originality is another such factor.

Why, then, are we even mentioning trivial tidbits like meta keywords? IMO, those are ancient history, or at best minuscule and isolated points in a very big picture.

Perhaps a beat or two in the discussion has been missed....

The advent of Caffeine, coupled with the speed and power of phrase-based indexing, has enabled Google to identify semantic relationships among the content of pages much more closely than before, improving the assessment of links and enabling the association of sites and pages with user behavior and query intent over time. Caffeine has enabled Google to bring together a mind-boggling number of factors in evaluating its results.

Very briefly, as it's late at night... of course the New York Times or Amazon are going to outrank a personal site on queries where they explore roughly the same subject matter. User intent isn't a static thing... In ecommerce, user intent is influenced by where a searcher might be in the buying cycle.

In relation to a product or a subject of inquiry, the user's evolving depth of experience is going to want to be satisfied as the user gains experience and develops new needs and curiosities about a topic. A good article ought to send a site visitor off on a deeper exploration. If a site has the depth, the visitor will stay on that site, and return.

In personalized results, users who have site preferences probably have them because the site answers their questions over time. Chances are that site size, diversity, and depth of content for many kinds of material does matter.

For non-personalized results... given a basic quality level... the likelihood a site will satisfy a query also increases with size. A single journalist or small group of writers, sufficiently talented, focused, and hard-working, can compete with a large organization. I've seen it happen, but not everyone can pull it off. Nate Silver, though, was so good he got bought by the NY Times.

Links most definitely help. I see their influence every day. And site performance does matter. But, for some sites and some queries, I think that Google is no further along than it was a few years ago. There's no one algo. It's a statistical model, attempting... from what I can tell... to satisfy a range of intentions for a given query, with those prioritized by user demand. Personalized results may be more focused.

Where Google had dropped host crowding in favor of "brand authority" for a while to check a different way of emphasizing pages, I see those results evolving over time as the data is accumulated. But those, too, are clearly prioritized by user demand. Why else would Google try such a radical experiment?

As for...
...many people... are in some ways crushing (or at least suppressing) their own rankings by constantly clicking on the competition for the phrases they want to rank for?

...I can't say that this hasn't crossed my mind, but I'd look at the other side of it... what happens if you just click your own site and leave too quickly too often? That might skew things more, though I doubt it would be a major influence... certainly not enough to explain why exact match links seem to be becoming a lesser factor. User engagement, probably segmented by niche, IMO is a better explanation.
