|Links and Content and Compellingness, Oh Sure!|
Google has told us links are still important:
|We played around with the idea of turning off backlink relevance and at least for now backlinks relevance still really helps in terms of making sure that we return the best, most relevant, most topical set of search results. |
Google has told us we should focus on clarity of content:
|Err on the side of clarity. |
Google has told us we should focus on building high-quality websites:
|In recent months we’ve been especially focused on helping people find high-quality sites in Google’s search results. The “Panda” algorithm change has improved rankings for a large number of high-quality websites, so most of you reading have nothing to be concerned about. However, for the sites that may have been affected by Panda we wanted to provide additional guidance on how Google searches for high-quality sites. |
Google has told us we should make a great site that users love, bookmark, tell friends about, etc.
They're all "givens" if you've been around SEO very long, because they keep getting repeated over-and-over-and-over by Google reps and "SEO gurus" alike. So what I'm wondering is how a page like this can possibly outrank pages on the same site which do all the things recommended, when I conduct a search for [no quotes] "key phrase sitename" [nope, not site: or .com, just the site name included]. The page:

- has not a single inbound link;
- is nothing more than a list of "city, st" names and happens to rebuild a directory on a site dynamically;
- has not a single outbound link;
- has not a single mention anywhere online [except in e-mail -- Google's not sniffing Gmail for discovery purposes, are they?];
- has no GA on it;
- has no <title>, no heading <h1> etc., not a single <p>;
- should be considered key-phrase stuffed.

Either Google's just blowing smoke about what counts, or their algo is seriously not working correctly.
BTW: IMO, it's really not possible...
I should probably add:
There are no penalties. The phrase is [some service [yes, two words] state SiteNameHere], and the page *does not* include the first two words of the query or the site name anywhere on it. It doesn't even include the state, except as an abbreviation rather than spelled out like it is in the query, and the location of the page is very deep, meaning: /directory-name/subdirectory-name/the-filename-no-one-would-simply-guess.ext
Nothing about it really makes sense to me, except that we're getting an official "smoke screen" about what really counts and what Google can really, accurately detect. Based on what we've been told, the page with the [major [authority, notionally well recognized] university here] inbound link and the other pages with a number of inbound/outbound links and compelling content/info people search for should definitely outrank the one with nothing more than a list of [city, st]. The only other plausible conclusion I can come up with is: the algo is *not* working like Google's reps seem to think/say.
|the algo is *not* working |
That probably says it all. ;)
IMO: It very well could :)
Outliers happen. Patterns are what count.
Both pages are on the same domain?
G is not even half as clever as they like to make out. Looking for reasons why such a thing is happening will only drive you crazy.
It's not just an isolated incident either; it's quite common. To maintain any sanity you just have to accept that the algo is full of holes, move on, and forget it.
The other page is "too perfect" ?
I presume this is a test? If so, when did you create the pages?
That "Google has told us we should focus on building high-quality websites:" article is like a bunch of virgin priests advising on the intricacies of the marital arts. The web is far more varied and more vibrant than these people realise. The single most important thing is to develop one's website for its users.
The terrible irony in all this is that Google's links problems are easily solved. But it would involve deleting the Artificial Stupidity efforts (thinking that Google's algo is "smart" enough to anticipate what the user should be searching for rather than what they are searching for) and concentrating on link/traffic analysis. Google would have to admit more failure and that wouldn't be good for its stock. How's Orkut doing? :)
@jmccormac love your comment
|But it would involve deleting the Artificial Stupidity efforts (thinking that Google's algo is "smart" enough to anticipate what the user should be searching for rather than what they are searching for) |
This is the real problem with the most recent update and the reason the search results are becoming increasingly less focused.
It's easier to be a backseat search engineer than a real one, and expecting perfection is the way to madness.
Think about it:
Google gets about 3.5 billion searches on a given day. If only 1 percent of those searches yielded bad results, the number of SERPs that sucked would be 35 million per day.
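The arithmetic is easy to check; a quick sketch (the 3.5 billion/day figure is the one stated above, and the 1 percent failure rate is purely hypothetical):

```python
# Back-of-the-envelope: even a tiny failure rate is enormous at Google's scale.
searches_per_day = 3_500_000_000  # ~3.5 billion searches/day, as stated above
bad_rate = 0.01                   # hypothetical: 1% of queries return poor results

bad_serps_per_day = int(searches_per_day * bad_rate)
print(f"{bad_serps_per_day:,}")  # prints 35,000,000
```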
|Both pages are on the same domain? |
|I presume this is a test? If so, when did you create pages? |
Only halfway -- I left them open because:
1.) They shouldn't be found.
2.) It doesn't "break anything" if they are.
3.) They should be ignored if the algo and what we're hearing about it from G's reps is correct.
|Outliers happen. Patterns are what count. |
Absolutely -- Google's developed a pattern of reps saying/preaching one thing while the algo does something totally different.
One page out of billions of searches every day? Even 1000? Or 1,000,000?
|Google's developed a pattern of reps saying/preaching one thing while the algo does something totally different. |
Or maybe Google is looking at the big picture, and you're looking at the part of the picture that interests you.
My attitude is "Don't be angry, be patient." If you don't have to watch your blood pressure, YMMV. :-)
Real search engine developers and operators would have a different approach to the problem.
|It's easier to be a backseat search engineer than a real one, and expecting perfection is the way to madness. |
Real search engine operators and developers tend to rely on the user being sentient. They generally know what they want and if they don't then search suggestions might help. But Google's big problem is that it has wandered off down the Yellow Brick Road of Artificial Stupidity. It ignores the reality that the smartest element in the search cycle might actually be the user.
The AI stuff looks great in press releases for the drooling technology churnalists but it is really just a sticking plaster on a slashed carotid artery. The problem is that as Google has moved farther away from the links basis of search, its algorithm has become more like an explosion in a sticking plaster factory.
One of the side-effects of poor SERPs is that they keep the user on the search engine site for longer.
|"One of the side-effects of poor SERPs is that they keep the user on the search engine site for longer." |
Would not Google then be concerned that the user would use Bing or DuckDuckGo or the plethora of other search engines out there?
|Martin Ice Web|
@planet, no, because these engines are not popular enough. Meanwhile I install the Bing bar on customers' computers; after 2 days they get used to it and I never get asked to turn it back to G. If Bing/Yahoo would do some more public work they would get more users, as their SERPs are quite a bit better than G's.
|Google has told us links are still important: |
We played around with the idea of turning off backlink relevance and at least for now backlinks relevance still really helps in terms of making sure that we return the best, most relevant, most topical set of search results.
This is totally misleading and Google knows it. When you spend 15 years optimizing your algorithm around backlinks, obviously the results will be worse if you suddenly take backlinks out of the calculations and don't change anything else. If they want to make a meaningful comparison, they need to spend 15 years developing an algorithm that doesn't involve backlinks, then compare that with what they have now.
And what does Google have on most SERP pages? (Being utterly cynical, of course. ;) )
|Would not Google then be concerned that the user would use Bing or DuckDuckGo or the plethora of other search engines out there? |
|One page out of billions of searches every day? Even 1000? Or 1,000,000? |
One example, which imo happens to be a very good one, thanks -- The pattern has been emerging for a while, and it's not always as obvious as the example I gave, but it's definitely there, again, imo.
ADDED: It's very easy to algorithmically determine a page has no links, no template, no "compelling text", no title, no heading, no <html> declaration, no <head> section, no <body> section, no user interest, nothing anyone searching for the phrase [it's not on the page and there are no links, so there's no association with it there] would want to see or find -- I can do that.
There's no "bigger picture" for Google to try and see in this type of case -- The page should not even be indexed, let alone rank above other pages with links, of much higher quality, and user interest, if what they say counts actually counts and the algo is doing what it should.
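For what it's worth, here's a rough stdlib-only sketch (my own toy check, nothing to do with Google's actual pipeline) of how trivially detectable such a page is -- just look for the signals the page is missing: no <title>, no <h1>, no <p>, no links at all.

```python
# Toy "thin page" detector: flags a page that lacks the basic structural
# signals discussed above. Hypothetical illustration, not Google's method.
from html.parser import HTMLParser

class ThinPageCheck(HTMLParser):
    SIGNALS = ("title", "h1", "p", "a")  # title, heading, paragraph, any link

    def __init__(self):
        super().__init__()
        self.seen = set()

    def handle_starttag(self, tag, attrs):
        if tag in self.SIGNALS:
            self.seen.add(tag)

def missing_signals(html: str) -> list:
    """Return which basic signals the page lacks, in a fixed order."""
    checker = ThinPageCheck()
    checker.feed(html)
    return [t for t in ThinPageCheck.SIGNALS if t not in checker.seen]

# A bare list of "city, st" names, roughly as described in the post:
page = "Springfield, IL<br>Portland, OR<br>Austin, TX"
print(missing_signals(page))  # every signal is missing: ['title', 'h1', 'p', 'a']
```

A page that scores four-for-four missing here has no business outranking fully marked-up, linked-to pages on the same site -- which is exactly the point.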
Stray thought. How different is all this really from G### Translate? Feed in a text. Click some buttons. Result: text which is sometimes alarmingly and sometimes hilariously wrong, but which in general-- especially if the original was long-- gives a rough idea of what the material is supposed to say.
Isn't that essentially what computerized search does? Sometimes right, sometimes wrong, but in general, on average, it points people in the right direction. And that's as much as it ever can do.