Is this a side-effect of improved dynamic spidering, or have the web developers for the sites wised-up (en masse)?
Random forum posts and PDFs sometimes show up in the top 10 for words that are pricey at Overture ($5+).
My hope is that this is just a temporary situation before the full update occurs.
I don't know about "deluge", but Amazon is a clear example of a site that should benefit from the status quo. A jillion pages with zero pagerank but with anchor text pointing at other pages.
Google gets better at crawling long URLs; Google devalues PageRank; Google considers anchor text to be gold; Amazon does well. All that is pretty easy to understand. Even if people don't agree that all these things are occurring, if they were, the Amazon effect makes perfect sense.
What doesn't make sense is why Google is not treating the Amazon mirror sites as spam, per their guidelines. Frankly one Amazon result on most any search would be reasonable. Two results each from five mirrors should never be seen.
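The chain of reasoning above can be sketched as a toy scoring function. Everything here is illustrative: the page names, scores, and weights are made up, and this is in no way Google's actual algorithm. It only shows how shifting weight away from PageRank toward anchor-text matches can flip which page wins.

```python
# Toy illustration (not Google's real formula): how re-weighting
# PageRank vs. anchor-text matches changes the winning page.
# All names, scores, and weights below are invented for the example.

def score(page, pr_weight, anchor_weight):
    return pr_weight * page["pagerank"] + anchor_weight * page["anchor_hits"]

pages = [
    # a well-linked homepage with few exact anchor-text matches
    {"name": "small-retailer.example", "pagerank": 0.8, "anchor_hits": 2},
    # a deep catalog page with near-zero PageRank but lots of internal
    # links whose anchor text repeats the product name
    {"name": "big-catalog.example/item", "pagerank": 0.1, "anchor_hits": 40},
]

for pr_w, an_w, label in [(1.0, 0.01, "PageRank-heavy"),
                          (0.1, 0.05, "anchor-heavy")]:
    winner = max(pages, key=lambda p: score(p, pr_w, an_w))
    print(label, "->", winner["name"])
```

Under the PageRank-heavy weighting the small retailer's homepage wins; under the anchor-heavy weighting the deep catalog page overtakes it, which is the "Amazon effect" described above.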
If you're still having trouble finding "clinchers" for irrelevant search results and spam, how can I send you this report for the phrase "widget store"?
Ranking #2 is a CNN article describing a legal battle in which a widget store is suing another company. Why is that relevant enough to earn 2nd place?
Ranking #3 seems to show one URL having hijacked another. The listing shows one domain (which is irrelevant to the search term) but the cache shows metatags and content for a completely different domain.
Ranking #5 seems to be an example of cloaking, because the cached page contains 100% keyword stuffing whereas the actual link leads to a completely different page.
Ranking #8 is another news article, this one about a widget retailer changing careers.
Meanwhile, many commercial widget sites (which you'd expect to find if searching for "widget store") have either fallen in rankings or disappeared altogether.
The phenomena of missing index pages and irrelevant search results are real; we're not making this up!
You are very wrong with that example. You complain that there are five pointless PDFs in that example, when in reality there are SIX PDFs in the top 10.
Does anyone else feel like this is all getting surreal? How long until a Google spokesperson tells us they had just three spam reports? Results are bad, right? Getting worse daily, right? And yet still no acknowledgement from Google themselves.
I can't find even one PDF for your query example, not even within the first 20 results. I'm connecting from Europe but have also searched using US proxies and various datacenters. So couldn't the PDF phenomenon just be a new flux phenomenon that sorts itself out after a while?
>and yet still no acknowledgement from google themselves
soapystar, you might want to read GoogleGuy's recent statements about the PDF thingy - no acknowledgement, no denial. He did, however, clearly state that he is going to talk with people inside the 'plex and investigate. That's pretty good feedback, imho.
People even said he'd use his multiple answers about the PDF problem as a trick to divert from the eBay, Amazon, etc. problem.
I'm a bit confused now ...
I can confirm 6 pdfs in top 10 from a US and UK location search.
Believe me, I hope that is the case, but I've only seen it get worse. Besides, I think we should offer feedback when there seems to be a problem. As I've mentioned before, it has NOT affected me as far as clicks, and the examples of problems I submitted were BROAD searches that rarely result in leads for me. The point I'm trying to make is that the search quality in some areas is simply not what it used to be, and when I was talking to my significant other last night about something on the Internet in general and she mentioned that she can't find things in Google anymore, I thought: there is a problem. She is the example of the casual user.
However doing searches for other albums I can say that this change is not across the board - but things are still fluid. I hope that things are changing back to how they used to be.
Is it worth pointing out here that the other country Amazons are not 'mirrors' of Amazon.com? The UK site has its own reviews, its own customer reviews, and quite often different ASINs (Amazon product IDs).
1. A large phone company's PDF on the subject (they are not a large competitor in the data storage space).
2. Site on portable data storage
3. Doc from the same site in #2
4. The big book store's listing
5. The big book store's listing
6. A company that should be here, but by no means the largest.
7. A company that should be here, but even smaller than #6.
8. I have no idea what this is, but it has no place in the top 100. It mentions data storage once.
9. A link to buy mp3 players on a company mentioned in the subject.
10. A company that should be in the top 3.
The companies that should be here are not even in the top 20. I gave up after that and went to another SE.
This is not a technical search by any stretch of the imagination.
total: 18/40, roughly half
There is a real problem with your SERPs. I sent these things via spam report. Let's see.
Best wishes for hopefully relevant SERPs in future,
There are still relevant results for heart attack victims on top of the SERPs, but the landscape is changing. There are two Amazon results in the top ten (out of between one and two million results, depending on whether you use quotation marks). The first Amazon result is pushing a rock CD by the group "Queen" for 7.99 pounds sterling.
There has to be a joke in here somewhere about how a "Grateful Dead" listing would be better, but then Google's chef would get mad at me....
Can't find a problem with the serps? That's hard to understand:
Try these search terms:
"adidas golf shoes"
#3 Relevant site
#4 Adidas - #4 for adidas golf shoes?
#5 MSN search results
#6 MSN search results
#7 - #10 look relevant
"sony mp3 player"
"cheap cd player"
#3 Relevant site
I find dozens and dozens more searches that get the same results. Is this, to you, good, relevant SERPs? A few companies dominating markets with pages absolutely stuffed with irrelevant content.
What's happening to Google?
Another way is:
Could be that the strong Amazon listings are based on the fresh bonus. Kackle, the heart attack example search is a really interesting one. It reminds me of the golden fresh listings I had in the past with some pages: #1 for many, many searches, for a month or even two. I guess it's not totally impossible that after a while (update, PR and backlink recalculation, etc.) these strong Amazon results will fall in their positions. We all know this phenomenon from our own fresh listings, don't we?
I originally planned to ask Sergey [webmasterworld.com] what his idea is for how best to deal with this in future, keeping in mind the goal of a neutral algo that works without human intervention.
But I won't ask it, since I have the strong feeling that it's a flux / fresh-listing thingy.
If it's not, then given the recently discussed Amazon phenomenon, even Apple could gain top positions for, e.g., gnocchi recipes and influence the Google search experience if they one day started publishing stuff other than hardware- and software-related info.
I doubt this'll be the new algo.
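The fresh-bonus speculation above can be made concrete with a toy model. To be clear, this is pure guesswork, not a known Google formula: the decay shape, the half-life, and the base score are all invented, purely to show how a page could rank #1 while fresh and then fall back once the bonus wears off.

```python
# Toy sketch (speculation, not a documented Google mechanism): a
# "fresh bonus" that boosts new pages and decays as the page ages.

def fresh_bonus(age_days, boost=1.0, half_life_days=30.0):
    """Exponentially decaying boost; half of it is gone every half_life_days."""
    return boost * 0.5 ** (age_days / half_life_days)

base_score = 0.4  # made-up underlying relevance score for the page
for age in (0, 30, 60, 90):
    print(age, round(base_score + fresh_bonus(age), 3))
```

A brand-new page gets the full boost on top of its real relevance; a month or two later (one or two half-lives) most of the boost is gone and the page settles to its underlying position, matching the "for a month or even two" pattern described above.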
[edited by: Yidaki at 6:01 pm (utc) on Aug. 12, 2003]
#6 Panasonic page (yeah! But it's a "product does not exist" page)
#7 Epinions.com (not bad)
#9 Amazon UK
Page two is even more messy ;)
Like c1bernaught said, there are plenty more examples like this....
People use google to find things to buy and if google excludes them then shoppers will use another search engine.
The real problem is the spammy sites that use very long addresses to get results.
[edited by: Marcia at 7:43 pm (utc) on Aug. 12, 2003]
Speaking of which, discussion of *individuals* or *individual members* is off-topic for the board and this forum. The concern here is about the quality of the search which is certainly of concern so let's all confine it to that and stay on topic - and play nice and be courteous, while we're at it. We're all in this together, remember.
There's also a matter of simple Internet_101 technology. I personally hate it when, all of a sudden, another application on the computer starts to open when I click on a search result, without checking with me first. That isn't user-friendly at all, considering some people may not have the resources available on their computers at the time.
The fact is that we all use browsers to search, and we should expect that what we find will open up in the browser, the software we're choosing to use - not require another application to open. If I wanted PDF I could search with Acrobat Reader, right? Ludicrous, but accessing files that require an application other than the BROWSER should be governed by search preferences set by those who know and make that choice deliberately, and normal browser-accessible files alone should be the default. I can't see anything else being logical from a user point of view.
[edited by: Marcia at 8:03 pm (utc) on Aug. 12, 2003]
It's not going to work to give ranking bonus for freshness. Look at what one blogger was complaining about [kottke.org] -- and this was before this latest "Amazing" update. Bloggers are ever-fresh, and a lot of observers have noticed blogging noise in the SERPs for over a year now. It's not good enough to say that Google will "work it out" by the end of the cycle.
One pro-Google critic attacked this blogger on the grounds that his searches are insufficiently refined. Yes, with any search engine I can get better results by using better and more specific multiple search terms. But that's not going to cut it. By the time Google educates a few more people on how to do searches, folks will be drilling-down on Teoma, Alltheweb, and Altavista. It would be nice if all Internet users had search-engine smarts, but they don't, and they don't particularly care to learn if they don't have to.
One good question for Sergey would be whether Google has any plans to introduce clustering anytime soon. This is not a trivial thing for Google. The reason I say that is because if the clustering is introduced on the back end as a filter, the CPU cycles go through the roof. More server farms will be required. Google is so popular that they have to think about the load implications of anything they do.
No, I think Google would prefer to install some categorization on the crawl end (off-line), just like the way they compute PageRank. Doing it off-line means you do it once per crawl, not once per search, and you can deliver search results many times faster this way.
But doing it on the front (crawl) end implies more system-wide software architecture changes than doing it on the back (search) end. Perhaps Google is caught between a rock and a hard place. They won't tell us, of course, because it's none of our business. So we're free to speculate.
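The crawl-time vs. query-time trade-off described above can be sketched in a few lines. Nothing here reflects Google's real architecture; the documents, the trivial keyword "categorizer", and the function names are all hypothetical, and only the shape of the trade-off matters: the expensive step runs once per index build, so each query is a cheap lookup.

```python
# Sketch of offline (crawl-time) clustering vs. doing it per query.
# Hypothetical data and a deliberately trivial categorizer.
from collections import defaultdict

documents = {
    "doc1": "apple iphone hardware review",
    "doc2": "apple pie recipe dessert",
    "doc3": "iphone software update guide",
}

def assign_cluster(text):
    # Stand-in for a real (expensive) categorizer: bucket by a keyword.
    return "cooking" if "recipe" in text else "tech"

# Offline step: run the categorizer once per crawl and store the result
# alongside the index, the way PageRank is precomputed.
cluster_of = {doc_id: assign_cluster(text) for doc_id, text in documents.items()}

def search(term):
    # Query time only *looks up* the precomputed cluster: no per-query
    # clustering cost, at the price of a bigger index and a more
    # invasive change to the crawl pipeline.
    grouped = defaultdict(list)
    for doc_id, text in documents.items():
        if term in text:
            grouped[cluster_of[doc_id]].append(doc_id)
    return dict(grouped)

print(search("apple"))
```

Moving `assign_cluster` inside `search` instead would give the back-end filter design: every query pays the clustering cost, which is where the "CPU cycles go through the roof" concern comes from.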