mine was something like searchexample_dmoz_experiment
That sounds more like "searchexample" is experimenting with DMOZ data, rather than a dmoz.org experiment.
(There appears to be an SEO discussion site called Search Example. Never heard of them, myself.)
Here's a thread from a while back, in our now-deprecated Search Engine Spider Identification forum, that's pretty self-explanatory and relates to what we're discussing here:
A new agent or new advertisement? [webmasterworld.com]
It involved none other than our esteemed colleague and long-time member Fantomaster [webmasterworld.com], and was quite an interesting discussion.
Again, my sincere apologies for not being clearer and for not pointing out that thread to begin with. :)
Might I ask:
a) what research you did to determine that it was log spamming,
b) what research you did to determine that there was no experiment, as you categorically stated, and
c) whether you would call Googlebot a log spammer?
lol - ya, we ran the same "experiment" 4 - or was it 5 - years ago too. As Lawman said - log spamming erm - experimenting works. Did good data come out of it? Sure did. Some we shared - most we did not. It was a good way to check servers, page sizes and other general page data.
There's lots of good reading on experimenting with log file spamming, and the data sets that came out of it, in a site search on log file dropping [google.com].
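For anyone curious about the mechanics, there's nothing exotic to it: you request other sites' pages with a user-agent that names (and usually links) your own site, and record whatever general page data comes back. A rough sketch of that kind of fetch is below - the user-agent string, target URLs and recorded fields are illustrative placeholders, not the exact setup anyone here ran:

# Rough sketch only: how a log-drop style request might record basic page data.
# The user-agent string, target list and field names below are assumptions.
import urllib.request

USER_AGENT = "searchexample_dmoz_experiment (+http://example.com/bot.html)"

def fetch_page_stats(url):
    """Request a page with an identifying user-agent and note general page data."""
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read()
        return {
            "url": url,
            "status": resp.status,
            "server": resp.headers.get("Server", "unknown"),
            "page_bytes": len(body),
        }

for target in ("http://example.com/", "http://example.org/"):
    print(fetch_page_stats(target))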
What kind of click-back ratio are you getting out of the log files? It used to run as high as 5% when we first started, and then quickly fell off to tenths and then thousandths of a percent.
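If you want to measure it yourself, the ratio is just click-backs over drops. A rough way to count the click-backs out of a combined-format access log is sketched below - the log path, landing URL and drop count are made-up placeholders:

# Rough sketch: estimate a click-back ratio from your own access log.
# LOG_PATH, LANDING_PATH and DROPS_MADE are hypothetical placeholders.
import re

LOG_PATH = "access.log"          # your combined-format server log
LANDING_PATH = "/bot.html"       # the URL advertised in the experiment's user-agent
DROPS_MADE = 10000               # how many log drops you sent out

# In combined format the request line sits in the first quoted field.
request_re = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

clickbacks = 0
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        m = request_re.search(line)
        if m and m.group(1).startswith(LANDING_PATH):
            clickbacks += 1

print(f"{clickbacks} click-backs / {DROPS_MADE} drops = {clickbacks / DROPS_MADE:.4%}")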
In the end, it is up to the site that got the spidering to determine whether it is spam or not. We get anywhere from 5 to 100 of these spider spam log drops a day. I don't know any webmaster who wouldn't call 99.99% of them spam.
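On the receiving end, spotting these drops in your own logs usually comes down to a user-agent that advertises a URL but doesn't belong to any crawler you actually recognize. A rough filter along those lines is below - the trusted-agent list is just an example, and whether a given hit is spam is still your call:

# Rough sketch: flag likely log-drop entries in your own combined-format access log.
# The trusted-agent list and log format assumptions are illustrative only.
import re

LOG_PATH = "access.log"
KNOWN_AGENTS = ("Googlebot", "Slurp", "msnbot")   # crawlers you already recognize

# In combined format the user-agent is the last quoted field on the line.
ua_re = re.compile(r'"([^"]*)"\s*$')
url_in_ua = re.compile(r'https?://\S+')

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        m = ua_re.search(line)
        if not m:
            continue
        ua = m.group(1)
        # A user-agent that advertises a URL but matches none of your trusted
        # crawlers is a candidate log drop.
        if url_in_ua.search(ua) and not any(a in ua for a in KNOWN_AGENTS):
            print(ua)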