| 2:47 pm on Apr 25, 2006 (gmt 0)|
|Why Is GOoglE dOiNG thIS? |
| 2:55 pm on Apr 25, 2006 (gmt 0)|
Why is it insane?
| 3:15 pm on Apr 25, 2006 (gmt 0)|
They use a special bot so that appropriate ads are displayed for your pages, and so you can have relevant ads on your pages right away (instead of waiting for Googlebot to come a-crawling).
If you're really bothered by this, you can take solace in the fact that Google's new infrastructure, Big Daddy, has the ability to cache data from one bot's visit for re-use by another bot. In other words, if the Mediapartners bot visits first, it saves the page info in a cache at Google's data center, and Googlebot checks that cache to see if it can avoid recrawling the page. Result: You save bandwidth, and so does Google. Matt Cutts of Google describes this process in his "Gadgets, Google and SEO" blog.
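The crawl-caching idea described above can be sketched in miniature (this is my own toy illustration of the concept, NOT Google's actual code; the function names and page contents are made up):

```python
# Toy sketch of crawl caching: the first bot to fetch a URL stores the
# page in a shared cache, and later bots reuse the cached copy instead
# of hitting the site again -- saving bandwidth on both ends.

crawl_cache = {}  # shared across bots: url -> page content


def fetch(url):
    # Stand-in for a real HTTP request.
    return "<html>page content for %s</html>" % url


def bot_fetch(bot_name, url):
    if url in crawl_cache:
        # Cache hit: no request is made to the webmaster's server.
        print("%s: cache hit for %s" % (bot_name, url))
        return crawl_cache[url]
    print("%s: crawling %s" % (bot_name, url))
    page = fetch(url)
    crawl_cache[url] = page
    return page


# Mediapartners visits first; Googlebot then reuses the cached copy.
page1 = bot_fetch("Mediapartners-Google", "http://example.com/")
page2 = bot_fetch("Googlebot", "http://example.com/")
```

Only the first visit costs the site any bandwidth; both bots end up with the same page data.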
| 3:22 pm on Apr 25, 2006 (gmt 0)|
oddsod: sorry, quick typer
"Big Daddy" hehehe...
| 4:51 pm on Apr 25, 2006 (gmt 0)|
I'm certain that they target ads on the fly too because I have added new pages on different subjects and have had perfectly targeted ads appear on the very first impression--and I know it's the first impression because I viewed it seconds after the upload.
On the other hand, the Mediapartners bot no doubt refines the process and makes it more efficient, as well as producing a separate index for AdSense staff to play with as they will...
| 5:43 pm on Apr 25, 2006 (gmt 0)|
As far as I can tell, AdSense can sometimes make a good initial guess at ads for a completely new page, on its very first impression, from the following factors:
1) The site content that G already knows about, especially pages close in the /dir/ectory/hier/archy to your new page.
2) Keywords in the URL (and any query parameters).
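Factor 2 can be illustrated with a toy sketch (my own guesswork at how URL parsing *might* work, not Google's actual method; the function name and tokenizing rules are invented):

```python
import re
from urllib.parse import urlparse, parse_qs

# Hypothetical illustration: pull candidate keywords out of a URL's
# path and query parameters, which is all the targeting system would
# have to go on for a brand-new, never-crawled page.
def url_keywords(url):
    parsed = urlparse(url)
    # Split the path on slashes, hyphens, underscores and dots.
    words = re.split(r"[/\-_.]+", parsed.path.lower())
    # Add query parameter values as well.
    for values in parse_qs(parsed.query).values():
        for v in values:
            words.extend(re.split(r"[\-_.+]+", v.lower()))
    # Drop empty strings and common file extensions.
    return [w for w in words if w and w not in ("html", "htm", "php")]

print(url_keywords("http://example.com/golf-clubs/titanium-drivers.html?brand=acme"))
# ['golf', 'clubs', 'titanium', 'drivers', 'acme']
```

Note how a keyword-rich file name like titanium-drivers.html gives the system something to target before any crawl has happened — consistent with the /a3/ anecdote later in this thread.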
| 5:52 pm on Apr 25, 2006 (gmt 0)|
|especially pages close in the /dir/ectory/hier/archy |
Rule of thumb: Any speculation about how directory hierarchies influence any aspect of Google's activities whatsoever is dead wrong. It is invariably the result of making link structures that mirror directory hierarchies and then believing that it was the directory structure, rather than the link structure, that caused the effect.
| 6:05 pm on Apr 25, 2006 (gmt 0)|
[I report on my own experiments and ads and site, which is why I hypothesise this, but of course (a) YMMV and (b) I may well be wrong.]
In particular, I notice that one component of my URI path often brings up inappropriate ads, where nothing else should. I don't see why G *shouldn't* parse the URI for *clues* for a new page when it is otherwise struggling! I have discussed this with G and they do not reject my hypothesis...
| 1:46 am on Apr 26, 2006 (gmt 0)|
For the record, ALL my pages are in the root directory; only includes, images, scripts, etc. go into subdirectories.
And in the most recent instance, which actually startled me with how accurate the ads were, the file name itself is the keyword, and the title text is also keyword-heavy.
If this improves targeting, it indicates that the file name is very important: there was not a moment's hesitation, and the page loaded with perfect ads as fast as my broadband loads any page.
| 2:09 am on Apr 26, 2006 (gmt 0)|
Off topic from the original post, but I can assure you Google places a heavy emphasis on the page file name. I couldn't figure out why I was getting so many ads for the Audi A3 and Adidas a3 sneakers on my site. Then I realized the subfolder was named /a3/. I chose this so the folder would sort to the top of the list when working on my files. If you want to improve targeting, throw a keyword in your file name.
| 2:36 am on Apr 26, 2006 (gmt 0)|
|For the record ALL my pages are in the root directory, only includes, images, scripts, etc go into sub dirs. |
This is exactly how I have my sites set up, Andrea99, and I have never had problems with off-subject ads.
In fact, the only links I add to my filter are blatant MFAs and (believe it or not) two competitors.
I will not claim that this directory hierarchy is the sole reason for good ad targeting, but I'm sure it doesn't hurt.
| 3:18 am on Apr 26, 2006 (gmt 0)|
Because it serves a different purpose and you might want to allow one type of bot to have access to some pages and not to others.
| 6:07 am on Apr 26, 2006 (gmt 0)|
As others have said, it's NOT insane. Doing otherwise would be insane, or at least incorrect.
The spiders serve different purposes and webmasters might plausibly want to either (a) distinguish between them or (b) give them different instructions in the robots.txt file.
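For example, a robots.txt along these lines lets the AdSense crawler see everything while keeping Googlebot out of one area (the Mediapartners-Google and Googlebot user-agent tokens are the ones Google documents; the /members/ path is just a made-up example):

```
# Let the AdSense crawler fetch anything, so ads stay targeted:
User-agent: Mediapartners-Google
Disallow:

# Keep Googlebot out of a members-only area (example path):
User-agent: Googlebot
Disallow: /members/
```

If the two spiders shared one user agent, this kind of per-bot instruction would be impossible.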