| 11:38 am on Jul 31, 2006 (gmt 0)|
I just had "msnbot-NewsBlogs/1.0 (+http://search.msn.com/msnbot.htm)" from 188.8.131.52 checking out robots.txt
Then a few minutes later it came from 184.108.40.206 to get the rss feed.
Looks like a bunch of new bots have been unleashed by MS.
| 5:17 pm on Jul 31, 2006 (gmt 0)|
These are all Microsoft bots that have been around for a while. Up until recently they were all called "msnbot," but that was getting confusing, so we asked the other groups to append something to the name so people could tell them apart. Here are a few:
The MSN Shopping bot is msnbot-products.
The MSN News bot is msnbot-news.
The MSN Image Search bot is msnbot-MM.
The MSN Search bot is still just plain msnbot.
By the way, this change was partly precipitated by people here at Webmaster World complaining that we were crawling them a lot but never indexing them; it always turned out that it wasn't MSN Search doing the crawling -- it was some other team at MSN. Now it should be much easier for people to see what's really going on -- and to block or restrict other bots (without blocking MSN Search) if they have to.
| 5:35 pm on Jul 31, 2006 (gmt 0)|
Thanks for the info, msndude :)
Also, thanks for asking the other groups to be more specific with their agent names.
| 6:03 pm on Jul 31, 2006 (gmt 0)|
How about this:
| 6:06 pm on Jul 31, 2006 (gmt 0)|
|By the way, this change was partly precipitated by people here at Webmaster World complaining that we were crawling them a lot but never indexing them; it always turned out that it wasn't MSN Search doing the crawling -- it was some other team at MSN. Now it should be much easier for people to see what's really going on -- and to block or restrict other bots (without blocking MSN Search) if they have to. |
A move in the right direction... thanks for the response MSNDude.
| 6:18 pm on Jul 31, 2006 (gmt 0)|
How will this influence robots.txt? Will my msnbot Disallow: / still work, or will I have msnbot-#*$!xx crashing my server left, right, and center again?
| 7:39 pm on Jul 31, 2006 (gmt 0)|
I certainly wish the search bot/product bot would visit us more often. We only have about 200 pages in MSN, but others show 20k+.
| 8:45 pm on Jul 31, 2006 (gmt 0)|
Correction: The MSN Image Search bot is msnbot-media.
Sorry about that! :-)
| 9:22 pm on Jul 31, 2006 (gmt 0)|
I too received a visit from this mysterious spider. Since its visit, my MSN referrals have increased significantly. Maybe this is a good thing.
| 9:30 pm on Jul 31, 2006 (gmt 0)|
The following message was cut out to new thread by volatilegx. New thread at: search_engine_spiders/3031574.htm [webmasterworld.com]
9:11 am on Aug. 2, 2006 (CDT -6)
| 11:36 pm on Jul 31, 2006 (gmt 0)|
Thanks for the update, msndude.
Will the MSNBot info page for Webmasters [search.msn.com] be updated to reflect these newly-announced 'bots? I'm currently looking for answers to the following questions:
On sites with no non-proprietary multimedia files, and with no news or shopping content, would the following construct allow or deny msnbot-media, msnbot-news, etc.?
|# Allow unrestricted access for msnbot
# Disallow all others not 'allowed' above |
If the above would allow the various media/news/shopping bots, would the following work any better?
|# Disallow all MSN specialty robots
# Allow unrestricted access for msnbot search robot
# Disallow all others not 'allowed' above |
Thanks in advance for a reply, or for a pointer to an authoritative document that will answer this question.
[edited by: jdMorgan at 11:38 pm (utc) on July 31, 2006]
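For reference, the two constructs described above might be filled out as follows. The directives below are a hypothetical reconstruction from the comment lines in the post (the original showed only the comments), using standard robots.txt syntax:

```
# Construct 1:
# Allow unrestricted access for msnbot
User-agent: msnbot
Disallow:

# Disallow all others not 'allowed' above
User-agent: *
Disallow: /
```

```
# Construct 2:
# Disallow all MSN specialty robots
User-agent: msnbot-media
User-agent: msnbot-news
User-agent: msnbot-products
Disallow: /

# Allow unrestricted access for msnbot search robot
User-agent: msnbot
Disallow:

# Disallow all others not 'allowed' above
User-agent: *
Disallow: /
```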
| 8:43 am on Aug 1, 2006 (gmt 0)|
That's why I allow anything containing "msnbot" to crawl, as long as the IP belongs to Microsoft, just to avoid worrying about anything new that comes out.
I can swat it at leisure later ;)
| 8:46 pm on Aug 1, 2006 (gmt 0)|
I got this one: waproxyb10.msn.com
| 4:01 pm on Aug 2, 2006 (gmt 0)|
jd: The site isn't updated with the “official rules” yet, but the net of it is the following:
· MSNBot obeys robots.txt for MSNBot
· MSNBot-NAME obeys robots.txt for MSNBot *and* MSNBot-NAME.
This means site owners need to do no extra work for our additional crawlers, while still having the flexibility to limit specific crawlers.
Hope that helps.
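A minimal example of that precedence, with hypothetical paths: under these rules, msnbot-media honors both groups, so it stays out of /beta/ as well as /media-staging/, while plain msnbot only stays out of /beta/.

```
User-agent: msnbot
Disallow: /beta/

User-agent: msnbot-media
Disallow: /media-staging/
```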
| 10:42 pm on Aug 2, 2006 (gmt 0)|
|How about this: |
I blocked this one a while back because it didn't look kosher, and ended up with the whole site de-indexed; the MSN search bot stopped spidering the site.
It also took me a while to realise what was happening, un-block it, and get the site re-indexed. Doh!
| 5:16 pm on Aug 4, 2006 (gmt 0)|
Any chance that MS can get these bots to start accepting compressed pages?
Both Google & Slurp have accepted compressed pages for years now (although with G it was the "Mozilla/5" bot, which is now the standard bot). msnbot never has, and consequently consumes more bandwidth on my site than G & Y combined, even though each of them fetches far more pages than msnbot does.
The inability to accept compressed pages really does give the impression of an old, out-dated technology being employed at MS. Time to join the 21st Century, no?
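The mechanism being asked for is standard HTTP content negotiation: a client advertises gzip support in its Accept-Encoding request header, and the server compresses the response only for those clients. A minimal sketch (the page content and function name are made up for illustration):

```python
import gzip


def negotiate_body(body: bytes, accept_encoding: str) -> tuple[bytes, dict]:
    """Return (payload, extra headers), honoring the client's Accept-Encoding.

    Only clients that advertise gzip support receive a compressed payload;
    a crawler that never sends "gzip" always gets the full-size page.
    """
    if "gzip" in accept_encoding.lower():
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}


page = b"<html><body>" + b"<p>hello</p>" * 200 + b"</body></html>"

plain, _ = negotiate_body(page, "identity")           # non-gzip client
zipped, hdrs = negotiate_body(page, "gzip, deflate")  # gzip-capable client
print(f"plain: {len(plain)} bytes, gzipped: {len(zipped)} bytes, {hdrs}")
```

Repetitive HTML compresses very well, which is where the bandwidth difference between gzip-capable and gzip-incapable crawlers comes from.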
| 4:36 pm on Aug 5, 2006 (gmt 0)|
Compressed pages are nice for your static web page sites, but I couldn't care less about compressed pages.
With a dynamic website it's bandwidth vs. CPU time, and compressing the page on my site increases the time to deliver the page and chews up more CPU cycles, meaning I can deliver fewer pages in the same amount of time.
I won't be sending compressed pages anytime soon; guess I'm using out-dated technology too ;)
| 11:03 pm on Aug 7, 2006 (gmt 0)|
|With a dynamic website it's bandwidth vs. CPU time |
Dynamic, load-balanced compression. My early testing was < 0.002 secs on a twin Xeon 2.4 GHz, Linux 2.6. The routine is encapsulated within the Conteg Content-Negotiation Class [webmasterworld.com] (v0.11 :- v0.12.1 is available via my site; includes cache-control settings).
|Compressed pages are nice for you static web page sites |
My site is fully dynamic (PHP).
|guess I'm using out-dated technology too ;) |
CPU is cheap now. Times have changed.
| 5:58 am on Aug 8, 2006 (gmt 0)|
OK, you didn't get my point, Alex...
If you just let the dynamic page be delivered as it's being created, the overall process is faster. Waiting to generate the whole page, then zip it and ship it, means the total page time is longer, because transmission doesn't start until the entire process is completed.
Not sure how you can get around that fact, and my server is just too busy to risk it.
[edited by: incrediBILL at 5:58 am (utc) on Aug. 8, 2006]
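For what it's worth, the buffer-then-compress pipeline described above is not the only option: gzip output can also be produced incrementally, compressing and transmitting each chunk as the dynamic page is generated. A minimal sketch using Python's zlib (illustrative only; a real server would write each piece to the socket as it goes):

```python
import gzip
import zlib

# wbits=31 asks zlib for a gzip-wrapped stream.
co = zlib.compressobj(6, zlib.DEFLATED, 31)

chunks = [f"<p>row {i}</p>\n".encode("utf-8") for i in range(1000)]
wire = bytearray()
for chunk in chunks:
    # Compress each chunk as soon as it exists...
    wire += co.compress(chunk)
    # ...and force the compressed bytes out without ending the stream.
    wire += co.flush(zlib.Z_SYNC_FLUSH)
wire += co.flush()  # finalize the gzip trailer

# The client reassembles the identical page.
assert gzip.decompress(bytes(wire)) == b"".join(chunks)
print(f"streamed {len(b''.join(chunks))} bytes as {len(wire)} compressed bytes")
```

Sync-flushing every chunk costs some compression ratio, but transmission starts with the first chunk rather than after the whole page is built.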