SpiderDuck is a service at Twitter that fetches all URLs shared in Tweets in real time, parses the downloaded content to extract metadata of interest, and makes that metadata available for other Twitter services to consume within seconds.
Several teams at Twitter need to access the linked content, typically in real time, to improve Twitter products. For example:
Search to index resolved URLs and improve relevance
Clients to display certain types of media, such as photos, next to the Tweet
Tweet Button to count how many times each URL has been shared on Twitter
Trust & Safety to aid in detecting malware and spam
Analytics to surface a variety of aggregated statistics about links shared on Twitter
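The fetch-parse-extract flow described above can be sketched in a few lines of Python using only the standard library. This is purely illustrative: SpiderDuck itself is a distributed service, and the field names here (`title`, `meta`) are assumptions, not its actual metadata schema.

```python
from html.parser import HTMLParser
from urllib.request import urlopen  # used only by fetch_metadata below


class MetadataExtractor(HTMLParser):
    """Collects the page <title> and <meta name=... content=...> pairs."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            d = dict(attrs)
            name = d.get("name") or d.get("property")
            if name and "content" in d:
                self.meta[name] = d["content"]

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


def extract_metadata(html):
    """Parse an HTML document and return its title and meta tags."""
    parser = MetadataExtractor()
    parser.feed(html)
    return {"title": parser.title.strip(), "meta": parser.meta}


def fetch_metadata(url):
    # Hypothetical fetch step; a production crawler would also honor
    # robots.txt, follow redirect chains, and rate-limit per host.
    with urlopen(url, timeout=10) as resp:
        return extract_metadata(resp.read().decode("utf-8", "replace"))
```

A consumer such as search indexing would then read fields like `title` from the extracted metadata rather than re-fetching the page itself.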
Msg#: 4387539 posted 11:42 am on Nov 16, 2011 (gmt 0)
I've seen Twitter's "spiderduck" subdomain/bot since at least the beginning of August. Here's what it looks like, with two different UAs from two different domains always hitting simultaneously on Nov. 12th --
Msg#: 4387539 posted 11:56 am on Nov 16, 2011 (gmt 0)
I guess, until we see how the data is used or presented, it's difficult to say whether it's worthwhile to allow access. I would have thought that if you're a site such as WSJ or BBC you'd want to allow access to the public side of the site.
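For sites weighing that choice, the standard mechanism is robots.txt, assuming the bot honors it. A sketch of allowing the public side while fencing off a members-only area might look like the following; the "Twitterbot" token is an assumption here, since the actual user-agent strings would need to be confirmed from logs like those above.

```
# Hypothetical robots.txt entry -- "Twitterbot" is an assumed UA token.
User-agent: Twitterbot
Disallow: /members/
Allow: /
```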