Forum Library, Charter, Moderators: not2easy & rumbas

Twitter Forum

Twitter's Real Time URL Fetcher: SpiderDuck

 10:32 am on Nov 16, 2011 (gmt 0)

Twitter's Real Time URL Fetcher: SpiderDuck [engineering.twitter.com]
SpiderDuck is a service at Twitter that fetches all URLs shared in Tweets in real-time, parses the downloaded content to extract metadata of interest and makes that metadata available for other Twitter services to consume within seconds.

Several teams at Twitter need to access the linked content, typically in real-time, to improve Twitter products. For example:

  • Search to index resolved URLs and improve relevance
  • Clients to display certain types of media, such as photos, next to the Tweet
  • Tweet Button to count how many times each URL has been shared on Twitter
  • Trust & Safety to aid in detecting malware and spam
  • Analytics to surface a variety of aggregated statistics about links shared on Twitter
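Twitter hasn't published SpiderDuck's code, but the "parses the downloaded content to extract metadata" step can be sketched with Python's stdlib HTML parser. This is only an illustration of the idea, not Twitter's implementation; the sample page and its Open Graph tags are invented:

```python
from html.parser import HTMLParser

class MetadataParser(HTMLParser):
    """Collects the <title> text and Open Graph <meta> tags from an HTML page."""
    def __init__(self):
        super().__init__()
        self.metadata = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("property", "").startswith("og:"):
            # Open Graph tags carry the kind of media info clients display
            self.metadata[attrs["property"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.metadata["title"] = self.metadata.get("title", "") + data

def extract_metadata(html):
    parser = MetadataParser()
    parser.feed(html)
    return parser.metadata

# Invented sample page, stands in for a fetched URL's response body.
sample = """<html><head>
<title>Example article</title>
<meta property="og:image" content="http://example.com/photo.jpg">
</head><body>...</body></html>"""

print(extract_metadata(sample))
```

A real fetcher would of course download the page first and hand the body to a parser like this; the point is that the metadata of interest (title, preview image, and so on) falls out of a simple single pass over the HTML.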



     11:42 am on Nov 16, 2011 (gmt 0)

    I've seen Twitter's "spiderduck" subdomain/bot since at least the beginning of August. Here's what it looks like, with two different UAs from two different domains always hitting simultaneously on Nov. 12th --

    spiderduck01.dmz1.twitter.com [projecthoneypot.org...]

    09:57:34 /robots.txt

    -- BUT --

    User-agent: *
    Disallow: /

    -- is promptly ignored by its fellow traveler(s):

    r-199-59-149-10.twttr.com [projecthoneypot.org...]

    09:57:34 /filename.html
    10:34:57 /filename.html

Three-plus months of hits show the exact same one-two punch pattern, where Twitterbot/1.0 only requests robots.txt and Twitterbot/0.1 never does (and always ignores it).

    FWIW: I'm content to leave my Disallows and bot-blocks as-is because I've yet to see any benefit from Twitter crawling/extracting/whatevering my content "to improve Twitter products."
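Under the robots exclusion convention, the robots.txt quoted above blocks every path for every user agent, so the follow-up fetch of the HTML file is exactly what a compliant crawler must not do. Python's stdlib `urllib.robotparser` shows the check a well-behaved bot would make (the hostname here is a placeholder, not the poster's site):

```python
from urllib.robotparser import RobotFileParser

# Parse the rules quoted in the post above, without fetching anything.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# A compliant crawler consults this before requesting any page.
allowed = rp.can_fetch("Twitterbot/0.1", "http://example.com/filename.html")
print(allowed)  # False
```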


     11:56 am on Nov 16, 2011 (gmt 0)

I guess, until we see how the data is used or presented, it's difficult to say whether it's worthwhile to allow access. I would have thought that if you're a site such as the WSJ or the BBC, you'd want to allow access to the public side of the site.


     11:29 pm on Nov 17, 2011 (gmt 0)

    Posting a URL to Twitter these days usually sees about ten different bot visits to the posted URL within 1 to 2 seconds of posting.
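One way to spot that kind of burst in your own access logs is to count the hits that land within a couple of seconds of the posting time. A minimal sketch, with invented log entries and user-agent strings standing in for real log lines:

```python
from datetime import datetime, timedelta

# Hypothetical access-log entries (timestamp, user agent); the agents and
# times are illustrative, not taken from a real log.
hits = [
    (datetime(2011, 11, 17, 9, 57, 34), "Twitterbot/1.0"),
    (datetime(2011, 11, 17, 9, 57, 34), "Twitterbot/0.1"),
    (datetime(2011, 11, 17, 9, 57, 35), "SomeOtherBot/2.0"),
]
posted_at = datetime(2011, 11, 17, 9, 57, 34)  # when the URL was tweeted

# Collect the user agents that hit within 2 seconds of posting.
window = timedelta(seconds=2)
burst = [ua for ts, ua in hits if timedelta(0) <= ts - posted_at <= window]
print(len(burst))  # 3
```

Against a real log you'd parse the timestamp and user-agent fields out of each line first, but the windowing logic is the same.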

