partyark - 7:29 pm on Jun 10, 2013 (gmt 0)
Thanks for the replies.
To be clear, robots.txt is correctly disallowing all crawling on widgetworld.com (the bogus domain). Apart from robots.txt itself (which returns 200), it has been 410'ing every request for some time now.
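For the record, the setup is roughly this, assuming Apache with mod_rewrite (adjust for your own server):

    # robots.txt - block all crawling
    User-agent: *
    Disallow: /

    # .htaccess - serve robots.txt normally, 410 everything else
    RewriteEngine On
    RewriteCond %{REQUEST_URI} !^/robots\.txt$
    RewriteRule ^ - [G]

The [G] flag is Apache's shorthand for sending a 410 Gone response.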
According to Webmaster Tools, there are no incoming or external links to widgetworld.com. It has no keywords, and its total indexed page count is zero. But despite all this, it's clear that somewhere in Google's engine it's very much alive.
I think what's happening is that the RemoveURL requests are "expiring" because the URLs return 410 - or perhaps because there's a site-wide removal on widgetworld.com too. As soon as that expiry is triggered, which usually takes about ten days from the removal request, the URLs reappear as incoming links to my real domain... and then my domain takes a pasting in the organic results and the whole process of requesting removals starts again.