So grateful for the extraordinary work our friends at @internetarchive are doing to fight 404s and digitally preserve millions of links to websites and sources Wikipedians cite, as they build the world's largest encyclopedia.
For the past 3 years, we have been running a software robot called IABot on 22 Wikipedia language editions, looking for broken links (URLs that return a '404', or 'Page Not Found'). When a broken link is discovered, IABot searches the Wayback Machine and other web archives for an archived copy with which to replace it. Restoring links ensures Wikipedia remains accurate and verifiable, and thus meets one of Wikipedia's three core content policies: 'Verifiability'.
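The lookup a bot like IABot performs can be sketched against the Wayback Machine's public availability API (`https://archive.org/wayback/available`). This is a simplified illustration, not IABot's actual code; the sample response below mirrors the JSON shape the API documents, with a hypothetical snapshot of `example.com`:

```python
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a request URL for the Wayback Machine availability API."""
    params = {"url": url}
    if timestamp:
        # Optional YYYYMMDDhhmmss timestamp; the API returns the closest snapshot.
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urlencode(params)

def closest_snapshot(api_response_text):
    """Extract the closest archived snapshot URL from an API response, if any."""
    data = json.loads(api_response_text)
    snap = data.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

# Sample response in the documented shape (illustrative values):
sample = json.dumps({
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20130919044612/http://example.com/",
            "timestamp": "20130919044612",
            "status": "200",
        }
    }
})

print(availability_query("http://example.com/", "20130919"))
print(closest_snapshot(sample))
```

A real link-restoring bot would first confirm the original URL returns a 404, then swap in the snapshot URL returned here.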
If you blocked archive.org's crawlers, you should be OK. Just a bit of background... after several months of emailed removal requests, DMCA submissions, C&D notices and attempted phone calls (when they were in San Francisco) without any responsive action on their part whatsoever, I came to the conclusion I had to take alternative measures to get my intellectual property (articles written by myself) removed from their "encyclopedia", which BTW was ranking higher than my page for the same article (go figure).
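For anyone who wants to keep pages out of the Wayback Machine in the first place, the Internet Archive's crawler has historically identified itself with the user-agent `ia_archiver`, which can be blocked via robots.txt. Note this is a sketch of the conventional approach, and the Archive's policy on honoring robots.txt has varied over the years, so it is not a guarantee of exclusion or retroactive removal:

```
# robots.txt at the site root
User-agent: ia_archiver
Disallow: /
```

Retroactive removal of already-archived pages has generally required contacting the Archive directly, as the poster above describes.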