Q. When you guys roll out new algorithms, filters, and patches, some good sites end up getting filtered out along with the bad. Do you pre-test most of the algorithms prior to launching them? How do you know how strongly to apply filters? By default, do you usually lean to one side or the other and then tweak your way back?
A. "We always put algorithmic changes into our test harnesses to poke and prod in lots of different ways. But you also have to be adaptive. If someone in the outside world notices an issue after a launch that you didn't notice, it's important to take that feedback and act on it, and also to try to improve the testing procedure to cover that in the future. We usually have a pretty strong sense of whether something will be a large-impact launch or not. But you can't completely avoid having a large impact with a launch. An example might be if you're replacing a large subsystem in the crawl-index-serve pipeline. We continually go back and improve or replace sections of our system. Sometimes the results can't be bit-for-bit compatible in output, so you have to do the best you can. Update Fritz in 2003 is the canonical example of that; you can't go from a batch-based search engine to an incrementally-updated search engine without some visible impact. To answer your last question, I personally lean toward softer launches; webmasters never need any extra stress. But sometimes launches can't be made completely soft or invisible, as I mentioned."
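The answer mentions putting algorithmic changes through test harnesses and notes that a replaced subsystem often can't be bit-for-bit compatible with its predecessor. Purely as an illustration of that kind of regression check, here is a minimal, hypothetical sketch: two versions of a toy ranking function are run over the same queries, and the harness reports which queries get a different result ordering. The function names and the scoring formulas are invented for this example; they are not Google's actual system or API.

```python
# Hypothetical launch test harness: run an old and a new ranking function
# over the same queries and report where their output orderings diverge.
# rank_old / rank_v2 are illustrative toy rankers, not any real API.

def rank_old(query, docs):
    # Baseline: order documents by raw occurrence count of the query term.
    return sorted(docs, key=lambda d: -d.count(query))

def rank_v2(query, docs):
    # Candidate replacement: normalize the count by document length, so
    # the output need not be bit-for-bit identical to the baseline.
    return sorted(docs, key=lambda d: -d.count(query) / max(len(d.split()), 1))

def diff_report(queries, docs):
    """Return the queries whose result ordering changed between versions."""
    return [q for q in queries if rank_old(q, docs) != rank_v2(q, docs)]

docs = ["apple apple banana", "apple", "banana banana apple pie"]
changed = diff_report(["apple", "banana"], docs)
# "apple" diverges: the short exact-match doc outranks the longer one
# under length normalization, while "banana" ranks the same either way.
```

A real harness would of course compare far richer signals than orderings, but the shape is the same: drive both versions with identical inputs, diff the outputs, and have a human review the divergences before launch.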
(Emphasis mine in 2nd paragraph)