lucy24 - 6:49 am on Mar 18, 2013 (gmt 0)
Through personalization (of results, of course, which means I must track users and their behavior): if I know user N1524389 usually clicks on an average of 5.3 results within N seconds, and for some reason on query Y they only clicked on 4 results within N seconds, I can determine there's a variance.
I don't see why you need to do any of this. The only real variable is whether a user opens more than one page without reloading the search-results page. And you know when the results page has been reloaded-- even if the browser doesn't put in a fresh request-- because your own analytics will tell you ;)
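If you did want to count it, it's only a few lines against your raw hit log. Everything below-- the log shape, the /search path-- is made up for the example; swap in whatever your analytics actually records.

    from collections import defaultdict

    SERP_URL = "/search"   # hypothetical path of your results page

    def clicks_between_reloads(hits):
        """hits: (user_id, timestamp, url) tuples. Returns, per user,
        how many result pages were opened between successive loads
        of the search-results page."""
        runs = defaultdict(list)     # user -> list of click counts
        current = defaultdict(int)   # user -> clicks since last SERP load
        for user, ts, url in sorted(hits, key=lambda h: h[1]):
            if url == SERP_URL:
                runs[user].append(current[user])
                current[user] = 0
            else:
                current[user] += 1
        for user, n in current.items():   # flush the final run
            runs[user].append(n)
        return runs

    # Any count above 1 means the user opened several results without
    # coming back through the results page-- the one variable that matters.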
Now, about the OP:
around March 1, it completely disappeared from the SERPs (not in the top 100 or top 1000 or anywhere at all)
You forgot to answer the inevitable first question: Does the page in question exist in the index at all? Either do a site: search or search for a unique text string. Answers that are obvious to you are not obvious to everyone else.
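For instance (URL and phrase invented for the example):

    site:example.com/blue-widgets.html
    "hand-calibrated widgets in small batches"

If neither turns the page up, it's genuinely out of the index; if the site: search finds it, you have a ranking problem, not an indexing problem.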
You don't see a lot of single pages being de-indexed-- usually it's a whole site being stomped-- but it's best to collect all possible information.
The other thing you forgot to say is what information you're getting from wmt. Most of it you can just ignore at this point. In particular: do not look at the keyword list yet. That list is generated strictly for wmt, so it may take several months to become accurate-- counting from when the site was added to wmt, not from when it was created.
But do look for anything red-flaggish like "couldn't load robots.txt" or huge numbers of non-200 responses. This early in the site's life there shouldn't be any redirects or 404s, except the purely mechanical /index.html and domain-name canonicalization redirects. Search engines will always ask for the wrong name now and then, just to test you. (Bing more than google, for some reason. And MJ12 loves to leave off directory slashes. And so on.)
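If you'd rather spot those red flags in your own raw logs than wait for wmt to catch up, a few lines will do it. Sketch only: it assumes Apache/nginx "combined" log format and a file named access.log-- adjust to taste.

    import re
    from collections import Counter

    LINE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

    statuses = Counter()
    robots = Counter()

    with open("access.log") as log:
        for line in log:
            m = LINE.search(line)
            if not m:
                continue
            statuses[m.group("status")] += 1
            if m.group("path").endswith("/robots.txt"):
                robots[m.group("status")] += 1

    print("all responses:", statuses.most_common())
    # 5xx (or no request logged at all) for robots.txt is what produces
    # "couldn't load robots.txt" in wmt; a plain 404 there is harmless.
    print("robots.txt responses:", dict(robots))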