Hi... First post here...
I am creating a shareware (and therefore "commercial") program that can, in general terms, analyze a website. So far, so good, none of it involves Google.
However... To be competitive overall I need at least some sort of position checking. Again, my program may (or may not) outshine all competitors out there with its other features... But if I have zero position checking... Well, you get the idea.
I would be more than happy with, e.g., either just parsing the first page of Google results (e.g. a page with 50 results) or using the Web API.
I would also be happy to build in other sorts of limits (e.g. NO concurrent requests, plus enforced small "sleep/idle" periods between queries, etc.).
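To make those limits concrete, something like this throttling loop is what I have in mind. Python is just for illustration, and the delay value is a number I made up, not anything Google has published:

```python
import time

def throttled(items, delay_seconds):
    """Yield items strictly one at a time, sleeping between each so
    there are never concurrent requests and there is always an idle
    period between queries."""
    for i, item in enumerate(items):
        if i:  # no sleep before the very first item
            time.sleep(delay_seconds)
        yield item

# Example: the caller would fetch one results page per phrase, slowly.
# for phrase in throttled(["blue widgets", "red widgets"], 10.0):
#     fetch_results_page(phrase)  # hypothetical fetch function
```

The point is simply that the tool would be deliberately slow and sequential, nothing like a scraper hammering the index.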
I am trying to create an all-around program for those small IT-professional shops that do all sorts of different work for their (local) clients. I am not out to "deep-dig" Google. I am also not in any way helping/promoting black-hat stuff in the software (or anywhere else).
All I would like to do (without getting de-indexed, sued, or annihilated from the face of the earth *G*) is some simple position checking within the first results page.
Also, how are "automated queries" defined? Would it be OK to have the user enter/select a few phrases, retrieve the Google output (first results page for each phrase), and analyze the HTML for website positions?
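For clarity, the kind of analysis I mean is roughly this (Python sketch; the `class="result-link"` marker is invented for the example — Google's real markup differs and changes, which is partly why I'd prefer the official API):

```python
import re

def result_position(html, domain):
    """Return the 1-based position of `domain` among the result links
    found in `html`, or None if it does not appear on the page.

    Assumes each organic result link carries a made-up marker
    class="result-link"; a real parser would need to match whatever
    markup the results page actually uses."""
    links = re.findall(r'<a class="result-link" href="([^"]+)"', html)
    for pos, url in enumerate(links, start=1):
        if domain in url:
            return pos
    return None
```

So one page fetch per user-entered phrase, a scan of the links in order, and a single number back to the user. That's the whole feature.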
Does anyone know / have experience with whether Google accepts non-abusive usage? (As I would prefer not to have my entire website de-indexed, hehe.)
I have no idea if I am being an idiot asking this... as I have no search-engine contacts who can tell me :-)
[edited by: engine at 3:00 pm (utc) on Feb. 8, 2006]
[edit reason] formatting [/edit]