
Google SEO News and Discussion Forum

This 77-message thread spans 3 pages; this is page 2.
Google is Now Asking for a Code if You Use a Rank Checker
ogletree




msg:744328
 6:10 am on Mar 5, 2005 (gmt 0)

I just ran a rank checker and got a Google page telling me I have a virus and that I can't search until I type in a code shown in a picture on the page. I ran it last night, this morning, and many other times over the past year and never got that. Is this new?

 

GoogleGuy




msg:744359
 11:37 pm on Mar 8, 2005 (gmt 0)

Just to echo what lftl said--you get 1000 searches a day, and you can use those searches to dig way down past the top 10 if you want. It's just that each search can only fetch 10 results.
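
To make that paging concrete, here is a rough sketch of digging past the top 10 in steps of 10 through the SOAP Web API. It assumes the SOAPpy library, the GoogleSearch.wsdl endpoint, and the doGoogleSearch call; the license key and the depth are placeholders, so treat it as an illustration rather than a drop-in tool.

```python
# Sketch only: assumes the SOAP Search API (doGoogleSearch) and the SOAPpy
# library; the license key below is a hypothetical placeholder.
from SOAPpy import WSDL

KEY = "your-license-key"
google = WSDL.Proxy("http://api.google.com/GoogleSearch.wsdl")

def deep_results(query, depth=50):
    """Fetch up to `depth` results, 10 per call (each call counts as one search)."""
    hits = []
    for start in range(0, depth, 10):
        page = google.doGoogleSearch(KEY, query, start, 10,
                                     False, "", False, "", "latin1", "latin1")
        hits.extend(page.resultElements)
        if len(page.resultElements) < 10:   # no more results to page through
            break
    return hits
```

Fifty results for one query would use five of the day's 1000 searches, which is the trade-off being described.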

blaze




msg:744360
 12:43 am on Mar 9, 2005 (gmt 0)

Yeah, querying webservers programmatically should be illegal, eh Google?

GoogleGuy




msg:744361
 1:05 am on Mar 9, 2005 (gmt 0)

blaze, the collective weight of all those queries adds up; that's one of the main reasons we ask people not to do it. But the WebAPI is a great way to do 1000 queries/day of whatever you're interested in for your own personal purposes.

Brett_Tabke




msg:744362
 1:13 am on Mar 9, 2005 (gmt 0)

Well blaze, I do see where you want to go with that, and I do agree that there should be a new standard - but until then, all we have is robots.txt. Unfortunately, Google has devalued it by using nonstandard syntax.

[google.com...]

caveman




msg:744363
 1:31 am on Mar 9, 2005 (gmt 0)

... the collective weight of all those queries adds up ...

One might view that as the price to be paid for monetizing our content, on the way to becoming a $50 billion company.

OTOH, I'm sure that those wm queries do put a strain on things. Personally, I'm very respectful of G's servers. It's a citizen of the 'net thing. ;-)

Think it's time to go back into the cave for a while. :)

idoc




msg:744364
 3:49 am on Mar 9, 2005 (gmt 0)

I have had the opinion for some time now that this Google API feature is being used heavily by site scrapers. The outright scrapers get banned on my servers right off, yet I see a lot of my text in the pseudo-scraped AdWords sites. There are a few other engines that have similar features that are mostly being abused for purposes other than what was intended.

Just Guessing




msg:744365
 10:00 am on Mar 9, 2005 (gmt 0)

this Google API feature is being used heavily by site scrapers

My guess is that most site scrapers don't bother with it because it's too slow and clumsy. They probably use screen scraping because it is (was?) faster and easier.

chrisgarrett




msg:744366
 11:00 am on Mar 9, 2005 (gmt 0)

It often seems the API results don't match the "real" Google results. Is this a datacenter thing? Do the API results catch up with what a user would see?

pmkpmk




msg:744367
 11:12 am on Mar 9, 2005 (gmt 0)

I second that. Querying via the API and querying manually gives differing results in almost every case. Usually the API results are off by 1-2 positions (on the negative side).

glitterball




msg:744368
 12:55 pm on Mar 9, 2005 (gmt 0)

I saw that message the other day, and I have never used any rank checking or any other type of automated script of any kind whatsoever.

I immediately assumed that I had a virus or spyware, however after extensive scanning, I have found nothing.

gnomedeplum




msg:744369
 1:49 pm on Mar 9, 2005 (gmt 0)

The only tools I use to check my sites' rankings, apart from the odd query here and there, are the stats of my affiliate programs.

Receptional




msg:744370
 2:19 pm on Mar 9, 2005 (gmt 0)

then how do we get the data? Sites are often dependent on Google rankings for profitability, but we should just cross our fingers and hope we do well rather than seeing where we do well so we can make informed business decisions?

Relying on Google's results for profitability probably isn't an informed business decision. True - it comes in very handy, but only at the whim of the algorithm, which will change whether you like it or not over time.

But - given that we all take the Google gravy train from time to time, tracking Google traffic (or better, sales that convert from Google searches) rather than Google rankings is a better plan.

Dixon.

insight




msg:744371
 3:31 pm on Mar 9, 2005 (gmt 0)

I've counted Google as a great tool since I read this cool research paper in the summer of 1998, and I don't see it getting less interesting anytime soon. As of this morning, Google accounted for 63.77% of leads to our site. And we shouldn't pay close attention? Like Receptional, I could tell you the conversion rate from those hits to sales. Or for any other engine, or partner, or a shopping site. It's not a lot of data if you really know what you're doing with SQL; I have a pile of interesting reports ready for me in the mornings.

And no, 10,000 results (10 results x 1000 queries) is not enough. It's not even in the right order of magnitude. There are lots of ways people search: "widget wx", "widget wx100", "widget w-100"... We want to be doing well for all of them. Just as GoogleGuy suggested, we mine our server logs to keep finding the new ways that customers find things. It's dead easy to use server logs and ranking results to figure out the clickthrough rate for a given position, and that told us it's not important to look deep into the results for one query; it's important to have something for all the variations.
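
As a rough illustration of the position analysis described here, below is a minimal sketch that combines click counts from server logs with positions from rank data; the two input dictionaries are invented examples, and without impression counts this only gives a relative picture.

```python
# Minimal sketch: clicks per query would come from your server logs, positions
# from your rank data; both dictionaries below are made-up examples.
from collections import defaultdict

clicks_by_query = {"widget wx": 420, "widget wx100": 95, "widget w-100": 12}
rank_by_query = {"widget wx": 3, "widget wx100": 1, "widget w-100": 18}

clicks_by_position = defaultdict(int)
for query, clicks in clicks_by_query.items():
    position = rank_by_query.get(query)
    if position is not None:
        clicks_by_position[position] += clicks

# A relative view of how traffic falls off by position.
for position in sorted(clicks_by_position):
    print(position, clicks_by_position[position])
```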

So, yeah, we do a fair bit of scraping. We do it as politely as we can: big delays between searches, more than one originating IP, spreading hits across the datacenters (which has also told us a fair bit about how updates happen), etc. That politeness has apparently mostly kept us under the captcha limits. I could invest time into looking less bot-like (more IPs, more browser-like headers, occasional clickthroughs, etc.), but I'd really rather just pay for access to the API.
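
The "politeness" described above mostly comes down to spacing requests out and rotating where they go. A minimal sketch, assuming a plain HTTP fetch and a purely hypothetical list of hosts (none of these names are real endpoints):

```python
# Sketch of polite throttling: long randomized delays plus rotation across a
# hypothetical host list. Hostnames, paths, and delays are placeholders.
import itertools
import random
import time
import urllib.request

HOSTS = ["results-a.example.com", "results-b.example.com"]
host_cycle = itertools.cycle(HOSTS)

def polite_fetch(path, min_delay=60.0, jitter=30.0):
    """Wait a long, randomized interval, then fetch from the next host in turn."""
    time.sleep(min_delay + random.uniform(0.0, jitter))
    url = "http://%s%s" % (next(host_cycle), path)
    return urllib.request.urlopen(url).read()
```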

Let me say that again: I'd rather pay for access to results than scrape them. We could have an arms race between bot detection and human emulation, but I'd rather just be a customer. As a businessman and generally nice human being I don't want to play cat and mouse, I want to have reliable transactions and I don't understand why Google doesn't want that, too.

azzbacwordz




msg:744372
 3:31 pm on Mar 9, 2005 (gmt 0)

I think this is related to Google's new verification code system: when adding a URL you must now verify you are human by typing in the displayed characters. That lets Google know you are human and not a bot. I guess Google is fighting back. Google has already said that it does not recommend programs like "WebPosition Gold" and the like, which automatically submit your URL to search engines and directories. Google has always turned its nose up at this, and now it finally wants to fight back. Something as simple as a verification code - I would have thought they'd have thought of it sooner, but I guess it is not that easy for them, because I'm sure they work with some of the online submitters like addme.com, submitexpress.com, etc. I wonder if it has affected any of them.

rogerd




msg:744373
 4:31 pm on Mar 9, 2005 (gmt 0)

>>Everybody thought I was nuts.

C'mon, ogletree, it's going to take more than one data point to convince us of your sanity. ;)

Lorel




msg:744374
 5:50 pm on Mar 9, 2005 (gmt 0)


I second that. Querying via the API and querying manually gives differing results in almost every case. Usually the API results are off by 1-2 positions (on the negative side).

Is it 1-2 positions off, or a couple of weeks behind? I'm sure it's the latter, i.e. queries made with your API key are somehow only allowed to hit outdated results.

wackybrit




msg:744375
 11:10 pm on Mar 9, 2005 (gmt 0)

I'm 100% behind insight's comments here. Google needs an API that can be used for commercial purposes. Why isn't there a commercial option? I guess Google doesn't need the money nowadays :-)

In which case... the coder vs. Google arms race can continue, and I'm okay with that, because Google aren't trying that hard. Or, at least, to me it's always seemed easy to get the information required.

pmkpmk




msg:744376
 11:13 pm on Mar 9, 2005 (gmt 0)

Is it 1-2 positions off, or a couple of weeks behind?

Uh, you got me there... need to check into it.

communitynews




msg:744377
 3:18 am on Mar 10, 2005 (gmt 0)

I don't see a problem with using the Web API to check rankings for your own sites--just don't sell that tool/service

Funny, that's what I feel about Autolink. I don't see a problem with changing webpage content for yourself--just don't sell [or give away] a toolbar that does it.

communitynews




msg:744378
 3:32 am on Mar 10, 2005 (gmt 0)

blaze, the collective weight of all those queries adds up; that's one of the main reasons we ask people not to do it. But the WebAPI is a great way to do 1000 queries/day of whatever you're interested in for your own personal purposes.

What a shame all those programming resources have to be dedicated to preventing misuse of Google results. Publishers feel the same way about Autolink's misuse of their work.

stcrim




msg:744379
 3:32 am on Mar 10, 2005 (gmt 0)

Well, what about those of us who mine our logs and then use rank-checking software to keep up with what we find in the logs?

-s-

stcrim




msg:744380
 3:58 am on Mar 10, 2005 (gmt 0)

Perhaps a better question would be: how much is too much? Would queries a minute apart be Google-friendly, or every 30 seconds?

I'm certainly willing to be gentle on Google's resources - but I still need to do my job too.

Any Thoughts?

-s-

kovacs




msg:744382
 11:49 am on Mar 10, 2005 (gmt 0)

inurl:showrefs.php

ric700




msg:744384
 5:09 pm on Mar 10, 2005 (gmt 0)

Insight said:
> GoogleGuy: then how do we get the data [page rankings]? Sites are often dependent on Google rankings for profitability, but we should just cross our fingers and hope we do well rather than seeing where we do well so we can make informed business decisions?

GoogleGuy replied:
> insight, I'd start with your server logs--that's gold that's usually not mined nearly as much as rank checking. Personally, I don't see a problem with using the Web API to check rankings for your own sites--just don't sell that tool/service. :)

Forgive my ignorance, but what kind of "gold" is GG referring to? Could one determine the link popularity for each of one's pages from the server logs?

Thanks,

Ric

grail




msg:744385
 5:27 pm on Mar 10, 2005 (gmt 0)

"Forgive my ignorance, but what kind of "gold" is GG referring to? "

The kind you 'pan' for, as opposed to 'strip mining'.

communitynews




msg:744386
 5:55 pm on Mar 10, 2005 (gmt 0)

Forgive my ignorance, but what kind of "gold" is GG referring to? Could one determine the link popularity for each of one's pages from the server logs?

He means that there is a lot of information in your server logs that can help you improve your site. You can see where your referrals are coming from, what people are using, what paths through the site they are taking, etc.

Page rank info is not available in the logs, but you can see what keywords people are using to find you in a given search engine.
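
As an illustration of pulling that keyword "gold" out of a log, here is a rough sketch that reads a combined-format access log and counts search phrases per engine from the referrer field. The log path, the engine list, and the query parameter names are assumptions about typical setups of that era, not anything stated in the thread.

```python
# Rough sketch: count search phrases per engine from the referrer field of a
# combined-format access log. File name and parameter names are assumptions.
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

ENGINE_PARAMS = {"google.": "q", "search.yahoo.": "p", "search.msn.": "q"}
# request, status, bytes, referrer, user-agent at the end of a combined log line
tail_re = re.compile(r'"[^"]*" \d{3} \S+ "([^"]*)" "[^"]*"\s*$')

phrases = Counter()
with open("access.log") as log:               # hypothetical log file
    for line in log:
        match = tail_re.search(line)
        if not match:
            continue
        referrer = urlparse(match.group(1))
        for engine, param in ENGINE_PARAMS.items():
            if engine in referrer.netloc:
                terms = parse_qs(referrer.query).get(param, [])
                if terms:
                    phrases[(engine, terms[0].strip().lower())] += 1
                break

for (engine, phrase), count in phrases.most_common(20):
    print("%5d  %-14s %s" % (count, engine, phrase))
```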

GoogleGuy




msg:744387
 10:30 pm on Mar 10, 2005 (gmt 0)

What communitynews said. I'd avoid getting obsessed about backlinks/PR and concentrate on useful content/services you can add to your site that will attract links naturally. Server logs give you a great picture of what words/phrases people are using to find your site. Usually looking through those logs will give you ideas about other pages that would be good to add to your site, for example.

MoneyMan




msg:744388
 12:47 am on Mar 11, 2005 (gmt 0)

GoogleGuy,

I, and a slew of others I'm sure, am waiting with bated breath for a response to insight's comments:

I'd rather pay for access to results than scrape them. We could have an arms race between bot detection and human emulation, but I'd rather just be a customer. As a businessman and generally nice human being I don't want to play cat and mouse, I want to have reliable transactions and I don't understand why Google doesn't want that, too.

I'm aware that silence is a response, but I thought I'd prod a bit more.

communitynews




msg:744389
 2:05 am on Mar 11, 2005 (gmt 0)

MoneyMan, your name contains the very thing Google doesn't want here. I think Google doesn't want others making money on something they may want to make money on themselves.

The lack of response to the suggestion they charge money for using the service implies this.

Am I right, GoogleGuy? You don't have to answer if I am right.

caveman




msg:744390
 2:43 am on Mar 11, 2005 (gmt 0)

> Am I right, GoogleGuy? You don't have to answer if I am right

Hehe. I'll have to remember that one. ;-)

kservik




msg:744392
 4:32 am on Mar 11, 2005 (gmt 0)

I got this using WebCEO and Googlebar. It is likely that it was WebCEO that triggered it. I sure did not have any trojans on my machine.
