My CTR dropped to about half of what it used to be, for a little more than a week. On chat, an AdSense representative said it may be a traffic problem. The way she described it:
"Some of your traffic may include automated clicking tools or traffic sources, robots, or other deceptive software."
I'm not sure why that would suddenly cut my CTR in half, unless they are taking off extra clicks "just in case."
Anyone experience a similar thing? Anyone figure out a way to recover?
Any help is appreciated.
If the clicks are showing up in your report but AdSense is not counting them, then your CTR would be lower.
Many sites are getting hit by automated clicking bots; you are not alone.
If the clicks get through they do not count. G takes the money back and donates it to the poor :)
True, but now they're taking away the impressions too, so you'd think that wouldn't hit the CTR stat so hard.
I hear a lot of discussion about "click bots" these days, but has anyone actually attempted to identify even one of these creatures? There are techniques available for detecting rogue robots, but is anyone actually using them to get to the root of this? I mean, how many robots could be out there hitting a single site at any given time for the express purpose of causing your AdSense account to go off the deep end?
Also, if earnings are dropping because Google is taking back clicks generated by robots, then either you've always been getting invalid clicks from robots and it's only now being detected, or your legitimate visitors aren't clicking the way they used to, which is a completely different problem.
So, we hear a lot about these click-bots but if you actually want to do something about it, a useful thread would be one where people actually identify a bot by IP address and user agent and post it so everyone can take steps to block it or at least watch for it. That would be a valuable list if the problem is as rampant as some would contend.
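If you wanted to start building that kind of list yourself, the raw material is already in your access log. Here's a rough sketch that tallies IP / user-agent pairs hitting a click-tracking path; the path name (`/click-track`), the threshold, and the combined log format are all assumptions — adjust them to whatever your own setup actually logs.

```python
# Tally IP / user-agent pairs from an Apache/Nginx combined-format access
# log for a hypothetical click-tracking path, so repeat offenders can be
# spotted, watched, or shared with others.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'\d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def suspicious_clickers(lines, path_prefix="/click-track", threshold=5):
    """Return {(ip, user_agent): count} for pairs that hit the tracking
    path more than `threshold` times -- candidates for closer inspection."""
    hits = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("path").startswith(path_prefix):
            hits[(m.group("ip"), m.group("ua"))] += 1
    return {pair: n for pair, n in hits.items() if n > threshold}
```

A list built from output like this (IP plus user agent, as suggested above) would at least give people something concrete to block or watch for, instead of rumors.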
On the other hand, if the bulk of this invalid click activity is actually rooted in geographical ad-targeting screw-ups (or other targeting screw-ups) on Google's part (e.g. showing a US-targeted ad to someone in Malaysia who then clicks on it), then the whole bot thing might be a red herring. My guess is it could be a bit of both, though. The bot activity might not be intentional; poor programming could be generating clicks. Also, someone who can't read the content of an ad (because of language differences) might still click on an image ad based solely on the image, so targeting could be a real issue as well. Food for thought.
Would love to see some real evidence that a bot clicked on your Google ad and Google then took it away. Would even more so like to see the bot's IP address and user agent.
And one last thought: it seems like G should be able to detect these bots better than anyone. It would make more sense to not record a click from a known bot rather than report it into a system and then analyze and remove it later. The whole proactive rather than reactive approach might settle people's nerves a bit. I actually think this is where that whole process is headed. These days, I see adjustments in a matter of hours and there's not much in the way of end-of-month adjustments anymore (at least in my account). I know this isn't the case for everyone, which indicates that certain types of adjustments are being made closer to real-time while other types are still tied to the monthly cycle. A great deal of the delayed reporting these days could also be related to the implementation of the various algorithms used in the process of click validation and fraud detection. If you roll an algorithm from being a monthly evaluation into a daily evaluation, you're making the "real time" processes more intensive, which could truly slow things down or even bring them to a halt while testing and catching up take place. Another serving of food for thought.
|Would love to see some real evidence that a bot clicked on your Google ad and Google then took it away. Would even more so like to see the bot's IP address and user agent. |
Google AdSense support has TOLD me so. Several times. And no, their policy is not to give out identifying information. And yes, their policy is to record the impressions and the clicks and take them out later. I have said this over and over; I'm not making this #*$! up, it comes directly from the horse's mouth.
I for one have no geographical screw-ups; my sites are targeted to specific states, 98.5% of my traffic is US, and I haven't seen an ad targeting a different country or language show up in at least five years.
And also, blocking ALL non-US traffic by IP made no difference whatsoever.
I've spent a lot of time on this and had a lot of conversations with people (at Google and elsewhere) who know more about this than either you or I do.
If you don't care to believe it, that's your prerogative.
@netmeg did it knock your CTR down like it did mine? Did you have any success in blocking them or pushing your CTR back up?
I have had no success in blocking them (they're not hitting all my sites, just three) and I don't believe they have had all that much effect on CTR. They actually haven't had a bad effect on earnings so far; I'm actually way up, even after the takebacks. But they have completely trashed my reports.
|I have had no success in blocking them |
Has anyone figured out yet what the hell they think they're supposed to be accomplishing?
To me it is simply a wasteful and pointless exercise.
@Netmeg What I'm asking isn't contrary to what you're saying. I'm asking for the identifying features of a "click bot." Theoretically, if Google can identify them, then so can we. But how?
IP Address? (bots can use different IP addresses)
User Agent? (spoofable)
Behavior? (this gives bots away more than any other identifying feature and is probably the best approach, but it's also the most complex and normally outside the abilities of your typical weekend WordPress webmaster)
I'd be happy just to see any details about such a creature, whether it actually helps in blocking it or not. Too many people rely on other people's observations to support their own FUD rather than think the matter through and ask a few hard questions along the way. Knowing what to look out for and being able to watch it in action could have great value in its own right.
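To make the "behavior" idea concrete, here's a minimal sketch of one behavioral signal: clicks arriving at machine-like speed. It assumes you have your own click logging producing (IP, timestamp) pairs; the function name and the thresholds are illustrative, not any known bot's signature.

```python
# Flag IPs whose ad clicks arrive faster than a human could plausibly
# read and react -- one simple behavioral heuristic among many.
from collections import defaultdict

def flag_fast_clickers(click_events, min_interval=2.0, min_clicks=3):
    """click_events: iterable of (ip, unix_timestamp) pairs.
    An IP is flagged if it produced at least `min_clicks` clicks and
    every gap between consecutive clicks is under `min_interval` seconds."""
    by_ip = defaultdict(list)
    for ip, ts in click_events:
        by_ip[ip].append(ts)
    flagged = set()
    for ip, times in by_ip.items():
        if len(times) < min_clicks:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if all(g < min_interval for g in gaps):
            flagged.add(ip)
    return flagged
```

This is exactly the kind of thing a spoofed user agent can't hide: however the bot identifies itself, inter-click timing is observable on your side.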
So your traffic comes predominantly from the US and you've tried filtering out non-US traffic. Someone with a troubleshooting mentality might find that relevant. For me it narrows down the problem, or at least suggests that it's not entirely geographical. I don't take things for granted the way some do, and being methodical in my problem-solving approach doesn't have anything to do with what I believe.
How all this impacts CTR can't be assessed without considering impressions, obviously. I've seen G take back impressions, so I'm guessing that what Google told you about taking back clicks and impressions makes this whole matter irrelevant to the concept of CTR (other than making it act like a yo-yo over the short term). CTR for me has always been a bit of a roller-coaster ride, so if invalid clicks and impressions are a zero-sum game, then changes in CTR very probably have their roots elsewhere. Of course, as I mentioned above, constantly throwing new algorithms into the mix can make things seem pretty unpredictable when looking from the outside with no clue what is actually happening. Heck, just redefining the term "invalid click" by adding another variable to the mix is enough to do that. That which was once allowed is now verboten and filtered out.
I use fail2ban to block a lot of bots. If you can log the clicks, then you should be able to set up a nice filter.
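For anyone who hasn't used fail2ban before, a filter is just a regex with fail2ban's `<HOST>` tag marking the offending IP, plus a jail that sets the thresholds. Here's a rough sketch; the filter name, the `/click-track` path, the log path, and the numbers are all placeholders for whatever your own click logging looks like.

```
# /etc/fail2ban/filter.d/adsense-clickbot.conf  (hypothetical filter)
# <HOST> is fail2ban's capture group for the IP to ban.
[Definition]
failregex = ^<HOST> .* "GET /click-track
ignoreregex =

# Addition to /etc/fail2ban/jail.local:
[adsense-clickbot]
enabled  = true
filter   = adsense-clickbot
logpath  = /var/log/apache2/access.log
maxretry = 10
findtime = 60
bantime  = 86400
```

With settings like these, any IP hitting the click path more than 10 times in 60 seconds gets banned for a day. You can dry-run the regex against your log with `fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/adsense-clickbot.conf` before enabling the jail.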