|Google vs. Wordtracker - another one ;)|
I just read an old post about this, and I'm aware that search volumes from keyword tools can be way off, how they collect their data, and so on.
Normally I wouldn't worry too much about it, but I have to decide whether to build two sites or a single one.
Would it make sense for keyword tools such as Wordtracker and Keyword Discovery (to be honest, I don't know where Keyword Discovery gets its data from) to show rather low volumes for "learn Spanish", "listening comprehension Spanish", etc., whereas the numbers shown in Google's tool are a lot higher?
I assume it would, because people looking to learn a language might be (WAY?) more likely to search on Google than on the sources Wordtracker samples. Could this cause a really huge discrepancy between Google's and Wordtracker's data? I mean, people searching for foreign language learning online are probably Googlers, not meta search engine users.
I'm inclined to simply trust Google's numbers and ignore WT and Keyword Discovery, because I've heard the volumes Google shows nowadays are fairly accurate (at least usually not off by more than a factor of, say, 5 :-).
I'm really wondering if the demographic of meta search engine users (i.e., Wordtracker's sample) might cause a really huge discrepancy in some keyword spaces.
I've done research on educationally-oriented keywords, and I've noticed that Google does get a larger knowledge-oriented demographic than the other tools had shown.
WordTracker has such a small sample size and such a poor demographic that I found it useful only for checking middle-of-the-road searches with fairly high volume.
Trellian's Keyword Discovery is an interesting tool, one that I've liked within its limitations. Until the Google Tool added numbers, KW Discovery was the only tool that gave really granular data with numbers. It was, in fact, the KW Discovery numbers I had that told me instantly that the Google Tool was dropping some major phrase data last month (as reported elsewhere, this has been fixed).
That said, the Google tool is now working, and I can't imagine using anything else. I might occasionally double-check some things with the Trellian Tool, but chances are I'll do that less and less as time goes on.
This question seems to resurface every now and then, possibly because there is no definitive answer.
First, it is important to realise, as you stated, that each of the keyword tools (Keyword Discovery (KD), Wordtracker (WT) and the Google Keyword Tool (GKT), to name the most prominent three at the moment) takes its data from very different data sources and sample sizes. According to their respective websites, WT picks up its data from the metacrawlers Dogpile and Metacrawler, and KD from search engine logs and other sources, thought to be ISP logs. Presumably the GKT data comes from actual Google searches.
Second, each of the services "cleans" its data using different methods, and then presents that data in yet another set of different ways.
Which means it is a mistake, in my opinion, to rely on any one source of keyword data for your research. I have consistently found that each of the above resources will not only present differing figures for phrases, but will also present phrases that the other two don't. I.e., relying on any one source for keyword research results in a very lopsided view of the actual market, depending on the idiosyncrasies of the chosen source. You could be missing out on opportunities by not fully researching the market.
You are then left with the challenge of dealing with the discrepancies in figures caused by the differing sample sets. My solution is to homogenize the data by applying factors to the figures based on each source's estimated sample set size, and then averaging the figures for each phrase. What those factors are is up to you, as your needs are likely different to mine.
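The scale-and-average approach described above can be sketched in a few lines of Python. The tool names, raw counts and scaling factors below are entirely made up for illustration; in practice you would estimate the factors yourself from each tool's sample size:

```python
# Hypothetical raw monthly counts per phrase, per tool.
raw = {
    "learn spanish": {"wt": 120, "kd": 900, "gkt": 40000},
    "spanish listening comprehension": {"wt": 4, "kd": 35, "gkt": 1300},
}

# Rough scale-up factors to bring each sample to a common baseline
# (made-up values; derive your own from each source's estimated sample size).
factors = {"wt": 250, "kd": 40, "gkt": 1}

def homogenized_average(counts, factors):
    """Scale each tool's count by its factor, then average the results."""
    scaled = [counts[tool] * factors[tool] for tool in counts]
    return sum(scaled) / len(scaled)

for phrase, counts in raw.items():
    print(phrase, round(homogenized_average(counts, factors)))
```

The point is not the particular factors, but that after scaling, disagreements between sources partly cancel out in the average instead of forcing you to pick one tool's figure.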
Of course, I am only mentioning three data sources in my example above. There are in fact many, many more data sources if you go searching for them, even more if you are prepared to collect the data yourself. ;-)
Once you get your site up and running then you have the opportunity to fine tune the collected keyword data based on your own logs, analytics and onsite search data.
To look at the problem a different way: oranges, lemons and limes all belong to the Citrus family, but are of themselves very different fruits. They each make marvelous marmalade when used separately, but when used together make an altogether different marmalade with aspects of them all - a gestalt that is something unique, and certainly more than the sum of the individual fruits.
I would second the opinion that if you're going to use one tool, then now that Google supplies (approximate) numbers, that's probably the way to go. But I'd also second the opinion that combining data is the most effective approach. I would count Microsoft's Live Search keyword services platform as one of the important sources to use if you have access to it.
Note that the AdWords tool seems to include data from the content network, so it's skewed in favour of large content sites that drive AdSense impressions, which means the numbers you see may not equate to human keyword searches at all. But they have by far the biggest sample, which is hard to ignore. Once you combine a few sources, you dramatically reduce the potential margin of error.
The problem with this is that if I go by the Keyword Discovery and Wordtracker tools, the overall estimated searches for all keywords would be 10,000 per month. Considering I can't rank #1 for any of them, that would probably leave me with a few hundred, maybe over a thousand, actual visitors in the end.
If I go by Google's keyword tool, then I could expect something like 80,000-100,000 searches/month.
I know that Aaron Wall said on his blog that the volumes seemed to be fairly accurate and only a bit off for some terms (I assume he made those statements relative to what we're used to when it comes to a keyword tool's accuracy - "a bit off" meaning more than a few percent off, but still in the same ballpark).
I've seen other blogs that ran tests where the volumes were never off by more than plus/minus 100% (something I could easily live with). I also remember one blog saying those numbers were only good for PPC, not for organic search, and mentioning negative experiences with Google's keyword data from the past, but the arguments looked more like the author was biased or trying to take a contrary position to draw some attention/links...
So I'm kind of willing to go with Google's data here.
The advice to use keyword tools only for relative volumes is sound, I think, but when you're deciding whether to enter a niche, or whether that niche is too small, treating the numbers as nothing but relative volumes obviously isn't much help.
My site is on a particular topic for learners of a certain language (say "spanish medical terms/vocabulary/...", but with a broader audience than that - significantly broader, I think). So I'm also trying to use common sense to figure out how many people might actually end up typing something into a search engine. I'm not sure that really helps in this case, though, because it's only a vague estimate too, and thus doesn't help me decide whether the 80,000 shown by Google is the right ballpark or whether 10,000 is closer - it's really hard to tell using common sense alone.
I'll also ask people I know what they'd type in to search for this kind of stuff, and I'll check my log files/analytics software, etc., but right now it's all about deciding whether to 'enter' that niche in the first place.
I'm also trying to keep the option of broadening the site's theme if the niche turns out to be too small, without losing link authority (though I'll 'lose' anchor text).
So... what would you guys do? Would it make sense that the real volumes are eightfold or so of what Wordtracker is showing? I'd say people using meta search engines are probably somewhat tech-savvy in general, whereas somebody looking to learn a language is probably a LOT more likely to search on Google - but then again, I'd say the same should be true for ANY non-tech topic.
I mean, who uses meta search engines these days? I bet most people I know haven't even heard of one, let alone used one, and it would be hard for me to find a non-tech person who actually uses a meta search engine REGULARLY. Seriously, isn't an extrapolation from that kind of data bound to come out way too low every single time for a non-tech topic (or does WT try to account for this? I heard/read they don't)?
I still haven't been able to figure out where Keyword Discovery really gets its data from. I only read on their homepage that they get it from all the major search engines, without disclosing the actual method (or I'm just too silly to find it).
..thanks for reading this long piece of crap..ehh text :-)
EDIT: I just found the page on the KD site where they explain how they collect their data. Unfortunately, that kind of explanation isn't very useful, as it doesn't really break anything down. Ironically, in their comparison with the AdWords tool, the KD tool gets a lot more results than the AdWords tool - the complete opposite of my current situation, lol.
[edited by: Makaveli2007 at 10:47 pm (utc) on Aug. 11, 2008]
If I'm remembering correctly, KWD uses toolbar data.
Some of their data that involved placename information used to be skewed, I felt, suggesting that not enough of the toolbars were in the US... or at least not evenly distributed... but I believe they've said they've fixed this. Their numbers, btw, aren't searches per month... and I don't know whether they even try to predict that anywhere.
The numbers used to be just accumulated numbers of searches since they pressed the "go" button. They got their granularity simply by counting searches over time. It appears now that they've started the count over, and I haven't looked at the recent documentation thoroughly enough to say what that means, and whether the meaning of the numbers has changed as a result.
WT multiplies its very small sample size up to account for Google's very large market share. Doing this also multiplies the errors... maybe by a factor of 100 or more. I would NEVER trust WT for infrequently searched phrases, nor would I trust it as a predictor of the number of likely Google searches.
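A quick back-of-the-envelope sketch (hypothetical numbers) of why scaling up a tiny sample multiplies the error as described above: counts in a small sample fluctuate roughly like plus/minus the square root of the count (Poisson-style noise), and multiplying by a fixed market-share factor multiplies that fluctuation too:

```python
import math

def extrapolate(sample_count, factor):
    """Scale up a sampled count along with its rough sampling error."""
    estimate = sample_count * factor
    error = math.sqrt(sample_count) * factor  # the noise scales up too
    return estimate, error

# A WT-style scenario: 4 observed searches, scaled up by a factor of 100.
est, err = extrapolate(4, 100)
print(f"{est} +/- {err:.0f} searches")  # 400 +/- 200: a 50% relative error
```

So a phrase seen only a handful of times in the sample carries enormous relative error once extrapolated, which is exactly why infrequently searched phrases can't be trusted there.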
All of these tools suggest relative frequency better than they suggest absolute numbers.
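One way to act on that is to compare tools on each phrase's share of the total rather than on raw counts. A minimal sketch, with hypothetical numbers:

```python
def relative_share(counts):
    """Turn a tool's raw counts into each phrase's share of the total."""
    total = sum(counts.values())
    return {phrase: n / total for phrase, n in counts.items()}

# Hypothetical counts for the same phrases from two different tools.
wt  = {"learn spanish": 120, "spanish vocabulary": 40, "spanish grammar": 40}
gkt = {"learn spanish": 40000, "spanish vocabulary": 14000, "spanish grammar": 13000}

# Raw counts differ by a factor of ~300, but the shares line up closely.
print(relative_share(wt))
print(relative_share(gkt))
```

If the shares agree across tools, you can rank phrases confidently even while the absolute volume question stays open.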
Thanks. I looked a lot of this stuff up in the past. I think I remember Wordtracker's numbers are per day, not per month (or am I wrong about this too? Then it would be even worse - worse by a factor of 30, lol). As for Keyword Discovery, I thought I remembered those were monthly volumes, or maybe I couldn't find anything on it and just assumed that. Thanks for the information that they're just counting searches over time and have started the whole process over again... OK, everything's clear now!
I was going to try the MSN keyword tool, which I didn't really know existed (I guess it's rather new?), but it didn't show anything. I typed in the keyword, pressed the button, etc., but nothing happened. Maybe you have to be a registered user or something? Obviously I'd like to see what MSN thinks in comparison to Google; it would be a better second data source than WT or KD for this purpose, I guess (though it'd still probably be bad, but better than nothing hehe).
What about Yahoo and their keyword research tool inside the PPC account?
That would be another one to add alongside Google and MSN.
Hitwise. But it's prohibitively expensive unless you work for a company that can justify the cost. I'm lucky that way. It's a great tool for both keyword research and competitive intelligence.
I recently canceled my subscription to Trellian's Keyword Discovery tool because the results were so bad. I would have to go back and re-optimize sites. Google's keyword tool seems to be more accurate.
|Trellian's Keyword Discovery tool |
I did, too. The numbers seemed to be way off. It was still a good resource for finding new keywords, though.
Still, $600/annum was too much for what you get.