One other detail that may be worth noting...
Whenever our traffic returns from Google, it is at a *higher* level than before. Mind you, not enough to compensate for the huge plunge, but 10%-15% higher than pre-plunge levels. In fact, prior to this latest plunge we had our 2 best Google organic traffic days in the history of the site.
Something seems to have happened yesterday, 2008-07-16: two sites of mine that had dropped to just a few thousand visitors over a year ago went back to 20k daily visitors. It happened at the same time. I redid one of them a few days ago and did not touch the other one. They went down together and rose together.
Same here, seeing a big boost in traffic - looks like something has changed, although I'm not holding my breath that it lasts.
Well this is very weird! My site that lost 80% of its traffic four days ago still hasn't recovered. But my site that lost 70% of its traffic last year around this time just got all its keywords back today. This is the first time traffic has returned to this site in a year. I didn't change anything on the site. Also, both sites have very similar layouts and compete for some of the same keywords. Go figure! Oh well, I'm keeping my fingers crossed.
So would we say this was a penalty change, or just an algorithm change of the kind that goes on at this time each summer?
I am still not seeing "last found" dates in external links updated since July 4th in my Webmaster Tools. This is very odd indeed because there were a few Digg stories last week. Typically Google shows them right away.
|Martin Ice Web|
|One of the key technologies we have developed to understand pages is associating important concepts to a page even when they are not obvious on the page| (Google blog)
Doesn't that mean that on-page factors are fading more and more into the background, and that Google itself tries to work out what a page is about (by link factors and search behaviour: how long did the user stay on the page, did he click on the result, and so on)?
If it's like this, webmasters will have less influence on how their site is positioned (I mean, positioned for widgets).
It's summer and time for testing at Google. This happens every year around this time; I have seen it occur for the past two summers. Think about it... web traffic is dull in the summer compared to the rest of the year. Why wouldn't they run big tests during this time to prepare for quality results when the real traffic comes? All kinks should be worked out by the fall, and your sites, if ranked well before, will return to their normal positions as long as you haven't done anything stupid. Take this opportunity to clean up your metas, correct any other minor issues, and continue to add fresh content or prepare content for the future. Your rankings are based on the overall site value, and that is determined by the rest of the web. As long as the spiders can get through your site, you have unique content with appropriate keyword usage on every page, and a well-structured site architecture, that is all you can really do.
Monitoring the fluctuations and chasing them with your modification *wand* will drive you nuts.
|doesn't that mean, that on page factors are getting more and more in the background |
It could mean the opposite. Google started trying to understand what a page was about years ago. I think many folks here saw it as an attempt to apply semantic web principles to the algo. So the quote you use could mean they are looking for on-page semantic richness around a topic.
It could mean that they are trying to understand what people are looking for when they search for certain terms. "Do most people who use that term want some information or to buy something".
FWIW I don't think that this means that on page factors are necessarily reducing in weight.
|Martin Ice Web|
You are right, and not.
What if Google puts every single site in a box: a) sells widgets, b) information on widgets, c) produces widgets (ACME)? They would do it using the semantic richness around a topic that you mentioned, combined with analysis of previous queries.
Now, Google likes to see your site in box b, but your intention is to sell the widget (box a). What kind of tools do you have to get into box a when you are in box b? You would have to rewrite your complete page, but you don't know which of the semantic on-page factors pushed you into box b?!
<Moved from another location>
Does anyone else notice this?
For one of our main keywords... let's say "foo". I can search Google and be page 1, number 1 or 2. I can do the same search hours later or a day later and I am now on page 2. This changes back and forth all the time and is quite annoying.
Just wondering if anyone else sees this.
[edited by: Robert_Charlton at 4:24 pm (utc) on July 17, 2008]
Yes, I am seeing that as well
|This changes back and forth all the time and is quite annoying. |
But is it annoying to users? That's the only important question from Google's point of view.
In some cases, of course, "rankings bounce" might be annoying to users:
- The tourist who's accustomed to searching for a metric converter every time he wants to change miles to kilometers might be annoyed if his favorite conversion tool (which he hasn't bothered to bookmark) isn't on page 1 of the SERP.
- The geography student who's used to finding a Wikipedia article near the top of the search results whenever he searches on a city name might be annoyed if Wikipedia's article on Widgetville or Whatsitberg is nowhere to be found.
But in many cases, the mix of results on page 1 of the SERPs won't matter to users:
- Does the person who's looking for a reply to a question about Windows networking really care if a forum thread with the answer is at willies-windows-networking.com or wallys-windowsnetworking.com? As long as the user gets the answer, isn't that enough?
- If a user is looking for a car part to fix his 1953 Chevy, does it matter to him if the source listed at the top of the SERP is yoursite.com or mysite.com? Isn't he likely to be satisfied if he finds his chrome door handle?
The advantages of randomizing equally relevant and valued results could outweigh the disadvantages, from Google's point of view, at least in many instances. As long as expected results (such as a Wikipedia article or another highly trusted and established page) are easy to find, randomizing results for a bunch of pages that have nearly identical scores for relevance isn't likely to upset anyone except the website owner whose page about green widgets can no longer be counted on to rank consistently in the top 10 results for "green widget."
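If Google really is shuffling near-ties, the mechanism is easy to sketch (a toy model only - the `epsilon` threshold and the function are made up for illustration, nothing here is Google's actual code): sort by score, then shuffle any run of results whose scores sit within a small epsilon of each other.

```python
import random

def rank_with_tiebreak(results, epsilon=0.01, seed=None):
    """Sort (url, score) pairs by score, then shuffle runs of results
    whose scores differ by less than epsilon -- a toy model of
    'randomizing equally relevant results'."""
    if not results:
        return []
    rng = random.Random(seed)
    ordered = sorted(results, key=lambda r: r[1], reverse=True)
    out, group = [], [ordered[0]]
    for item in ordered[1:]:
        if abs(group[-1][1] - item[1]) < epsilon:
            group.append(item)  # still within the near-tie band
        else:
            rng.shuffle(group)  # randomize the finished band
            out.extend(group)
            group = [item]
    rng.shuffle(group)
    out.extend(group)
    return out
```

Under this model a clear leader (Wikipedia, say) always stays ahead, while the near-identical green-widget pages trade places from one query to the next.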
As Tedster recommended, we keep digging and digging to see whether we can find something that would explain the big drop.
One thing I came across this afternoon that is both surprising and odd is that our meta stats in GWT have EXPLODED (~20x) over the last month. The weird part about it is that I don't think we are doing anything different in this regard than we have been doing for at least 2 years. Almost all the dup metas and titles I see listed are duplicates for a good reason: we have index pages that go on for many pages, and articles that are multi-page too. I don't think it makes sense to change the meta title across pages of the same content.
Two questions for the group:
-For those reporting traffic oddities, are you seeing any jump in your meta stats in the GWT?
-Are we doing this wrong when we have multiple pages? Should we force some sort of artificial differentiation such as article title <page number>? That just seems weird to me, but I am willing to be told I am wrong.
|Martin Ice Web|
|For those reporting traffic oddities, are you seeing any jump in your meta stats in the GWT?|
Yes, and it's growing while we fix the reported problems. You fix one and get two new ones.
Now to draw a picture:
- In GWT there is more meta information, I had a lot of dup titles suddenly
- Ranking jumping around
Does that mean Google is broken and trying to fix things? Or are they communicating with us? Tedster gave me some interesting ideas, saying Google may want to change SEO, which made me think that Google is trying to force people into GWT to check what might be right or wrong. Think of it: if you had a search engine of this scale, how would you want to communicate with webmasters? IMO they are sending signals, and they are trying to make you build your site the way they want it and the way that is best for their spidering. Think of all the papers written about efficient spidering; the web is exploding faster and faster, and spidering relevant stuff gets harder and harder. They want to avoid spam, which is most of the time the easiest to spider, but they also want to get at the hidden web, which may contain precious content. This communication may be a step towards the semantic web, trying to obtain the information in a more structured way.
I'll give you an example: if you search in Google News, menu information and other unrelated "breaking news" headlines can also be found in articles. This noise data is annoying for a search engine (I am building search engines for unstructured data myself and learned a lot from this).
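To illustrate the kind of noise filtering I mean (a crude heuristic of my own for illustration, nothing like what Google actually runs): menu entries and headline lists tend to be short fragments, while body sentences are long, so a dumb word-count threshold already removes a lot of the junk.

```python
def strip_noise(lines, min_words=6):
    """Crude boilerplate filter: keep only lines with at least
    min_words words, on the theory that navigation menus and
    'breaking news' headline lists are short fragments while
    real article sentences are longer."""
    return [ln for ln in lines if len(ln.split()) >= min_words]

# Example page text with typical menu/headline noise mixed in:
page = [
    "Home",
    "World News",
    "Sport",
    "The widget factory reopened on Monday after a six-week strike.",
    "Related: Breaking news",
    "Union leaders said the new contract covers all two hundred workers.",
]
print(strip_noise(page))  # only the two body sentences survive
```

Real extractors use much richer signals (link density, DOM position, repetition across pages), but the principle is the same: separate template noise from content before indexing.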
They may even make us - like in AdSense - mark the relevant content on a page, but this would have to happen to 80% of the web, and therefore they need us using GWT.
In such a case the old PageRank algo may play a different role in the ranking. Other factors will be more important: TrustRank of the domain, TrustRank of domains linking in, trend of widget on your pages, trend of widget in the entire link graph, local relevance, local relevance of backlinks vs. visitors vs. searches, bounce rate of your site vs. other sites, bounce rate of one term on your domain vs. another term on your domain, etc. I think this is happening already, as my relatively good rankings get better and better (good clicks, low bounce rate, without external links).
Sorry for the long post.
Organic traffic returned to my site a couple of hours ago. In fact as has been the pattern, it returns with ever higher levels of traffic.
That makes it 6 trips to purgatory and back in 6 weeks. If this goes on much longer, I may develop bipolar disorder.
1) My situation might be somewhat unique given that as I mentioned we changed our cross-linking structure about 8 weeks ago. I don't know what the process is for how Google recalculates page rank across internal cross-links, but it could be the case that they remove page rank faster than they add it which could be part of the effect I have seen. But making such a drastic change at the same time that Google is testing new "user intent" algorithms may have created an especially volatile mix.
2) I often watch organic queries as they come across the server. We have it set up so that I can see the query string and then click on the destination URL to see where the person landed. In the past I thought there was a certain amount of hit-and-miss, i.e. for some keyword combinations it didn't seem logical to me that a given query would lead to a given article (though user intent based on keywords is usually pretty hard to assert. Sheesh, I have been married for a long time and I still don't know what my wife is saying to me half the time, so sussing out user intent based on a few words seems a fool's game). Regardless, be it confirmation bias or sampling error, the queries coming in this morning appear to have a higher quality rate than I remember seeing before.
I'll keep posting on any other developments...
Nice one Gregg - "bipolar disorder"! But six times should surely be "sixpolar disorder". Or even "sexpolar disorder".
Oh, and on the subject of "Sheesh, I have been married for a long time and I still don't know what my wife is saying to me half the time, so sussing out user intent based on a few words seems a fool's game", you are surely exaggerating? You can really understand what your wife is saying to you half the time? Lucky man.
Looks like I have hit the June 4 slap again today, BUT I have to echo cj94111's post in that I have seen a massive change in WMT numbers and now have thousands of duplicates that were not reported before. They too are multi-page, single-topic listings, so the titles are duplicated but the content is not! I think I will add a random number to the end of every title and meta description and that will get rid of all of Google's moaning over my pages - let's build sites for the search engines and not visitors in future!
The other recent change I notice is that Google is spidering from far more IP addresses than I have ever seen before. I have also had another PR4 site completely removed from the index for no apparent reason.
This is crazy.
I report the following in case others have seen something similar.
For one of my sites, WMT >> Diagnostics >> Content Analysis has just started to report short meta descriptions on pages that had their meta descriptions changed from those reported in April. Pre-April they did have short meta descriptions; now they are different and longer. The report says last updated Jul 18th 2008.
More evidence that Google is working on broken data sets perhaps. Or maybe just a bug in WMT.
From my own company site, I would say broken data set. I noted this elsewhere (not sure this thread or another).
Google is currently displaying the correct site title for the listed home page, but an old meta (or other) text from a VERY old listing in DMOZ (we never updated because DMOZ would not accept our submission form due to some tech error - years ago now!). So Google has got split data. There is no way the two items could have been listed together, as the whole site was changed at one time to a different usage. And the site title certainly isn't from DMOZ.
So I would say Google has screwed up at least some of the descriptions somehow and pulled back replacements from DMOZ where possible (and where permitted by the absence of noodp).
Someone noted elsewhere that on different computers at the same location they were getting different results for the same search.
We have three active machines here: Ubuntu desktop, 2000 Pro and 2000 Server. All are fed from a common internal LAN connected to the internet via a single Router and a single IP (routed through Cambridge UK). The two 2000 machines are both using WMT and the Server was active at the time of this "test". Ubuntu has never accessed WMT.
Using google.co.uk, for two sets of keywords on a single site, searched both through the Web and UK options, Ubuntu and Server both registered the same: Web/UK 10/11 for one keyword set, 16/14 for the other set. The Pro machine registered 2 points lower for each one. We repeated the test twice with the same results.
The only real difference I could find apart from the OS was that the Pro monitor faces South and its machine is aligned North/South, whilst the other two machines are aligned East/West nine inches away from each other with their monitors facing North and West respectively.
Oh, and my wife was driving the Pro machine whilst I drove the others - can google determine gender from reflections in the monitor or pressure on the keyboard?
|an old Meta (or other) text from a VERY old listing in dmoz |
You can force Google to drop the DMOZ information by using this meta tag:
<meta name="robots" content="noodp">
|I will add a random number to the end of every title and meta description and that will get rid of all of Google's moaning over my pages - let's build sites for the search engines and not visitors |
Adding numbers - not random, but which actual page of results - to the BEGINNING of the titles would also serve users who see the title in their tabs - especially those (like me) who often open each page in a new tab or window.
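To sketch what that might look like (a hypothetical helper of my own, not any CMS's real API), putting the page number at the front of the title is a one-liner, with page 1 keeping the plain title:

```python
def paginated_title(base_title, page, total_pages):
    """Prefix multi-page article titles with the page number so
    browser tabs and SERP listings are distinguishable; page 1
    keeps the plain title."""
    if total_pages <= 1 or page == 1:
        return base_title
    return f"Page {page} of {total_pages} - {base_title}"

print(paginated_title("Green Widget Guide", 1, 4))  # Green Widget Guide
print(paginated_title("Green Widget Guide", 3, 4))  # Page 3 of 4 - Green Widget Guide
```

That also answers the duplicate-title reports in GWT without resorting to random numbers: the titles differ per page, and the difference actually means something to users.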
|different computers at the same location they were getting different results for the same search|
On Mozilla SeaMonkey or Mozilla Firefox, install the ShowIP extension to see where the search data came from. I'll bet a different IP serves the results to the other machines.
My traffic and pre-6/4 rankings have returned for the second time today. It's curious they returned last Friday also and it lasted for 3 days. I haven't changed anything on the site this week.
Tedster - yes, I added it a couple of days ago. Never needed it before, google always got it right!
g1smd - Didn't have the time to check. It seems odd that two out of three would be served from different data centres to the same IP but I suppose it could be.
It should not matter which IP served what. If Google is good at geo-targeting, you should see similar results. If geo-targeting is broken, then it would explain a lot of the fluctuation. These results are way too unstable these past 30 days, and Google has remained quiet. Either they are fighting some paid links, or they made some changes to the algo and things are unstable and they are working on it.
Look at this thread that I started:
Notice that the part of the algo that determines Sitelinks has the wrong keywords. I have been studying some other websites and am seeing it more and more. I bet the part of the algo that picks up the key terms for a webpage is the same part that feeds Sitelinks as well. I suspect that changes were made recently and things are not calculating quite right.
|Something seems to have happened, yesterday 2008-07-16: 2 Sites of mine which have been dropped to just a few thousand visitors over a year ago went back to 20k daily visitors. |
I noticed the same thing. A site of mine that used to get 20k - 25k organic referrals/day a year ago slowly dropped to about 4k/day until three days ago. Now it's back up to the 28k - 30k/day range and on keywords that we ranked very well for a year ago.
I mentioned I lost #1 ranking for a phrase after adding lots of content. Actually I disappeared virtually completely. So I did an experiment. I reverted the page to its original content pre development. Yesterday I returned back to #1. (Reversion uploaded about a week ago.)