December 2003 is significant because anything published before that date that matches a claim in the patent is prior art and can be used to invalidate the patent.
Anyone who implements what is in a patent claim after that date is infringing on the patent and, if the patent is upheld, is in a world of hurt.
A large number of patents (especially software patents) would fall to prior art if more of them reached the courts.
Disclaimer IANAL but I once slept in the woods.
|Watcher of the Skies|
Hmmm, December 2003. What was significant about that? Well, before others could decipher and invoke Google's evil ways of Nov. 15th, they figured to make it (all) public and broad both to cover their ass, and get it in first BUT at the last possible moment.
Does Copyright protect against a Patent?
If one published a paper highlighting the same specifics within G's Patent, would the Patent on those specifics be valid? Could they be challenged?
I agree with Rollo. Good chance this patent application is a strategic maneuver aimed at tripping up the wolves in Redmond or the foxes in Sunnyvale.
Going all the way back to the first page, on the topic of domain names: no matter whether you register your domain year to year or for 5 or 10 years, Googlebot and other crawlers keep the date it was first indexed in their database.
Google couldn't care less how you pay for your domain name.
As for the question does Google favor sites with years of online experience?
It is not that they are favored, it is just how things work. A great many sites registered in the mid 90s got backlinks because that was all there was to do to vote for another site. Lots of awards were tossed out to them.
This also allowed them to build pages of useful content, follow design guidelines, make mistakes, and learn from them.
From where I sit most back side coding has little to do with search results any longer nor do inbound links.
A search on google for the term "credit card"
Top 10 sites have anywhere from 300,000 inbound links to as low as 10.
Guess the link theory goes out the door.
Page count tells the same story: some sites have tens of thousands of pages, others have as few as 36.
So scraping and adding pages like a fool probably won't work.
Title Tag - Most don't use it. Those that do don't include the keyword. Consider: #1 PayPal does not use the keyword term in its title.
Meta Keyword - There are several sites with obvious keyword spam in their meta keyword tags.
We know Google doesn't pay attention to the meta keyword tag, but they also don't penalize for spamming with it.
Meta Description Tag - Also useless other than for the search snippet. Can be spam-laden to no ill effect.
On Page Keyword Use - As normal body content (i.e., not bolded, not in a headline tag, not used for interlinks or outbound links) it seems to have no value, as more than a few sites do not mention the keyword term at all, or use it only once.
On Page Keyword Use - Looking at Google's cache of some of these pages would lead one to believe that keyword spamming in the page content works.
What seems to work:
Keyword spamming on page with hidden text.
Surfing to this site and scrolling down to the bottom reveals a huge list of real estate terms in white text on beige background.
The site ranks on Google's front page for folly beach realty and folly beach realtor.
Low-volume searches with weak competitor pages, I know, but my point is that it does seem to work no matter what the volumes are.
Anyone else care to add to the black hat tactics winning over the gray and white?
[edited by: Brett_Tabke at 9:53 pm (utc) on April 1, 2005]
[edit reason] lets not point at specific minor sites. [/edit]
|...a great many sites registered in the mid 90s got backlinks because that was all there was to do to vote for another site. Lots of awards were tossed out to them. |
|From where I sit most back side coding has little to do with search results any longer nor do inbound links. |
SEO1, make up your mind. Do backlinks count or not?
Of course they do. WWGD - What Would Google Do?
Seo1 - No one factor produces top rankings. It's a combination of factors, so you cannot be so dismissive in your comments. E.g.
>Top 10 sites have anywhere from 300,000 inbound links to as low as 10.
>Guess the link theory goes out the door.
How do you know how many links Google sees? Don't believe link:domain. Link sources, types of links, and anchor text all play a part. Ten killer inbound links can be very effective while 200 poor ones will do little.
|"One reason for such spikiness may be the addition of a large number of identical anchors from many documents. Another possibility may be the addition of deliberately different anchors from a lot of documents." |
I'm glad to see that one spelled out. It may save some poor electrons from dying in those artificially-vary-your-anchor-text threads. Whether Google is any good at this or not, the point is the "spikiness", not the identicalness or non-identicalness.
The spikiness is the result of same anchor text or differing anchor text.
Identicalness can cause spikiness & non-identicalness can cause spikiness.
So your goal will be to maintain a strategy that will walk the middle line.
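For what it's worth, the "spikiness" idea is easy to picture in code. This is only a sketch with made-up numbers and a made-up threshold; nothing here comes from the patent text itself:

```python
# Hypothetical sketch: flag "spiky" backlink growth for a document from
# weekly counts of newly discovered inbound links. The 5x ratio is an
# arbitrary assumption, not anything the patent specifies.

def is_spiky(weekly_new_links, ratio=5.0):
    """Flag any week whose new-link count exceeds `ratio` times the
    average of the preceding weeks. Note that both identical and varied
    anchor text can produce such a spike; only the curve's shape matters."""
    for i in range(1, len(weekly_new_links)):
        prior = weekly_new_links[:i]
        baseline = sum(prior) / len(prior)
        if baseline > 0 and weekly_new_links[i] > ratio * baseline:
            return True
    return False

print(is_spiky([10, 12, 9, 11, 300]))  # sudden burst -> True
print(is_spiky([10, 12, 14, 17, 21]))  # steady growth -> False
```

The point the quote makes falls out of the shape: whether those 300 links share one anchor or 300 different anchors, the spike looks the same.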
I am off to buy my supercomputer that can do a stupendous 100 teraflops, has the neural ability to remember the past and calculate the future of my site, and will help me optimise so that the future is brighter.
I already got it at birth as a gift.
Time to make use of it.
Could somebody explain how the "user behavior" section could be implemented?
framing pages "a la netscape" style?
How could they measure the time spent viewing a page?
"How could they measure the time spent viewing a page? "
Does the term Google Toolbar ring a bell? ;) I hope they rename it Spybar at least.
The Google Toolbar with the PageRank option turned on is enough for G to know how long you spend there and whether you go to Amazon from there.
They have got all the bases covered.
I see Walkman already answered it.
Since the toolbar's PR-fetching traffic has already been sniffed, some people should be able to check whether this kind of information is really being sent to Google. So when they implement this, we'll know for sure.
And for those users who don't, with their JS tracking on the result links it's very easy to see who goes where, and whether they return to the results and click on another result or not.
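To make the "return to the results and click another result" idea concrete, here is a rough sketch of how such a signal could be computed from SERP click logs. The field layout and the 30-second window are pure assumptions on my part:

```python
# Hedged sketch: from SERP click logs of (user, clicked_url, timestamp),
# estimate how often users bounce back and click something else quickly --
# a rough "pogo-sticking" signal. Nothing here is a known Google metric.

def quick_return_rate(clicks, window=30):
    """clicks: list of (user, url, ts) sorted by ts. Returns the fraction
    of clicks followed by another click from the same user within
    `window` seconds."""
    if not clicks:
        return 0.0
    returned = 0
    for (u1, _, t1), (u2, _, t2) in zip(clicks, clicks[1:]):
        if u1 == u2 and t2 - t1 <= window:
            returned += 1
    return returned / len(clicks)

clicks = [("u1", "a.example", 0), ("u1", "b.example", 12), ("u2", "a.example", 50)]
print(quick_return_rate(clicks))  # one quick return out of three clicks
```

A result that consistently sends searchers straight back to the results page would look worse under a measure like this than one they stay on.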
|Since the toolbar's PR-fetching traffic has already been sniffed, some people should be able to check whether this kind of information is really being sent to Google. So when they implement this, we'll know for sure. |
It already is. With the PR functionality turned on, the toolbar tells Google every single page you visit (it has to in order to get the PR from the server).
All they have to do is capture the data that is sent by the toolbar when the PR is requested. That is enough to allow them to build any type of user behavior models they need.
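As an illustration of the kind of model that data would allow, here is a toy sketch that infers per-page dwell time from a toolbar-style visit stream. Everything in it (field names, the gap-to-next-request approximation) is my assumption, not anything Google has confirmed:

```python
# Hypothetical sketch: infer dwell time from a stream of toolbar-style
# page requests (user, url, unix_timestamp). Time on a page is
# approximated as the gap until that user's next request; the final page
# in a user's session gets no estimate.

from collections import defaultdict

def dwell_times(events):
    """events: list of (user, url, ts) in any order.
    Returns {url: [seconds spent per visit]}."""
    by_user = defaultdict(list)
    for user, url, ts in events:
        by_user[user].append((ts, url))
    out = defaultdict(list)
    for visits in by_user.values():
        visits.sort()  # order each user's requests by time
        for (t1, url), (t2, _) in zip(visits, visits[1:]):
            out[url].append(t2 - t1)
    return dict(out)

log = [
    ("u1", "example.com/a", 100),
    ("u1", "example.com/b", 160),   # spent ~60s on /a
    ("u1", "amazon.example", 165),  # left /b after ~5s
]
print(dwell_times(log))  # {'example.com/a': [60], 'example.com/b': [5]}
```

That is the sense in which capturing PR requests is "enough": each request timestamps a page view, and the gaps between them sketch out user behavior.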
So Google is basing SERP rankings on the Toolbar? What percentage of surfers have installed the toolbar? Sounds like all the data collected might be from webmasters' browsers...
Does the FireFox Googlebar report back to Google?
|22. Though the enemy be stronger in numbers, we may prevent him from fighting. Scheme so as to discover his plans and the likelihood of their success. |
23. Rouse him, and learn the principle of his activity or inactivity. Force him to reveal himself, so as to find out his vulnerable spots.
24. Carefully compare the opposing army with your own, so that you may know where strength is superabundant and where it is deficient.
The Art of War
Be skeptical, but also be prepared. If the intent was to scare off SEO's a well placed posting would have sufficed, and been much cheaper than the legal fees involved. So it's more likely it's things they are doing, things they want to do, or things they want to prevent someone else from doing.
I wouldn't think breaking any one of the principles would or should penalize you. However the more you break the more you look like you are trying to game them, and from their perspective should be filtered out. Don't be so focused on the trees that you can't see the forest.
This reminds me somewhat of the MS vs. Lindows case, when MS first sued over the name "Windows". The court found "windows" to be a generic word like 'sky' or 'dirt', or whatever.
But MS kept it up and Lindows finally changed their name to keep from having to duck and dodge. (And yet that battle still rages...)
It seems G has tried to patent any way possible to determine document scores and possible search listing algos, with the possible exception of 'spinning the bottle' to find a list of people you can kiss. I didn't see that one in there.
Early in this thread it was noted that most of this stuff is public, has been posted on this forum, and has been implemented by other SEs. Does G think it can sue other engines in the future for implementing 'generic' algos to determine SERPs? I hope not. G can be a great tool if it doesn't get out of hand.
The fact that Yahoo has come out swinging this year is great news. Competition is the only way to solve the issues of one engine calling all the shots. I've started on a small engine of my own. Yep, it's small, but then G, as great of an engine that it is, started on 'borrowed' machines.
[edited by: Brett_Tabke at 10:20 pm (utc) on April 1, 2005]
[edit reason] thanks - tos #26 [/edit]
|been much cheaper than the legal fees involved. |
The fees involved would be nothing to Google, at least not in the application stage.
|Does G think it can sue other engines in the future for implementing 'generic' algos to determine serps |
Other existing ones - doubt it. But prevent new ones from even trying - why not. The barrier to entry just got $50 million higher.
IMHO a lot of unenforceable and non-implemented (oh, and "never to be implemented") garbage. Some truth may be ingrained, but it needs to be carefully searched for. If anyone wants to break it - dare to start a search engine in the Cayman Islands? The only problem - bandwidth...
Agreed on "The art of war" quote.
"Be no evil - sue the pants off of competition"
> Could somebody explain how the "user behavior"
> section could be implemented?
> How could they measure the time spent viewing a page?
I can think of 5 ways. Here are four:
Toolbars with the advanced feature on,
js click through counters on serps,
serp url click through counters,
Toolbar would be the least accurate because of the low numbers that would use that feature. The click through counters and cache would be 1000 fold the data set that the toolbar could produce. However, the toolbar would produce unique "paths" and user behavior studies.
cookies, cookies, cookies... What goes out, must come back...
> so Google is basing SERP rankings on the Toolbar?
No, it is but one ingredient in what is probably an algo with over 500 data point inputs. What weight toolbar data is given is unknown.
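Purely to illustrate "one ingredient among many" (the 500-input figure above is a guess, and none of these weights are known), a linear blend is the simplest mental model:

```python
# Illustrative only: feature names and weights below are invented to show
# the shape of a multi-signal ranking score, where toolbar data is just
# one small input among many.

def blended_score(signals, weights):
    """signals/weights: dicts keyed by feature name; a missing signal
    simply contributes 0 to the score."""
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

weights = {"links": 0.4, "content": 0.35, "age": 0.15, "toolbar": 0.10}
page = {"links": 0.9, "content": 0.7, "age": 0.5, "toolbar": 0.2}
print(round(blended_score(page, weights), 3))  # -> 0.7
```

Even with toolbar data zeroed out entirely, the example page's score barely moves, which is the point being made: no single factor decides a ranking.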
|> How could they measure the time spent viewing a page? |
I can think of 5 ways. Here are four:
Toolbars with the advanced feature on,
I think Google is tracking click behavior even without the "advanced features" turned on, because when I try a "related" search, all of my sites come up... even though I do not turn on my advanced features. My sites vary in topics... a lot. They are not related in any search-relevant way, only by ownership. I, of course, visit my sites a lot. So anyone tracking my behavior through the toolbar will know that those are related (my) sites.
I think we just need to follow the patent process and see where the dust settles
I just realised something.
In the past 3 months I got a countless number of backlinks from scraper sites using the Google API and simply showing the top ten Google results for a particular keyword.
The modus operandi is that they put their content first and then display the results as related links.
Helps them rank (at least it used to).
So for a site like mine, which got 1,000 backlinks to 1,000 separate pages from just one site, there would have been spikiness.
And I have noticed more than one site doing that.
I wonder how many links I got from these scraper sites that pushed me down.
What's your take on this, guys?
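One way to picture how a thousand links from a single scraper site could stand out: measure how concentrated a page's backlinks are in one linking domain. This is a toy sketch with made-up domain names and a made-up threshold, not a known filter:

```python
# Hedged sketch: what share of a page's inbound links come from its
# single biggest linking domain? A thousand scraper-site links show up
# as a share near 1.0; the 0.9 cutoff below is arbitrary.

from collections import Counter

def top_source_share(linking_domains):
    """linking_domains: one entry per inbound link, naming its domain."""
    counts = Counter(linking_domains)
    return max(counts.values()) / len(linking_domains)

links = ["scraper.example"] * 1000 + ["blog-a.example", "blog-b.example"]
print(top_source_share(links) > 0.9)  # True: one domain dominates
```

A heavily concentrated link profile like that would be trivial to distinguish from organically accumulated links spread across many sources, which may be why those scraper links stopped helping.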
just to put an idea forward about the toolbar tracking thing - one way to prove or disprove that it's sending data even without advanced features turned on would be to use some kind of traffic monitoring tool.
The big story is the suggestion that high CTR in adwords can influence search engine ranking. Am I dreaming, or weren't there some "over my dead body" posts from google representatives concerning linkage between adwords and search engine ranking?
What isn't in the patent... a method of discerning niche authority, and unfortunately this isn't much of a surprise.
|> Could somebody explain how the "user behavior" |
> section could be implemented?
Another possibility: GBrowser in the future.
OK, a few points. Is it just me, or do they try to patent something and its opposite at the same time?
Secondly, I wrote a search engine before Dec 2003 using some of these features. Do I have to go back and remove them, because somehow I time-travelled and stole Google's inventions from the future? Ridiculous.
And lastly, don't forget the preemptive patent-strike wars. This patent might simply be a defensive measure against patents held by Yahoo or MS that could threaten Google. Sort of: "If you [MS] won't let us use single word queries, we won't let you use page age and links as part of your ranking."
I have some real work to do (fixing my roof) so I've only skimmed this thread but I think a few reminders are needed.
1) You can apply for a patent on anything but it may not be granted.
2) Even if granted, a patent may not be enforceable.
3) Google have a long standing policy of not discussing their algos, therefore
4) This can only be intended to confuse - it is certainly not an attempt to limit the technology of their competitors (which is the purpose of a patent).
5) The best place to hide a stolen car is in the biggest car park you can find. In other words, there MIGHT be some real ideas hidden in a load of nonsense but I'm not going to bother looking through it.
Just for the record, I've just restored one page on my site back to #3 on Google from about #25. I did it using the oldest and crudest of SEO. I really don't understand why people believe that Google algos are so fantastic and sophisticated. I've just applied the same nonsense SEO to a page that is nowhere for its target keywords - I'm half expecting to get page 1.
>What isn't in the patent... a method of discerning niche authority
Would this need to be in? Once a set of results for a keyword/phrase has been found, the 'authority' sites could then be identified, resulting in the 'niche authority' for that search.