Forum Moderators: goodroi
A Brussels court said Google Inc. violated copyright laws by publishing links to Belgian newspapers without permission and ordered the company to remove them, setting a precedent for future cases in Europe.
Google, the owner of the world's most-used search engine, must pay 25,000 euros ($32,500) a day until it removes all Belgian news content, the Brussels Court of First Instance ruled today. There's "no exception" for Google in copyright law, the court said. The Mountain View, California-based company said it has already removed the content and will appeal the ruling.
Google Loses Copyright Case & Drops Belgian News Content [bloomberg.com]
Google loses Copyright case [businessweek.com]
Belgian Papers Win Google Copyright Suit [newsday.com]
A court on Tuesday ruled in favor of Belgian newspapers that sued Google Inc., claiming that the Internet search leader infringed copyright laws, and demanded it remove their stories. It ordered Google to remove any articles, photos or links from its sites -- including Google News -- that it displays without the newspapers' permission.
We also have no indication that the Belgian paper alliance has asked Google to pay them for the stories. In fact, it appears from some reports that the paper alliance wants nothing to do with Google whatsoever.
I think for now, the really interesting question is whether more sites will come around to the Belgians' line of thought.
Plans by Google for a Danish news site have been ditched after newspapers complained, while other legal challenges are set to follow in France.
It isn't such an issue in the States, as Google has done a great job of proactively preempting problems like this by forging deals with big media like Time Warner, the New York Times, and the Washington Post.
Also, I think the fine is a bit more than some reports are indicating:
A Belgian court has found the world’s most popular internet search engine in breach of national copyright legislation and levied retrospective fines on Google of up to £24 million.
It is also interesting to note how the story is being framed differently in Europe and on "this side of the pond". The Telegraph story clearly pushes it as an "anti-American" action, while the .be papers talk specifically about intellectual property and copyright.
[edited by: Brett_Tabke at 4:17 pm (utc) on Feb. 13, 2007]
I appreciate you're trying to step back and think about this instead of flying off the handle, but even then this ruling is completely perverse. This case was brought about by people who want to have their cake and eat it, who clearly want the valuable and free exposure that Google gives them but bizarrely want Google to pay them on top of giving them free exposure.
These newspapers aren't giving anything away, they're getting traffic onto their site from Google just carrying their headlines (and perhaps a single sentence excerpt), and as the robots.txt admission proved, these newspapers willingly allowed their articles to be included in Google's database.
If the newspapers didn't want to be included on Google News or in the Google search engine, they could have easily used the robots.txt exclusion standard, and they admitted as much in court. If they didn't like Google's terms, they were free to reject Google's involvement, yet they chose not to do so.
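The exclusion mechanism the posters keep referring to is a single plain-text file at the site root. A minimal sketch of what it would have taken (the domain and paths are hypothetical, purely for illustration):

```
# robots.txt at http://www.example.be/robots.txt
# Keep Google's crawler out of the entire site
User-agent: Googlebot
Disallow: /

# All other robots: stay out of the paid archive only
User-agent: *
Disallow: /archives/
```

Because Google News was fed by the same Googlebot crawler at the time, blocking Googlebot this way would remove a site from both web search and Google News results.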
"In effect, Google is leeching intel off of those sites with little in return except the potential for a referral or two."
Leeching is when someone uses content without consent. These newspapers knew they could exclude Google with robots.txt, but they chose not to do so. That's consent, and that's why this isn't leeching.
"That traffic is going to go to those .be sites anyway"
If that was true, why are these newspapers so keen to be included in Google's index and news service? The only answer is that it's far more than just a referral or two, that it's a very important source of visitors, and that's why it's so perverse that they want Google to pay for the privilege of advertising these newspapers.
Incidentally, almost all newspapers have operated for the past 40 years by rephrasing, repackaging and synthesizing pieces of intelligence taken from other uncredited, unpaid sources such as news wires, magazines or rival newspapers.
When I first got the internet I was shocked by how little original material my newspaper contained; virtually every report was a mixture of news wire stories or revisits of something explored by a rival paper.
[edited by: gibbergibber at 2:45 pm (utc) on Feb. 13, 2007]
It's been said before but all they had to do was block Googlebot from their website. Am I missing something?
My prediction - traffic on their website drops like a rock and they will be scratching their heads wondering what happened.
By listings in the search engine? That is a separate issue.
In the Google News section, you are listed side-by-side with your competitors. One of the oldest marketing tricks you can pull on your competition is to list them alongside others in your space. You simply co-opt them and make them into one of the gang. There is no quicker way of devaluing and hurting your competition than giving them a link in the middle of a list of others in your space.
> easy to delist
Ya, it is opt-out, and it requires a pretty big and costly step on your part to do it, with no indication that it would be successful at stopping Google. As we have all seen, simply telling Google to stop via a robots.txt is no guarantee that they will stop spidering and using your data.
Additionally, the robots.txt format is non-standard and not applicable in these cases. This is especially true since the search engines have never formally adopted it and are instead busy changing it to suit their own services. Nor has the robots.txt format ever been formally recognized by any internet standards body. It is marginally useful today.
Every time one of these cases comes up, someone screams "robots.txt"! It is easily thrown out of any court with the simple request: show me any internet standards body that has endorsed it. If that doesn't get it thrown out, then simply showing all the extensions and proprietary stuff Google, Yahoo, and MSN have added on to the thing should easily do it. There has never been a bigger nonstandard standard in internet history.
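To illustrate the "nonstandard standard" point: the original 1994 convention defined only two directives, and everything beyond that is an engine-specific bolt-on. A sketch (the site and paths are hypothetical) of how the baseline and the extensions mix in one file:

```
# The original 1994 convention defined only these two directives:
User-agent: *
Disallow: /private/

# Engine-specific extensions, never part of the original convention:
User-agent: Googlebot
Allow: /private/public-note.html   # "Allow" is an extension
Disallow: /*.pdf$                  # wildcards and $ anchors: Google/Yahoo/MSN extension

User-agent: Slurp
Crawl-delay: 10                    # honored by Yahoo and MSN, but not by Google

Sitemap: http://www.example.be/sitemap.xml   # sitemaps.org extension
```

A crawler that only implements the original convention can legitimately ignore everything past the first block, which is exactly the interoperability problem the poster is describing.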
> why are these newspapers so keen to be included in Google's index
I don't read that they are keen to be included in the search index. However, that content is listed differently and is aged. The .be paper content in question is the daily stuff they don't want listed, plus the modest-value archives that they can charge fees to access. They don't want to be listed next to everyone else on the news page. The value of a Google News listing to a newspaper lasts only a few hours; after that, the story is buried. A keyword search is completely different from a news search.
A news search on Google brings up topics and lists the stories right beside the other papers and outlets, where you are just another site covering it. The challenge for any newspaper is to put out unique and interesting stories that stand out from the crowd. If you show up next to everyone else, then you are just "one of the crowd". Additionally, Google is using those stories to help find other related stories, which in turn feeds traffic to other sites.
Making a profit from a newspaper is even more difficult in the computer age, and these guys want to shoot themselves in the foot? If I were Google, I'd ban them from both the regular and news SERPs. Let them die a slow death and then get involved with whoever rises from their ashes.
I think that's exactly the kind of power that the Belgians are afraid of.
Why would Google have a right to publish news without paying the fee that news agencies usually charge?!
Do you think that if Yahoo were to use the Belgian headlines in Yahoo News without paying, it would be any different? Or the New York Times website posting headlines stolen from other websites?
I am 100% behind the Belgians on this one. It's about time someone started to remind Google that they are a search engine, not a portal. And if they want to be a portal, they'd better pay, just like a portal would!
I run a news site. It's a small site but growing rapidly. It got listed in Google News this weekend, finally, after about a year of ignored requests.
My traffic jump was ... dramatic. One article that I would have expected to get around 50-60 visitors had over 700. And the CTR went up fourfold as well because, I suspect, people finding the site from Google News are more prone to click on ads. (95% of my traffic has been type-in or from organic links until now.)
Why would you NOT want your news site in Google News? It's essentially free advertising.
And so what if they cache it? The net benefit to me is still more traffic.
When you are already bigger than Google News, and giving Google News your content would a) cost you money, b) cost you long-term customers, c) lose you control of your intellectual property, and, the big one, d) hurt your brand by associating it with lesser news sites. In those cases, you are not building your site, but helping to build the SE's site. The Belgians are just resisting assimilation.
It's all about individual companies making commercial decisions now; the court ruling just stated how the Belgian courts interpret the applicable copyright laws.
After all, I think both are stupid. We all work as hard as we can to be in every SE, but they want to be out? Old media.
Not everyone needs Google. Many big sites couldn't care if they never get another Google referral, ever. I used to be where you are but now have sites that really don't need Google so I can now see the Belgians' view.
Over the last few years I took a leaf out of Brett's book and blocked Google from caching my pages. Then I went further and actually blocked Google from some of my content (via robots.txt). Guess what? They didn't give a sh*t and kept crawling that content. So, what do I have to do to block them from crawling? Why not give my robots.txt the full respect it deserves and not even venture into folders I've blocked? Talk to me about robots.txt when
a) there are some decent changes made to the standards
b) when Google learns to respect it
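For what it's worth, the cache-blocking half of what this poster describes is normally done per page with a robots meta tag rather than in robots.txt. A sketch of the two common variants (illustrative only; this assumes the engines honor the tags as documented):

```html
<!-- In each page's <head>: keep the page indexed but suppress the "Cached" copy -->
<meta name="robots" content="noarchive">

<!-- Or target Google specifically and also drop the page from its index -->
<meta name="googlebot" content="noindex, noarchive">
```

Unlike a robots.txt Disallow, which asks the crawler not to fetch the page at all, these tags require the page to be crawled so the directive can be read, which is part of why the two mechanisms get conflated in threads like this one.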
Would you like it if every spammer in the world used the excuse that they could spam you because you didn't opt out? It's the same principle. Don't assume I've opted in just because I haven't specifically told you (whoever "you" are) that I don't want to opt in.
It's essentially free advertising.
And so what if they cache it? The net benefit to me is still more traffic.
A website which specializes in running 2 lines of "news" surrounded by AdSense or affiliate banners - yeah, I can see why you want Google News traffic. But do you think the AJC gets its visitors from Google? Or that they care at all?
Google News is another way of showing Google's inability to return fresh content on their search engine - I do search Google News, but only when I need to find something "recent", and I know there is nothing recent on the search engine...
The problem really lies in the fact that news aggregation services such as Google news have only recently come into existence (prior to the internet it was almost impossible to provide such information in a timescale that was viable) and copyright law predates these services.
Then take into account that these services are by their very nature going to be borderline on copyright issues (what counts as a legal 'quote' or 'snippet' and what is illegal reproduction is very much open to interpretation).
Essentially it comes down to the fact that without legal rulings such as this one Google really didn't have much to go on when designing their user interface.
Add in the fact that pretty much every country's copyright laws are slightly different and you have stepped into an absolute legal minefield.
You can say that putting up a title and a link would be legal 99% of the time and that reproducing entire articles would be illegal 99% of the time; the issue comes in working out where between those two extremes you should position your service for best usability while taking copyright legality into account.
Yes, almost all websites want to be indexed by Google, but there are some that don't. It is as simple as that. Google has to respect the owners of these websites.
How does Google make money? By placing ads on SERPs. Google doesn't own these websites, it just lists them.
Maybe Google should pay website owners a fee in order to index their websites, since it makes billions of dollars through PPC.
"This isn't about linking. Their argument is not with the linking in the main index, it is with the aggregation and usage of news stories. This is about the usage and control of intellectual property. In this case, that intellectual property is also covered by copyright laws."
I agree, I mentioned this point about a week back....
"Google asserts the right to profit from copying..."
I'd have to finish that for you: "...and holding in perpetuity, anything they please."
My disappointment with Google is that they lay down such stringent guidelines for webmasters to follow in order to rank anywhere in their SERPS.
Now, I'm not against such stringent guidelines. As you know, I've worked hard on the technical side of my site because, all things being equal, the guidelines are based on, IMO, some very solid website design rules. For example, no duplicate content: a) multiple pages of your own copyrighted material on your own site because your CMS is a less-than-complete open-source compilation of code, or b) pages on your site that are also found on another site.
So, what Google wants in its index (at least according to my lay interpretation of the changes in the Google algorithm over the last 18 months, as it pertains to my SERP placements and the SERP placements of others here at WebmasterWorld) is sites with original content. And their algorithm, in effect, filters out (penalizes) all webmasters whose sites don't fit this mold.
Google has all the right in the world, IMO, to set those rules.
My disappointment with Google is that they sure did turn out to be a real "do as I say, not as I do" type of company. Why didn't Google filter out YouTube from its algorithm for providing duplicate content? Why did the Google algorithm not completely drop YouTube from its index for copyright violations? Matt Cutts has sure posted enough about the spam sites he's personally taken down.
I'm more curious about the non-Google implications. What does this mean for all the other aggregator services out there? What are the new fair-use rules for blogs in Belgium? And the rest of the EU (since Brussels is the capital of Europe)?
It seems to me that if you profit from someone else's intellectual property, you should pay for the privilege and if use of that property could harm the owner in any way, you shouldn't do it without permission. This is roughly how the broadcasting of music works I think. Radio stations don't need permission to play a track, but when they do, they must pay for it. On the other hand, you can't sample a track and use it in a new record without permission.
If Google followed this sort of principle, they could spend less on lawyers. That said, in the UK, I don't think there are any recognised rates of remuneration for reprinting news stories as there are with playing music on the radio. Nor is there any centralised method of clearing payments, so following the practice used by the music industry would require considerable organisation (for some pretty small monies).
These newspapers knew they could exclude Google with robots.txt, but they chose not to do so. That's consent,
Nope! Totally the wrong way around, gg. If I don't put a lock on my front door, that is not an invitation for all and sundry to come into my house uninvited and make free use of my possessions.
Copyright is OptIN, not OptOUT. We all turn a blind eye to SEs based on the assumption that they will send useful traffic, but when they start to present that information such that the user no longer has a need to click through to the source site, then they (SEs and any other similar spidering agencies) are definitely crossing the line.
kaled has the right idea regarding re-dissemination of intellectual property. Perhaps the Internet needs a Rights Collection Agency or two, the same as exists for music, literature and other artistic rights, so that SEs and other spidering agencies could pay for the right to use online material. A nightmare to set up and facilitate, I agree, but a thought nonetheless.
No, I think you're totally backwards...
Let's look at the Wall Street Journal - what's the website's name? How can I find it? Do I have to keep guessing the name until I get it right? No, I just go to my favorite SE and search for the Wall Street Journal. Thank you Google!
No one forces anyone to supply online content - the door is open when they put up a site. The WSJ took an interesting approach - you need to pay to read. The New York Times makes you register. If your information is so valuable, these are two very good ways to protect it.
The problem is search engines provide an essential service to users of the Internet - that's why they are so popular. This used to be a quid pro quo deal. But now that some sites figure they don't need Google anymore they want to slam the door.
And for those of you that don't think they needed Google or AltaVista think again. It would have cost you millions of dollars in advertising to generate the traffic they handed you as part of this quid pro quo deal. The only problem is now you want to cut them out.
If your information is so valuable, these are two very good ways to protect it.
I object to having to take specific steps to protect my information. That's not the legal position either. Is Google so big that normal laws don't apply to it? No. What is yours is yours. Tampering with that basic right by saying "what is yours is yours provided you...." is the first step down a slippery slope.
No, I just go to my favorite SE and search for the Wall Street Journal.
And for those of you that don't think they needed Google or AltaVista think again.