
European Search Engines Forum

    
New German Copyright Law Would Force Google to Pay for Listings
Brett_Tabke
11:53 am on Aug 22, 2012 (gmt 0)

Although it's a pro-Google article, it does discuss the state of the proposed German law that would require Google and other search engines to pay to list websites:


the draft law would make search engines pay for reproducing newspapers’ headlines and first paragraphs. So, take those away and the links are fine. Even if nobody will have the faintest idea what they’re linking to.

Google’s North Europe communications chief, Kay Oberbeck, sounded off about the issue this morning in a guest post for a German press agency. That was in German, of course, so I got him to vent in English as well [gigaom.com...]

 

lucy24
8:37 pm on Aug 22, 2012 (gmt 0)

If anyone can figure out who wants or benefits from this proposed legislation, please share. I read the linked article and remain at a loss.

:: vague mental association with a long-ago editorial in The Washington Monthly that began "In Florida, the doctors and the trial lawyers -- two groups who richly deserve each other --" etc. Mutatis mutandis. ::

Andem
9:22 pm on Aug 22, 2012 (gmt 0)

From the article:
So now Google is furious for being picked on, when it actually drives traffic to the publishers.


Services like Google news are taking over the role of start pages for many people. Google does this not by producing their own content, but by using content created by others.

In common-law countries, there are usually frameworks in place that allow for fair use. Those doctrines *do not* apply in other countries, so German newspapers asking Google for money isn't really surprising.

Germany's legal system (rooted in the Grundgesetz) might not allow for any commercial, automated, mass use of copyrighted material in the form of headlines and opening paragraphs.

Didn't Moreover go after people many years ago for using the headlines that they themselves aggregated?

incrediBILL
3:40 am on Aug 23, 2012 (gmt 0)

I'm thinking Google should just dump those newspapers and see what happens when they have a lot less traffic. I think the newspapers will come crawling back and beg to be included without making Google pay.

Google could simply stop indexing, or including in Google News, any company that doesn't explicitly permit Google via robots.txt, and could theoretically add an EULA stating that explicitly authorizing Google in robots.txt grants permission to display headlines and snippets.
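The opt-in arrangement incrediBILL describes could be expressed in a publisher's robots.txt roughly like this (Googlebot-News is Google's news crawler token; the /news/ path is a hypothetical example):

```
# Sketch: explicitly authorize Google's news crawler
User-agent: Googlebot-News
Allow: /

# Keep other crawlers out of the news section (hypothetical path)
User-agent: *
Disallow: /news/
```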

Then there is another alternative: Google could start charging newspapers for inclusion and turn it into a true stand-off, where the newspapers have to pay Google to index them in the first place.

It could get ugly :)

Google does this not by producing their own content, but by using content created by others.


Without the portal, most people wouldn't see the content in the first place unless it was their primary newspaper to start with. There is simply too much content to deal with, and without tools such as Google News most people would never see it, so these guys are playing with fire, IMO.

cabbie
7:16 am on Aug 23, 2012 (gmt 0)

I would love for Google to start paying the websites it displays in its SERPs. A sliding scale from 1-100, or perhaps a PPC model.
Webmasters, unite and take back the world wide web from Google. Only half j/k.

graeme_p
7:57 am on Aug 23, 2012 (gmt 0)

The newspapers think that they are the only worthwhile source of news, so Google will have no choice but to pay them. Completely wrong of course!

sem4u
8:44 am on Aug 23, 2012 (gmt 0)

If the newspapers do not want their content to be indexed by Google, they should just block its crawlers through robots.txt. Of course, the traffic to those sites would nosedive.
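The block sem4u suggests is a two-line robots.txt entry per crawler (Googlebot and Googlebot-News are Google's real user-agent tokens; blocking both is a sketch of the full opt-out):

```
# Sketch: keep Google's web and news crawlers out entirely
User-agent: Googlebot
Disallow: /

User-agent: Googlebot-News
Disallow: /
```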

jecasc
8:49 am on Aug 23, 2012 (gmt 0)

Even if this becomes law - which I doubt - it will be totally useless for the newspapers.

Google will throw them out and only let in those that grant a free license by including a specific entry in robots.txt.

Since the law does not change the competition in the online news sector, they will all have to do that.

StoutFiles
11:21 am on Aug 23, 2012 (gmt 0)

So what if Google decided they would pay...and only paid to include one paper? Wouldn't all the other newspapers try to sue Google for unfairly ignoring them?

This is all very silly, but we do need to find a way to address Google using too much content in its news feed, which is basically stealing the content.

oddsod
3:54 pm on Aug 23, 2012 (gmt 0)

A "pro Google article"? It could almost have been written by Google themselves! :)

The plaintive "but (Google) actually drives traffic to the publishers" is in italics. How dare Google be asked to pay when they are sending traffic?! :)

But what about the traffic that Google doesn't send? The visitors whom Google satisfies at their own properties? Google has no real incentive to send visitors to a third party site if they can answer the question themselves or satisfy the user's curiosity. We've seen them do it with definitions, jokes, weather, currency and other data!

Maybe an equitable solution would be for Google to pay for each visitor it doesn't send. Say it shows the news to 100 visitors, of whom 80 follow the link to the newspaper. Google is then charged for the 20 visitors it retained on the strength of someone else's content. There would then be less motivation for Google to hog traffic for itself, and an incentive to actually send that traffic on. That Google sends some traffic is not a good enough reason for it to have carte blanche over your content. What if it sent only 50%? Or 10%? Or 0.01%?
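The billing model sketched above is simple enough to put into a few lines; the per-visitor rate here is an invented placeholder, not anything from the proposal:

```python
def unsent_visitor_charge(shown: int, clicked_through: int,
                          rate_per_visitor: float) -> float:
    """Charge the aggregator for visitors it showed content to but did not send on.

    shown: visitors who saw the headline/snippet on the aggregator's page
    clicked_through: visitors who followed the link to the publisher
    rate_per_visitor: hypothetical fee per retained visitor
    """
    retained = shown - clicked_through
    return retained * rate_per_visitor

# The example from the post: 100 shown, 80 sent on, so the
# aggregator pays for the 20 visitors it kept.
print(unsent_visitor_charge(100, 80, 0.05))  # -> 1.0
```

The incentive follows directly: the more traffic the aggregator passes along, the smaller `retained` becomes and the less it owes.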

As Brett has said so many times in the past, it's not our responsibility to stop A, B and C via robots.txt. So you stop Google today, Pinterest tomorrow, somebody else the day after. And webmasters have to take responsibility for keeping track of all these parasites and stopping them one by one? That's technically possible, and it is the current state of affairs, but it's not a very elegant solution.

All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
© Webmaster World 1996-2014 all rights reserved