Forum Moderators: Robert Charlton & goodroi
"An organization of more than 8,000 authors accused Google Inc. Tuesday of "massive copyright infringement," saying the powerful Internet search engine cannot put its books in the public domain for commercial use without permission."[abcnews.go.com...]
I'm sure most of us saw this one coming. Google insists that they will direct readers who want more to booksellers and libraries. (For a nice profit, no doubt.)
[copyright.gov...]
[copyright.gov...]
[copyright.gov...]
As to those who mention photos, "leaked memos", and other material which gets republished (on the news), these fall under the "news reporting" fair use.
Also, any information (memos, photos, reports, etc.) generated by government agencies is (in most cases) placed in the public domain or may be purchased for copying fees only, or requested under FOIA.
My opinion (which is worth the digital paper it's printed on) is that Google has no right to store entire books belonging to individual copyright holders.
This message (c)2005, lexipixel / WebmasterWorld User, no rights reserved. License is hereby granted for WebmasterWorld to republish this message in any and all formats. I hereby grant Google the non-exclusive right to index and archive this message. All other rights not claimed are hereby waived. E Pluribus Unum, Caveat Emptor, and Livin' La Vida Loca.
A library buys the book. The author is paid. But, the library can't tear apart the book, print copies and let their clients use all of those.
As for fair use: Selling ads on the copyrighted material pretty much knocks that out.
Now, what about a micropayment system, Google? Get those authors paid. Sell ads, too. Release the world's information for just a few cents per user. Pay the folks who do the work. Help them, help the user, and help yourself at the same time.
And, get people used to paying for material. Save local journalism. Etc.
Win-win-win-win, etc.
Google is just providing a service, but if authors' work is going to be published online via Google, it only makes sense that the copyrighted work should first be authorized for publication in that format.
I think suing Google is just an attempt to get money. After all, Google does have deep pockets.
Having said that, if Google loses badly, then they may have to rethink the caching of web pages.
Kaled.
As an author under contract with a reputable niche publisher, I think Google is making a big mistake here.
I'd really like to hear our resident WW attorney Webwork's thoughts on this discussion.
If everything was clear cut, we wouldn't need lawyers, judges or juries.
I think that Google must be shocked --SHOCKED-- that people aren't simply peeing in excitement over one of their products this time. They must really be arrogant if not one of them considered the potential copyright infringement issues before they started manually scanning pages. If this turns out badly for them (as I suspect it will), who will be to blame? The GP Product Manager? I have heard from various Google employees that they can't do anything in that company without Brin and Page's stamp of approval, so it would appear that the founders dropped the ball on this one.
Don't get me wrong, it was a great idea. It still is. But, dude, people want to get paid for their work! If authors around the world intended for their works to be free for the public to see and download, they wouldn't bother to sell them in book shops, etc.! One does not have to be a Mensa member to have foreseen the problems that riddled this project from the get-go.
I think that Google must be shocked --SHOCKED-- that people aren't simply peeing in excitement over one of their products this time.
Then the University of Michigan had to reveal the confidential contract in June due to a freedom of information request. That contract confirmed in writing some of the worst fears of publishers regarding Google's arrogance and attitude. For example, Google is implicitly claiming a new copyright on their digital copies, and reserves the right to license them or sell them in perpetuity. Meanwhile, the University of Michigan is very severely restricted as to what it's able to do with its own copy of the digital files.
And as for the question of, "What were they thinking?" -- well, I don't think they were thinking at all. To quote from the Christian Science Monitor, June 27, 2005:
"We had all these cockamamie schemes for how we could get content," recalls Marissa Mayer, director of consumer Web products at Google. "We thought, well, could we just buy books? But then you don't get the old content. We thought maybe we should just buy one of every book, like from Amazon, and scan them all." How long would it take to scan all the world's books? No one knew, so Ms. Mayer and Google cofounder Larry Page decided to experiment with a book, photographing each page so that it could be digitally scanned. "We had a metronome to keep us on rhythm for turning the pages. Larry's job was to click the shutter, and my job was to turn the pages," Mayer says. "It took us about 45 minutes to do a 300-page book."
That's according to CSM staff writer Gregory M. Lamb. Now this is pure speculation:
Larry says to Sergey, "Hey, the University of Michigan loves me. The engineering department gave me a medal recently. I'll bet we could cut a deal with their library." Sergey says, "Great idea, dude. I'll start working on Stanford's library. Stanford will get rich off of their Google options, so how can they refuse?"
Now if you're a lonely lawyer in a Google cubicle who knows a thing or two about copyright law, and you won't be vested for a while yet, and the free lunches are great, are you going to keep your mouth shut? You betcha.
But the point is, there was so much "Google is cool" juice in the culture for such a long time, that Google almost made it happen without serious opposition. They didn't have to be thinking, because they were already used to having everything they wanted dropped into their laps.
Otherwise, their approach to “authors having to opt out” is a really good joke :)
I'd prefer my site not be cached, but I always worry what will happen to SERPs. Some say nothing happens; some say "no-cache" is harmful.
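For what it's worth, the page-level control for this already exists. A minimal sketch, using the robots meta tag that search engines (including Google) honor for suppressing the cached copy — the `noarchive` directive is real, but the surrounding page is hypothetical:

```html
<!-- Hypothetical page head: let engines index the page
     but ask them not to serve a cached ("archived") copy -->
<head>
  <meta name="robots" content="noarchive">
</head>
```

Whether dropping the cached-page link affects rankings is exactly the open question in the post above; the tag only removes the cache link, not the listing itself.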
As far as I can tell, copyright law is only loosely applied to web content, so if the Google cache is a test case, they've already won!
Even private copies of copyrighted works are supposed to be for archival use only, not for boosting efficiency and cutting costs in generating SERPs.
I participate in private forums at an authors association, and the members active there were concerned (some very upset) about copyright infringement from the very first announcements of the Google plan.
There was never any doubt that Google was going to find itself in a battle. It was just a question of which parties would take the lead.
I myself am waiting for laws where the robots.txt file is reversed: I opt search engines in instead of having to opt out. I would like to see search engines (or any site) that run ads on pages containing my URL + snippet pay me a percentage of the profit they make. If a SERP page is nothing more than content scraped from others' copyrighted material, everyone's website on that page should get a percentage of the profit from any ads running on that page. How about I define which information on a page can be used for the snippet, and the percentage of profit I want for use of that snippet?
Sounds unreal, but only under today's Fair Use laws. Courts only apply the law; they don't create it.
Kaled.
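Kaled's "reversed robots.txt" can at least be mocked up with today's syntax. This is a sketch only: the default-deny-then-selectively-allow pattern leans on the `Allow` directive, which is a search-engine extension rather than part of the original robots exclusion standard, and the paths and sections are made up for illustration:

```
# Hypothetical opt-in robots.txt: deny all crawlers by default...
User-agent: *
Disallow: /

# ...then explicitly opt one engine in, for one section only
# (Allow is a nonstandard extension; Googlebot honors it)
User-agent: Googlebot
Allow: /public-snippets/
Disallow: /
```

Note this only simulates opt-in: the legal default is still "crawl unless told otherwise," which is the part Kaled wants reversed.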
Related [authorsguild.org]
The Supreme Court clearly articulated a fundamental position that we have held and protected for years: Others should not disseminate or profit from our creative product without first securing our permission and paying us our fair share.
Google automatically opts you in unless you FIRST opt out. Copyright gives you the EXCLUSIVE rights to that material, as well as its distribution. In other words, copyright FIRST opts you out unless you grant permission FIRST. Google ASSUMES permission FIRST, which is completely against what a copyright is.
Then enters the Fair Use debate, which we have all heard before, so I will spare you all.
"Maybe all of you who are against Google Print should go and create robots.txt on your sites to disallow SEs from crawling."
See statement above. Why should I have to EXCLUDE SEs from crawling? By rights, they should obtain permission FIRST via a robots INCLUSION. Again, enter Fair Use.
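For completeness, the opt-out being objected to looks like this under the current convention. The directives are standard robots.txt syntax; the paths are hypothetical, and the whole point of the objection above is that having to say "no" explicitly assumes consent by default:

```
# Today's opt-out model: you are crawled unless you say "no" explicitly.

# Keep Googlebot out of one section
User-agent: Googlebot
Disallow: /books/

# Or keep every crawler out of the whole site
User-agent: *
Disallow: /
```

Crawlers obey the most specific matching `User-agent` group, so in this file Googlebot follows its own group and everyone else follows the wildcard.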
If you want to throw in an added mix, then how about TOS on individual websites, treated as legal contracts binding any person(s) or entities entering the website, even through automated means? Something to the effect of: Material contained within this website may not be reproduced in its entirety or in part, for commercial or non-commercial purposes, without prior WRITTEN approval.
Sounds unreal, but only under today's Fair Use laws. Courts only apply the law; they don't create it.
Um, in this case Fair Use was defined by the courts over a century before it made it into law. The basis for the fair use doctrine was Folsom v. Marsh in 1841, yet it did not make it into Title 17 until 1976.
Fair use was created to allow the copyright laws to be constitutional. SCOTUS is extremely unlikely to let Congress simply legislate Fair Use away; a constitutional amendment would be needed to do that.