Forum Moderators: Robert Charlton & goodroi


Google Adds "Answer Highlighting" to Serps. No Need to Visit Websites

         

Brett_Tabke

2:48 pm on Jan 23, 2010 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month Best Post Of The Month



Taking another page out of the Wolfram Alpha and Bing playbook, Google introduces SERPs with answers. You may never need to visit any site again.

Google Press Release [izurl.com]

Answer highlighting helps you get to information more quickly by seeking out and bolding the likely answer to your question right in search results.

zeus

12:38 pm on Jan 24, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Seriously, Google is still too big to ignore, that's how it is. But my suggestion is: let them spider your front page and that's it. Also stop using their tools, and use a proxy for surfing. That way you cut off a lot of the info they collect to expand their business in an evil way. "Do no evil" was a long, long time ago.
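A front-page-only policy like that can be sketched in robots.txt using Google's `Allow` directive and `$` end-of-path anchor. Both are Google-specific extensions, not part of the original robots.txt standard, so other crawlers may ignore them:

```
# Let Googlebot fetch only the root page; block everything else.
# Google picks the most specific matching rule, so "Allow: /$"
# wins over "Disallow: /" for the front page alone.
User-agent: Googlebot
Allow: /$
Disallow: /
```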

jecasc

12:38 pm on Jan 24, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



The relationship between websites and search engines used to be a symbiotic one. Without websites the search engines do not have any search results, without search engines webmasters do not get any visitors. Now in their greed, the search engines slowly turn into parasites, feeding on the websites but giving less and less in return. I think it is time to stop this before the parasite kills its host.

The search engines have introduced new tags in the past, like the canonical tag. It's time webmasters introduce a few tags themselves and demand that search engines obey them. How about a tag like this:

<meta name="use" content="only-fair">

or something like that.

What Google does is like stealing the apples from your neighbour's garden and selling them as your own.

J_RaD

5:00 pm on Jan 24, 2010 (gmt 0)




So all of you know: Google reads WebmasterWorld. They assign people to read these threads. They copy items and distribute them in an internal Google newsletter. So, Google, you're reading this. Yes, you, Sergey, Larry, and Eric. What you're doing is deeply wrong. It's "legal", but it's wrong.

huh? really? how do you know this?

tedster

8:23 pm on Jan 24, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Even if it is true that people are "assigned" to read here - and I certainly don't know if they are - what is wrong about that? ...listening to what people are saying about your company and letting other employees know what you hear? Any business that doesn't know what the conversation is in the public marketplace is headed for BIG trouble.

That doesn't mean our forum is intended to be a way to get a message to Google. This forum is for webmasters to share professional discussion and analysis - it's here so we can help each other understand what's going on.

This particular thread itself is quite interesting in that regard. There are viewpoints shared in many directions about Highlighted Answers and Rich Snippets. As I read through the discussion, I found myself thinking about ideas I hadn't taken in before. For example, TheMadScientist's question about whether this move negatively affects advertising income.

Seb7

8:41 pm on Jan 24, 2010 (gmt 0)

10+ Year Member



It's a bit of a crude engine: it seems to look at a combination of keywords rather than their order, which is quite dangerous, since rearranging the words in a sentence can easily change its meaning.

try "the state or height of building an empire"

To add to this problem, apparently the answers are based on Google Squared, and Google Squared is built on popular answers, not correct answers.

[edited by: Seb7 at 9:16 pm (utc) on Jan. 24, 2010]

Eurydice

8:42 pm on Jan 24, 2010 (gmt 0)

10+ Year Member



J_Rad: "How do you know (about the internal newsletter)?"

I've read many copies of these newsletters. I know a number of Google employees and ex-employees and we've talked about it.

They look for ideas, suggestions, complaints, problems, bugs, etc. They don't reply or acknowledge.

londrum

9:15 pm on Jan 24, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



You can't take the most important parts and publish them as your own... If Google is indeed finding the most valuable piece of information on a website and publishing it without explicit permission, that is a violation of copyright law.

what we've been talking about here, is google finding the answer to a question on your page and then presenting it to the searcher, so they don't have to visit your site.

but they've already gone way beyond that, months ago.

because what happens if they can't find the answer in a usable snippet? you would think that would defeat them, and they would present the user with the best alternative (...like a normal snippet, highlighting the words they searched for).

but what they are doing now is this: they have started writing their OWN snippets. they don't just lift a segment of text from your page anymore, they lift bits and pieces from all over the place and then re-write them into a user-friendly snippet of their own authorship -- which is more likely to provide the answer to the searcher -- and then print it under your listing.

if you do a search for a UK premier league football club, for example, then you will see what i mean. (the site i'm seeing is for the premier league.) they provide the date and score of the last game, and the next fixture -- none of which appears on the page in that format when you click through.

google realises that when people search for the club, what they usually want is the results and fixtures, so they have said 'to hell with the text on the page... we'll just write our own thing, and who cares whether it actually appears on their page or not'.

that is a blatant grab for the traffic, which would otherwise have gone to the site.

ogletree

10:16 pm on Jan 24, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Ajax just stops stupid spiders. If somebody really wants your data, they can get it. Blocking IPs won't do it either; a motivated scraper can get anything.

loudspeaker

11:06 pm on Jan 24, 2010 (gmt 0)

10+ Year Member



I better keep my "answers" long and descriptive, not short and sweet... it looks like quick, specific answers are what GORG is looking for.

Frankly, I don't think we should be in this situation. An improved, more granular robots.txt would take care of it. I've heard about a European proposal for a more "in-depth" successor to robots.txt, one that would specify how the data may be used. I think if we all demand that Google adopt it, they'll have no choice but to comply. The way it is right now, nobody is asking, so they feel they can do whatever they want.

dstiles

12:05 am on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I've been thinking about the robots.txt problem for some time now. It's not really expandable; robots.txt as an entity is fairly useless for modern web sites, if only because most sites can't create it dynamically.

My own suggestion would be a single line at the top of robots.txt that points to another file (eg bots.php), with ALL bots prohibited below that line.

The format of that file (bots.php or whatever) would be based on typical Linux INI files (eg mysql, php): headed sections for each bot/function (it would take a bit of working out).

Within such a structure, optionally changeable dynamically per SE/bot, pretty much everything discussed in this and other threads could be covered for ALL bots. It could include formalised copyright text, type of content, instructions to use or not use specific SE features (eg stuff Google has been trying to push, as mentioned in other threads), "use this snippet only" text, even site title and meta description.

Creating such a file for each web site would be relatively easy - there are enough people around who could create GPL software to create and manage the file.

The most difficult problem would be getting the SEs to accept the new system, but if enough robots.txt files said "go away or read and obey this" then maybe it could be adopted.

Oh, and I want paying! :) This should be managed by some kind of consortium with no axe to grind, although SE personnel should be included on the team.
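A rough sketch of what that scheme might look like. Every name below is hypothetical; no search engine supports any of this today:

```
# robots.txt -- one pointer line at the top, everything else off-limits
Bot-policy-file: /bots.ini
User-agent: *
Disallow: /

# bots.ini -- INI-style, one headed section per bot, as suggested above
[defaults]
copyright = (c) Example Site Ltd. All rights reserved.
snippet   = Use only this approved snippet text.

[googlebot]
crawl    = allow
features = no-highlighted-answers, no-rewritten-snippets
```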

[edited by: tedster at 12:18 am (utc) on Jan. 25, 2010]

Korrd

12:22 am on Jan 25, 2010 (gmt 0)

10+ Year Member



Google is treading on shaky ground. These actions may be a violation of copyright law.

In other words, you're not allowed to find the most important piece of information and republish it. You can't take the most important parts and publish them as your own.

Except facts like the height of the Empire State Building aren't covered by copyright.

If the snippet shown is factually incorrect in a major way and someone who accepts it as true is harmed by the information in any way, how does google stand in law?

Since Google is attributing the source of the facts and not claiming to have vetted the information they shouldn't have any liability.

Eurydice

1:31 am on Jan 25, 2010 (gmt 0)

10+ Year Member



> ... facts like the height of the Empire State Building aren't covered by copyright.

Formulas, recipes, and tables aren't copyrightable, but the statement of facts can indeed be copyrighted. Technical documentation, which is a statement of facts, is copyrighted.

loudspeaker

1:36 am on Jan 25, 2010 (gmt 0)

10+ Year Member



@ dstiles - I looked up the proposal (for robots.txt succession). It's called ACAP. You can go to their site (the-acap.org [the-acap.org]) and browse through the proposed specs (click on INFORMATION in the menu). It certainly seems to be a step in the right direction.

Here's an excerpt:

This document proposes extensions to the Robots META Tags format to express a content owner’s policy for allowing or denying crawlers access to and use of their online content. These extensions do not replace the existing Robots META Tags format, but enable unambiguous expression of permissions, both unqualified and qualified by a range of restrictions(1), and outright prohibitions as to what a crawler and associated automated follow-on processes may or may not do with the resource in which the expression of these policies is embedded...

(Emphasis mine)

I think that's exactly what we're talking about - what "follow-on" processes should look like and what control we have over them.
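For reference, the companion ACAP extensions to robots.txt look roughly like this. The field names are paraphrased from memory of the draft; check the spec at the-acap.org for the exact syntax before relying on any of it:

```
# ACAP-style robots.txt extensions (illustrative only)
ACAP-crawler: *
ACAP-disallow-crawl: /private/
ACAP-allow-crawl: /
```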

[edited by: tedster at 2:14 am (utc) on Jan. 25, 2010]
[edit reason] added a clickable link [/edit]

Nobias

1:41 am on Jan 25, 2010 (gmt 0)



But if people don't visit sites then they won't click ads. I am not sure of the wisdom of this.

They WILL click on Adwords ads (not on your own ads), that's the point. Why share revenue when they can get 100%?

TheMadScientist

1:43 am on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I can't decide if it's sad or laughable that 1,000,000+ websites (billions of pages) might need to make a change to accommodate 10 or so companies, as if those companies couldn't respect the rights of the site owners without the site owners providing explicit instructions for them to do so...

Nobias

2:29 am on Jan 25, 2010 (gmt 0)



OK, I just did an experiment. I entered a general term like '#*$! yy height.' Then I counted the number of words in the snippet text of the 10 results.

Do you realize that the snippet text of 10 results comes to 310+ words? That's a full page of text! Even if I don't find everything under one snippet, I still have 9 others that total 270+ words. And I'm sure their algorithms are designed to show text under each snippet that is as unique and useful as possible, so that the reader can scrape as much relevant information as possible without leaving google.com. Why would I click on any website when all I need to do is read the snippet text of a few selected sites on one page, and then click on AdWords ads when needed?
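That back-of-the-envelope count is easy to reproduce. A toy sketch in Python, using made-up snippet strings rather than real SERP data:

```python
def total_snippet_words(snippets):
    """Sum the word counts of a page of result snippets."""
    return sum(len(s.split()) for s in snippets)

# Ten snippets of ~31 words each: a full page of text, as observed above.
page = ["lorem " * 31 for _ in range(10)]
print(total_snippet_words(page))  # 310
```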

I must say it's pretty genius, but it sucks to be a webmaster or content provider these days...

TheMadScientist

2:36 am on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I must say it's pretty genius, but it sucks to be a webmaster or content provider these days...

Exactly. It's such an oxymoron it's not even funny...
It's really cool and I can't stand it at the same time!

I think the best advice has been given by a few here: Find a way to generate traffic without the Search Engines and you'll be better off in the long-run. Otherwise, you're at their mercy and IMO it's not a good spot to be in.

edacsac

2:40 am on Jan 25, 2010 (gmt 0)

10+ Year Member



ACAP looks pretty cool, although it seems that google routinely cites technical difficulties for implementation. Imagine that, some of the greatest programming minds having continued difficulty in implementing a simple robots enhancement.

kidder

2:59 am on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Google would be very aware of their legal position on this matter. I think it's time internet law caught up with what is happening out there, and we should see a bit of a rollback. At this point Google has breached the understanding and goodwill that once existed between webmasters and search engines. The more we let them take, the more they will continue to take, so this disturbing trend needs to be arrested.

BeeDeeDubbleU

10:05 am on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Since Google is attributing the source of the facts and not claiming to have vetted the information they shouldn't have any liability.

If they attribute the source as the webpage above which their self-compiled snippet appears, and that website does not actually include the information, where does that leave them?

londrum

10:23 am on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



it's not a lot different to the whole google news issue.
when google prints the headline and the gist of the story, it makes murdoch mad because he's losing traffic. and now they're doing more or less the same thing to us -- printing the headline and gist of the answer, losing us traffic too.

zett

11:50 am on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



it's not a lot different to the whole google news issue. when google prints the headline and the gist of the story, it makes murdoch mad because he's losing traffic. and now they're doing more or less the same thing to us

Spot-on.

Google has now declared war on webmasters. They seem to forget, though, that the search engine ecosystem only works when there is a triple win for all participants (end consumer, search engine, publisher). As they now want to take away a substantial part of the publisher's benefit, I fail to see how they will be able to survive.

As I said in a previous post, I am inching towards blocking Googlebot every single day. And I bet I am not the only one.
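For anyone at that point, the standard robots.txt rule is short. Note that it will eventually drop the site from Google's index entirely:

```
# robots.txt: refuse Google's crawler site-wide
User-agent: Googlebot
Disallow: /
```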

cwnet

5:03 pm on Jan 25, 2010 (gmt 0)

10+ Year Member



zett: no, you are not alone...

Hoople

5:38 pm on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



The one good thing I can see coming from this is Wikipedia getting fewer click-throughs. Wikipedia will either drop off the first page of SERPs or be shoved down towards the bottom of the first page.

TheMadScientist

6:04 pm on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



The best thing I see coming out of it is the possible decimation of MFA and 'thin affiliate' sites, which serve no real purpose except to the owner. Much as I don't like to say it, I think this and other recent developments at Google will cause a separation among sites, which will send many looking for jobs and clean up some of the garbage on the Internet...

ecmedia

6:39 pm on Jan 25, 2010 (gmt 0)

10+ Year Member



I don't see any problem at all, because as Google says, "This kind of quick answer only makes sense for certain kinds of searches. For example, the answer to [history of france] can't readily fit in a search snippet." I mean, if a web page simply provides the answer to the question 'what is the capital of france', is it really worth visiting?

edacsac

6:43 pm on Jan 25, 2010 (gmt 0)

10+ Year Member



The best thing I see coming out of it is the possible decimation of MFA and 'thin affiliates' which serve no real purpose except to the owner.

This actually sounds like a good thing. I'm getting tired myself of landing on garbage sites when looking for something. If your opinion also includes the scraper type sites that folks complain about ranking above them while using their content, then another agreement from me.

If the internet had more informational, truly creative, and journalistic folks producing the majority of the content, then it would be easier to protect that content through copyright awareness and explicit spidering rules. Thin-affiliate webmasters don't have the same interest as real content creators in protecting and managing copyright, I would imagine, and would only help the SEs continue to trample copyright.

pdivi

6:57 pm on Jan 25, 2010 (gmt 0)

10+ Year Member



scraper type sites that folks complain about ranking above them while using their content

Which differ from Google how?

TheMadScientist

6:58 pm on Jan 25, 2010 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Yeah, I'm in an odd spot with my feelings about Google, because I think "give 'em a snippet and they'll take a paragraph" applies. So what starts as a snippet could easily snowball, and IMO it's a slippery slope.

If your opinion also includes the scraper type sites that folks complain about ranking above them while using their content, then another agreement from me.

Yes, absolutely.

If the internet had more informational or true creative and journalistic folks displaying the majority of the content

Yeah, there's a whole rant I could go on here, but I'll refrain, except to say: if people built better websites, IMO there would be much less complaining by webmasters... Google, Facebook and Twitter are 3 of the most visited sites on the Internet, and I don't recall them running any TV ads. Twitter may not be, but FB and G are mainly word-of-mouth sites people find useful... I think (hope) quite a bit of success moving forward will be determined by whether you can actually build a useful website, not out-scrape someone and then display your affiliate links and AdWords better.

J_RaD

6:58 pm on Jan 25, 2010 (gmt 0)




This actually sounds like a good thing. I'm getting tired myself of landing on garbage sites when looking for something. If your opinion also includes the scraper type sites that folks complain about ranking above them while using their content, then another agreement from me

You can't take a subjective view of "garbage". Just because you don't like something, you can't say "OK, I'm cool with this because it benefits me." Shady people will always find a way to do what they do. Spam is still rolling around the internet even with all the safeguards going up.

I think parked domains with AdSense on them are total garbage, but GoDaddy and Google don't.

[edited by: J_RaD at 7:00 pm (utc) on Jan. 25, 2010]
