Forum Moderators: open


Google as a Hacker's Tool - once again. (-Wired)


Brett_Tabke

7:34 pm on Mar 8, 2003 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Seems like this story gets rerolled every six months:

Google: Net Hacker Tool du Jour [wired.com] (-Wired)

creative craig

7:36 pm on Mar 8, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Seen that a few times before :)

Craig

Key_Master

7:44 pm on Mar 8, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Google is a hacker tool. Even more so because Google will list links in SERPs that it has been asked not to follow. Why, Google?

jdMorgan

1:00 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Key_Master: I agree. For a possible solution, see this post [webmasterworld.com].

[edited by: jdMorgan at 1:30 am (utc) on Mar. 9, 2003]

mack

1:12 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



I got hit by something very similar. I installed a script and did not remove the install file (bad move). I got a referral from Google from a user who had searched for that file. He then altered my settings and emptied my database.

Tough lesson, but I will never leave an install file on the server again.

Key_Master

1:21 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



That's a good solution, jdMorgan, although it won't do any good for links pointing to 401-protected pages.

The only thing I would like to know is: what is the purpose of listing these links in SERPs? It seems to me only the hackers benefit.

If Google must use them, use them internally.

GoogleGuy

2:22 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



We get these stories every so often when someone leaves data lying around that anyone can get to. A good reminder to use .htaccess passwords, robots.txt, or meta tags to keep Google out of places that you don't want search engines to find. Security through obscuring a URL just doesn't work that well. Using the standard ways to keep bots out works much better. :)
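
A minimal sketch of the three mechanisms named above, assuming an Apache setup of the period; the paths, realm name, and password-file location are hypothetical:

  # robots.txt (site root): ask compliant spiders to stay out
  User-agent: *
  Disallow: /private/

  # .htaccess: HTTP Basic authentication
  AuthType Basic
  AuthName "Members Only"
  AuthUserFile /home/site/.htpasswd
  Require valid-user

  <!-- per-page alternative: let the spider fetch the page,
       but tell it not to index it or follow its links -->
  <meta name="robots" content="noindex,nofollow">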

Key_Master

2:39 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Keeping Google out is not the problem. The problem is keeping Google from listing the link that points to the file Google was kept out of.

[edited by: Key_Master at 2:40 am (utc) on Mar. 9, 2003]

conor

2:39 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



If you leave the door open, thieves will try to get in, with or without the help of Google. A good, regular look through and sweep of your web directories is vital these days.

Key_Master

3:13 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



If you have a file that's password-protected and you use robots.txt to prevent compliant spiders from crawling it, it will still show up in a Google search if you or anybody else points a link at it. It doesn't even matter whether the file exists or not.

GoogleGuy, reply #7 is plain false.
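
To make the scenario concrete, a sketch with a hypothetical path:

  # robots.txt
  User-agent: *
  Disallow: /admin/

  # A compliant spider never fetches /admin/login.php, so no title or
  # description gets indexed. But if any crawled page links to
  # http://www.example.com/admin/login.php, Google can still show that
  # bare URL in its results, whether or not the file even exists.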

conor

3:20 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



If you want to be truly secure, either don't put it there in the first place (best option, IMO) or use IP-based security. It's not rocket science :)
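
A sketch of the IP-based approach, using the Apache access-control directives of the period; the filename and address are hypothetical:

  # .htaccess: expose a sensitive file to one trusted address only
  <Files "install.php">
    Order Deny,Allow
    Deny from all
    Allow from 192.0.2.10
  </Files>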

[edited by: conor at 3:33 am (utc) on Mar. 9, 2003]

jdMorgan

3:22 am on Mar 9, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



GoogleGuy,

The link I cited describes the problem and the fix for it. There is no way to keep Google from listing a link to a page by using robots.txt to disallow that page. The link will be listed in the Google results with no title or description, because the page has not been spidered (as requested by robots.txt). But the link itself will be displayed, simply because it has been found - for example, on any other "allowed" page.

I understand the behaviour given the definition of "indexing", but wish it were otherwise. I do not have any "private" pages on the web, but I have plenty of pages where direct entry from a search engine may provide a confusing or "non-optimal" user experience. Also, I would really prefer to keep my "contacts" forms behind the page that describes their terms of use, and not wave their URLs around, making them easier for harvesters to spot. For these pages, I am now using the method in the post that I cited above, at the cost of decentralized spider control and a bit of extra bandwidth.
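
The cited post is not reproduced in this thread, so the exact method is an assumption; the costs named above (per-page control rather than one central robots.txt, plus the bandwidth of letting spiders actually fetch each page) match the standard meta-robots workaround:

  <!-- Drop the robots.txt Disallow for these pages so spiders may
       fetch them, and put this in each page's head section instead.
       A spider that reads the noindex drops the URL entirely,
       rather than listing a bare, untitled link. -->
  <meta name="robots" content="noindex,nofollow">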

As far as I know, Google and Ask are the only SEs which display this behaviour; others interpret a disallow as "don't mention it." As a result, I have special cases for them in my robots.txt.
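
The special-casing might look something like the sketch below; the layout and bot names are assumptions, not Jim's actual file (Ask's crawler identified itself as Teoma at the time):

  # Engines that read a Disallow as "don't mention the URL at all"
  User-agent: *
  Disallow: /contact/

  # Googlebot and Teoma get no Disallow here; they may fetch the
  # pages and are kept out of the index by a per-page noindex tag
  User-agent: Googlebot
  Disallow:

  User-agent: Teoma
  Disallow: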

I don't spend much time complaining about things beyond my control, I just find work-arounds. If Google decides not to display links to disallowed pages in the future, that would be great, and if not, I'll live with the fix I found. There are bigger problems to be dealt with on the Web, and this one is pretty minor.

Pragmatically, :)
Jim

Brett_Tabke

1:32 am on Mar 10, 2003 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



>Google is a hacker tool.

It's not up to Google to be responsible for server admins' mistakes. If they are low-tech enough to run OSes and servers prone to hackers, it's not Google's job to cover for their mistakes. It's up to the admins to be as good at running their own servers as Google is at running a search engine.

If you publish it on the web - they will find it.