Botnets
How to recognise & block
keyplyr

Msg#: 4689324 posted 12:01 am on Jul 21, 2014 (gmt 0)

Botnets have become an increasing threat to webmasters. They are made up of compromised accounts from various sources: DSL, Cable Broadband, ISPs, Telecoms, Mobile, Server Farms, Cloud, Colo, Corporate and Private lines... the list grows.

It's easy to feel powerless to block these attacks. They usually have valid, normal request headers and human-looking UA strings.

I was one who thought it was too much to deal with and for a long time took no action, but a few months ago I started adding these IP addresses to my block list.

At my sites I have noticed this behavior from compromised IP addresses:
One particular page is requested w/ no other supporting files, no other requests.
The above hit may also be accompanied by a hit for a php file or other dynamic file type.
Consecutive page requests (HTML files only) each with a different UA string.
Consecutive directory file(s) requests.
Various requests for wp (WordPress) files
Various requests for login or admin files
The above hits come from ISP type ranges, but also come from server farms.

Sometimes prior to the scrapes/attacks, the compromised accounts are tested (YMMV.)
One particular page is requested w/ no other supporting files, no other requests. This is to 1) check whether the compromised IP address is still valid, and 2) evaluate the victim's server response.
Different bad actors may request completely different pages for much the same reason.

- Then the BIG ATTACK comes -

My botnet IP list (in addition to server farm IP ranges) grows each day, but I have successfully blocked many attacks. A few new IPs do make it through, and of course get added to this list.
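
For anyone who wants to try the same approach, here is a minimal sketch of what such a block list can look like in .htaccess (Apache 2.2 mod_authz syntax; on Apache 2.4 the rough equivalent is Require not ip). The addresses below are documentation placeholders, not my actual list:

# Minimal sketch of a botnet / server-farm block list (Apache 2.2).
# The addresses are placeholders -- substitute the IPs and CIDR ranges
# you have actually caught misbehaving.
Order Allow,Deny
Allow from all
# a single compromised IP
Deny from 192.0.2.15
# a server-farm range in CIDR form
Deny from 198.51.100.0/24
# a partial address blocks the whole 203.0.113.* range
Deny from 203.0.113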

 

incrediBILL

Msg#: 4689324 posted 8:28 pm on Jul 22, 2014 (gmt 0)

Nice stuff.

I've seen the same kind of thing and they do expose themselves without too much trouble.

I've got a single page on one of my sites that some botnet has the hots for scraping. They attack just that page over and over, multiple requests per IP for the same page, and it comes from IPs all over the world.

Crazy.

Even assuming a valid browser user agent, botnets tend to send bogus headers, and checking for valid headers tends to block all of this nonsense from the start.
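
As a rough illustration (not a complete ruleset), even a single header check in .htaccess catches a lot of the bots that claim to be Mozilla but forget to send an Accept-Language header. A few legitimate agents omit that header too, so treat this as a sketch and test it against your own logs first:

# Rough illustration of a header check: refuse requests that claim to
# be a browser but arrive with no Accept-Language header at all.
# Some legitimate agents omit this header, so verify against your own
# traffic before relying on it.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Mozilla [NC]
RewriteCond %{HTTP:Accept-Language} ^$
RewriteRule .* - [F]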

If we lived in a world actually concerned with cleaning those machines, we could send the ISPs those IPs and they'd shut them down until they were cleaned but nope, they prefer letting their customers be unwitting parts of an underground network.

wilderness

Msg#: 4689324 posted 2:01 am on Jul 23, 2014 (gmt 0)

If we lived in a world actually concerned with cleaning those machines, we could send the ISPs those IPs and they'd shut them down until they were cleaned but nope, they prefer letting their customers be unwitting parts of an underground network.


Bandwidth pays the bills, and the higher the numbers, the better a provider looks (at least to most potential customers).

My entrance into htaccess and learning how to deal with these pests was at the suggestion of a tech from a major provider (hint 4g).

lucy24

Msg#: 4689324 posted 3:29 am on Jul 23, 2014 (gmt 0)

In my log-wrangling routine, I've got a rotating list of functions that identify ongoing botnets so I can pull them out for closer study. Currently there are two that have been running for so long, they're old ... uh, not friends exactly. Old acquaintances. The maddening part is that most of the time I can only identify them after the fact.

"contact" botnet:
request that receives a 403 on referer grounds --either auto-referer or front-page referer for inner page, never the same one-- followed by request for contact page giving the previously blocked page as referer. (This could easily happen with a human who got locked out by mistake and wanted to know why ... except that the human would also be getting stylesheets, images, favicons and so on. And, of course, it would take them a few seconds to assimilate the 403 page.)

"index.php" botnet:
--inner page with auto-referer (their current favorite is a page that blocks auto-referers explicitly)
--front page
--one directory page (always the same one) with auto-referer
--one inner page (ditto) with auto-referer
--three pairs of
/dir/subdir/index.php
front page
all six giving example.com/index.php as referer, consequently all blocked

I might be able to block everything-- currently 3 or 4 requests of a 10-request package typically get in-- but then it would be out of sight, out of mind, and I wouldn't notice which IPs are misbehaving.

If we lived in a world actually concerned with cleaning those machines, we could send the ISPs those IPs and they'd shut them down until they were cleaned but nope, they prefer letting their customers be unwitting parts of an underground network.

In fact I have occasionally thought about contacting IP administrators of all seemingly infected machines*, just to see how they'd react. It is an experiment I will embark on one of these months.

For some reason, it's been quite a while since I was seriously bothered by /wp-admin/blahblah requests. Maybe they know which servers on my host run WordPress, and don't bother the rest. (Admittedly this presumes some ability of robots to learn. This is so rare, it's actually frightening when you do see it.)


* Well, OK, maybe only the ones in ARIN territory. Let Brazil look out for its own computers.

aristotle

Msg#: 4689324 posted 7:48 pm on Jul 24, 2014 (gmt 0)

keyplyr wrote:
but I have successfully blocked many attacks.

I'm not sure what you mean by "attacks"
So two questions:
1. When you say "attack", are you talking about 1000 requests per second from 1000 different IPs? To me, that would be a real "all-out" (DDoS) attack. Or are you only talking about one request every 5 minutes or so (on average)? Because to me that's only a "nuisance attack" at best, or possibly some kind of testing that takes place as a new botnet is being created.

2. Typically how long did these attacks (that you successfully blocked) last? Are you talking about hours, days, weeks, or what?

I'm not trying to dispute anything you've written, but am only trying to get a better understanding of it.

brotherhood of LAN

Msg#: 4689324 posted 7:51 pm on Jul 24, 2014 (gmt 0)

Do they send cookies on subsequent fetches?

wilderness

Msg#: 4689324 posted 8:26 pm on Jul 24, 2014 (gmt 0)

I'm not sure what you mean by "attacks"
So two questions:
<snip>


Sometimes prior to the scrapes/attacks, the compromised accounts are tested (YMMV.)
One particular page is requested w/ no other supporting files, no other requests. This is to 1) check whether the compromised IP address is still valid, and 2) evaluate the victim's server response.
Different bad actors may request completely different pages for much the same reason.

lucy24

Msg#: 4689324 posted 8:47 pm on Jul 24, 2014 (gmt 0)

Do they send cookies on subsequent fetches?

You wouldn't expect them to, would you? Unless it's an infected browser that "thinks" it's handling ordinary page requests. Currently I only see these with semalt referers; if not for the bogus referer they'd look perfectly human.

brotherhood of LAN

Msg#: 4689324 posted 9:18 pm on Jul 24, 2014 (gmt 0)

>You wouldn't expect them to accept cookies

I wouldn't be surprised if they do; I just know that it's more scalable/easier to use a simple curl/wget-like approach. Sending cookies back would just be another level to deal with.

The only reason I asked is because if they don't and you're using a whitelist approach to blocking, then it'd seem easier to block them.
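
For example, something like this (just a sketch, and only sensible where every legitimate visitor will already have picked up a cookie from an earlier page on the site; the cookie and page names are made up):

# Sketch only: require a cookie (set somewhere earlier on the site)
# before serving one frequently-scraped page. "seen_before" and
# "popular-page.html" are made-up names. A bot that never returns
# cookies gets a 403.
RewriteEngine On
RewriteCond %{HTTP_COOKIE} !(^|;\s*)seen_before= [NC]
RewriteRule ^popular-page\.html$ - [F]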

keyplyr

Msg#: 4689324 posted 11:01 pm on Jul 24, 2014 (gmt 0)





It is my opinion (and limited knowledge) that these compromised IP ranges are bunched and sold as various products at hacker forums and other nefarious places. I have even seen them complete with shell management interfaces. The buyer may then configure these ranges to be used in a variety of ways, by a variety of tools, for a variety of purposes.

So asking about the length of an attack, or what level of attack, or whether cookies are used is a bit broad, at least from my experience. YMMV.

lucy24

Msg#: 4689324 posted 11:24 pm on Jul 24, 2014 (gmt 0)

if they don't and you're using a whitelist approach to blocking, then it'd seem easier to block them.

Well, I don't whitelist :) But you're begging the question anyway, because I don't normally use cookies. When I do see cookies in headers, they're most likely from analytics ("I've been here before" and/or "don't log me"), and I really don't care much about those.

aristotle

Msg#: 4689324 posted 11:44 pm on Jul 24, 2014 (gmt 0)


keyplyr
Thanks for your reply. I was merely trying to understand what you meant by the word "attack", but if you don't want to tell me, that's your privilege. Maybe some of the others here can describe some of the attacks that they've experienced. I think it would be an interesting discussion.

keyplyr

Msg#: 4689324 posted 12:46 am on Jul 25, 2014 (gmt 0)


attack = scraping hundreds/thousands of files from your server
attack = endless attempts at finding vulnerabilities on your server
attack = any other attempt to inject malicious files/scripts into your server
attack = fill_in_the_blank...

wilderness

Msg#: 4689324 posted 2:04 am on Jul 25, 2014 (gmt 0)

attack = any attempt at viewing a page and its accompanying files (and/or omitting the accompanying files) in a manner other than the normal and intended presentation to what would be deemed a standard visitor.

The latter is a very broad statement, and frequently requires review of the immediate day's logs, or logs from months past, to confirm a willful attack/intrusion as compared to a genuine visitor.

lucy24

Msg#: 4689324 posted 4:33 am on Jul 25, 2014 (gmt 0)

For the extreme form of attack, you may remember this [webmasterworld.com] from a couple months back ... although you may not remember it as vividly as keyplyr, who was eventually visited by the same bot.

aristotle

Msg#: 4689324 posted 12:29 pm on Jul 25, 2014 (gmt 0)

Thanks for the replies.
Apparently everyone is using the word "attack" to apply to things like scraping, probing for vulnerabilities, hacking attempts, etc. But since this thread is about botnets, I thought it was referring to "botnet attacks". In my understanding, that would normally mean a large-scale DDoS attack, with hundreds or even thousands of requests per second, that attempts to disrupt a site's operations or knock it off-line. When keyplyr said "Then the BIG ATTACK comes", that's what I thought might be meant, which is why I asked the questions.

One thing I still don't understand is what I call "low-level botnet activity". This is when requests come in every few minutes from the same botnet, but from different IPs and devices within that botnet. I wouldn't call this an "attack", because it's so feeble and ineffective that it doesn't make any sense to me that anyone would waste botnet resources on it. Instead I think it's some kind of testing, perhaps the testing of devices immediately after they are infected and added to the botnet.

Yes, Lucy, I do remember that thread you linked to. Please correct me if I'm wrong, but I have the impression that those attacks didn't come from a botnet.

wilderness

Msg#: 4689324 posted 1:31 pm on Jul 25, 2014 (gmt 0)

aristotle,
I've been a member here since 2001, when web searches on htaccess landed me here.

I'd been messing with htaccess for about a year before that.

Nearly all bots (malicious or otherwise) will make some kind of initial (SOFT) probe/test, and then return later with a more extensive crawl.

If a webmaster is able to deter the visits after the initial probe, then generally the more extensive crawl never takes place.

Thus judging severity is a matter of experience: being able to predict, from the initial probe alone, what the future activity is likely to look like, even though the more extensive crawl has yet to take place.

BTW, mobile devices have drastically changed the ability to make that determination. I see particular brands (confirmed widget people) that request single pages without any supporting files. There's no rhyme or reason.

aristotle

Msg#: 4689324 posted 2:21 pm on Jul 25, 2014 (gmt 0)

wilderness wrote:
Nearly all bots (malicious or otherwise) will make some kind of initial (SOFT) probe/test, and then return later with a more extensive crawl.

If a webmaster is able to deter the visits after the initial probe, then generally the more extensive crawl never takes place.

wilderness - Thanks for the reply.
In the case of a particular botnet that I've been dealing with over the last few months, I was eventually able to start blocking all fetch attempts, initial or otherwise, with a 403 Forbidden, thanks to some code that Lucy gave me for blocking "self-referrals". But although I've been using this code to successfully block every single fetch attempt from this botnet for more than 3 months, that hasn't stopped or even slowed down the activity. New fetch attempts continue at the rate of about 200 per day. Each new attempt comes from a new IP, nearly all of them from U.S. locations. I believe that this current activity is testing that takes place as new devices are infected and added to the botnet. In other words, I think a new botnet is currently being created, day by day, as new devices are infected and added. Blocking all fetch attempts, as I'm currently doing, will not stop new devices from being added to the botnet and new tests from being conducted, because that takes place somewhere else on the web, and I have no control over it.

So in this particular case, although I've blocked every single fetch attempt for the past 3 months, that hasn't stopped new fetch attempts from new IPs from being made at the same rate as previously, about 200 per day. So it's wrong to think that you can always stop future fetch attempts by blocking current ones. That simply isn't true.

wilderness

Msg#: 4689324 posted 2:45 pm on Jul 25, 2014 (gmt 0)

So in this particular case, although I've blocked every single fetch attempt for the past 3 months, that hasn't stopped new fetch attempts from new IPs from being made at the same rate as previously, about 200 per day. So it's wrong to think that you can always stop future fetch attempts by blocking current ones. That simply isn't true.


aristotle,
In approximately fifteen years, only in rare instances has there NOT been some common denominator on which to deny access.

Rather than looking at IPs, you're going to be required to implement UA denies (or even header checks). There may be one, or there may be tens or hundreds, but you'll definitely find a common denominator that you're just missing.
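
For instance (just a sketch; the strings below are placeholders for whatever denominator your own logs actually show):

# Sketch only: deny on user-agent fragments and on an empty UA.
# The strings are placeholders -- replace them with whatever common
# denominator keeps showing up in your own logs.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (libwww-perl|curl|wget) [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^$
RewriteRule .* - [F]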

aristotle

Msg#: 4689324 posted 3:13 pm on Jul 25, 2014 (gmt 0)

wilderness
Evidently you didn't understand my post. As I said, I'm using some code that Lucy gave me for blocking "self-referral" fetch attempts. It doesn't use IPs at all, but instead is based on a specific type of fetch attempt associated with this particular botnet, which I noticed from studying the log entries. It wouldn't make any sense to try to individually block tens of thousands of different IPs, and I never considered that approach for even a moment.

wilderness

Msg#: 4689324 posted 3:21 pm on Jul 25, 2014 (gmt 0)

aristotle,
With all due respect!

Apparently lucy's regex is not working, else you wouldn't still be here proclaiming "these pests".

Did lucy or anyone besides yourself review your access logs for whatever common denominator you're missing?
lucy is only capable of providing a solution for the conditions you expressed, and if your conditions were void of a denominator, then all her efforts would fail to accomplish what you desire.

FWIW, I didn't suggest that you deny based upon IP, but rather upon UA and/or headers, and perhaps even combining the regex lucy provided with UAs and/or headers.

I have denies and rewrites in place based upon multiple conditions. It's not complicated, and you're not required to comprehend otherwise complicated regex.

aristotle

Msg#: 4689324 posted 3:45 pm on Jul 25, 2014 (gmt 0)

wilderness
Lucy's code is working perfectly. As I said, it's successfully blocked every single fetch attempt from this botnet for the past 3 months. It doesn't matter if it's a new IP or an old one - it blocks all of them with a 403 Forbidden.

As for where the code came from, I studied the log entries for the fetch attempts from this botnet and noticed a particular characteristic that "normal" fetches don't have. I thought that this particular characteristic might be used as a way to identify and block these particular fetch attempts. So I started a thread in the Apache forum asking if this could be done, and Lucy responded that, yes, it could be, and that she had already done it herself in similar cases. She then posted some simple .htaccess code that does the trick. I call it "self-referer" blocking, although I noticed that she calls it "auto-referer" blocking. In any case, it doesn't matter what the IP is, and it doesn't matter whether it's an initial fetch attempt or a follow-up fetch attempt -- it blocks all of them.
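
For anyone curious, the general shape of an auto-referer block looks something like this. It's only a sketch, not the exact code Lucy posted, and example.com stands in for the real domain:

# Sketch of an auto-referer block (not the exact code Lucy posted):
# refuse any request whose Referer is the very URL being requested.
# example.com stands in for the real domain.
RewriteEngine On
RewriteCond %{HTTP_REFERER}<>%{REQUEST_URI} ^https?://(www\.)?example\.com([^<]*)<>\2$ [NC]
RewriteRule .* - [F]

The <> in the test string is just an arbitrary separator; the \2 back-reference forces the path in the Referer to match the path actually being requested.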

not2easy

Msg#: 4689324 posted 3:48 pm on Jul 25, 2014 (gmt 0)

@aristotle
But although I've been using this code to successfully block every single fetch attempt from this botnet for more than 3 months, that hasn't stopped or even slowed down the activity


So long as you are observing 200 new 403s every day, your code is doing as it is intended. It is not supposed to prevent attempts, it is supposed to prevent success.

aristotle

Msg#: 4689324 posted 4:02 pm on Jul 25, 2014 (gmt 0)

not2easy
Thanks for the reply. Yes, I understand what you mean. Blocking current fetch attempts won't stop new ones from coming because they originate somewhere else on the web by a process that I have no control over.

wilderness

Msg#: 4689324 posted 4:24 pm on Jul 25, 2014 (gmt 0)

So I started a thread in the Apache forum asking if this could be done, and Lucy responded that, yes, it could be, and that she had already done it herself in similar cases. She then posted some simple .htaccess code that does the trick.


Saw the original Apache thread, and the recent posting of same in this forum.

A few weeks ago there was a similar thread in the Apache forum where somebody needed the same solution, which I could not locate. lucy was kitty-footing around ;) and did not provide the solution in that second thread.

My apologies for all the confusion.

Those self-referers and domain referers have been coming for years.



Don

incrediBILL

Msg#: 4689324 posted 5:05 pm on Jul 25, 2014 (gmt 0)

Rather than looking at IPs, you're going to be required to implement UA denies (or even header checks).


Header testing is actually easier to implement and gives more bang for the buck than UA testing.

It doesn't take a rocket scientist to copy/paste a correct UA into a bot; the fact that some of them still fail at it proves they're stupid.

lucy24

Msg#: 4689324 posted 6:51 pm on Jul 25, 2014 (gmt 0)

I have the impression that those attacks didn't come from a botnet

Right. That particular one was a single IP. But it could just as easily have been a botnet characterized by a behavior pattern and/or distinctive (to put it politely) UA. Most of the botnets I meet use some random humanoid UA -- they may actually be infected human machines -- so the pattern of requests and referers is the only way to identify them.

lucy was kitty-footing around

Hm, that doesn't sound like me at all ;)

It is not supposed to prevent attempts, it is supposed to prevent success.

Exactly. If it's your own server, you may be able to do stuff with a firewall so the requests never even reach the server, and hence don't show up in server logs. But there's always something or someone doing the work of locking out.

dstiles

Msg#: 4689324 posted 8:44 pm on Jul 25, 2014 (gmt 0)

A quick botnet overview.

Botnets are variable in size and IP makeup. In general they are made up of compromised computers, some servers and some domestic/office. These may be segregated for specific purposes: servers are likely to shift more traffic than a domestic machine that's offline for much of the day but there may be an advantage to using dynamic IPs on web targets.

Once compromised, the computer is used to do whatever the new "owner" wants it to do. Blocks of IPs in small or large quantities are rented by the botnet "owners" to whoever wants to pay the (usually small) price. Along with the rental comes a control panel, sometimes simple, sometimes complex, which allows you to set up your deployment profile and probably hide yourself from detection.

Botnets are commonly used to send spam and viruses via email, plant viruses on web servers, scrape web site content for various illegal purposes, run denial of service attacks on any IP or IP range the renter fancies, or attack FTP, SSH and similar services. For web, an action may be a simple probe to begin with, to discover vulnerabilities, perhaps followed by a real barrage if a vulnerability is discovered or suspected - or maybe just because there is a web site there (this is from my own observation: I get very few high-intensity barrages now but quite a few probes).

User agents for mail-sending tools and web "browsers" can easily be faked and usually are, in the latter case from the common googlebot UA to an esoteric "real" browser UA. You cannot depend on them at all. Mail headers (apart from UA and faked sender details) are usually more or less correct otherwise the mail transport system may reject it (and in any case they are not complicated), but there are ways to distinguish spam from real mail. Headers for web bots are usually broken in some way: a bit of intensive study pays dividends but do not expect much help in this matter on this forum. Suffice to say: it's possible to detect and kill most bad bots on first approach.

Botnet ranges vary from hour to hour as the true owners of compromised machines discover their misfortune and get it fixed - or in some cases not. A few large corporations such as Microsoft occasionally take down botnets and then large numbers of IPs can be purged or at least sink-holed; this is immediately followed by renewed phishing activity in an attempt to compromise a new set of IPs.

There is not a lot that can be done against bots from dynamic (broadband) IPs if you want real visitors - they come from the same networks. You have to rely on bot recognition. On the other hand, block every server range you can find, with prejudice: with very few exceptions, such as narrow IP ranges of bing and yandex bots and the occasional good proxy, they have no reason to hit your web site (this is the reverse of mail, where server IPs are good-ish and dynamic bad).
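
In Apache 2.2 .htaccess terms, the shape of a server-range block with a carve-out for one legitimate crawler is roughly the following (a sketch only; the addresses are placeholders, and other servers do it differently):

# Sketch only: block a whole server-farm range but re-allow one
# known-good crawler IP inside it. Addresses are placeholders.
# With Order Deny,Allow a matching Allow overrides the Deny.
Order Deny,Allow
Deny from 198.51.100.0/22
Allow from 198.51.100.37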

Why are there botnets? Years of neglect (and worse) by law enforcement agencies; stupidity and cupidity of NIC companies who are happy to rent domains short-term without proper checks; a few bad server farms who do not care about their users' activities so long as they pay; and the completely (and acknowledged as such) broken internet protocol - broken pretty much from day one.

And, of course, the fact that most people are careless about their computers' security and happily click on links leading to wealth untold or bigger and better... And who have no idea that their computer is compromised.

Brazil (strangely but encouragingly) and a few other countries are trying to put together a better internet. I for one wish them a lot of luck. Some countries do not want to see this. I wonder why?

Finally, blocking IPs.

As noted above, permanently block all server farms except for the odd legitimate IP or range. There is a down-side to this: IP ranges change hands occasionally and what was a server range may become a dynamic range. In my case this is tough: they stay as I classified them until something draws my attention to an update. In practice the changes I've seen so far have been from one owner to another, keeping the range's use the same.

Dynamic IPs can, in my opinion, be blocked for a short space of time and then released on the assumption that in a reasonable time the computer will be cleaned. In my system, a dynamic IP gains an extra 24 hour block per ill-usage detection: if there is no bad use after a while the IP is automatically released.

In a few instances I block short ranges from RU, UA, RO even if they are dynamic: I am very distrustful of very short ranges from these countries. I am also against ranges that have a public registration email address (hotmail, gmail etc) and this may sway my judgement.

incrediBILL

Msg#: 4689324 posted 7:20 am on Jul 26, 2014 (gmt 0)

most people are careless about their computers' security and happily click on links leading to wealth untold or bigger and better


Having both parents and in-laws in the 85-95 age bracket, I can tell you they are more easily fooled by phishing that looks like the real deal from their banks or some website where they shop.

My mom calls me up asking:
mom:"Why would Walmart send me a gift card?"
me:"They didn't"
mom: "How do you know? The local Walmart could be trying to get my business."
me:"Did you shop at walmart.com?"
mom:"No."
me:"Have you ever give the local walmart your email in the store?"
mom:"No."
me:"Then how could they send you email? Why in the hell would you open an email from Walmart when they don't know how to email you? No need to answer, greed overtook your common sense."

No, I don't always talk to my mom like that but after having this discussion for so many years about spam, I'm fed up. It's nothing new, nothing changed.

Mom was not pleased with me. We've gone over this drill many times, if you don't know them, don't open it, and even if you do, it's suspect. Spam and phishing can come from your friends and family, including me, thanks to Facebook and Yahoo mail.

Did she click the attached "gift card"?

Don't know, didn't ask, because I'm not fixing it this time. I'm done. If Norton's didn't catch it, too bad. Her last computer got so messed up I just bought her a new one because it was too old for me to try to fix in the first place.

I can't blame just the elderly, because a lot of not-so-bright kids, teens, adults and middle-aged people do the same damn thing.

I've given them all the speech, and I'm done fixing their machines if they mess up.

Not. My. Problem.

But that, in a nutshell, is exactly how the botnet expands on a daily basis.

See my other thoughts on this topic:
[webmasterworld.com...]

lucy24

Msg#: 4689324 posted 6:18 pm on Jul 26, 2014 (gmt 0)

I block short ranges from ...

Where do you draw the line? Keeping in mind that for the last couple of years no new RIPE range has been larger than /22.
