
Forum Moderators: bakedjake


Highly Critical Vulnerability "Ghost" Allowing Code Execution on Most Linux Systems

New bug could spark "a lot of collateral damage on the Internet"

     
4:19 am on Jan 28, 2015 (gmt 0)

Administrator from JP 

WebmasterWorld Administrator bill is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Oct 12, 2000
posts:15148
votes: 170


http://arstechnica.com/security/2015/01/highly-critical-ghost-allowing-code-execution-affects-most-linux-systems/ [arstechnica.com]

Highly critical “Ghost” allowing code execution affects most Linux systems

An extremely critical vulnerability affecting most Linux distributions gives attackers the ability to execute malicious code on servers used to deliver e-mail, host webpages, and carry out other vital functions.

The vulnerability in the GNU C Library (glibc) represents a major Internet threat, in some ways comparable to the Heartbleed and Shellshock bugs that came to light last year.
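A quick first check, for anyone who wants one, is simply asking the installed glibc what version it is. This is only a rough sketch - the upstream fix is widely reported as having landed in glibc 2.18, but distributions backport patches to older releases, so the version string alone doesn't prove you're vulnerable or safe:

    /* ghost_version.c - print the running glibc version (illustrative only). */
    #include <stdio.h>
    #include <gnu/libc-version.h>

    int main(void)
    {
        /* gnu_get_libc_version() reports the version of the glibc this
         * program is actually linked against at run time. */
        printf("glibc %s\n", gnu_get_libc_version());
        return 0;
    }

If the version looks old, check your distribution's security advisories rather than trusting the number by itself.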
7:39 pm on Jan 28, 2015 (gmt 0)

Preferred Member

10+ Year Member Top Contributors Of The Month

joined:July 23, 2004
posts:577
votes: 94


It's what I like about Linux so much -- that it can update without dragging everything down to a crawl while the updates are being applied. I remember Windows would come nearly to a dead stop when I had auto-updates turned on - it screwed up everything else I was doing.

Reboots don't happen too often around here when updates are applied ... Though I have noticed a few reboot requests recently .. guess it might have been the patch they're talking about in this case ...
9:11 pm on Jan 28, 2015 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member dstiles is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:May 14, 2008
posts:3244
votes: 18


Bit of over-hype on this one. Although potentially critical, there are not many programs that use the damaged lib, and it's been conceded that "exploiting the bug may be challenging", which suggests the bug is actually of minor importance.

Of course, we've all updated the lib, haven't we? And then rebooted the machine? (Well, no, actually, not rebooted, 'cause I have a lot of stuff open and running at the moment, but it's my risk!)

I wonder how many people do not reboot, though? There is no popup saying this should be done, and few people read or even receive the bug reports which advocate reboot. Although I read the reports I almost missed the "reboot" recommendation, and I suspect I've missed a few more if mcneely is correct: I haven't rebooted most of my linux machines for 4 or 5 months.
10:57 pm on Jan 28, 2015 (gmt 0)

Preferred Member

5+ Year Member Top Contributors Of The Month

joined:May 24, 2012
posts:648
votes: 2


Bit of over-hype on this one

I agree with that.

there are not many programs that use the damaged lib

This, however, is not right. Almost everything in a typical Linux distribution is dynamically linked to libc. It may be easier to compile a list of installed software that IS NOT linked to the installed libc.
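As a rough illustration (a sketch, not proof): even a do-nothing program drags libc in, and glibc's own dl_iterate_phdr() will list it among the shared objects mapped into the running process:

    /* list_libs.c - print every shared object mapped into this process.
     * Even this trivial program will show libc in the list. */
    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    static int show_object(struct dl_phdr_info *info, size_t size, void *data)
    {
        (void)size;
        (void)data;
        if (info->dlpi_name && info->dlpi_name[0] != '\0')
            printf("%s\n", info->dlpi_name);
        return 0; /* keep iterating */
    }

    int main(void)
    {
        dl_iterate_phdr(show_object, NULL);
        return 0;
    }

The same point is usually made with ldd, but the idea is identical: the dependency is everywhere.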
5:00 am on Jan 29, 2015 (gmt 0)

Junior Member

5+ Year Member

joined:June 17, 2014
posts:86
votes: 0


Never seen as many updates as I have in the past two weeks. It's almost getting annoying, with some of the trivial ones updating several times a week or every day, and for many default applications that I never use anyway. Mozilla Thunderbird, for example.
Yeah, I know I should just remove it... OCD thing though...

At least it's not bogged-down old Windows, and the reboots, when required, only take 5 or 10 seconds.
8:16 pm on Jan 29, 2015 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member dstiles is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:May 14, 2008
posts:3244
votes: 18


rish3 - Sorry, I wasn't clear. The bug seems to apply to only a small part of the lib, the part used for hostname lookups. From a Threatpost report:

“When it comes to client applications, browsers would be probably the most likely vector — but the most popular ones are not believed to be vulnerable.”

“The exploitation depends on being able to convince a program to perform a DNS lookup of a host name provided by the attacker. The lookup has to be done in a very particular way and must lack a couple of commonly-employed (but certainly not mandatory) sanity checks.”

"Ghost is a heap-based buffer overflow found in the __nss_hostname_digits_dots() function in glibc ..."

Further, "... the gethostbyname functions are obsolete because of IPv6 and newer applications using a different call ..."

Full story: [threatpost.com...]
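For anyone who wants to see what that overflow actually looks like, here is a minimal sketch modelled on the widely circulated proof-of-concept test (the 1024-byte buffer and the 16-byte/two-pointer arithmetic come from that public test and are illustrative, not authoritative):

    /* ghost_test.c - sketch of the gethostbyname_r() buffer overflow check. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY "in_the_coal_mine"

    /* A resolver buffer with a canary placed immediately after it, so an
     * overflow of the buffer is easy to detect. */
    static struct {
        char buffer[1024];
        char canary[sizeof(CANARY)];
    } temp = { "buffer", CANARY };

    int main(void)
    {
        struct hostent resbuf, *result;
        int herrno;

        /* An all-digit "hostname" sized so that, on an unpatched glibc,
         * __nss_hostname_digits_dots() writes past the end of temp.buffer.
         * The 16 bytes and two pointers mirror the internal host_addr and
         * h_addr_ptrs variables described in the public advisory. */
        size_t len = sizeof(temp.buffer) - 16 * sizeof(unsigned char)
                     - 2 * sizeof(char *) - 1;
        char name[sizeof(temp.buffer)];
        memset(name, '0', len);
        name[len] = '\0';

        int retval = gethostbyname_r(name, &resbuf, temp.buffer,
                                     sizeof(temp.buffer), &result, &herrno);

        if (strcmp(temp.canary, CANARY) != 0) {
            puts("vulnerable");       /* the canary was overwritten */
            return EXIT_SUCCESS;
        }
        if (retval == ERANGE) {
            puts("not vulnerable");   /* a patched glibc rejects the call */
            return EXIT_SUCCESS;
        }
        puts("inconclusive");
        return EXIT_FAILURE;
    }

It only takes the digits-and-dots fast path, which is exactly why browsers and most well-behaved resolvers are hard to reach with it - the name has to get to gethostbyname() unsanitised.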
9:09 pm on Jan 29, 2015 (gmt 0)

Preferred Member

5+ Year Member Top Contributors Of The Month

joined:May 24, 2012
posts:648
votes: 2


rish3 - Sorry, I wasn't clear.


Ahh. Okay. I see.

I still think they are underplaying that a bit.

a) The "different call" is getaddrinfo(), which actually does call gethostbyname2_r(), but only after validation via inet_aton(). That validation is the only thing keeping it "safe".

b) Some of the listed potential vulnerable apps, like apache, do call gethostbyname_*() directly. (See [svn.apache.org...]
Of course, apache default installs don't do hostname lookups. So, not a wide open hole, but something to be looked at.

I would guess the risk isn't as high as some of the headlines, but I'm also sure there are blackhats combing source code right now. I would bet they find at least one open door in something popular.
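To make point (a) concrete, here is a minimal sketch of the newer call path - getaddrinfo() doing the lookup that older code would have done with the obsolete gethostbyname(). The host name "example.com" is just a placeholder:

    /* resolve.c - resolve a name with getaddrinfo(), the modern replacement
     * for gethostbyname(), and print every address it returns. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        const char *host = (argc > 1) ? argv[1] : "example.com";
        struct addrinfo hints, *res, *p;
        char addr[INET6_ADDRSTRLEN];

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        int rc = getaddrinfo(host, NULL, &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return EXIT_FAILURE;
        }
        for (p = res; p != NULL; p = p->ai_next) {
            void *a = (p->ai_family == AF_INET)
                ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            inet_ntop(p->ai_family, a, addr, sizeof(addr));
            printf("%s\n", addr);
        }
        freeaddrinfo(res);
        return EXIT_SUCCESS;
    }

Whether any given application is safe still comes down to whether something in its dependency chain hands an attacker-supplied name to the old functions, which is why the code-combing matters.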
7:40 pm on Jan 30, 2015 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member 5+ Year Member Top Contributors Of The Month

joined:Feb 3, 2014
posts:1359
votes: 469


It seems many ISPs have rushed to fix this, but at the same time may have upset a lot of scripts. My own web host fixed the problem, but then it broke their own ticket system... and my Coppermine gallery.
6:18 am on Jan 31, 2015 (gmt 0)

System Operator from US 

incredibill is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Jan 25, 2005
posts:14664
votes: 99


This is what happens when you don't have official code reviews for the core libraries and hardcore testing by people who know how to harden software ;)

Then again, MS spends a lot of money on Windows doing exactly that, and stuff still slips through the cracks.
12:06 pm on Jan 31, 2015 (gmt 0)

Senior Member

WebmasterWorld Senior Member billys is a WebmasterWorld Top Contributor of All Time 10+ Year Member

joined:June 1, 2004
posts:3181
votes: 0


At least it's not bogged-down old Windows, and the reboots, when required, only take 5 or 10 seconds.

I run Linux (CentOS 6) and Windows 7 on the same machine; boot times are pretty much identical.

Good webmasters should keep their machines patched and make a habit of reading the changelog, which will tell you when a reboot is needed.
9:07 am on Feb 2, 2015 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member graeme_p is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Nov 16, 2005
posts:2936
votes: 188


@incrediBILL, there will be code reviews - now.

Last year's SSL vulnerabilities started off with one being found (Apple was first, right?). Over the next few months Google found Heartbleed, a Red Hat audit found the problem with GnuTLS, and MS discovered one in their SSL library (have I missed one or two?). Why the clustering of discoveries? I think no one was bothering to review code in SSL libraries until the first one was found, and once they woke up and started looking, the others were found.
9:11 am on Feb 2, 2015 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member graeme_p is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Nov 16, 2005
posts:2936
votes: 188


@BillyS, webmasters should read changelogs, but in practice not everyone does. Also, desktops (and laptops in particular, because some people run for weeks without a reboot) should definitely be asking to be rebooted after the update, as they do after kernel updates.
3:07 am on Feb 7, 2015 (gmt 0)

Junior Member

5+ Year Member

joined:June 17, 2014
posts:86
votes: 0


I run Linux (CentOS 6) and Windows 7 on the same machine; boot times are pretty much identical.

Are you talking about rebooting for updates to be applied, or just general boot times? Even after accelerated booting to the desktop, Windows is often still loading services. Have you factored that in?
6:14 am on Feb 9, 2015 (gmt 0)

Preferred Member

10+ Year Member Top Contributors Of The Month

joined:July 23, 2004
posts:577
votes: 94


...Windows is often still loading services ...


yeah ... like from 2005 right?

Windows .. heh
1:05 pm on Feb 10, 2015 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member graeme_p is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Nov 16, 2005
posts:2936
votes: 188


Why do boot times matter that much? Why would you reboot a laptop more than weekly, a desktop more than daily, or a server for anything other than updates that require it (kernel updates, glibc updates, etc.)?
5:44 pm on Feb 14, 2015 (gmt 0)

Junior Member

5+ Year Member

joined:June 17, 2014
posts:86
votes: 0


Back in the stone age, all those server uptime stats sites were quite popular.
Guys used to think it was pretty cool that their servers had been up for longer than a year, even when performance had completely degraded. Not so much anymore, I think; there's been a real shift away from that recently. Operating systems are more complex, software is more complex, there's more of it, we're running more services and putting more demands on them, and crud and errors compound over time. With faster hardware and faster boot times, it's a lot more common now to see shorter uptimes for servers.
Windows desktop systems really benefit from frequent reboots. I'm not sure whether fragmented RAM, failure to properly unload services and DLLs, or huge running log files is the cause of performance degradation in Windows over time... but rebooting is a simple fix sometimes.
7:21 am on Feb 24, 2015 (gmt 0)

Senior Member from GB 

WebmasterWorld Senior Member graeme_p is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Nov 16, 2005
posts: 2936
votes: 188


It depends what you mean by frequent, and what you are running. My observations are:

1) Linux desktop with 2GB of RAM, running a heavyweight desktop (Gnome/KDE) and lots of heavy apps - reboot every few days to a week

2) Linux desktop with 4GB or more of RAM and a similar load, or less RAM and a lighter load - two to four weeks between reboots.

3) Linux server running a few services (e.g. web server + web app + RDBMS + ssh, etc.) - months between reboots IF there are no security issues requiring an immediate reboot.

If you are running lots of services on a single server, you should consider dividing it up into VPSs, or at least using chroot jails or containers (does Windows do those?), which will reduce the problems of complexity and are more secure.
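In case it helps anyone picture it, a chroot jail is just the service locking itself into a directory subtree before it does any real work. A minimal sketch - the jail path /srv/jail and the uid/gid of 1000 are placeholders, not a recommendation:

    /* jail.c - sketch of entering a chroot jail and dropping privileges
     * before running a service's main loop. Must be started as root. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Confine the process to /srv/jail (placeholder path). */
        if (chroot("/srv/jail") != 0 || chdir("/") != 0) {
            perror("chroot");
            return EXIT_FAILURE;
        }

        /* Drop root privileges; 1000 is a placeholder unprivileged uid/gid. */
        if (setgid(1000) != 0 || setuid(1000) != 0) {
            perror("drop privileges");
            return EXIT_FAILURE;
        }

        /* ... the service's main loop would run here ... */
        puts("now confined to /srv/jail");
        return EXIT_SUCCESS;
    }

Containers (and VPSs even more so) take the same idea further by also separating process tables, network stacks and resource limits.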
 
