
20 April 2007

Wireless Stupidity

Impromptu

People still just don't get it. Most of the wireless networks pictured are protected by WPA Pre-Shared Key authentication, and the rest by WEP. (WEP sucks, but it's better than nothing. Or is it?) Obviously, one wasn't protected at all.

(Possibly) worse, a wireless network I am in no way responsible for, but am closely tied to, was serving up tasty, free Internet access to patrons. When asked if the network was encrypted, I said no, and briefly mentioned that we want it to be quick and easy for patrons to hop on. No fuss, no muss.

The response was an awkward, almost accusatory look. Apparently there is some problem with unencrypted networks. They are unsafe and give you STDs like the crazy, promiscuous girl even the desperate guys won't go near. The reality is, even an encrypted network isn't necessarily “safe”. If you want safe and private computerized communication, you want VPNs and SSL/TLS. After all, that's what they were invented for. The right tool for the right job makes all the difference.

05 April 2007

Firewalls Suck

Impromptu

Firewalls suck. I hate them. Sure, a firewall can do you a lot of good in the hands of someone who knows what they are doing, but commonly, firewalls are the little Dutch boy's finger of the network security world. The problem is, no one else comes by to help.

Security charlatans sell firewalls like snake oil. They claim firewalls are mythical pieces of equipment or software that can protect the user from every computer security issue out there. So what happens? Your novice or otherwise uneducated user (or even worse, users who think they are well versed, but really aren't) runs a firewall and thinks they are safe. They don't patch their systems promptly because they believe the firewall will protect them. They don't password protect sensitive services, or encrypt sensitive information because they think the firewall will protect them. They generally do stupid things because there is no need to be concerned; the firewall will protect them.

Just think about immortality for a moment. If you could never be hurt or killed, would you wear your seatbelt?

All that said, firewalls are actually quite useful as temporary solutions to problems, or insurance against possible issues that may come up in the future. If a vulnerability is discovered in a service your computer runs, you can quickly disable access to this service at the network level if, for some reason, you cannot disable the service outright.

In the end, a properly configured, patched system does not need a firewall. If a firewall is your main defense, you've got other problems.

30 March 2007

Perils of Rootkit Detection

i admit i haven't followed security as closely as i did when i just ran windows, but are there now rootkits that can hide themselves from computer b when it's on computer a? - ice60

Yes, but not explicitly. In fact, any rootkit hiding itself from other processes on the infected machine is also hiding itself from other systems on the network as a free side effect.

If computer b is looking for a rootkit on computer a, it will ask for information about the system. Details about running processes, files, logs, and so forth may be queried. The problem is that all of these requests must pass through the infected system, and therefore cannot be trusted. Involving a daemon in your quest to sniff out a rootkit only provides an additional opportunity for the rootkit to hide itself.

Let's look at one of my infamous metaphors as an example. We can equate a rootkit-infected system with a brainwashed individual.

  • If you were brainwashed, you wouldn't necessarily know you were brainwashed. Any decent brainwasher would hide the brainwashing from you, perhaps with false memories.

  • If I asked you if you were brainwashed, you wouldn't be able to tell me you are brainwashed. Quite simply, you wouldn't know.

  • If I asked you indirect questions relating to your brainwashing, I *might* be able to infer you were brainwashed. There are certain inconsistencies that may come up in your responses. Perhaps you claim you were eating an anchovy pizza, but I know you hate anchovies.

  • Since any response you give could be affected by the brainwashing, I still wouldn't be able to tell with any certainty that you had been brainwashed.



The only 100% reliable way to detect a rootkit, assuming you have a signature for it, is to bring the infected system offline and look for it without relying on any part of the infected system. This could mean running the system from a separate, bootable disk, or transplanting the infected system's hard drive into another clean system.
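The offline approach can be sketched in a few lines of Python. Everything here is illustrative: the signature set, the mount point, and the notion that a "signature" is simply a SHA-256 digest of a known malicious file are all assumptions for the sake of the sketch.

```python
import hashlib
import os

# Hypothetical signature database: SHA-256 digests of known rootkit files.
SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan(mount_point):
    """Walk an offline (mounted, not booted) filesystem and return paths
    of files whose digests match a known signature. Because the infected
    OS never runs, it gets no chance to lie about its own files."""
    hits = []
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable file; a real scanner would log this
            if digest in SIGNATURES:
                hits.append(path)
    return hits
```

The key point is that the code runs on the clean system and reads the suspect disk directly, rather than asking the suspect operating system for answers.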

For further reading, please see this article in Computerworld.

27 March 2007

Apache HTTPD Virtual Hosts and SSL

how do i do configure Apache2 to answer for all subdomains requested. as i want to be ble to create sub domains on my server using my main domain name. - patty552 @ UbuntuForums

Various statements of misconception in this thread at UbuntuForums.

Apache HTTPD Virtual Hosts make the web go round. They allow a single server to host many web sites with different addresses. In the web hosting world, this allows for efficient, cheap web hosting. For your average person running Apache HTTPD, it means efficient, cheap home networking and web development testing environments.

Virtual hosts are easy to set up; just check the documentation at http://httpd.apache.org/docs/ . That said, there are two main ways to configure virtual hosting, which you have to keep in mind when starting out. One method matches the request's host name, IP address, port, or any combination of them to a separate block of HTTPD configuration statements. The other specifies a directory pattern for the document root and cgi-bin based on parts of the host name.

For the former method of configuring virtual hosts, all one needs to do is add a wildcard ServerAlias directive to the VirtualHost block for your domain name.

<VirtualHost 1.2.3.4>
ServerName domain.tld
ServerAlias *.domain.tld
DocumentRoot /var/www/
</VirtualHost>


For the latter, all one needs to do is match against only the domain name, or include subdomains in the pattern, making sure to create the appropriate directory structure.

VirtualDocumentRoot /var/www/%-2/
VirtualDocumentRoot /var/www/%-2/%-3/


Web hosts tend to use the former, VirtualHost-based method. Smaller shops, or generic mass hosts (departmental or employee hosting within an organization, for example) will find the latter very helpful, particularly when serving out of users' home directories.
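As a hypothetical illustration of the second pattern above (the host names and directories are made up), here is how mod_vhost_alias, which provides VirtualDocumentRoot, maps requests. UseCanonicalName Off ensures the pattern sees the host name the client actually asked for:

```apache
# Requires mod_vhost_alias.
UseCanonicalName Off
VirtualDocumentRoot /var/www/%-2/%-3/

# %-2 is the second name part from the right, %-3 the third, so:
#   blog.domain.tld  ->  /var/www/domain/blog/
#   www.other.tld    ->  /var/www/other/www/
```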

All this is great, but what about SSL? You could start up a separate instance of Apache HTTPD to serve over an SSL connection, but you probably don't want to do that. There are some advantages to doing so, but they are beyond the scope of this piece. The easiest way is to use a VirtualHost block to match against connections on port 443, the default HTTPS port. Contrary to popular belief, you do not need a separate IP address to do this.

<VirtualHost *:443>
SSLEngine On
SSLCertificateFile /etc/httpd/ssl.pem
DocumentRoot /var/www/
</VirtualHost>


This works just fine if you are only serving one site over HTTPS. The problem comes in when multiple domain names that need SSL are served from the same server. Since the SSL certificate must be presented before the web browser sends its request to the server, the server has no way of picking a domain-specific SSL certificate to use. Name-based matching just won't work for SSL. This is why proprietors of shared web hosting services demand that you purchase a dedicated IP address if you want to use SSL. IP addresses are known before SSL certificates are used, so by matching on IP address, we can use domain-specific SSL certificates.

<VirtualHost 1.2.3.4:443>
SSLEngine On
SSLCertificateFile /etc/httpd/dom1-ssl.pem
DocumentRoot /var/www/dom1/
</VirtualHost>

<VirtualHost 1.2.3.5:443>
SSLEngine On
SSLCertificateFile /etc/httpd/dom2-ssl.pem
DocumentRoot /var/www/dom2/
</VirtualHost>


So, to recap: you do not need a separate IP address to use HTTPS. You do need separate IP addresses to use HTTPS with multiple SSL-enabled domains on the same server.

17 March 2007

Password Strength

Impromptu

I came across a thread at UbuntuForums where someone asked how safe the password “s8fd7fg67fdg6” is. All these years, and we still have password education issues.

Excusing the fact that the guy just mashed the keyboard, producing something unlike what he would actually use when password-picking time came, this is a horrible password. It would take a while to crack, but it's hard to remember and could easily be a lot stronger. 600+ times stronger, actually.

So, putting easy to remember aside, I will make a few assertions. If you want proof, do the math.


  • The strength of a password is determined by the maximum number of attempts a brute force attack would require to find the password under ideal conditions. Ideal conditions in this case are knowing the maximum length of the password and the character set it was derived from. There is more to intelligent brute forcing (no, not a misnomer) than that, but that is beyond the scope of this piece.

  • The maximum number of attempts required, as described above, can be calculated by adding one to the size of the character set s, and raising the result to the length of the password l. That would be (s+1)^l, of course. The +1 is in there because I said “maximum” above. We have to account for the possibility that a password is shorter than the maximum by including an empty character in the character set.

  • Working under the assumption that we know the maximum length of the password as well as its character set is quite reasonable, considering that the average password is 8 characters long*, the vast majority of passwords do not contain non-alphanumeric characters*, and minimum and maximum password lengths are very often imposed on users.



*These numbers came from a group of passwords acquired from MySpace users that fell for a phishing attack. You'd think that this represents the bottom rung of users, and you'd be right, but previous password auditing results have given similar numbers. Your average computer and Internet user isn't really that savvy. Juicy details at http://www.schneier.com/blog/archives/2006/12/realworld_passw.html.

If you start with a password up to 6 characters long, composed of numbers and lowercase letters, there are 2,565,726,409 possible combinations. Increasing the length by just one character yields 94,931,877,133 possible combinations. If you include capital letters in the character set instead of increasing the password length, you get 62,523,502,209 possibilities. Obviously, it's better to increase your password length. But wait!

If you start with a 7 character password made up of lower case letters and numbers, and increase the length to 8 characters, you have 3,512,479,453,921 possibilities. If you change the character set instead, there are 3,938,980,639,167 possibilities. Whoa!

So, what does this tell us? To over-simplify, if your password is going to be more than 7 characters long, character set matters more than password length.
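If you'd rather let the computer do the math, the (s+1)^l formula from the assertions above is a one-liner. The numbers it prints are the same ones quoted in the paragraphs above.

```python
# Maximum brute-force attempts: (s + 1) ** l, where s is the size of the
# character set and l is the maximum password length. The +1 accounts for
# the "empty" character, i.e. passwords shorter than the maximum.
def max_attempts(charset_size, max_length):
    return (charset_size + 1) ** max_length

print(max_attempts(36, 6))  # lowercase + digits, up to 6 chars: 2,565,726,409
print(max_attempts(36, 7))  # one character longer:             94,931,877,133
print(max_attempts(62, 6))  # add capitals instead:             62,523,502,209
```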

15 March 2007

Restricting Interactive Access

How can I give someone SSH access to my server without compromising security? - Plenty of People

[How can I] allow shell access but NOT allow [arbitrary] remote commands to be run? - TyphoonJoe @ Ubuntu Forums (Impromptu)

First, every point of access to your system is a potential weak spot. Learn to accept it.

The long and short of it is: if you want to allow someone interactive access to a machine but want to control what programs they can run, you need to do one of two things. Either force a customized shell upon the user that implements some sort of access control, or use regular, old-fashioned file system permissions to lock out certain programs.

Wrapping a shell is best left to experts. There are some interrupt issues to handle, as well as some special-case scenarios to worry about. Plus, it's usually overkill!

File system permissions are quick, easy, and effective. There are just six simple steps to take:


  1. Identify a restricted program or group of programs (such as ifconfig, or gcc and ld).

  2. Create a group for the program or program group.

  3. Add users permitted to use the restricted programs to the corresponding group.

  4. Change the group owner on the program(s) to the specially created group.

  5. Change file system permissions on the program so that only the owner or group can read or execute the program.

  6. Pray you didn't botch steps 1 through 5.



Keep in mind that many other programs, scripts, and services may need to use what you have restricted. When locking out commands with file system permissions, make sure you are as thorough as possible.
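The permission change in steps 4 and 5 can be sketched in Python. This operates on a scratch file standing in for a real binary, since changing the group on something like /usr/bin/gcc requires root; the 750 mode (owner read/write/execute, group read/execute, others nothing) is what step 5 describes.

```python
import os
import stat
import tempfile

# Scratch file standing in for the restricted binary (e.g. /usr/bin/gcc).
fd, path = tempfile.mkstemp()
os.close(fd)

# Step 5: owner gets rwx, the restricted group gets r-x, everyone else
# gets nothing. Step 4 (os.chown to the new group's gid) is omitted here
# because it needs root and a group created in steps 2-3.
os.chmod(path, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # prints 0o750

os.remove(path)
```

In practice you would do the same thing with chgrp and chmod 750 on the real program; the Python version just makes the permission bits explicit.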