bill's blog

Just another WordPress weblog


I was recently asked to redo the permissions on 5TB worth of data. There were inherited permissions that conflicted with the users’ new requirements… it was just a mess! I figured the best way to deal with this was to start from scratch… remove all ACLs and start fresh.

The easiest way I’ve found to do this is with the -N flag of the Mac OS X chmod, which strips all ACL entries (here applied recursively with -R):

sudo chmod -R -N ./*

Can you secure a network through access control systems only?

Security is not about relying on a single process to protect assets! A belts AND suspenders approach is the best way to minimize the risk of compromise. Access control lists are only a small part of the equation! They govern who has access to a particular resource once they have been authenticated. The key here is that ACLs apply to users known to the system, NOT unknown users. Network security is a cat and mouse game: however smart you get at protecting your assets, the hacker will always be one step away! As long as computers are accessible from the Internet they will always be at risk. Many vendors will tell you their product does it all! BUT in reality they don’t, and they often fail miserably. The companies that speak in terms of the parts of a security plan understand that a layered approach to computer security increases your chances of successfully defending your resources! SO what are these different layers and how are they applied?

First there are firewalls. Firewalls are designed to block unauthorized access while permitting outward communication (, 2009). They sit on the perimeter between your network and the Internet and control which packets are allowed to pass through to internal resources. Firewalls ship with a default set of attack signatures whereby they can tell when they are under attack based on the type and frequency of the packets they “see”. Additionally, network administrators can program the device with complex rule sets that determine whether traffic is legitimate or not! These rule bases can be set to allow or deny packets based on the port, source IP address, destination of the traffic, time of day, and contents of the packet. Firewalls can also be deployed within a network infrastructure to protect resources with higher protection needs such as medical information or financial records. They can even be deployed on hosts within a secured network in keeping with the belts and suspenders approach… protect the network… protect the host!
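A hedged sketch of what such rule sets can look like with Linux’s iptables (the addresses and ports here are made up, and the commands need root to actually apply):

```shell
# Default-deny inbound policy, then punch holes for legitimate traffic.
iptables -P INPUT DROP                                            # deny by default
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to outbound traffic
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                     # anyone may reach the web server
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT   # ssh only from the internal LAN
```

Real firewall rule bases get far more elaborate (time-of-day and content matching, as described above), but the allow/deny-by-port-and-source pattern is the core of it.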

Network Access Control discovers and evaluates endpoint compliance status, provisions the appropriate network access, provides remediation capabilities if needed, and continually monitors endpoints for changes in compliance status (, 2008). In other words, any device that connects to your network is checked to make sure that it conforms to your minimum requirements before it is allowed to use your protected resources. We (as network administrators) can take measures to limit who can use our network, such as making sure that unused wall jacks are not connected to the network or using MAC address filtering to determine who can get an IP address, but this will not stop a determined threat or the casual use of networked PDAs. Network Access Control devices proactively scan your network for new devices, and agents are delivered to the device wanting access. The end-user agrees to allow the agent to “attach” itself to the client, and when access is no longer needed the agent deletes itself from the host machine. Symantec calls this technology dissolvable agents!

Never underestimate the value of keeping your machines fully patched. Software updates can ensure that vulnerabilities are closed and cannot be used as an attack vector! That said, applying patches just to keep current is not always the best thing to do. Very often new bugs can be introduced into an otherwise stable environment. Understand what services a system is offering and patch the systems that are actually vulnerable. There’s no need to patch the httpd daemon if you’re not running (or haven’t installed) web services. Change management plans are a big part of this scenario!
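A quick way to act on that advice is to check whether a service is actually listening before worrying about its patches. A minimal sketch, assuming a Linux host with the iproute2 `ss` tool; port 80 is just the httpd example:

```shell
# Is anything listening on the web port? If not, the httpd patch can wait.
if ss -tln 2>/dev/null | grep -q ':80 '; then
    verdict="patch"     # a web service is live on this host
else
    verdict="skip"      # nothing listening on 80; no rush on this patch
fi
echo "$verdict"
```

Feed the result into your change management process rather than patching blindly.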

Access Control Lists (or ACLs) are a permission-based method for securing resources (very often relating to objects on a file system or in a database). In an ACL-based security model, when a subject requests to perform an operation on an object, the system first checks the list for an applicable entry in order to decide whether to proceed with the operation (, 2009). ACLs allow for greater control over access to files. In the standard POSIX model, there are owner, group and other permissions, each having read, write and execute attributes assigned to them… very restrictive, especially considering that only one user and one group can be assigned to a file/directory. With ACLs, the options are much more varied! You can have multiple users and multiple groups assigned to a file/directory. In addition, you have the following permissions attributes:

Figure 1. Available ACLs permissions attributes for OSX Server v10.5 (Heese, 2009)

NOTE: You can specify not only ALLOW permissions but also DENY permissions!
One thing to keep in mind when deploying ACLs is that not all file systems support them. Formatting your hard disk, writing data to disk and then discovering an unsupported file system can lead to a lot of wasted time!
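To make that concrete, here is a hedged sketch of granting one extra user read access; the username and path are made up, and the syntax depends on your platform:

```shell
# Mac OS X 10.5 (the system in Figure 1): add an ACL entry for 'alice'.
sudo chmod +a "alice allow read" /shared/report.txt

# Linux equivalent (the filesystem must be mounted with ACL support):
setfacl -m u:alice:r /shared/report.txt
getfacl /shared/report.txt     # list all entries, POSIX bits and ACLs alike
```

Either way the standard POSIX owner/group bits stay in place; the ACL entries layer on top of them.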

Virus protection is an overlooked aspect of file security. Very often people think in terms of “protecting my computer”. But it is more than that. Viruses can erase files, but Trojans can allow others to gain access to your computer (whether it’s a personal computer or a file server). Critical data such as credit card numbers are often stored in databases, and once a computer has been compromised, it’s only a matter of time before the data housed on that computer is lost. One thing to keep in mind when working on a server is never to browse the Internet (especially with root privileges). Much of the malware spread across the Internet takes advantage of vulnerabilities within certain OSes and browsers. Why take the risk? Yes, it’s a pain in the bottom, but think of all the hassles you’ll have to deal with should your host become compromised. To illustrate the point a little further, it has been recently reported that ATMs are being hit by some very sophisticated pieces of malware. Granted, it’s not the ATMs themselves that are being compromised but rather the hardware security modules (or HSMs) that encrypt and decrypt your PIN as it makes its way from the ATM to the bank clearinghouses. Specially configured malware can be installed on these devices, and it grabs the decrypted PINs out of memory and writes them to a log file that can be retrieved at a later date (Anderson, 2009).

The last item I want to touch on is log files. While not a security mechanism, they are something worthy of protecting. We often don’t put much thought into log files until there is a problem. Unfortunately, if your log files reside on the same host that’s been compromised, then you should assume that the log files have been altered. Why alter a log file? Just as many daemons spit lots of information to syslog, so will attempts (or more importantly FAILED attempts) to access a host be recorded. When an attacker is trying to compromise your system, one of the first things he will probably do is completely erase the log files, or erase evidence of his trespass out of those files. Moving your log files off of a host and onto a dedicated syslog server ensures that access can be properly evaluated without the fear that the logs may have been tampered with.
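With the classic BSD syslogd, forwarding to a dedicated log host is a one-line configuration change. A sketch, with loghost.example.com standing in for your actual server:

```shell
# Add a forwarding rule to /etc/syslog.conf; the @ prefix means
# "send to this remote host" in traditional syslogd syntax:
#
#   *.info;auth.*        @loghost.example.com
#
# Then signal syslogd to reread its configuration:
sudo kill -HUP "$(cat /var/run/syslogd.pid)"
```

Even if the host is later compromised, the copies already shipped to the log server remain trustworthy.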

Ultimately, security is NOT about set and forget. You must take an active role! It is not about one size fits all! No single solution will keep your host from being compromised! If your machine is out on the Internet long enough, it will get attacked. That’s not to say that the bad guys are looking for you specifically. Remember, we are dealing with computers; the bad guys let the computer do the work for them. Throwing as many obstacles as possible in the path of the cracker will discourage all but the most determined of individuals.


Anderson, N., (2009, April 15), PIN-grabbing malware compromises bank networks, Retrieved on May 11th, 2009 from

Heese, B., (2009, May 11), Available ACLs permissions attributes for OSX Server v10.5

Unknown, (2008, December), Symantec™ Network Access Control, Retrieved on May 4th, 2009 from

Unknown, (2009), Main: Syslog Security Tip, Retrieved on May 11th, 2009 from

Various, (2009, April 24), Access control list, Retrieved on May 6th, 2009 from

Various, (2009, May 4), Firewall, Retrieved on May 4th, 2009 from

A bastion host is a computer on the internal network that is intentionally exposed to attack (, 2009). The host may belong to your network, but it is also forward facing. It is intentionally placed in harm’s way, exposed so that the hosts that actually provide the service can remain protected. The bastion host provides a layer of protection that other devices such as a firewall or an intrusion detection system do not… it is the focus of attack. A firewall should provide rules that keep the attacker at bay, while the IDS will warn of and in some cases thwart attacks. BUT the bastion host WILL be attacked. It’s only a matter of time.

Just because the bastion host is expected to be attacked doesn’t mean that it should be put out there unprotected. The host still needs to be hardened! There are many things one can do to protect the bastion host.


Putting all of your bastion hosts into a protected network is your first line of defense. Because of the increased potential of these hosts being compromised, they are placed into their own sub-network in order to protect the rest of the network if an intruder were to succeed (, 2009). At no time should a bastion host have direct access to your protected resources! Internal (or protected) computers should only have access out to the bastion host. As part of a properly configured DMZ, routers/firewalls must be configured with ACLs (or Access Control Lists) so that only those events you (as the administrator) deem acceptable are allowed to happen. Destination and source addresses need to be evaluated, and rules need to be set in place to allow or deny access. Additionally, service ports need to be looked at as well. It may be acceptable for a source address to access port 80 (http) but not port 22 (ssh).
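That last rule (web yes, ssh no) might look like this on a Linux router in front of the DMZ. A hedged sketch: the bastion address 203.0.113.10 is made up, and applying the rules needs root:

```shell
# Traffic forwarded toward the bastion host: allow http, refuse ssh,
# and drop anything else destined for it by default.
iptables -A FORWARD -p tcp -d 203.0.113.10 --dport 80 -j ACCEPT
iptables -A FORWARD -p tcp -d 203.0.113.10 --dport 22 -j DROP
iptables -A FORWARD -d 203.0.113.10 -j DROP
```

On a dedicated router or commercial firewall the syntax differs, but the evaluate-source/destination/port logic is the same.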

OS & Patches & ACLs

One thing to keep in mind when running a bastion host is that the box itself needs to be hardened. The OS needs to be kept up to date. Many vendors progressively secure their OS through security updates. This may or may not be the right move. Vendors often roll multiple fixes into their updates… sometimes it’s best to compile your own binary to install, thus addressing just the one service affected by the vulnerability. Services that are not being used by the host should be disabled or (better yet) not installed… certain OSes provide for this (Linux), others don’t (Apple). If the host has a host-based firewall… turn it on and configure it… block services that must run but could compromise the safety of the host. Secure the box through the use of ACLs (both user based as well as service based). It is usually up to the system administrator to determine through testing which ACLs they need to modify to lock down the network application as thoroughly as possible without disabling the very features that make it a useful tool (, 2009).
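On a Linux box running systemd (a hedged, modern example; the service name below is just a placeholder), auditing and disabling an unneeded service looks like this:

```shell
# See which services are actually running before deciding what to cut:
systemctl list-units --type=service --state=running

# Stop and disable one the bastion host doesn't need (example name):
sudo systemctl disable --now cups.service
```

Better still, build the host from a minimal install so the service never lands on disk in the first place.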


Tools like Tripwire and Nessus all play a part in base-lining your system. Tripwire is an excellent tool for determining the state of a file system. In broad strokes, it does this through the use of MD5 checksums. In theory, no two files (or disk images) will have the same exact checksum; any change will result in a different checksum being produced. File integrity monitoring helps IT ensure the files associated with devices and applications across the IT infrastructure are secure, controlled, and compliant by helping IT identify improper changes made to these files, whether made maliciously or inadvertently (2009). So if an administrator runs md5sum against a file system and then goes back a week later and the checksums don’t match, either he’s not on top of change control OR the system has been compromised! Nessus is a penetration-testing tool. It looks at a database of known vulnerabilities and compares them with the versions of software running on your host. When it finds a version of software with a known vulnerability, it will alert you to that fact. Should you find a software defect on your system, it is imperative that you address the vulnerability through OS updates or patching and then re-baseline.
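The md5sum workflow described above can be sketched in a few lines of shell; the file and directory names are made up for illustration:

```shell
# 1. Record a baseline of checksums for the files you care about.
mkdir -p /tmp/baseline_demo
echo "config data" > /tmp/baseline_demo/app.conf
( cd /tmp/baseline_demo && md5sum app.conf > baseline.md5 )

# 2. Later, verify: md5sum -c exits non-zero if any file has changed.
if ( cd /tmp/baseline_demo && md5sum -c baseline.md5 >/dev/null 2>&1 ); then
    status="CLEAN"
else
    status="MODIFIED"
fi

# 3. Simulate tampering and re-check; the mismatch is now detected.
echo "evil payload" >> /tmp/baseline_demo/app.conf
if ( cd /tmp/baseline_demo && md5sum -c baseline.md5 >/dev/null 2>&1 ); then
    status2="CLEAN"
else
    status2="MODIFIED"
fi
echo "$status $status2"
```

Tripwire does this at scale, with a protected checksum database; the detection principle is the same.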

Log Files

Syslog servers and log analyzers play an important role. Network monitoring solutions fit into this category as well! Logs are a vital part of understanding how your system is running. Over the course of a few days or weeks, massive amounts of information can be collected. Log files can tell you who tried to log in and when (or perhaps more importantly, who failed to log in). They can tell you which files were accessed and by whom! They can tell you when a binary is having problems, whether through misconfiguration or perhaps a bug (Heese, 2009). A wonderful tool for analyzing your data/log files is Splunk. It’s fast and gives you the ability to drill down through your log files in a very intuitive manner. Splunk can be configured to send alerts when certain criteria have been met. Sure, you could do all this through shell scripts, BUT you’d only be looking at the log files on one host! Because Splunk can act as a warehouse for all your system logs, it can be set to look at events across various systems, which when combined can give you a true picture of your network/hosts.
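Even the humble single-host shell script version is worth seeing. This sketch fabricates a few sshd-style log lines and counts failed versus successful logins:

```shell
# Build a small simulated auth log (the entries mimic typical sshd syslog
# output; the file name is made up for the demo).
log=/tmp/auth_demo.log
cat > "$log" <<'EOF'
May 11 09:01:02 host sshd[101]: Failed password for root from 10.0.0.5 port 4022 ssh2
May 11 09:01:05 host sshd[101]: Failed password for root from 10.0.0.5 port 4023 ssh2
May 11 09:02:10 host sshd[102]: Accepted password for bill from 192.168.1.20 port 5100 ssh2
EOF

# Count the two kinds of events.
failed=$(grep -c 'Failed password' "$log")
accepted=$(grep -c 'Accepted password' "$log")
echo "failed=$failed accepted=$accepted"    # → failed=2 accepted=1
```

Splunk earns its keep by running queries like this across every host’s logs at once and alerting when the failure count spikes.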


You don’t become strong if you don’t learn! Systems that are exposed to the world need to be monitored. If you don’t monitor them, compromises will happen and you may not even know about them. A compromised host is not a matter of ‘if’ but rather ‘when’. Learning how your host was compromised can lead to better methods of securing it. Why leave it unprotected? Monitoring systems are essential to the well-being of your systems. Why not take advantage of these automated systems? Spend the time to tune them. The more effort you put in, the better the result will be, and the fewer false positives your IDS will flag! Knowing when an event is happening puts you back in control!


Dillard, K., (2009), Intrusion Detection FAQ: What is a bastion host?, Retrieved on March 16th, 2009 from

Heese, B., (2009, March 11), Log Management, Retrieved on March 17th, 2009 from

Unknown, (2009), Bastion Hosts, Retrieved on March 17th, 2009 from

Unknown, (2009), File Integrity Monitoring with Tripwire, Retrieved on March 17th, 2009 from

Various, (2009, March 11), DMZ (computing), Retrieved on March 17th, 2009 from


There are many things in daily life that depend on something else to work. A car needs gas. A light bulb needs electricity. And we all need air to breathe. Computers can be simple like a calculator or complex like a Cray supercomputer. Most of our computing needs fall somewhere between the two. Most of us rely on the Internet on a daily basis, whether for checking the latest sports scores or researching term papers. What most people don’t think about is what’s involved in protecting the resources out on the Internet.

In computing terms CIA stands for:

  1. Confidentiality
  2. Integrity
  3. Availability

These three things make up the basic stepping-stones when it comes to securing data stored on a shared resource (of which the Internet is one). Without these three things the Internet would be useless. Let’s take a look, for example, at an online banking operation. How do these three objectives relate to its operation?

Confidentiality is about making sure data is only accessed by individuals who have been granted permission to access it (keeping data private). In the online banking scenario, many banks (and other security-minded websites) display an image after you enter your user ID. This image is selected by you when setting up your online account. If you don’t see your image, then you might think twice about entering your password. Many phishers are adept at making their sites look authentic. Underpinning the goal of confidentiality are authentication methods, like user IDs and passwords, that uniquely identify a data system’s users (, 2006). Ultimately, one needs to ensure not only that you are providing the right credentials to access the data but also that the resource is actually ‘who’ you think it is!

One other area that needs to be examined with regard to confidentiality is the use of secure transmissions. HTTP transmits data in clear text. This is problematic in two areas:

  1. Passing of your credentials in the clear. This is especially troublesome as any one that can sniff the network could grab those credentials and use it to manipulate your funds.
  2. In terms of privacy, if encryption is not used during the transfer of data anyone sniffing the network can look into your private records. Again this is something that is not desirable.

SSL goes a long way toward providing this security. SSL (or Secure Socket Layer) enables the data that passes between the bank and your browser to be encrypted.
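As a toy illustration of that encryption (hedged: real SSL negotiates keys per session, while this reuses one shared passphrase), openssl can show the difference between what an eavesdropper sees and what the key holder recovers:

```shell
# Encrypt a message with a shared passphrase; this stands in for the
# data leg of an SSL session (the passphrase and message are made up).
msg="transfer 100 dollars to savings"
echo "$msg" | openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-secret -base64 > /tmp/wire.txt

# Anyone sniffing the "wire" sees only base64 ciphertext:
cat /tmp/wire.txt

# The receiving end, holding the key, recovers the plaintext:
recovered=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-secret -base64 < /tmp/wire.txt)
echo "$recovered"
```

Without the key, the sniffer described in point 1 above gets nothing useful.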

In terms of integrity, this is making sure that the data remains intact and that changes to the data can only be made by authorized personnel. There is the notion that an asset should be trusted; that is, there is an expectation that an asset will only be modified in appropriate ways by appropriate people (, 2004). Data is only useful if it can be relied upon as accurate. System administrators need to ensure that the data has not been tampered with. Accidental or intentional manipulation of data is a very bad thing. This is where things such as ACLs (or Access Control Lists) and other permission models come into play. ACLs can be used to control access to file systems or, more importantly, databases.

In addition to who has access to the data, one needs to check that the data being captured is accurate. Error checking must be an integral part of data entry (garbage in… garbage out). Without this, one could easily see a situation where an online banking user could pay a bill with funds that they don’t have (or vice versa… they want to pay a bill and the bank’s data does not yet reflect yesterday’s deposit). There is another aspect of integrity that needs to be discussed, and that is the validity of the data should something actually happen to it. Data loss happens, whether deliberate or not. Ultimately, what is of utmost importance is that the data can be restored back to its trusted state.
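A minimal sketch of that garbage-in/garbage-out check, using a made-up validate_amount helper (real banking code validates far more than this):

```shell
# Reject any payment "amount" containing characters other than digits
# and a decimal point, before it ever reaches the back end.
validate_amount() {
    case "$1" in
        ''|*[!0-9.]*) echo "invalid"; return 1 ;;
        *)            echo "ok" ;;
    esac
}

r1=$(validate_amount "100.50")        # a well-formed amount
r2=$(validate_amount "100;rm -rf /")  # garbage (and hostile) input
echo "$r1 $r2"
```

Cheap checks like this at the point of entry keep bad values out of the trusted data set in the first place.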

Availability is making sure that the data remains accessible. Data is no good if you can’t get at it. This is the first thing that network/system administrators learn: your servers need to stay up all the time. In the banking industry, because this data needs to be accessible whenever the customer wants it, system administrators need to think in terms of high availability. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades (, 2009). In today’s fast-paced world of Internet banking, a bank whose customers were unable to get to their money would soon be a bank without customers.

Computer/network security is a moving target. Vectors of attack change on a daily basis, and one can only plan defenses based on what is known today. However, using the above-mentioned criteria, network administrators can apply what is known about attacks, and about how valuable their data is, to properly plan defenses for the future.


Purdue University, (2004, Feb. 23), RASC: Confidentiality, Integrity and Availability (CIA), Retrieved on January 19, 2009 from

Unknown, (2006, April 24), Confidentiality, Integrity, Availability (CIA), Retrieved on January 19, 2009 from

Various, (2009, January 20), Information security, Retrieved on January 19, 2009 from