bill's blog



A bastion host is a computer on the internal network that is intentionally exposed to attack (Dillard, 2009). The host may belong to your network, but it is also forward facing. It is intentionally placed in harm's way so that the hosts that actually provide the service can remain protected. The bastion host provides a layer of protection that other devices, such as a firewall or an intrusion detection system, do not: it is the focus of attack. A firewall should provide rules that keep the attacker at bay, and the IDS will warn of and in some cases thwart attacks, but the bastion host WILL be attacked. It's only a matter of time.

Just because the bastion host is meant to be attacked doesn't mean it should be put out there unprotected. The host still needs to be hardened! There are many things one can do to protect the bastion host.


Putting all of your bastion hosts into a protected network is your first line of defense. Because of the increased potential for these hosts to be compromised, they are placed into their own sub-network in order to protect the rest of the network if an intruder were to succeed (Various, 2009). At no time should a bastion host have direct access to your protected resources! Internal (or protected) computers should only have access out to the bastion host. As part of a properly configured DMZ, routers/firewalls must be configured with ACLs (Access Control Lists) so that only those events you, as the administrator, deem acceptable are allowed to happen. Destination and source addresses need to be evaluated, and rules need to be set in place to allow or deny access. Service ports need to be looked at as well: it may be acceptable for a source address to access port 80 (HTTP) but not port 22 (SSH).
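That port-80-yes, port-22-no idea can be sketched as firewall ACLs. This is a minimal sketch assuming iptables on a Linux router/firewall; the addresses (192.0.2.10 for the bastion, 10.0.0.0/8 for the inside) are made-up examples, and by default the function only prints the rules instead of applying them.

```shell
# DMZ ACL sketch: web traffic may reach the bastion, outside ssh may not,
# and the bastion never initiates connections into the protected network.
# IPT defaults to echoing the rules for a safe preview; set IPT=iptables
# (as root) to actually apply them.
dmz_rules() {
  IPT="${IPT:-echo iptables}"
  $IPT -A FORWARD -d 192.0.2.10 -p tcp --dport 80 -j ACCEPT  # allow http in
  $IPT -A FORWARD -d 192.0.2.10 -p tcp --dport 22 -j DROP    # deny outside ssh
  $IPT -A FORWARD -s 192.0.2.10 -d 10.0.0.0/8 -j DROP        # bastion stays out of the inside
}
```

Previewing first (`dmz_rules` with the default echo) is a cheap way to sanity-check the rules before locking yourself out of the box.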

OS & Patches & ACLs

One thing to keep in mind when running a bastion host is that the box itself needs to be hardened. The OS needs to be kept up to date. Many vendors progressively secure their OS through security updates. This may or may not be the right move: vendors often roll multiple fixes into their updates, so sometimes it's best to compile your own binary, addressing only the one service affected by the vulnerability. Services that are not being used by the host should be disabled or, better yet, not installed; certain OSes provide for this (Linux), others don't (Apple). If the host has a host-based firewall, turn it on and configure it to block services that must run but could compromise the safety of the host. Secure the box through the use of ACLs (both user based and service based). It is usually up to the system administrator to determine through testing what ACLs they need to modify to lock down the network application as thoroughly as possible without disabling the very features that make it a useful tool (Dillard, 2009).
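A quick way to audit the "disable what you don't use" advice is to compare what is actually listening against a short allow-list. A hedged sketch in shell; the allow-list and the `ss` pipeline in the comment are assumptions about your particular setup.

```shell
# check_ports: flag listening ports that are not on the allow-list.
# Feed it one port per line, e.g. from:
#   ss -tln | awk 'NR>1 {print $4}' | sed 's/.*://'
check_ports() {
  allowed="$1"                        # space-separated allow-list, e.g. "22 80"
  while read -r port; do
    case " $allowed " in
      *" $port "*) ;;                 # expected service; ignore
      *) echo "unexpected listener on port $port" ;;
    esac
  done
}
```

Anything the script flags is either a service to shut off, or one to add to the allow-list deliberately.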


Tools like Tripwire and Nessus both play a part in base-lining your system. Tripwire is an excellent tool for determining the state of a file system. In broad strokes, it does this through the use of MD5 checksums. In theory, no two files (or disk images) will have the exact same checksum; any change will result in a different checksum being produced. File integrity monitoring helps IT ensure the files associated with devices and applications across the IT infrastructure are secure, controlled, and compliant by helping IT identify improper changes made to these files, whether made maliciously or inadvertently (Unknown, 2009). So if an administrator runs md5sum against a file system and then goes back a week later and the checksums don't match, either he's not on top of change control OR the system has been compromised! Nessus is a vulnerability scanner: it looks at a database of known vulnerabilities and compares them with the versions of software running on your host. When it finds a version of software with a known vulnerability, it will alert you to that fact. Should you find a software defect on your system, it is imperative that you address the vulnerability through an OS update or patch and then re-baseline.
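In broad strokes, that Tripwire workflow can be approximated with md5sum alone. A bare-bones sketch (scratch paths only, and no substitute for the real tool, which also tracks permissions, owners, and inode data):

```shell
# Checksum a tree, store the baseline, verify it later.
# Any mismatch means uncontrolled change -- or a compromise.
baseline() { find "$1" -type f -exec md5sum {} + > "$2"; }
verify()   { md5sum -c --quiet "$1"; }   # prints only files whose checksum changed

# Example against a scratch directory:
dir=$(mktemp -d); sums=$(mktemp)         # keep the baseline OUTSIDE the tree
echo "hello" > "$dir/motd"
baseline "$dir" "$sums"
echo "tampered" > "$dir/motd"
verify "$sums" || echo "checksum mismatch: check change control, then assume compromise"
```

Storing the baseline (and ideally the md5sum binary itself) on read-only media matters: a baseline an intruder can rewrite proves nothing.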

Log Files

Syslog servers and log analyzers play an important role; network monitoring solutions fit into this category as well! Logs are a vital part of understanding how your system is running. Over the course of a few days or weeks, massive amounts of information can be collected. Log files can tell you who tried to log in and when (or, perhaps more importantly, who failed to log in). They can tell you which files were accessed and by whom! They can tell you when a binary is having problems, either through misconfiguration or perhaps a bug (Heese, 2009). A wonderful tool for analyzing your data/log files is Splunk. It's fast and lets you drill down through your log files in a very intuitive manner. Splunk can be configured to send alerts when certain criteria have been met. Sure, you could do all this through shell scripts, BUT you'd only be looking at the log files on one host! Because Splunk can act as a warehouse for all your system logs, it can be set to look at events across various systems, which when combined can give you a true picture of your network and hosts.
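For contrast, here is the shell-script version of one such question: failed ssh logins per host, run against syslog data merged off every machine. A small sketch that assumes the standard syslog line layout ("Mon DD HH:MM:SS host daemon[pid]: message"); Splunk makes this kind of report point-and-click instead.

```shell
# Count "Failed password" lines per originating host ($4 in syslog format).
failed_logins() {
  awk '/Failed password/ { count[$4]++ } END { for (h in count) print h, count[h] }'
}
```

Usage: `cat /var/log/remote/*.log | failed_logins | sort -k2 -rn` would rank hosts by failure count, assuming your server files logs under a directory like that.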


You don't become strong if you don't learn! Systems that are exposed to the world need to be monitored. If you don't monitor them, compromises will happen and you may not even know about it. A compromised host is not a matter of 'if' but rather 'when'. Learning how your host was compromised can lead to better methods of securing it, so why leave it unprotected? Monitoring systems are essential to the well-being of your systems, so take advantage of these automated tools and spend the time to tune them. The more effort you put in, the better the result will be, and the fewer false positives your IDS will flag! Knowing when an event is happening puts you back in control!


Dillard, K. (2009). Intrusion Detection FAQ: What is a bastion host? Retrieved March 16, 2009 from

Heese, B. (2009, March 11). Log Management. Retrieved March 17, 2009 from

Unknown. (2009). Bastion Hosts. Retrieved March 17, 2009 from

Unknown. (2009). File Integrity Monitoring with Tripwire. Retrieved March 17, 2009 from

Various. (2009, March 11). DMZ (computing). Retrieved March 17, 2009 from

One of the most important tools any systems administrator has at their disposal is their system's log files. Unfortunately, these files are often overlooked, forgotten, or, worse yet, ignored. However, they contain valuable information! Log files can tell you who tried to log in and when (or, perhaps more importantly, who failed to log in). They can tell you which files were accessed and by whom! They can tell you when a binary is having problems (either through misconfiguration or perhaps a bug). The point is, the information is there. Time to go through these files, however, is in short supply. One could always grep the files where the problem may have been captured, but this is still a very manual process.

Most hosts are configured to log their files to a centralized directory, or to one configured by the service that is generating the logs. While this is great for checking on the overall health of a single system, it can't provide you with the global picture. This is where log management tools come into play. Simple syslog servers collect the logs, manage them centrally, and may offer some basic reports. Collecting syslogs in a centralized location also adds some security, insofar as the log files are stored off the host generating them. This makes it much harder for hackers to alter the log files to hide their presence.

So… What tools are out there?

Syslogd is available on most *NIX systems, and making it available to other clients is a pretty straightforward process. But these daemons really don't allow for the analysis of the data you've collected. This is where Splunk comes into play. Splunk's software is a specialized data-mining and search tool that digests log files and organizes information so administrators can see how a particular event affects different programs (Shankland, 2005).
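Pointing clients at a central server really is just a line or two. The fragment below assumes the classic sysklogd syntax; "loghost" is a placeholder for your server's name, and other syslog daemons (rsyslog, syslog-ng) use different directives and flags.

```
# /etc/syslog.conf on each client -- forward everything to the log host.
*.*    @loghost

# On the server, remote reception must be enabled; with classic
# sysklogd that means starting the daemon as: syslogd -r
```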

So what can Splunk do?

Splunk's claim to fame is that it indexes all your data, and you can use those indexes to search across all of it. It normalizes your data, so different time formats are no longer an issue. Searching through your data files is as easy as typing an error code into the search field; this will return all results from all the hosts you are monitoring.

Sure, you can do a lot of what Splunk does by simply grepping log files, but once the results are published you can click on any of the indexed data and drill down, narrowing your search with each click! Splunk also allows you to generate reports from your data sets, such as showing the search results for 'root' and 'auth' over time. Simple… yes, I know! Splunk can also send out alerts on a schedule; these alerts can trigger shell scripts, generate RSS feeds, or send email messages. It is a feature-rich tool, and the website has a lot of useful demos and white papers. For more information see
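That "root and auth over time" report might look something like this in Splunk's search language (a hypothetical query; the exact syntax and field names depend on your Splunk version and how your data was indexed):

```
"Failed password" root | timechart count
```

The pipe syntax is the point: each click in the UI is really just appending another filter or reporting command to a search like this one.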

To log or not to log?

Sometimes it's not a question of whether you should set up a syslog server… sometimes it's mandated. Regulations such as SOX, PCI DSS, HIPAA, and many others require organizations to implement comprehensive security measures, which often include collecting and analyzing logs from many different sources (Various, 2009).

Gotchas!

Some things to be aware of: log files are sent in cleartext, which may be considered a security vulnerability. Newer versions of syslogd are incorporating SSL support to overcome this shortcoming! Data can be sent via TCP but is more likely sent via UDP (port 514), so if you're using host-based firewalls it's important that you open the right ports.

Syslogs are only one part of a network monitoring solution, but when combined with other tools they can quickly give system administrators the information they need to correct the problems they come across!


Various. (2009, February 4). Syslog. Retrieved March 7, 2009 from

Shankland, S. (2005, August 8). Splunk delves into log-search automation. Retrieved March 7, 2009 from

Learning from your mistakes is critical to pushing past them and benefiting from them.

  1. Don't connect critical machines directly to the Internet.
  2. Don’t ignore the obvious – look at the bigger picture.
  3. Don't set and forget! Security is ongoing.

The big takeaway from these three scenarios can be broken down as follows:

Never surf the Web with a privileged account.

This really is a common-sense thing. Unfortunately, many OS vendors make the first account set up on the box an administrative user (a privileged account). Microsoft does it, Apple does it, and even Ubuntu does it. Fortunately, many of these same vendors see the problems associated with this and have disabled root by default. However, many versions of Linux still enable root by default. Take the time to set up a non-privileged account and use it. NEVER surf the web as root or an admin!

Make sure your machine is up-to-date (OS, App, and AV).

Make sure that your machine is patched and up to date. From an OS perspective, one needs to have a change management plan in place; there's nothing worse than patching a critical machine only to find that, upon rebooting it, your services won't start. Many users get anti-virus software as part of the machine purchase, but these vendors only provide a very short period of free AV definition updates. This is where ISPs could come into play: one thing I think many Internet providers should make mandatory is AV software, with the cost included as part of the user's monthly access charge. In addition, users should regularly check for rootkits. In many ways a machine compromised by a rootkit is much worse off than one infected with a virus, even if the virus does wipe your hard drive clean… You do have backups, right? Make sure that you're really running the application you intend to. Kernel rootkits can hide the running of compromised applications as well as whole parts of the file system, making it impossible to truly know what applications are running on your machine.
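One classic rootkit tell is a PID visible in /proc but missing from ps output. Below is a crude sketch of that cross-check, nothing like a full rkhunter or chkrootkit run; the comparison is kept as a pure function so it can be tested against saved snapshots, and the live snippets in the comments are racy if processes start or exit between the two snapshots.

```shell
# hidden_pids: PIDs present in the first listing but absent from the second.
# Uses bash process substitution; comm(1) needs both lists sorted.
hidden_pids() {
  # $1: newline-separated PIDs from /proc; $2: newline-separated PIDs from ps
  comm -23 <(echo "$1" | sort) <(echo "$2" | sort)
}

# Live usage (take the snapshots as close together as possible):
#   proc_pids=$(ls /proc | grep -E '^[0-9]+$')
#   ps_pids=$(ps -e -o pid= | tr -d ' ')
#   hidden_pids "$proc_pids" "$ps_pids"
```

A kernel-level rootkit can of course lie to /proc too, which is why booting from known-good media is the only truly trustworthy check.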

Know where your data resides.

Perhaps a better way of looking at it is: know what data is on your machine. More and more these days we hear of private data being lost; it seems as if it's on a daily basis. Protect your hard drives! PGP offers whole-drive encryption. Yes, it does mean setting up a PKI, but one substantial loss could cost more in lawsuits than the time, effort, and money needed to set it up. Let's look at the latest in military data loss: on January 3, 2008, an Air Force band member at Bolling Air Force Base reported a laptop containing personal data on 10,501 Air Force members missing from his home (Unknown, 2008). Now that tops it all: a musician with sensitive information. He's someone who may have a secret clearance… but really, what does a musician need with social security numbers?

Check your logs or run a syslog server.

UNIX logs contain a vast amount of data, and depending on the verbosity that is set, it can be overwhelming. Setting up a syslog server and then filtering the data is important. Splunk is a great tool for this, but be forewarned: there is a pretty steep learning curve. Make sure that the syslog server continues to run. I can't tell you how often the emails just stop and you're lulled into a false sense of security because you're not getting emails. Email notification needs to be tuned. You don't want emails for every little thing, as it won't take long before you start ignoring those emails, and before you know it the truly important ones have slipped past you.
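That tuning usually starts with a threshold, so routine noise never becomes an email in the first place. A minimal sketch; the threshold value and the mail hook are placeholders for whatever fits your environment.

```shell
# alert_if_over: speak up only when failures cross the threshold.
# $1: threshold, $2: log file to scan.
alert_if_over() {
  threshold="$1"
  count=$(grep -c 'Failed password' "$2")
  if [ "$count" -gt "$threshold" ]; then
    echo "ALERT: $count failed logins in $2"   # swap echo for mail(1) in production
  fi
}
```

Run it from cron against the last rotation window, and raise the threshold until a quiet week produces zero messages.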

Insecure service running in an insecure place.

Double-check your configurations; make sure that the services you are running on your box are truly needed for what the server is intended to do. There's no reason to run NFS on a publicly available machine. If you have to have shares set up, do it in a secure fashion: tunnel your file transfers over ssh or use scp. Make sure you look over your config files before placing your machine on the Internet. We all have fat fingers from time to time, and it's best to find out BEFORE you run into trouble.


One thing to always keep in mind is… trust your instincts. You know your machines better than anyone else. You know how they react day to day. You know the 'quirks' of each machine (it slows down every day just before lunch). Have an emergency response plan written out and available. Who do you call, and when? How much time are you allotted to fix a mission-critical machine before calling for help? Along with that goes an understanding from management about how blame will be assessed. The Internet is truly the Wild West. It's been said that the Internet mimics the real world, BUT in actuality it can be far more dangerous. The anonymity the Internet provides is vast, and tracking down perpetrators can be exceedingly difficult; even when they are found, the different jurisdictions around the world can make it extremely hard to prosecute.


Unknown. (2008, January 23). TrustedID Identity Theft Data Breach Alerts » stolen laptop. Retrieved March 8, 2008 from