bill's blog


Thesis Statement

In today’s interconnected world, not a week goes by without news of another security breach and the loss of untold thousands of data records containing PII (Personally Identifiable Information). Tools such as Nmap and Ettercap are often used in the reconnaissance for, and execution of, these breaches. Yet these tools, however maligned, have a legitimate place in systems and network administration.

Introduction

As with any tool, there is always a downside to its use. A hammer can be used to frame a house, yet it can also be used to break a car window… and even that can be a positive thing in an emergency. These so-called hacking tools often start out as legitimate applications that provide valuable help to network administrators. It is through their misuse that they gain their negative connotations. Corporate policies often ban their use on protected networks, yet for network and system administrators these tools can make the job so much easier… the right tool for the right job. Ettercap, Nmap and Wireshark are all valuable tools designed to help administrators troubleshoot various network problems.

Ettercap is described as a suite of tools for man-in-the-middle attacks on LANs. It was originally released on January 25th, 2001 as a public beta. At that time, Ettercap took advantage of the ncurses library, which gave programmers the ability to write text-based user interfaces, making the application somewhat more user friendly than purely CLI-based applications. Originally Ettercap’s feature set was pretty bare: it allowed for the sniffing of IP-based traffic along with MAC and ARP sniffing, plus the injection of handcrafted packets into an established connection. Today the current version, NG-0.7.3 (released May 29, 2005), has a very robust feature set. It allows for the sniffing of live connections, content filtering on the fly and many other interesting tricks. It supports active and passive dissection of many protocols (even ciphered ones) and includes many features for network and host analysis (ettercap.sourceforge.net, 2010). It has an OS fingerprint database and a password collector, and its usefulness can be expanded through its plug-in architecture.

More information on Ettercap’s feature set can be found at http://ettercap.sourceforge.net/history.php

Nmap is a service and network exploration tool. In the right hands it can be used to perform security audits, checking for open ports and software versions and giving system administrators the opportunity to patch vulnerable services. In the wrong hands it can be used to scan a network looking for vulnerable services and hosts to take advantage of. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime (nmap.org, 2010). Nmap was first released in September of 1997 and has continued to have strong development support. The current version, nmap-5.35DC1, was released on July 16th, 2010. Nmap can be used from the command line as well as through various GUIs for Linux, OS X and Windows. One of the nice things about Nmap is that it is well documented. Nmap can not only port scan (both TCP and UDP) a single host… it can perform ping scans of an entire network, which is great for discovering unknown hosts. It has the ability to map out IP filters, firewalls, and routers! It can “see” past NATs. Additionally it can do OS detection as well as software version detection.

More information on Nmap’s feature set can be found at http://nmap.org/changelog.html

ETTERCAP

So why Ettercap? The focus of this paper is the use of so-called hacking tools for legitimate purposes, so one needs to look at what the tool does and how it can be used in a constructive way. This application really shines in switched environments because it neutralizes the benefits of a switch. I think this needs a little explanation!

In the good old days of non-switched networks, we could attach a network sniffer to a hub and pick out the Ethernet traffic between two machines without much effort. Why? Because hubs “broadcast” all incoming traffic to all ports on the device. It was then the responsibility of each host to grab the packets intended for itself and act upon them. While this may seem like a good thing, it unfortunately is not. Because a hub passes traffic to all of its ports at once, many different machines could respond to incoming traffic at the same time. This leads to packet collisions, forcing hosts to retransmit, which in turn causes network latency and slowness. In an effort to combat this problem, networking vendors came up with switches.

Fundamentally speaking, switches direct traffic between the incoming port and the port that the intended host is connected to. A switch does this by caching the MAC addresses (Data Link Layer) of all hosts connected to it. Additionally, from a logical perspective, in order for machines to communicate via IP (Network Layer), the switch needs to match a MAC address to an IP address. The Address Resolution Protocol (ARP) is used to associate IP addresses with MAC addresses. Each system maintains a database of previously learned IP-to-MAC mappings, known as the ARP cache (Norton, 2004). This ARP cache is consulted to pass packets from one host directly to another without third-party hosts “looking” in on the traffic.
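Before poisoning anything, it helps to know what a healthy ARP cache looks like. Here’s a quick Python sketch (Linux-specific, since it reads /proc/net/arp… on other platforms arp -a will show you the same thing) that dumps the IP-to-MAC mappings a host has learned:

#!/usr/bin/env python3
# Dump the local ARP cache on a Linux host.
# /proc/net/arp columns: IP address, HW type, Flags, HW address, Mask, Device
with open("/proc/net/arp") as arp_cache:
    next(arp_cache)  # skip the header row
    for entry in arp_cache:
        fields = entry.split()
        ip, mac, device = fields[0], fields[3], fields[5]
        print(f"{ip:<15} {mac} ({device})")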

ETTERCAP Demo

Let’s say that we are having a problem between a server and a client. The client can’t gain access to the server’s resources. It seems as though authentication is not happening, but we need to be sure. Could this be a networking issue? Based on what we already know about switched environments, we need to overcome the benefits of the switch by manipulating (poisoning) the ARP cache on the hosts involved. Enter Ettercap! Ettercap relies heavily on ARP spoofing. By using this technique you can fool target machines into sending data through your attacking machine and then you can sniff it on your attacking machine (Garg, 2005). With it we can execute a Man in the Middle (MITM) attack to sniff the traffic between the two machines. Yes… I know this could be accomplished with port monitoring, but that assumes you have a smart switch.
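Ettercap automates all of this for us, but to show what is happening under the hood, here is roughly what a single forged ARP reply looks like when hand-crafted with the Python scapy library. Every address below is made up for illustration, and this should only ever be pointed at a lab network you own:

from scapy.all import ARP, send

# Tell 10.0.1.15 that 10.0.1.20 now lives at OUR MAC address.
# op=2 means this is an ARP reply; all values are illustrative.
forged = ARP(op=2,
             psrc="10.0.1.20",            # the IP we are impersonating
             hwsrc="00:0c:29:aa:bb:cc",   # the "attacking" machine's MAC
             pdst="10.0.1.15",            # the victim's IP
             hwdst="00:0c:29:11:22:33")   # the victim's MAC
send(forged, verbose=False)
# A real MITM sends these continuously, in both directions, while
# forwarding the traffic so the victims never notice an outage.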


Figure 1: Output of ifconfig.
NOTE: the MAC address (HWaddr) of the “attacking” computer.


Figure 2: If your computer only has one NIC, select Unified sniffing…


Figure 3: Make sure to select the proper Ethernet adapter.


Figure 4: After selecting the proper network interface, you need to scan your network looking for hosts for Ettercap to act upon.


Figure 5: Shows the results of our network scan.
NOTE: The last line of output shows us that Ettercap has found 7 hosts on this network segment.


Figure 6: Next select the two hosts that you want to capture the traffic between. In this case we chose 10.0.1.15 and 10.0.1.20.

Figure 7: Next we need to let Ettercap do its work. Under the Mitm pull-down menu, select Arp poisoning…


Figure 8: Note the last line of output. The two selected hosts have been poisoned.


Figure 9: Shows some of the output from Wireshark on the “attacking” machine.
NOTE: We can see the traffic passing between our two selected hosts. Also note the Ethernet II line in the middle pane. We can clearly see that the MAC of the destination host is that of the “attacking” machine.


Figure 10: Once we finish collecting the packet captures, be sure to “reset” the network by selecting Stop mitm attack(s) from the Mitm pull-down menu.

Getting back to our problem, we can see that traffic is passing back and forth between the client and our LDAP server. So clearly it’s not a connectivity issue. The problem must lie somewhere else.

NMAP Explained

Mapping a network has many benefits. The two biggest are understanding which machines are actually connected to your network and which services/resources are being offered up to client machines. Gordon “Fyodor” Lyon, Nmap’s original developer, once wrote that the idea is to probe as many listeners as possible, and keep track of the ones that are receptive or useful to your particular need (Lyon, 1997). I think that one sentence says it all… receptive or useful to YOUR particular needs! One needs to realize that this software was, and is, used to find targets of opportunity. A hacker attaches himself or herself to a network and then looks for ways to further penetrate or compromise hosts and the network they reside on. Testing for compliance can be one of the most important detective security controls you perform in an enterprise infrastructure (Orebaugh, 2008). One thing to keep in mind is that Nmap does not compromise a host in any way! It merely finds machines and the services they are running. When used in conjunction with host vulnerability assessment tools such as Nessus, holes can be discovered and then exploited. This can then be taken a step further, and other tools can be used to compromise the intended victim. Nmap uses many different methods to determine whether a host is active and which ports are open. There are far too many to detail here, so I’ll cover only a few.

First up is the ping sweep. This scan really can’t look for open ports on a host… just which IPs are in use on a network.

Next is TCP connect() scanning, the most basic form of TCP scanning. The connect() system call provided by your operating system is used to open a connection to every interesting port on the machine. If the port is listening, connect() will succeed; otherwise the port isn’t reachable (Lyon, 2008).
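In Python, a bare-bones connect() scan is only a few lines. A sketch against a handful of common ports (the target address is made up):

import socket

host = "10.0.1.24"  # illustrative target
for port in (22, 25, 80, 443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    # connect_ex() returns 0 only if the full three-way handshake succeeds
    state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
    print(f"{port}/tcp {state}")
    s.close()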

Moving on to the TCP SYN scan… This technique creates a half-open TCP connection. Using this method we send a SYN segment and, if an ACK is received, then we have detected an active port on the target machine, and we send a RST to close the connection promptly. If we receive an RST instead of an ACK, then the scanned port is not active (Lujambio, 2001).

Finally there is the TCP FIN scan. It is really helpful when dealing with firewalls. This scan is accomplished by sending TCP segments with the FIN bit set in the packet header. The RFC 793 expected behavior is that any TCP segment with an out-of-state flag sent to an open port is discarded, whereas segments with out-of-state flags sent to closed ports should be handled with a RST in response (mitre.org, 2010). There is a downside to this type of scan: open ports are only inferred. But the benefit of being able to get past a firewall outweighs the extra work of determining the true status of the host.
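Nmap implements these probes internally, but the half-open SYN technique is easy to sketch with the Python scapy library (the target is made up, and crafting raw packets requires root):

from scapy.all import IP, TCP, sr1, send

host, port = "10.0.1.24", 80  # illustrative target
reply = sr1(IP(dst=host) / TCP(dport=port, flags="S"), timeout=1, verbose=False)

if reply is None:
    print("filtered (no response)")
elif reply.haslayer(TCP) and reply[TCP].flags & 0x12 == 0x12:
    print("open")  # a SYN/ACK came back
    # tear the half-open connection down promptly, just as the scan describes
    send(IP(dst=host) / TCP(dport=port, flags="R"), verbose=False)
else:
    print("closed")  # an RST (or anything else) means no listener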

NMAP Demo

So let’s look at a legitimate network need that Nmap can solve fairly quickly. Many network-based devices come with DHCP turned on so that you can start using them right out of the box without having to configure the networking side of things. The problem is finding out which IP address the box was actually assigned; this leaves the network administrator guessing as to where on the network their new toy is. In addition to having DHCP turned on by default, most of these devices have a web interface that runs on port 80, and therein lies the key to finding the device on the network.

Let’s take a look at how this works in practice. Say I’m installing a new printer on my network. We need to make sure the device has a static IP, but out of the box it’s set up to use DHCP. We know that configuring the printer is much easier through the web interface than the front panel. Knowing this basic information, we can craft an Nmap scan to look for all hosts that have port 80 open. We’ll also want to know what is running on each device so that we can figure out exactly which one is our HP printer. Lastly, we know that the printer is installed on the 10.0.1.0/24 network. Let’s craft a simple nmap command to find our printer.

nmap -sV -T3 -p80 -sT 10.0.1.0/24

Looking at the above command, the -sV flag will provide the version number of the service running on each found device. The -T3 flag sets the timing of the scan… or in other words, how intrusive we want the scan to be. The -p80 flag tells nmap to look only at port 80. Next we tell Nmap to use the basic TCP connect scan with the -sT flag. And lastly we tell Nmap what network it should scan (10.0.1.0/24). So let’s run our scan!

endeavour:~ bheese$ nmap -sV -T3 -p80 -sT 10.0.1.0/24

Starting Nmap 4.76 ( http://nmap.org ) at 2010-09-12 11:03 EDT
Interesting ports on (10.0.1.2):
PORT STATE SERVICE VERSION
80/tcp open http 3Com Baseline 2816 switch http config
Service Info: Device: switch

Interesting ports on (10.0.1.15):
PORT STATE SERVICE VERSION
80/tcp closed http

Interesting ports on 10.0.1.24:
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.2.14 ((Unix) mod_ssl/2.2.14 OpenSSL/0.9.7l DAV/2)

Interesting ports on (10.0.1.40):
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.2.8 ((Ubuntu))

Interesting ports on (10.0.1.65):
PORT STATE SERVICE VERSION
80/tcp open http HP Color LaserJet 2600n http config 4.0.2.38
Service Info: Device: printer

Interesting ports on (10.0.1.254):
PORT STATE SERVICE VERSION
80/tcp open tcpwrapped

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 256 IP addresses (6 hosts up) scanned in 8.21 seconds

It’s pretty easy to see that our printer picked up the IP address 10.0.1.65. Now all we have left to do is point a web browser at that address and configure the printer the way we want. Granted, this is a pretty basic scan, but it does illustrate how to use Nmap for legitimate purposes on a corporate network.

Conclusion

Understanding your network and how it’s being used under normal circumstances is extremely important. WHY? Because when something changes one can quickly understand the magnitude of the problem, which can range from not knowing that a particular project has started to not knowing that a disgruntled employee is distributing illegal content at the company’s expense (Miessler, 2006). Troubleshooting software like Ettercap and Nmap are important tools in the network/systems administrator’s arsenal.

Resources

Garg, M., (2005, June 13th), Sniffing in a Switched Network, Retrieved on August 9, 2010 from http://articles.manugarg.com/arp_spoofing.pdf

Lujambio, D., (2001, June 29th), Learning with Nmap, Retrieved on August 7, 2010 from http://www.linuxfocus.org/English/July2001/article170.shtml

Lyon, G., (2008), Nmap Network Scanning: The Official Nmap Project Guide to Network Discovery and Security Scanning, Insecure LLC: Sunnyvale, CA

Lyon, G., (1997, September 1st), The Art of Port Scanning, Retrieved on September 7th, 2010 from http://www.phrack.org/issues.html?issue=51&id=11#article

Miessler, D., (2006, July), Housekeeping With Nmap, Retrieved on August 7, 2010 from http://danielmiessler.com/writing/housekeepingwithnmap/

Norton, D., (2004, April 14th), An Ettercap Primer, Retrieved on August 8, 2010 from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.4282&rep=rep1&type=pdf

Orebaugh, A. & Pinkard, B., (2008), Nmap in the enterprise: your guide to network scanning, Syngress, Burlington, MA

Unknown, (2010, April 10th), CAPEC-302: TCP FIN scan, Retrieved on September 11th, 2010 from http://capec.mitre.org/data/definitions/302.html

Unknown, (2010), Ettercap, Retrieved on September 7th, 2010 from http://ettercap.sourceforge.net/index.php

Unknown, (2010), Nmap – Free Security Scanner For Network Exploration & Security Audits, Retrieved on September 7th, 2010 from http://nmap.org/

Illustrations

Heese, B. (2010), Ettercap Screen Grabs


SMTP (or Simple Mail Transfer Protocol) is the service that handles the sending of email. This protocol runs on port 25. For the most part it is a server-to-server protocol, though it is possible to telnet into the service and send emails directly. It uses a number of sub-processes (MSA, MTA, MX exchanger, MDA) to make sure the mail gets to the right place (domain and account).
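Python’s standard library lets you watch this server dialogue for yourself. A minimal sketch using smtplib (the server and addresses are placeholders for your own):

import smtplib

# Placeholders: substitute your own mail server and addresses.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.set_debuglevel(1)  # prints the HELO/MAIL FROM/RCPT TO/DATA dialogue
    server.sendmail("me@example.com", ["you@example.com"],
                    "Subject: test\r\n\r\nHello over port 25.")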

IMAP (or Internet Message Access Protocol) is one of two protocols that handle the delivery of email to clients. It usually runs on port 143, but this can be changed to obscure the service by running it on a different port. The downside is that the client application then needs to be manually configured to be made aware of the port change. IMAP can also be set up to use SSL certificates to secure the transmission of data; secure IMAP runs on port 993 by default. The benefit of using IMAP is that it allows for the centralization of email: mail actually resides on a server, and the end user can access it from multiple machines.
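A quick sketch of checking a mailbox over secure IMAP with Python’s imaplib (server and credentials are placeholders):

import imaplib

# Placeholders: substitute your own server and credentials.
with imaplib.IMAP4_SSL("mail.example.com", 993) as mailbox:
    mailbox.login("user", "password")
    status, data = mailbox.select("INBOX", readonly=True)
    print("Messages living on the server:", data[0].decode())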

POP (or Post Office Protocol) is the other protocol that handles the delivery of mail to clients. Once again, it usually runs on its well-known port of 110, but that can be changed. It too allows for the use of SSL certificates, and when configured that way it will usually run on port 995. The benefit of using POP is mostly on the server side: POP downloads messages to the local machine and then deletes them from the mail server, keeping storage demands to a minimum.
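And the POP equivalent with poplib, again over SSL on its default port of 995 (placeholders again):

import poplib

# Placeholders: substitute your own server and credentials.
mailbox = poplib.POP3_SSL("mail.example.com", 995)
mailbox.user("user")
mailbox.pass_("password")
msg_count, mbox_size = mailbox.stat()  # POP keeps it simple: count and size
print(f"{msg_count} messages ({mbox_size} bytes) waiting to be downloaded")
mailbox.quit()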

A Buffer Overflow vulnerability is one in which the programmer of an application does not properly allocate enough memory for a given input. In computer security and programming, a buffer overflow, or buffer overrun, is an anomaly where a program, while writing data to a buffer, overruns the buffer’s boundary and overwrites adjacent memory (Wikipedia.org, 2010). This could lead to any number of problems… simple application instability, complete application crashes, or, in the worst case, a crash that returns a shell prompt allowing direct access to the box. In practice, a hacker could craft an input string that overflows the buffer and executes something like cmd.exe.
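To make the idea of a “crafted input string” concrete, here is the general shape of such a payload sketched in Python. Everything here is deliberately non-functional filler… the address is from the documentation TEST-NET range, and the padding, return address and shellcode are stand-ins, not a working exploit:

import socket

# Illustrative anatomy of an overflow payload, NOT a working exploit.
padding   = b"A" * 1024          # filler sized to overrun the target's buffer
ret_addr  = b"\xde\xad\xbe\xef"  # stand-in for the overwritten return address
shellcode = b"\x90" * 16         # stand-in for the code the attacker wants run
payload = padding + ret_addr + shellcode

with socket.create_connection(("192.0.2.10", 9999), timeout=5) as s:
    s.sendall(payload)  # feed the oversized "input string" to the service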

So how does one go about performing such deeds of electronic mischief?

1. Start by recon’ing a site. We do this with Nmap or something like Nessus. We find a machine that is running a piece of software with a known vulnerability for the version it is running.

2. Next we put together a payload. This is an input string that will exceed the input buffer. There is a bit of work that goes into this, and for the script kiddies out there… there are many websites and videos that step you through putting together the attack. A real simple buffer overflow is demonstrated in this video on YouTube.

http://www.youtube.com/watch?v=ZZ0LVAFIDrA

Once the buffer overflow is successfully performed, you should be returned to a shell prompt. The prompt will have the same privileges as the application that was compromised.

Resources:

Various, (2010), Buffer overflow, Retrieved on August 22nd, 2010 from http://en.wikipedia.org/wiki/Buffer_overflow

MTBF


I work in IT, and one of my job functions is to warehouse the image files of a corporate creative department. Translated… that means I buy a lot of storage. One of the things storage admins look at is the failure rate of the disk drives that make up their SAN environments. The higher the failure rate of a particular drive, the better your chances of having a catastrophic loss… or in other words, you’re restoring from tape if you lose a lot of drives at one time!

MTBF (or mean time between failures) is a standard measurement (in hours) we use to estimate the life of a disk drive before it fails. The other measurement we use is AFR (or annualized failure rate), which is expressed as a percentage based on the MTBF versus the amount of time the device is powered on and running. A couple of things to note… MTBF is not necessarily a device’s useful life. And AFR is not meant to be applied to a single drive; rather, it is the expected failure rate of any given drive within a particular production run (population).

So what does this all mean?

Well, most vendors spec consumer-geared disk drives at about a 300,000-hour MTBF. That being said, the key word in MTBF is M (or mean). So what we’re looking at is that about half of the drives in a given population will fail in the first 300,000 hours of use.

Translated again… and I got help on this one 😉

If you had 600,000 drives with 300,000 hour MTBFs, you’d expect to see one drive failure per hour. In a year you’d expect to see 8,760 (the number of hours in a year) drive failures or a 1.46% Annual Failure Rate (AFR) (Harris, 2007).
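That arithmetic is easy to sanity-check in a couple of lines of Python (a sketch of Harris’s fleet-level math… one observed failure per hour spread across the whole population):

HOURS_PER_YEAR = 8760

def afr(fleet_size, failures_per_hour=1.0):
    # Failures seen in a year, spread across every drive in the population.
    return failures_per_hour * HOURS_PER_YEAR / fleet_size

print(f"AFR = {afr(600_000):.2%}")  # one failure/hour, 600,000 drives -> 1.46%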

Realizing that this is what a manufacturer quotes as the expected life, one has to ask how that holds up in reality. Well, Google did a bit of research on this and found that their failure rates were much different from the manufacturers’. Why? Because there is no clear line between what a manufacturer considers a failure and what the real world expects of these devices.

In reality, many factors determine whether a drive should remain in production. Call it an IT admin’s intuition… call it that odd clicking sound… call it taking forever to save a file… Often we (IT professionals) will replace a drive before it is completely unusable (the point where we can no longer retrieve data from the device). Did the drive fail? Technically no… practically yes! If we can’t rely on the drive to reliably save and retrieve data, then it has failed for our purposes… guess some manufacturers don’t see it the same way!

Resources:

Harris, R., (2007, February 19th), Google’s Disk Failure Experience, Retrieved on June 3rd, 2010 from http://storagemojo.com/2007/02/19/googles-disk-failure-experience/

Wow, what a week! It was a stroll down math’s hit parade… number line theory… adding fractions… primes… substituting variables… and the rules for the order of mathematical operations. The fact is we use math every day, but rarely do we think about the fact that we are using math! So let’s see how we take our math skills for granted!

The other day I was in NYC. I had $7.50 in my pocket for lunch! It was the end of the week and my wife had snagged my wallet, so going to the ATM was out of the question! For anyone who’s never been to New York, filling your belly on $7.50 is not an easy task!

I was in the mood for pizza. I ran into the nearest pizza place and saw that a slice of pizza cost $3.50 and a Coke would run me an additional $1.50. Now I know this is going to be a stretch, but bear with me… Let’s put some number line theory to work! Let’s look at 0 on the number line as the dividing mark between contentment and starvation! If I drop onto the negative side of the number line, I go hungry. If I stay on the positive side, I walk away with a full belly!

Let’s begin…

Starting at +7.50 on the number line… let’s do some math. Two slices of pizza, because one slice wasn’t going to cut it, could be represented by the following equation:

(2 * -3.50)

Let’s apply that to our number line.

(2 * -3.50) = -7.00; -7.00 + 7.50 (our starting point) = +0.50

So we’re still positive… still good! BUT then I need to add the Coke in.

0.50 + (-1.50) = -1.00

As you can see I’ve fallen into the negative side of the number line at -1.00. Bill goes hungry.

I know one can say do without the Coke… but I just can’t eat a slice without an icy cold soda!

Let’s look at the menu again!

Ohhh… that calzone looks good at $6.50 for a plain one (I’d have to sacrifice palate for hunger)!

Back to the number line…

(1 * -6.50) = -6.50; -6.50 + 7.50 (our starting point) = +1.00

Now we’re talking… still on the positive side. BUT I still need to add in that icy cold Coke (it doesn’t matter which… I just need one of them to wash the food down with)!

1.00 + (-1.50) = -0.50

Poof… I just got blown out of the water by 0.50. I’m running out of options! Let’s see what else is on the menu!

Ahhh… garlic knots at $2.25. So maybe I can do a bag of knots, a slice of pizza and that icy cold Coke!

(-3.50) + (-2.25) + (-1.50) = -7.25; -7.25 + 7.50 = +0.25

Now we’re talking! Still on the positive side of zero… So I guess I’ve got my lunch! Contentment!

Is my example simple? Yes, BUT this is the kind of math that we perform automatically every day without really putting any effort into it!
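And since this is an IT blog after all, here is the same number-line check done in a few lines of Python:

BUDGET = 7.50
MENU = {"slice": 3.50, "coke": 1.50, "calzone": 6.50, "knots": 2.25}

def lunch(*items):
    remaining = BUDGET - sum(MENU[item] for item in items)
    verdict = "contentment" if remaining >= 0 else "starvation"
    print(f"{' + '.join(items)}: {remaining:+.2f} -> {verdict}")

lunch("slice", "slice", "coke")   # -1.00 -> starvation
lunch("calzone", "coke")          # -0.50 -> starvation
lunch("slice", "knots", "coke")   # +0.25 -> contentment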

Stay tuned for primes and encryption next week!



Everywhere we look in life… rules guide us to the correct way of doing things, whether it’s the rules of the road or something as basic as math! It’s funny; those of us with kids often have to think back many years when they come home with new math problems. And the older they get, the more you have to think. This year I’ve had to look at the rules of operations all over again… both in this class and with my kids. Without these rules, the correct answer will be ever elusive! One person may do addition first… another follows from left to right… still another starts with multiplication. Rules are put in place so that everyone can understand and interpret equations without ambiguity! Math has its rules! One easy way to remember the order in which to execute math equations is…

BEDMAS (Brackets, Exponents, Divide, Multiply, Add, Subtract)

So what does all this mean? Given the equation

4+7-(8*5) = X

The first thing we do is deal with what’s inside the brackets: (8*5), or 40.

Then we deal with the addition: 4 + 7, or 11.

Finally we deal with the last operator, subtraction: 11 - 40, or -29.

So 4+7-(8*5) = -29
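Python follows the same precedence rules, which makes it a handy sanity check:

step1 = 8 * 5           # brackets first: 40
step2 = 4 + 7           # then the left-to-right addition: 11
print(step2 - step1)    # 11 - 40 -> -29
print(4 + 7 - (8 * 5))  # Python agrees: -29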

Here in the United States we use a base-10 system for many things… certainly we count using a base-10 numbering system, and our currency is base-10. I can remember the big push in the late ’70s to move to the metric system, which, by the way, is a base-10 system. Yet we may not realize that there are many different numbering systems ingrained in our society. We use an English system to express units of measure (length), which in many ways is based on a Roman system of measurement! An example is the mile… originally based on the Roman mile (5,000 feet), in 1592 it was extended to 5,280 feet to make it an even number (8) of furlongs (wikipedia.org, 2010). By the way… the distance between the rails on a high-speed train line is 143.5 centimeters. Why? The story goes that this was the distance between the wheels of a Roman chariot, the distance needed to fit two horses side by side in front of the chariot.

In IT, we are familiar with seeing different numbering systems. We see both base-2 (binary) and base-16 (hexadecimal) numbering systems quite a lot.

The binary number system contains just two values, 1 and 0. George Boole is considered by many the father of modern-day computing. It was his work with logic that ultimately boiled logic, and the math behind it, down to a simple yes or no (1 or 0). This can make computing numbers extremely fast. If one thinks in terms of electrical switches, you have either an on or an off position. Computer microchips are designed in such a fashion that, depending on the state of the signal (1 or 0), a logic pattern can be computed and the software then executed. We in IT are steeped in this logic. It is so ingrained in our beings that it is often hard for us to factor in the randomness that plays such a large part in life. Why? Because we are surrounded by 1s and 0s. Yes, we all know that computers use on and off as a basic premise of computer code… but did you know that CD/DVD/Blu-ray discs are perfect illustrations of the binary system? They are encoded by a laser punching holes in the foil membrane embedded within the protective plastic casing. These holes (or pits) represent a 0 (no signal) and the untouched foil (the non-pit areas) represents a 1. On playback, software converts this binary stream into the music or movies that we’ve come to enjoy!

We come across hex numbers quite often as well! The hexadecimal number system complements the binary system. Each hexadecimal digit represents four binary digits (bits) (also called a “nibble”), and the primary use of hexadecimal notation is as a human-friendly representation of binary-coded values in computing and digital electronics (wikipedia.org, 2010). We see hex used when looking at MAC addresses. We use hexadecimal representation for RGB colors in Photoshop, HTML or CSS documents. We will be using hexadecimal numbers when writing out IPv6 addresses! And if you’ve ever used a packet capture tool such as Wireshark, you’ve seen that network packets are written out in hexadecimal as well: 192.168.1.1 can be represented as c0 a8 01 01. A lot fewer characters to write out.
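If you want to play with these bases yourself, Python will happily translate for you:

# The same value written in the bases we just talked about
print(bin(192))  # 0b11000000 -> base-2
print(hex(192))  # 0xc0       -> base-16

# 192.168.1.1 rendered byte by byte, the way Wireshark shows it
octets = [192, 168, 1, 1]
print(" ".join(f"{o:02x}" for o in octets))  # c0 a8 01 01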

Different number systems can fundamentally be thought of as ways to keep track of information by grouping numbers together in the most efficient way.

Resources:

Various, (2010, April, 10) English Units, Retrieved on April 28th, 2010 from http://en.wikipedia.org/wiki/English_units

Various, (2010, April 28th), Hexadecimal, Retrieved on April 28th, 2010 from http://en.wikipedia.org/wiki/Hexadecimal

Computers and science fiction are intrinsically bound at the hip! And no one ties the two together better than Star Trek’s Mr. Spock! Spock could be seen in most episodes working at his computer workstation, fine-tuning the results of a search, calculating odds or presenting a definitive course of action. But it wasn’t Spock’s love of computers that made him so special… it was his impeccable logic! So sound was his logic that Kirk would go on to say, “You’d make a splendid computer, Mr. Spock” (Roddenberry, 1967).

We as human beings often think with emotion rather than logic, and thinking with emotion clouds logical thought. In IT, the ability to think logically about a problem is a must… ones and zeros. It helps with the reasoning process… “I understand that your computer seems slow, but can you be more precise?” If we can eliminate subjectivity, we can often get at the root of the problem much more expeditiously. But logic isn’t only used to troubleshoot software bugs. Logic comes in handy for project management concerns as well.

We are constantly moving solutions into and out of the organizations we work for. Returning machines on lease seems pretty benign. We buy machines… they get delivered… we image them… we deploy them to the end-users’ desktops. One needs to worry about interrupting the user, and we don’t want to incur additional costs because we can’t turn around the number of machines ordered. It takes a lot of planning. The more you touch a piece of hardware, the more time it takes to deploy… and the better your chances of messing up! Understanding how to stage the machines, and being flexible to change, need to be part of your logic.

Technology data migrations are another place where logic plays a hand. The more complex a migration is, the more logic needs to be applied for a successful outcome. One needs to be able to determine the order in which changes happen. Formatting a hard drive before you move the data off would be a really bad thing. Does the user’s home directory reside on the server, or is it cached locally on their laptop? When was the last time the data was synced? These are just some of the questions you need to answer to plan adequately. It is logic that you use to formulate the best way to make things happen.

Common sense… plays a part here too. The most common meaning of the phrase is good sense and sound judgment in practical matters (Wikipedia, 2010). It is this judgment that, when strung together, makes our logic sound as well! Logic does not come naturally to everyone. Just like our reasoning skills, logic needs to be learned. The study of logic enables us to communicate effectively, make more convincing arguments, and develop patterns of reasoning for decision making (Angel, 2007). The more you exercise your logical thinking, the better you become at it.

Resources:

Angel, A., Abbott, C., & Runde, D., (2007), A Survey of Mathematics with Applications, Pearson/Addison Wesley

Roddenberry, G., (1967, February 9), Star Trek [The Return of the Archons], New York: National Broadcasting Company.

Various, (2010, April 20th), Common sense retrieved on April 21, 2010 from http://en.wikipedia.org/wiki/Common_sense

Getting up in front of any gathering of people can make many of us uncomfortable. In fact, it is often rated as one of the top 10 common phobias. This social phobia affects about 15 million American adults, according to the National Institute of Mental Health (livescience.com, 2010). Practice makes perfect: the more you get up in front of people, the more comfortable you are with it. That really holds true with anything in life. The more you do something, the better you get at doing it.

Preparation for your testimony starts way before you get into the courtroom. It starts the minute you’re actually assigned to the case, whether hired by an attorney or assigned by the jurisdiction you work for. You have to work at getting into a routine, or better yet a systematic approach to collecting evidence, if for nothing else than to eliminate mistakes. As with anything, have a game plan, but allow for enough flexibility to keep from looking at evidence the same old way. Sun Tzu, the legendary Chinese military general and strategist, once wrote, “According as circumstances are favorable, one should modify one’s plan” (Giles, 2009). What Sun Tzu is expressing is that one must be open to change if change does not hurt the ultimate outcome. Attorneys will get to know you if you’re good. Don’t always rely on the same course of action; change things up. They will have a harder time refuting your methods of collecting evidence.

In studying for my Masters, I am looking to update my skill set… keeping current and, furthermore, picking up a completely new set of skills. This is extremely important for the expert witness. Why? Because lawyers need to discredit you and the evidence you bring to the table. If you’re shown to be 10 years behind the times in your learning, lawyers could use that to introduce doubt to the jury.

“Perhaps there are better ways to examine that hard drive Mr. Heese?”

The Federal Rules of Civil Procedure, Rule 26, requires that you provide a report on the evidence you are testifying to. As part of that report you are required to list any published writings you’ve done in the last 10 years. Realize that since you are being considered an “expert” witness, it is assumed that you keep current and are completely knowledgeable in your field of expertise. What better way to keep things honest than to write about the things you know and let your peers refute or agree with what you have to say. Publishing provides for this!

One thing we’re never really prepared for, and most celebrities aren’t either, is media attention! Sometimes you’ll get a case that is of particular interest to the public, such as the Pete Townshend child pornography case. In 2003 Pete Townshend, the guitarist for the rock band The Who, was arrested for downloading child pornography from the Internet. At the time, Townshend was placed on the sex offender registry for five years after he admitted using his credit card to view the images (Lisi, 2010). A perfect case for a computer forensics specialist! But there is a price to pay. The media is going to want to know if it’s true. You will be bombarded. What you say and do could taint your testimony! The media will try to judge the case in the press. They will distort the truth, and your words will be taken out of context.

You should know how the trial process works. Who speaks first? When is it your turn? You should know how to dress. What is appropriate attire? Are jeans and sneakers cool? Should you bring your lab coat? What is the proper etiquette in court? Speak to the jury; they are the ones you have to convince. Make eye contact! The fastest way to lose credibility is to look down at the floor when providing an answer. Know what you are going to say, but don’t spend a lot of time rehearsing. Try to keep things simple without minimizing the importance of the testimony you are providing. You have to realize that you are the expert. You need to explain things to the jury on a level they can understand. Computers and the technology they bring to the table are complex, and many people may not otherwise be able to grasp the concepts they need to make a knowledgeable decision on guilt or innocence!

Resources:

Conners, S. & Giles, L., (2009, June 15th), The Art of War – Classic Kindle Edition, Chapter 1, Section 17

Lisi, C., (2010, January 28), Pete Townshend targeted as a ‘sex offender’ before Super Bowl, Retrieved on March 9th, 2010 from http://www.nypost.com/p/news/national/pete_townshend_targeted_as_sex_offender_3BJDh6zHpMRuPy9pSFfnUL

Unknown, (2010), What Really Scares People: Top 10 Phobias, Retrieved on March 9th, 2010 from http://www.livescience.com/culture/091023-top10-fear-1.html

Many things go into the exchange of information. How is it communicated? How is that information received, and most importantly, how is that information interpreted? Things such as a person’s tone, their body language, or, in the case of the written word, what words were chosen and how they were used. Is the wording formal or informal? All of these factors are part of the communication process. It is evident from reading the article that different people may interpret the same information in many ways. Clearly and precisely stating your point is extremely important, especially when human lives are at stake.

Let’s take a look at what we have learned.

In the case of the Columbia accident, the information that was passed around happened over a long period of time. NASA knew that foam from the external fuel tank breaks free during the launch and could cause damage to the shuttle. NASA failed to take timely measures to correct the problem.

In the case of the Challenger disaster, the engineers at Morton Thiokol had expressed to NASA their concerns that the cold could cause the O-rings to fail. The information that was being communicated happened over a very short period of time (less than 24 hours). The engineers didn’t have hard facts, and NASA was under pressure to launch.

Now, let’s take a look at another NASA mishap, the Apollo 1 fire. On January 27, 1967, the Apollo 1 astronauts were performing a test and training exercise. During the course of the event a fire broke out in the spacecraft, killing all three astronauts. A number of factors were to blame: the 100% oxygen environment, the flammable materials in the cockpit (Velcro) and an inward-opening hatch. North American Aviation (the spacecraft’s builder) had argued with NASA officials that these factors could have catastrophic consequences.

It is interesting to note that on each of the occasions we have lost astronauts in their spacecraft, NASA has been at odds with the spacecraft’s manufacturer. No one wants to be blamed for the death of another human being… so the blame game begins!

During the hearings on the shuttle tragedy, it came to light that two different people had two different opinions on what was being said. The article did not go into any length on who these individuals were and whether or not they worked for NASA or the spacecraft’s manufacturer. It’s important to know which side of the fence these individuals sat on. Without this information an objective third party could draw the wrong conclusions. Clear and precise wording is just as important as what is being said.

Changing corporate culture? Hmmm, now there’s an idea.