Lesson 11: Welcome back! I know that the last few weeks have been a
lull, and even before ShmooCon there wasn't much happening on our
security blog. However, you're in for a real treat: I'm back with the
daily ITSM Vulnerability Assessment techniques!
Spring break is over for me (though it's still Spring break in most
places this week), and I've done my fair share of partying over the past
two weeks. I even passed out once (as opposed to blacking out),
something I wasn't aware could still happen to me in my 30s. But now
it's time to write and get back into the swing of things.
If you're not already aware of The New School of Information Security,
please start there. If I'm going to make recommendations on assessments
or vulnerability research, I'll certainly be taking the New School even
more into account from now on. It's just the way we do things around
here, and now that it's formalized a bit -- we can proceed to some even
better process-oriented approaches to managing risk with vulnerability
assessment solutions.
Part I: Information assurance vulnerability assessment — Protecting
the infrastructure
If you're reading this, then chances are good that you are connected to
the Internet. Right now, this very second, this blog and everything
between it and you is at risk of attack from either end. It's all at
risk, and it all should be protected.
This includes your Apple laptop and 802.11g access point. This includes
Marcin's web server and WordPress installation (note: 2.5 is on the
way!). But it also includes everything else in between. In stranger
situations, it may also include your entire Google profile if you have
this blog post cached via Google Reader. Now it's not just everything
between your browser and our web application, but also everything in
Google's infrastructure.
Should I say something about firewalls? Both network firewalls
(including ones that see into applications) and web application
firewalls are thought to protect this infrastructure, but in reality --
they are part of this infrastructure. How do you protect a firewall?
With another firewall? With an IPS? How do you protect the IPS?
As a network engineer, I had many people over the years try to explain
to me the concept of fiber miles and the speed of light problem outside
of a vacuum. Yes, it's true that between the distance of Marcin's web
server and my laptop (as I type this), there is a degree of latency
caused by the speed of light problem. It's probably a few milliseconds
(about 3 or 4 ms). Yet, if I send ICMP echo requests and get back
responses, I'll see a round-trip time of 60-80 ms! If I use TCP to
traceroute, it's the same or worse! What causes this?
The extra latency comes from the cost of processing packets (I'm not
going to go into OEO conversions versus "all optical", but that's worth
a look for the discerning reader), usually at the software level. Cisco
and many other network vendors will claim that hardware acceleration
improves this (and in the case of TCAM-driven IP FIB/MAC tables, it
sometimes does a little), but the reality of the situation is that the
added latency still lives in software. Something has to drive every
ASIC, every IC, and every FPGA.
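
To put rough numbers on this, here's a quick back-of-the-envelope
calculation in Python. The 400 km fiber path is purely an assumption,
picked to match the "3 or 4 ms" estimate above; light in fiber travels
at roughly 68% of its vacuum speed.

    # Propagation-only RTT versus the RTT actually measured with ping.
    # The fiber distance is an assumption, not a measured path length.
    C_VACUUM_KM_S = 299792        # speed of light in a vacuum, km/s
    FIBER_FACTOR = 0.68           # light in fiber moves at ~68% of c

    fiber_km = 400                # assumed one-way fiber path
    one_way_s = fiber_km / (C_VACUUM_KM_S * FIBER_FACTOR)
    rtt_light_ms = 2 * one_way_s * 1000

    measured_rtt_ms = 70          # typical ICMP echo RTT from above

    print("propagation-only RTT: %.1f ms" % rtt_light_ms)    # ~3.9 ms
    print("everything else (software, queuing): %.1f ms"
          % (measured_rtt_ms - rtt_light_ms))                # ~66 ms

The physics accounts for only a few milliseconds; the other 66 ms or so
is the software in the path.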
Risk to our infrastructure is also imposed by software. There is no
magical hardware device that can make your router or firewall more
secure. The only thing protecting your access-point, router, firewall,
IPS, or any other device is the software that runs on it -- and the
often "burned-in" configuration. By reducing the number of devices, we
not only increase performance -- but we also reduce the risk to the
firmware and configuration running on network devices in a path.
Recommendation: Retire your firewalls and IPS devices. Remove appliances
from the paths between your software and the networks and applications
that your software needs access to. Routers make excellent firewalls and
often have better capabilities than firewalls or IPS devices. Classic
LANs and even pre-existing WiFi infrastructure should be re-thought
completely. There is no reason for me to exist on the same IP subnetwork
or broadcast/multicast MAC/LLC layer as anyone else.
It's also my opinion that a router should not be considered a router
unless it can run the TCP/IP routing protocols fully and safely. In
other words, every true "router" should be able to run BGP-4 and BGP4+
with IPv6 and IP multicast, along with IPsec and SSL VPN. Pyramid Linux
on embedded hardware usually fits this bill better than a random piece
(read: junk) of network hardware purchased at Best Buy or from CDW.
If you happen to be stuck with Cisco, Juniper Networks, or something
worse such as Extreme Networks, Foundry Networks, Dell PowerConnect, or
D-Link -- I suggest moving to Pyramid Linux or Vyatta or something
similar. You'll of course ignore my logic here, so my additional
suggestion is to open your manual and start crying. Configuring
off-the-shelf routers, switches, and network appliances to reduce risk
is a losing battle.
See if you can get your router or switch to operate in bridge-only mode,
thus reducing its footprint since it will lack an IP address. Barring
that, protection of the control plane is the utmost concern. All
transits (IP prefixes that send IP traffic between each other) and
trunks (Ethernet link-layer or 802.1Q-framed "virtual LAN" Ethernet that
sends MAC traffic between each other) must be protected from the
physical layer up. In other words, nobody should be able to pull the
wire, replace it with their own wire (or splice it), or perform an
optical/copper TEMPEST assessment on any wire. This applies doubly to
wireless.
Network service providers started to have problems with DDoS as early
as April/May 1997. Before DDoS, most of the attacks were SYN-based or
just unicast UDP or ICMP floods. The Smurf attack changed this by
amplifying ICMP: echo requests with a spoofed source address were sent
to directed-broadcast addresses, so every host on the target segment
replied to the victim at once. Distributed attacks became possible, and
these concepts quickly spread to both TCP and UDP.
Worse, TCP amplification attacks targeted not only web servers, FTP
servers, and other obvious well-known services, but also BGP routers.
Smurf DDoS also made heavy use of the ISP network infrastructure itself,
for the primary reason that "you can ping it".
A few years ago, I got the chance to attend a presentation at NANOG by
Ryan McDowell of Sprint. Sprint changed the way that their network
allowed traffic to/from/between their routers. I highly encourage you to
check out that presentation material, Implications of Securing Router
Infrastructure. Much of
the information is Cisco-specific (but the concepts apply equally well
to any platform). Cisco has recently updated and combined all of these
resources to form their Cisco IOS Network Foundation
Protection
(NFP) program.
For those who want a summary of the above: "don't allow any packets
directly to your infrastructure -- TCP, UDP, ICMP, or any other IP
protocol -- but allow all of those through, and make sure traceroute
still works". There are additional problems such as sending traffic to
foreign agencies (on the "do-not-do-business-with" lists, which probably
partially match the "do-not-fly" lists). Certain protocols and ports
(such as those seen on the SANS ISC, handlers, and DShield websites) are
also nearly universally blocked.
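
To make the "to versus through" distinction concrete, here is a minimal
sketch of that policy as Python pseudocode. The infrastructure prefixes
are made-up documentation ranges; on real gear this would be an
infrastructure or receive-path ACL, not Python.

    from ipaddress import ip_address, ip_network

    # Hypothetical prefixes holding router loopbacks and link addresses.
    INFRASTRUCTURE = [ip_network("192.0.2.0/24"),
                      ip_network("198.51.100.0/24")]

    def permit(dst_ip):
        """Pass traffic going *through* the network, drop anything
        addressed *to* the infrastructure itself. Traceroute keeps
        working because TTL-exceeded replies originate *from* the
        routers; nothing here blocks packets sourced by them."""
        dst = ip_address(dst_ip)
        return not any(dst in net for net in INFRASTRUCTURE)

    print(permit("203.0.113.7"))   # True  -- transit, flows freely
    print(permit("192.0.2.1"))     # False -- aimed at a router itself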
On the LAN, I think there is something to be said for Cisco's Dynamic
ARP Inspection (DAI), especially when combined with endpoint
port-security (set to one static MAC address; where that is infeasible,
one static and one dynamic, with the static entry reset every so often
by scripts and paired with a physical inventory audit). I've heard
claims about AirDefense and similar technology preventing WEP attacks,
but be sure to remain skeptical about such approaches and test for
yourself. DAI, port-security, and AirDefense are certainly cheap
alternatives to upgrading the entire infrastructure to NAC or another
endpoint security solution. Thin clients may also provide value when
trying to reduce risk to large-installation local area networks.
For further information, be sure to check out LAN Switch Security: What
Hackers Know About Your Switches, Router Security Strategies: Securing
IP Network Traffic Planes, End-to-End Network Security:
Defense-in-Depth, CCSP SNRS Quick Reference Sheets, and the MPLS VPN
Security Cisco Press titles.
Part 2: Software assurance vulnerability assessment — Session
management
Best Session management attack tools
Burp Sequencer, stompy, w3af, OWASP WebScarab, Paros, Add 'N Edit
Cookies, CookieCuller, CookieSafe, CookieSwap, Cookie Watcher, RSnake's
Security Bookmarklet Edit Cookies
Best Session management attack helper tools
NIST FIPS-140-2, Burp Suite, User Agent switcher, Torbutton, RefControl,
Vidalia, Torify
Session management is one of the only runtime, blackbox testing
techniques that really must be done after all unit testing, integration
testing, and functional testing. In a secure SDLC for web applications,
session management is usually first tested during SQA acceptance
testing. While it may be possible for developers to write some tests for
token strength, etc. -- session management is one unique area that sits
outside the realm of what I normally consider good in-SDLC security
testing.
In other words, use and learn these tools to your heart's content! They
are extremely valid and useful for real-world testing, and provide a lot
of opportunity to learn more effective exploratory testing, especially
when you think about concepts such as time and state. How does my
User-Agent affect my session ID? How does the time of day? How does the
load on the application? It's a great scenario for learning about
combinatorial explosions, which are the bread and butter of any advanced
vulnerability assessment.
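
As a taste of what tools like Burp Sequencer and stompy automate, here
is a minimal sketch that gathers session tokens under different
User-Agents and estimates per-position character entropy. The target URL
and cookie name are hypothetical placeholders, and a real assessment
would collect thousands of samples.

    import math
    import urllib.request
    from collections import Counter
    from http.cookiejar import CookieJar

    URL = "http://www.example.com/login"   # hypothetical target
    COOKIE = "JSESSIONID"                  # hypothetical session cookie
    AGENTS = ["Mozilla/5.0", "Lynx/2.8.6", "Opera/9.26"]

    def fetch_token(agent):
        """Request the page with a given User-Agent and return the
        session cookie the application hands back."""
        jar = CookieJar()
        opener = urllib.request.build_opener(
            urllib.request.HTTPCookieProcessor(jar))
        opener.addheaders = [("User-Agent", agent)]
        opener.open(URL)
        return next(c.value for c in jar if c.name == COOKIE)

    def entropy(column):
        """Shannon entropy, in bits, of one character position."""
        counts = Counter(column)
        total = float(len(column))
        return -sum(n / total * math.log2(n / total)
                    for n in counts.values())

    tokens = [fetch_token(a) for a in AGENTS for _ in range(30)]
    for pos in range(min(len(t) for t in tokens)):
        column = "".join(t[pos] for t in tokens)
        print("position %2d: %.2f bits" % (pos, entropy(column)))

Low-entropy positions, or entropy that shifts with the User-Agent or
time of day, are exactly the time-and-state leads worth chasing.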
Posted by Dre on Tuesday, March 18, 2008 in
Defense,
Hacking,
Itsm and
Security.
Taking care of business
Before I get into this post, I wanted to give you some updates on the
progress of other projects here at TS/SCI Security.
First off, I've been working on the OWASP Evaluation and Certification
Criteria
Project
and hope to announce something very soon. Secondly, you'll want to take
a look at today's
post
on The New School of Information Security book (Acidus also did a
writeup).
As a side-note, there was an interesting data breach recently
announced,
which makes information on breaches all the more relevant.
Lastly, I wanted to announce the return of the "Day X: ITSM
Vulnerability Assessment
techniques".
Expect a post every day with new and relevant material.
Positive spin on pen-testers
Matasano posted last week on the Seven Deadly Pen-Test
Sins,
with follow-ups from two other blogs I'm aware of: Mike
Andrews
and
BlogFranz.
This should be a hot topic!
At TS/SCI Security, we like to focus on the positive instead of the
negative. Pen-testing, especially application pen-testing/review, is an
important topic to us. So here's our rundown of what we feel is
important to know about pen-testing, as a response to what Matasano
wrote.
Seven things you can do to improve your pen-test results
1. Time Management. If you're not already using
GCalendar,
RTM (with
GCal),
Twitter, Sandy,
and Jott (thanks to Dennis Groves for some of
these!) -- take a look at how these can help you manage your time
better.
- Good pen-testing takes time. If you haven't gone through everything
you need in #2, then you might need more time. What I've found is
that pen-testers need to frame their pen-tests after good strategy
consulting (1-2 days of a
SWOT or similar
analysis). If, as mentioned in The New School, the "basics" are
not taken care of -- then help the client work on asset,
configuration, and change management before doing a vulnerability
assessment.
2. Utilize applicable MITRE resources.
CAPEC (for runtime testing) and
CWE (for static analysis testing, which may
include reversing) should be utilized throughout. Be careful to only
choose the relevant aspects of each (relevant to the application, not
your skill or other criteria).
- My favorite MITRE resource is the Introduction to Vulnerability
Theory,
which Marcin and I spoke about at ShmooCon in our talk on Path
X. It may take some
extra time to spell out exactly how the vulnerabilities were found
using this method, but it will help future assessment work, including
repeat client business.
3. Work with developers. Assuming that the dev team is already doing
Continuous
Integration (or
something like it), Fagan
inspection, using the
V-Model (in the requirements
phase), continuous-prevention
development,
integrating concepts from the Microsoft SDL (or similar secure SDLC),
and automated integration testing -- then get involved right away with a
software
walkthrough with
all of the key players.
- The design of standard C and assembly code is difficult to reverse,
but they are becoming easier to reverse. In the case of Java, C#,
and C++ -- UML can be extracted to elicit the design of the
application. Using other tools such as Klocwork K7, Fujaba, or IBM
Rational Rose on the UML diagrams may provide faster program
understanding. Left with only source code, GrammaTech CodeSurfer
also aids understanding, and open-source tools such as Doxygen can
be of use as well.
4. Automate development-testing. While web application security
scanners (or fat application-based fuzzers) collectively tend to find at
most 30% of the bugs, with a large number of false positives, there are
plenty of developer tools that don't have these problems.
- Developers should be made aware of integration testing with Canoo
WebTest. Business analysts, customers, and test managers can submit
table-driven test cases using FitNesse (a wiki that allows for
collaborative test case design), along with additional tools such as
HtmlFixture (see the table-driven sketch after this list). I've
spoken to the benefits of Dependency Injection before, as well as
automating Continuous Integration both inside of the IDE as well as
during the application builds.
- Automation is good for developers, and the modern tools surrounding
test automation are extreme improvements over last decade's
technology. However, test cases and test-first strategies still find
only about half (possibly up to 70% or more) of the bugs. It's good
to combine exploratory testing with scripted (i.e. from a test case)
testing. Allot some time with a buddy and explore the application in
ways it wasn't meant to be run, after learning as much as you can
from the developers and their automated tests.
5. Peer review your work. This one is easy. Find a friend and have
him/her check your work. The more eyes you get on your pen-test and
assessment work, the less likely you're going to miss something.
6. Stay up-to-date on both process and technology. Read
forums/blogs/mailing-lists, trade journals/magazines, and books. Attend
conferences. Keep it cheap and continuous if possible.
- I try to read every security-related book that pops up on Safari
Library
and ACM's access to
Books24x7.
Audible/Amazon are certainly other
resources, especially for non-technical solution learning. Checking
yourself at least once a year with a certification or re-certification
program will help you make strides in your professional development.
- Using GReader and sharing daily via its sharing features and
OPML with your colleagues is important. Getting involved in
conversations on forums, IRC channels, blogs, and mailing-lists is
likely the largest win here.
7. Go back to basics. Set a minimum bar of maturity with clients
before you pen-test or do an assessment. The BITS Shared Assessments
Standardized Information Gathering questionnaire is a nice start,
especially combined with Visible Ops, ITIL, COBIT, ISO27K, PCI-DSS,
PABP/PA-DSS, ISECOM, and OWASP information on process.
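
As promised in item 4, here is a table-driven test sketch: a rough
standard-library approximation of what FitNesse tables offer, with a
hypothetical pricing rule standing in for real application logic.

    import unittest

    def discount(order_total):
        """Hypothetical business rule: 10% off orders of $100 or more."""
        return order_total * 0.9 if order_total >= 100 else order_total

    # The "table": anyone can add a (total, expected) row without
    # touching the test logic, which is the FitNesse idea in miniature.
    CASES = [
        (50.00, 50.00),
        (99.99, 99.99),
        (100.00, 90.00),
        (250.00, 225.00),
    ]

    class DiscountTableTest(unittest.TestCase):
        def test_table(self):
            for total, expected in CASES:
                with self.subTest(total=total):
                    self.assertAlmostEqual(discount(total), expected)

    if __name__ == "__main__":
        unittest.main()

Each row is one collaboratively-owned test case, and subTest reports
failing rows individually instead of stopping at the first mismatch.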
All of the above will go a long way towards improving what pen-testers
and vulnerability assessors do. There are very few good certifications
out there for what we do, and it's difficult to measure what we do. Put
your cards on the table up front with your clients and teach them your
methodology and approach.
Posted by Dre on Monday, March 17, 2008 in
Hacking,
Security and
Work.
Recently, I finished reading "The New School of Information Security" by
Adam Shostack and Andrew Stewart. It's only about 200 pages, so it's
certainly worth your time to pick up and read. Some people will compare
it to "Security Metrics" by Andrew Jaquith (or many others), but I think
this book is very unique.
Into the first chapter, I was dismayed and rather disappointed. At first
it appeared as if the book would largely be a repeat of some Shostack
and Geer presentations I've already seen in the past. The introduction
looked like a sample of Dan Geer's testimony to
Congress,
Addressing the Nation's Cybersecurity Challenges: Reducing
Vulnerabilities Requires Strategic Investment and Immediate Action.
This put me off the whole book, since I'd already read that paper.
Additionally, the authors immediately begin the book by explaining how
they're going to write it -- how they don't reference anything in great
detail, but that the endnotes should suffice. This also put me off a
bit... that is -- until I got to the endnotes! From the beginning to the
end of the book, the excellent writing kept me in a state of constant
interest. Even if you have read all of their past work, this book is
certainly worth a read or two or three, maybe even quarterly.
Searching for answers
Since the authors seemed to dump on all security technology products and
solutions, I began to get defensive about my own technology suggestions
for the information security space. While I wasn't completely surprised
that the authors did not hold the exact same views that I do -- I was
hoping that they would have spoken to software assurance, security in
the SDLC, and other practices that I tout.
No such luck; the authors provided very few answers in general. This is
probably a preferred message because it makes the book timeless in a
way. "Security changes over time", right?
Some positive answers that came across most clearly to me from reading
the text were "process over products" and "economics over technology".
For process, the authors seemed to suggest one recurring theme about the
three most important IT controls that can aid decision-making:
- Asset management / Inventory management
- Change management / Change control
- Configuration management
I'm obviously fine with the above in theory. However, in practice --
these are often required by the ITIL, COBIT, ISO27k, and NIST standards
that the book seems to want to throw out the door. Not only that, but
the focus seemed to me completely attached to IT/Operations, having
nothing to do with development.
After further digging into the endnotes and references, it became more
apparent what the authors were suggesting: that most of ITIL, COBIT,
et al. is completely worthless. Studies such as Visible Ops (which
appears to be ready for a second
title
according to the Visible Ops
blog) have more clearly
demonstrated that a few choice controls are higher-performing and more
efficient than the large majority of the controls listed in those audit
frameworks. This concept is very believable.
For statistics, popular cybercrime surveys such as the annual report
produced by the Computer Security Institute were thrown out the
proverbial window by the authors, but they were able to point to newer
data, such as the IT Controls Performance
Study as well as the
DOJ/DHS National Computer Security Survey
(available sometime soon -- 2008 according to the website).
Answers in the breach data
One thing I learned from the book is the importance of breach data in
furthering our understanding and finding future answers. I tend to
concentrate on software development requirements, software engineering
design, and raw source code as metrics and patterns. However, the
authors bring up some excellent points about breach data.
Breach data is important to get out in the open, and I've mentioned a
few sources on our blog before. One project that I wasn't familiar with
was Chris Walsh's use of the New York State freedom of information
laws,
which he posted about last year. I've seen some of CWalsh's other work
and I've spoken with him, and I had no idea he worked on such an
interesting project. This is certainly a project you won't want to miss.
Answers in Economics models and Social Psychology theories
This might seem too academic for most security professionals. However,
these same security professionals are usually armchair experts
themselves. I find these recommendations the most fascinating and
relevant. I hope the rest of the industry will give these ideas the
chance that they deserve.
Quoted several times throughout the book is a paper by Gordon and Loeb
on The Economics of Information Security
Investment
(it's only 22 pages long). The authors consider this a must-read, and
tend to summarize the paper as "only spend up to 37% of the value of an
asset in order to protect that asset". I enjoyed this recommendation, as
well as the authors' disdain for ROI / ROSI and ALE. However, while I
think their strategy will work for the long term (and once we have more
data), my short-term recommendation continues to be a bit contrary to
this.
In Building a security
plan,
I discussed (towards the end) how using the voice of the customer to
drive your spending decisions seemed about as appropriate as any other
strategy. I would have appreciated some discussion around customer
support, and maybe this is a topic that the authors will bring up in the
future.
The authors also discussed psychology (many of the topics appear to be
centered around social psychology at the root of the problems, IMO) and
explored a few concepts. One of the more prominent psychology topics was
Security & Usability, about which I'm sure there are varying levels of
opinion.
After speaking with Marcin -- I was pointed towards an excellent blog
post on psychology needs for security
teams.
Not only is the topic of Security & Usability discussed, but it also
takes a very interesting look into the future.
I see a bright future in the New School, and would put this book as a
"first-read" for anyone who needs to be initiated on the subject of
information security / assurance.
Posted by Dre on Monday, March 17, 2008 in
Books,
Privacy and
Security.
Before Mike Rothman posted something about the WhiteHatSec and F5
announcement,
I really wasn't going to say anything negative or positive. Integrating
web application security scanners with web application firewalls at
first seems like a good idea. However, it appears that most people
forgot about the issue with WAFs: they prevent only a very few kinds of
software weaknesses.
Enough with the WAFs already
My analysis is the following:
- Web application security scanners, including vulnerability assessment
management platforms, portals, and single-panes-of-glass all suffer
from the same problem: they don't find most of the critical
security-related bugs in any given web application
- Web application firewalls do not protect against the most critical of
security-related bugs
- Combining web application security scanner technology with WAF
technology does not equate to a short-term defense solution for any
business
How do I define a critical security-related bug? How about ones based on
time and state, which could allow checkout of items before paying for
them? This is one of many examples.
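
To illustrate, here is a contrived sketch of such a time-and-state bug:
a check-then-act window between verifying payment and shipping, which a
concurrent request can slip through. Every name here is hypothetical; no
scanner signature or WAF rule addresses this class of flaw.

    import threading
    import time

    order = {"paid": True, "shipped": False}

    def checkout():
        if order["paid"]:               # check...
            time.sleep(0.05)            # inventory, templating, etc.
            order["shipped"] = True     # ...then act: the race window

    def cancel_payment():
        time.sleep(0.01)                # lands inside the window above
        order["paid"] = False           # e.g. a concurrent refund request

    t1 = threading.Thread(target=checkout)
    t2 = threading.Thread(target=cancel_payment)
    t1.start(); t2.start(); t1.join(); t2.join()

    # Prints {'paid': False, 'shipped': True} -- shipped, never paid for.
    print(order)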
Web application security scanners are awareness tools; Web application
firewalls are door stops
Web application security scanners are tools to be used for raising
awareness. Not too many people are aware of or convinced of the need to
secure web applications to any degree of managed risk. Web application
security scanners can help get funding to necessary projects that should
have happened years ago.
Also note that any consulting or software-as-a-service solutions for web
application security that utilize web application security scanner
technology (or worse, perform testing manually) also fit into this
category.
Web application firewalls are a different story. These are purely door
stops, because they make awful paperweights. I classify all of this
technology in the category of epic fail.
Want some examples of epic failure? How about two recent XSS
vulnerabilities found in F5's management interfaces for their BIG-IP
product? Here's one in their search
functionality. Oh look,
another in their security
reports.
However, my favorite (and I've been saving this one for a long time)
comes with a huge debate on the F5
blog
about whether or not you have to rewrite the code when security-related
bug fixes are in order.
When Lori MacVittie of F5 attacked Mike Sutton of HP SPI Dynamics, the
response from SPI's Jeff Forristal was friendly and conciliatory. My
comments got a little out of control because I honestly couldn't believe
what F5 was saying to be true. I was really upset by this blog post,
which showed zero understanding of networks, systems, or applications.
Is F5 a security company?
The best part of this F5 blog
post
comes before F5 even gets an opportunity to respond to me. It's been
several months (add to this the fact that I'm sure F5 "forgot" to
install blog software that enables nofollow by default), and the blog
spam covering the page is still there. How could F5 possibly take web
application security seriously if they can't block or deal with
something as simple as blog spam?
Secure SDLC solutions can be both short-term and long-term
I've been told that Secure SDLC solutions are only focused on the
long-term, or they're "too idealistic" or "running to developers to put
out the fire is the wrong approach". This is all untrue. There are
plenty of solutions that every development shop can move towards, which
will give immediate results for both quality and security. Yes, every
development shop is different and has different goals than
operations/security -- but this organizational change must happen.
Are you using Subversion for source-code
revision control? Is your development environment tracking issues such
as defects, with the ability to add wiki notes as seen in
Trac? Is there an automatic nightly build
using continuous integration server software such as
CruiseControl (or might you
want a push-button approach as seen in
Luntbuild)? Are your developers
using logins (AKA Identity Management for you security people out there)
that make them personally accountable for each piece of checked-in code,
defect-tracking issue, and failed build?
Some immediate SDLC wins
Developers have a solution that is bootable from a CDROM today. It's
called Buildix. Buildix contains
Subversion, Trac, CruiseControl, and an identity management solution to
tie all of these together.
How does this help with security? Well, if your development team isn't
doing the basics (and doing them well), then you can forget about other
SDLC process improvements. The trick here isn't to push this as a
security issue. Make it a quality issue. Say it has something to do with
Sarbanes-Oxley. Oh - you're a private company? Consider looking at the
BITS Shared Assessments Program (which has all but replaced SAS 70 Type
II). There are many companies that don't need to meet S-OX 404 that
still follow COBIT, ITIL, or ISO27K as much as possible, including using
internal auditing aggressively.
Requirements gathering is the most important phase of any
quality/security software development lifecycle. For those of you who
aren't familiar with the
V-Model and Fagan
inspection -- it's
important to note that these start in the requirements phase. Software
testing shouldn't happen only as acceptance testing by a QA team. Peer
review shouldn't be an afterthought that only applies to the code
post-build.
Fagan inspection (as complete as possible) and test cases (at least a
few) should be handled before any design or decisions about the
application are made. If the moderator is told that unit testing that
looks for strict whitelist input validation on all HTML, CSS, XML, and
Javascript is required -- he/she may be confused at first. Showing this
person some simple XSS, CSRF, and XXE attacks with an awareness test
(see web application security scanners, above) might help this lead
software engineer understand the backstory.
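
A minimal sketch of the kind of unit test being asked for, assuming a
hypothetical validate() helper: the whitelist admits a small known-good
alphabet rather than trying to blacklist every known-bad string.

    import re
    import unittest

    ALLOWED = re.compile(r"[A-Za-z0-9 _.-]{1,64}")

    def validate(value):
        """Strict whitelist: accept only the allowed alphabet."""
        return ALLOWED.fullmatch(value) is not None

    class WhitelistValidationTest(unittest.TestCase):
        def test_accepts_known_good(self):
            self.assertTrue(validate("spring break 2008"))

        def test_rejects_markup_probes(self):
            probes = [
                "<script>alert(1)</script>",                  # XSS
                '"><img src=x onerror=alert(1)>',             # XSS
                '<!ENTITY xxe SYSTEM "file:///etc/passwd">',  # XXE
            ]
            for probe in probes:
                self.assertFalse(validate(probe), probe)

    if __name__ == "__main__":
        unittest.main()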
My favorite short-term win for any Secure SDLC program is to integrate
the concept of continuous-prevention
development.
Continuous-prevention development is Regression Testing 2.0.
Your IDE grew up but your development shop is stuck in 1999
Unit testing can take place in the IDE (as well as during integration
testing at build time, which can be automated by a build server). Often
this can also be done continuously. It
can be combined with static checking and
code coverage. Need runtime testing? Just
add Canoo
WebTest or
Selenium during integration.
Additional security properties can also be tested in the IDE with some
simple changes. Many developers don't know how to install their own web
server, but any developer-tester should know how to run an embedded web
server such as Jetty along with an embedded
database such as H2 (or full solutions
such as PicoContainer). It's almost to the point where not only are
in-IDE continuous unit tests a reality, but so are component tests and
even full system tests.
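
Jetty and H2 are Java tools; here is the same in-IDE system-test idea
sketched with the Python standard library as a rough analogue: boot an
embedded web server and an in-memory database inside the test itself and
assert a security property (output encoding) end-to-end.

    import html
    import sqlite3
    import threading
    import unittest
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # In-memory "embedded database" seeded with hostile-looking data.
    db = sqlite3.connect(":memory:", check_same_thread=False)
    db.execute("CREATE TABLE items (name TEXT)")
    db.execute("INSERT INTO items VALUES ('<b>widget</b>')")

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            name = db.execute("SELECT name FROM items").fetchone()[0]
            body = html.escape(name).encode()   # encoding under test
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):           # keep test output quiet
            pass

    class EmbeddedSystemTest(unittest.TestCase):
        def test_output_is_encoded(self):
            server = HTTPServer(("127.0.0.1", 0), Handler)
            threading.Thread(target=server.serve_forever,
                             daemon=True).start()
            url = "http://127.0.0.1:%d/" % server.server_port
            page = urllib.request.urlopen(url).read().decode()
            self.assertNotIn("<b>", page)       # markup must be escaped
            server.shutdown()

    if __name__ == "__main__":
        unittest.main()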
In summary, continuous in-IDE system testing means that anything a web
application security scanner can test, a developer can test more
accurately while actually writing the code in real time. Of course, this
will produce fewer false positives and fewer false negatives, and allow
for easier tuning and customization at much cheaper price points (note:
all of the software linked above is open-source). By adding
"continuous-prevention" regression + fix checks, in-IDE unit testing
prevents pilot error before costly peer review is even necessary.
The "A[OP]-Team"
There is no doubt that security testing, secure coding standards, secure
code review, and other Secure SDLC improvements provide better
alternatives to the classic "scan and patch" hamster-wheels-of-pain that
web application security vulnerability assessment management solutions
provide.
However, what about solutions such as Aspect-oriented programming (AOP)?
Or the integration of AOP and dependency
injection? It looks
to be possible to hire a team of coders to write code on top of your
already-existing codebase. This code will secure your code from all
types of web application security risks. It won't just protect against
the OWASP T10-2007 A1-A2 critical software weaknesses (one of the
limitations of a WAF), but also against the other 640-some listed in the
MITRE CWE node structure.
Would you consider hiring such a crack team of hotshot security
developers?
*AOP and Dependency Injection are not long-term-only solutions. Any
development shop with the proper expertise can implement them
immediately. Consulting groups will start to make AOP/DI solutions
available at a much lower overall cost than a $25K/year SaaS scanning
solution combined with a $70K WAF appliance pair.*
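
For a feel of what such an "A[OP]-Team" would actually write, here is a
toy sketch. Real AOP would mean AspectJ or Spring AOP pointcuts in Java;
Python decorators approximate the same "advice woven around existing
code" idea. The handler and whitelist are hypothetical.

    import functools
    import re

    SAFE = re.compile(r"[A-Za-z0-9 _.-]*")

    def validate_inputs(func):
        """Validation 'advice' wrapped around a handler: string
        arguments outside the whitelist never reach the handler."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for value in list(args) + list(kwargs.values()):
                if isinstance(value, str) and not SAFE.fullmatch(value):
                    raise ValueError("rejected input: %r" % value)
            return func(*args, **kwargs)
        return wrapper

    @validate_inputs            # the existing handler body is untouched
    def search(query):
        return "results for " + query

    print(search("blue widgets"))               # passes the aspect
    try:
        search("<script>alert(1)</script>")     # stopped at the seam
    except ValueError as err:
        print(err)

The point of weaving rather than editing is that the protection applies
uniformly across the codebase without touching any handler's body.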
Posted by Dre on Tuesday, March 11, 2008 in
Defense and
Security.
I've been doing some work lately with text files, using various shell
command techniques to manipulate them for whatever purposes I need. This
isn't a HOWTO guide as much as it is a reference for myself and others
who just need something quick to work off of.
The first command I reach for is the find utility. If you didn't
know, you can pass arguments to find that will get executed when
find finds a match. This command will find all Nmap greppable output
files and grep for lines with "Status: Up":
$ find . -name "*.gnmap" -exec grep "Status: Up" {} \;
Host: 192.168.1.1 () Status: Up
Host: 192.168.1.10 () Status: Up
Host: 192.168.1.40 () Status: Up
Host: 192.168.1.42 () Status: Up
Host: 192.168.1.102 () Status: Up
Host: 192.168.1.103 () Status: Up
You can improve the above command further with `awk` to only print out
the IP addresses that appeared online:
$ find . -name "*.gnmap" -exec \
  awk '/Status: Up/ {print $2}' {} \;
192.168.1.1
192.168.1.10
192.168.1.40
192.168.1.42
192.168.1.102
192.168.1.103
If you need to know what file the matching results came from, you can do
the following:
$ find . -name "*.gnmap" -exec \
  awk '/Status: Up/ {print $2}' {} \; -print
192.168.1.1
192.168.1.10
192.168.1.40
192.168.1.42
192.168.1.102
192.168.1.103
./nmap_scans/192.168.1.0_24.gnmap
Nmap results are pretty easy to go through, but what if you have Nessus
nbe files? If you've ever seen a Nessus nbe file, it isn't pretty. The
following command will run through a nbe file and print out IP addresses
and NetBIOS names in CSV format. Nessus PluginID
10150
identifies scanned hosts' NetBIOS names.
$ find . -name "*.nbe" -exec awk -F '|' '$5=="10150"' {} \; | \
  awk '{print $63"|"$1}' | awk -F '|' '{print $1","$4}'
stacker,192.168.1.10
slaptiva,192.168.1.40
thinker,192.168.1.42
The last one-liner I have found useful is for pulling text broken up
over several lines back into one really long line. I accomplish this
using awk, with the following command:
$ awk 'BEGIN {RS="\n\n"; FS="\n"} \
  {for (i=1;i<=NF;i++) printf "%s ", $i; printf "\n\n"}'
The Handy One-Liners for Awk
and Handy One-Liners for Sed
have both been awesome references that I keep bookmarked. I have also
found myself keeping UNIX Power Tools
open all day as well.
Posted by Marcin on Friday, March 7, 2008 in
Hacking and
Linux.