Web application experts have been asking WAF vendors the same questions
for years with no resolution. It's not about religion for many security
professionals -- it's about having a product that works as advertised.
My frustration is not unique. I am not the first person to clamor about
web application firewalls. Jeff Williams pointed me to a post that
Mark Curphey made in 2004. Today, Curphey appears to have had a change of
heart -- his latest blog post provides a link to
URLScan,
which some claim is like mod_security for Microsoft's Internet
Information Server (IIS). Microsoft released URLScan Beta 3.0 to curtail
the massive problem of over two million Classic ASP web applications
that have become infected through mass SQL injection attacks.
Here is the post where the frustration with WAFs and their vendors first
began:
- -----Original Message-----
- From: The OWASP Project [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 16 November 2004 2:34 PM
To: [EMAIL PROTECTED]
Subject: An Open Letter (and Challenge) to the Application Security
Consortium
An Open Letter (and Challenge) to the Application Security
Consortium
Since its inception in late 2000 the Open Web Application Security
Project (OWASP) has provided free and open tools and documentation
to educate people about the increasing threat of insecure web
applications and web services. As a not-for-profit charitable
foundation, one of our community responsibilities is to ensure that
fair and balanced information is available to companies and
consumers.
Our work has become recommended reading by the Federal Trade
Commission, VISA, the Defense Information Systems Agency and many
other commercial and government entities.
The newly unveiled Application Security Consortium recently
announced a "Web Application Security Challenge" to other vendors at
the Computer Security Institute (CSI) show in Washington, D.C. This
group of security product vendors proposes to create a new minimum
criteria and then rate their own products against it.
The OWASP community is deeply concerned that this criteria will
mislead consumers and result in a false sense of security. In the
interest of fairness, we believe the Application Security Consortium
should disclose what security issues their products do not address.
As a group with a wide range of international members from leading
financial services organizations, pharmaceutical companies,
manufacturing companies, services providers, and technology vendors,
we are constantly reminded about the diverse range of
vulnerabilities that are present in web applications and web
services. The very small selection of vulnerabilities you are
proposing to become a testing criteria are far from representative
of what our members see in the real world and therefore do not
represent a fair or suitable test criteria.
In fact, it seems quite a coincidence that the issues you have
chosen seem to closely mirror the issues that your technology
category is typically able to detect, while ignoring very common
vulnerabilities that cause serious problems for companies.
Robert Graham, Chief Scientist at Internet Security Systems,
recently commented on application firewalls in an interview for CNET
news. When asked the question "How important do you think
application firewalls will become in the future?" his answer was
"Not very."
"Let me give you an example of something that happened with me. Not
long ago, I ordered a plasma screen online, which was to be shipped
by a local company in Atlanta. And the company gave me a six-digit
shipping number. Accidentally, I typed in an incremental of my
shipping number (on the online tracking Web site). Now, a six-digit
number is a small number, so of course I got someone else's user
account information. And the reason that happened was due to the way
they've set up their user IDs, by incrementing from a six-digit
number. So here's the irony: Their system may be so
cryptographically secure that (the) chances of an encrypted shipping
number being cracked is lower than a meteor hitting the earth and
wiping out civilization. Still, I could get at the next ID easily.
There is no application firewall that can solve this problem.
With applications that people are running on the Web, no amount of
additive things can cure fundamental problems that are already there
in the first place."
This story echoes some of the fundamental beliefs and wisdom shared
by the collective members of OWASP. Our experience shows that the
problems we face with insecure software cannot be fixed with
technology alone. Building secure software requires deep changes in
our development culture, including people, processes, and
technology.
We challenge the members of the Application Security Consortium to
accept a fair evaluation of their products. OWASP will work with its
members (your customers) to create an open set of criteria that is
representative of the web application and web services issues found
in the real world. OWASP will then build a web application that
contains each of these issues. The criteria and web application will
be submitted to an independent testing company to evaluate your
products.
You can submit your products to be tested against the criteria
(without having prior access to the code) on the basis that the
results can be published freely and unabridged.
We believe that this kind of marketing stunt is irresponsible and
severely distracts awareness from the real issues surrounding web
application and web services security. Corporations need to
understand that they must build better software and not seek an
elusive silver bullet.
We urge the Consortium not to go forward with their criteria, but to
take OWASP up on our offer to produce a meaningful standard and test
environment that are open and free for all.
- Contact: [EMAIL PROTECTED]
- Website: www.owasp.org
Posted by Dre on Wednesday, June 25, 2008 in Defense and Security.
Hello, and welcome to the Week of War on WAFs, the week that ends with
PCI-DSS Requirement 6.6 going into effect as a deadline for many
merchants. Today is the first day. So far, Marcin has identified some
of the problems with web application
firewalls.
We were able to identify what we would like to see in WAFs, both
commercial and open-source, in the future (since they do not work
properly today). In this post, I want to start off the week by listing
the top ten reasons to wait on WAFs.
Top ten reasons to wait on WAFs
- WAF vendors won't tell us what they don't block
- Requires a web application security expert with code knowledge, HTTP
knowledge, and a lot of time / careful planning to configure
- Gartner leaves WAFs off their Magic Quadrant for web application
  security on purpose
- No truth in advertising leads to a false sense of security
- Vendors show signs of desperation, claiming imperatives and
  illegality in addition to just the standard FUD
- Attacks that are claimed to be blocked are coincidentally also found
in WAF solutions themselves (e.g. XSS in the security reports or web
configuration panels)
- Every organization that has installed a blocking WAF has also been in
the media for known, active XSS and/or SQL injection
- Second-order (i.e. non-HTTP or unprotected path) injections cannot be
blocked or even detected
- Real-world web application attacks are more often strings of attacks,
  or attacks at the business logic layer -- WAFs cannot detect or prevent
  these kinds of attacks
- PCI-DSS Requirement 6.6 can be met with compensating controls, web
application security scanners, automated security review tools,
and/or manual review of the pen-test or code varieties
We realize that the idea of a blocking WAF is very popular right now.
There are many supporters behind the WAF and VA+WAF movements. While
we'd like to support what the rest of the community sees as the future,
we also want to make sure that it is the right thing to do.
One of the best ways to move forward with any given technology is to
look at its faults. We learn best in IT when things fall apart -- when
they break. TS/SCI Security has put a lot of thought, practice, and
research into WAF technology. Marcin's most recent post demonstrates our
list of requirements (e.g. block outbound) and nice-to-have's (e.g. good
documentation). Some vendors might already have this sort of outbound
blocking functionality, and we're not even aware of it! Other vendors
could have clearly defined "VA+WAF blocking" documentation, which could
even be internal engineering or strategy documents that should be out in
the open (or at least available to paying customers).
Also -- if we do end up demonstrating that WAF, VA+WAF, APIDS, ADMP, or
another solution is less viable than a new, better idea -- let's move
that research to the forefront.
Posted by Dre on Monday, June 23, 2008 in Defense and Security.
We've been beating the drum for some time now, expressing our opinions
of web application firewalls (WAFs). You might have sided with us on
this issue, be against us, or just be tired of it all by now. This post
is about to change all that and show that we are not 100% anti-WAF, and
that there are some useful applications for them.
Why WAFs do not work
In a post on why most WAFs do not
block,
Jeremiah Grossman quoted Dan Geer:
When you know nothing, permit-all is the only option. When you know
something, default-permit is what you can and should do. When you
know everything, default-deny becomes possible, and only then.
Jeremiah then stated that to implement a default-deny WAF (which would
offer the most security, but carries with it the greatest business
impact), you need to know everything about your app, at all times --
even when it changes. How appealing is this, given the amount of
resources you currently have? Who will be responsible for maintaining
the WAF? These are all questions you should be asking yourself. Jeremiah
then goes on to say that default-permit is necessary in web applications
-- going against everything we've learned in security over the past 40
years. Wait... what??
Some context to support our reasoning
Over the last several weeks, I've been evaluating several web
application firewalls. They all have their own list of cons and more
cons. On my first day, having sat down at their consoles, I was a little
overwhelmed by all the options present -- application profiles,
signatures, policies, etc. It all came to me as I worked through it and
read the manuals, though frankly, I don't see how anyone without a web
application security background can keep up with it all. I fear these
devices will be deployed and forgotten, relying solely on their ability
to learn and self-adjust.
Let's talk about the consoles used to monitor and maintain the WAF. One
vendor had a fat client application, which was a bit clunky, non-intuitive,
and had multiple usability issues. The number one issue that comes to mind is on
the monitoring panel -- to watch alerts in real-time, you need to set an
automatic refresh rate which updates the entire display, which makes it
impossible to analyze HTTP requests/responses during this time. If
you're scrolled down to a certain location of a request, and the console
refreshes, you lose your position and are brought back up to the top. I
don't understand why the entire screen had to be updated, rather than a
particular frame.
Another vendor used a webapp to manage itself, which was in my opinion
much nicer and easier to use, albeit slower. On the alert monitoring
page, you had to manually click a button to refresh the alerts, and
viewing requests/responses was a major pain. The application utilized
AJAX on pages that could do without it, but in areas that could benefit
from it, they resorted to old web tactics.
In the course of my testing, I started by taking RSnake's XSS
cheatsheet and creating
Selenium test cases for attacking
our own vulnerable web application (See our talk, Path X from
ShmooCon). For
those unfamiliar with Selenium, it's a browser driver that performs
functional testing, though we have shown how it can be used for
security testing. We didn't use
WebGoat
(or any other vulnerable
apps),
reasoning that the vendors must have tested against those and know them
inside out for just such occasions. Renaud Bidou had an excellent
presentation on How to test an IPS
[PPT] from CanSecWest
'06 which I believe can be applied to testing WAFs for those interested.
Suffice it to say, the WAFs did not detect ALL of the XSS payloads from
the cheatsheet that were thrown at them, which is pretty sad. I would
have expected them to at least get that right.
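To give a flavor of the harness, here's a minimal sketch of the kind of test case we drove with Selenium. It assumes the modern Python WebDriver bindings rather than the Selenium RC/IDE setup we actually used at the time, and the target URL and its q parameter are hypothetical stand-ins for our own vulnerable application::

    # Replaying XSS payloads from a cheatsheet against a hypothetical
    # vulnerable search page sitting behind the WAF under test.
    # Assumes the modern selenium Python bindings (we used Selenium RC/IDE).
    from urllib.parse import quote
    from selenium import webdriver

    TARGET = "http://vulnerable.example.com/search?q="  # hypothetical app
    PAYLOADS = [
        "<script>alert('xss')</script>",
        "<img src=x onerror=alert('xss')>",
        "<svg/onload=alert('xss')>",
    ]

    driver = webdriver.Firefox()
    try:
        for payload in PAYLOADS:
            driver.get(TARGET + quote(payload))
            # If the raw payload comes back in the page, the WAF let it
            # through (a real test case would also check for execution).
            if payload in driver.page_source:
                print("NOT blocked:", payload)
            else:
                print("blocked or altered:", payload)
    finally:
        driver.quit()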
That brings us to second-order, persistent XSS and SQL injection
attacks. When a web application strings together data from multiple
sources, detection of such attacks can be very hard. The WAF cannot
account for this logic, thus allowing an attacker to effectively bypass
the WAF by staging his XSS/submitting multiple payloads to various
sources. When the application then pieces the data together, an XSS (SQL
injection, etc) condition exists. The problem with this? Your WAF never
detected it, and you have no idea your site's been attacked and is now
hosting malicious scripts.
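Here's a contrived sketch of what we mean (all names hypothetical): each individual request looks harmless on its own to a WAF inspecting inbound HTTP, but the application later stitches the stored fragments together into a live payload on output::

    # Contrived second-order injection (all names hypothetical). Each
    # inbound value, inspected on its own, is an incomplete fragment that
    # matches no <script>/onerror signature; the application later
    # concatenates the stored pieces into a live payload on output.
    stored = {}

    def handle_request(field, value):
        # Pretend each call is a separate HTTP request the WAF inspects.
        stored[field] = value

    def render_profile():
        # Output path: data from multiple sources stitched together unencoded.
        return "<p>" + stored.get("nick", "") + stored.get("sig", "") + "</p>"

    handle_request("nick", '<img src="x" one')   # fragment one
    handle_request("sig", 'rror="alert(1)">')    # fragment two

    print(render_profile())
    # <p><img src="x" onerror="alert(1)"></p> -- never seen whole by the WAF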
There are just some attacks a WAF will never detect. HTML / CSS
injection through HTML / CSS is just one example. Go on over to
http://google.com/search?q=cache%3Atssci-security.com
-- can you describe what is going on here?
Or how about CSRF? Insecure session management? What can a WAF do to
protect against business logic flaws? We can go on and on, and yet
vendors still claim protection against the OWASP Top 10 -- which, if you
believe it, shows you know nothing about web application security.
How WAFs can help
So I lied, we haven't changed our minds about WAFs. But wait! I'll let
you know what would change our minds at least a little, which would show
that WAFs can have their purposes. Without this though, I can't
recommend any organization spend the money on such devices -- especially
if they need to meet compliance requirements where other options do
exist.
The value of WAF Egress, revisited
What should a WAF do? `Block
attacks <http://ha.ckers.org/blog/20070213/comparingcontrasting-network-and-application-security/#comment-17912>`_
on the
`egress <http://ha.ckers.org/blog/20070608/the-virtues-of-waf-egress/>`_
/
`outbound <http://ha.ckers.org/blog/20080203/inline-or-out-of-bounds-defeating-active-security-devices/#comment-60864>`_,
while staying out of the inbound flow of traffic. I'm not talking about
signature-based blocking either. This is the tough part, because it's
almost impossible. One way I see it working, though, is if the
application keeps the content (HTML), presentation (CSS), and behavior
(JavaScript) separated. The application should not serve any inline
scripts, but instead serve script files that would alter the content on
the client side. This would make outbound XSS prevention possible, for
example, because a WAF could then detect inline scripts in the content. None of
the WAFs I evaluated could detect a client being exploited by a
persistent XSS condition. This would also tell me how many users were
affected by the XSS attack, which we haven't seen any numbers on apart
from the number of friends Samy had when he
dropped his pants and took a dump all over our
industry.
Jeremiah and I got a picture with him wearing "Samy is my hero"
shirts. I haven't laughed that hard in a long time! But to quote a
sanitized version of what one guy said, Samy knew nothing about
webappsec and one day he walked in, dropped his pants and took a
huge dump on our industry and then left again. And we just looked
around at one another and said, "What just happened?"
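Coming back to the separation idea: if an application never legitimately serves inline JavaScript, an egress filter only has to flag script elements that carry a body. Here's a rough sketch of that check -- an illustration of the idea, not how any shipping WAF works::

    # If the application never legitimately serves inline JavaScript (all
    # behavior lives in external .js files), any <script> element with a
    # body in an outbound response is a strong signal of persistent XSS.
    from html.parser import HTMLParser

    class InlineScriptDetector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_script = False
            self.findings = []

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                self.in_script = True

        def handle_endtag(self, tag):
            if tag == "script":
                self.in_script = False

        def handle_data(self, data):
            if self.in_script and data.strip():
                self.findings.append(data.strip())

    def scan_outbound(html):
        detector = InlineScriptDetector()
        detector.feed(html)
        return detector.findings

    # An outbound page that a stored XSS payload has crept into:
    page = ('<html><body><script src="/static/app.js"></script>'
            '<div>hi <script>document.write("pwned")</script></div></body></html>')
    print(scan_outbound(page))   # ['document.write("pwned")']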
Another way to get this right is to apply the work done by Matias Madou,
Edward Lee, Jacob West and Brian Chess of Fortify in a paper titled:
Watch What You Write: Preventing Cross-Site Scripting by Observing
Program Output
[PDF].
They go on to talk about capturing normal behavior of an application
during functional testing, and then attacking the application as if in a
hostile environment, where it is then monitored to ensure it does not
deviate from normal behavior. Basically, it's all about monitoring your
application output in areas that are known to be dynamic.
In depth, the Fortify work uses dynamic taint propagation; "taint
propagation" or "taint tracking" is similarly done with static analysis
in order to trace misused input data from source to sink. This is also a
corollary to the work that Fortify has presented before regarding
Countering the faults of web scanners through bytecode
injection
[PDF].
While web application security scanners only demonstrate 20-29 percent
of the overall security picture, because of limited surface and code
coverage for the inputs of the application under test, dynamic taint
tracking goes a long way toward providing more coverage for these kinds
of tests because it's done as white-box dynamic analysis instead of
functional black-box runtime testing.
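To illustrate the idea, here's a toy of the output-observation flavor -- not Fortify's implementation: record what arrives from the client at the source, and refuse program output that still contains any of it unencoded at the sink::

    # Toy version of the source-to-sink idea -- not Fortify's
    # implementation. Values arriving from the client are recorded at the
    # source; the sink refuses output that still contains any of them
    # unencoded.
    import html

    TAINTED_VALUES = []

    def from_request(value):
        # Source: remember everything that came in from the client.
        TAINTED_VALUES.append(value)
        return value

    def write_response(page):
        # Sink: flag raw client data reaching program output unencoded.
        for value in TAINTED_VALUES:
            if value and value in page:
                raise ValueError("unencoded client data in output: %r" % value)
        print(page)

    name = from_request("<script>alert(1)</script>")

    try:
        write_response("<h1>Hello " + name + "</h1>")   # caught at the sink
    except ValueError as err:
        print("blocked:", err)

    write_response("<h1>Hello " + html.escape(name) + "</h1>")   # fine once encoded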
The value of XHTML
My fellow blogger, Andre Gironda, helped out with the praise section for
the book, "Refactoring HTML: Improving the Design of Existing Web
Applications", by Elliotte Rusty Harold. It's hard to disagree with the
notion that
XHTML can help
with both quality and security issues, as well as make applications and
content easier to refactor and work with.
When you're recoding thousands or millions of lines of code, wouldn't
well-formedness and validity be the primary requirements for working
with such large volumes of code? If anything, well-formedness and
content validity make the chores much easier to deal with. Rusty has
this to say in his book:
[...] there are two things [authors for the Web] are very likely to
write: JavaScript and stylesheets. By number, these are by far the
most common kinds of programs that read web pages. Every JavaScript
program embedded in a web page itself reads the web page. Every CSS
stylesheet (though perhaps not a program in the traditional sense of
the word) also reads the web page. JavaScript and CSS are much
easier to write and debug when the pages they operate on are XHTML
rather than HTML. In fact, the extra cost of making a page valid
XHTML is more than paid back by the time you save debugging your
JavaScript and CSS.
Since web application firewalls today cannot convert outbound HTML to
XHTML, this is certainly a job for the content writers (sometimes, but
often not, the developers) to deal with. In the Refactoring HTML book,
Rusty also talks about the tools necessary to develop content on the
web:
Many HTML editors have built-in support for validating pages. For
example, in BBEdit you can just go to the Markup menu and select
Check/Document Syntax to validate the page you're editing. In
Dreamweaver, you can use the context menu that offers a Validate
Current Document item. (Just make sure the validator settings
indicate XHTML rather than HTML.) In essence, these tools just run
the document through a parser such as xmllint to see whether it's
error-free.
If you're using Firefox, you should install Chris Pederick's Web
Developer --
https://addons.mozilla.org/en-US/firefox/addon/60
-- plug-in. Once you've done that, you can validate any page by
going to Tools/Web Developer/Tools/Validate HTML. This loads the
current page in the W3C validator. The plug-in also provides a lot
of other useful options in Firefox.
Whatever tool or technique you use to find the markup mistakes,
validating is the first step to refactoring into XHTML. Once you see
what the problems are, you're halfway to fixing them.
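In the same spirit as running a document through a parser such as xmllint, here's a small Python check using lxml (assuming lxml is installed; the file name is just an example). It only checks well-formedness; full XHTML validation would also involve the DTD or schema::

    # Well-formedness check in the spirit of "run it through a parser such
    # as xmllint", here via lxml.
    from lxml import etree

    def check_wellformed(path):
        try:
            etree.parse(path)   # strict XML parse, unlike lenient HTML parsers
            print(path, "is well-formed")
            return True
        except etree.XMLSyntaxError as err:
            print(path, "is not well-formed:", err)
            return False

    check_wellformed("index.xhtml")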
Speaking of properly validated, easy-to-read/use content, what irked me
most throughout my evaluation was the documentation. Vendors: do not
bundle a ton of HTML files together and call it a manual. If you're
looking to do that, please use DocBook if you're not going to make a PDF
available. Better yet, give us a hard copy.
Posted by Marcin on Monday, June 23, 2008 in Defense, Security and Work.
We all know about the CISSP. You've heard the whispered hallway
conversations. You've seen the business cards, the email signatures, and
the government contract requirements. You might even know the secret
handshake, or have the magical letters attached to your name somewhere
yourself.
Alternatively, you may despise what it has done to the IT security
industry and community. I do not despise it, and while I embrace it in
concept (I'm not a CISSP, by the way) -- I have to agree that it has
outlived its usefulness as a technical measure of capability. Special
note: this is a very bad thing and it needs attention. No show of hands
necessary.
Not all of the CISSP has been bad. It's given our industry a way to
measure strong analyst-level skills with information security concepts.
Some claim there are benefits in the CBK and ethics charter -- although
these have been debated into nothingness over the years, with no
innovations or improvements made. While some may argue that the CISSP
was DOA, no one can dispute the fact that the CISSP's ability to deliver
is currently MIA.
Wait, you're a CISA? Wait, you are a <insert other IT security
certification here>? You'll also want to read on, because this refers to
you too.
Specialist or Generalist: Pick one. Whoops, you're too slow
I read Dan Geer's keynote at SOURCE
Boston a few weeks ago,
and a few things hit me. Near the end, he says:
Only people in this room will understand what I am now going to say.
It is this: Security is perhaps the most difficult intellectual
profession on the planet. The core knowledge base has reached the
point where new recruits can no longer hope to be competent
generalists, serial specialization is the only broad option
available to them.
Geer is right: security is hard. It's also very intellectual. It brings
a lot of ideas to the table.
Kevin Mitnick was doing his thing way before the CISSP was around. It
wasn't until later that we saw intellectual success (but possibly
ethical failure) stories such as that of Adrian
Lamo, who showed that
expert-level penetration testing can be done by a hacker without a home,
armed with nothing but a simple (possibly even outdated by average
technology standards) laptop and a web browser.
The reason why Adrian Lamo was so good, the reason why this industry
exists, and the reason why security products fail are all interlinking
problems. The only people who stand to win are the people who cause the
most damage. Security is about damage prevention. Which is why Ranum is
probably
right,
although I guess that's an argument for another time.
I sometimes (read: not often enough) work with a handful of people. Most
are specialists -- a world-renowned secure code reviewer, one of the
world's best pen-testers (so I hear, even from outside my organization),
and an audit/framework/process guru. The generalists in our group (like
myself) are a dying breed. I might also add that at least one of them is
my age and brings an even broader skill-set and expertise to the table
than I do. I consider myself very fortunate. Let me continue with this
train of thinking by bringing us back to what Geer was saying about
specialists vs. generalists:
Generalists are becoming rare, and they are being replaced by
specialists. This is speciation in action, and the narrowing of
niches. In rough numbers, there are somewhere close to 5,000 various
technical certifications you can get in the computer field, and the
number of them is growing thus proving the conjecture of
specialization [...] will not stop.
Today, I want to continue in the spirit of The New School of
Information
Security,
and claim that we don't need expensive certification programs (i.e.
products) that cater only to a certain kind of elite. We need to get
back to basics.
IT Security certifications available to date
You don't need them; I don't have them. Certifications breed
specialization. We need more generalists. Don't get certified and don't
pursue a certification.
Of the people that I work with, only the specialists have
certifications. Note that the guy who is smarter than me (I asked him
to provide input into this) doesn't have any... and he says that the
only certification he was ever interested in throughout his career (I
assume he's been working in this industry for over 10-12 years, like
myself) was from SAGE. He says it's no longer offered.
What is different about the OWASP People Certification Project
James McGovern wrote on his blog recently about this new project. In his
blog post, Is it a bad thing that there are no IT security
generalists?,
he summarizes his points as follows:
As an Enterprise Architect, I understand the importance of the
ability for a security professional to *articulate risk to IT and
business executives*, yet I am also equally passionate that
security professionals should also have the capability to *sit
down at a keyboard and actually do something* as opposed to just
talking about [it].
[...] If you are a skilled penetration tester, can write secure code
and can reverse engineer software, you are worth more than any
CISSP. For those who embrace the mental disorder of hybridism and
distillation, *balance between these two are needed* where true
IT security professionals understand both [...]
Can we appease both the voracious business needs and the wrath of the
unstoppable and ever-expanding security learning curve? Is the OWASP
People Certification Project the program that can do this?
If James can truly make this sort of thing happen (and I truly believe
he is doing it and that he can do it -- based on everything I have seen
so far), then I will do my best to ignore the obvious contradictions or
annoyances -- and put my full support behind it.
It's not just James, either. Everyone I've met who has been involved in
the OWASP project has been stellar. The OWASP organization has brought
diverse people together in ways that DefCon/BlackHat, HOPE, Phrack, and
many other grassroots efforts never could.
Will OPCP replace CISSP? Only time will tell, but I will tell you now
that it indeed will. Wait and see.
Posted by Dre on Thursday, June 19, 2008 in Security and Work.
I see that the BlackHat Blogger's Network has a topic of
interest.
I'll oblige, especially since The Hoff is involved. I think it's a good
exercise, so I'll have to thank Shimel for this idea.
You also won't want to miss what I said about virtualization four
months ago in Hardware VM security: past and
present.
Today, I just want to talk about Hoff's points and my experiences with
modern virtualization.
Time is your most valued resource
Securing outdated "default-insecure" network devices, operating systems,
and applications is wasted time that could be spent trying to convince
management to replace or recode them.
It really does take about two or three days for one administrator to
install CentOS, SELinux, Kickstart, and Apache/PHP. Really, what else do
you need? Similarly, I guess it takes about two to three months to
install Windows Server 2003 or 2008, GPOAccelerator, RIS or Deployment
Services, and IIS/ASP.NET. It takes maybe another month to implement the
vendor/NSA/NIST hardening guidelines, CIS benchmarks, and OWASP material
for either platform. If you can't be bothered to download a few PDFs,
there are always books like the Hacking Exposed series.
In the case of "if it's not broke, don't fix it", the potential for a
data breach due to an existing, known vulnerability (or an obvious state
of software weakness) would count as "broken".
Virtualization can help organizations move quickly
Virtualization is supposed to help with these efforts! Instead of
running through a whole Kickstart or WDS install, all you have to do is
copy a file and boot it. Better -- this can be combined with iSCSI and
shared-disk filesystems such as OCFS2 or GFS. Better still -- iSCSI can
be combined with DHCP so that new guests quickly find their storage and
services.
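As a hedged sketch of "copy a file and boot it" using the libvirt Python bindings against a Xen host -- the paths, guest name, kernel, and domain XML below are hypothetical, and a real setup would point the disk at iSCSI-backed shared storage instead of a local file::

    # "Copy a file and boot it" with the libvirt Python bindings.
    # Everything here is a hypothetical example, not a recipe.
    import shutil
    import libvirt

    GOLDEN_IMAGE = "/vm/golden-centos.img"   # hypothetical hardened base image
    NEW_IMAGE = "/vm/web02.img"
    DOMAIN_XML = """
    <domain type='xen'>
      <name>web02</name>
      <memory>524288</memory>
      <vcpu>1</vcpu>
      <os><type>linux</type><kernel>/boot/vmlinuz-2.6-xen</kernel></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/vm/web02.img'/>
          <target dev='xvda' bus='xen'/>
        </disk>
        <interface type='bridge'><source bridge='xenbr0'/></interface>
      </devices>
    </domain>
    """

    shutil.copyfile(GOLDEN_IMAGE, NEW_IMAGE)   # "copy a file..."
    conn = libvirt.open("xen:///")
    dom = conn.createXML(DOMAIN_XML, 0)        # "...and boot it"
    print("started guest:", dom.name())
    conn.close()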
I've had Xen hardware virtualized guests that ran redundant services on
the same machine, with a keepalived heartbeat running between them. Why?
Availability was the primary goal. Availability is certainly a part of
security.
When you want to replace everything and start from scratch --
virtualization will help you get there much faster. In a mature
infrastructure with mature process, any organization will see increases
to availability (and therefore also the availability side of security)
and mean-time-between-failure, and a decrease in mean-time-to-repair.
Bootstrapping an infrastructure overnight is certainly possible. How is
this not a good thing?
Virtualization has the same problems as the least privilege
principle
The real least privilege principle is "if I have an account on the box,
then I also have administrator/root". Privilege escalation is almost
always possible. The same is true with regards to virtualization. If you
get access to a Xen/VMWare/VirtualIron/etc guest, then you can usually
also eventually get access to the host OS (and therefore all other
guests).
For those that don't remember the Xen Multiple
Vulnerabilities, read on up.
The nice thing about virtualization shows up when you take the previous
concept into account. Sure, virtualization increases risk because it creates
a situation of trust between the host and guest OSes. However,
virtualization can also be instrumental in quickly installing and
verifying SELinux RBAC or TE configurations, thus reducing that same
risk to all OSes.
Also worth a mention here are concepts such as secure hypervisors (see
my last article about sHype and links), Xen Access Controls, and
Phantom.
Capacity planning is not as important as live (or dead) migration
I'll concur with Hoff that security concepts hurt virtualization
concepts with regards to capacity planning. However, I'm not certain
that virtualization is all about capacity planning in the first place.
If anything, it allows you to shut down redundant services when they are
not needed (or turn them on only during peak times). Virtualization
allows you to move OS guests around the network (btw: doing so securely
would be a very good idea -- I'll have to write up a paper/concept on
"MITM'ing Live Migrations"), so in this way you can fix performance
problems quickly by just shuffling things around. Again, this helps
availability because it helps mean-time-to-repair.
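For the curious, here's a hedged sketch of what moving a guest around looks like with the libvirt Python bindings -- host names and the guest name are hypothetical, and the TLS transport is exactly the sort of thing you'd want so the migration stream isn't open to that MITM problem::

    # Live-migrating a guest between Xen hosts with the libvirt Python
    # bindings. Host names and the guest name are hypothetical.
    import libvirt

    src = libvirt.open("xen:///")                        # local host
    dst = libvirt.open("xen+tls://xen02.example.com/")   # remote host over TLS

    dom = src.lookupByName("web02")
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    print("web02 is now running on", dst.getHostname())

    src.close()
    dst.close()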
For those of you who missed what I said about live migrations in my
last
article,
be sure to check it out -- or just check out this link I had to an
article on Live Migration of Xen
Domains. The basic idea is that
computers should be shut off when they aren't in use, and with
virtualization this could mean shutting down everything but the last
box.
Inter-dependencies caused by virtualization
I've seen this, felt its effects, and dealt with the agonizing pain.
Virtualization environments add complexity. If you don't handle this
complexity well normally, then stay far away from virtualization.
Example
In the "virtualization can help organizations move quickly" section
above, I talked about using Xen guests with filesystems on a remote
iSCSI shared storage device running a distributed filesystem such as
OCFS2. I also indicated that you could use DHCP for the Xen guests
to find their associated filesystems. Well, of course -- I had
problems with DNS, which was running in one of those Xen guests when
everything came crashing down. After this event, the Xen guests
couldn't find their disks because they couldn't find their DNS.
Hoff mentions ACL, VLAN, and other network-based dependencies. Of
course! Don't do this sort of stuff and you won't get yourself in
trouble. But this takes planning, risk management (of the
non-security-only kind), and repair process analysis.
Documenting your environment and networks should already be in place
before you move to a maturity step involving virtualization. Playing out
"what-if" scenarios with a whiteboard can be done in the same way that
Red-team exercises can be done. Sometimes being a security professional
can also allow you to demonstrate your other resourceful risk expertise.
Virtualization isn't a technology -- it's a transition
Allow me to bring all of this together.
Gartner has an excellent model for understanding IT/Operations called
the Infrastructure Maturity Model. The current version
[PDF]
that I'm looking at right now has seven levels.
Gartner Infrastructure Maturity Model
- Basic
- Centralized
- Standardized
- Rationalized
- Virtualized
- Service-based
- Policy-based
Assume you're in #1 (in the document above, it's #0) if you don't know
otherwise. A lot of smart Enterprises are at least Standardized (#3
above), many more are Rationalized, and some have taken steps towards
Virtualized.
I know that it is somewhat a theme of this blog (I also feel that it's a
theme of security and risk management in general), but basically what
I'm trying to say is that process is more important than technology.
Process needs to come first. If you think you can download Xen and
integrate it into your production IT/Operations infrastructure while
you're stuck in phases 1-3 of the Infrastructure Maturity Model, then
you need to take some ITIL classes. If you think buying the latest
VMWare, VirtualIron, or Microsoft Virtual whatever is going to help you
if you don't do the documentation, monitoring, life cycle care, strict
change management, refined repair processes, and enduring risk
management -- then you're just plain wrong. Virtualization is a process,
not a product. Just like security.
Security and virtualization can complement each other, just like
security and availability (or the ability to change) can complement each
other. If you didn't read Hoff's post on The Challenge of
Virtualization Security: Organizational and Operational, NOT
Technical
-- please do so. What's more is that Pete
Lindstrom and
Mike
Rothman
say fairly similar things to what Hoff and I are saying. I'd say that
we're all fairly united as an industry on this topic, which is
rare.
For those that want to read more on this topic, I suggest checking out
this book on Virtualization Security from
Michael T. Hoesing when it comes out in the next week or so.
Posted by Dre on Wednesday, June 18, 2008 in Defense, Security and Tech.