tssci security

nmaparse.py -- Parsing grepable Nmap output to insert into MySQL

Last week, Richard Bejtlich reviewed "Nmap in the Enterprise," and was largely disappointed with its lack of enterprise context. My last script, tissynbe.py, parsed Nessus results in nbe format and inserted them into a MySQL database. Today, I'm making available nmaparse.py, a script that will parse grepable nmap output (*.gnmap, produced with the -oG or -oA flags) and insert the results into a database. My intention is for anyone to be able to take these scripts and use them for whatever purpose necessary -- be it personal or in the enterprise. Loading various tools' output into a database makes analysis both easy and super powerful, so I'd be interested in seeing what others are doing.

To use it, all you have to do is call the script and point it at some gnmap files. The script breaks up the results by host, port, protocol, state, service, version, OS, Seq Index, IPID Seq, scan date, and scan flags, and passes them into the database (nmapdb.sql schema provided).

$ ./nmaparse.py *.gnmap
Number of rows inserted: 76 results
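
To give a feel for what the script is doing under the hood, here is a minimal sketch of parsing one gnmap Host line. This is an illustration of the idea only, not the actual nmaparse.py code, and the sample line is made up:

```python
import re

# Example grepable-output line (shape only; a real scan will differ)
line = ("Host: 10.0.0.5 (server.example.com)\t"
        "Ports: 22/open/tcp//ssh//OpenSSH 4.7p1/, "
        "80/open/tcp//http//Apache httpd 2.2.8/")

def parse_gnmap_line(line):
    """Split one gnmap Host line into (host, [port records])."""
    host = re.search(r"Host:\s+(\S+)", line).group(1)
    ports = []
    m = re.search(r"Ports:\s+(.*)", line)
    if m:
        for entry in m.group(1).split(", "):
            # gnmap port fields: port/state/protocol/owner/service/rpcinfo/version/
            fields = entry.split("/")
            ports.append({"port": int(fields[0]),
                          "state": fields[1],
                          "proto": fields[2],
                          "service": fields[4],
                          "version": fields[6]})
    return host, ports

host, ports = parse_gnmap_line(line)
print(host, ports[0]["service"])  # 10.0.0.5 ssh
```

From there, each record can be handed to the database with a parameterized INSERT (e.g. MySQLdb's `executemany`), which also avoids quoting problems in version strings.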

See the nmaparse.py project page for more details. Again, comments and critiques are welcome.

Accountability through connected frameworks

Apparently Laura Chappell and Mark Curphey were presenting at the Microsoft TechEd 2008 Security Track last week. I haven't heard too much about what happened as a result, and I really wish I had been there to see them speak about their respective topics.

For those who don't know Mark Curphey, he was the founder of OWASP, and is currently working for Microsoft on the Connected Information Security Framework. I dug up more information on the Microsoft CISF from reading this [PDF] presentation at the Jericho Forums -- which should be required reading, even if you don't read the rest of this blog post.

The Microsoft Connected Information Security Framework

Mark identifies a few key areas of process for the Microsoft CISF:

  1. Understand and document the process
  2. Understand metrics and objectives
  3. Model and automate process
  4. Understand operations and implement controls
  5. Optimize and improve

He identifies several technology solutions such as:

CISF for everybody else

There's quite a lot to like about what Mark says (even if you're a rabid open-source fanatic), and what he claims is "an 80 percent solution for the masses". Here's what the other 20 percent might see:

In basic terms, Mark is trying to say that we need to formalize our security processes to include concepts of modern risk. He's also saying that everyone needs a security plan.

GRC refugees

I don't know if Mark got to speak about the Microsoft CISF at TechEd (or how well it was received), and I wasn't able to find a lot of information on what did happen. However, I was able to dig up a possible short-term solution for those seeking refuge from the now-dead GRC camp.

Actually, I'm still not quite able to make an easy distinction between Mark's goals with CISF and GRC platforms/tools. Eventually, I'm sure this point will be cleared up. Mark calls GRC platforms "Security departments in a box", but I'm failing to understand how the compliance part of a CISF isn't just "A Security department in a box: Just add milk (or soy milk)".

Visibility: "It's even better than Monitoring"

Mark identified a problem area for process point number 4 above: "Understand operations and implement controls". He suggested "Visibility": fast and accurate compliance and audit data. "Visibility" is a term I've seen thrown around a lot recently in all of the various blog posts / Verizon book-report notes on their [PDF] data breach survey. It's a term that I'm quickly getting sick of, but let me give you the short-order answers from the various religious institutions.

Richard Bejtlich is, without a doubt, the largest supporter of "Visibility" in information security. I think he even has a blog post called "Visibility, Visibility, Visibility". The guy is nuts for the word. Richard is the type of guy who likes to integrate "Visibility" into new security catch-phrases. He even has his own acronym, NSM, for "Network Security Monitoring" -- which is really old news compared to "Visibility".

Network visibility

When Marcin did a blog post about NSM last month, I had a lot of thoughts I wanted to share. Largely, the problem with NSM, visibility, IPS, and IDS is that these technologies are so easily subverted by any intelligent adversary -- or by a newbie who happens to be using encryption properly.

Visibility also brings me back to the good old days, when packets were cleartext and adversaries weren't organized. I have fond memories of Zalewski's museum of broken packets. Taosecurity came on the stage a bit late, and Richard's Openpacket.org capture repo would have been incredibly useful many years ago (although I'm not sure of the value now). For those just getting into the world of packet captures, it would have been great to see Laura Chappell's presentation at Microsoft TechEd 2008 (although she appears to be very private about her expensive material) -- because her courseware still remains one of the easiest and fastest (and most expensive) ways to learn Wireshark. She does make her Trace Files available for download, which I guess isn't a half-bad attempt at giving back to the community.

Host compliance visibility

Ok, here we go... I'm creating more problems than I'm solving -- but bear with me for a second. I don't think this has anything to do with Microsoft's CISF or GRC tools, but it's another ingredient to add to the mixing bowl. While researching Microsoft TechEd 2008, I came across some interesting links that somehow apply to this whole message I'm trying to convey.

I found a TechNet article on Security Compliance Management dated June 5th, 2008. Basically, it's a downloadable toolkit that integrates with Microsoft SMS / Configuration Manager 2007.

Unfortunately, this means that it requires a full version of that software since it relies on DCM (desired configuration management) features, which are not available in the free version of Microsoft SMS, Microsoft System Center Essentials 2007. Were it available (hint to Microsoft, please do this!), this would add an incredibly powerful, free compliance tool that would support up to 10 servers and 50 clients.

The primary component of the Security Compliance Management Toolkit, GPOAccelerator, is also available for free, but the other features (such as reporting and SCAP support) would be really nice to have.

I've seen other tools like these: little tools that are faster/easier than GRC tools, but that still provide enough information to those who need it for their auditors. I'm not sure where they fit or how useful they are, but here they are nonetheless.

What I don't like about Visibility

Visibility has a problem: it only sees what it can see; and it usually only sees what it wants to see. This is why I prefer accountability over visibility.

In accountability, when you locate a new problem (or just want information about it), you bring up the name of a person or team (hopefully with expertise). That person or team has contact information -- phone, email, IM, and location -- available in the directory. The other side of accountability is when you're the source of the problem. In this case, the person or team usually comes to you and provides identification.

In basic terms, accountability means that you're working with real, live people. Visibility usually involves looking at a pane-of-glass, a product, a spreadsheet, or some graphic. It makes things easier, but it doesn't solve problems directly, even when "fully automated".

BPM: more People (Accountability) or more Technology (Visibility)?

I think the Wikipedia article on BPM sums up the problems fairly well. GRC tools provide too much visibility through a technology platform that isn't really necessary and isn't working to meet the end goals.

This is where there exists a divide. It doesn't matter if the economy is up or down, organizations have to find their balance of spending on people versus spending on technology.

"Don't lead with a tool"

As a strategy consultant, I'm often in the precarious position of making suggestions that appear outside my realm of expertise. I think we need more experts in Organizational Development, Organizational Behavior, etc. My gut instinct tells me that in many organizations, we're leading with too many tools -- and we're also very top-heavy. The problems with application/software security are largely the result of a lack of accountability. We need more contributors (and the claim is that we can't add enough, or add them fast enough), but really we need better (and fewer) managers. A single manager should be able to grow his/her organization to the necessary size, with the necessary measures, in order to be accountable for every win and every mistake. Organizations should hire managers who can do this without adding complex, unnecessary, or idle layers of management. I've worked with managers who manage 50 reports (as an example) with full accountability in place and nearly every contributor happy and successful.

I think it's important to look at the breach data, sure. But it's also important to hear from the people involved in breach identification and response. I like to hear stories. I never hear stories. We're too quick to jump to conclusions about what the numbers mean. We're too quick to use the numbers to prove our points, or to reiterate / bring-up old discussions.

What I'd like to see in a Connected Information Security Framework is identification of roles and responsibilities, as well as active levels of determining accountability to the necessary controls. Do we need to redefine what a CISO is? What a security architect is? What a secure code reviewer / developer is? What a security tester is?

What does "Information Security Analyst" or "Information Security Engineer" mean anymore?

What web application security really is

I wanted to do a post about "what web application security really is" because plenty of people out there don't get it. They understand that "security attacks are moving from hosts to the Web", but they have no idea what that means. To most people, web application security is the same thing as website security. I see people trying to approach web application security in the same way that they have tried host security in the past: penetrate (web application security scanner) and patch (web application firewall) -- which won't work.

Web application vulnerabilities are different from regular vulnerabilities in more ways than you would think

Web application vulnerabilities are not a one-time thing. Buffer overflows aren't a one-time thing either, though we've been lying to ourselves about that for 20 years. The nice thing about buffer overflows is that they have historically been found through random input testing (e.g. fuzz testing) or code review as "one-off scenarios". A security researcher will find one, but that doesn't mean it's exploitable -- let alone that the entire codebase is riddled with them (though some repeat offenders obviously suffer from this problem).
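
The "random input testing" mentioned above is easy to sketch. Below is a toy mutation fuzzer in Python against a deliberately buggy parser; the parser, header format, and defect are all made up for illustration, not taken from any real fuzzing tool:

```python
import random

def parse_header(data):
    """Toy parser with a planted defect: trusts that bytes 4-8 are digits."""
    if data[:4] != b"HDR1":
        return None
    return int(data[4:8])  # raises ValueError on non-numeric bytes

def fuzz(iterations=2000, seed=1):
    """Mutation fuzzing: flip random bytes of a known-good input and
    record any input that makes the parser raise an exception."""
    rng = random.Random(seed)
    sample = b"HDR10042trailing"
    crashes = []
    for _ in range(iterations):
        data = bytearray(sample)
        for _ in range(rng.randint(1, 3)):
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            parse_header(bytes(data))
        except Exception as exc:  # crash candidates worth triaging
            crashes.append((bytes(data), repr(exc)))
    return crashes

print(len(fuzz()))
```

Each crashing input is a "one-off scenario" in the sense above: a defect worth triaging, but not automatically an exploitable one.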

With web application vulnerabilities, the more complex the app -- the more likely an adversary can make his or her dreams into an exploit reality

In the case of web application vulnerabilities -- one vulnerability means that there are often thousands of others -- hence the claimed high rate of false positives in automated security review tools -- with specific regard to SQL injection and XSS. What's even more clever about web application vulnerabilities is that they often work together -- they string together to form a bigger attack. A few little bugs equals one giant nasty bug. This also isn't the situation with fat applications -- a stack-based buffer overflow is usually one mean bug, but once it is patched properly the nightmare is [usually] over.
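The pattern behind "one vulnerability means thousands" is easy to demonstrate: every query built by string concatenation is its own instance of the same bug. A minimal sketch using Python's built-in sqlite3 module (the table, data, and function names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name):
    # Classic injectable pattern: user input concatenated into SQL
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized: the driver never treats input as SQL syntax
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # dumps every row
print(lookup_safe(payload))        # returns nothing
```

A codebase with one `lookup_vulnerable`-style query almost always has dozens more written the same way, which is exactly why one confirmed finding implies many others.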

Web application profiling and Google Hacking read developers' minds like Jedi magic tricks

Another problem with web application vulnerabilities is Google. Yes, Google. Before Google Code Search, it was difficult to use Google to find a new buffer overflow. Such is not the case with web application vulnerabilities. The blame isn't just on Google's search engine, but also on MSN Live Search, Yahoo, and many others. It's so easy to just find a login form and start playing with specific, exploratory characters. Buffer overflows require buffers of a certain length, with enough room for shellcode, to overwrite EIP (or execute via cute pointer tricks) in order to establish a reliable exploit. Web application vulnerabilities have no such restrictions. Once a web application is profiled (which could be as simple as an inurl:login.asp Google search), an intelligent adversary should already know numerous input techniques that will work against that specific type of application.

Web application vulnerabilities are like Legos for crimeware

Finally, adversaries aren't moving away from client-side vulnerabilities. They're just implementing the attacks differently. Web application vulnerabilities work well along with older host-based vulnerabilities. This isn't only because web application vulnerabilities can get inside the firewall. They open up many different doors for attack. Using Inter-protocol exploitation, it is possible to send shellcode through an XSS/CSRF/XHR worm. But it's also possible to send XSS worms through XSS worms or SQLi worms through XSS worms. Or XSS worms through SQLi worms. An adversary can put these types of attacks together however he or she wants.

Web app security today -- The 2008 SQL Injection Attacks (January - present)

The 2008 SQL Injection Attacks have spawned a lot of talk and controversy about web application vulnerabilities. If web application vulnerabilities were not mainstream in crimeware before, they are now.

The Microsoft Security Vulnerability Research and Defense blog had an excellent post regarding these SQL Injection Attacks. I think it dispels a lot of the myths about this attack, and it provides a lot of information on what to do about it. Of course, if you're not used to the developer terminology (as an IT security professional or manager), there's no better time than now to read up, hire some experts, and get the word out.

There are a lot of links, but the most important one is how to identify and fix the problem. What's interesting is that a lot of the links talk about ASP.NET, but the attacks from Asprox's fake "Microsoft Security Center Extension" SQL injection attack tool currently seem to target only Classic ASP.

If you have access to IIS logs, you can also run the SQLInjectionFinder.exe tool. Of course, if you run a web application security scanner, you may or may not find SQL injections in your Classic ASP web application -- but this doesn't necessarily mean that you are or aren't hosting the Javascript malware. If you have a web application firewall, this doesn't mean that your web application can't be attacked via internal networks (assuming your web servers are listening on different interfaces) -- and it certainly doesn't mean that you aren't hosting the Javascript malware.
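
The general idea behind scanning IIS logs for this attack is simple signature matching: the 2008 attacks carried their payload as a hex-encoded string through a DECLARE/CAST(0x...) construct in the query string. The sketch below only illustrates that idea -- the regex and the sample log lines are mine, not SQLInjectionFinder's actual logic:

```python
import re

# Telltale shape of the 2008 attacks: DECLARE followed (within a short
# span) by CAST(0x<hex>. Illustrative, not exhaustive -- payloads can
# also arrive fully URL-encoded.
SIG = re.compile(r"DECLARE.{0,200}CAST\(0x[0-9A-Fa-f]+", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return (line_number, line) pairs that match the injection signature."""
    return [(n, line) for n, line in enumerate(log_lines, 1)
            if SIG.search(line)]

logs = [
    "2008-06-01 10:00:01 GET /page.asp id=5 200",
    "2008-06-01 10:00:02 GET /page.asp id=5;DECLARE%20@s%20varchar(4000);"
    "SET%20@s=CAST(0x4445434C415245%20AS%20varchar(4000)) 200",
]
print(suspicious_lines(logs))
```

A real scanner would also URL-decode each line before matching, since attackers encode the keywords themselves.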

A better tool to test whether you are hosting the Javascript malware is SpyBye. You can set up your web browser to proxy through SpyBye.org, or run it locally (e.g. to check your Intranet web applications) -- and it can also integrate with ClamAV. TS/SCI Security (well, Marcin and I) discussed this strategy when we did the Network Security Podcast with Rich Mogull and Martin McKeay.

Many faces of the same SQL injection attack

Also mentioned in the Microsoft Security Vulnerability Research and Defense blog post on the SQL Injection Attack is that it's not just one attack tool -- it's a bunch of tools and tactics. For those of you not familiar with Joe Stewart's SecureWorks article on Danmec/Asprox, be sure to give it a full read. Additionally, check out the SANS article from Bojan Zdrnja on the 10,000 web site infection mystery solved. My favorite is a quote from eWeek's article on Botnet Installs SQL Injection Tool:

Researchers are still investigating exactly what vulnerability on the Web sites is being exploited, Stewart said. The Web sites are English-language and their owners include law firms and midsize businesses. A similar attack technique is currently being seen spreading game-password-stealing Trojans from China. Whether the tool is related or only the attack syntax is shared, it is clear that SQL injection attack activity is on the rise from multiple sources, Stewart wrote in his blog.

Why WAFs are the wrong answer, if you didn't get it the first or second time

When I see WAF support from organizations like SANS backing Jeremiah Grossman -- as well as from big companies like Imperva -- I immediately question their reliability as sources of expertise. If the web applications serving malware are largely owned by law firms and midsize businesses, then these are likely outside the scope of SOX, PCI-DSS, GLBA, HIPAA, and BITS Shared Assessments. Many of these organizations don't even have the money or administrative staff to support WAFs as an option.

What works in web application security: Take 2

What these companies need is an IT decision to remove, replace, or recode any web applications that clearly demonstrate an affinity for SQL injection vulnerabilities (especially ones written in Classic ASP). If replacement or recoding is chosen, then it would be a good idea to establish some software risk guidelines, such as how web application software is acquired and tested before it is loaded by a web server. It's not a matter of production or not: all applications -- internal, Intranet, lab, test, etc. -- need to be asset-tagged and approved as tested. Knowing what web applications your organization is running is now important because of the risk involved in a SQL injection (or other web application vulnerability) "drive-by".

Why we need action now

If we just let this malware sit dormant in web applications, we're in for a lot of trouble. All that is required for adversaries is to flip a switch -- and now they can deliver any new zero-day threat they desire. These could be websites you visit everyday. This could be web application code executed by your browser when you start it. It could even be your Intranet server, or a partner site you use. It could be your favorite online shopping site.

Don't hesitate to make a decision. The next wave of these attacks may not work through an antiquated botnet or a standalone tool. They will probably target PHP MySQL applications, ASP.NET applications, both, or even more. There may not be simple tools such as SQLInjectionFinder.exe or SpyBye to help locate these vulnerabilities -- and some could even stand the scrutiny of incident research for some time. How long did it take us to understand these SQL Injection attacks? Longer than 6 or 7 months?

If you think that implementing a WAF will save you (even in the short-term), please let us know why you believe this is the case. TS/SCI Security sees the WAF answer as FUD, lies, and/or short-sightedness. The best answer is to recode or replace while we still have the chance.

Software Security: a retrospective

Today I am going to cover the topic that is most important to me: software security. When I talk about "software security", I refer to the process of building applications -- the artifacts, components, and capital that go into making a polished product. Applications are something development teams worldwide strive to be proud of -- to make something that is used and adored by its userbase.

Traditionally, software has been built using a flawed paradigm: it is constructed by ad-hoc teams, with ad-hoc decision-making, ad-hoc deadlines, and an informal process. This informal process describes the majority of applications that have been released to the public, or sold to specific customers, over my entire lifetime.

In this blog post, my goal is to arm you with the most accurate information possible to build your dream application. However, there is a cost involved: the software must be built in a secure manner -- one that avoids the overly common software weaknesses known to criminals, to adversaries, to any would-be attacker. From independent software engineers to development teams inside the largest organizations, this advice may keep you from making the headline news with the words "vulnerability" or "exploit" attached, and may instead make you news such as "committed to secure practices" or "most secure app in its class". There will certainly be gaps, primarily due to the human factor (applications can still be gamed and reduced to trivial "phishing-like" attacks against their users).

Attacks against people will always be valid attacks for threats such as online organized crime and the nefarious crimeware created out of such circles. However, there is a new hope in software security: a promise that when broken windows are repaired, the streets will stay clean. With a clean street to walk upon, it is then up to the community to solve the people problems. At least in the case of acute software security practices, an application's community -- its users -- gets to stand on firm ground for any fights that transpire. If you're really good, you'll take this advice even further and solve any risk factors facing your applications.

Issue One: Identify and expose risk

Summary -- Risk first; security second. Find a risk expert. Use him/her for planning.

Problem -- Understand the risks that your applications face, preferably before you design or build them. If you haven't yet started a development project, this is the best time to formalize the risk process. If you already have a completed application or a whole product suite -- risk management is intensely more complex.

There are many people in the world who are fluent in the language of risk. If you have a CPA, physician, or lawyer, then you understand that he/she is probably one of the better people to go to for general life-risk advice. In the world of software applications, it is unlikely that this same person will have the credentials necessary to help you.

We need software risk experts. The best way to learn about software risks is by involving yourself in a formal software development lifecycle. In other words, requirements engineers, software architects, developers, and software quality engineers make some of the best software risk experts.

However, today's threat landscape changes modern software risk. Adversaries have outsmarted and outgunned the average software risk expert by introducing esoteric attack paths through widely unknown and misunderstood weaknesses in software. Everyone who thinks they are an expert in software risk must now worry about the intricacies of SQL injection, cross-site scripting, integer vulnerabilities, and heap overflows -- in addition to the deadlock/livelock, exception-handling, and forbidden "crash" issues of the past.

Recommendation -- Super spies, organized criminals, and federal law-enforcement field agents all have one thing in common: they don't operate alone.

Find some risk experts and heed their advice. You'll want recommendations, but I suggest you avoid the typical places one would go. For example, people assume that the PCI SSC regulates good QSACs to do code review, or ASVs to do application scanning. Other people assume that Microsoft, Symantec, SANS, or another vendor has "the inside track" on security information. These are largely fallacies, due to the following issues:

That said, I won't make any specific recommendations today since I am contractually obligated by my employer to avoid such details, and naming competitors would be an unfair advantage. My suggestion is to find a solutions provider that has intimate knowledge of the other issues brought up in this post. If you find a software risk management provider that addresses my issues in their entirety, it is quite likely that they will suffice for software risk planning. Of course, nothing is stopping you from commenting about a software risk solutions provider that you've worked with or are looking to work with in the future.

Honorable mention -- NSA Red team white-boarding. Crimeware, especially the recent botwebworms going around. That special way that Taiwan and China interact over the Internet.

Issue Two: Perform security requirements engineering

Summary -- 60% of all defects exist by design time (Gilb 1988). Reworking a software requirements problem once the software is in operation typically costs 50 to 200 times what it would take to rework the problem in the requirements stage (Boehm and Papaccio 1988). Statistics taken from Software Quality at Top Speed, adaptation for software security found in Business Case Best Practices on the Build Security In website.

Quality and Security are intertwined; they depend on each other. How can a CIO be focused on CMMI, yet at the same time be reluctant to enhance their process with a security development lifecycle? Requirements engineering has been proven time and time again for both quality and security purposes as the most beneficial exercise to increase defect prevention, as well as to lower the cost of existing defects.

Resources (primarily acquired from the book, *Software Security Engineering: A Guide for Project Managers*):

Problem -- In this process, the difficulty often comes with decision-making. Pragmatic managers and analytic engineers could bounce off each other or blow up under the wrong conditions. SixSigma would tell the metrics-oriented business analyst to use a scorecard and focus on the "voice of the customer". Evangelical venture capitalists would say, "requirements-gathering doesn't ship code". Customers would foolishly say, "we trust you to make these decisions for us". Engineers with experience, either real-world or academic, might even oppose these views with fervor.

The book referenced above, Software Security Engineering: A Guide for Project Managers, discusses processes of decision-making, "eliciting security requirements", and trade-offs. It doesn't mention paralysis by analysis, or the fact that development teams often fail to read the specifications (or a multitude of other issues). I side with its views, but the reality of what I see and hear going on is viciously to the contrary.

Organizations typically hate requirements.

Recommendation -- After having spent some time in an operational role, I watched how others deal with compromising vendor negotiations and acute attention-deficit disorders among IT decision-makers. Without bothering you with the horror-stories, I can say two simple words about negotiation that exists outside of political situations where one side clearly has the upper hand: Win-win. Win-win and other strategies are available in the Software Security Engineering book, as well as on the Build Security In website's Requirements Engineering section.

The tricky situation is to make sure you have a level playing field to start with. This is where the software risk experts come in. Balance the needs of release engineering/scoping (e.g. responsible manager) with your quality risk experts (e.g. lead developer) and your security risk expert provider. Be sure not to forget the customer, if they want to have any say at all.

I suggest MediaWiki as a base platform for requirements engineering documentation. A balance of documentation/tagging between the responsible parties would be beneficial to all. Code-level requirements specifications can be more easily integrated with software projects using open-source tools such as FitNesse (and HtmlFixture if the application is web-based), or commercial products such as HP TestDirector and IBM RequisitePro.
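
The idea behind code-level requirements tools like FitNesse is that a requirement becomes an executable table of inputs and expected outputs. A minimal sketch of the same idea in plain Python -- the requirement, table, and function names here are hypothetical, not from any of the tools above:

```python
# Requirement R-12 (hypothetical): account locks after 3 failed logins.
# Each row is (failed_attempts, expected_locked) -- the "spec as a table".
REQUIREMENT_TABLE = [
    (0, False),
    (2, False),
    (3, True),
    (7, True),
]

def is_locked_out(failed_attempts, threshold=3):
    """Implementation under test for requirement R-12."""
    return failed_attempts >= threshold

def check_requirement(table):
    """Run every row of the requirement table; return the failing rows."""
    return [(attempts, expected) for attempts, expected in table
            if is_locked_out(attempts) != expected]

print(check_requirement(REQUIREMENT_TABLE))  # [] means R-12 holds
```

Because the table is executable, the requirements document and the regression suite are the same artifact -- exactly the integration the fixture-based tools aim for.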

Honorable mention -- Fagan inspection and the V-Model SDLC representation

Issue Three: Design and construct secure software

Special Note -- You can skip this issue if you don't have any software that is worth defect-tracking. You will also have to skip ahead to Issue Four if your primary goal is to secure an existing application. You'll return to this issue later when you have to re-code that application due to existing defects.

Summary -- There are only two correct paths. Choose one and go as deep with it as necessary. The necessary parts will be well-defined depending on the path that you choose. Both mandatory and optional processes for designing, inspecting, and testing software for correct security properties will be prescribed.

  1. Model-driven development
  2. Test-driven development

Problem -- No product, service, or hodge-podge of free/open-source software/methods exists that will put your software together for you and make it a secure application. No A-Team or B-Team is available for hire to design, build, inspect, or test your software to make it a secure application. You are on your own.

Recommendation -- Well, that all depends. This is the hard part.

You need:

  1. Defect-tracking system and formalized process
  2. Design or test-case specification/management tool and formalized process
  3. Time management with a formal process
  4. Defect prioritization list (must come from your software risk experts and carved in stone)

Path One additional criteria: Model-driven development should generate some or all of the code necessary to complete a software project from the specification. Any code not generated using model-driven development must use test-driven development.

If you or your software risk expert do not know what formal methods are, or why you would need them, then you probably don't want this. Instead, investigate semi-formal methods. The specification can be generated using:

  1. Model-driven engineering (MDE) using NModel with C# .NET or Software Factories
  2. Model-driven architecture (MDA) using AndroMDA with Java Enterprise Edition
  3. Executable UML (xUML) for object-oriented languages such as C++ or Objective-C

Again, skip this if you or your risk expert are unaware of the benefits of using semi-formal methods to generate code. Instead, rely on the informal methods (and all of the other required steps) that go along with Path Two.

Path Two additional criteria: Test-driven development uses informal design methods combined with test-first development.

Informal design methods:

  1. DFD or UML diagram analysis with an automated tool such as Klocwork K7's architectural analysis module and/or manually by a threat-modeling expert
  2. Privilege-centric security analysis with Trike performed by a threat-modeling expert. This is typically a manual process
  3. Threat-modeling with Microsoft TAM-E (or the free, classic Threat Analysis and Modeling v2.1.2 tool) performed by a threat-modeling expert. This is also a manual-only process

Test-first development:

  1. Writing unit tests before construction of software
  2. Unit tests should assert for the defect's fix and run continuously so that regression testing is integrated
  3. Design and construction of code in short "sprints" which allow for re-prioritization of security defects before, during, and after every sprint
  4. Continuous refactoring, which may take up to 50% or more of each construction phase during every sprint
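
The test-first loop above can be sketched for a security defect: write the failing test against the defect report first, then make it pass, then keep it in the suite as a regression test. The renderer and function names below are hypothetical, made up to illustrate the practice:

```python
import html

def render_comment(comment):
    """Fixed renderer: escapes user input before it reaches the page."""
    return "<p>%s</p>" % html.escape(comment)

def test_script_tag_is_neutralized():
    # Written first, against the defect report; it fails until the fix
    # lands, then runs on every build so the defect cannot quietly return.
    out = render_comment("<script>alert(1)</script>")
    assert "<script>" not in out
    assert "&lt;script&gt;" in out

test_script_tag_is_neutralized()
print("regression test passed")
```

This is the "assert for the defect's fix and run continuously" step in miniature: the test encodes the vulnerability, not just the feature.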

See: Agile Security - Breaking the Waterfall Mindset of the Security Industry and the presentation by Dave Wichers of Aspect Security for more ideas on test-first development.

Honorable mention -- The Microsoft SDL. Cigital Touchpoints. OWASP CLASP. My Continuous-Prevention Security Lifecycle (CPSL). SSE-CMM, et al.

Issue Four: Raise awareness and identify root-cause issues

Summary -- There are many ways to raise awareness and do root-cause analysis. Sometimes the fastest/cheapest way is the best way.

Often, the fastest way is with an automated security review checker program such as CAT.NET (XSSDetect is free), FindBugs, PMD, CheckStyle, FxCop, StyleCop, SWAAT, PHP-SAT, Orizon/Milk, Pixy, ITS4, RATS, SPIN (all free), Checkmarx, Ounce, Klocwork K7, GrammaTech CodeSonar, or Fortify SCA (all pay-commercial).

Less likely, a fault-injection or fuzz testing tool can be used -- such as Compuware SecurityChecker, automated web application security scanners (especially ones that are meant to work at the development or QA level such as DevInspect, QAInspect, or AppScan DE), or a concolic unit testing engine such as jCUTE (free).
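To make the fault-injection idea concrete, here is a toy fuzzing sketch (not any of the tools named above; the `parse_key_value` target is invented for illustration). It throws short random strings at a naive parser and collects the inputs that crash it:

```python
import random
import string

# Hypothetical target: a naive parser that crashes on malformed input.
def parse_key_value(line: str) -> tuple:
    key, value = line.split("=", 1)  # raises ValueError if '=' is missing
    return key.strip(), value.strip()

def fuzz(target, runs=1000, seed=42):
    """Throw random printable strings at `target` and collect crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 40)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_key_value)
# Inputs without an '=' raise ValueError -- a defect class that hand-written
# tests often miss, which is exactly what fuzzing is good at surfacing.
```

Real fuzzers add coverage feedback and input mutation, but the root-cause payoff is the same: each crash points at a specific unvalidated input path.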

My favorite technology solution to this problem is the tried and true white-box dynamic analysis method, with tools in this class such as Coverity Prevent, Fortify PTA, Insure++, Purify (pay-commercial) or Valgrind (free). Other possibilities here are code-comprehension tools such as SciTools Understand, Atlassian Clover, Bullseye Coverage (pay-commercial), EMMA, PartCover, NCover, tcov, gcov, and lcov (free).

Sometimes, an overhead view of the UML diagrams with Klocwork K7, Rational Rose (pay-commercial) or even UMLGraph or GoVisual Diagram Editor (free) will be enough.

It could be that the fastest/cheapest way is to simply do an interview with the lead developer(s). They might already know. They might be on your side.

Once you've identified an issue, the only action left is governance: getting Issues One through Three approved and committed to. This means C-Level involvement at the highest level of any organization.

Problem -- Nobody cares. This isn't the way things normally happen. This isn't the way we do things around here. We don't have funding. We don't have time. We don't have resources. Maybe next year.

Recommendation -- Be creative. Listen. Write. Peer review. Speak. Don't screw it up and don't "consider other options". What if you could get Steve Jobs to run LookingGlass.exe and see the obvious problems with QuickTime? We need more tools like this at the tactical level. GRC tools such as Agiliance, Archer, and ControlPath are too strategic and "high in the clouds" (as well as too expensive).

Honorable mention -- Application security in all of its ugly and twisted forms. IDS, IPS, WAF, and logging. Black-box web application security scanners. Manual pen-testing. These may raise awareness. They may solve some problems locally, temporarily -- but not systemically. They do not involve root-cause.

VBAAC Security and You

My good friend Arshan Dabirsiaghi at Aspect Security released an interesting paper today on Bypassing VBAAC with HTTP Verb Tampering. For those who don't know what VBAAC is, it stands for "Verb-Based Authentication and Access Control." Unfortunately, most vendors have screwed up the implementation by taking a default-allow approach, and as a result developers are likely to have [unintentionally] exposed their applications.

How does the attack work? In applications that use VBAAC, a developer can, for example, specify a security constraint that restricts GET and POST requests to the /admin/* directory to users in the "admin" role. The constraint would look like this in the web.xml file of a Java EE web application:

<security-constraint>
    <web-resource-collection>
        <url-pattern>/admin/*</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>

The interesting part is that this constraint restricts only GET and POST access to /admin/* to the admin role. Anyone can submit an HTTP HEAD request for the resource, or an arbitrary HTTP verb if the container allows it (the paper uses a made-up JEFF method as an example), and the resource can be accessed without authentication.

The end result is that resources are less restricted than expected. The caveat is that an attacker will not always see the responses, though damage can still be done. Take a moment to read the paper now -- it's concise, gets straight to the point, and gives solutions. This likely affects applications you are responsible for, so take the time to understand the risk and take action.
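To see what a verb-tampering probe looks like on the wire, here is a minimal sketch (the host and path are hypothetical, and `JEFF` is the paper's made-up verb; only probe systems you are authorized to test). Raw sockets are used because many HTTP client libraries refuse to send non-standard methods:

```python
import socket

def build_request(verb: str, path: str, host: str) -> bytes:
    """Build a raw HTTP/1.1 request with an arbitrary verb."""
    return (f"{verb} {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode("ascii")

def probe(host: str, path: str, verb: str, port: int = 80) -> bytes:
    """Send the request and return the first 4 KB of the raw response."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_request(verb, path, host))
        return sock.recv(4096)

# e.g. probe("target.example", "/admin/listUsers", "JEFF")
# A 200 response to HEAD or JEFF on a path where GET demands
# authentication indicates the default-allow bypass described above.
```

Comparing the responses for GET, HEAD, and an arbitrary verb against a constrained path is enough to confirm whether the container falls back to default-allow.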

