tssci security

Day 10: ITSM Vulnerability Assessment techniques

Lesson 10: You could say I'm a little late on posting something. However, we've been up to a lot of great research, much of which we'll hopefully publish here over the next few weeks.

We've had a few posts lately, some of them reflecting a change of heart. The latest must-read from the blog world comes from Nitesh Dhanjani at O'Reilly's ONLamp, What Have You Changed Your Mind About?

So far, I'm the only person who has commented, and I didn't really answer any of his questions. However, the post itself is quite insightful. It talks about data breaches strictly in terms of PII (personally identifiable information): where PII comes from, and the problems inherent in static identifiers. I give some defense suggestions in the comments.

Related to Nitesh's post is Adam Shostack's predictions on the SDL Blog, New faces and predictions for the New Year. Bryan Sullivan is also introduced to the team (recently joined from HP / SPI Dynamics) with his XSRF (CSRF) predictions. He's probably referring to Web services, Ajax, Silverlight, and other RIA -- which contain CSRF's that may be more critical than the ones built into HTTP. For an example, check out the JSON API Hijacking research from the Fortify Software paper, which is also covered in the books Ajax Security, the Web Application Hacker's Handbook, and Hacking Exposed Web 2.0.

Part 1: Information assurance vulnerability assessment — Protective measures, Identification -- multi-factor authentication -- physical access control

Something you know. Something you have. Something you are.

This is the tenet of authentication for security systems. The first thing I think of: what about one-time passwords -- OTP's? They're a cross between something you know and something you have in the case of an RSA SecurID hardware token device, but this is more confusing when it's software such as S/Key or OPIE. What about a USB token or Smartcard that contains RSA, DSA, or ElGamal keys?
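As a concrete sketch of how counter-based OTP's work, here is the HOTP algorithm from RFC 4226 in Python -- a minimal illustration of the open standard, not any vendor's implementation (SecurID in particular uses its own proprietary time-based scheme):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 over a big-endian 8-byte counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

Both sides keep the counter in sync, so a phished password alone is useless without the token that holds the secret and counter -- which is what makes OTP's a "something you know / something you have" hybrid.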

There is a company, StartCom, who not only provides free, usable SSL certificates, but also sells Aladdin USB Tokens. StartCom is a great way to avoid running your own private Certificate Authority (CA) if you have a need for one. However, the StartCom root CA certificates only ship in Firefox, Safari, and Konqueror.

The Aladdin USB Tokens and Smartcards are very interesting, mostly because they are cheap ways of providing average sized keys (2048-bit) in hardware. The Aladdin eToken NG-OTP appears to be a great concept -- combine the concept of secure keys with an OTP.

Even better would be to combine the OTP and hardware keys with Single Sign-On (SSO). Also interesting is to be able to combine both logical (system/network/application) and physical (proximity, photo ID badges, locks, man-traps) to create Integrated Physical and Logical Access.

Recommendation: Consider adding physical tokens, integrated access controls, SSO, and OTP's. Testing the authentication and access controls would be extremely fun -- it's odd that I haven't seen much research in this area of security at all.

Many organizations don't have any physical access controls other than a shared/copied key on tumbler locks and/or a security system (mostly for insurance purposes). I've seen a lot of complex systems, including ones that provide cameras and audio monitoring (the claim being that audio makes it easier to justify calling the police than cameras alone).

I suggest avoiding surveillance systems (both the audio and video kind) and installing a simple wireless security system with alarm monitoring. Keep fire safes in almost every office for storing documents, for anyone who wants one or is willing to use one.

There is the Home Security Store, which offers wireless alarm kits (from DSC as cheap as $245) and other wireless solutions -- I suggest DSC Wireless products. The Home Security Store also appears to partner with Alarm Relay, who offers monitoring for $8.95/month. Cheap isn't always good, but in this case it appears to be better than most systems that cost hundreds to install (where you're leasing the equipment anyway) and at least $40/month for monitoring.

You might be thinking that it is strange that I suggest such a pragmatic approach to physical security, when I suggest such complicated concepts for application, network, system, and software security. Insurance is more clearly defined for physical security, where cyber-insurance and liability concepts are more "cutting-edge" and being worked out. We'll cover cyber-insurance in a future post, but I suggest checking out the book "Geekonomics" by David Rice for more information on this topic.

Protecting your building doors, windows, locks, and walls is usually a better place to start. Be sure to check out No-Tech Hacking guides, lockpicking videos/presentations, and other material available out there. Try out these techniques against your own home and office.

I suggest starting with key bumping. There is a good triangle-shaped file you can get at most hardware stores, which works well alone or with a Dremel tool. I have both of these, a few key blanks of different types, a few key gauges, and some test locks. If you want to learn more, check out the No-Tech Hacking book by Johnny Long, or attend the presentation at ShmooCon (the speaker list was announced earlier this evening!) -- New Countermeasures to the Bump Key Attack by Deviant Ollam.

Part 2: Software assurance vulnerability assessment -- Path traversal and Predictable resource locations (PRL's)

Best Path traversal and PRL attack tools

Nikto, filefolderenum, http-dir-enum, FreeWVS, babelweb, Burp Suite, w3af, DFF Scanner, OWASP DirBuster, OWASP JBroFuzz, ProxMon, Paros, OWASP WebScarab, sn00per, WebRoot.pl, webfuzzer, Wapiti, Syhunt Sandcat Free, N-Stealth Scanner Free Edition

Best Path traversal and PRL attack helper tools

FileMon, lsof, strace, truss, ktrace, ExtendedScanner, Inspekt, Orizon, ASP-Auditor, Milk, SWAAT, RATS, PHP-SAT, PHPSecAudit, PSA3, PFF, LAPSE
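To illustrate the kind of input the attack tools above throw at an application, here is a minimal sketch of generating path traversal payloads with a few common encoding variants (the variant list and target file are illustrative, not taken from any of the tools listed):

```python
def traversal_payloads(target: str = "etc/passwd", depth: int = 6):
    """Yield dot-dot-slash payloads with common encoding variants."""
    variants = [
        "../",      # the classic
        "..%2f",    # URL-encoded slash
        "%2e%2e/",  # URL-encoded dots
        "..\\",     # Windows path separator
    ]
    for variant in variants:
        yield variant * depth + target

for payload in traversal_payloads():
    print(payload)
```

A real fuzzer would combine these with double-encoding, null bytes, and platform-specific tricks -- the point is just that the payload space is mechanical, which is why it automates well.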

I hate to say it, but PRL's are one thing that commercial web application security scanners are better at than both open-source structural and functional testing tools -- as well as possibly even static source code security analyzers. This is where WebInspect, AppScan, Hailstorm, NTOSpider, Acunetix WVS, Syhunt Sandcat, N-Stealth Scanner, and Sentinel really shine.

In tests of a commercial scanner vs. me, I would bet my money on the scanner to find all the WEB-INF/'s, .bak's, and Emacs "~" files. For more information, check out some books that cover commercial scanners more in-depth such as Hacking Exposed Web Applications (2nd Edition), or Security Power Tools (Section 3.3). In related news, Romain Gaucher compares the commercial tool, Fortify SCA 5.0, to open-source PHP source code security analyzers.
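To make the PRL idea concrete, here's a minimal sketch of building candidate URLs from a wordlist of predictable paths plus backup suffixes (the wordlist is illustrative, not any scanner's actual database); a real tool would then request each URL and flag anything that doesn't come back 404:

```python
import itertools
from urllib.parse import urljoin

COMMON_PATHS = ["admin/", "backup/", "WEB-INF/web.xml", ".svn/entries", "login.php"]
BACKUP_SUFFIXES = ["", ".bak", ".old", "~"]  # editors like Emacs leave "~" files behind

def candidates(base_url: str):
    """Yield every predictable-resource-location guess for a base URL."""
    for path, suffix in itertools.product(COMMON_PATHS, BACKUP_SUFFIXES):
        yield urljoin(base_url, path + suffix)

for url in candidates("http://example.com/"):
    print(url)
```

The commercial scanners win here mostly on the size and curation of their wordlists, not on any clever technique.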

Baby steps with web application security scanners

Web application security scanners have not matured much. I guess patent wars and company-buyouts have caused a lot of stagnation over the past year. However, I think the problems may run deeper than just controversy and industry drama.

With AppScan DE and DevInspect as exceptions, the web application security scanner industry is largely filled with technology that has to spider, or crawl, a website.

However, in most cases, scanners do not crawl the application -- only the web server HTML content and the subsequent links. There is a process known as link discovery, where the spider in a scanner will typically find links and follow them. But what if the link information is inside a Flash file, or other part of the application that a typical spider/crawler can't get into?

This is why I like the word "Crawling" to accurately describe the process a web application security scanner goes through when it spiders a site and does link discovery. Sure, the spider can use robots.txt or the Sitemap protocol to get a better picture (or a mostly complete map) of the links the crawler must find and explore. But the big picture -- a complete and accurate crawl of the application -- is far from realized with these tools today.
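To make the limitation concrete, here is a bare-bones link-discovery sketch using Python's standard HTML parser. Anything visible in static markup gets collected; links assembled inside Flash, JavaScript, or Ajax endpoints are simply invisible to it:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href/src attributes from static HTML -- nothing more."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(urljoin(self.base_url, value))

extractor = LinkExtractor("http://example.com/")
extractor.feed('<a href="/about">About</a><script src="app.js"></script>')
print(extractor.links)  # only statically-declared links; dynamically built URLs never appear
```

Every "crawler" is some elaboration of this loop -- which is exactly why content generated at runtime falls through the cracks.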

I also like to use the word "Walking" (i.e. an evolution of "Crawling") to refer to scanning technology that does more than simple link discovery using robots.txt, Sitemap, and/or "grep" type techniques. For an example of a "walker", there is the OWASP Sprajax Project. While limited (as discussed in the book Ajax Security), Sprajax is capable of enumerating call endpoints in an ASP.NET AJAX application (formerly Microsoft Atlas) and fuzzing the inputs it finds.

In order for a scanner to be useful, it also has to be functional. I am not going to purchase a commercial product that is unable to support common web application technology (Ajax is found in over 30% of web applications, while Flash is found in at least 40%). With numbers like these, we're talking about common adoption. This is not a trivial thing when you have to assess many different web applications day-to-day. These technologies are growing at a very fast pace -- and we have to look at other RIA/RCP frameworks that are emerging on the scene.

Over 70% of web applications primarily use CSS, only occasionally using HTML features such as tables. Only 2-3% of popular websites are without CSS. Yet, the web application security scanners do not look into the formatting/presentation layer -- nor do they utilize attack vectors such as injection of HTML/CSS (as opposed to HTML injection of JavaScript, which is the more widely known concept behind XSS).

With these kinds of issues, I prefer open-source tools to costly commercial scanners -- which is why I mention them often. I don't believe in leading with a tool because the tools are often quite poor. There are excellent approaches that are completely different than the typical situation where people jump to a web application security scanner to solve a particular problem.

The best current approaches all seem to revolve around strategy consulting that recommends Fagan inspection as a process-oriented solution to secure coding. Fagan inspection is just the beginning of a secure coding process. Many are referring to this new age of security consulting as "Secure SDLC" or "Security in the SDLC", and the practices of the Microsoft SDL, OWASP CLASP, and Cigital Touchpoints are the usual suspects.

When practices such as inspection and developer-testing are encouraged early in the SDLC (requirements, design, and programming phases), forced at build-time (integration phase), and verified before release (functional, regression, and operations testing phases) -- this is software assurance / software security SixSigma in 2008. Of course, this is the exact model that I suggest in my CPSL "Secure SDLC" process.

Secure SDLC techniques such as secure inspection and unit tests that assert software weaknesses (such as CWE or OWASP T10-2007) don't need to worry about "Crawling" or "Walking". They're already "Running" at full speed.
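As a sketch of what such a weakness-asserting developer-test could look like (render_comment is a hypothetical view function, and CWE-79 / XSS stands in for any weakness on the CWE or OWASP T10-2007 lists):

```python
import html
import unittest

def render_comment(user_input: str) -> str:
    """Hypothetical view function: encode on output, the root-cause XSS fix."""
    return "<p>" + html.escape(user_input) + "</p>"

class TestXssWeakness(unittest.TestCase):
    """A build-time test asserting the weakness (CWE-79) is absent --
    no crawling or walking of a deployed site required."""

    def test_script_tag_is_encoded(self):
        out = render_comment("<script>alert(1)</script>")
        self.assertNotIn("<script>", out)
        self.assertIn("&lt;script&gt;", out)

if __name__ == "__main__":
    unittest.main()
```

Wire a suite like this into the build, and the weakness is caught at integration time -- before a scanner ever sees a URL.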

The TS/SCI Security Team will return with Day 11 of the ITSM vulnerability assessment techniques tomorrow!

SQL Injection Fun v.RIAA

What started as a simple DoS against the RIAA through a SQL injection vulnerability (originally posted to Reddit in tinyurl form):

UNION ALL SELECT BENCHMARK(100000000,MD5('asdf')),NULL,NULL,NULL,NULL%20--

led an attacker to dump their entire database. I sure hope they don't have backups -- part of me thinks they deserve it and wants them to suffer... muwhahaha
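The BENCHMARK() payload above works by making MySQL compute MD5('asdf') one hundred million times, delaying the response long enough to act as a DoS -- the same trick that underlies time-based blind SQL injection. For contrast, here is a minimal sketch of the root-cause fix, parameterized queries, using Python and SQLite (the table and payload are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def lookup(user_id):
    """User input is bound as data, never spliced into the SQL string,
    so UNION/BENCHMARK payloads arrive as one inert literal value."""
    cur = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
    return cur.fetchall()

print(lookup(1))                                        # [('alice',)]
print(lookup("1 UNION ALL SELECT name FROM users --"))  # [] -- the payload matches nothing
```

With string concatenation instead of the `?` placeholder, that same payload would rewrite the query -- which is exactly what happened to the RIAA.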

Day 9: ITSM Vulnerability Assessment techniques

Lesson 9: Yesterday was a bit of a whirlwind, discussing BGP, Whois/RWhois, and the DOM all in one big post. I'll try to keep it short and sweet today.

Arshan Dabirsiaghi (leader of the OWASP AntiSamy Project) commented on yesterday's post regarding how web application security scanners are immature. He thinks they are immature because of the technology -- missing elements such as identifying "the input fields being vulnerable to XSS", with results such as "we got kaput from the scanner". He also said, "the loudest people in the room are the scanner marketing companies".

What do you think?

I agree with Arshan that scanner marketing is out of control in its statements about the technology. When one scanner salesperson called me around Christmas-time, 2006, I told him that the technology just wasn't mature enough. I said that I thought it might be by the end of 2007. Well, that time has come and gone, with not much more to show in the way of technology improvements.

I was never a fan of these tools in the first place. I believe strongly in using them for verifying the results from automated and manual secure inspection. This precludes reliable exploitation or writing weaponized exploits (as much as I list w3af as a useful tool, I also disagree with its approach and end goals). In other words, you don't need BeEF or AttackAPI to demonstrate or perform pivoting in the case of an XSS finding. I would rather spend more time looking for more vulnerabilities, refining techniques, and working on the root cause of the problem -- which is always input validation and input/output encoding.

Part 1: Information assurance vulnerability assessment — Protective measures, file/disk encryption

Millions of records of sensitive or private information have been lost due to missing, stolen, or "borrowed" laptops, tape drives, disks, and other various data-containing hardware.

A search on Etiolated for "Stolen Laptop" shows 7.5M records involved over the past few years they've been tracking statistics. It appears to me that a lot of the reported incidents are due to this type of problem.

Yet, businesses do not have standard images on their laptops and other devices. As Vista sells more into the workplace, it is possible that this will change. Vista includes BitLocker, which is a full-disk encryption (FDE) implementation with varying degrees of success when implemented. Of course, getting access to recovery keys or private keys would break an implementation -- but at least two things are then needed instead of just one.

Any internal threat agent could theoretically copy data off any drive, assuming account access (or escalation of privilege). Again, this is less likely to happen when more than one step is involved. In the future, we'll be looking at other methods to further decrease the likelihood of this scenario occurring, with an eye toward prevention (or at least detection).

Recommendation: Hardware vendors that include software should have the default account already set up for FDE if the OS supports it. Hardening guides should be used to verify. If your company/organization uses standard builds or images, then there are many solutions that can be used for partial or full-disk encryption.

OS vendors could also force FDE on install. This would be really nice.

For Linux laptops and servers, LUKS is the ideal solution. Many will prefer BitLocker on Windows laptops, and Server 2008 should have similar support. For those who haven't upgraded yet, commercial solutions exist -- although I prefer a thin client laptop such as SafeBook.Net.

While Mac OS X has FileVault (and the hardening guides usually recommend using it), you might want to take a look at TrueCrypt support for Mac OS X partial disk encryption.

TrueCrypt is also supported under Windows 2000/XP, although I prefer FreeOTFE. One of the reasons I prefer FreeOTFE is because it is also supported on Windows Mobile devices such as PDA's and PDA phones.

If you want more information on setting up full-disk encryption with LUKS, BitLocker, TrueCrypt, or FreeOTFE -- I suggest books such as Security Power Tools (Chapter 15, which also covers GPG and S/MIME), Windows Vista Annoyances, Ubuntu Hacks (Hack #70), and Network Security Hacks (2nd Edition, Hacks #39 and 33) from O'Reilly Press.
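As a rough sketch of what the LUKS route looks like on Linux (device names and mount points here are placeholders -- luksFormat destroys existing data, so double-check yours before running anything):

```shell
# One-time setup of an encrypted partition (placeholder device /dev/sdb2)
cryptsetup luksFormat /dev/sdb2            # write the LUKS header, set a passphrase
cryptsetup luksOpen /dev/sdb2 securedata   # map the decrypted view to /dev/mapper/securedata
mkfs.ext3 /dev/mapper/securedata           # create a filesystem inside the container
mount /dev/mapper/securedata /mnt/secure

# When finished working with the data:
umount /mnt/secure
cryptsetup luksClose securedata
```

The books above cover the full treatment, including doing this for the root filesystem at install time rather than a single data partition.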

Part 2: Software assurance vulnerability assessment — Cross-site Request Forgeries

Best CSRF attack tools

OWASP CSRFTester, Chris Shiflett's CSRF Redirector, PDP/GNUCITIZEN's CSRF Redirector, CSRF dorks

CSRF can be tested with a variety of different tools. The presentation that goes along with the CSRFTester tool by Eric Sheridan of Aspect Security is an excellent guide. The CSRFTester is a simple Java-based tool.

Some CSRF redirector tools, such as the ones by Chris Shiflett and pdp, seem to work in similar ways, allowing for online testing -- but be careful what gets logged to these sites! The CSRF dorks spawned out of the sla.ckers.org forums, and Ronald picked it up to support a CSRF database project on his website.

My favorite thing about testing CSRF is looking at the level of defenses in use by a web application. Testing CSRF is so easy and simple compared to XSS or SQLi. If CSRF works with no active defenses, you can also assume that XSS and SQLi are possible in a lot of cases. It's a clear sign that no forethought went into security. This provides a rough measurement of a web application's security at ground zero.
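As a sketch of the most basic "active defense" a tester looks for -- a session-bound anti-forgery token (the helper names here are hypothetical; CSRFTester essentially probes for the absence of exactly this):

```python
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # per-application server-side secret (illustrative)

def csrf_token(session_id: str) -> str:
    """Bind the anti-forgery token to the session; a cross-site form can't guess it."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id: str, submitted: str) -> bool:
    """Constant-time comparison avoids leaking the token via timing."""
    return hmac.compare_digest(csrf_token(session_id), submitted)

token = csrf_token("session-abc123")
print(verify_token("session-abc123", token))     # True
print(verify_token("session-abc123", "guessed")) # False
```

The server embeds the token in every state-changing form and rejects requests that come back without a valid one -- a forged cross-site request carries the victim's cookies but not the token.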

When I say that CSRF has "levels of defenses" -- this is also a classic example of how "levels of defenses" can be used to measure web application security testing techniques and methods, including web application security scanners. Romain Gaucher has been giving talks about using "levels of defenses" to benchmark scanners. I saw his first presentation at Verify Conference, but recently he has updated his talk and given it at the HICSS conference in Hawaii. He includes a link to his slides in that linked blog post, as well as a link to NIST SAMATE, which has a focus group on web application security scanners.
