tssci security

Day 4: ITSM Vulnerability Assessment techniques

Lesson 4: We've touched on some of the critical-path ways to assess and protect your infrastructure, including network segmentation and OS/application sandboxing. Often, the weakest area of technology is what you can't segment or sandbox effectively, which is why today we will be covering web applications.

Part 1: Information assurance vulnerability assessment — Web applications

With all the various web applications across any given infrastructure, it's difficult to know where to start. Consolidation of web servers isn't an easy task -- the thought being that you could somehow put all of your web applications behind a giant web application firewall such as ModSecurity running in reverse proxy mode.

The question then arises: which is most helpful for balancing the risk of our web applications -- web application firewalls, enforcing the use of secure frameworks and secure configurations, or secure inspection? Is it some combination of the three? Is there a one-size-fits-all solution?

The answer varies depending on which web applications you have. If you have a large mix of PHP, ASP.NET, ASP classic, Java/JSP/JEE, Ruby on Rails, ColdFusion, and Zope/Django/TurboGears/Pylons/Python, then it's obviously hard to know where to start. HTML/CSS/XHTML, XML/XSL(DTD)/XSLT, Javascript, Actionscript, VBScript/VBA, ActiveX, Flash, and Java Applets complicate this further. It's also very common to house large amounts of CGI written in Perl, Tcl, Python, Unix shell, C/C++, Visual Basic, Applescript, and several other languages you've probably never heard of.

There are many more books written on scaling Enterprise applications with .NET and JEE, but just how popular are these platforms in comparison to PHP, ASP classic, or ColdFusion? According to The State of Web Development 2006/2007, web applications are developed in the following order of popularity: PHP, Classic ASP, ASP.NET 2.0, ASP.NET 1.1, Static HTML, Java/JSP, Perl, ColdFusion, Ruby, and Python. Also interesting is the use of point-technologies such as Flash, Javascript libraries, and Ajax (also listed in order of popularity).

Of the above, the most secure are likely Static HTML, Java/JSP, and ASP.NET 2.0. PHP and Classic ASP require quite a lot of work to secure, so it may be best to concentrate on these technologies first, in addition to watching out for Flash, JS, and Ajax gotchas.

Recommendation: Install Hardened-PHP wherever possible, and front your PHP applications with CORE GRASP and PHP-IDS. Move applications that require medium or high assurance to a framework such as HDIV, which requires JEE developers (it can, however, be used with SpringMVC, Struts2, or Struts1 -- in my order of preference). Utilize Anti-XSS libraries to bring all legacy .NET and Java code up to par if HDIV or the latest .NET frameworks cannot be used.

You'll notice a repeatable pattern whenever I give a recommendation on how to begin the secure inspection of any application: start with bytecode or binary analysis and then determine where the applications are most weak. In the case of Classic ASP, utilize tools such as OWASP DN_BOFinder. Java/JEE bytecode testing is best handled with FindBugs. .NET applications can be tested with FxCop and dotnetids. Bytecode and binaries can be sent to Veracode for further inspection.

PHP applications usually have source code available, especially since most are of the open-source, off-the-shelf variety. I recently read an excellent post on using PSA3 (one of the best open-source web application tools available) and PHPSecAudit to identify web application vulnerabilities in PHP source code. Beyond these tools, also take a look at the PHP Fuzzing Framework (PFF) and put your custom functions to the test under the stress of the PHP interpreter. Commercial scanners such as Chorizo-Scanner also include advanced server-side testing with the Morcilla PHP extension, although you might learn more using CORE GRASP and PHP-IDS during a dynamic/hybrid assessment.
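To give a feel for what these source scanners look for, here is a toy grep-style sketch in Python (an illustration only -- nowhere near the depth of PSA3, PHPSecAudit, or Pixy, which do real data-flow analysis) that flags dangerous PHP sinks and lines where request superglobals reach them:

# Toy grep-style sketch of what a PHP source scanner looks for: dangerous sinks
# and direct use of request superglobals. This only flags lines worth a manual look.
import os
import re
import sys

DANGEROUS_SINKS = re.compile(
    r'\b(eval|system|exec|passthru|shell_exec|popen|mysql_query)\s*\(')
TAINT_SOURCES = re.compile(r'\$_(GET|POST|REQUEST|COOKIE)\b')

def scan_file(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if DANGEROUS_SINKS.search(line) and TAINT_SOURCES.search(line):
                print(f"{path}:{lineno}: tainted input reaching a dangerous sink?")
            elif DANGEROUS_SINKS.search(line):
                print(f"{path}:{lineno}: dangerous function call")

def scan_tree(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".php", ".inc")):
                scan_file(os.path.join(dirpath, name))

if __name__ == "__main__":
    scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")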

Flash, Javascript libraries, Ajax, RIA's, Java applets, and ActiveX components also need to be evaluated for security purposes. Fortunately, there are many open-source tools that simplify this process. Consider taking a look at my older post on 2007 Security testing tools in review, or the ones on Crawling or Ajax security. Specialty applications such as forums, CMS's, Wiki's, and blogs sometimes have their own scanning tools (e.g. I know of ones for phpBB, WordPress, and Joomla).

Also be sure to keep an eye on configuration files. Bryan Sullivan has written a few great articles on securing the configuration of web applications in parts one and two of a series, in addition to earlier work on .NET configuration files. There is more information available on secure web application configurations in the "Web Application Hacker's Handbook", the "Developers Guide to Web Application Security", and the latest Apache Cookbook, 2nd Edition.

Part 2: Software assurance vulnerability assessment — SQL Injection

I was going to make today's part two lesson about XSS, but decided to change it to SQLi when I saw that 70k websites were owned via an automated botnet attack in the past two days.

Best SQL injection attack tools

SQL Inject-Me, SQL Power Injection Firefox Plugin, sqlmap, sqlninja, SQL Brute, sts-scanner, w3af, Wapiti, HttpBee, OWASP SQLiX, SQL Power Injector standalone, MSSQLScan, Paros, Burp Suite, OWASP WebScarab, ISR-sqlget, Scully, FG-Injector, PRIAMOS, Grabber, Bobcat, Watchfire Exploiter, Syhunt Sandcat, gunzip webfuzzer, mieliekoek.pl, wpoison

Best SQL injection attack helper tools

Absinthe, squeeza, SQL Hooker, HackBar, PMD SQL Injection Rules, SQL injection cheat sheets, ExtendedScanner, AppPrint, AppCodeScan, FindBugs, FxCop, RATS, SWAAT, Pixy, Milk, PSA3, PFF, PHPSecAudit, Inspekt, PHP-SAT, Orizon, LAPSE, PQL, SPIKEproxy, ASP-Auditor, PHP-IDS, dotnetids, CORE GRASP, Mod-Security

If you want to check out these tools, going through them in order is the preferred way to learn. There are quite a lot of tools mentioned here, and quite a few of them use very different testing methods. Some testing tools, such as HttpBee, can be distributed, while many others are command line tools. My favorite black-box tools are usually the browser-based ones such as SQL Inject-Me, the SQL Power Injection Firefox plugin (sqlpowerinjector.xpi), and HackBar -- but there are many good command line tools such as sqlmap and sqlninja.
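To see what these black-box tools are automating at their core, here is a minimal Python sketch (an illustration, not a replacement for sqlmap or SQL Inject-Me; the target URL is hypothetical) that appends a single quote to each GET parameter and looks for database error signatures in the response:

# Toy illustration of the basic probe SQL injection scanners automate: append a
# single quote to each GET parameter and look for database error signatures.
import urllib.request, urllib.parse

ERROR_SIGNATURES = [
    "You have an error in your SQL syntax",   # MySQL
    "Unclosed quotation mark",                # MS SQL Server
    "ORA-01756",                              # Oracle
    "pg_query()",                             # PostgreSQL via PHP
]

def probe(url):
    """Tamper each query-string parameter with a single quote and flag SQL errors."""
    parsed = urllib.parse.urlsplit(url)
    params = urllib.parse.parse_qsl(parsed.query)
    findings = []
    for i, (name, value) in enumerate(params):
        tampered = params[:]                       # copy the original parameters
        tampered[i] = (name, value + "'")          # inject a lone single quote
        query = urllib.parse.urlencode(tampered)
        test_url = urllib.parse.urlunsplit(parsed._replace(query=query))
        body = urllib.request.urlopen(test_url).read().decode("utf-8", "replace")
        for sig in ERROR_SIGNATURES:
            if sig in body:
                findings.append((name, sig))
    return findings

# Example against a hypothetical target:
# print(probe("http://testsite.example/item.php?id=1&cat=2"))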

More often than not, securing the database and its access controls is just as important as making sure that all source code uses parameterized queries if you want to prevent these kinds of attacks. Using declarative controls along with an RBAC system can lower the attack surface. Making sure that command shells and out-of-band channels (RDBMS use of DNS, SNMP, et al) are locked down should also be a top priority.

I've worked with developers who think they don't need to use parameterized queries on internal infrastructure. However, because of Web services (SOAP, RPC), second-order injection, and other ways for attackers to get closer to (or inside) the database, all queries need to use prepared statements. Packages such as DBMS_ASSERT from Oracle and ModSecurity from Breach Security claim protection against SQLi -- however, they are not panaceas and should be checked for what they do and don't protect against.
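For developers who need convincing, here is a minimal sketch of the difference, using Python's built-in sqlite3 module purely for illustration -- JDBC PreparedStatements and ADO.NET parameters work on the same principle:

# Minimal sketch: string concatenation versus a parameterized query, using the
# standard-library sqlite3 module for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"   # classic injection payload

# Vulnerable: the payload becomes part of the SQL statement itself.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", rows)   # returns every row

# Safe: the driver binds the payload as data, never as SQL.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # returns nothing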

Day 3: ITSM Vulnerability Assessment techniques

Lesson 3: Over the first two days, we covered securing WiFi, as well as basic software assurance tools to get you started with a web browser and a crawler. This is just the beginning.

Part 1: Information assurance vulnerability assessment — Sandboxing insecure defaults and unnecessary services

Sandboxing goes by many names, such as exploitation countermeasures, trusted paths, secure attention keys, and chroot jails (OS-level virtualization). Ruby, Java, and the .NET CLR provide virtual machines very similar in effect to OS-level virtualization, but at the application level.

It is unfortunate that many operating systems include quite a lot of services, applications, and functionality that aren't required or properly sandboxed from other applications. There are also basic issues such as updating the OS and applications while scanning networks for known vulnerabilities (widely described as vulnerability management).

Laying out kernel options, as well as disk mount options, at build/install time are two major ways of sandboxing applications. One of my favorite ways to reduce the attack surface of Linux is the versatile GRSecurity. GRSecurity must be compiled into the Linux kernel, and it allows Low/Medium/High security settings.

In the low setting, GRSecurity improves a few things, including protection against race conditions on temporary directories, as well as chroot sandboxing. Medium security places restrictions on all sorts of insecure defaults, as well as improving chroot even further. The high security setting of GRSec is the most advanced, and includes the PaX code. PaX is an implementation of ASLR and non-executable memory protections. Some programs break because they require executable memory regions, but PaX can be controlled per-binary with the `paxctl' command line utility.

If you're not familiar with GRSecurity but have used SELinux, then you might already be familiar with access control concepts such as the difference between discretionary and mandatory controls. GRSecurity manages its role-based access control (RBAC) system using the `gradm' command line utility. SELinux can turn a discretionary system into a mandatory access control (MAC) system, hardened by domain and type enforcement. We'll cover access control, sometimes referred to as authorization, much more in later posts.

Windows does have equivalents for these -- at least if you're using Windows XP SP2 or Windows Server 2003. Other versions of Windows may support these concepts a bit differently. Windows cannot do much about mount options, but temporary file directories can be encrypted and the paging file can be cleared on shutdown. Access controls can be locked down further with CORE Force (a strong firewall and access control system from CORE Security).

Other operating systems have some equivalents, such as the BSD/MacOSX ports of systrace. There are even SEBSD and SEDarwin projects that implement stronger access controls in ways similar to SELinux. Spending a few hours on a hardening guide when you install your OS can be extremely worthwhile in the long run. This improves security even further when you're using an install server and standard images.

Recommendation: Deciding when and how to sandbox which applications should be based on measurements from past vulnerability management experience. Using Nessus and patch management (e.g. solutions from MITRE's OVAL-Compatible program) will give you this information over time. I recommend a system to track vulnerability management reports, such as Inprotect or Simpleness.

CVE, NVD, OVAL, and OSVDB information can also lead you to places where you would like to sandbox more functionality, or to find out more about these vulnerabilities. Applications seen as "at risk" can be put through binary analysis to determine the areas that need secure inspection. Instead of using only bugreport and IDA/HexRays/BinDiff/ImmDbg, I suggest moving highly critical (or high-risk) applications to a CWE-Compatible full-solution service such as Veracode. As the CWE-Compatible program expands to include CWE-Effective, medium-risk applications can move to the standard program while high-risk applications move to the more effective solutions.

Running Nessus plus Nikto, as well as using the CIS benchmarking tools, can augment measurement of a vulnerability management program's success. They can help you find out whether you're missing a critical sandbox (or just need to tweak the configuration a bit), or have not hardened your standard images (or have not hardened them in a standard way). Combine these with the various hardening guides found all over the Internet.

If you prefer books, you could also read "Network Security Hacks, Second Edition" from O'Reilly, "The Craft of System Security", or any of the various books on SELinux.

As a final note: whatever you do, make sure you have a safe channel/sandbox for your patch management delivery system. This is why tools such as OVAL need review -- probably multiple third-party reviews (or first-party review through one of the CWE-Effective/Compatible solutions such as Veracode). Good application risk analysis also goes a long way.
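One small, concrete piece of that safe channel is verifying whatever the delivery system hands you. Here is a minimal Python sketch (the filename and expected digest are hypothetical) that checks a downloaded patch against a published SHA-256 checksum before it goes anywhere near an install server:

# Minimal sketch: verify a downloaded patch against a published SHA-256 digest
# before trusting it. The filename and expected digest below are hypothetical.
import hashlib
import sys

EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of("security-patch.tar.gz")
    if actual != EXPECTED_SHA256:
        sys.exit("Checksum mismatch -- do not deploy this patch.")
    print("Checksum verified.")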

Part 2: Software assurance vulnerability assessment — HTTP

Since we already mentioned Nikto today in the information assurance section of this post, I might as well introduce all of the good tools for learning HTTP. If you thought the days of using your browser as your primary security tool were over, they have only just begun. There are various great tools, and most of the best ones are available as browser add-ons.

Best HTTP request tampering tools

Tamper Data, Burp Proxy, Paros, OWASP WebScarab, Hunting Security Bugs' "Companion Content" Web Proxy Editor, OWASP Pantera, w3af, curl

Best HTTP request tampering helper tools

metoscan, Syhunt Sandcat, N-Stealth Scanner Free Edition

You also may want to check out RFC 2616 and RFC 2617 (i.e. the HTTP specifications), in addition to others you may come across. There is also an httpbis Working Group within the IETF, which may even show some detectable activity.

It should be said that parameter (GET) and form-based (POST) tampering makes up 98% of web application security testing. All of the other vulnerabilities/attacks come from one of these two. Ever want to learn the difference between HTTP Splitting and HTTP Smuggling? Then you had best understand HTTP GET and POST attacks first, including, at the very least, the concept of header injection and a working knowledge of ASCII characters.
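To make that concrete, here is a minimal sketch using only Python's standard library (not one of the proxies listed above; the host, path, and parameters are hypothetical) of what request tampering boils down to: send a baseline GET, then the same request with a tampered parameter and header, and compare how the server reacts. The CRLF in the second request is sent URL-encoded, which is the basic probe for header injection / response splitting:

# Minimal sketch of request tampering by hand: baseline GET, then the same
# request with a tampered parameter and header. Host and paths are placeholders.
import http.client
import urllib.parse

HOST = "testsite.example"

def send_get(path, params, extra_headers=None):
    query = urllib.parse.urlencode(params)
    conn = http.client.HTTPConnection(HOST, 80, timeout=10)
    headers = {"User-Agent": "tamper-sketch/0.1"}
    headers.update(extra_headers or {})
    conn.request("GET", path + "?" + query, headers=headers)
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    print(dict(resp.getheaders()))
    body = resp.read()
    conn.close()
    return body

# Baseline request, then the same request with a tampered parameter and header.
send_get("/search", {"q": "widgets"})
send_get("/search", {"q": "widgets\r\nX-Injected: 1"},      # CRLF sent URL-encoded: header-injection probe
         extra_headers={"Referer": "http://evil.example/"}) # tampered header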

Day 2: ITSM Vulnerability Assessment techniques

Lesson 2: We hope that you are enjoying the format of these posts, as well as the content. Yesterday, I talked about how rogue AP's/clients can be scanned for without adding infrastructure or spending active time walking around the office. I also introduced software assurance tools, including most of the popular and best browser-based tools. Browser-based tools can really get you started with learning how to approach testing for web application vulnerabilities, whether you plan on using code inspection or a purely black-box approach.

Part 1: Information assurance vulnerability assessment — Kernel protection, network drivers

Once, while I was performing a full network vulnerability assessment, the client asked me, "What is the worst that can happen from a WiFi attack?" We talked about how easy it is to break WEP and WPA: I mentioned how this could be done using rainbow tables, as well as by injecting traffic to speed things up (or by predicting small packets such as ARP), and we discussed OpenCiphers' use of relatively cheap hardware to perform NSA-grade attacks.

However, this was not the end of the discussion. I then spoke about how some WiFi drivers have had problems, including the FreeBSD 802.11 Management Frame Integer Overflow vulnerability, discovered by Karl Janmar. I made the assumption that many WiFi drivers could be vulnerable to remote root upon boot.

One of the best initial ways to perform a vulnerability assessment on unknown code such as WiFi drivers is binary analysis. These sorts of rough tests can provide information that is not normally available. Tools such as GDB, IDA Pro, OllyDbg, and ImmDbg aren't usable against system drivers at runtime, so other tools such as KGDB, SoftICE, RR0D, and WinDbg must be used instead. Some tools such as bugreport might also be usable, assuming you can have objdump produce a disassembly. Just poking around can produce a lot of useful information on where to start with other kinds of vulnerability assessment, such as using CAPEC to model attack paths or moving on to reverse engineering, static analysis, and dynamic analysis.

Other software such as BinDiff can be extremely useful for identifying areas of code in binaries that change over time. Particularly useful is combining BinDiff with IDA/HexRays to identify weaknesses in WiFi drivers after a patch for a vulnerability is released. Fuzzers aren't usually specific to WiFi, although there is at least one 802.11 fuzzer. The vulnerability research community has mostly adopted scapy for this purpose. I suggest also checking out file2air and UTScapy.
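As a taste of what scapy makes possible here, the following is a minimal sketch (the interface name and MAC addresses are placeholders, and it assumes a card already in monitor mode) that injects beacon frames carrying randomized, over-long SSID elements -- the same class of malformed management frames behind bugs like the FreeBSD one mentioned above:

# Minimal scapy sketch: inject beacon frames with randomized, over-long SSID
# elements at a monitor-mode interface. Interface and MAC addresses are placeholders.
import os
import random
from scapy.all import RadioTap, Dot11, Dot11Beacon, Dot11Elt, sendp

IFACE = "wlan0mon"   # assumes the card is already in monitor mode

def random_beacon():
    ssid = os.urandom(random.randint(0, 255))        # garbage SSID of random length
    return (RadioTap()
            / Dot11(type=0, subtype=8,               # management frame / beacon
                    addr1="ff:ff:ff:ff:ff:ff",        # broadcast destination
                    addr2="00:11:22:33:44:55",        # spoofed source/BSSID
                    addr3="00:11:22:33:44:55")
            / Dot11Beacon(cap="ESS")
            / Dot11Elt(ID="SSID", info=ssid))

if __name__ == "__main__":
    for _ in range(1000):
        sendp(random_beacon(), iface=IFACE, verbose=False)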

WiFi drivers present a unique problem for vulnerability research. Their closed-source nature is often a point of contention, especially in the OpenBSD camp. Great projects such as WVE produce content on wireless vulnerabilities. However, because of the complexity of these drivers and the importance of their maintaining the highest assurance levels, I firmly believe that we need more. I would like to see both a CAPEC-specific wireless attack-path classification (similar to the WASC TC), as well as a WiFi driver weakness program similar to CWE or the OWASP T10-2007. This should be driven by the vendor community, which should provide open standards for pen-testing AP's and drivers.

Theo de Raadt has been a proponent of both open drivers and open documentation. Clearly, both open documentation and source code would allow at least some third-party selective review. Vendors of all WiFi software/hardware should also open their documentation, hardware, and software for third-party selective review. If you are a big client to one of these vendors, please push them to these ends. In order to prove the need, consider fuzzing before purchase.

Recommendation: Use hardened WPA2 to protect your WiFi clients. WPA2-Enterprise is simply too complex for most needs -- consider using HostAP with per-MAC WPA2-Personal PSK's, or another AP that provides such support. Make the SSID and PSK for every AP/client as long and random as possible. Regularly scan Windows WiFi clients using WiFiDEnum from Aruba Networks. For further information, please check out the books "WarDriving and Wireless Penetration Testing" (especially the last section on Device Driver Auditing) and "Hacking Exposed Wireless" (by Johnny Cache and Vinnie Liu).
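If you want a quick way to generate those long, random values, a few lines of Python will do -- a sketch only, with the character set and lengths as reasonable defaults:

# Quick sketch: generate a long, random WPA2-Personal passphrase and SSID.
# 63 printable characters is the maximum WPA2 passphrase length; 32 is the
# maximum SSID length. The character sets are just reasonable defaults.
import secrets
import string

PSK_ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_psk(length=63):
    return "".join(secrets.choice(PSK_ALPHABET) for _ in range(length))

def random_ssid(length=32):
    return "".join(secrets.choice(string.ascii_letters + string.digits)
                   for _ in range(length))

print("SSID:", random_ssid())
print("PSK: ", random_psk())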

Part 2: Software assurance vulnerability assessment — Crawlers and spiders

Best HTTP crawlers

Burp Spider, Paros, OWASP WebScarab, w3af, larbin / FEAR::API, Heritrix, wget, curl, Offline Explorer Pro

Best Javascript/DOM link crawlers

Burp Spider, w3af, scanajax, urlgrep, sts-scanner, CSpider, Syhunt Sandcat

Best Ajax crawlers

OWASP Sprajax, scanajax, w3af, Ruby mechanize, crawl-ajax (RBNarcissus), crawl-ajax (Watir), FireWatir, scRUBYt, Syhunt Sandcat

There's not much more to say about crawling, as I've already said it a million times. These should get you started. Again, the order is important. If you've never crawled an application using Burp Spider, today is your lucky day.

Some of these tools, such as FEAR::API, Ruby mechanize, RBNarcissus, Watir (including offshoots like FireWatir, Watij, and Watin), and scRUBYt, may require some programming knowledge, although if you read my past posts there are many links to get you started. Other tools such as curl provide libraries in several languages (libcurl for C, PHP-CURL, et al) that are also usually worth checking out.

Even if you're only interested in static analysis (which we'll get to next week), the application understanding that comes from seeing how a crawler works against a website is important knowledge.
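To make that application understanding concrete, here is a minimal same-host crawler sketch using only Python's standard library (the starting URL is a placeholder; real spiders such as Burp Spider handle forms, cookies, JavaScript, and robots.txt far better than this):

# Minimal same-host crawler sketch: fetch a page, pull out the href links, and
# walk breadth-first within the starting host. The starting URL is a placeholder.
import urllib.request
import urllib.parse
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=50):
    host = urllib.parse.urlsplit(start_url).netloc
    seen, queue = {start_url}, deque([start_url])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception as exc:
            print("error", url, exc)
            continue
        print("crawled", url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link).split("#", 1)[0]  # drop fragments
            if urllib.parse.urlsplit(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

crawl("http://testsite.example/")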

Day 1: ITSM Vulnerability Assessment techniques

Lesson 1: These techniques come in two parts: 1) information assurance strategies, and 2) software assurance tools. My feeling is that vulnerability assessments in IT environments are typically not done strategically/operationally enough (relying too much on tools and point-and-click scanners), while in IT dev shops they are not hands-on enough (or nobody knows where to start).

Part 1: Information assurance vulnerability assessment — Network segmentation, physical

This is a bottom-up strategic approach using the OSI model. Start with Layer-1 and move up to the application layer. Physical security is extremely important. Host all computers at a data center and utilize thin clients locally, if possible.

How about the network? Keep network ports off by default and check/verify structured cabling. The largest problems here are rogue AP's and other WiFi-related security issues. WEP is the worst of these, but WPA-Personal can also present problems when weak passphrases are used. This time last year, TJX had a data breach that started with an attack on WEP.

However, RF is all around us and available in many devices. Not only do rogue AP's present a physical-layer network segmentation problem, but clients do as well. Clients are devices such as PDA's, PDA phones, and anything else with both a radio and software. Other technologies such as copper wire, lasers, and infrared can also carry network traffic across segmented networks via the physical layer.

Scanning for all of these devices is not easy. Even using Kismet and BTScan, along with advanced tools such as Wi-Spy, it can be impossible to perform such scanning at all times. There aren't enough good vulnerability assessment tools for brute-forcing passwords along with scanning, and this activity takes even more time. When was the last time you made sure that BTScan was checking for unsafe OBEX passwords?

Recommendation: Vendors (secure AP vendors especially) should provide AP's that present false AP information for WiFi, Bluetooth, IrDA, and possibly other common RF technologies. They should present a captive portal stating that connecting to this AP is against corporate policy, that the connection is being monitored, and that administrators have been contacted. Integration with SIEM is ideal to back up these claims.

If your WiFi vendor's solution can also scan employee devices, it should detect rogue clients (e.g. Blackberrys, iPhones, Windows Mobile devices, etc.). Again, integration with SIEM technology is an ideal way of being notified of a possible breach. Checking to see whether packets can loop between network types is a great way of detecting rogue AP's and clients, but be careful how it's implemented: all such network traffic can be locally queued and/or blocked.

When a vendor can't provide a solution, you may want to roll your own. I suggest Soekris boards, CM9 miniPCI cards, and Pyramid Linux.

I'll talk more about the software side of assessing radios on Day 2. If you'd like more information, please check out the books "Blackjacking" by Daniel Hoffman, along with the infamous "Wi-Foo" and "Hacking Exposed: Cisco Networks" by the Arhont team.

Part 2: Software assurance vulnerability assessment — Browsers and extensions

Best browser tools

bookmarklets, Firefox's Tools->Page Info (with View Cookies), Nikhil's Web Development Helper, Cooxie, Web Developer, FireBug, Microsoft Script Debugger, DOM Inspector, InspectThis, Cert Viewer Plus, HOOK, FlashTracer, XPath Checker, XPather, View Source Chart, viewformattedsource, UrlParams, IE Developer Toolbar, HttpWatch Basic, TamperIE, Tamper Data, Modify Headers, LiveHttpHeaders, Header Monitor, PrefBar, Technika, Fiddler, FireBug Lite, JS Commander, VBScript, Applescript, about:config

Bookmarklets are the best browser tools because they are cross-browser, cross-OS, and multi-attack-functional. I use them in IE7, Firefox 2, and Opera 9.

I listed most of the other tools in a rough order of importance. Feel free to explore them in this order. Some are Firefox-only, and some are IE-only. Some are external to the browser but had to be listed regardless (e.g. Fiddler, FireBug Lite, and JS Commander). I've listed VBScript and Applescript because they can be used to control the browser. In my past blog posts on Why crawling doesn't matter, I talked about similar ways of "driving the browser" and called these "browser-drivers".

I am not going to spend any time in the near future on the internals of bookmarklets or browser add-ons, although I may touch on some of these other tools when it comes to specific attacks. The point here is to introduce you to tools which you may not have used or heard of. I would really like to leave you with further information on bookmarklets, so here are a few links to RSnake's, Awardspace, and Squarefree. Some of my favorites are: Find Redirects, Show JS Vars, generated source, view cookies, netcraft, Alexa, http headers, and Edit Cookies. I have taken the code from Ajax Security in order to compose a "HOOK-lite" for Javascript function monitoring.

// Enumerate every function defined on the window object and display the list.
var ret = "";
for (var i in window) {
  if (typeof(window[i]) == 'function') {
    ret += i + " | ";
  }
}
alert(ret);

You can paste the above into Technika and click "Run". Technika requires Firebug and Firefox.

OWASP Hartford

Now that I'm back in the Connecticut area, the best thing happened! James McGovern has started the Hartford OWASP chapter. The first meeting is set for Thursday, February 28th, with opening remarks beginning at 5:30pm. The agenda for the night is as follows:

Future meetings will feature speakers:

We'd love to see people attend from companies in the area... such as Aetna, Gartner, General Electric, Northeast Utilities, United Technologies, Lincoln, Prudential, ESPN, Bristol Myers Squibb, IPC, Foxwoods, etc.

For more information about the chapter, see the Hartford OWASP chapter page and mailing list information.

