tssci security

Security and safe browsing for Firefox

You installed Firefox. How do you make it more secure for daily use? How do the Mozilla developers ensure that they are doing all the right things? How do you safely browse the Internet?

These are not easy questions to answer, and some of the answers will be system/OS-dependent.

Security functionality in Windows versions of Firefox

Using LookingGlass.exe, one can see a few issues with the Firefox 3 beta on Windows Vista or XPSP2.

Clearly, some Firefox binaries (executables and DLLs) are safer now that they support NX, and as LookingGlass.exe shows, Firefox 3 is likely much safer overall than Firefox 2 (although the new functionality it adds must also be taken into account).

I have already seen a few traversals in Firefox 3, although according to the DVLabs PWN2OWN competition at CanSecWest, "simple directory traversal style bugs are inadequate". In other words, they're only good in the real world, not in a hacking competition. Just like XSS and CSRF, right?

Almost anyone can get the Firefox 3 betas to crash, which also worries me. Yes, there's a lot more use of some protections (e.g., NX), but not others (ASLR would be nice, Mozilla!).

Take a look for yourself at this output from `dumpbin.exe /headers firefox.exe':

100 DLL characteristics NX compatible

If ASLR were enabled, the DLL characteristics field would also have the 0x40 (dynamic base) flag set, so the line would read "140 DLL characteristics" -- or simply "40 DLL characteristics" for a binary with ASLR but without NX. Of course, this hardly matters to those who are not running Vista, but who wants to wait until Firefox 4 for this functionality?
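For those who would rather script this check than eyeball dumpbin output, here is a minimal sketch using the third-party pefile module (an assumption -- any PE parser would do); the flag values are the standard PE DllCharacteristics bits:

```python
import pefile  # third-party: pip install pefile

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR (dynamic base)
IMAGE_DLLCHARACTERISTICS_NX_COMPAT = 0x0100     # NX/DEP compatible

# Point this at the firefox.exe you want to inspect.
pe = pefile.PE("firefox.exe")
flags = pe.OPTIONAL_HEADER.DllCharacteristics

print("DLL characteristics: %03x" % flags)
print("NX compatible:       %s" % bool(flags & IMAGE_DLLCHARACTERISTICS_NX_COMPAT))
print("ASLR (dynamic base): %s" % bool(flags & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE))
```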

Safe browsing with Firefox

I browse with Firefox using a separate profile for each web application that I use. By setting the environment variable MOZ_NO_REMOTE=1, or by running Firefox with `-no-remote', multiple profiles can be created, named, and run individually as separate processes.
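Here is a rough sketch of the kind of per-application launcher this enables (the profile names are just examples, and the profiles are assumed to have been created beforehand with `firefox -ProfileManager'):

```python
import os
import subprocess

# Launch a separate Firefox process for each named profile. MOZ_NO_REMOTE=1
# (or the -no-remote switch) keeps each new invocation from attaching to an
# already-running Firefox instance.
os.environ["MOZ_NO_REMOTE"] = "1"

for profile in ("mail", "banking", "browsing"):  # example profile names
    subprocess.Popen(["firefox", "-P", profile, "-no-remote"])
```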

Additionally, NoScript can protect against some URI attacks and some XSS. Most of these protections are blacklist-based, but NoScript helps most when Javascript, Flash, PDF files, Silverlight, Java applets, and JAR files are all whitelisted only on a case-by-case basis.

Most people will want to use cookies, and these cookies can be edited to be secure using Add N Edit Cookies. They can be allowed on a site-by-site basis using CookieSafe. One can also build multiple "cookie profiles" using CookieSwap. By using multiple Firefox profiles, each profile can be limited to only the cookies needed for its particular web application. For example, I can use Google's search functionality in one profile (with no Google cookies allowed), while a separate profile allows Google top-level domain cookies so that I can log in to and use GMail.

I think there are other URI abuse issues that can be prevented. Using the guidelines at ush.it in a blog post called Clientside security: Hardening Mozilla Firefox, I have set up my profiles to globally deny new URIs, as well as explicitly whitelist only http, https, ftp, and javascript.
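As a sketch of what that hardening can look like, the snippet below appends the relevant network.protocol-handler prefs to a profile's user.js (the profile path is a placeholder, and the exact pref list is my reading of the ush.it guidelines -- double-check the names against your Firefox version):

```python
# Append protocol-handler hardening prefs to a Firefox profile's user.js.
PROFILE_DIR = r"C:\path\to\firefox\profile"  # placeholder path

prefs = {
    "network.protocol-handler.external-default": "false",  # deny unknown external URIs
    "network.protocol-handler.expose-all": "false",        # do not expose handlers by default
    "network.protocol-handler.expose.http": "true",        # explicitly whitelist these schemes
    "network.protocol-handler.expose.https": "true",
    "network.protocol-handler.expose.ftp": "true",
    "network.protocol-handler.expose.javascript": "true",
}

with open(PROFILE_DIR + r"\user.js", "a") as f:
    for name, value in prefs.items():
        f.write('user_pref("%s", %s);\n' % (name, value))
```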

By default, Firefox provides some insecure and unsafe features. Automatic form filling is one such feature, including saving passwords for websites. I even think that some Firefox 3 features such as "safe browsing" are not, in fact, "safe" -- and I turn them off. Most of the URLs used to fetch the "safe browsing" information don't even use SSL!

I also tweak many of Firefox's default Javascript permissions. All of the `dom.disable_window' about:config settings are set to "true", so scripts cannot raise/lower windows, disable/replace context menus, hide the status bar, or change the status bar text. Other features such as Java (security.enable_java) and IDN (network.enableIDN) are also turned off (i.e., set to "false") unless a particular web application needs them.
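A quick way to audit a profile for these settings is to parse its prefs.js; this is only a sketch (the pref list below is a small subset of the settings mentioned above, and prefs.js only records values changed from the defaults):

```python
import re

PREFS_JS = r"C:\path\to\firefox\profile\prefs.js"  # placeholder path

expected = {
    "dom.disable_window_flip": "true",
    "dom.disable_window_move_resize": "true",
    "dom.disable_window_status_change": "true",
    "security.enable_java": "false",
    "network.enableIDN": "false",
}

# prefs.js lines look like: user_pref("name", value);
current = dict(re.findall(r'user_pref\("([^"]+)",\s*([^)]+)\);',
                          open(PREFS_JS).read()))

for name, want in expected.items():
    have = current.get(name, "<default>")
    print("%-40s want=%-6s have=%-10s %s"
          % (name, want, have, "OK" if have == want else "CHECK"))
```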

Security assurance for Firefox

Assurance is the critical missing piece. This is why some people that I know use w3m, curl, and/or links/elinks to access web applications. Full assurance would mean that every line of code has been verified as secure by a significant majority of security code reviewers/testers in the world. This may never be possible.

Looking at the source code for Firefox seems rather daunting, but I would be willing to bet there are at least a handful of people dedicated to this cause. Surely, most of them work for Mozilla, and are therefore empowered to do something about it. However, when vulnerability research from Michal Zalewski and others pops up -- often unannounced, with full disclosure, and on a semi-regular basis -- it is hard to envision a future where Firefox is secure to the same degree as software such as qmail.

The problem is not just the size of the code, but how often it is changed. There have been almost 15k changes (and 2.5 MLOC) in a little over 2 years. 900 changes were made between Firefox 3 beta 3 and beta 4 alone! This is the primary problem facing any discerning security code review of a project like this. How often do you find yourself updating?

The same issues have plagued Internet Explorer for years, which is why these two browsers have become the vehicles of choice for any would-be adversary. The only way to stop this madness is to stop changing code and to stop adding features. It worked for Microsoft when they implemented their SDL -- at least, it worked for their other products.

I'm not sure what effect the size and rate of change in IE had on its security, or whether the SDL-forced change moratorium worked sufficiently. There are, and will continue to be, security-related bugs in IE for quite some time. Of the past 9 patch events that included Vista vulnerabilities, 7 were at least partially related to IE. Of the 36 vulnerabilities, almost 20 were related to IE.

Besides suggesting a change moratorium on the Mozilla source code (which I still contend is a good idea), I can only recommend one other strategy to improve this situation. I suggest better unit/component-level testing of Mozilla code that asserts each defect's fix -- in the same way that I have made recommendations for the CPSL process I described back in December.
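To make that concrete, here is a minimal sketch of the kind of unit test I mean -- a regression test pinned to one specific defect and its fix. The bug number and function here are made up purely for illustration:

```python
import unittest

def resolve_chrome_path(requested):
    """Hypothetical fixed routine for (made-up) bug #12345: a directory
    traversal in a chrome URI handler. The fix rejects any escaping path."""
    if ".." in requested.split("/"):
        raise ValueError("path traversal attempt")
    return "/chrome/" + requested.lstrip("/")

class TestBug12345Regression(unittest.TestCase):
    def test_traversal_is_rejected(self):
        with self.assertRaises(ValueError):
            resolve_chrome_path("../../boot.ini")

    def test_normal_path_still_resolves(self):
        self.assertEqual(resolve_chrome_path("browser/content.xul"),
                         "/chrome/browser/content.xul")

if __name__ == "__main__":
    unittest.main()
```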

Efficiencies in refactoring the Firefox source code might also help here and there. I don't think that the Mozilla developers use Microsoft's Visual Studio, but their development environment could probably stand to use something like ReSharper. I have not seen a C++ equivalent, and would be interested in seeing other tools in this class.

I've heard a lot of analogies thrown around in the security world year-over-year. Here's a new one for you to think over:

Like an open front door, web browsers are the most common entry point for attackers. Many spirited vulnerability researchers additionally contend that the web browser is the most powerful weapon in an attacker's arsenal. Until we can close and lock this door, the rest of our protections will also continue to fail.

Security in the SDLC is not just code review

Let's take some time here to discuss what "secure code review" is and what it is not. I see a lot more people talking about code review. Many people view it only through the lens of the PCI DSS compliance standard, which almost pits code review against the web application firewall.

David Rice quoted a Gartner study in his blog post, Insanity: 75% of Security Breaches Due to Flaws in Software.

A Gartner study indicates that 75% of security breaches are due to flaws in software...Do you think we would see a significant decrease in the number of data breaches and records stolen if we shifted our spend to actually writing proper code and protecting data at the source instead of at the edge? I think it is time we gained a few IQ percentage points and stopped the insanity.

However, he also thinks that stopping vulnerabilities at the source is, *"painful, difficult, complicated, and troublesome"*. But is source code review really all of these things?

Sachin Varghese of the Plynt blog doesn't seem to think so. In his blog post, Comply with Section 6.6 of PCI DSS with code reviews and get more, Sachin explains that thousands of lines of code -- even millions of lines of code -- can be reviewed for security properties within a matter of weeks.

In this four-step process, Sachin lists how to "get more bang from your code review bucks":

  1. Do rapid risk assessments to determine which 'money' application should be code reviewed (*Choose the right apps*)
  2. Use tools / processes that easily find the meatballs (vulnerable code classes) in your thousands or millions of lines of code (*Choose the right code*)
  3. Analyze vulnerabilities across applications and vendors to identify recurring issues, across-the-board solutions, and effective developer training modules (*Address root causes*)
  4. Look at trends like software-as-a-service, outsourcing, and offshore centers to reduce your code review costs (*Find a good deal*)

I also offered up a different perspective in an argument with the F5 DevCentral blog on the importance of rewriting code.

No one development shop is like any other development shop. This is where operations people and managers severely fail: they don't understand why people-and-process approaches typically don't work the same across quality engineering, operations, and their red-headed step-child (or, more likely, their cranky, senile old grandfather) -- development.

Some development shops use Test-Driven Development, some use a slight variation called Behavior-Driven Development (and many use neither). Still others skip the programming phase almost entirely by doing Model-Driven Development. In that last case, applications are code-generated from a model built during the design phase.

Checking the source code is fine, but in some cases -- checking the models, the designs, and the requirements is often a better approach.

To increase software assurance, I prefer to look at three areas of significance:

Others, such as our friends at Aspect Security, think that security in the SDLC is about giving developers secure components (and making those components popular in development). Instead of forcing a dev shop into "secure frameworks", any framework can utilize a few well-modeled, well-designed, and well-tested components. Some of the better projects coming out of OWASP are: AntiSamy (which recently released version 1.1!), CSRFGuard, and the uber-project -- ESAPI.

Well-designed, reviewed, and tested code is great! However, security does not last a lifetime. What is secure today may not, in fact, be secure tomorrow. Maintaining applications over time also means having a plan for their retirement. When an organization decides to acquire software off-the-shelf (commercially, or by downloading open-source components such as ESAPI) -- there should be some sort of "Consumer Reports" that they can go to.

OWASP is a good place to go today for information about secure web applications (including secure code review). I would like to see a commitment from OWASP to be a long-term, official authority for this type of information. CERT's Secure Coding and Secure Coding Standards websites are excellent sources of information for general applications (especially those written in C/C++). A similar commitment from CERT might also be in order.

My suggestion is to both test and inspect/review source code in order to obtain results as close to a Gold standard test as possible. Open-source components can be shown as "percentage CWE-free" with "percentage of code coverage". Mapping this to a five-star rating system isn't going to take a lot of time or effort, and certainly this will eventually allow for more detailed research and statistics.
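As a rough sketch of how simple such a mapping could be (the weights and thresholds below are invented purely for illustration):

```python
def star_rating(pct_cwe_free, pct_coverage):
    """Map "percentage CWE-free" and "percentage of code coverage" (both
    0-100) onto a one-to-five star scale. Weights and thresholds are
    hypothetical."""
    # weight freedom-from-known-weaknesses slightly higher than coverage
    score = 0.6 * pct_cwe_free + 0.4 * pct_coverage  # 0..100
    return max(1, min(5, 1 + int(score // 20)))

print(star_rating(pct_cwe_free=95, pct_coverage=80))  # -> 5
print(star_rating(pct_cwe_free=60, pct_coverage=40))  # -> 3
```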

Firefox 3 first impressions

I've downloaded and used the Firefox 3 beta browser for the past few months and wanted to report on the latest of what works and what doesn't. Note that I had to install Nightly Tester Tools to get many of these extensions to work. I am also now using the Classic Compact theme, which has some minor annoyances but is better than most alternatives.

Working:

Not working:

Required maxVersion modification (some have this updated to 3.0b3 but not 3.0b4!):

Overall, I think that Firefox 3 is much better on memory, faster, and more useful. However, it may not be any more secure than Firefox 2. Stay tuned for more information on creating a secure Firefox profile. I'll be looking at the security of Firefox 3 beta 4 and comparing it to the earlier Firefox 3 betas, as well as to Firefox 2.

Day 13: ITSM Vulnerability Assessment techniques

Lesson 13: Just this week, in lessons 12 and 13, we've covered -- at least partially -- how to significantly reduce risk and vulnerability in system and network infrastructure. We touched on protecting applications, but we weren't able to go into specific detail about handling the attacker's path of execution -- only the layers of defense provided through functionality.

At the system or OS level, there are functional defenses such as hardware stack protection and ASLR. Not every operating system includes these sorts of protections, and not all do them correctly. Sometimes they can even be subverted. I hear comments all the time about people trying to subvert GRSecurity or Vista ASLR. We know it's possible, it's just a matter of time and resources.

Today, we're going to cover how to decrease the footprint that malware or shellcode/exploits can reach on our systems, and therefore on the underlying applications.

Part I: Information assurance vulnerability assessment — Protecting system access

Back in another place, another time -- firewalls did not exist and everyone logged into machines using the Berkeley r-services. This led to complications such as the AIX "rlogin -froot" fiasco (which, just over a year ago now, was found in the latest Solaris telnet, but I digress). Add to that the fact that nearly everyone was on a public IP.

Everyone on a public IP with every service running -- all unencrypted. We have come a long way, and maybe we should even pat ourselves on the backs. However, success in security is measured in the number and criticality of breaches -- and if you compare where we are now to where we were back then: we're failing!

In the mid-1990s, Cisco Systems purchased a company claiming to have the first Network Address Translation (NAT) device -- the PIX (or Private Internet Exchange). That company's name was Network Translation, Inc., and you can read about it all in the Wikipedia article on the Cisco PIX. Thus began the era in which Cisco would coin RFC 1918 as a "security feature".

Unfortunately, in my mind, Cisco was too late. SSH was also released that year, and any security professional in the know built their own NAT solutions where the only external services were SSH and maybe FTP and HTTP. It's funny that over the years, the only application/network service that I've been owned through was SSH -- and it's doubly ironic that I used OPIE (one-time passwords) to access my account through SSH, as well as to su to root.

However, the reason why I was owned, and the reason that many people get owned, is not a technical vulnerability (although at some point there had to be at least one software weakness exposed), but a simple matter of trust. While I was using SSH and OPIE to access a machine that I shared with another trusted administrator (who also used OPIE to access his account and su to root), it turns out that this other administrator had made a special exception for a certain female friend of his. That person logged in from a terminal located at a certain commune that was built and occupied by a reputable hacking group. The ssh binary she used contained a backdoor, and the rest was history. Or, well, our machines were history.

In the late '90s (when this sort of thing went down regularly), attacks were focused on gaining system access to the server, and the exposure was mostly on the server side. In the case above, it was the client that was backdoored -- but that wasn't the primary focus.

Today, client-side exploits are the primary focus, especially along with backdoors and botnets. The trust models get worse as we keep adding file and protocol handlers to our browsers and IM clients. AV software adds more file and protocol handlers to detect exploits in file and protocol handlers, making it more vulnerable to attack as well. The attack surface and trust relationships built into modern software are at a peak of risk and exposure.

Recommendation: As an introduction to the below recommendations, you might want to check out my older post on Client-side attacks: protecting the most vulnerable.

First of all, protection at the local area network (LAN) must come in the form of SSL VPN (with both client and server side certificates correctly configured). You get your DHCP, you get your DNS locally. The DNS server should only have an entry for the DNS server, the DHCP server, and the SSL VPN server. I prefer open-source software that is code reviewed before deployment. Again, Pyramid Linux or Vyatta on embedded hardware are logical starting-points. For an SSL VPN server, I recommend checking out SSL-Explorer.

Once connected via SSL VPN, clients should be on another LAN that has its own DHCP and DNS server, plus two proxy servers (preferably something open-source such as Squid). None of these DNS servers has full Internet resolution -- only the forward and reverse names for the local servers. Of the two proxy servers, one can connect to the Intranet, or local services, while the other can connect to external sources such as partners and the Internet.

Each Squid server can have access to the full Intranet or Internet DNS. This way, the clients must set their web browser proxy (or pick it up automatically). Yes, this requires setting your proxy differently depending on whether you plan on accessing Intranet or Internet websites. I consider this a bonus.
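One way to "pick it up automatically" is a proxy auto-config file; here is a sketch that generates a proxy.pac sending internal names to the Intranet Squid and everything else to the external Squid (the host names and domain below are hypothetical):

```python
# Generate a simple proxy.pac for the split-proxy setup described above.
PAC = """
function FindProxyForURL(url, host) {
    // internal names go to the Intranet-facing proxy
    if (dnsDomainIs(host, ".corp.example.com"))
        return "PROXY squid-intranet.corp.example.com:3128";
    // everything else goes to the Internet-facing proxy
    return "PROXY squid-internet.corp.example.com:3128";
}
"""

with open("proxy.pac", "w") as f:
    f.write(PAC)
```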

Jay Beale gave talks on ClientVA at Toorcon and Shmoocon that involved setting up RSnake's Mr. T (Master Recon Tool) to check browsers for versions and plugin versions. Additionally, the Squid proxies can have every URL (or even POST operation) whitelisted using an open-source package such as Whitetrash.

Mr. T and the Squid proxy should be able to verify versions of QuickTime, Windows Media Player, Flash, and even Adobe Acrobat Reader (or other PDF viewer with a file handler / browser plugin). It won't find out which versions of MS Office or OpenOffice are installed. This is why some people recommend running MOICE, the Microsoft Office Isolated Conversion Environment, instead of the full Office version. Before I open files in MOICE and convert them into Office 2007 XML format, I also usually scan them with OfficeCat from the Snort project.

Additionally, users on WiFi will benefit from the SSL VPN immediately, but they could be at risk if their drivers are vulnerable to a kernel-based attack. Assessment software such as WiFiDEnum will check for these types of known vulnerabilities in local drivers.

Part 2: Software assurance vulnerability assessment — File inclusion attacks

Best {Remote|Local} file inclusion attack tools: dorkscan.py, FIS, WebSpidah, Wapiti, Pitbullbot, RFIScan.py, Syhunt Sandcat

Best {Remote|Local} file inclusion attack helper tools: AppPrint, AppCodeScan, Inspekt, RATS, PHP-SAT, PHPSecAudit, PSA3, PFF, SWAAT, ASP-Auditor, LAPSE, Orizon, Milk

File inclusion primarily affects PHP, but the concepts can easily be extended to remote script execution in JSP or ASP pages, both HTML-based and driven by their scripting languages. There is quite a bit of information about file inclusion and remote scripting attacks available in the Web Application Hacker's Handbook, from Portswigger.

Portswigger also recently posted about CSRF and threat ratings on his blog, and he'll be teaching (from his book?) at Black Hat Europe next week.

Day 12: ITSM Vulnerability Assessment techniques

Lesson 12: Yesterday, I shamelessly recommended ditching all commercial networking gear. In the same breath, I also made several Cisco configuration recommendations. This is just the way that I work. The idea is that network appliances increase risk, but at the same time -- they also allow you to connect to other networks.

Today we're going to examine the same issues at the host, or system (or OS) level.

Before we do that, I wanted to add a few notes to Day 11. First of all, it's not all about Cisco IOS. Juniper Networks has recently begun to shift features from ScreenOS into JunOS. Certainly other vendors are going to innovate, but be careful to assess each platform as new features are added.

There has been some buzz on the blogs lately about BGP, which is very good for business. Most IT/Operations people don't understand how the Internet works, which is a crying shame. While some security professionals tend to read blogs such as Arbor Networks and The Hoff, it takes a few comments from Richard (and hopefully myself) to point you towards Renesys. The commentary from the YouTube-Pakistan incident has spurred lots of other interesting conversation.

My take on the YouTube-Pakistan incident follows. I woke up on Saturday and saw some people talking on the NANOG mailing-list about YouTube. I messaged my friends who are network engineers at YouTube. They were already aware of the problem and were attempting to announce their own MSRs (my abbreviation for "more-specific routes") to override the ones from Pakistan Telecom. However, their providers were filtering these MSRs because they were more specific than /24s. Some NSPs filter routes depending on which classful routing system they fit into, but that's a separate argument.

There are many things you can do to prevent this sort of issue from happening to you. The first step is to make sure that you've hired BGP expertise in-house, and the second step is to listen to them. Finally, some sort of assurance is in order, so try to find an assessor who is qualified to check BGP configurations.

Part I: Information assurance vulnerability assessment — Protecting the systems

ZOMG! Someone could subvert your firewall and network-based IPS! What do you do?

Don't panic. Everything is going to be ok. That is, unless you're running anti-virus software on your clients. Do your clients use a web browser to connect to the Internet? Do your users open PDFs, DOCs, XLS files, and PPTs? You don't let them install IM clients, do you?

Ok, so that's nearly everybody in the corporate world. Client-side exploits are much more real now than they ever were before. Sure, viruses/worms can spread from file shares. Sure, you can read an email and get owned. However, the most dangerous software is your client-side applications and the worst part of it comes from how your users use them on a daily basis.

Not only that, but you have twenty thousand hosts, half of which were patched last July -- and most of the other half patched only because you purchased them new at the end of September. Certainly some hosts are fully patched to date, but only for the people who really are "in the know" -- which turns out to be less than 1%.

At first, I would say: BigFix, Lumension Security, or ConfigureSoft to the rescue, right? Maybe Skybox or RedSeal could help? Maybe you can scan and patch using MBSA and Microsoft SUS? Sure -- all of these will go a long way toward solving the problem you're in. However, there has got to be an easier way. There has to be something everyone can do to keep out of this mess in the first place.

Recommendation: Using psexec with MBSA/SUS and/or BigFix -- you can get everything patched (except possibly Novell systems and some laptops) and verified within 30 days if you have a commitment from IT and a good change control process.

Once you're patched, then you need to measure your rate of patching. Can you set up Windows automatic updates to install on some critical machines between 2 and 3 PM PST every Tuesday? Yes, you probably can in most cases. Microsoft does test patches before they release them live to the world on Patch Tuesdays.
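For domain or standalone XP boxes, that schedule comes down to a handful of Automatic Updates policy values in the registry; here is a sketch (Windows-only, run as Administrator, and treat the day/hour values as examples -- the hour is the machine's local time, so adjust for your own window):

```python
import winreg

AU_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_PATH) as key:
    # 4 = automatically download and schedule the install
    winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 4)
    # 3 = Tuesday (0 = every day, 1 = Sunday ... 7 = Saturday)
    winreg.SetValueEx(key, "ScheduledInstallDay", 0, winreg.REG_DWORD, 3)
    # 14 = 2 PM local time
    winreg.SetValueEx(key, "ScheduledInstallTime", 0, winreg.REG_DWORD, 14)
```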

Maybe if you removed all of the extra crap on your Windows installs, then users and system administrators wouldn't have problems installing the base Microsoft patches. Minimize the applications that your organization is required to run.

Once you have a "patch strategy" and "patch metrics", it's time to remove your runtime anti-virus solutions. Yes, I said it. Remove them. You can always set up automated network-based or USB-based virus scans when IT encounters one-off problems. Your AV hurts you more than it helps you at this point. Runtime AV is not a good blanket control -- it's a compensating control for "lack of near-perfect, high-quality patch management".

However, there are things you can do to increase the security of your systems even more. Besides "Install Vista" or "Install the latest SuSE/Ubuntu/CentOS with AppArmor, GRSecurity, or SELinux", I can provide some additional suggestions for those of you stuck where I figure most organizations are: in XPSP2-land.

In XPSP2-land, you can turn DEP/NoExecute to AlwaysOn. This means that Data Execution Prevention will always be on. It could break things, but it probably won't. You could always be evil and only enable it across all systems right before the third-party pen-testing team comes in to verify you for that PCI-DSS, S-OX, HIPAA, or BITS Shared Assessments audit (and then turn it off when they leave).
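On XP SP2, the DEP policy lives in boot.ini; a quick sketch for checking whether a box is already set to AlwaysOn (the expected switch is /noexecute=AlwaysOn):

```python
# Check whether boot.ini already forces hardware DEP on for all programs.
with open(r"C:\boot.ini") as f:
    boot_ini = f.read().lower()

if "/noexecute=alwayson" in boot_ini:
    print("DEP is set to AlwaysOn")
else:
    print("DEP is NOT AlwaysOn -- add /noexecute=AlwaysOn to the boot entry")
```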

XP Lite could be used during OS imaging to permanently remove Internet Explorer, and permanently add Firefox 2.latest (or 3 when it comes out). Imagine installing NoScript on all Firefox instances, but configuring it to a default mode where first-party (2nd-level base domain) Javascript is allowed. Add a second profile that doesn't have NoScript installed, and have IT replace the default icon with the unprotected Firefox for users who simply "don't get it". I guess if Internet Explorer 7 is installed, the SiteAdvisor and/or NetCraft Toolbar could also be used to similar effect, but this is less trustworthy.

Another benefit to running Firefox on XPSP2 over IE7 is that DieHard-Software.org can be used if the processor supports a no-execute bit (e.g., the AMD NX-bit or Intel XD-bit). DieHard-Software.org appears to use userland hooking in the same way that most host-based IPS works (which means that it isn't very good), but it's free and better than nothing. DieHard-Software adds another layer of DEP/ASLR protection to the browser, which is the application that usually needs it most. If you can use hardware DEP, then you can also use DieHard-Software.

Sure, if the processors don't support an XD-bit or NX-bit, you're stuck with Software DEP and no ability to run DieHard-Software. However, very few organizations are going to upgrade all of their laptops just to run a feature that nobody has ever heard of. Don't take this as an excuse not to set /NoExecute=AlwaysOn, because Software DEP is better than no DEP at all.

This brings me to another interesting point: don't buy laptops or desktops. If your organization doesn't have any laptops or desktops, then these machines cannot get owned. Consider purchasing thin clients to replace desktops or SafeBook.Net to replace normal laptops. Users can instead use Windows Server 2003 (64-bit is preferred because it has additional security protections and must run on an NX or XD enabled chip) or the new Windows Server 2008 -- but as a terminal services user. A boot server is required, and this could be 2X, NoMachine, or even the free LTSP on an Ubuntu server.

For more information, be sure to read Linux Thin Client Networks Design and Deployment, and for details on Data Execution Prevention -- check out the chapters on Memory Management and the Boot Process from Microsoft Windows Internals, Fourth Edition.

Part 2: Software assurance vulnerability assessment — Web Services

Best Web Services attack tools

SOAPbox, OWASP Interceptor, WSBang, wsScanner, WSFuzzer, SIFT, Wfuzz, OWASP WebScarab, wsKnight, fuzz_wsdl.py

Best Web Services attack helper tools: OWASP Interceptor, WSMap, SIFT, wsScanner, Foundstone WSDigger, Wfuzz, wsPawn, web2wall, untidy, Schemer, TulaFale, Web Services Enhancements 3.0 for Microsoft .NET, Ajaxfinger, Scanatlas, SOAPSonar Personal Edition

There has been a lot of discussion lately in a few different places that I've seen (blogs, mailing-lists, IRC channels) on testing Web services for security issues. This is a call out to all of those people. Here's your chance to list your favorite tools before I list them all!

