tssci security

New Uninformed Journal - Vol 8

Get it here. Papers include:

More on Ambiguous Security Standards

When I finished reading through PCI DSS v1.1 the other night (for about the fifth time), several requirements continued to jump out at me. To understand the PCI requirements, we first need to understand what is subject to PCI.

From the standard, PCI DSS requirements are applicable if a Primary Account Number (PAN) is stored, processed or transmitted. If a PAN is not stored, processed or transmitted, PCI DSS requirements do not apply.

A couple of weeks ago, Mark Curphey blogged about Ambiguous Security Standards: standards that make "catch-all statements." In the comments, Clint Garrison points out that the updated standard (v1.1) specifies:

1.1.6 Justification and documentation for any available protocols besides hypertext transfer protocol (HTTP), and secure sockets layer (SSL), secure shell (SSH), and virtual private network (VPN)

1.1.7 Justification and documentation for any risky protocols allowed (for example, file transfer protocol (FTP)), which includes reason for use of protocol and security features implemented

Next up, what CardSystems Solutions failed to do:

3.2 Do not store sensitive authentication data subsequent to authorization (even if encrypted)

This requirement only applies to authentication data, not cardholder data. Sensitive authentication data includes the full magnetic stripe, CVC2/CVV2/CID, and PIN/PIN Block.

During transmission, sensitive cardholder data must be protected using encrypted tunnels. This is covered under:

4.1 Use strong cryptography and security protocols such as secure sockets layer (SSL) / transport layer security (TLS) and Internet protocol security (IPSEC) to safeguard sensitive cardholder data during transmission over open, public networks. Examples of open, public networks that are in scope of the PCI DSS are the Internet, WiFi (IEEE 802.11x), global system for mobile communications (GSM), and general packet radio service (GPRS).

So what about internal networks that have direct access to cardholder data? Does the data no longer have to be encrypted?

This requirement is laughable to me:

5.1 Deploy anti-virus software on all systems commonly affected by viruses (particularly personal computers and servers) Note: Systems commonly affected by viruses typically do not include UNIX-based operating systems or mainframes.

So basically, what this is saying is: install A/V on Windows workstations and servers... What about Linux and Mac? Oh wait, they don't get viruses.

Then there's this requirement:

6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:

  • Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security.
  • Installing an application layer firewall in front of web-facing applications.

Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.

So which method is it? One or the other, or both? Does installing a WAF let you skimp on code review? What about maintaining the WAF after installation? This requirement should be reviewed and changed to specify exactly what is required and, like Requirements 1.1.6 and 1.1.7, should have an extra clause.

Many companies have been adopting two-factor authentication for remote access as a company-wide policy. Under the PCI DSS:

8.3 Implement two-factor authentication for remote access to the network by employees, administrators, and third parties. Use technologies such as remote authentication and dial-in service (RADIUS) or terminal access controller access control system (TACACS) with tokens; or VPN (based on SSL/TLS or IPSEC) with individual certificates.

For which employees is this required, since PCI DSS only applies to the cardholder data environment and its system components?

8.5.12 Do not allow an individual to submit a new password that is the same as any of the last four passwords he or she has used.

I would like to add to this requirement the need for non-repudiation when an individual submits a request for a new password. Anybody can ask for a password reset using most common "password reset forms": if you know the username and email address, you can reset anyone's password. That's not enough, in my opinion.

I wish more organizations implemented role-based access control. No one should be running as Administrator or as the root user on their local machine. This PCI requirement states:

10.1 Establish a process for linking all access to system components (especially access done with administrative privileges such as root) to each individual user.

Group and system accounts are the worst because there is little to no ability to do this. Keep these accounts limited to as few users as possible and designate someone to be responsible for their security and maintenance.

I hate hearing the words "We don't scan against production." Frankly, I don't care to either. There's just something annoying about, "If you bring down production with your tests, you're dead meat/fired/a goner." Well, if your production environment was built properly, you shouldn't have this problem. People who say this are likely responsible for the most delicate, insecure network or system around. Seriously.

Under the PCI DSS, you are required to:

11.2 Run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).

Note: Quarterly external vulnerability scans must be performed by a scan vendor qualified by the payment card industry. Scans conducted after network changes may be performed by the company's internal staff.

11.3 Perform penetration testing at least once a year and after any significant infrastructure or application upgrade or modification (such as an operating system upgrade, a sub-network added to the environment, or a web server added to the environment). These penetration tests must include the following:

  • 11.3.1 Network-layer penetration tests
  • 11.3.2 Application-layer penetration tests

I think it's great that companies are using tools and processes to increase security during the development and staging phases. But if you're not testing your production network, your efforts are all for naught. Your adversaries are not attacking your development and staging environments; they're attacking your production systems.
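To make that concrete: the internal quarterly scans (and the ones after network changes) can be run by your own staff, so there's really no excuse. Below is a minimal sketch of the kind of sweep I'd start with; the target range is hypothetical (substitute your own cardholder data environment), and a port/service sweep with nmap is obviously not a full vulnerability scan, just a starting point.

# Hypothetical internal range; replace with your cardholder data environment
# -sS: TCP SYN scan, -sV: probe service versions, -p-: all 65535 TCP ports
# -oA: write results in all output formats under the given basename
nmap -sS -sV -p- -oA quarterly-internal 192.168.10.0/24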

This last requirement I have some trouble understanding, since it's under Requirement 12: Maintain a policy that addresses information security.

12.3.10 When accessing cardholder data remotely via modem, prohibition of storage of cardholder data onto local hard drives, floppy disks, or other external media. Prohibition of cut-and-paste and print functions during remote access.

So... do you enforce this technically, or just make it a policy? And how do you enforce that?

Tweaking kernel parameters using sysctl

Over the last few years I have been finding ways to tweak my FreeBSD systems for better security and performance. One of the techniques I use most often is tweaking kernel parameters using sysctl. As you may know from previous posts, I am now an OS X fanboy. Sysctl parameters are, for the most part, the same on OS X as on FreeBSD. This post details a few of my favorite sysctl tweaks. These will work on both FreeBSD and OS X (as well as any other BSD-derived system descended from 4.4BSD). Note: these settings are not terribly useful for single-user systems and are generally intended for multi-user systems with high levels of utilization.

These settings drop TCP and UDP packets received on closed ports without sending any reply (no TCP reset, no ICMP port unreachable). Note that if you enable this, traceroutes to your system won't complete properly.

net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
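If you want to try these without rebooting, here's a rough sketch of how to apply them from a shell; the syntax is from memory, so check sysctl(8) on your system (root privileges assumed):

# Apply at runtime; these do not survive a reboot
sysctl net.inet.tcp.blackhole=2
sysctl net.inet.udp.blackhole=1

# On OS X, the -w flag is the usual form
sudo sysctl -w net.inet.tcp.blackhole=2
sudo sysctl -w net.inet.udp.blackhole=1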

Spoofed packets that use random source addresses cause the kernel to generate temporary cached routes in the routing table. These temporary routes usually time out in 1600 seconds or less. Setting the expiry to 2 seconds lets the kernel clean them up much more quickly during an attack. Never set this value to 0, however, or you will be presented with a system that does not work properly.

net.inet.ip.rtexpire=2
net.inet.ip.rtminexpire=2
net.inet.ip.rtmaxcache=256

This guarantees that dead TCP connections are recognized and torn down. Not really a security setting, but very helpful.

net.inet.tcp.always_keepalive=1

Randomize process ID numbers:

kern.randompid=348

When you start an application such as Apache or MySQL, the command-line arguments you passed to the program show up in ps and top output. I personally prefer that these arguments not be viewable by users, so I disable them.

kern.ps_argsopen=0

These are not the only parameters that can be modified using sysctl. For more information, read the sysctl documentation for your operating system. You will probably find that sysctl parameters are not well documented, so I encourage you to set up a test system for experimentation.
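To make the tweaks from this post stick across reboots, one approach is to collect them in /etc/sysctl.conf, which FreeBSD reads at boot (OS X 10.4 should pick the file up as well, but verify that on your own system); treat this as a sketch:

# /etc/sysctl.conf -- applied at boot
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.ip.rtexpire=2
net.inet.ip.rtminexpire=2
net.inet.ip.rtmaxcache=256
net.inet.tcp.always_keepalive=1
kern.randompid=348
kern.ps_argsopen=0

You can spot-check any single value afterwards with, for example, sysctl kern.ps_argsopen.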

Using Google Analytics to subvert privacy

Marcin decided to take the day off with pay and allow me to share with you a guest blog post. Thanks, Marcin!

Hello, my name is Andre and I'm a blogoholic. On with the post!

With the popularity of MySpace also came the desire to track others who look at one's profile. MySpace trackers arrived en masse, and much of the world was introduced to web analytics.

Around the same time, Google made a purchase that would affect the webiverse forever. The availability of Google Analytics allows anyone to track almost any information about the visitors to their content. It's free up to 5M entries per day, with sampling also available as an option (and it becomes unlimited if combined with a Google AdWords account), so GA is a simple way for any web administrator to get statistics on their eyeballs.

But what problems does GA introduce for privacy? Can it be abused? How will regular users protect their identities and personal information in light of such a Big Brother move? How bad is this problem?

I became interested in GA after seeing it pop up in my NoScript alert boxes constantly, on almost every website that I visit. I also see it in CookieSafe (I've disabled cookies globally, i.e., all 1st and 3rd party cookies). In this way, GA can't track my activity. Or can it?

One thing I learned recently is that anyone can download urchin.js (the Javascript code that activates GA) and host it on their own website.

<script src="https://www.tssci-security.com/localurchin.js" type="text/javascript"></script>
<script type="text/javascript">
_uacct = "UA-12345-0";
urchinTracker();
</script>

By forcing users to enable JavaScript (and 1st party cookies) in order to log in to your website, you can then throw your localurchin.js file into your web pages after login. I often turn on JavaScript when I have to log in to a particular website, so this would catch even the most careful of users.

Worse, GA can be configured with their custom segmentation framework, which allows one of the GA cookies, __utmv, to be filled with any data the web administrator wants. In this example, using __utmSetVar() and an HTML form, the web administrator can store the username, email address, and the user's real name:

// Stuff the form's text inputs into the __utmv cookie via __utmSetVar()
function setSegment(f) {
  var s1 = f.username.value;
  var s2 = f.email.value;
  var s3 = f.realname.value;
  __utmSetVar(s1 + ':' + s2 + ':' + s3);
  return true;
}

<form onsubmit="return setSegment(this);">
User Name: <input name="username" type="text" />
Email: <input name="email" type="text" />
Full Name: <input name="realname" type="text" />
<input type="submit" />
</form>

This code is certainly against Google's privacy policies. In order to maintain some sanity and comply with industry standards in privacy, Google has restricted these cookies to a default lifetime of 6 months. I'm not sure, but it appears that one can modify the _ucto value before starting urchinTracker() and set it to something higher than 6 months (the default value, 15768000, is measured in seconds).

Certain obfuscation practices such as rot13, base64, or even encryption of these fields could allow storage in the GA system. What will Google's policymakers do to prevent this from happening? Other interesting fields could also be stored, such as the actual HTTP referrer fields. You'll likely want to turn these on in your logs as well. Here's how (in Apache httpd.conf format):

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Accept}i\" \"%{Accept-Encoding}i\" \"%{Host}i\" \"%{Connection}i\" \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer

I'm not exactly sure how to prevent this kind of abusive behavior (besides not using Google). I try to be cautious about how I use the web, but this could make global abuse very easy. It puts a lot of power into the wrong hands, so to speak. What is Google going to do about it? Not much, because they stand to benefit from it the most (ever looked at your google.com cookies?).

A few weeks ago, Marcin posted a blog entry about safe browsing. One of the best ways to prevent GA from tracking you is to block all cookies globally with a tool like CookieSafe (Marcin mentions CS Lite), allowing only those sites that you implicitly trust. Using the Firefox Web Developer toolbar with Information -> View JavaScript, you can look for potential localurchin.js code insertions. JavaScript obfuscation can make this more complicated, but fortunately there are some write-ups that cover decoding JavaScript.

By browsing with safety in mind, you can prevent your web activities from being tracked and your personal information from being stored along with your daily (and not so daily) web routines. It's trivial to add support for click tracking and multi-site click tracking using GA. Most of the material and code in this post was taken from a new "short cut" book appropriately called Google Analytics.

Enable password for single-user mode (OS X)

Single-user mode is available by default on OS X without a password. This is not desirable system behavior, and remedying it takes just a few simple steps. To enable a higher level of security, we can set an "Open Firmware Password".

On OS X 10.4.10, you need to use the updated version of the Open Firmware Password (v. 1.0.2) application from the software installation disc (located at /Applications/Utilities/ or at Apple-Support-Downloads). Then, just click the checkbox 'Require password to change Open Firmware settings,' set a password, and you're good to go.
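If you want to verify from a shell that the firmware password actually took effect, something like the following should work on a PowerPC Mac; this is a sketch from memory, so double-check the variable name against the nvram(8) man page:

# Expect "command" (or "full") once an Open Firmware password is set
sudo nvram security-mode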

