tssci security

ToorCon 9 - Day 0 and 1

This weekend I was in San Diego, California for ToorCon 9 and had an absolute blast. On Friday, I checked out the USS Midway Aircraft Carrier Museum and enjoyed listening to veterans recount fascinating experiences aboard the ship during the war. I took the morning to "experience" the city, something I try to do while attending conferences. It sucks to travel to various cities around the country and never get the chance to visit local landmarks and famous sites.

While I was out and about around the city, Dre was giving his seminar talk on Continuous Prevention Testing. Stay tuned for an upcoming blog post from Dre that goes over the main points of his talk. I was lucky enough to catch Charlie Miller's seminar talk on Real World Fuzzing. It was a great presentation, and since fuzzing is still new to me, I was able to see the why, what, and how of it. There was a lot of talk about code coverage with fuzzing, which is interesting given the recent web application scanner code coverage review done by Larry Suto, which reported NTOSpider as the best because it can crawl more links in default mode. Dre will also have some more commentary on Charlie's talk and Larry Suto's review.

On Saturday, I went to Jason Medeiros' talk, "The Last Stand: 100% Automatic 0day, Achieved, Explained, and Demonstrated." Jason wrote a tool that is a debugger, heap analyzer, and fuzzer all in one, and that automagically generates a C exploit. It was pretty awesome; I think he wrote over 80,000 lines of code and spent an entire year doing it. A couple of people are skeptical about his demo though, stating it could have been tailored to his demo application.

Afterwards, several of us went out for seafood and sushi. I tried sushi for the first time and, well... now I know I definitely do not like it. LOL. Thanks Erich for letting me try some. Following dinner, we went to the Microsoft-sponsored party at Olé Madrid down in the Gaslamp, which was pretty good. Pretty much everyone in security was there, and then again at the ninja party.

That's it for Friday and Saturday... next blog post from me will cover Sunday's talks.

Update 10/26: ToorCon 9 - Day 2 has been posted

Scraping the web for fun and profit

Crawling and scraping rarely get discussed in a security context because everyone is too busy creating cute mashups and messaging their MySpace friends.

I recently read Webbots, Spiders, and Screen Scrapers from No Starch Press. The author uses PHP-CURL for all his examples and provides interesting ideas for scrapers. Most of these ideas can be replaced by two tools: Google Alerts or Dapper.Net.

Google Alerts is one of my favorite web applications yet. I use it all the time, because it is constantly sending me emails about things I am interested in. Choosing keywords for it is extremely important - a skill that I believe is not only necessary, but one that grows with experience and pays off big in the end. There is a limit of 1,000 keywords - but I figure you can open more GMail accounts if you want more Google Alerts.

Another great aspect of getting Google Alerts in your GMail is that you can not only search them, but also sort them into archived tags and then search on the tags as well. My personal favorite feature of Google Alerts is being notified immediately on certain keywords. I have noticed very little lag time (within 1-2 minutes, possibly less) between when a word first appears on a high-traffic site and when the alert drops into my GMail.

Marcin uses Dapper, and has run into the same legal issues that Pamela Fox describes in her presentation on Web 2.0 & Mashups: How People can Tap into the "Grid" for Fun & Profit. We talked about how the legal case against this sort of scraping probably won't stand up - unless you are actively leeching content, using it for parasite hosting, and/or making money off it somehow. It could be against an AUP, and therefore your account could be disabled - but as long as you can create a new account, I think this sort of activity will continue.

Marcin also got me more interested in scraping when he pointed me towards iOpus, where I found out about iMacros. I had used a few similar scraping tools in the past, and wanted to put together a collection of scraping tools for those who are unable to benefit from Google Alerts or Dapper - for example, at home, on a local Intranet, or on any other website that is otherwise unreachable by Googlebot or Dapper scrapes.

Some say that everything started with Perl, and in the case of scraping this is almost certainly true. Randal Schwartz wrote an article for Linux Magazine almost 5 years ago on WWW::Mechanize. Perl has since evolved to include a few other parsing modules, including HTML::TokeParser, HTML::PullParser, and [IMO the best] XML::Smart::HTMLParser. However, most scrapers in scripting languages evolved from or copied WWW::Mechanize.
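
Since so many scrapers copied the WWW::Mechanize interface, one example gives you the flavor of all of them. Here's a minimal sketch using the Python port of mechanize (the URL and form field names are made up for illustration):

```python
# Minimal WWW::Mechanize-style scraping via the Python mechanize port.
# The URL and form field names below are hypothetical.
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)          # skip robots.txt for this sketch
br.open("http://example.com/login")

br.select_form(nr=0)                 # grab the first form on the page
br["username"] = "scraper"           # hypothetical form fields
br["password"] = "secret"
br.submit()

# Follow a link by its anchor text, just like the Perl module does
response = br.follow_link(text="My Account")
print(response.read())
```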

In fact, Ruby's primary scraper is called exactly that: mechanize. It relies on Hpricot, an HTML parser for Ruby that Jonathan Wilkins also recently blogged about while trying to find a Ruby equivalent to Python's setattr. Ruby also has another scraping toolkit, called scRUBYt, that is most certainly worth checking out, even for a novice.

One of the latest toolkits for parsing comes from the Python camp. Called pyparsing, it appears to be something Google would use to scrape the entire Internet. Of course, other Python users will be familiar with BeautifulSoup, which has been a very popular and powerful parsing library over the past few years, mostly because it handles invalid markup well, similar to XML::Smart from Perl.
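
To make that concrete, here's a minimal sketch that pulls every link out of a page with BeautifulSoup. The URL is just a placeholder, and the import is the era's BeautifulSoup 3 - newer installs use "from bs4 import BeautifulSoup" with a nearly identical API:

```python
# Minimal link scraping with BeautifulSoup; the URL is a placeholder.
import urllib2
from BeautifulSoup import BeautifulSoup  # bs4: from bs4 import BeautifulSoup

html = urllib2.urlopen("http://example.com/").read()
soup = BeautifulSoup(html)               # happily swallows invalid markup

# href=True matches only anchor tags that actually carry an href attribute
for link in soup.findAll("a", href=True):
    print(link["href"])
```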

So let's say you have malformed, unvalidated HTML. What's the best way to handle it in languages besides Perl and Python? Well, Ruby has RubyfulSoup (same website). For a language-independent approach, one could also use HTML Tidy (and here are the bindings for Ruby). Sylvan von Stuppe first mentioned NekoHTML on his website, and then went through his ideas on scraping using Groovy. In this exhaustive list of MITM proxies and web application security testing tool ideas, he also mentions that HtmlUnit uses both Commons HttpClient and NekoHTML. We'll talk more about HtmlUnit and related utilities in a future blog post.
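
As a rough sketch of the Tidy approach - assuming the uTidylib Python bindings here, though any language's Tidy bindings work the same way - you clean the tag soup first and then hand the result to a strict parser:

```python
# Cleaning malformed HTML with HTML Tidy via the uTidylib bindings
# (an assumption; other languages' Tidy bindings behave the same way).
import tidy

ugly = "<html><body><p>unclosed paragraph<b>bold<i>nested</b></body>"
options = dict(output_xhtml=1, indent=1, tidy_mark=0)

cleaned = tidy.parseString(ugly, **options)
print(str(cleaned))  # well-formed XHTML, safe for a strict XML parser
```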

I want to wrap this blog post up here because I'm on my way to San Diego to speak at ToorCon. I'll be covering security testing as it relates to the three phases of the application lifetime: the programming phase, the testing and pre-deployment phase, and operations/maintenance. Hope to see you there!

ToorCon 9: San Diego -- Eats, Treats, Tricks and Drinks

Several of us are going to ToorCon 9 this weekend in San Diego, California. I'm flying out tomorrow (Friday) morning, and I plan on visiting some sites around town, such as the USS Midway Aircraft Carrier Museum, and then heading up to Little Italy for some lunch. After that, I plan to hit up Extraordinary Desserts to satisfy my craving after being on a diet for over six weeks. 12 lbs, beat that :P

Dre will be speaking at ToorCon on Friday during the seminars session; his talk is titled "Continuous Prevention Testing." His presentation was well received at this week's Minneapolis OWASP meeting. Be sure to check it out if you're attending the seminars on Friday. From the description:

Continuous testing presents methodologies and tools that developers, quality engineers, and security professionals can all share and use effectively in their own unique approaches. The tools presented are cross-discipline, meaning they can be utilized by a developer as a development tool, by a QA tester as a quality assurance tool, and by a vulnerability assessor as a security assurance tool. Whether you're trying to build better code faster, demonstrate the power of automated testing using a data-driven test framework, or find security-related defects - continuous testing has something for you.
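
To illustrate the cross-discipline, data-driven idea (this is a toy sketch of my own, not code from Dre's talk): one shared table of inputs can drive the same harness for a developer, a QA tester, and a security assessor, e.g. with Python's unittest:

```python
# Toy data-driven test sketch (illustrative only; all names are made up).
import unittest

# One shared table of inputs: functional cases for developers and QA,
# attack strings for security assessors.
CASES = [
    ("alice", True),                       # functional: valid username
    ("", False),                           # functional: empty input rejected
    ("bob'; DROP TABLE users;--", False),  # security: SQL injection attempt
]

def is_valid_username(name):
    """Toy validator standing in for the real code under test."""
    return name.isalnum() and len(name) <= 32

class TestUsernameValidation(unittest.TestCase):
    def test_table(self):
        # The same loop serves every discipline; only the table grows.
        for value, expected in CASES:
            self.assertEqual(is_valid_username(value), expected,
                             "unexpected result for %r" % value)

if __name__ == "__main__":
    unittest.main()
```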

San Diego and ToorCon are sounding like a lot of fun. If you're in the area or attending the conference and want to meet up, shoot me an email or leave a comment on this post with your email address. We'll be going out after the conference to eat and drink, so be ready. :)

More on Google Analytics: Now with Stolen Search Queries!

In my earlier article on Using Google Analytics to Subvert Privacy, I demonstrated how dangerous free tools can be when used to match private information to web clicks.

But now that Google has updated their Analytics service to support internal search queries, you can link user privacy information to search data as well. Now everyone can be as famous as AOL, get reported to the FTC, and run your own stalk-your-users website. Although, unlike AOL, you can't fire 2,000 people in one day.

A month ago, I helped run a conference called Lulzcon in downtown Chicago. I met Virgil Griffith there, who spoke about a famous application he wrote called WikiScanner. We discussed some of the privacy issues around these sorts of tools, and I think this applies back to the original arguments about Google Analytics.

Marcin wrote back in mid-August about safer browsing, but that's only the beginning. RIAs/RCPs are about to explode, and you can bet that Google, Microsoft, Yahoo, eBay - all the major players - will be utilizing this technology. As we've seen with Java and Flash applets, access to the local network stack can be devastating to both your privacy and your intranet/home security when accessing the Internet. This isn't meant to scare you; it's meant to warn you to think differently and react to what will likely be the new attack paths of 2008. I also hope that the "big browsers" will start to react to these growing problems.

You've probably heard a lot of push-back from Google about privacy issues. They aren't really doing much about it other than holding their applications to high quality and security standards. They have played down click fraud. They have played down blackhat SEO, while gaming PageRank seems to remain very popular. I even ran across a very interesting website called Google Rankings.

If you want to learn more about Google Analytics, I highly recommend this article and the LunaMetrics blog/website (including a new article on Analytics for Site Search). There is also more information at EpikOne and TheMilk, and you can read more general information on analytics by joining the Web Analytics Yahoo Group.

Way to go Arnold -- why AB 779 was a lose-lose situation for small business

A lot of commotion has been stirred up around California Governor Arnold Schwarzenegger's recent veto of a bill (AB 779) that would have imposed strict mandates on all merchants. Many have scoffed at the Governor's "caving to lobbyists and members of the retail industry." You know what?? I actually agree with the Governor's veto of this bill. Even though I've expressed some criticism recently over the ambiguity of PCI DSS, I really think the Governor has it right.

This bill is only friendly to big business, not the little guys. Compliance with state laws and regulations costs small businesses money, sometimes more than they can afford. Many businesses thus stick with cash-only policies, weighing the lost business against the costs of processing debit and credit card transactions. Does anyone know how much money is spent on business licenses and all the other stuff a mom-and-pop retailer needs?

Bill Brenner of Security Bytes is disappointed with the veto. He states, "Apparently no one told the governor that PCI DSS is NOT working." Okay, where's your data to back up this claim? He later says, "Big business, hackers, and ID thieves won today." Hmmm, it's easier for big business to comply because they have vastly larger bank accounts and can throw more people at the problem. Small business?? Not so easy. Just take a look at what percentage of your income health insurance for your own family ends up costing when you own a small business.

In addition, the law would conflict with PCI DSS. PCI already has multiple (four) compliance levels that apply differently to merchants based on the number of transactions processed annually. Pushing another regulation on every business is just pointless. Chalk up another useless regulation with no teeth that we have to think about. The list goes on, and we all know them... HIPAA, FISMA, etc. Who would make sure merchants are compliant on an every-other-day basis?

I can compare this bill to NASCAR regulations. A couple of years ago, NASCAR introduced a rule requiring teams to dial in their shocks and keep them within spec following a race. The teams with the biggest budgets turned to sophisticated seven-post rigs to dial in their suspensions easily. What does this mean? The teams with less money get shafted... literally. For the teams that can't afford this tool, more time and money is spent meeting suspension specifications.

Too much effort to comply is "not an excuse?" Well, when you're a small business, you have less money, less time, and fewer resources to do it. What you can do is far less than what a big company can do. Anyone who owns their own business knows how much time gets spent on government paperwork... almost half.

Also, think about the risk here. What's a better target: a mom-and-pop with 100 credit card numbers, or a multinational with hundreds of thousands? Surely, a high-profile nationwide company or chain inherits a greater risk of a security breach simply because of the cost-benefit to a hacker. Hacking a little town store is just not worth the trouble. A small-time criminal takes a bigger risk (stupidity, desperation, etc.) holding up a convenience store, where the payout isn't nearly as much as a bank heist (planning, worth, etc.).

