Living in NYC has its perks, one being that we host the largest OWASP
chapter in the world. The NY/NJ Metro chapter put a lot of effort
into making sure this last week went smoothly, even with the change of
venues at the last minute. I had a lot of fun, and it was nice seeing
everyone again, and meeting new faces. On Wednesday night, a bunch of us
gathered for a NYSec meeting at DBA down on 1st Ave and 2nd St. Some
cool new people I got to meet included Andres Riancho, Dinis Cruz,
Ryan Naraine, Ivan Ristić, Dave Aitel, RSnake, Chris Nickerson, and
Gunnar Peterson... phew! That is not even a fraction of the people you
get to see at these conferences.
Anyways, my two favorite talks go to Dinis Cruz and Dave Aitel. Firstly,
Dinis is such an energetic guy, you just want to stand up, do the guido
fist pump and then run off to do something really, really cool. As an
independent contractor working for Ounce, Dinis developed an open source
tool called o2 which helps code reviewers navigate mountains of static
analysis data quickly and logically. The few static analysis tools I've
come across have interfaces that don't exactly cater to thorough, quick
and accurate analysis. What o2 lets you do is crank up the volume on
these tools and just run with it, identifying patterns in code really
easily and letting you cover as much ground as possible. I still have to
spend some time playing with it, but it would definitely make anyone's
job easier. All that, and it's open source and will read in any CIR data
from an Ounce scan.
Dave's talk, "Corruption" really captured my sentiments on non-webappsec
research (present day). While in university, I always thought the
barrier to entry prohibited a lot of people from becoming really good at
writing reliable buffer overflow exploits. This could be seen as both
good and bad: many operating systems now ship with randomized,
non-executable memory (Vista's ASLR, XP SP2's DEP, PaX/grsecurity,
etc.), making them somewhat resistant, but not entirely immune. This
presents a problem, a huge gaping
vulnerability in both our systems and our thinking. Buffer overflows
continue to surface even after being discovered 15 years ago. But
because exploitation is so hard, we don't see many exploits on milw0rm or
packetstorm. And let's face it, if they're not on there, then they don't
exist. Right? Maybe. Though one thing is certain, the people writing
exploits are professionals and are really, really good at what they do.
Be it Dino, Gobbles or Aitel (who was being modest when he said he's not
the best), it is true there are people out there who can and will do it,
and when the next remotely exploitable buffer overflow that bypasses
stack protection comes along, we won't know what fucking hit us.
Also, we've begun to set the agenda for OWASP EU Summit Portugal. Arshan
Dabirsiaghi is looking for folks to contribute to ISWG, a group with
some modest goals, like fixing the
Internet. Seriously though, the group is
looking at new ways to secure the browser, and what approach(es) they'll
take to do it. I'd love to talk about some other projects, but really,
there are just too many worthy projects to list out here, so head on
over to the OWASP EU
Summit page, and
find something of interest.
One last closing thought I'd like to squeeze in... Throughout the entire
week, I found it really coincidental that ISC2 chose to sponsor the
OWASP conference and release a new certification, the CSSLP (Certified
Secure Software Lifecycle Professional). Given that James McGovern is
putting a lot of effort into developing an OWASP certification, and that
Dre posted R.I.P. CISSP and got into the top 5 Google search results for
"CISSP", I find it strange they would go and do this. It also seems as
if they put no thought into the certification at all, just one they
cranked out to beat OWASP "to the punch" and make a buck at the
industry's expense, laughing all the way to the bank. Shameful.
Posted by Marcin on Friday, September 26, 2008 in
Security.
This is just going to be a long list of links with rants. I have taken
up the duty of disseminating information on the latest in WiFi and
Bluetooth penetration-testing for no real reason other than it's on the
tip of my tongue.
First, we have the BackTrack 3 project, which is basically mandatory if
you want to be doing any wireless pen-testing. See if your laptop is
supported:
People like ErrataSec suggest Asus Eee PC, others say Acers, others say
Apple. I have a Thinkpad. Whatever. You have what you have, and you will
probably need to buy additional hardware. I'll get to that in a second.
The only real software that is missing from BackTrack is this new python
code called Pyrit. It's not for pen-testing yet, but it certainly looks
to build on the Church of WiFi WPA-PSK Rainbow
Tables work.
Pyrit uses CUDA to offload computation onto NVIDIA GPUs, instead of the
old standby OpenCiphers.org method of using FPGAs.
Pretty neat stuff. If you can get a CUDA-enabled mobile
product in any
future laptop that you buy (minimum 256MB video RAM), then I highly
suggest doing so based on this information.
The aircrack-ng wiki is really coming along. Check out How to Crack
WPA/WPA2 if you
are confused about the hows and whys of cracking WPA-PSK.
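To see why Pyrit and the Church of WiFi tables help so much, it's worth knowing what they actually precompute: the WPA pairwise master key, which is PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, at 4096 iterations. A minimal sketch in Python (using only the standard library; the numbers are straight from the WPA-PSK definition):

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # WPA-PSK key derivation: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid,
    # 4096 iterations, 32 bytes). The 4096 iterations are exactly why
    # cracking is slow and why precomputed tables or GPUs pay off.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# The well-known 802.11i test vector: passphrase "password", SSID "IEEE".
pmk = wpa_psk("password", "IEEE")
print(pmk.hex())
```

Since the SSID is the salt, a rainbow table is only good for the SSIDs it was built against, which is why the Church of WiFi tables target the most common default SSIDs.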
Pyrit also supports acceleration through the VIA Padlock hardware crypto
accelerator found on VIA C7-M CPUs. There are a lot of motherboards
that support the VIA C7-M these days, many of which you can find on
iDotPC.com. I also found a cute little laptop called the One Mini A110,
for which there is plenty of documentation dedicated to the VIA Padlock
and Linux.
Of course, small is "in" with pen-testing. The ErrataSec guys had the
FedEx iPhone trick at Defcon this year, and now it appears that others
are trying to get into this business with $700-1000 phones. Yes, the
NeoPwn
is probably now available and soon to be sold-out, just like those
iPhones.
The NeoPwn would probably be great for running
aircrack-ng, but it would leave a bit to be desired for running
Karmetasploit with a Caffe
Latte
twist.
Speaking of Caffe Latte, both the
airbase-ng and
aireplay-ng
tools now support Caffe Latte as well as plenty of other attacks. Many
other commercial tools, such as CommView for WiFi, don't even support
half as much as the aircrack-ng tools. Aircrack-ng is quickly becoming
the standard in WiFi and WEP attacks. Offensive-Security (from the
makers of BackTrack) now has a certification and an online course
available called BackTrack
WiFu that
features the aircrack-ng suite.
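The core weakness all of these WEP tools lean on is simple to demonstrate: WEP builds each per-packet RC4 key by prepending a 24-bit cleartext IV to the shared root key, so an IV repeat means a keystream repeat. A toy sketch (plain RC4 from the standard definition; the key bytes are made up for illustration):

```python
def rc4(key: bytes, n: int) -> bytes:
    # Textbook RC4: key-scheduling (KSA) then n bytes of keystream (PRGA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

root_key = b"\x01\x02\x03\x04\x05"   # a 40-bit WEP root key (made up)
iv = b"\xaa\xbb\xcc"                 # 24-bit IV, transmitted in the clear

# WEP's per-packet key is simply IV || root_key. Reuse the IV and you
# reuse the keystream, so XORing two ciphertexts cancels it entirely.
ks1 = rc4(iv + root_key, 16)
ks2 = rc4(iv + root_key, 16)
assert ks1 == ks2
```

Only 2^24 IVs exist, so repeats are guaranteed on a busy network; the statistical attacks (FMS, KoreK, PTW) go further and recover the root key itself from enough captured IVs, which is what aircrack-ng automates.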
While I prefer CommView for WiFi for ARP reinjection with Aircrack-ng
GUI for the PTW attack because using them together is stable, easy, and
extremely fast -- I think that
Wesside-ng looks
very promising to automate the same. The problem is that there are so
many one-off scenarios (just check out the PDF above for a short list!).
The great thing about CommView for WiFi is that it's easy for me to
identify what is going on quickly so that I can adjust the attack.
Saving packets 20,000 at a time is annoying, but Wireshark's mergecap
settles the issue for me before cracking with Aircrack-ng GUI with PTW.
The Backtrack WiFu PDF syllabus also mentions Easside-ng, which
implements a very quick fragmentation attack against WEP. The wiki page
compares Easside-ng to
Wesside-ng,
and you can see that they are very different. It also explains how they
can work well together. I think attacking the AP is less exciting than
attacking the clients (à la Karmetasploit or Caffe Latte), but it's
amazing how far this work has come along.
Tom Nicholson of NicholsonSecurity turned me onto Jasager, which is
Karma on AP firmware (and a nice GUI interface to Karma!). It's so hard
to find good
access points these days. A few sources are reporting success with the
Asus WL-520GU, but I'll believe it when I see it. I'd rather just go
with a Soekris board or iDotPC running Pyramid Linux or similar.
If you're really going to go out and buy something though, I suggest
investing in CSR-BC4-EXT Bluetooth USB adapters. The NeoPwn happens to
come with a CSR-BC4 Bluetooth device (not sure if it is EXT or ROM), but
I don't think this is an accident. CSR,
or Cambridge Silicon Radio, has a Bluetooth chipset that happens to be
used by one of the only USB-based HCI sniffer vendors,
Frontline. Some reverse engineers are apparently working on an
open-source implementation of firmware that can perform HCI sniffing.
Until then, many are copying the Frontline/FTE ComProbe firmware onto
their CSR-compatible Bluetooth USB adapters. It is really difficult to
find information about HCI
sniffing! However, Remote-Exploit's Forums have an awesome thread going
on about it (if you have an hour or so to read it -- it's long).
All of the information is contained in this tutorial, but many readers
are frustrated by the disjointed information, and further frustrated by
rumors of bricking the hardware. I disagree -- the information is there
in its entirety; you just have to read it. There are a few links that
I'd like to post, as they summarize some (but not ALL) of the
information you'd need to get started. Yes, you really have to read the
whole thread.
The point here is that if you read everything and take it all in, then
Bluetooth penetration-testing is in your future. There are no shortcuts
to the bleeding-edge. However, it is likely to be extremely rewarding. I
somewhat believe the rumors that the pairing exchange could possibly be
reinjected in order to sniff and crack any Bluetooth PIN. This would
mean access to make phone calls, listen in to audio, and have full or
partial access to a Bluetooth-connected device's filesystem with the
speed of access and ease of WEP attacks. If you look at what happened to
WiFi and WEP, there are probably plenty of radio-based as well as
client-side exploits to consider.
After sniffing the Bluetooth pairing exchange, you'll still need to
crack the BT PIN. n.runs provides the BTcrack
tool for this
purpose. Thierry Zoller's
research seems to run in line with
Frontline, Codenomicon, and others -- apparently he's working on USB
fuzzing
according to his new blog.
Posted by Dre on Tuesday, September 23, 2008 in
Hacking,
Security and
Tech.
The OWASP AppSec NYC 2008 conference is only a couple days away, with
training starting at 9AM on Monday. I will be attending the "Advanced
Web Application Testing" training course with Eric Sheridan of Aspect
Security. I'm really looking forward to this conference, as it'll give
me the opportunity to meet up with old friends and meet new ones. My
employer is also sponsoring the conference, so you might be able to spot
me in the vendor area throughout Wednesday and Thursday. I don't plan to
dwell too long there though, but lucky for you we'll be handing out all
sorts of little goodies and plan to raffle off something I really wish I
had. ;)
Shoot me an email or post a comment if you want to meet up [marcin
(splat) tssci-security.com]. I'll be everywhere.
My agenda hopefully includes the following talks:
Time | Presentation
Wednesday
12:00-12:45 | Framework-level Threat Analysis: Adding Science to the Art of Source-code review
14:00-14:45 | Industry Outlook Panel
15:00-15:45 | OWASP Testing Guide - Offensive Assessing Financial Applications / w3af - A Framework to own the web
16:00-16:45 | Case Studies: Exploiting application testing tool deficiencies via "out of band" injection
17:00-17:45 | Threading the Needle / Multidisciplinary Bank Attacks
Thursday
09:00-09:45 | OWASP Web Services Top Ten
10:00-10:45 | "Help Wanted": 7 Things You Need to Know About APPSEC/INFOSEC Employment / Building a tool for Security consultants: A story of a customized source code scanner
12:00-12:45 | Next Generation Cross Site Scripting Worms
14:00-14:45 | Practical Advanced Threat Modeling
16:00-16:45 | Corruption
Posted by Marcin on Sunday, September 21, 2008 in
Conferences and
Security.
Jeremiah Grossman wrote in the opinion section for Application security
in CSO Online magazine about Web Application Security Today -- Are We
All
Insane?
I have an opinion of my own which I would like to share with my readers.
Jeremiah spreads FUD -- Fear, Uncertainty, and Doubt (mostly fear) in
his message. I wanted to walk through some parts of what he wrote that
were especially messages of fear, particularly the ones that are
overblown.
Seventeen million programmers are churning out an estimated 102
billion new lines of code per year. [...] Web application exposure
has reached the crisis stage because criminals have taken notice and
made Web applications their primary target. There's an old proverb
that explains how to determine whether or not someone is sane. An
individual is shown a river flowing into a pond. He is given a
bucket and asked to drain the pond. If he walks to the stream to dam
the inflow into the pond he will be considered sane. If he decides
to empty the pond with his bucket without first stopping the inflow
then he would be considered insane. This is analogous to today's
approach to software security, and specifically Web application
security.
Many of us (including myself) know exactly where Jeremiah is going with
this. However, my addition is that the purity of the water in his
example is what is important -- not the flood of code. We shouldn't slow
down the production of code, or put an end to it.
The techniques used by the modern cyber-criminal are truly scary.
They're backed by mafia, supported by nation states, and often even
carried out by, or in conjunction with, rogue insiders. We are
dealing with polymorphic malware, 100,000-computer strong botnets,
drive-by-downloads, rootkits with anti-forensic capabilities
conducted by adversaries who fear no U.S. law. The bad guys make
certain their newest tricks are packed, encrypted, and undetectable
by the most popular security products.
While some of this FUD is certainly true to a point, we don't have any
specific measurements on the reality of our situation. What Jeremiah
purports as fact is merely theory, speculation, and potentially myth.
Think the payment card industry's new regulations or the breach
disclosure laws are going to save us? Neither do I, but they
certainly do make a good excuse to get more budget dollars.
I've been having a lot of interesting conversations about compliance
with my colleagues lately. It's been indicated to me that PCI-DSS is not
the only compliance standard or regulation that has a framework to
enforce application security or application penetration-testing.
Stranger still, the "cost of a breach" isn't the only cost of
insecurity. Marcin and I were discussing an article on Sound compliance
policies, practices reduce legal
costs.
I had other discussions about cyber-insurance in the Security Catalyst
community regarding a presentation at the recent Defcon conference from
Taylor Banks and Carric on [PDF] The pentest is dead, long live the
pentest!
At the end of their presentation, Taylor and Carric provide a long list
of cyber-insurance providers -- extremely useful for anyone unaware of
such a thing or looking to buy. In David Rice's book, Geekonomics, David
makes mention of AIG's cyber-insurance offerings and how the ISAlliance
and AIG provide discounts to ISAlliance
members
who implement security-framework controls. In other words, doing
compliance "right" not only buys protection from the regulators, but it
also demonstrates cost-improvements for legal and insurance activity.
Another conversation with colleague Adam Muntner discussed how
"compliance readiness" is both more profitable and more enjoyable than
compliance work itself. Many organizations realize how much time and
effort it takes to pass any given set of criteria for an audit standard,
so they prepare themselves ahead of time using experts in application
risk, network penetration-testing, and application penetration-testing.
What most organizations are looking for is custom-tailored advice in the
form of strategy consulting, not just another fancy report that they can
give to the auditors.
Compliance and breach disclosure laws could possibly be the primary
motivators towards spending on application security, but there is
certainly more at work here. If compliance is driving application
security, then what is driving compliance?
Want to rent a 10,000-computer botnet for the day? No problem.
Unreported vulnerabilities (zero-days) are being researched, bought,
and sold on the black market for tens or even hundreds of thousands
of dollars. At the same time, when software patches are released,
attackers are immediately (it is rumored, automatically)
reverse-engineering them to find the flaw. Exploit code is then sent
back into the wild before patches can be widely deployed by
legitimate users. Large-scale patch rollouts taking only a few days
seems like a great advancement until compared against exploit code
ready to go in hours.
Here's where Jeremiah's FUD really kicks in. I don't know where his
sources are, but the factual nature of this information should
definitely come into question. I have heard of one or two exploits that
have been sold for US$30,000. However, this is not the norm. The
rumors of automatic reverse-engineering of patches into exploits have
been disproved, so why make mention of it? Even the Asprox botnet that
coordinated the SQL injection attacks is over one year old -- and I'm
certain that a large majority of enterprises are patched. The clear
target of the malware behind the SQL injection attacks is consumers,
particularly those whose Windows XP operating system has some sort of
automatic update deficiency or mis-configuration.
In response to the inadequacies of first-generation Web application
security measures, an entire industry has emerged beating the drum
for software in the Software Development Lifecycle (SDL) and touting
secure software as the cure to all our woes.
Actually, application security principles have been around for a lot
longer! Security in the SDLC was definitely talked about before the
invention of the web. The only concepts that I've seen
emerge from the "inadequacies of first-generation Web application
security measures" that have been "beating their drums" and touting
their solutions as the cure to all our woes are:
- Black-box web application security scanners (WASS)
- Web application firewalls (WAF)
Secure code review has been a concept that I've been aware of since
OpenBSD opened their doors. Improving software development processes for
both quality and security goes back in the literature to the 1970s. Unit
testing, and security unit testing, are relatively new concepts -- but
certainly not as new as WASS or WAF!
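For readers who haven't seen security unit testing in practice, it is just ordinary unit testing pointed at hostile input: pin down the invariant that a payload can never survive unescaped, and let the suite catch regressions. A tiny sketch using plain asserts (the `render_comment` helper is hypothetical, standing in for any view-layer escaping function):

```python
import html

def render_comment(user_input: str) -> str:
    # Hypothetical view helper: escape untrusted input before
    # embedding it in HTML output.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

# Security unit tests: assert that known-hostile payloads are neutralized.
def test_script_tag_is_neutralized():
    out = render_comment("<script>alert(1)</script>")
    assert "<script>" not in out

def test_attribute_breakout_is_neutralized():
    out = render_comment('" onmouseover="alert(1)')
    # After escaping, no raw double quote should remain anywhere.
    assert '"' not in out.replace("&quot;", "")

test_script_tag_is_neutralized()
test_attribute_breakout_is_neutralized()
```

Nothing here requires a special framework; the point is that the same harness developers already run on every commit can carry security regressions too.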
Secure code review is a competitive sport that is different than the
sales/marketing approach of security product vendors. When Theo de
Raadt, a renowned (some would say notorious) NetBSD core member with an
appetite for application security, branched OpenBSD from NetBSD -- he
didn't have it directly in mind that he and his team would scour their
source code looking for security-related bugs. However, the NetBSD team
provided some extra competitive eyes on the OpenBSD commits -- looking
especially hard on security-related bugs to embarrass Theo and crew.
From this back-and-forth competitive challenge -- the application
security industry was really born.
Certainly, some will claim that fuzz testing was invented earlier.
However, before OpenBSD -- security-related bugs were found mostly by
accident (while looking for something else). If they were found on
purpose, like in the case of the Morris Internet worm, it was a personal
matter -- potentially shared by a group, but not taken on by a group,
rarely even in academia.
One could claim that WASS has its roots in fuzz testing, while WAF has
its roots in packet filtering or the classic network firewall. Unlike
those two: security unit testing, secure code review, and white-box
dynamic analysis have really not changed much over time. When I use
JavaScript breakpoints in Firebug, it is strikingly similar to using
gdb.
In today's world, there is an unimaginable amount of insecure code,
and therefore websites, already in circulation. Just taking up the
battle cry of "secure software" alone does not solve this problem.
As Web 2.0 applications continue to proliferate (blogs, social
networks, video sharing, mash-up websites, etc.) the problem will
expand in parallel, but we also must consider the existing large
financial institutions, credit unions, healthcare operators,
ecommerce retailers that run mission-critical business applications
online. Even our 2008 U.S. presidential candidates are having
trouble securing their campaign websites against amateur attackers.
It's interesting how Jeremiah views "secure software" as a battle cry.
For many security-focused developers, this isn't a war -- it's just a
way of coding properly. Maybe he pictures that the war is "secure
software vs. WASS+WAF", which from his wallet's perspective -- might be
right. I have my own trouble separating application penetration-testing
from general application security, but my case is nowhere near as bad
as Jeremiah's.
The one thing about the above paragraph that is potentially very sad is
that he calls XSS bugfinders "attackers" -- "amateur attackers" at that.
There were no real attacks against the presidential candidates' websites
-- there were just some vulnerability findings. No exploits were written
or used. Jeremiah really has a way of twisting words around -- maybe he
should be working for one of the presidential candidates!
Application security vs. Application penetration-testing
Some of us choose to focus our efforts on penetration-testing -- finding
bugs in the code that can be used as an exploit. Others focus just on
building the code with security in mind -- to enhance security. This is
an important distinction.
In a recent presentation entitled [PDF] Code Scanning: Success and
Failure in the
Field, Alex
Stamos discussed some differences between false-positives and
non-exploitables. Sure, black-box web application scanners, including
SaaS vendors such as WhiteHatSec indeed find exploitable conditions.
This comes at a serious cost.
Problems with black-box web application security scanners, including and
especially WhiteHatSec:
- The penetration-test runs unencrypted over the Internet, exposing not
only a MITM condition, but various types of proxy and logging
problems
- Anyone in this path -- present or future -- may gain (potentially
illegal) access to these exploits, pre-built for them, so that almost
no knowledge or expertise is required on their part to run them
- Changing an exploit so that it bypasses WASS+WAF is often trivial
- Use of an encrypted VPN or testing on the local LAN does not settle
this problem, it only protects some of the path involved
I think Jeremiah said it best himself:
The techniques used by the modern cyber-criminal are truly scary.
They're backed by mafia, supported by nation states, and often even
carried out by, or in conjunction with, rogue insiders.
What I propose is that it is safer and easier to avoid the
exploitability arguments. Who cares if something is exploitable or not?
A better question is: how obviously secure is the code?
Advantages of security unit testing, secure code review, and white-box
dynamic analysis:
- No exploits means that no rogue insiders can steal them and give them
to adversaries
- Source code is full-knowledge. There is nothing "black-box" about it,
so every software weakness and vulnerability can theoretically be
found
- These practices encourage finding security-related bugs
"accidentally", which includes new classes of vulnerabilities (often
referred to as software weakness research)
Certainly, I have some ideas and products in mind when I think of true
application security tools: security unit test frameworks (which don't
exist), security code review tools, and white-box dynamic analysis, or
hybrid/composite analysis. However, the primary focus should be on the
expertise needed to perform application security tasks, the process in
place to allow individuals and teams to rise to the occasion, and
guidance/governance from organizational figureheads and leaders.
It is unreasonable to expect publishers, enterprises and other site
owners to restart and reprogram every website securely from scratch.
Nor can we fix the hundreds of thousands (maybe millions) of custom
Web application vulnerabilities one line at time.
Jeremiah thinks that developers work with source code one line at a
time. They don't. Modern developers utilize techniques such as
metaprogramming, code generation, templating, and model-driven
architecture. They're programmers, why wouldn't they write programs to
help them develop other applications?!
Some web applications are so legacy, that they require re-writing from
scratch -- however we don't have numbers or statistics on this amount.
Also note that if Jeremiah is going to count only SSL web applications
as important -- then he should also include them in these numbers as
well.
Developers have been using unit testing frameworks, IDE features, and
processes such as iterative programming, Extreme programming, and Agile
to help them refactor their applications for quite some time now.
Refactoring does not require re-writing from scratch. With refactoring,
developers can restructure the design of their applications by tweaking
small parts of the code. Dependency injection, Aspect-oriented
programming, and Attribute-oriented programming make this faster -- as
do general development concepts such as Design-by-contract, Test-driven
development, Reflective programming, and many others. Some of these
practices don't even require use of an object-oriented language -- let
alone an Enterprise web application programming language such as Java
Enterprise or ASP.NET.
There are numerous books on refactoring the Web, databases, and specific
programming languages. Some languages have used metaprogramming to build
refactoring, unit testing, TDD, and many other quality/security-control
concepts into the entire framework -- such as Rails for Ruby.
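A concrete taste of what metaprogramming buys you for security: instead of editing every call site one line at a time, a decorator can bolt a Design-by-contract precondition onto existing functions. A sketch (the `lookup_user` function and pattern are made-up examples, and parameterized queries remain the right fix for SQL; the point is the retrofit mechanism):

```python
import functools
import re

def validated(pattern: str):
    # A Design-by-contract style precondition applied via metaprogramming
    # (a decorator), so the check lives in one place, not at every call site.
    compiled = re.compile(pattern)
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(value: str):
            if not compiled.fullmatch(value):
                raise ValueError("input failed precondition: %r" % value)
            return fn(value)
        return wrapper
    return decorator

@validated(r"[A-Za-z0-9_-]{1,32}")
def lookup_user(username: str) -> str:
    # Reached only with vetted input; a real fix would also use
    # parameterized queries rather than string formatting.
    return "SELECT * FROM users WHERE name = '%s'" % username
```

This is the refactoring argument in miniature: one decorator, applied mechanically across a codebase, tightens hundreds of entry points without rewriting any of them from scratch.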
Our pond is actually an ocean of code in need of security defect
purification and the dams in the rivers feeding it have holes
requiring patches. In many ways, the state of Web application
security is where we started a decade or so ago in network security
when no one really patched or even had the means to do so.
I dislike how Jeremiah fails to bring this analogy back around in order
to prove any point. If WASS+WAF is supposed to signify blocking the
inflow of water, this neither cleans up the already dirty pond, nor does
it prevent the acidic/polluted water from immediately disintegrating the
wooden plug that is supposed to stop the inflow.
This approach lets us mitigate the problem now giving us breathing
room to fix the code when time and budget allow. Of course there is
still the option of waiting the next 10 years for the Web to be
rebuilt.
If classic firewalls and virtual-patching didn't work the first time
around -- what makes people think they're going to work now?
The web does not require 10 years to be rebuilt -- especially not the
SSL web. It requires smart developers with metaprogramming, refactoring,
and high-efficiency skills that can be focused towards security. Do not
hire cowboy coders. Hire developers that can utilize and spread TDD,
Design-by-contract, metaprogramming, and code generation concepts and
tools throughout your organization. Hire application security experts
that can work with these super-developers. Train and promote modern,
secure development practices for every newbie developer, veteran
developer -- and every network, application, or information security
professional.
Posted by Dre on Thursday, September 11, 2008 in
Security.
The bad:
- It's a front-end to WebKit much like Safari, with no bells or
whistles
- The only add-ons are Web Inspector (from WebKit), Chrome's own Task
Manager, and Chrome's own JavaScript debugger (they could have at least
used Drosera, which comes with Web Inspector / WebKit)
- The Google Updater software it installs runs as a separate process,
is not a service, and installs itself into the registry to startup at
boot
- Privacy policy and default configuration should scare all of us worse
than Mozilla's
- Appears to somewhat utilize the Google Desktop API
- Wouldn't let me install Scroogle as the default search
The good:
- It does separate tabs by process. It gives them different Windows
PIDs, but the parent is still a Chrome process. I am guessing this
isn't secure on XP, but on Vista it might be fairly solid
- Appears to support Flash, Java, QuickTime, et al out-of-the-box
(note: this makes it just as secure as Internet Explorer 7 or Firefox
on Vista, which we all know have at least a few variations of attacks
and exposure to at least some vulnerabilities)
- Does allow future search engines that conform to
opensearch.org
My analysis:
Google Chrome is DOA (dead on arrival). Nobody is going to use a browser
with such poor add-on support and such an unpolished feel. However, I
agree with others' assessments: hopefully Google Chrome will make
Mozilla, Microsoft, and Opera aware of features such as tab-process
separation (so that web application developers can also use this
functionality).
Why didn't Google just do a request-for-comments or a peer-reviewed
paper/presentation? What's the point of this loosely running code? I'm
not sure yet, but it is possible that Google has left something out in
their announcements and/or plans for this product.
From a risk assessment perspective, I can tell you that my
threat-modeling spider sense went off from the moment of the download,
was piercing my ears during the install, and became overstimulating
during runtime. If security is the goal of this product, I'm afraid that
Google has definitely failed.
Posted by Dre on Tuesday, September 2, 2008 in
News,
Security and
Tech.