kuza55 noted this morning that Firefox 2.0.0.5 has implemented support for httpOnly cookies. It's not perfect, as ma1 pointed out in the comments, but it's better than nothing.
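For illustration, a server marks a cookie HttpOnly simply by appending the flag to the Set-Cookie response header (the cookie name and value here are made up); a browser that honors the flag then keeps the cookie out of reach of script such as document.cookie:
Set-Cookie: SESSIONID=d41d8cd98f00; path=/; HttpOnly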
The Firefox browser could be made even more secure by building NoScript, LocalRodeo, CookieSafe, SafeHistory, and SafeCache into the Firefox codebase. In addition, an option to run only signed Java(Script) should be developed.
For more on httpOnly cookies, check out Mitigating Cross-Site Scripting With HTTP-only Cookies and also Why HttpOnly won't protect you.
Posted by Marcin on Thursday, July 19, 2007 in Privacy, Security and Tech.
I love wikis. I've been working on a security portal at work and it just got so much better with the addition of embedded RSS feeds. With this extension, I've embedded the Security Whitelist and Aggregated Vendor and Security News Sites pipes on the front page. This gives our team the ability to check the latest news happening inside and outside of our company. I've also added the Web Application Security Feed in a separate portal for our web app guys.
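For the curious, here's roughly what feed embedding looks like with a MediaWiki-style RSS extension; the tag syntax and feed URL are just illustrative, not necessarily what we run:
<rss>http://example.com/security-news/feed.xml</rss>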
Within our policy pages is an embedded RSS feed that links to the latest file uploads to the policy folder on the network. We also have a page dedicated to vendor security and information bulletins.
If you don't have a wiki yet, start one. You can always worry about organizing the information later, and the sooner you start using a wiki, the easier things will be. Sharing information across your department will be more enjoyable. I'm actually having fun with this... :)
Posted by Marcin on Wednesday, July 18, 2007 in Tech and Work.
C'mon guys, why in the hell are you releasing a .1 just to fix four lines of code? I realize that an exploit in netfilter could be a serious
issue, but netfilter doesn't belong in the kernel to begin with; it
should be userland code. Grrrr. This is exactly why I have been a
FreeBSD zealot for so long. You don't see FreeBSD posting a new release to fix one small problem like a null-pointer dereference.
Anyway, go patch your kernels.
patch-2.6.22.1.bz2
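If you've never applied an incremental kernel patch by hand, it's the usual routine (assuming your 2.6.22 source tree lives under /usr/src):
$ cd /usr/src/linux-2.6.22
$ bzip2 -dc /path/to/patch-2.6.22.1.bz2 | patch -p1
Then rebuild and reboot.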
Posted by Casey on Wednesday, July 11, 2007 in Linux and Security.
Back in January, I asked Richard Bejtlich in an email to post some tips for reading books. Reading technical books can be a drag at times, yet somehow he manages to get through several a month. Reading is one of those tasks we all have to do in our line of work, for obvious reasons. Well, I came across another blog post with another point for reading:
All that I really needed from my first trip through the book was to
know what was possible, not exactly how to do it.
My father always told me: if you don't understand something the first time through, read it again until you do. But sometimes, who needs to know it all on the first pass? I always forget why people keep technical books around... for reference. It's not like a novel you read once and never pick up again because you already know how the story goes.
Maybe this is why I have started about 6-7 books this year and haven't
finished any?
Posted by Marcin on Monday, July 9, 2007 in Books.
So your DNS team sends you the company's entire domain name inventory in a CSV (comma-separated values) file. You're tasked with port scanning those hosts to perform a network inventory and discover rogue services and other policy violations. It's simple to do this with a short list of domains and a small number of servers, but those responsible for entire data centers hosting thousands of domain names will need an efficient way to go about it. Since some IP addresses can host more than one domain, and some domains are hosted on multiple servers, a little scripting can help you out.
The script below will resolve all the hostnames in the CSV file to IP
addresses, and will only report back those that have successfully
resolved. In a sample list of ~650 domain names, only ~520 were actually
live systems. The rest were domains we owned but were not live yet.
#!/usr/bin/ruby -w
require 'resolv'

input  = File.open("hosts.csv", "r")
output = File.open("hosts2.csv", "w")
resolver = Resolv::DNS.new

# Write one "hostname,address" line per A record that resolves;
# names that don't resolve simply produce no rows.
input.each_line do |line|
  host = line.chomp    # chomp (not chop) strips only the trailing newline
  resolver.each_address(host) do |addr|
    output.puts "#{host},#{addr}"
  end
end
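For illustration, given a hypothetical hosts.csv with one domain per line (www.example.com, mail.example.com, and so on), hosts2.csv would come out looking something like this, with made-up addresses:
www.example.com,192.0.2.10
mail.example.com,192.0.2.10
shop.example.net,198.51.100.7
Note that one IP can appear many times thanks to virtual hosting, which is why the next step weeds out duplicates.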
Now, using a little shell action, you can do a sort | uniq on this list to get the unique IP addresses.
$ ruby resolve.rb
$ sed 's/,/ /' hosts2.csv | awk '{print $2}' | sort | uniq
That pipeline replaces the comma with a space, uses awk to print the IP address field, sorts the IPs, and then removes duplicate entries. From there, you can feed the list straight into Nmap for scanning. My list of ~520 live domain names was hosted on about 75 unique IP addresses, which sure beats scanning 650 domain names. =)
$ sed 's/,/ /' hosts2.csv | awk '{print $2}' | sort | uniq | nmap -sSU -A -P0 -p U:53,111,137,T:1-65535 -T4 -iL - -oX filename.xml
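To break that Nmap line down: -sSU runs both TCP SYN and UDP scans, -A turns on OS and version detection, -P0 skips the ping check since the list is already known to be live, -p U:53,111,137,T:1-65535 probes a few interesting UDP ports plus every TCP port, -T4 uses aggressive timing, -iL - reads targets from stdin, and -oX writes the results to XML.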
If you have any other tips relating to this post, please reply with a
comment! I'd love to hear feedback with better ways of making this
process even more efficient.
Posted by Marcin on Monday, July 9, 2007 in Security and Work.