Hack2o

I had a really good time meeting some new faces and some I had known of but hadn’t chatted with before. One gent, Cory Dingels, let me watch over his shoulder and showed me some nifty things about Rails for a while as he worked on the Yellow Bike tracking site.

Being the renegade that I am, I struck out on my own to pursue a dream: a dream of encrypted email storage. This idea started months (years?) ago when I read about how Lavabit’s innards worked. While certainly an impressive feat, it left something to be desired in terms of how secure the emails actually were. Since then, though, ProtonMail has launched, and they claim to do what I had envisioned and then some.

But what if you wanted to run your own mail services?

Since I was working alone, I figured the best way to make something that worked by the end of the weekend would be to simply have emails be encrypted and then forwarded on to an existing account somewhere else. Easy peasy, right? It was…except for my really bad mistake, which took me a good chunk of Saturday to figure out. Getting Haraka up and running was simple enough. Wiring up openpgp.js and other modules to make the job easier was a breeze. (mailcomposer made composing emails super easy, as I hate having to concat strings myself.)

The mistake was sending off just the encrypted message without any sort of headers. Google doesn’t like messages sent like that. Once that was corrected, my logs were less error-y and messages were showing up where they were supposed to.
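For illustration, the shape of the fix looks something like this; `wrapEncrypted` and the addresses are hypothetical stand-ins for what mailcomposer handled in the real plugin:

```javascript
// Hypothetical sketch of the fix: give the encrypted payload real message
// headers before forwarding. In the actual plugin, mailcomposer builds this.
function wrapEncrypted(from, to, ciphertext) {
  return [
    'From: ' + from,
    'To: ' + to,
    'Subject: Encrypted message',
    'MIME-Version: 1.0',
    'Content-Type: text/plain; charset=utf-8',
    '',         // blank line separates headers from the body
    ciphertext  // the ASCII-armored PGP block goes here
  ].join('\r\n');
}
```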

While I’m sure the code will be found wanting, I’m pretty proud that it was working before the final check-in. (Full disclosure: Gmail placed the emails I had the audience send into the spam folder, so the demonstration part of the presentation failed.)

You can check out the code, which is hosted on github, at https://github.com/snoj/haraka-secwrap/releases/tag/v1.

Hoxy Proxy

Ever wish you could shim into a “live” site and do some testing on it? Hoxy is for you! It now even supports HTTPS sites, thanks to a couple of awesome coders: Greg Reimer (the founder), Francois Ward, and yours truly. A special thanks to Seth Holladay for helping move the issue forward with his bounty!

Over-thinking #3: Restarting a node.js process

Sure, you could use something like forever, but what if you want things as self-contained as possible?

It’s very ugly and breaks stdio, but it works!

var cluster = require('cluster');
var _ = require('underscore');
var spawn = require('child_process').spawn;
if(cluster.isMaster) {
  //var cluster_args = 
  var runningragnarok = false;
  var msghandler = function(msg) {
    if(msg === 'rebirth') {
      _.each(cluster.workers, function(v) {
        v.kill();
      });
    }
    if(msg === 'ragnarok' && runningragnarok === false) {
      runningragnarok = true;
      var nargs = process.argv.slice(); //copy so process.argv itself isn't mutated
      nargs.splice.apply(nargs, [1, 0].concat(process.execArgv));
      if(!_.contains(process.argv, '--ragnarok')) {
        nargs.push('--ragnarok');
        nargs.push(5000);
      }
      _.each(cluster.workers, function(w) { w.kill(); });
      spawn(nargs[0], nargs.slice(1), {detached: true, stdio: ['ignore', 'ignore', 'ignore']});
      process.kill(process.pid);
    }

    if(msg === 'heatdeath') {
      process.kill(process.pid);
    }
  };

  cluster.on('exit', function() { if(runningragnarok) return; cluster.fork().on('message', msghandler); });

  setTimeout(function() {
    _.each([1,2,3,4,5], function() {
      var f = cluster.fork();
      f.on('message', msghandler);
    });
  }, 5000);
  //_.find(process.argv, function(v, i, a) { return i > 0 && a[i-1] === '--ragnarok'; }) || 5000
  return;
}
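The fiddly part is the `splice.apply` line: it rebuilds the command so the respawned process keeps the same V8 flags. In isolation, with stand-ins for `process.argv` and `process.execArgv`:

```javascript
// Rebuild the spawn arguments the way the master above does.
var execArgv = ['--harmony'];           // stand-in for process.execArgv
var nargs = ['node', 'app.js'].slice(); // copy; stand-in for process.argv
nargs.splice.apply(nargs, [1, 0].concat(execArgv));
// nargs is now ['node', '--harmony', 'app.js']:
// nargs[0] is the executable, nargs.slice(1) are the args handed to spawn().
```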

Over-thinking #1: Node.js HTTP requests

Something I’ve been toying with is a blog series of tips and tricks, but mostly of horrible hacking away and over-thinking, highlighting the stupid things I do. These things will likely come from my work or simple curiosity.

Without further ado, here’s #1.

A couple of weeks ago, I needed to migrate a web server and test the sites before going live. Due to a variety of constraints, editing the hosts file, using something like DNShifter, or editing the hostnames for the vhosts was out of the question. What is a guy to do? Thinking over the problem, I figured node.js would be the quickest route to writing a testing routine.

The first problem, and probably the biggest, was to construct the HTTP request in such a way that I would connect to a different host than the hostname would otherwise send me to. Looking at the node.js code on github made me think it was going to be a piece of cake: just a couple of additions to the /lib http files would allow me to specify the actual host to connect with.

//around /lib/http.js:1425
else if(options.connection) {
  self.onSocket(options.connection);
}

This allowed me to use a specific socket made by net.createConnection. However, this is clunky and I’d have to maintain a copy of the mainline http with this and all the other necessary code changes. Obviously this is more work in the long run and my future self is lazy.

Thankfully, the folks who wrote the http module decided to check whether the options object for http.request() has “createConnection” defined and, if so, use it to initiate the TCP stream. This makes the task much easier and should work for the foreseeable future.

var url = require('url').parse("http://example.com/");
//Connect to snoj.us:80 while the request itself still targets example.com.
url.createConnection = require('net').createConnection.bind(null, 80, "snoj.us");
require('http').request(url, function(res) {
  res.setEncoding('utf8');
  res.on('data', function (chunk) {
    console.log('BODY: ' + chunk);
  });
}).end();

And of course, since drafting all this drivel, I’ve found that wget (starting with 1.10), Invoke-WebRequest, and node.js all allow the Host header to be specified, and each works excellently. However, I still like this technique, as it allows you to leave the original URL in place while forcing a connection to another server. Using custom headers means editing the URL, which may or may not be doable in some situations and calls for more code changes to accomplish the same end.

Your very own internet speed test in NodeJS

A couple of weeks ago I needed a non-Flash internet speed test and came across SpeedOf.Me, which is pretty cool in that it’s only HTML and Javascript. Then this last week I needed to test some VPN speeds, but couldn’t find anything simple and easy to quickly run on a server. So I came up with my own NodeJS speed tester.

So far it seems fairly accurate despite the poor coding.

Grab the code on github to try it out yourself.

Detecting your public IP address

There are times when I’m hacking together something and I need to know my public IP address. I could hard code it in, but where’s the fun in that?

Probably the best IP reporter I’ve seen so far is ifconfig.me, but they lack IPv6 support at the moment. Since I want this, I decided to make my own.

You can find it at wm.snoj.us. It’s rather simple right now and I don’t see myself adding too much. If you would like some other type of information, or a different way of presenting it, leave a comment or email me.

IPv6 using Charter’s 6RD on Ubuntu behind an IPv4-only router

Finally figured out how to get a 6rd tunnel set up with Charter. My problems were that 1) I wasn’t paying attention to the examples and 2) I have my IPv6 router behind an IPv4-only router. So unlike the examples, I needed to use the private IP address instead of the public one for the tunnel. (Like you do for 6in4.)

You can find my setup script here.

In my /etc/network/interfaces I added the following to my eth0 interface.

post-up /etc/network/6rd || echo 1;
pre-down ip tunnel del tun6rd || echo 1;

In the original script, the author has PREFIX:0::1/32 assigned to the external interface and PREFIX:1::1/64 assigned to the inside. I’m not sure of the reasoning for this, as both reside in the same delegated prefix. To me, it would make more sense to use PREFIX:: for the outside and PREFIX::1 for the inside so they are right next to each other.
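For reference, the delegated prefix in 6rd is derived by embedding the 32 bits of your public IPv4 address after the ISP’s 6rd prefix. A sketch of the arithmetic (the `2602:100` prefix is what Charter used at the time, but verify your ISP’s actual 6rd parameters):

```javascript
// Derive a 6rd delegated prefix from an ISP /32 prefix and a public IPv4.
function sixrdPrefix(ispPrefix, ipv4) {
  var hex = ipv4.split('.').map(function(o) {
    return ('0' + Number(o).toString(16)).slice(-2); // each octet as 2 hex digits
  }).join('');
  // 32 ISP bits + 32 embedded IPv4 bits = a /64 you can number hosts from.
  return ispPrefix + ':' + hex.slice(0, 4) + ':' + hex.slice(4) + '::';
}
// sixrdPrefix('2602:100', '203.0.113.5') -> '2602:100:cb00:7105::'
```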

I hope this helps other Charter customers figure out their own ‘native’ IPv6 connectivity.

SnojNS = DNSHifter

It’s been a while since the last SnojNS update. Been working on a lot of other things lately, like a baby…and another one that’ll be here any day now.

SnojNS is now going to be called DNShifter, thanks to my good buddy ivorycruncher. He wins Bacon Salt. I also may be rewriting it in Javascript using nodejs….maybe. So far things are going okay, but I’m running into issues with XML. It seems like nodejs doesn’t have built-in support or an easy-to-install library* that’ll let me do the crazy stuff I was able to do in C#. So for the time being, the test code relies on some “fancy” handling of Javascript objects to provide a setup similar to an XML document.
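The “fancy” handling amounts to something like this (an illustrative shape, not the actual test code): nested objects stand in for elements, and a `$` key for attributes.

```javascript
// Illustrative stand-in for an XML config document.
var config = {
  servers: {
    server: [
      { $: { name: 'google', address: '8.8.8.8' } },
      { $: { name: 'local',  address: '192.168.1.1' } }
    ]
  }
};

// An XPath-style lookup becomes plain array traversal.
var addr = config.servers.server.filter(function(s) {
  return s.$.name === 'local';
})[0].$.address; // '192.168.1.1'
```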

It is definitely not pretty, but here’s a code dump.

Some of the reason for exploring nodejs is that it makes it very easy to have one codebase for multiple operating systems. Sure, Mono can be used to run C# on Linux, but it’s a really big package to install. There are also issues with some namespaces and classes not being implemented 100% the same, or even at all. I suppose nodejs can have the same issues, but my biggest concern is how easily one can account for the differences.

That and the code is currently sitting on a backup hard drive from when I installed Server 2008 on the laptop and I haven’t restored it yet.

Speaking of the C# version: the last time I worked on it, I finally abstracted the code so listeners for IPv4 or IPv6 could be used. Work also began on using an ssh connection to do lookups using nslookup or dig. I may kill this feature in favor of what I’m currently dubbing DNSXML: basically, using an httpd server with something like php to do the lookups and send back the results using xml for structure. Doing so would make the encryption of the data (using https or ssh socks/port redirection) easier and cross-platform. And by easier I mean, “I’m lazy and I don’t want to have to deal with that mess.”

*By install, I mean having the necessary library files in the same folder and a simple require('xml_library'). In other words: no NPM, and it works on all OSes without installing things like cygwin.

AAAA tale of two DNS

Since getting IPv6 up and running, I’ve been trying to figure out a way to map domain names to hosts, whether they have statically assigned addresses or DHCP/radvd-generated ones. Additionally, I didn’t want to purchase a new domain. Instead I opted to create a new subdomain and delegate authority to my home IPv6 router’s name server.

In my first attempt, I used a public FQDN. This presented a problem when using BIND’s allow-update, as the private IPv4 addresses were now published publicly, which doesn’t help when trying to access my home computers. After some digging I found update-policy, but this required that each host make use of DNSSEC/TSIG/SIG…something I couldn’t guarantee on my network…yet. So it was back to allow-update.

A couple of days later, after some further thought, I settled on using a .local domain and, via a script, copying the AAAA records to a public domain. This solution gives me easy access to my servers without exposing the private IPv4 addresses. Even better, the script can be extended to include additional records or rules. For instance, don’t want to map Android phones? Cisco switches? Cross-reference the IPs to MACs and filter away.

Zone config

//public
zone "home.example.com" {
	type master;
	file "/var/lib/bind/master/home.example.com.conf";
	//Only allow the updates from the local machine.
	allow-update { localhost; };
	//Only allow the axfr from the local machine.
	allow-transfer { localhost; };
};

//private
zone "home.example.local" {
	type master;
	file "/var/lib/bind/master/home.example.local.conf";
	//Allow local network hosts with static addresses to update the zone.
	allow-update { LocalIPv6/64; LocalIPv4/24; localhost; };
	//Only allow the axfr from the local machine.
	allow-transfer { localhost; };
};

The actual zone files are your regular zone files, nothing special.

Script

The script below updates the public zone with the AAAA records from the private one. It runs as a cron job only once an hour, as I’m not motivated enough at the moment to set up a higher-resolution schedule.

#!/usr/bin/php
<?php
//get the aaaa records that have been registered with the local domain...minus the ns records.
$c = "dig @::1 home.example.local. axfr | grep AAAA | grep -v ns.home.example.local";
$out = array();
exec($c, $out);
$hosts = array();

//build array with hostnames for keys pointing to an array of associated ipv6 addresses.
foreach($out as $v) {
	$hn = substr($v,0,strpos($v,'.'));
	if(!isset($hosts[$hn])) {
		$hosts[$hn] = array();
	}
	$ipv6 = preg_split("/( |\t){1,}/", $v);
	$ipv6 = $ipv6[count($ipv6)-1];
	$hosts[$hn][] = $ipv6;
}

//Now take that array and pump it into nsupdate.
foreach($hosts as $k => $v) {
	$cmds = array("echo server ::1", "echo zone home.example.com", "echo update delete {$k}.home.example.com. AAAA");

	foreach($v as $ipv6) {
		$cmds[] = "echo update add {$k}.home.example.com. 86400 AAAA {$ipv6}";
	}
	$cmds[] = "echo send";
	$cmd = implode("\n", $cmds); //plain \n so no stray \r reaches nsupdate
	exec("(". $cmd . ") | nsupdate", $out2);
	//var_dump("(". $cmd . ") | nsupdate");
	//var_dump(implode("\r\n", $out2));
}
?>
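For a host named `foo` with two addresses, the stream the script pipes into nsupdate looks like this (addresses illustrative):

```
server ::1
zone home.example.com
update delete foo.home.example.com. AAAA
update add foo.home.example.com. 86400 AAAA 2001:db8::1
update add foo.home.example.com. 86400 AAAA 2001:db8::2
send
```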