How to Open Government?

So, this is something I have been thinking about for a while now.

I have friends who are supporters of open data writ large, for several reasons.  Use of open data techniques creates a general transparency for public analysis, which is good for citizens and for journalists who are trying to report on government and public affairs.  It makes it easier for staff and planning professionals to do their jobs.  And, at least theoretically, it makes it easier for citizens to supervise the actions of their elected officials -- it makes it harder to "hide the ball."

However, I am also an elected official in Crystal City -- a small town in Missouri.  And while all that sounds very nice, most of the efforts I see toward "open data" for government are directed at large cities or major metropolitan areas, not at cities our size.  Our staff does not include a web developer, and our web site is sadly out of date.  We have a high degree of vendor lock-in with our current administrative systems vendor, whose software handles everything from police bookings to water bills.  I have no idea what options they might offer, if any, that would make it easier to publish our data in an open fashion.

Complicating this is my lack of understanding of exactly what open data means.  It definitely seems to mean different things to different people.  Some are talking about crime reporting; some are talking about financial data; some are talking about statistics for things like broken sidewalks and potholes.
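
From what I can gather, in practice "open data" usually means publishing records in a machine-readable format, at a stable address, under terms that allow reuse.  For concreteness, here is a sketch of a single service request in the style of Open311, an actual open standard for exactly the potholes-and-sidewalks category; the field values are invented for illustration:

{
  "service_request_id": "2014-0001",
  "service_name": "Pothole",
  "status": "open",
  "requested_datetime": "2014-04-01T09:30:00-06:00",
  "address": "100 Mississippi Ave",
  "lat": 38.2217,
  "long": -90.3790
}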

So, let's say hypothetically that a small city with no technical staff wanted to participate in an open government / open data initiative.  Here are the questions I have:

  • What does that mean, in language that a typical elected official can understand?  Is there a standard for reporting formats that we can say we comply with?
  • What does that participation get for the local government, specifically, that it was not already getting?  (I've found that we can comply with standards more readily if it means extra grant money or matching funds.)
  • How would we go about doing this, given our lack of technical staff and funds to outsource those functions?  Can we do that with our current technology vendor? If not, who are the players who are providing standardized software for doing this?

I think most elected officials are interested in transparency, but we aren't sure how best to provide it, and we aren't sure what tangible benefits it might offer.  Therefore, it never rises to the top of the priority list.  I would appreciate any pointers or tips that open government folks can offer.

Docker on Raspberry Pi

I have an application I'm building that needs (well, "needs") to run on a Raspberry Pi.  Deploying new versions of a full-stack application to a Pi is a pain, because if you screw something up there's no out-of-band management like there is on a cloud server.  As a result, I've been trying to streamline my devops process to use Docker so I can leave the operating system alone and only change the containerized application.

Good News, Bad News

In case you hadn't noticed, running Docker on Raspberry Pi has gotten a lot easier.  In the latest Arch Linux for Raspberry Pi images, you can actually just:

pacman -S docker

...and it works.  All of the needed features are in the kernel, and the userland tools for Docker 0.10 are in the Arch Linux for ARM repositories.  While I am more comfortable in Debian-based distributions (like Ubuntu or Raspbian), Arch is good for this purpose because it's a much smaller, more barebones OS -- perfect as the host system underneath Docker containers.
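
From there, a minimal smoke test looks something like the following.  The image name is an assumption -- any ARM-built image will do, since x86 images won't run on the Pi:

# Start the daemon now and on every boot (Arch uses systemd).
systemctl enable docker
systemctl start docker
# Sanity-check the daemon, then run a throwaway container.
docker info
docker run -i -t resin/rpi-raspbian /bin/bash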

I'm having less luck, however, getting Arch Linux for Raspberry Pi to run in an emulated environment under QEMU.  I would have sworn I had it working a couple of months ago, using the Raspbian image and a kernel I got from XEC Design, but with the latest QEMU I can't seem to get it running again.
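
For reference, the recipe I was working from looks roughly like this -- the file names are placeholders for whichever Raspbian image and XEC Design kernel you downloaded:

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb \
    -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \
    -hda raspbian.img -no-reboot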

So What?

Docker is the new hotness in lightweight application containers.  Where today's mainstream virtualization emulates a separate operating system for each container, and for all practical purposes each container might as well be its own physical box, Docker's containers are more like isolated processes.  You can package up just as much environment as you need, and they can be instantiated so fast they act more like processes than like virtualized machines.
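
To make "just as much environment as you need" concrete: a container image is described by a short Dockerfile.  The sketch below is illustrative, not a tested recipe -- the base image and application file names are assumptions:

# A minimal sketch; base image and app files are assumptions.
FROM resin/rpi-raspbian
# Install only what the application needs, nothing else.
RUN apt-get update && apt-get install -y python
ADD app.py /srv/app.py
EXPOSE 8080
CMD ["python", "/srv/app.py"]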

As of right now, Docker is officially only supported on a few x86_64-based Linux distributions.  But it really only relies on Linux kernel features, most of which are portable, plus userland binaries that are written in Go.  Since the Linux kernel source is fairly portable across architectures -- at least to ARM -- and Golang is officially supported on ARM, there should be no reason why it can't work.  And indeed it does... mostly.
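
If you want to check whether a given ARM kernel actually has the container plumbing Docker needs, one rough test -- assuming the kernel was built with CONFIG_IKCONFIG_PROC, so that /proc/config.gz exists -- looks like this:

# Look for namespace and cgroup support in the running kernel's config.
zcat /proc/config.gz | grep -E 'CONFIG_NAMESPACES|CONFIG_CGROUPS|CONFIG_DEVPTS_MULTIPLE_INSTANCES'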

That's Great, But... So What?

Raspberry Pi is a great platform for fooling around, but deploying anything to it is kind of a pain.  They would be nice little machines for running low-resource applications, if not for that problem.  Docker apps can be packaged up and distributed fairly neatly, assuming that you've got a base operating system that supports running them.  Now that we do, you can deploy your application without worrying about corrupting the underlying operating system and tools.  That can be a big help if your Raspberry Pi is down in the basement, plugged into a sensor, with no keyboard or mouse hooked up.  I think development for Raspberry Pi and other ARM-based devices is going to get a lot more fun.
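
The deployment story then becomes something like the sketch below: build the image on the Pi or another ARM box (images are architecture-specific), ship it over SSH, and run it.  Names and ports are placeholders:

# Build the image where an ARM Docker daemon is available.
docker build -t myapp .
# Ship it to the Pi without needing a registry.
docker save myapp | ssh pi@raspberrypi docker load
# Run it detached; the underlying OS never changes.
ssh pi@raspberrypi docker run -d -p 8080:8080 myapp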

I'll be giving a talk about this at the St. Louis Docker Meetup on Wednesday, June 4th in Clayton, MO.

Bleeding Heart OpenSSL Vulnerability

Updated Friday April 11, 2014 with additional info about how the vulnerability affects embedded devices, client software and sites that use hardware SSL acceleration.

This is too long to tweet, so I'm putting it on my blog.  This is my understanding of the so-called "heartbleed bug":

  • It is a vulnerability in OpenSSL.
  • OpenSSL is used for cryptographic security by a wide range of web servers, SSH servers, and basically anything on Linux/UN*X-style operating systems.
  • The bug may have allowed an attacker to read your private key out of the server's memory without leaving any trace of the attack.
"Based on this morning's sample, it would be a Twinkie thirty-five feet long, weighing approximately six hundred pounds." - Egon Spengler in Ghostbusters (1984)

"Based on this morning's sample, it would be a Twinkie thirty-five feet long, weighing approximately six hundred pounds." - Egon Spengler in Ghostbusters (1984)

Because the security of all modern network encryption requires that the private key be kept private, that is "extraordinarily bad," as Mr. Spengler would say.  But there seems to be some confusion with regard to what needs to be done now to prevent any further damage.
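
The quickest first check is the version string -- Heartbleed affects OpenSSL 1.0.1 through 1.0.1f, and 1.0.1g has the fix (though distributions may backport the fix into an older-looking version number, so check your distro's security advisory too):

openssl version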

The SSH protocol is not SSL/TLS at all -- it does not use the TLS "heartbeat" extension where the bug lives -- and therefore is not susceptible.  So unless you're using the same private key for your web server and your SSH identity for some reason, your SSH keys should be safe.  (Unless you're running something that would be a high-value target, where someone might have used a man-in-the-middle attack based on knowledge of your private SSL key to compromise the entire system.)  Right?

Web servers are a different story.  It is not sufficient just to upgrade OpenSSL to a version that no longer has the bug.  Since a vulnerable system has potentially leaked its SSL/TLS private key, it's also necessary to re-key your SSL certificates with a new key, new certificate signing request, and new certificate issued by your SSL certificate vendor.  Right?
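
The re-keying itself is mechanically simple; here's a minimal sketch with placeholder file names (your certificate vendor's process for submitting the CSR and revoking the old certificate will vary):

# Generate a brand-new private key -- do not reuse the old one.
openssl genrsa -out example.com.key 2048
# Create a CSR for the new key, to submit to your certificate vendor.
openssl req -new -key example.com.key -out example.com.csr
# After the new certificate is installed, revoke the old one so the
# (potentially stolen) old key can't be used to impersonate you.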

All of this is terrifying, because the one thing that we take completely for granted in today's computing infrastructure is the unbreakability of cryptography.  When something that you rely on completely is suddenly proven not to be reliable, we all suffer.

Note that I do not buy into arguments that this bug implies that open source is inherently insecure.  Vulnerabilities like this may have existed in closed-source crypto implementations for years, and you would never have known it.  Unfortunately, the flip side is that just because a piece of security software is open source doesn't mean it's bug-free.

Update: Oh Crap

On Wednesday April 9th it became clear to me that it's not just web servers that are subject to this bug.

Embedded Devices. I used Filippo Valsorda's Heartbleed vulnerability checking tool to check the embedded device that we use for a WAN access point, and it proved to be vulnerable.  There were no firmware updates available to fix the problem, so I just disabled remote access to the HTTPS management interface.  But if you have any kind of "hardware" device that is running Linux or FreeBSD on the inside, it may be vulnerable.  Check it, patch it or disable WAN-side remote access, and then change your management password for that device.
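
If you can't reach a web-based checker from inside your network, a rough first pass is to see whether the device's TLS stack advertises the heartbeat extension at all.  The address is a placeholder; no output means the extension is absent (not vulnerable), while a hit means you should test further:

echo | openssl s_client -connect 192.168.1.1:443 -tlsextdebug 2>&1 | grep -i heartbeat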

Clients. Certain client software is also vulnerable to the bug.  I checked all the browsers that I use, and while most of them were not vulnerable, Google Chrome was -- but it only leaks memory 7 bytes at a time, so that's not too bad.  Most software I would use or write on the server side links to the OpenSSL dynamic library, so when your system package is upgraded, the problem is fixed.  If any "behind the scenes" client software you use statically links a vulnerable version of OpenSSL, though, you wouldn't know it was broken.  Also scary: tools commonly used in scripts and other server processes are affected too -- wget, curl, git, and nginx in proxy mode.  (Fortunately, those usually link to OpenSSL dynamically.)
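
One quick way to tell whether a given tool will pick up the system-wide fix is to check how it links.  A hit means it links OpenSSL dynamically; no output suggests a static link, a different TLS library, or no TLS at all:

ldd $(which curl) | grep -i libssl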

SSL Accelerator Cards. Most large-scale web sites use hardware acceleration for SSL, to offload the difficult math from the CPU to specialized hardware. It was suggested by a few people that users of such accelerators would not be subject to Heartbleed. I'm pretty sure this is not the case, because the way these things work is that OpenSSL handles the protocol, and the hardware does the cryptography.  Since the problem is in the protocol and not in the cryptography, doing the cryptography in hardware doesn't help. If the server is/was running a vulnerable version of OpenSSL, there still is/was a problem.

Passwords. Once you've patched/upgraded/disabled/re-keyed, about all that's left to do is a password audit.  Go through every username/password on every secure site you've used in the last two years, verify that the site has fixed the vulnerability, change your password, and make sure you are using different passwords on every site.  Personally I'm going to start using a password manager, probably 1Password from AgileBits, to generate strong passwords and sync them between the computers and devices that I use.