Encrypting files with SSH keys

Chicken and egg: need to securely send a colleague VPN connection info before they’re on the VPN. We all have SSH keys!

Found this nice GitHub Gist on this topic. It boils down to the following.

Converting a public SSH key to PKCS8

$ ssh-keygen -e -f /path/to/pubkey -m PKCS8 > /path/to/pubkey.pkcs8

Generate a random key

$ openssl rand -out key 192

Use random key to encrypt a file

$ openssl aes-256-cbc -in secret.txt -out secret.txt.enc -pass file:key

Encrypt random key with PKCS8 SSH key

$ openssl rsautl -encrypt -pubin -inkey /path/to/pubkey.pkcs8 -in key -out key.enc

Glob up both files

$ tar -zcvf secret.tgz *.enc


Decrypt on the receiving end

$ tar -xzvf secret.tgz
$ openssl rsautl -decrypt -inkey ~/.ssh/id_rsa -in key.enc -out key
$ openssl aes-256-cbc -d -in secret.txt.enc -out secret.txt -pass file:key
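To sanity-check the whole flow end to end, here is a sketch of the round trip using a throwaway RSA key pair generated with openssl (standing in for a real SSH key pair; all file names here are examples):

```shell
# throwaway key pair standing in for ~/.ssh/id_rsa and its PKCS8 public key
openssl genrsa -out demo_key.pem 2048
openssl rsa -in demo_key.pem -pubout -out demo_key.pub.pkcs8

echo 'vpn host: 10.0.0.1' > secret.txt

# sender: random key, symmetric-encrypt the secret, RSA-encrypt the key
openssl rand -out key 192
openssl aes-256-cbc -in secret.txt -out secret.txt.enc -pass file:key
openssl rsautl -encrypt -pubin -inkey demo_key.pub.pkcs8 -in key -out key.enc

# receiver: RSA-decrypt the key, then symmetric-decrypt the secret
openssl rsautl -decrypt -inkey demo_key.pem -in key.enc -out key.dec
openssl aes-256-cbc -d -in secret.txt.enc -out secret.txt.dec -pass file:key.dec

# the decrypted file should match the original
diff secret.txt secret.txt.dec
```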

Flexible Python logging

Here’s a template for a very flexible logging configuration:

#!/usr/bin/env python
"""Program description"""
import logging
import os
import sys

from logging.config import dictConfig

logger = None

logging_config = {
    "version": 1,
    "disable_existing_loggers": False,

    "formatters": {
        "simple": {
            "format": "%(asctime)s %(name)s:%(lineno)d %(levelname)s %(message)s"
        },
    },

    # this configuration applies to all loggers that are not listed in `loggers` below
    "root": {
        "handlers": ["console"],
        "level": "INFO",
    },

    # configure logging for specific loggers
    "loggers": {
        # this logger is for logging within this file
        "__main__": {
            "level": "DEBUG",
            "propagate": False,
            "handlers": ["console"],
        },
        # only log warnings in package foo
        "foo": {
            "level": "WARN",
            "propagate": False,
            "handlers": ["console"],
        },
    },

    "handlers": {
        # send logs to the console on stdout
        "console": {
            "stream": "ext://sys.stdout",
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simple",
        },
        # this handler throws stuff away
        "drop": {
            "class": "logging.NullHandler",
            "level": "DEBUG",
        },
    },
}

SCRIPT_NAME = os.path.basename(sys.argv[0])

if __name__ == '__main__':
    dictConfig(logging_config)

    logger = logging.getLogger(__name__)

    # your stuff here

Now, any time your code needs to log something, request a logger:

logger = logging.getLogger(__name__)

When the line above is used in the module foo/bar.py, only warnings and errors will be logged to stdout, per the foo entry above.

The same line in the module something/else.py will log info and above, per the root logger configuration.

Finally, anything using the logger in the script itself will log debug and above per the __main__ entry.

Scheduling jobs on Mac OS

On Mac OS launchd is the process manager. It can run background processes for the system as well as for each individual user. User-specific jobs are kept in ~/Library/LaunchAgents/.

Running a job on an interval

Here is a template for running a job on an interval. The example below runs the given program every minute. Copy and paste the following in a file named com.example.JobLabel.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.JobLabel</string>
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/program</string>
    </array>
    <key>StartInterval</key>
    <integer>60</integer>
</dict>
</plist>

The Label string can technically be anything, but keeping it in sync with the filename will make your life a lot easier.

Additionally, there are two ways to specify the program to run:

  • The Program key
  • The ProgramArguments key

After debugging something stupid (specifying both Program and ProgramArguments in the same file), I think it’s best to use ProgramArguments exclusively and pretend Program does not exist. When the program takes no arguments, simply use a single <string> in the <array>.
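For example, a ProgramArguments-only entry for a program that takes no arguments is just a one-element array (the script path here is hypothetical):

```xml
<key>ProgramArguments</key>
<array>
    <!-- hypothetical no-argument program -->
    <string>/usr/local/bin/backup.sh</string>
</array>
```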

Kubernetes on AWS

The AWS cluster configuration script that ships with Kubernetes works pretty much flawlessly, with two exceptions. First, make sure curl is installed on the system you are bootstrapping the cluster from. Second, make sure the awscli package is recent.

The Debian AWS AMI does not come with curl installed, and its awscli package is an old version: aws-cli/1.4.2 Python/3.4.2 Linux/3.16.0-4-amd64. After running pip install --upgrade awscli you should see a version of at least aws-cli/1.10.24 Python/2.7.9 Linux/3.16.0-4-amd64 botocore/1.4.15.

Good times.

Custom Nginx error pages on upstream responses

Digital Ocean has a great post on setting up custom error pages for errors nginx itself encounters. However, that setup does not work when a request is successfully passed to an upstream server and the upstream response is a 50x error. To process upstream error responses as well, proxy_intercept_errors on; must be added to the configuration (thank you, Stack Overflow!).

The server block looks like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        proxy_intercept_errors on;
        error_page 500 502 503 504 /custom_50x.html;
        location = /custom_50x.html {
                root /usr/share/nginx/html;
        }
}
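The error_page directives only take effect on requests that actually reach an upstream, via a location such as the following; a minimal sketch, assuming a hypothetical upstream listening on 127.0.0.1:8000:

```nginx
location / {
        proxy_pass http://127.0.0.1:8000;
}
```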

InfluxDB on Raspberry Pi

I found a blog post by Aymerick describing how to build InfluxDB on a Raspberry Pi. Here’s what I did to get it working.

Install prerequisites

$ sudo apt-get install -y bison ruby2.1 ruby-dev build-essential
$ sudo gem2.1 install fpm

Install gvm

This installs gvm for the current user:

$ bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)

Setup Go

$ gvm install go1.4.3
$ gvm use go1.4.3 --default

Create an influxdb package set

$ gvm pkgset create influxdb
$ gvm pkgset use influxdb

Build InfluxDB

$ go get github.com/sparrc/gdm
$ go get github.com/influxdata/influxdb
$ cd ~/.gvm/pkgsets/go1.4.3/influxdb/src/github.com/influxdata/influxdb
$ gdm restore
$ go clean ./...
$ go install ./...

The ./package.sh command did not work for me, so I settled for the influxd and influx binaries in ~/.gvm/pkgsets/go1.4.3/influxdb/bin.

Mac + Docker Toolbox + HAProxy

Working with Docker Toolbox on a Mac is great. It makes it very easy to work with Docker containers and to set up interconnected services with Docker Compose. However, since the services run in a virtual machine, accessing them from a machine other than the Mac itself is not possible out of the box.


Installing and running HAProxy on the Mac proxies traffic from external hosts looking to access the service over to the VM. I installed haproxy via Homebrew with the command brew install haproxy.

With HAProxy installed, I set it up to proxy HTTPS traffic over to the VM; in my case, the VM and the Mac each had their own IP address. I saved the configuration below at /usr/local/etc/haproxy/haproxy.cfg.

global
  log  local0
  log  local1 notice
  maxconn  4096
  chroot   /usr/local/share/haproxy
  uid  99
  gid  99

defaults
  log   global
  mode  tcp
  option  dontlognull
  retries  3
  option  redispatch
  option  http-server-close
  maxconn  2000
  timeout connect  5000
  timeout client  50000
  timeout server  50000

frontend www_fe
  use_backend www

backend www
  timeout server 30s
  server www1
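For reference, a complete frontend/backend pair includes a bind address and a backend server address. The addresses below are placeholders (assuming the docker-machine VM at 192.168.99.100 and HTTPS on port 443), not the ones from my setup:

```
frontend www_fe
  bind :443
  use_backend www

backend www
  timeout server 30s
  server www1 192.168.99.100:443 check
```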

Launch daemon

With haproxy configured, I placed the launchd configuration at /Library/LaunchDaemons/com.zymbit.haproxy.plist and ran the command sudo launchctl load -w /Library/LaunchDaemons/com.zymbit.haproxy.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.zymbit.haproxy</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/haproxy</string>
        <string>-f</string>
        <string>/usr/local/etc/haproxy/haproxy.cfg</string>
    </array>
</dict>
</plist>

Starting and stopping the job

Now, whenever I want external access to the Docker service, I run launchctl start com.zymbit.haproxy, and when I’m done, launchctl stop com.zymbit.haproxy.