InfluxDB on Raspberry Pi

I found a blog post by Aymerick describing how to build InfluxDB on a Raspberry Pi. Here’s what I did to get it working.

Install prerequisites

$ sudo apt-get install -y bison ruby2.1 ruby-dev build-essential
$ sudo gem2.1 install fpm

Install gvm

This installs gvm for the current user:

$ bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)

Setup Go

$ gvm install go1.4.3
$ gvm use go1.4.3 --default

Create an influxdb package set

$ gvm pkgset create influxdb
$ gvm pkgset use influxdb

Build InfluxDB

$ go get github.com/sparrc/gdm
$ go get github.com/influxdata/influxdb
$ cd ~/.gvm/pkgsets/go1.4.3/influxdb/src/github.com/influxdata/influxdb
$ gdm restore
$ go clean ./...
$ go install ./...

The ./package.sh command did not work for me, so I settled for the influxd and influx binaries in ~/.gvm/pkgsets/go1.4.3/influxdb/bin.

Mac + Docker Toolbox + HAProxy

Working with Docker Toolbox on a Mac is great. It makes it very easy to work with Docker containers and to set up interconnected services with Docker Compose. However, since the services run inside a virtual machine, they are not reachable from machines other than the Mac itself out of the box.

HAProxy

Running HAProxy on the Mac forwards traffic from external hosts looking to access the service over to the VM. I installed haproxy via Homebrew with the command brew install haproxy.

With HAProxy installed, I set it up to proxy HTTPS traffic over to the VM. In my case, the VM was 192.168.99.100 and my Mac’s IP was 192.168.1.200. I saved the configuration below at /usr/local/etc/haproxy/haproxy.cfg.

global
  log  127.0.0.1  local0
  log  127.0.0.1  local1 notice
  maxconn  4096
  chroot   /usr/local/share/haproxy
  uid  99
  gid  99


defaults
  log   global
  mode  tcp
  option  dontlognull
  retries  3
  option  redispatch
  option  http-server-close
  maxconn  2000
  timeout connect  5000
  timeout client  50000
  timeout server  50000


frontend www_fe
  bind 192.168.1.200:443

  use_backend www


backend www
  timeout server 30s
  server www1 192.168.99.100:443

Launch daemon

With haproxy configured, I placed the launchd configuration at /Library/LaunchDaemons/com.zymbit.haproxy.plist and ran the command sudo launchctl load -w /Library/LaunchDaemons/com.zymbit.haproxy.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
 <key>Label</key>
 <string>com.zymbit.haproxy</string>
 <key>ProgramArguments</key>
 <array>
   <string>/usr/local/bin/haproxy</string>
   <string>-db</string>
   <string>-f</string>
   <string>/usr/local/etc/haproxy/haproxy.cfg</string>
 </array>
</dict>
</plist>

Starting and stopping launchd

Now, whenever I want external access to the Docker service I run the command launchctl start com.zymbit.haproxy and when I’m done launchctl stop com.zymbit.haproxy.

Hash buckets, rsync, and xargs magic

At work we have a couple of directories that are organized as two-deep hash buckets, totaling 65536 directories [1]. Traversing the tree, e.g. find . -type f, takes ages, and the structure also causes rsync to use a lot of memory.

One way to solve this is to work on a single top-level directory at a time instead of all 256 at once (each containing 256 directories of its own). For example, this runs rsync once per directory, which dramatically decreases rsync's workload and works pretty well:

for i in *; do rsync -a "$i" server:/path/to/dest/"$i"; done

With xargs the serial process above can be parallelized. The following will continually process 8 directories until all 256 have been copied over:

ls | xargs -n 1 -P 8 -I% rsync -a % server:/path/to/dest/%

I tried 32, 16, then 8 parallel processes. In my case a -P value above 10 caused xargs to fall over trying to spawn that many rsync processes. I haven’t figured out why, but it hardly matters: with 8 running in parallel, the disk and network should be pretty well saturated anyway.
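To preview what xargs will execute without touching the network or disk, substitute echo for rsync. A quick sketch with stand-in directory names (the server path is the hypothetical one from above):

```shell
# Dry run: print the rsync commands xargs would execute instead of
# running them. 00-03 stand in for the real bucket directories.
printf '%s\n' 00 01 02 03 |
  xargs -P 2 -I% echo rsync -a % server:/path/to/dest/%
```

Each printed line is one rsync invocation; with -P 2, two of them would run at a time.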

[1] the base directory has 256 directories 00 – ff, which each have 256 00 – ff directories in them. 256^2 = 65536 directories.
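The posts above don't say how files are assigned to buckets, but a common scheme keys them on a hash of the name. A minimal sketch in bash, assuming MD5-based bucketing and a hypothetical bucket_path helper:

```shell
# Hypothetical helper: derive the two-deep bucket path for a name,
# assuming buckets are keyed on the MD5 of the name (one common
# scheme; the real assignment rule isn't given above).
bucket_path() {
  local hash
  hash=$(printf '%s' "$1" | md5sum | cut -c1-4)
  printf '%s/%s\n' "${hash:0:2}" "${hash:2:2}"
}

bucket_path "hello"   # 5d/41 (the MD5 of "hello" starts with 5d41)
```

With 256 x 256 buckets and a uniform hash, even millions of files stay spread out at only a few dozen files per directory.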

Caveat Emptor: Taps for Easy Database Transfers

I found this post on a project called Taps. It’s a database proxy to slurp data out of one database and dump it into another, which can be from a different vendor (e.g. it can handle MySQL -> PostgreSQL migrations).

Whenever possible, I’d rather leave serialization to that which knows it best, in my case pg_dump.

The blog post states:

Migrating databases from one server to another is a pain: mysqldump on old server -> gzip -> scp big dump file -> gunzip -> mysql. It takes a long time, is very manual (and thus error-prone), and generally has the stink of “lame” hanging about it.

My solution: UNIX. The example below uses PostgreSQL, but the general structure holds for MySQL too. It comes in two flavors. The pull method:

ssh -C current_db_server 'pg_dump current_database' | psql new_database

or the push method:

pg_dump current_database | ssh -C new_db_server 'psql new_database'

The mechanics are strikingly similar to the pipeline with the “stink of lame,” but no babysitting is required. I get compression (-C), encryption (SSH), and data integrity (pg_dump) from battle-hardened tools, all nicely wrapped into a single command.

It generally “just works”, but if your connection is interrupted, you’ll have to drop the new db and start again. Remember to use screen!
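One more caveat with the one-liner: by default the shell reports only the exit status of the last command in a pipeline, so a pg_dump that dies mid-stream can look like success. bash's pipefail option surfaces the failure; a quick demonstration with a stand-in failing command in place of pg_dump:

```shell
# Without pipefail, a failure on the left side of the pipe is masked
# by the success of the right-hand command.
false | cat
echo "default: $?"     # default: 0

# With pipefail, the pipeline's status is the last non-zero exit code.
set -o pipefail
false | cat
echo "pipefail: $?"    # pipefail: 1
```

With pipefail set, a wrapper script can notice the broken transfer and drop/recreate the new database automatically instead of relying on eyeballs.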

Taps doesn’t require SSH, but SSH is a given in just about any environment anyway. Worst case, establish a tunnel using SSH’s -L or -R flags.

Taps can also migrate between database vendors, which is interesting. We went from MySQL to PostgreSQL and had to tweak an existing tool to do so.

Watch out for issues like “Foreign Keys get lost in the schema transfer.” Ouch. Eh, whatever, referential integrity is for stuffy folks.

Make dig less verbose

Dig is great for querying DNS information, but man, by default it’s so chatty:


[berto@bolt]$ dig rcaguilar.wordpress.com

; <<>> DiG 9.6.0-APPLE-P2 <<>> rcaguilar.wordpress.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 65058
;; flags: qr rd ra; QUERY: 1, ANSWER: 7, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;rcaguilar.wordpress.com. IN A

;; ANSWER SECTION:
rcaguilar.wordpress.com. 13874 IN CNAME lb.wordpress.com.
lb.wordpress.com. 225 IN A 76.74.254.120
lb.wordpress.com. 225 IN A 74.200.243.251
lb.wordpress.com. 225 IN A 74.200.244.59
lb.wordpress.com. 225 IN A 72.233.69.6
lb.wordpress.com. 225 IN A 72.233.2.58
lb.wordpress.com. 225 IN A 76.74.254.123

;; Query time: 63 msec
;; SERVER: 10.123.0.1#53(10.123.0.1)
;; WHEN: Fri Mar 11 10:57:14 2011
;; MSG SIZE rcvd: 154

Sigh, make it get to the point:


[berto@bolt]$ dig +noall +answer rcaguilar.wordpress.com
rcaguilar.wordpress.com. 13841 IN CNAME lb.wordpress.com.
lb.wordpress.com. 192 IN A 76.74.254.123
lb.wordpress.com. 192 IN A 72.233.2.58
lb.wordpress.com. 192 IN A 72.233.69.6
lb.wordpress.com. 192 IN A 74.200.244.59
lb.wordpress.com. 192 IN A 74.200.243.251
lb.wordpress.com. 192 IN A 76.74.254.120