Bringing DevOps Home

Hmmm… almost two years since a post.  I've been pretty busy.

Anyway, I’ve been making some major changes to how I manage my home server, so I thought a write-up would be interesting.  I’ve had some form of home server for over 10 years.  It’s always been Linux, but it has morphed in many ways over the years.  It began as a frontend for XBMC (before it was Kodi), but after migrating to Plex, it’s become completely headless.  Over the years, I’ve stacked software on it, installing OS-level packages and letting them run behind the scenes or through Apache.  Quite honestly, it became a mess.  I took inventory of it one day and realized that, should this thing ever die a horrible fiery death, I’d have weeks of rebuilding ahead of me.  Plus, I frequently found myself in dependency hell, trying to track down what was needed to do the thing I wanted to do.

So, I took a step back, remembered some of those buzzwords that I’d heard around work and the internet, studied them, compared them, decided on a toolset, and put them into practice.

One thing was clear from my analysis: I am not running the data center for a Fortune 500 company.  I only manage three small Linux servers for personal use, so lightweight, simple options often work great for me.  That said, many of these solutions can scale quite large with some additional thought toward configuration or add-on services.

Ansible – Configuration Management

First of all, I’m tired of manually editing configuration files, documenting my changes, and hoping I can remember what I did next time.  That’s silly.  I needed a configuration management system.  I considered a few alternatives (Chef, Salt, and Puppet were the main competitors), but I chose Ansible.  It uses a simple push-based architecture that relies on SSH and Python, two technologies I already know very well.  It also does not require any special infrastructure: I could run it off my server, a Raspberry Pi, or a laptop.

I’d actually done some work on my home server in Ansible before, but I’d automated simple tasks, not the state.  So I started over with a blank slate, following the Ansible best practices to define the state I want rather than the tasks I need to accomplish on my server.  Then I stuck this in a Git repo.  I’m currently using GitLab, because their free accounts offer the most flexibility for a hobbyist.
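As an illustration, a state-oriented top-level playbook along these lines might look like the following sketch.  The host group and role names here are purely hypothetical, not my actual layout:

```yaml
# Hypothetical top-level playbook following the Ansible best-practices layout.
# Host group and role names are illustrative only.
- hosts: homeserver
  become: yes
  roles:
    - common      # users, ssh config, base packages
    - monitoring  # prometheus exporters
    - docker      # docker engine + containers
```

Each role then describes the desired end state (packages present, files templated, services enabled) rather than a list of one-off commands, so re-running it is safe and idempotent.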

Semaphore – Ansible Frontend

Semaphore Dashboard

Ansible’s full capabilities are available from the command line alone, but sometimes it’s easier to just open up a web page and click a button.  The enterprise solution is Ansible Tower, and its open-source upstream is AWX.  I did play with AWX and found a lot of good features, but it was very heavy for what I needed: it requires four Docker containers (web UI, worker(s), PostgreSQL, and RabbitMQ).  I found Semaphore to be simple and lightweight, and it did everything I needed.  It can manage SSH keys, Git repos for your playbooks, users, and projects.  On any playbook execution, it will update from Git, then perform the requested action.  There is currently no internal scheduling mechanism, but there is a REST API available for externally triggered jobs.

At the end of the day, it accomplishes my goal rather well.  I can edit, commit, merge, and run all in a handful of minutes. (more if I actually test first)

Docker – Application Management

One of my bigger frustrations began to be managing software dependencies.  I often found myself troubleshooting dependencies, manually editing configuration files, and configuring Linux users and groups to allow shared file access.

Why do this anymore?  Most mainstream Linux services have some form of image available on Docker Hub.  The Dockerfile behind an image is easily readable, so even if you don’t like some of the practices in one, you can definitely create your own.  Additionally, Ansible has great Docker modules, so services can be configured with the same configuration management system used by the rest of the system.  Some of the services I’m running in Docker are:

  • Airsonic – Free, web-based media streamer.  Fork of Subsonic.
  • Grafana – Analytics and Monitoring
  • Plex – Media Server
  • Portainer – Management UI for Docker – Useful for inspecting and viewing logs
  • Prometheus (and add-ons) – Monitoring System and Time Series Database
  • RabbitMQ – Message broker – Used in a Django/Celery project I’m working on.  Prime candidate for Docker due to Erlang requirements.
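For example, managing one of these containers through Ansible’s docker_container module might look like the following sketch.  The image name is Plex’s official one; the volume paths are placeholders I made up for illustration:

```yaml
# Sketch of managing a container via Ansible's docker_container module.
# Volume paths are placeholders, not my actual layout.
- name: Ensure Plex is running
  docker_container:
    name: plex
    image: plexinc/pms-docker
    state: started
    restart_policy: unless-stopped
    network_mode: host
    volumes:
      - /srv/plex/config:/config
      - /srv/media:/media:ro
```

Because the task declares a desired state, re-running the playbook leaves a healthy container untouched and recreates a missing or changed one.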

Prometheus – Monitoring

Previously, I had been relying on Icinga2.  While stable, its configuration was a pain, and it relied on OK/Warn/Critical limits that needed to be configured remotely on each node.  I felt like I had to re-learn the configuration schema every time I added a new custom alert.  Additionally, Icinga2 had limited out-of-the-box options for reporting history and graphing.  It was also dependent on Apache and MySQL, so what would alert me if those went down?

After analyzing my options, I gravitated towards Prometheus.  It doesn’t come pre-configured with a bunch of fancy dashboards and alerts like some other offerings, but it is easy to manage, and there are many add-ons to enrich the experience.  Data is gathered through exporters, which Prometheus scrapes with HTTP requests.  It can even scrape HTTPS URLs with authentication.  I’m currently using a few exporters to gather information on my systems:

  • Prometheus node_exporter – runs as a service on all nodes to collect system metrics.  This exporter can even scrape text files, which I’ve configured to check for available apt packages on my Ubuntu systems.
  • cAdvisor – Analyzes resource usage and performance characteristics of running containers. (Offered by Google)
  • Blackbox exporter – Allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP.
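A minimal scrape configuration for exporters like these might look like this prometheus.yml sketch.  The hostnames and the interval are placeholders, though the ports are the exporters’ conventional defaults:

```yaml
# Illustrative prometheus.yml fragment; targets are placeholders.
scrape_configs:
  - job_name: node
    scrape_interval: 30s
    static_configs:
      - targets: ['server1:9100', 'server2:9100']   # node_exporter default port
  - job_name: cadvisor
    static_configs:
      - targets: ['server1:8080']                   # cAdvisor default port
```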

I plan to retire Icinga2 soon, after I have been able to improve my alerting thresholds and gain a little more confidence in the system.
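As an aside on the textfile trick mentioned above: it works by dropping a file in Prometheus’s text exposition format into the directory node_exporter watches via its --collector.textfile.directory flag.  A minimal sketch in Python follows; the metric name, and the idea that the pending count is obtained elsewhere (e.g. by parsing `apt-get -s upgrade` output from a cron job), are my own choices rather than a standard:

```python
def format_apt_metric(pending):
    """Render a pending-updates gauge in the Prometheus text exposition format.

    A cron job could write this string to e.g. /var/lib/node_exporter/apt.prom,
    where node_exporter's textfile collector will pick it up on the next scrape.
    """
    lines = [
        "# HELP apt_upgrades_pending Apt packages with upgrades available.",
        "# TYPE apt_upgrades_pending gauge",
        "apt_upgrades_pending %d" % pending,
    ]
    # The exposition format is newline-delimited and must end with a newline.
    return "\n".join(lines) + "\n"

print(format_apt_metric(7))
```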

Grafana – Analytics and Alerting for Prometheus

Prometheus is great for storing and querying data.  It can graph data, but its interface is best used for developing new queries and graphs.  I found Grafana to be the best package deal to support Prometheus, as it can generate graphs and send alerts to multiple channels.  I’ve tried my hand at generating my own dashboards, but the shared ones available online are much better than anything I’ve been able to create quickly.  So far, I’ve built whole-system dashboards to help me monitor and alert on various metrics.  The big ones for me are filesystem space, backup status, and security patch requirements.  As a bonus, I’ve also been able to create dashboards for others that show only the metrics they’re concerned with (and automate a nagging email when disk space runs low).

I’ll end this post with some of the graphs I have configured in my Grafana Dashboards:

WordPress performance with caching

In my last entry, I detailed the performance gains to be had from switching host providers.  That’s pretty cool, but a lot can still be done within WordPress to improve performance with caching.  Here, I’m going to use the URL from my previous blog post and run it through similar benchmark tests to see what kind of difference caching makes.

During these tests, nothing is being changed except for the caching plugin.  All server variables remained constant, and no other plugins were touched.  The plugin allows WordPress to generate a static HTML page that takes the place of the PHP/MySQL code.  A page request then simply reads a flat file that is ready to go, rather than executing PHP and pulling data from the database, which cuts processing time.
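The idea can be sketched in a few lines of Python.  This is a toy illustration of page caching in general, not how the plugin actually works internally; the file locations and the render step are stand-ins:

```python
import os
import tempfile

CACHE_DIR = os.path.join(tempfile.gettempdir(), "page-cache")

def render_page(slug):
    # Stand-in for the PHP/MySQL work WordPress would normally do per request.
    return "<html><body>%s</body></html>" % slug

def get_page(slug):
    path = os.path.join(CACHE_DIR, slug + ".html")
    if os.path.exists(path):          # cache hit: a single flat-file read
        with open(path) as f:
            return f.read()
    html = render_page(slug)          # cache miss: do the expensive work once
    if not os.path.isdir(CACHE_DIR):
        os.makedirs(CACHE_DIR)
    with open(path, "w") as f:        # save the rendered page for next time
        f.write(html)
    return html
```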

Note, this test is not downloading images, JavaScript, or any other static content that can be included with a webpage.  I’m purposely leaving that out, testing only the webserver’s ability to process the WordPress PHP code.

Test #1: 1000 requests, single threaded

Example command: ab -n 1000 -e post_280_ssl_std.csv -g post_280_ssl_std_gnuplot.tsv

General Numbers:

                           Uncached              Cached
 Document Length:          35424 bytes           35568 bytes
 Concurrency Level:        1                     1
 Time taken for tests:     280.391 seconds       171.673 seconds
 Complete requests:        1000                  1000
 Failed requests:          389 (length)          0
 Total transferred:        35,789,569 bytes      35,873,068 bytes
 HTML transferred:         35,423,569 bytes      35,568,000 bytes
 Requests per second:      3.57 [#/sec]          5.83 [#/sec]
 Mean time per request:    280.391 [ms]          171.673 [ms]
 Transfer rate:            124.65 [Kbytes/sec]   204.06 [Kbytes/sec]

For this test, there were 389 failed requests based on length.  Researching this error indicates it can be caused by dynamic content and does not necessarily indicate a problem.  Therefore, I’m going to ignore this figure and assume all connections were successful.
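As a sanity check, ab’s derived figures follow directly from the totals in the table above (uncached column shown):

```python
# Recomputing ab's derived numbers from the uncached run's totals.
requests = 1000
total_time = 280.391        # seconds, "Time taken for tests"
total_bytes = 35789569      # "Total transferred"

rps = requests / total_time
rate = total_bytes / total_time / 1024.0  # bytes/s -> Kbytes/s

print(round(rps, 2))    # requests per second, as reported by ab
print(round(rate, 2))   # transfer rate in Kbytes/sec, as reported by ab
```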


I switched my host provider!

…and you should too!
(provided you know a thing or two about system management and online security)

I found myself in a place where the basic and plus hosting accounts were providing extremely sub-par service, with no SSL.  I had two options: move up to the $15-a-month (on sale) “Pro” hosting account, or jump ship.  I jumped to a $10/month Digital Ocean Droplet, and I couldn’t be happier!

  • root
  • Faster performance
  • SSL for free, thanks to Let’s Encrypt
  • Free rein to monitor and tune the system
  • Complete control over security policies and patching

Note, all of these things come with varying levels of responsibility, which should not be taken lightly.  There are plenty of tutorials out there on how to harden a server and configure web services.  If you go down this road, I highly suggest you do your research first.

While that short bit on the “why” is important, I really wrote this to share some performance data!  I ran Apache Benchmark against my old hosting account and my new VM.  Honestly, I don’t get that many hits, so load on my own server is negligible.  To give both hosts a fair shot, I performed these tests between 12:00 am and 2:00 am CST.  I used the same theme and the same config options.  I’m unable to modify the my.cnf file on my shared hosting provider, so I left the defaults in place on my new host.  I did create an Apache virtual host, but otherwise I left the Apache configs alone for similar reasons.  My site runs WordPress, and I made sure both sites were running the same plugins with the same options and the same theme: at the time, LightWord, Akismet, Jetpack, SyntaxHighlighter Evolved, and Ultimate Google Analytics.

Test #1 – 1,000 gets against the main page, single thread:

There is a slight difference in the total bytes and file size transferred.  I’ve identified this as the difference between a custom footer and the standard one.  It was a negligible change, and the tests took a while to complete, so I’ve left it alone.  Also, the hostname was different, because I chose to run the tests at the same time, using a sub-domain to point to the new host.

Example Command: ab -n 1000 -e digitalocean.csv -g digitalocean_gnuplot.tsv

General Numbers:

 Host Type:                  Shared              Virtual Machine      Virtual Machine
 Host Provider:              Bluehost            Digital Ocean        Digital Ocean
 Monthly Cost:               $6.95               $10                  $10
 Server Software:            Apache              Apache/2.4.7         Apache/2.4.7
 Server Hostname:
 Server Port:                80                  80                   443
 SSL/TLS Protocol:           n/a                 n/a                  TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128
 Document Path:              /                   /                    /
 Document Length:            88453 bytes         84502 bytes          84502 bytes
 Concurrency Level:          1                   1                    1
 Time taken for tests:       1380.997 seconds    265.789 seconds      399.775 seconds
 Complete requests:          1000                1000                 1000
 Failed requests:            0                   0                    0
 Total transferred:          88714000 bytes      84824000 bytes       84524000 bytes
 HTML transferred:           88453000 bytes      84502000 bytes       84203000 bytes
 Requests per second (mean): 0.72 [#/sec]        3.76 [#/sec]         2.50 [#/sec]
 Time per request (mean):    1380.997 [ms]       265.789 [ms]         399.775 [ms]
 Transfer rate:              62.73 [Kbytes/sec]  311.66 [Kbytes/sec]  206.47 [Kbytes/sec]
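From the table, the raw speedup of the Droplet over shared hosting for plain HTTP works out to roughly:

```python
# Speedup of the Digital Ocean VM over shared hosting, from the totals above.
shared_time = 1380.997   # seconds for 1000 requests, Bluehost
vm_time = 265.789        # seconds for 1000 requests, Digital Ocean over HTTP

print(round(shared_time / vm_time, 1))   # roughly 5x faster
```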


Cleverbot vs. Cleverbot

So, I stumbled upon pycleverbot, a nice little module to interface with the Cleverbot website.

Of course, what is the first thing everybody wants to do?  Make Cleverbot talk to itself!

With the module, coding is quite simple.  The version I’m running outputs the conversation to an HTML page on my server, but the syntax was screwing up in WordPress, so I’ve left that part out.

import cleverbot

# beginning two different cleverbot sessions
# I use steve and bob to keep things separate while coding.
# This is not reflected in the html output.
steve = cleverbot.Session()
bob = cleverbot.Session()

# Gathering info....boring stuff
convo_start = raw_input("How would you like to begin the conversation? : ")
print ""
cycles = int(raw_input("How many cycles would you like to run? : "))
print ""

# Starting the conversation
print "Bob: "+convo_start
reply = steve.Ask(convo_start)
print "Steve: "+reply

# continuing the conversation in the loop.
for _ in range(cycles):
    reply = bob.Ask(reply)
    print "Bob: "+reply

    reply = steve.Ask(reply)
    print "Steve: "+reply

# ....and now to tie up my loose ends.

The output is definitely interesting.  To seed it, I grabbed text from the website’s “Think For Me” feature as a true study of what this would yield.  At one point, the bots started quoting the song “Still Alive” from Portal!

I’ve also found that this bot seems loop-resistant.  I groaned when I saw “Yes you did!” / “No I didn’t!” cycle about 3 times, but the bot actually recovered.  I’m also surprised that the bot seems to be great at bringing up its own subjects.

Then every once in a while, the bot will give me a facepalm moment of utter stupidity.  I’ve learned that as soon as one of them announces that it is Cleverbot, I should just ignore the next 10 lines.  Regardless, I accomplished what I wanted, and I can live with that.

Now for some examples!

I’d be more than happy to start the script with any examples given to me, just let me know!