
Ansible and AWS EC2

Ansible is not lacking in awesome.  I’ve used Puppet, Chef, and others to manage Linux hosts, but Ansible meets my criteria for host management for one specific reason: it uses SSH to manage hosts rather than an agent.  Ansible is also simple to get up and running quickly.

In just a few hours, I was managing hosts and doing real work to keep DHCP configs straight.  Adding more functionality to playbooks is easy, and as I use Ansible, my understanding of both the tool and the infrastructure that I manage keeps expanding.
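As a rough illustration of what one of those playbook tasks looks like, the DHCP piece can be handled with the template module plus a handler that restarts the service whenever the rendered config changes.  This is only a minimal sketch, not my actual playbook; the template name, destination path, and the isc-dhcp-server service name are assumptions that will vary by environment.

# task (in the play's tasks section); paths and service name are placeholders
- name: deploy dhcpd configuration from a template
  template:
    src: templates/dhcpd.conf.j2
    dest: /etc/dhcp/dhcpd.conf
    owner: root
    group: root
    mode: 0644
  notify: restart dhcpd

# handler (in the play's handlers section)
- name: restart dhcpd
  service:
    name: isc-dhcp-server
    state: restarted

When the template output doesn’t change, the handler never fires, which keeps the playbook safe to run repeatedly.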

I’m currently using Ansible to deploy to EC2 Linux hosts.  My plan is to be able to deploy an EC2 host through the AWS API.  I already have a bootstrap playbook in place to add various users, distribute ssh keys, and add those users to /etc/sudoers.  Ansible includes modules for adding authorized_keys and installing software, so none of it ever feels like a hack or like I’m stretching the tool.
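To give a sense of the shape of that bootstrap playbook, the tasks amount to roughly the following.  This is a trimmed sketch rather than the real thing: the username, key file, and sudoers line are placeholders, and the EC2-launch part of the plan would be a separate task driving the AWS API through Ansible's ec2 module.

# the username, key file, and sudoers entry below are illustrative placeholders
- name: create an admin user
  user:
    name: sdadmin
    shell: /bin/bash
    state: present

- name: distribute the ssh public key
  authorized_key:
    user: sdadmin
    key: "{{ lookup('file', 'files/sdadmin.pub') }}"

- name: grant sudo access, validating the change with visudo
  lineinfile:
    dest: /etc/sudoers
    line: "sdadmin ALL=(ALL) NOPASSWD: ALL"
    validate: "visudo -cf %s"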

I’m also using Ansible to manage an Asterisk server, several MySQL servers, various DNS servers, and soon several Raspberry Pi computers.  I have a combination of physical servers, virtual servers through Xen, and AWS hosts.  I manage those via a custom variable called hosttype, and I can do things like this to add an apt repository to the sources list on physical or virtual servers running the Debian Jessie release:

- name: add apt source repo when physical or virtual
  apt_repository:
    repo: "deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free"
    state: present
    update_cache: yes
  when: ansible_distribution_release == "jessie" and
        (hosttype == "phy" or hosttype == "vir")

I don’t like to share passwords among MySQL servers, and Ansible enables trivial customization on a per-group or per-host basis using group_vars or host_vars.
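For example, a group-wide default can live in group_vars with a per-host override in host_vars.  The file names, variable name, and values below are illustrative only; the real passwords belong somewhere like ansible-vault rather than in plain text.

# group_vars/mysqlservers.yml -- default for the whole group
mysql_root_password: "group-default-password"

# host_vars/db1.example.com.yml -- override for a single host
mysql_root_password: "db1-only-password"

Tasks can then reference {{ mysql_root_password }} and each host picks up its own value, with host_vars taking precedence over group_vars.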

I’ll be deploying Raspberry Pi and EC2 hosts en masse later this year and Ansible will make doing so terribly easy and repeatable.

Hands-on with Amazon Echo


I was in the market for a Bluetooth speaker that had decent sound.  I got that and then some with the new Amazon Echo.  I received the Echo yesterday through a Prime pre-order.  Amazon has put a lot of thought and effort into the packaging of the Echo.  The unboxing was reminiscent of unboxing an Apple device.

Amazon Echo Unboxing

The first thing that struck me was the size and weight of the Echo.  Here’s a pic showing the speaker on my desk next to a CD (Classic Quadrophenia) to give a sense of the scale.

Amazon Echo

Note the blue ring in the pic too.  When the Echo is thinking or responding to something, the blue light is on.  The ring can turn bright red if, for example, the Echo loses its wifi connection.  It did that once yesterday when I had the device in the kitchen (fairly far away from the wireless AP to which it was connected).

On the top are two buttons, one is a mute button for the mic, if you don’t want the Echo listening to your every word.  The other is a button that can be pressed to wake up the Echo to receive a command, if you don’t want to speak the “wake word”.  At this time, the only “wake words” are “Alexa” (the default) and “Amazon”.  I’m guessing they’ll change that eventually so that you can customize or choose a “wake word”.

Amazon Echo Setup

Setting up the Echo is rather straightforward.  Plug it in.  The Echo takes about 40 to 45 seconds to boot, during which time a white light rotates around the ring.  When the Echo boots for the first time, the ring turns orange and the Echo says “Hello”.  The next step is to visit http://echo.amazon.com or download the Echo app from your device’s app store.

The Echo sets up its own temporary wifi network and audibly tells you to connect to it using your device.  When you do so, you’ll be able to choose the wifi network to which the Echo should connect.  The Echo then connects to your home wifi network and you’re ready to roll.

The Echo also comes with a remote and a fairly powerful magnetic holder for said remote.  I haven’t yet experimented with the remote.  I suppose the use case for the remote is when you don’t want to shout “Alexa, stop” across the room.

The Bluetooth pairing process is easy too.  Simply say “Alexa, connect” and the Echo will go into a mode where it can be seen and paired to nearby devices.  There is no additional security code to enter, so anyone nearby who is watching could theoretically connect to the device while in pairing mode.  Connecting and disconnecting devices from Bluetooth is as easy as saying “Alexa, connect” and “Alexa, disconnect”.

The “Alarm” feature might be useful, though there doesn’t seem to be a way to set the alarm tone, just the time.  You can speak the alarm time, like “Alexa, set an alarm for 2pm” and watch as it gets set within the Echo web site.  That’s a novelty though and I haven’t explored the alarm function beyond merely setting it and then jumping out of my chair when the alarm went off and I had the volume too high.

Speaking of volume, there are multiple ways to adjust it.  Saying things like “Alexa, increase volume” will make the playback louder, and you can also say “Alexa, volume 3” to set the volume to a specific level.  Sadly, the range is 0 to 10 and not 11, as I was hoping.  There is also a dial on the top of the device that can be turned left and right to adjust the volume manually.  The dial gives finer control over the output: saying “Alexa, volume one” works on the whole-number 0 through 10 scale, but with the dial you can still nudge the volume down below one.

There is a news function and you can customize from among several audio feeds such as NPR, BBC, the Economist, and others.  Saying “Alexa, news” begins these feeds and “Alexa, next” skips to the next “Flash” news feed, as they are called.  For non-audio feeds, the Echo reads the news aloud.

Weather is available for the current location and you can ask for future or current conditions for  both local and remote locations.

How’s the Sound?

I bought a Bluetooth speaker with good sound.  The Echo has that.  The sound is rich and full range, with sufficient bass and midrange.  With any small speaker there is a simple inability to move as much air as a full-size speaker can.  Therefore, I don’t believe a Bluetooth speaker will ever be able to provide the drive of a nice JBL monitor.

I set up a playlist through Amazon Prime Music and can now do things like “Alexa, shuffle playlist Classic Rock” and random songs will be chosen from that playlist.

Conclusion

I suspect Amazon will be working hard to enhance the Echo.  For instance, the calendaring function currently only works with Google Calendar.  I’d love to see that integrated with other calendaring options, maybe through the Alexa AppKit or natively.

I can see the need to order additional accessories like power supplies so that I can move the Echo to different rooms… though I suppose I could just order more Echoes.

I haven’t yet explored the home control aspects of the Echo that work with Belkin WeMo devices, though I’m hoping to.  I’m also hopeful that Amazon won’t release an “Echo 2” right away and make this one instantly obsolete!

Deploying and Debugging PHP with AWS Elastic Beanstalk

AWS Elastic Beanstalk provides a compelling platform for deployment of applications. My web site, the one you’re viewing this page on now, has historically been deployed on a Linux computer hosted somewhere.  Well, ok, the software from which you’re reading this is WordPress but it’s hosted on the same Apache server as the main web site.

I recently redesigned the main site and in the process purposefully made it more portable.  This essentially means that I can create a zip file with the site, including the relevant Apache configuration bits contained in an htaccess file, and then deploy it onto any equivalent server, regardless of that server’s underlying Apache or filesystem configuration.

That got me thinking:  Can I deploy the site onto Elastic Beanstalk in Amazon Web Services?  The answer:  Yes I can.  The path that I followed was essentially to clone a clean copy of the repository from git and then create a zip file with the contents but not the .git-related bits.  Here’s the command, executed from within the directory with the code:

zip -r braingia.zip . --exclude=\*.git\*

The next step is to deploy this into AWS Elastic Beanstalk.  That’s relatively straightforward using the wizard in AWS.  In a few minutes, AWS had deployed a micro instance with the code on it.  I needed to undo some hard-coded path information and also redo the bit within the templating system that relied on a local WordPress file for gathering recent blog posts.  It wasn’t immediately clear what the issue was, though, and the biggest challenge I encountered was debugging.

Debugging PHP in Elastic Beanstalk

My workflow would normally call for ssh’ing into the server and looking at error logs.  However, I found that turning on the option to display errors was helpful.  That setting is found within the Configuration area of Elastic Beanstalk:

Display Errors in Elastic Beanstalk

However, that’s not a setting I would use on a production site.  There are two other ways to troubleshoot Elastic Beanstalk.  First, you can view and download the Apache logs and other related logs from Elastic Beanstalk.  For example, adding error_log() statements to the code results in log entries in these easily viewable logs.

The other debug option is to enable an EC2 key pair for the EC2 instance associated with the application.  This is done at application creation or later through the Configuration.  Therefore, I simply deployed another application with the same Application Version and chose an EC2 key pair this time.

EC2 Key Pair

Note that AWS replaces the instance entirely if you change this key at a later date, so if you have a Single Instance-hosted version of the application, the site will be unavailable while a new instance is spun up.

AWS Red Status

Once the key pair is enabled on the server, it’s simply a matter of ssh’ing into the EC2 instance using the key and the ec2-user account, like so:

ssh -i mykey.pem ec2-user@example.elasticbeanstalk.com

Once connected, it’s easy to navigate around and see everything on the server, including log files.

SSH to Elastic Beanstalk

Note that the current version of the application can be found within the /var/app/current directory on Elastic Beanstalk-deployed PHP applications.  You can even edit files directly there, but I wouldn’t recommend it since it breaks the zip-based deployment paradigm and architecture.

In summary, Elastic Beanstalk deployment and debugging were much easier and much more powerful than I envisioned.

 

Installing nftables on Debian 7.5

[Last Update: 8/11/2014 – Clean up some bits around the options to select.]

This article discusses installation of nftables, the new Linux firewall software, on a Debian 7.5 system.  Nftables is under very active development and therefore the installation steps may change by the time you view this article.  Specifically, the various prerequisites needed in order to build nftables will likely no longer be needed as the software matures, and more importantly, as packages for it become available.

Note: This article begins with a base of Debian 7.5.0 netinst with the SSH Server and Standard System Utilities installed.

There are two primary components involved in an nftables system:  The first component is the Linux kernel, which provides the underlying nftables core modules.  The second component is the administration program called nft.

Compiling a kernel

The Linux kernel that comes with Debian 7.5.0 predates nftables support (nftables first appeared in Linux 3.13), so a newer kernel needs to be compiled.

Before you can compile a kernel, you need to get a kernel.  As of this writing, the latest stable kernel is 3.15.  Retrieving it from kernel.org with the wget command looks like this:

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.15.tar.xz

Then unpack the kernel source:

tar -xvf linux-3.15.tar.xz

You’ll now have a pristine kernel ready to be built.

Several packages are essential, and others are helpful, for compiling a kernel on Debian.  The package named kernel-package provides useful utilities for creating a Debian-packaged kernel.  Kernel-package has several prerequisites, but those are installed automatically when you install kernel-package.

The method shown in this article uses the ‘menuconfig’ option to configure the kernel.  Other methods, such as the plain text-based config option, are also available.  The menuconfig option requires the ncurses development files; on Debian these are found in the libncurses5-dev package, which can be installed with this command (run as root):

apt-get install libncurses5-dev kernel-package

Note:  You may need to update the package list by running apt-get update prior to the packages becoming available for installation.

From within the linux-3.15 (or whatever version) directory, run:

make menuconfig

The options necessary within the kernel for nftables are found in the Networking support hierarchy.

Drill-down to the Networking support -> Networking options -> Network packet filtering framework (Netfilter).

Inside of the IP: Netfilter Configuration select IPv4 NAT.  Back up at the Network packet filtering framework menu, select IPv6 Netfilter Configuration and enable IPv6 NAT along with its sub-options of MASQUERADE target support and NPT target support.

Back at the Network packet filtering framework level, enter the Core Netfilter Configuration menu and enable Netfilter nf_tables support.  Doing so opens up several additional options.

Netfilter nf_tables mixed IPv4/IPv6 tables support
Netfilter nf_tables IPv6 exthdr module
Netfilter nf_tables meta module
Netfilter nf_tables conntrack module
Netfilter nf_tables rbtree set module
Netfilter nf_tables hash set module
Netfilter nf_tables counter module
Netfilter nf_tables log module
Netfilter nf_tables limit module
Netfilter nf_tables nat module
Netfilter nf_tables queue module
Netfilter nf_tables reject support
Netfilter x_tables over nf_tables module

Back in the Network packet filtering framework (Netfilter) level, select IP: Netfilter Configuration and find the IPv4 nf_tables support section and enable IPv4 nf_tables route chain support, IPv4 nf_tables nat chain support, and ARP nf_tables support.  Back at the Network packet filtering framework (Netfilter) level, select IPv6: Netfilter Configuration again and enable IPv6 nf_tables route chain support, and IPv6 nf_tables nat chain support.

Note: For the purposes of this article, all of the options will be selected as modules.

Finally, within the Network packet filtering framework (Netfilter) section, enable the Ethernet Bridge nf_tables support feature if you need this functionality.

Once your kernel configuration is complete, you can clean the source tree with the command:

 make-kpkg clean

Now it’s time to compile the kernel.  Depending on the speed of your system, it may take several minutes to several hours.  If you have multiple processors, you can likely speed up the process by having make-kpkg use them.  This is accomplished by setting the CONCURRENCY_LEVEL environment variable.  For instance, on a system with two processors, the variable is set like this:

export CONCURRENCY_LEVEL=2
export INSTALL_MOD_STRIP=1

Alternatively, specify all of it on the command line:

CONCURRENCY_LEVEL=2 INSTALL_MOD_STRIP=1 make-kpkg --initrd --revision=1 kernel_image

Note: On a dual processor quad core system the compile took about 30 minutes.

Once the kernel has been compiled, installation is accomplished (as root) with the command:

 dpkg -i linux-image-<your_version_here>.deb

Rebooting the server brings up the shiny new kernel but the server isn’t quite ready to run nf_tables yet.  Prior to compiling the nft administration program, you can verify that the nf_tables module can load.  First, see if the module is already loaded:

 lsmod | grep nf_tables

If there’s output then the module has already been loaded.  If not, then you can load the module with modprobe, as such:

 modprobe nf_tables

Rerunning the lsmod command (lsmod | grep nf_tables) should give output now, similar to this:

 nf_tables              37955  0
nfnetlink              12989  1 nf_tables

Compiling the nft Administration Program

The nft administration program enables control over the firewall, in much the same way that the iptables command controls an iptables-based firewall.  The nft program depends on the libmnl and libnftnl libraries.  With the large amount of active development underway on nf_tables and related libraries, this tutorial shows how to get the latest copies using Git rather than installing from packages.  First, install the build dependencies (run as root):

 apt-get install autoconf2.13 libtool pkg-config flex bison libgmp-dev libreadline6-dev dblatex

Note that dblatex is only needed if you want PDF documentation, which I sometimes do.  You can save some space and reduce the security footprint by leaving dblatex off the previous apt-get command line.

The three repositories can be cloned into your current directory with the commands:

git clone git://git.netfilter.org/libmnl
git clone git://git.netfilter.org/libnftnl
git clone git://git.netfilter.org/nftables

Once the repositories have been cloned, the next step is to compile the software.  Both libmnl and libnftnl are prerequisites for compiling nftables, so those will be compiled first with the commands (all run as superuser/root):

 cd libmnl
./autogen.sh
./configure
make
make install

Now cd up a level and into the libnftnl directory, then compile it:

 cd ../libnftnl
./autogen.sh
./configure
make
make install

Finally, compile nftables:

 cd ../nftables
./autogen.sh
./configure
make
make install

With the nftables administration program compiled and installed, you can now run nft commands!  Depending on your installation, you may need to reboot and/or run ldconfig so that nft can find the newly installed libraries.  I did both; a reboot alone didn’t fix it, so running ldconfig as root was the next logical step.  Actually, that might have been the first logical step before rebooting, but that’s how it goes sometimes.

In any event, running the following command should do nothing (and that’s what we want right now):

 nft list tables

If the command returns nothing at all, then nft is working fine.  You can create a table with the command:

 nft add table filter

Now create a chain with the command:

nft add chain filter input { type filter hook input priority 0 \; }

Note that the space and the backslash before the semicolon are necessary when entering the command from the command line; the backslash keeps the shell from interpreting the semicolon itself.

You can now run nft list tables and it will show:

 table filter

Running the following command shows the contents of the table:

 nft list table filter -a

The output will be:

table ip filter {
        chain input {
                type filter hook input priority 0;
        }
}

That’s it!  You now have nftables running. There are several good tutorials out there that deal with creating an nftables firewall once you’re at this point and I’m also updating my Linux Firewalls book to include coverage of nftables!  It’ll be out in the fall of 2014.

 

Update: Asterisk on Raspberry Pi

I had been successfully running Asterisk on a Raspberry Pi, with an Obi110 interface to the PSTN, for about a year.  However, I recently switched back to a standard 1U rack-mount server for the phone system.  The Raspberry Pi was just fast enough to support Asterisk with SIP and PSTN outbound connections and several internal SIP clients, but the SD card just wasn’t reliable enough.

Something, and I never found out what, was quite wonky with the SD card, the image, or the Raspberry Pi itself for this particular server.  At various times it would stop working and fail to boot properly after a power cycle.  Swapping out the SD card for a new one with the same image worked sometimes, but sometimes I had to swap out the entire Pi for another one.

I was already sending just about all logs to a centralized log server to prevent writes on the box itself.  In order to increase reliability, my next step was to add a USB hub and an external hard drive for the root filesystem, relying on the SD card for boot only.  However, at that point I figured I was only going to create a mess of wires without being fully assured of increasing reliability all that much.  There would now be two more points of failure (the USB hub and the external drive), making recovery all the more difficult.

I was quite happy with the performance of the Pi for this purpose.  I wonder aloud whether something like the Intel Galileo would fare better, if one could get Asterisk running on its primary flash.  Regardless, it was a successful experiment.

Perl to Python RSS Conversion

For quite some time, I’ve had my own personal homepage containing commonly used links, server status, subject lines of e-mails, and RSS news feeds.  Nothing exciting there.  The RSS feeds are retrieved by a program that runs every N minutes through cron and places the entries into a MySQL table.  Again, nothing exciting.  However, recently the Perl program that I’ve been using to retrieve the RSS has been consuming a bigger percentage of the available resources on the server.  More accurately, the server on which the RSS retriever is hosted is more heavily utilized now, so when the RSS parser runs, the impact is noticeable in the server’s load average.

Of course, one way to solve it is to throw more hardware at it, like more CPU and RAM.  However, that would be too easy.  Instead I threw together a Python program using feedparser just to see the difference in performance between the two for this purpose.  The results were surprising.  Python took about 2.8 seconds in real time and used significantly fewer system resources to do so.  Perl took ~11 seconds for the same feeds at roughly the same time.

I’m not writing this to be a knock against Perl; more likely the methods that I used to parse the RSS in Perl (and my general Perl programming skills?) are the issue.

Timings below.

Python:

real 0m2.868s
user 0m1.808s
sys 0m0.072s

Perl:

real 0m11.016s
user 0m4.108s
sys 0m0.144s

 

 

Revising Books

This post is somewhat difficult to write. I’ve been involved in a few book revision projects over the past 10 years where I’ve picked up another author’s book and had to revise it for a new version of $widget or just to update it. When doing so, I notice that my writing approach is different from that of some other authors. I can’t or won’t claim that my approach is better; in fact, it may be worse, but it’s the approach that I use nonetheless.

When revising books, I quickly become frustrated with what I view as oversimplification, because it inevitably leads to incorrect information. Unfortunately, I can’t provide a real-world example because that would compromise the projects I’m working on.  However, an anecdotal example involves DNS, where many authors don’t seem to understand the most basic things about it and therefore gloss over the details in a way that gives the reader incorrect information.

I find myself completely gutting entire sections of books and rewriting them when the intention was merely to revise.  A side effect is that the section comes out shorter, which some publishers don’t like; they like an increased page count.  I like correct, accurate information, explained in the clearest manner possible.  The funny thing is (funny to me, anyway), I’ve done this to my own revisions too, picking up something I wrote 5 years ago and thinking “what is this garbage?”.

Additionally, with books I revise or write from scratch, I tend to try to explain not only how something works but also why it works.  Sometimes I’m successful at providing this explanation while at other times I fail miserably in both the how and the why.  I feel that you, as the reader, are looking for more than just a tutorial on a subject; the web is full of tutorials. When you take the time and spend the money to read one of my books I hope to provide you with more insight than you would get from a tutorial or even a tutorial-based approach.

My approach to book writing should also serve as a warning to those readers who want a quick-and-dirty tutorial on a given subject.  You won’t get that in my books.  Sure, you’ll see the how, the tutorial, but you’ll hopefully also learn why you need to do it that way. This is how I learn: by seeing how to do something and then having it explained why it should be done that way. Like many people, I get paid to solve difficult problems on computers and with computer systems.  My experience lets me envision multiple solutions to any given problem.  Therefore, for me, it’s helpful to see a problem solved and then see why it was solved in that particular way.

Granted, this isn’t always the case, not every example has a multi-page explanation, but many do, and many sections within my books contain best practices learned from years and years of experience in the areas in which I write.

I’m involved in 4 book projects this year, at least two of which will be released this year while the remainder will come out in the first quarter of 2013.  All told, this will likely account for somewhere just short of 2,000 pages (+/- 20%) of writing over the course of 8 months.  The approach described in this post is being applied, as best I can, to each and every one of those 2,000 pages.

HTML5 is a Specification

I read some articles, which I won’t cite because I don’t want to start a small war, that refer to HTML5 as a collection of technologies describing how the new web works, in much the same way that the term Web 2.0 was used for years to describe a collection of technologies, including AJAX, for adding interactivity to web pages. In these specific articles, HTML5 was described as including HTML, CSS, and JavaScript. This is horribly confusing to a non-technical user and leads to some seriously awkward conversations about HTML5.

When a technical person hears “HTML5” they think, “Oh, the W3C Spec for HTML5.” However, in reality the non-technical person has been misled into thinking that HTML5 includes other technologies like CSS and JavaScript and therefore “HTML5” can do all these neat things.

HTML5 is the specification put forth by the W3C for HyperText Markup Language. The version of the specification is number 5. That’s what HTML5 is, nothing more. HTML5 is most definitely not CSS or JavaScript and does not include those technologies. CSS is defined by its own specification, as is JavaScript, and neither of them likes to be called HTML.

HTML5 is not video and does not make video better. HTML5 has a video element, yes, but browsers still need to support that element and the programmer/site operator still needs to encode the video in multiple formats. Will this change? Probably. I’m looking forward to the day when I can upload my video file in whatever format and have it play across all browsers, like magic. But that day is not today. Today we encode in multiple formats, sometimes use the video tag and sometimes use the object tag. We’ll get there but it’s not now.


HTML5 is not more secure than HTML4; stating such is nonsense, because it’s a markup language. Are browsers better, and arguably more secure? Yes. But those same browsers are better at rendering HTML4 too, and their security applies equally to all versions of HTML. It’s actually plausible to say that HTML5, and a browser’s implementation thereof, makes users less secure due to as-yet-undiscovered exploits in the way those browsers interpret the new specification. Think: Web Storage.

Please, I beg of you, stop referring to HTML5 as a set of technologies including CSS and JavaScript. If you’d like to make up some buzzword or buzzphrase then do so, but HTML5 is taken. The term Web 3.0 seems too cliché, but at least it wouldn’t muddy conversations between technical and non-technical people.

Project Methodology: What’s the Goal?

I’m struck by the number of times that people make the wrong decisions in application development relative to the ultimate goal.  The ultimate goal of application development is to support the business, so that the business can leverage that application to streamline processes, beat a competitor, or fulfill whatever the business need may be.  To that end, it seems worthwhile to deliver the best application possible.

A competing approach says that the application needs to be delivered by a certain date, regardless of the features (or failures) of the application.  Setting dates for deliverables makes sense, especially when those dates are tied to reality.  Too often though, dates are chosen without regard for the needs of the business; the dates are chosen out of thin air, using a dartboard, roulette wheel, or some other method less accurate than the aforementioned.

When due dates are chosen arbitrarily, it only hurts the business.  Sure, the application gets out there faster and some project manager somewhere can mark that as a completed project, but features go missing, bugs go unfixed, and the people who suffer the most are the ones who need the solution most.  That’s an important point that seems to get missed:  business users suffer the most when arbitrary deadlines are set.  The ultimate goal, delivering a product to support the business, gets sacrificed when dates are not based on reality or requirements.

Does Agile fix this?  With an agile process, more software is delivered, but that software is not necessarily better software.  And even with an agile process, deadlines are set.  However, now those deadlines are set based on even less information than with other project methodologies.

Keep in mind the ultimate goal of delivering better software and supporting the business when setting deadlines.