Category Archives: Current Projects

cPanel Backups

cPanel, the popular web hosting platform, has been through some updates over the years.  The latest update changed how the landing page is rendered: AngularJS is now used to build the reseller account list.  The effect of this change is that any parsing done by scripts (Perl, Python, or otherwise) now needs to look elsewhere for the list of accounts to back up.

The goal is to have the main reseller account log in with its credentials, gather a list of its accounts, and then run a backup of each individual account, sending the backup to a third-party off-site server via FTP(S).  I developed a script for a client several years ago and it has been working successfully with very little care and feeding.  The change to Angular meant that the script needed some updating.

The ultimate fix was to log in as normal and then call the list_accounts file, which returns a JSON-encoded list of the accounts underneath the reseller.  The fix itself was rather easy to implement because the return is JSON.  Finding the fix is, as always, an adventure.
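The parsing side really is trivial once you have the JSON in hand.  As a minimal sketch in Python (the field names here are hypothetical and for illustration only; the actual structure depends on your cPanel/WHM version):

```python
import json

# Hypothetical example of the kind of JSON-encoded account list that
# list_accounts returns; the real field names may differ by version.
response_body = '{"data": [{"user": "acct1"}, {"user": "acct2"}]}'

# Decode the JSON and walk the account list.
accounts = json.loads(response_body)["data"]
for account in accounts:
    print(account["user"])  # hand each account off to the backup routine
```

Once the list is a plain Python structure, looping over it to kick off each per-account backup is the easy part.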

Parsing a CSV with JavaScript

I had a question from a student about parsing a CSV file with JavaScript – not jQuery, not anything else, just JavaScript.  Easy, right?  It should be if you’ve worked with files and JavaScript before.  I hadn’t done so at the time, so it served as a bit of a challenge, in a good way.

One caveat on the code in this post:  It’s ugly.  I’m using an inline “onsubmit” event handler for the form, and I hate myself for doing so.  It’s also not optimized in any way; it’s more proof of concept than anything.  If you’re going to use this in a production environment, first fix that event handler, then clean up the code and add error checking/handling.  I also don’t know how well this would perform with a large CSV file.

Speaking of CSV, the code assumes a CSV file that contains no commas other than those separating the actual values.  Here’s the sample that I used:

City,Temperature,Condition 
Stevens Point,41,Sunny
Chicago,54,Sunny
Montreal,45,Cloudy 
Halifax,50,Rain

As a side note, I want to make it back to Halifax sometime when it’s not raining.

Build an HTML Page

Let’s build an HTML page to grab the file.  The HTML is simple: just a form with an input of type “file” and a submit button.  The HTML also features a <table> element so that I can dump the resulting contents of the CSV out to the screen.

<!doctype html>
<html>
<head><title>CSV</title></head>
<body>
<form onsubmit="return processFile();" action="#" name="myForm" id="aForm" method="POST">
<input type="file" id="myFile" name="myFile"><br>
<input type="submit" name="submitMe" value="Process File">
</form>
<section>
<table id="myTable"></table>
</section>
</body>
</html>

JavaScript CSV

Next up is the JavaScript.  The form’s file input exposes an array-like list of the selected files.  So:

var theFile = document.getElementById("myFile").files[0];

Now “theFile” holds a File object referencing the selected file.  Next, some minimal error checking determines whether theFile actually contains something.  If it does, a couple of variables are initialized and set for later use:

var table = document.getElementById("myTable");
var headerLine = "";

And then the key bit:  A FileReader() object is instantiated:

var myReader = new FileReader();

A function is attached to the onload event of the myReader FileReader.  This function is where the magic happens:

myReader.onload = function(e) {
  var content = myReader.result;
  var lines = content.split(/\r\n|\n|\r/);
  for (var count = 0; count < lines.length; count++) {
    var row = document.createElement("tr");
    var rowContent = lines[count].split(",");
    for (var i = 0; i < rowContent.length; i++) {
      if (count == 0) {
        var cellElement = document.createElement("th");
      } else {
        var cellElement = document.createElement("td");
      }
      var cellContent = document.createTextNode(rowContent[i]);
      cellElement.appendChild(cellContent);
      row.appendChild(cellElement);
    }  // end rowContent for loop
    table.appendChild(row);
  }  // end main for loop
};  // end onload function
myReader.readAsText(theFile);
}  // end if (theFile)

Actually, the magic begins outside of the onload function with the line

myReader.readAsText(theFile);

When this line executes, the onload function fires for the FileReader object.  The first line within the onload function gathers the contents of the file into a variable called ‘content’.  The content is then split into lines; the regex handles Windows (\r\n), Unix (\n), and old Mac (\r) line endings.  So now we have a variable that contains the CSV line by line:

   var content = myReader.result;
   var lines = content.split(/\r\n|\n|\r/);

Next, a for loop is entered.  This for loop creates a new table row (tr) for each line in the CSV:

     var row = document.createElement("tr");

The contents of the row are then split at commas:

 var rowContent = lines[count].split(",");

The contents of each row (in the rowContent variable) are then looped over in the inner for loop.  If it’s the first line of the CSV, we assume it contains heading values and therefore create a “th” element.  Otherwise, plain “td” elements are created for each cell in the table:

         if (count == 0) {
           var cellElement = document.createElement("th");
         } else {
           var cellElement = document.createElement("td");
         }

Next, the code creates a text node for each bit of content, appends the text node to its cell, appends the cell to the row, and then appends the table row to the HTML table.

         var cellContent = document.createTextNode(rowContent[i]);
         cellElement.appendChild(cellContent);
         row.appendChild(cellElement);
       }  // end rowContent for loop
       table.appendChild(row);
     } // end main for loop

Finally, processFile() returns false so that the form isn’t actually submitted.

Here’s the full code, with in-page JavaScript:

<!doctype html>
<html>
<head><title>CSV</title></head>
<body>
<script type="text/javascript">
function processFile() {
  var theFile = document.getElementById("myFile").files[0];
  if (theFile) {
    var table = document.getElementById("myTable");
    var headerLine = "";
    var myReader = new FileReader();
    myReader.onload = function(e) {
      var content = myReader.result;
      var lines = content.split(/\r\n|\n|\r/);
      for (var count = 0; count < lines.length; count++) {
        var row = document.createElement("tr");
        var rowContent = lines[count].split(",");
        for (var i = 0; i < rowContent.length; i++) {
          if (count == 0) {
            var cellElement = document.createElement("th");
          } else {
            var cellElement = document.createElement("td");
          }
          var cellContent = document.createTextNode(rowContent[i]);
          cellElement.appendChild(cellContent);
          row.appendChild(cellElement);
        }
        table.appendChild(row);
      }
    };
    myReader.readAsText(theFile);
  }
  return false;
}
</script>
<form onsubmit="return processFile();" action="#" name="myForm" id="aForm" method="POST">
<input type="file" id="myFile" name="myFile"><br>
<input type="submit" name="submitMe" value="Process File">
</form>
<section>
<table id="myTable"></table>
</section>
</body>
</html>

Ansible and AWS EC2

Ansible is not lacking in awesome.  I’ve used Puppet, Chef, and others to manage Linux, but Ansible meets my criteria for host management for one specific reason: it uses SSH to manage hosts rather than an agent.  Ansible is also simple to get up and running quickly.

In just a few hours, I was managing hosts and doing real work to keep DHCP configs straight.  Adding more functionality to playbooks is easy.  As I’ve been using Ansible, I’ve been expanding my understanding of both the tool and the infrastructure that I manage.

I’m currently using Ansible to deploy to EC2 Linux hosts.  My plan is to be able to deploy an EC2 host through the AWS API.  I already have a bootstrap playbook in place to add various users, distribute SSH keys, and add those users to /etc/sudoers.  Ansible includes modules to manage authorized_keys and install software, so doing so never feels like a hack or like I’m stretching the tool.

I’m also using Ansible to manage an Asterisk server, several MySQL servers, various DNS servers, and soon several Raspberry Pi computers.  I have a combination of physical servers, virtual servers through Xen, and AWS hosts.  I manage those via a custom variable called hosttype, and I can do things like the following to add an apt repository to the sources list on physical or virtual servers that use the Debian Jessie release:

- name: add apt source repo when physical or virtual
  apt_repository:
    repo: "deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free"
    state: present
    update_cache: yes
  when: ansible_distribution_release == "jessie" and
        (hosttype == "phy" or hosttype == "vir")

I don’t like to share passwords among MySQL servers, and Ansible enables trivial customization on a per group or per-host basis using group_vars or host_vars.
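As a sketch of that customization (the group name and value here are hypothetical), a per-group password lives in a file under group_vars named after the inventory group:

```yaml
# group_vars/mysqlservers.yml  (hypothetical group name)
# Every host in the mysqlservers group picks this up automatically.
mysql_root_password: "not-the-real-password"
```

A file of the same shape under host_vars/<hostname>.yml overrides the group value for that single host, since host_vars takes precedence over group_vars in Ansible’s variable precedence rules.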

I’ll be deploying Raspberry Pi and EC2 hosts en masse later this year, and Ansible will make doing so terribly easy and repeatable.

Deploying and Debugging PHP with AWS Elastic Beanstalk

AWS Elastic Beanstalk provides a compelling platform for deployment of applications. My web site, the one you’re viewing this page on now, has historically been deployed on a Linux computer hosted somewhere.  Well, ok, the software from which you’re reading this is WordPress but it’s hosted on the same Apache server as the main web site.

I recently redesigned the main site and in the process purposefully made the site more portable.  This essentially means that I can create a zip file of the site, including the relevant Apache configuration bits contained in an .htaccess file, and then deploy it onto any equivalent server, regardless of that server’s underlying Apache or filesystem configuration.

That got me thinking:  Can I deploy the site onto Elastic Beanstalk in Amazon Web Services?  The answer:  Yes I can.  The path that I followed was essentially to clone a clean copy of the repository from git and then create a zip file with the contents but not the .git-related bits.  Here’s the command, executed from within the directory with the code:

zip -r braingia.zip . --exclude=\*.git\*

The next step is to deploy this into AWS Elastic Beanstalk.  That’s relatively straightforward using the wizard in AWS.  In a few minutes, AWS had deployed a micro instance with the code on it.  I needed to undo some hard-coded path information and also redo the bit within the templating system that relied on a local WordPress file for gathering recent blog posts.  It wasn’t immediately clear what the issue was, though, and the biggest challenge I encountered was debugging.

Debugging PHP in Elastic Beanstalk

My workflow would normally call for ssh’ing into the server and looking at error logs.  However, I found that enabling the display of errors was helpful.  That setting is found within the Configuration area of Elastic Beanstalk:

Display Errors in Elastic Beanstalk

However, that’s not a setting I would use on a production site.  There are two other ways to troubleshoot Elastic Beanstalk.  First, you can view and download Apache logs and other related logs from Elastic Beanstalk.  For example, adding error_log() statements to the code results in log entries in these easily viewable logs.

The other debug option is to enable an EC2 key pair for the EC2 instance associated with the application.  This is done at application creation or later through the Configuration area.  So I simply deployed another application with the same Application Version and chose an EC2 key pair this time.

EC2 Key Pair

Note that AWS replaces the instance entirely if you change this key at a later date, so if you have a Single Instance-hosted version of the application, the site will be unavailable while a new instance is spun up.

AWS Red Status

Once the key pair is enabled on the server, it’s simply a matter of ssh’ing into the EC2 instance using the key and the ec2-user account, like so:

ssh -i mykey.pem ec2-user@example.elasticbeanstalk.com

From there, it’s easy to navigate around to see everything on the server, including log files.

SSH to Elastic Beanstalk

Note that the current version of the application can be found within the /var/app/current directory on Elastic Beanstalk-deployed PHP applications.  You can even edit files directly there, but I wouldn’t recommend it, since doing so breaks the zip-based deployment paradigm and architecture.

In summary, Elastic Beanstalk deployment and debugging were much easier and much more powerful than I envisioned.


Monitoring SIP Peer in Asterisk

I’ve been experimenting with an external SIP provider for outbound and inbound calling.  Nothing groundbreaking about that; plenty of people use SIP providers rather than traditional landlines.  I recently had an issue with the SIP peer going into unreachable status in Asterisk.  After debugging with the provider, I found it to be a weird ARP issue local to the Asterisk server.  The server thought that some of the provider’s IPs were local traffic, so the traffic wasn’t being passed to the default gateway.  Clearing the ARP cache and the IP route cache fixed that issue.

The issue got me thinking about how to monitor the status of the provider, so I set up a simple script that opens an SSH session to the Asterisk server and looks for the status of that peer.  When the status is not “OK”, the output is printed and, through the magic of cron, is sent to me.

Here’s the script:

#!/bin/bash

ssh -i /root/.ssh/mykey phoneserver.braingia.org 'asterisk -x "sip show peers"' | grep <providername> | grep -v OK

The script initiates an SSH session using a private key.  The matching public key has already been placed in authorized_keys on the Asterisk server…  and yes, slap my hand for ssh’ing as root here; I need to fix that.  The command asterisk -x "sip show peers" is executed.  That output is piped to grep for the <providername>, which is then piped through grep -v to exclude the “OK” lines, since I assume things are OK and only want to know when they’re not.

Admittedly, there’s nothing groundbreaking about this simple one-line script either!  But here it is nonetheless, in case anyone finds it useful for monitoring when a SIP peer goes unreachable or lagged.
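For completeness, the cron side is a single crontab line; the schedule and script path here are hypothetical:

```
# m h dom mon dow  command
# Run every 15 minutes; cron mails any non-empty output to the crontab
# owner, so the "print only when not OK" behavior does the alerting.
*/15 * * * * /usr/local/bin/check-sip-peer.sh
```

No extra mail handling is needed because cron’s default behavior of mailing job output is exactly what we want here.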

Update: Asterisk on Raspberry Pi

I had been successfully running Asterisk on a Raspberry Pi with an Obi110 interface to the PSTN for about a year.  However, I recently switched back to a standard 1U rack-mount server for the phone system.  The Raspberry Pi was just fast enough to support Asterisk with SIP and PSTN outbound along with several internal SIP clients, but the SD card just wasn’t reliable enough.

Something, and I never found out what, was quite wonky with the SD card, the image, or the Raspberry Pi itself for this particular server.  At various times it would stop working and fail to boot properly after a power cycle.  Swapping out the SD card for a new one with the same image sometimes worked, but sometimes I had to swap out the entire Pi for another one.

I was already sending (just about) all logs to a centralized log server to prevent writes on the box itself.  To increase reliability, my next step would have been to add a USB hub and an external hard drive for the root filesystem, relying on the SD card for boot only.  However, at that point I figured I was only going to create a mess of wires without being fully assured of increasing reliability all that much.  There would now be two more points of failure (the USB hub and the external drive), making recovery all the more difficult.

I was quite happy with the performance of the Pi for this purpose.  I wonder aloud whether something like the Intel Galileo would fare better, if one could get Asterisk running on the primary flash.  Regardless, it was a successful experiment.

Update: Raspberry Pi Firewall

The firewall built on Raspbian with a Raspberry Pi has been running for a couple of weeks, rather flawlessly I might add.  I’ve ordered additional Pi’s (Pis?) from Adafruit.  I’ve had great luck with Adafruit; shipping is quite fast (same day!) and their tutorials are good too.

The overall layout of the firewall includes a Plugable USB2-E100 USB-Ethernet adapter, a Cisco/Linksys USB200M, and the native Ethernet port on the Raspberry Pi.  Rather than tax the native USB ports on the Pi, I hooked up a Plugable USB2-HUB-AG7 7-port USB hub.  I also added a Cable Matters Active HDMI-to-VGA adapter for console access.  The console still had a blink, so I set the following in /boot/config.txt on the Pi:

hdmi_group=1
hdmi_mode=1

With those lines set in the file, the HDMI/VGA blink on the console is gone and now all is well.

The total cost for the entire rig was a bit under $150.  While this is somewhat higher than I would’ve hoped, the savings will come from electricity usage (or lack thereof) with the Pi.  I hooked up a Kill-a-Watt to the entire rig (Ethernet adapters, Pi, HDMI/VGA adapter) and can predict that it will use between 4 and 4.5 kWh per month.  The current rate is about 12.5 cents per kWh, so it should cost less than 75 cents per month to run the firewall.  That’s much less than the server that it replaced.
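The arithmetic behind that estimate, as a quick Python check:

```python
# Back-of-the-envelope monthly electricity cost from the Kill-a-Watt reading.
kwh_per_month = 4.5      # upper end of the measured 4-4.5 kWh/month
rate_per_kwh = 0.125     # 12.5 cents per kWh
monthly_cost = kwh_per_month * rate_per_kwh
print(f"${monthly_cost:.2f} per month")  # $0.56 per month
```

Even at the upper end of the measurement, the rig comes in around 56 cents a month, comfortably under the 75-cent figure.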

Overall I’m happy with the performance of the Raspberry Pi thus far.  Next update will be for the Raspberry Pi phone server running asterisk.

Total Cost: $141.09

Raspberry Pi: $39.95

CY Raspberry Pi Case: $17.49

Power Adapter: $5.95

USB A to Micro B Cable: $3.95

USB2-E100: $13.95 (x2 if you don’t have a USB200M laying about) = $27.90

HDMI/VGA Adapter: $19.95

Power Adapter for HDMI/VGA Adapter: $5.95

Plugable 7 Port USB Hub: $19.95

Transcend 8GB Class 10 SDHC: $8.95


DMX Lighting

Lighting for bands and DJs has come a long way since the days of non-grounded 120V household switches, with hot electricity running right below your fingertips.

I’ve tested two lighting controllers and in the coming weeks and months I hope to post some reviews and primers on DMX lighting.

Windows 8 Promise Object and JavaScript

One of the best features that I’ve found so far (and I have many more yet to find) is promises.  When building Metro-style apps using JavaScript, the framework makes asynchronous requests through WinJS.xhr.  Within that paradigm, there’s a promise object which essentially says that data will be there sometime in the future (thus the async request).  The promise object has a method called then, which accepts three arguments: a success function, an error function, and a progress function.  This makes the promise object, or rather the then method, tailored to what we do every day with AJAX requests: send a request, report progress, and do something with the results (error or success).

Pseudocode:

WinJS.xhr({ url: "http://example.com/webservice" }).then(successFunction, errorFunction, progressFunction);


Here we go again…

As amazon.com and several other independent sellers can tell you, I must be beginning another book, because I placed a significant order for music to accompany the writing efforts.  In truth, I’m beginning two books: a new book and a revision to another.  This should be an adventurous few months.

I’m about to place the first words into the first chapter and, as you might be able to tell, I’m attempting to delay by any tactic possible, including posting to my blog.  The first words of the first chapter are always the most difficult.  I’ll likely need to resort to the old trick of writing the summary first: what will the reader have learned by the time they read that summary?

It’s 5 degrees (F) outside.  Only 600 pages left to write.