Monthly Archives: June 2011

When to Retire an Old Bookmark

I have an app that I use to manage my web bookmarks.  It's hosted, so it lets me categorize bookmarks and use them from any device, anywhere.  I've been using the application for several years, and each time I click one of the bookmarks, that click gets recorded in a database.  As time goes by, the most popular bookmarks in each category rise to the top as they are used more and more.

However, this means that less popular bookmarks sink to the bottom.  Looking at the 76 active bookmarks in the list (only 76; it seems like more), there are some that I haven't used in a year and one that I haven't used in nearly two years.  So the question becomes: when should I delete or inactivate an unused bookmark?  I'm thinking the bookmark to a weather site that hasn't been used since July 25, 2009 is a candidate to be inactivated.

For my own reference (filing this under “Useful Items that I Forget”) here’s the query that I ran:

select b.title, from_unixtime(max(p.dateclicked)) from p_bookmarks b, p_clickstats p where p.bmrkid = b.id and b.active = '1' group by b.id order by from_unixtime(max(p.dateclicked)) desc;
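The same idea can be sketched in a few lines of Python against a SQLite copy of the data.  The table and column names (p_bookmarks, p_clickstats, bmrkid, dateclicked) come from the query above; SQLite standing in for the real database and the one-year cutoff are my own assumptions for illustration.

```python
# Sketch: flag active bookmarks whose most recent click is over a year old.
# Table/column names follow the query above; SQLite stands in for the real
# database, and the one-year cutoff is an arbitrary threshold.
import sqlite3
import time

ONE_YEAR = 365 * 24 * 60 * 60  # seconds


def stale_bookmarks(conn, now=None):
    """Return (title, last_clicked) pairs unused for more than a year."""
    now = now if now is not None else time.time()
    rows = conn.execute(
        """
        SELECT b.title, MAX(p.dateclicked) AS last_clicked
        FROM p_bookmarks b JOIN p_clickstats p ON p.bmrkid = b.id
        WHERE b.active = '1'
        GROUP BY b.id
        """
    ).fetchall()
    return [(title, last) for title, last in rows if now - last > ONE_YEAR]


if __name__ == "__main__":
    # Toy data: one bookmark last clicked two years ago, one clicked today.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE p_bookmarks (id INTEGER, title TEXT, active TEXT)")
    conn.execute("CREATE TABLE p_clickstats (bmrkid INTEGER, dateclicked INTEGER)")
    conn.executemany("INSERT INTO p_bookmarks VALUES (?, ?, ?)",
                     [(1, "weather", "1"), (2, "news", "1")])
    now = int(time.time())
    conn.executemany("INSERT INTO p_clickstats VALUES (?, ?)",
                     [(1, now - 2 * ONE_YEAR), (2, now - 3600)])
    print(stale_bookmarks(conn, now))  # only the weather bookmark is flagged
```

A cron job running something like this could mark the candidates inactive automatically rather than waiting for me to remember to check.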

Disable Time Sync with VirtualBox

VirtualBox (or rather its Guest Additions) has an annoying habit of automatically keeping the guest's clock in sync with the host, regardless of the settings one tries to apply within the guest itself.  For example, in a Windows 7 guest that I'm using for consistent screenshots, I need to set the clock to specific dates.  However, as soon as I set it, VirtualBox changes it back.

Here's how to disable the sync for a given VM:

vboxmanage setextradata "<vmname>" "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"

Does Subscriber Growth Matter?

A BBC News article today, "Facebook Denies Losing Users," started the wheels turning.  Two firms are attempting to measure Facebook's usage and user growth.  One firm (Inside Facebook) says Facebook lost users in May, while another (comScore) says Facebook saw 21% growth in the U.S. in May.

While I don't really care whether Facebook gained or lost users, I do question how Facebook gains that many new users in a month.  Anecdotally, people either have a Facebook account or they don't, and those who don't aren't signing up any time soon.  It's not as if people are just hearing about Facebook; there are no early adopters left.  I'd guess there's a huge level of inaccuracy in the third-party measurements.  More likely, a whole bunch of college-age Facebook users stopped using Facebook from their campus-based locations and started using it from home, skewing both numbers.

The product life cycle seems to dictate that Facebook is still in its growth stage, but 21% seems like an awful lot of growth.  Maybe that growth is coming from spambots and fake user accounts.  As Facebook matures, it will be interesting to see how it transitions from the growth stage to maturity.  A competitor is bound to come up with the next shiny thing to compete for people's attention, and at some point there really won't be subscribers left to sign up for Facebook.  Will Facebook become the application platform of the future?  Could be, but I doubt it.  It's what's next that's interesting.

The Dangers of DNS

I recently (well, OK, several months ago) moved the authoritative DNS for this domain, braingia.org, over to a new hosting provider.  This was quite a shift for me.  For the previous nine years I had managed the authoritative name servers for the domain myself using BIND, with everything running happily and smoothly.

However, I'm no longer managing as many servers, so I had no (easy) way to provide redundant DNS for the domain.  With that in mind, I decided to move the authoritative DNS over to a standard hosting provider.  Things were going well.  Using their interface I was able to add and change A records and TTLs as necessary, create subdomains, and point the MX record at my own server: all the stuff a guy needs to do with his domain.

Lo and behold, there was a problem with a mailing list on a subdomain of braingia.org.  Weird problem; still not solved.  Anyway, not being able to check the mail logs to glean more details, I opened a ticket with the provider.  It went downhill from there.  Suddenly I began to get notices and "host not found" errors for previously working hosts within the domain.

It turns out that one of the engineers at the provider decided that "there were problems with the zone" and, without asking, reverted the domain to the default template, removing all of the custom hosts, subdomains, TTLs, and MX records that had been built up over the last nine years of owning the domain.

Luckily I had a backup of the zone handy and was able to get it restored by the provider.  However, I can’t imagine what would’ve happened if I didn’t have a clean copy of the zone ready to be loaded.  Well, I can imagine:  I would’ve had to rebuild all of the records by hand.

The moral of the story, aside from venting, is that I’ll be switching DNS back over to my own server and then trying to find a host to do reliable secondary DNS.  And by reliable I mean “won’t touch the zone without my involvement.”  You know, the little things.

The Three W’s of Web Information Architecture

Web page design, and more specifically the information architecture of a web site, needs to be planned carefully.  Too often, menus and layout are based on one person's or one group's idea of what visitors want rather than a thoughtful and thorough analysis of what the visitors themselves want and need.

When planning the information architecture I find it helpful to consider the three W's: Why, What, and When.

  • Why is the visitor coming to your site?
  • What is the visitor trying to do?
  • When is the visitor looking for the information?

Complex sites frequently take none of these into account in the design of their menus or the content of individual pages.

On complex sites, finding out why someone is coming to your site is typically done using analytics.  For example, analysis of logs or web page tracking reveals how visitors are finding your site, sometimes including the search terms and the site that referred them.  If a visitor reached your site by searching for purple iPod covers, then hopefully you have purple iPod covers on the page they hit.
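As a concrete example of the log analysis involved, here's a small Python sketch that pulls the search terms out of a referrer URL.  The "q" parameter is what most search engines of the day passed along, and the sample referrer is invented for illustration.

```python
# Sketch: extract search terms from a referrer URL found in a server log.
# Most search engines pass the query in a "q" parameter; the sample
# referrer below is made up for illustration.
from urllib.parse import urlparse, parse_qs


def search_terms(referrer):
    """Return the search terms from a referrer URL, or None if absent."""
    query = parse_qs(urlparse(referrer).query)
    terms = query.get("q")
    return terms[0] if terms else None


referrer = "http://www.google.com/search?q=purple+iPod+covers"
print(search_terms(referrer))  # purple iPod covers
```

Aggregating those terms across a month of logs gives a reasonable picture of why visitors are arriving, which feeds directly into the What and When questions.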

Determining why a visitor is coming to your site helps determine what they’re trying to do on the site.  Is the visitor looking for information about iPod covers or looking to purchase one?

Arguably the most important piece of web information architecture is determining when the visitor is looking for information.  A page with everything the visitor might need is useless if they aren’t ready to consume the information yet.  For example, a calendar application could provide complex scheduling options on its weekly calendar view, but if the visitor is merely looking for an overview of availability for the week then complex scheduling options will be useless.  Worse yet, the visitor might miss those options and click to another area of the site looking for those options.

If a visitor doesn't think the page they're on has the necessary information, they'll click away within two to three seconds (sometimes less), so you don't have much time to give them what they're looking for when they're looking for it.

JavaScript Step by Step Second Edition – In My Hands Now!

It’s always exciting to receive a box on my doorstep with the first copy of a book.  Rather than wait for the publisher to send the promotional copies I usually spring for a copy of the book from Amazon or B&N.  This time it’s the second edition of JavaScript Step by Step that arrived just yesterday.

And today the first erratum arrived for the second edition.  It appears that somewhere between the first and final passes of quality assurance, a sentence got moved.  Then, in the final printed edition, the sentence got chopped, leaving half a sentence in the wrong place.  Sheer madness.

On page 204, there’s a Note (called a readeraid in the Microsoft Press/O’Reilly template).  In that note, the last sentence, or partial sentence, reads, “DOM Level 0 is also known”.  In reality, that sentence goes in the following paragraph and should read “DOM Level 0 is also known as the legacy DOM.”

It's arguably a throwaway sentence, and the book still makes sense without it.

If you spot additional errata for the second edition, I encourage you to head over to the book's web site, http://www.javascriptstepbystep.com, and file them.  Those errata come directly to me and get investigated promptly.

I hope JavaScript Step by Step, Second Edition is enjoyed as much as the first edition.  It was a fun book to revise and improve.

The Browser as Operating System
I got it, finally, today, while writing a chapter for the Second Edition of JavaScript Step by Step.  The browser *is* the OS.

The epiphany came to me in the following manner:

I was writing a sentence about JavaScript libraries, “These libraries take difficult tasks and make them easy when programming a JavaScript-centric web application.”

(Note: that sentence still needs to be wordsmithed.)

In any event, I originally wrote "…JavaScript-centric application" but then added the word "web."  That got me thinking about why I added it.  A multitasking, windowed operating system, among other things, enables its user to open multiple applications at the same time.  It turns out that today's web browsers enable their users to do the same thing through tabs.  So in essence, the underlying operating system (Windows, Linux, OS X) is really just an additional layer between the user and the hardware.

It's not too difficult to envision a day when the OS is the browser and vice versa.  The only issue would be network connectivity, or what to do when there is none.  I, for one, don't want to depend on being connected in order to write, or listen to music, or do whatever it is that I have local to my computer and network.  Therefore, a local web server or the ability to fire up a local application store would be necessary, and it would need to be seamless.  In other words, the browser (OS) shouldn't assume network connectivity and should handle things in a sane manner when connectivity drops or changes.

JavaScript Step by Step and Beginning Perl Web Development: Still Selling Well
I received some notes from my agent over the past few weeks.  Two of my books, JavaScript Step by Step (Microsoft Press) and Beginning Perl Web Development (Apress), are selling well.  The JavaScript book has outsold any of my other books.  Some of the other books that I wrote and co-wrote were done as work for hire, so I don't get to see the same sales reports on those.

The Perl book is what prompted me to write today.  I wrote the book back in 2005 and was asked yesterday if the content is still relevant; in other words, has the software been updated, thus rendering the book obsolete?  The answer is no.  Beginning Perl Web Development is definitely still relevant.  I'll go through some of the sections of the book.

Part 1 looks at CGI development and database connections with Perl.  These are still relevant and in wide use.  Part 2 covers LWP and the Net:: tools, including retrieving content with LWP::UserAgent.  Still relevant; I use these modules to scrape web pages with Perl even today.  Part 3 looks at XML and RSS with Perl, including consuming RSS feeds as well as creating them.  Part 4 is devoted to mod_perl, and Part 5 looks at templating with Perl using the Template Toolkit and Mason.

I was happy to hear about both books recently and even happier that they’re still relevant and provide value to their readers.

Purchase JavaScript Step by Step and Beginning Perl Web Development at Amazon or your preferred book retailer.

Small Business Web Site? Make It Worthwhile or Don’t Make It At All

When I started working with small- and medium-sized businesses to help them create web sites (circa 1995), I had some difficulty convincing business owners of the value of a web site.  "Why would I need that?" and "The Internet is a fad" were two common themes among the naysayers.  Even today I'm amazed by the number of small businesses in my area that aren't on the web… at all.

However, some business owners see the need for a web site but don’t realize that putting up a horrible site is much more harmful than having no site at all.  Therein lies the predicament:  If you have a web site for your small business, you need to make it look professional and provide value for visitors or you will do more harm than good.  If you can’t create a simple site that looks professional then you’re probably better off not having one at all.

Here's an anecdote: I recently searched for a local photographer.  I found several in the area using $searchEngine and clicked through to a couple.  Some of the sites were so terrible that I immediately moved on to the next search result.  A little digging revealed that many of the sites were created in WYSIWYG-type HTML editors or, worse yet, created in word processors and then exported to HTML.  The design elements were all too familiar: bad backgrounds, loud color schemes, no sense of eye tracking or navigation, no consistency.

Having a bad web site ensured that I wouldn’t do business with those companies.

Getting a not-so-bad web site isn't all that difficult.  If you have a bad web site, spend a couple of bucks and hire someone to redesign it for you.  And no, this does not mean getting your neighbor's kid, nephew, or niece to do it "because they're good with computers."  It means hiring someone who actually does web design.  You can usually get a small (one- to five-page) site for under $500 (often under $300).  Alternately, incorporate your content into a free CSS template and move on with your life.  Don't fall into the trap of "getting to it later."  Get to it now.  It shouldn't take long and will almost certainly turn into results.

Managing Web Design Projects
I’ve been working with some individuals who are new to the process of managing a web design project.  By “the process” I mean all of those elements that need to be thought out, discussed, (sometimes argued about), and decided with the end-user prior to the opening <html> tag being written.  Things like color scheme, navigational elements, the number of pages required, content areas, “what happens when someone clicks here”, SEO, header/footer elements, domain name, analytics/metrics, and so on.  While those elements can change, the business owner/end-user needs to be involved in many of these decisions early so as to save time and effort among all participants.

However, I'm struck by the surprise people show when, for the first time, they have to manage a web project from start to finish.  What seems like a burden ("now they want to change this element to be smaller") is really just a typical web design cycle.  I haven't been involved in any web project that didn't involve changes.  One always hopes that the requested changes will come early and be small things, like changing the name of a menu item.  But even the early changes leave these individuals shaking their heads in disbelief.

I welcome them to the everyday life of someone who handles web design and architecture.  Managing the process of web design requires patience and flexibility.