Heal Your Church WebSite

Teaching, rebuking, correcting & training in righteous web design.

Social media icons galore!

December 17, 2011
by meandean

Digital Marketing is about Vision, Strategy, Tactics and Activity – in that order

Successfully marketing your church or charity in the digital domain is more than adding a “Like Button” to your blog.

I’ve become aware of this after the popularity of my “Facebook Like Button Plugin for WordPress.”

In the past year, I’ve received a number of emails with feature and implementation requests that seem to assume this one Facebook service can solve all of the sender’s social media marketing needs.

So it pains me when I have to reply with a ‘no’ to some very nicely worded requests.

I generally spare such requesters a recitation of the technical limitations of Facebook’s API. I also spare them the bitter reality that there is no “magic bullet” when it comes to digital marketing.

So to make up for the latter omission, I’ve written below what I believe it takes for a church or charity to survive their drive along the information highway – specifically in terms of online outreach and ministry:

  1. Vision;
  2. Strategy;
  3. Tactics; and
  4. Activities

… in that order.

Problem is, most organizations I’ve seen fail on the web do so because they jump into activities without understanding that success needs to have:

  • measurable objectives;
  • focused, targeted audiences;
  • feasible short, medium and long term plans; and
  • return on investment.

I’ll blog about these “haves” in a future post. For now, let’s define what I mean by Vision, Strategy, Tactics and Activities … in that order.


Vision

  1. Grow an organization that …
  2. … serves their local community of Lutherans …
  3. … by feeding their faith; and …
  4. … equipping them to feed others.


Strategy

  1. Bring in more visitors.
  2. Get visitors to become members.
  3. Get members to become active participants.
  4. Get active participants to become evangelists.


Tactics

  1. Attract more visitors through popular digital venues.
  2. Provide online content that allows visitors to comfortably explore your organization.
  3. Provide online tools that help members plug in to programs and collaborate.
  4. Provide online training and services that help evangelists get the word out.


Activities

  1. Get the sermons, lessons, videos and photos online; and track it all with Google Analytics.
  2. Create a community presence via a Facebook and/or LinkedIn group, and don’t forget Disqus.
  3. Integrate tools such as BaseCamp for program management and Google Apps for document processing.
  4. Enlist tools such as SurveyMonkey and MailChimp to equip individuals in the field.

If you’re finding that your organization is not getting anything but a passing interest from your online activities, then why not take a step back and consider the above?

If you still feel that a rocking WordPress theme for your church website is the key to your organization’s digital marketing management, then I’d ask you to consider what measurable results you received from similar efforts in the past.

As always, comments, questions and criticisms are welcome, so long as they’re couched in love.

October 10, 2011
by meandean

What to do when your Twitter Account gets Compromised

Despite employing strong passwords that I change regularly, and despite deleting unsolicited Direct Messages (DM) and mentions with links to unknown destinations, a simple “fat finger faux pas” led to me granting a 3rd party Twitter application permission to spam my followers. For that I apologize, and as part of my penance I’ve provided some useful advice, images and even a script to help you remedy the situation should you ever similarly fall victim to such malware.

I woke up a little after 1:30AM last night because I thought I had heard some raccoons helping themselves to my trash can as if it were a salad bar. Once that venture into suburban sanitation security was resolved, I checked my Samsung Droid Charge for any incoming notifications. One that caught my attention read:

Strange link via DM from you just now.

As I dug in, I realized that my Twitter Followers were being sent a DM with a link to a third party Twitter Application, which when clicked, would begin the process of similarly turning their Twitter account into a spam-sending zombie.

First things first: I read the instructions on Twitter’s help page entitled “My Account Has Been Compromised,” which advised me to:

  1. Change your password (go ahead, make it  a strong password)
  2. Revoke connections (to any 3rd party application you think suspicious &/or are no longer used)
  3. Update your new password in your trusted third-party applications

Which I did immediately. I then went into Twitter and began to manually delete the messages the pusillanimous 3rd party program had sent. It wasn’t long into this tedious process that I realized “… this is how I got hacked; the malware link is WAY too close to the delete link.” I’ve attached a screenshot of a test DM to demonstrate the usability issue I’m trying to describe:

How the Twitter delete DM links can sometimes be too close to a malware link

A bit of context: earlier in the evening, while watching the 1st quarter of the Packers/Falcons game, I received an obvious malware DM. I pulled up Twitter in the browser on my Droid rather than the mobile app because it takes fewer keystrokes to delete such conversations. Unfortunately, I clicked the malware link. I remember it happened because I quickly hit the back key and then deleted, not thinking anything would come of my miscue.

I was wrong. Later, sometime during the 4th quarter, while searching stats on the Pack’s stunning 2nd half comeback, I saw on my little Droid browser a page that looked like Twitter, asking me to log back in. I was busy with the game, and I’d seen Twitter do this before. What I didn’t see was that the link was actually pointing to a misspelled site: Twittler.com!
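That kind of look-alike domain is exactly what a quick hostname check catches. As an illustration (a hypothetical sketch in Python; the trusted-host list here is my own assumption), the test is simply whether the login page’s hostname is one you explicitly trust:

```python
from urllib.parse import urlparse

# hosts we actually trust to ask for our credentials (assumed list)
TRUSTED_HOSTS = {"twitter.com", "www.twitter.com", "mobile.twitter.com"}

def is_trusted_login_url(url):
    """Return True only if the URL's hostname is explicitly trusted."""
    host = urlparse(url).hostname or ""
    return host.lower() in TRUSTED_HOSTS

print(is_trusted_login_url("https://twitter.com/login"))   # True
print(is_trusted_login_url("http://twittler.com/login"))   # False -- the look-alike
```

An exact-match whitelist also catches tricks like twitter.com.evil.example, which a naive substring check would happily wave through.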

So despite all my talk about strong passwords, ignoring unsolicited candy from strangers, and other such stuff, I granted a 3rd party application permission to spam the h-e-double-toothpicks out of my followers. Worse, just about the time I was through deleting all the rogue messages, I received another communique that reminded me that followers who get email notifications of DMs were still going to see the link.

So at about 2:45AM, I set out to write a script that would send DM notifications to all my Twitter Friends — technically, those individuals whom I follow who also follow me. I won’t go into too much gory detail, other than to say the resulting replies indicated grateful followers who, while suspicious, were glad to get the personalized Direct Message warning from me.
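That “Friends” selection is really just a set intersection. Here’s the idea boiled down to a sketch in Python with made-up account IDs; the real IDs, of course, have to come from Twitter’s API:

```python
# made-up account IDs for illustration
following = {101, 102, 103, 104, 105}   # accounts I follow
followers = {103, 104, 105, 106}        # accounts that follow me

# "Friends" = people I follow who also follow me back;
# only these mutuals can receive a Direct Message from me
friends = following & followers
print(sorted(friends))  # [103, 104, 105]
```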

I chose Perl because, while other languages may be better for long-term projects, I knew I could field a solution within an hour and a half by taking advantage of the Net::Twitter module from the CPAN library, along with a fresh set of API consumer keys and access tokens from the Twitter Developer’s page.

I call this script “DM_mea_culprit.pl,” and since it can be used to send bulk messages to all your Twitter followers, please resist temptation and limit its use to good:

# Summary:
# --------------------------------------------
# Sends a Direct Message to Friends - those people on Twitter
# whom I follow who also follow me
# Arguments:
# --------------------------------------------
# none yet, we'll get that done in the next version
# Example Use:
# ---------------------------------------------
# perl DM_mea_culprit.pl > run01.log.txt

use Net::Twitter;
use Data::Dumper;

# NOTE: you will need to get consumer keys and access tokens from the
# Twitter Development Center: https://dev.twitter.com/start
my $nt = Net::Twitter->new(
    traits              => [qw/API::REST OAuth/],
    consumer_key        => $YOUR_CONSUMER_KEY,
    consumer_secret     => $YOUR_CONSUMER_SECRET,
    access_token        => $YOUR_ACCESS_TOKEN,
    access_token_secret => $YOUR_ACCESS_TOKEN_SECRET,
);

# this information is useful to log at the beginning of the script
# .. it includes how many more messages you can send w/in the hour
my $ratelimit = $nt->rate_limit_status();
print Dumper($ratelimit);

# construct the outgoing direct message
my $omsg = "please do NOT open any URL you may have received from me last night as a DM. It was malware.";

# get all the IDs of people I follow
my @ids;
for ( my $cursor = -1, my $r; $cursor; $cursor = $r->{next_cursor} ) {
    # for a larger net, consider followers_ids()
    $r = $nt->friends_ids({ cursor => $cursor });
    push @ids, @{ $r->{ids} };
}

# walk through all the IDs
foreach my $id (@ids) {
    next unless $id;

    # get an array that describes the friendship
    my $friend = $nt->lookup_friendships({ user_id => $id });

    # get their screen name
    my $screenname = $friend->[0]->{"screen_name"};

    # see how you're connected to this friend
    my $connections = $friend->[0]->{"connections"};

    # important -- do they follow you?
    my $isfollowedby = grep { $_ eq "followed_by" } @$connections;

    if ($isfollowedby) {
        my $dmsg = "\@$screenname, $omsg";              # personalize the DM
        my $smsg = $nt->new_direct_message($id, $dmsg); # send the DM
        if ($smsg) {
            print "message '$dmsg' successfully sent to #ID" . $id . "\n";
        } else {
            print "WRN:" . $id . "\t\@" . $screenname . "\texperienced a message fail\n";
        }
        sleep(3); # don't overrun Twitter or get blacklisted
    }
}

# now send out a generalized message to the peeps
my $res = $nt->update({ status => "TO MY FOLLOWERS: $omsg" });

# last bit of logging
print "This work is done\n";
exit 0;

All that said, here are some things I’m going to do moving forward to avoid such instances.

  1. continue to change my password periodically, using something very strong;
  2. periodically review my third-party application connections, removing anything that looks suspicious and/or is no longer in use;
  3. always use the Twitter Mobile App, rather than the browser, to delete DMs with bad-looking URLs when on my Droid smartphone;
  4. take a harder look at the URL when asked to log back into Twitter (or Facebook for that matter);
  5. perfect the above script — adding logic to delete spammy DMs while sending out the warning; and
  6. being the Social Media API junkie that I am, perhaps rewrite this in Python.

Please feel free to add your recommendations to the list above — and again — apologies to my Twitter followers for the hassle.

May 17, 2011
by meandean

Bad idea design poster #11 – Canned Content

One of the myths I’ve heard from attending my share of WordCamp Raleigh events is that template systems are somehow a magic bullet for a successful online marketing campaign.

Canned Content, about 1/2 as interesting as lima beans with 1/3 the taste and 1/10 the nutrition.

Not that there’s anything wrong with terrific tools such as  Thesis, Headway, Genesis, and Builder; nor the premium themes one can purchase for them.

Rather, I’m hoping that along with the discussion of getting a fast start with premium framework themes and plugins, we also remember what usability guru Jakob Nielsen said about ‘Information Foraging‘ back in 2003 when describing how to catch and keep visits from data-hungry first-time visitors:

The two main strategies are to make your content look like a nutritious meal and signal that it’s an easy catch. These strategies must be used in combination: users will leave if the content is good but hard to find, or if it’s easy to find but offers only empty calories.

Basically, Nielsen is detailing how to attain user-activity goals through the careful crafting of compelling content and the navigation to it – known in the web strategy/analytics world as a ‘conversion funnel.’
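The nice thing about a conversion funnel is that it makes “measurable” concrete. As a hypothetical sketch in Python (the monthly counts are made up), each stage’s conversion rate is just its count divided by the count of the stage before it:

```python
# hypothetical monthly counts at each funnel stage (made-up numbers)
funnel = [
    ("visitors", 1000),
    ("members", 150),
    ("active participants", 60),
    ("evangelists", 12),
]

# conversion rate of each stage relative to the stage before it
rates = [
    (name, count / prev_count)
    for (_, prev_count), (name, count) in zip(funnel, funnel[1:])
]

for name, rate in rates:
    print(f"-> {name}: {rate:.1%}")
# -> members: 15.0%
# -> active participants: 40.0%
# -> evangelists: 20.0%
```

Watching which rate is lowest tells you which stage of the funnel your content and navigation are failing.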

So it is my hope that amid discussions on how to make our blogs look ‘different like everyone else’ via various rendering engines, we also take some time to talk about how to develop a sensible information architecture that best suits the goals of your church and/or charity.

Put another way, we need to remember that if  “Web users behave like wild beasts in the jungle …,” then we probably want to avoid taunting such ‘Informavores‘  with ‘canned content’  when in fact only raw meat will satisfy their hunger.

In other words, just because a number of the speakers are justifiably and understandably using this conference as an opportunity to sell their template and plugin wares – it shouldn’t be to the exclusion of those in attendance who are seeking help with all the other aspects that go along with establishing an effective web presence.

Meanwhile, if you’re headed into town, let me know. If you can’t make the event, keep in mind that there’s a “SitePoint Podcast coming to WordCamp Raleigh.”

May 2, 2011
by meandean

Making a Ready Defense by Planning for Failure

Originally published May 2, 2008, made some formatting adjustments, and bumped this up.

Bad church web design poster 0008 – contingency planning

Those who fail to plan, plan to fail. While this aphorism is well worn, it is also very true. Here are some simple things you can do with mysqldump, crontab, tar/gzip and a little contingency planning to ensure you don’t lose your sanity when your server crashes upon the shoals of virtual disaster.

Check out these recent tales of real-life virtual horror as told by a variety of news sources from around the globe:

  • The outgoing Italian government posted the entire population’s tax returns on the internet causing a mad scramble which crashed the system.
  • Obama supporters were in for a surprise Monday when an attacker executed code on Barack Obama’s Presidential campaign Website that redirected users to Democratic rival Hillary Clinton’s campaign site.
  • According to police reports, a computer was stolen from the ADT Home Security branch on Sunbeam Center Drive sometime between April 12th and April 13th.
  • Tens of thousands of people were feeling short changed last night after a massive system failure wiped out all the Northern Bank’s ATMs.
  • A statewide computer problem again hobbled the state’s digital driver license system on Friday.

The point is, hardware failures, power outages, software bugs, stolen computers, cross site scripted SQL injections, and/or zombie induced denial of service attacks can all turn your church and/or charity website into a tub of techno-mush quicker than you can recurse a binary tree.

The only real defense against such failures is to plan for them – anticipating them in three ways:

  • backing up your data
  • moving your backed-up data off site
  • having and practicing how to restore backed-up data

Here’s a very simple snippet from an oldie but goldie article entitled “How to backup your MySQL tables and data every night using a bash script and cron:”

# backup data
mysqldump -uroot -ppwd --opt db1 > /sqldata/db1.sql
mysqldump -uroot -ppwd --opt db2 > /sqldata/db2.sql
# zip up data
cd /sqldata/ 
tar -zcvf sqldata.tgz *.sql
# email data off-site
cd /scripts/
perl emailsql.cgi

The article also displays a script on how to email the data off site, not a bad deal if your data is small – such backups being just as simple to restore with this dynamic command line duo of directives:

tar -zxvf sqldata.tgz
mysql -uroot -ppwd db1 < db1.sql

Things get trickier when you have tons of data, in which case it may fit your restoration plan better to back up and restore a database by individual tables. Here is a set of articles describing how to do this, including some script examples you can modify to suit your needs:

Either way, it is then just a matter of putting the shell script on a timer – or, in the vernacular of crontab:

1 3 * * * /usr/home/mysite.com/prvt/tbak.sh > /usr/home/logs/tbak.log
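One refinement worth noting: the snippets above overwrite the same dump file every night. A date-stamped filename keeps a history instead. Here’s a hypothetical sketch in Python (credentials and paths are placeholders mirroring the bash example) that builds such a mysqldump command line:

```python
import datetime

def backup_command(db, user="root", password="pwd", outdir="/sqldata"):
    """Build a mysqldump command whose output file carries today's date."""
    stamp = datetime.date.today().strftime("%Y%m%d")
    outfile = f"{outdir}/{db}.{stamp}.sql"
    return f"mysqldump -u{user} -p{password} --opt {db} > {outfile}"

print(backup_command("db1"))
```

Feed the resulting string to your scheduler just like the bash example, and remember to prune dumps older than, say, 30 days so the disk doesn’t fill.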

If either of these bash-based shell script approaches seems too complex, then perhaps one of the control panel, web-based methods offered in UpStartBlogger’s post “8 MySQL Backup Strategies for WordPress Bloggers (And Others)” will do the trick.

Here are some other related articles that might help; the last two include automagic date stamping of the backup files:

The bottom line is this: just as Peter implores us to make a ready defense in 1 Peter 3:15, so I’m asking you to always be ready to make a defense against anything that endangers the data on your system – so you’re not found tearfully disheveled, cowering in a corner, meek and fearful, mumbling something about how you should have planned for such failures.

You’ll be glad you did – probably at the most inopportune time possible.

April 29, 2011
by meandean

backUpMySQL.pl – is it cool?

Originally published on April 29, 2003, I’ve made some formatting corrections & bumped this up a bit.

Yesterday, Mark Pilgrim’s message of the day was
You know what’s cool? Backups.

Well who am I to argue with such coolness? So in the spirit of “what is Hip“, and myself being in a situation where my host is also changing data centers, I would like to share with you a little utility script I run on my system every night ubiquitously entitled “backupMySQL.pl.”

Basically, this little Perl script takes advantage of naming conventions used by the standard-fare Apache configuration many of us enjoy. That is, our accounts are usually stored in directories such as “/home/USERNAME” and our databases are prefixed with our USERNAME, such as USERNAME_mt. Moreover, a properly configured system will allow you to securely house and run such scripts OUTSIDE the public /public_html &/or /www directory where all your public stuff is published.

With this configuration in mind, I FTP this script in ASCII mode to my root directory (outside HTTP access), then chmod 755 it so it will execute. I then created a subdirectory entitled /dbs, and chmod 755 /dbs so my script can access it. I then went to my control panel and cron’d the job to run every night. Okay, so I lied: I did this all from the command line, but as you can see, you can implement this script without having to bash yourself silly.

One other optional feature I have in this script is the ability to FTP my backup to a friend who hosts a website on an entirely different server and service. I reciprocate in kind for him. What this does is ensure that we have an “off site” backup — based on the principle that if both our servers go down, then we’ve got a much larger issue at hand (how about global thermonuclear war?). So here it is. Use it, tweak it, let me know how you like it — just make sure to check the files from time to time to make sure your backups can be restored.

#!/usr/bin/perl -w
# -----------------------------------------------------------------------
# copyright Dean Peters © 2003 - all rights reserved
# http://www.HealYourChurchWebSite.org
# -----------------------------------------------------------------------
# * Obligatory Legal Stuff *
# backupmysql.pl is free software. You can redistribute and modify it
# freely without any consent of the developer, Dean Peters, if and
# only if the following conditions are met:
# (a) The copyright info and links in the headers remain intact.
# (b) The purpose of distribution or modification is non-commercial.
# Commercial distribution of this product without a written
# permission from Dean Peters is strictly prohibited.
# This script is provided on an as-is basis, without any warranty.
# The author does not take any responsibility for any damage or
# loss of data that may occur from use of this script.
# You may refer to our general terms & conditions for clarification:
# http://www.healyourchurchwebsite.com/archives/000002.shtml
# For more info. about this code, please refer to the following article:
# http://www.healyourchurchwebsite.com/archives/000802.shtml
# * Technical Notes and ASSUMPTIONS (PLEASE READ) *
# this code assumes a standard Apache configuration where
# the $HOME directory is the path above public_html &/or www
# and employs a naming scheme such as /home/YOURACCOUNTNAME/...
# it also assumes that your databases are prefixed with your
# account name, such as YOURACCOUNTNAME_mt
# do NOT under any circumstances run this from a directory accessible
# via HTTP (e.g. public_html/... or www/...)
# it makes system calls, and although it takes no input, just don't!
# this program works best with CRON, e.g.
# 0 0 * * * /home/YOURACCOUNTNAME/backupmysql.pl
use DBI;
use Net::FTP;

# database connection info ...
$host     = "localhost";
$username = "YOURACCOUNTNAME";
$password = "YOURPASSWORD";

# this assumes you have previously created a subdirectory named /dbs
# and have chmod 755 /dbs
$path = "/home/$username/dbs/";
$file = "/home/$username/dbs.tar.gz";

# connect to the database and retrieve a list of your databases
$dbh = DBI->connect("DBI:mysql:host=$host", $username, $password) or die "Bad login info";
$sth = $dbh->prepare("show databases like '$username\\_%'");
$sth->execute();

# for each database ... back it up!
while (@row = $sth->fetchrow_array()) {
    foreach $db (@row) {
        system("mysqldump --opt --user=$username --password=$password $db > $path$db.sql");
    }
}

# you're done with the database
$dbh->disconnect();

# delete the old version -- probably should "grandfather" it
if (-e $file) { unlink $file; }

# create a single, easy to use and transport file
chdir("/home/$username");
system("tar -cf dbs.tar dbs");
system("gzip dbs.tar");
system("rm $path${username}_*.sql") if $path =~ m/home\/$username/;

# OPTIONAL - you can comment this out, or not
# this assumes you have a friend on a different server
# with whom you've made arrangements to hold backups of each
# other's data. This way, if a server fails, you can get it
# from your friend's site
# it also assumes your friend has created a directory for
# you entitled "/backup" -- this of course can be changed
# if your friend sets up an individual FTP account to a directory
# ... which is what I actually do ... I love my friends!
$ftp = Net::FTP->new("MYFRIENDSDOMAIN.COM", Debug => 0);
if ($ftp->login("FTPUSERNAME", "FTPPASSWORD")) {
    $ftp->binary;
    $ftp->put($file, $username . "_dbs.tar.gz");
    $ftp->quit;
}

# bye bye

# bye bye

April 5, 2011
by meandean

Fun with the Twitter Search API and jQuery

During my job search last year, I admitted that “yeah, I’m a bit of an API junkie.” Anyone who’s followed this site since 2002 has probably gone blind once or twice reading posts about SOAP, XML-RPC, RSS feeds and other such programmer protocols and interfaces.

So why should anyone be surprised that today I’m providing a quick how-to code snippet of some fun I’m having with the Twitter Search API, REST, jQuery and JSON?

YES, I know I need to get back into providing posts about content management, effective social media strategies and web campaigns … but for today … please indulge me with one more trip into the land of code.

Some Context

In the process of writing some WordPress plugins leveraging the Facebook API, I thought “why not Twitter?”

However, there are already a multitude of plugins and widgets out there that’ll show my profile.  So I turned my eyes to Twitter Search.

My first thought was to simply write this all up using not much else but the jQuery.getJSON() method. However, this approach doesn’t lend itself well to caching – which in turn could lead to some of you with busy sites getting your widgets blacklisted by Twitter, as their default rate limit for calls to the REST API is 150 requests per hour (see “Rate limiting” on the Twitter API wiki: http://apiwiki.twitter.com/Rate-limiting).
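The usual fix for that rate limit is a short-lived server-side cache: do the real search at most once every couple of minutes and serve the stored result to everyone else. Here’s a minimal sketch of the idea in Python (names and the two-minute TTL are my own assumptions; a WordPress version would follow the same shape in PHP):

```python
import time

class TimedCache:
    """Serve a cached result of an expensive fetch (e.g. a Twitter search)
    until `ttl` seconds have passed, then fetch for real again."""

    def __init__(self, fetch, ttl=120):
        self.fetch = fetch        # function that performs the real API call
        self.ttl = ttl
        self.value = None
        self.fetched_at = 0.0     # epoch time of the last real fetch

    def get(self):
        if time.time() - self.fetched_at >= self.ttl:
            self.value = self.fetch()
            self.fetched_at = time.time()
        return self.value

# three page views, but only one real API call
calls = []
cache = TimedCache(lambda: calls.append(1) or "search results", ttl=120)
cache.get(); cache.get(); cache.get()
print(len(calls))  # 1
```

With 150 requests per hour to play with, a 120-second TTL keeps even a very busy page to at most 30 real calls an hour.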

So now I’m working on a PHP solution inspired in large part by Aaron Jorbin’s post “More Twitter Shortcodes for WordPress” – a must-read for anyone working with feeds or APIs in the WordPress arena.

Hey, so where’s the jQuery & JSON?

All that context aside, I did successfully write a short snippet that uses jQuery to call the RESTful Twitter Search API and then parses the JSON into a dynamic display.

I did this in part because, while I will employ some form of PHP or Perl to cache the Twitter Search, I still might employ jQuery as the rendering mechanism for said cache. Here’s my test code so far:

/* a counter outside the context of setCountdown() */
var seconds2go = 0;

/*
 * the method that sets the visual display of the countdown timer,
 * and triggers getTweet after 2 minutes
 */
var setCountdown = function() {
  if (seconds2go > 0) {
    $("#countdown").html("Seconds until the next refresh:" +
      " <span>" + seconds2go + "</span>");
    seconds2go--;
  } else {
    $("#countdown").html("Seconds until the next refresh:" +
      " <span>0</span>");
    seconds2go = 120;
    getTweet();
  }
};

/*
 * the method goes out to the Twitter A.P.I.,
 * then parses the JSON block into the display
 */
var getTweet = function() {

  /* set everything up */
  var url = "http://search.twitter.com/search.json?q=";
  var query = escape($("#twittersearch").val());
  var display = '<div class="tweetDisplayContainer error">' +
       'no records found</div>';
  var urirex = /(https?):\/\/+([\w\d:#@%\/;$()~_?\+-=\\\.&]*)/g;
  var hashrex = /\#+([\w\d:#@%/;$()~_?\+-=\\\.&]*)/g;
  var thashuri = "http://search.twitter.com/search?q=%23";

  /* A.J.A.X. happens here -> go get the data, then parse it */
  $("#twitterresults").html('<h4><a class="searchlink" href="' +
    url.replace('search.json', 'search') + query +
    '" title="see the search query via Twitter">Testing: ' +
    url + query + '</a></h4>');
  $.getJSON(url + query + "&callback=?", function(json) {
    if (json && json.results) {
      display = '<div class="tweetsContainer">' +
          '<dl class="tweets clearfix">';
      $.each(json.results, function(i, tweet) {
        var ttext = tweet.text.replace(urirex,
          '<a href="$1://$2" title="">$2</a>');
        ttext = ttext.replace(hashrex,
          '<a href="' + thashuri + '$1" title="">#$1</a>');
        display += '<dt class="tweet' + i + '">' +
              '<img src="' + tweet.profile_image_url + '" />' +
            '</dt>' +
            '<dd class="tweet' + i + '">' +
              ttext + ' <strong>via:</strong>' +
              '<a href="http://twitter.com/' + tweet.from_user +
              '" title="tweets by ' + tweet.from_user +
              '">@' + tweet.from_user + '</a>' +
            '</dd>';
      });
      display += '</dl></div>';
    }
    $("#twitterresults").append(display);
  });
};

/*
 * this is where we kick it all off,
 * assumes seconds2go = 0 initially
 */
setInterval(setCountdown, 1000);

As you can see, the most difficult part was getting it all to fit in a readable format on this blog! Well, that and some additional fun with regular expressions.

What you also don’t see in the code are the three HTML elements it expects:

<h2 id="countdown">Seconds until the next refresh: <span>120</span></h2>

<input type="hidden" id="twittersearch" value="deanpeters #smm" />

<div id="twitterresults">no results yet</div>

Todo: I’m thinking the above script could use a bit of animation easing or some other effect so we don’t simply “flash” new results at the user. It also needs to be objectified and wrapped-up as a plugin. More on that as I work on the widget/plugin.

Demo Stuff

I did create a demo page – it’s not pretty, but it effectively shows how to get it done. I’ll craft up some CSS for it later.

It’s basically built off a search of  deanpeters  #smm as pictured below:

twitter search criteria for jQuery test

I’ve also created a .txt version of the file if you’re interested.

Additional Reading

In the meantime, I thought I’d list some of the sites I visited while approaching this exercise. Some good people providing some good examples:


Thanks for all the emails and retweets of late. Good stuff!

February 26, 2011
by meandean

Using Perl’s Net::Twitter to Harvest Keyword Searches

So you’ve decided to dive into social media marketing on behalf of your church and/or charitable organization.

In fact, you’ve been wisely leveraging bit.ly with Twitter or ow.ly with HootSuite to track and measure your outbound links — but you find yourself in need of a more ‘industrial strength‘ means of tracking who is saying what about your organization or an upcoming event.

You also want to speed up your WordPress blog as it’s been gagging when your Twitter RSS feed goes all 503 on you because Ashton Kutcher tweeted about his toenail clippings.

Recipe for Success

As I mentioned in my post last Tuesday entitled ‘Strategy vs. Tactics and your Social Media Activities,’ I’ve been playing around with some of the cool social networking tools one can find in the CPAN library.

Today I want to provide a quick snippet on how to use Net::Twitter to write a simple Perl program to harvest a search.

To do this, you’ll need to install the Net::Twitter library, which will likely require root or sudo privileges. If you don’t know what root or sudo means, then you’ll want to contact your hosting provider.

That said, once you get it installed, the next step is to go to the Twitter Search page and create an advanced search. The resulting query string should give you all the parameters you need, for example:

Based on the above example, I created the following script using the nano editor, in a file called ‘eastertweets.pl‘:

use Net::Twitter;
use Scalar::Util 'blessed';

# Just the Search API; exceptions thrown on error
my $nt = Net::Twitter->new(traits => [qw/API::Search/]);

eval {
   # Parameters: q, callback, lang, rpp, page, since_id, geocode, show_user
   my $r = $nt->search({
      q => "\"easter service\" OR \"sunrise service\"",
   });
   for my $status ( @{$r->{results}} ) {
      print "\@$status->{from_user}";
      print "\t$status->{created_at}\n";
      print "\t\t$status->{text}\n";
      print "-----------------------------------------------------\n";
   }
};
if ( my $err = $@ ) {
   die $@ unless blessed $err && $err->isa('Net::Twitter::Error');
   warn "HTTP Response Code: ", $err->code, "\n",
        "HTTP Message......: ", $err->message, "\n",
        "Twitter error.....: ", $err->error, "\n";
}

Once I created the file, it was simply a matter of making it executable, then calling it:

chmod a+x eastertweets.pl
./eastertweets.pl

So Why Bother?

Now this approach by itself is a lot of work for little return. However, here are some things you might want to do with the sample above that would provide some big return value:

  1. feed this into a SQL database history via Perl DBI;
  2. create comma separated values and pipe it into a running log file;
  3. aggregate the returns with other searches into a single RSS file on your server for both the sake of speed and feeding localized dashboards;
  4. grep the returns for other key words, sending email notifications on hot items, while deleting those spammy items that make your feed so noisy;
  5. create a RESTful web service that dynamically feeds your WordPress blog select queries using Ajax via jQuery.
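Item 4 above is the one I’d start with. As a hypothetical illustration in Python (the keyword lists and the harvested tweets are made up), classifying each status boils down to a couple of `any()` checks:

```python
# made-up harvested search results
tweets = [
    {"from_user": "gracechurch", "text": "Join our Easter sunrise service at 6am!"},
    {"from_user": "spambot99", "text": "FREE iPad!! click here http://sketchy.example"},
    {"from_user": "stpauls", "text": "Easter service times posted on our site"},
]

HOT_WORDS = {"sunrise", "easter"}            # trigger an email notification
SPAM_WORDS = {"free", "click here", "ipad"}  # candidates for deletion

def classify(tweet):
    text = tweet["text"].lower()
    if any(w in text for w in SPAM_WORDS):
        return "spam"
    if any(w in text for w in HOT_WORDS):
        return "hot"
    return "normal"

print([classify(t) for t in tweets])  # ['hot', 'spam', 'hot']
```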

The point is, once the data is captured, you can pretty much do anything you want with it programmatically.

For me, I’m thinking it might be fun to grab user IDs and feed their demographic information into some sort of analytics engine; or at least have some fun with Google Maps.

Anyway, enjoy the example. If you expand on it, don’t forget to come back and provide a link. I’d be interested to see how this snippet evolves.

October 3, 2010
by meandean

Setting up multiple test sites in XAMPP via virtual sites

It’s NEVER a good idea to test new designs or programs, or to learn new stuff, on a production website. This article describes how to create multiple virtual servers on a Windows 7 platform using XAMPP to create a perfect Linux/Apache-like test bed.

XAMPP + Win7 = great platform to test WordPress, MovableType and Drupal

Some Context

I’m in the process of re-factoring some websites I’ve let go fallow far too long. Part of this process includes setting up a Linux-like test site, via XAMPP, on my brand new Windows 7-driven Lenovo U350.

Yeah, I know, that was a lot all at once, so let’s break some of this down for those of you who don’t code for a living:

What’s XAMPP?

Wikipedia defines XAMPP as follows:

(pronounced /ˈzæmp/ or /ˈɛks.æmp/[1]) is a free and open source cross-platform web server package, consisting mainly of the Apache HTTP Server, MySQL database, and interpreters for scripts written in the PHP and Perl programming languages …

… The program is released under the terms of the GNU General Public License and acts as a free web server capable of serving dynamic pages. XAMPP is available for Microsoft Windows, Linux, Solaris, and Mac OS X, and is mainly used for web development projects.

In short, XAMPP gives me a Linux/LAMP development platform on a Windows based machine.

My Situation

Whether it’s learning something for work or working on a church website, I often find myself jumping between languages such as Perl, PHP and Python … and content ‘manglement’ systems such as WordPress, Drupal and MovableType. I find it’s easier to keep things organized if I:

  1. keep each project in its own path
  2. establish a virtual server for each project
  3. enter the project name in the address bar of my browser

Getting it done

By default, “localhost” is the domain name for your own PC; it resolves to IP address

But just as a hosting provider can support several domain names on a single IP address, so too can your Windows system.

Below are the steps to get this done:

Step 1 – identify the new host

Unlike Windows XP or Vista, in Windows 7 you’ll need to right click on the Notepad program and “Run as Administrator” as pictured below:

Notepad - Open as Admin

This is because the file we want to edit is now protected. That file is located at:

notepad C:\Windows\System32\drivers\etc\hosts

Once you’ve opened the file, on or about line 23, edit your file so it reads:       localhost       drupal

Save it and close your Notepad editor, so you don’t shoot yourself in the foot in admin mode.

Step 2 – establish the virtual host

Keep in mind, the primary purpose of XAMPP is to give you an Apache server that runs on your local machine.

That in mind, you’ll need to edit one more file:

notepad C:\xampp\apache\conf\extra\httpd-vhosts.conf

Once in, you’ll want to modify it so it reads:

NameVirtualHost *:80
<VirtualHost *:80>
 ServerAdmin postmaster@dummy-host.localhost
 DocumentRoot "C:/xampp/htdocs"
 ServerName localhost:80
 ServerAlias localhost
 ErrorLog "logs/dummy-host.localhost-error.log"
 CustomLog "logs/dummy-host.localhost-access.log" combined
</VirtualHost>
<VirtualHost *:80>
 ServerAdmin postmaster@drupal-host.localhost
 DocumentRoot "C:/xampp/htdocs/drupal"
 ServerName drupal:80
 ServerAlias drupal
 ErrorLog "logs/drupal-host.localhost-error.log"
 CustomLog "logs/drupal-host.localhost-access.log" combined
</VirtualHost>

Note, in the default XAMPP install, the above is commented out, and the hosts are dummy and dummy2. I simply un-commented everything and renamed dummy2 to drupal.

Step 3 – Restart Apache

Restart your Apache server. The easiest way to do this is to stop and start the server through the XAMPP console, as pictured below:

XAMPP Console

Step 4 – Test It

Finally, you’ll want to test it by entering “drupal” in the address bar of the browser of your choice.

Before you do that, you may want to create the directory C:\xampp\htdocs\drupal …

… and then add an index.html, .php, .pl OR .py file to provide the ubiquitous “Hello World!” to demonstrate everything is running as planned.


Additional Resources

I’m not the first person to write on this topic, nor will I be the last. That said, here are some other sites that offer similar tutorials, in case the one above is still as clear as mud.

Why Bother?

Some of you may be wondering: why bother at all? Why not just work on your live site?

Personally, as an IT professional with a couple of decades of experience, I can say with utter certainty – backed up with copious examples – that this is a recipe for disaster.

Instead, why not simply take an old box and install a Linux distribution such as Ubuntu or Fedora … or do what I did and add XAMPP to a new box.

Either way, you’ll be glad you did when one of your tests or learning experiences fries your non-production site.