
18 May 2007

Semi-Reliable Periodic Commands

Impromptu

cron is great. Using cron, you can schedule commands to be run at regular intervals or at specific times. One of the major drawbacks of cron is that it doesn't generally keep state regarding a job's status. If a job is scheduled to run at midnight, but the system is powered down at midnight, the command will never be run.

There are a few solutions out there to take care of this shortcoming, but many are designed for system-wide use only. What if you don't want to intermingle your personal jobs with the system-wide ones? Here is a fairly simple script that works as a wrapper, providing a little insurance to help make sure jobs get run.

#!/bin/bash

export P_ENV=~/.profile
export P_TRACK=~/.p_track

# Pull in the user's environment, if available.
if [ -e "$P_ENV" ]; then
    . "$P_ENV"
fi

# Build a timestamp that only changes once per period.
case "$1" in
    h)
        export P_NOW=$(date +%Y-%m-%d-%H)
        ;;
    d)
        export P_NOW=$(date +%Y-%m-%d)
        ;;
    w)
        export P_NOW=$(date +%Y-%U)
        ;;
    m)
        export P_NOW=$(date +%Y-%m)
        ;;
    *)
        echo "ERROR: Term not specified. Must be one of h, d, w, m."
        exit 1
        ;;
esac

if [ -z "$2" ]; then
    echo "ERROR: Command to run not specified."
    exit 2
fi

export P_TAG=$(echo "$2" | sed -e 's/[^A-Za-z0-9]/-/g')
export P_FILE=$P_TRACK/$P_TAG-$P_NOW

# Make sure the tracking directory exists.
mkdir -p "$P_TRACK"

if [ -e "$P_FILE" ]; then
    # The command already ran this period.
    exit 0
else
    rm -f "$P_TRACK/$P_TAG"*
    echo "Executing $2 at $(date)"
    $2 $3 $4 $5 $6 $7 $8 $9
    P_STATUS=$?
    echo Done

    # Only record a successful run (exit status 0); failures will be retried.
    if [ "$P_STATUS" -eq 0 ]; then
        touch "$P_FILE"
    fi
fi


The script has 4 operating modes. It can help ensure commands are run hourly, daily, weekly, or monthly. It uses empty files in a configurable directory (~/.p_track by default) to keep tabs on the last time each command was run. Entries in the tracking directory are only created if the command returns exit status 0 (no error).

To use this wrapper, place it and a period (h for hourly, d for daily, and so on) in front of commands in your crontab like so:

*/20 * * * * ~/bin/periodic h ~/bin/command param1 param2


The above will check every 20 minutes to see whether the command needs to be run, but will only execute it once per hour. The advantage of wrapping individual commands instead of using periodic command directories (check out the run-parts command) is less administrative overhead.
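
For reference, the other periods follow the same pattern; these entries are a sketch with hypothetical command names:

*/20 * * * * ~/bin/periodic d ~/bin/daily-report
*/20 * * * * ~/bin/periodic w ~/bin/weekly-cleanup
*/20 * * * * ~/bin/periodic m ~/bin/monthly-archive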

Note: The "real" periodic command is generally used to run system-wide periodic commands, often stored in /etc/periodic. UNIX-y systems tend to use other mechanisms and directories to do something similar.

Update 1: Changed the script from a group of if/elif statements to a case statement (thanks for pointing that out, Dave) and added a quick import of .profile or another file to set up the environment.

16 May 2007

Flash Objects and the Z-Index

... On my Mac using Safari the drop down menus appear. When I move my cursor down on the drop down, the rest of that drop down disappears. In IE the drop down menus go only as far as the top of the Flash file. I am not sure if it does the same in Mozilla. Any way to circumvent this? Perhaps I could just do a pop-up page? - Michael

Yes! This is easy. First, you'll need to add a param tag to your object tag. Set the name to wmode and the value to transparent.

<param name="wmode" value="transparent" />

Then, force your Flash object to be below everything else by changing its z-index to 0. For the unfamiliar, think back to geometry class in high school: you had X, Y, and Z axes. In the user interface world, Z axis values specify how objects are stacked; they typically don't deal with perspective as you would see in a 3D game.

For some reason, applying z-index directly to a Flash object doesn't work properly. Wrap your Flash object in a div and apply the z-index to that. Note that z-index only applies to positioned elements, so give the div a position as well.

<div style="position: relative; z-index: 0">...</div>

That's all there is to it!

Update 1: Unfortunately, the above code only works for Internet Explorer! The object tag is for IE only. Most embedded Flash content is wrapped in two tags in order to support both IE and other browsers: the object and embed tags. In order for this fix to apply to all browsers, modify the embed tag as well, adding the wmode attribute with a value of transparent.
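
As a sketch, the combined markup might look like this; the movie name and dimensions are placeholders, and the classid is the standard Flash boilerplate, so adjust everything to match your own embed code:

<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="400" height="300">
<param name="movie" value="movie.swf" />
<param name="wmode" value="transparent" />
<embed src="movie.swf" type="application/x-shockwave-flash" wmode="transparent" width="400" height="300" />
</object>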

08 May 2007

Silhouette Clone: Part 3

Impromptu

Previously, we created a Subversion repository and automatically populated it with data from a directory which was shared over the network. This allowed users to work on the share while automated processes on the server stored versioned copies of the directory for later access. Now we will provide point-in-time recovery options for files in our repository without making users install and use a Subversion client.

Subversion repositories can be accessed over the network via WebDAV by means of special Apache HTTPD modules. The problem with this approach is that old versions aren't accessible. Even after employing simple workarounds to present users with prior versions, users would still have two places to look for data: the actual network share and the URL of the WebDAV interface to the repository.

We can get around this by using WebDAV to serve up the Subversion repository and then re-sharing the WebDAV resource using the same protocols we shared the original network share with. Confused? Follow along.

The first thing we need to do is implement point-in-time tags in our Subversion repository. Subversion can copy files from one location in the repository to another without duplicating the file data. This means we can implement tags by simply copying files! For minimal impact, we will perform this copy entirely within the repository.

Commonly, Subversion repositories have a trunk/ or head/ directory where most of the work happens. We will also add a point-in-time/ directory to store our point-in-time copies. An easy way to create these directories is to use the svn mkdir command on our repository.

$ svn mkdir file:///path/to/repository/head -m "Created head/"
$ svn mkdir file:///path/to/repository/point-in-time -m "Created point-in-time/"


Note the use of the -m option to specify a commit message. We are performing operations directly on the repository and are therefore creating new revisions of it. Subversion demands you leave a message (even an empty one) when creating a new revision.

Now that we are using a head/ directory, all of our day-to-day work should be performed there. Users do not need to be aware of the head/ directory at this point so we will simply pretend that head/ is the root of our repository.

$ svn checkout file:///path/to/repository/head /path/to/working/copy


Any changes to the working copy will be committed to the head/ directory in our repository transparently.

Now that we have a repository and working copy set up to utilize a head/ directory, we can continue as we did in parts 1 and 2 to enable automatic commits. From this point, implementing point-in-time copies is quick and easy. A simple shell script to copy head/ to a subdirectory of point-in-time/ at regular intervals will get the job done.

#!/bin/bash
svn copy file:///path/to/repository/head "file:///path/to/repository/point-in-time/`date +%F\ %T`" -m "Point-in-time marker added"


Note that while we could put the above command in the same file as our earlier commit commands, we don't have to. Separate scripts let us use separate schedules for our actual backups and our point-in-time tags. This allows us to back up data frequently without presenting a user who is trying to recover a file with an overwhelming number of file iterations to wade through.
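
For example, a pair of crontab entries along these lines (the script names are hypothetical) would commit changes every 15 minutes but only create point-in-time tags once an hour:

*/15 * * * * /path/to/auto-commit.sh
0 * * * * /path/to/point-in-time-tag.sh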

Once our backup commands and point-in-time tag commands are running at regular intervals, we will have a repository layout that is fairly self-explanatory.

$ svn list file:///path/to/repository
head/
point-in-time/

$ svn list file:///path/to/repository/point-in-time
2007-05-08 17:00:00/
2007-05-08 17:15:00/
2007-05-08 17:30:00/


Note that by modifying the date format string in our point-in-time tag command, we can change how subdirectories of our point-in-time/ directory are named. 24-hour time is used instead of 12-hour time so that directories always sort in chronological order.

Now that we have our data efficiently stored and laid out, we have to provide access to it. Most file sharing packages do not understand Subversion repositories, so we will have to build a bridge. We can start by using Apache HTTPD to provide WebDAV access to the repository. After installing and enabling the mod_dav_svn module, a few lines in an Apache configuration file will do.

<Location /repository>
DAV svn
SVNPath /path/to/repository
</Location>


Ideally, you will want to lock down this location using a combination of users, passwords, and IP addresses. See the Apache documentation for more information.
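
As a minimal sketch, assuming basic authentication with a password file created by htpasswd, the Location block above could grow into something like this:

<Location /repository>
DAV svn
SVNPath /path/to/repository
AuthType Basic
AuthName "Versioned Backups"
AuthUserFile /etc/httpd/repository.passwd
Require valid-user
</Location>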

Once Apache is reloaded and serving the repository, install davfs2. When installed properly, you can mount the Subversion repository via WebDAV as you would any other file system. A simple setup is to create a directory for the network share and place your data in a subdirectory. This lets you mount the Subversion repository alongside your data.

$ mkdir /path/to/network/share
$ mkdir /path/to/network/share/data
$ mkdir /path/to/network/share/backups

$ mount -t davfs -o ro,noaskauth http://localhost/repository/point-in-time/ /path/to/network/share/backups


Note that you do not want to allow the mounted backup directory to be modified; hence the ro (read-only) mount option above.

Mirroring our Subversion repository layout in our local file system seems silly, but we used this setup for a reason. Providing users direct access to the live Subversion repository as the live network share would generate up to 3 new revisions in our repository each time a file is saved! This is due to the rename-write-delete method commonly used to avoid data loss when saving files. Multiple revisions aren't too bad, but the way in which these revisions come about prevents Subversion from storing data efficiently.

Using Subversion as a (mostly) transparent replacement for Microsoft Windows Server's Shadow Copies for Shared Folders can be quite cumbersome, but it's a viable alternative. Current developments in Linux file systems will likely render this workaround obsolete in the near future. Until then, take it one step at a time and enjoy automatic, versioned backups of your network shares.

07 May 2007

Silhouette Clone: Part 2

Impromptu

Read Part 1

Now that we have already set up a Subversion repository as well as automatic file additions and commits, we need to account for deletions and provide access to versioned data.

When a file is deleted from a Subversion working copy without the proper procedure (which is pretty much what we are going to do), Subversion is shocked not to find the file when checking repository status.

$ svn status /path/to/working/copy
! deleted-file


All we need to do is let Subversion know we want to delete the file using the svn delete command. As in the previous installment, a little finagling from a shell script gets the job done.

#!/bin/bash
svn status /path/to/working/copy | grep ^\? | cut -c 8- | xargs svn add
svn status /path/to/working/copy | grep ^\! | cut -c 8- | xargs svn delete
svn commit -m "Automatic snapshot" /path/to/working/copy


You may note that we now have two calls to the svn status command. Below, the script is rewritten to cache the output of that command in order to improve performance, but I wanted to show this format just once to compare with the previous version of the script.

#!/bin/bash
# Cache the svn status output in a temporary file so we only run it once.
P_STATUS_FILE=$(mktemp)
svn status /path/to/working/copy > "$P_STATUS_FILE"
grep ^\? < "$P_STATUS_FILE" | cut -c 8- | xargs svn add
grep ^\! < "$P_STATUS_FILE" | cut -c 8- | xargs svn delete
rm -f "$P_STATUS_FILE"
svn commit -m "Automatic snapshot" /path/to/working/copy


Unfortunately, without handling file moves and renames through Subversion, they cannot be stored efficiently. A moved or renamed file looks like a missing and new file pair to Subversion.

$ svn status /path/to/working/copy
! old-file-path
? new-file-path


Even though we can't handle this as efficiently as a proper Subversion move would, we can still handle it: the add/delete pair above covers file renames and moves with no additional work.

Next time, we will provide access to the versioned file system over the network without the need for clients to use the svn command line tool.

05 May 2007

Silhouette Clone: Part 1

Impromptu

A feature of a server operating system I will not name enables administrators to create snapshots of network shares automagically. This is above and beyond storing data on a server to facilitate regular backups. Users who clobber each other's work can view the share as it was previously and recover missing or modified documents themselves. If this sounds familiar, it should: it's simply automated version control.

Subversion is a great tool for storing versioned copies of file systems. Combined with Apache HTTPD and a few modules, access to versioned, writable Subversion repositories over the network is possible, but not ideal in many situations. In order to prevent data loss, many applications do a 3-step shuffle when saving files. Even this would be fine, except that the shuffle doesn't always happen efficiently over WebDAV, the protocol used to share Subversion repositories in this manner.

A decent compromise is to share data over the network as one normally would, and automatically commit changes to a Subversion repository at regular intervals.

Setting up a Subversion repository is quite simple.

svnadmin create /path/to/repository


After that, simply check out your (empty) repository.

svn checkout file:///path/to/repository /path/to/working/copy


Then, simply share the working copy via the method of your choice. You may want to restrict access to the .svn directory to prevent accidental destruction.
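
If the share happens to be served with Samba, for instance, a veto files entry in the share's definition should hide and block that directory (a sketch, assuming Samba; check your file server's documentation otherwise):

veto files = /.svn/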

A simple shell script can be run from cron at regular intervals to add new documents to the repository. Modified documents will be committed automatically by the commit command.

#!/bin/bash
svn status /path/to/working/copy | grep ^\? | cut -c 8- | xargs svn add
svn commit -m "Automatic snapshot" /path/to/working/copy
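
To schedule the script, a crontab entry along these lines will do (the interval and script path are placeholders):

*/15 * * * * /path/to/snapshot.sh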


Once your cron job starts firing, you will have point-in-time snapshots of your share. In future posts, we'll deal with deletes and simple remote access to the versioned data.

21 April 2007

Simple Command Line Text Processing

Impromptu

I almost didn't write this, because I decided to check the latest goings-on at GnuJersey.org and happened to see that Dave had posted something about sed. I didn't want to be a copycat. Then I remembered that nothing is original anymore. Plus, aside from these mentions, you will find no sed (or anything else from Dave's post) here. So here we go.


The other day I needed to sort a few dozen lines of text. For some reason, the behemoth text editor I was using didn't have a sort function. (What gives?) A commercial competitor I recently switched from had this functionality, but I didn't want to reinstall it just for one quick sort. Instead, I turned to the old standby of the UNIX user: piping for text processing!

In order to demonstrate some key functionality, I am going to operate on the contents of a simple text file. The file contains the top 10 most common passwords and respective frequencies, according to this article. For reference, here are the contents of the file:

letmein 1.76%
thomas 0.99%
arsenal 1.11%
monkey 1.33%
charlie 1.39%
qwerty 1.41%
qwerty 1.41%
123456 1.63%
letmein 1.76%
liverpool 1.82%
password 3.780%
arsenal 1.11%
123 3.784%


In order to understand what is going on, you need to be familiar with 3 redirection operators used in the UNIX world. They are as follows.


  • < reads data from a file into the command on the left

  • > writes data from a command into a file on the right

  • | forwards data from one command to the next



Redirection is actually far more complicated and capable than that, but that is another post.

The first key command is sort. sort does what you may expect: it sorts lines in a text file.

$ sort < demo.txt
123 3.784%
123456 1.63%
arsenal 1.11%
arsenal 1.11%
charlie 1.39%
letmein 1.76%
letmein 1.76%
liverpool 1.82%
monkey 1.33%
password 3.780%
qwerty 1.41%
qwerty 1.41%
thomas 0.99%


In order to clean things up, sort is often combined with uniq. uniq preserves unique lines in a text file, but it has a serious limitation: it only works when duplicate lines are adjacent. Therefore, if you want to use uniq, use sort first.

$ sort < demo.txt | uniq
123 3.784%
123456 1.63%
arsenal 1.11%
charlie 1.39%
letmein 1.76%
liverpool 1.82%
monkey 1.33%
password 3.780%
qwerty 1.41%
thomas 0.99%


Another handy one is cut. cut splits lines into fields (think spreadsheet) and allows you to cut out the fields you want to keep. cut can work with any single-character field delimiter, given by the -d option. A comma-separated list of fields to display is given by the -f option.

$ sort < demo.txt | uniq | cut -d ' ' -f 1
123
123456
arsenal
charlie
letmein
liverpool
monkey
password
qwerty
thomas
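
These commands chain nicely. For example, to rank the passwords by frequency, we can feed the de-duplicated list back through sort, this time keying on the second field numerically and in reverse order:

$ sort < demo.txt | uniq | sort -k 2 -rn
123 3.784%
password 3.780%
liverpool 1.82%
letmein 1.76%
123456 1.63%
qwerty 1.41%
charlie 1.39%
monkey 1.33%
arsenal 1.11%
thomas 0.99%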


If you want to get back to your command line roots, or just like the speed and simplicity of throwing a few small, highly specialized commands at your problems, keep exploring. There are several more fun text processing commands available on most UNIX-like operating systems. More complicated tools like awk and sed open up even more quick solutions to common text processing problems.

15 April 2007

Background Bookmarklets

To summarize for the sake of brevity: [How can I load a page with a browser bookmarklet without actually viewing the page?] - Dave

First, to provide a little background: Dave wanted to save bookmarks to a server on the Internet (like del.icio.us), but at a location under his control. More importantly, leaving the current page to do so (as del.icio.us requires) was unnecessary and unwanted.

So, how do we do that? If you said AJAX, like I did at first, you're wrong! The XMLHttpRequest object AJAX uses won't operate across domains for security reasons. So, what is the solution then? We need to use JavaScript to ensure a one-click affair, but the champion of JavaScript and Web 2.0 has left us out in the cold. We must use the old standby of JavaScript and DHTML: the Image object.

The Image object provides some very useful behavior for us.

  • Unless you are crossing the HTTPS boundary, there aren't really any security mechanisms in place by default to get in our way.

  • We can create and manipulate Image objects without ever displaying them, or anything else, to the user.

  • Image objects are fairly cheap to create and use, unlike trying to fire up ActiveX, Java, or Flash.

  • Image objects will load any arbitrary URL we give them immediately and in the background, providing us with flexible, asynchronous behavior.



So, here is the code.

var img = new Image();
img.src = "http://localhost/bookmark/";


But wait! Where is the URL we want to bookmark? Easy: it's in the Referer header sent to the target web server. If you don't want to trust referrers, all you need to do is escape the location.href property and append it to the URL.

var img = new Image();
img.src = "http://localhost/bookmark/" + escape(location.href);


Now, there are a few things you can do with this. Start the entire thing off with javascript: and you've got a suitable bookmarklet. Wrap that in an a tag's href and you have a clickable link you can embed in pages.
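
Putting it all together, a bookmarklet version might look like this (one line; the target URL is, as before, a placeholder):

javascript:(function(){var img=new Image();img.src="http://localhost/bookmark/"+escape(location.href);})();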

If you are interested, here is the mailing list thread this was discussed in.

10 April 2007

6 Steps for Moving Websites

How do I move my web site to another host? - DFRI, and others

If you have a 100% static web site, moving from one host to another is fairly easy. Of course, how often do we get to do things the easy way? Assuming a dynamic web site with some database-housed content (say, a content management system or forum software), there is a very particular sequence of events that needs to happen to minimize pain. Some of the steps below only apply to web sites with database-backed content and can be skipped for sites without it.

Step 1: Set up the new hosting account.


This step is obvious to anyone in the web hosting universe, but is not apparent to others. Moving a web site takes time. If you try to shut down your old service without having a new spot ready, you can wind up in web server limbo. Give your new host 24 hours to get your account set up and DNS records propagated to their outward-facing DNS servers. Some hosts only update their DNS records once every 12 to 24 hours!

Step 2: Lock your existing web site.


If you aren't the only person updating your web site data, you'll need to lock it temporarily. If something changes after you have copied your data, but before you have finished, you'll have two separate versions of your data! If you don't want to get a stack of e-mails that say “Where did my post go?”, don't skip this step.

Step 3: Copy your data.


All of your files and database data will need to be copied to your new web host. This should be fairly easy, so long as you don't forget the incidentals. Be mindful of file permissions, as some files may be created by the web server, and a loss of proper permissions could break your web site. The biggest culprits here are upload directories, uploaded files, and cache files. Consult your original deployment instructions or setup manuals if you are unsure about file permissions. Also, keep a close eye on database host names, database names, user names, and passwords. If any can't be copied exactly, you'll need to note the changes for later.
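
The exact mechanics vary from host to host, but as a rough sketch (host names, paths, and database names here are placeholders), the copy often boils down to something like this:

$ rsync -az /path/to/old/site/ user@new.host.tld:/path/to/new/site/
$ mysqldump -u dbuser -p dbname > dbname.sql
$ mysql -h sql.new.host.tld -u dbuser -p dbname < dbname.sql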

Step 4: Connect your old web site to your new database.


If you have any dynamic content, you will need to point the code on your old host to the database on your new host. Keep in mind that this is against the terms of service for many web hosts, and not even possible with many. If your new web host won't let you do this, find another one and go back to step 1. If your old web host won't let you do this, well, it's a good thing you are moving. A few quick edits to configuration files should be all that is necessary. When you are done, unlock your existing web site, but leave any file uploads disabled. It's OK; all of those edits to the database are happening on your new web host.

Step 5: Connect your new web site to... your new web site.


You've already copied the files; now you need to reconfigure your code. Make the same edits to files that you did in step 4. Some options may be a little different (localhost vs. sql.hostdomain.tld, for example). Wow, this step was short.

Step 6: Update your DNS information.


The final step is to update your DNS information to point to your new web host. In most cases, this means changing your domain's name servers to the values provided by your new web host. Those with third party DNS hosting (or DNS hosting from the domain registrar) can update the DNS records for the domain. Your new host can provide you with all of the necessary information.

That's it! You're done! It may take up to 72 hours for the switch to be complete, but since you carefully followed the above instructions, it doesn't matter! It will all come together. After you are satisfied that no one is being pointed to the old web host, cancel your old account.

What? You want to move e-mail accounts too? Well, that is a different story...

27 March 2007

Apache HTTPD Virtual Hosts and SSL

How do I configure Apache2 to answer for all subdomains requested? I want to be able to create subdomains on my server using my main domain name. - patty552 @ UbuntuForums

Various statements of misconception in this thread at UbuntuForums.

Apache HTTPD Virtual Hosts make the web go round. They allow a single server to host many web sites with different addresses. In the web hosting world, this allows for efficient, cheap web hosting. For your average person running Apache HTTPD, it means efficient, cheap home networking and web development testing environments.

Virtual hosts are easy to set up; just check the documentation at http://httpd.apache.org/docs/ . That said, there are two main ways to configure virtual hosting, which you have to keep in mind when starting out. One method matches the request's host name, IP address, port, or any combination thereof to a separate block of HTTPD configuration statements. The other method specifies a directory pattern to use for the document root and cgi-bin based on parts of the host name.

For the former method of configuring virtual hosts, all one needs to do is add a wildcard ServerAlias directive to the VirtualHost block for your domain name.

<VirtualHost 1.2.3.4>
ServerName domain.tld
ServerAlias *.domain.tld
DocumentRoot /var/www/
</VirtualHost>
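
One caveat: on the Apache 2.0 and 2.2 versions current as of this writing, name-based matching like this also requires a NameVirtualHost directive for the address, or only the first VirtualHost block will ever be consulted:

NameVirtualHost 1.2.3.4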


For the latter, all one needs to do is match against only the domain name, or include subdomains in the pattern, making sure to create the appropriate directory structure.

VirtualDocumentRoot /var/www/%-2/
VirtualDocumentRoot /var/www/%-2/%-3/


Web hosts tend to use the larger VirtualHost method. Smaller shops, or generic mass hosts (departmental or employee hosting within an organization, for example) will find the latter very helpful, particularly when serving out of users' home directories.

All this is great, but what about SSL? You could start up a separate instance of Apache HTTPD to serve over an SSL connection, but you probably don't want to do that. There are some advantages to that approach, but they are beyond the scope of this piece. The easiest way is to use a VirtualHost block to match connections on port 443, the default HTTPS port. Contrary to popular belief, you do not need a separate IP address to do this.

<VirtualHost *:443>
SSLEngine On
SSLCertificateFile /etc/httpd/ssl.pem
DocumentRoot /var/www/
</VirtualHost>


This works just fine if you are only serving one site over HTTPS. The problem comes in when multiple domain names served from the same machine need SSL. Since the SSL certificate must be presented before the web browser sends its request to the server, the server has no way of picking a domain-specific SSL certificate to use. Name-based matching just won't work for SSL. This is why proprietors of shared web hosting services demand that you purchase a dedicated IP address if you want to use SSL. IP addresses are known before SSL certificates are used, so by matching on IP address, we can use domain-specific SSL certificates.

<VirtualHost 1.2.3.4:443>
SSLEngine On
SSLCertificateFile /etc/httpd/dom1-ssl.pem
DocumentRoot /var/www/dom1/
</VirtualHost>

<VirtualHost 1.2.3.5:443>
SSLEngine On
SSLCertificateFile /etc/httpd/dom2-ssl.pem
DocumentRoot /var/www/dom2/
</VirtualHost>


So, to recap: you do not need a separate IP address to use HTTPS. You do need separate IP addresses to use HTTPS on a server hosting multiple domains that each need SSL.

15 March 2007

Restricting Interactive Access

How can I give someone SSH access to my server without compromising security? - Plenty of People

[How can I] allow shell access but NOT allow [arbitrary] remote commands to be run? - TyphoonJoe @ Ubuntu Forums (Impromptu)

First, every point of access to your system is a potential weak spot. Learn to accept it.

The long and short of this is: if you want to allow someone interactive access to a machine but want to control which programs they can run, you need to do one of two things. Either force a customized shell upon the user that implements some sort of access control, or use regular, old-fashioned file system permissions to lock out certain programs.

Wrapping a shell is best left to experts. There are interrupt issues to handle, as well as some special-case scenarios to worry about. Plus, it's usually overkill!

File system permissions are quick, easy, and effective. There are just six simple steps to take (a sketch of the commands follows the list):


  1. Identify a restricted program or group of programs (such as ifconfig, or gcc and ld).

  2. Create a group for the program or program group.

  3. Add users permitted to use the restricted programs to the corresponding group.

  4. Change the group owner on the program(s) to the specially created group.

  5. Change file system permissions on the program so that only the owner or group can read or execute the program.

  6. Pray you didn't botch steps 1 through 5.
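
Here is the promised sketch, restricting gcc and ld to a compilers group; the group and user names are hypothetical:

# Steps 2 and 3: create the group and add a permitted user.
groupadd compilers
usermod -a -G compilers alice

# Steps 4 and 5: reassign group ownership and drop access for others.
chgrp compilers /usr/bin/gcc /usr/bin/ld
chmod o-rx /usr/bin/gcc /usr/bin/ld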



Keep in mind that many other programs, scripts, and services may need to use what you have restricted. When locking out commands with file system permissions, make sure you are as thorough as possible.

11 March 2007

Mirroring Web Sites

Impromptu

Mirroring a web site, or a subset thereof, is actually very quick and easy. All you need is the handy utility wget. There are several good reasons for wanting to do this.


  • Snap-shotting a compromised web site for off-line analysis

  • Snap-shotting a healthy web site for off-line analysis

  • Setting up a decoy or dummy website

  • Legitimate mirroring to help someone else out with bandwidth



wget has quite a few options. Read the man page if you care. If not, here is a sample command to snatch the web site domain.tld.

wget --mirror --wait=2 --random-wait --force-directories --recursive --convert-links --page-requisites --domains=domain.tld http://domain.tld/


Most of the options above are in their long form so that you can understand what they do without me having to explain each one. One pair worth noting is --wait=2 --random-wait.

Plenty of web hosts and administrators run statistical analysis on their logs. You don't want to set off any alarms, even if your intentions are pure. If some overzealous administrator sees some idiot beating the hell out of their web site, they may decide to teach the wannabe-DoSing bastard a lesson and phone the feds. The two options above are an attempt at keeping your full footprint in the logs from being noticed.

--wait=2 sets a pause of 2 seconds between page fetches, and --random-wait skews this by 0 to 200% per request. Logs will still show a lot of hits within a short time frame, but hopefully you will avoid some flagging thresholds. These two options will also help you dodge DoS filters.

05 March 2007

The Skinny

I am very frequently asked questions about computers, electronics, information security, and other fun topics. I'm tired of answering the same questions over and over again. This archive of questions and answers will handle the repeats for me. Update: There are plenty of questions I have answered and long since forgotten about. There will be no shortage of impromptu posts as a result.

If you have a question you would like to ask, e-mail me at frostycolddrink@gmail.com

As a side note, posts tagged with scenario are discussions of scenarios and techniques from a high-level perspective. Posts tagged with how to are more technical and include practical instruction you can use.