Fiddling With UpRight

UpRight, by Otaku Software, is a one-click file transfer utility aimed at web developers and anyone else who might need to quickly upload a file to an FTP server without going through the normal upload motions. I browse the Otaku site every few months because their software is, for lack of a better word, nifty. UpRight isn't brand new - I passed it over once or twice thinking it was just one more app to install, but after a couple of weeks of serious toying, I'm pretty happy with it.

I have basically three uses for UpRight:

  1. Uploading files to my web site for various reasons
  2. Uploading podcast audio for my church's web site
  3. Quickly moving data to my "general crap" S3 bucket

For each, I'd always had a different workflow involving an AJAX web form, ncftp in cygwin, and Bucket Explorer, respectively. With UpRight, it's one or two clicks, and data begins to move. (Note: It would be one click, but I have more than one upload location, and UpRight nicely nests these for me rather than creating multiple context menu items.)

Really, the S3 access is my favorite. It's rare that I actually need to access my bucket(s) from work - generally all of my usage is from home - however, Bucket Explorer doesn't live up to my expectations. Nothing big, really... it just randomly forgets my S3 credentials and sometimes thinks that I don't have an internet connection. Other than those "features", it works "fine".

UpRight, however, works perfectly for this purpose. It is no more and no less than what it advertises itself to be. It moves selected files/folders, and it does it very well.

Of course UpRight is, like any software application, not perfect. There are a couple of things I'd like to see Otaku improve or add:

  • Handle file overwriting. Right now, it just overwrites without asking. I'd like to see at least the option for a prompt which I can choose to turn off as needed. Per-location prompt preference would be great.
  • SCP support. UpRight would be perfect for the quick uploading I do from work to home, currently done with WinSCP (which I do really like).

One feature I don't have much use for is the ability to take an action after the upload completes. In its current form, this allows you to customize the "completed" dialog, send an email or copy some text to the clipboard. I can very easily envision strong use cases for each of these options, but for me... meh. I'd love to see an option to "run this command" as well so that the upload could trigger a script of some kind.

Overall, I've saved some time with UpRight. It's not a killer app, but it saves a couple of minutes throughout the week, and I really like that. Otaku also makes a few other nifty utilities such as TopDesk (Expose and 3D task switching for Windows) and DeskSpace (think Compiz Fusion for Windows). I've been using TopDesk for a few years, and have had varying degrees of use for it since, I think, version 1.2ish. At the moment, I'm using it to quickly move my windows out of the way when I need to see my desktop, tile my app windows when I have too much going on, and display the pretty 3D task switcher when my laptop's memory isn't completely used up.

Were I to score products, I'd give this one a 7/10 - Not awesome, but certainly very useful. I'll install it next time I reload my system. I rarely reload, however, so you should ignore this and the previous sentence!


Dec 4th, 2007

Is it more enjoyable to look for a job than to look for job candidates?

In my current position, I'm doing a lot of hiring right now. I have a new team to build from scratch, and it's tough! The job market seems to be chock full of candidates who want either way too much money for arguably little experience, or who think that a resume is supposed to tell their life's story.

I mentioned to a co-worker that I was a little frustrated with the process and he asked me a very honest and thought-provoking question: Is it more enjoyable to look for a job yourself than to look for job candidates? He wasn't asking if I'd prefer to find a job... rather if my own experiences trying to find one were better or worse than my experiences looking for a crack team of software engineers.1

My experience with job hunting has been varied - after completing my undergraduate degree, the market was empty. The dot-com bubble had burst and jobs were simply gone. I applied for what positions I could, near and far. The best I could hope for was about 50% as much pay as graduates from the previous class had received. It wasn't really enough on which to build a future (and start paying off student loans, of course), so I chose graduate school.

A year later, new Masters Degree in hand, I searched again. Salary rates were beginning to level off again, but now that I had a Masters Degree, I felt I had "matured" and was looking for "more than just a good paycheck". I found "nothing". I was unemployed for about eight weeks and finally got a nibble from a temp agency working as a level 1 support tech at a corporate help desk, fixing Outlook PST files 8 hours a day. Since then, on the other hand, I've been offered every single position for which I've applied. While that sounds self-inflating, it was really my experiences hunting for jobs (and utterly failing) that got me to the point at which I could go into an interview and secure said offer.

New graduates tend to expect a position. In general, they believe that the world is supposed to "give them" a shiny new career as soon as they graduate. They also expect to be paid well. This is counter-intuitive, as entry-level positions almost never pay "well", and college graduates have exactly zero non-academic work experience (save an internship, on-campus or retail job - they've never been a full-time employee in their discipline). This is one of many, many reasons that it took me 10 weeks to find a job that I didn't even want. I figured two degrees must entitle me to some sort of benefits such as, say, a $50,000 salary (in economically depressed Buffalo) and a cushy position with upward mobility.

Dead. Wrong.

First of all, I didn't "deserve" squat!

Second, getting work takes work, and I hadn't mastered the concept.

Third, a company pays you what they think you are worth as an employee, not what your degree is worth.

So, after being humbled by a short taste of jobless poverty, I changed my perspective: I stopped talking about my degree like it was so hard to obtain, was perfectly honest with myself and interviewers about my lack of professional experience (and my need to increase said experience), and landed the next interview I had.

Gone are the days when I even speak about my college degree work. It stops being relevant after about two years in the working world, or at least it should.

Fast forward to now: I'm in my fifth position since graduating from college. That might sound bad, but it's only the third company, and the first was a six-month contract position. It's the third in which my role includes hiring people to do my bidding (er, work for me), one of which barely counts because I was hiring current students. My perspective is vastly different now than it was then. My resume doesn't include a lengthy technical skills/languages list any more because it's far more interesting to potential employers when you can describe what you can do with real-life experiences. (It still helps to list your top few skills and areas of expertise for keyword searches.) I also stopped listing my responsibilities for jobs I haven't had in several years. Who cares how many students I managed in 2002?

So, as I look for candidates, I've noticed that multi-page resumes have become the norm with every single position, project and language ever used (or even heard of) listed in great detail. I've also noticed folks with 2 years' experience expecting to make a salary grade for those with 5-7. I don't understand the phenomenon, but it helps me to weed out those I'm simply not going to hire!

To answer the question: I think, in some sadistic fashion, I liked being a job seeker more than I like looking for candidates. It boils down to one very important contrast: as a candidate, I can have a direct effect on the outcome of my interview and can even accept a position below my expectations just to pay the bills. As a hiring manager, convincing someone to take a pay cut is nearly impossible unless they came up with the idea themselves, which almost never happens. On top of that, I have to hire people who are capable of doing the job, or else I might not have one! I can't "settle" for a candidate who doesn't really meet the requirements of the position just to fill the team.

The current hiring market is very job seeker-friendly. There are dozens of companies per applicant, so the competitive environment is a huge pain for me as a manager. I find I have to do far more to sell the merits of my company than I do to sell the position itself, which is thankfully fairly easy since the benefits here are pretty darn good.

I hadn't intended to blog about management geekery, but it's what I do, so it makes sense. I'll follow up at some point with thoughts on interviewing, coaching and any other tricks I pick up along the way.

  1. At least, I don’t think he was intimating I should look elsewhere!

Nov 16th, 2007

The art of the (cheap) pen

For many years now, I've tried to find the perfect pen for note taking and everyday use. There have always been four requirements:

  1. My handwriting must appear legible when I write well. This means it should also not smear or bleed too easily.
  2. One pen must cost less than $5, so I'm ruling out expensive gift pens and nice fountain pens. (i.e. no Watermans here)
  3. I should be able to buy them anywhere in at least two, but optimally three, colors.
  4. It should reliably produce constant, steady ink for at least 3 months, if not longer.

This shouldn't be very difficult, right?

There are literally hundreds of varieties of pens out there. However, in my search for the ultimate cheap pen, I've tried everything from the Pilot Precise V5/V7 line (and their retractable cousins) to the Pentel Needle Tip, the Uniball Jetstream and Vision, to the old school Zebra F-301.

All of them failed for one reason or another. The Uniballs don't last long enough. The Pilots smear easily (but otherwise they're ok). The Pentel doesn't last very long and bleeds like crazy. The Zebra... well, I love the Zebra, but sometimes they plain don't write and I have to shake the pen repeatedly or scribble on a free area of paper. They also don't write very boldly. There are uses for that, but not when other people might need to read my notes.

I thought, perhaps, my office would stock some good cheap pens, but they're almost always Bic Round Stics, and I hate Bic pens. They never produce a constant stream of ink and are worth exactly the 5 cents they usually cost. Plus, every time I see a Bic pen cap, I think about people chewing on them.

A couple of weeks ago I discovered the new hotness, and ate my last paragraph. Enter the Bic Cristal Gel. Holy awesome cheapness, Batman! The ink writes so smoothly, it's like an expensive pen crammed inside of a 10-cent shell. It's fantastic.

Admittedly, the black ink seems to flow a lot better than the blue, which needs a letter or two before it starts looking good. We don't have any red or green ones here, but I may just head out to Office Depot and pick some up. No verdict yet on how long they'll last; I'll patiently wait and see.

This ends my pen geeking. For now.

Nov 2nd, 2007

Home Backup Project - Part 6: Summary

Throughout this project, I've had to shed pieces of the plan as I realized that they:

  • were too expensive
  • were too time consuming
  • had low WAF (Wife Acceptance Factor)

I stumbled upon a 500Gb drive for under $90 on TigerDirect, which allowed me to keep data off-site in a high-security location for relatively little effort. That was luck... keeping it up to date, well, that's tougher. I can bring the drive home and sync it locally, but who wants to do that on a regular basis? Syncing over the network would be ideal, but corporate firewalls keep me from realizing that potential. The other complicating factor is that one of the items I would like in sync between the two locations is all of my music (all ~200Gb of it). And while the initial copy was done on my LAN at home overnight, my iTunes libraries are in two different OS formats, so even if I did copy all of the changed files back to my home drive, I'm still SOL once the files are there. Somehow I have to know to update my library.

I had a brainstorm in the shower (where all of my good geeky ideas originate) that I could create some sort of complex set of AutoHotKey, iTunesSDK/JavaScript or ActionScript [scripts] that would copy over an updated library XML, shut down the local instance of iTunes, compare the XML files, write out a library modification script, start iTunes, run the script and quit. The trouble for me was determining which library would be the master, and if I didn't want either to be the master, how do you reconcile changes? Maybe create two sets of modification files and queue up the returning file for delivery later on? What if iTunes is in use?

Anyway, maybe that's a good project for a rainy/snowy weekend. I'd need some existing rsync system in place first. I'm getting ahead of myself, and WAY off topic.
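For the record, the rsync piece itself is the easy part. A minimal sketch, assuming rsync is installed; the `sync_music` helper and the paths are hypothetical, not anything from my actual setup:

```shell
# sync_music SRC DST: one-way mirror of SRC into DST.
# Copies new/changed files and deletes anything at DST that no
# longer exists at SRC, so the mirror stays exact.
sync_music() {
  src="$1"
  dst="$2"
  mkdir -p "$dst"
  # -a preserves timestamps/permissions; --delete keeps the mirror exact.
  rsync -a --delete "$src/" "$dst/"
}

# Usage (hypothetical paths):
# sync_music "$HOME/Music" /Volumes/offsite/Music
```

The hard part, as noted above, isn't the copy - it's the firewall in between and the two iTunes libraries not knowing about each other.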

The drive also keeps a copy of my home SVN repository, which I update via a Windows Scheduled Task once per week. So, after about a month or so, these are my results:

  1. Evaluate online backup options. Done. I ended up sticking with JungleDisk for now, until something better comes along, or I discover mozy's client sucks less.
  2. Set up an automated incremental backup of all of my documents/pictures to a local, external drive since JungleDisk is so painfully slow. I'm using rdiff-backup to keep a local copy, but will switch to Time Machine after I upgrade to OS X 10.5.
  3. Install a large SATA drive (>= 300gb) for my workstation (at work) for my music and possible offsite SVN mirror. Done.
  4. Investigate other offsite SVN mirror locations (family? cheap web hosting?) Deferred.
  5. Get my local SVN repository in order. It's in pretty good shape but it can't hurt to re-think the structure a tad. Add a tree for installation media disk images. This is done. I've updated it from both sets of "Software I Install" directories on each operating system. In the process, I created a tree for tools (non-install, runtime exe's), which contains a lot of little things I use once in a while, such as the SysInternals (now Microsoft) suite.
  6. Convert all DVD and CD media to their respective disk image formats and check them into the SVN repository. Done. This was way too easy.
  7. Set up the HDD at work (Done) and rsync my music over. Deferred.
  8. Set up the offsite SVN repository and mirror my repository at home. Create a cron job to do it automatically every night. Done. This works very well.
  9. Scan all of my really important, hard-copy files into PDFs and add them to the Documents tree in my backups. Deferred.

This concludes the first phase of this project. My important data are safe and easily recoverable. My unimportant data are somewhat less safe, but that's fine. Everything runs automatically, and I don't have to think about it much. The best part of this is that my wife doesn't have to know about how all of this works. It just does, and that's high WAF if I've ever heard it!

What's Next?

After all of this, I still have some tasks I'd like to accomplish so that I "feel more secure":

  • Locks for each computer1
  • Scan all of my really important, hard-copy files into PDFs and add them to the Documents tree in my backups.2
  • Investigate other offsite SVN mirror locations.3
  • Rsync music and iTunes library.4

I may do some or all of those some day. For now, I feel relatively secure in my data's security.

  1. This seems silly since the house itself locks, but if the workstations are all locked to their desks, then a thief has to decide whether they budgeted time to hack apart my desk to get my iMac or not.

  2. I plan to do this by the end of the year.

  3. Ideally, this would be at a family member or friend’s house, connected to a machine on which I could run an SVN update automatically. Finding someone willing to let me do that is the tough part, and the reality is that the best option is a low-power linux box with a ~80gb drive that someone lets me toss into their closet. I have the drives and some of the hardware, so really I just need to allocate a couple hundred clams for the guts of the machine… oh, and find someone with a nice internet pipe willing to let me hide a computer in a closet. Details.

  4. That’s a project in itself. We’ll see.

Oct 29th, 2007

Home Backup Project - Part 5: Evaluating Incremental Backup Options

The most crucial piece of this project is keeping the primary family computer, a 20" iMac, backed up. This project might as well never have happened if this one task were incomplete. I evaluated several options for keeping the machine backed-up:


mozy

Since one of the muses for this project was a post over at Computer Zen, I followed his lead initially and signed up for a free account at mozy (using his referral code, of course). Overall, the concept sounded fantastic. They have clients for Windows and Mac, unlimited backups for $5/month (or less, when purchased in bulk) and a fairly solid fanbase of users who are very pleased.

From a security standpoint, mozy also seemed very good. An article over at MacApper noted that files are encrypted with a 448-bit key and all transactions take place via SSL.1

I installed the mac client and started to play with it and was instantly turned off. There's a huge design flaw right up front... the software does not ask me what I want to do; it just starts doing what it thinks I want it to do - spidering my hard drive for likely content to be backed up. Now, even iTunes and WMP ask before taking up significant CPU resources in such a way. Not mozy, no sir. It knows that you want it to do so!

I may be an uncommon geek, but I'm fairly well organized. A child could probably find a file on my computer (without spotlight) simply by knowing what they're looking for. Is it a document? Yes, it's probably in Documents.

Anyway, all I wanted was the dialog that says "pick your files to back up", which is the second part of the interface. I had to wait almost ten minutes before I could do that. But, for the sake of argument, I waited. After that one annoying "feature" everything else about mozy is actually pretty nice. It tells you how much space you have left, how much your backup will take (the trial is limited to 2Gb) and what files you're backing up. You tell it when to do its work, and it goes off into the background and does its business.

So, for $0, I had 1.8Gb of data backed up, which really just amounted to my Documents folder and my iTunes library files. I need about ten times that for my images and other data I want to safeguard this way.

What I liked: Speed of backup; Easy way to download backed-up files or order DVDs as needed; Cost

What I disliked: Annoying up-front user behavior assumption; Immature client - it started crashing randomly after about a week

I'm told that this was a fairly new client... the Windows client is supposed to be much better, so that's good to hear. For my purposes, however, all of the data I care about in my house is on my mac. (My wife's laptop and my linux server are all recoverable with little-to-no effort and don't contain any data that isn't also stored elsewhere. My office workstation gets backed up at work, so I don't need to worry about it here.) I don't much need the Windows client, so to me, the killer app is the Mac client, which was somewhat lousy. Too bad... Scott had such nice things to say about mozy.

One additional note, however. If I were in Scott's position, and were backing up multiple computers with multiple platforms, mozy might make even more sense. If, for instance, I had a true family approach to this, and were trying to keep my family's and my wife's family's computers all backed up in one place, then this might make sense. For us, on one mac, mozy didn't completely fit my needs, purely from a client perspective. Were the client overall more usable and stable, I think I might have stayed.

JungleDisk / S3

Initially, mozy was the only online option I was investigating intensively. I had looked at others (Carbonite, iDisk, etc.) but didn't like various things about their services and stuck with mozy. About a week into the process, I read a comment thread over at LifeHacker about this very idea. Several commenters noted that they had great success with JungleDisk, which uses Amazon's S3 service for storage. The S3 pricing model is pretty cheap, so its barrier to entry was quite low.

JungleDisk installs as an internet drive (similar to iDisk), though it's actually connecting to a service running on your machine. When you add/change a file on this drive, it caches it locally and then queues it for upload to S3. First of all, the upload speed is SLOW. There were times, during my initial 20Gb push, that the speed dropped to less than 1 kbps. To me, that's abysmal. I had to let it run for over a week to complete the upload. It might be faster to download data (I should probably test that, huh?), and that's what's really important now that the backup has completed.

One nice thing about S3, however, was that I could upload data to it, and not leave a copy on my mac. I have about 5Gb of old archived data that, if you remember, I "lost" for a few weeks. I don't need it around all of the time, but I sure do want it somewhere. S3 sounded like a good place.

The client is robust and backs up exactly what you ask it to back up. It runs on whatever schedule you set. I have it running nightly. It costs about $20 after a 30-day trial, but for backup software it's probably worth it.

What I liked: Overall storage cost; Set it and Forget it; Usable for non synchronous data

What I disliked: Client cost; Upload speed


rdiff-backup

rdiff-backup is a unix command-line util to perform backups by, gasp, recursively diff-ing. I know. The name is a real misnomer.

It's written in python, and it's pretty darn snappy. It backed up about 20Gb in 90 minutes (which doesn't seem that fast, but I was copying to mr. slow external "says it's 2.0 but acts like 1.1" removable drive), and then about 12 hours later ran an incremental update in 4 minutes. Not too shabby, I suppose. I also set it up to run in cron every night:

45 04 * * * rdiff-backup -v5 --print-statistics --exclude /Users/shelton/rdiff-backup.log --include /Users/shelton/Documents --include /Users/shelton/Pictures --include /Users/shelton/Music --exclude '**' /Users/shelton /Volumes/backup/BACKUP >> /Users/shelton/rdiff-backup.log

Then, since space is limited on that drive (though not too much), I also set it up to remove increments that are older than three months. I can always trim that down if space becomes a premium:

00 04 01 * * rdiff-backup --remove-older-than 120D /Volumes/backup/BACKUP

...and that's it. Pretty darn simple, I must say. One of the really nice things about rdiff-backup is that the most recent revision is always sitting there as normal files. I can use any standard file system command (cp, mv, tar, etc.) to move/copy files out of that archive in case I lose something. It's not as space-conscious as gzipped incremental tarballs, but it's what makes the most sense for me.

rdiff-backup also has some nifty restore features that seem promising.

After using it for a few weeks, however, I noticed that the include/exclude flags don’t really work as I’d like them to. For instance, I want to include all of ~/Documents, but not ~/Documents/Parallels (because I have more than one Parallels VM, and that takes up a LOT of space). Shouldn’t this work properly?

--include /Users/shelton/Documents
--exclude /Users/shelton/Documents/Parallels

I guess not. It pattern-matches the directories, giving precedence to the include statements, so since /Users/shelton/Documents/Parallels has /Users/shelton/Documents in it, I’m out of luck. I ended up thinking different and moved Parallels up a directory and symlink’d back to it so now I have /Users/shelton/Documents/Parallels -> /Users/shelton/Parallels. It doesn’t know the difference because, well, UNIX is teh r0x. I added an exclude statement for that directory and I’m in business, with about 20Gb reclaimed.
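In shell terms, the workaround boils down to a move and a symlink. A sketch of what I did; the `relocate_and_link` helper name is my own invention:

```shell
# relocate_and_link DIR PARENT: move DIR up into PARENT, then symlink
# it back into place so applications never notice the difference.
# (PARENT should be an absolute path so the symlink resolves anywhere.)
relocate_and_link() {
  dir="$1"
  parent="$2"
  name=$(basename "$dir")
  mv "$dir" "$parent/$name"      # move the heavy directory out
  ln -s "$parent/$name" "$dir"   # symlink back into the original spot
}

# What I actually did, roughly:
# relocate_and_link /Users/shelton/Documents/Parallels /Users/shelton
# ...then added --exclude /Users/shelton/Parallels to the cron job.
```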

While figuring out that mess, I noticed that the pruning task wasn’t quite effective. My drive filled up within two weeks, and I needed to set the prune to 14 days, then 7 days, and then 5 days to recover enough space to even start the following backup. I ended up running the prune task 45 minutes before the backup task every day and setting it to remove anything older than two days, which somewhat defeats the purpose of keeping previous versions in the first place. I may be too busy (or out of town) to notice something’s been gone in under two days. Again, I could just buy a new hard drive, but I’m also aiming for “cheap” in this project. After getting rid of Parallels, I was able to put this back to 7 days, so all in all, this isn’t such a bad option.

rdiff-backup can be used to back up remotely, but you need to provide a remote machine, and unless you can mount it as a drive, you need rdiff-backup to work properly on the remote machine as well. I could not get it to work on my ubuntu server in the basement, so that was somewhat annoying. I thought about using it to sync to S3, but the overall speed of S3 was so slow that it wasn’t worth it.

Time Machine

Leopard isn’t out yet, so I couldn’t fully evaluate Time Machine. However, everything I’ve seen and read leads me to believe that when I do upgrade, I’ll also buy an external drive 2-4 times the size of my internal drive and point Time Machine at it and just pretend it isn’t there. It also doesn’t satisfy the “remote” aspect of this part of the project, but it might be yet another way to keep my data safe.


So what did I end up doing? Well, a little of column A, and a little of column B.

My disgruntled-ness with mozy’s poor client was enough of a detracting factor for me that I ditched it entirely. If I had a windows household, I probably would have kept it. In the end, I stuck with rdiff-backup running every morning. I’m almost certain I’ll switch over to Time Machine once I upgrade to 10.5, but if I do, I’ll probably want a bigger backup drive than what I have.

  1. The last part seems a no-brainer, but having my data encrypted makes me feel all warm and fuzzy inside.

Oct 25th, 2007

Instantbird

Recently released, Instantbird is a cross-platform, multi-protocol Instant Messaging client, built on Mozilla's XUL framework, implementing Pidgin's libpurple for connectivity. This is version 0.1, and already it's off to a good start, supporting all of the libpurple protocols out of the box. There are some very obvious awesome things about this development effort:

  • By using open-source, community-driven platforms, they set themselves up to build an extensive developer community rather quickly
  • Beyond that, by using the XUL framework, developers can very easily create add-ons and themes to extend the existing application, just like the user community currently does for Firefox and Thunderbird
  • No separate interface framework installer (gtk) for non-native environments (windows, mac)

This release is lacking so many of the features of mature clients like Pidgin, Trillian and Adium that it's almost not worth comparing them (e.g. you cannot currently delete a buddy from your buddy list). The roadmap lists their 1.0 target as "Feature parity with Pidgin", which may seem a lofty goal, but their initial dot-release roadmap seems to put them on-target to do that, depending heavily on time and community resources.1

The developers are both French, but do a flawless job maintaining the UI in English, which can sometimes be lacking in non-US-developed applications. On the whole, this app seems to have a good start. My only complaint is that I can't seem to get it to connect to my Jabber server at work, but it connects to my GTalk and AIM accounts without any issue. I duplicated my settings from Pidgin, so maybe there are some odd protocol incompatibilities between XMPP and our version of Jabber.

  1. There are no anticipated release dates listed, which makes sense since these guys probably have day jobs.

Oct 24th, 2007

Home Backup Project - Part 4: Creating ISO images

As part of the ongoing home backup project, I've tasked myself with converting all of my installation media to disk images (.iso files) so as to allow me to recover said media in the event that, say, my house blew up. Worst case scenario, I think. Anyway, I thought this would be really easy on my mac:

dd if=/dev/disk4 of=/path/to/new.iso

This always worked back in my "I only run linux because windows users suck!" days.

This created an unmountable file in both windows and os x. The unix gods have failed me! (In retrospect, I was probably forgetting a command line operator, or something, and therefore actually failing myself, but I digress...) I then thought that perhaps Roxio Toast Titanium would make my day, but to no avail. The solutions I had in front of me evaporated rapidly.
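In hindsight, what I probably should have tried on the mac is hdiutil, which ships with OS X. A hedged sketch, untested against my problem disks; the helper name and the "GAME_DISC" volume are hypothetical:

```shell
# make_iso SOURCE OUTPUT: build a mountable ISO9660/Joliet hybrid image
# from a mounted disc volume or any folder. hdiutil is OS X-only.
make_iso() {
  hdiutil makehybrid -iso -joliet -o "$2" "$1"
}

# e.g., with the disc mounted:
# make_iso "/Volumes/GAME_DISC" /path/to/new.iso
```

Unlike raw `dd`, this builds a proper hybrid filesystem image, which is presumably why the dd output wouldn't mount.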

I did some quick googling and found MagicISO, but it's !(free), and I'm trying to do (most of) this on the cheap. Some more research led me to two tools:

Each seems to create ISOs from CDs fairly well. So far, WarCraft III backed up nicely enough for me to install off of the ISO image on my Parallels VM, so that's confidence-inspiring. On the other hand, I received some very weird errors reading one of the Unreal II disks - and the disk itself checks out fine - so neither of these may be a perfect solution.

However, both of these programs are wicked old... VaporCD was last updated six years ago and hasn't been tested on XP at all. There's got to be a free-ish, newer solution for this somewhere. Perhaps I just haven't stumbled across it yet. Rather than continue stumbling, I kept playing around with both and determined they were, for the most part, crap. Plain and simple.

I'd like this fact to explain their free-ness, but all of the mozilla apps are free (as in beer) and they rock my socks off, so that's right out. (I mean no insult to the developers of either of the above applications. To be frank, I expected something fairly more useful than dd, but my hopes were dashed. End rant.)

I asked for some advice at work and everyone said "Go download MagicISO." So... I did. And it's so much better. It meant "spending $30," but it's fast, and doesn't choke on minor block corruption due to surface scratches, and it's... fast.

Really, really fast.

I blew through all of my games and OS install CDs (about 25 total CDs) in under 90 minutes and just let it run in the background while doing other work. I finished up the rest of my disks the next day, which brought this phase of the "things to do before I get to do what I've been wanting to do" to a very quick close.

Sometimes you just have to "pay" for software. Gasp!

Oct 22nd, 2007

Home Backup Project - Part 3: Subversion Clients

As part of the ongoing home backup project, I've been testing various SVN clients to make sure I have a good working application for both home and work environments. At work I run WinXP Pro and therefore use TortoiseSVN because, well, it's pretty much the best there is. I love the shell integration - it's near seamless. I have local stores from multiple repositories - development code, cross-departmental tools and scripts, my home's "tools" directory, and my home svn archive - all updated as I needed.

On the mac at home... well, that's a different story. I tried SCPlugin, which seemed to be the Mac version of Tortoise. While it started out looking that way (the icons showed up for existing checked-out repos), it wouldn't do a base PROPFIND for my local store, and then it crashed. Hard. And stopped working, even after a reboot. Why? I have no idea. That's the great thing about a mac... software either works or it doesn't, and SCPlugin doesn't.

I then tried svnX, which had some promise, but after 15 minutes of poking, I had no idea how to add anything to the repository. No clue at all. I'm sure it's simple once someone tells you how.

Then SmartSVN. Same deal... I couldn't add a file, only a directory, which seemed wonky. It's probably me, not the software, but I wanted something usable enough that in a groggy stupor, I could update my local store and not need to think twice about what I was doing.

MacSVN also showed some promise, but it crashed on startup, needing a newer version of Berkeley DB than is available for the Mac. I want simple, and hacking up code I didn't write in the first place to use a non-stable version of another software package falls slightly out of scope.

Then I went back to the folks at Tigris and poked around their site. I found RapidSVN, which is a GUI front-end for SVN with some additional bookmark support. That's it... and somehow, this works so much better for me than any of the other options. You check out a local store and update it via Finder (or any other file system tool, e.g. mv/cp/tar), then add/delete/commit as needed through either RapidSVN or the svn command line, and it all works together nicely.
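For the command-line side, the day-to-day workflow boils down to a handful of svn commands. A quick sketch - the repository URL, paths, and file names here are placeholder examples, not my actual setup:

```shell
# Check out a working copy of the repository (URL is a placeholder)
svn checkout https://svn.example.com/home/trunk ~/svn-store
cd ~/svn-store

# Drop a new file into the working copy via Finder, cp, mv, etc.
cp ~/Downloads/SomeInstaller.dmg Applications/Mac/

# Tell Subversion about it, then commit
svn add Applications/Mac/SomeInstaller.dmg
svn commit -m "Add SomeInstaller disk image"

# Later, from any other machine, pull down the latest
svn update
```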

This was my experience - I don't plan on maintaining any source code in this repository (for now), so my uses are pretty one-dimensional - it's a glorified password-protected file storage mechanism with an existing protocol for mirroring, versioning and access from anywhere.

After deciding on a method for utilizing SVN from anywhere, I was able to get my repository in good working order. The other major benefit to using SVN is that I can access the repository via a web interface, and when I need a file from anywhere, that sure is handy. The structure currently looks something like this at the top-most level(s):

  • Applications
    • Mac
    • Other
    • Windows
  • Disk Images

Of note, I plan to use svnsync locally on my svn server to keep a secondary mirror of the repository on an identical drive. There's a great wiki article on Mirroring a Subversion Repository over at the OpenDS wiki.
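The basic svnsync setup looks something like the sketch below (repository paths are placeholders for my actual layout). Note that svnsync needs a pre-revprop-change hook on the mirror, since it rewrites revision properties during the sync:

```shell
# Create an empty repository to act as the mirror
svnadmin create /backup/svn-mirror

# svnsync requires a pre-revprop-change hook on the mirror;
# for a dedicated mirror, a hook that just exits 0 is enough
cat > /backup/svn-mirror/hooks/pre-revprop-change <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /backup/svn-mirror/hooks/pre-revprop-change

# Point the mirror at the master repository, then pull everything over
svnsync init file:///backup/svn-mirror file:///srv/svn/repo
svnsync sync file:///backup/svn-mirror
```

After the initial sync, re-running `svnsync sync` only transfers new revisions.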

Oct 18th, 2007

Home Backup Project - Part 2: Plan!

Previous Posts:

The next step in this project was to visualize the future state - where my data should reside, both in working form and in archive form. Then I had to figure out what's feasible now and what's feasible down the line. I can't just go out and buy a 4 TB NAS, though that thought has crossed my mind several times, and really, that still leaves something I have to grab in the case of a fire.1

Another thought is that I might want a hard backup available at a secure physical location - a bank's safety deposit box, or a family member's house, perhaps. If I had a Blu-ray or HD-DVD burner and could afford to keep a stack of said media lying around, or if I felt like dropping money on a REV drive and its media, then perhaps that would be an easy solution. As it stands, my only option would be to burn 2-3 Dual-Layer DVDs (or one Blu-ray) every 3-4 months and mail them to someone's house. Maybe there's another option out there, but that will require some investigation. I do not want to buy a set of external drives and ship them around. On the other hand, I do have two 60 GB IDE drives sitting in the basement doing absolutely nothing, so maybe that's not a bad option.

This is what I came up with:

Data | Now | Initial Result | Someday
---- | --- | -------------- | -------
Documents (10g) | HDD, External HDD (manual) | HDD, Online Backup, External HDD (automatic) | Add Offsite HDD, NAS
Digital Photos (16g) | HDD, External HDD (manual) | HDD, Online Backup, External HDD (automatic) | Add Offsite HDD, NAS
Music (200g) | External HDD, Limited DVD backup | External HDD, rsync'd HDD at work | Add NAS
Digital Video (2Tb+) | DVD | DVD | Add NAS
Application Installers (8g) | Local SVN Repository | Local SVN Repository, Offsite SVN Mirror | Add NAS svn export
Installation Media (??g) | DVD and CD media | DVD/CD -> DMG/ISO -> Local SVN Repository, Offsite SVN Mirror | Add NAS svn export

To get to Initial Result, the following should be accomplished:

  1. Evaluate online backup options, which seem to be:
  2. Install a large SATA drive (>= 300gb) for my workstation (at work) for my music and possible offsite SVN mirror.
  3. Investigate other offsite SVN mirror locations (family? cheap web hosting?)
  4. Get my local SVN repository in order. It's in pretty good shape but it can't hurt to re-think the structure a tad. Add a tree for installation media disk images.
  5. Convert all DVD and CD media to their respective disk image formats and check them into the SVN repository.
  6. Set up the HDD at work and rsync my data over.2
  7. Set up the offsite SVN repository and mirror my repository at home. Create a cron job to do it automatically every night.3
  8. Scan all of my really important, hard-copy files into PDFs and add them to the Documents tree in my backups.

To get there, I needed to purchase:

  1. A subscription to the online service of my choice:
    • is about $54 per year, though if I sign up for 2 years, I get 3 months free.
    • The JungleDisk client is $20, and Amazon's S3 service is use-based, which also should translate to about $5/month.
  2. A large SATA drive. (TigerDirect FTW!)

Overall, what I came up with is not too bad. Some of the tasks, as they're planned now, look like they'll take a lot of time, but in the end, what they will save me in stress, anguish, annoyance and time spent re-building everything is invaluable.

After even more thinking about this, that last task of scanning important documents seemed a bit out of scope for this project, and I'll pick it up some other time. Suffice it to say that would take a LOT more time than anything else, and I also have a job and a wife.

  1. Still, it’s not a bad idea for the future.

  2. It might be smart to do this locally first since that’s a LOT of data.

  3. This shouldn’t take too much bandwidth after the initial load since I shouldn’t be messing with it too much.

Oct 16th, 2007