Returning from the Tamara, Coorg last week, my only vacation this year, I knew the following week would be nothing short of back-breaking stone-quarry work. So on Monday, just as I was about to get going, my otherwise reliable work notebook began flashing the BSOD. After frantic calls to the IT help desk, they dispatched a helpful hardware engineer at running speed. He patiently went through the system diagnostics and in the end said it was software, requiring a wipe-and-re-image routine. I threw my hands up in exasperation, thinking about my planned work week that had just been flushed.
The good thing is that on any given day, I am fully backed up and up to date. Ten years ago, when I was hit by a bad crash, I vowed never to let this happen to me again. So, every day my scripts run for about ten minutes before I call it a day, differentially backing everything up with the ever-reliable rsync. So when he asked if I needed more time to manage my data on disk, I said no, and that he should continue. The engineer then proceeded to re-image my machine. Here's the thing though, and I am trying very hard not to swear: the official images are never up to date! This makes any freshly re-imaged machine all but unusable for the next 48 hours, as auto-downloading updates and profiled software, punctuated by countless reboots, completely consume the computer. It's often longer, as these down-the-wire updates need the corporate hose.
While waiting for my machine to be back on its feet, my thoughts wandered to the backup routine, the tools I've been using, and our pleasant stay at the Tamara. Then I remembered merging a pull request from a contributor, while still on vacation, who wrote a helpful Makefile for my repo. Inspired by this, and by Endler's wonderful post, I thought of making one for all my backup/restore routines too, consolidating my scattered backup shell scripts into just one Makefile. So, I wrote one this week.
Make is fussy about its format: (a) recipe lines must be indented with tabs only (no spaces!); and (b) no in-line comments, to ensure predictable behaviour.
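To illustrate the tab rule, this two-line rule is valid only because the rsync line starts with a literal tab character (the paths here are placeholders):

```makefile
backup:
	rsync -a ~/work/ /mnt/backup/work/

# Had the recipe line been indented with spaces instead, make would
# abort with: "Makefile:2: *** missing separator.  Stop."
```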
Now, instead of running different shell scripts, I just run make backup (or time make backup, to also know the time taken) or make restore (or time make restore) in the terminal and be done with it. The Makefile can also run only select pre-defined folders; e.g., I can choose to back up only current projects with make curr, and so on. If I run just make, it offers these options:
```
$ make

Makefile for backup and restore routines

Usage:
  make backup    backup all mail projects references
  make restore   restore all mail projects references
  make mail      backup all mail
  make proj      backup all projects
  make curr      backup only current projects
  make past      backup only past projects
  make ref       backup all references
  make cref      backup only current references
  make pref      backup only past references
```
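A sketch of how a few of those targets might be wired up; the directory layout, rsync flags, and help text are illustrative assumptions, not my actual Makefile:

```makefile
SRC  := $(HOME)
DEST := /mnt/backup

# These targets never produce files named after themselves,
# so declare them phony to ensure their recipes always run.
.PHONY: help backup mail curr

# First target in the file, so a bare `make` prints the usage text.
help:
	@echo "Makefile for backup and restore routines"
	@echo "Usage:"
	@echo "  make backup   backup all mail projects references"
	@echo "  make mail     backup all mail"
	@echo "  make curr     backup only current projects"

backup: mail curr

mail:
	rsync -a --delete $(SRC)/mail/ $(DEST)/mail/

curr:
	rsync -a --delete $(SRC)/projects/current/ $(DEST)/projects/current/
```

Composite targets like backup then come for free: they simply list the individual targets as prerequisites.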
With the time option (as in time make backup), 5:44.57 in the example below is the real (clock) time (listed here in mm:ss.ms) the job takes to complete:
```
sent 1,382,904,905 bytes  received 118 bytes  4,014,238.09 bytes/sec
total size is 4,753,411,072  speedup is 3.44
make mail  17.20s user 48.51s system 19% cpu 5:44.57 total
```
This downtime gave me pause, and an opportunity to make my workflow simpler, which I think isn't bad at all.
Update: If a symlink shares its name with a target in the Makefile (e.g., a curr symlink in the same folder as the Makefile), then make curr reports that curr is up to date and does nothing, even if the source and destination curr folders are not identical! To avoid this error, ensure symlink names are not identical to the target names in the Makefile.
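Another way to sidestep the clash, for what it's worth, is make's standard .PHONY declaration, which marks targets as not corresponding to files on disk:

```makefile
# With this line in the Makefile, `make curr` runs its recipe even if
# a file or symlink named `curr` sits in the same folder.
.PHONY: backup restore mail proj curr past ref cref pref
```

Without it, make treats an existing file or symlink named curr as the target's product, sees it has no newer prerequisites, and declares it up to date.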