As some may have gathered from my thread on SSD drives, I’ve encountered a hard drive failure. Indeed, this is the first hard drive that’s gone bad on me in many years, so I guess I’ve been lucky, and that made me complacent.
Hard drive failed with no warning - bang - gone - no chance of recovery.
Point of this post is that I’ve never tested my DR process at all. I rely on Time Machine and some cloud storage, but I’ve never actually tested my process. So when things went wrong I couldn’t be certain I’d get everything back. In theory I should have, but I didn’t KNOW, and that gave me a whole heap of bad vibes going into recovery.
I’ve been lucky - doesn’t look like I lost anything. But, being brutally honest with myself, that’s more luck than judgement and that is not a good feeling.
Sure, we all have our DR processes and I’m not trying to preach; I just wanted to be open and use my DR failings to help prevent others having problems. Your DR means nothing without testing. Believe me, you don’t want the vibes I had: test your DR.
Having had that “Oh Sh*t” moment I’ll be rethinking my processes and I know I’ll be rigorously testing my DR every 6 months. I’ll fire up a blank Virtual Machine and make sure I can recover!
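The core of any DR drill is the same two steps: restore into a scratch location, then verify the restored tree against the original byte for byte. Here’s a minimal sketch; to keep it self-contained it fakes the “restore” step with a plain copy, and the file names are made up. In a real drill that step would be your actual recovery (Time Machine, a clone, whatever you use).

```shell
#!/bin/sh
# Minimal DR-drill sketch: "restore" into a scratch area, then verify
# the restored tree matches the original. The cp below is a stand-in
# for a real restore from your backup tool.
set -e

WORK=$(mktemp -d)
SOURCE="$WORK/live-data"
RESTORE="$WORK/restored"

# Stand-in for the data you care about.
mkdir -p "$SOURCE"
echo "project source" > "$SOURCE/MyProject.xojo_xml_project"

# Stand-in for the restore step -- replace with a real restore.
cp -R "$SOURCE" "$RESTORE"

# The verification is the part that matters: diff -r exits non-zero
# if any file differs or is missing.
if diff -r "$SOURCE" "$RESTORE" >/dev/null; then
    RESULT="PASSED"
else
    RESULT="FAILED"
fi
echo "DR drill $RESULT"
```

The point is that the drill ends with a machine-checked pass/fail, not a feeling that it “probably worked”.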
Indeed, how do others approach DR?
I don’t know how others do it, but I rely on Time Machine as well. Had to use it once when the HD went kaput, and everything worked fine.
I guess the risk with that is that a spike on the power network could kill both the Time Machine HD and the computer.
I do back up some files to an HD that’s not constantly plugged in, but that doesn’t happen as often as it’d need to for me to be safe.
E-mails tend to be in the cloud nowadays, and some files I have on my Mac and on my laptop.
Not fully safe, but I think I have at least 4 copies of the most important files. Unfortunately, only the Mac and Time Machine tend to have the most recent versions.
I do something similar to Dirk. I have a LAN-based NAS that supports Time Machine for my entire network… and it has RAID drives to increase the integrity of the data. For current Xojo projects, I keep a binary AND an XML copy on my dev machine, and duplicate the entire folder on another machine.
I do not use a VCS, contrary to most others… I simply do not trust them… I’ve had a few of them totally muck up the files…
Same here. On my laptop (main machine) I actually have Time Machine set to do 3 backups: two networked ones and one local on a fast USB3 flash stick. Then once a week I do a complete backup of the whole computer to a portable hard drive, and once a month I also do a backup to a different hard drive.
The part I haven’t done much yet is a cloud backup. I have a few key files I back up to Dropbox, but I need to come up with a better solution. I read about a nice app that backs everything up to an Amazon S3 account, which sounds like the way to go; I just haven’t done it. Your story reminds me I need to get on that!
This kind of stuff would make a good article for the magazine. Anyone interested?
I’ve got Time Machine to a NAS as backup for my Mac. Had to use it once after an HDD crash, worked great.
I use Subversion for all Xojo projects. The SVN server is on another NAS. That SVN server has its own off-site backup every hour to a NAS in another building.
That results in one copy of my checked-out projects on my Mac, the original on the SVN server itself, and an off-site backup of that SVN server.
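An hourly off-site mirror like that can be driven by cron and svnsync. A hypothetical sketch follows; the repository paths and URL are placeholders, not my actual setup:

```shell
# Hypothetical crontab on the backup NAS; paths and URLs are placeholders.
# svnsync only transfers revisions the mirror doesn't have yet, so
# hourly syncs stay cheap. The mirror must be initialised once first:
#   svnadmin create /volume1/svn-mirror
#   svnsync initialize file:///volume1/svn-mirror http://svnserver/repos/projects

# Pull new revisions from the master repository every hour:
0 * * * *  svnsync synchronize file:///volume1/svn-mirror
```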
Seems like Time Machine is a constant.
When was the last time you guys tested that you could recover? Now in my diary to do a full test every 6 months.
Time Machine? Too long ago…
Not since that HDD crash about 2 years ago.
SVN backup, about 6 months ago, which reminds me that it’s time to do that again.
I have never had to do a FULL recovery from Time Machine… but I have screwed up files (human error) and gone into the Way Back Machine and pulled up a copy from an hour or less back… Saved my bacon a few times
I had my first data losses over 30 years ago, with tapes.
Then I had a floppy drive on my VIC-20 and learned from my wealthier friends with Apple ][s how important it is to have a backup. While backups on the Apple could be made within 2 minutes (remember Locksmith?), it took about half an hour on the VIC-20 / C-64.
So I did some true reverse engineering, even writing my own tools for it, and ended up writing a copy program that did the copy in 5 minutes. That was quite the success, and while I wrote it for purposes of backups, it got very popular with the “let’s make copies of these games for my buddies or sell them” crowd. Enormously popular.
But it also started my understanding of disk drives in general, along with file systems. I have since written lots of recovery tools, not only for others but also for myself, because, despite my knowledge of how important backups are, there were numerous cases where I lost data nonetheless. Even a day’s work bothered me enough to invest 3 days in writing a recovery tool rather than re-writing the data in half a day.
Today I have many levels of protection, because one backup is still not enough:
- I have two Macs constantly running, at different locations, both with the fastest internet connection.
- I use Crashplan (which is available for both OSX and Windows!) to have the Macs back each other up over the internet, almost instantly whenever files change (similar to Time Machine).
- I also have Time Machine running on each Mac, because Crashplan doesn’t make bootable backups, but TM does.
- Furthermore, on my main working Mac, which is a Mac Pro, I have installed 6 drives (one SSD, the rest hard drives), of which 4 are used in pairs as mirrored RAID sets. These RAID mirrors have already helped twice, when one drive suddenly failed: I just had to replace that one drive (I keep a spare around) and never even had to restore anything through Time Machine.
So, I have several protections here, all saving me time and pain:
- Mirrored drives for sudden drive failure, saving me from being interrupted in my work at all.
- TM for those cases where I accidentally delete or overwrite something, and for cases where more than one drive fails on me suddenly.
- Crashplan backup to remote computer for the cases where my home gets robbed or some other drastic failure happens to my computer.
Oh, and one more: All my important source code (and some other docs) are stored remotely using git.
In summary, here’s one important piece of advice:
It’s not just hard disk failure you need to protect yourself from. You also need to be able to deal with the loss of your entire computer system (theft, fire, etc.). So have a plan for a remote backup. I recommend Crashplan. It is very reliable and even safe from the NSA, as it encrypts your data before it leaves your computer.
Maybe I should explain how Crashplan (CP) works a little more:
CP runs in the background, constantly watching for file changes (similar to TM). Provided you pay a few $ per month, you can have any set of folders from your disks passed on to any number of different destinations.
Destinations can be:
- A connected hard disk or file server.
- Crashplan’s own cloud server.
- Any other computer you can connect to that runs CP!
That last one is important: all you need is a somewhat reliable “friend” willing to do an exchange deal with you. Both of you buy an extra hard disk for your backups, connect it to your own computer, and back up all your data to it once. Then you swap disks and set them up with CP, so that, from then on, your computer connects to your buddy’s computer running CP and transfers your new changes to the hard disk you gave him. And since the data is fully and highly encrypted, your buddy can’t read your files at all, not even see the file names. And since CP is cross-platform, you can even choose a buddy on a different OS than you’re using.
If you have good internet bandwidth (upload speed is key), this is the smartest and yet easiest backup strategy ever.
In my opinion, your source code is the most valuable asset you have as a software developer. We NEVER rely upon Time Machine because it’s not granular enough for what we want to track.
Not that we don’t have Time Machine, but we don’t rely upon it. We swap our backup drives with the ones in our safety deposit box every month or so.
For source code we use off-site Subversion servers that are backed up hourly, with off-site copies daily. We commit a lot, and this does a couple of things for us. It keeps our changes ‘in the cloud’ on a regular basis. This way, if our office burns down and my computer gets destroyed, I can get back up and running in the time it takes me to go get a new computer.
Second, all the commits let us track changes down to the line of code: when, where, and, hopefully, with a meaningful commit message. Subversion (CVS, Git, etc.) all let me retrieve old versions of the object I’m working on. If you’ve ever been mad at yourself for ‘doing bad things’ to a class/module/window and wished it were easy to revert, you’ll appreciate this feature.
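That revert workflow boils down to a reverse merge. Here’s a self-contained sketch using a throwaway local repository (it needs the svn command-line client installed); file names, revisions, and messages are made up for illustration:

```shell
#!/bin/sh
# Sketch of reverting 'bad things' done to a file, using a throwaway
# local repository. Names and messages are illustrative only.
set -e

WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
svn checkout -q "file://$WORK/repo" "$WORK/wc"
cd "$WORK/wc"

# r1: a known-good version of a class file.
echo "good code" > MyClass.txt
svn add -q MyClass.txt
svn commit -q -m "Good version"

# r2: the change we regret.
echo "bad refactor" > MyClass.txt
svn commit -q -m "Bad refactor"

# Reverse-merge r2 into the working copy, then commit the rollback
# as a new revision -- the history of the mistake is preserved.
svn update -q
svn merge -q -c -2 .
svn commit -q -m "Revert bad refactor (r2)"

RESULT=$(cat MyClass.txt)
echo "$RESULT"
```

Because the rollback is itself a commit, you can even revert the revert later; nothing is ever really lost.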
I think even if you are a single developer, you do yourself a disservice by not using a source code management system. Having that version history is sometimes very valuable. Most of the time you won’t need it, but occasionally you will, and when you do it’s critical. Binary is convenient but not very granular.
For source code and documentation I use bitbucket.
For my whole computer/environment in general, I use Carbon Copy Cloner to keep a bootable backup (daily). Yes, it’s on site and won’t help if there’s a fire, but if my drive crashes I can plug my backup drive into a new Mac and I shouldn’t notice the difference. Granted, I’ve only tried it with the Mac that it’s backing up, but it boots fine and everything seems identical, so…
Bill, CCC still does a full, slow scan of the entire disk when it backs up, doesn’t it? I’d rather use TM for that; it’s much more efficient. Or some other tool that uses FSEvents to monitor changes so that it doesn’t need to perform a full scan every time.
Time Machine has saved my *ss more than the size of my ass…
If I save my Xojo project on Dropbox, isn’t it doing the versioning for me every time I save? Sort of a poor man’s version control?
How would you revert to your last update?
This is fine except you couldn’t roll back a series of changes easily.
Nor compare files and whatnot.
You could, with the API, download the most recent version and compare. You just couldn’t do a large set of changes like a commit, because Dropbox works on a single file at a time.
My thinking was that the Dropbox approach is more granular than Time Machine, which only kicks in once every hour. Dropbox saves a version every time you save.
Yes, it’s the full file, and that would make it difficult to revert just one change and not a different change you wanted to keep, but for many situations that’s fine. (And possibly a tool like Arbed could be used to do those file comparisons on the binary file anyway.)
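The API approach mentioned above can be sketched with the official Dropbox Python SDK (`pip install dropbox`), which exposes per-save revisions via `files_list_revisions` and `files_download`. The helper below is a hypothetical sketch, not a finished tool; the project path and token are placeholders, and the function accepts any client object with those two methods so the real `dropbox.Dropbox` client can be passed in.

```python
def fetch_previous_version(dbx, path):
    """Return (rev_id, content_bytes) for the revision just before the
    current one, or None if no earlier revision exists.

    dbx is any client exposing files_list_revisions / files_download,
    e.g. a real dropbox.Dropbox instance.
    """
    # Dropbox stores a revision every time the file is saved;
    # entries come back newest-first.
    revs = dbx.files_list_revisions(path, limit=10).entries
    if len(revs) < 2:
        return None
    prev = revs[1]
    # files_download returns (metadata, HTTP response); the bytes of
    # the old revision are in the response body.
    _meta, resp = dbx.files_download(path, rev=prev.rev)
    return prev.rev, resp.content

# Real usage (requires an access token):
#   import dropbox
#   dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")
#   rev, data = fetch_previous_version(dbx, "/MyProject.xojo_binary_project")
```

You could then write the returned bytes to a side file and diff it against the current save (or, as mentioned, compare the binaries with a tool like Arbed).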