
How often do you test your backups?

    How often do you test your backups? Whether it's weekly, monthly, quarterly, or an even longer duration than that (if that is the case, stop reading this and test your backups now), every business and personal user needs the assurance that they can recover either entire systems or an individual file in the event that a need arises. Whilst you may be thinking "yeah, obviously", you'd be surprised at how often this critical process is the victim of oversight on the part of the operator, or forgotten completely in the rush to have a system ready by a particular deadline. We've all missed things on server builds like antivirus or the latest patches, but no backup processes?

    No system should ever be considered ready for release to production (or even test in some cases) if there is no backup strategy in place. This is nothing more than unnecessary risk to the business - particularly if the service is considered critical to any particular function. Can you imagine having to explain to senior management or stakeholders that the system everyone has been using for months has just crashed, and you've suddenly remembered that there isn't a single backup? You may as well start updating your CV or resume now. I've never had to explain to senior management that there is no backup, and I have no desire to start doing so now.

    A backup strategy isn't difficult to define, but even with one in place, there is still a risk that it may not function as expected when you need to recover a system or an individual file. Backups should be regularly tested to ensure both consistency and integrity, with the restore process being validated afterwards - in short, you are attesting to the fact that you are in a position to recover either an entire server image or a single file (dependent on the need). Failure to comply with this simple task could see you being thrown to the lions unnecessarily.

    Backups themselves are not difficult to undertake, and should be running on a daily schedule at minimum. The actual strategy varies heavily between use cases, with some taking differential or incremental backups during the business week, and then executing a full backup over the weekend. This type of backup is still very popular as it has the huge benefit of backing up only files and folders that have changed - the downside is that in the event of a disaster, you'd need the last full backup to be restored first before you can play back each incremental. Another common approach is to use delta-based backups that leverage bit-level changes and compression to reduce storage requirements and costs. Usage of delta backups typically requires a concept called "safesets". It's more commonplace in today's technology sphere to use an off-site vaulting mechanism where you create an initial "seed" of a backup target, then ship only the changes to that same system on a daily basis.
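
    As an illustration of the "weekly full plus daily incremental" pattern, here is a minimal sketch using GNU tar's --listed-incremental mode. The paths, schedule and naming are purely hypothetical - treat it as a starting point under those assumptions, not a finished job.

    ```python
    #!/usr/bin/env python3
    """Minimal sketch: weekly full / daily incremental cycle via GNU tar.
    All paths and names are hypothetical - adjust for your own environment."""

    import datetime
    import pathlib
    import subprocess

    SOURCE = pathlib.Path("/srv/data")    # hypothetical data to protect
    DEST = pathlib.Path("/backups")       # hypothetical destination for archives
    SNAPSHOT = DEST / "state.snar"        # GNU tar's incremental state file

    def run_backup() -> pathlib.Path:
        DEST.mkdir(parents=True, exist_ok=True)
        today = datetime.date.today()
        is_full = today.weekday() == 6     # Sunday: take a full backup
        if is_full and SNAPSHOT.exists():
            SNAPSHOT.unlink()              # resetting the state file forces a level-0 (full) run
        label = "full" if is_full else "incr"
        archive = DEST / f"{today:%Y%m%d}-{label}.tar.gz"
        subprocess.run(
            ["tar", "--create", "--gzip",
             f"--listed-incremental={SNAPSHOT}",  # tar records what changed since the last run
             f"--file={archive}", str(SOURCE)],
            check=True,                           # raise if tar reports a failure
        )
        return archive

    if __name__ == "__main__":
        print(f"Backup written to {run_backup()}")
    ```

    Run daily from cron and the restore order described above falls out naturally: the last Sunday full first, then each incremental in sequence. The archives would still need shipping off the box, which is where the vaulting approach below comes in.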

    This type of backup is also referred to as a snapshot. In the case of (for example) Amazon Web Services, the underlying technology is hypervisor based. This means that an initial image can be made of the machine in question, then a daily / hourly snapshot performed which captures the changes since the last snapshot was taken. In the event of a restore or disaster recovery, the initial image is recovered first, with the snapshots being "layered on top" to provide the necessary recovery point - this itself really depends on your backup strategy.

    One thing to bear in mind with off-site vaulting is that the costs can easily spiral out of control, with penalty charges being applied for exceeding your quota limit. Storage itself is relatively cheap and easy to acquire, but the backup process required to support it can often attract significant and unexpected cost. For this reason, the backup strategy should be well defined, with best practices playing a major role in determining both the strategy and the overall concept. It's important to note at this point that although there are numerous product offerings that include cheaper storage options, these quite often become the Achilles' heel for those managing systems - the storage is cheap, so you get to save a fortune. However, the RTO (Recovery Time Objective) shoots through the roof - that cheaper storage is often magnetic, meaning it could reside on a tape somewhere. The restore times are not just limited to the speed of the media: tapes typically require a catalog to be built first before the backup software is in a position to recover the data in question. Years ago, I was performing backups and recovery testing on DLT tapes via a single loader and a low-grade SCSI card. A restore of an Exchange database could take well over 24 hours with this aged technology, and a slow network made it even worse.
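
    To make the AWS snapshot example above a little more concrete, the sketch below uses boto3 to take a daily EBS snapshot. The volume ID and tag values are placeholders, and it assumes credentials and region are already configured - it only illustrates the "capture the changes since the last snapshot" idea, not a full retention policy.

    ```python
    """Minimal sketch of a daily EBS snapshot with boto3.
    Volume ID and tags are placeholders; credentials/region come from your AWS config."""

    import datetime
    import boto3

    ec2 = boto3.client("ec2")
    VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical volume backing the server

    def snapshot_volume() -> str:
        stamp = datetime.date.today().isoformat()
        response = ec2.create_snapshot(
            VolumeId=VOLUME_ID,
            Description=f"Daily snapshot {stamp}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "retention", "Value": "30d"}],
            }],
        )
        # EBS stores only the blocks changed since the previous snapshot,
        # which is the "initial image plus layered changes" model described above.
        return response["SnapshotId"]

    if __name__ == "__main__":
        print(snapshot_volume())
    ```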

    Thankfully, we are not bound by slow hardware in today's modern technology era. However, if your backups remain untested, how do you know that they will serve the purpose they are designed for? The short answer is that you don't. Each backup taken should be subject to analysis in terms of the log files generated during the job - in other words, the backup log should be checked on a daily basis to determine whether or not it was successful, and any issues noted should be rectified as soon as possible. With ransomware wreaking havoc across the world, it also makes perfect sense to perform sanity checks on servers and their associated backups to ensure that you are not actually taking copies of files and folders that have in fact been encrypted.
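
    As a simple illustration of that daily log check, here is a rough sketch that scans the latest job log for failure markers and exits non-zero so cron or a monitoring tool can raise an alert. The log path and the marker strings are assumptions - match them to whatever your backup software actually writes.

    ```python
    """Sketch of a daily backup-log check. Log path and success/failure markers
    are assumptions - align them with your backup tool's real output."""

    import pathlib
    import sys

    LOG = pathlib.Path("/var/log/backup/latest.log")   # hypothetical log location
    FAILURE_MARKERS = ("ERROR", "FAILED", "Permission denied")
    SUCCESS_MARKER = "Backup completed"

    def check_log(path: pathlib.Path = LOG) -> bool:
        if not path.exists():
            print(f"No log found at {path} - did the job even run?")
            return False
        text = path.read_text(errors="replace")
        problems = [line for line in text.splitlines()
                    if any(marker in line for marker in FAILURE_MARKERS)]
        if problems or SUCCESS_MARKER not in text:
            print("Backup needs attention:")
            for line in problems:
                print(f"  {line}")
            return False
        print("Backup log looks clean.")
        return True

    if __name__ == "__main__":
        sys.exit(0 if check_log() else 1)   # non-zero exit lets monitoring alert on failure
    ```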

    Hit with ransomware? No problem - we'll restore from backup

    …until you discover that your backups are also full of encrypted content.

    Obviously not a great situation to find yourself in. For this reason, it's important to invest in decent anti-malware endpoint tools and file monitoring capabilities to prevent this from happening in the first place. It's also very important to frequently test your ability to recover files from backup.
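
    One crude way to sanity-check a backup source for already-encrypted content is to look for files whose bytes are close to random, since encrypted data has very high entropy. The sketch below is only a heuristic (compressed media will also score highly) and is no substitute for proper endpoint protection; the scan path is hypothetical.

    ```python
    """Heuristic pre-backup check: flag files whose content looks near-random,
    which can indicate ransomware encryption. Not a replacement for AV tooling."""

    import math
    import pathlib
    from collections import Counter

    def entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte (8.0 for perfectly random data)."""
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def suspicious_files(root: str, threshold: float = 7.9, sample: int = 65536):
        """Yield files whose first `sample` bytes look close to random."""
        for path in pathlib.Path(root).rglob("*"):
            if path.is_file():
                with path.open("rb") as handle:
                    score = entropy(handle.read(sample))
                if score >= threshold:
                    yield path, score

    if __name__ == "__main__":
        for path, score in suspicious_files("/srv/data"):   # hypothetical backup source
            print(f"{score:.2f}  {path}")
    ```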

    The bottom line - it's not possible to say with 100% certainty that your backups are in full working order unless they have actually been tested.
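
    And "tested" means actually pulling something back out. A minimal sketch of that kind of spot check might look like the following: restore a sample file from an archive into a temporary directory and compare checksums against the live copy. The archive name and member path are made up, and a mismatch may simply mean the file has changed since the backup ran - the point is proving that the restore path works at all.

    ```python
    """Sketch of a restore test: extract one file from an archive and compare
    its checksum with the live copy. Paths and member names are hypothetical."""

    import hashlib
    import pathlib
    import tarfile
    import tempfile

    ARCHIVE = pathlib.Path("/backups/20240107-full.tar.gz")   # hypothetical archive
    SAMPLE = "srv/data/important.docx"                         # member name inside the archive

    def sha256(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def test_restore(archive: pathlib.Path, member: str, original: pathlib.Path) -> bool:
        with tempfile.TemporaryDirectory() as tmp, tarfile.open(archive) as tar:
            tar.extract(member, path=tmp)              # the actual restore step
            restored = pathlib.Path(tmp) / member
            match = sha256(restored) == sha256(original)
            print("Restore OK, checksums match" if match
                  else "Checksums differ - investigate (or the file changed since backup)")
            return match

    if __name__ == "__main__":
        test_restore(ARCHIVE, SAMPLE, pathlib.Path("/") / SAMPLE)
    ```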

    @phenomlab Yes. But I do not backup stuff as religiously as I should. Cobbler's children's shoes kind of deal. Kind of tough to get motivated when only dealing w/a few, rather than thousands, of boxes. Plus, I kind of like to keep my fingers in the CLI - cuz if you don't use it, you lose it.

    That, and being retired, I no longer have unlimited access to the resources I might otherwise like to have, provided someone else is picking up the tab.

    OTOH, things are also lots simpler now. 🙂 🌴 🌴

    P.S. Yeah, I have a somewhat atypical sense of humor…

  • @gotwf said in Do you actually test your backups?:

    Yes. But I do not backup stuff as religiously as I should.

    You and me both then 😉. As @phenomlab has had to pick up the pieces, so to speak, a few times now, I really need to make sure I keep everything backed up. I think Mark's patience may run out next time - I'm like a cat with nine lives and I've already lost a few 😉.

    @jac @gotwf Provided you have a sensible approach to backups, the cost needn't spiral out of control - and neither should the complexity. Most general consumers of data have large amounts of files that are typically static in nature - such as photos etc.

    Clearly, these won't be changing anytime soon (unless you're into image editing), so you could arguably leverage a long-term archiving solution for that data. This would keep the cost down to a minimum, and then you'd only need golden copies and one duplicate set just in case - and in most cases, you never access the golden copies as they are literally the last bastion if anything goes wrong.

    Where several people fall on their own swords is in attempting to reduce costs further by keeping the backup of their data in the same place as the origin. Sure, you can argue that it's on a removable hard disk etc., so if your PC crashes and you lose its internal disk, you still have that data - great.

    But what if you had damage caused by flooding or fire, or had your PC / laptop and the external disk stolen…

    The correct strategy here is to keep your backups apart from the origin. In most cases, storage in a secure location (off-site) or in a cloud-based environment is generally the way to go. Storage is cheap these days, but the backup of that same storage can often work out expensive, which is why it's always a good idea to shop around for the best deals.

  • @phenomlab said in Do you actually test your backups?:

    The correct strategy here is to keep your backups apart from the origin. In most cases, storage in a secure location (off-site) or in a cloud-based environment is generally the way to go. Storage is cheap these days, but the backup of that same storage can often work out expensive, which is why it's always a good idea to shop around for the best deals.

    USB drives in the TBs are pretty reasonable these days. Some even come w/built-in mirroring. If something like this would suit capacity needs, then I think I'd prefer to use them like the large tape drives of old: keep rotating on some schedule. Then keep them in a safe deposit box. Maybe encrypt. Would not want to trust cloud providers with all the eggs. 🥚 🥚
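
    If you did go down that route, the rotation and encryption could be scripted fairly simply. The sketch below alternates between two mounted USB drives by week number and pipes a tarball through GPG before it touches the disk - the mount points, key and paths are all hypothetical, and gpg --symmetric with a passphrase would work just as well as a recipient key.

    ```python
    """Rough sketch of rotating encrypted backups across two USB drives.
    Mount points, recipient and source path are hypothetical placeholders."""

    import datetime
    import pathlib
    import subprocess

    SOURCE = pathlib.Path("/home/me/data")                              # hypothetical data to protect
    DRIVES = [pathlib.Path("/mnt/usb-a"), pathlib.Path("/mnt/usb-b")]   # the rotation set
    RECIPIENT = "backups@example.org"                                    # hypothetical GPG key

    def rotate_and_backup() -> pathlib.Path:
        today = datetime.date.today()
        drive = DRIVES[today.isocalendar()[1] % len(DRIVES)]   # alternate drives weekly
        target = drive / f"{today:%Y%m%d}.tar.gz.gpg"
        tar = subprocess.Popen(
            ["tar", "--create", "--gzip", "--file=-", str(SOURCE)],
            stdout=subprocess.PIPE)
        subprocess.run(
            ["gpg", "--encrypt", "--recipient", RECIPIENT, "--output", str(target)],
            stdin=tar.stdout, check=True)                       # encrypt before it leaves the machine
        tar.stdout.close()
        if tar.wait() != 0:
            raise RuntimeError("tar reported a failure")
        return target

    if __name__ == "__main__":
        print(f"Encrypted backup written to {rotate_and_backup()}")
    ```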

    I have stuff mirrored on multiple boxes but if my house burns down I am admittedly screwed.

  • @gotwf said in Do you actually test your backups?:

    I have stuff mirrored on multiple boxes but if my house burns down I am admittedly screwed.

    😛 Well, hopefully, that never happens!!
