Download This Wiki

Want a full backup of this wiki?

https://s3.samurailink3.com/public/WhistleCrewWiki.tar.gz

This file is generated every hour. You can use it to mirror all pages and downloads on the site.
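
For example, a one-off local mirror could be pulled down and unpacked with wget and tar (the target directory name below is just an illustration):

wget https://s3.samurailink3.com/public/WhistleCrewWiki.tar.gz
mkdir -p whistle-crew-wiki
tar -xzf WhistleCrewWiki.tar.gz -C whistle-crew-wiki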

Run This Wiki

You can extract this archive and use the following docker (or podman) command to run a DokuWiki instance with the same data:

docker run --rm -p 127.0.0.1:8080:8080 --user $UID:$GID -v ./:/storage dokuwiki/dokuwiki:stable
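
Once the container starts, the wiki should be reachable at http://127.0.0.1:8080 (the address and port from the -p mapping above). Run the command from the directory where you extracted the archive, since -v ./:/storage mounts the current directory into the container at /storage; for podman, substituting podman for docker in the same command should work.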

Back Up This Wiki

You can use this script in a cronjob to back up the wiki, keeping a configurable number of backup copies and deleting the oldest file once that limit is reached. You will absolutely want to configure backup_directory and will likely want to adjust number_of_backups_to_keep.

This is based on SamuraiLink3's MIT-Licensed Generic Rolling Backup script.

backup-whistle-crew-wiki.bash
#!/bin/sh
 
# Generic Backup Script
# This script takes no arguments and is designed to be run from anywhere.
# Cron it out, have fun. This is generic enough to be used anywhere by just
# about anything.
#
# MIT License
#
# Copyright (c) 2017 Tom "SamuraiLink3" Webster
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
 
# The command you will use to backup data (create a tar file, mysqldump, psql,
# etc.). The backup command redirects output to the backup filename. If you're
# using tar, don't write the output to a file, the script will take care of that
# automatically.
backup_command="/usr/bin/wget https://s3.samurailink3.com/public/WhistleCrewWiki.tar.gz -O -"
 
# Optionally filter the backup data through compression. If you're doing stream
# compression, like with tar, additional compression won't help you. If you're
# using something like mysqldump, additional compression can save you a decent
# amount of space. Leaving this as "" will disable compression piping.
compression_command=""
 
# Where you want your backup files stored.
# NO TRAILING SLASH!
# USE ABSOLUTE PATHS!!!
# Like this: `/home/backups`
# NOT LIKE THIS: `/home/backups/`
backup_directory="/home/samurailink3/WhistleCrewWikiBackups"
 
# The prefix and suffix of the backup file you will create. This is helpful if
# you have multiple types of backups (databases and flat files, for instance) in
# the same backup directory.
# The filename will look like PREFIX-TIMESTAMP.SUFFIX
backup_filename_prefix="WhistleCrewWiki"
backup_filename_suffix="tar.gz"
 
# How many backups do you want to keep before rolling the oldest one off?
number_of_backups_to_keep="28" # Every 6 hours, 4 times per day, keep 1 week
 
# The filename that will store the timestamp of any/all failed backups. This
# file will be located in the backup_directory.
failure_log_filename="FAILURE.LOG"
 
##
## From this point on, the script starts.
## You shouldn't need to configure anything below this block.
##
 
# Exit on pipeline error
set -e
# Exit on undefined variable
set -u
 
# Create the backups directory
mkdir -p $backup_directory
 
# Increment the number of backups to keep, this keeps our tail pipe chain sane.
number_of_backups_to_keep=$((number_of_backups_to_keep + 1))
 
# Create a timestamp variable.
# The benefit to this date format (other than it being a widely used standard)
# is that it naturally sorts into chronological order, making our `ls -l` stupid
# easy. We're stripping ':' because some filesystems don't like it.
timestamp=`date -u +%FT%T%z | tr -d ":"`
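# For example, 03:04:05 UTC on 2 Jan 2024 comes out as 2024-01-02T030405+0000
# once the ':' characters are stripped, so a backup file is named something
# like WhistleCrewWiki-2024-01-02T030405+0000.tar.gz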
 
# This stores the backup data into a temp file, it will be compressed, and
# archived later. We are checking the return code after the command has run. You
# don't want mysqldump (or whatever you're using) to crash with exit code 1,
# then you carry on your merry way. That would make for a really bad late-night
# recovery, finding out that your 'backups' aren't really backups.
 
if [ "$compression_command" = "" ]; then
  echo "Starting backup command: $backup_command"
  $backup_command > $backup_directory/$backup_filename_prefix-$timestamp.$backup_filename_suffix
else
  echo "Starting backup command: $backup_command | $compression_command"
  $backup_command | $compression_command > $backup_directory/$backup_filename_prefix-$timestamp.$backup_filename_suffix
fi
# If the exit code of the previous command was not '0', freak out and exit.
# You can do what you want to here, kick off an email, shut down the system,
# format the root partition, whatever. Right now, I'm just echoing a failure
# message and exiting with code 1 (failure).
if [ "$?" = 0 ]
then
  echo "Backup command complete."
else
  if [ $compression_command = "" ]; then
    echo "$timestamp: Backup command failed. [$backup_command]" >> $backup_directory/$failure_log_filename
  else
    echo "$timestamp: Backup command failed. [$backup_command | $compression_command]" >> $backup_directory/$failure_log_filename
  fi
  exit 1
fi
 
# Let's break down this pipe (Unix):
#   `find $backup_directory/ -type f -name "$backup_filename_prefix-*.$backup_filename_suffix"`
#     We're going to look for all backup files in the backup directory and
#     return a list of them.
#   `sort`
#     Next, sort the list so the timestamped filenames come out in
#     chronological order. Otherwise we could remove something we need.
#   `awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }'`
#     Reverse the list so the newest backups come first.
#   `tail -n +$number_of_backups_to_keep`
#     Skip past the backups we want to keep (the count was incremented by one
#     above so the line numbering works out), leaving only the old ones.
#   `awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }'`
#     Re-reverse the list for readability.
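#
# Worked example (hypothetical filenames): with number_of_backups_to_keep set
# to 2 (incremented to 3 above) and these four backups on disk:
#   WhistleCrewWiki-2024-01-01T000000+0000.tar.gz
#   WhistleCrewWiki-2024-01-02T000000+0000.tar.gz
#   WhistleCrewWiki-2024-01-03T000000+0000.tar.gz
#   WhistleCrewWiki-2024-01-04T000000+0000.tar.gz
# the pipeline keeps the two newest and selects the two oldest (Jan 1 and
# Jan 2) for removal.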
 
to_remove=`find $backup_directory/ -type f -name "$backup_filename_prefix-*.$backup_filename_suffix" | sort | awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' | tail -n +$number_of_backups_to_keep | awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }'`
# Then grab the count here. If it's '0', we're not going to delete anything
to_remove_count=`find $backup_directory/ -type f -name "$backup_filename_prefix-*.$backup_filename_suffix" | sort | awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' | tail -n +$number_of_backups_to_keep | awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' | wc -l`
 
if [ $to_remove_count -eq 0 ]
then
  echo "No backup files to remove."
else
  echo "Removing these files:
$to_remove"
  # Figure out where we are
  you_are_here=`pwd`
  # Head out to the backup directory
  cd $backup_directory || exit 1
  # Clean up the old backup files
  rm $to_remove
  # Head back to where you were
  cd $you_are_here || exit 1
fi
 
echo "Cleanup Complete!"
exit 0
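
To schedule the script, a crontab entry along these lines runs it every six hours, matching the retention comment in the script (the path is a placeholder for wherever you saved it):

0 */6 * * * /path/to/backup-whistle-crew-wiki.bash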