As part of the inpadi service offering, we have developed a backup system that consists of two parts.
1. Server - the remote location where the client stores the backup.
2. Client - the program that creates the backup and sends it to the server.
Much of the logic lives in the client, so the server needs fewer resources and you are able to use more clients per server.
Starting a backup:
Go to the location of the GMS client and make sure the Backup-Dev program is in that folder.
Syntax: Backup-Dev path::C:\Users server::https://myserver:443
where path:: is the path you want to back up and server:: is your backup server.
NEW feature (2018-06-06) -> path:: is no longer required! Create a file (path.bak) listing the paths you want to back up, one per line.
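For example, a path.bak could look like this (hypothetical paths, one per line, in the same style as the path:: parameter):

```
C:\Users
C:\inetpub
D:\shares
```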
When the client starts, it reads the backup set from the server, checks what needs to be backed up, and transfers it to the server. The client processes 4 files in parallel (by default) to speed up small files.
To change the maximum number of concurrent file transfers, use the ctf parameter, e.g. ctf::8 sets the maximum to 8. It can be any number - so if you only want a single file at a time, use ctf::1. If ctf fails to parse or is set to 0, the default of 4 is used. Note that nothing prohibits you from using 1000, but that might cause unknown issues - so use caution!
The client looks for a file called excludes.bak - each line is a wildcard pattern matching files and folders that should be excluded.
Paths must be written with / - NOT \ ! On Windows this can be /appdata/ or c/windows/system32.
Excludes are overruled by includes.bak, which uses the same syntax as excludes.bak.
If no exclude matches, an include has no effect!
If you only want to back up specific folders, you can exclude everything with / in excludes.bak and then list, line by line in includes.bak, what to back up.
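For example, to back up only two specific folders, the two files could look like this (hypothetical paths) - excludes.bak:

```
/
```

and includes.bak:

```
/users/eh/documents/
/users/eh/pictures/
```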
Files are transferred in 512 KB blocks and are MD5-validated for each block and for the whole file when done. A transferred file is only read once, since checksum validation etc. are done inline - this saves resources on your computer.
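The block-level checksumming can be illustrated with standard tools - a minimal sketch (not the actual client code), assuming split and md5sum are available:

```shell
# Create a 1 MiB sample file, split it into 512 KB blocks,
# MD5 each block, then checksum the whole file - roughly the
# per-block and whole-file validation the client performs.
head -c 1048576 /dev/zero > /tmp/sample.bin
split -b 524288 /tmp/sample.bin /tmp/blk.
for b in /tmp/blk.*; do md5sum "$b"; done
md5sum /tmp/sample.bin
```

The real client does this inline while reading the file once, instead of re-reading it per checksum as this sketch does.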
See performance notes.
How to install backup client
Download the backup client - this can be done with a GMS script on the client with:
neededfile=https://gms.inpadi.dk/Patch/Backup-Cli/GOOS/GOARCH/Backup-Cli.exe=Backup-Cli.exe (on linux/mac do not include .exe)
move /y backup-cli.exe ..\
Backup-Cli.exe path::C:\Users server::https://your.backup.server.name:443
How to automate include/exclude on all clients?
Easy - just save includes.bak and excludes.bak in _Software folder in GMS
Download them with clientfile=includes.bak and clientfile=excludes.bak
move /y includes.bak ..\
move /y excludes.bak ..\
The whole client backup script could look like this:
move /y Backup-Cli.exe ..\
move /y includes.bak ..\
move /y excludes.bak ..\
Backup-Cli.exe path::C:\Users server::https://your.backup.server.name:443
On linux/synology NAS the script can look like this:
mv -f excludes.bak ../
mv -f Backup-Cli ../
chmod 777 Backup-Cli
./Backup-Cli path::/volume1/photo server::https://your.backup.server.name:443 > /dev/null
Name the script as a schedule - e.g. 20180501.0800.20180501.1600.1440.db - this will back up the client the first time it is online within normal business hours.
Backup-Cli is NOT case-sensitive when matching filenames against excludes/includes!
The server is quite simple - it validates the client, stores data, and validates backup files and the backup set when the client transfers it back to the server. If a file is missing from the server or some error has occurred (e.g. it has the wrong size or antivirus has removed it), the server removes the file (if it exists) and removes it from the backup set. The client will then transfer the file to the backup server again on the next run.
The server proxies the client request to the GMS backend for validation of the client, so no password sync is needed and the backup server can be more or less untrusted (you should not use unencrypted client backup if you do not trust the backup server).
Both backup client and backup server must exist in GMS backend!
Clients can deliver backup data to the backup servers listed in the customer folder on the GMS server (backup servers are listed line by line).
-> "inpadi staff": see the file CustommerName/backupserver.txt (note that the file must start with a blank line and end with a blank line).
The first client that makes a backup automatically adds the backup server to the customer as a backup destination.
How to install Backup Server
Download the Backup-Srv executable and place it inside the GMS-Client folder on the server. It must be placed together with the GMS.ini file, as it needs credentials from the client to authenticate backup clients!
Install the server as a service - sc create backup-srv binPath= %cd%\backup-srv.exe start= auto
Start the server with sc start backup-srv
How to configure retention period and max versions?
To configure a maximum number of backup versions on the backup server, create the file versions.max and put a number inside - no newlines or anything else are allowed, just a number!
To configure a maximum age for deleted items/versions, create the file age.max and put a number inside - and yes, no newlines or anything else are allowed in this file either.
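Since no newline is allowed in these files, printf is safer than echo (which appends a newline) - a sketch assuming a Linux shell and hypothetical values:

```shell
# Write a bare number with no trailing newline (echo would add one).
printf '5'  > /tmp/versions.max   # keep at most 5 versions
printf '90' > /tmp/age.max        # keep deleted items/versions for 90 days
```

On Windows, creating the file in a text editor and saving without a final newline achieves the same.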
The versions.max and age.max files can be placed in the root backup folder, in a customer folder, or in a client folder.
The more specific file overrules - so first the client folder is checked, then the customer folder, and then the root folder.
If no max versions or age is set, the defaults are 3 versions and 90 days.
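For example (hypothetical folder names), a layout could look like this, where the most specific file wins for that client:

```
<backup root>/versions.max                        applies to everyone (e.g. 3)
<backup root>/CustommerName/versions.max          applies to that customer (e.g. 5)
<backup root>/CustommerName/Client01/versions.max applies to that client (e.g. 10)
```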
Performance notes
As file transfer is done in parallel, we see fine throughput - our mixed test shows smaller files at 60 Mbit/s, and larger files have no issues pushing 300 Mbit/s up to the bandwidth limit. Resource usage of the backup server is quite fair - the test server has 2 mirrored eco disks (5600 rpm), and memory usage is a few hundred MB of RAM.
The backup client on fair hardware checks 1100 files/s (tested on a production VM) - so a job with 225,000 small files is done after 3 minutes and 24 seconds on a second run (after a full backup) with a few hundred changed items.
Another test on a backup set that contains 250,000 files in 38,000 folders takes a bit less than 200 MB of memory, and if no changes are transferred it takes a few minutes - including transferring the backup set from the backup server to the client, validating it, and transferring the backup set back again. The backup set (DB) for this size takes 51 MB of disk space (excluding backup data).
A third test with 49,000 files (320 GB) takes 18 seconds to complete and fills 8.5 MB of backup set (DB) size (excluding backup data).
How to restore data
So - you would like to restore data (or at least test your restore process) - this is how it's done.
Backup-Cli.exe path::C:\Users server::https://your.backup.server.name:443 restore::wildcard path to restore
Example: Backup-Cli.exe path::C:\Users server::https://your.backup.server.name:443 restore::eh/documents
will restore all data from the backup set where the path or filename matches eh/documents.
You can also use the ctf:: parameter if 4 parallel file transfers do not match your restore speed needs.
Restore parameter pit::
When restoring data, it can be nice to restore older files (deleted or previous versions).
You can use the parameter pit::, which stands for point in time.
You need to specify the date in the format YYYYMMDD_HHMMSS for it to work.
Example: Backup-Cli.exe path::C:\Users server::https://your.backup.server.name:443 restore::eh/documents pit::20180621_210000
this will restore data as it was on 21 June 2018 at 21:00:00.
The backup client looks at the last time the file was seen and at its creation time.
If the last-seen time is newer than pit:: and the creation time is older, the file will be restored.
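Because the YYYYMMDD_HHMMSS format sorts lexicographically, the pit:: check can be illustrated as a simple string comparison - a sketch (not the actual client logic) with hypothetical timestamps:

```shell
pit="20180621_210000"          # requested point in time
last_seen="20180701_120000"    # last time the backup saw the file
created="20180101_090000"      # file creation time

# Restore if the file existed at the point in time:
# last seen after pit, and created before pit.
if [[ "$last_seen" > "$pit" && "$created" < "$pit" ]]; then
  echo "restore"
else
  echo "skip"
fi
```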
Error logging (files that could not be transferred to the backup server) is sent to the server. The log containing the last backup job status is named errors.during.bck and contains each failed file line by line, with the OS error explaining why the file could not be transferred.
Backup server replication
The backup server is able to have one or more replication partners.
A backup server can be either Master or Slave - a Master can have multiple Slaves.
NOTE: It's not possible to configure it as a chain with Master -> Slave -> Slave.
It's possible to configure a Master <-> Master setup - however, this has not been tested deeply yet, and there can be issues if clients deliver backups to both servers. If you want to be sure the backup works as expected, please use a master/slave setup for now ;-)
How to configure a Master: The master server has a folder called ReplicaPartners (in the root of the backup directory) where each slave is a subfolder - please create the subfolders in a CMD prompt or copy an empty folder into the ReplicaPartners directory. The reason is that the master creates a replica set for the slave - if you create a new folder in Explorer it will be named "New folder", and an inconsistent replica set could be made when renaming it.
How to configure a Slave: The slave knows it is a slave when it has a file named replicaMasters.txt in the root backup directory. It must contain the following: ServerName from master::Secret from secret.txt located in the master folder (auto-generated)::https://url.of.the.master.server:443
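A replicaMasters.txt could look like this (hypothetical server name, secret, and URL - the secret comes from the auto-generated secret.txt in the master folder):

```
BACKUPSRV01::3f8a2c...::https://master.backup.example.com:443
```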
Planned features:
1. Repository encryption, so it can be used as a remote backup solution where the backup server is located outside the organisation or is somehow untrusted.
2. A nice GUI in GMS so you can control all your clients from a central location, restore jobs, etc.
3. Some beta testers who can give the system a hard time - we have already done some brutal tests on it.
4. VSS Snapshot in Windows.
5. System state backup and restore.
6. Some nice reporting so you can see if it's working.
7. Some smart selection so not all versions are restored ;-).