The purpose of this guide is to break out the manual install instructions, and any advanced configuration flags, into a separate document to keep the primary README uncluttered. It is expected that users have read, and are familiar with, the process and concepts outlined in the primary README.
- If you're installing Gravity Sync on a system running Fedora or CentOS, make sure that you are not just using the built-in root account, and instead have a dedicated user in the administrators group. You'll also need SELinux disabled to install Pi-hole.
After you install Gravity Sync to your server there will be a file called `gravity-sync.conf.example` that you can use as the basis for your own `gravity-sync.conf` file. Make a copy of the example file and modify it with your site-specific settings.
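For example, assuming Gravity Sync was installed to `~/gravity-sync` (adjust the path if yours differs):

```bash
cd ~/gravity-sync
cp gravity-sync.conf.example gravity-sync.conf
vi gravity-sync.conf
```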
_Note: If you don't like VI or don't have VIM on your system, use NANO, or if you don't like any of those, substitute your text editor of choice. I'm not here to start a war._
Make sure you've set the `REMOTE_HOST` and `REMOTE_USER` variables to the IP (or DNS name) of the primary Pi-hole and the user account used to authenticate to it. This account will need to have sudo permissions on the remote system.
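A minimal sketch of those two settings (the values here are placeholders for your own):

```bash
REMOTE_HOST='192.168.1.10'   # IP or DNS name of the primary Pi-hole
REMOTE_USER='pi'             # account with sudo rights on the primary
```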
You'll need to generate an SSH key for your secondary Pi-hole user and copy it to your primary Pi-hole. This will allow you to connect to and copy the necessary files without needing a password each time. When generating the SSH key, accept all the defaults and do not put a passphrase on your key file.
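For example, from the secondary Pi-hole:

```bash
ssh-keygen -t rsa
ssh-copy-id REMOTE_USER@REMOTE_HOST
```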
Replace `REMOTE_USER` with the account on the primary Pi-hole that has sudo permissions, and `REMOTE_HOST` with the IP or DNS name of the Pi-hole you have designated as the primary.
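With the key in place, run your first sync manually from the secondary Pi-hole:

```bash
./gravity-sync.sh pull
```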
If the execution completes, you will now have overwritten your running `gravity.db` and `custom.list` on the secondary Pi-hole, after first creating a copy of the running files (with `.backup` appended) in the `backup` subfolder alongside the script. Gravity Sync will also keep a copy of the last sync'd files from the primary (in the `backup` folder, appended with `.pull`) for future use.
Gravity Sync includes the ability to `push` from the secondary Pi-hole back to the primary. This is useful in a situation where your primary Pi-hole is down for an extended period of time, and you have made list changes on the secondary Pi-hole that you want to force back to the primary when it comes back online.
```bash
./gravity-sync.sh push
```
Before pushing, Gravity Sync will make a copy of the remote files under `backup/gravity.db.push` and `backup/custom.list.push`, then sync the local configuration to the primary Pi-hole.
- If the script prompts for a password on the remote system, make sure that your remote user account is set up not to require passwords in the sudoers file.
Gravity Sync can also `restore` the database on the secondary Pi-hole in the event you've overwritten it accidentally. This might happen in the above scenario, where your primary Pi-hole was down for an extended period and you made changes to the secondary, but didn't get a chance to `push` the changes back to the primary before your automated sync ran.
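Run from the secondary Pi-hole:

```bash
./gravity-sync.sh restore
```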
There are a series of advanced configuration options that you may need to change to better adapt Gravity Sync to your environment. They are referenced at the end of the `gravity-sync.conf` file. It is suggested that you make any necessary variable changes in this file, as they will supersede the ones located in the core script. If you want to revert to the Gravity Sync default for any of these settings, just put a `#` at the beginning of the line to comment it out.
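For example, commenting out a previously customized setting lets the default apply again (the variable shown here is illustrative):

```bash
# LOG_PATH='/var/log/gravity-sync'   # commented out, so the Gravity Sync default applies
```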
These variables allow you to indicate either a default/standard or a Docker Pi-hole installation on each of the local and remote hosts. Available options are either `default` or `docker`, exactly as written.
These variables allow you to change the location of the Pi-hole settings folder on both the local and remote hosts. This is required for Docker installations of Pi-hole. This directory location should be from the root of the file system and be configured **without** a trailing slash.
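A sketch for a hypothetical Docker deployment; the variable names and paths here are illustrative, so verify the exact names against your `gravity-sync.conf.example`:

```bash
PIHOLE_DIR='/opt/pihole/etc-pihole'   # local Pi-hole settings volume, no trailing slash
RIHOLE_DIR='/opt/pihole/etc-pihole'   # remote Pi-hole settings volume, no trailing slash
```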
These variables allow you to change the location of the Pi-hole binary folder on both the local and remote hosts. Unless you've done a custom Pi-hole installation, this setting is unlikely to require changes. This directory location should be from the root of the file system and be configured **without** a trailing slash.
These variables allow you to change the location of the Docker binary folder on both the local and remote hosts. This may be necessary on some systems, if you've done a custom installation of Docker. This directory location should be from the root of the file system and be configured **without** a trailing slash.
These variables allow you to change the file owner of the Pi-hole gravity database on both the local and remote hosts. This is required for Docker installations of Pi-hole, but is likely unnecessary on standard installs.
These variables are for the `gravity.db` and `custom.list` files that are the two components replicated by Gravity Sync. You should not change them unless Pi-hole changes their naming convention for these files, in which case the core Gravity Sync files will be changed to adapt.
Gravity Sync will prompt to verify user interactivity during push, restore, or config operations (that overwrite an existing configuration), with the intention of preventing someone from accidentally automating in the wrong direction or overwriting data unintentionally. If you'd like to automate a push function, or just don't like being asked twice to do something destructive, then you can opt out.
Starting in v1.7.0, Gravity Sync manages the `custom.list` file that contains the "Local DNS Records" function within the Pi-hole interface. If you do not want to sync this setting, for example if you're doing a multi-site deployment with differing local DNS settings, then you can opt out of this sync.
_This feature has not been implemented, but the intent is to provide the ability to add timestamped output to each status indicator in the script output (ex: [2020-05-28 19:46:54] [EXEC] \$MESSAGE)._
The `./gravity-sync.sh config` function will attempt to ping the remote host to validate it has a valid network connection. If there is a firewall between your hosts preventing ICMP replies, or you otherwise wish to skip this step, it can be bypassed here.
In versions of Gravity Sync prior to 3.1, at execution, Gravity Sync would check that it was deployed with its own user (not running as root), but for some deployments this was a hindrance.
Gravity Sync is configured by default to use the standard SSH port (22), but you can change this here if you need to use a non-standard port, such as when traversing a NAT/firewall for a multi-site deployment.
- This variable can be set via the `./gravity-sync.sh config` function.
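A hypothetical example (confirm the variable name in your `gravity-sync.conf.example`):

```bash
SSH_PORT='2222'   # primary Pi-hole reachable on a non-standard SSH port
```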
#### `SSH_PKIF`
Gravity Sync is configured by default to use the `.ssh/id_rsa` key-file that is generated using the `ssh-keygen` command. If you have an existing key-file stored somewhere else that you'd like to use, you can configure that here. The key-file will still need to be in the user's `$HOME` directory.
At this time Gravity Sync does not support using a passphrase in RSA key-files. If you have a passphrase applied to your standard `.ssh/id_rsa` either remove it, or generate a new file and specify that key for use only by Gravity Sync.
- Default setting in Gravity Sync is `.ssh/id_rsa`.
- This variable can be set via the `./gravity-sync.sh config` function.
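If you'd rather generate a dedicated, passphrase-less key just for Gravity Sync, a sketch (the file name is arbitrary):

```bash
ssh-keygen -t rsa -f ~/.ssh/gravity-sync -N ''
ssh-copy-id -i ~/.ssh/gravity-sync.pub REMOTE_USER@REMOTE_HOST
```

Then set `SSH_PKIF='.ssh/gravity-sync'` in your `gravity-sync.conf`.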
#### `LOG_PATH`
Gravity Sync will place logs in the same folder as the script (identified as `.cron` and `.log`) but if you'd like to place these in another location, you can do that by identifying the full path to the directory (ex: `/full/path/to/logs`) without a trailing slash.
- Default setting in Gravity Sync is a variable called `${LOCAL_FOLDR}`.
#### `SYNCING_LOG`
Gravity Sync will write a timestamp for any completed sync, pull, push or restore job to this file. If you want to change the name of this file, you will also need to adjust the `LOG_PATH` variable above, otherwise your file will be removed during an `update` operation.
- Default setting in Gravity Sync is `gravity-sync.log`
#### `CRONJOB_LOG`
Gravity Sync will log the execution history of the previous automation task via cron to this file. If you want to change the name of this file, you will also need to adjust the `LOG_PATH` variable above, otherwise your file will be removed during an `update` operation.
This will have an impact on both the `./gravity-sync.sh automate` and `./gravity-sync.sh cron` functions. If you need to change this after running the automate function, either modify your crontab manually or delete the entry and re-run the automate function.
- Default setting in Gravity Sync is `gravity-sync.cron`
#### `HISTORY_MD5`
Gravity Sync will log the file hashes of the previous `smart` task to this file. If you want to change the name of this file, you will also need to adjust the `LOG_PATH` variable above, otherwise your file will be removed during an `update` operation.
- Default setting in Gravity Sync is `gravity-sync.md5`
#### `BASH_PATH`
If you need to adjust the path to bash that is identified for automated execution via Crontab, you can do that here. This will only have an impact if changed before generating the crontab via the `./gravity-sync.sh automate` function. If you need to change this after the fact, either modify your crontab manually or delete the entry and re-run the automate function.
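For example, to regenerate the crontab entry after changing this setting:

```bash
crontab -e                   # remove the existing gravity-sync entry by hand
./gravity-sync.sh automate   # then re-create it with the new settings
```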
If you manually installed Gravity Sync via `.zip` or `.tar.gz` you will need to download and overwrite the `gravity-sync.sh` file with a newer version. If you've chosen this path, I won't lay out exactly what you'll need to do every time, but you should at least review the contents of the script bundle (specifically the example configuration file) to make sure there are no new additional files or required settings.
At the very least, I would recommend backing up your existing `gravity-sync` folder and deploying a fresh copy each time you update, then either creating a new `.conf` file or copying your old file over to the new folder.
Starting in v1.7.2, you can easily flag if you want to receive the development branch of Gravity Sync when running the built-in `./gravity-sync.sh update` function. Beginning in v1.7.4, `./gravity-sync.sh dev` will now toggle the dev flag on/off. Starting in v2.2.3, it will prompt you to select the development branch you want to use.
To manually adjust the flag, create an empty file in the `gravity-sync` folder called `dev`, then edit the file to include only one line, `BRANCH='origin/x.x.x'` (where x.x.x is the development version you want to use). Afterwards, the standard `./gravity-sync.sh update` function will apply the correct updates.
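As a sketch, from inside the `gravity-sync` folder (the branch name is a placeholder):

```bash
echo "BRANCH='origin/x.x.x'" > dev
./gravity-sync.sh update
```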
This method of implementation is decidedly different from the configuration flags in the `.conf` file, as explained above, to make it easier to identify development systems.
There are many automation methods available to run scripts on a regular basis on a Linux system. The one built into all of them is cron, but if you'd like to utilize something different, the principles are still the same.
If you prefer to still use cron but modify your settings by hand, the entry below will run at the top and bottom of every hour (1:00 PM, 1:30 PM, 2:00 PM, etc.), but you are free to dial this back or be more aggressive if you feel the need.
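A hypothetical crontab entry along those lines (adjust the user and paths to match your installation):

```bash
*/30 * * * * /bin/bash /home/USER/gravity-sync/gravity-sync.sh > /home/USER/gravity-sync/gravity-sync.cron
```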
The designation of primary and secondary is purely at your discretion. It doesn't matter if you're using an HA process like keepalived to present a single DNS IP address to clients, or handing out two DNS resolvers via DHCP. Generally it is expected that the two (or more) Pi-hole(s) will be at the same physical location, or at least on the same internal networks. It should also be possible to replicate to a secondary Pi-hole across networks, either over a VPN or the open Internet, with the appropriate firewall/NAT configuration.
There are three reference architectures that I'll outline. All of them require an external DHCP server (such as a router, or dedicated DHCP server) handing out the DNS address(es) for your Pi-holes. Use of the integrated DHCP function in Pi-hole when using Gravity Sync is discouraged, although I'm sure there are ways to make it work. **Gravity Sync does not manage any DHCP settings.**
This design requires the least amount of overhead, or additional software/network configuration beyond Pi-hole and Gravity Sync.
1. Client requests an IP address from a DHCP server on the network and receives it along with DNS and gateway information back. Two DNS servers (Pi-hole) are returned to the client.
2. Client queries one of the two DNS servers, and Pi-hole does its thing.
You can make changes to your block-list, exceptions, etc, on either Pi-hole and they will be sync'd to the other within the timeframe you establish (here, 15 minutes). The downside of the above design is that you have two places where your clients' lookup requests are logged. Gravity Sync will let you change filter settings in either location, but if you're doing it often, things may get overwritten.
One way to get around having logging in two places is to use keepalived to present a single virtual IP address (VIP) for the two Pi-holes to clients, in an active/passive mode. The two nodes will check their own status, and each other's, and hand the VIP around if there are issues.
1. Client requests an IP address from a DHCP server on the network and receives it along with DNS and gateway information back. One DNS server (VIP) is returned to the client.
2. The keepalived service managing the VIP will determine which Pi-hole responds. You make your configuration changes to the active VIP address.
3. Client queries the single DNS server, and Pi-hole does its thing.
You make your configuration changes to the active VIP address, and they will be sync'd to the other node within the timeframe you establish (here, 15 minutes).
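Purely as an illustration (this is keepalived configuration, not something Gravity Sync manages; the interface, router ID, and VIP are placeholders), the active node's VRRP instance might look like:

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance PIHOLE {
    state MASTER            # use BACKUP with a lower priority on the passive node
    interface eth0          # NIC that carries DNS traffic
    virtual_router_id 55    # must match on both nodes
    priority 150            # higher value holds the VIP
    advert_int 1            # seconds between VRRP advertisements
    virtual_ipaddress {
        192.168.1.5/24      # the single DNS VIP handed out to clients
    }
}
```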
1. Client requests an IP address from a DHCP server on the network and receives it along with DNS and gateway information back. Two DNS servers (VIPs) are returned to the client.
2. The VIPs are managed by the keepalived service on each side and will determine which of the two Pi-holes responds. You can make your configuration changes to the active VIP address on either side.
3. Client queries one of the two VIP servers, and the responding Pi-hole does its thing.
If you get the error `sudo: a terminal is required to read the password` or `sudo: no tty present and no askpass program specified` during your execution, make sure you have [implemented passwordless sudo](https://linuxize.com/post/how-to-run-sudo-command-without-password/), as defined in the system requirements, for the user accounts on both the local and remote systems.
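One common way to satisfy that requirement (a sketch; replace USER with your account, and edit with `visudo`) is a drop-in file on each host:

```bash
# /etc/sudoers.d/gravity-sync
USER ALL=(ALL) NOPASSWD: ALL
```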