Auto set correct data path (#491)

* Corrected file path by removing /mnt

* Update 20-zerotier.sh

* Update README.md

* removed /mnt directory as everything is done in /data

* Corrected URL

* Update remote_install.sh

* Auto check data dir

* fixed adguard installation

* More data fixes

* Fix dns common data path

* fixed haproxy readme
This commit is contained in:
bruvv 2023-02-22 17:49:54 +01:00 committed by GitHub
parent 2a1f6a5478
commit 162d4ce478
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
67 changed files with 1774 additions and 1002 deletions

View File

@ -8,41 +8,43 @@
## Requirements
1. You have setup the on boot script described [here](https://github.com/unifi-utilities/unifios-utilities/tree/main/on-boot-script)
1. AdguardHome persists through firmware updates as it will store the configuration in a folder (you need to create this). It needs 2 folders, a Work and Configuration folder. Please create the 2 folders in "/mnt/data/". In my example I created "AdguardHome-Confdir" and "AdguardHome-Workdir"
1. AdguardHome persists through firmware updates as it stores its configuration in folders you create. It needs two folders, a work folder and a configuration folder; please create both in "/data/". In my example I created "AdguardHome-Confdir" and "AdguardHome-Workdir"
## Customization
* Feel free to change [20-dns.conflist](../cni-plugins/20-dns.conflist) to change the IP and MAC address of the container.
* Update [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) with your own values
* If you want IPv6 support use [20-dnsipv6.conflist](../cni-plugins/20-dnsipv6.conflist) and update [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) with the IPv6 addresses. Also, please provide IPv6 servers to podman using --dns arguments.
- Feel free to change [20-dns.conflist](../cni-plugins/20-dns.conflist) to change the IP and MAC address of the container.
- Update [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) with your own values
- If you want IPv6 support use [20-dnsipv6.conflist](../cni-plugins/20-dnsipv6.conflist) and update [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) with the IPv6 addresses. Also, please provide IPv6 servers to podman using --dns arguments.
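For reference, here is a minimal sketch of passing IPv6 resolvers to podman via `--dns`; the addresses are Google's public IPv6 DNS used purely as placeholders, and the flags should be merged into the full `podman run` command shown in the Steps below.

```sh
# Hypothetical flags only -- merge into the full "podman run" command from the Steps section
podman run -d --network dns --restart always \
  --name adguardhome \
  --dns=2001:4860:4860::8888 --dns=2001:4860:4860::8844 \
  adguard/adguardhome:latest
```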
## Steps
Please note if you have firmware v2 or above you have to copy all files into /data instead of /mnt/data. You can see what version you are running by running: ubnt-device-info firmware
1. Copy [05-install-cni-plugins.sh](../cni-plugins/05-install-cni-plugins.sh) to /mnt/data/on_boot.d
1. On your controller, make a Corporate network with no DHCP server and give it a VLAN. For this example we are using VLAN 5. You should confirm the bridge is created for this VLAN by running `netstat -r` otherwise the script will fail. If it is not there, initiate a provisioning of the UDM (Controller > UDM > Config > Manage Device > Force provision).
1. Copy [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) to `/mnt/data/on_boot.d` and update its values to reflect your environment
1. Copy [20-dns.conflist](../cni-plugins/20-dns.conflist) to `/mnt/data/podman/cni` after generating a MAC address. This will create your podman macvlan network.
1. Execute /mnt/data/on_boot.d/05-install-cni-plugins.sh
3. Execute `/mnt/data/on_boot.d/10-dns.sh`
4. Run the AdguardHome docker container, be sure to make the directories for your persistent AdguardHome configuration. They are mounted as volumes in the command below.
Please note if you have firmware v2 or above you have to copy all files into /data instead of /mnt/data. You can see what version you are running by running: ubnt-device-info firmware
1. Check if you have either `/mnt/data/` or `/data/` and use the correct one below (a firmware check is sketched after these steps).
2. Copy [05-install-cni-plugins.sh](../cni-plugins/05-install-cni-plugins.sh) to /data/on_boot.d
3. On your controller, make a Corporate network with no DHCP server and give it a VLAN. For this example we are using VLAN 5. You should confirm the bridge is created for this VLAN by running `netstat -r` otherwise the script will fail. If it is not there, initiate a provisioning of the UDM (Controller > UDM > Config > Manage Device > Force provision).
4. Copy [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) to `/data/on_boot.d` and update its values to reflect your environment
5. Copy [20-dns.conflist](../cni-plugins/20-dns.conflist) to `/data/podman/cni` after generating a MAC address. This will create your podman macvlan network.
6. Execute /data/on_boot.d/05-install-cni-plugins.sh
7. Execute `/data/on_boot.d/10-dns.sh`
8. Run the AdguardHome docker container, be sure to make the directories for your persistent AdguardHome configuration. They are mounted as volumes in the command below.
```shell script
mkdir /mnt/data/AdguardHome-Confdir
mkdir /mnt/data/AdguardHome-Workdir
mkdir /data/AdguardHome-Confdir
mkdir /data/AdguardHome-Workdir
podman run -d --network dns --restart always \
--name adguardhome \
-v "/mnt/data/AdguardHome-Confdir/:/opt/adguardhome/conf/" \
-v "/mnt/data/AdguardHome-Workdir/:/opt/adguardhome/work/" \
-v "/data/AdguardHome-Confdir/:/opt/adguardhome/conf/" \
-v "/data/AdguardHome-Workdir/:/opt/adguardhome/work/" \
--dns=127.0.0.1 --dns=1.1.1.1 \
--hostname adguardhome \
adguard/adguardhome:latest
```
1. Browse to 10.0.5.3:3000 and follow the setup wizard
1. Update your DNS Servers to 10.0.5.3 (or your custom ip) in all your DHCP configs.
1. Access the AdguardHome like you would normally.
9. Browse to 10.0.5.3:3000 and follow the setup wizard
10. Update your DNS Servers to 10.0.5.3 (or your custom ip) in all your DHCP configs.
11. Access AdguardHome as you normally would.
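Step 1 above asks you to work out which data directory your firmware uses. A minimal sketch of that check, following the same pattern the boot scripts in this repo use (firmware 1.x keeps persistent storage under /mnt/data, 2.x and later under /data):

```sh
# Simplified check: firmware 1.x uses /mnt/data, newer firmware uses /data
case "$(ubnt-device-info firmware)" in
  1*) DATA_DIR="/mnt/data" ;;
  *) DATA_DIR="/data" ;;
esac
echo "Copy the files below into ${DATA_DIR}"
```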
## Troubleshooting

View File

@ -64,6 +64,7 @@ Script to persist ssh keys after reboot or firmware update.
| WireGuard kernel module | <https://github.com/tusc/wireguard-kmod> | Uses a prebuilt linux kernel module, without the need to move to a custom kernel. |
| OpenConnect VPN | <https://github.com/shuguet/openconnect-udm> | OpenConnect VPN Client for the UniFi Dream Machine Pro (Unofficial).|
| split-vpn | <https://github.com/peacey/split-vpn> |A split tunnel VPN script for the UDM with policy based routing. This helper script can be used on your UDM to route select VLANs, clients, or even domains through a VPN connection. It supports OpenVPN, WireGuard, and OpenConnect (Cisco AnyConnect) clients running directly on your UDM, and external VPN clients running on other servers on your network. |
| Zerotier | <https://zerotier.com/> | ZeroTier provides network control and P2P functionality. Use ZeroTier to create products which run on their own decentralized networks. |
## DNS Providers

View File

@ -1,13 +1,40 @@
#! /bin/sh
set -eo pipefail
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "$DATA_DIR" ]; then
# If it does not exist, create the directory
mkdir -p "$DATA_DIR"
echo "Directory '$DATA_DIR' created."
else
# If it already exists, print a message
echo "Directory '$DATA_DIR' already exists. Moving on."
fi
wan_iface="eth8" # "eth9" for UDM Pro WAN2
vlans="br0" # "br0 br100 br101..."
domain="example.invalid" # DNS domain
dns6="[2001:4860:4860::8888],[2001:4860:4860::8844]" # Google
CONTAINER=att-ipv6
confdir=/mnt/data/att-ipv6
confdir=${DATA_DIR}/att-ipv6
# main
mkdir -p "${confdir}/dhcpcd"
@ -66,14 +93,12 @@ EOF
mv "${confdir}/att-ipv6-dnsmasq.conf.tmp" "${confdir}/att-ipv6-dnsmasq.conf"
}
if podman container exists "$CONTAINER"; then
podman start "$CONTAINER"
else
podman run -d --restart=always --name "$CONTAINER" -v "${confdir}/dhcpcd.conf:/etc/dhcpcd.conf" -v "${confdir}/dhcpcd:/var/lib/dhcpcd" --net=host --privileged ghcr.io/michaelw/dhcpcd
fi
# Fix DHCP, assumes DHCPv6 is turned off in UI
cp "${confdir}/att-ipv6-dnsmasq.conf" /run/dnsmasq.conf.d/
start-stop-daemon -K -q -x /usr/sbin/dnsmasq

View File

@ -31,13 +31,13 @@ Near the top of `10-att-ipv6.sh`:
dns6="[2001:4860:4860::8888],[2001:4860:4860::8844]" # Google
```
This generates configuration files in directory `/mnt/data/att-ipv6`, if they don't exist.
This generates configuration files in directory `/data/att-ipv6`, if they don't exist.
The files can be edited, or regenerated by deleting them and re-running the script.
## Installation
```sh
cd /mnt/data/on_boot.d
cd /data/on_boot.d
curl -LO https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/HEAD/att-ipv6/10-att-ipv6.sh
chmod +x 10-att-ipv6.sh
./10-att-ipv6.sh
@ -52,6 +52,7 @@ Running the script starts dhcpcd within the `att-ipv6` container on `eth8` (WAN1
To check that everything is working as expected and that the ATT RG delegates multiple prefixes:
On UDM:
```sh
$ ip -6 r # should see a default route on the WAN interface, and a 2600:1700:X:Y::/64 prefix on each configured VLAN bridge interface
2600:1700:X:yyy0::/64 dev eth9 proto ra metric 203 mtu 1500 pref medium
@ -106,8 +107,8 @@ $ ps auxw|grep dnsmasq # should see dnsmasq running
On BGW320-500, check https://192.168.1.254/cgi-bin/lanstatistics.ha for multiple PDs in `IPv6 Delegated Prefix Subnet (including length)`.
On clients:
```
ip -6 addr show # should see SLAAC and/or DHCPv6 addresses received (if not, check dnsmasq configuration in `/run/dnsmasq.conf.d`)
```

View File

@ -8,7 +8,6 @@
Complete feature list and documentation can be found [here](https://github.com/timothymiller/cloudflare-ddns)
## Requirements
1. You have successfully setup the on boot script described [here](https://github.com/unifi-utilities/unifios-utilities/tree/main/on-boot-script)
@ -18,6 +17,7 @@ Complete feature list and documentation can be found [here](https://github.com/t
## Customization
Update [config.json](configs/config.json) with the following options:
- your cloudflare api token
- your zone id
- each subdomain you'd like to point at your udm-pro
@ -29,13 +29,13 @@ Update [config.json](configs/config.json) with the following options:
2. Make a directory for your configuration
```sh
mkdir -p /mnt/data/cloudflare-ddns
mkdir -p /data/cloudflare-ddns
```
3. Create a [cloudflare-ddns configuration](configs/config.json) in `/mnt/data/cloudflare-ddns` and update the configuration to meet your needs.
4. Copy [30-cloudflare-ddns.sh](on_boot.d/30-cloudflare-ddns.sh) to `/mnt/data/on_boot.d`.
5. Execute /mnt/data/on_boot.d/[30-cloudflare-ddns.sh](on_boot.d/30-cloudflare-ddns.sh)
7. Execute `podman logs cloudflare-ddns` to verify the continer is running without error (ipv6 warnings are normal).
3. Create a [cloudflare-ddns configuration](configs/config.json) in `/data/cloudflare-ddns` and update the configuration to meet your needs.
4. Copy [30-cloudflare-ddns.sh](on_boot.d/30-cloudflare-ddns.sh) to `/data/on_boot.d`.
5. Execute /data/on_boot.d/[30-cloudflare-ddns.sh](on_boot.d/30-cloudflare-ddns.sh)
6. Execute `podman logs cloudflare-ddns` to verify the container is running without error (IPv6 warnings are normal).
### Useful commands
@ -43,4 +43,3 @@ Update [config.json](configs/config.json) with the following options:
# view cloudflare-ddns logs to verify the container is running without error (IPv6 warnings are normal).
podman logs cloudflare-ddns
```
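If you later edit `/data/cloudflare-ddns/config.json`, restarting the container is a simple way to make sure the change is picked up (a hedged example; the container name comes from the boot script above):

```sh
# Restart the container after changing config.json
podman restart cloudflare-ddns
```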

View File

@ -2,7 +2,7 @@
CONTAINER=cloudflare-ddns
# Starts a cloudflare ddns container that is deleted after it is stopped.
# All configs stored in /mnt/data/cloudflare-ddns
# All configs stored in /data/cloudflare-ddns
if podman container exists "$CONTAINER"; then
podman start "$CONTAINER"
else
@ -10,6 +10,6 @@ else
--net=host \
--name "$CONTAINER" \
--security-opt=no-new-privileges \
-v /mnt/data/cloudflare-ddns/config.json:/config.json \
-v /data/cloudflare-ddns/config.json:/config.json \
timothyjmiller/cloudflare-ddns:latest
fi

View File

@ -1,7 +1,7 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/mnt/data"
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
@ -22,9 +22,9 @@ case "$(ubnt-device-info firmware || true)" in
# Examples of valid version code would be "latest", "v0.9.1" and "v0.9.0".
CNI_PLUGIN_VER=latest
# location of the CNI Plugin cached tar files
CNI_CACHE="$DATA_DIR/.cache/cni-plugins"
CNI_CACHE="${DATA_DIR}/.cache/cni-plugins"
# location of the conf files to go in the net.d folder of the cni-plugin directory
CNI_NETD="$DATA_DIR/podman/cni"
CNI_NETD="${DATA_DIR}/podman/cni"
# The checksum to use. For CNI Plugin sha1, sha256 and sha512 are available.
CNI_CHECKSUM="sha256"
# Maximum number of loops to attempt to download the plugin if required - setting a 0 or negative value will reinstall the currently installed version (if in cache)

View File

@ -10,26 +10,31 @@
## Customization
While a 100Mb log limit per container should give plenty of log data for all featured in this repo projects, you can increase or decrease max_log_size value in /mnt/data/on_boot.d/05-container-common.sh file after installation.
While a 100 MB log limit per container should provide plenty of log data for all projects featured in this repo, you can increase or decrease the max_log_size value in the /data/on_boot.d/05-container-common.sh file after installation.
## Steps
1. Run as root on UDM Pro to download and set permissions of on_boot.d script:
```sh
# Download 05-container-common.sh from GitHub
curl -L https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/container-common/on_boot.d/05-container-common.sh -o /mnt/data/on_boot.d/05-container-common.sh;
curl -L https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/container-common/on_boot.d/05-container-common.sh -o /data/on_boot.d/05-container-common.sh;
# Set execute permission
chmod a+x /mnt/data/on_boot.d/05-container-common.sh;
chmod a+x /data/on_boot.d/05-container-common.sh;
```
2. Review the script /mnt/data/on_boot.d/05-container-common.sh and when happy execute it.
2. Review the script /data/on_boot.d/05-container-common.sh and when happy execute it.
```sh
# Review script
cat /mnt/data/on_boot.d/05-container-common.sh;
cat /data/on_boot.d/05-container-common.sh;
# Apply container-common settings
/mnt/data/on_boot.d/05-container-common.sh;
/data/on_boot.d/05-container-common.sh;
```
3. Already-running containers will pick up the new defaults after either a container restart ("podman restart \<container-name\>") or a UDM Pro restart. New containers pick up the change from their first run.
4. To list containers that are running with log size limits:
```sh
# List container monitor processes with "--log-size-max" custom argument set
ps -ef | grep conmon | grep log-size-max

View File

@ -1,5 +1,21 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## configuration variables:
VLAN=5
IPV4_IP="10.0.5.3"
@ -33,7 +49,7 @@ CONTAINER=pihole
if ! test -f /opt/cni/bin/macvlan; then
echo "Error: CNI plugins not found. You can install it with the following command:" >&2
echo " curl -fsSLo /mnt/data/on_boot.d/05-install-cni-plugins.sh https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/cni-plugins/05-install-cni-plugins.sh && /bin/sh /mnt/data/on_boot.d/05-install-cni-plugins.sh" >&2
echo " curl -fsSLo ${DATA_DIR}/on_boot.d/05-install-cni-plugins.sh https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/cni-plugins/05-install-cni-plugins.sh && /bin/sh ${DATA_DIR}/on_boot.d/05-install-cni-plugins.sh" >&2
exit 1
fi

View File

@ -1,14 +1,41 @@
#!/bin/sh
CONTAINER=haproxy
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/haproxy" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/haproxy"
echo "Directory '${DATA_DIR}/haproxy' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/haproxy' already exists. Moving on."
fi
# Starts an haproxy container that is deleted after it is stopped.
# All configs stored in /mnt/data/haproxy
# All configs stored in /data/haproxy
if podman container exists "$CONTAINER"; then
podman start "$CONTAINER"
else
podman run -d --net=host --restart always \
--name haproxy \
--hostname ha.proxy \
-v "/mnt/data/haproxy/:/usr/local/etc/haproxy/" \
-v "${DATA_DIR}/haproxy/:/usr/local/etc/haproxy/" \
haproxy:latest
fi

View File

@ -12,30 +12,29 @@
## Steps
1. Pull your image with `podman pull docker.io/library/haproxy`.
1. Copy [50-haproxy.sh](./50-haproxy.sh) to `/mnt/data/on_boot.d/50-haproxy.sh`.
1. Choose network configuration - You can run either on the host network or on a seperate docker network. Running on the host network is easier but does mean you can't clash with the ports already in use on the UDM.
1. Check if you either have `/mnt/data` or `/data/` and adjust below accordingly
2. Pull your image with `podman pull docker.io/library/haproxy`.
3. Copy [50-haproxy.sh](./50-haproxy.sh) to `/data/on_boot.d/50-haproxy.sh`.
4. Choose a network configuration - you can run either on the host network or on a separate docker network. Running on the host network is easier, but it means you must avoid the ports already in use on the UDM.
1. If you want to run on the host network
1. You don't have to do anything extra to run on the host network all the instructions / scripts assume this setup.
1. If you want to run on a custom docker network do the following:
2. If you want to run on a custom docker network do the following:
1. Set up the network - there are some instructions in the Customizations section of the pihole instructions: https://github.com/unifi-utilities/unifios-utilities/tree/main/run-pihole#customizations
1. Copy [21-haproxy.conflist](./21-haproxy.conflist) to `/mnt/data/podman/cni/` and update its values to reflect your environment.
1. Execute the `/mnt/data/on_boot.d/05-install-cni-plugins.sh` script to create the network.
1. Edit `/mnt/data/on_boot.d/50-haproxy.sh` and change `--net=host` to `--network haproxy`
1. Create a persistant directory and config for haproxy to use:
2. Copy [21-haproxy.conflist](./21-haproxy.conflist) to `/data/podman/cni/` and update its values to reflect your environment.
3. Execute the `/data/on_boot.d/05-install-cni-plugins.sh` script to create the network.
4. Edit `/data/on_boot.d/50-haproxy.sh` and change `--net=host` to `--network haproxy`
5. Create a persistent directory and config for haproxy to use:
```sh
mkdir -p /mnt/data/haproxy
touch /mnt/data/haproxy/haproxy.cfg
mkdir -p /data/haproxy
touch /data/haproxy/haproxy.cfg
```
1. Add your config to `/mnt/data/haproxy/haproxy.cfg`. Each configuration is unique, so check out some resouces like [haproxy.com](https://www.haproxy.com/documentation/hapee/latest/configuration/config-sections/) for basics.
1. Run `/mnt/data/on_boot.d/50-haproxy.sh`
6. Add your config to `/data/haproxy/haproxy.cfg`. Each configuration is unique, so check out some resources like [haproxy.com](https://www.haproxy.com/documentation/hapee/latest/configuration/config-sections/) for the basics (a minimal starting config is sketched after these steps).
7. Run `/data/on_boot.d/50-haproxy.sh`
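As a starting point, here is a minimal, hypothetical `haproxy.cfg` written with the same heredoc style other scripts in this repo use; the port, names, and backend address are placeholders to adjust for your environment.

```sh
# Write a minimal placeholder config (adjust the port, names, and backend address)
cat > /data/haproxy/haproxy.cfg << 'EOF'
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:8080
    default_backend web

backend web
    server web1 192.168.1.10:80 check
EOF
```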
## Upgrading Easily (if at all)
1. Edit [update-haproxy.sh](./update-haproxy.sh) to use the same command you used at installation (if changed). If you added your own network config ensure you change the `--net=host` to `--network haproxy`
2. Copy the [update-haproxy.sh](./update-haproxy.sh) to `/mnt/data/scripts`
3. Anytime you want to update your installation, simply run `/mnt/data/scripts/update-haproxy.sh`
2. Copy the [update-haproxy.sh](./update-haproxy.sh) to `/data/scripts`
3. Anytime you want to update your installation, simply run `/data/scripts/update-haproxy.sh`

View File

@ -1,4 +1,21 @@
IMAGE=haproxy:latest
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
podman pull $IMAGE
podman stop haproxy
@ -6,5 +23,5 @@ podman rm haproxy
podman run -d --net=host --restart always \
--name haproxy \
--hostname ha.proxy \
-v "/mnt/data/haproxy/:/usr/local/etc/haproxy/" \
-v "${DATA_DIR}/haproxy/:/usr/local/etc/haproxy/" \
$IMAGE

View File

@ -1,5 +1,32 @@
#!/bin/sh
# Place cross compiled version of `socat` in /mnt/data/hdhomerun
# Place cross compiled version of `socat` in /data/hdhomerun
HDHOMERUN_IP=10.10.30.146
/mnt/data/hdhomerun/socat -d -d -v udp4-recvfrom:65001,broadcast,fork udp4-sendto:$HDHOMERUN_IP:65001 &
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "$DATA_DIR" ]; then
# If it does not exist, create the directory
mkdir -p "$DATA_DIR"
echo "Directory '$DATA_DIR' created."
else
# If it already exists, print a message
echo "Directory '$DATA_DIR' already exists. Moving on."
fi
${DATA_DIR}/hdhomerun/socat -d -d -v udp4-recvfrom:65001,broadcast,fork udp4-sendto:$HDHOMERUN_IP:65001 &

View File

@ -1,20 +1,22 @@
# Run Homebridge on your UDM
### Features
1. Run [Homebridge](https://homebridge.io/) on your UDM(P).
2. Integrate Unifi Protect cameras in HomeKit via `homebridge-unifi-protect`.
3. Persists through reboots and firmware updates.
### Requirements
1. You have successfully setup the on boot script described [here](https://github.com/unifi-utilities/unifios-utilities/tree/main/on-boot-script).
2. You have applied [container-common](https://github.com/unifi-utilities/unifios-utilities/tree/main/container-common) change to prevent UDM storage to fill up with Homebridge logs and addon error messages that can move fast.
3. You have applied [cni-plugins](https://github.com/unifi-utilities/unifios-utilities/tree/main/cni-plugins "cni-plugins") to set up the CNI configuration. (You don't need the configuration files, just place the script in the on-boot folder.)
### Steps
1. Type this command: `mkdir -p /mnt/data/homebridge/run`
2. Copy [25-homebridge.sh](on_boot.d/25-homebridge.sh) to `/mnt/data/on_boot.d`. To do this, cd into `/mnt/data/on_boot.d`, then type `vim 25-homebridge.sh` then go to [this page](https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/homebridge/on_boot.d/25-homebridge.sh "this page") and copy everything using CTRL + A and then CTRL + C and then paste it into vim then click ESC and then type `:x` then click the enter key.
3. Copy [90-homebridge.conflist](cni/90-homebridge.conflist) to `/mnt/data/podman/cni`. This will create the podman network that bridges the container to your VLAN. To do this, cd into `/mnt/data/podman/cni` and type `vim 90-homebridge.conflist` then go to [this page](https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/homebridge/cni/90-homebridge.conflist "this page") and the press CTRL + A and then CTRL + C and then paste it into vim and click ESC and then type `:x` then click the enter key.
1. Type this command: `mkdir -p /data/homebridge/run`
2. Copy [25-homebridge.sh](on_boot.d/25-homebridge.sh) to `/data/on_boot.d`. To do this, cd into `/data/on_boot.d`, type `vim 25-homebridge.sh`, then go to [this page](https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/homebridge/on_boot.d/25-homebridge.sh "this page"), copy everything (CTRL + A, then CTRL + C), paste it into vim, press ESC, type `:x`, and press Enter.
3. Copy [90-homebridge.conflist](cni/90-homebridge.conflist) to `/data/podman/cni`. This will create the podman network that bridges the container to your VLAN. To do this, cd into `/data/podman/cni`, type `vim 90-homebridge.conflist`, then go to [this page](https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/homebridge/cni/90-homebridge.conflist "this page"), copy everything (CTRL + A, then CTRL + C), paste it into vim, press ESC, type `:x`, and press Enter.
4. Run the Homebridge docker container. Change the timezone (`-e TZ`) to match your timezone, and DNS (`--dns`) to match your VLAN gateway.
```shell script
@ -28,8 +30,8 @@
-e PGID=0 -e PUID=0 \
-e HOMEBRIDGE_CONFIG_UI=1 \
-e HOMEBRIDGE_CONFIG_UI_PORT=80 \
-v "/mnt/data/homebridge/:/homebridge/" \
-v "/mnt/data/homebridge/run/:/run/" \
-v "/data/homebridge/:/homebridge/" \
-v "/data/homebridge/run/:/run/" \
oznu/homebridge:latest
```

View File

@ -1,8 +1,36 @@
#!/bin/sh
CONTAINER=homebridge
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## network configuration and startup:
CNI_PATH=/mnt/data/podman/cni
CNI_PATH=${DATA_DIR}/podman/cni
# Check if the directory exists
if [ ! -d "$CNI_PATH" ]; then
# If it does not exist, create the directory
mkdir -p "$CNI_PATH"
echo "Directory '$CNI_PATH' created."
else
# If it already exists, print a message
echo "Directory '$CNI_PATH' already exists. Moving on."
fi
if [ ! -f "$CNI_PATH"/tuning ]; then
mkdir -p $CNI_PATH
curl -L https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz | tar -xz -C $CNI_PATH
@ -12,20 +40,17 @@ mkdir -p /opt/cni
rm -f /opt/cni/bin
ln -s $CNI_PATH /opt/cni/bin
for file in "$CNI_PATH"/*.conflist
do
for file in "$CNI_PATH"/*.conflist; do
if [ -f "$file" ]; then
ln -s "$file" "/etc/cni/net.d/$(basename "$file")"
fi
done
# Starts the homebridge container on boot.
# All configs stored in /mnt/data/homebridge
# All configs stored in /data/homebridge
if podman container exists ${CONTAINER}; then
podman start ${CONTAINER}
else
logger -s -t homebridge -p ERROR Container $CONTAINER not found, make sure you set the proper name, you can ignore this error if it is your first time setting it up
fi

View File

@ -25,16 +25,16 @@ Here's an example snippet of an iptable modified by this program:
## Steps
1. Copy [on_boot.d/30-ipt-enable-logs-launch.sh](./on_boot.d/30-ipt-enable-logs-launch.sh) to /mnt/data/on_boot.d
1. Copy the [scripts/ipt-enable-logs](./scripts/ipt-enable-logs) folder to /mnt/data/scripts
1. Copy [scripts/ipt-enable-logs.sh](./scripts/ipt-enable-logs.sh) to /mnt/data/scripts
1. Execute /mnt/data/on_boot.d/30-ipt-enable-logs-launch.sh
1. Copy [scripts/refresh-iptables.sh](./scripts/refresh-iptables.sh) to /mnt/data/scripts
1. Copy [on_boot.d/30-ipt-enable-logs-launch.sh](./on_boot.d/30-ipt-enable-logs-launch.sh) to /data/on_boot.d
1. Copy the [scripts/ipt-enable-logs](./scripts/ipt-enable-logs) folder to /data/scripts
1. Copy [scripts/ipt-enable-logs.sh](./scripts/ipt-enable-logs.sh) to /data/scripts
1. Execute /data/on_boot.d/30-ipt-enable-logs-launch.sh
1. Copy [scripts/refresh-iptables.sh](./scripts/refresh-iptables.sh) to /data/scripts
## Refreshing iptables
Whenever you update the firewall rules on the Network application, the iptables will be reprovisioned and will need to be reprocessed
by calling /mnt/data/scripts/refresh-iptables.sh.
by calling /data/scripts/refresh-iptables.sh.
## Looking at logs

View File

@ -1,9 +1,37 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/scripts" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/scripts"
echo "Directory '${DATA_DIR}/scripts' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/scripts' already exists. Moving on."
fi
set -e
if ! iptables-save | grep -e '\-A UBIOS_.* \--log-prefix "\[' >/dev/null; then
/mnt/data/scripts/ipt-enable-logs.sh | iptables-restore -c
${DATA_DIR}/scripts/ipt-enable-logs.sh | iptables-restore -c
else
echo "iptables already contains USER log prefixes, ignoring."
fi

View File

@ -1,7 +1,34 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/scripts" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/scripts"
echo "Directory '${DATA_DIR}/scripts' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/scripts' already exists. Moving on."
fi
set -e
docker run -it --rm -v /mnt/data/scripts/ipt-enable-logs:/src -w /src --network=none golang:1.17.3 go build -v -o /src/ipt-enable-logs /src >&2
docker run -it --rm -v ${DATA_DIR}/scripts/ipt-enable-logs:/src -w /src --network=none golang:1.17.3 go build -v -o /src/ipt-enable-logs /src >&2
/mnt/data/scripts/ipt-enable-logs/ipt-enable-logs
${DATA_DIR}/scripts/ipt-enable-logs/ipt-enable-logs

View File

@ -1,13 +1,39 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/on_boot.d" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/on_boot.d"
echo "Directory '${DATA_DIR}/on_boot.d' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/on_boot.d' already exists. Moving on."
fi
set -e
if [ -f /mnt/data/on_boot.d/10-dns.sh ]; then
if [ -f ${DATA_DIR}/on_boot.d/10-dns.sh ]; then
if ! iptables-save | grep -e '\-A PREROUTING.* \--log-prefix "\[' >/dev/null; then
/mnt/data/on_boot.d/10-dns.sh
${DATA_DIR}/on_boot.d/10-dns.sh
else
echo "iptables already contains DNAT log prefixes, ignoring."
fi
fi
/mnt/data/on_boot.d/30-ipt-enable-logs-launch.sh
${DATA_DIR}/on_boot.d/30-ipt-enable-logs-launch.sh

View File

@ -23,24 +23,24 @@ _Adjust according to your setup._
1. First, let's create the folder structure we'll be working with.
`$ mkdir -p /mnt/data/mosquitto/config /mnt/data/mosquitto/data`
`$ mkdir -p /data/mosquitto/config /data/mosquitto/data`
This is where Mosquitto's configuration file and data ("persistence database"; if enabled) will live.
If you're unsure how to configure mosquitto, use the provided barebones config [`config/mosquitto.conf`](config/mosquitto.conf) to get it running initially.
2. **Optional:** Customize [`on_boot.d/45-mosquitto.sh`](on_boot.d/45-mosquitto.sh) to your setup and copy to `/mnt/data/on_boot.d/`.
2. **Optional:** Customize [`on_boot.d/45-mosquitto.sh`](on_boot.d/45-mosquitto.sh) to your setup and copy to `/data/on_boot.d/`.
Most likely you'll need to mark the script as executable, this will do the trick:
`$ chmod a+x /mnt/data/on_boot.d/45-mosquitto.sh`
`$ chmod a+x /data/on_boot.d/45-mosquitto.sh`
3. Then take a loot at [`cni/45-mosquitto.conflist`](cni/45-mosquitto.conflist) and make sure it matches your previously defined configuration; then place it in `/mnt/data/podman/cni/`
3. Then take a look at [`cni/45-mosquitto.conflist`](cni/45-mosquitto.conflist) and make sure it matches your previously defined configuration; then place it in `/data/podman/cni/`
4. Run the boot script (to create the mosquitto network and set its IP routes)
`$ sh /mnt/data/on_boot.d/45-mosquitto.sh`
`$ sh /data/on_boot.d/45-mosquitto.sh`
It will fail when trying to run the container, but that's okay; it just sets up the needed configuration before the initial image run.
The script will also create a [minimal configuration](config/mosquitto.conf) for Mosquitto in `/mnt/data/mosquitto/config/`, _**only if it doesn't already exist**_.
The script will also create a [minimal configuration](config/mosquitto.conf) for Mosquitto in `/data/mosquitto/config/`, _**only if it doesn't already exist**_.
> **Note:** You can use this config to get everything started, but I highly recommend securing your instance with authentication (links to the official documentation & other resources are at the bottom)
@ -53,14 +53,14 @@ _Adjust according to your setup._
--name mosquitto \
--hostname mosquitto.local \
-e "TZ=Europe/Berlin" \
-v /mnt/data/mosquitto/config/:/mosquitto/config \
-v /mnt/data/mosquitto/data/:/mosquitto/data \
-v /data/mosquitto/config/:/mosquitto/config \
-v /data/mosquitto/data/:/mosquitto/data \
eclipse-mosquitto:latest
```
6. Run the boot script again and we're done!
`$ sh /mnt/data/on_boot.d/45-mosquitto.sh`
`$ sh /data/on_boot.d/45-mosquitto.sh`
> You should now be able to connect with any MQTT client to Mosquitto, in my case `mqtt://10.0.20.4:1883`
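To verify the broker is reachable, here is a hedged example using the standard Mosquitto client tools from another machine on that VLAN (adjust the IP to your setup):

```sh
# Subscribe to all topics and print whatever arrives
mosquitto_sub -h 10.0.20.4 -p 1883 -t '#' -v
```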

View File

@ -1,16 +1,42 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/podman/cni" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/podman/cni"
echo "Directory '${DATA_DIR}/podman/cni' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/podman/cni' already exists. Moving on."
fi
## network configuration
VLAN_ID=20
IPV4_IP_CONTAINER="10.0.20.4"
IPV4_IP_GATEWAY="10.0.20.1"
CONTAINER_NAME="mosquitto"
CONTAINER_CNI_PATH="/mnt/data/podman/cni/45-mosquitto.conflist"
CONTAINER_CNI_PATH="${DATA_DIR}/podman/cni/45-mosquitto.conflist"
# make sure cni plugs are installed
if ! test -f /opt/cni/bin/macvlan; then
echo "Error: CNI plugins not found. You can install it with the following command:" >&2
echo " curl -fsSLo /mnt/data/on_boot.d/05-install-cni-plugins.sh https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/cni-plugins/05-install-cni-plugins.sh && /bin/sh /mnt/data/on_boot.d/05-install-cni-plugins.sh" >&2
echo " curl -fsSLo ${DATA_DIR}/on_boot.d/05-install-cni-plugins.sh https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/cni-plugins/05-install-cni-plugins.sh && /bin/sh ${DATA_DIR}/on_boot.d/05-install-cni-plugins.sh" >&2
exit 1
fi
@ -38,9 +64,9 @@ ip link set br${VLAN_ID}.mac up
ip route add ${IPV4_IP_CONTAINER}/32 dev br${VLAN_ID}.mac
# create basic config if not exist
if ! test -f /mnt/data/mosquitto/config/mosquitto.conf; then
mkdir -p /mnt/data/mosquitto/data /mnt/data/mosquitto/config
cat > /mnt/data/mosquitto/config/mosquitto.conf<< EOF
if ! test -f "$DATA_DIR"/mosquitto/config/mosquitto.conf; then
mkdir -p "$DATA_DIR"/mosquitto"$DATA_DIR" "$DATA_DIR"/mosquitto/config
cat >"$DATA_DIR"/mosquitto/config/mosquitto.conf <<EOF
listener 1883
allow_anonymous true
@ -55,10 +81,8 @@ log_timestamp true
EOF
fi
if podman container exists ${CONTAINER_NAME}; then
podman start ${CONTAINER_NAME}
else
logger -s -t podman-mosquitto -p ERROR Container $CONTAINER_NAME not found, make sure you set the proper name, you can ignore this error if it is your first time setting it up
fi

View File

@ -1,5 +1,7 @@
# Run NextDNS on your UDM
# THIS IS NO LONGER MAINTAINED. VENDOR PROVIDES DIRECT SUPPORT
## Features
1. Run NextDNS on your UDM with a completely isolated network stack. This will not port conflict or be influenced by any changes on by Ubiquiti.
@ -13,11 +15,11 @@
## Customization
* Feel free to change [20-dns.conflist](../cni-plugins/20-dns.conflist) to change the IP and MAC address of the container.
* The NextDNS docker image is not supported by NextDNS. It is built out of this repo. If you make any enhancements please contribute back via a Pull Request.
* If you want to inject custom DNS names into NextDNS use --add-host docker commands. The /etc/resolv.conf and /etc/hosts is generated from that and --dns.
* Edit [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) and update its values to reflect your environment (specifically the container name)
* If you want IPv6 support use [20-dnsipv6.conflist](../cni-plugins/20-dnsipv6.conflist) and update [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) with the IPv6 addresses. Also, please provide IPv6 servers to podman using --dns arguments.
- Feel free to change [20-dns.conflist](../cni-plugins/20-dns.conflist) to change the IP and MAC address of the container.
- The NextDNS docker image is not supported by NextDNS. It is built out of this repo. If you make any enhancements please contribute back via a Pull Request.
- If you want to inject custom DNS names into NextDNS, use --add-host docker arguments; /etc/resolv.conf and /etc/hosts are generated from those and --dns (see the sketch after this list).
- Edit [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) and update its values to reflect your environment (specifically the container name)
- If you want IPv6 support use [20-dnsipv6.conflist](../cni-plugins/20-dnsipv6.conflist) and update [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) with the IPv6 addresses. Also, please provide IPv6 servers to podman using --dns arguments.
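For example, a hedged sketch of the extra flags; the hostname and address are made up, and in practice you would fold these into the full `podman run` command shown in the Steps below.

```sh
# Hypothetical: publish a local name inside the container alongside the NextDNS upstreams
podman run -d -it --privileged --network dns --restart always \
  --name nextdns \
  --add-host nas.home.lan:192.168.1.50 \
  --dns=45.90.28.163 --dns=45.90.30.163 \
  boostchicken/nextdns:latest
```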
## Docker
@ -39,19 +41,19 @@ docker buildx build --platform linux/arm64/v8 -t nextdns:latest .
If you have already installed PiHole, skip right to step 5.
1. Copy [05-install-cni-plugins.sh](../cni-plugins/05-install-cni-plugins.sh) to /mnt/data/on_boot.d
1. Execute /mnt/data/on_boot.d/05-install-cni-plugins.sh
1. Copy [05-install-cni-plugins.sh](../cni-plugins/05-install-cni-plugins.sh) to /data/on_boot.d
1. Execute /data/on_boot.d/05-install-cni-plugins.sh
1. On your controller, make a Corporate network with no DHCP server and give it a VLAN. For this example we are using VLAN 5.
2. Copy [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) to /mnt/data/on_boot.d and update its values to reflect your environment
3. Copy [20-dns.conflist](../cni-plugins/20-dns.conflist) to /mnt/data/podman/cni. This will create your podman macvlan network
4. Execute /mnt/data/on_boot.d/[10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
5. Create /mnt/data/nextdns and copy [nextdns.conf](udm-files/nextdns.conf) to it.
6. Run the NextDNS docker container. Mounting dbus and running in privileged is only required for mDNS. Also, please change the --dns arguments to whatever was provided by NextDNS.
1. Copy [10-dns.sh](../dns-common/on_boot.d/10-dns.sh) to /data/on_boot.d and update its values to reflect your environment
1. Copy [20-dns.conflist](../cni-plugins/20-dns.conflist) to /data/podman/cni. This will create your podman macvlan network
1. Execute /data/on_boot.d/[10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
1. Create /data/nextdns and copy [nextdns.conf](udm-files/nextdns.conf) to it.
1. Run the NextDNS docker container. Mounting dbus and running in privileged is only required for mDNS. Also, please change the --dns arguments to whatever was provided by NextDNS.
```sh
podman run -d -it --privileged --network dns --restart always \
--name nextdns \
-v "/mnt/data/nextdns/:/etc/nextdns/" \
-v "/data/nextdns/:/etc/nextdns/" \
-v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket \
--mount type=bind,source=/config/dnsmasq.lease,target=/tmp/dnsmasq.leases \
--dns=45.90.28.163 --dns=45.90.30.163 \
@ -59,4 +61,4 @@ If you have already installed PiHole, skip right to step 5.
boostchicken/nextdns:latest
```
7. Update your DNS Servers to 10.0.5.3 (or your custom ip) in all your DHCP configs.
1. Update your DNS Servers to 10.0.5.3 (or your custom ip) in all your DHCP configs.

View File

@ -8,21 +8,21 @@
1. Should work on any UDM/UDMPro after 2.4.x
- [build_deb.sh](build_deb.sh) can be used to build the package by yourself.
* [build_deb.sh](build_deb.sh) can be used to build the package by yourself.
* [dpkg-build-files](dpkg-build-files) contains the sources that debuild uses to build the package if you want to build it yourself / change it
* by default it uses docker or podman to build the debian package
* use ```./build_deb.sh build``` to not use a container
* the resulting package will be in [packages/](packages/)
- [dpkg-build-files](dpkg-build-files) contains the sources that debuild uses to build the package if you want to build it yourself / change it
- by default it uses docker or podman to build the debian package
- use `./build_deb.sh build` to not use a container
- the resulting package will be in [packages/](packages/)
* Built on Ubuntu-20.04 on Windows 10/WSL2
- Built on Ubuntu-20.04 on Windows 10/WSL2
## Install
You can execute in UDM/Pro/SE and UDR with:
```bash
curl -fsL "https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/HEAD/on-boot-script/remote_install.sh" | /bin/sh
curl -fsL "https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/HEAD/on-boot-script-2.x/remote_install.sh" | /bin/sh
```
This is a forced-install script, so it will uninstall any previous version and install on_boot while keeping your on-boot files.
@ -46,15 +46,16 @@ This will also install CNI Plugins & CNI Bridge scripts. If you are using UDMSE/
exit
```
3. Copy any shell scripts you want to run to /mnt/data/on_boot.d on your UDM (not the unifi-os shell) and make sure they are executable and have the correct shebang (#!/bin/sh). Additionally, scripts need to have a `.sh` extention in their filename.
3. Copy any shell scripts you want to run to /data/on_boot.d on your UDM (not the unifi-os shell) and make sure they are executable and have the correct shebang (#!/bin/sh). Additionally, scripts need to have a `.sh` extension in their filename (a minimal example script is sketched after the examples below).
Examples:
* Start a DNS Container [10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
* Start wpa_supplicant [on_boot.d/10-wpa_supplicant.sh](examples/udm-files/on_boot.d/10-wpa_supplicant.sh)
* Add a persistent ssh key for the root user [on_boot.d/15-add-root-ssh-keys.sh](examples/udm-files/on_boot.d/15-add-root-ssh-keys.sh)
- Start a DNS Container [10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
- Start wpa_supplicant [on_boot.d/10-wpa_supplicant.sh](examples/udm-files/on_boot.d/10-wpa_supplicant.sh)
- Add a persistent ssh key for the root user [on_boot.d/15-add-root-ssh-keys.sh](examples/udm-files/on_boot.d/15-add-root-ssh-keys.sh)
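A minimal, hypothetical script of your own could look like this (the filename and log message are placeholders):

```sh
#!/bin/sh
# Save as /data/on_boot.d/99-hello.sh and make it executable with: chmod +x 99-hello.sh
logger -s -t udm-boot "custom on-boot script ran"
```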
## Version History
### 1.0.0
* First release that persists through firmware
- First release that persists through firmware

View File

@ -1,5 +1,21 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Configure shell profile
device_info() {
@ -18,12 +34,12 @@ EOF
# Extend UbiOS prompt to include useful information
cat >/etc/profile.d/prompt.sh <<'EOF'
UDM_NAME="$(grep -m 1 '^name:' /data/unifi-core/config/settings.yaml | awk -F: '{ gsub(/^[ \t]+|[ \t]+$/, "", $2); print tolower($2) }')"
UDM_NAME="$(grep -m 1 '^name:' ${DATA_DIR}/unifi-core/config/settings.yaml | awk -F: '{ gsub(/^[ \t]+|[ \t]+$/, "", $2); print tolower($2) }')"
PROMPT_MAIN="\u@${UDM_NAME}:\w"
export PS1="[UDM] ${PROMPT_MAIN}${PS1}"
EOF
# Copy all global profile scripts (for all users) from `/mnt/data/on_boot.d/settings/profile/global.profile.d/` directory
mkdir -p /mnt/data/on_boot.d/settings/profile/global.profile.d
cp -rf /mnt/data/on_boot.d/settings/profile/global.profile.d/* /etc/profile.d/
# Copy all global profile scripts (for all users) from `${DATA_DIR}/on_boot.d/settings/profile/global.profile.d/` directory
mkdir -p ${DATA_DIR}/on_boot.d/settings/profile/global.profile.d
cp -rf ${DATA_DIR}/on_boot.d/settings/profile/global.profile.d/* /etc/profile.d/

View File

@ -1,3 +1,19 @@
#!/bin/sh
cp -f /mnt/data/on_boot.d/files/.profile /root/
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
cp -f ${DATA_DIR}/on_boot.d/files/.profile /root/

View File

@ -1,16 +1,34 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Config Variables - please edit these
# Set to true to download public keys from a github user account
USE_GITHUB_KEYS=true
# Enter your username on github to get the public keys for
GITHUB_USER="<YOUR_USERNAME>"
# File location for the output of the git download
GITHUB_KEY_PATH="/mnt/data/podman/ssh"
GITHUB_KEY_PATH="${DATA_DIR}/podman/ssh"
GITHUB_KEY_FILE="${GITHUB_KEY_PATH}/github.keys"
# Set to true to use a file containing a key per line in the format ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAA...\n
USE_KEY_FILE=true
# IF using an input file, list it here
INPUT_KEY_PATH="/mnt/data/podman/ssh"
INPUT_KEY_PATH="${DATA_DIR}/podman/ssh"
INPUT_KEY_FILE="${INPUT_KEY_PATH}/ssh.keys"
# The target key file for the script
OUTPUT_KEY_PATH="/root/.ssh"
@ -41,9 +59,8 @@ use_key_from_file(){
echo "File $1 does not exist"
return
fi
counter=0;
while IFS= read -r line;
do
counter=0
while IFS= read -r line; do
write_to_output "${line}"
let "counter++"
done <$1

View File

@ -1,8 +1,24 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Places public keys in ~/.ssh/authorized_keys
KEYS_SOURCE_FILE="/mnt/data/on_boot.d/settings/ssh/authorized_keys"
KEYS_SOURCE_FILE="${DATA_DIR}/on_boot.d/settings/ssh/authorized_keys"
KEYS_TARGET_FILE="/root/.ssh/authorized_keys"
count_added=0
@ -24,6 +40,6 @@ fi
# Convert ssh key to dropbear for shell interaction
echo "Converting SSH private key to dropbear format"
dropbearconvert openssh dropbear /mnt/data/ssh/id_rsa /root/.ssh/id_dropbear
dropbearconvert openssh dropbear ${DATA_DIR}/ssh/id_rsa /root/.ssh/id_dropbear
exit 0

View File

@ -1,5 +1,21 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
#####################################################
# ADD KNOWN HOSTS AS BELOW - CHANGE BEFORE RUNNING #
#####################################################
@ -12,10 +28,8 @@ set -- "hostname ecdsa-sha2-nistp256 AAAABIGHOSTIDENTIFIERWITHMAGICSTUFF=" \
KNOWN_HOSTS_FILE="/root/.ssh/known_hosts"
counter=0
for host in "$@"
do
for host in "$@"; do
## Places known host in ~/.ssh/known_hosts if not present
if ! grep -Fxq "$host" "$KNOWN_HOSTS_FILE"; then
let counter++
@ -25,5 +39,4 @@ done
echo $counter hosts added to $KNOWN_HOSTS_FILE
exit 0;
exit 0

View File

@ -1,13 +1,29 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
mkdir -p ${DATA_DIR}/.home
mkdir -p /mnt/data/.home
for file in .ash_history .bash_history
if [ ! -f /mnt/data/.home/$file ]; then
for file in .ash_history .bash_history; do
if [ ! -f ${DATA_DIR}/.home/$file ]; then
touch /root/$file
cp /root/$file /mnt/data/.home/$file
chown root:root /mnt/data/.home/$file
chmod 0600 /mnt/data/.home/$file
cp /root/$file ${DATA_DIR}/.home/$file
chown root:root ${DATA_DIR}/.home/$file
chmod 0600 ${DATA_DIR}/.home/$file
fi
ln -sf /mnt/data/.home/$file /root/$file
ln -sf ${DATA_DIR}/.home/$file /root/$file
done

View File

@ -1,9 +1,25 @@
#!/bin/sh
## Store crontab files in /mnt/data/cronjobs/ (you will need to create this folder).
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Store crontab files in ${DATA_DIR}/cronjobs/ (you will need to create this folder).
## This script will re-add them on startup.
cp /mnt/data/cronjobs/* /etc/cron.d/
cp ${DATA_DIR}/cronjobs/* /etc/cron.d/
/etc/init.d/crond restart
exit 0

View File

@ -1,15 +1,31 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## ----------------------------------------------------------------------
## Script to add/remove time-of-day restrictions on internet access for selected clients.
##
## Use DHCP reservations to encourage the selected clients to always obtain the same IP address.
##
## To install:
## * Copy this script into /mnt/data/on_boot.d/, using something like WinSCP or SSH + vi.
## * Grant Execute permission: chmod +x /mnt/data/on_boot.d/iptables_timerestrict.sh
## * Copy this script into ${DATA_DIR}/on_boot.d/, using something like WinSCP or SSH + vi.
## * Grant Execute permission: chmod +x ${DATA_DIR}/on_boot.d/iptables_timerestrict.sh
## * Run it once, to activate it (crontab entries will keep it active forever after):
## Via SSH into UDM shell: /mnt/data/on_boot.d/iptables_timerestrict.sh
## Via SSH into UDM shell: ${DATA_DIR}/on_boot.d/iptables_timerestrict.sh
##
## Notes:
## * Changes to firewall rules in the Unifi Network Application will remove your restriction;
@ -26,7 +42,6 @@
## have a blocked time in the middle of the day.
## ----------------------------------------------------------------------
## List all client addresses you'd like to restrict. Separate multiple with spaces.
timerestricted_addresses='192.168.1.101 192.168.1.102'
@ -36,8 +51,6 @@ wake_hour=06
## Hour of day to activate the restriction.
sleep_hour=23
## ----------------------------------------------------------------------
## ----------------------------------------------------------------------
## ----------------------------------------------------------------------
@ -64,17 +77,17 @@ done
myrule="FORWARD -i br0 -j TIMERESTRICT"
## install or remove rule based on current time and whether the rule already exists
if [ `date +%H` -ge $sleep_hour ]; then
if [ $(date +%H) -ge $sleep_hour ]; then
logger "TIMERESTRICT: Activating sleep time"
iptables -C $myrule 2>/dev/null || iptables -I $myrule
elif [ `date +%H` -ge $wake_hour ]; then
elif [ $(date +%H) -ge $wake_hour ]; then
logger "TIMERESTRICT: Activating awake time"
iptables -C $myrule 2>/dev/null && iptables -D $myrule
fi
## setup cron job to activate/deactivate on time of day
echo "00 $sleep_hour * * * `readlink -f $0`" > /etc/cron.d/iptables_timerestrict
echo "00 $wake_hour * * * `readlink -f $0`" >> /etc/cron.d/iptables_timerestrict
echo "00 $sleep_hour * * * $(readlink -f $0)" >/etc/cron.d/iptables_timerestrict
echo "00 $wake_hour * * * $(readlink -f $0)" >>/etc/cron.d/iptables_timerestrict
## Format: <minute> <hour> <day> <month> <dow> <tags and command>
/etc/init.d/crond restart

View File

@ -1,7 +1,7 @@
#!/usr/bin/env sh
# Get DataDir location
DATA_DIR="/mnt/data"
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
@ -25,13 +25,12 @@ SYMLINK_SYSTEMCTL="/etc/systemd/system/multi-user.target.wants/udm-boot.service"
CNI_PLUGINS_SCRIPT_RAW_URL="https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/HEAD/cni-plugins/05-install-cni-plugins.sh"
CNI_PLUGINS_ON_BOOT_FILENAME="$(basename "$CNI_PLUGINS_SCRIPT_RAW_URL")"
CNI_BRIDGE_SCRIPT_RAW_URL="https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/on-boot-script/examples/udm-networking/on_boot.d/06-cni-bridge.sh"
CNI_BRIDGE_SCRIPT_RAW_URL="https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/on-boot-script-2.x/examples/udm-networking/on_boot.d/06-cni-bridge.sh"
CNI_BRIDGE_ON_BOOT_FILENAME="$(basename "$CNI_BRIDGE_SCRIPT_RAW_URL")"
GITHUB_API_URL="https://api.github.com/repos"
GITHUB_REPOSITORY="unifi-utilities/unifios-utilities"
# --- Functions ---
header() {
@ -133,7 +132,7 @@ After=network-online.target
[Service]
Type=forking
ExecStart=bash -c 'mkdir -p $DATA_DIR/on_boot.d && find -L $DATA_DIR/on_boot.d -mindepth 1 -maxdepth 1 -type f -print0 | sort -z | xargs -0 -r -n 1 -- bash -c \'if test -x "\$0"; then echo "%n: running \$0"; "\$0"; else case "\$0" in *.sh) echo "%n: sourcing \$0"; . "\$0";; *) echo "%n: ignoring \$0";; esac; fi\''
ExecStart=bash -c 'mkdir -p ${DATA_DIR}/on_boot.d && find -L ${DATA_DIR}/on_boot.d -mindepth 1 -maxdepth 1 -type f -print0 | sort -z | xargs -0 -r -n 1 -- bash -c \'if test -x "\$0"; then echo "%n: running \$0"; "\$0"; else case "\$0" in *.sh) echo "%n: sourcing \$0"; . "\$0";; *) echo "%n: ignoring \$0";; esac; fi\''
[Install]
WantedBy=multi-user.target
@ -165,7 +164,7 @@ header
depends_on ubnt-device-info
depends_on curl
ON_BOOT_D_PATH="$DATA_DIR/on_boot.d"
ON_BOOT_D_PATH="${DATA_DIR}/on_boot.d"
case "$(udm_model)" in
udmlegacy | udmprolegacy)
@ -201,7 +200,6 @@ case "$(udm_model)" in
esac
echo
if [ ! -f "${ON_BOOT_D_PATH}/${CNI_PLUGINS_ON_BOOT_FILENAME}" ]; then
echo "Downloading CNI plugins script..."
if
@ -220,7 +218,6 @@ echo "Executing CNI plugins script..."
"${ON_BOOT_D_PATH}/${CNI_PLUGINS_ON_BOOT_FILENAME}" || true
echo
if [ ! -f "${ON_BOOT_D_PATH}/${CNI_BRIDGE_ON_BOOT_FILENAME}" ]; then
echo "Downloading CNI bridge script..."
if

View File

@ -3,7 +3,7 @@
## Features
1. Allows you to run a shell script at S95 anytime your UDM starts / reboots
1. Persists through reboot and **firmware updates**! It is able to do this because Ubiquiti caches all debian package installs on the UDM in /mnt/data, then re-installs them on reset of unifi-os container.
1. Persists through reboot and **firmware updates**! It is able to do this because Ubiquiti caches all Debian package installs on the UDM in /data, then re-installs them when the unifi-os container is reset.
## Compatibility
@ -12,7 +12,7 @@
### Upgrade from earlier way
* As long as you didn't change the filenames, installing the deb package is all you need to do. If you want to clean up beforehand anyways....
- As long as you didn't change the filenames, installing the deb package is all you need to do. If you want to clean up beforehand anyway:
```bash
rm /etc/init.d/udm.sh
@ -20,13 +20,14 @@
rm /etc/systemd/system/udmboot.service
```
* [build_deb.sh](build_deb.sh) can be used to build the package by yourself.
* [dpkg-build-files](dpkg-build-files) contains the sources that debuild uses to build the package if you want to build it yourself / change it
* by default it uses docker or podman to build the debian package
* use ```./build_deb.sh build``` to not use a container
* the resulting package will be in [packages/](packages/)
- [build_deb.sh](build_deb.sh) can be used to build the package by yourself.
* Built on Ubuntu-20.04 on Windows 10/WSL2
- [dpkg-build-files](dpkg-build-files) contains the sources that debuild uses to build the package if you want to build it yourself / change it
- by default it uses docker or podman to build the debian package
- use `./build_deb.sh build` to not use a container
- the resulting package will be in [packages/](packages/)
- Built on Ubuntu-20.04 on Windows 10/WSL2
## Install
@ -57,47 +58,48 @@ This will also install CNI Plugins & CNI Bridge scripts. If you are using UDMSE/
exit
```
3. Copy any shell scripts you want to run to /mnt/data/on_boot.d on your UDM (not the unifi-os shell) and make sure they are executable and have the correct shebang (#!/bin/sh). Additionally, scripts need to have a `.sh` extention in their filename.
3. Copy any shell scripts you want to run to /data/on_boot.d on your UDM (not the unifi-os shell) and make sure they are executable and have the correct shebang (#!/bin/sh). Additionally, scripts need to have a `.sh` extension in their filename.
Examples:
* Start a DNS Container [10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
* Start wpa_supplicant [on_boot.d/10-wpa_supplicant.sh](examples/udm-files/on_boot.d/10-wpa_supplicant.sh)
* Add a persistent ssh key for the root user [on_boot.d/15-add-root-ssh-keys.sh](examples/udm-files/on_boot.d/15-add-root-ssh-keys.sh)
- Start a DNS Container [10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
- Start wpa_supplicant [on_boot.d/10-wpa_supplicant.sh](examples/udm-files/on_boot.d/10-wpa_supplicant.sh)
- Add a persistent ssh key for the root user [on_boot.d/15-add-root-ssh-keys.sh](examples/udm-files/on_boot.d/15-add-root-ssh-keys.sh)
## Version History
### 1.0.7
* Support for Legacy and Current Firmware
- Support for Legacy and Current Firmware
### 1.0.6
* Fix timeouts
- Fix timeouts
### 1.0.5
* Remove on_boot.sh from UDM
* Follow symlinks
* move to network-online.target
- Remove on_boot.sh from UDM
- Follow symlinks
- move to network-online.target
### 1.0.4
* Fix 600s timeout issues
* Fix rc.d policy issue
- Fix 600s timeout issues
- Fix rc.d policy issue
### 1.0.3
* Fix not working after firmware upgrade
* Added udm-boot.boostchicken.dev domain
- Fix not working after firmware upgrade
- Added udm-boot.boostchicken.dev domain
### 1.0.2
* Some build improvements and more clean installation
- Some build improvements and more clean installation
### 1.0.1
* Fully automated install, all that is left is populating /mnt/data/on_boot.d
- Fully automated install, all that is left is populating /data/on_boot.d
### 1.0.0
* First release that persists through firmware
- First release that persists through firmware

View File

@ -2,22 +2,22 @@
## Automated Setup
* NB! THESE WILL NOT PERSIST THROUGH FIRMWARE. They still work however
- NB! THESE WILL NOT PERSIST THROUGH FIRMWARE UPDATES. They still work, however.
1. Copy [install.sh](manual-install/install.sh) to your UDM and execute it
1. Copy any shell scripts you want to run to /mnt/data/on_boot.d and make sure they are executable and have the correct shebang (#!/bin/sh)
1. Copy any shell scripts you want to run to /data/on_boot.d and make sure they are executable and have the correct shebang (#!/bin/sh)
Examples:
* Start a DNS Container [10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
* Start wpa_supplicant [on_boot.d/10-wpa_supplicant.sh](examples/udm-files/on_boot.d/10-start-containers.sh)
- Start a DNS Container [10-dns.sh](../dns-common/on_boot.d/10-dns.sh)
- Start wpa_supplicant [on_boot.d/10-wpa_supplicant.sh](examples/udm-files/on_boot.d/10-start-containers.sh)
## Manual Setup
1. Copy on_boot.sh and make on_boot.d and add scripts to on_boot.d
```sh
mkdir -p /mnt/data/on_boot.d
vi /mnt/data/on_boot.sh
chmod u+x /mnt/data/on_boot.sh
mkdir -p /data/on_boot.d
vi /data/on_boot.sh
chmod u+x /data/on_boot.sh
```
Example: [on_boot.sh](examples/udm-files/on_boot.sh)
@ -32,11 +32,12 @@
```sh
echo "#!/bin/sh
ssh -o StrictHostKeyChecking=no root@127.0.1.1 '/mnt/data/on_boot.sh'" > /etc/init.d/udm.sh
ssh -o StrictHostKeyChecking=no root@127.0.1.1 '/data/on_boot.sh'" > /etc/init.d/udm.sh
chmod u+x /etc/init.d/udm.sh
```
Example: [udm.sh](examples/unifi-os-files/udm.sh)
4. make a service that runs on startup, after we have networking
```sh

View File

@ -22,7 +22,7 @@ set -e
case "$1" in
purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
if [ -x /sbin/ssh-proxy ]; then
/sbin/ssh-proxy rm -f /mnt/data/on_boot.sh
/sbin/ssh-proxy rm -f /data/on_boot.sh
fi
true
;;

View File

@ -17,7 +17,7 @@ set -e
case "$1" in
install|upgrade)
if [ -x /sbin/ssh-proxy ]; then
/sbin/ssh-proxy rm -f /mnt/data/on_boot.sh
/sbin/ssh-proxy rm -f /data/on_boot.sh
fi
true
;;

View File

@ -8,7 +8,7 @@ StartLimitBurst=5
[Service]
Restart=on-failure
RestartSec=5s
ExecStart=/sbin/ssh-proxy 'mkdir -p /mnt/data/on_boot.d && find -L /mnt/data/on_boot.d -mindepth 1 -maxdepth 1 -type f -print0 | sort -z | xargs -0 -r -n 1 -- sh -c '\''if test -x "$0"; then echo "%n: running $0"; "$0"; else case "$0" in *.sh) echo "%n: sourcing $0"; . "$0";; *) echo "%n: ignoring $0";; esac; fi'\'
ExecStart=/sbin/ssh-proxy 'mkdir -p /data/on_boot.d && find -L /data/on_boot.d -mindepth 1 -maxdepth 1 -type f -print0 | sort -z | xargs -0 -r -n 1 -- sh -c '\''if test -x "$0"; then echo "%n: running $0"; "$0"; else case "$0" in *.sh) echo "%n: sourcing $0"; . "$0";; *) echo "%n: ignoring $0";; esac; fi'\'
RemainAfterExit=true
[Install]

View File

@ -1,5 +1,21 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Configure shell profile
device_info() {
@ -18,12 +34,12 @@ EOF
# Extend UbiOS prompt to include useful information
cat >/etc/profile.d/prompt.sh <<'EOF'
UDM_NAME="$(grep -m 1 '^name:' /data/unifi-core/config/settings.yaml | awk -F: '{ gsub(/^[ \t]+|[ \t]+$/, "", $2); print tolower($2) }')"
UDM_NAME="$(grep -m 1 '^name:' ${DATA_DIR}/unifi-core/config/settings.yaml | awk -F: '{ gsub(/^[ \t]+|[ \t]+$/, "", $2); print tolower($2) }')"
PROMPT_MAIN="\u@${UDM_NAME}:\w"
export PS1="[UDM] ${PROMPT_MAIN}${PS1}"
EOF
# Copy all global profile scripts (for all users) from `/mnt/data/on_boot.d/settings/profile/global.profile.d/` directory
mkdir -p /mnt/data/on_boot.d/settings/profile/global.profile.d
cp -rf /mnt/data/on_boot.d/settings/profile/global.profile.d/* /etc/profile.d/
# Copy all global profile scripts (for all users) from `${DATA_DIR}/on_boot.d/settings/profile/global.profile.d/` directory
mkdir -p ${DATA_DIR}/on_boot.d/settings/profile/global.profile.d
cp -rf ${DATA_DIR}/on_boot.d/settings/profile/global.profile.d/* /etc/profile.d/
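As an illustration of how the settings directory is meant to be used (the alias below is only an example), any profile snippet dropped there ends up in `/etc/profile.d/` on the next boot:

```sh
# Hypothetical example: a global alias that survives reboots.
# Use /mnt/data instead of /data on 1.x firmware.
mkdir -p /data/on_boot.d/settings/profile/global.profile.d
cat > /data/on_boot.d/settings/profile/global.profile.d/10-aliases.sh <<'EOF'
alias ll='ls -la'
EOF
```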

View File

@ -1,3 +1,20 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
cp -f /mnt/data/on_boot.d/files/.profile /root/
cp -f ${DATA_DIR}/on_boot.d/files/.profile /root/

View File

@ -1,16 +1,35 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Config Variables - please edit these
# Set to true to download public keys from a github user account
USE_GITHUB_KEYS=true
# Enter your username on github to get the public keys for
GITHUB_USER="<YOUR_USERNAME>"
# File location for the output of the git download
GITHUB_KEY_PATH="/mnt/data/podman/ssh"
GITHUB_KEY_PATH="${DATA_DIR}/podman/ssh"
GITHUB_KEY_FILE="${GITHUB_KEY_PATH}/github.keys"
# Set to true to use a file containing a key per line in the format ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAA...\n
USE_KEY_FILE=true
# IF using an input file, list it here
INPUT_KEY_PATH="/mnt/data/podman/ssh"
INPUT_KEY_PATH="${DATA_DIR}/podman/ssh"
INPUT_KEY_FILE="${INPUT_KEY_PATH}/ssh.keys"
# The target key file for the script
OUTPUT_KEY_PATH="/root/.ssh"
@ -41,9 +60,8 @@ use_key_from_file(){
echo "File $1 does not exist"
return
fi
counter=0;
while IFS= read -r line;
do
counter=0
while IFS= read -r line; do
write_to_output "${line}"
let "counter++"
done <$1
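A sketch of the expected key-file input for the script above (the key itself is a placeholder; `ssh.keys` simply holds one public key per line, matching `INPUT_KEY_FILE`):

```sh
# Hypothetical example: seed the input file used when USE_KEY_FILE=true.
# Use /mnt/data instead of /data on 1.x firmware.
mkdir -p /data/podman/ssh
cat > /data/podman/ssh/ssh.keys <<'EOF'
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAA...replace-with-your-key user@laptop
EOF
```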

View File

@ -1,8 +1,24 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Places public keys in ~/.ssh/authorized_keys
KEYS_SOURCE_FILE="/mnt/data/on_boot.d/settings/ssh/authorized_keys"
KEYS_SOURCE_FILE="${DATA_DIR}/on_boot.d/settings/ssh/authorized_keys"
KEYS_TARGET_FILE="/root/.ssh/authorized_keys"
count_added=0
@ -24,6 +40,6 @@ fi
# Convert ssh key to dropbear for shell interaction
echo "Converting SSH private key to dropbear format"
dropbearconvert openssh dropbear /mnt/data/ssh/id_rsa /root/.ssh/id_dropbear
dropbearconvert openssh dropbear ${DATA_DIR}/ssh/id_rsa /root/.ssh/id_dropbear
exit 0
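A sketch of the persistent files this script expects to find, following the variables above (key material and paths are placeholders):

```sh
# Hypothetical example: stage the key files read on boot.
# Use /mnt/data instead of /data on 1.x firmware.
mkdir -p /data/on_boot.d/settings/ssh /data/ssh
echo 'ssh-rsa AAAAB3Nza...replace-with-your-key user@laptop' \
  > /data/on_boot.d/settings/ssh/authorized_keys
# Optional: an OpenSSH private key placed at /data/ssh/id_rsa is converted
# to dropbear format by the script (see dropbearconvert above).
```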

View File

@ -1,13 +1,29 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
mkdir -p ${DATA_DIR}/.home
mkdir -p /mnt/data/.home
for file in .ash_history .bash_history
if [ ! -f /mnt/data/.home/$file ]; then
for file in .ash_history .bash_history; do
if [ ! -f ${DATA_DIR}/.home/$file ]; then
touch /root/$file
cp /root/$file /mnt/data/.home/$file
chown root:root /mnt/data/.home/$file
chmod 0600 /mnt/data/.home/$file
cp /root/$file ${DATA_DIR}/.home/$file
chown root:root ${DATA_DIR}/.home/$file
chmod 0600 ${DATA_DIR}/.home/$file
fi
ln -sf /mnt/data/.home/$file /root/$file
ln -sf ${DATA_DIR}/.home/$file /root/$file
done

View File

@ -1,9 +1,25 @@
#!/bin/sh
## Store crontab files in /mnt/data/cronjobs/ (you will need to create this folder).
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## Store crontab files in ${DATA_DIR}/cronjobs/ (you will need to create this folder).
## This script will re-add them on startup.
cp /mnt/data/cronjobs/* /etc/cron.d/
cp ${DATA_DIR}/cronjobs/* /etc/cron.d/
/etc/init.d/crond restart
exit 0
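A sketch of a cron file this script would pick up (the job is only an example; the UDM's crond uses the plain `<minute> <hour> <day> <month> <dow> <command>` format used elsewhere in this change):

```sh
# Hypothetical example: a nightly heartbeat logged to syslog at 03:30.
# Use /mnt/data instead of /data on 1.x firmware.
mkdir -p /data/cronjobs
cat > /data/cronjobs/nightly-heartbeat <<'EOF'
# min hour day month dow command
30 3 * * * logger -t cronjobs "nightly heartbeat"
EOF
```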

View File

@ -1,15 +1,31 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
## ----------------------------------------------------------------------
## Script to add/remove time-of-day restrictions on internet access for selected clients.
##
## Use DHCP reservations to encourage the selected clients to always obtain the same IP address.
##
## To install:
## * Copy this script into /mnt/data/on_boot.d/, using something like WinSCP or SSH + vi.
## * Grant Execute permission: chmod +x /mnt/data/on_boot.d/iptables_timerestrict.sh
## * Copy this script into ${DATA_DIR}/on_boot.d/, using something like WinSCP or SSH + vi.
## * Grant Execute permission: chmod +x ${DATA_DIR}/on_boot.d/iptables_timerestrict.sh
## * Run it once, to activate it (crontab entries will keep it active forever after):
## Via SSH into UDM shell: /mnt/data/on_boot.d/iptables_timerestrict.sh
## Via SSH into UDM shell: ${DATA_DIR}/on_boot.d/iptables_timerestrict.sh
##
## Notes:
## * Changes to firewall rules in the Unifi Network Application will remove your restriction;
@ -26,7 +42,6 @@
## have a blocked time in the middle of the day.
## ----------------------------------------------------------------------
## List all client addresses you'd like to restrict. Separate multiple with spaces.
timerestricted_addresses='192.168.1.101 192.168.1.102'
@ -36,8 +51,6 @@ wake_hour=06
## Hour of day to activate the restriction.
sleep_hour=23
## ----------------------------------------------------------------------
## ----------------------------------------------------------------------
## ----------------------------------------------------------------------
@ -64,17 +77,17 @@ done
myrule="FORWARD -i br0 -j TIMERESTRICT"
## install or remove rule based on current time and whether the rule already exists
if [ `date +%H` -ge $sleep_hour ]; then
if [ $(date +%H) -ge $sleep_hour ]; then
logger "TIMERESTRICT: Activating sleep time"
iptables -C $myrule 2>/dev/null || iptables -I $myrule
elif [ `date +%H` -ge $wake_hour ]; then
elif [ $(date +%H) -ge $wake_hour ]; then
logger "TIMERESTRICT: Activating awake time"
iptables -C $myrule 2>/dev/null && iptables -D $myrule
fi
## setup cron job to activate/deactivate on time of day
echo "00 $sleep_hour * * * `readlink -f $0`" > /etc/cron.d/iptables_timerestrict
echo "00 $wake_hour * * * `readlink -f $0`" >> /etc/cron.d/iptables_timerestrict
echo "00 $sleep_hour * * * $(readlink -f $0)" >/etc/cron.d/iptables_timerestrict
echo "00 $wake_hour * * * $(readlink -f $0)" >>/etc/cron.d/iptables_timerestrict
## Format: <minute> <hour> <day> <month> <dow> <tags and command>
/etc/init.d/crond restart

View File

@ -1,7 +1,7 @@
#!/usr/bin/env sh
# Get DataDir location
DATA_DIR="/mnt/data"
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
@ -31,7 +31,6 @@ CNI_BRIDGE_ON_BOOT_FILENAME="$(basename "$CNI_BRIDGE_SCRIPT_RAW_URL")"
GITHUB_API_URL="https://api.github.com/repos"
GITHUB_REPOSITORY="unifi-utilities/unifios-utilities"
# --- Functions ---
header() {
@ -133,7 +132,7 @@ After=network-online.target
[Service]
Type=forking
ExecStart=bash -c 'mkdir -p $DATA_DIR/on_boot.d && find -L $DATA_DIR/on_boot.d -mindepth 1 -maxdepth 1 -type f -print0 | sort -z | xargs -0 -r -n 1 -- bash -c \'if test -x "\$0"; then echo "%n: running \$0"; "\$0"; else case "\$0" in *.sh) echo "%n: sourcing \$0"; . "\$0";; *) echo "%n: ignoring \$0";; esac; fi\''
ExecStart=bash -c 'mkdir -p ${DATA_DIR}/on_boot.d && find -L ${DATA_DIR}/on_boot.d -mindepth 1 -maxdepth 1 -type f -print0 | sort -z | xargs -0 -r -n 1 -- bash -c \'if test -x "\$0"; then echo "%n: running \$0"; "\$0"; else case "\$0" in *.sh) echo "%n: sourcing \$0"; . "\$0";; *) echo "%n: ignoring \$0";; esac; fi\''
[Install]
WantedBy=multi-user.target
@ -165,7 +164,7 @@ header
depends_on ubnt-device-info
depends_on curl
ON_BOOT_D_PATH="$DATA_DIR/on_boot.d"
ON_BOOT_D_PATH="${DATA_DIR}/on_boot.d"
case "$(udm_model)" in
udmlegacy | udmprolegacy)
@ -201,7 +200,6 @@ case "$(udm_model)" in
esac
echo
if [ ! -f "${ON_BOOT_D_PATH}/${CNI_PLUGINS_ON_BOOT_FILENAME}" ]; then
echo "Downloading CNI plugins script..."
if
@ -220,7 +218,6 @@ echo "Executing CNI plugins script..."
"${ON_BOOT_D_PATH}/${CNI_PLUGINS_ON_BOOT_FILENAME}" || true
echo
if [ ! -f "${ON_BOOT_D_PATH}/${CNI_BRIDGE_ON_BOOT_FILENAME}" ]; then
echo "Downloading CNI bridge script..."
if

View File

@ -11,16 +11,16 @@ For example, [configuring two IP addresses on your WAN interface, so that you ca
## Installation
1. [Enable on-boot-script](https://github.com/unifi-utilities/unifios-utilities/blob/main/on-boot-script/README.md)
1. Copy `42-watch-for-changes.sh` to `/mnt/data/on_boot.d/`
* Check the `FILE` variable, it should point to a file that exists, it might be in `/data` or in `/mnt/data`
1. Copy `on-state-change.sh` to `/mnt/data/scripts/`
1. Edit `/mnt/data/scripts/on-state-change.sh` to your heart's content
1. Copy `42-watch-for-changes.sh` to `/data/on_boot.d/`
- Check the `FILE` variable, it should point to a file that exists; it might be in `/mnt/data` on older firmware or in `/data` on newer firmware
1. Copy `on-state-change.sh` to `/data/scripts/`
1. Edit `/data/scripts/on-state-change.sh` to your heart's content
> Make sure that your script doesn't error in the likely case that it tries to execute an update which has already been made
## Example: configuring two IP addresses on your WAN interface
`/mnt/data/scripts/on-state-change.sh`
`/data/scripts/on-state-change.sh`
```
#!/bin/sh

View File

@ -1,7 +1,34 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
EXECUTE01='/mnt/data/scripts/on-state-change.sh'
FILE="/data/udapi-config/ubios-udapi-server/ubios-udapi-server.state"
# Check if the directory exists
if [ ! -d "${DATA_DIR}/scripts" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/scripts"
echo "Directory '${DATA_DIR}/scripts' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/scripts' already exists. Moving on."
fi
EXECUTE01="${DATA_DIR}/scripts/on-state-change.sh"
FILE="${DATA_DIR}/udapi-config/ubios-udapi-server/ubios-udapi-server.state"
# /usr/bin/logger -t "${PROCESSNAME}" "$*"
# run on boot as well
@ -10,20 +37,20 @@ $EXECUTE01
if [ "$1" = "DAEMON" ]; then
# is this necessary? Add other signals at will (TTIN TTOU INT STOP TSTP)
trap '' INT
cd /tmp
cd /tmp || exit
shift
### daemonized section ######
# RUNNING=`ps aux | grep $CMD | grep -v grep | wc -l`
# echo $RUNNING
# if [ "$RUNNING" -lt 1 ]; then
LAST=`ls -l "$FILE"`
LAST=$(ls -l "$FILE")
# echo $LAST
while true; do
sleep 1
NEW=`ls -l "$FILE"`
NEW=$(ls -l "$FILE")
# echo $NEW
if [ "$NEW" != "$LAST" ]; then
DATE=`date`
DATE=$(date)
echo "${DATE}: Executing ${EXECUTE01}"
$EXECUTE01
LAST="$NEW"

View File

@ -1,3 +1,2 @@
# update and reinstall podman each day
#00 00 * * * root sh -c '/data/on_boot.d/00-podman.sh --download-only && /data/on_boot.d/00-podman.sh --force'

View File

@ -25,7 +25,6 @@ udm_model() {
esac
}
DESIRED_ZIPFILE='udmse-podman-install.zip'
case "$(udm_model)" in
udmse | udmpro)
@ -42,9 +41,8 @@ case "$(udm_model)" in
;;
esac
# Get DataDir location
DATA_DIR="/mnt/data"
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
@ -61,7 +59,6 @@ case "$(ubnt-device-info firmware || true)" in
;;
esac
CACHE_DIR="${DATA_DIR}/podman/cache"
INSTALL_ROOT="${DATA_DIR}/podman/install"
CONF_DIR="${DATA_DIR}/podman/conf"
@ -71,9 +68,9 @@ mkdir -p "${CACHE_DIR}" "${INSTALL_ROOT}" "${CONF_DIR}"
URL="https://unifi.boostchicken.io/${DESIRED_ZIPFILE}"
if [ "$1" = '--download-only' ]; then
echo "downloading ${URL}" \
&& curl -Lsfo "${CACHE_DIR}/${DESIRED_ZIPFILE}" "${URL}" \
&& echo "downloaded ${URL}"
echo "downloading ${URL}" &&
curl -Lsfo "${CACHE_DIR}/${DESIRED_ZIPFILE}" "${URL}" &&
echo "downloaded ${URL}"
exit $?
fi
@ -88,8 +85,8 @@ fi
if [ -f "${CACHE_DIR}/${DESIRED_ZIPFILE}" ]; then
echo "(using cache at ${CACHE_DIR}/${DESIRED_ZIPFILE})"
elif echo "downloading ${URL}" \
&& curl -Lsfo "${CACHE_DIR}/${DESIRED_ZIPFILE}" "${URL}"; then
elif echo "downloading ${URL}" &&
curl -Lsfo "${CACHE_DIR}/${DESIRED_ZIPFILE}" "${URL}"; then
echo "downloaded ${URL}"
else
echo 'download failed'
@ -119,4 +116,3 @@ fi
echo 'Something went wrong'
exit 1

View File

@ -1,13 +1,39 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
mkdir -p /mnt/data/.cache
# Check if the directory exists
if [ ! -d "${DATA_DIR}/scripts" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/scripts"
echo "Directory '${DATA_DIR}/scripts' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/scripts' already exists. Moving on."
fi
mkdir -p ${DATA_DIR}/.cache
PODMAN_VERSION=3.3.0
RUNC_VERSION=1.0.2
CONMON_VERSION=2.0.29
PODMAN_DL=/mnt/data/.cache/podman-$PODMAN_VERSION
RUNC_DL=/mnt/data/.cache/runc-$RUNC_VERSION
CONMON_DL=/mnt/data/.cache/conmon-$CONMON_VERSION
PODMAN_DL=${DATA_DIR}/.cache/podman-$PODMAN_VERSION
RUNC_DL=${DATA_DIR}/.cache/runc-$RUNC_VERSION
CONMON_DL=${DATA_DIR}/.cache/conmon-$CONMON_VERSION
SECCOMP=/usr/share/containers/seccomp.json
while [ ! -f $CONMON_DL ]; do
@ -46,4 +72,3 @@ sed -i 's/driver = ""/driver = "overlay"/' /etc/containers/storage.conf
sed -i 's/ostree_repo = ""/#ostree_repo = ""/' /etc/containers/storage.conf
# Comment out if you don't want to enable docker-compose or remote docker admin
/usr/bin/podman system service --time=0 tcp:0.0.0.0:2375 &

View File

@ -1,18 +1,45 @@
#!/bin/sh
CONTAINER=rclone
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/backups" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/backups"
echo "Directory '${DATA_DIR}/backups' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/backups' already exists. Moving on."
fi
if podman container exists "$CONTAINER"; then
podman start "$CONTAINER"
else
podman run -i -d --rm \
--net=host \
-v /mnt/data/rclone:/data/backups/rclone \
-v /mnt/data/pihole:/data/backups/pihole \
-v /mnt/data/on_boot.d:/data/backups/on_boot.d \
-v /data/unifi/data/backup/autobackup:/data/backups//data/unifi/data/backup/autobackup \
-v /mnt/data/podman/cni:/data/backups/podman/cni \
-v /mnt/data/rclone:/config/rclone \
-v /mnt/data/rclone/sync.sh:/data/sync.sh \
-v ${DATA_DIR}/rclone:${DATA_DIR}/backups/rclone \
-v ${DATA_DIR}/pihole:${DATA_DIR}/backups/pihole \
-v ${DATA_DIR}/on_boot.d:${DATA_DIR}/backups/on_boot.d \
-v ${DATA_DIR}/unifi/data/backup/autobackup:${DATA_DIR}/backup/unifi/autobackup \
-v ${DATA_DIR}/podman/cni:${DATA_DIR}/backups/podman/cni \
-v ${DATA_DIR}/rclone:/config/rclone \
-v ${DATA_DIR}/rclone/sync.sh:${DATA_DIR}/sync.sh \
--name "$CONTAINER" \
--security-opt=no-new-privileges \
rclone/rclone:latest \

View File

@ -13,13 +13,13 @@
1. Make a directory for your configuration
```sh
mkdir -p /mnt/data/rclone
mkdir -p /data/rclone
```
2. Create [rclone.conf](https://rclone.org/commands/rclone_config/) in `/mnt/data/rclone` and update it to meet yours needs.
3. Copy [sync.sh](sync.sh) in `/mnt/data/rclone` and update it to meet your needs.
4. Copy [10-rclone.sh](10-rclone.sh) to `/mnt/data/on_boot.d` and update it to meet your needs.
5. Execute `/mnt/data/on_boot.d/10-rclone.sh`
2. Create [rclone.conf](https://rclone.org/commands/rclone_config/) in `/data/rclone` and update it to meet your needs.
3. Copy [sync.sh](sync.sh) to `/data/rclone` and update it to meet your needs (a minimal sketch follows these steps).
4. Copy [10-rclone.sh](10-rclone.sh) to `/data/on_boot.d` and update it to meet your needs.
5. Execute `/data/on_boot.d/10-rclone.sh`
6. Execute `podman logs rclone`, this will provide a link to the Web GUI.
7. Copy [rclone](rclone) to `/etc/cron.hourly/`.
8. Set permissions to executable `chmod +x /etc/cron.hourly/rclone`.
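A minimal sketch of what `sync.sh` might contain (assuming current firmware where the data directory is `/data`, and a remote named `myremote` already configured in `rclone.conf`; adjust both to your setup):

```sh
#!/bin/sh
# Hypothetical example sync.sh: push the mounted backup folders to a remote.
# Paths follow the volume mounts in 10-rclone.sh; "myremote" is a placeholder.
rclone --config /config/rclone/rclone.conf sync /data/backups myremote:udm-backups
```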

View File

@ -1,14 +1,13 @@
#!/bin/bash
ARCH=$(uname -m)
if [ $ARCH == "x86_64" ]
then
if [ "$ARCH" == "x86_64" ]; then
ARCH="amd64"
elif [ $ARCH == "aarch64" ]; then
elif [ "$ARCH" == "aarch64" ]; then
ARCH="arm64"
fi
curl -fsSLo "/opt/cloudflared" https://github.com/cloudflare/cloudflared/releases/download/2021.5.9/cloudflared-linux-$ARCH
curl -fsSLo "/opt/cloudflared" https://github.com/cloudflare/cloudflared/releases/download/2021.5.9/cloudflared-linux-"$ARCH"
chmod +x /opt/cloudflared
/opt/cloudflared update
/opt/cloudflared proxy-dns $CLOUDFLARED_OPTS &
/opt/cloudflared proxy-dns "$CLOUDFLARED_OPTS" &

View File

@ -13,7 +13,7 @@
Note: IP and VLAN settings for your pihole network, 20-dns.conflist and 10-dns.sh MUST all match each other.
* Example settings for pihole network:
- Example settings for pihole network:
Network Name: Pihole
Host address: 10.0.5.1
Netmask: 24
@ -23,8 +23,7 @@ Note: IP and VLAN settings for you pihole network, 20-dns-conflist and 10-dns.sh
DHCP: None
Ipv6 Interface Type: None
* YOU WILL NEED TO CHANGE [`20-dns.conflist`](../cni-plugins/20-dns.conflist)
- YOU WILL NEED TO CHANGE [`20-dns.conflist`](../cni-plugins/20-dns.conflist)
Change the line:
"mac": "add 3 fake hex portions, replacing x's here 00:1c:b4:xx:xx:xx",
to create a legitimate MAC address that matches some vendor space (first 6 digits). It needs to be unique on your network.
@ -34,26 +33,22 @@ Note: IP and VLAN settings for you pihole network, 20-dns-conflist and 10-dns.sh
Change these lines to match your settings:
"address": "10.0.5.3/24",
"gateway": "10.0.5.1"
If you are using a different VLAN than the example:
Change this line to match your VLAN number:
"master": "br5",
* You MAY need to change[`10-dns.sh`](../dns-common/on_boot.d/10-dns.sh).
- You MAY need to change [`10-dns.sh`](../dns-common/on_boot.d/10-dns.sh).
If you are using a different IP address than the example:
Change these lines to match your settings:
IPV4_IP="10.0.5.3"
IPV4_GW="10.0.5.1/24"
If you are using a different VLAN than the example:
Change this line to match your VLAN number:
VLAN=5
If you want the pihole container to have a different name than the example:
Change this line to match the different name:
CONTAINER=pihole
* If you want IPv6 support
- If you want IPv6 support
Use 20-dnsipv6.conflist and update 10-dns.sh with the IPv6 addresses.
Also, please provide IPv6 servers to podman using --dns arguments.
@ -61,14 +56,14 @@ Note: IP and VLAN settings for you pihole network, 20-dns-conflist and 10-dns.sh
### Configuration files and scripts
1.0 Copy [`05-install-cni-plugins.sh`](../cni-plugins/05-install-cni-plugins.sh) to `/mnt/data/on_boot.d`
1.1 Execute `chmod +x /mnt/data/on_boot.d/05-install-cni-plugins.sh`
1.2 Execute `/mnt/data/on_boot.d/05-install-cni-plugins.sh`
1.0 Copy [`05-install-cni-plugins.sh`](../cni-plugins/05-install-cni-plugins.sh) to `/data/on_boot.d`
1.1 Execute `chmod +x /data/on_boot.d/05-install-cni-plugins.sh`
1.2 Execute `/data/on_boot.d/05-install-cni-plugins.sh`
2.0 On your controller, create a network with no DHCP server and give it a VLAN (see example settings above).
2.1 Copy YOUR modified [`20-dns.conflist`] to `/mnt/data/podman/cni`
2.2 Execute `chmod +x /mnt/data/podman/cni/20-dns.conflist`
2.3 Execute `cp /mnt/data/podman/cni/20-dns.conflist /etc/cni/net.d/dns.conflist`
2.1 Copy YOUR modified [`20-dns.conflist`] to `/data/podman/cni`
2.2 Execute `chmod +x /data/podman/cni/20-dns.conflist`
2.3 Execute `cp /data/podman/cni/20-dns.conflist /etc/cni/net.d/dns.conflist`
To check progress - run the command:
```shell
@ -76,17 +71,17 @@ Note: IP and VLAN settings for you pihole network, 20-dns-conflist and 10-dns.sh
```
You should see a copy of your 20-dns.conflist displayed.
3.0 Copy your [`10-dns.sh`] to `/mnt/data/on_boot.d`
3.1 Execute `chmod +x /mnt/data/on_boot.d/10-dns.sh`
3.2 Execute `/mnt/data/on_boot.d/10-dns.sh`
3.0 Copy your [`10-dns.sh`] to `/data/on_boot.d`
3.1 Execute `chmod +x /data/on_boot.d/10-dns.sh`
3.2 Execute `/data/on_boot.d/10-dns.sh`
### Create directories for persistent Pi-hole configuration
4.0 Execute the following commands:
```sh
mkdir -p /mnt/data/etc-pihole
mkdir -p /mnt/data/pihole/etc-dnsmasq.d
mkdir -p /data/etc-pihole
mkdir -p /data/pihole/etc-dnsmasq.d
```
### Create the pihole container
@ -113,8 +108,8 @@ If you want to run a DHCP server as well you need to add the following lines:
--name pihole \
-e TZ="America/Los_Angeles" \
--cap-add=NET_ADMIN \
-v "/mnt/data/etc-pihole/:/etc/pihole/" \
-v "/mnt/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
-v "/data/etc-pihole/:/etc/pihole/" \
-v "/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
--dns=127.0.0.1 \
--dns=1.1.1.1 \
--dns=8.8.8.8 \
@ -138,6 +133,7 @@ If you want to run a DHCP server as well you need to add the following lines:
```sh
podman exec -it pihole pihole -a -p YOURNEWPASSHERE
```
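Before pointing clients at the container in the next step, it can help to confirm it is answering queries. A quick check from the UDM shell (assuming the example address 10.0.5.3 and that BusyBox `nslookup` is available):

```sh
# Hypothetical quick check: ask the pihole container to resolve a name.
nslookup example.com 10.0.5.3
# Tail the container logs if the lookup fails.
podman logs --tail 20 pihole
```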
## Set the new DNS in your UDM
7.0 Update your DNS Servers to `10.0.5.3` (or your custom ip) for each of your Networks (UDM GUI | Networks | Advanced | DHCP Name Server)
@ -146,8 +142,8 @@ If you want to run a DHCP server as well you need to add the following lines:
## Upgrading your PiHole container
1. Edit `upd_pihole.sh` script to use the same `podman run` command you used at installation.
2. Copy the `upd_pihole.sh` script to /mnt/data/scripts
3. Anytime you want to update your pihole installation, simply run `/mnt/data/scripts/upd_pihole.sh`
2. Copy the `upd_pihole.sh` script to /data/scripts
3. Anytime you want to update your pihole installation, simply run `/data/scripts/upd_pihole.sh`
## Optional Builds
@ -164,8 +160,8 @@ DNS-over-TLS provider.
--restart always \
--name pihole \
-e TZ="America/Los_Angeles" \
-v "/mnt/data/etc-pihole/:/etc/pihole/" \
-v "/mnt/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
-v "/data/etc-pihole/:/etc/pihole/" \
-v "/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
--dns=127.0.0.1 \
--dns=1.1.1.1 \
--hostname pi.hole \
@ -179,7 +175,7 @@ DNS-over-TLS provider.
### PiHole with DoTe
Simply copy the `custom_pihole_dote.sh` script to `/mnt/data/scripts` and run it
Simply copy the `custom_pihole_dote.sh` script to `/data/scripts` and run it
to forward all DNS traffic over TLS to Cloudflare 1.1.1.1 / 1.0.0.1. You can modify the
script to forward to different services with ease, and full configuration
options, including certificate pinning, are available in the DoTe README here:
@ -190,8 +186,8 @@ https://github.com/chrisstaite/DoTe/
--restart always \
--name pihole \
-e TZ="America/Los_Angeles" \
-v "/mnt/data/etc-pihole/:/etc/pihole/" \
-v "/mnt/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
-v "/data/etc-pihole/:/etc/pihole/" \
-v "/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
--dns=127.0.0.1 \
--dns=1.1.1.1 \
--hostname pi.hole \

View File

@ -1,5 +1,32 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/pihole" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/pihole"
mkdir -p "${DATA_DIR}/pihole/etc"
echo "Directory '${DATA_DIR}/pihole' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/pihole' already exists. Moving on."
fi
set -e
tmpdir="$(mktemp -d)"
@ -23,8 +50,8 @@ podman rm pihole
podman run -d --network dns --restart always \
--name pihole \
-e TZ="America/Chicago" \
-v "/mnt/data/etc-pihole/:/etc/pihole/" \
-v "/mnt/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
-v "${DATA_DIR}/etc-pihole/:/etc/pihole/" \
-v "${DATA_DIR}/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
--dns=127.0.0.1 \
--hostname pi.hole \
-e DOTE_OPTS="-s 127.0.0.1:5053 -f 1.1.1.1 -f 1.0.0.1 -m 10" \

View File

@ -1,16 +1,42 @@
# Change to boostchicken/pihole:latest for DoH
# Change to boostchicken/pihole-dote:latest for DoTE
IMAGE=pihole/pihole:latest
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/pihole" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/pihole"
mkdir -p "${DATA_DIR}/pihole/etc"
echo "Directory '${DATA_DIR}/pihole' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/pihole' already exists. Moving on."
fi
podman pull $IMAGE
podman stop pihole
podman rm pihole
podman run -d --network dns --restart always \
--name pihole \
-e TZ="America/Chicago" \
-v "/mnt/data/etc-pihole/:/etc/pihole/" \
-v "/mnt/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
-v "${DATA_DIR}/pihole/etc:/etc/pihole/" \
-v "${DATA_DIR}/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
--dns=127.0.0.1 \
--dns=1.1.1.1 \
--dns=1.0.0.1 \

View File

@ -1,5 +1,7 @@
# Run Suricata 5.0.3 with custom rules
## UBNT updated Suricata in the 1.9.x firmware, making this unneeded
## Features
1. Run a newer suricata with custom rules
@ -11,9 +13,9 @@
## Customization
* Put customs rules files in /mnt/data/suricata-rules
- Put custom rules files in /data/suricata-rules (an example rule is sketched below)
## Steps
1. Copy [25-suricata.sh](on_boot.d/25-suricata.sh) to /mnt/data/on_boot.d and update its values to reflect your environment
2. Execute /mnt/data/on_boot.d/25-suricata.sh
1. Copy [25-suricata.sh](on_boot.d/25-suricata.sh) to /data/on_boot.d and update its values to reflect your environment
2. Execute /data/on_boot.d/25-suricata.sh
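As an example of what a custom rules file might contain (the rule itself is only an illustration; any `*.rules` file in the directory is picked up by the boot script):

```sh
# Hypothetical example: add a simple alert rule to the custom rules directory.
mkdir -p /data/suricata-rules
cat > /data/suricata-rules/local.rules <<'EOF'
alert icmp any any -> $HOME_NET any (msg:"LOCAL ICMP to home net"; sid:1000001; rev:1;)
EOF
```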

View File

@ -1,10 +1,36 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/suricata-rules" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/suricata-rules"
echo "Directory '${DATA_DIR}/suricata-rules' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/suricata-rules' already exists. Moving on."
fi
APP_PID="/run/suricata.pid"
cat <<"EOF" >/tmp/suricata.sh
#!/bin/sh
CUSTOM_RULES="/mnt/data/suricata-rules"
CUSTOM_RULES="${DATA_DIR}/suricata-rules"
for file in $(find ${CUSTOM_RULES} -name '*.rules' -print)
do

View File

@ -1,23 +1,29 @@
# Tailscale
Run Tailscale in a container on your Unifi Dream Machine.
In combination with the DNS modules, setting up a Tailscale exit node on the UDM Pro can be quite powerful.
Additionally, the UDM is well positioned to add a tailscale subnet router to permit remote access to the managed network.
## Prerequisites
Follow the instructions and set up the scripts in these directories (in order) before continuing further:
1. `on-boot-script`
2. `container-common`
3. `cni-plugins`
4. (optional, but recommended if you want to set up an exit node and benefit from ad-blocking) `dns-common` followed by your favorite DNS server such as `run-pihole` or `AdguardHome`
## Installation
1. Copy `on_boot.d/20-tailscale.sh` to `/mnt/data/on_boot.d/20-tailscale.sh`.
2. Make sure the boot script is executable with `chmod +x /mnt/data/on_boot.d/20-tailscale.sh`.
1. Copy `on_boot.d/20-tailscale.sh` to `/data/on_boot.d/20-tailscale.sh`.
2. Make sure the boot script is executable with `chmod +x /data/on_boot.d/20-tailscale.sh`.
## Tailscale Configuration
After installing the boot script, you will want to set up the included shell alias and check network connectivity before continuing.
1. Run `/mnt/data/on_boot.d/20-tailscale.sh alias` to print a helpful shell alias to the terminal, inside a shell comment.
1. Run `/data/on_boot.d/20-tailscale.sh alias` to print a helpful shell alias to the terminal, inside a shell comment.
2. Add the alias to your running session, after which you can run `tailscale status` or `tailscale netcheck` from the host shell to make sure the running tailscale agent is healthy and has a good network connection.
3. `/mnt/data/on_boot.d/20-tailscale.sh status` will also perform status checks, if the alias setup isn't working for some reason.
3. `/data/on_boot.d/20-tailscale.sh status` will also perform status checks, if the alias setup isn't working for some reason.
How to proceed from here is largely up to you. It is possible to authenticate by simply running `tailscale up` (if you installed the shell alias) and doing most of the rest of the configuration in the admin console. You will likely want to provide additional options to `tailscale up` to use an auth key, advertise tags or subnet routes, or other configuration.
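A sketch of such a `tailscale up` invocation (the auth key and subnet are placeholders; pick the flags that match your setup):

```sh
# Hypothetical example, using the shell alias printed by the boot script.
tailscale up --authkey tskey-REPLACE-ME \
  --advertise-exit-node \
  --advertise-routes=192.168.1.0/24
```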

View File

@ -14,12 +14,12 @@ In the current examples, the DNS resolver (e.g., pi-hole) is listening on `10.0.
Follow the steps in [run-pihole](../run-pihole) to create a separate IP address, by copying the files in the sub-directories to UDM/P.
Adjust the `11-unbound-macvlanip` and `.conflist` files, run [init_unbound.sh](./scripts/init_unbound.sh), *or* execute the commands below manually.
Adjust the `11-unbound-macvlanip` and `.conflist` files, run [init_unbound.sh](./scripts/init_unbound.sh), _or_ execute the commands below manually.
* Link the boot script [11-unbound-macvlanip.sh](./on_boot.d/11-unbound-macvlanip.sh) -> `ln -s /mnt/data/unbound/on_boot.d/11-unbound-macvlanip.sh /mnt/data/on_boot.d/11-unbound-macvlanip.sh`
* Link the IPv4 only configuration: [21-unbound.conflist](./cni_plugins/21-unbound.conflist) -> `ln -s /mnt/data/unbound/cni_plugins/21-unbound.conflist /etc/cni/net.d/21-unbound.conflist` *or*
* Link the IPv4 and IPv6 configuration: [21-unboundipv6.conflist](./cni_plugins/21-unboundipv6.conflist) -> `ln -s /mnt/data/unbound/cni_plugins/21-unboundipv6.conflist /etc/cni/net.d/21-unbound.conflist`
* Create the network
- Link the boot script [11-unbound-macvlanip.sh](./on_boot.d/11-unbound-macvlanip.sh) -> `ln -s /data/unbound/on_boot.d/11-unbound-macvlanip.sh /data/on_boot.d/11-unbound-macvlanip.sh`
- Link the IPv4 only configuration: [21-unbound.conflist](./cni_plugins/21-unbound.conflist) -> `ln -s /data/unbound/cni_plugins/21-unbound.conflist /etc/cni/net.d/21-unbound.conflist` _or_
- Link the IPv4 and IPv6 configuration: [21-unboundipv6.conflist](./cni_plugins/21-unboundipv6.conflist) -> `ln -s /data/unbound/cni_plugins/21-unboundipv6.conflist /etc/cni/net.d/21-unbound.conflist`
- Create the network
```bash
podman network create unbound
@ -42,9 +42,9 @@ Two things are left to do: set the upstream server and de-activate caching in Pi
To use `unbound` as the upstream server for Pi-hole, change the following settings in Pi-hole's admin interface:
* Settings -> DNS -> Upstream DNS Servers
* Custom 1 (IPv4): 10.0.5.3 (or the IPv4 address you chose)
* Custom 2 (IPv6): fdca:5c13:1fb8::3 (or the IPv6 address you chose)
- Settings -> DNS -> Upstream DNS Servers
- Custom 1 (IPv4): 10.0.5.3 (or the IPv4 address you chose)
- Custom 2 (IPv6): fdca:5c13:1fb8::3 (or the IPv6 address you chose)
Both Pi-hole and `unbound` cache their requests. To make your upstream DNS change permanent and to de-activate caching in Pi-hole, modify your `podman run` command **for pi-hole** in this way:
@ -52,8 +52,8 @@ Both Pi-hole as well as `unbound` are caching their requests. To make the change
podman run -d --network dns --restart always \
--name pihole \
-e TZ="America/Los Angeles" \
-v "/mnt/data/pihole/etc-pihole/:/etc/pihole/" \
-v "/mnt/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
-v "/data/pihole/etc-pihole/:/etc/pihole/" \
-v "/data/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/" \
--dns=127.0.0.1 \
--dns=10.0.5.3 \
--hostname pi.hole \

View File

@ -1,4 +1,32 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/unbound" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/unbound"
mkdir -p "${DATA_DIR}/unbound.conf.d"
echo "Directory '${DATA_DIR}/unbound' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/unbound' already exists. Moving on."
fi
## configuration variables:
VLAN=5
@ -32,7 +60,7 @@ CONTAINER=unbound
if ! test -f /opt/cni/bin/macvlan; then
echo "Error: CNI plugins not found. You can install it with the following command:" >&2
echo " curl -fsSLo /mnt/data/on_boot.d/05-install-cni-plugins.sh https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/cni-plugins/05-install-cni-plugins.sh && /bin/sh /mnt/data/on_boot.d/05-install-cni-plugins.sh" >&2
echo " curl -fsSLo ${DATA_DIR}/on_boot.d/05-install-cni-plugins.sh https://raw.githubusercontent.com/unifi-utilities/unifios-utilities/main/cni-plugins/05-install-cni-plugins.sh && /bin/sh ${DATA_DIR}/on_boot.d/05-install-cni-plugins.sh" >&2
exit 1
fi

View File

@ -1,20 +1,37 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# init unbound container - quick and dirty for now
# no checks, no balances
echo "Creating links..."
# link the script to create an IP on the macvlan for unbound
ln -s /mnt/data/unbound/on_boot.d/11-unbound-macvlanip.sh /mnt/data/on_boot.d/11-unbound-macvlanip.sh
ln -s ${DATA_DIR}/unbound/on_boot.d/11-unbound-macvlanip.sh ${DATA_DIR}/on_boot.d/11-unbound-macvlanip.sh
# configure either IPv4 only or IPv4 and IPv6 by uncommenting the proper line
#
# link the IPv4 configuration for CNI
# ln -s /mnt/data/unbound/cni_plugins/21-unbound.conflist /etc/cni/net.d/21-unbound.conflist
# ln -s ${DATA_DIR}/unbound/cni_plugins/21-unbound.conflist /etc/cni/net.d/21-unbound.conflist
# link the IPv4 and IPv6 configuration for CNI
ln -s /mnt/data/unbound/cni_plugins/21-unboundipv6.conflist /etc/cni/net.d/21-unbound.conflist
ln -s ${DATA_DIR}/unbound/cni_plugins/21-unboundipv6.conflist /etc/cni/net.d/21-unbound.conflist
# create the podman network unbound
echo "Creating podman network..."
@ -22,4 +39,4 @@ podman network create unbound
# create the container IP
echo "Creating container IP..."
sh /mnt/data/on_boot.d/11-unbound-macvlanip.sh
sh ${DATA_DIR}/on_boot.d/11-unbound-macvlanip.sh

View File

@ -1,4 +1,33 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/unbound" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/unbound"
mkdir -p "${DATA_DIR}/unbound.conf.d"
echo "Directory '${DATA_DIR}/unbound' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/unbound' already exists. Moving on."
fi
CONTAINER=unbound
IMAGE=klutchell/unbound:latest
@ -9,10 +38,9 @@ podman stop $CONTAINER
echo "Removing container..."
podman rm $CONTAINER
echo "Updating root hints..."
mkdir -p /mnt/data/unbound/unbound.conf.d/
curl -m 30 -o /mnt/data/unbound/unbound.conf.d/root.hints https://www.internic.net/domain/named.root
curl -m 30 -o ${DATA_DIR}/unbound/unbound.conf.d/root.hints https://www.internic.net/domain/named.root
echo "Running $CONTAINER container"
podman run -d --net unbound --restart always \
--name $CONTAINER \
-v "/mnt/data/unbound/unbound.conf.d/:/opt/unbound/etc/unbound/ " \
-v "${DATA_DIR}/unbound/unbound.conf.d/:/opt/unbound/etc/unbound/" \
$IMAGE

View File

@ -13,28 +13,28 @@
## Customization
* Update [wg0.conf](configs/wg0.conf) to match your environment
* You can use a custom interface name by changing wg0.conf to whatever you like
* Use PostUp and PostDown in your wg.conf to execute any commands after the interface is created or destroyed
- Update [wg0.conf](configs/wg0.conf) to match your environment
- You can use a custom interface name by changing wg0.conf to whatever you like
- Use PostUp and PostDown in your wg.conf to execute any commands after the interface is created or destroyed
## Steps
1. Make a directory for your keys and configuration:
```sh
mkdir -p /mnt/data/wireguard
mkdir -p /data/wireguard
```
2. Create your public and private keys:
```sh
podman run -i --rm --net=host --name wireguard_conf masipcat/wireguard-go wg genkey > /mnt/data/wireguard/privatekey
podman run -i --rm --net=host --name wireguard_conf masipcat/wireguard-go wg pubkey < /mnt/data/wireguard/privatekey > /mnt/data/wireguard/publickey
podman run -i --rm --net=host --name wireguard_conf masipcat/wireguard-go wg genkey > /data/wireguard/privatekey
podman run -i --rm --net=host --name wireguard_conf masipcat/wireguard-go wg pubkey < /data/wireguard/privatekey > /data/wireguard/publickey
```
3. Create a [Wireguard configuration](configs/wg0.conf) in /mnt/data/wireguard
4. Copy [20-wireguard.sh](on_boot.d/20-wireguard.sh) to /mnt/data/on_boot.d and update its values to reflect your environment
5. Execute /mnt/data/on_boot.d/[20-wireguard.sh](on_boot.d/20-wireguard.sh)
3. Create a [Wireguard configuration](configs/wg0.conf) in /data/wireguard (a minimal sketch follows these steps)
4. Copy [20-wireguard.sh](on_boot.d/20-wireguard.sh) to /data/on_boot.d and update its values to reflect your environment
5. Execute /data/on_boot.d/[20-wireguard.sh](on_boot.d/20-wireguard.sh)
6. If you are running a server, make the appropriate firewall rules / port forwards
7. Execute the wg command in the container to verify the tunnel is up. It should look something like this:
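For step 3, a minimal `wg0.conf` sketch (keys, addresses and port are placeholders; substitute the keys generated in step 2 and your own peer details):

```sh
# Hypothetical example: write a minimal config to /data/wireguard/wg0.conf.
cat > /data/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <contents of /data/wireguard/privatekey>
Address = 10.6.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <peer public key>
AllowedIPs = 10.6.0.2/32
EOF
```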

View File

@ -1,14 +1,42 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/wireguard" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/wireguard"
echo "Directory '${DATA_DIR}/wireguard' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/wireguard' already exists. Moving on."
fi
CONTAINER=wireguard
# Starts a wireguard container that is deleted after it is stopped.
# All configs stored in /mnt/data/wireguard
# All configs stored in ${DATA_DIR}/wireguard
if podman container exists ${CONTAINER}; then
podman start ${CONTAINER}
else
podman run -i -d --rm --net=host --name ${CONTAINER} --privileged \
-v /mnt/data/wireguard:/etc/wireguard \
-v ${DATA_DIR}/wireguard:/etc/wireguard \
-v /dev/net/tun:/dev/net/tun \
-e LOG_LEVEL=info -e WG_COLOR_MODE=always \
masipcat/wireguard-go:0.0.20210424
fi

View File

@ -1,10 +1,37 @@
#!/bin/sh
# Get DataDir location
DATA_DIR="/data"
case "$(ubnt-device-info firmware || true)" in
1*)
DATA_DIR="/mnt/data"
;;
2*)
DATA_DIR="/data"
;;
3*)
DATA_DIR="/data"
;;
*)
echo "ERROR: No persistent storage found." 1>&2
exit 1
;;
esac
# Check if the directory exists
if [ ! -d "${DATA_DIR}/zerotier-one" ]; then
# If it does not exist, create the directory
mkdir -p "${DATA_DIR}/zerotier-one"
echo "Directory '${DATA_DIR}/zerotier-one' created."
else
# If it already exists, print a message
echo "Directory '${DATA_DIR}/zerotier-one' already exists. Moving on."
fi
CONTAINER=zerotier-one
# Starts a ZeroTier container that is deleted after it is stopped.
# All configs stored in /mnt/data/zerotier-one
# All configs stored in ${DATA_DIR}/zerotier-one
if podman container exists ${CONTAINER}; then
podman start ${CONTAINER}
else
podman run --device=/dev/net/tun --net=host --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=CAP_SYS_RAWIO -v /mnt/data/zerotier-one:/var/lib/zerotier-one --name zerotier-one -d bltavares/zerotier
podman run --device=/dev/net/tun --net=host --cap-add=NET_ADMIN --cap-add=SYS_ADMIN --cap-add=CAP_SYS_RAWIO -v ${DATA_DIR}/zerotier-one:/var/lib/zerotier-one --name zerotier-one -d bltavares/zerotier
fi

View File

@ -1,12 +1,14 @@
# Run ZeroTier VPN on your UDM
## Install
1. Copy 20-zerotier.sh to /mnt/data/on_boot.d
1. Copy 20-zerotier.sh to /data/on_boot.d
2. Create directories for persistent Zerotier configuration
```
mkdir -p /mnt/data/zerotier-one
mkdir -p /data/zerotier-one
```
3. Start the zerotier container
```
podman run -d \
@ -16,7 +18,7 @@
--cap-add=NET_ADMIN \
--cap-add=SYS_ADMIN \
--cap-add=CAP_SYS_RAWIO \
-v /mnt/data/zerotier-one:/var/lib/zerotier-one \
-v /data/zerotier-one:/var/lib/zerotier-one \
bltavares/zerotier
```
4. Join your zerotier network
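Joining typically looks something like this (the network ID is a placeholder; use the one from your ZeroTier admin console):

```sh
# Hypothetical example: join a network and list networks from inside the container.
podman exec zerotier-one zerotier-cli join 1234567890abcdef
podman exec zerotier-one zerotier-cli listnetworks
```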