Compare commits

..

76 commits

Author SHA1 Message Date
Conrad Kramer
97c569fb35 Align GTK app with Apple home surface
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m50s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
Add the GTK home screen, local account store, daemon gRPC wrapper, and embedded Linux daemon startup path so the Linux app follows the Apple client UX and daemon boundary.

Document the GTK parity expectations and update the daemon IPC and Tailnet BEPs with the cross-platform client model.
2026-05-03 17:36:55 -07:00
Conrad Kramer
9244a0476a Fix Zulip SAML provisioning 2026-04-19 14:48:16 -07:00
Conrad Kramer
7540110713 Wait for Zulip supervisor before nginx patching 2026-04-19 14:09:26 -07:00
Conrad Kramer
836ccc93cd Patch Zulip uwsgi scheme at runtime 2026-04-19 14:04:42 -07:00
Conrad Kramer
6cd0f3b1ae Fix Zulip SAML callback scheme handling 2026-04-19 13:59:01 -07:00
Conrad Kramer
eb9327a99f Map Burrow admins to Zulip owners
Some checks failed
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
Build Rust / Cargo Test (push) Successful in 4m2s
2026-04-19 03:43:57 -07:00
Conrad Kramer
5598fc18fc Enable Zulip SAML auto signup 2026-04-19 03:37:42 -07:00
Conrad Kramer
78d83c5079 Pin Zulip SAML ACS to https
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m55s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-19 01:49:25 -07:00
Conrad Kramer
4c3dcdd17b Force https-only Zulip SAML login 2026-04-19 01:43:43 -07:00
Conrad Kramer
2af7618f52 Fix tailscale landing and zulip bootstrap
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m55s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-19 01:31:45 -07:00
Conrad Kramer
142c2ef778 Allow postgres bootstrap to read generated SQL 2026-04-19 01:22:32 -07:00
Conrad Kramer
2ef804fa10 Use runuser for Zulip Postgres bootstrap 2026-04-19 01:20:55 -07:00
Conrad Kramer
601bedcc59 Fix Zulip Postgres bootstrap runtime 2026-04-19 01:19:01 -07:00
Conrad Kramer
42df7b5618 Run Zulip on host-managed services 2026-04-19 01:11:37 -07:00
Conrad Kramer
fa2806e4b3 Bootstrap Zulip from the live app container 2026-04-19 00:59:34 -07:00
Conrad Kramer
b70b62dfef Fix Zulip bootstrap user handling 2026-04-19 00:56:35 -07:00
Conrad Kramer
824bbd9d67 Run Zulip bootstrap non-interactively 2026-04-19 00:55:07 -07:00
Conrad Kramer
b8cad4c028 Grant Tailnet access and harden Zulip bootstrap 2026-04-19 00:52:16 -07:00
Conrad Kramer
801e0fb419 Declare Zulip compose secrets 2026-04-19 00:30:08 -07:00
Conrad Kramer
bd13ff3ee9 Bind Zulip memcached and RabbitMQ config files 2026-04-19 00:25:16 -07:00
Conrad Kramer
8ac1a5c70e Use unified tailnet launcher and fix Zulip RabbitMQ 2026-04-19 00:22:13 -07:00
Conrad Kramer
7567ab194b Fix Tailscale default app and Zulip metadata fetch 2026-04-19 00:16:51 -07:00
Conrad Kramer
44f437c33c Expose Tailscale and add Zulip SAML deployment 2026-04-19 00:13:10 -07:00
Conrad Kramer
7d3e7a6ec5 Make Linear SCIM object sync best-effort
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m51s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-18 19:34:26 -07:00
Conrad Kramer
7421834ebc Relax Linear Authentik sync verification 2026-04-18 19:32:29 -07:00
Conrad Kramer
6dea4e4557 Fix Authentik Linear application patch paths 2026-04-18 19:30:06 -07:00
Conrad Kramer
4c12dafa6d Fix Linear SAML verification and reseal SCIM token 2026-04-18 19:26:55 -07:00
Conrad Kramer
ebcfc4bf8d Add Linear SCIM role sync 2026-04-18 19:23:53 -07:00
Conrad Kramer
4d3257995b Add Authentik SSO apps for Linear and 1Password 2026-04-18 19:10:18 -07:00
Conrad Kramer
5a4fe58b86 Add Jett forge access and rekey secrets
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m47s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-18 17:47:17 -07:00
Conrad Kramer
4f88f0b1e0 Align Burrow operator access on forge
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m48s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-18 17:09:20 -07:00
Conrad Kramer
abd5a35970 Make Jett a Burrow admin
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m47s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-18 02:42:01 -07:00
Conrad Kramer
c58d06dfc1 Move Burrow Google account aliases into agenix 2026-04-18 02:18:22 -07:00
Conrad Kramer
bc85e256f2 Stabilize Forgejo site build
Some checks failed
Build Rust / Cargo Test (push) Successful in 3m46s
Build Site / Next.js Build (push) Failing after 2s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-09 20:59:31 -07:00
Conrad Kramer
aa577c5616 Inline Forgejo workflow checkout
Some checks failed
Build Rust / Cargo Test (push) Successful in 4m45s
Build Site / Next.js Build (push) Failing after 4s
Lint Governance / BEP Metadata (push) Successful in 0s
2026-04-06 04:22:34 -07:00
Conrad Kramer
fbe8643914 Restart Forgejo runner when registration changes
Some checks failed
Build Rust / Cargo Test (push) Failing after 0s
Build Site / Next.js Build (push) Failing after 0s
Lint Governance / BEP Metadata (push) Failing after 0s
2026-04-06 01:15:46 -07:00
Conrad Kramer
5e58aafb07 Align Forgejo runner labels with workflows
Some checks failed
Build Rust / Cargo Test (push) Failing after 4s
Build Site / Next.js Build (push) Failing after 0s
Lint Governance / BEP Metadata (push) Failing after 0s
2026-04-06 01:12:47 -07:00
Conrad Kramer
e2a2c73922 Install nsc on burrow forge host
Some checks are pending
Build Rust / Cargo Test (push) Waiting to run
Build Site / Next.js Build (push) Waiting to run
Lint Governance / BEP Metadata (push) Waiting to run
2026-04-06 01:08:24 -07:00
Conrad Kramer
70607e874c Move forgejo-nsc credentials into agenix
Some checks are pending
Build Rust / Cargo Test (push) Waiting to run
Build Site / Next.js Build (push) Waiting to run
Lint Governance / BEP Metadata (push) Waiting to run
2026-04-05 23:08:23 -07:00
Conrad Kramer
e40a947223 Add forge-owned Namespace auth portal 2026-04-05 20:52:52 -07:00
Conrad Kramer
64103abbea Refocus Tailnet flow on Tailscale 2026-04-05 02:10:49 -07:00
Conrad Kramer
3ebb0a8e61 Fix tailnet auth flow provider lookup
Some checks are pending
Build Rust / Cargo Test (push) Waiting to run
Build Site / Next.js Build (push) Waiting to run
Lint Governance / BEP Metadata (push) Waiting to run
2026-04-05 01:36:52 -07:00
Conrad Kramer
8de798469b Bind tailnet auth flow to tailscale 2026-04-05 01:34:32 -07:00
Conrad Kramer
c8aa036ade Add Tailscale Authentik OIDC app 2026-04-04 23:53:33 -07:00
Conrad Kramer
b15b6624cb Add Forgejo namespace release workflow 2026-04-04 22:21:03 -07:00
Conrad Kramer
9e3e8fa783 Use upstream nsc-autoscaler on burrow forge 2026-04-04 22:20:55 -07:00
Conrad Kramer
3d80e772c8 Add tailnet connectivity smoke path 2026-04-03 17:49:11 -07:00
Conrad Kramer
5079786515 Allow local UI test secret decryption 2026-04-03 03:08:06 -07:00
Conrad Kramer
75bcfaf655 Add Tailnet UI auth test flow 2026-04-03 03:03:17 -07:00
Conrad Kramer
0c660acd1e Add daemon-owned Tailnet login flow 2026-04-03 02:09:58 -07:00
Conrad Kramer
d1e28b8817 Route Tailnet Apple flows through daemon gRPC 2026-04-03 01:36:55 -07:00
Conrad Kramer
f6a7f0922d Add governance and identity registry scaffolding 2026-04-03 01:36:10 -07:00
Conrad Kramer
1da00ecdf3 Add email-based tailnet discovery to Apple app
Some checks failed
Build Rust / Cargo Test (push) Has been cancelled
Build Site / Next.js Build (push) Has been cancelled
2026-04-03 00:42:39 -07:00
Conrad Kramer
baf1408060 Add Tailnet landing page 2026-04-03 00:17:12 -07:00
Conrad Kramer
72b7f1467b Disable Forgejo local password sign-in
Some checks are pending
Build Rust / Cargo Test (push) Waiting to run
Build Site / Next.js Build (push) Waiting to run
2026-04-02 21:44:10 -07:00
Conrad Kramer
3332bf5c53 Fix Forgejo OIDC account linking 2026-04-01 13:43:47 -07:00
Conrad Kramer
bb05bd9014 Add Burrow Authentik admin directory sync 2026-04-01 11:39:29 -07:00
Conrad Kramer
1ff8270a01 Advertise OIDC discovery on burrow.net 2026-04-01 01:26:08 -07:00
Conrad Kramer
0e68c25a99 Wire Forgejo sign-in through Authentik 2026-04-01 01:12:15 -07:00
Conrad Kramer
7f280c08cf Commit remaining Burrow platform work 2026-03-31 23:35:36 -07:00
Conrad Kramer
fff5475914 Probe Headscale reachability in Apple UI 2026-03-31 23:28:42 -07:00
Conrad Kramer
be5b7d90db Enable Google Authentik login on forge 2026-03-31 23:28:35 -07:00
Conrad Kramer
20964e8ed7 Move forge tailnet secrets to agenix 2026-03-31 16:38:02 -07:00
Conrad Kramer
8aebf56d6d Resolve Burrow forge domains locally 2026-03-31 14:59:30 -07:00
Conrad Kramer
b8347f62ba Fix Headscale bootstrap policy syntax 2026-03-31 14:56:27 -07:00
Conrad Kramer
de25f240d5 Add Burrow forge infrastructure and tailnet control plane 2026-03-31 14:53:48 -07:00
Conrad Kramer
d1ed826389 Unify Tailnet config presentation 2026-03-31 14:32:14 -07:00
Conrad Kramer
014bca073f Polish Apple network config sheets 2026-03-31 14:27:14 -07:00
Conrad Kramer
2f69987742 Fix iOS config sheet width 2026-03-31 13:46:11 -07:00
Conrad Kramer
36a54628ba Simplify iOS network add flow 2026-03-31 13:40:13 -07:00
Conrad Kramer
35f3b3ce4e Use AuthenticationServices for Tailnet sign-in 2026-03-31 13:09:07 -07:00
Conrad Kramer
7670a75840 Add Tailnet accounts and Tailscale login flow 2026-03-31 12:52:21 -07:00
Conrad Kramer
f9062eae33 Fix Apple simulator and Swift 6 build plumbing 2026-03-31 12:50:28 -07:00
Conrad Kramer
cdf8d22055 Add Linux tor-exec namespace runtime 2026-03-30 20:01:55 -07:00
Conrad Kramer
7ade60646b Allow no-tunnel passthrough mode 2026-03-30 19:30:22 -07:00
Conrad Kramer
450e9c6fcd Drive daemon tunnels from stored networks 2026-03-30 19:01:58 -07:00
199 changed files with 25698 additions and 4669 deletions

View file

@@ -1,159 +0,0 @@
name: Build Apple
on:
push:
branches:
- main
pull_request:
branches:
- "**"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
jobs:
build:
name: Build App (${{ matrix.platform }})
runs-on: namespace-profile-macos-large
strategy:
fail-fast: false
matrix:
include:
- platform: macOS
cache-id: macos
destination: platform=macOS
rust-targets: x86_64-apple-darwin,aarch64-apple-darwin
- platform: iOS Simulator
cache-id: ios-simulator
destination: platform=iOS Simulator,name=iPhone 17 Pro
rust-targets: aarch64-apple-ios-sim,x86_64-apple-ios
env:
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: short
RUSTC_WRAPPER: sccache
SCCACHE_CACHE_SIZE: 20G
steps:
- name: Checkout
uses: https://code.forgejo.org/actions/checkout@v4
with:
token: ${{ github.token }}
fetch-depth: 0
submodules: recursive
- name: Select Xcode
shell: bash
run: |
set -euo pipefail
candidates=(
"/Applications/Xcode_26.1.app/Contents/Developer"
"/Applications/Xcode_26_1.app/Contents/Developer"
"/Applications/Xcode.app/Contents/Developer"
"/Applications/Xcode/Xcode.app/Contents/Developer"
)
selected=""
for candidate in "${candidates[@]}"; do
if [[ -d "$candidate" ]]; then
selected="$candidate"
break
fi
done
if [[ -z "$selected" ]] && command -v xcode-select >/dev/null 2>&1; then
selected="$(xcode-select -p)"
fi
if [[ -z "$selected" ]]; then
echo "::error ::Unable to locate an Xcode toolchain" >&2
exit 1
fi
echo "DEVELOPER_DIR=$selected" >> "$GITHUB_ENV"
DEVELOPER_DIR="$selected" /usr/bin/xcodebuild -version || true
- name: Prepare Cache Dirs
shell: bash
run: |
set -euo pipefail
cache_root="${NSC_CACHE_PATH:-${HOME}/.cache/burrow}"
shared_root="${NSC_SHARED_CACHE_PATH:-${cache_root}/shared}"
lane_root="${NSC_LANE_CACHE_PATH:-${cache_root}/lane/${{ matrix.cache-id }}}"
mkdir -p \
"${shared_root}/cargo" \
"${shared_root}/rustup" \
"${shared_root}/sccache" \
"${shared_root}/homebrew" \
"${shared_root}/apple/PackageCache" \
"${shared_root}/apple/SourcePackages" \
"${lane_root}/cargo-target" \
"${lane_root}/DerivedData"
echo "CARGO_HOME=${shared_root}/cargo" >> "${GITHUB_ENV}"
echo "CARGO_TARGET_DIR=${lane_root}/cargo-target" >> "${GITHUB_ENV}"
echo "RUSTUP_HOME=${shared_root}/rustup" >> "${GITHUB_ENV}"
echo "SCCACHE_DIR=${shared_root}/sccache" >> "${GITHUB_ENV}"
echo "HOMEBREW_CACHE=${shared_root}/homebrew" >> "${GITHUB_ENV}"
echo "APPLE_PACKAGE_CACHE=${shared_root}/apple/PackageCache" >> "${GITHUB_ENV}"
echo "APPLE_SOURCE_PACKAGES=${shared_root}/apple/SourcePackages" >> "${GITHUB_ENV}"
echo "APPLE_DERIVED_DATA=${lane_root}/DerivedData" >> "${GITHUB_ENV}"
df -h "${shared_root}" "${lane_root}" || true
- name: Install Rust
shell: bash
run: |
set -euo pipefail
export PATH="${CARGO_HOME}/bin:${PATH}"
if ! command -v rustup >/dev/null 2>&1; then
curl --proto '=https' --tlsv1.2 -fsSL https://sh.rustup.rs | sh -s -- -y --profile minimal --default-toolchain 1.93.1
else
rustup set profile minimal
rustup toolchain install 1.93.1
rustup default 1.93.1
fi
mkdir -p "${CARGO_HOME}/bin"
echo "${CARGO_HOME}/bin" >> "${GITHUB_PATH}"
export PATH="${CARGO_HOME}/bin:${PATH}"
rustup show active-toolchain
toolchain="$(rustup show active-toolchain | awk '{print $1}')"
cargo_bin="$(rustup which --toolchain "${toolchain}" cargo)"
rustc_bin="$(rustup which --toolchain "${toolchain}" rustc)"
targets='${{ matrix.rust-targets }}'
for target in ${targets//,/ }; do
rustup target add --toolchain "${toolchain}" "${target}"
done
"${rustc_bin}" --version
"${cargo_bin}" --version
- name: Install Protobuf
shell: bash
run: |
set -euo pipefail
if ! command -v protoc >/dev/null 2>&1; then
brew install protobuf
fi
if ! command -v sccache >/dev/null 2>&1; then
brew install sccache
fi
- name: Build
shell: bash
working-directory: Apple
run: |
set -euo pipefail
xcodebuild build \
-project Burrow.xcodeproj \
-scheme App \
-destination '${{ matrix.destination }}' \
-skipPackagePluginValidation \
-skipMacroValidation \
-onlyUsePackageVersionsFromResolvedFile \
-clonedSourcePackagesDirPath "$APPLE_SOURCE_PACKAGES" \
-packageCachePath "$APPLE_PACKAGE_CACHE" \
-derivedDataPath "$APPLE_DERIVED_DATA" \
CODE_SIGNING_ALLOWED=NO \
CODE_SIGNING_REQUIRED=NO \
CODE_SIGN_IDENTITY="" \
DEVELOPMENT_TEAM=""
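
The "Select Xcode" step above walks a list of candidate developer directories and takes the first one that exists, falling back to `xcode-select -p`. That candidate loop can be factored into a small helper; the function name below is illustrative, not part of the workflow.

```shell
# first_existing_dir: print the first argument that is an existing
# directory, in the style of the "Select Xcode" candidate loop above.
first_existing_dir() {
  local candidate
  for candidate in "$@"; do
    if [ -d "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1  # nothing matched; caller falls back (e.g. to xcode-select -p)
}
```

A caller would then try `xcode-select -p` when the function returns non-zero, exactly as the workflow step does before erroring out.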

View file

@@ -16,50 +16,27 @@ concurrency:
jobs:
rust:
name: Cargo Test
runs-on: namespace-profile-linux-medium
env:
CARGO_INCREMENTAL: 0
NIX_CONFIG: |
experimental-features = nix-command flakes
accept-flake-config = true
RUSTC_WRAPPER: sccache
SCCACHE_CACHE_SIZE: 20G
runs-on: [self-hosted, linux, x86_64, burrow-forge]
steps:
- name: Checkout
uses: https://code.forgejo.org/actions/checkout@v4
with:
token: ${{ github.token }}
fetch-depth: 0
- name: Prepare Cache Dirs
shell: bash
run: |
set -euo pipefail
cache_root="${NSC_CACHE_PATH:-${HOME}/.cache/burrow}"
shared_root="${NSC_SHARED_CACHE_PATH:-${cache_root}/shared}"
lane_root="${NSC_LANE_CACHE_PATH:-${cache_root}/lane/build-rust}"
mkdir -p \
"${shared_root}/cargo" \
"${shared_root}/sccache" \
"${shared_root}/xdg" \
"${lane_root}/cargo-target"
echo "CARGO_HOME=${shared_root}/cargo" >> "${GITHUB_ENV}"
echo "SCCACHE_DIR=${shared_root}/sccache" >> "${GITHUB_ENV}"
echo "XDG_CACHE_HOME=${shared_root}/xdg" >> "${GITHUB_ENV}"
echo "CARGO_TARGET_DIR=${lane_root}/cargo-target" >> "${GITHUB_ENV}"
{
echo 'NIX_CONFIG<<EOF'
printf '%s\n' "${NIX_CONFIG}"
echo 'EOF'
} >> "${GITHUB_ENV}"
df -h /nix "${shared_root}" "${lane_root}" || true
repo_url="${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
if [ ! -d .git ]; then
git init .
fi
if git remote get-url origin >/dev/null 2>&1; then
git remote set-url origin "${repo_url}"
else
git remote add origin "${repo_url}"
fi
git fetch --force --tags origin "${GITHUB_SHA}"
git checkout --force --detach FETCH_HEAD
git clean -ffdqx
- name: Test
shell: bash
run: |
set -euo pipefail
nix develop .#ci -c bash -euo pipefail -c '
sccache --zero-stats >/dev/null 2>&1 || true
cargo test --workspace --all-features
sccache --show-stats || true
'
nix develop .#ci -c cargo test --workspace --all-features
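
The self-hosted jobs in this change replace the `checkout` action with an inline git sequence that reuses the runner's persistent working directory. A sketch of that sequence wrapped as a reusable function (the name `forge_checkout` is illustrative; the `GITHUB_*` variables are the ones the Forgejo runner provides):

```shell
# forge_checkout: sync the current directory to the commit under test,
# reusing an existing .git when the runner workspace persists between jobs.
forge_checkout() {
  set -euo pipefail
  local repo_url="${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
  [ -d .git ] || git init .
  if git remote get-url origin >/dev/null 2>&1; then
    git remote set-url origin "$repo_url"   # workspace reused: repoint origin
  else
    git remote add origin "$repo_url"
  fi
  git fetch --force --tags origin "$GITHUB_SHA"
  git checkout --force --detach FETCH_HEAD
  git clean -ffdqx                          # drop untracked and ignored files
}
```

The `git clean -ffdqx` at the end is what makes workspace reuse safe: any build output from a previous run is removed before the job starts.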

View file

@@ -16,48 +16,27 @@ concurrency:
jobs:
site:
name: Next.js Build
runs-on: namespace-profile-linux-medium
env:
NIX_CONFIG: |
experimental-features = nix-command flakes
accept-flake-config = true
runs-on: [self-hosted, linux, x86_64, burrow-forge]
steps:
- name: Checkout
uses: https://code.forgejo.org/actions/checkout@v4
with:
token: ${{ github.token }}
fetch-depth: 0
- name: Prepare Cache Dirs
shell: bash
run: |
set -euo pipefail
cache_root="${NSC_CACHE_PATH:-${HOME}/.cache/burrow}"
shared_root="${NSC_SHARED_CACHE_PATH:-${cache_root}/shared}"
lane_root="${NSC_LANE_CACHE_PATH:-${cache_root}/lane/build-site}"
mkdir -p \
"${shared_root}/npm" \
"${shared_root}/xdg" \
"${lane_root}/next-cache"
echo "NPM_CONFIG_CACHE=${shared_root}/npm" >> "${GITHUB_ENV}"
echo "XDG_CACHE_HOME=${shared_root}/xdg" >> "${GITHUB_ENV}"
echo "NEXT_CACHE_DIR=${lane_root}/next-cache" >> "${GITHUB_ENV}"
{
echo 'NIX_CONFIG<<EOF'
printf '%s\n' "${NIX_CONFIG}"
echo 'EOF'
} >> "${GITHUB_ENV}"
df -h /nix "${shared_root}" "${lane_root}" || true
repo_url="${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
if [ ! -d .git ]; then
git init .
fi
if git remote get-url origin >/dev/null 2>&1; then
git remote set-url origin "${repo_url}"
else
git remote add origin "${repo_url}"
fi
git fetch --force --tags origin "${GITHUB_SHA}"
git checkout --force --detach FETCH_HEAD
git clean -ffdqx
- name: Build
shell: bash
run: |
set -euo pipefail
nix develop .#ci -c bash -euo pipefail -c '
mkdir -p site/.next
rm -rf site/.next/cache
ln -sfn "${NEXT_CACHE_DIR}" site/.next/cache
cd site
npm install
npm run build
'
nix develop .#ci -c bash -lc 'cd site && npm ci --no-audit --no-fund && npm run build'
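
The removed build step kept Next.js incremental artifacts across runs by symlinking `site/.next/cache` into a lane-scoped directory that outlives the checkout. A minimal sketch of that symlink pattern (the default path here is illustrative):

```shell
# Persist the Next.js build cache outside the checkout: replace
# site/.next/cache with a symlink into a directory that survives
# workspace cleanup between runs.
NEXT_CACHE_DIR="${NEXT_CACHE_DIR:-$HOME/.cache/burrow/lane/build-site/next-cache}"
mkdir -p "$NEXT_CACHE_DIR" site/.next
rm -rf site/.next/cache
ln -sfn "$NEXT_CACHE_DIR" site/.next/cache
```

The `git clean -ffdqx` in the checkout step removes the symlink itself on the next run, but the cache contents behind it survive, so the link just needs to be recreated before each build.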

View file

@@ -0,0 +1,38 @@
name: Lint Governance
on:
push:
branches:
- main
pull_request:
branches:
- "**"
workflow_dispatch:
jobs:
governance:
name: BEP Metadata
runs-on: [self-hosted, linux, x86_64, burrow-forge]
steps:
- name: Checkout
shell: bash
run: |
set -euo pipefail
repo_url="${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
if [ ! -d .git ]; then
git init .
fi
if git remote get-url origin >/dev/null 2>&1; then
git remote set-url origin "${repo_url}"
else
git remote add origin "${repo_url}"
fi
git fetch --force --tags origin "${GITHUB_SHA}"
git checkout --force --detach FETCH_HEAD
git clean -ffdqx
- name: Validate BEP metadata
shell: bash
run: |
set -euo pipefail
python3 Scripts/check-bep-metadata.py

View file

@@ -0,0 +1,60 @@
name: Release
on:
push:
tags:
- "v*"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: false
jobs:
release:
name: Release Build
runs-on: namespace-profile-linux-medium
steps:
- name: Checkout
uses: https://code.forgejo.org/actions/checkout@v4
with:
token: ${{ github.token }}
fetch-depth: 0
- name: Bootstrap Nix
shell: bash
run: |
set -euo pipefail
chmod +x Scripts/ci/ensure-nix.sh
Scripts/ci/ensure-nix.sh
- name: Build release artifacts
shell: bash
env:
RELEASE_REF: ${{ github.ref_name }}
run: |
set -euo pipefail
ref="${RELEASE_REF:-manual-${GITHUB_SHA::7}}"
export RELEASE_REF="${ref}"
chmod +x Scripts/ci/build-release-artifacts.sh
nix develop .#ci -c Scripts/ci/build-release-artifacts.sh
- name: Upload release artifacts
uses: https://code.forgejo.org/actions/upload-artifact@v4
with:
name: burrow-release-${{ github.ref_name }}
path: dist/*
if-no-files-found: error
- name: Publish Forgejo release
if: startsWith(github.ref, 'refs/tags/')
shell: bash
env:
RELEASE_TAG: ${{ github.ref_name }}
API_URL: ${{ github.api_url }}
REPOSITORY: ${{ github.repository }}
TOKEN: ${{ github.token }}
run: |
set -euo pipefail
chmod +x Scripts/ci/publish-forgejo-release.sh
nix develop .#ci -c Scripts/ci/publish-forgejo-release.sh
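
The release job derives a ref name even for manual runs: the tag name when a tag push triggered the workflow, otherwise `manual-` plus the short commit SHA. The fallback expansion in isolation (bash-only substring syntax; the SHA value below is illustrative):

```shell
# RELEASE_REF fallback from the "Build release artifacts" step: use the
# provided ref name, or synthesize one from the first 7 chars of the SHA.
GITHUB_SHA="0123456789abcdef0123456789abcdef01234567"  # illustrative value
RELEASE_REF=""                                          # empty: simulate a manual run
ref="${RELEASE_REF:-manual-${GITHUB_SHA::7}}"
echo "$ref"   # manual-0123456
```

Because the step uses `:-` rather than `-`, an empty `RELEASE_REF` also triggers the fallback, not just an unset one.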

View file

@@ -54,6 +54,7 @@ jobs:
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
with:
toolchain: 1.85.0
targets: ${{ join(matrix.rust-targets, ', ') }}
- name: Install Protobuf
shell: bash
@@ -86,4 +87,4 @@ jobs:
destination: ${{ matrix.destination }}
test-plan: ${{ matrix.xcode-ui-test }}
artifact-prefix: ui-tests-${{ matrix.sdk-name }}
check-name: Xcode UI Tests (${{ matrix.platform }})
check-name: Xcode UI Tests (${{ matrix.platform }})

View file

@@ -6,6 +6,9 @@ on:
pull_request:
branches:
- "*"
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build:
name: Build Crate (${{ matrix.platform }})
@@ -72,14 +75,14 @@ jobs:
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
with:
toolchain: stable
toolchain: 1.85.0
components: rustfmt
targets: ${{ join(matrix.targets, ', ') }}
- name: Setup Rust Cache
uses: Swatinem/rust-cache@v2
- name: Build
shell: bash
run: cargo build --verbose --workspace --all-features --target ${{ join(matrix.targets, ' --target ') }} --target ${{ join(matrix.test-targets, ' --target ') }}
run: cargo build --locked --verbose --workspace --all-features --target ${{ join(matrix.targets, ' --target ') }} --target ${{ join(matrix.test-targets, ' --target ') }}
- name: Test
shell: bash
run: cargo test --verbose --workspace --all-features --target ${{ join(matrix.test-targets, ' --target ') }}
run: cargo test --locked --verbose --workspace --all-features --target ${{ join(matrix.test-targets, ' --target ') }}

.github/workflows/lint-governance.yml vendored Normal file (+23)
View file

@@ -0,0 +1,23 @@
name: Governance Lint
on:
pull_request:
branches:
- "*"
jobs:
governance:
name: BEP Metadata
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Validate BEP metadata
shell: bash
run: |
set -euo pipefail
python3 Scripts/check-bep-metadata.py

View file

@@ -47,6 +47,7 @@ jobs:
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
with:
toolchain: 1.85.0
targets: ${{ join(matrix.rust-targets, ', ') }}
- name: Install Protobuf
shell: bash

.gitignore vendored (+1)
View file

@@ -1,5 +1,6 @@
# Xcode
xcuserdata
Apple/build/
# Swift
Apple/Package/.swiftpm/

AGENTS.md Normal file (+14)
View file

@@ -0,0 +1,14 @@
# instructions for agents
1. Spell the project name as `Burrow` in user-facing copy and `burrow` in code, package, and protocol identifiers unless an existing integration requires a different literal.
2. Read [CONSTITUTION.md](CONSTITUTION.md) before changing Apple clients, the daemon, the control plane, forge infrastructure, identity, or security-sensitive code.
3. Anchor non-trivial changes in a Burrow Evolution Proposal (BEP) under [evolution/](evolution/README.md) so future contributors can inherit the rationale, safeguards, and rollout shape.
4. Before touching the Apple app, daemon IPC, or Tailnet flows, review:
- [evolution/proposals/BEP-0002-control-plane-bootstrap-and-local-auth.md](evolution/proposals/BEP-0002-control-plane-bootstrap-and-local-auth.md)
- [evolution/proposals/BEP-0003-connect-ip-and-negotiation-roadmap.md](evolution/proposals/BEP-0003-connect-ip-and-negotiation-roadmap.md)
- [evolution/proposals/BEP-0005-daemon-ipc-and-apple-boundary.md](evolution/proposals/BEP-0005-daemon-ipc-and-apple-boundary.md)
- [evolution/proposals/BEP-0006-tailnet-authority-first-control-plane.md](evolution/proposals/BEP-0006-tailnet-authority-first-control-plane.md)
5. Apple clients must talk only to the daemon over gRPC. Do not add direct HTTP, control-plane, or helper-process calls from Swift UI code.
6. Treat Tailnet as one protocol family. Tailscale-managed and self-hosted Headscale-style deployments differ by authority, policy, and auth details, not by a separate user-facing protocol surface.
7. Maintain canonical identity and operator metadata in [contributors.nix](contributors.nix). If Burrow forge, Authentik, Headscale, or admin/group mappings need to change, edit that registry first and derive runtime configuration from it.
8. When process or architecture is unclear, stop and draft or update a BEP instead of improvising durable behavior in code.

View file

@@ -6,6 +6,8 @@ import SwiftUI
@main
@MainActor
class AppDelegate: NSObject, NSApplicationDelegate {
private var windowController: NSWindowController?
private let quitItem: NSMenuItem = {
let quitItem = NSMenuItem(
title: "Quit Burrow",
@@ -17,6 +19,17 @@ class AppDelegate: NSObject, NSApplicationDelegate {
return quitItem
}()
private lazy var openItem: NSMenuItem = {
let item = NSMenuItem(
title: "Open Burrow",
action: #selector(openWindow),
keyEquivalent: "o"
)
item.target = self
item.keyEquivalentModifierMask = .command
return item
}()
private let toggleItem: NSMenuItem = {
let toggleView = NSHostingView(rootView: MenuItemToggleView())
toggleView.frame.size = CGSize(width: 300, height: 32)
@@ -31,6 +44,7 @@ class AppDelegate: NSObject, NSApplicationDelegate {
let menu = NSMenu()
menu.items = [
toggleItem,
openItem,
.separator(),
quitItem
]
@@ -41,7 +55,7 @@
let statusBar = NSStatusBar.system
let statusItem = statusBar.statusItem(withLength: NSStatusItem.squareLength)
if let button = statusItem.button {
button.image = NSImage(systemSymbolName: "network.badge.shield.half.filled", accessibilityDescription: nil)
button.image = NSImage(systemSymbolName: "pipe.and.drop.fill", accessibilityDescription: nil)
}
return statusItem
}()
@@ -49,5 +63,28 @@
func applicationDidFinishLaunching(_ notification: Notification) {
statusItem.menu = menu
}
@objc
private func openWindow() {
if let window = windowController?.window {
window.makeKeyAndOrderFront(nil)
NSApplication.shared.activate(ignoringOtherApps: true)
return
}
let contentView = BurrowView()
let hostingController = NSHostingController(rootView: contentView)
let window = NSWindow(contentViewController: hostingController)
window.title = "Burrow"
window.setContentSize(NSSize(width: 820, height: 720))
window.styleMask.insert([.titled, .closable, .miniaturizable, .resizable])
window.center()
let controller = NSWindowController(window: window)
controller.shouldCascadeWindows = true
controller.showWindow(nil)
windowController = controller
NSApplication.shared.activate(ignoringOtherApps: true)
}
}
#endif

View file

@@ -0,0 +1,439 @@
import XCTest
import UIKit
@MainActor
final class BurrowTailnetLoginUITests: XCTestCase {
private enum TailnetLoginMode: String, Decodable {
case tailscale
case discovered
}
private struct TestConfig: Decodable {
let email: String
let username: String
let password: String
let mode: TailnetLoginMode?
}
override func setUpWithError() throws {
continueAfterFailure = false
}
func testTailnetLoginThroughAuthentikWebSession() throws {
let config = try loadTestConfig()
let email = config.email
let username = config.username
let password = config.password
let mode = config.mode ?? .tailscale
let browserIdentity = mode == .tailscale ? email : username
let app = XCUIApplication()
app.launch()
let tailnetButton = app.buttons["quick-add-tailnet"]
XCTAssertTrue(tailnetButton.waitForExistence(timeout: 15), "Tailnet add button did not appear")
tailnetButton.tap()
configureTailnetIfNeeded(in: app, mode: mode)
let discoveryField = app.textFields["tailnet-discovery-email"]
XCTAssertTrue(discoveryField.waitForExistence(timeout: 10), "Tailnet discovery email field did not appear")
replaceText(in: discoveryField, with: email)
let serverCard = app.descendants(matching: .any)
.matching(identifier: "tailnet-server-card")
.firstMatch
XCTAssertTrue(serverCard.waitForExistence(timeout: 5), "Tailnet server card did not appear")
let signInButton = app.buttons["tailnet-start-sign-in"]
XCTAssertTrue(signInButton.waitForExistence(timeout: 10), "Tailnet sign-in button did not appear")
signInButton.tap()
acceptAuthenticationPromptIfNeeded(in: app, timeout: 20)
let webSession = webAuthenticationSession()
XCTAssertTrue(webSession.waitForExistence(timeout: 20), "Safari authentication session did not appear")
signIntoAuthentik(in: webSession, username: browserIdentity, password: password)
app.activate()
XCTAssertTrue(
waitForTailnetSignedIn(in: app, timeout: 60),
"Tailnet sign-in never reached the running state"
)
}
private func configureTailnetIfNeeded(in app: XCUIApplication, mode: TailnetLoginMode) {
guard mode == .discovered else { return }
openTailnetMenu(in: app)
tapMenuButton(named: "Edit Custom Server", in: app)
openTailnetMenu(in: app)
tapMenuButton(named: "Show Advanced Settings", in: app)
let authorityField = app.textFields["tailnet-authority"]
XCTAssertTrue(authorityField.waitForExistence(timeout: 10), "Tailnet authority field did not appear")
replaceText(in: authorityField, with: "")
}
private func openTailnetMenu(in app: XCUIApplication) {
let moreButton = app.buttons["More"]
XCTAssertTrue(moreButton.waitForExistence(timeout: 5), "Tailnet menu button did not appear")
moreButton.tap()
}
private func tapMenuButton(named title: String, in app: XCUIApplication) {
let menuButton = firstExistingElement(
from: [
app.buttons[title],
app.descendants(matching: .button)[title],
],
timeout: 5
)
XCTAssertTrue(menuButton.exists, "Menu action \(title) did not appear")
menuButton.tap()
}
private func acceptAuthenticationPromptIfNeeded(
in app: XCUIApplication,
timeout: TimeInterval
) {
let springboard = XCUIApplication(bundleIdentifier: "com.apple.springboard")
let deadline = Date().addingTimeInterval(timeout)
repeat {
let promptCandidates = [
springboard.buttons["Continue"],
springboard.buttons["Allow"],
app.buttons["Continue"],
app.buttons["Allow"],
]
for button in promptCandidates where button.exists && button.isHittable {
button.tap()
return
}
RunLoop.current.run(until: Date().addingTimeInterval(0.25))
} while Date() < deadline
let promptCandidates = [
springboard.buttons["Continue"],
springboard.buttons["Allow"],
app.buttons["Continue"],
app.buttons["Allow"],
]
for button in promptCandidates where button.exists {
button.tap()
return
}
}
private func webAuthenticationSession() -> XCUIApplication {
let safariViewService = XCUIApplication(bundleIdentifier: "com.apple.SafariViewService")
if safariViewService.waitForExistence(timeout: 5) {
return safariViewService
}
let safari = XCUIApplication(bundleIdentifier: "com.apple.mobilesafari")
_ = safari.waitForExistence(timeout: 5)
return safari
}
private func signIntoAuthentik(in webSession: XCUIApplication, username: String, password: String) {
followTailnetRedirectIfNeeded(in: webSession)
if !webSession.exists {
return
}
let immediatePasswordField = firstExistingSecureField(in: webSession, timeout: 2)
if immediatePasswordField.exists {
replaceSecureText(in: immediatePasswordField, within: webSession, with: password)
submitAuthenticationForm(in: webSession, focusedField: immediatePasswordField)
return
}
let usernameField = firstExistingElement(
in: webSession,
queries: [
{ $0.textFields["Username"] },
{ $0.textFields["Email or Username"] },
{ $0.textFields["Email address"] },
{ $0.textFields["Email"] },
{ $0.webViews.textFields["Username"] },
{ $0.webViews.textFields["Email or Username"] },
{ $0.descendants(matching: .textField).firstMatch },
],
timeout: 12
)
if !usernameField.exists {
return
}
replaceText(in: usernameField, with: username)
tapFirstExistingButton(
in: webSession,
titles: ["Continue", "Next", "Sign In", "Log in", "Login"],
timeout: 5
)
let passwordField = firstExistingSecureField(in: webSession, timeout: 20)
XCTAssertTrue(passwordField.exists, "Authentik password field did not appear")
replaceSecureText(in: passwordField, within: webSession, with: password)
submitAuthenticationForm(in: webSession, focusedField: passwordField)
}
    private func followTailnetRedirectIfNeeded(in webSession: XCUIApplication) {
        let redirectCandidates = [
            webSession.links["Found"],
            webSession.webViews.links["Found"],
            webSession.buttons["Found"],
            webSession.webViews.buttons["Found"],
        ]
        let redirectLink = firstExistingElement(from: redirectCandidates, timeout: 8)
        if redirectLink.exists {
            redirectLink.tap()
        }
    }

    private func firstExistingSecureField(in app: XCUIApplication, timeout: TimeInterval) -> XCUIElement {
        let candidates = [
            app.descendants(matching: .secureTextField).firstMatch,
            app.secureTextFields["Password"],
            app.secureTextFields["Password or Token"],
            app.webViews.secureTextFields["Password"],
            app.webViews.secureTextFields["Password or Token"],
        ]
        return firstExistingElement(from: candidates, timeout: timeout)
    }

    private func tapFirstExistingButton(
        in app: XCUIApplication,
        titles: [String],
        timeout: TimeInterval
    ) {
        let candidates = titles.flatMap { title in
            [
                app.buttons[title],
                app.webViews.buttons[title],
            ]
        } + [app.descendants(matching: .button).firstMatch]
        let button = firstExistingElement(from: candidates, timeout: timeout)
        XCTAssertTrue(button.exists, "Expected one of \(titles.joined(separator: ", ")) to appear")
        button.tap()
    }

    private func submitAuthenticationForm(in app: XCUIApplication, focusedField: XCUIElement) {
        focus(focusedField)
        focusedField.typeText("\n")
        if waitForAny(
            [
                { !focusedField.exists },
                { !app.staticTexts["Burrow Tailnet Authentication"].exists },
            ],
            timeout: 1.5
        ) {
            return
        }
        // The newline may be swallowed by the web view; fall back to tapping a
        // submit key on the software keyboard, then to an on-page button.
        let keyboard = app.keyboards.firstMatch
        if keyboard.waitForExistence(timeout: 2) {
            let keyboardCandidates = [
                "Return",
                "return",
                "Go",
                "go",
                "Continue",
                "continue",
                "Done",
                "done",
                "Join",
                "join",
                "Sign In",
                "Log In",
                "Login",
            ]
            for title in keyboardCandidates {
                let key = keyboard.buttons[title]
                if key.exists && key.isHittable {
                    key.tap()
                    return
                }
            }
            if let lastKey = keyboard.buttons.allElementsBoundByIndex.last,
                lastKey.exists,
                lastKey.isHittable
            {
                lastKey.tap()
                return
            }
        }
        tapFirstExistingButton(
            in: app,
            titles: ["Continue", "Sign In", "Log in", "Login"],
            timeout: 5
        )
    }

    private func loadTestConfig() throws -> TestConfig {
        let environment = ProcessInfo.processInfo.environment
        if let email = nonEmptyEnvironment("BURROW_UI_TEST_EMAIL"),
            let password = nonEmptyEnvironment("BURROW_UI_TEST_PASSWORD")
        {
            return TestConfig(
                email: email,
                username: nonEmptyEnvironment("BURROW_UI_TEST_USERNAME") ?? email,
                password: password,
                mode: nonEmptyEnvironment("BURROW_UI_TEST_TAILNET_MODE")
                    .flatMap(TailnetLoginMode.init(rawValue:))
            )
        }
        let configPath = environment["BURROW_UI_TEST_CONFIG_PATH"] ?? "/tmp/burrow-ui-test-config.json"
        let configURL = URL(fileURLWithPath: configPath)
        guard FileManager.default.fileExists(atPath: configURL.path) else {
            throw XCTSkip(
                "Missing UI test configuration. Expected env vars or config file at \(configURL.path)"
            )
        }
        let data = try Data(contentsOf: configURL)
        return try JSONDecoder().decode(TestConfig.self, from: data)
    }

    private func nonEmptyEnvironment(_ key: String) -> String? {
        guard let value = ProcessInfo.processInfo.environment[key]?
            .trimmingCharacters(in: .whitespacesAndNewlines),
            !value.isEmpty
        else {
            return nil
        }
        return value
    }

    private func waitForFieldValue(
        _ field: XCUIElement,
        containing substring: String,
        timeout: TimeInterval
    ) -> Bool {
        let predicate = NSPredicate(format: "value CONTAINS %@", substring)
        let expectation = XCTNSPredicateExpectation(predicate: predicate, object: field)
        return XCTWaiter.wait(for: [expectation], timeout: timeout) == .completed
    }

    private func waitForButtonLabel(
        _ button: XCUIElement,
        equals expected: String,
        timeout: TimeInterval
    ) -> Bool {
        let predicate = NSPredicate(format: "label == %@", expected)
        let expectation = XCTNSPredicateExpectation(predicate: predicate, object: button)
        return XCTWaiter.wait(for: [expectation], timeout: timeout) == .completed
    }

    private func waitForTailnetSignedIn(in app: XCUIApplication, timeout: TimeInterval) -> Bool {
        let button = app.buttons["tailnet-start-sign-in"]
        let deadline = Date().addingTimeInterval(timeout)
        repeat {
            acceptAuthenticationPromptIfNeeded(in: app, timeout: 1)
            if button.exists, button.label == "Signed In" {
                return true
            }
            RunLoop.current.run(until: Date().addingTimeInterval(0.3))
        } while Date() < deadline
        return button.exists && button.label == "Signed In"
    }

    private func waitForAny(_ conditions: [() -> Bool], timeout: TimeInterval) -> Bool {
        let deadline = Date().addingTimeInterval(timeout)
        repeat {
            if conditions.contains(where: { $0() }) {
                return true
            }
            RunLoop.current.run(until: Date().addingTimeInterval(0.2))
        } while Date() < deadline
        return conditions.contains(where: { $0() })
    }

    private func firstExistingElement(
        in app: XCUIApplication,
        queries: [(XCUIApplication) -> XCUIElement],
        timeout: TimeInterval
    ) -> XCUIElement {
        firstExistingElement(from: queries.map { $0(app) }, timeout: timeout)
    }

    private func firstExistingElement(from candidates: [XCUIElement], timeout: TimeInterval) -> XCUIElement {
        let deadline = Date().addingTimeInterval(timeout)
        repeat {
            for candidate in candidates where candidate.exists {
                return candidate
            }
            RunLoop.current.run(until: Date().addingTimeInterval(0.2))
        } while Date() < deadline
        // Nothing appeared in time; return the first candidate as a
        // non-existing placeholder so callers can still check `.exists`.
        return candidates[0]
    }

    private func replaceText(in element: XCUIElement, with value: String) {
        focus(element)
        clearText(in: element)
        element.typeText(value)
    }

    private func replaceSecureText(in element: XCUIElement, within app: XCUIApplication, with value: String) {
        // Secure fields in web content can drop typed characters, so prefer
        // pasting via the edit menu and only fall back to typing.
        UIPasteboard.general.string = value
        focus(element)
        for revealMenu in [
            { element.doubleTap() },
            { element.press(forDuration: 1.2) },
        ] {
            revealMenu()
            let pasteButton = firstExistingElement(from: pasteCandidates(in: app), timeout: 3)
            if pasteButton.exists {
                pasteButton.tap()
                return
            }
        }
        focus(element)
        element.typeText(value)
    }

    private func clearText(in element: XCUIElement) {
        guard let currentValue = element.value as? String, !currentValue.isEmpty else {
            return
        }
        let deleteSequence = String(repeating: XCUIKeyboardKey.delete.rawValue, count: currentValue.count)
        element.typeText(deleteSequence)
    }

    private func focus(_ element: XCUIElement) {
        element.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5)).tap()
        RunLoop.current.run(until: Date().addingTimeInterval(0.3))
    }

    private func pasteCandidates(in app: XCUIApplication) -> [XCUIElement] {
        let pasteLabels = ["Paste", "Incolla", "Paste from Clipboard"]
        return pasteLabels.flatMap { label in
            [
                app.menuItems[label],
                app.buttons[label],
                app.webViews.buttons[label],
                app.descendants(matching: .button).matching(NSPredicate(format: "label == %@", label)).firstMatch,
                app.descendants(matching: .menuItem).matching(NSPredicate(format: "label == %@", label)).firstMatch,
            ]
        }
    }
}

View file

@@ -8,6 +8,7 @@
/* Begin PBXBuildFile section */
D00AA8972A4669BC005C8102 /* AppDelegate.swift in Sources */ = {isa = PBXBuildFile; fileRef = D00AA8962A4669BC005C8102 /* AppDelegate.swift */; };
D11000012F70000100112233 /* BurrowUITests.swift in Sources */ = {isa = PBXBuildFile; fileRef = D11000042F70000100112233 /* BurrowUITests.swift */; };
D020F65829E4A697002790F6 /* PacketTunnelProvider.swift in Sources */ = {isa = PBXBuildFile; fileRef = D020F65729E4A697002790F6 /* PacketTunnelProvider.swift */; };
D020F65D29E4A697002790F6 /* BurrowNetworkExtension.appex in Embed Foundation Extensions */ = {isa = PBXBuildFile; fileRef = D020F65329E4A697002790F6 /* BurrowNetworkExtension.appex */; settings = {ATTRIBUTES = (RemoveHeadersOnCopy, ); }; };
D03383AD2C8E67E300F7C44E /* SwiftProtobuf in Frameworks */ = {isa = PBXBuildFile; productRef = D078F7E22C8DA375008A8CEC /* SwiftProtobuf */; };
@@ -23,7 +24,6 @@
D0D4E53A2C8D996F007F820A /* BurrowCore.framework in Embed Frameworks */ = {isa = PBXBuildFile; fileRef = D0D4E5312C8D996F007F820A /* BurrowCore.framework */; settings = {ATTRIBUTES = (CodeSignOnCopy, RemoveHeadersOnCopy, ); }; };
D0D4E56B2C8D9C2F007F820A /* Logging.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E49A2C8D921A007F820A /* Logging.swift */; };
D0D4E5702C8D9C62007F820A /* BurrowCore.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = D0D4E5312C8D996F007F820A /* BurrowCore.framework */; };
D0D4E5712C8D9C6F007F820A /* HackClub.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E49D2C8D921A007F820A /* HackClub.swift */; };
D0D4E5722C8D9C6F007F820A /* Network.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E49E2C8D921A007F820A /* Network.swift */; };
D0D4E5732C8D9C6F007F820A /* WireGuard.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E49F2C8D921A007F820A /* WireGuard.swift */; };
D0D4E5742C8D9C6F007F820A /* BurrowView.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4A22C8D921A007F820A /* BurrowView.swift */; };
@@ -33,7 +33,6 @@
D0D4E5782C8D9C6F007F820A /* NetworkExtension+Async.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4A62C8D921A007F820A /* NetworkExtension+Async.swift */; };
D0D4E5792C8D9C6F007F820A /* NetworkExtensionTunnel.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4A72C8D921A007F820A /* NetworkExtensionTunnel.swift */; };
D0D4E57A2C8D9C6F007F820A /* NetworkView.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4A82C8D921A007F820A /* NetworkView.swift */; };
D0D4E57B2C8D9C6F007F820A /* OAuth2.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4A92C8D921A007F820A /* OAuth2.swift */; };
D0D4E57C2C8D9C6F007F820A /* Tunnel.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4AA2C8D921A007F820A /* Tunnel.swift */; };
D0D4E57D2C8D9C6F007F820A /* TunnelButton.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4AB2C8D921A007F820A /* TunnelButton.swift */; };
D0D4E57E2C8D9C6F007F820A /* TunnelStatusView.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4AC2C8D921A007F820A /* TunnelStatusView.swift */; };
@@ -44,13 +43,20 @@
D0D4E5A62C8D9E65007F820A /* BurrowCore.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = D0D4E5312C8D996F007F820A /* BurrowCore.framework */; };
D0F4FAD32C8DC79C0068730A /* BurrowCore.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = D0D4E5312C8D996F007F820A /* BurrowCore.framework */; };
D0F7594E2C8DAB6B00126CF3 /* GRPC in Frameworks */ = {isa = PBXBuildFile; productRef = D078F7E02C8DA375008A8CEC /* GRPC */; };
D0F759612C8DB24B00126CF3 /* grpc-swift-config.json in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4962C8D921A007F820A /* grpc-swift-config.json */; };
D0F759622C8DB24B00126CF3 /* swift-protobuf-config.json in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4972C8D921A007F820A /* swift-protobuf-config.json */; };
D0FA10012D10200100112233 /* burrow.pb.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0FA10032D10200100112233 /* burrow.pb.swift */; };
D0FA10022D10200100112233 /* burrow.grpc.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0FA10042D10200100112233 /* burrow.grpc.swift */; };
D0F7597E2C8DB30500126CF3 /* CGRPCZlib in Frameworks */ = {isa = PBXBuildFile; productRef = D0F7597D2C8DB30500126CF3 /* CGRPCZlib */; };
D0F7598D2C8DB3DA00126CF3 /* Client.swift in Sources */ = {isa = PBXBuildFile; fileRef = D0D4E4992C8D921A007F820A /* Client.swift */; };
/* End PBXBuildFile section */
/* Begin PBXContainerItemProxy section */
D11000022F70000100112233 /* PBXContainerItemProxy */ = {
isa = PBXContainerItemProxy;
containerPortal = D05B9F6A29E39EEC008CB1F9 /* Project object */;
proxyType = 1;
remoteGlobalIDString = D05B9F7129E39EEC008CB1F9;
remoteInfo = App;
};
D020F65B29E4A697002790F6 /* PBXContainerItemProxy */ = {
isa = PBXContainerItemProxy;
containerPortal = D05B9F6A29E39EEC008CB1F9 /* Project object */;
@@ -132,6 +138,9 @@
/* Begin PBXFileReference section */
D00117422B30348D00D87C25 /* Configuration.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = Configuration.xcconfig; sourceTree = "<group>"; };
D00AA8962A4669BC005C8102 /* AppDelegate.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = AppDelegate.swift; sourceTree = "<group>"; };
D11000032F70000100112233 /* BurrowUITests.xctest */ = {isa = PBXFileReference; explicitFileType = wrapper.cfbundle; includeInIndex = 0; path = BurrowUITests.xctest; sourceTree = BUILT_PRODUCTS_DIR; };
D11000042F70000100112233 /* BurrowUITests.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = BurrowUITests.swift; sourceTree = "<group>"; };
D11000052F70000100112233 /* UITests.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = UITests.xcconfig; sourceTree = "<group>"; };
D020F63D29E4A1FF002790F6 /* Identity.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = Identity.xcconfig; sourceTree = "<group>"; };
D020F64029E4A1FF002790F6 /* Compiler.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = Compiler.xcconfig; sourceTree = "<group>"; };
D020F64229E4A1FF002790F6 /* Info.plist */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.plist.xml; path = Info.plist; sourceTree = "<group>"; };
@@ -156,11 +165,8 @@
D0BCC6032A09535900AD070D /* libburrow.a */ = {isa = PBXFileReference; lastKnownFileType = archive.ar; path = libburrow.a; sourceTree = BUILT_PRODUCTS_DIR; };
D0BF09582C8E6789000D8DEC /* UI.xcconfig */ = {isa = PBXFileReference; lastKnownFileType = text.xcconfig; path = UI.xcconfig; sourceTree = "<group>"; };
D0D4E4952C8D921A007F820A /* burrow.proto */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.protobuf; path = burrow.proto; sourceTree = "<group>"; };
D0D4E4962C8D921A007F820A /* grpc-swift-config.json */ = {isa = PBXFileReference; lastKnownFileType = text.json; path = "grpc-swift-config.json"; sourceTree = "<group>"; };
D0D4E4972C8D921A007F820A /* swift-protobuf-config.json */ = {isa = PBXFileReference; lastKnownFileType = text.json; path = "swift-protobuf-config.json"; sourceTree = "<group>"; };
D0D4E4992C8D921A007F820A /* Client.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Client.swift; sourceTree = "<group>"; };
D0D4E49A2C8D921A007F820A /* Logging.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Logging.swift; sourceTree = "<group>"; };
D0D4E49D2C8D921A007F820A /* HackClub.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = HackClub.swift; sourceTree = "<group>"; };
D0D4E49E2C8D921A007F820A /* Network.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Network.swift; sourceTree = "<group>"; };
D0D4E49F2C8D921A007F820A /* WireGuard.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WireGuard.swift; sourceTree = "<group>"; };
D0D4E4A12C8D921A007F820A /* Assets.xcassets */ = {isa = PBXFileReference; lastKnownFileType = folder.assetcatalog; path = Assets.xcassets; sourceTree = "<group>"; };
@@ -171,7 +177,6 @@
D0D4E4A62C8D921A007F820A /* NetworkExtension+Async.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = "NetworkExtension+Async.swift"; sourceTree = "<group>"; };
D0D4E4A72C8D921A007F820A /* NetworkExtensionTunnel.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = NetworkExtensionTunnel.swift; sourceTree = "<group>"; };
D0D4E4A82C8D921A007F820A /* NetworkView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = NetworkView.swift; sourceTree = "<group>"; };
D0D4E4A92C8D921A007F820A /* OAuth2.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = OAuth2.swift; sourceTree = "<group>"; };
D0D4E4AA2C8D921A007F820A /* Tunnel.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Tunnel.swift; sourceTree = "<group>"; };
D0D4E4AB2C8D921A007F820A /* TunnelButton.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TunnelButton.swift; sourceTree = "<group>"; };
D0D4E4AC2C8D921A007F820A /* TunnelStatusView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TunnelStatusView.swift; sourceTree = "<group>"; };
@@ -183,9 +188,18 @@
D0D4E58E2C8D9D0A007F820A /* Constants.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = Constants.h; sourceTree = "<group>"; };
D0D4E58F2C8D9D0A007F820A /* Constants.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Constants.swift; sourceTree = "<group>"; };
D0D4E5902C8D9D0A007F820A /* module.modulemap */ = {isa = PBXFileReference; lastKnownFileType = "sourcecode.module-map"; path = module.modulemap; sourceTree = "<group>"; };
D0FA10032D10200100112233 /* burrow.pb.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Generated/burrow.pb.swift; sourceTree = "<group>"; };
D0FA10042D10200100112233 /* burrow.grpc.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = Generated/burrow.grpc.swift; sourceTree = "<group>"; };
/* End PBXFileReference section */
/* Begin PBXFrameworksBuildPhase section */
D11000062F70000100112233 /* Frameworks */ = {
isa = PBXFrameworksBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
D020F65029E4A697002790F6 /* Frameworks */ = {
isa = PBXFrameworksBuildPhase;
buildActionMask = 2147483647;
@@ -247,6 +261,7 @@
D0D4E4F72C8D941D007F820A /* Framework.xcconfig */,
D020F64029E4A1FF002790F6 /* Compiler.xcconfig */,
D0D4E4F62C8D932D007F820A /* Debug.xcconfig */,
D11000052F70000100112233 /* UITests.xcconfig */,
D04A3E1D2BAF465F0043EC85 /* Version.xcconfig */,
D020F64229E4A1FF002790F6 /* Info.plist */,
D0D4E5912C8D9D0A007F820A /* Constants */,
@@ -272,6 +287,7 @@
isa = PBXGroup;
children = (
D05B9F7429E39EEC008CB1F9 /* App */,
D11000072F70000100112233 /* AppUITests */,
D020F65629E4A697002790F6 /* NetworkExtension */,
D0D4E49C2C8D921A007F820A /* Core */,
D0D4E4AD2C8D921A007F820A /* UI */,
@@ -285,6 +301,7 @@
isa = PBXGroup;
children = (
D05B9F7229E39EEC008CB1F9 /* Burrow.app */,
D11000032F70000100112233 /* BurrowUITests.xctest */,
D020F65329E4A697002790F6 /* BurrowNetworkExtension.appex */,
D0BCC6032A09535900AD070D /* libburrow.a */,
D0D4E5312C8D996F007F820A /* BurrowCore.framework */,
@@ -307,6 +324,14 @@
path = App;
sourceTree = "<group>";
};
D11000072F70000100112233 /* AppUITests */ = {
isa = PBXGroup;
children = (
D11000042F70000100112233 /* BurrowUITests.swift */,
);
path = AppUITests;
sourceTree = "<group>";
};
D0B98FD729FDDB57004E7149 /* libburrow */ = {
isa = PBXGroup;
children = (
@@ -321,8 +346,8 @@
isa = PBXGroup;
children = (
D0D4E4952C8D921A007F820A /* burrow.proto */,
D0D4E4962C8D921A007F820A /* grpc-swift-config.json */,
D0D4E4972C8D921A007F820A /* swift-protobuf-config.json */,
D0FA10032D10200100112233 /* burrow.pb.swift */,
D0FA10042D10200100112233 /* burrow.grpc.swift */,
);
path = Client;
sourceTree = "<group>";
@@ -340,7 +365,6 @@
D0D4E4A02C8D921A007F820A /* Networks */ = {
isa = PBXGroup;
children = (
D0D4E49D2C8D921A007F820A /* HackClub.swift */,
D0D4E49E2C8D921A007F820A /* Network.swift */,
D0D4E49F2C8D921A007F820A /* WireGuard.swift */,
);
@@ -358,7 +382,6 @@
D0D4E4A62C8D921A007F820A /* NetworkExtension+Async.swift */,
D0D4E4A72C8D921A007F820A /* NetworkExtensionTunnel.swift */,
D0D4E4A82C8D921A007F820A /* NetworkView.swift */,
D0D4E4A92C8D921A007F820A /* OAuth2.swift */,
D0D4E4AA2C8D921A007F820A /* Tunnel.swift */,
D0D4E4AB2C8D921A007F820A /* TunnelButton.swift */,
D0D4E4AC2C8D921A007F820A /* TunnelStatusView.swift */,
@@ -381,6 +404,24 @@
/* End PBXGroup section */
/* Begin PBXNativeTarget section */
D11000082F70000100112233 /* BurrowUITests */ = {
isa = PBXNativeTarget;
buildConfigurationList = D110000E2F70000100112233 /* Build configuration list for PBXNativeTarget "BurrowUITests" */;
buildPhases = (
D110000A2F70000100112233 /* Sources */,
D11000062F70000100112233 /* Frameworks */,
D11000092F70000100112233 /* Resources */,
);
buildRules = (
);
dependencies = (
D110000B2F70000100112233 /* PBXTargetDependency */,
);
name = BurrowUITests;
productName = BurrowUITests;
productReference = D11000032F70000100112233 /* BurrowUITests.xctest */;
productType = "com.apple.product-type.bundle.ui-testing";
};
D020F65229E4A697002790F6 /* NetworkExtension */ = {
isa = PBXNativeTarget;
buildConfigurationList = D020F65E29E4A697002790F6 /* Build configuration list for PBXNativeTarget "NetworkExtension" */;
@@ -434,8 +475,6 @@
);
dependencies = (
D0F7598A2C8DB34200126CF3 /* PBXTargetDependency */,
D0F7595E2C8DB24400126CF3 /* PBXTargetDependency */,
D0F759602C8DB24400126CF3 /* PBXTargetDependency */,
);
name = Core;
packageProductDependencies = (
@@ -498,6 +537,10 @@
LastSwiftUpdateCheck = 1600;
LastUpgradeCheck = 1520;
TargetAttributes = {
D11000082F70000100112233 = {
CreatedOnToolsVersion = 16.0;
TestTargetID = D05B9F7129E39EEC008CB1F9;
};
D020F65229E4A697002790F6 = {
CreatedOnToolsVersion = 14.3;
};
@@ -530,6 +573,7 @@
projectRoot = "";
targets = (
D05B9F7129E39EEC008CB1F9 /* App */,
D11000082F70000100112233 /* BurrowUITests */,
D020F65229E4A697002790F6 /* NetworkExtension */,
D0D4E5502C8D9BF2007F820A /* UI */,
D0D4E5302C8D996F007F820A /* Core */,
@@ -539,6 +583,13 @@
/* End PBXProject section */
/* Begin PBXResourcesBuildPhase section */
D11000092F70000100112233 /* Resources */ = {
isa = PBXResourcesBuildPhase;
buildActionMask = 2147483647;
files = (
);
runOnlyForDeploymentPostprocessing = 0;
};
D05B9F7029E39EEC008CB1F9 /* Resources */ = {
isa = PBXResourcesBuildPhase;
buildActionMask = 2147483647;
@@ -602,6 +653,14 @@
/* End PBXShellScriptBuildPhase section */
/* Begin PBXSourcesBuildPhase section */
D110000A2F70000100112233 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
D11000012F70000100112233 /* BurrowUITests.swift in Sources */,
);
runOnlyForDeploymentPostprocessing = 0;
};
D020F64F29E4A697002790F6 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
@@ -623,8 +682,8 @@
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
D0F759612C8DB24B00126CF3 /* grpc-swift-config.json in Sources */,
D0F759622C8DB24B00126CF3 /* swift-protobuf-config.json in Sources */,
D0FA10012D10200100112233 /* burrow.pb.swift in Sources */,
D0FA10022D10200100112233 /* burrow.grpc.swift in Sources */,
D0F7598D2C8DB3DA00126CF3 /* Client.swift in Sources */,
D0D4E56B2C8D9C2F007F820A /* Logging.swift in Sources */,
);
@@ -634,7 +693,6 @@
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
D0D4E5712C8D9C6F007F820A /* HackClub.swift in Sources */,
D0D4E5722C8D9C6F007F820A /* Network.swift in Sources */,
D0D4E5732C8D9C6F007F820A /* WireGuard.swift in Sources */,
D0D4E5742C8D9C6F007F820A /* BurrowView.swift in Sources */,
@@ -644,7 +702,6 @@
D0D4E5782C8D9C6F007F820A /* NetworkExtension+Async.swift in Sources */,
D0D4E5792C8D9C6F007F820A /* NetworkExtensionTunnel.swift in Sources */,
D0D4E57A2C8D9C6F007F820A /* NetworkView.swift in Sources */,
D0D4E57B2C8D9C6F007F820A /* OAuth2.swift in Sources */,
D0D4E57C2C8D9C6F007F820A /* Tunnel.swift in Sources */,
D0D4E57D2C8D9C6F007F820A /* TunnelButton.swift in Sources */,
D0D4E57E2C8D9C6F007F820A /* TunnelStatusView.swift in Sources */,
@@ -662,6 +719,11 @@
/* End PBXSourcesBuildPhase section */
/* Begin PBXTargetDependency section */
D110000B2F70000100112233 /* PBXTargetDependency */ = {
isa = PBXTargetDependency;
target = D05B9F7129E39EEC008CB1F9 /* App */;
targetProxy = D11000022F70000100112233 /* PBXContainerItemProxy */;
};
D020F65C29E4A697002790F6 /* PBXTargetDependency */ = {
isa = PBXTargetDependency;
target = D020F65229E4A697002790F6 /* NetworkExtension */;
@@ -697,14 +759,6 @@
target = D0D4E5302C8D996F007F820A /* Core */;
targetProxy = D0F4FAD12C8DC7960068730A /* PBXContainerItemProxy */;
};
D0F7595E2C8DB24400126CF3 /* PBXTargetDependency */ = {
isa = PBXTargetDependency;
productRef = D0F7595D2C8DB24400126CF3 /* GRPCSwiftPlugin */;
};
D0F759602C8DB24400126CF3 /* PBXTargetDependency */ = {
isa = PBXTargetDependency;
productRef = D0F7595F2C8DB24400126CF3 /* SwiftProtobufPlugin */;
};
D0F7598A2C8DB34200126CF3 /* PBXTargetDependency */ = {
isa = PBXTargetDependency;
productRef = D0F759892C8DB34200126CF3 /* GRPC */;
@@ -712,6 +766,20 @@
/* End PBXTargetDependency section */
/* Begin XCBuildConfiguration section */
D110000C2F70000100112233 /* Debug */ = {
isa = XCBuildConfiguration;
baseConfigurationReference = D11000052F70000100112233 /* UITests.xcconfig */;
buildSettings = {
};
name = Debug;
};
D110000D2F70000100112233 /* Release */ = {
isa = XCBuildConfiguration;
baseConfigurationReference = D11000052F70000100112233 /* UITests.xcconfig */;
buildSettings = {
};
name = Release;
};
D020F65F29E4A697002790F6 /* Debug */ = {
isa = XCBuildConfiguration;
baseConfigurationReference = D020F66229E4A6E5002790F6 /* NetworkExtension.xcconfig */;
@@ -799,6 +867,15 @@
/* End XCBuildConfiguration section */
/* Begin XCConfigurationList section */
D110000E2F70000100112233 /* Build configuration list for PBXNativeTarget "BurrowUITests" */ = {
isa = XCConfigurationList;
buildConfigurations = (
D110000C2F70000100112233 /* Debug */,
D110000D2F70000100112233 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
D020F65E29E4A697002790F6 /* Build configuration list for PBXNativeTarget "NetworkExtension" */ = {
isa = XCConfigurationList;
buildConfigurations = (
@@ -929,16 +1006,6 @@
package = D0B1D10E2C436152004B7823 /* XCRemoteSwiftPackageReference "swift-async-algorithms" */;
productName = AsyncAlgorithms;
};
D0F7595D2C8DB24400126CF3 /* GRPCSwiftPlugin */ = {
isa = XCSwiftPackageProductDependency;
package = D0D4E4822C8D8EF6007F820A /* XCRemoteSwiftPackageReference "grpc-swift" */;
productName = "plugin:GRPCSwiftPlugin";
};
D0F7595F2C8DB24400126CF3 /* SwiftProtobufPlugin */ = {
isa = XCSwiftPackageProductDependency;
package = D0D4E4852C8D8F29007F820A /* XCRemoteSwiftPackageReference "swift-protobuf" */;
productName = "plugin:SwiftProtobufPlugin";
};
D0F7597D2C8DB30500126CF3 /* CGRPCZlib */ = {
isa = XCSwiftPackageProductDependency;
package = D0D4E4822C8D8EF6007F820A /* XCRemoteSwiftPackageReference "grpc-swift" */;

View file

@@ -28,7 +28,20 @@
selectedDebuggerIdentifier = "Xcode.DebuggerFoundation.Debugger.LLDB"
selectedLauncherIdentifier = "Xcode.DebuggerFoundation.Launcher.LLDB"
shouldUseLaunchSchemeArgsEnv = "YES"
shouldAutocreateTestPlan = "YES">
shouldAutocreateTestPlan = "NO">
<Testables>
<TestableReference
skipped = "NO"
parallelizable = "YES">
<BuildableReference
BuildableIdentifier = "primary"
BlueprintIdentifier = "D11000082F70000100112233"
BuildableName = "BurrowUITests.xctest"
BlueprintName = "BurrowUITests"
ReferencedContainer = "container:Burrow.xcodeproj">
</BuildableReference>
</TestableReference>
</Testables>
</TestAction>
<LaunchAction
buildConfiguration = "Debug"

View file

@@ -40,5 +40,4 @@ APP_GROUP_IDENTIFIER = group.$(APP_BUNDLE_IDENTIFIER)
APP_GROUP_IDENTIFIER[sdk=macosx*] = $(DEVELOPMENT_TEAM).$(APP_BUNDLE_IDENTIFIER)
NETWORK_EXTENSION_BUNDLE_IDENTIFIER = $(APP_BUNDLE_IDENTIFIER).network
// https://github.com/grpc/grpc-swift/issues/683#issuecomment-1130118953
OTHER_SWIFT_FLAGS = $(inherited) -Xcc -fmodule-map-file=$(GENERATED_MODULEMAP_DIR)/CNIOAtomics.modulemap -Xcc -fmodule-map-file=$(GENERATED_MODULEMAP_DIR)/CNIODarwin.modulemap -Xcc -fmodule-map-file=$(GENERATED_MODULEMAP_DIR)/CGRPCZlib.modulemap
OTHER_SWIFT_FLAGS = $(inherited)

View file

@@ -1,4 +1,5 @@
@_implementationOnly import CConstants
import Foundation
import OSLog
public enum Constants {
@@ -27,9 +28,26 @@ public enum Constants {
private static let _groupContainerURL: Result<URL, Error> = {
switch FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: appGroupIdentifier) {
case .some(let url): .success(url)
case .none: .failure(.invalidAppGroupIdentifier)
case .none:
fallbackContainerURL().mapError { _ in .invalidAppGroupIdentifier }
}
}()
private static func fallbackContainerURL() -> Result<URL, any Swift.Error> {
#if targetEnvironment(simulator)
Result {
// The simulator app's Application Support path lives inside its sandbox container,
// so the host daemon cannot reach it. Use a shared host temp location instead.
let url = URL(filePath: "/tmp", directoryHint: .isDirectory)
.appending(component: bundleIdentifier, directoryHint: .isDirectory)
.appending(component: "SimulatorFallback", directoryHint: .isDirectory)
try FileManager.default.createDirectory(at: url, withIntermediateDirectories: true)
return url
}
#else
.failure(Error.invalidAppGroupIdentifier)
#endif
}
}
extension Logger {

View file

@@ -0,0 +1,14 @@
#include "Compiler.xcconfig"
SUPPORTED_PLATFORMS = iphonesimulator iphoneos
TARGETED_DEVICE_FAMILY[sdk=iphone*] = 1,2
PRODUCT_NAME = $(TARGET_NAME)
PRODUCT_BUNDLE_IDENTIFIER = $(APP_BUNDLE_IDENTIFIER).uitests
STRING_CATALOG_GENERATE_SYMBOLS = NO
SWIFT_EMIT_LOC_STRINGS = NO
ALWAYS_EMBED_SWIFT_STANDARD_LIBRARIES = YES
LD_RUNPATH_SEARCH_PATHS = $(inherited) @executable_path/Frameworks @loader_path/Frameworks
TEST_TARGET_NAME = App

View file

@@ -1,5 +1,7 @@
import Foundation
import GRPC
import NIOTransportServices
import SwiftProtobuf
public typealias TunnelClient = Burrow_TunnelAsyncClient
public typealias NetworksClient = Burrow_NetworksAsyncClient
@@ -30,3 +32,477 @@ extension NetworksClient: Client {
self.init(channel: channel, defaultCallOptions: .init(), interceptors: .none)
}
}
public struct Burrow_TailnetDiscoverRequest: Sendable {
    public var email: String = ""
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TailnetDiscoverResponse: Sendable {
    public var domain: String = ""
    public var authority: String = ""
    public var oidcIssuer: String = ""
    public var managed: Bool = false
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TailnetProbeRequest: Sendable {
    public var authority: String = ""
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TailnetProbeResponse: Sendable {
    public var authority: String = ""
    public var statusCode: Int32 = 0
    public var summary: String = ""
    public var detail: String = ""
    public var reachable: Bool = false
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TailnetLoginStartRequest: Sendable {
    public var accountName: String = ""
    public var identityName: String = ""
    public var hostname: String = ""
    public var authority: String = ""
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TailnetLoginStatusRequest: Sendable {
    public var sessionID: String = ""
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TailnetLoginCancelRequest: Sendable {
    public var sessionID: String = ""
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TailnetLoginStatusResponse: Sendable {
    public var sessionID: String = ""
    public var backendState: String = ""
    public var authURL: String = ""
    public var running: Bool = false
    public var needsLogin: Bool = false
    public var tailnetName: String = ""
    public var magicDNSSuffix: String = ""
    public var selfDNSName: String = ""
    public var tailnetIPs: [String] = []
    public var health: [String] = []
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}

public struct Burrow_TunnelPacket: Sendable {
    public var payload = Data()
    public var unknownFields = SwiftProtobuf.UnknownStorage()
    public init() {}
}
extension Burrow_TailnetDiscoverRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
    public static let protoMessageName: String = "burrow.TailnetDiscoverRequest"
    public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
        1: .same(proto: "email")
    ]

    public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
        while let fieldNumber = try decoder.nextFieldNumber() {
            switch fieldNumber {
            case 1: try decoder.decodeSingularStringField(value: &self.email)
            default: break
            }
        }
    }

    public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
        if !self.email.isEmpty {
            try visitor.visitSingularStringField(value: self.email, fieldNumber: 1)
        }
        try unknownFields.traverse(visitor: &visitor)
    }
}
extension Burrow_TailnetDiscoverResponse: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TailnetDiscoverResponse"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "domain"),
2: .same(proto: "authority"),
3: .standard(proto: "oidc_issuer"),
4: .same(proto: "managed"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularStringField(value: &self.domain)
case 2: try decoder.decodeSingularStringField(value: &self.authority)
case 3: try decoder.decodeSingularStringField(value: &self.oidcIssuer)
case 4: try decoder.decodeSingularBoolField(value: &self.managed)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.domain.isEmpty {
try visitor.visitSingularStringField(value: self.domain, fieldNumber: 1)
}
if !self.authority.isEmpty {
try visitor.visitSingularStringField(value: self.authority, fieldNumber: 2)
}
if !self.oidcIssuer.isEmpty {
try visitor.visitSingularStringField(value: self.oidcIssuer, fieldNumber: 3)
}
if self.managed {
try visitor.visitSingularBoolField(value: self.managed, fieldNumber: 4)
}
try unknownFields.traverse(visitor: &visitor)
}
}
extension Burrow_TailnetProbeRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TailnetProbeRequest"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "authority")
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularStringField(value: &self.authority)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.authority.isEmpty {
try visitor.visitSingularStringField(value: self.authority, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
}
extension Burrow_TailnetProbeResponse: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TailnetProbeResponse"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "authority"),
2: .standard(proto: "status_code"),
3: .same(proto: "summary"),
4: .same(proto: "detail"),
5: .same(proto: "reachable"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularStringField(value: &self.authority)
case 2: try decoder.decodeSingularInt32Field(value: &self.statusCode)
case 3: try decoder.decodeSingularStringField(value: &self.summary)
case 4: try decoder.decodeSingularStringField(value: &self.detail)
case 5: try decoder.decodeSingularBoolField(value: &self.reachable)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.authority.isEmpty {
try visitor.visitSingularStringField(value: self.authority, fieldNumber: 1)
}
if self.statusCode != 0 {
try visitor.visitSingularInt32Field(value: self.statusCode, fieldNumber: 2)
}
if !self.summary.isEmpty {
try visitor.visitSingularStringField(value: self.summary, fieldNumber: 3)
}
if !self.detail.isEmpty {
try visitor.visitSingularStringField(value: self.detail, fieldNumber: 4)
}
if self.reachable {
try visitor.visitSingularBoolField(value: self.reachable, fieldNumber: 5)
}
try unknownFields.traverse(visitor: &visitor)
}
}
extension Burrow_TailnetLoginStartRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TailnetLoginStartRequest"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .standard(proto: "account_name"),
2: .standard(proto: "identity_name"),
3: .same(proto: "hostname"),
4: .same(proto: "authority"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularStringField(value: &self.accountName)
case 2: try decoder.decodeSingularStringField(value: &self.identityName)
case 3: try decoder.decodeSingularStringField(value: &self.hostname)
case 4: try decoder.decodeSingularStringField(value: &self.authority)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.accountName.isEmpty {
try visitor.visitSingularStringField(value: self.accountName, fieldNumber: 1)
}
if !self.identityName.isEmpty {
try visitor.visitSingularStringField(value: self.identityName, fieldNumber: 2)
}
if !self.hostname.isEmpty {
try visitor.visitSingularStringField(value: self.hostname, fieldNumber: 3)
}
if !self.authority.isEmpty {
try visitor.visitSingularStringField(value: self.authority, fieldNumber: 4)
}
try unknownFields.traverse(visitor: &visitor)
}
}
extension Burrow_TailnetLoginStatusRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TailnetLoginStatusRequest"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .standard(proto: "session_id")
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularStringField(value: &self.sessionID)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.sessionID.isEmpty {
try visitor.visitSingularStringField(value: self.sessionID, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
}
extension Burrow_TailnetLoginCancelRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TailnetLoginCancelRequest"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .standard(proto: "session_id")
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularStringField(value: &self.sessionID)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.sessionID.isEmpty {
try visitor.visitSingularStringField(value: self.sessionID, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
}
extension Burrow_TailnetLoginStatusResponse: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TailnetLoginStatusResponse"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .standard(proto: "session_id"),
2: .standard(proto: "backend_state"),
3: .standard(proto: "auth_url"),
4: .same(proto: "running"),
5: .standard(proto: "needs_login"),
6: .standard(proto: "tailnet_name"),
7: .standard(proto: "magic_dns_suffix"),
8: .standard(proto: "self_dns_name"),
9: .standard(proto: "tailnet_ips"),
10: .same(proto: "health"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularStringField(value: &self.sessionID)
case 2: try decoder.decodeSingularStringField(value: &self.backendState)
case 3: try decoder.decodeSingularStringField(value: &self.authURL)
case 4: try decoder.decodeSingularBoolField(value: &self.running)
case 5: try decoder.decodeSingularBoolField(value: &self.needsLogin)
case 6: try decoder.decodeSingularStringField(value: &self.tailnetName)
case 7: try decoder.decodeSingularStringField(value: &self.magicDNSSuffix)
case 8: try decoder.decodeSingularStringField(value: &self.selfDNSName)
case 9: try decoder.decodeRepeatedStringField(value: &self.tailnetIPs)
case 10: try decoder.decodeRepeatedStringField(value: &self.health)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.sessionID.isEmpty {
try visitor.visitSingularStringField(value: self.sessionID, fieldNumber: 1)
}
if !self.backendState.isEmpty {
try visitor.visitSingularStringField(value: self.backendState, fieldNumber: 2)
}
if !self.authURL.isEmpty {
try visitor.visitSingularStringField(value: self.authURL, fieldNumber: 3)
}
if self.running {
try visitor.visitSingularBoolField(value: self.running, fieldNumber: 4)
}
if self.needsLogin {
try visitor.visitSingularBoolField(value: self.needsLogin, fieldNumber: 5)
}
if !self.tailnetName.isEmpty {
try visitor.visitSingularStringField(value: self.tailnetName, fieldNumber: 6)
}
if !self.magicDNSSuffix.isEmpty {
try visitor.visitSingularStringField(value: self.magicDNSSuffix, fieldNumber: 7)
}
if !self.selfDNSName.isEmpty {
try visitor.visitSingularStringField(value: self.selfDNSName, fieldNumber: 8)
}
if !self.tailnetIPs.isEmpty {
try visitor.visitRepeatedStringField(value: self.tailnetIPs, fieldNumber: 9)
}
if !self.health.isEmpty {
try visitor.visitRepeatedStringField(value: self.health, fieldNumber: 10)
}
try unknownFields.traverse(visitor: &visitor)
}
}
extension Burrow_TunnelPacket: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = "burrow.TunnelPacket"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "payload")
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
switch fieldNumber {
case 1: try decoder.decodeSingularBytesField(value: &self.payload)
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.payload.isEmpty {
try visitor.visitSingularBytesField(value: self.payload, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
}
public struct TailnetClient: Client, GRPCClient {
public let channel: GRPCChannel
public var defaultCallOptions: CallOptions
public init(channel: any GRPCChannel) {
self.channel = channel
self.defaultCallOptions = .init()
}
public func discover(
_ request: Burrow_TailnetDiscoverRequest,
callOptions: CallOptions? = nil
) async throws -> Burrow_TailnetDiscoverResponse {
try await self.performAsyncUnaryCall(
path: "/burrow.TailnetControl/Discover",
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: []
)
}
public func probe(
_ request: Burrow_TailnetProbeRequest,
callOptions: CallOptions? = nil
) async throws -> Burrow_TailnetProbeResponse {
try await self.performAsyncUnaryCall(
path: "/burrow.TailnetControl/Probe",
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: []
)
}
public func loginStart(
_ request: Burrow_TailnetLoginStartRequest,
callOptions: CallOptions? = nil
) async throws -> Burrow_TailnetLoginStatusResponse {
try await self.performAsyncUnaryCall(
path: "/burrow.TailnetControl/LoginStart",
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: []
)
}
public func loginStatus(
_ request: Burrow_TailnetLoginStatusRequest,
callOptions: CallOptions? = nil
) async throws -> Burrow_TailnetLoginStatusResponse {
try await self.performAsyncUnaryCall(
path: "/burrow.TailnetControl/LoginStatus",
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: []
)
}
public func loginCancel(
_ request: Burrow_TailnetLoginCancelRequest,
callOptions: CallOptions? = nil
) async throws -> Burrow_Empty {
try await self.performAsyncUnaryCall(
path: "/burrow.TailnetControl/LoginCancel",
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: []
)
}
}
public struct TunnelPacketClient: Client, GRPCClient {
public let channel: GRPCChannel
public var defaultCallOptions: CallOptions
public init(channel: any GRPCChannel) {
self.channel = channel
self.defaultCallOptions = .init()
}
public func makeTunnelPacketsCall(
callOptions: CallOptions? = nil
) -> GRPCAsyncBidirectionalStreamingCall<Burrow_TunnelPacket, Burrow_TunnelPacket> {
self.makeAsyncBidirectionalStreamingCall(
path: "/burrow.Tunnel/TunnelPackets",
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: []
)
}
}
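// Usage sketch (illustrative, not part of the generated surface): discover the
// tailnet authority for an email, start a login, and inspect the returned
// status. Channel construction and the polling loop are assumptions; the
// daemon's actual socket address is configured by the host app.
//
//     let client = TailnetClient(channel: channel)
//     let discovery = try await client.discover(.with { $0.email = email })
//     var start = Burrow_TailnetLoginStartRequest()
//     start.authority = discovery.authority
//     let status = try await client.loginStart(start)
//     if !status.authURL.isEmpty {
//         // Open status.authURL in a browser, then poll loginStatus with the
//         // returned sessionID until needsLogin clears and running is true.
//     }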
//
// DO NOT EDIT.
// swift-format-ignore-file
//
// Generated by the protocol buffer compiler.
// Source: burrow.proto
//
import GRPC
import NIO
import NIOConcurrencyHelpers
import SwiftProtobuf
/// Usage: instantiate `Burrow_TunnelClient`, then call methods of this protocol to make API calls.
public protocol Burrow_TunnelClientProtocol: GRPCClient {
var serviceName: String { get }
var interceptors: Burrow_TunnelClientInterceptorFactoryProtocol? { get }
func tunnelConfiguration(
_ request: Burrow_Empty,
callOptions: CallOptions?,
handler: @escaping (Burrow_TunnelConfigurationResponse) -> Void
) -> ServerStreamingCall<Burrow_Empty, Burrow_TunnelConfigurationResponse>
func tunnelStart(
_ request: Burrow_Empty,
callOptions: CallOptions?
) -> UnaryCall<Burrow_Empty, Burrow_Empty>
func tunnelStop(
_ request: Burrow_Empty,
callOptions: CallOptions?
) -> UnaryCall<Burrow_Empty, Burrow_Empty>
func tunnelStatus(
_ request: Burrow_Empty,
callOptions: CallOptions?,
handler: @escaping (Burrow_TunnelStatusResponse) -> Void
) -> ServerStreamingCall<Burrow_Empty, Burrow_TunnelStatusResponse>
}
extension Burrow_TunnelClientProtocol {
public var serviceName: String {
return "burrow.Tunnel"
}
/// Server streaming call to TunnelConfiguration
///
/// - Parameters:
/// - request: Request to send to TunnelConfiguration.
/// - callOptions: Call options.
/// - handler: A closure called when each response is received from the server.
/// - Returns: A `ServerStreamingCall` with futures for the metadata and status.
public func tunnelConfiguration(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil,
handler: @escaping (Burrow_TunnelConfigurationResponse) -> Void
) -> ServerStreamingCall<Burrow_Empty, Burrow_TunnelConfigurationResponse> {
return self.makeServerStreamingCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelConfiguration.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelConfigurationInterceptors() ?? [],
handler: handler
)
}
/// Unary call to TunnelStart
///
/// - Parameters:
/// - request: Request to send to TunnelStart.
/// - callOptions: Call options.
/// - Returns: A `UnaryCall` with futures for the metadata, status and response.
public func tunnelStart(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> UnaryCall<Burrow_Empty, Burrow_Empty> {
return self.makeUnaryCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStart.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStartInterceptors() ?? []
)
}
/// Unary call to TunnelStop
///
/// - Parameters:
/// - request: Request to send to TunnelStop.
/// - callOptions: Call options.
/// - Returns: A `UnaryCall` with futures for the metadata, status and response.
public func tunnelStop(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> UnaryCall<Burrow_Empty, Burrow_Empty> {
return self.makeUnaryCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStop.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStopInterceptors() ?? []
)
}
/// Server streaming call to TunnelStatus
///
/// - Parameters:
/// - request: Request to send to TunnelStatus.
/// - callOptions: Call options.
/// - handler: A closure called when each response is received from the server.
/// - Returns: A `ServerStreamingCall` with futures for the metadata and status.
public func tunnelStatus(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil,
handler: @escaping (Burrow_TunnelStatusResponse) -> Void
) -> ServerStreamingCall<Burrow_Empty, Burrow_TunnelStatusResponse> {
return self.makeServerStreamingCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStatus.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStatusInterceptors() ?? [],
handler: handler
)
}
}
@available(*, deprecated)
extension Burrow_TunnelClient: @unchecked Sendable {}
@available(*, deprecated, renamed: "Burrow_TunnelNIOClient")
public final class Burrow_TunnelClient: Burrow_TunnelClientProtocol {
private let lock = Lock()
private var _defaultCallOptions: CallOptions
private var _interceptors: Burrow_TunnelClientInterceptorFactoryProtocol?
public let channel: GRPCChannel
public var defaultCallOptions: CallOptions {
get { self.lock.withLock { return self._defaultCallOptions } }
set { self.lock.withLockVoid { self._defaultCallOptions = newValue } }
}
public var interceptors: Burrow_TunnelClientInterceptorFactoryProtocol? {
get { self.lock.withLock { return self._interceptors } }
set { self.lock.withLockVoid { self._interceptors = newValue } }
}
/// Creates a client for the burrow.Tunnel service.
///
/// - Parameters:
/// - channel: `GRPCChannel` to the service host.
/// - defaultCallOptions: Options to use for each service call if the user doesn't provide them.
/// - interceptors: A factory providing interceptors for each RPC.
public init(
channel: GRPCChannel,
defaultCallOptions: CallOptions = CallOptions(),
interceptors: Burrow_TunnelClientInterceptorFactoryProtocol? = nil
) {
self.channel = channel
self._defaultCallOptions = defaultCallOptions
self._interceptors = interceptors
}
}
public struct Burrow_TunnelNIOClient: Burrow_TunnelClientProtocol {
public var channel: GRPCChannel
public var defaultCallOptions: CallOptions
public var interceptors: Burrow_TunnelClientInterceptorFactoryProtocol?
/// Creates a client for the burrow.Tunnel service.
///
/// - Parameters:
/// - channel: `GRPCChannel` to the service host.
/// - defaultCallOptions: Options to use for each service call if the user doesn't provide them.
/// - interceptors: A factory providing interceptors for each RPC.
public init(
channel: GRPCChannel,
defaultCallOptions: CallOptions = CallOptions(),
interceptors: Burrow_TunnelClientInterceptorFactoryProtocol? = nil
) {
self.channel = channel
self.defaultCallOptions = defaultCallOptions
self.interceptors = interceptors
}
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
public protocol Burrow_TunnelAsyncClientProtocol: GRPCClient {
static var serviceDescriptor: GRPCServiceDescriptor { get }
var interceptors: Burrow_TunnelClientInterceptorFactoryProtocol? { get }
func makeTunnelConfigurationCall(
_ request: Burrow_Empty,
callOptions: CallOptions?
) -> GRPCAsyncServerStreamingCall<Burrow_Empty, Burrow_TunnelConfigurationResponse>
func makeTunnelStartCall(
_ request: Burrow_Empty,
callOptions: CallOptions?
) -> GRPCAsyncUnaryCall<Burrow_Empty, Burrow_Empty>
func makeTunnelStopCall(
_ request: Burrow_Empty,
callOptions: CallOptions?
) -> GRPCAsyncUnaryCall<Burrow_Empty, Burrow_Empty>
func makeTunnelStatusCall(
_ request: Burrow_Empty,
callOptions: CallOptions?
) -> GRPCAsyncServerStreamingCall<Burrow_Empty, Burrow_TunnelStatusResponse>
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
extension Burrow_TunnelAsyncClientProtocol {
public static var serviceDescriptor: GRPCServiceDescriptor {
return Burrow_TunnelClientMetadata.serviceDescriptor
}
public var interceptors: Burrow_TunnelClientInterceptorFactoryProtocol? {
return nil
}
public func makeTunnelConfigurationCall(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncServerStreamingCall<Burrow_Empty, Burrow_TunnelConfigurationResponse> {
return self.makeAsyncServerStreamingCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelConfiguration.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelConfigurationInterceptors() ?? []
)
}
public func makeTunnelStartCall(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncUnaryCall<Burrow_Empty, Burrow_Empty> {
return self.makeAsyncUnaryCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStart.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStartInterceptors() ?? []
)
}
public func makeTunnelStopCall(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncUnaryCall<Burrow_Empty, Burrow_Empty> {
return self.makeAsyncUnaryCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStop.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStopInterceptors() ?? []
)
}
public func makeTunnelStatusCall(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncServerStreamingCall<Burrow_Empty, Burrow_TunnelStatusResponse> {
return self.makeAsyncServerStreamingCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStatus.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStatusInterceptors() ?? []
)
}
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
extension Burrow_TunnelAsyncClientProtocol {
public func tunnelConfiguration(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncResponseStream<Burrow_TunnelConfigurationResponse> {
return self.performAsyncServerStreamingCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelConfiguration.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelConfigurationInterceptors() ?? []
)
}
public func tunnelStart(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) async throws -> Burrow_Empty {
return try await self.performAsyncUnaryCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStart.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStartInterceptors() ?? []
)
}
public func tunnelStop(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) async throws -> Burrow_Empty {
return try await self.performAsyncUnaryCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStop.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStopInterceptors() ?? []
)
}
public func tunnelStatus(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncResponseStream<Burrow_TunnelStatusResponse> {
return self.performAsyncServerStreamingCall(
path: Burrow_TunnelClientMetadata.Methods.tunnelStatus.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeTunnelStatusInterceptors() ?? []
)
}
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
public struct Burrow_TunnelAsyncClient: Burrow_TunnelAsyncClientProtocol {
public var channel: GRPCChannel
public var defaultCallOptions: CallOptions
public var interceptors: Burrow_TunnelClientInterceptorFactoryProtocol?
public init(
channel: GRPCChannel,
defaultCallOptions: CallOptions = CallOptions(),
interceptors: Burrow_TunnelClientInterceptorFactoryProtocol? = nil
) {
self.channel = channel
self.defaultCallOptions = defaultCallOptions
self.interceptors = interceptors
}
}
public protocol Burrow_TunnelClientInterceptorFactoryProtocol: Sendable {
/// - Returns: Interceptors to use when invoking 'tunnelConfiguration'.
func makeTunnelConfigurationInterceptors() -> [ClientInterceptor<Burrow_Empty, Burrow_TunnelConfigurationResponse>]
/// - Returns: Interceptors to use when invoking 'tunnelStart'.
func makeTunnelStartInterceptors() -> [ClientInterceptor<Burrow_Empty, Burrow_Empty>]
/// - Returns: Interceptors to use when invoking 'tunnelStop'.
func makeTunnelStopInterceptors() -> [ClientInterceptor<Burrow_Empty, Burrow_Empty>]
/// - Returns: Interceptors to use when invoking 'tunnelStatus'.
func makeTunnelStatusInterceptors() -> [ClientInterceptor<Burrow_Empty, Burrow_TunnelStatusResponse>]
}
public enum Burrow_TunnelClientMetadata {
public static let serviceDescriptor = GRPCServiceDescriptor(
name: "Tunnel",
fullName: "burrow.Tunnel",
methods: [
Burrow_TunnelClientMetadata.Methods.tunnelConfiguration,
Burrow_TunnelClientMetadata.Methods.tunnelStart,
Burrow_TunnelClientMetadata.Methods.tunnelStop,
Burrow_TunnelClientMetadata.Methods.tunnelStatus,
]
)
public enum Methods {
public static let tunnelConfiguration = GRPCMethodDescriptor(
name: "TunnelConfiguration",
path: "/burrow.Tunnel/TunnelConfiguration",
type: GRPCCallType.serverStreaming
)
public static let tunnelStart = GRPCMethodDescriptor(
name: "TunnelStart",
path: "/burrow.Tunnel/TunnelStart",
type: GRPCCallType.unary
)
public static let tunnelStop = GRPCMethodDescriptor(
name: "TunnelStop",
path: "/burrow.Tunnel/TunnelStop",
type: GRPCCallType.unary
)
public static let tunnelStatus = GRPCMethodDescriptor(
name: "TunnelStatus",
path: "/burrow.Tunnel/TunnelStatus",
type: GRPCCallType.serverStreaming
)
}
}
/// Usage: instantiate `Burrow_NetworksClient`, then call methods of this protocol to make API calls.
public protocol Burrow_NetworksClientProtocol: GRPCClient {
var serviceName: String { get }
var interceptors: Burrow_NetworksClientInterceptorFactoryProtocol? { get }
func networkAdd(
_ request: Burrow_Network,
callOptions: CallOptions?
) -> UnaryCall<Burrow_Network, Burrow_Empty>
func networkList(
_ request: Burrow_Empty,
callOptions: CallOptions?,
handler: @escaping (Burrow_NetworkListResponse) -> Void
) -> ServerStreamingCall<Burrow_Empty, Burrow_NetworkListResponse>
func networkReorder(
_ request: Burrow_NetworkReorderRequest,
callOptions: CallOptions?
) -> UnaryCall<Burrow_NetworkReorderRequest, Burrow_Empty>
func networkDelete(
_ request: Burrow_NetworkDeleteRequest,
callOptions: CallOptions?
) -> UnaryCall<Burrow_NetworkDeleteRequest, Burrow_Empty>
}
extension Burrow_NetworksClientProtocol {
public var serviceName: String {
return "burrow.Networks"
}
/// Unary call to NetworkAdd
///
/// - Parameters:
/// - request: Request to send to NetworkAdd.
/// - callOptions: Call options.
/// - Returns: A `UnaryCall` with futures for the metadata, status and response.
public func networkAdd(
_ request: Burrow_Network,
callOptions: CallOptions? = nil
) -> UnaryCall<Burrow_Network, Burrow_Empty> {
return self.makeUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkAdd.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkAddInterceptors() ?? []
)
}
/// Server streaming call to NetworkList
///
/// - Parameters:
/// - request: Request to send to NetworkList.
/// - callOptions: Call options.
/// - handler: A closure called when each response is received from the server.
/// - Returns: A `ServerStreamingCall` with futures for the metadata and status.
public func networkList(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil,
handler: @escaping (Burrow_NetworkListResponse) -> Void
) -> ServerStreamingCall<Burrow_Empty, Burrow_NetworkListResponse> {
return self.makeServerStreamingCall(
path: Burrow_NetworksClientMetadata.Methods.networkList.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkListInterceptors() ?? [],
handler: handler
)
}
/// Unary call to NetworkReorder
///
/// - Parameters:
/// - request: Request to send to NetworkReorder.
/// - callOptions: Call options.
/// - Returns: A `UnaryCall` with futures for the metadata, status and response.
public func networkReorder(
_ request: Burrow_NetworkReorderRequest,
callOptions: CallOptions? = nil
) -> UnaryCall<Burrow_NetworkReorderRequest, Burrow_Empty> {
return self.makeUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkReorder.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkReorderInterceptors() ?? []
)
}
/// Unary call to NetworkDelete
///
/// - Parameters:
/// - request: Request to send to NetworkDelete.
/// - callOptions: Call options.
/// - Returns: A `UnaryCall` with futures for the metadata, status and response.
public func networkDelete(
_ request: Burrow_NetworkDeleteRequest,
callOptions: CallOptions? = nil
) -> UnaryCall<Burrow_NetworkDeleteRequest, Burrow_Empty> {
return self.makeUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkDelete.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkDeleteInterceptors() ?? []
)
}
}
@available(*, deprecated)
extension Burrow_NetworksClient: @unchecked Sendable {}
@available(*, deprecated, renamed: "Burrow_NetworksNIOClient")
public final class Burrow_NetworksClient: Burrow_NetworksClientProtocol {
private let lock = Lock()
private var _defaultCallOptions: CallOptions
private var _interceptors: Burrow_NetworksClientInterceptorFactoryProtocol?
public let channel: GRPCChannel
public var defaultCallOptions: CallOptions {
get { self.lock.withLock { return self._defaultCallOptions } }
set { self.lock.withLockVoid { self._defaultCallOptions = newValue } }
}
public var interceptors: Burrow_NetworksClientInterceptorFactoryProtocol? {
get { self.lock.withLock { return self._interceptors } }
set { self.lock.withLockVoid { self._interceptors = newValue } }
}
/// Creates a client for the burrow.Networks service.
///
/// - Parameters:
/// - channel: `GRPCChannel` to the service host.
/// - defaultCallOptions: Options to use for each service call if the user doesn't provide them.
/// - interceptors: A factory providing interceptors for each RPC.
public init(
channel: GRPCChannel,
defaultCallOptions: CallOptions = CallOptions(),
interceptors: Burrow_NetworksClientInterceptorFactoryProtocol? = nil
) {
self.channel = channel
self._defaultCallOptions = defaultCallOptions
self._interceptors = interceptors
}
}
public struct Burrow_NetworksNIOClient: Burrow_NetworksClientProtocol {
public var channel: GRPCChannel
public var defaultCallOptions: CallOptions
public var interceptors: Burrow_NetworksClientInterceptorFactoryProtocol?
/// Creates a client for the burrow.Networks service.
///
/// - Parameters:
/// - channel: `GRPCChannel` to the service host.
/// - defaultCallOptions: Options to use for each service call if the user doesn't provide them.
/// - interceptors: A factory providing interceptors for each RPC.
public init(
channel: GRPCChannel,
defaultCallOptions: CallOptions = CallOptions(),
interceptors: Burrow_NetworksClientInterceptorFactoryProtocol? = nil
) {
self.channel = channel
self.defaultCallOptions = defaultCallOptions
self.interceptors = interceptors
}
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
public protocol Burrow_NetworksAsyncClientProtocol: GRPCClient {
static var serviceDescriptor: GRPCServiceDescriptor { get }
var interceptors: Burrow_NetworksClientInterceptorFactoryProtocol? { get }
func makeNetworkAddCall(
_ request: Burrow_Network,
callOptions: CallOptions?
) -> GRPCAsyncUnaryCall<Burrow_Network, Burrow_Empty>
func makeNetworkListCall(
_ request: Burrow_Empty,
callOptions: CallOptions?
) -> GRPCAsyncServerStreamingCall<Burrow_Empty, Burrow_NetworkListResponse>
func makeNetworkReorderCall(
_ request: Burrow_NetworkReorderRequest,
callOptions: CallOptions?
) -> GRPCAsyncUnaryCall<Burrow_NetworkReorderRequest, Burrow_Empty>
func makeNetworkDeleteCall(
_ request: Burrow_NetworkDeleteRequest,
callOptions: CallOptions?
) -> GRPCAsyncUnaryCall<Burrow_NetworkDeleteRequest, Burrow_Empty>
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
extension Burrow_NetworksAsyncClientProtocol {
public static var serviceDescriptor: GRPCServiceDescriptor {
return Burrow_NetworksClientMetadata.serviceDescriptor
}
public var interceptors: Burrow_NetworksClientInterceptorFactoryProtocol? {
return nil
}
public func makeNetworkAddCall(
_ request: Burrow_Network,
callOptions: CallOptions? = nil
) -> GRPCAsyncUnaryCall<Burrow_Network, Burrow_Empty> {
return self.makeAsyncUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkAdd.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkAddInterceptors() ?? []
)
}
public func makeNetworkListCall(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncServerStreamingCall<Burrow_Empty, Burrow_NetworkListResponse> {
return self.makeAsyncServerStreamingCall(
path: Burrow_NetworksClientMetadata.Methods.networkList.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkListInterceptors() ?? []
)
}
public func makeNetworkReorderCall(
_ request: Burrow_NetworkReorderRequest,
callOptions: CallOptions? = nil
) -> GRPCAsyncUnaryCall<Burrow_NetworkReorderRequest, Burrow_Empty> {
return self.makeAsyncUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkReorder.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkReorderInterceptors() ?? []
)
}
public func makeNetworkDeleteCall(
_ request: Burrow_NetworkDeleteRequest,
callOptions: CallOptions? = nil
) -> GRPCAsyncUnaryCall<Burrow_NetworkDeleteRequest, Burrow_Empty> {
return self.makeAsyncUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkDelete.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkDeleteInterceptors() ?? []
)
}
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
extension Burrow_NetworksAsyncClientProtocol {
public func networkAdd(
_ request: Burrow_Network,
callOptions: CallOptions? = nil
) async throws -> Burrow_Empty {
return try await self.performAsyncUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkAdd.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkAddInterceptors() ?? []
)
}
public func networkList(
_ request: Burrow_Empty,
callOptions: CallOptions? = nil
) -> GRPCAsyncResponseStream<Burrow_NetworkListResponse> {
return self.performAsyncServerStreamingCall(
path: Burrow_NetworksClientMetadata.Methods.networkList.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkListInterceptors() ?? []
)
}
public func networkReorder(
_ request: Burrow_NetworkReorderRequest,
callOptions: CallOptions? = nil
) async throws -> Burrow_Empty {
return try await self.performAsyncUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkReorder.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkReorderInterceptors() ?? []
)
}
public func networkDelete(
_ request: Burrow_NetworkDeleteRequest,
callOptions: CallOptions? = nil
) async throws -> Burrow_Empty {
return try await self.performAsyncUnaryCall(
path: Burrow_NetworksClientMetadata.Methods.networkDelete.path,
request: request,
callOptions: callOptions ?? self.defaultCallOptions,
interceptors: self.interceptors?.makeNetworkDeleteInterceptors() ?? []
)
}
}
@available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
public struct Burrow_NetworksAsyncClient: Burrow_NetworksAsyncClientProtocol {
public var channel: GRPCChannel
public var defaultCallOptions: CallOptions
public var interceptors: Burrow_NetworksClientInterceptorFactoryProtocol?
public init(
channel: GRPCChannel,
defaultCallOptions: CallOptions = CallOptions(),
interceptors: Burrow_NetworksClientInterceptorFactoryProtocol? = nil
) {
self.channel = channel
self.defaultCallOptions = defaultCallOptions
self.interceptors = interceptors
}
}
public protocol Burrow_NetworksClientInterceptorFactoryProtocol: Sendable {
/// - Returns: Interceptors to use when invoking 'networkAdd'.
func makeNetworkAddInterceptors() -> [ClientInterceptor<Burrow_Network, Burrow_Empty>]
/// - Returns: Interceptors to use when invoking 'networkList'.
func makeNetworkListInterceptors() -> [ClientInterceptor<Burrow_Empty, Burrow_NetworkListResponse>]
/// - Returns: Interceptors to use when invoking 'networkReorder'.
func makeNetworkReorderInterceptors() -> [ClientInterceptor<Burrow_NetworkReorderRequest, Burrow_Empty>]
/// - Returns: Interceptors to use when invoking 'networkDelete'.
func makeNetworkDeleteInterceptors() -> [ClientInterceptor<Burrow_NetworkDeleteRequest, Burrow_Empty>]
}
public enum Burrow_NetworksClientMetadata {
public static let serviceDescriptor = GRPCServiceDescriptor(
name: "Networks",
fullName: "burrow.Networks",
methods: [
Burrow_NetworksClientMetadata.Methods.networkAdd,
Burrow_NetworksClientMetadata.Methods.networkList,
Burrow_NetworksClientMetadata.Methods.networkReorder,
Burrow_NetworksClientMetadata.Methods.networkDelete,
]
)
public enum Methods {
public static let networkAdd = GRPCMethodDescriptor(
name: "NetworkAdd",
path: "/burrow.Networks/NetworkAdd",
type: GRPCCallType.unary
)
public static let networkList = GRPCMethodDescriptor(
name: "NetworkList",
path: "/burrow.Networks/NetworkList",
type: GRPCCallType.serverStreaming
)
public static let networkReorder = GRPCMethodDescriptor(
name: "NetworkReorder",
path: "/burrow.Networks/NetworkReorder",
type: GRPCCallType.unary
)
public static let networkDelete = GRPCMethodDescriptor(
name: "NetworkDelete",
path: "/burrow.Networks/NetworkDelete",
type: GRPCCallType.unary
)
}
}
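Taken together, the generated async client needs only a `GRPCChannel` to drive the four RPCs. A minimal sketch, assuming grpc-swift's `ClientConnection` and a plaintext TCP endpoint — the host, port, and surrounding function are illustrative placeholders, not part of the generated code (the real daemon listens on a unix socket):

```swift
import GRPC
import NIOPosix

// Hypothetical driver: list networks via the generated async client.
// The endpoint below is a placeholder for illustration only.
func listNetworks() async throws {
    let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
    defer { try? group.syncShutdownGracefully() }
    let channel = ClientConnection.insecure(group: group)
        .connect(host: "127.0.0.1", port: 50051)
    defer { try? channel.close().wait() }
    let client = Burrow_NetworksAsyncClient(channel: channel)
    // NetworkList is server-streaming: each element is a full snapshot.
    for try await snapshot in client.networkList(Burrow_Empty()) {
        print("networks:", snapshot.network.map(\.id))
    }
}
```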


@ -0,0 +1,598 @@
// DO NOT EDIT.
// swift-format-ignore-file
// swiftlint:disable all
//
// Generated by the Swift generator plugin for the protocol buffer compiler.
// Source: burrow.proto
//
// For information on using the generated types, please see the documentation:
// https://github.com/apple/swift-protobuf/
import Foundation
import SwiftProtobuf
// If the compiler emits an error on this type, it is because this file
// was generated by a version of the `protoc` Swift plug-in that is
// incompatible with the version of SwiftProtobuf to which you are linking.
// Please ensure that you are building against the same version of the API
// that was used to generate this file.
fileprivate struct _GeneratedWithProtocGenSwiftVersion: SwiftProtobuf.ProtobufAPIVersionCheck {
struct _2: SwiftProtobuf.ProtobufAPIVersion_2 {}
typealias Version = _2
}
public enum Burrow_NetworkType: SwiftProtobuf.Enum, Swift.CaseIterable {
public typealias RawValue = Int
case wireGuard // = 0
case tailnet // = 1
case UNRECOGNIZED(Int)
public init() {
self = .wireGuard
}
public init?(rawValue: Int) {
switch rawValue {
case 0: self = .wireGuard
case 1: self = .tailnet
default: self = .UNRECOGNIZED(rawValue)
}
}
public var rawValue: Int {
switch self {
case .wireGuard: return 0
case .tailnet: return 1
case .UNRECOGNIZED(let i): return i
}
}
// The compiler won't synthesize support with the UNRECOGNIZED case.
public static let allCases: [Burrow_NetworkType] = [
.wireGuard,
.tailnet,
]
}
public enum Burrow_State: SwiftProtobuf.Enum, Swift.CaseIterable {
public typealias RawValue = Int
case stopped // = 0
case running // = 1
case UNRECOGNIZED(Int)
public init() {
self = .stopped
}
public init?(rawValue: Int) {
switch rawValue {
case 0: self = .stopped
case 1: self = .running
default: self = .UNRECOGNIZED(rawValue)
}
}
public var rawValue: Int {
switch self {
case .stopped: return 0
case .running: return 1
case .UNRECOGNIZED(let i): return i
}
}
// The compiler won't synthesize support with the UNRECOGNIZED case.
public static let allCases: [Burrow_State] = [
.stopped,
.running,
]
}
public struct Burrow_NetworkReorderRequest: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var id: Int32 = 0
public var index: Int32 = 0
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
public struct Burrow_WireGuardPeer: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var endpoint: String = String()
public var subnet: [String] = []
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
public struct Burrow_WireGuardNetwork: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var address: String = String()
public var dns: String = String()
public var peer: [Burrow_WireGuardPeer] = []
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
public struct Burrow_NetworkDeleteRequest: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var id: Int32 = 0
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
public struct Burrow_Network: @unchecked Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var id: Int32 = 0
public var type: Burrow_NetworkType = .wireGuard
public var payload: Data = Data()
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
public struct Burrow_NetworkListResponse: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var network: [Burrow_Network] = []
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
public struct Burrow_Empty: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
public struct Burrow_TunnelStatusResponse: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var state: Burrow_State = .stopped
public var start: SwiftProtobuf.Google_Protobuf_Timestamp {
get {return _start ?? SwiftProtobuf.Google_Protobuf_Timestamp()}
set {_start = newValue}
}
/// Returns true if `start` has been explicitly set.
public var hasStart: Bool {return self._start != nil}
/// Clears the value of `start`. Subsequent reads from it will return its default value.
public mutating func clearStart() {self._start = nil}
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
fileprivate var _start: SwiftProtobuf.Google_Protobuf_Timestamp? = nil
}
public struct Burrow_TunnelConfigurationResponse: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
public var addresses: [String] = []
public var mtu: Int32 = 0
public var routes: [String] = []
public var dnsServers: [String] = []
public var searchDomains: [String] = []
public var includeDefaultRoute: Bool = false
public var unknownFields = SwiftProtobuf.UnknownStorage()
public init() {}
}
// MARK: - Code below here is support for the SwiftProtobuf runtime.
fileprivate let _protobuf_package = "burrow"
extension Burrow_NetworkType: SwiftProtobuf._ProtoNameProviding {
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
0: .same(proto: "WireGuard"),
1: .same(proto: "Tailnet"),
]
}
extension Burrow_State: SwiftProtobuf._ProtoNameProviding {
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
0: .same(proto: "Stopped"),
1: .same(proto: "Running"),
]
}
extension Burrow_NetworkReorderRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".NetworkReorderRequest"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "id"),
2: .same(proto: "index"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeSingularInt32Field(value: &self.id) }()
case 2: try { try decoder.decodeSingularInt32Field(value: &self.index) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if self.id != 0 {
try visitor.visitSingularInt32Field(value: self.id, fieldNumber: 1)
}
if self.index != 0 {
try visitor.visitSingularInt32Field(value: self.index, fieldNumber: 2)
}
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_NetworkReorderRequest, rhs: Burrow_NetworkReorderRequest) -> Bool {
if lhs.id != rhs.id {return false}
if lhs.index != rhs.index {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_WireGuardPeer: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".WireGuardPeer"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "endpoint"),
2: .same(proto: "subnet"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeSingularStringField(value: &self.endpoint) }()
case 2: try { try decoder.decodeRepeatedStringField(value: &self.subnet) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.endpoint.isEmpty {
try visitor.visitSingularStringField(value: self.endpoint, fieldNumber: 1)
}
if !self.subnet.isEmpty {
try visitor.visitRepeatedStringField(value: self.subnet, fieldNumber: 2)
}
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_WireGuardPeer, rhs: Burrow_WireGuardPeer) -> Bool {
if lhs.endpoint != rhs.endpoint {return false}
if lhs.subnet != rhs.subnet {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_WireGuardNetwork: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".WireGuardNetwork"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "address"),
2: .same(proto: "dns"),
3: .same(proto: "peer"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeSingularStringField(value: &self.address) }()
case 2: try { try decoder.decodeSingularStringField(value: &self.dns) }()
case 3: try { try decoder.decodeRepeatedMessageField(value: &self.peer) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.address.isEmpty {
try visitor.visitSingularStringField(value: self.address, fieldNumber: 1)
}
if !self.dns.isEmpty {
try visitor.visitSingularStringField(value: self.dns, fieldNumber: 2)
}
if !self.peer.isEmpty {
try visitor.visitRepeatedMessageField(value: self.peer, fieldNumber: 3)
}
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_WireGuardNetwork, rhs: Burrow_WireGuardNetwork) -> Bool {
if lhs.address != rhs.address {return false}
if lhs.dns != rhs.dns {return false}
if lhs.peer != rhs.peer {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_NetworkDeleteRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".NetworkDeleteRequest"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "id"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeSingularInt32Field(value: &self.id) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if self.id != 0 {
try visitor.visitSingularInt32Field(value: self.id, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_NetworkDeleteRequest, rhs: Burrow_NetworkDeleteRequest) -> Bool {
if lhs.id != rhs.id {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_Network: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".Network"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "id"),
2: .same(proto: "type"),
3: .same(proto: "payload"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeSingularInt32Field(value: &self.id) }()
case 2: try { try decoder.decodeSingularEnumField(value: &self.type) }()
case 3: try { try decoder.decodeSingularBytesField(value: &self.payload) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if self.id != 0 {
try visitor.visitSingularInt32Field(value: self.id, fieldNumber: 1)
}
if self.type != .wireGuard {
try visitor.visitSingularEnumField(value: self.type, fieldNumber: 2)
}
if !self.payload.isEmpty {
try visitor.visitSingularBytesField(value: self.payload, fieldNumber: 3)
}
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_Network, rhs: Burrow_Network) -> Bool {
if lhs.id != rhs.id {return false}
if lhs.type != rhs.type {return false}
if lhs.payload != rhs.payload {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_NetworkListResponse: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".NetworkListResponse"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "network"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeRepeatedMessageField(value: &self.network) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.network.isEmpty {
try visitor.visitRepeatedMessageField(value: self.network, fieldNumber: 1)
}
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_NetworkListResponse, rhs: Burrow_NetworkListResponse) -> Bool {
if lhs.network != rhs.network {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_Empty: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".Empty"
public static let _protobuf_nameMap = SwiftProtobuf._NameMap()
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
// Load everything into unknown fields
while try decoder.nextFieldNumber() != nil {}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_Empty, rhs: Burrow_Empty) -> Bool {
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_TunnelStatusResponse: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".TunnelStatusResponse"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "state"),
2: .same(proto: "start"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeSingularEnumField(value: &self.state) }()
case 2: try { try decoder.decodeSingularMessageField(value: &self._start) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every if/case branch local when no optimizations
// are enabled. https://github.com/apple/swift-protobuf/issues/1034 and
// https://github.com/apple/swift-protobuf/issues/1182
if self.state != .stopped {
try visitor.visitSingularEnumField(value: self.state, fieldNumber: 1)
}
try { if let v = self._start {
try visitor.visitSingularMessageField(value: v, fieldNumber: 2)
} }()
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_TunnelStatusResponse, rhs: Burrow_TunnelStatusResponse) -> Bool {
if lhs.state != rhs.state {return false}
if lhs._start != rhs._start {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
extension Burrow_TunnelConfigurationResponse: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
public static let protoMessageName: String = _protobuf_package + ".TunnelConfigurationResponse"
public static let _protobuf_nameMap: SwiftProtobuf._NameMap = [
1: .same(proto: "addresses"),
2: .same(proto: "mtu"),
3: .same(proto: "routes"),
4: .standard(proto: "dns_servers"),
5: .standard(proto: "search_domains"),
6: .standard(proto: "include_default_route"),
]
public mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
// The use of inline closures is to circumvent an issue where the compiler
// allocates stack space for every case branch when no optimizations are
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeRepeatedStringField(value: &self.addresses) }()
case 2: try { try decoder.decodeSingularInt32Field(value: &self.mtu) }()
case 3: try { try decoder.decodeRepeatedStringField(value: &self.routes) }()
case 4: try { try decoder.decodeRepeatedStringField(value: &self.dnsServers) }()
case 5: try { try decoder.decodeRepeatedStringField(value: &self.searchDomains) }()
case 6: try { try decoder.decodeSingularBoolField(value: &self.includeDefaultRoute) }()
default: break
}
}
}
public func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
if !self.addresses.isEmpty {
try visitor.visitRepeatedStringField(value: self.addresses, fieldNumber: 1)
}
if self.mtu != 0 {
try visitor.visitSingularInt32Field(value: self.mtu, fieldNumber: 2)
}
if !self.routes.isEmpty {
try visitor.visitRepeatedStringField(value: self.routes, fieldNumber: 3)
}
if !self.dnsServers.isEmpty {
try visitor.visitRepeatedStringField(value: self.dnsServers, fieldNumber: 4)
}
if !self.searchDomains.isEmpty {
try visitor.visitRepeatedStringField(value: self.searchDomains, fieldNumber: 5)
}
if self.includeDefaultRoute {
try visitor.visitSingularBoolField(value: self.includeDefaultRoute, fieldNumber: 6)
}
try unknownFields.traverse(visitor: &visitor)
}
public static func ==(lhs: Burrow_TunnelConfigurationResponse, rhs: Burrow_TunnelConfigurationResponse) -> Bool {
if lhs.addresses != rhs.addresses {return false}
if lhs.mtu != rhs.mtu {return false}
if lhs.routes != rhs.routes {return false}
if lhs.dnsServers != rhs.dnsServers {return false}
if lhs.searchDomains != rhs.searchDomains {return false}
if lhs.includeDefaultRoute != rhs.includeDefaultRoute {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
}
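The `Burrow_Network.payload` bytes are opaque at this layer; given the field names, a plausible reading is that a `.wireGuard` entry carries a serialized `Burrow_WireGuardNetwork`. A hedged sketch of that round trip using SwiftProtobuf's standard serialization — the pairing of `type` with a specific payload message is an assumption, not stated in the generated code:

```swift
// Sketch: wrap a WireGuard config in the generic Network envelope.
// Addresses are made-up illustration values.
var wg = Burrow_WireGuardNetwork()
wg.address = "10.0.0.2/32"
wg.dns = "10.0.0.1"

var network = Burrow_Network()
network.type = .wireGuard
network.payload = try wg.serializedData()

// Reading it back: switch on `type` before decoding the payload.
if network.type == .wireGuard {
    let decoded = try Burrow_WireGuardNetwork(serializedData: network.payload)
    assert(decoded.address == wg.address)
}
```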


@ -0,0 +1,64 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto3";
package google.protobuf;
option cc_enable_arenas = true;
option go_package = "google.golang.org/protobuf/types/known/timestamppb";
option java_package = "com.google.protobuf";
option java_outer_classname = "TimestampProto";
option java_multiple_files = true;
option objc_class_prefix = "GPB";
option csharp_namespace = "Google.Protobuf.WellKnownTypes";
// A Timestamp represents a point in time independent of any time zone or local
// calendar, encoded as a count of seconds and fractions of seconds at
// nanosecond resolution. The count is relative to an epoch at UTC midnight on
// January 1, 1970, in the proleptic Gregorian calendar which extends the
// Gregorian calendar backwards to year one.
//
// All minutes are 60 seconds long. Leap seconds are "smeared" so that no leap
// second table is needed for interpretation, using a 24-hour linear smear.
//
// The range is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z. By
// restricting to that range, we ensure that we can convert to and from RFC
// 3339 date strings.
message Timestamp {
// Represents seconds of UTC time since Unix epoch 1970-01-01T00:00:00Z.
// Must be from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z inclusive.
int64 seconds = 1;
// Non-negative fractions of a second at nanosecond resolution. Negative
// second values with fractions must still have non-negative nanos values
// that count forward in time. Must be from 0 to 999,999,999 inclusive.
int32 nanos = 2;
}
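SwiftProtobuf bridges this well-known type to Foundation's `Date`, which is how the `start` field of `Burrow_TunnelStatusResponse` above can be consumed on the client. A small sketch — the response value here is fabricated for illustration rather than coming from the daemon:

```swift
import Foundation
import SwiftProtobuf

// Build a status response whose `start` is "now".
var status = Burrow_TunnelStatusResponse()
status.state = .running
status.start = Google_Protobuf_Timestamp(date: Date())

// `hasStart` distinguishes an explicit value from the field's default.
if status.hasStart {
    let uptime = Date().timeIntervalSince(status.start.date)
    print("tunnel up for \(uptime) seconds")
}
```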


@ -1,11 +0,0 @@
{
"invocations": [
{
"protoFiles": [
"burrow.proto",
],
"server": false,
"visibility": "public"
}
]
}


@ -1,10 +0,0 @@
{
"invocations": [
{
"protoFiles": [
"burrow.proto",
],
"visibility": "public"
}
]
}


@ -1,21 +1,35 @@
import AsyncAlgorithms
import BurrowConfiguration
import BurrowCore
import GRPC
import libburrow
@preconcurrency import NetworkExtension
import NetworkExtension
import os
// Xcode 26 imports `startTunnel(options:)` as `[String: NSObject]?` and treats the
// override as crossing a nonisolated boundary. The extension target does not
// mutate or forward these Cocoa objects, so treat them as an unchecked escape hatch.
extension NSObject: @retroactive @unchecked Sendable {}
private final class SendableCallbackBox<Callback>: @unchecked Sendable {
let callback: Callback
class PacketTunnelProvider: NEPacketTunnelProvider {
init(_ callback: Callback) {
self.callback = callback
}
}
final class PacketTunnelProvider: NEPacketTunnelProvider, @unchecked Sendable {
enum Error: Swift.Error {
case missingTunnelConfiguration
}
private static let logger = Logger.logger(for: PacketTunnelProvider.self)
private let logger = Logger.logger(for: PacketTunnelProvider.self)
private var packetCall: GRPCAsyncBidirectionalStreamingCall<Burrow_TunnelPacket, Burrow_TunnelPacket>?
private var inboundPacketTask: Task<Void, Never>?
private var outboundPacketTask: Task<Void, Never>?
private var client: TunnelClient {
get throws { try _client.get() }
}
private let _client: Result<TunnelClient, Swift.Error> = Result {
try TunnelClient.unix(socketURL: Constants.socketURL)
}
override init() {
do {
@ -24,51 +38,289 @@ class PacketTunnelProvider: NEPacketTunnelProvider {
databasePath: try Constants.databaseURL.path(percentEncoded: false)
)
} catch {
Self.logger.error("Failed to spawn networking thread: \(error)")
logger.error("Failed to spawn networking thread: \(error)")
}
}
nonisolated override func startTunnel(options: [String: NSObject]? = nil) async throws {
do {
let client = try TunnelClient.unix(socketURL: Constants.socketURL)
let configuration = try await Array(client.tunnelConfiguration(.init()).prefix(1)).first
guard let settings = configuration?.settings else {
throw Error.missingTunnelConfiguration
override func startTunnel(
options: [String: NSObject]?,
completionHandler: @escaping (Swift.Error?) -> Void
) {
let completion = SendableCallbackBox(completionHandler)
Task {
do {
_ = try await client.tunnelStart(.init())
let configuration = try await Array(client.tunnelConfiguration(.init()).prefix(1)).first
guard let settings = configuration?.settings else {
throw Error.missingTunnelConfiguration
}
try await setTunnelNetworkSettings(settings)
try startPacketBridge()
logger.log("Started tunnel with network settings: \(settings)")
completion.callback(nil)
} catch {
logger.error("Failed to start tunnel: \(error)")
stopPacketBridge()
completion.callback(error)
}
}
}
override func stopTunnel(
with reason: NEProviderStopReason,
completionHandler: @escaping () -> Void
) {
let completion = SendableCallbackBox(completionHandler)
Task {
stopPacketBridge()
do {
_ = try await client.tunnelStop(.init())
logger.log("Stopped client")
} catch {
logger.error("Failed to stop tunnel: \(error)")
}
completion.callback()
}
}
}
extension PacketTunnelProvider {
private func startPacketBridge() throws {
stopPacketBridge()
let packetClient = TunnelPacketClient.unix(socketURL: try Constants.socketURL)
let call = packetClient.makeTunnelPacketsCall()
self.packetCall = call
inboundPacketTask = Task { [weak self] in
guard let self else { return }
do {
for try await packet in call.responseStream {
let payload = packet.payload
self.packetFlow.writePackets(
[payload],
withProtocols: [Self.protocolNumber(for: payload)]
)
}
} catch {
guard !Task.isCancelled else { return }
self.logger.error("Tunnel packet receive loop failed: \(error)")
}
}
outboundPacketTask = Task { [weak self] in
guard let self else { return }
defer { call.requestStream.finish() }
do {
while !Task.isCancelled {
let packets = await self.readPacketsBatch()
for (payload, _) in packets {
var packet = Burrow_TunnelPacket()
packet.payload = payload
try await call.requestStream.send(packet)
}
}
} catch {
guard !Task.isCancelled else { return }
self.logger.error("Tunnel packet send loop failed: \(error)")
}
}
}
private func stopPacketBridge() {
inboundPacketTask?.cancel()
inboundPacketTask = nil
outboundPacketTask?.cancel()
outboundPacketTask = nil
packetCall?.cancel()
packetCall = nil
}
private func readPacketsBatch() async -> [(Data, NSNumber)] {
await withCheckedContinuation { continuation in
packetFlow.readPackets { packets, protocols in
continuation.resume(returning: Array(zip(packets, protocols)))
}
}
}
private static func protocolNumber(for payload: Data) -> NSNumber {
guard let version = payload.first.map({ $0 >> 4 }) else {
return NSNumber(value: AF_INET)
}
switch version {
case 6:
return NSNumber(value: AF_INET6)
default:
return NSNumber(value: AF_INET)
}
}
}
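The inbound loop above tags each written payload with an address family taken from the first header nibble. A minimal Python sketch of the same check, for illustration only (`protocol_number` is a hypothetical name mirroring `protocolNumber(for:)`):

```python
import socket

def protocol_number(payload: bytes) -> int:
    # The high nibble of an IP packet's first byte is the version field:
    # 4 for IPv4, 6 for IPv6. Anything else (including an empty payload)
    # falls back to IPv4, matching the Swift helper above.
    if not payload:
        return socket.AF_INET
    return socket.AF_INET6 if payload[0] >> 4 == 6 else socket.AF_INET
```

Only version 6 is special-cased; malformed or truncated packets are deliberately treated as IPv4 rather than dropped.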
extension Burrow_TunnelConfigurationResponse {
fileprivate var settings: NEPacketTunnelNetworkSettings {
let parsedAddresses = addresses.compactMap(ParsedTunnelAddress.init(rawValue:))
let ipv4Addresses = parsedAddresses.compactMap(\.ipv4Address)
let ipv6Addresses = parsedAddresses.compactMap(\.ipv6Address)
let parsedRoutes = routes.compactMap(ParsedTunnelRoute.init(rawValue:))
var ipv4Routes = parsedRoutes.compactMap(\.ipv4Route)
var ipv6Routes = parsedRoutes.compactMap(\.ipv6Route)
if includeDefaultRoute {
ipv4Routes.append(.default())
ipv6Routes.append(.default())
}
let settings = NEPacketTunnelNetworkSettings(tunnelRemoteAddress: "1.1.1.1")
settings.mtu = NSNumber(value: mtu)
if !ipv4Addresses.isEmpty {
let ipv4Settings = NEIPv4Settings(
addresses: ipv4Addresses.map(\.address),
subnetMasks: ipv4Addresses.map(\.subnetMask)
)
if !ipv4Routes.isEmpty {
ipv4Settings.includedRoutes = ipv4Routes
}
settings.ipv4Settings = ipv4Settings
}
if !ipv6Addresses.isEmpty {
let ipv6Settings = NEIPv6Settings(
addresses: ipv6Addresses.map(\.address),
networkPrefixLengths: ipv6Addresses.map(\.prefixLength)
)
if !ipv6Routes.isEmpty {
ipv6Settings.includedRoutes = ipv6Routes
}
settings.ipv6Settings = ipv6Settings
}
if !dnsServers.isEmpty {
let dnsSettings = NEDNSSettings(servers: dnsServers)
if !searchDomains.isEmpty {
dnsSettings.matchDomains = searchDomains
}
settings.dnsSettings = dnsSettings
}
return settings
}
}
private struct ParsedTunnelAddress {
struct IPv4AddressSetting {
let address: String
let subnetMask: String
}
struct IPv6AddressSetting {
let address: String
let prefixLength: NSNumber
}
let ipv4Address: IPv4AddressSetting?
let ipv6Address: IPv6AddressSetting?
init?(rawValue: String) {
let components = rawValue.split(separator: "/", maxSplits: 1).map(String.init)
let address = components.first?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
guard !address.isEmpty else {
return nil
}
let prefix = components.count == 2 ? Int(components[1]) : nil
if IPv4Address(address) != nil {
let prefixLength = prefix ?? 32
guard (0 ... 32).contains(prefixLength) else {
return nil
}
ipv4Address = IPv4AddressSetting(
address: address,
subnetMask: Self.ipv4SubnetMask(prefixLength: prefixLength)
)
ipv6Address = nil
return
}
if IPv6Address(address) != nil {
let prefixLength = prefix ?? 128
guard (0 ... 128).contains(prefixLength) else {
return nil
}
ipv4Address = nil
ipv6Address = IPv6AddressSetting(
address: address,
prefixLength: NSNumber(value: prefixLength)
)
return
}
return nil
}
private static func ipv4SubnetMask(prefixLength: Int) -> String {
guard prefixLength > 0 else {
return "0.0.0.0"
}
let mask = UInt32.max << (32 - prefixLength)
let octets = [
(mask >> 24) & 0xff,
(mask >> 16) & 0xff,
(mask >> 8) & 0xff,
mask & 0xff,
]
return octets.map(String.init).joined(separator: ".")
}
}
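The mask derivation above is easy to sanity-check outside Swift; a quick Python equivalent of `ipv4SubnetMask(prefixLength:)` (name and error handling are illustrative):

```python
def ipv4_subnet_mask(prefix_length: int) -> str:
    # A /n mask is n leading one bits. /0 is special-cased because
    # shifting a 32-bit value by 32 is not well-defined in many languages.
    if not 0 <= prefix_length <= 32:
        raise ValueError("prefix length must be in 0...32")
    mask = 0 if prefix_length == 0 else (0xFFFFFFFF << (32 - prefix_length)) & 0xFFFFFFFF
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

The Swift version guards the zero prefix the same way before shifting.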
private struct ParsedTunnelRoute {
let ipv4Route: NEIPv4Route?
let ipv6Route: NEIPv6Route?
init?(rawValue: String) {
let components = rawValue.split(separator: "/", maxSplits: 1).map(String.init)
let address = components.first?.trimmingCharacters(in: .whitespacesAndNewlines) ?? ""
guard !address.isEmpty else {
return nil
}
let prefix = components.count == 2 ? Int(components[1]) : nil
if IPv4Address(address) != nil {
let prefixLength = prefix ?? 32
guard (0 ... 32).contains(prefixLength) else {
return nil
}
ipv4Route = NEIPv4Route(
destinationAddress: address,
subnetMask: Self.ipv4SubnetMask(prefixLength: prefixLength)
)
ipv6Route = nil
return
}
if IPv6Address(address) != nil {
let prefixLength = prefix ?? 128
guard (0 ... 128).contains(prefixLength) else {
return nil
}
ipv4Route = nil
ipv6Route = NEIPv6Route(
destinationAddress: address,
networkPrefixLength: NSNumber(value: prefixLength)
)
return
}
return nil
}
private static func ipv4SubnetMask(prefixLength: Int) -> String {
var mask = UInt32.max << (32 - prefixLength)
if prefixLength == 0 {
mask = 0
}
let octets = [
String((mask >> 24) & 0xff),
String((mask >> 16) & 0xff),
String((mask >> 8) & 0xff),
String(mask & 0xff),
]
return octets.joined(separator: ".")
}
}


@@ -62,79 +62,36 @@ else
CARGO_TARGET_SUBDIR="release"
fi
if [[ -x "$(command -v rustup)" ]]; then
CARGO_PATH="$(dirname $(rustup which cargo)):/usr/bin"
else
CARGO_PATH="$(dirname $(readlink -f $(which cargo))):/usr/bin"
fi
PROTOC=$(readlink -f $(which protoc))
CARGO_PATH="$(dirname $PROTOC):$CARGO_PATH"
# Run cargo without the various environment variables set by Xcode.
# Those variables can confuse cargo and the build scripts it runs.
CARGO_ENV=(
"PATH=$CARGO_PATH"
"PROTOC=$PROTOC"
"CARGO_TARGET_DIR=${CONFIGURATION_TEMP_DIR}/target"
)
if [[ -n "$IPHONEOS_DEPLOYMENT_TARGET" ]]; then
CARGO_ENV+=("IPHONEOS_DEPLOYMENT_TARGET=$IPHONEOS_DEPLOYMENT_TARGET")
fi
if [[ -n "$MACOSX_DEPLOYMENT_TARGET" ]]; then
CARGO_ENV+=("MACOSX_DEPLOYMENT_TARGET=$MACOSX_DEPLOYMENT_TARGET")
fi
env -i "${CARGO_ENV[@]}" cargo build "${CARGO_ARGS[@]}"
mkdir -p "${BUILT_PRODUCTS_DIR}"
# Use `lipo` to merge the architectures together into BUILT_PRODUCTS_DIR
/usr/bin/xcrun --sdk $PLATFORM_NAME lipo \
-create $(printf "${CONFIGURATION_TEMP_DIR}/target/%q/${CARGO_TARGET_SUBDIR}/libburrow.a " "${RUST_TARGETS[@]}") \
-output "${BUILT_PRODUCTS_DIR}/libburrow.a"


@@ -1,20 +0,0 @@
{
"colors" : [
{
"color" : {
"color-space" : "srgb",
"components" : {
"alpha" : "1.000",
"blue" : "0x50",
"green" : "0x37",
"red" : "0xEC"
}
},
"idiom" : "universal"
}
],
"info" : {
"author" : "xcode",
"version" : 1
}
}


@@ -1,12 +0,0 @@
{
"images" : [
{
"filename" : "flag-standalone-wtransparent.pdf",
"idiom" : "universal"
}
],
"info" : {
"author" : "xcode",
"version" : 1
}
}

File diff suppressed because it is too large


@@ -1,39 +1,61 @@
import SwiftUI
struct NetworkCarouselView: View {
var networks: [NetworkCardModel]
var body: some View {
Group {
if networks.isEmpty {
#if os(iOS)
VStack(alignment: .leading, spacing: 6) {
Text("No stored networks yet")
.font(.headline)
Text("WireGuard and Tailnet networks show up here as soon as you add one.")
.font(.footnote)
.foregroundStyle(.secondary)
}
.frame(maxWidth: .infinity, alignment: .leading)
.padding()
.background(
RoundedRectangle(cornerRadius: 18)
.fill(.thinMaterial)
)
#else
ContentUnavailableView(
"No Networks Yet",
systemImage: "network.slash",
description: Text("Add a WireGuard network, or save a Tailnet account so Burrow can store a managed network when the daemon is reachable.")
)
.frame(maxWidth: .infinity, minHeight: 175)
#endif
} else {
ScrollView(.horizontal) {
LazyHStack {
ForEach(networks) { network in
NetworkView(network: network)
.containerRelativeFrame(.horizontal, count: 10, span: 7, spacing: 0, alignment: .center)
.scrollTransition(.interactive, axis: .horizontal) { content, phase in
content
.scaleEffect(1.0 - abs(phase.value) * 0.1)
}
}
}
}
.scrollTargetLayout()
.scrollClipDisabled()
.scrollIndicators(.hidden)
.defaultScrollAnchor(.center)
.scrollTargetBehavior(.viewAligned)
.containerRelativeFrame(.horizontal)
}
}
}
}
#if DEBUG
struct NetworkCarouselView_Previews: PreviewProvider {
static var previews: some View {
NetworkCarouselView(networks: [WireGuardCard(id: 1, detail: "10.13.13.2/24 · wg.burrow.rs:51820").card])
}
}
#endif


@@ -105,7 +105,7 @@ public final class NetworkExtensionTunnel: Tunnel {
let proto = NETunnelProviderProtocol()
proto.providerBundleIdentifier = bundleIdentifier
proto.serverAddress = "burrow.rs"
manager.protocolConfiguration = proto
try await manager.save()


@@ -31,8 +31,8 @@ struct NetworkView<Content: View>: View {
}
extension NetworkView where Content == AnyView {
init(network: NetworkCardModel) {
color = network.backgroundColor
content = { network.label }
}
}


@@ -1,27 +0,0 @@
import BurrowCore
import SwiftUI
struct HackClub: Network {
typealias NetworkType = Burrow_WireGuardNetwork
static let type: Burrow_NetworkType = .hackClub
var id: Int32
var backgroundColor: Color { .init("HackClub") }
@MainActor var label: some View {
GeometryReader { reader in
VStack(alignment: .leading) {
Image("HackClub")
.resizable()
.aspectRatio(contentMode: .fit)
.frame(height: reader.size.height / 4)
Spacer()
Text("@conradev")
.foregroundStyle(.white)
.font(.body.monospaced())
}
.padding()
.frame(maxWidth: .infinity)
}
}
}


@@ -1,36 +1,623 @@
import Atomics
import BurrowConfiguration
import BurrowCore
import Foundation
import Security
import SwiftProtobuf
import SwiftUI
struct NetworkCardModel: Identifiable {
let id: Int32
let backgroundColor: Color
let label: AnyView
}
struct TailnetNetworkPayload: Codable, Sendable {
var provider: TailnetProvider
var authority: String?
var account: String
var identity: String
var tailnet: String?
var hostname: String?
func encoded() throws -> Data {
let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
return try encoder.encode(self)
}
}
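`encoded()` opts into `.sortedKeys` so that equal payloads always serialize to identical bytes, which keeps daemon-side storage and comparison stable. Python's `json` module has the same knob; the payload dict below is a hypothetical stand-in for `TailnetNetworkPayload`:

```python
import json

# sort_keys plays the role of JSONEncoder's .sortedKeys: two dicts with
# equal contents produce byte-identical output regardless of insertion order.
payload = {"provider": "tailscale", "account": "work", "identity": "conrad"}
encoded = json.dumps(payload, indent=2, sort_keys=True)
reordered = json.dumps(
    {"identity": "conrad", "account": "work", "provider": "tailscale"},
    indent=2, sort_keys=True,
)
```

Without sorted keys, re-saving the same account could produce different bytes and look like a spurious change.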
struct TailnetDiscoveryResponse: Codable, Sendable {
var domain: String
var provider: TailnetProvider
var authority: String
var oidcIssuer: String?
}
struct TailnetAuthorityProbeStatus: Sendable {
var authority: String
var statusCode: Int
var summary: String
var detail: String?
}
struct TailnetLoginStatus: Sendable {
var sessionID: String
var backendState: String
var authURL: URL?
var running: Bool
var needsLogin: Bool
var tailnetName: String?
var magicDNSSuffix: String?
var selfDNSName: String?
var tailnetIPs: [String]
var health: [String]
}
enum TailnetDiscoveryClient {
static func discover(email: String, socketURL: URL) async throws -> TailnetDiscoveryResponse {
var request = Burrow_TailnetDiscoverRequest()
request.email = email
let response = try await TailnetClient.unix(socketURL: socketURL).discover(request)
return TailnetDiscoveryResponse(
domain: response.domain,
provider: response.managed ? .tailscale : .headscale,
authority: response.authority,
oidcIssuer: response.oidcIssuer.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty
? nil
: response.oidcIssuer
)
}
}
enum TailnetAuthorityProbeClient {
static func probe(authority: String, socketURL: URL) async throws -> TailnetAuthorityProbeStatus {
var request = Burrow_TailnetProbeRequest()
request.authority = authority
let response = try await TailnetClient.unix(socketURL: socketURL).probe(request)
return TailnetAuthorityProbeStatus(
authority: response.authority,
statusCode: Int(response.statusCode),
summary: response.summary,
detail: response.detail.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty
? nil
: response.detail
)
}
}
enum TailnetLoginClient {
static func start(
accountName: String,
identityName: String,
hostname: String?,
authority: String,
socketURL: URL
) async throws -> TailnetLoginStatus {
var request = Burrow_TailnetLoginStartRequest()
request.accountName = accountName
request.identityName = identityName
request.hostname = hostname ?? ""
request.authority = authority
let response = try await TailnetClient.unix(socketURL: socketURL).loginStart(request)
return decode(response)
}
static func status(sessionID: String, socketURL: URL) async throws -> TailnetLoginStatus {
var request = Burrow_TailnetLoginStatusRequest()
request.sessionID = sessionID
let response = try await TailnetClient.unix(socketURL: socketURL).loginStatus(request)
return decode(response)
}
static func cancel(sessionID: String, socketURL: URL) async throws {
var request = Burrow_TailnetLoginCancelRequest()
request.sessionID = sessionID
_ = try await TailnetClient.unix(socketURL: socketURL).loginCancel(request)
}
private static func decode(_ response: Burrow_TailnetLoginStatusResponse) -> TailnetLoginStatus {
TailnetLoginStatus(
sessionID: response.sessionID,
backendState: response.backendState,
authURL: URL(string: response.authURL.trimmingCharacters(in: .whitespacesAndNewlines)),
running: response.running,
needsLogin: response.needsLogin,
tailnetName: response.tailnetName.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty
? nil
: response.tailnetName,
magicDNSSuffix: response.magicDNSSuffix.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty
? nil
: response.magicDNSSuffix,
selfDNSName: response.selfDNSName.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty
? nil
: response.selfDNSName,
tailnetIPs: response.tailnetIPs,
health: response.health
)
}
}
@Observable
@MainActor
final class NetworkViewModel: Sendable {
private(set) var networks: [Burrow_Network] = []
private(set) var connectionError: String?
private let socketURLResult: Result<URL, Error>
@ObservationIgnored private var task: Task<Void, Never>?
init(socketURLResult: Result<URL, Error>) {
self.socketURLResult = socketURLResult
startStreaming()
}
deinit {
task?.cancel()
}
var cards: [NetworkCardModel] {
networks.map(Self.makeCard(for:))
}
var nextNetworkID: Int32 {
(networks.map(\.id).max() ?? 0) + 1
}
func addWireGuardNetwork(configText: String) async throws -> Int32 {
try await addNetwork(type: .wireGuard, payload: Data(configText.utf8))
}
func addTailnetNetwork(payload: TailnetNetworkPayload) async throws -> Int32 {
try await addNetwork(type: .tailnet, payload: payload.encoded())
}
func discoverTailnet(email: String) async throws -> TailnetDiscoveryResponse {
let socketURL = try socketURLResult.get()
return try await TailnetDiscoveryClient.discover(email: email, socketURL: socketURL)
}
func probeTailnetAuthority(_ authority: String) async throws -> TailnetAuthorityProbeStatus {
let socketURL = try socketURLResult.get()
return try await TailnetAuthorityProbeClient.probe(authority: authority, socketURL: socketURL)
}
func startTailnetLogin(
accountName: String,
identityName: String,
hostname: String?,
authority: String
) async throws -> TailnetLoginStatus {
let socketURL = try socketURLResult.get()
return try await TailnetLoginClient.start(
accountName: accountName,
identityName: identityName,
hostname: hostname,
authority: authority,
socketURL: socketURL
)
}
func tailnetLoginStatus(sessionID: String) async throws -> TailnetLoginStatus {
let socketURL = try socketURLResult.get()
return try await TailnetLoginClient.status(sessionID: sessionID, socketURL: socketURL)
}
func cancelTailnetLogin(sessionID: String) async throws {
let socketURL = try socketURLResult.get()
try await TailnetLoginClient.cancel(sessionID: sessionID, socketURL: socketURL)
}
private func addNetwork(type: Burrow_NetworkType, payload: Data) async throws -> Int32 {
let socketURL = try socketURLResult.get()
let networkID = nextNetworkID
let request = Burrow_Network.with {
$0.id = networkID
$0.type = type
$0.payload = payload
}
let client = NetworksClient.unix(socketURL: socketURL)
_ = try await client.networkAdd(request)
return networkID
}
private func startStreaming() {
task?.cancel()
let socketURLResult = self.socketURLResult
task = Task { [weak self] in
do {
let socketURL = try socketURLResult.get()
let client = NetworksClient.unix(socketURL: socketURL)
for try await response in client.networkList(.init()) {
guard !Task.isCancelled else { return }
await MainActor.run {
guard let self else { return }
self.networks = response.network
self.connectionError = nil
}
}
} catch {
guard !Task.isCancelled else { return }
await MainActor.run {
guard let self else { return }
self.connectionError = error.localizedDescription
}
}
}
}
private static func makeCard(for network: Burrow_Network) -> NetworkCardModel {
switch network.type {
case .wireGuard:
WireGuardCard(network: network).card
case .tailnet:
TailnetCard(network: network).card
case .UNRECOGNIZED(let rawValue):
unsupportedCard(
id: network.id,
title: "Unknown Network",
detail: "Type \(rawValue) is not recognized by this build."
)
@unknown default:
unsupportedCard(
id: network.id,
title: "Unsupported Network",
detail: "Update Burrow to view this network."
)
}
}
private static func unsupportedCard(id: Int32, title: String, detail: String) -> NetworkCardModel {
NetworkCardModel(
id: id,
backgroundColor: .gray.opacity(0.85),
label: AnyView(
VStack(alignment: .leading, spacing: 12) {
Text(title)
.font(.title3.weight(.semibold))
.foregroundStyle(.white)
Text(detail)
.font(.body)
.foregroundStyle(.white.opacity(0.9))
Spacer()
Text("Network #\(id)")
.font(.footnote.monospaced())
.foregroundStyle(.white.opacity(0.8))
}
.padding()
.frame(maxWidth: .infinity, alignment: .leading)
)
)
}
}
enum TailnetProvider: String, CaseIterable, Codable, Identifiable, Sendable {
case tailscale
case headscale
case burrow
var id: String { rawValue }
var title: String {
switch self {
case .tailscale: "Tailscale"
case .headscale: "Custom Tailnet"
case .burrow: "Burrow"
}
}
var defaultAuthority: String? {
switch self {
case .tailscale:
"https://controlplane.tailscale.com"
case .headscale:
"https://ts.burrow.net"
case .burrow:
nil
}
}
var subtitle: String {
switch self {
case .tailscale:
"Managed Tailnet authority."
case .headscale:
"Custom Tailnet control server."
case .burrow:
"Burrow-native Tailnet authority."
}
}
static func inferred(authority: String?, explicit: TailnetProvider?) -> TailnetProvider {
if explicit == .burrow {
return .burrow
}
if isManagedTailscaleAuthority(authority) {
return .tailscale
}
return .headscale
}
static func isManagedTailscaleAuthority(_ authority: String?) -> Bool {
guard let normalized = authority?
.trimmingCharacters(in: .whitespacesAndNewlines)
.lowercased()
.trimmingCharacters(in: CharacterSet(charactersIn: "/")),
!normalized.isEmpty
else {
return false
}
return normalized == "https://controlplane.tailscale.com"
|| normalized == "http://controlplane.tailscale.com"
|| normalized == "controlplane.tailscale.com"
}
}
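`isManagedTailscaleAuthority` normalizes before comparing: trim surrounding whitespace, lowercase, strip slashes, then match the known spellings. A Python sketch of that normalization (helper name is illustrative):

```python
from typing import Optional

# The three accepted spellings of the managed control-plane authority,
# mirroring the Swift comparison above.
KNOWN_FORMS = {
    "https://controlplane.tailscale.com",
    "http://controlplane.tailscale.com",
    "controlplane.tailscale.com",
}

def is_managed_tailscale_authority(authority: Optional[str]) -> bool:
    # Trim whitespace, fold case, and drop leading/trailing slashes
    # before comparing; an empty result is never managed.
    if authority is None:
        return False
    normalized = authority.strip().lower().strip("/")
    return bool(normalized) and normalized in KNOWN_FORMS
```

Normalizing first means a trailing slash or mixed-case host typed by the user still routes to the managed provider.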
enum AccountNetworkKind: String, CaseIterable, Codable, Identifiable, Sendable {
case wireGuard
case tor
case tailnet
var id: String { rawValue }
var title: String {
switch self {
case .wireGuard: "WireGuard"
case .tor: "Tor"
case .tailnet: "Tailnet"
}
}
var subtitle: String {
switch self {
case .wireGuard: "Import a tunnel and optional account metadata."
case .tor: "Store Arti account and identity preferences."
case .tailnet: "Save Tailnet authority, identity defaults, and login material."
}
}
var accentColor: Color {
switch self {
case .wireGuard: .init("WireGuard")
case .tor: .orange
case .tailnet: .mint
}
}
var actionTitle: String {
switch self {
case .wireGuard: "Add Network"
case .tor: "Save Account"
case .tailnet: "Save Account"
}
}
var availabilityNote: String? {
switch self {
case .wireGuard:
nil
case .tor:
"Tor account preferences are stored on Apple now. The managed Tor runtime is not wired on Apple in this branch yet."
case .tailnet:
"Tailnet accounts can sign in from Apple now. The managed Apple runtime is still pending, but Tailnet networks can already be stored in the daemon."
}
}
}
enum AccountAuthMode: String, CaseIterable, Codable, Identifiable, Sendable {
case web
case none
case password
case preauthKey
var id: String { rawValue }
var title: String {
switch self {
case .web: "Browser Sign-In"
case .none: "None"
case .password: "Password"
case .preauthKey: "Preauth Key"
}
}
}
struct NetworkAccountRecord: Codable, Identifiable, Hashable, Sendable {
let id: UUID
var kind: AccountNetworkKind
var title: String
var authority: String?
var provider: TailnetProvider?
var accountName: String
var identityName: String
var hostname: String?
var username: String?
var tailnet: String?
var authMode: AccountAuthMode
var note: String?
var createdAt: Date
var updatedAt: Date
}
struct TailnetCard {
var id: Int32
var title: String
var detail: String
init(network: Burrow_Network) {
let payload = (try? JSONDecoder().decode(TailnetNetworkPayload.self, from: network.payload))
id = network.id
title = payload?.tailnet ?? payload?.hostname ?? "Tailnet"
detail = [
payload?.authority.flatMap { URL(string: $0)?.host } ?? payload?.authority,
payload.map { "Account: \($0.account)" },
]
.compactMap { $0 }
.joined(separator: " · ")
.ifEmpty("Stored Tailnet configuration")
}
var card: NetworkCardModel {
NetworkCardModel(
id: id,
backgroundColor: .mint,
label: AnyView(
VStack(alignment: .leading, spacing: 12) {
HStack {
VStack(alignment: .leading, spacing: 4) {
Text("Tailnet")
.font(.headline)
.foregroundStyle(.white.opacity(0.85))
Text(title)
.font(.title3.weight(.semibold))
.foregroundStyle(.white)
}
Spacer()
}
Spacer()
Text(detail)
.font(.body.monospaced())
.foregroundStyle(.white.opacity(0.92))
.lineLimit(4)
}
.padding()
.frame(maxWidth: .infinity, alignment: .leading)
)
)
}
}
@Observable
@MainActor
final class NetworkAccountStore {
private static let storageKey = "burrow.network-accounts"
private let defaults: UserDefaults
private(set) var accounts: [NetworkAccountRecord] = []
init(defaults: UserDefaults = UserDefaults(suiteName: Constants.appGroupIdentifier) ?? .standard) {
self.defaults = defaults
load()
}
func upsert(_ record: NetworkAccountRecord, secret: String?) throws {
if let index = accounts.firstIndex(where: { $0.id == record.id }) {
accounts[index] = record
} else {
accounts.append(record)
}
accounts.sort {
if $0.kind == $1.kind {
return $0.title.localizedCaseInsensitiveCompare($1.title) == .orderedAscending
}
return $0.kind.rawValue < $1.kind.rawValue
}
try persist()
if let secret, !secret.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty {
try AccountSecretStore.store(secret, for: record.id)
} else {
try AccountSecretStore.removeSecret(for: record.id)
}
}
func delete(_ record: NetworkAccountRecord) throws {
accounts.removeAll { $0.id == record.id }
try persist()
try AccountSecretStore.removeSecret(for: record.id)
}
func hasStoredSecret(for record: NetworkAccountRecord) -> Bool {
AccountSecretStore.hasSecret(for: record.id)
}
private func load() {
guard let data = defaults.data(forKey: Self.storageKey) else {
accounts = []
return
}
do {
accounts = try JSONDecoder().decode([NetworkAccountRecord].self, from: data)
} catch {
accounts = []
}
}
private func persist() throws {
let data = try JSONEncoder().encode(accounts)
defaults.set(data, forKey: Self.storageKey)
}
}
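The store's `upsert` replaces any record with the same `id`, then re-sorts by kind with a case-insensitive title tiebreaker. A dictionary-based Python sketch of that ordering rule (names are illustrative, not the store's API):

```python
def upsert(accounts: list, record: dict) -> list:
    # Replace any existing record with the same id, then order by kind
    # (raw value) and case-insensitively by title, as the store does.
    kept = [r for r in accounts if r["id"] != record["id"]]
    kept.append(record)
    kept.sort(key=lambda r: (r["kind"], r["title"].lower()))
    return kept
```

Sorting on every write keeps the persisted list stable, so the UI never reorders cards depending on insertion history.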
private enum AccountSecretStore {
private static let service = "\(Constants.bundleIdentifier).accounts"
static func hasSecret(for accountID: UUID) -> Bool {
let query = baseQuery(for: accountID)
return SecItemCopyMatching(query as CFDictionary, nil) == errSecSuccess
}
static func store(_ secret: String, for accountID: UUID) throws {
let data = Data(secret.utf8)
let query = baseQuery(for: accountID)
let status = SecItemCopyMatching(query as CFDictionary, nil)
if status == errSecSuccess {
let updateStatus = SecItemUpdate(
query as CFDictionary,
[kSecValueData as String: data] as CFDictionary
)
guard updateStatus == errSecSuccess else {
throw AccountSecretStoreError.osStatus(updateStatus)
}
return
}
var item = query
item[kSecValueData as String] = data
item[kSecAttrAccessible as String] = kSecAttrAccessibleAfterFirstUnlock
let addStatus = SecItemAdd(item as CFDictionary, nil)
guard addStatus == errSecSuccess else {
throw AccountSecretStoreError.osStatus(addStatus)
}
}
static func removeSecret(for accountID: UUID) throws {
let status = SecItemDelete(baseQuery(for: accountID) as CFDictionary)
guard status == errSecSuccess || status == errSecItemNotFound else {
throw AccountSecretStoreError.osStatus(status)
}
}
private static func baseQuery(for accountID: UUID) -> [String: Any] {
[
kSecClass as String: kSecClassGenericPassword,
kSecAttrService as String: service,
kSecAttrAccount as String: accountID.uuidString,
]
}
}
private enum AccountSecretStoreError: LocalizedError {
case osStatus(OSStatus)
var errorDescription: String? {
switch self {
case .osStatus(let status):
if let message = SecCopyErrorMessageString(status, nil) as String? {
return message
}
return "Keychain error \(status)"
}
}
}
private extension String {
func ifEmpty(_ fallback: @autoclosure () -> String) -> String {
isEmpty ? fallback() : self
}
}


@@ -1,14 +1,40 @@
import BurrowCore
import Foundation
import SwiftUI
struct WireGuardCard {
var id: Int32
var title: String
var detail: String
init(id: Int32, title: String = "WireGuard", detail: String = "Stored configuration") {
self.id = id
self.title = title
self.detail = detail
}
init(network: Burrow_Network) {
let payload = String(data: network.payload, encoding: .utf8) ?? ""
let address = Self.firstValue(for: "Address", in: payload)
let endpoint = Self.firstValue(for: "Endpoint", in: payload)
self.id = network.id
self.title = "WireGuard"
self.detail = [address, endpoint]
.compactMap { $0 }
.filter { !$0.isEmpty }
.joined(separator: " · ")
.ifEmpty("Stored configuration")
}
var card: NetworkCardModel {
NetworkCardModel(
id: id,
backgroundColor: .init("WireGuard"),
label: AnyView(label)
)
}
private var label: some View {
GeometryReader { reader in
VStack(alignment: .leading) {
HStack {
@@ -23,12 +49,29 @@ struct WireGuard: Network {
}
.frame(maxWidth: .infinity, maxHeight: reader.size.height / 4)
Spacer()
Text(detail)
.foregroundStyle(.white)
.font(.body.monospaced())
.lineLimit(3)
}
.padding()
.frame(maxWidth: .infinity)
}
}
private static func firstValue(for key: String, in config: String) -> String? {
config
.split(whereSeparator: \.isNewline)
.map(String.init)
.first(where: { $0.hasPrefix("\(key) = ") })?
.split(separator: "=", maxSplits: 1)
.last
.map { $0.trimmingCharacters(in: .whitespaces) }
}
}
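`firstValue(for:in:)` does a deliberately narrow scan of the config text: it only matches the spaced `Key = value` form that wg-quick exports use. A Python sketch of the same extraction (hypothetical helper name):

```python
from typing import Optional

def first_value(key: str, config: str) -> Optional[str]:
    # Return the value of the first "Key = value" line, or None.
    # Like the Swift helper, lines written as "Key=value" (no spaces)
    # are intentionally not matched.
    for line in config.splitlines():
        if line.startswith(f"{key} = "):
            return line.split("=", 1)[1].strip()
    return None

config = (
    "[Interface]\n"
    "Address = 10.13.13.2/24\n"
    "\n"
    "[Peer]\n"
    "Endpoint = wg.burrow.rs:51820\n"
)
```

Splitting on the first `=` only, then trimming, preserves values that themselves contain `=` (such as base64 keys).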
private extension String {
func ifEmpty(_ fallback: @autoclosure () -> String) -> String {
isEmpty ? fallback() : self
}
}


@@ -1,293 +0,0 @@
import AuthenticationServices
import Foundation
import os
import SwiftUI
enum OAuth2 {
enum Error: Swift.Error {
case unknown
case invalidAuthorizationURL
case invalidCallbackURL
case invalidRedirectURI
}
struct Credential {
var accessToken: String
var refreshToken: String?
var expirationDate: Date?
}
struct Session {
var authorizationEndpoint: URL
var tokenEndpoint: URL
var redirectURI: URL
var responseType = OAuth2.ResponseType.code
var scopes: Set<String>
var clientID: String
var clientSecret: String
fileprivate static let queue: OSAllocatedUnfairLock<[Int: CheckedContinuation<URL, Swift.Error>]> = {
.init(initialState: [:])
}()
fileprivate static func handle(url: URL) {
let continuations = queue.withLock { continuations in
let copy = continuations
continuations.removeAll()
return copy
}
for (_, continuation) in continuations {
continuation.resume(returning: url)
}
}
init(
authorizationEndpoint: URL,
tokenEndpoint: URL,
redirectURI: URL,
scopes: Set<String>,
clientID: String,
clientSecret: String
) {
self.authorizationEndpoint = authorizationEndpoint
self.tokenEndpoint = tokenEndpoint
self.redirectURI = redirectURI
self.scopes = scopes
self.clientID = clientID
self.clientSecret = clientSecret
}
private var authorizationURL: URL {
get throws {
var queryItems: [URLQueryItem] = [
.init(name: "client_id", value: clientID),
.init(name: "response_type", value: responseType.rawValue),
.init(name: "redirect_uri", value: redirectURI.absoluteString)
]
if !scopes.isEmpty {
queryItems.append(.init(name: "scope", value: scopes.joined(separator: ",")))
}
guard var components = URLComponents(url: authorizationEndpoint, resolvingAgainstBaseURL: false) else {
throw OAuth2.Error.invalidAuthorizationURL
}
components.queryItems = queryItems
guard let authorizationURL = components.url else { throw OAuth2.Error.invalidAuthorizationURL }
return authorizationURL
}
}
private func handle(callbackURL: URL) async throws -> OAuth2.AccessTokenResponse {
switch responseType {
case .code:
guard let components = URLComponents(url: callbackURL, resolvingAgainstBaseURL: false) else {
throw OAuth2.Error.invalidCallbackURL
}
return try await handle(response: try components.decode(OAuth2.CodeResponse.self))
default:
throw OAuth2.Error.invalidCallbackURL
}
}
private func handle(response: OAuth2.CodeResponse) async throws -> OAuth2.AccessTokenResponse {
var components = URLComponents()
components.queryItems = [
.init(name: "client_id", value: clientID),
.init(name: "client_secret", value: clientSecret),
.init(name: "grant_type", value: GrantType.authorizationCode.rawValue),
.init(name: "code", value: response.code),
.init(name: "redirect_uri", value: redirectURI.absoluteString)
]
let httpBody = Data(components.percentEncodedQuery!.utf8)
var request = URLRequest(url: tokenEndpoint)
request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
request.httpBody = httpBody
let session = URLSession(configuration: .ephemeral)
let (data, _) = try await session.data(for: request)
return try OAuth2.decoder.decode(OAuth2.AccessTokenResponse.self, from: data)
}
func authorize(_ session: WebAuthenticationSession) async throws -> Credential {
let authorizationURL = try authorizationURL
let callbackURL = try await session.start(
url: authorizationURL,
redirectURI: redirectURI
)
return try await handle(callbackURL: callbackURL).credential
}
}
private struct CodeResponse: Codable {
var code: String
var state: String?
}
private struct AccessTokenResponse: Codable {
var accessToken: String
var tokenType: TokenType
var expiresIn: Double?
var refreshToken: String?
var credential: Credential {
.init(
accessToken: accessToken,
refreshToken: refreshToken,
expirationDate: expiresIn.map { Date(timeIntervalSinceNow: $0) }
)
}
}
enum TokenType: Codable, RawRepresentable {
case bearer
case unknown(String)
init(rawValue: String) {
self = switch rawValue.lowercased() {
case "bearer": .bearer
default: .unknown(rawValue)
}
}
var rawValue: String {
switch self {
case .bearer: "bearer"
case .unknown(let type): type
}
}
}
enum GrantType: Codable, RawRepresentable {
case authorizationCode
case unknown(String)
init(rawValue: String) {
self = switch rawValue.lowercased() {
case "authorization_code": .authorizationCode
default: .unknown(rawValue)
}
}
var rawValue: String {
switch self {
case .authorizationCode: "authorization_code"
case .unknown(let type): type
}
}
}
enum ResponseType: Codable, RawRepresentable {
case code
case idToken
case unknown(String)
init(rawValue: String) {
self = switch rawValue.lowercased() {
case "code": .code
case "id_token": .idToken
default: .unknown(rawValue)
}
}
var rawValue: String {
switch self {
case .code: "code"
case .idToken: "id_token"
case .unknown(let type): type
}
}
}
fileprivate static var decoder: JSONDecoder {
let decoder = JSONDecoder()
decoder.keyDecodingStrategy = .convertFromSnakeCase
return decoder
}
fileprivate static var encoder: JSONEncoder {
let encoder = JSONEncoder()
encoder.keyEncodingStrategy = .convertToSnakeCase
return encoder
}
}
extension WebAuthenticationSession: @unchecked @retroactive Sendable {
}
extension WebAuthenticationSession {
#if canImport(BrowserEngineKit)
@available(iOS 17.4, macOS 14.4, tvOS 17.4, watchOS 10.4, *)
fileprivate static func callback(for redirectURI: URL) throws -> ASWebAuthenticationSession.Callback {
switch redirectURI.scheme {
case "https":
guard let host = redirectURI.host else { throw OAuth2.Error.invalidRedirectURI }
return .https(host: host, path: redirectURI.path)
case "http":
throw OAuth2.Error.invalidRedirectURI
case .some(let scheme):
return .customScheme(scheme)
case .none:
throw OAuth2.Error.invalidRedirectURI
}
}
#endif
fileprivate func start(url: URL, redirectURI: URL) async throws -> URL {
#if canImport(BrowserEngineKit)
if #available(iOS 17.4, macOS 14.4, tvOS 17.4, watchOS 10.4, *) {
return try await authenticate(
using: url,
callback: try Self.callback(for: redirectURI),
additionalHeaderFields: [:]
)
}
#endif
return try await withThrowingTaskGroup(of: URL.self) { group in
group.addTask {
return try await authenticate(using: url, callbackURLScheme: redirectURI.scheme ?? "")
}
let id = Int.random(in: 0..<Int.max)
group.addTask {
return try await withCheckedThrowingContinuation { continuation in
OAuth2.Session.queue.withLock { $0[id] = continuation }
}
}
guard let url = try await group.next() else { throw OAuth2.Error.invalidCallbackURL }
group.cancelAll()
OAuth2.Session.queue.withLock { $0[id] = nil }
return url
}
}
}
extension View {
func handleOAuth2Callback() -> some View {
onOpenURL { url in OAuth2.Session.handle(url: url) }
}
}
extension URLComponents {
fileprivate func decode<T: Decodable>(_ type: T.Type) throws -> T {
guard let queryItems else {
throw DecodingError.valueNotFound(
T.self,
.init(codingPath: [], debugDescription: "Missing query items")
)
}
let data = try OAuth2.encoder.encode(try queryItems.values)
return try OAuth2.decoder.decode(T.self, from: data)
}
}
extension Sequence where Element == URLQueryItem {
fileprivate var values: [String: String?] {
get throws {
try Dictionary(map { ($0.name, $0.value) }) { _, _ in
throw DecodingError.dataCorrupted(.init(codingPath: [], debugDescription: "Duplicate query items"))
}
}
}
}

996
Cargo.lock generated

File diff suppressed because it is too large.

View file

@ -1,4 +1,4 @@
FROM docker.io/library/rust:1.79-slim-bookworm AS builder
FROM docker.io/library/rust:1.85-slim-bookworm AS builder
ARG TARGETPLATFORM
ARG LLVM_VERSION=16

View file

@ -1,56 +1,21 @@
FLAKE ?= .
AGENIX ?= nix run ${FLAKE}\#agenix --
SECRETS := forgejo/admin-password \
forgejo/agent-ssh-key \
forgejo/nsc-token \
forgejo/nsc-dispatcher-config \
forgejo/nsc-autoscaler-config \
cloudflare/api-token \
hetzner/api-token \
forwardemail/api-token \
forwardemail/hetzner-s3-user \
forwardemail/hetzner-s3-secret
tun := $(shell ifconfig -l | sed 's/ /\n/g' | grep utun | tail -n 1)
cargo_console := env RUST_BACKTRACE=1 RUST_LOG=debug RUSTFLAGS='--cfg tokio_unstable' cargo run --all-features --
cargo_norm := env RUST_BACKTRACE=1 RUST_LOG=debug cargo run --
sudo_cargo_console := sudo -E env RUST_BACKTRACE=1 RUST_LOG=debug RUSTFLAGS='--cfg tokio_unstable' cargo run --all-features --
sudo_cargo_norm := sudo -E env RUST_BACKTRACE=1 RUST_LOG=debug cargo run --
.PHONY: secret secret-file secrets-list
secret:
@if [ -z "${name}" ]; then \
printf 'Usage: make secret name=<secret-path>\nAvailable secrets:\n %s\n' "${SECRETS}"; \
exit 1; \
fi
${AGENIX} -e secrets/${name}.age
secret-file:
@if [ -z "${name}" ]; then \
printf 'Usage: make secret-file name=<secret-path> file=<source-file>\nAvailable secrets:\n %s\n' "${SECRETS}"; \
exit 1; \
fi
@if [ -z "${file}" ]; then \
printf 'Usage: make secret-file name=<secret-path> file=<source-file>\n'; \
exit 1; \
fi
@if [ ! -f "${file}" ]; then \
printf 'Source file "%s" not found.\n' "${file}"; \
exit 1; \
fi
SECRET_SOURCE_FILE="${file}" EDITOR="${PWD}/Scripts/agenix-load-file.sh" ${AGENIX} -e secrets/${name}.age </dev/tty
secrets-list:
@printf '%s\n' ${SECRETS}
check:
@cargo check
build:
@cargo build
bep-check:
@python3 Scripts/check-bep-metadata.py
bep-list:
@Scripts/bep list
daemon-console:
@$(sudo_cargo_console) daemon

View file

@ -10,6 +10,7 @@ Routine verification now runs unprivileged with `cargo test --workspace --all-fe
The repository now carries its own design and deployment record:
- [Constitution](./CONSTITUTION.md)
- [Agent Instructions](./AGENTS.md)
- [Burrow Evolution](./evolution/README.md)
- [WireGuard Rust Lineage](./docs/WIREGUARD_LINEAGE.md)
- [Protocol Roadmap](./docs/PROTOCOL_ROADMAP.md)
@ -19,6 +20,8 @@ The repository now carries its own design and deployment record:
Burrow is fully open source; you can fork the repo and start contributing easily. For more information and in-depth discussion, visit the `#burrow` channel on the [Hack Club Slack](https://hackclub.com/slack/), where you can ask for help and talk with other people interested in Burrow. Check out [GETTING_STARTED.md](./docs/GETTING_STARTED.md) for build instructions and [GTK_APP.md](./docs/GTK_APP.md) for the Linux app. Forge and deployment scaffolding live in [`flake.nix`](./flake.nix), [`nixos/`](./nixos), and [`.forgejo/workflows/`](./.forgejo/workflows/). Hosted mail backup operations live in [`docs/FORWARDEMAIL.md`](./docs/FORWARDEMAIL.md) and [`Tools/forwardemail-custom-s3.sh`](./Tools/forwardemail-custom-s3.sh).
Agent and governance-sensitive work should start with [AGENTS.md](./AGENTS.md), [CONSTITUTION.md](./CONSTITUTION.md), and the relevant BEPs under [`evolution/proposals/`](./evolution/proposals/). Identity and bootstrap metadata now live in [`contributors.nix`](./contributors.nix).
The project is divided into the following folders:
```

View file

@ -1,131 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
BURROW_SECRET_TMPFILES=()
burrow_secret_repo_path() {
local repo_root="$1"
local secret_path="$2"
case "${secret_path}" in
"${repo_root}"/*)
printf '%s\n' "${secret_path#${repo_root}/}"
;;
*)
printf '%s\n' "${secret_path}"
;;
esac
}
burrow_agenix_identity_path() {
local repo_root="$1"
local candidate
for candidate in \
"${BURROW_AGE_IDENTITY:-}" \
"${BURROW_FORGE_SSH_KEY:-}" \
"${repo_root}/intake/agent_at_burrow_net_ed25519" \
"${HOME}/.ssh/agent_at_burrow_net_ed25519" \
"${HOME}/.ssh/id_ed25519"
do
if [[ -n "${candidate}" && -f "${candidate}" ]]; then
printf '%s\n' "${candidate}"
return 0
fi
done
}
burrow_cleanup_secret_tmpfiles() {
local path
for path in "${BURROW_SECRET_TMPFILES[@]:-}"; do
[[ -n "${path}" ]] && rm -f "${path}" >/dev/null 2>&1 || true
done
BURROW_SECRET_TMPFILES=()
}
burrow_decrypt_age_secret_to_temp() {
local repo_root="$1"
local secret_path="$2"
local agenix_path
local identity_path
local tmp_file
if [[ ! -f "${secret_path}" ]]; then
echo "age secret not found: ${secret_path}" >&2
return 1
fi
agenix_path="$(burrow_secret_repo_path "${repo_root}" "${secret_path}")"
identity_path="$(burrow_agenix_identity_path "${repo_root}")"
tmp_file="$(mktemp "${TMPDIR:-/tmp}/burrow-secret.XXXXXX")"
if [[ -n "${identity_path}" ]]; then
nix --extra-experimental-features "nix-command flakes" run "${repo_root}#agenix" -- -d "${agenix_path}" -i "${identity_path}" > "${tmp_file}"
else
nix --extra-experimental-features "nix-command flakes" run "${repo_root}#agenix" -- -d "${agenix_path}" > "${tmp_file}"
fi
chmod 600 "${tmp_file}"
BURROW_SECRET_TMPFILES+=("${tmp_file}")
printf '%s\n' "${tmp_file}"
}
burrow_resolve_secret_file() {
local repo_root="$1"
local explicit_path="$2"
local intake_path="$3"
local age_path="$4"
local fallback_path="${5:-}"
if [[ -n "${explicit_path}" ]]; then
if [[ ! -s "${explicit_path}" ]]; then
echo "required file missing or empty: ${explicit_path}" >&2
return 1
fi
printf '%s\n' "${explicit_path}"
return 0
fi
if [[ -n "${age_path}" && -f "${age_path}" ]]; then
burrow_decrypt_age_secret_to_temp "${repo_root}" "${age_path}"
return 0
fi
if [[ -n "${intake_path}" && -s "${intake_path}" ]]; then
printf '%s\n' "${intake_path}"
return 0
fi
if [[ -n "${fallback_path}" && -s "${fallback_path}" ]]; then
printf '%s\n' "${fallback_path}"
return 0
fi
return 1
}
burrow_encrypt_secret_from_file() {
local repo_root="$1"
local secret_path="$2"
local source_path="$3"
local agenix_path
local backup_file=""
if [[ ! -s "${source_path}" ]]; then
echo "secret source missing or empty: ${source_path}" >&2
return 1
fi
agenix_path="$(burrow_secret_repo_path "${repo_root}" "${secret_path}")"
if [[ -f "${secret_path}" ]]; then
backup_file="$(mktemp "${TMPDIR:-/tmp}/burrow-secret-backup.XXXXXX")"
cp "${secret_path}" "${backup_file}"
fi
rm -f "${secret_path}"
if ! nix --extra-experimental-features "nix-command flakes" run "${repo_root}#agenix" -- -e "${agenix_path}" < "${source_path}"; then
if [[ -n "${backup_file}" && -f "${backup_file}" ]]; then
mv "${backup_file}" "${secret_path}"
fi
return 1
fi
[[ -n "${backup_file}" ]] && rm -f "${backup_file}"
}
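The helper library above normalizes secret paths before handing them to agenix. The prefix-stripping in `burrow_secret_repo_path` can be checked standalone (the paths here are illustrative, not real repo paths):

```shell
#!/usr/bin/env bash
# Standalone sketch of the prefix-stripping in burrow_secret_repo_path.
repo_root="/tmp/repo"
secret_path="/tmp/repo/secrets/forgejo/admin-password.age"
case "${secret_path}" in
  "${repo_root}"/*) rel="${secret_path#${repo_root}/}" ;;  # inside the repo: strip the root
  *)                rel="${secret_path}" ;;                 # elsewhere: pass through unchanged
esac
printf '%s\n' "${rel}"   # secrets/forgejo/admin-password.age
```

A path outside `${repo_root}` falls through the `*` branch untouched, which is why callers can pass either absolute or repo-relative secret paths.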

View file

@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
if [[ $# -lt 1 ]]; then
echo "Usage: agenix-load-file.sh <destination-file>" >&2
exit 1
fi
dest="${!#}"
source_path="${SECRET_SOURCE_FILE:-}"
if [[ -z "$source_path" ]]; then
echo "SECRET_SOURCE_FILE is not set; point it at the source file to encrypt." >&2
exit 1
fi
if [[ ! -f "$source_path" ]]; then
echo "Source file '$source_path' does not exist." >&2
exit 1
fi
cp "$source_path" "$dest"

View file

@ -0,0 +1,243 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
application_slug="${AUTHENTIK_ONEPASSWORD_APPLICATION_SLUG:-onepassword}"
application_name="${AUTHENTIK_ONEPASSWORD_APPLICATION_NAME:-1Password}"
provider_name="${AUTHENTIK_ONEPASSWORD_PROVIDER_NAME:-1Password}"
template_slug="${AUTHENTIK_ONEPASSWORD_TEMPLATE_SLUG:-ts}"
client_id="${AUTHENTIK_ONEPASSWORD_CLIENT_ID:-1password.burrow.net}"
launch_url="${AUTHENTIK_ONEPASSWORD_LAUNCH_URL:-https://burrow-team.1password.com/}"
redirect_uris_json="${AUTHENTIK_ONEPASSWORD_REDIRECT_URIS_JSON:-[
\"https://burrow-team.1password.com/sso/oidc/redirect/\",
\"onepassword://sso/oidc/redirect\"
]}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-1password-oidc.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
Optional environment:
AUTHENTIK_URL
AUTHENTIK_ONEPASSWORD_APPLICATION_SLUG
AUTHENTIK_ONEPASSWORD_APPLICATION_NAME
AUTHENTIK_ONEPASSWORD_PROVIDER_NAME
AUTHENTIK_ONEPASSWORD_TEMPLATE_SLUG
AUTHENTIK_ONEPASSWORD_CLIENT_ID
AUTHENTIK_ONEPASSWORD_LAUNCH_URL
AUTHENTIK_ONEPASSWORD_REDIRECT_URIS_JSON
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if ! printf '%s' "$redirect_uris_json" | jq -e 'type == "array" and length > 0' >/dev/null; then
echo "error: AUTHENTIK_ONEPASSWORD_REDIRECT_URIS_JSON must be a non-empty JSON array" >&2
exit 1
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
api_with_status() {
local method="$1"
local path="$2"
local data="${3:-}"
local response_file status
response_file="$(mktemp)"
trap 'rm -f "$response_file"' RETURN
if [[ -n "$data" ]]; then
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
)"
else
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
)"
fi
printf '%s\n' "$status"
cat "$response_file"
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
wait_for_authentik
template_provider="$(
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -c --arg template_slug "$template_slug" '.results[]? | select(.assigned_application_slug == $template_slug)' \
| head -n1
)"
if [[ -z "$template_provider" ]]; then
echo "error: could not resolve the Authentik OAuth provider template ${template_slug}" >&2
exit 1
fi
authorization_flow="$(printf '%s\n' "$template_provider" | jq -r '.authorization_flow')"
invalidation_flow="$(printf '%s\n' "$template_provider" | jq -r '.invalidation_flow')"
property_mappings="$(printf '%s\n' "$template_provider" | jq -c '.property_mappings')"
signing_key="$(printf '%s\n' "$template_provider" | jq -r '.signing_key')"
provider_payload="$(
jq -n \
--arg name "$provider_name" \
--arg authorization_flow "$authorization_flow" \
--arg invalidation_flow "$invalidation_flow" \
--arg client_id "$client_id" \
--arg signing_key "$signing_key" \
--argjson property_mappings "$property_mappings" \
--argjson redirect_uris "$redirect_uris_json" \
'{
name: $name,
authorization_flow: $authorization_flow,
invalidation_flow: $invalidation_flow,
client_type: "public",
client_id: $client_id,
include_claims_in_id_token: true,
redirect_uris: ($redirect_uris | map({matching_mode: "strict", url: .})),
property_mappings: $property_mappings,
signing_key: $signing_key,
issuer_mode: "per_provider",
sub_mode: "hashed_user_id"
}'
)"
existing_provider="$(
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -c \
--arg application_slug "$application_slug" \
--arg provider_name "$provider_name" \
'.results[]? | select(.assigned_application_slug == $application_slug or .name == $provider_name)' \
| head -n1
)"
if [[ -n "$existing_provider" ]]; then
provider_pk="$(printf '%s\n' "$existing_provider" | jq -r '.pk')"
api PATCH "/api/v3/providers/oauth2/${provider_pk}/" "$provider_payload" >/dev/null
else
provider_pk="$(
api POST "/api/v3/providers/oauth2/" "$provider_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "${provider_pk:-}" ]]; then
echo "error: 1Password OIDC provider did not return a primary key" >&2
exit 1
fi
application_payload="$(
jq -n \
--arg name "$application_name" \
--arg slug "$application_slug" \
--arg provider "$provider_pk" \
--arg launch_url "$launch_url" \
'{
name: $name,
slug: $slug,
provider: ($provider | tonumber),
meta_launch_url: $launch_url,
open_in_new_tab: true,
policy_engine_mode: "any"
}'
)"
existing_application="$(
api GET "/api/v3/core/applications/?page_size=200" \
| jq -c --arg slug "$application_slug" '.results[]? | select(.slug == $slug)' \
| head -n1
)"
if [[ -n "$existing_application" ]]; then
application_pk="$(printf '%s\n' "$existing_application" | jq -r '.pk')"
else
create_application_result="$(
api_with_status POST "/api/v3/core/applications/" "$application_payload"
)"
create_application_status="$(printf '%s\n' "$create_application_result" | sed -n '1p')"
create_application_body="$(printf '%s\n' "$create_application_result" | sed '1d')"
if [[ "$create_application_status" =~ ^20[01]$ ]]; then
application_pk="$(printf '%s\n' "$create_application_body" | jq -r '.pk // empty')"
elif [[ "$create_application_status" == "400" ]] && printf '%s\n' "$create_application_body" | jq -e '
(.slug // [] | index("Application with this slug already exists.")) != null
or (.provider // [] | index("Application with this provider already exists.")) != null
' >/dev/null; then
application_pk="existing-duplicate"
else
printf '%s\n' "$create_application_body" >&2
echo "error: could not reconcile Authentik application ${application_slug}" >&2
exit 1
fi
fi
if [[ -z "${application_pk:-}" ]]; then
echo "error: 1Password OIDC application did not return a primary key" >&2
exit 1
fi
for _ in $(seq 1 30); do
if curl -fsS "${authentik_url}/application/o/${application_slug}/.well-known/openid-configuration" >/dev/null 2>&1; then
echo "Synced Authentik 1Password OIDC application ${application_slug} (${application_name})."
exit 0
fi
sleep 2
done
echo "warning: 1Password OIDC issuer document for ${application_slug} was not immediately readable; keeping reconciled config." >&2
echo "Synced Authentik 1Password OIDC application ${application_slug} (${application_name})."
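The `api_with_status` helper used above prints the HTTP status on its first output line and the response body afterwards; callers then split the two with `sed`. A minimal sketch of that convention with a canned result (the JSON body is illustrative):

```shell
#!/usr/bin/env bash
# Status-then-body convention: line 1 is the HTTP code, the rest is the body.
result="$(printf '201\n{"pk": 7}')"
status="$(printf '%s\n' "$result" | sed -n '1p')"  # keep only line 1
body="$(printf '%s\n' "$result" | sed '1d')"       # drop line 1
printf '%s\n' "$status"   # 201
printf '%s\n' "$body"     # {"pk": 7}
```

Keeping the status and body in one stream avoids a second temp file, at the cost of assuming the body itself is inspected only after the first line is stripped.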

View file

@ -0,0 +1,263 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
directory_json="${AUTHENTIK_BURROW_DIRECTORY_JSON:-[]}"
users_group="${AUTHENTIK_BURROW_USERS_GROUP:-burrow-users}"
admins_group="${AUTHENTIK_BURROW_ADMINS_GROUP:-burrow-admins}"
forgejo_application_slug="${AUTHENTIK_FORGEJO_APPLICATION_SLUG:-}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-burrow-directory.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
AUTHENTIK_BURROW_DIRECTORY_JSON
Optional environment:
AUTHENTIK_URL
AUTHENTIK_BURROW_USERS_GROUP
AUTHENTIK_BURROW_ADMINS_GROUP
AUTHENTIK_FORGEJO_APPLICATION_SLUG
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if ! printf '%s' "$directory_json" | jq -e 'type == "array"' >/dev/null; then
echo "error: AUTHENTIK_BURROW_DIRECTORY_JSON must be a JSON array" >&2
exit 1
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
lookup_group_pk() {
local group_name="$1"
api GET "/api/v3/core/groups/?page_size=200&search=${group_name}" \
| jq -r --arg name "$group_name" '.results[]? | select(.name == $name) | .pk // empty' \
| head -n1
}
ensure_group() {
local group_name="$1"
local payload group_pk
payload="$(
jq -cn \
--arg name "$group_name" \
'{name: $name}'
)"
group_pk="$(lookup_group_pk "$group_name")"
if [[ -n "$group_pk" ]]; then
api PATCH "/api/v3/core/groups/${group_pk}/" "$payload" >/dev/null
else
group_pk="$(
api POST "/api/v3/core/groups/" "$payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "$group_pk" ]]; then
echo "error: could not create Authentik group ${group_name}" >&2
exit 1
fi
printf '%s\n' "$group_pk"
}
lookup_user_pk() {
local username="$1"
api GET "/api/v3/core/users/?page_size=200&search=${username}" \
| jq -r --arg username "$username" '.results[]? | select(.username == $username) | .pk // empty' \
| head -n1
}
ensure_user() {
local user_spec="$1"
local username name email is_admin groups_json password_file effective_groups_json group_name
local group_pks_json payload user_pk
username="$(printf '%s\n' "$user_spec" | jq -r '.username')"
name="$(printf '%s\n' "$user_spec" | jq -r '.name')"
email="$(printf '%s\n' "$user_spec" | jq -r '.email')"
is_admin="$(printf '%s\n' "$user_spec" | jq -r '.isAdmin // false')"
groups_json="$(printf '%s\n' "$user_spec" | jq -c '.groups // []')"
password_file="$(printf '%s\n' "$user_spec" | jq -r '.passwordFile // empty')"
if [[ -z "$username" || "$username" == "null" || -z "$email" || "$email" == "null" ]]; then
echo "error: each Burrow Authentik user requires username and email" >&2
exit 1
fi
effective_groups_json="$(
printf '%s\n' "$groups_json" \
| jq -c --arg users_group "$users_group" --arg admins_group "$admins_group" --argjson is_admin "$is_admin" '
. + [$users_group] + (if $is_admin then [$admins_group] else [] end) | unique
'
)"
group_pks_json='[]'
while IFS= read -r group_name; do
group_pk="$(ensure_group "$group_name")"
group_pks_json="$(
jq -cn \
--argjson current "$group_pks_json" \
--arg next "$group_pk" \
'$current + [$next]'
)"
done < <(printf '%s\n' "$effective_groups_json" | jq -r '.[]')
payload="$(
jq -cn \
--arg username "$username" \
--arg name "$name" \
--arg email "$email" \
--argjson groups "$group_pks_json" \
'{
username: $username,
name: $name,
email: $email,
is_active: true,
path: "users",
groups: $groups
}'
)"
user_pk="$(lookup_user_pk "$username")"
if [[ -n "$user_pk" ]]; then
api PATCH "/api/v3/core/users/${user_pk}/" "$payload" >/dev/null
else
user_pk="$(
api POST "/api/v3/core/users/" "$payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "$user_pk" ]]; then
echo "error: could not create Authentik user ${username}" >&2
exit 1
fi
if [[ -n "$password_file" ]]; then
if [[ ! -s "$password_file" ]]; then
echo "error: password file for Authentik user ${username} is missing: ${password_file}" >&2
exit 1
fi
api POST "/api/v3/core/users/${user_pk}/set_password/" "$(
jq -cn \
--arg password "$(tr -d '\r\n' < "$password_file")" \
'{password: $password}'
)" >/dev/null
fi
}
lookup_application_pk() {
local slug="$1"
api GET "/api/v3/core/applications/?page_size=200" \
| jq -r --arg slug "$slug" '.results[]? | select(.slug == $slug) | .pk // empty' \
| head -n1
}
ensure_application_group_binding() {
local application_slug="$1"
local group_name="$2"
local application_pk group_pk existing payload binding_pk
application_pk="$(lookup_application_pk "$application_slug")"
if [[ -z "$application_pk" ]]; then
echo "warning: could not resolve Authentik application ${application_slug}; skipping application group binding" >&2
return 0
fi
group_pk="$(lookup_group_pk "$group_name")"
if [[ -z "$group_pk" ]]; then
echo "error: could not resolve Authentik group ${group_name}" >&2
exit 1
fi
existing="$(
api GET "/api/v3/policies/bindings/?page_size=200&target=${application_pk}" \
| jq -c --arg group_pk "$group_pk" '.results[]? | select(.group == $group_pk)' \
| head -n1
)"
payload="$(
jq -cn \
--arg target "$application_pk" \
--arg group "$group_pk" \
'{
group: $group,
target: $target,
negate: false,
enabled: true,
order: 100,
timeout: 30,
failure_result: false
}'
)"
if [[ -n "$existing" ]]; then
binding_pk="$(printf '%s\n' "$existing" | jq -r '.pk')"
api PATCH "/api/v3/policies/bindings/${binding_pk}/" "$payload" >/dev/null
else
api POST "/api/v3/policies/bindings/" "$payload" >/dev/null
fi
}
wait_for_authentik
ensure_group "$users_group" >/dev/null
ensure_group "$admins_group" >/dev/null
while IFS= read -r user_spec; do
ensure_user "$user_spec"
done < <(printf '%s\n' "$directory_json" | jq -c '.[]')
if [[ -n "$forgejo_application_slug" ]]; then
ensure_application_group_binding "$forgejo_application_slug" "$users_group"
fi
echo "Synced Burrow Authentik directory."
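The group-membership expression in `ensure_user` always adds the users group and conditionally adds the admins group before deduplicating. The same jq expression can be exercised against a sample user spec (the username, email, and group names below are placeholders matching the script defaults):

```shell
#!/usr/bin/env bash
# Mirror of the effective-groups computation from ensure_user.
user_spec='{"username":"alice","name":"Alice","email":"alice@example.com","isAdmin":true}'
printf '%s\n' "$user_spec" | jq -c \
  --arg users_group "burrow-users" \
  --arg admins_group "burrow-admins" \
  --argjson is_admin true \
  '(.groups // []) + [$users_group] + (if $is_admin then [$admins_group] else [] end) | unique'
# jq's unique also sorts, so the output is ["burrow-admins","burrow-users"]
```

Note that `unique` both deduplicates and sorts, so explicit `groups` entries in the spec end up interleaved alphabetically with the implied ones.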

View file

@ -0,0 +1,250 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
application_slug="${AUTHENTIK_FORGEJO_APPLICATION_SLUG:-git}"
application_name="${AUTHENTIK_FORGEJO_APPLICATION_NAME:-burrow.net}"
provider_name="${AUTHENTIK_FORGEJO_PROVIDER_NAME:-burrow.net}"
client_id="${AUTHENTIK_FORGEJO_CLIENT_ID:-git.burrow.net}"
client_secret="${AUTHENTIK_FORGEJO_CLIENT_SECRET:-}"
launch_url="${AUTHENTIK_FORGEJO_LAUNCH_URL:-https://git.burrow.net/}"
redirect_uris_json="${AUTHENTIK_FORGEJO_REDIRECT_URIS_JSON:-[
\"https://git.burrow.net/user/oauth2/burrow.net/callback\",
\"https://git.burrow.net/user/oauth2/authentik/callback\",
\"https://git.burrow.net/user/oauth2/GitHub/callback\"
]}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-forgejo-oidc.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
AUTHENTIK_FORGEJO_CLIENT_SECRET
Optional environment:
AUTHENTIK_URL
AUTHENTIK_FORGEJO_APPLICATION_SLUG
AUTHENTIK_FORGEJO_APPLICATION_NAME
AUTHENTIK_FORGEJO_PROVIDER_NAME
AUTHENTIK_FORGEJO_CLIENT_ID
AUTHENTIK_FORGEJO_LAUNCH_URL
AUTHENTIK_FORGEJO_REDIRECT_URIS_JSON
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if [[ -z "$client_secret" || "$client_secret" == PENDING* ]]; then
echo "Forgejo OIDC client secret is not configured; skipping Authentik Forgejo sync." >&2
exit 0
fi
if ! printf '%s' "$redirect_uris_json" | jq -e 'type == "array" and length > 0' >/dev/null; then
echo "error: AUTHENTIK_FORGEJO_REDIRECT_URIS_JSON must be a non-empty JSON array" >&2
exit 1
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
api_with_status() {
local method="$1"
local path="$2"
local data="${3:-}"
local response_file status
response_file="$(mktemp)"
trap 'rm -f "$response_file"' RETURN
if [[ -n "$data" ]]; then
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
)"
else
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
)"
fi
printf '%s\n' "$status"
cat "$response_file"
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
wait_for_authentik
template_provider="$(
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -c '.results[]? | select(.assigned_application_slug == "ts")' \
| head -n1
)"
if [[ -z "$template_provider" ]]; then
echo "error: could not resolve the Burrow Tailnet OAuth provider template" >&2
exit 1
fi
authorization_flow="$(printf '%s\n' "$template_provider" | jq -r '.authorization_flow')"
invalidation_flow="$(printf '%s\n' "$template_provider" | jq -r '.invalidation_flow')"
property_mappings="$(printf '%s\n' "$template_provider" | jq -c '.property_mappings')"
signing_key="$(printf '%s\n' "$template_provider" | jq -r '.signing_key')"
provider_payload="$(
jq -n \
--arg name "$provider_name" \
--arg authorization_flow "$authorization_flow" \
--arg invalidation_flow "$invalidation_flow" \
--arg client_id "$client_id" \
--arg client_secret "$client_secret" \
--arg signing_key "$signing_key" \
--argjson property_mappings "$property_mappings" \
--argjson redirect_uris "$redirect_uris_json" \
'{
name: $name,
authorization_flow: $authorization_flow,
invalidation_flow: $invalidation_flow,
client_type: "confidential",
client_id: $client_id,
client_secret: $client_secret,
include_claims_in_id_token: true,
redirect_uris: ($redirect_uris | map({matching_mode: "strict", url: .})),
property_mappings: $property_mappings,
signing_key: $signing_key,
issuer_mode: "per_provider",
sub_mode: "hashed_user_id"
}'
)"
existing_provider="$(
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -c \
--arg application_slug "$application_slug" \
--arg provider_name "$provider_name" \
'.results[]? | select(.assigned_application_slug == $application_slug or .name == $provider_name)' \
| head -n1
)"
if [[ -n "$existing_provider" ]]; then
provider_pk="$(printf '%s\n' "$existing_provider" | jq -r '.pk')"
api PATCH "/api/v3/providers/oauth2/${provider_pk}/" "$provider_payload" >/dev/null
else
provider_pk="$(
api POST "/api/v3/providers/oauth2/" "$provider_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "${provider_pk:-}" ]]; then
echo "error: Forgejo OIDC provider did not return a primary key" >&2
exit 1
fi
application_payload="$(
jq -n \
--arg name "$application_name" \
--arg slug "$application_slug" \
--arg provider "$provider_pk" \
--arg launch_url "$launch_url" \
'{
name: $name,
slug: $slug,
provider: ($provider | tonumber),
meta_launch_url: $launch_url,
open_in_new_tab: false,
policy_engine_mode: "any"
}'
)"
existing_application="$(
api GET "/api/v3/core/applications/?page_size=200" \
| jq -c --arg slug "$application_slug" '.results[]? | select(.slug == $slug)' \
| head -n1
)"
if [[ -n "$existing_application" ]]; then
application_pk="$(printf '%s\n' "$existing_application" | jq -r '.pk')"
else
create_application_result="$(
api_with_status POST "/api/v3/core/applications/" "$application_payload"
)"
create_application_status="$(printf '%s\n' "$create_application_result" | sed -n '1p')"
create_application_body="$(printf '%s\n' "$create_application_result" | sed '1d')"
if [[ "$create_application_status" =~ ^20[01]$ ]]; then
application_pk="$(printf '%s\n' "$create_application_body" | jq -r '.pk // empty')"
elif [[ "$create_application_status" == "400" ]] && printf '%s\n' "$create_application_body" | jq -e '
(.slug // [] | index("Application with this slug already exists.")) != null
or (.provider // [] | index("Application with this provider already exists.")) != null
' >/dev/null; then
application_pk="existing-duplicate"
else
printf '%s\n' "$create_application_body" >&2
echo "error: could not reconcile Authentik application ${application_slug}" >&2
exit 1
fi
fi
if [[ -z "${application_pk:-}" ]]; then
echo "error: Forgejo OIDC application did not return a primary key" >&2
exit 1
fi
for _ in $(seq 1 30); do
if curl -fsS "${authentik_url}/application/o/${application_slug}/.well-known/openid-configuration" >/dev/null 2>&1; then
echo "Synced Authentik Forgejo OIDC application ${application_slug} (${application_name})."
exit 0
fi
sleep 2
done
echo "warning: Forgejo OIDC issuer document for ${application_slug} was not immediately readable; keeping reconciled config." >&2
echo "Synced Authentik Forgejo OIDC application ${application_slug} (${application_name})."
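The POST branch in this script treats a 400 "already exists" response as a successful reconcile rather than a failure. A minimal Python sketch of that status/body decision; `classify_create` is a hypothetical helper for illustration, not part of the script:

```python
def classify_create(status: int, body: dict) -> str:
    """Classify an Authentik application POST: 'created', 'duplicate', or 'error'."""
    if status in (200, 201):
        return "created"
    # Mirror the jq check: a 400 whose slug/provider errors say the
    # application already exists means it was reconciled previously.
    if status == 400 and (
        "Application with this slug already exists." in body.get("slug", [])
        or "Application with this provider already exists." in body.get("provider", [])
    ):
        return "duplicate"
    return "error"

print(classify_create(201, {"pk": 7}))
print(classify_create(400, {"slug": ["Application with this slug already exists."]}))
```

This keeps the script idempotent: re-running it against an already-provisioned Authentik instance converges instead of aborting.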


@@ -0,0 +1,284 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
google_client_id="${AUTHENTIK_GOOGLE_CLIENT_ID:-}"
google_client_secret="${AUTHENTIK_GOOGLE_CLIENT_SECRET:-}"
source_slug="${AUTHENTIK_GOOGLE_SOURCE_SLUG:-google}"
source_name="${AUTHENTIK_GOOGLE_SOURCE_NAME:-Google}"
identification_stage_name="${AUTHENTIK_GOOGLE_IDENTIFICATION_STAGE_NAME:-default-authentication-identification}"
authentication_flow_slug="${AUTHENTIK_GOOGLE_AUTHENTICATION_FLOW_SLUG:-default-source-authentication}"
enrollment_flow_slug="${AUTHENTIK_GOOGLE_ENROLLMENT_FLOW_SLUG:-default-source-enrollment}"
login_mode="${AUTHENTIK_GOOGLE_LOGIN_MODE:-redirect}"
user_matching_mode="${AUTHENTIK_GOOGLE_USER_MATCHING_MODE:-email_link}"
policy_engine_mode="${AUTHENTIK_GOOGLE_POLICY_ENGINE_MODE:-any}"
google_account_map_json="${AUTHENTIK_GOOGLE_ACCOUNT_MAP_JSON:-[]}"
property_mapping_name="${AUTHENTIK_GOOGLE_PROPERTY_MAPPING_NAME:-Burrow Google Account Map}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-google-source.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
AUTHENTIK_GOOGLE_CLIENT_ID
AUTHENTIK_GOOGLE_CLIENT_SECRET
Optional environment:
AUTHENTIK_URL
AUTHENTIK_GOOGLE_SOURCE_SLUG
AUTHENTIK_GOOGLE_SOURCE_NAME
AUTHENTIK_GOOGLE_IDENTIFICATION_STAGE_NAME
AUTHENTIK_GOOGLE_AUTHENTICATION_FLOW_SLUG
AUTHENTIK_GOOGLE_ENROLLMENT_FLOW_SLUG
AUTHENTIK_GOOGLE_LOGIN_MODE promoted|redirect
AUTHENTIK_GOOGLE_USER_MATCHING_MODE identifier|email_link|email_deny|username_link|username_deny
AUTHENTIK_GOOGLE_POLICY_ENGINE_MODE all|any
AUTHENTIK_GOOGLE_ACCOUNT_MAP_JSON JSON array of alias mappings
AUTHENTIK_GOOGLE_PROPERTY_MAPPING_NAME
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if [[ -z "$google_client_id" || -z "$google_client_secret" || "$google_client_id" == PENDING* || "$google_client_secret" == PENDING* ]]; then
echo "Google OAuth credentials are not configured; skipping Authentik Google source sync." >&2
echo "Set Authorized redirect URI in Google to ${authentik_url}/source/oauth/callback/${source_slug}/" >&2
exit 0
fi
if ! printf '%s' "$google_account_map_json" | jq -e 'type == "array"' >/dev/null; then
echo "error: AUTHENTIK_GOOGLE_ACCOUNT_MAP_JSON must be a JSON array" >&2
exit 1
fi
case "$login_mode" in
promoted|redirect) ;;
*)
echo "warning: unsupported AUTHENTIK_GOOGLE_LOGIN_MODE=$login_mode; falling back to redirect" >&2
login_mode="redirect"
;;
esac
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
lookup_single_result() {
local path="$1"
local jq_filter="$2"
api GET "$path" | jq -r "$jq_filter" | head -n1
}
wait_for_authentik
flow_pk="$(
lookup_single_result \
"/api/v3/flows/instances/?slug=${authentication_flow_slug}" \
'.results[] | select(.slug != null) | .pk // empty'
)"
if [[ -z "$flow_pk" ]]; then
echo "error: could not resolve Authentik authentication flow slug ${authentication_flow_slug}" >&2
exit 1
fi
enrollment_flow_pk="$(
lookup_single_result \
"/api/v3/flows/instances/?slug=${enrollment_flow_slug}" \
'.results[] | select(.slug != null) | .pk // empty'
)"
if [[ -z "$enrollment_flow_pk" ]]; then
echo "error: could not resolve Authentik enrollment flow slug ${enrollment_flow_slug}" >&2
exit 1
fi
identification_stage="$(
api GET "/api/v3/stages/identification/" \
| jq -c --arg name "$identification_stage_name" '.results[] | select(.name == $name)'
)"
if [[ -z "$identification_stage" ]]; then
echo "error: could not resolve Authentik identification stage ${identification_stage_name}" >&2
exit 1
fi
stage_pk="$(printf '%s\n' "$identification_stage" | jq -r '.pk')"
property_mapping_payload='[]'
if [[ "$(printf '%s' "$google_account_map_json" | jq 'length')" -gt 0 ]]; then
alias_map_python="$(
printf '%s' "$google_account_map_json" \
| jq -c '
map({
key: (.source_email | ascii_downcase),
value: {
username: .username,
email: .email,
name: .name
}
})
| from_entries
'
)"
oauth_property_mapping_expression="$(
cat <<EOF
email = (info.get("email") or "").strip().lower()
alias_map = ${alias_map_python}
mapped = alias_map.get(email)
if not mapped:
return {}
result = {}
for key in ("username", "email", "name"):
value = mapped.get(key)
if value:
result[key] = value
return result
EOF
)"
oauth_property_mapping_payload="$(
jq -n \
--arg name "$property_mapping_name" \
--arg expression "$oauth_property_mapping_expression" \
'{
name: $name,
expression: $expression
}'
)"
existing_property_mapping="$(
api GET "/api/v3/propertymappings/source/oauth/?page_size=200" \
| jq -c --arg name "$property_mapping_name" '.results[]? | select(.name == $name)'
)"
if [[ -n "$existing_property_mapping" ]]; then
property_mapping_pk="$(printf '%s\n' "$existing_property_mapping" | jq -r '.pk')"
api PATCH "/api/v3/propertymappings/source/oauth/${property_mapping_pk}/" "$oauth_property_mapping_payload" >/dev/null
else
property_mapping_pk="$(
api POST "/api/v3/propertymappings/source/oauth/" "$oauth_property_mapping_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "${property_mapping_pk:-}" ]]; then
echo "error: Google OAuth property mapping did not return a primary key" >&2
exit 1
fi
property_mapping_payload="$(jq -cn --arg property_mapping_pk "$property_mapping_pk" '[$property_mapping_pk]')"
fi
oauth_source_payload="$(
jq -n \
--arg name "$source_name" \
--arg slug "$source_slug" \
--arg authentication_flow "$flow_pk" \
--arg enrollment_flow "$enrollment_flow_pk" \
--arg user_matching_mode "$user_matching_mode" \
--arg policy_engine_mode "$policy_engine_mode" \
--argjson user_property_mappings "$property_mapping_payload" \
--arg consumer_key "$google_client_id" \
--arg consumer_secret "$google_client_secret" \
'{
name: $name,
slug: $slug,
enabled: true,
promoted: true,
authentication_flow: $authentication_flow,
enrollment_flow: $enrollment_flow,
user_property_mappings: $user_property_mappings,
group_property_mappings: [],
policy_engine_mode: $policy_engine_mode,
user_matching_mode: $user_matching_mode,
provider_type: "google",
consumer_key: $consumer_key,
consumer_secret: $consumer_secret
}'
)"
existing_source="$(
api GET "/api/v3/sources/oauth/?slug=${source_slug}" \
| jq -c '.results[]?'
)"
if [[ -n "$existing_source" ]]; then
source_pk="$(printf '%s\n' "$existing_source" | jq -r '.pk')"
api PATCH "/api/v3/sources/oauth/${source_slug}/" "$oauth_source_payload" >/dev/null
else
source_pk="$(
api POST "/api/v3/sources/oauth/" "$oauth_source_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "$source_pk" ]]; then
echo "error: Google OAuth source did not return a primary key" >&2
exit 1
fi
stage_patch="$(
printf '%s\n' "$identification_stage" \
| jq -c \
--arg source_pk "$source_pk" \
--arg login_mode "$login_mode" '
.sources = (
if $login_mode == "redirect" then
[$source_pk]
else
([ $source_pk ] + ((.sources // []) | map(select(. != $source_pk))))
end
)
| .show_source_labels = true
| if $login_mode == "redirect" then
.user_fields = []
else
.
end
| {
sources,
show_source_labels,
user_fields
}'
)"
api PATCH "/api/v3/stages/identification/${stage_pk}/" "$stage_patch" >/dev/null
echo "Synced Authentik Google source ${source_slug} (${source_pk}) in ${login_mode} mode."
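The property-mapping expression this script embeds runs inside Authentik, but its logic can be exercised standalone. A minimal Python sketch; `alias_map` and the `info` argument are sample stand-ins for the values Authentik supplies at runtime:

```python
# Sample alias map in the shape produced by the jq from_entries step:
# lowercase source email -> attribute overrides.
alias_map = {
    "conrad@example.com": {
        "username": "conrad",
        "email": "conrad@burrow.net",
        "name": "Conrad",
    },
}

def map_google_identity(info: dict) -> dict:
    """Return attribute overrides for a Google login, or {} if unmapped."""
    email = (info.get("email") or "").strip().lower()
    mapped = alias_map.get(email)
    if not mapped:
        return {}
    # Only keep keys that are actually set in the mapping entry.
    return {k: v for k in ("username", "email", "name") if (v := mapped.get(k))}

print(map_google_identity({"email": " Conrad@Example.com "}))
```

Normalizing with `strip().lower()` before the lookup is what makes the match tolerant of casing and whitespace in the address Google reports.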


@@ -0,0 +1,344 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
application_slug="${AUTHENTIK_LINEAR_APPLICATION_SLUG:-linear}"
application_name="${AUTHENTIK_LINEAR_APPLICATION_NAME:-Linear}"
provider_name="${AUTHENTIK_LINEAR_PROVIDER_NAME:-Linear}"
launch_url="${AUTHENTIK_LINEAR_LAUNCH_URL:-https://linear.app/burrownet}"
acs_url="${AUTHENTIK_LINEAR_ACS_URL:-}"
audience="${AUTHENTIK_LINEAR_AUDIENCE:-}"
issuer="${AUTHENTIK_LINEAR_ISSUER:-${authentik_url}/application/saml/${application_slug}/metadata/}"
default_relay_state="${AUTHENTIK_LINEAR_DEFAULT_RELAY_STATE:-}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-linear-saml.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
AUTHENTIK_LINEAR_ACS_URL
AUTHENTIK_LINEAR_AUDIENCE
Optional environment:
AUTHENTIK_URL
AUTHENTIK_LINEAR_APPLICATION_SLUG
AUTHENTIK_LINEAR_APPLICATION_NAME
AUTHENTIK_LINEAR_PROVIDER_NAME
AUTHENTIK_LINEAR_LAUNCH_URL
AUTHENTIK_LINEAR_ISSUER
AUTHENTIK_LINEAR_DEFAULT_RELAY_STATE
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if [[ -z "$acs_url" ]]; then
echo "error: AUTHENTIK_LINEAR_ACS_URL is required" >&2
exit 1
fi
if [[ -z "$audience" ]]; then
echo "error: AUTHENTIK_LINEAR_AUDIENCE is required" >&2
exit 1
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
api_with_status() {
local method="$1"
local path="$2"
local data="${3:-}"
local response_file status
response_file="$(mktemp)"
trap 'rm -f "$response_file"' RETURN
if [[ -n "$data" ]]; then
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
)"
else
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
)"
fi
printf '%s\n' "$status"
cat "$response_file"
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
lookup_oauth_template_field() {
local field="$1"
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -r --arg field "$field" '.results[]? | select(.assigned_application_slug == "ts") | .[$field]' \
| head -n1
}
reconcile_property_mapping() {
local name="$1"
local saml_name="$2"
local friendly_name="$3"
local expression="$4"
local payload existing_pk
payload="$(
jq -n \
--arg name "$name" \
--arg saml_name "$saml_name" \
--arg friendly_name "$friendly_name" \
--arg expression "$expression" \
'{
name: $name,
saml_name: $saml_name,
friendly_name: $friendly_name,
expression: $expression
}'
)"
existing_pk="$(
api GET "/api/v3/propertymappings/provider/saml/?page_size=200" \
| jq -r --arg name "$name" '.results[]? | select(.name == $name) | .pk' \
| head -n1
)"
if [[ -n "$existing_pk" ]]; then
api PATCH "/api/v3/propertymappings/provider/saml/${existing_pk}/" "$payload" >/dev/null
printf '%s\n' "$existing_pk"
else
api POST "/api/v3/propertymappings/provider/saml/" "$payload" | jq -r '.pk // empty'
fi
}
wait_for_authentik
authorization_flow="$(lookup_oauth_template_field authorization_flow)"
invalidation_flow="$(lookup_oauth_template_field invalidation_flow)"
signing_kp="$(lookup_oauth_template_field signing_key)"
if [[ -z "$authorization_flow" || -z "$invalidation_flow" || -z "$signing_kp" ]]; then
echo "error: could not resolve Authentik provider defaults from Burrow Tailnet template" >&2
exit 1
fi
email_mapping_pk="$(
reconcile_property_mapping \
"Burrow Linear SAML Email" \
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" \
"email" \
'return request.user.email'
)"
name_mapping_pk="$(
reconcile_property_mapping \
"Burrow Linear SAML Name" \
"name" \
"name" \
'return request.user.name or request.user.username'
)"
first_name_mapping_pk="$(
reconcile_property_mapping \
"Burrow Linear SAML First Name" \
"firstName" \
"firstName" \
$'parts = (request.user.name or "").split(" ", 1)\nif len(parts) > 0 and parts[0]:\n return parts[0]\nreturn request.user.username'
)"
last_name_mapping_pk="$(
reconcile_property_mapping \
"Burrow Linear SAML Last Name" \
"lastName" \
"lastName" \
$'parts = (request.user.name or "").rsplit(" ", 1)\nif len(parts) == 2 and parts[1]:\n return parts[1]\nreturn request.user.username'
)"
if [[ -z "$email_mapping_pk" || -z "$name_mapping_pk" || -z "$first_name_mapping_pk" || -z "$last_name_mapping_pk" ]]; then
echo "error: failed to reconcile Linear SAML property mappings" >&2
exit 1
fi
provider_payload="$(
jq -n \
--arg name "$provider_name" \
--arg authorization_flow "$authorization_flow" \
--arg invalidation_flow "$invalidation_flow" \
--arg acs_url "$acs_url" \
--arg audience "$audience" \
--arg issuer "$issuer" \
--arg signing_kp "$signing_kp" \
--arg default_relay_state "$default_relay_state" \
--arg name_id_mapping "$email_mapping_pk" \
--arg email_mapping "$email_mapping_pk" \
--arg name_mapping "$name_mapping_pk" \
--arg first_name_mapping "$first_name_mapping_pk" \
--arg last_name_mapping "$last_name_mapping_pk" \
'{
name: $name,
authorization_flow: $authorization_flow,
invalidation_flow: $invalidation_flow,
acs_url: $acs_url,
audience: $audience,
issuer: $issuer,
signing_kp: $signing_kp,
sign_assertion: true,
sign_response: true,
sp_binding: "post",
name_id_mapping: $name_id_mapping,
property_mappings: [
$email_mapping,
$name_mapping,
$first_name_mapping,
$last_name_mapping
]
}
+ (if $default_relay_state == "" then {} else {default_relay_state: $default_relay_state} end)'
)"
existing_provider="$(
api GET "/api/v3/providers/saml/?page_size=200" \
| jq -c \
--arg application_slug "$application_slug" \
--arg provider_name "$provider_name" \
'.results[]? | select(.assigned_application_slug == $application_slug or .name == $provider_name)' \
| head -n1
)"
if [[ -n "$existing_provider" ]]; then
provider_pk="$(printf '%s\n' "$existing_provider" | jq -r '.pk')"
api PATCH "/api/v3/providers/saml/${provider_pk}/" "$provider_payload" >/dev/null
else
provider_pk="$(
api POST "/api/v3/providers/saml/" "$provider_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "${provider_pk:-}" ]]; then
echo "error: Linear SAML provider did not return a primary key" >&2
exit 1
fi
application_payload="$(
jq -n \
--arg name "$application_name" \
--arg slug "$application_slug" \
--arg provider "$provider_pk" \
--arg launch_url "$launch_url" \
'{
name: $name,
slug: $slug,
provider: ($provider | tonumber),
meta_launch_url: $launch_url,
open_in_new_tab: true,
policy_engine_mode: "any"
}'
)"
existing_application="$(
api GET "/api/v3/core/applications/?page_size=200" \
| jq -c --arg slug "$application_slug" '.results[]? | select(.slug == $slug)' \
| head -n1
)"
if [[ -n "$existing_application" ]]; then
application_pk="existing"
api PATCH "/api/v3/core/applications/${application_slug}/" "$application_payload" >/dev/null
else
create_application_result="$(
api_with_status POST "/api/v3/core/applications/" "$application_payload"
)"
create_application_status="$(printf '%s\n' "$create_application_result" | sed -n '1p')"
create_application_body="$(printf '%s\n' "$create_application_result" | sed '1d')"
if [[ "$create_application_status" =~ ^20[01]$ ]]; then
application_pk="$(printf '%s\n' "$create_application_body" | jq -r '.pk // empty')"
elif [[ "$create_application_status" == "400" ]] && printf '%s\n' "$create_application_body" | jq -e '
(.slug // [] | index("Application with this slug already exists.")) != null
or (.provider // [] | index("Application with this provider already exists.")) != null
' >/dev/null; then
application_pk="existing-duplicate"
else
printf '%s\n' "$create_application_body" >&2
echo "error: could not reconcile Authentik application ${application_slug}" >&2
exit 1
fi
fi
if [[ -z "${application_pk:-}" ]]; then
echo "error: Linear SAML application did not return a primary key" >&2
exit 1
fi
for _ in $(seq 1 30); do
metadata_status="$(
curl -sS \
-o /dev/null \
-w '%{http_code}' \
--max-redirs 0 \
"${authentik_url}/application/saml/${application_slug}/metadata/" \
|| true
)"
case "$metadata_status" in
200|301|302|307|308)
echo "Synced Authentik Linear SAML application ${application_slug} (${application_name})."
exit 0
;;
esac
sleep 2
done
echo "warning: Linear SAML metadata for ${application_slug} was not immediately readable; keeping reconciled config." >&2
echo "Synced Authentik Linear SAML application ${application_slug} (${application_name})."
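The firstName/lastName expressions this script installs split `request.user.name` on the first and last space respectively, falling back to the username. A standalone Python sketch of that pair; `name` and `username` stand in for the `request.user` fields:

```python
def first_name(name: str, username: str) -> str:
    """Everything before the first space, else the username."""
    parts = (name or "").split(" ", 1)
    return parts[0] if parts and parts[0] else username

def last_name(name: str, username: str) -> str:
    """Everything after the last space, else the username."""
    parts = (name or "").rsplit(" ", 1)
    return parts[1] if len(parts) == 2 and parts[1] else username

print(first_name("Conrad Kramer", "ckramer"))  # Conrad
print(last_name("Conrad Kramer", "ckramer"))   # Kramer
print(last_name("Conrad", "ckramer"))          # ckramer
```

The username fallback matters for single-word display names: Linear still receives a non-empty lastName, which some SAML service providers require.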


@@ -0,0 +1,311 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
application_slug="${AUTHENTIK_LINEAR_APPLICATION_SLUG:-linear}"
provider_name="${AUTHENTIK_LINEAR_SCIM_PROVIDER_NAME:-Linear SCIM}"
scim_url="${AUTHENTIK_LINEAR_SCIM_URL:-}"
scim_token_file="${AUTHENTIK_LINEAR_SCIM_TOKEN_FILE:-}"
user_identifier="${AUTHENTIK_LINEAR_SCIM_USER_IDENTIFIER:-email}"
owner_group="${AUTHENTIK_LINEAR_OWNER_GROUP:-linear-owners}"
admin_group="${AUTHENTIK_LINEAR_ADMIN_GROUP:-linear-admins}"
guest_group="${AUTHENTIK_LINEAR_GUEST_GROUP:-linear-guests}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-linear-scim.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
AUTHENTIK_LINEAR_SCIM_URL
AUTHENTIK_LINEAR_SCIM_TOKEN_FILE
Optional environment:
AUTHENTIK_URL
AUTHENTIK_LINEAR_APPLICATION_SLUG
AUTHENTIK_LINEAR_SCIM_PROVIDER_NAME
AUTHENTIK_LINEAR_SCIM_USER_IDENTIFIER
AUTHENTIK_LINEAR_OWNER_GROUP
AUTHENTIK_LINEAR_ADMIN_GROUP
AUTHENTIK_LINEAR_GUEST_GROUP
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if [[ -z "$scim_url" ]]; then
echo "error: AUTHENTIK_LINEAR_SCIM_URL is required" >&2
exit 1
fi
if [[ -z "$scim_token_file" || ! -s "$scim_token_file" ]]; then
echo "error: AUTHENTIK_LINEAR_SCIM_TOKEN_FILE is required and must be readable" >&2
exit 1
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
lookup_group_pk() {
local group_name="$1"
api GET "/api/v3/core/groups/?page_size=200&search=${group_name}" \
| jq -r --arg name "$group_name" '.results[]? | select(.name == $name) | .pk // empty' \
| head -n1
}
ensure_group() {
local group_name="$1"
local payload group_pk
payload="$(jq -cn --arg name "$group_name" '{name: $name}')"
group_pk="$(lookup_group_pk "$group_name")"
if [[ -n "$group_pk" ]]; then
api PATCH "/api/v3/core/groups/${group_pk}/" "$payload" >/dev/null
else
group_pk="$(
api POST "/api/v3/core/groups/" "$payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "$group_pk" ]]; then
echo "error: could not reconcile Authentik group ${group_name}" >&2
exit 1
fi
printf '%s\n' "$group_pk"
}
lookup_application() {
api GET "/api/v3/core/applications/?page_size=200" \
| jq -c --arg slug "$application_slug" '.results[]? | select(.slug == $slug)' \
| head -n1
}
lookup_scim_provider() {
api GET "/api/v3/providers/scim/?page_size=200" \
| jq -c \
--arg application_slug "$application_slug" \
--arg provider_name "$provider_name" \
'.results[]? | select(.assigned_backchannel_application_slug == $application_slug or .name == $provider_name)' \
| head -n1
}
lookup_scim_mapping_pk() {
local managed_name="$1"
api GET "/api/v3/propertymappings/provider/scim/?page_size=200" \
| jq -r --arg managed "$managed_name" '.results[]? | select(.managed == $managed) | .pk // empty' \
| head -n1
}
reconcile_property_mapping() {
local name="$1"
local expression="$2"
local payload existing_pk
payload="$(
jq -n \
--arg name "$name" \
--arg expression "$expression" \
'{
name: $name,
expression: $expression
}'
)"
existing_pk="$(
api GET "/api/v3/propertymappings/provider/scim/?page_size=200" \
| jq -r --arg name "$name" '.results[]? | select(.name == $name) | .pk // empty' \
| head -n1
)"
if [[ -n "$existing_pk" ]]; then
api PATCH "/api/v3/propertymappings/provider/scim/${existing_pk}/" "$payload" >/dev/null
printf '%s\n' "$existing_pk"
else
api POST "/api/v3/propertymappings/provider/scim/" "$payload" \
| jq -r '.pk // empty'
fi
}
sync_object() {
local provider_pk="$1"
local model="$2"
local object_id="$3"
if ! api POST "/api/v3/providers/scim/${provider_pk}/sync/object/" "$(
jq -cn \
--arg model "$model" \
--arg object_id "$object_id" \
'{
sync_object_model: $model,
sync_object_id: $object_id,
override_dry_run: false
}'
)" >/dev/null; then
echo "warning: could not trigger immediate Linear SCIM sync for ${model} ${object_id}; provider will continue with its normal sync cycle." >&2
fi
}
wait_for_authentik
group_mapping_pk="$(lookup_scim_mapping_pk "goauthentik.io/providers/scim/group")"
case "$user_identifier" in
email)
user_mapping_expression=$'# Some implementations require givenName and familyName to be set\ngivenName, familyName = request.user.name, " "\nformatted = request.user.name + " "\nif " " in request.user.name:\n givenName, _, familyName = request.user.name.partition(" ")\n formatted = request.user.name\n\navatar = request.user.avatar\nphotos = None\nif "://" in avatar:\n photos = [{"value": avatar, "type": "photo"}]\n\nlocale = request.user.locale()\nif locale == "":\n locale = None\n\nemails = []\nif request.user.email != "":\n emails = [{\n "value": request.user.email,\n "type": "other",\n "primary": True,\n }]\n\nidentifier = request.user.email\nif identifier == "":\n identifier = request.user.username\n\nreturn {\n "userName": identifier,\n "name": {\n "formatted": formatted,\n "givenName": givenName,\n "familyName": familyName,\n },\n "displayName": request.user.name,\n "photos": photos,\n "locale": locale,\n "active": request.user.is_active,\n "emails": emails,\n}'
;;
username)
user_mapping_expression=$'# Some implementations require givenName and familyName to be set\ngivenName, familyName = request.user.name, " "\nformatted = request.user.name + " "\nif " " in request.user.name:\n givenName, _, familyName = request.user.name.partition(" ")\n formatted = request.user.name\n\navatar = request.user.avatar\nphotos = None\nif "://" in avatar:\n photos = [{"value": avatar, "type": "photo"}]\n\nlocale = request.user.locale()\nif locale == "":\n locale = None\n\nemails = []\nif request.user.email != "":\n emails = [{\n "value": request.user.email,\n "type": "other",\n "primary": True,\n }]\nreturn {\n "userName": request.user.username,\n "name": {\n "formatted": formatted,\n "givenName": givenName,\n "familyName": familyName,\n },\n "displayName": request.user.name,\n "photos": photos,\n "locale": locale,\n "active": request.user.is_active,\n "emails": emails,\n}'
;;
*)
echo "error: unsupported AUTHENTIK_LINEAR_SCIM_USER_IDENTIFIER value: ${user_identifier}" >&2
exit 1
;;
esac
user_mapping_pk="$(reconcile_property_mapping "Burrow Linear SCIM User" "$user_mapping_expression")"
if [[ -z "$user_mapping_pk" || -z "$group_mapping_pk" ]]; then
echo "error: could not resolve managed Authentik SCIM property mappings" >&2
exit 1
fi
owner_group_pk="$(ensure_group "$owner_group")"
admin_group_pk="$(ensure_group "$admin_group")"
guest_group_pk="$(ensure_group "$guest_group")"
provider_payload="$(
jq -n \
--arg name "$provider_name" \
--arg url "$scim_url" \
--arg token "$(tr -d '\r\n' < "$scim_token_file")" \
--arg user_mapping_pk "$user_mapping_pk" \
--arg group_mapping_pk "$group_mapping_pk" \
--arg owner_group_pk "$owner_group_pk" \
--arg admin_group_pk "$admin_group_pk" \
--arg guest_group_pk "$guest_group_pk" \
'{
name: $name,
url: $url,
token: $token,
auth_mode: "token",
verify_certificates: true,
compatibility_mode: "default",
property_mappings: [$user_mapping_pk],
property_mappings_group: [$group_mapping_pk],
group_filters: [
$owner_group_pk,
$admin_group_pk,
$guest_group_pk
],
dry_run: false
}'
)"
existing_provider="$(lookup_scim_provider)"
if [[ -n "$existing_provider" ]]; then
provider_pk="$(printf '%s\n' "$existing_provider" | jq -r '.pk')"
api PATCH "/api/v3/providers/scim/${provider_pk}/" "$provider_payload" >/dev/null
else
provider_pk="$(
api POST "/api/v3/providers/scim/" "$provider_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "${provider_pk:-}" ]]; then
echo "error: Linear SCIM provider did not return a primary key" >&2
exit 1
fi
application="$(lookup_application)"
if [[ -z "$application" ]]; then
echo "error: could not resolve Authentik application ${application_slug}" >&2
exit 1
fi
application_payload="$(
printf '%s\n' "$application" \
| jq \
--arg provider_pk "$provider_pk" \
'{
name: .name,
slug: .slug,
provider: .provider,
backchannel_providers: ((.backchannel_providers // []) + [($provider_pk | tonumber)] | unique),
open_in_new_tab: .open_in_new_tab,
meta_launch_url: .meta_launch_url,
policy_engine_mode: .policy_engine_mode
}'
)"
api PATCH "/api/v3/core/applications/${application_slug}/" "$application_payload" >/dev/null
group_pks_json="$(jq -cn --arg owner "$owner_group_pk" --arg admin "$admin_group_pk" --arg guest "$guest_group_pk" '[$owner, $admin, $guest]')"
user_pks_json="$(
api GET "/api/v3/core/users/?page_size=200" \
| jq -c \
--argjson group_pks "$group_pks_json" \
'[.results[]?
| select(
([((.groups // [])[] | tostring)] as $user_groups
| ($group_pks | map(. as $wanted | ($user_groups | index($wanted)) != null) | any))
)
| .pk]'
)"
while IFS= read -r group_pk; do
[[ -z "$group_pk" ]] && continue
sync_object "$provider_pk" "authentik.core.models.Group" "$group_pk"
done < <(printf '%s\n' "$group_pks_json" | jq -r '.[]')
while IFS= read -r user_pk; do
[[ -z "$user_pk" ]] && continue
sync_object "$provider_pk" "authentik.core.models.User" "$user_pk"
done < <(printf '%s\n' "$user_pks_json" | jq -r '.[]')
status_json="$(api GET "/api/v3/providers/scim/${provider_pk}/sync/status/" || true)"
if ! printf '%s\n' "$status_json" | jq -e 'has("last_sync_status")' >/dev/null 2>&1; then
echo "warning: could not read Linear SCIM sync status for provider ${provider_pk}; keeping reconciled configuration." >&2
fi
echo "Synced Authentik Linear SCIM provider ${provider_name} (${provider_pk}) with groups ${owner_group}, ${admin_group}, ${guest_group}."
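The jq filter that builds `user_pks_json` keeps only users whose group memberships intersect the three reconciled Linear groups. A pure-Python sketch of that selection; the `users` list is sample data in the shape of the Authentik users API response:

```python
def users_to_sync(users: list[dict], group_pks: list[str]) -> list:
    """Return pks of users belonging to at least one of the target groups."""
    wanted = set(group_pks)
    return [
        u["pk"]
        for u in users
        # Group pks are stringified before comparison, as in the jq filter.
        if wanted & {str(g) for g in (u.get("groups") or [])}
    ]

users = [
    {"pk": 1, "groups": ["g-owner"]},
    {"pk": 2, "groups": ["other"]},
    {"pk": 3, "groups": []},
]
print(users_to_sync(users, ["g-owner", "g-admin", "g-guest"]))  # [1]
```

Limiting the immediate `sync/object` triggers to these users avoids pushing the whole directory at Linear's SCIM endpoint on every reconcile.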


@@ -0,0 +1,309 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
provider_slug="${AUTHENTIK_TAILNET_PROVIDER_SLUG:-ts}"
provider_slugs_json="${AUTHENTIK_TAILNET_PROVIDER_SLUGS_JSON:-}"
authentication_flow_name="${AUTHENTIK_TAILNET_AUTHENTICATION_FLOW_NAME:-Burrow Tailnet Authentication}"
authentication_flow_slug="${AUTHENTIK_TAILNET_AUTHENTICATION_FLOW_SLUG:-burrow-tailnet-authentication}"
identification_stage_name="${AUTHENTIK_TAILNET_IDENTIFICATION_STAGE_NAME:-burrow-tailnet-identification-stage}"
password_stage_name="${AUTHENTIK_TAILNET_PASSWORD_STAGE_NAME:-burrow-tailnet-password-stage}"
user_login_stage_name="${AUTHENTIK_TAILNET_USER_LOGIN_STAGE_NAME:-burrow-tailnet-user-login-stage}"
google_source_slug="${AUTHENTIK_TAILNET_GOOGLE_SOURCE_SLUG:-google}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-tailnet-auth-flow.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
Optional environment:
AUTHENTIK_URL
AUTHENTIK_TAILNET_PROVIDER_SLUG
AUTHENTIK_TAILNET_PROVIDER_SLUGS_JSON
AUTHENTIK_TAILNET_AUTHENTICATION_FLOW_NAME
AUTHENTIK_TAILNET_AUTHENTICATION_FLOW_SLUG
AUTHENTIK_TAILNET_IDENTIFICATION_STAGE_NAME
AUTHENTIK_TAILNET_PASSWORD_STAGE_NAME
AUTHENTIK_TAILNET_USER_LOGIN_STAGE_NAME
AUTHENTIK_TAILNET_GOOGLE_SOURCE_SLUG
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if [[ -n "$provider_slugs_json" ]]; then
if ! printf '%s' "$provider_slugs_json" | jq -e 'type == "array" and length > 0 and all(.[]; type == "string" and length > 0)' >/dev/null; then
echo "error: AUTHENTIK_TAILNET_PROVIDER_SLUGS_JSON must be a non-empty JSON array of strings" >&2
exit 1
fi
else
provider_slugs_json="$(jq -cn --arg slug "$provider_slug" '[$slug]')"
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
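The readiness loop above probes the health endpoint up to 90 times with a 2-second pause, so startup waits at most about three minutes. The same bounded-retry shape can be sketched with a stub in place of the curl probe (the `check_ready` stub and `attempts` counter are illustrative, not part of the script):

```shell
#!/usr/bin/env bash
set -euo pipefail

attempts=0
# Stub standing in for the curl health probe; succeeds on the third try.
check_ready() {
  attempts=$((attempts + 1))
  [[ "$attempts" -ge 3 ]]
}

wait_until_ready() {
  for _ in $(seq 1 90); do
    if check_ready; then
      return 0
    fi
    # The real script sleeps 2s between probes; omitted to keep the sketch fast.
  done
  echo "error: service never became ready" >&2
  return 1
}

wait_until_ready && echo "ready after ${attempts} probes"
```

Because the loop is bounded, a dead Authentik instance fails the sync loudly instead of hanging the bootstrap forever.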
lookup_stage_by_name() {
local path="$1"
local name="$2"
api GET "${path}?page_size=200" \
| jq -c --arg name "$name" '.results[]? | select(.name == $name)' \
| head -n1
}
lookup_flow_pk() {
local slug="$1"
api GET "/api/v3/flows/instances/?slug=${slug}" \
| jq -r --arg slug "$slug" '.results[]? | select(.slug == $slug) | .pk // empty' \
| head -n1
}
lookup_source_pk() {
local slug="$1"
api GET "/api/v3/sources/oauth/?page_size=200&slug=${slug}" \
| jq -r --arg slug "$slug" '.results[]? | select(.slug == $slug) | .pk // empty' \
| head -n1
}
ensure_password_stage() {
local existing payload stage_pk
existing="$(lookup_stage_by_name "/api/v3/stages/password/" "$password_stage_name")"
payload="$(
jq -cn \
--arg name "$password_stage_name" \
'{
name: $name,
backends: [
"authentik.core.auth.InbuiltBackend",
"authentik.core.auth.TokenBackend"
],
allow_show_password: false,
failed_attempts_before_cancel: 5
}'
)"
if [[ -n "$existing" ]]; then
stage_pk="$(printf '%s\n' "$existing" | jq -r '.pk')"
api PATCH "/api/v3/stages/password/${stage_pk}/" "$payload" >/dev/null
else
stage_pk="$(
api POST "/api/v3/stages/password/" "$payload" \
| jq -r '.pk // empty'
)"
fi
printf '%s\n' "$stage_pk"
}
ensure_identification_stage() {
local password_stage_pk="$1"
local google_source_pk="$2"
local existing payload stage_pk sources_json
existing="$(lookup_stage_by_name "/api/v3/stages/identification/" "$identification_stage_name")"
if [[ -n "$google_source_pk" ]]; then
sources_json="$(jq -cn --arg source "$google_source_pk" '[$source]')"
else
sources_json='[]'
fi
payload="$(
jq -cn \
--arg name "$identification_stage_name" \
--arg password_stage "$password_stage_pk" \
--argjson sources "$sources_json" \
'{
name: $name,
user_fields: ["username", "email"],
password_stage: $password_stage,
case_insensitive_matching: true,
show_matched_user: true,
sources: $sources,
show_source_labels: true,
pretend_user_exists: false,
enable_remember_me: false
}'
)"
if [[ -n "$existing" ]]; then
stage_pk="$(printf '%s\n' "$existing" | jq -r '.pk')"
api PATCH "/api/v3/stages/identification/${stage_pk}/" "$payload" >/dev/null
else
stage_pk="$(
api POST "/api/v3/stages/identification/" "$payload" \
| jq -r '.pk // empty'
)"
fi
printf '%s\n' "$stage_pk"
}
ensure_user_login_stage() {
local existing payload stage_pk
existing="$(lookup_stage_by_name "/api/v3/stages/user_login/" "$user_login_stage_name")"
payload="$(
jq -cn \
--arg name "$user_login_stage_name" \
'{
name: $name,
session_duration: "hours=12",
terminate_other_sessions: false,
remember_me_offset: "seconds=0",
network_binding: "no_binding",
geoip_binding: "no_binding"
}'
)"
if [[ -n "$existing" ]]; then
stage_pk="$(printf '%s\n' "$existing" | jq -r '.pk')"
api PATCH "/api/v3/stages/user_login/${stage_pk}/" "$payload" >/dev/null
else
stage_pk="$(
api POST "/api/v3/stages/user_login/" "$payload" \
| jq -r '.pk // empty'
)"
fi
printf '%s\n' "$stage_pk"
}
ensure_authentication_flow() {
local existing_pk payload
existing_pk="$(lookup_flow_pk "$authentication_flow_slug")"
payload="$(
jq -cn \
--arg name "$authentication_flow_name" \
--arg slug "$authentication_flow_slug" \
'{
name: $name,
title: $name,
slug: $slug,
designation: "authentication",
policy_engine_mode: "any",
layout: "stacked"
}'
)"
if [[ -n "$existing_pk" ]]; then
api PATCH "/api/v3/flows/instances/${authentication_flow_slug}/" "$payload" >/dev/null
printf '%s\n' "$existing_pk"
else
api POST "/api/v3/flows/instances/" "$payload" \
| jq -r '.pk // empty'
fi
}
ensure_flow_binding() {
local flow_pk="$1"
local stage_pk="$2"
local order="$3"
local existing payload binding_pk
existing="$(
api GET "/api/v3/flows/bindings/?target=${flow_pk}&stage=${stage_pk}&page_size=200" \
| jq -c '.results[]?' \
| head -n1
)"
payload="$(
jq -cn \
--arg target "$flow_pk" \
--arg stage "$stage_pk" \
--argjson order "$order" \
'{
target: $target,
stage: $stage,
order: $order,
policy_engine_mode: "any"
}'
)"
if [[ -n "$existing" ]]; then
binding_pk="$(printf '%s\n' "$existing" | jq -r '.pk')"
api PATCH "/api/v3/flows/bindings/${binding_pk}/" "$payload" >/dev/null
else
api POST "/api/v3/flows/bindings/" "$payload" >/dev/null
fi
}
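Every `ensure_*` helper above follows one reconcile shape: look up the existing object, PATCH it if found, POST it otherwise, and always emit the resulting pk so reruns are idempotent. A stripped-down sketch of that shape against an in-memory store (the `store` array and `reconcile` name are illustrative; the real helpers talk to the Authentik API):

```shell
#!/usr/bin/env bash
set -euo pipefail

declare -A store=()
next_pk=1

# Mirror of the ensure_* shape: look up, patch-or-create, always emit the pk.
reconcile() {
  local name="$1" pk
  pk="${store[$name]:-}"
  if [[ -n "$pk" ]]; then
    # PATCH path: the object already exists; its payload would be re-applied.
    :
  else
    # POST path: create the object and record its new pk.
    pk="$next_pk"
    next_pk=$((next_pk + 1))
    store["$name"]="$pk"
  fi
  printf '%s\n' "$pk"
}

reconcile burrow-password-stage   # first run creates: prints 1
reconcile burrow-password-stage   # rerun reconciles the same object: prints 1 again
```

Running the script twice therefore converges on the same set of stages and bindings rather than accumulating duplicates.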
wait_for_authentik
mapfile -t provider_pks < <(
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -r --argjson provider_slugs "$provider_slugs_json" '
.results[]?
| select(
((.assigned_application_slug // empty) as $assigned | ($provider_slugs | index($assigned)) != null)
or ((.slug // empty) as $slug | ($provider_slugs | index($slug)) != null)
)
| .pk // empty
'
)
if [[ "${#provider_pks[@]}" -eq 0 ]]; then
echo "error: could not resolve any Authentik Tailnet OAuth providers from ${provider_slugs_json}" >&2
exit 1
fi
google_source_pk="$(lookup_source_pk "$google_source_slug" || true)"
password_stage_pk="$(ensure_password_stage)"
identification_stage_pk="$(ensure_identification_stage "$password_stage_pk" "$google_source_pk")"
user_login_stage_pk="$(ensure_user_login_stage)"
authentication_flow_pk="$(ensure_authentication_flow)"
ensure_flow_binding "$authentication_flow_pk" "$identification_stage_pk" 10
ensure_flow_binding "$authentication_flow_pk" "$user_login_stage_pk" 30
for provider_pk in "${provider_pks[@]}"; do
api PATCH "/api/v3/providers/oauth2/${provider_pk}/" "$(
jq -cn --arg flow "$authentication_flow_pk" '{authentication_flow: $flow}'
)" >/dev/null
done
echo "Synced Burrow Tailnet authentication flow for providers ${provider_slugs_json}."



@ -0,0 +1,369 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
application_slug="${AUTHENTIK_TAILSCALE_APPLICATION_SLUG:-tailscale}"
application_name="${AUTHENTIK_TAILSCALE_APPLICATION_NAME:-Tailscale}"
provider_name="${AUTHENTIK_TAILSCALE_PROVIDER_NAME:-Tailscale}"
template_slug="${AUTHENTIK_TAILSCALE_TEMPLATE_SLUG:-ts}"
client_id="${AUTHENTIK_TAILSCALE_CLIENT_ID:-tailscale.burrow.net}"
client_secret="${AUTHENTIK_TAILSCALE_CLIENT_SECRET:-}"
launch_url="${AUTHENTIK_TAILSCALE_LAUNCH_URL:-https://login.tailscale.com/start/oidc}"
access_group="${AUTHENTIK_TAILSCALE_ACCESS_GROUP:-}"
default_external_application_slug="${AUTHENTIK_DEFAULT_EXTERNAL_APPLICATION_SLUG:-}"
redirect_uris_json="${AUTHENTIK_TAILSCALE_REDIRECT_URIS_JSON:-[
\"https://login.tailscale.com/a/oauth_response\"
]}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-tailscale-oidc.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
AUTHENTIK_TAILSCALE_CLIENT_SECRET
Optional environment:
AUTHENTIK_URL
AUTHENTIK_TAILSCALE_APPLICATION_SLUG
AUTHENTIK_TAILSCALE_APPLICATION_NAME
AUTHENTIK_TAILSCALE_PROVIDER_NAME
AUTHENTIK_TAILSCALE_TEMPLATE_SLUG
AUTHENTIK_TAILSCALE_CLIENT_ID
AUTHENTIK_TAILSCALE_LAUNCH_URL
AUTHENTIK_TAILSCALE_REDIRECT_URIS_JSON
AUTHENTIK_TAILSCALE_ACCESS_GROUP
AUTHENTIK_DEFAULT_EXTERNAL_APPLICATION_SLUG
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
if [[ -z "$client_secret" || "$client_secret" == PENDING* ]]; then
echo "Tailscale OIDC client secret is not configured; skipping Authentik Tailscale sync." >&2
exit 0
fi
if ! printf '%s' "$redirect_uris_json" | jq -e 'type == "array" and length > 0' >/dev/null; then
echo "error: AUTHENTIK_TAILSCALE_REDIRECT_URIS_JSON must be a non-empty JSON array" >&2
exit 1
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
api_with_status() {
local method="$1"
local path="$2"
local data="${3:-}"
local response_file status
response_file="$(mktemp)"
trap 'rm -f "$response_file"' RETURN
if [[ -n "$data" ]]; then
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
)"
else
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
)"
fi
printf '%s\n' "$status"
cat "$response_file"
}
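`api_with_status` multiplexes two values through a single stdout stream: the HTTP status on line 1 and the raw response body after it. Callers split them back apart with `sed`, as in this self-contained sketch (the canned `result` stands in for a real API response):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Canned two-part payload in api_with_status's format: status line, then body.
result=$'201\n{"pk": 42}'

status="$(printf '%s\n' "$result" | sed -n '1p')"
body="$(printf '%s\n' "$result" | sed '1d')"

echo "status=${status}"
echo "body=${body}"
```

This keeps the helper usable inside command substitution, where separate file descriptors or global variables would be awkward.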
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
wait_for_authentik
lookup_group_pk() {
local group_name="$1"
api GET "/api/v3/core/groups/?page_size=200" \
| jq -r --arg group_name "$group_name" '.results[]? | select(.name == $group_name) | .pk // empty' \
| head -n1
}
lookup_application_pk() {
local slug="$1"
local application_pk lookup_result lookup_status
application_pk="$(
api GET "/api/v3/core/applications/?page_size=200" \
| jq -r --arg slug "$slug" '.results[]? | select(.slug == $slug) | .pk // empty' \
| head -n1
)"
if [[ -n "$application_pk" ]]; then
printf '%s\n' "$application_pk"
return 0
fi
lookup_result="$(api_with_status GET "/api/v3/core/applications/${slug}/")"
lookup_status="$(printf '%s\n' "$lookup_result" | sed -n '1p')"
if [[ "$lookup_status" =~ ^20[01]$ ]]; then
printf '%s\n' "$lookup_result" | sed '1d' | jq -r '.pk // empty'
fi
}
ensure_application_group_binding() {
local application_slug="$1"
local group_name="$2"
local application_pk group_pk existing payload binding_pk
application_pk="$(lookup_application_pk "$application_slug")"
if [[ -z "$application_pk" ]]; then
echo "warning: could not resolve Authentik application ${application_slug}; skipping application group binding" >&2
return 0
fi
group_pk="$(lookup_group_pk "$group_name")"
if [[ -z "$group_pk" ]]; then
echo "error: could not resolve Authentik group ${group_name}" >&2
exit 1
fi
existing="$(
api GET "/api/v3/policies/bindings/?page_size=200&target=${application_pk}" \
| jq -c --arg group_pk "$group_pk" '.results[]? | select(.group == $group_pk)' \
| head -n1
)"
payload="$(
jq -cn \
--arg target "$application_pk" \
--arg group "$group_pk" \
'{
group: $group,
target: $target,
negate: false,
enabled: true,
order: 100,
timeout: 30,
failure_result: false
}'
)"
if [[ -n "$existing" ]]; then
binding_pk="$(printf '%s\n' "$existing" | jq -r '.pk')"
api PATCH "/api/v3/policies/bindings/${binding_pk}/" "$payload" >/dev/null
else
api POST "/api/v3/policies/bindings/" "$payload" >/dev/null
fi
}
ensure_default_external_application() {
local application_slug="$1"
local application_pk default_brand brand_payload
application_pk="$(lookup_application_pk "$application_slug")"
if [[ -z "$application_pk" ]]; then
echo "error: could not resolve Authentik application ${application_slug} for brand default application" >&2
exit 1
fi
default_brand="$(
api GET "/api/v3/core/brands/?page_size=200" \
| jq -c '.results[]? | select(.default == true)' \
| head -n1
)"
if [[ -z "$default_brand" ]]; then
echo "warning: could not resolve the default Authentik brand; skipping external default application" >&2
return 0
fi
brand_payload="$(
printf '%s\n' "$default_brand" \
| jq --arg application_pk "$application_pk" '.default_application = $application_pk'
)"
api PUT "/api/v3/core/brands/$(printf '%s\n' "$default_brand" | jq -r '.brand_uuid')/" "$brand_payload" >/dev/null
}
template_provider="$(
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -c --arg template_slug "$template_slug" '.results[]? | select(.assigned_application_slug == $template_slug)' \
| head -n1
)"
if [[ -z "$template_provider" ]]; then
echo "error: could not resolve the Authentik OAuth provider template ${template_slug}" >&2
exit 1
fi
authorization_flow="$(printf '%s\n' "$template_provider" | jq -r '.authorization_flow')"
invalidation_flow="$(printf '%s\n' "$template_provider" | jq -r '.invalidation_flow')"
property_mappings="$(printf '%s\n' "$template_provider" | jq -c '.property_mappings')"
signing_key="$(printf '%s\n' "$template_provider" | jq -r '.signing_key')"
provider_payload="$(
jq -n \
--arg name "$provider_name" \
--arg authorization_flow "$authorization_flow" \
--arg invalidation_flow "$invalidation_flow" \
--arg client_id "$client_id" \
--arg client_secret "$client_secret" \
--arg signing_key "$signing_key" \
--argjson property_mappings "$property_mappings" \
--argjson redirect_uris "$redirect_uris_json" \
'{
name: $name,
authorization_flow: $authorization_flow,
invalidation_flow: $invalidation_flow,
client_type: "confidential",
client_id: $client_id,
client_secret: $client_secret,
include_claims_in_id_token: true,
redirect_uris: ($redirect_uris | map({matching_mode: "strict", url: .})),
property_mappings: $property_mappings,
signing_key: $signing_key,
issuer_mode: "per_provider",
sub_mode: "hashed_user_id"
}'
)"
existing_provider="$(
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -c \
--arg application_slug "$application_slug" \
--arg provider_name "$provider_name" \
'.results[]? | select(.assigned_application_slug == $application_slug or .name == $provider_name)' \
| head -n1
)"
if [[ -n "$existing_provider" ]]; then
provider_pk="$(printf '%s\n' "$existing_provider" | jq -r '.pk')"
api PATCH "/api/v3/providers/oauth2/${provider_pk}/" "$provider_payload" >/dev/null
else
provider_pk="$(
api POST "/api/v3/providers/oauth2/" "$provider_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "${provider_pk:-}" ]]; then
echo "error: Tailscale OIDC provider did not return a primary key" >&2
exit 1
fi
application_payload="$(
jq -n \
--arg name "$application_name" \
--arg slug "$application_slug" \
--arg provider "$provider_pk" \
--arg launch_url "$launch_url" \
'{
name: $name,
slug: $slug,
provider: ($provider | tonumber),
meta_launch_url: $launch_url,
open_in_new_tab: true,
policy_engine_mode: "any"
}'
)"
existing_application="$(
api GET "/api/v3/core/applications/?page_size=200" \
| jq -c --arg slug "$application_slug" '.results[]? | select(.slug == $slug)' \
| head -n1
)"
if [[ -n "$existing_application" ]]; then
application_pk="$(printf '%s\n' "$existing_application" | jq -r '.pk')"
api PATCH "/api/v3/core/applications/${application_pk}/" "$application_payload" >/dev/null
else
create_application_result="$(
api_with_status POST "/api/v3/core/applications/" "$application_payload"
)"
create_application_status="$(printf '%s\n' "$create_application_result" | sed -n '1p')"
create_application_body="$(printf '%s\n' "$create_application_result" | sed '1d')"
if [[ "$create_application_status" =~ ^20[01]$ ]]; then
application_pk="$(printf '%s\n' "$create_application_body" | jq -r '.pk // empty')"
elif [[ "$create_application_status" == "400" ]] && printf '%s\n' "$create_application_body" | jq -e '
(.slug // [] | index("Application with this slug already exists.")) != null
or (.provider // [] | index("Application with this provider already exists.")) != null
' >/dev/null; then
application_pk="existing-duplicate"
else
printf '%s\n' "$create_application_body" >&2
echo "error: could not reconcile Authentik application ${application_slug}" >&2
exit 1
fi
fi
if [[ -z "${application_pk:-}" ]]; then
echo "error: Tailscale OIDC application did not return a primary key" >&2
exit 1
fi
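The 400 branch above deliberately treats "already exists" validation errors as success so that reruns stay idempotent even when the list endpoint missed the object. A jq-free sketch of the same three-way classification over canned statuses and bodies (the sample bodies are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

classify() {
  local status="$1" body="$2"
  if [[ "$status" =~ ^20[01]$ ]]; then
    echo "created"
  elif [[ "$status" == "400" ]] && grep -q 'already exists' <<<"$body"; then
    # Duplicate-object validation error: the desired state already holds.
    echo "already-present"
  else
    echo "failed"
  fi
}

classify 201 '{"pk": 7}'
classify 400 '{"slug":["Application with this slug already exists."]}'
classify 400 '{"slug":["Enter a valid slug."]}'
```

Only genuinely unexpected errors fall through to the fatal branch; duplicate-slug and duplicate-provider responses are absorbed.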
if [[ -n "$access_group" ]]; then
ensure_application_group_binding "$application_slug" "$access_group"
fi
if [[ -n "$default_external_application_slug" ]]; then
ensure_default_external_application "$default_external_application_slug"
fi
for _ in $(seq 1 30); do
if curl -fsS "${authentik_url}/application/o/${application_slug}/.well-known/openid-configuration" >/dev/null 2>&1; then
echo "Synced Authentik Tailscale OIDC application ${application_slug} (${application_name})."
exit 0
fi
sleep 2
done
echo "warning: Tailscale OIDC issuer document for ${application_slug} was not immediately readable; keeping reconciled config." >&2
echo "Synced Authentik Tailscale OIDC application ${application_slug} (${application_name})."


@ -0,0 +1,412 @@
#!/usr/bin/env bash
set -euo pipefail
authentik_url="${AUTHENTIK_URL:-https://auth.burrow.net}"
bootstrap_token="${AUTHENTIK_BOOTSTRAP_TOKEN:-}"
application_slug="${AUTHENTIK_ZULIP_APPLICATION_SLUG:-zulip}"
application_name="${AUTHENTIK_ZULIP_APPLICATION_NAME:-Zulip}"
provider_name="${AUTHENTIK_ZULIP_PROVIDER_NAME:-Zulip}"
acs_url="${AUTHENTIK_ZULIP_ACS_URL:-https://chat.burrow.net/complete/saml/}"
audience="${AUTHENTIK_ZULIP_AUDIENCE:-https://chat.burrow.net}"
launch_url="${AUTHENTIK_ZULIP_LAUNCH_URL:-https://chat.burrow.net/}"
access_group="${AUTHENTIK_ZULIP_ACCESS_GROUP:-}"
admin_group="${AUTHENTIK_ZULIP_ADMIN_GROUP:-}"
issuer="${AUTHENTIK_ZULIP_ISSUER:-$authentik_url}"
usage() {
cat <<'EOF'
Usage: Scripts/authentik-sync-zulip-saml.sh
Required environment:
AUTHENTIK_BOOTSTRAP_TOKEN
Optional environment:
AUTHENTIK_URL
AUTHENTIK_ZULIP_APPLICATION_SLUG
AUTHENTIK_ZULIP_APPLICATION_NAME
AUTHENTIK_ZULIP_PROVIDER_NAME
AUTHENTIK_ZULIP_ACS_URL
AUTHENTIK_ZULIP_AUDIENCE
AUTHENTIK_ZULIP_LAUNCH_URL
AUTHENTIK_ZULIP_ACCESS_GROUP
AUTHENTIK_ZULIP_ADMIN_GROUP
AUTHENTIK_ZULIP_ISSUER
EOF
}
if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
usage
exit 0
fi
if [[ -z "$bootstrap_token" ]]; then
echo "error: AUTHENTIK_BOOTSTRAP_TOKEN is required" >&2
exit 1
fi
api() {
local method="$1"
local path="$2"
local data="${3:-}"
if [[ -n "$data" ]]; then
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
else
curl -fsS \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
fi
}
api_with_status() {
local method="$1"
local path="$2"
local data="${3:-}"
local response_file status
response_file="$(mktemp)"
trap 'rm -f "$response_file"' RETURN
if [[ -n "$data" ]]; then
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
-H "Content-Type: application/json" \
-d "$data" \
"${authentik_url}${path}"
)"
else
status="$(
curl -sS \
-o "$response_file" \
-w '%{http_code}' \
-X "$method" \
-H "Authorization: Bearer ${bootstrap_token}" \
"${authentik_url}${path}"
)"
fi
printf '%s\n' "$status"
cat "$response_file"
}
wait_for_authentik() {
for _ in $(seq 1 90); do
if curl -fsS "${authentik_url}/-/health/ready/" >/dev/null 2>&1; then
return 0
fi
sleep 2
done
echo "error: Authentik did not become ready at ${authentik_url}" >&2
exit 1
}
lookup_oauth_template_field() {
local field="$1"
api GET "/api/v3/providers/oauth2/?page_size=200" \
| jq -r --arg field "$field" '.results[]? | select(.assigned_application_slug == "ts") | .[$field] // empty' \
| head -n1
}
lookup_group_pk() {
local group_name="$1"
api GET "/api/v3/core/groups/?page_size=200" \
| jq -r --arg group_name "$group_name" '.results[]? | select(.name == $group_name) | .pk // empty' \
| head -n1
}
lookup_application_pk() {
local slug="$1"
api GET "/api/v3/core/applications/?page_size=200" \
| jq -r --arg slug "$slug" '.results[]? | select(.slug == $slug) | .pk // empty' \
| head -n1
}
ensure_application_group_binding() {
local application_slug="$1"
local group_name="$2"
local application_pk group_pk existing payload binding_pk
application_pk="$(lookup_application_pk "$application_slug")"
if [[ -z "$application_pk" ]]; then
echo "warning: could not resolve Authentik application ${application_slug}; skipping application group binding" >&2
return 0
fi
group_pk="$(lookup_group_pk "$group_name")"
if [[ -z "$group_pk" ]]; then
echo "error: could not resolve Authentik group ${group_name}" >&2
exit 1
fi
existing="$(
api GET "/api/v3/policies/bindings/?page_size=200&target=${application_pk}" \
| jq -c --arg group_pk "$group_pk" '.results[]? | select(.group == $group_pk)' \
| head -n1
)"
payload="$(
jq -cn \
--arg target "$application_pk" \
--arg group "$group_pk" \
'{
group: $group,
target: $target,
negate: false,
enabled: true,
order: 100,
timeout: 30,
failure_result: false
}'
)"
if [[ -n "$existing" ]]; then
binding_pk="$(printf '%s\n' "$existing" | jq -r '.pk')"
api PATCH "/api/v3/policies/bindings/${binding_pk}/" "$payload" >/dev/null
else
api POST "/api/v3/policies/bindings/" "$payload" >/dev/null
fi
}
reconcile_property_mapping() {
local name="$1"
local saml_name="$2"
local friendly_name="$3"
local expression="$4"
local payload existing_pk
payload="$(
jq -n \
--arg name "$name" \
--arg saml_name "$saml_name" \
--arg friendly_name "$friendly_name" \
--arg expression "$expression" \
'{
name: $name,
saml_name: $saml_name,
friendly_name: $friendly_name,
expression: $expression
}'
)"
existing_pk="$(
api GET "/api/v3/propertymappings/provider/saml/?page_size=200" \
| jq -r --arg name "$name" '.results[]? | select(.name == $name) | .pk' \
| head -n1
)"
if [[ -n "$existing_pk" ]]; then
api PATCH "/api/v3/propertymappings/provider/saml/${existing_pk}/" "$payload" >/dev/null
printf '%s\n' "$existing_pk"
else
api POST "/api/v3/propertymappings/provider/saml/" "$payload" | jq -r '.pk // empty'
fi
}
wait_for_authentik
authorization_flow="$(lookup_oauth_template_field authorization_flow)"
invalidation_flow="$(lookup_oauth_template_field invalidation_flow)"
signing_kp="$(lookup_oauth_template_field signing_key)"
if [[ -z "$authorization_flow" || -z "$invalidation_flow" || -z "$signing_kp" ]]; then
echo "error: could not resolve Authentik provider defaults from Burrow Tailnet template" >&2
exit 1
fi
email_mapping_pk="$(
reconcile_property_mapping \
"Burrow Zulip SAML Email" \
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" \
"email" \
'return request.user.email'
)"
name_mapping_pk="$(
reconcile_property_mapping \
"Burrow Zulip SAML Name" \
"name" \
"name" \
'return request.user.name or request.user.username'
)"
first_name_mapping_pk="$(
reconcile_property_mapping \
"Burrow Zulip SAML First Name" \
"firstName" \
"firstName" \
$'parts = (request.user.name or "").split(" ", 1)\nif len(parts) > 0 and parts[0]:\n return parts[0]\nreturn request.user.username'
)"
last_name_mapping_pk="$(
reconcile_property_mapping \
"Burrow Zulip SAML Last Name" \
"lastName" \
"lastName" \
$'parts = (request.user.name or "").rsplit(" ", 1)\nif len(parts) == 2 and parts[1]:\n return parts[1]\nreturn request.user.username'
)"
role_mapping_pk=""
if [[ -n "$admin_group" ]]; then
role_mapping_pk="$(
reconcile_property_mapping \
"Burrow Zulip SAML Role" \
"zulip_role" \
"zulip_role" \
$'admin_group = "'"${admin_group}"$'"\nif any(group.name == admin_group for group in request.user.ak_groups.all()):\n return "owner"\nreturn None'
)"
fi
if [[ -z "$email_mapping_pk" || -z "$name_mapping_pk" || -z "$first_name_mapping_pk" || -z "$last_name_mapping_pk" ]]; then
echo "error: failed to reconcile Zulip SAML property mappings" >&2
exit 1
fi
provider_payload="$(
jq -n \
--arg name "$provider_name" \
--arg authorization_flow "$authorization_flow" \
--arg invalidation_flow "$invalidation_flow" \
--arg acs_url "$acs_url" \
--arg audience "$audience" \
--arg issuer "$issuer" \
--arg signing_kp "$signing_kp" \
--arg name_id_mapping "$email_mapping_pk" \
--arg email_mapping "$email_mapping_pk" \
--arg name_mapping "$name_mapping_pk" \
--arg first_name_mapping "$first_name_mapping_pk" \
--arg last_name_mapping "$last_name_mapping_pk" \
--arg role_mapping "$role_mapping_pk" \
'{
name: $name,
authorization_flow: $authorization_flow,
invalidation_flow: $invalidation_flow,
acs_url: $acs_url,
audience: $audience,
issuer: $issuer,
signing_kp: $signing_kp,
sign_assertion: true,
sign_response: true,
sp_binding: "post",
name_id_mapping: $name_id_mapping,
property_mappings: [
$email_mapping,
$name_mapping,
$first_name_mapping,
$last_name_mapping
] + (if $role_mapping != "" then [$role_mapping] else [] end)
}'
)"
existing_provider="$(
api GET "/api/v3/providers/saml/?page_size=200" \
| jq -c \
--arg application_slug "$application_slug" \
--arg provider_name "$provider_name" \
'.results[]? | select(.assigned_application_slug == $application_slug or .name == $provider_name)' \
| head -n1
)"
if [[ -n "$existing_provider" ]]; then
provider_pk="$(printf '%s\n' "$existing_provider" | jq -r '.pk')"
api PATCH "/api/v3/providers/saml/${provider_pk}/" "$provider_payload" >/dev/null
else
provider_pk="$(
api POST "/api/v3/providers/saml/" "$provider_payload" \
| jq -r '.pk // empty'
)"
fi
if [[ -z "${provider_pk:-}" ]]; then
echo "error: Zulip SAML provider did not return a primary key" >&2
exit 1
fi
application_payload="$(
jq -n \
--arg name "$application_name" \
--arg slug "$application_slug" \
--arg provider "$provider_pk" \
--arg launch_url "$launch_url" \
'{
name: $name,
slug: $slug,
provider: ($provider | tonumber),
meta_launch_url: $launch_url,
open_in_new_tab: true,
policy_engine_mode: "any"
}'
)"
existing_application="$(
api GET "/api/v3/core/applications/?page_size=200" \
| jq -c --arg slug "$application_slug" '.results[]? | select(.slug == $slug)' \
| head -n1
)"
if [[ -n "$existing_application" ]]; then
application_pk="$(printf '%s\n' "$existing_application" | jq -r '.pk')"
api PATCH "/api/v3/core/applications/${application_pk}/" "$application_payload" >/dev/null
else
create_application_result="$(
api_with_status POST "/api/v3/core/applications/" "$application_payload"
)"
create_application_status="$(printf '%s\n' "$create_application_result" | sed -n '1p')"
create_application_body="$(printf '%s\n' "$create_application_result" | sed '1d')"
if [[ "$create_application_status" =~ ^20[01]$ ]]; then
application_pk="$(printf '%s\n' "$create_application_body" | jq -r '.pk // empty')"
elif [[ "$create_application_status" == "400" ]] && printf '%s\n' "$create_application_body" | jq -e '
(.slug // [] | index("Application with this slug already exists.")) != null
or (.provider // [] | index("Application with this provider already exists.")) != null
' >/dev/null; then
application_pk="existing-duplicate"
else
printf '%s\n' "$create_application_body" >&2
echo "error: could not reconcile Authentik application ${application_slug}" >&2
exit 1
fi
fi
if [[ -z "${application_pk:-}" ]]; then
echo "error: Zulip SAML application did not return a primary key" >&2
exit 1
fi
if [[ -n "$access_group" ]]; then
ensure_application_group_binding "$application_slug" "$access_group"
fi
for _ in $(seq 1 30); do
metadata_status="$(
curl -sS \
-o /dev/null \
-w '%{http_code}' \
--max-redirs 0 \
"${authentik_url}/application/saml/${application_slug}/metadata/" \
|| true
)"
case "$metadata_status" in
200|301|302|307|308)
echo "Synced Authentik Zulip SAML application ${application_slug} (${application_name})."
exit 0
;;
esac
sleep 2
done
echo "warning: Zulip SAML metadata for ${application_slug} was not immediately readable; keeping reconciled config." >&2
echo "Synced Authentik Zulip SAML application ${application_slug} (${application_name})."
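The metadata polling loop accepts several HTTP statuses because Authentik may answer the metadata URL directly or via a redirect. The acceptance check reduces to a `case` over the status code, sketched here with canned values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Accept a direct 200 or any of the common redirect statuses.
is_reachable() {
  case "$1" in
    200|301|302|307|308) return 0 ;;
    *) return 1 ;;
  esac
}

for code in 200 302 404 500; do
  if is_reachable "$code"; then
    echo "$code reachable"
  else
    echo "$code retry"
  fi
done
```

Anything outside that set (404, 5xx) keeps the loop polling until the 30-attempt budget runs out, at which point the script warns but keeps the reconciled configuration.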

Scripts/bep Executable file

@ -0,0 +1,133 @@
#!/usr/bin/env bash
set -euo pipefail
repo_root=$(git rev-parse --show-toplevel)
proposals_dir="$repo_root/evolution/proposals"
auto_browse() {
if command -v wisu >/dev/null 2>&1; then
exec wisu -i -g --icons "$repo_root/evolution"
fi
exec ls -la "$repo_root/evolution"
}
usage() {
cat <<'USAGE'
Usage: bep [command]
Commands:
list [--status <Status>] List BEPs, optionally filtered by status.
open <BEP-XXXX|XXXX|X> Open a BEP in $EDITOR.
help Show this help.
If no command is provided, bep launches a simple browser for evolution/.
USAGE
}
normalize_id() {
local raw="$1"
if [[ "$raw" =~ ^BEP-[0-9]+$ ]]; then
printf '%s' "$raw"
return 0
fi
if [[ "$raw" =~ ^[0-9]+$ ]]; then
printf 'BEP-%04d' "$((10#$raw))"
return 0
fi
return 1
}
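`normalize_id` accepts either a full `BEP-XXXX` id or a bare number and canonicalizes to the zero-padded form. A standalone sketch of the same normalization (note the `10#` base prefix, which keeps zero-padded input such as `08` from being parsed as octal by printf's `%d`):

```shell
#!/usr/bin/env bash
set -euo pipefail

normalize() {
  local raw="$1"
  if [[ "$raw" =~ ^BEP-[0-9]+$ ]]; then
    # Already canonical; pass through untouched.
    printf '%s\n' "$raw"
    return 0
  fi
  if [[ "$raw" =~ ^[0-9]+$ ]]; then
    # Force base-10 so "08"/"09" do not trip bash's octal parsing.
    printf 'BEP-%04d\n' "$((10#$raw))"
    return 0
  fi
  return 1
}

normalize 7        # prints BEP-0007
normalize 0012     # prints BEP-0012
normalize BEP-42   # prints BEP-42 (padding left as given)
```

Anything that is neither a bare number nor a `BEP-` id falls through with a nonzero status, which `open_bep` turns into an "Unknown BEP id" error.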
read_status() {
local file="$1"
awk -F ': ' '/^Status:/ {print $2; exit}' "$file"
}
read_title() {
local file="$1"
local line
line=$(head -n 1 "$file" || true)
printf '%s' "$line" | sed -E 's/^# `[^`]+`[[:space:]]+//; s/^[^A-Za-z0-9]+//'
}
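`read_status` pulls the first `Status:` line with awk, and `read_title` strips the backticked id prefix off the H1. A self-contained check of the same extraction against a temporary proposal file (the sample contents are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

tmp="$(mktemp)"
trap 'rm -f "$tmp"' EXIT
printf '# `BEP-0001` Governance Charter\nStatus: Draft\n' > "$tmp"

# Same awk/sed pipelines the helpers use.
status="$(awk -F ': ' '/^Status:/ {print $2; exit}' "$tmp")"
title="$(head -n 1 "$tmp" | sed -E 's/^# `[^`]+`[[:space:]]+//; s/^[^A-Za-z0-9]+//')"

echo "status=${status}"
echo "title=${title}"
```

The second sed substitution is a fallback that trims any leading punctuation when a proposal's H1 lacks the backticked id.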
list_bep() {
local filter="${1:-}"
local filter_lower=""
if [[ -n "$filter" ]]; then
filter_lower=$(printf '%s' "$filter" | tr '[:upper:]' '[:lower:]')
fi
printf '%-10s %-18s %s\n' "BEP" "Status" "Title"
local file
local entries=()
for file in "$proposals_dir"/BEP-*.md; do
[[ -e "$file" ]] || continue
local base
base=$(basename "$file")
local id
id=$(printf '%s' "$base" | cut -d- -f1-2)
local status
status=$(read_status "$file")
local status_lower
status_lower=$(printf '%s' "$status" | tr '[:upper:]' '[:lower:]')
if [[ -n "$filter_lower" && "$status_lower" != "$filter_lower" ]]; then
continue
fi
local title
title=$(read_title "$file")
entries+=("$(printf '%-10s %-18s %s' "$id" "$status" "$title")")
done
if [[ ${#entries[@]} -gt 0 ]]; then
printf '%s\n' "${entries[@]}" | sort
fi
}
open_bep() {
local raw="$1"
local id
if ! id=$(normalize_id "$raw"); then
echo "Unknown BEP id: $raw" >&2
exit 1
fi
local matches
matches=("$proposals_dir"/"$id"-*.md)
if [[ ${#matches[@]} -eq 0 || ! -e "${matches[0]}" ]]; then
echo "No proposal found for $id" >&2
exit 1
fi
if [[ ${#matches[@]} -gt 1 ]]; then
echo "Multiple proposals match $id:" >&2
printf ' %s\n' "${matches[@]}" >&2
exit 1
fi
local editor="${EDITOR:-vi}"
exec "$editor" "${matches[0]}"
}
command=${1:-}
case "$command" in
"")
auto_browse
;;
list)
if [[ ${2:-} == "--status" && -n ${3:-} ]]; then
list_bep "$3"
else
list_bep
fi
;;
open)
if [[ -z ${2:-} ]]; then
echo "bep open requires an id" >&2
exit 1
fi
open_bep "$2"
;;
help|-h|--help)
usage
;;
*)
echo "Unknown command: $command" >&2
usage
exit 1
;;
esac


@ -3,8 +3,6 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
usage() {
cat <<'EOF'
@ -12,33 +10,27 @@ Usage: Scripts/bootstrap-forge-intake.sh [options]
Copy the minimum Burrow forge bootstrap secrets onto the target host under
/var/lib/burrow/intake with the ownership expected by the NixOS services.
Legacy path only: the current forge runtime consumes agenix secrets directly.
Options:
--host <user@host> SSH target (default: root@git.burrow.net)
--ssh-key <path> SSH private key used to reach the host
(default: secrets/forgejo/agent-ssh-key.age, then intake/)
(default: intake/agent_at_burrow_net_ed25519)
--password-file <path> Forgejo admin bootstrap password file
(default: secrets/forgejo/admin-password.age, then intake/)
(default: intake/forgejo_pass_contact_at_burrow_net.txt)
--agent-key-file <path> Agent SSH private key copied for runner bootstrap
(default: secrets/forgejo/agent-ssh-key.age, then intake/)
(default: intake/agent_at_burrow_net_ed25519)
--no-verify Skip remote ls/stat verification after install
-h, --help Show this help text
EOF
}
HOST="${BURROW_FORGE_HOST:-root@git.burrow.net}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-}"
PASSWORD_FILE="${BURROW_FORGE_PASSWORD_FILE:-}"
AGENT_KEY_FILE="${BURROW_FORGE_AGENT_KEY_FILE:-}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-${REPO_ROOT}/intake/agent_at_burrow_net_ed25519}"
PASSWORD_FILE="${BURROW_FORGE_PASSWORD_FILE:-${REPO_ROOT}/intake/forgejo_pass_contact_at_burrow_net.txt}"
AGENT_KEY_FILE="${BURROW_FORGE_AGENT_KEY_FILE:-${REPO_ROOT}/intake/agent_at_burrow_net_ed25519}"
KNOWN_HOSTS_FILE="${BURROW_FORGE_KNOWN_HOSTS_FILE:-${HOME}/.cache/burrow/forge-known_hosts}"
VERIFY=1
cleanup() {
burrow_cleanup_secret_tmpfiles
}
trap cleanup EXIT
while [[ $# -gt 0 ]]; do
case "$1" in
--host)
@ -75,29 +67,12 @@ done
mkdir -p "$(dirname "${KNOWN_HOSTS_FILE}")"
SSH_KEY="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${SSH_KEY}" \
"${REPO_ROOT}/intake/agent_at_burrow_net_ed25519" \
"${REPO_ROOT}/secrets/forgejo/agent-ssh-key.age" \
"${HOME}/.ssh/agent_at_burrow_net_ed25519"
)"
PASSWORD_FILE="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${PASSWORD_FILE}" \
"${REPO_ROOT}/intake/forgejo_pass_contact_at_burrow_net.txt" \
"${REPO_ROOT}/secrets/forgejo/admin-password.age"
)"
AGENT_KEY_FILE="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${AGENT_KEY_FILE}" \
"${REPO_ROOT}/intake/agent_at_burrow_net_ed25519" \
"${REPO_ROOT}/secrets/forgejo/agent-ssh-key.age" \
"${HOME}/.ssh/agent_at_burrow_net_ed25519"
)"
for path in "${SSH_KEY}" "${PASSWORD_FILE}" "${AGENT_KEY_FILE}"; do
if [[ ! -s "${path}" ]]; then
echo "required file missing or empty: ${path}" >&2
exit 1
fi
done
ssh_opts=(
-i "${SSH_KEY}"

94 Scripts/check-bep-metadata.py Executable file

@ -0,0 +1,94 @@
#!/usr/bin/env python3
from __future__ import annotations
import pathlib
import re
import sys
REPO_ROOT = pathlib.Path(__file__).resolve().parent.parent
PROPOSALS_DIR = REPO_ROOT / "evolution" / "proposals"
ALLOWED_STATUSES = {
"Pitch",
"Draft",
"In Review",
"Accepted",
"Implemented",
"Rejected",
"Returned for Revision",
"Superseded",
"Archived",
}
REQUIRED_FIELDS = [
"Status",
"Proposal",
"Authors",
"Coordinator",
"Reviewers",
"Constitution Sections",
"Implementation PRs",
"Decision Date",
]
def text_block_lines(path: pathlib.Path) -> list[str]:
content = path.read_text(encoding="utf-8")
match = re.search(r"```text\n(.*?)\n```", content, re.DOTALL)
if not match:
raise ValueError("missing leading ```text metadata block")
return [line.rstrip() for line in match.group(1).splitlines() if line.strip()]
def validate(path: pathlib.Path) -> list[str]:
errors: list[str] = []
proposal_id = path.name.split("-", 2)[:2]
expected_id = "-".join(proposal_id).removesuffix(".md")
try:
lines = text_block_lines(path)
except ValueError as exc:
return [f"{path}: {exc}"]
field_names = [line.split(":", 1)[0] for line in lines]
if field_names != REQUIRED_FIELDS:
errors.append(
f"{path}: metadata fields must appear in order {', '.join(REQUIRED_FIELDS)}"
)
return errors
fields = dict(line.split(":", 1) for line in lines)
fields = {key.strip(): value.strip() for key, value in fields.items()}
if fields["Status"] not in ALLOWED_STATUSES:
errors.append(f"{path}: invalid Status {fields['Status']!r}")
if fields["Proposal"] != expected_id:
errors.append(
f"{path}: Proposal field {fields['Proposal']!r} does not match filename id {expected_id!r}"
)
if fields["Status"] in {"Accepted", "Implemented", "Superseded", "Rejected", "Archived"} and fields["Decision Date"] == "Pending":
errors.append(
f"{path}: Decision Date must not be Pending once status is {fields['Status']}"
)
return errors
def main() -> int:
errors: list[str] = []
for path in sorted(PROPOSALS_DIR.glob("BEP-*.md")):
errors.extend(validate(path))
if errors:
for error in errors:
print(error, file=sys.stderr)
return 1
print(f"checked {len(list(PROPOSALS_DIR.glob('BEP-*.md')))} BEPs")
return 0
if __name__ == "__main__":
raise SystemExit(main())
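For reference, the ordered-field contract that `validate()` enforces can be exercised in isolation. This is a sketch only: the `SAMPLE` proposal body and its field values are made up, but the regex and `REQUIRED_FIELDS` list mirror the checker above.

```python
import re

# Field order enforced by check-bep-metadata.py (copied from REQUIRED_FIELDS).
REQUIRED_FIELDS = [
    "Status", "Proposal", "Authors", "Coordinator", "Reviewers",
    "Constitution Sections", "Implementation PRs", "Decision Date",
]

FENCE = chr(96) * 3  # "```", built dynamically to avoid a literal fence here
SAMPLE = "\n".join([
    FENCE + "text",
    "Status: Accepted",
    "Proposal: BEP-0001",
    "Authors: Conrad Kramer",
    "Coordinator: Conrad Kramer",
    "Reviewers: None",
    "Constitution Sections: None",
    "Implementation PRs: None",
    "Decision Date: 2026-04-19",
    FENCE,
])

# Same extraction and ordered-field check as text_block_lines()/validate().
match = re.search(r"```text\n(.*?)\n```", SAMPLE, re.DOTALL)
lines = [ln.rstrip() for ln in match.group(1).splitlines() if ln.strip()]
field_names = [ln.split(":", 1)[0] for ln in lines]
assert field_names == REQUIRED_FIELDS
fields = {k.strip(): v.strip() for k, v in (ln.split(":", 1) for ln in lines)}
print(fields["Proposal"], fields["Status"])
```

A body that reorders or drops a field fails the `field_names == REQUIRED_FIELDS` comparison before any per-field validation runs, which is why the checker reports the ordering error alone in that case.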


@ -3,8 +3,6 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
usage() {
cat <<'EOF'
@ -14,21 +12,18 @@ Run a post-boot verification pass against the Burrow forge host.
Options:
--host <user@host> SSH target (default: root@git.burrow.net)
--ssh-key <path> SSH private key (default: secrets/forgejo/agent-ssh-key.age, then intake/)
--ssh-key <path> SSH private key (default: intake/agent_at_burrow_net_ed25519)
--expect-nsc Fail if forgejo-nsc services are not active
--expect-tailnet Fail if Authentik and Headscale services are not active
-h, --help Show this help text
EOF
}
HOST="${BURROW_FORGE_HOST:-root@git.burrow.net}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-${REPO_ROOT}/intake/agent_at_burrow_net_ed25519}"
KNOWN_HOSTS_FILE="${BURROW_FORGE_KNOWN_HOSTS_FILE:-${HOME}/.cache/burrow/forge-known_hosts}"
EXPECT_NSC=0
cleanup() {
burrow_cleanup_secret_tmpfiles
}
trap cleanup EXIT
EXPECT_TAILNET=0
while [[ $# -gt 0 ]]; do
case "$1" in
@ -44,6 +39,10 @@ while [[ $# -gt 0 ]]; do
EXPECT_NSC=1
shift
;;
--expect-tailnet)
EXPECT_TAILNET=1
shift
;;
-h|--help)
usage
exit 0
@ -58,17 +57,10 @@ done
mkdir -p "$(dirname "${KNOWN_HOSTS_FILE}")"
SSH_KEY="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${SSH_KEY}" \
"${REPO_ROOT}/intake/agent_at_burrow_net_ed25519" \
"${REPO_ROOT}/secrets/forgejo/agent-ssh-key.age" \
"${HOME}/.ssh/agent_at_burrow_net_ed25519"
)" || {
echo "forge SSH key could not be resolved" >&2
if [[ ! -f "${SSH_KEY}" ]]; then
echo "forge SSH key not found: ${SSH_KEY}" >&2
exit 1
}
fi
ssh \
-i "${SSH_KEY}" \
@ -77,6 +69,7 @@ ssh \
-o StrictHostKeyChecking=accept-new \
"${HOST}" \
EXPECT_NSC="${EXPECT_NSC}" \
EXPECT_TAILNET="${EXPECT_TAILNET}" \
'bash -s' <<'EOF'
set -euo pipefail
@ -93,6 +86,13 @@ nsc_services=(
forgejo-nsc-autoscaler.service
)
tailnet_services=(
burrow-authentik-runtime.service
burrow-authentik-ready.service
headscale.service
headscale-bootstrap.service
)
show_service() {
local service="$1"
systemctl show \
@ -145,13 +145,41 @@ for service in "${nsc_services[@]}"; do
fi
done
for service in "${tailnet_services[@]}"; do
echo "== ${service} =="
show_service "${service}" || true
if [[ "${EXPECT_TAILNET}" == "1" ]] && ! service_is_healthy "${service}"; then
echo "required tailnet service is not active: ${service}" >&2
exit 1
fi
done
echo "== intake =="
ls -l /var/lib/burrow/intake || true
if [[ "${EXPECT_TAILNET}" == "1" ]]; then
echo "== agenix =="
ls -l /run/agenix || true
test -s /run/agenix/burrowAuthentikEnv
test -s /run/agenix/burrowHeadscaleOidcClientSecret
fi
if [[ "${EXPECT_NSC}" == "1" ]]; then
echo "== agenix-nsc =="
ls -l /run/agenix || true
test -s /run/agenix/burrowForgejoNscToken
test -s /run/agenix/burrowForgejoNscDispatcherConfig
test -s /run/agenix/burrowForgejoNscAutoscalerConfig
fi
if command -v curl >/dev/null 2>&1; then
echo "== http-local =="
curl -fsS -o /dev/null -w 'forgejo_login %{http_code}\n' http://127.0.0.1:3000/user/login
curl -fsS -o /dev/null -H 'Host: burrow.net' -w 'burrow_root %{http_code}\n' http://127.0.0.1/
curl -fsS -o /dev/null -H 'Host: git.burrow.net' -w 'git_login %{http_code}\n' http://127.0.0.1/user/login
if [[ "${EXPECT_TAILNET}" == "1" ]]; then
curl -fsS -o /dev/null -H 'Host: auth.burrow.net' -w 'authentik_ready %{http_code}\n' http://127.0.0.1/-/health/ready/
curl -sS -o /dev/null -H 'Host: ts.burrow.net' -w 'headscale_root %{http_code}\n' http://127.0.0.1/ || true
fi
fi
EOF


@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
repo_root="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")/../.." && pwd)"
cd "${repo_root}"
release_ref="${RELEASE_REF:-manual-${GITHUB_SHA:-unknown}}"
target="x86_64-unknown-linux-gnu"
out_dir="${repo_root}/dist"
staging="${out_dir}/burrow-${release_ref}-${target}"
mkdir -p "${staging}"
cargo build --locked --release -p burrow --bin burrow
install -m 0755 target/release/burrow "${staging}/burrow"
cp README.md "${staging}/README.md"
tarball="${out_dir}/burrow-${release_ref}-${target}.tar.gz"
tar -C "${out_dir}" -czf "${tarball}" "$(basename "${staging}")"
shasum -a 256 "${tarball}" > "${tarball}.sha256"

157 Scripts/ci/ensure-nix.sh Executable file

@ -0,0 +1,157 @@
#!/usr/bin/env bash
set -euo pipefail
source_nix_profile() {
local candidate
for candidate in \
"/nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh" \
"${HOME}/.nix-profile/etc/profile.d/nix.sh"
do
if [[ -f "${candidate}" ]]; then
# shellcheck disable=SC1090
. "${candidate}"
return 0
fi
done
return 1
}
linux_cp_supports_preserve() {
cp --help 2>&1 | grep -q -- '--preserve'
}
ensure_root_owned_home() {
if [[ "$(id -u)" -ne 0 ]]; then
return 0
fi
if [[ ! -d "${HOME}" ]] || [[ ! -O "${HOME}" ]]; then
export HOME="/root"
fi
mkdir -p "${HOME}"
}
ensure_linux_nixbld_accounts() {
if [[ "$(id -u)" -ne 0 ]]; then
return 0
fi
if command -v getent >/dev/null 2>&1 && getent group nixbld >/dev/null 2>&1; then
return 0
fi
if command -v addgroup >/dev/null 2>&1 && ! command -v groupadd >/dev/null 2>&1; then
addgroup -S nixbld >/dev/null 2>&1 || true
for i in $(seq 1 10); do
adduser -S -D -H -h /var/empty -s /sbin/nologin -G nixbld "nixbld${i}" >/dev/null 2>&1 || true
done
return 0
fi
if command -v groupadd >/dev/null 2>&1; then
groupadd -r nixbld >/dev/null 2>&1 || true
for i in $(seq 1 10); do
useradd \
--system \
--no-create-home \
--home-dir /var/empty \
--shell /usr/sbin/nologin \
--gid nixbld \
"nixbld${i}" >/dev/null 2>&1 || true
done
return 0
fi
echo "linux nix bootstrap requires nixbld group creation support" >&2
exit 1
}
ensure_linux_nix_bootstrap_prereqs() {
if linux_cp_supports_preserve; then
ensure_root_owned_home
ensure_linux_nixbld_accounts
return 0
fi
if command -v apk >/dev/null 2>&1; then
apk add --no-cache coreutils xz >/dev/null
elif command -v apt-get >/dev/null 2>&1; then
export DEBIAN_FRONTEND=noninteractive
apt-get update -y >/dev/null
apt-get install -y coreutils xz-utils >/dev/null
elif command -v dnf >/dev/null 2>&1; then
dnf install -y coreutils xz >/dev/null
elif command -v yum >/dev/null 2>&1; then
yum install -y coreutils xz >/dev/null
else
echo "linux nix bootstrap requires GNU cp but no supported package manager was found" >&2
exit 1
fi
linux_cp_supports_preserve || {
echo "linux nix bootstrap still lacks GNU cp after installing prerequisites" >&2
exit 1
}
ensure_root_owned_home
ensure_linux_nixbld_accounts
}
if ! command -v nix >/dev/null 2>&1; then
if ! command -v curl >/dev/null 2>&1; then
echo "curl is required to install nix" >&2
exit 1
fi
case "$(uname -s)" in
Linux)
ensure_linux_nix_bootstrap_prereqs
curl -fsSL https://nixos.org/nix/install | sh -s -- --no-daemon
;;
Darwin)
installer="$(mktemp -t burrow-nix.XXXXXX)"
trap 'rm -f "${installer}"' EXIT
curl -fsSL -o "${installer}" https://install.determinate.systems/nix
chmod +x "${installer}"
if command -v sudo >/dev/null 2>&1; then
if sudo -n true 2>/dev/null; then
sudo -n sh "${installer}" install --no-confirm
else
sudo sh "${installer}" install --no-confirm
fi
else
sh "${installer}" install --no-confirm
fi
;;
*)
echo "unsupported platform for nix bootstrap: $(uname -s)" >&2
exit 1
;;
esac
fi
source_nix_profile || true
export PATH="${HOME}/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/nix/var/nix/profiles/default/sbin:${PATH}"
config_root="${XDG_CONFIG_HOME:-$HOME/.config}"
config_file="${config_root}/nix/nix.conf"
if [[ -e "${config_file}" && ! -w "${config_file}" ]]; then
config_root="$(mktemp -d -t burrow-nix-config.XXXXXX)"
export XDG_CONFIG_HOME="${config_root}"
config_file="${XDG_CONFIG_HOME}/nix/nix.conf"
fi
mkdir -p "$(dirname -- "${config_file}")"
cat > "${config_file}" <<'EOF'
experimental-features = nix-command flakes
sandbox = true
fallback = true
substituters = https://cache.nixos.org
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
EOF
command -v nix >/dev/null 2>&1 || {
echo "nix is still unavailable after bootstrap" >&2
exit 1
}
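The XDG config fallback near the end of ensure-nix.sh (swap to a temp `XDG_CONFIG_HOME` only when an existing `nix.conf` cannot be written) can be sketched as a small decision function. The name `pick_config_root` is illustrative, not part of the script:

```python
import pathlib
import tempfile


def pick_config_root(default_root: str) -> str:
    """Sketch of the ensure-nix.sh fallback: keep the default XDG config
    root unless nix/nix.conf already exists there and is not writable, in
    which case a fresh temp directory becomes XDG_CONFIG_HOME."""
    conf = pathlib.Path(default_root) / "nix" / "nix.conf"
    if conf.exists() and not conf.stat().st_mode & 0o200:
        return tempfile.mkdtemp(prefix="burrow-nix-config.")
    return default_root


root = tempfile.mkdtemp()
# No nix.conf yet: the default root is kept and the file is created there.
assert pick_config_root(root) == root

conf_dir = pathlib.Path(root, "nix")
conf_dir.mkdir()
(conf_dir / "nix.conf").write_text("sandbox = true\n")
# A writable nix.conf is also kept in place; only an unwritable one
# (e.g. a read-only bind mount in CI) triggers the temp-dir fallback.
assert pick_config_root(root) == root
print("kept default root:", pick_config_root(root) == root)
```

The point of the fallback is that CI runners sometimes mount a read-only `~/.config`; redirecting `XDG_CONFIG_HOME` lets the script still write its flakes-enabled `nix.conf` without touching the mounted copy.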


@ -0,0 +1,65 @@
#!/usr/bin/env bash
set -euo pipefail
: "${API_URL:?API_URL is required}"
: "${REPOSITORY:?REPOSITORY is required}"
: "${RELEASE_TAG:?RELEASE_TAG is required}"
: "${TOKEN:?TOKEN is required}"
release_api="${API_URL}/repos/${REPOSITORY}/releases"
tag_api="${release_api}/tags/${RELEASE_TAG}"
release_json="$(mktemp)"
create_json="$(mktemp)"
trap 'rm -f "${release_json}" "${create_json}"' EXIT
status="$(
curl -sS -o "${release_json}" -w '%{http_code}' \
-H "Authorization: token ${TOKEN}" \
"${tag_api}"
)"
if [[ "${status}" == "404" ]]; then
jq -n \
--arg tag "${RELEASE_TAG}" \
--arg name "Burrow ${RELEASE_TAG}" \
'{
tag_name: $tag,
target_commitish: $tag,
name: $name,
body: "Automated prerelease built on Forgejo Namespace runners.",
draft: false,
prerelease: true
}' > "${create_json}"
curl -fsS \
-H "Authorization: token ${TOKEN}" \
-H "Content-Type: application/json" \
-d @"${create_json}" \
"${release_api}" > "${release_json}"
elif [[ "${status}" != "200" ]]; then
echo "failed to query Forgejo release for ${RELEASE_TAG} (HTTP ${status})" >&2
cat "${release_json}" >&2
exit 1
fi
release_id="$(jq -r '.id' "${release_json}")"
if [[ -z "${release_id}" || "${release_id}" == "null" ]]; then
echo "Forgejo release payload is missing an id" >&2
cat "${release_json}" >&2
exit 1
fi
for file in dist/*; do
name="$(basename "${file}")"
asset_id="$(jq -r --arg name "${name}" '.assets[]? | select(.name == $name) | .id' "${release_json}" | head -n1)"
if [[ -n "${asset_id}" ]]; then
curl -fsS -X DELETE \
-H "Authorization: token ${TOKEN}" \
"${release_api}/${release_id}/assets/${asset_id}" >/dev/null
fi
curl -fsS \
-H "Authorization: token ${TOKEN}" \
-F "attachment=@${file}" \
"${release_api}/${release_id}/assets?name=${name}" >/dev/null
done
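The asset loop above keeps re-runs idempotent by deleting any same-named remote asset before uploading. That ordering can be sketched as a pure planning function (the function name and sample filenames are illustrative):

```python
def plan_asset_upload(release_assets, local_names):
    """Sketch of the upload loop above: for each local artifact, delete a
    same-named remote asset first, then upload, so re-runs replace rather
    than duplicate assets."""
    by_name = {asset["name"]: asset["id"] for asset in release_assets}
    plan = []
    for name in local_names:
        if name in by_name:
            plan.append(("delete", by_name[name]))
        plan.append(("upload", name))
    return plan


existing = [{"id": 7, "name": "burrow-v1-x86_64-unknown-linux-gnu.tar.gz"}]
local = [
    "burrow-v1-x86_64-unknown-linux-gnu.tar.gz",
    "burrow-v1-x86_64-unknown-linux-gnu.tar.gz.sha256",
]
plan = plan_asset_upload(existing, local)
print(plan)
```

Here the tarball already exists on the release, so it is deleted and re-uploaded, while the fresh checksum file is uploaded directly.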


@ -1,11 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
usage() {
cat <<'EOF'
Usage: Scripts/cloudflare-upsert-a-record.sh --zone <zone> --name <fqdn> --ipv4 <address> [options]
@ -18,7 +13,7 @@ Options:
--name <fqdn> Fully-qualified DNS record name
--ipv4 <address> IPv4 address for the A record
--token-file <path> Cloudflare API token file
default: secrets/cloudflare/api-token.age, then intake/cloudflare-token.txt
default: intake/cloudflare-token.txt
--ttl <seconds|auto> Record TTL, or auto
default: auto
--proxied <true|false> Whether to proxy through Cloudflare
@ -30,15 +25,10 @@ EOF
ZONE_NAME=""
RECORD_NAME=""
IPV4=""
TOKEN_FILE="${CLOUDFLARE_TOKEN_FILE:-}"
TOKEN_FILE="intake/cloudflare-token.txt"
TTL_VALUE="auto"
PROXIED="false"
cleanup() {
burrow_cleanup_secret_tmpfiles
}
trap cleanup EXIT
while [[ $# -gt 0 ]]; do
case "$1" in
--zone)
@ -81,16 +71,11 @@ if [[ -z "${ZONE_NAME}" || -z "${RECORD_NAME}" || -z "${IPV4}" ]]; then
usage >&2
exit 2
fi
TOKEN_FILE="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${TOKEN_FILE}" \
"${REPO_ROOT}/intake/cloudflare-token.txt" \
"${REPO_ROOT}/secrets/cloudflare/api-token.age"
)" || {
echo "Cloudflare token file could not be resolved" >&2
if [[ ! -f "${TOKEN_FILE}" ]]; then
echo "Cloudflare token file not found: ${TOKEN_FILE}" >&2
exit 1
}
fi
if [[ ! "${IPV4}" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
echo "Invalid IPv4 address: ${IPV4}" >&2


@ -5,8 +5,6 @@ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# shellcheck source=Scripts/_burrow-flake.sh
source "${SCRIPT_DIR}/_burrow-flake.sh"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
usage() {
cat <<'EOF'
@ -20,7 +18,7 @@ Defaults:
Environment:
BURROW_FORGE_HOST root@git.burrow.net
BURROW_FORGE_SSH_KEY explicit path, otherwise secrets/forgejo/agent-ssh-key.age
BURROW_FORGE_SSH_KEY intake/agent_at_burrow_net_ed25519
EOF
}
@ -30,7 +28,6 @@ ALLOW_DIRTY=0
BURROW_FLAKE_TMPDIRS=()
cleanup() {
burrow_cleanup_secret_tmpfiles
burrow_cleanup_flake_tmpdirs
}
trap cleanup EXIT
@ -74,17 +71,21 @@ if [[ ${ALLOW_DIRTY} -ne 1 ]] && [[ -n "$(git status --short)" ]]; then
fi
FORGE_HOST="${BURROW_FORGE_HOST:-root@git.burrow.net}"
FORGE_SSH_KEY="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${BURROW_FORGE_SSH_KEY:-}" \
"${REPO_ROOT}/intake/agent_at_burrow_net_ed25519" \
"${REPO_ROOT}/secrets/forgejo/agent-ssh-key.age" \
"${HOME}/.ssh/agent_at_burrow_net_ed25519"
)" || {
echo "Unable to resolve the forge SSH key." >&2
FORGE_SSH_KEY="${BURROW_FORGE_SSH_KEY:-}"
if [[ -z "${FORGE_SSH_KEY}" ]]; then
if [[ -f "${REPO_ROOT}/intake/agent_at_burrow_net_ed25519" ]]; then
FORGE_SSH_KEY="${REPO_ROOT}/intake/agent_at_burrow_net_ed25519"
else
FORGE_SSH_KEY="${HOME}/.ssh/agent_at_burrow_net_ed25519"
fi
fi
if [[ ! -f "${FORGE_SSH_KEY}" ]]; then
echo "Forge SSH key not found at ${FORGE_SSH_KEY}." >&2
echo "Set BURROW_FORGE_SSH_KEY or place the agent key in intake/." >&2
exit 1
}
fi
FORGE_KNOWN_HOSTS_FILE="${BURROW_FORGE_KNOWN_HOSTS_FILE:-${HOME}/.cache/burrow/forge-known_hosts}"
mkdir -p "$(dirname "${FORGE_KNOWN_HOSTS_FILE}")"
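The new inline key lookup (explicit `BURROW_FORGE_SSH_KEY`, then `intake/`, then `~/.ssh`) replaces the old `burrow_resolve_secret_file` helper. A sketch of that precedence order, with an illustrative function name:

```python
import pathlib
import tempfile


def resolve_forge_ssh_key(explicit, repo_root, home):
    """Sketch of the post-change lookup order: an explicit path wins,
    then an intake/ copy if present, then the ~/.ssh default."""
    if explicit:
        return pathlib.Path(explicit)
    intake = pathlib.Path(repo_root) / "intake" / "agent_at_burrow_net_ed25519"
    if intake.is_file():
        return intake
    return pathlib.Path(home) / ".ssh" / "agent_at_burrow_net_ed25519"


repo = pathlib.Path(tempfile.mkdtemp())
home = pathlib.Path(tempfile.mkdtemp())
# Nothing staged yet: fall through to the ~/.ssh default.
assert resolve_forge_ssh_key("", repo, home) == home / ".ssh" / "agent_at_burrow_net_ed25519"
(repo / "intake").mkdir()
(repo / "intake" / "agent_at_burrow_net_ed25519").write_text("key\n")
# An intake copy takes precedence over ~/.ssh ...
assert resolve_forge_ssh_key("", repo, home) == repo / "intake" / "agent_at_burrow_net_ed25519"
# ... and an explicit path always wins.
assert resolve_forge_ssh_key("/tmp/other-key", repo, home) == pathlib.Path("/tmp/other-key")
print("resolution order ok")
```

Unlike the old helper, nothing is decrypted to a temp file, which is why the `burrow_cleanup_secret_tmpfiles` trap is also removed in these diffs.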


@ -1,144 +0,0 @@
#!/usr/bin/env python3
from __future__ import annotations
import json
import os
import pathlib
import subprocess
import time
import urllib.error
import urllib.request
def _read_token() -> str:
token = os.environ.get("FORGEJO_API_TOKEN", "").strip()
token_file = os.environ.get("FORGEJO_API_TOKEN_FILE", "").strip()
if not token and token_file:
token = pathlib.Path(token_file).read_text().strip()
if not token:
raise SystemExit("Forgejo API token is missing")
if token.startswith("PENDING-"):
raise SystemExit("Forgejo API token is pending")
return token
def _request(method: str, url: str, token: str) -> tuple[int, str]:
headers = {"Authorization": f"token {token}", "Accept": "application/json"}
req = urllib.request.Request(url, headers=headers, method=method)
try:
with urllib.request.urlopen(req, timeout=20) as resp:
body = resp.read().decode("utf-8")
return resp.getcode(), body
except urllib.error.HTTPError as exc:
body = exc.read().decode("utf-8")
return exc.code, body
def _list_runners(api_url: str, token: str, org: str | None) -> tuple[str, list[dict]]:
if org:
list_url = f"{api_url}/orgs/{org}/actions/runners"
else:
list_url = f"{api_url}/actions/runners"
status, body = _request("GET", list_url, token)
if status == 404:
return list_url, []
if status >= 400:
raise RuntimeError(f"list runners failed ({status}) {body}")
try:
runners = json.loads(body)
except json.JSONDecodeError as exc:
raise RuntimeError(f"invalid runner list response: {exc}") from exc
if not isinstance(runners, list):
raise RuntimeError("runner list response is not a list")
return list_url, runners
def _delete_runner(api_url: str, token: str, org: str | None, runner_id: int) -> bool:
if org:
delete_url = f"{api_url}/orgs/{org}/actions/runners/{runner_id}"
else:
delete_url = f"{api_url}/actions/runners/{runner_id}"
status, body = _request("DELETE", delete_url, token)
if status in (200, 204):
return True
print(f"[forgejo-prune-runners] delete {runner_id} failed: {status} {body}")
return False
def _prune_db(ttl_seconds: int) -> int:
cutoff = int(time.time()) - ttl_seconds
now = int(time.time())
sql = (
"WITH updated AS ("
"UPDATE action_runner "
f"SET deleted = {now} "
"WHERE (deleted IS NULL OR deleted = 0) "
f"AND ((last_online IS NOT NULL AND last_online > 0 AND last_online < {cutoff}) "
f"OR (COALESCE(last_online, 0) = 0 AND created < {cutoff})) "
"RETURNING 1"
") SELECT count(*) FROM updated;"
)
result = subprocess.run(
["psql", "-h", "/run/postgresql", "-U", "forgejo", "forgejo", "-tAc", sql],
check=True,
capture_output=True,
text=True,
)
output = (result.stdout or "").strip()
try:
return int(output)
except ValueError:
return 0
def main() -> None:
api_url = os.environ.get("FORGEJO_API_URL", "https://git.burrow.net/api/v1").rstrip("/")
org = os.environ.get("FORGEJO_ORG", "hackclub").strip() or None
dry_run = os.environ.get("FORGEJO_DRY_RUN", "0") == "1"
db_only = os.environ.get("FORGEJO_PRUNE_DB", "0") == "1"
ttl_seconds = int(os.environ.get("FORGEJO_RUNNER_TTL_SEC", "3600"))
if db_only:
removed = _prune_db(ttl_seconds)
print(f"[forgejo-prune-runners] pruned {removed} runners via DB")
return
token = _read_token()
try:
_, runners = _list_runners(api_url, token, org)
except RuntimeError as exc:
if org is not None:
print(f"[forgejo-prune-runners] org runner list failed ({exc}); retrying instance scope")
_, runners = _list_runners(api_url, token, None)
org = None
else:
raise SystemExit(str(exc))
if not runners:
removed = _prune_db(ttl_seconds)
print(f"[forgejo-prune-runners] pruned {removed} runners via DB fallback")
return
removed = 0
for runner in runners:
runner_id = runner.get("id")
name = runner.get("name", "unknown")
status = (runner.get("status") or "").lower()
busy = bool(runner.get("busy"))
if status == "online" or busy:
continue
if runner_id is None:
continue
if dry_run:
print(f"[forgejo-prune-runners] would delete runner {runner_id} ({name}) status={status}")
continue
if _delete_runner(api_url, token, org, int(runner_id)):
removed += 1
print(f"[forgejo-prune-runners] deleted runner {runner_id} ({name})")
print(f"[forgejo-prune-runners] done; removed {removed} runners")
if __name__ == "__main__":
main()
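The SQL in `_prune_db` marks a runner deleted when it was last online before the TTL cutoff, or never came online and was created before the cutoff. The same predicate in pure Python (the function name and sample timestamps are illustrative):

```python
import time


def is_prunable(runner, ttl_seconds, now=None):
    """Sketch of the _prune_db predicate: stale if last_online predates the
    TTL cutoff, or the runner never reported online and its creation time
    predates the cutoff. Already-deleted rows are skipped."""
    now = int(now if now is not None else time.time())
    cutoff = now - ttl_seconds
    if runner.get("deleted"):
        return False
    last_online = runner.get("last_online") or 0
    if last_online > 0:
        return last_online < cutoff
    return runner.get("created", 0) < cutoff


now = 10_000  # fixed clock so the example is deterministic
assert is_prunable({"last_online": 5_000, "created": 1_000}, 3_600, now)
assert not is_prunable({"last_online": 9_000, "created": 1_000}, 3_600, now)
assert is_prunable({"last_online": 0, "created": 1_000}, 3_600, now)
assert not is_prunable({"last_online": 5_000, "deleted": 9_500}, 3_600, now)
print("prune predicate ok")
```

The API path in `main()` applies a different, softer filter (skip `online` or `busy` runners); the DB path is the fallback when the API lists nothing.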


@ -6,14 +6,12 @@ REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-flake.sh
source "${SCRIPT_DIR}/_burrow-flake.sh"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
DEFAULT_CONFIG="burrow-forge"
DEFAULT_FLAKE="."
DEFAULT_LOCATION="hel1"
DEFAULT_ARCHITECTURE="x86"
DEFAULT_TOKEN_FILE=""
DEFAULT_TOKEN_FILE="${REPO_ROOT}/intake/hetzner-api-token.txt"
CONFIG="${HCLOUD_IMAGE_CONFIG:-${DEFAULT_CONFIG}}"
FLAKE="${HCLOUD_IMAGE_FLAKE:-${DEFAULT_FLAKE}}"
@ -32,13 +30,6 @@ NIX_BUILD_FLAGS=()
BURROW_FLAKE_TMPDIRS=()
LOCAL_STORE_DIR=""
cleanup() {
burrow_cleanup_secret_tmpfiles
burrow_cleanup_flake_tmpdirs
}
trap cleanup EXIT
usage() {
cat <<'EOF'
Usage: Scripts/hcloud-upload-nixos-image.sh [options]
@ -51,7 +42,7 @@ Options:
--location <code> Hetzner location for the temporary upload server (default: hel1)
--architecture <x86|arm> CPU architecture of the image (default: x86)
--server-type <name> Hetzner server type for the temporary upload server
--token-file <path> Hetzner API token file (default: secrets/hetzner/api-token.age, then intake/hetzner-api-token.txt)
--token-file <path> Hetzner API token file (default: intake/hetzner-api-token.txt)
--artifact-path <path> Prebuilt raw image artifact to upload directly
--output-hash <hash> Stable hash label for --artifact-path uploads
--builder-spec <string> Complete builders string passed to nix build
@ -134,17 +125,6 @@ while [[ $# -gt 0 ]]; do
esac
done
TOKEN_FILE="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${TOKEN_FILE}" \
"${REPO_ROOT}/intake/hetzner-api-token.txt" \
"${REPO_ROOT}/secrets/hetzner/api-token.age"
)" || {
echo "Hetzner API token file could not be resolved" >&2
exit 1
}
cleanup() {
burrow_cleanup_flake_tmpdirs
if [[ -n "${LOCAL_STORE_DIR}" && -d "${LOCAL_STORE_DIR}" ]]; then


@ -2,9 +2,6 @@
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
usage() {
cat <<'EOF'
@ -34,7 +31,7 @@ Options:
-h, --help Show this help text.
Environment:
HCLOUD_TOKEN_FILE Defaults to secrets/hetzner/api-token.age, then intake/hetzner-api-token.txt
HCLOUD_TOKEN_FILE Defaults to intake/hetzner-api-token.txt
EOF
}
@ -46,15 +43,10 @@ IMAGE="ubuntu-24.04"
CONFIG="burrow-forge"
FLAKE="."
UPLOAD_LOCATION=""
TOKEN_FILE="${HCLOUD_TOKEN_FILE:-}"
TOKEN_FILE="${HCLOUD_TOKEN_FILE:-intake/hetzner-api-token.txt}"
YES=0
SSH_KEYS=("contact@burrow.net" "agent@burrow.net")
cleanup() {
burrow_cleanup_secret_tmpfiles
}
trap cleanup EXIT
if [[ $# -gt 0 ]]; then
case "$1" in
show|create|delete|recreate|build-image|create-from-image|recreate-from-image)
@ -118,16 +110,10 @@ while [[ $# -gt 0 ]]; do
esac
done
TOKEN_FILE="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${TOKEN_FILE}" \
"${REPO_ROOT}/intake/hetzner-api-token.txt" \
"${REPO_ROOT}/secrets/hetzner/api-token.age"
)" || {
echo "Hetzner API token file could not be resolved" >&2
if [[ ! -f "${TOKEN_FILE}" ]]; then
echo "Hetzner API token file not found: ${TOKEN_FILE}" >&2
exit 1
}
fi
if [[ -z "${UPLOAD_LOCATION}" ]]; then
UPLOAD_LOCATION="${LOCATION}"


@ -6,13 +6,11 @@ REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-flake.sh
source "${SCRIPT_DIR}/_burrow-flake.sh"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
CONFIG="${HCLOUD_IMAGE_CONFIG:-burrow-forge}"
FLAKE="${HCLOUD_IMAGE_FLAKE:-.}"
LOCATION="${HCLOUD_IMAGE_LOCATION:-hel1}"
TOKEN_FILE="${HCLOUD_TOKEN_FILE:-}"
TOKEN_FILE="${HCLOUD_TOKEN_FILE:-${REPO_ROOT}/intake/hetzner-api-token.txt}"
NSC_SSH_HOST="${NSC_SSH_HOST:-ssh.ord2.namespace.so}"
NSC_MACHINE_TYPE="${NSC_MACHINE_TYPE:-linux/amd64:32x64}"
NSC_BUILDER_DURATION="${NSC_BUILDER_DURATION:-4h}"
@ -28,13 +26,6 @@ EXTRA_LABELS=()
BURROW_FLAKE_TMPDIRS=()
BUILDER_ID=""
cleanup() {
burrow_cleanup_secret_tmpfiles
burrow_cleanup_flake_tmpdirs
}
trap cleanup EXIT
usage() {
cat <<'EOF'
Usage: Scripts/nsc-build-and-upload-image.sh [options]
@ -46,7 +37,7 @@ Options:
--config <name> images.<name>-raw output to build (default: burrow-forge)
--flake <path> Flake path to build from (default: .)
--location <code> Hetzner upload location (default: hel1)
--token-file <path> Hetzner API token file (default: secrets/hetzner/api-token.age, then intake/hetzner-api-token.txt)
--token-file <path> Hetzner API token file (default: intake/hetzner-api-token.txt)
--machine-type <type> Namespace machine type (default: linux/amd64:32x64)
--ssh-host <host> Namespace SSH endpoint (default: ssh.ord2.namespace.so)
--duration <ttl> Namespace builder lifetime (default: 4h)
@ -135,17 +126,6 @@ while [[ $# -gt 0 ]]; do
esac
done
TOKEN_FILE="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${TOKEN_FILE}" \
"${REPO_ROOT}/intake/hetzner-api-token.txt" \
"${REPO_ROOT}/secrets/hetzner/api-token.age"
)" || {
echo "Hetzner API token file could not be resolved" >&2
exit 1
}
cleanup() {
if [[ -n "${BUILDER_ID}" && -n "${NSC_BIN}" ]]; then
"${NSC_BIN}" destroy "${BUILDER_ID}" --force >/dev/null 2>&1 || true


@ -6,47 +6,41 @@ REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-flake.sh
source "${SCRIPT_DIR}/_burrow-flake.sh"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
usage() {
cat <<'EOF'
Usage: Scripts/provision-forgejo-nsc.sh [options]
Generate Burrow forgejo-nsc runtime inputs and refresh the authoritative
`secrets/forgejo/*.age` files, optionally refreshing the Namespace token from
the currently logged-in namespace account.
Generate Burrow forgejo-nsc runtime inputs in intake/ and optionally refresh the
Namespace token from the currently logged-in namespace account.
Options:
--host <user@host> SSH target used to mint the Forgejo PAT.
Default: root@git.burrow.net
--ssh-key <path> SSH private key for the forge host.
Default: secrets/forgejo/agent-ssh-key.age, then intake/
Default: intake/agent_at_burrow_net_ed25519
--nsc-bin <path> Override the nsc binary.
--no-refresh-token Reuse the existing encrypted Namespace token if it already exists.
--no-refresh-token Reuse intake/forgejo_nsc_token.txt if it already exists.
--token-name <name> Forgejo PAT name prefix (default: forgejo-nsc)
--contact-user <name> Forgejo username used for PAT creation (default: contact)
--scope-owner <name> Forgejo org/user owner for the default NSC scope (default: hackclub)
--scope-owner <name> Forgejo org/user owner for the default NSC scope (default: burrow)
--scope-name <name> Forgejo repository name for the default NSC scope (default: burrow)
-h, --help Show this help text.
EOF
}
HOST="${BURROW_FORGE_HOST:-root@git.burrow.net}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-${REPO_ROOT}/intake/agent_at_burrow_net_ed25519}"
NSC_BIN="${NSC_BIN:-}"
KNOWN_HOSTS_FILE="${BURROW_FORGE_KNOWN_HOSTS_FILE:-${HOME}/.cache/burrow/forge-known_hosts}"
REFRESH_TOKEN=1
TOKEN_NAME_PREFIX="${FORGEJO_PAT_NAME:-forgejo-nsc}"
CONTACT_USER="${FORGEJO_CONTACT_USER:-contact}"
SCOPE_OWNER="${FORGEJO_SCOPE_OWNER:-hackclub}"
SCOPE_OWNER="${FORGEJO_SCOPE_OWNER:-burrow}"
SCOPE_NAME="${FORGEJO_SCOPE_NAME:-burrow}"
BURROW_FLAKE_TMPDIRS=()
TMP_DIR=""
cleanup() {
[[ -n "${TMP_DIR}" ]] && rm -rf "${TMP_DIR}" >/dev/null 2>&1 || true
burrow_cleanup_secret_tmpfiles
burrow_cleanup_flake_tmpdirs
}
trap cleanup EXIT
@ -103,15 +97,13 @@ burrow_require_cmd nix
burrow_require_cmd ssh
burrow_require_cmd python3
SSH_KEY="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${SSH_KEY}" \
"${REPO_ROOT}/intake/agent_at_burrow_net_ed25519" \
"${REPO_ROOT}/secrets/forgejo/agent-ssh-key.age" \
"${HOME}/.ssh/agent_at_burrow_net_ed25519"
)"
TMP_DIR="$(mktemp -d "${TMPDIR:-/tmp}/burrow-forgejo-nsc.XXXXXX")"
if [[ ! -f "${SSH_KEY}" ]]; then
echo "forge SSH key not found: ${SSH_KEY}" >&2
exit 1
fi
mkdir -p "${REPO_ROOT}/intake"
chmod 700 "${REPO_ROOT}/intake"
flake_ref="$(burrow_prepare_flake_ref "${REPO_ROOT}")"
if [[ -z "${NSC_BIN}" ]]; then
@ -136,77 +128,16 @@ if [[ ! -x "${NSC_BIN}" ]]; then
exit 1
fi
token_file="${TMP_DIR}/forgejo_nsc_token.txt"
dispatcher_out="${TMP_DIR}/forgejo_nsc_dispatcher.yaml"
autoscaler_out="${TMP_DIR}/forgejo_nsc_autoscaler.yaml"
token_file="${REPO_ROOT}/intake/forgejo_nsc_token.txt"
dispatcher_out="${REPO_ROOT}/intake/forgejo_nsc_dispatcher.yaml"
autoscaler_out="${REPO_ROOT}/intake/forgejo_nsc_autoscaler.yaml"
dispatcher_src="${REPO_ROOT}/services/forgejo-nsc/deploy/dispatcher.yaml"
autoscaler_src="${REPO_ROOT}/services/forgejo-nsc/deploy/autoscaler.yaml"
token_secret="${REPO_ROOT}/secrets/forgejo/nsc-token.age"
dispatcher_secret="${REPO_ROOT}/secrets/forgejo/nsc-dispatcher-config.age"
autoscaler_secret="${REPO_ROOT}/secrets/forgejo/nsc-autoscaler-config.age"
if [[ "${REFRESH_TOKEN}" -eq 1 ]]; then
ssh \
-i "${SSH_KEY}" \
-o IdentitiesOnly=yes \
-o UserKnownHostsFile="${KNOWN_HOSTS_FILE}" \
-o StrictHostKeyChecking=accept-new \
"${HOST}" \
'sudo -u forgejo-nsc python3 - <<'"'"'PY'"'"'
import json
from pathlib import Path
payload = {}
token_json = Path("/var/lib/forgejo-nsc/.config/ns/token.json")
if token_json.exists():
data = json.loads(token_json.read_text(encoding="utf-8"))
session = str(data.get("session_token", "")).strip()
if session:
payload["session_token"] = session
token_cache = Path("/var/lib/forgejo-nsc/.config/ns/token.cache")
if token_cache.exists():
bearer = token_cache.read_text(encoding="utf-8").strip()
if bearer:
payload["bearer_token"] = bearer
if not payload:
raise SystemExit("forgejo-nsc host does not have a usable Namespace session")
print(json.dumps(payload, indent=2))
PY' > "${token_file}"
if [[ "${REFRESH_TOKEN}" -eq 1 || ! -s "${token_file}" ]]; then
"${NSC_BIN}" auth check-login --duration 20m >/dev/null
"${NSC_BIN}" auth generate-dev-token --output_to "${token_file}" >/dev/null
chmod 600 "${token_file}"
elif [[ -f "${token_secret}" ]]; then
burrow_decrypt_age_secret_to_temp "${REPO_ROOT}" "${token_secret}" > "${token_file}"
fi
if [[ -s "${token_file}" ]]; then
TOKEN_FILE="${token_file}" python3 - <<'PY'
import json
import os
from pathlib import Path
path = Path(os.environ["TOKEN_FILE"])
raw = path.read_text(encoding="utf-8").strip()
if not raw:
raise SystemExit(0)
try:
parsed = json.loads(raw)
except json.JSONDecodeError:
parsed = None
if isinstance(parsed, dict):
bearer = parsed.get("bearer_token")
session = parsed.get("session_token")
if isinstance(bearer, str) and bearer.strip():
raise SystemExit(0)
if isinstance(session, str) and session.strip():
raise SystemExit(0)
path.write_text(json.dumps({"bearer_token": raw}, indent=2) + "\n", encoding="utf-8")
PY
fi
webhook_secret="$(python3 - <<'PY'
@ -302,9 +233,5 @@ PY
chmod 600 "${dispatcher_out}" "${autoscaler_out}"
burrow_encrypt_secret_from_file "${REPO_ROOT}" "${token_secret}" "${token_file}"
burrow_encrypt_secret_from_file "${REPO_ROOT}" "${dispatcher_secret}" "${dispatcher_out}"
burrow_encrypt_secret_from_file "${REPO_ROOT}" "${autoscaler_secret}" "${autoscaler_out}"
echo "Updated secrets/forgejo/{nsc-token,nsc-dispatcher-config,nsc-autoscaler-config}.age."
echo "Rendered intake/forgejo_nsc_token.txt, intake/forgejo_nsc_dispatcher.yaml, and intake/forgejo_nsc_autoscaler.yaml."
echo "Minted Forgejo PAT ${token_name} for ${CONTACT_USER} on ${HOST}."


@ -0,0 +1,163 @@
#!/usr/bin/env bash
set -euo pipefail
repo_root="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
bundle_id="${BURROW_UI_TEST_APP_BUNDLE_ID:-com.hackclub.burrow}"
simulator_name="${BURROW_UI_TEST_SIMULATOR_NAME:-iPhone 17 Pro}"
simulator_os="${BURROW_UI_TEST_SIMULATOR_OS:-26.4}"
simulator_id="${BURROW_UI_TEST_SIMULATOR_ID:-}"
derived_data_path="${BURROW_UI_TEST_DERIVED_DATA_PATH:-/tmp/burrow-ui-tests-deriveddata}"
source_packages_path="${BURROW_UI_TEST_SOURCE_PACKAGES_PATH:-/tmp/burrow-ui-tests-sourcepackages}"
fallback_dir="/tmp/${bundle_id}/SimulatorFallback"
socket_path="${fallback_dir}/burrow.sock"
tailnet_state_root="/tmp/${bundle_id}/SimulatorTailnetState"
daemon_log="${BURROW_UI_TEST_DAEMON_LOG:-/tmp/burrow-ui-test-daemon.log}"
ui_test_config_path="${BURROW_UI_TEST_CONFIG_PATH:-/tmp/burrow-ui-test-config.json}"
ui_test_runner_bundle_id="${bundle_id}.uitests.xctrunner"
ui_test_email="${BURROW_UI_TEST_EMAIL:-ui-test@burrow.net}"
ui_test_username="${BURROW_UI_TEST_USERNAME:-ui-test}"
ui_test_tailnet_mode="${BURROW_UI_TEST_TAILNET_MODE:-tailscale}"
password_secret="${repo_root}/secrets/infra/authentik-ui-test-password.age"
age_identity="${BURROW_UI_TEST_AGE_IDENTITY:-${HOME}/.ssh/id_ed25519}"
ui_test_password="${BURROW_UI_TEST_PASSWORD:-}"
if [[ -z "$ui_test_password" ]]; then
if [[ -f "$password_secret" && -f "$age_identity" ]]; then
ui_test_password="$(age -d -i "$age_identity" "$password_secret" | tr -d '\r\n')"
else
echo "error: BURROW_UI_TEST_PASSWORD is unset and ${password_secret} could not be decrypted" >&2
exit 1
fi
fi
rm -rf "$fallback_dir" "$tailnet_state_root"
mkdir -p "$fallback_dir" "$tailnet_state_root" "$derived_data_path" "$source_packages_path"
rm -f "$socket_path"
resolve_simulator_id() {
xcrun simctl list devices available -j | python3 -c '
import json
import os
import sys
target_name = sys.argv[1]
target_os = sys.argv[2]
target_runtime = "com.apple.CoreSimulator.SimRuntime.iOS-" + target_os.replace(".", "-")
devices = json.load(sys.stdin).get("devices", {})
healthy = []
for runtime, entries in devices.items():
    if runtime != target_runtime:
        continue
    for entry in entries:
        if not entry.get("isAvailable", False):
            continue
        if not os.path.isdir(entry.get("dataPath", "")):
            continue
        healthy.append(entry)
for entry in healthy:
    if entry.get("name") == target_name:
        print(entry["udid"])
        raise SystemExit(0)
for entry in healthy:
    if target_name in entry.get("name", ""):
        print(entry["udid"])
        raise SystemExit(0)
raise SystemExit(1)
' "$simulator_name" "$simulator_os"
}
if [[ -z "$simulator_id" ]]; then
simulator_id="$(resolve_simulator_id || true)"
fi
if [[ -n "$simulator_id" ]]; then
xcrun simctl boot "$simulator_id" >/dev/null 2>&1 || true
xcrun simctl bootstatus "$simulator_id" -b
xcrun simctl terminate "$simulator_id" "$bundle_id" >/dev/null 2>&1 || true
xcrun simctl terminate "$simulator_id" "$ui_test_runner_bundle_id" >/dev/null 2>&1 || true
xcrun simctl uninstall "$simulator_id" "$bundle_id" >/dev/null 2>&1 || true
xcrun simctl uninstall "$simulator_id" "$ui_test_runner_bundle_id" >/dev/null 2>&1 || true
destination="id=${simulator_id}"
else
destination="platform=iOS Simulator,name=${simulator_name},OS=${simulator_os}"
fi
cleanup() {
rm -f "$ui_test_config_path"
if [[ -n "${daemon_pid:-}" ]]; then
kill "$daemon_pid" >/dev/null 2>&1 || true
wait "$daemon_pid" >/dev/null 2>&1 || true
fi
}
trap cleanup EXIT
umask 077
python3 - <<'PY' "$ui_test_config_path" "$ui_test_email" "$ui_test_username" "$ui_test_password" "$ui_test_tailnet_mode"
import json
import pathlib
import sys
config_path = pathlib.Path(sys.argv[1])
config_path.write_text(
    json.dumps(
        {
            "email": sys.argv[2],
            "username": sys.argv[3],
            "password": sys.argv[4],
            "mode": sys.argv[5],
        }
    ),
    encoding="utf-8",
)
PY
cargo build -p burrow --bin burrow
(
cd "$fallback_dir"
RUST_LOG="${BURROW_UI_TEST_RUST_LOG:-info,burrow=debug}" \
BURROW_SOCKET_PATH="burrow.sock" \
BURROW_TAILSCALE_STATE_ROOT="$tailnet_state_root" \
"${repo_root}/target/debug/burrow" daemon >"$daemon_log" 2>&1
) &
daemon_pid=$!
for _ in $(seq 1 50); do
[[ -S "$socket_path" ]] && break
sleep 0.2
done
if [[ ! -S "$socket_path" ]]; then
echo "error: Burrow daemon did not create ${socket_path}" >&2
[[ -f "$daemon_log" ]] && cat "$daemon_log" >&2
exit 1
fi
common_xcodebuild_args=(
-quiet
-skipPackagePluginValidation
-project "${repo_root}/Apple/Burrow.xcodeproj"
-scheme App
-configuration Debug
-destination "$destination"
-derivedDataPath "$derived_data_path"
-clonedSourcePackagesDirPath "$source_packages_path"
-only-testing:BurrowUITests
-parallel-testing-enabled NO
-maximum-concurrent-test-simulator-destinations 1
-maximum-parallel-testing-workers 1
CODE_SIGNING_ALLOWED=NO
)
xcodebuild \
"${common_xcodebuild_args[@]}" \
build-for-testing
BURROW_UI_TEST_EMAIL="$ui_test_email" \
BURROW_UI_TEST_USERNAME="$ui_test_username" \
BURROW_UI_TEST_PASSWORD="$ui_test_password" \
BURROW_UI_TEST_CONFIG_PATH="$ui_test_config_path" \
BURROW_UI_TEST_EPHEMERAL_AUTH=1 \
xcodebuild \
"${common_xcodebuild_args[@]}" \
test-without-building
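The `resolve_simulator_id` helper above maps an OS version string to a CoreSimulator runtime identifier, then matches device names first exactly and then by substring. A standalone sketch of that selection logic (the sample JSON in the usage note is illustrative, not real `simctl` output, and the on-disk `dataPath` check is omitted here):

```python
import json


def runtime_id(target_os: str) -> str:
    # Mirrors the helper's mapping: "26.4" -> "com.apple.CoreSimulator.SimRuntime.iOS-26-4".
    return "com.apple.CoreSimulator.SimRuntime.iOS-" + target_os.replace(".", "-")


def pick_udid(devices_json: str, target_name: str, target_os: str):
    # Keep only available devices on the requested runtime.
    devices = json.loads(devices_json).get("devices", {})
    healthy = [
        entry
        for runtime, entries in devices.items()
        if runtime == runtime_id(target_os)
        for entry in entries
        if entry.get("isAvailable", False)
    ]
    # Prefer an exact name match, then fall back to a substring match.
    for matches in (
        lambda e: e.get("name") == target_name,
        lambda e: target_name in e.get("name", ""),
    ):
        for entry in healthy:
            if matches(entry):
                return entry["udid"]
    return None
```

With a one-device payload such as `{"devices": {"com.apple.CoreSimulator.SimRuntime.iOS-26-4": [{"name": "iPhone 17 Pro", "udid": "AAA", "isAvailable": true}]}}`, both `pick_udid(payload, "iPhone 17 Pro", "26.4")` and the substring query `pick_udid(payload, "iPhone", "26.4")` select that device.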


@ -0,0 +1,186 @@
#!/usr/bin/env bash
set -euo pipefail
repo_root="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
bundle_id="${BURROW_UI_TEST_APP_BUNDLE_ID:-com.hackclub.burrow}"
smoke_root="${BURROW_TAILNET_SMOKE_ROOT:-/tmp/burrow-tailnet-connectivity}"
socket_path="${smoke_root}/burrow.sock"
db_path="${smoke_root}/burrow.db"
daemon_log="${BURROW_TAILNET_SMOKE_DAEMON_LOG:-${smoke_root}/daemon.log}"
payload_path="${smoke_root}/tailnet.json"
authority="${BURROW_TAILNET_SMOKE_AUTHORITY:-https://ts.burrow.net}"
account_name="${BURROW_TAILNET_SMOKE_ACCOUNT:-ui-test}"
identity_name="${BURROW_TAILNET_SMOKE_IDENTITY:-apple}"
hostname="${BURROW_TAILNET_SMOKE_HOSTNAME:-burrow-apple}"
message="${BURROW_TAILNET_SMOKE_MESSAGE:-burrow-tailnet-smoke}"
timeout_ms="${BURROW_TAILNET_SMOKE_TIMEOUT_MS:-8000}"
remote_ip="${BURROW_TAILNET_SMOKE_REMOTE_IP:-}"
remote_port="${BURROW_TAILNET_SMOKE_REMOTE_PORT:-18081}"
remote_hostname="${BURROW_TAILNET_SMOKE_REMOTE_HOSTNAME:-burrow-echo}"
remote_authkey="${BURROW_TAILNET_SMOKE_REMOTE_AUTHKEY:-}"
helper_bin="${BURROW_TAILNET_SMOKE_HELPER_BIN:-${smoke_root}/tailscale-login-bridge}"
remote_state_root="${BURROW_TAILNET_SMOKE_REMOTE_STATE_ROOT:-${smoke_root}/remote-state}"
remote_stdout="${smoke_root}/remote-helper.stdout"
remote_stderr="${BURROW_TAILNET_SMOKE_REMOTE_LOG:-${smoke_root}/remote-helper.log}"
if [[ -n "${TS_AUTHKEY:-}" ]]; then
default_tailnet_state_root="${smoke_root}/local-state"
else
default_tailnet_state_root="/tmp/${bundle_id}/SimulatorTailnetState"
fi
tailnet_state_root="${BURROW_TAILNET_STATE_ROOT:-${default_tailnet_state_root}}"
need_login=0
if [[ -z "${TS_AUTHKEY:-}" ]] && { [[ ! -d "$tailnet_state_root" ]] || [[ -z "$(find "$tailnet_state_root" -mindepth 1 -maxdepth 2 -print -quit 2>/dev/null)" ]]; }; then
need_login=1
fi
if [[ "$need_login" -eq 1 ]]; then
echo "Tailnet state root is empty; running iOS login bootstrap first..."
"${repo_root}/Scripts/run-ios-tailnet-ui-tests.sh"
fi
rm -rf "$smoke_root"
mkdir -p "$smoke_root"
cleanup() {
rm -f "$payload_path"
if [[ -n "${daemon_pid:-}" ]]; then
kill "$daemon_pid" >/dev/null 2>&1 || true
wait "$daemon_pid" >/dev/null 2>&1 || true
fi
if [[ -n "${remote_pid:-}" ]]; then
kill "$remote_pid" >/dev/null 2>&1 || true
wait "$remote_pid" >/dev/null 2>&1 || true
fi
}
trap cleanup EXIT
wait_for_helper_listen() {
python3 - <<'PY' "$1"
import json
import pathlib
import sys
import time
path = pathlib.Path(sys.argv[1])
deadline = time.time() + 20
while time.time() < deadline:
    if path.exists():
        with path.open("r", encoding="utf-8") as handle:
            line = handle.readline().strip()
        if line:
            hello = json.loads(line)
            print(hello["listen_addr"])
            raise SystemExit(0)
    time.sleep(0.1)
raise SystemExit("timed out waiting for helper startup line")
PY
}
wait_for_helper_ip() {
python3 - <<'PY' "$1"
import json
import sys
import time
import urllib.request
url = sys.argv[1]
deadline = time.time() + 30
while time.time() < deadline:
    status = None
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            status = json.load(response)
    except OSError:
        # The helper may not be listening yet; keep polling until the deadline.
        pass
    if status and status.get("running") and status.get("tailscale_ips"):
        print(status["tailscale_ips"][0])
        raise SystemExit(0)
    time.sleep(0.25)
raise SystemExit("timed out waiting for helper to become ready")
PY
}
python3 - <<'PY' "$payload_path" "$authority" "$account_name" "$identity_name" "$hostname"
import json
import pathlib
import sys
path = pathlib.Path(sys.argv[1])
payload = {
    "authority": sys.argv[2],
    "account": sys.argv[3],
    "identity": sys.argv[4],
    "hostname": sys.argv[5],
}
path.write_text(json.dumps(payload, indent=2) + "\n", encoding="utf-8")
PY
cargo build -p burrow --bin burrow
(
cd "${repo_root}/Tools/tailscale-login-bridge"
GOWORK=off go build -o "$helper_bin" .
)
if [[ -z "$remote_ip" ]]; then
if [[ -z "$remote_authkey" ]] && { [[ ! -d "$remote_state_root" ]] || [[ -z "$(find "$remote_state_root" -mindepth 1 -maxdepth 1 -print -quit 2>/dev/null)" ]]; }; then
echo "error: set BURROW_TAILNET_SMOKE_REMOTE_IP, BURROW_TAILNET_SMOKE_REMOTE_AUTHKEY, or BURROW_TAILNET_SMOKE_REMOTE_STATE_ROOT to an existing logged-in helper state" >&2
exit 1
fi
if [[ -n "$remote_authkey" ]]; then
rm -rf "$remote_state_root"
mkdir -p "$remote_state_root"
fi
(
cd "$repo_root"
if [[ -n "$remote_authkey" ]]; then
export TS_AUTHKEY="$remote_authkey"
fi
"$helper_bin" \
--listen 127.0.0.1:0 \
--state-dir "$remote_state_root" \
--hostname "$remote_hostname" \
--control-url "$authority" \
--udp-echo-port "$remote_port" \
>"$remote_stdout" 2>"$remote_stderr"
) &
remote_pid=$!
remote_listen_addr="$(wait_for_helper_listen "$remote_stdout")"
remote_ip="$(wait_for_helper_ip "http://${remote_listen_addr}/status")"
fi
(
cd "$smoke_root"
RUST_LOG="${BURROW_TAILNET_SMOKE_RUST_LOG:-info,burrow=debug}" \
BURROW_SOCKET_PATH="$socket_path" \
BURROW_TAILSCALE_STATE_ROOT="$tailnet_state_root" \
"${repo_root}/target/debug/burrow" daemon >"$daemon_log" 2>&1
) &
daemon_pid=$!
for _ in $(seq 1 50); do
[[ -S "$socket_path" ]] && break
sleep 0.2
done
if [[ ! -S "$socket_path" ]]; then
echo "error: Burrow daemon did not create ${socket_path}" >&2
[[ -f "$daemon_log" ]] && cat "$daemon_log" >&2
exit 1
fi
run_burrow() {
BURROW_SOCKET_PATH="$socket_path" \
BURROW_TAILSCALE_STATE_ROOT="$tailnet_state_root" \
"${repo_root}/target/debug/burrow" "$@"
}
run_burrow network-add 1 1 "$payload_path"
run_burrow start
run_burrow tunnel-config
run_burrow tailnet-udp-echo "${remote_ip}:${remote_port}" --message "$message" --timeout-ms "$timeout_ms"
echo
echo "Tailnet connectivity smoke passed."
echo "State root: $tailnet_state_root"
echo "Remote: ${remote_ip}:${remote_port}"
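The `wait_for_helper_listen` and `wait_for_helper_ip` helpers in the smoke script both follow the same shape: poll a probe under a wall-clock deadline, treating transient failures as "not ready yet". A generic sketch of that pattern (`poll_until` and its parameters are illustrative names, not part of the scripts above):

```python
import time


def poll_until(probe, deadline_s, interval_s=0.1, clock=time.monotonic, sleep=time.sleep):
    """Call probe() until it returns a truthy value or the deadline passes.

    Exceptions raised by probe() (e.g. connection refused while a server is
    still starting) are treated as "not ready yet" rather than fatal.
    """
    deadline = clock() + deadline_s
    while clock() < deadline:
        try:
            result = probe()
        except OSError:
            result = None
        if result:
            return result
        sleep(interval_s)
    raise TimeoutError("timed out waiting for probe to become ready")
```

Keeping the clock and sleep injectable makes the pattern easy to exercise in tests; the scripts above hard-code `time.time()` and fixed sleeps, which is fine for one-off tooling.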


@ -0,0 +1,112 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
usage() {
cat <<'EOF'
Usage: Scripts/seal-forgejo-nsc-secrets.sh [options]
Encrypt Burrow forgejo-nsc runtime inputs from intake/ into the agenix secrets
consumed by burrow-forge.
Options:
--provision Re-render the local intake files before sealing.
--host <user@host> SSH target forwarded to provision-forgejo-nsc.sh.
--ssh-key <path> SSH private key forwarded to provision-forgejo-nsc.sh.
--nsc-bin <path> Override the nsc binary for provisioning.
-h, --help Show this help text.
EOF
}
PROVISION=0
HOST="${BURROW_FORGE_HOST:-root@git.burrow.net}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-${REPO_ROOT}/intake/agent_at_burrow_net_ed25519}"
NSC_BIN="${NSC_BIN:-}"
while [[ $# -gt 0 ]]; do
case "$1" in
--provision)
PROVISION=1
shift
;;
--host)
HOST="${2:?missing value for --host}"
shift 2
;;
--ssh-key)
SSH_KEY="${2:?missing value for --ssh-key}"
shift 2
;;
--nsc-bin)
NSC_BIN="${2:?missing value for --nsc-bin}"
shift 2
;;
-h|--help)
usage
exit 0
;;
*)
echo "unknown option: $1" >&2
usage >&2
exit 64
;;
esac
done
require_cmd() {
if ! command -v "$1" >/dev/null 2>&1; then
echo "missing required command: $1" >&2
exit 1
fi
}
require_cmd age
require_cmd nix
require_cmd python3
if [[ "${PROVISION}" -eq 1 ]]; then
provision_args=(--host "${HOST}" --ssh-key "${SSH_KEY}")
if [[ -n "${NSC_BIN}" ]]; then
provision_args+=(--nsc-bin "${NSC_BIN}")
fi
"${SCRIPT_DIR}/provision-forgejo-nsc.sh" "${provision_args[@]}"
fi
tmpdir="$(mktemp -d)"
cleanup() {
rm -rf "${tmpdir}"
}
trap cleanup EXIT
seal_secret() {
local target="$1"
local source_path="$2"
recipients_file="${tmpdir}/$(basename "${target}").recipients"
if [[ ! -s "${source_path}" ]]; then
echo "required runtime input missing or empty: ${source_path}" >&2
exit 1
fi
nix eval --impure --json --expr "let s = import ${REPO_ROOT}/secrets.nix; in s.\"${target}\".publicKeys" \
| python3 -c 'import json, sys; [print(item) for item in json.load(sys.stdin)]' \
> "${recipients_file}"
age -R "${recipients_file}" -o "${REPO_ROOT}/${target}" "${source_path}"
}
seal_secret "secrets/infra/forgejo-nsc-token.age" "${REPO_ROOT}/intake/forgejo_nsc_token.txt"
seal_secret "secrets/infra/forgejo-nsc-dispatcher-config.age" "${REPO_ROOT}/intake/forgejo_nsc_dispatcher.yaml"
seal_secret "secrets/infra/forgejo-nsc-autoscaler-config.age" "${REPO_ROOT}/intake/forgejo_nsc_autoscaler.yaml"
chmod 600 \
"${REPO_ROOT}/secrets/infra/forgejo-nsc-token.age" \
"${REPO_ROOT}/secrets/infra/forgejo-nsc-dispatcher-config.age" \
"${REPO_ROOT}/secrets/infra/forgejo-nsc-autoscaler-config.age"
echo "Sealed forgejo-nsc runtime inputs into:"
printf ' %s\n' \
"${REPO_ROOT}/secrets/infra/forgejo-nsc-token.age" \
"${REPO_ROOT}/secrets/infra/forgejo-nsc-dispatcher-config.age" \
"${REPO_ROOT}/secrets/infra/forgejo-nsc-autoscaler-config.age"
echo "Deploy burrow-forge to apply the new CI credentials."
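In `seal_secret` above, the recipients file is built by piping `nix eval --json ... .publicKeys` through a one-line Python filter. That filter's behavior, sketched as a function (the input shape, a JSON array of key strings, is assumed from the pipeline):

```python
import json


def recipients_lines(nix_eval_json: str) -> str:
    # `nix eval --json ... .publicKeys` emits a JSON array of key strings;
    # `age -R` expects one recipient per line.
    keys = json.loads(nix_eval_json)
    return "\n".join(keys) + "\n" if keys else ""
```

For example, `recipients_lines('["ssh-ed25519 AAAA user@host", "age1xyz"]')` yields the two-line recipients file `age -R` consumes.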


@ -1,109 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage: Scripts/sync-forgejo-nsc-config.sh [options]
Deploy Burrow forgejo-nsc runtime inputs from age secrets onto the forge host.
Options:
--host <user@host> SSH target (default: root@git.burrow.net)
--ssh-key <path> SSH private key (default: secrets/forgejo/agent-ssh-key.age, then intake/)
--rotate-pat Re-render the encrypted runtime inputs before deploying.
--no-restart Validate the encrypted inputs only; do not deploy.
-h, --help Show this help text.
EOF
}
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${SCRIPT_DIR}/_burrow-secrets.sh"
HOST="${BURROW_FORGE_HOST:-root@git.burrow.net}"
SSH_KEY="${BURROW_FORGE_SSH_KEY:-}"
KNOWN_HOSTS_FILE="${BURROW_FORGE_KNOWN_HOSTS_FILE:-${HOME}/.cache/burrow/forge-known_hosts}"
ROTATE_PAT=0
NO_RESTART=0
TMP_DIR=""
cleanup() {
[[ -n "${TMP_DIR}" ]] && rm -rf "${TMP_DIR}" >/dev/null 2>&1 || true
burrow_cleanup_secret_tmpfiles
}
trap cleanup EXIT
while [[ $# -gt 0 ]]; do
case "$1" in
--host)
HOST="${2:?missing value for --host}"
shift 2
;;
--ssh-key)
SSH_KEY="${2:?missing value for --ssh-key}"
shift 2
;;
--rotate-pat)
ROTATE_PAT=1
shift
;;
--no-restart)
NO_RESTART=1
shift
;;
-h|--help)
usage
exit 0
;;
*)
echo "unknown option: $1" >&2
usage >&2
exit 64
;;
esac
done
mkdir -p "$(dirname "${KNOWN_HOSTS_FILE}")"
burrow_require_cmd() {
if ! command -v "$1" >/dev/null 2>&1; then
echo "missing required command: $1" >&2
exit 1
fi
}
burrow_require_cmd ssh
SSH_KEY="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${SSH_KEY}" \
"${REPO_ROOT}/intake/agent_at_burrow_net_ed25519" \
"${REPO_ROOT}/secrets/forgejo/agent-ssh-key.age" \
"${HOME}/.ssh/agent_at_burrow_net_ed25519"
)"
if [[ "${ROTATE_PAT}" -eq 1 ]]; then
"${SCRIPT_DIR}/provision-forgejo-nsc.sh" --host "${HOST}" --ssh-key "${SSH_KEY}"
fi
token_file="${REPO_ROOT}/secrets/forgejo/nsc-token.age"
dispatcher_file="${REPO_ROOT}/secrets/forgejo/nsc-dispatcher-config.age"
autoscaler_file="${REPO_ROOT}/secrets/forgejo/nsc-autoscaler-config.age"
for path in "${token_file}" "${dispatcher_file}" "${autoscaler_file}"; do
if [[ ! -s "${path}" ]]; then
echo "required runtime input missing or empty: ${path}" >&2
exit 1
fi
done
if [[ "${NO_RESTART}" -eq 0 ]]; then
BURROW_FORGE_HOST="${HOST}" \
BURROW_FORGE_SSH_KEY="${SSH_KEY}" \
BURROW_FORGE_KNOWN_HOSTS_FILE="${KNOWN_HOSTS_FILE}" \
"${SCRIPT_DIR}/forge-deploy.sh" --switch
fi
echo "forgejo-nsc runtime sync complete (host=${HOST}, deployed=$((1 - NO_RESTART)))."
echo "Scripts/sync-forgejo-nsc-config.sh is obsolete." >&2
echo "Burrow forgejo-nsc now consumes agenix-backed secrets instead of host-local intake files." >&2
echo "Use Scripts/seal-forgejo-nsc-secrets.sh and deploy burrow-forge." >&2
exit 1


@ -3,22 +3,17 @@
set -euo pipefail
umask 077
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
# shellcheck source=Scripts/_burrow-secrets.sh
source "${REPO_ROOT}/Scripts/_burrow-secrets.sh"
usage() {
cat <<'EOF'
Usage:
Tools/forwardemail-custom-s3.sh \
--domain burrow.net \
--api-token-file secrets/forwardemail/api-token.age \
--api-token-file intake/forwardemail_api_token.txt \
--s3-endpoint https://<endpoint> \
--s3-region <region> \
--s3-bucket <bucket> \
--s3-access-key-file secrets/forwardemail/hetzner-s3-user.age \
--s3-secret-key-file secrets/forwardemail/hetzner-s3-secret.age
--s3-access-key-file intake/hetzner-s3-user.txt \
--s3-secret-key-file intake/hetzner-s3-secret.txt
Options:
--domain <domain> Forward Email domain to update.
@ -59,18 +54,13 @@ read_secret() {
printf '%s' "$value"
}
cleanup() {
burrow_cleanup_secret_tmpfiles
}
trap cleanup EXIT
domain=""
api_token_file="${FORWARDEMAIL_API_TOKEN_FILE:-}"
api_token_file=""
s3_endpoint=""
s3_region=""
s3_bucket=""
s3_access_key_file="${FORWARDEMAIL_S3_ACCESS_KEY_FILE:-}"
s3_secret_key_file="${FORWARDEMAIL_S3_SECRET_KEY_FILE:-}"
s3_access_key_file=""
s3_secret_key_file=""
test_only=false
while [[ $# -gt 0 ]]; do
@ -118,38 +108,16 @@ while [[ $# -gt 0 ]]; do
done
[[ -n "$domain" ]] || fail "--domain is required"
[[ -n "$api_token_file" ]] || fail "--api-token-file is required"
[[ -n "$s3_endpoint" || "$test_only" == true ]] || fail "--s3-endpoint is required unless --test-only is set"
[[ -n "$s3_region" || "$test_only" == true ]] || fail "--s3-region is required unless --test-only is set"
[[ -n "$s3_bucket" || "$test_only" == true ]] || fail "--s3-bucket is required unless --test-only is set"
api_token_file="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${api_token_file}" \
"${REPO_ROOT}/intake/forwardemail_api_token.txt" \
"${REPO_ROOT}/secrets/forwardemail/api-token.age"
)" || fail "unable to resolve Forward Email API token file"
[[ -n "$s3_access_key_file" || "$test_only" == true ]] || fail "--s3-access-key-file is required unless --test-only is set"
[[ -n "$s3_secret_key_file" || "$test_only" == true ]] || fail "--s3-secret-key-file is required unless --test-only is set"
require_file "$api_token_file"
api_token="$(read_secret "$api_token_file")"
if [[ "$test_only" != true ]]; then
s3_access_key_file="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${s3_access_key_file}" \
"${REPO_ROOT}/intake/hetzner-s3-user.txt" \
"${REPO_ROOT}/secrets/forwardemail/hetzner-s3-user.age"
)" || fail "unable to resolve Hetzner S3 access key file"
s3_secret_key_file="$(
burrow_resolve_secret_file \
"${REPO_ROOT}" \
"${s3_secret_key_file}" \
"${REPO_ROOT}/intake/hetzner-s3-secret.txt" \
"${REPO_ROOT}/secrets/forwardemail/hetzner-s3-secret.age"
)" || fail "unable to resolve Hetzner S3 secret key file"
require_file "$s3_access_key_file"
require_file "$s3_secret_key_file"
fi
if [[ "$test_only" == false ]]; then
require_file "$s3_access_key_file"
require_file "$s3_secret_key_file"


@ -6,7 +6,6 @@ import argparse
import datetime as dt
import hashlib
import hmac
import subprocess
import sys
import textwrap
from pathlib import Path
@ -14,38 +13,11 @@ from urllib.parse import urlencode, urlparse
import requests
REPO_ROOT = Path(__file__).resolve().parent.parent
def default_secret_path(age_rel: str, intake_rel: str) -> str:
    age_path = REPO_ROOT / age_rel
    if age_path.exists():
        return str(age_path)
    return intake_rel
def read_secret(path: str) -> str:
    file_path = Path(path)
    if not file_path.is_absolute():
        file_path = REPO_ROOT / file_path
    if file_path.suffix == ".age":
        value = subprocess.check_output(
            [
                "nix",
                "--extra-experimental-features",
                "nix-command flakes",
                "run",
                f"{REPO_ROOT}#agenix",
                "--",
                "-d",
                str(file_path),
            ],
            text=True,
        ).strip()
    else:
        value = file_path.read_text(encoding="utf-8").strip()
    value = Path(path).read_text(encoding="utf-8").strip()
    if not value:
        raise SystemExit(f"error: empty secret file: {file_path}")
        raise SystemExit(f"error: empty secret file: {path}")
    return value
@ -240,12 +212,12 @@ def parse_args() -> argparse.Namespace:
    parser.add_argument("--region", default="hel1", help="S3 region.")
    parser.add_argument(
        "--access-key-file",
        default=default_secret_path("secrets/forwardemail/hetzner-s3-user.age", "intake/hetzner-s3-user.txt"),
        default="intake/hetzner-s3-user.txt",
        help="File containing the S3 access key id.",
    )
    parser.add_argument(
        "--secret-key-file",
        default=default_secret_path("secrets/forwardemail/hetzner-s3-secret.age", "intake/hetzner-s3-secret.txt"),
        default="intake/hetzner-s3-secret.txt",
        help="File containing the S3 secret key.",
    )
    parser.add_argument(


@ -0,0 +1,66 @@
module burrow.dev/tailscale-login-bridge
go 1.26.1
require tailscale.com v1.96.5
require (
filippo.io/edwards25519 v1.2.0 // indirect
github.com/akutz/memconn v0.1.0 // indirect
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa // indirect
github.com/aws/aws-sdk-go-v2 v1.41.0 // indirect
github.com/aws/aws-sdk-go-v2/config v1.29.5 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.58 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.14 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.5 // indirect
github.com/aws/smithy-go v1.24.0 // indirect
github.com/coder/websocket v1.8.12 // indirect
github.com/creachadair/msync v0.7.1 // indirect
github.com/dblohm7/wingoes v0.0.0-20240119213807-a09d6be7affa // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/gaissmai/bart v0.26.1 // indirect
github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced // indirect
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hdevalence/ed25519consensus v0.2.0 // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/jsimonetti/rtnetlink v1.4.0 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42 // indirect
github.com/mdlayher/socket v0.5.0 // indirect
github.com/mitchellh/go-ps v1.0.0 // indirect
github.com/pires/go-proxyproto v0.8.1 // indirect
github.com/prometheus-community/pro-bing v0.4.0 // indirect
github.com/safchain/ethtool v0.3.0 // indirect
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e // indirect
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 // indirect
github.com/tailscale/hujson v0.0.0-20221223112325-20486734a56a // indirect
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc // indirect
github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 // indirect
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da // indirect
github.com/x448/float16 v0.8.4 // indirect
go4.org/mem v0.0.0-20240501181205-ae6ca9944745 // indirect
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba // indirect
golang.org/x/crypto v0.46.0 // indirect
golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b // indirect
golang.org/x/net v0.48.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.40.0 // indirect
golang.org/x/term v0.38.0 // indirect
golang.org/x/text v0.32.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect
golang.zx2c4.com/wireguard/windows v0.5.3 // indirect
gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8 // indirect
)


@ -0,0 +1,229 @@
9fans.net/go v0.0.8-0.20250307142834-96bdba94b63f h1:1C7nZuxUMNz7eiQALRfiqNOm04+m3edWlRff/BYHf0Q=
9fans.net/go v0.0.8-0.20250307142834-96bdba94b63f/go.mod h1:hHyrZRryGqVdqrknjq5OWDLGCTJ2NeEvtrpR96mjraM=
filippo.io/edwards25519 v1.2.0 h1:crnVqOiS4jqYleHd9vaKZ+HKtHfllngJIiOpNpoJsjo=
filippo.io/edwards25519 v1.2.0/go.mod h1:xzAOLCNug/yB62zG1bQ8uziwrIqIuxhctzJT18Q77mc=
filippo.io/mkcert v1.4.4 h1:8eVbbwfVlaqUM7OwuftKc2nuYOoTDQWqsoXmzoXZdbc=
filippo.io/mkcert v1.4.4/go.mod h1:VyvOchVuAye3BoUsPUOOofKygVwLV2KQMVFJNRq+1dA=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/akutz/memconn v0.1.0 h1:NawI0TORU4hcOMsMr11g7vwlCdkYeLKXBcxWu2W/P8A=
github.com/akutz/memconn v0.1.0/go.mod h1:Jo8rI7m0NieZyLI5e2CDlRdRqRRB4S7Xp77ukDjH+Fw=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7VVbI0o4wBRNQIgn917usHWOd6VAffYI=
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/aws/aws-sdk-go-v2 v1.41.0 h1:tNvqh1s+v0vFYdA1xq0aOJH+Y5cRyZ5upu6roPgPKd4=
github.com/aws/aws-sdk-go-v2 v1.41.0/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0=
github.com/aws/aws-sdk-go-v2/config v1.29.5 h1:4lS2IB+wwkj5J43Tq/AwvnscBerBJtQQ6YS7puzCI1k=
github.com/aws/aws-sdk-go-v2/config v1.29.5/go.mod h1:SNzldMlDVbN6nWxM7XsUiNXPSa1LWlqiXtvh/1PrJGg=
github.com/aws/aws-sdk-go-v2/credentials v1.17.58 h1:/d7FUpAPU8Lf2KUdjniQvfNdlMID0Sd9pS23FJ3SS9Y=
github.com/aws/aws-sdk-go-v2/credentials v1.17.58/go.mod h1:aVYW33Ow10CyMQGFgC0ptMRIqJWvJ4nxZb0sUiuQT/A=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27 h1:7lOW8NUwE9UZekS1DYoiPdVAqZ6A+LheHWb+mHbNOq8=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27/go.mod h1:w1BASFIPOPUae7AgaH4SbjNbfdkxuggLyGfNFTn8ITY=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 h1:rgGwPzb82iBYSvHMHXc8h9mRoOUBZIGFgKb9qniaZZc=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16/go.mod h1:L/UxsGeKpGoIj6DxfhOWHWQ/kGKcd4I1VncE4++IyKA=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 h1:1jtGzuV7c82xnqOVfx2F0xmJcOw5374L7N6juGW6x6U=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16/go.mod h1:M2E5OQf+XLe+SZGmmpaI2yy+J326aFf6/+54PoxSANc=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2 h1:Pg9URiobXy85kgFev3og2CuOZ8JZUBENF+dcgWBaYNk=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 h1:0ryTNEdJbzUCEWkVXEXoqlXV72J5keC1GvILMOuD00E=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4/go.mod h1:HQ4qwNZh32C3CBeO6iJLQlgtMzqeG17ziAA/3KDJFow=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 h1:oHjJHeUy0ImIV0bsrX0X91GkV5nJAyv1l1CC9lnO0TI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16/go.mod h1:iRSNGgOYmiYwSCXxXaKb9HfOEj40+oTKn8pTxMlYkRM=
github.com/aws/aws-sdk-go-v2/service/ssm v1.44.7 h1:a8HvP/+ew3tKwSXqL3BCSjiuicr+XTU2eFYeogV9GJE=
github.com/aws/aws-sdk-go-v2/service/ssm v1.44.7/go.mod h1:Q7XIWsMo0JcMpI/6TGD6XXcXcV1DbTj6e9BKNntIMIM=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.14 h1:c5WJ3iHz7rLIgArznb3JCSQT3uUMiz9DLZhIX+1G8ok=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.14/go.mod h1:+JJQTxB6N4niArC14YNtxcQtwEqzS3o9Z32n7q33Rfs=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13 h1:f1L/JtUkVODD+k1+IiSJUUv8A++2qVr+Xvb3xWXETMU=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13/go.mod h1:tvqlFoja8/s0o+UruA1Nrezo/df0PzdunMDDurUfg6U=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.5 h1:SciGFVNZ4mHdm7gpD1dgZYnCuVdX1s+lFTg4+4DOy70=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.5/go.mod h1:iW40X4QBmUxdP+fZNOpfmkdMZqsovezbAeO+Ubiv2pk=
github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk=
github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02 h1:bXAPYSbdYbS5VTy92NIUbeDI1qyggi+JYh5op9IFlcQ=
github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02/go.mod h1:k08r+Yj1PRAmuayFiRK6MYuR5Ve4IuZtTfxErMIh0+c=
github.com/cilium/ebpf v0.16.0 h1:+BiEnHL6Z7lXnlGUsXQPPAE7+kenAd4ES8MQ5min0Ok=
github.com/cilium/ebpf v0.16.0/go.mod h1:L7u2Blt2jMM/vLAVgjxluxtBKlz3/GWjB0dMOEngfwE=
github.com/coder/websocket v1.8.12 h1:5bUXkEPPIbewrnkU8LTCLVaxi4N4J8ahufH2vlo4NAo=
github.com/coder/websocket v1.8.12/go.mod h1:LNVeNrXQZfe5qhS9ALED3uA+l5pPqvwXg3CKoDBB2gs=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 h1:8h5+bWd7R6AYUslN6c6iuZWTKsKxUFDlpnmilO6R2n0=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
github.com/creachadair/mds v0.25.9 h1:080Hr8laN2h+l3NeVCGMBpXtIPnl9mz8e4HLraGPqtA=
github.com/creachadair/mds v0.25.9/go.mod h1:4hatI3hRM+qhzuAmqPRFvaBM8mONkS7nsLxkcuTYUIs=
github.com/creachadair/msync v0.7.1 h1:SeZmuEBXQPe5GqV/C94ER7QIZPwtvFbeQiykzt/7uho=
github.com/creachadair/msync v0.7.1/go.mod h1:8CcFlLsSujfHE5wWm19uUBLHIPDAUr6LXDwneVMO008=
github.com/creachadair/taskgroup v0.13.2 h1:3KyqakBuFsm3KkXi/9XIb0QcA8tEzLHLgaoidf0MdVc=
github.com/creachadair/taskgroup v0.13.2/go.mod h1:i3V1Zx7H8RjwljUEeUWYT30Lmb9poewSb2XI1yTwD0g=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/dblohm7/wingoes v0.0.0-20240119213807-a09d6be7affa h1:h8TfIT1xc8FWbwwpmHn1J5i43Y0uZP97GqasGCzSRJk=
github.com/dblohm7/wingoes v0.0.0-20240119213807-a09d6be7affa/go.mod h1:Nx87SkVqTKd8UtT+xu7sM/l+LgXs6c0aHrlKusR+2EQ=
github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc h1:8WFBn63wegobsYAX0YjD+8suexZDga5CctH4CCTx2+8=
github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc/go.mod h1:c9O8+fpSOX1DM8cPNSkX/qsBWdkD4yd2dpciOWQjpBw=
github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e h1:vUmf0yezR0y7jJ5pceLHthLaYf4bA5T14B6q39S4q2Q=
github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e/go.mod h1:YTIHhz/QFSYnu/EhlF2SpU2Uk+32abacUYA5ZPljz1A=
github.com/djherbis/times v1.6.0 h1:w2ctJ92J8fBvWPxugmXIv7Nz7Q3iDMKNx9v5ocVH20c=
github.com/djherbis/times v1.6.0/go.mod h1:gOHeRAz2h+VJNZ5Gmc/o7iD9k4wW7NMVqieYCY99oc0=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/gaissmai/bart v0.26.1 h1:+w4rnLGNlA2GDVn382Tfe3jOsK5vOr5n4KmigJ9lbTo=
github.com/gaissmai/bart v0.26.1/go.mod h1:GREWQfTLRWz/c5FTOsIw+KkscuFkIV5t8Rp7Nd1Td5c=
github.com/github/fakeca v0.1.0 h1:Km/MVOFvclqxPM9dZBC4+QE564nU4gz4iZ0D9pMw28I=
github.com/github/fakeca v0.1.0/go.mod h1:+bormgoGMMuamOscx7N91aOuUST7wdaJ2rNjeohylyo=
github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced h1:Q311OHjMh/u5E2TITc++WlTP5We0xNseRMkHDyvhW7I=
github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced/go.mod h1:TiCD2a1pcmjd7YnhGH0f/zKNcCD06B029pHhzV23c2M=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737 h1:cf60tHxREO3g1nroKr2osU3JWZsJzkfi7rEg+oAB0Lo=
github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737/go.mod h1:MIS0jDzbU/vuM9MC4YnBITCv+RYuTRq8dJzmCrFsK9g=
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 h1:sQspH8M4niEijh3PFscJRLDnkL547IeP7kpPe3uUhEg=
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466/go.mod h1:ZiQxhyQ+bbbfxUKVvjfO498oPYvtYhZzycal3G/NHmU=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-tpm v0.9.4 h1:awZRf9FwOeTunQmHoDYSHJps3ie6f1UlhS1fOdPEt1I=
github.com/google/go-tpm v0.9.4/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806 h1:wG8RYIyctLhdFk6Vl1yPGtSRtwGpVkWyZww1OCil2MI=
github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806/go.mod h1:Beg6V6zZ3oEn0JuiUQ4wqwuyqqzasOltcoXPtgLbFp4=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hdevalence/ed25519consensus v0.2.0 h1:37ICyZqdyj0lAZ8P4D1d1id3HqbbG1N3iBb1Tb4rdcU=
github.com/hdevalence/ed25519consensus v0.2.0/go.mod h1:w3BHWjwJbFU29IRHL1Iqkw3sus+7FctEyM4RqDxYNzo=
github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
github.com/illarion/gonotify/v3 v3.0.2 h1:O7S6vcopHexutmpObkeWsnzMJt/r1hONIEogeVNmJMk=
github.com/illarion/gonotify/v3 v3.0.2/go.mod h1:HWGPdPe817GfvY3w7cx6zkbzNZfi3QjcBm/wgVvEL1U=
github.com/insomniacslk/dhcp v0.0.0-20231206064809-8c70d406f6d2 h1:9K06NfxkBh25x56yVhWWlKFE8YpicaSfHwoV8SFbueA=
github.com/insomniacslk/dhcp v0.0.0-20231206064809-8c70d406f6d2/go.mod h1:3A9PQ1cunSDF/1rbTq99Ts4pVnycWg+vlPkfeD2NLFI=
github.com/jellydator/ttlcache/v3 v3.1.0 h1:0gPFG0IHHP6xyUyXq+JaD8fwkDCqgqwohXNJBcYE71g=
github.com/jellydator/ttlcache/v3 v3.1.0/go.mod h1:hi7MGFdMAwZna5n2tuvh63DvFLzVKySzCVW6+0gA2n4=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jsimonetti/rtnetlink v1.4.0 h1:Z1BF0fRgcETPEa0Kt0MRk3yV5+kF1FWTni6KUFKrq2I=
github.com/jsimonetti/rtnetlink v1.4.0/go.mod h1:5W1jDvWdnthFJ7fxYX1GMK07BUpI4oskfOqvPteYS6E=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/kortschak/wol v0.0.0-20200729010619-da482cc4850a h1:+RR6SqnTkDLWyICxS1xpjCi/3dhyV+TgZwA6Ww3KncQ=
github.com/kortschak/wol v0.0.0-20200729010619-da482cc4850a/go.mod h1:YTtCCM3ryyfiu4F7t8HQ1mxvp1UBdWM2r6Xa+nGWvDk=
github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mdlayher/genetlink v1.3.2 h1:KdrNKe+CTu+IbZnm/GVUMXSqBBLqcGpRDa0xkQy56gw=
github.com/mdlayher/genetlink v1.3.2/go.mod h1:tcC3pkCrPUGIKKsCsp0B3AdaaKuHtaxoJRz3cc+528o=
github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42 h1:A1Cq6Ysb0GM0tpKMbdCXCIfBclan4oHk1Jb+Hrejirg=
github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42/go.mod h1:BB4YCPDOzfy7FniQ/lxuYQ3dgmM2cZumHbK8RpTjN2o=
github.com/mdlayher/sdnotify v1.0.0 h1:Ma9XeLVN/l0qpyx1tNeMSeTjCPH6NtuD6/N9XdTlQ3c=
github.com/mdlayher/sdnotify v1.0.0/go.mod h1:HQUmpM4XgYkhDLtd+Uad8ZFK1T9D5+pNxnXQjCeJlGE=
github.com/mdlayher/socket v0.5.0 h1:ilICZmJcQz70vrWVes1MFera4jGiWNocSkykwwoy3XI=
github.com/mdlayher/socket v0.5.0/go.mod h1:WkcBFfvyG8QENs5+hfQPl1X6Jpd2yeLIYgrGFmJiJxI=
github.com/miekg/dns v1.1.58 h1:ca2Hdkz+cDg/7eNF6V56jjzuZ4aCAE+DbVkILdQWG/4=
github.com/miekg/dns v1.1.58/go.mod h1:Ypv+3b/KadlvW9vJfXOTf300O4UqaHFzFCuHz+rPkBY=
github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=
github.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646 h1:zYyBkD/k9seD2A7fsi6Oo2LfFZAehjjQMERAvZLEDnQ=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646/go.mod h1:jpp1/29i3P1S/RLdc7JQKbRpFeM1dOBd8T9ki5s+AY8=
github.com/pierrec/lz4/v4 v4.1.25 h1:kocOqRffaIbU5djlIBr7Wh+cx82C0vtFb0fOurZHqD0=
github.com/pierrec/lz4/v4 v4.1.25/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
github.com/pires/go-proxyproto v0.8.1 h1:9KEixbdJfhrbtjpz/ZwCdWDD2Xem0NZ38qMYaASJgp0=
github.com/pires/go-proxyproto v0.8.1/go.mod h1:ZKAAyp3cgy5Y5Mo4n9AlScrkCZwUy0g3Jf+slqQVcuU=
github.com/pkg/sftp v1.13.6 h1:JFZT4XbOU7l77xGSpOdW+pwIMqP044IyjXX6FGyEKFo=
github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
github.com/prometheus-community/pro-bing v0.4.0 h1:YMbv+i08gQz97OZZBwLyvmmQEEzyfyrrjEaAchdy3R4=
github.com/prometheus-community/pro-bing v0.4.0/go.mod h1:b7wRYZtCcPmt4Sz319BykUU241rWLe1VFXyiyWK/dH4=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/safchain/ethtool v0.3.0 h1:gimQJpsI6sc1yIqP/y8GYgiXn/NjgvpM0RNoWLVVmP0=
github.com/safchain/ethtool v0.3.0/go.mod h1:SA9BwrgyAqNo7M+uaL6IYbxpm5wk3L7Mm6ocLW+CJUs=
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e h1:PtWT87weP5LWHEY//SWsYkSO3RWRZo4OSWagh3YD2vQ=
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e/go.mod h1:XrBNfAFN+pwoWuksbFS9Ccxnopa15zJGgXRFN90l3K4=
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 h1:Gzfnfk2TWrk8Jj4P4c1a3CtQyMaTVCznlkLZI++hok4=
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55/go.mod h1:4k4QO+dQ3R5FofL+SanAUZe+/QfeK0+OIuwDIRu2vSg=
github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869 h1:SRL6irQkKGQKKLzvQP/ke/2ZuB7Py5+XuqtOgSj+iMM=
github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869/go.mod h1:ikbF+YT089eInTp9f2vmvy4+ZVnW5hzX1q2WknxSprQ=
github.com/tailscale/hujson v0.0.0-20221223112325-20486734a56a h1:SJy1Pu0eH1C29XwJucQo73FrleVK6t4kYz4NVhp34Yw=
github.com/tailscale/hujson v0.0.0-20221223112325-20486734a56a/go.mod h1:DFSS3NAGHthKo1gTlmEcSBiZrRJXi28rLNd/1udP1c8=
github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7 h1:uFsXVBE9Qr4ZoF094vE6iYTLDl0qCiKzYXlL6UeWObU=
github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7/go.mod h1:NzVQi3Mleb+qzq8VmcWpSkcSYxXIg0DkI6XDzpVkhJ0=
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc h1:24heQPtnFR+yfntqhI3oAu9i27nEojcQ4NuBQOo5ZFA=
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc/go.mod h1:f93CXfllFsO9ZQVq+Zocb1Gp4G5Fz0b0rXHLOzt/Djc=
github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 h1:UBPHPtv8+nEAy2PD8RyAhOYvau1ek0HDJqLS/Pysi14=
github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6 h1:l10Gi6w9jxvinoiq15g8OToDdASBni4CyJOdHY1Hr8M=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6/go.mod h1:ZXRML051h7o4OcI0d3AaILDIad/Xw0IkXaHM17dic1Y=
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da h1:jVRUZPRs9sqyKlYHHzHjAqKN+6e/Vog6NpHYeNPJqOw=
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da/go.mod h1:BOm5fXUBFM+m9woLNBoxI9TaBXXhGNP50LX/TGIvGb4=
github.com/tailscale/xnet v0.0.0-20240729143630-8497ac4dab2e h1:zOGKqN5D5hHhiYUp091JqK7DPCqSARyUfduhGUY8Bek=
github.com/tailscale/xnet v0.0.0-20240729143630-8497ac4dab2e/go.mod h1:orPd6JZXXRyuDusYilywte7k094d7dycXXU5YnWsrwg=
github.com/tc-hib/winres v0.2.1 h1:YDE0FiP0VmtRaDn7+aaChp1KiF4owBiJa5l964l5ujA=
github.com/tc-hib/winres v0.2.1/go.mod h1:C/JaNhH3KBvhNKVbvdlDWkbMDO9H4fKKDaN7/07SSuk=
github.com/u-root/u-root v0.14.0 h1:Ka4T10EEML7dQ5XDvO9c3MBN8z4nuSnGjcd1jmU2ivg=
github.com/u-root/u-root v0.14.0/go.mod h1:hAyZorapJe4qzbLWlAkmSVCJGbfoU9Pu4jpJ1WMluqE=
github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701 h1:pyC9PaHYZFgEKFdlp3G8RaCKgVpHZnecvArXvPXcFkM=
github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701/go.mod h1:P3a5rG4X7tI17Nn3aOIAYr5HbIMukwXG0urG0WuL8OA=
github.com/vishvananda/netns v0.0.5 h1:DfiHV+j8bA32MFM7bfEunvT8IAqQ/NzSJHtcmW5zdEY=
github.com/vishvananda/netns v0.0.5/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
go4.org/mem v0.0.0-20240501181205-ae6ca9944745 h1:Tl++JLUCe4sxGu8cTpDzRLd3tN7US4hOxG5YpKCzkek=
go4.org/mem v0.0.0-20240501181205-ae6ca9944745/go.mod h1:reUoABIJ9ikfM5sgtSF3Wushcza7+WeD01VB9Lirh3g=
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba h1:0b9z3AuHCjxk0x/opv64kcgZLBseWJUpBw5I82+2U4M=
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba/go.mod h1:PLyyIXexvUFg3Owu6p/WfdlivPbZJsZdgWZlrGope/Y=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b h1:M2rDM6z3Fhozi9O7NWsxAkg/yqS/lQJ6PmkyIV3YP+o=
golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f h1:phY1HzDcf18Aq9A8KkmRtY9WvOFIxN8wgfvy6Zm1DV8=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.27.0 h1:C8gA4oWU/tKkdCfYT6T2u4faJu3MeNS5O8UPWlPF61w=
golang.org/x/image v0.27.0/go.mod h1:xbdrClrAUway1MUTEZDq9mz/UpRwYAkFFNUslZtcB+g=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20220817070843-5a390386f1f2/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 h1:B82qJJgjvYKsXS9jeunTOisW56dUokqW/FOteYJJ/yg=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus8eIuExIE=
golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8 h1:Zy8IV/+FMLxy6j6p87vk/vQGKcdnbprwjTxc8UiUtsA=
gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8/go.mod h1:QkHjoMIBaYtpVufgwv3keYAbln78mBoCuShZrPrer1Q=
honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0 h1:5SXjd4ET5dYijLaf0O3aOenC0Z4ZafIWSpjUzsQaNho=
honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0/go.mod h1:EPDDhEZqVHhWuPI5zPAsjU0U7v9xNIWjoOVyZ5ZcniQ=
howett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM=
howett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=
software.sslmate.com/src/go-pkcs12 v0.4.0 h1:H2g08FrTvSFKUj+D309j1DPfk5APnIdAQAB8aEykJ5k=
software.sslmate.com/src/go-pkcs12 v0.4.0/go.mod h1:Qiz0EyvDRJjjxGyUQa2cCNZn/wMyzrRJ/qcDXOQazLI=
tailscale.com v1.96.5 h1:gNkfA/KSZAl6jCH9cj8urq00HRWItDDTtGsyATI89jA=
tailscale.com v1.96.5/go.mod h1:/3lnZBYb2UEwnN0MNu2SDXUtT06AGd5k0s+OWx3WmcY=


@@ -0,0 +1,523 @@
package main
import (
"context"
"encoding/binary"
"encoding/json"
"errors"
"flag"
"fmt"
"io"
"log"
"net"
"net/http"
"net/netip"
"os"
"strconv"
"sync"
"time"
"github.com/tailscale/wireguard-go/tun"
"tailscale.com/client/local"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg"
"tailscale.com/tsnet"
)
type statusResponse struct {
BackendState string `json:"backend_state"`
AuthURL string `json:"auth_url,omitempty"`
Running bool `json:"running"`
NeedsLogin bool `json:"needs_login"`
TailnetName string `json:"tailnet_name,omitempty"`
MagicDNSSuffix string `json:"magic_dns_suffix,omitempty"`
SelfDNSName string `json:"self_dns_name,omitempty"`
TailscaleIPs []string `json:"tailscale_ips,omitempty"`
Health []string `json:"health,omitempty"`
Peers []peerSummary `json:"peers,omitempty"`
}
type peerSummary struct {
Name string `json:"name,omitempty"`
DNSName string `json:"dns_name,omitempty"`
TailscaleIPs []string `json:"tailscale_ips,omitempty"`
Online bool `json:"online"`
Active bool `json:"active"`
Relay string `json:"relay,omitempty"`
CurAddr string `json:"cur_addr,omitempty"`
LastSeenUnix int64 `json:"last_seen_unix,omitempty"`
}
type pingResponse struct {
Result *ipnstate.PingResult `json:"result,omitempty"`
}
type helperHello struct {
ListenAddr string `json:"listen_addr"`
PacketSocket string `json:"packet_socket,omitempty"`
}
type helperState struct {
mu sync.RWMutex
authURL string
}
func (s *helperState) authURLSnapshot() string {
s.mu.RLock()
defer s.mu.RUnlock()
return s.authURL
}
func (s *helperState) setAuthURL(url string) {
s.mu.Lock()
defer s.mu.Unlock()
s.authURL = url
}
func (s *helperState) clearAuthURL() {
s.setAuthURL("")
}
// chanTUN is a tun.Device backed by channels so another process can feed and
// consume raw IP packets while tsnet handles the Tailnet control/data plane.
type chanTUN struct {
Inbound chan []byte
Outbound chan []byte
closed chan struct{}
events chan tun.Event
}
func newChanTUN() *chanTUN {
t := &chanTUN{
Inbound: make(chan []byte, 1024),
Outbound: make(chan []byte, 1024),
closed: make(chan struct{}),
events: make(chan tun.Event, 1),
}
t.events <- tun.EventUp
return t
}
func (t *chanTUN) File() *os.File { return nil }
func (t *chanTUN) Close() error {
select {
case <-t.closed:
default:
close(t.closed)
close(t.Inbound)
}
return nil
}
func (t *chanTUN) Read(bufs [][]byte, sizes []int, offset int) (int, error) {
select {
case <-t.closed:
return 0, io.EOF
case pkt, ok := <-t.Outbound:
if !ok {
return 0, io.EOF
}
sizes[0] = copy(bufs[0][offset:], pkt)
return 1, nil
}
}
func (t *chanTUN) Write(bufs [][]byte, offset int) (int, error) {
for _, buf := range bufs {
pkt := buf[offset:]
if len(pkt) == 0 {
continue
}
select {
case <-t.closed:
return 0, errors.New("closed")
case t.Inbound <- append([]byte(nil), pkt...):
default:
// Drop the packet when the inbound queue is full rather than block the data plane.
}
}
return len(bufs), nil
}
func (t *chanTUN) MTU() (int, error) { return 1280, nil }
func (t *chanTUN) Name() (string, error) { return "burrow-tailnet", nil }
func (t *chanTUN) Events() <-chan tun.Event { return t.events }
func (t *chanTUN) BatchSize() int { return 1 }
func main() {
listen := flag.String("listen", "127.0.0.1:0", "local listen address")
stateDir := flag.String("state-dir", "", "persistent state directory")
hostname := flag.String("hostname", "burrow-apple", "tailnet hostname")
controlURL := flag.String("control-url", "", "optional control URL")
packetSocket := flag.String("packet-socket", "", "optional unix socket path for raw packet bridging")
udpEchoPort := flag.Int("udp-echo-port", 0, "optional tailnet UDP echo port")
flag.Parse()
if *stateDir == "" {
log.Fatal("--state-dir is required")
}
if err := os.MkdirAll(*stateDir, 0o755); err != nil {
log.Fatalf("create state dir: %v", err)
}
server := &tsnet.Server{
Dir: *stateDir,
Hostname: *hostname,
UserLogf: log.Printf,
}
var tunDevice *chanTUN
var packetListener net.Listener
if *packetSocket != "" {
_ = os.Remove(*packetSocket)
ln, err := net.Listen("unix", *packetSocket)
if err != nil {
log.Fatalf("packet listen: %v", err)
}
packetListener = ln
defer func() {
packetListener.Close()
_ = os.Remove(*packetSocket)
}()
tunDevice = newChanTUN()
server.Tun = tunDevice
}
if *controlURL != "" {
server.ControlURL = *controlURL
}
defer server.Close()
if err := server.Start(); err != nil {
log.Fatalf("start tsnet: %v", err)
}
localClient, err := server.LocalClient()
if err != nil {
log.Fatalf("local client: %v", err)
}
state := &helperState{}
ln, err := net.Listen("tcp", *listen)
if err != nil {
log.Fatalf("listen: %v", err)
}
defer ln.Close()
if packetListener != nil {
go servePacketBridge(packetListener, tunDevice)
}
if *udpEchoPort > 0 {
go serveUDPEcho(context.Background(), server, localClient, *udpEchoPort)
}
hello := helperHello{
ListenAddr: ln.Addr().String(),
}
if *packetSocket != "" {
hello.PacketSocket = *packetSocket
}
if err := json.NewEncoder(os.Stdout).Encode(hello); err != nil {
log.Fatalf("write hello: %v", err)
}
_ = os.Stdout.Sync()
mux := http.NewServeMux()
mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
status, err := snapshot(r.Context(), localClient, state)
if err != nil {
http.Error(w, err.Error(), http.StatusBadGateway)
return
}
w.Header().Set("content-type", "application/json")
_ = json.NewEncoder(w).Encode(status)
})
mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
ip := r.URL.Query().Get("ip")
if ip == "" {
http.Error(w, "missing ip", http.StatusBadRequest)
return
}
target, err := netip.ParseAddr(ip)
if err != nil {
http.Error(w, fmt.Sprintf("invalid ip: %v", err), http.StatusBadRequest)
return
}
pingType := tailcfg.PingTSMP
switch r.URL.Query().Get("type") {
case "", "tsmp", "TSMP":
pingType = tailcfg.PingTSMP
case "icmp", "ICMP":
pingType = tailcfg.PingICMP
case "peerapi":
pingType = tailcfg.PingPeerAPI
default:
http.Error(w, "unsupported ping type", http.StatusBadRequest)
return
}
result, err := localClient.Ping(r.Context(), target, pingType)
if err != nil {
http.Error(w, err.Error(), http.StatusBadGateway)
return
}
w.Header().Set("content-type", "application/json")
_ = json.NewEncoder(w).Encode(&pingResponse{Result: result})
})
mux.HandleFunc("/shutdown", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNoContent)
go func() {
_ = server.Close()
time.Sleep(100 * time.Millisecond)
os.Exit(0)
}()
})
httpServer := &http.Server{
Handler: mux,
}
log.Fatal(httpServer.Serve(ln))
}
func servePacketBridge(listener net.Listener, device *chanTUN) {
for {
conn, err := listener.Accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
return
}
log.Printf("packet accept: %v", err)
continue
}
log.Printf("packet bridge connected")
if err := bridgePacketConn(conn, device); err != nil && !errors.Is(err, io.EOF) {
log.Printf("packet bridge error: %v", err)
}
_ = conn.Close()
log.Printf("packet bridge disconnected")
}
}
func bridgePacketConn(conn net.Conn, device *chanTUN) error {
errCh := make(chan error, 2)
go func() {
for {
pkt, err := readFrame(conn)
if err != nil {
errCh <- err
return
}
select {
case <-device.closed:
errCh <- io.EOF
return
case device.Outbound <- pkt:
}
}
}()
go func() {
for {
select {
case <-device.closed:
errCh <- io.EOF
return
case pkt, ok := <-device.Inbound:
if !ok {
errCh <- io.EOF
return
}
if err := writeFrame(conn, pkt); err != nil {
errCh <- err
return
}
}
}
}()
return <-errCh
}
func readFrame(r io.Reader) ([]byte, error) {
var size [4]byte
if _, err := io.ReadFull(r, size[:]); err != nil {
return nil, err
}
length := binary.BigEndian.Uint32(size[:])
if length == 0 {
return []byte{}, nil
}
packet := make([]byte, length)
if _, err := io.ReadFull(r, packet); err != nil {
return nil, err
}
return packet, nil
}
func writeFrame(w io.Writer, packet []byte) error {
var size [4]byte
binary.BigEndian.PutUint32(size[:], uint32(len(packet)))
if _, err := w.Write(size[:]); err != nil {
return err
}
if len(packet) == 0 {
return nil
}
_, err := w.Write(packet)
return err
}
func snapshot(ctx context.Context, localClient *local.Client, state *helperState) (*statusResponse, error) {
status, err := localClient.Status(ctx)
if err != nil {
return nil, err
}
authURL := status.AuthURL
if authURL == "" {
authURL = state.authURLSnapshot()
}
if status.BackendState == ipn.Running.String() {
state.clearAuthURL()
authURL = ""
} else if (status.BackendState == ipn.NeedsLogin.String() || status.BackendState == ipn.NoState.String()) && authURL == "" {
authURL, err = awaitAuthURL(ctx, localClient, state)
if err != nil {
return nil, err
}
}
response := &statusResponse{
BackendState: status.BackendState,
AuthURL: authURL,
Running: status.BackendState == ipn.Running.String(),
NeedsLogin: status.BackendState == ipn.NeedsLogin.String(),
Health: append([]string(nil), status.Health...),
}
if status.CurrentTailnet != nil {
response.TailnetName = status.CurrentTailnet.Name
response.MagicDNSSuffix = status.CurrentTailnet.MagicDNSSuffix
}
if status.Self != nil {
response.SelfDNSName = status.Self.DNSName
}
for _, ip := range status.TailscaleIPs {
response.TailscaleIPs = append(response.TailscaleIPs, ip.String())
}
for _, key := range status.Peers() {
peer := status.Peer[key]
if peer == nil {
continue
}
summary := peerSummary{
Name: peer.HostName,
DNSName: peer.DNSName,
Online: peer.Online,
Active: peer.Active,
Relay: peer.Relay,
CurAddr: peer.CurAddr,
LastSeenUnix: peer.LastSeen.Unix(),
}
for _, ip := range peer.TailscaleIPs {
summary.TailscaleIPs = append(summary.TailscaleIPs, ip.String())
}
response.Peers = append(response.Peers, summary)
}
return response, nil
}
func serveUDPEcho(ctx context.Context, server *tsnet.Server, localClient *local.Client, port int) {
ip, err := awaitTailscaleIP(ctx, localClient)
if err != nil {
log.Printf("udp echo setup failed: %v", err)
return
}
listenAddr := net.JoinHostPort(ip.String(), strconv.Itoa(port))
pc, err := server.ListenPacket("udp", listenAddr)
if err != nil {
log.Printf("udp echo listen failed on %s: %v", listenAddr, err)
return
}
defer pc.Close()
log.Printf("udp echo listening on %s", pc.LocalAddr())
buf := make([]byte, 64<<10)
for {
n, addr, err := pc.ReadFrom(buf)
if err != nil {
if errors.Is(err, net.ErrClosed) || errors.Is(err, io.EOF) {
return
}
log.Printf("udp echo read failed: %v", err)
return
}
if _, err := pc.WriteTo(buf[:n], addr); err != nil {
log.Printf("udp echo write failed: %v", err)
return
}
}
}
func awaitTailscaleIP(ctx context.Context, localClient *local.Client) (netip.Addr, error) {
for range 60 {
status, err := localClient.StatusWithoutPeers(ctx)
if err == nil {
for _, ip := range status.TailscaleIPs {
if ip.Is4() {
return ip, nil
}
}
for _, ip := range status.TailscaleIPs {
if ip.Is6() {
return ip, nil
}
}
}
select {
case <-ctx.Done():
return netip.Addr{}, ctx.Err()
case <-time.After(250 * time.Millisecond):
}
}
return netip.Addr{}, errors.New("timed out waiting for tailscale IP")
}
func awaitAuthURL(ctx context.Context, localClient *local.Client, state *helperState) (string, error) {
watchCtx, cancel := context.WithTimeout(ctx, 8*time.Second)
defer cancel()
watcher, err := localClient.WatchIPNBus(watchCtx, ipn.NotifyInitialState)
if err != nil {
return "", err
}
defer watcher.Close()
if err := localClient.StartLoginInteractive(ctx); err != nil {
return "", err
}
for {
notify, err := watcher.Next()
if err != nil {
if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
return state.authURLSnapshot(), nil
}
return "", err
}
if notify.BrowseToURL != nil && *notify.BrowseToURL != "" {
state.setAuthURL(*notify.BrowseToURL)
return *notify.BrowseToURL, nil
}
if notify.State != nil && *notify.State == ipn.Running {
state.clearAuthURL()
return "", nil
}
}
}
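The packet bridge frames each raw IP packet with a 4-byte big-endian length prefix, and `readFrame`/`writeFrame` mirror each other on either side of the unix socket. A minimal client-side sketch of the same framing (Python here purely for illustration; the helper itself stays in Go, and the sample payload is hypothetical):

```python
import io
import struct

def write_frame(w, packet: bytes) -> None:
    # 4-byte big-endian length prefix, then the raw packet bytes.
    w.write(struct.pack(">I", len(packet)))
    if packet:
        w.write(packet)

def read_frame(r) -> bytes:
    header = r.read(4)
    if len(header) < 4:
        raise EOFError("short frame header")
    (length,) = struct.unpack(">I", header)
    if length == 0:
        return b""
    packet = r.read(length)
    if len(packet) < length:
        raise EOFError("short frame body")
    return packet

# Round-trip a frame through an in-memory buffer.
buf = io.BytesIO()
write_frame(buf, b"example-packet")
buf.seek(0)
echoed = read_frame(buf)
```

Over the real bridge the same byte layout is written to the unix socket advertised in the hello message.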

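From the client's side the contract is small: read one JSON hello line from the helper's stdout, then poll `/status` and act on `running`, `needs_login`, and `auth_url`. A sketch of that decision logic (Python for illustration; field names match `helperHello` and `statusResponse`, while the sample address and URL are hypothetical):

```python
import json

def parse_hello(line: str) -> dict:
    """Decode the single hello line the helper prints before serving HTTP."""
    hello = json.loads(line)
    if "listen_addr" not in hello:
        raise ValueError("hello missing listen_addr")
    return hello

def login_action(status: dict) -> str:
    """Decide the next step from a /status response."""
    if status.get("running"):
        return "connected"
    if status.get("needs_login") and status.get("auth_url"):
        return "open:" + status["auth_url"]
    return "poll"  # backend still starting; try /status again shortly

hello = parse_hello('{"listen_addr": "127.0.0.1:49152"}')
step = login_action({"backend_state": "NeedsLogin", "needs_login": True,
                     "auth_url": "https://login.tailscale.com/a/example"})
```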
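The optional UDP echo listener simply reflects each datagram to its sender, giving a client a one-round-trip data-plane probe. A loopback sketch of that contract (a plain socket stands in for tsnet's `ListenPacket`, which needs a running tailnet):

```python
import socket
import threading

def echo_once(server: socket.socket) -> None:
    # Mirror the helper's loop body: read one datagram, send it straight back.
    data, addr = server.recvfrom(64 * 1024)
    server.sendto(data, addr)

# Stand-in echo server bound to loopback instead of a Tailscale IP.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"probe", server.getsockname())
reply, _ = client.recvfrom(64 * 1024)
client.close()
server.close()
```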

@@ -11,6 +11,8 @@ relm4 = { version = "0.6", features = ["libadwaita", "gnome_44"]}
burrow = { version = "*", path = "../burrow/" }
tokio = { version = "1.35.0", features = ["time", "sync"] }
gettext-rs = { version = "0.7.0", features = ["gettext-system"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
[build-dependencies]
anyhow = "1.0"


@@ -0,0 +1,139 @@
use anyhow::{Context, Result};
use serde::{Deserialize, Serialize};
use std::{
path::PathBuf,
time::{SystemTime, UNIX_EPOCH},
};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AccountRecord {
pub id: String,
pub kind: AccountKind,
pub title: String,
pub authority: Option<String>,
pub account: String,
pub identity: String,
pub hostname: Option<String>,
pub tailnet: Option<String>,
pub note: Option<String>,
pub created_at: u64,
pub updated_at: u64,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum AccountKind {
WireGuard,
Tor,
Tailnet,
}
impl AccountKind {
pub fn title(self) -> &'static str {
match self {
Self::WireGuard => "WireGuard",
Self::Tor => "Tor",
Self::Tailnet => "Tailnet",
}
}
fn sort_rank(self) -> u8 {
match self {
Self::Tailnet => 0,
Self::Tor => 1,
Self::WireGuard => 2,
}
}
}
pub fn load() -> Result<Vec<AccountRecord>> {
let path = storage_path()?;
if !path.exists() {
return Ok(Vec::new());
}
let data =
std::fs::read(&path).with_context(|| format!("failed to read {}", path.display()))?;
serde_json::from_slice(&data).with_context(|| format!("failed to parse {}", path.display()))
}
pub fn upsert(mut record: AccountRecord) -> Result<Vec<AccountRecord>> {
let mut accounts = load()?;
let now = timestamp();
record.updated_at = now;
if record.created_at == 0 {
record.created_at = now;
}
if let Some(index) = accounts.iter().position(|account| account.id == record.id) {
accounts[index] = record;
} else {
accounts.push(record);
}
accounts.sort_by(|lhs, rhs| {
lhs.kind
.sort_rank()
.cmp(&rhs.kind.sort_rank())
.then_with(|| lhs.title.to_lowercase().cmp(&rhs.title.to_lowercase()))
});
persist(&accounts)?;
Ok(accounts)
}
pub fn new_record(
kind: AccountKind,
title: String,
authority: Option<String>,
account: String,
identity: String,
hostname: Option<String>,
tailnet: Option<String>,
note: Option<String>,
) -> AccountRecord {
let now = timestamp();
AccountRecord {
id: format!("{}-{now}", kind.title().to_ascii_lowercase()),
kind,
title,
authority,
account,
identity,
hostname,
tailnet,
note,
created_at: now,
updated_at: now,
}
}
fn persist(accounts: &[AccountRecord]) -> Result<()> {
let path = storage_path()?;
if let Some(parent) = path.parent() {
std::fs::create_dir_all(parent)
.with_context(|| format!("failed to create {}", parent.display()))?;
}
let data = serde_json::to_vec_pretty(accounts).context("failed to encode account store")?;
std::fs::write(&path, data).with_context(|| format!("failed to write {}", path.display()))
}
fn storage_path() -> Result<PathBuf> {
if let Some(data_home) = std::env::var_os("XDG_DATA_HOME") {
return Ok(PathBuf::from(data_home)
.join("burrow")
.join("accounts.json"));
}
if let Some(home) = std::env::var_os("HOME") {
return Ok(PathBuf::from(home)
.join(".local")
.join("share")
.join("burrow")
.join("accounts.json"));
}
Ok(std::env::temp_dir().join("burrow-accounts.json"))
}
fn timestamp() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.map(|duration| duration.as_secs())
.unwrap_or_default()
}
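`upsert` keeps the store ordered by account kind, then case-insensitively by title. The same ordering can be sketched as follows (Python for illustration; the ranks are copied from `AccountKind::sort_rank`, and the sample account titles are hypothetical):

```python
KIND_RANK = {"tailnet": 0, "tor": 1, "wireguard": 2}

def sort_key(account: dict):
    # Tailnet first, then Tor, then WireGuard; ties broken by lowercased title.
    return (KIND_RANK[account["kind"]], account["title"].lower())

accounts = [
    {"kind": "wireguard", "title": "Office"},
    {"kind": "tailnet", "title": "personal"},
    {"kind": "tailnet", "title": "Homelab"},
]
accounts.sort(key=sort_key)
```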

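`storage_path` resolves `XDG_DATA_HOME` first, then falls back to `$HOME/.local/share`, then to the system temp directory. That fallback chain, sketched in Python with the environment passed in explicitly so it stays testable:

```python
import tempfile
from pathlib import Path

def storage_path(env: dict) -> Path:
    # Resolution order mirrors the Rust store: XDG_DATA_HOME, then
    # ~/.local/share, then the system temp directory as a last resort.
    if env.get("XDG_DATA_HOME"):
        return Path(env["XDG_DATA_HOME"]) / "burrow" / "accounts.json"
    if env.get("HOME"):
        return Path(env["HOME"]) / ".local" / "share" / "burrow" / "accounts.json"
    return Path(tempfile.gettempdir()) / "burrow-accounts.json"
```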

@@ -1,24 +1,19 @@
use super::*;
use anyhow::Context;
use std::time::Duration;
const RECONNECT_POLL_TIME: Duration = Duration::from_secs(5);
pub struct App {
daemon_client: Arc<Mutex<Option<DaemonClient>>>,
settings_screen: Controller<settings_screen::SettingsScreen>,
switch_screen: AsyncController<switch_screen::SwitchScreen>,
_home_screen: AsyncController<home_screen::HomeScreen>,
}
#[derive(Debug)]
pub enum AppMsg {
None,
PostInit,
}
impl App {
pub fn run() {
let app = RelmApp::new(config::ID);
relm4::set_global_css(APP_CSS);
Self::setup_gresources().unwrap();
Self::setup_i18n().unwrap();
@@ -49,7 +44,7 @@ impl AsyncComponent for App {
view! {
adw::Window {
set_title: Some("Burrow"),
set_default_size: (640, 480),
set_default_size: (900, 760),
}
}
@@ -58,100 +53,84 @@
root: Self::Root,
sender: AsyncComponentSender<Self>,
) -> AsyncComponentParts<Self> {
let daemon_client = Arc::new(Mutex::new(DaemonClient::new().await.ok()));
let switch_screen = switch_screen::SwitchScreen::builder()
.launch(switch_screen::SwitchScreenInit {
daemon_client: Arc::clone(&daemon_client),
})
.forward(sender.input_sender(), |_| AppMsg::None);
let settings_screen = settings_screen::SettingsScreen::builder()
.launch(settings_screen::SettingsScreenInit {
daemon_client: Arc::clone(&daemon_client),
})
let home_screen = home_screen::HomeScreen::builder()
.launch(())
.forward(sender.input_sender(), |_| AppMsg::None);
let widgets = view_output!();
let view_stack = adw::ViewStack::new();
view_stack.add_titled(switch_screen.widget(), None, "Switch");
view_stack.add_titled(settings_screen.widget(), None, "Settings");
let view_switcher_bar = adw::ViewSwitcherBar::builder().stack(&view_stack).build();
view_switcher_bar.set_reveal(true);
// When libadwaita 1.4 support becomes more widely available, this approach is more appropriate:
//
// let toolbar = adw::ToolbarView::new();
// toolbar.add_top_bar(
// &adw::HeaderBar::builder()
// .title_widget(&gtk::Label::new(Some("Burrow")))
// .build(),
// );
// toolbar.add_bottom_bar(&view_switcher_bar);
// toolbar.set_content(Some(&view_stack));
// root.set_content(Some(&toolbar));
let content = gtk::Box::new(gtk::Orientation::Vertical, 0);
content.append(
&adw::HeaderBar::builder()
.title_widget(&gtk::Label::new(Some("Burrow")))
.build(),
);
content.append(&view_stack);
content.append(&view_switcher_bar);
content.append(home_screen.widget());
root.set_content(Some(&content));
sender.input(AppMsg::PostInit);
let model = App {
daemon_client,
switch_screen,
settings_screen,
};
let model = App { _home_screen: home_screen };
AsyncComponentParts { model, widgets }
}
async fn update(
&mut self,
msg: Self::Input,
_sender: AsyncComponentSender<Self>,
_root: &Self::Root,
) {
match msg {
AppMsg::PostInit => loop {
tokio::time::sleep(RECONNECT_POLL_TIME).await;
let mut daemon_client = self.daemon_client.lock().await;
let mut disconnected_daemon_client = false;
if let Some(daemon_client) = daemon_client.as_mut() {
if let Err(_e) = daemon_client.send_command(DaemonCommand::ServerInfo).await {
disconnected_daemon_client = true;
self.switch_screen
.emit(switch_screen::SwitchScreenMsg::DaemonDisconnect);
self.settings_screen
.emit(settings_screen::SettingsScreenMsg::DaemonStateChange);
}
}
if disconnected_daemon_client || daemon_client.is_none() {
match DaemonClient::new().await {
Ok(new_daemon_client) => {
*daemon_client = Some(new_daemon_client);
self.switch_screen
.emit(switch_screen::SwitchScreenMsg::DaemonReconnect);
self.settings_screen
.emit(settings_screen::SettingsScreenMsg::DaemonStateChange);
}
Err(_e) => {
// TODO: surface the reconnect failure in the UI
}
}
}
},
AppMsg::None => {}
}
}
}
}
const APP_CSS: &str = r#"
.empty-state {
border-radius: 18px;
padding: 22px;
background: alpha(@card_bg_color, 0.72);
}
.summary-card {
border-radius: 18px;
padding: 14px;
background: alpha(@card_bg_color, 0.72);
}
.network-card {
border-radius: 10px;
padding: 16px;
box-shadow: 0 2px 6px alpha(black, 0.14);
}
.wireguard-card {
background: linear-gradient(135deg, #3277d8, #174ea6);
}
.tailnet-card {
background: linear-gradient(135deg, #31b891, #147d69);
}
.network-card-kind,
.network-card-title,
.network-card-detail {
color: white;
}
.network-card-kind {
opacity: 0.86;
font-weight: 700;
}
.network-card-title {
font-size: 1.22em;
font-weight: 700;
}
.network-card-detail {
opacity: 0.92;
font-family: monospace;
}
"#;

File diff suppressed because it is too large

View file

@@ -1,6 +1,6 @@
use super::*;
use crate::daemon_api;
use adw::prelude::*;
use burrow::{DaemonClient, DaemonCommand, DaemonResponseData};
use gtk::Align;
use relm4::{
component::{
@@ -9,13 +9,9 @@ use relm4::{
},
prelude::*,
};
use std::sync::Arc;
use tokio::sync::Mutex;
mod app;
mod settings;
mod settings_screen;
mod switch_screen;
mod home_screen;
pub use app::*;
pub use settings::{DaemonGroupMsg, DiagGroupMsg};
pub use home_screen::{HomeScreen, HomeScreenMsg};

View file

@@ -0,0 +1,420 @@
use anyhow::{anyhow, Context, Result};
use burrow::{
control::{TailnetConfig, TailnetProvider},
grpc_defs::{
Empty, Network, NetworkType, State, TailnetDiscoverRequest, TailnetLoginCancelRequest,
TailnetLoginStartRequest, TailnetLoginStatusRequest, TailnetProbeRequest,
},
BurrowClient,
};
use std::{path::PathBuf, sync::OnceLock};
use tokio::time::{timeout, Duration};
const RPC_TIMEOUT: Duration = Duration::from_secs(3);
const MANAGED_TAILSCALE_AUTHORITY: &str = "https://controlplane.tailscale.com";
static EMBEDDED_DAEMON_STARTED: OnceLock<()> = OnceLock::new();
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TunnelState {
Running,
Stopped,
}
#[derive(Debug, Clone)]
pub struct NetworkSummary {
pub id: i32,
pub title: String,
pub detail: String,
}
#[derive(Debug, Clone)]
pub struct TailnetDiscovery {
pub authority: String,
pub managed: bool,
pub oidc_issuer: Option<String>,
}
#[derive(Debug, Clone)]
pub struct TailnetProbe {
pub summary: String,
pub detail: Option<String>,
pub status_code: i32,
}
#[derive(Debug, Clone)]
pub struct TailnetLoginStatus {
pub session_id: String,
pub backend_state: String,
pub auth_url: Option<String>,
pub running: bool,
pub needs_login: bool,
pub tailnet_name: Option<String>,
pub self_dns_name: Option<String>,
pub tailnet_ips: Vec<String>,
pub health: Vec<String>,
}
pub fn default_tailnet_authority() -> &'static str {
MANAGED_TAILSCALE_AUTHORITY
}
pub fn configure_client_paths() -> Result<()> {
if std::env::var_os("BURROW_SOCKET_PATH").is_none() {
std::env::set_var("BURROW_SOCKET_PATH", default_socket_path()?);
}
Ok(())
}
pub async fn ensure_daemon() -> Result<()> {
configure_client_paths()?;
if daemon_available().await {
return Ok(());
}
let socket_path = socket_path()?;
let db_path = database_path()?;
ensure_parent(&socket_path)?;
ensure_parent(&db_path)?;
if EMBEDDED_DAEMON_STARTED.get().is_none() {
tokio::task::spawn_blocking(move || {
burrow::spawn_in_process_with_paths(Some(socket_path), Some(db_path));
})
.await
.context("failed to join embedded daemon startup")?;
let _ = EMBEDDED_DAEMON_STARTED.set(());
}
tunnel_state()
.await
.map(|_| ())
.context("Burrow daemon started but did not accept tunnel status RPCs")
}
pub fn infer_tailnet_provider(authority: &str) -> TailnetProvider {
let normalized = authority.trim().trim_end_matches('/').to_ascii_lowercase();
if normalized == "controlplane.tailscale.com"
|| normalized == "http://controlplane.tailscale.com"
|| normalized == MANAGED_TAILSCALE_AUTHORITY
{
TailnetProvider::Tailscale
} else {
TailnetProvider::Headscale
}
}
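The authority check above can be exercised in isolation. Below is a minimal, self-contained sketch of the same normalization rule; the `Provider` enum and `infer_provider` name are illustrative stand-ins for burrow's `TailnetProvider` and `infer_tailnet_provider`:

```rust
// Stand-in for burrow's TailnetProvider enum.
#[derive(Debug, PartialEq)]
enum Provider {
    Tailscale,
    Headscale,
}

// Same rule as infer_tailnet_provider: trim, drop trailing slashes,
// lowercase, then route known Tailscale control-plane spellings to
// Tailscale and everything else to Headscale.
fn infer_provider(authority: &str) -> Provider {
    let normalized = authority.trim().trim_end_matches('/').to_ascii_lowercase();
    if normalized == "controlplane.tailscale.com"
        || normalized == "http://controlplane.tailscale.com"
        || normalized == "https://controlplane.tailscale.com"
    {
        Provider::Tailscale
    } else {
        Provider::Headscale
    }
}
```

Note that any self-hosted authority, even one containing "tailscale" in its hostname, falls through to the Headscale arm.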
pub async fn daemon_available() -> bool {
tunnel_state().await.is_ok()
}
fn socket_path() -> Result<PathBuf> {
if let Some(path) = std::env::var_os("BURROW_SOCKET_PATH") {
return Ok(PathBuf::from(path));
}
default_socket_path()
}
fn default_socket_path() -> Result<PathBuf> {
if let Some(runtime_dir) = std::env::var_os("XDG_RUNTIME_DIR") {
return Ok(PathBuf::from(runtime_dir).join("burrow.sock"));
}
let uid = std::env::var("UID").unwrap_or_else(|_| "1000".to_owned());
Ok(PathBuf::from(format!("/tmp/burrow-{uid}.sock")))
}
fn database_path() -> Result<PathBuf> {
if let Some(path) = std::env::var_os("BURROW_DB_PATH") {
return Ok(PathBuf::from(path));
}
if let Some(data_home) = std::env::var_os("XDG_DATA_HOME") {
return Ok(PathBuf::from(data_home).join("burrow").join("burrow.db"));
}
if let Some(home) = std::env::var_os("HOME") {
return Ok(PathBuf::from(home)
.join(".local")
.join("share")
.join("burrow")
.join("burrow.db"));
}
Ok(std::env::temp_dir().join("burrow.db"))
}
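The fallback chain in `database_path` (explicit override, then `XDG_DATA_HOME`, then `~/.local/share`, then the temp dir) can be restated as a pure function over optional environment values, which makes the precedence easy to verify without touching the process environment. `resolve_db_path` is an illustrative name, not part of the burrow API:

```rust
use std::path::PathBuf;

// Pure restatement of the database_path precedence rules.
fn resolve_db_path(
    explicit: Option<&str>,      // BURROW_DB_PATH
    xdg_data_home: Option<&str>, // XDG_DATA_HOME
    home: Option<&str>,          // HOME
) -> PathBuf {
    if let Some(path) = explicit {
        return PathBuf::from(path);
    }
    if let Some(data_home) = xdg_data_home {
        return PathBuf::from(data_home).join("burrow").join("burrow.db");
    }
    if let Some(home) = home {
        return PathBuf::from(home)
            .join(".local")
            .join("share")
            .join("burrow")
            .join("burrow.db");
    }
    std::env::temp_dir().join("burrow.db")
}
```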
fn ensure_parent(path: &PathBuf) -> Result<()> {
if let Some(parent) = path.parent() {
std::fs::create_dir_all(parent)
.with_context(|| format!("failed to create {}", parent.display()))?;
}
Ok(())
}
pub async fn tunnel_state() -> Result<TunnelState> {
let mut client = BurrowClient::from_uds().await?;
let mut stream = timeout(RPC_TIMEOUT, client.tunnel_client.tunnel_status(Empty {}))
.await
.context("timed out connecting to Burrow daemon")??
.into_inner();
let status = timeout(RPC_TIMEOUT, stream.message())
.await
.context("timed out reading Burrow tunnel status")??
.context("Burrow daemon ended the status stream without a state")?;
Ok(match status.state() {
State::Running => TunnelState::Running,
State::Stopped => TunnelState::Stopped,
})
}
pub async fn start_tunnel() -> Result<()> {
let mut client = BurrowClient::from_uds().await?;
timeout(RPC_TIMEOUT, client.tunnel_client.tunnel_start(Empty {}))
.await
.context("timed out starting Burrow tunnel")??;
Ok(())
}
pub async fn stop_tunnel() -> Result<()> {
let mut client = BurrowClient::from_uds().await?;
timeout(RPC_TIMEOUT, client.tunnel_client.tunnel_stop(Empty {}))
.await
.context("timed out stopping Burrow tunnel")??;
Ok(())
}
pub async fn list_networks() -> Result<Vec<NetworkSummary>> {
let mut client = BurrowClient::from_uds().await?;
let mut stream = timeout(RPC_TIMEOUT, client.networks_client.network_list(Empty {}))
.await
.context("timed out connecting to Burrow network list")??
.into_inner();
let response = timeout(RPC_TIMEOUT, stream.message())
.await
.context("timed out reading Burrow network list")??
.context("Burrow daemon ended the network stream without a snapshot")?;
Ok(response.network.iter().map(summarize_network).collect())
}
pub async fn add_wireguard(config: String) -> Result<i32> {
add_network(NetworkType::WireGuard, config.into_bytes()).await
}
pub async fn add_tailnet(
authority: String,
account: String,
identity: String,
hostname: Option<String>,
tailnet: Option<String>,
) -> Result<i32> {
let provider = infer_tailnet_provider(&authority);
let config = TailnetConfig {
provider,
authority: Some(authority),
account: Some(account),
identity: Some(identity),
hostname,
tailnet,
};
let payload = serde_json::to_vec_pretty(&config)?;
add_network(NetworkType::Tailnet, payload).await
}
pub async fn discover_tailnet(email: String) -> Result<TailnetDiscovery> {
let mut client = BurrowClient::from_uds().await?;
let response = timeout(
RPC_TIMEOUT,
client
.tailnet_client
.discover(TailnetDiscoverRequest { email }),
)
.await
.context("timed out discovering Tailnet authority")??
.into_inner();
Ok(TailnetDiscovery {
authority: response.authority,
managed: response.managed,
oidc_issuer: optional(response.oidc_issuer),
})
}
pub async fn probe_tailnet(authority: String) -> Result<TailnetProbe> {
let mut client = BurrowClient::from_uds().await?;
let response = timeout(
RPC_TIMEOUT,
client
.tailnet_client
.probe(TailnetProbeRequest { authority }),
)
.await
.context("timed out probing Tailnet authority")??
.into_inner();
Ok(TailnetProbe {
summary: response.summary,
detail: optional(response.detail),
status_code: response.status_code,
})
}
pub async fn start_tailnet_login(
authority: String,
account_name: String,
identity_name: String,
hostname: Option<String>,
) -> Result<TailnetLoginStatus> {
let mut client = BurrowClient::from_uds().await?;
let response = timeout(
RPC_TIMEOUT,
client.tailnet_client.login_start(TailnetLoginStartRequest {
account_name,
identity_name,
hostname: hostname.unwrap_or_default(),
authority,
}),
)
.await
.context("timed out starting Tailnet sign-in")??
.into_inner();
Ok(decode_tailnet_status(response))
}
pub async fn tailnet_login_status(session_id: String) -> Result<TailnetLoginStatus> {
let mut client = BurrowClient::from_uds().await?;
let response = timeout(
RPC_TIMEOUT,
client
.tailnet_client
.login_status(TailnetLoginStatusRequest { session_id }),
)
.await
.context("timed out reading Tailnet sign-in status")??
.into_inner();
Ok(decode_tailnet_status(response))
}
pub async fn cancel_tailnet_login(session_id: String) -> Result<()> {
let mut client = BurrowClient::from_uds().await?;
timeout(
RPC_TIMEOUT,
client
.tailnet_client
.login_cancel(TailnetLoginCancelRequest { session_id }),
)
.await
.context("timed out cancelling Tailnet sign-in")??;
Ok(())
}
async fn add_network(network_type: NetworkType, payload: Vec<u8>) -> Result<i32> {
let id = next_network_id().await?;
let mut client = BurrowClient::from_uds().await?;
timeout(
RPC_TIMEOUT,
client.networks_client.network_add(Network {
id,
r#type: network_type.into(),
payload,
}),
)
.await
.context("timed out saving network to Burrow daemon")??;
Ok(id)
}
async fn next_network_id() -> Result<i32> {
let networks = list_networks().await?;
Ok(networks.iter().map(|network| network.id).max().unwrap_or(0) + 1)
}
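`next_network_id` allocates ids with a max-plus-one rule, so ids of deleted networks are never reused. Isolated as a standalone sketch (`next_id` is a hypothetical helper name):

```rust
// Max-plus-one id allocation: an empty list yields 1, and gaps left by
// deleted networks are never refilled.
fn next_id(existing: &[i32]) -> i32 {
    existing.iter().copied().max().unwrap_or(0) + 1
}
```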
fn summarize_network(network: &Network) -> NetworkSummary {
match network.r#type() {
NetworkType::WireGuard => summarize_wireguard(network),
NetworkType::Tailnet => summarize_tailnet(network),
}
}
fn summarize_wireguard(network: &Network) -> NetworkSummary {
let payload = String::from_utf8_lossy(&network.payload);
let detail = payload
.lines()
.map(str::trim)
.find(|line| !line.is_empty() && !line.starts_with('['))
.unwrap_or("Stored WireGuard configuration")
.to_owned();
NetworkSummary {
id: network.id,
title: format!("WireGuard {}", network.id),
detail,
}
}
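The WireGuard summary picks, as its detail line, the first trimmed line of the payload that is neither empty nor a section header like `[Interface]`. The heuristic isolated for clarity (`wireguard_detail` is an illustrative name):

```rust
// First non-empty, non-section line of an INI-style WireGuard config,
// falling back to a fixed label when nothing qualifies.
fn wireguard_detail(payload: &str) -> String {
    payload
        .lines()
        .map(str::trim)
        .find(|line| !line.is_empty() && !line.starts_with('['))
        .unwrap_or("Stored WireGuard configuration")
        .to_owned()
}
```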
fn summarize_tailnet(network: &Network) -> NetworkSummary {
match TailnetConfig::from_slice(&network.payload) {
Ok(config) => {
let title = config
.tailnet
.clone()
.or(config.hostname.clone())
.unwrap_or_else(|| "Tailnet".to_owned());
let authority = config
.authority
.unwrap_or_else(|| "default authority".to_owned());
let account = config.account.unwrap_or_else(|| "default".to_owned());
NetworkSummary {
id: network.id,
title,
detail: format!("{authority} - account {account}"),
}
}
Err(error) => NetworkSummary {
id: network.id,
title: "Tailnet".to_owned(),
detail: format!("Unable to read Tailnet payload: {error}"),
},
}
}
fn decode_tailnet_status(
response: burrow::grpc_defs::TailnetLoginStatusResponse,
) -> TailnetLoginStatus {
TailnetLoginStatus {
session_id: response.session_id,
backend_state: response.backend_state,
auth_url: optional(response.auth_url),
running: response.running,
needs_login: response.needs_login,
tailnet_name: optional(response.tailnet_name),
self_dns_name: optional(response.self_dns_name),
tailnet_ips: response.tailnet_ips,
health: response.health,
}
}
fn optional(value: String) -> Option<String> {
let trimmed = value.trim();
if trimmed.is_empty() {
None
} else {
Some(trimmed.to_owned())
}
}
pub fn normalized(value: &str, fallback: &str) -> String {
let trimmed = value.trim();
if trimmed.is_empty() {
fallback.to_owned()
} else {
trimmed.to_owned()
}
}
pub fn normalized_optional(value: &str) -> Option<String> {
let trimmed = value.trim();
if trimmed.is_empty() {
None
} else {
Some(trimmed.to_owned())
}
}
pub fn require_value(value: &str, label: &str) -> Result<String> {
normalized_optional(value).ok_or_else(|| anyhow!("{label} is required"))
}
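Taken together, the three helpers above implement trim-then-fallback handling for form input: `normalized` substitutes a default, `normalized_optional` maps blank input to `None`, and `require_value` turns blank input into an error. A minimal std-only restatement of the same behavior (using a plain `String` error in place of `anyhow::Error`):

```rust
// Trim; fall back to a default when the input is blank.
fn normalized(value: &str, fallback: &str) -> String {
    let trimmed = value.trim();
    if trimmed.is_empty() {
        fallback.to_owned()
    } else {
        trimmed.to_owned()
    }
}

// Trim; map blank input to None.
fn normalized_optional(value: &str) -> Option<String> {
    let trimmed = value.trim();
    if trimmed.is_empty() {
        None
    } else {
        Some(trimmed.to_owned())
    }
}

// Trim; treat blank input as a labeled error.
fn require_value(value: &str, label: &str) -> Result<String, String> {
    normalized_optional(value).ok_or_else(|| format!("{label} is required"))
}
```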

View file

@@ -1,11 +1,15 @@
use anyhow::Result;
pub mod components;
mod diag;
mod account_store;
mod daemon_api;
// Generated using meson
mod config;
fn main() {
if let Err(error) = daemon_api::configure_client_paths() {
eprintln!("failed to configure Burrow daemon paths: {error}");
}
components::App::run();
}

View file

@@ -10,11 +10,13 @@ crate-type = ["lib", "staticlib"]
[dependencies]
anyhow = "1.0"
tokio = { version = "1.50.0", features = [
tokio = { version = "1.37", features = [
"rt",
"macros",
"sync",
"io-util",
"net",
"process",
"rt-multi-thread",
"signal",
"time",
@@ -32,6 +34,7 @@ serde_json = "1.0"
blake2 = "0.10"
chacha20poly1305 = "0.10"
rand = "0.8"
bytes = "1"
rand_core = "0.6"
aead = "0.5"
x25519-dalek = { version = "2.0", features = [
@@ -45,40 +48,46 @@ base64 = "0.21"
fehler = "1.0"
ip_network_table = "0.2"
ip_network = "0.4"
ipnetwork = { version = "0.21", features = ["serde"] }
async-channel = "2.1"
schemars = "0.8"
futures = "0.3.28"
once_cell = "1.19"
arti-client = "0.40.0"
hickory-proto = "0.25.2"
netstack-smoltcp = "0.2.1"
tokio-util = { version = "0.7.18", features = ["compat"] }
tor-rtcompat = "0.40.0"
console-subscriber = { version = "0.2.0", optional = true }
console = "0.15.8"
axum = "0.8.8"
reqwest = { version = "0.13.2", default-features = false, features = [
axum = "0.7.4"
argon2 = "0.5"
reqwest = { version = "0.12", default-features = false, features = [
"json",
"rustls",
"rustls-tls",
] }
rusqlite = { version = "0.38.0", features = ["blob"] }
dotenv = "0.15.0"
tonic = "0.14.5"
tonic-prost = "0.14.5"
prost = "0.14.3"
prost-types = "0.14.3"
tokio-stream = "0.1.18"
tonic = "0.12.0"
prost = "0.13.1"
prost-types = "0.13.1"
tokio-stream = "0.1"
async-stream = "0.2"
tower = "0.5.3"
hyper-util = "0.1.20"
tower = { version = "0.4.13", features = ["util"] }
hyper-util = "0.1.6"
toml = "0.8.15"
rust-ini = "0.21.0"
subtle = "2.6"
[target.'cfg(target_os = "linux")'.dependencies]
caps = "0.5"
libsystemd = "0.7"
tracing-journald = "0.3"
libc = "0.2"
libsystemd = "0.7"
nix = { version = "0.27", features = ["fs", "socket", "uio"] }
tracing-journald = "0.3"
[target.'cfg(target_vendor = "apple")'.dependencies]
nix = { version = "0.27", features = ["ioctl"] }
nix = { version = "0.27" }
rusqlite = { version = "0.38.0", features = ["bundled", "blob"] }
[target.'cfg(target_os = "macos")'.dependencies]
@@ -86,6 +95,7 @@ tracing-oslog = { git = "https://github.com/Stormshield-robinc/tracing-oslog" }
[dev-dependencies]
insta = { version = "1.32", features = ["yaml"] }
tempfile = "3.13"
[package.metadata.generate-rpm]
assets = [
@@ -102,4 +112,4 @@ bundled = ["rusqlite/bundled"]
[build-dependencies]
tonic-prost-build = "0.14.5"
tonic-build = "0.12.0"

View file

@@ -1,4 +1,4 @@
fn main() -> Result<(), Box<dyn std::error::Error>> {
tonic_prost_build::compile_protos("../proto/burrow.proto")?;
tonic_build::compile_protos("../proto/burrow.proto")?;
Ok(())
}

View file

@@ -1,24 +0,0 @@
use std::env::var;
use anyhow::Result;
use reqwest::Url;
pub async fn login() -> Result<()> {
let state = "vt :P";
let nonce = "no";
let mut url = Url::parse("https://slack.com/openid/connect/authorize")?;
let mut q = url.query_pairs_mut();
q.append_pair("response_type", "code");
q.append_pair("scope", "openid profile email");
q.append_pair("client_id", &var("CLIENT_ID")?);
q.append_pair("state", state);
q.append_pair("team", &var("SLACK_TEAM_ID")?);
q.append_pair("nonce", nonce);
q.append_pair("redirect_uri", "https://burrow.rs/callback");
drop(q);
println!("Continue auth in your browser:\n{}", url.as_str());
Ok(())
}

View file

@@ -1,2 +1 @@
pub mod client;
pub mod server;

View file

@@ -1,91 +1,627 @@
use anyhow::Result;
use anyhow::{anyhow, Context, Result};
use argon2::{
password_hash::{PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
Argon2,
};
use base64::{engine::general_purpose, Engine as _};
use rand::RngCore;
use rusqlite::{params, Connection, OptionalExtension};
use crate::daemon::rpc::grpc_defs::{Network, NetworkType};
use crate::control::{
DnsConfig, Hostinfo, LocalAuthResponse, MapRequest, MapResponse, Node, NodeCapMap,
PacketFilter, PeerCapMap, RegisterRequest, UserProfile,
};
const CREATE_SCHEMA: &str = r#"
CREATE TABLE IF NOT EXISTS auth_user (
id INTEGER PRIMARY KEY AUTOINCREMENT,
email TEXT NOT NULL UNIQUE,
display_name TEXT NOT NULL,
profile_pic_url TEXT,
groups_json TEXT NOT NULL DEFAULT '[]',
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS auth_local_credential (
user_id INTEGER PRIMARY KEY REFERENCES auth_user(id) ON DELETE CASCADE,
username TEXT NOT NULL UNIQUE,
password_hash TEXT NOT NULL,
rotated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS auth_session (
id TEXT PRIMARY KEY,
user_id INTEGER NOT NULL REFERENCES auth_user(id) ON DELETE CASCADE,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
expires_at TEXT NOT NULL DEFAULT (datetime('now', '+7 days'))
);
CREATE TABLE IF NOT EXISTS control_node (
id INTEGER PRIMARY KEY AUTOINCREMENT,
stable_id TEXT NOT NULL UNIQUE,
user_id INTEGER NOT NULL REFERENCES auth_user(id) ON DELETE CASCADE,
name TEXT NOT NULL,
node_key TEXT NOT NULL UNIQUE,
machine_key TEXT,
disco_key TEXT,
addresses_json TEXT NOT NULL,
allowed_ips_json TEXT NOT NULL,
endpoints_json TEXT NOT NULL,
home_derp INTEGER,
hostinfo_json TEXT,
tags_json TEXT NOT NULL DEFAULT '[]',
primary_routes_json TEXT NOT NULL DEFAULT '[]',
cap_version INTEGER NOT NULL DEFAULT 1,
cap_map_json TEXT NOT NULL DEFAULT '{}',
peer_cap_map_json TEXT NOT NULL DEFAULT '{}',
machine_authorized INTEGER NOT NULL DEFAULT 1,
node_key_expired INTEGER NOT NULL DEFAULT 0,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now')),
last_seen TEXT,
online INTEGER
);
"#;
#[derive(Clone, Debug)]
pub struct StoredUser {
pub profile: UserProfile,
}
pub fn init_db(path: &str) -> Result<()> {
let conn = Connection::open(path)?;
conn.execute_batch(CREATE_SCHEMA)?;
Ok(())
}
pub fn ensure_local_identity(
path: &str,
username: &str,
email: &str,
display_name: &str,
password: &str,
) -> Result<UserProfile> {
let conn = Connection::open(path)?;
conn.execute(
"INSERT INTO auth_user (email, display_name) VALUES (?, ?)
ON CONFLICT(email) DO UPDATE SET display_name = excluded.display_name",
params![email, display_name],
)?;
let user_id: i64 =
conn.query_row("SELECT id FROM auth_user WHERE email = ?", [email], |row| {
row.get(0)
})?;
let existing_hash: Option<String> = conn
.query_row(
"SELECT password_hash FROM auth_local_credential WHERE user_id = ?",
[user_id],
|row| row.get(0),
)
.optional()?;
let password_hash = match existing_hash {
Some(hash) if verify_password(password, &hash) => hash,
_ => hash_password(password)?,
};
conn.execute(
"INSERT INTO auth_local_credential (user_id, username, password_hash)
VALUES (?, ?, ?)
ON CONFLICT(user_id) DO UPDATE SET username = excluded.username, password_hash = excluded.password_hash, rotated_at = datetime('now')",
params![user_id, username, password_hash],
)?;
load_user_profile(&conn, user_id)
}
pub fn authenticate_local(
path: &str,
identifier: &str,
password: &str,
) -> Result<Option<LocalAuthResponse>> {
let conn = Connection::open(path)?;
let record = conn
.query_row(
"SELECT u.id, u.email, u.display_name, u.profile_pic_url, u.groups_json, c.password_hash
FROM auth_user u
JOIN auth_local_credential c ON c.user_id = u.id
WHERE c.username = ? OR u.email = ?",
params![identifier, identifier],
|row| {
Ok((
row.get::<_, i64>(0)?,
row.get::<_, String>(1)?,
row.get::<_, String>(2)?,
row.get::<_, Option<String>>(3)?,
row.get::<_, String>(4)?,
row.get::<_, String>(5)?,
))
},
)
.optional()?;
let Some((user_id, email, display_name, profile_pic_url, groups_json, password_hash)) = record
else {
return Ok(None);
};
if !verify_password(password, &password_hash) {
return Ok(None);
}
let token = random_token();
conn.execute(
"INSERT INTO auth_session (id, user_id) VALUES (?, ?)",
params![token, user_id],
)?;
Ok(Some(LocalAuthResponse {
access_token: token,
user: UserProfile {
id: user_id,
login_name: email,
display_name,
profile_pic_url,
groups: parse_json(&groups_json)?,
},
}))
}
pub fn user_for_session(path: &str, token: &str) -> Result<Option<StoredUser>> {
let conn = Connection::open(path)?;
let user_id = conn
.query_row(
"SELECT user_id FROM auth_session WHERE id = ? AND expires_at > datetime('now')",
[token],
|row| row.get::<_, i64>(0),
)
.optional()?;
let Some(user_id) = user_id else {
return Ok(None);
};
Ok(Some(load_user(&conn, user_id)?))
}
pub fn upsert_node(path: &str, user: &StoredUser, request: &RegisterRequest) -> Result<Node> {
let conn = Connection::open(path)?;
let existing = find_existing_node(&conn, user.profile.id, request)?;
let name = Node::preferred_name(request);
let allowed_ips = Node::normalized_allowed_ips(request);
match existing {
Some((node_id, stable_id, created_at)) => {
conn.execute(
"UPDATE control_node
SET name = ?, node_key = ?, machine_key = ?, disco_key = ?, addresses_json = ?, allowed_ips_json = ?,
endpoints_json = ?, home_derp = ?, hostinfo_json = ?, tags_json = ?, primary_routes_json = ?,
cap_version = ?, cap_map_json = ?, peer_cap_map_json = ?, updated_at = datetime('now'),
last_seen = datetime('now'), online = 1
WHERE id = ?",
params![
name,
request.node_key,
request.machine_key,
request.disco_key,
to_json(&request.addresses)?,
to_json(&allowed_ips)?,
to_json(&request.endpoints)?,
request.home_derp,
optional_json(&request.hostinfo)?,
to_json(&request.tags)?,
to_json(&request.primary_routes)?,
request.version.max(1),
to_json(&request.cap_map)?,
to_json(&request.peer_cap_map)?,
node_id,
],
)?;
load_node(&conn, node_id, stable_id, Some(created_at))
}
None => {
conn.execute(
"INSERT INTO control_node (
stable_id, user_id, name, node_key, machine_key, disco_key, addresses_json, allowed_ips_json,
endpoints_json, home_derp, hostinfo_json, tags_json, primary_routes_json, cap_version,
cap_map_json, peer_cap_map_json, last_seen, online
) VALUES ('', ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, datetime('now'), 1)",
params![
user.profile.id,
name,
request.node_key,
request.machine_key,
request.disco_key,
to_json(&request.addresses)?,
to_json(&allowed_ips)?,
to_json(&request.endpoints)?,
request.home_derp,
optional_json(&request.hostinfo)?,
to_json(&request.tags)?,
to_json(&request.primary_routes)?,
request.version.max(1),
to_json(&request.cap_map)?,
to_json(&request.peer_cap_map)?,
],
)?;
let node_id = conn.last_insert_rowid();
let stable_id = format!("bn-{node_id}");
conn.execute(
"UPDATE control_node SET stable_id = ? WHERE id = ?",
params![stable_id, node_id],
)?;
load_node(&conn, node_id, stable_id, None)
}
}
}
pub fn map_for_node(
path: &str,
user: &StoredUser,
request: &MapRequest,
domain: &str,
) -> Result<MapResponse> {
let conn = Connection::open(path)?;
apply_map_request(&conn, user.profile.id, request)?;
let self_row = conn
.query_row(
"SELECT id, stable_id, created_at FROM control_node WHERE user_id = ? AND node_key = ?",
params![user.profile.id, request.node_key],
|row| {
Ok((
row.get::<_, i64>(0)?,
row.get::<_, String>(1)?,
row.get::<_, String>(2)?,
))
},
)
.optional()?
.ok_or_else(|| anyhow!("node not registered"))?;
let node = load_node(&conn, self_row.0, self_row.1, Some(self_row.2))?;
let peers = load_peers(&conn, node.id)?;
Ok(MapResponse {
map_session_handle: Some(format!("map-{}", node.stable_id)),
seq: Some(request.map_session_seq.unwrap_or(0) + 1),
node,
peers,
domain: domain.to_owned(),
dns: Some(DnsConfig {
resolvers: vec!["1.1.1.1".to_owned(), "1.0.0.1".to_owned()],
search_domains: vec![domain.to_owned()],
magic_dns: true,
}),
packet_filters: vec![PacketFilter::default()],
})
}
pub static PATH: &str = "./server.sqlite3";
pub fn init_db() -> Result<()> {
let conn = rusqlite::Connection::open(PATH)?;
fn apply_map_request(conn: &Connection, user_id: i64, request: &MapRequest) -> Result<()> {
let current = conn
.query_row(
"SELECT id FROM control_node WHERE user_id = ? AND node_key = ?",
params![user_id, request.node_key],
|row| row.get::<_, i64>(0),
)
.optional()?;
let Some(node_id) = current else {
return Ok(());
};
let hostinfo_json = optional_json(&request.hostinfo)?;
let endpoints_json = to_json(&request.endpoints)?;
conn.execute(
"CREATE TABLE IF NOT EXISTS user (
id PRIMARY KEY,
created_at TEXT NOT NULL
)",
(),
"UPDATE control_node
SET disco_key = COALESCE(?, disco_key),
hostinfo_json = CASE WHEN ? IS NULL THEN hostinfo_json ELSE ? END,
endpoints_json = CASE WHEN ? = '[]' THEN endpoints_json ELSE ? END,
updated_at = datetime('now'),
last_seen = datetime('now'),
online = 1
WHERE id = ?",
params![
request.disco_key,
hostinfo_json,
hostinfo_json,
endpoints_json,
endpoints_json,
node_id,
],
)?;
conn.execute(
"CREATE TABLE IF NOT EXISTS user_connection (
user_id INTEGER REFERENCES user(id) ON DELETE CASCADE,
openid_provider TEXT NOT NULL,
openid_user_id TEXT NOT NULL,
openid_user_name TEXT NOT NULL,
access_token TEXT NOT NULL,
refresh_token TEXT,
PRIMARY KEY (openid_provider, openid_user_id)
)",
(),
)?;
conn.execute(
"CREATE TABLE IF NOT EXISTS device (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT,
public_key TEXT NOT NULL,
apns_token TEXT UNIQUE,
user_id INT REFERENCES user(id) ON DELETE CASCADE,
created_at TEXT NOT NULL DEFAULT (datetime('now')) CHECK(created_at IS datetime(created_at)),
ipv4 TEXT NOT NULL UNIQUE,
ipv6 TEXT NOT NULL UNIQUE,
access_token TEXT NOT NULL UNIQUE,
refresh_token TEXT NOT NULL UNIQUE,
expires_at TEXT NOT NULL DEFAULT (datetime('now', '+7 days')) CHECK(expires_at IS datetime(expires_at))
)",
()
).unwrap();
Ok(())
}
pub fn store_connection(
openid_user: super::providers::OpenIdUser,
openid_provider: &str,
access_token: &str,
refresh_token: Option<&str>,
) -> Result<()> {
log::debug!("Storing openid user {:#?}", openid_user);
let conn = rusqlite::Connection::open(PATH)?;
fn find_existing_node(
conn: &Connection,
user_id: i64,
request: &RegisterRequest,
) -> Result<Option<(i64, String, String)>> {
let mut candidates = vec![request.node_key.as_str()];
if let Some(old) = request.old_node_key.as_deref() {
if old != request.node_key {
candidates.push(old);
}
}
conn.execute(
"INSERT OR IGNORE INTO user (id, created_at) VALUES (?, datetime('now'))",
(&openid_user.sub,),
)?;
conn.execute(
"INSERT INTO user_connection (user_id, openid_provider, openid_user_id, openid_user_name, access_token, refresh_token) VALUES (
(SELECT id FROM user WHERE id = ?),
?,
?,
?,
?,
?
)",
(&openid_user.sub, &openid_provider, &openid_user.sub, &openid_user.name, access_token, refresh_token),
)?;
Ok(())
for candidate in candidates {
let hit = conn
.query_row(
"SELECT id, stable_id, created_at FROM control_node WHERE user_id = ? AND node_key = ?",
params![user_id, candidate],
|row| {
Ok((
row.get::<_, i64>(0)?,
row.get::<_, String>(1)?,
row.get::<_, String>(2)?,
))
},
)
.optional()?;
if hit.is_some() {
return Ok(hit);
}
}
Ok(None)
}
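`find_existing_node` supports key rotation by probing the current node key first, then the old key when one is supplied and differs. The candidate ordering on its own (`rotation_candidates` is a hypothetical helper name):

```rust
// Lookup order for node-key rotation: current key first, then the old
// key, skipping the old key when it duplicates the current one.
fn rotation_candidates<'a>(node_key: &'a str, old_node_key: Option<&'a str>) -> Vec<&'a str> {
    let mut candidates = vec![node_key];
    if let Some(old) = old_node_key {
        if old != node_key {
            candidates.push(old);
        }
    }
    candidates
}
```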
pub fn store_device(
openid_user: super::providers::OpenIdUser,
openid_provider: &str,
access_token: &str,
refresh_token: Option<&str>,
) -> Result<()> {
log::debug!("Storing openid user {:#?}", openid_user);
let conn = rusqlite::Connection::open(PATH)?;
// TODO
Ok(())
fn load_peers(conn: &Connection, self_id: i64) -> Result<Vec<Node>> {
let mut stmt = conn.prepare(
"SELECT id, stable_id, created_at FROM control_node WHERE id != ? AND machine_authorized = 1 ORDER BY id",
)?;
let peers = stmt
.query_map([self_id], |row| {
Ok((
row.get::<_, i64>(0)?,
row.get::<_, String>(1)?,
row.get::<_, String>(2)?,
))
})?
.collect::<rusqlite::Result<Vec<_>>>()?;
peers
.into_iter()
.map(|(id, stable_id, created_at)| load_node(conn, id, stable_id, Some(created_at)))
.collect()
}
fn load_node(
conn: &Connection,
id: i64,
stable_id: String,
created_at_hint: Option<String>,
) -> Result<Node> {
let row = conn.query_row(
"SELECT user_id, name, node_key, machine_key, disco_key, addresses_json, allowed_ips_json,
endpoints_json, home_derp, hostinfo_json, tags_json, primary_routes_json, cap_version,
cap_map_json, peer_cap_map_json, machine_authorized, node_key_expired,
created_at, updated_at, last_seen, online
FROM control_node WHERE id = ?",
[id],
|row| {
Ok((
row.get::<_, i64>(0)?,
row.get::<_, String>(1)?,
row.get::<_, String>(2)?,
row.get::<_, Option<String>>(3)?,
row.get::<_, Option<String>>(4)?,
row.get::<_, String>(5)?,
row.get::<_, String>(6)?,
row.get::<_, String>(7)?,
row.get::<_, Option<i32>>(8)?,
row.get::<_, Option<String>>(9)?,
row.get::<_, String>(10)?,
row.get::<_, String>(11)?,
row.get::<_, i32>(12)?,
row.get::<_, String>(13)?,
row.get::<_, String>(14)?,
row.get::<_, i64>(15)?,
row.get::<_, i64>(16)?,
row.get::<_, String>(17)?,
row.get::<_, String>(18)?,
row.get::<_, Option<String>>(19)?,
row.get::<_, Option<i64>>(20)?,
))
},
)?;
Ok(Node {
id,
stable_id,
user_id: row.0,
name: row.1,
node_key: row.2,
machine_key: row.3,
disco_key: row.4,
addresses: parse_json(&row.5)?,
allowed_ips: parse_json(&row.6)?,
endpoints: parse_json(&row.7)?,
home_derp: row.8,
hostinfo: row.9.map(|raw| parse_json::<Hostinfo>(&raw)).transpose()?,
tags: parse_json(&row.10)?,
primary_routes: parse_json(&row.11)?,
cap_version: row.12,
cap_map: parse_json::<NodeCapMap>(&row.13)?,
peer_cap_map: parse_json::<PeerCapMap>(&row.14)?,
machine_authorized: row.15 != 0,
node_key_expired: row.16 != 0,
created_at: Some(created_at_hint.unwrap_or(row.17)),
updated_at: Some(row.18),
last_seen: row.19,
online: row.20.map(|value| value != 0),
})
}
fn load_user(conn: &Connection, user_id: i64) -> Result<StoredUser> {
let profile = load_user_profile(conn, user_id)?;
Ok(StoredUser { profile })
}
fn load_user_profile(conn: &Connection, user_id: i64) -> Result<UserProfile> {
let row = conn.query_row(
"SELECT email, display_name, profile_pic_url, groups_json FROM auth_user WHERE id = ?",
[user_id],
|row| {
Ok((
row.get::<_, String>(0)?,
row.get::<_, String>(1)?,
row.get::<_, Option<String>>(2)?,
row.get::<_, String>(3)?,
))
},
)?;
Ok(UserProfile {
id: user_id,
login_name: row.0,
display_name: row.1,
profile_pic_url: row.2,
groups: parse_json(&row.3)?,
})
}
fn hash_password(password: &str) -> Result<String> {
let salt = SaltString::generate(&mut argon2::password_hash::rand_core::OsRng);
let hash = Argon2::default()
.hash_password(password.as_bytes(), &salt)
.map_err(|err| anyhow!("failed to hash password: {err}"))?;
Ok(hash.to_string())
}
fn verify_password(password: &str, password_hash: &str) -> bool {
PasswordHash::new(password_hash)
.ok()
.and_then(|hash| {
Argon2::default()
.verify_password(password.as_bytes(), &hash)
.ok()
})
.is_some()
}
fn random_token() -> String {
let mut bytes = [0u8; 32];
rand::thread_rng().fill_bytes(&mut bytes);
general_purpose::URL_SAFE_NO_PAD.encode(bytes)
}
fn to_json<T: serde::Serialize>(value: &T) -> Result<String> {
serde_json::to_string(value).context("failed to serialize json")
}
fn optional_json<T: serde::Serialize>(value: &Option<T>) -> Result<Option<String>> {
value.as_ref().map(to_json).transpose()
}
fn parse_json<T: serde::de::DeserializeOwned>(value: &str) -> Result<T> {
serde_json::from_str(value)
.with_context(|| format!("failed to decode json payload from '{value}'"))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::control::{Hostinfo, RegisterRequest};
use tempfile::TempDir;
fn temp_db() -> Result<(TempDir, String)> {
let dir = tempfile::tempdir()?;
let db_path = dir.path().join("server.sqlite3");
Ok((dir, db_path.to_string_lossy().to_string()))
}
#[test]
fn local_auth_and_map_round_trip() -> Result<()> {
let (_dir, db_path) = temp_db()?;
init_db(&db_path)?;
ensure_local_identity(
&db_path,
"contact",
"contact@burrow.net",
"Burrow Contact",
"password-1",
)?;
let auth = authenticate_local(&db_path, "contact", "password-1")?
.expect("expected login to succeed");
let user =
user_for_session(&db_path, &auth.access_token)?.expect("expected session to resolve");
let node = upsert_node(
&db_path,
&user,
&RegisterRequest {
node_key: "nodekey:aaaa".to_owned(),
machine_key: Some("machinekey:aaaa".to_owned()),
disco_key: Some("discokey:aaaa".to_owned()),
addresses: vec!["100.64.0.1/32".to_owned()],
endpoints: vec!["203.0.113.10:41641".to_owned()],
hostinfo: Some(Hostinfo {
hostname: Some("burrow-dev".to_owned()),
os: Some("linux".to_owned()),
os_version: Some("6.13".to_owned()),
services: vec!["ssh".to_owned()],
request_tags: vec!["tag:dev".to_owned()],
}),
..RegisterRequest::default()
},
)?;
assert_eq!(node.name, "burrow-dev");
assert_eq!(node.allowed_ips, vec!["100.64.0.1/32"]);
let map = map_for_node(
&db_path,
&user,
&MapRequest {
node_key: "nodekey:aaaa".to_owned(),
stream: true,
endpoints: vec!["203.0.113.10:41641".to_owned()],
..MapRequest::default()
},
"burrow.net",
)?;
assert_eq!(map.node.node_key, "nodekey:aaaa");
assert_eq!(map.domain, "burrow.net");
assert!(map.dns.expect("dns config").magic_dns);
Ok(())
}
#[test]
fn register_can_rotate_node_keys() -> Result<()> {
let (_dir, db_path) = temp_db()?;
init_db(&db_path)?;
ensure_local_identity(
&db_path,
"contact",
"contact@burrow.net",
"Burrow Contact",
"password-1",
)?;
let auth = authenticate_local(&db_path, "contact@burrow.net", "password-1")?
.expect("expected login to succeed");
let user =
user_for_session(&db_path, &auth.access_token)?.expect("expected session to resolve");
upsert_node(
&db_path,
&user,
&RegisterRequest {
node_key: "nodekey:old".to_owned(),
addresses: vec!["100.64.0.2/32".to_owned()],
..RegisterRequest::default()
},
)?;
let rotated = upsert_node(
&db_path,
&user,
&RegisterRequest {
node_key: "nodekey:new".to_owned(),
old_node_key: Some("nodekey:old".to_owned()),
addresses: vec!["100.64.0.3/32".to_owned()],
..RegisterRequest::default()
},
)?;
assert_eq!(rotated.node_key, "nodekey:new");
assert_eq!(rotated.addresses, vec!["100.64.0.3/32"]);
Ok(())
}
}


@@ -1,32 +1,297 @@
pub mod db;
pub mod providers;
pub mod tailscale;
use anyhow::Result;
use axum::{http::StatusCode, routing::post, Router};
use providers::slack::auth;
use std::{env, path::Path};
use anyhow::{Context, Result};
use axum::{
extract::{Json, Path as AxumPath, Query, State},
http::{header::AUTHORIZATION, HeaderMap, StatusCode},
response::IntoResponse,
routing::{get, post},
Router,
};
use serde::Deserialize;
use tokio::signal;
use crate::control::{
discovery, LocalAuthRequest, LocalAuthResponse, MapRequest, MapResponse, RegisterRequest,
RegisterResponse, TailnetDiscovery, BURROW_TAILNET_DOMAIN,
};
#[derive(Clone, Debug)]
pub struct BootstrapIdentity {
pub username: String,
pub email: String,
pub display_name: String,
pub password_file: String,
}
impl Default for BootstrapIdentity {
fn default() -> Self {
Self {
username: "contact".to_owned(),
email: "contact@burrow.net".to_owned(),
display_name: "Burrow Contact".to_owned(),
password_file: "intake/forgejo_pass_contact_at_burrow_net.txt".to_owned(),
}
}
}
#[derive(Clone, Debug)]
pub struct AuthServerConfig {
pub listen: String,
pub db_path: String,
pub tailnet_domain: String,
pub bootstrap: BootstrapIdentity,
}
impl Default for AuthServerConfig {
fn default() -> Self {
Self {
listen: "0.0.0.0:8080".to_owned(),
db_path: db::PATH.to_owned(),
tailnet_domain: BURROW_TAILNET_DOMAIN.to_owned(),
bootstrap: BootstrapIdentity::default(),
}
}
}
impl AuthServerConfig {
pub fn from_env() -> Self {
let mut config = Self::default();
if let Ok(value) = env::var("BURROW_AUTH_LISTEN") {
config.listen = value;
}
if let Ok(value) = env::var("BURROW_AUTH_DB_PATH") {
config.db_path = value;
}
if let Ok(value) = env::var("BURROW_AUTH_TAILNET_DOMAIN") {
config.tailnet_domain = value;
}
if let Ok(value) = env::var("BURROW_BOOTSTRAP_USERNAME") {
config.bootstrap.username = value;
}
if let Ok(value) = env::var("BURROW_BOOTSTRAP_EMAIL") {
config.bootstrap.email = value;
}
if let Ok(value) = env::var("BURROW_BOOTSTRAP_DISPLAY_NAME") {
config.bootstrap.display_name = value;
}
if let Ok(value) = env::var("BURROW_BOOTSTRAP_PASSWORD_FILE") {
config.bootstrap.password_file = value;
}
config
}
fn bootstrap_password(&self) -> Result<Option<String>> {
let path = Path::new(&self.bootstrap.password_file);
if !path.exists() {
return Ok(None);
}
let password = std::fs::read_to_string(path).with_context(|| {
format!("failed to read bootstrap password from {}", path.display())
})?;
let password = password.trim().to_owned();
if password.is_empty() {
return Ok(None);
}
Ok(Some(password))
}
}
#[derive(Clone)]
struct AppState {
config: AuthServerConfig,
tailscale: tailscale::TailscaleBridgeManager,
}
#[derive(Debug, Deserialize)]
struct TailnetDiscoveryQuery {
email: String,
}
type AppResult<T> = Result<T, (StatusCode, String)>;
pub async fn serve() -> Result<()> {
db::init_db()?;
serve_with_config(AuthServerConfig::from_env()).await
}
let app = Router::new()
.route("/slack-auth", post(auth))
.route("/device/new", post(device_new));
pub async fn serve_with_config(config: AuthServerConfig) -> Result<()> {
db::init_db(&config.db_path)?;
if let Some(password) = config.bootstrap_password()? {
db::ensure_local_identity(
&config.db_path,
&config.bootstrap.username,
&config.bootstrap.email,
&config.bootstrap.display_name,
&password,
)?;
}
let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
log::info!("Starting auth server on port 8080");
let app = build_router(config.clone());
let listener = tokio::net::TcpListener::bind(&config.listen).await?;
log::info!("Starting auth server on {}", config.listen);
axum::serve(listener, app)
.with_graceful_shutdown(shutdown_signal())
.await
.unwrap();
.await?;
Ok(())
}
async fn device_new() -> StatusCode {
pub fn build_router(config: AuthServerConfig) -> Router {
Router::new()
.route("/healthz", get(healthz))
.route("/device/new", post(device_new))
.route("/v1/auth/login", post(login_local))
.route("/v1/control/register", post(control_register))
.route("/v1/control/map", post(control_map))
.route("/v1/tailnet/discover", get(tailnet_discover))
.route("/v1/tailscale/login/start", post(tailscale_login_start))
.route("/v1/tailscale/login/:session_id", get(tailscale_login_status))
.with_state(AppState {
config,
tailscale: tailscale::TailscaleBridgeManager::default(),
})
}
async fn login_local(
State(state): State<AppState>,
Json(request): Json<LocalAuthRequest>,
) -> AppResult<Json<LocalAuthResponse>> {
let db_path = state.config.db_path.clone();
blocking(move || db::authenticate_local(&db_path, &request.identifier, &request.password))
.await?
.map(Json)
.ok_or_else(|| (StatusCode::UNAUTHORIZED, "invalid credentials".to_owned()))
}
async fn control_register(
headers: HeaderMap,
State(state): State<AppState>,
Json(request): Json<RegisterRequest>,
) -> AppResult<Json<RegisterResponse>> {
let token = bearer_token(&headers)?;
let db_path = state.config.db_path.clone();
let user = blocking({
let db_path = db_path.clone();
let token = token.clone();
move || db::user_for_session(&db_path, &token)
})
.await?
.ok_or_else(|| (StatusCode::UNAUTHORIZED, "unknown session".to_owned()))?;
let response_user = user.profile.clone();
let node = blocking(move || db::upsert_node(&db_path, &user, &request)).await?;
Ok(Json(RegisterResponse {
user: response_user,
machine_authorized: node.machine_authorized,
node_key_expired: node.node_key_expired,
auth_url: None,
error: None,
node,
}))
}
async fn control_map(
headers: HeaderMap,
State(state): State<AppState>,
Json(request): Json<MapRequest>,
) -> AppResult<Json<MapResponse>> {
let token = bearer_token(&headers)?;
let db_path = state.config.db_path.clone();
let domain = state.config.tailnet_domain.clone();
let user = blocking({
let db_path = db_path.clone();
let token = token.clone();
move || db::user_for_session(&db_path, &token)
})
.await?
.ok_or_else(|| (StatusCode::UNAUTHORIZED, "unknown session".to_owned()))?;
let response = blocking(move || db::map_for_node(&db_path, &user, &request, &domain)).await?;
Ok(Json(response))
}
async fn tailnet_discover(
Query(query): Query<TailnetDiscoveryQuery>,
) -> AppResult<Json<TailnetDiscovery>> {
if query.email.trim().is_empty() {
return Err((StatusCode::BAD_REQUEST, "email is required".to_owned()));
}
let discovery = discovery::discover_tailnet(&query.email)
.await
.map_err(|err| (StatusCode::BAD_GATEWAY, err.to_string()))?;
Ok(Json(discovery))
}
async fn tailscale_login_start(
State(state): State<AppState>,
Json(request): Json<tailscale::TailscaleLoginStartRequest>,
) -> AppResult<Json<tailscale::TailscaleLoginStartResponse>> {
let response = state
.tailscale
.start_login(request)
.await
.map_err(internal_error)?;
Ok(Json(response))
}
async fn tailscale_login_status(
AxumPath(session_id): AxumPath<String>,
State(state): State<AppState>,
) -> AppResult<Json<tailscale::TailscaleLoginStatus>> {
state
.tailscale
.status(&session_id)
.await
.map_err(internal_error)?
.map(Json)
.ok_or_else(|| (StatusCode::NOT_FOUND, "unknown tailscale login session".to_owned()))
}
async fn healthz() -> impl IntoResponse {
StatusCode::OK
}
async fn device_new() -> impl IntoResponse {
StatusCode::OK
}
async fn blocking<F, T>(work: F) -> AppResult<T>
where
F: FnOnce() -> Result<T> + Send + 'static,
T: Send + 'static,
{
tokio::task::spawn_blocking(work)
.await
.map_err(|err| (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()))?
.map_err(internal_error)
}
fn internal_error(err: anyhow::Error) -> (StatusCode, String) {
(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
}
fn bearer_token(headers: &HeaderMap) -> AppResult<String> {
let value = headers.get(AUTHORIZATION).ok_or_else(|| {
(
StatusCode::UNAUTHORIZED,
"missing authorization header".to_owned(),
)
})?;
let value = value.to_str().map_err(|_| {
(
StatusCode::BAD_REQUEST,
"invalid authorization header".to_owned(),
)
})?;
value
.strip_prefix("Bearer ")
.map(ToOwned::to_owned)
.ok_or_else(|| (StatusCode::UNAUTHORIZED, "expected bearer token".to_owned()))
}
async fn shutdown_signal() {
let ctrl_c = async {
signal::ctrl_c()
@@ -51,12 +316,115 @@ async fn shutdown_signal() {
}
}
// mod db {
// use rusqlite::{Connection, Result};
#[cfg(test)]
mod tests {
use super::*;
use axum::{
body::{to_bytes, Body},
http::{Request, StatusCode},
};
use tempfile::tempdir;
use tower::ServiceExt;
// #[derive(Debug)]
// struct User {
// id: i32,
// created_at: String,
// }
// }
#[tokio::test]
async fn login_register_and_map_round_trip() -> Result<()> {
let dir = tempdir()?;
let password_file = dir.path().join("bootstrap-password.txt");
std::fs::write(&password_file, "bootstrap-pass\n")?;
let db_path = dir.path().join("server.sqlite3");
let config = AuthServerConfig {
listen: "127.0.0.1:0".to_owned(),
db_path: db_path.to_string_lossy().to_string(),
tailnet_domain: "burrow.net".to_owned(),
bootstrap: BootstrapIdentity {
password_file: password_file.to_string_lossy().to_string(),
..BootstrapIdentity::default()
},
};
db::init_db(&config.db_path)?;
let password = config.bootstrap_password()?.expect("bootstrap password");
db::ensure_local_identity(
&config.db_path,
&config.bootstrap.username,
&config.bootstrap.email,
&config.bootstrap.display_name,
&password,
)?;
let app = build_router(config);
let response = app
.clone()
.oneshot(
Request::post("/v1/auth/login")
.header("content-type", "application/json")
.body(Body::from(serde_json::to_vec(&LocalAuthRequest {
identifier: "contact".to_owned(),
password: "bootstrap-pass".to_owned(),
})?))?,
)
.await?;
assert_eq!(response.status(), StatusCode::OK);
let login: LocalAuthResponse =
serde_json::from_slice(&to_bytes(response.into_body(), usize::MAX).await?)?;
let response = app
.clone()
.oneshot(
Request::post("/v1/control/register")
.header("content-type", "application/json")
.header("authorization", format!("Bearer {}", login.access_token))
.body(Body::from(serde_json::to_vec(&RegisterRequest {
node_key: "nodekey:1234".to_owned(),
machine_key: Some("machinekey:1234".to_owned()),
addresses: vec!["100.64.0.10/32".to_owned()],
endpoints: vec!["198.51.100.10:41641".to_owned()],
hostinfo: Some(crate::control::Hostinfo {
hostname: Some("devbox".to_owned()),
os: Some("linux".to_owned()),
os_version: Some("6.13".to_owned()),
services: vec!["ssh".to_owned()],
request_tags: vec!["tag:dev".to_owned()],
}),
..RegisterRequest::default()
})?))?,
)
.await?;
assert_eq!(response.status(), StatusCode::OK);
let response = app
.oneshot(
Request::post("/v1/control/map")
.header("content-type", "application/json")
.header("authorization", format!("Bearer {}", login.access_token))
.body(Body::from(serde_json::to_vec(&MapRequest {
node_key: "nodekey:1234".to_owned(),
stream: true,
endpoints: vec!["198.51.100.10:41641".to_owned()],
..MapRequest::default()
})?))?,
)
.await?;
assert_eq!(response.status(), StatusCode::OK);
let map: MapResponse =
serde_json::from_slice(&to_bytes(response.into_body(), usize::MAX).await?)?;
assert_eq!(map.domain, "burrow.net");
assert_eq!(map.node.name, "devbox");
assert!(map.dns.expect("dns").magic_dns);
Ok(())
}
#[tokio::test]
async fn tailnet_discover_requires_email() -> Result<()> {
let app = build_router(AuthServerConfig::default());
let response = app
.oneshot(
Request::get("/v1/tailnet/discover?email=")
.body(Body::empty())?,
)
.await?;
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
Ok(())
}
}


@ -1,8 +0,0 @@
pub mod slack;
pub use super::db;
#[derive(serde::Deserialize, Default, Debug)]
pub struct OpenIdUser {
pub sub: String,
pub name: String,
}


@ -1,102 +0,0 @@
use anyhow::Result;
use axum::{
extract::Json,
http::StatusCode,
routing::{get, post},
};
use reqwest::header::AUTHORIZATION;
use serde::Deserialize;
use super::db::store_connection;
#[derive(Deserialize)]
pub struct SlackToken {
slack_token: String,
}
pub async fn auth(Json(payload): Json<SlackToken>) -> (StatusCode, String) {
let slack_user = match fetch_slack_user(&payload.slack_token).await {
Ok(user) => user,
Err(e) => {
log::error!("Failed to fetch Slack user: {:?}", e);
return (StatusCode::UNAUTHORIZED, String::new());
}
};
log::info!(
"Slack user {} ({}) logged in.",
slack_user.name,
slack_user.sub
);
let conn = match store_connection(slack_user, "slack", &payload.slack_token, None) {
Ok(user) => user,
Err(e) => {
log::error!("Failed to store Slack connection: {:?}", e);
return (StatusCode::UNAUTHORIZED, String::new());
}
};
(StatusCode::OK, String::new())
}
async fn fetch_slack_user(access_token: &str) -> Result<super::OpenIdUser> {
let client = reqwest::Client::new();
let res = client
.get("https://slack.com/api/openid.connect.userInfo")
.header(AUTHORIZATION, format!("Bearer {}", access_token))
.send()
.await?
.json::<serde_json::Value>()
.await?;
let res_ok = res
.get("ok")
.and_then(|v| v.as_bool())
.ok_or(anyhow::anyhow!("Slack user object not ok!"))?;
if !res_ok {
return Err(anyhow::anyhow!("Slack user object not ok!"));
}
Ok(serde_json::from_value(res)?)
}
// async fn fetch_save_slack_user_data(query: Query<CallbackQuery>) -> anyhow::Result<()> {
// let client = reqwest::Client::new();
// log::trace!("Code was {}", &query.code);
// let mut url = Url::parse("https://slack.com/api/openid.connect.token")?;
// {
// let mut q = url.query_pairs_mut();
// q.append_pair("client_id", &var("CLIENT_ID")?);
// q.append_pair("client_secret", &var("CLIENT_SECRET")?);
// q.append_pair("code", &query.code);
// q.append_pair("grant_type", "authorization_code");
// q.append_pair("redirect_uri", "https://burrow.rs/callback");
// }
// let data = client
// .post(url)
// .send()
// .await?
// .json::<slack::CodeExchangeResponse>()
// .await?;
// if !data.ok {
// return Err(anyhow::anyhow!("Slack code exchange response not ok!"));
// }
// if let Some(access_token) = data.access_token {
// log::trace!("Access token is {access_token}");
// let user = slack::fetch_slack_user(&access_token)
// .await
// .map_err(|err| anyhow::anyhow!("Failed to fetch Slack user info {:#?}", err))?;
// db::store_user(user, access_token, String::new())
// .map_err(|_| anyhow::anyhow!("Failed to store user in db"))?;
// Ok(())
// } else {
// Err(anyhow::anyhow!("Access token not found in response"))
// }
// }


@@ -0,0 +1,519 @@
use std::{
collections::HashMap,
env,
path::{Path, PathBuf},
process::Stdio,
sync::Arc,
time::Duration,
};
use anyhow::{anyhow, Context, Result};
use rand::RngCore;
use reqwest::Client;
use serde::{Deserialize, Serialize};
use tokio::{
io::{AsyncBufReadExt, BufReader},
process::{Child, Command},
sync::Mutex,
task::JoinHandle,
};
#[derive(Clone, Debug, Default, Deserialize)]
pub struct TailscaleLoginStartRequest {
pub account_name: String,
pub identity_name: String,
#[serde(default)]
pub hostname: Option<String>,
#[serde(default)]
pub control_url: Option<String>,
#[serde(default)]
pub packet_socket: Option<String>,
}
#[derive(Clone, Debug, Serialize, Deserialize, Default)]
pub struct TailscaleLoginStatus {
pub backend_state: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub auth_url: Option<String>,
#[serde(default)]
pub running: bool,
#[serde(default)]
pub needs_login: bool,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub tailnet_name: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub magic_dns_suffix: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub self_dns_name: Option<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub tailscale_ips: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub health: Vec<String>,
}
#[derive(Clone, Debug, Serialize)]
pub struct TailscaleLoginStartResponse {
pub session_id: String,
pub status: TailscaleLoginStatus,
}
pub struct TailscaleLoginSession {
pub session_id: String,
pub helper: Arc<TailscaleHelperProcess>,
pub status: TailscaleLoginStatus,
}
#[derive(Clone, Default)]
pub struct TailscaleBridgeManager {
client: Client,
sessions: Arc<Mutex<HashMap<String, Arc<ManagedSession>>>>,
}
pub struct TailscaleHelperProcess {
session_id: String,
listen_url: String,
packet_socket: Option<PathBuf>,
control_url: Option<String>,
state_dir: PathBuf,
child: Arc<Mutex<Child>>,
_stderr_task: JoinHandle<()>,
}
type ManagedSession = TailscaleHelperProcess;
#[derive(Debug, Deserialize)]
struct HelperHello {
listen_addr: String,
#[serde(default)]
packet_socket: Option<String>,
}
impl TailscaleBridgeManager {
pub async fn start_login(
&self,
request: TailscaleLoginStartRequest,
) -> Result<TailscaleLoginStartResponse> {
let session = self.ensure_session(request).await?;
Ok(TailscaleLoginStartResponse {
session_id: session.session_id,
status: session.status,
})
}
pub async fn ensure_session(
&self,
request: TailscaleLoginStartRequest,
) -> Result<TailscaleLoginSession> {
let key = session_key_for_request(&request);
let requested_packet_socket = request
.packet_socket
.as_deref()
.map(str::trim)
.filter(|value| !value.is_empty());
let requested_control_url = request
.control_url
.as_deref()
.map(str::trim)
.filter(|value| !value.is_empty());
if let Some(existing) = self.sessions.lock().await.get(&key).cloned() {
let needs_restart_for_socket = match (requested_packet_socket, existing.packet_socket())
{
(Some(requested), Some(current)) => current != Path::new(requested),
(Some(_), None) => true,
_ => false,
};
let needs_restart_for_control_url =
requested_control_url != existing.control_url().map(|value| value.trim());
if !needs_restart_for_socket && !needs_restart_for_control_url {
match self.fetch_status(existing.as_ref()).await {
Ok(status) => {
return Ok(TailscaleLoginSession {
session_id: existing.session_id.clone(),
helper: existing,
status,
});
}
Err(err) => {
log::warn!(
"tailscale login session {} is stale, restarting: {err}",
existing.session_id
);
}
}
} else {
log::info!(
"tailscale login session {} no longer matches requested transport, restarting",
existing.session_id
);
}
self.sessions.lock().await.remove(&key);
let _ = self.shutdown_session(existing.as_ref()).await;
}
let session = Arc::new(spawn_tailscale_helper(&request).await?);
let status = self.wait_for_status(session.as_ref()).await?;
let response = TailscaleLoginSession {
session_id: session.session_id.clone(),
helper: session.clone(),
status,
};
self.sessions.lock().await.insert(key, session);
Ok(response)
}
pub async fn status(&self, session_id: &str) -> Result<Option<TailscaleLoginStatus>> {
let session = {
let sessions = self.sessions.lock().await;
sessions
.values()
.find(|session| session.session_id == session_id)
.cloned()
};
match session {
Some(session) => match self.fetch_status(session.as_ref()).await {
Ok(status) => Ok(Some(status)),
Err(err) => {
self.remove_session_by_id(session_id).await;
Err(err)
}
},
None => Ok(None),
}
}
pub async fn cancel(&self, session_id: &str) -> Result<bool> {
let session = self.remove_session_by_id(session_id).await;
match session {
Some(session) => {
self.shutdown_session(session.as_ref()).await?;
Ok(true)
}
None => Ok(false),
}
}
async fn wait_for_status(&self, session: &ManagedSession) -> Result<TailscaleLoginStatus> {
let mut last_error = None;
let mut last_status = None;
for _ in 0..40 {
match session.status_with_client(&self.client).await {
Ok(status) if status.running || status.auth_url.is_some() => return Ok(status),
Ok(status) => last_status = Some(status),
Err(err) => last_error = Some(err),
}
tokio::time::sleep(Duration::from_millis(250)).await;
}
if let Some(status) = last_status {
return Ok(status);
}
Err(last_error.unwrap_or_else(|| anyhow!("tailscale helper did not become ready")))
}
async fn fetch_status(&self, session: &ManagedSession) -> Result<TailscaleLoginStatus> {
session.status_with_client(&self.client).await
}
async fn remove_session_by_id(&self, session_id: &str) -> Option<Arc<ManagedSession>> {
let mut sessions = self.sessions.lock().await;
let key = sessions
.iter()
.find_map(|(key, session)| (session.session_id == session_id).then(|| key.clone()))?;
sessions.remove(&key)
}
async fn shutdown_session(&self, session: &ManagedSession) -> Result<()> {
session.shutdown_with_client(&self.client).await
}
}
impl TailscaleHelperProcess {
pub fn session_id(&self) -> &str {
&self.session_id
}
pub fn packet_socket(&self) -> Option<&Path> {
self.packet_socket.as_deref()
}
pub fn control_url(&self) -> Option<&str> {
self.control_url.as_deref()
}
pub fn state_dir(&self) -> &Path {
&self.state_dir
}
pub async fn status(&self) -> Result<TailscaleLoginStatus> {
self.status_with_client(&Client::new()).await
}
pub async fn shutdown(&self) -> Result<()> {
self.shutdown_with_client(&Client::new()).await
}
async fn status_with_client(&self, client: &Client) -> Result<TailscaleLoginStatus> {
let mut child = self.child.lock().await;
if let Some(status) = child.try_wait()? {
return Err(anyhow!(
"tailscale helper exited with status {status} for {}",
self.state_dir.display()
));
}
drop(child);
let response = client
.get(format!("{}/status", self.listen_url))
.send()
.await
.context("failed to query tailscale helper status")?
.error_for_status()
.context("tailscale helper status request failed")?;
let status = response
.json::<TailscaleLoginStatus>()
.await
.context("invalid tailscale helper status response")?;
log::info!(
"tailscale helper status session={} backend_state={} running={} needs_login={} auth_url={:?}",
self.session_id,
status.backend_state,
status.running,
status.needs_login,
status.auth_url
);
Ok(status)
}
async fn shutdown_with_client(&self, client: &Client) -> Result<()> {
let _ = client.post(format!("{}/shutdown", self.listen_url)).send().await;
for _ in 0..10 {
let mut child = self.child.lock().await;
if child.try_wait()?.is_some() {
return Ok(());
}
drop(child);
tokio::time::sleep(Duration::from_millis(100)).await;
}
let mut child = self.child.lock().await;
child
.start_kill()
.context("failed to kill tailscale helper")?;
let _ = child.wait().await;
Ok(())
}
}
pub async fn spawn_tailscale_helper(
request: &TailscaleLoginStartRequest,
) -> Result<TailscaleHelperProcess> {
let state_dir = state_root().join(session_dir_name(request));
tokio::fs::create_dir_all(&state_dir)
.await
.with_context(|| format!("failed to create {}", state_dir.display()))?;
let mut child = helper_command(request, &state_dir)?
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.context("failed to spawn tailscale login helper")?;
let stdout = child
.stdout
.take()
.context("tailscale helper stdout unavailable")?;
let stderr = child
.stderr
.take()
.context("tailscale helper stderr unavailable")?;
let hello_line = tokio::time::timeout(Duration::from_secs(20), async move {
let mut lines = BufReader::new(stdout).lines();
lines.next_line().await
})
.await
.context("timed out waiting for tailscale helper startup")??
.context("tailscale helper exited before reporting listen address")?;
let hello: HelperHello =
serde_json::from_str(&hello_line).context("invalid tailscale helper startup line")?;
let stderr_task = tokio::spawn(async move {
let mut lines = BufReader::new(stderr).lines();
while let Ok(Some(line)) = lines.next_line().await {
log::info!("tailscale-login-bridge: {line}");
}
});
Ok(TailscaleHelperProcess {
session_id: random_session_id(),
listen_url: format!("http://{}", hello.listen_addr),
packet_socket: hello.packet_socket.map(PathBuf::from),
control_url: request.control_url.clone(),
state_dir,
child: Arc::new(Mutex::new(child)),
_stderr_task: stderr_task,
})
}
fn helper_command(request: &TailscaleLoginStartRequest, state_dir: &Path) -> Result<Command> {
let mut command = if let Ok(path) = env::var("BURROW_TAILSCALE_HELPER") {
Command::new(path)
} else {
let helper_dir = Path::new(env!("CARGO_MANIFEST_DIR"))
.join("..")
.join("Tools/tailscale-login-bridge");
let mut command = Command::new("go");
command.current_dir(helper_dir).arg("run").arg(".");
command.env("GOWORK", "off");
command
};
command
.arg("--listen")
.arg("127.0.0.1:0")
.arg("--state-dir")
.arg(state_dir)
.arg("--hostname")
.arg(default_hostname(request));
if let Some(control_url) = request.control_url.as_deref() {
let trimmed = control_url.trim();
if !trimmed.is_empty() {
command.arg("--control-url").arg(trimmed);
}
}
if let Some(packet_socket) = request.packet_socket.as_deref() {
let trimmed = packet_socket.trim();
if !trimmed.is_empty() {
command.arg("--packet-socket").arg(trimmed);
}
}
Ok(command)
}
pub(crate) fn packet_socket_path(request: &TailscaleLoginStartRequest) -> PathBuf {
state_root().join(session_dir_name(request)).join("packet.sock")
}
pub(crate) fn state_root() -> PathBuf {
if let Ok(path) = env::var("BURROW_TAILSCALE_STATE_ROOT") {
return PathBuf::from(path);
}
let home = env::var_os("HOME")
.map(PathBuf::from)
.unwrap_or_else(|| PathBuf::from("."));
if cfg!(target_vendor = "apple") {
return home
.join("Library")
.join("Application Support")
.join("Burrow")
.join("tailscale");
}
home.join(".local")
.join("share")
.join("burrow")
.join("tailscale")
}
pub(crate) fn session_dir_name(request: &TailscaleLoginStartRequest) -> String {
format!(
"{}-{}-{}",
slug(&request.account_name),
slug(&request.identity_name),
slug(control_scope(request))
)
}
fn session_key_for_request(request: &TailscaleLoginStartRequest) -> String {
format!(
"{}:{}:{}",
request.account_name,
request.identity_name,
control_scope(request)
)
}
fn control_scope(request: &TailscaleLoginStartRequest) -> &str {
request
.control_url
.as_deref()
.map(str::trim)
.filter(|value| !value.is_empty())
.unwrap_or("tailscale-managed")
}
pub(crate) fn default_hostname(request: &TailscaleLoginStartRequest) -> String {
request
.hostname
.as_deref()
.filter(|value| !value.trim().is_empty())
.map(ToOwned::to_owned)
.unwrap_or_else(|| format!("burrow-{}", slug(&request.identity_name)))
}
fn random_session_id() -> String {
let mut bytes = [0_u8; 12];
rand::thread_rng().fill_bytes(&mut bytes);
bytes.iter().map(|byte| format!("{byte:02x}")).collect()
}
fn slug(input: &str) -> String {
let mut output = String::with_capacity(input.len());
for ch in input.chars() {
if ch.is_ascii_alphanumeric() {
output.push(ch.to_ascii_lowercase());
} else if ch == '-' || ch == '_' {
output.push('-');
}
}
if output.is_empty() {
"default".to_owned()
} else {
output
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn slug_sanitizes_input() {
assert_eq!(slug("Apple Phone"), "applephone");
assert_eq!(slug("default_identity"), "default-identity");
assert_eq!(slug(""), "default");
}
#[test]
fn state_dir_is_scoped_by_account_identity_and_control_plane() {
let request = TailscaleLoginStartRequest {
account_name: "default".to_owned(),
identity_name: "apple".to_owned(),
hostname: None,
control_url: None,
packet_socket: None,
};
assert_eq!(session_dir_name(&request), "default-apple-tailscale-managed");
assert_eq!(default_hostname(&request), "burrow-apple");
let custom_request = TailscaleLoginStartRequest {
control_url: Some("https://ts.burrow.net".to_owned()),
..request
};
assert_eq!(
session_dir_name(&custom_request),
"default-apple-httpstsburrownet"
);
}
}


@@ -0,0 +1,87 @@
use anyhow::{Context, Result};
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum TailnetProvider {
Tailscale,
Headscale,
Burrow,
}
impl Default for TailnetProvider {
fn default() -> Self {
Self::Tailscale
}
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct TailnetConfig {
#[serde(default)]
pub provider: TailnetProvider,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub authority: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub account: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub identity: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub tailnet: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub hostname: Option<String>,
}
impl TailnetConfig {
pub fn from_slice(bytes: &[u8]) -> Result<Self> {
let payload = std::str::from_utf8(bytes).context("tailnet payload must be valid UTF-8")?;
Self::from_str(payload)
}
pub fn from_str(payload: &str) -> Result<Self> {
let trimmed = payload.trim();
if trimmed.starts_with('{') {
return serde_json::from_str(trimmed).context("invalid tailnet JSON payload");
}
toml::from_str(trimmed).context("invalid tailnet TOML payload")
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn parses_json_payload() {
let config = TailnetConfig::from_str(
r#"{
"provider":"tailscale",
"account":"default",
"identity":"apple",
"tailnet":"example.ts.net",
"hostname":"burrow-phone"
}"#,
)
.unwrap();
assert_eq!(config.provider, TailnetProvider::Tailscale);
assert_eq!(config.account.as_deref(), Some("default"));
assert_eq!(config.identity.as_deref(), Some("apple"));
}
#[test]
fn parses_toml_payload() {
let config = TailnetConfig::from_str(
r#"
provider = "headscale"
authority = "https://headscale.example.com"
account = "default"
identity = "apple"
"#,
)
.unwrap();
assert_eq!(config.provider, TailnetProvider::Headscale);
assert_eq!(
config.authority.as_deref(),
Some("https://headscale.example.com")
);
}
}


@@ -0,0 +1,359 @@
use anyhow::{anyhow, Context, Result};
use reqwest::{Client, StatusCode, Url};
use serde::{Deserialize, Serialize};
use tracing::{debug, info};
use super::TailnetProvider;
pub const TAILNET_DISCOVERY_REL: &str = "https://burrow.net/rel/tailnet-control-server";
const TAILNET_DISCOVERY_PATH: &str = "/.well-known/burrow-tailnet";
const WEBFINGER_PATH: &str = "/.well-known/webfinger";
const MANAGED_TAILSCALE_AUTHORITY: &str = "controlplane.tailscale.com";
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct TailnetDiscovery {
pub domain: String,
pub provider: TailnetProvider,
pub authority: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub oidc_issuer: Option<String>,
}
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct TailnetAuthorityProbe {
pub authority: String,
pub status_code: i32,
pub summary: String,
pub detail: String,
pub reachable: bool,
}
#[derive(Clone, Debug, Default, Deserialize)]
struct WebFingerDocument {
#[serde(default)]
links: Vec<WebFingerLink>,
}
#[derive(Clone, Debug, Default, Deserialize)]
struct WebFingerLink {
#[serde(default)]
rel: String,
#[serde(default)]
href: Option<String>,
}
pub async fn discover_tailnet(email: &str) -> Result<TailnetDiscovery> {
let domain = email_domain(email)?;
info!(%email, %domain, "tailnet discovery requested");
let base_url = Url::parse(&format!("https://{domain}"))
.with_context(|| format!("invalid discovery domain {domain}"))?;
let client = Client::builder()
.user_agent("burrow-tailnet-discovery")
.timeout(std::time::Duration::from_secs(10))
.build()
.context("failed to build tailnet discovery client")?;
discover_tailnet_at(&client, email, &base_url).await
}
pub fn normalize_authority(authority: &str) -> String {
let trimmed = authority.trim();
if trimmed.contains("://") {
trimmed.to_owned()
} else {
format!("https://{trimmed}")
}
}
pub fn is_managed_tailscale_authority(authority: &str) -> bool {
let normalized = normalize_authority(authority)
.trim_end_matches('/')
.to_ascii_lowercase();
normalized == format!("https://{MANAGED_TAILSCALE_AUTHORITY}")
|| normalized == format!("http://{MANAGED_TAILSCALE_AUTHORITY}")
}
pub async fn probe_tailnet_authority(authority: &str) -> Result<TailnetAuthorityProbe> {
let authority = normalize_authority(authority);
if is_managed_tailscale_authority(&authority) {
return Ok(TailnetAuthorityProbe {
authority,
status_code: 200,
summary: "Tailscale-managed control plane".to_owned(),
detail: "Using Tailscale's default login server.".to_owned(),
reachable: true,
});
}
let base_url =
Url::parse(&authority).with_context(|| format!("invalid tailnet authority {authority}"))?;
let client = Client::builder()
.user_agent("burrow-tailnet-probe")
.timeout(std::time::Duration::from_secs(10))
.build()
.context("failed to build tailnet authority probe client")?;
if let Some(status) =
probe_url(&client, base_url.join("/health")?, &authority, "Tailnet server reachable").await?
{
return Ok(status);
}
if let Some(status) = probe_url(
&client,
base_url.clone(),
&authority,
"Tailnet server reachable",
)
.await?
{
return Ok(status);
}
Err(anyhow!("could not connect to the server"))
}
pub async fn discover_tailnet_at(
client: &Client,
email: &str,
base_url: &Url,
) -> Result<TailnetDiscovery> {
let domain = email_domain(email)?;
debug!(%email, %domain, base_url = %base_url, "starting tailnet domain discovery");
if let Some(discovery) = discover_well_known(client, base_url).await? {
info!(
%email,
%domain,
authority = %discovery.authority,
provider = ?discovery.provider,
"resolved tailnet discovery from well-known document"
);
return Ok(TailnetDiscovery { domain, ..discovery });
}
if let Some(authority) = discover_webfinger(client, email, base_url).await? {
info!(%email, %domain, %authority, "resolved tailnet discovery from webfinger");
return Ok(TailnetDiscovery {
domain,
provider: inferred_provider(Some(&authority), None),
authority,
oidc_issuer: None,
});
}
Err(anyhow!("no tailnet discovery metadata found for {domain}"))
}
pub fn email_domain(email: &str) -> Result<String> {
let trimmed = email.trim();
let (_, domain) = trimmed
.rsplit_once('@')
.ok_or_else(|| anyhow!("email address must include a domain"))?;
let domain = domain.trim().trim_matches('.').to_ascii_lowercase();
if domain.is_empty() {
return Err(anyhow!("email address must include a domain"));
}
Ok(domain)
}
pub fn inferred_provider(
authority: Option<&str>,
explicit: Option<&TailnetProvider>,
) -> TailnetProvider {
if matches!(explicit, Some(TailnetProvider::Burrow)) {
return TailnetProvider::Burrow;
}
if authority.is_some_and(is_managed_tailscale_authority) {
return TailnetProvider::Tailscale;
}
TailnetProvider::Headscale
}
async fn discover_well_known(client: &Client, base_url: &Url) -> Result<Option<TailnetDiscovery>> {
let url = base_url
.join(TAILNET_DISCOVERY_PATH)
.context("failed to build tailnet discovery URL")?;
debug!(%url, "requesting tailnet well-known document");
let response = client
.get(url)
.header("accept", "application/json")
.send()
.await
.context("tailnet well-known request failed")?;
match response.status() {
StatusCode::OK => response
.json::<TailnetDiscovery>()
.await
.context("invalid tailnet discovery document")
.map(Some),
StatusCode::NOT_FOUND => Ok(None),
status => Err(anyhow!("tailnet well-known lookup failed with HTTP {status}")),
}
}
async fn discover_webfinger(client: &Client, email: &str, base_url: &Url) -> Result<Option<String>> {
let mut url = base_url
.join(WEBFINGER_PATH)
.context("failed to build webfinger URL")?;
url.query_pairs_mut()
.append_pair("resource", &format!("acct:{email}"))
.append_pair("rel", TAILNET_DISCOVERY_REL);
debug!(%email, url = %url, "requesting tailnet webfinger document");
let response = client
.get(url)
.header("accept", "application/jrd+json, application/json")
.send()
.await
.context("tailnet webfinger request failed")?;
match response.status() {
StatusCode::OK => {
let document = response
.json::<WebFingerDocument>()
.await
.context("invalid webfinger document")?;
Ok(document
.links
.into_iter()
.find(|link| link.rel == TAILNET_DISCOVERY_REL)
.and_then(|link| link.href)
.filter(|href| !href.trim().is_empty()))
}
StatusCode::NOT_FOUND => Ok(None),
status => Err(anyhow!("tailnet webfinger lookup failed with HTTP {status}")),
}
}
async fn probe_url(
client: &Client,
url: Url,
authority: &str,
summary: &str,
) -> Result<Option<TailnetAuthorityProbe>> {
let response = match client
.get(url)
.header("accept", "application/json")
.send()
.await
{
Ok(response) => response,
Err(_) => return Ok(None),
};
let status = response.status();
if !status.is_success() {
return Ok(None);
}
let detail = response.text().await.unwrap_or_default().trim().to_owned();
Ok(Some(TailnetAuthorityProbe {
authority: authority.to_owned(),
status_code: i32::from(status.as_u16()),
summary: summary.to_owned(),
detail,
reachable: true,
}))
}
#[cfg(test)]
mod tests {
use axum::{routing::get, Router};
use serde_json::json;
use tokio::net::TcpListener;
use super::*;
#[test]
fn extracts_domain_from_email() {
assert_eq!(email_domain("Contact@Burrow.net").unwrap(), "burrow.net");
assert!(email_domain("contact").is_err());
}
#[test]
fn detects_managed_tailscale_authority() {
assert!(is_managed_tailscale_authority("controlplane.tailscale.com"));
assert!(is_managed_tailscale_authority("https://controlplane.tailscale.com/"));
assert!(!is_managed_tailscale_authority("https://ts.burrow.net"));
}
#[tokio::test]
async fn discovers_from_well_known_document() -> Result<()> {
let router = Router::new().route(
TAILNET_DISCOVERY_PATH,
get(|| async {
axum::Json(json!({
"domain": "burrow.net",
"provider": "headscale",
"authority": "https://ts.burrow.net",
"oidc_issuer": "https://auth.burrow.net/application/o/ts/"
}))
}),
);
let listener = TcpListener::bind("127.0.0.1:0").await?;
let base_url = Url::parse(&format!("http://{}", listener.local_addr()?))?;
let server = tokio::spawn(async move { axum::serve(listener, router).await });
let client = Client::builder().build()?;
let discovery = discover_tailnet_at(&client, "contact@burrow.net", &base_url).await?;
assert_eq!(discovery.provider, TailnetProvider::Headscale);
assert_eq!(discovery.authority, "https://ts.burrow.net");
assert_eq!(discovery.domain, "burrow.net");
server.abort();
Ok(())
}
#[tokio::test]
async fn falls_back_to_webfinger_authority() -> Result<()> {
let router = Router::new()
.route(
TAILNET_DISCOVERY_PATH,
get(|| async { (StatusCode::NOT_FOUND, "") }),
)
.route(
WEBFINGER_PATH,
get(|| async {
axum::Json(json!({
"subject": "acct:contact@burrow.net",
"links": [
{
"rel": TAILNET_DISCOVERY_REL,
"href": "https://ts.burrow.net"
}
]
}))
}),
);
let listener = TcpListener::bind("127.0.0.1:0").await?;
let base_url = Url::parse(&format!("http://{}", listener.local_addr()?))?;
let server = tokio::spawn(async move { axum::serve(listener, router).await });
let client = Client::builder().build()?;
let discovery = discover_tailnet_at(&client, "contact@burrow.net", &base_url).await?;
assert_eq!(discovery.provider, TailnetProvider::Headscale);
assert_eq!(discovery.authority, "https://ts.burrow.net");
server.abort();
Ok(())
}
#[tokio::test]
async fn probes_custom_authority() -> Result<()> {
let router = Router::new().route("/health", get(|| async { "ok" }));
let listener = TcpListener::bind("127.0.0.1:0").await?;
let authority = format!("http://{}", listener.local_addr()?);
let server = tokio::spawn(async move { axum::serve(listener, router).await });
let status = probe_tailnet_authority(&authority).await?;
assert_eq!(status.authority, authority);
assert_eq!(status.status_code, 200);
assert!(status.reachable);
server.abort();
Ok(())
}
}
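The discovery flow keys everything off the email's domain. A dependency-free restatement of `email_domain` (using `Option` in place of `anyhow::Result` so the sketch compiles without the crate) shows the normalization steps:

```rust
// Standalone restatement of email_domain from the diff above: take the text
// after the *last* '@', trim whitespace and stray dots, and lowercase it.
// Option stands in for anyhow::Result to keep this sketch dependency-free.
fn email_domain(email: &str) -> Option<String> {
    let (_, domain) = email.trim().rsplit_once('@')?;
    let domain = domain.trim().trim_matches('.').to_ascii_lowercase();
    if domain.is_empty() {
        None
    } else {
        Some(domain)
    }
}

fn main() {
    assert_eq!(email_domain("Contact@Burrow.net").as_deref(), Some("burrow.net"));
    assert_eq!(email_domain("contact"), None);
    assert_eq!(email_domain("user@"), None);
    println!("ok");
}
```

`rsplit_once` matters here: for a pathological local part containing `@`, the split happens at the final `@`, so the domain is still recovered.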

burrow/src/control/mod.rs Normal file

@@ -0,0 +1,255 @@
pub mod config;
pub mod discovery;
use std::collections::BTreeMap;
use serde::{Deserialize, Serialize};
use serde_json::Value;
pub use config::{TailnetConfig, TailnetProvider};
pub use discovery::{TailnetDiscovery, TAILNET_DISCOVERY_REL};
pub const BURROW_CAPABILITY_VERSION: i32 = 1;
pub const BURROW_TAILNET_DOMAIN: &str = "burrow.net";
pub type NodeCapMap = BTreeMap<String, Vec<Value>>;
pub type PeerCapMap = BTreeMap<String, Vec<Value>>;
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct Hostinfo {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub hostname: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub os: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub os_version: Option<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub services: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub request_tags: Vec<String>,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct UserProfile {
pub id: i64,
pub login_name: String,
pub display_name: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub profile_pic_url: Option<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub groups: Vec<String>,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct RegisterAuth {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub auth_key: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub oauth_access_token: Option<String>,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
pub struct Node {
pub id: i64,
pub stable_id: String,
pub name: String,
pub user_id: i64,
pub node_key: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub machine_key: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub disco_key: Option<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub addresses: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub allowed_ips: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub endpoints: Vec<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub home_derp: Option<i32>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub hostinfo: Option<Hostinfo>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub tags: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub primary_routes: Vec<String>,
#[serde(default = "default_capability_version")]
pub cap_version: i32,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub cap_map: NodeCapMap,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub peer_cap_map: PeerCapMap,
#[serde(default)]
pub machine_authorized: bool,
#[serde(default)]
pub node_key_expired: bool,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub created_at: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub updated_at: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub last_seen: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub online: Option<bool>,
}
impl Node {
pub fn preferred_name(request: &RegisterRequest) -> String {
if let Some(name) = request.name.as_deref() {
return name.to_owned();
}
if let Some(hostname) = request
.hostinfo
.as_ref()
.and_then(|hostinfo| hostinfo.hostname.as_deref())
{
return hostname.to_owned();
}
format!("node-{}", short_key(&request.node_key))
}
pub fn normalized_allowed_ips(request: &RegisterRequest) -> Vec<String> {
if request.allowed_ips.is_empty() {
return request.addresses.clone();
}
request.allowed_ips.clone()
}
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct RegisterRequest {
#[serde(default = "default_capability_version")]
pub version: i32,
pub node_key: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub old_node_key: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub machine_key: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub disco_key: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub auth: Option<RegisterAuth>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub expiry: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub followup: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub hostinfo: Option<Hostinfo>,
#[serde(default)]
pub ephemeral: bool,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub tailnet: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub name: Option<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub addresses: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub allowed_ips: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub endpoints: Vec<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub home_derp: Option<i32>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub tags: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub primary_routes: Vec<String>,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub cap_map: NodeCapMap,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub peer_cap_map: PeerCapMap,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
pub struct RegisterResponse {
pub user: UserProfile,
pub node: Node,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub auth_url: Option<String>,
pub machine_authorized: bool,
pub node_key_expired: bool,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct MapRequest {
#[serde(default = "default_capability_version")]
pub version: i32,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub compress: Option<String>,
#[serde(default)]
pub keep_alive: bool,
pub node_key: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub disco_key: Option<String>,
#[serde(default)]
pub stream: bool,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub hostinfo: Option<Hostinfo>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub map_session_handle: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub map_session_seq: Option<i64>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub endpoints: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub debug_flags: Vec<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub connection_handle: Option<String>,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct DnsConfig {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub resolvers: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub search_domains: Vec<String>,
#[serde(default)]
pub magic_dns: bool,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct PacketFilter {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub sources: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub destinations: Vec<String>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub protocols: Vec<String>,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
pub struct MapResponse {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub map_session_handle: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub seq: Option<i64>,
pub node: Node,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub peers: Vec<Node>,
pub domain: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub dns: Option<DnsConfig>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub packet_filters: Vec<PacketFilter>,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct LocalAuthRequest {
pub identifier: String,
pub password: String,
}
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, Eq)]
pub struct LocalAuthResponse {
pub access_token: String,
pub user: UserProfile,
}
fn default_capability_version() -> i32 {
BURROW_CAPABILITY_VERSION
}
fn short_key(key: &str) -> String {
key.chars().take(8).collect()
}
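`Node::preferred_name` above resolves a display name through a fallback chain: an explicit `name` on the request, then the `Hostinfo` hostname, then a synthetic `node-` prefix plus the first eight characters of the node key. A standalone sketch of just that chain (flattened arguments instead of the `RegisterRequest` struct):

```rust
// Standalone sketch of Node::preferred_name's fallback chain. The real code
// reads these fields off a RegisterRequest; here they are plain arguments.
fn preferred_name(name: Option<&str>, hostname: Option<&str>, node_key: &str) -> String {
    if let Some(name) = name {
        return name.to_owned();
    }
    if let Some(hostname) = hostname {
        return hostname.to_owned();
    }
    // Mirrors short_key: the first 8 chars of the key, not the first 8 bytes.
    format!("node-{}", node_key.chars().take(8).collect::<String>())
}

fn main() {
    assert_eq!(preferred_name(Some("laptop"), Some("ignored"), "abc"), "laptop");
    assert_eq!(preferred_name(None, Some("burrow-phone"), "abc"), "burrow-phone");
    assert_eq!(preferred_name(None, None, "abcdefghij"), "node-abcdefgh");
    println!("ok");
}
```

Using `chars().take(8)` rather than byte slicing avoids panicking on a key that contains multi-byte UTF-8, even though real node keys are ASCII.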


@@ -1,11 +1,11 @@
use std::{
ffi::{c_char, CStr},
path::PathBuf,
sync::Arc,
sync::{Arc, Mutex},
thread,
};
use once_cell::sync::OnceCell;
use once_cell::sync::{Lazy, OnceCell};
use tokio::{
runtime::{Builder, Handle},
sync::Notify,
@@ -14,25 +14,35 @@ use tracing::error;
use crate::daemon::daemon_main;
static BURROW_NOTIFY: OnceCell<Arc<Notify>> = OnceCell::new();
static BURROW_HANDLE: OnceCell<Handle> = OnceCell::new();
static BURROW_READY: OnceCell<()> = OnceCell::new();
static BURROW_SPAWN_LOCK: Lazy<Mutex<()>> = Lazy::new(|| Mutex::new(()));
#[no_mangle]
pub unsafe extern "C" fn spawn_in_process(path: *const c_char, db_path: *const c_char) {
let path_buf = if path.is_null() {
None
} else {
Some(PathBuf::from(CStr::from_ptr(path).to_str().unwrap()))
};
let db_path_buf = if db_path.is_null() {
None
} else {
Some(PathBuf::from(CStr::from_ptr(db_path).to_str().unwrap()))
};
spawn_in_process_with_paths(path_buf, db_path_buf);
}
pub fn spawn_in_process_with_paths(path_buf: Option<PathBuf>, db_path_buf: Option<PathBuf>) {
crate::tracing::initialize();
let notify = BURROW_NOTIFY.get_or_init(|| Arc::new(Notify::new()));
let _guard = BURROW_SPAWN_LOCK.lock().unwrap();
if BURROW_READY.get().is_some() {
return;
}
let notify = Arc::new(Notify::new());
let handle = BURROW_HANDLE.get_or_init(|| {
let path_buf = if path.is_null() {
None
} else {
Some(PathBuf::from(CStr::from_ptr(path).to_str().unwrap()))
};
let db_path_buf = if db_path.is_null() {
None
} else {
Some(PathBuf::from(CStr::from_ptr(db_path).to_str().unwrap()))
};
let sender = notify.clone();
let (handle_tx, handle_rx) = tokio::sync::oneshot::channel();
@@ -62,4 +72,5 @@ pub unsafe extern "C" fn spawn_in_process(path: *const c_char, db_path: *const c
let receiver = notify.clone();
handle.block_on(async move { receiver.notified().await });
let _ = BURROW_READY.set(());
}


@@ -3,32 +3,35 @@ use std::{
sync::Arc,
};
use anyhow::{anyhow, Context, Result};
use anyhow::Result;
use rusqlite::Connection;
use tokio::{
sync::{mpsc, watch, RwLock},
task::JoinHandle,
};
use tokio::sync::{mpsc, watch, RwLock};
use tokio_stream::wrappers::ReceiverStream;
use tonic::{Request, Response, Status as RspStatus};
use tracing::warn;
use tun::{tokio::TunInterface, TunOptions};
use tracing::{debug, info, warn};
use tun::tokio::TunInterface;
use super::rpc::{
grpc_defs::{
networks_server::Networks, tunnel_server::Tunnel, Empty, Network, NetworkDeleteRequest,
NetworkListResponse, NetworkReorderRequest, NetworkType, State as RPCTunnelState,
TunnelConfigurationResponse, TunnelStatusResponse,
use super::{
rpc::grpc_defs::{
networks_server::Networks, tailnet_control_server::TailnetControl, tunnel_server::Tunnel,
Empty, Network, NetworkDeleteRequest, NetworkListResponse, NetworkReorderRequest,
State as RPCTunnelState, TailnetDiscoverRequest, TailnetDiscoverResponse,
TailnetProbeRequest, TailnetProbeResponse, TunnelConfigurationResponse, TunnelPacket,
TunnelStatusResponse,
},
ServerConfig,
runtime::{tailnet_helper_request, ActiveTunnel, ResolvedTunnel},
};
use crate::{
auth::server::tailscale::{
packet_socket_path, TailscaleBridgeManager,
TailscaleLoginStartRequest as BridgeLoginStartRequest, TailscaleLoginStatus,
},
control::discovery,
daemon::rpc::ServerConfig,
database::{add_network, delete_network, get_connection, list_networks, reorder_network},
tor::{self, Config as TorConfig, TorHandle},
wireguard::{Config as WireGuardConfig, Interface as WireGuardInterface},
};
#[derive(Debug, Clone, PartialEq, Eq)]
#[derive(Debug, Clone)]
enum RunState {
Running,
Idle,
@@ -43,167 +46,25 @@ impl RunState {
}
}
#[derive(Clone, Debug, PartialEq, Eq)]
enum RuntimeIdentity {
DefaultWireGuard,
Network { id: i32, network_type: NetworkType },
}
#[derive(Clone, Debug)]
enum ResolvedTunnel {
WireGuard {
identity: RuntimeIdentity,
config: WireGuardConfig,
},
Tor {
identity: RuntimeIdentity,
config: TorConfig,
},
}
impl ResolvedTunnel {
fn from_networks(networks: &[Network], fallback: &WireGuardConfig) -> Result<Self> {
let Some(network) = networks.first() else {
return Ok(Self::WireGuard {
identity: RuntimeIdentity::DefaultWireGuard,
config: fallback.clone(),
});
};
let identity = RuntimeIdentity::Network {
id: network.id,
network_type: network.r#type(),
};
match network.r#type() {
NetworkType::WireGuard => {
let payload = String::from_utf8(network.payload.clone())
.context("wireguard payload must be valid UTF-8")?;
let config = WireGuardConfig::from_content_fmt(&payload, "ini")?;
Ok(Self::WireGuard { identity, config })
}
NetworkType::Tor => {
let config = TorConfig::from_payload(&network.payload)?;
Ok(Self::Tor { identity, config })
}
NetworkType::HackClub => {
Err(anyhow!("HackClub runtime is not available on this branch"))
}
}
}
fn identity(&self) -> &RuntimeIdentity {
match self {
Self::WireGuard { identity, .. } | Self::Tor { identity, .. } => identity,
}
}
fn server_config(&self) -> Result<ServerConfig> {
match self {
Self::WireGuard { config, .. } => ServerConfig::try_from(config),
Self::Tor { config, .. } => Ok(ServerConfig {
address: config.address.clone(),
name: config.tun_name.clone(),
mtu: config.mtu.map(|mtu| mtu as i32),
}),
}
}
async fn start(self, tun_interface: Arc<RwLock<Option<TunInterface>>>) -> Result<ActiveTunnel> {
match self {
Self::WireGuard { identity, config } => {
let tun = TunOptions::new()
.address(config.interface.address.clone())
.open()?;
tun_interface.write().await.replace(tun);
let mut interface: WireGuardInterface = config.try_into()?;
interface.set_tun_ref(tun_interface.clone()).await;
let interface = Arc::new(RwLock::new(interface));
let run_interface = interface.clone();
let task = tokio::spawn(async move {
let guard = run_interface.read().await;
guard.run().await
});
Ok(ActiveTunnel::WireGuard { identity, interface, task })
}
Self::Tor { identity, config } => {
let mut tun_options = TunOptions::new().address(config.address.clone());
if let Some(name) = config.tun_name.as_deref() {
tun_options = tun_options.name(name);
}
let tun = tun_options.open()?;
tun_interface.write().await.replace(tun);
match tor::spawn(config).await {
Ok(handle) => Ok(ActiveTunnel::Tor { identity, handle }),
Err(err) => {
tun_interface.write().await.take();
Err(err)
}
}
}
}
}
}
enum ActiveTunnel {
WireGuard {
identity: RuntimeIdentity,
interface: Arc<RwLock<WireGuardInterface>>,
task: JoinHandle<Result<()>>,
},
Tor {
identity: RuntimeIdentity,
handle: TorHandle,
},
}
impl ActiveTunnel {
fn identity(&self) -> &RuntimeIdentity {
match self {
Self::WireGuard { identity, .. } | Self::Tor { identity, .. } => identity,
}
}
async fn shutdown(self, tun_interface: &Arc<RwLock<Option<TunInterface>>>) -> Result<()> {
match self {
Self::WireGuard { interface, task, .. } => {
interface.read().await.remove_tun().await;
let task_result = task.await;
tun_interface.write().await.take();
task_result??;
Ok(())
}
Self::Tor { handle, .. } => {
let result = handle.shutdown().await;
tun_interface.write().await.take();
result
}
}
}
}
#[derive(Clone)]
pub struct DaemonRPCServer {
tun_interface: Arc<RwLock<Option<TunInterface>>>,
default_config: Arc<RwLock<WireGuardConfig>>,
db_path: Option<PathBuf>,
wg_state_chan: (watch::Sender<RunState>, watch::Receiver<RunState>),
network_update_chan: (watch::Sender<()>, watch::Receiver<()>),
active_tunnel: Arc<RwLock<Option<ActiveTunnel>>>,
tailnet_login: TailscaleBridgeManager,
}
impl DaemonRPCServer {
pub fn new(config: Arc<RwLock<WireGuardConfig>>, db_path: Option<&Path>) -> Result<Self> {
pub fn new(db_path: Option<&Path>) -> Result<Self> {
Ok(Self {
tun_interface: Arc::new(RwLock::new(None)),
default_config: config,
db_path: db_path.map(Path::to_owned),
wg_state_chan: watch::channel(RunState::Idle),
network_update_chan: watch::channel(()),
active_tunnel: Arc::new(RwLock::new(None)),
tailnet_login: TailscaleBridgeManager::default(),
})
}
@@ -222,20 +83,25 @@ impl DaemonRPCServer {
async fn resolve_tunnel(&self) -> Result<ResolvedTunnel, RspStatus> {
let conn = self.get_connection()?;
let networks = list_networks(&conn).map_err(proc_err)?;
let fallback = self.default_config.read().await.clone();
ResolvedTunnel::from_networks(&networks, &fallback).map_err(proc_err)
ResolvedTunnel::from_networks(&networks).map_err(proc_err)
}
async fn current_tunnel_configuration(&self) -> Result<TunnelConfigurationResponse, RspStatus> {
let config = self
.resolve_tunnel()
.await?
.server_config()
.map_err(proc_err)?;
Ok(TunnelConfigurationResponse {
addresses: config.address,
mtu: config.mtu.unwrap_or(1500),
})
let config = {
let active = self.active_tunnel.read().await;
active
.as_ref()
.map(|tunnel| tunnel.server_config().clone())
};
let config = match config {
Some(config) => config,
None => self
.resolve_tunnel()
.await?
.server_config()
.map_err(proc_err)?,
};
Ok(configuration_rsp(config))
}
async fn stop_active_tunnel(&self) -> Result<bool, RspStatus> {
@@ -254,8 +120,18 @@ impl DaemonRPCServer {
async fn replace_active_tunnel(&self, desired: ResolvedTunnel) -> Result<(), RspStatus> {
let _ = self.stop_active_tunnel().await?;
let tailnet_helper = match &desired {
ResolvedTunnel::Tailnet { identity, config } => Some(
self.tailnet_login
.ensure_session(tailnet_helper_request(identity, config))
.await
.map_err(proc_err)?
.helper,
),
_ => None,
};
let active = desired
.start(self.tun_interface.clone())
.start(self.tun_interface.clone(), tailnet_helper)
.await
.map_err(proc_err)?;
self.active_tunnel.write().await.replace(active);
@@ -279,11 +155,34 @@ impl DaemonRPCServer {
Ok(())
}
fn tailnet_bridge_request(
account_name: String,
identity_name: String,
hostname: String,
authority: String,
) -> BridgeLoginStartRequest {
let mut request = BridgeLoginStartRequest {
account_name,
identity_name,
hostname: (!hostname.trim().is_empty()).then_some(hostname),
control_url: Self::tailnet_control_url(&authority),
packet_socket: None,
};
request.packet_socket = Some(packet_socket_path(&request).display().to_string());
request
}
fn tailnet_control_url(authority: &str) -> Option<String> {
let authority = discovery::normalize_authority(authority);
(!discovery::is_managed_tailscale_authority(&authority)).then_some(authority)
}
}
#[tonic::async_trait]
impl Tunnel for DaemonRPCServer {
type TunnelConfigurationStream = ReceiverStream<Result<TunnelConfigurationResponse, RspStatus>>;
type TunnelPacketsStream = ReceiverStream<Result<TunnelPacket, RspStatus>>;
type TunnelStatusStream = ReceiverStream<Result<TunnelStatusResponse, RspStatus>>;
async fn tunnel_configuration(
@@ -309,6 +208,62 @@ impl Tunnel for DaemonRPCServer {
Ok(Response::new(ReceiverStream::new(rx)))
}
async fn tunnel_packets(
&self,
request: Request<tonic::Streaming<TunnelPacket>>,
) -> Result<Response<Self::TunnelPacketsStream>, RspStatus> {
let (packet_tx, mut packet_rx) = {
let guard = self.active_tunnel.read().await;
let Some(active) = guard.as_ref() else {
return Err(RspStatus::failed_precondition("no active tunnel"));
};
active.packet_stream().ok_or_else(|| {
RspStatus::failed_precondition(
"active tunnel does not support packet streaming",
)
})?
};
let (tx, rx) = mpsc::channel(128);
tokio::spawn(async move {
loop {
match packet_rx.recv().await {
Ok(payload) => {
if tx.send(Ok(TunnelPacket { payload })).await.is_err() {
break;
}
}
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => continue,
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
});
let mut inbound = request.into_inner();
tokio::spawn(async move {
loop {
match inbound.message().await {
Ok(Some(packet)) => {
debug!(
"daemon tunnel packet stream received {} bytes from client",
packet.payload.len()
);
if packet_tx.send(packet.payload).await.is_err() {
break;
}
}
Ok(None) => break,
Err(error) => {
warn!("tailnet packet stream receive error: {error}");
break;
}
}
}
});
Ok(Response::new(ReceiverStream::new(rx)))
}
async fn tunnel_start(&self, _request: Request<Empty>) -> Result<Response<Empty>, RspStatus> {
let desired = self.resolve_tunnel().await?;
let already_running = {
@@ -418,13 +373,168 @@ impl Networks for DaemonRPCServer {
}
}
#[tonic::async_trait]
impl TailnetControl for DaemonRPCServer {
async fn discover(
&self,
request: Request<TailnetDiscoverRequest>,
) -> Result<Response<TailnetDiscoverResponse>, RspStatus> {
let request = request.into_inner();
info!(email = %request.email, "daemon tailnet discover RPC received");
let discovery = discovery::discover_tailnet(&request.email)
.await
.map_err(proc_err)?;
info!(
email = %request.email,
authority = %discovery.authority,
provider = ?discovery.provider,
"daemon tailnet discover RPC resolved"
);
Ok(Response::new(TailnetDiscoverResponse {
domain: discovery.domain,
authority: discovery.authority.clone(),
oidc_issuer: discovery.oidc_issuer.unwrap_or_default(),
managed: matches!(
discovery::inferred_provider(Some(&discovery.authority), Some(&discovery.provider)),
crate::control::TailnetProvider::Tailscale
),
}))
}
async fn probe(
&self,
request: Request<TailnetProbeRequest>,
) -> Result<Response<TailnetProbeResponse>, RspStatus> {
let request = request.into_inner();
let status = discovery::probe_tailnet_authority(&request.authority)
.await
.map_err(proc_err)?;
Ok(Response::new(TailnetProbeResponse {
authority: status.authority,
status_code: status.status_code,
summary: status.summary,
detail: status.detail,
reachable: status.reachable,
}))
}
async fn login_start(
&self,
request: Request<super::rpc::grpc_defs::TailnetLoginStartRequest>,
) -> Result<Response<super::rpc::grpc_defs::TailnetLoginStatusResponse>, RspStatus> {
let request = request.into_inner();
info!(
account = %request.account_name,
identity = %request.identity_name,
authority = %request.authority,
"daemon tailnet login start RPC received"
);
let response = self
.tailnet_login
.start_login(Self::tailnet_bridge_request(
request.account_name,
request.identity_name,
request.hostname,
request.authority,
))
.await
.map_err(proc_err)?;
info!(
session_id = %response.session_id,
backend_state = %response.status.backend_state,
running = response.status.running,
needs_login = response.status.needs_login,
auth_url = ?response.status.auth_url,
"daemon tailnet login start RPC resolved"
);
Ok(Response::new(tailnet_login_rsp(
response.session_id,
response.status,
)))
}
async fn login_status(
&self,
request: Request<super::rpc::grpc_defs::TailnetLoginStatusRequest>,
) -> Result<Response<super::rpc::grpc_defs::TailnetLoginStatusResponse>, RspStatus> {
let request = request.into_inner();
info!(session_id = %request.session_id, "daemon tailnet login status RPC received");
let status = self
.tailnet_login
.status(&request.session_id)
.await
.map_err(proc_err)?;
let Some(status) = status else {
return Err(RspStatus::not_found("tailnet login session not found"));
};
info!(
session_id = %request.session_id,
backend_state = %status.backend_state,
running = status.running,
needs_login = status.needs_login,
auth_url = ?status.auth_url,
"daemon tailnet login status RPC resolved"
);
Ok(Response::new(tailnet_login_rsp(request.session_id, status)))
}
async fn login_cancel(
&self,
request: Request<super::rpc::grpc_defs::TailnetLoginCancelRequest>,
) -> Result<Response<Empty>, RspStatus> {
let request = request.into_inner();
let canceled = self
.tailnet_login
.cancel(&request.session_id)
.await
.map_err(proc_err)?;
if !canceled {
return Err(RspStatus::not_found("tailnet login session not found"));
}
Ok(Response::new(Empty {}))
}
}
fn proc_err(err: impl ToString) -> RspStatus {
RspStatus::internal(err.to_string())
}
fn configuration_rsp(config: ServerConfig) -> TunnelConfigurationResponse {
TunnelConfigurationResponse {
addresses: config.address,
mtu: config.mtu.unwrap_or(1500),
routes: config.routes,
dns_servers: config.dns_servers,
search_domains: config.search_domains,
include_default_route: config.include_default_route,
}
}
fn status_rsp(state: RunState) -> TunnelStatusResponse {
TunnelStatusResponse {
state: state.to_rpc().into(),
start: None, // TODO: Add timestamp
}
}
fn tailnet_login_rsp(
session_id: String,
status: TailscaleLoginStatus,
) -> super::rpc::grpc_defs::TailnetLoginStatusResponse {
super::rpc::grpc_defs::TailnetLoginStatusResponse {
session_id,
backend_state: status.backend_state,
auth_url: status.auth_url.unwrap_or_default(),
running: status.running,
needs_login: status.needs_login,
tailnet_name: status.tailnet_name.unwrap_or_default(),
magic_dns_suffix: status.magic_dns_suffix.unwrap_or_default(),
self_dns_name: status.self_dns_name.unwrap_or_default(),
tailnet_ips: status.tailscale_ips,
health: status.health,
}
}


@@ -4,22 +4,23 @@ pub mod apple;
mod instance;
mod net;
pub mod rpc;
mod runtime;
use anyhow::{Error as AhError, Result};
use instance::DaemonRPCServer;
pub use net::{get_socket_path, DaemonClient};
pub use rpc::{DaemonCommand, DaemonResponseData, DaemonStartOptions};
use tokio::{net::UnixListener, sync::Notify};
use tokio_stream::wrappers::UnixListenerStream;
use tonic::transport::Server;
use tracing::info;
use crate::{
daemon::rpc::grpc_defs::{
networks_server::NetworksServer, tailnet_control_server::TailnetControlServer,
tunnel_server::TunnelServer,
},
database::get_connection,
};
pub async fn daemon_main(
@@ -27,12 +28,8 @@ pub async fn daemon_main(
db_path: Option<&Path>,
notify_ready: Option<Arc<Notify>>,
) -> Result<()> {
let _conn = get_connection(db_path)?;
let burrow_server = DaemonRPCServer::new(db_path)?;
let spp = socket_path.clone();
let tmp = get_socket_path();
let sock_path = spp.unwrap_or(Path::new(tmp.as_str()));
@@ -42,17 +39,243 @@ pub async fn daemon_main(
let uds = UnixListener::bind(sock_path)?;
let serve_job = tokio::spawn(async move {
let uds_stream = UnixListenerStream::new(uds);
let tailnet_server = burrow_server.clone();
let _srv = Server::builder()
.add_service(TunnelServer::new(burrow_server.clone()))
.add_service(NetworksServer::new(burrow_server))
.add_service(TailnetControlServer::new(tailnet_server))
.serve_with_incoming(uds_stream)
.await?;
Ok::<(), AhError>(())
});
if let Some(n) = notify_ready {
n.notify_one();
}
info!("Starting daemon...");
tokio::try_join!(serve_job)
.map(|_| ())
.map_err(|e| e.into())
}
#[cfg(test)]
mod tests {
use std::{
path::PathBuf,
time::{SystemTime, UNIX_EPOCH},
};
use anyhow::{anyhow, Result};
use tokio::time::{timeout, Duration};
use super::*;
use crate::daemon::rpc::{
client::BurrowClient,
grpc_defs::{
Empty, Network, NetworkListResponse, NetworkReorderRequest, NetworkType,
TunnelConfigurationResponse, TunnelStatusResponse,
},
};
#[tokio::test]
async fn daemon_tracks_network_priority_via_grpc() -> Result<()> {
let socket_path = temp_path("sock");
let db_path = temp_path("sqlite3");
let ready = Arc::new(Notify::new());
let daemon_ready = ready.clone();
let daemon_socket_path = socket_path.clone();
let daemon_db_path = db_path.clone();
let daemon_task = tokio::spawn(async move {
daemon_main(
Some(daemon_socket_path.as_path()),
Some(daemon_db_path.as_path()),
Some(daemon_ready),
)
.await
});
timeout(Duration::from_secs(5), ready.notified()).await?;
let mut client = timeout(
Duration::from_secs(5),
BurrowClient::from_uds_path(&socket_path),
)
.await??;
let mut config_stream = client
.tunnel_client
.tunnel_configuration(Empty {})
.await?
.into_inner();
let mut network_stream = client
.networks_client
.network_list(Empty {})
.await?
.into_inner();
let mut status_stream = client
.tunnel_client
.tunnel_status(Empty {})
.await?
.into_inner();
let initial_config = next_configuration(&mut config_stream).await?;
assert!(initial_config.addresses.is_empty());
assert_eq!(initial_config.mtu, 1500);
let initial_networks = next_networks(&mut network_stream).await?;
assert!(initial_networks.network.is_empty());
let initial_status = next_status(&mut status_stream).await?;
assert_eq!(
initial_status.state(),
crate::daemon::rpc::grpc_defs::State::Stopped
);
client.tunnel_client.tunnel_start(Empty {}).await?;
let passthrough_status = next_status(&mut status_stream).await?;
assert_eq!(
passthrough_status.state(),
crate::daemon::rpc::grpc_defs::State::Running
);
client.tunnel_client.tunnel_stop(Empty {}).await?;
let stopped_status = next_status(&mut status_stream).await?;
assert_eq!(
stopped_status.state(),
crate::daemon::rpc::grpc_defs::State::Stopped
);
client
.networks_client
.network_add(Network {
id: 1,
r#type: NetworkType::WireGuard.into(),
payload: sample_wireguard_payload(),
})
.await?;
let networks_after_wg = next_networks(&mut network_stream).await?;
assert_eq!(
network_ids(&networks_after_wg),
vec![(1, NetworkType::WireGuard)]
);
let wireguard_config = next_configuration(&mut config_stream).await?;
assert_eq!(
wireguard_config.addresses,
vec!["10.8.0.2/32", "fd00::2/128"]
);
assert_eq!(wireguard_config.mtu, 1420);
client
.networks_client
.network_add(Network {
id: 2,
r#type: NetworkType::WireGuard.into(),
payload: sample_wireguard_payload_with("10.77.0.2/32", 1380),
})
.await?;
let networks_after_second_add = next_networks(&mut network_stream).await?;
assert_eq!(
network_ids(&networks_after_second_add),
vec![(1, NetworkType::WireGuard), (2, NetworkType::WireGuard)]
);
let still_wireguard = next_configuration(&mut config_stream).await?;
assert_eq!(still_wireguard.addresses, wireguard_config.addresses);
client
.networks_client
.network_reorder(NetworkReorderRequest { id: 2, index: 0 })
.await?;
let networks_after_reorder = next_networks(&mut network_stream).await?;
assert_eq!(
network_ids(&networks_after_reorder),
vec![(2, NetworkType::WireGuard), (1, NetworkType::WireGuard)]
);
let second_wireguard_config = next_configuration(&mut config_stream).await?;
assert_eq!(second_wireguard_config.addresses, vec!["10.77.0.2/32"]);
assert_eq!(second_wireguard_config.mtu, 1380);
daemon_task.abort();
let _ = daemon_task.await;
cleanup_path(&socket_path);
cleanup_path(&db_path);
Ok(())
}
fn temp_path(ext: &str) -> PathBuf {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("system time is after unix epoch")
.as_nanos();
std::env::temp_dir().join(format!("burrow-daemon-test-{now}.{ext}"))
}
fn cleanup_path(path: &Path) {
let _ = std::fs::remove_file(path);
}
fn sample_wireguard_payload() -> Vec<u8> {
br#"[Interface]
PrivateKey = OEPVdomeLTxTIBvv3TYsJRge0Hp9NMiY0sIrhT8OWG8=
Address = 10.8.0.2/32, fd00::2/128
ListenPort = 51820
MTU = 1420
[Peer]
PublicKey = 8GaFjVO6c4luCHG4ONO+1bFG8tO+Zz5/Gy+Geht1USM=
PresharedKey = ha7j4BjD49sIzyF9SNlbueK0AMHghlj6+u0G3bzC698=
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = wg.burrow.rs:51820
"#
.to_vec()
}
fn sample_wireguard_payload_with(address: &str, mtu: u16) -> Vec<u8> {
format!(
"[Interface]\nPrivateKey = OEPVdomeLTxTIBvv3TYsJRge0Hp9NMiY0sIrhT8OWG8=\nAddress = {address}\nListenPort = 51820\nMTU = {mtu}\n\n[Peer]\nPublicKey = 8GaFjVO6c4luCHG4ONO+1bFG8tO+Zz5/Gy+Geht1USM=\nPresharedKey = ha7j4BjD49sIzyF9SNlbueK0AMHghlj6+u0G3bzC698=\nAllowedIPs = 0.0.0.0/0, ::/0\nEndpoint = wg.burrow.rs:51820\n"
)
.into_bytes()
}
async fn next_configuration(
stream: &mut tonic::Streaming<TunnelConfigurationResponse>,
) -> Result<TunnelConfigurationResponse> {
timeout(Duration::from_secs(5), stream.message())
.await??
.ok_or_else(|| anyhow!("configuration stream ended unexpectedly"))
}
async fn next_networks(
stream: &mut tonic::Streaming<NetworkListResponse>,
) -> Result<NetworkListResponse> {
timeout(Duration::from_secs(5), stream.message())
.await??
.ok_or_else(|| anyhow!("network stream ended unexpectedly"))
}
async fn next_status(
stream: &mut tonic::Streaming<TunnelStatusResponse>,
) -> Result<TunnelStatusResponse> {
timeout(Duration::from_secs(5), stream.message())
.await??
.ok_or_else(|| anyhow!("status stream ended unexpectedly"))
}
fn network_ids(response: &NetworkListResponse) -> Vec<(i32, NetworkType)> {
response
.network
.iter()
.map(|network| (network.id, network.r#type()))
.collect()
}
}


@@ -11,11 +11,7 @@ use tokio::{
use tracing::{debug, error, info};
use crate::daemon::rpc::{
DaemonCommand, DaemonMessage, DaemonNotification, DaemonRequest, DaemonResponse,
DaemonResponseData,
};


@@ -1,30 +1,45 @@
use anyhow::Result;
use hyper_util::rt::TokioIo;
use std::path::Path;
use tokio::net::UnixStream;
use tonic::transport::{Endpoint, Uri};
use tower::service_fn;
use super::grpc_defs::{
networks_client::NetworksClient, tailnet_control_client::TailnetControlClient,
tunnel_client::TunnelClient,
};
use crate::daemon::get_socket_path;
pub struct BurrowClient<T> {
pub networks_client: NetworksClient<T>,
pub tailnet_client: TailnetControlClient<T>,
pub tunnel_client: TunnelClient<T>,
}
impl BurrowClient<tonic::transport::Channel> {
#[cfg(any(target_os = "linux", target_vendor = "apple"))]
pub async fn from_uds() -> Result<Self> {
Self::from_uds_path(get_socket_path()).await
}
#[cfg(any(target_os = "linux", target_vendor = "apple"))]
pub async fn from_uds_path(path: impl AsRef<Path>) -> Result<Self> {
let socket_path = path.as_ref().to_owned();
let channel = Endpoint::try_from("http://[::]:50051")? // NOTE: this is a hack(?)
.connect_with_connector(service_fn(move |_: Uri| {
let socket_path = socket_path.clone();
async move {
Ok::<_, std::io::Error>(TokioIo::new(UnixStream::connect(&socket_path).await?))
}
}))
.await?;
let nw_client = NetworksClient::new(channel.clone());
let tailnet_client = TailnetControlClient::new(channel.clone());
let tun_client = TunnelClient::new(channel.clone());
Ok(BurrowClient {
networks_client: nw_client,
tailnet_client,
tunnel_client: tun_client,
})
}


@@ -3,7 +3,7 @@ use serde::{Deserialize, Serialize};
use tun::TunOptions;
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "method", content = "params")]
pub enum DaemonCommand {
Start(DaemonStartOptions),
ServerInfo,

Some files were not shown because too many files have changed in this diff.