Add Burrow forge infrastructure and tailnet control plane
parent d1ed826389
commit de25f240d5
51 changed files with 9058 additions and 0 deletions
56 nixos/README.md Normal file

@@ -0,0 +1,56 @@
# Burrow Forge Runbook

This directory contains the Burrow forge host definition and the Hetzner bootstrap shape for `burrow-forge`.

Mail hosting is intentionally not part of this NixOS host in the current plan. Burrow's first mail path is Forward Email with Burrow-owned custom S3 backups; see [`docs/FORWARDEMAIL.md`](../docs/FORWARDEMAIL.md).

## Files

- `hosts/burrow-forge/default.nix`: host entrypoint
- `modules/burrow-forge.nix`: Forgejo, Caddy, PostgreSQL, and admin bootstrap module
- `modules/burrow-forge-runner.nix`: Forgejo Actions runner and agent identity bootstrap
- `modules/burrow-forgejo-nsc.nix`: Namespace-backed ephemeral Forgejo runner services
- `modules/burrow-authentik.nix`: minimal Authentik IdP for Burrow control planes
- `modules/burrow-headscale.nix`: Headscale control plane rooted in Authentik OIDC
- `hetzner-cloud-config.yaml`: desired Hetzner host shape
- `keys/contact_at_burrow_net.pub`: initial operator SSH public key
- `keys/agent_at_burrow_net.pub`: automation SSH public key
- `../Scripts/hetzner-forge.sh`: Hetzner inventory and replace workflow
- `../Scripts/nsc-build-and-upload-image.sh`: temporary Namespace builder -> raw image -> Hetzner snapshot
- `../Scripts/bootstrap-forge-intake.sh`: copy the Forgejo bootstrap password and agent SSH key into `/var/lib/burrow/intake/`
- `../Scripts/check-forge-host.sh`: verify Forgejo, Caddy, the local runner, optional NSC services, and optional Tailnet services after boot
- `../Scripts/cloudflare-upsert-a-record.sh`: upsert DNS-only Cloudflare `A` records for Burrow host cutovers
- `../Scripts/forge-deploy.sh`: remote `nixos-rebuild` entrypoint for the forge host
- `../Scripts/provision-forgejo-nsc.sh`: render Burrow Namespace dispatcher/autoscaler runtime inputs and ensure the default Forgejo scope exists
- `../Scripts/sync-forgejo-nsc-config.sh`: copy intake-backed dispatcher/autoscaler inputs to the host
## Intended Flow

1. Build and upload the raw NixOS image with `Scripts/hetzner-forge.sh build-image` or `Scripts/nsc-build-and-upload-image.sh`.
2. Recreate `burrow-forge` from the latest labeled snapshot with `Scripts/hetzner-forge.sh recreate-from-image --yes`.
3. Run `Scripts/bootstrap-forge-intake.sh` to place the Forgejo bootstrap password file and automation SSH key under `/var/lib/burrow/intake/`.
4. Let `burrow-forgejo-bootstrap.service` create or rotate the initial Forgejo admin account.
5. Let `burrow-forgejo-runner-bootstrap.service` register the self-hosted Forgejo runner and seed Git identity as `agent <agent@burrow.net>`.
6. Run `Scripts/provision-forgejo-nsc.sh` locally, then `Scripts/sync-forgejo-nsc-config.sh` to place the Namespace dispatcher/autoscaler runtime inputs under `/var/lib/burrow/intake/`.
7. Ensure `/var/lib/burrow/intake/authentik.env` exists on the host, and let `services.burrow.headscale` generate `/var/lib/burrow/intake/authentik_headscale_client_secret.txt` on first boot if it is absent.
8. Use `Scripts/cloudflare-upsert-a-record.sh` to point `git.burrow.net`, `burrow.net`, `auth.burrow.net`, `ts.burrow.net`, and `nsc-autoscaler.burrow.net` at the host, with Cloudflare proxying disabled for ACME.
9. Use `Scripts/forge-deploy.sh --allow-dirty` for subsequent remote `nixos-rebuild` runs from the live workspace.
10. Configure Forward Email custom S3 backups for `burrow.net` and `burrow.rs` out of band with `Tools/forwardemail-custom-s3.sh`.
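The numbered flow above can be reviewed as a dry-run shell sequence. This is a hedged sketch: the script names come from this runbook, but it only echoes each invocation rather than running it, and the single-argument shape for `cloudflare-upsert-a-record.sh` is an assumption.

```shell
# Dry-run plan of the bring-up order; echoes each step instead of executing it.
bringup_plan() {
  echo "Scripts/hetzner-forge.sh build-image"
  echo "Scripts/hetzner-forge.sh recreate-from-image --yes"
  echo "Scripts/bootstrap-forge-intake.sh"
  echo "Scripts/provision-forgejo-nsc.sh"
  echo "Scripts/sync-forgejo-nsc-config.sh"
  # DNS cutover names, proxying disabled for ACME (see step 8).
  for name in git.burrow.net burrow.net auth.burrow.net ts.burrow.net nsc-autoscaler.burrow.net; do
    echo "Scripts/cloudflare-upsert-a-record.sh $name"
  done
  echo "Scripts/forge-deploy.sh --allow-dirty"
}
bringup_plan
```

Piping the plan through an operator review (or `tee` into a log) before running the real scripts keeps the destructive `recreate-from-image --yes` step deliberate.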
## Current Constraints

- `burrow-forge` is live on NixOS in `hel1` at `89.167.47.21`, and `Scripts/check-forge-host.sh --expect-nsc` passes locally against that host.
- Public Burrow forge cutover completed on March 15, 2026:
  - `burrow.net`, `git.burrow.net`, and `nsc-autoscaler.burrow.net` now publish public `A` records pointing to `89.167.47.21`
  - HTTP redirects to HTTPS on all three names
  - `https://burrow.net` returns the root forge landing response
  - `https://git.burrow.net` returns the live Forgejo front door
  - `https://nsc-autoscaler.burrow.net` terminates TLS on Caddy and returns the expected application-level `404` for `/`
- The Cloudflare token currently in `intake/cloudflare-token.txt` is account-scoped: `POST /accounts/<account>/tokens/verify` succeeds, while `POST /user/tokens/verify` returns `Invalid API Token`.
- `burrow.rs` still resolves publicly to a Vercel `DEPLOYMENT_NOT_FOUND` response.
- Both domains publish Forward Email MX/TXT records.
- Forward Email custom S3 is live on both domains against the Hetzner `burrow` bucket and the public regional endpoint `https://hel1.your-objectstorage.com`.
- The current Hetzner account contains both:
  - the older Ubuntu bootstrap server in `hil`
  - the live `burrow-forge` NixOS server in `hel1`
- The remaining forge work is follow-on product/integration work, not host bring-up, mail backup wiring, or public DNS cutover.
10 nixos/hetzner-cloud-config.yaml Normal file

@@ -0,0 +1,10 @@
name: burrow-forge
server_type: ccx23
location: hel1
image: ubuntu-24.04
ssh_keys:
  - contact@burrow.net
  - agent@burrow.net
labels:
  project: burrow
  role: forge
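For comparison, the same host shape expressed as a single `hcloud` CLI invocation. Hedged sketch: the flag names follow the upstream `hcloud server create` command and should be checked against your installed CLI version; here the command is only assembled and printed, not run.

```shell
# Assemble the equivalent `hcloud server create` call; print rather than execute.
hcloud_cmd="hcloud server create \
  --name burrow-forge \
  --type ccx23 \
  --location hel1 \
  --image ubuntu-24.04 \
  --ssh-key contact@burrow.net \
  --ssh-key agent@burrow.net \
  --label project=burrow \
  --label role=forge"
echo "$hcloud_cmd"
```

In practice `Scripts/hetzner-forge.sh` owns this shape; the one-liner is only useful for spot-checking what the YAML implies.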
58 nixos/hosts/burrow-forge/default.nix Normal file

@@ -0,0 +1,58 @@
{ self, ... }:

{
  imports = [
    ./hardware-configuration.nix
    ./disko-config.nix
    self.nixosModules.burrow-forge
    self.nixosModules.burrow-forge-runner
    self.nixosModules.burrow-forgejo-nsc
    self.nixosModules.burrow-authentik
    self.nixosModules.burrow-headscale
  ];

  system.stateVersion = "24.11";

  time.timeZone = "America/Los_Angeles";

  nix.settings.experimental-features = [
    "nix-command"
    "flakes"
  ];

  services.burrow.forge = {
    enable = true;
    adminPasswordFile = "/var/lib/burrow/intake/forgejo_pass_contact_at_burrow_net.txt";
    authorizedKeys = [
      (builtins.readFile ../../keys/contact_at_burrow_net.pub)
      (builtins.readFile ../../keys/agent_at_burrow_net.pub)
    ];
  };

  services.burrow.forgeRunner = {
    enable = true;
    sshPrivateKeyFile = "/var/lib/burrow/intake/agent_at_burrow_net_ed25519";
  };

  services.burrow.forgejoNsc = {
    enable = true;
    nscTokenFile = "/var/lib/burrow/intake/forgejo_nsc_token.txt";
    dispatcher = {
      configFile = "/var/lib/burrow/intake/forgejo_nsc_dispatcher.yaml";
    };
    autoscaler = {
      enable = true;
      configFile = "/var/lib/burrow/intake/forgejo_nsc_autoscaler.yaml";
    };
  };

  services.burrow.authentik = {
    enable = true;
    envFile = "/var/lib/burrow/intake/authentik.env";
    headscaleClientSecretFile = "/var/lib/burrow/intake/authentik_headscale_client_secret.txt";
  };

  services.burrow.headscale = {
    enable = true;
  };
}
36 nixos/hosts/burrow-forge/disko-config.nix Normal file

@@ -0,0 +1,36 @@
{ lib, ... }:

{
  disko.devices = {
    disk.main = {
      type = "disk";
      device = lib.mkDefault "/dev/sda";
      imageName = "burrow-forge";
      imageSize = "80G";
      content = {
        type = "gpt";
        partitions = {
          ESP = {
            size = "512M";
            type = "EF00";
            content = {
              type = "filesystem";
              format = "vfat";
              mountpoint = "/boot";
              mountOptions = [ "umask=0077" ];
            };
          };

          root = {
            size = "100%";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/";
            };
          };
        };
      };
    };
  };
}
11 nixos/hosts/burrow-forge/hardware-configuration.nix Normal file

@@ -0,0 +1,11 @@
{ ... }:

{
  # Derived from Hetzner Cloud rescue-mode hardware inspection.
  boot.initrd.availableKernelModules = [
    "ahci"
    "sd_mod"
    "virtio_pci"
    "virtio_scsi"
  ];
}
1 nixos/keys/agent_at_burrow_net.pub Normal file

@@ -0,0 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEN0+tRJy7Y2DW0uGYHb86N2t02WyU5lDNX6FaxBF/G8 agent@burrow.net
1 nixos/keys/contact_at_burrow_net.pub Normal file

@@ -0,0 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO42guJ5QvNMw3k6YKWlQnjcTsc+X4XI9F2GBtl8aHOa
271 nixos/modules/burrow-authentik.nix Normal file

@@ -0,0 +1,271 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.services.burrow.authentik;
  runtimeDir = "/run/burrow-authentik";
  envFile = "${runtimeDir}/authentik.env";
  blueprintDir = "${runtimeDir}/blueprints";
  blueprintFile = "${blueprintDir}/burrow-authentik.yaml";
  postgresVolume = "burrow-authentik-postgresql:/var/lib/postgresql/data";
  dataVolume = "burrow-authentik-data:/data";
  authentikBlueprint = pkgs.writeText "burrow-authentik-blueprint.yaml" ''
    version: 1
    metadata:
      name: Burrow Authentik
      labels:
        blueprints.goauthentik.io/description: Minimal Burrow Authentik applications
    entries:
      - model: authentik_providers_oauth2.scopemapping
        id: burrow-oidc-email
        identifiers:
          name: Burrow OIDC Email
        attrs:
          name: Burrow OIDC Email
          scope_name: email
          description: Verified email mapping for Burrow
          expression: |
            return {
              "email": request.user.email,
              "email_verified": True,
            }

      - model: authentik_providers_oauth2.oauth2provider
        id: burrow-oidc-provider-ts
        identifiers:
          name: Burrow Tailnet
        attrs:
          authorization_flow: !Find [authentik_flows.flow, [slug, default-provider-authorization-implicit-consent]]
          invalidation_flow: !Find [authentik_flows.flow, [slug, default-provider-invalidation-flow]]
          issuer_mode: per_provider
          slug: ${cfg.headscaleProviderSlug}
          client_type: confidential
          client_id: ${cfg.headscaleDomain}
          client_secret: !Env [AUTHENTIK_BURROW_TS_CLIENT_SECRET, ""]
          include_claims_in_id_token: true
          redirect_uris:
            - matching_mode: strict
              url: https://${cfg.headscaleDomain}/oidc/callback
          property_mappings:
            - !Find [authentik_providers_oauth2.scopemapping, [managed, goauthentik.io/providers/oauth2/scope-openid]]
            - !KeyOf burrow-oidc-email
            - !Find [authentik_providers_oauth2.scopemapping, [managed, goauthentik.io/providers/oauth2/scope-profile]]
          signing_key: !Find [authentik_crypto.certificatekeypair, [name, authentik Self-signed Certificate]]

      - model: authentik_core.application
        identifiers:
          slug: ${cfg.headscaleProviderSlug}
        attrs:
          name: Burrow Tailnet
          slug: ${cfg.headscaleProviderSlug}
          provider: !KeyOf burrow-oidc-provider-ts
          meta_launch_url: https://${cfg.headscaleDomain}/
  '';
in
{
  options.services.burrow.authentik = {
    enable = lib.mkEnableOption "the Burrow Authentik identity provider";

    domain = lib.mkOption {
      type = lib.types.str;
      default = "auth.burrow.net";
      description = "Public Authentik domain.";
    };

    port = lib.mkOption {
      type = lib.types.port;
      default = 9002;
      description = "Local Authentik HTTP listen port.";
    };

    image = lib.mkOption {
      type = lib.types.str;
      default = "ghcr.io/goauthentik/server:2026.2.1";
      description = "Authentik container image reference.";
    };

    envFile = lib.mkOption {
      type = lib.types.str;
      default = "/var/lib/burrow/intake/authentik.env";
      description = "Host-local Authentik bootstrap environment file.";
    };

    headscaleDomain = lib.mkOption {
      type = lib.types.str;
      default = "ts.burrow.net";
      description = "Headscale public domain used for the bundled OIDC client.";
    };

    headscaleProviderSlug = lib.mkOption {
      type = lib.types.str;
      default = "ts";
      description = "Authentik provider slug for Headscale.";
    };

    headscaleClientSecretFile = lib.mkOption {
      type = lib.types.str;
      default = "/var/lib/burrow/intake/authentik_headscale_client_secret.txt";
      description = "Host-local file containing the Authentik Headscale OIDC client secret.";
    };
  };

  config = lib.mkIf cfg.enable {
    virtualisation.podman.enable = true;

    systemd.tmpfiles.rules = [
      "d ${runtimeDir} 0750 root root -"
      "d ${blueprintDir} 0750 root root -"
    ];

    systemd.services.burrow-authentik-runtime = {
      description = "Render the Burrow Authentik runtime environment";
      before = [
        "podman-burrow-authentik-postgresql.service"
        "podman-burrow-authentik-server.service"
        "podman-burrow-authentik-worker.service"
      ];
      wantedBy = [
        "podman-burrow-authentik-postgresql.service"
        "podman-burrow-authentik-server.service"
        "podman-burrow-authentik-worker.service"
      ];
      after = lib.optionals config.services.burrow.headscale.enable [
        "burrow-headscale-client-secret.service"
      ];
      wants = lib.optionals config.services.burrow.headscale.enable [
        "burrow-headscale-client-secret.service"
      ];
      path = [ pkgs.coreutils ];
      serviceConfig = {
        Type = "oneshot";
        User = "root";
        Group = "root";
        RemainAfterExit = true;
      };
      script = ''
        set -euo pipefail

        if [ ! -s ${lib.escapeShellArg cfg.envFile} ]; then
          echo "Authentik env file missing: ${cfg.envFile}" >&2
          exit 1
        fi

        if [ ! -s ${lib.escapeShellArg cfg.headscaleClientSecretFile} ]; then
          echo "Headscale client secret missing: ${cfg.headscaleClientSecretFile}" >&2
          exit 1
        fi

        install -d -m 0750 -o root -g root ${runtimeDir} ${blueprintDir}
        install -m 0644 -o root -g root ${authentikBlueprint} ${blueprintFile}

        source ${lib.escapeShellArg cfg.envFile}

        read_secret() {
          tr -d '\r\n' < "$1"
        }

        cat > ${envFile} <<EOF
        PG_DB=authentik
        PG_USER=authentik
        PG_PASS=$PG_PASS
        POSTGRES_DB=authentik
        POSTGRES_USER=authentik
        POSTGRES_PASSWORD=$PG_PASS
        AUTHENTIK_POSTGRESQL__HOST=127.0.0.1
        AUTHENTIK_POSTGRESQL__PORT=5433
        AUTHENTIK_POSTGRESQL__NAME=authentik
        AUTHENTIK_POSTGRESQL__USER=authentik
        AUTHENTIK_POSTGRESQL__PASSWORD=$PG_PASS
        AUTHENTIK_LISTEN__HTTP=0.0.0.0:${toString cfg.port}
        AUTHENTIK_SECRET_KEY=$AUTHENTIK_SECRET_KEY
        AUTHENTIK_BOOTSTRAP_PASSWORD=$AUTHENTIK_BOOTSTRAP_PASSWORD
        AUTHENTIK_BOOTSTRAP_TOKEN=$AUTHENTIK_BOOTSTRAP_TOKEN
        AUTHENTIK_BURROW_TS_CLIENT_SECRET=$(read_secret ${lib.escapeShellArg cfg.headscaleClientSecretFile})
        EOF
        chown root:root ${envFile}
        chmod 0600 ${envFile}
      '';
    };

    virtualisation.oci-containers.containers."burrow-authentik-postgresql" = {
      image = "docker.io/library/postgres:16-alpine";
      autoStart = true;
      environmentFiles = [ envFile ];
      cmd = [
        "-c"
        "port=5433"
        "-c"
        "listen_addresses=127.0.0.1"
      ];
      volumes = [ postgresVolume ];
      extraOptions = [
        "--network=host"
        "--pull=always"
      ];
    };

    virtualisation.oci-containers.containers."burrow-authentik-server" = {
      image = cfg.image;
      autoStart = true;
      cmd = [ "server" ];
      environmentFiles = [ envFile ];
      volumes = [
        dataVolume
        "${blueprintFile}:/blueprints/burrow-authentik.yaml:ro"
      ];
      extraOptions = [
        "--network=host"
        "--pull=always"
      ];
    };

    virtualisation.oci-containers.containers."burrow-authentik-worker" = {
      image = cfg.image;
      autoStart = true;
      cmd = [ "worker" ];
      environmentFiles = [ envFile ];
      volumes = [
        dataVolume
        "${blueprintFile}:/blueprints/burrow-authentik.yaml:ro"
      ];
      extraOptions = [
        "--network=host"
        "--pull=always"
        "--user=root"
      ];
    };

    systemd.services.burrow-authentik-ready = {
      description = "Wait for Burrow Authentik to become ready";
      after = [ "podman-burrow-authentik-server.service" ];
      wants = [ "podman-burrow-authentik-server.service" ];
      wantedBy = [ "multi-user.target" ];
      path = [
        pkgs.coreutils
        pkgs.curl
      ];
      serviceConfig = {
        Type = "oneshot";
        User = "root";
        Group = "root";
      };
      script = ''
        set -euo pipefail

        for _ in $(seq 1 90); do
          if ${pkgs.curl}/bin/curl -fsS http://127.0.0.1:${toString cfg.port}/-/health/ready/ >/dev/null; then
            exit 0
          fi
          sleep 2
        done

        echo "Authentik did not become ready on ${cfg.domain}" >&2
        exit 1
      '';
    };

    services.caddy.virtualHosts."${cfg.domain}".extraConfig = ''
      encode gzip zstd
      reverse_proxy 127.0.0.1:${toString cfg.port}
    '';
  };
}
213 nixos/modules/burrow-forge-runner.nix Normal file

@@ -0,0 +1,213 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.services.burrow.forgeRunner;
  runnerPkg = pkgs.forgejo-runner;
  stateDir = cfg.stateDir;
  runnerFile = "${stateDir}/.runner";
  configFile = "${stateDir}/runner.yaml";
  labelsCsv = lib.concatStringsSep "," (map (label: "${label}:host") cfg.labels);
  # `or` only guards missing attributes; the option defaults to null, so map
  # null to the empty string explicitly.
  sshPrivateKeyFile = if cfg.sshPrivateKeyFile == null then "" else cfg.sshPrivateKeyFile;
in
{
  options.services.burrow.forgeRunner = {
    enable = lib.mkEnableOption "the Burrow Forgejo Actions runner";

    instanceUrl = lib.mkOption {
      type = lib.types.str;
      default = "http://127.0.0.1:3000";
      description = "Forgejo base URL used by the local runner for registration and job polling.";
    };

    labels = lib.mkOption {
      type = with lib.types; listOf str;
      default = [ "burrow-forge" ];
      description = "Runner labels exposed to Forgejo Actions.";
    };

    name = lib.mkOption {
      type = lib.types.str;
      default = "burrow-forge-agent";
      description = "Runner name shown in Forgejo.";
    };

    capacity = lib.mkOption {
      type = lib.types.int;
      default = 1;
      description = "Maximum concurrent jobs on this runner.";
    };

    stateDir = lib.mkOption {
      type = lib.types.str;
      default = "/var/lib/forgejo-runner-agent";
      description = "Persistent runner state directory.";
    };

    user = lib.mkOption {
      type = lib.types.str;
      default = "forgejo-runner-agent";
      description = "System user that runs the Forgejo runner.";
    };

    group = lib.mkOption {
      type = lib.types.str;
      default = "forgejo-runner-agent";
      description = "System group that runs the Forgejo runner.";
    };

    forgejoConfigFile = lib.mkOption {
      type = lib.types.str;
      default = "/var/lib/forgejo/custom/conf/app.ini";
      description = "Forgejo app.ini path used to generate runner tokens.";
    };

    gitUserName = lib.mkOption {
      type = lib.types.str;
      default = "agent";
      description = "Git commit author name for automation on the forge host.";
    };

    gitUserEmail = lib.mkOption {
      type = lib.types.str;
      default = "agent@burrow.net";
      description = "Git commit author email for automation on the forge host.";
    };

    sshPrivateKeyFile = lib.mkOption {
      type = with lib.types; nullOr str;
      default = null;
      description = "Optional host-local path to the agent SSH private key copied into the runner home.";
    };
  };

  config = lib.mkIf cfg.enable {
    users.groups.${cfg.group} = { };

    users.users.${cfg.user} = {
      isSystemUser = true;
      group = cfg.group;
      description = "Burrow Forgejo Actions runner";
      home = cfg.stateDir;
      createHome = true;
      shell = pkgs.bashInteractive;
    };

    environment.systemPackages = with pkgs; [
      runnerPkg
      bash
      coreutils
      findutils
      git
      git-lfs
      openssh
      python3
      rsync
    ];

    systemd.tmpfiles.rules = [
      "d ${stateDir} 0750 ${cfg.user} ${cfg.group} - -"
    ];

    systemd.services.burrow-forgejo-runner-bootstrap = {
      description = "Bootstrap Burrow Forgejo runner registration";
      after = [ "forgejo.service" "network-online.target" "systemd-tmpfiles-setup.service" ];
      wants = [ "forgejo.service" "network-online.target" "systemd-tmpfiles-setup.service" ];
      before = [ "burrow-forgejo-runner.service" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        Type = "oneshot";
        User = "root";
        Group = "root";
      };
      script = ''
        set -euo pipefail
        umask 077

        install -d -m 0750 -o ${cfg.user} -g ${cfg.group} ${stateDir}
        cat > ${configFile} <<EOF
        runner:
          file: ${runnerFile}
          capacity: ${toString cfg.capacity}
          name: ${cfg.name}
          labels:
        EOF
        for label in ${lib.concatStringsSep " " cfg.labels}; do
          echo "    - $label:host" >> ${configFile}
        done
        cat >> ${configFile} <<'EOF'
        cache:
          enabled: false
        EOF
        chown ${cfg.user}:${cfg.group} ${configFile}
        chmod 0640 ${configFile}

        install -d -m 0700 -o ${cfg.user} -g ${cfg.group} ${stateDir}/.ssh
        ${pkgs.util-linux}/bin/runuser -u ${cfg.user} -- \
          ${pkgs.git}/bin/git config --global user.name ${lib.escapeShellArg cfg.gitUserName}
        ${pkgs.util-linux}/bin/runuser -u ${cfg.user} -- \
          ${pkgs.git}/bin/git config --global user.email ${lib.escapeShellArg cfg.gitUserEmail}

        if [ -n ${lib.escapeShellArg sshPrivateKeyFile} ] && [ -s ${lib.escapeShellArg sshPrivateKeyFile} ]; then
          install -m 0600 -o ${cfg.user} -g ${cfg.group} \
            ${lib.escapeShellArg sshPrivateKeyFile} \
            ${stateDir}/.ssh/id_ed25519
          cat > ${stateDir}/.ssh/config <<EOF
        Host *
          IdentityFile ${stateDir}/.ssh/id_ed25519
          IdentitiesOnly yes
          StrictHostKeyChecking accept-new
        EOF
          chown ${cfg.user}:${cfg.group} ${stateDir}/.ssh/config
          chmod 0600 ${stateDir}/.ssh/config
        fi

        if [ ! -s ${runnerFile} ]; then
          token="$(${pkgs.util-linux}/bin/runuser -u forgejo -- \
            ${config.services.forgejo.package}/bin/forgejo actions generate-runner-token --config ${cfg.forgejoConfigFile} | tr -d '\r\n')"
          if [ -z "$token" ]; then
            echo "[burrow-forgejo-runner] failed to generate runner token" >&2
            exit 1
          fi

          ${pkgs.util-linux}/bin/runuser -u ${cfg.user} -- \
            ${runnerPkg}/bin/forgejo-runner register \
              --no-interactive \
              --instance ${lib.escapeShellArg cfg.instanceUrl} \
              --token "$token" \
              --name ${lib.escapeShellArg cfg.name} \
              --labels ${lib.escapeShellArg labelsCsv} \
              --config ${configFile}
        fi
      '';
    };

    systemd.services.burrow-forgejo-runner = {
      description = "Burrow Forgejo Actions runner";
      after = [ "burrow-forgejo-runner-bootstrap.service" ];
      wants = [ "burrow-forgejo-runner-bootstrap.service" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        Type = "simple";
        User = cfg.user;
        Group = cfg.group;
        WorkingDirectory = stateDir;
        Restart = "on-failure";
        RestartSec = 2;
        ExecStart = pkgs.writeShellScript "burrow-forgejo-runner" ''
          set -euo pipefail
          export PATH="/run/wrappers/bin:/run/current-system/sw/bin:''${PATH:-}"
          tmp="$(${pkgs.coreutils}/bin/mktemp)"
          set +e
          ${runnerPkg}/bin/forgejo-runner daemon --config ${configFile} 2>&1 | ${pkgs.coreutils}/bin/tee "$tmp"
          rc="''${PIPESTATUS[0]}"
          set -e
          # A runner deleted on the Forgejo side keeps failing with
          # "unregistered runner"; drop the stale registration so the
          # bootstrap unit re-registers on the next restart.
          if ${pkgs.gnugrep}/bin/grep -qi "unregistered runner" "$tmp"; then
            rm -f ${runnerFile}
          fi
          rm -f "$tmp"
          exit "$rc"
        '';
      };
    };
  };
}
247 nixos/modules/burrow-forge.nix Normal file

@@ -0,0 +1,247 @@
{ config, lib, pkgs, ... }:
|
||||
|
||||
let
|
||||
cfg = config.services.burrow.forge;
|
||||
forgejoCfg = config.services.forgejo;
|
||||
forgejoExe = lib.getExe forgejoCfg.package;
|
||||
forgejoWorkPath = forgejoCfg.stateDir;
|
||||
forgejoCustomPath = "${forgejoWorkPath}/custom";
|
||||
forgejoConfigFile = "${forgejoCustomPath}/conf/app.ini";
|
||||
forgejoAdminArgs = "--config ${lib.escapeShellArg forgejoConfigFile} --work-path ${lib.escapeShellArg forgejoWorkPath} --custom-path ${lib.escapeShellArg forgejoCustomPath}";
|
||||
homeRepoPath = "/${cfg.homeOwner}/${cfg.homeRepo}";
|
||||
homeRepoUrl = "https://${cfg.gitDomain}${homeRepoPath}";
|
||||
in
|
||||
{
|
||||
options.services.burrow.forge = {
|
||||
enable = lib.mkEnableOption "the Burrow Forge host";
|
||||
|
||||
gitDomain = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "git.burrow.net";
|
||||
description = "Public Forgejo domain.";
|
||||
};
|
||||
|
||||
siteDomain = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "burrow.net";
|
||||
description = "Root site domain.";
|
||||
};
|
||||
|
||||
homeOwner = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "hackclub";
|
||||
description = "Canonical Forgejo org/user for the Burrow home repository.";
|
||||
};
|
||||
|
||||
homeRepo = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "burrow";
|
||||
description = "Canonical Forgejo repository name for the Burrow home repository.";
|
||||
};
|
||||
|
||||
contactEmail = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "contact@burrow.net";
|
||||
description = "Operator contact email.";
|
||||
};
|
||||
|
||||
nscAutoscalerDomain = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "nsc-autoscaler.burrow.net";
|
||||
description = "Public webhook domain for the Forgejo Namespace autoscaler.";
|
||||
};
|
||||
|
||||
adminUsername = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "contact";
|
||||
description = "Initial Forgejo admin username.";
|
||||
};
|
||||
|
||||
adminEmail = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "contact@burrow.net";
|
||||
description = "Initial Forgejo admin email.";
|
||||
};
|
||||
|
||||
adminPasswordFile = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
description = "Host-local path to the plaintext bootstrap password file for the initial Forgejo admin.";
|
||||
};
|
||||
|
||||
authorizedKeys = lib.mkOption {
|
||||
type = with lib.types; listOf str;
|
||||
default = [ ];
|
||||
description = "SSH keys allowed for root login and operational bootstrap.";
|
||||
};
|
||||
};
|
||||
|
||||
config = lib.mkIf cfg.enable {
|
||||
networking.hostName = "burrow-forge";
|
||||
networking.useDHCP = lib.mkDefault true;
|
||||
|
||||
services.qemuGuest.enable = true;
|
||||
|
||||
boot.loader.grub = {
|
||||
enable = true;
|
||||
efiSupport = true;
|
||||
efiInstallAsRemovable = true;
|
||||
device = "nodev";
|
||||
};
|
||||
|
||||
fileSystems."/boot".neededForBoot = true;
|
||||
|
||||
services.postgresql = {
|
||||
enable = true;
|
||||
package = pkgs.postgresql_16;
|
||||
};
|
||||
|
||||
services.openssh = {
|
||||
enable = true;
|
||||
settings = {
|
||||
PasswordAuthentication = false;
|
||||
KbdInteractiveAuthentication = false;
|
||||
PermitRootLogin = "prohibit-password";
|
||||
};
|
||||
};
|
||||
|
||||
users.users.root.openssh.authorizedKeys.keys = cfg.authorizedKeys;
|
||||
|
||||
networking.firewall.allowedTCPPorts = [
|
||||
22
|
||||
80
|
||||
443
|
||||
2222
|
||||
];
|
||||
|
||||
services.forgejo = {
|
||||
enable = true;
|
||||
database = {
|
||||
type = "postgres";
|
||||
createDatabase = true;
|
||||
};
|
||||
lfs.enable = true;
|
||||
settings = {
|
||||
server = {
|
||||
DOMAIN = cfg.gitDomain;
|
||||
ROOT_URL = "https://${cfg.gitDomain}/";
|
||||
HTTP_PORT = 3000;
|
||||
SSH_DOMAIN = cfg.gitDomain;
|
||||
SSH_PORT = 2222;
|
||||
START_SSH_SERVER = true;
|
||||
};
|
||||
|
||||
service = {
|
||||
DISABLE_REGISTRATION = true;
|
||||
REQUIRE_SIGNIN_VIEW = false;
|
||||
          DEFAULT_ALLOW_CREATE_ORGANIZATION = false;
          ENABLE_NOTIFY_MAIL = false;
          NO_REPLY_ADDRESS = cfg.adminEmail;
        };

        session = {
          COOKIE_SECURE = true;
          SAME_SITE = "strict";
        };

        openid = {
          ENABLE_OPENID_SIGNIN = false;
          ENABLE_OPENID_SIGNUP = false;
        };

        actions = {
          ENABLED = true;
        };

        repository = {
          DEFAULT_BRANCH = "main";
          ENABLE_PUSH_CREATE_USER = false;
        };

        ui = {
          DEFAULT_THEME = "forgejo-auto";
        };
      };
    };

    services.caddy = {
      enable = true;
      email = cfg.contactEmail;
      virtualHosts =
        {
          "${cfg.gitDomain}".extraConfig = ''
            encode gzip zstd
            @root path /
            redir @root ${homeRepoPath} 308
            reverse_proxy 127.0.0.1:${toString config.services.forgejo.settings.server.HTTP_PORT}
          '';
          "${cfg.siteDomain}".extraConfig = ''
            @root path /
            redir @root ${homeRepoUrl} 308
            respond 404
          '';
        }
        // lib.optionalAttrs (
          config.services.burrow.forgejoNsc.enable && config.services.burrow.forgejoNsc.autoscaler.enable
        ) {
          "${cfg.nscAutoscalerDomain}".extraConfig = ''
            encode gzip zstd
            reverse_proxy 127.0.0.1:8090
          '';
        };
    };

    systemd.services.burrow-forgejo-bootstrap = {
      description = "Seed the initial Burrow Forgejo admin account";
      after = [ "forgejo.service" ];
      requires = [ "forgejo.service" ];
      wantedBy = [ "multi-user.target" ];
      path = [
        forgejoCfg.package
        pkgs.coreutils
        pkgs.gnugrep
      ];
      serviceConfig = {
        Type = "oneshot";
        User = forgejoCfg.user;
        Group = forgejoCfg.group;
        WorkingDirectory = forgejoCfg.stateDir;
      };
      script = ''
        set -euo pipefail

        if [ ! -s ${lib.escapeShellArg cfg.adminPasswordFile} ]; then
          echo "bootstrap password file is missing; skipping admin bootstrap" >&2
          exit 0
        fi

        password="$(tr -d '\r\n' < ${lib.escapeShellArg cfg.adminPasswordFile})"
        if [ -z "$password" ]; then
          echo "bootstrap password file is empty; skipping admin bootstrap" >&2
          exit 0
        fi

        log_file="$(mktemp)"
        trap 'rm -f "$log_file"' EXIT

        if ! ${forgejoExe} admin user create \
          ${forgejoAdminArgs} \
          --admin \
          --username ${lib.escapeShellArg cfg.adminUsername} \
          --email ${lib.escapeShellArg cfg.adminEmail} \
          --password "$password" \
          --must-change-password=false >"$log_file" 2>&1; then
          if grep -qi "already exists" "$log_file"; then
            ${forgejoExe} admin user change-password \
              ${forgejoAdminArgs} \
              --username ${lib.escapeShellArg cfg.adminUsername} \
              --password "$password" \
              --must-change-password=false
          else
            cat "$log_file" >&2
            exit 1
          fi
        fi
      '';
    };
  };
}
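The `burrow-forgejo-bootstrap` unit stays idempotent by attempting `admin user create` first and falling back to `admin user change-password` when the account already exists. A minimal sketch of that pattern, with hypothetical stubs standing in for the real `forgejo` CLI:

```shell
# Create-then-fallback sketch; `create_user` / `change_password` are
# hypothetical stubs for `forgejo admin user create` / `change-password`.
set -eu

marker="$(mktemp -u)"  # the file's existence models "account exists"

create_user() {
  if [ -e "$marker" ]; then
    echo "user already exists" >&2
    return 1
  fi
  touch "$marker"
  echo "user created"
}

change_password() { echo "password updated"; }

bootstrap() {
  log_file="$(mktemp)"
  if ! create_user >"$log_file" 2>&1; then
    if grep -qi "already exists" "$log_file"; then
      change_password
    else
      cat "$log_file" >&2
      rm -f "$log_file"
      return 1
    fi
  else
    cat "$log_file"
  fi
  rm -f "$log_file"
}

first="$(bootstrap)"   # first boot: the account gets created
second="$(bootstrap)"  # later boots: fall through to change-password
rm -f "$marker"
```

Either path leaves the admin account usable with the intake password, so repeated boots are safe.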
234
nixos/modules/burrow-forgejo-nsc.nix
Normal file

@@ -0,0 +1,234 @@
{ config, lib, pkgs, self, ... }:

let
  inherit (lib)
    mkEnableOption
    mkIf
    mkOption
    types
    mkAfter
    mkDefault
    optional
    optionalAttrs
    optionalString
    ;

  cfg = config.services.burrow.forgejoNsc;
  dispatcherRuntimeConfig = "${cfg.stateDir}/dispatcher.yaml";
  autoscalerRuntimeConfig = "${cfg.stateDir}/autoscaler.yaml";

  pendingCheck = configPath: pkgs.writeShellScript "forgejo-nsc-check-pending" ''
    set -euo pipefail
    if ${pkgs.gnugrep}/bin/grep -q 'PENDING-' '${configPath}'; then
      echo "forgejo-nsc config still contains placeholder values (PENDING-); update ${configPath} before starting." >&2
      exit 1
    fi
  '';

  nscTokenPath = "${cfg.stateDir}/nsc.token";
  tokenSync = optionalString (cfg.nscTokenFile != null) ''
    install -m 600 ${lib.escapeShellArg cfg.nscTokenFile} ${lib.escapeShellArg nscTokenPath}
    chown ${cfg.user}:${cfg.group} ${nscTokenPath}
    chmod 600 ${nscTokenPath}
  '';
  dispatcherConfigSync = optionalString (cfg.dispatcher.configFile != null) ''
    install -m 400 ${lib.escapeShellArg cfg.dispatcher.configFile} ${lib.escapeShellArg dispatcherRuntimeConfig}
    chown ${cfg.user}:${cfg.group} ${lib.escapeShellArg dispatcherRuntimeConfig}
    chmod 400 ${lib.escapeShellArg dispatcherRuntimeConfig}
  '';
  autoscalerConfigSync = optionalString (cfg.autoscaler.configFile != null) ''
    install -m 400 ${lib.escapeShellArg cfg.autoscaler.configFile} ${lib.escapeShellArg autoscalerRuntimeConfig}
    chown ${cfg.user}:${cfg.group} ${lib.escapeShellArg autoscalerRuntimeConfig}
    chmod 400 ${lib.escapeShellArg autoscalerRuntimeConfig}
  '';

  dispatcherEnv =
    cfg.extraEnv
    // optionalAttrs (cfg.nscTokenFile != null) { NSC_TOKEN_FILE = nscTokenPath; }
    // optionalAttrs (cfg.nscTokenSpecFile != null) { NSC_TOKEN_SPEC_FILE = cfg.nscTokenSpecFile; }
    // optionalAttrs (cfg.nscEndpoint != null) { NSC_ENDPOINT = cfg.nscEndpoint; };
in {
  options.services.burrow.forgejoNsc = {
    enable = mkEnableOption "Forgejo Namespace Cloud runner dispatcher";

    user = mkOption {
      type = types.str;
      default = "forgejo-nsc";
      description = "System user that runs the forgejo-nsc services.";
    };

    group = mkOption {
      type = types.str;
      default = "forgejo-nsc";
      description = "System group for the forgejo-nsc services.";
    };

    stateDir = mkOption {
      type = types.str;
      default = "/var/lib/forgejo-nsc";
      description = "State directory for the dispatcher/autoscaler.";
    };

    nscTokenFile = mkOption {
      type = types.nullOr types.str;
      default = null;
      description = "Optional NSC token file (exported as NSC_TOKEN_FILE).";
    };

    nscTokenSpecFile = mkOption {
      type = types.nullOr types.str;
      default = null;
      description = "Optional NSC token spec file (exported as NSC_TOKEN_SPEC_FILE).";
    };

    nscEndpoint = mkOption {
      type = types.nullOr types.str;
      default = null;
      description = "Optional NSC endpoint override (exported as NSC_ENDPOINT).";
    };

    extraEnv = mkOption {
      type = types.attrsOf types.str;
      default = { };
      description = "Extra environment variables injected into the services.";
    };

    nscPackage = mkOption {
      type = types.nullOr types.package;
      default = self.packages.${pkgs.stdenv.hostPlatform.system}.nsc or null;
      description = "Optional nsc CLI package added to the service PATH.";
    };

    dispatcher = {
      enable = mkOption {
        type = types.bool;
        default = true;
        description = "Enable the forgejo-nsc dispatcher service.";
      };

      package = mkOption {
        type = types.package;
        default = self.packages.${pkgs.stdenv.hostPlatform.system}.forgejo-nsc-dispatcher;
        description = "Package providing the forgejo-nsc dispatcher binary.";
      };

      configFile = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Host-local YAML config file for the dispatcher.";
      };

      allowPending = mkOption {
        type = types.bool;
        default = false;
        description = "Allow placeholder values (PENDING-) in the dispatcher config.";
      };
    };

    autoscaler = {
      enable = mkOption {
        type = types.bool;
        default = false;
        description = "Enable the forgejo-nsc autoscaler service.";
      };

      package = mkOption {
        type = types.package;
        default = self.packages.${pkgs.stdenv.hostPlatform.system}.forgejo-nsc-autoscaler;
        description = "Package providing the forgejo-nsc autoscaler binary.";
      };

      configFile = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Host-local YAML config file for the autoscaler.";
      };

      allowPending = mkOption {
        type = types.bool;
        default = false;
        description = "Allow placeholder values (PENDING-) in the autoscaler config.";
      };
    };
  };

  config = mkIf cfg.enable {
    assertions = [
      {
        assertion = (!cfg.dispatcher.enable) || cfg.dispatcher.configFile != null;
        message = "services.burrow.forgejoNsc.dispatcher.configFile must be set when the dispatcher is enabled.";
      }
      {
        assertion = (!cfg.autoscaler.enable) || cfg.autoscaler.configFile != null;
        message = "services.burrow.forgejoNsc.autoscaler.configFile must be set when the autoscaler is enabled.";
      }
    ];

    users.groups.${cfg.group} = { };
    users.users.${cfg.user} = {
      uid = mkDefault 2011;
      isSystemUser = true;
      group = cfg.group;
      description = "Forgejo Namespace Cloud runner services";
      home = cfg.stateDir;
      createHome = true;
      shell = pkgs.bashInteractive;
    };

    systemd.tmpfiles.rules = mkAfter [
      "d ${cfg.stateDir} 0750 ${cfg.user} ${cfg.group} - -"
    ];

    systemd.services.forgejo-nsc-dispatcher = mkIf cfg.dispatcher.enable {
      description = "Forgejo Namespace Cloud dispatcher";
      wantedBy = [ "multi-user.target" ];
      after = [ "network-online.target" ];
      wants = [ "network-online.target" ];
      unitConfig.ConditionPathExists =
        optional (cfg.dispatcher.configFile != null) cfg.dispatcher.configFile
        ++ optional (cfg.nscTokenFile != null) cfg.nscTokenFile;
      serviceConfig = {
        Type = "simple";
        User = cfg.user;
        Group = cfg.group;
        WorkingDirectory = cfg.stateDir;
        ExecStart = "${cfg.dispatcher.package}/bin/forgejo-nsc-dispatcher --config ${dispatcherRuntimeConfig}";
        Restart = "on-failure";
        RestartSec = 5;
      };
      path = lib.optional (cfg.nscPackage != null) cfg.nscPackage;
      environment = dispatcherEnv;
      preStart = lib.concatStringsSep "\n" (lib.filter (s: s != "") [
        (optionalString (!cfg.dispatcher.allowPending) (pendingCheck cfg.dispatcher.configFile))
        dispatcherConfigSync
        tokenSync
      ]);
    };

    systemd.services.forgejo-nsc-autoscaler = mkIf cfg.autoscaler.enable {
      description = "Forgejo Namespace Cloud autoscaler";
      wantedBy = [ "multi-user.target" ];
      after = [ "network-online.target" "forgejo-nsc-dispatcher.service" ];
      wants = [ "network-online.target" ];
      unitConfig.ConditionPathExists =
        optional (cfg.autoscaler.configFile != null) cfg.autoscaler.configFile
        ++ optional (cfg.nscTokenFile != null) cfg.nscTokenFile;
      serviceConfig = {
        Type = "simple";
        User = cfg.user;
        Group = cfg.group;
        WorkingDirectory = cfg.stateDir;
        ExecStart = "${cfg.autoscaler.package}/bin/forgejo-nsc-autoscaler --config ${autoscalerRuntimeConfig}";
        Restart = "on-failure";
        RestartSec = 5;
      };
      path = lib.optional (cfg.nscPackage != null) cfg.nscPackage;
      environment = dispatcherEnv;
      preStart = lib.concatStringsSep "\n" (lib.filter (s: s != "") [
        (optionalString (!cfg.autoscaler.allowPending) (pendingCheck cfg.autoscaler.configFile))
        autoscalerConfigSync
        tokenSync
      ]);
    };
  };
}
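The `pendingCheck` preStart guard refuses to launch a service whose config still carries `PENDING-` placeholder values. The same grep test can be exercised standalone against a throwaway file (paths here are illustrative):

```shell
# Standalone version of the PENDING- placeholder guard that the
# forgejo-nsc units run in preStart.
set -eu

check_pending() {
  if grep -q 'PENDING-' "$1"; then
    echo "config still contains placeholder values (PENDING-)" >&2
    return 1
  fi
}

cfg_file="$(mktemp)"

# Placeholder still present: the guard fails and the unit would not start.
printf 'token: PENDING-REPLACE-ME\n' > "$cfg_file"
if check_pending "$cfg_file" 2>/dev/null; then ready=yes; else ready=no; fi

# Real value filled in: the guard passes.
printf 'token: real-value\n' > "$cfg_file"
if check_pending "$cfg_file" 2>/dev/null; then ok=yes; else ok=no; fi

rm -f "$cfg_file"
```

Setting `allowPending = true` simply skips this check, which is only appropriate while staging a host before secrets arrive.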
11
nixos/modules/burrow-headscale-policy.hujson
Normal file

@@ -0,0 +1,11 @@
{
  // Bootstrap with a simple allow-all policy; Burrow-specific lane segmentation
  // can be layered on once the control plane is live.
  acls: [
    {
      action: "accept",
      src: ["*"],
      dst: ["*:*"],
    },
  ],
}
225
nixos/modules/burrow-headscale.nix
Normal file

@@ -0,0 +1,225 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.services.burrow.headscale;
  policyFile = ./burrow-headscale-policy.hujson;
in
{
  options.services.burrow.headscale = {
    enable = lib.mkEnableOption "the Burrow Headscale control plane";

    domain = lib.mkOption {
      type = lib.types.str;
      default = "ts.burrow.net";
      description = "Public Headscale control-plane domain.";
    };

    tailDomain = lib.mkOption {
      type = lib.types.str;
      default = "tail.burrow.net";
      description = "MagicDNS suffix served by Headscale.";
    };

    port = lib.mkOption {
      type = lib.types.port;
      default = 8413;
      description = "Local Headscale listen port.";
    };

    oidcIssuer = lib.mkOption {
      type = lib.types.str;
      default = "https://${config.services.burrow.authentik.domain}/application/o/${config.services.burrow.authentik.headscaleProviderSlug}/";
      description = "OIDC issuer URL used by Headscale.";
    };

    oidcClientSecretFile = lib.mkOption {
      type = lib.types.str;
      default = config.services.burrow.authentik.headscaleClientSecretFile;
      description = "Host-local file containing the OIDC client secret used by Headscale.";
    };

    bootstrapUsers = lib.mkOption {
      type = with lib.types; listOf (submodule {
        options = {
          name = lib.mkOption {
            type = str;
            description = "Headscale username.";
          };
          displayName = lib.mkOption {
            type = str;
            description = "Friendly display name.";
          };
          email = lib.mkOption {
            type = str;
            description = "User email address.";
          };
        };
      });
      default = [
        {
          name = "contact";
          displayName = "Burrow";
          email = "contact@burrow.net";
        }
        {
          name = "conrad";
          displayName = "Conrad";
          email = "conrad@burrow.net";
        }
        {
          name = "agent";
          displayName = "Agent";
          email = "agent@burrow.net";
        }
        {
          name = "infra";
          displayName = "Infrastructure";
          email = "infra@burrow.net";
        }
      ];
      description = "Users to create or reconcile inside Headscale.";
    };
  };

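The `burrow-headscale-client-secret` unit defined in this module's config only mints a new OIDC client secret when the target file is missing or empty, so repeated activations never rotate a live secret out from under Authentik. A sketch of that create-if-missing guard, pointed at a throwaway path instead of the real secret file:

```shell
# Create-if-missing guard from burrow-headscale-client-secret,
# using a temp path rather than the real host-local secret file.
set -eu
umask 077

secret_file="$(mktemp -u)"

ensure_secret() {
  if [ ! -s "$secret_file" ]; then
    openssl rand -base64 48 > "$secret_file"
  fi
}

ensure_secret
first_secret="$(cat "$secret_file")"
ensure_secret  # second run: file is non-empty, so nothing is rotated
second_secret="$(cat "$secret_file")"
rm -f "$secret_file"
```

The `-s` test (non-empty, not merely existing) also means a truncated secret from an interrupted run gets regenerated on the next boot.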
  config = lib.mkIf cfg.enable {
    environment.systemPackages = [ pkgs.headscale ];

    systemd.services.burrow-headscale-client-secret = {
      description = "Ensure the Burrow Headscale OIDC client secret exists";
      before =
        [ "headscale.service" ]
        ++ lib.optionals config.services.burrow.authentik.enable [ "burrow-authentik-runtime.service" ];
      wantedBy =
        [ "headscale.service" ]
        ++ lib.optionals config.services.burrow.authentik.enable [ "burrow-authentik-runtime.service" ];
      path = [
        pkgs.coreutils
        pkgs.openssl
      ];
      serviceConfig = {
        Type = "oneshot";
        User = "root";
        Group = "root";
        RemainAfterExit = true;
      };
      script = ''
        set -euo pipefail

        install -d -m 0755 /var/lib/burrow/intake

        if [ ! -s ${lib.escapeShellArg cfg.oidcClientSecretFile} ]; then
          umask 077
          ${pkgs.openssl}/bin/openssl rand -base64 48 > ${lib.escapeShellArg cfg.oidcClientSecretFile}
          chown root:root ${lib.escapeShellArg cfg.oidcClientSecretFile}
          chmod 0400 ${lib.escapeShellArg cfg.oidcClientSecretFile}
        fi
      '';
    };

    services.headscale = {
      enable = true;
      address = "127.0.0.1";
      port = cfg.port;
      settings = {
        server_url = "https://${cfg.domain}";
        dns = {
          magic_dns = true;
          base_domain = cfg.tailDomain;
          nameservers.global = [
            "1.1.1.1"
            "1.0.0.1"
            "2606:4700:4700::1111"
            "2606:4700:4700::1001"
          ];
          search_domains = [ cfg.tailDomain ];
        };
        database.sqlite.write_ahead_log = true;
        log.level = "info";
        policy = {
          mode = "file";
          path = policyFile;
        };
        oidc = {
          only_start_if_oidc_is_available = true;
          issuer = cfg.oidcIssuer;
          client_id = cfg.domain;
          client_secret_path = "\${CREDENTIALS_DIRECTORY}/oidc_client_secret";
          scope = [
            "openid"
            "profile"
            "email"
          ];
          pkce = {
            enabled = true;
            method = "S256";
          };
        };
      };
    };

    systemd.services.headscale = {
      after =
        [ "burrow-headscale-client-secret.service" ]
        ++ lib.optionals config.services.burrow.authentik.enable [ "burrow-authentik-ready.service" ];
      wants =
        [ "burrow-headscale-client-secret.service" ]
        ++ lib.optionals config.services.burrow.authentik.enable [ "burrow-authentik-ready.service" ];
      requires =
        [ "burrow-headscale-client-secret.service" ]
        ++ lib.optionals config.services.burrow.authentik.enable [ "burrow-authentik-ready.service" ];
      serviceConfig.LoadCredential = [
        "oidc_client_secret:${cfg.oidcClientSecretFile}"
      ];
    };

    systemd.services.headscale-bootstrap = {
      description = "Bootstrap Burrow Headscale users";
      after = [ "headscale.service" ];
      requires = [ "headscale.service" ];
      wantedBy = [ "multi-user.target" ];
      path = [
        pkgs.coreutils
        pkgs.headscale
        pkgs.jq
      ];
      serviceConfig = {
        Type = "oneshot";
        User = "root";
        Group = "root";
      };
      script = ''
        set -euo pipefail

        list_users() {
          ${pkgs.headscale}/bin/headscale users list -o json
        }

        ensure_user() {
          local name="$1"
          local display_name="$2"
          local email="$3"
          if list_users | ${pkgs.jq}/bin/jq -e --arg name "$name" 'map(select(.name == $name)) | length > 0' >/dev/null; then
            return 0
          fi
          ${pkgs.headscale}/bin/headscale users create "$name" --display-name "$display_name" --email "$email" >/dev/null
        }

        for _ in $(seq 1 60); do
          if list_users >/dev/null 2>&1; then
            break
          fi
          sleep 1
        done

        ${lib.concatMapStringsSep "\n" (user: ''
          ensure_user ${lib.escapeShellArg user.name} ${lib.escapeShellArg user.displayName} ${lib.escapeShellArg user.email}
        '') cfg.bootstrapUsers}
      '';
    };

    services.caddy.virtualHosts."${cfg.domain}".extraConfig = ''
      encode gzip zstd
      reverse_proxy 127.0.0.1:${toString cfg.port}
    '';
  };
}
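`headscale-bootstrap` polls `headscale users list` for up to 60 seconds before seeding users, since the API may not answer immediately after `headscale.service` reports started. The retry loop in isolation, with a hypothetical stub API that only answers on the third attempt:

```shell
# Readiness-poll sketch; `list_users` stubs the headscale API and
# starts succeeding on the third call.
set -eu

attempt=0
list_users() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}

tries=0
for _ in $(seq 1 60); do
  tries=$((tries + 1))
  if list_users >/dev/null 2>&1; then
    break
  fi
  # the real unit sleeps 1s between attempts; omitted here
done
```

Note the loop never fails outright: if the API stays down past 60 attempts, `ensure_user` reports the real error instead of the poll masking it.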