Mirror of https://github.com/oddlama/nix-config.git (synced 2025-10-11 07:10:39 +02:00)

commit 84ac34cb6c (parent 04872f6ec5)
refactor: major refactor into proper reusable modules. No logical changes.

80 changed files with 761 additions and 776 deletions
README.md — 85 changed lines

@@ -2,39 +2,64 @@
 
 This is my personal nix config.
 
-## Structure
+## Hosts
 
-- `hosts/` contains configuration for all hosts.
-  - `common/` shared configuration. Hosts will include what they need from here.
-    - `core/` configuration that is shared across _all_ machines. (base setup, ssh, ...)
-    - `dev/` configuration for development machines
-    - `graphical/` configuration for graphical setup
-    - `hardware/` configuration for various hardware components
-      - `<something>.nix` commonly required configuration for `<something>`
-  - `<hostname>/` configuration for `<hostname>`
-    - `[microvms/]` configuration for microvms. This is optional even for existing microvms, since they can also be defined in-place.
-    - `secrets/` Local secrets for this host. Still theoretically accessible by other hosts, but owned by this one.
-      - `local.nix.age` Repository-wide local secrets. Decrypted on import via `builtins.extraBuiltins.rageImportEncrypted`.
-      - `[host.pub]` This host's public key. Used for agenix rekeying if it exists.
-    - `default.nix` The actual system definition. Follow the imports from there to see what it entails.
-    - `fs.nix` Filesystem setup.
-    - `net.nix` Networking setup.
+TODO make a table.
+
 - `nom/` - My laptop and main development machine
 - `ward/` - ODROID H3, energy efficient SBC. Used as a firewall between my ISP and internal home network. Hosts some lightweight services using full KVM virtual machines.
 - `envoy/` - Hetzner Cloud server. Primarily used as my mailserver and VPN provider.
+- `sentinel/` - Hetzner Cloud server. Primarily used as an HTTP proxy
 - `zackbiene/` - ODROID N2+. Hosts IoT and Home Automation stuff and fully isolates that stuff from my internal network.
 - not yet ready to be publicized: my main development machine, the powerful home server, some services ... (still in transition from gentoo :/)
 
-- `modules/` additional NixOS modules that are not yet upstreamed, or specific to this setup.
-  - `interface-naming.nix` Provides an option to rename interfaces based on their MAC address
-  - `microvms.nix` Used to define microvms including all of the boilerplate setup (networking, shares, local wireguard)
-  - `repo.nix` Provides options to define and access repository-wide secrets
-  - `wireguard.nix` A meta module that allows defining wireguard networks that automatically collects network participants across nodes
-- `nix/` library functions and plumbing
-  - `apps/` Additional runnable actions for this flake
-    - `default.nix` Collects all apps and generates a definition for a specified system
-    - `draw-graph.nix` (**WIP:** infrastructure graph renderer)
-    - `format-secrets.nix` Runs the code formatter on the secret .nix files
-    - `show-wireguard-qr.nix` Generates a QR code for external wireguard participants
+## Structure
+
+- `apps/` Additional runnable actions for flake maintenance, like showing wireguard QR codes.
+- `hosts/<hostname>` contains the top-level configuration for `<hostname>`.
+  Follow the imports from there to see what it entails.
+  By convention I place secrets related to this host in the `secrets/` subfolder, but any host
+  could technically use them. Especially important files in this folder are:
+  - `host.pub` This host's public key (retrieved after initial setup). Used to rekey secrets so the host can access them at runtime.
+  - `local.nix.age` Repository-wide local secrets. Decrypted on import, see `modules/repo/secrets.nix` for more information.
+
+  Some hosts define microvms that run as their guests. These are typically stored
+  in `microvms/<vm>` and have the same layout as a regular host.
+
+- `modules/` contains modularized configuration. If you are interested in reusable parts of
+  my configuration, this is probably the folder you are looking for. Unless stated otherwise,
+  all of these will be regular reusable modules like those you would find in `nixpkgs/nixos/modules`,
+  and the tree of all relevant modules is included via `modules/default.nix`.
+
+  - `modules/config/` contains configuration that I use across all my hosts and is applied by default.
+    These just add configuration unconditionally and don't expose any further options.
+
+  - `modules/optional/` contains configuration that is only needed sometimes, and which should
+    be included explicitly by hosts that require it.
+
+  - `modules/meta/` contains meta-modules that simplify the option interface of existing options.
+    I use this for stuff that I don't need on all my hosts and that may require different settings
+    for each host while sharing a common basis.
+
+    Some of these are "meta" in the sense that they depend on their own definitions on multiple hosts (wireguard).
+    These are probably as opinionated as stuff in `modules/config/` but may be a little more general.
+    The `wireguard` module would even be a candidate for extraction to a separate flake, together with the related apps.
+
+  - `modules/<xyz>/` regular modules related to <xyz>, similar structure as in `nixpkgs/nixos/modules`
+
+- `pkgs/` Custom packages and scripts
+- `secrets/` Global secrets and age identities
+  - `global.nix.age` Repository-wide global secrets. Available on nodes via the repo module as `config.repo.secrets.global`.
+  - `backup.pub` Backup age-identity in case I ever lose my YubiKey or it breaks.
+  - `yk1-nix-rage.pub` Master YubiKey split-identity. Used as a key-grab.
+- `users/` User account configuration mostly via home-manager.
+  This is the place to look for my dotfiles.
+- `nix/` library functions and flake plumbing
   - `checks.nix` pre-commit-hooks for this repository
   - `colmena.nix` Setup for distributed deployment using colmena (actually defines all NixOS hosts)
   - `dev-shell.nix` Environment setup for `nix develop` for using this flake
@@ -43,12 +68,6 @@ This is my personal nix config.
   - `generate-node.nix` Helper function that outputs everything that is necessary to define a new node in a predictable format. Used to define colmena nodes and microvms.
   - `lib.nix` Commonly used functionality or helpers that weren't available in the standard library
   - `rage-decrypt-and-cache.sh` Auxiliary script for repository-wide secrets that decrypts a file and caches the output in /tmp
-- `secrets/` Global secrets and age identities
-  - `global.nix.age` Repository-wide global secrets. Available on nodes via the repo module as `config.repo.secrets.global`.
-  - `backup.pub` Backup age-identity in case I ever lose my YubiKey or it breaks.
-  - `yk1-nix-rage.pub` Master YubiKey split-identity. Used as a key-grab.
-- `pkgs/` Custom packages and scripts
-- `users/` User account configuration via home-manager. Imported by each host separately.
 
 ## How-To
 
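Under the new layout a host's default.nix mostly just selects modules: the shared tree under `modules/` (which applies `modules/config/` unconditionally) plus whichever `modules/optional/` parts it needs, followed by its own fs.nix and net.nix. A minimal sketch of such a host definition, assembled from the host diffs later in this commit (the particular optional modules chosen here are illustrative, not a specific host):

  {
    imports = [
      ../../modules                         # shared config/, meta/ and service modules
      ../../modules/optional/boot-efi.nix   # only the optional parts this host needs
      ../../modules/optional/zfs.nix

      ./fs.nix
      ./net.nix
    ];
  }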
@@ -12,7 +12,6 @@
   };
   args = inputs // {inherit pkgs;};
   apps = [
-    ./draw-graph.nix
     ./format-secrets.nix
     ./show-wireguard-qr.nix
   ];
@@ -13,13 +13,13 @@
     ;

   nodeNames = attrNames self.nodes;
-  wireguardNetworks = unique (concatMap (n: attrNames self.nodes.${n}.config.extra.wireguard) nodeNames);
+  wireguardNetworks = unique (concatMap (n: attrNames self.nodes.${n}.config.meta.wireguard) nodeNames);

   externalPeersForNet = wgName:
     concatMap (serverNode:
       map
       (peer: {inherit wgName serverNode peer;})
-      (attrNames self.nodes.${serverNode}.config.extra.wireguard.${wgName}.server.externalPeers))
+      (attrNames self.nodes.${serverNode}.config.meta.wireguard.${wgName}.server.externalPeers))
     (self.extraLib.wireguard wgName).participatingServerNodes;
   allExternalPeers = concatMap externalPeersForNet wireguardNetworks;
 in
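The `meta.wireguard` definitions this app walks over are declared per node elsewhere in this commit: sentinel defines the server side and ward joins as a client. A rough sketch of both sides, assembled from the hunks further down (the externalPeers line is an assumption about shape; the code above only reads its attribute names):

  # on the server node (sentinel)
  meta.wireguard.proxy-sentinel.server = {
    host = config.networking.fqdn;
    port = 51443;
    reservedAddresses = ["10.43.0.0/24" "fd00:43::/120"];
    # externalPeers.<peer-name> = ...;  # assumed shape; only attrNames is used by the QR app
  };

  # on a client node (ward)
  meta.wireguard.proxy-sentinel.client.via = "sentinel";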
@@ -107,7 +107,7 @@
       microvmNodes = nixpkgs.lib.concatMapAttrs (_: node:
         nixpkgs.lib.mapAttrs'
         (vm: def: nixpkgs.lib.nameValuePair def.nodeName node.config.microvm.vms.${vm}.config)
-        (node.config.extra.microvms.vms or {}))
+        (node.config.meta.microvms.vms or {}))
       self.colmenaNodes;
       # Expose all nodes in a single attribute
       nodes = self.colmenaNodes // self.microvmNodes;
@@ -130,7 +130,7 @@

       apps =
         agenix-rekey.defineApps self pkgs self.nodes
-        // import ./nix/apps inputs system;
+        // import ./apps inputs system;
       checks = import ./nix/checks.nix inputs system;
       devShells.default = import ./nix/dev-shell.nix inputs system;
       formatter = pkgs.alejandra;
Deleted file (10 lines):

@@ -1,10 +0,0 @@
-{lib, ...}: {
-  boot.loader = {
-    grub = {
-      enable = true;
-      efiSupport = false;
-    };
-    timeout = lib.mkDefault 2;
-  };
-  console.earlySetup = true;
-}
Deleted file (44 lines):

@@ -1,44 +0,0 @@
-{config, ...}: {
-  imports = [
-    ./impermanence.nix
-    ./inputrc.nix
-    ./issue.nix
-    ./net.nix
-    ./nix.nix
-    ./resolved.nix
-    ./ssh.nix
-    ./system.nix
-    ./xdg.nix
-
-    ../../../users/root
-
-    ../../../modules/deteministic-ids.nix
-    ../../../modules/distributed-config.nix
-    ../../../modules/extra.nix
-    ../../../modules/interface-naming.nix
-    ../../../modules/microvms.nix
-    ../../../modules/oauth2-proxy.nix
-    ../../../modules/promtail.nix
-    ../../../modules/provided-domains.nix
-    ../../../modules/repo.nix
-    ../../../modules/telegraf.nix
-    ../../../modules/wireguard.nix
-  ];
-
-  home-manager = {
-    useGlobalPkgs = true;
-    useUserPackages = true;
-    verbose = true;
-  };
-
-  # If the host defines microvms, ensure that this core module and
-  # some boilerplate is imported automatically.
-  extra.microvms.commonImports = [
-    ./.
-    {home-manager.users.root.home.minimal = true;}
-  ];
-
-  # Required even when using home-manager's zsh module since the /etc/profile load order
-  # is partly controlled by this. See nix-community/home-manager#3681.
-  programs.zsh.enable = true;
-}
Deleted file (92 lines):

@@ -1,92 +0,0 @@
-{
-  config,
-  lib,
-  pkgs,
-  nodeName,
-  ...
-}: let
-  inherit
-    (lib)
-    concatStringsSep
-    head
-    mapAttrsToList
-    mkDefault
-    mkForce
-    ;
-in {
-  networking = {
-    hostName = nodeName;
-    useDHCP = mkForce false;
-    useNetworkd = true;
-    dhcpcd.enable = false;
-
-    nftables = {
-      firewall.enable = true;
-      stopRuleset = mkDefault ''
-        table inet filter {
-          chain input {
-            type filter hook input priority filter; policy drop;
-            ct state invalid drop
-            ct state {established, related} accept
-
-            iifname lo accept
-            meta l4proto ipv6-icmp accept
-            meta l4proto icmp accept
-            tcp dport ${toString (head config.services.openssh.ports)} accept
-          }
-          chain forward {
-            type filter hook forward priority filter; policy drop;
-          }
-          chain output {
-            type filter hook output priority filter; policy accept;
-          }
-        }
-      '';
-    };
-
-    # TODO mkForce nftables
-    nftables.firewall = {
-      zones = lib.mkForce {
-        local.localZone = true;
-      };
-
-      rules = lib.mkForce {
-        icmp = {
-          early = true;
-          after = ["ct"];
-          from = "all";
-          to = ["local"];
-          extraLines = [
-            "ip6 nexthdr icmpv6 icmpv6 type { echo-request, destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept"
-            "ip protocol icmp icmp type { echo-request, destination-unreachable, router-advertisement, time-exceeded, parameter-problem } accept"
-            #"ip6 saddr fe80::/10 ip6 daddr fe80::/10 udp dport 546 accept" # (dhcpv6)
-          ];
-        };
-
-        ssh = {
-          early = true;
-          after = ["ct"];
-          from = "all";
-          to = ["local"];
-          allowedTCPPorts = config.services.openssh.ports;
-        };
-
-        untrusted-to-local = {
-          from = ["untrusted"];
-          to = ["local"];
-
-          inherit
-            (config.networking.firewall)
-            allowedTCPPorts
-            allowedUDPPorts
-            ;
-        };
-      };
-    };
-  };
-
-  systemd.network.enable = true;
-
-  # Rename known network interfaces
-  extra.networking.renameInterfacesByMac = lib.mapAttrs (_: v: v.mac) (config.repo.secrets.local.networking.interfaces or {});
-}
Deleted file (11 lines):

@@ -1,11 +0,0 @@
-{lib, ...}: {
-  boot.loader = {
-    efi.canTouchEfiVariables = true;
-    systemd-boot = {
-      enable = true;
-      configurationLimit = 15;
-    };
-    timeout = lib.mkDefault 2;
-  };
-  console.earlySetup = true;
-}
@@ -8,19 +8,17 @@
     nixos-hardware.common-gpu-intel
     nixos-hardware.common-pc-laptop
     nixos-hardware.common-pc-laptop-ssd
+    ../../modules/optional/hardware/intel.nix
+    ../../modules/optional/hardware/physical.nix

-    ../common/core
-    ../common/dev
-    ../common/graphical
-    ../common/hardware/intel.nix
-    ../common/hardware/physical.nix
-    ../common/efi.nix
-    ../common/initrd-ssh.nix
-    ../common/laptop.nix
-    # ../common/sound.nix
-    ../common/yubikey.nix
-    ../common/zfs.nix
+    ../../modules
+    ../../modules/optional/boot-efi.nix
+    ../../modules/optional/initrd-ssh.nix
+    ../../modules/optional/dev
+    ../../modules/optional/graphical
+    ../../modules/optional/laptop.nix
+    #../../modules/optional/sound.nix
+    ../../modules/optional/zfs.nix

     ../../users/myuser
@@ -30,10 +28,8 @@

   boot.initrd.availableKernelModules = ["xhci_pci" "ahci" "nvme" "usbhid" "usb_storage" "sd_mod"];

-  hardware.opengl.enable = true;
-
   console = {
     font = "ter-v28n";
-    packages = with pkgs; [terminus_font];
+    packages = [pkgs.terminus_font];
   };
 }
@@ -17,5 +17,5 @@ in {
       reloadServices = ["nginx"];
     };
   };
-  extra.acme.wildcardDomains = acme.domains;
+  security.acme.wildcardDomains = acme.domains;
 }
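This pairs with the `useACMEWildcardHost = true;` switch in the nginx virtual hosts further down: the wildcard domains are declared once here, and each vhost simply opts in instead of resolving a matching wildcard cert itself. Sketch of the pattern as it appears in this commit (both options come from this repo's own modules, not stock NixOS; the domain is a made-up example):

  security.acme.wildcardDomains = acme.domains;

  services.nginx.virtualHosts."grafana.example.com" = {
    forceSSL = true;
    useACMEWildcardHost = true;  # replaces: useACMEHost = ...matchingWildcardCert <domain>;
  };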
@@ -4,32 +4,32 @@
   ...
 }: {
   imports = [
-    ../common/core
-    ../common/hardware/hetzner-cloud.nix
-    ../common/bios-boot.nix
-    ../common/initrd-ssh.nix
-    ../common/zfs.nix
+    ../../modules/optional/hardware/hetzner-cloud.nix

-    ./fs.nix
-    ./net.nix
+    ../../modules
+    ../../modules/optional/boot-bios.nix
+    ../../modules/optional/initrd-ssh.nix
+    ../../modules/optional/zfs.nix

     ./acme.nix
+    ./fs.nix
+    ./net.nix
     ./oauth2.nix
   ];

   users.groups.acme.members = ["nginx"];
   services.nginx.enable = true;

-  extra.promtail = {
+  meta.promtail = {
     enable = true;
     proxy = "sentinel";
   };

   # Connect safely via wireguard to skip authentication
-  networking.hosts.${config.extra.wireguard.proxy-sentinel.ipv4} = [config.providedDomains.influxdb];
-  extra.telegraf = {
+  networking.hosts.${config.meta.wireguard.proxy-sentinel.ipv4} = [config.networking.providedDomains.influxdb];
+  meta.telegraf = {
     enable = true;
-    influxdb2.domain = config.providedDomains.influxdb;
+    influxdb2.domain = config.networking.providedDomains.influxdb;
     influxdb2.organization = "servers";
     influxdb2.bucket = "telegraf";
   };
@@ -52,7 +52,7 @@
     };
   };

-  extra.wireguard.proxy-sentinel.server = {
+  meta.wireguard.proxy-sentinel.server = {
     host = config.networking.fqdn;
     port = 51443;
     reservedAddresses = ["10.43.0.0/24" "fd00:43::/120"];
@@ -4,7 +4,7 @@
   pkgs,
   ...
 }: {
-  extra.oauth2_proxy = {
+  meta.oauth2_proxy = {
     enable = true;
     cookieDomain = config.repo.secrets.local.personalDomain;
     portalDomain = "oauth2.${config.repo.secrets.local.personalDomain}";
@@ -22,15 +22,15 @@
   in {
     provider = "oidc";
     scope = "openid email";
-    loginURL = "https://${config.providedDomains.kanidm}/ui/oauth2";
-    redeemURL = "https://${config.providedDomains.kanidm}/oauth2/token";
-    validateURL = "https://${config.providedDomains.kanidm}/oauth2/openid/${clientId}/userinfo";
+    loginURL = "https://${config.networking.providedDomains.kanidm}/ui/oauth2";
+    redeemURL = "https://${config.networking.providedDomains.kanidm}/oauth2/token";
+    validateURL = "https://${config.networking.providedDomains.kanidm}/oauth2/openid/${clientId}/userinfo";
     clientID = clientId;
     keyFile = config.age.secrets.oauth2-proxy-secret.path;
     email.domains = ["*"];

     extraConfig = {
-      oidc-issuer-url = "https://${config.providedDomains.kanidm}/oauth2/openid/${clientId}";
+      oidc-issuer-url = "https://${config.networking.providedDomains.kanidm}/oauth2/openid/${clientId}";
       provider-display-name = "Kanidm";
       #skip-provider-button = true;
     };
@@ -7,13 +7,13 @@
   imports = [
     nixos-hardware.common-cpu-intel
     nixos-hardware.common-pc-ssd
+    ../../modules/optional/hardware/intel.nix
+    ../../modules/optional/hardware/physical.nix

-    ../common/core
-    ../common/hardware/intel.nix
-    ../common/hardware/physical.nix
-    ../common/initrd-ssh.nix
-    ../common/efi.nix
-    ../common/zfs.nix
+    ../../modules
+    ../../modules/optional/boot-efi.nix
+    ../../modules/optional/initrd-ssh.nix
+    ../../modules/optional/zfs.nix

     ./fs.nix
     ./net.nix
@@ -21,16 +21,16 @@

   boot.initrd.availableKernelModules = ["xhci_pci" "ahci" "nvme" "usbhid" "usb_storage" "sd_mod" "sdhci_pci" "r8169"];

-  extra.promtail = {
+  meta.promtail = {
     enable = true;
     proxy = "sentinel";
   };

   # Connect safely via wireguard to skip authentication
-  networking.hosts.${nodes.sentinel.config.extra.wireguard.proxy-sentinel.ipv4} = [nodes.sentinel.config.providedDomains.influxdb];
-  extra.telegraf = {
+  networking.hosts.${nodes.sentinel.config.meta.wireguard.proxy-sentinel.ipv4} = [nodes.sentinel.config.networking.providedDomains.influxdb];
+  meta.telegraf = {
     enable = true;
-    influxdb2.domain = nodes.sentinel.config.providedDomains.influxdb;
+    influxdb2.domain = nodes.sentinel.config.networking.providedDomains.influxdb;
     influxdb2.organization = "servers";
     influxdb2.bucket = "telegraf";
   };
@@ -38,7 +38,11 @@
   # TODO track my github stats
   # services.telegraf.extraConfig.inputs.github = {};

-  extra.microvms.vms = let
+  meta.microvms.commonImports = [
+    ./microvms/common.nix
+  ];
+
+  meta.microvms.vms = let
     defaults = {
       system = "x86_64-linux";
       autostart = true;
@@ -8,30 +8,10 @@
   sentinelCfg = nodes.sentinel.config;
   adguardhomeDomain = "adguardhome.${sentinelCfg.repo.secrets.local.personalDomain}";
 in {
-  imports = [
-    ../../../../modules/proxy-via-sentinel.nix
-  ];
-
-  extra.promtail = {
-    enable = true;
-    proxy = "sentinel";
-  };
-
-  # Connect safely via wireguard to skip authentication
-  networking.hosts.${sentinelCfg.extra.wireguard.proxy-sentinel.ipv4} = [sentinelCfg.providedDomains.influxdb];
-  extra.telegraf = {
-    enable = true;
-    influxdb2.domain = sentinelCfg.providedDomains.influxdb;
-    influxdb2.organization = "servers";
-    influxdb2.bucket = "telegraf";
-  };
-
-  networking.nftables.firewall.rules = lib.mkForce {
-    sentinel-to-local.allowedTCPPorts = [config.services.adguardhome.settings.bind_port];
-  };
+  meta.wireguard-proxy.sentinel.allowedTCPPorts = [config.services.adguardhome.settings.bind_port];

   nodes.sentinel = {
-    providedDomains.adguard = adguardhomeDomain;
+    networking.providedDomains.adguard = adguardhomeDomain;

     services.nginx = {
       upstreams.adguardhome = {
@@ -43,7 +23,7 @@ in {
       };
       virtualHosts.${adguardhomeDomain} = {
         forceSSL = true;
-        useACMEHost = sentinelCfg.lib.extra.matchingWildcardCert adguardhomeDomain;
+        useACMEWildcardHost = true;
         oauth2.enable = true;
         oauth2.allowedGroups = ["access_adguardhome"];
         locations."/" = {
@@ -57,7 +37,7 @@ in {
   services.adguardhome = {
     enable = true;
     settings = {
-      bind_host = config.extra.wireguard.proxy-sentinel.ipv4;
+      bind_host = config.meta.wireguard.proxy-sentinel.ipv4;
       bind_port = 3000;
       #dns = {
       #  edns_client_subnet.enabled = false;
hosts/ward/microvms/common.nix — new file (18 lines)

@@ -0,0 +1,18 @@
+{nodes, ...}: let
+  sentinelCfg = nodes.sentinel.config;
+in {
+  meta.wireguard-proxy.sentinel = {};
+  meta.promtail = {
+    enable = true;
+    proxy = "sentinel";
+  };
+
+  # Connect safely via wireguard to skip authentication
+  networking.hosts.${sentinelCfg.meta.wireguard.proxy-sentinel.ipv4} = [sentinelCfg.networking.providedDomains.influxdb];
+  meta.telegraf = {
+    enable = true;
+    influxdb2.domain = sentinelCfg.networking.providedDomains.influxdb;
+    influxdb2.organization = "servers";
+    influxdb2.bucket = "telegraf";
+  };
+}
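With this shared guest module in place, each microvm's service config shrinks to its actual service settings plus a single declaration of which ports the sentinel proxy may reach, as the surrounding hunks show. The per-service part reduces to a one-liner of this form (port value illustrative):

  meta.wireguard-proxy.sentinel.allowedTCPPorts = [3000];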
@@ -9,27 +9,7 @@
   sentinelCfg = nodes.sentinel.config;
   grafanaDomain = "grafana.${sentinelCfg.repo.secrets.local.personalDomain}";
 in {
-  imports = [
-    ../../../../modules/proxy-via-sentinel.nix
-  ];
-
-  extra.promtail = {
-    enable = true;
-    proxy = "sentinel";
-  };
-
-  # Connect safely via wireguard to skip authentication
-  networking.hosts.${sentinelCfg.extra.wireguard.proxy-sentinel.ipv4} = [sentinelCfg.providedDomains.influxdb];
-  extra.telegraf = {
-    enable = true;
-    influxdb2.domain = sentinelCfg.providedDomains.influxdb;
-    influxdb2.organization = "servers";
-    influxdb2.bucket = "telegraf";
-  };
-
-  networking.nftables.firewall.rules = lib.mkForce {
-    sentinel-to-local.allowedTCPPorts = [config.services.grafana.settings.server.http_port];
-  };
+  meta.wireguard-proxy.sentinel.allowedTCPPorts = [config.services.grafana.settings.server.http_port];

   age.secrets.grafana-secret-key = {
     rekeyFile = ./secrets/grafana-secret-key.age;
@@ -55,7 +35,7 @@ in {
     config.age.secrets.grafana-loki-basic-auth-password
   ];

-  providedDomains.grafana = grafanaDomain;
+  networking.providedDomains.grafana = grafanaDomain;

   services.nginx = {
     upstreams.grafana = {
@@ -67,7 +47,7 @@ in {
     };
     virtualHosts.${grafanaDomain} = {
       forceSSL = true;
-      useACMEHost = sentinelCfg.lib.extra.matchingWildcardCert grafanaDomain;
+      useACMEWildcardHost = true;
       locations."/" = {
         proxyPass = "http://grafana";
         proxyWebsockets = true;
@@ -87,7 +67,7 @@ in {
       root_url = "https://${grafanaDomain}";
       enforce_domain = true;
       enable_gzip = true;
-      http_addr = config.extra.wireguard.proxy-sentinel.ipv4;
+      http_addr = config.meta.wireguard.proxy-sentinel.ipv4;
       http_port = 3001;
     };

@@ -111,9 +91,9 @@ in {
       client_secret = "aZKNCM6KpjBy4RqwKJXMLXzyx9rKH6MZTFk4wYrKWuBqLj6t"; # TODO temporary test not a real secret
       scopes = "openid email profile";
       login_attribute_path = "prefered_username";
-      auth_url = "https://${sentinelCfg.providedDomains.kanidm}/ui/oauth2";
-      token_url = "https://${sentinelCfg.providedDomains.kanidm}/oauth2/token";
-      api_url = "https://${sentinelCfg.providedDomains.kanidm}/oauth2/openid/grafana/userinfo";
+      auth_url = "https://${sentinelCfg.networking.providedDomains.kanidm}/ui/oauth2";
+      token_url = "https://${sentinelCfg.networking.providedDomains.kanidm}/oauth2/token";
+      api_url = "https://${sentinelCfg.networking.providedDomains.kanidm}/oauth2/openid/grafana/userinfo";
       use_pkce = true;
       # Allow mapping oauth2 roles to server admin
       allow_assign_grafana_admin = true;
@@ -128,7 +108,7 @@ in {
         name = "InfluxDB (servers)";
         type = "influxdb";
         access = "proxy";
-        url = "https://${sentinelCfg.providedDomains.influxdb}";
+        url = "https://${sentinelCfg.networking.providedDomains.influxdb}";
         orgId = 1;
         secureJsonData.token = "$__file{${config.age.secrets.grafana-influxdb-token.path}}";
         jsonData.version = "Flux";
@@ -140,7 +120,7 @@ in {
         name = "Loki";
         type = "loki";
         access = "proxy";
-        url = "https://${sentinelCfg.providedDomains.loki}";
+        url = "https://${sentinelCfg.networking.providedDomains.loki}";
         orgId = 1;
         basicAuth = true;
         basicAuthUser = "${nodeName}+grafana-loki-basic-auth-password";
@@ -10,31 +10,10 @@
   influxdbPort = 8086;
 in {
   microvm.mem = 1024;
+  meta.wireguard-proxy.sentinel.allowedTCPPorts = [influxdbPort];

-  imports = [
-    ../../../../modules/proxy-via-sentinel.nix
-  ];
-
-  extra.promtail = {
-    enable = true;
-    proxy = "sentinel";
-  };
-
-  # Connect safely via wireguard to skip authentication
-  networking.hosts.${sentinelCfg.extra.wireguard.proxy-sentinel.ipv4} = [sentinelCfg.providedDomains.influxdb];
-  extra.telegraf = {
-    enable = true;
-    influxdb2.domain = sentinelCfg.providedDomains.influxdb;
-    influxdb2.organization = "servers";
-    influxdb2.bucket = "telegraf";
-  };
-
-  networking.nftables.firewall.rules = lib.mkForce {
-    sentinel-to-local.allowedTCPPorts = [influxdbPort];
-  };

   nodes.sentinel = {
-    providedDomains.influxdb = influxdbDomain;
+    networking.providedDomains.influxdb = influxdbDomain;

     services.nginx = {
       upstreams.influxdb = {
@@ -46,7 +25,7 @@ in {
       };
       virtualHosts.${influxdbDomain} = {
         forceSSL = true;
-        useACMEHost = sentinelCfg.lib.extra.matchingWildcardCert influxdbDomain;
+        useACMEWildcardHost = true;
         oauth2.enable = true;
         oauth2.allowedGroups = ["access_influxdb"];
         locations."/" = {
@@ -54,7 +33,7 @@ in {
           proxyWebsockets = true;
           extraConfig = ''
             satisfy any;
-            ${lib.concatMapStrings (ip: "allow ${ip};\n") sentinelCfg.extra.wireguard.proxy-sentinel.server.reservedAddresses}
+            ${lib.concatMapStrings (ip: "allow ${ip};\n") sentinelCfg.meta.wireguard.proxy-sentinel.server.reservedAddresses}
             deny all;
           '';
         };
@@ -66,7 +45,7 @@ in {
     enable = true;
     settings = {
       reporting-disabled = true;
-      http-bind-address = "${config.extra.wireguard.proxy-sentinel.ipv4}:${toString influxdbPort}";
+      http-bind-address = "${config.meta.wireguard.proxy-sentinel.ipv4}:${toString influxdbPort}";
     };
   };
@@ -10,27 +10,7 @@
   kanidmDomain = "auth.${sentinelCfg.repo.secrets.local.personalDomain}";
   kanidmPort = 8300;
 in {
-  imports = [
-    ../../../../modules/proxy-via-sentinel.nix
-  ];
-
-  extra.promtail = {
-    enable = true;
-    proxy = "sentinel";
-  };
-
-  # Connect safely via wireguard to skip authentication
-  networking.hosts.${sentinelCfg.extra.wireguard.proxy-sentinel.ipv4} = [sentinelCfg.providedDomains.influxdb];
-  extra.telegraf = {
-    enable = true;
-    influxdb2.domain = sentinelCfg.providedDomains.influxdb;
-    influxdb2.organization = "servers";
-    influxdb2.bucket = "telegraf";
-  };
-
-  networking.nftables.firewall.rules = lib.mkForce {
-    sentinel-to-local.allowedTCPPorts = [kanidmPort];
-  };
+  meta.wireguard-proxy.sentinel.allowedTCPPorts = [kanidmPort];

   age.secrets."kanidm-self-signed.crt" = {
     rekeyFile = ./secrets/kanidm-self-signed.crt.age;
@@ -45,7 +25,7 @@ in {
   };

   nodes.sentinel = {
-    providedDomains.kanidm = kanidmDomain;
+    networking.providedDomains.kanidm = kanidmDomain;

     services.nginx = {
       upstreams.kanidm = {
@@ -57,7 +37,7 @@ in {
       };
       virtualHosts.${kanidmDomain} = {
         forceSSL = true;
-        useACMEHost = sentinelCfg.lib.extra.matchingWildcardCert kanidmDomain;
+        useACMEWildcardHost = true;
         locations."/".proxyPass = "https://kanidm";
         # Allow using self-signed certs to satisfy kanidm's requirement
         # for TLS connections. (Although this is over wireguard anyway)
@@ -76,7 +56,7 @@ in {
       origin = "https://${kanidmDomain}";
       tls_chain = config.age.secrets."kanidm-self-signed.crt".path;
       tls_key = config.age.secrets."kanidm-self-signed.key".path;
-      bindaddress = "${config.extra.wireguard.proxy-sentinel.ipv4}:${toString kanidmPort}";
+      bindaddress = "${config.meta.wireguard.proxy-sentinel.ipv4}:${toString kanidmPort}";
       trust_x_forward_for = true;
     };
   };
@@ -8,30 +8,10 @@
   sentinelCfg = nodes.sentinel.config;
   lokiDomain = "loki.${sentinelCfg.repo.secrets.local.personalDomain}";
 in {
-  imports = [
-    ../../../../modules/proxy-via-sentinel.nix
-  ];
-
-  extra.promtail = {
-    enable = true;
-    proxy = "sentinel";
-  };
-
-  # Connect safely via wireguard to skip authentication
-  networking.hosts.${sentinelCfg.extra.wireguard.proxy-sentinel.ipv4} = [sentinelCfg.providedDomains.influxdb];
-  extra.telegraf = {
-    enable = true;
-    influxdb2.domain = sentinelCfg.providedDomains.influxdb;
-    influxdb2.organization = "servers";
-    influxdb2.bucket = "telegraf";
-  };
-
-  networking.nftables.firewall.rules = lib.mkForce {
-    sentinel-to-local.allowedTCPPorts = [config.services.loki.configuration.server.http_listen_port];
-  };
+  meta.wireguard-proxy.sentinel.allowedTCPPorts = [config.services.loki.configuration.server.http_listen_port];

   nodes.sentinel = {
-    providedDomains.loki = lokiDomain;
+    networking.providedDomains.loki = lokiDomain;

     age.secrets.loki-basic-auth-hashes = {
       rekeyFile = ./secrets/loki-basic-auth-hashes.age;
@@ -52,7 +32,7 @@ in {
       };
       virtualHosts.${lokiDomain} = {
         forceSSL = true;
-        useACMEHost = sentinelCfg.lib.extra.matchingWildcardCert lokiDomain;
+        useACMEWildcardHost = true;
         locations."/" = {
           proxyPass = "http://loki";
           proxyWebsockets = true;
@@ -86,7 +66,7 @@ in {
     auth_enabled = false;

     server = {
-      http_listen_address = config.extra.wireguard.proxy-sentinel.ipv4;
+      http_listen_address = config.meta.wireguard.proxy-sentinel.ipv4;
       http_listen_port = 3100;
       log_level = "warn";
     };
@@ -8,40 +8,21 @@
   sentinelCfg = nodes.sentinel.config;
   vaultwardenDomain = "pw.${sentinelCfg.repo.secrets.local.personalDomain}";
 in {
-  imports = [
-    ../../../../modules/proxy-via-sentinel.nix
-  ];
-
-  extra.promtail = {
-    enable = true;
-    proxy = "sentinel";
-  };
-
-  # Connect safely via wireguard to skip authentication
-  networking.hosts.${sentinelCfg.extra.wireguard.proxy-sentinel.ipv4} = [sentinelCfg.providedDomains.influxdb];
-  extra.telegraf = {
-    enable = true;
-    influxdb2.domain = sentinelCfg.providedDomains.influxdb;
-    influxdb2.organization = "servers";
-    influxdb2.bucket = "telegraf";
-  };
+  meta.wireguard-proxy.sentinel.allowedTCPPorts = [
+    config.services.vaultwarden.config.rocketPort
+    config.services.vaultwarden.config.websocketPort
+  ];

   age.secrets.vaultwarden-env = {
     rekeyFile = ./secrets/vaultwarden-env.age;
     mode = "440";
     group = "vaultwarden";
   };

-  networking.nftables.firewall.rules = lib.mkForce {
-    sentinel-to-local.allowedTCPPorts = [
-      config.services.vaultwarden.config.rocketPort
-      config.services.vaultwarden.config.websocketPort
-    ];
-  };
-
   nodes.sentinel = {
-    providedDomains.vaultwarden = vaultwardenDomain;
+    networking.providedDomains.vaultwarden = vaultwardenDomain;

+    services.nginx = {
       upstreams.vaultwarden = {
         servers."${config.services.vaultwarden.config.rocketAddress}:${toString config.services.vaultwarden.config.rocketPort}" = {};
         extraConfig = ''
@@ -58,7 +39,7 @@ in {
       };
       virtualHosts.${vaultwardenDomain} = {
         forceSSL = true;
-        useACMEHost = sentinelCfg.lib.extra.matchingWildcardCert vaultwardenDomain;
+        useACMEWildcardHost = true;
         extraConfig = ''
           client_max_body_size 256M;
         '';
@@ -73,6 +54,7 @@ in {
         };
       };
     };
+  };

   services.vaultwarden = {
     enable = true;
@@ -84,9 +66,9 @@ in {
     webVaultEnabled = true;

     websocketEnabled = true;
-    websocketAddress = config.extra.wireguard.proxy-sentinel.ipv4;
+    websocketAddress = config.meta.wireguard.proxy-sentinel.ipv4;
     websocketPort = 3012;
-    rocketAddress = config.extra.wireguard.proxy-sentinel.ipv4;
+    rocketAddress = config.meta.wireguard.proxy-sentinel.ipv4;
     rocketPort = 8012;

     signupsAllowed = false;
@@ -172,12 +172,12 @@

   systemd.services.kea-dhcp4-server.after = ["sys-subsystem-net-devices-${utils.escapeSystemdPath "lan-self"}.device"];

-  extra.microvms.networking = {
+  meta.microvms.networking = {
     baseMac = config.repo.secrets.local.networking.interfaces.lan.mac;
     macvtapInterface = "lan";
     wireguard.openFirewallRules = ["lan-to-local"];
   };

   # Allow accessing influx
-  extra.wireguard.proxy-sentinel.client.via = "sentinel";
+  meta.wireguard.proxy-sentinel.client.via = "sentinel";
 }
@@ -6,26 +6,25 @@
   ...
 }: {
   imports = [
-    ../common/core
-    ../common/hardware/odroid-n2plus.nix
-    ../common/initrd-ssh.nix
-    ../common/zfs.nix
-    ../common/bios-boot.nix
+    ../../modules/optional/hardware/odroid-n2plus.nix

-    ./fs.nix
-    ./net.nix
+    ../../modules
+    ../../modules/optional/boot-bios.nix
+    ../../modules/optional/initrd-ssh.nix
+    ../../modules/optional/zfs.nix

     #./dnsmasq.nix
     ./esphome.nix
+    ./fs.nix
     ./home-assistant.nix
     ./hostapd.nix
     ./mosquitto.nix
+    ./net.nix
     ./nginx.nix
     ./zigbee2mqtt.nix
   ];

   # TODO boot.loader.grub.devices = ["/dev/disk/by-id/${config.repo.secrets.local.disk.main}"];
-  console.earlySetup = true;

   # Fails if there are no SMART devices
   services.smartd.enable = lib.mkForce false;
@@ -4,9 +4,6 @@
   pkgs,
   ...
 }: {
-  imports = [../../modules/hostapd.nix];
-  disabledModules = ["services/networking/hostapd.nix"];
-
   # Associates each known client to a unique password
   age.secrets.wifi-clients.rekeyFile = ./secrets/wifi-clients.age;

modules/config/boot.nix — new file (23 lines)

@@ -0,0 +1,23 @@
+{
+  config,
+  lib,
+  pkgs,
+  ...
+}: {
+  boot = {
+    initrd.systemd = {
+      enable = true;
+      emergencyAccess = config.repo.secrets.global.root.hashedPassword;
+      # TODO good idea? targets.emergency.wants = ["network.target" "sshd.service"];
+      extraBin.ip = "${pkgs.iproute2}/bin/ip";
+    };
+
+    # NOTE: Add "rd.systemd.unit=rescue.target" to debug initrd
+    kernelParams = ["log_buf_len=10M"];
+    tmp.useTmpfs = true;
+
+    loader.timeout = lib.mkDefault 2;
+  };
+
+  console.earlySetup = true;
+}
modules/config/home-manager.nix — new file (12 lines)

@@ -0,0 +1,12 @@
+{
+  home-manager = {
+    useGlobalPkgs = true;
+    useUserPackages = true;
+    verbose = true;
+  };
+
+  # Required even when using home-manager's zsh module since the /etc/profile load order
+  # is partly controlled by this. See nix-community/home-manager#3681.
+  # TODO remove once we have nushell
+  programs.zsh.enable = true;
+}
@@ -1,10 +1,7 @@
 {
-  config,
   extraLib,
   inputs,
   lib,
-  nodePath,
-  pkgs,
   ...
 }: {
   # IP address math library
@@ -211,6 +208,7 @@
       # Do linear probing. Returns the first unused value at or after the given value.
       probe = avoid: value:
         if lib.elem value avoid
+        # TODO lib.mod
         # Poor man's modulo, because nix has no modulo. Luckily we operate on a residue
         # class of x modulo 2^n, so we can use bitAnd instead.
         then probe avoid (builtins.bitAnd (capacity - 1) (value + 1))
@@ -297,6 +295,7 @@
       # Do linear probing. Returns the first unused value at or after the given value.
       probe = avoid: value:
         if lib.elem value avoid
+        # TODO lib.mod
         # Poor man's modulo, because nix has no modulo. Luckily we operate on a residue
         # class of x modulo 2^n, so we can use bitAnd instead.
         then probe avoid (builtins.bitAnd (capacity - 1) (value + 1))
@@ -332,111 +331,4 @@
       };
     };
   };
-
-  # Define local repo secrets
-  repo.secretFiles = let
-    local = nodePath + "/secrets/local.nix.age";
-  in
-    {
-      global = ../../../secrets/global.nix.age;
-    }
-    // lib.optionalAttrs (nodePath != null && lib.pathExists local) {inherit local;};
-
-  # Setup secret rekeying parameters
-  age.rekey = {
-    inherit
-      (inputs.self.secretsConfig)
-      masterIdentities
-      extraEncryptionPubkeys
-      ;
-
-    # This is technically impure, but intended. We need to rekey on the
-    # current system due to yubikey availability.
-    forceRekeyOnSystem = builtins.extraBuiltins.unsafeCurrentSystem;
-    hostPubkey = let
-      pubkeyPath =
-        if nodePath == null
-        then null
-        else nodePath + "/secrets/host.pub";
-    in
-      lib.mkIf (pubkeyPath != null && lib.pathExists pubkeyPath) pubkeyPath;
-  };
-
-  age.generators.dhparams.script = {pkgs, ...}: "${pkgs.openssl}/bin/openssl dhparam 4096";
-  age.generators.basic-auth.script = {
-    pkgs,
-    lib,
-    decrypt,
-    deps,
-    ...
-  }:
-    lib.flip lib.concatMapStrings deps ({
-      name,
-      host,
-      file,
-    }: ''
-      echo " -> Aggregating [32m"${lib.escapeShellArg host}":[m[33m"${lib.escapeShellArg name}"[m" >&2
-      ${decrypt} ${lib.escapeShellArg file} \
-        | ${pkgs.apacheHttpd}/bin/htpasswd -niBC 12 ${lib.escapeShellArg host}"+"${lib.escapeShellArg name} \
-        || die "Failure while aggregating basic auth hashes"
-    '');
-
-  boot = {
-    initrd.systemd = {
-      enable = true;
-      emergencyAccess = config.repo.secrets.global.root.hashedPassword;
-      # TODO good idea? targets.emergency.wants = ["network.target" "sshd.service"];
-      extraBin = with pkgs; {
-        ip = "${iproute2}/bin/ip";
-      };
-    };
-
-    # Add "rd.systemd.unit=rescue.target" to debug initrd
-    kernelParams = ["log_buf_len=10M"];
-    tmp.useTmpfs = true;
-  };
-
-  # Just before switching, remove the agenix directory if it exists.
-  # This can happen when a secret is used in the initrd because it will
-  # then be copied to the initramfs under the same path. This materializes
-  # /run/agenix as a directory which will cause issues when the actual system tries
-  # to create a link called /run/agenix. Agenix should probably fail in this case,
-  # but doesn't and instead puts the generation link into the existing directory.
-  # TODO See https://github.com/ryantm/agenix/pull/187.
-  system.activationScripts.removeAgenixLink.text = "[[ ! -L /run/agenix ]] && [[ -d /run/agenix ]] && rm -rf /run/agenix";
-  system.activationScripts.agenixNewGeneration.deps = ["removeAgenixLink"];
-
-  # Disable sudo which is entirely unnecessary.
-  security.sudo.enable = false;
-
-  time.timeZone = lib.mkDefault "Europe/Berlin";
-  i18n.defaultLocale = "C.UTF-8";
-  console.keyMap = "de-latin1-nodeadkeys";
-
-  systemd.enableUnifiedCgroupHierarchy = true;
-  users.mutableUsers = false;
-
-  users.deterministicIds = let
-    uidGid = id: {
-      uid = id;
-      gid = id;
-    };
-  in {
-    systemd-oom = uidGid 999;
-    systemd-coredump = uidGid 998;
-    sshd = uidGid 997;
-    nscd = uidGid 996;
-    polkituser = uidGid 995;
-    microvm = uidGid 994;
-    promtail = uidGid 993;
-    grafana = uidGid 992;
-    acme = uidGid 991;
-    kanidm = uidGid 990;
-    loki = uidGid 989;
-    vaultwarden = uidGid 988;
-    oauth2_proxy = uidGid 987;
-    influxdb2 = uidGid 986;
-    telegraf = uidGid 985;
-    rtkit = uidGid 984;
-  };
 }
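The probe helper in the two hunks above relies on the power-of-two trick its comment mentions: when the capacity is 2^n, masking with capacity - 1 keeps exactly the low n bits, which is the same as taking the value modulo the capacity. A tiny worked example in Nix, with illustrative numbers (not taken from the repo):

  # capacity = 8 = 2^3, so capacity - 1 = 7 = 0b111 acts as a mask of the low 3 bits
  builtins.bitAnd 7 (5 + 1)  # = 6, same as (5 + 1) mod 8
  builtins.bitAnd 7 (7 + 1)  # = 0, i.e. the probe wraps around to the start of the range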
modules/config/microvms.nix — new file (8 lines)

@@ -0,0 +1,8 @@
+{
+  # If the host defines microvms, ensure that our modules and
+  # some boilerplate is imported automatically.
+  meta.microvms.commonImports = [
+    ../.
+    {home-manager.users.root.home.minimal = true;}
+  ];
+}
modules/config/net.nix — new file (20 lines)

@@ -0,0 +1,20 @@
+{
+  config,
+  lib,
+  nodeName,
+  ...
+}: {
+  systemd.network.enable = true;
+
+  networking = {
+    hostName = nodeName;
+    useDHCP = lib.mkForce false;
+    useNetworkd = true;
+    dhcpcd.enable = false;
+
+    # Rename known network interfaces from local secrets
+    renameInterfacesByMac =
+      lib.mapAttrs (_: v: v.mac)
+      (config.repo.secrets.local.networking.interfaces or {});
+  };
+}
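The renameInterfacesByMac mapping above comes straight out of each host's local secrets; ward's net.nix elsewhere in this commit reads config.repo.secrets.local.networking.interfaces.lan.mac. A hedged sketch of what the decrypted local.nix presumably provides (interface names and MAC addresses are made-up placeholders, since the real values are encrypted):

  {
    networking.interfaces = {
      # hypothetical example values
      lan.mac = "52:54:00:12:34:56";
      wan.mac = "52:54:00:ab:cd:ef";
    };
  }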
modules/config/nftables.nix (new file, 70 lines)
@@ -0,0 +1,70 @@
{
  config,
  lib,
  ...
}: {
  networking.nftables = {
    stopRuleset = lib.mkDefault ''
      table inet filter {
        chain input {
          type filter hook input priority filter; policy drop;
          ct state invalid drop
          ct state {established, related} accept

          iifname lo accept
          meta l4proto ipv6-icmp accept
          meta l4proto icmp accept
          tcp dport ${toString (lib.head config.services.openssh.ports)} accept
        }
        chain forward {
          type filter hook forward priority filter; policy drop;
        }
        chain output {
          type filter hook output priority filter; policy accept;
        }
      }
    '';

    firewall = {
      enable = true;

      # TODO mkForce nftables
      zones = lib.mkForce {
        local.localZone = true;
      };

      rules = lib.mkForce {
        icmp = {
          early = true;
          after = ["ct"];
          from = "all";
          to = ["local"];
          extraLines = [
            "ip6 nexthdr icmpv6 icmpv6 type { echo-request, destination-unreachable, packet-too-big, time-exceeded, parameter-problem, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept"
            "ip protocol icmp icmp type { echo-request, destination-unreachable, router-advertisement, time-exceeded, parameter-problem } accept"
            #"ip6 saddr fe80::/10 ip6 daddr fe80::/10 udp dport 546 accept" # (dhcpv6)
          ];
        };

        ssh = {
          early = true;
          after = ["ct"];
          from = "all";
          to = ["local"];
          allowedTCPPorts = config.services.openssh.ports;
        };

        untrusted-to-local = {
          from = ["untrusted"];
          to = ["local"];

          inherit
            (config.networking.firewall)
            allowedTCPPorts
            allowedUDPPorts
            ;
        };
      };
    };
  };
}
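Because the untrusted-to-local rule inherits the classic firewall port lists, a host can keep setting the familiar options and still end up with nftables rules. A minimal sketch (ports illustrative):

# Opened from the "untrusted" zone towards "local" by the rule above.
networking.firewall.allowedTCPPorts = [80 443];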
@@ -24,7 +24,6 @@
      (lib.mkAfter ["mdns"])
    ];

-  # TODO mkForce nftables
   # Open port 5353 for any interfaces that have MulticastDNS enabled
   networking.nftables.firewall = let
     # Determine all networks that have MulticastDNS enabled
@@ -38,7 +37,7 @@
    knownMacs =
      lib.mapAttrs'
      (k: v: lib.nameValuePair v k)
-      config.extra.networking.renameInterfacesByMac;
+      config.networking.renameInterfacesByMac;
    # A helper that returns the link name for the given mac address,
    # or null if it doesn't exist or the given mac was null.
    linkNameFor = mac:
@@ -61,6 +60,7 @@
      );
  in
    lib.mkIf (mdnsInterfaces != []) {
+      # TODO mkForce nftables
      zones = lib.mkForce {
        mdns.interfaces = mdnsInterfaces;
      };
modules/config/secrets.nix (new file, 64 lines)
@@ -0,0 +1,64 @@
{
  inputs,
  lib,
  nodePath,
  ...
}: {
  # Define local repo secrets
  repo.secretFiles = let
    local = nodePath + "/secrets/local.nix.age";
  in
    {
      global = ../../secrets/global.nix.age;
    }
    // lib.optionalAttrs (nodePath != null && lib.pathExists local) {inherit local;};

  # Setup secret rekeying parameters
  age.rekey = {
    inherit
      (inputs.self.secretsConfig)
      masterIdentities
      extraEncryptionPubkeys
      ;

    # This is technically impure, but intended. We need to rekey on the
    # current system due to yubikey availability.
    forceRekeyOnSystem = builtins.extraBuiltins.unsafeCurrentSystem;
    hostPubkey = let
      pubkeyPath =
        if nodePath == null
        then null
        else nodePath + "/secrets/host.pub";
    in
      lib.mkIf (pubkeyPath != null && lib.pathExists pubkeyPath) pubkeyPath;
  };

  age.generators.dhparams.script = {pkgs, ...}: "${pkgs.openssl}/bin/openssl dhparam 4096";
  age.generators.basic-auth.script = {
    pkgs,
    lib,
    decrypt,
    deps,
    ...
  }:
    lib.flip lib.concatMapStrings deps ({
      name,
      host,
      file,
    }: ''
      echo " -> Aggregating [32m"${lib.escapeShellArg host}":[m[33m"${lib.escapeShellArg name}"[m" >&2
      ${decrypt} ${lib.escapeShellArg file} \
        | ${pkgs.apacheHttpd}/bin/htpasswd -niBC 12 ${lib.escapeShellArg host}"+"${lib.escapeShellArg name} \
        || die "Failure while aggregating basic auth hashes"
    '');

  # Just before switching, remove the agenix directory if it exists.
  # This can happen when a secret is used in the initrd because it will
  # then be copied to the initramfs under the same path. This materializes
  # /run/agenix as a directory which will cause issues when the actual system tries
  # to create a link called /run/agenix. Agenix should probably fail in this case,
  # but doesn't and instead puts the generation link into the existing directory.
  # TODO See https://github.com/ryantm/agenix/pull/187.
  system.activationScripts.removeAgenixLink.text = "[[ ! -L /run/agenix ]] && [[ -d /run/agenix ]] && rm -rf /run/agenix";
  system.activationScripts.agenixNewGeneration.deps = ["removeAgenixLink"];
}
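A rough sketch of how the basic-auth generator might be consumed on a node. The secret name is made up and the dependency wiring between secrets is not shown in this diff, so treat this as an assumption rather than the repository's actual usage; the generator-by-name pattern mirrors the dhparams usage in modules/meta/nginx.nix below:

age.secrets.loki-basic-auth-hashes = {
  rekeyFile = ./secrets/loki-basic-auth-hashes.age;
  generator = "basic-auth"; # aggregates htpasswd lines from the dependent password secrets
  mode = "440";
  group = "nginx";
};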
modules/config/system.nix (new file, 10 lines)
@@ -0,0 +1,10 @@
{lib, ...}: {
  # Disable sudo which is entirely unnecessary.
  security.sudo.enable = false;

  time.timeZone = lib.mkDefault "Europe/Berlin";
  i18n.defaultLocale = "C.UTF-8";
  console.keyMap = "de-latin1-nodeadkeys";

  systemd.enableUnifiedCgroupHierarchy = true;
}
modules/config/users.nix (new file, 27 lines)
@@ -0,0 +1,27 @@
{
  users.mutableUsers = false;

  users.deterministicIds = let
    uidGid = id: {
      uid = id;
      gid = id;
    };
  in {
    systemd-oom = uidGid 999;
    systemd-coredump = uidGid 998;
    sshd = uidGid 997;
    nscd = uidGid 996;
    polkituser = uidGid 995;
    microvm = uidGid 994;
    promtail = uidGid 993;
    grafana = uidGid 992;
    acme = uidGid 991;
    kanidm = uidGid 990;
    loki = uidGid 989;
    vaultwarden = uidGid 988;
    oauth2_proxy = uidGid 987;
    influxdb2 = uidGid 986;
    telegraf = uidGid 985;
    rtkit = uidGid 984;
  };
}
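Extending the map later is a one-line change. A hypothetical new service would simply pick the next free id below the existing range, for example:

users.deterministicIds.gitea = {
  uid = 983;
  gid = 983;
};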
modules/default.nix (new file, 42 lines)
@@ -0,0 +1,42 @@
{
  imports = [
    ../users/root

    ./config/boot.nix
    ./config/home-manager.nix
    ./config/impermanence.nix
    ./config/inputrc.nix
    ./config/issue.nix
    ./config/lib.nix
    ./config/microvms.nix
    ./config/net.nix
    ./config/nftables.nix
    ./config/nix.nix
    ./config/resolved.nix
    ./config/secrets.nix
    ./config/ssh.nix
    ./config/system.nix
    ./config/users.nix
    ./config/xdg.nix

    ./meta/microvms.nix
    ./meta/nginx.nix
    ./meta/oauth2-proxy.nix
    ./meta/promtail.nix
    ./meta/telegraf.nix
    ./meta/wireguard-proxy.nix
    ./meta/wireguard.nix

    ./networking/hostapd.nix
    ./networking/interface-naming.nix
    ./networking/provided-domains.nix

    ./repo/distributed-config.nix
    ./repo/meta.nix
    ./repo/secrets.nix

    ./security/acme-wildcard.nix

    ./system/deteministic-ids.nix
  ];
}
@@ -1,136 +0,0 @@ (deleted file)
{
  config,
  lib,
  nodePath,
  ...
}: let
  inherit
    (lib)
    assertMsg
    filter
    flip
    genAttrs
    hasInfix
    head
    mapAttrs
    mapAttrs'
    mdDoc
    mkIf
    mkOption
    nameValuePair
    optionals
    removeSuffix
    types
    ;
in {
  options.extra = {
    acme.wildcardDomains = mkOption {
      default = [];
      example = ["example.org"];
      type = types.listOf types.str;
      description = mdDoc ''
        All domains for which a wildcard certificate will be generated.
        This will define the given `security.acme.certs` and set `extraDomainNames` correctly,
        but does not fill any options such as credentials or dnsProvider. These have to be set
        individually for each cert by the user or via `security.acme.defaults`.
      '';
    };
  };

  options.services.nginx.virtualHosts = mkOption {
    type = types.attrsOf (types.submodule ({config, ...}: {
      options.recommendedSecurityHeaders = mkOption {
        type = types.bool;
        default = true;
        description = mdDoc ''Whether to add additional security headers to the "/" location.'';
      };
      config = mkIf config.recommendedSecurityHeaders {
        locations."/".extraConfig = ''
          # Enable HTTP Strict Transport Security (HSTS)
          add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

          # Minimize information leaked to other domains
          add_header Referrer-Policy "origin-when-cross-origin";

          add_header X-XSS-Protection "1; mode=block";
          add_header X-Frame-Options "DENY";
          add_header X-Content-Type-Options "nosniff";
        '';
      };
    }));
  };

  config = {
    lib.extra = {
      # For a given domain, this searches for a matching wildcard acme domain that
      # would include the given domain. If no such domain is defined in
      # extra.acme.wildcardDomains, an assertion is triggered.
      matchingWildcardCert = domain: let
        matchingCerts =
          filter
          (x: !hasInfix "." (removeSuffix ".${x}" domain))
          config.extra.acme.wildcardDomains;
      in
        assert assertMsg (matchingCerts != []) "No wildcard certificate was defined that matches ${domain}";
          head matchingCerts;
    };

    security.acme.certs = genAttrs config.extra.acme.wildcardDomains (domain: {
      extraDomainNames = ["*.${domain}"];
    });

    age.secrets = mkIf config.services.nginx.enable {
      "dhparams.pem" = {
        rekeyFile = nodePath + "/secrets/dhparams.pem.age";
        generator = "dhparams";
        mode = "440";
        group = "nginx";
      };
    };

    # Sensible defaults for nginx
    services.nginx = mkIf config.services.nginx.enable {
      recommendedBrotliSettings = true;
      recommendedGzipSettings = true;
      recommendedOptimisation = true;
      recommendedProxySettings = true;
      recommendedTlsSettings = true;

      # SSL config
      sslCiphers = "EECDH+AESGCM:EDH+AESGCM:!aNULL";
      sslDhparam = config.age.secrets."dhparams.pem".path;
      commonHttpConfig = ''
        log_format json_combined escape=json '{'
          '"time": $msec,'
          '"remote_addr":"$remote_addr",'
          '"status":$status,'
          '"method":"$request_method",'
          '"host":"$host",'
          '"uri":"$request_uri",'
          '"request_size":$request_length,'
          '"response_size":$body_bytes_sent,'
          '"response_time":$request_time,'
          '"referrer":"$http_referer",'
          '"user_agent":"$http_user_agent"'
        '}';
        error_log syslog:server=unix:/dev/log,nohostname;
        access_log syslog:server=unix:/dev/log,nohostname json_combined;
        ssl_ecdh_curve secp384r1;
      '';

      virtualHosts.localhost = {
        locations."= /nginx_status".extraConfig = ''
          allow 127.0.0.0/8;
          deny all;
          stub_status;
        '';
      };
    };

    networking.firewall.allowedTCPPorts = optionals config.services.nginx.enable [80 443];

    services.telegraf.extraConfig.inputs = mkIf config.services.nginx.enable {
      nginx.urls = ["http://localhost/nginx_status"];
    };
  };
}
@@ -35,8 +35,8 @@
    ;

  parentConfig = config;
-  cfg = config.extra.microvms;
-  inherit (config.extra.microvms) vms;
+  cfg = config.meta.microvms;
+  inherit (config.meta.microvms) vms;
  inherit (config.lib) net;

  # Configuration for each microvm
@@ -94,7 +94,7 @@
    nodes = mkMerge config.microvm.vms.${vmName}.config.options.nodes.definitions;

    microvm.vms.${vmName} = let
-      node = import ../nix/generate-node.nix inputs vmCfg.nodeName {
+      node = import ../../nix/generate-node.nix inputs vmCfg.nodeName {
        inherit (vmCfg) system configPath;
      };
      mac = (net.mac.assignMacs "02:01:27:00:00:00" 24 [] (attrNames vms)).${vmName};
@@ -165,7 +165,7 @@
        gc.automatic = mkForce false;
      };

-      extra.networking.renameInterfacesByMac.${vmCfg.networking.mainLinkName} = mac;
+      networking.renameInterfacesByMac.${vmCfg.networking.mainLinkName} = mac;

      systemd.network.networks =
        {
@@ -186,7 +186,7 @@
          # would not come online if the private key wasn't rekeyed yet).
          # FIXME ideally this would be conditional at runtime if the
          # agenix activation had an error, but this is not trivial.
-          ${parentConfig.extra.wireguard."${nodeName}-local-vms".unitConfName} = {
+          ${parentConfig.meta.wireguard."${nodeName}-local-vms".unitConfName} = {
            linkConfig.RequiredForOnline = "no";
          };
        };
@@ -198,7 +198,7 @@
        };
      };

-      extra.wireguard = mkIf vmCfg.localWireguard {
+      meta.wireguard = mkIf vmCfg.localWireguard {
        "${nodeName}-local-vms" = {
          server = {
            host =
@@ -222,7 +222,7 @@ in {
    {microvm.host.enable = vms != {};}
  ];

-  options.extra.microvms = {
+  options.meta.microvms = {
    commonImports = mkOption {
      type = types.listOf types.unspecified;
      default = [];
@@ -362,7 +362,7 @@ in {
  config = mkIf (vms != {}) (
    {
      # Define a local wireguard server to communicate with vms securely
-      extra.wireguard = mkIf (any (x: x.localWireguard) (attrValues vms)) {
+      meta.wireguard = mkIf (any (x: x.localWireguard) (attrValues vms)) {
        "${nodeName}-local-vms" = {
          server = {
            host =
modules/meta/nginx.nix (new file, 79 lines)
@@ -0,0 +1,79 @@
{
  config,
  lib,
  nodePath,
  ...
}: let
  inherit
    (lib)
    mdDoc
    mkIf
    mkOption
    types
    ;
in {
  options.services.nginx.virtualHosts = mkOption {
    type = types.attrsOf (types.submodule ({config, ...}: {
      options.recommendedSecurityHeaders = mkOption {
        type = types.bool;
        default = true;
        description = mdDoc ''Whether to add additional security headers to the "/" location.'';
      };
      config = mkIf config.recommendedSecurityHeaders {
        locations."/".extraConfig = ''
          # Enable HTTP Strict Transport Security (HSTS)
          add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

          # Minimize information leaked to other domains
          add_header Referrer-Policy "origin-when-cross-origin";

          add_header X-XSS-Protection "1; mode=block";
          add_header X-Frame-Options "DENY";
          add_header X-Content-Type-Options "nosniff";
        '';
      };
    }));
  };

  config = mkIf config.services.nginx.enable {
    age.secrets."dhparams.pem" = {
      rekeyFile = nodePath + "/secrets/dhparams.pem.age";
      generator = "dhparams";
      mode = "440";
      group = "nginx";
    };

    # Sensible defaults for nginx
    services.nginx = {
      recommendedBrotliSettings = true;
      recommendedGzipSettings = true;
      recommendedOptimisation = true;
      recommendedProxySettings = true;
      recommendedTlsSettings = true;

      # SSL config
      sslCiphers = "EECDH+AESGCM:EDH+AESGCM:!aNULL";
      sslDhparam = config.age.secrets."dhparams.pem".path;
      commonHttpConfig = ''
        log_format json_combined escape=json '{'
          '"time": $msec,'
          '"remote_addr":"$remote_addr",'
          '"status":$status,'
          '"method":"$request_method",'
          '"host":"$host",'
          '"uri":"$request_uri",'
          '"request_size":$request_length,'
          '"response_size":$body_bytes_sent,'
          '"response_time":$request_time,'
          '"referrer":"$http_referer",'
          '"user_agent":"$http_user_agent"'
        '}';
        error_log syslog:server=unix:/dev/log,nohostname;
        access_log syslog:server=unix:/dev/log,nohostname json_combined;
        ssl_ecdh_curve secp384r1;
      '';
    };

    networking.firewall.allowedTCPPorts = [80 443];
  };
}
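The recommendedSecurityHeaders toggle defaults to true, so only hosts that need to opt out have to say anything. A sketch with a placeholder domain:

services.nginx.virtualHosts."app.example.org".recommendedSecurityHeaders = false;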
@@ -17,9 +17,9 @@
    types
    ;

-  cfg = config.extra.oauth2_proxy;
+  cfg = config.meta.oauth2_proxy;
in {
-  options.extra.oauth2_proxy = {
+  options.meta.oauth2_proxy = {
    enable = mkEnableOption (mdDoc "oauth2 proxy");

    cookieDomain = mkOption {
@@ -141,7 +141,7 @@ in {

      virtualHosts.${cfg.portalDomain} = {
        forceSSL = true;
-        useACMEHost = config.lib.extra.matchingWildcardCert cfg.portalDomain;
+        useACMEWildcardHost = true;
        oauth2.enable = true;
        locations."/".proxyPass = "http://oauth2_proxy";
      };
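For context, enabling the renamed module on a host looks roughly like this. Only options visible in these hunks are used and the domains are placeholders, not values from this commit:

meta.oauth2_proxy = {
  enable = true;
  cookieDomain = "example.org";
  portalDomain = "oauth2.example.org";
};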
@@ -15,9 +15,9 @@
    types
    ;

-  cfg = config.extra.promtail;
+  cfg = config.meta.promtail;
in {
-  options.extra.promtail = {
+  options.meta.promtail = {
    enable = mkEnableOption (mdDoc "promtail to push logs to a loki instance.");
    proxy = mkOption {
      type = types.str;
@@ -50,7 +50,7 @@ in {
        {
          basic_auth.username = "${nodeName}+promtail-loki-basic-auth-password";
          basic_auth.password_file = config.age.secrets.promtail-loki-basic-auth-password.path;
-          url = "https://${nodes.${cfg.proxy}.config.providedDomains.loki}/loki/api/v1/push";
+          url = "https://${nodes.${cfg.proxy}.config.networking.providedDomains.loki}/loki/api/v1/push";
        }
      ];
@@ -17,9 +17,9 @@
    types
    ;

-  cfg = config.extra.telegraf;
+  cfg = config.meta.telegraf;
in {
-  options.extra.telegraf = {
+  options.meta.telegraf = {
    enable = mkEnableOption (mdDoc "telegraf to push metrics to influx.");
    influxdb2 = {
      domain = mkOption {
@@ -111,12 +111,23 @@ in {
        path_smartctl = "${pkgs.smartmontools}/bin/smartctl";
        use_sudo = true;
      };
+    }
+    // optionalAttrs config.services.nginx.enable {
+      nginx.urls = ["http://localhost/nginx_status"];
      # TODO } // optionalAttrs config.services.iwd.enable {
      # TODO wireless = { };
    };
  };
  };

+  services.nginx.virtualHosts = mkIf config.services.telegraf.enable {
+    localhost.locations."= /nginx_status".extraConfig = ''
+      allow 127.0.0.0/8;
+      deny all;
+      stub_status;
+    '';
+  };
+
  systemd.services.telegraf = {
    path = [
      "/run/wrappers"
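A minimal consumer of the renamed telegraf module, using only options that appear in this diff; the domain is a placeholder:

meta.telegraf = {
  enable = true;
  influxdb2.domain = "influxdb.example.org";
};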
modules/meta/wireguard-proxy.nix (new file, 81 lines)
@@ -0,0 +1,81 @@
{
  config,
  lib,
  nodes,
  ...
}: let
  inherit
    (lib)
    attrNames
    flip
    mdDoc
    mkForce
    mkIf
    mkMerge
    mkOption
    types
    ;

  cfg = config.meta.wireguard-proxy;
in {
  options.meta.wireguard-proxy = mkOption {
    default = {};
    description = mdDoc ''
      Each entry here will set up a wireguard network that connects via the
      given node and adds appropriate firewall zones. There will be a zone for
      the interface and one for the proxy server specifically. A corresponding
      rule `''${name}-to-local` will be created to easily expose services to the proxy.
    '';
    type = types.attrsOf (types.submodule ({name, ...}: {
      options = {
        nicName = mkOption {
          type = types.str;
          default = "proxy-${name}";
          description = mdDoc "The name for the created wireguard network and its interface";
        };
        allowedTCPPorts = mkOption {
          type = types.listOf types.int;
          default = [];
          description = mdDoc "Convenience option to allow incoming TCP connections from the proxy server (just the server, not the entire network).";
        };
        allowedUDPPorts = mkOption {
          type = types.listOf types.int;
          default = [];
          description = mdDoc "Convenience option to allow incoming UDP connections from the proxy server (just the server, not the entire network).";
        };
      };
    }));
  };

  config = mkIf (cfg != {}) {
    meta.wireguard = mkMerge (flip map (attrNames cfg) (proxy: {
      ${cfg.${proxy}.nicName}.client.via = proxy;
    }));

    networking.nftables.firewall = mkMerge (flip map (attrNames cfg) (proxy: {
      zones = mkForce {
        # Parent zone for the whole interface
        ${cfg.${proxy}.nicName}.interfaces = [cfg.${proxy}.nicName];
        # Subzone to specifically target the proxy host
        ${proxy} = {
          parent = cfg.${proxy}.nicName;
          ipv4Addresses = [nodes.${proxy}.config.meta.wireguard.${cfg.${proxy}.nicName}.ipv4];
          ipv6Addresses = [nodes.${proxy}.config.meta.wireguard.${cfg.${proxy}.nicName}.ipv6];
        };
      };

      rules = mkForce {
        "${proxy}-to-local" = {
          from = [proxy];
          to = ["local"];

          inherit
            (cfg.${proxy})
            allowedTCPPorts
            allowedUDPPorts
            ;
        };
      };
    }));
  };
}
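This module replaces the per-host proxy-sentinel boilerplate that is deleted further down. A consuming host now only needs a declaration along these lines (the port is illustrative):

meta.wireguard-proxy.sentinel = {
  # Expose a local service to the proxy host only.
  allowedTCPPorts = [3000];
};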
@@ -44,7 +44,7 @@
    ;

  inherit (config.lib) net;
-  cfg = config.extra.wireguard;
+  cfg = config.meta.wireguard;

  configForNetwork = wgName: wgCfg: let
    inherit
@@ -258,7 +258,7 @@
    };
  };
in {
-  options.extra.wireguard = mkOption {
+  options.meta.wireguard = mkOption {
    default = {};
    description = "Configures wireguard networks via systemd-networkd.";
    type = types.lazyAttrsOf (types.submodule ({
@@ -1193,4 +1193,5 @@ in {
    };
  };
  };
+  disabledModules = ["services/networking/hostapd.nix"];
}
@@ -1,3 +1,4 @@
+# Provides an option to easily rename interfaces by their mac addresses.
{
  config,
  extraLib,
@@ -15,7 +16,7 @@
    types
    ;

-  cfg = config.extra.networking.renameInterfacesByMac;
+  cfg = config.networking.renameInterfacesByMac;

  interfaceNamesUdevRules = pkgs.writeTextFile {
    name = "interface-names-udev-rules";
@@ -25,7 +26,7 @@
    destination = "/etc/udev/rules.d/01-interface-names.rules";
  };
in {
-  options.extra.networking.renameInterfacesByMac = mkOption {
+  options.networking.renameInterfacesByMac = mkOption {
    default = {};
    example = {lan = "11:22:33:44:55:66";};
    description = "Allows naming of network interfaces based on their physical address";
@@ -1,5 +1,5 @@
{lib, ...}: {
-  options.providedDomains = lib.mkOption {
+  options.networking.providedDomains = lib.mkOption {
    type = lib.types.attrsOf lib.types.str;
    default = {};
    description = "Registry of domains that this host 'provides' (that refer to this host with some functionality). For easy cross-node referencing.";
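The option is set on the providing node and read cross-node, as the promtail hunk above does. A sketch with a placeholder domain:

# On the node that hosts loki:
networking.providedDomains.loki = "loki.example.org";
# Other nodes can then reference it through the node registry, e.g.
# nodes.sentinel.config.networking.providedDomains.loki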
modules/optional/boot-bios.nix (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  boot.loader.grub = {
    enable = true;
    efiSupport = false;
    configurationLimit = 32;
  };
}
modules/optional/boot-efi.nix (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  boot.loader = {
    systemd-boot.enable = true;
    systemd-boot.configurationLimit = 32;
    efi.canTouchEfiVariables = true;
  };
}
@@ -1,6 +1,7 @@
{
  imports = [
    ./documentation.nix
+    ./yubikey.nix
  ];

  environment.enableDebugInfo = true;
@@ -13,7 +13,4 @@
  };

  services.xserver.videoDrivers = ["nvidia"];
-
-  virtualisation.docker.enableNvidia = true;
-  virtualisation.podman.enableNvidia = true;
}
@@ -1,25 +0,0 @@ (deleted file)
{
  lib,
  nodes,
  ...
}: {
  extra.wireguard.proxy-sentinel.client.via = "sentinel";

  networking.nftables.firewall = {
    zones = lib.mkForce {
      proxy-sentinel.interfaces = ["proxy-sentinel"];
      sentinel = {
        parent = "proxy-sentinel";
        ipv4Addresses = [nodes.sentinel.config.extra.wireguard.proxy-sentinel.ipv4];
        ipv6Addresses = [nodes.sentinel.config.extra.wireguard.proxy-sentinel.ipv6];
      };
    };

    rules = lib.mkForce {
      sentinel-to-local = {
        from = ["sentinel"];
        to = ["local"];
      };
    };
  };
}
@@ -11,7 +11,6 @@
    attrNames
    concatMap
    elem
-    filter
    mdDoc
    mkOption
    mkOptionType
@@ -37,7 +36,7 @@ in {
    allNodes = attrNames colmenaNodes;
    isColmenaNode = elem nodeName allNodes;
    foreignConfigs = concatMap (n: colmenaNodes.${n}.config.nodes.${nodeName} or []) allNodes;
-    toplevelAttrs = ["age" "providedDomains" "networking" "systemd" "services"];
+    toplevelAttrs = ["age" "networking" "systemd" "services"];
  in
    optionalAttrs isColmenaNode (mergeToplevelConfigs toplevelAttrs (
      foreignConfigs
modules/repo/meta.nix (new file, 19 lines)
@@ -0,0 +1,19 @@
{}
# TODO define special args in a more documented and readOnly accessible way
@@ -88,7 +88,7 @@ in {
  # at least via its parent folder so it can access relative files.
  nix.extraOptions = mkIf cfg.defineNixExtraBuiltins ''
    plugin-files = ${pkgs.nix-plugins}/lib/nix/plugins
-    extra-builtins-file = ${../nix}/extra-builtins.nix
+    extra-builtins-file = ${inputs.self.outPath}/nix/extra-builtins.nix
  '';
};
}
modules/security/acme-wildcard.nix (new file, 59 lines)
@@ -0,0 +1,59 @@
{
  config,
  lib,
  ...
}: let
  inherit
    (lib)
    assertMsg
    filter
    genAttrs
    hasInfix
    head
    mdDoc
    mkIf
    mkOption
    removeSuffix
    types
    ;
in {
  options.security.acme.wildcardDomains = mkOption {
    default = [];
    example = ["example.org"];
    type = types.listOf types.str;
    description = mdDoc ''
      All domains for which a wildcard certificate will be generated.
      This will define the given `security.acme.certs` and set `extraDomainNames` correctly,
      but does not fill any options such as credentials or dnsProvider. These have to be set
      individually for each cert by the user or via `security.acme.defaults`.
    '';
  };

  options.services.nginx.virtualHosts = mkOption {
    type = types.attrsOf (types.submodule (submod: {
      options.useACMEWildcardHost = mkOption {
        type = types.bool;
        default = false;
        description = mdDoc ''Automatically set useACMEHost with the correct wildcard domain for the virtualHost's main domain.'';
      };
      config = let
        # This retrieves all matching wildcard certs that would include
        # the corresponding domain. If no such domain is defined in
        # security.acme.wildcardDomains, an assertion is triggered.
        domain = submod.config._module.args.name;
        matchingCerts =
          filter
          (x: !hasInfix "." (removeSuffix ".${x}" domain))
          config.security.acme.wildcardDomains;
      in
        mkIf submod.config.useACMEWildcardHost {
          useACMEHost = assert assertMsg (matchingCerts != []) "No wildcard certificate was defined that matches ${domain}";
            head matchingCerts;
        };
    }));
  };

  config.security.acme.certs = genAttrs config.security.acme.wildcardDomains (domain: {
    extraDomainNames = ["*.${domain}"];
  });
}
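Typical usage as seen in the oauth2-proxy hunk above, spelled out with placeholder domains:

security.acme.wildcardDomains = ["example.org"];
services.nginx.virtualHosts."grafana.example.org" = {
  forceSSL = true;
  useACMEWildcardHost = true; # resolves to useACMEHost = "example.org"
};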
@@ -1,39 +0,0 @@ (deleted file)
{
  self,
  pkgs,
  ...
}: let
  inherit
    (pkgs.lib)
    concatStringsSep
    filterAttrs
    hasInfix
    mapAttrsToList
    ;
  mapAttrsToLines = f: attrs: concatStringsSep "\n" (mapAttrsToList f attrs);
  filterMapAttrsToLines = filter: f: attrs: concatStringsSep "\n" (mapAttrsToList f (filterAttrs filter attrs));
  renderNode = nodeName: node: let
    renderNic = nicName: nic: ''
      nic_${nicName}: ${
        if hasInfix "wlan" nicName
        then "📶"
        else "🖧"
      } ${self.hosts.${nodeName}.physicalConnections.${nicName}} {
        shape: sql_table
        MAC: ${nic.matchConfig.MACAddress}
      }
    '';
  in ''
    ${nodeName}: {
      ${filterMapAttrsToLines (_: v: v.matchConfig ? MACAddress) renderNic node.config.systemd.network.networks}
    }
  '';
  # TODO vms
  graph = ''
    ${mapAttrsToLines renderNode self.colmenaNodes}
  '';
in
  pkgs.writeShellScript "draw-graph" ''
    set -euo pipefail
    echo "${graph}"
  ''
@@ -2,8 +2,7 @@
  self,
  pre-commit-hooks,
  ...
-}: system:
-  with self.pkgs.${system}; {
+}: system: {
    pre-commit-check =
      pre-commit-hooks.lib.${system}.run
      {
@@ -230,7 +230,7 @@ in rec {
      inherit (self.nodes.${head participatingNodes}.config.lib) net;

      # Returns the given node's wireguard configuration of this network
-      wgCfgOf = node: self.nodes.${node}.config.extra.wireguard.${wgName};
+      wgCfgOf = node: self.nodes.${node}.config.meta.wireguard.${wgName};

      sortedPeers = peerA: peerB:
        if peerA < peerB
@@ -261,7 +261,7 @@ in rec {
      # All nodes that are part of this network
      participatingNodes =
        filter
-        (n: builtins.hasAttr wgName self.nodes.${n}.config.extra.wireguard)
+        (n: builtins.hasAttr wgName self.nodes.${n}.config.meta.wireguard)
        (attrNames self.nodes);

      # Partition nodes by whether they are servers
@@ -305,7 +305,7 @@ in rec {
        (n:
          filter (x: !types.isLazyValue x)
          (concatLists
-            (self.nodes.${n}.options.extra.wireguard.type.functor.wrapped.getSubOptions (wgCfgOf n)).addresses.definitions))
+            (self.nodes.${n}.options.meta.wireguard.type.functor.wrapped.getSubOptions (wgCfgOf n)).addresses.definitions))
        ++ flatten (concatMap (n: attrValues (wgCfgOf n).server.externalPeers) participatingNodes);

      # The cidrv4 and cidrv6 of the network spanned by all participating peer addresses.