
NixOS

Overview

I use Nix in my homelab to manage some Proxmox VE LXC containers. The goal is to have all the configuration encoded in .nix files and use them to generate a base image for new containers and to deploy configuration changes to the running ones.

Prerequisites

A machine with either NixOS or the Nix package manager installed, which will be used to run all the commands. The initial image can be generated from a couple of configuration files.

Before we start, let's tell Nix we want to use a couple of experimental features by adding this line to /etc/nix/nix.conf:

extra-experimental-features = nix-command flakes
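
If you would rather not edit nix.conf, the same features can also be enabled per invocation via the --extra-experimental-features flag, for example:

nix --extra-experimental-features "nix-command flakes" \
    run github:nix-community/nixos-generators -- --help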

I also have a shell.nix I can use to load the packages I need by running nix-shell. For a more seamless setup you can also use direnv as described here.

{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  buildInputs = with pkgs; [ colmena ];
}
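
With direnv, a one-line .envrc in the project directory loads that shell automatically on cd, using direnv's built-in use nix helper:

use nix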

I also use Visual Studio Code with Nix IDE and the nil language server configured as follows:

    "nix.enableLanguageServer": true,
    "nix.serverPath": "nil",
    "nix.serverSettings": {
        "nil": {
            "formatting": {
                "command": [
                    "nixpkgs-fmt"
                ]
            }
        }
    }

Base image generation

A base.nix containing non-LXC-specific values:

{
  system.stateVersion = "23.11";
  nix.settings.trusted-users = [ "nixos" ];
  users.users.nixos = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 <REDACTED>"
    ];
  };
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = false;
    settings.KbdInteractiveAuthentication = false;
    settings.PermitRootLogin = "no";
  };
  security.sudo.wheelNeedsPassword = false;
}

Setting up OpenSSH and the user's authorised keys is essential: the deployment process relies on the tooling being able to connect to the machine via SSH so it can apply the changes.
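
Before the first deployment it is worth checking that key-based SSH and passwordless sudo both work; a quick smoke test (using the address of your container):

ssh nixos@192.168.4.93 sudo whoami   # should print: root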

A lxc.nix file containing some LXC-specific pieces of configuration:

{ modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
    ./base.nix
  ];
  boot.isContainer = true;
  # Suppress systemd units that don't work because of LXC
  systemd.suppressedSystemUnits = [
    "dev-mqueue.mount"
    "sys-kernel-debug.mount"
    "sys-fs-fuse-connections.mount"
  ];
}

This could all live in a single file, but it seems neater this way, especially if in the future I decide to also run NixOS VMs.

The image can be generated via:

nix run github:nix-community/nixos-generators -- \
    --format proxmox-lxc --configuration ./lxc.nix
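
The command prints the path of the generated tarball, which then needs to be uploaded to the Proxmox host's template storage. A sketch, assuming $IMAGE holds the printed path and root@proxmox is your Proxmox host:

scp "$IMAGE" root@proxmox:/var/lib/vz/template/cache/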

Note this image is not ready for nixos-rebuild: the first activation will fail. A workaround is to run colmena apply, wait for everything to be copied over, and let the activation fail. At that point the container has everything it needs: reboot it and run colmena apply again, which will succeed. I'm pretty sure there is a better way to fix this.
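
In practice the bootstrap sequence looks like this (100 is an example container ID; pct runs on the Proxmox host, and the reboot can also be done from the UI):

colmena apply        # copies the closure, then fails at activation
pct reboot 100       # reboot the container on the Proxmox host
colmena apply        # now succeeds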

Initial LXC provisioning

I am not doing this with Nix. I deploy using Terraform, but it can also be done via the Proxmox UI. When creating the LXC, I find it works well to start from a very minimal base image like the one described above and then manage all the other settings later.
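
For reference, a container can also be created from the uploaded template with Proxmox's pct CLI. A hypothetical invocation, where all IDs, names and sizes are illustrative (nesting is commonly recommended for systemd-based containers like NixOS):

pct create 100 local:vztmpl/nixos-system-x86_64-linux.tar.xz \
    --hostname host-a \
    --unprivileged 1 \
    --features nesting=1 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --rootfs local-lvm:8 \
    --memory 1024 --cores 2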

Deploying changes to remote servers

For deployments I use Colmena, where I define the machines and their configuration. I could have just used nixos-rebuild, but I chose Colmena because it supports deploying to multiple hosts. I also chose to adopt the flakes approach by adding a flake.nix file:

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  };
  outputs = { nixpkgs, ... }: {
    colmena = {
      meta = {
        nixpkgs = import nixpkgs {
          system = "x86_64-linux";
          overlays = [];
        };
      };

      host-a = { name, nodes, pkgs, ... }: {
        deployment = {
          targetUser = "nixos";
          targetHost = "192.168.4.93";
        };
        imports = [./lxc.nix];
        time.timeZone = "Europe/London";
      };
    };
  };
}

This declares that the machine is reachable at deployment.targetHost = "192.168.4.93". For this to work, ensure it is either a static IP address or that the DHCP server has a reservation in place.

This only configures time.timeZone as an example. In a real scenario, all the configuration of the machine would end up there (or in other files that get imported).
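
For instance, the node could import a hypothetical ./hosts/host-a.nix module holding its real configuration; a minimal sketch:

{ pkgs, ... }:
{
  time.timeZone = "Europe/London";
  environment.systemPackages = with pkgs; [ htop ];
}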

The new state can be built using colmena build and applied to the remote machines via colmena apply.
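
Colmena can also limit a run to specific nodes with --on, which is handy once the flake grows beyond one host:

colmena build
colmena apply --on host-a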


Thanks for reading. Feel free to reach out for any comment or question.
