Deployment Plan: The Blueprint for What Runs Where#

What It Is#

Imagine you’re building a house. You tell the architect “I want a garage, a home theater, and a wine cellar,” and they design the foundation that supports what you need — the right thickness under the theater for soundproofing, a reinforced slab for the garage, drainage for the cellar.

The deployment plan works the same way. You tell PSW what apps you want; an AI of your choice designs the LXC targets, the ZFS storage layout, the GPU passthrough, and the NFS wiring that fits those apps onto your hardware. PSW never talks to an LLM itself — it renders a self-contained prompt you paste into whichever AI you already use (ChatGPT, Claude.ai, Gemini, a local LLM, whatever), and it validates the JSON reply against a strict schema before saving anything. See AI Planner for the full protocol.

Where It Fits in the Wizard#

Start → Hardware → Plan → Install → Launch
                    ↑
               You are here

The Plan screen has four substeps:

  1. Pick apps. Core apps are locked on; everything else is a checkbox.
  2. Show prompt. PSW renders the planner prompt and gives you a Copy button.
  3. Paste response. You paste the JSON your AI returned. PSW validates it and surfaces field-level errors if something is wrong — you fix the JSON (or paste the errors back to your AI and re-ask) and re-validate.
  4. Review. Read-only summary of the applied plan.

No API key is collected anywhere in PSW. You bring your own AI.
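
For orientation, here is a rough sketch of the shape a pasted reply might take, assuming the JSON simply mirrors the plan structure shown in the next section; the exact envelope and required fields are defined by the planner schema (see AI Planner):

{
  "targets": {
    "core": {
      "node": "homelab",
      "apps": ["postgres", "traefik", "authelia"],
      "resources": { "cores": 2, "memory_mb": 4096, "root_disk_gb": 20 },
      "rationale": "Shared DB + auth + proxy."
    }
  },
  "cluster": { "roles": { "homelab": "single" } }
}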

What’s in the Deployment Plan#

The deployment plan is a file called deployment-plan.yml saved at the root of your user project. Once you've pasted and applied the AI's response, it looks like this:

user_apps:
  - jellyfin
  - postgres
  - sonarr
  - ...

targets:
  core:
    node: homelab
    apps: [postgres, traefik, authelia, lldap, forgejo, psw_dashboard, dragonfly]
    resources:
      cores: 2
      cpulimit: 2
      memory_mb: 4096
      memory_swap_mb: 2048
      root_disk_gb: 20
    rationale: "Shared DB + auth + proxy — co-located because of inter-service chatter."

  media:
    node: homelab
    apps: [jellyfin, sonarr, radarr, bazarr]
    resources:
      cores: 4
      cpulimit: 4
      memory_mb: 6144
      memory_swap_mb: 3072
      root_disk_gb: 32
    devices:
      gpu:
        pci_address: "0000:00:02.0"
        render_device: /dev/dri/renderD128
        nvidia: false      # NVIDIA AI-class passthrough (CDI)
        amd_rocm: false    # AMD ROCm AI-class passthrough (kfd + dri); mutually exclusive with nvidia
    mounts:
      - source: { kind: nfs, nfs_server: nas, nfs_export: /rpool/media }
        mode: ro
    rationale: "Media apps on the node with the iGPU; library over NFS from the storage node."

cluster:
  roles:
    homelab: single

generated_at: 2026-04-19T12:00:00Z
generator_version: v1
generator_inputs_digest: sha256:abc...

The only user-editable field is user_apps#

You pick apps. Everything else — target names, node assignments, cores/memory/disk, GPU passthrough, USB pinning, NFS wiring — is produced by the AI. If the plan is wrong, you re-ask the AI; you do not hand-edit targets:.

Targets#

A target is an LXC container on a Proxmox node. Each target in targets: is keyed by its name (the AI picks it, e.g. core, media, observability) and holds:

  • node — which physical node it runs on.
  • apps — the apps placed on this target.
  • resources — cores, cpulimit, memory_mb, memory_swap_mb, root_disk_gb.
  • devices — optional GPU passthrough + USB passthroughs (omitted when there are none — no empty usb: [] stubs).
  • mounts — shared storage the target consumes: either a local ZFS dataset (kind: bind) or a peer node’s NFS export (kind: nfs). The LXC mount path and in-container path are not in the plan — the deploy engine derives them from the ZFS dataset’s own mountpoint and each app’s meta.yml → storage[].path, so PSW’s rootless-podman / bind-mount conventions stay out of the user project (see the sketch after this list).
  • rationale — a one-line explanation the Review panel shows you.
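
A minimal sketch of the app side, assuming a hypothetical media app's meta.yml; only the key names storage[].path, resources: and placement: come from this page, so treat the values and any other fields as illustrative:

resources:
  memory_mb: 2048            # sizing hints the planner reads
placement:
  gpu: preferred             # assumed shape, shown only to indicate the section exists
storage:
  - path: /config            # in-container path; the deploy engine pairs it with a bind mount
  - path: /media             # in-container path; satisfied here by the target's NFS mount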

Cluster topology#

For 2+ node deployments, cluster records:

  • roles — each node’s role (single, storage, compute, gpu, mixed).
  • reserved_gpus — AI-class GPUs parked for future AI workloads even if no AI apps are selected today (so psw app plan <ai-app> doesn’t have to evict media later).
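
For a concrete feel, a hypothetical cluster block for a two-node deployment; the role names come from the list above, but the shape of reserved_gpus is an assumption and the schema is authoritative:

cluster:
  roles:
    nas: storage
    homelab: gpu
  reserved_gpus:             # assumed shape: PCI addresses keyed by node
    homelab:
      - "0000:01:00.0"       # parked so a later `psw app plan <ai-app>` doesn't have to evict anything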

NFS exports and mounts live in storage.yml, not here — that’s the one place psw node apply-storage reads from. Each dataset declares its own nfs_export; each client node declares its nfs_mounts at the top level of its storage.yml. Putting them in deployment-plan.yml as well would be duplication, so we don’t.
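
To make the split concrete, a hypothetical pair of excerpts; only nfs_export, nfs_mounts, and where each one lives are taken from this page, the surrounding keys are illustrative:

# nodes/nas/storage.yml (server side): the dataset declares its own export
pools:
  rpool:
    datasets:
      media:
        nfs_export: true     # assumed value shape; the exact export options are schema-defined

# nodes/homelab/storage.yml (client side): mounts declared at the top level
nfs_mounts:
  - nfs_server: nas
    nfs_export: /rpool/media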

What flows downstream#

deployment-plan.yml
    │
    ├── nodes/<name>/storage.yml       (per-node ZFS layout, written by plan --apply)
    │
    ├── nodes/<name>/proxmox-storage.yml (installer-focused, adapted from storage.yml)
    │
    ├── Install step
    │   → Proxmox auto-installer reads proxmox-storage.yml (hdsize, ashift, devices)
    │   → `psw node apply-storage` creates ZFS pools + datasets + NFS exports/mounts
    │
    └── Bootstrap + converge
        → Reads targets[].apps from deployment-plan.yml
        → Provisions LXCs with the resources the AI sized
        → Deploys apps

You select apps once. The planner places them. Every downstream step reads from that plan.
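
For illustration, a hypothetical nodes/homelab/proxmox-storage.yml; only hdsize, ashift and devices are named above, so treat the values and anything else as placeholders:

# Installer-focused file, adapted from storage.yml
devices:
  - /dev/nvme0n1
  - /dev/nvme1n1
ashift: 12                   # ZFS pool sector alignment passed to the auto-installer
hdsize: 512                  # disk size budget handed to the auto-installer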

Iterating#

Running the planner is not deterministic across different AIs (or across re-asks of the same AI), and that’s by design. If the first plan doesn’t feel right:

  1. Click Regenerate (or re-run psw node plan) — the same prompt, a fresh attempt.
  2. Or tweak your AI’s system prompt / switch models / pick a different AI entirely.
  3. Paste a different response. The Review panel shows you what changed.

Nothing on disk moves until you click Validate & save (or run psw node plan --apply), so you can iterate safely.
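
The same loop from a shell, using only the two commands this page names; how the pasted JSON reaches --apply is part of the AI Planner protocol and not shown here:

psw node plan            # re-render the same prompt for a fresh attempt
# paste the prompt into your AI, copy the JSON reply back, re-ask if needed
psw node plan --apply    # validate the reply, then write deployment-plan.yml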

Key Concepts Referenced#

  • AI Planner — the copy/paste protocol, validation rules, and prompt layout
  • Targets — the LXCs the planner creates
  • Core apps — always included, always on the core target
  • App metadata — each app’s resources: + placement: the planner reads
  • Storage — ZFS pools, datasets, bind mounts, NFS
  • Bootstrap, Convergence — what happens after the plan is applied