App Metadata#
What Is It?#
App metadata is the identity card of every app in PSW. It describes everything PSW needs to know about an app: what container image (a packaged snapshot of the app, run by Podman) to use, which port it listens on, what secrets it needs, whether it supports single sign-on, how to back it up, what other apps it connects to, and more.
Think of it like the nutritional label on a food package. You don’t need to open the box to know what’s inside — the label tells you the ingredients, serving size, and allergens. App metadata tells PSW the ingredients (dependencies), serving size (ports and resources), and allergens (hardware requirements) of each app — so it can deploy and integrate it automatically.
Why Does It Exist?#
Without metadata, PSW would need hardcoded logic for every app: “if the app is Sonarr, use port 8989 and connect it to Prowlarr.” That would be fragile, hard to maintain, and impossible to extend.
With metadata, PSW is data-driven. Adding a new app means writing its metadata files — no code changes needed. The convention system, wiring, secrets generation, validation, and convergence all read from metadata to figure out what to do.
The File#
Each app in the catalog has a single configuration file: meta.yml. This unified format contains everything PSW needs — structural metadata (what the app is, what it depends on) and deployment parameters (container image, ports, environment, storage) — in one place.
# sonarr/meta.yml (simplified)
category: Media
description: Sonarr TV show manager
upstream: https://sonarr.tv
requires: [postgres, prowlarr, sabnzbd]
systemd_requires: [postgres]
integrations: [prowlarr, sabnzbd, qbittorrent, ntfy]
image: ghcr.io/linuxserver/sonarr:4.0.17
port: 8989
subdomain: sonarr
env:
  PUID: "1000"
  PGID: "1000"
  TZ: "{{ psw_timezone | default('UTC') }}"
storage:
  - type: config
    path: /config
    local: true
  - type: media
    path: /media
    mode: rw
    required: true
required_secrets: [sonarr_api_key, sonarr_db_password]
readiness:
  port: 8989
  endpoint: /ping
  retries: 30
  delay: 2
monitoring_enabled: true
exporter_image: ghcr.io/onedr0p/exportarr:v2.3.0
setup_reconcilers:
  - type: arr.setup.postgres
    requires: [postgres]
  - type: arr.setup.api_key
integration_reconcilers:
  - type: arr.root_folder
    params: { media_type: tv, media_path: /media/tv }

Notice there’s no prefix on field names — port: 8989, not sonarr_port: 8989. PSW adds the prefix internally when needed (for cross-app variable references in Jinja2 templates).
Some values contain Jinja2 template expressions (like {{ psw_timezone }}). PSW’s deploy engine resolves these at render time — they become concrete values before any file reaches a server.
Rendering is strict: any variable referenced in a template or in a meta.yml expression must be declared somewhere (the app’s own meta.yml, a dependency declared in requires/integrations, or a global like psw_domain). If you need a variable to be optional, either declare it in meta.yml with an empty default (extra_args: '') or guard the reference with | default(...). Missing variables fail the deploy with a clear error instead of silently emitting empty values into config files.
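The strict-rendering rule can be sketched in a few lines. This is an illustrative stdlib Python model, not PSW’s actual engine (which uses Jinja2); the `render` helper and its error message here are hypothetical:

```python
import re

def render(template: str, variables: dict) -> str:
    """Resolve {{ var }} and {{ var | default('x') }} expressions strictly.

    A bare reference to an undeclared variable is an error; a reference
    guarded with | default(...) falls back to its default instead.
    """
    pattern = re.compile(r"\{\{\s*(\w+)\s*(?:\|\s*default\('([^']*)'\)\s*)?\}\}")

    def resolve(match):
        name, fallback = match.group(1), match.group(2)
        if name in variables:
            return str(variables[name])
        if fallback is not None:
            return fallback
        raise ValueError(f"undeclared variable '{name}' in template")

    return pattern.sub(resolve, template)

print(render("TZ={{ psw_timezone | default('UTC') }}", {}))  # TZ=UTC
print(render("port={{ port }}", {"port": 8989}))             # port=8989
# render("x={{ missing }}", {}) raises ValueError, failing the deploy
```

The point the sketch makes is the asymmetry: a missing variable without a default is a hard error, never an empty string.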
Derived Fields — Computed Automatically#
Several deployment settings are computed from other fields instead of being declared separately:
| Derived Field | Rule |
|---|---|
| systemd_requires | Defaults to requires — only declare explicitly when it should be a subset (e.g. Grafana needs postgres at startup but not Prometheus or Loki) |
| load_vars | Automatically includes all requires + integrations + Authelia for SSO apps |
| sso_forward_auth | Derived from sso_type: oauth2/oidc apps handle auth natively (false), all others use forward auth (true) |
| Health check | For apps with a readiness block, the deploy engine uses the same endpoint — no separate declaration needed |
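The rules above amount to a small post-processing step over the parsed meta.yml. A sketch, with a hypothetical function name and assuming exactly the rules in the table:

```python
def derive_fields(meta: dict) -> dict:
    """Compute derived deployment fields from a parsed meta.yml dict."""
    derived = dict(meta)
    # systemd_requires defaults to requires unless declared explicitly
    derived.setdefault("systemd_requires", list(meta.get("requires", [])))
    # load_vars: all requires + integrations, plus Authelia for SSO apps
    load_vars = list(meta.get("requires", [])) + list(meta.get("integrations", []))
    if meta.get("sso_type") not in (None, "none"):
        load_vars.append("authelia")
    derived["load_vars"] = load_vars
    # oauth2/oidc apps handle auth natively; everyone else uses forward auth
    derived["sso_forward_auth"] = meta.get("sso_type") not in ("oauth2", "oidc")
    return derived

sonarr = derive_fields({
    "requires": ["postgres", "prowlarr", "sabnzbd"],
    "integrations": ["ntfy"],
    "sso_type": "proxy",
})
print(sonarr["systemd_requires"])   # ['postgres', 'prowlarr', 'sabnzbd']
print(sonarr["sso_forward_auth"])   # True (proxy apps use forward auth)
```

Because the fields are computed, an app author never declares the same dependency list twice.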
What Metadata Describes#
Identity#
| Field | What It Means | Example |
|---|---|---|
| image | The container image to run — a packaged snapshot of the app and everything it needs, pulled from a registry and run by Podman on the target | ghcr.io/linuxserver/sonarr:4.0.17 |
| port | The port the app listens on | 8989 |
| subdomain | The subdomain for web access (used by Traefik for routing) | sonarr → sonarr.yourdomain.ca |
| category | Which group the app belongs to in the wizard’s app catalog — also used by the planner for target grouping | core, media, observability, home-automation, security, ai, infra |
| description | What the app does (human-readable) | “TV show management and automation…” |
| upstream | Link to the app’s official project page | https://sonarr.tv |
Dependencies and Integrations#
| Field | What It Means | Example |
|---|---|---|
| requires | Apps that must be installed first (hard dependencies) | [postgres, prowlarr, sabnzbd] |
| integrations | Apps this one can wire to (optional connections) | [prowlarr, sabnzbd, qbittorrent, ntfy] |
The difference: requires means “won’t work without it” — PSW validates this. integrations means “connects to it if available” — PSW silently skips if the other app isn’t deployed.
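That difference can be expressed as a tiny validation pass. A sketch (the function name and return shape are hypothetical):

```python
def check_links(app: dict, deployed: set) -> tuple:
    """Missing hard requires become validation errors;
    integrations are simply filtered down to what is deployed."""
    missing = [dep for dep in app.get("requires", []) if dep not in deployed]
    active = [peer for peer in app.get("integrations", []) if peer in deployed]
    return missing, active

sonarr = {"requires": ["postgres", "prowlarr", "sabnzbd"],
          "integrations": ["prowlarr", "sabnzbd", "qbittorrent", "ntfy"]}
missing, active = check_links(sonarr, {"postgres", "prowlarr", "sabnzbd"})
print(missing)  # [] (all hard dependencies present)
print(active)   # ['prowlarr', 'sabnzbd'] (qbittorrent, ntfy silently dropped)
```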
Capabilities#
An app can advertise what capability it provides to other apps:
# postgres/meta.yml
provides:
  - database

This is how PSW knows which app plays which role without hardcoding names. When another app declares needs_database: true, PSW checks that its systemd_requires includes at least one app declaring provides: [database] — so tomorrow you could swap PostgreSQL for a different database backend by changing that one app’s metadata, not by editing PSW code.
Apps that need a database can also declare how that database should be created:
# sonarr/meta.yml
needs_database: true
database_provisioning:
  strategy: servarr

Most apps use the default single strategy (one database named after the app). Some apps, like the *arr apps, need a custom layout (servarr), so the metadata declares that instead of PSW keeping a hardcoded app list in the deploy engine.
Apps can also declare db_teardown to tell PSW how to clean up their database on psw app remove: none (no database), standard (drop one database + one user), or arr (drop the servarr pair).
provides is a plain list of capability names. Today database is the only one in use; the mechanism is there so future capabilities (queue, object-store, …) slot in the same way.
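The capability check described above might look like this in outline; `validate_database_capability` is a hypothetical name for illustration:

```python
def validate_database_capability(app: dict, catalog: dict) -> None:
    """An app with needs_database must have a systemd_requires entry
    that declares provides: [database]. The lookup is by capability,
    never by app name."""
    if not app.get("needs_database"):
        return
    providers = [dep for dep in app.get("systemd_requires", [])
                 if "database" in catalog.get(dep, {}).get("provides", [])]
    if not providers:
        raise ValueError(
            f"{app['name']}: needs_database but no dependency provides 'database'")

catalog = {"postgres": {"provides": ["database"]}}
sonarr = {"name": "sonarr", "needs_database": True,
          "systemd_requires": ["postgres"]}
validate_database_capability(sonarr, catalog)  # passes silently
```

Swapping the database backend means changing which catalog entry declares the capability; this check never mentions postgres by name.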
Secrets#
| Field | What It Means | Example |
|---|---|---|
| required_secrets | Secrets PSW auto-generates when you add the app | [sonarr_api_key, sonarr_db_password] |
| user_provided_secrets | Secrets you must supply (API tokens, licenses) | [cloudflare_api_token] |
Storage#
Each app declares what persistent storage it needs:
storage:
  - type: config     # App configuration files
    path: /config
    local: true      # Stored on the target's local disk
  - type: media      # Shared media library
    path: /media
    mode: rw         # Read-write access
    required: true   # Target must have this storage available

Storage types include config, data, media, downloads, database, cache, and more. PSW uses this to set up the right volume mounts during deployment.
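Translating a storage[] declaration into container volume mounts could look like the following sketch. The host-path layout (`app_dir` for local volumes, a pool map for shared types) is an assumption for illustration, not PSW’s actual layout:

```python
def volume_args(storage: list, app_dir: str, pools: dict) -> list:
    """Translate a storage[] declaration into podman -v arguments."""
    args = []
    for entry in storage:
        mode = entry.get("mode", "rw")
        if entry.get("local"):
            # local volumes live under the app's own directory (assumed layout)
            host = f"{app_dir}/{entry['type']}"
        else:
            # shared types (media, downloads, ...) map to storage pools
            host = pools[entry["type"]]
        args += ["-v", f"{host}:{entry['path']}:{mode}"]
    return args

storage = [
    {"type": "config", "path": "/config", "local": True},
    {"type": "media", "path": "/media", "mode": "rw", "required": True},
]
print(volume_args(storage, "/opt/psw/sonarr", {"media": "/tank/media"}))
# ['-v', '/opt/psw/sonarr/config:/config:rw', '-v', '/tank/media:/media:rw']
```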
Routing and Authentication#
| Field | What It Means |
|---|---|
| subdomain | Enables automatic HTTPS routing via Traefik |
| sso_type | How the app integrates with Authelia for SSO: "oauth2" (native OIDC), "proxy" (forward auth), or "none" |
| routing_mode | Controls how the app’s Traefik route is generated. Values: standard (auto-generated, default), custom (app handles its own routing), forward_auth_provider (the app that provides forward-auth middleware for all other apps), dual_router (two routers: one public API, one SSO-protected UI) |
Apps that declare a subdomain automatically get an HTTPS route. Apps that declare an SSO type automatically get single sign-on configured.
Monitoring#
| Field | What It Means |
|---|---|
| monitoring_enabled | Whether Prometheus should scrape this app for metrics |
| exporter_image | Optional sidecar container that exports metrics (for apps without built-in /metrics) |
Backup#
Apps declare what to back up for the backup convention:
backup:
  database: true       # Dump the PostgreSQL database
  volumes:
    - path: /config
      priority: critical
      exclude:
        - "logs/"
        - "MediaCover/"

Hardware Requirements#
Hardware needs live inside the placement: block (see “Placement — Where the App Should Land” below) as an ordered preference list. The planner walks the list top-to-bottom, stops at the first path some node can satisfy, and pins the app to that node. Two examples:
- Frigate (NVR with neural-network detection) prefers an AI-class GPU but happily falls back to a Coral USB accelerator: [{gpu_class: ai}, {usb_class: coral}].
- Zigbee2MQTT only knows one path — a Zigbee dongle: [{usb_class: zigbee}].
Whether the planner is allowed to skip the hardware entirely (CPU-only fallback) is a per-app hardware_required flag. Set true for apps that genuinely cannot run without hardware (Ollama needs a GPU; Zigbee2MQTT needs a Zigbee dongle); leave it false (the default) when CPU is a slower-but-acceptable fallback (Jellyfin without a media GPU, Frigate with neither GPU nor Coral).
PSW checks nodes/<node>/hardware.yml in your user project to make sure the node actually carries one of the alternatives before deploying.
Resources — What the App Needs to Run#
Every app declares a resources: block so the AI planner can size its LXC target. Think of this as the app’s appetite card.
resources:
  memory_mb: 512          # Steady-state RAM (MiB)
  memory_peak_mb: 1024    # Burst ceiling (scrubs, scans, vacuums)
  cpu_weight: 100         # Relative CPU weight: 100 = normal, 200 = DB, 400 = transcoder
  storage_estimate_gb:    # Per-class disk estimate (GB)
    config: 2
    media: 0
    downloads: 0

The planner sums these numbers across all apps landing on the same target, adds overhead, and picks cores / RAM / root-disk accordingly. storage_estimate_gb keys must match the app’s storage[] types.
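The summation step can be sketched directly; the overhead constant and the exact rounding rules are assumptions, not the planner’s real numbers:

```python
def size_target(apps: list, overhead_mb: int = 256) -> dict:
    """Sum per-app resource declarations for one LXC target."""
    memory = sum(a["resources"]["memory_mb"] for a in apps) + overhead_mb
    peak = sum(a["resources"].get("memory_peak_mb", a["resources"]["memory_mb"])
               for a in apps) + overhead_mb
    weight = sum(a["resources"].get("cpu_weight", 100) for a in apps)
    disk_gb = sum(sum(a["resources"].get("storage_estimate_gb", {}).values())
                  for a in apps)
    return {"memory_mb": memory, "memory_peak_mb": peak,
            "cpu_weight": weight, "root_disk_gb": disk_gb}

apps = [
    {"resources": {"memory_mb": 512, "memory_peak_mb": 1024,
                   "cpu_weight": 100, "storage_estimate_gb": {"config": 2}}},
    {"resources": {"memory_mb": 256, "cpu_weight": 100,
                   "storage_estimate_gb": {"config": 1}}},
]
print(size_target(apps))
# {'memory_mb': 1024, 'memory_peak_mb': 1536, 'cpu_weight': 200, 'root_disk_gb': 3}
```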
Placement — Where the App Should Land#
Every app also declares a placement: block telling the planner where it wants to live.
placement:
  category: media                 # Which target group: core, media, observability,
                                  # home-automation, security, ai, or infra
  hardware:                       # Ordered preference: first satisfiable path wins.
    - gpu_class: ai               # 1st choice — AI-class GPU (≥8 GB VRAM)
    - usb_class: coral            # 2nd choice — Coral USB accelerator
  hardware_required: false        # true = refuse to deploy without one of the
                                  # alternatives; false = CPU fallback OK.
  nfs_tolerant: true              # Safe on NFS, or demands local ZFS (databases)?
  requires_unique_target: false   # Force onto its own LXC (e.g. postgres)
  affinity:
    prefer_same_target: [radarr, lidarr, prowlarr]
    prefer_same_node: []
    avoid_same_target: []

Only category is mandatory. hardware defaults to an empty list (CPU-only, the majority of apps); the field exists so hardware-pinned apps (media transcoders, AI workloads, Zigbee/Z-Wave bridges, Frigate’s GPU+Coral preference) can express their constraints.
Homepage Widget#
Apps can declare how they appear on the Homepage dashboard:
homepage:
  type: sonarr          # Widget type (determines what stats to show)
  category: Media       # Dashboard section
  description: TV Shows # Short label

Readiness Probe#
Before wiring runs, PSW needs to know when an app is ready to accept connections:
readiness:
  port: 8989        # Port to check
  endpoint: /ping   # HTTP endpoint to probe
  retries: 30       # How many times to try
  delay: 2          # Seconds between retries

This prevents wiring from failing because an app is still starting up.
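The probe loop is a plain retry-with-delay pattern. A sketch, with the HTTP call abstracted behind a `probe` callable so the retry semantics stand on their own:

```python
import time

def wait_ready(probe, retries: int = 30, delay: float = 2.0) -> bool:
    """Poll a probe callable until it succeeds or retries are exhausted.

    `probe` stands in for an HTTP GET against http://host:port/endpoint
    returning True on a successful response.
    """
    for attempt in range(retries):
        if probe():
            return True
        if attempt < retries - 1:
            time.sleep(delay)
    return False

# Simulate an app that starts answering on the third poll:
answers = iter([False, False, True])
print(wait_ready(lambda: next(answers), retries=5, delay=0))  # True
```

With the defaults from the readiness block above (30 retries, 2-second delay), an app gets roughly a minute to come up before wiring gives up.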
Aggregator Capability#
Aggregator apps (like Traefik, Authelia, Prometheus) declare how they collect convention files:
# traefik/meta.yml
aggregator:
  convention: routing
  collect:
    source_subdir: routing
    file_glob: "*.yml"
    dest_subdir: dynamic
  sync:
    strategy: dir
    restart_service: traefik

This tells PSW: “Traefik handles the routing convention. Collect *.yml files from each app’s routing/ folder, put them in dynamic/, and restart Traefik when they change.”
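In outline, the collect step is a glob-copy into the aggregator’s directory. A sketch with an assumed on-disk layout (one folder per app, convention files in a subdirectory); file naming and the exact sync strategy are illustrative:

```python
import shutil
from pathlib import Path

def collect(apps_root: Path, dest: Path,
            source_subdir: str = "routing", file_glob: str = "*.yml") -> list:
    """Gather each app's convention files into the aggregator's directory.

    Files are prefixed with the app name so two apps can both ship
    a route.yml without colliding (naming scheme is an assumption).
    """
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in sorted(apps_root.glob(f"*/{source_subdir}/{file_glob}")):
        target = dest / f"{src.parent.parent.name}-{src.name}"
        shutil.copy2(src, target)
        copied.append(target)
    return copied  # the caller restarts the service if anything changed
```

The same shape generalizes to any convention: only `source_subdir`, `file_glob`, and the destination change per aggregator.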
Reconciler Declarations#
Apps declare their automation directly in meta.yml using two separate sections — one for each lifecycle phase. This split is semantic: the two kinds of reconciler run at different times and have different responsibilities.
# sonarr/meta.yml
setup_reconcilers:
  - type: arr.setup.postgres
    requires: [postgres]
  - type: arr.setup.api_key
  - type: arr.setup.auth
integration_reconcilers:
  - type: arr.root_folder
    params:
      media_type: tv
      media_path: /media/tv
      api_version: v3
  - type: arr.prowlarr_app
    requires: [prowlarr]
    params:
      api_version: v3
      sync_categories: [5000, 5010, 5020, 5030, 5040, 5045, 5050]
  - type: arr.download_client.sabnzbd
    requires: [sabnzbd]
    params:
      media_type: tv
      api_version: v3
  - type: arr.ntfy
    requires: [ntfy]
    params:
      api_version: v3

| Section | When it runs | What it’s for |
|---|---|---|
| setup_reconcilers | Inline during deploy, between “service ready” and “sidecars start” | Configures the app itself — admin account, API key, internal auth mode, database connection |
| integration_reconcilers | Post-deploy, in the integration pass after every target has deployed | Connects the app to other apps — registers Sonarr in Prowlarr, adds a download client, creates notification hooks |
Each entry has the same shape in either section:
| Field | What It Means |
|---|---|
| type | Which reconciler to use (e.g. arr.setup.api_key, arr.root_folder, forgejo.authelia) |
| requires | Apps that must be deployed for this reconciler to run — checked before execution, not at runtime |
| params | Configuration passed to the reconciler (media type, API version, etc.) |
The requires field replaces runtime dependency checks. If a required app isn’t deployed, the reconciler is silently skipped — no error, no wasted work. See wiring for the reasoning behind the split and a full tour of how the two sections work.
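The skip rule and the two-section split can be modeled in a few lines. Note this sketch runs both sections in one call for illustration; in PSW they run at different lifecycle points. `run` stands in for invoking the named reconciler:

```python
def run_reconcilers(meta: dict, deployed: set, run) -> dict:
    """Execute both reconciler sections, skipping entries whose
    `requires` lists an app that is not deployed."""
    report = {"ran": [], "skipped": []}
    for section in ("setup_reconcilers", "integration_reconcilers"):
        for rec in meta.get(section, []):
            if all(dep in deployed for dep in rec.get("requires", [])):
                run(rec["type"], rec.get("params", {}))
                report["ran"].append(rec["type"])
            else:
                # required app absent: skip silently, no error
                report["skipped"].append(rec["type"])
    return report

meta = {
    "setup_reconcilers": [{"type": "arr.setup.api_key"}],
    "integration_reconcilers": [
        {"type": "arr.prowlarr_app", "requires": ["prowlarr"]},
        {"type": "arr.ntfy", "requires": ["ntfy"]},
    ],
}
report = run_reconcilers(meta, {"postgres", "prowlarr"}, lambda t, p: None)
print(report)  # ntfy is not deployed, so arr.ntfy lands in 'skipped'
```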
Special Flags and Lifecycle Fields#
| Flag | What It Means | Example Apps |
|---|---|---|
| bootstrap_only | Deployed only during bootstrap, not via psw app add | PostgreSQL, Forgejo, Traefik |
| broadcast | Automatically deployed to every managed target | Alloy, Node Exporter |
| homepage_visible | Whether the app appears on the Homepage dashboard (default: true). Set to false for infrastructure apps that don’t need a dashboard entry | Alloy, Homepage itself |
| deploy_priority | Controls deployment order — lower values deploy first (default: 100). Used during bootstrap to ensure dependencies are ready | PostgreSQL (0), Traefik (10) |
| git_remote | Declares that an app can host the project’s config Git remote, including the admin user and where bootstrap should persist API/runner tokens | Forgejo |
| setup_callback | Filename of a callback file that captures secrets generated during first deployment (e.g. API keys created by the app on first boot) | Jellyfin, Home Assistant |
| reset_invalidates_secrets | List of secret keys that become invalid when the app is reset and should be removed | Forgejo (forgejo_runner_token) |
| preserve_on_reset | Volumes or files to save before resetting an app and restore afterwards (e.g. TLS certificates) | Traefik (Let’s Encrypt certs) |
| managed_by | Marks the app as plumbing owned by a higher-level feature. The CLI hides it from psw app list and refuses psw app add; users interact with the feature instead (e.g. psw remote expose) | Pangolin, Newt (both managed_by: remote-access) |
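Of these, deploy_priority is the easiest to picture in code. A sketch of the ordering rule (the alphabetical tie-break is an assumption for determinism, not documented behavior):

```python
def deploy_order(apps: dict) -> list:
    """Sort app names by deploy_priority: default 100, lower deploys first."""
    return sorted(apps,
                  key=lambda name: (apps[name].get("deploy_priority", 100), name))

catalog = {
    "postgres": {"deploy_priority": 0},
    "traefik": {"deploy_priority": 10},
    "sonarr": {},                        # no declaration: default 100
    "forgejo": {"deploy_priority": 20},
}
print(deploy_order(catalog))  # ['postgres', 'traefik', 'forgejo', 'sonarr']
```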
How PSW Uses Metadata#
Metadata flows through the entire system:
- When you add an app — PSW reads metadata to know which secrets to generate, which dependencies to validate, and what files to create in your user project
- During validation — the project graph loads all metadata to check dependencies, port conflicts, hardware availability, and convention consistency
- During convergence — metadata drives which conventions to generate (routing, SSO, monitoring, backup, homepage), which wiring to run, how databases are provisioned, and how to verify readiness
- On the dashboard — metadata provides app descriptions, categories, and upstream links for the web UI
Key Ideas#
- Data-driven — PSW’s behavior is defined by metadata, not hardcoded per-app logic
- One file, one model — everything about an app lives in a single meta.yml, loaded into a single validated model. Derived fields are computed automatically to eliminate duplication
- Single source of truth — each piece of configuration lives in exactly one place. No field is declared twice, no data is split across files
- Declarative — apps declare what they need; PSW figures out how to provide it
- Validated — metadata is checked at load time using Pydantic — invalid metadata fails early with clear errors, including cross-field consistency checks (e.g. needs_database requires postgres in systemd dependencies)
- Extensible — adding a new app means writing one meta.yml file, not modifying PSW code