# komodo LXC

## Overview
| Property | Value |
|---|---|
| Hostname | komodo |
| IP Address | 192.168.0.105 |
| VMID | 105 |
| OS | Alpine Linux v3.23 |
| Kernel | 6.17.4-1-pve |
| CPU | 1 core |
| RAM | 32 GB |
| Swap | 8 GB |
| Disk | 10 GB (local-lvm, 37% used) |
| Purpose | Komodo deployment and infrastructure management platform |
## Running Services

| Service | Description |
|---|---|
| `sshd` | OpenSSH server |
| `crond` | Scheduled tasks |
| Docker daemon | Container runtime |
| `tailscaled` | Tailscale daemon (Tailscale IP: 100.86.108.33) |
## Open Ports
| Port | Protocol | Service |
|---|---|---|
| 22 | TCP | SSH |
| 9120 | TCP | Komodo Core web UI and API |
## Docker Stack
All three Komodo components run as Docker containers from a single Compose stack.
### Containers

| Container | Image | Port | Description |
|---|---|---|---|
| `komodo-core-1` | `ghcr.io/moghtech/komodo-core:2` | 9120 | Core API server and web UI |
| `komodo-mongo-1` | `mongo` | 27017 (internal) | MongoDB - stores all Komodo state |
| `komodo-periphery-1` | `ghcr.io/moghtech/komodo-periphery:2` | 8120 (internal) | Local periphery agent |
### Docker Volumes

| Volume | Description |
|---|---|
| `komodo_mongo-data` | MongoDB data directory |
| `komodo_mongo-config` | MongoDB configuration |
| `komodo_keys` | Core/Periphery PKI key storage (v2) |
## Komodo Configuration
| Setting | Value |
|---|---|
| Database | MongoDB at mongo:27017 |
| Auth | Local auth enabled |
| OIDC / OAuth | Disabled |
| Monitoring interval | 15 seconds |
| JWT TTL | 1 day |
| First server | https://periphery:8120 (local agent) |
| `KOMODO_HOST` | http://192.168.0.105:9120 |
| `TZ` | Europe/Budapest |
| `KOMODO_DISABLE_USER_REGISTRATION` | true |
| `KOMODO_ENABLE_NEW_USERS` | false |
## Architecture
Komodo is a self-hosted alternative to tools like Portainer or Dockge with a focus on GitOps-style deployments. It consists of:
- Core - Central server. Manages resources (servers, stacks, builds). Exposes the web UI on port 9120.
- Periphery - Lightweight agent installed on each managed server. Executes actions on behalf of Core (deploy stacks, restart containers, collect stats).
- MongoDB - Stores all state: servers, stacks, alerts, resource definitions.
In v2, Core generates a PKI keypair on startup (`/config/keys/core.key` + `core.pub`). Each Periphery must be configured with the Core's public key (`core_public_keys`) to accept incoming connections.
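If the startup log has scrolled away, the public key can also be read straight out of the keys volume. A quick check, assuming the key path stated above and the container name from the Containers table:

```bash
# print the Core public key from the keys volume
docker exec komodo-core-1 cat /config/keys/core.pub
```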
## Managed Servers

| Server | Address | Periphery type | Notes |
|---|---|---|---|
| Local | https://periphery:8120 | Docker container (`komodo-periphery-1`) | Built-in local agent on the komodo LXC |
| LXC 100 | outbound → http://192.168.0.105:9120 | systemd `periphery.service` | Main Docker host - 18 stacks |
| Nobara | https://192.168.0.100:8120 | systemd `periphery.service` | Desktop PC, not 24/7 |
| VPS | outbound via Tailscale → http://100.86.108.33:9120 | systemd `periphery.service` | Hetzner VPS - Pangolin stack |
| Minecraft | outbound → http://192.168.0.105:9120 | systemd `periphery.service` | LXC 112 - Minecraft server |
| HAOS | - | Not supported | Home Assistant OS is a locked-down Alpine VM - no periphery install possible |
## Periphery PKI configuration (v2)
Each managed host must have the Core public key in its periphery config. Retrieve it from the Core startup log or from Settings in the Komodo UI.
**LXC 100 (outbound mode)** (`/etc/komodo/periphery.config.toml`):

```toml
core_public_keys = ["MCowBQYDK2VuAyEAanLhSIyYAQmX7NLhn1PH+fiTClnhp+jrv5BPAnKgdCM="]
core_addresses = ["http://192.168.0.105:9120"]
connect_as = "LXC 100"
```
**Nobara (inbound mode)** (`/etc/komodo/periphery.config.toml`) - Core connects inbound to `https://192.168.0.100:8120`, so only the Core public key is required:
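```toml
# minimal inbound-mode config (sketch) -- same Core public key as LXC 100 above
core_public_keys = ["MCowBQYDK2VuAyEAanLhSIyYAQmX7NLhn1PH+fiTClnhp+jrv5BPAnKgdCM="]
```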
**Local periphery container** - configured via `/etc/komodo/periphery.config.toml` on LXC 105, mounted into the container at `/config/config.toml`.
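To confirm the mount is in place, a read-only check (container name from the Containers table above):

```bash
# show the bind mounts of the local periphery container
docker inspect komodo-periphery-1 --format '{{ json .Mounts }}'
```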
## Updating
Use the community addon script (already set up as a shell command):
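```bash
# installed as a shell command by the community addon script (see Lessons Learned)
update_komodo
```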
This downloads the latest upstream mongo.compose.yaml, migrates compose.env as needed, pulls new images, and restarts the stack. Backups of both files are created before any changes.
For manual updates:

```bash
cd /opt/komodo
docker compose -p komodo -f mongo.compose.yaml --env-file compose.env pull
docker compose -p komodo -f mongo.compose.yaml --env-file compose.env up -d
```
Current version: v2.1.2
## Adding a new managed server
1. Install periphery on the target host (see the sketch below).
2. Add the Core public key to `/etc/komodo/periphery.config.toml`.
3. Restart and enable the service.
4. Add the server in the Komodo UI: Servers → New Server → `https://<ip>:8120`.
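A consolidated sketch of the flow for an inbound-mode server. The installer URL follows the upstream Komodo docs and is an assumption here - verify it before running; the key is the Core public key from the PKI section above:

```bash
# 1. Install periphery (Python installer; needs sudo -- see Lessons Learned)
#    URL assumed from upstream docs, verify against the Komodo documentation
curl -sSL https://raw.githubusercontent.com/moghtech/komodo/main/scripts/setup-periphery.py | sudo python3

# 2. Add the Core public key to the periphery config
echo 'core_public_keys = ["MCowBQYDK2VuAyEAanLhSIyYAQmX7NLhn1PH+fiTClnhp+jrv5BPAnKgdCM="]' \
  | sudo tee -a /etc/komodo/periphery.config.toml

# 3. Restart and enable the service
sudo systemctl enable --now periphery

# 4. Add the server in the Komodo UI: Servers -> New Server -> https://<ip>:8120
```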
## Lessons Learned
- **Alpine does not have `ss`**: Use `netstat` from the `net-tools` package instead, or install `iproute2` with `apk add iproute2`.
- **High RAM allocation**: 32 GB RAM is allocated to this LXC, but actual usage is lower. This may be intentional for MongoDB's working-set cache, or could be reduced after profiling.
- **Swap is configured**: Unlike most other LXCs in this homelab, komodo has 8 GB swap - useful because MongoDB can have large memory requirements during indexing.
- **Periphery on managed hosts**: Each host managed by Komodo must run the `periphery` agent. In outbound mode the agent opens the connection to Core, so no inbound firewall rules are needed on the managed host.
- **`KOMODO_HOST` must be set correctly**: The default value in the community script template is `https://demo.komo.do`. This must be changed to the actual host URL (`http://192.168.0.105:9120`), otherwise webhooks and OAuth redirects will break.
- **v2 PKI auth**: v2 removed passkey auth in favour of PKI. The Core public key must be added to every periphery config. The local container periphery needs the key via a mounted config file (`/etc/komodo/periphery.config.toml:/config/config.toml`), since env vars are not picked up for this field.
- **`restart` vs `up -d`**: `docker compose restart` does not recreate containers - new volume mounts require `up -d`.
- **Port conflict on Nobara**: Nobara had an old v1 periphery container running on port 8120. Stop and remove it before starting the systemd service.
- **Nobara periphery install needs sudo**: The installer writes to `/usr/local/bin` - run with `sudo python3`, not as a regular user.
- **`update_komodo` needs a TTY**: Running it via plain SSH fails. Use `type=update bash <(curl -fsSL ...)` for non-interactive execution, or SSH with `-t`.
- **Tailscale on LXC 105 (Alpine)**: Requires a TUN device in `/etc/pve/lxc/105.conf` (same as LXC 109; see the sketch after this list). Install via `apk add tailscale`, start with `rc-service tailscale start`, enable with `rc-update add tailscale default`. Use `--accept-dns=false` to avoid DNS conflicts.
core_address = "http://100.86.108.33:9120"). No inbound port needs to be opened on the VPS. Requires an onboarding key generated in Settings → Onboarding. - Onboarding key is one-time use: After the periphery successfully onboards, comment out the
onboarding_keyline in the periphery config. If left in, the next periphery restart will attempt to re-onboard and may create a duplicate server entry. connect_asis case-sensitive: The value must exactly match the server name in Komodo (e.g.connect_as = "VPS"not"vps"). A mismatch causes the onboarding flow to create a NEW server instead of connecting to the existing one. If this happens, a duplicate server entry will appear in the database and the original server will show as unreachable even though periphery reports "Logged in". Fix: correct the case in the config, delete the duplicate from MongoDB (db.Server.deleteOne({_id: ObjectId("...")})), restart periphery.- Periphery backoff after network outage: After a network outage, periphery on managed hosts (e.g. LXC 100) enters exponential backoff and may not reconnect automatically. If a server shows as unreachable in Komodo after a network event, SSH to the host and run
systemctl restart periphery. The services themselves keep running - only Komodo visibility is lost. - Inbound vs outbound periphery mode: In inbound mode, Core connects to periphery via HTTP/WebSocket. A known issue (likely reqwest connection pool poisoning) causes Core to stop retrying after a connection drop - only a Core restart recovers it. Solution: switch to outbound mode where periphery initiates the connection and reconnects automatically. LXC 100 was migrated to outbound mode on 2026-04-06.
- **Migrating an existing server from inbound to outbound mode**: (1) In the Komodo UI, clear the server's Address field and set Periphery Public Key to the periphery's public key (from its startup log). (2) In the periphery config, add `core_addresses` and `connect_as = "<exact server name>"`. (3) Restart periphery - it connects without an onboarding key. Do NOT use an onboarding key for existing servers - it creates a duplicate entry. If a duplicate was created, delete it from MongoDB: `db.Server.deleteOne({_id: ObjectId("...")})`.
- **LXC 100 periphery public key**: `MCowBQYDK2VuAyEA9sCPWCwh2XNxmYdmWMKvOiWv729oZmBo+uuVsDqoxk4=`
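The TUN passthrough referenced in the Tailscale lesson uses the standard Proxmox LXC lines in `/etc/pve/lxc/105.conf` (a sketch matching common practice, not copied from the live config):

```
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```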
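And for removing a duplicate server entry, a mongosh session along these lines (the database name `komodo` is an assumption - verify with `show dbs`):

```bash
# open a Mongo shell inside the mongo container (name from the Containers table)
docker exec -it komodo-mongo-1 mongosh

# inside mongosh:
#   use komodo                                     # assumed DB name -- verify with `show dbs`
#   db.Server.find({}, { name: 1 })                # locate the duplicate's _id
#   db.Server.deleteOne({ _id: ObjectId("...") })  # paste the duplicate _id here
```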