DNAT rules applied via docker exec are lost whenever wg-easy reloads the
WireGuard interface (PostDown flushes the nat table, and PostUp only
re-adds static rules). Fix: embed DNS (port 53) and service (port 80)
DNAT rules directly in wg0.conf PostUp/PostDown so they reapply on every
interface restart. ensure_postup_dnat() patches existing configs on startup.
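A sketch of the pattern, with illustrative rule text, container IPs, and helper shape (not the exact implementation):

    # Sketch: DNAT rules embedded in the wg0.conf PostUp line so wg-quick
    # reapplies them on every interface restart. IPs here are placeholders.
    DNAT_RULES = [
        "iptables -t nat -A PREROUTING -i wg0 -p udp --dport 53 -j DNAT --to-destination {dns_ip}:53",
        "iptables -t nat -A PREROUTING -i wg0 -p tcp --dport 53 -j DNAT --to-destination {dns_ip}:53",
        "iptables -t nat -A PREROUTING -i wg0 -p tcp --dport 80 -j DNAT --to-destination {caddy_ip}:80",
    ]

    def ensure_postup_dnat(conf_path, dns_ip, caddy_ip):
        """Append any missing DNAT commands to the existing PostUp line."""
        with open(conf_path) as f:
            lines = f.read().splitlines()
        for i, line in enumerate(lines):
            if line.startswith("PostUp"):
                for rule in DNAT_RULES:
                    cmd = rule.format(dns_ip=dns_ip, caddy_ip=caddy_ip)
                    if cmd not in line:
                        line = f"{line}; {cmd}"
                lines[i] = line
        with open(conf_path, "w") as f:
            f.write("\n".join(lines) + "\n")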
get_server_config() now returns the WG server IP (e.g. 10.0.0.1) for
dns_ip instead of the cell-dns container IP (172.20.0.3). This makes the
value consistent with what get_peer_config() writes into the .conf file,
and fixes the stale hint text in Peers.jsx and WireGuard.jsx.
UI: fallback dns_ip changed from 172.20.0.3 to 10.0.0.1; the split-tunnel
fallback drops the stale 172.20.0.0/16 range.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DNS A records now return the WireGuard server IP (10.0.0.1) instead of
Docker bridge VIPs so cross-cell peers resolve service names correctly
regardless of their bridge subnet. DNAT rules (wg0:53→cell-dns:53 and
wg0:80→cell-caddy:80) are applied at startup. Caddy routes by Host header,
eliminating the Docker bridge subnet conflict. Firewall cell rules allow
DNS and service (Caddy) traffic from linked cell subnets. Split-tunnel
AllowedIPs now dynamically includes connected-cell VPN subnets and drops
the 172.20.0.0/16 range. Peers with route_via set now receive full-tunnel
config (0.0.0.0/0) so all their traffic exits via the remote cell.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cell link [Peer] blocks can vanish from wg0.conf after a container
rebuild or config reset. The startup recovery previously only restored
VPN peer rules (iptables) but not the WireGuard peer blocks needed for
cell-to-cell tunnels, leaving the link red with no automatic recovery.
Add _restore_cell_wg_peers() called from _apply_startup_enforcement()
that reconciles wg0.conf against cell_links.json and re-adds any missing
[Peer] blocks, then calls _syncconf() to hot-reload the interface.
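Roughly, the reconciliation looks like this (record field names and manager attributes are assumptions, not the actual code):

    def _restore_cell_wg_peers(self):
        """Re-add cell-link [Peer] blocks missing from wg0.conf, then hot-reload."""
        with open(self.wg_conf_path) as f:
            conf = f.read()
        changed = False
        for link in self.cell_links.list_connections():   # backed by cell_links.json
            if link["public_key"] in conf:
                continue                                   # [Peer] block still present
            block = (
                "\n[Peer]\n"
                f"PublicKey = {link['public_key']}\n"
                f"AllowedIPs = {link['vpn_subnet']}\n"
            )
            if link.get("endpoint"):
                block += f"Endpoint = {link['endpoint']}\n"
            conf += block
            changed = True
        if changed:
            with open(self.wg_conf_path, "w") as f:
                f.write(conf)
            self._syncconf()                               # wg syncconf hot-reload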
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The PostUp rule appended `iptables -A FORWARD -i wg0 -j ACCEPT`, which gave
any WireGuard-connected client full internet access regardless of per-peer
rules, even when no peers were configured in wg0.conf.
Fix: change PostUp/PostDown to use DROP as the catch-all. Per-peer and
per-cell rules use -I (insert at top) so they take precedence; unknown
or unconfigured WG traffic hits the DROP at the bottom.
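A sketch of the ordering this relies on (command wrapper and exact rule text are illustrative):

    import subprocess

    # Catch-all appended via PostUp: WG traffic with no matching rule is dropped.
    POSTUP_CATCH_ALL = "iptables -A FORWARD -i wg0 -j DROP"

    def allow_peer(peer_ip):
        # Per-peer ACCEPT is inserted at the top of FORWARD (-I 1), so it is
        # evaluated before the appended DROP above.
        subprocess.run(
            ["iptables", "-I", "FORWARD", "1", "-i", "wg0", "-s", peer_ip, "-j", "ACCEPT"],
            check=True,
        )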
Also add reconcile_stale_peer_rules() called on startup to remove FORWARD
rules for peer IPs that no longer exist in the registry, preventing deleted
peers from retaining firewall access across container restarts.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds the ability to route a specific peer's internet traffic through a
connected cell acting as an exit relay.
Cell A side:
- PUT /api/peers/<peer>/route-via {"via_cell": "cellB"} sets route_via
- Updates WG AllowedIPs to include 0.0.0.0/0 for the exit cell peer
- Adds an ip rule + ip route in a policy routing table inside cell-wireguard
so the specific peer's traffic egresses via cellB's WG IP
- Sets exit_relay_active on the cell link and pushes use_as_exit_relay=True
to cellB via peer-sync
Cell B side:
- Receives use_as_exit_relay in the peer-sync payload
- Calls apply_cell_rules(..., exit_relay=True) to add FORWARD -o eth0 ACCEPT
- Stores remote_exit_relay_active flag for startup recovery
Startup recovery:
- apply_all_cell_rules passes exit_relay=remote_exit_relay_active (cellB)
- _apply_startup_enforcement reapplies ip rule for each peer with route_via (cellA)
since policy routing rules don't survive container restart
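One plausible shape of the cell A policy-routing step (table number and helper names are assumptions):

    import subprocess

    def _wg_exec(*args):
        # Routing changes are made inside the cell-wireguard network namespace.
        subprocess.run(["docker", "exec", "cell-wireguard", *args], check=True)

    def route_peer_via_cell(peer_ip, table="100"):
        # Packets from this one peer are looked up in a dedicated table whose
        # default route points out wg0; the 0.0.0.0/0 AllowedIPs on the exit
        # cell's [Peer] block then steers them to cellB.
        _wg_exec("ip", "rule", "add", "from", peer_ip, "lookup", table)
        _wg_exec("ip", "route", "replace", "default", "dev", "wg0", "table", table)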
peer_registry gets route_via field with lazy migration.
22 new tests across test_cell_link_manager, test_peer_registry, test_peer_route_via.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds the ability for a cell to signal to a linked peer cell that it is willing
to route internet traffic on its behalf. This is the signaling layer
for Phase 3 (per-peer routing via exit cell).
Changes:
- cell_links.json: exit_offered (bool) + remote_exit_offered (bool)
fields with lazy migration (default false for existing records)
- _push_permissions_to_remote: includes exit_offered in the push body
- apply_remote_permissions: accepts exit_offered kwarg; stores it as
remote_exit_offered on the matching cell link
- peer-sync receiver: passes exit_offered from body to apply_remote_permissions
- CellLinkManager.set_exit_offered(cell_name, offered): persists +
triggers push so the remote learns of our offer immediately
- PUT /api/cells/<name>/exit-offer: REST endpoint to toggle the flag
- 12 new tests covering all new paths
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
MASQUERADE rewrites the source IP of forwarded packets from
the cell's WG address (10.0.x.1) to cell-wireguard's bridge
IP (172.20.x.9). The peer-sync endpoint authenticates callers
by checking that the source IP is inside a known cell's vpn_subnet,
so MASQUERADE caused all pushes to fail with 403.
Fix: _push_permissions_to_remote() now calls _local_wg_ip() to
get the local wg0 address and passes it as X-Forwarded-For.
_authenticate_peer_cell() already supports XFF for exactly this
proxying scenario. Also adds a test verifying the header is present
in the constructed curl command.
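The pushed request could be built roughly like this (helper shape is illustrative):

    import json
    import subprocess

    def push_permissions(remote_api_url, body, local_wg_ip):
        # curl runs inside cell-wireguard so the request traverses the WG tunnel;
        # X-Forwarded-For carries the un-MASQUERADEd wg0 address so the remote
        # side's vpn_subnet check still passes.
        cmd = [
            "docker", "exec", "cell-wireguard", "curl", "-s",
            "-o", "/dev/null", "-w", "%{http_code}",
            "-H", "Content-Type: application/json",
            "-H", f"X-Forwarded-For: {local_wg_ip}",
            "-d", json.dumps(body),
            f"{remote_api_url}/api/cells/peer-sync/permissions",
        ]
        return subprocess.run(cmd, capture_output=True, text=True).stdout  # "200", "403", ...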
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
/api/cells/peer-sync/permissions is called over the WireGuard tunnel
by remote cells — they have no session cookie and cannot produce a CSRF
token. The endpoint authenticates via source IP (must be in the remote
cell's vpn_subnet) and WireGuard public key instead.
Without this, the global enforce_auth hook returns 401 before the route
handler runs, so all cross-cell permission pushes fail even when the
WG tunnel and iptables rules are correct.
Also adds a test verifying the route can be reached without a session.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
wg set updates WireGuard peer state but does not add kernel routes —
unlike wg-quick. Without ip route add, traffic to a remote cell's
vpn_subnet is routed via the default gateway (internet) instead of wg0,
causing all cross-cell pushes to time out with HTTP 000.
- add_cell_peer() now calls _ensure_cell_route(vpn_subnet) after
writing the peer config and running _syncconf
- _ensure_cell_route() runs docker exec cell-wireguard ip route add
(idempotent, non-fatal); no-op inside test dirs
- sync_cell_routes() parses wg0.conf at startup to re-add any routes
lost across container restarts; called from _apply_startup_enforcement
- 5 new unit tests covering both normal and test-dir no-op paths
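The route add is roughly this (sketch; 'replace' is one way to get the idempotent, non-fatal behaviour described above):

    import subprocess

    def _ensure_cell_route(vpn_subnet):
        # Idempotent and non-fatal: re-running at startup is safe, and a
        # failure is reported rather than raised.
        try:
            subprocess.run(
                ["docker", "exec", "cell-wireguard",
                 "ip", "route", "replace", vpn_subnet, "dev", "wg0"],
                check=True, capture_output=True, text=True,
            )
        except subprocess.CalledProcessError as exc:
            print(f"route for {vpn_subnet} not added: {exc.stderr}")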
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The per-cell catch-all DROP was reaching position 5 before our ACCEPT
(position 6) because apply_all_cell_rules can re-run after
ensure_cell_api_dnat, pushing the DNAT ACCEPT below the DROP.
Fix: add the API-sync ACCEPT inside apply_cell_rules itself, tagged with
the cell's own tag and inserted last, so it lands at position 1, above the DROP.
Because it is part of the cell's rule block, it always sits in the right
position relative to the catch-all DROP, regardless of call order.
Also adds _get_cell_api_ip() helper (docker inspect cell-api) so the
destination IP is always current, and two new tests that verify both the
rule exists and that the insertion order guarantees it wins over DROP.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
cell-api has no route to remote WG tunnel IPs — only cell-wireguard does.
Fix _push_permissions_to_remote() to use 'docker exec cell-wireguard curl'
so outbound sync HTTP traverses the WG tunnel from the right namespace.
On the receive side, add ensure_cell_api_dnat() which installs three
iptables rules inside cell-wireguard on startup:
- PREROUTING DNAT: wg0:3000 → cell-api:3000 (Docker bridge IP)
- POSTROUTING MASQUERADE: so cell-api's reply routes back via wg0
- FORWARD ACCEPT: allow the wg0→eth0 forwarded traffic
Called from _apply_startup_enforcement() so rules survive container restarts.
Tests updated to mock subprocess.run instead of urllib.request.urlopen.
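Sketch of the three rules (rule text and helper shape are illustrative; -C before -A keeps the calls idempotent):

    import subprocess

    def ensure_cell_api_dnat(api_ip, port="3000"):
        rules = [
            ["-t", "nat", "-A", "PREROUTING", "-i", "wg0", "-p", "tcp", "--dport", port,
             "-j", "DNAT", "--to-destination", f"{api_ip}:{port}"],
            ["-t", "nat", "-A", "POSTROUTING", "-d", api_ip, "-p", "tcp", "--dport", port,
             "-j", "MASQUERADE"],
            ["-A", "FORWARD", "-i", "wg0", "-o", "eth0", "-d", api_ip, "-j", "ACCEPT"],
        ]
        for rule in rules:
            check = [a if a != "-A" else "-C" for a in rule]
            base = ["docker", "exec", "cell-wireguard", "iptables"]
            if subprocess.run(base + check, capture_output=True).returncode != 0:
                subprocess.run(base + rule, check=True)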
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When PIC A updates service sharing permissions, it immediately pushes
the mirrored state to PIC B over the WireGuard tunnel so B's UI shows
what A is sharing with it in real time.
Architecture:
- Push model: update_permissions() → _push_permissions_to_remote() →
POST /api/cells/peer-sync/permissions on remote cell
- Auth: source IP must be inside a known cell's vpn_subnet (WireGuard
tunnel proves identity) + body's from_public_key must match stored key
- Mirror semantics: our inbound (what we share) → their outbound view
- Non-fatal: push failures set pending_push=True; replay_pending_pushes()
retries at startup so offline cells catch up on reconnect
- add_connection() also pushes initial state so remote sees permissions
immediately on the first connect
New fields on cell_links.json records (lazy-migrated):
remote_api_url, last_push_status, last_push_at, last_push_error,
pending_push, last_remote_update_at
New endpoint: POST /api/cells/peer-sync/permissions
30 new tests (1101 total).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Phase 1 — connection fixes:
- routing_manager.stop(): remove iptables -F / -t nat -F nuclear flush that
would wipe WireGuard MASQUERADE and all peer rules on any UI stop action
- wireguard_manager.add_cell_peer(): reject a vpn_subnet that overlaps the local
WG network (a routing blackhole that was the root cause of the missing handshake)
- wireguard_manager._syncconf(): pass Endpoint to 'wg set' so cell peers with
static endpoints are synced to the kernel (not just AllowedIPs)
Phase 2 — service-sharing permissions backend:
- firewall_manager: add _cell_tag(), clear_cell_rules(), apply_cell_rules(),
apply_all_cell_rules() — iptables FORWARD rules for cell-to-cell traffic
using 'pic-cell-<name>' comment tags, distinct from 'pic-peer-*'
- app.py startup enforcement: call apply_all_cell_rules(cell_links) so rules
survive API restarts
- cell_link_manager: permissions schema {inbound, outbound} per service;
lazy migration for existing entries; update_permissions(), get_permissions();
apply_cell_rules wired into add_connection/remove_connection
- routes/cells.py: GET /api/cells/services, GET+PUT /api/cells/<n>/permissions;
RuntimeError now returns 400 (not 500) from add_connection
Removed broken 'test' cell (subnet 10.0.0.0/24 collided with local WG network).
Second PIC must use a distinct subnet (e.g. 10.0.1.0/24) before reconnecting.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- API key store was out of sync with wg0.conf: get_keys() generated a random
phantom key instead of reading the actual WireGuard server key, so all peer
configs had the wrong PublicKey and could never handshake. Fixed by writing
correct raw-bytes key files at deploy time and adding _sync_wg_keys() to API
startup so the store auto-syncs from wg0.conf on every restart.
- apply_domain() fell back silently when zone file had no $ORIGIN directive;
now also parses the SOA MNAME as the old-domain fallback.
- apply_cell_name() only replaced the hostname if old_name matched literally
in the zone file; now auto-detects the actual hostname (non-service A record)
so a stale zone (mycell vs dev) is corrected on next config apply.
- DNS zone file corrected: SOA pic.ngo. admin.pic.ngo., mycell → dev.
- WireGuard UI: add 30s auto-poll for peer statuses; fix "peers currently
connected" counter to show online/total instead of total count.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
A3 — Peer add atomicity: track firewall_applied flag and call
clear_peer_rules() during rollback so partial peer-add failures
don't leave stale iptables rules behind. Added test.
A2 — Pending config flag: instead of clearing the pending flag before spawning
the helper container (fire-and-forget), set applying=True and let the
helper clear it on success by writing to cell_config.json via a
mounted /app/data volume. On API restart after a failed apply,
_recover_pending_apply() resets the applying flag so the UI shows
pending changes and the user can retry. GET /api/config/pending now
includes the applying field.
A5 (foundation) — Extract all manager instantiation into managers.py.
app.py re-exports every name so existing test patches (patch('app.X'))
continue to work unchanged. 1021 unit tests pass.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
_generate_zone_content writes records as "name TTL IN A value" but the
regex only matched "name IN A value" (no TTL), so renaming the cell
never updated the DNS hostname record. Updated regex to make TTL optional.
Also fixed the unit test zone fixture to use the actual generated format.
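The fix amounts to an optional TTL group in the record regex, e.g. (illustrative pattern, not the literal one in the code):

    import re

    # Matches both "name IN A value" and "name TTL IN A value".
    A_RECORD = re.compile(
        r"^(?P<name>\S+)\s+(?:(?P<ttl>\d+)\s+)?IN\s+A\s+(?P<value>\S+)\s*$",
        re.MULTILINE,
    )

    # Both of these now match:
    #   mycell       IN A 172.20.0.2
    #   mycell 3600  IN A 172.20.0.2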
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- check_csrf() now issues a token for sessions that predate CSRF (existing logins) instead of blocking them
- /api/wireguard/check-port and /api/wireguard/refresh-ip accept GET so native fetch calls bypass the token requirement
- WireGuard.jsx: changed three native fetch POST → GET for the above endpoints
- Peers.jsx: add X-CSRF-Token header to three native fetch mutation calls (calendar collection, peer PUT, clear-reinstall)
- api.js: export getCsrfToken() so non-Axios callers can read the current token
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- network_manager: api/webui DNS records now point to Caddy (172.20.0.2)
instead of their container IPs so Caddy can reverse-proxy correctly
- ip_utils: add webui.dev block to generated Caddyfile
- config/caddy/Caddyfile: regenerated with webui.dev block
- config/dns/Corefile: simplify to single forward zone (remove duplicate)
- app.py peer_dashboard: rename peer_name→name, rx_bytes→transfer_rx,
tx_bytes→transfer_tx to match PeerDashboard.jsx; add service_urls dict
- app.py peer_services: fix DNS (10.0.0.1→real CoreDNS IP), CalDAV URL
(radicale.dev:5232→calendar.dev), email structure (flat→nested smtp/imap
objects), rename webdav→files, add WireGuard config text, add username field
- PeerDashboard.jsx: render service icon links from service_urls
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
linuxserver/wireguard auto-generates its own PrivateKey on first container
start, independently of the PIC API's key-store. When the two diverge, the
API generates peer configs with the wrong server public key and the WireGuard
handshake fails silently — the client can ping the VPN subnet (10.0.0.x) but
gets no internet and cannot reach any Docker service (172.20.0.x).
Adds _sync_keys_from_conf(): called at the top of apply_config(), reads the
PrivateKey from wg0.conf, derives the matching public key, and overwrites the
API key files (private.key / public.key) if they differ. This makes wg0.conf
the authoritative source for the server identity, keeping get_peer_config()
consistent with the live WireGuard interface.
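Roughly: read PrivateKey from wg0.conf, derive the public key with `wg pubkey`, and rewrite the key files when they differ (sketch; assumes the wg binary is available to the API):

    import re
    import subprocess

    def _sync_keys_from_conf(conf_path, private_key_file, public_key_file):
        with open(conf_path) as f:
            match = re.search(r"^PrivateKey\s*=\s*(\S+)", f.read(), re.MULTILINE)
        if not match:
            return                          # no usable conf: nothing to sync
        private_key = match.group(1)
        public_key = subprocess.run(
            ["wg", "pubkey"], input=private_key, capture_output=True, text=True, check=True,
        ).stdout.strip()
        try:
            stored = open(public_key_file).read().strip()
        except FileNotFoundError:
            stored = ""
        if stored != public_key:            # wg0.conf wins: overwrite the key store
            open(private_key_file, "w").write(private_key)
            open(public_key_file, "w").write(public_key)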
Adds 5 new tests in TestSyncKeysFromConf covering:
- key-store update when conf key differs
- no-op when keys already match
- get_peer_config() uses the synced key
- no raise when conf is missing
- apply_config() passes the synced key through bootstrap
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
apply_config() now calls _load_registered_peers() when wg0.conf is empty
so all active peers from peers.json are written back into the config file
after a bootstrap — preventing clients from losing tunnel access after
an API restart that regenerated wg0.conf from scratch.
Adds test_wireguard_vpn_routing.py (36 tests) covering:
- generate_config() PostUp/PostDown rules enabling internet forwarding
(MASQUERADE + FORWARD ACCEPT required for internet-through-VPN)
- get_peer_config() DNS field pointing to cell-dns for domain resolution
- apply_config() bootstrap peer restoration from peers.json
- _load_registered_peers() filtering (inactive, missing fields, malformed)
- add_peer() /32 AllowedIPs enforcement to prevent route leaks
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Root cause: write_env_file used os.replace() which creates a new inode.
Docker file bind-mounts track the original inode at mount time, so the
container's /app/.env.compose never saw updates — docker compose always
read the stale port value and skipped container recreation.
Fixes:
- ip_utils.write_env_file: write in-place (open 'w') instead of os.replace()
so Docker bind-mounted files see the update immediately
- apply_pending_config: add --force-recreate to docker compose up for
specific-container restarts, bypassing config-hash comparison as a
belt-and-suspenders measure
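The difference in code, roughly (the in-place variant keeps the inode the bind mount tracks):

    def write_env_file(path, content):
        # In-place write: the file keeps its inode, so a Docker file bind-mount
        # of this path sees the new contents immediately.
        with open(path, "w") as f:
            f.write(content)

    # The previous pattern created a new inode, which the bind mount never follows:
    #   with open(path + ".tmp", "w") as f:
    #       f.write(content)
    #   os.replace(path + ".tmp", path)   # new inode -> container keeps the old file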
Tests added:
- TestWriteEnvFileInPlace: verifies inode is preserved across writes
- TestApplyPendingConfigForceRecreate: verifies --force-recreate is in the
docker compose command for specific-container restarts
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
If wg0.conf exists but is empty or has no [Interface] section,
apply_config previously found no lines to update and silently
returned with no changes — leaving the container broken on next
restart with an empty config.
Fix: detect empty/missing [Interface] section and regenerate the
full config from generate_config() before applying field updates.
This was the root cause of port changes not propagating:
apply_config was called but found nothing to patch in an empty file.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Bug 1 — port not propagated to wg0.conf:
The identity update path (wireguard_port via PUT /api/config) was calling
wireguard_manager.update_config() which only saves to a JSON file via
BaseServiceManager. wg0.conf was never updated, so after a container
restart the WireGuard interface would still listen on the old port.
Fix: call apply_config() instead — it writes ListenPort into wg0.conf.
Bug 2 — check_port_open ignored configured port:
check_port_open() checked for 'listening port' in wg show output but
never compared it against the configured port. A port-mismatch (e.g.
after config change but before restart) would return True — misleading.
Fix: require 'listening port: {configured_port}' to match exactly.
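i.e. the comparison becomes an exact match against the configured port (sketch, with the `wg show` output passed in for clarity):

    def check_port_open(wg_show_output, configured_port):
        # Old: any "listening port:" line counted as open.
        # New: the reported port must equal the configured one.
        return f"listening port: {configured_port}" in wg_show_output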
Tests added:
- test_check_port_open_wrong_port_returns_false
- test_check_port_open_explicit_port_matches
- test_check_port_open_explicit_port_mismatch
- test_wireguard_port_identity_change_calls_apply_config
- test_wireguard_port_same_value_does_not_call_apply_config
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- api/app.py: sync WireGuard server config on peer add/remove (non-fatal)
- docker-compose.yml: add privileged:true to wireguard service
- E2E tests: fix logout selector, DNS IP lookup, wg config DNS line, VIP skip guards,
badge text selectors, heading .first, async logout wait
- Integration tests: fix 4 tests that sent unauthenticated requests expecting 400
(now use authenticated session helpers); accept 401 as valid in webui proxy test;
add password field to service_access validation test
- Remove stale tracked config templates (config/api/api/*, config/api/cell.env, etc.)
that no longer exist on disk after config layout was reorganised
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
config_manager: make per-file copy errors non-fatal during restore
(resolves test failures when /app/config/* is not writable by test runner)
test_live_api.py: fix NameError (_req.Session not requests.Session)
test_negative_scenarios.py: replace raw requests.* with authenticated _S.*
(all endpoints now require auth; unauthenticated calls return 401)
wg/conftest.py: fix wg_server_info — public key is at /api/wireguard/keys
test_admin_navigation.py, test_peer_acl.py: add .first to ambiguous locators
to avoid Playwright strict-mode errors when desktop+mobile nav both mount
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
/api/peer/dashboard now returns live WireGuard stats (online, rx_bytes,
tx_bytes, last_handshake, allowed_ips) by calling wireguard_manager.
/api/peer/services now returns a structured dict with wireguard, email,
caldav, webdav sections containing hostnames and credentials.
Fixes 2 failing E2E API tests.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- api/app.py: email/calendar/files provisioning now best-effort (non-fatal); fixed email_manager.create_email_user call to include domain argument
- tests/integration: added module-level auth sessions to all integration test files; added admin auth to api fixture and _resolve_admin_pass() helper; added TEST_PEER_PASSWORD constant; added password to peer creation calls
- tests/test_peer_provisioning.py: renamed rollback test to reflect new best-effort semantics (email failure no longer causes rollback)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Backend:
- AuthManager (api/auth_manager.py): server-side user store with bcrypt
password hashing, account lockout after 5 failed attempts (15 min),
and atomic file writes
- AuthRoutes (api/auth_routes.py): Blueprint at /api/auth/* — login,
logout, me, change-password, admin reset-password, list-users
- app.py: register auth_bp blueprint; add enforce_auth before_request
hook (401 for unauthenticated, 403 for wrong role; only active when
auth store has users so pre-auth tests remain green); instantiate
AuthManager; update POST /api/peers to require password >= 10 chars
and auto-provision email + calendar + files + auth accounts with full
rollback on any failure; extend DELETE /api/peers to tear down all
four service accounts; add /api/peer/dashboard and /api/peer/services
peer-scoped routes; fix is_local_request to also trust the last
X-Forwarded-For entry appended by the reverse proxy (Caddy)
- Role-based access: admin for /api/* (except /api/auth/* which is
public and /api/peer/* which is peer-only)
- setup_cell.py: generate and print initial admin password, store in
.admin_initial_password with 0600 permissions; cleaned up on first
admin login
Frontend:
- AuthContext.jsx: React context with login/logout/me state and Axios
interceptor for automatic 401 redirect
- PrivateRoute.jsx: route guard component
- Login.jsx: login page with error handling and must-change-password
redirect
- AccountSettings.jsx: change-password form for any authenticated user
- PeerDashboard.jsx: peer-role landing page (IP, service list)
- MyServices.jsx: peer service links page
- App.jsx, Sidebar.jsx: AuthContext integration, logout button,
PrivateRoute wrappers, peer-role routing
- Peers.jsx, WireGuard.jsx, api.js: auth-aware API calls
Tests: 100 new auth tests all pass (test_auth_manager, test_auth_routes,
test_route_protection, test_peer_provisioning). Fix pre-existing test
failures: update WireGuard test keys to valid 44-char base64 format
(test_wireguard_manager, test_peer_wg_integration), add password field
and service manager mocks to test_api_endpoints peer tests, add auth
helpers to conftest.py. Full suite: 845 passed, 0 failures.
Fixed: .admin_initial_password security cleanup on bootstrap, username
minimum length (3 chars enforced by USERNAME_RE regex)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Root cause: sysctl -q net.ipv4.conf.all.rp_filter=0 in PostUp exited non-zero
inside the linuxserver/wireguard container (no permission), causing wg-quick to
tear down the wg0 interface — breaking peer status, the port check, and internet
access through the full tunnel.
- wireguard_manager.py: add || true to both sysctl PostUp/PostDown lines
- docker-compose.yml: add net.ipv4.conf.all.rp_filter=0 to wireguard sysctls
- WireGuard.jsx: kick off port check asynchronously on page load (was refresh-only)
- tests: add TestWireGuardSysctlAndPortCheck — 14 new tests covering sysctl
content, check_port_open (interface up / down / fallback-to-handshake),
get_peer_status (online / offline / not-found / no-handshake), and
get_all_peer_statuses (multi-peer / empty / skips interface line)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- export_config: clean output (no internal _keys), identity exposed as 'identity'
- import_config: handle 'identity' key, merge into existing config (not replace)
- restore_config: accept optional services list for selective restore
- backup_config: include 'identity' in manifest services list
- new GET /api/config/backups/<id>/download → zip file download
- new POST /api/config/backup/upload → zip file upload
- webui: Download + Upload buttons, restore modal with per-service checkboxes
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Settings: remove Save buttons; autosave is silent (no toast on success, error only)
- Settings: loadAll() resets dirty flags to prevent stale autosave after discard
- app.py: fix the domain/ip_range "actually changed" check — the full identity is
always sent on save, so these fields triggered a pending restart on every
keystroke even when nothing had changed
- app.py: _dedup_changes handles the port-change format "service field: old → new"
(split on ':' not ' changed') so a dns_port changed twice shows a single entry
- app.py: domain + cell_name changes now go through pending restart banner;
apply_domain/apply_cell_name write files immediately (reload=False) and set
pending; Discard restores zone files + Caddyfile to pre-change state
- app.py: _set_pending_restart captures pre-change snapshot BEFORE config writes
(was snapshotting after, making Discard a no-op)
- app.py: is_local_request reads /proc/net/route to allow the actual Docker
bridge subnet (172.0.0.0/24) which is not RFC-1918; fixes Containers page 403
- container_manager: get_container_logs raises instead of swallowing exceptions
so nonexistent container returns 500+error not 200+empty
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Autosave on Apply (was broken):
- App.jsx called useDraftConfig() in the same component that rendered
DraftConfigProvider — a component cannot consume context it provides.
Fixed by splitting into AppCore (consumes context, all logic) and App
(thin shell that wraps AppCore in DraftConfigProvider). The hook now
runs inside the provider and hasDirty()/flushAll() work correctly.
Cell name / domain length validation (255-char DNS standard):
- api/app.py: reject cell_name or domain > 255 chars or empty with 400
- api/app.py: reject ip_range without CIDR prefix (bare IPs shift all VIPs)
- webui/src/pages/Settings.jsx: cellNameError + domainError computed values
block saveIdentity and show inline error; maxLength={255} on inputs
- tests/test_identity_validation.py: 8 unit tests for the new validation
Cell name overflow on all pages:
- Dashboard.jsx: add min-w-0 to flex child div + truncate + title on cell_name
- CellNetwork.jsx: min-w-0 + truncate + title on cell_name, domain, endpoint,
vpn_subnet in invite cards and connected-cells list
Apply-and-verify integration tests:
- tests/integration/test_apply_propagation.py: TestPendingState (no restarts)
and TestApplyAndVerify (triggers real container restart + health poll)
covering the full save → apply → wait → verify propagation lifecycle
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- tests/integration/conftest.py: get_live_service_vips() now reads from the
config API's service_ips field instead of docker exec. The docker exec approach
spawns a fresh Python process that imports firewall_manager with its hardcoded
initial SERVICE_IPS, ignoring any update_service_ips() calls made at runtime.
The config API always computes VIPs from the current ip_range, so it matches what
the running app actually uses when writing iptables rules.
- api/app.py: reject ip_range values without a CIDR prefix (e.g. '10.0.0.1')
with a 400. Bare IPs are parsed as /32 by ipaddress.ip_network(strict=False),
which shifts all VIP offsets and produces unusable Docker subnet configs.
- tests/integration/test_config_api.py: update bare-ip test to expect 400 now
that the API enforces the prefix requirement.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Port conflict validation:
- api/port_registry.py: detect_conflicts() checks all service sections for shared port values
- api/app.py: returns HTTP 409 on port conflict after existing range validation
- webui/src/pages/Settings.jsx: JS-side detectPortConflicts() with useMemo shows inline
conflict errors and blocks Save before the request is made; catch blocks surface server
error messages (including 409) instead of generic fallbacks
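A sketch of the detect_conflicts() check described above (section/field naming is assumed):

    def detect_conflicts(service_configs):
        """Return a description for every port value shared by two service fields."""
        seen = {}            # port -> "section.field" that claimed it first
        conflicts = []
        for section, fields in service_configs.items():
            for field, value in fields.items():
                if not field.endswith("port") or not isinstance(value, int):
                    continue
                if value in seen:
                    conflicts.append(f"{section}.{field} conflicts with {seen[value]} (port {value})")
                else:
                    seen[value] = f"{section}.{field}"
        return conflicts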
Config autosave on Apply:
- webui/src/contexts/DraftConfigContext.jsx: new context; Settings registers flush callbacks
per section; App calls flushAll() before applyPending() when any section is dirty
- webui/src/App.jsx: wraps tree with DraftConfigProvider, handleApply shows 'saving' banner
state and awaits flushAll()
- webui/src/pages/Settings.jsx: registers identity + per-service flushers; propagates dirty
state into context via setDirty; uses refs to avoid stale closures
Extended integration test coverage (114 new tests):
- tests/integration/test_config_api.py: GET/PUT config, export, import, backup lifecycle
- tests/integration/test_network_services.py: DNS records + DHCP reservations CRUD
- tests/integration/test_containers.py: list, restart, logs, stats; recovery polling
- tests/integration/test_negative_scenarios.py: error-path coverage for all endpoints
- tests/test_port_conflicts.py: 20 unit tests for port_registry.detect_conflicts()
Pre-commit hook updated to skip tests/integration/ (live-stack tests require a running
stack and must be run explicitly via `make test-integration`).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously the loop broke after processing the first zone file it found.
If dev.zone already existed (created by apply_ip_range), it would be
processed and the loop would stop — leaving any other zone files (e.g.
cell.zone from an earlier domain) in place. get_dns_records() reads all
.zone files so the stale zone appeared doubled in the UI.
Fix: collect all non-local zone files first, write the target, then
delete every file that is not the current domain's zone.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Sprint 1 — Security & correctness:
- Restore all 10 commented-out is_local_request() checks (vault, containers, images, volumes)
- Fix XFF spoofing: only trust the LAST X-Forwarded-For entry (Caddy's append), not all
- Require prefix length in wireguard.address (was accepting bare IPs like 10.0.0.1)
- Validate service_access list in add_peer (valid: calendar/files/mail/webdav)
- Fix dhcp/reservations POST/DELETE: unpack mac/ip/hostname from body (was passing dict as positional arg)
- Fix network/test POST: remove spurious data arg (test_connectivity takes no args)
- Fix remove_peer: clear iptables rules and regenerate DNS ACLs on deletion (was leaving stale rules)
- Fix CoreDNS reload: SIGHUP → SIGUSR1 (SIGHUP kills the process; SIGUSR1 triggers reload plugin)
- Remove local.{domain} block from Corefile template (local.zone doesn't exist, caused log spam)
- Fix routing_manager._remove_nat_rule: targeted -D instead of flushing entire POSTROUTING chain
Sprint 2 — State consistency:
- Atomic config writes in config_manager, ip_utils, firewall_manager, network_manager
(write to .tmp → fsync → os.replace, prevents truncated files on kill)
- backup_config: now also backs up Caddyfile, Corefile, .env, DNS zone files
- restore_config: restores all of the above so config stays consistent after restore
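The atomic-write pattern referenced above, roughly:

    import json
    import os
    import tempfile

    def atomic_write_json(path, data):
        # Write to a temp file in the same directory, flush + fsync, then rename.
        # os.replace() is atomic on POSIX, so readers never see a truncated file.
        directory = os.path.dirname(path) or "."
        fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
        try:
            with os.fdopen(fd, "w") as f:
                json.dump(data, f, indent=2)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, path)
        except BaseException:
            os.unlink(tmp)
            raise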
Sprint 3 — Dead code / documentation:
- Remove CellManager instantiation from app startup (was never called, double-instantiated all managers)
- Document routing_manager scope (targets host, not cell-wireguard; methods not called by any active route)
Sprint 4 — Test infrastructure:
- Add tests/conftest.py with shared tmp_dir, tmp_config_dir, tmp_data_dir, flask_client fixtures
- Add tests/test_config_validation.py: 400 paths for ip_range, port, wireguard.address validation
- Add tests/test_ip_utils_caddyfile.py: 14 tests for write_caddyfile (was completely untested)
- Expand test_app_misc.py: 7 new is_local_request tests covering XFF spoofing and cell-network IPs
- Add --cov-fail-under=70 to make test-coverage
- Add pre-commit hook that runs pytest before every commit
414 tests pass (was 372).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
UI: validateServiceConfig() checks all port fields (1–65535) and
WireGuard address (IP/CIDR) on every keystroke; Save button is
disabled and saveService() guards against any field errors.
API: update_config() rejects out-of-range port values and invalid
WireGuard address before persisting, returning 400 with a clear
field path (e.g. email.smtp_port).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
API: rejects ip_range outside 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16
with a 400 error before saving to config.
UI: isRFC1918Cidr() validates on every keystroke; error message shown inline
below the field; Save Identity button disabled while the value is invalid.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two bugs triggered when ip_range is set to a subnet outside 172.16.0.0/12
(e.g. 172.0.0.0/24):
1. is_local_request() used ip.is_private which returns False for 172.0.x.x,
causing Caddy reverse-proxy requests to get 403 on the containers endpoint.
Fix: also accept IPs in the configured cell-network subnet.
2. apply_pending_config() hardcoded 'pic_api:latest' as the helper container
image. docker-compose v1 builds pic_api:latest (underscore) but compose v2+
builds pic-api:latest (hyphen). On a v2 install the helper would fail to
start silently, leaving the network unreconstructed after an ip_range change.
Fix: read the actual image tag from cell-api's own container metadata.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two bugs caused DNS to fail when the domain name changes:
1. generate_corefile() hardcoded 'cell' as the zone name instead of
using the configured domain — on startup it would silently reset any
domain change back to 'cell'
2. apply_domain() regex replaced ALL non-dot zones (including local.cell)
with the new domain → duplicate zone blocks → CoreDNS crash
Fix: add a domain parameter to generate_corefile/apply_all_dns_rules,
add _configured_domain() helper in app.py, and delegate Corefile updates
in apply_domain() to generate_corefile() so the logic is in one place.
Also parameterise SERVICE_HOSTS ACL entries via the domain argument.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
`make reinstall` wipes config/ then `make setup` creates an empty
Caddyfile (ensure_file just touches it). Add write_caddyfile() to
ip_utils.py that generates the full reverse-proxy config from ip_range,
cell_name, and domain. Call it from setup_cell.py so fresh installs
always get a valid Caddyfile. Also regenerate it in app.py whenever
ip_range, domain, or cell_name changes so Caddy stays in sync.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When containers=['*'] (ip_range change or full restart), the previous
code ran docker compose down/up in a background thread inside cell-api.
docker compose down killed cell-api, terminating the thread before
docker compose up could run — leaving all containers stopped.
Fix: spawn an independent docker run --rm container (pic_api:latest)
that has the docker socket and project dir mounted. This helper outlives
cell-api being stopped and completes the up -d independently.
For specific-container restarts (port changes), keep the direct approach
since the API container is not in the affected set.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The apply_pending_config endpoint spawns _do_apply in a background thread.
subprocess was used but not imported inside the closure, causing
NameError: name 'subprocess' is not defined on every Apply click —
silently swallowed, so containers never restarted and no error was shown.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two bugs fixed:
1. calendar_manager and wireguard_manager (port-only) called
_restart_container immediately in apply_config, bypassing the pending
restart banner and restarting the container before the docker port
binding in .env was updated — leaving the service broken until the
banner was applied manually. apply_config now only updates the config
file (radicale.conf / wg0.conf); the docker compose restart happens
via the banner as intended.
2. Port change detection in update_config used `if old_val is not None`
to guard against triggering on unchanged values. When a service's port
was never explicitly saved (first time), old_val was None, so the
pending restart was never queued. Fix: fall back to PORT_DEFAULTS[key]
so the comparison is always against the effective current value.
Add TestPortChangeDetection (5 tests) covering first-save and multi-service
accumulation cases.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- docker-compose: fix WireGuard port mapping to ${WG_PORT}:${WG_PORT} so
the daemon ListenPort matches the Docker host-to-container binding
- app.py: sync wireguard.port ↔ identity.wireguard_port in both directions
so changing either keeps them consistent; identity path now also updates
wg0.conf via wireguard_manager.update_config
- Settings.jsx: remove duplicate wireguard_port from Cell Identity section
(port is configurable under WireGuard VPN service config); add
refreshConfig() after saveService so other pages see new values immediately
- WireGuard.jsx: import useConfig() and use service_configs.wireguard.port
as the reactive port source for endpoint display and port-open warnings
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>