Port conflict validation:
- api/port_registry.py: detect_conflicts() checks all service sections for shared port values (sketch after this list)
- api/app.py: returns HTTP 409 on port conflict after existing range validation
- webui/src/pages/Settings.jsx: JS-side detectPortConflicts() with useMemo shows inline
conflict errors and blocks Save before the request is made; catch blocks surface server
error messages (including 409) instead of generic fallbacks
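A minimal sketch of the cross-service check, assuming a {service: {field: value}} config shape; field names here are illustrative, the real logic lives in api/port_registry.py:
```python
from collections import defaultdict

def detect_conflicts(service_configs):
    ports = defaultdict(list)  # port value -> [(service, field), ...]
    for service, fields in service_configs.items():
        for field, value in fields.items():
            if field.endswith("port") and isinstance(value, int):
                ports[value].append((service, field))
    return [
        {"port": port, "claimed_by": owners}
        for port, owners in ports.items()
        if len(owners) > 1
    ]
```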
Config autosave on Apply:
- webui/src/contexts/DraftConfigContext.jsx: new context; Settings registers flush callbacks
per section; App calls flushAll() before applyPending() when any section is dirty
- webui/src/App.jsx: wraps tree with DraftConfigProvider, handleApply shows 'saving' banner
state and awaits flushAll()
- webui/src/pages/Settings.jsx: registers identity + per-service flushers; propagates dirty
state into context via setDirty; uses refs to avoid stale closures
Extended integration test coverage (114 new tests):
- tests/integration/test_config_api.py: GET/PUT config, export, import, backup lifecycle
- tests/integration/test_network_services.py: DNS records + DHCP reservations CRUD
- tests/integration/test_containers.py: list, restart, logs, stats; recovery polling
- tests/integration/test_negative_scenarios.py: error-path coverage for all endpoints
- tests/test_port_conflicts.py: 20 unit tests for port_registry.detect_conflicts()
Pre-commit hook updated to skip tests/integration/ (live-stack tests require a running
stack and must be run explicitly via `make test-integration`).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Sprint 1 — Security & correctness:
- Restore all 10 commented-out is_local_request() checks (vault, containers, images, volumes)
- Fix XFF spoofing: trust only the LAST X-Forwarded-For entry (the one Caddy appends), never earlier client-supplied ones (see the sketch after this list)
- Require prefix length in wireguard.address (was accepting bare IPs like 10.0.0.1)
- Validate service_access list in add_peer (valid: calendar/files/mail/webdav)
- Fix dhcp/reservations POST/DELETE: unpack mac/ip/hostname from body (was passing dict as positional arg)
- Fix network/test POST: remove spurious data arg (test_connectivity takes no args)
- Fix remove_peer: clear iptables rules and regenerate DNS ACLs on deletion (was leaving stale rules)
- Fix CoreDNS reload: SIGHUP → SIGUSR1 (SIGHUP kills the process; SIGUSR1 triggers reload plugin)
- Remove local.{domain} block from Corefile template (local.zone doesn't exist, caused log spam)
- Fix routing_manager._remove_nat_rule: targeted -D instead of flushing entire POSTROUTING chain
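Sketch of the XFF rule, assuming Flask; the helper name is hypothetical. Only the entry our own proxy (Caddy) appends is trustworthy — everything before it arrives from the client:
```python
def client_ip(request):
    xff = request.headers.get("X-Forwarded-For", "")
    if xff:
        # Trust only the entry Caddy appended, never earlier ones.
        return xff.split(",")[-1].strip()
    return request.remote_addr
```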
Sprint 2 — State consistency:
- Atomic config writes in config_manager, ip_utils, firewall_manager, network_manager
  (write to .tmp → fsync → os.replace, prevents truncated files on kill; sketch after this list)
- backup_config: now also backs up Caddyfile, Corefile, .env, DNS zone files
- restore_config: restores all of the above so config stays consistent after restore
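The atomic-write pattern, sketched (names illustrative): write to a sibling .tmp file, fsync, then os.replace() so readers only ever see the old or the new file, never a truncated one.
```python
import json, os

def atomic_write_json(path, data):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(data, f, indent=2)
        f.flush()
        os.fsync(f.fileno())   # force bytes to disk
    os.replace(tmp, path)      # atomic on POSIX
```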
Sprint 3 — Dead code / documentation:
- Remove CellManager instantiation from app startup (was never called, double-instantiated all managers)
- Document routing_manager scope (targets host, not cell-wireguard; methods not called by any active route)
Sprint 4 — Test infrastructure:
- Add tests/conftest.py with shared tmp_dir, tmp_config_dir, tmp_data_dir, flask_client fixtures
- Add tests/test_config_validation.py: 400 paths for ip_range, port, wireguard.address validation
- Add tests/test_ip_utils_caddyfile.py: 14 tests for write_caddyfile (was completely untested)
- Expand test_app_misc.py: 7 new is_local_request tests covering XFF spoofing and cell-network IPs
- Add --cov-fail-under=70 to make test-coverage
- Add pre-commit hook that runs pytest before every commit
414 tests pass (was 372).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
UI: validateServiceConfig() checks all port fields (1–65535) and the
WireGuard address (IP/CIDR) on every keystroke; the Save button is
disabled and saveService() guards against any field errors.
API: update_config() rejects out-of-range port values and invalid
WireGuard addresses before persisting, returning 400 with a clear
field path (e.g. email.smtp_port).
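Sketch of the server-side guard; the payload shape and validator name are assumptions, the 400-with-field-path contract is the commit's:
```python
def validate_ports(config):
    for service, fields in config.items():
        for field, value in fields.items():
            if field.endswith("port"):
                if not isinstance(value, int) or not 1 <= value <= 65535:
                    return f"{service}.{field}", "port must be between 1 and 65535"
    return None, None

# in update_config:
#   field_path, err = validate_ports(body)
#   if err: return jsonify({"error": err, "field": field_path}), 400
```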
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
API: rejects ip_range outside 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16
with a 400 error before saving to config.
UI: isRFC1918Cidr() validates on every keystroke; error message shown inline
below the field; Save Identity button disabled while the value is invalid.
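A stdlib sketch of the same check the API performs (the JS twin is isRFC1918Cidr; this Python name is an assumption):
```python
import ipaddress

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918_cidr(value):
    try:
        net = ipaddress.ip_network(value, strict=False)
    except ValueError:
        return False
    return net.version == 4 and any(net.subnet_of(b) for b in RFC1918)
```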
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two bugs triggered when ip_range is set to a subnet outside 172.16.0.0/12
(e.g. 172.0.0.0/24):
1. is_local_request() used ip.is_private which returns False for 172.0.x.x,
causing Caddy reverse-proxy requests to get 403 on the containers endpoint.
Fix: also accept IPs in the configured cell-network subnet (sketch after this list).
2. apply_pending_config() hardcoded 'pic_api:latest' as the helper container
image. docker-compose v1 builds pic_api:latest (underscore) but compose v2+
builds pic-api:latest (hyphen). On a v2 install the helper would fail to
start silently, leaving the network unreconstructed after an ip_range change.
Fix: read the actual image tag from cell-api's own container metadata.
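Sketch of the bug-1 fix, with illustrative names: is_private alone rejects public-looking ranges like 172.0.0.0/24, so additionally accept anything inside the configured cell-network subnet.
```python
import ipaddress

def is_local_request(remote_ip, cell_subnet):
    ip = ipaddress.ip_address(remote_ip)
    if ip.is_private or ip.is_loopback:
        return True
    # accept the configured cell network even if it is not RFC1918-private
    return ip in ipaddress.ip_network(cell_subnet, strict=False)
```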
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two bugs caused DNS to fail when the domain name changes:
1. generate_corefile() hardcoded 'cell' as the zone name instead of
using the configured domain — on startup it would silently reset any
domain change back to 'cell'
2. apply_domain() regex replaced ALL non-dot zones (including local.cell)
with the new domain → duplicate zone blocks → CoreDNS crash
Fix: add a domain parameter to generate_corefile/apply_all_dns_rules,
add _configured_domain() helper in app.py, and delegate Corefile updates
in apply_domain() to generate_corefile() so the logic is in one place.
Also parameterise SERVICE_HOSTS ACL entries via the domain argument.
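Sketch of the parameterised generator; the Corefile template content here is illustrative, not the shipped one — the point is that the zone name comes from the domain argument, never a hardcoded 'cell':
```python
def generate_corefile(domain, acl_blocks=""):
    return (
        f"{domain} {{\n"
        f"    file /zones/{domain}.zone\n"
        f"{acl_blocks}"
        "}\n"
        ". {\n"
        "    forward . 1.1.1.1\n"
        "}\n"
    )
```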
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
`make reinstall` wipes config/ then `make setup` creates an empty
Caddyfile (ensure_file just touches it). Add write_caddyfile() to
ip_utils.py that generates the full reverse-proxy config from ip_range,
cell_name, and domain. Call it from setup_cell.py so fresh installs
always get a valid Caddyfile. Also regenerate it in app.py whenever
ip_range, domain, or cell_name changes so Caddy stays in sync.
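A rough sketch of what write_caddyfile() emits — one reverse-proxy block per service hostname, upstream IPs computed from ip_range. Offsets and hostnames are assumptions, not the exact mapping:
```python
import ipaddress

def write_caddyfile(path, ip_range, domain):
    net = ipaddress.ip_network(ip_range)
    base = int(net.network_address)
    upstream = {"webui": 10, "calendar": 11, "files": 12, "mail": 13}
    blocks = [
        f"http://{name}.{domain} {{\n\treverse_proxy {ipaddress.ip_address(base + off)}\n}}\n"
        for name, off in upstream.items()
    ]
    with open(path, "w") as f:
        f.write("\n".join(blocks))
```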
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When containers=['*'] (ip_range change or full restart), the previous
code ran docker compose down/up in a background thread inside cell-api.
docker compose down killed cell-api, terminating the thread before
docker compose up could run — leaving all containers stopped.
Fix: spawn an independent docker run --rm container (pic_api:latest)
that has the docker socket and project dir mounted. This helper outlives
cell-api being stopped and completes the up -d independently.
For specific-container restarts (port changes), keep the direct approach
since the API container is not in the affected set.
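Sketch of the detached-helper pattern (image name from the commit, paths illustrative): the helper shares the docker socket and project dir, so it survives cell-api being stopped by `compose down`.
```python
import subprocess

def spawn_restart_helper(project_dir, image="pic_api:latest"):
    subprocess.Popen([
        "docker", "run", "--rm", "-d",
        "-v", "/var/run/docker.sock:/var/run/docker.sock",
        "-v", f"{project_dir}:{project_dir}", "-w", project_dir,
        image,
        "sh", "-c", "docker compose down && docker compose up -d",
    ])
```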
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The apply_pending_config endpoint spawns _do_apply in a background thread.
subprocess was used but not imported inside the closure, causing
NameError: name 'subprocess' is not defined on every Apply click —
silently swallowed, so containers never restarted and no error was shown.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two bugs fixed:
1. calendar_manager and wireguard_manager (port-only) called
_restart_container immediately in apply_config, bypassing the pending
restart banner and restarting the container before the docker port
binding in .env was updated — leaving the service broken until the
banner was applied manually. apply_config now only updates the config
file (radicale.conf / wg0.conf); the docker compose restart happens
via the banner as intended.
2. Port change detection in update_config used `if old_val is not None`
to guard against triggering on unchanged values. When a service's port
was never explicitly saved (first time), old_val was None, so the
pending restart was never queued. Fix: fall back to PORT_DEFAULTS[key]
so the comparison is always against the effective current value.
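The fix, sketched with illustrative defaults:
```python
PORT_DEFAULTS = {"smtp_port": 587, "imap_port": 993}

def port_changed(key, old_val, new_val):
    # compare against the effective current value, not None
    effective_old = old_val if old_val is not None else PORT_DEFAULTS.get(key)
    return new_val != effective_old
```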
Add TestPortChangeDetection (5 tests) covering first-save and multi-service
accumulation cases.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- docker-compose: fix WireGuard port mapping to ${WG_PORT}:${WG_PORT} so
the daemon ListenPort matches the Docker host-to-container binding
- app.py: sync wireguard.port ↔ identity.wireguard_port in both directions
so changing either keeps them consistent; identity path now also updates
wg0.conf via wireguard_manager.update_config
- Settings.jsx: remove duplicate wireguard_port from Cell Identity section
(port is configurable under WireGuard VPN service config); add
refreshConfig() after saveService so other pages see new values immediately
- WireGuard.jsx: import useConfig() and use service_configs.wireguard.port
as the reactive port source for endpoint display and port-open warnings
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When ip_range changes, Docker cannot modify a network subnet in-place.
_set_pending_restart now accepts network_recreate=True; apply endpoint
runs `docker compose down` before `up -d` in that case so the bridge
network is fully recreated with the new subnet.
Service page fixes:
- GET /api/config includes service_ips (dns, vip_mail, vip_calendar,
vip_files, vip_webdav) computed via ip_utils
- Email/Calendar/Files pages read IPs and ports from useConfig() instead
of hardcoded 172.20.0.x constants and default port literals
- Apply feedback: spinner → success/timeout/error banners via health polling
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Service pages (Email, Calendar, Files) now read IPs and ports from the
config API instead of hardcoded 172.20.0.x constants:
- GET /api/config now includes service_ips (dns, vip_mail, vip_calendar,
vip_files, vip_webdav) computed from ip_range via ip_utils
- Email.jsx: mailIp, dnsIp, imapPort, smtpPort, webmailPort from context
- Calendar.jsx: calendarIp, dnsIp, calendarPort from context
- Files.jsx: filesIp, webdavIp, webdavPort, filegatorPort from context
Apply button now shows restart progress:
- "Restarting containers — please wait…" spinner while polling /health
- "Containers restarted successfully" on success (clears after 4s)
- "Timed out" / error message if health doesn't come back in 45s
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- DELETE /api/config/pending endpoint calls _clear_pending_restart()
- cellAPI.cancelPending() calls the new endpoint
- PendingRestartBanner shows a "Discard" button alongside "Apply Now";
clicking it drops the pending state without restarting any containers
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
All host port bindings in docker-compose.yml now use ${VAR:-default} substitution,
driven by the .env file generated by ip_utils.write_env_file(). Changing a port in
Settings triggers a per-container pending-restart banner so only the affected container
is restarted on Apply (see the sketch after this list).
- ip_utils: add PORT_DEFAULTS, PORT_ENV_VAR_NAMES, PORT_TO_CONTAINERS; extend
write_env_file() to accept optional ports dict and write all port env vars
- docker-compose: convert all hardcoded port bindings to ${VAR:-default} form
- app.py: add _collect_service_ports helper; detect port changes in update_config,
write updated .env and call _set_pending_restart with specific container list;
update _set_pending_restart to merge/accumulate pending state with containers list;
update apply_pending_config to use --no-deps <service> for targeted restarts
- config_manager: add submission_port, webmail_port to email schema; add manager_port
to files schema
- Settings.jsx: make all email/files ports editable, add submission_port, webmail_port,
manager_port fields; update stale identity note
- tests: 8 new tests for PORT_DEFAULTS, PORT_ENV_VAR_NAMES, and port override in write_env_file
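Sketch of the port-override path (tables abbreviated; the real ones live in ip_utils):
```python
PORT_DEFAULTS = {"smtp_port": 587, "webmail_port": 8080}
PORT_ENV_VAR_NAMES = {"smtp_port": "SMTP_PORT", "webmail_port": "WEBMAIL_PORT"}

def render_port_env(ports=None):
    merged = {**PORT_DEFAULTS, **(ports or {})}
    lines = [f"{PORT_ENV_VAR_NAMES[k]}={v}" for k, v in merged.items()]
    return "\n".join(lines) + "\n"
```
docker-compose then substitutes each binding as e.g. ${SMTP_PORT:-587}, so an absent .env still yields a working stack.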
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When ip_range changes, a persistent amber banner appears at the top of
every page showing what changed and an "Apply Now" button. Clicking it
shows a confirmation modal ("containers will restart briefly"), then
calls POST /api/config/apply which runs docker compose up -d from inside
the API container — no manual make start needed.
Backend:
- _set_pending_restart() / _clear_pending_restart() helpers track state
in config_manager so it survives page refresh
- GET /api/config/pending returns { needs_restart, changed_at, changes }
- POST /api/config/apply runs docker compose up -d via the mounted
docker.sock, using the project working_dir label to resolve host paths
- docker-compose.yml mounts docker-compose.yml itself read-only into
the API container so docker compose can read it from inside
Frontend (App.jsx):
- Polls /api/config/pending every 5 s alongside the health check
- PendingRestartBanner component with confirmation modal
- Optimistically clears banner on Apply click; API and containers
restart in the background
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
docker-compose.yml now uses ${VAR:-default} for every container IP and
the network subnet, so there are no hardcoded addresses in the YAML.
How it works:
- setup_cell.py generates .env at project root from ip_range (gitignored).
- docker-compose reads .env automatically at startup.
- When ip_range changes in Settings, the API writes a new .env via
ip_utils.write_env_file(); DNS/firewall/vIPs update immediately.
- User runs `make start` to recreate containers with the new IPs.
api/ip_utils.py gains an ENV_VAR_NAMES dict and write_env_file(ip_range, path); see the sketch below.
The old update_docker_compose_ips() direct-patch approach is removed from app.py.
3 new tests added (TestWriteEnvFile); total 324 pass.
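Sketch of the generation, with illustrative offsets and var names — container IPs are fixed offsets into ip_range, written as env vars that docker-compose substitutes via ${VAR:-default}:
```python
import ipaddress

ENV_VAR_NAMES = {"dns": "DNS_IP", "api": "API_IP", "caddy": "CADDY_IP"}
OFFSETS = {"dns": 3, "api": 5, "caddy": 20}

def write_env_file(ip_range, path):
    net = ipaddress.ip_network(ip_range)
    base = int(net.network_address)
    with open(path, "w") as f:
        f.write(f"SUBNET={net}\n")
        for name, var in ENV_VAR_NAMES.items():
            f.write(f"{var}={ipaddress.ip_address(base + OFFSETS[name])}\n")
```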
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When ip_range changes in Settings, the new subnet is now applied to:
- DNS zone records (network_manager.apply_ip_range)
- Caddy virtual IPs (firewall_manager.ensure_caddy_virtual_ips)
- iptables per-service rules (firewall_manager.update_service_ips)
- docker-compose.yml static IPs if writable (ip_utils.update_docker_compose_ips)
New module ip_utils.py derives all container IPs from the subnet using
fixed offsets so the entire stack stays consistent from one setting.
321 tests pass (72 new tests added for ip_utils, apply_ip_range, update_service_ips).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- NetworkManager.bootstrap_dns_records(): creates A records for all
cell services (api, webui, calendar, files, mail, webmail, webdav,
<cell_name>) using their static container IPs — only runs when the
zone file doesn't exist yet (idempotent)
- API startup: _bootstrap_dns() thread reads cell_name/domain from
config_manager and calls bootstrap — runs alongside enforcement thread
- Fix: add_dns_record(data) and remove_dns_record(data) now correctly
unpack dict kwargs instead of passing dict as positional arg
- Fix: remove duplicate cell{} block in config/dns/Corefile
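Sketch of the idempotence guard and the kwargs fix (record schema illustrative):
```python
import os

def bootstrap_dns_records(zone_path, records, add_dns_record):
    if os.path.exists(zone_path):      # zone already bootstrapped; do nothing
        return
    for data in records:
        # was: add_dns_record(data)  -> dict passed as positional arg
        add_dns_record(**data)         # unpack {"name": ..., "type": ..., "value": ...}
```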
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- app.py: ConfigManager now uses CONFIG_DIR env var for config file path
instead of hardcoded './config/cell_config.json' — config was being read
from the image's working directory, making all settings writes ephemeral
(lost on container restart)
- wireguard_manager: generate_config uses configured address/port instead of
hardcoded 10.0.0.1 in DNAT rules and Address field
- scripts/setup_cell.py: full setup script — generates WireGuard keys (wg
binary or Python cryptography fallback), writes wg0.conf and cell_config.json
with correct _identity key; CELL_NAME / VPN_ADDRESS / WG_PORT env vars
- Makefile: setup target passes env vars through; build-api / build-webui targets
- README: replace install.sh references with make setup && make start
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Site-to-site WireGuard tunnels between PIC cells with automatic DNS forwarding.
Each cell generates an invite JSON (public key, endpoint, VPN subnet, DNS IP,
domain); the remote cell imports it to establish a bidirectional tunnel and
CoreDNS forwarding block so each cell's domain resolves across the mesh.
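Illustrative invite payload (field names are assumptions, not the exact CellLinkManager schema):
```python
invite = {
    "cell_name": "alpha",
    "domain": "alpha.cell",
    "public_key": "<wireguard-public-key>",
    "endpoint": "203.0.113.10:51820",
    "vpn_subnet": "10.0.0.0/24",
    "dns_ip": "172.20.0.3",
}
```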
Backend:
- CellLinkManager: invite generation, add/remove connections, live WireGuard
handshake status; stores links in data/cell_links.json
- WireGuardManager: add_cell_peer() accepts subnet CIDRs (not /32) and an
optional endpoint for site-to-site peers; _read_iface_field() reads port,
address, and network directly from wg0.conf at runtime instead of constants
- NetworkManager: add/remove CoreDNS forwarding blocks per remote cell domain
- app.py: /api/cells/* routes; _next_peer_ip() derives VPN range from
configured address so peer allocation follows any address change
Frontend:
- CellNetwork page: invite panel (JSON + QR), connect form (paste JSON),
connected cells list (green/red status, disconnect button)
- App.jsx: Cell Network nav entry and route
Tests: 25 new tests across test_wireguard_manager, test_network_manager,
test_cell_link_manager (263 total)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Backend:
- wireguard_manager: _get_configured_port/address/network() read from wg0.conf
  instead of module-level constants (sketch after this list); get_split_tunnel_ips() derives VPN network
from configured Address; get_server_config() returns configured port, dns_ip,
split_tunnel_ips, vpn_network
- add_peer() and get_peer_config() use configured port (not hardcoded 51820)
- _next_peer_ip() derives subnet from wireguard_manager._get_configured_address()
so new peers are allocated IPs from the correct VPN range after address change
- refresh-ip and check-port API endpoints return configured port, not 51820
- PUT /api/config: when wireguard port/address changes, all peers are marked
config_needs_reinstall so users know to re-download tunnel configs
- get_peer_config endpoint: uses configured split tunnel IPs (not hardcoded)
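Simplified sketch of the runtime read (the real parsing is more defensive):
```python
def read_iface_field(conf_path, field):
    with open(conf_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith(field):
                return line.split("=", 1)[1].strip()
    return None

# read_iface_field("/etc/wireguard/wg0.conf", "ListenPort") -> "51820"
```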
Frontend:
- Peers.jsx: SERVICES domains use live domain from ConfigContext; generateConfig()
uses serverConf.dns_ip and serverConf.split_tunnel_ips; vpn_network shown in
peer-access description; DNS hint uses live domain; server config loaded at
mount time so it is available without re-fetching on every peer action;
handleUpdatePeer uses /32 for server-side AllowedIPs (was incorrectly using
full/split tunnel CIDRs which the backend rejects)
- WireGuard.jsx: generateWireGuardConfig() uses serverConfig.dns_ip,
split_tunnel_ips from server-config API; split-tunnel description shows
live IPs
Tests: 9 new tests in TestWireGuardConfigReads verify all config reads
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Changes:
- ConfigContext.jsx: React context that loads /api/config once; exposes domain,
cell_name, refresh() — wraps entire app in App.jsx
- Email/Calendar/Files pages: replace hardcoded 'mail.cell', 'calendar.cell',
'files.cell', 'webdav.cell' with domain from ConfigContext; hostname updates
immediately after Settings save (refreshConfig() called on save)
- /api/status: cell_name and domain now read from stored _identity in config_manager,
not hardcoded 'personal-internet-cell' / 'cell.local'
- network_manager.apply_cell_name(old, new): updates hostname A-record in primary
zone file and reloads CoreDNS; called from PUT /api/config when cell_name changes
- Old identity captured before save so apply_cell_name gets the correct old value
- Settings EmailForm: smtp/imap ports are read-only with note (docker-compose.yml level)
- Settings FilesForm: port is read-only with note (Caddy proxies on 80 externally)
- Settings CalendarForm: port labeled "Internal port; clients use 80 via Caddy"
Tests added:
- test_apply_cell_name_renames_host_record
- test_apply_cell_name_noop_when_same
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Each service manager now has apply_config() that writes to the actual config:
- network: dhcp_range → dnsmasq.conf (reload cell-dhcp), ntp_servers → chrony.conf
(restart cell-ntp), domain → dnsmasq.conf domain= line
- email: domain → mailserver.env OVERRIDE_HOSTNAME + POSTMASTER_ADDRESS,
restart cell-mail
- wireguard: port/address/private_key → wg0.conf ListenPort/Address/PrivateKey,
restart cell-wireguard
- calendar: port → radicale config hosts=, restart cell-radicale
PUT /api/config now calls apply_config() after persisting JSON, and returns
{restarted: [...], warnings: [...]} so Settings UI can show which containers
were restarted. _restart_container() helper added to BaseServiceManager.
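A sketch of the shared helper, assuming the docker CLI and a mounted socket; failures come back as warnings rather than exceptions so the save itself still succeeds:
```python
import subprocess

def _restart_container(name):
    """Restart one container; return a warning string on failure, else None."""
    result = subprocess.run(
        ["docker", "restart", name],
        capture_output=True, text=True, timeout=60,
    )
    if result.returncode != 0:
        return f"failed to restart {name}: {result.stderr.strip()}"
    return None
```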
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- PUT /api/config now calls service_manager.update_config() for each service
so changes write to the service's own config file, not just cell_config.json
- email_manager.get_status() now reads smtp_port/imap_port/domain from its
config file (defaults: 587/993/cell.local) and includes them in the response
- calendar_manager.get_status() includes configured port (default 5232)
- file_manager.get_status() uses configured port from service config
- Email.jsx reads imap_port/smtp_port from API status instead of hardcoding
- Settings service sections show "port changes require container restart" note
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
app.py:
- Alert logic now checks status.running (container up/down) instead of healthy
(which requires connectivity tests) — services are only alerted when actually down
- Add POST /api/health/history/clear endpoint to reset history + alert counters
log_manager.py:
- get_all_log_file_infos: include rotated backup files (*.log.1, *.log.2 ...) in listing,
marked with backup=true so UI can dim them and hide rotate button
api.js: add monitoringAPI.clearHealthHistory
Logs page:
- Health History: add Clear button with confirmation
- File panel: show full filename (including .log.1 backups), explain host path and naming,
hide rotate button for backup files
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
docker-compose.yml:
- Add json-file logging driver (max-size: 10m, max-file: 5) to all 13 containers
- Docker now owns container stdout/stderr rotation automatically
- Add ./data/logs:/app/api/data/logs volume to API — service logs now persist across restarts
log_manager.py:
- Remove container log collection hack (Docker handles it natively)
- Add set_service_level(service, level) — change log level at runtime without restart
- Add get_service_levels() — return current per-service levels
- Simplify get_all_log_file_infos to return only service log files
app.py:
- Add GET /api/logs/verbosity — return current per-service log levels
- Add PUT /api/logs/verbosity — update levels at runtime, persist to config/log_levels.json
- Load persisted log level overrides at startup from log_levels.json
- Simplify rotate endpoint (service logs only, container logs owned by Docker)
wireguard_manager.py:
- get_keys(): return empty strings if key files don't exist (prevents get_status crash
when wg0.conf is missing at startup and falls through to generate_config)
Logs page (4 tabs):
- API Service Logs: structured JSON logs from Python managers, with search/filter/rotate panel
- Container Logs: live docker logs (read via existing /api/containers/<name>/logs endpoint)
- Verbosity Config: per-service level dropdowns, apply immediately + persist
- Health History: existing health poll table
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- log_manager: add collect_container_logs (appends docker logs to container_<name>.log),
get_container_log_lines, rotate_container_log, get_all_log_file_infos
- app.py: new endpoints /api/logs/files (all log file sizes), /api/logs/containers/<name>
(collect+return stored container logs); rotate endpoint now handles both service and container logs
- Logs page: split into API Service Logs tab (python manager logs) and Container Logs tab
(persistent docker stdout/stderr); Statistics tab shows both kinds with per-row rotate;
each tab has a description explaining what it shows and where files live
- wireguard_manager: test_connectivity peer_ip=None guard (already in previous commit, now rebuilt)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The connectivity endpoint was calling routing_manager.test_connectivity()
(no args, internal health check) instead of test_routing_connectivity(target_ip).
Also ping/traceroute aren't installed in the API container; run them via
docker exec cell-wireguard instead.
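Sketch of the workaround:
```python
import subprocess

def ping_via_wireguard(target_ip):
    # the API image ships neither ping nor traceroute, so borrow them
    # from the cell-wireguard container
    result = subprocess.run(
        ["docker", "exec", "cell-wireguard",
         "ping", "-c", "3", "-W", "2", target_ip],
        capture_output=True, text=True,
    )
    return result.returncode == 0
```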
Updated test_api_endpoints to mock test_routing_connectivity and cover
the new DELETE /firewall/<id> and GET /live-iptables endpoints.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Server-side access control:
- firewall_manager.py: per-peer iptables FORWARD rules in the WireGuard container;
  virtual IPs on Caddy (172.20.0.21-24) for per-service DROP/ACCEPT targeting (sketch after this list)
- CoreDNS Corefile regenerated with ACL blocks for blocked services per peer
- POST /api/wireguard/apply-enforcement re-applies rules after WireGuard restart;
wg0.conf PostUp calls it via curl so rules restore automatically on container start
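Sketch of one enforcement rule, with illustrative addresses; the comment tag is what clear_peer_rules filters on:
```python
import subprocess

def block_service(peer_ip, service_vip, peer_name):
    # DROP this peer's traffic to one service's virtual IP on Caddy
    subprocess.run([
        "docker", "exec", "cell-wireguard",
        "iptables", "-I", "FORWARD",
        "-s", peer_ip, "-d", service_vip,
        "-m", "comment", "--comment", f"pic-peer:{peer_name}",
        "-j", "DROP",
    ], check=True)
```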
WireGuard fixes:
- _syncconf uses `wg set peer` instead of `wg syncconf` to avoid resetting ListenPort
- add_peer validates AllowedIPs must be /32 — rejects full/split tunnel CIDRs that
would route internet or LAN traffic to that peer
- _config_file() checks for linuxserver wg_confs/ subdirectory first
UI:
- Peers page fetches /api/wireguard/peers/statuses for live handshake data;
status badge now shows real Online/Offline + seconds since last handshake
- IP field removed from Add Peer form (auto-assigned from 10.0.0.0/24)
Tests (246 pass):
- test_firewall_manager.py: 22 tests for ACL generation, iptables rule correctness,
comment tagging, clear_peer_rules filter logic
- test_peer_wg_integration.py: 10 tests for /32 enforcement, IP auto-assignment,
syncconf called on add/remove
- test_wireguard_manager.py: updated to reflect correct IPs and /32 requirement
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Peer creation/edit form now configures:
- Tunnel mode: full (0.0.0.0/0) or split (PIC only)
- Per-service access toggles (calendar, files, mail, webdav)
- Peer-to-peer communication toggle
- Optional calendar account creation
- Access capability badges in peer list
Bug fixes:
- DNS in client configs was 8.8.8.8 / 172.20.0.2 — now 172.20.0.3 (CoreDNS)
This was why .cell domains didn't resolve on connected VPN peers
- get_peer_config API uses stored internet_access to set AllowedIPs
- New PUT /api/peers/<name> endpoint with config_changed detection
- POST /api/peers/<name>/clear-reinstall clears reinstall flag after download
- Routing page reads real host routes via /proc/1/net/route (pid: host)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- WireGuard default changed to full tunnel (0.0.0.0/0) — all peer traffic
routes through PIC server so internet latency matches server's clean 41ms
- UI tunnel toggle now defaults to Full tunnel
- API /peers/config accepts allowed_ips param so UI toggle wires through
- Routing page reads real host routes via /proc/1/net/route (pid: host)
instead of mock data; shows ens18/192.168.31.1 correctly
- Add iproute2 + util-linux to API Dockerfile
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Fix CoreDNS not loading .cell zones (wrong Corefile path, now uses -conf flag)
- Fix WireGuard server address conflict (172.20.0.1/16 overlapped with Docker
network; changed to 10.0.0.1/24 to eliminate duplicate routes)
- Add SERVERMODE=true and sysctls to WireGuard docker-compose for server mode
- Fix DNS zone file parser to handle 4-field records (name IN type value)
- Add get_dns_records() to NetworkManager; mount data/dns into API container
- Fix peer config endpoint: look up IP/key from registry, use real endpoint
- Add bulk peer statuses endpoint keyed by public_key
- Normalize snake_case API fields to camelCase in WireGuard UI
- Add port check endpoint (checks via live handshake, not unreliable TCP probe)
- Add Caddy virtual hosts for ui/calendar/files/mail .cell domains (HTTP only)
- Fix cell config domain default from cell.local to cell
- Fix Routing Network Config tab (was calling hardcoded localhost:3000)
- Fix DNS records display (record.value not record.ip)
- Move service access guide to top of Dashboard with login hints
- Add /api/routing/setup endpoint
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- WireGuardManager: get_external_ip() (cached 1h), check_port_open(),
get_server_config() returning public_key + detected endpoint
- API: /api/wireguard/server-config returns real external IP;
/api/wireguard/refresh-ip forces re-detection;
/api/wireguard/peers/config now looks up peer IP + private key from
registry and uses real server endpoint automatically
- Fix doubled port in Endpoint (178.x:51820:51820 → 178.x:51820)
- Fix Address=/32 when peer_ip already has mask
- WebUI nginx: proxy /api/ and /health to cell-api (fixes localhost:3000
hardcode — UI now works from any machine)
- api.js: baseURL='' so all calls go through nginx proxy
- WireGuard page: show Server Endpoint card with external IP, endpoint,
public key, and Refresh IP button
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>