Sends 50 pings at 0.2s intervals through the cell-to-cell tunnel and
asserts that ≤5% exceed 3× the median RTT (floor 15ms). Catches
server-side packet processing regressions on wired paths.
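The assertion described above can be sketched as a small helper (names hypothetical; assumes RTT samples in milliseconds have already been collected from the ping output):

```python
import statistics

def assert_latency_ok(rtts_ms, max_outlier_frac=0.05, floor_ms=15.0):
    """Fail if more than 5% of samples exceed 3x the median RTT.

    The 15 ms floor on the threshold keeps very fast links from
    flagging ordinary jitter as outliers.
    """
    median = statistics.median(rtts_ms)
    threshold = max(3 * median, floor_ms)
    outliers = [r for r in rtts_ms if r > threshold]
    frac = len(outliers) / len(rtts_ms)
    assert frac <= max_outlier_frac, (
        f"{frac:.0%} of pings exceeded {threshold:.1f} ms "
        f"(median {median:.1f} ms)"
    )
```

With 50 samples, up to 2 slow pings (4%) pass; 3 or more fail.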
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Concurrent callers (health monitor + startup) could both pass the
delete-all loop and each insert a copy, producing duplicate
ESTABLISHED,RELATED rules. Lock serialises all calls.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
pic1 ships alpine but not busybox; ensure_cell_subnet_routes() now uses
the alpine image so route injection works on all cells.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- firewall_manager: add _get_wg_server_ip() helper; scope ensure_cell_api_dnat(),
ensure_dns_dnat(), ensure_service_dnat() DNAT rules with -d server_ip; add
ensure_wg_masquerade() (Docker→wg0 MASQUERADE+FORWARD) and
ensure_cell_subnet_routes() (host routes via docker run busybox)
- wireguard_manager: scope PostUp DNAT rules with -d server_ip in generate_config()
and ensure_postup_dnat(); add Docker→wg0 MASQUERADE+FORWARD rules
- app.py: call ensure_wg_masquerade() and ensure_cell_subnet_routes() in
_apply_startup_enforcement()
- tests/test_firewall_manager.py: mock _get_wg_server_ip, add
test_dnat_is_scoped_to_server_ip and test_returns_false_when_wg_server_ip_not_found
- tests/e2e/wg/test_cell_to_cell_routing.py: rewrite to use dynamic config
(no hardcoded IPs/ports), add latency and domain access tests
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The cell catch-all DROP rule blocked all traffic from a connected cell's
subnet, including ESTABLISHED/RELATED packets (ICMP replies, TCP ACKs) for
connections initiated by local VPN peers. This broke ping to the remote
cell's WireGuard IP even when the cell-to-cell tunnel was healthy.
Change the DROP to match only NEW,INVALID connections so established reply
traffic passes through to the stateful ACCEPT rule.
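Hedged sketch of the resulting rule; the conntrack match is standard iptables, but the argument layout and comment tag are illustrative, not the project's exact code:

```python
def cell_drop_rule(cell_subnet, tag):
    """Catch-all DROP for a cell subnet that leaves reply traffic alone:
    matching only NEW,INVALID conntrack states lets ESTABLISHED/RELATED
    packets fall through to the stateful ACCEPT rule."""
    return [
        "iptables", "-A", "FORWARD",
        "-s", cell_subnet,
        "-m", "conntrack", "--ctstate", "NEW,INVALID",
        "-m", "comment", "--comment", tag,
        "-j", "DROP",
    ]
```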
Also adds tests/e2e/wg/test_cell_to_cell_routing.py — an end-to-end test
that brings up a real WireGuard tunnel from the test runner to pic1 and
verifies full cross-cell routing including ICMP ping, API /health, and Caddy.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
apply_cell_rules drops all traffic from a cell's subnet except specific
service ports. This also drops ICMP replies and TCP ACKs for connections
initiated by local peers to the connected cell, breaking cross-cell
routing (ping to 10.0.0.1 was silently dropped by the 'test' cell's DROP rule).
Fix: ensure_forward_stateful() inserts a stateful ESTABLISHED,RELATED
ACCEPT at the top of FORWARD. Called from apply_cell_rules (every cell
add/update) and from _apply_startup_enforcement. Idempotent.
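The idempotent insert can be sketched with the standard check-then-insert pattern (`iptables -C` exits non-zero when the rule is missing); a sketch only, the real helper lives in firewall_manager:

```python
import subprocess

def ensure_forward_stateful(run=subprocess.run):
    """Insert a stateful ESTABLISHED,RELATED ACCEPT at the top of FORWARD
    only if it is not already present. Safe to call on every cell
    add/update; 'run' is injectable for testing."""
    rule = ["-m", "conntrack", "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT"]
    check = run(["iptables", "-C", "FORWARD", *rule], capture_output=True)
    if check.returncode != 0:
        run(["iptables", "-I", "FORWARD", "1", *rule], capture_output=True)
```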
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three related fixes for split-tunnel peers that need to reach connected cells:
1. apply_peer_rules/apply_all_peer_rules now accept wg_subnet (actual local VPN
subnet) and cell_subnets (connected cells' vpn_subnets) parameters instead of
hardcoding 10.0.0.0/24. All callers (startup, add_peer, update_peer,
apply-enforcement endpoint) pass the real values.
2. Explicit ACCEPT rules are inserted in FORWARD for each connected cell's
subnet so split-tunnel peers (internet_access=False) can still reach
connected cells via the wg0→wg0 path.
3. apply_ip_range in network_manager now loads cell_links.json and passes it
to generate_corefile(), fixing a race where the bootstrap DNS thread could
overwrite the Corefile and wipe cross-cell DNS forwarding zones on startup.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When a cell is connected to others, changing the local WireGuard address
or Docker ip_range to a subnet that overlaps a connected cell's vpn_subnet
would break routing. Both now return 409 with the conflicting cell name.
- wireguard.address: derive network from new address, check all connected
cells' vpn_subnet for overlap (after existing format validation)
- ip_range: check all connected cells' vpn_subnet for overlap (after
existing RFC-1918 validation)
Tests: 4 cases each (overlap → 409, no overlap → ok, no cells → ok,
format error still fires first → 400).
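The overlap check behind both 409 paths can be sketched with the stdlib `ipaddress` module (cell records assumed to be `{'name': ..., 'vpn_subnet': ...}` dicts; names hypothetical):

```python
import ipaddress

def find_subnet_conflict(new_cidr, connected_cells):
    """Return the name of the first connected cell whose vpn_subnet
    overlaps new_cidr, or None. The route handler turns a hit into a
    409 response naming the conflicting cell."""
    new_net = ipaddress.ip_network(new_cidr, strict=False)
    for cell in connected_cells:
        if new_net.overlaps(ipaddress.ip_network(cell["vpn_subnet"])):
            return cell["name"]
    return None
```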
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two gaps allowed a cell to take a domain already in use by a connected cell:
1. PUT /api/config domain change: added check against cell_link_manager's
connected cells list before saving — returns 409 if the new domain
collides with any connected cell's domain.
2. accept_invite healing path: a remote cell changing its domain via a
re-invite was not validated against other connected cells' domains.
Now calls _check_invite_conflicts(invite, exclude_cell=name) before
applying any change.
Also: the healing path now detects domain changes (alongside dns_ip/
vpn_subnet/endpoint), updates the stored domain, and refreshes the DNS
forward rule when the domain changes.
Tests: 3 new domain-conflict tests in test_config_validation.py;
3 new accept_invite healing tests in test_cell_link_manager.py.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Flask's default cookie name ('session') is shared across all ports on the same
hostname. When two PIC instances are accessed via localhost:portA and localhost:portB,
logging into one overwrites the other's session cookie, causing repeated logouts.
Derive a unique 8-hex suffix from each instance's persistent SECRET_KEY and set
SESSION_COOKIE_NAME = 'pic_sess_<suffix>'. This ensures each cell uses a distinct
cookie name, so sessions are fully isolated regardless of hostname.
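A sketch of the derivation; the exact digest function is an assumption, but any stable 8-hex digest of the persistent SECRET_KEY gives the described behaviour:

```python
import hashlib

def session_cookie_name(secret_key: bytes) -> str:
    """Derive a stable per-instance cookie name so two PICs on
    localhost:portA / localhost:portB never clobber each other's
    session cookie."""
    suffix = hashlib.sha256(secret_key).hexdigest()[:8]
    return f"pic_sess_{suffix}"
```

Flask then picks it up via `app.config['SESSION_COOKIE_NAME']`.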
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
_get_dnat_container_ips() used a concatenating docker inspect format that
produced "invalid IP" when containers had multiple network attachments.
The old ensure_postup_dnat appended rather than replacing, so each update
call added a broken duplicate set of rules causing iptables to fail on
startup and tear down wg0 entirely.
Fix _get_dnat_container_ips to use a space separator in the format string
and validate each token as a real IP before accepting it.
Rewrite ensure_postup_dnat around an _is_dnat_rule() helper: split PostUp
on semicolons, strip every managed DNAT/FORWARD rule (any IP, port 53/80),
and append a single correct set — fully idempotent regardless of prior state.
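The rewrite can be sketched as a pure function over the PostUp string; the matcher below captures the idea (any DNAT/FORWARD fragment touching port 53/80), not the exact predicate:

```python
def _is_dnat_rule(fragment: str) -> bool:
    """Heuristic for 'rule we manage': a DNAT/FORWARD fragment touching
    port 53 or 80, whatever (possibly broken) IP an earlier run baked in."""
    frag = fragment.strip()
    if "--dport 53" not in frag and "--dport 80" not in frag:
        return False
    return "DNAT" in frag or "FORWARD" in frag

def rewrite_postup(postup: str, fresh_rules: list) -> str:
    """Split PostUp on semicolons, drop every managed rule (old, broken,
    or duplicated), and append exactly one correct set — idempotent
    regardless of what prior runs left behind."""
    kept = [f.strip() for f in postup.split(";")
            if f.strip() and not _is_dnat_rule(f)]
    return "; ".join(kept + fresh_rules)
```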
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Login.jsx:
- Eye/EyeOff toggle on the password field
- Locked account error now shows exact minutes remaining ("Try again in 3 minutes")
instead of generic "Try again later"
AccountSettings.jsx:
- PasswordInput component wraps all 4 password fields with individual eye toggles
(current password, new password, confirm, admin reset)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
3 new tests in TestSetPeerRouteVia:
- 409 when remote_exit_relay_active=True (would create A→B→A cycle)
- disable (via_cell=null) bypasses loop check — always allowed
- no 409 when remote_exit_relay_active=False (safe to enable)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three issues fixed together:
1. WireGuard address changes now go through the pending-restart queue
(shown in the UI banner) instead of restarting cell-wireguard immediately.
Only private_key changes still restart immediately; address and port
changes both defer to the user-initiated Apply flow. Previously the
address change was silently applied and never appeared in Settings →
Pending Configuration.
2. When the WG address changes, the API spawns a background thread that
pushes the updated invite to all connected cells (over LAN, before the
WG tunnel is back up). This lets remote cells automatically update
their dns_ip, AllowedIPs, and CoreDNS forwarding rules without manual
re-pairing.
3. accept_invite now handles the "already connected but changed" case:
if the remote cell re-sends an invite with a different dns_ip, vpn_subnet
or endpoint, we update the stored link, the WG AllowedIPs, and the
CoreDNS forward rule in place — no delete/re-add required. Previously
the endpoint was ignored and returned the stale record unchanged.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three fixes:
1. Extend the docker-exec safety guard in wireguard_manager to also check
for 'wg_confs' in the config path. When running unit tests on the host
the API uses /app/config/wireguard/wg0.conf (no wg_confs subdir), so the
old '/tmp/' | 'pytest' check didn't fire — _syncconf and friends were
executing live 'docker exec cell-wireguard wg set' calls against the
running container, removing real VPN peers that didn't appear in the
test config. The wg_confs subdir only exists inside the container mount,
so its presence reliably gates live calls.
2. Fix get_split_tunnel_ips() wrong path: self.data_dir + 'api/cell_links.json'
→ self.data_dir + 'cell_links.json'. The extra 'api/' segment produced
/app/data/api/cell_links.json inside the container instead of the real
/app/data/cell_links.json, so connected cells were silently excluded from
split-tunnel CIDRs.
3. update_peer_ip_registry and ip_update now also call
wireguard_manager.update_peer_ip so wg0.conf AllowedIPs stay in sync when
a peer's VPN IP changes at runtime (previously only peers.json was updated).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
**Auto mutual pairing**
When Cell A imports Cell B's invite (POST /api/cells on A), A now
immediately pushes its own invite to Cell B over the LAN (using the
endpoint IP, before the WG tunnel exists) via the new endpoint:
POST /api/cells/peer-sync/accept-invite
Cell B auto-adds Cell A as a WireGuard peer and DNS forward, completing
the bidirectional tunnel without any manual action on Cell B's UI.
The endpoint is idempotent and unauthenticated (runs before WG tunnel).
Previously, the pairing was one-sided: Cell A had Cell B as a WG peer
but Cell B never had Cell A — the tunnel never established and all
cross-cell operations silently failed.
**Conflict detection (add_connection + accept-invite)**
_check_invite_conflicts() now validates before connecting:
- VPN subnet must not overlap own subnet or any already-connected cell's subnet
- Domain must not match own domain or any already-connected cell's domain
Returns clear error messages so the admin knows which cell to reconfigure.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
**PIC UI always accessible (service_access=[])**
Remove the per-peer Caddy:80 ACCEPT/DROP rule from apply_peer_rules.
Service access was enforced at two layers (iptables DROP + CoreDNS ACL),
but the iptables layer also blocked the PIC web UI served through Caddy.
CoreDNS ACL alone is sufficient — DNS blocks service hostnames; the UI
path through Caddy remains reachable regardless of service_access value.
**Exit-relay internet routing (route_via another cell)**
update_peer_ip validated new_ip as a single ip_network, rejecting the
comma-separated '10.0.1.0/24, 0.0.0.0/0' string passed by
update_cell_peer_allowed_ips(add_default_route=True). The AllowedIPs
in wg0.conf was never updated, so WireGuard never routed internet traffic
through the exit cell's tunnel. Fix: validate each CIDR individually and
apply the change live via wg set without a container restart.
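The per-CIDR validation can be sketched with the stdlib; whole-string `ip_network` rejects the comma-separated form outright, so each token is validated individually (helper name hypothetical):

```python
import ipaddress

def parse_allowed_ips(value: str) -> list:
    """Validate a comma-separated AllowedIPs string CIDR-by-CIDR.
    Raises ValueError on any bad token; returns normalised CIDRs."""
    cidrs = []
    for token in value.split(","):
        net = ipaddress.ip_network(token.strip(), strict=False)
        cidrs.append(str(net))
    return cidrs
```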
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- _build_acl_block: put all blocked IPs for a service in ONE acl block instead
of one block per peer — the first block's allow-all was silently granting
access to every peer after the first blocked one (first-match semantics)
- generate_corefile: add 'reload' plugin so SIGUSR1 triggers Corefile reload
in newer CoreDNS builds (without it the signal was a no-op)
- tests/test_firewall_manager.py: new tests for single merged ACL block and
the reload directive
- tests/e2e/api/test_peer_access_update.py: e2e tests for service_access,
internet_access, and peer_access updates persisting live to iptables/CoreDNS
- tests/e2e/api/test_cell_to_cell.py: e2e tests for cell-to-cell connection
management, permissions API, and cross-cell service access restrictions
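The merged-block idea in the first bullet can be sketched as follows; the acl directive layout is illustrative (CoreDNS acl plugin syntax assumed), and what matters is one block per service so the first-match `allow net *` comes after every block entry:

```python
def build_acl_block(zone: str, blocked_ips: list) -> str:
    """Emit ONE CoreDNS acl block for a service zone. With one block per
    peer, the first block's trailing allow-all matched every later peer
    (acl uses first-match semantics); merging all blocked IPs into a
    single block fixes that."""
    lines = [f"acl {zone} {{"]
    for ip in blocked_ips:
        lines.append(f"    block net {ip}/32")
    lines.append("    allow net *")
    lines.append("}")
    return "\n".join(lines)
```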
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DNAT rules applied via docker exec are lost whenever wg-easy reloads the
WireGuard interface (PostDown flushes the nat table then PostUp only
re-adds static rules). Fix: embed DNS (port 53) and service (port 80)
DNAT rules directly in wg0.conf PostUp/PostDown so they reapply on every
interface restart. ensure_postup_dnat() patches existing configs on startup.
get_server_config() now returns the WG server IP (e.g. 10.0.0.1) for
dns_ip instead of the cell-dns container IP (172.20.0.3). This makes the
value consistent with what get_peer_config() writes into the .conf file,
and fixes the stale hint text in Peers.jsx and WireGuard.jsx.
UI: fallback dns_ip changed from 172.20.0.3 to 10.0.0.1; split-tunnel
fallback drops the 172.20.0.0/16 stale range.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
DNS A records now return the WireGuard server IP (10.0.0.1) instead of
Docker bridge VIPs so cross-cell peers resolve service names correctly
regardless of their bridge subnet. DNAT rules (wg0:53→cell-dns:53 and
wg0:80→cell-caddy:80) are applied at startup. Caddy routes by Host header,
eliminating the Docker bridge subnet conflict. Firewall cell rules allow
DNS and service (Caddy) traffic from linked cell subnets. Split-tunnel
AllowedIPs now dynamically includes connected-cell VPN subnets and drops
the 172.20.0.0/16 range. Peers with route_via set now receive full-tunnel
config (0.0.0.0/0) so all their traffic exits via the remote cell.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cell link [Peer] blocks can vanish from wg0.conf after a container
rebuild or config reset. The startup recovery previously only restored
VPN peer rules (iptables) but not the WireGuard peer blocks needed for
cell-to-cell tunnels, leaving the link red with no automatic recovery.
Add _restore_cell_wg_peers() called from _apply_startup_enforcement()
that reconciles wg0.conf against cell_links.json and re-adds any missing
[Peer] blocks, then calls _syncconf() to hot-reload the interface.
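The reconciliation can be sketched as a pure comparison keyed on PublicKey (link records assumed to carry a `public_key` field; the caller re-adds the returned links and hot-reloads via _syncconf()):

```python
def find_missing_peer_links(wg_conf: str, cell_links: list) -> list:
    """Return the cell links whose [Peer] block has vanished from
    wg0.conf, e.g. after a container rebuild or config reset."""
    present = set()
    for line in wg_conf.splitlines():
        line = line.strip()
        if line.startswith("PublicKey"):
            # maxsplit=1 keeps the trailing '=' padding of base64 keys
            present.add(line.split("=", 1)[1].strip())
    return [link for link in cell_links if link["public_key"] not in present]
```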
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The PostUp rule appended `iptables -A FORWARD -i wg0 -j ACCEPT` which
allowed any WireGuard-connected client full internet access regardless of
per-peer rules, even when no peers were configured in wg0.conf.
Fix: change PostUp/PostDown to use DROP as the catch-all. Per-peer and
per-cell rules use -I (insert at top) so they take precedence; unknown
or unconfigured WG traffic hits the DROP at the bottom.
Also add reconcile_stale_peer_rules() called on startup to remove FORWARD
rules for peer IPs that no longer exist in the registry, preventing deleted
peers from retaining firewall access across container restarts.
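The reconcile step can be sketched as a pure function over the comment tags scraped from FORWARD; the `pic-peer-<ip>` tag shape is an assumption based on the project's tag naming, not quoted code:

```python
def stale_peer_ips(rule_comments: list, registry_ips: set) -> list:
    """Return peer IPs whose FORWARD rules should be deleted because the
    peer no longer exists in the registry. Cell tags are left alone."""
    stale = []
    for comment in rule_comments:
        if comment.startswith("pic-peer-"):
            ip = comment[len("pic-peer-"):]
            if ip not in registry_ips:
                stale.append(ip)
    return stale
```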
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Locks vitest + @testing-library versions added in 94957ab so
make test-webui is reproducible.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
CellNetwork page (CellPanel):
- Internet Sharing section below service toggles
- Toggle: 'Offer my internet to <cell>' (calls PUT /api/cells/<n>/exit-offer)
- Read-only indicator: whether remote cell offers internet back
- Contextual hints explaining what each party needs to do next
Peers page:
- Fetches connected cells on mount
- Edit modal: Internet Exit dropdown (route-via) showing all connected cells
with ✓ marker for cells that have offered internet
- Warning if selected cell hasn't offered internet yet
- On save, calls PUT /api/peers/<n>/route-via only when value changed
- Table badge shows 'via <cell>' for peers with active routing
api.js:
- cellLinkAPI.setExitOffer(cellName, offered)
- peerRegistryAPI.setRouteVia(peerName, viaCell)
Tests (vitest + @testing-library/react):
- 19 new frontend tests in src/__tests__/
- CellNetworkInternetSharing.test.jsx (10 tests)
- PeersRouteVia.test.jsx (9 tests)
- make test-webui target runs them via docker node:18-alpine
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds the ability to route a specific peer's internet traffic through a
connected cell acting as an exit relay.
Cell A side:
- PUT /api/peers/<peer>/route-via {"via_cell": "cellB"} sets route_via
- Updates WG AllowedIPs to include 0.0.0.0/0 for the exit cell peer
- Adds ip rule + ip route in policy table inside cell-wireguard so the
specific peer's traffic egresses via cellB's WG IP
- Sets exit_relay_active on the cell link and pushes use_as_exit_relay=True
to cellB via peer-sync
Cell B side:
- Receives use_as_exit_relay in the peer-sync payload
- Calls apply_cell_rules(..., exit_relay=True) to add FORWARD -o eth0 ACCEPT
- Stores remote_exit_relay_active flag for startup recovery
Startup recovery:
- apply_all_cell_rules passes exit_relay=remote_exit_relay_active (cellB)
- _apply_startup_enforcement reapplies ip rule for each peer with route_via (cellA)
since policy routing rules don't survive container restart
peer_registry gets route_via field with lazy migration.
22 new tests across test_cell_link_manager, test_peer_registry, test_peer_route_via.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds the ability for a cell to signal to a peer that it's willing to
route internet traffic on their behalf. This is the signaling layer
for Phase 3 (per-peer routing via exit cell).
Changes:
- cell_links.json: exit_offered (bool) + remote_exit_offered (bool)
fields with lazy migration (default false for existing records)
- _push_permissions_to_remote: includes exit_offered in the push body
- apply_remote_permissions: accepts exit_offered kwarg; stores it as
remote_exit_offered on the matching cell link
- peer-sync receiver: passes exit_offered from body to apply_remote_permissions
- CellLinkManager.set_exit_offered(cell_name, offered): persists +
triggers push so the remote learns of our offer immediately
- PUT /api/cells/<name>/exit-offer: REST endpoint to toggle the flag
- 12 new tests covering all new paths
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
MASQUERADE rewrites the source IP of forwarded packets from
the cell's WG address (10.0.x.1) to cell-wireguard's bridge
IP (172.20.x.9). The peer-sync endpoint authenticates callers
by checking that the source IP is inside a known cell's vpn_subnet,
so MASQUERADE caused all pushes to fail with 403.
Fix: _push_permissions_to_remote() now calls _local_wg_ip() to
get the local wg0 address and passes it as X-Forwarded-For.
_authenticate_peer_cell() already supports XFF for exactly this
proxying scenario. Also adds a test verifying the header is present
in the constructed curl command.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
/api/cells/peer-sync/permissions is called over the WireGuard tunnel
by remote cells — they have no session cookie and cannot produce a CSRF
token. The endpoint authenticates via source IP (must be in the remote
cell's vpn_subnet) and WireGuard public key instead.
Without this, the global enforce_auth hook returns 401 before the route
handler runs, so all cross-cell permission pushes fail even when the
WG tunnel and iptables rules are correct.
Also adds a test verifying the route can be reached without a session.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
wg set updates WireGuard peer state but does not add kernel routes —
unlike wg-quick. Without ip route add, traffic to a remote cell's
vpn_subnet is routed via the default gateway (internet) instead of wg0,
causing all cross-cell pushes to time out with HTTP 000.
- add_cell_peer() now calls _ensure_cell_route(vpn_subnet) after
writing the peer config and running _syncconf
- _ensure_cell_route() runs docker exec cell-wireguard ip route add
(idempotent, non-fatal); no-op inside test dirs
- sync_cell_routes() parses wg0.conf at startup to re-add any routes
lost across container restarts; called from _apply_startup_enforcement
- 5 new unit tests covering both normal and test-dir no-op paths
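The startup re-add can be sketched as parsing AllowedIPs out of wg0.conf into route commands (a sketch of the idea behind sync_cell_routes(); the real code runs these via docker exec in cell-wireguard):

```python
def routes_from_wg_conf(wg_conf: str, local_subnet: str = "10.0.0.0/24") -> list:
    """Build the 'ip route add <subnet> dev wg0' commands that 'wg set'
    never installs (unlike wg-quick), skipping the local VPN subnet."""
    cmds = []
    for line in wg_conf.splitlines():
        line = line.strip()
        if line.startswith("AllowedIPs"):
            for cidr in line.split("=", 1)[1].split(","):
                cidr = cidr.strip()
                if cidr and cidr != local_subnet:
                    cmds.append(["ip", "route", "add", cidr, "dev", "wg0"])
    return cmds
```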
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The per-cell catch-all DROP was reaching position 5 before our ACCEPT
(position 6) because apply_all_cell_rules can re-run after
ensure_cell_api_dnat, pushing the DNAT ACCEPT below the DROP.
Fix: add the API-sync ACCEPT inside apply_cell_rules itself, tagged with
the cell's own tag and inserted last (with -I it lands at position 1, above the DROP).
Since it's part of the cell's rule block it is always in the right
position relative to the catch-all DROP, regardless of call order.
Also adds _get_cell_api_ip() helper (docker inspect cell-api) so the
destination IP is always current, and two new tests that verify both the
rule exists and that the insertion order guarantees it wins over DROP.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
cell-api has no route to remote WG tunnel IPs — only cell-wireguard does.
Fix _push_permissions_to_remote() to use 'docker exec cell-wireguard curl'
so outbound sync HTTP traverses the WG tunnel from the right namespace.
On the receive side, add ensure_cell_api_dnat() which installs three
iptables rules inside cell-wireguard on startup:
- PREROUTING DNAT: wg0:3000 → cell-api:3000 (Docker bridge IP)
- POSTROUTING MASQUERADE: so cell-api's reply routes back via wg0
- FORWARD ACCEPT: allow the wg0→eth0 forwarded traffic
Called from _apply_startup_enforcement() so rules survive container restarts.
Tests updated to mock subprocess.run instead of urllib.request.urlopen.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When PIC A updates service sharing permissions, it immediately pushes
the mirrored state to PIC B over the WireGuard tunnel so B's UI shows
what A is sharing with it in real time.
Architecture:
- Push model: update_permissions() → _push_permissions_to_remote() →
POST /api/cells/peer-sync/permissions on remote cell
- Auth: source IP must be inside a known cell's vpn_subnet (WireGuard
tunnel proves identity) + body's from_public_key must match stored key
- Mirror semantics: our inbound (what we share) → their outbound view
- Non-fatal: push failures set pending_push=True; replay_pending_pushes()
retries at startup so offline cells catch up on reconnect
- add_connection() also pushes initial state so remote sees permissions
immediately on the first connect
New fields on cell_links.json records (lazy-migrated):
remote_api_url, last_push_status, last_push_at, last_push_error,
pending_push, last_remote_update_at
New endpoint: POST /api/cells/peer-sync/permissions
30 new tests (1101 total).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The /api/wireguard/peers/statuses endpoint returns {pubkey: {online,...}}
not {peers: [{public_key,...}]}. The status mapping loop was always
producing an empty statusByKey, making every connected cell show Offline.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replace heuristic directory scan with explicit container detection:
/app/scripts path means container, script sibling to api/ means host.
Prevents accidental /app/data/api misdetection when that dir exists.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- CellNetwork.jsx CopyButton: use execCommand fallback when clipboard API
is unavailable (HTTP non-localhost context)
- Makefile reset-admin-password: run inside cell-api container via docker exec
so bcrypt and all deps are available without host installation
- docker-compose.yml: mount ./scripts:/app/scripts:ro in cell-api so the
reset script is accessible inside the container
- scripts/reset_admin_password.py: auto-detect API module path and data dir
so the script works in both host (api/ sibling) and container (/app) layouts
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Phase 1 — connection fixes:
- routing_manager.stop(): remove iptables -F / -t nat -F nuclear flush that
would wipe WireGuard MASQUERADE and all peer rules on any UI stop action
- wireguard_manager.add_cell_peer(): reject vpn_subnet that overlaps the local
WG network (routing blackhole — was the root cause of no handshake)
- wireguard_manager._syncconf(): pass Endpoint to 'wg set' so cell peers with
static endpoints are synced to the kernel (not just AllowedIPs)
Phase 2 — service-sharing permissions backend:
- firewall_manager: add _cell_tag(), clear_cell_rules(), apply_cell_rules(),
apply_all_cell_rules() — iptables FORWARD rules for cell-to-cell traffic
using 'pic-cell-<name>' comment tags, distinct from 'pic-peer-*'
- app.py startup enforcement: call apply_all_cell_rules(cell_links) so rules
survive API restarts
- cell_link_manager: permissions schema {inbound, outbound} per service;
lazy migration for existing entries; update_permissions(), get_permissions();
apply_cell_rules wired into add_connection/remove_connection
- routes/cells.py: GET /api/cells/services, GET+PUT /api/cells/<n>/permissions;
RuntimeError now returns 400 (not 500) from add_connection
Removed broken 'test' cell (subnet 10.0.0.0/24 collided with local WG network).
Second PIC must use a distinct subnet (e.g. 10.0.1.0/24) before reconnecting.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- API key store was out of sync with wg0.conf: get_keys() generated a random
phantom key instead of reading the actual WireGuard server key, so all peer
configs had the wrong PublicKey and could never handshake. Fixed by writing
correct raw-bytes key files at deploy time and adding _sync_wg_keys() to API
startup so the store auto-syncs from wg0.conf on every restart.
- apply_domain() fell back silently when zone file had no $ORIGIN directive;
now also parses the SOA MNAME as the old-domain fallback.
- apply_cell_name() only replaced the hostname if old_name matched literally
in the zone file; now auto-detects the actual hostname (non-service A record)
so a stale zone (mycell vs dev) is corrected on next config apply.
- DNS zone file corrected: SOA pic.ngo. admin.pic.ngo., mycell → dev.
- WireGuard UI: add 30s auto-poll for peer statuses; fix "peers currently
connected" counter to show online/total instead of total count.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>