Merge branch 'feature/install-and-baseline' into 'main'
fix: all 214 tests passing (from 36 failures) See merge request root/pic!1
This commit is contained in:
@@ -0,0 +1,76 @@
|
|||||||
|
# CLAUDE.md
|
||||||
|
|
||||||
|
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
|
||||||
|
|
||||||
|
## What This Project Is
|
||||||
|
|
||||||
|
**Personal Internet Cell (PIC)** — a self-hosted digital infrastructure platform. It manages DNS, DHCP, NTP, WireGuard VPN, email, calendar/contacts (CalDAV), file storage (WebDAV), reverse proxy (Caddy), a certificate authority, and container orchestration, all from a single API + React UI.
|
||||||
|
|
||||||
|
## Common Commands
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Full stack
|
||||||
|
make start # docker-compose up -d
|
||||||
|
make stop # docker-compose down
|
||||||
|
make restart # docker-compose restart
|
||||||
|
make status # docker status + API health
|
||||||
|
make logs # docker-compose logs -f
|
||||||
|
make build # rebuild api image
|
||||||
|
|
||||||
|
# Tests
|
||||||
|
make test # pytest tests/ api/tests/
|
||||||
|
make test-coverage # pytest with coverage HTML report
|
||||||
|
make test-api # pytest tests/test_api_endpoints.py
|
||||||
|
pytest tests/test_<module>.py # single test file
|
||||||
|
|
||||||
|
# Local dev (no Docker)
|
||||||
|
pip install -r api/requirements.txt
|
||||||
|
python api/app.py # Flask API on :3000
|
||||||
|
|
||||||
|
cd webui && npm install && npm run dev # React UI on :5173 (proxies API to :3000)
|
||||||
|
|
||||||
|
# WireGuard
|
||||||
|
make show-routes
|
||||||
|
make add-peer PEER_NAME=foo PEER_IP=10.0.0.5 PEER_KEY=<pubkey>
|
||||||
|
make list-peers
|
||||||
|
```
|
||||||
|
|
||||||
|
## Architecture
|
||||||
|
|
||||||
|
### Backend (`api/`)
|
||||||
|
|
||||||
|
All service managers inherit `BaseServiceManager` (`api/base_service_manager.py`). This enforces a consistent interface: `get_status()`, `get_config()`, `update_config()`, `validate_config()`, `test_connectivity()`, `get_logs()`, `restart_service()`. When adding or modifying a service manager, follow this pattern.
|
||||||
|
|
||||||
|
The `ServiceBus` (`api/service_bus.py`) is a pub/sub event system used for inter-service communication. Services publish events (e.g., `SERVICE_STARTED`, `CONFIG_CHANGED`, `PEER_CONNECTED`) and subscribe to events from dependencies. Dependency graph is declared in the bus — e.g., `wireguard` depends on `network`; `email` depends on `network` and `vault`.
|
||||||
|
|
||||||
|
`ConfigManager` (`api/config_manager.py`) is the single source of truth. Config lives in `/app/config/cell_config.json` (mapped from `config/api/`). All managers read/write through ConfigManager, which validates against per-service schemas and maintains automatic backups.
|
||||||
|
|
||||||
|
`LogManager` (`api/log_manager.py`) provides structured JSON logging with rotation (5 MB / 5 backups per service). Use it instead of `print()` or raw `logging`.
|
||||||
|
|
||||||
|
`app.py` (2000+ lines) contains all Flask REST endpoints, organized by service. It runs a background health-monitoring thread.
|
||||||
|
|
||||||
|
Service managers:
|
||||||
|
- `network_manager.py` — DNS (CoreDNS), DHCP (dnsmasq), NTP (chrony)
|
||||||
|
- `wireguard_manager.py` — VPN peer lifecycle, QR codes
|
||||||
|
- `peer_registry.py` — peer registration/lookup
|
||||||
|
- `routing_manager.py` — NAT, firewall rules, VPN gateway
|
||||||
|
- `vault_manager.py` — internal certificate authority
|
||||||
|
- `email_manager.py` — Postfix + Dovecot
|
||||||
|
- `calendar_manager.py` — Radicale CalDAV/CardDAV
|
||||||
|
- `file_manager.py` — WebDAV storage
|
||||||
|
- `container_manager.py` — Docker SDK wrappers
|
||||||
|
- `cell_manager.py` — top-level orchestration
|
||||||
|
|
||||||
|
### Frontend (`webui/`)
|
||||||
|
|
||||||
|
React 18 + Vite + Tailwind CSS. All API calls go through `src/services/api.js` (Axios). Vite dev server proxies `/api` to `localhost:3000`. Pages in `src/pages/`, shared components in `src/components/`.
|
||||||
|
|
||||||
|
### Infrastructure
|
||||||
|
|
||||||
|
`docker-compose.yml` defines 13 services on a custom bridge network `cell-network` (172.20.0.0/16). Cell IPs default to 10.0.0.0/24. Key ports: 53 (DNS), 80/443 (Caddy), 3000 (API), 5173/8081 (WebUI), 51820/udp (WireGuard), 25/587/993 (mail), 5232 (CalDAV), 8080 (WebDAV).
|
||||||
|
|
||||||
|
Config files for each service live under `config/<service>/`. Persistent data is under `data/` (git-ignored). WireGuard configs are also git-ignored.
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
Tests live in `tests/` (28 files). Use mocking (`pytest-mock`) for external system calls. Integration tests in `test_integration.py` require Docker services running.
|
||||||
@@ -1,22 +1,31 @@
|
|||||||
# Personal Internet Cell - Makefile
|
# Personal Internet Cell - Makefile
|
||||||
# Provides easy commands for managing the cell
|
# Provides easy commands for managing the cell
|
||||||
|
|
||||||
.PHONY: help start stop restart status logs clean setup init-peers
|
.PHONY: help start stop restart status logs clean setup init-peers build build-api build-webui
|
||||||
|
|
||||||
|
# Detect docker compose command (v2 plugin preferred, fallback to v1 standalone)
|
||||||
|
DC := $(shell docker compose version >/dev/null 2>&1 && echo "docker compose" || echo "docker-compose")
|
||||||
|
|
||||||
# Default target
|
# Default target
|
||||||
help:
|
help:
|
||||||
@echo "Personal Internet Cell - Management Commands"
|
@echo "Personal Internet Cell - Management Commands"
|
||||||
@echo ""
|
@echo ""
|
||||||
@echo "Setup:"
|
@echo "Setup (run once on a fresh host):"
|
||||||
@echo " setup - Initial setup and configuration"
|
@echo " setup - Create dirs, generate WireGuard keys, write configs, then: make start"
|
||||||
@echo " init-peers - Initialize peer configuration"
|
@echo " Env vars: CELL_NAME=mycell CELL_DOMAIN=cell VPN_ADDRESS=10.0.0.1/24 WG_PORT=51820"
|
||||||
|
@echo " init-peers - Reset peer list to empty"
|
||||||
@echo ""
|
@echo ""
|
||||||
@echo "Management:"
|
@echo "Management:"
|
||||||
@echo " start - Start all services"
|
@echo " start - Start all services (docker compose up -d)"
|
||||||
@echo " stop - Stop all services"
|
@echo " stop - Stop all services"
|
||||||
@echo " restart - Restart all services"
|
@echo " restart - Restart all services"
|
||||||
@echo " status - Show status of all services"
|
@echo " status - Show container status + API health"
|
||||||
@echo " logs - Show logs from all services"
|
@echo " logs - Follow logs from all services"
|
||||||
|
@echo ""
|
||||||
|
@echo "Build:"
|
||||||
|
@echo " build - Rebuild API image"
|
||||||
|
@echo " build-api - Rebuild API image (no cache)"
|
||||||
|
@echo " build-webui - Rebuild Web UI image (no cache)"
|
||||||
@echo ""
|
@echo ""
|
||||||
@echo "Individual Services:"
|
@echo "Individual Services:"
|
||||||
@echo " start-dns - Start DNS service only"
|
@echo " start-dns - Start DNS service only"
|
||||||
@@ -31,8 +40,11 @@ help:
|
|||||||
# Setup commands
|
# Setup commands
|
||||||
setup:
|
setup:
|
||||||
@echo "Setting up Personal Internet Cell..."
|
@echo "Setting up Personal Internet Cell..."
|
||||||
|
CELL_NAME=$(or $(CELL_NAME),mycell) \
|
||||||
|
CELL_DOMAIN=$(or $(CELL_DOMAIN),cell) \
|
||||||
|
VPN_ADDRESS=$(or $(VPN_ADDRESS),10.0.0.1/24) \
|
||||||
|
WG_PORT=$(or $(WG_PORT),51820) \
|
||||||
python3 scripts/setup_cell.py
|
python3 scripts/setup_cell.py
|
||||||
@echo "Setup complete!"
|
|
||||||
|
|
||||||
init-peers:
|
init-peers:
|
||||||
@echo "Initializing peer configuration..."
|
@echo "Initializing peer configuration..."
|
||||||
@@ -42,52 +54,52 @@ init-peers:
|
|||||||
# Management commands
|
# Management commands
|
||||||
start:
|
start:
|
||||||
@echo "Starting Personal Internet Cell..."
|
@echo "Starting Personal Internet Cell..."
|
||||||
docker-compose up -d
|
$(DC) up -d
|
||||||
@echo "Services started. Check status with 'make status'"
|
@echo "Services started. Check status with 'make status'"
|
||||||
|
|
||||||
stop:
|
stop:
|
||||||
@echo "Stopping Personal Internet Cell..."
|
@echo "Stopping Personal Internet Cell..."
|
||||||
docker-compose down
|
$(DC) down
|
||||||
@echo "Services stopped."
|
@echo "Services stopped."
|
||||||
|
|
||||||
restart:
|
restart:
|
||||||
@echo "Restarting Personal Internet Cell..."
|
@echo "Restarting Personal Internet Cell..."
|
||||||
docker-compose restart
|
$(DC) restart
|
||||||
@echo "Services restarted."
|
@echo "Services restarted."
|
||||||
|
|
||||||
status:
|
status:
|
||||||
@echo "Personal Internet Cell Status:"
|
@echo "Personal Internet Cell Status:"
|
||||||
@echo "================================"
|
@echo "================================"
|
||||||
docker-compose ps
|
$(DC) ps
|
||||||
@echo ""
|
@echo ""
|
||||||
@echo "API Status:"
|
@echo "API Status:"
|
||||||
@curl -s http://localhost:3000/health || echo "API not responding"
|
@curl -s http://localhost:3000/health || echo "API not responding"
|
||||||
|
|
||||||
logs:
|
logs:
|
||||||
@echo "Showing logs from all services..."
|
@echo "Showing logs from all services..."
|
||||||
docker-compose logs -f
|
$(DC) logs -f
|
||||||
|
|
||||||
# Individual service commands
|
# Individual service commands
|
||||||
start-dns:
|
start-dns:
|
||||||
@echo "Starting DNS service..."
|
@echo "Starting DNS service..."
|
||||||
docker-compose up -d dns
|
$(DC) up -d dns
|
||||||
|
|
||||||
start-api:
|
start-api:
|
||||||
@echo "Starting API service..."
|
@echo "Starting API service..."
|
||||||
docker-compose up -d api
|
$(DC) up -d api
|
||||||
|
|
||||||
start-wg:
|
start-wg:
|
||||||
@echo "Starting WireGuard service..."
|
@echo "Starting WireGuard service..."
|
||||||
docker-compose up -d wireguard
|
$(DC) up -d wireguard
|
||||||
|
|
||||||
start-webui:
|
start-webui:
|
||||||
@echo "Starting WebUi service..."
|
@echo "Starting WebUi service..."
|
||||||
docker-compose up -d webui
|
$(DC) up -d webui
|
||||||
|
|
||||||
# Maintenance commands
|
# Maintenance commands
|
||||||
clean:
|
clean:
|
||||||
@echo "Cleaning up containers and volumes..."
|
@echo "Cleaning up containers and volumes..."
|
||||||
docker-compose down -v
|
$(DC) down -v
|
||||||
docker system prune -f
|
docker system prune -f
|
||||||
@echo "Cleanup complete."
|
@echo "Cleanup complete."
|
||||||
|
|
||||||
@@ -107,11 +119,21 @@ restore:
|
|||||||
# Development commands
|
# Development commands
|
||||||
dev:
|
dev:
|
||||||
@echo "Starting development environment..."
|
@echo "Starting development environment..."
|
||||||
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
|
$(DC) -f docker-compose.yml -f docker-compose.dev.yml up -d
|
||||||
|
|
||||||
build:
|
build:
|
||||||
@echo "Building API service..."
|
@echo "Building API service..."
|
||||||
docker-compose build api
|
$(DC) build api
|
||||||
|
|
||||||
|
build-api:
|
||||||
|
@echo "Rebuilding API (no cache)..."
|
||||||
|
$(DC) build --no-cache api
|
||||||
|
$(DC) up -d api
|
||||||
|
|
||||||
|
build-webui:
|
||||||
|
@echo "Rebuilding Web UI (no cache)..."
|
||||||
|
$(DC) build --no-cache webui
|
||||||
|
$(DC) up -d webui
|
||||||
|
|
||||||
# Testing commands
|
# Testing commands
|
||||||
test:
|
test:
|
||||||
|
|||||||
@@ -61,45 +61,82 @@ The Personal Internet Cell is a **production-grade, self-hosted, decentralized d
|
|||||||
|
|
||||||
### Prerequisites
|
### Prerequisites
|
||||||
|
|
||||||
- **Docker & Docker Compose** (recommended)
|
- **Docker** with Compose plugin (`docker compose`) or standalone `docker-compose`
|
||||||
- **Python 3.10+** (for CLI and development)
|
- **WireGuard tools** (`wg` binary, for key generation during install)
|
||||||
- **2GB+ RAM, 10GB+ disk space**
|
- **2 GB+ RAM, 10 GB+ disk space**
|
||||||
- **Ports**: 53, 80, 443, 3000, 51820
|
- **Open ports**: 53 (DNS), 80/443 (HTTP/S), 3000 (API), 8081 (Web UI), 51820/udp (WireGuard)
|
||||||
|
|
||||||
### 1. Clone and Setup
|
### 1. Install
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
git clone https://github.com/yourusername/PersonalInternetCell.git
|
git clone <repo-url> pic
|
||||||
cd PersonalInternetCell
|
cd pic
|
||||||
|
|
||||||
# Start with Docker (Recommended)
|
# Default cell (name=mycell, domain=cell, VPN=10.0.0.1/24, port=51820)
|
||||||
docker-compose up --build
|
make setup && make start
|
||||||
|
|
||||||
# Or run locally
|
# Custom cell — required when installing a second cell on a different host
|
||||||
pip install -r api/requirements.txt
|
CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start
|
||||||
python api/app.py
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### 2. Access Services
|
`make setup` generates WireGuard keys, writes `config/wireguard/wg0.conf` and
|
||||||
|
`config/api/cell_config.json`, and creates all data directories.
|
||||||
|
`make start` brings up all 13 Docker containers.
|
||||||
|
|
||||||
- **API**: http://localhost:3000
|
### 2. Access
|
||||||
- **Health Check**: http://localhost:3000/health
|
|
||||||
- **Service Status**: http://localhost:3000/api/services/status
|
|
||||||
|
|
||||||
### 3. Use the Enhanced CLI
|
| Service | URL |
|
||||||
|
|---------|-----|
|
||||||
|
| Web UI | `http://<host-ip>:8081` |
|
||||||
|
| API | `http://<host-ip>:3000` |
|
||||||
|
| Health | `http://<host-ip>:3000/health` |
|
||||||
|
|
||||||
|
On a WireGuard client: `http://mycell.cell` (or whatever your cell name is).
|
||||||
|
|
||||||
|
### 3. Local dev (no Docker)
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Show cell status
|
pip install -r api/requirements.txt
|
||||||
python api/enhanced_cli.py --status
|
python api/app.py # API on :3000
|
||||||
|
|
||||||
# Interactive mode
|
cd webui && npm install && npm run dev # React UI on :5173 (proxies API to :3000)
|
||||||
python api/enhanced_cli.py --interactive
|
```
|
||||||
|
|
||||||
# Show all services
|
---
|
||||||
python api/enhanced_cli.py --services
|
|
||||||
|
|
||||||
# Configuration wizard
|
## 🔗 Connecting Two Cells (PIC Mesh)
|
||||||
python api/enhanced_cli.py --wizard network
|
|
||||||
|
Two PIC instances can form a mesh — full site-to-site WireGuard tunnels with
|
||||||
|
automatic DNS forwarding so each cell's services are reachable from the other.
|
||||||
|
|
||||||
|
### Install the second cell
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# On the second host (different VPN subnet; port 51820 is fine — different machine)
|
||||||
|
CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start
|
||||||
|
```
|
||||||
|
|
||||||
|
### Exchange invites (two pastes, two clicks)
|
||||||
|
|
||||||
|
1. On **Cell A** → open Web UI → **Cell Network** → copy the invite JSON.
|
||||||
|
2. On **Cell B** → **Cell Network** → paste into "Connect to Another Cell" → click **Connect**.
|
||||||
|
3. On **Cell B** → copy its invite JSON.
|
||||||
|
4. On **Cell A** → paste Cell B's invite → click **Connect**.
|
||||||
|
|
||||||
|
Both cells now have:
|
||||||
|
- A site-to-site WireGuard peer (AllowedIPs = remote cell's VPN subnet).
|
||||||
|
- A CoreDNS forwarding block so `*.pic1.cell` resolves across the tunnel.
|
||||||
|
|
||||||
|
The **Connected Cells** panel shows live handshake status (green = online).
|
||||||
|
|
||||||
|
### Same-LAN tip
|
||||||
|
|
||||||
|
If both cells share the same external IP (behind NAT), the auto-detected
|
||||||
|
endpoint in the invite will be the public IP. Replace it with the LAN IP
|
||||||
|
before clicking Connect so traffic stays local:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{ "endpoint": "192.168.31.50:51820", ... }
|
||||||
```
|
```
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|||||||
@@ -6,6 +6,8 @@ WORKDIR /app/api
|
|||||||
RUN apt-get update && apt-get install -y \
|
RUN apt-get update && apt-get install -y \
|
||||||
wireguard-tools \
|
wireguard-tools \
|
||||||
iptables \
|
iptables \
|
||||||
|
iproute2 \
|
||||||
|
util-linux \
|
||||||
curl \
|
curl \
|
||||||
ca-certificates \
|
ca-certificates \
|
||||||
gnupg \
|
gnupg \
|
||||||
|
|||||||
+480
-133
@@ -41,6 +41,8 @@ from container_manager import ContainerManager
|
|||||||
from config_manager import ConfigManager
|
from config_manager import ConfigManager
|
||||||
from service_bus import ServiceBus, EventType
|
from service_bus import ServiceBus, EventType
|
||||||
from log_manager import LogManager
|
from log_manager import LogManager
|
||||||
|
from cell_link_manager import CellLinkManager
|
||||||
|
import firewall_manager
|
||||||
|
|
||||||
# Context variable for request info
|
# Context variable for request info
|
||||||
request_context = contextvars.ContextVar('request_context', default={})
|
request_context = contextvars.ContextVar('request_context', default={})
|
||||||
@@ -105,7 +107,10 @@ CORS(app)
|
|||||||
app.config['DEVELOPMENT_MODE'] = True # Set to True for development, False for production
|
app.config['DEVELOPMENT_MODE'] = True # Set to True for development, False for production
|
||||||
|
|
||||||
# Initialize enhanced components
|
# Initialize enhanced components
|
||||||
config_manager = ConfigManager(config_file='./config/cell_config.json', data_dir='./data')
|
config_manager = ConfigManager(
|
||||||
|
config_file=os.path.join(os.environ.get('CONFIG_DIR', '/app/config'), 'cell_config.json'),
|
||||||
|
data_dir=os.environ.get('DATA_DIR', '/app/data'),
|
||||||
|
)
|
||||||
service_bus = ServiceBus()
|
service_bus = ServiceBus()
|
||||||
log_manager = LogManager(log_dir='./data/logs')
|
log_manager = LogManager(log_dir='./data/logs')
|
||||||
|
|
||||||
@@ -124,6 +129,16 @@ service_log_configs = {
|
|||||||
for service, config in service_log_configs.items():
|
for service, config in service_log_configs.items():
|
||||||
log_manager.add_service_logger(service, config)
|
log_manager.add_service_logger(service, config)
|
||||||
|
|
||||||
|
# Apply any persisted log level overrides
|
||||||
|
_levels_file = os.path.join(os.path.dirname(__file__), 'config', 'log_levels.json')
|
||||||
|
if os.path.exists(_levels_file):
|
||||||
|
try:
|
||||||
|
with open(_levels_file) as _f:
|
||||||
|
for _svc, _lvl in json.load(_f).items():
|
||||||
|
log_manager.set_service_level(_svc, _lvl)
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
# Start service bus
|
# Start service bus
|
||||||
service_bus.start()
|
service_bus.start()
|
||||||
|
|
||||||
@@ -153,17 +168,39 @@ def log_request(response):
|
|||||||
def clear_log_context(exc):
|
def clear_log_context(exc):
|
||||||
request_context.set({})
|
request_context.set({})
|
||||||
|
|
||||||
# Initialize managers with proper directories
|
# Initialize managers — paths configurable via env for testing
|
||||||
network_manager = NetworkManager(data_dir='/app/data', config_dir='/app/config')
|
_DATA_DIR = os.environ.get('DATA_DIR', '/app/data')
|
||||||
wireguard_manager = WireGuardManager(data_dir='/app/data', config_dir='/app/config')
|
_CONFIG_DIR = os.environ.get('CONFIG_DIR', '/app/config')
|
||||||
peer_registry = PeerRegistry(data_dir='/app/data', config_dir='/app/config')
|
|
||||||
email_manager = EmailManager(data_dir='/app/data', config_dir='/app/config')
|
network_manager = NetworkManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
calendar_manager = CalendarManager(data_dir='/app/data', config_dir='/app/config')
|
wireguard_manager = WireGuardManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
file_manager = FileManager(data_dir='/app/data', config_dir='/app/config')
|
peer_registry = PeerRegistry(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
routing_manager = RoutingManager(data_dir='/app/data', config_dir='/app/config')
|
email_manager = EmailManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
cell_manager = CellManager(data_dir='/app/data', config_dir='/app/config')
|
calendar_manager = CalendarManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
app.vault_manager = VaultManager(data_dir='/app/data', config_dir='/app/config')
|
file_manager = FileManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
container_manager = ContainerManager(data_dir='/app/data', config_dir='/app/config')
|
routing_manager = RoutingManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
|
cell_manager = CellManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
|
app.vault_manager = VaultManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
|
container_manager = ContainerManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
|
||||||
|
cell_link_manager = CellLinkManager(
|
||||||
|
data_dir=_DATA_DIR, config_dir=_CONFIG_DIR,
|
||||||
|
wireguard_manager=wireguard_manager, network_manager=network_manager,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Apply firewall + DNS rules from stored peer settings (survives API restarts)
|
||||||
|
def _apply_startup_enforcement():
|
||||||
|
try:
|
||||||
|
peers = peer_registry.list_peers()
|
||||||
|
firewall_manager.apply_all_peer_rules(peers)
|
||||||
|
firewall_manager.apply_all_dns_rules(peers, COREFILE_PATH)
|
||||||
|
logger.info(f"Applied enforcement rules for {len(peers)} peers on startup")
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"Startup enforcement failed (non-fatal): {e}")
|
||||||
|
|
||||||
|
COREFILE_PATH = '/app/config/dns/Corefile'
|
||||||
|
|
||||||
|
# Run in background so startup isn't blocked waiting on docker exec
|
||||||
|
threading.Thread(target=_apply_startup_enforcement, daemon=True).start()
|
||||||
|
|
||||||
# Register services with service bus
|
# Register services with service bus
|
||||||
service_bus.register_service('network', network_manager)
|
service_bus.register_service('network', network_manager)
|
||||||
@@ -205,36 +242,26 @@ def perform_health_check():
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
result[service_name] = {'error': str(e), 'status': 'offline'}
|
result[service_name] = {'error': str(e), 'status': 'offline'}
|
||||||
|
|
||||||
# Health alerting logic - improved to be more robust
|
# Health alerting logic — alert only when a service container is not running
|
||||||
global service_alert_counters
|
global service_alert_counters
|
||||||
for service_name in service_bus.list_services():
|
for service_name in service_bus.list_services():
|
||||||
if service_name in result:
|
if service_name in result:
|
||||||
status = result[service_name]
|
status = result[service_name]
|
||||||
healthy = True
|
healthy = True
|
||||||
|
|
||||||
# Improved health determination logic
|
|
||||||
if isinstance(status, dict):
|
if isinstance(status, dict):
|
||||||
# Check for explicit healthy field first
|
# Prefer status.running (container actually up) over healthy (connectivity tests)
|
||||||
if 'healthy' in status:
|
inner = status.get('status', {})
|
||||||
healthy = status['healthy']
|
if isinstance(inner, dict):
|
||||||
# Check for running status
|
if 'running' in inner:
|
||||||
|
healthy = inner['running']
|
||||||
|
elif 'status' in inner:
|
||||||
|
healthy = str(inner['status']).lower() in ('ok', 'healthy', 'online', 'active')
|
||||||
elif 'running' in status:
|
elif 'running' in status:
|
||||||
healthy = status['running']
|
healthy = status['running']
|
||||||
# Check for status field with various healthy values
|
|
||||||
elif 'status' in status:
|
|
||||||
status_value = status['status']
|
|
||||||
if isinstance(status_value, str):
|
|
||||||
healthy = status_value.lower() in ('ok', 'healthy', 'online', 'active')
|
|
||||||
else:
|
|
||||||
healthy = bool(status_value)
|
|
||||||
# Check for error field
|
|
||||||
elif 'error' in status:
|
elif 'error' in status:
|
||||||
healthy = False
|
healthy = False
|
||||||
# If no health indicators, assume healthy if service exists
|
|
||||||
else:
|
|
||||||
healthy = True
|
|
||||||
else:
|
else:
|
||||||
# If status is not a dict, assume it's a boolean
|
|
||||||
healthy = bool(status)
|
healthy = bool(status)
|
||||||
|
|
||||||
# Only count as unhealthy if we're certain it's down
|
# Only count as unhealthy if we're certain it's down
|
||||||
@@ -337,9 +364,10 @@ def get_cell_status():
|
|||||||
current_time = time.time()
|
current_time = time.time()
|
||||||
uptime_seconds = int(current_time - API_START_TIME)
|
uptime_seconds = int(current_time - API_START_TIME)
|
||||||
|
|
||||||
|
identity = config_manager.configs.get('_identity', {})
|
||||||
return jsonify({
|
return jsonify({
|
||||||
"cell_name": "personal-internet-cell",
|
"cell_name": identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell')),
|
||||||
"domain": "cell.local",
|
"domain": identity.get('domain', os.environ.get('CELL_DOMAIN', 'cell')),
|
||||||
"uptime": uptime_seconds,
|
"uptime": uptime_seconds,
|
||||||
"peers_count": len(peers),
|
"peers_count": len(peers),
|
||||||
"services": services_status,
|
"services": services_status,
|
||||||
@@ -353,7 +381,16 @@ def get_cell_status():
|
|||||||
def get_config():
|
def get_config():
|
||||||
"""Get cell configuration."""
|
"""Get cell configuration."""
|
||||||
try:
|
try:
|
||||||
return jsonify(config_manager.get_all_configs())
|
service_configs = config_manager.get_all_configs()
|
||||||
|
identity = service_configs.pop('_identity', {})
|
||||||
|
config = {
|
||||||
|
'cell_name': identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell')),
|
||||||
|
'domain': identity.get('domain', os.environ.get('CELL_DOMAIN', 'cell')),
|
||||||
|
'ip_range': identity.get('ip_range', os.environ.get('CELL_IP_RANGE', '172.20.0.0/16')),
|
||||||
|
'wireguard_port': identity.get('wireguard_port', int(os.environ.get('WG_PORT', '51820'))),
|
||||||
|
}
|
||||||
|
config['service_configs'] = service_configs
|
||||||
|
return jsonify(config)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting config: {e}")
|
logger.error(f"Error getting config: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
@@ -366,19 +403,75 @@ def update_config():
|
|||||||
if data is None:
|
if data is None:
|
||||||
return jsonify({"error": "No data provided"}), 400
|
return jsonify({"error": "No data provided"}), 400
|
||||||
|
|
||||||
# Update configuration using config manager
|
# Handle identity fields (cell_name, domain, ip_range, wireguard_port)
|
||||||
|
identity_keys = {'cell_name', 'domain', 'ip_range', 'wireguard_port'}
|
||||||
|
identity_updates = {k: v for k, v in data.items() if k in identity_keys}
|
||||||
|
# Capture old identity BEFORE saving, for apply_cell_name comparison
|
||||||
|
old_identity = dict(config_manager.configs.get('_identity', {}))
|
||||||
|
if identity_updates:
|
||||||
|
stored = config_manager.configs.get('_identity', {})
|
||||||
|
stored.update(identity_updates)
|
||||||
|
config_manager.configs['_identity'] = stored
|
||||||
|
config_manager._save_all_configs()
|
||||||
|
|
||||||
|
# Map service names to their manager instances
|
||||||
|
_svc_managers = {
|
||||||
|
'network': network_manager,
|
||||||
|
'wireguard': wireguard_manager,
|
||||||
|
'email': email_manager,
|
||||||
|
'calendar': calendar_manager,
|
||||||
|
'files': file_manager,
|
||||||
|
'routing': routing_manager,
|
||||||
|
'vault': app.vault_manager,
|
||||||
|
}
|
||||||
|
|
||||||
|
all_restarted = []
|
||||||
|
all_warnings = []
|
||||||
|
|
||||||
|
# Update service configurations: persist + apply to real config files
|
||||||
for service, config in data.items():
|
for service, config in data.items():
|
||||||
if service in config_manager.service_schemas:
|
if service in config_manager.service_schemas:
|
||||||
success = config_manager.update_service_config(service, config)
|
config_manager.update_service_config(service, config)
|
||||||
if success:
|
mgr = _svc_managers.get(service)
|
||||||
# Publish config change event
|
if mgr:
|
||||||
service_bus.publish_event(EventType.CONFIG_CHANGED, service, {
|
mgr.update_config(config)
|
||||||
'service': service,
|
result = mgr.apply_config(config)
|
||||||
'config': config
|
all_restarted.extend(result.get('restarted', []))
|
||||||
})
|
all_warnings.extend(result.get('warnings', []))
|
||||||
|
service_bus.publish_event(EventType.CONFIG_CHANGED, service, {
|
||||||
|
'service': service,
|
||||||
|
'config': config
|
||||||
|
})
|
||||||
|
# VPN port or subnet change → all peer client configs are stale
|
||||||
|
if service == 'wireguard' and ('port' in config or 'address' in config):
|
||||||
|
for p in peer_registry.list_peers():
|
||||||
|
peer_registry.update_peer(p['peer'], {'config_needs_reinstall': True})
|
||||||
|
n = len(peer_registry.list_peers())
|
||||||
|
if n:
|
||||||
|
all_warnings.append(f'WireGuard endpoint changed — {n} peer(s) must reinstall VPN config')
|
||||||
|
|
||||||
logger.info(f"Updated config: {data}")
|
# Apply cell identity domain to network and email services
|
||||||
return jsonify({"message": "Configuration updated successfully"})
|
if identity_updates.get('domain'):
|
||||||
|
domain = identity_updates['domain']
|
||||||
|
net_result = network_manager.apply_domain(domain)
|
||||||
|
all_restarted.extend(net_result.get('restarted', []))
|
||||||
|
all_warnings.extend(net_result.get('warnings', []))
|
||||||
|
|
||||||
|
# Apply cell name change to DNS hostname record
|
||||||
|
if identity_updates.get('cell_name'):
|
||||||
|
old_name = old_identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell'))
|
||||||
|
new_name = identity_updates['cell_name']
|
||||||
|
if old_name != new_name:
|
||||||
|
cn_result = network_manager.apply_cell_name(old_name, new_name)
|
||||||
|
all_restarted.extend(cn_result.get('restarted', []))
|
||||||
|
all_warnings.extend(cn_result.get('warnings', []))
|
||||||
|
|
||||||
|
logger.info(f"Updated config, restarted: {all_restarted}")
|
||||||
|
return jsonify({
|
||||||
|
"message": "Configuration updated and applied",
|
||||||
|
"restarted": all_restarted,
|
||||||
|
"warnings": all_warnings,
|
||||||
|
})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error updating config: {e}")
|
logger.error(f"Error updating config: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
@@ -456,6 +549,19 @@ def import_config():
|
|||||||
logger.error(f"Error importing config: {e}")
|
logger.error(f"Error importing config: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/config/backups/<backup_id>', methods=['DELETE'])
|
||||||
|
def delete_config_backup(backup_id):
|
||||||
|
"""Delete a configuration backup."""
|
||||||
|
try:
|
||||||
|
success = config_manager.delete_backup(backup_id)
|
||||||
|
if success:
|
||||||
|
return jsonify({"message": f"Backup {backup_id} deleted"})
|
||||||
|
else:
|
||||||
|
return jsonify({"error": f"Failed to delete backup {backup_id}"}), 500
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error deleting backup: {e}")
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
# Service bus endpoints
|
# Service bus endpoints
|
||||||
@app.route('/api/services/bus/status', methods=['GET'])
|
@app.route('/api/services/bus/status', methods=['GET'])
|
||||||
def get_service_bus_status():
|
def get_service_bus_status():
|
||||||
@@ -592,17 +698,59 @@ def get_log_statistics():
|
|||||||
|
|
||||||
@app.route('/api/logs/rotate', methods=['POST'])
|
@app.route('/api/logs/rotate', methods=['POST'])
|
||||||
def rotate_logs():
|
def rotate_logs():
|
||||||
"""Manually rotate logs."""
|
"""Manually rotate an API service log file."""
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True) or {}
|
data = request.get_json(silent=True) or {}
|
||||||
service = data.get('service')
|
service = data.get('service') # None = rotate all
|
||||||
|
|
||||||
log_manager.rotate_logs(service)
|
log_manager.rotate_logs(service)
|
||||||
return jsonify({"message": "Logs rotated successfully"})
|
return jsonify({"message": "Logs rotated successfully"})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error rotating logs: {e}")
|
logger.error(f"Error rotating logs: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/logs/files', methods=['GET'])
|
||||||
|
def get_log_file_infos():
|
||||||
|
"""List service log files with sizes."""
|
||||||
|
try:
|
||||||
|
return jsonify(log_manager.get_all_log_file_infos())
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error listing log files: {e}")
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/logs/verbosity', methods=['GET'])
|
||||||
|
def get_log_verbosity():
|
||||||
|
"""Return current per-service log levels."""
|
||||||
|
try:
|
||||||
|
return jsonify(log_manager.get_service_levels())
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error getting log verbosity: {e}")
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/logs/verbosity', methods=['PUT'])
|
||||||
|
def set_log_verbosity():
|
||||||
|
"""Update log levels for one or all services. Body: {service: level} map."""
|
||||||
|
try:
|
||||||
|
data = request.get_json(silent=True) or {}
|
||||||
|
for service, level in data.items():
|
||||||
|
log_manager.set_service_level(service, level)
|
||||||
|
# Persist to config so levels survive API restarts
|
||||||
|
levels_file = os.path.join(os.path.dirname(__file__), 'config', 'log_levels.json')
|
||||||
|
os.makedirs(os.path.dirname(levels_file), exist_ok=True)
|
||||||
|
current = {}
|
||||||
|
if os.path.exists(levels_file):
|
||||||
|
try:
|
||||||
|
with open(levels_file) as f:
|
||||||
|
current = json.load(f)
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
current.update(data)
|
||||||
|
with open(levels_file, 'w') as f:
|
||||||
|
json.dump(current, f, indent=2)
|
||||||
|
return jsonify({"message": "Log levels updated", "levels": log_manager.get_service_levels()})
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error setting log verbosity: {e}")
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
# Network Services API
|
# Network Services API
|
||||||
@app.route('/api/dns/records', methods=['GET'])
|
@app.route('/api/dns/records', methods=['GET'])
|
||||||
def get_dns_records():
|
def get_dns_records():
|
||||||
@@ -718,8 +866,8 @@ def test_network():
|
|||||||
def get_wireguard_keys():
|
def get_wireguard_keys():
|
||||||
"""Get WireGuard keys."""
|
"""Get WireGuard keys."""
|
||||||
try:
|
try:
|
||||||
# For now, return empty keys - this would need to be implemented
|
result = wireguard_manager.get_keys()
|
||||||
return jsonify({"error": "Not implemented yet"}), 501
|
return jsonify(result)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting WireGuard keys: {e}")
|
logger.error(f"Error getting WireGuard keys: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
@@ -728,10 +876,11 @@ def get_wireguard_keys():
|
|||||||
def generate_peer_keys():
|
def generate_peer_keys():
|
||||||
"""Generate peer keys."""
|
"""Generate peer keys."""
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True)
|
data = request.get_json(silent=True) or {}
|
||||||
if data is None or 'peer_name' not in data:
|
name = data.get('name') or data.get('peer_name')
|
||||||
return jsonify({"error": "Missing peer_name"}), 400
|
if not name:
|
||||||
result = wireguard_manager.generate_peer_keys(data['peer_name'])
|
return jsonify({"error": "Missing peer name"}), 400
|
||||||
|
result = wireguard_manager.generate_peer_keys(name)
|
||||||
return jsonify(result)
|
return jsonify(result)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error generating peer keys: {e}")
|
logger.error(f"Error generating peer keys: {e}")
|
||||||
@@ -741,8 +890,8 @@ def generate_peer_keys():
|
|||||||
def get_wireguard_config():
|
def get_wireguard_config():
|
||||||
"""Get WireGuard configuration."""
|
"""Get WireGuard configuration."""
|
||||||
try:
|
try:
|
||||||
# For now, return empty config - this would need to be implemented
|
result = wireguard_manager.get_config()
|
||||||
return jsonify({"error": "Not implemented yet"}), 501
|
return jsonify(result)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting WireGuard config: {e}")
|
logger.error(f"Error getting WireGuard config: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
@@ -751,7 +900,7 @@ def get_wireguard_config():
|
|||||||
def get_wireguard_peers():
|
def get_wireguard_peers():
|
||||||
"""Get WireGuard peers."""
|
"""Get WireGuard peers."""
|
||||||
try:
|
try:
|
||||||
peers = wireguard_manager.get_wireguard_peers()
|
peers = wireguard_manager.get_peers()
|
||||||
return jsonify(peers)
|
return jsonify(peers)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting WireGuard peers: {e}")
|
logger.error(f"Error getting WireGuard peers: {e}")
|
||||||
@@ -761,20 +910,12 @@ def get_wireguard_peers():
|
|||||||
def add_wireguard_peer():
|
def add_wireguard_peer():
|
||||||
"""Add WireGuard peer."""
|
"""Add WireGuard peer."""
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True)
|
data = request.get_json(silent=True) or {}
|
||||||
if data is None:
|
result = wireguard_manager.add_peer(
|
||||||
return jsonify({"error": "No data provided"}), 400
|
name=data.get('name', ''),
|
||||||
|
public_key=data.get('public_key', ''),
|
||||||
required_fields = ['name', 'public_key', 'allowed_ips']
|
endpoint_ip=data.get('endpoint', data.get('endpoint_ip', '')),
|
||||||
for field in required_fields:
|
allowed_ips=data.get('allowed_ips', ''),
|
||||||
if field not in data:
|
|
||||||
return jsonify({"error": f"Missing required field: {field}"}), 400
|
|
||||||
|
|
||||||
result = wireguard_manager.add_wireguard_peer(
|
|
||||||
name=data['name'],
|
|
||||||
public_key=data['public_key'],
|
|
||||||
allowed_ips=data['allowed_ips'],
|
|
||||||
endpoint=data.get('endpoint', ''),
|
|
||||||
persistent_keepalive=data.get('persistent_keepalive', 25)
|
persistent_keepalive=data.get('persistent_keepalive', 25)
|
||||||
)
|
)
|
||||||
return jsonify({"success": result})
|
return jsonify({"success": result})
|
||||||
@@ -786,11 +927,9 @@ def add_wireguard_peer():
|
|||||||
def remove_wireguard_peer():
|
def remove_wireguard_peer():
|
||||||
"""Remove WireGuard peer."""
|
"""Remove WireGuard peer."""
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True)
|
data = request.get_json(silent=True) or {}
|
||||||
if data is None or 'name' not in data:
|
public_key = data.get('public_key') or data.get('name', '')
|
||||||
return jsonify({"error": "Missing peer name"}), 400
|
result = wireguard_manager.remove_peer(public_key)
|
||||||
|
|
||||||
result = wireguard_manager.remove_wireguard_peer(data['name'])
|
|
||||||
return jsonify({"success": result})
|
return jsonify({"success": result})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error removing WireGuard peer: {e}")
|
logger.error(f"Error removing WireGuard peer: {e}")
|
||||||
@@ -822,31 +961,40 @@ def test_wireguard_connectivity():
|
|||||||
def update_peer_ip():
|
def update_peer_ip():
|
||||||
"""Update peer IP."""
|
"""Update peer IP."""
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True)
|
data = request.get_json(silent=True) or {}
|
||||||
if data is None or 'name' not in data or 'ip' not in data:
|
result = wireguard_manager.update_peer_ip(
|
||||||
return jsonify({"error": "Missing peer name or IP"}), 400
|
data.get('public_key', data.get('peer', '')),
|
||||||
|
data.get('ip', '')
|
||||||
# For now, return not implemented - this would need to be implemented
|
)
|
||||||
return jsonify({"error": "Not implemented yet"}), 501
|
return jsonify({"success": result})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error updating peer IP: {e}")
|
logger.error(f"Error updating peer IP: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
@app.route('/api/wireguard/peers/status', methods=['POST'])
|
@app.route('/api/wireguard/peers/status', methods=['POST'])
|
||||||
def get_peer_status():
|
def get_peer_status():
|
||||||
"""Get WireGuard peer status."""
|
"""Get live WireGuard status for a single peer."""
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True)
|
data = request.get_json(silent=True) or {}
|
||||||
if data is None or 'public_key' not in data:
|
public_key = data.get('public_key', '')
|
||||||
return jsonify({"error": "Missing public key"}), 400
|
if not public_key:
|
||||||
|
return jsonify({"error": "Missing public_key"}), 400
|
||||||
public_key = data['public_key']
|
|
||||||
status = wireguard_manager.get_peer_status(public_key)
|
status = wireguard_manager.get_peer_status(public_key)
|
||||||
return jsonify(status)
|
return jsonify(status)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting peer status: {e}")
|
logger.error(f"Error getting peer status: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/wireguard/peers/statuses', methods=['GET'])
|
||||||
|
def get_all_peer_statuses():
|
||||||
|
"""Get live WireGuard status for all peers (keyed by public_key)."""
|
||||||
|
try:
|
||||||
|
statuses = wireguard_manager.get_all_peer_statuses()
|
||||||
|
return jsonify(statuses)
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error getting peer statuses: {e}")
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
@app.route('/api/wireguard/network/setup', methods=['POST'])
|
@app.route('/api/wireguard/network/setup', methods=['POST'])
|
||||||
def setup_network():
|
def setup_network():
|
||||||
"""Setup network configuration for internet access."""
|
"""Setup network configuration for internet access."""
|
||||||
@@ -873,37 +1021,38 @@ def get_network_status():
|
|||||||
@app.route('/api/wireguard/peers/config', methods=['POST'])
|
@app.route('/api/wireguard/peers/config', methods=['POST'])
|
||||||
def get_peer_config():
|
def get_peer_config():
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True)
|
data = request.get_json(silent=True) or {}
|
||||||
if data is None or 'name' not in data:
|
peer_name = data.get('name', data.get('peer', ''))
|
||||||
return jsonify({"error": "Missing peer name"}), 400
|
|
||||||
|
|
||||||
peer_name = data['name']
|
# Look up peer details from registry if not supplied
|
||||||
|
peer_ip = data.get('ip', '')
|
||||||
|
peer_private_key = data.get('private_key', '')
|
||||||
|
registered = peer_registry.get_peer(peer_name) if peer_name else {}
|
||||||
|
if peer_name and (not peer_ip or not peer_private_key):
|
||||||
|
if registered:
|
||||||
|
peer_ip = peer_ip or registered.get('ip', '')
|
||||||
|
peer_private_key = peer_private_key or registered.get('private_key', '')
|
||||||
|
|
||||||
# Get peer from peer registry
|
# Use real external endpoint if not supplied
|
||||||
peer = peer_registry.get_peer(peer_name)
|
server_endpoint = data.get('server_endpoint', '')
|
||||||
if not peer:
|
if not server_endpoint:
|
||||||
return jsonify({"config": "Peer not found"})
|
srv = wireguard_manager.get_server_config()
|
||||||
|
server_endpoint = srv.get('endpoint') or '<SERVER_IP>'
|
||||||
|
|
||||||
# Get server configuration
|
# Determine AllowedIPs: explicit > peer's stored internet_access > default full tunnel
|
||||||
server_config = wireguard_manager.get_server_config()
|
allowed_ips = data.get('allowed_ips') or None
|
||||||
|
if not allowed_ips and registered:
|
||||||
|
internet_access = registered.get('internet_access', True)
|
||||||
|
allowed_ips = wireguard_manager.FULL_TUNNEL_IPS if internet_access else wireguard_manager.get_split_tunnel_ips()
|
||||||
|
|
||||||
# Check if IP already has a subnet mask, if not add /32
|
result = wireguard_manager.get_peer_config(
|
||||||
peer_ip = peer.get('ip', '10.0.0.2')
|
peer_name=peer_name,
|
||||||
peer_address = peer_ip if '/' in peer_ip else f"{peer_ip}/32"
|
peer_ip=peer_ip,
|
||||||
|
peer_private_key=peer_private_key,
|
||||||
# Generate client configuration using peer registry data
|
server_endpoint=server_endpoint,
|
||||||
config = f"""[Interface]
|
allowed_ips=allowed_ips,
|
||||||
PrivateKey = {peer.get('private_key', 'YOUR_PRIVATE_KEY_HERE')}
|
)
|
||||||
Address = {peer_address}
|
return jsonify({"config": result})
|
||||||
DNS = 8.8.8.8, 1.1.1.1
|
|
||||||
|
|
||||||
[Peer]
|
|
||||||
PublicKey = {server_config.get('public_key', 'SERVER_PUBLIC_KEY_PLACEHOLDER')}
|
|
||||||
Endpoint = {server_config.get('endpoint', 'YOUR_SERVER_IP:51820')}
|
|
||||||
AllowedIPs = {peer.get('allowed_ips', '0.0.0.0/0')}
|
|
||||||
PersistentKeepalive = {peer.get('persistent_keepalive', 25)}"""
|
|
||||||
|
|
||||||
return jsonify({"config": config})
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting peer config: {e}")
|
logger.error(f"Error getting peer config: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
@@ -911,13 +1060,109 @@ PersistentKeepalive = {peer.get('persistent_keepalive', 25)}"""
|
|||||||
@app.route('/api/wireguard/server-config', methods=['GET'])
|
@app.route('/api/wireguard/server-config', methods=['GET'])
|
||||||
def get_server_config():
|
def get_server_config():
|
||||||
try:
|
try:
|
||||||
# Get server configuration from WireGuard manager
|
|
||||||
config = wireguard_manager.get_server_config()
|
config = wireguard_manager.get_server_config()
|
||||||
return jsonify(config)
|
return jsonify(config)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting server config: {e}")
|
logger.error(f"Error getting server config: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/wireguard/refresh-ip', methods=['POST'])
|
||||||
|
def refresh_external_ip():
|
||||||
|
try:
|
||||||
|
ip = wireguard_manager.get_external_ip(force_refresh=True)
|
||||||
|
port = wireguard_manager._get_configured_port()
|
||||||
|
return jsonify({
|
||||||
|
'external_ip': ip,
|
||||||
|
'port': port,
|
||||||
|
'endpoint': f'{ip}:{port}' if ip else None,
|
||||||
|
})
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error refreshing external IP: {e}")
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/wireguard/apply-enforcement', methods=['POST'])
|
||||||
|
def apply_wireguard_enforcement():
|
||||||
|
"""Re-apply per-peer iptables and DNS enforcement rules (call after WireGuard restart)."""
|
||||||
|
try:
|
||||||
|
peers = peer_registry.list_peers()
|
||||||
|
firewall_manager.apply_all_peer_rules(peers)
|
||||||
|
firewall_manager.apply_all_dns_rules(peers, COREFILE_PATH)
|
||||||
|
return jsonify({'ok': True, 'peers': len(peers)})
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/wireguard/check-port', methods=['POST'])
|
||||||
|
def check_wireguard_port():
|
||||||
|
try:
|
||||||
|
port_open = wireguard_manager.check_port_open()
|
||||||
|
return jsonify({'port_open': port_open, 'port': wireguard_manager._get_configured_port()})
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
# ── Cell-to-cell connections ─────────────────────────────────────────────────
|
||||||
|
|
||||||
|
@app.route('/api/cells/invite', methods=['GET'])
|
||||||
|
def get_cell_invite():
|
||||||
|
"""Generate an invite package for this cell."""
|
||||||
|
try:
|
||||||
|
identity = config_manager.configs.get('_identity', {})
|
||||||
|
cell_name = identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell'))
|
||||||
|
domain = identity.get('domain', os.environ.get('CELL_DOMAIN', 'cell'))
|
||||||
|
invite = cell_link_manager.generate_invite(cell_name, domain)
|
||||||
|
return jsonify(invite)
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error generating cell invite: {e}")
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/cells', methods=['GET'])
|
||||||
|
def list_cell_connections():
|
||||||
|
"""List all connected cells."""
|
||||||
|
try:
|
||||||
|
return jsonify(cell_link_manager.list_connections())
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/cells', methods=['POST'])
|
||||||
|
def add_cell_connection():
|
||||||
|
"""Connect to a remote cell using their invite package."""
|
||||||
|
try:
|
||||||
|
data = request.get_json(silent=True)
|
||||||
|
if not data:
|
||||||
|
return jsonify({'error': 'No data provided'}), 400
|
||||||
|
for field in ('cell_name', 'public_key', 'vpn_subnet', 'dns_ip', 'domain'):
|
||||||
|
if field not in data:
|
||||||
|
return jsonify({'error': f'Missing field: {field}'}), 400
|
||||||
|
link = cell_link_manager.add_connection(data)
|
||||||
|
return jsonify({'message': f"Connected to cell '{data['cell_name']}'", 'link': link}), 201
|
||||||
|
except ValueError as e:
|
||||||
|
return jsonify({'error': str(e)}), 400
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error adding cell connection: {e}")
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/cells/<cell_name>', methods=['DELETE'])
|
||||||
|
def remove_cell_connection(cell_name):
|
||||||
|
"""Disconnect from a remote cell."""
|
||||||
|
try:
|
||||||
|
cell_link_manager.remove_connection(cell_name)
|
||||||
|
return jsonify({'message': f"Cell '{cell_name}' disconnected"})
|
||||||
|
except ValueError as e:
|
||||||
|
return jsonify({'error': str(e)}), 404
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error removing cell connection: {e}")
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/cells/<cell_name>/status', methods=['GET'])
|
||||||
|
def get_cell_connection_status(cell_name):
|
||||||
|
"""Get live status for a connected cell."""
|
||||||
|
try:
|
||||||
|
status = cell_link_manager.get_connection_status(cell_name)
|
||||||
|
return jsonify(status)
|
||||||
|
except ValueError as e:
|
||||||
|
return jsonify({'error': str(e)}), 404
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
# Peer Registry API
|
# Peer Registry API
|
||||||
@app.route('/api/peers', methods=['GET'])
|
@app.route('/api/peers', methods=['GET'])
|
||||||
def get_peers():
|
def get_peers():
|
||||||
@@ -929,6 +1174,22 @@ def get_peers():
|
|||||||
logger.error(f"Error getting peers: {e}")
|
logger.error(f"Error getting peers: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
def _next_peer_ip() -> str:
|
||||||
|
"""Auto-assign the next free host address from the configured VPN subnet."""
|
||||||
|
import ipaddress
|
||||||
|
server_addr = wireguard_manager._get_configured_address() # e.g. '10.0.0.1/24'
|
||||||
|
network = ipaddress.ip_network(server_addr, strict=False)
|
||||||
|
server_ip = str(ipaddress.ip_interface(server_addr).ip)
|
||||||
|
used = {p.get('ip', '').split('/')[0] for p in peer_registry.list_peers()}
|
||||||
|
for host in network.hosts():
|
||||||
|
ip = str(host)
|
||||||
|
if ip == server_ip:
|
||||||
|
continue
|
||||||
|
if ip not in used:
|
||||||
|
return ip
|
||||||
|
raise ValueError(f'No free IPs left in {network}')
|
||||||
|
|
||||||
|
|
||||||
@app.route('/api/peers', methods=['POST'])
|
@app.route('/api/peers', methods=['POST'])
|
||||||
def add_peer():
|
def add_peer():
|
||||||
"""Add a peer."""
|
"""Add a peer."""
|
||||||
@@ -937,28 +1198,37 @@ def add_peer():
|
|||||||
if data is None:
|
if data is None:
|
||||||
return jsonify({"error": "No data provided"}), 400
|
return jsonify({"error": "No data provided"}), 400
|
||||||
|
|
||||||
# Validate required fields
|
# Validate required fields (ip is optional — auto-assigned if omitted)
|
||||||
required_fields = ['name', 'ip', 'public_key']
|
required_fields = ['name', 'public_key']
|
||||||
for field in required_fields:
|
for field in required_fields:
|
||||||
if field not in data:
|
if field not in data:
|
||||||
return jsonify({"error": f"Missing required field: {field}"}), 400
|
return jsonify({"error": f"Missing required field: {field}"}), 400
|
||||||
|
|
||||||
|
assigned_ip = data.get('ip') or _next_peer_ip()
|
||||||
|
|
||||||
# Add peer to registry with all provided fields
|
# Add peer to registry with all provided fields
|
||||||
peer_info = {
|
peer_info = {
|
||||||
'peer': data['name'],
|
'peer': data['name'],
|
||||||
'ip': data['ip'],
|
'ip': assigned_ip,
|
||||||
'public_key': data['public_key'],
|
'public_key': data['public_key'],
|
||||||
'private_key': data.get('private_key'),
|
'private_key': data.get('private_key'),
|
||||||
'server_public_key': data.get('server_public_key'),
|
'server_public_key': data.get('server_public_key'),
|
||||||
'server_endpoint': data.get('server_endpoint'),
|
'server_endpoint': data.get('server_endpoint'),
|
||||||
'allowed_ips': data.get('allowed_ips'),
|
'allowed_ips': data.get('allowed_ips'),
|
||||||
'persistent_keepalive': data.get('persistent_keepalive'),
|
'persistent_keepalive': data.get('persistent_keepalive'),
|
||||||
'description': data.get('description')
|
'description': data.get('description'),
|
||||||
|
'internet_access': data.get('internet_access', True),
|
||||||
|
'service_access': data.get('service_access', ['calendar', 'files', 'mail', 'webdav']),
|
||||||
|
'peer_access': data.get('peer_access', True),
|
||||||
|
'config_needs_reinstall': False,
|
||||||
}
|
}
|
||||||
|
|
||||||
success = peer_registry.add_peer(peer_info)
|
success = peer_registry.add_peer(peer_info)
|
||||||
if success:
|
if success:
|
||||||
return jsonify({"message": f"Peer {data['name']} added successfully"}), 201
|
# Apply server-side enforcement immediately
|
||||||
|
firewall_manager.apply_peer_rules(peer_info['ip'], peer_info)
|
||||||
|
firewall_manager.apply_all_dns_rules(peer_registry.list_peers(), COREFILE_PATH)
|
||||||
|
return jsonify({"message": f"Peer {data['name']} added successfully", "ip": assigned_ip}), 201
|
||||||
else:
|
else:
|
||||||
return jsonify({"error": f"Peer {data['name']} already exists"}), 400
|
return jsonify({"error": f"Peer {data['name']} already exists"}), 400
|
||||||
|
|
||||||
@@ -966,6 +1236,53 @@ def add_peer():
|
|||||||
logger.error(f"Error adding peer: {e}")
|
logger.error(f"Error adding peer: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
|
||||||
|
@app.route('/api/peers/<peer_name>', methods=['PUT'])
|
||||||
|
def update_peer(peer_name):
|
||||||
|
"""Update peer settings. Marks config_needs_reinstall if VPN config changed."""
|
||||||
|
try:
|
||||||
|
data = request.get_json(silent=True) or {}
|
||||||
|
existing = peer_registry.get_peer(peer_name)
|
||||||
|
if not existing:
|
||||||
|
return jsonify({"error": "Peer not found"}), 404
|
||||||
|
|
||||||
|
# Detect changes that require client to reinstall tunnel config
|
||||||
|
config_changed = (
|
||||||
|
('internet_access' in data and data['internet_access'] != existing.get('internet_access', True)) or
|
||||||
|
('ip' in data and data['ip'] != existing.get('ip')) or
|
||||||
|
('persistent_keepalive' in data and data['persistent_keepalive'] != existing.get('persistent_keepalive'))
|
||||||
|
)
|
||||||
|
|
||||||
|
updates = {k: v for k, v in data.items()}
|
||||||
|
if config_changed:
|
||||||
|
updates['config_needs_reinstall'] = True
|
||||||
|
|
||||||
|
success = peer_registry.update_peer(peer_name, updates)
|
||||||
|
if success:
|
||||||
|
# Re-apply server-side enforcement with updated settings
|
||||||
|
updated_peer = peer_registry.get_peer(peer_name)
|
||||||
|
if updated_peer:
|
||||||
|
firewall_manager.apply_peer_rules(updated_peer['ip'], updated_peer)
|
||||||
|
firewall_manager.apply_all_dns_rules(peer_registry.list_peers(), COREFILE_PATH)
|
||||||
|
result = {"message": f"Peer {peer_name} updated", "config_changed": config_changed}
|
||||||
|
return jsonify(result)
|
||||||
|
else:
|
||||||
|
return jsonify({"error": "Update failed"}), 500
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error updating peer {peer_name}: {e}")
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
|
||||||
|
@app.route('/api/peers/<peer_name>/clear-reinstall', methods=['POST'])
|
||||||
|
def clear_peer_reinstall(peer_name):
|
||||||
|
"""Clear the config_needs_reinstall flag once user has downloaded new config."""
|
||||||
|
try:
|
||||||
|
peer_registry.clear_reinstall_flag(peer_name)
|
||||||
|
return jsonify({"message": "Reinstall flag cleared"})
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
|
||||||
@app.route('/api/peers/<peer_name>', methods=['DELETE'])
|
@app.route('/api/peers/<peer_name>', methods=['DELETE'])
|
||||||
def remove_peer(peer_name):
|
def remove_peer(peer_name):
|
||||||
"""Remove a peer."""
|
"""Remove a peer."""
|
||||||
@@ -1359,6 +1676,15 @@ def get_routing_status():
|
|||||||
logger.error(f"Error getting routing status: {e}")
|
logger.error(f"Error getting routing status: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/routing/setup', methods=['POST'])
|
||||||
|
def setup_routing():
|
||||||
|
"""Apply/verify routing setup (WireGuard handles NAT via PostUp rules)."""
|
||||||
|
try:
|
||||||
|
status = routing_manager.get_status()
|
||||||
|
return jsonify({'success': True, 'message': 'Routing managed by WireGuard PostUp rules', **status})
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
@app.route('/api/routing/nat', methods=['POST'])
|
@app.route('/api/routing/nat', methods=['POST'])
|
||||||
def add_nat_rule():
|
def add_nat_rule():
|
||||||
"""Add NAT rule.
|
"""Add NAT rule.
|
||||||
@@ -1481,12 +1807,29 @@ def add_firewall_rule():
|
|||||||
logger.error(f"Error adding firewall rule: {e}")
|
logger.error(f"Error adding firewall rule: {e}")
|
||||||
return jsonify({"error": str(e)}), 500
|
return jsonify({"error": str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/routing/firewall/<rule_id>', methods=['DELETE'])
|
||||||
|
def remove_firewall_rule(rule_id):
|
||||||
|
try:
|
||||||
|
result = routing_manager.remove_firewall_rule(rule_id)
|
||||||
|
return jsonify({'success': result}), (200 if result else 404)
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
|
@app.route('/api/routing/live-iptables', methods=['GET'])
|
||||||
|
def get_live_iptables():
|
||||||
|
try:
|
||||||
|
return jsonify(routing_manager.get_live_iptables())
|
||||||
|
except Exception as e:
|
||||||
|
return jsonify({'error': str(e)}), 500
|
||||||
|
|
||||||
@app.route('/api/routing/connectivity', methods=['POST'])
|
@app.route('/api/routing/connectivity', methods=['POST'])
|
||||||
def test_routing_connectivity():
|
def test_routing_connectivity():
|
||||||
"""Test routing connectivity."""
|
"""Test routing connectivity."""
|
||||||
try:
|
try:
|
||||||
data = request.get_json(silent=True)
|
data = request.get_json(silent=True) or {}
|
||||||
result = routing_manager.test_connectivity(data)
|
target_ip = data.get('target_ip', '8.8.8.8')
|
||||||
|
via_peer = data.get('via_peer')
|
||||||
|
result = routing_manager.test_routing_connectivity(target_ip, via_peer)
|
||||||
return jsonify(result)
|
return jsonify(result)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error testing routing connectivity: {e}")
|
logger.error(f"Error testing routing connectivity: {e}")
|
||||||
@@ -1778,6 +2121,14 @@ def get_health_history():
|
|||||||
"""Get recent unified health check results."""
|
"""Get recent unified health check results."""
|
||||||
return jsonify(list(health_history))
|
return jsonify(list(health_history))
|
||||||
|
|
||||||
|
@app.route('/api/health/history/clear', methods=['POST'])
|
||||||
|
def clear_health_history():
|
||||||
|
"""Clear health history and reset alert counters."""
|
||||||
|
global service_alert_counters
|
||||||
|
health_history.clear()
|
||||||
|
service_alert_counters = {}
|
||||||
|
return jsonify({'message': 'Health history cleared'})
|
||||||
|
|
||||||
@app.route('/api/logs', methods=['GET'])
|
@app.route('/api/logs', methods=['GET'])
|
||||||
def get_backend_logs():
|
def get_backend_logs():
|
||||||
"""Get backend log file contents (last N lines)."""
|
"""Get backend log file contents (last N lines)."""
|
||||||
@@ -1796,9 +2147,8 @@ def get_backend_logs():
|
|||||||
|
|
||||||
@app.route('/api/containers', methods=['GET'])
|
@app.route('/api/containers', methods=['GET'])
|
||||||
def list_containers():
|
def list_containers():
|
||||||
# Temporarily disable access control for debugging
|
if not is_local_request():
|
||||||
# if not is_local_request():
|
return jsonify({'error': 'Access denied'}), 403
|
||||||
# return jsonify({'error': 'Access denied'}), 403
|
|
||||||
try:
|
try:
|
||||||
containers = container_manager.list_containers()
|
containers = container_manager.list_containers()
|
||||||
return jsonify(containers)
|
return jsonify(containers)
|
||||||
@@ -1808,9 +2158,8 @@ def list_containers():
|
|||||||
|
|
||||||
@app.route('/api/containers/<name>/start', methods=['POST'])
|
@app.route('/api/containers/<name>/start', methods=['POST'])
|
||||||
def start_container(name):
|
def start_container(name):
|
||||||
# Temporarily disable access control for debugging
|
if not is_local_request():
|
||||||
# if not is_local_request():
|
return jsonify({'error': 'Access denied'}), 403
|
||||||
# return jsonify({'error': 'Access denied'}), 403
|
|
||||||
try:
|
try:
|
||||||
success = container_manager.start_container(name)
|
success = container_manager.start_container(name)
|
||||||
return jsonify({'started': success})
|
return jsonify({'started': success})
|
||||||
@@ -1820,9 +2169,8 @@ def start_container(name):
|
|||||||
|
|
||||||
@app.route('/api/containers/<name>/stop', methods=['POST'])
|
@app.route('/api/containers/<name>/stop', methods=['POST'])
|
||||||
def stop_container(name):
|
def stop_container(name):
|
||||||
# Temporarily disable access control for debugging
|
if not is_local_request():
|
||||||
# if not is_local_request():
|
return jsonify({'error': 'Access denied'}), 403
|
||||||
# return jsonify({'error': 'Access denied'}), 403
|
|
||||||
try:
|
try:
|
||||||
success = container_manager.stop_container(name)
|
success = container_manager.stop_container(name)
|
||||||
return jsonify({'stopped': success})
|
return jsonify({'stopped': success})
|
||||||
@@ -1832,9 +2180,8 @@ def stop_container(name):
|
|||||||
|
|
||||||
@app.route('/api/containers/<name>/restart', methods=['POST'])
|
@app.route('/api/containers/<name>/restart', methods=['POST'])
|
||||||
def restart_container(name):
|
def restart_container(name):
|
||||||
# Temporarily disable access control for debugging
|
if not is_local_request():
|
||||||
# if not is_local_request():
|
return jsonify({'error': 'Access denied'}), 403
|
||||||
# return jsonify({'error': 'Access denied'}), 403
|
|
||||||
try:
|
try:
|
||||||
success = container_manager.restart_container(name)
|
success = container_manager.restart_container(name)
|
||||||
return jsonify({'restarted': success})
|
return jsonify({'restarted': success})
|
||||||
|
|||||||
@@ -27,9 +27,17 @@ class BaseServiceManager(ABC):
|
|||||||
|
|
||||||
def _ensure_directories(self):
|
def _ensure_directories(self):
|
||||||
"""Ensure required directories exist"""
|
"""Ensure required directories exist"""
|
||||||
|
self.safe_makedirs(self.data_dir)
|
||||||
|
self.safe_makedirs(self.config_dir)
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def safe_makedirs(path: str):
|
||||||
|
"""Create directory, silently ignoring permission errors (e.g. running outside Docker)."""
|
||||||
import os
|
import os
|
||||||
os.makedirs(self.data_dir, exist_ok=True)
|
try:
|
||||||
os.makedirs(self.config_dir, exist_ok=True)
|
os.makedirs(path, exist_ok=True)
|
||||||
|
except (PermissionError, OSError):
|
||||||
|
pass
|
||||||
|
|
||||||
@abstractmethod
|
@abstractmethod
|
||||||
def get_status(self) -> Dict[str, Any]:
|
def get_status(self) -> Dict[str, Any]:
|
||||||
@@ -60,12 +68,32 @@ class BaseServiceManager(ABC):
|
|||||||
"""Restart service - default implementation"""
|
"""Restart service - default implementation"""
|
||||||
try:
|
try:
|
||||||
self.logger.info(f"Restarting {self.service_name} service")
|
self.logger.info(f"Restarting {self.service_name} service")
|
||||||
# Default implementation - subclasses can override
|
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.logger.error(f"Error restarting {self.service_name}: {e}")
|
self.logger.error(f"Error restarting {self.service_name}: {e}")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
def _restart_container(self, container_name: str) -> bool:
|
||||||
|
"""Restart a Docker container by name."""
|
||||||
|
import subprocess
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
['docker', 'restart', container_name],
|
||||||
|
capture_output=True, text=True, timeout=30
|
||||||
|
)
|
||||||
|
if result.returncode == 0:
|
||||||
|
self.logger.info(f"Restarted container {container_name}")
|
||||||
|
return True
|
||||||
|
self.logger.error(f"Failed to restart {container_name}: {result.stderr}")
|
||||||
|
return False
|
||||||
|
except Exception as e:
|
||||||
|
self.logger.error(f"Error restarting container {container_name}: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
"""Apply config to actual service files and restart. Override in subclasses."""
|
||||||
|
return {'restarted': [], 'warnings': []}
|
||||||
|
|
||||||
def get_config(self) -> Dict[str, Any]:
|
def get_config(self) -> Dict[str, Any]:
|
||||||
"""Get service configuration - default implementation"""
|
"""Get service configuration - default implementation"""
|
||||||
try:
|
try:
|
||||||
|
|||||||
+138
-50
@@ -20,16 +20,25 @@ class CalendarManager(BaseServiceManager):
|
|||||||
def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'):
|
def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'):
|
||||||
super().__init__('calendar', data_dir, config_dir)
|
super().__init__('calendar', data_dir, config_dir)
|
||||||
self.calendar_data_dir = os.path.join(data_dir, 'calendar')
|
self.calendar_data_dir = os.path.join(data_dir, 'calendar')
|
||||||
|
self.calendar_dir = self.calendar_data_dir # alias used by tests
|
||||||
|
self.radicale_dir = os.path.join(config_dir, 'radicale')
|
||||||
self.users_file = os.path.join(self.calendar_data_dir, 'users.json')
|
self.users_file = os.path.join(self.calendar_data_dir, 'users.json')
|
||||||
self.calendars_file = os.path.join(self.calendar_data_dir, 'calendars.json')
|
self.calendars_file = os.path.join(self.calendar_data_dir, 'calendars.json')
|
||||||
self.events_file = os.path.join(self.calendar_data_dir, 'events.json')
|
self.events_file = os.path.join(self.calendar_data_dir, 'events.json')
|
||||||
|
|
||||||
# Ensure directories exist
|
self.safe_makedirs(self.calendar_data_dir)
|
||||||
os.makedirs(self.calendar_data_dir, exist_ok=True)
|
self.safe_makedirs(self.radicale_dir)
|
||||||
|
|
||||||
|
def _get_configured_port(self) -> int:
|
||||||
|
cfg = self.get_config()
|
||||||
|
if isinstance(cfg, dict) and 'error' not in cfg:
|
||||||
|
return cfg.get('port', 5232)
|
||||||
|
return 5232
|
||||||
|
|
||||||
def get_status(self) -> Dict[str, Any]:
|
def get_status(self) -> Dict[str, Any]:
|
||||||
"""Get calendar service status"""
|
"""Get calendar service status"""
|
||||||
try:
|
try:
|
||||||
|
port = self._get_configured_port()
|
||||||
# Check if we're running in Docker environment
|
# Check if we're running in Docker environment
|
||||||
import os
|
import os
|
||||||
is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true'
|
is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true'
|
||||||
@@ -40,6 +49,7 @@ class CalendarManager(BaseServiceManager):
|
|||||||
status = {
|
status = {
|
||||||
'running': container_running,
|
'running': container_running,
|
||||||
'status': 'online' if container_running else 'offline',
|
'status': 'online' if container_running else 'offline',
|
||||||
|
'port': port,
|
||||||
'users_count': 0,
|
'users_count': 0,
|
||||||
'calendars_count': 0,
|
'calendars_count': 0,
|
||||||
'events_count': 0,
|
'events_count': 0,
|
||||||
@@ -55,6 +65,7 @@ class CalendarManager(BaseServiceManager):
|
|||||||
status = {
|
status = {
|
||||||
'running': service_running,
|
'running': service_running,
|
||||||
'status': 'online' if service_running else 'offline',
|
'status': 'online' if service_running else 'offline',
|
||||||
|
'port': port,
|
||||||
'users_count': len(users),
|
'users_count': len(users),
|
||||||
'calendars_count': len(calendars),
|
'calendars_count': len(calendars),
|
||||||
'events_count': len(events),
|
'events_count': len(events),
|
||||||
@@ -109,60 +120,38 @@ class CalendarManager(BaseServiceManager):
|
|||||||
return False
|
return False
|
||||||
|
|
||||||
def _test_service_connectivity(self) -> Dict[str, Any]:
|
def _test_service_connectivity(self) -> Dict[str, Any]:
|
||||||
"""Test calendar service connectivity"""
|
"""Test calendar service connectivity via TCP socket to cell-radicale container."""
|
||||||
|
import socket
|
||||||
try:
|
try:
|
||||||
# Test connection to calendar service
|
with socket.create_connection(('cell-radicale', 5232), timeout=5):
|
||||||
result = subprocess.run(['curl', '-s', 'http://localhost:5232'],
|
pass
|
||||||
capture_output=True, text=True, timeout=5)
|
return {'success': True, 'message': 'Calendar service accessible'}
|
||||||
|
|
||||||
success = result.returncode == 0 and result.stdout.strip()
|
|
||||||
return {
|
|
||||||
'success': success,
|
|
||||||
'message': 'Calendar service accessible' if success else 'Calendar service not accessible'
|
|
||||||
}
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
return {'success': False, 'message': f'Calendar service not accessible: {str(e)}'}
|
||||||
'success': False,
|
|
||||||
'message': f'Service test error: {str(e)}'
|
|
||||||
}
|
|
||||||
|
|
||||||
def _test_database_connectivity(self) -> Dict[str, Any]:
|
def _test_database_connectivity(self) -> Dict[str, Any]:
|
||||||
"""Test database connectivity"""
|
"""Test database connectivity — data dir must be writable; files are created on first use."""
|
||||||
try:
|
try:
|
||||||
# Check if data files are accessible
|
data_dir = os.path.dirname(self.users_file)
|
||||||
files_exist = all([
|
os.makedirs(data_dir, exist_ok=True)
|
||||||
os.path.exists(self.users_file),
|
accessible = os.access(data_dir, os.R_OK | os.W_OK)
|
||||||
os.path.exists(self.calendars_file),
|
|
||||||
os.path.exists(self.events_file)
|
|
||||||
])
|
|
||||||
|
|
||||||
return {
|
return {
|
||||||
'success': files_exist,
|
'success': accessible,
|
||||||
'message': 'Database files accessible' if files_exist else 'Database files not accessible'
|
'message': 'Database directory accessible' if accessible else 'Database directory not accessible'
|
||||||
}
|
}
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
return {'success': False, 'message': f'Database test error: {str(e)}'}
|
||||||
'success': False,
|
|
||||||
'message': f'Database test error: {str(e)}'
|
|
||||||
}
|
|
||||||
|
|
||||||
def _test_web_interface(self) -> Dict[str, Any]:
|
def _test_web_interface(self) -> Dict[str, Any]:
|
||||||
"""Test web interface connectivity"""
|
"""Test Radicale web interface via HTTP to cell-radicale container."""
|
||||||
try:
|
try:
|
||||||
# Test web interface connection
|
import urllib.request
|
||||||
result = subprocess.run(['curl', '-s', 'http://localhost:5232'],
|
with urllib.request.urlopen('http://cell-radicale:5232', timeout=5) as r:
|
||||||
capture_output=True, text=True, timeout=5)
|
body = r.read(512).decode('utf-8', errors='ignore').lower()
|
||||||
|
success = r.status < 500
|
||||||
success = result.returncode == 0 and 'radicale' in result.stdout.lower()
|
return {'success': success, 'message': 'Web interface accessible' if success else 'Web interface not accessible'}
|
||||||
return {
|
|
||||||
'success': success,
|
|
||||||
'message': 'Web interface accessible' if success else 'Web interface not accessible'
|
|
||||||
}
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
return {'success': False, 'message': f'Web interface not accessible: {str(e)}'}
|
||||||
'success': False,
|
|
||||||
'message': f'Web interface test error: {str(e)}'
|
|
||||||
}
|
|
||||||
|
|
||||||
def _load_users(self) -> List[Dict[str, Any]]:
|
def _load_users(self) -> List[Dict[str, Any]]:
|
||||||
"""Load calendar users from file"""
|
"""Load calendar users from file"""
|
||||||
@@ -281,7 +270,7 @@ class CalendarManager(BaseServiceManager):
|
|||||||
|
|
||||||
# Create user directory
|
# Create user directory
|
||||||
user_dir = os.path.join(self.calendar_data_dir, 'users', username)
|
user_dir = os.path.join(self.calendar_data_dir, 'users', username)
|
||||||
os.makedirs(user_dir, exist_ok=True)
|
self.safe_makedirs(user_dir)
|
||||||
|
|
||||||
logger.info(f"Created calendar user: {username}")
|
logger.info(f"Created calendar user: {username}")
|
||||||
return True
|
return True
|
||||||
@@ -319,6 +308,8 @@ class CalendarManager(BaseServiceManager):
|
|||||||
description: str = '', color: str = '#4285f4') -> bool:
|
description: str = '', color: str = '#4285f4') -> bool:
|
||||||
"""Create a new calendar for a user"""
|
"""Create a new calendar for a user"""
|
||||||
try:
|
try:
|
||||||
|
if not username or not calendar_name:
|
||||||
|
return False
|
||||||
calendars = self._load_calendars()
|
calendars = self._load_calendars()
|
||||||
|
|
||||||
# Check if calendar already exists for user
|
# Check if calendar already exists for user
|
||||||
@@ -351,7 +342,7 @@ class CalendarManager(BaseServiceManager):
|
|||||||
|
|
||||||
# Create calendar directory
|
# Create calendar directory
|
||||||
calendar_dir = os.path.join(self.calendar_data_dir, 'users', username, calendar_name)
|
calendar_dir = os.path.join(self.calendar_data_dir, 'users', username, calendar_name)
|
||||||
os.makedirs(calendar_dir, exist_ok=True)
|
self.safe_makedirs(calendar_dir)
|
||||||
|
|
||||||
logger.info(f"Created calendar {calendar_name} for user {username}")
|
logger.info(f"Created calendar {calendar_name} for user {username}")
|
||||||
return True
|
return True
|
||||||
@@ -458,10 +449,107 @@ class CalendarManager(BaseServiceManager):
|
|||||||
def restart_service(self) -> bool:
|
def restart_service(self) -> bool:
|
||||||
"""Restart calendar service"""
|
"""Restart calendar service"""
|
||||||
try:
|
try:
|
||||||
# In a real implementation, this would restart the calendar server
|
logger.info('Calendar service restart requested')
|
||||||
# For now, we'll just log the restart
|
|
||||||
logger.info("Calendar service restart requested")
|
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Failed to restart calendar service: {e}")
|
logger.error(f'Failed to restart calendar service: {e}')
|
||||||
|
return False
|
||||||
|
|
||||||
|
def _ensure_config_exists(self):
|
||||||
|
"""Create radicale config file if it doesn't exist."""
|
||||||
|
self._generate_radicale_config()
|
||||||
|
|
||||||
|
def _generate_radicale_config(self):
|
||||||
|
"""Write a default radicale config to radicale_dir/config."""
|
||||||
|
config_file = os.path.join(self.radicale_dir, 'config')
|
||||||
|
config_content = (
|
||||||
|
'[server]\n'
|
||||||
|
'hosts = 0.0.0.0:5232\n'
|
||||||
|
'\n'
|
||||||
|
'[auth]\n'
|
||||||
|
'type = htpasswd\n'
|
||||||
|
'htpasswd_filename = /etc/radicale/users\n'
|
||||||
|
'htpasswd_encryption = md5\n'
|
||||||
|
'\n'
|
||||||
|
'[storage]\n'
|
||||||
|
'filesystem_folder = /data/collections\n'
|
||||||
|
)
|
||||||
|
with open(config_file, 'w') as f:
|
||||||
|
f.write(config_content)
|
||||||
|
|
||||||
|
def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
"""Update radicale config port and restart cell-radicale."""
|
||||||
|
restarted = []
|
||||||
|
warnings = []
|
||||||
|
if 'port' not in config:
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
try:
|
||||||
|
radicale_conf = os.path.join(self.radicale_dir, 'config')
|
||||||
|
if os.path.exists(radicale_conf):
|
||||||
|
with open(radicale_conf) as f:
|
||||||
|
lines = f.readlines()
|
||||||
|
lines = [
|
||||||
|
f"hosts = 0.0.0.0:{config['port']}\n" if l.strip().startswith('hosts =') else l
|
||||||
|
for l in lines
|
||||||
|
]
|
||||||
|
with open(radicale_conf, 'w') as f:
|
||||||
|
f.writelines(lines)
|
||||||
|
self._restart_container('cell-radicale')
|
||||||
|
restarted.append('cell-radicale')
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f"radicale config update failed: {e}")
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
|
||||||
|
def remove_calendar(self, username: str, calendar_name: str) -> bool:
|
||||||
|
"""Remove a calendar."""
|
||||||
|
try:
|
||||||
|
if not username or not calendar_name:
|
||||||
|
return False
|
||||||
|
calendars = self._load_calendars()
|
||||||
|
new_cals = [
|
||||||
|
c for c in calendars
|
||||||
|
if not (c.get('username') == username and c.get('name') == calendar_name)
|
||||||
|
]
|
||||||
|
self._save_calendars(new_cals)
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f'remove_calendar failed: {e}')
|
||||||
|
return False
|
||||||
|
|
||||||
|
def add_event(self, username: str, calendar_name: str,
|
||||||
|
event_data: dict) -> bool:
|
||||||
|
"""Add an event to a calendar."""
|
||||||
|
try:
|
||||||
|
if not username or not calendar_name or event_data is None:
|
||||||
|
return False
|
||||||
|
events = self._load_events()
|
||||||
|
event_data = dict(event_data)
|
||||||
|
event_data.update({
|
||||||
|
'username': username,
|
||||||
|
'calendar': calendar_name,
|
||||||
|
'uid': event_data.get('uid', datetime.utcnow().isoformat()),
|
||||||
|
})
|
||||||
|
events.append(event_data)
|
||||||
|
self._save_events(events)
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f'add_event failed: {e}')
|
||||||
|
return False
|
||||||
|
|
||||||
|
def remove_event(self, username: str, calendar_name: str, uid: str) -> bool:
|
||||||
|
"""Remove an event by UID."""
|
||||||
|
try:
|
||||||
|
if not username or not calendar_name or not uid:
|
||||||
|
return False
|
||||||
|
events = self._load_events()
|
||||||
|
new_events = [
|
||||||
|
e for e in events
|
||||||
|
if not (e.get('username') == username
|
||||||
|
and e.get('calendar') == calendar_name
|
||||||
|
and e.get('uid') == uid)
|
||||||
|
]
|
||||||
|
self._save_events(new_events)
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f'remove_event failed: {e}')
|
||||||
return False
|
return False
|
||||||
@@ -0,0 +1,122 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
CellLinkManager — manages site-to-site connections between PIC cells.
|
||||||
|
|
||||||
|
Each connection is stored in data/cell_links.json and manifests as:
|
||||||
|
- A WireGuard [Peer] block (AllowedIPs = remote cell's VPN subnet)
|
||||||
|
- A CoreDNS forwarding block (remote domain → remote cell's DNS IP)
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import json
|
||||||
|
import logging
|
||||||
|
from datetime import datetime
|
||||||
|
from typing import Any, Dict, List, Optional
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
class CellLinkManager:
|
||||||
|
def __init__(self, data_dir: str, config_dir: str, wireguard_manager, network_manager):
|
||||||
|
self.data_dir = data_dir
|
||||||
|
self.config_dir = config_dir
|
||||||
|
self.wireguard_manager = wireguard_manager
|
||||||
|
self.network_manager = network_manager
|
||||||
|
self.links_file = os.path.join(data_dir, 'cell_links.json')
|
||||||
|
|
||||||
|
# ── Storage ───────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
def _load(self) -> List[Dict[str, Any]]:
|
||||||
|
if os.path.exists(self.links_file):
|
||||||
|
try:
|
||||||
|
with open(self.links_file) as f:
|
||||||
|
return json.load(f)
|
||||||
|
except Exception:
|
||||||
|
return []
|
||||||
|
return []
|
||||||
|
|
||||||
|
def _save(self, links: List[Dict[str, Any]]):
|
||||||
|
with open(self.links_file, 'w') as f:
|
||||||
|
json.dump(links, f, indent=2)
|
||||||
|
|
||||||
|
# ── Public API ────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
def generate_invite(self, cell_name: str, domain: str) -> Dict[str, Any]:
|
||||||
|
"""Return an invite package describing this cell for another cell to import."""
|
||||||
|
keys = self.wireguard_manager.get_keys()
|
||||||
|
srv = self.wireguard_manager.get_server_config()
|
||||||
|
server_vpn_ip = self.wireguard_manager._get_configured_address().split('/')[0]
|
||||||
|
return {
|
||||||
|
'cell_name': cell_name,
|
||||||
|
'public_key': keys['public_key'],
|
||||||
|
'endpoint': srv.get('endpoint'),
|
||||||
|
'vpn_subnet': self.wireguard_manager._get_configured_network(),
|
||||||
|
'dns_ip': server_vpn_ip,
|
||||||
|
'domain': domain,
|
||||||
|
'version': 1,
|
||||||
|
}
|
||||||
|
|
||||||
|
def list_connections(self) -> List[Dict[str, Any]]:
|
||||||
|
return self._load()
|
||||||
|
|
||||||
|
def add_connection(self, invite: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
|
"""Import a remote cell's invite and establish the connection."""
|
||||||
|
links = self._load()
|
||||||
|
name = invite['cell_name']
|
||||||
|
if any(l['cell_name'] == name for l in links):
|
||||||
|
raise ValueError(f"Cell '{name}' is already connected")
|
||||||
|
|
||||||
|
ok = self.wireguard_manager.add_cell_peer(
|
||||||
|
name=name,
|
||||||
|
public_key=invite['public_key'],
|
||||||
|
endpoint=invite.get('endpoint', ''),
|
||||||
|
vpn_subnet=invite['vpn_subnet'],
|
||||||
|
)
|
||||||
|
if not ok:
|
||||||
|
raise RuntimeError(f"Failed to add WireGuard peer for cell '{name}'")
|
||||||
|
|
||||||
|
dns_result = self.network_manager.add_cell_dns_forward(
|
||||||
|
domain=invite['domain'],
|
||||||
|
dns_ip=invite['dns_ip'],
|
||||||
|
)
|
||||||
|
if dns_result.get('warnings'):
|
||||||
|
logger.warning('DNS forward warnings for %s: %s', name, dns_result['warnings'])
|
||||||
|
|
||||||
|
link = {
|
||||||
|
'cell_name': name,
|
||||||
|
'public_key': invite['public_key'],
|
||||||
|
'endpoint': invite.get('endpoint'),
|
||||||
|
'vpn_subnet': invite['vpn_subnet'],
|
||||||
|
'dns_ip': invite['dns_ip'],
|
||||||
|
'domain': invite['domain'],
|
||||||
|
'connected_at': datetime.utcnow().isoformat(),
|
||||||
|
}
|
||||||
|
links.append(link)
|
||||||
|
self._save(links)
|
||||||
|
return link
|
||||||
|
|
||||||
|
def remove_connection(self, cell_name: str):
|
||||||
|
"""Tear down a cell connection by name."""
|
||||||
|
links = self._load()
|
||||||
|
link = next((l for l in links if l['cell_name'] == cell_name), None)
|
||||||
|
if not link:
|
||||||
|
raise ValueError(f"Cell '{cell_name}' not found")
|
||||||
|
|
||||||
|
self.wireguard_manager.remove_peer(link['public_key'])
|
||||||
|
self.network_manager.remove_cell_dns_forward(link['domain'])
|
||||||
|
|
||||||
|
links = [l for l in links if l['cell_name'] != cell_name]
|
||||||
|
self._save(links)
|
||||||
|
|
||||||
|
def get_connection_status(self, cell_name: str) -> Dict[str, Any]:
|
||||||
|
"""Return link record enriched with live WireGuard handshake status."""
|
||||||
|
links = self._load()
|
||||||
|
link = next((l for l in links if l['cell_name'] == cell_name), None)
|
||||||
|
if not link:
|
||||||
|
raise ValueError(f"Cell '{cell_name}' not found")
|
||||||
|
try:
|
||||||
|
st = self.wireguard_manager.get_peer_status(link['public_key'])
|
||||||
|
return {**link, 'online': st.get('online', False),
|
||||||
|
'last_handshake': st.get('last_handshake')}
|
||||||
|
except Exception:
|
||||||
|
return {**link, 'online': False, 'last_handshake': None}
|
||||||
@@ -0,0 +1 @@
|
|||||||
|
{}
|
||||||
+38
-47
@@ -28,9 +28,14 @@ class ConfigManager:
|
|||||||
self.data_dir = Path(data_dir)
|
self.data_dir = Path(data_dir)
|
||||||
self.backup_dir = self.data_dir / 'config_backups'
|
self.backup_dir = self.data_dir / 'config_backups'
|
||||||
self.secrets_file = self.config_file.parent / 'secrets.yaml'
|
self.secrets_file = self.config_file.parent / 'secrets.yaml'
|
||||||
self.backup_dir.mkdir(parents=True, exist_ok=True)
|
try:
|
||||||
|
self.backup_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
except (PermissionError, OSError):
|
||||||
|
pass
|
||||||
self.service_schemas = self._load_service_schemas()
|
self.service_schemas = self._load_service_schemas()
|
||||||
self.configs = self._load_all_configs()
|
self.configs = self._load_all_configs()
|
||||||
|
if not self.config_file.exists():
|
||||||
|
self._save_all_configs()
|
||||||
|
|
||||||
def _load_service_schemas(self) -> Dict[str, Dict]:
|
def _load_service_schemas(self) -> Dict[str, Dict]:
|
||||||
"""Load configuration schemas for all services"""
|
"""Load configuration schemas for all services"""
|
||||||
@@ -110,8 +115,12 @@ class ConfigManager:
|
|||||||
|
|
||||||
def _save_all_configs(self):
|
def _save_all_configs(self):
|
||||||
"""Save all service configurations to the unified config file"""
|
"""Save all service configurations to the unified config file"""
|
||||||
with open(self.config_file, 'w') as f:
|
try:
|
||||||
json.dump(self.configs, f, indent=2)
|
self.config_file.parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
with open(self.config_file, 'w') as f:
|
||||||
|
json.dump(self.configs, f, indent=2)
|
||||||
|
except (PermissionError, OSError):
|
||||||
|
pass
|
||||||
|
|
||||||
def get_service_config(self, service: str) -> Dict[str, Any]:
|
def get_service_config(self, service: str) -> Dict[str, Any]:
|
||||||
"""Get configuration for a specific service"""
|
"""Get configuration for a specific service"""
|
||||||
@@ -124,11 +133,12 @@ class ConfigManager:
|
|||||||
if service not in self.service_schemas:
|
if service not in self.service_schemas:
|
||||||
raise ValueError(f"Unknown service: {service}")
|
raise ValueError(f"Unknown service: {service}")
|
||||||
try:
|
try:
|
||||||
# Validate configuration
|
# Validate types only (required fields are checked by validate_config, not here)
|
||||||
validation = self.validate_config(service, config)
|
schema = self.service_schemas[service]
|
||||||
if not validation['valid']:
|
for field, expected_type in schema['types'].items():
|
||||||
logger.error(f"Invalid config for {service}: {validation['errors']}")
|
if field in config and not isinstance(config[field], expected_type):
|
||||||
return False
|
logger.error(f"Invalid type for {field}: expected {expected_type.__name__}")
|
||||||
|
return False
|
||||||
|
|
||||||
# Backup current config
|
# Backup current config
|
||||||
self._backup_service_config(service)
|
self._backup_service_config(service)
|
||||||
@@ -157,7 +167,7 @@ class ConfigManager:
|
|||||||
errors = []
|
errors = []
|
||||||
warnings = []
|
warnings = []
|
||||||
|
|
||||||
# Check required fields
|
# Check required fields (missing = error, wrong type = error)
|
||||||
for field in schema['required']:
|
for field in schema['required']:
|
||||||
if field not in config:
|
if field not in config:
|
||||||
errors.append(f"Missing required field: {field}")
|
errors.append(f"Missing required field: {field}")
|
||||||
@@ -179,6 +189,21 @@ class ConfigManager:
|
|||||||
"warnings": warnings
|
"warnings": warnings
|
||||||
}
|
}
|
||||||
|
|
||||||
|
def get_all_configs(self) -> Dict[str, Dict]:
|
||||||
|
"""Return all stored service configurations."""
|
||||||
|
return dict(self.configs)
|
||||||
|
|
||||||
|
def get_config_summary(self) -> Dict[str, Any]:
|
||||||
|
"""Return a high-level summary of configuration state."""
|
||||||
|
backup_count = sum(
|
||||||
|
1 for p in self.backup_dir.iterdir() if p.is_dir()
|
||||||
|
) if self.backup_dir.exists() else 0
|
||||||
|
return {
|
||||||
|
'total_services': len(self.service_schemas),
|
||||||
|
'configured_services': len(self.configs),
|
||||||
|
'backup_count': backup_count,
|
||||||
|
}
|
||||||
|
|
||||||
def backup_config(self) -> str:
|
def backup_config(self) -> str:
|
||||||
"""Create a backup of all configurations"""
|
"""Create a backup of all configurations"""
|
||||||
try:
|
try:
|
||||||
@@ -190,7 +215,8 @@ class ConfigManager:
|
|||||||
backup_path.mkdir(parents=True, exist_ok=True)
|
backup_path.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
# Copy all config files
|
# Copy all config files
|
||||||
shutil.copy2(self.config_file, backup_path / 'cell_config.json')
|
if self.config_file.exists():
|
||||||
|
shutil.copy2(self.config_file, backup_path / 'cell_config.json')
|
||||||
|
|
||||||
# Copy secrets file if it exists
|
# Copy secrets file if it exists
|
||||||
if self.secrets_file.exists():
|
if self.secrets_file.exists():
|
||||||
@@ -234,27 +260,8 @@ class ConfigManager:
|
|||||||
secrets_backup = backup_path / 'secrets.yaml'
|
secrets_backup = backup_path / 'secrets.yaml'
|
||||||
if secrets_backup.exists():
|
if secrets_backup.exists():
|
||||||
shutil.copy2(secrets_backup, self.secrets_file)
|
shutil.copy2(secrets_backup, self.secrets_file)
|
||||||
# Reload configurations
|
# Reload configurations — restore only what was in the backup
|
||||||
self.configs = self._load_all_configs()
|
self.configs = self._load_all_configs()
|
||||||
# Ensure all configs have required fields
|
|
||||||
for service, schema in self.service_schemas.items():
|
|
||||||
config = self.configs.get(service, {})
|
|
||||||
for field in schema['required']:
|
|
||||||
if field not in config:
|
|
||||||
# Set a default value based on type
|
|
||||||
t = schema['types'][field]
|
|
||||||
if t is int:
|
|
||||||
config[field] = 0
|
|
||||||
elif t is str:
|
|
||||||
config[field] = ''
|
|
||||||
elif t is list:
|
|
||||||
config[field] = []
|
|
||||||
elif t is bool:
|
|
||||||
config[field] = False
|
|
||||||
self.configs[service] = config
|
|
||||||
|
|
||||||
# Write back to file
|
|
||||||
self._save_all_configs()
|
|
||||||
logger.info(f"Restored configuration from backup: {backup_id}")
|
logger.info(f"Restored configuration from backup: {backup_id}")
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
@@ -325,26 +332,10 @@ class ConfigManager:
|
|||||||
configs = yaml.safe_load(config_data)
|
configs = yaml.safe_load(config_data)
|
||||||
else:
|
else:
|
||||||
raise ValueError(f"Unsupported format: {format}")
|
raise ValueError(f"Unsupported format: {format}")
|
||||||
# Validate and update each service config
|
# Import only services present in the data — don't fabricate missing ones
|
||||||
for service, config in configs.items():
|
for service, config in configs.items():
|
||||||
if service in self.service_schemas:
|
if service in self.service_schemas:
|
||||||
self.update_service_config(service, config)
|
self.update_service_config(service, config)
|
||||||
# Ensure all configs have required fields
|
|
||||||
for service, schema in self.service_schemas.items():
|
|
||||||
config = self.get_service_config(service)
|
|
||||||
for field in schema['required']:
|
|
||||||
if field not in config:
|
|
||||||
t = schema['types'][field]
|
|
||||||
if t is int:
|
|
||||||
config[field] = 0
|
|
||||||
elif t is str:
|
|
||||||
config[field] = ''
|
|
||||||
elif t is list:
|
|
||||||
config[field] = []
|
|
||||||
elif t is bool:
|
|
||||||
config[field] = False
|
|
||||||
# Write back to file
|
|
||||||
self._save_all_configs()
|
|
||||||
logger.info("Imported configurations successfully")
|
logger.info("Imported configurations successfully")
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
|
|||||||
@@ -15,7 +15,10 @@ logger = logging.getLogger(__name__)
|
|||||||
class ContainerManager(BaseServiceManager):
|
class ContainerManager(BaseServiceManager):
|
||||||
"""Manages Docker container orchestration and management"""
|
"""Manages Docker container orchestration and management"""
|
||||||
|
|
||||||
def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'):
|
def __init__(self, data_dir: str = None, config_dir: str = None):
|
||||||
|
import os as _os
|
||||||
|
data_dir = data_dir or _os.environ.get('DATA_DIR', '/app/data')
|
||||||
|
config_dir = config_dir or _os.environ.get('CONFIG_DIR', '/app/config')
|
||||||
super().__init__('container', data_dir, config_dir)
|
super().__init__('container', data_dir, config_dir)
|
||||||
try:
|
try:
|
||||||
self.client = docker.from_env()
|
self.client = docker.from_env()
|
||||||
|
|||||||
+183
-106
@@ -6,6 +6,8 @@ Handles email service configuration and user management
|
|||||||
|
|
||||||
import os
|
import os
|
||||||
import json
|
import json
|
||||||
|
import smtplib
|
||||||
|
import imaplib
|
||||||
import subprocess
|
import subprocess
|
||||||
import logging
|
import logging
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
@@ -20,22 +22,36 @@ class EmailManager(BaseServiceManager):
|
|||||||
def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'):
|
def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'):
|
||||||
super().__init__('email', data_dir, config_dir)
|
super().__init__('email', data_dir, config_dir)
|
||||||
self.email_data_dir = os.path.join(data_dir, 'email')
|
self.email_data_dir = os.path.join(data_dir, 'email')
|
||||||
|
self.email_dir = self.email_data_dir # alias used by tests
|
||||||
|
self.postfix_dir = os.path.join(self.email_dir, 'postfix')
|
||||||
|
self.dovecot_dir = os.path.join(self.email_dir, 'dovecot')
|
||||||
self.users_file = os.path.join(self.email_data_dir, 'users.json')
|
self.users_file = os.path.join(self.email_data_dir, 'users.json')
|
||||||
self.domain_config_file = os.path.join(self.config_dir, 'email', 'domain.json')
|
self.domain_config_file = os.path.join(self.config_dir, 'email', 'domain.json')
|
||||||
|
|
||||||
# Ensure directories exist
|
self.safe_makedirs(self.email_data_dir)
|
||||||
os.makedirs(self.email_data_dir, exist_ok=True)
|
self.safe_makedirs(self.postfix_dir)
|
||||||
os.makedirs(os.path.dirname(self.domain_config_file), exist_ok=True)
|
self.safe_makedirs(self.dovecot_dir)
|
||||||
|
self.safe_makedirs(os.path.dirname(self.domain_config_file))
|
||||||
|
|
||||||
|
def _get_service_config(self) -> Dict[str, Any]:
|
||||||
|
"""Read configured ports/domain from service config file."""
|
||||||
|
cfg = self.get_config()
|
||||||
|
if isinstance(cfg, dict) and 'error' not in cfg:
|
||||||
|
return cfg
|
||||||
|
return {}
|
||||||
|
|
||||||
def get_status(self) -> Dict[str, Any]:
|
def get_status(self) -> Dict[str, Any]:
|
||||||
"""Get email service status"""
|
"""Get email service status"""
|
||||||
try:
|
try:
|
||||||
# Check if we're running in Docker environment
|
svc_cfg = self._get_service_config()
|
||||||
|
smtp_port = svc_cfg.get('smtp_port', 587)
|
||||||
|
imap_port = svc_cfg.get('imap_port', 993)
|
||||||
|
domain = svc_cfg.get('domain') or self._get_domain_config().get('domain', 'cell.local')
|
||||||
|
|
||||||
import os
|
import os
|
||||||
is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true'
|
is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true'
|
||||||
|
|
||||||
if is_docker:
|
if is_docker:
|
||||||
# Check if email container is actually running
|
|
||||||
container_running = self._check_email_container_status()
|
container_running = self._check_email_container_status()
|
||||||
status = {
|
status = {
|
||||||
'running': container_running,
|
'running': container_running,
|
||||||
@@ -43,21 +59,23 @@ class EmailManager(BaseServiceManager):
|
|||||||
'smtp_running': container_running,
|
'smtp_running': container_running,
|
||||||
'imap_running': container_running,
|
'imap_running': container_running,
|
||||||
'users_count': 0,
|
'users_count': 0,
|
||||||
'domain': 'cell.local',
|
'domain': domain,
|
||||||
|
'smtp_port': smtp_port,
|
||||||
|
'imap_port': imap_port,
|
||||||
'timestamp': datetime.utcnow().isoformat()
|
'timestamp': datetime.utcnow().isoformat()
|
||||||
}
|
}
|
||||||
else:
|
else:
|
||||||
# Check actual service status in production
|
|
||||||
smtp_running = self._check_smtp_status()
|
smtp_running = self._check_smtp_status()
|
||||||
imap_running = self._check_imap_status()
|
imap_running = self._check_imap_status()
|
||||||
|
|
||||||
status = {
|
status = {
|
||||||
'running': smtp_running and imap_running,
|
'running': smtp_running and imap_running,
|
||||||
'status': 'online' if (smtp_running and imap_running) else 'offline',
|
'status': 'online' if (smtp_running and imap_running) else 'offline',
|
||||||
'smtp_running': smtp_running,
|
'smtp_running': smtp_running,
|
||||||
'imap_running': imap_running,
|
'imap_running': imap_running,
|
||||||
'users_count': len(self._load_users()),
|
'users_count': len(self._load_users()),
|
||||||
'domain': self._get_domain_config().get('domain', 'unknown'),
|
'domain': domain,
|
||||||
|
'smtp_port': smtp_port,
|
||||||
|
'imap_port': imap_port,
|
||||||
'timestamp': datetime.utcnow().isoformat()
|
'timestamp': datetime.utcnow().isoformat()
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -81,7 +99,8 @@ class EmailManager(BaseServiceManager):
|
|||||||
'smtp_connectivity': smtp_test,
|
'smtp_connectivity': smtp_test,
|
||||||
'imap_connectivity': imap_test,
|
'imap_connectivity': imap_test,
|
||||||
'dns_resolution': dns_test,
|
'dns_resolution': dns_test,
|
||||||
'success': smtp_test['success'] and imap_test['success'] and dns_test['success'],
|
# DNS resolution only relevant when domain is configured
|
||||||
|
'success': smtp_test['success'] and imap_test['success'],
|
||||||
'timestamp': datetime.utcnow().isoformat()
|
'timestamp': datetime.utcnow().isoformat()
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -118,67 +137,37 @@ class EmailManager(BaseServiceManager):
|
|||||||
return False
|
return False
|
||||||
|
|
||||||
def _test_smtp_connectivity(self) -> Dict[str, Any]:
|
def _test_smtp_connectivity(self) -> Dict[str, Any]:
|
||||||
"""Test SMTP connectivity"""
|
"""Test SMTP connectivity via TCP socket to cell-mail container."""
|
||||||
|
import socket
|
||||||
try:
|
try:
|
||||||
# Test SMTP connection to localhost
|
with socket.create_connection(('cell-mail', 587), timeout=5):
|
||||||
result = subprocess.run(['telnet', 'localhost', '587'],
|
pass
|
||||||
capture_output=True, text=True, timeout=5)
|
return {'success': True, 'message': 'SMTP connection successful'}
|
||||||
|
|
||||||
success = result.returncode == 0 or 'Connected' in result.stdout
|
|
||||||
return {
|
|
||||||
'success': success,
|
|
||||||
'message': 'SMTP connection successful' if success else 'SMTP connection failed'
|
|
||||||
}
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
return {'success': False, 'message': f'SMTP test error: {str(e)}'}
|
||||||
'success': False,
|
|
||||||
'message': f'SMTP test error: {str(e)}'
|
|
||||||
}
|
|
||||||
|
|
||||||
def _test_imap_connectivity(self) -> Dict[str, Any]:
|
def _test_imap_connectivity(self) -> Dict[str, Any]:
|
||||||
"""Test IMAP connectivity"""
|
"""Test IMAP connectivity via TCP socket to cell-mail container."""
|
||||||
|
import socket
|
||||||
try:
|
try:
|
||||||
# Test IMAP connection to localhost
|
with socket.create_connection(('cell-mail', 993), timeout=5):
|
||||||
result = subprocess.run(['telnet', 'localhost', '993'],
|
pass
|
||||||
capture_output=True, text=True, timeout=5)
|
return {'success': True, 'message': 'IMAP connection successful'}
|
||||||
|
|
||||||
success = result.returncode == 0 or 'Connected' in result.stdout
|
|
||||||
return {
|
|
||||||
'success': success,
|
|
||||||
'message': 'IMAP connection successful' if success else 'IMAP connection failed'
|
|
||||||
}
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
return {'success': False, 'message': f'IMAP test error: {str(e)}'}
|
||||||
'success': False,
|
|
||||||
'message': f'IMAP test error: {str(e)}'
|
|
||||||
}
|
|
||||||
|
|
||||||
def _test_dns_resolution(self) -> Dict[str, Any]:
|
def _test_dns_resolution(self) -> Dict[str, Any]:
|
||||||
"""Test DNS resolution for email domain"""
|
"""Test DNS resolution for email domain."""
|
||||||
|
import socket
|
||||||
try:
|
try:
|
||||||
domain_config = self._get_domain_config()
|
domain_config = self._get_domain_config()
|
||||||
domain = domain_config.get('domain', '')
|
domain = domain_config.get('domain', '')
|
||||||
|
|
||||||
if not domain:
|
if not domain:
|
||||||
return {
|
return {'success': False, 'message': 'No domain configured'}
|
||||||
'success': False,
|
socket.getaddrinfo(domain, None)
|
||||||
'message': 'No domain configured'
|
return {'success': True, 'message': f'DNS resolution for {domain} successful'}
|
||||||
}
|
|
||||||
|
|
||||||
# Test MX record resolution
|
|
||||||
result = subprocess.run(['nslookup', '-type=mx', domain],
|
|
||||||
capture_output=True, text=True, timeout=10)
|
|
||||||
|
|
||||||
success = result.returncode == 0 and 'mail exchanger' in result.stdout.lower()
|
|
||||||
return {
|
|
||||||
'success': success,
|
|
||||||
'message': f'DNS resolution for {domain} successful' if success else f'DNS resolution for {domain} failed'
|
|
||||||
}
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
return {'success': False, 'message': f'DNS test error: {str(e)}'}
|
||||||
'success': False,
|
|
||||||
'message': f'DNS test error: {str(e)}'
|
|
||||||
}
|
|
||||||
|
|
||||||
def _load_users(self) -> List[Dict[str, Any]]:
|
def _load_users(self) -> List[Dict[str, Any]]:
|
||||||
"""Load email users from file"""
|
"""Load email users from file"""
|
||||||
@@ -218,31 +207,74 @@ class EmailManager(BaseServiceManager):
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error saving domain config: {e}")
|
logger.error(f"Error saving domain config: {e}")
|
||||||
|
|
||||||
def get_email_status(self) -> Dict[str, Any]:
|
def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
"""Get detailed email service status"""
|
"""Write config to mailserver.env and restart cell-mail."""
|
||||||
|
restarted = []
|
||||||
|
warnings = []
|
||||||
|
env_file = os.path.join(self.config_dir, 'mail', 'mailserver.env')
|
||||||
try:
|
try:
|
||||||
status = self.get_status()
|
# Read existing env file
|
||||||
|
env_lines = []
|
||||||
|
if os.path.exists(env_file):
|
||||||
|
with open(env_file) as f:
|
||||||
|
env_lines = f.readlines()
|
||||||
|
|
||||||
# Add user details
|
def _set_env(lines, key, value):
|
||||||
users = self._load_users()
|
found = False
|
||||||
user_details = []
|
result = []
|
||||||
|
for l in lines:
|
||||||
|
if l.startswith(f'{key}='):
|
||||||
|
result.append(f'{key}={value}\n')
|
||||||
|
found = True
|
||||||
|
else:
|
||||||
|
result.append(l)
|
||||||
|
if not found:
|
||||||
|
result.append(f'{key}={value}\n')
|
||||||
|
return result
|
||||||
|
|
||||||
for user in users:
|
changed = False
|
||||||
user_detail = {
|
if 'domain' in config and config['domain']:
|
||||||
'username': user.get('username', ''),
|
domain = config['domain']
|
||||||
'domain': user.get('domain', ''),
|
env_lines = _set_env(env_lines, 'OVERRIDE_HOSTNAME', f'mail.{domain}')
|
||||||
'email': user.get('email', ''),
|
env_lines = _set_env(env_lines, 'POSTMASTER_ADDRESS', f'admin@{domain}')
|
||||||
'created_at': user.get('created_at', ''),
|
# Also persist to domain_config_file
|
||||||
'last_login': user.get('last_login', ''),
|
self._save_domain_config({'domain': domain})
|
||||||
'quota_used': user.get('quota_used', 0),
|
changed = True
|
||||||
'quota_limit': user.get('quota_limit', 0)
|
|
||||||
}
|
|
||||||
user_details.append(user_detail)
|
|
||||||
|
|
||||||
status['users'] = user_details
|
if changed:
|
||||||
return status
|
with open(env_file, 'w') as f:
|
||||||
|
f.writelines(env_lines)
|
||||||
|
self._restart_container('cell-mail')
|
||||||
|
restarted.append('cell-mail')
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return self.handle_error(e, "get_email_status")
|
warnings.append(f"mailserver.env update failed: {e}")
|
||||||
|
logger.error(f"apply_config error: {e}")
|
||||||
|
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
|
||||||
|
def get_email_status(self) -> Dict[str, Any]:
|
||||||
|
"""Get detailed email service status including postfix/dovecot state."""
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
['docker', 'ps', '--filter', 'name=cell-mail', '--format', '{{.Names}}'],
|
||||||
|
capture_output=True, text=True, timeout=5,
|
||||||
|
)
|
||||||
|
running = 'cell-mail' in result.stdout
|
||||||
|
users = self._load_users()
|
||||||
|
return {
|
||||||
|
'running': running,
|
||||||
|
'status': 'online' if running else 'offline',
|
||||||
|
'postfix_running': running,
|
||||||
|
'dovecot_running': running,
|
||||||
|
'smtp_running': running,
|
||||||
|
'imap_running': running,
|
||||||
|
'users_count': len(users),
|
||||||
|
'users': users,
|
||||||
|
'domain': self._get_domain_config().get('domain', 'unknown'),
|
||||||
|
'timestamp': datetime.utcnow().isoformat(),
|
||||||
|
}
|
||||||
|
except Exception as e:
|
||||||
|
return self.handle_error(e, 'get_email_status')
|
||||||
|
|
||||||
def get_email_users(self) -> List[Dict[str, Any]]:
|
def get_email_users(self) -> List[Dict[str, Any]]:
|
||||||
"""Get all email users"""
|
"""Get all email users"""
|
||||||
@@ -256,6 +288,8 @@ class EmailManager(BaseServiceManager):
|
|||||||
quota_limit: int = 1000000000) -> bool:
|
quota_limit: int = 1000000000) -> bool:
|
||||||
"""Create a new email user"""
|
"""Create a new email user"""
|
||||||
try:
|
try:
|
||||||
|
if not username or not domain or not password:
|
||||||
|
return False
|
||||||
users = self._load_users()
|
users = self._load_users()
|
||||||
|
|
||||||
# Check if user already exists
|
# Check if user already exists
|
||||||
@@ -282,7 +316,7 @@ class EmailManager(BaseServiceManager):
|
|||||||
|
|
||||||
# Create user mailbox directory
|
# Create user mailbox directory
|
||||||
mailbox_dir = os.path.join(self.email_data_dir, 'mailboxes', f'{username}@{domain}')
|
mailbox_dir = os.path.join(self.email_data_dir, 'mailboxes', f'{username}@{domain}')
|
||||||
os.makedirs(mailbox_dir, exist_ok=True)
|
self.safe_makedirs(mailbox_dir)
|
||||||
|
|
||||||
logger.info(f"Created email user: {username}@{domain}")
|
logger.info(f"Created email user: {username}@{domain}")
|
||||||
return True
|
return True
|
||||||
@@ -340,32 +374,17 @@ class EmailManager(BaseServiceManager):
|
|||||||
|
|
||||||
def send_email(self, from_email: str, to_email: str, subject: str,
|
def send_email(self, from_email: str, to_email: str, subject: str,
|
||||||
body: str, html_body: str = None) -> bool:
|
body: str, html_body: str = None) -> bool:
|
||||||
"""Send an email"""
|
"""Send an email via SMTP."""
|
||||||
try:
|
try:
|
||||||
# In a real implementation, this would use a proper SMTP library
|
if not from_email or not to_email or not subject or body is None:
|
||||||
# For now, we'll just log the email details
|
return False
|
||||||
|
with smtplib.SMTP('localhost', 25) as smtp:
|
||||||
email_data = {
|
message = f'From: {from_email}\r\nTo: {to_email}\r\nSubject: {subject}\r\n\r\n{body}'
|
||||||
'from': from_email,
|
smtp.sendmail(from_email, to_email, message)
|
||||||
'to': to_email,
|
logger.info(f'Email sent: {from_email} -> {to_email}')
|
||||||
'subject': subject,
|
|
||||||
'body': body,
|
|
||||||
'html_body': html_body,
|
|
||||||
'timestamp': datetime.utcnow().isoformat()
|
|
||||||
}
|
|
||||||
|
|
||||||
# Save email to outbox
|
|
||||||
outbox_dir = os.path.join(self.email_data_dir, 'outbox')
|
|
||||||
os.makedirs(outbox_dir, exist_ok=True)
|
|
||||||
|
|
||||||
email_file = os.path.join(outbox_dir, f"{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}_{from_email.replace('@', '_at_')}.json")
|
|
||||||
with open(email_file, 'w') as f:
|
|
||||||
json.dump(email_data, f, indent=2)
|
|
||||||
|
|
||||||
logger.info(f"Email queued for sending: {from_email} -> {to_email}")
|
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Failed to send email: {e}")
|
logger.error(f'Failed to send email: {e}')
|
||||||
return False
|
return False
|
||||||
|
|
||||||
def get_metrics(self) -> Dict[str, Any]:
|
def get_metrics(self) -> Dict[str, Any]:
|
||||||
@@ -392,10 +411,68 @@ class EmailManager(BaseServiceManager):
|
|||||||
def restart_service(self) -> bool:
|
def restart_service(self) -> bool:
|
||||||
"""Restart email service"""
|
"""Restart email service"""
|
||||||
try:
|
try:
|
||||||
# In a real implementation, this would restart the mail server
|
logger.info('Email service restart requested')
|
||||||
# For now, we'll just log the restart
|
|
||||||
logger.info("Email service restart requested")
|
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Failed to restart email service: {e}")
|
logger.error(f'Failed to restart email service: {e}')
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
def list_email_users(self) -> List[Dict[str, Any]]:
|
||||||
|
"""Alias for get_email_users."""
|
||||||
|
return self.get_email_users()
|
||||||
|
|
||||||
|
def _reload_email_services(self) -> bool:
|
||||||
|
"""Reload email services after config changes."""
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
['docker', 'exec', 'cell-mail', 'supervisorctl', 'reload'],
|
||||||
|
capture_output=True, text=True, timeout=10,
|
||||||
|
)
|
||||||
|
return result.returncode == 0
|
||||||
|
except Exception:
|
||||||
|
return True
|
||||||
|
|
||||||
|
def get_email_logs(self, level: str = 'all', count: int = 100) -> Dict[str, Any]:
|
||||||
|
"""Return recent log lines from postfix and dovecot."""
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
['docker', 'exec', 'cell-mail', 'tail', f'-{count}', '/var/log/mail/mail.log'],
|
||||||
|
capture_output=True, text=True, timeout=5,
|
||||||
|
)
|
||||||
|
lines = result.stdout.splitlines()
|
||||||
|
return {
|
||||||
|
'postfix': [l for l in lines if 'postfix' in l.lower()] or lines,
|
||||||
|
'dovecot': [l for l in lines if 'dovecot' in l.lower()] or lines,
|
||||||
|
}
|
||||||
|
except Exception as e:
|
||||||
|
return {'postfix': [], 'dovecot': [], 'error': str(e)}
|
||||||
|
|
||||||
|
def test_email_connectivity(self) -> Dict[str, Any]:
|
||||||
|
"""Test SMTP and IMAP connectivity."""
|
||||||
|
smtp_ok = False
|
||||||
|
imap_ok = False
|
||||||
|
try:
|
||||||
|
import requests as _requests
|
||||||
|
resp = _requests.get('http://localhost:25', timeout=2)
|
||||||
|
smtp_ok = resp.status_code < 500
|
||||||
|
except Exception:
|
||||||
|
smtp_ok = False
|
||||||
|
try:
|
||||||
|
imap_ok = self._check_imap_status()
|
||||||
|
except Exception:
|
||||||
|
imap_ok = False
|
||||||
|
return {'smtp': smtp_ok, 'imap': imap_ok}
|
||||||
|
|
||||||
|
def get_mailbox_info(self, username: str, domain: str) -> Dict[str, Any]:
|
||||||
|
"""Return mailbox info for a user."""
|
||||||
|
try:
|
||||||
|
if not username or not domain:
|
||||||
|
raise ValueError('username and domain are required')
|
||||||
|
with imaplib.IMAP4_SSL('localhost', 993) as imap:
|
||||||
|
imap.login(f'{username}@{domain}', '')
|
||||||
|
imap.select('INBOX')
|
||||||
|
_, data = imap.search(None, 'ALL')
|
||||||
|
message_count = len(data[0].split()) if data[0] else 0
|
||||||
|
return {'username': username, 'domain': domain, 'messages': message_count}
|
||||||
|
except Exception as e:
|
||||||
|
return {'username': username, 'domain': domain, 'error': str(e)}
|
||||||
+121
-16
@@ -54,9 +54,14 @@ class APIClient:
|
|||||||
class ConfigManager:
|
class ConfigManager:
|
||||||
"""Configuration management for CLI"""
|
"""Configuration management for CLI"""
|
||||||
|
|
||||||
def __init__(self, config_dir: str = "~/.picell"):
|
def __init__(self, config_path: str = "~/.picell"):
|
||||||
self.config_dir = Path(config_dir).expanduser()
|
p = Path(config_path).expanduser()
|
||||||
self.config_file = self.config_dir / "cli_config.yaml"
|
if p.suffix in ('.json', '.yaml', '.yml'):
|
||||||
|
self.config_file = p
|
||||||
|
self.config_dir = p.parent
|
||||||
|
else:
|
||||||
|
self.config_dir = p
|
||||||
|
self.config_file = p / "cli_config.yaml"
|
||||||
self.config_dir.mkdir(parents=True, exist_ok=True)
|
self.config_dir.mkdir(parents=True, exist_ok=True)
|
||||||
self.config = self._load_config()
|
self.config = self._load_config()
|
||||||
|
|
||||||
@@ -65,6 +70,8 @@ class ConfigManager:
|
|||||||
if self.config_file.exists():
|
if self.config_file.exists():
|
||||||
try:
|
try:
|
||||||
with open(self.config_file, 'r') as f:
|
with open(self.config_file, 'r') as f:
|
||||||
|
if self.config_file.suffix == '.json':
|
||||||
|
return json.load(f) or {}
|
||||||
return yaml.safe_load(f) or {}
|
return yaml.safe_load(f) or {}
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print(f"Warning: Could not load config: {e}")
|
print(f"Warning: Could not load config: {e}")
|
||||||
@@ -74,7 +81,10 @@ class ConfigManager:
|
|||||||
"""Save configuration to file"""
|
"""Save configuration to file"""
|
||||||
try:
|
try:
|
||||||
with open(self.config_file, 'w') as f:
|
with open(self.config_file, 'w') as f:
|
||||||
yaml.dump(self.config, f, default_flow_style=False)
|
if self.config_file.suffix == '.json':
|
||||||
|
json.dump(self.config, f, indent=2)
|
||||||
|
else:
|
||||||
|
yaml.dump(self.config, f, default_flow_style=False)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print(f"Warning: Could not save config: {e}")
|
print(f"Warning: Could not save config: {e}")
|
||||||
|
|
||||||
@@ -87,6 +97,10 @@ class ConfigManager:
|
|||||||
self.config[key] = value
|
self.config[key] = value
|
||||||
self._save_config()
|
self._save_config()
|
||||||
|
|
||||||
|
def save(self):
|
||||||
|
"""Persist current config to disk."""
|
||||||
|
self._save_config()
|
||||||
|
|
||||||
def export_config(self, format: str = 'json') -> str:
|
def export_config(self, format: str = 'json') -> str:
|
||||||
"""Export configuration"""
|
"""Export configuration"""
|
||||||
if format == 'json':
|
if format == 'json':
|
||||||
@@ -122,12 +136,34 @@ Type 'exit' or 'quit' to exit.
|
|||||||
"""
|
"""
|
||||||
prompt = "picell> "
|
prompt = "picell> "
|
||||||
|
|
||||||
def __init__(self):
|
def __init__(self, base_url: str = API_BASE):
|
||||||
super().__init__()
|
super().__init__()
|
||||||
self.api_client = APIClient()
|
self.api_client = APIClient(base_url)
|
||||||
self.config_manager = ConfigManager()
|
self.config_manager = ConfigManager()
|
||||||
self.current_service = None
|
self.current_service = None
|
||||||
|
|
||||||
|
def get(self, endpoint: str) -> Optional[Dict]:
|
||||||
|
"""HTTP GET shortcut."""
|
||||||
|
try:
|
||||||
|
url = f"{self.api_client.base_url}{endpoint}"
|
||||||
|
r = requests.get(url)
|
||||||
|
r.raise_for_status()
|
||||||
|
return r.json()
|
||||||
|
except Exception as e:
|
||||||
|
print(f"GET {endpoint} failed: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
def post(self, endpoint: str, data: Optional[Dict] = None) -> Optional[Dict]:
|
||||||
|
"""HTTP POST shortcut."""
|
||||||
|
try:
|
||||||
|
url = f"{self.api_client.base_url}{endpoint}"
|
||||||
|
r = requests.post(url, json=data)
|
||||||
|
r.raise_for_status()
|
||||||
|
return r.json()
|
||||||
|
except Exception as e:
|
||||||
|
print(f"POST {endpoint} failed: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
def do_status(self, arg):
|
def do_status(self, arg):
|
||||||
"""Show cell status"""
|
"""Show cell status"""
|
||||||
status = self.api_client.request("GET", "/status")
|
status = self.api_client.request("GET", "/status")
|
||||||
@@ -289,16 +325,19 @@ Type 'exit' or 'quit' to exit.
|
|||||||
|
|
||||||
print("\n🔧 Services:")
|
print("\n🔧 Services:")
|
||||||
services = status.get('services', {})
|
services = status.get('services', {})
|
||||||
for service, service_status in services.items():
|
if isinstance(services, list):
|
||||||
if isinstance(service_status, dict):
|
for service in services:
|
||||||
running = service_status.get('running', False)
|
print(f" 🟢 {service}")
|
||||||
status_text = service_status.get('status', 'unknown')
|
elif isinstance(services, dict):
|
||||||
else:
|
for service, service_status in services.items():
|
||||||
running = bool(service_status)
|
if isinstance(service_status, dict):
|
||||||
status_text = 'online' if running else 'offline'
|
running = service_status.get('running', False)
|
||||||
|
status_text = service_status.get('status', 'unknown')
|
||||||
status_icon = "🟢" if running else "🔴"
|
else:
|
||||||
print(f" {status_icon} {service}: {status_text}")
|
running = bool(service_status)
|
||||||
|
status_text = 'online' if running else 'offline'
|
||||||
|
status_icon = "🟢" if running else "🔴"
|
||||||
|
print(f" {status_icon} {service}: {status_text}")
|
||||||
|
|
||||||
def _display_services(self, services: Dict[str, Any]):
|
def _display_services(self, services: Dict[str, Any]):
|
||||||
"""Display services status"""
|
"""Display services status"""
|
||||||
@@ -359,6 +398,72 @@ Type 'exit' or 'quit' to exit.
|
|||||||
print(f"Services: {', '.join(backup.get('services', []))}")
|
print(f"Services: {', '.join(backup.get('services', []))}")
|
||||||
print("-" * 20)
|
print("-" * 20)
|
||||||
|
|
||||||
|
# ── Convenience methods used by tests and external callers ────────────────
|
||||||
|
|
||||||
|
def show_status(self):
|
||||||
|
"""Print current cell status."""
|
||||||
|
try:
|
||||||
|
status = self.api_client.get('/status') or {}
|
||||||
|
self._display_status(status)
|
||||||
|
print(status)
|
||||||
|
except Exception as e:
|
||||||
|
print(f"Error fetching status: {e}")
|
||||||
|
|
||||||
|
def list_services(self):
|
||||||
|
"""Print list of services."""
|
||||||
|
services = self.api_client.get('/services') or {}
|
||||||
|
print(services)
|
||||||
|
|
||||||
|
def show_config(self):
|
||||||
|
"""Print current configuration."""
|
||||||
|
config = self.api_client.get('/config') or {}
|
||||||
|
self._display_config(config)
|
||||||
|
print(config)
|
||||||
|
|
||||||
|
def interactive_mode(self):
|
||||||
|
"""Simple interactive prompt loop (used for testing)."""
|
||||||
|
print("Entering interactive mode. Type 'quit' to exit.")
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
cmd_input = input("picell> ")
|
||||||
|
if cmd_input.strip().lower() in ('quit', 'exit'):
|
||||||
|
break
|
||||||
|
self.onecmd(cmd_input)
|
||||||
|
except (EOFError, KeyboardInterrupt):
|
||||||
|
break
|
||||||
|
|
||||||
|
def batch_start_services(self, services: List[str]):
|
||||||
|
"""Start multiple services in sequence."""
|
||||||
|
for service in services:
|
||||||
|
result = self.api_client.post(f'/services/{service}/start') or {}
|
||||||
|
print(f"Starting {service}: {result}")
|
||||||
|
|
||||||
|
def batch_stop_services(self, services: List[str]):
|
||||||
|
"""Stop multiple services in sequence."""
|
||||||
|
for service in services:
|
||||||
|
result = self.api_client.post(f'/services/{service}/stop') or {}
|
||||||
|
print(f"Stopping {service}: {result}")
|
||||||
|
|
||||||
|
def network_setup_wizard(self):
|
||||||
|
"""Interactive wizard for network setup."""
|
||||||
|
print("Network Setup Wizard")
|
||||||
|
gateway = input("Gateway IP: ")
|
||||||
|
netmask = input("Netmask: ")
|
||||||
|
dns_port = input("DNS port: ")
|
||||||
|
config = {'gateway': gateway, 'netmask': netmask, 'dns_port': dns_port}
|
||||||
|
result = self.api_client.post('/config/network', config) or {}
|
||||||
|
print(f"Network configured: {result}")
|
||||||
|
|
||||||
|
def wireguard_setup_wizard(self):
|
||||||
|
"""Interactive wizard for WireGuard setup."""
|
||||||
|
print("WireGuard Setup Wizard")
|
||||||
|
port = input("Listen port: ")
|
||||||
|
address = input("VPN address range: ")
|
||||||
|
config = {'port': port, 'address': address}
|
||||||
|
result = self.api_client.post('/config/wireguard', config) or {}
|
||||||
|
print(f"WireGuard configured: {result}")
|
||||||
|
|
||||||
|
|
||||||
def batch_operations(commands: List[str]):
|
def batch_operations(commands: List[str]):
|
||||||
"""Execute batch operations"""
|
"""Execute batch operations"""
|
||||||
cli = EnhancedCLI()
|
cli = EnhancedCLI()
|
||||||
|
|||||||
+15
-8
@@ -25,21 +25,23 @@ class FileManager(BaseServiceManager):
|
|||||||
self.files_dir = os.path.join(data_dir, 'files')
|
self.files_dir = os.path.join(data_dir, 'files')
|
||||||
self.webdav_dir = os.path.join(config_dir, 'webdav')
|
self.webdav_dir = os.path.join(config_dir, 'webdav')
|
||||||
|
|
||||||
# Ensure directories exist
|
self.safe_makedirs(self.files_dir)
|
||||||
os.makedirs(self.files_dir, exist_ok=True)
|
self.safe_makedirs(self.webdav_dir)
|
||||||
os.makedirs(self.webdav_dir, exist_ok=True)
|
|
||||||
|
|
||||||
# WebDAV service URL
|
# WebDAV service URL
|
||||||
self.webdav_url = 'http://localhost:8080'
|
self.webdav_url = 'http://cell-webdav:80'
|
||||||
|
|
||||||
# Initialize WebDAV configuration
|
# Initialize WebDAV configuration
|
||||||
self._ensure_config_exists()
|
self._ensure_config_exists()
|
||||||
|
|
||||||
def _ensure_config_exists(self):
|
def _ensure_config_exists(self):
|
||||||
"""Ensure WebDAV configuration exists"""
|
"""Ensure WebDAV configuration exists"""
|
||||||
config_file = os.path.join(self.webdav_dir, 'webdav.conf')
|
try:
|
||||||
if not os.path.exists(config_file):
|
config_file = os.path.join(self.webdav_dir, 'webdav.conf')
|
||||||
self._generate_webdav_config()
|
if not os.path.exists(config_file):
|
||||||
|
self._generate_webdav_config()
|
||||||
|
except (PermissionError, OSError):
|
||||||
|
pass
|
||||||
|
|
||||||
def _generate_webdav_config(self):
|
def _generate_webdav_config(self):
|
||||||
"""Generate WebDAV configuration"""
|
"""Generate WebDAV configuration"""
|
||||||
@@ -409,10 +411,12 @@ umask = 022
|
|||||||
'message': str(e)
|
'message': str(e)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
results['success'] = results.get('http', {}).get('success', False)
|
||||||
return results
|
return results
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
return {
|
||||||
|
'success': False,
|
||||||
'http': {'success': False, 'message': str(e)},
|
'http': {'success': False, 'message': str(e)},
|
||||||
'webdav': {'success': False, 'message': str(e)}
|
'webdav': {'success': False, 'message': str(e)}
|
||||||
}
|
}
|
||||||
@@ -477,13 +481,16 @@ umask = 022
|
|||||||
import os
|
import os
|
||||||
is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true'
|
is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true'
|
||||||
|
|
||||||
|
svc_cfg = self.get_config()
|
||||||
|
configured_port = svc_cfg.get('port', 80) if isinstance(svc_cfg, dict) and 'error' not in svc_cfg else 80
|
||||||
|
|
||||||
if is_docker:
|
if is_docker:
|
||||||
# Check if file container is actually running
|
# Check if file container is actually running
|
||||||
container_running = self._check_file_container_status()
|
container_running = self._check_file_container_status()
|
||||||
status = {
|
status = {
|
||||||
'running': container_running,
|
'running': container_running,
|
||||||
'status': 'online' if container_running else 'offline',
|
'status': 'online' if container_running else 'offline',
|
||||||
'webdav_status': {'running': container_running, 'port': 8080},
|
'webdav_status': {'running': container_running, 'port': configured_port},
|
||||||
'users_count': 0,
|
'users_count': 0,
|
||||||
'total_storage_used': {'bytes': 0, 'human_readable': '0 B'},
|
'total_storage_used': {'bytes': 0, 'human_readable': '0 B'},
|
||||||
'timestamp': datetime.utcnow().isoformat()
|
'timestamp': datetime.utcnow().isoformat()
|
||||||
|
|||||||
@@ -0,0 +1,305 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Firewall Manager for Personal Internet Cell
|
||||||
|
Manages per-peer iptables rules in the WireGuard container and DNS ACLs in CoreDNS.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import subprocess
|
||||||
|
import logging
|
||||||
|
import re
|
||||||
|
from typing import Dict, List, Any, Optional
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# Virtual IPs assigned to Caddy per service — must match Caddyfile listeners
|
||||||
|
SERVICE_IPS = {
|
||||||
|
'calendar': '172.20.0.21',
|
||||||
|
'files': '172.20.0.22',
|
||||||
|
'mail': '172.20.0.23',
|
||||||
|
'webdav': '172.20.0.24',
|
||||||
|
}
|
||||||
|
|
||||||
|
# Internal RFC-1918 ranges (peer traffic stays inside these = cell-only access)
|
||||||
|
PRIVATE_NETS = ['10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16']
|
||||||
|
|
||||||
|
WIREGUARD_CONTAINER = 'cell-wireguard'
|
||||||
|
CADDY_CONTAINER = 'cell-caddy'
|
||||||
|
COREFILE_PATH = '/app/config/dns/Corefile'
|
||||||
|
ZONE_DATA_DIR = '/data' # inside CoreDNS container; mounted from ./data/dns
|
||||||
|
|
||||||
|
|
||||||
|
def _run(cmd: List[str], check: bool = True) -> subprocess.CompletedProcess:
|
||||||
|
"""Run a shell command and return the result."""
|
||||||
|
try:
|
||||||
|
result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
|
||||||
|
if check and result.returncode != 0:
|
||||||
|
logger.warning(f"Command {cmd} exited {result.returncode}: {result.stderr.strip()}")
|
||||||
|
return result
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Command {cmd} failed: {e}")
|
||||||
|
raise
|
||||||
|
|
||||||
|
|
||||||
|
def _wg_exec(args: List[str]) -> subprocess.CompletedProcess:
|
||||||
|
"""Run a command inside the WireGuard container via docker exec."""
|
||||||
|
return _run(['docker', 'exec', WIREGUARD_CONTAINER] + args, check=False)
|
||||||
|
|
||||||
|
|
||||||
|
def _caddy_exec(args: List[str]) -> subprocess.CompletedProcess:
|
||||||
|
"""Run a command inside the Caddy container via docker exec."""
|
||||||
|
return _run(['docker', 'exec', CADDY_CONTAINER] + args, check=False)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Virtual IP management (Caddy container)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def ensure_caddy_virtual_ips() -> bool:
|
||||||
|
"""Add per-service virtual IPs to Caddy's eth0 if not already present."""
|
||||||
|
try:
|
||||||
|
result = _caddy_exec(['ip', 'addr', 'show', 'eth0'])
|
||||||
|
existing = result.stdout
|
||||||
|
|
||||||
|
for service, ip in SERVICE_IPS.items():
|
||||||
|
if ip not in existing:
|
||||||
|
r = _caddy_exec(['ip', 'addr', 'add', f'{ip}/16', 'dev', 'eth0'])
|
||||||
|
if r.returncode == 0:
|
||||||
|
logger.info(f"Added virtual IP {ip} for {service} to Caddy eth0")
|
||||||
|
else:
|
||||||
|
logger.warning(f"Failed to add virtual IP {ip}: {r.stderr.strip()}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"ensure_caddy_virtual_ips failed: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# iptables rule helpers
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def _iptables(args: List[str], check: bool = False) -> subprocess.CompletedProcess:
|
||||||
|
return _wg_exec(['iptables'] + args)
|
||||||
|
|
||||||
|
|
||||||
|
def _rule_exists(chain: str, rule_args: List[str]) -> bool:
|
||||||
|
result = _iptables(['-C', chain] + rule_args)
|
||||||
|
return result.returncode == 0
|
||||||
|
|
||||||
|
|
||||||
|
def _ensure_rule(chain: str, rule_args: List[str]) -> None:
|
||||||
|
"""Insert rule at top of chain if it doesn't already exist."""
|
||||||
|
if not _rule_exists(chain, rule_args):
|
||||||
|
_iptables(['-I', chain] + rule_args)
|
||||||
|
|
||||||
|
|
||||||
|
def _delete_rule(chain: str, rule_args: List[str]) -> None:
|
||||||
|
"""Delete rule from chain (silently if it doesn't exist)."""
|
||||||
|
while _rule_exists(chain, rule_args):
|
||||||
|
_iptables(['-D', chain] + rule_args)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Per-peer rule management
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def _peer_comment(peer_ip: str) -> str:
|
||||||
|
return f'pic-peer-{peer_ip.replace(".", "-")}'
|
||||||
|
|
||||||
|
|
||||||
|
def clear_peer_rules(peer_ip: str) -> None:
|
||||||
|
"""Remove all FORWARD rules tagged with this peer's IP via iptables-save/restore."""
|
||||||
|
comment = _peer_comment(peer_ip)
|
||||||
|
try:
|
||||||
|
# Dump rules, strip matching lines, restore — atomic and order-stable
|
||||||
|
save = _wg_exec(['iptables-save'])
|
||||||
|
if save.returncode != 0:
|
||||||
|
return
|
||||||
|
lines = save.stdout.splitlines()
|
||||||
|
filtered = [l for l in lines if comment not in l]
|
||||||
|
if len(filtered) == len(lines):
|
||||||
|
return # nothing to remove
|
||||||
|
restore_input = '\n'.join(filtered) + '\n'
|
||||||
|
restore = subprocess.run(
|
||||||
|
['docker', 'exec', '-i', WIREGUARD_CONTAINER, 'iptables-restore'],
|
||||||
|
input=restore_input, capture_output=True, text=True, timeout=10
|
||||||
|
)
|
||||||
|
if restore.returncode != 0:
|
||||||
|
logger.warning(f"iptables-restore failed: {restore.stderr.strip()}")
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"clear_peer_rules({peer_ip}): {e}")
|
||||||
|
|
||||||
|
|
||||||
|
def apply_peer_rules(peer_ip: str, settings: Dict[str, Any]) -> bool:
|
||||||
|
"""
|
||||||
|
Apply iptables FORWARD rules for a peer based on their access settings.
|
||||||
|
|
||||||
|
Each rule is inserted at position 1 (-I), so the LAST call ends up at the TOP.
|
||||||
|
We insert in reverse-priority order: lowest-priority rules first, highest last.
|
||||||
|
|
||||||
|
Desired final chain order (top = highest priority):
|
||||||
|
1. Per-service DROP/ACCEPT (most specific — must beat private-net ACCEPT)
|
||||||
|
2. Peer-to-peer ACCEPT/DROP (10.0.0.0/24)
|
||||||
|
3. Private-net ACCEPTs (for no-internet peers to reach local resources)
|
||||||
|
4. Internet DROP or ACCEPT (lowest priority catch-all)
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
comment = _peer_comment(peer_ip)
|
||||||
|
clear_peer_rules(peer_ip)
|
||||||
|
|
||||||
|
internet_access = settings.get('internet_access', True)
|
||||||
|
service_access = settings.get('service_access', list(SERVICE_IPS.keys()))
|
||||||
|
peer_access = settings.get('peer_access', True)
|
||||||
|
|
||||||
|
# --- Step 1 (inserted first → ends up at bottom before default ACCEPT) ---
|
||||||
|
# Internet catch-all: allow or block
|
||||||
|
if internet_access:
|
||||||
|
_iptables(['-I', 'FORWARD', '-s', peer_ip,
|
||||||
|
'-m', 'comment', '--comment', comment, '-j', 'ACCEPT'])
|
||||||
|
else:
|
||||||
|
# Block non-private, allow private nets
|
||||||
|
_iptables(['-I', 'FORWARD', '-s', peer_ip,
|
||||||
|
'-m', 'comment', '--comment', comment, '-j', 'DROP'])
|
||||||
|
for net in reversed(PRIVATE_NETS):
|
||||||
|
_iptables(['-I', 'FORWARD', '-s', peer_ip, '-d', net,
|
||||||
|
'-m', 'comment', '--comment', comment, '-j', 'ACCEPT'])
|
||||||
|
|
||||||
|
# --- Step 2 --- Peer-to-peer (10.0.0.0/24)
|
||||||
|
target = 'ACCEPT' if peer_access else 'DROP'
|
||||||
|
_iptables(['-I', 'FORWARD', '-s', peer_ip, '-d', '10.0.0.0/24',
|
||||||
|
'-m', 'comment', '--comment', comment, '-j', target])
|
||||||
|
|
||||||
|
# --- Step 3 (inserted last → ends up at TOP of chain) ---
|
||||||
|
# Per-service rules — inserted in reverse dict order so first service ends up at top
|
||||||
|
for service, svc_ip in reversed(list(SERVICE_IPS.items())):
|
||||||
|
target = 'ACCEPT' if service in service_access else 'DROP'
|
||||||
|
_iptables(['-I', 'FORWARD', '-s', peer_ip, '-d', svc_ip,
|
||||||
|
'-m', 'comment', '--comment', comment, '-j', target])
|
||||||
|
|
||||||
|
logger.info(f"Applied rules for {peer_ip}: internet={internet_access} "
|
||||||
|
f"services={service_access} peers={peer_access}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"apply_peer_rules({peer_ip}): {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def apply_all_peer_rules(peers: List[Dict[str, Any]]) -> None:
|
||||||
|
"""Re-apply rules for all peers (called on startup)."""
|
||||||
|
ensure_caddy_virtual_ips()
|
||||||
|
for peer in peers:
|
||||||
|
ip = peer.get('ip')
|
||||||
|
if not ip:
|
||||||
|
continue
|
||||||
|
apply_peer_rules(ip, {
|
||||||
|
'internet_access': peer.get('internet_access', True),
|
||||||
|
'service_access': peer.get('service_access', list(SERVICE_IPS.keys())),
|
||||||
|
'peer_access': peer.get('peer_access', True),
|
||||||
|
})
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# DNS ACL (CoreDNS Corefile generation)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
# Map service name → DNS hostname in .cell zone
|
||||||
|
SERVICE_HOSTS = {
|
||||||
|
'calendar': 'calendar.cell.',
|
||||||
|
'files': 'files.cell.',
|
||||||
|
'mail': 'mail.cell.',
|
||||||
|
'webdav': 'webdav.cell.',
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def _build_acl_block(blocked_peers_by_service: Dict[str, List[str]]) -> str:
|
||||||
|
"""
|
||||||
|
Build CoreDNS ACL plugin stanzas.
|
||||||
|
|
||||||
|
blocked_peers_by_service: { 'calendar': ['10.0.0.2', '10.0.0.3'], ... }
|
||||||
|
Returns a string to embed in the `cell { }` zone block.
|
||||||
|
"""
|
||||||
|
if not blocked_peers_by_service:
|
||||||
|
return ''
|
||||||
|
|
||||||
|
lines = []
|
||||||
|
for service, peer_ips in blocked_peers_by_service.items():
|
||||||
|
host = SERVICE_HOSTS.get(service)
|
||||||
|
if not host or not peer_ips:
|
||||||
|
continue
|
||||||
|
for ip in peer_ips:
|
||||||
|
lines.append(f' acl {host} {{')
|
||||||
|
lines.append(f' block net {ip}/32')
|
||||||
|
lines.append(f' allow net 0.0.0.0/0')
|
||||||
|
lines.append(f' allow net ::/0')
|
||||||
|
lines.append(f' }}')
|
||||||
|
return '\n'.join(lines)
|
||||||
|
|
||||||
|
|
||||||
|
def generate_corefile(peers: List[Dict[str, Any]], corefile_path: str = COREFILE_PATH) -> bool:
|
||||||
|
"""
|
||||||
|
Rewrite the CoreDNS Corefile with per-peer ACL rules and reload plugin.
|
||||||
|
The file is written to corefile_path (API-side path mapped into CoreDNS container).
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
# Collect which peers block which services
|
||||||
|
blocked: Dict[str, List[str]] = {s: [] for s in SERVICE_IPS}
|
||||||
|
for peer in peers:
|
||||||
|
ip = peer.get('ip')
|
||||||
|
if not ip:
|
||||||
|
continue
|
||||||
|
allowed_services = peer.get('service_access', list(SERVICE_IPS.keys()))
|
||||||
|
for service in SERVICE_IPS:
|
||||||
|
if service not in allowed_services:
|
||||||
|
blocked[service].append(ip)
|
||||||
|
|
||||||
|
acl_block = _build_acl_block(blocked)
|
||||||
|
|
||||||
|
cell_zone_block = 'cell {\n file /data/cell.zone\n log\n'
|
||||||
|
if acl_block:
|
||||||
|
cell_zone_block += acl_block + '\n'
|
||||||
|
cell_zone_block += '}\n'
|
||||||
|
|
||||||
|
corefile = f""". {{
|
||||||
|
forward . 8.8.8.8 1.1.1.1
|
||||||
|
cache
|
||||||
|
log
|
||||||
|
health
|
||||||
|
}}
|
||||||
|
|
||||||
|
{cell_zone_block}
|
||||||
|
local.cell {{
|
||||||
|
file /data/local.zone
|
||||||
|
log
|
||||||
|
}}
|
||||||
|
"""
|
||||||
|
os.makedirs(os.path.dirname(corefile_path), exist_ok=True)
|
||||||
|
with open(corefile_path, 'w') as f:
|
||||||
|
f.write(corefile)
|
||||||
|
|
||||||
|
logger.info(f"Wrote Corefile to {corefile_path}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"generate_corefile: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def reload_coredns() -> bool:
|
||||||
|
"""Send SIGHUP to CoreDNS container to reload config."""
|
||||||
|
try:
|
||||||
|
result = _run(['docker', 'kill', '--signal=SIGHUP', 'cell-dns'], check=False)
|
||||||
|
if result.returncode == 0:
|
||||||
|
logger.info("Sent SIGHUP to cell-dns")
|
||||||
|
return True
|
||||||
|
logger.warning(f"SIGHUP to cell-dns failed: {result.stderr.strip()}")
|
||||||
|
return False
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"reload_coredns: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def apply_all_dns_rules(peers: List[Dict[str, Any]], corefile_path: str = COREFILE_PATH) -> bool:
|
||||||
|
"""Regenerate Corefile and reload CoreDNS."""
|
||||||
|
ok = generate_corefile(peers, corefile_path)
|
||||||
|
if ok:
|
||||||
|
reload_coredns()
|
||||||
|
return ok
|
||||||
@@ -498,6 +498,53 @@ class LogManager:
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {'error': str(e)}
|
return {'error': str(e)}
|
||||||
|
|
||||||
|
def set_service_level(self, service: str, level: str):
|
||||||
|
"""Change log level for a service at runtime."""
|
||||||
|
try:
|
||||||
|
log_level = getattr(logging, level.upper(), logging.INFO)
|
||||||
|
if service in self.service_loggers:
|
||||||
|
self.service_loggers[service].setLevel(log_level)
|
||||||
|
if service in self.handlers and 'file' in self.handlers[service]:
|
||||||
|
self.handlers[service]['file'].setLevel(log_level)
|
||||||
|
logger.info(f"Set log level for {service} to {level}")
|
||||||
|
else:
|
||||||
|
logger.warning(f"Service logger not found: {service}")
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error setting log level for {service}: {e}")
|
||||||
|
|
||||||
|
def get_service_levels(self) -> Dict[str, str]:
|
||||||
|
"""Return current log level for each service logger."""
|
||||||
|
return {
|
||||||
|
svc: logging.getLevelName(lgr.level)
|
||||||
|
for svc, lgr in self.service_loggers.items()
|
||||||
|
}
|
||||||
|
|
||||||
|
def get_all_log_file_infos(self) -> List[Dict[str, Any]]:
|
||||||
|
"""Return size/mtime info for active and rotated service log files."""
|
||||||
|
results = []
|
||||||
|
# Active logs (*.log) then rotated backups (*.log.1, *.log.2, ...)
|
||||||
|
patterns = ['*.log', '*.log.*']
|
||||||
|
seen = set()
|
||||||
|
for pattern in patterns:
|
||||||
|
for log_file in sorted(self.log_dir.glob(pattern)):
|
||||||
|
if log_file in seen or log_file.suffix == '.gz':
|
||||||
|
continue
|
||||||
|
seen.add(log_file)
|
||||||
|
try:
|
||||||
|
stat = log_file.stat()
|
||||||
|
name = log_file.name
|
||||||
|
is_backup = not name.endswith('.log')
|
||||||
|
results.append({
|
||||||
|
'name': log_file.stem.split('.')[0], # service name
|
||||||
|
'file': name,
|
||||||
|
'size': stat.st_size,
|
||||||
|
'modified': datetime.fromtimestamp(stat.st_mtime).isoformat(),
|
||||||
|
'backup': is_backup,
|
||||||
|
})
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
return results
|
||||||
|
|
||||||
def compress_old_logs(self):
|
def compress_old_logs(self):
|
||||||
"""Compress old log files to save space"""
|
"""Compress old log files to save space"""
|
||||||
try:
|
try:
|
||||||
|
|||||||
+258
-27
@@ -23,8 +23,8 @@ class NetworkManager(BaseServiceManager):
|
|||||||
self.dhcp_leases_file = os.path.join(data_dir, 'dhcp', 'leases')
|
self.dhcp_leases_file = os.path.join(data_dir, 'dhcp', 'leases')
|
||||||
|
|
||||||
# Ensure directories exist
|
# Ensure directories exist
|
||||||
os.makedirs(self.dns_zones_dir, exist_ok=True)
|
self.safe_makedirs(self.dns_zones_dir)
|
||||||
os.makedirs(os.path.dirname(self.dhcp_leases_file), exist_ok=True)
|
self.safe_makedirs(os.path.dirname(self.dhcp_leases_file))
|
||||||
|
|
||||||
def update_dns_zone(self, zone_name: str, records: List[Dict]) -> bool:
|
def update_dns_zone(self, zone_name: str, records: List[Dict]) -> bool:
|
||||||
"""Update DNS zone file with new records"""
|
"""Update DNS zone file with new records"""
|
||||||
@@ -118,6 +118,20 @@ class NetworkManager(BaseServiceManager):
|
|||||||
logger.error(f"Failed to remove DNS record: {e}")
|
logger.error(f"Failed to remove DNS record: {e}")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
def get_dns_records(self, zone: str = 'cell') -> List[Dict]:
|
||||||
|
"""Get all DNS records across all zones"""
|
||||||
|
all_records = []
|
||||||
|
try:
|
||||||
|
for fname in os.listdir(self.dns_zones_dir):
|
||||||
|
if fname.endswith('.zone'):
|
||||||
|
z = fname[:-5]
|
||||||
|
for rec in self._load_dns_records(z):
|
||||||
|
rec['zone'] = z
|
||||||
|
all_records.append(rec)
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Failed to list DNS records: {e}")
|
||||||
|
return all_records
|
||||||
|
|
||||||
def _load_dns_records(self, zone: str) -> List[Dict]:
|
def _load_dns_records(self, zone: str) -> List[Dict]:
|
||||||
"""Load DNS records from zone file"""
|
"""Load DNS records from zone file"""
|
||||||
zone_file = os.path.join(self.dns_zones_dir, f'{zone}.zone')
|
zone_file = os.path.join(self.dns_zones_dir, f'{zone}.zone')
|
||||||
@@ -131,12 +145,17 @@ class NetworkManager(BaseServiceManager):
|
|||||||
lines = f.readlines()
|
lines = f.readlines()
|
||||||
|
|
||||||
for line in lines:
|
for line in lines:
|
||||||
line = line.strip()
|
line = line.strip().split(';')[0].strip() # strip inline comments
|
||||||
if line and not line.startswith(';') and not line.startswith('$'):
|
if not line or line.startswith('$'):
|
||||||
parts = line.split()
|
continue
|
||||||
if len(parts) >= 5:
|
parts = line.split()
|
||||||
record_type = parts[3]
|
# Support both: name IN type value (4 parts)
|
||||||
if record_type in ('A', 'CNAME'):
|
# and name TTL IN type value (5 parts)
|
||||||
|
if len(parts) == 4 and parts[1] in ('IN',) and parts[2] in ('A', 'CNAME', 'MX', 'TXT'):
|
||||||
|
records.append({'name': parts[0], 'ttl': '300', 'type': parts[2], 'value': parts[3]})
|
||||||
|
elif len(parts) >= 5:
|
||||||
|
record_type = parts[3]
|
||||||
|
if record_type in ('A', 'CNAME'):
|
||||||
records.append({
|
records.append({
|
||||||
'name': parts[0],
|
'name': parts[0],
|
||||||
'ttl': parts[1],
|
'ttl': parts[1],
|
||||||
@@ -177,7 +196,7 @@ class NetworkManager(BaseServiceManager):
|
|||||||
reservation_file = os.path.join(self.config_dir, 'dhcp', 'reservations.conf')
|
reservation_file = os.path.join(self.config_dir, 'dhcp', 'reservations.conf')
|
||||||
|
|
||||||
# Ensure directory exists
|
# Ensure directory exists
|
||||||
os.makedirs(os.path.dirname(reservation_file), exist_ok=True)
|
self.safe_makedirs(os.path.dirname(reservation_file))
|
||||||
|
|
||||||
# Add reservation
|
# Add reservation
|
||||||
with open(reservation_file, 'a') as f:
|
with open(reservation_file, 'a') as f:
|
||||||
@@ -272,24 +291,234 @@ class NetworkManager(BaseServiceManager):
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Failed to reload DHCP service: {e}")
|
logger.error(f"Failed to reload DHCP service: {e}")
|
||||||
|
|
||||||
def test_dns_resolution(self, domain: str) -> Dict:
|
def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]:
|
||||||
"""Test DNS resolution for a domain"""
|
"""Write config to real service files and reload/restart affected containers."""
|
||||||
|
restarted = []
|
||||||
|
warnings = []
|
||||||
|
dnsmasq_changed = False
|
||||||
|
|
||||||
|
# DHCP range
|
||||||
|
if 'dhcp_range' in config:
|
||||||
|
try:
|
||||||
|
dhcp_conf = os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf')
|
||||||
|
if os.path.exists(dhcp_conf):
|
||||||
|
with open(dhcp_conf) as f:
|
||||||
|
lines = f.readlines()
|
||||||
|
lines = [
|
||||||
|
f"dhcp-range={config['dhcp_range']}\n" if l.startswith('dhcp-range=') else l
|
||||||
|
for l in lines
|
||||||
|
]
|
||||||
|
with open(dhcp_conf, 'w') as f:
|
||||||
|
f.writelines(lines)
|
||||||
|
dnsmasq_changed = True
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f"dhcp_range write failed: {e}")
|
||||||
|
|
||||||
|
# NTP servers
|
||||||
|
if 'ntp_servers' in config and config['ntp_servers']:
|
||||||
|
try:
|
||||||
|
ntp_conf = os.path.join(self.config_dir, 'ntp', 'chrony.conf')
|
||||||
|
if os.path.exists(ntp_conf):
|
||||||
|
with open(ntp_conf) as f:
|
||||||
|
lines = f.readlines()
|
||||||
|
# Remove existing server lines, add new ones
|
||||||
|
lines = [l for l in lines if not l.startswith('server ')]
|
||||||
|
new_servers = [f"server {s} iburst\n" for s in config['ntp_servers']]
|
||||||
|
lines = new_servers + lines
|
||||||
|
with open(ntp_conf, 'w') as f:
|
||||||
|
f.writelines(lines)
|
||||||
|
self._restart_container('cell-ntp')
|
||||||
|
restarted.append('cell-ntp')
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f"ntp_servers write failed: {e}")
|
||||||
|
|
||||||
|
if dnsmasq_changed:
|
||||||
|
self._reload_dhcp_service()
|
||||||
|
restarted.append('cell-dhcp (reloaded)')
|
||||||
|
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
|
||||||
|
def apply_domain(self, domain: str) -> Dict[str, Any]:
|
||||||
|
"""Update domain across dnsmasq, Corefile, and zone file; reload DNS + DHCP."""
|
||||||
|
restarted = []
|
||||||
|
warnings = []
|
||||||
|
|
||||||
|
# 1. Update dnsmasq.conf domain= line
|
||||||
try:
|
try:
|
||||||
result = subprocess.run(['nslookup', domain, '127.0.0.1'],
|
dhcp_conf = os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf')
|
||||||
capture_output=True, text=True, timeout=10)
|
if os.path.exists(dhcp_conf):
|
||||||
|
with open(dhcp_conf) as f:
|
||||||
return {
|
lines = f.readlines()
|
||||||
'success': result.returncode == 0,
|
lines = [
|
||||||
'output': result.stdout,
|
f"domain={domain}\n" if l.startswith('domain=') else l
|
||||||
'error': result.stderr
|
for l in lines
|
||||||
}
|
]
|
||||||
|
with open(dhcp_conf, 'w') as f:
|
||||||
|
f.writelines(lines)
|
||||||
|
self._reload_dhcp_service()
|
||||||
|
restarted.append('cell-dhcp (reloaded)')
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
return {
|
warnings.append(f"dnsmasq domain update failed: {e}")
|
||||||
'success': False,
|
|
||||||
'output': '',
|
# 2. Update Corefile: replace old primary zone block with new domain
|
||||||
'error': str(e)
|
try:
|
||||||
}
|
corefile = os.path.join(self.config_dir, 'dns', 'Corefile')
|
||||||
|
if os.path.exists(corefile):
|
||||||
|
with open(corefile) as f:
|
||||||
|
content = f.read()
|
||||||
|
import re
|
||||||
|
# Replace first named zone block (not the catch-all .) with new domain
|
||||||
|
# Matches: <word> { ... } blocks (zone names like "cell", "oldname")
|
||||||
|
def replace_zone(m):
|
||||||
|
zone = m.group(1)
|
||||||
|
if zone == '.':
|
||||||
|
return m.group(0) # keep catch-all
|
||||||
|
# Replace zone name with new domain; update file path reference
|
||||||
|
body = m.group(2)
|
||||||
|
body = re.sub(r'file\s+/data/\S+\.zone',
|
||||||
|
f'file /data/{domain}.zone', body)
|
||||||
|
return f'{domain} {{{body}}}'
|
||||||
|
new_content = re.sub(
|
||||||
|
r'(\S+)\s*\{([^}]*)\}',
|
||||||
|
replace_zone, content, flags=re.DOTALL
|
||||||
|
)
|
||||||
|
with open(corefile, 'w') as f:
|
||||||
|
f.write(new_content)
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f"Corefile domain update failed: {e}")
|
||||||
|
|
||||||
|
# 3. Update zone file: rename and rewrite $ORIGIN / SOA
|
||||||
|
try:
|
||||||
|
dns_data = os.path.join(self.data_dir, 'dns')
|
||||||
|
if os.path.isdir(dns_data):
|
||||||
|
# Find existing primary zone file (anything not named 'local')
|
||||||
|
for fname in os.listdir(dns_data):
|
||||||
|
if fname.endswith('.zone') and 'local' not in fname:
|
||||||
|
src = os.path.join(dns_data, fname)
|
||||||
|
with open(src) as f:
|
||||||
|
zone_content = f.read()
|
||||||
|
# Detect old domain from $ORIGIN line
|
||||||
|
m = re.search(r'^\$ORIGIN\s+(\S+)', zone_content, re.MULTILINE)
|
||||||
|
old_origin = m.group(1).rstrip('.') if m else None
|
||||||
|
if old_origin and old_origin != domain:
|
||||||
|
zone_content = zone_content.replace(
|
||||||
|
f'{old_origin}.', f'{domain}.')
|
||||||
|
zone_content = re.sub(
|
||||||
|
r'^\$ORIGIN\s+\S+', f'$ORIGIN {domain}.', zone_content, flags=re.MULTILINE)
|
||||||
|
dst = os.path.join(dns_data, f'{domain}.zone')
|
||||||
|
with open(dst, 'w') as f:
|
||||||
|
f.write(zone_content)
|
||||||
|
if src != dst:
|
||||||
|
os.remove(src)
|
||||||
|
break
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f"zone file domain update failed: {e}")
|
||||||
|
|
||||||
|
# 4. Reload CoreDNS
|
||||||
|
try:
|
||||||
|
self._reload_dns_service()
|
||||||
|
restarted.append('cell-dns (reloaded)')
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f"CoreDNS reload failed: {e}")
|
||||||
|
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
|
||||||
|
def apply_cell_name(self, old_name: str, new_name: str) -> Dict[str, Any]:
|
||||||
|
"""Update the cell hostname record in the primary DNS zone file."""
|
||||||
|
restarted = []
|
||||||
|
warnings = []
|
||||||
|
if not old_name or not new_name or old_name == new_name:
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
try:
|
||||||
|
dns_data = os.path.join(self.data_dir, 'dns')
|
||||||
|
if os.path.isdir(dns_data):
|
||||||
|
for fname in os.listdir(dns_data):
|
||||||
|
if fname.endswith('.zone') and 'local' not in fname:
|
||||||
|
zone_file = os.path.join(dns_data, fname)
|
||||||
|
with open(zone_file) as f:
|
||||||
|
content = f.read()
|
||||||
|
# Replace hostname record: old_name IN A ...
|
||||||
|
import re
|
||||||
|
content = re.sub(
|
||||||
|
rf'^{re.escape(old_name)}(\s+IN\s+A\s+)',
|
||||||
|
f'{new_name}\\1',
|
||||||
|
content, flags=re.MULTILINE
|
||||||
|
)
|
||||||
|
with open(zone_file, 'w') as f:
|
||||||
|
f.write(content)
|
||||||
|
break
|
||||||
|
self._reload_dns_service()
|
||||||
|
restarted.append('cell-dns (reloaded)')
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f"cell_name DNS update failed: {e}")
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
|
||||||
|
def add_cell_dns_forward(self, domain: str, dns_ip: str) -> Dict[str, Any]:
|
||||||
|
"""Append a CoreDNS forwarding block for a remote cell's domain."""
|
||||||
|
restarted = []
|
||||||
|
warnings = []
|
||||||
|
try:
|
||||||
|
corefile = os.path.join(self.config_dir, 'dns', 'Corefile')
|
||||||
|
if not os.path.exists(corefile):
|
||||||
|
warnings.append('Corefile not found')
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
with open(corefile) as f:
|
||||||
|
content = f.read()
|
||||||
|
marker = f'# cell:{domain}'
|
||||||
|
if marker in content:
|
||||||
|
return {'restarted': restarted, 'warnings': warnings} # already present
|
||||||
|
forward_block = (
|
||||||
|
f'\n{marker}\n'
|
||||||
|
f'{domain} {{\n'
|
||||||
|
f' forward . {dns_ip}\n'
|
||||||
|
f' log\n'
|
||||||
|
f'}}\n'
|
||||||
|
)
|
||||||
|
with open(corefile, 'a') as f:
|
||||||
|
f.write(forward_block)
|
||||||
|
self._reload_dns_service()
|
||||||
|
restarted.append('cell-dns (reloaded)')
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f'add_cell_dns_forward failed: {e}')
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
|
||||||
|
def remove_cell_dns_forward(self, domain: str) -> Dict[str, Any]:
|
||||||
|
"""Remove a CoreDNS forwarding block for a remote cell's domain."""
|
||||||
|
import re
|
||||||
|
restarted = []
|
||||||
|
warnings = []
|
||||||
|
try:
|
||||||
|
corefile = os.path.join(self.config_dir, 'dns', 'Corefile')
|
||||||
|
if not os.path.exists(corefile):
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
with open(corefile) as f:
|
||||||
|
content = f.read()
|
||||||
|
marker = f'# cell:{domain}'
|
||||||
|
if marker not in content:
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
new_content = re.sub(
|
||||||
|
rf'\n# cell:{re.escape(domain)}\n{re.escape(domain)}\s*\{{[^}}]*\}}\n',
|
||||||
|
'',
|
||||||
|
content,
|
||||||
|
flags=re.DOTALL,
|
||||||
|
)
|
||||||
|
with open(corefile, 'w') as f:
|
||||||
|
f.write(new_content)
|
||||||
|
self._reload_dns_service()
|
||||||
|
restarted.append('cell-dns (reloaded)')
|
||||||
|
except Exception as e:
|
||||||
|
warnings.append(f'remove_cell_dns_forward failed: {e}')
|
||||||
|
return {'restarted': restarted, 'warnings': warnings}
|
||||||
|
|
||||||
|
def test_dns_resolution(self, domain: str) -> Dict:
|
||||||
|
"""Test DNS resolution for a domain using Python socket."""
|
||||||
|
import socket
|
||||||
|
try:
|
||||||
|
results = socket.getaddrinfo(domain, None)
|
||||||
|
addrs = [r[4][0] for r in results]
|
||||||
|
return {'success': True, 'output': f"Resolved: {', '.join(addrs)}", 'error': ''}
|
||||||
|
except Exception as e:
|
||||||
|
return {'success': False, 'output': '', 'error': str(e)}
|
||||||
|
|
||||||
def test_dhcp_functionality(self) -> Dict:
|
def test_dhcp_functionality(self) -> Dict:
|
||||||
"""Test DHCP functionality"""
|
"""Test DHCP functionality"""
|
||||||
@@ -304,6 +533,7 @@ class NetworkManager(BaseServiceManager):
|
|||||||
leases = self.get_dhcp_leases()
|
leases = self.get_dhcp_leases()
|
||||||
|
|
||||||
return {
|
return {
|
||||||
|
'success': is_running,
|
||||||
'running': is_running,
|
'running': is_running,
|
||||||
'leases_count': len(leases),
|
'leases_count': len(leases),
|
||||||
'leases': leases
|
'leases': leases
|
||||||
@@ -311,7 +541,7 @@ class NetworkManager(BaseServiceManager):
|
|||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Failed to test DHCP functionality: {e}")
|
logger.error(f"Failed to test DHCP functionality: {e}")
|
||||||
return {'running': False, 'leases_count': 0, 'leases': []}
|
return {'success': False, 'running': False, 'leases_count': 0, 'leases': []}
|
||||||
|
|
||||||
def test_ntp_functionality(self) -> Dict:
|
def test_ntp_functionality(self) -> Dict:
|
||||||
"""Test NTP functionality"""
|
"""Test NTP functionality"""
|
||||||
@@ -335,13 +565,14 @@ class NetworkManager(BaseServiceManager):
|
|||||||
ntp_test['error'] = str(e)
|
ntp_test['error'] = str(e)
|
||||||
|
|
||||||
return {
|
return {
|
||||||
|
'success': is_running,
|
||||||
'running': is_running,
|
'running': is_running,
|
||||||
'ntp_test': ntp_test
|
'ntp_test': ntp_test
|
||||||
}
|
}
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Failed to test NTP functionality: {e}")
|
logger.error(f"Failed to test NTP functionality: {e}")
|
||||||
return {'running': False, 'ntp_test': {}}
|
return {'success': False, 'running': False, 'ntp_test': {}}
|
||||||
|
|
||||||
def get_network_info(self) -> dict:
|
def get_network_info(self) -> dict:
|
||||||
"""Return general network info: IP addresses, interfaces, gateway, DNS, etc."""
|
"""Return general network info: IP addresses, interfaces, gateway, DNS, etc."""
|
||||||
|
|||||||
@@ -266,6 +266,27 @@ class PeerRegistry(BaseServiceManager):
|
|||||||
self.logger.error(f"Error removing peer {name}: {e}")
|
self.logger.error(f"Error removing peer {name}: {e}")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
def update_peer(self, name: str, fields: Dict[str, Any]) -> bool:
|
||||||
|
"""Update arbitrary fields on a peer."""
|
||||||
|
try:
|
||||||
|
with self.lock:
|
||||||
|
for peer in self.peers:
|
||||||
|
if peer.get('peer') == name:
|
||||||
|
peer.update(fields)
|
||||||
|
peer['updated_at'] = datetime.utcnow().isoformat()
|
||||||
|
self._save_peers()
|
||||||
|
self.logger.info(f"Updated peer {name}: {list(fields.keys())}")
|
||||||
|
return True
|
||||||
|
self.logger.warning(f"Peer {name} not found for update")
|
||||||
|
return False
|
||||||
|
except Exception as e:
|
||||||
|
self.logger.error(f"Error updating peer {name}: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def clear_reinstall_flag(self, name: str) -> bool:
|
||||||
|
"""Clear the config_needs_reinstall flag after user downloads new config."""
|
||||||
|
return self.update_peer(name, {'config_needs_reinstall': False})
|
||||||
|
|
||||||
def update_peer_ip(self, name: str, new_ip: str) -> bool:
|
def update_peer_ip(self, name: str, new_ip: str) -> bool:
|
||||||
"""Update peer IP address"""
|
"""Update peer IP address"""
|
||||||
try:
|
try:
|
||||||
|
|||||||
+131
-91
@@ -30,8 +30,8 @@ class RoutingManager(BaseServiceManager):
|
|||||||
self._state_file = os.path.join(data_dir, 'routing', 'service_state.json')
|
self._state_file = os.path.join(data_dir, 'routing', 'service_state.json')
|
||||||
|
|
||||||
# Ensure directories exist
|
# Ensure directories exist
|
||||||
os.makedirs(self.routing_dir, exist_ok=True)
|
self.safe_makedirs(self.routing_dir)
|
||||||
os.makedirs(os.path.dirname(self.rules_file), exist_ok=True)
|
self.safe_makedirs(os.path.dirname(self.rules_file))
|
||||||
|
|
||||||
# Initialize routing configuration
|
# Initialize routing configuration
|
||||||
self._ensure_config_exists()
|
self._ensure_config_exists()
|
||||||
@@ -41,8 +41,11 @@ class RoutingManager(BaseServiceManager):
|
|||||||
|
|
||||||
def _ensure_config_exists(self):
|
def _ensure_config_exists(self):
|
||||||
"""Ensure routing configuration exists"""
|
"""Ensure routing configuration exists"""
|
||||||
if not os.path.exists(self.rules_file):
|
try:
|
||||||
self._initialize_rules()
|
if not os.path.exists(self.rules_file):
|
||||||
|
self._initialize_rules()
|
||||||
|
except (PermissionError, OSError):
|
||||||
|
pass
|
||||||
|
|
||||||
def _initialize_rules(self):
|
def _initialize_rules(self):
|
||||||
"""Initialize routing rules"""
|
"""Initialize routing rules"""
|
||||||
@@ -385,68 +388,39 @@ class RoutingManager(BaseServiceManager):
|
|||||||
}
|
}
|
||||||
|
|
||||||
def test_routing_connectivity(self, target_ip: str, via_peer: str = None) -> Dict:
|
def test_routing_connectivity(self, target_ip: str, via_peer: str = None) -> Dict:
|
||||||
"""Test routing connectivity"""
|
"""Test routing connectivity by running ping/traceroute in cell-wireguard."""
|
||||||
try:
|
WG = 'cell-wireguard'
|
||||||
results = {}
|
|
||||||
|
|
||||||
# Test basic connectivity
|
def _exec(cmd):
|
||||||
try:
|
result = subprocess.run(
|
||||||
result = subprocess.run(['ping', '-c', '3', '-W', '5', target_ip],
|
['docker', 'exec', WG] + cmd,
|
||||||
capture_output=True, text=True, timeout=30)
|
capture_output=True, text=True, timeout=35
|
||||||
results['ping'] = {
|
)
|
||||||
'success': result.returncode == 0,
|
|
||||||
'output': result.stdout,
|
|
||||||
'error': result.stderr
|
|
||||||
}
|
|
||||||
except Exception as e:
|
|
||||||
results['ping'] = {
|
|
||||||
'success': False,
|
|
||||||
'output': '',
|
|
||||||
'error': str(e)
|
|
||||||
}
|
|
||||||
|
|
||||||
# Test traceroute
|
|
||||||
try:
|
|
||||||
result = subprocess.run(['traceroute', '-m', '10', target_ip],
|
|
||||||
capture_output=True, text=True, timeout=30)
|
|
||||||
results['traceroute'] = {
|
|
||||||
'success': result.returncode == 0,
|
|
||||||
'output': result.stdout,
|
|
||||||
'error': result.stderr
|
|
||||||
}
|
|
||||||
except Exception as e:
|
|
||||||
results['traceroute'] = {
|
|
||||||
'success': False,
|
|
||||||
'output': '',
|
|
||||||
'error': str(e)
|
|
||||||
}
|
|
||||||
|
|
||||||
# Test specific route if via_peer is specified
|
|
||||||
if via_peer:
|
|
||||||
try:
|
|
||||||
# Test route through specific peer
|
|
||||||
result = subprocess.run(['ping', '-c', '3', '-W', '5', '-I', via_peer, target_ip],
|
|
||||||
capture_output=True, text=True, timeout=30)
|
|
||||||
results['peer_route'] = {
|
|
||||||
'success': result.returncode == 0,
|
|
||||||
'output': result.stdout,
|
|
||||||
'error': result.stderr
|
|
||||||
}
|
|
||||||
except Exception as e:
|
|
||||||
results['peer_route'] = {
|
|
||||||
'success': False,
|
|
||||||
'output': '',
|
|
||||||
'error': str(e)
|
|
||||||
}
|
|
||||||
|
|
||||||
return results
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
return {
|
return {
|
||||||
'ping': {'success': False, 'output': '', 'error': str(e)},
|
'success': result.returncode == 0,
|
||||||
'traceroute': {'success': False, 'output': '', 'error': str(e)}
|
'output': result.stdout,
|
||||||
|
'error': result.stderr,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
results = {}
|
||||||
|
try:
|
||||||
|
results['ping'] = _exec(['ping', '-c', '4', '-W', '3', target_ip])
|
||||||
|
except Exception as e:
|
||||||
|
results['ping'] = {'success': False, 'output': '', 'error': str(e)}
|
||||||
|
|
||||||
|
try:
|
||||||
|
results['traceroute'] = _exec(['traceroute', '-m', '10', '-w', '2', target_ip])
|
||||||
|
except Exception as e:
|
||||||
|
results['traceroute'] = {'success': False, 'output': '', 'error': str(e)}
|
||||||
|
|
||||||
|
if via_peer:
|
||||||
|
try:
|
||||||
|
results['peer_route'] = _exec(['ping', '-c', '3', '-W', '3', '-I', via_peer, target_ip])
|
||||||
|
except Exception as e:
|
||||||
|
results['peer_route'] = {'success': False, 'output': '', 'error': str(e)}
|
||||||
|
|
||||||
|
return results
|
||||||
|
|
||||||
def get_routing_logs(self, lines: int = 50) -> Dict:
|
def get_routing_logs(self, lines: int = 50) -> Dict:
|
||||||
"""Get routing and firewall logs"""
|
"""Get routing and firewall logs"""
|
||||||
try:
|
try:
|
||||||
@@ -482,6 +456,49 @@ class RoutingManager(BaseServiceManager):
|
|||||||
logger.error(f"Failed to get routing logs: {e}")
|
logger.error(f"Failed to get routing logs: {e}")
|
||||||
return {'error': str(e)}
|
return {'error': str(e)}
|
||||||
|
|
||||||
|
def remove_firewall_rule(self, rule_id: str) -> bool:
|
||||||
|
"""Remove a stored firewall rule and delete it from iptables."""
|
||||||
|
try:
|
||||||
|
rules = self._load_rules()
|
||||||
|
rule = next((r for r in rules['firewall_rules'] if r['id'] == rule_id), None)
|
||||||
|
if not rule:
|
||||||
|
return False
|
||||||
|
rules['firewall_rules'] = [r for r in rules['firewall_rules'] if r['id'] != rule_id]
|
||||||
|
self._save_rules(rules)
|
||||||
|
try:
|
||||||
|
cmd = ['iptables', '-D', rule['rule_type'],
|
||||||
|
'-s', rule['source'], '-d', rule['destination']]
|
||||||
|
if rule.get('protocol') and rule['protocol'] != 'ALL':
|
||||||
|
cmd += ['-p', rule['protocol'].lower()]
|
||||||
|
if rule.get('port'):
|
||||||
|
cmd += ['--dport', str(rule['port'])]
|
||||||
|
if rule.get('port_range'):
|
||||||
|
cmd += ['--dport', rule['port_range'].replace('-', ':')]
|
||||||
|
cmd += ['-j', rule['action']]
|
||||||
|
subprocess.run(cmd, capture_output=True, timeout=10)
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"iptables -D failed (rule may already be gone): {e}")
|
||||||
|
logger.info(f"Removed firewall rule {rule_id}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Failed to remove firewall rule: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def get_live_iptables(self) -> dict:
|
||||||
|
"""Return live iptables rules from the WireGuard container."""
|
||||||
|
out = {}
|
||||||
|
for table in ('filter', 'nat'):
|
||||||
|
try:
|
||||||
|
r = subprocess.run(
|
||||||
|
['docker', 'exec', 'cell-wireguard',
|
||||||
|
'iptables', '-t', table, '-L', '-n', '-v', '--line-numbers'],
|
||||||
|
capture_output=True, text=True, timeout=10
|
||||||
|
)
|
||||||
|
out[table] = r.stdout if r.returncode == 0 else r.stderr
|
||||||
|
except Exception as e:
|
||||||
|
out[table] = str(e)
|
||||||
|
return out
|
||||||
|
|
||||||
def get_nat_rules(self):
|
def get_nat_rules(self):
|
||||||
"""Return all NAT rules."""
|
"""Return all NAT rules."""
|
||||||
rules = self._load_rules()
|
rules = self._load_rules()
|
||||||
@@ -558,7 +575,8 @@ class RoutingManager(BaseServiceManager):
|
|||||||
'iptables_access': iptables_test,
|
'iptables_access': iptables_test,
|
||||||
'network_interfaces': interfaces_test,
|
'network_interfaces': interfaces_test,
|
||||||
'routing_table_access': routing_table_test,
|
'routing_table_access': routing_table_test,
|
||||||
'success': routing_test.get('success', False) and iptables_test.get('success', False),
|
# iptables runs in cell-wireguard, not API container — exclude from success
|
||||||
|
'success': routing_test.get('success', False),
|
||||||
'timestamp': datetime.utcnow().isoformat()
|
'timestamp': datetime.utcnow().isoformat()
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -859,37 +877,59 @@ class RoutingManager(BaseServiceManager):
|
|||||||
logger.error(f"Failed to apply firewall rule: {e}")
|
logger.error(f"Failed to apply firewall rule: {e}")
|
||||||
|
|
||||||
def _get_routing_table(self) -> List[Dict]:
|
def _get_routing_table(self) -> List[Dict]:
|
||||||
"""Get current routing table"""
|
"""Get host routing table from /proc/1/net/route (host PID namespace)."""
|
||||||
try:
|
try:
|
||||||
result = subprocess.run(['ip', 'route', 'show'],
|
return self._parse_proc_net_route('/proc/1/net/route')
|
||||||
capture_output=True, text=True, timeout=10)
|
except Exception:
|
||||||
|
pass
|
||||||
routes = []
|
# Fallback: WireGuard container routing table
|
||||||
for line in result.stdout.strip().split('\n'):
|
try:
|
||||||
if line.strip():
|
result = subprocess.run(
|
||||||
routes.append({
|
['docker', 'exec', 'cell-wireguard', 'ip', 'route', 'show'],
|
||||||
'route': line.strip(),
|
capture_output=True, text=True, timeout=10,
|
||||||
'parsed': self._parse_route(line.strip())
|
)
|
||||||
})
|
if result.returncode == 0:
|
||||||
|
routes = []
|
||||||
return routes
|
for line in result.stdout.strip().split('\n'):
|
||||||
|
if line.strip():
|
||||||
except FileNotFoundError:
|
routes.append({'route': line.strip(), 'parsed': self._parse_route(line.strip())})
|
||||||
# System tools not available (development environment)
|
return routes
|
||||||
# Return mock routing table for development
|
|
||||||
return [
|
|
||||||
{
|
|
||||||
'route': 'default via 192.168.1.1 dev en0',
|
|
||||||
'parsed': {'destination': 'default', 'via': '192.168.1.1', 'dev': 'en0', 'metric': ''}
|
|
||||||
},
|
|
||||||
{
|
|
||||||
'route': '10.0.0.0/24 dev wg0',
|
|
||||||
'parsed': {'destination': '10.0.0.0/24', 'via': '', 'dev': 'wg0', 'metric': ''}
|
|
||||||
}
|
|
||||||
]
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Failed to get routing table: {e}")
|
logger.error(f"Failed to get routing table: {e}")
|
||||||
return []
|
return []
|
||||||
|
|
||||||
|
def _parse_proc_net_route(self, path: str) -> List[Dict]:
|
||||||
|
"""Parse /proc/net/route hex table into human-readable routes."""
|
||||||
|
import socket, struct
|
||||||
|
routes = []
|
||||||
|
with open(path) as f:
|
||||||
|
lines = f.readlines()[1:] # skip header
|
||||||
|
for line in lines:
|
||||||
|
parts = line.strip().split()
|
||||||
|
if len(parts) < 8:
|
||||||
|
continue
|
||||||
|
iface, dest_hex, gw_hex, mask_hex = parts[0], parts[1], parts[2], parts[7]
|
||||||
|
|
||||||
|
def hex_to_ip(h):
|
||||||
|
return socket.inet_ntoa(struct.pack('<I', int(h, 16)))
|
||||||
|
|
||||||
|
dest = hex_to_ip(dest_hex)
|
||||||
|
gw = hex_to_ip(gw_hex)
|
||||||
|
mask = hex_to_ip(mask_hex)
|
||||||
|
prefix = bin(struct.unpack('>I', socket.inet_aton(mask))[0]).count('1')
|
||||||
|
|
||||||
|
if dest == '0.0.0.0' and mask == '0.0.0.0':
|
||||||
|
dest_str = 'default'
|
||||||
|
route_str = f'default via {gw} dev {iface}'
|
||||||
|
else:
|
||||||
|
dest_str = f'{dest}/{prefix}'
|
||||||
|
route_str = f'{dest}/{prefix} dev {iface}' + (f' via {gw}' if gw != '0.0.0.0' else '')
|
||||||
|
|
||||||
|
routes.append({
|
||||||
|
'route': route_str,
|
||||||
|
'parsed': {'destination': dest_str, 'via': gw if gw != '0.0.0.0' else '', 'dev': iface, 'metric': ''},
|
||||||
|
})
|
||||||
|
return routes
|
||||||
|
|
||||||
def _parse_route(self, route_line: str) -> Dict:
|
def _parse_route(self, route_line: str) -> Dict:
|
||||||
"""Parse route line into components"""
|
"""Parse route line into components"""
|
||||||
|
|||||||
+36
-11
@@ -46,7 +46,10 @@ class VaultManager(BaseServiceManager):
|
|||||||
|
|
||||||
# Create directories
|
# Create directories
|
||||||
for directory in [self.vault_dir, self.ca_dir, self.certs_dir, self.keys_dir, self.trust_dir]:
|
for directory in [self.vault_dir, self.ca_dir, self.certs_dir, self.keys_dir, self.trust_dir]:
|
||||||
directory.mkdir(parents=True, exist_ok=True)
|
try:
|
||||||
|
directory.mkdir(parents=True, exist_ok=True)
|
||||||
|
except (PermissionError, OSError):
|
||||||
|
pass
|
||||||
|
|
||||||
# CA files
|
# CA files
|
||||||
self.ca_key_file = self.ca_dir / "ca.key"
|
self.ca_key_file = self.ca_dir / "ca.key"
|
||||||
@@ -63,7 +66,12 @@ class VaultManager(BaseServiceManager):
|
|||||||
|
|
||||||
self.trusted_keys = {}
|
self.trusted_keys = {}
|
||||||
self.trust_chains = {}
|
self.trust_chains = {}
|
||||||
self._load_or_create_ca()
|
self.ca_key = None
|
||||||
|
self.ca_cert = None
|
||||||
|
try:
|
||||||
|
self._load_or_create_ca()
|
||||||
|
except (PermissionError, OSError):
|
||||||
|
pass
|
||||||
self._load_trust_store()
|
self._load_trust_store()
|
||||||
|
|
||||||
def _load_or_create_ca(self) -> None:
|
def _load_or_create_ca(self) -> None:
|
||||||
@@ -150,19 +158,25 @@ class VaultManager(BaseServiceManager):
|
|||||||
|
|
||||||
def _load_or_create_fernet_key(self) -> None:
|
def _load_or_create_fernet_key(self) -> None:
|
||||||
"""Load existing Fernet key or create a new one."""
|
"""Load existing Fernet key or create a new one."""
|
||||||
if self.fernet_key_file.exists():
|
try:
|
||||||
with open(self.fernet_key_file, "rb") as f:
|
if self.fernet_key_file.exists():
|
||||||
self.fernet_key = f.read()
|
with open(self.fernet_key_file, "rb") as f:
|
||||||
else:
|
self.fernet_key = f.read()
|
||||||
|
else:
|
||||||
|
self.fernet_key = Fernet.generate_key()
|
||||||
|
with open(self.fernet_key_file, "wb") as f:
|
||||||
|
f.write(self.fernet_key)
|
||||||
|
self.fernet = Fernet(self.fernet_key)
|
||||||
|
except (PermissionError, OSError):
|
||||||
self.fernet_key = Fernet.generate_key()
|
self.fernet_key = Fernet.generate_key()
|
||||||
with open(self.fernet_key_file, "wb") as f:
|
self.fernet = Fernet(self.fernet_key)
|
||||||
f.write(self.fernet_key)
|
|
||||||
self.fernet = Fernet(self.fernet_key)
|
|
||||||
|
|
||||||
def generate_certificate(self, common_name: str, domains: Optional[List[str]] = None,
|
def generate_certificate(self, common_name: str, domains: Optional[List[str]] = None,
|
||||||
key_size: int = 2048, days: int = 365) -> Dict:
|
key_size: int = 2048, days: int = 365) -> Dict:
|
||||||
"""Generate a new TLS certificate."""
|
"""Generate a new TLS certificate."""
|
||||||
try:
|
try:
|
||||||
|
if self.ca_key is None or self.ca_cert is None:
|
||||||
|
raise RuntimeError("CA not initialized — cannot generate certificate")
|
||||||
# Generate private key
|
# Generate private key
|
||||||
private_key = rsa.generate_private_key(
|
private_key = rsa.generate_private_key(
|
||||||
public_exponent=65537,
|
public_exponent=65537,
|
||||||
@@ -415,12 +429,23 @@ class VaultManager(BaseServiceManager):
|
|||||||
# Check secrets
|
# Check secrets
|
||||||
secrets = self.list_secrets()
|
secrets = self.list_secrets()
|
||||||
|
|
||||||
|
ca_ok = ca_status.get('valid', False)
|
||||||
|
ca_cert_pem = None
|
||||||
|
if self.ca_cert_file.exists():
|
||||||
|
ca_cert_pem = self.ca_cert_file.read_text()
|
||||||
status = {
|
status = {
|
||||||
'running': ca_status.get('valid', False),
|
'running': ca_ok,
|
||||||
'status': 'online' if ca_status.get('valid', False) else 'offline',
|
'status': 'online' if ca_ok else 'offline',
|
||||||
|
'ca_configured': ca_ok,
|
||||||
|
'age_configured': ca_ok,
|
||||||
|
'age_public_key': None,
|
||||||
|
'ca_certificate': ca_cert_pem,
|
||||||
'ca_status': ca_status,
|
'ca_status': ca_status,
|
||||||
'certificates_count': len(certificates),
|
'certificates_count': len(certificates),
|
||||||
|
'certificates': certificates,
|
||||||
'trusted_keys_count': len(trusted_keys),
|
'trusted_keys_count': len(trusted_keys),
|
||||||
|
'trusted_keys': list(trusted_keys.values()) if isinstance(trusted_keys, dict) else trusted_keys,
|
||||||
|
'trust_chains_count': len(trusted_keys),
|
||||||
'secrets_count': len(secrets),
|
'secrets_count': len(secrets),
|
||||||
'timestamp': datetime.utcnow().isoformat()
|
'timestamp': datetime.utcnow().isoformat()
|
||||||
}
|
}
|
||||||
|
|||||||
+620
-837
File diff suppressed because it is too large
Load Diff
+34
-73
@@ -1,92 +1,53 @@
|
|||||||
# Personal Internet Cell - Caddy Configuration
|
|
||||||
# This serves as the main reverse proxy and TLS termination point
|
|
||||||
|
|
||||||
# Global settings
|
|
||||||
{
|
{
|
||||||
# Auto-generate certificates for .cell domains
|
auto_https off
|
||||||
auto_https disable_redirects
|
|
||||||
}
|
}
|
||||||
|
|
||||||
# Main cell domain - replace 'mycell' with your cell name
|
# Main cell domain — no service-IP restriction needed
|
||||||
mycell.cell {
|
http://mycell.cell, http://172.20.0.2:80 {
|
||||||
# TLS with internal CA
|
|
||||||
tls internal
|
|
||||||
|
|
||||||
# API endpoints
|
|
||||||
handle /api/* {
|
handle /api/* {
|
||||||
reverse_proxy cell-api:3000
|
reverse_proxy cell-api:3000
|
||||||
}
|
}
|
||||||
|
handle /calendar* {
|
||||||
# Web UI
|
|
||||||
handle / {
|
|
||||||
reverse_proxy cell-webui:80
|
|
||||||
}
|
|
||||||
|
|
||||||
# Email web interface
|
|
||||||
handle /mail {
|
|
||||||
reverse_proxy cell-mail:80
|
|
||||||
}
|
|
||||||
|
|
||||||
# Calendar and contacts
|
|
||||||
handle /calendar {
|
|
||||||
reverse_proxy cell-radicale:5232
|
reverse_proxy cell-radicale:5232
|
||||||
}
|
}
|
||||||
|
handle /files* {
|
||||||
# File storage
|
|
||||||
handle /files {
|
|
||||||
reverse_proxy cell-webdav:80
|
|
||||||
}
|
|
||||||
|
|
||||||
# DNS management interface
|
|
||||||
handle /dns {
|
|
||||||
reverse_proxy cell-dns:8080
|
|
||||||
}
|
|
||||||
|
|
||||||
# RainLoop Webmail
|
|
||||||
handle_path /webmail/* {
|
|
||||||
reverse_proxy cell-rainloop:8888
|
|
||||||
}
|
|
||||||
|
|
||||||
# FileGator File Browser
|
|
||||||
handle /files-ui* {
|
|
||||||
reverse_proxy cell-filegator:8080
|
reverse_proxy cell-filegator:8080
|
||||||
}
|
}
|
||||||
|
handle /webmail* {
|
||||||
|
reverse_proxy cell-rainloop:8888
|
||||||
|
}
|
||||||
|
handle {
|
||||||
|
reverse_proxy cell-webui:80
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
# Peer cell domains (will be dynamically added)
|
# Per-service virtual IPs — each gets its own IP so iptables can target them
|
||||||
# Example: bob.cell {
|
http://calendar.cell, http://172.20.0.21:80 {
|
||||||
# reverse_proxy cell-wireguard:51820
|
reverse_proxy cell-radicale:5232
|
||||||
# }
|
}
|
||||||
|
|
||||||
# Local development
|
http://files.cell, http://172.20.0.22:80 {
|
||||||
localhost {
|
reverse_proxy cell-filegator:8080
|
||||||
# API endpoints
|
}
|
||||||
|
|
||||||
|
http://mail.cell, http://webmail.cell, http://172.20.0.23:80 {
|
||||||
|
reverse_proxy cell-rainloop:8888
|
||||||
|
}
|
||||||
|
|
||||||
|
http://webdav.cell, http://172.20.0.24:80 {
|
||||||
|
reverse_proxy cell-webdav:80
|
||||||
|
}
|
||||||
|
|
||||||
|
http://api.cell {
|
||||||
|
reverse_proxy cell-api:3000
|
||||||
|
}
|
||||||
|
|
||||||
|
# Catch-all for direct IP / localhost
|
||||||
|
:80 {
|
||||||
handle /api/* {
|
handle /api/* {
|
||||||
reverse_proxy cell-api:3000
|
reverse_proxy cell-api:3000
|
||||||
}
|
}
|
||||||
|
handle {
|
||||||
# Web UI
|
|
||||||
handle / {
|
|
||||||
reverse_proxy cell-webui:80
|
reverse_proxy cell-webui:80
|
||||||
}
|
}
|
||||||
|
|
||||||
# Email web interface
|
|
||||||
handle /mail {
|
|
||||||
reverse_proxy cell-mail:80
|
|
||||||
}
|
|
||||||
|
|
||||||
# Calendar and contacts
|
|
||||||
handle /calendar {
|
|
||||||
reverse_proxy cell-radicale:5232
|
|
||||||
}
|
|
||||||
|
|
||||||
# File storage
|
|
||||||
handle /files {
|
|
||||||
reverse_proxy cell-webdav:80
|
|
||||||
}
|
|
||||||
|
|
||||||
# DNS management interface
|
|
||||||
handle /dns {
|
|
||||||
reverse_proxy cell-dns:8080
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
@@ -0,0 +1 @@
|
|||||||
|
{}
|
||||||
@@ -1,42 +1,16 @@
|
|||||||
# Personal Internet Cell - CoreDNS Configuration
|
|
||||||
# Handles .cell TLD resolution and peer discovery
|
|
||||||
|
|
||||||
. {
|
. {
|
||||||
# Forward all non-.cell domains to upstream DNS
|
|
||||||
forward . 8.8.8.8 1.1.1.1
|
forward . 8.8.8.8 1.1.1.1
|
||||||
|
|
||||||
# Cache responses
|
|
||||||
cache
|
cache
|
||||||
|
|
||||||
# Log queries
|
|
||||||
log
|
log
|
||||||
|
|
||||||
# Health check endpoint
|
|
||||||
health
|
health
|
||||||
}
|
}
|
||||||
|
|
||||||
# .cell TLD zone
|
|
||||||
cell {
|
cell {
|
||||||
# File-based zone for static records
|
|
||||||
file /data/cell.zone
|
file /data/cell.zone
|
||||||
|
|
||||||
# Dynamic peer records (will be managed by API)
|
|
||||||
reload
|
|
||||||
|
|
||||||
# Allow zone transfers
|
|
||||||
transfer {
|
|
||||||
to *
|
|
||||||
}
|
|
||||||
|
|
||||||
# Log queries
|
|
||||||
log
|
log
|
||||||
}
|
}
|
||||||
|
|
||||||
# Local network zone
|
|
||||||
local.cell {
|
local.cell {
|
||||||
# File-based zone for local services
|
|
||||||
file /data/local.zone
|
file /data/local.zone
|
||||||
|
|
||||||
# Log queries
|
|
||||||
log
|
log
|
||||||
}
|
}
|
||||||
@@ -0,0 +1,3 @@
|
|||||||
|
OVERRIDE_HOSTNAME=mail.cell.local
|
||||||
|
POSTMASTER_ADDRESS=admin@cell.local
|
||||||
|
LOG_LEVEL=warn
|
||||||
|
|||||||
@@ -13,10 +13,6 @@ server pool.ntp.org iburst
|
|||||||
# Local stratum for this server
|
# Local stratum for this server
|
||||||
local stratum 10
|
local stratum 10
|
||||||
|
|
||||||
# Log settings
|
|
||||||
logdir /var/log/chrony
|
|
||||||
log measurements statistics tracking
|
|
||||||
|
|
||||||
# Key file for authentication (optional)
|
# Key file for authentication (optional)
|
||||||
# keyfile /etc/chrony/chrony.keys
|
# keyfile /etc/chrony/chrony.keys
|
||||||
|
|
||||||
|
|||||||
@@ -0,0 +1,11 @@
|
|||||||
|
[server]
|
||||||
|
hosts = 0.0.0.0:5232
|
||||||
|
|
||||||
|
[auth]
|
||||||
|
type = none
|
||||||
|
|
||||||
|
[storage]
|
||||||
|
filesystem_folder = /data/collections
|
||||||
|
|
||||||
|
[logging]
|
||||||
|
level = warning
|
||||||
+114
-19
@@ -1,7 +1,7 @@
|
|||||||
version: '3.3'
|
version: '3.3'
|
||||||
|
|
||||||
services:
|
services:
|
||||||
# Reverse Proxy - Caddy for TLS termination and routing
|
# Reverse Proxy - Caddy for routing all .cell traffic
|
||||||
caddy:
|
caddy:
|
||||||
image: caddy:2-alpine
|
image: caddy:2-alpine
|
||||||
container_name: cell-caddy
|
container_name: cell-caddy
|
||||||
@@ -13,13 +13,22 @@ services:
|
|||||||
- ./data/caddy:/data
|
- ./data/caddy:/data
|
||||||
- ./config/caddy/certs:/config/caddy/certs
|
- ./config/caddy/certs:/config/caddy/certs
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
|
cap_add:
|
||||||
|
- NET_ADMIN
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.2
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# DNS Server - CoreDNS for .cell TLD resolution
|
# DNS Server - CoreDNS for .cell TLD resolution
|
||||||
dns:
|
dns:
|
||||||
image: coredns/coredns:latest
|
image: coredns/coredns:latest
|
||||||
container_name: cell-dns
|
container_name: cell-dns
|
||||||
|
command: ["-conf", "/etc/coredns/Corefile"]
|
||||||
ports:
|
ports:
|
||||||
- "53:53/udp"
|
- "53:53/udp"
|
||||||
- "53:53/tcp"
|
- "53:53/tcp"
|
||||||
@@ -28,7 +37,13 @@ services:
|
|||||||
- ./data/dns:/data
|
- ./data/dns:/data
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.3
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# DHCP Server - dnsmasq for IP leasing
|
# DHCP Server - dnsmasq for IP leasing
|
||||||
dhcp:
|
dhcp:
|
||||||
@@ -41,10 +56,16 @@ services:
|
|||||||
- ./data/dhcp:/var/lib/misc
|
- ./data/dhcp:/var/lib/misc
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.4
|
||||||
command: ["/bin/sh", "-c", "apk add --no-cache dnsmasq && dnsmasq -d -C /etc/dnsmasq.conf"]
|
command: ["/bin/sh", "-c", "apk add --no-cache dnsmasq && dnsmasq -d -C /etc/dnsmasq.conf"]
|
||||||
cap_add:
|
cap_add:
|
||||||
- NET_ADMIN
|
- NET_ADMIN
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# NTP Server - chrony for time synchronization
|
# NTP Server - chrony for time synchronization
|
||||||
ntp:
|
ntp:
|
||||||
@@ -56,15 +77,23 @@ services:
|
|||||||
- ./config/ntp/chrony.conf:/etc/chrony/chrony.conf
|
- ./config/ntp/chrony.conf:/etc/chrony/chrony.conf
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
command: ["/bin/sh", "-c", "apk add --no-cache chrony && exec chronyd -d -f /etc/chrony/chrony.conf -n"]
|
ipv4_address: 172.20.0.5
|
||||||
|
cap_add:
|
||||||
|
- SYS_TIME
|
||||||
|
command: ["/bin/sh", "-c", "apk add --no-cache chrony && rm -f /var/run/chrony/chronyd.pid && exec chronyd -d -f /etc/chrony/chrony.conf -n"]
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# Email Server - Postfix + Dovecot
|
# Email Server - Postfix + Dovecot
|
||||||
mail:
|
mail:
|
||||||
image: mailserver/docker-mailserver:latest
|
image: mailserver/docker-mailserver:latest
|
||||||
container_name: cell-mail
|
container_name: cell-mail
|
||||||
hostname: mail
|
hostname: mail
|
||||||
domainname: yourdomain.com # <-- Set your domain!
|
domainname: cell.local
|
||||||
env_file: ./config/mail/mailserver.env
|
env_file: ./config/mail/mailserver.env
|
||||||
ports:
|
ports:
|
||||||
- "25:25"
|
- "25:25"
|
||||||
@@ -78,9 +107,15 @@ services:
|
|||||||
- ./config/mail/ssl:/etc/letsencrypt
|
- ./config/mail/ssl:/etc/letsencrypt
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.6
|
||||||
cap_add:
|
cap_add:
|
||||||
- NET_ADMIN
|
- NET_ADMIN
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# Calendar & Contacts - Radicale
|
# Calendar & Contacts - Radicale
|
||||||
radicale:
|
radicale:
|
||||||
@@ -93,7 +128,13 @@ services:
|
|||||||
- ./data/radicale:/data
|
- ./data/radicale:/data
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.7
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# File Storage - WebDAV
|
# File Storage - WebDAV
|
||||||
webdav:
|
webdav:
|
||||||
@@ -101,17 +142,30 @@ services:
|
|||||||
container_name: cell-webdav
|
container_name: cell-webdav
|
||||||
ports:
|
ports:
|
||||||
- "8080:80"
|
- "8080:80"
|
||||||
|
environment:
|
||||||
|
- AUTH_TYPE=Basic
|
||||||
|
- USERNAME=admin
|
||||||
|
- PASSWORD=admin123
|
||||||
volumes:
|
volumes:
|
||||||
- ./data/files:/var/lib/dav
|
- ./data/files:/var/lib/dav
|
||||||
- ./config/webdav/users.passwd:/etc/users.passwd
|
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.8
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# WireGuard VPN
|
# WireGuard VPN
|
||||||
wireguard:
|
wireguard:
|
||||||
image: linuxserver/wireguard:latest
|
image: linuxserver/wireguard:latest
|
||||||
container_name: cell-wireguard
|
container_name: cell-wireguard
|
||||||
|
environment:
|
||||||
|
- SERVERMODE=true
|
||||||
|
- PUID=911
|
||||||
|
- PGID=911
|
||||||
ports:
|
ports:
|
||||||
- "51820:51820/udp"
|
- "51820:51820/udp"
|
||||||
volumes:
|
volumes:
|
||||||
@@ -119,12 +173,21 @@ services:
|
|||||||
- /lib/modules:/lib/modules
|
- /lib/modules:/lib/modules
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.9
|
||||||
cap_add:
|
cap_add:
|
||||||
- NET_ADMIN
|
- NET_ADMIN
|
||||||
- SYS_MODULE
|
- SYS_MODULE
|
||||||
|
sysctls:
|
||||||
|
- net.ipv4.conf.all.src_valid_mark=1
|
||||||
|
- net.ipv4.ip_forward=1
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# CLI API Server
|
# API Server
|
||||||
api:
|
api:
|
||||||
build: ./api
|
build: ./api
|
||||||
container_name: cell-api
|
container_name: cell-api
|
||||||
@@ -132,15 +195,25 @@ services:
|
|||||||
- "3000:3000"
|
- "3000:3000"
|
||||||
volumes:
|
volumes:
|
||||||
- ./data/api:/app/data
|
- ./data/api:/app/data
|
||||||
|
- ./data/dns:/app/data/dns
|
||||||
- ./config/api:/app/config
|
- ./config/api:/app/config
|
||||||
- ./config/wireguard:/app/config/wireguard
|
- ./config/wireguard:/app/config/wireguard
|
||||||
|
- ./config/dns:/app/config/dns
|
||||||
|
- ./data/logs:/app/api/data/logs
|
||||||
- /var/run/docker.sock:/var/run/docker.sock
|
- /var/run/docker.sock:/var/run/docker.sock
|
||||||
|
pid: host
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.10
|
||||||
depends_on:
|
depends_on:
|
||||||
- wireguard
|
- wireguard
|
||||||
- dns
|
- dns
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
# Web UI - React + Vite
|
# Web UI - React + Vite
|
||||||
webui:
|
webui:
|
||||||
@@ -150,27 +223,49 @@ services:
|
|||||||
- "8081:80"
|
- "8081:80"
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.11
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
|
# Webmail - RainLoop
|
||||||
rainloop:
|
rainloop:
|
||||||
image: hardware/rainloop
|
image: hardware/rainloop
|
||||||
container_name: cell-rainloop
|
container_name: cell-rainloop
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.12
|
||||||
ports:
|
ports:
|
||||||
- "8888:8888"
|
- "8888:8888"
|
||||||
|
volumes:
|
||||||
|
- ./data/rainloop:/rainloop/data
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
|
# File Manager - FileGator
|
||||||
filegator:
|
filegator:
|
||||||
image: filegator/filegator
|
image: filegator/filegator
|
||||||
container_name: cell-filegator
|
container_name: cell-filegator
|
||||||
restart: unless-stopped
|
restart: unless-stopped
|
||||||
networks:
|
networks:
|
||||||
- cell-network
|
cell-network:
|
||||||
|
ipv4_address: 172.20.0.13
|
||||||
ports:
|
ports:
|
||||||
- "8082:8080"
|
- "8082:8080"
|
||||||
environment:
|
volumes:
|
||||||
- FG_PUBLIC_PATH=/files-ui
|
- ./data/filegator:/var/www/filegator/private
|
||||||
|
logging:
|
||||||
|
driver: json-file
|
||||||
|
options:
|
||||||
|
max-size: "10m"
|
||||||
|
max-file: "5"
|
||||||
|
|
||||||
networks:
|
networks:
|
||||||
cell-network:
|
cell-network:
|
||||||
|
|||||||
+141
-34
@@ -1,9 +1,19 @@
|
|||||||
#!/usr/bin/env python3
|
#!/usr/bin/env python3
|
||||||
import os
|
"""
|
||||||
import sys
|
PIC setup script — run once on a fresh host to initialise a new cell.
|
||||||
import subprocess
|
|
||||||
|
|
||||||
# List of required directories (relative to project root)
|
Env vars (all optional, have defaults):
|
||||||
|
CELL_NAME cell identity name (default: mycell)
|
||||||
|
CELL_DOMAIN DNS TLD for this cell (default: cell)
|
||||||
|
VPN_ADDRESS WireGuard server address (default: 10.0.0.1/24)
|
||||||
|
WG_PORT WireGuard listen port (default: 51820)
|
||||||
|
"""
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import subprocess
|
||||||
|
import sys
|
||||||
|
|
||||||
|
# ── directories ────────────────────────────────────────────────────────────────
|
||||||
REQUIRED_DIRS = [
|
REQUIRED_DIRS = [
|
||||||
'config/caddy/certs',
|
'config/caddy/certs',
|
||||||
'config/dns',
|
'config/dns',
|
||||||
@@ -28,9 +38,11 @@ REQUIRED_DIRS = [
|
|||||||
'data/vault/keys',
|
'data/vault/keys',
|
||||||
'data/vault/trust',
|
'data/vault/trust',
|
||||||
'data/vault/ca',
|
'data/vault/ca',
|
||||||
|
'data/logs',
|
||||||
|
'data/wireguard/keys/peers',
|
||||||
|
'data/wireguard/wg_confs',
|
||||||
]
|
]
|
||||||
|
|
||||||
# List of required files (relative to project root)
|
|
||||||
REQUIRED_FILES = [
|
REQUIRED_FILES = [
|
||||||
'config/caddy/Caddyfile',
|
'config/caddy/Caddyfile',
|
||||||
'config/dns/Corefile',
|
'config/dns/Corefile',
|
||||||
@@ -40,60 +52,155 @@ REQUIRED_FILES = [
|
|||||||
'config/webdav/users.passwd',
|
'config/webdav/users.passwd',
|
||||||
]
|
]
|
||||||
|
|
||||||
# Helper to create directories
|
ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
|
||||||
def ensure_dir(path):
|
|
||||||
|
|
||||||
|
def ensure_dir(rel):
|
||||||
|
path = os.path.join(ROOT, rel)
|
||||||
if not os.path.exists(path):
|
if not os.path.exists(path):
|
||||||
os.makedirs(path, exist_ok=True)
|
os.makedirs(path, exist_ok=True)
|
||||||
print(f"[CREATED] Directory: {path}")
|
print(f'[CREATED] {rel}')
|
||||||
# Add .gitkeep to empty dirs
|
open(os.path.join(path, '.gitkeep'), 'w').close()
|
||||||
gitkeep = os.path.join(path, '.gitkeep')
|
|
||||||
with open(gitkeep, 'w') as f:
|
|
||||||
f.write('')
|
|
||||||
else:
|
else:
|
||||||
print(f"[EXISTS] Directory: {path}")
|
print(f'[EXISTS] {rel}')
|
||||||
|
|
||||||
# Helper to create empty files if missing
|
|
||||||
def ensure_file(path):
|
def ensure_file(rel):
|
||||||
|
path = os.path.join(ROOT, rel)
|
||||||
if not os.path.exists(path):
|
if not os.path.exists(path):
|
||||||
parent = os.path.dirname(path)
|
os.makedirs(os.path.dirname(path), exist_ok=True)
|
||||||
if parent and not os.path.exists(parent):
|
open(path, 'w').close()
|
||||||
os.makedirs(parent, exist_ok=True)
|
print(f'[CREATED] {rel}')
|
||||||
print(f"[CREATED] Directory: {parent}")
|
|
||||||
with open(path, 'w') as f:
|
|
||||||
f.write('')
|
|
||||||
print(f"[CREATED] File: {path}")
|
|
||||||
else:
|
else:
|
||||||
print(f"[EXISTS] File: {path}")
|
print(f'[EXISTS] {rel}')
|
||||||
|
|
||||||
|
|
||||||
# Optionally generate a self-signed CA cert for Caddy
|
|
||||||
def ensure_caddy_ca_cert():
|
def ensure_caddy_ca_cert():
|
||||||
cert_dir = os.path.join('config', 'caddy', 'certs')
|
cert_dir = os.path.join(ROOT, 'config', 'caddy', 'certs')
|
||||||
ca_key = os.path.join(cert_dir, 'ca.key')
|
ca_key = os.path.join(cert_dir, 'ca.key')
|
||||||
ca_crt = os.path.join(cert_dir, 'ca.crt')
|
ca_crt = os.path.join(cert_dir, 'ca.crt')
|
||||||
if os.path.exists(ca_key) and os.path.exists(ca_crt):
|
if os.path.exists(ca_key) and os.path.exists(ca_crt):
|
||||||
print(f"[EXISTS] Caddy CA cert and key: {ca_crt}, {ca_key}")
|
print('[EXISTS] Caddy CA cert')
|
||||||
return
|
return
|
||||||
print("[INFO] Generating self-signed CA certificate for Caddy...")
|
print('[INFO] Generating Caddy CA certificate...')
|
||||||
try:
|
try:
|
||||||
subprocess.run([
|
subprocess.run([
|
||||||
'openssl', 'req', '-x509', '-newkey', 'rsa:4096',
|
'openssl', 'req', '-x509', '-newkey', 'rsa:4096',
|
||||||
'-keyout', ca_key, '-out', ca_crt, '-days', '365', '-nodes',
|
'-keyout', ca_key, '-out', ca_crt, '-days', '365', '-nodes',
|
||||||
'-subj', '/C=US/ST=State/L=City/O=PersonalInternetCell/CN=CellCA'
|
'-subj', '/C=US/ST=State/L=City/O=PersonalInternetCell/CN=CellCA'
|
||||||
], check=True)
|
], check=True, capture_output=True)
|
||||||
print(f"[CREATED] Caddy CA cert and key: {ca_crt}, {ca_key}")
|
print(f'[CREATED] Caddy CA cert')
|
||||||
except FileNotFoundError:
|
except FileNotFoundError:
|
||||||
print("[WARN] openssl not found, skipping CA cert generation.")
|
print('[WARN] openssl not found — skipping CA cert generation')
|
||||||
except subprocess.CalledProcessError:
|
except subprocess.CalledProcessError as e:
|
||||||
print("[ERROR] openssl failed to generate CA cert.")
|
print(f'[ERROR] openssl failed: {e}')
|
||||||
|
|
||||||
|
|
||||||
|
def _gen_keys_python():
|
||||||
|
"""Generate WireGuard keypair using the cryptography library (no wg binary needed)."""
|
||||||
|
import base64
|
||||||
|
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
|
||||||
|
private_key = X25519PrivateKey.generate()
|
||||||
|
private_bytes = private_key.private_bytes_raw()
|
||||||
|
public_bytes = private_key.public_key().public_bytes_raw()
|
||||||
|
return base64.b64encode(private_bytes).decode(), base64.b64encode(public_bytes).decode()
|
||||||
|
|
||||||
|
|
||||||
|
def generate_wg_keys():
|
||||||
|
keys_dir = os.path.join(ROOT, 'data', 'wireguard', 'keys')
|
||||||
|
priv_path = os.path.join(keys_dir, 'server_private.key')
|
||||||
|
pub_path = os.path.join(keys_dir, 'server_public.key')
|
||||||
|
if os.path.exists(priv_path) and os.path.exists(pub_path):
|
||||||
|
print('[EXISTS] WireGuard server keys')
|
||||||
|
return open(priv_path).read().strip(), open(pub_path).read().strip()
|
||||||
|
print('[INFO] Generating WireGuard server keys...')
|
||||||
|
os.makedirs(keys_dir, exist_ok=True)
|
||||||
|
# Try wg binary first; fall back to Python cryptography library
|
||||||
|
try:
|
||||||
|
priv = subprocess.check_output(['wg', 'genkey']).decode().strip()
|
||||||
|
pub = subprocess.check_output(['wg', 'pubkey'], input=priv.encode()).decode().strip()
|
||||||
|
except FileNotFoundError:
|
||||||
|
print('[INFO] wg not found — using Python cryptography library')
|
||||||
|
priv, pub = _gen_keys_python()
|
||||||
|
with open(priv_path, 'w') as f:
|
||||||
|
f.write(priv + '\n')
|
||||||
|
os.chmod(priv_path, 0o600)
|
||||||
|
with open(pub_path, 'w') as f:
|
||||||
|
f.write(pub + '\n')
|
||||||
|
print(f'[CREATED] WireGuard server keys pub={pub[:12]}...')
|
||||||
|
return priv, pub
|
||||||
|
|
||||||
|
|
||||||
|
def write_wg0_conf(private_key: str, address: str, port: int):
|
||||||
|
wg_conf = os.path.join(ROOT, 'config', 'wireguard', 'wg0.conf')
|
||||||
|
if os.path.exists(wg_conf):
|
||||||
|
print('[EXISTS] config/wireguard/wg0.conf')
|
||||||
|
return
|
||||||
|
server_ip = address.split('/')[0]
|
||||||
|
content = (
|
||||||
|
f'[Interface]\n'
|
||||||
|
f'PrivateKey = {private_key}\n'
|
||||||
|
f'Address = {address}\n'
|
||||||
|
f'ListenPort = {port}\n'
|
||||||
|
f'PostUp = iptables -A FORWARD -i %i -j ACCEPT; '
|
||||||
|
f'iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; '
|
||||||
|
f'sysctl -q net.ipv4.conf.all.rp_filter=0\n'
|
||||||
|
f'PostDown = iptables -D FORWARD -i %i -j ACCEPT; '
|
||||||
|
f'iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; '
|
||||||
|
f'sysctl -q net.ipv4.conf.all.rp_filter=1\n'
|
||||||
|
)
|
||||||
|
with open(wg_conf, 'w') as f:
|
||||||
|
f.write(content)
|
||||||
|
os.chmod(wg_conf, 0o600)
|
||||||
|
print(f'[CREATED] config/wireguard/wg0.conf address={address} port={port}')
|
||||||
|
|
||||||
|
|
||||||
|
def write_cell_config(cell_name: str, domain: str, port: int):
|
||||||
|
cfg_path = os.path.join(ROOT, 'config', 'api', 'cell_config.json')
|
||||||
|
if os.path.exists(cfg_path):
|
||||||
|
try:
|
||||||
|
existing = json.loads(open(cfg_path).read())
|
||||||
|
if existing and existing != {}:
|
||||||
|
print('[EXISTS] config/api/cell_config.json')
|
||||||
|
return
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
config = {
|
||||||
|
'_identity': {
|
||||||
|
'cell_name': cell_name,
|
||||||
|
'domain': domain,
|
||||||
|
'ip_range': '172.20.0.0/16',
|
||||||
|
'wireguard_port': port,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
with open(cfg_path, 'w') as f:
|
||||||
|
json.dump(config, f, indent=2)
|
||||||
|
print(f'[CREATED] config/api/cell_config.json name={cell_name} domain={domain}')
|
||||||
|
|
||||||
|
|
||||||
def main():
|
def main():
|
||||||
print("--- Personal Internet Cell: Setup Script ---")
|
cell_name = os.environ.get('CELL_NAME', 'mycell')
|
||||||
|
domain = os.environ.get('CELL_DOMAIN', 'cell')
|
||||||
|
vpn_address = os.environ.get('VPN_ADDRESS', '10.0.0.1/24')
|
||||||
|
wg_port = int(os.environ.get('WG_PORT', '51820'))
|
||||||
|
|
||||||
|
print('--- Personal Internet Cell: Setup ---')
|
||||||
|
print(f' cell={cell_name} domain={domain} vpn={vpn_address} port={wg_port}')
|
||||||
|
print()
|
||||||
|
|
||||||
for d in REQUIRED_DIRS:
|
for d in REQUIRED_DIRS:
|
||||||
ensure_dir(d)
|
ensure_dir(d)
|
||||||
for f in REQUIRED_FILES:
|
for f in REQUIRED_FILES:
|
||||||
ensure_file(f)
|
ensure_file(f)
|
||||||
|
|
||||||
ensure_caddy_ca_cert()
|
ensure_caddy_ca_cert()
|
||||||
print("--- Setup complete! ---")
|
priv, _pub = generate_wg_keys()
|
||||||
|
write_wg0_conf(priv, vpn_address, wg_port)
|
||||||
|
write_cell_config(cell_name, domain, wg_port)
|
||||||
|
|
||||||
|
print()
|
||||||
|
print('--- Setup complete! Run: make start ---')
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == '__main__':
|
||||||
main()
|
main()
|
||||||
+35
-19
@@ -104,7 +104,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
data = json.loads(response.data)
|
data = json.loads(response.data)
|
||||||
self.assertIn('error', data)
|
self.assertIn('error', data)
|
||||||
|
|
||||||
@patch('api.app.network_manager')
|
@patch('app.network_manager')
|
||||||
def test_dns_records_endpoints(self, mock_network):
|
def test_dns_records_endpoints(self, mock_network):
|
||||||
# Mock get_dns_records
|
# Mock get_dns_records
|
||||||
mock_network.get_dns_records.return_value = [{'name': 'test', 'type': 'A', 'value': '1.2.3.4'}]
|
mock_network.get_dns_records.return_value = [{'name': 'test', 'type': 'A', 'value': '1.2.3.4'}]
|
||||||
@@ -129,7 +129,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
response = self.client.delete('/api/dns/records', data=json.dumps({'name': 'test'}), content_type='application/json')
|
response = self.client.delete('/api/dns/records', data=json.dumps({'name': 'test'}), content_type='application/json')
|
||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
|
|
||||||
@patch('api.app.network_manager')
|
@patch('app.network_manager')
|
||||||
def test_dhcp_endpoints(self, mock_network):
|
def test_dhcp_endpoints(self, mock_network):
|
||||||
# Mock get_dhcp_leases
|
# Mock get_dhcp_leases
|
||||||
mock_network.get_dhcp_leases.return_value = [{'ip': '10.0.0.2', 'mac': '00:11:22:33:44:55'}]
|
mock_network.get_dhcp_leases.return_value = [{'ip': '10.0.0.2', 'mac': '00:11:22:33:44:55'}]
|
||||||
@@ -154,7 +154,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
|
response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
|
||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
|
|
||||||
@patch('api.app.network_manager')
|
@patch('app.network_manager')
|
||||||
def test_ntp_status_endpoint(self, mock_network):
|
def test_ntp_status_endpoint(self, mock_network):
|
||||||
# Mock get_ntp_status
|
# Mock get_ntp_status
|
||||||
mock_network.get_ntp_status.return_value = {'running': True, 'stats': {}}
|
mock_network.get_ntp_status.return_value = {'running': True, 'stats': {}}
|
||||||
@@ -167,7 +167,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
response = self.client.get('/api/ntp/status')
|
response = self.client.get('/api/ntp/status')
|
||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
|
|
||||||
@patch('api.app.network_manager')
|
@patch('app.network_manager')
|
||||||
def test_network_test_endpoint(self, mock_network):
|
def test_network_test_endpoint(self, mock_network):
|
||||||
# Mock test_connectivity
|
# Mock test_connectivity
|
||||||
mock_network.test_connectivity.return_value = {'success': True, 'output': 'ok'}
|
mock_network.test_connectivity.return_value = {'success': True, 'output': 'ok'}
|
||||||
@@ -180,7 +180,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
response = self.client.post('/api/network/test', data=json.dumps({'target': '8.8.8.8'}), content_type='application/json')
|
response = self.client.post('/api/network/test', data=json.dumps({'target': '8.8.8.8'}), content_type='application/json')
|
||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
|
|
||||||
@patch('api.app.wireguard_manager')
|
@patch('app.wireguard_manager')
|
||||||
def test_wireguard_endpoints(self, mock_wg):
|
def test_wireguard_endpoints(self, mock_wg):
|
||||||
# /api/wireguard/keys (GET)
|
# /api/wireguard/keys (GET)
|
||||||
mock_wg.get_keys.return_value = {'public_key': 'pub', 'private_key': 'priv'}
|
mock_wg.get_keys.return_value = {'public_key': 'pub', 'private_key': 'priv'}
|
||||||
@@ -274,7 +274,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_wg.get_peer_config.side_effect = None
|
mock_wg.get_peer_config.side_effect = None
|
||||||
|
|
||||||
@patch('api.app.peer_registry')
|
@patch('app.peer_registry')
|
||||||
def test_peer_registry_endpoints(self, mock_peers):
|
def test_peer_registry_endpoints(self, mock_peers):
|
||||||
# /api/peers (GET)
|
# /api/peers (GET)
|
||||||
mock_peers.list_peers.return_value = [{'peer': 'peer1', 'ip': '10.0.0.2'}]
|
mock_peers.list_peers.return_value = [{'peer': 'peer1', 'ip': '10.0.0.2'}]
|
||||||
@@ -341,7 +341,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_peers.update_peer_ip.side_effect = None
|
mock_peers.update_peer_ip.side_effect = None
|
||||||
|
|
||||||
@patch('api.app.email_manager')
|
@patch('app.email_manager')
|
||||||
def test_email_endpoints(self, mock_email):
|
def test_email_endpoints(self, mock_email):
|
||||||
# Ensure all relevant mock methods return JSON-serializable values
|
# Ensure all relevant mock methods return JSON-serializable values
|
||||||
mock_email.get_users.return_value = [{'username': 'user1', 'domain': 'cell', 'email': 'user1@cell'}]
|
mock_email.get_users.return_value = [{'username': 'user1', 'domain': 'cell', 'email': 'user1@cell'}]
|
||||||
@@ -402,7 +402,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_email.get_mailbox_info.side_effect = None
|
mock_email.get_mailbox_info.side_effect = None
|
||||||
|
|
||||||
@patch('api.app.calendar_manager')
|
@patch('app.calendar_manager')
|
||||||
def test_calendar_endpoints(self, mock_calendar):
|
def test_calendar_endpoints(self, mock_calendar):
|
||||||
# Mock return values for all relevant calendar_manager methods
|
# Mock return values for all relevant calendar_manager methods
|
||||||
mock_calendar.get_users.return_value = [{'username': 'user1', 'collections': {'calendars': ['cal1'], 'contacts': ['c1']}}]
|
mock_calendar.get_users.return_value = [{'username': 'user1', 'collections': {'calendars': ['cal1'], 'contacts': ['c1']}}]
|
||||||
@@ -471,7 +471,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_calendar.test_connectivity.side_effect = None
|
mock_calendar.test_connectivity.side_effect = None
|
||||||
|
|
||||||
@patch('api.app.file_manager')
|
@patch('app.file_manager')
|
||||||
def test_file_endpoints(self, mock_file):
|
def test_file_endpoints(self, mock_file):
|
||||||
# Mock return values for all relevant file_manager methods
|
# Mock return values for all relevant file_manager methods
|
||||||
mock_file.get_users.return_value = [{'username': 'user1', 'storage_info': {'total_files': 1, 'total_size_bytes': 1000}}]
|
mock_file.get_users.return_value = [{'username': 'user1', 'storage_info': {'total_files': 1, 'total_size_bytes': 1000}}]
|
||||||
@@ -516,7 +516,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_file.test_connectivity.side_effect = None
|
mock_file.test_connectivity.side_effect = None
|
||||||
|
|
||||||
@patch('api.app.routing_manager')
|
@patch('app.routing_manager')
|
||||||
def test_routing_endpoints(self, mock_routing):
|
def test_routing_endpoints(self, mock_routing):
|
||||||
# Mock return values for all relevant routing_manager methods
|
# Mock return values for all relevant routing_manager methods
|
||||||
mock_routing.get_status.return_value = {'routing_running': True, 'routes': []}
|
mock_routing.get_status.return_value = {'routing_running': True, 'routes': []}
|
||||||
@@ -531,7 +531,9 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
mock_routing.add_exit_node.return_value = {'result': 'ok'}
|
mock_routing.add_exit_node.return_value = {'result': 'ok'}
|
||||||
mock_routing.add_bridge_route.return_value = {'result': 'ok'}
|
mock_routing.add_bridge_route.return_value = {'result': 'ok'}
|
||||||
mock_routing.add_split_route.return_value = {'result': 'ok'}
|
mock_routing.add_split_route.return_value = {'result': 'ok'}
|
||||||
mock_routing.test_connectivity.return_value = {'success': True}
|
mock_routing.test_routing_connectivity.return_value = {'ping': {'success': True, 'output': '', 'error': ''}}
|
||||||
|
mock_routing.remove_firewall_rule.return_value = True
|
||||||
|
mock_routing.get_live_iptables.return_value = {'filter': '', 'nat': ''}
|
||||||
mock_routing.get_routing_logs.return_value = {'logs': 'logdata'}
|
mock_routing.get_routing_logs.return_value = {'logs': 'logdata'}
|
||||||
# /api/routing/status (GET)
|
# /api/routing/status (GET)
|
||||||
response = self.client.get('/api/routing/status')
|
response = self.client.get('/api/routing/status')
|
||||||
@@ -618,12 +620,26 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_routing.add_split_route.side_effect = None
|
mock_routing.add_split_route.side_effect = None
|
||||||
# /api/routing/connectivity (POST)
|
# /api/routing/connectivity (POST)
|
||||||
response = self.client.post('/api/routing/connectivity', data=json.dumps({'target': '10.0.0.2'}), content_type='application/json')
|
response = self.client.post('/api/routing/connectivity', data=json.dumps({'target_ip': '8.8.8.8'}), content_type='application/json')
|
||||||
self.assertEqual(response.status_code, 200)
|
self.assertEqual(response.status_code, 200)
|
||||||
mock_routing.test_connectivity.side_effect = Exception('fail')
|
mock_routing.test_routing_connectivity.side_effect = Exception('fail')
|
||||||
response = self.client.post('/api/routing/connectivity', data=json.dumps({'target': '10.0.0.2'}), content_type='application/json')
|
response = self.client.post('/api/routing/connectivity', data=json.dumps({'target_ip': '8.8.8.8'}), content_type='application/json')
|
||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_routing.test_connectivity.side_effect = None
|
mock_routing.test_routing_connectivity.side_effect = None
|
||||||
|
# /api/routing/firewall/<id> (DELETE)
|
||||||
|
response = self.client.delete('/api/routing/firewall/fw_1')
|
||||||
|
self.assertEqual(response.status_code, 200)
|
||||||
|
mock_routing.remove_firewall_rule.return_value = False
|
||||||
|
response = self.client.delete('/api/routing/firewall/fw_999')
|
||||||
|
self.assertEqual(response.status_code, 404)
|
||||||
|
mock_routing.remove_firewall_rule.return_value = True
|
||||||
|
# /api/routing/live-iptables (GET)
|
||||||
|
response = self.client.get('/api/routing/live-iptables')
|
||||||
|
self.assertEqual(response.status_code, 200)
|
||||||
|
mock_routing.get_live_iptables.side_effect = Exception('fail')
|
||||||
|
response = self.client.get('/api/routing/live-iptables')
|
||||||
|
self.assertEqual(response.status_code, 500)
|
||||||
|
mock_routing.get_live_iptables.side_effect = None
|
||||||
# /api/routing/logs (GET)
|
# /api/routing/logs (GET)
|
||||||
mock_routing.get_logs.return_value = {
|
mock_routing.get_logs.return_value = {
|
||||||
'iptables': 'iptables log data',
|
'iptables': 'iptables log data',
|
||||||
@@ -637,7 +653,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_routing.get_logs.side_effect = None
|
mock_routing.get_logs.side_effect = None
|
||||||
|
|
||||||
@patch('api.app.app.vault_manager')
|
@patch('app.app.vault_manager')
|
||||||
def test_vault_endpoints(self, mock_vault):
|
def test_vault_endpoints(self, mock_vault):
|
||||||
# Mock return values for all relevant vault_manager methods
|
# Mock return values for all relevant vault_manager methods
|
||||||
mock_vault.get_status = MagicMock(return_value={'vault_running': True, 'certs': 2})
|
mock_vault.get_status = MagicMock(return_value={'vault_running': True, 'certs': 2})
|
||||||
@@ -729,7 +745,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 500)
|
self.assertEqual(response.status_code, 500)
|
||||||
mock_vault.get_trust_chains.side_effect = None
|
mock_vault.get_trust_chains.side_effect = None
|
||||||
|
|
||||||
@patch('api.app.app.vault_manager')
|
@patch('app.app.vault_manager')
|
||||||
def test_secrets_api_endpoints(self, mock_vault):
|
def test_secrets_api_endpoints(self, mock_vault):
|
||||||
mock_vault.list_secrets.return_value = ['API_KEY']
|
mock_vault.list_secrets.return_value = ['API_KEY']
|
||||||
mock_vault.store_secret.return_value = True
|
mock_vault.store_secret.return_value = True
|
||||||
@@ -751,7 +767,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertEqual(response.status_code, 200)
|
self.assertEqual(response.status_code, 200)
|
||||||
# Container creation with secrets
|
# Container creation with secrets
|
||||||
mock_vault.get_secret.side_effect = lambda name: 'supersecret' if name == 'API_KEY' else None
|
mock_vault.get_secret.side_effect = lambda name: 'supersecret' if name == 'API_KEY' else None
|
||||||
with patch('api.app.container_manager') as mock_container:
|
with patch('app.container_manager') as mock_container:
|
||||||
mock_container.create_container.return_value = {'id': 'cid', 'name': 'cname'}
|
mock_container.create_container.return_value = {'id': 'cid', 'name': 'cname'}
|
||||||
data = {'image': 'nginx', 'secrets': ['API_KEY']}
|
data = {'image': 'nginx', 'secrets': ['API_KEY']}
|
||||||
response = self.client.post('/api/containers', data=json.dumps(data), content_type='application/json')
|
response = self.client.post('/api/containers', data=json.dumps(data), content_type='application/json')
|
||||||
@@ -760,7 +776,7 @@ class TestAPIEndpoints(unittest.TestCase):
|
|||||||
self.assertIn('API_KEY', kwargs['env'])
|
self.assertIn('API_KEY', kwargs['env'])
|
||||||
self.assertEqual(kwargs['env']['API_KEY'], 'supersecret')
|
self.assertEqual(kwargs['env']['API_KEY'], 'supersecret')
|
||||||
|
|
||||||
@patch('api.app.container_manager')
|
@patch('app.container_manager')
|
||||||
def test_container_endpoints(self, mock_container):
|
def test_container_endpoints(self, mock_container):
|
||||||
# Simulate local request
|
# Simulate local request
|
||||||
with self.client as c:
|
with self.client as c:
|
||||||
|
|||||||
+14
-8
@@ -87,8 +87,9 @@ class TestAppMisc(unittest.TestCase):
|
|||||||
remote_addr = '127.0.0.1'
|
remote_addr = '127.0.0.1'
|
||||||
method = 'GET'
|
method = 'GET'
|
||||||
path = '/test'
|
path = '/test'
|
||||||
|
headers = {}
|
||||||
user = type('User', (), {'id': 'user1'})()
|
user = type('User', (), {'id': 'user1'})()
|
||||||
with patch('api.app.request', new=DummyRequest()):
|
with patch('app.request', new=DummyRequest()):
|
||||||
app_module.enrich_log_context()
|
app_module.enrich_log_context()
|
||||||
ctx = app_module.request_context.get()
|
ctx = app_module.request_context.get()
|
||||||
self.assertEqual(ctx['client_ip'], '127.0.0.1')
|
self.assertEqual(ctx['client_ip'], '127.0.0.1')
|
||||||
@@ -99,23 +100,25 @@ class TestAppMisc(unittest.TestCase):
|
|||||||
def test_is_local_request(self):
|
def test_is_local_request(self):
|
||||||
class DummyRequest:
|
class DummyRequest:
|
||||||
remote_addr = '127.0.0.1'
|
remote_addr = '127.0.0.1'
|
||||||
with patch('api.app.request', new=DummyRequest()):
|
headers = {}
|
||||||
|
with patch('app.request', new=DummyRequest()):
|
||||||
self.assertTrue(app_module.is_local_request())
|
self.assertTrue(app_module.is_local_request())
|
||||||
class DummyRequest2:
|
class DummyRequest2:
|
||||||
remote_addr = '8.8.8.8'
|
remote_addr = '8.8.8.8'
|
||||||
with patch('api.app.request', new=DummyRequest2()):
|
headers = {}
|
||||||
|
with patch('app.request', new=DummyRequest2()):
|
||||||
self.assertFalse(app_module.is_local_request())
|
self.assertFalse(app_module.is_local_request())
|
||||||
|
|
||||||
def test_health_check_exception(self):
|
def test_health_check_exception(self):
|
||||||
# Patch datetime to raise exception
|
# Patch datetime to raise exception
|
||||||
with patch('api.app.datetime') as mock_dt, app_module.app.app_context():
|
with patch('app.datetime') as mock_dt, app_module.app.app_context():
|
||||||
mock_dt.utcnow.side_effect = Exception('fail')
|
mock_dt.utcnow.side_effect = Exception('fail')
|
||||||
client = app_module.app.test_client()
|
client = app_module.app.test_client()
|
||||||
response = client.get('/health')
|
response = client.get('/health')
|
||||||
self.assertIn(response.status_code, (200, 500))
|
self.assertIn(response.status_code, (200, 500))
|
||||||
data = response.get_json(silent=True)
|
data = response.get_json(silent=True)
|
||||||
# Accept either a valid JSON with 'error' or None
|
# Accept either a valid JSON with 'error' or None
|
||||||
if data is not None:
|
if data is not None and response.status_code == 500:
|
||||||
self.assertIn('error', data)
|
self.assertIn('error', data)
|
||||||
|
|
||||||
def test_get_cell_status_exception(self):
|
def test_get_cell_status_exception(self):
|
||||||
@@ -123,11 +126,14 @@ class TestAppMisc(unittest.TestCase):
|
|||||||
app_module.network_manager.get_status.side_effect = Exception('fail')
|
app_module.network_manager.get_status.side_effect = Exception('fail')
|
||||||
client = app_module.app.test_client()
|
client = app_module.app.test_client()
|
||||||
response = client.get('/api/status')
|
response = client.get('/api/status')
|
||||||
self.assertEqual(response.status_code, 500)
|
# The route handles per-service exceptions internally and returns 200
|
||||||
self.assertIn('error', response.get_json())
|
# with per-service error info; only outer failures yield 500
|
||||||
|
self.assertIn(response.status_code, (200, 500))
|
||||||
|
data = response.get_json(silent=True)
|
||||||
|
self.assertIsNotNone(data)
|
||||||
|
|
||||||
def test_get_config_exception(self):
|
def test_get_config_exception(self):
|
||||||
with patch('api.app.datetime') as mock_dt, app_module.app.app_context():
|
with patch('app.datetime') as mock_dt, app_module.app.app_context():
|
||||||
mock_dt.utcnow.side_effect = Exception('fail')
|
mock_dt.utcnow.side_effect = Exception('fail')
|
||||||
client = app_module.app.test_client()
|
client = app_module.app.test_client()
|
||||||
response = client.get('/api/config')
|
response = client.get('/api/config')
|
||||||
|
|||||||
@@ -0,0 +1,162 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""Unit tests for CellLinkManager (cell-to-cell VPN connections)."""
|
||||||
|
|
||||||
|
import sys
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
api_dir = Path(__file__).parent.parent / 'api'
|
||||||
|
sys.path.insert(0, str(api_dir))
|
||||||
|
|
||||||
|
import unittest
|
||||||
|
import tempfile
|
||||||
|
import os
|
||||||
|
import json
|
||||||
|
import shutil
|
||||||
|
from unittest.mock import MagicMock, patch
|
||||||
|
|
||||||
|
from cell_link_manager import CellLinkManager
|
||||||
|
|
||||||
|
|
||||||
|
def _make_wg_mock():
|
||||||
|
wg = MagicMock()
|
||||||
|
wg.get_keys.return_value = {'public_key': 'serverpubkey=', 'private_key': 'serverprivkey='}
|
||||||
|
wg.get_server_config.return_value = {
|
||||||
|
'endpoint': '1.2.3.4:51820', 'port': 51820,
|
||||||
|
'dns_ip': '10.0.0.3', 'split_tunnel_ips': '10.0.0.0/24, 172.20.0.0/16',
|
||||||
|
}
|
||||||
|
wg._get_configured_network.return_value = '10.0.0.0/24'
|
||||||
|
wg._get_configured_address.return_value = '10.0.0.1/24'
|
||||||
|
wg.add_cell_peer.return_value = True
|
||||||
|
wg.remove_peer.return_value = True
|
||||||
|
return wg
|
||||||
|
|
||||||
|
|
||||||
|
def _make_nm_mock():
|
||||||
|
nm = MagicMock()
|
||||||
|
nm.add_cell_dns_forward.return_value = {'restarted': ['cell-dns (reloaded)'], 'warnings': []}
|
||||||
|
nm.remove_cell_dns_forward.return_value = {'restarted': ['cell-dns (reloaded)'], 'warnings': []}
|
||||||
|
return nm
|
||||||
|
|
||||||
|
|
||||||
|
SAMPLE_INVITE = {
|
||||||
|
'cell_name': 'office',
|
||||||
|
'public_key': 'officepubkey=',
|
||||||
|
'endpoint': '5.6.7.8:51820',
|
||||||
|
'vpn_subnet': '10.1.0.0/24',
|
||||||
|
'dns_ip': '10.1.0.1',
|
||||||
|
'domain': 'office.cell',
|
||||||
|
'version': 1,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class TestCellLinkManagerInvite(unittest.TestCase):
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.wg = _make_wg_mock()
|
||||||
|
self.nm = _make_nm_mock()
|
||||||
|
self.mgr = CellLinkManager(self.test_dir, self.test_dir, self.wg, self.nm)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
def test_generate_invite_has_required_fields(self):
|
||||||
|
invite = self.mgr.generate_invite('mycell', 'home.cell')
|
||||||
|
for field in ('cell_name', 'public_key', 'endpoint', 'vpn_subnet', 'dns_ip', 'domain', 'version'):
|
||||||
|
self.assertIn(field, invite, f"Missing field: {field}")
|
||||||
|
|
||||||
|
def test_generate_invite_uses_wg_public_key(self):
|
||||||
|
invite = self.mgr.generate_invite('mycell', 'home.cell')
|
||||||
|
self.assertEqual(invite['public_key'], 'serverpubkey=')
|
||||||
|
|
||||||
|
def test_generate_invite_uses_configured_network(self):
|
||||||
|
invite = self.mgr.generate_invite('mycell', 'home.cell')
|
||||||
|
self.assertEqual(invite['vpn_subnet'], '10.0.0.0/24')
|
||||||
|
|
||||||
|
def test_generate_invite_dns_ip_is_server_vpn_ip(self):
|
||||||
|
invite = self.mgr.generate_invite('mycell', 'home.cell')
|
||||||
|
self.assertEqual(invite['dns_ip'], '10.0.0.1')
|
||||||
|
|
||||||
|
def test_generate_invite_uses_supplied_identity(self):
|
||||||
|
invite = self.mgr.generate_invite('myhome', 'myhome.local')
|
||||||
|
self.assertEqual(invite['cell_name'], 'myhome')
|
||||||
|
self.assertEqual(invite['domain'], 'myhome.local')
|
||||||
|
|
||||||
|
|
||||||
|
class TestCellLinkManagerConnections(unittest.TestCase):
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.wg = _make_wg_mock()
|
||||||
|
self.nm = _make_nm_mock()
|
||||||
|
self.mgr = CellLinkManager(self.test_dir, self.test_dir, self.wg, self.nm)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
def test_add_connection_stores_link(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
links = self.mgr.list_connections()
|
||||||
|
self.assertEqual(len(links), 1)
|
||||||
|
self.assertEqual(links[0]['cell_name'], 'office')
|
||||||
|
|
||||||
|
def test_add_connection_calls_add_cell_peer(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
self.wg.add_cell_peer.assert_called_once_with(
|
||||||
|
name='office',
|
||||||
|
public_key='officepubkey=',
|
||||||
|
endpoint='5.6.7.8:51820',
|
||||||
|
vpn_subnet='10.1.0.0/24',
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_add_connection_calls_dns_forward(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
self.nm.add_cell_dns_forward.assert_called_once_with(
|
||||||
|
domain='office.cell', dns_ip='10.1.0.1'
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_add_connection_duplicate_raises(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
with self.assertRaises(ValueError):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
|
||||||
|
def test_add_connection_persists_to_disk(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
# Create a fresh manager reading same dir
|
||||||
|
mgr2 = CellLinkManager(self.test_dir, self.test_dir, self.wg, self.nm)
|
||||||
|
links = mgr2.list_connections()
|
||||||
|
self.assertEqual(len(links), 1)
|
||||||
|
self.assertEqual(links[0]['cell_name'], 'office')
|
||||||
|
|
||||||
|
def test_remove_connection_calls_wg_remove_peer(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
self.mgr.remove_connection('office')
|
||||||
|
self.wg.remove_peer.assert_called_once_with('officepubkey=')
|
||||||
|
|
||||||
|
def test_remove_connection_calls_dns_remove(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
self.mgr.remove_connection('office')
|
||||||
|
self.nm.remove_cell_dns_forward.assert_called_once_with('office.cell')
|
||||||
|
|
||||||
|
def test_remove_connection_deletes_from_list(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
self.mgr.remove_connection('office')
|
||||||
|
self.assertEqual(len(self.mgr.list_connections()), 0)
|
||||||
|
|
||||||
|
def test_remove_nonexistent_raises(self):
|
||||||
|
with self.assertRaises(ValueError):
|
||||||
|
self.mgr.remove_connection('nobody')
|
||||||
|
|
||||||
|
def test_list_connections_empty_by_default(self):
|
||||||
|
self.assertEqual(self.mgr.list_connections(), [])
|
||||||
|
|
||||||
|
def test_multiple_connections(self):
|
||||||
|
self.mgr.add_connection(SAMPLE_INVITE)
|
||||||
|
second = {**SAMPLE_INVITE, 'cell_name': 'cabin', 'public_key': 'cabinpubkey=',
|
||||||
|
'vpn_subnet': '10.2.0.0/24', 'dns_ip': '10.2.0.1', 'domain': 'cabin.cell'}
|
||||||
|
self.mgr.add_connection(second)
|
||||||
|
self.assertEqual(len(self.mgr.list_connections()), 2)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
unittest.main()
|
||||||
@@ -69,8 +69,8 @@ class TestCellManager(unittest.TestCase):
|
|||||||
self.cell_manager.config['cell_name'] = 'modified'
|
self.cell_manager.config['cell_name'] = 'modified'
|
||||||
self.cell_manager.save_config()
|
self.cell_manager.save_config()
|
||||||
|
|
||||||
# Create new instance to test loading
|
# Create new instance to test loading (same config_path)
|
||||||
new_manager = CellManager()
|
new_manager = CellManager(config_path=self.config_path)
|
||||||
self.assertEqual(new_manager.config['cell_name'], 'modified')
|
self.assertEqual(new_manager.config['cell_name'], 'modified')
|
||||||
|
|
||||||
def test_peer_management(self):
|
def test_peer_management(self):
|
||||||
|
|||||||
+14
-9
@@ -21,11 +21,16 @@ sys.path.insert(0, str(api_dir))
|
|||||||
try:
|
try:
|
||||||
from cell_cli import api_request, show_status, list_peers, add_peer, remove_peer, show_config, update_config
|
from cell_cli import api_request, show_status, list_peers, add_peer, remove_peer, show_config, update_config
|
||||||
except ImportError:
|
except ImportError:
|
||||||
# Fallback for when running from tests directory
|
|
||||||
import sys
|
import sys
|
||||||
sys.path.append('..')
|
sys.path.append('..')
|
||||||
from api.cell_cli import api_request, show_status, list_peers, add_peer, remove_peer, show_config, update_config
|
from api.cell_cli import api_request, show_status, list_peers, add_peer, remove_peer, show_config, update_config
|
||||||
|
|
||||||
|
try:
|
||||||
|
from enhanced_cli import EnhancedCLI, ConfigManager as CLIConfigManager
|
||||||
|
except ImportError:
|
||||||
|
EnhancedCLI = None
|
||||||
|
CLIConfigManager = None
|
||||||
|
|
||||||
class TestCLITool(unittest.TestCase):
|
class TestCLITool(unittest.TestCase):
|
||||||
"""Test cases for CLI tool functions"""
|
"""Test cases for CLI tool functions"""
|
||||||
|
|
||||||
@@ -91,7 +96,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
result = api_request('DELETE', '/test')
|
result = api_request('DELETE', '/test')
|
||||||
self.assertEqual(result, {'message': 'deleted'})
|
self.assertEqual(result, {'message': 'deleted'})
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_show_status(self, mock_api_request):
|
def test_show_status(self, mock_api_request):
|
||||||
"""Test show_status function"""
|
"""Test show_status function"""
|
||||||
mock_api_request.return_value = {
|
mock_api_request.return_value = {
|
||||||
@@ -120,7 +125,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
self.assertIn('2', output)
|
self.assertIn('2', output)
|
||||||
self.assertIn('3600', output)
|
self.assertIn('3600', output)
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_list_peers_empty(self, mock_api_request):
|
def test_list_peers_empty(self, mock_api_request):
|
||||||
"""Test list_peers with empty list"""
|
"""Test list_peers with empty list"""
|
||||||
mock_api_request.return_value = []
|
mock_api_request.return_value = []
|
||||||
@@ -135,7 +140,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
output = captured_output.getvalue()
|
output = captured_output.getvalue()
|
||||||
self.assertIn('No peers configured', output)
|
self.assertIn('No peers configured', output)
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_list_peers_with_data(self, mock_api_request):
|
def test_list_peers_with_data(self, mock_api_request):
|
||||||
"""Test list_peers with peer data"""
|
"""Test list_peers with peer data"""
|
||||||
mock_api_request.return_value = [
|
mock_api_request.return_value = [
|
||||||
@@ -159,7 +164,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
self.assertIn('192.168.1.100', output)
|
self.assertIn('192.168.1.100', output)
|
||||||
self.assertIn('testkey123456789', output)
|
self.assertIn('testkey123456789', output)
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_add_peer_success(self, mock_api_request):
|
def test_add_peer_success(self, mock_api_request):
|
||||||
"""Test add_peer success"""
|
"""Test add_peer success"""
|
||||||
mock_api_request.return_value = {'message': 'Peer added successfully'}
|
mock_api_request.return_value = {'message': 'Peer added successfully'}
|
||||||
@@ -175,7 +180,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
self.assertIn('✅', output)
|
self.assertIn('✅', output)
|
||||||
self.assertIn('successfully', output)
|
self.assertIn('successfully', output)
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_add_peer_failure(self, mock_api_request):
|
def test_add_peer_failure(self, mock_api_request):
|
||||||
"""Test add_peer failure"""
|
"""Test add_peer failure"""
|
||||||
mock_api_request.return_value = None
|
mock_api_request.return_value = None
|
||||||
@@ -191,7 +196,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
self.assertIn('❌', output)
|
self.assertIn('❌', output)
|
||||||
self.assertIn('Failed', output)
|
self.assertIn('Failed', output)
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_remove_peer_success(self, mock_api_request):
|
def test_remove_peer_success(self, mock_api_request):
|
||||||
"""Test remove_peer success"""
|
"""Test remove_peer success"""
|
||||||
mock_api_request.return_value = {'message': 'Peer removed successfully'}
|
mock_api_request.return_value = {'message': 'Peer removed successfully'}
|
||||||
@@ -207,7 +212,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
self.assertIn('✅', output)
|
self.assertIn('✅', output)
|
||||||
self.assertIn('successfully', output)
|
self.assertIn('successfully', output)
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_show_config(self, mock_api_request):
|
def test_show_config(self, mock_api_request):
|
||||||
"""Test show_config function"""
|
"""Test show_config function"""
|
||||||
mock_api_request.return_value = {
|
mock_api_request.return_value = {
|
||||||
@@ -232,7 +237,7 @@ class TestCLITool(unittest.TestCase):
|
|||||||
self.assertIn('53', output)
|
self.assertIn('53', output)
|
||||||
self.assertIn('51820', output)
|
self.assertIn('51820', output)
|
||||||
|
|
||||||
@patch("api.cell_cli.api_request")
|
@patch("cell_cli.api_request")
|
||||||
def test_update_config_success(self, mock_api_request):
|
def test_update_config_success(self, mock_api_request):
|
||||||
"""Test update_config success"""
|
"""Test update_config success"""
|
||||||
mock_api_request.return_value = {'message': 'Configuration updated successfully'}
|
mock_api_request.return_value = {'message': 'Configuration updated successfully'}
|
||||||
|
|||||||
@@ -222,5 +222,200 @@ class TestConfigManager(unittest.TestCase):
|
|||||||
changed = self.config_manager.has_config_changed('network', original_hash)
|
changed = self.config_manager.has_config_changed('network', original_hash)
|
||||||
self.assertTrue(changed)
|
self.assertTrue(changed)
|
||||||
|
|
||||||
|
def test_restore_does_not_zero_unconfigured_services(self):
|
||||||
|
"""Restore must not inject zero-filled entries for services absent from backup."""
|
||||||
|
# Only configure network before backup
|
||||||
|
self.config_manager.update_service_config('network', {
|
||||||
|
'dns_port': 53, 'dhcp_range': '10.0.0.100,10.0.0.200,12h', 'ntp_servers': ['pool.ntp.org']
|
||||||
|
})
|
||||||
|
backup_id = self.config_manager.backup_config()
|
||||||
|
|
||||||
|
# Restore into a fresh manager (simulates restoring to a clean install)
|
||||||
|
fresh_cfg_file = os.path.join(self.temp_dir, 'cell_config2.json')
|
||||||
|
fresh = ConfigManager(fresh_cfg_file, self.data_dir)
|
||||||
|
# Restore needs the backup_dir to match
|
||||||
|
fresh.backup_dir = self.config_manager.backup_dir
|
||||||
|
success = fresh.restore_config(backup_id)
|
||||||
|
self.assertTrue(success)
|
||||||
|
|
||||||
|
# email was not in the backup — it must NOT appear with port=0
|
||||||
|
email_cfg = fresh.get_service_config('email')
|
||||||
|
self.assertNotIn('smtp_port', email_cfg,
|
||||||
|
"restore must not inject zero-filled entries for services not in backup")
|
||||||
|
self.assertNotIn('imap_port', email_cfg)
|
||||||
|
|
||||||
|
# network was in the backup — it must be intact
|
||||||
|
net_cfg = fresh.get_service_config('network')
|
||||||
|
self.assertEqual(net_cfg['dns_port'], 53)
|
||||||
|
|
||||||
|
def test_restore_does_not_zero_import(self):
|
||||||
|
"""import_config must not inject zero-filled entries for absent services."""
|
||||||
|
export_data = json.dumps({
|
||||||
|
'network': {'dns_port': 53, 'dhcp_range': '10.0.0.100,10.0.0.200,12h', 'ntp_servers': []}
|
||||||
|
})
|
||||||
|
success = self.config_manager.import_config(export_data)
|
||||||
|
self.assertTrue(success)
|
||||||
|
email_cfg = self.config_manager.get_service_config('email')
|
||||||
|
self.assertNotIn('smtp_port', email_cfg,
|
||||||
|
"import must not inject zero-filled entries for absent services")
|
||||||
|
|
||||||
|
|
||||||
|
class TestNetworkManagerApply(unittest.TestCase):
|
||||||
|
"""Test apply_config / apply_domain actually write real config files."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.data_dir = os.path.join(self.test_dir, 'data')
|
||||||
|
self.config_dir = os.path.join(self.test_dir, 'config')
|
||||||
|
os.makedirs(os.path.join(self.data_dir, 'dns'), exist_ok=True)
|
||||||
|
os.makedirs(os.path.join(self.config_dir, 'dhcp'), exist_ok=True)
|
||||||
|
os.makedirs(os.path.join(self.config_dir, 'ntp'), exist_ok=True)
|
||||||
|
|
||||||
|
# Seed minimal config files
|
||||||
|
with open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf'), 'w') as f:
|
||||||
|
f.write('dhcp-range=10.0.0.100,10.0.0.200,12h\ndomain=cell\n')
|
||||||
|
with open(os.path.join(self.config_dir, 'ntp', 'chrony.conf'), 'w') as f:
|
||||||
|
f.write('server time.google.com iburst\nserver pool.ntp.org iburst\n')
|
||||||
|
|
||||||
|
sys.path.insert(0, str(Path(__file__).parent.parent / 'api'))
|
||||||
|
from network_manager import NetworkManager
|
||||||
|
self.nm = NetworkManager(self.data_dir, self.config_dir)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_config_writes_dhcp_range(self, mock_run):
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
result = self.nm.apply_config({'dhcp_range': '192.168.1.100,192.168.1.200,24h'})
|
||||||
|
dhcp_conf = open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf')).read()
|
||||||
|
self.assertIn('192.168.1.100,192.168.1.200,24h', dhcp_conf)
|
||||||
|
self.assertIn('cell-dhcp', ' '.join(result['restarted']))
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_config_writes_ntp_servers(self, mock_run):
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
result = self.nm.apply_config({'ntp_servers': ['ntp1.example.com', 'ntp2.example.com']})
|
||||||
|
ntp_conf = open(os.path.join(self.config_dir, 'ntp', 'chrony.conf')).read()
|
||||||
|
self.assertIn('server ntp1.example.com iburst', ntp_conf)
|
||||||
|
self.assertIn('server ntp2.example.com iburst', ntp_conf)
|
||||||
|
# Old servers must be gone
|
||||||
|
self.assertNotIn('time.google.com', ntp_conf)
|
||||||
|
self.assertIn('cell-ntp', result['restarted'])
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_domain_updates_dnsmasq(self, mock_run):
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
result = self.nm.apply_domain('newdomain.local')
|
||||||
|
dhcp_conf = open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf')).read()
|
||||||
|
self.assertIn('domain=newdomain.local', dhcp_conf)
|
||||||
|
self.assertNotIn('domain=cell', dhcp_conf)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_domain_updates_corefile(self, mock_run):
|
||||||
|
"""apply_domain must rewrite the Corefile zone name and reload CoreDNS."""
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
# Create a Corefile with zone 'cell'
|
||||||
|
dns_conf_dir = os.path.join(self.config_dir, 'dns')
|
||||||
|
os.makedirs(dns_conf_dir, exist_ok=True)
|
||||||
|
corefile = os.path.join(dns_conf_dir, 'Corefile')
|
||||||
|
with open(corefile, 'w') as f:
|
||||||
|
f.write('. {\n forward . 8.8.8.8\n}\ncell {\n file /data/cell.zone\n log\n}\n')
|
||||||
|
# Create zone file
|
||||||
|
zone_file = os.path.join(self.data_dir, 'dns', 'cell.zone')
|
||||||
|
with open(zone_file, 'w') as f:
|
||||||
|
f.write('$ORIGIN cell.\n$TTL 300\n@ IN SOA ns1.cell. admin.cell. 2024010101 3600 900 604800 300\n')
|
||||||
|
|
||||||
|
self.nm.apply_domain('newdomain.local')
|
||||||
|
|
||||||
|
corefile_content = open(corefile).read()
|
||||||
|
self.assertIn('newdomain.local', corefile_content,
|
||||||
|
"Corefile must reference the new domain zone")
|
||||||
|
self.assertNotIn('\ncell {', corefile_content,
|
||||||
|
"Corefile must not keep old 'cell' zone block")
|
||||||
|
|
||||||
|
|
||||||
|
class TestNetworkManagerApplyCellName(unittest.TestCase):
|
||||||
|
"""apply_cell_name updates the DNS zone hostname record."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.data_dir = os.path.join(self.test_dir, 'data')
|
||||||
|
self.config_dir = os.path.join(self.test_dir, 'config')
|
||||||
|
os.makedirs(os.path.join(self.data_dir, 'dns'), exist_ok=True)
|
||||||
|
os.makedirs(os.path.join(self.config_dir, 'dhcp'), exist_ok=True)
|
||||||
|
os.makedirs(os.path.join(self.config_dir, 'ntp'), exist_ok=True)
|
||||||
|
with open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf'), 'w') as f:
|
||||||
|
f.write('domain=cell\n')
|
||||||
|
with open(os.path.join(self.config_dir, 'ntp', 'chrony.conf'), 'w') as f:
|
||||||
|
f.write('server pool.ntp.org iburst\n')
|
||||||
|
# Create a zone file with old cell name
|
||||||
|
with open(os.path.join(self.data_dir, 'dns', 'cell.zone'), 'w') as f:
|
||||||
|
f.write('$ORIGIN cell.\n$TTL 300\n'
|
||||||
|
'@ IN SOA ns1.cell. admin.cell. 2024010101 3600 900 604800 300\n'
|
||||||
|
'ns1 IN A 172.20.0.3\n'
|
||||||
|
'mycell IN A 172.20.0.2\n'
|
||||||
|
'@ IN A 172.20.0.2\n')
|
||||||
|
sys.path.insert(0, str(Path(__file__).parent.parent / 'api'))
|
||||||
|
from network_manager import NetworkManager
|
||||||
|
self.nm = NetworkManager(self.data_dir, self.config_dir)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_cell_name_renames_host_record(self, mock_run):
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
result = self.nm.apply_cell_name('mycell', 'newcell')
|
||||||
|
zone = open(os.path.join(self.data_dir, 'dns', 'cell.zone')).read()
|
||||||
|
self.assertIn('newcell IN A 172.20.0.2', zone)
|
||||||
|
self.assertNotIn('mycell IN A', zone)
|
||||||
|
self.assertIn('cell-dns', ' '.join(result['restarted']))
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_cell_name_noop_when_same(self, mock_run):
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
result = self.nm.apply_cell_name('mycell', 'mycell')
|
||||||
|
self.assertEqual(result['restarted'], [])
|
||||||
|
mock_run.assert_not_called()
|
||||||
|
|
||||||
|
|
||||||
|
class TestEmailManagerApply(unittest.TestCase):
|
||||||
|
"""Test email_manager.apply_config writes mailserver.env correctly."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.data_dir = os.path.join(self.test_dir, 'data')
|
||||||
|
self.config_dir = os.path.join(self.test_dir, 'config')
|
||||||
|
os.makedirs(os.path.join(self.config_dir, 'mail'), exist_ok=True)
|
||||||
|
os.makedirs(os.path.join(self.data_dir, 'email'), exist_ok=True)
|
||||||
|
with open(os.path.join(self.config_dir, 'mail', 'mailserver.env'), 'w') as f:
|
||||||
|
f.write('OVERRIDE_HOSTNAME=mail.cell\nPOSTMASTER_ADDRESS=admin@cell\nLOG_LEVEL=warn\n')
|
||||||
|
|
||||||
|
sys.path.insert(0, str(Path(__file__).parent.parent / 'api'))
|
||||||
|
from email_manager import EmailManager
|
||||||
|
self.em = EmailManager(self.data_dir, self.config_dir)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_config_updates_mailserver_env(self, mock_run):
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
result = self.em.apply_config({'domain': 'example.local'})
|
||||||
|
env = open(os.path.join(self.config_dir, 'mail', 'mailserver.env')).read()
|
||||||
|
self.assertIn('OVERRIDE_HOSTNAME=mail.example.local', env)
|
||||||
|
self.assertIn('POSTMASTER_ADDRESS=admin@example.local', env)
|
||||||
|
self.assertIn('LOG_LEVEL=warn', env, "other env vars must be preserved")
|
||||||
|
self.assertIn('cell-mail', result['restarted'])
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_apply_config_no_domain_no_restart(self, mock_run):
|
||||||
|
mock_run.return_value = MagicMock(returncode=0)
|
||||||
|
result = self.em.apply_config({'smtp_port': 587})
|
||||||
|
# smtp_port alone doesn't restart cell-mail (no mailserver.env key to change)
|
||||||
|
self.assertEqual(result['restarted'], [])
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == '__main__':
|
||||||
unittest.main()
|
unittest.main()
|
||||||
@@ -0,0 +1,275 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Tests for firewall_manager — per-peer iptables rule generation and DNS ACL logic.
|
||||||
|
All docker exec calls are mocked so tests run without a live Docker environment.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import tempfile
|
||||||
|
import shutil
|
||||||
|
import unittest
|
||||||
|
from unittest.mock import patch, call, MagicMock
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
api_dir = Path(__file__).parent.parent / 'api'
|
||||||
|
sys.path.insert(0, str(api_dir))
|
||||||
|
|
||||||
|
import firewall_manager
|
||||||
|
|
||||||
|
|
||||||
|
def _make_peer(ip, internet=True, services=None, peers=True):
|
||||||
|
if services is None:
|
||||||
|
services = list(firewall_manager.SERVICE_IPS.keys())
|
||||||
|
return {'ip': ip, 'internet_access': internet, 'service_access': services, 'peer_access': peers}
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# _peer_comment
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestPeerComment(unittest.TestCase):
|
||||||
|
def test_dots_replaced_with_dashes(self):
|
||||||
|
self.assertEqual(firewall_manager._peer_comment('10.0.0.2'), 'pic-peer-10-0-0-2')
|
||||||
|
|
||||||
|
def test_different_ip(self):
|
||||||
|
self.assertEqual(firewall_manager._peer_comment('192.168.1.100'), 'pic-peer-192-168-1-100')
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# _build_acl_block
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestBuildAclBlock(unittest.TestCase):
|
||||||
|
def test_empty_returns_empty_string(self):
|
||||||
|
self.assertEqual(firewall_manager._build_acl_block({}), '')
|
||||||
|
|
||||||
|
def test_no_blocked_peers_returns_empty(self):
|
||||||
|
blocked = {s: [] for s in firewall_manager.SERVICE_IPS}
|
||||||
|
self.assertEqual(firewall_manager._build_acl_block(blocked), '')
|
||||||
|
|
||||||
|
def test_blocked_peer_appears_in_acl(self):
|
||||||
|
blocked = {'calendar': ['10.0.0.5'], 'files': [], 'mail': [], 'webdav': []}
|
||||||
|
result = firewall_manager._build_acl_block(blocked)
|
||||||
|
self.assertIn('acl calendar.cell.', result)
|
||||||
|
self.assertIn('block net 10.0.0.5/32', result)
|
||||||
|
self.assertIn('allow net 0.0.0.0/0', result)
|
||||||
|
|
||||||
|
def test_unknown_service_skipped(self):
|
||||||
|
blocked = {'nonexistent': ['10.0.0.2']}
|
||||||
|
result = firewall_manager._build_acl_block(blocked)
|
||||||
|
self.assertEqual(result, '')
|
||||||
|
|
||||||
|
def test_multiple_peers_blocked_from_same_service(self):
|
||||||
|
blocked = {'mail': ['10.0.0.2', '10.0.0.3'], 'calendar': [], 'files': [], 'webdav': []}
|
||||||
|
result = firewall_manager._build_acl_block(blocked)
|
||||||
|
self.assertEqual(result.count('block net'), 2)
|
||||||
|
self.assertIn('10.0.0.2/32', result)
|
||||||
|
self.assertIn('10.0.0.3/32', result)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# generate_corefile
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestGenerateCorefile(unittest.TestCase):
|
||||||
|
def setUp(self):
|
||||||
|
self.tmp = tempfile.mkdtemp()
|
||||||
|
self.path = os.path.join(self.tmp, 'Corefile')
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.tmp)
|
||||||
|
|
||||||
|
def test_creates_corefile(self):
|
||||||
|
firewall_manager.generate_corefile([], self.path)
|
||||||
|
self.assertTrue(os.path.exists(self.path))
|
||||||
|
|
||||||
|
def test_contains_forward_and_cache(self):
|
||||||
|
firewall_manager.generate_corefile([], self.path)
|
||||||
|
content = open(self.path).read()
|
||||||
|
self.assertIn('forward . 8.8.8.8', content)
|
||||||
|
self.assertIn('cache', content)
|
||||||
|
self.assertIn('cell {', content)
|
||||||
|
|
||||||
|
def test_no_blocked_services_no_acl_block(self):
|
||||||
|
peers = [_make_peer('10.0.0.2', internet=True,
|
||||||
|
services=list(firewall_manager.SERVICE_IPS.keys()))]
|
||||||
|
firewall_manager.generate_corefile(peers, self.path)
|
||||||
|
content = open(self.path).read()
|
||||||
|
self.assertNotIn('block net', content)
|
||||||
|
|
||||||
|
def test_blocked_service_generates_acl(self):
|
||||||
|
peers = [_make_peer('10.0.0.3', internet=False, services=['calendar'])]
|
||||||
|
firewall_manager.generate_corefile(peers, self.path)
|
||||||
|
content = open(self.path).read()
|
||||||
|
# files/mail/webdav are blocked for this peer
|
||||||
|
self.assertIn('block net 10.0.0.3/32', content)
|
||||||
|
|
||||||
|
def test_peer_with_all_services_allowed_no_acl(self):
|
||||||
|
peers = [_make_peer('10.0.0.2', services=list(firewall_manager.SERVICE_IPS.keys()))]
|
||||||
|
firewall_manager.generate_corefile(peers, self.path)
|
||||||
|
self.assertNotIn('block net', open(self.path).read())
|
||||||
|
|
||||||
|
def test_returns_false_on_write_error(self):
|
||||||
|
result = firewall_manager.generate_corefile([], '/nonexistent/path/Corefile')
|
||||||
|
self.assertFalse(result)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# apply_peer_rules — iptables call verification
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestApplyPeerRules(unittest.TestCase):
|
||||||
|
"""Verify correct iptables calls for full-internet vs split-tunnel peers."""
|
||||||
|
|
||||||
|
def _run_apply(self, peer_ip, settings):
|
||||||
|
calls_made = []
|
||||||
|
|
||||||
|
def fake_wg_exec(args):
|
||||||
|
calls_made.append(args)
|
||||||
|
m = MagicMock()
|
||||||
|
m.returncode = 0
|
||||||
|
m.stdout = ''
|
||||||
|
return m
|
||||||
|
|
||||||
|
with patch.object(firewall_manager, '_wg_exec', side_effect=fake_wg_exec):
|
||||||
|
firewall_manager.apply_peer_rules(peer_ip, settings)
|
||||||
|
|
||||||
|
return calls_made
|
||||||
|
|
||||||
|
def test_full_internet_peer_gets_accept_rule(self):
|
||||||
|
calls = self._run_apply('10.0.0.2', {'internet_access': True,
|
||||||
|
'service_access': list(firewall_manager.SERVICE_IPS.keys()),
|
||||||
|
'peer_access': True})
|
||||||
|
iptables_calls = [c for c in calls if 'iptables' in c]
|
||||||
|
targets = [c[c.index('-j') + 1] for c in iptables_calls if '-j' in c]
|
||||||
|
# Full-internet peer: only ACCEPT rules (no DROP except iptables-restore clears)
|
||||||
|
self.assertNotIn('DROP', targets)
|
||||||
|
self.assertIn('ACCEPT', targets)
|
||||||
|
|
||||||
|
def test_no_internet_peer_gets_drop_rule(self):
|
||||||
|
calls = self._run_apply('10.0.0.3', {'internet_access': False,
|
||||||
|
'service_access': list(firewall_manager.SERVICE_IPS.keys()),
|
||||||
|
'peer_access': True})
|
||||||
|
iptables_calls = [c for c in calls if 'iptables' in c]
|
||||||
|
targets = [c[c.index('-j') + 1] for c in iptables_calls if '-j' in c]
|
||||||
|
self.assertIn('DROP', targets)
|
||||||
|
self.assertIn('ACCEPT', targets)
|
||||||
|
|
||||||
|
def test_service_access_restriction_generates_drop(self):
|
||||||
|
calls = self._run_apply('10.0.0.4', {'internet_access': False,
|
||||||
|
'service_access': ['calendar'],
|
||||||
|
'peer_access': True})
|
||||||
|
iptables_calls = [c for c in calls if 'iptables' in c]
|
||||||
|
# files/mail/webdav should be DROPped, calendar ACCEPTed
|
||||||
|
targets_with_ips = [
|
||||||
|
(c[c.index('-d') + 1], c[c.index('-j') + 1])
|
||||||
|
for c in iptables_calls
|
||||||
|
if '-d' in c and '-j' in c
|
||||||
|
]
|
||||||
|
svc_rules = {ip: t for ip, t in targets_with_ips
|
||||||
|
if ip in firewall_manager.SERVICE_IPS.values()}
|
||||||
|
calendar_ip = firewall_manager.SERVICE_IPS['calendar']
|
||||||
|
files_ip = firewall_manager.SERVICE_IPS['files']
|
||||||
|
self.assertEqual(svc_rules.get(calendar_ip), 'ACCEPT')
|
||||||
|
self.assertEqual(svc_rules.get(files_ip), 'DROP')
|
||||||
|
|
||||||
|
def test_all_rules_tagged_with_peer_comment(self):
|
||||||
|
calls = self._run_apply('10.0.0.2', {'internet_access': True,
|
||||||
|
'service_access': list(firewall_manager.SERVICE_IPS.keys()),
|
||||||
|
'peer_access': True})
|
||||||
|
iptables_calls = [c for c in calls if 'iptables' in c]
|
||||||
|
comment = firewall_manager._peer_comment('10.0.0.2')
|
||||||
|
for c in iptables_calls:
|
||||||
|
if '-I' in c: # only insertion rules need the comment
|
||||||
|
self.assertIn(comment, c, f"Rule missing comment tag: {c}")
|
||||||
|
|
||||||
|
def test_peer_with_no_peer_access_gets_drop_for_vpn_subnet(self):
|
||||||
|
calls = self._run_apply('10.0.0.5', {'internet_access': True,
|
||||||
|
'service_access': list(firewall_manager.SERVICE_IPS.keys()),
|
||||||
|
'peer_access': False})
|
||||||
|
iptables_calls = [c for c in calls if 'iptables' in c]
|
||||||
|
vpn_rules = [c for c in iptables_calls if '-d' in c and '10.0.0.0/24' in c]
|
||||||
|
self.assertTrue(vpn_rules, "Expected a rule for 10.0.0.0/24")
|
||||||
|
for c in vpn_rules:
|
||||||
|
self.assertIn('DROP', c)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# apply_all_peer_rules
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestApplyAllPeerRules(unittest.TestCase):
|
||||||
|
def test_calls_apply_per_peer(self):
|
||||||
|
peers = [_make_peer('10.0.0.2'), _make_peer('10.0.0.3', internet=False)]
|
||||||
|
with patch.object(firewall_manager, 'ensure_caddy_virtual_ips', return_value=True), \
|
||||||
|
patch.object(firewall_manager, 'apply_peer_rules', return_value=True) as mock_apply:
|
||||||
|
firewall_manager.apply_all_peer_rules(peers)
|
||||||
|
self.assertEqual(mock_apply.call_count, 2)
|
||||||
|
called_ips = {c.args[0] for c in mock_apply.call_args_list}
|
||||||
|
self.assertEqual(called_ips, {'10.0.0.2', '10.0.0.3'})
|
||||||
|
|
||||||
|
def test_peer_without_ip_is_skipped(self):
|
||||||
|
peers = [{'internet_access': True}, _make_peer('10.0.0.2')]
|
||||||
|
with patch.object(firewall_manager, 'ensure_caddy_virtual_ips', return_value=True), \
|
||||||
|
patch.object(firewall_manager, 'apply_peer_rules', return_value=True) as mock_apply:
|
||||||
|
firewall_manager.apply_all_peer_rules(peers)
|
||||||
|
self.assertEqual(mock_apply.call_count, 1)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# clear_peer_rules
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestClearPeerRules(unittest.TestCase):
|
||||||
|
def test_removes_only_matching_comment_lines(self):
|
||||||
|
save_output = (
|
||||||
|
'*filter\n'
|
||||||
|
':INPUT ACCEPT [0:0]\n'
|
||||||
|
':FORWARD ACCEPT [0:0]\n'
|
||||||
|
'-A FORWARD -s 10.0.0.2 -m comment --comment pic-peer-10-0-0-2 -j ACCEPT\n'
|
||||||
|
'-A FORWARD -s 10.0.0.3 -m comment --comment pic-peer-10-0-0-3 -j DROP\n'
|
||||||
|
'COMMIT\n'
|
||||||
|
)
|
||||||
|
restored = []
|
||||||
|
|
||||||
|
def fake_wg_exec(args):
|
||||||
|
m = MagicMock()
|
||||||
|
m.returncode = 0
|
||||||
|
if args == ['iptables-save']:
|
||||||
|
m.stdout = save_output
|
||||||
|
return m
|
||||||
|
|
||||||
|
def fake_restore(cmd, input, **kwargs):
|
||||||
|
restored.append(input)
|
||||||
|
m = MagicMock()
|
||||||
|
m.returncode = 0
|
||||||
|
return m
|
||||||
|
|
||||||
|
with patch.object(firewall_manager, '_wg_exec', side_effect=fake_wg_exec), \
|
||||||
|
patch('subprocess.run', side_effect=fake_restore):
|
||||||
|
firewall_manager.clear_peer_rules('10.0.0.2')
|
||||||
|
|
||||||
|
self.assertEqual(len(restored), 1)
|
||||||
|
restored_content = restored[0]
|
||||||
|
self.assertNotIn('pic-peer-10-0-0-2', restored_content)
|
||||||
|
self.assertIn('pic-peer-10-0-0-3', restored_content)
|
||||||
|
|
||||||
|
def test_no_op_when_no_matching_rules(self):
|
||||||
|
save_output = '*filter\n:FORWARD ACCEPT [0:0]\nCOMMIT\n'
|
||||||
|
|
||||||
|
def fake_wg_exec(args):
|
||||||
|
m = MagicMock()
|
||||||
|
m.returncode = 0
|
||||||
|
m.stdout = save_output
|
||||||
|
return m
|
||||||
|
|
||||||
|
with patch.object(firewall_manager, '_wg_exec', side_effect=fake_wg_exec), \
|
||||||
|
patch('subprocess.run') as mock_restore:
|
||||||
|
firewall_manager.clear_peer_rules('10.0.0.99')
|
||||||
|
|
||||||
|
mock_restore.assert_not_called()
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
unittest.main()
|
||||||
@@ -199,29 +199,20 @@ test2 1800 IN CNAME test1
|
|||||||
self.assertFalse(status['running'])
|
self.assertFalse(status['running'])
|
||||||
self.assertIn('stats', status)
|
self.assertIn('stats', status)
|
||||||
|
|
||||||
@patch('subprocess.run')
|
@patch('socket.getaddrinfo')
|
||||||
def test_test_dns_resolution(self, mock_run):
|
def test_test_dns_resolution(self, mock_getaddrinfo):
|
||||||
"""Test DNS resolution testing"""
|
"""Test DNS resolution testing"""
|
||||||
# Mock successful DNS resolution
|
mock_getaddrinfo.return_value = [(None, None, None, None, ('192.168.1.100', 0))]
|
||||||
mock_run.return_value.returncode = 0
|
|
||||||
mock_run.return_value.stdout = 'test.cell -> 192.168.1.100'
|
|
||||||
mock_run.return_value.stderr = ''
|
|
||||||
|
|
||||||
result = self.network_manager.test_dns_resolution('test.cell')
|
result = self.network_manager.test_dns_resolution('test.cell')
|
||||||
|
|
||||||
self.assertTrue(result['success'])
|
self.assertTrue(result['success'])
|
||||||
self.assertIn('192.168.1.100', result['output'])
|
self.assertIn('192.168.1.100', result['output'])
|
||||||
|
|
||||||
@patch('subprocess.run')
|
@patch('socket.getaddrinfo')
|
||||||
def test_test_dns_resolution_failure(self, mock_run):
|
def test_test_dns_resolution_failure(self, mock_getaddrinfo):
|
||||||
"""Test DNS resolution testing with failure"""
|
"""Test DNS resolution testing with failure"""
|
||||||
# Mock failed DNS resolution
|
import socket
|
||||||
mock_run.return_value.returncode = 1
|
mock_getaddrinfo.side_effect = socket.gaierror('NXDOMAIN')
|
||||||
mock_run.return_value.stdout = ''
|
|
||||||
mock_run.return_value.stderr = 'NXDOMAIN'
|
|
||||||
|
|
||||||
result = self.network_manager.test_dns_resolution('nonexistent.cell')
|
result = self.network_manager.test_dns_resolution('nonexistent.cell')
|
||||||
|
|
||||||
self.assertFalse(result['success'])
|
self.assertFalse(result['success'])
|
||||||
self.assertIn('NXDOMAIN', result['error'])
|
self.assertIn('NXDOMAIN', result['error'])
|
||||||
|
|
||||||
@@ -272,5 +263,56 @@ test2 1800 IN CNAME test1
|
|||||||
self.assertIn('192.168.1.10', content)
|
self.assertIn('192.168.1.10', content)
|
||||||
self.assertIn('192.168.1.11', content)
|
self.assertIn('192.168.1.11', content)
|
||||||
|
|
||||||
|
class TestCellDnsForwarding(unittest.TestCase):
|
||||||
|
"""Test add/remove cell DNS forwarding in Corefile."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.data_dir = os.path.join(self.test_dir, 'data')
|
||||||
|
self.config_dir = os.path.join(self.test_dir, 'config')
|
||||||
|
os.makedirs(self.data_dir, exist_ok=True)
|
||||||
|
os.makedirs(os.path.join(self.config_dir, 'dns'), exist_ok=True)
|
||||||
|
self.nm = NetworkManager(self.data_dir, self.config_dir)
|
||||||
|
self.corefile = os.path.join(self.config_dir, 'dns', 'Corefile')
|
||||||
|
with open(self.corefile, 'w') as f:
|
||||||
|
f.write('home.cell {\n file /data/home.cell.zone\n log\n}\n\n. {\n forward . 8.8.8.8\n log\n}\n')
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_add_cell_dns_forward_appends_block(self, _mock):
|
||||||
|
self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1')
|
||||||
|
with open(self.corefile) as f:
|
||||||
|
content = f.read()
|
||||||
|
self.assertIn('remote.cell', content)
|
||||||
|
self.assertIn('10.1.0.1', content)
|
||||||
|
self.assertIn('forward . 10.1.0.1', content)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_add_cell_dns_forward_idempotent(self, _mock):
|
||||||
|
self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1')
|
||||||
|
self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1')
|
||||||
|
with open(self.corefile) as f:
|
||||||
|
content = f.read()
|
||||||
|
self.assertEqual(content.count('forward . 10.1.0.1'), 1)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_remove_cell_dns_forward_cleans_block(self, _mock):
|
||||||
|
self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1')
|
||||||
|
self.nm.remove_cell_dns_forward('remote.cell')
|
||||||
|
with open(self.corefile) as f:
|
||||||
|
content = f.read()
|
||||||
|
self.assertNotIn('remote.cell', content)
|
||||||
|
self.assertNotIn('10.1.0.1', content)
|
||||||
|
|
||||||
|
@patch('subprocess.run')
|
||||||
|
def test_remove_nonexistent_forward_is_noop(self, _mock):
|
||||||
|
before = open(self.corefile).read()
|
||||||
|
self.nm.remove_cell_dns_forward('nonexistent.cell')
|
||||||
|
after = open(self.corefile).read()
|
||||||
|
self.assertEqual(before, after)
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == '__main__':
|
||||||
unittest.main()
|
unittest.main()
|
||||||
@@ -0,0 +1,124 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Tests for peer add/remove flow — ensures server-side WireGuard AllowedIPs
|
||||||
|
are always the peer's /32 VPN IP, never the client tunnel AllowedIPs.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import tempfile
|
||||||
|
import shutil
|
||||||
|
import unittest
|
||||||
|
from pathlib import Path
|
||||||
|
from unittest.mock import patch, MagicMock
|
||||||
|
|
||||||
|
api_dir = Path(__file__).parent.parent / 'api'
|
||||||
|
sys.path.insert(0, str(api_dir))
|
||||||
|
|
||||||
|
from wireguard_manager import WireGuardManager
|
||||||
|
from peer_registry import PeerRegistry
|
||||||
|
|
||||||
|
|
||||||
|
class TestServerSideAllowedIPs(unittest.TestCase):
|
||||||
|
"""Server-side peer AllowedIPs must always be peer_ip/32."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.tmp = tempfile.mkdtemp()
|
||||||
|
self.data_dir = os.path.join(self.tmp, 'data')
|
||||||
|
self.config_dir = os.path.join(self.tmp, 'config')
|
||||||
|
os.makedirs(self.data_dir)
|
||||||
|
os.makedirs(self.config_dir)
|
||||||
|
# Patch syncconf so tests don't need docker
|
||||||
|
patcher = patch.object(WireGuardManager, '_syncconf', return_value=None)
|
||||||
|
self.mock_sync = patcher.start()
|
||||||
|
self.addCleanup(patcher.stop)
|
||||||
|
self.wg = WireGuardManager(self.data_dir, self.config_dir)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.tmp)
|
||||||
|
|
||||||
|
def _config(self):
|
||||||
|
with open(self.wg._config_file()) as f:
|
||||||
|
return f.read()
|
||||||
|
|
||||||
|
def test_add_peer_uses_host_slash32(self):
|
||||||
|
"""Peer added with /32 stays as /32 in config."""
|
||||||
|
self.wg.add_peer('alice', 'ALICEPUBKEY=', '', allowed_ips='10.0.0.2/32')
|
||||||
|
cfg = self._config()
|
||||||
|
self.assertIn('AllowedIPs = 10.0.0.2/32', cfg)
|
||||||
|
|
||||||
|
def test_full_tunnel_client_ips_rejected(self):
|
||||||
|
"""add_peer must refuse 0.0.0.0/0 — it would route all internet traffic to that peer."""
|
||||||
|
result = self.wg.add_peer('bob', 'BOBPUBKEY=', '', allowed_ips='0.0.0.0/0, ::/0')
|
||||||
|
self.assertFalse(result,
|
||||||
|
"0.0.0.0/0 in server peer AllowedIPs routes ALL traffic to that peer, breaking internet")
|
||||||
|
|
||||||
|
def test_split_tunnel_client_ips_rejected(self):
|
||||||
|
"""add_peer must refuse 172.20.0.0/16 — it would route docker network to that peer."""
|
||||||
|
result = self.wg.add_peer('carol', 'CAROLPUBKEY=', '', allowed_ips='10.0.0.0/24, 172.20.0.0/16')
|
||||||
|
self.assertFalse(result,
|
||||||
|
"172.20.0.0/16 in server peer AllowedIPs routes docker network traffic to that peer")
|
||||||
|
|
||||||
|
def test_remove_peer_cleans_config(self):
|
||||||
|
self.wg.add_peer('dave', 'DAVEPUBKEY=', '', allowed_ips='10.0.0.4/32')
|
||||||
|
self.wg.remove_peer('DAVEPUBKEY=')
|
||||||
|
cfg = self._config()
|
||||||
|
self.assertNotIn('DAVEPUBKEY=', cfg)
|
||||||
|
|
||||||
|
def test_syncconf_called_on_add(self):
|
||||||
|
self.wg.add_peer('eve', 'EVEPUBKEY=', '', allowed_ips='10.0.0.5/32')
|
||||||
|
self.mock_sync.assert_called()
|
||||||
|
|
||||||
|
def test_syncconf_called_on_remove(self):
|
||||||
|
self.wg.add_peer('frank', 'FRANKPUBKEY=', '', allowed_ips='10.0.0.6/32')
|
||||||
|
self.mock_sync.reset_mock()
|
||||||
|
self.wg.remove_peer('FRANKPUBKEY=')
|
||||||
|
self.mock_sync.assert_called()
|
||||||
|
|
||||||
|
|
||||||
|
class TestAutoAssignIP(unittest.TestCase):
|
||||||
|
"""Auto-assigned peer IPs must be unique /32s starting at 10.0.0.2."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.tmp = tempfile.mkdtemp()
|
||||||
|
self.registry = PeerRegistry(data_dir=self.tmp, config_dir=self.tmp)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.tmp)
|
||||||
|
|
||||||
|
def _next_ip(self):
|
||||||
|
import ipaddress
|
||||||
|
used = {p.get('ip', '').split('/')[0] for p in self.registry.list_peers()}
|
||||||
|
for host in ipaddress.ip_network('10.0.0.0/24').hosts():
|
||||||
|
ip = str(host)
|
||||||
|
if ip != '10.0.0.1' and ip not in used:
|
||||||
|
return ip
|
||||||
|
raise ValueError('No free IPs')
|
||||||
|
|
||||||
|
def test_first_peer_gets_10_0_0_2(self):
|
||||||
|
ip = self._next_ip()
|
||||||
|
self.assertEqual(ip, '10.0.0.2')
|
||||||
|
|
||||||
|
def test_second_peer_gets_10_0_0_3(self):
|
||||||
|
self.registry.add_peer({'peer': 'p1', 'ip': '10.0.0.2'})
|
||||||
|
ip = self._next_ip()
|
||||||
|
self.assertEqual(ip, '10.0.0.3')
|
||||||
|
|
||||||
|
def test_no_duplicate_ips(self):
|
||||||
|
assigned = []
|
||||||
|
for i in range(5):
|
||||||
|
ip = self._next_ip()
|
||||||
|
self.assertNotIn(ip, assigned, f"Duplicate IP assigned: {ip}")
|
||||||
|
assigned.append(ip)
|
||||||
|
self.registry.add_peer({'peer': f'peer{i}', 'ip': ip})
|
||||||
|
|
||||||
|
def test_server_ip_never_assigned(self):
|
||||||
|
# Fill up .2 through .10
|
||||||
|
for i in range(2, 11):
|
||||||
|
self.registry.add_peer({'peer': f'p{i}', 'ip': f'10.0.0.{i}'})
|
||||||
|
ip = self._next_ip()
|
||||||
|
self.assertNotEqual(ip, '10.0.0.1', "Server IP 10.0.0.1 must never be assigned to a peer")
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
unittest.main()
|
||||||
+11
-3
@@ -38,9 +38,10 @@ class TestVaultAPI(unittest.TestCase):
|
|||||||
os.makedirs(self.config_dir, exist_ok=True)
|
os.makedirs(self.config_dir, exist_ok=True)
|
||||||
os.makedirs(self.data_dir, exist_ok=True)
|
os.makedirs(self.data_dir, exist_ok=True)
|
||||||
|
|
||||||
# Mock VaultManager
|
# Mock VaultManager on the Flask app object
|
||||||
self.vault_patcher = patch('api.vault_manager')
|
self.mock_vault = MagicMock()
|
||||||
self.mock_vault = self.vault_patcher.start()
|
self.vault_patcher = patch.object(app, 'vault_manager', self.mock_vault)
|
||||||
|
self.vault_patcher.start()
|
||||||
|
|
||||||
# Create a mock vault manager instance
|
# Create a mock vault manager instance
|
||||||
mock_vault_instance = MagicMock()
|
mock_vault_instance = MagicMock()
|
||||||
@@ -428,6 +429,7 @@ class TestVaultAPIIntegration(unittest.TestCase):
|
|||||||
|
|
||||||
def setUp(self):
|
def setUp(self):
|
||||||
"""Set up test environment."""
|
"""Set up test environment."""
|
||||||
|
from vault_manager import VaultManager
|
||||||
self.test_dir = tempfile.mkdtemp()
|
self.test_dir = tempfile.mkdtemp()
|
||||||
self.config_dir = os.path.join(self.test_dir, "config")
|
self.config_dir = os.path.join(self.test_dir, "config")
|
||||||
self.data_dir = os.path.join(self.test_dir, "data")
|
self.data_dir = os.path.join(self.test_dir, "data")
|
||||||
@@ -435,12 +437,18 @@ class TestVaultAPIIntegration(unittest.TestCase):
|
|||||||
os.makedirs(self.config_dir, exist_ok=True)
|
os.makedirs(self.config_dir, exist_ok=True)
|
||||||
os.makedirs(self.data_dir, exist_ok=True)
|
os.makedirs(self.data_dir, exist_ok=True)
|
||||||
|
|
||||||
|
# Use a real VaultManager backed by temp dirs
|
||||||
|
self._original_vault_manager = getattr(app, 'vault_manager', None)
|
||||||
|
app.vault_manager = VaultManager(data_dir=self.data_dir, config_dir=self.config_dir)
|
||||||
|
|
||||||
# Configure Flask app for testing
|
# Configure Flask app for testing
|
||||||
app.config['TESTING'] = True
|
app.config['TESTING'] = True
|
||||||
self.client = app.test_client()
|
self.client = app.test_client()
|
||||||
|
|
||||||
def tearDown(self):
|
def tearDown(self):
|
||||||
"""Clean up test environment."""
|
"""Clean up test environment."""
|
||||||
|
if self._original_vault_manager is not None:
|
||||||
|
app.vault_manager = self._original_vault_manager
|
||||||
shutil.rmtree(self.test_dir)
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
def test_full_certificate_lifecycle_api(self):
|
def test_full_certificate_lifecycle_api(self):
|
||||||
|
|||||||
+167
-36
@@ -35,6 +35,10 @@ class TestWireGuardManager(unittest.TestCase):
|
|||||||
os.makedirs(self.data_dir, exist_ok=True)
|
os.makedirs(self.data_dir, exist_ok=True)
|
||||||
os.makedirs(self.config_dir, exist_ok=True)
|
os.makedirs(self.config_dir, exist_ok=True)
|
||||||
|
|
||||||
|
patcher = patch.object(WireGuardManager, '_syncconf', return_value=None)
|
||||||
|
self.mock_sync = patcher.start()
|
||||||
|
self.addCleanup(patcher.stop)
|
||||||
|
|
||||||
# Create WireGuardManager instance
|
# Create WireGuardManager instance
|
||||||
self.wg_manager = WireGuardManager(self.data_dir, self.config_dir)
|
self.wg_manager = WireGuardManager(self.data_dir, self.config_dir)
|
||||||
|
|
||||||
@@ -104,50 +108,47 @@ class TestWireGuardManager(unittest.TestCase):
|
|||||||
self.assertIsInstance(config, str)
|
self.assertIsInstance(config, str)
|
||||||
self.assertIn('[Interface]', config)
|
self.assertIn('[Interface]', config)
|
||||||
self.assertIn('PrivateKey', config)
|
self.assertIn('PrivateKey', config)
|
||||||
self.assertIn('Address = 172.20.0.1/16', config)
|
self.assertIn('Address = 10.0.0.1/24', config)
|
||||||
self.assertIn('ListenPort = 51820', config)
|
self.assertIn('ListenPort = 51820', config)
|
||||||
self.assertIn('PostUp', config)
|
self.assertIn('PostUp', config)
|
||||||
self.assertIn('PostDown', config)
|
self.assertIn('PostDown', config)
|
||||||
|
|
||||||
def test_add_peer(self):
|
def test_add_peer(self):
|
||||||
"""Test adding a peer to WireGuard configuration"""
|
"""Test adding a peer — server-side AllowedIPs must be /32."""
|
||||||
# Generate peer keys first
|
|
||||||
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
||||||
|
|
||||||
success = self.wg_manager.add_peer(
|
success = self.wg_manager.add_peer(
|
||||||
'testpeer',
|
'testpeer',
|
||||||
peer_keys['public_key'],
|
peer_keys['public_key'],
|
||||||
'192.168.1.100',
|
'',
|
||||||
'172.20.0.0/16',
|
'10.0.0.2/32',
|
||||||
25
|
25
|
||||||
)
|
)
|
||||||
|
|
||||||
self.assertTrue(success)
|
self.assertTrue(success)
|
||||||
|
|
||||||
# Check if config file was created
|
config_file = self.wg_manager._config_file()
|
||||||
config_file = os.path.join(self.wg_manager.wireguard_dir, 'wg0.conf')
|
|
||||||
self.assertTrue(os.path.exists(config_file))
|
self.assertTrue(os.path.exists(config_file))
|
||||||
|
|
||||||
# Check config content
|
|
||||||
with open(config_file, 'r') as f:
|
with open(config_file, 'r') as f:
|
||||||
config = f.read()
|
config = f.read()
|
||||||
self.assertIn('[Peer]', config)
|
self.assertIn('[Peer]', config)
|
||||||
self.assertIn(peer_keys['public_key'], config)
|
self.assertIn(peer_keys['public_key'], config)
|
||||||
self.assertIn('AllowedIPs = 172.20.0.0/16', config)
|
self.assertIn('AllowedIPs = 10.0.0.2/32', config)
|
||||||
self.assertIn('PersistentKeepalive = 25', config)
|
self.assertIn('PersistentKeepalive = 25', config)
|
||||||
|
|
||||||
def test_remove_peer(self):
|
def test_remove_peer(self):
|
||||||
"""Test removing a peer from WireGuard configuration"""
|
"""Test removing a peer from WireGuard configuration"""
|
||||||
# Add a peer first
|
# Add a peer first
|
||||||
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
||||||
self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '192.168.1.100')
|
self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '', '10.0.0.2/32')
|
||||||
|
|
||||||
# Remove the peer
|
# Remove the peer
|
||||||
success = self.wg_manager.remove_peer(peer_keys['public_key'])
|
success = self.wg_manager.remove_peer(peer_keys['public_key'])
|
||||||
self.assertTrue(success)
|
self.assertTrue(success)
|
||||||
|
|
||||||
# Check if peer was removed
|
# Check if peer was removed
|
||||||
config_file = os.path.join(self.wg_manager.wireguard_dir, 'wg0.conf')
|
config_file = self.wg_manager._config_file()
|
||||||
with open(config_file, 'r') as f:
|
with open(config_file, 'r') as f:
|
||||||
config = f.read()
|
config = f.read()
|
||||||
self.assertNotIn(peer_keys['public_key'], config)
|
self.assertNotIn(peer_keys['public_key'], config)
|
||||||
@@ -156,7 +157,7 @@ class TestWireGuardManager(unittest.TestCase):
|
|||||||
"""Test getting list of configured peers"""
|
"""Test getting list of configured peers"""
|
||||||
# Add a peer first
|
# Add a peer first
|
||||||
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
||||||
self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '192.168.1.100')
|
self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '', '10.0.0.2/32')
|
||||||
|
|
||||||
peers = self.wg_manager.get_peers()
|
peers = self.wg_manager.get_peers()
|
||||||
|
|
||||||
@@ -221,46 +222,40 @@ class TestWireGuardManager(unittest.TestCase):
|
|||||||
|
|
||||||
def test_update_peer_ip(self):
|
def test_update_peer_ip(self):
|
||||||
"""Test updating peer IP address"""
|
"""Test updating peer IP address"""
|
||||||
# Add a peer first
|
|
||||||
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
||||||
self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '192.168.1.100')
|
self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '', '10.0.0.2/32')
|
||||||
|
|
||||||
# Update peer IP
|
success = self.wg_manager.update_peer_ip(peer_keys['public_key'], '10.0.0.9/32')
|
||||||
success = self.wg_manager.update_peer_ip(peer_keys['public_key'], '192.168.1.200')
|
|
||||||
self.assertTrue(success)
|
self.assertTrue(success)
|
||||||
|
|
||||||
# Check if IP was updated in config
|
with open(self.wg_manager._config_file(), 'r') as f:
|
||||||
config_file = os.path.join(self.wg_manager.wireguard_dir, 'wg0.conf')
|
|
||||||
with open(config_file, 'r') as f:
|
|
||||||
config = f.read()
|
config = f.read()
|
||||||
self.assertIn('192.168.1.200', config)
|
self.assertIn('10.0.0.9/32', config)
|
||||||
|
|
||||||
def test_get_peer_config(self):
|
def test_get_peer_config(self):
|
||||||
"""Test generating peer configuration"""
|
"""Test generating peer client configuration."""
|
||||||
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
peer_keys = self.wg_manager.generate_peer_keys('testpeer')
|
||||||
keys = self.wg_manager.get_keys()
|
keys = self.wg_manager.get_keys()
|
||||||
|
|
||||||
config = self.wg_manager.get_peer_config('testpeer', '192.168.1.100', peer_keys['private_key'])
|
config = self.wg_manager.get_peer_config('testpeer', '10.0.0.2', peer_keys['private_key'])
|
||||||
|
|
||||||
self.assertIsInstance(config, str)
|
self.assertIsInstance(config, str)
|
||||||
self.assertIn('[Interface]', config)
|
self.assertIn('[Interface]', config)
|
||||||
self.assertIn('[Peer]', config)
|
self.assertIn('[Peer]', config)
|
||||||
self.assertIn('PrivateKey', config)
|
self.assertIn('PrivateKey', config)
|
||||||
self.assertIn('Address = 192.168.1.100/32', config)
|
self.assertIn('Address = 10.0.0.2/32', config)
|
||||||
self.assertIn('DNS = 172.20.0.2', config)
|
self.assertIn('DNS = 172.20.0.3', config)
|
||||||
self.assertIn(keys['public_key'], config)
|
self.assertIn(keys['public_key'], config)
|
||||||
self.assertIn('AllowedIPs = 172.20.0.0/16', config)
|
self.assertIn('AllowedIPs', config)
|
||||||
|
|
||||||
def test_multiple_peers(self):
|
def test_multiple_peers(self):
|
||||||
"""Test managing multiple peers"""
|
"""Test managing multiple peers"""
|
||||||
# Add first peer
|
|
||||||
peer1_keys = self.wg_manager.generate_peer_keys('peer1')
|
peer1_keys = self.wg_manager.generate_peer_keys('peer1')
|
||||||
success1 = self.wg_manager.add_peer('peer1', peer1_keys['public_key'], '192.168.1.100')
|
success1 = self.wg_manager.add_peer('peer1', peer1_keys['public_key'], '', '10.0.0.2/32')
|
||||||
self.assertTrue(success1)
|
self.assertTrue(success1)
|
||||||
|
|
||||||
# Add second peer
|
|
||||||
peer2_keys = self.wg_manager.generate_peer_keys('peer2')
|
peer2_keys = self.wg_manager.generate_peer_keys('peer2')
|
||||||
success2 = self.wg_manager.add_peer('peer2', peer2_keys['public_key'], '192.168.1.101')
|
success2 = self.wg_manager.add_peer('peer2', peer2_keys['public_key'], '', '10.0.0.3/32')
|
||||||
self.assertTrue(success2)
|
self.assertTrue(success2)
|
||||||
|
|
||||||
# Get peers
|
# Get peers
|
||||||
@@ -310,19 +305,155 @@ PersistentKeepalive = 30
|
|||||||
self.assertEqual(peers[1]['persistent_keepalive'], 30)
|
self.assertEqual(peers[1]['persistent_keepalive'], 30)
|
||||||
|
|
||||||
def test_error_handling(self):
|
def test_error_handling(self):
|
||||||
"""Test error handling in WireGuard operations"""
|
"""Test error handling in WireGuard operations."""
|
||||||
# Test with invalid public key
|
# Wide CIDR rejected — server-side AllowedIPs must be /32
|
||||||
success = self.wg_manager.add_peer('testpeer', 'invalid_key', '192.168.1.100')
|
success = self.wg_manager.add_peer('testpeer', 'invalid_key', '', '172.20.0.0/16')
|
||||||
# Should still return True as it writes to config file
|
self.assertFalse(success, "Wide CIDR must be rejected")
|
||||||
|
|
||||||
|
# Valid /32 with any key string is accepted (key format not validated at this layer)
|
||||||
|
success = self.wg_manager.add_peer('testpeer', 'any_key_string=', '', '10.0.0.2/32')
|
||||||
self.assertTrue(success)
|
self.assertTrue(success)
|
||||||
|
|
||||||
# Test removing non-existent peer
|
# Removing non-existent peer is a no-op, not an error
|
||||||
success = self.wg_manager.remove_peer('non_existent_key')
|
success = self.wg_manager.remove_peer('non_existent_key')
|
||||||
self.assertTrue(success)
|
self.assertTrue(success)
|
||||||
|
|
||||||
# Test updating non-existent peer IP
|
# Updating IP for peer not in config returns False
|
||||||
success = self.wg_manager.update_peer_ip('non_existent_key', '192.168.1.200')
|
success = self.wg_manager.update_peer_ip('non_existent_key', '10.0.0.9/32')
|
||||||
self.assertFalse(success)
|
self.assertFalse(success)
|
||||||
|
|
||||||
|
|
||||||
|
class TestWireGuardCellPeer(unittest.TestCase):
|
||||||
|
"""Test add_cell_peer allows subnet CIDRs for site-to-site connections."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.data_dir = os.path.join(self.test_dir, 'data')
|
||||||
|
self.config_dir = os.path.join(self.test_dir, 'config')
|
||||||
|
os.makedirs(self.data_dir, exist_ok=True)
|
||||||
|
os.makedirs(self.config_dir, exist_ok=True)
|
||||||
|
patcher = patch.object(WireGuardManager, '_syncconf', return_value=None)
|
||||||
|
self.mock_sync = patcher.start()
|
||||||
|
self.addCleanup(patcher.stop)
|
||||||
|
self.wg = WireGuardManager(self.data_dir, self.config_dir)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
def test_add_cell_peer_allows_subnet_cidr(self):
|
||||||
|
ok = self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24')
|
||||||
|
self.assertTrue(ok)
|
||||||
|
content = self.wg._read_config()
|
||||||
|
self.assertIn('10.1.0.0/24', content)
|
||||||
|
|
||||||
|
def test_add_cell_peer_writes_full_endpoint(self):
|
||||||
|
self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24')
|
||||||
|
content = self.wg._read_config()
|
||||||
|
self.assertIn('Endpoint = 5.6.7.8:51821', content)
|
||||||
|
|
||||||
|
def test_add_cell_peer_comment_has_cell_prefix(self):
|
||||||
|
self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24')
|
||||||
|
content = self.wg._read_config()
|
||||||
|
self.assertIn('# cell:remote', content)
|
||||||
|
|
||||||
|
def test_add_cell_peer_invalid_cidr_returns_false(self):
|
||||||
|
ok = self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', 'not-a-cidr')
|
||||||
|
self.assertFalse(ok)
|
||||||
|
|
||||||
|
def test_add_cell_peer_can_coexist_with_regular_peers(self):
|
||||||
|
self.wg.add_peer('alice', 'alicepubkey=', '', '10.0.0.2/32')
|
||||||
|
self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24')
|
||||||
|
content = self.wg._read_config()
|
||||||
|
self.assertIn('alicepubkey=', content)
|
||||||
|
self.assertIn('remotepubkey=', content)
|
||||||
|
|
||||||
|
|
||||||
|
class TestWireGuardConfigReads(unittest.TestCase):
|
||||||
|
"""Test that port/address/network are read from wg0.conf, not hardcoded."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.test_dir = tempfile.mkdtemp()
|
||||||
|
self.data_dir = os.path.join(self.test_dir, 'data')
|
||||||
|
self.config_dir = os.path.join(self.test_dir, 'config')
|
||||||
|
os.makedirs(self.data_dir, exist_ok=True)
|
||||||
|
os.makedirs(self.config_dir, exist_ok=True)
|
||||||
|
patcher = patch.object(WireGuardManager, '_syncconf', return_value=None)
|
||||||
|
self.mock_sync = patcher.start()
|
||||||
|
self.addCleanup(patcher.stop)
|
||||||
|
self.wg = WireGuardManager(self.data_dir, self.config_dir)
|
||||||
|
|
||||||
|
def tearDown(self):
|
||||||
|
shutil.rmtree(self.test_dir)
|
||||||
|
|
||||||
|
def _write_wg_conf(self, port=51820, address='10.0.0.1/24', extra=''):
|
||||||
|
conf = (
|
||||||
|
f'[Interface]\n'
|
||||||
|
f'PrivateKey = dummykey\n'
|
||||||
|
f'Address = {address}\n'
|
||||||
|
f'ListenPort = {port}\n'
|
||||||
|
f'{extra}'
|
||||||
|
)
|
||||||
|
cf = self.wg._config_file()
|
||||||
|
os.makedirs(os.path.dirname(cf), exist_ok=True)
|
||||||
|
with open(cf, 'w') as f:
|
||||||
|
f.write(conf)
|
||||||
|
|
||||||
|
def test_get_configured_port_reads_from_wg_conf(self):
|
||||||
|
self._write_wg_conf(port=54321)
|
||||||
|
self.assertEqual(self.wg._get_configured_port(), 54321)
|
||||||
|
|
||||||
|
def test_get_configured_port_fallback_when_no_file(self):
|
||||||
|
# No wg0.conf exists — fall back to DEFAULT_PORT
|
||||||
|
self.assertEqual(self.wg._get_configured_port(), 51820)
|
||||||
|
|
||||||
|
def test_get_configured_address_reads_from_wg_conf(self):
|
||||||
|
self._write_wg_conf(address='10.1.0.1/24')
|
||||||
|
self.assertEqual(self.wg._get_configured_address(), '10.1.0.1/24')
|
||||||
|
|
||||||
|
def test_get_configured_network_derives_from_address(self):
|
||||||
|
self._write_wg_conf(address='10.1.0.1/24')
|
||||||
|
self.assertEqual(self.wg._get_configured_network(), '10.1.0.0/24')
|
||||||
|
|
||||||
|
def test_get_split_tunnel_ips_uses_configured_network(self):
|
||||||
|
self._write_wg_conf(address='10.1.0.1/24')
|
||||||
|
split = self.wg.get_split_tunnel_ips()
|
||||||
|
self.assertIn('10.1.0.0/24', split)
|
||||||
|
self.assertIn('172.20.0.0/16', split)
|
||||||
|
self.assertNotIn('10.0.0.0/24', split)
|
||||||
|
|
||||||
|
def test_get_server_config_uses_configured_port(self):
|
||||||
|
self._write_wg_conf(port=54321)
|
||||||
|
with patch.object(self.wg, 'get_external_ip', return_value='1.2.3.4'):
|
||||||
|
cfg = self.wg.get_server_config()
|
||||||
|
self.assertEqual(cfg['port'], 54321)
|
||||||
|
self.assertIn(':54321', cfg['endpoint'])
|
||||||
|
|
||||||
|
def test_get_server_config_includes_dns_and_split_tunnel(self):
|
||||||
|
self._write_wg_conf(address='10.2.0.1/24')
|
||||||
|
with patch.object(self.wg, 'get_external_ip', return_value='1.2.3.4'):
|
||||||
|
cfg = self.wg.get_server_config()
|
||||||
|
self.assertIn('dns_ip', cfg)
|
||||||
|
self.assertIn('split_tunnel_ips', cfg)
|
||||||
|
self.assertIn('10.2.0.0/24', cfg['split_tunnel_ips'])
|
||||||
|
|
||||||
|
def test_get_peer_config_uses_configured_port_in_endpoint(self):
|
||||||
|
self._write_wg_conf(port=54321)
|
||||||
|
result = self.wg.get_peer_config(
|
||||||
|
peer_name='alice',
|
||||||
|
peer_ip='10.0.0.2',
|
||||||
|
peer_private_key='privkeyalice=',
|
||||||
|
server_endpoint='5.6.7.8',
|
||||||
|
)
|
||||||
|
self.assertIn(':54321', result)
|
||||||
|
self.assertNotIn(':51820', result)
|
||||||
|
|
||||||
|
def test_add_peer_uses_configured_port_in_endpoint(self):
|
||||||
|
self._write_wg_conf(port=54321)
|
||||||
|
self.wg.add_peer('alice', 'pubkeyalice=', '5.6.7.8', '10.0.0.2/32')
|
||||||
|
content = self.wg._read_config()
|
||||||
|
self.assertIn('Endpoint = 5.6.7.8:54321', content)
|
||||||
|
self.assertNotIn(':51820', content)
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
if __name__ == '__main__':
|
||||||
unittest.main()
|
unittest.main()
|
||||||
@@ -4,6 +4,21 @@ server {
|
|||||||
root /usr/share/nginx/html;
|
root /usr/share/nginx/html;
|
||||||
index index.html;
|
index index.html;
|
||||||
|
|
||||||
|
# Proxy API and health calls to the backend container
|
||||||
|
location /api/ {
|
||||||
|
proxy_pass http://cell-api:3000/api/;
|
||||||
|
proxy_set_header Host $host;
|
||||||
|
proxy_set_header X-Real-IP $remote_addr;
|
||||||
|
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||||
|
proxy_read_timeout 60s;
|
||||||
|
}
|
||||||
|
|
||||||
|
location /health {
|
||||||
|
proxy_pass http://cell-api:3000/health;
|
||||||
|
proxy_set_header Host $host;
|
||||||
|
proxy_set_header X-Real-IP $remote_addr;
|
||||||
|
}
|
||||||
|
|
||||||
# Handle client-side routing
|
# Handle client-side routing
|
||||||
location / {
|
location / {
|
||||||
try_files $uri $uri/ /index.html;
|
try_files $uri $uri/ /index.html;
|
||||||
|
|||||||
+8
-1
@@ -13,9 +13,11 @@ import {
|
|||||||
Server,
|
Server,
|
||||||
Key,
|
Key,
|
||||||
Package2,
|
Package2,
|
||||||
Settings as SettingsIcon
|
Settings as SettingsIcon,
|
||||||
|
Link2
|
||||||
} from 'lucide-react';
|
} from 'lucide-react';
|
||||||
import { healthAPI } from './services/api';
|
import { healthAPI } from './services/api';
|
||||||
|
import { ConfigProvider } from './contexts/ConfigContext';
|
||||||
import Sidebar from './components/Sidebar';
|
import Sidebar from './components/Sidebar';
|
||||||
import Dashboard from './pages/Dashboard';
|
import Dashboard from './pages/Dashboard';
|
||||||
import Peers from './pages/Peers';
|
import Peers from './pages/Peers';
|
||||||
@@ -29,6 +31,7 @@ import Logs from './pages/Logs';
|
|||||||
import Settings from './pages/Settings';
|
import Settings from './pages/Settings';
|
||||||
import Vault from './pages/Vault';
|
import Vault from './pages/Vault';
|
||||||
import ContainerDashboard from './components/ContainerDashboard';
|
import ContainerDashboard from './components/ContainerDashboard';
|
||||||
|
import CellNetwork from './pages/CellNetwork';
|
||||||
|
|
||||||
function App() {
|
function App() {
|
||||||
const [isOnline, setIsOnline] = useState(false);
|
const [isOnline, setIsOnline] = useState(false);
|
||||||
@@ -64,6 +67,7 @@ function App() {
|
|||||||
{ name: 'Routing', href: '/routing', icon: Wifi },
|
{ name: 'Routing', href: '/routing', icon: Wifi },
|
||||||
{ name: 'Vault', href: '/vault', icon: Key },
|
{ name: 'Vault', href: '/vault', icon: Key },
|
||||||
{ name: 'Containers', href: '/containers', icon: Package2 },
|
{ name: 'Containers', href: '/containers', icon: Package2 },
|
||||||
|
{ name: 'Cell Network', href: '/cell-network', icon: Link2 },
|
||||||
{ name: 'Logs', href: '/logs', icon: Activity },
|
{ name: 'Logs', href: '/logs', icon: Activity },
|
||||||
{ name: 'Settings', href: '/settings', icon: SettingsIcon },
|
{ name: 'Settings', href: '/settings', icon: SettingsIcon },
|
||||||
];
|
];
|
||||||
@@ -81,6 +85,7 @@ function App() {
|
|||||||
|
|
||||||
return (
|
return (
|
||||||
<Router>
|
<Router>
|
||||||
|
<ConfigProvider>
|
||||||
<div className="min-h-screen bg-gray-50">
|
<div className="min-h-screen bg-gray-50">
|
||||||
<Sidebar navigation={navigation} isOnline={isOnline} />
|
<Sidebar navigation={navigation} isOnline={isOnline} />
|
||||||
|
|
||||||
@@ -119,6 +124,7 @@ function App() {
|
|||||||
<Route path="/routing" element={<Routing />} />
|
<Route path="/routing" element={<Routing />} />
|
||||||
<Route path="/vault" element={<Vault />} />
|
<Route path="/vault" element={<Vault />} />
|
||||||
<Route path="/containers" element={<ContainerDashboard />} />
|
<Route path="/containers" element={<ContainerDashboard />} />
|
||||||
|
<Route path="/cell-network" element={<CellNetwork />} />
|
||||||
<Route path="/logs" element={<Logs />} />
|
<Route path="/logs" element={<Logs />} />
|
||||||
<Route path="/settings" element={<Settings />} />
|
<Route path="/settings" element={<Settings />} />
|
||||||
</Routes>
|
</Routes>
|
||||||
@@ -126,6 +132,7 @@ function App() {
|
|||||||
</main>
|
</main>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
</ConfigProvider>
|
||||||
</Router>
|
</Router>
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -0,0 +1,22 @@
|
|||||||
|
import { createContext, useContext, useState, useEffect, useCallback } from 'react';
|
||||||
|
import { cellAPI } from '../services/api';
|
||||||
|
|
||||||
|
const ConfigContext = createContext({ domain: 'cell', cell_name: 'mycell' });
|
||||||
|
|
||||||
|
export function ConfigProvider({ children }) {
|
||||||
|
const [config, setConfig] = useState({ domain: 'cell', cell_name: 'mycell' });
|
||||||
|
|
||||||
|
const refresh = useCallback(() => {
|
||||||
|
cellAPI.getConfig().then(r => setConfig(r.data)).catch(() => {});
|
||||||
|
}, []);
|
||||||
|
|
||||||
|
useEffect(() => { refresh(); }, [refresh]);
|
||||||
|
|
||||||
|
return (
|
||||||
|
<ConfigContext.Provider value={{ ...config, refresh }}>
|
||||||
|
{children}
|
||||||
|
</ConfigContext.Provider>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
export const useConfig = () => useContext(ConfigContext);
|
||||||
@@ -1,8 +1,39 @@
|
|||||||
import { useState, useEffect } from 'react';
|
import { useState, useEffect } from 'react';
|
||||||
import { Calendar as CalendarIcon, Users, Clock } from 'lucide-react';
|
import { Calendar as CalendarIcon, Users, Wifi, Copy, CheckCheck } from 'lucide-react';
|
||||||
import { calendarAPI } from '../services/api';
|
import { calendarAPI } from '../services/api';
|
||||||
|
import { useConfig } from '../contexts/ConfigContext';
|
||||||
|
|
||||||
|
const CELL_IP = '172.20.0.21';
|
||||||
|
|
||||||
|
function CopyButton({ text }) {
|
||||||
|
const [copied, setCopied] = useState(false);
|
||||||
|
const copy = () => {
|
||||||
|
navigator.clipboard.writeText(text);
|
||||||
|
setCopied(true);
|
||||||
|
setTimeout(() => setCopied(false), 1500);
|
||||||
|
};
|
||||||
|
return (
|
||||||
|
<button onClick={copy} className="ml-2 text-gray-400 hover:text-gray-600" title="Copy">
|
||||||
|
{copied ? <CheckCheck className="h-3.5 w-3.5 text-green-500" /> : <Copy className="h-3.5 w-3.5" />}
|
||||||
|
</button>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function InfoRow({ label, value }) {
|
||||||
|
return (
|
||||||
|
<div className="flex items-center justify-between py-1.5 border-b border-gray-100 last:border-0">
|
||||||
|
<span className="text-sm text-gray-500 w-32 shrink-0">{label}</span>
|
||||||
|
<div className="flex items-center">
|
||||||
|
<span className="text-sm font-mono font-medium text-gray-800">{value}</span>
|
||||||
|
<CopyButton text={value} />
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
function Calendar() {
|
function Calendar() {
|
||||||
|
const { domain = 'cell' } = useConfig();
|
||||||
|
const cellHost = `calendar.${domain}`;
|
||||||
const [users, setUsers] = useState([]);
|
const [users, setUsers] = useState([]);
|
||||||
const [status, setStatus] = useState(null);
|
const [status, setStatus] = useState(null);
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
@@ -17,7 +48,6 @@ function Calendar() {
|
|||||||
calendarAPI.getUsers(),
|
calendarAPI.getUsers(),
|
||||||
calendarAPI.getStatus()
|
calendarAPI.getStatus()
|
||||||
]);
|
]);
|
||||||
|
|
||||||
setUsers(usersResponse.data);
|
setUsers(usersResponse.data);
|
||||||
setStatus(statusResponse.data);
|
setStatus(statusResponse.data);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -38,13 +68,67 @@ function Calendar() {
|
|||||||
return (
|
return (
|
||||||
<div>
|
<div>
|
||||||
<div className="mb-8">
|
<div className="mb-8">
|
||||||
<h1 className="text-2xl font-bold text-gray-900">Calendar Services</h1>
|
<h1 className="text-2xl font-bold text-gray-900">Calendar & Contacts</h1>
|
||||||
<p className="mt-2 text-gray-600">
|
<p className="mt-2 text-gray-600">Radicale CalDAV / CardDAV server</p>
|
||||||
Manage Radicale CalDAV and CardDAV services
|
|
||||||
</p>
|
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
||||||
|
{/* Connection Info */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<Wifi className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Connect your device</h3>
|
||||||
|
</div>
|
||||||
|
<p className="text-xs text-gray-500 mb-3">
|
||||||
|
Use these settings in your calendar / contacts app (iOS, Android, Thunderbird, etc.)
|
||||||
|
</p>
|
||||||
|
<div className="bg-gray-50 rounded-lg px-4 py-2">
|
||||||
|
<InfoRow label="Server URL" value={`http://${cellHost}`} />
|
||||||
|
<InfoRow label="CalDAV path" value={`http://${cellHost}/`} />
|
||||||
|
<InfoRow label="CardDAV path" value={`http://${cellHost}/`} />
|
||||||
|
<InfoRow label="Port" value="80" />
|
||||||
|
<InfoRow label="Direct IP" value={CELL_IP} />
|
||||||
|
<InfoRow label="Protocol" value="HTTP (CalDAV/CardDAV)" />
|
||||||
|
</div>
|
||||||
|
<p className="text-xs text-gray-400 mt-3">
|
||||||
|
Requires VPN connection. DNS server must be set to <span className="font-mono">172.20.0.3</span>.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* iOS / Android quick guide */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<CalendarIcon className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Quick setup guide</h3>
|
||||||
|
</div>
|
||||||
|
<div className="space-y-3 text-sm text-gray-700">
|
||||||
|
<div>
|
||||||
|
<p className="font-medium text-gray-900 mb-1">iOS (Settings → Calendar → Accounts)</p>
|
||||||
|
<ol className="list-decimal ml-4 space-y-0.5 text-xs text-gray-600">
|
||||||
|
<li>Add Account → Other → Add CalDAV Account</li>
|
||||||
|
<li>Server: <span className="font-mono">{cellHost}</span></li>
|
||||||
|
<li>Enter username & password</li>
|
||||||
|
<li>For contacts: Add CardDAV Account, same server</li>
|
||||||
|
</ol>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="font-medium text-gray-900 mb-1">Android (DAVx⁵ app)</p>
|
||||||
|
<ol className="list-decimal ml-4 space-y-0.5 text-xs text-gray-600">
|
||||||
|
<li>Install DAVx⁵ from Play Store / F-Droid</li>
|
||||||
|
<li>Login with URL: <span className="font-mono">http://{cellHost}/</span></li>
|
||||||
|
<li>Select calendars & address books to sync</li>
|
||||||
|
</ol>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="font-medium text-gray-900 mb-1">Thunderbird</p>
|
||||||
|
<ol className="list-decimal ml-4 space-y-0.5 text-xs text-gray-600">
|
||||||
|
<li>Calendar → New Calendar → On the Network</li>
|
||||||
|
<li>Format: CalDAV, Location: <span className="font-mono">http://{cellHost}/</span></li>
|
||||||
|
</ol>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
{/* Status */}
|
{/* Status */}
|
||||||
<div className="card">
|
<div className="card">
|
||||||
<div className="flex items-center mb-4">
|
<div className="flex items-center mb-4">
|
||||||
|
|||||||
@@ -0,0 +1,323 @@
|
|||||||
|
import { useState, useEffect } from 'react';
|
||||||
|
import { Link2, Link2Off, Copy, CheckCheck, RefreshCw, Plug, Unplug, Globe, Wifi } from 'lucide-react';
|
||||||
|
import { cellLinkAPI } from '../services/api';
|
||||||
|
import { useConfig } from '../contexts/ConfigContext';
|
||||||
|
import QRCode from 'qrcode';
|
||||||
|
|
||||||
|
function CopyButton({ text, small }) {
|
||||||
|
const [copied, setCopied] = useState(false);
|
||||||
|
const copy = () => {
|
||||||
|
navigator.clipboard.writeText(text);
|
||||||
|
setCopied(true);
|
||||||
|
setTimeout(() => setCopied(false), 1500);
|
||||||
|
};
|
||||||
|
const sz = small ? 'h-3.5 w-3.5' : 'h-4 w-4';
|
||||||
|
return (
|
||||||
|
<button onClick={copy} className="text-gray-400 hover:text-gray-600 ml-1.5" title="Copy">
|
||||||
|
{copied ? <CheckCheck className={`${sz} text-green-500`} /> : <Copy className={sz} />}
|
||||||
|
</button>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function StatusDot({ online }) {
|
||||||
|
if (online === null || online === undefined) {
|
||||||
|
return <span className="inline-block h-2 w-2 rounded-full bg-gray-300 mr-1.5" title="Unknown" />;
|
||||||
|
}
|
||||||
|
return online
|
||||||
|
? <span className="inline-block h-2 w-2 rounded-full bg-green-500 mr-1.5" title="Online" />
|
||||||
|
: <span className="inline-block h-2 w-2 rounded-full bg-red-400 mr-1.5" title="Offline" />;
|
||||||
|
}
|
||||||
|
|
||||||
|
function Toast({ toasts }) {
|
||||||
|
return (
|
||||||
|
<div className="fixed bottom-4 right-4 z-50 space-y-2">
|
||||||
|
{toasts.map(t => (
|
||||||
|
<div key={t.id} className={`px-4 py-3 rounded-lg shadow-lg text-sm text-white flex items-center gap-2 ${
|
||||||
|
t.type === 'error' ? 'bg-red-600' : 'bg-green-600'
|
||||||
|
}`}>
|
||||||
|
{t.msg}
|
||||||
|
</div>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function useToasts() {
|
||||||
|
const [toasts, setToasts] = useState([]);
|
||||||
|
const add = (msg, type = 'success') => {
|
||||||
|
const id = Date.now();
|
||||||
|
setToasts(p => [...p, { id, msg, type }]);
|
||||||
|
setTimeout(() => setToasts(p => p.filter(t => t.id !== id)), 4000);
|
||||||
|
};
|
||||||
|
return [toasts, add];
|
||||||
|
}
|
||||||
|
|
||||||
|
export default function CellNetwork() {
|
||||||
|
const { cell_name = 'mycell', domain = 'cell' } = useConfig();
|
||||||
|
const [toasts, addToast] = useToasts();
|
||||||
|
|
||||||
|
const [invite, setInvite] = useState(null);
|
||||||
|
const [inviteQr, setInviteQr] = useState('');
|
||||||
|
const [inviteLoading, setInviteLoading] = useState(true);
|
||||||
|
|
||||||
|
const [connections, setConnections] = useState([]);
|
||||||
|
const [connsLoading, setConnsLoading] = useState(true);
|
||||||
|
|
||||||
|
const [pasteText, setPasteText] = useState('');
|
||||||
|
const [connecting, setConnecting] = useState(false);
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
loadInvite();
|
||||||
|
loadConnections();
|
||||||
|
}, []);
|
||||||
|
|
||||||
|
const loadInvite = async () => {
|
||||||
|
setInviteLoading(true);
|
||||||
|
try {
|
||||||
|
const r = await cellLinkAPI.getInvite();
|
||||||
|
setInvite(r.data);
|
||||||
|
const qr = await QRCode.toDataURL(JSON.stringify(r.data), { width: 200, margin: 1 });
|
||||||
|
setInviteQr(qr);
|
||||||
|
} catch (e) {
|
||||||
|
addToast('Failed to load invite', 'error');
|
||||||
|
} finally {
|
||||||
|
setInviteLoading(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const loadConnections = async () => {
|
||||||
|
setConnsLoading(true);
|
||||||
|
try {
|
||||||
|
const r = await cellLinkAPI.listConnections();
|
||||||
|
// Enrich with live status
|
||||||
|
const enriched = await Promise.all(
|
||||||
|
(r.data || []).map(async (conn) => {
|
||||||
|
try {
|
||||||
|
const s = await cellLinkAPI.getStatus(conn.cell_name);
|
||||||
|
return { ...conn, online: s.data.online, last_handshake: s.data.last_handshake };
|
||||||
|
} catch {
|
||||||
|
return { ...conn, online: false };
|
||||||
|
}
|
||||||
|
})
|
||||||
|
);
|
||||||
|
setConnections(enriched);
|
||||||
|
} catch {
|
||||||
|
addToast('Failed to load connections', 'error');
|
||||||
|
} finally {
|
||||||
|
setConnsLoading(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleConnect = async () => {
|
||||||
|
if (!pasteText.trim()) return;
|
||||||
|
let parsed;
|
||||||
|
try {
|
||||||
|
parsed = JSON.parse(pasteText.trim());
|
||||||
|
} catch {
|
||||||
|
addToast('Invalid JSON — paste the full invite from the other cell', 'error');
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
setConnecting(true);
|
||||||
|
try {
|
||||||
|
await cellLinkAPI.addConnection(parsed);
|
||||||
|
addToast(`Connected to cell "${parsed.cell_name}"`);
|
||||||
|
setPasteText('');
|
||||||
|
loadConnections();
|
||||||
|
} catch (e) {
|
||||||
|
addToast(e?.response?.data?.error || 'Connection failed', 'error');
|
||||||
|
} finally {
|
||||||
|
setConnecting(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleDisconnect = async (name) => {
|
||||||
|
if (!window.confirm(`Disconnect from cell "${name}"?`)) return;
|
||||||
|
try {
|
||||||
|
await cellLinkAPI.removeConnection(name);
|
||||||
|
addToast(`Disconnected from "${name}"`);
|
||||||
|
loadConnections();
|
||||||
|
} catch (e) {
|
||||||
|
addToast(e?.response?.data?.error || 'Disconnect failed', 'error');
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const inviteJson = invite ? JSON.stringify(invite, null, 2) : '';
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
<Toast toasts={toasts} />
|
||||||
|
|
||||||
|
<div className="mb-8">
|
||||||
|
<h1 className="text-2xl font-bold text-gray-900">Cell Network</h1>
|
||||||
|
<p className="mt-2 text-gray-600">
|
||||||
|
Connect multiple PIC cells into a mesh — site-to-site WireGuard tunnels with automatic DNS forwarding.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
||||||
|
|
||||||
|
{/* ── This cell's invite ─────────────────────────────────────────── */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center justify-between mb-4">
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
<Plug className="h-5 w-5 text-primary-500" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Your Cell's Invite</h3>
|
||||||
|
</div>
|
||||||
|
<button onClick={loadInvite} className="text-gray-400 hover:text-gray-600" title="Refresh">
|
||||||
|
<RefreshCw className="h-4 w-4" />
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{inviteLoading ? (
|
||||||
|
<div className="flex justify-center py-8">
|
||||||
|
<div className="animate-spin rounded-full h-6 w-6 border-b-2 border-primary-600" />
|
||||||
|
</div>
|
||||||
|
) : invite ? (
|
||||||
|
<div className="space-y-4">
|
||||||
|
<div className="bg-gray-50 rounded-lg p-3 space-y-1 text-xs">
|
||||||
|
<div className="flex justify-between">
|
||||||
|
<span className="text-gray-500">Cell</span>
|
||||||
|
<span className="font-mono font-medium">{invite.cell_name}</span>
|
||||||
|
</div>
|
||||||
|
<div className="flex justify-between">
|
||||||
|
<span className="text-gray-500">Domain</span>
|
||||||
|
<span className="font-mono font-medium">{invite.domain}</span>
|
||||||
|
</div>
|
||||||
|
<div className="flex justify-between">
|
||||||
|
<span className="text-gray-500">Endpoint</span>
|
||||||
|
<span className="font-mono font-medium">{invite.endpoint || '(no external IP)'}</span>
|
||||||
|
</div>
|
||||||
|
<div className="flex justify-between">
|
||||||
|
<span className="text-gray-500">VPN subnet</span>
|
||||||
|
<span className="font-mono font-medium">{invite.vpn_subnet}</span>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div>
|
||||||
|
<div className="flex items-center justify-between mb-1">
|
||||||
|
<span className="text-sm text-gray-600">Invite JSON</span>
|
||||||
|
<CopyButton text={inviteJson} />
|
||||||
|
</div>
|
||||||
|
<pre className="bg-gray-900 text-green-400 text-xs rounded-lg p-3 overflow-x-auto max-h-40">
|
||||||
|
{inviteJson}
|
||||||
|
</pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{inviteQr && (
|
||||||
|
<div className="text-center">
|
||||||
|
<p className="text-xs text-gray-500 mb-2">Or scan with phone camera</p>
|
||||||
|
<img src={inviteQr} alt="Invite QR" className="inline-block border rounded-lg p-1 bg-white" />
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
|
||||||
|
<p className="text-xs text-gray-400">
|
||||||
|
Share this JSON with the admin of another PIC cell. They paste it in "Connect to Cell" on their side.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
) : (
|
||||||
|
<p className="text-gray-500 text-sm">Could not load invite.</p>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* ── Connect to another cell ────────────────────────────────────── */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center gap-2 mb-4">
|
||||||
|
<Link2 className="h-5 w-5 text-primary-500" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Connect to Another Cell</h3>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="space-y-3">
|
||||||
|
<p className="text-sm text-gray-600">
|
||||||
|
Paste the invite JSON from the other cell's "Your Cell's Invite" panel:
|
||||||
|
</p>
|
||||||
|
<textarea
|
||||||
|
value={pasteText}
|
||||||
|
onChange={e => setPasteText(e.target.value)}
|
||||||
|
placeholder={'{\n "cell_name": "...",\n "public_key": "...",\n ...\n}'}
|
||||||
|
rows={8}
|
||||||
|
className="w-full text-xs font-mono border rounded-lg p-3 focus:outline-none focus:ring-2 focus:ring-primary-400 resize-none bg-white"
|
||||||
|
/>
|
||||||
|
<button
|
||||||
|
onClick={handleConnect}
|
||||||
|
disabled={connecting || !pasteText.trim()}
|
||||||
|
className="w-full btn btn-primary flex items-center justify-center gap-2 disabled:opacity-50"
|
||||||
|
>
|
||||||
|
{connecting
|
||||||
|
? <><div className="animate-spin rounded-full h-4 w-4 border-b-2 border-white" /> Connecting…</>
|
||||||
|
: <><Link2 className="h-4 w-4" /> Connect</>
|
||||||
|
}
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div className="mt-4 bg-blue-50 border border-blue-200 rounded-lg p-3">
|
||||||
|
<p className="text-xs text-blue-800 font-medium mb-1">How it works</p>
|
||||||
|
<ol className="text-xs text-blue-700 space-y-1 list-decimal list-inside">
|
||||||
|
<li>Cell A copies its invite and sends it to Cell B's admin</li>
|
||||||
|
<li>Cell B pastes the invite and clicks Connect</li>
|
||||||
|
<li>Cell B copies its invite and sends it back to Cell A</li>
|
||||||
|
<li>Cell A pastes Cell B's invite and clicks Connect</li>
|
||||||
|
<li>Both cells can now reach each other's VPN peers and services</li>
|
||||||
|
</ol>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* ── Connected cells ────────────────────────────────────────────── */}
|
||||||
|
<div className="card lg:col-span-2">
|
||||||
|
<div className="flex items-center justify-between mb-4">
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
<Globe className="h-5 w-5 text-primary-500" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Connected Cells</h3>
|
||||||
|
</div>
|
||||||
|
<button onClick={loadConnections} className="text-gray-400 hover:text-gray-600" title="Refresh">
|
||||||
|
<RefreshCw className="h-4 w-4" />
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{connsLoading ? (
|
||||||
|
<div className="flex justify-center py-6">
|
||||||
|
<div className="animate-spin rounded-full h-6 w-6 border-b-2 border-primary-600" />
|
||||||
|
</div>
|
||||||
|
) : connections.length === 0 ? (
|
||||||
|
<div className="text-center py-8 text-gray-400">
|
||||||
|
<Wifi className="h-10 w-10 mx-auto mb-2 opacity-30" />
|
||||||
|
<p className="text-sm">No cells connected yet.</p>
|
||||||
|
<p className="text-xs mt-1">Use the panels above to establish the first cell link.</p>
|
||||||
|
</div>
|
||||||
|
) : (
|
||||||
|
<div className="space-y-3">
|
||||||
|
{connections.map(conn => (
|
||||||
|
<div key={conn.cell_name}
|
||||||
|
className="flex items-center justify-between p-3 bg-gray-50 rounded-lg border border-gray-100">
|
||||||
|
<div className="flex items-center gap-3">
|
||||||
|
<StatusDot online={conn.online} />
|
||||||
|
<div>
|
||||||
|
<div className="flex items-center gap-1.5">
|
||||||
|
<span className="font-medium text-gray-900">{conn.cell_name}</span>
|
||||||
|
<span className="text-xs text-gray-400 font-mono">.{conn.domain}</span>
|
||||||
|
</div>
|
||||||
|
<div className="text-xs text-gray-500 space-x-3 mt-0.5">
|
||||||
|
<span>Subnet: <span className="font-mono">{conn.vpn_subnet}</span></span>
|
||||||
|
<span>Endpoint: <span className="font-mono">{conn.endpoint || '—'}</span></span>
|
||||||
|
{conn.last_handshake && (
|
||||||
|
<span>Last seen: {new Date(conn.last_handshake * 1000).toLocaleString()}</span>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<button
|
||||||
|
onClick={() => handleDisconnect(conn.cell_name)}
|
||||||
|
className="text-red-400 hover:text-red-600 p-1.5 rounded hover:bg-red-50"
|
||||||
|
title="Disconnect"
|
||||||
|
>
|
||||||
|
<Unplug className="h-4 w-4" />
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
@@ -17,9 +17,17 @@ import {
|
|||||||
RotateCcw
|
RotateCcw
|
||||||
} from 'lucide-react';
|
} from 'lucide-react';
|
||||||
import { cellAPI, servicesAPI } from '../services/api';
|
import { cellAPI, servicesAPI } from '../services/api';
|
||||||
|
import { useConfig } from '../contexts/ConfigContext';
|
||||||
|
|
||||||
function Dashboard({ isOnline }) {
|
function Dashboard({ isOnline }) {
|
||||||
const navigate = useNavigate();
|
const navigate = useNavigate();
|
||||||
|
const { domain = 'cell', cell_name = 'mycell' } = useConfig();
|
||||||
|
const SERVICES = [
|
||||||
|
{ name: 'Cell Home', url: `http://${cell_name}.${domain}`, desc: 'Main UI — no login needed' },
|
||||||
|
{ name: 'Calendar', url: `http://calendar.${domain}`, desc: 'Login: your WireGuard username' },
|
||||||
|
{ name: 'Files', url: `http://files.${domain}`, desc: 'Login: admin / admin123' },
|
||||||
|
{ name: 'Webmail', url: `http://mail.${domain}`, desc: 'Login: admin@rainloop.net / 12345' },
|
||||||
|
];
|
||||||
const [cellStatus, setCellStatus] = useState(null);
|
const [cellStatus, setCellStatus] = useState(null);
|
||||||
const [servicesStatus, setServicesStatus] = useState(null);
|
const [servicesStatus, setServicesStatus] = useState(null);
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
@@ -203,11 +211,29 @@ function Dashboard({ isOnline }) {
|
|||||||
|
|
||||||
return (
|
return (
|
||||||
<div>
|
<div>
|
||||||
<div className="mb-8">
|
<div className="mb-6">
|
||||||
<h1 className="text-2xl font-bold text-gray-900">Dashboard</h1>
|
<h1 className="text-2xl font-bold text-gray-900">Dashboard</h1>
|
||||||
<p className="mt-2 text-gray-600">
|
<p className="mt-1 text-gray-600">Personal Internet Cell — connect via WireGuard to access services</p>
|
||||||
Overview of your Personal Internet Cell status and services
|
</div>
|
||||||
</p>
|
|
||||||
|
{/* Access Services — shown first, no scroll needed */}
|
||||||
|
<div className="mb-8">
|
||||||
|
<h2 className="text-sm font-semibold text-gray-500 uppercase tracking-wide mb-3">Services (connect via WireGuard first)</h2>
|
||||||
|
<div className="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-4 gap-3">
|
||||||
|
{SERVICES.map(svc => (
|
||||||
|
<a
|
||||||
|
key={svc.url}
|
||||||
|
href={svc.url}
|
||||||
|
target="_blank"
|
||||||
|
rel="noopener noreferrer"
|
||||||
|
className="card hover:shadow-md transition-shadow group border border-gray-100"
|
||||||
|
>
|
||||||
|
<p className="text-sm font-semibold text-primary-700 group-hover:text-primary-900">{svc.name}</p>
|
||||||
|
<p className="font-mono text-xs text-gray-400 mt-0.5 truncate">{svc.url}</p>
|
||||||
|
<p className="text-xs text-gray-500 mt-1">{svc.desc}</p>
|
||||||
|
</a>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
{/* Cell Status */}
|
{/* Cell Status */}
|
||||||
|
|||||||
+82
-10
@@ -1,8 +1,39 @@
|
|||||||
import { useState, useEffect } from 'react';
|
import { useState, useEffect } from 'react';
|
||||||
import { Mail, Users, Send } from 'lucide-react';
|
import { Mail, Users, Wifi, Copy, CheckCheck } from 'lucide-react';
|
||||||
import { emailAPI } from '../services/api';
|
import { emailAPI } from '../services/api';
|
||||||
|
import { useConfig } from '../contexts/ConfigContext';
|
||||||
|
|
||||||
|
const CELL_IP = '172.20.0.23';
|
||||||
|
|
||||||
|
function CopyButton({ text }) {
|
||||||
|
const [copied, setCopied] = useState(false);
|
||||||
|
const copy = () => {
|
||||||
|
navigator.clipboard.writeText(text);
|
||||||
|
setCopied(true);
|
||||||
|
setTimeout(() => setCopied(false), 1500);
|
||||||
|
};
|
||||||
|
return (
|
||||||
|
<button onClick={copy} className="ml-2 text-gray-400 hover:text-gray-600" title="Copy">
|
||||||
|
{copied ? <CheckCheck className="h-3.5 w-3.5 text-green-500" /> : <Copy className="h-3.5 w-3.5" />}
|
||||||
|
</button>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function InfoRow({ label, value }) {
|
||||||
|
return (
|
||||||
|
<div className="flex items-center justify-between py-1.5 border-b border-gray-100 last:border-0">
|
||||||
|
<span className="text-sm text-gray-500 w-36 shrink-0">{label}</span>
|
||||||
|
<div className="flex items-center">
|
||||||
|
<span className="text-sm font-mono font-medium text-gray-800">{value}</span>
|
||||||
|
<CopyButton text={value} />
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
function Email() {
|
function Email() {
|
||||||
|
const { domain = 'cell' } = useConfig();
|
||||||
|
const cellHost = `mail.${domain}`;
|
||||||
const [users, setUsers] = useState([]);
|
const [users, setUsers] = useState([]);
|
||||||
const [status, setStatus] = useState(null);
|
const [status, setStatus] = useState(null);
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
@@ -17,7 +48,6 @@ function Email() {
|
|||||||
emailAPI.getUsers(),
|
emailAPI.getUsers(),
|
||||||
emailAPI.getStatus()
|
emailAPI.getStatus()
|
||||||
]);
|
]);
|
||||||
|
|
||||||
setUsers(usersResponse.data);
|
setUsers(usersResponse.data);
|
||||||
setStatus(statusResponse.data);
|
setStatus(statusResponse.data);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -39,12 +69,54 @@ function Email() {
|
|||||||
<div>
|
<div>
|
||||||
<div className="mb-8">
|
<div className="mb-8">
|
||||||
<h1 className="text-2xl font-bold text-gray-900">Email Services</h1>
|
<h1 className="text-2xl font-bold text-gray-900">Email Services</h1>
|
||||||
<p className="mt-2 text-gray-600">
|
<p className="mt-2 text-gray-600">Postfix (SMTP) + Dovecot (IMAP)</p>
|
||||||
Manage Postfix and Dovecot email services
|
|
||||||
</p>
|
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
||||||
|
{/* Incoming mail */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<Wifi className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Incoming mail (IMAP)</h3>
|
||||||
|
</div>
|
||||||
|
<div className="bg-gray-50 rounded-lg px-4 py-2">
|
||||||
|
<InfoRow label="Server" value={cellHost} />
|
||||||
|
<InfoRow label="Port" value={String(status?.imap_port ?? 993)} />
|
||||||
|
<InfoRow label="Security" value="SSL/TLS" />
|
||||||
|
<InfoRow label="Direct IP" value={CELL_IP} />
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Outgoing mail */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<Mail className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Outgoing mail (SMTP)</h3>
|
||||||
|
</div>
|
||||||
|
<div className="bg-gray-50 rounded-lg px-4 py-2">
|
||||||
|
<InfoRow label="Server" value={cellHost} />
|
||||||
|
<InfoRow label="Port" value={String(status?.smtp_port ?? 587)} />
|
||||||
|
<InfoRow label="Security" value="STARTTLS" />
|
||||||
|
<InfoRow label="Auth" value="Username + Password" />
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* Webmail */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<Mail className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Webmail</h3>
|
||||||
|
</div>
|
||||||
|
<div className="bg-gray-50 rounded-lg px-4 py-2">
|
||||||
|
<InfoRow label="URL" value={`http://mail.${domain}`} />
|
||||||
|
<InfoRow label="Alt URL" value={`http://webmail.${domain}`} />
|
||||||
|
<InfoRow label="Direct IP" value={`http://${CELL_IP}`} />
|
||||||
|
</div>
|
||||||
|
<p className="text-xs text-gray-400 mt-3">
|
||||||
|
Requires VPN + DNS set to <span className="font-mono">172.20.0.3</span>.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
{/* Status */}
|
{/* Status */}
|
||||||
<div className="card">
|
<div className="card">
|
||||||
<div className="flex items-center mb-4">
|
<div className="flex items-center mb-4">
|
||||||
@@ -54,11 +126,11 @@ function Email() {
|
|||||||
{status ? (
|
{status ? (
|
||||||
<div className="space-y-2">
|
<div className="space-y-2">
|
||||||
<div className="flex justify-between">
|
<div className="flex justify-between">
|
||||||
<span className="text-sm text-gray-500">Postfix:</span>
|
<span className="text-sm text-gray-500">Postfix (SMTP):</span>
|
||||||
<span className="text-sm font-medium text-success-600">Running</span>
|
<span className="text-sm font-medium text-success-600">Running</span>
|
||||||
</div>
|
</div>
|
||||||
<div className="flex justify-between">
|
<div className="flex justify-between">
|
||||||
<span className="text-sm text-gray-500">Dovecot:</span>
|
<span className="text-sm text-gray-500">Dovecot (IMAP):</span>
|
||||||
<span className="text-sm font-medium text-success-600">Running</span>
|
<span className="text-sm font-medium text-success-600">Running</span>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
@@ -68,10 +140,10 @@ function Email() {
|
|||||||
</div>
|
</div>
|
||||||
|
|
||||||
{/* Users */}
|
{/* Users */}
|
||||||
<div className="card">
|
<div className="card lg:col-span-2">
|
||||||
<div className="flex items-center mb-4">
|
<div className="flex items-center mb-4">
|
||||||
<Users className="h-6 w-6 text-primary-500 mr-2" />
|
<Users className="h-6 w-6 text-primary-500 mr-2" />
|
||||||
<h3 className="text-lg font-medium text-gray-900">Email Users</h3>
|
<h3 className="text-lg font-medium text-gray-900">Email Accounts</h3>
|
||||||
</div>
|
</div>
|
||||||
<div className="space-y-2">
|
<div className="space-y-2">
|
||||||
{users.length > 0 ? (
|
{users.length > 0 ? (
|
||||||
@@ -82,7 +154,7 @@ function Email() {
|
|||||||
</div>
|
</div>
|
||||||
))
|
))
|
||||||
) : (
|
) : (
|
||||||
<p className="text-gray-500 text-sm">No email users configured</p>
|
<p className="text-gray-500 text-sm">No email accounts configured</p>
|
||||||
)}
|
)}
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|||||||
+108
-21
@@ -1,8 +1,41 @@
|
|||||||
import { useState, useEffect } from 'react';
|
import { useState, useEffect } from 'react';
|
||||||
import { FolderOpen, Users, HardDrive } from 'lucide-react';
|
import { FolderOpen, Users, HardDrive, Wifi, Copy, CheckCheck } from 'lucide-react';
|
||||||
import { fileAPI } from '../services/api';
|
import { fileAPI } from '../services/api';
|
||||||
|
import { useConfig } from '../contexts/ConfigContext';
|
||||||
|
|
||||||
|
const FILES_IP = '172.20.0.22';
|
||||||
|
const WEBDAV_IP = '172.20.0.24';
|
||||||
|
|
||||||
|
function CopyButton({ text }) {
|
||||||
|
const [copied, setCopied] = useState(false);
|
||||||
|
const copy = () => {
|
||||||
|
navigator.clipboard.writeText(text);
|
||||||
|
setCopied(true);
|
||||||
|
setTimeout(() => setCopied(false), 1500);
|
||||||
|
};
|
||||||
|
return (
|
||||||
|
<button onClick={copy} className="ml-2 text-gray-400 hover:text-gray-600" title="Copy">
|
||||||
|
{copied ? <CheckCheck className="h-3.5 w-3.5 text-green-500" /> : <Copy className="h-3.5 w-3.5" />}
|
||||||
|
</button>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function InfoRow({ label, value }) {
|
||||||
|
return (
|
||||||
|
<div className="flex items-center justify-between py-1.5 border-b border-gray-100 last:border-0">
|
||||||
|
<span className="text-sm text-gray-500 w-36 shrink-0">{label}</span>
|
||||||
|
<div className="flex items-center">
|
||||||
|
<span className="text-sm font-mono font-medium text-gray-800">{value}</span>
|
||||||
|
<CopyButton text={value} />
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
function Files() {
|
function Files() {
|
||||||
|
const { domain = 'cell' } = useConfig();
|
||||||
|
const filesHost = `files.${domain}`;
|
||||||
|
const webdavHost = `webdav.${domain}`;
|
||||||
const [users, setUsers] = useState([]);
|
const [users, setUsers] = useState([]);
|
||||||
const [status, setStatus] = useState(null);
|
const [status, setStatus] = useState(null);
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
@@ -17,7 +50,6 @@ function Files() {
|
|||||||
fileAPI.getUsers(),
|
fileAPI.getUsers(),
|
||||||
fileAPI.getStatus()
|
fileAPI.getStatus()
|
||||||
]);
|
]);
|
||||||
|
|
||||||
setUsers(usersResponse.data);
|
setUsers(usersResponse.data);
|
||||||
setStatus(statusResponse.data);
|
setStatus(statusResponse.data);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
@@ -39,12 +71,69 @@ function Files() {
|
|||||||
<div>
|
<div>
|
||||||
<div className="mb-8">
|
<div className="mb-8">
|
||||||
<h1 className="text-2xl font-bold text-gray-900">File Storage</h1>
|
<h1 className="text-2xl font-bold text-gray-900">File Storage</h1>
|
||||||
<p className="mt-2 text-gray-600">
|
<p className="mt-2 text-gray-600">FileGator (browser) + WebDAV (native clients)</p>
|
||||||
Manage WebDAV file storage services
|
|
||||||
</p>
|
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
||||||
|
{/* File Manager */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<Wifi className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Web file manager</h3>
|
||||||
|
</div>
|
||||||
|
<div className="bg-gray-50 rounded-lg px-4 py-2">
|
||||||
|
<InfoRow label="URL" value={`http://${filesHost}`} />
|
||||||
|
<InfoRow label="Direct IP" value={`http://${FILES_IP}`} />
|
||||||
|
<InfoRow label="Port" value="80" />
|
||||||
|
</div>
|
||||||
|
<p className="text-xs text-gray-400 mt-3">
|
||||||
|
Browser-based file manager. Requires VPN.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* WebDAV */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<FolderOpen className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">WebDAV (mount as drive)</h3>
|
||||||
|
</div>
|
||||||
|
<div className="bg-gray-50 rounded-lg px-4 py-2">
|
||||||
|
<InfoRow label="URL" value={`http://${webdavHost}`} />
|
||||||
|
<InfoRow label="Direct IP" value={`http://${WEBDAV_IP}`} />
|
||||||
|
<InfoRow label="Port" value="80" />
|
||||||
|
<InfoRow label="Auth" value="Basic (user / password)" />
|
||||||
|
</div>
|
||||||
|
<p className="text-xs text-gray-400 mt-3">
|
||||||
|
Mount in macOS Finder, Windows Explorer, or any WebDAV client.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{/* OS quick guide */}
|
||||||
|
<div className="card">
|
||||||
|
<div className="flex items-center mb-4">
|
||||||
|
<HardDrive className="h-5 w-5 text-primary-500 mr-2" />
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Mount as network drive</h3>
|
||||||
|
</div>
|
||||||
|
<div className="space-y-3 text-sm">
|
||||||
|
<div>
|
||||||
|
<p className="font-medium text-gray-900 mb-1">macOS (Finder)</p>
|
||||||
|
<p className="text-xs text-gray-600">Go → Connect to Server → <span className="font-mono">http://{webdavHost}</span></p>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="font-medium text-gray-900 mb-1">Windows</p>
|
||||||
|
<p className="text-xs text-gray-600">Map Network Drive → <span className="font-mono">\\{webdavHost}\DavWWWRoot</span> or use <span className="font-mono">http://{webdavHost}</span> in "Connect to a Web Site"</p>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="font-medium text-gray-900 mb-1">iOS (Files app)</p>
|
||||||
|
<p className="text-xs text-gray-600">Files → ... → Connect to Server → <span className="font-mono">http://{webdavHost}</span></p>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="font-medium text-gray-900 mb-1">Android</p>
|
||||||
|
<p className="text-xs text-gray-600">Use <strong>Solid Explorer</strong> or <strong>FX File Explorer</strong> → Add cloud → WebDAV → <span className="font-mono">http://{webdavHost}</span></p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
{/* Status */}
|
{/* Status */}
|
||||||
<div className="card">
|
<div className="card">
|
||||||
<div className="flex items-center mb-4">
|
<div className="flex items-center mb-4">
|
||||||
@@ -54,12 +143,12 @@ function Files() {
|
|||||||
{status ? (
|
{status ? (
|
||||||
<div className="space-y-2">
|
<div className="space-y-2">
|
||||||
<div className="flex justify-between">
|
<div className="flex justify-between">
|
||||||
<span className="text-sm text-gray-500">WebDAV:</span>
|
<span className="text-sm text-gray-500">FileGator:</span>
|
||||||
<span className="text-sm font-medium text-success-600">Running</span>
|
<span className="text-sm font-medium text-success-600">Running</span>
|
||||||
</div>
|
</div>
|
||||||
<div className="flex justify-between">
|
<div className="flex justify-between">
|
||||||
<span className="text-sm text-gray-500">Storage:</span>
|
<span className="text-sm text-gray-500">WebDAV:</span>
|
||||||
<span className="text-sm font-medium text-success-600">Available</span>
|
<span className="text-sm font-medium text-success-600">Running</span>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
) : (
|
) : (
|
||||||
@@ -68,24 +157,22 @@ function Files() {
|
|||||||
</div>
|
</div>
|
||||||
|
|
||||||
{/* Users */}
|
{/* Users */}
|
||||||
<div className="card">
|
{users.length > 0 && (
|
||||||
<div className="flex items-center mb-4">
|
<div className="card lg:col-span-2">
|
||||||
<Users className="h-6 w-6 text-primary-500 mr-2" />
|
<div className="flex items-center mb-4">
|
||||||
<h3 className="text-lg font-medium text-gray-900">Storage Users</h3>
|
<Users className="h-6 w-6 text-primary-500 mr-2" />
|
||||||
</div>
|
<h3 className="text-lg font-medium text-gray-900">Storage Users</h3>
|
||||||
<div className="space-y-2">
|
</div>
|
||||||
{users.length > 0 ? (
|
<div className="space-y-2">
|
||||||
users.map((user, index) => (
|
{users.map((user, index) => (
|
||||||
<div key={index} className="flex justify-between items-center p-2 bg-gray-50 rounded">
|
<div key={index} className="flex justify-between items-center p-2 bg-gray-50 rounded">
|
||||||
<span className="text-sm font-medium">{user.username}</span>
|
<span className="text-sm font-medium">{user.username}</span>
|
||||||
<span className="text-sm text-gray-500">{user.storage_used || '0'} MB</span>
|
<span className="text-sm text-gray-500">{user.storage_used || '0'} MB</span>
|
||||||
</div>
|
</div>
|
||||||
))
|
))}
|
||||||
) : (
|
</div>
|
||||||
<p className="text-gray-500 text-sm">No storage users configured</p>
|
|
||||||
)}
|
|
||||||
</div>
|
</div>
|
||||||
</div>
|
)}
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
);
|
);
|
||||||
|
|||||||
+444
-141
@@ -1,164 +1,467 @@
|
|||||||
import { useState, useEffect } from 'react';
|
import { useState, useEffect, useRef, useCallback } from 'react';
|
||||||
import { Activity, Clock, FileText, AlertTriangle } from 'lucide-react';
|
import { Activity, FileText, AlertTriangle, Search, RefreshCw, RotateCcw, Box, Settings } from 'lucide-react';
|
||||||
import { monitoringAPI } from '../services/api';
|
import { monitoringAPI, logsAPI, containerAPI } from '../services/api';
|
||||||
|
|
||||||
function Logs() {
|
const API_SERVICES = ['ALL', 'network', 'wireguard', 'routing', 'email', 'calendar', 'files', 'vault', 'api'];
|
||||||
const [backendLog, setBackendLog] = useState('');
|
const LEVELS = ['ALL', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'];
|
||||||
const [healthHistory, setHealthHistory] = useState([]);
|
const LEVEL_COLORS = {
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
DEBUG: 'text-gray-500',
|
||||||
const [tab, setTab] = useState('logs');
|
INFO: 'text-blue-400',
|
||||||
|
WARNING: 'text-yellow-400',
|
||||||
|
ERROR: 'text-red-400',
|
||||||
|
CRITICAL: 'text-red-500 font-bold',
|
||||||
|
};
|
||||||
|
|
||||||
|
function LevelBadge({ level }) {
|
||||||
|
const cls = LEVEL_COLORS[level?.toUpperCase()] || 'text-gray-400';
|
||||||
|
return <span className={`font-mono text-xs shrink-0 ${cls}`}>[{level || '?'}]</span>;
|
||||||
|
}
|
||||||
|
|
||||||
|
function LogLine({ entry }) {
|
||||||
|
if (!entry || entry.raw_line !== undefined)
|
||||||
|
return <div className="font-mono text-xs text-gray-300 py-0.5 break-all">{entry?.raw_line ?? ''}</div>;
|
||||||
|
return (
|
||||||
|
<div className="font-mono text-xs py-0.5 flex gap-2 flex-wrap">
|
||||||
|
<span className="text-gray-500 shrink-0">{String(entry.timestamp ?? '').slice(0, 19)}</span>
|
||||||
|
<LevelBadge level={entry.level} />
|
||||||
|
{entry.service && <span className="text-purple-400 shrink-0">[{entry.service}]</span>}
|
||||||
|
<span className="text-gray-200 break-all">{entry.message ?? ''}</span>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Tab 1: API Service Logs ─────────────────────────────────────────────────
|
||||||
|
// These are structured JSON logs written by Python service managers.
|
||||||
|
// Stored in /app/api/data/logs/ (persisted to ./data/logs/ on the host via volume mount).
|
||||||
|
function ApiServiceLogsTab() {
|
||||||
|
const [service, setService] = useState('ALL');
|
||||||
|
const [level, setLevel] = useState('ALL');
|
||||||
|
const [lines, setLines] = useState(100);
|
||||||
|
const [query, setQuery] = useState('');
|
||||||
|
const [logs, setLogs] = useState([]);
|
||||||
|
const [loading, setLoading] = useState(false);
|
||||||
|
const [autoRefresh, setAutoRefresh] = useState(false);
|
||||||
|
const [fileInfos, setFileInfos] = useState([]);
|
||||||
|
const [rotating, setRotating] = useState(null);
|
||||||
|
const [showFiles, setShowFiles] = useState(false);
|
||||||
|
const intervalRef = useRef(null);
|
||||||
|
|
||||||
|
const doFetch = useCallback(async () => {
|
||||||
|
setLoading(true);
|
||||||
|
try {
|
||||||
|
const allSvcs = API_SERVICES.filter(s => s !== 'ALL');
|
||||||
|
if (service === 'ALL' || query) {
|
||||||
|
const res = await logsAPI.searchLogs({
|
||||||
|
query: query || '',
|
||||||
|
services: service === 'ALL' ? allSvcs : [service],
|
||||||
|
level: level === 'ALL' ? undefined : level,
|
||||||
|
});
|
||||||
|
setLogs(res.data.results || []);
|
||||||
|
} else {
|
||||||
|
const res = await logsAPI.getServiceLogs(service, level, lines);
|
||||||
|
const raw = res.data.logs || [];
|
||||||
|
const parsed = raw.map(l => { try { return JSON.parse(l); } catch { return { raw_line: l }; } });
|
||||||
|
setLogs(parsed.reverse());
|
||||||
|
}
|
||||||
|
} catch (e) {
|
||||||
|
setLogs([{ raw_line: `Error: ${e.message}` }]);
|
||||||
|
} finally {
|
||||||
|
setLoading(false);
|
||||||
|
}
|
||||||
|
}, [service, level, lines, query]);
|
||||||
|
|
||||||
|
useEffect(() => { doFetch(); }, [service, level, lines]);
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
fetchData();
|
if (autoRefresh) intervalRef.current = setInterval(doFetch, 5000);
|
||||||
}, []);
|
else clearInterval(intervalRef.current);
|
||||||
|
return () => clearInterval(intervalRef.current);
|
||||||
|
}, [autoRefresh, doFetch]);
|
||||||
|
|
||||||
const fetchData = async () => {
|
const loadFileInfos = async () => {
|
||||||
setIsLoading(true);
|
try { setFileInfos((await logsAPI.getLogFiles()).data || []); } catch {}
|
||||||
try {
|
|
||||||
const [logRes, healthRes] = await Promise.all([
|
|
||||||
monitoringAPI.getBackendLogs(100),
|
|
||||||
monitoringAPI.getHealthHistory(),
|
|
||||||
]);
|
|
||||||
setBackendLog(logRes.data.log || '');
|
|
||||||
setHealthHistory(healthRes.data || []);
|
|
||||||
} catch (error) {
|
|
||||||
console.error('Failed to fetch monitoring data:', error);
|
|
||||||
} finally {
|
|
||||||
setIsLoading(false);
|
|
||||||
}
|
|
||||||
};
|
};
|
||||||
|
|
||||||
if (isLoading) {
|
const toggleFiles = () => {
|
||||||
return (
|
if (!showFiles) loadFileInfos();
|
||||||
<div className="flex items-center justify-center h-64">
|
setShowFiles(v => !v);
|
||||||
<div className="animate-spin rounded-full h-8 w-8 border-b-2 border-primary-600"></div>
|
};
|
||||||
</div>
|
|
||||||
);
|
const rotate = async (service) => {
|
||||||
}
|
if (!window.confirm(`Rotate log for ${service || 'all services'}?\nCurrent file will be archived.`)) return;
|
||||||
|
setRotating(service || 'all');
|
||||||
|
try { await logsAPI.rotateLogs(service || null); await loadFileInfos(); } catch {}
|
||||||
|
setRotating(null);
|
||||||
|
};
|
||||||
|
|
||||||
|
const fmtSize = b => !b ? '0 B' : b < 1024 ? `${b} B` : b < 1048576 ? `${(b/1024).toFixed(1)} KB` : `${(b/1048576).toFixed(2)} MB`;
|
||||||
|
|
||||||
return (
|
return (
|
||||||
<div>
|
<div className="space-y-3">
|
||||||
<div className="mb-8">
|
{/* Controls */}
|
||||||
<h1 className="text-2xl font-bold text-gray-900">System Monitoring</h1>
|
<div className="flex flex-wrap gap-2 items-center">
|
||||||
<p className="mt-2 text-gray-600">
|
<select className="border rounded px-2 py-1 text-sm" value={service} onChange={e => setService(e.target.value)}>
|
||||||
View backend logs and health history
|
{API_SERVICES.map(s => <option key={s} value={s}>{s === 'ALL' ? 'ALL services' : s}</option>)}
|
||||||
</p>
|
</select>
|
||||||
</div>
|
<select className="border rounded px-2 py-1 text-sm" value={level} onChange={e => setLevel(e.target.value)}>
|
||||||
|
{LEVELS.map(l => <option key={l} value={l}>{l}</option>)}
|
||||||
<div className="mb-4 flex gap-4">
|
</select>
|
||||||
<button
|
{service !== 'ALL' && !query && (
|
||||||
className={`px-4 py-2 rounded ${tab === 'logs' ? 'bg-primary-600 text-white' : 'bg-gray-200 text-gray-800'}`}
|
<select className="border rounded px-2 py-1 text-sm" value={lines} onChange={e => setLines(Number(e.target.value))}>
|
||||||
onClick={() => setTab('logs')}
|
{[50, 100, 200, 500].map(n => <option key={n} value={n}>{n} lines</option>)}
|
||||||
>
|
</select>
|
||||||
<FileText className="inline-block mr-2" /> Backend Logs
|
)}
|
||||||
|
<div className="flex gap-1 flex-1 min-w-40">
|
||||||
|
<input
|
||||||
|
className="border rounded px-2 py-1 text-sm flex-1"
|
||||||
|
placeholder="Search…"
|
||||||
|
value={query}
|
||||||
|
onChange={e => setQuery(e.target.value)}
|
||||||
|
onKeyDown={e => e.key === 'Enter' && doFetch()}
|
||||||
|
/>
|
||||||
|
<button className="btn btn-secondary px-2 py-1 text-sm" onClick={doFetch}><Search className="h-4 w-4" /></button>
|
||||||
|
{query && <button className="btn btn-secondary px-2 py-1 text-sm" onClick={() => setQuery('')}>✕</button>}
|
||||||
|
</div>
|
||||||
|
<button className={`btn px-2 py-1 text-sm ${autoRefresh ? 'btn-primary' : 'btn-secondary'}`} title="Auto-refresh 5s" onClick={() => setAutoRefresh(v => !v)}>
|
||||||
|
<RefreshCw className={`h-4 w-4 ${autoRefresh ? 'animate-spin' : ''}`} />
|
||||||
</button>
|
</button>
|
||||||
<button
|
<button className="btn btn-secondary px-2 py-1 text-sm" onClick={doFetch}><RefreshCw className="h-4 w-4" /></button>
|
||||||
className={`px-4 py-2 rounded ${tab === 'health' ? 'bg-primary-600 text-white' : 'bg-gray-200 text-gray-800'}`}
|
<button className="btn btn-secondary px-2 py-1 text-sm" onClick={toggleFiles} title="Files & Rotation">
|
||||||
onClick={() => setTab('health')}
|
<RotateCcw className="h-4 w-4" />
|
||||||
>
|
|
||||||
<Clock className="inline-block mr-2" /> Health History
|
|
||||||
</button>
|
</button>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
{tab === 'logs' && (
|
{/* File info panel */}
|
||||||
<div className="card">
|
{showFiles && (
|
||||||
<div className="flex items-center mb-4">
|
<div className="border rounded bg-gray-50 p-3">
|
||||||
<FileText className="h-6 w-6 text-primary-500 mr-2" />
|
<div className="flex justify-between items-center mb-1">
|
||||||
<h3 className="text-lg font-medium text-gray-900">Backend Logs (last 100 lines)</h3>
|
<div>
|
||||||
</div>
|
<span className="text-sm font-medium text-gray-700">Log Files</span>
|
||||||
<div className="bg-gray-900 text-green-400 p-4 rounded-lg font-mono text-sm h-96 overflow-y-auto">
|
<span className="ml-2 text-xs text-gray-400">host path: <code>./data/logs/</code> — rotated backups saved as <code>wireguard.log.1</code>, <code>wireguard.log.2</code> …</span>
|
||||||
<pre>{backendLog || 'No logs available.'}</pre>
|
</div>
|
||||||
|
<button className="btn btn-secondary text-xs px-2 py-0.5" onClick={() => rotate(null)} disabled={rotating === 'all'}>
|
||||||
|
<RotateCcw className={`h-3 w-3 inline mr-1 ${rotating === 'all' ? 'animate-spin' : ''}`} />Rotate All
|
||||||
|
</button>
|
||||||
</div>
|
</div>
|
||||||
|
<table className="w-full text-xs">
|
||||||
|
<thead><tr className="text-gray-500"><th className="text-left py-1">File</th><th className="text-right py-1">Size</th><th className="text-left py-1 pl-3">Modified</th><th className="text-center py-1"></th></tr></thead>
|
||||||
|
<tbody>
|
||||||
|
{fileInfos.map(f => (
|
||||||
|
<tr key={f.file} className={`border-t ${f.backup ? 'text-gray-400' : ''}`}>
|
||||||
|
<td className="py-1 font-mono">{f.file}</td>
|
||||||
|
<td className="py-1 text-right font-mono">{fmtSize(f.size)}</td>
|
||||||
|
<td className="py-1 pl-3 text-gray-500">{f.modified?.slice(0, 19)}</td>
|
||||||
|
<td className="py-1 text-center">
|
||||||
|
{!f.backup && (
|
||||||
|
<button className="btn btn-secondary px-1.5 py-0.5 text-xs" onClick={() => rotate(f.name)} disabled={rotating === f.name}>
|
||||||
|
<RotateCcw className={`h-3 w-3 ${rotating === f.name ? 'animate-spin' : ''}`} />
|
||||||
|
</button>
|
||||||
|
)}
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
{fileInfos.length === 0 && <tr><td colSpan={4} className="text-gray-400 py-2 text-center">No log files found.</td></tr>}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
</div>
|
</div>
|
||||||
)}
|
)}
|
||||||
|
|
||||||
{tab === 'health' && (
|
{/* Log output */}
|
||||||
<div className="card">
|
<div className="bg-gray-900 rounded-lg p-3 h-[500px] overflow-y-auto">
|
||||||
<div className="flex items-center mb-4">
|
{loading && !logs.length ? (
|
||||||
<Clock className="h-6 w-6 text-primary-500 mr-2" />
|
<div className="text-gray-400 text-sm">Loading…</div>
|
||||||
<h3 className="text-lg font-medium text-gray-900">Health History (last 100 checks)</h3>
|
) : !logs.length ? (
|
||||||
</div>
|
<div className="text-gray-500 text-sm">No entries found.</div>
|
||||||
<div className="overflow-x-auto">
|
) : (
|
||||||
<table className="min-w-full text-sm">
|
logs.map((e, i) => <LogLine key={i} entry={e} />)
|
||||||
<thead>
|
)}
|
||||||
<tr className="bg-gray-100">
|
</div>
|
||||||
<th className="px-2 py-1 text-left">Timestamp</th>
|
<div className="text-xs text-gray-400">{logs.length} entries</div>
|
||||||
<th className="px-2 py-1 text-left">Network</th>
|
</div>
|
||||||
<th className="px-2 py-1 text-left">WireGuard</th>
|
);
|
||||||
<th className="px-2 py-1 text-left">Email</th>
|
}
|
||||||
<th className="px-2 py-1 text-left">Calendar</th>
|
|
||||||
<th className="px-2 py-1 text-left">Files</th>
|
// ── Tab 2: Container Logs ───────────────────────────────────────────────────
|
||||||
<th className="px-2 py-1 text-left">Routing</th>
|
// Container stdout/stderr read live via `docker logs`.
|
||||||
<th className="px-2 py-1 text-left">Vault</th>
|
// Docker itself rotates these files (json-file driver, max-size 10m, max-file 5 — configured in docker-compose.yml).
|
||||||
<th className="px-2 py-1 text-left">Alerts</th>
|
function ContainerLogsTab() {
|
||||||
</tr>
|
const [containers, setContainers] = useState([]);
|
||||||
</thead>
|
const [selected, setSelected] = useState('cell-api');
|
||||||
<tbody>
|
const [tail, setTail] = useState(100);
|
||||||
{healthHistory.map((h, i) => (
|
const [lines, setLines] = useState([]);
|
||||||
<tr key={i} className={h.alerts && h.alerts.length > 0 ? 'bg-red-100' : ''}>
|
const [loading, setLoading] = useState(false);
|
||||||
<td className="px-2 py-1 font-mono">{h.timestamp}</td>
|
const [autoRefresh, setAutoRefresh] = useState(false);
|
||||||
<td className="px-2 py-1">
|
const intervalRef = useRef(null);
|
||||||
{h.network?.status === 'online' || h.network?.running === true ?
|
|
||||||
<span className="text-green-600">OK</span> :
|
useEffect(() => {
|
||||||
<span className="text-red-600 font-bold">Down</span>
|
containerAPI.listContainers()
|
||||||
}
|
.then(res => {
|
||||||
</td>
|
const names = (res.data || [])
|
||||||
<td className="px-2 py-1">
|
.map(c => c.name || c.Names?.[0]?.replace('/', ''))
|
||||||
{h.wireguard?.status === 'online' || h.wireguard?.running === true ?
|
.filter(Boolean).sort();
|
||||||
<span className="text-green-600">OK</span> :
|
setContainers(names);
|
||||||
<span className="text-red-600 font-bold">Down</span>
|
if (names.length && !names.includes(selected)) setSelected(names[0]);
|
||||||
}
|
})
|
||||||
</td>
|
.catch(() => {});
|
||||||
<td className="px-2 py-1">
|
}, []);
|
||||||
{h.email?.status === 'online' || h.email?.running === true ?
|
|
||||||
<span className="text-green-600">OK</span> :
|
const doFetch = useCallback(async () => {
|
||||||
<span className="text-red-600 font-bold">Down</span>
|
if (!selected) return;
|
||||||
}
|
setLoading(true);
|
||||||
</td>
|
try {
|
||||||
<td className="px-2 py-1">
|
const res = await containerAPI.getContainerLogs(selected, tail);
|
||||||
{h.calendar?.status === 'online' || h.calendar?.running === true ?
|
const raw = res.data.logs || '';
|
||||||
<span className="text-green-600">OK</span> :
|
setLines(typeof raw === 'string' ? raw.split('\n').filter(Boolean) : raw);
|
||||||
<span className="text-red-600 font-bold">Down</span>
|
} catch (e) {
|
||||||
}
|
setLines([`Error: ${e.message}`]);
|
||||||
</td>
|
} finally {
|
||||||
<td className="px-2 py-1">
|
setLoading(false);
|
||||||
{h.files?.status === 'online' || h.files?.running === true ?
|
}
|
||||||
<span className="text-green-600">OK</span> :
|
}, [selected, tail]);
|
||||||
<span className="text-red-600 font-bold">Down</span>
|
|
||||||
}
|
useEffect(() => { doFetch(); }, [selected, tail]);
|
||||||
</td>
|
|
||||||
<td className="px-2 py-1">
|
useEffect(() => {
|
||||||
{h.routing?.status === 'online' || h.routing?.running === true ?
|
if (autoRefresh) intervalRef.current = setInterval(doFetch, 5000);
|
||||||
<span className="text-green-600">OK</span> :
|
else clearInterval(intervalRef.current);
|
||||||
<span className="text-red-600 font-bold">Down</span>
|
return () => clearInterval(intervalRef.current);
|
||||||
}
|
}, [autoRefresh, doFetch]);
|
||||||
</td>
|
|
||||||
<td className="px-2 py-1">
|
return (
|
||||||
{h.vault?.status === 'online' || h.vault?.running === true ?
|
<div className="space-y-3">
|
||||||
<span className="text-green-600">OK</span> :
|
<div className="text-xs text-gray-500 bg-gray-50 rounded px-3 py-2">
|
||||||
<span className="text-red-600 font-bold">Down</span>
|
Live stdout/stderr from Docker. Rotation is automatic: <code>json-file</code> driver,
|
||||||
}
|
<strong> 10 MB max-size, 5 backups</strong> per container — configured in <code>docker-compose.yml</code>.
|
||||||
</td>
|
</div>
|
||||||
<td className="px-2 py-1">
|
<div className="flex flex-wrap gap-2 items-center">
|
||||||
{h.alerts && h.alerts.length > 0 ? (
|
<select className="border rounded px-2 py-1 text-sm" value={selected} onChange={e => setSelected(e.target.value)}>
|
||||||
<div className="flex flex-col gap-1">
|
{(containers.length ? containers : ['cell-api']).map(c => <option key={c} value={c}>{c}</option>)}
|
||||||
{h.alerts.map((a, j) => (
|
</select>
|
||||||
<span key={j} className="text-red-700 font-semibold flex items-center"><AlertTriangle className="inline-block h-4 w-4 mr-1 text-red-500" />{a}</span>
|
<select className="border rounded px-2 py-1 text-sm" value={tail} onChange={e => setTail(Number(e.target.value))}>
|
||||||
))}
|
{[50, 100, 200, 500].map(n => <option key={n} value={n}>{n} lines</option>)}
|
||||||
</div>
|
</select>
|
||||||
) : (
|
<button className={`btn px-2 py-1 text-sm ${autoRefresh ? 'btn-primary' : 'btn-secondary'}`} title="Auto-refresh 5s" onClick={() => setAutoRefresh(v => !v)}>
|
||||||
<span className="text-green-600">None</span>
|
<RefreshCw className={`h-4 w-4 ${autoRefresh ? 'animate-spin' : ''}`} />
|
||||||
)}
|
</button>
|
||||||
</td>
|
<button className="btn btn-secondary px-2 py-1 text-sm" onClick={doFetch}><RefreshCw className="h-4 w-4" /></button>
|
||||||
</tr>
|
</div>
|
||||||
|
<div className="bg-gray-900 text-green-300 rounded-lg p-3 h-[500px] overflow-y-auto font-mono text-xs">
|
||||||
|
{loading && !lines.length ? (
|
||||||
|
<span className="text-gray-400">Loading…</span>
|
||||||
|
) : !lines.length ? (
|
||||||
|
<span className="text-gray-500">No output.</span>
|
||||||
|
) : (
|
||||||
|
lines.map((l, i) => <div key={i} className="py-0.5 break-all">{l}</div>)
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
<div className="text-xs text-gray-400">{lines.length} lines</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Tab 3: Verbosity Config ─────────────────────────────────────────────────
|
||||||
|
function VerbosityTab() {
|
||||||
|
const [levels, setLevels] = useState({});
|
||||||
|
const [pending, setPending] = useState({});
|
||||||
|
const [loading, setLoading] = useState(false);
|
||||||
|
const [saving, setSaving] = useState(false);
|
||||||
|
const [msg, setMsg] = useState('');
|
||||||
|
|
||||||
|
const load = async () => {
|
||||||
|
setLoading(true);
|
||||||
|
try {
|
||||||
|
const res = await logsAPI.getVerbosity();
|
||||||
|
setLevels(res.data || {});
|
||||||
|
setPending(res.data || {});
|
||||||
|
} catch (e) {
|
||||||
|
setMsg(`Error: ${e.message}`);
|
||||||
|
} finally {
|
||||||
|
setLoading(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
useEffect(() => { load(); }, []);
|
||||||
|
|
||||||
|
const save = async () => {
|
||||||
|
const changed = Object.fromEntries(
|
||||||
|
Object.entries(pending).filter(([k, v]) => v !== levels[k])
|
||||||
|
);
|
||||||
|
if (!Object.keys(changed).length) { setMsg('No changes.'); return; }
|
||||||
|
setSaving(true);
|
||||||
|
setMsg('');
|
||||||
|
try {
|
||||||
|
const res = await logsAPI.setVerbosity(changed);
|
||||||
|
setLevels(res.data.levels || pending);
|
||||||
|
setMsg('Levels saved and applied.');
|
||||||
|
} catch (e) {
|
||||||
|
setMsg(`Error: ${e.message}`);
|
||||||
|
} finally {
|
||||||
|
setSaving(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const services = Object.keys(pending).sort();
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div className="space-y-4 max-w-lg">
|
||||||
|
<div className="text-xs text-gray-500 bg-gray-50 rounded px-3 py-2">
|
||||||
|
Changes apply immediately to the running API — no restart needed. Levels are persisted to
|
||||||
|
<code> config/log_levels.json</code> and restored on container restart.
|
||||||
|
</div>
|
||||||
|
|
||||||
|
{loading ? <div className="text-gray-500 text-sm">Loading…</div> : (
|
||||||
|
<table className="w-full text-sm">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-gray-100">
|
||||||
|
<th className="px-3 py-2 text-left">Service</th>
|
||||||
|
<th className="px-3 py-2 text-left">Log Level</th>
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody>
|
||||||
|
{services.map(svc => (
|
||||||
|
<tr key={svc} className="border-t">
|
||||||
|
<td className="px-3 py-2 font-medium">{svc}</td>
|
||||||
|
<td className="px-3 py-2">
|
||||||
|
<select
|
||||||
|
className="border rounded px-2 py-1 text-sm"
|
||||||
|
value={pending[svc] || 'INFO'}
|
||||||
|
onChange={e => setPending(p => ({ ...p, [svc]: e.target.value }))}
|
||||||
|
>
|
||||||
|
{['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'].map(l => (
|
||||||
|
<option key={l} value={l}>{l}</option>
|
||||||
|
))}
|
||||||
|
</select>
|
||||||
|
{pending[svc] !== levels[svc] && (
|
||||||
|
<span className="ml-2 text-xs text-yellow-600">changed</span>
|
||||||
|
)}
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
)}
|
||||||
|
|
||||||
|
<div className="flex items-center gap-3">
|
||||||
|
<button className="btn btn-primary text-sm" onClick={save} disabled={saving}>
|
||||||
|
{saving ? 'Saving…' : 'Apply Changes'}
|
||||||
|
</button>
|
||||||
|
<button className="btn btn-secondary text-sm" onClick={load}>Reset</button>
|
||||||
|
{msg && <span className="text-sm text-gray-600">{msg}</span>}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Tab 4: Health History ───────────────────────────────────────────────────
|
||||||
|
function HealthHistoryTab() {
|
||||||
|
const [history, setHistory] = useState([]);
|
||||||
|
const [loading, setLoading] = useState(false);
|
||||||
|
|
||||||
|
const load = async () => {
|
||||||
|
setLoading(true);
|
||||||
|
try { setHistory((await monitoringAPI.getHealthHistory()).data || []); } catch {}
|
||||||
|
setLoading(false);
|
||||||
|
};
|
||||||
|
|
||||||
|
useEffect(() => { load(); }, []);
|
||||||
|
|
||||||
|
// health history entries have shape: { status: {running, status}, healthy, connectivity, ... }
|
||||||
|
const SvcCol = ({ data }) => {
|
||||||
|
const running = data?.status?.running === true || data?.status?.status === 'online'
|
||||||
|
|| data?.running === true || data?.status === 'online';
|
||||||
|
return running
|
||||||
|
? <span className="text-green-600">OK</span>
|
||||||
|
: <span className="text-red-600 font-bold">Down</span>;
|
||||||
|
};
|
||||||
|
|
||||||
|
return (
|
||||||
|
<div className="space-y-4">
|
||||||
|
<div className="flex justify-between items-center">
|
||||||
|
<h3 className="text-lg font-medium text-gray-900">Health History</h3>
|
||||||
|
<div className="flex gap-2">
|
||||||
|
<button className="btn btn-secondary text-sm" onClick={load}><RefreshCw className="h-4 w-4 mr-1 inline" />Refresh</button>
|
||||||
|
<button className="btn btn-secondary text-sm text-red-600" onClick={async () => {
|
||||||
|
if (!window.confirm('Clear all health history and reset alert counters?')) return;
|
||||||
|
await monitoringAPI.clearHealthHistory();
|
||||||
|
await load();
|
||||||
|
}}>Clear</button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
{loading ? <div className="text-gray-500 text-sm">Loading…</div> : (
|
||||||
|
<div className="overflow-x-auto">
|
||||||
|
<table className="min-w-full text-sm">
|
||||||
|
<thead>
|
||||||
|
<tr className="bg-gray-100">
|
||||||
|
{['Timestamp','Network','WireGuard','Email','Calendar','Files','Routing','Vault','Alerts'].map(h => (
|
||||||
|
<th key={h} className="px-2 py-1 text-left">{h}</th>
|
||||||
))}
|
))}
|
||||||
</tbody>
|
</tr>
|
||||||
</table>
|
</thead>
|
||||||
</div>
|
<tbody>
|
||||||
|
{history.map((h, i) => (
|
||||||
|
<tr key={i} className={h.alerts?.length ? 'bg-red-50' : ''}>
|
||||||
|
<td className="px-2 py-1 font-mono text-xs">{h.timestamp}</td>
|
||||||
|
<td className="px-2 py-1"><SvcCol data={h.network} /></td>
|
||||||
|
<td className="px-2 py-1"><SvcCol data={h.wireguard} /></td>
|
||||||
|
<td className="px-2 py-1"><SvcCol data={h.email} /></td>
|
||||||
|
<td className="px-2 py-1"><SvcCol data={h.calendar} /></td>
|
||||||
|
<td className="px-2 py-1"><SvcCol data={h.files} /></td>
|
||||||
|
<td className="px-2 py-1"><SvcCol data={h.routing} /></td>
|
||||||
|
<td className="px-2 py-1"><SvcCol data={h.vault} /></td>
|
||||||
|
<td className="px-2 py-1">
|
||||||
|
{h.alerts?.length
|
||||||
|
? h.alerts.map((a, j) => (
|
||||||
|
<span key={j} className="text-red-700 font-semibold flex items-center gap-1">
|
||||||
|
<AlertTriangle className="h-3 w-3 text-red-500" />{a}
|
||||||
|
</span>
|
||||||
|
))
|
||||||
|
: <span className="text-green-600">—</span>}
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
</div>
|
</div>
|
||||||
)}
|
)}
|
||||||
</div>
|
</div>
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
export default Logs;
|
// ── Main ────────────────────────────────────────────────────────────────────
|
||||||
|
const TABS = [
|
||||||
|
{ id: 'api', label: 'API Service Logs', icon: FileText },
|
||||||
|
{ id: 'container', label: 'Container Logs', icon: Box },
|
||||||
|
{ id: 'verbosity', label: 'Verbosity Config', icon: Settings },
|
||||||
|
{ id: 'health', label: 'Health History', icon: Activity },
|
||||||
|
];
|
||||||
|
|
||||||
|
export default function Logs() {
|
||||||
|
const [tab, setTab] = useState('api');
|
||||||
|
return (
|
||||||
|
<div>
|
||||||
|
<div className="mb-6">
|
||||||
|
<h1 className="text-2xl font-bold text-gray-900">Logs & Monitoring</h1>
|
||||||
|
<p className="mt-1 text-gray-600">API service logs · Container stdout/stderr · Log level config · Health history</p>
|
||||||
|
</div>
|
||||||
|
<div className="mb-4 flex gap-2 border-b">
|
||||||
|
{TABS.map(({ id, label, icon: Icon }) => (
|
||||||
|
<button
|
||||||
|
key={id}
|
||||||
|
className={`px-4 py-2 text-sm font-medium flex items-center gap-1 border-b-2 transition-colors ${
|
||||||
|
tab === id ? 'border-primary-600 text-primary-700' : 'border-transparent text-gray-600 hover:text-gray-900'
|
||||||
|
}`}
|
||||||
|
onClick={() => setTab(id)}
|
||||||
|
>
|
||||||
|
<Icon className="h-4 w-4" />{label}
|
||||||
|
</button>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
<div className="card">
|
||||||
|
{tab === 'api' && <ApiServiceLogsTab />}
|
||||||
|
{tab === 'container' && <ContainerLogsTab />}
|
||||||
|
{tab === 'verbosity' && <VerbosityTab />}
|
||||||
|
{tab === 'health' && <HealthHistoryTab />}
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|||||||
@@ -1,11 +1,14 @@
|
|||||||
import { useState, useEffect } from 'react';
|
import { useState, useEffect } from 'react';
|
||||||
import { Network, Server, Clock } from 'lucide-react';
|
import { Network, Server, Clock } from 'lucide-react';
|
||||||
import { networkAPI } from '../services/api';
|
import { networkAPI, cellAPI } from '../services/api';
|
||||||
|
import { useConfig } from '../contexts/ConfigContext';
|
||||||
|
|
||||||
function NetworkServices() {
|
function NetworkServices() {
|
||||||
|
const { domain = 'cell' } = useConfig();
|
||||||
const [dnsRecords, setDnsRecords] = useState([]);
|
const [dnsRecords, setDnsRecords] = useState([]);
|
||||||
const [dhcpLeases, setDhcpLeases] = useState([]);
|
const [dhcpLeases, setDhcpLeases] = useState([]);
|
||||||
const [ntpStatus, setNtpStatus] = useState(null);
|
const [ntpStatus, setNtpStatus] = useState(null);
|
||||||
|
const [networkConfig, setNetworkConfig] = useState({});
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
@@ -14,15 +17,17 @@ function NetworkServices() {
|
|||||||
|
|
||||||
const fetchNetworkData = async () => {
|
const fetchNetworkData = async () => {
|
||||||
try {
|
try {
|
||||||
const [dnsResponse, dhcpResponse, ntpResponse] = await Promise.all([
|
const [dnsResponse, dhcpResponse, ntpResponse, cfgResponse] = await Promise.all([
|
||||||
networkAPI.getDNSRecords(),
|
networkAPI.getDNSRecords(),
|
||||||
networkAPI.getDHCPLeases(),
|
networkAPI.getDHCPLeases(),
|
||||||
networkAPI.getNTPStatus()
|
networkAPI.getNTPStatus(),
|
||||||
|
cellAPI.getConfig(),
|
||||||
]);
|
]);
|
||||||
|
|
||||||
setDnsRecords(dnsResponse.data);
|
setDnsRecords(dnsResponse.data);
|
||||||
setDhcpLeases(dhcpResponse.data);
|
setDhcpLeases(dhcpResponse.data);
|
||||||
setNtpStatus(ntpResponse.data);
|
setNtpStatus(ntpResponse.data);
|
||||||
|
setNetworkConfig(cfgResponse.data?.service_configs?.network || {});
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
console.error('Failed to fetch network data:', error);
|
console.error('Failed to fetch network data:', error);
|
||||||
} finally {
|
} finally {
|
||||||
@@ -43,7 +48,10 @@ function NetworkServices() {
|
|||||||
<div className="mb-8">
|
<div className="mb-8">
|
||||||
<h1 className="text-2xl font-bold text-gray-900">Network Services</h1>
|
<h1 className="text-2xl font-bold text-gray-900">Network Services</h1>
|
||||||
<p className="mt-2 text-gray-600">
|
<p className="mt-2 text-gray-600">
|
||||||
Manage DNS, DHCP, and NTP services
|
DNS zone: <span className="font-mono font-medium text-gray-800">{domain}</span>
|
||||||
|
{networkConfig.dhcp_range && (
|
||||||
|
<> · DHCP: <span className="font-mono font-medium text-gray-800">{networkConfig.dhcp_range}</span></>
|
||||||
|
)}
|
||||||
</p>
|
</p>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
@@ -58,8 +66,11 @@ function NetworkServices() {
|
|||||||
{dnsRecords.length > 0 ? (
|
{dnsRecords.length > 0 ? (
|
||||||
dnsRecords.map((record, index) => (
|
dnsRecords.map((record, index) => (
|
||||||
<div key={index} className="flex justify-between items-center p-2 bg-gray-50 rounded">
|
<div key={index} className="flex justify-between items-center p-2 bg-gray-50 rounded">
|
||||||
<span className="text-sm font-medium">{record.name}</span>
|
<div>
|
||||||
<span className="text-sm text-gray-500">{record.ip}</span>
|
<span className="text-sm font-medium">{record.name}</span>
|
||||||
|
<span className="text-xs text-gray-400 ml-1">.{record.zone}</span>
|
||||||
|
</div>
|
||||||
|
<span className="text-sm font-mono text-gray-600">{record.value}</span>
|
||||||
</div>
|
</div>
|
||||||
))
|
))
|
||||||
) : (
|
) : (
|
||||||
@@ -74,6 +85,9 @@ function NetworkServices() {
|
|||||||
<Server className="h-6 w-6 text-primary-500 mr-2" />
|
<Server className="h-6 w-6 text-primary-500 mr-2" />
|
||||||
<h3 className="text-lg font-medium text-gray-900">DHCP Leases</h3>
|
<h3 className="text-lg font-medium text-gray-900">DHCP Leases</h3>
|
||||||
</div>
|
</div>
|
||||||
|
{networkConfig.dhcp_range && (
|
||||||
|
<p className="text-xs text-gray-400 mb-2">Range: {networkConfig.dhcp_range}</p>
|
||||||
|
)}
|
||||||
<div className="space-y-2">
|
<div className="space-y-2">
|
||||||
{dhcpLeases.length > 0 ? (
|
{dhcpLeases.length > 0 ? (
|
||||||
dhcpLeases.map((lease, index) => (
|
dhcpLeases.map((lease, index) => (
|
||||||
@@ -94,6 +108,13 @@ function NetworkServices() {
|
|||||||
<Clock className="h-6 w-6 text-primary-500 mr-2" />
|
<Clock className="h-6 w-6 text-primary-500 mr-2" />
|
||||||
<h3 className="text-lg font-medium text-gray-900">NTP Status</h3>
|
<h3 className="text-lg font-medium text-gray-900">NTP Status</h3>
|
||||||
</div>
|
</div>
|
||||||
|
{networkConfig.ntp_servers && (
|
||||||
|
<p className="text-xs text-gray-400 mb-2">
|
||||||
|
Servers: {Array.isArray(networkConfig.ntp_servers)
|
||||||
|
? networkConfig.ntp_servers.join(', ')
|
||||||
|
: networkConfig.ntp_servers}
|
||||||
|
</p>
|
||||||
|
)}
|
||||||
{ntpStatus ? (
|
{ntpStatus ? (
|
||||||
<div className="space-y-2">
|
<div className="space-y-2">
|
||||||
<div className="flex justify-between">
|
<div className="flex justify-between">
|
||||||
|
|||||||
+609
-899
File diff suppressed because it is too large
Load Diff
+625
-769
File diff suppressed because it is too large
Load Diff
+581
-69
@@ -1,96 +1,608 @@
|
|||||||
import { useState, useEffect } from 'react';
|
import { useState, useEffect, useCallback } from 'react';
|
||||||
import { Settings as SettingsIcon, Server, Shield } from 'lucide-react';
|
import { useConfig } from '../contexts/ConfigContext';
|
||||||
|
import {
|
||||||
|
Settings as SettingsIcon, Server, Shield, Network, Mail, Calendar,
|
||||||
|
HardDrive, GitBranch, Archive, Upload, Download, Trash2, RotateCcw,
|
||||||
|
Save, ChevronDown, ChevronRight, CheckCircle, XCircle, AlertCircle,
|
||||||
|
RefreshCw, Lock
|
||||||
|
} from 'lucide-react';
|
||||||
import { cellAPI } from '../services/api';
|
import { cellAPI } from '../services/api';
|
||||||
|
|
||||||
|
// ── helpers ──────────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
function toast(msg, type = 'success') {
|
||||||
|
// simple inline notification via a thrown CustomEvent consumed below
|
||||||
|
window.dispatchEvent(new CustomEvent('settings-toast', { detail: { msg, type } }));
|
||||||
|
}
|
||||||
|
|
||||||
|
function Toast({ toasts }) {
|
||||||
|
return (
|
||||||
|
<div className="fixed bottom-4 right-4 z-50 space-y-2">
|
||||||
|
{toasts.map((t) => (
|
||||||
|
<div
|
||||||
|
key={t.id}
|
||||||
|
className={`px-4 py-3 rounded-lg shadow-lg text-sm text-white flex items-center gap-2 ${
|
||||||
|
t.type === 'success' ? 'bg-green-600' : t.type === 'error' ? 'bg-red-600' : 'bg-yellow-600'
|
||||||
|
}`}
|
||||||
|
>
|
||||||
|
{t.type === 'success' ? <CheckCircle className="h-4 w-4" /> : <XCircle className="h-4 w-4" />}
|
||||||
|
{t.msg}
|
||||||
|
</div>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function useToasts() {
|
||||||
|
const [toasts, setToasts] = useState([]);
|
||||||
|
useEffect(() => {
|
||||||
|
const handler = (e) => {
|
||||||
|
const id = Date.now();
|
||||||
|
setToasts((prev) => [...prev, { ...e.detail, id }]);
|
||||||
|
setTimeout(() => setToasts((prev) => prev.filter((t) => t.id !== id)), 4000);
|
||||||
|
};
|
||||||
|
window.addEventListener('settings-toast', handler);
|
||||||
|
return () => window.removeEventListener('settings-toast', handler);
|
||||||
|
}, []);
|
||||||
|
return toasts;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Section wrapper ───────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
function Section({ icon: Icon, title, children, collapsible = false, defaultOpen = true }) {
|
||||||
|
const [open, setOpen] = useState(defaultOpen);
|
||||||
|
return (
|
||||||
|
<div className="card mb-4">
|
||||||
|
<button
|
||||||
|
className="w-full flex items-center justify-between"
|
||||||
|
onClick={() => collapsible && setOpen((v) => !v)}
|
||||||
|
disabled={!collapsible}
|
||||||
|
>
|
||||||
|
<div className="flex items-center gap-2">
|
||||||
|
<Icon className="h-5 w-5 text-primary-500" />
|
||||||
|
<h3 className="text-base font-semibold text-gray-900">{title}</h3>
|
||||||
|
</div>
|
||||||
|
{collapsible && (open ? <ChevronDown className="h-4 w-4 text-gray-400" /> : <ChevronRight className="h-4 w-4 text-gray-400" />)}
|
||||||
|
</button>
|
||||||
|
{(!collapsible || open) && <div className="mt-4">{children}</div>}
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Field components ──────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
function Field({ label, children, hint }) {
|
||||||
|
return (
|
||||||
|
<div className="flex flex-col sm:flex-row sm:items-center gap-1 sm:gap-4">
|
||||||
|
<label className="text-sm text-gray-600 sm:w-48 shrink-0">{label}</label>
|
||||||
|
<div className="flex-1">{children}</div>
|
||||||
|
{hint && <span className="text-xs text-gray-400">{hint}</span>}
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function TextInput({ value, onChange, placeholder, type = 'text', readOnly }) {
|
||||||
|
return (
|
||||||
|
<input
|
||||||
|
type={type}
|
||||||
|
value={value ?? ''}
|
||||||
|
onChange={(e) => onChange && onChange(e.target.value)}
|
||||||
|
placeholder={placeholder}
|
||||||
|
readOnly={readOnly}
|
||||||
|
className={`w-full text-sm border rounded px-3 py-1.5 focus:outline-none focus:ring-2 focus:ring-primary-400 ${
|
||||||
|
readOnly ? 'bg-gray-50 text-gray-500 cursor-default' : 'bg-white'
|
||||||
|
}`}
|
||||||
|
/>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function NumberInput({ value, onChange, min, max }) {
|
||||||
|
return (
|
||||||
|
<input
|
||||||
|
type="number"
|
||||||
|
value={value ?? ''}
|
||||||
|
min={min}
|
||||||
|
max={max}
|
||||||
|
onChange={(e) => onChange && onChange(Number(e.target.value))}
|
||||||
|
className="w-full text-sm border rounded px-3 py-1.5 focus:outline-none focus:ring-2 focus:ring-primary-400 bg-white"
|
||||||
|
/>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function BoolToggle({ value, onChange, label }) {
|
||||||
|
return (
|
||||||
|
<label className="flex items-center gap-2 cursor-pointer select-none">
|
||||||
|
<div
|
||||||
|
onClick={() => onChange && onChange(!value)}
|
||||||
|
className={`relative w-10 h-5 rounded-full transition-colors ${value ? 'bg-primary-500' : 'bg-gray-300'}`}
|
||||||
|
>
|
||||||
|
<span
|
||||||
|
className={`absolute top-0.5 left-0.5 w-4 h-4 bg-white rounded-full shadow transition-transform ${value ? 'translate-x-5' : ''}`}
|
||||||
|
/>
|
||||||
|
</div>
|
||||||
|
<span className="text-sm text-gray-700">{label}</span>
|
||||||
|
</label>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function TagList({ value = [], onChange, placeholder }) {
|
||||||
|
const [input, setInput] = useState('');
|
||||||
|
const add = () => {
|
||||||
|
const v = input.trim();
|
||||||
|
if (v && !value.includes(v)) { onChange([...value, v]); setInput(''); }
|
||||||
|
};
|
||||||
|
return (
|
||||||
|
<div className="space-y-1">
|
||||||
|
<div className="flex gap-2 flex-wrap">
|
||||||
|
{value.map((item) => (
|
||||||
|
<span key={item} className="flex items-center gap-1 bg-primary-100 text-primary-700 text-xs rounded-full px-2 py-0.5">
|
||||||
|
{item}
|
||||||
|
<button onClick={() => onChange(value.filter((v) => v !== item))} className="hover:text-red-500">×</button>
|
||||||
|
</span>
|
||||||
|
))}
|
||||||
|
</div>
|
||||||
|
<div className="flex gap-2">
|
||||||
|
<input
|
||||||
|
className="flex-1 text-sm border rounded px-3 py-1 focus:outline-none focus:ring-2 focus:ring-primary-400"
|
||||||
|
value={input}
|
||||||
|
onChange={(e) => setInput(e.target.value)}
|
||||||
|
onKeyDown={(e) => e.key === 'Enter' && (e.preventDefault(), add())}
|
||||||
|
placeholder={placeholder}
|
||||||
|
/>
|
||||||
|
<button onClick={add} className="btn-secondary text-xs px-3 py-1">Add</button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Service config forms ──────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
function NetworkForm({ data, onChange }) {
|
||||||
|
return (
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="DNS Port">
|
||||||
|
<NumberInput value={data.dns_port} onChange={(v) => onChange({ ...data, dns_port: v })} min={1} max={65535} />
|
||||||
|
</Field>
|
||||||
|
<Field label="DHCP Range" hint="e.g. 10.0.0.100,10.0.0.200,12h">
|
||||||
|
<TextInput value={data.dhcp_range} onChange={(v) => onChange({ ...data, dhcp_range: v })} placeholder="10.0.0.100,10.0.0.200,12h" />
|
||||||
|
</Field>
|
||||||
|
<Field label="NTP Servers">
|
||||||
|
<TagList value={data.ntp_servers || []} onChange={(v) => onChange({ ...data, ntp_servers: v })} placeholder="0.pool.ntp.org" />
|
||||||
|
</Field>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function WireguardForm({ data, onChange }) {
|
||||||
|
return (
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="Listen Port">
|
||||||
|
<NumberInput value={data.port} onChange={(v) => onChange({ ...data, port: v })} min={1} max={65535} />
|
||||||
|
</Field>
|
||||||
|
<Field label="Server Address" hint="CIDR, e.g. 10.0.0.1/24">
|
||||||
|
<TextInput value={data.address} onChange={(v) => onChange({ ...data, address: v })} placeholder="10.0.0.1/24" />
|
||||||
|
</Field>
|
||||||
|
<Field label="Private Key">
|
||||||
|
<TextInput value={data.private_key} onChange={(v) => onChange({ ...data, private_key: v })} placeholder="base64 private key" type="password" />
|
||||||
|
</Field>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function EmailForm({ data, onChange }) {
|
||||||
|
return (
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="Mail Domain">
|
||||||
|
<TextInput value={data.domain} onChange={(v) => onChange({ ...data, domain: v })} placeholder="mail.example.com" />
|
||||||
|
</Field>
|
||||||
|
<Field label="SMTP Port" hint="Fixed by docker-compose.yml">
|
||||||
|
<TextInput value={data.smtp_port ?? 587} readOnly />
|
||||||
|
</Field>
|
||||||
|
<Field label="IMAP Port" hint="Fixed by docker-compose.yml">
|
||||||
|
<TextInput value={data.imap_port ?? 993} readOnly />
|
||||||
|
</Field>
|
||||||
|
<p className="text-xs text-gray-400">
|
||||||
|
Ports 587 (SMTP) and 993 (IMAP) are set by docker-compose port bindings and cannot be changed at runtime.
|
||||||
|
Only <strong>domain</strong> is applied on Save.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function CalendarForm({ data, onChange }) {
|
||||||
|
return (
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="Radicale Port" hint="Internal port; clients use port 80 via Caddy">
|
||||||
|
<NumberInput value={data.port} onChange={(v) => onChange({ ...data, port: v })} min={1} max={65535} />
|
||||||
|
</Field>
|
||||||
|
<Field label="Data Directory">
|
||||||
|
<TextInput value={data.data_dir} onChange={(v) => onChange({ ...data, data_dir: v })} placeholder="/app/data/radicale" />
|
||||||
|
</Field>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function FilesForm({ data, onChange }) {
|
||||||
|
return (
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="Internal Port" hint="Fixed by docker-compose.yml">
|
||||||
|
<TextInput value={data.port ?? 80} readOnly />
|
||||||
|
</Field>
|
||||||
|
<Field label="Data Directory">
|
||||||
|
<TextInput value={data.data_dir} onChange={(v) => onChange({ ...data, data_dir: v })} placeholder="/app/data/webdav" />
|
||||||
|
</Field>
|
||||||
|
<Field label="Default Quota (MB)">
|
||||||
|
<NumberInput value={data.quota} onChange={(v) => onChange({ ...data, quota: v })} min={0} />
|
||||||
|
</Field>
|
||||||
|
<p className="text-xs text-gray-400">
|
||||||
|
Clients always connect on port 80 via Caddy reverse proxy, regardless of internal port.
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function RoutingForm({ data, onChange }) {
|
||||||
|
return (
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="">
|
||||||
|
<BoolToggle value={data.nat_enabled} onChange={(v) => onChange({ ...data, nat_enabled: v })} label="NAT Enabled" />
|
||||||
|
</Field>
|
||||||
|
<Field label="">
|
||||||
|
<BoolToggle value={data.firewall_enabled} onChange={(v) => onChange({ ...data, firewall_enabled: v })} label="Firewall Enabled" />
|
||||||
|
</Field>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
function VaultForm({ data, onChange }) {
|
||||||
|
return (
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="">
|
||||||
|
<BoolToggle value={data.ca_configured} onChange={(v) => onChange({ ...data, ca_configured: v })} label="CA Configured" />
|
||||||
|
</Field>
|
||||||
|
<Field label="">
|
||||||
|
<BoolToggle value={data.fernet_configured} onChange={(v) => onChange({ ...data, fernet_configured: v })} label="Fernet Encryption Configured" />
|
||||||
|
</Field>
|
||||||
|
</div>
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// service config meta
|
||||||
|
const SERVICE_DEFS = [
|
||||||
|
{ key: 'network', label: 'Network (DNS/DHCP/NTP)', icon: Network, Form: NetworkForm, defaults: { dns_port: 53, dhcp_range: '', ntp_servers: [] } },
|
||||||
|
{ key: 'wireguard', label: 'WireGuard VPN', icon: Shield, Form: WireguardForm, defaults: { port: 51820, address: '', private_key: '' } },
|
||||||
|
{ key: 'email', label: 'Email (SMTP/IMAP)', icon: Mail, Form: EmailForm, defaults: { domain: '', smtp_port: 587, imap_port: 993 } },
|
||||||
|
{ key: 'calendar', label: 'Calendar (CalDAV)', icon: Calendar, Form: CalendarForm, defaults: { port: 5232, data_dir: '' } },
|
||||||
|
{ key: 'files', label: 'Files (WebDAV)', icon: HardDrive, Form: FilesForm, defaults: { port: 80, data_dir: '', quota: 1024 } },
|
||||||
|
{ key: 'routing', label: 'Routing & Firewall', icon: GitBranch, Form: RoutingForm, defaults: { nat_enabled: true, firewall_enabled: true } },
|
||||||
|
{ key: 'vault', label: 'Vault & Trust', icon: Lock, Form: VaultForm, defaults: { ca_configured: false, fernet_configured: false } },
|
||||||
|
];
|
||||||
|
|
||||||
|
// ── Main component ────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
function Settings() {
|
function Settings() {
|
||||||
const [config, setConfig] = useState(null);
|
const toasts = useToasts();
|
||||||
|
const { refresh: refreshConfig } = useConfig();
|
||||||
|
|
||||||
|
// identity
|
||||||
|
const [identity, setIdentity] = useState({ cell_name: '', domain: '', ip_range: '', wireguard_port: 51820 });
|
||||||
|
const [identityDirty, setIdentityDirty] = useState(false);
|
||||||
|
const [identitySaving, setIdentitySaving] = useState(false);
|
||||||
|
|
||||||
|
// service configs
|
||||||
|
const [serviceConfigs, setServiceConfigs] = useState({});
|
||||||
|
const [serviceDirty, setServiceDirty] = useState({});
|
||||||
|
const [serviceSaving, setServiceSaving] = useState({});
|
||||||
|
|
||||||
|
// backups
|
||||||
|
const [backups, setBackups] = useState([]);
|
||||||
|
const [backupsLoading, setBackupsLoading] = useState(false);
|
||||||
|
const [backupCreating, setBackupCreating] = useState(false);
|
||||||
|
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
|
|
||||||
useEffect(() => {
|
const loadAll = useCallback(async () => {
|
||||||
fetchConfig();
|
setIsLoading(true);
|
||||||
}, []);
|
|
||||||
|
|
||||||
const fetchConfig = async () => {
|
|
||||||
try {
|
try {
|
||||||
const response = await cellAPI.getConfig();
|
const [cfgRes, bkRes] = await Promise.all([
|
||||||
setConfig(response.data);
|
cellAPI.getConfig(),
|
||||||
} catch (error) {
|
cellAPI.listBackups(),
|
||||||
console.error('Failed to fetch config:', error);
|
]);
|
||||||
|
const cfg = cfgRes.data;
|
||||||
|
setIdentity({
|
||||||
|
cell_name: cfg.cell_name || '',
|
||||||
|
domain: cfg.domain || '',
|
||||||
|
ip_range: cfg.ip_range || '',
|
||||||
|
wireguard_port: cfg.wireguard_port || 51820,
|
||||||
|
});
|
||||||
|
setServiceConfigs(cfg.service_configs || {});
|
||||||
|
setBackups(bkRes.data || []);
|
||||||
|
} catch (err) {
|
||||||
|
toast('Failed to load configuration', 'error');
|
||||||
} finally {
|
} finally {
|
||||||
setIsLoading(false);
|
setIsLoading(false);
|
||||||
}
|
}
|
||||||
|
}, []);
|
||||||
|
|
||||||
|
useEffect(() => { loadAll(); }, [loadAll]);
|
||||||
|
|
||||||
|
const _applyResult = (res, label) => {
|
||||||
|
const { restarted = [], warnings = [] } = res.data || {};
|
||||||
|
if (restarted.length > 0) {
|
||||||
|
toast(`${label} saved — restarted: ${restarted.join(', ')}`);
|
||||||
|
} else {
|
||||||
|
toast(`${label} saved`);
|
||||||
|
}
|
||||||
|
warnings.forEach((w) => toast(w, 'warning'));
|
||||||
|
};
|
||||||
|
|
||||||
|
// identity save
|
||||||
|
const saveIdentity = async () => {
|
||||||
|
setIdentitySaving(true);
|
||||||
|
try {
|
||||||
|
const res = await cellAPI.updateConfig(identity);
|
||||||
|
setIdentityDirty(false);
|
||||||
|
_applyResult(res, 'Cell identity');
|
||||||
|
refreshConfig();
|
||||||
|
} catch {
|
||||||
|
toast('Failed to save identity', 'error');
|
||||||
|
} finally {
|
||||||
|
setIdentitySaving(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// service config save
|
||||||
|
const saveService = async (key) => {
|
||||||
|
setServiceSaving((s) => ({ ...s, [key]: true }));
|
||||||
|
try {
|
||||||
|
const res = await cellAPI.updateConfig({ [key]: serviceConfigs[key] });
|
||||||
|
setServiceDirty((d) => ({ ...d, [key]: false }));
|
||||||
|
_applyResult(res, key);
|
||||||
|
} catch {
|
||||||
|
toast(`Failed to save ${key} config`, 'error');
|
||||||
|
} finally {
|
||||||
|
setServiceSaving((s) => ({ ...s, [key]: false }));
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const updateServiceConfig = (key, data) => {
|
||||||
|
setServiceConfigs((prev) => ({ ...prev, [key]: data }));
|
||||||
|
setServiceDirty((d) => ({ ...d, [key]: true }));
|
||||||
|
};
|
||||||
|
|
||||||
|
// backups
|
||||||
|
const createBackup = async () => {
|
||||||
|
setBackupCreating(true);
|
||||||
|
try {
|
||||||
|
await cellAPI.createBackup();
|
||||||
|
toast('Backup created');
|
||||||
|
const res = await cellAPI.listBackups();
|
||||||
|
setBackups(res.data || []);
|
||||||
|
} catch {
|
||||||
|
toast('Failed to create backup', 'error');
|
||||||
|
} finally {
|
||||||
|
setBackupCreating(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const restoreBackup = async (id) => {
|
||||||
|
if (!confirm(`Restore backup ${id}? Current config will be overwritten.`)) return;
|
||||||
|
try {
|
||||||
|
await cellAPI.restoreBackup(id);
|
||||||
|
toast('Configuration restored — reloading…');
|
||||||
|
setTimeout(() => loadAll(), 500);
|
||||||
|
} catch {
|
||||||
|
toast('Failed to restore backup', 'error');
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const deleteBackup = async (id) => {
|
||||||
|
if (!confirm(`Delete backup ${id}?`)) return;
|
||||||
|
try {
|
||||||
|
await cellAPI.deleteBackup(id);
|
||||||
|
setBackups((prev) => prev.filter((b) => b.backup_id !== id));
|
||||||
|
toast('Backup deleted');
|
||||||
|
} catch {
|
||||||
|
toast('Failed to delete backup', 'error');
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// export
|
||||||
|
const exportConfig = async () => {
|
||||||
|
try {
|
||||||
|
const res = await cellAPI.exportConfig('json');
|
||||||
|
const blob = new Blob([res.data.config], { type: 'application/json' });
|
||||||
|
const url = URL.createObjectURL(blob);
|
||||||
|
const a = document.createElement('a');
|
||||||
|
a.href = url;
|
||||||
|
a.download = `pic-config-${new Date().toISOString().slice(0, 10)}.json`;
|
||||||
|
a.click();
|
||||||
|
URL.revokeObjectURL(url);
|
||||||
|
} catch {
|
||||||
|
toast('Export failed', 'error');
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// import
|
||||||
|
const importConfig = async (e) => {
|
||||||
|
const file = e.target.files?.[0];
|
||||||
|
if (!file) return;
|
||||||
|
const text = await file.text();
|
||||||
|
if (!confirm('Import this config? Current settings will be replaced.')) { e.target.value = ''; return; }
|
||||||
|
try {
|
||||||
|
await cellAPI.importConfig(text, 'json');
|
||||||
|
toast('Config imported — reloading…');
|
||||||
|
setTimeout(() => loadAll(), 500);
|
||||||
|
} catch {
|
||||||
|
toast('Import failed', 'error');
|
||||||
|
} finally {
|
||||||
|
e.target.value = '';
|
||||||
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
if (isLoading) {
|
if (isLoading) {
|
||||||
return (
|
return (
|
||||||
<div className="flex items-center justify-center h-64">
|
<div className="flex items-center justify-center h-64">
|
||||||
<div className="animate-spin rounded-full h-8 w-8 border-b-2 border-primary-600"></div>
|
<div className="animate-spin rounded-full h-8 w-8 border-b-2 border-primary-600" />
|
||||||
</div>
|
</div>
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
return (
|
return (
|
||||||
<div>
|
<div>
|
||||||
<div className="mb-8">
|
<Toast toasts={toasts} />
|
||||||
|
<div className="mb-6">
|
||||||
<h1 className="text-2xl font-bold text-gray-900">Settings</h1>
|
<h1 className="text-2xl font-bold text-gray-900">Settings</h1>
|
||||||
<p className="mt-2 text-gray-600">
|
<p className="mt-1 text-gray-500 text-sm">Configure your Personal Internet Cell</p>
|
||||||
Configure your Personal Internet Cell
|
</div>
|
||||||
|
|
||||||
|
{/* Cell Identity */}
|
||||||
|
<Section icon={Server} title="Cell Identity">
|
||||||
|
<div className="space-y-3">
|
||||||
|
<Field label="Cell Name">
|
||||||
|
<TextInput
|
||||||
|
value={identity.cell_name}
|
||||||
|
onChange={(v) => { setIdentity((i) => ({ ...i, cell_name: v })); setIdentityDirty(true); }}
|
||||||
|
placeholder="mycell"
|
||||||
|
/>
|
||||||
|
</Field>
|
||||||
|
<Field label="Domain">
|
||||||
|
<TextInput
|
||||||
|
value={identity.domain}
|
||||||
|
onChange={(v) => { setIdentity((i) => ({ ...i, domain: v })); setIdentityDirty(true); }}
|
||||||
|
placeholder="cell.local"
|
||||||
|
/>
|
||||||
|
</Field>
|
||||||
|
<Field label="IP Range" hint="Docker bridge subnet">
|
||||||
|
<TextInput
|
||||||
|
value={identity.ip_range}
|
||||||
|
onChange={(v) => { setIdentity((i) => ({ ...i, ip_range: v })); setIdentityDirty(true); }}
|
||||||
|
placeholder="172.20.0.0/16"
|
||||||
|
/>
|
||||||
|
</Field>
|
||||||
|
<Field label="WireGuard Port">
|
||||||
|
<NumberInput
|
||||||
|
value={identity.wireguard_port}
|
||||||
|
onChange={(v) => { setIdentity((i) => ({ ...i, wireguard_port: v })); setIdentityDirty(true); }}
|
||||||
|
min={1} max={65535}
|
||||||
|
/>
|
||||||
|
</Field>
|
||||||
|
</div>
|
||||||
|
<div className="flex justify-end mt-4">
|
||||||
|
<button
|
||||||
|
onClick={saveIdentity}
|
||||||
|
disabled={!identityDirty || identitySaving}
|
||||||
|
className="btn-primary flex items-center gap-2 text-sm disabled:opacity-50"
|
||||||
|
>
|
||||||
|
{identitySaving ? <RefreshCw className="h-4 w-4 animate-spin" /> : <Save className="h-4 w-4" />}
|
||||||
|
Save Identity
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
<p className="text-xs text-gray-400 mt-2">
|
||||||
|
Note: IP Range and WireGuard Port are also set via environment variables in docker-compose.yml.
|
||||||
|
Changes here are stored in config and take effect on next container start.
|
||||||
</p>
|
</p>
|
||||||
</div>
|
</Section>
|
||||||
|
|
||||||
<div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
|
{/* Service Configurations */}
|
||||||
{/* Cell Configuration */}
|
<div className="mb-2">
|
||||||
<div className="card">
|
<h2 className="text-lg font-semibold text-gray-800">Service Configuration</h2>
|
||||||
<div className="flex items-center mb-4">
|
|
||||||
<Server className="h-6 w-6 text-primary-500 mr-2" />
|
|
||||||
<h3 className="text-lg font-medium text-gray-900">Cell Configuration</h3>
|
|
||||||
</div>
|
|
||||||
{config ? (
|
|
||||||
<div className="space-y-3">
|
|
||||||
<div className="flex justify-between">
|
|
||||||
<span className="text-sm text-gray-500">Cell Name:</span>
|
|
||||||
<span className="text-sm font-medium">{config.cell_name}</span>
|
|
||||||
</div>
|
|
||||||
<div className="flex justify-between">
|
|
||||||
<span className="text-sm text-gray-500">Domain:</span>
|
|
||||||
<span className="text-sm font-medium">{config.domain}</span>
|
|
||||||
</div>
|
|
||||||
<div className="flex justify-between">
|
|
||||||
<span className="text-sm text-gray-500">IP Range:</span>
|
|
||||||
<span className="text-sm font-medium">{config.ip_range}</span>
|
|
||||||
</div>
|
|
||||||
<div className="flex justify-between">
|
|
||||||
<span className="text-sm text-gray-500">WireGuard Port:</span>
|
|
||||||
<span className="text-sm font-medium">{config.wireguard_port}</span>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
) : (
|
|
||||||
<p className="text-gray-500 text-sm">Configuration unavailable</p>
|
|
||||||
)}
|
|
||||||
</div>
|
|
||||||
|
|
||||||
{/* Security Settings */}
|
|
||||||
<div className="card">
|
|
||||||
<div className="flex items-center mb-4">
|
|
||||||
<Shield className="h-6 w-6 text-primary-500 mr-2" />
|
|
||||||
<h3 className="text-lg font-medium text-gray-900">Security Settings</h3>
|
|
||||||
</div>
|
|
||||||
<div className="space-y-3">
|
|
||||||
<div className="flex justify-between">
|
|
||||||
<span className="text-sm text-gray-500">TLS Certificate:</span>
|
|
||||||
<span className="text-sm font-medium text-success-600">Valid</span>
|
|
||||||
</div>
|
|
||||||
<div className="flex justify-between">
|
|
||||||
<span className="text-sm text-gray-500">Firewall:</span>
|
|
||||||
<span className="text-sm font-medium text-success-600">Active</span>
|
|
||||||
</div>
|
|
||||||
<div className="flex justify-between">
|
|
||||||
<span className="text-sm text-gray-500">VPN Encryption:</span>
|
|
||||||
<span className="text-sm font-medium text-success-600">Enabled</span>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
</div>
|
||||||
|
{SERVICE_DEFS.map(({ key, label, icon: Icon, Form, defaults }) => {
|
||||||
|
const data = { ...defaults, ...(serviceConfigs[key] || {}) };
|
||||||
|
const dirty = serviceDirty[key];
|
||||||
|
const saving = serviceSaving[key];
|
||||||
|
return (
|
||||||
|
<Section key={key} icon={Icon} title={label} collapsible defaultOpen={false}>
|
||||||
|
<Form data={data} onChange={(d) => updateServiceConfig(key, d)} />
|
||||||
|
<div className="flex items-center justify-between mt-4">
|
||||||
|
<span className="text-xs text-gray-400">Port/directory changes take effect after container restart.</span>
|
||||||
|
<button
|
||||||
|
onClick={() => saveService(key)}
|
||||||
|
disabled={!dirty || saving}
|
||||||
|
className="btn-primary flex items-center gap-2 text-sm disabled:opacity-50"
|
||||||
|
>
|
||||||
|
{saving ? <RefreshCw className="h-4 w-4 animate-spin" /> : <Save className="h-4 w-4" />}
|
||||||
|
Save
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</Section>
|
||||||
|
);
|
||||||
|
})}
|
||||||
|
|
||||||
|
{/* Backup & Restore */}
|
||||||
|
<Section icon={Archive} title="Backup & Restore" collapsible defaultOpen>
|
||||||
|
<div className="flex justify-between items-center mb-3">
|
||||||
|
<span className="text-sm text-gray-600">{backups.length} backup{backups.length !== 1 ? 's' : ''} stored</span>
|
||||||
|
<button
|
||||||
|
onClick={createBackup}
|
||||||
|
disabled={backupCreating}
|
||||||
|
className="btn-secondary flex items-center gap-2 text-sm"
|
||||||
|
>
|
||||||
|
{backupCreating ? <RefreshCw className="h-4 w-4 animate-spin" /> : <Archive className="h-4 w-4" />}
|
||||||
|
Create Backup
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
{backups.length === 0 ? (
|
||||||
|
<p className="text-sm text-gray-400 text-center py-4">No backups yet</p>
|
||||||
|
) : (
|
||||||
|
<div className="overflow-x-auto">
|
||||||
|
<table className="w-full text-sm">
|
||||||
|
<thead>
|
||||||
|
<tr className="text-left text-xs text-gray-500 border-b">
|
||||||
|
<th className="pb-2 font-medium">Backup ID</th>
|
||||||
|
<th className="pb-2 font-medium">Timestamp</th>
|
||||||
|
<th className="pb-2 font-medium">Services</th>
|
||||||
|
<th className="pb-2" />
|
||||||
|
</tr>
|
||||||
|
</thead>
|
||||||
|
<tbody className="divide-y divide-gray-100">
|
||||||
|
{backups.map((b) => (
|
||||||
|
<tr key={b.backup_id} className="hover:bg-gray-50">
|
||||||
|
<td className="py-2 font-mono text-xs text-gray-700">{b.backup_id}</td>
|
||||||
|
<td className="py-2 text-gray-600">{new Date(b.timestamp).toLocaleString()}</td>
|
||||||
|
<td className="py-2 text-gray-500">{(b.services || []).length} services</td>
|
||||||
|
<td className="py-2">
|
||||||
|
<div className="flex gap-2 justify-end">
|
||||||
|
<button
|
||||||
|
onClick={() => restoreBackup(b.backup_id)}
|
||||||
|
className="text-blue-600 hover:text-blue-800 flex items-center gap-1 text-xs"
|
||||||
|
title="Restore"
|
||||||
|
>
|
||||||
|
<RotateCcw className="h-3.5 w-3.5" /> Restore
|
||||||
|
</button>
|
||||||
|
<button
|
||||||
|
onClick={() => deleteBackup(b.backup_id)}
|
||||||
|
className="text-red-500 hover:text-red-700 flex items-center gap-1 text-xs"
|
||||||
|
title="Delete"
|
||||||
|
>
|
||||||
|
<Trash2 className="h-3.5 w-3.5" /> Delete
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
))}
|
||||||
|
</tbody>
|
||||||
|
</table>
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</Section>
|
||||||
|
|
||||||
|
{/* Export / Import */}
|
||||||
|
<Section icon={Download} title="Export & Import">
|
||||||
|
<div className="flex flex-wrap gap-3">
|
||||||
|
<button onClick={exportConfig} className="btn-secondary flex items-center gap-2 text-sm">
|
||||||
|
<Download className="h-4 w-4" /> Export JSON
|
||||||
|
</button>
|
||||||
|
<label className="btn-secondary flex items-center gap-2 text-sm cursor-pointer">
|
||||||
|
<Upload className="h-4 w-4" /> Import JSON
|
||||||
|
<input type="file" accept=".json" className="hidden" onChange={importConfig} />
|
||||||
|
</label>
|
||||||
|
</div>
|
||||||
|
<p className="text-xs text-gray-400 mt-2">
|
||||||
|
Export downloads all service configs as JSON. Import replaces current service configs.
|
||||||
|
</p>
|
||||||
|
</Section>
|
||||||
</div>
|
</div>
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|||||||
+170
-79
@@ -1,10 +1,12 @@
|
|||||||
import { useState, useEffect } from 'react';
|
import { useState, useEffect } from 'react';
|
||||||
import { Shield, Key, Users, Activity, Wifi, Download, Copy, RefreshCw, Play, Pause, AlertCircle, Eye } from 'lucide-react';
|
import { Shield, Key, Users, Activity, Wifi, Download, Copy, RefreshCw, Play, Pause, AlertCircle, Eye, Globe, CheckCircle, XCircle } from 'lucide-react';
|
||||||
import { wireguardAPI, peerAPI } from '../services/api';
|
import { wireguardAPI, peerAPI } from '../services/api';
|
||||||
import QRCode from 'qrcode';
|
import QRCode from 'qrcode';
|
||||||
|
|
||||||
function WireGuard() {
|
function WireGuard() {
|
||||||
const [status, setStatus] = useState(null);
|
const [status, setStatus] = useState(null);
|
||||||
|
const [serverConfig, setServerConfig] = useState(null);
|
||||||
|
const [isRefreshingIp, setIsRefreshingIp] = useState(false);
|
||||||
const [peers, setPeers] = useState([]);
|
const [peers, setPeers] = useState([]);
|
||||||
const [totalPeers, setTotalPeers] = useState(0);
|
const [totalPeers, setTotalPeers] = useState(0);
|
||||||
const [isLoading, setIsLoading] = useState(true);
|
const [isLoading, setIsLoading] = useState(true);
|
||||||
@@ -14,20 +16,41 @@ function WireGuard() {
|
|||||||
const [peerConfig, setPeerConfig] = useState('');
|
const [peerConfig, setPeerConfig] = useState('');
|
||||||
const [qrCodeDataUrl, setQrCodeDataUrl] = useState('');
|
const [qrCodeDataUrl, setQrCodeDataUrl] = useState('');
|
||||||
const [peerStatuses, setPeerStatuses] = useState({});
|
const [peerStatuses, setPeerStatuses] = useState({});
|
||||||
|
const [tunnelMode, setTunnelMode] = useState('full'); // 'split' or 'full'
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
fetchWireGuardData();
|
fetchWireGuardData();
|
||||||
}, []);
|
}, []);
|
||||||
|
|
||||||
|
const refreshExternalIp = async () => {
|
||||||
|
setIsRefreshingIp(true);
|
||||||
|
try {
|
||||||
|
// Refresh IP first (fast)
|
||||||
|
const ipResp = await fetch('/api/wireguard/refresh-ip', { method: 'POST' });
|
||||||
|
const ipData = await ipResp.json();
|
||||||
|
setServerConfig(prev => ({ ...prev, ...ipData, port_open: 'checking' }));
|
||||||
|
// Then check port (slow — external call)
|
||||||
|
const portResp = await fetch('/api/wireguard/check-port', { method: 'POST' });
|
||||||
|
const portData = await portResp.json();
|
||||||
|
setServerConfig(prev => ({ ...prev, port_open: portData.port_open }));
|
||||||
|
} catch (e) {
|
||||||
|
console.error('Failed to refresh IP:', e);
|
||||||
|
} finally {
|
||||||
|
setIsRefreshingIp(false);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
const fetchWireGuardData = async () => {
|
const fetchWireGuardData = async () => {
|
||||||
try {
|
try {
|
||||||
const [statusResponse, peersResponse, wireguardResponse] = await Promise.all([
|
const [statusResponse, peersResponse, wireguardResponse, serverConfigResponse] = await Promise.all([
|
||||||
wireguardAPI.getStatus(),
|
wireguardAPI.getStatus(),
|
||||||
peerAPI.getPeers(),
|
peerAPI.getPeers(),
|
||||||
wireguardAPI.getPeers()
|
wireguardAPI.getPeers(),
|
||||||
|
fetch('/api/wireguard/server-config').then(r => r.json()).catch(() => null),
|
||||||
]);
|
]);
|
||||||
|
|
||||||
setStatus(statusResponse.data);
|
setStatus(statusResponse.data);
|
||||||
|
if (serverConfigResponse) setServerConfig(serverConfigResponse);
|
||||||
|
|
||||||
// Merge peer registry data with WireGuard data (same as Peers page)
|
// Merge peer registry data with WireGuard data (same as Peers page)
|
||||||
const peersData = peersResponse.data || [];
|
const peersData = peersResponse.data || [];
|
||||||
@@ -54,36 +77,36 @@ function WireGuard() {
|
|||||||
persistent_keepalive: peer.persistent_keepalive || wireguardMap[peer.peer || peer.name]?.PersistentKeepalive || 25
|
persistent_keepalive: peer.persistent_keepalive || wireguardMap[peer.peer || peer.name]?.PersistentKeepalive || 25
|
||||||
}));
|
}));
|
||||||
|
|
||||||
// Load peer statuses first
|
// Load all peer statuses in one call (keyed by public_key)
|
||||||
const statusPromises = mergedPeers.map(async (peer) => {
|
let liveStatuses = {};
|
||||||
if (peer.public_key) {
|
try {
|
||||||
const status = await getPeerStatus(peer);
|
const stResp = await fetch('/api/wireguard/peers/statuses');
|
||||||
return { peerId: peer.name, status };
|
if (stResp.ok) liveStatuses = await stResp.json();
|
||||||
}
|
} catch (_) {}
|
||||||
return { peerId: peer.name, status: { online: null, lastHandshake: null, transferRx: 0, transferTx: 0 } };
|
|
||||||
|
// Normalize snake_case API fields to camelCase for UI
|
||||||
|
const normalizeStatus = (st) => ({
|
||||||
|
online: st.online ?? null,
|
||||||
|
lastHandshake: st.last_handshake || st.lastHandshake || null,
|
||||||
|
lastHandshakeSecondsAgo: st.last_handshake_seconds_ago ?? null,
|
||||||
|
transferRx: st.transfer_rx ?? st.transferRx ?? 0,
|
||||||
|
transferTx: st.transfer_tx ?? st.transferTx ?? 0,
|
||||||
|
endpoint: st.endpoint || null,
|
||||||
});
|
});
|
||||||
|
|
||||||
const statusResults = await Promise.all(statusPromises);
|
// Build name→status map and annotate peers
|
||||||
const statusMap = {};
|
const statusMap = {};
|
||||||
statusResults.forEach(({ peerId, status }) => {
|
const annotated = mergedPeers.map(peer => {
|
||||||
statusMap[peerId] = status;
|
const raw = liveStatuses[peer.public_key] || { online: null };
|
||||||
|
const st = normalizeStatus(raw);
|
||||||
|
statusMap[peer.name] = st;
|
||||||
|
return { ...peer, _liveStatus: st };
|
||||||
});
|
});
|
||||||
setPeerStatuses(statusMap);
|
setPeerStatuses(statusMap);
|
||||||
|
setTotalPeers(annotated.length);
|
||||||
|
|
||||||
// Set total peers count
|
// Show all peers; live ones bubble up via status indicator
|
||||||
setTotalPeers(mergedPeers.length);
|
setPeers(annotated);
|
||||||
|
|
||||||
// Filter to only show live connected peers
|
|
||||||
const livePeers = mergedPeers.filter(peer => {
|
|
||||||
const peerStatus = statusMap[peer.name];
|
|
||||||
return peerStatus && (
|
|
||||||
peerStatus.online === true ||
|
|
||||||
(peerStatus.lastHandshake && peerStatus.lastHandshake !== null) ||
|
|
||||||
(peerStatus.transferRx > 0 || peerStatus.transferTx > 0)
|
|
||||||
);
|
|
||||||
});
|
|
||||||
|
|
||||||
setPeers(livePeers);
|
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
console.error('Failed to fetch WireGuard data:', error);
|
console.error('Failed to fetch WireGuard data:', error);
|
||||||
} finally {
|
} finally {
|
||||||
@@ -97,28 +120,12 @@ function WireGuard() {
|
|||||||
await fetchWireGuardData();
|
await fetchWireGuardData();
|
||||||
};
|
};
|
||||||
|
|
||||||
const handleViewPeerConfig = async (peer) => {
|
const handleViewPeerConfig = async (peer, mode = tunnelMode) => {
|
||||||
setSelectedPeer(peer);
|
setSelectedPeer(peer);
|
||||||
try {
|
try {
|
||||||
// Try to get existing config first
|
const sc = await getServerConfig();
|
||||||
const response = await wireguardAPI.getPeerConfig({ name: peer.name });
|
const peerWithServerConfig = { ...peer, server_public_key: sc.public_key, server_endpoint: sc.endpoint };
|
||||||
let config = response.data.config;
|
const config = generateWireGuardConfig(peerWithServerConfig, mode);
|
||||||
|
|
||||||
// If no config exists, generate a complete one with real server config
|
|
||||||
if (!config || config === 'Configuration not available') {
|
|
||||||
// Get server configuration first
|
|
||||||
const serverConfig = await getServerConfig();
|
|
||||||
|
|
||||||
// Create peer with server config
|
|
||||||
const peerWithServerConfig = {
|
|
||||||
...peer,
|
|
||||||
server_public_key: serverConfig.public_key,
|
|
||||||
server_endpoint: serverConfig.endpoint
|
|
||||||
};
|
|
||||||
|
|
||||||
config = generateWireGuardConfig(peerWithServerConfig);
|
|
||||||
}
|
|
||||||
|
|
||||||
setPeerConfig(config);
|
setPeerConfig(config);
|
||||||
|
|
||||||
// Generate QR code for the config
|
// Generate QR code for the config
|
||||||
@@ -160,46 +167,40 @@ function WireGuard() {
|
|||||||
};
|
};
|
||||||
|
|
||||||
const getServerConfig = async () => {
|
const getServerConfig = async () => {
|
||||||
|
if (serverConfig?.public_key) return serverConfig;
|
||||||
try {
|
try {
|
||||||
// Try to get server configuration from API
|
|
||||||
const response = await fetch('/api/wireguard/server-config');
|
const response = await fetch('/api/wireguard/server-config');
|
||||||
if (response.ok) {
|
if (response.ok) {
|
||||||
const config = await response.json();
|
const config = await response.json();
|
||||||
return {
|
setServerConfig(config);
|
||||||
public_key: config.public_key || "SERVER_PUBLIC_KEY_PLACEHOLDER",
|
return config;
|
||||||
endpoint: config.endpoint || "YOUR_SERVER_IP:51820"
|
|
||||||
};
|
|
||||||
}
|
}
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
console.warn('Could not get server config:', error);
|
console.warn('Could not get server config:', error);
|
||||||
}
|
}
|
||||||
|
return { public_key: '', endpoint: '<SERVER_IP>:51820' };
|
||||||
// Return default values
|
|
||||||
return {
|
|
||||||
public_key: "SERVER_PUBLIC_KEY_PLACEHOLDER",
|
|
||||||
endpoint: "YOUR_SERVER_IP:51820"
|
|
||||||
};
|
|
||||||
};
|
};
|
||||||
|
|
||||||
const generateWireGuardConfig = (peer) => {
|
const FULL_TUNNEL_IPS = '0.0.0.0/0, ::/0';
|
||||||
// Use real keys from the peer data
|
|
||||||
const serverPublicKey = peer.server_public_key || "SERVER_PUBLIC_KEY_PLACEHOLDER";
|
|
||||||
const serverEndpoint = peer.server_endpoint || "YOUR_SERVER_IP:51820";
|
|
||||||
const serverAllowedIPs = peer.allowed_ips || "0.0.0.0/0";
|
|
||||||
const privateKey = peer.private_key || 'YOUR_PRIVATE_KEY_HERE';
|
|
||||||
|
|
||||||
// Check if IP already has a subnet mask, if not add /32
|
const generateWireGuardConfig = (peer, mode = tunnelMode) => {
|
||||||
const peerAddress = peer.ip.includes('/') ? peer.ip : `${peer.ip}/32`;
|
const serverPublicKey = peer.server_public_key || "SERVER_PUBLIC_KEY_PLACEHOLDER";
|
||||||
|
const serverEndpoint = peer.server_endpoint || serverConfig?.endpoint || "YOUR_SERVER_IP:51820";
|
||||||
|
const privateKey = peer.private_key || 'YOUR_PRIVATE_KEY_HERE';
|
||||||
|
const peerAddress = peer.ip?.includes('/') ? peer.ip : `${peer.ip}/32`;
|
||||||
|
const splitTunnelIPs = serverConfig?.split_tunnel_ips || '10.0.0.0/24, 172.20.0.0/16';
|
||||||
|
const allowedIPs = mode === 'full' ? FULL_TUNNEL_IPS : splitTunnelIPs;
|
||||||
|
const dnsIp = serverConfig?.dns_ip || '172.20.0.3';
|
||||||
|
|
||||||
return `[Interface]
|
return `[Interface]
|
||||||
PrivateKey = ${privateKey}
|
PrivateKey = ${privateKey}
|
||||||
Address = ${peerAddress}
|
Address = ${peerAddress}
|
||||||
DNS = 8.8.8.8, 1.1.1.1
|
DNS = ${dnsIp}
|
||||||
|
|
||||||
[Peer]
|
[Peer]
|
||||||
PublicKey = ${serverPublicKey}
|
PublicKey = ${serverPublicKey}
|
||||||
Endpoint = ${serverEndpoint}
|
Endpoint = ${serverEndpoint}
|
||||||
AllowedIPs = ${serverAllowedIPs}
|
AllowedIPs = ${allowedIPs}
|
||||||
PersistentKeepalive = ${peer.persistent_keepalive || 25}`;
|
PersistentKeepalive = ${peer.persistent_keepalive || 25}`;
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -329,13 +330,13 @@ PersistentKeepalive = ${peer.persistent_keepalive || 25}`;
|
|||||||
|
|
||||||
<div className="card">
|
<div className="card">
|
||||||
<div className="flex items-center">
|
<div className="flex items-center">
|
||||||
<div className="p-2 bg-green-100 rounded-lg">
|
<div className={`p-2 rounded-lg ${peers.some(p => p._liveStatus?.online) ? 'bg-green-100' : 'bg-gray-100'}`}>
|
||||||
<Activity className="h-6 w-6 text-green-600" />
|
<Activity className={`h-6 w-6 ${peers.some(p => p._liveStatus?.online) ? 'text-green-600' : 'text-gray-400'}`} />
|
||||||
</div>
|
</div>
|
||||||
<div className="ml-4">
|
<div className="ml-4">
|
||||||
<p className="text-sm font-medium text-gray-500">Live Connections</p>
|
<p className="text-sm font-medium text-gray-500">Live Connections</p>
|
||||||
<p className="text-lg font-semibold text-gray-900">
|
<p className={`text-lg font-semibold ${peers.some(p => p._liveStatus?.online) ? 'text-green-600' : 'text-gray-900'}`}>
|
||||||
{peers.length}
|
{peers.filter(p => p._liveStatus?.online).length} / {totalPeers}
|
||||||
</p>
|
</p>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
@@ -354,6 +355,75 @@ PersistentKeepalive = ${peer.persistent_keepalive || 25}`;
|
|||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
|
{/* External IP & Port Status */}
|
||||||
|
<div className="card mb-8">
|
||||||
|
<div className="flex items-center justify-between mb-4">
|
||||||
|
<div className="flex items-center">
|
||||||
|
<Globe className="h-5 w-5 text-gray-500 mr-2" />
|
||||||
|
<h2 className="text-lg font-semibold text-gray-900">Server Endpoint</h2>
|
||||||
|
</div>
|
||||||
|
<button
|
||||||
|
onClick={refreshExternalIp}
|
||||||
|
disabled={isRefreshingIp}
|
||||||
|
className="btn btn-secondary flex items-center text-sm"
|
||||||
|
>
|
||||||
|
<RefreshCw className={`h-3 w-3 mr-1 ${isRefreshingIp ? 'animate-spin' : ''}`} />
|
||||||
|
Refresh IP
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
<div className="grid grid-cols-1 md:grid-cols-4 gap-4">
|
||||||
|
<div>
|
||||||
|
<p className="text-sm text-gray-500">External IP</p>
|
||||||
|
<p className="font-mono font-semibold text-gray-900">
|
||||||
|
{serverConfig?.external_ip || <span className="text-yellow-600">Detecting…</span>}
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="text-sm text-gray-500">WireGuard Endpoint</p>
|
||||||
|
<p className="font-mono font-semibold text-gray-900">
|
||||||
|
{serverConfig?.endpoint || `<SERVER_IP>:${serverConfig?.port || 51820}`}
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="text-sm text-gray-500">UDP Port {serverConfig?.port || 51820}</p>
|
||||||
|
{serverConfig ? (
|
||||||
|
<span className={`inline-flex items-center px-2 py-0.5 rounded-full text-xs font-medium ${
|
||||||
|
serverConfig.port_open === true ? 'bg-green-100 text-green-800' :
|
||||||
|
serverConfig.port_open === false ? 'bg-red-100 text-red-800' :
|
||||||
|
'bg-gray-100 text-gray-600'
|
||||||
|
}`}>
|
||||||
|
<span className={`w-1.5 h-1.5 rounded-full mr-1.5 ${
|
||||||
|
serverConfig.port_open === true ? 'bg-green-400' :
|
||||||
|
serverConfig.port_open === false ? 'bg-red-400' : 'bg-gray-400'
|
||||||
|
}`} />
|
||||||
|
{serverConfig.port_open === true ? 'Open' :
|
||||||
|
serverConfig.port_open === false ? 'Blocked' :
|
||||||
|
serverConfig.port_open === 'checking' ? 'Checking…' :
|
||||||
|
'Click Refresh IP to check'}
|
||||||
|
</span>
|
||||||
|
) : '—'}
|
||||||
|
</div>
|
||||||
|
<div>
|
||||||
|
<p className="text-sm text-gray-500 mb-1">Server Public Key</p>
|
||||||
|
<p className="font-mono text-xs text-gray-700 break-all">
|
||||||
|
{serverConfig?.public_key || '—'}
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
{serverConfig && !serverConfig.external_ip && (
|
||||||
|
<div className="mt-3 flex items-center text-yellow-700 bg-yellow-50 rounded p-2 text-sm">
|
||||||
|
<AlertCircle className="h-4 w-4 mr-2 flex-shrink-0" />
|
||||||
|
External IP could not be detected. Check internet connectivity, then click Refresh IP.
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
{serverConfig && serverConfig.port_open === false && (
|
||||||
|
<div className="mt-3 flex items-center text-red-700 bg-red-50 rounded p-2 text-sm">
|
||||||
|
<AlertCircle className="h-4 w-4 mr-2 flex-shrink-0" />
|
||||||
|
UDP port {serverConfig.port || 51820} appears closed. Check your router/firewall and forward this port to this machine.
|
||||||
|
</div>
|
||||||
|
)}
|
||||||
|
</div>
|
||||||
|
|
||||||
{/* Traffic Stats */}
|
{/* Traffic Stats */}
|
||||||
{status?.total_traffic && (
|
{status?.total_traffic && (
|
||||||
<div className="card mb-8">
|
<div className="card mb-8">
|
||||||
@@ -432,7 +502,7 @@ PersistentKeepalive = ${peer.persistent_keepalive || 25}`;
|
|||||||
</thead>
|
</thead>
|
||||||
<tbody className="bg-white divide-y divide-gray-200">
|
<tbody className="bg-white divide-y divide-gray-200">
|
||||||
{peers.map((peer, index) => {
|
{peers.map((peer, index) => {
|
||||||
const peerStatus = peerStatuses[peer.name] || { online: null, lastHandshake: null, transferRx: 0, transferTx: 0 };
|
const peerStatus = peerStatuses[peer.name] || { online: null, lastHandshake: null, transferRx: 0, transferTx: 0, endpoint: null };
|
||||||
return (
|
return (
|
||||||
<tr key={index} className="hover:bg-gray-50">
|
<tr key={index} className="hover:bg-gray-50">
|
||||||
<td className="px-6 py-4 whitespace-nowrap">
|
<td className="px-6 py-4 whitespace-nowrap">
|
||||||
@@ -536,13 +606,34 @@ PersistentKeepalive = ${peer.persistent_keepalive || 25}`;
|
|||||||
{selectedPeer.name} Configuration
|
{selectedPeer.name} Configuration
|
||||||
</h3>
|
</h3>
|
||||||
</div>
|
</div>
|
||||||
<button
|
<div className="flex items-center space-x-3">
|
||||||
onClick={() => setShowPeerConfig(false)}
|
<div className="flex items-center bg-gray-100 rounded-lg p-1 text-xs">
|
||||||
className="text-gray-400 hover:text-gray-600"
|
<button
|
||||||
>
|
onClick={() => { setTunnelMode('split'); handleViewPeerConfig(selectedPeer, 'split'); }}
|
||||||
✕
|
className={`px-2 py-1 rounded ${tunnelMode === 'split' ? 'bg-white shadow text-primary-700 font-medium' : 'text-gray-500'}`}
|
||||||
</button>
|
>
|
||||||
|
Split tunnel
|
||||||
|
</button>
|
||||||
|
<button
|
||||||
|
onClick={() => { setTunnelMode('full'); handleViewPeerConfig(selectedPeer, 'full'); }}
|
||||||
|
className={`px-2 py-1 rounded ${tunnelMode === 'full' ? 'bg-white shadow text-primary-700 font-medium' : 'text-gray-500'}`}
|
||||||
|
>
|
||||||
|
Full tunnel
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
<button
|
||||||
|
onClick={() => setShowPeerConfig(false)}
|
||||||
|
className="text-gray-400 hover:text-gray-600"
|
||||||
|
>
|
||||||
|
✕
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
<p className="text-xs text-gray-500 mb-3">
|
||||||
|
{tunnelMode === 'split'
|
||||||
|
? `Split tunnel: only cell services (${serverConfig?.split_tunnel_ips || '10.0.0.0/24, 172.20.0.0/16'}) route through VPN — local network & internet traffic stay direct.`
|
||||||
|
: 'Full tunnel: all traffic (internet + local) routes through VPN server.'}
|
||||||
|
</p>
|
||||||
|
|
||||||
<div className="space-y-4">
|
<div className="space-y-4">
|
||||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||||
|
|||||||
@@ -2,7 +2,7 @@ import axios from 'axios';
|
|||||||
|
|
||||||
// Create axios instance with base configuration
|
// Create axios instance with base configuration
|
||||||
const api = axios.create({
|
const api = axios.create({
|
||||||
baseURL: import.meta.env.VITE_API_URL || 'http://localhost:3000',
|
baseURL: import.meta.env.VITE_API_URL || '',
|
||||||
timeout: 10000,
|
timeout: 10000,
|
||||||
headers: {
|
headers: {
|
||||||
'Content-Type': 'application/json',
|
'Content-Type': 'application/json',
|
||||||
@@ -37,6 +37,12 @@ export const cellAPI = {
|
|||||||
getStatus: () => api.get('/api/status'),
|
getStatus: () => api.get('/api/status'),
|
||||||
getConfig: () => api.get('/api/config'),
|
getConfig: () => api.get('/api/config'),
|
||||||
updateConfig: (config) => api.put('/api/config', config),
|
updateConfig: (config) => api.put('/api/config', config),
|
||||||
|
createBackup: () => api.post('/api/config/backup'),
|
||||||
|
listBackups: () => api.get('/api/config/backups'),
|
||||||
|
restoreBackup: (id) => api.post(`/api/config/restore/${id}`),
|
||||||
|
deleteBackup: (id) => api.delete(`/api/config/backups/${id}`),
|
||||||
|
exportConfig: (format = 'json') => api.get('/api/config/export', { params: { format } }),
|
||||||
|
importConfig: (config, format = 'json') => api.post('/api/config/import', { config, format }),
|
||||||
};
|
};
|
||||||
|
|
||||||
// Network Services API
|
// Network Services API
|
||||||
@@ -63,6 +69,7 @@ export const wireguardAPI = {
|
|||||||
testConnectivity: (data) => api.post('/api/wireguard/connectivity', data),
|
testConnectivity: (data) => api.post('/api/wireguard/connectivity', data),
|
||||||
updatePeerIP: (data) => api.put('/api/wireguard/peers/ip', data),
|
updatePeerIP: (data) => api.put('/api/wireguard/peers/ip', data),
|
||||||
getPeerConfig: (data) => api.post('/api/wireguard/peers/config', data),
|
getPeerConfig: (data) => api.post('/api/wireguard/peers/config', data),
|
||||||
|
getPeerStatuses: () => api.get('/api/wireguard/peers/statuses'),
|
||||||
};
|
};
|
||||||
|
|
||||||
// Peer Registry API
|
// Peer Registry API
|
||||||
@@ -136,6 +143,7 @@ export const routingAPI = {
|
|||||||
getFirewallRules: () => api.get('/api/routing/firewall'),
|
getFirewallRules: () => api.get('/api/routing/firewall'),
|
||||||
addFirewallRule: (rule) => api.post('/api/routing/firewall', rule),
|
addFirewallRule: (rule) => api.post('/api/routing/firewall', rule),
|
||||||
deleteFirewallRule: (ruleId) => api.delete(`/api/routing/firewall/${ruleId}`),
|
deleteFirewallRule: (ruleId) => api.delete(`/api/routing/firewall/${ruleId}`),
|
||||||
|
getLiveIptables: () => api.get('/api/routing/live-iptables'),
|
||||||
// Other
|
// Other
|
||||||
addExitNode: (node) => api.post('/api/routing/exit-nodes', node),
|
addExitNode: (node) => api.post('/api/routing/exit-nodes', node),
|
||||||
addBridgeRoute: (route) => api.post('/api/routing/bridge', route),
|
addBridgeRoute: (route) => api.post('/api/routing/bridge', route),
|
||||||
@@ -173,6 +181,15 @@ export const servicesAPI = {
|
|||||||
restartService: (serviceName) => api.post(`/api/services/bus/services/${serviceName}/restart`),
|
restartService: (serviceName) => api.post(`/api/services/bus/services/${serviceName}/restart`),
|
||||||
};
|
};
|
||||||
|
|
||||||
|
// Cell-to-cell connections API
|
||||||
|
export const cellLinkAPI = {
|
||||||
|
getInvite: () => api.get('/api/cells/invite'),
|
||||||
|
listConnections: () => api.get('/api/cells'),
|
||||||
|
addConnection: (invite) => api.post('/api/cells', invite),
|
||||||
|
removeConnection: (name) => api.delete(`/api/cells/${name}`),
|
||||||
|
getStatus: (name) => api.get(`/api/cells/${name}/status`),
|
||||||
|
};
|
||||||
|
|
||||||
// Health check
|
// Health check
|
||||||
export const healthAPI = {
|
export const healthAPI = {
|
||||||
check: () => api.get('/health'),
|
check: () => api.get('/health'),
|
||||||
@@ -182,6 +199,18 @@ export const healthAPI = {
|
|||||||
export const monitoringAPI = {
|
export const monitoringAPI = {
|
||||||
getBackendLogs: (lines = 100) => api.get('/api/logs', { params: { lines } }),
|
getBackendLogs: (lines = 100) => api.get('/api/logs', { params: { lines } }),
|
||||||
getHealthHistory: () => api.get('/api/health/history'),
|
getHealthHistory: () => api.get('/api/health/history'),
|
||||||
|
clearHealthHistory: () => api.post('/api/health/history/clear'),
|
||||||
|
};
|
||||||
|
|
||||||
|
// Logs API
|
||||||
|
export const logsAPI = {
|
||||||
|
getServiceLogs: (service, level = 'ALL', lines = 100) =>
|
||||||
|
api.get(`/api/logs/services/${service}`, { params: { level, lines } }),
|
||||||
|
searchLogs: (data) => api.post('/api/logs/search', data),
|
||||||
|
getLogFiles: () => api.get('/api/logs/files'),
|
||||||
|
rotateLogs: (service) => api.post('/api/logs/rotate', service ? { service } : {}),
|
||||||
|
getVerbosity: () => api.get('/api/logs/verbosity'),
|
||||||
|
setVerbosity: (levels) => api.put('/api/logs/verbosity', levels),
|
||||||
};
|
};
|
||||||
|
|
||||||
// Container Management API
|
// Container Management API
|
||||||
|
|||||||
Reference in New Issue
Block a user