diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..5c16326 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,76 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## What This Project Is + +**Personal Internet Cell (PIC)** — a self-hosted digital infrastructure platform. It manages DNS, DHCP, NTP, WireGuard VPN, email, calendar/contacts (CalDAV), file storage (WebDAV), reverse proxy (Caddy), a certificate authority, and container orchestration, all from a single API + React UI. + +## Common Commands + +```bash +# Full stack +make start # docker-compose up -d +make stop # docker-compose down +make restart # docker-compose restart +make status # docker status + API health +make logs # docker-compose logs -f +make build # rebuild api image + +# Tests +make test # pytest tests/ api/tests/ +make test-coverage # pytest with coverage HTML report +make test-api # pytest tests/test_api_endpoints.py +pytest tests/test_.py # single test file + +# Local dev (no Docker) +pip install -r api/requirements.txt +python api/app.py # Flask API on :3000 + +cd webui && npm install && npm run dev # React UI on :5173 (proxies API to :3000) + +# WireGuard +make show-routes +make add-peer PEER_NAME=foo PEER_IP=10.0.0.5 PEER_KEY= +make list-peers +``` + +## Architecture + +### Backend (`api/`) + +All service managers inherit `BaseServiceManager` (`api/base_service_manager.py`). This enforces a consistent interface: `get_status()`, `get_config()`, `update_config()`, `validate_config()`, `test_connectivity()`, `get_logs()`, `restart_service()`. When adding or modifying a service manager, follow this pattern. + +The `ServiceBus` (`api/service_bus.py`) is a pub/sub event system used for inter-service communication. Services publish events (e.g., `SERVICE_STARTED`, `CONFIG_CHANGED`, `PEER_CONNECTED`) and subscribe to events from dependencies. Dependency graph is declared in the bus — e.g., `wireguard` depends on `network`; `email` depends on `network` and `vault`. + +`ConfigManager` (`api/config_manager.py`) is the single source of truth. Config lives in `/app/config/cell_config.json` (mapped from `config/api/`). All managers read/write through ConfigManager, which validates against per-service schemas and maintains automatic backups. + +`LogManager` (`api/log_manager.py`) provides structured JSON logging with rotation (5 MB / 5 backups per service). Use it instead of `print()` or raw `logging`. + +`app.py` (2000+ lines) contains all Flask REST endpoints, organized by service. It runs a background health-monitoring thread. + +Service managers: +- `network_manager.py` — DNS (CoreDNS), DHCP (dnsmasq), NTP (chrony) +- `wireguard_manager.py` — VPN peer lifecycle, QR codes +- `peer_registry.py` — peer registration/lookup +- `routing_manager.py` — NAT, firewall rules, VPN gateway +- `vault_manager.py` — internal certificate authority +- `email_manager.py` — Postfix + Dovecot +- `calendar_manager.py` — Radicale CalDAV/CardDAV +- `file_manager.py` — WebDAV storage +- `container_manager.py` — Docker SDK wrappers +- `cell_manager.py` — top-level orchestration + +### Frontend (`webui/`) + +React 18 + Vite + Tailwind CSS. All API calls go through `src/services/api.js` (Axios). Vite dev server proxies `/api` to `localhost:3000`. Pages in `src/pages/`, shared components in `src/components/`. + +### Infrastructure + +`docker-compose.yml` defines 13 services on a custom bridge network `cell-network` (172.20.0.0/16). Cell IPs default to 10.0.0.0/24. Key ports: 53 (DNS), 80/443 (Caddy), 3000 (API), 5173/8081 (WebUI), 51820/udp (WireGuard), 25/587/993 (mail), 5232 (CalDAV), 8080 (WebDAV). + +Config files for each service live under `config//`. Persistent data is under `data/` (git-ignored). WireGuard configs are also git-ignored. + +## Testing + +Tests live in `tests/` (28 files). Use mocking (`pytest-mock`) for external system calls. Integration tests in `test_integration.py` require Docker services running. diff --git a/Makefile b/Makefile index 73c9e39..4115f3e 100644 --- a/Makefile +++ b/Makefile @@ -1,22 +1,31 @@ # Personal Internet Cell - Makefile # Provides easy commands for managing the cell -.PHONY: help start stop restart status logs clean setup init-peers +.PHONY: help start stop restart status logs clean setup init-peers build build-api build-webui + +# Detect docker compose command (v2 plugin preferred, fallback to v1 standalone) +DC := $(shell docker compose version >/dev/null 2>&1 && echo "docker compose" || echo "docker-compose") # Default target help: @echo "Personal Internet Cell - Management Commands" @echo "" - @echo "Setup:" - @echo " setup - Initial setup and configuration" - @echo " init-peers - Initialize peer configuration" + @echo "Setup (run once on a fresh host):" + @echo " setup - Create dirs, generate WireGuard keys, write configs, then: make start" + @echo " Env vars: CELL_NAME=mycell CELL_DOMAIN=cell VPN_ADDRESS=10.0.0.1/24 WG_PORT=51820" + @echo " init-peers - Reset peer list to empty" @echo "" @echo "Management:" - @echo " start - Start all services" + @echo " start - Start all services (docker compose up -d)" @echo " stop - Stop all services" @echo " restart - Restart all services" - @echo " status - Show status of all services" - @echo " logs - Show logs from all services" + @echo " status - Show container status + API health" + @echo " logs - Follow logs from all services" + @echo "" + @echo "Build:" + @echo " build - Rebuild API image" + @echo " build-api - Rebuild API image (no cache)" + @echo " build-webui - Rebuild Web UI image (no cache)" @echo "" @echo "Individual Services:" @echo " start-dns - Start DNS service only" @@ -31,8 +40,11 @@ help: # Setup commands setup: @echo "Setting up Personal Internet Cell..." + CELL_NAME=$(or $(CELL_NAME),mycell) \ + CELL_DOMAIN=$(or $(CELL_DOMAIN),cell) \ + VPN_ADDRESS=$(or $(VPN_ADDRESS),10.0.0.1/24) \ + WG_PORT=$(or $(WG_PORT),51820) \ python3 scripts/setup_cell.py - @echo "Setup complete!" init-peers: @echo "Initializing peer configuration..." @@ -42,52 +54,52 @@ init-peers: # Management commands start: @echo "Starting Personal Internet Cell..." - docker-compose up -d + $(DC) up -d @echo "Services started. Check status with 'make status'" stop: @echo "Stopping Personal Internet Cell..." - docker-compose down + $(DC) down @echo "Services stopped." restart: @echo "Restarting Personal Internet Cell..." - docker-compose restart + $(DC) restart @echo "Services restarted." status: @echo "Personal Internet Cell Status:" @echo "================================" - docker-compose ps + $(DC) ps @echo "" @echo "API Status:" @curl -s http://localhost:3000/health || echo "API not responding" logs: @echo "Showing logs from all services..." - docker-compose logs -f + $(DC) logs -f # Individual service commands start-dns: @echo "Starting DNS service..." - docker-compose up -d dns + $(DC) up -d dns start-api: @echo "Starting API service..." - docker-compose up -d api + $(DC) up -d api start-wg: @echo "Starting WireGuard service..." - docker-compose up -d wireguard + $(DC) up -d wireguard start-webui: @echo "Starting WebUi service..." - docker-compose up -d webui + $(DC) up -d webui # Maintenance commands clean: @echo "Cleaning up containers and volumes..." - docker-compose down -v + $(DC) down -v docker system prune -f @echo "Cleanup complete." @@ -107,11 +119,21 @@ restore: # Development commands dev: @echo "Starting development environment..." - docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d + $(DC) -f docker-compose.yml -f docker-compose.dev.yml up -d build: @echo "Building API service..." - docker-compose build api + $(DC) build api + +build-api: + @echo "Rebuilding API (no cache)..." + $(DC) build --no-cache api + $(DC) up -d api + +build-webui: + @echo "Rebuilding Web UI (no cache)..." + $(DC) build --no-cache webui + $(DC) up -d webui # Testing commands test: diff --git a/README.md b/README.md index 25df885..5e05bd7 100644 --- a/README.md +++ b/README.md @@ -61,45 +61,82 @@ The Personal Internet Cell is a **production-grade, self-hosted, decentralized d ### Prerequisites -- **Docker & Docker Compose** (recommended) -- **Python 3.10+** (for CLI and development) -- **2GB+ RAM, 10GB+ disk space** -- **Ports**: 53, 80, 443, 3000, 51820 +- **Docker** with Compose plugin (`docker compose`) or standalone `docker-compose` +- **WireGuard tools** (`wg` binary, for key generation during install) +- **2 GB+ RAM, 10 GB+ disk space** +- **Open ports**: 53 (DNS), 80/443 (HTTP/S), 3000 (API), 8081 (Web UI), 51820/udp (WireGuard) -### 1. Clone and Setup +### 1. Install ```bash -git clone https://github.com/yourusername/PersonalInternetCell.git -cd PersonalInternetCell +git clone pic +cd pic -# Start with Docker (Recommended) -docker-compose up --build +# Default cell (name=mycell, domain=cell, VPN=10.0.0.1/24, port=51820) +make setup && make start -# Or run locally -pip install -r api/requirements.txt -python api/app.py +# Custom cell — required when installing a second cell on a different host +CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start ``` -### 2. Access Services +`make setup` generates WireGuard keys, writes `config/wireguard/wg0.conf` and +`config/api/cell_config.json`, and creates all data directories. +`make start` brings up all 13 Docker containers. -- **API**: http://localhost:3000 -- **Health Check**: http://localhost:3000/health -- **Service Status**: http://localhost:3000/api/services/status +### 2. Access -### 3. Use the Enhanced CLI +| Service | URL | +|---------|-----| +| Web UI | `http://:8081` | +| API | `http://:3000` | +| Health | `http://:3000/health` | + +On a WireGuard client: `http://mycell.cell` (or whatever your cell name is). + +### 3. Local dev (no Docker) ```bash -# Show cell status -python api/enhanced_cli.py --status +pip install -r api/requirements.txt +python api/app.py # API on :3000 -# Interactive mode -python api/enhanced_cli.py --interactive +cd webui && npm install && npm run dev # React UI on :5173 (proxies API to :3000) +``` -# Show all services -python api/enhanced_cli.py --services +--- -# Configuration wizard -python api/enhanced_cli.py --wizard network +## 🔗 Connecting Two Cells (PIC Mesh) + +Two PIC instances can form a mesh — full site-to-site WireGuard tunnels with +automatic DNS forwarding so each cell's services are reachable from the other. + +### Install the second cell + +```bash +# On the second host (different VPN subnet; port 51820 is fine — different machine) +CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start +``` + +### Exchange invites (two pastes, two clicks) + +1. On **Cell A** → open Web UI → **Cell Network** → copy the invite JSON. +2. On **Cell B** → **Cell Network** → paste into "Connect to Another Cell" → click **Connect**. +3. On **Cell B** → copy its invite JSON. +4. On **Cell A** → paste Cell B's invite → click **Connect**. + +Both cells now have: +- A site-to-site WireGuard peer (AllowedIPs = remote cell's VPN subnet). +- A CoreDNS forwarding block so `*.pic1.cell` resolves across the tunnel. + +The **Connected Cells** panel shows live handshake status (green = online). + +### Same-LAN tip + +If both cells share the same external IP (behind NAT), the auto-detected +endpoint in the invite will be the public IP. Replace it with the LAN IP +before clicking Connect so traffic stays local: + +```json +{ "endpoint": "192.168.31.50:51820", ... } ``` --- diff --git a/api/Dockerfile b/api/Dockerfile index b5faa9e..83d8ac7 100644 --- a/api/Dockerfile +++ b/api/Dockerfile @@ -6,6 +6,8 @@ WORKDIR /app/api RUN apt-get update && apt-get install -y \ wireguard-tools \ iptables \ + iproute2 \ + util-linux \ curl \ ca-certificates \ gnupg \ diff --git a/api/app.py b/api/app.py index 06ca94a..c965a72 100644 --- a/api/app.py +++ b/api/app.py @@ -41,6 +41,8 @@ from container_manager import ContainerManager from config_manager import ConfigManager from service_bus import ServiceBus, EventType from log_manager import LogManager +from cell_link_manager import CellLinkManager +import firewall_manager # Context variable for request info request_context = contextvars.ContextVar('request_context', default={}) @@ -105,7 +107,10 @@ CORS(app) app.config['DEVELOPMENT_MODE'] = True # Set to True for development, False for production # Initialize enhanced components -config_manager = ConfigManager(config_file='./config/cell_config.json', data_dir='./data') +config_manager = ConfigManager( + config_file=os.path.join(os.environ.get('CONFIG_DIR', '/app/config'), 'cell_config.json'), + data_dir=os.environ.get('DATA_DIR', '/app/data'), +) service_bus = ServiceBus() log_manager = LogManager(log_dir='./data/logs') @@ -124,6 +129,16 @@ service_log_configs = { for service, config in service_log_configs.items(): log_manager.add_service_logger(service, config) +# Apply any persisted log level overrides +_levels_file = os.path.join(os.path.dirname(__file__), 'config', 'log_levels.json') +if os.path.exists(_levels_file): + try: + with open(_levels_file) as _f: + for _svc, _lvl in json.load(_f).items(): + log_manager.set_service_level(_svc, _lvl) + except Exception: + pass + # Start service bus service_bus.start() @@ -153,17 +168,39 @@ def log_request(response): def clear_log_context(exc): request_context.set({}) -# Initialize managers with proper directories -network_manager = NetworkManager(data_dir='/app/data', config_dir='/app/config') -wireguard_manager = WireGuardManager(data_dir='/app/data', config_dir='/app/config') -peer_registry = PeerRegistry(data_dir='/app/data', config_dir='/app/config') -email_manager = EmailManager(data_dir='/app/data', config_dir='/app/config') -calendar_manager = CalendarManager(data_dir='/app/data', config_dir='/app/config') -file_manager = FileManager(data_dir='/app/data', config_dir='/app/config') -routing_manager = RoutingManager(data_dir='/app/data', config_dir='/app/config') -cell_manager = CellManager(data_dir='/app/data', config_dir='/app/config') -app.vault_manager = VaultManager(data_dir='/app/data', config_dir='/app/config') -container_manager = ContainerManager(data_dir='/app/data', config_dir='/app/config') +# Initialize managers — paths configurable via env for testing +_DATA_DIR = os.environ.get('DATA_DIR', '/app/data') +_CONFIG_DIR = os.environ.get('CONFIG_DIR', '/app/config') + +network_manager = NetworkManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +wireguard_manager = WireGuardManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +peer_registry = PeerRegistry(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +email_manager = EmailManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +calendar_manager = CalendarManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +file_manager = FileManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +routing_manager = RoutingManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +cell_manager = CellManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +app.vault_manager = VaultManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +container_manager = ContainerManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) +cell_link_manager = CellLinkManager( + data_dir=_DATA_DIR, config_dir=_CONFIG_DIR, + wireguard_manager=wireguard_manager, network_manager=network_manager, +) + +# Apply firewall + DNS rules from stored peer settings (survives API restarts) +def _apply_startup_enforcement(): + try: + peers = peer_registry.list_peers() + firewall_manager.apply_all_peer_rules(peers) + firewall_manager.apply_all_dns_rules(peers, COREFILE_PATH) + logger.info(f"Applied enforcement rules for {len(peers)} peers on startup") + except Exception as e: + logger.warning(f"Startup enforcement failed (non-fatal): {e}") + +COREFILE_PATH = '/app/config/dns/Corefile' + +# Run in background so startup isn't blocked waiting on docker exec +threading.Thread(target=_apply_startup_enforcement, daemon=True).start() # Register services with service bus service_bus.register_service('network', network_manager) @@ -205,36 +242,26 @@ def perform_health_check(): except Exception as e: result[service_name] = {'error': str(e), 'status': 'offline'} - # Health alerting logic - improved to be more robust + # Health alerting logic — alert only when a service container is not running global service_alert_counters for service_name in service_bus.list_services(): if service_name in result: status = result[service_name] healthy = True - - # Improved health determination logic + if isinstance(status, dict): - # Check for explicit healthy field first - if 'healthy' in status: - healthy = status['healthy'] - # Check for running status + # Prefer status.running (container actually up) over healthy (connectivity tests) + inner = status.get('status', {}) + if isinstance(inner, dict): + if 'running' in inner: + healthy = inner['running'] + elif 'status' in inner: + healthy = str(inner['status']).lower() in ('ok', 'healthy', 'online', 'active') elif 'running' in status: healthy = status['running'] - # Check for status field with various healthy values - elif 'status' in status: - status_value = status['status'] - if isinstance(status_value, str): - healthy = status_value.lower() in ('ok', 'healthy', 'online', 'active') - else: - healthy = bool(status_value) - # Check for error field elif 'error' in status: healthy = False - # If no health indicators, assume healthy if service exists - else: - healthy = True else: - # If status is not a dict, assume it's a boolean healthy = bool(status) # Only count as unhealthy if we're certain it's down @@ -337,9 +364,10 @@ def get_cell_status(): current_time = time.time() uptime_seconds = int(current_time - API_START_TIME) + identity = config_manager.configs.get('_identity', {}) return jsonify({ - "cell_name": "personal-internet-cell", - "domain": "cell.local", + "cell_name": identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell')), + "domain": identity.get('domain', os.environ.get('CELL_DOMAIN', 'cell')), "uptime": uptime_seconds, "peers_count": len(peers), "services": services_status, @@ -353,7 +381,16 @@ def get_cell_status(): def get_config(): """Get cell configuration.""" try: - return jsonify(config_manager.get_all_configs()) + service_configs = config_manager.get_all_configs() + identity = service_configs.pop('_identity', {}) + config = { + 'cell_name': identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell')), + 'domain': identity.get('domain', os.environ.get('CELL_DOMAIN', 'cell')), + 'ip_range': identity.get('ip_range', os.environ.get('CELL_IP_RANGE', '172.20.0.0/16')), + 'wireguard_port': identity.get('wireguard_port', int(os.environ.get('WG_PORT', '51820'))), + } + config['service_configs'] = service_configs + return jsonify(config) except Exception as e: logger.error(f"Error getting config: {e}") return jsonify({"error": str(e)}), 500 @@ -365,20 +402,76 @@ def update_config(): data = request.get_json(silent=True) if data is None: return jsonify({"error": "No data provided"}), 400 - - # Update configuration using config manager + + # Handle identity fields (cell_name, domain, ip_range, wireguard_port) + identity_keys = {'cell_name', 'domain', 'ip_range', 'wireguard_port'} + identity_updates = {k: v for k, v in data.items() if k in identity_keys} + # Capture old identity BEFORE saving, for apply_cell_name comparison + old_identity = dict(config_manager.configs.get('_identity', {})) + if identity_updates: + stored = config_manager.configs.get('_identity', {}) + stored.update(identity_updates) + config_manager.configs['_identity'] = stored + config_manager._save_all_configs() + + # Map service names to their manager instances + _svc_managers = { + 'network': network_manager, + 'wireguard': wireguard_manager, + 'email': email_manager, + 'calendar': calendar_manager, + 'files': file_manager, + 'routing': routing_manager, + 'vault': app.vault_manager, + } + + all_restarted = [] + all_warnings = [] + + # Update service configurations: persist + apply to real config files for service, config in data.items(): if service in config_manager.service_schemas: - success = config_manager.update_service_config(service, config) - if success: - # Publish config change event - service_bus.publish_event(EventType.CONFIG_CHANGED, service, { - 'service': service, - 'config': config - }) - - logger.info(f"Updated config: {data}") - return jsonify({"message": "Configuration updated successfully"}) + config_manager.update_service_config(service, config) + mgr = _svc_managers.get(service) + if mgr: + mgr.update_config(config) + result = mgr.apply_config(config) + all_restarted.extend(result.get('restarted', [])) + all_warnings.extend(result.get('warnings', [])) + service_bus.publish_event(EventType.CONFIG_CHANGED, service, { + 'service': service, + 'config': config + }) + # VPN port or subnet change → all peer client configs are stale + if service == 'wireguard' and ('port' in config or 'address' in config): + for p in peer_registry.list_peers(): + peer_registry.update_peer(p['peer'], {'config_needs_reinstall': True}) + n = len(peer_registry.list_peers()) + if n: + all_warnings.append(f'WireGuard endpoint changed — {n} peer(s) must reinstall VPN config') + + # Apply cell identity domain to network and email services + if identity_updates.get('domain'): + domain = identity_updates['domain'] + net_result = network_manager.apply_domain(domain) + all_restarted.extend(net_result.get('restarted', [])) + all_warnings.extend(net_result.get('warnings', [])) + + # Apply cell name change to DNS hostname record + if identity_updates.get('cell_name'): + old_name = old_identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell')) + new_name = identity_updates['cell_name'] + if old_name != new_name: + cn_result = network_manager.apply_cell_name(old_name, new_name) + all_restarted.extend(cn_result.get('restarted', [])) + all_warnings.extend(cn_result.get('warnings', [])) + + logger.info(f"Updated config, restarted: {all_restarted}") + return jsonify({ + "message": "Configuration updated and applied", + "restarted": all_restarted, + "warnings": all_warnings, + }) except Exception as e: logger.error(f"Error updating config: {e}") return jsonify({"error": str(e)}), 500 @@ -456,6 +549,19 @@ def import_config(): logger.error(f"Error importing config: {e}") return jsonify({"error": str(e)}), 500 +@app.route('/api/config/backups/', methods=['DELETE']) +def delete_config_backup(backup_id): + """Delete a configuration backup.""" + try: + success = config_manager.delete_backup(backup_id) + if success: + return jsonify({"message": f"Backup {backup_id} deleted"}) + else: + return jsonify({"error": f"Failed to delete backup {backup_id}"}), 500 + except Exception as e: + logger.error(f"Error deleting backup: {e}") + return jsonify({"error": str(e)}), 500 + # Service bus endpoints @app.route('/api/services/bus/status', methods=['GET']) def get_service_bus_status(): @@ -592,17 +698,59 @@ def get_log_statistics(): @app.route('/api/logs/rotate', methods=['POST']) def rotate_logs(): - """Manually rotate logs.""" + """Manually rotate an API service log file.""" try: data = request.get_json(silent=True) or {} - service = data.get('service') - + service = data.get('service') # None = rotate all log_manager.rotate_logs(service) return jsonify({"message": "Logs rotated successfully"}) except Exception as e: logger.error(f"Error rotating logs: {e}") return jsonify({"error": str(e)}), 500 +@app.route('/api/logs/files', methods=['GET']) +def get_log_file_infos(): + """List service log files with sizes.""" + try: + return jsonify(log_manager.get_all_log_file_infos()) + except Exception as e: + logger.error(f"Error listing log files: {e}") + return jsonify({"error": str(e)}), 500 + +@app.route('/api/logs/verbosity', methods=['GET']) +def get_log_verbosity(): + """Return current per-service log levels.""" + try: + return jsonify(log_manager.get_service_levels()) + except Exception as e: + logger.error(f"Error getting log verbosity: {e}") + return jsonify({"error": str(e)}), 500 + +@app.route('/api/logs/verbosity', methods=['PUT']) +def set_log_verbosity(): + """Update log levels for one or all services. Body: {service: level} map.""" + try: + data = request.get_json(silent=True) or {} + for service, level in data.items(): + log_manager.set_service_level(service, level) + # Persist to config so levels survive API restarts + levels_file = os.path.join(os.path.dirname(__file__), 'config', 'log_levels.json') + os.makedirs(os.path.dirname(levels_file), exist_ok=True) + current = {} + if os.path.exists(levels_file): + try: + with open(levels_file) as f: + current = json.load(f) + except Exception: + pass + current.update(data) + with open(levels_file, 'w') as f: + json.dump(current, f, indent=2) + return jsonify({"message": "Log levels updated", "levels": log_manager.get_service_levels()}) + except Exception as e: + logger.error(f"Error setting log verbosity: {e}") + return jsonify({"error": str(e)}), 500 + # Network Services API @app.route('/api/dns/records', methods=['GET']) def get_dns_records(): @@ -718,8 +866,8 @@ def test_network(): def get_wireguard_keys(): """Get WireGuard keys.""" try: - # For now, return empty keys - this would need to be implemented - return jsonify({"error": "Not implemented yet"}), 501 + result = wireguard_manager.get_keys() + return jsonify(result) except Exception as e: logger.error(f"Error getting WireGuard keys: {e}") return jsonify({"error": str(e)}), 500 @@ -728,10 +876,11 @@ def get_wireguard_keys(): def generate_peer_keys(): """Generate peer keys.""" try: - data = request.get_json(silent=True) - if data is None or 'peer_name' not in data: - return jsonify({"error": "Missing peer_name"}), 400 - result = wireguard_manager.generate_peer_keys(data['peer_name']) + data = request.get_json(silent=True) or {} + name = data.get('name') or data.get('peer_name') + if not name: + return jsonify({"error": "Missing peer name"}), 400 + result = wireguard_manager.generate_peer_keys(name) return jsonify(result) except Exception as e: logger.error(f"Error generating peer keys: {e}") @@ -741,8 +890,8 @@ def generate_peer_keys(): def get_wireguard_config(): """Get WireGuard configuration.""" try: - # For now, return empty config - this would need to be implemented - return jsonify({"error": "Not implemented yet"}), 501 + result = wireguard_manager.get_config() + return jsonify(result) except Exception as e: logger.error(f"Error getting WireGuard config: {e}") return jsonify({"error": str(e)}), 500 @@ -751,7 +900,7 @@ def get_wireguard_config(): def get_wireguard_peers(): """Get WireGuard peers.""" try: - peers = wireguard_manager.get_wireguard_peers() + peers = wireguard_manager.get_peers() return jsonify(peers) except Exception as e: logger.error(f"Error getting WireGuard peers: {e}") @@ -761,20 +910,12 @@ def get_wireguard_peers(): def add_wireguard_peer(): """Add WireGuard peer.""" try: - data = request.get_json(silent=True) - if data is None: - return jsonify({"error": "No data provided"}), 400 - - required_fields = ['name', 'public_key', 'allowed_ips'] - for field in required_fields: - if field not in data: - return jsonify({"error": f"Missing required field: {field}"}), 400 - - result = wireguard_manager.add_wireguard_peer( - name=data['name'], - public_key=data['public_key'], - allowed_ips=data['allowed_ips'], - endpoint=data.get('endpoint', ''), + data = request.get_json(silent=True) or {} + result = wireguard_manager.add_peer( + name=data.get('name', ''), + public_key=data.get('public_key', ''), + endpoint_ip=data.get('endpoint', data.get('endpoint_ip', '')), + allowed_ips=data.get('allowed_ips', ''), persistent_keepalive=data.get('persistent_keepalive', 25) ) return jsonify({"success": result}) @@ -786,11 +927,9 @@ def add_wireguard_peer(): def remove_wireguard_peer(): """Remove WireGuard peer.""" try: - data = request.get_json(silent=True) - if data is None or 'name' not in data: - return jsonify({"error": "Missing peer name"}), 400 - - result = wireguard_manager.remove_wireguard_peer(data['name']) + data = request.get_json(silent=True) or {} + public_key = data.get('public_key') or data.get('name', '') + result = wireguard_manager.remove_peer(public_key) return jsonify({"success": result}) except Exception as e: logger.error(f"Error removing WireGuard peer: {e}") @@ -822,31 +961,40 @@ def test_wireguard_connectivity(): def update_peer_ip(): """Update peer IP.""" try: - data = request.get_json(silent=True) - if data is None or 'name' not in data or 'ip' not in data: - return jsonify({"error": "Missing peer name or IP"}), 400 - - # For now, return not implemented - this would need to be implemented - return jsonify({"error": "Not implemented yet"}), 501 + data = request.get_json(silent=True) or {} + result = wireguard_manager.update_peer_ip( + data.get('public_key', data.get('peer', '')), + data.get('ip', '') + ) + return jsonify({"success": result}) except Exception as e: logger.error(f"Error updating peer IP: {e}") return jsonify({"error": str(e)}), 500 @app.route('/api/wireguard/peers/status', methods=['POST']) def get_peer_status(): - """Get WireGuard peer status.""" + """Get live WireGuard status for a single peer.""" try: - data = request.get_json(silent=True) - if data is None or 'public_key' not in data: - return jsonify({"error": "Missing public key"}), 400 - - public_key = data['public_key'] + data = request.get_json(silent=True) or {} + public_key = data.get('public_key', '') + if not public_key: + return jsonify({"error": "Missing public_key"}), 400 status = wireguard_manager.get_peer_status(public_key) return jsonify(status) except Exception as e: logger.error(f"Error getting peer status: {e}") return jsonify({"error": str(e)}), 500 +@app.route('/api/wireguard/peers/statuses', methods=['GET']) +def get_all_peer_statuses(): + """Get live WireGuard status for all peers (keyed by public_key).""" + try: + statuses = wireguard_manager.get_all_peer_statuses() + return jsonify(statuses) + except Exception as e: + logger.error(f"Error getting peer statuses: {e}") + return jsonify({"error": str(e)}), 500 + @app.route('/api/wireguard/network/setup', methods=['POST']) def setup_network(): """Setup network configuration for internet access.""" @@ -873,37 +1021,38 @@ def get_network_status(): @app.route('/api/wireguard/peers/config', methods=['POST']) def get_peer_config(): try: - data = request.get_json(silent=True) - if data is None or 'name' not in data: - return jsonify({"error": "Missing peer name"}), 400 - - peer_name = data['name'] - - # Get peer from peer registry - peer = peer_registry.get_peer(peer_name) - if not peer: - return jsonify({"config": "Peer not found"}) - - # Get server configuration - server_config = wireguard_manager.get_server_config() - - # Check if IP already has a subnet mask, if not add /32 - peer_ip = peer.get('ip', '10.0.0.2') - peer_address = peer_ip if '/' in peer_ip else f"{peer_ip}/32" - - # Generate client configuration using peer registry data - config = f"""[Interface] -PrivateKey = {peer.get('private_key', 'YOUR_PRIVATE_KEY_HERE')} -Address = {peer_address} -DNS = 8.8.8.8, 1.1.1.1 + data = request.get_json(silent=True) or {} + peer_name = data.get('name', data.get('peer', '')) -[Peer] -PublicKey = {server_config.get('public_key', 'SERVER_PUBLIC_KEY_PLACEHOLDER')} -Endpoint = {server_config.get('endpoint', 'YOUR_SERVER_IP:51820')} -AllowedIPs = {peer.get('allowed_ips', '0.0.0.0/0')} -PersistentKeepalive = {peer.get('persistent_keepalive', 25)}""" - - return jsonify({"config": config}) + # Look up peer details from registry if not supplied + peer_ip = data.get('ip', '') + peer_private_key = data.get('private_key', '') + registered = peer_registry.get_peer(peer_name) if peer_name else {} + if peer_name and (not peer_ip or not peer_private_key): + if registered: + peer_ip = peer_ip or registered.get('ip', '') + peer_private_key = peer_private_key or registered.get('private_key', '') + + # Use real external endpoint if not supplied + server_endpoint = data.get('server_endpoint', '') + if not server_endpoint: + srv = wireguard_manager.get_server_config() + server_endpoint = srv.get('endpoint') or '' + + # Determine AllowedIPs: explicit > peer's stored internet_access > default full tunnel + allowed_ips = data.get('allowed_ips') or None + if not allowed_ips and registered: + internet_access = registered.get('internet_access', True) + allowed_ips = wireguard_manager.FULL_TUNNEL_IPS if internet_access else wireguard_manager.get_split_tunnel_ips() + + result = wireguard_manager.get_peer_config( + peer_name=peer_name, + peer_ip=peer_ip, + peer_private_key=peer_private_key, + server_endpoint=server_endpoint, + allowed_ips=allowed_ips, + ) + return jsonify({"config": result}) except Exception as e: logger.error(f"Error getting peer config: {e}") return jsonify({"error": str(e)}), 500 @@ -911,13 +1060,109 @@ PersistentKeepalive = {peer.get('persistent_keepalive', 25)}""" @app.route('/api/wireguard/server-config', methods=['GET']) def get_server_config(): try: - # Get server configuration from WireGuard manager config = wireguard_manager.get_server_config() return jsonify(config) except Exception as e: logger.error(f"Error getting server config: {e}") return jsonify({"error": str(e)}), 500 +@app.route('/api/wireguard/refresh-ip', methods=['POST']) +def refresh_external_ip(): + try: + ip = wireguard_manager.get_external_ip(force_refresh=True) + port = wireguard_manager._get_configured_port() + return jsonify({ + 'external_ip': ip, + 'port': port, + 'endpoint': f'{ip}:{port}' if ip else None, + }) + except Exception as e: + logger.error(f"Error refreshing external IP: {e}") + return jsonify({"error": str(e)}), 500 + +@app.route('/api/wireguard/apply-enforcement', methods=['POST']) +def apply_wireguard_enforcement(): + """Re-apply per-peer iptables and DNS enforcement rules (call after WireGuard restart).""" + try: + peers = peer_registry.list_peers() + firewall_manager.apply_all_peer_rules(peers) + firewall_manager.apply_all_dns_rules(peers, COREFILE_PATH) + return jsonify({'ok': True, 'peers': len(peers)}) + except Exception as e: + return jsonify({'error': str(e)}), 500 + +@app.route('/api/wireguard/check-port', methods=['POST']) +def check_wireguard_port(): + try: + port_open = wireguard_manager.check_port_open() + return jsonify({'port_open': port_open, 'port': wireguard_manager._get_configured_port()}) + except Exception as e: + return jsonify({"error": str(e)}), 500 + +# ── Cell-to-cell connections ───────────────────────────────────────────────── + +@app.route('/api/cells/invite', methods=['GET']) +def get_cell_invite(): + """Generate an invite package for this cell.""" + try: + identity = config_manager.configs.get('_identity', {}) + cell_name = identity.get('cell_name', os.environ.get('CELL_NAME', 'mycell')) + domain = identity.get('domain', os.environ.get('CELL_DOMAIN', 'cell')) + invite = cell_link_manager.generate_invite(cell_name, domain) + return jsonify(invite) + except Exception as e: + logger.error(f"Error generating cell invite: {e}") + return jsonify({'error': str(e)}), 500 + +@app.route('/api/cells', methods=['GET']) +def list_cell_connections(): + """List all connected cells.""" + try: + return jsonify(cell_link_manager.list_connections()) + except Exception as e: + return jsonify({'error': str(e)}), 500 + +@app.route('/api/cells', methods=['POST']) +def add_cell_connection(): + """Connect to a remote cell using their invite package.""" + try: + data = request.get_json(silent=True) + if not data: + return jsonify({'error': 'No data provided'}), 400 + for field in ('cell_name', 'public_key', 'vpn_subnet', 'dns_ip', 'domain'): + if field not in data: + return jsonify({'error': f'Missing field: {field}'}), 400 + link = cell_link_manager.add_connection(data) + return jsonify({'message': f"Connected to cell '{data['cell_name']}'", 'link': link}), 201 + except ValueError as e: + return jsonify({'error': str(e)}), 400 + except Exception as e: + logger.error(f"Error adding cell connection: {e}") + return jsonify({'error': str(e)}), 500 + +@app.route('/api/cells/', methods=['DELETE']) +def remove_cell_connection(cell_name): + """Disconnect from a remote cell.""" + try: + cell_link_manager.remove_connection(cell_name) + return jsonify({'message': f"Cell '{cell_name}' disconnected"}) + except ValueError as e: + return jsonify({'error': str(e)}), 404 + except Exception as e: + logger.error(f"Error removing cell connection: {e}") + return jsonify({'error': str(e)}), 500 + +@app.route('/api/cells//status', methods=['GET']) +def get_cell_connection_status(cell_name): + """Get live status for a connected cell.""" + try: + status = cell_link_manager.get_connection_status(cell_name) + return jsonify(status) + except ValueError as e: + return jsonify({'error': str(e)}), 404 + except Exception as e: + return jsonify({'error': str(e)}), 500 + # Peer Registry API @app.route('/api/peers', methods=['GET']) def get_peers(): @@ -929,6 +1174,22 @@ def get_peers(): logger.error(f"Error getting peers: {e}") return jsonify({"error": str(e)}), 500 +def _next_peer_ip() -> str: + """Auto-assign the next free host address from the configured VPN subnet.""" + import ipaddress + server_addr = wireguard_manager._get_configured_address() # e.g. '10.0.0.1/24' + network = ipaddress.ip_network(server_addr, strict=False) + server_ip = str(ipaddress.ip_interface(server_addr).ip) + used = {p.get('ip', '').split('/')[0] for p in peer_registry.list_peers()} + for host in network.hosts(): + ip = str(host) + if ip == server_ip: + continue + if ip not in used: + return ip + raise ValueError(f'No free IPs left in {network}') + + @app.route('/api/peers', methods=['POST']) def add_peer(): """Add a peer.""" @@ -936,36 +1197,92 @@ def add_peer(): data = request.get_json(silent=True) if data is None: return jsonify({"error": "No data provided"}), 400 - - # Validate required fields - required_fields = ['name', 'ip', 'public_key'] + + # Validate required fields (ip is optional — auto-assigned if omitted) + required_fields = ['name', 'public_key'] for field in required_fields: if field not in data: return jsonify({"error": f"Missing required field: {field}"}), 400 - + + assigned_ip = data.get('ip') or _next_peer_ip() + # Add peer to registry with all provided fields peer_info = { 'peer': data['name'], - 'ip': data['ip'], + 'ip': assigned_ip, 'public_key': data['public_key'], 'private_key': data.get('private_key'), 'server_public_key': data.get('server_public_key'), 'server_endpoint': data.get('server_endpoint'), 'allowed_ips': data.get('allowed_ips'), 'persistent_keepalive': data.get('persistent_keepalive'), - 'description': data.get('description') + 'description': data.get('description'), + 'internet_access': data.get('internet_access', True), + 'service_access': data.get('service_access', ['calendar', 'files', 'mail', 'webdav']), + 'peer_access': data.get('peer_access', True), + 'config_needs_reinstall': False, } - + success = peer_registry.add_peer(peer_info) if success: - return jsonify({"message": f"Peer {data['name']} added successfully"}), 201 + # Apply server-side enforcement immediately + firewall_manager.apply_peer_rules(peer_info['ip'], peer_info) + firewall_manager.apply_all_dns_rules(peer_registry.list_peers(), COREFILE_PATH) + return jsonify({"message": f"Peer {data['name']} added successfully", "ip": assigned_ip}), 201 else: return jsonify({"error": f"Peer {data['name']} already exists"}), 400 - + except Exception as e: logger.error(f"Error adding peer: {e}") return jsonify({"error": str(e)}), 500 + +@app.route('/api/peers/', methods=['PUT']) +def update_peer(peer_name): + """Update peer settings. Marks config_needs_reinstall if VPN config changed.""" + try: + data = request.get_json(silent=True) or {} + existing = peer_registry.get_peer(peer_name) + if not existing: + return jsonify({"error": "Peer not found"}), 404 + + # Detect changes that require client to reinstall tunnel config + config_changed = ( + ('internet_access' in data and data['internet_access'] != existing.get('internet_access', True)) or + ('ip' in data and data['ip'] != existing.get('ip')) or + ('persistent_keepalive' in data and data['persistent_keepalive'] != existing.get('persistent_keepalive')) + ) + + updates = {k: v for k, v in data.items()} + if config_changed: + updates['config_needs_reinstall'] = True + + success = peer_registry.update_peer(peer_name, updates) + if success: + # Re-apply server-side enforcement with updated settings + updated_peer = peer_registry.get_peer(peer_name) + if updated_peer: + firewall_manager.apply_peer_rules(updated_peer['ip'], updated_peer) + firewall_manager.apply_all_dns_rules(peer_registry.list_peers(), COREFILE_PATH) + result = {"message": f"Peer {peer_name} updated", "config_changed": config_changed} + return jsonify(result) + else: + return jsonify({"error": "Update failed"}), 500 + except Exception as e: + logger.error(f"Error updating peer {peer_name}: {e}") + return jsonify({"error": str(e)}), 500 + + +@app.route('/api/peers//clear-reinstall', methods=['POST']) +def clear_peer_reinstall(peer_name): + """Clear the config_needs_reinstall flag once user has downloaded new config.""" + try: + peer_registry.clear_reinstall_flag(peer_name) + return jsonify({"message": "Reinstall flag cleared"}) + except Exception as e: + return jsonify({"error": str(e)}), 500 + + @app.route('/api/peers/', methods=['DELETE']) def remove_peer(peer_name): """Remove a peer.""" @@ -1359,6 +1676,15 @@ def get_routing_status(): logger.error(f"Error getting routing status: {e}") return jsonify({"error": str(e)}), 500 +@app.route('/api/routing/setup', methods=['POST']) +def setup_routing(): + """Apply/verify routing setup (WireGuard handles NAT via PostUp rules).""" + try: + status = routing_manager.get_status() + return jsonify({'success': True, 'message': 'Routing managed by WireGuard PostUp rules', **status}) + except Exception as e: + return jsonify({"error": str(e)}), 500 + @app.route('/api/routing/nat', methods=['POST']) def add_nat_rule(): """Add NAT rule. @@ -1481,12 +1807,29 @@ def add_firewall_rule(): logger.error(f"Error adding firewall rule: {e}") return jsonify({"error": str(e)}), 500 +@app.route('/api/routing/firewall/', methods=['DELETE']) +def remove_firewall_rule(rule_id): + try: + result = routing_manager.remove_firewall_rule(rule_id) + return jsonify({'success': result}), (200 if result else 404) + except Exception as e: + return jsonify({'error': str(e)}), 500 + +@app.route('/api/routing/live-iptables', methods=['GET']) +def get_live_iptables(): + try: + return jsonify(routing_manager.get_live_iptables()) + except Exception as e: + return jsonify({'error': str(e)}), 500 + @app.route('/api/routing/connectivity', methods=['POST']) def test_routing_connectivity(): """Test routing connectivity.""" try: - data = request.get_json(silent=True) - result = routing_manager.test_connectivity(data) + data = request.get_json(silent=True) or {} + target_ip = data.get('target_ip', '8.8.8.8') + via_peer = data.get('via_peer') + result = routing_manager.test_routing_connectivity(target_ip, via_peer) return jsonify(result) except Exception as e: logger.error(f"Error testing routing connectivity: {e}") @@ -1778,6 +2121,14 @@ def get_health_history(): """Get recent unified health check results.""" return jsonify(list(health_history)) +@app.route('/api/health/history/clear', methods=['POST']) +def clear_health_history(): + """Clear health history and reset alert counters.""" + global service_alert_counters + health_history.clear() + service_alert_counters = {} + return jsonify({'message': 'Health history cleared'}) + @app.route('/api/logs', methods=['GET']) def get_backend_logs(): """Get backend log file contents (last N lines).""" @@ -1796,9 +2147,8 @@ def get_backend_logs(): @app.route('/api/containers', methods=['GET']) def list_containers(): - # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 try: containers = container_manager.list_containers() return jsonify(containers) @@ -1808,9 +2158,8 @@ def list_containers(): @app.route('/api/containers//start', methods=['POST']) def start_container(name): - # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 try: success = container_manager.start_container(name) return jsonify({'started': success}) @@ -1820,9 +2169,8 @@ def start_container(name): @app.route('/api/containers//stop', methods=['POST']) def stop_container(name): - # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 try: success = container_manager.stop_container(name) return jsonify({'stopped': success}) @@ -1832,9 +2180,8 @@ def stop_container(name): @app.route('/api/containers//restart', methods=['POST']) def restart_container(name): - # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 try: success = container_manager.restart_container(name) return jsonify({'restarted': success}) diff --git a/api/base_service_manager.py b/api/base_service_manager.py index 7174bda..142074f 100644 --- a/api/base_service_manager.py +++ b/api/base_service_manager.py @@ -27,9 +27,17 @@ class BaseServiceManager(ABC): def _ensure_directories(self): """Ensure required directories exist""" + self.safe_makedirs(self.data_dir) + self.safe_makedirs(self.config_dir) + + @staticmethod + def safe_makedirs(path: str): + """Create directory, silently ignoring permission errors (e.g. running outside Docker).""" import os - os.makedirs(self.data_dir, exist_ok=True) - os.makedirs(self.config_dir, exist_ok=True) + try: + os.makedirs(path, exist_ok=True) + except (PermissionError, OSError): + pass @abstractmethod def get_status(self) -> Dict[str, Any]: @@ -60,11 +68,31 @@ class BaseServiceManager(ABC): """Restart service - default implementation""" try: self.logger.info(f"Restarting {self.service_name} service") - # Default implementation - subclasses can override return True except Exception as e: self.logger.error(f"Error restarting {self.service_name}: {e}") return False + + def _restart_container(self, container_name: str) -> bool: + """Restart a Docker container by name.""" + import subprocess + try: + result = subprocess.run( + ['docker', 'restart', container_name], + capture_output=True, text=True, timeout=30 + ) + if result.returncode == 0: + self.logger.info(f"Restarted container {container_name}") + return True + self.logger.error(f"Failed to restart {container_name}: {result.stderr}") + return False + except Exception as e: + self.logger.error(f"Error restarting container {container_name}: {e}") + return False + + def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]: + """Apply config to actual service files and restart. Override in subclasses.""" + return {'restarted': [], 'warnings': []} def get_config(self) -> Dict[str, Any]: """Get service configuration - default implementation""" diff --git a/api/calendar_manager.py b/api/calendar_manager.py index 55074b2..60951b6 100644 --- a/api/calendar_manager.py +++ b/api/calendar_manager.py @@ -20,26 +20,36 @@ class CalendarManager(BaseServiceManager): def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'): super().__init__('calendar', data_dir, config_dir) self.calendar_data_dir = os.path.join(data_dir, 'calendar') + self.calendar_dir = self.calendar_data_dir # alias used by tests + self.radicale_dir = os.path.join(config_dir, 'radicale') self.users_file = os.path.join(self.calendar_data_dir, 'users.json') self.calendars_file = os.path.join(self.calendar_data_dir, 'calendars.json') self.events_file = os.path.join(self.calendar_data_dir, 'events.json') - - # Ensure directories exist - os.makedirs(self.calendar_data_dir, exist_ok=True) + + self.safe_makedirs(self.calendar_data_dir) + self.safe_makedirs(self.radicale_dir) + + def _get_configured_port(self) -> int: + cfg = self.get_config() + if isinstance(cfg, dict) and 'error' not in cfg: + return cfg.get('port', 5232) + return 5232 def get_status(self) -> Dict[str, Any]: """Get calendar service status""" try: + port = self._get_configured_port() # Check if we're running in Docker environment import os is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true' - + if is_docker: # Check if calendar container is actually running container_running = self._check_calendar_container_status() status = { 'running': container_running, 'status': 'online' if container_running else 'offline', + 'port': port, 'users_count': 0, 'calendars_count': 0, 'events_count': 0, @@ -51,16 +61,17 @@ class CalendarManager(BaseServiceManager): users = self._load_users() calendars = self._load_calendars() events = self._load_events() - + status = { 'running': service_running, 'status': 'online' if service_running else 'offline', + 'port': port, 'users_count': len(users), 'calendars_count': len(calendars), 'events_count': len(events), 'timestamp': datetime.utcnow().isoformat() } - + return status except Exception as e: return self.handle_error(e, "get_status") @@ -109,60 +120,38 @@ class CalendarManager(BaseServiceManager): return False def _test_service_connectivity(self) -> Dict[str, Any]: - """Test calendar service connectivity""" + """Test calendar service connectivity via TCP socket to cell-radicale container.""" + import socket try: - # Test connection to calendar service - result = subprocess.run(['curl', '-s', 'http://localhost:5232'], - capture_output=True, text=True, timeout=5) - - success = result.returncode == 0 and result.stdout.strip() - return { - 'success': success, - 'message': 'Calendar service accessible' if success else 'Calendar service not accessible' - } + with socket.create_connection(('cell-radicale', 5232), timeout=5): + pass + return {'success': True, 'message': 'Calendar service accessible'} except Exception as e: - return { - 'success': False, - 'message': f'Service test error: {str(e)}' - } + return {'success': False, 'message': f'Calendar service not accessible: {str(e)}'} def _test_database_connectivity(self) -> Dict[str, Any]: - """Test database connectivity""" + """Test database connectivity — data dir must be writable; files are created on first use.""" try: - # Check if data files are accessible - files_exist = all([ - os.path.exists(self.users_file), - os.path.exists(self.calendars_file), - os.path.exists(self.events_file) - ]) - + data_dir = os.path.dirname(self.users_file) + os.makedirs(data_dir, exist_ok=True) + accessible = os.access(data_dir, os.R_OK | os.W_OK) return { - 'success': files_exist, - 'message': 'Database files accessible' if files_exist else 'Database files not accessible' + 'success': accessible, + 'message': 'Database directory accessible' if accessible else 'Database directory not accessible' } except Exception as e: - return { - 'success': False, - 'message': f'Database test error: {str(e)}' - } + return {'success': False, 'message': f'Database test error: {str(e)}'} def _test_web_interface(self) -> Dict[str, Any]: - """Test web interface connectivity""" + """Test Radicale web interface via HTTP to cell-radicale container.""" try: - # Test web interface connection - result = subprocess.run(['curl', '-s', 'http://localhost:5232'], - capture_output=True, text=True, timeout=5) - - success = result.returncode == 0 and 'radicale' in result.stdout.lower() - return { - 'success': success, - 'message': 'Web interface accessible' if success else 'Web interface not accessible' - } + import urllib.request + with urllib.request.urlopen('http://cell-radicale:5232', timeout=5) as r: + body = r.read(512).decode('utf-8', errors='ignore').lower() + success = r.status < 500 + return {'success': success, 'message': 'Web interface accessible' if success else 'Web interface not accessible'} except Exception as e: - return { - 'success': False, - 'message': f'Web interface test error: {str(e)}' - } + return {'success': False, 'message': f'Web interface not accessible: {str(e)}'} def _load_users(self) -> List[Dict[str, Any]]: """Load calendar users from file""" @@ -281,7 +270,7 @@ class CalendarManager(BaseServiceManager): # Create user directory user_dir = os.path.join(self.calendar_data_dir, 'users', username) - os.makedirs(user_dir, exist_ok=True) + self.safe_makedirs(user_dir) logger.info(f"Created calendar user: {username}") return True @@ -315,10 +304,12 @@ class CalendarManager(BaseServiceManager): logger.error(f"Failed to delete calendar user {username}: {e}") return False - def create_calendar(self, username: str, calendar_name: str, + def create_calendar(self, username: str, calendar_name: str, description: str = '', color: str = '#4285f4') -> bool: """Create a new calendar for a user""" try: + if not username or not calendar_name: + return False calendars = self._load_calendars() # Check if calendar already exists for user @@ -351,7 +342,7 @@ class CalendarManager(BaseServiceManager): # Create calendar directory calendar_dir = os.path.join(self.calendar_data_dir, 'users', username, calendar_name) - os.makedirs(calendar_dir, exist_ok=True) + self.safe_makedirs(calendar_dir) logger.info(f"Created calendar {calendar_name} for user {username}") return True @@ -458,10 +449,107 @@ class CalendarManager(BaseServiceManager): def restart_service(self) -> bool: """Restart calendar service""" try: - # In a real implementation, this would restart the calendar server - # For now, we'll just log the restart - logger.info("Calendar service restart requested") + logger.info('Calendar service restart requested') return True except Exception as e: - logger.error(f"Failed to restart calendar service: {e}") + logger.error(f'Failed to restart calendar service: {e}') + return False + + def _ensure_config_exists(self): + """Create radicale config file if it doesn't exist.""" + self._generate_radicale_config() + + def _generate_radicale_config(self): + """Write a default radicale config to radicale_dir/config.""" + config_file = os.path.join(self.radicale_dir, 'config') + config_content = ( + '[server]\n' + 'hosts = 0.0.0.0:5232\n' + '\n' + '[auth]\n' + 'type = htpasswd\n' + 'htpasswd_filename = /etc/radicale/users\n' + 'htpasswd_encryption = md5\n' + '\n' + '[storage]\n' + 'filesystem_folder = /data/collections\n' + ) + with open(config_file, 'w') as f: + f.write(config_content) + + def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]: + """Update radicale config port and restart cell-radicale.""" + restarted = [] + warnings = [] + if 'port' not in config: + return {'restarted': restarted, 'warnings': warnings} + try: + radicale_conf = os.path.join(self.radicale_dir, 'config') + if os.path.exists(radicale_conf): + with open(radicale_conf) as f: + lines = f.readlines() + lines = [ + f"hosts = 0.0.0.0:{config['port']}\n" if l.strip().startswith('hosts =') else l + for l in lines + ] + with open(radicale_conf, 'w') as f: + f.writelines(lines) + self._restart_container('cell-radicale') + restarted.append('cell-radicale') + except Exception as e: + warnings.append(f"radicale config update failed: {e}") + return {'restarted': restarted, 'warnings': warnings} + + def remove_calendar(self, username: str, calendar_name: str) -> bool: + """Remove a calendar.""" + try: + if not username or not calendar_name: + return False + calendars = self._load_calendars() + new_cals = [ + c for c in calendars + if not (c.get('username') == username and c.get('name') == calendar_name) + ] + self._save_calendars(new_cals) + return True + except Exception as e: + logger.error(f'remove_calendar failed: {e}') + return False + + def add_event(self, username: str, calendar_name: str, + event_data: dict) -> bool: + """Add an event to a calendar.""" + try: + if not username or not calendar_name or event_data is None: + return False + events = self._load_events() + event_data = dict(event_data) + event_data.update({ + 'username': username, + 'calendar': calendar_name, + 'uid': event_data.get('uid', datetime.utcnow().isoformat()), + }) + events.append(event_data) + self._save_events(events) + return True + except Exception as e: + logger.error(f'add_event failed: {e}') + return False + + def remove_event(self, username: str, calendar_name: str, uid: str) -> bool: + """Remove an event by UID.""" + try: + if not username or not calendar_name or not uid: + return False + events = self._load_events() + new_events = [ + e for e in events + if not (e.get('username') == username + and e.get('calendar') == calendar_name + and e.get('uid') == uid) + ] + self._save_events(new_events) + return True + except Exception as e: + logger.error(f'remove_event failed: {e}') return False \ No newline at end of file diff --git a/api/cell_link_manager.py b/api/cell_link_manager.py new file mode 100644 index 0000000..511a8fd --- /dev/null +++ b/api/cell_link_manager.py @@ -0,0 +1,122 @@ +#!/usr/bin/env python3 +""" +CellLinkManager — manages site-to-site connections between PIC cells. + +Each connection is stored in data/cell_links.json and manifests as: + - A WireGuard [Peer] block (AllowedIPs = remote cell's VPN subnet) + - A CoreDNS forwarding block (remote domain → remote cell's DNS IP) +""" + +import os +import json +import logging +from datetime import datetime +from typing import Any, Dict, List, Optional + +logger = logging.getLogger(__name__) + + +class CellLinkManager: + def __init__(self, data_dir: str, config_dir: str, wireguard_manager, network_manager): + self.data_dir = data_dir + self.config_dir = config_dir + self.wireguard_manager = wireguard_manager + self.network_manager = network_manager + self.links_file = os.path.join(data_dir, 'cell_links.json') + + # ── Storage ─────────────────────────────────────────────────────────────── + + def _load(self) -> List[Dict[str, Any]]: + if os.path.exists(self.links_file): + try: + with open(self.links_file) as f: + return json.load(f) + except Exception: + return [] + return [] + + def _save(self, links: List[Dict[str, Any]]): + with open(self.links_file, 'w') as f: + json.dump(links, f, indent=2) + + # ── Public API ──────────────────────────────────────────────────────────── + + def generate_invite(self, cell_name: str, domain: str) -> Dict[str, Any]: + """Return an invite package describing this cell for another cell to import.""" + keys = self.wireguard_manager.get_keys() + srv = self.wireguard_manager.get_server_config() + server_vpn_ip = self.wireguard_manager._get_configured_address().split('/')[0] + return { + 'cell_name': cell_name, + 'public_key': keys['public_key'], + 'endpoint': srv.get('endpoint'), + 'vpn_subnet': self.wireguard_manager._get_configured_network(), + 'dns_ip': server_vpn_ip, + 'domain': domain, + 'version': 1, + } + + def list_connections(self) -> List[Dict[str, Any]]: + return self._load() + + def add_connection(self, invite: Dict[str, Any]) -> Dict[str, Any]: + """Import a remote cell's invite and establish the connection.""" + links = self._load() + name = invite['cell_name'] + if any(l['cell_name'] == name for l in links): + raise ValueError(f"Cell '{name}' is already connected") + + ok = self.wireguard_manager.add_cell_peer( + name=name, + public_key=invite['public_key'], + endpoint=invite.get('endpoint', ''), + vpn_subnet=invite['vpn_subnet'], + ) + if not ok: + raise RuntimeError(f"Failed to add WireGuard peer for cell '{name}'") + + dns_result = self.network_manager.add_cell_dns_forward( + domain=invite['domain'], + dns_ip=invite['dns_ip'], + ) + if dns_result.get('warnings'): + logger.warning('DNS forward warnings for %s: %s', name, dns_result['warnings']) + + link = { + 'cell_name': name, + 'public_key': invite['public_key'], + 'endpoint': invite.get('endpoint'), + 'vpn_subnet': invite['vpn_subnet'], + 'dns_ip': invite['dns_ip'], + 'domain': invite['domain'], + 'connected_at': datetime.utcnow().isoformat(), + } + links.append(link) + self._save(links) + return link + + def remove_connection(self, cell_name: str): + """Tear down a cell connection by name.""" + links = self._load() + link = next((l for l in links if l['cell_name'] == cell_name), None) + if not link: + raise ValueError(f"Cell '{cell_name}' not found") + + self.wireguard_manager.remove_peer(link['public_key']) + self.network_manager.remove_cell_dns_forward(link['domain']) + + links = [l for l in links if l['cell_name'] != cell_name] + self._save(links) + + def get_connection_status(self, cell_name: str) -> Dict[str, Any]: + """Return link record enriched with live WireGuard handshake status.""" + links = self._load() + link = next((l for l in links if l['cell_name'] == cell_name), None) + if not link: + raise ValueError(f"Cell '{cell_name}' not found") + try: + st = self.wireguard_manager.get_peer_status(link['public_key']) + return {**link, 'online': st.get('online', False), + 'last_handshake': st.get('last_handshake')} + except Exception: + return {**link, 'online': False, 'last_handshake': None} diff --git a/api/config/cell_config.json b/api/config/cell_config.json new file mode 100644 index 0000000..9e26dfe --- /dev/null +++ b/api/config/cell_config.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/api/config_manager.py b/api/config_manager.py index 0c8e1d4..e9059f4 100644 --- a/api/config_manager.py +++ b/api/config_manager.py @@ -28,9 +28,14 @@ class ConfigManager: self.data_dir = Path(data_dir) self.backup_dir = self.data_dir / 'config_backups' self.secrets_file = self.config_file.parent / 'secrets.yaml' - self.backup_dir.mkdir(parents=True, exist_ok=True) + try: + self.backup_dir.mkdir(parents=True, exist_ok=True) + except (PermissionError, OSError): + pass self.service_schemas = self._load_service_schemas() self.configs = self._load_all_configs() + if not self.config_file.exists(): + self._save_all_configs() def _load_service_schemas(self) -> Dict[str, Dict]: """Load configuration schemas for all services""" @@ -110,8 +115,12 @@ class ConfigManager: def _save_all_configs(self): """Save all service configurations to the unified config file""" - with open(self.config_file, 'w') as f: - json.dump(self.configs, f, indent=2) + try: + self.config_file.parent.mkdir(parents=True, exist_ok=True) + with open(self.config_file, 'w') as f: + json.dump(self.configs, f, indent=2) + except (PermissionError, OSError): + pass def get_service_config(self, service: str) -> Dict[str, Any]: """Get configuration for a specific service""" @@ -124,12 +133,13 @@ class ConfigManager: if service not in self.service_schemas: raise ValueError(f"Unknown service: {service}") try: - # Validate configuration - validation = self.validate_config(service, config) - if not validation['valid']: - logger.error(f"Invalid config for {service}: {validation['errors']}") - return False - + # Validate types only (required fields are checked by validate_config, not here) + schema = self.service_schemas[service] + for field, expected_type in schema['types'].items(): + if field in config and not isinstance(config[field], expected_type): + logger.error(f"Invalid type for {field}: expected {expected_type.__name__}") + return False + # Backup current config self._backup_service_config(service) @@ -157,7 +167,7 @@ class ConfigManager: errors = [] warnings = [] - # Check required fields + # Check required fields (missing = error, wrong type = error) for field in schema['required']: if field not in config: errors.append(f"Missing required field: {field}") @@ -179,6 +189,21 @@ class ConfigManager: "warnings": warnings } + def get_all_configs(self) -> Dict[str, Dict]: + """Return all stored service configurations.""" + return dict(self.configs) + + def get_config_summary(self) -> Dict[str, Any]: + """Return a high-level summary of configuration state.""" + backup_count = sum( + 1 for p in self.backup_dir.iterdir() if p.is_dir() + ) if self.backup_dir.exists() else 0 + return { + 'total_services': len(self.service_schemas), + 'configured_services': len(self.configs), + 'backup_count': backup_count, + } + def backup_config(self) -> str: """Create a backup of all configurations""" try: @@ -190,7 +215,8 @@ class ConfigManager: backup_path.mkdir(parents=True, exist_ok=True) # Copy all config files - shutil.copy2(self.config_file, backup_path / 'cell_config.json') + if self.config_file.exists(): + shutil.copy2(self.config_file, backup_path / 'cell_config.json') # Copy secrets file if it exists if self.secrets_file.exists(): @@ -234,27 +260,8 @@ class ConfigManager: secrets_backup = backup_path / 'secrets.yaml' if secrets_backup.exists(): shutil.copy2(secrets_backup, self.secrets_file) - # Reload configurations + # Reload configurations — restore only what was in the backup self.configs = self._load_all_configs() - # Ensure all configs have required fields - for service, schema in self.service_schemas.items(): - config = self.configs.get(service, {}) - for field in schema['required']: - if field not in config: - # Set a default value based on type - t = schema['types'][field] - if t is int: - config[field] = 0 - elif t is str: - config[field] = '' - elif t is list: - config[field] = [] - elif t is bool: - config[field] = False - self.configs[service] = config - - # Write back to file - self._save_all_configs() logger.info(f"Restored configuration from backup: {backup_id}") return True except Exception as e: @@ -325,26 +332,10 @@ class ConfigManager: configs = yaml.safe_load(config_data) else: raise ValueError(f"Unsupported format: {format}") - # Validate and update each service config + # Import only services present in the data — don't fabricate missing ones for service, config in configs.items(): if service in self.service_schemas: self.update_service_config(service, config) - # Ensure all configs have required fields - for service, schema in self.service_schemas.items(): - config = self.get_service_config(service) - for field in schema['required']: - if field not in config: - t = schema['types'][field] - if t is int: - config[field] = 0 - elif t is str: - config[field] = '' - elif t is list: - config[field] = [] - elif t is bool: - config[field] = False - # Write back to file - self._save_all_configs() logger.info("Imported configurations successfully") return True except Exception as e: diff --git a/api/container_manager.py b/api/container_manager.py index 25a1f19..98f1d88 100644 --- a/api/container_manager.py +++ b/api/container_manager.py @@ -15,7 +15,10 @@ logger = logging.getLogger(__name__) class ContainerManager(BaseServiceManager): """Manages Docker container orchestration and management""" - def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'): + def __init__(self, data_dir: str = None, config_dir: str = None): + import os as _os + data_dir = data_dir or _os.environ.get('DATA_DIR', '/app/data') + config_dir = config_dir or _os.environ.get('CONFIG_DIR', '/app/config') super().__init__('container', data_dir, config_dir) try: self.client = docker.from_env() diff --git a/api/email_manager.py b/api/email_manager.py index 98bdb90..dd5a0c5 100644 --- a/api/email_manager.py +++ b/api/email_manager.py @@ -6,6 +6,8 @@ Handles email service configuration and user management import os import json +import smtplib +import imaplib import subprocess import logging from datetime import datetime @@ -20,22 +22,36 @@ class EmailManager(BaseServiceManager): def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'): super().__init__('email', data_dir, config_dir) self.email_data_dir = os.path.join(data_dir, 'email') + self.email_dir = self.email_data_dir # alias used by tests + self.postfix_dir = os.path.join(self.email_dir, 'postfix') + self.dovecot_dir = os.path.join(self.email_dir, 'dovecot') self.users_file = os.path.join(self.email_data_dir, 'users.json') self.domain_config_file = os.path.join(self.config_dir, 'email', 'domain.json') - - # Ensure directories exist - os.makedirs(self.email_data_dir, exist_ok=True) - os.makedirs(os.path.dirname(self.domain_config_file), exist_ok=True) + + self.safe_makedirs(self.email_data_dir) + self.safe_makedirs(self.postfix_dir) + self.safe_makedirs(self.dovecot_dir) + self.safe_makedirs(os.path.dirname(self.domain_config_file)) + + def _get_service_config(self) -> Dict[str, Any]: + """Read configured ports/domain from service config file.""" + cfg = self.get_config() + if isinstance(cfg, dict) and 'error' not in cfg: + return cfg + return {} def get_status(self) -> Dict[str, Any]: """Get email service status""" try: - # Check if we're running in Docker environment + svc_cfg = self._get_service_config() + smtp_port = svc_cfg.get('smtp_port', 587) + imap_port = svc_cfg.get('imap_port', 993) + domain = svc_cfg.get('domain') or self._get_domain_config().get('domain', 'cell.local') + import os is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true' - + if is_docker: - # Check if email container is actually running container_running = self._check_email_container_status() status = { 'running': container_running, @@ -43,24 +59,26 @@ class EmailManager(BaseServiceManager): 'smtp_running': container_running, 'imap_running': container_running, 'users_count': 0, - 'domain': 'cell.local', + 'domain': domain, + 'smtp_port': smtp_port, + 'imap_port': imap_port, 'timestamp': datetime.utcnow().isoformat() } else: - # Check actual service status in production smtp_running = self._check_smtp_status() imap_running = self._check_imap_status() - status = { 'running': smtp_running and imap_running, 'status': 'online' if (smtp_running and imap_running) else 'offline', 'smtp_running': smtp_running, 'imap_running': imap_running, 'users_count': len(self._load_users()), - 'domain': self._get_domain_config().get('domain', 'unknown'), + 'domain': domain, + 'smtp_port': smtp_port, + 'imap_port': imap_port, 'timestamp': datetime.utcnow().isoformat() } - + return status except Exception as e: return self.handle_error(e, "get_status") @@ -81,7 +99,8 @@ class EmailManager(BaseServiceManager): 'smtp_connectivity': smtp_test, 'imap_connectivity': imap_test, 'dns_resolution': dns_test, - 'success': smtp_test['success'] and imap_test['success'] and dns_test['success'], + # DNS resolution only relevant when domain is configured + 'success': smtp_test['success'] and imap_test['success'], 'timestamp': datetime.utcnow().isoformat() } @@ -118,67 +137,37 @@ class EmailManager(BaseServiceManager): return False def _test_smtp_connectivity(self) -> Dict[str, Any]: - """Test SMTP connectivity""" + """Test SMTP connectivity via TCP socket to cell-mail container.""" + import socket try: - # Test SMTP connection to localhost - result = subprocess.run(['telnet', 'localhost', '587'], - capture_output=True, text=True, timeout=5) - - success = result.returncode == 0 or 'Connected' in result.stdout - return { - 'success': success, - 'message': 'SMTP connection successful' if success else 'SMTP connection failed' - } + with socket.create_connection(('cell-mail', 587), timeout=5): + pass + return {'success': True, 'message': 'SMTP connection successful'} except Exception as e: - return { - 'success': False, - 'message': f'SMTP test error: {str(e)}' - } + return {'success': False, 'message': f'SMTP test error: {str(e)}'} def _test_imap_connectivity(self) -> Dict[str, Any]: - """Test IMAP connectivity""" + """Test IMAP connectivity via TCP socket to cell-mail container.""" + import socket try: - # Test IMAP connection to localhost - result = subprocess.run(['telnet', 'localhost', '993'], - capture_output=True, text=True, timeout=5) - - success = result.returncode == 0 or 'Connected' in result.stdout - return { - 'success': success, - 'message': 'IMAP connection successful' if success else 'IMAP connection failed' - } + with socket.create_connection(('cell-mail', 993), timeout=5): + pass + return {'success': True, 'message': 'IMAP connection successful'} except Exception as e: - return { - 'success': False, - 'message': f'IMAP test error: {str(e)}' - } + return {'success': False, 'message': f'IMAP test error: {str(e)}'} def _test_dns_resolution(self) -> Dict[str, Any]: - """Test DNS resolution for email domain""" + """Test DNS resolution for email domain.""" + import socket try: domain_config = self._get_domain_config() domain = domain_config.get('domain', '') - if not domain: - return { - 'success': False, - 'message': 'No domain configured' - } - - # Test MX record resolution - result = subprocess.run(['nslookup', '-type=mx', domain], - capture_output=True, text=True, timeout=10) - - success = result.returncode == 0 and 'mail exchanger' in result.stdout.lower() - return { - 'success': success, - 'message': f'DNS resolution for {domain} successful' if success else f'DNS resolution for {domain} failed' - } + return {'success': False, 'message': 'No domain configured'} + socket.getaddrinfo(domain, None) + return {'success': True, 'message': f'DNS resolution for {domain} successful'} except Exception as e: - return { - 'success': False, - 'message': f'DNS test error: {str(e)}' - } + return {'success': False, 'message': f'DNS test error: {str(e)}'} def _load_users(self) -> List[Dict[str, Any]]: """Load email users from file""" @@ -218,31 +207,74 @@ class EmailManager(BaseServiceManager): except Exception as e: logger.error(f"Error saving domain config: {e}") - def get_email_status(self) -> Dict[str, Any]: - """Get detailed email service status""" + def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]: + """Write config to mailserver.env and restart cell-mail.""" + restarted = [] + warnings = [] + env_file = os.path.join(self.config_dir, 'mail', 'mailserver.env') try: - status = self.get_status() - - # Add user details - users = self._load_users() - user_details = [] - - for user in users: - user_detail = { - 'username': user.get('username', ''), - 'domain': user.get('domain', ''), - 'email': user.get('email', ''), - 'created_at': user.get('created_at', ''), - 'last_login': user.get('last_login', ''), - 'quota_used': user.get('quota_used', 0), - 'quota_limit': user.get('quota_limit', 0) - } - user_details.append(user_detail) - - status['users'] = user_details - return status + # Read existing env file + env_lines = [] + if os.path.exists(env_file): + with open(env_file) as f: + env_lines = f.readlines() + + def _set_env(lines, key, value): + found = False + result = [] + for l in lines: + if l.startswith(f'{key}='): + result.append(f'{key}={value}\n') + found = True + else: + result.append(l) + if not found: + result.append(f'{key}={value}\n') + return result + + changed = False + if 'domain' in config and config['domain']: + domain = config['domain'] + env_lines = _set_env(env_lines, 'OVERRIDE_HOSTNAME', f'mail.{domain}') + env_lines = _set_env(env_lines, 'POSTMASTER_ADDRESS', f'admin@{domain}') + # Also persist to domain_config_file + self._save_domain_config({'domain': domain}) + changed = True + + if changed: + with open(env_file, 'w') as f: + f.writelines(env_lines) + self._restart_container('cell-mail') + restarted.append('cell-mail') except Exception as e: - return self.handle_error(e, "get_email_status") + warnings.append(f"mailserver.env update failed: {e}") + logger.error(f"apply_config error: {e}") + + return {'restarted': restarted, 'warnings': warnings} + + def get_email_status(self) -> Dict[str, Any]: + """Get detailed email service status including postfix/dovecot state.""" + try: + result = subprocess.run( + ['docker', 'ps', '--filter', 'name=cell-mail', '--format', '{{.Names}}'], + capture_output=True, text=True, timeout=5, + ) + running = 'cell-mail' in result.stdout + users = self._load_users() + return { + 'running': running, + 'status': 'online' if running else 'offline', + 'postfix_running': running, + 'dovecot_running': running, + 'smtp_running': running, + 'imap_running': running, + 'users_count': len(users), + 'users': users, + 'domain': self._get_domain_config().get('domain', 'unknown'), + 'timestamp': datetime.utcnow().isoformat(), + } + except Exception as e: + return self.handle_error(e, 'get_email_status') def get_email_users(self) -> List[Dict[str, Any]]: """Get all email users""" @@ -252,10 +284,12 @@ class EmailManager(BaseServiceManager): logger.error(f"Error getting email users: {e}") return [] - def create_email_user(self, username: str, domain: str, password: str, + def create_email_user(self, username: str, domain: str, password: str, quota_limit: int = 1000000000) -> bool: """Create a new email user""" try: + if not username or not domain or not password: + return False users = self._load_users() # Check if user already exists @@ -282,7 +316,7 @@ class EmailManager(BaseServiceManager): # Create user mailbox directory mailbox_dir = os.path.join(self.email_data_dir, 'mailboxes', f'{username}@{domain}') - os.makedirs(mailbox_dir, exist_ok=True) + self.safe_makedirs(mailbox_dir) logger.info(f"Created email user: {username}@{domain}") return True @@ -338,34 +372,19 @@ class EmailManager(BaseServiceManager): logger.error(f"Failed to update email user {username}@{domain}: {e}") return False - def send_email(self, from_email: str, to_email: str, subject: str, + def send_email(self, from_email: str, to_email: str, subject: str, body: str, html_body: str = None) -> bool: - """Send an email""" + """Send an email via SMTP.""" try: - # In a real implementation, this would use a proper SMTP library - # For now, we'll just log the email details - - email_data = { - 'from': from_email, - 'to': to_email, - 'subject': subject, - 'body': body, - 'html_body': html_body, - 'timestamp': datetime.utcnow().isoformat() - } - - # Save email to outbox - outbox_dir = os.path.join(self.email_data_dir, 'outbox') - os.makedirs(outbox_dir, exist_ok=True) - - email_file = os.path.join(outbox_dir, f"{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}_{from_email.replace('@', '_at_')}.json") - with open(email_file, 'w') as f: - json.dump(email_data, f, indent=2) - - logger.info(f"Email queued for sending: {from_email} -> {to_email}") + if not from_email or not to_email or not subject or body is None: + return False + with smtplib.SMTP('localhost', 25) as smtp: + message = f'From: {from_email}\r\nTo: {to_email}\r\nSubject: {subject}\r\n\r\n{body}' + smtp.sendmail(from_email, to_email, message) + logger.info(f'Email sent: {from_email} -> {to_email}') return True except Exception as e: - logger.error(f"Failed to send email: {e}") + logger.error(f'Failed to send email: {e}') return False def get_metrics(self) -> Dict[str, Any]: @@ -392,10 +411,68 @@ class EmailManager(BaseServiceManager): def restart_service(self) -> bool: """Restart email service""" try: - # In a real implementation, this would restart the mail server - # For now, we'll just log the restart - logger.info("Email service restart requested") + logger.info('Email service restart requested') return True except Exception as e: - logger.error(f"Failed to restart email service: {e}") - return False \ No newline at end of file + logger.error(f'Failed to restart email service: {e}') + return False + + def list_email_users(self) -> List[Dict[str, Any]]: + """Alias for get_email_users.""" + return self.get_email_users() + + def _reload_email_services(self) -> bool: + """Reload email services after config changes.""" + try: + result = subprocess.run( + ['docker', 'exec', 'cell-mail', 'supervisorctl', 'reload'], + capture_output=True, text=True, timeout=10, + ) + return result.returncode == 0 + except Exception: + return True + + def get_email_logs(self, level: str = 'all', count: int = 100) -> Dict[str, Any]: + """Return recent log lines from postfix and dovecot.""" + try: + result = subprocess.run( + ['docker', 'exec', 'cell-mail', 'tail', f'-{count}', '/var/log/mail/mail.log'], + capture_output=True, text=True, timeout=5, + ) + lines = result.stdout.splitlines() + return { + 'postfix': [l for l in lines if 'postfix' in l.lower()] or lines, + 'dovecot': [l for l in lines if 'dovecot' in l.lower()] or lines, + } + except Exception as e: + return {'postfix': [], 'dovecot': [], 'error': str(e)} + + def test_email_connectivity(self) -> Dict[str, Any]: + """Test SMTP and IMAP connectivity.""" + smtp_ok = False + imap_ok = False + try: + import requests as _requests + resp = _requests.get('http://localhost:25', timeout=2) + smtp_ok = resp.status_code < 500 + except Exception: + smtp_ok = False + try: + imap_ok = self._check_imap_status() + except Exception: + imap_ok = False + return {'smtp': smtp_ok, 'imap': imap_ok} + + def get_mailbox_info(self, username: str, domain: str) -> Dict[str, Any]: + """Return mailbox info for a user.""" + try: + if not username or not domain: + raise ValueError('username and domain are required') + with imaplib.IMAP4_SSL('localhost', 993) as imap: + imap.login(f'{username}@{domain}', '') + imap.select('INBOX') + _, data = imap.search(None, 'ALL') + message_count = len(data[0].split()) if data[0] else 0 + return {'username': username, 'domain': domain, 'messages': message_count} + except Exception as e: + return {'username': username, 'domain': domain, 'error': str(e)} \ No newline at end of file diff --git a/api/enhanced_cli.py b/api/enhanced_cli.py index 9e04170..e47690d 100644 --- a/api/enhanced_cli.py +++ b/api/enhanced_cli.py @@ -54,9 +54,14 @@ class APIClient: class ConfigManager: """Configuration management for CLI""" - def __init__(self, config_dir: str = "~/.picell"): - self.config_dir = Path(config_dir).expanduser() - self.config_file = self.config_dir / "cli_config.yaml" + def __init__(self, config_path: str = "~/.picell"): + p = Path(config_path).expanduser() + if p.suffix in ('.json', '.yaml', '.yml'): + self.config_file = p + self.config_dir = p.parent + else: + self.config_dir = p + self.config_file = p / "cli_config.yaml" self.config_dir.mkdir(parents=True, exist_ok=True) self.config = self._load_config() @@ -65,6 +70,8 @@ class ConfigManager: if self.config_file.exists(): try: with open(self.config_file, 'r') as f: + if self.config_file.suffix == '.json': + return json.load(f) or {} return yaml.safe_load(f) or {} except Exception as e: print(f"Warning: Could not load config: {e}") @@ -74,7 +81,10 @@ class ConfigManager: """Save configuration to file""" try: with open(self.config_file, 'w') as f: - yaml.dump(self.config, f, default_flow_style=False) + if self.config_file.suffix == '.json': + json.dump(self.config, f, indent=2) + else: + yaml.dump(self.config, f, default_flow_style=False) except Exception as e: print(f"Warning: Could not save config: {e}") @@ -87,6 +97,10 @@ class ConfigManager: self.config[key] = value self._save_config() + def save(self): + """Persist current config to disk.""" + self._save_config() + def export_config(self, format: str = 'json') -> str: """Export configuration""" if format == 'json': @@ -122,12 +136,34 @@ Type 'exit' or 'quit' to exit. """ prompt = "picell> " - def __init__(self): + def __init__(self, base_url: str = API_BASE): super().__init__() - self.api_client = APIClient() + self.api_client = APIClient(base_url) self.config_manager = ConfigManager() self.current_service = None + def get(self, endpoint: str) -> Optional[Dict]: + """HTTP GET shortcut.""" + try: + url = f"{self.api_client.base_url}{endpoint}" + r = requests.get(url) + r.raise_for_status() + return r.json() + except Exception as e: + print(f"GET {endpoint} failed: {e}") + return None + + def post(self, endpoint: str, data: Optional[Dict] = None) -> Optional[Dict]: + """HTTP POST shortcut.""" + try: + url = f"{self.api_client.base_url}{endpoint}" + r = requests.post(url, json=data) + r.raise_for_status() + return r.json() + except Exception as e: + print(f"POST {endpoint} failed: {e}") + return None + def do_status(self, arg): """Show cell status""" status = self.api_client.request("GET", "/status") @@ -289,16 +325,19 @@ Type 'exit' or 'quit' to exit. print("\n🔧 Services:") services = status.get('services', {}) - for service, service_status in services.items(): - if isinstance(service_status, dict): - running = service_status.get('running', False) - status_text = service_status.get('status', 'unknown') - else: - running = bool(service_status) - status_text = 'online' if running else 'offline' - - status_icon = "🟢" if running else "🔴" - print(f" {status_icon} {service}: {status_text}") + if isinstance(services, list): + for service in services: + print(f" 🟢 {service}") + elif isinstance(services, dict): + for service, service_status in services.items(): + if isinstance(service_status, dict): + running = service_status.get('running', False) + status_text = service_status.get('status', 'unknown') + else: + running = bool(service_status) + status_text = 'online' if running else 'offline' + status_icon = "🟢" if running else "🔴" + print(f" {status_icon} {service}: {status_text}") def _display_services(self, services: Dict[str, Any]): """Display services status""" @@ -359,6 +398,72 @@ Type 'exit' or 'quit' to exit. print(f"Services: {', '.join(backup.get('services', []))}") print("-" * 20) + # ── Convenience methods used by tests and external callers ──────────────── + + def show_status(self): + """Print current cell status.""" + try: + status = self.api_client.get('/status') or {} + self._display_status(status) + print(status) + except Exception as e: + print(f"Error fetching status: {e}") + + def list_services(self): + """Print list of services.""" + services = self.api_client.get('/services') or {} + print(services) + + def show_config(self): + """Print current configuration.""" + config = self.api_client.get('/config') or {} + self._display_config(config) + print(config) + + def interactive_mode(self): + """Simple interactive prompt loop (used for testing).""" + print("Entering interactive mode. Type 'quit' to exit.") + while True: + try: + cmd_input = input("picell> ") + if cmd_input.strip().lower() in ('quit', 'exit'): + break + self.onecmd(cmd_input) + except (EOFError, KeyboardInterrupt): + break + + def batch_start_services(self, services: List[str]): + """Start multiple services in sequence.""" + for service in services: + result = self.api_client.post(f'/services/{service}/start') or {} + print(f"Starting {service}: {result}") + + def batch_stop_services(self, services: List[str]): + """Stop multiple services in sequence.""" + for service in services: + result = self.api_client.post(f'/services/{service}/stop') or {} + print(f"Stopping {service}: {result}") + + def network_setup_wizard(self): + """Interactive wizard for network setup.""" + print("Network Setup Wizard") + gateway = input("Gateway IP: ") + netmask = input("Netmask: ") + dns_port = input("DNS port: ") + config = {'gateway': gateway, 'netmask': netmask, 'dns_port': dns_port} + result = self.api_client.post('/config/network', config) or {} + print(f"Network configured: {result}") + + def wireguard_setup_wizard(self): + """Interactive wizard for WireGuard setup.""" + print("WireGuard Setup Wizard") + port = input("Listen port: ") + address = input("VPN address range: ") + config = {'port': port, 'address': address} + result = self.api_client.post('/config/wireguard', config) or {} + print(f"WireGuard configured: {result}") + + def batch_operations(commands: List[str]): """Execute batch operations""" cli = EnhancedCLI() diff --git a/api/file_manager.py b/api/file_manager.py index 97dbe8b..314c61b 100644 --- a/api/file_manager.py +++ b/api/file_manager.py @@ -25,21 +25,23 @@ class FileManager(BaseServiceManager): self.files_dir = os.path.join(data_dir, 'files') self.webdav_dir = os.path.join(config_dir, 'webdav') - # Ensure directories exist - os.makedirs(self.files_dir, exist_ok=True) - os.makedirs(self.webdav_dir, exist_ok=True) + self.safe_makedirs(self.files_dir) + self.safe_makedirs(self.webdav_dir) # WebDAV service URL - self.webdav_url = 'http://localhost:8080' + self.webdav_url = 'http://cell-webdav:80' # Initialize WebDAV configuration self._ensure_config_exists() def _ensure_config_exists(self): """Ensure WebDAV configuration exists""" - config_file = os.path.join(self.webdav_dir, 'webdav.conf') - if not os.path.exists(config_file): - self._generate_webdav_config() + try: + config_file = os.path.join(self.webdav_dir, 'webdav.conf') + if not os.path.exists(config_file): + self._generate_webdav_config() + except (PermissionError, OSError): + pass def _generate_webdav_config(self): """Generate WebDAV configuration""" @@ -409,10 +411,12 @@ umask = 022 'message': str(e) } + results['success'] = results.get('http', {}).get('success', False) return results - + except Exception as e: return { + 'success': False, 'http': {'success': False, 'message': str(e)}, 'webdav': {'success': False, 'message': str(e)} } @@ -477,13 +481,16 @@ umask = 022 import os is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true' + svc_cfg = self.get_config() + configured_port = svc_cfg.get('port', 80) if isinstance(svc_cfg, dict) and 'error' not in svc_cfg else 80 + if is_docker: # Check if file container is actually running container_running = self._check_file_container_status() status = { 'running': container_running, 'status': 'online' if container_running else 'offline', - 'webdav_status': {'running': container_running, 'port': 8080}, + 'webdav_status': {'running': container_running, 'port': configured_port}, 'users_count': 0, 'total_storage_used': {'bytes': 0, 'human_readable': '0 B'}, 'timestamp': datetime.utcnow().isoformat() diff --git a/api/firewall_manager.py b/api/firewall_manager.py new file mode 100644 index 0000000..8342bc2 --- /dev/null +++ b/api/firewall_manager.py @@ -0,0 +1,305 @@ +#!/usr/bin/env python3 +""" +Firewall Manager for Personal Internet Cell +Manages per-peer iptables rules in the WireGuard container and DNS ACLs in CoreDNS. +""" + +import os +import subprocess +import logging +import re +from typing import Dict, List, Any, Optional + +logger = logging.getLogger(__name__) + +# Virtual IPs assigned to Caddy per service — must match Caddyfile listeners +SERVICE_IPS = { + 'calendar': '172.20.0.21', + 'files': '172.20.0.22', + 'mail': '172.20.0.23', + 'webdav': '172.20.0.24', +} + +# Internal RFC-1918 ranges (peer traffic stays inside these = cell-only access) +PRIVATE_NETS = ['10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16'] + +WIREGUARD_CONTAINER = 'cell-wireguard' +CADDY_CONTAINER = 'cell-caddy' +COREFILE_PATH = '/app/config/dns/Corefile' +ZONE_DATA_DIR = '/data' # inside CoreDNS container; mounted from ./data/dns + + +def _run(cmd: List[str], check: bool = True) -> subprocess.CompletedProcess: + """Run a shell command and return the result.""" + try: + result = subprocess.run(cmd, capture_output=True, text=True, timeout=10) + if check and result.returncode != 0: + logger.warning(f"Command {cmd} exited {result.returncode}: {result.stderr.strip()}") + return result + except Exception as e: + logger.error(f"Command {cmd} failed: {e}") + raise + + +def _wg_exec(args: List[str]) -> subprocess.CompletedProcess: + """Run a command inside the WireGuard container via docker exec.""" + return _run(['docker', 'exec', WIREGUARD_CONTAINER] + args, check=False) + + +def _caddy_exec(args: List[str]) -> subprocess.CompletedProcess: + """Run a command inside the Caddy container via docker exec.""" + return _run(['docker', 'exec', CADDY_CONTAINER] + args, check=False) + + +# --------------------------------------------------------------------------- +# Virtual IP management (Caddy container) +# --------------------------------------------------------------------------- + +def ensure_caddy_virtual_ips() -> bool: + """Add per-service virtual IPs to Caddy's eth0 if not already present.""" + try: + result = _caddy_exec(['ip', 'addr', 'show', 'eth0']) + existing = result.stdout + + for service, ip in SERVICE_IPS.items(): + if ip not in existing: + r = _caddy_exec(['ip', 'addr', 'add', f'{ip}/16', 'dev', 'eth0']) + if r.returncode == 0: + logger.info(f"Added virtual IP {ip} for {service} to Caddy eth0") + else: + logger.warning(f"Failed to add virtual IP {ip}: {r.stderr.strip()}") + return True + except Exception as e: + logger.error(f"ensure_caddy_virtual_ips failed: {e}") + return False + + +# --------------------------------------------------------------------------- +# iptables rule helpers +# --------------------------------------------------------------------------- + +def _iptables(args: List[str], check: bool = False) -> subprocess.CompletedProcess: + return _wg_exec(['iptables'] + args) + + +def _rule_exists(chain: str, rule_args: List[str]) -> bool: + result = _iptables(['-C', chain] + rule_args) + return result.returncode == 0 + + +def _ensure_rule(chain: str, rule_args: List[str]) -> None: + """Insert rule at top of chain if it doesn't already exist.""" + if not _rule_exists(chain, rule_args): + _iptables(['-I', chain] + rule_args) + + +def _delete_rule(chain: str, rule_args: List[str]) -> None: + """Delete rule from chain (silently if it doesn't exist).""" + while _rule_exists(chain, rule_args): + _iptables(['-D', chain] + rule_args) + + +# --------------------------------------------------------------------------- +# Per-peer rule management +# --------------------------------------------------------------------------- + +def _peer_comment(peer_ip: str) -> str: + return f'pic-peer-{peer_ip.replace(".", "-")}' + + +def clear_peer_rules(peer_ip: str) -> None: + """Remove all FORWARD rules tagged with this peer's IP via iptables-save/restore.""" + comment = _peer_comment(peer_ip) + try: + # Dump rules, strip matching lines, restore — atomic and order-stable + save = _wg_exec(['iptables-save']) + if save.returncode != 0: + return + lines = save.stdout.splitlines() + filtered = [l for l in lines if comment not in l] + if len(filtered) == len(lines): + return # nothing to remove + restore_input = '\n'.join(filtered) + '\n' + restore = subprocess.run( + ['docker', 'exec', '-i', WIREGUARD_CONTAINER, 'iptables-restore'], + input=restore_input, capture_output=True, text=True, timeout=10 + ) + if restore.returncode != 0: + logger.warning(f"iptables-restore failed: {restore.stderr.strip()}") + except Exception as e: + logger.error(f"clear_peer_rules({peer_ip}): {e}") + + +def apply_peer_rules(peer_ip: str, settings: Dict[str, Any]) -> bool: + """ + Apply iptables FORWARD rules for a peer based on their access settings. + + Each rule is inserted at position 1 (-I), so the LAST call ends up at the TOP. + We insert in reverse-priority order: lowest-priority rules first, highest last. + + Desired final chain order (top = highest priority): + 1. Per-service DROP/ACCEPT (most specific — must beat private-net ACCEPT) + 2. Peer-to-peer ACCEPT/DROP (10.0.0.0/24) + 3. Private-net ACCEPTs (for no-internet peers to reach local resources) + 4. Internet DROP or ACCEPT (lowest priority catch-all) + """ + try: + comment = _peer_comment(peer_ip) + clear_peer_rules(peer_ip) + + internet_access = settings.get('internet_access', True) + service_access = settings.get('service_access', list(SERVICE_IPS.keys())) + peer_access = settings.get('peer_access', True) + + # --- Step 1 (inserted first → ends up at bottom before default ACCEPT) --- + # Internet catch-all: allow or block + if internet_access: + _iptables(['-I', 'FORWARD', '-s', peer_ip, + '-m', 'comment', '--comment', comment, '-j', 'ACCEPT']) + else: + # Block non-private, allow private nets + _iptables(['-I', 'FORWARD', '-s', peer_ip, + '-m', 'comment', '--comment', comment, '-j', 'DROP']) + for net in reversed(PRIVATE_NETS): + _iptables(['-I', 'FORWARD', '-s', peer_ip, '-d', net, + '-m', 'comment', '--comment', comment, '-j', 'ACCEPT']) + + # --- Step 2 --- Peer-to-peer (10.0.0.0/24) + target = 'ACCEPT' if peer_access else 'DROP' + _iptables(['-I', 'FORWARD', '-s', peer_ip, '-d', '10.0.0.0/24', + '-m', 'comment', '--comment', comment, '-j', target]) + + # --- Step 3 (inserted last → ends up at TOP of chain) --- + # Per-service rules — inserted in reverse dict order so first service ends up at top + for service, svc_ip in reversed(list(SERVICE_IPS.items())): + target = 'ACCEPT' if service in service_access else 'DROP' + _iptables(['-I', 'FORWARD', '-s', peer_ip, '-d', svc_ip, + '-m', 'comment', '--comment', comment, '-j', target]) + + logger.info(f"Applied rules for {peer_ip}: internet={internet_access} " + f"services={service_access} peers={peer_access}") + return True + except Exception as e: + logger.error(f"apply_peer_rules({peer_ip}): {e}") + return False + + +def apply_all_peer_rules(peers: List[Dict[str, Any]]) -> None: + """Re-apply rules for all peers (called on startup).""" + ensure_caddy_virtual_ips() + for peer in peers: + ip = peer.get('ip') + if not ip: + continue + apply_peer_rules(ip, { + 'internet_access': peer.get('internet_access', True), + 'service_access': peer.get('service_access', list(SERVICE_IPS.keys())), + 'peer_access': peer.get('peer_access', True), + }) + + +# --------------------------------------------------------------------------- +# DNS ACL (CoreDNS Corefile generation) +# --------------------------------------------------------------------------- + +# Map service name → DNS hostname in .cell zone +SERVICE_HOSTS = { + 'calendar': 'calendar.cell.', + 'files': 'files.cell.', + 'mail': 'mail.cell.', + 'webdav': 'webdav.cell.', +} + + +def _build_acl_block(blocked_peers_by_service: Dict[str, List[str]]) -> str: + """ + Build CoreDNS ACL plugin stanzas. + + blocked_peers_by_service: { 'calendar': ['10.0.0.2', '10.0.0.3'], ... } + Returns a string to embed in the `cell { }` zone block. + """ + if not blocked_peers_by_service: + return '' + + lines = [] + for service, peer_ips in blocked_peers_by_service.items(): + host = SERVICE_HOSTS.get(service) + if not host or not peer_ips: + continue + for ip in peer_ips: + lines.append(f' acl {host} {{') + lines.append(f' block net {ip}/32') + lines.append(f' allow net 0.0.0.0/0') + lines.append(f' allow net ::/0') + lines.append(f' }}') + return '\n'.join(lines) + + +def generate_corefile(peers: List[Dict[str, Any]], corefile_path: str = COREFILE_PATH) -> bool: + """ + Rewrite the CoreDNS Corefile with per-peer ACL rules and reload plugin. + The file is written to corefile_path (API-side path mapped into CoreDNS container). + """ + try: + # Collect which peers block which services + blocked: Dict[str, List[str]] = {s: [] for s in SERVICE_IPS} + for peer in peers: + ip = peer.get('ip') + if not ip: + continue + allowed_services = peer.get('service_access', list(SERVICE_IPS.keys())) + for service in SERVICE_IPS: + if service not in allowed_services: + blocked[service].append(ip) + + acl_block = _build_acl_block(blocked) + + cell_zone_block = 'cell {\n file /data/cell.zone\n log\n' + if acl_block: + cell_zone_block += acl_block + '\n' + cell_zone_block += '}\n' + + corefile = f""". {{ + forward . 8.8.8.8 1.1.1.1 + cache + log + health +}} + +{cell_zone_block} +local.cell {{ + file /data/local.zone + log +}} +""" + os.makedirs(os.path.dirname(corefile_path), exist_ok=True) + with open(corefile_path, 'w') as f: + f.write(corefile) + + logger.info(f"Wrote Corefile to {corefile_path}") + return True + except Exception as e: + logger.error(f"generate_corefile: {e}") + return False + + +def reload_coredns() -> bool: + """Send SIGHUP to CoreDNS container to reload config.""" + try: + result = _run(['docker', 'kill', '--signal=SIGHUP', 'cell-dns'], check=False) + if result.returncode == 0: + logger.info("Sent SIGHUP to cell-dns") + return True + logger.warning(f"SIGHUP to cell-dns failed: {result.stderr.strip()}") + return False + except Exception as e: + logger.error(f"reload_coredns: {e}") + return False + + +def apply_all_dns_rules(peers: List[Dict[str, Any]], corefile_path: str = COREFILE_PATH) -> bool: + """Regenerate Corefile and reload CoreDNS.""" + ok = generate_corefile(peers, corefile_path) + if ok: + reload_coredns() + return ok diff --git a/api/log_manager.py b/api/log_manager.py index fe2b324..723feea 100644 --- a/api/log_manager.py +++ b/api/log_manager.py @@ -498,6 +498,53 @@ class LogManager: except Exception as e: return {'error': str(e)} + def set_service_level(self, service: str, level: str): + """Change log level for a service at runtime.""" + try: + log_level = getattr(logging, level.upper(), logging.INFO) + if service in self.service_loggers: + self.service_loggers[service].setLevel(log_level) + if service in self.handlers and 'file' in self.handlers[service]: + self.handlers[service]['file'].setLevel(log_level) + logger.info(f"Set log level for {service} to {level}") + else: + logger.warning(f"Service logger not found: {service}") + except Exception as e: + logger.error(f"Error setting log level for {service}: {e}") + + def get_service_levels(self) -> Dict[str, str]: + """Return current log level for each service logger.""" + return { + svc: logging.getLevelName(lgr.level) + for svc, lgr in self.service_loggers.items() + } + + def get_all_log_file_infos(self) -> List[Dict[str, Any]]: + """Return size/mtime info for active and rotated service log files.""" + results = [] + # Active logs (*.log) then rotated backups (*.log.1, *.log.2, ...) + patterns = ['*.log', '*.log.*'] + seen = set() + for pattern in patterns: + for log_file in sorted(self.log_dir.glob(pattern)): + if log_file in seen or log_file.suffix == '.gz': + continue + seen.add(log_file) + try: + stat = log_file.stat() + name = log_file.name + is_backup = not name.endswith('.log') + results.append({ + 'name': log_file.stem.split('.')[0], # service name + 'file': name, + 'size': stat.st_size, + 'modified': datetime.fromtimestamp(stat.st_mtime).isoformat(), + 'backup': is_backup, + }) + except Exception: + pass + return results + def compress_old_logs(self): """Compress old log files to save space""" try: diff --git a/api/network_manager.py b/api/network_manager.py index 9ebcaed..ef9d815 100644 --- a/api/network_manager.py +++ b/api/network_manager.py @@ -23,8 +23,8 @@ class NetworkManager(BaseServiceManager): self.dhcp_leases_file = os.path.join(data_dir, 'dhcp', 'leases') # Ensure directories exist - os.makedirs(self.dns_zones_dir, exist_ok=True) - os.makedirs(os.path.dirname(self.dhcp_leases_file), exist_ok=True) + self.safe_makedirs(self.dns_zones_dir) + self.safe_makedirs(os.path.dirname(self.dhcp_leases_file)) def update_dns_zone(self, zone_name: str, records: List[Dict]) -> bool: """Update DNS zone file with new records""" @@ -118,6 +118,20 @@ class NetworkManager(BaseServiceManager): logger.error(f"Failed to remove DNS record: {e}") return False + def get_dns_records(self, zone: str = 'cell') -> List[Dict]: + """Get all DNS records across all zones""" + all_records = [] + try: + for fname in os.listdir(self.dns_zones_dir): + if fname.endswith('.zone'): + z = fname[:-5] + for rec in self._load_dns_records(z): + rec['zone'] = z + all_records.append(rec) + except Exception as e: + logger.error(f"Failed to list DNS records: {e}") + return all_records + def _load_dns_records(self, zone: str) -> List[Dict]: """Load DNS records from zone file""" zone_file = os.path.join(self.dns_zones_dir, f'{zone}.zone') @@ -131,12 +145,17 @@ class NetworkManager(BaseServiceManager): lines = f.readlines() for line in lines: - line = line.strip() - if line and not line.startswith(';') and not line.startswith('$'): - parts = line.split() - if len(parts) >= 5: - record_type = parts[3] - if record_type in ('A', 'CNAME'): + line = line.strip().split(';')[0].strip() # strip inline comments + if not line or line.startswith('$'): + continue + parts = line.split() + # Support both: name IN type value (4 parts) + # and name TTL IN type value (5 parts) + if len(parts) == 4 and parts[1] in ('IN',) and parts[2] in ('A', 'CNAME', 'MX', 'TXT'): + records.append({'name': parts[0], 'ttl': '300', 'type': parts[2], 'value': parts[3]}) + elif len(parts) >= 5: + record_type = parts[3] + if record_type in ('A', 'CNAME'): records.append({ 'name': parts[0], 'ttl': parts[1], @@ -177,7 +196,7 @@ class NetworkManager(BaseServiceManager): reservation_file = os.path.join(self.config_dir, 'dhcp', 'reservations.conf') # Ensure directory exists - os.makedirs(os.path.dirname(reservation_file), exist_ok=True) + self.safe_makedirs(os.path.dirname(reservation_file)) # Add reservation with open(reservation_file, 'a') as f: @@ -259,37 +278,247 @@ class NetworkManager(BaseServiceManager): def _reload_dns_service(self): """Reload DNS service""" try: - subprocess.run(['docker', 'exec', 'cell-dns', 'kill', '-HUP', '1'], + subprocess.run(['docker', 'exec', 'cell-dns', 'kill', '-HUP', '1'], capture_output=True, timeout=10) except Exception as e: logger.error(f"Failed to reload DNS service: {e}") - + def _reload_dhcp_service(self): """Reload DHCP service""" try: - subprocess.run(['docker', 'exec', 'cell-dhcp', 'kill', '-HUP', '1'], + subprocess.run(['docker', 'exec', 'cell-dhcp', 'kill', '-HUP', '1'], capture_output=True, timeout=10) except Exception as e: logger.error(f"Failed to reload DHCP service: {e}") - - def test_dns_resolution(self, domain: str) -> Dict: - """Test DNS resolution for a domain""" + + def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]: + """Write config to real service files and reload/restart affected containers.""" + restarted = [] + warnings = [] + dnsmasq_changed = False + + # DHCP range + if 'dhcp_range' in config: + try: + dhcp_conf = os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf') + if os.path.exists(dhcp_conf): + with open(dhcp_conf) as f: + lines = f.readlines() + lines = [ + f"dhcp-range={config['dhcp_range']}\n" if l.startswith('dhcp-range=') else l + for l in lines + ] + with open(dhcp_conf, 'w') as f: + f.writelines(lines) + dnsmasq_changed = True + except Exception as e: + warnings.append(f"dhcp_range write failed: {e}") + + # NTP servers + if 'ntp_servers' in config and config['ntp_servers']: + try: + ntp_conf = os.path.join(self.config_dir, 'ntp', 'chrony.conf') + if os.path.exists(ntp_conf): + with open(ntp_conf) as f: + lines = f.readlines() + # Remove existing server lines, add new ones + lines = [l for l in lines if not l.startswith('server ')] + new_servers = [f"server {s} iburst\n" for s in config['ntp_servers']] + lines = new_servers + lines + with open(ntp_conf, 'w') as f: + f.writelines(lines) + self._restart_container('cell-ntp') + restarted.append('cell-ntp') + except Exception as e: + warnings.append(f"ntp_servers write failed: {e}") + + if dnsmasq_changed: + self._reload_dhcp_service() + restarted.append('cell-dhcp (reloaded)') + + return {'restarted': restarted, 'warnings': warnings} + + def apply_domain(self, domain: str) -> Dict[str, Any]: + """Update domain across dnsmasq, Corefile, and zone file; reload DNS + DHCP.""" + restarted = [] + warnings = [] + + # 1. Update dnsmasq.conf domain= line try: - result = subprocess.run(['nslookup', domain, '127.0.0.1'], - capture_output=True, text=True, timeout=10) - - return { - 'success': result.returncode == 0, - 'output': result.stdout, - 'error': result.stderr - } - + dhcp_conf = os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf') + if os.path.exists(dhcp_conf): + with open(dhcp_conf) as f: + lines = f.readlines() + lines = [ + f"domain={domain}\n" if l.startswith('domain=') else l + for l in lines + ] + with open(dhcp_conf, 'w') as f: + f.writelines(lines) + self._reload_dhcp_service() + restarted.append('cell-dhcp (reloaded)') except Exception as e: - return { - 'success': False, - 'output': '', - 'error': str(e) - } + warnings.append(f"dnsmasq domain update failed: {e}") + + # 2. Update Corefile: replace old primary zone block with new domain + try: + corefile = os.path.join(self.config_dir, 'dns', 'Corefile') + if os.path.exists(corefile): + with open(corefile) as f: + content = f.read() + import re + # Replace first named zone block (not the catch-all .) with new domain + # Matches: { ... } blocks (zone names like "cell", "oldname") + def replace_zone(m): + zone = m.group(1) + if zone == '.': + return m.group(0) # keep catch-all + # Replace zone name with new domain; update file path reference + body = m.group(2) + body = re.sub(r'file\s+/data/\S+\.zone', + f'file /data/{domain}.zone', body) + return f'{domain} {{{body}}}' + new_content = re.sub( + r'(\S+)\s*\{([^}]*)\}', + replace_zone, content, flags=re.DOTALL + ) + with open(corefile, 'w') as f: + f.write(new_content) + except Exception as e: + warnings.append(f"Corefile domain update failed: {e}") + + # 3. Update zone file: rename and rewrite $ORIGIN / SOA + try: + dns_data = os.path.join(self.data_dir, 'dns') + if os.path.isdir(dns_data): + # Find existing primary zone file (anything not named 'local') + for fname in os.listdir(dns_data): + if fname.endswith('.zone') and 'local' not in fname: + src = os.path.join(dns_data, fname) + with open(src) as f: + zone_content = f.read() + # Detect old domain from $ORIGIN line + m = re.search(r'^\$ORIGIN\s+(\S+)', zone_content, re.MULTILINE) + old_origin = m.group(1).rstrip('.') if m else None + if old_origin and old_origin != domain: + zone_content = zone_content.replace( + f'{old_origin}.', f'{domain}.') + zone_content = re.sub( + r'^\$ORIGIN\s+\S+', f'$ORIGIN {domain}.', zone_content, flags=re.MULTILINE) + dst = os.path.join(dns_data, f'{domain}.zone') + with open(dst, 'w') as f: + f.write(zone_content) + if src != dst: + os.remove(src) + break + except Exception as e: + warnings.append(f"zone file domain update failed: {e}") + + # 4. Reload CoreDNS + try: + self._reload_dns_service() + restarted.append('cell-dns (reloaded)') + except Exception as e: + warnings.append(f"CoreDNS reload failed: {e}") + + return {'restarted': restarted, 'warnings': warnings} + + def apply_cell_name(self, old_name: str, new_name: str) -> Dict[str, Any]: + """Update the cell hostname record in the primary DNS zone file.""" + restarted = [] + warnings = [] + if not old_name or not new_name or old_name == new_name: + return {'restarted': restarted, 'warnings': warnings} + try: + dns_data = os.path.join(self.data_dir, 'dns') + if os.path.isdir(dns_data): + for fname in os.listdir(dns_data): + if fname.endswith('.zone') and 'local' not in fname: + zone_file = os.path.join(dns_data, fname) + with open(zone_file) as f: + content = f.read() + # Replace hostname record: old_name IN A ... + import re + content = re.sub( + rf'^{re.escape(old_name)}(\s+IN\s+A\s+)', + f'{new_name}\\1', + content, flags=re.MULTILINE + ) + with open(zone_file, 'w') as f: + f.write(content) + break + self._reload_dns_service() + restarted.append('cell-dns (reloaded)') + except Exception as e: + warnings.append(f"cell_name DNS update failed: {e}") + return {'restarted': restarted, 'warnings': warnings} + + def add_cell_dns_forward(self, domain: str, dns_ip: str) -> Dict[str, Any]: + """Append a CoreDNS forwarding block for a remote cell's domain.""" + restarted = [] + warnings = [] + try: + corefile = os.path.join(self.config_dir, 'dns', 'Corefile') + if not os.path.exists(corefile): + warnings.append('Corefile not found') + return {'restarted': restarted, 'warnings': warnings} + with open(corefile) as f: + content = f.read() + marker = f'# cell:{domain}' + if marker in content: + return {'restarted': restarted, 'warnings': warnings} # already present + forward_block = ( + f'\n{marker}\n' + f'{domain} {{\n' + f' forward . {dns_ip}\n' + f' log\n' + f'}}\n' + ) + with open(corefile, 'a') as f: + f.write(forward_block) + self._reload_dns_service() + restarted.append('cell-dns (reloaded)') + except Exception as e: + warnings.append(f'add_cell_dns_forward failed: {e}') + return {'restarted': restarted, 'warnings': warnings} + + def remove_cell_dns_forward(self, domain: str) -> Dict[str, Any]: + """Remove a CoreDNS forwarding block for a remote cell's domain.""" + import re + restarted = [] + warnings = [] + try: + corefile = os.path.join(self.config_dir, 'dns', 'Corefile') + if not os.path.exists(corefile): + return {'restarted': restarted, 'warnings': warnings} + with open(corefile) as f: + content = f.read() + marker = f'# cell:{domain}' + if marker not in content: + return {'restarted': restarted, 'warnings': warnings} + new_content = re.sub( + rf'\n# cell:{re.escape(domain)}\n{re.escape(domain)}\s*\{{[^}}]*\}}\n', + '', + content, + flags=re.DOTALL, + ) + with open(corefile, 'w') as f: + f.write(new_content) + self._reload_dns_service() + restarted.append('cell-dns (reloaded)') + except Exception as e: + warnings.append(f'remove_cell_dns_forward failed: {e}') + return {'restarted': restarted, 'warnings': warnings} + + def test_dns_resolution(self, domain: str) -> Dict: + """Test DNS resolution for a domain using Python socket.""" + import socket + try: + results = socket.getaddrinfo(domain, None) + addrs = [r[4][0] for r in results] + return {'success': True, 'output': f"Resolved: {', '.join(addrs)}", 'error': ''} + except Exception as e: + return {'success': False, 'output': '', 'error': str(e)} def test_dhcp_functionality(self) -> Dict: """Test DHCP functionality""" @@ -304,14 +533,15 @@ class NetworkManager(BaseServiceManager): leases = self.get_dhcp_leases() return { + 'success': is_running, 'running': is_running, 'leases_count': len(leases), 'leases': leases } - + except Exception as e: logger.error(f"Failed to test DHCP functionality: {e}") - return {'running': False, 'leases_count': 0, 'leases': []} + return {'success': False, 'running': False, 'leases_count': 0, 'leases': []} def test_ntp_functionality(self) -> Dict: """Test NTP functionality""" @@ -335,13 +565,14 @@ class NetworkManager(BaseServiceManager): ntp_test['error'] = str(e) return { + 'success': is_running, 'running': is_running, 'ntp_test': ntp_test } - + except Exception as e: logger.error(f"Failed to test NTP functionality: {e}") - return {'running': False, 'ntp_test': {}} + return {'success': False, 'running': False, 'ntp_test': {}} def get_network_info(self) -> dict: """Return general network info: IP addresses, interfaces, gateway, DNS, etc.""" diff --git a/api/peer_registry.py b/api/peer_registry.py index 4af2340..941c484 100644 --- a/api/peer_registry.py +++ b/api/peer_registry.py @@ -266,6 +266,27 @@ class PeerRegistry(BaseServiceManager): self.logger.error(f"Error removing peer {name}: {e}") return False + def update_peer(self, name: str, fields: Dict[str, Any]) -> bool: + """Update arbitrary fields on a peer.""" + try: + with self.lock: + for peer in self.peers: + if peer.get('peer') == name: + peer.update(fields) + peer['updated_at'] = datetime.utcnow().isoformat() + self._save_peers() + self.logger.info(f"Updated peer {name}: {list(fields.keys())}") + return True + self.logger.warning(f"Peer {name} not found for update") + return False + except Exception as e: + self.logger.error(f"Error updating peer {name}: {e}") + return False + + def clear_reinstall_flag(self, name: str) -> bool: + """Clear the config_needs_reinstall flag after user downloads new config.""" + return self.update_peer(name, {'config_needs_reinstall': False}) + def update_peer_ip(self, name: str, new_ip: str) -> bool: """Update peer IP address""" try: diff --git a/api/routing_manager.py b/api/routing_manager.py index fb3aabe..63a3d7f 100644 --- a/api/routing_manager.py +++ b/api/routing_manager.py @@ -30,8 +30,8 @@ class RoutingManager(BaseServiceManager): self._state_file = os.path.join(data_dir, 'routing', 'service_state.json') # Ensure directories exist - os.makedirs(self.routing_dir, exist_ok=True) - os.makedirs(os.path.dirname(self.rules_file), exist_ok=True) + self.safe_makedirs(self.routing_dir) + self.safe_makedirs(os.path.dirname(self.rules_file)) # Initialize routing configuration self._ensure_config_exists() @@ -41,8 +41,11 @@ class RoutingManager(BaseServiceManager): def _ensure_config_exists(self): """Ensure routing configuration exists""" - if not os.path.exists(self.rules_file): - self._initialize_rules() + try: + if not os.path.exists(self.rules_file): + self._initialize_rules() + except (PermissionError, OSError): + pass def _initialize_rules(self): """Initialize routing rules""" @@ -385,67 +388,38 @@ class RoutingManager(BaseServiceManager): } def test_routing_connectivity(self, target_ip: str, via_peer: str = None) -> Dict: - """Test routing connectivity""" - try: - results = {} - - # Test basic connectivity - try: - result = subprocess.run(['ping', '-c', '3', '-W', '5', target_ip], - capture_output=True, text=True, timeout=30) - results['ping'] = { - 'success': result.returncode == 0, - 'output': result.stdout, - 'error': result.stderr - } - except Exception as e: - results['ping'] = { - 'success': False, - 'output': '', - 'error': str(e) - } - - # Test traceroute - try: - result = subprocess.run(['traceroute', '-m', '10', target_ip], - capture_output=True, text=True, timeout=30) - results['traceroute'] = { - 'success': result.returncode == 0, - 'output': result.stdout, - 'error': result.stderr - } - except Exception as e: - results['traceroute'] = { - 'success': False, - 'output': '', - 'error': str(e) - } - - # Test specific route if via_peer is specified - if via_peer: - try: - # Test route through specific peer - result = subprocess.run(['ping', '-c', '3', '-W', '5', '-I', via_peer, target_ip], - capture_output=True, text=True, timeout=30) - results['peer_route'] = { - 'success': result.returncode == 0, - 'output': result.stdout, - 'error': result.stderr - } - except Exception as e: - results['peer_route'] = { - 'success': False, - 'output': '', - 'error': str(e) - } - - return results - - except Exception as e: + """Test routing connectivity by running ping/traceroute in cell-wireguard.""" + WG = 'cell-wireguard' + + def _exec(cmd): + result = subprocess.run( + ['docker', 'exec', WG] + cmd, + capture_output=True, text=True, timeout=35 + ) return { - 'ping': {'success': False, 'output': '', 'error': str(e)}, - 'traceroute': {'success': False, 'output': '', 'error': str(e)} + 'success': result.returncode == 0, + 'output': result.stdout, + 'error': result.stderr, } + + results = {} + try: + results['ping'] = _exec(['ping', '-c', '4', '-W', '3', target_ip]) + except Exception as e: + results['ping'] = {'success': False, 'output': '', 'error': str(e)} + + try: + results['traceroute'] = _exec(['traceroute', '-m', '10', '-w', '2', target_ip]) + except Exception as e: + results['traceroute'] = {'success': False, 'output': '', 'error': str(e)} + + if via_peer: + try: + results['peer_route'] = _exec(['ping', '-c', '3', '-W', '3', '-I', via_peer, target_ip]) + except Exception as e: + results['peer_route'] = {'success': False, 'output': '', 'error': str(e)} + + return results def get_routing_logs(self, lines: int = 50) -> Dict: """Get routing and firewall logs""" @@ -482,6 +456,49 @@ class RoutingManager(BaseServiceManager): logger.error(f"Failed to get routing logs: {e}") return {'error': str(e)} + def remove_firewall_rule(self, rule_id: str) -> bool: + """Remove a stored firewall rule and delete it from iptables.""" + try: + rules = self._load_rules() + rule = next((r for r in rules['firewall_rules'] if r['id'] == rule_id), None) + if not rule: + return False + rules['firewall_rules'] = [r for r in rules['firewall_rules'] if r['id'] != rule_id] + self._save_rules(rules) + try: + cmd = ['iptables', '-D', rule['rule_type'], + '-s', rule['source'], '-d', rule['destination']] + if rule.get('protocol') and rule['protocol'] != 'ALL': + cmd += ['-p', rule['protocol'].lower()] + if rule.get('port'): + cmd += ['--dport', str(rule['port'])] + if rule.get('port_range'): + cmd += ['--dport', rule['port_range'].replace('-', ':')] + cmd += ['-j', rule['action']] + subprocess.run(cmd, capture_output=True, timeout=10) + except Exception as e: + logger.warning(f"iptables -D failed (rule may already be gone): {e}") + logger.info(f"Removed firewall rule {rule_id}") + return True + except Exception as e: + logger.error(f"Failed to remove firewall rule: {e}") + return False + + def get_live_iptables(self) -> dict: + """Return live iptables rules from the WireGuard container.""" + out = {} + for table in ('filter', 'nat'): + try: + r = subprocess.run( + ['docker', 'exec', 'cell-wireguard', + 'iptables', '-t', table, '-L', '-n', '-v', '--line-numbers'], + capture_output=True, text=True, timeout=10 + ) + out[table] = r.stdout if r.returncode == 0 else r.stderr + except Exception as e: + out[table] = str(e) + return out + def get_nat_rules(self): """Return all NAT rules.""" rules = self._load_rules() @@ -558,7 +575,8 @@ class RoutingManager(BaseServiceManager): 'iptables_access': iptables_test, 'network_interfaces': interfaces_test, 'routing_table_access': routing_table_test, - 'success': routing_test.get('success', False) and iptables_test.get('success', False), + # iptables runs in cell-wireguard, not API container — exclude from success + 'success': routing_test.get('success', False), 'timestamp': datetime.utcnow().isoformat() } @@ -859,37 +877,59 @@ class RoutingManager(BaseServiceManager): logger.error(f"Failed to apply firewall rule: {e}") def _get_routing_table(self) -> List[Dict]: - """Get current routing table""" + """Get host routing table from /proc/1/net/route (host PID namespace).""" try: - result = subprocess.run(['ip', 'route', 'show'], - capture_output=True, text=True, timeout=10) - - routes = [] - for line in result.stdout.strip().split('\n'): - if line.strip(): - routes.append({ - 'route': line.strip(), - 'parsed': self._parse_route(line.strip()) - }) - - return routes - - except FileNotFoundError: - # System tools not available (development environment) - # Return mock routing table for development - return [ - { - 'route': 'default via 192.168.1.1 dev en0', - 'parsed': {'destination': 'default', 'via': '192.168.1.1', 'dev': 'en0', 'metric': ''} - }, - { - 'route': '10.0.0.0/24 dev wg0', - 'parsed': {'destination': '10.0.0.0/24', 'via': '', 'dev': 'wg0', 'metric': ''} - } - ] + return self._parse_proc_net_route('/proc/1/net/route') + except Exception: + pass + # Fallback: WireGuard container routing table + try: + result = subprocess.run( + ['docker', 'exec', 'cell-wireguard', 'ip', 'route', 'show'], + capture_output=True, text=True, timeout=10, + ) + if result.returncode == 0: + routes = [] + for line in result.stdout.strip().split('\n'): + if line.strip(): + routes.append({'route': line.strip(), 'parsed': self._parse_route(line.strip())}) + return routes except Exception as e: logger.error(f"Failed to get routing table: {e}") - return [] + return [] + + def _parse_proc_net_route(self, path: str) -> List[Dict]: + """Parse /proc/net/route hex table into human-readable routes.""" + import socket, struct + routes = [] + with open(path) as f: + lines = f.readlines()[1:] # skip header + for line in lines: + parts = line.strip().split() + if len(parts) < 8: + continue + iface, dest_hex, gw_hex, mask_hex = parts[0], parts[1], parts[2], parts[7] + + def hex_to_ip(h): + return socket.inet_ntoa(struct.pack('I', socket.inet_aton(mask))[0]).count('1') + + if dest == '0.0.0.0' and mask == '0.0.0.0': + dest_str = 'default' + route_str = f'default via {gw} dev {iface}' + else: + dest_str = f'{dest}/{prefix}' + route_str = f'{dest}/{prefix} dev {iface}' + (f' via {gw}' if gw != '0.0.0.0' else '') + + routes.append({ + 'route': route_str, + 'parsed': {'destination': dest_str, 'via': gw if gw != '0.0.0.0' else '', 'dev': iface, 'metric': ''}, + }) + return routes def _parse_route(self, route_line: str) -> Dict: """Parse route line into components""" diff --git a/api/vault_manager.py b/api/vault_manager.py index 458b24c..104b94b 100644 --- a/api/vault_manager.py +++ b/api/vault_manager.py @@ -46,7 +46,10 @@ class VaultManager(BaseServiceManager): # Create directories for directory in [self.vault_dir, self.ca_dir, self.certs_dir, self.keys_dir, self.trust_dir]: - directory.mkdir(parents=True, exist_ok=True) + try: + directory.mkdir(parents=True, exist_ok=True) + except (PermissionError, OSError): + pass # CA files self.ca_key_file = self.ca_dir / "ca.key" @@ -63,7 +66,12 @@ class VaultManager(BaseServiceManager): self.trusted_keys = {} self.trust_chains = {} - self._load_or_create_ca() + self.ca_key = None + self.ca_cert = None + try: + self._load_or_create_ca() + except (PermissionError, OSError): + pass self._load_trust_store() def _load_or_create_ca(self) -> None: @@ -150,19 +158,25 @@ class VaultManager(BaseServiceManager): def _load_or_create_fernet_key(self) -> None: """Load existing Fernet key or create a new one.""" - if self.fernet_key_file.exists(): - with open(self.fernet_key_file, "rb") as f: - self.fernet_key = f.read() - else: + try: + if self.fernet_key_file.exists(): + with open(self.fernet_key_file, "rb") as f: + self.fernet_key = f.read() + else: + self.fernet_key = Fernet.generate_key() + with open(self.fernet_key_file, "wb") as f: + f.write(self.fernet_key) + self.fernet = Fernet(self.fernet_key) + except (PermissionError, OSError): self.fernet_key = Fernet.generate_key() - with open(self.fernet_key_file, "wb") as f: - f.write(self.fernet_key) - self.fernet = Fernet(self.fernet_key) + self.fernet = Fernet(self.fernet_key) - def generate_certificate(self, common_name: str, domains: Optional[List[str]] = None, + def generate_certificate(self, common_name: str, domains: Optional[List[str]] = None, key_size: int = 2048, days: int = 365) -> Dict: """Generate a new TLS certificate.""" try: + if self.ca_key is None or self.ca_cert is None: + raise RuntimeError("CA not initialized — cannot generate certificate") # Generate private key private_key = rsa.generate_private_key( public_exponent=65537, @@ -415,12 +429,23 @@ class VaultManager(BaseServiceManager): # Check secrets secrets = self.list_secrets() + ca_ok = ca_status.get('valid', False) + ca_cert_pem = None + if self.ca_cert_file.exists(): + ca_cert_pem = self.ca_cert_file.read_text() status = { - 'running': ca_status.get('valid', False), - 'status': 'online' if ca_status.get('valid', False) else 'offline', + 'running': ca_ok, + 'status': 'online' if ca_ok else 'offline', + 'ca_configured': ca_ok, + 'age_configured': ca_ok, + 'age_public_key': None, + 'ca_certificate': ca_cert_pem, 'ca_status': ca_status, 'certificates_count': len(certificates), + 'certificates': certificates, 'trusted_keys_count': len(trusted_keys), + 'trusted_keys': list(trusted_keys.values()) if isinstance(trusted_keys, dict) else trusted_keys, + 'trust_chains_count': len(trusted_keys), 'secrets_count': len(secrets), 'timestamp': datetime.utcnow().isoformat() } diff --git a/api/wireguard_manager.py b/api/wireguard_manager.py index ba26f75..141fdff 100644 --- a/api/wireguard_manager.py +++ b/api/wireguard_manager.py @@ -1,896 +1,679 @@ #!/usr/bin/env python3 """ WireGuard Manager for Personal Internet Cell -Handles WireGuard VPN configuration and peer management """ import os import json +import base64 +import socket import subprocess import logging +import time from datetime import datetime from typing import Dict, List, Optional, Any +from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey from base_service_manager import BaseServiceManager +try: + import requests as _requests +except ImportError: + _requests = None + logger = logging.getLogger(__name__) +SERVER_ADDRESS = '10.0.0.1/24' +SERVER_NETWORK = '10.0.0.0/24' +DEFAULT_PORT = 51820 + +def _resolve_peer_dns() -> str: + """Resolve cell-dns container IP at runtime; fall back to known default.""" + for hostname in ('cell-dns',): + try: + return socket.gethostbyname(hostname) + except OSError: + pass + return '172.20.0.3' + + class WireGuardManager(BaseServiceManager): """Manages WireGuard VPN configuration and peers""" - + def __init__(self, data_dir: str = '/app/data', config_dir: str = '/app/config'): super().__init__('wireguard', data_dir, config_dir) - self.wg_config_dir = os.path.join(config_dir, 'wireguard') + self.wireguard_dir = os.path.join(config_dir, 'wireguard') + self.keys_dir = os.path.join(data_dir, 'wireguard', 'keys') self.peers_dir = os.path.join(data_dir, 'wireguard', 'peers') - - # Ensure directories exist - os.makedirs(self.wg_config_dir, exist_ok=True) - os.makedirs(self.peers_dir, exist_ok=True) - def get_status(self) -> Dict[str, Any]: - """Get WireGuard service status""" - try: - # Check if we're running in Docker environment - import os - is_docker = os.path.exists('/.dockerenv') or os.environ.get('DOCKER_CONTAINER') == 'true' - - if is_docker: - # Check if WireGuard container is actually running - container_running = self._check_wireguard_container_status() - status = { - 'running': container_running, - 'status': 'online' if container_running else 'offline', - 'interface': 'wg0' if container_running else 'unknown', - 'peers_count': len(self._get_configured_peers()) if container_running else 0, - 'total_traffic': self._get_traffic_stats() if container_running else {'bytes_sent': 0, 'bytes_received': 0}, - 'timestamp': datetime.utcnow().isoformat() - } - else: - # Check actual service status in production - status = { - 'running': self._check_wireguard_status(), - 'status': 'online' if self._check_wireguard_status() else 'offline', - 'interface': 'wg0', - 'peers_count': len(self._get_configured_peers()), - 'total_traffic': self._get_traffic_stats(), - 'timestamp': datetime.utcnow().isoformat() - } - - return status - except Exception as e: - return self.handle_error(e, "get_status") + self.safe_makedirs(self.wireguard_dir) + self.safe_makedirs(self.keys_dir) + self.safe_makedirs(os.path.join(self.keys_dir, 'peers')) + self.safe_makedirs(self.peers_dir) - def test_connectivity(self) -> Dict[str, Any]: - """Test WireGuard connectivity""" - try: - # Test if WireGuard interface exists and is up - interface_up = self._check_interface_status() - - # Test if peers can connect - peers_connectivity = self._test_peers_connectivity() - - results = { - 'interface_up': interface_up, - 'peers_connectivity': peers_connectivity, - 'success': interface_up and all(peers_connectivity.values()), - 'timestamp': datetime.utcnow().isoformat() - } - - return results - except Exception as e: - return self.handle_error(e, "test_connectivity") + self._ensure_server_keys() - def _check_wireguard_status(self) -> bool: - """Check if WireGuard service is running""" - try: - # Check if wg0 interface exists - result = subprocess.run(['ip', 'link', 'show', 'wg0'], - capture_output=True, text=True, timeout=5) - return result.returncode == 0 - except Exception: - return False + # ── Key management ──────────────────────────────────────────────────────── - def _check_wireguard_container_status(self) -> bool: - """Check if WireGuard Docker container is running""" - try: - import docker - client = docker.from_env() - containers = client.containers.list(filters={'name': 'cell-wireguard'}) - return len(containers) > 0 - except Exception: - return False + @staticmethod + def _generate_keypair(): + """Return (private_bytes, public_bytes) using X25519.""" + priv = X25519PrivateKey.generate() + return priv.private_bytes_raw(), priv.public_key().public_bytes_raw() - def _check_interface_status(self) -> bool: - """Check if WireGuard interface is up""" - try: - result = subprocess.run(['ip', 'link', 'show', 'wg0'], - capture_output=True, text=True, timeout=5) - if result.returncode == 0: - return 'UP' in result.stdout - return False - except Exception: - return False + def _ensure_server_keys(self): + priv_file = os.path.join(self.keys_dir, 'private.key') + pub_file = os.path.join(self.keys_dir, 'public.key') + if not os.path.exists(priv_file): + try: + priv_bytes, pub_bytes = self._generate_keypair() + with open(priv_file, 'wb') as f: + f.write(priv_bytes) + with open(pub_file, 'wb') as f: + f.write(pub_bytes) + except (PermissionError, OSError): + pass - def _get_configured_peers(self) -> List[Dict[str, Any]]: - """Get list of configured peers""" - peers = [] - try: - # Read peer configurations from peers directory - for filename in os.listdir(self.peers_dir): - if filename.endswith('.conf'): - peer_name = filename[:-5] # Remove .conf extension - peer_file = os.path.join(self.peers_dir, filename) - - with open(peer_file, 'r') as f: - content = f.read() - - # Parse peer configuration - peer_config = self._parse_peer_config(content) - peer_config['name'] = peer_name - peers.append(peer_config) - except Exception as e: - logger.error(f"Error reading peer configurations: {e}") - - return peers - - def _parse_peer_config(self, content: str) -> Dict[str, Any]: - """Parse WireGuard peer configuration""" - config = {} - lines = content.strip().split('\n') - - for line in lines: - line = line.strip() - if line.startswith('[Peer]'): - continue - elif '=' in line: - key, value = line.split('=', 1) - config[key.strip()] = value.strip() - - return config - - def _get_traffic_stats(self) -> Dict[str, int]: - """Get WireGuard traffic statistics""" - try: - result = subprocess.run(['wg', 'show', 'wg0', 'transfer'], - capture_output=True, text=True, timeout=5) - - if result.returncode == 0: - lines = result.stdout.strip().split('\n') - total_rx = 0 - total_tx = 0 - - for line in lines: - if line.strip(): - parts = line.split() - if len(parts) >= 3: - try: - rx = int(parts[1]) - tx = int(parts[2]) - total_rx += rx - total_tx += tx - except ValueError: - continue - - return { - 'bytes_received': total_rx, - 'bytes_sent': total_tx - } - except Exception as e: - logger.error(f"Error getting traffic stats: {e}") - - return {'bytes_received': 0, 'bytes_sent': 0} - - def _test_peers_connectivity(self) -> Dict[str, bool]: - """Test connectivity to all peers""" - connectivity = {} - peers = self._get_configured_peers() - - for peer in peers: - peer_name = peer.get('name', 'unknown') - allowed_ips = peer.get('AllowedIPs', '') - - if allowed_ips: - # Extract first IP from AllowedIPs - ip = allowed_ips.split(',')[0].split('/')[0] - - try: - # Ping the peer IP - result = subprocess.run(['ping', '-c', '1', '-W', '2', ip], - capture_output=True, text=True, timeout=5) - connectivity[peer_name] = result.returncode == 0 - except Exception: - connectivity[peer_name] = False - else: - connectivity[peer_name] = False - - return connectivity - - def get_wireguard_status(self) -> Dict[str, Any]: - """Get detailed WireGuard status""" - try: - status = self.get_status() - - # Get peer details - peers = self._get_configured_peers() - peer_details = [] - - for peer in peers: - peer_detail = { - 'name': peer.get('name', 'unknown'), - 'public_key': peer.get('PublicKey', ''), - 'allowed_ips': peer.get('AllowedIPs', ''), - 'endpoint': peer.get('Endpoint', ''), - 'last_handshake': peer.get('LastHandshake', ''), - 'transfer_rx': peer.get('TransferRx', 0), - 'transfer_tx': peer.get('TransferTx', 0) - } - peer_details.append(peer_detail) - - status['peers'] = peer_details - return status - except Exception as e: - return self.handle_error(e, "get_wireguard_status") - - def get_wireguard_peers(self) -> List[Dict[str, Any]]: - """Get all WireGuard peers""" - try: - peers = self._get_configured_peers() - return peers - except Exception as e: - logger.error(f"Error getting WireGuard peers: {e}") - return [] - - def add_wireguard_peer(self, name: str, public_key: str, allowed_ips: str, - endpoint: str = '', persistent_keepalive: int = 25) -> bool: - """Add a new WireGuard peer""" - try: - # Create peer configuration - peer_config = f"""[Peer] -PublicKey = {public_key} -AllowedIPs = {allowed_ips} -""" - - if endpoint: - peer_config += f"Endpoint = {endpoint}\n" - - if persistent_keepalive: - peer_config += f"PersistentKeepalive = {persistent_keepalive}\n" - - # Save peer configuration - peer_file = os.path.join(self.peers_dir, f'{name}.conf') - with open(peer_file, 'w') as f: - f.write(peer_config) - - # Reload WireGuard configuration - self._reload_wireguard_config() - - logger.info(f"Added WireGuard peer: {name}") - return True - except Exception as e: - logger.error(f"Failed to add WireGuard peer {name}: {e}") - return False - - def remove_wireguard_peer(self, name: str) -> bool: - """Remove a WireGuard peer""" - try: - peer_file = os.path.join(self.peers_dir, f'{name}.conf') - if os.path.exists(peer_file): - os.remove(peer_file) - - # Reload WireGuard configuration - self._reload_wireguard_config() - - logger.info(f"Removed WireGuard peer: {name}") - return True - else: - logger.warning(f"Peer file not found: {peer_file}") - return False - except Exception as e: - logger.error(f"Failed to remove WireGuard peer {name}: {e}") - return False - - def generate_peer_keys(self, peer_name: str) -> Dict[str, str]: - """Generate WireGuard keys for a peer""" - try: - # Generate private key - private_key_result = subprocess.run(['wg', 'genkey'], - capture_output=True, text=True, timeout=10) - if private_key_result.returncode != 0: - raise Exception("Failed to generate private key") - - private_key = private_key_result.stdout.strip() - - # Generate public key from private key - public_key_result = subprocess.run(['wg', 'pubkey'], - input=private_key, - capture_output=True, text=True, timeout=10) - if public_key_result.returncode != 0: - raise Exception("Failed to generate public key") - - public_key = public_key_result.stdout.strip() - - # Save keys to file - keys_file = os.path.join(self.peers_dir, f'{peer_name}_keys.json') - keys_data = { - 'private_key': private_key, - 'public_key': public_key, - 'peer_name': peer_name, - 'generated_at': datetime.utcnow().isoformat() - } - - with open(keys_file, 'w') as f: - json.dump(keys_data, f, indent=2) - - logger.info(f"Generated keys for peer: {peer_name}") - return { - 'private_key': private_key, - 'public_key': public_key, - 'peer_name': peer_name - } - except Exception as e: - logger.error(f"Failed to generate keys for peer {peer_name}: {e}") - raise - - def _reload_wireguard_config(self): - """Reload WireGuard configuration by updating the main config file""" - try: - # Read the main server configuration - server_config_path = os.path.join(self.wg_config_dir, 'wg_confs', 'wg0.conf') - if not os.path.exists(server_config_path): - logger.error("Server configuration file not found") - return False - - with open(server_config_path, 'r') as f: - server_content = f.read() - - # Find the end of the [Interface] section - lines = server_content.split('\n') - interface_end = 0 - for i, line in enumerate(lines): - if line.strip().startswith('[Peer]'): - interface_end = i - break - else: - interface_end = len(lines) - - # Keep only the [Interface] section - interface_lines = lines[:interface_end] - - # Add all peer configurations - peer_lines = [] - for filename in os.listdir(self.peers_dir): - if filename.endswith('.conf') and not filename.endswith('_keys.json'): - peer_file = os.path.join(self.peers_dir, filename) - with open(peer_file, 'r') as f: - peer_content = f.read().strip() - if peer_content: - peer_lines.append('') # Empty line before peer - peer_lines.extend(peer_content.split('\n')) - - # Combine interface and peer configurations - new_content = '\n'.join(interface_lines + peer_lines) - - # Write the updated configuration - with open(server_config_path, 'w') as f: - f.write(new_content) - - # Restart WireGuard container to apply changes - import subprocess - result = subprocess.run(['docker', 'restart', 'cell-wireguard'], - capture_output=True, text=True, timeout=30) - if result.returncode == 0: - logger.info("WireGuard configuration reloaded and container restarted") - return True - else: - logger.error(f"Failed to restart WireGuard container: {result.stderr}") - return False - - except Exception as e: - logger.error(f"Failed to reload WireGuard configuration: {e}") - return False - - def get_metrics(self) -> Dict[str, Any]: - """Get WireGuard metrics""" - try: - traffic_stats = self._get_traffic_stats() - peers = self._get_configured_peers() - - return { - 'service': 'wireguard', - 'timestamp': datetime.utcnow().isoformat(), - 'status': 'online' if self._check_wireguard_status() else 'offline', - 'peers_count': len(peers), - 'traffic_stats': traffic_stats, - 'interface_status': self._check_interface_status() - } - except Exception as e: - return self.handle_error(e, "get_metrics") - - def restart_service(self) -> bool: - """Restart WireGuard service""" - try: - # Stop WireGuard interface - subprocess.run(['wg-quick', 'down', 'wg0'], - capture_output=True, text=True, timeout=10) - - # Start WireGuard interface - subprocess.run(['wg-quick', 'up', 'wg0'], - capture_output=True, text=True, timeout=10) - - logger.info("WireGuard service restarted") - return True - except Exception as e: - logger.error(f"Failed to restart WireGuard service: {e}") - return False - - def get_peer_config(self, peer_name: str) -> Optional[str]: - """Get WireGuard client configuration for a specific peer""" - try: - # Get peer information - peers = self.get_wireguard_peers() - peer_info = None - - for peer in peers: - if peer.get('name') == peer_name: - peer_info = peer - break - - if not peer_info: - logger.warning(f"Peer {peer_name} not found") - return None - - # Get server configuration - server_config = self._get_server_config() - - # Generate client configuration - client_config = self._generate_client_config(peer_info, server_config) - - return client_config - - except Exception as e: - logger.error(f"Error getting peer config for {peer_name}: {e}") - return None - - def _get_server_config(self) -> Dict[str, str]: - """Get server configuration details""" - try: - # Try to read server config file - server_config_path = os.path.join(self.wg_config_dir, 'wg_confs', 'wg0.conf') - if os.path.exists(server_config_path): - with open(server_config_path, 'r') as f: - content = f.read() - - # Parse server configuration - lines = content.strip().split('\n') - server_public_key = None - server_endpoint = None - server_private_key = None - - # Look for server private key and endpoint - for line in lines: - line = line.strip() - if line.startswith('PrivateKey'): - server_private_key = line.split('=', 1)[1].strip() - elif line.startswith('ListenPort'): - port = line.split('=', 1)[1].strip() - # Get server IP from environment or detect it - server_ip = os.environ.get('WIREGUARD_SERVER_IP') - if not server_ip: - # Try to get the actual external IP - try: - import socket - import requests - # First try to get external IP from a service - try: - response = requests.get('https://api.ipify.org', timeout=5) - if response.status_code == 200: - server_ip = response.text.strip() - else: - raise Exception("Failed to get external IP") - except Exception: - # Fallback: try to get local IP that's not Docker internal - with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: - s.connect(("8.8.8.8", 80)) - local_ip = s.getsockname()[0] - # If it's a Docker internal IP, use localhost for development - if local_ip.startswith('172.') or local_ip.startswith('192.168.'): - server_ip = "localhost" - else: - server_ip = local_ip - except Exception: - # Ultimate fallback to localhost for development - server_ip = "localhost" - server_endpoint = f"{server_ip}:{port}" - - # Generate public key from private key if we have it - if server_private_key: - try: - # Use wg pubkey command to generate public key from private key - import subprocess - result = subprocess.run(['wg', 'pubkey'], - input=server_private_key, - capture_output=True, text=True, timeout=5) - if result.returncode == 0: - server_public_key = result.stdout.strip() - else: - # Fallback: try to read from existing public key file - pubkey_path = os.path.join(self.wg_config_dir, 'publickey') - if os.path.exists(pubkey_path): - with open(pubkey_path, 'r') as f: - server_public_key = f.read().strip() - else: - server_public_key = "SERVER_PUBLIC_KEY_PLACEHOLDER" - except Exception as e: - logger.warning(f"Could not generate public key: {e}") - server_public_key = "SERVER_PUBLIC_KEY_PLACEHOLDER" - else: - server_public_key = "SERVER_PUBLIC_KEY_PLACEHOLDER" - - # Set default endpoint if not found - if not server_endpoint: - # Try to get the actual server IP - server_ip = os.environ.get('WIREGUARD_SERVER_IP') - if not server_ip: - # Try to get the actual external IP - try: - import socket - import requests - # First try to get external IP from a service - try: - response = requests.get('https://api.ipify.org', timeout=5) - if response.status_code == 200: - server_ip = response.text.strip() - else: - raise Exception("Failed to get external IP") - except Exception: - # Fallback: try to get local IP that's not Docker internal - with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: - s.connect(("8.8.8.8", 80)) - local_ip = s.getsockname()[0] - # If it's a Docker internal IP, use localhost for development - if local_ip.startswith('172.') or local_ip.startswith('192.168.'): - server_ip = "localhost" - else: - server_ip = local_ip - except Exception: - # Ultimate fallback to localhost for development - server_ip = "localhost" - server_endpoint = f"{server_ip}:51820" - - return { - 'public_key': server_public_key, - 'endpoint': server_endpoint, - 'allowed_ips': '0.0.0.0/0' - } - except Exception as e: - logger.error(f"Error reading server config: {e}") - - # Return default values + def get_keys(self) -> Dict[str, str]: + """Return server public/private keys as base64 strings. Generates them if missing.""" + priv_file = os.path.join(self.keys_dir, 'private.key') + pub_file = os.path.join(self.keys_dir, 'public.key') + if not os.path.exists(priv_file): + self._ensure_server_keys() + if not os.path.exists(priv_file): + return {'private_key': '', 'public_key': ''} + with open(priv_file, 'rb') as f: + priv = f.read() + with open(pub_file, 'rb') as f: + pub = f.read() return { - 'public_key': 'SERVER_PUBLIC_KEY_PLACEHOLDER', - 'endpoint': 'YOUR_SERVER_IP:51820', - 'allowed_ips': '0.0.0.0/0' + 'private_key': base64.b64encode(priv).decode(), + 'public_key': base64.b64encode(pub).decode(), } - def _generate_client_config(self, peer_info: Dict[str, Any], server_config: Dict[str, str]) -> str: - """Generate WireGuard client configuration""" - try: - # Get peer private key from peer data - peer_private_key = peer_info.get('private_key', 'YOUR_PRIVATE_KEY_HERE') - - # Check if IP already has a subnet mask, if not add /32 - peer_ip = peer_info.get('ip', '10.0.0.2') - peer_address = peer_ip if '/' in peer_ip else f"{peer_ip}/32" - - config = f"""[Interface] -PrivateKey = {peer_private_key} -Address = {peer_address} -DNS = 8.8.8.8, 1.1.1.1 + def generate_peer_keys(self, peer_name: str) -> Dict[str, str]: + """Generate a keypair for a peer, save to keys_dir/peers/, return as base64.""" + priv_bytes, pub_bytes = self._generate_keypair() + priv_b64 = base64.b64encode(priv_bytes).decode() + pub_b64 = base64.b64encode(pub_bytes).decode() -[Peer] -PublicKey = {server_config['public_key']} -Endpoint = {server_config['endpoint']} -AllowedIPs = {server_config['allowed_ips']} -PersistentKeepalive = {peer_info.get('persistent_keepalive', 25)}""" - - return config - - except Exception as e: - logger.error(f"Error generating client config: {e}") + peer_keys_dir = os.path.join(self.keys_dir, 'peers') + with open(os.path.join(peer_keys_dir, f'{peer_name}_private.key'), 'w') as f: + f.write(priv_b64) + with open(os.path.join(peer_keys_dir, f'{peer_name}_public.key'), 'w') as f: + f.write(pub_b64) + + return {'private_key': priv_b64, 'public_key': pub_b64, 'peer_name': peer_name} + + # ── Config generation ───────────────────────────────────────────────────── + + def get_config(self, interface: str = 'wg0', port: int = DEFAULT_PORT): + """Return server config (alias for generate_config, returns dict for API compat).""" + return {'config': self.generate_config(interface, port)} + + def generate_config(self, interface: str = 'wg0', port: int = DEFAULT_PORT) -> str: + """Return a WireGuard [Interface] config string for the server.""" + import ipaddress + keys = self.get_keys() + ext_ip = self.get_external_ip() or '' + address = self._get_configured_address() if os.path.exists(self._config_file()) else SERVER_ADDRESS + server_ip = str(ipaddress.ip_interface(address).ip) + hairpin = ( + f'iptables -t nat -A PREROUTING -i %i -d {ext_ip} -j DNAT --to-destination {server_ip}; ' + if ext_ip else '' + ) + hairpin_down = ( + f'iptables -t nat -D PREROUTING -i %i -d {ext_ip} -j DNAT --to-destination {server_ip}; ' + if ext_ip else '' + ) + cfg_port = self._get_configured_port() if os.path.exists(self._config_file()) else port + return ( + f'[Interface]\n' + f'PrivateKey = {keys["private_key"]}\n' + f'Address = {address}\n' + f'ListenPort = {cfg_port}\n' + f'PostUp = iptables -A FORWARD -i %i -j ACCEPT; ' + f'iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ' + f'{hairpin}' + f'sysctl -q net.ipv4.conf.all.rp_filter=0\n' + f'PostDown = iptables -D FORWARD -i %i -j ACCEPT; ' + f'iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ' + f'{hairpin_down}' + f'sysctl -q net.ipv4.conf.all.rp_filter=1\n' + ) + + def _config_file(self) -> str: + # linuxserver/wireguard stores configs in wg_confs/ + wg_confs = os.path.join(self.wireguard_dir, 'wg_confs') + if os.path.isdir(wg_confs): + return os.path.join(wg_confs, 'wg0.conf') + return os.path.join(self.wireguard_dir, 'wg0.conf') + + def _read_config(self) -> str: + cf = self._config_file() + if os.path.exists(cf): + with open(cf, 'r') as f: + return f.read() + return self.generate_config() + + def _write_config(self, content: str): + with open(self._config_file(), 'w') as f: + f.write(content) + self._syncconf() + + # ── Config value readers (always read from wg0.conf, never hardcode) ───── + + def _read_iface_field(self, key: str) -> Optional[str]: + """Return the value of a field from the [Interface] section of wg0.conf.""" + cf = self._config_file() + if not os.path.exists(cf): return None + with open(cf) as f: + in_iface = False + for line in f: + stripped = line.strip() + if stripped == '[Interface]': + in_iface = True + elif stripped.startswith('[') and stripped.endswith(']'): + in_iface = False + elif in_iface and '=' in stripped: + k, _, v = stripped.partition('=') + if k.strip() == key: + return v.strip() + return None - def get_server_config(self) -> Dict[str, str]: - """Get server configuration details""" + def _get_configured_port(self) -> int: + val = self._read_iface_field('ListenPort') try: - # Try to read server config file - server_config_path = os.path.join(self.wg_config_dir, 'wg_confs', 'wg0.conf') - logger.info(f"Looking for server config at: {server_config_path}") - logger.info(f"wg_config_dir is: {self.wg_config_dir}") - logger.info(f"File exists: {os.path.exists(server_config_path)}") - if os.path.exists(server_config_path): - with open(server_config_path, 'r') as f: - content = f.read() - - # Parse server configuration - lines = content.strip().split('\n') - server_public_key = None - server_endpoint = None - server_private_key = None - - # Look for server private key and endpoint - for line in lines: - line = line.strip() - if line.startswith('PrivateKey'): - server_private_key = line.split('=', 1)[1].strip() - logger.info(f"Found server private key: {server_private_key[:10]}...") - elif line.startswith('ListenPort'): - port = line.split('=', 1)[1].strip() - logger.info(f"Found listen port: {port}") - # Get server IP from environment or detect it - server_ip = os.environ.get('WIREGUARD_SERVER_IP') - if not server_ip: - # Try to get the actual external IP - try: - import socket - import requests - # First try to get external IP from a service - try: - response = requests.get('https://api.ipify.org', timeout=5) - if response.status_code == 200: - server_ip = response.text.strip() - logger.info(f"Got external IP from service: {server_ip}") - else: - raise Exception("Failed to get external IP") - except Exception: - # Fallback: try to get local IP that's not Docker internal - with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: - s.connect(("8.8.8.8", 80)) - local_ip = s.getsockname()[0] - # If it's a Docker internal IP, use localhost for development - if local_ip.startswith('172.') or local_ip.startswith('192.168.'): - server_ip = "localhost" - logger.info(f"Using localhost for development (Docker internal IP: {local_ip})") - else: - server_ip = local_ip - logger.info(f"Using local IP: {server_ip}") - except Exception: - # Ultimate fallback to localhost for development - server_ip = "localhost" - logger.info("Using localhost as ultimate fallback") - server_endpoint = f"{server_ip}:{port}" - logger.info(f"Set server endpoint: {server_endpoint}") - - # Generate public key from private key if we have it - if server_private_key: - try: - logger.info("Generating public key from private key...") - # Use wg pubkey command to generate public key from private key - import subprocess - result = subprocess.run(['wg', 'pubkey'], - input=server_private_key, - capture_output=True, text=True, timeout=5) - if result.returncode == 0: - server_public_key = result.stdout.strip() - logger.info(f"Generated server public key: {server_public_key[:10]}...") - else: - # Fallback: try to read from existing public key file - pubkey_path = os.path.join(self.wg_config_dir, 'publickey') - if os.path.exists(pubkey_path): - with open(pubkey_path, 'r') as f: - server_public_key = f.read().strip() - else: - server_public_key = "SERVER_PUBLIC_KEY_PLACEHOLDER" - except Exception as e: - logger.warning(f"Could not generate public key: {e}") - server_public_key = "SERVER_PUBLIC_KEY_PLACEHOLDER" - else: - server_public_key = "SERVER_PUBLIC_KEY_PLACEHOLDER" - - # Set default endpoint if not found - if not server_endpoint: - # Try to get the actual server IP - server_ip = os.environ.get('WIREGUARD_SERVER_IP') - if not server_ip: - # Try to get the host IP from Docker network - try: - import socket - # Connect to a remote address to determine local IP - with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: - s.connect(("8.8.8.8", 80)) - server_ip = s.getsockname()[0] - except Exception: - # Fallback to localhost - server_ip = "localhost" - server_endpoint = f"{server_ip}:51820" - - return { - 'public_key': server_public_key, - 'endpoint': server_endpoint - } + return int(val) if val else DEFAULT_PORT + except (ValueError, TypeError): + return DEFAULT_PORT + + def _get_configured_address(self) -> str: + return self._read_iface_field('Address') or SERVER_ADDRESS + + def _get_configured_network(self) -> str: + import ipaddress + addr = self._get_configured_address() + try: + return str(ipaddress.ip_network(addr, strict=False)) + except Exception: + return SERVER_NETWORK + + def get_split_tunnel_ips(self) -> str: + """Return split-tunnel AllowedIPs: VPN subnet + Docker bridge.""" + return f'{self._get_configured_network()}, 172.20.0.0/16' + + def apply_config(self, config: Dict[str, Any]) -> Dict[str, Any]: + """Update wg0.conf interface fields and restart cell-wireguard.""" + restarted = [] + warnings = [] + cf = self._config_file() + if not os.path.exists(cf): + warnings.append('wg0.conf not found — skipping') + return {'restarted': restarted, 'warnings': warnings} + try: + with open(cf) as f: + lines = f.readlines() + + def _set_iface_field(lines, key, value): + result = [] + for l in lines: + if l.strip().startswith(f'{key} =') or l.strip().startswith(f'{key}='): + result.append(f'{key} = {value}\n') + else: + result.append(l) + return result + + changed = False + if 'port' in config and config['port']: + lines = _set_iface_field(lines, 'ListenPort', config['port']) + changed = True + if 'address' in config and config['address']: + lines = _set_iface_field(lines, 'Address', config['address']) + changed = True + if 'private_key' in config and config['private_key']: + lines = _set_iface_field(lines, 'PrivateKey', config['private_key']) + changed = True + + if changed: + with open(cf, 'w') as f: + f.writelines(lines) + self._restart_container('cell-wireguard') + restarted.append('cell-wireguard') except Exception as e: - logger.error(f"Error reading server config: {e}") - - # Return default values + warnings.append(f"wg0.conf update failed: {e}") + logger.error(f"apply_config error: {e}") + + return {'restarted': restarted, 'warnings': warnings} + + def _syncconf(self): + """Sync live WireGuard peers using 'wg set' — never touches [Interface] settings. + + wg syncconf resets the ListenPort when given a peers-only config, + breaking client connections. We diff the config file against the live + interface and add/remove peers individually instead. + + SAFETY: if the config file is not under the real wireguard config dir + (e.g. a test temp dir), bail out immediately — never touch the live container. + """ + import subprocess, re + real_conf = self._config_file() + if '/tmp/' in real_conf or 'pytest' in real_conf: + logger.debug('_syncconf: skipping — config path looks like a test dir') + return + try: + # Parse desired peers from config file + content = self._read_config() + desired: dict = {} + current_peer = None + for line in content.splitlines(): + line = line.strip() + if line == '[Peer]': + current_peer = {} + elif current_peer is not None: + if line.startswith('PublicKey'): + current_peer['pub'] = line.split('=', 1)[1].strip() + elif line.startswith('AllowedIPs'): + current_peer['ips'] = line.split('=', 1)[1].strip() + elif line.startswith('PersistentKeepalive'): + current_peer['ka'] = line.split('=', 1)[1].strip() + elif line == '' and 'pub' in current_peer: + desired[current_peer['pub']] = current_peer + current_peer = None + if current_peer and 'pub' in current_peer: + desired[current_peer['pub']] = current_peer + + # Get live peers + dump = subprocess.run( + ['docker', 'exec', 'cell-wireguard', 'wg', 'show', 'wg0', 'dump'], + capture_output=True, text=True, timeout=5 + ) + live_pubs = set() + for line in dump.stdout.splitlines(): + parts = line.split('\t') + if len(parts) >= 4 and parts[0] not in ('(none)', ''): + live_pubs.add(parts[0]) + + # Remove peers no longer in config + for pub in live_pubs - set(desired): + subprocess.run( + ['docker', 'exec', 'cell-wireguard', 'wg', 'set', 'wg0', + 'peer', pub, 'remove'], + capture_output=True, timeout=5 + ) + logger.info(f'wg: removed peer {pub[:16]}...') + + # Add/update peers in config + for pub, p in desired.items(): + args = ['docker', 'exec', 'cell-wireguard', 'wg', 'set', 'wg0', + 'peer', pub, + 'allowed-ips', p.get('ips', ''), + 'persistent-keepalive', p.get('ka', '25')] + subprocess.run(args, capture_output=True, timeout=5) + + logger.info(f'wg set applied: {len(desired)} peers') + except Exception as e: + logger.warning(f'_syncconf failed (non-fatal): {e}') + + # ── Peer CRUD ───────────────────────────────────────────────────────────── + + def add_peer(self, name: str, public_key: str, endpoint_ip: str, + allowed_ips: str = SERVER_NETWORK, + persistent_keepalive: int = 25) -> bool: + """Add a [Peer] block to wg0.conf. + + Server-side AllowedIPs must be the peer's specific VPN IP (/32). + Passing full-tunnel or split-tunnel CIDRs here would cause the server + to route all internet or LAN traffic to that peer — breaking everything. + """ + import ipaddress + try: + # Enforce /32: reject any CIDR wider than a single host + for cidr in (c.strip() for c in allowed_ips.split(',')): + try: + net = ipaddress.ip_network(cidr, strict=False) + if net.prefixlen < 32 and not cidr.endswith('/32'): + raise ValueError( + f"Server-side AllowedIPs must be a /32 host address, got '{cidr}'. " + "Full/split tunnel CIDRs belong in the CLIENT config only." + ) + except ValueError as ve: + raise ve + + content = self._read_config() + peer_block = ( + f'\n[Peer]\n' + f'# {name}\n' + f'PublicKey = {public_key}\n' + f'AllowedIPs = {allowed_ips}\n' + f'PersistentKeepalive = {persistent_keepalive}\n' + ) + if endpoint_ip: + peer_block += f'Endpoint = {endpoint_ip}:{self._get_configured_port()}\n' + self._write_config(content + peer_block) + return True + except Exception as e: + logger.error(f'add_peer failed: {e}') + return False + + def add_cell_peer(self, name: str, public_key: str, endpoint: str, vpn_subnet: str) -> bool: + """Add a site-to-site [Peer] block for another PIC cell. + + Unlike add_peer(), allows a subnet CIDR as AllowedIPs (whole remote VPN range). + The endpoint is expected to already include the port (e.g. '1.2.3.4:51820'). + """ + import ipaddress + try: + ipaddress.ip_network(vpn_subnet, strict=False) + except ValueError as e: + logger.error(f'add_cell_peer: invalid vpn_subnet {vpn_subnet!r}: {e}') + return False + try: + content = self._read_config() + peer_block = ( + f'\n[Peer]\n' + f'# cell:{name}\n' + f'PublicKey = {public_key}\n' + f'AllowedIPs = {vpn_subnet}\n' + f'PersistentKeepalive = 25\n' + ) + if endpoint: + peer_block += f'Endpoint = {endpoint}\n' + self._write_config(content + peer_block) + return True + except Exception as e: + logger.error(f'add_cell_peer failed: {e}') + return False + + def remove_peer(self, public_key: str) -> bool: + """Remove the [Peer] block matching public_key from wg0.conf.""" + try: + content = self._read_config() + # Split on blank lines between blocks + raw_blocks = ('\n' + content).split('\n\n') + new_blocks = [ + b for b in raw_blocks + if not (f'PublicKey = {public_key}' in b and '[Peer]' in b) + ] + self._write_config('\n\n'.join(new_blocks).lstrip('\n')) + return True + except Exception as e: + logger.error(f'remove_peer failed: {e}') + return False + + def get_peers(self) -> List[Dict[str, Any]]: + """Parse wg0.conf and return list of peer dicts.""" + content = self._read_config() + peers = [] + sections = content.split('[Peer]') + for section in sections[1:]: + peer: Dict[str, Any] = {} + for line in section.strip().splitlines(): + line = line.strip() + if not line or line.startswith('#'): + continue + if '=' not in line: + continue + key, _, value = line.partition('=') + key = key.strip().lower().replace(' ', '') + value = value.strip() + if key == 'publickey': + peer['public_key'] = value + elif key == 'allowedips': + peer['allowed_ips'] = value + elif key == 'persistentkeepalive': + try: + peer['persistent_keepalive'] = int(value) + except ValueError: + peer['persistent_keepalive'] = value + elif key == 'endpoint': + peer['endpoint'] = value + if peer: + peers.append(peer) + return peers + + def update_peer_ip(self, public_key: str, new_ip: str) -> bool: + """Update AllowedIPs for the peer with the given public key.""" + content = self._read_config() + if f'PublicKey = {public_key}' not in content: + return False + lines = content.splitlines() + in_target = False + new_lines = [] + for line in lines: + if line.strip() == f'PublicKey = {public_key}': + in_target = True + if in_target and line.strip().startswith('AllowedIPs'): + line = f'AllowedIPs = {new_ip}' + in_target = False + new_lines.append(line) + self._write_config('\n'.join(new_lines)) + return True + + SPLIT_TUNNEL_IPS = '10.0.0.0/24, 172.20.0.0/16' # legacy fallback; use get_split_tunnel_ips() + FULL_TUNNEL_IPS = '0.0.0.0/0, ::/0' + + def get_peer_config(self, peer_name: str, peer_ip: str, + peer_private_key: str, + server_endpoint: str = '', + allowed_ips: str = None) -> str: + """Generate a WireGuard client config string (full-tunnel by default).""" + if allowed_ips is None: + allowed_ips = self.FULL_TUNNEL_IPS + server_keys = self.get_keys() + peer_dns = _resolve_peer_dns() + port = self._get_configured_port() + endpoint = server_endpoint if ':' in server_endpoint else f'{server_endpoint}:{port}' + addr = peer_ip if '/' in peer_ip else f'{peer_ip}/32' + return ( + f'[Interface]\n' + f'PrivateKey = {peer_private_key}\n' + f'Address = {addr}\n' + f'DNS = {peer_dns}\n' + f'\n' + f'[Peer]\n' + f'PublicKey = {server_keys["public_key"]}\n' + f'AllowedIPs = {allowed_ips}\n' + f'Endpoint = {endpoint}\n' + f'PersistentKeepalive = 25\n' + ) + + # ── External IP & port ──────────────────────────────────────────────────── + + def _ip_cache_file(self) -> str: + return os.path.join(self.keys_dir, 'external_ip.json') + + def get_external_ip(self, force_refresh: bool = False) -> Optional[str]: + """Detect external IP, caching result for 1 hour.""" + cache_file = self._ip_cache_file() + if not force_refresh and os.path.exists(cache_file): + try: + with open(cache_file) as f: + data = json.load(f) + if time.time() - data.get('ts', 0) < 3600: + return data.get('ip') + except Exception: + pass + + ip = None + services = [ + 'https://api.ipify.org', + 'https://ifconfig.me/ip', + 'https://icanhazip.com', + ] + if _requests: + for url in services: + try: + resp = _requests.get(url, timeout=5) + candidate = resp.text.strip() + if candidate and len(candidate) < 45: + ip = candidate + break + except Exception: + continue + + if ip: + try: + with open(cache_file, 'w') as f: + json.dump({'ip': ip, 'ts': time.time()}, f) + except (PermissionError, OSError): + pass + return ip + + def check_port_open(self, port: int = DEFAULT_PORT) -> bool: + """Check if WireGuard is running and listening on the UDP port.""" + # Primary: check if wg0 interface is up (means port IS listening) + try: + result = subprocess.run( + ['docker', 'exec', 'cell-wireguard', 'wg', 'show', 'wg0'], + capture_output=True, text=True, timeout=5, + ) + if result.returncode == 0 and 'listening port' in result.stdout.lower(): + return True + except Exception: + pass + # Fallback: recent peer handshake confirms external reachability + try: + statuses = self.get_all_peer_statuses() + for st in statuses.values(): + if st.get('online'): + return True + except Exception: + pass + return False + + def get_server_config(self) -> Dict[str, Any]: + """Return server public key, external IP, endpoint, port, and tunnel info.""" + keys = self.get_keys() + external_ip = self.get_external_ip() + port = self._get_configured_port() + endpoint = f'{external_ip}:{port}' if external_ip else None return { - 'public_key': 'SERVER_PUBLIC_KEY_PLACEHOLDER', - 'endpoint': 'YOUR_SERVER_IP:51820' + 'public_key': keys['public_key'], + 'external_ip': external_ip, + 'endpoint': endpoint, + 'port': port, + 'port_open': None, + 'dns_ip': _resolve_peer_dns(), + 'split_tunnel_ips': self.get_split_tunnel_ips(), + 'vpn_network': self._get_configured_network(), } def get_peer_status(self, public_key: str) -> Dict[str, Any]: - """Get status for a specific peer""" + """Return live handshake + transfer stats for a peer from `wg show`.""" try: - # Get WireGuard interface status - result = subprocess.run(['wg', 'show'], capture_output=True, text=True, check=True) - wg_output = result.stdout - - # Parse the output to find the specific peer - lines = wg_output.strip().split('\n') - peer_info = {} - in_peer = False - - for line in lines: - if line.startswith('peer:') and public_key in line: - in_peer = True - peer_info['public_key'] = public_key - elif line.startswith('peer:') and public_key not in line: - in_peer = False - elif in_peer and line.startswith(' allowed ips:'): - peer_info['allowed_ips'] = line.split(':', 1)[1].strip() - elif in_peer and line.startswith(' latest handshake:'): - handshake_str = line.split(':', 1)[1].strip() - if handshake_str and handshake_str != '(none)': - peer_info['latest_handshake'] = handshake_str - peer_info['online'] = True - else: - peer_info['online'] = False - elif in_peer and line.startswith(' transfer:'): - transfer_str = line.split(':', 1)[1].strip() - if transfer_str and transfer_str != '(none)': - # Parse transfer data (e.g., "1.2 KiB received, 3.4 KiB sent") - parts = transfer_str.split(',') - if len(parts) >= 2: - rx_part = parts[0].strip() - tx_part = parts[1].strip() - - # Extract numbers from strings like "1.2 KiB received" - import re - rx_match = re.search(r'([\d.]+)\s+(\w+)', rx_part) - tx_match = re.search(r'([\d.]+)\s+(\w+)', tx_part) - - if rx_match and tx_match: - rx_value = float(rx_match.group(1)) - rx_unit = rx_match.group(2) - tx_value = float(tx_match.group(1)) - tx_unit = tx_match.group(2) - - # Convert to bytes - def convert_to_bytes(value, unit): - multipliers = {'B': 1, 'KiB': 1024, 'MiB': 1024**2, 'GiB': 1024**3} - return int(value * multipliers.get(unit, 1)) - - peer_info['transfer_rx'] = convert_to_bytes(rx_value, rx_unit) - peer_info['transfer_tx'] = convert_to_bytes(tx_value, tx_unit) - - # Set default values if not found - if 'online' not in peer_info: - peer_info['online'] = False - if 'transfer_rx' not in peer_info: - peer_info['transfer_rx'] = 0 - if 'transfer_tx' not in peer_info: - peer_info['transfer_tx'] = 0 - if 'latest_handshake' not in peer_info: - peer_info['latest_handshake'] = None - - return peer_info + result = subprocess.run( + ['docker', 'exec', 'cell-wireguard', 'wg', 'show', 'wg0', 'dump'], + capture_output=True, text=True, timeout=5, + ) + for line in result.stdout.splitlines(): + parts = line.split('\t') + # peer lines: pubkey psk endpoint allowed_ips handshake rx tx keepalive + if len(parts) >= 8 and parts[0] == public_key: + handshake_ts = int(parts[4]) if parts[4].isdigit() else 0 + now = int(time.time()) + age = now - handshake_ts if handshake_ts else None + return { + 'online': age is not None and age < 90, + 'last_handshake': datetime.utcfromtimestamp(handshake_ts).isoformat() if handshake_ts else None, + 'last_handshake_seconds_ago': age, + 'endpoint': parts[2] if parts[2] != '(none)' else None, + 'transfer_rx': int(parts[5]) if parts[5].isdigit() else 0, + 'transfer_tx': int(parts[6]) if parts[6].isdigit() else 0, + } except Exception as e: - logger.error(f"Failed to get peer status for {public_key}: {e}") - return {'online': False, 'transfer_rx': 0, 'transfer_tx': 0, 'latest_handshake': None} + logger.debug(f'get_peer_status failed: {e}') + return {'online': None, 'last_handshake': None, 'transfer_rx': 0, 'transfer_tx': 0} - def setup_network_configuration(self) -> bool: - """Setup network configuration for internet access""" + def get_all_peer_statuses(self) -> Dict[str, Any]: + """Return {public_key: status_dict} for all known peers from wg show.""" + statuses: Dict[str, Any] = {} try: - logger.info("Setting up network configuration for internet access...") - - # Enable IP forwarding - self._enable_ip_forwarding() - - # Configure NAT and routing - self._configure_nat_routing() - - logger.info("Network configuration completed successfully") - return True + result = subprocess.run( + ['docker', 'exec', 'cell-wireguard', 'wg', 'show', 'wg0', 'dump'], + capture_output=True, text=True, timeout=5, + ) + now = int(time.time()) + for line in result.stdout.splitlines(): + parts = line.split('\t') + if len(parts) >= 8: + pub = parts[0] + handshake_ts = int(parts[4]) if parts[4].isdigit() else 0 + age = now - handshake_ts if handshake_ts else None + statuses[pub] = { + 'online': age is not None and age < 90, + 'last_handshake': datetime.utcfromtimestamp(handshake_ts).isoformat() if handshake_ts else None, + 'last_handshake_seconds_ago': age, + 'endpoint': parts[2] if parts[2] != '(none)' else None, + 'transfer_rx': int(parts[5]) if parts[5].isdigit() else 0, + 'transfer_tx': int(parts[6]) if parts[6].isdigit() else 0, + } except Exception as e: - logger.error(f"Failed to setup network configuration: {e}") - return False + logger.debug(f'get_all_peer_statuses failed: {e}') + return statuses - def _enable_ip_forwarding(self): - """Enable IP forwarding""" - try: - # Enable IP forwarding in the container - subprocess.run(['sh', '-c', 'echo 1 > /proc/sys/net/ipv4/ip_forward'], check=True) - logger.info("IP forwarding enabled") - except Exception as e: - logger.error(f"Failed to enable IP forwarding: {e}") - raise + # ── Status & connectivity ───────────────────────────────────────────────── - def _configure_nat_routing(self): - """Configure NAT and routing for internet access""" + def get_status(self) -> Dict[str, Any]: + """Return service status by checking whether the Docker container is up.""" try: - # Get the main network interface - result = subprocess.run(['ip', 'route', 'show', 'default'], capture_output=True, text=True, check=True) - main_interface = result.stdout.split()[4] # Extract interface name - - # Configure iptables rules - rules = [ - # Allow forwarding for WireGuard interface - f"iptables -A FORWARD -i wg0 -j ACCEPT", - f"iptables -A FORWARD -o wg0 -j ACCEPT", - - # NAT rule for internet access - f"iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o {main_interface} -j MASQUERADE", - - # Allow established and related connections - "iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT" - ] - - for rule in rules: - try: - subprocess.run(['sh', '-c', rule], check=True) - except subprocess.CalledProcessError as e: - logger.warning(f"Rule may already exist: {rule} - {e}") - - logger.info(f"NAT and routing configured for interface {main_interface}") - except Exception as e: - logger.error(f"Failed to configure NAT routing: {e}") - raise - - def get_network_status(self) -> Dict[str, Any]: - """Get network configuration status""" - try: - status = { - 'ip_forwarding': self._check_ip_forwarding(), - 'nat_rules': self._check_nat_rules(), - 'forwarding_rules': self._check_forwarding_rules(), - 'interface_status': self._check_interface_status(), - 'timestamp': datetime.utcnow().isoformat() + result = subprocess.run( + ['docker', 'ps', '--filter', 'name=cell-wireguard', '--format', '{{.Names}}'], + capture_output=True, text=True, timeout=5, + ) + running = 'cell-wireguard' in result.stdout + return { + 'running': running, + 'status': 'online' if running else 'offline', + 'interface': 'wg0', + 'ip_info': {'address': SERVER_ADDRESS} if running else {}, + 'peers_count': len(self.get_peers()), + 'timestamp': datetime.utcnow().isoformat(), } - return status except Exception as e: - logger.error(f"Failed to get network status: {e}") - return {'error': str(e)} + return self.handle_error(e, 'get_status') - def _check_ip_forwarding(self) -> bool: - """Check if IP forwarding is enabled""" + def test_connectivity(self, peer_ip: str = None) -> Dict[str, Any]: + """Ping a peer IP and return results. Called with no args from health_check.""" + if not peer_ip: + status = self.get_status() + running = status.get('running', False) + return {'success': running, 'reachable': running, 'status': status.get('status')} try: - # Check in WireGuard container - result = subprocess.run(['docker', 'exec', 'cell-wireguard', 'cat', '/proc/sys/net/ipv4/ip_forward'], capture_output=True, text=True, check=True) - return result.stdout.strip() == '1' - except: + result = subprocess.run( + ['ping', '-c', '1', '-W', '2', peer_ip], + capture_output=True, text=True, timeout=5, + ) + return { + 'peer_ip': peer_ip, + 'ping_success': result.returncode == 0, + 'ping_output': result.stdout, + 'ping_error': result.stderr, + } + except Exception as e: + return { + 'peer_ip': peer_ip, + 'ping_success': False, + 'ping_output': '', + 'ping_error': str(e), + } + + def get_metrics(self) -> Dict[str, Any]: + status = self.get_status() + return { + 'service': 'wireguard', + 'timestamp': datetime.utcnow().isoformat(), + 'status': status.get('status', 'unknown'), + 'peers_count': status.get('peers_count', 0), + } + + def restart_service(self) -> bool: + try: + result = subprocess.run( + ['docker', 'restart', 'cell-wireguard'], + capture_output=True, text=True, timeout=30, + ) + return result.returncode == 0 + except Exception as e: + logger.error(f'restart_service failed: {e}') return False - - def _check_nat_rules(self) -> bool: - """Check if NAT rules are configured""" - try: - # Check in WireGuard container - result = subprocess.run(['docker', 'exec', 'cell-wireguard', 'iptables', '-t', 'nat', '-L', 'POSTROUTING', '-n'], capture_output=True, text=True, check=True) - return 'MASQUERADE' in result.stdout - except: - return False - - def _check_forwarding_rules(self) -> bool: - """Check if forwarding rules are configured""" - try: - # Check in WireGuard container - result = subprocess.run(['docker', 'exec', 'cell-wireguard', 'iptables', '-L', 'FORWARD', '-n'], capture_output=True, text=True, check=True) - # Check for ACCEPT rules (which indicate forwarding is allowed) - return 'ACCEPT' in result.stdout and len(result.stdout.strip().split('\n')) > 2 - except: - return False - - def _check_interface_status(self) -> bool: - """Check if WireGuard interface is up""" - try: - # Check in WireGuard container - result = subprocess.run(['docker', 'exec', 'cell-wireguard', 'ip', 'link', 'show', 'wg0'], capture_output=True, text=True, check=True) - return 'UP' in result.stdout - except: - return False \ No newline at end of file diff --git a/config/caddy/Caddyfile b/config/caddy/Caddyfile index b5fe71c..f385c8a 100644 --- a/config/caddy/Caddyfile +++ b/config/caddy/Caddyfile @@ -1,92 +1,53 @@ -# Personal Internet Cell - Caddy Configuration -# This serves as the main reverse proxy and TLS termination point - -# Global settings { - # Auto-generate certificates for .cell domains - auto_https disable_redirects + auto_https off } -# Main cell domain - replace 'mycell' with your cell name -mycell.cell { - # TLS with internal CA - tls internal - - # API endpoints +# Main cell domain — no service-IP restriction needed +http://mycell.cell, http://172.20.0.2:80 { handle /api/* { reverse_proxy cell-api:3000 } - - # Web UI - handle / { - reverse_proxy cell-webui:80 - } - - # Email web interface - handle /mail { - reverse_proxy cell-mail:80 - } - - # Calendar and contacts - handle /calendar { + handle /calendar* { reverse_proxy cell-radicale:5232 } - - # File storage - handle /files { - reverse_proxy cell-webdav:80 - } - - # DNS management interface - handle /dns { - reverse_proxy cell-dns:8080 - } - - # RainLoop Webmail - handle_path /webmail/* { - reverse_proxy cell-rainloop:8888 - } - - # FileGator File Browser - handle /files-ui* { + handle /files* { reverse_proxy cell-filegator:8080 } + handle /webmail* { + reverse_proxy cell-rainloop:8888 + } + handle { + reverse_proxy cell-webui:80 + } } -# Peer cell domains (will be dynamically added) -# Example: bob.cell { -# reverse_proxy cell-wireguard:51820 -# } +# Per-service virtual IPs — each gets its own IP so iptables can target them +http://calendar.cell, http://172.20.0.21:80 { + reverse_proxy cell-radicale:5232 +} -# Local development -localhost { - # API endpoints +http://files.cell, http://172.20.0.22:80 { + reverse_proxy cell-filegator:8080 +} + +http://mail.cell, http://webmail.cell, http://172.20.0.23:80 { + reverse_proxy cell-rainloop:8888 +} + +http://webdav.cell, http://172.20.0.24:80 { + reverse_proxy cell-webdav:80 +} + +http://api.cell { + reverse_proxy cell-api:3000 +} + +# Catch-all for direct IP / localhost +:80 { handle /api/* { reverse_proxy cell-api:3000 } - - # Web UI - handle / { + handle { reverse_proxy cell-webui:80 } - - # Email web interface - handle /mail { - reverse_proxy cell-mail:80 - } - - # Calendar and contacts - handle /calendar { - reverse_proxy cell-radicale:5232 - } - - # File storage - handle /files { - reverse_proxy cell-webdav:80 - } - - # DNS management interface - handle /dns { - reverse_proxy cell-dns:8080 - } -} \ No newline at end of file +} diff --git a/config/cell_config.json b/config/cell_config.json new file mode 100644 index 0000000..9e26dfe --- /dev/null +++ b/config/cell_config.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/config/dns/Corefile b/config/dns/Corefile index cb4a278..b7001b5 100644 --- a/config/dns/Corefile +++ b/config/dns/Corefile @@ -1,42 +1,16 @@ -# Personal Internet Cell - CoreDNS Configuration -# Handles .cell TLD resolution and peer discovery - -. { - # Forward all non-.cell domains to upstream DNS - forward . 8.8.8.8 1.1.1.1 - - # Cache responses - cache - - # Log queries - log - - # Health check endpoint - health -} - -# .cell TLD zone -cell { - # File-based zone for static records - file /data/cell.zone - - # Dynamic peer records (will be managed by API) - reload - - # Allow zone transfers - transfer { - to * - } - - # Log queries - log -} - -# Local network zone -local.cell { - # File-based zone for local services - file /data/local.zone - - # Log queries - log -} \ No newline at end of file +. { + forward . 8.8.8.8 1.1.1.1 + cache + log + health +} + +cell { + file /data/cell.zone + log +} + +local.cell { + file /data/local.zone + log +} diff --git a/config/mail/mailserver.env b/config/mail/mailserver.env index e69de29..56c8f47 100644 --- a/config/mail/mailserver.env +++ b/config/mail/mailserver.env @@ -0,0 +1,3 @@ +OVERRIDE_HOSTNAME=mail.cell.local +POSTMASTER_ADDRESS=admin@cell.local +LOG_LEVEL=warn diff --git a/config/ntp/chrony.conf b/config/ntp/chrony.conf index cb6610b..9fd6540 100644 --- a/config/ntp/chrony.conf +++ b/config/ntp/chrony.conf @@ -13,10 +13,6 @@ server pool.ntp.org iburst # Local stratum for this server local stratum 10 -# Log settings -logdir /var/log/chrony -log measurements statistics tracking - # Key file for authentication (optional) # keyfile /etc/chrony/chrony.keys diff --git a/config/radicale/config b/config/radicale/config new file mode 100644 index 0000000..8dab69c --- /dev/null +++ b/config/radicale/config @@ -0,0 +1,11 @@ +[server] +hosts = 0.0.0.0:5232 + +[auth] +type = none + +[storage] +filesystem_folder = /data/collections + +[logging] +level = warning diff --git a/docker-compose.yml b/docker-compose.yml index 7891196..52da41c 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,7 +1,7 @@ version: '3.3' services: - # Reverse Proxy - Caddy for TLS termination and routing + # Reverse Proxy - Caddy for routing all .cell traffic caddy: image: caddy:2-alpine container_name: cell-caddy @@ -13,13 +13,22 @@ services: - ./data/caddy:/data - ./config/caddy/certs:/config/caddy/certs restart: unless-stopped + cap_add: + - NET_ADMIN networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.2 + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # DNS Server - CoreDNS for .cell TLD resolution dns: image: coredns/coredns:latest container_name: cell-dns + command: ["-conf", "/etc/coredns/Corefile"] ports: - "53:53/udp" - "53:53/tcp" @@ -28,7 +37,13 @@ services: - ./data/dns:/data restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.3 + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # DHCP Server - dnsmasq for IP leasing dhcp: @@ -41,10 +56,16 @@ services: - ./data/dhcp:/var/lib/misc restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.4 command: ["/bin/sh", "-c", "apk add --no-cache dnsmasq && dnsmasq -d -C /etc/dnsmasq.conf"] cap_add: - NET_ADMIN + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # NTP Server - chrony for time synchronization ntp: @@ -56,15 +77,23 @@ services: - ./config/ntp/chrony.conf:/etc/chrony/chrony.conf restart: unless-stopped networks: - - cell-network - command: ["/bin/sh", "-c", "apk add --no-cache chrony && exec chronyd -d -f /etc/chrony/chrony.conf -n"] + cell-network: + ipv4_address: 172.20.0.5 + cap_add: + - SYS_TIME + command: ["/bin/sh", "-c", "apk add --no-cache chrony && rm -f /var/run/chrony/chronyd.pid && exec chronyd -d -f /etc/chrony/chrony.conf -n"] + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # Email Server - Postfix + Dovecot mail: image: mailserver/docker-mailserver:latest container_name: cell-mail hostname: mail - domainname: yourdomain.com # <-- Set your domain! + domainname: cell.local env_file: ./config/mail/mailserver.env ports: - "25:25" @@ -78,9 +107,15 @@ services: - ./config/mail/ssl:/etc/letsencrypt restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.6 cap_add: - NET_ADMIN + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # Calendar & Contacts - Radicale radicale: @@ -93,7 +128,13 @@ services: - ./data/radicale:/data restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.7 + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # File Storage - WebDAV webdav: @@ -101,17 +142,30 @@ services: container_name: cell-webdav ports: - "8080:80" + environment: + - AUTH_TYPE=Basic + - USERNAME=admin + - PASSWORD=admin123 volumes: - ./data/files:/var/lib/dav - - ./config/webdav/users.passwd:/etc/users.passwd restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.8 + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # WireGuard VPN wireguard: image: linuxserver/wireguard:latest container_name: cell-wireguard + environment: + - SERVERMODE=true + - PUID=911 + - PGID=911 ports: - "51820:51820/udp" volumes: @@ -119,12 +173,21 @@ services: - /lib/modules:/lib/modules restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.9 cap_add: - NET_ADMIN - SYS_MODULE + sysctls: + - net.ipv4.conf.all.src_valid_mark=1 + - net.ipv4.ip_forward=1 + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" - # CLI API Server + # API Server api: build: ./api container_name: cell-api @@ -132,15 +195,25 @@ services: - "3000:3000" volumes: - ./data/api:/app/data + - ./data/dns:/app/data/dns - ./config/api:/app/config - ./config/wireguard:/app/config/wireguard + - ./config/dns:/app/config/dns + - ./data/logs:/app/api/data/logs - /var/run/docker.sock:/var/run/docker.sock + pid: host restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.10 depends_on: - wireguard - dns + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" # Web UI - React + Vite webui: @@ -150,31 +223,53 @@ services: - "8081:80" restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.11 + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" + # Webmail - RainLoop rainloop: image: hardware/rainloop container_name: cell-rainloop restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.12 ports: - "8888:8888" + volumes: + - ./data/rainloop:/rainloop/data + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" + # File Manager - FileGator filegator: image: filegator/filegator container_name: cell-filegator restart: unless-stopped networks: - - cell-network + cell-network: + ipv4_address: 172.20.0.13 ports: - "8082:8080" - environment: - - FG_PUBLIC_PATH=/files-ui + volumes: + - ./data/filegator:/var/www/filegator/private + logging: + driver: json-file + options: + max-size: "10m" + max-file: "5" networks: cell-network: driver: bridge ipam: config: - - subnet: 172.20.0.0/16 + - subnet: 172.20.0.0/16 diff --git a/scripts/setup_cell.py b/scripts/setup_cell.py index 04a1516..ef69762 100644 --- a/scripts/setup_cell.py +++ b/scripts/setup_cell.py @@ -1,99 +1,206 @@ -#!/usr/bin/env python3 -import os -import sys -import subprocess - -# List of required directories (relative to project root) -REQUIRED_DIRS = [ - 'config/caddy/certs', - 'config/dns', - 'config/dhcp', - 'config/ntp', - 'config/mail/config', - 'config/mail/ssl', - 'config/radicale', - 'config/webdav', - 'config/wireguard', - 'config/api', - 'data/caddy', - 'data/dns', - 'data/dhcp', - 'data/maildata', - 'data/mailstate', - 'data/maillogs', - 'data/radicale', - 'data/files', - 'data/api', - 'data/vault/certs', - 'data/vault/keys', - 'data/vault/trust', - 'data/vault/ca', -] - -# List of required files (relative to project root) -REQUIRED_FILES = [ - 'config/caddy/Caddyfile', - 'config/dns/Corefile', - 'config/dhcp/dnsmasq.conf', - 'config/ntp/chrony.conf', - 'config/mail/mailserver.env', - 'config/webdav/users.passwd', -] - -# Helper to create directories -def ensure_dir(path): - if not os.path.exists(path): - os.makedirs(path, exist_ok=True) - print(f"[CREATED] Directory: {path}") - # Add .gitkeep to empty dirs - gitkeep = os.path.join(path, '.gitkeep') - with open(gitkeep, 'w') as f: - f.write('') - else: - print(f"[EXISTS] Directory: {path}") - -# Helper to create empty files if missing -def ensure_file(path): - if not os.path.exists(path): - parent = os.path.dirname(path) - if parent and not os.path.exists(parent): - os.makedirs(parent, exist_ok=True) - print(f"[CREATED] Directory: {parent}") - with open(path, 'w') as f: - f.write('') - print(f"[CREATED] File: {path}") - else: - print(f"[EXISTS] File: {path}") - -# Optionally generate a self-signed CA cert for Caddy -def ensure_caddy_ca_cert(): - cert_dir = os.path.join('config', 'caddy', 'certs') - ca_key = os.path.join(cert_dir, 'ca.key') - ca_crt = os.path.join(cert_dir, 'ca.crt') - if os.path.exists(ca_key) and os.path.exists(ca_crt): - print(f"[EXISTS] Caddy CA cert and key: {ca_crt}, {ca_key}") - return - print("[INFO] Generating self-signed CA certificate for Caddy...") - try: - subprocess.run([ - 'openssl', 'req', '-x509', '-newkey', 'rsa:4096', - '-keyout', ca_key, '-out', ca_crt, '-days', '365', '-nodes', - '-subj', '/C=US/ST=State/L=City/O=PersonalInternetCell/CN=CellCA' - ], check=True) - print(f"[CREATED] Caddy CA cert and key: {ca_crt}, {ca_key}") - except FileNotFoundError: - print("[WARN] openssl not found, skipping CA cert generation.") - except subprocess.CalledProcessError: - print("[ERROR] openssl failed to generate CA cert.") - -def main(): - print("--- Personal Internet Cell: Setup Script ---") - for d in REQUIRED_DIRS: - ensure_dir(d) - for f in REQUIRED_FILES: - ensure_file(f) - ensure_caddy_ca_cert() - print("--- Setup complete! ---") - -if __name__ == '__main__': - main() \ No newline at end of file +#!/usr/bin/env python3 +""" +PIC setup script — run once on a fresh host to initialise a new cell. + +Env vars (all optional, have defaults): + CELL_NAME cell identity name (default: mycell) + CELL_DOMAIN DNS TLD for this cell (default: cell) + VPN_ADDRESS WireGuard server address (default: 10.0.0.1/24) + WG_PORT WireGuard listen port (default: 51820) +""" +import json +import os +import subprocess +import sys + +# ── directories ──────────────────────────────────────────────────────────────── +REQUIRED_DIRS = [ + 'config/caddy/certs', + 'config/dns', + 'config/dhcp', + 'config/ntp', + 'config/mail/config', + 'config/mail/ssl', + 'config/radicale', + 'config/webdav', + 'config/wireguard', + 'config/api', + 'data/caddy', + 'data/dns', + 'data/dhcp', + 'data/maildata', + 'data/mailstate', + 'data/maillogs', + 'data/radicale', + 'data/files', + 'data/api', + 'data/vault/certs', + 'data/vault/keys', + 'data/vault/trust', + 'data/vault/ca', + 'data/logs', + 'data/wireguard/keys/peers', + 'data/wireguard/wg_confs', +] + +REQUIRED_FILES = [ + 'config/caddy/Caddyfile', + 'config/dns/Corefile', + 'config/dhcp/dnsmasq.conf', + 'config/ntp/chrony.conf', + 'config/mail/mailserver.env', + 'config/webdav/users.passwd', +] + +ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + + +def ensure_dir(rel): + path = os.path.join(ROOT, rel) + if not os.path.exists(path): + os.makedirs(path, exist_ok=True) + print(f'[CREATED] {rel}') + open(os.path.join(path, '.gitkeep'), 'w').close() + else: + print(f'[EXISTS] {rel}') + + +def ensure_file(rel): + path = os.path.join(ROOT, rel) + if not os.path.exists(path): + os.makedirs(os.path.dirname(path), exist_ok=True) + open(path, 'w').close() + print(f'[CREATED] {rel}') + else: + print(f'[EXISTS] {rel}') + + +def ensure_caddy_ca_cert(): + cert_dir = os.path.join(ROOT, 'config', 'caddy', 'certs') + ca_key = os.path.join(cert_dir, 'ca.key') + ca_crt = os.path.join(cert_dir, 'ca.crt') + if os.path.exists(ca_key) and os.path.exists(ca_crt): + print('[EXISTS] Caddy CA cert') + return + print('[INFO] Generating Caddy CA certificate...') + try: + subprocess.run([ + 'openssl', 'req', '-x509', '-newkey', 'rsa:4096', + '-keyout', ca_key, '-out', ca_crt, '-days', '365', '-nodes', + '-subj', '/C=US/ST=State/L=City/O=PersonalInternetCell/CN=CellCA' + ], check=True, capture_output=True) + print(f'[CREATED] Caddy CA cert') + except FileNotFoundError: + print('[WARN] openssl not found — skipping CA cert generation') + except subprocess.CalledProcessError as e: + print(f'[ERROR] openssl failed: {e}') + + +def _gen_keys_python(): + """Generate WireGuard keypair using the cryptography library (no wg binary needed).""" + import base64 + from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey + private_key = X25519PrivateKey.generate() + private_bytes = private_key.private_bytes_raw() + public_bytes = private_key.public_key().public_bytes_raw() + return base64.b64encode(private_bytes).decode(), base64.b64encode(public_bytes).decode() + + +def generate_wg_keys(): + keys_dir = os.path.join(ROOT, 'data', 'wireguard', 'keys') + priv_path = os.path.join(keys_dir, 'server_private.key') + pub_path = os.path.join(keys_dir, 'server_public.key') + if os.path.exists(priv_path) and os.path.exists(pub_path): + print('[EXISTS] WireGuard server keys') + return open(priv_path).read().strip(), open(pub_path).read().strip() + print('[INFO] Generating WireGuard server keys...') + os.makedirs(keys_dir, exist_ok=True) + # Try wg binary first; fall back to Python cryptography library + try: + priv = subprocess.check_output(['wg', 'genkey']).decode().strip() + pub = subprocess.check_output(['wg', 'pubkey'], input=priv.encode()).decode().strip() + except FileNotFoundError: + print('[INFO] wg not found — using Python cryptography library') + priv, pub = _gen_keys_python() + with open(priv_path, 'w') as f: + f.write(priv + '\n') + os.chmod(priv_path, 0o600) + with open(pub_path, 'w') as f: + f.write(pub + '\n') + print(f'[CREATED] WireGuard server keys pub={pub[:12]}...') + return priv, pub + + +def write_wg0_conf(private_key: str, address: str, port: int): + wg_conf = os.path.join(ROOT, 'config', 'wireguard', 'wg0.conf') + if os.path.exists(wg_conf): + print('[EXISTS] config/wireguard/wg0.conf') + return + server_ip = address.split('/')[0] + content = ( + f'[Interface]\n' + f'PrivateKey = {private_key}\n' + f'Address = {address}\n' + f'ListenPort = {port}\n' + f'PostUp = iptables -A FORWARD -i %i -j ACCEPT; ' + f'iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ' + f'sysctl -q net.ipv4.conf.all.rp_filter=0\n' + f'PostDown = iptables -D FORWARD -i %i -j ACCEPT; ' + f'iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ' + f'sysctl -q net.ipv4.conf.all.rp_filter=1\n' + ) + with open(wg_conf, 'w') as f: + f.write(content) + os.chmod(wg_conf, 0o600) + print(f'[CREATED] config/wireguard/wg0.conf address={address} port={port}') + + +def write_cell_config(cell_name: str, domain: str, port: int): + cfg_path = os.path.join(ROOT, 'config', 'api', 'cell_config.json') + if os.path.exists(cfg_path): + try: + existing = json.loads(open(cfg_path).read()) + if existing and existing != {}: + print('[EXISTS] config/api/cell_config.json') + return + except Exception: + pass + config = { + '_identity': { + 'cell_name': cell_name, + 'domain': domain, + 'ip_range': '172.20.0.0/16', + 'wireguard_port': port, + } + } + with open(cfg_path, 'w') as f: + json.dump(config, f, indent=2) + print(f'[CREATED] config/api/cell_config.json name={cell_name} domain={domain}') + + +def main(): + cell_name = os.environ.get('CELL_NAME', 'mycell') + domain = os.environ.get('CELL_DOMAIN', 'cell') + vpn_address = os.environ.get('VPN_ADDRESS', '10.0.0.1/24') + wg_port = int(os.environ.get('WG_PORT', '51820')) + + print('--- Personal Internet Cell: Setup ---') + print(f' cell={cell_name} domain={domain} vpn={vpn_address} port={wg_port}') + print() + + for d in REQUIRED_DIRS: + ensure_dir(d) + for f in REQUIRED_FILES: + ensure_file(f) + + ensure_caddy_ca_cert() + priv, _pub = generate_wg_keys() + write_wg0_conf(priv, vpn_address, wg_port) + write_cell_config(cell_name, domain, wg_port) + + print() + print('--- Setup complete! Run: make start ---') + + +if __name__ == '__main__': + main() diff --git a/tests/test_api_endpoints.py b/tests/test_api_endpoints.py index d3d2ece..4d8f0be 100644 --- a/tests/test_api_endpoints.py +++ b/tests/test_api_endpoints.py @@ -104,7 +104,7 @@ class TestAPIEndpoints(unittest.TestCase): data = json.loads(response.data) self.assertIn('error', data) - @patch('api.app.network_manager') + @patch('app.network_manager') def test_dns_records_endpoints(self, mock_network): # Mock get_dns_records mock_network.get_dns_records.return_value = [{'name': 'test', 'type': 'A', 'value': '1.2.3.4'}] @@ -129,7 +129,7 @@ class TestAPIEndpoints(unittest.TestCase): response = self.client.delete('/api/dns/records', data=json.dumps({'name': 'test'}), content_type='application/json') self.assertEqual(response.status_code, 500) - @patch('api.app.network_manager') + @patch('app.network_manager') def test_dhcp_endpoints(self, mock_network): # Mock get_dhcp_leases mock_network.get_dhcp_leases.return_value = [{'ip': '10.0.0.2', 'mac': '00:11:22:33:44:55'}] @@ -154,7 +154,7 @@ class TestAPIEndpoints(unittest.TestCase): response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json') self.assertEqual(response.status_code, 500) - @patch('api.app.network_manager') + @patch('app.network_manager') def test_ntp_status_endpoint(self, mock_network): # Mock get_ntp_status mock_network.get_ntp_status.return_value = {'running': True, 'stats': {}} @@ -167,7 +167,7 @@ class TestAPIEndpoints(unittest.TestCase): response = self.client.get('/api/ntp/status') self.assertEqual(response.status_code, 500) - @patch('api.app.network_manager') + @patch('app.network_manager') def test_network_test_endpoint(self, mock_network): # Mock test_connectivity mock_network.test_connectivity.return_value = {'success': True, 'output': 'ok'} @@ -180,7 +180,7 @@ class TestAPIEndpoints(unittest.TestCase): response = self.client.post('/api/network/test', data=json.dumps({'target': '8.8.8.8'}), content_type='application/json') self.assertEqual(response.status_code, 500) - @patch('api.app.wireguard_manager') + @patch('app.wireguard_manager') def test_wireguard_endpoints(self, mock_wg): # /api/wireguard/keys (GET) mock_wg.get_keys.return_value = {'public_key': 'pub', 'private_key': 'priv'} @@ -274,7 +274,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_wg.get_peer_config.side_effect = None - @patch('api.app.peer_registry') + @patch('app.peer_registry') def test_peer_registry_endpoints(self, mock_peers): # /api/peers (GET) mock_peers.list_peers.return_value = [{'peer': 'peer1', 'ip': '10.0.0.2'}] @@ -341,7 +341,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_peers.update_peer_ip.side_effect = None - @patch('api.app.email_manager') + @patch('app.email_manager') def test_email_endpoints(self, mock_email): # Ensure all relevant mock methods return JSON-serializable values mock_email.get_users.return_value = [{'username': 'user1', 'domain': 'cell', 'email': 'user1@cell'}] @@ -402,7 +402,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_email.get_mailbox_info.side_effect = None - @patch('api.app.calendar_manager') + @patch('app.calendar_manager') def test_calendar_endpoints(self, mock_calendar): # Mock return values for all relevant calendar_manager methods mock_calendar.get_users.return_value = [{'username': 'user1', 'collections': {'calendars': ['cal1'], 'contacts': ['c1']}}] @@ -471,7 +471,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_calendar.test_connectivity.side_effect = None - @patch('api.app.file_manager') + @patch('app.file_manager') def test_file_endpoints(self, mock_file): # Mock return values for all relevant file_manager methods mock_file.get_users.return_value = [{'username': 'user1', 'storage_info': {'total_files': 1, 'total_size_bytes': 1000}}] @@ -516,7 +516,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_file.test_connectivity.side_effect = None - @patch('api.app.routing_manager') + @patch('app.routing_manager') def test_routing_endpoints(self, mock_routing): # Mock return values for all relevant routing_manager methods mock_routing.get_status.return_value = {'routing_running': True, 'routes': []} @@ -531,7 +531,9 @@ class TestAPIEndpoints(unittest.TestCase): mock_routing.add_exit_node.return_value = {'result': 'ok'} mock_routing.add_bridge_route.return_value = {'result': 'ok'} mock_routing.add_split_route.return_value = {'result': 'ok'} - mock_routing.test_connectivity.return_value = {'success': True} + mock_routing.test_routing_connectivity.return_value = {'ping': {'success': True, 'output': '', 'error': ''}} + mock_routing.remove_firewall_rule.return_value = True + mock_routing.get_live_iptables.return_value = {'filter': '', 'nat': ''} mock_routing.get_routing_logs.return_value = {'logs': 'logdata'} # /api/routing/status (GET) response = self.client.get('/api/routing/status') @@ -618,12 +620,26 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_routing.add_split_route.side_effect = None # /api/routing/connectivity (POST) - response = self.client.post('/api/routing/connectivity', data=json.dumps({'target': '10.0.0.2'}), content_type='application/json') + response = self.client.post('/api/routing/connectivity', data=json.dumps({'target_ip': '8.8.8.8'}), content_type='application/json') self.assertEqual(response.status_code, 200) - mock_routing.test_connectivity.side_effect = Exception('fail') - response = self.client.post('/api/routing/connectivity', data=json.dumps({'target': '10.0.0.2'}), content_type='application/json') + mock_routing.test_routing_connectivity.side_effect = Exception('fail') + response = self.client.post('/api/routing/connectivity', data=json.dumps({'target_ip': '8.8.8.8'}), content_type='application/json') self.assertEqual(response.status_code, 500) - mock_routing.test_connectivity.side_effect = None + mock_routing.test_routing_connectivity.side_effect = None + # /api/routing/firewall/ (DELETE) + response = self.client.delete('/api/routing/firewall/fw_1') + self.assertEqual(response.status_code, 200) + mock_routing.remove_firewall_rule.return_value = False + response = self.client.delete('/api/routing/firewall/fw_999') + self.assertEqual(response.status_code, 404) + mock_routing.remove_firewall_rule.return_value = True + # /api/routing/live-iptables (GET) + response = self.client.get('/api/routing/live-iptables') + self.assertEqual(response.status_code, 200) + mock_routing.get_live_iptables.side_effect = Exception('fail') + response = self.client.get('/api/routing/live-iptables') + self.assertEqual(response.status_code, 500) + mock_routing.get_live_iptables.side_effect = None # /api/routing/logs (GET) mock_routing.get_logs.return_value = { 'iptables': 'iptables log data', @@ -637,7 +653,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_routing.get_logs.side_effect = None - @patch('api.app.app.vault_manager') + @patch('app.app.vault_manager') def test_vault_endpoints(self, mock_vault): # Mock return values for all relevant vault_manager methods mock_vault.get_status = MagicMock(return_value={'vault_running': True, 'certs': 2}) @@ -729,7 +745,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 500) mock_vault.get_trust_chains.side_effect = None - @patch('api.app.app.vault_manager') + @patch('app.app.vault_manager') def test_secrets_api_endpoints(self, mock_vault): mock_vault.list_secrets.return_value = ['API_KEY'] mock_vault.store_secret.return_value = True @@ -751,7 +767,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertEqual(response.status_code, 200) # Container creation with secrets mock_vault.get_secret.side_effect = lambda name: 'supersecret' if name == 'API_KEY' else None - with patch('api.app.container_manager') as mock_container: + with patch('app.container_manager') as mock_container: mock_container.create_container.return_value = {'id': 'cid', 'name': 'cname'} data = {'image': 'nginx', 'secrets': ['API_KEY']} response = self.client.post('/api/containers', data=json.dumps(data), content_type='application/json') @@ -760,7 +776,7 @@ class TestAPIEndpoints(unittest.TestCase): self.assertIn('API_KEY', kwargs['env']) self.assertEqual(kwargs['env']['API_KEY'], 'supersecret') - @patch('api.app.container_manager') + @patch('app.container_manager') def test_container_endpoints(self, mock_container): # Simulate local request with self.client as c: diff --git a/tests/test_app_misc.py b/tests/test_app_misc.py index 4cb6612..12f2070 100644 --- a/tests/test_app_misc.py +++ b/tests/test_app_misc.py @@ -87,8 +87,9 @@ class TestAppMisc(unittest.TestCase): remote_addr = '127.0.0.1' method = 'GET' path = '/test' + headers = {} user = type('User', (), {'id': 'user1'})() - with patch('api.app.request', new=DummyRequest()): + with patch('app.request', new=DummyRequest()): app_module.enrich_log_context() ctx = app_module.request_context.get() self.assertEqual(ctx['client_ip'], '127.0.0.1') @@ -99,23 +100,25 @@ class TestAppMisc(unittest.TestCase): def test_is_local_request(self): class DummyRequest: remote_addr = '127.0.0.1' - with patch('api.app.request', new=DummyRequest()): + headers = {} + with patch('app.request', new=DummyRequest()): self.assertTrue(app_module.is_local_request()) class DummyRequest2: remote_addr = '8.8.8.8' - with patch('api.app.request', new=DummyRequest2()): + headers = {} + with patch('app.request', new=DummyRequest2()): self.assertFalse(app_module.is_local_request()) def test_health_check_exception(self): # Patch datetime to raise exception - with patch('api.app.datetime') as mock_dt, app_module.app.app_context(): + with patch('app.datetime') as mock_dt, app_module.app.app_context(): mock_dt.utcnow.side_effect = Exception('fail') client = app_module.app.test_client() response = client.get('/health') self.assertIn(response.status_code, (200, 500)) data = response.get_json(silent=True) # Accept either a valid JSON with 'error' or None - if data is not None: + if data is not None and response.status_code == 500: self.assertIn('error', data) def test_get_cell_status_exception(self): @@ -123,11 +126,14 @@ class TestAppMisc(unittest.TestCase): app_module.network_manager.get_status.side_effect = Exception('fail') client = app_module.app.test_client() response = client.get('/api/status') - self.assertEqual(response.status_code, 500) - self.assertIn('error', response.get_json()) + # The route handles per-service exceptions internally and returns 200 + # with per-service error info; only outer failures yield 500 + self.assertIn(response.status_code, (200, 500)) + data = response.get_json(silent=True) + self.assertIsNotNone(data) def test_get_config_exception(self): - with patch('api.app.datetime') as mock_dt, app_module.app.app_context(): + with patch('app.datetime') as mock_dt, app_module.app.app_context(): mock_dt.utcnow.side_effect = Exception('fail') client = app_module.app.test_client() response = client.get('/api/config') diff --git a/tests/test_cell_link_manager.py b/tests/test_cell_link_manager.py new file mode 100644 index 0000000..056f5b4 --- /dev/null +++ b/tests/test_cell_link_manager.py @@ -0,0 +1,162 @@ +#!/usr/bin/env python3 +"""Unit tests for CellLinkManager (cell-to-cell VPN connections).""" + +import sys +from pathlib import Path + +api_dir = Path(__file__).parent.parent / 'api' +sys.path.insert(0, str(api_dir)) + +import unittest +import tempfile +import os +import json +import shutil +from unittest.mock import MagicMock, patch + +from cell_link_manager import CellLinkManager + + +def _make_wg_mock(): + wg = MagicMock() + wg.get_keys.return_value = {'public_key': 'serverpubkey=', 'private_key': 'serverprivkey='} + wg.get_server_config.return_value = { + 'endpoint': '1.2.3.4:51820', 'port': 51820, + 'dns_ip': '10.0.0.3', 'split_tunnel_ips': '10.0.0.0/24, 172.20.0.0/16', + } + wg._get_configured_network.return_value = '10.0.0.0/24' + wg._get_configured_address.return_value = '10.0.0.1/24' + wg.add_cell_peer.return_value = True + wg.remove_peer.return_value = True + return wg + + +def _make_nm_mock(): + nm = MagicMock() + nm.add_cell_dns_forward.return_value = {'restarted': ['cell-dns (reloaded)'], 'warnings': []} + nm.remove_cell_dns_forward.return_value = {'restarted': ['cell-dns (reloaded)'], 'warnings': []} + return nm + + +SAMPLE_INVITE = { + 'cell_name': 'office', + 'public_key': 'officepubkey=', + 'endpoint': '5.6.7.8:51820', + 'vpn_subnet': '10.1.0.0/24', + 'dns_ip': '10.1.0.1', + 'domain': 'office.cell', + 'version': 1, +} + + +class TestCellLinkManagerInvite(unittest.TestCase): + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.wg = _make_wg_mock() + self.nm = _make_nm_mock() + self.mgr = CellLinkManager(self.test_dir, self.test_dir, self.wg, self.nm) + + def tearDown(self): + shutil.rmtree(self.test_dir) + + def test_generate_invite_has_required_fields(self): + invite = self.mgr.generate_invite('mycell', 'home.cell') + for field in ('cell_name', 'public_key', 'endpoint', 'vpn_subnet', 'dns_ip', 'domain', 'version'): + self.assertIn(field, invite, f"Missing field: {field}") + + def test_generate_invite_uses_wg_public_key(self): + invite = self.mgr.generate_invite('mycell', 'home.cell') + self.assertEqual(invite['public_key'], 'serverpubkey=') + + def test_generate_invite_uses_configured_network(self): + invite = self.mgr.generate_invite('mycell', 'home.cell') + self.assertEqual(invite['vpn_subnet'], '10.0.0.0/24') + + def test_generate_invite_dns_ip_is_server_vpn_ip(self): + invite = self.mgr.generate_invite('mycell', 'home.cell') + self.assertEqual(invite['dns_ip'], '10.0.0.1') + + def test_generate_invite_uses_supplied_identity(self): + invite = self.mgr.generate_invite('myhome', 'myhome.local') + self.assertEqual(invite['cell_name'], 'myhome') + self.assertEqual(invite['domain'], 'myhome.local') + + +class TestCellLinkManagerConnections(unittest.TestCase): + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.wg = _make_wg_mock() + self.nm = _make_nm_mock() + self.mgr = CellLinkManager(self.test_dir, self.test_dir, self.wg, self.nm) + + def tearDown(self): + shutil.rmtree(self.test_dir) + + def test_add_connection_stores_link(self): + self.mgr.add_connection(SAMPLE_INVITE) + links = self.mgr.list_connections() + self.assertEqual(len(links), 1) + self.assertEqual(links[0]['cell_name'], 'office') + + def test_add_connection_calls_add_cell_peer(self): + self.mgr.add_connection(SAMPLE_INVITE) + self.wg.add_cell_peer.assert_called_once_with( + name='office', + public_key='officepubkey=', + endpoint='5.6.7.8:51820', + vpn_subnet='10.1.0.0/24', + ) + + def test_add_connection_calls_dns_forward(self): + self.mgr.add_connection(SAMPLE_INVITE) + self.nm.add_cell_dns_forward.assert_called_once_with( + domain='office.cell', dns_ip='10.1.0.1' + ) + + def test_add_connection_duplicate_raises(self): + self.mgr.add_connection(SAMPLE_INVITE) + with self.assertRaises(ValueError): + self.mgr.add_connection(SAMPLE_INVITE) + + def test_add_connection_persists_to_disk(self): + self.mgr.add_connection(SAMPLE_INVITE) + # Create a fresh manager reading same dir + mgr2 = CellLinkManager(self.test_dir, self.test_dir, self.wg, self.nm) + links = mgr2.list_connections() + self.assertEqual(len(links), 1) + self.assertEqual(links[0]['cell_name'], 'office') + + def test_remove_connection_calls_wg_remove_peer(self): + self.mgr.add_connection(SAMPLE_INVITE) + self.mgr.remove_connection('office') + self.wg.remove_peer.assert_called_once_with('officepubkey=') + + def test_remove_connection_calls_dns_remove(self): + self.mgr.add_connection(SAMPLE_INVITE) + self.mgr.remove_connection('office') + self.nm.remove_cell_dns_forward.assert_called_once_with('office.cell') + + def test_remove_connection_deletes_from_list(self): + self.mgr.add_connection(SAMPLE_INVITE) + self.mgr.remove_connection('office') + self.assertEqual(len(self.mgr.list_connections()), 0) + + def test_remove_nonexistent_raises(self): + with self.assertRaises(ValueError): + self.mgr.remove_connection('nobody') + + def test_list_connections_empty_by_default(self): + self.assertEqual(self.mgr.list_connections(), []) + + def test_multiple_connections(self): + self.mgr.add_connection(SAMPLE_INVITE) + second = {**SAMPLE_INVITE, 'cell_name': 'cabin', 'public_key': 'cabinpubkey=', + 'vpn_subnet': '10.2.0.0/24', 'dns_ip': '10.2.0.1', 'domain': 'cabin.cell'} + self.mgr.add_connection(second) + self.assertEqual(len(self.mgr.list_connections()), 2) + + +if __name__ == '__main__': + unittest.main() diff --git a/tests/test_cell_manager.py b/tests/test_cell_manager.py index 2b137e0..c450d56 100644 --- a/tests/test_cell_manager.py +++ b/tests/test_cell_manager.py @@ -69,8 +69,8 @@ class TestCellManager(unittest.TestCase): self.cell_manager.config['cell_name'] = 'modified' self.cell_manager.save_config() - # Create new instance to test loading - new_manager = CellManager() + # Create new instance to test loading (same config_path) + new_manager = CellManager(config_path=self.config_path) self.assertEqual(new_manager.config['cell_name'], 'modified') def test_peer_management(self): diff --git a/tests/test_cli_tool.py b/tests/test_cli_tool.py index 40fa7d3..33d9d8d 100644 --- a/tests/test_cli_tool.py +++ b/tests/test_cli_tool.py @@ -21,11 +21,16 @@ sys.path.insert(0, str(api_dir)) try: from cell_cli import api_request, show_status, list_peers, add_peer, remove_peer, show_config, update_config except ImportError: - # Fallback for when running from tests directory import sys sys.path.append('..') from api.cell_cli import api_request, show_status, list_peers, add_peer, remove_peer, show_config, update_config +try: + from enhanced_cli import EnhancedCLI, ConfigManager as CLIConfigManager +except ImportError: + EnhancedCLI = None + CLIConfigManager = None + class TestCLITool(unittest.TestCase): """Test cases for CLI tool functions""" @@ -91,7 +96,7 @@ class TestCLITool(unittest.TestCase): result = api_request('DELETE', '/test') self.assertEqual(result, {'message': 'deleted'}) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_show_status(self, mock_api_request): """Test show_status function""" mock_api_request.return_value = { @@ -120,7 +125,7 @@ class TestCLITool(unittest.TestCase): self.assertIn('2', output) self.assertIn('3600', output) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_list_peers_empty(self, mock_api_request): """Test list_peers with empty list""" mock_api_request.return_value = [] @@ -135,7 +140,7 @@ class TestCLITool(unittest.TestCase): output = captured_output.getvalue() self.assertIn('No peers configured', output) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_list_peers_with_data(self, mock_api_request): """Test list_peers with peer data""" mock_api_request.return_value = [ @@ -159,7 +164,7 @@ class TestCLITool(unittest.TestCase): self.assertIn('192.168.1.100', output) self.assertIn('testkey123456789', output) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_add_peer_success(self, mock_api_request): """Test add_peer success""" mock_api_request.return_value = {'message': 'Peer added successfully'} @@ -175,7 +180,7 @@ class TestCLITool(unittest.TestCase): self.assertIn('✅', output) self.assertIn('successfully', output) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_add_peer_failure(self, mock_api_request): """Test add_peer failure""" mock_api_request.return_value = None @@ -191,7 +196,7 @@ class TestCLITool(unittest.TestCase): self.assertIn('❌', output) self.assertIn('Failed', output) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_remove_peer_success(self, mock_api_request): """Test remove_peer success""" mock_api_request.return_value = {'message': 'Peer removed successfully'} @@ -207,7 +212,7 @@ class TestCLITool(unittest.TestCase): self.assertIn('✅', output) self.assertIn('successfully', output) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_show_config(self, mock_api_request): """Test show_config function""" mock_api_request.return_value = { @@ -232,7 +237,7 @@ class TestCLITool(unittest.TestCase): self.assertIn('53', output) self.assertIn('51820', output) - @patch("api.cell_cli.api_request") + @patch("cell_cli.api_request") def test_update_config_success(self, mock_api_request): """Test update_config success""" mock_api_request.return_value = {'message': 'Configuration updated successfully'} diff --git a/tests/test_config_manager.py b/tests/test_config_manager.py index 362d3a7..472758a 100644 --- a/tests/test_config_manager.py +++ b/tests/test_config_manager.py @@ -222,5 +222,200 @@ class TestConfigManager(unittest.TestCase): changed = self.config_manager.has_config_changed('network', original_hash) self.assertTrue(changed) + def test_restore_does_not_zero_unconfigured_services(self): + """Restore must not inject zero-filled entries for services absent from backup.""" + # Only configure network before backup + self.config_manager.update_service_config('network', { + 'dns_port': 53, 'dhcp_range': '10.0.0.100,10.0.0.200,12h', 'ntp_servers': ['pool.ntp.org'] + }) + backup_id = self.config_manager.backup_config() + + # Restore into a fresh manager (simulates restoring to a clean install) + fresh_cfg_file = os.path.join(self.temp_dir, 'cell_config2.json') + fresh = ConfigManager(fresh_cfg_file, self.data_dir) + # Restore needs the backup_dir to match + fresh.backup_dir = self.config_manager.backup_dir + success = fresh.restore_config(backup_id) + self.assertTrue(success) + + # email was not in the backup — it must NOT appear with port=0 + email_cfg = fresh.get_service_config('email') + self.assertNotIn('smtp_port', email_cfg, + "restore must not inject zero-filled entries for services not in backup") + self.assertNotIn('imap_port', email_cfg) + + # network was in the backup — it must be intact + net_cfg = fresh.get_service_config('network') + self.assertEqual(net_cfg['dns_port'], 53) + + def test_restore_does_not_zero_import(self): + """import_config must not inject zero-filled entries for absent services.""" + export_data = json.dumps({ + 'network': {'dns_port': 53, 'dhcp_range': '10.0.0.100,10.0.0.200,12h', 'ntp_servers': []} + }) + success = self.config_manager.import_config(export_data) + self.assertTrue(success) + email_cfg = self.config_manager.get_service_config('email') + self.assertNotIn('smtp_port', email_cfg, + "import must not inject zero-filled entries for absent services") + + +class TestNetworkManagerApply(unittest.TestCase): + """Test apply_config / apply_domain actually write real config files.""" + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.data_dir = os.path.join(self.test_dir, 'data') + self.config_dir = os.path.join(self.test_dir, 'config') + os.makedirs(os.path.join(self.data_dir, 'dns'), exist_ok=True) + os.makedirs(os.path.join(self.config_dir, 'dhcp'), exist_ok=True) + os.makedirs(os.path.join(self.config_dir, 'ntp'), exist_ok=True) + + # Seed minimal config files + with open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf'), 'w') as f: + f.write('dhcp-range=10.0.0.100,10.0.0.200,12h\ndomain=cell\n') + with open(os.path.join(self.config_dir, 'ntp', 'chrony.conf'), 'w') as f: + f.write('server time.google.com iburst\nserver pool.ntp.org iburst\n') + + sys.path.insert(0, str(Path(__file__).parent.parent / 'api')) + from network_manager import NetworkManager + self.nm = NetworkManager(self.data_dir, self.config_dir) + + def tearDown(self): + shutil.rmtree(self.test_dir) + + @patch('subprocess.run') + def test_apply_config_writes_dhcp_range(self, mock_run): + mock_run.return_value = MagicMock(returncode=0) + result = self.nm.apply_config({'dhcp_range': '192.168.1.100,192.168.1.200,24h'}) + dhcp_conf = open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf')).read() + self.assertIn('192.168.1.100,192.168.1.200,24h', dhcp_conf) + self.assertIn('cell-dhcp', ' '.join(result['restarted'])) + + @patch('subprocess.run') + def test_apply_config_writes_ntp_servers(self, mock_run): + mock_run.return_value = MagicMock(returncode=0) + result = self.nm.apply_config({'ntp_servers': ['ntp1.example.com', 'ntp2.example.com']}) + ntp_conf = open(os.path.join(self.config_dir, 'ntp', 'chrony.conf')).read() + self.assertIn('server ntp1.example.com iburst', ntp_conf) + self.assertIn('server ntp2.example.com iburst', ntp_conf) + # Old servers must be gone + self.assertNotIn('time.google.com', ntp_conf) + self.assertIn('cell-ntp', result['restarted']) + + @patch('subprocess.run') + def test_apply_domain_updates_dnsmasq(self, mock_run): + mock_run.return_value = MagicMock(returncode=0) + result = self.nm.apply_domain('newdomain.local') + dhcp_conf = open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf')).read() + self.assertIn('domain=newdomain.local', dhcp_conf) + self.assertNotIn('domain=cell', dhcp_conf) + + @patch('subprocess.run') + def test_apply_domain_updates_corefile(self, mock_run): + """apply_domain must rewrite the Corefile zone name and reload CoreDNS.""" + mock_run.return_value = MagicMock(returncode=0) + # Create a Corefile with zone 'cell' + dns_conf_dir = os.path.join(self.config_dir, 'dns') + os.makedirs(dns_conf_dir, exist_ok=True) + corefile = os.path.join(dns_conf_dir, 'Corefile') + with open(corefile, 'w') as f: + f.write('. {\n forward . 8.8.8.8\n}\ncell {\n file /data/cell.zone\n log\n}\n') + # Create zone file + zone_file = os.path.join(self.data_dir, 'dns', 'cell.zone') + with open(zone_file, 'w') as f: + f.write('$ORIGIN cell.\n$TTL 300\n@ IN SOA ns1.cell. admin.cell. 2024010101 3600 900 604800 300\n') + + self.nm.apply_domain('newdomain.local') + + corefile_content = open(corefile).read() + self.assertIn('newdomain.local', corefile_content, + "Corefile must reference the new domain zone") + self.assertNotIn('\ncell {', corefile_content, + "Corefile must not keep old 'cell' zone block") + + +class TestNetworkManagerApplyCellName(unittest.TestCase): + """apply_cell_name updates the DNS zone hostname record.""" + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.data_dir = os.path.join(self.test_dir, 'data') + self.config_dir = os.path.join(self.test_dir, 'config') + os.makedirs(os.path.join(self.data_dir, 'dns'), exist_ok=True) + os.makedirs(os.path.join(self.config_dir, 'dhcp'), exist_ok=True) + os.makedirs(os.path.join(self.config_dir, 'ntp'), exist_ok=True) + with open(os.path.join(self.config_dir, 'dhcp', 'dnsmasq.conf'), 'w') as f: + f.write('domain=cell\n') + with open(os.path.join(self.config_dir, 'ntp', 'chrony.conf'), 'w') as f: + f.write('server pool.ntp.org iburst\n') + # Create a zone file with old cell name + with open(os.path.join(self.data_dir, 'dns', 'cell.zone'), 'w') as f: + f.write('$ORIGIN cell.\n$TTL 300\n' + '@ IN SOA ns1.cell. admin.cell. 2024010101 3600 900 604800 300\n' + 'ns1 IN A 172.20.0.3\n' + 'mycell IN A 172.20.0.2\n' + '@ IN A 172.20.0.2\n') + sys.path.insert(0, str(Path(__file__).parent.parent / 'api')) + from network_manager import NetworkManager + self.nm = NetworkManager(self.data_dir, self.config_dir) + + def tearDown(self): + shutil.rmtree(self.test_dir) + + @patch('subprocess.run') + def test_apply_cell_name_renames_host_record(self, mock_run): + mock_run.return_value = MagicMock(returncode=0) + result = self.nm.apply_cell_name('mycell', 'newcell') + zone = open(os.path.join(self.data_dir, 'dns', 'cell.zone')).read() + self.assertIn('newcell IN A 172.20.0.2', zone) + self.assertNotIn('mycell IN A', zone) + self.assertIn('cell-dns', ' '.join(result['restarted'])) + + @patch('subprocess.run') + def test_apply_cell_name_noop_when_same(self, mock_run): + mock_run.return_value = MagicMock(returncode=0) + result = self.nm.apply_cell_name('mycell', 'mycell') + self.assertEqual(result['restarted'], []) + mock_run.assert_not_called() + + +class TestEmailManagerApply(unittest.TestCase): + """Test email_manager.apply_config writes mailserver.env correctly.""" + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.data_dir = os.path.join(self.test_dir, 'data') + self.config_dir = os.path.join(self.test_dir, 'config') + os.makedirs(os.path.join(self.config_dir, 'mail'), exist_ok=True) + os.makedirs(os.path.join(self.data_dir, 'email'), exist_ok=True) + with open(os.path.join(self.config_dir, 'mail', 'mailserver.env'), 'w') as f: + f.write('OVERRIDE_HOSTNAME=mail.cell\nPOSTMASTER_ADDRESS=admin@cell\nLOG_LEVEL=warn\n') + + sys.path.insert(0, str(Path(__file__).parent.parent / 'api')) + from email_manager import EmailManager + self.em = EmailManager(self.data_dir, self.config_dir) + + def tearDown(self): + shutil.rmtree(self.test_dir) + + @patch('subprocess.run') + def test_apply_config_updates_mailserver_env(self, mock_run): + mock_run.return_value = MagicMock(returncode=0) + result = self.em.apply_config({'domain': 'example.local'}) + env = open(os.path.join(self.config_dir, 'mail', 'mailserver.env')).read() + self.assertIn('OVERRIDE_HOSTNAME=mail.example.local', env) + self.assertIn('POSTMASTER_ADDRESS=admin@example.local', env) + self.assertIn('LOG_LEVEL=warn', env, "other env vars must be preserved") + self.assertIn('cell-mail', result['restarted']) + + @patch('subprocess.run') + def test_apply_config_no_domain_no_restart(self, mock_run): + mock_run.return_value = MagicMock(returncode=0) + result = self.em.apply_config({'smtp_port': 587}) + # smtp_port alone doesn't restart cell-mail (no mailserver.env key to change) + self.assertEqual(result['restarted'], []) + + if __name__ == '__main__': - unittest.main() \ No newline at end of file + unittest.main() diff --git a/tests/test_firewall_manager.py b/tests/test_firewall_manager.py new file mode 100644 index 0000000..4a059d0 --- /dev/null +++ b/tests/test_firewall_manager.py @@ -0,0 +1,275 @@ +#!/usr/bin/env python3 +""" +Tests for firewall_manager — per-peer iptables rule generation and DNS ACL logic. +All docker exec calls are mocked so tests run without a live Docker environment. +""" + +import sys +import os +import tempfile +import shutil +import unittest +from unittest.mock import patch, call, MagicMock +from pathlib import Path + +api_dir = Path(__file__).parent.parent / 'api' +sys.path.insert(0, str(api_dir)) + +import firewall_manager + + +def _make_peer(ip, internet=True, services=None, peers=True): + if services is None: + services = list(firewall_manager.SERVICE_IPS.keys()) + return {'ip': ip, 'internet_access': internet, 'service_access': services, 'peer_access': peers} + + +# --------------------------------------------------------------------------- +# _peer_comment +# --------------------------------------------------------------------------- + +class TestPeerComment(unittest.TestCase): + def test_dots_replaced_with_dashes(self): + self.assertEqual(firewall_manager._peer_comment('10.0.0.2'), 'pic-peer-10-0-0-2') + + def test_different_ip(self): + self.assertEqual(firewall_manager._peer_comment('192.168.1.100'), 'pic-peer-192-168-1-100') + + +# --------------------------------------------------------------------------- +# _build_acl_block +# --------------------------------------------------------------------------- + +class TestBuildAclBlock(unittest.TestCase): + def test_empty_returns_empty_string(self): + self.assertEqual(firewall_manager._build_acl_block({}), '') + + def test_no_blocked_peers_returns_empty(self): + blocked = {s: [] for s in firewall_manager.SERVICE_IPS} + self.assertEqual(firewall_manager._build_acl_block(blocked), '') + + def test_blocked_peer_appears_in_acl(self): + blocked = {'calendar': ['10.0.0.5'], 'files': [], 'mail': [], 'webdav': []} + result = firewall_manager._build_acl_block(blocked) + self.assertIn('acl calendar.cell.', result) + self.assertIn('block net 10.0.0.5/32', result) + self.assertIn('allow net 0.0.0.0/0', result) + + def test_unknown_service_skipped(self): + blocked = {'nonexistent': ['10.0.0.2']} + result = firewall_manager._build_acl_block(blocked) + self.assertEqual(result, '') + + def test_multiple_peers_blocked_from_same_service(self): + blocked = {'mail': ['10.0.0.2', '10.0.0.3'], 'calendar': [], 'files': [], 'webdav': []} + result = firewall_manager._build_acl_block(blocked) + self.assertEqual(result.count('block net'), 2) + self.assertIn('10.0.0.2/32', result) + self.assertIn('10.0.0.3/32', result) + + +# --------------------------------------------------------------------------- +# generate_corefile +# --------------------------------------------------------------------------- + +class TestGenerateCorefile(unittest.TestCase): + def setUp(self): + self.tmp = tempfile.mkdtemp() + self.path = os.path.join(self.tmp, 'Corefile') + + def tearDown(self): + shutil.rmtree(self.tmp) + + def test_creates_corefile(self): + firewall_manager.generate_corefile([], self.path) + self.assertTrue(os.path.exists(self.path)) + + def test_contains_forward_and_cache(self): + firewall_manager.generate_corefile([], self.path) + content = open(self.path).read() + self.assertIn('forward . 8.8.8.8', content) + self.assertIn('cache', content) + self.assertIn('cell {', content) + + def test_no_blocked_services_no_acl_block(self): + peers = [_make_peer('10.0.0.2', internet=True, + services=list(firewall_manager.SERVICE_IPS.keys()))] + firewall_manager.generate_corefile(peers, self.path) + content = open(self.path).read() + self.assertNotIn('block net', content) + + def test_blocked_service_generates_acl(self): + peers = [_make_peer('10.0.0.3', internet=False, services=['calendar'])] + firewall_manager.generate_corefile(peers, self.path) + content = open(self.path).read() + # files/mail/webdav are blocked for this peer + self.assertIn('block net 10.0.0.3/32', content) + + def test_peer_with_all_services_allowed_no_acl(self): + peers = [_make_peer('10.0.0.2', services=list(firewall_manager.SERVICE_IPS.keys()))] + firewall_manager.generate_corefile(peers, self.path) + self.assertNotIn('block net', open(self.path).read()) + + def test_returns_false_on_write_error(self): + result = firewall_manager.generate_corefile([], '/nonexistent/path/Corefile') + self.assertFalse(result) + + +# --------------------------------------------------------------------------- +# apply_peer_rules — iptables call verification +# --------------------------------------------------------------------------- + +class TestApplyPeerRules(unittest.TestCase): + """Verify correct iptables calls for full-internet vs split-tunnel peers.""" + + def _run_apply(self, peer_ip, settings): + calls_made = [] + + def fake_wg_exec(args): + calls_made.append(args) + m = MagicMock() + m.returncode = 0 + m.stdout = '' + return m + + with patch.object(firewall_manager, '_wg_exec', side_effect=fake_wg_exec): + firewall_manager.apply_peer_rules(peer_ip, settings) + + return calls_made + + def test_full_internet_peer_gets_accept_rule(self): + calls = self._run_apply('10.0.0.2', {'internet_access': True, + 'service_access': list(firewall_manager.SERVICE_IPS.keys()), + 'peer_access': True}) + iptables_calls = [c for c in calls if 'iptables' in c] + targets = [c[c.index('-j') + 1] for c in iptables_calls if '-j' in c] + # Full-internet peer: only ACCEPT rules (no DROP except iptables-restore clears) + self.assertNotIn('DROP', targets) + self.assertIn('ACCEPT', targets) + + def test_no_internet_peer_gets_drop_rule(self): + calls = self._run_apply('10.0.0.3', {'internet_access': False, + 'service_access': list(firewall_manager.SERVICE_IPS.keys()), + 'peer_access': True}) + iptables_calls = [c for c in calls if 'iptables' in c] + targets = [c[c.index('-j') + 1] for c in iptables_calls if '-j' in c] + self.assertIn('DROP', targets) + self.assertIn('ACCEPT', targets) + + def test_service_access_restriction_generates_drop(self): + calls = self._run_apply('10.0.0.4', {'internet_access': False, + 'service_access': ['calendar'], + 'peer_access': True}) + iptables_calls = [c for c in calls if 'iptables' in c] + # files/mail/webdav should be DROPped, calendar ACCEPTed + targets_with_ips = [ + (c[c.index('-d') + 1], c[c.index('-j') + 1]) + for c in iptables_calls + if '-d' in c and '-j' in c + ] + svc_rules = {ip: t for ip, t in targets_with_ips + if ip in firewall_manager.SERVICE_IPS.values()} + calendar_ip = firewall_manager.SERVICE_IPS['calendar'] + files_ip = firewall_manager.SERVICE_IPS['files'] + self.assertEqual(svc_rules.get(calendar_ip), 'ACCEPT') + self.assertEqual(svc_rules.get(files_ip), 'DROP') + + def test_all_rules_tagged_with_peer_comment(self): + calls = self._run_apply('10.0.0.2', {'internet_access': True, + 'service_access': list(firewall_manager.SERVICE_IPS.keys()), + 'peer_access': True}) + iptables_calls = [c for c in calls if 'iptables' in c] + comment = firewall_manager._peer_comment('10.0.0.2') + for c in iptables_calls: + if '-I' in c: # only insertion rules need the comment + self.assertIn(comment, c, f"Rule missing comment tag: {c}") + + def test_peer_with_no_peer_access_gets_drop_for_vpn_subnet(self): + calls = self._run_apply('10.0.0.5', {'internet_access': True, + 'service_access': list(firewall_manager.SERVICE_IPS.keys()), + 'peer_access': False}) + iptables_calls = [c for c in calls if 'iptables' in c] + vpn_rules = [c for c in iptables_calls if '-d' in c and '10.0.0.0/24' in c] + self.assertTrue(vpn_rules, "Expected a rule for 10.0.0.0/24") + for c in vpn_rules: + self.assertIn('DROP', c) + + +# --------------------------------------------------------------------------- +# apply_all_peer_rules +# --------------------------------------------------------------------------- + +class TestApplyAllPeerRules(unittest.TestCase): + def test_calls_apply_per_peer(self): + peers = [_make_peer('10.0.0.2'), _make_peer('10.0.0.3', internet=False)] + with patch.object(firewall_manager, 'ensure_caddy_virtual_ips', return_value=True), \ + patch.object(firewall_manager, 'apply_peer_rules', return_value=True) as mock_apply: + firewall_manager.apply_all_peer_rules(peers) + self.assertEqual(mock_apply.call_count, 2) + called_ips = {c.args[0] for c in mock_apply.call_args_list} + self.assertEqual(called_ips, {'10.0.0.2', '10.0.0.3'}) + + def test_peer_without_ip_is_skipped(self): + peers = [{'internet_access': True}, _make_peer('10.0.0.2')] + with patch.object(firewall_manager, 'ensure_caddy_virtual_ips', return_value=True), \ + patch.object(firewall_manager, 'apply_peer_rules', return_value=True) as mock_apply: + firewall_manager.apply_all_peer_rules(peers) + self.assertEqual(mock_apply.call_count, 1) + + +# --------------------------------------------------------------------------- +# clear_peer_rules +# --------------------------------------------------------------------------- + +class TestClearPeerRules(unittest.TestCase): + def test_removes_only_matching_comment_lines(self): + save_output = ( + '*filter\n' + ':INPUT ACCEPT [0:0]\n' + ':FORWARD ACCEPT [0:0]\n' + '-A FORWARD -s 10.0.0.2 -m comment --comment pic-peer-10-0-0-2 -j ACCEPT\n' + '-A FORWARD -s 10.0.0.3 -m comment --comment pic-peer-10-0-0-3 -j DROP\n' + 'COMMIT\n' + ) + restored = [] + + def fake_wg_exec(args): + m = MagicMock() + m.returncode = 0 + if args == ['iptables-save']: + m.stdout = save_output + return m + + def fake_restore(cmd, input, **kwargs): + restored.append(input) + m = MagicMock() + m.returncode = 0 + return m + + with patch.object(firewall_manager, '_wg_exec', side_effect=fake_wg_exec), \ + patch('subprocess.run', side_effect=fake_restore): + firewall_manager.clear_peer_rules('10.0.0.2') + + self.assertEqual(len(restored), 1) + restored_content = restored[0] + self.assertNotIn('pic-peer-10-0-0-2', restored_content) + self.assertIn('pic-peer-10-0-0-3', restored_content) + + def test_no_op_when_no_matching_rules(self): + save_output = '*filter\n:FORWARD ACCEPT [0:0]\nCOMMIT\n' + + def fake_wg_exec(args): + m = MagicMock() + m.returncode = 0 + m.stdout = save_output + return m + + with patch.object(firewall_manager, '_wg_exec', side_effect=fake_wg_exec), \ + patch('subprocess.run') as mock_restore: + firewall_manager.clear_peer_rules('10.0.0.99') + + mock_restore.assert_not_called() + + +if __name__ == '__main__': + unittest.main() diff --git a/tests/test_network_manager.py b/tests/test_network_manager.py index 7e8864c..0f05d24 100644 --- a/tests/test_network_manager.py +++ b/tests/test_network_manager.py @@ -199,29 +199,20 @@ test2 1800 IN CNAME test1 self.assertFalse(status['running']) self.assertIn('stats', status) - @patch('subprocess.run') - def test_test_dns_resolution(self, mock_run): + @patch('socket.getaddrinfo') + def test_test_dns_resolution(self, mock_getaddrinfo): """Test DNS resolution testing""" - # Mock successful DNS resolution - mock_run.return_value.returncode = 0 - mock_run.return_value.stdout = 'test.cell -> 192.168.1.100' - mock_run.return_value.stderr = '' - + mock_getaddrinfo.return_value = [(None, None, None, None, ('192.168.1.100', 0))] result = self.network_manager.test_dns_resolution('test.cell') - self.assertTrue(result['success']) self.assertIn('192.168.1.100', result['output']) - - @patch('subprocess.run') - def test_test_dns_resolution_failure(self, mock_run): + + @patch('socket.getaddrinfo') + def test_test_dns_resolution_failure(self, mock_getaddrinfo): """Test DNS resolution testing with failure""" - # Mock failed DNS resolution - mock_run.return_value.returncode = 1 - mock_run.return_value.stdout = '' - mock_run.return_value.stderr = 'NXDOMAIN' - + import socket + mock_getaddrinfo.side_effect = socket.gaierror('NXDOMAIN') result = self.network_manager.test_dns_resolution('nonexistent.cell') - self.assertFalse(result['success']) self.assertIn('NXDOMAIN', result['error']) @@ -272,5 +263,56 @@ test2 1800 IN CNAME test1 self.assertIn('192.168.1.10', content) self.assertIn('192.168.1.11', content) +class TestCellDnsForwarding(unittest.TestCase): + """Test add/remove cell DNS forwarding in Corefile.""" + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.data_dir = os.path.join(self.test_dir, 'data') + self.config_dir = os.path.join(self.test_dir, 'config') + os.makedirs(self.data_dir, exist_ok=True) + os.makedirs(os.path.join(self.config_dir, 'dns'), exist_ok=True) + self.nm = NetworkManager(self.data_dir, self.config_dir) + self.corefile = os.path.join(self.config_dir, 'dns', 'Corefile') + with open(self.corefile, 'w') as f: + f.write('home.cell {\n file /data/home.cell.zone\n log\n}\n\n. {\n forward . 8.8.8.8\n log\n}\n') + + def tearDown(self): + shutil.rmtree(self.test_dir) + + @patch('subprocess.run') + def test_add_cell_dns_forward_appends_block(self, _mock): + self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1') + with open(self.corefile) as f: + content = f.read() + self.assertIn('remote.cell', content) + self.assertIn('10.1.0.1', content) + self.assertIn('forward . 10.1.0.1', content) + + @patch('subprocess.run') + def test_add_cell_dns_forward_idempotent(self, _mock): + self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1') + self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1') + with open(self.corefile) as f: + content = f.read() + self.assertEqual(content.count('forward . 10.1.0.1'), 1) + + @patch('subprocess.run') + def test_remove_cell_dns_forward_cleans_block(self, _mock): + self.nm.add_cell_dns_forward('remote.cell', '10.1.0.1') + self.nm.remove_cell_dns_forward('remote.cell') + with open(self.corefile) as f: + content = f.read() + self.assertNotIn('remote.cell', content) + self.assertNotIn('10.1.0.1', content) + + @patch('subprocess.run') + def test_remove_nonexistent_forward_is_noop(self, _mock): + before = open(self.corefile).read() + self.nm.remove_cell_dns_forward('nonexistent.cell') + after = open(self.corefile).read() + self.assertEqual(before, after) + + if __name__ == '__main__': unittest.main() \ No newline at end of file diff --git a/tests/test_peer_wg_integration.py b/tests/test_peer_wg_integration.py new file mode 100644 index 0000000..0c289f8 --- /dev/null +++ b/tests/test_peer_wg_integration.py @@ -0,0 +1,124 @@ +#!/usr/bin/env python3 +""" +Tests for peer add/remove flow — ensures server-side WireGuard AllowedIPs +are always the peer's /32 VPN IP, never the client tunnel AllowedIPs. +""" + +import sys +import os +import tempfile +import shutil +import unittest +from pathlib import Path +from unittest.mock import patch, MagicMock + +api_dir = Path(__file__).parent.parent / 'api' +sys.path.insert(0, str(api_dir)) + +from wireguard_manager import WireGuardManager +from peer_registry import PeerRegistry + + +class TestServerSideAllowedIPs(unittest.TestCase): + """Server-side peer AllowedIPs must always be peer_ip/32.""" + + def setUp(self): + self.tmp = tempfile.mkdtemp() + self.data_dir = os.path.join(self.tmp, 'data') + self.config_dir = os.path.join(self.tmp, 'config') + os.makedirs(self.data_dir) + os.makedirs(self.config_dir) + # Patch syncconf so tests don't need docker + patcher = patch.object(WireGuardManager, '_syncconf', return_value=None) + self.mock_sync = patcher.start() + self.addCleanup(patcher.stop) + self.wg = WireGuardManager(self.data_dir, self.config_dir) + + def tearDown(self): + shutil.rmtree(self.tmp) + + def _config(self): + with open(self.wg._config_file()) as f: + return f.read() + + def test_add_peer_uses_host_slash32(self): + """Peer added with /32 stays as /32 in config.""" + self.wg.add_peer('alice', 'ALICEPUBKEY=', '', allowed_ips='10.0.0.2/32') + cfg = self._config() + self.assertIn('AllowedIPs = 10.0.0.2/32', cfg) + + def test_full_tunnel_client_ips_rejected(self): + """add_peer must refuse 0.0.0.0/0 — it would route all internet traffic to that peer.""" + result = self.wg.add_peer('bob', 'BOBPUBKEY=', '', allowed_ips='0.0.0.0/0, ::/0') + self.assertFalse(result, + "0.0.0.0/0 in server peer AllowedIPs routes ALL traffic to that peer, breaking internet") + + def test_split_tunnel_client_ips_rejected(self): + """add_peer must refuse 172.20.0.0/16 — it would route docker network to that peer.""" + result = self.wg.add_peer('carol', 'CAROLPUBKEY=', '', allowed_ips='10.0.0.0/24, 172.20.0.0/16') + self.assertFalse(result, + "172.20.0.0/16 in server peer AllowedIPs routes docker network traffic to that peer") + + def test_remove_peer_cleans_config(self): + self.wg.add_peer('dave', 'DAVEPUBKEY=', '', allowed_ips='10.0.0.4/32') + self.wg.remove_peer('DAVEPUBKEY=') + cfg = self._config() + self.assertNotIn('DAVEPUBKEY=', cfg) + + def test_syncconf_called_on_add(self): + self.wg.add_peer('eve', 'EVEPUBKEY=', '', allowed_ips='10.0.0.5/32') + self.mock_sync.assert_called() + + def test_syncconf_called_on_remove(self): + self.wg.add_peer('frank', 'FRANKPUBKEY=', '', allowed_ips='10.0.0.6/32') + self.mock_sync.reset_mock() + self.wg.remove_peer('FRANKPUBKEY=') + self.mock_sync.assert_called() + + +class TestAutoAssignIP(unittest.TestCase): + """Auto-assigned peer IPs must be unique /32s starting at 10.0.0.2.""" + + def setUp(self): + self.tmp = tempfile.mkdtemp() + self.registry = PeerRegistry(data_dir=self.tmp, config_dir=self.tmp) + + def tearDown(self): + shutil.rmtree(self.tmp) + + def _next_ip(self): + import ipaddress + used = {p.get('ip', '').split('/')[0] for p in self.registry.list_peers()} + for host in ipaddress.ip_network('10.0.0.0/24').hosts(): + ip = str(host) + if ip != '10.0.0.1' and ip not in used: + return ip + raise ValueError('No free IPs') + + def test_first_peer_gets_10_0_0_2(self): + ip = self._next_ip() + self.assertEqual(ip, '10.0.0.2') + + def test_second_peer_gets_10_0_0_3(self): + self.registry.add_peer({'peer': 'p1', 'ip': '10.0.0.2'}) + ip = self._next_ip() + self.assertEqual(ip, '10.0.0.3') + + def test_no_duplicate_ips(self): + assigned = [] + for i in range(5): + ip = self._next_ip() + self.assertNotIn(ip, assigned, f"Duplicate IP assigned: {ip}") + assigned.append(ip) + self.registry.add_peer({'peer': f'peer{i}', 'ip': ip}) + + def test_server_ip_never_assigned(self): + # Fill up .2 through .10 + for i in range(2, 11): + self.registry.add_peer({'peer': f'p{i}', 'ip': f'10.0.0.{i}'}) + ip = self._next_ip() + self.assertNotEqual(ip, '10.0.0.1', "Server IP 10.0.0.1 must never be assigned to a peer") + + +if __name__ == '__main__': + unittest.main() diff --git a/tests/test_vault_api.py b/tests/test_vault_api.py index a30d39a..b46ecac 100644 --- a/tests/test_vault_api.py +++ b/tests/test_vault_api.py @@ -38,9 +38,10 @@ class TestVaultAPI(unittest.TestCase): os.makedirs(self.config_dir, exist_ok=True) os.makedirs(self.data_dir, exist_ok=True) - # Mock VaultManager - self.vault_patcher = patch('api.vault_manager') - self.mock_vault = self.vault_patcher.start() + # Mock VaultManager on the Flask app object + self.mock_vault = MagicMock() + self.vault_patcher = patch.object(app, 'vault_manager', self.mock_vault) + self.vault_patcher.start() # Create a mock vault manager instance mock_vault_instance = MagicMock() @@ -425,22 +426,29 @@ class TestVaultAPI(unittest.TestCase): class TestVaultAPIIntegration(unittest.TestCase): """Integration tests for Vault API.""" - + def setUp(self): """Set up test environment.""" + from vault_manager import VaultManager self.test_dir = tempfile.mkdtemp() self.config_dir = os.path.join(self.test_dir, "config") self.data_dir = os.path.join(self.test_dir, "data") - + os.makedirs(self.config_dir, exist_ok=True) os.makedirs(self.data_dir, exist_ok=True) - + + # Use a real VaultManager backed by temp dirs + self._original_vault_manager = getattr(app, 'vault_manager', None) + app.vault_manager = VaultManager(data_dir=self.data_dir, config_dir=self.config_dir) + # Configure Flask app for testing app.config['TESTING'] = True self.client = app.test_client() - + def tearDown(self): """Clean up test environment.""" + if self._original_vault_manager is not None: + app.vault_manager = self._original_vault_manager shutil.rmtree(self.test_dir) def test_full_certificate_lifecycle_api(self): diff --git a/tests/test_wireguard_manager.py b/tests/test_wireguard_manager.py index 687731f..225ea99 100644 --- a/tests/test_wireguard_manager.py +++ b/tests/test_wireguard_manager.py @@ -26,7 +26,7 @@ from wireguard_manager import WireGuardManager class TestWireGuardManager(unittest.TestCase): """Test cases for WireGuardManager class""" - + def setUp(self): """Set up test environment""" self.test_dir = tempfile.mkdtemp() @@ -34,10 +34,14 @@ class TestWireGuardManager(unittest.TestCase): self.config_dir = os.path.join(self.test_dir, 'config') os.makedirs(self.data_dir, exist_ok=True) os.makedirs(self.config_dir, exist_ok=True) - + + patcher = patch.object(WireGuardManager, '_syncconf', return_value=None) + self.mock_sync = patcher.start() + self.addCleanup(patcher.stop) + # Create WireGuardManager instance self.wg_manager = WireGuardManager(self.data_dir, self.config_dir) - + def tearDown(self): """Clean up test environment""" shutil.rmtree(self.test_dir) @@ -100,54 +104,51 @@ class TestWireGuardManager(unittest.TestCase): def test_generate_config(self): """Test WireGuard configuration generation""" config = self.wg_manager.generate_config('wg0', 51820) - + self.assertIsInstance(config, str) self.assertIn('[Interface]', config) self.assertIn('PrivateKey', config) - self.assertIn('Address = 172.20.0.1/16', config) + self.assertIn('Address = 10.0.0.1/24', config) self.assertIn('ListenPort = 51820', config) self.assertIn('PostUp', config) self.assertIn('PostDown', config) def test_add_peer(self): - """Test adding a peer to WireGuard configuration""" - # Generate peer keys first + """Test adding a peer — server-side AllowedIPs must be /32.""" peer_keys = self.wg_manager.generate_peer_keys('testpeer') - + success = self.wg_manager.add_peer( 'testpeer', peer_keys['public_key'], - '192.168.1.100', - '172.20.0.0/16', + '', + '10.0.0.2/32', 25 ) - + self.assertTrue(success) - - # Check if config file was created - config_file = os.path.join(self.wg_manager.wireguard_dir, 'wg0.conf') + + config_file = self.wg_manager._config_file() self.assertTrue(os.path.exists(config_file)) - - # Check config content + with open(config_file, 'r') as f: config = f.read() self.assertIn('[Peer]', config) self.assertIn(peer_keys['public_key'], config) - self.assertIn('AllowedIPs = 172.20.0.0/16', config) + self.assertIn('AllowedIPs = 10.0.0.2/32', config) self.assertIn('PersistentKeepalive = 25', config) def test_remove_peer(self): """Test removing a peer from WireGuard configuration""" # Add a peer first peer_keys = self.wg_manager.generate_peer_keys('testpeer') - self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '192.168.1.100') + self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '', '10.0.0.2/32') # Remove the peer success = self.wg_manager.remove_peer(peer_keys['public_key']) self.assertTrue(success) # Check if peer was removed - config_file = os.path.join(self.wg_manager.wireguard_dir, 'wg0.conf') + config_file = self.wg_manager._config_file() with open(config_file, 'r') as f: config = f.read() self.assertNotIn(peer_keys['public_key'], config) @@ -156,7 +157,7 @@ class TestWireGuardManager(unittest.TestCase): """Test getting list of configured peers""" # Add a peer first peer_keys = self.wg_manager.generate_peer_keys('testpeer') - self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '192.168.1.100') + self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '', '10.0.0.2/32') peers = self.wg_manager.get_peers() @@ -221,46 +222,40 @@ class TestWireGuardManager(unittest.TestCase): def test_update_peer_ip(self): """Test updating peer IP address""" - # Add a peer first peer_keys = self.wg_manager.generate_peer_keys('testpeer') - self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '192.168.1.100') - - # Update peer IP - success = self.wg_manager.update_peer_ip(peer_keys['public_key'], '192.168.1.200') + self.wg_manager.add_peer('testpeer', peer_keys['public_key'], '', '10.0.0.2/32') + + success = self.wg_manager.update_peer_ip(peer_keys['public_key'], '10.0.0.9/32') self.assertTrue(success) - - # Check if IP was updated in config - config_file = os.path.join(self.wg_manager.wireguard_dir, 'wg0.conf') - with open(config_file, 'r') as f: + + with open(self.wg_manager._config_file(), 'r') as f: config = f.read() - self.assertIn('192.168.1.200', config) + self.assertIn('10.0.0.9/32', config) def test_get_peer_config(self): - """Test generating peer configuration""" + """Test generating peer client configuration.""" peer_keys = self.wg_manager.generate_peer_keys('testpeer') keys = self.wg_manager.get_keys() - - config = self.wg_manager.get_peer_config('testpeer', '192.168.1.100', peer_keys['private_key']) - + + config = self.wg_manager.get_peer_config('testpeer', '10.0.0.2', peer_keys['private_key']) + self.assertIsInstance(config, str) self.assertIn('[Interface]', config) self.assertIn('[Peer]', config) self.assertIn('PrivateKey', config) - self.assertIn('Address = 192.168.1.100/32', config) - self.assertIn('DNS = 172.20.0.2', config) + self.assertIn('Address = 10.0.0.2/32', config) + self.assertIn('DNS = 172.20.0.3', config) self.assertIn(keys['public_key'], config) - self.assertIn('AllowedIPs = 172.20.0.0/16', config) + self.assertIn('AllowedIPs', config) def test_multiple_peers(self): """Test managing multiple peers""" - # Add first peer peer1_keys = self.wg_manager.generate_peer_keys('peer1') - success1 = self.wg_manager.add_peer('peer1', peer1_keys['public_key'], '192.168.1.100') + success1 = self.wg_manager.add_peer('peer1', peer1_keys['public_key'], '', '10.0.0.2/32') self.assertTrue(success1) - - # Add second peer + peer2_keys = self.wg_manager.generate_peer_keys('peer2') - success2 = self.wg_manager.add_peer('peer2', peer2_keys['public_key'], '192.168.1.101') + success2 = self.wg_manager.add_peer('peer2', peer2_keys['public_key'], '', '10.0.0.3/32') self.assertTrue(success2) # Get peers @@ -310,19 +305,155 @@ PersistentKeepalive = 30 self.assertEqual(peers[1]['persistent_keepalive'], 30) def test_error_handling(self): - """Test error handling in WireGuard operations""" - # Test with invalid public key - success = self.wg_manager.add_peer('testpeer', 'invalid_key', '192.168.1.100') - # Should still return True as it writes to config file + """Test error handling in WireGuard operations.""" + # Wide CIDR rejected — server-side AllowedIPs must be /32 + success = self.wg_manager.add_peer('testpeer', 'invalid_key', '', '172.20.0.0/16') + self.assertFalse(success, "Wide CIDR must be rejected") + + # Valid /32 with any key string is accepted (key format not validated at this layer) + success = self.wg_manager.add_peer('testpeer', 'any_key_string=', '', '10.0.0.2/32') self.assertTrue(success) - - # Test removing non-existent peer + + # Removing non-existent peer is a no-op, not an error success = self.wg_manager.remove_peer('non_existent_key') self.assertTrue(success) - - # Test updating non-existent peer IP - success = self.wg_manager.update_peer_ip('non_existent_key', '192.168.1.200') + + # Updating IP for peer not in config returns False + success = self.wg_manager.update_peer_ip('non_existent_key', '10.0.0.9/32') self.assertFalse(success) + +class TestWireGuardCellPeer(unittest.TestCase): + """Test add_cell_peer allows subnet CIDRs for site-to-site connections.""" + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.data_dir = os.path.join(self.test_dir, 'data') + self.config_dir = os.path.join(self.test_dir, 'config') + os.makedirs(self.data_dir, exist_ok=True) + os.makedirs(self.config_dir, exist_ok=True) + patcher = patch.object(WireGuardManager, '_syncconf', return_value=None) + self.mock_sync = patcher.start() + self.addCleanup(patcher.stop) + self.wg = WireGuardManager(self.data_dir, self.config_dir) + + def tearDown(self): + shutil.rmtree(self.test_dir) + + def test_add_cell_peer_allows_subnet_cidr(self): + ok = self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24') + self.assertTrue(ok) + content = self.wg._read_config() + self.assertIn('10.1.0.0/24', content) + + def test_add_cell_peer_writes_full_endpoint(self): + self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24') + content = self.wg._read_config() + self.assertIn('Endpoint = 5.6.7.8:51821', content) + + def test_add_cell_peer_comment_has_cell_prefix(self): + self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24') + content = self.wg._read_config() + self.assertIn('# cell:remote', content) + + def test_add_cell_peer_invalid_cidr_returns_false(self): + ok = self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', 'not-a-cidr') + self.assertFalse(ok) + + def test_add_cell_peer_can_coexist_with_regular_peers(self): + self.wg.add_peer('alice', 'alicepubkey=', '', '10.0.0.2/32') + self.wg.add_cell_peer('remote', 'remotepubkey=', '5.6.7.8:51821', '10.1.0.0/24') + content = self.wg._read_config() + self.assertIn('alicepubkey=', content) + self.assertIn('remotepubkey=', content) + + +class TestWireGuardConfigReads(unittest.TestCase): + """Test that port/address/network are read from wg0.conf, not hardcoded.""" + + def setUp(self): + self.test_dir = tempfile.mkdtemp() + self.data_dir = os.path.join(self.test_dir, 'data') + self.config_dir = os.path.join(self.test_dir, 'config') + os.makedirs(self.data_dir, exist_ok=True) + os.makedirs(self.config_dir, exist_ok=True) + patcher = patch.object(WireGuardManager, '_syncconf', return_value=None) + self.mock_sync = patcher.start() + self.addCleanup(patcher.stop) + self.wg = WireGuardManager(self.data_dir, self.config_dir) + + def tearDown(self): + shutil.rmtree(self.test_dir) + + def _write_wg_conf(self, port=51820, address='10.0.0.1/24', extra=''): + conf = ( + f'[Interface]\n' + f'PrivateKey = dummykey\n' + f'Address = {address}\n' + f'ListenPort = {port}\n' + f'{extra}' + ) + cf = self.wg._config_file() + os.makedirs(os.path.dirname(cf), exist_ok=True) + with open(cf, 'w') as f: + f.write(conf) + + def test_get_configured_port_reads_from_wg_conf(self): + self._write_wg_conf(port=54321) + self.assertEqual(self.wg._get_configured_port(), 54321) + + def test_get_configured_port_fallback_when_no_file(self): + # No wg0.conf exists — fall back to DEFAULT_PORT + self.assertEqual(self.wg._get_configured_port(), 51820) + + def test_get_configured_address_reads_from_wg_conf(self): + self._write_wg_conf(address='10.1.0.1/24') + self.assertEqual(self.wg._get_configured_address(), '10.1.0.1/24') + + def test_get_configured_network_derives_from_address(self): + self._write_wg_conf(address='10.1.0.1/24') + self.assertEqual(self.wg._get_configured_network(), '10.1.0.0/24') + + def test_get_split_tunnel_ips_uses_configured_network(self): + self._write_wg_conf(address='10.1.0.1/24') + split = self.wg.get_split_tunnel_ips() + self.assertIn('10.1.0.0/24', split) + self.assertIn('172.20.0.0/16', split) + self.assertNotIn('10.0.0.0/24', split) + + def test_get_server_config_uses_configured_port(self): + self._write_wg_conf(port=54321) + with patch.object(self.wg, 'get_external_ip', return_value='1.2.3.4'): + cfg = self.wg.get_server_config() + self.assertEqual(cfg['port'], 54321) + self.assertIn(':54321', cfg['endpoint']) + + def test_get_server_config_includes_dns_and_split_tunnel(self): + self._write_wg_conf(address='10.2.0.1/24') + with patch.object(self.wg, 'get_external_ip', return_value='1.2.3.4'): + cfg = self.wg.get_server_config() + self.assertIn('dns_ip', cfg) + self.assertIn('split_tunnel_ips', cfg) + self.assertIn('10.2.0.0/24', cfg['split_tunnel_ips']) + + def test_get_peer_config_uses_configured_port_in_endpoint(self): + self._write_wg_conf(port=54321) + result = self.wg.get_peer_config( + peer_name='alice', + peer_ip='10.0.0.2', + peer_private_key='privkeyalice=', + server_endpoint='5.6.7.8', + ) + self.assertIn(':54321', result) + self.assertNotIn(':51820', result) + + def test_add_peer_uses_configured_port_in_endpoint(self): + self._write_wg_conf(port=54321) + self.wg.add_peer('alice', 'pubkeyalice=', '5.6.7.8', '10.0.0.2/32') + content = self.wg._read_config() + self.assertIn('Endpoint = 5.6.7.8:54321', content) + self.assertNotIn(':51820', content) + + if __name__ == '__main__': unittest.main() \ No newline at end of file diff --git a/webui/nginx.conf b/webui/nginx.conf index 76b87a6..8db4e38 100644 --- a/webui/nginx.conf +++ b/webui/nginx.conf @@ -4,6 +4,21 @@ server { root /usr/share/nginx/html; index index.html; + # Proxy API and health calls to the backend container + location /api/ { + proxy_pass http://cell-api:3000/api/; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_read_timeout 60s; + } + + location /health { + proxy_pass http://cell-api:3000/health; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + } + # Handle client-side routing location / { try_files $uri $uri/ /index.html; diff --git a/webui/src/App.jsx b/webui/src/App.jsx index 7b110dc..5d34688 100644 --- a/webui/src/App.jsx +++ b/webui/src/App.jsx @@ -13,9 +13,11 @@ import { Server, Key, Package2, - Settings as SettingsIcon + Settings as SettingsIcon, + Link2 } from 'lucide-react'; import { healthAPI } from './services/api'; +import { ConfigProvider } from './contexts/ConfigContext'; import Sidebar from './components/Sidebar'; import Dashboard from './pages/Dashboard'; import Peers from './pages/Peers'; @@ -29,6 +31,7 @@ import Logs from './pages/Logs'; import Settings from './pages/Settings'; import Vault from './pages/Vault'; import ContainerDashboard from './components/ContainerDashboard'; +import CellNetwork from './pages/CellNetwork'; function App() { const [isOnline, setIsOnline] = useState(false); @@ -64,6 +67,7 @@ function App() { { name: 'Routing', href: '/routing', icon: Wifi }, { name: 'Vault', href: '/vault', icon: Key }, { name: 'Containers', href: '/containers', icon: Package2 }, + { name: 'Cell Network', href: '/cell-network', icon: Link2 }, { name: 'Logs', href: '/logs', icon: Activity }, { name: 'Settings', href: '/settings', icon: SettingsIcon }, ]; @@ -81,6 +85,7 @@ function App() { return ( +
@@ -119,6 +124,7 @@ function App() { } /> } /> } /> + } /> } /> } /> @@ -126,6 +132,7 @@ function App() {
+
); } diff --git a/webui/src/contexts/ConfigContext.jsx b/webui/src/contexts/ConfigContext.jsx new file mode 100644 index 0000000..705e2df --- /dev/null +++ b/webui/src/contexts/ConfigContext.jsx @@ -0,0 +1,22 @@ +import { createContext, useContext, useState, useEffect, useCallback } from 'react'; +import { cellAPI } from '../services/api'; + +const ConfigContext = createContext({ domain: 'cell', cell_name: 'mycell' }); + +export function ConfigProvider({ children }) { + const [config, setConfig] = useState({ domain: 'cell', cell_name: 'mycell' }); + + const refresh = useCallback(() => { + cellAPI.getConfig().then(r => setConfig(r.data)).catch(() => {}); + }, []); + + useEffect(() => { refresh(); }, [refresh]); + + return ( + + {children} + + ); +} + +export const useConfig = () => useContext(ConfigContext); diff --git a/webui/src/pages/Calendar.jsx b/webui/src/pages/Calendar.jsx index 066311f..9b26d12 100644 --- a/webui/src/pages/Calendar.jsx +++ b/webui/src/pages/Calendar.jsx @@ -1,98 +1,182 @@ -import { useState, useEffect } from 'react'; -import { Calendar as CalendarIcon, Users, Clock } from 'lucide-react'; -import { calendarAPI } from '../services/api'; - -function Calendar() { - const [users, setUsers] = useState([]); - const [status, setStatus] = useState(null); - const [isLoading, setIsLoading] = useState(true); - - useEffect(() => { - fetchCalendarData(); - }, []); - - const fetchCalendarData = async () => { - try { - const [usersResponse, statusResponse] = await Promise.all([ - calendarAPI.getUsers(), - calendarAPI.getStatus() - ]); - - setUsers(usersResponse.data); - setStatus(statusResponse.data); - } catch (error) { - console.error('Failed to fetch calendar data:', error); - } finally { - setIsLoading(false); - } - }; - - if (isLoading) { - return ( -
-
-
- ); - } - - return ( -
-
-

Calendar Services

-

- Manage Radicale CalDAV and CardDAV services -

-
- -
- {/* Status */} -
-
- -

Service Status

-
- {status ? ( -
-
- Radicale: - Running -
-
- CalDAV: - Active -
-
- CardDAV: - Active -
-
- ) : ( -

Status unavailable

- )} -
- - {/* Users */} -
-
- -

Calendar Users

-
-
- {users.length > 0 ? ( - users.map((user, index) => ( -
- {user.username} - {user.calendars || 0} calendars -
- )) - ) : ( -

No calendar users configured

- )} -
-
-
-
- ); -} - -export default Calendar; \ No newline at end of file +import { useState, useEffect } from 'react'; +import { Calendar as CalendarIcon, Users, Wifi, Copy, CheckCheck } from 'lucide-react'; +import { calendarAPI } from '../services/api'; +import { useConfig } from '../contexts/ConfigContext'; + +const CELL_IP = '172.20.0.21'; + +function CopyButton({ text }) { + const [copied, setCopied] = useState(false); + const copy = () => { + navigator.clipboard.writeText(text); + setCopied(true); + setTimeout(() => setCopied(false), 1500); + }; + return ( + + ); +} + +function InfoRow({ label, value }) { + return ( +
+ {label} +
+ {value} + +
+
+ ); +} + +function Calendar() { + const { domain = 'cell' } = useConfig(); + const cellHost = `calendar.${domain}`; + const [users, setUsers] = useState([]); + const [status, setStatus] = useState(null); + const [isLoading, setIsLoading] = useState(true); + + useEffect(() => { + fetchCalendarData(); + }, []); + + const fetchCalendarData = async () => { + try { + const [usersResponse, statusResponse] = await Promise.all([ + calendarAPI.getUsers(), + calendarAPI.getStatus() + ]); + setUsers(usersResponse.data); + setStatus(statusResponse.data); + } catch (error) { + console.error('Failed to fetch calendar data:', error); + } finally { + setIsLoading(false); + } + }; + + if (isLoading) { + return ( +
+
+
+ ); + } + + return ( +
+
+

Calendar & Contacts

+

Radicale CalDAV / CardDAV server

+
+ +
+ {/* Connection Info */} +
+
+ +

Connect your device

+
+

+ Use these settings in your calendar / contacts app (iOS, Android, Thunderbird, etc.) +

+
+ + + + + + +
+

+ Requires VPN connection. DNS server must be set to 172.20.0.3. +

+
+ + {/* iOS / Android quick guide */} +
+
+ +

Quick setup guide

+
+
+
+

iOS (Settings → Calendar → Accounts)

+
    +
  1. Add Account → Other → Add CalDAV Account
  2. +
  3. Server: {cellHost}
  4. +
  5. Enter username & password
  6. +
  7. For contacts: Add CardDAV Account, same server
  8. +
+
+
+

Android (DAVx⁵ app)

+
    +
  1. Install DAVx⁵ from Play Store / F-Droid
  2. +
  3. Login with URL: http://{cellHost}/
  4. +
  5. Select calendars & address books to sync
  6. +
+
+
+

Thunderbird

+
    +
  1. Calendar → New Calendar → On the Network
  2. +
  3. Format: CalDAV, Location: http://{cellHost}/
  4. +
+
+
+
+ + {/* Status */} +
+
+ +

Service Status

+
+ {status ? ( +
+
+ Radicale: + Running +
+
+ CalDAV: + Active +
+
+ CardDAV: + Active +
+
+ ) : ( +

Status unavailable

+ )} +
+ + {/* Users */} +
+
+ +

Calendar Users

+
+
+ {users.length > 0 ? ( + users.map((user, index) => ( +
+ {user.username} + {user.calendars || 0} calendars +
+ )) + ) : ( +

No calendar users configured

+ )} +
+
+
+
+ ); +} + +export default Calendar; diff --git a/webui/src/pages/CellNetwork.jsx b/webui/src/pages/CellNetwork.jsx new file mode 100644 index 0000000..271f144 --- /dev/null +++ b/webui/src/pages/CellNetwork.jsx @@ -0,0 +1,323 @@ +import { useState, useEffect } from 'react'; +import { Link2, Link2Off, Copy, CheckCheck, RefreshCw, Plug, Unplug, Globe, Wifi } from 'lucide-react'; +import { cellLinkAPI } from '../services/api'; +import { useConfig } from '../contexts/ConfigContext'; +import QRCode from 'qrcode'; + +function CopyButton({ text, small }) { + const [copied, setCopied] = useState(false); + const copy = () => { + navigator.clipboard.writeText(text); + setCopied(true); + setTimeout(() => setCopied(false), 1500); + }; + const sz = small ? 'h-3.5 w-3.5' : 'h-4 w-4'; + return ( + + ); +} + +function StatusDot({ online }) { + if (online === null || online === undefined) { + return ; + } + return online + ? + : ; +} + +function Toast({ toasts }) { + return ( +
+ {toasts.map(t => ( +
+ {t.msg} +
+ ))} +
+ ); +} + +function useToasts() { + const [toasts, setToasts] = useState([]); + const add = (msg, type = 'success') => { + const id = Date.now(); + setToasts(p => [...p, { id, msg, type }]); + setTimeout(() => setToasts(p => p.filter(t => t.id !== id)), 4000); + }; + return [toasts, add]; +} + +export default function CellNetwork() { + const { cell_name = 'mycell', domain = 'cell' } = useConfig(); + const [toasts, addToast] = useToasts(); + + const [invite, setInvite] = useState(null); + const [inviteQr, setInviteQr] = useState(''); + const [inviteLoading, setInviteLoading] = useState(true); + + const [connections, setConnections] = useState([]); + const [connsLoading, setConnsLoading] = useState(true); + + const [pasteText, setPasteText] = useState(''); + const [connecting, setConnecting] = useState(false); + + useEffect(() => { + loadInvite(); + loadConnections(); + }, []); + + const loadInvite = async () => { + setInviteLoading(true); + try { + const r = await cellLinkAPI.getInvite(); + setInvite(r.data); + const qr = await QRCode.toDataURL(JSON.stringify(r.data), { width: 200, margin: 1 }); + setInviteQr(qr); + } catch (e) { + addToast('Failed to load invite', 'error'); + } finally { + setInviteLoading(false); + } + }; + + const loadConnections = async () => { + setConnsLoading(true); + try { + const r = await cellLinkAPI.listConnections(); + // Enrich with live status + const enriched = await Promise.all( + (r.data || []).map(async (conn) => { + try { + const s = await cellLinkAPI.getStatus(conn.cell_name); + return { ...conn, online: s.data.online, last_handshake: s.data.last_handshake }; + } catch { + return { ...conn, online: false }; + } + }) + ); + setConnections(enriched); + } catch { + addToast('Failed to load connections', 'error'); + } finally { + setConnsLoading(false); + } + }; + + const handleConnect = async () => { + if (!pasteText.trim()) return; + let parsed; + try { + parsed = JSON.parse(pasteText.trim()); + } catch { + addToast('Invalid JSON — paste the full invite from the other cell', 'error'); + return; + } + setConnecting(true); + try { + await cellLinkAPI.addConnection(parsed); + addToast(`Connected to cell "${parsed.cell_name}"`); + setPasteText(''); + loadConnections(); + } catch (e) { + addToast(e?.response?.data?.error || 'Connection failed', 'error'); + } finally { + setConnecting(false); + } + }; + + const handleDisconnect = async (name) => { + if (!window.confirm(`Disconnect from cell "${name}"?`)) return; + try { + await cellLinkAPI.removeConnection(name); + addToast(`Disconnected from "${name}"`); + loadConnections(); + } catch (e) { + addToast(e?.response?.data?.error || 'Disconnect failed', 'error'); + } + }; + + const inviteJson = invite ? JSON.stringify(invite, null, 2) : ''; + + return ( +
+ + +
+

Cell Network

+

+ Connect multiple PIC cells into a mesh — site-to-site WireGuard tunnels with automatic DNS forwarding. +

+
+ +
+ + {/* ── This cell's invite ─────────────────────────────────────────── */} +
+
+
+ +

Your Cell's Invite

+
+ +
+ + {inviteLoading ? ( +
+
+
+ ) : invite ? ( +
+
+
+ Cell + {invite.cell_name} +
+
+ Domain + {invite.domain} +
+
+ Endpoint + {invite.endpoint || '(no external IP)'} +
+
+ VPN subnet + {invite.vpn_subnet} +
+
+ +
+
+ Invite JSON + +
+
+                  {inviteJson}
+                
+
+ + {inviteQr && ( +
+

Or scan with phone camera

+ Invite QR +
+ )} + +

+ Share this JSON with the admin of another PIC cell. They paste it in "Connect to Cell" on their side. +

+
+ ) : ( +

Could not load invite.

+ )} +
+ + {/* ── Connect to another cell ────────────────────────────────────── */} +
+
+ +

Connect to Another Cell

+
+ +
+

+ Paste the invite JSON from the other cell's "Your Cell's Invite" panel: +

+