diff --git a/Makefile b/Makefile index f9f34d6..0d089b2 100644 --- a/Makefile +++ b/Makefile @@ -225,7 +225,7 @@ test-unit: pytest tests/ test-coverage: - pytest tests/ api/tests/ --cov=api --cov-report=html --cov-report=term-missing -v + pytest tests/ api/tests/ --cov=api --cov-report=html --cov-report=term-missing --cov-fail-under=70 -v test-api: cd api && python3 -m pytest tests/test_api_endpoints.py -v diff --git a/README.md b/README.md index d554176..0063e7f 100644 --- a/README.md +++ b/README.md @@ -1,539 +1,267 @@ - -# Personal Internet Cell - -## ๐ŸŒŸ Overview - -The Personal Internet Cell is a **production-grade, self-hosted, decentralized digital infrastructure** that empowers you to: - -- **Host your own services**: Email, calendar, contacts, files, DNS, DHCP, NTP -- **Secure mesh networking**: Connect with trusted peers via WireGuard VPN -- **Advanced routing**: VPN gateway, NAT, firewall, exit nodes, and bridge routing -- **Enterprise security**: Self-hosted CA, certificate management, trust systems -- **Modern management**: RESTful API, enhanced CLI, and comprehensive monitoring -- **Event-driven architecture**: Service orchestration and real-time communication - ---- - -## ๐Ÿš€ Key Features - -### ๐Ÿ”ง **Core Services** -- **Network Services**: DNS, DHCP, NTP with dynamic management -- **VPN & Mesh**: WireGuard-based peer federation with dynamic IP updates -- **Digital Services**: Email (SMTP/IMAP), Calendar/Contacts (CalDAV/CardDAV), File Storage (WebDAV) -- **Security**: Self-hosted Certificate Authority, Age/Fernet encryption, trust management -- **Container Orchestration**: Docker-based service management and deployment - -### ๐Ÿ—๏ธ **Architecture Highlights** -- **BaseServiceManager**: Unified interface across all 10 service managers -- **Event-Driven Service Bus**: Real-time service communication and orchestration -- **Centralized Configuration**: Type-safe validation, backup/restore, import/export -- **Comprehensive Logging**: Structured 
JSON logs with rotation, search, and export -- **Enhanced CLI**: Interactive mode, batch operations, service wizards -- **Health Monitoring**: Real-time health checks and performance metrics - -### ๐Ÿ“Š **Production Features** -- **Service Orchestration**: Automatic service dependency management -- **Configuration Management**: Schema validation, versioning, and migration -- **Error Handling**: Standardized error handling and recovery mechanisms -- **Testing**: Comprehensive test suite with 77%+ coverage -- **Documentation**: Complete API documentation and usage guides - ---- - -## ๐Ÿ“‹ Table of Contents - -1. [Quick Start](#quick-start) -2. [Architecture](#architecture) -3. [Service Managers](#service-managers) -4. [API Reference](#api-reference) -5. [CLI Guide](#cli-guide) -6. [Configuration](#configuration) -7. [Security](#security) -8. [Development](#development) -9. [Testing](#testing) -10. [Deployment](#deployment) -11. [Contributing](#contributing) -12. [License](#license) - ---- - -## ๐Ÿš€ Quick Start - -### Prerequisites - -- **Debian/Ubuntu** host (apt-based). All other dependencies are installed automatically. -- **2 GB+ RAM, 10 GB+ disk space** -- **Open ports**: 53 (DNS), 80/443 (HTTP/S), 3000 (API), 8081 (Web UI), 51820/udp (WireGuard) - -### 1. Install - -```bash -git clone pic -cd pic - -# Install all system dependencies (docker, python3, python3-cryptography, etc.) -make check-deps - -# Default cell (name=mycell, domain=cell, VPN=10.0.0.1/24, port=51820) -make setup -make start - -# Custom cell โ€” use when installing a second cell on a different host -CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start -``` - -`make check-deps` installs python3, python3-cryptography, docker, docker-compose, curl, openssl, git via apt and adds the current user to the docker group. - -`make setup` generates WireGuard keys, writes configs, and creates all data directories. - -`make start` builds and brings up all 12 Docker containers. - -### 2. 
Access - -| Service | URL | -|---------|-----| -| Web UI | `http://:8081` | -| API | `http://:3000` | -| Health | `http://:3000/health` | - -On a WireGuard client: `http://mycell.cell` (or whatever your cell name is). - -### 3. Local dev (no Docker) - -```bash -pip install -r api/requirements.txt -python api/app.py # API on :3000 - -cd webui && npm install && npm run dev # React UI on :5173 (proxies API to :3000) -``` - ---- - -## ๐Ÿ› ๏ธ Management Commands - -```bash -# First install -make check-deps # install all system packages via apt -make setup # generate keys, write configs -make start # start all 12 containers - -# Daily operations -make status # container status + API health -make logs # follow all logs -make logs-api # follow logs for one service (api, dns, wg, mail, caddy, ...) -make shell-api # open a shell inside a container - -# Deploy latest code -make update # git pull + rebuild + restart - -# Full wipe and reinstall (useful on test machine) -make reinstall # stop, wipe config/data, setup, start fresh - -# Remove everything -make uninstall # stop + remove images; prompts whether to also wipe config/data - -# Maintenance -make backup # tar config/ + data/ into backups/ -make restore # list available backups -make clean # remove containers/volumes, keep config/data - -# Tests -make test # run all tests -make test-coverage # tests + HTML coverage report -``` - ---- - -## ๐Ÿ”— Connecting Two Cells (PIC Mesh) - -Two PIC instances can form a mesh โ€” full site-to-site WireGuard tunnels with -automatic DNS forwarding so each cell's services are reachable from the other. - -### Install the second cell - -```bash -# On the second host (different VPN subnet; port 51820 is fine โ€” different machine) -CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start -``` - -### Exchange invites (two pastes, two clicks) - -1. On **Cell A** โ†’ open Web UI โ†’ **Cell Network** โ†’ copy the invite JSON. -2. 
On **Cell B** โ†’ **Cell Network** โ†’ paste into "Connect to Another Cell" โ†’ click **Connect**. -3. On **Cell B** โ†’ copy its invite JSON. -4. On **Cell A** โ†’ paste Cell B's invite โ†’ click **Connect**. - -Both cells now have: -- A site-to-site WireGuard peer (AllowedIPs = remote cell's VPN subnet). -- A CoreDNS forwarding block so `*.pic1.cell` resolves across the tunnel. - -The **Connected Cells** panel shows live handshake status (green = online). - -### Same-LAN tip - -If both cells share the same external IP (behind NAT), the auto-detected -endpoint in the invite will be the public IP. Replace it with the LAN IP -before clicking Connect so traffic stays local: - -```json -{ "endpoint": "192.168.31.50:51820", ... } -``` - ---- - -## ๐Ÿ—๏ธ Architecture - -### **Service Manager Architecture** - -All services inherit from `BaseServiceManager`, providing: -- **Unified Interface**: Consistent methods across all services -- **Health Monitoring**: Standardized health checks and metrics -- **Error Handling**: Centralized error handling and logging -- **Configuration**: Common configuration management patterns - -### **Event-Driven Service Bus** - -```python -# Services communicate via events -service_bus.register_service('network', network_manager) -service_bus.register_service('wireguard', wireguard_manager) -service_bus.publish_event(EventType.SERVICE_STARTED, 'network', data) -``` - -### **Service Dependencies** - -``` -wireguard โ†’ network -email โ†’ network, vault -calendar โ†’ network, vault -files โ†’ network, vault -routing โ†’ network, wireguard -vault โ†’ network -``` - ---- - -## ๐Ÿ”ง Service Managers - -### **Core Network Services** -- **NetworkManager**: DNS, DHCP, NTP with dynamic zone management -- **WireGuardManager**: VPN configuration, peer management, key generation -- **PeerRegistry**: Peer registration, IP updates, trust management - -### **Digital Services** -- **EmailManager**: SMTP/IMAP email with user management -- 
**CalendarManager**: CalDAV/CardDAV calendar and contacts -- **FileManager**: WebDAV file storage with user directories - -### **Infrastructure Services** -- **RoutingManager**: NAT, firewall, advanced routing (exit/bridge/split) -- **VaultManager**: Certificate authority, trust management, encryption -- **ContainerManager**: Docker orchestration and container management -- **CellManager**: Overall cell configuration and service orchestration - ---- - -## ๐Ÿ“ก API Reference - -### **Core Endpoints** - -```bash -# Service Status -GET /api/services/status -GET /api/services/connectivity - -# Configuration Management -GET /api/config -PUT /api/config -POST /api/config/backup -POST /api/config/restore/ - -# Service Bus -GET /api/services/bus/status -GET /api/services/bus/events -POST /api/services/bus/services//start - -# Logging -GET /api/logs/services/ -POST /api/logs/search -POST /api/logs/export -``` - -### **Service-Specific Endpoints** - -```bash -# Network Services -GET /api/dns/records -POST /api/dns/records -GET /api/dhcp/leases -GET /api/ntp/status - -# WireGuard & Peers -GET /api/wireguard/peers -POST /api/wireguard/peers -GET /api/wireguard/status - -# Digital Services -GET /api/email/users -GET /api/calendar/users -GET /api/files/users - -# Routing & Security -GET /api/routing/status -POST /api/routing/nat -GET /api/vault/certificates -``` - ---- - -## ๐Ÿ’ป CLI Guide - -### **Enhanced CLI Features** - -```bash -# Interactive Mode -python api/enhanced_cli.py --interactive - -# Batch Operations -python api/enhanced_cli.py --batch "status" "services" "health" - -# Configuration Management -python api/enhanced_cli.py --export-config json -python api/enhanced_cli.py --import-config config.json - -# Service Wizards -python api/enhanced_cli.py --wizard network -python api/enhanced_cli.py --wizard email - -# Health Monitoring -python api/enhanced_cli.py --health -python api/enhanced_cli.py --logs network -``` - -### **Service Management** - -```bash -# Show status 
-python api/enhanced_cli.py --status - -# List services -python api/enhanced_cli.py --services - -# Peer management -python api/enhanced_cli.py --peers - -# Service logs -python api/enhanced_cli.py --logs wireguard -``` - ---- - -## โš™๏ธ Configuration - -### **Configuration Management** - -```bash -# Export configuration -curl -X GET http://localhost:3000/api/config - -# Update configuration -curl -X PUT http://localhost:3000/api/config \ - -H "Content-Type: application/json" \ - -d '{"cell_name": "mycell", "domain": "mycell.cell"}' - -# Backup configuration -curl -X POST http://localhost:3000/api/config/backup -``` - -### **Service Configuration** - -Each service has its own configuration schema: -- **Network**: DNS zones, DHCP ranges, NTP servers -- **WireGuard**: Interface settings, peer configurations -- **Email**: Domain settings, user accounts, mailboxes -- **Calendar**: User accounts, calendar sharing -- **Files**: Storage quotas, user directories -- **Routing**: NAT rules, firewall policies, routing tables - ---- - -## ๐Ÿ”’ Security - -### **Certificate Management** -- **Self-hosted CA**: Issue and manage TLS certificates -- **Certificate Lifecycle**: Generate, renew, revoke certificates -- **Trust Management**: Direct and indirect trust relationships -- **Age Encryption**: Modern encryption for sensitive data - -### **Network Security** -- **WireGuard VPN**: Secure peer-to-peer communication -- **Firewall & NAT**: Granular access control -- **Service Isolation**: Docker containers for each service -- **Input Validation**: All API endpoints validate input - -### **Data Protection** -- **Encrypted Storage**: Sensitive data encrypted at rest -- **Secure Communication**: TLS for all API endpoints -- **Access Control**: Role-based access for services -- **Audit Logging**: Comprehensive security event logging - ---- - -## ๐Ÿ› ๏ธ Development - -### **Project Structure** - -``` -PersonalInternetCell/ -โ”œโ”€โ”€ api/ # Backend API server -โ”‚ โ”œโ”€โ”€ 
base_service_manager.py # Base class for all services -โ”‚ โ”œโ”€โ”€ config_manager.py # Configuration management -โ”‚ โ”œโ”€โ”€ service_bus.py # Event-driven service bus -โ”‚ โ”œโ”€โ”€ log_manager.py # Comprehensive logging -โ”‚ โ”œโ”€โ”€ enhanced_cli.py # Enhanced CLI tool -โ”‚ โ”œโ”€โ”€ network_manager.py # DNS, DHCP, NTP -โ”‚ โ”œโ”€โ”€ wireguard_manager.py # VPN and peer management -โ”‚ โ”œโ”€โ”€ email_manager.py # Email services -โ”‚ โ”œโ”€โ”€ calendar_manager.py # Calendar services -โ”‚ โ”œโ”€โ”€ file_manager.py # File storage -โ”‚ โ”œโ”€โ”€ routing_manager.py # Routing and NAT -โ”‚ โ”œโ”€โ”€ vault_manager.py # Security and trust -โ”‚ โ”œโ”€โ”€ container_manager.py # Container orchestration -โ”‚ โ”œโ”€โ”€ cell_manager.py # Overall cell management -โ”‚ โ”œโ”€โ”€ peer_registry.py # Peer registration -โ”‚ โ””โ”€โ”€ app.py # Main API server -โ”œโ”€โ”€ webui/ # React frontend -โ”œโ”€โ”€ config/ # Configuration files -โ”œโ”€โ”€ data/ # Persistent data -โ”œโ”€โ”€ tests/ # Test suite -โ””โ”€โ”€ docker-compose.yml # Container orchestration -``` - -### **Running Locally** - -```bash -# Install dependencies -pip install -r api/requirements.txt - -# Start the API server -python api/app.py - -# Run tests -python api/test_enhanced_api.py - -# Start frontend (if available) -cd webui && bun install && npm run dev -``` - -### **Service Development** - -```python -from base_service_manager import BaseServiceManager - -class MyServiceManager(BaseServiceManager): - def __init__(self, data_dir='/app/data', config_dir='/app/config'): - super().__init__('myservice', data_dir, config_dir) - - def get_status(self) -> Dict[str, Any]: - # Implement service status - pass - - def test_connectivity(self) -> Dict[str, Any]: - # Implement connectivity test - pass -``` - ---- - -## ๐Ÿงช Testing - -### **Test Suite** - -```bash -# Run all tests -python api/test_enhanced_api.py - -# Test specific components -python -m pytest api/tests/test_network_manager.py -python -m pytest 
api/tests/test_service_bus.py - -# Coverage report -coverage run -m pytest api/tests/ -coverage html -``` - -### **Test Coverage** -- **BaseServiceManager**: 100% coverage -- **ConfigManager**: 95%+ coverage -- **ServiceBus**: 95%+ coverage -- **LogManager**: 95%+ coverage -- **All Service Managers**: 77%+ overall coverage - ---- - -## ๐Ÿš€ Deployment - -### **Docker Deployment** - -```bash -# Production deployment -docker-compose -f docker-compose.prod.yml up -d - -# Development deployment -docker-compose up --build -``` - -### **System Requirements** -- **CPU**: 2+ cores -- **RAM**: 2GB+ (4GB recommended) -- **Storage**: 10GB+ (SSD recommended) -- **Network**: Stable internet connection - -### **Monitoring** - -```bash -# Health check -curl http://localhost:3000/health - -# Service status -curl http://localhost:3000/api/services/status - -# Service connectivity -curl http://localhost:3000/api/services/connectivity -``` - ---- - -## ๐Ÿค Contributing - -1. **Fork** the repository -2. **Create** a feature branch -3. **Implement** your changes -4. **Add tests** for new functionality -5. **Submit** a pull request - -### **Development Guidelines** -- Follow the existing code style -- Add comprehensive tests -- Update documentation -- Use the BaseServiceManager pattern -- Implement proper error handling - ---- - -## ๐Ÿ“„ License - -MIT License - see [LICENSE](LICENSE) file for details. - ---- - -## ๐Ÿ“š Documentation - -- **[Quick Start Guide](QUICKSTART.md)**: Get up and running quickly -- **[API Documentation](api/API_DOCUMENTATION.md)**: Complete API reference -- **[Comprehensive Improvements](COMPREHENSIVE_IMPROVEMENTS_SUMMARY.md)**: Detailed architecture overview -- **[Enhanced API Improvements](ENHANCED_API_IMPROVEMENTS.md)**: Technical implementation details - ---- - -**๐ŸŒŸ The Personal Internet Cell - Your self-hosted, production-grade digital infrastructure!** + +# Personal Internet Cell (PIC) + +A self-hosted digital infrastructure platform. 
One stack, one API, one UI โ€” managing DNS, DHCP, NTP, WireGuard VPN, email, calendar/contacts, file storage, and a reverse proxy on your own hardware. + +--- + +## What it does + +- **Network services** โ€” CoreDNS, dnsmasq DHCP, chrony NTP, all dynamically managed +- **WireGuard VPN** โ€” peer lifecycle, QR-code provisioning, per-peer service access control +- **Digital services** โ€” Email (Postfix/Dovecot), Calendar/Contacts (Radicale CalDAV), Files (WebDAV + Filegator) +- **Reverse proxy** โ€” Caddy with per-service virtual IPs; subdomains like `calendar.mycell.cell` work on VPN clients automatically +- **Certificate authority** โ€” self-hosted CA via VaultManager +- **Cell mesh** โ€” connect two PIC instances with site-to-site WireGuard + DNS forwarding + +Everything is configured through a REST API and a React web UI. No manual config file editing needed for normal operations. + +--- + +## Quick Start + +### Prerequisites + +- Debian/Ubuntu host (apt-based) +- 2 GB+ RAM, 10 GB+ disk +- Open ports: 53 (DNS), 80 (HTTP), 3000 (API), 8081 (Web UI), 51820/udp (WireGuard) + +### Install + +```bash +git clone pic +cd pic + +# Install system deps (docker, python3, python3-cryptography, etc.) +make check-deps + +# Generate keys + write configs +make setup + +# Build and start all 12 containers +make start +``` + +`make setup` accepts overrides for a second cell on a different host: + +```bash +CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start +``` + +### Access + +| Service | URL | +|---------|-----| +| Web UI | `http://:8081` | +| API | `http://:3000` | +| Health | `http://:3000/health` | + +From a WireGuard client: `http://mycell.cell` (replace with your cell name/domain). 
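Since `/health` and the API port are documented above, a tiny probe script is a convenient smoke test after `make start`. This is a minimal sketch: the response body shape is not specified in this README, so the helper just parses whatever JSON the endpoint returns and reports reachability.

```python
import json
import urllib.error
import urllib.request


def check_health(base_url: str, timeout: float = 3.0) -> dict:
    """Probe the cell API's /health endpoint.

    Returns {'ok': True, 'body': <parsed JSON>} on success and
    {'ok': False, 'error': <message>} on any network or parse failure.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return {"ok": resp.status == 200, "body": json.loads(resp.read())}
    except (urllib.error.URLError, OSError, ValueError) as exc:
        return {"ok": False, "error": str(exc)}


if __name__ == "__main__":
    # Against a running cell this should print {'ok': True, ...}
    print(check_health("http://localhost:3000"))
```

The same function works from a WireGuard client by passing `http://mycell.cell:3000` (or your cell's domain) as `base_url`.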
+ +### Local dev (no Docker) + +```bash +pip install -r api/requirements.txt +python api/app.py # Flask API on :3000 + +cd webui && npm install && npm run dev # React UI on :5173 (proxies /api โ†’ :3000) +``` + +--- + +## Management Commands + +```bash +# First install +make check-deps # install system packages via apt +make setup # generate keys, write configs, create data dirs +make start # start all 12 containers + +# Daily operations +make status # container status + API health +make logs # follow all container logs +make logs-api # follow logs for one service (api, dns, wg, mail, caddy, ...) +make shell-api # shell inside a container + +# Deploy latest code +make update # git pull + rebuild api image + restart + +# Maintenance +make backup # tar config/ + data/ into backups/ +make restore # list available backups and restore +make clean # remove containers/volumes, keep config/data + +# Full wipe (test machines) +make reinstall # stop, wipe config/data, setup, start fresh +make uninstall # stop + remove images; prompts to also wipe config/data + +# Tests +make test # run full pytest suite +make test-coverage # tests + HTML coverage report in htmlcov/ +``` + +--- + +## Connecting Two Cells (PIC Mesh) + +Two PIC instances form a mesh: site-to-site WireGuard tunnels with automatic DNS forwarding so each cell's services resolve from the other. + +### Exchange invites + +1. On **Cell A** โ†’ Web UI โ†’ **Cell Network** โ†’ copy the invite JSON. +2. On **Cell B** โ†’ **Cell Network** โ†’ paste into "Connect to Another Cell" โ†’ **Connect**. +3. On **Cell B** โ†’ copy its invite JSON. +4. On **Cell A** โ†’ paste Cell B's invite โ†’ **Connect**. + +Both cells now have a WireGuard peer with `AllowedIPs = remote VPN subnet` and a CoreDNS forwarding block so `*.pic1.cell` resolves across the tunnel. 
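Before pasting an invite into "Connect to Another Cell", it can help to sanity-check the JSON by hand. The sketch below is illustrative only: the docs above show just the `endpoint` field, so the other field names (`public_key`, `subnet`, `cell_name`) are assumptions about what a site-to-site invite would plausibly carry, not the actual schema.

```python
def validate_invite(invite: dict) -> list:
    """Return a list of problems with a pasted cell invite (empty = looks OK).

    NOTE: only 'endpoint' appears in the documentation; 'public_key',
    'subnet', and 'cell_name' are hypothetical field names used for
    illustration.
    """
    required = ("endpoint", "public_key", "subnet", "cell_name")
    problems = [f"missing field: {f}" for f in required if f not in invite]

    # An endpoint must be host:port so the WireGuard peer can be configured.
    endpoint = invite.get("endpoint", "")
    host, sep, port = endpoint.rpartition(":")
    if "endpoint" in invite and (not sep or not host or not port.isdigit()):
        problems.append("endpoint must look like 'host:port'")
    return problems
```

This is also the natural place to apply the same-LAN tip: if `validate_invite` passes but both cells are behind the same NAT, rewrite `invite["endpoint"]` to the LAN address before connecting.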
+ +### Same-LAN tip + +If both cells share the same external IP (behind NAT), replace the auto-detected endpoint with the LAN IP before connecting: + +```json +{ "endpoint": "192.168.31.50:51820", ... } +``` + +--- + +## Architecture + +### Stack + +``` +cell-caddy (Caddy) :80/:443 + per-service virtual IPs +cell-api (Flask :3000) REST API + config management + container orchestration +cell-webui (Nginx :8081) React UI +cell-dns (CoreDNS :53) internal DNS + per-peer ACLs +cell-dhcp (dnsmasq) DHCP + static reservations +cell-ntp (chrony) NTP +cell-wireguard WireGuard VPN +cell-mail (docker-mailserver) SMTP/IMAP +cell-radicale CalDAV/CardDAV :5232 +cell-webdav WebDAV :80 +cell-filegator file manager UI :8080 +cell-rainloop webmail :8888 +``` + +All containers share a custom Docker bridge network. Static IPs are assigned in `docker-compose.yml`. Caddy adds per-service virtual IPs to its own interface at API startup so `calendar.`, `files.`, etc. route to the right container. + +### Backend (`api/`) + +Service managers (`network_manager.py`, `wireguard_manager.py`, `peer_registry.py`, etc.) all inherit `BaseServiceManager`. `app.py` contains all Flask routes โ€” one file, organized by service. + +`ConfigManager` (`config_manager.py`) is the single source of truth. Config lives in `config/api/cell_config.json`. All managers read/write through it. + +`ip_utils.py` owns all container IP logic via `CONTAINER_OFFSETS` โ€” do not hardcode IPs elsewhere. + +When a config change requires recreating the Docker network (e.g. `ip_range` change), the API spawns a helper container that outlives cell-api to run `docker compose down && up`. Other restarts run `compose up -d --no-deps ` directly. + +### Frontend (`webui/`) + +React 18 + Vite + Tailwind CSS. All API calls go through `src/services/api.js` (Axios). Vite dev server proxies `/api` to `localhost:3000`. Pages in `src/pages/`, shared components in `src/components/`. 
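The "`ip_utils.py` owns all container IP logic via `CONTAINER_OFFSETS`" rule can be sketched with the standard `ipaddress` module. The offset values below are hypothetical (the real table lives in `api/ip_utils.py`), and `is_private` only approximates the API's RFC 1918 check, but the derivation pattern is the point: every static address is base network plus a per-container offset, so changing `ip_range` moves the whole stack consistently.

```python
import ipaddress

# Hypothetical offsets -- the real mapping lives in api/ip_utils.py (CONTAINER_OFFSETS).
CONTAINER_OFFSETS = {"cell-api": 2, "cell-dns": 3, "cell-caddy": 4}


def container_ip(ip_range: str, name: str) -> str:
    """Derive a container's static IP from the cell's ip_range and its offset."""
    net = ipaddress.ip_network(ip_range, strict=True)
    # Approximates the RFC 1918 validation the API and UI perform.
    if not net.is_private:
        raise ValueError(f"{ip_range} is not a private (RFC 1918) range")
    return str(net.network_address + CONTAINER_OFFSETS[name])


print(container_ip("172.20.0.0/16", "cell-dns"))  # -> 172.20.0.3
```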
+ +### Project layout + +``` +pic/ +โ”œโ”€โ”€ api/ # Flask API + all service managers +โ”‚ โ”œโ”€โ”€ app.py # all routes (~2700 lines) +โ”‚ โ”œโ”€โ”€ config_manager.py # unified config CRUD +โ”‚ โ”œโ”€โ”€ ip_utils.py # IP/CIDR helpers + Caddyfile generator +โ”‚ โ”œโ”€โ”€ firewall_manager.py # iptables (via cell-wireguard) + Corefile +โ”‚ โ”œโ”€โ”€ network_manager.py # DNS zones, DHCP, NTP +โ”‚ โ”œโ”€โ”€ wireguard_manager.py +โ”‚ โ”œโ”€โ”€ peer_registry.py +โ”‚ โ”œโ”€โ”€ vault_manager.py +โ”‚ โ”œโ”€โ”€ email_manager.py +โ”‚ โ”œโ”€โ”€ calendar_manager.py +โ”‚ โ”œโ”€โ”€ file_manager.py +โ”‚ โ””โ”€โ”€ container_manager.py +โ”œโ”€โ”€ webui/ # React frontend +โ”œโ”€โ”€ config/ # Config files (bind-mounted into containers) +โ”‚ โ”œโ”€โ”€ api/cell_config.json โ† live config +โ”‚ โ”œโ”€โ”€ caddy/Caddyfile +โ”‚ โ”œโ”€โ”€ dns/Corefile +โ”‚ โ””โ”€โ”€ ... +โ”œโ”€โ”€ data/ # Persistent data (git-ignored) +โ”œโ”€โ”€ tests/ # pytest suite (372 tests, 27 files) +โ”œโ”€โ”€ docker-compose.yml +โ””โ”€โ”€ Makefile +``` + +--- + +## API Reference + +### Config + +``` +GET /api/config full config + service IPs +PUT /api/config update identity or service config +GET /api/config/pending pending restart info +POST /api/config/apply apply pending restart +POST /api/config/backup create backup +POST /api/config/restore/ restore from backup +``` + +### Network + +``` +GET /api/dns/records +POST /api/dns/records +GET /api/dhcp/leases +GET /api/dhcp/reservations +POST /api/dhcp/reservations +``` + +### WireGuard & Peers + +``` +GET /api/wireguard/status +GET /api/wireguard/peers +POST /api/wireguard/peers +GET /api/peers +POST /api/peers +PUT /api/peers/ +DELETE /api/peers/ +GET /api/peers//config peer config + QR code +``` + +### Containers & Health + +``` +GET /api/containers +POST /api/containers//restart +GET /health +GET /api/services/status +``` + +--- + +## Testing + +```bash +make test # run full suite +make test-coverage # coverage report in htmlcov/ +pytest tests/test_.py # single 
file +pytest tests/ -k "test_name" # single test +``` + +Tests live in `tests/` and use `unittest.TestCase` collected by pytest. External system calls (Docker, iptables, file writes) are mocked with `unittest.mock.patch`. + +Known coverage gaps: `write_caddyfile`, `POST /api/config/apply` (helper container path), `PUT /api/config` 400 validation paths. These are the highest-risk untested paths. + +--- + +## Security Notes + +- The API is access-controlled by `is_local_request()` โ€” it checks whether the request comes from a local/loopback/cell-network IP. Sensitive endpoints (containers, vault) are restricted to local access only. +- All per-peer service access is enforced via iptables rules inside `cell-wireguard` and CoreDNS ACL blocks. +- The Docker socket is mounted into `cell-api` for container management โ€” treat network access to port 3000 as privileged. +- `ip_range` must be an RFC-1918 CIDR (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16). The API and UI both validate this. + +--- + +## License + +MIT โ€” see [LICENSE](LICENSE). diff --git a/api/app.py b/api/app.py index 3cf4405..4d51e04 100644 --- a/api/app.py +++ b/api/app.py @@ -179,7 +179,6 @@ email_manager = EmailManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) calendar_manager = CalendarManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) file_manager = FileManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) routing_manager = RoutingManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) -cell_manager = CellManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) app.vault_manager = VaultManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) container_manager = ContainerManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR) cell_link_manager = CellLinkManager( @@ -345,10 +344,12 @@ def is_local_request(): if _allowed(remote_addr): return True + # Only trust the LAST X-Forwarded-For entry โ€” that is what Caddy appended. + # Iterating all entries allows clients to spoof local origin by prepending 127.0.0.1. 
if forwarded_for: - for addr in forwarded_for.split(','): - if _allowed(addr.strip()): - return True + last_hop = forwarded_for.split(',')[-1].strip() + if _allowed(last_hop): + return True return False @app.route('/health', methods=['GET']) @@ -481,6 +482,8 @@ def update_config(): _addr = data['wireguard'].get('address') if _addr: import ipaddress as _ipa2 + if '/' not in str(_addr): + return jsonify({'error': 'wireguard.address must include a prefix length (e.g. 10.0.0.1/24)'}), 400 try: _ipa2.ip_interface(_addr) except ValueError as _e: @@ -1166,10 +1169,13 @@ def get_dhcp_leases(): def add_dhcp_reservation(): try: data = request.get_json(silent=True) - if data is None: + if not data: return jsonify({"error": "No data provided"}), 400 - result = network_manager.add_dhcp_reservation(data) - return jsonify(result) + for field in ('mac', 'ip'): + if field not in data: + return jsonify({"error": f"Missing required field: {field}"}), 400 + result = network_manager.add_dhcp_reservation(data['mac'], data['ip'], data.get('hostname', '')) + return jsonify({"success": result}) except Exception as e: logger.error(f"Error adding DHCP reservation: {e}") return jsonify({"error": str(e)}), 500 @@ -1179,8 +1185,10 @@ def remove_dhcp_reservation(): """Remove DHCP reservation.""" try: data = request.get_json(silent=True) - result = network_manager.remove_dhcp_reservation(data) - return jsonify(result) + if not data or 'mac' not in data: + return jsonify({"error": "Missing required field: mac"}), 400 + result = network_manager.remove_dhcp_reservation(data['mac']) + return jsonify({"success": result}) except Exception as e: logger.error(f"Error removing DHCP reservation: {e}") return jsonify({"error": str(e)}), 500 @@ -1218,10 +1226,7 @@ def get_dns_status(): @app.route('/api/network/test', methods=['POST']) def test_network(): try: - data = request.get_json(silent=True) - if data is None: - return jsonify({"error": "No data provided"}), 400 - result = 
network_manager.test_connectivity(data) + result = network_manager.test_connectivity() return jsonify(result) except Exception as e: logger.error(f"Error testing network: {e}") @@ -1572,6 +1577,12 @@ def add_peer(): assigned_ip = data.get('ip') or _next_peer_ip() + # Validate service_access if provided + _valid_services = {'calendar', 'files', 'mail', 'webdav'} + service_access = data.get('service_access', list(_valid_services)) + if not isinstance(service_access, list) or not all(s in _valid_services for s in service_access): + return jsonify({"error": f"service_access must be a list of: {sorted(_valid_services)}"}), 400 + # Add peer to registry with all provided fields peer_info = { 'peer': data['name'], @@ -1584,7 +1595,7 @@ def add_peer(): 'persistent_keepalive': data.get('persistent_keepalive'), 'description': data.get('description'), 'internet_access': data.get('internet_access', True), - 'service_access': data.get('service_access', ['calendar', 'files', 'mail', 'webdav']), + 'service_access': service_access, 'peer_access': data.get('peer_access', True), 'config_needs_reinstall': False, } @@ -1651,10 +1662,17 @@ def clear_peer_reinstall(peer_name): @app.route('/api/peers/', methods=['DELETE']) def remove_peer(peer_name): - """Remove a peer.""" + """Remove a peer and clean up its firewall rules and DNS ACLs.""" try: + peer = peer_registry.get_peer(peer_name) + if not peer: + return jsonify({"message": f"Peer {peer_name} not found or already removed"}) + peer_ip = peer.get('ip') success = peer_registry.remove_peer(peer_name) if success: + if peer_ip: + firewall_manager.clear_peer_rules(peer_ip) + firewall_manager.apply_all_dns_rules(peer_registry.list_peers(), COREFILE_PATH, _configured_domain()) return jsonify({"message": f"Peer {peer_name} removed successfully"}) else: return jsonify({"message": f"Peer {peer_name} not found or already removed"}) @@ -2558,8 +2576,8 @@ def restart_container(name): @app.route('/api/containers//logs', methods=['GET']) def 
get_container_logs(name): # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 tail = request.args.get('tail', default=100, type=int) try: logs = container_manager.get_container_logs(name, tail=tail) @@ -2571,8 +2589,8 @@ def get_container_logs(name): @app.route('/api/containers//stats', methods=['GET']) def get_container_stats(name): # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 try: stats = container_manager.get_container_stats(name) return jsonify(stats) @@ -2583,16 +2601,16 @@ def get_container_stats(name): @app.route('/api/vault/secrets', methods=['GET']) def list_secrets(): # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 secrets = app.vault_manager.list_secrets() return jsonify({'secrets': secrets}) @app.route('/api/vault/secrets', methods=['POST']) def store_secret(): # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 data = request.get_json(silent=True) if not data or 'name' not in data or 'value' not in data: return jsonify({'error': 'Missing name or value'}), 400 @@ -2602,8 +2620,8 @@ def store_secret(): @app.route('/api/vault/secrets/', methods=['GET']) def get_secret(name): # Temporarily disable access control for debugging - # if not is_local_request(): - # return jsonify({'error': 'Access denied'}), 403 + if not is_local_request(): + return jsonify({'error': 'Access denied'}), 403 value = 
app.vault_manager.get_secret(name)
     if value is None:
         return jsonify({'error': 'Not found'}), 404
@@ -2612,8 +2630,8 @@ def get_secret(name):
 
 @app.route('/api/vault/secrets/<name>', methods=['DELETE'])
 def delete_secret(name):
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     result = app.vault_manager.delete_secret(name)
     return jsonify({'deleted': result})
@@ -2621,8 +2639,8 @@ def delete_secret(name):
 
 @app.route('/api/containers', methods=['POST'])
 def create_container():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     data = request.get_json(silent=True)
     if not data or 'image' not in data:
         return jsonify({'error': 'Missing image parameter'}), 400
@@ -2653,8 +2671,8 @@ def create_container():
 
 @app.route('/api/containers/<name>', methods=['DELETE'])
 def remove_container(name):
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     force = request.args.get('force', default=False, type=bool)
     success = container_manager.remove_container(name, force=force)
     return jsonify({'removed': success})
@@ -2662,8 +2680,8 @@ def remove_container(name):
 
 @app.route('/api/images', methods=['GET'])
 def list_images():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     images = container_manager.list_images()
     return jsonify(images)
@@ -2690,8 +2708,8 @@ def remove_image(image):
 
 @app.route('/api/volumes', methods=['GET'])
 def list_volumes():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     volumes = container_manager.list_volumes()
     return jsonify(volumes)
diff --git a/api/config_manager.py b/api/config_manager.py
index eb8f0fd..fd9cca5 100644
--- a/api/config_manager.py
+++ b/api/config_manager.py
@@ -117,11 +117,15 @@ class ConfigManager:
             return {}
 
     def _save_all_configs(self):
-        """Save all service configurations to the unified config file"""
+        """Save all service configurations to the unified config file (atomic write)."""
         try:
             self.config_file.parent.mkdir(parents=True, exist_ok=True)
-            with open(self.config_file, 'w') as f:
+            tmp = self.config_file.with_suffix('.tmp')
+            with open(tmp, 'w') as f:
                 json.dump(self.configs, f, indent=2)
+                f.flush()
+                os.fsync(f.fileno())
+            os.replace(tmp, self.config_file)
         except (PermissionError, OSError):
             pass
@@ -208,62 +212,98 @@ class ConfigManager:
         }
 
     def backup_config(self) -> str:
-        """Create a backup of all configurations"""
+        """Create a backup of cell_config.json, secrets, Caddyfile, .env, Corefile, and DNS zones."""
         try:
             timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
             backup_id = f"backup_{timestamp}"
             backup_path = self.backup_dir / backup_id
-
-            # Create backup directory
             backup_path.mkdir(parents=True, exist_ok=True)
-
-            # Copy all config files
+
+            # Primary config and secrets
             if self.config_file.exists():
                 shutil.copy2(self.config_file, backup_path / 'cell_config.json')
-
-            # Copy secrets file if it exists
             if self.secrets_file.exists():
                 shutil.copy2(self.secrets_file, backup_path / 'secrets.yaml')
-
-            # Create backup manifest
+
+            # Runtime-generated files that must match cell_config.json after restore
+            config_dir = Path(os.environ.get('CONFIG_DIR', '/app/config'))
+            data_dir = Path(os.environ.get('DATA_DIR', '/app/data'))
+            env_file = Path(os.environ.get('ENV_FILE', '/app/.env'))
+
+            extra = [
+                (config_dir / 'caddy' / 'Caddyfile', 'Caddyfile'),
+                (config_dir / 'dns' / 'Corefile', 'Corefile'),
+                (env_file, '.env'),
+            ]
+            for src, dest_name in extra:
+                if src.exists():
+                    shutil.copy2(src, backup_path / dest_name)
+
+            # DNS zone files
+            dns_data = data_dir / 'dns'
+            if dns_data.is_dir():
+                zones_dir = backup_path / 'dns_zones'
+                zones_dir.mkdir(exist_ok=True)
+                for zone_file in dns_data.glob('*.zone'):
+                    shutil.copy2(zone_file, zones_dir / zone_file.name)
+
             manifest = {
                 "backup_id": backup_id,
                 "timestamp": datetime.now().isoformat(),
                 "services": list(self.service_schemas.keys()),
-                "files": [f.name for f in backup_path.iterdir()]
+                "files": [f.name for f in backup_path.iterdir()],
             }
-
             with open(backup_path / 'manifest.json', 'w') as f:
                 json.dump(manifest, f, indent=2)
-
+
             logger.info(f"Created configuration backup: {backup_id}")
             return backup_id
-
+
         except Exception as e:
             logger.error(f"Error creating backup: {e}")
             raise
 
     def restore_config(self, backup_id: str) -> bool:
-        """Restore configuration from backup"""
+        """Restore cell_config.json, secrets, Caddyfile, .env, Corefile, and DNS zones from backup."""
         try:
             backup_path = self.backup_dir / backup_id
             if not backup_path.exists():
                 raise ValueError(f"Backup {backup_id} not found")
 
-            # Read manifest
             manifest_file = backup_path / 'manifest.json'
             if not manifest_file.exists():
                 raise ValueError(f"Backup manifest not found")
-            with open(manifest_file, 'r') as f:
-                manifest = json.load(f)
-            # Restore config files
+
+            # Restore primary config
             config_backup = backup_path / 'cell_config.json'
             if config_backup.exists():
                 shutil.copy2(config_backup, self.config_file)
-            # Restore secrets file if it exists
             secrets_backup = backup_path / 'secrets.yaml'
             if secrets_backup.exists():
                 shutil.copy2(secrets_backup, self.secrets_file)
-            # Reload configurations — restore only what was in the backup
+
+            # Restore runtime-generated files so they stay consistent with cell_config.json
+            config_dir = Path(os.environ.get('CONFIG_DIR', '/app/config'))
+            data_dir = Path(os.environ.get('DATA_DIR', '/app/data'))
+            env_file = Path(os.environ.get('ENV_FILE', '/app/.env'))
+
+            restore_map = [
+                (backup_path / 'Caddyfile', config_dir / 'caddy' / 'Caddyfile'),
+                (backup_path / 'Corefile', config_dir / 'dns' / 'Corefile'),
+                (backup_path / '.env', env_file),
+            ]
+            for src, dest in restore_map:
+                if src.exists():
+                    dest.parent.mkdir(parents=True, exist_ok=True)
+                    shutil.copy2(src, dest)
+
+            # Restore DNS zone files
+            zones_backup = backup_path / 'dns_zones'
+            if zones_backup.is_dir():
+                dns_data = data_dir / 'dns'
+                dns_data.mkdir(parents=True, exist_ok=True)
+                for zone_file in zones_backup.glob('*.zone'):
+                    shutil.copy2(zone_file, dns_data / zone_file.name)
+
             self.configs = self._load_all_configs()
             logger.info(f"Restored configuration from backup: {backup_id}")
             return True
diff --git a/api/firewall_manager.py b/api/firewall_manager.py
index 51d65e1..01572c5 100644
--- a/api/firewall_manager.py
+++ b/api/firewall_manager.py
@@ -276,14 +276,16 @@ def generate_corefile(peers: List[Dict[str, Any]], corefile_path: str = COREFILE
 }}
 
 {primary_zone_block}
-local.{domain} {{
-    file /data/local.zone
-    log
-}}
 """
+    # local.{domain} block intentionally omitted: /data/local.zone does not exist
+    # and CoreDNS logs errors on every reload for a missing zone file.
     os.makedirs(os.path.dirname(corefile_path), exist_ok=True)
-    with open(corefile_path, 'w') as f:
+    tmp_path = corefile_path + '.tmp'
+    with open(tmp_path, 'w') as f:
         f.write(corefile)
+        f.flush()
+        os.fsync(f.fileno())
+    os.replace(tmp_path, corefile_path)
     logger.info(f"Wrote Corefile to {corefile_path}")
     return True
 
@@ -293,13 +295,13 @@ local.{domain} {{
 
 def reload_coredns() -> bool:
-    """Send SIGHUP to CoreDNS container to reload config."""
+    """Signal CoreDNS to reload its config. SIGUSR1 triggers a graceful Corefile reload; SIGHUP kills the process."""
     try:
-        result = _run(['docker', 'kill', '--signal=SIGHUP', 'cell-dns'], check=False)
+        result = _run(['docker', 'kill', '--signal=SIGUSR1', 'cell-dns'], check=False)
         if result.returncode == 0:
-            logger.info("Sent SIGHUP to cell-dns")
+            logger.info("Sent SIGUSR1 to cell-dns (reload)")
             return True
-        logger.warning(f"SIGHUP to cell-dns failed: {result.stderr.strip()}")
+        logger.warning(f"SIGUSR1 to cell-dns failed: {result.stderr.strip()}")
         return False
     except Exception as e:
         logger.error(f"reload_coredns: {e}")
diff --git a/api/ip_utils.py b/api/ip_utils.py
index 007d17e..0837cb2 100644
--- a/api/ip_utils.py
+++ b/api/ip_utils.py
@@ -200,8 +200,12 @@ http://api.{domain} {{
 }}
 """
         os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)
-        with open(path, 'w') as f:
+        tmp = path + '.tmp'
+        with open(tmp, 'w') as f:
             f.write(content)
+            f.flush()
+            os.fsync(f.fileno())
+        os.replace(tmp, path)
         return True
     except Exception:
         return False
@@ -229,8 +233,12 @@ def write_env_file(ip_range: str, path: str, ports: Optional[Dict[str, int]] = N
         for key, var in PORT_ENV_VAR_NAMES.items():
             lines.append(f'{var}={merged_ports[key]}\n')
         os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)
-        with open(path, 'w') as f:
+        tmp = path + '.tmp'
+        with open(tmp, 'w') as f:
             f.writelines(lines)
+            f.flush()
+            os.fsync(f.fileno())
+        os.replace(tmp, path)
         return True
     except Exception:
         return False
diff --git a/api/network_manager.py b/api/network_manager.py
index 6721ec6..a28bc69 100644
--- a/api/network_manager.py
+++ b/api/network_manager.py
@@ -33,10 +33,14 @@ class NetworkManager(BaseServiceManager):
             # Create zone file content
             content = self._generate_zone_content(zone_name, records)
-
-            with open(zone_file, 'w') as f:
+
+            tmp_file = zone_file + '.tmp'
+            with open(tmp_file, 'w') as f:
                 f.write(content)
-
+                f.flush()
+                os.fsync(f.fileno())
+            os.replace(tmp_file, zone_file)
+
             # Reload DNS service
             self._reload_dns_service()
diff --git a/api/routing_manager.py b/api/routing_manager.py
index 63a3d7f..024c151 100644
--- a/api/routing_manager.py
+++ b/api/routing_manager.py
@@ -2,6 +2,16 @@
 """
 Routing Manager for Personal Internet Cell
 Handles VPN gateway, NAT, iptables, and advanced routing
+
+NOTE: This manager runs iptables/ip-route commands on the HOST (the machine running
+docker-compose), not inside cell-wireguard. This is intentional for host-level
+routing features (exit-node, bridge, split-route) that are not yet wired to any
+UI endpoint. The manager is instantiated but its methods are not called by any
+active API route.
+
+CRITICAL: _remove_nat_rule previously flushed ALL of POSTROUTING (-F), which
+wiped the WireGuard MASQUERADE rule; it now deletes only the rule tagged with
+the rule_id comment via targeted deletion (-D).
 """
 
 import os
@@ -766,14 +776,18 @@ class RoutingManager(BaseServiceManager):
             logger.error(f"Failed to apply NAT rule: {e}")
 
     def _remove_nat_rule(self, rule_id: str):
-        """Remove NAT rule from iptables"""
+        """Remove NAT rule from iptables by rule_id comment tag."""
         try:
-            # This is a simplified removal - in practice you'd need to track the exact rule
-            cmd = ['iptables', '-t', 'nat', '-F', 'POSTROUTING']
-            subprocess.run(cmd, check=True, timeout=10)
-
-            logger.info(f"Removed NAT rule: {rule_id}")
-
+            # Use -D with the comment tag to remove the specific rule rather than
+            # flushing the entire POSTROUTING chain (which would wipe WireGuard MASQUERADE).
+            cmd = ['iptables', '-t', 'nat', '-D', 'POSTROUTING',
+                   '-m', 'comment', '--comment', rule_id, '-j', 'MASQUERADE']
+            result = subprocess.run(cmd, timeout=10)
+            if result.returncode != 0:
+                # Rule may not exist — not an error
+                logger.debug(f"NAT rule {rule_id} not found (already removed?)")
+            else:
+                logger.info(f"Removed NAT rule: {rule_id}")
         except Exception as e:
             logger.error(f"Failed to remove NAT rule: {e}")
diff --git a/tests/conftest.py b/tests/conftest.py
new file mode 100644
index 0000000..54d7b56
--- /dev/null
+++ b/tests/conftest.py
@@ -0,0 +1,45 @@
+"""
+Shared pytest fixtures for the PIC test suite.
+"""
+import os
+import sys
+import json
+import tempfile
+import shutil
+import pytest
+
+# Ensure api/ is on the path for all tests
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'api'))
+
+
+@pytest.fixture
+def tmp_dir():
+    """Temporary directory that is cleaned up after each test."""
+    d = tempfile.mkdtemp()
+    yield d
+    shutil.rmtree(d, ignore_errors=True)
+
+
+@pytest.fixture
+def tmp_config_dir(tmp_dir):
+    """Temporary config dir with the sub-directories expected by managers."""
+    for sub in ('api', 'caddy', 'dns', 'dhcp', 'ntp', 'wireguard'):
+        os.makedirs(os.path.join(tmp_dir, sub), exist_ok=True)
+    return tmp_dir
+
+
+@pytest.fixture
+def tmp_data_dir(tmp_dir):
+    """Temporary data dir with the sub-directories expected by managers."""
+    for sub in ('dns', 'mail', 'calendar', 'files', 'wireguard'):
+        os.makedirs(os.path.join(tmp_dir, sub), exist_ok=True)
+    return tmp_dir
+
+
+@pytest.fixture
+def flask_client():
+    """Flask test client with TESTING mode enabled."""
+    from app import app
+    app.config['TESTING'] = True
+    with app.test_client() as client:
+        yield client
diff --git a/tests/test_api_endpoints.py b/tests/test_api_endpoints.py
index 4d8f0be..d1a8a75 100644
--- a/tests/test_api_endpoints.py
+++ b/tests/test_api_endpoints.py
@@ -141,17 +141,23 @@ class TestAPIEndpoints(unittest.TestCase):
         mock_network.add_dhcp_reservation.return_value = True
         response = self.client.post('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2', 'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 200)
-        # Simulate error
-        mock_network.add_dhcp_reservation.side_effect = Exception('fail')
+        # Missing mac field → 400, not 500
         response = self.client.post('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
+        self.assertEqual(response.status_code, 400)
+        # Simulate manager error
+        mock_network.add_dhcp_reservation.side_effect = Exception('fail')
+        response = self.client.post('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2', 'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 500)
         # Mock remove_dhcp_reservation
         mock_network.remove_dhcp_reservation.return_value = True
-        response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
+        response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 200)
-        # Simulate error
-        mock_network.remove_dhcp_reservation.side_effect = Exception('fail')
+        # Missing mac → 400
         response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
+        self.assertEqual(response.status_code, 400)
+        # Simulate manager error
+        mock_network.remove_dhcp_reservation.side_effect = Exception('fail')
+        response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 500)
 
     @patch('app.network_manager')
diff --git a/tests/test_app_misc.py b/tests/test_app_misc.py
index 12f2070..e326921 100644
--- a/tests/test_app_misc.py
+++ b/tests/test_app_misc.py
@@ -45,7 +45,6 @@ class TestAppMisc(unittest.TestCase):
             patch.object(app_module, 'calendar_manager', MagicMock()),
             patch.object(app_module, 'file_manager', MagicMock()),
             patch.object(app_module, 'routing_manager', MagicMock()),
-            patch.object(app_module, 'cell_manager', MagicMock()),
            patch.object(app_module, 'container_manager', MagicMock()),
         ]
         for p in self.patches:
@@ -97,18 +96,46 @@ class TestAppMisc(unittest.TestCase):
         self.assertEqual(ctx['path'], '/test')
         self.assertEqual(ctx['user'], 'user1')
 
-    def test_is_local_request(self):
-        class DummyRequest:
-            remote_addr = '127.0.0.1'
-            headers = {}
-        with patch('app.request', new=DummyRequest()):
+    def _req(self, remote_addr, xff=''):
+        class R:
+            pass
+        r = R()
+        r.remote_addr = remote_addr
+        r.headers = {'X-Forwarded-For': xff} if xff else {}
+        return r
+
+    def test_is_local_request_loopback(self):
+        with patch('app.request', new=self._req('127.0.0.1')):
             self.assertTrue(app_module.is_local_request())
-        class DummyRequest2:
-            remote_addr = '8.8.8.8'
-            headers = {}
-        with patch('app.request', new=DummyRequest2()):
+
+    def test_is_local_request_public_ip(self):
+        with patch('app.request', new=self._req('8.8.8.8')):
             self.assertFalse(app_module.is_local_request())
 
+    def test_is_local_request_private_ip(self):
+        with patch('app.request', new=self._req('192.168.1.5')):
+            self.assertTrue(app_module.is_local_request())
+
+    def test_is_local_request_xff_spoof_rejected(self):
+        # Client sends X-Forwarded-For: 127.0.0.1 but actual IP is public.
+        # Old code would trust the first XFF entry — fixed to trust only the last.
+        with patch('app.request', new=self._req('8.8.8.8', xff='127.0.0.1, 8.8.8.8')):
+            self.assertFalse(app_module.is_local_request())
+
+    def test_is_local_request_xff_last_entry_local(self):
+        # Caddy appends the real client IP; last entry is local → allow
+        with patch('app.request', new=self._req('8.8.8.8', xff='8.8.8.8, 192.168.1.10')):
+            self.assertTrue(app_module.is_local_request())
+
+    def test_is_local_request_xff_single_public_rejected(self):
+        with patch('app.request', new=self._req('8.8.8.8', xff='1.2.3.4')):
+            self.assertFalse(app_module.is_local_request())
+
+    def test_is_local_request_cell_network_ip(self):
+        # 172.20.0.10 is the API container's IP — should be allowed
+        with patch('app.request', new=self._req('172.20.0.10')):
+            self.assertTrue(app_module.is_local_request())
+
     def test_health_check_exception(self):
         # Patch datetime to raise exception
         with patch('app.datetime') as mock_dt, app_module.app.app_context():
diff --git a/tests/test_config_validation.py b/tests/test_config_validation.py
new file mode 100644
index 0000000..211f596
--- /dev/null
+++ b/tests/test_config_validation.py
@@ -0,0 +1,174 @@
+"""
+Tests for PUT /api/config input validation (400 paths).
+These are the highest-risk untested paths: the only server-side guard against
+bad subnet/port values entering persistent config.
+"""
+import json
+import sys
+import os
+import unittest
+from unittest.mock import patch, MagicMock
+
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'api'))
+
+
+def _make_client():
+    from app import app
+    app.config['TESTING'] = True
+    return app.test_client()
+
+
+def _put(client, payload):
+    return client.put(
+        '/api/config',
+        data=json.dumps(payload),
+        content_type='application/json',
+    )
+
+
+# ---------------------------------------------------------------------------
+# ip_range validation
+# ---------------------------------------------------------------------------
+
+class TestIpRangeValidation(unittest.TestCase):
+
+    def setUp(self):
+        self.client = _make_client()
+
+    def test_non_rfc1918_returns_400(self):
+        r = _put(self.client, {'ip_range': '1.2.3.0/24'})
+        self.assertEqual(r.status_code, 400)
+        body = json.loads(r.data)
+        self.assertIn('error', body)
+        self.assertIn('RFC-1918', body['error'])
+
+    def test_172_0_subnet_returns_400(self):
+        # 172.0.0.0/24 is NOT in 172.16.0.0/12 — was the bug on the dev machine
+        r = _put(self.client, {'ip_range': '172.0.0.0/24'})
+        self.assertEqual(r.status_code, 400)
+
+    def test_172_15_subnet_returns_400(self):
+        # 172.15.x is just below the 172.16.0.0/12 lower boundary
+        r = _put(self.client, {'ip_range': '172.15.0.0/24'})
+        self.assertEqual(r.status_code, 400)
+
+    def test_172_32_subnet_returns_400(self):
+        # 172.32.x is just above the 172.31.255.255 upper boundary
+        r = _put(self.client, {'ip_range': '172.32.0.0/24'})
+        self.assertEqual(r.status_code, 400)
+
+    def test_public_ip_returns_400(self):
+        r = _put(self.client, {'ip_range': '8.8.0.0/16'})
+        self.assertEqual(r.status_code, 400)
+
+    def test_172_16_exact_boundary_accepted(self):
+        # 172.16.0.0/12 is the exact lower boundary — must be valid
+        r = _put(self.client, {'ip_range': '172.16.0.0/12'})
+        # 200 or 202 — just not 400
+        self.assertNotEqual(r.status_code, 400)
+
+    def test_10_network_accepted(self):
+        r = _put(self.client, {'ip_range': '10.0.0.0/8'})
+        self.assertNotEqual(r.status_code, 400)
+
+    def test_192_168_network_accepted(self):
+        r = _put(self.client, {'ip_range': '192.168.0.0/16'})
+        self.assertNotEqual(r.status_code, 400)
+
+    def test_invalid_cidr_syntax_returns_400(self):
+        r = _put(self.client, {'ip_range': 'not-a-cidr'})
+        self.assertEqual(r.status_code, 400)
+
+
+# ---------------------------------------------------------------------------
+# Port range validation
+# ---------------------------------------------------------------------------
+
+class TestPortValidation(unittest.TestCase):
+
+    def setUp(self):
+        self.client = _make_client()
+
+    def test_dns_port_zero_returns_400(self):
+        r = _put(self.client, {'network': {'dns_port': 0}})
+        self.assertEqual(r.status_code, 400)
+        body = json.loads(r.data)
+        self.assertIn('dns_port', body.get('error', ''))
+
+    def test_dns_port_65536_returns_400(self):
+        r = _put(self.client, {'network': {'dns_port': 65536}})
+        self.assertEqual(r.status_code, 400)
+
+    def test_wireguard_port_zero_returns_400(self):
+        r = _put(self.client, {'wireguard': {'port': 0}})
+        self.assertEqual(r.status_code, 400)
+
+    def test_wireguard_port_65536_returns_400(self):
+        r = _put(self.client, {'wireguard': {'port': 65536}})
+        self.assertEqual(r.status_code, 400)
+
+    def test_wireguard_port_1_accepted(self):
+        r = _put(self.client, {'wireguard': {'port': 1}})
+        self.assertNotEqual(r.status_code, 400)
+
+    def test_wireguard_port_65535_accepted(self):
+        r = _put(self.client, {'wireguard': {'port': 65535}})
+        self.assertNotEqual(r.status_code, 400)
+
+    def test_email_smtp_port_zero_returns_400(self):
+        r = _put(self.client, {'email': {'smtp_port': 0}})
+        self.assertEqual(r.status_code, 400)
+
+    def test_calendar_port_negative_returns_400(self):
+        r = _put(self.client, {'calendar': {'port': -1}})
+        self.assertEqual(r.status_code, 400)
+
+
+# ---------------------------------------------------------------------------
+# WireGuard address validation
+# ---------------------------------------------------------------------------
+
+class TestWireguardAddressValidation(unittest.TestCase):
+
+    def setUp(self):
+        self.client = _make_client()
+
+    def test_bad_wg_address_returns_400(self):
+        r = _put(self.client, {'wireguard': {'address': 'not-an-ip'}})
+        self.assertEqual(r.status_code, 400)
+        body = json.loads(r.data)
+        self.assertIn('wireguard.address', body.get('error', ''))
+
+    def test_ip_without_prefix_returns_400(self):
+        r = _put(self.client, {'wireguard': {'address': '10.0.0.1'}})
+        self.assertEqual(r.status_code, 400)
+
+    def test_valid_wg_address_accepted(self):
+        r = _put(self.client, {'wireguard': {'address': '10.0.0.1/24'}})
+        self.assertNotEqual(r.status_code, 400)
+
+
+# ---------------------------------------------------------------------------
+# Body validation
+# ---------------------------------------------------------------------------
+
+class TestBodyValidation(unittest.TestCase):
+
+    def setUp(self):
+        self.client = _make_client()
+
+    def test_no_body_returns_400(self):
+        r = self.client.put('/api/config', content_type='application/json')
+        self.assertEqual(r.status_code, 400)
+
+    def test_empty_body_returns_400(self):
+        r = self.client.put('/api/config', data='', content_type='application/json')
+        self.assertEqual(r.status_code, 400)
+
+    def test_valid_cell_name_change_returns_200(self):
+        r = _put(self.client, {'cell_name': 'testcell'})
+        self.assertEqual(r.status_code, 200)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/tests/test_ip_utils_caddyfile.py b/tests/test_ip_utils_caddyfile.py
new file mode 100644
index 0000000..3721700
--- /dev/null
+++ b/tests/test_ip_utils_caddyfile.py
@@ -0,0 +1,102 @@
+"""
+Tests for ip_utils.write_caddyfile — this function is called on every
+ip_range / domain / cell_name change and was previously untested.
+"""
+import os
+import sys
+import tempfile
+import unittest
+
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'api'))
+
+from ip_utils import write_caddyfile, get_service_ips
+
+
+class TestWriteCaddyfile(unittest.TestCase):
+
+    def setUp(self):
+        self.tmp = tempfile.mkdtemp()
+        self.path = os.path.join(self.tmp, 'caddy', 'Caddyfile')
+
+    def _write(self, ip_range='172.20.0.0/16', cell_name='mycell', domain='cell'):
+        ok = write_caddyfile(ip_range, cell_name, domain, self.path)
+        self.assertTrue(ok, "write_caddyfile returned False")
+        with open(self.path) as f:
+            return f.read()
+
+    def test_creates_file_in_subdirectory(self):
+        self._write()
+        self.assertTrue(os.path.isfile(self.path))
+
+    def test_cell_domain_vhost_present(self):
+        content = self._write(cell_name='mycell', domain='cell')
+        self.assertIn('http://mycell.cell', content)
+
+    def test_custom_domain_used(self):
+        content = self._write(cell_name='pic0', domain='dev')
+        self.assertIn('http://pic0.dev', content)
+        self.assertNotIn('mycell', content)
+        self.assertNotIn('.cell', content)
+
+    def test_service_subdomains_use_domain(self):
+        content = self._write(domain='mynet')
+        self.assertIn('http://calendar.mynet', content)
+        self.assertIn('http://files.mynet', content)
+        self.assertIn('http://mail.mynet', content)
+        self.assertIn('http://webdav.mynet', content)
+
+    def test_virtual_ips_match_ip_range(self):
+        ip_range = '10.0.0.0/16'
+        content = self._write(ip_range=ip_range)
+        ips = get_service_ips(ip_range)
+        self.assertIn(ips['vip_calendar'], content)
+        self.assertIn(ips['vip_files'], content)
+        self.assertIn(ips['vip_mail'], content)
+        self.assertIn(ips['vip_webdav'], content)
+
+    def test_reverse_proxy_targets_are_internal_ports(self):
+        content = self._write()
+        self.assertIn('reverse_proxy cell-radicale:5232', content)
+        self.assertIn('reverse_proxy cell-filegator:8080', content)
+        self.assertIn('reverse_proxy cell-rainloop:8888', content)
+        self.assertIn('reverse_proxy cell-webdav:80', content)
+
+    def test_api_proxy_present(self):
+        content = self._write()
+        self.assertIn('reverse_proxy cell-api:3000', content)
+
+    def test_overwrite_on_second_call(self):
+        self._write(cell_name='first', domain='cell')
+        content = self._write(cell_name='second', domain='cell')
+        self.assertIn('second.cell', content)
+        self.assertNotIn('first.cell', content)
+
+    def test_different_ip_ranges_produce_different_vips(self):
+        c1 = self._write(ip_range='10.0.0.0/16')
+        os.remove(self.path)
+        c2 = self._write(ip_range='192.168.1.0/24')
+        self.assertNotEqual(c1, c2)
+
+    def test_auto_https_off(self):
+        content = self._write()
+        self.assertIn('auto_https off', content)
+
+    def test_catchall_block_present(self):
+        content = self._write()
+        self.assertIn(':80 {', content)
+
+    def test_invalid_ip_range_returns_false(self):
+        result = write_caddyfile('not-a-cidr', 'cell', 'cell', self.path)
+        self.assertFalse(result)
+
+    def test_file_is_not_empty(self):
+        self._write()
+        self.assertGreater(os.path.getsize(self.path), 100)
+
+    def tearDown(self):
+        import shutil
+        shutil.rmtree(self.tmp, ignore_errors=True)
+
+
+if __name__ == '__main__':
+    unittest.main()