fix: architecture audit — security, atomicity, broken endpoints, test coverage
Sprint 1 — Security & correctness:
- Restore all 10 commented-out is_local_request() checks (vault, containers, images, volumes)
- Fix XFF spoofing: only trust the LAST X-Forwarded-For entry (Caddy's append), not all
- Require prefix length in wireguard.address (was accepting bare IPs like 10.0.0.1)
- Validate service_access list in add_peer (valid: calendar/files/mail/webdav)
- Fix dhcp/reservations POST/DELETE: unpack mac/ip/hostname from body (was passing dict as positional arg)
- Fix network/test POST: remove spurious data arg (test_connectivity takes no args)
- Fix remove_peer: clear iptables rules and regenerate DNS ACLs on deletion (was leaving stale rules)
- Fix CoreDNS reload: SIGHUP → SIGUSR1 (SIGHUP kills the process; SIGUSR1 triggers reload plugin)
- Remove local.{domain} block from Corefile template (local.zone doesn't exist, caused log spam)
- Fix routing_manager._remove_nat_rule: targeted -D instead of flushing entire POSTROUTING chain
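The XFF fix above comes down to trusting only the hop the reverse proxy itself appended. A minimal sketch of the pattern (helper names here are illustrative, not the project's actual functions):

```python
def is_trusted_forwarded(forwarded_for: str, allowed) -> bool:
    """Trust only the LAST X-Forwarded-For hop (the one our proxy appended).

    A client can send its own 'X-Forwarded-For: 127.0.0.1' header; Caddy then
    appends the real client IP. Only the final entry is proxy-controlled, so
    iterating all entries would let clients spoof a local origin.
    """
    if not forwarded_for:
        return False
    last_hop = forwarded_for.split(',')[-1].strip()
    return allowed(last_hop)
```

With this shape, a spoofed `127.0.0.1` prepended by the client is ignored because Caddy's appended entry is still the one checked.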
Sprint 2 — State consistency:
- Atomic config writes in config_manager, ip_utils, firewall_manager, network_manager
(write to .tmp → fsync → os.replace, prevents truncated files on kill)
- backup_config: now also backs up Caddyfile, Corefile, .env, DNS zone files
- restore_config: restores all of the above so config stays consistent after restore
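The write-to-tmp → fsync → os.replace sequence used for the atomic config writes can be sketched as follows (a generic illustration of the pattern, not the project's exact helper):

```python
import os
import tempfile

def atomic_write(path: str, data: str) -> None:
    """Write a file atomically: tmp file in the same dir -> fsync -> os.replace.

    Readers never see a truncated file; a kill mid-write leaves the old
    content intact because os.replace() is an atomic rename on POSIX.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    # tmp file must live on the same filesystem for the rename to be atomic
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force bytes to disk before the rename
        os.replace(tmp_path, path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp_path)  # clean up the partial tmp file on failure
        raise
```

The key detail is that the tmp file is created in the target directory: `os.replace` is only atomic within a single filesystem.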
Sprint 3 — Dead code / documentation:
- Remove CellManager instantiation from app startup (was never called, double-instantiated all managers)
- Document routing_manager scope (targets host, not cell-wireguard; methods not called by any active route)
Sprint 4 — Test infrastructure:
- Add tests/conftest.py with shared tmp_dir, tmp_config_dir, tmp_data_dir, flask_client fixtures
- Add tests/test_config_validation.py: 400 paths for ip_range, port, wireguard.address validation
- Add tests/test_ip_utils_caddyfile.py: 14 tests for write_caddyfile (was completely untested)
- Expand test_app_misc.py: 7 new is_local_request tests covering XFF spoofing and cell-network IPs
- Add --cov-fail-under=70 to make test-coverage
- Add pre-commit hook that runs pytest before every commit
414 tests pass (was 372).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@@ -225,7 +225,7 @@ test-unit:
 	pytest tests/
 
 test-coverage:
-	pytest tests/ api/tests/ --cov=api --cov-report=html --cov-report=term-missing -v
+	pytest tests/ api/tests/ --cov=api --cov-report=html --cov-report=term-missing --cov-fail-under=70 -v
 
 test-api:
 	cd api && python3 -m pytest tests/test_api_endpoints.py -v
@@ -1,179 +1,123 @@
-# Personal Internet Cell
+# Personal Internet Cell (PIC)
 
-## 🌟 Overview
+A self-hosted digital infrastructure platform. One stack, one API, one UI — managing DNS, DHCP, NTP, WireGuard VPN, email, calendar/contacts, file storage, and a reverse proxy on your own hardware.
 
-The Personal Internet Cell is a **production-grade, self-hosted, decentralized digital infrastructure** that empowers you to:
-
-- **Host your own services**: Email, calendar, contacts, files, DNS, DHCP, NTP
-- **Secure mesh networking**: Connect with trusted peers via WireGuard VPN
-- **Advanced routing**: VPN gateway, NAT, firewall, exit nodes, and bridge routing
-- **Enterprise security**: Self-hosted CA, certificate management, trust systems
-- **Modern management**: RESTful API, enhanced CLI, and comprehensive monitoring
-- **Event-driven architecture**: Service orchestration and real-time communication
-
 ---
 
-## 🚀 Key Features
+## What it does
 
-### 🔧 **Core Services**
-- **Network Services**: DNS, DHCP, NTP with dynamic management
-- **VPN & Mesh**: WireGuard-based peer federation with dynamic IP updates
-- **Digital Services**: Email (SMTP/IMAP), Calendar/Contacts (CalDAV/CardDAV), File Storage (WebDAV)
-- **Security**: Self-hosted Certificate Authority, Age/Fernet encryption, trust management
-- **Container Orchestration**: Docker-based service management and deployment
+- **Network services** — CoreDNS, dnsmasq DHCP, chrony NTP, all dynamically managed
+- **WireGuard VPN** — peer lifecycle, QR-code provisioning, per-peer service access control
+- **Digital services** — Email (Postfix/Dovecot), Calendar/Contacts (Radicale CalDAV), Files (WebDAV + Filegator)
+- **Reverse proxy** — Caddy with per-service virtual IPs; subdomains like `calendar.mycell.cell` work on VPN clients automatically
+- **Certificate authority** — self-hosted CA via VaultManager
+- **Cell mesh** — connect two PIC instances with site-to-site WireGuard + DNS forwarding
 
-### 🏗️ **Architecture Highlights**
-- **BaseServiceManager**: Unified interface across all 10 service managers
-- **Event-Driven Service Bus**: Real-time service communication and orchestration
-- **Centralized Configuration**: Type-safe validation, backup/restore, import/export
-- **Comprehensive Logging**: Structured JSON logs with rotation, search, and export
-- **Enhanced CLI**: Interactive mode, batch operations, service wizards
-- **Health Monitoring**: Real-time health checks and performance metrics
-
-### 📊 **Production Features**
-- **Service Orchestration**: Automatic service dependency management
-- **Configuration Management**: Schema validation, versioning, and migration
-- **Error Handling**: Standardized error handling and recovery mechanisms
-- **Testing**: Comprehensive test suite with 77%+ coverage
-- **Documentation**: Complete API documentation and usage guides
+Everything is configured through a REST API and a React web UI. No manual config file editing needed for normal operations.
 
 ---
 
-## 📋 Table of Contents
-
-1. [Quick Start](#quick-start)
-2. [Architecture](#architecture)
-3. [Service Managers](#service-managers)
-4. [API Reference](#api-reference)
-5. [CLI Guide](#cli-guide)
-6. [Configuration](#configuration)
-7. [Security](#security)
-8. [Development](#development)
-9. [Testing](#testing)
-10. [Deployment](#deployment)
-11. [Contributing](#contributing)
-12. [License](#license)
-
----
-
-## 🚀 Quick Start
+## Quick Start
 
 ### Prerequisites
 
-- **Debian/Ubuntu** host (apt-based). All other dependencies are installed automatically.
-- **2 GB+ RAM, 10 GB+ disk space**
-- **Open ports**: 53 (DNS), 80/443 (HTTP/S), 3000 (API), 8081 (Web UI), 51820/udp (WireGuard)
+- Debian/Ubuntu host (apt-based)
+- 2 GB+ RAM, 10 GB+ disk
+- Open ports: 53 (DNS), 80 (HTTP), 3000 (API), 8081 (Web UI), 51820/udp (WireGuard)
 
-### 1. Install
+### Install
 
 ```bash
 git clone <repo-url> pic
 cd pic
 
-# Install all system dependencies (docker, python3, python3-cryptography, etc.)
+# Install system deps (docker, python3, python3-cryptography, etc.)
 make check-deps
 
-# Default cell (name=mycell, domain=cell, VPN=10.0.0.1/24, port=51820)
+# Generate keys + write configs
 make setup
-make start
 
-# Custom cell — use when installing a second cell on a different host
+# Build and start all 12 containers
+make start
+```
 
+`make setup` accepts overrides for a second cell on a different host:
+
+```bash
 CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start
 ```
 
-`make check-deps` installs python3, python3-cryptography, docker, docker-compose, curl, openssl, git via apt and adds the current user to the docker group.
-
-`make setup` generates WireGuard keys, writes configs, and creates all data directories.
-
-`make start` builds and brings up all 12 Docker containers.
-
-### 2. Access
+### Access
 
 | Service | URL |
 |---------|-----|
 | Web UI | `http://<host-ip>:8081` |
 | API | `http://<host-ip>:3000` |
 | Health | `http://<host-ip>:3000/health` |
 
-On a WireGuard client: `http://mycell.cell` (or whatever your cell name is).
+From a WireGuard client: `http://mycell.cell` (replace with your cell name/domain).
 
-### 3. Local dev (no Docker)
+### Local dev (no Docker)
 
 ```bash
 pip install -r api/requirements.txt
-python api/app.py  # API on :3000
+python api/app.py  # Flask API on :3000
 
-cd webui && npm install && npm run dev  # React UI on :5173 (proxies API to :3000)
+cd webui && npm install && npm run dev  # React UI on :5173 (proxies /api → :3000)
 ```
 
 ---
 
-## 🛠️ Management Commands
+## Management Commands
 
 ```bash
 # First install
-make check-deps # install all system packages via apt
-make setup # generate keys, write configs
+make check-deps # install system packages via apt
+make setup # generate keys, write configs, create data dirs
 make start # start all 12 containers
 
 # Daily operations
 make status # container status + API health
-make logs # follow all logs
+make logs # follow all container logs
 make logs-api # follow logs for one service (api, dns, wg, mail, caddy, ...)
-make shell-api # open a shell inside a container
+make shell-api # shell inside a container
 
 # Deploy latest code
-make update # git pull + rebuild + restart
+make update # git pull + rebuild api image + restart
 
-# Full wipe and reinstall (useful on test machine)
-make reinstall # stop, wipe config/data, setup, start fresh
-
-# Remove everything
-make uninstall # stop + remove images; prompts whether to also wipe config/data
-
 # Maintenance
 make backup # tar config/ + data/ into backups/
-make restore # list available backups
+make restore # list available backups and restore
 make clean # remove containers/volumes, keep config/data
 
+# Full wipe (test machines)
+make reinstall # stop, wipe config/data, setup, start fresh
+make uninstall # stop + remove images; prompts to also wipe config/data
+
 # Tests
-make test # run all tests
-make test-coverage # tests + HTML coverage report
+make test # run full pytest suite
+make test-coverage # tests + HTML coverage report in htmlcov/
 ```
 
 ---
 
-## 🔗 Connecting Two Cells (PIC Mesh)
+## Connecting Two Cells (PIC Mesh)
 
-Two PIC instances can form a mesh — full site-to-site WireGuard tunnels with
-automatic DNS forwarding so each cell's services are reachable from the other.
+Two PIC instances form a mesh: site-to-site WireGuard tunnels with automatic DNS forwarding so each cell's services resolve from the other.
 
-### Install the second cell
+### Exchange invites
 
-```bash
-# On the second host (different VPN subnet; port 51820 is fine — different machine)
-CELL_NAME=pic1 VPN_ADDRESS=10.1.0.1/24 make setup && make start
-```
-
-### Exchange invites (two pastes, two clicks)
-
-1. On **Cell A** → open Web UI → **Cell Network** → copy the invite JSON.
-2. On **Cell B** → **Cell Network** → paste into "Connect to Another Cell" → click **Connect**.
+1. On **Cell A** → Web UI → **Cell Network** → copy the invite JSON.
+2. On **Cell B** → **Cell Network** → paste into "Connect to Another Cell" → **Connect**.
 3. On **Cell B** → copy its invite JSON.
-4. On **Cell A** → paste Cell B's invite → click **Connect**.
+4. On **Cell A** → paste Cell B's invite → **Connect**.
 
-Both cells now have:
-- A site-to-site WireGuard peer (AllowedIPs = remote cell's VPN subnet).
-- A CoreDNS forwarding block so `*.pic1.cell` resolves across the tunnel.
-
-The **Connected Cells** panel shows live handshake status (green = online).
+Both cells now have a WireGuard peer with `AllowedIPs = remote VPN subnet` and a CoreDNS forwarding block so `*.pic1.cell` resolves across the tunnel.
 
 ### Same-LAN tip
 
-If both cells share the same external IP (behind NAT), the auto-detected
-endpoint in the invite will be the public IP. Replace it with the LAN IP
-before clicking Connect so traffic stays local:
+If both cells share the same external IP (behind NAT), replace the auto-detected endpoint with the LAN IP before connecting:
 
 ```json
 { "endpoint": "192.168.31.50:51820", ... }
@@ -181,359 +125,143 @@ before clicking Connect so traffic stays local:
 
 ---
 
-## 🏗️ Architecture
+## Architecture
 
-### **Service Manager Architecture**
+### Stack
 
-All services inherit from `BaseServiceManager`, providing:
-- **Unified Interface**: Consistent methods across all services
-- **Health Monitoring**: Standardized health checks and metrics
-- **Error Handling**: Centralized error handling and logging
-- **Configuration**: Common configuration management patterns
-
-### **Event-Driven Service Bus**
-
-```python
-# Services communicate via events
-service_bus.register_service('network', network_manager)
-service_bus.register_service('wireguard', wireguard_manager)
-service_bus.publish_event(EventType.SERVICE_STARTED, 'network', data)
-```
-
-### **Service Dependencies**
-
 ```
-wireguard → network
-email → network, vault
-calendar → network, vault
-files → network, vault
-routing → network, wireguard
-vault → network
+cell-caddy (Caddy) :80/:443 + per-service virtual IPs
+cell-api (Flask :3000) REST API + config management + container orchestration
+cell-webui (Nginx :8081) React UI
+cell-dns (CoreDNS :53) internal DNS + per-peer ACLs
+cell-dhcp (dnsmasq) DHCP + static reservations
+cell-ntp (chrony) NTP
+cell-wireguard WireGuard VPN
+cell-mail (docker-mailserver) SMTP/IMAP
+cell-radicale CalDAV/CardDAV :5232
+cell-webdav WebDAV :80
+cell-filegator file manager UI :8080
+cell-rainloop webmail :8888
 ```
 
----
+All containers share a custom Docker bridge network. Static IPs are assigned in `docker-compose.yml`. Caddy adds per-service virtual IPs to its own interface at API startup so `calendar.<domain>`, `files.<domain>`, etc. route to the right container.
 
-## 🔧 Service Managers
+### Backend (`api/`)
 
-### **Core Network Services**
-- **NetworkManager**: DNS, DHCP, NTP with dynamic zone management
-- **WireGuardManager**: VPN configuration, peer management, key generation
-- **PeerRegistry**: Peer registration, IP updates, trust management
+Service managers (`network_manager.py`, `wireguard_manager.py`, `peer_registry.py`, etc.) all inherit `BaseServiceManager`. `app.py` contains all Flask routes — one file, organized by service.
 
-### **Digital Services**
-- **EmailManager**: SMTP/IMAP email with user management
-- **CalendarManager**: CalDAV/CardDAV calendar and contacts
-- **FileManager**: WebDAV file storage with user directories
+`ConfigManager` (`config_manager.py`) is the single source of truth. Config lives in `config/api/cell_config.json`. All managers read/write through it.
 
-### **Infrastructure Services**
-- **RoutingManager**: NAT, firewall, advanced routing (exit/bridge/split)
-- **VaultManager**: Certificate authority, trust management, encryption
-- **ContainerManager**: Docker orchestration and container management
-- **CellManager**: Overall cell configuration and service orchestration
+`ip_utils.py` owns all container IP logic via `CONTAINER_OFFSETS` — do not hardcode IPs elsewhere.
 
----
+When a config change requires recreating the Docker network (e.g. `ip_range` change), the API spawns a helper container that outlives cell-api to run `docker compose down && up`. Other restarts run `compose up -d --no-deps <containers>` directly.
 
-## 📡 API Reference
+### Frontend (`webui/`)
 
-### **Core Endpoints**
+React 18 + Vite + Tailwind CSS. All API calls go through `src/services/api.js` (Axios). Vite dev server proxies `/api` to `localhost:3000`. Pages in `src/pages/`, shared components in `src/components/`.
 
-```bash
-# Service Status
-GET /api/services/status
-GET /api/services/connectivity
-
-# Configuration Management
-GET /api/config
-PUT /api/config
-POST /api/config/backup
-POST /api/config/restore/<backup_id>
-
-# Service Bus
-GET /api/services/bus/status
-GET /api/services/bus/events
-POST /api/services/bus/services/<service>/start
-
-# Logging
-GET /api/logs/services/<service>
-POST /api/logs/search
-POST /api/logs/export
-```
-
-### **Service-Specific Endpoints**
-
-```bash
-# Network Services
-GET /api/dns/records
-POST /api/dns/records
-GET /api/dhcp/leases
-GET /api/ntp/status
-
-# WireGuard & Peers
-GET /api/wireguard/peers
-POST /api/wireguard/peers
-GET /api/wireguard/status
-
-# Digital Services
-GET /api/email/users
-GET /api/calendar/users
-GET /api/files/users
-
-# Routing & Security
-GET /api/routing/status
-POST /api/routing/nat
-GET /api/vault/certificates
-```
-
----
-
-## 💻 CLI Guide
-
-### **Enhanced CLI Features**
-
-```bash
-# Interactive Mode
-python api/enhanced_cli.py --interactive
-
-# Batch Operations
-python api/enhanced_cli.py --batch "status" "services" "health"
-
-# Configuration Management
-python api/enhanced_cli.py --export-config json
-python api/enhanced_cli.py --import-config config.json
-
-# Service Wizards
-python api/enhanced_cli.py --wizard network
-python api/enhanced_cli.py --wizard email
-
-# Health Monitoring
-python api/enhanced_cli.py --health
-python api/enhanced_cli.py --logs network
-```
-
-### **Service Management**
-
-```bash
-# Show status
-python api/enhanced_cli.py --status
-
-# List services
-python api/enhanced_cli.py --services
-
-# Peer management
-python api/enhanced_cli.py --peers
-
-# Service logs
-python api/enhanced_cli.py --logs wireguard
-```
-
----
-
-## ⚙️ Configuration
-
-### **Configuration Management**
-
-```bash
-# Export configuration
-curl -X GET http://localhost:3000/api/config
-
-# Update configuration
-curl -X PUT http://localhost:3000/api/config \
-  -H "Content-Type: application/json" \
-  -d '{"cell_name": "mycell", "domain": "mycell.cell"}'
-
-# Backup configuration
-curl -X POST http://localhost:3000/api/config/backup
-```
-
-### **Service Configuration**
-
-Each service has its own configuration schema:
-- **Network**: DNS zones, DHCP ranges, NTP servers
-- **WireGuard**: Interface settings, peer configurations
-- **Email**: Domain settings, user accounts, mailboxes
-- **Calendar**: User accounts, calendar sharing
-- **Files**: Storage quotas, user directories
-- **Routing**: NAT rules, firewall policies, routing tables
-
----
-
-## 🔒 Security
-
-### **Certificate Management**
-- **Self-hosted CA**: Issue and manage TLS certificates
-- **Certificate Lifecycle**: Generate, renew, revoke certificates
-- **Trust Management**: Direct and indirect trust relationships
-- **Age Encryption**: Modern encryption for sensitive data
-
-### **Network Security**
-- **WireGuard VPN**: Secure peer-to-peer communication
-- **Firewall & NAT**: Granular access control
-- **Service Isolation**: Docker containers for each service
-- **Input Validation**: All API endpoints validate input
-
-### **Data Protection**
-- **Encrypted Storage**: Sensitive data encrypted at rest
-- **Secure Communication**: TLS for all API endpoints
-- **Access Control**: Role-based access for services
-- **Audit Logging**: Comprehensive security event logging
-
----
-
-## 🛠️ Development
-
-### **Project Structure**
+### Project layout
 
 ```
-PersonalInternetCell/
-├── api/ # Backend API server
-│ ├── base_service_manager.py # Base class for all services
-│ ├── config_manager.py # Configuration management
-│ ├── service_bus.py # Event-driven service bus
-│ ├── log_manager.py # Comprehensive logging
-│ ├── enhanced_cli.py # Enhanced CLI tool
-│ ├── network_manager.py # DNS, DHCP, NTP
-│ ├── wireguard_manager.py # VPN and peer management
-│ ├── email_manager.py # Email services
-│ ├── calendar_manager.py # Calendar services
-│ ├── file_manager.py # File storage
-│ ├── routing_manager.py # Routing and NAT
-│ ├── vault_manager.py # Security and trust
-│ ├── container_manager.py # Container orchestration
-│ ├── cell_manager.py # Overall cell management
-│ ├── peer_registry.py # Peer registration
-│ └── app.py # Main API server
+pic/
+├── api/ # Flask API + all service managers
+│ ├── app.py # all routes (~2700 lines)
+│ ├── config_manager.py # unified config CRUD
+│ ├── ip_utils.py # IP/CIDR helpers + Caddyfile generator
+│ ├── firewall_manager.py # iptables (via cell-wireguard) + Corefile
+│ ├── network_manager.py # DNS zones, DHCP, NTP
+│ ├── wireguard_manager.py
+│ ├── peer_registry.py
+│ ├── vault_manager.py
+│ ├── email_manager.py
+│ ├── calendar_manager.py
+│ ├── file_manager.py
+│ └── container_manager.py
 ├── webui/ # React frontend
-├── config/ # Configuration files
-├── data/ # Persistent data
-├── tests/ # Test suite
-└── docker-compose.yml # Container orchestration
+├── config/ # Config files (bind-mounted into containers)
+│ ├── api/cell_config.json ← live config
+│ ├── caddy/Caddyfile
+│ ├── dns/Corefile
+│ └── ...
+├── data/ # Persistent data (git-ignored)
+├── tests/ # pytest suite (372 tests, 27 files)
+├── docker-compose.yml
+└── Makefile
 ```
 
-### **Running Locally**
+---
 
+## API Reference
+
+### Config
+
+```
+GET /api/config full config + service IPs
+PUT /api/config update identity or service config
+GET /api/config/pending pending restart info
+POST /api/config/apply apply pending restart
+POST /api/config/backup create backup
+POST /api/config/restore/<backup_id> restore from backup
+```
+
+### Network
+
+```
+GET /api/dns/records
+POST /api/dns/records
+GET /api/dhcp/leases
+GET /api/dhcp/reservations
+POST /api/dhcp/reservations
+```
+
+### WireGuard & Peers
+
+```
+GET /api/wireguard/status
+GET /api/wireguard/peers
+POST /api/wireguard/peers
+GET /api/peers
+POST /api/peers
+PUT /api/peers/<name>
+DELETE /api/peers/<name>
+GET /api/peers/<name>/config peer config + QR code
+```
+
+### Containers & Health
+
+```
+GET /api/containers
+POST /api/containers/<name>/restart
+GET /health
+GET /api/services/status
+```
+
+---
+
+## Testing
 
 ```bash
-# Install dependencies
-pip install -r api/requirements.txt
-
-# Start the API server
-python api/app.py
-
-# Run tests
-python api/test_enhanced_api.py
-
-# Start frontend (if available)
-cd webui && bun install && npm run dev
+make test # run full suite
+make test-coverage # coverage report in htmlcov/
+pytest tests/test_<module>.py # single file
+pytest tests/ -k "test_name" # single test
 ```
 
-### **Service Development**
+Tests live in `tests/` and use `unittest.TestCase` collected by pytest. External system calls (Docker, iptables, file writes) are mocked with `unittest.mock.patch`.
 
-```python
-from base_service_manager import BaseServiceManager
-
-class MyServiceManager(BaseServiceManager):
-    def __init__(self, data_dir='/app/data', config_dir='/app/config'):
-        super().__init__('myservice', data_dir, config_dir)
-
-    def get_status(self) -> Dict[str, Any]:
-        # Implement service status
-        pass
-
-    def test_connectivity(self) -> Dict[str, Any]:
-        # Implement connectivity test
-        pass
-```
+Known coverage gaps: `write_caddyfile`, `POST /api/config/apply` (helper container path), `PUT /api/config` 400 validation paths. These are the highest-risk untested paths.
 
 ---
 
-## 🧪 Testing
+## Security Notes
 
-### **Test Suite**
+- The API is access-controlled by `is_local_request()` — it checks whether the request comes from a local/loopback/cell-network IP. Sensitive endpoints (containers, vault) are restricted to local access only.
+- All per-peer service access is enforced via iptables rules inside `cell-wireguard` and CoreDNS ACL blocks.
+- The Docker socket is mounted into `cell-api` for container management — treat network access to port 3000 as privileged.
+- `ip_range` must be an RFC-1918 CIDR (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16). The API and UI both validate this.
 
-```bash
-# Run all tests
-python api/test_enhanced_api.py
-
-# Test specific components
-python -m pytest api/tests/test_network_manager.py
-python -m pytest api/tests/test_service_bus.py
-
-# Coverage report
-coverage run -m pytest api/tests/
-coverage html
-```
-
-### **Test Coverage**
-- **BaseServiceManager**: 100% coverage
-- **ConfigManager**: 95%+ coverage
-- **ServiceBus**: 95%+ coverage
-- **LogManager**: 95%+ coverage
-- **All Service Managers**: 77%+ overall coverage
-
 ---
 
-## 🚀 Deployment
+## License
 
-### **Docker Deployment**
+MIT — see [LICENSE](LICENSE).
 
-```bash
-# Production deployment
-docker-compose -f docker-compose.prod.yml up -d
-
-# Development deployment
-docker-compose up --build
-```
-
-### **System Requirements**
-- **CPU**: 2+ cores
-- **RAM**: 2GB+ (4GB recommended)
-- **Storage**: 10GB+ (SSD recommended)
-- **Network**: Stable internet connection
-
-### **Monitoring**
-
-```bash
-# Health check
-curl http://localhost:3000/health
-
-# Service status
-curl http://localhost:3000/api/services/status
-
-# Service connectivity
-curl http://localhost:3000/api/services/connectivity
-```
-
----
-
-## 🤝 Contributing
-
-1. **Fork** the repository
-2. **Create** a feature branch
-3. **Implement** your changes
-4. **Add tests** for new functionality
-5. **Submit** a pull request
-
-### **Development Guidelines**
-- Follow the existing code style
-- Add comprehensive tests
-- Update documentation
-- Use the BaseServiceManager pattern
-- Implement proper error handling
-
----
-
-## 📄 License
-
-MIT License - see [LICENSE](LICENSE) file for details.
-
----
-
-## 📚 Documentation
-
-- **[Quick Start Guide](QUICKSTART.md)**: Get up and running quickly
-- **[API Documentation](api/API_DOCUMENTATION.md)**: Complete API reference
-- **[Comprehensive Improvements](COMPREHENSIVE_IMPROVEMENTS_SUMMARY.md)**: Detailed architecture overview
-- **[Enhanced API Improvements](ENHANCED_API_IMPROVEMENTS.md)**: Technical implementation details
-
----
-
-**🌟 The Personal Internet Cell - Your self-hosted, production-grade digital infrastructure!**
+53 -35

@@ -179,7 +179,6 @@ email_manager = EmailManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
 calendar_manager = CalendarManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
 file_manager = FileManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
 routing_manager = RoutingManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
-cell_manager = CellManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
 app.vault_manager = VaultManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
 container_manager = ContainerManager(data_dir=_DATA_DIR, config_dir=_CONFIG_DIR)
 cell_link_manager = CellLinkManager(
@@ -345,10 +344,12 @@ def is_local_request():
 
     if _allowed(remote_addr):
         return True
+    # Only trust the LAST X-Forwarded-For entry — that is what Caddy appended.
+    # Iterating all entries allows clients to spoof local origin by prepending 127.0.0.1.
     if forwarded_for:
-        for addr in forwarded_for.split(','):
-            if _allowed(addr.strip()):
-                return True
+        last_hop = forwarded_for.split(',')[-1].strip()
+        if _allowed(last_hop):
+            return True
     return False
 
 @app.route('/health', methods=['GET'])
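The header-parsing change above can be checked in isolation. A minimal sketch, assuming a loopback-only stub for `_allowed` (the real helper also accepts cell-network ranges):

```python
import ipaddress

def _allowed(addr: str) -> bool:
    # Stub: loopback only; the real predicate also accepts cell-network CIDRs.
    try:
        return ipaddress.ip_address(addr).is_loopback
    except ValueError:
        return False

def is_local_forwarded(forwarded_for: str) -> bool:
    """Trust only the LAST X-Forwarded-For entry, i.e. the hop the proxy appended."""
    if not forwarded_for:
        return False
    return _allowed(forwarded_for.split(',')[-1].strip())

# A client can prepend "127.0.0.1" to the header it sends, but it cannot
# control the entry the reverse proxy appends last.
print(is_local_forwarded('127.0.0.1, 203.0.113.9'))  # False
```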
@@ -481,6 +482,8 @@ def update_config():
     _addr = data['wireguard'].get('address')
     if _addr:
         import ipaddress as _ipa2
+        if '/' not in str(_addr):
+            return jsonify({'error': 'wireguard.address must include a prefix length (e.g. 10.0.0.1/24)'}), 400
         try:
             _ipa2.ip_interface(_addr)
         except ValueError as _e:
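The explicit prefix check matters because `ipaddress.ip_interface('10.0.0.1')` parses a bare IP without error, treating it as an implicit /32. A standalone sketch of the two-step validation (the error string is taken from the change above; the helper name is illustrative):

```python
import ipaddress

def validate_wg_address(addr):
    """Return (ok, error). ip_interface alone would silently accept a bare IP
    as /32, so an explicit prefix length is required first."""
    if '/' not in str(addr):
        return False, 'wireguard.address must include a prefix length (e.g. 10.0.0.1/24)'
    try:
        ipaddress.ip_interface(addr)
    except ValueError as e:
        return False, str(e)
    return True, None

print(validate_wg_address('10.0.0.1/24'))  # (True, None)
```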
@@ -1166,10 +1169,13 @@ def get_dhcp_leases():
 def add_dhcp_reservation():
     try:
         data = request.get_json(silent=True)
-        if data is None:
+        if not data:
             return jsonify({"error": "No data provided"}), 400
-        result = network_manager.add_dhcp_reservation(data)
-        return jsonify(result)
+        for field in ('mac', 'ip'):
+            if field not in data:
+                return jsonify({"error": f"Missing required field: {field}"}), 400
+        result = network_manager.add_dhcp_reservation(data['mac'], data['ip'], data.get('hostname', ''))
+        return jsonify({"success": result})
     except Exception as e:
         logger.error(f"Error adding DHCP reservation: {e}")
         return jsonify({"error": str(e)}), 500
@@ -1179,8 +1185,10 @@ def remove_dhcp_reservation():
     """Remove DHCP reservation."""
     try:
         data = request.get_json(silent=True)
-        result = network_manager.remove_dhcp_reservation(data)
-        return jsonify(result)
+        if not data or 'mac' not in data:
+            return jsonify({"error": "Missing required field: mac"}), 400
+        result = network_manager.remove_dhcp_reservation(data['mac'])
+        return jsonify({"success": result})
     except Exception as e:
         logger.error(f"Error removing DHCP reservation: {e}")
         return jsonify({"error": str(e)}), 500
@@ -1218,10 +1226,7 @@ def get_dns_status():
 @app.route('/api/network/test', methods=['POST'])
 def test_network():
     try:
-        data = request.get_json(silent=True)
-        if data is None:
-            return jsonify({"error": "No data provided"}), 400
-        result = network_manager.test_connectivity(data)
+        result = network_manager.test_connectivity()
         return jsonify(result)
     except Exception as e:
         logger.error(f"Error testing network: {e}")
@@ -1572,6 +1577,12 @@ def add_peer():
 
     assigned_ip = data.get('ip') or _next_peer_ip()
 
+    # Validate service_access if provided
+    _valid_services = {'calendar', 'files', 'mail', 'webdav'}
+    service_access = data.get('service_access', list(_valid_services))
+    if not isinstance(service_access, list) or not all(s in _valid_services for s in service_access):
+        return jsonify({"error": f"service_access must be a list of: {sorted(_valid_services)}"}), 400
+
     # Add peer to registry with all provided fields
     peer_info = {
         'peer': data['name'],
@@ -1584,7 +1595,7 @@ def add_peer():
         'persistent_keepalive': data.get('persistent_keepalive'),
         'description': data.get('description'),
         'internet_access': data.get('internet_access', True),
-        'service_access': data.get('service_access', ['calendar', 'files', 'mail', 'webdav']),
+        'service_access': service_access,
         'peer_access': data.get('peer_access', True),
         'config_needs_reinstall': False,
     }
@@ -1651,10 +1662,17 @@ def clear_peer_reinstall(peer_name):
 
 @app.route('/api/peers/<peer_name>', methods=['DELETE'])
 def remove_peer(peer_name):
-    """Remove a peer."""
+    """Remove a peer and clean up its firewall rules and DNS ACLs."""
    try:
+        peer = peer_registry.get_peer(peer_name)
+        if not peer:
+            return jsonify({"message": f"Peer {peer_name} not found or already removed"})
+        peer_ip = peer.get('ip')
         success = peer_registry.remove_peer(peer_name)
         if success:
+            if peer_ip:
+                firewall_manager.clear_peer_rules(peer_ip)
+            firewall_manager.apply_all_dns_rules(peer_registry.list_peers(), COREFILE_PATH, _configured_domain())
             return jsonify({"message": f"Peer {peer_name} removed successfully"})
         else:
             return jsonify({"message": f"Peer {peer_name} not found or already removed"})
@@ -2558,8 +2576,8 @@ def restart_container(name):
 @app.route('/api/containers/<name>/logs', methods=['GET'])
 def get_container_logs(name):
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     tail = request.args.get('tail', default=100, type=int)
     try:
         logs = container_manager.get_container_logs(name, tail=tail)
@@ -2571,8 +2589,8 @@ def get_container_logs(name):
 @app.route('/api/containers/<name>/stats', methods=['GET'])
 def get_container_stats(name):
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     try:
         stats = container_manager.get_container_stats(name)
         return jsonify(stats)
@@ -2583,16 +2601,16 @@ def get_container_stats(name):
 @app.route('/api/vault/secrets', methods=['GET'])
 def list_secrets():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     secrets = app.vault_manager.list_secrets()
     return jsonify({'secrets': secrets})
 
 @app.route('/api/vault/secrets', methods=['POST'])
 def store_secret():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     data = request.get_json(silent=True)
     if not data or 'name' not in data or 'value' not in data:
         return jsonify({'error': 'Missing name or value'}), 400
@@ -2602,8 +2620,8 @@ def store_secret():
 @app.route('/api/vault/secrets/<name>', methods=['GET'])
 def get_secret(name):
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     value = app.vault_manager.get_secret(name)
     if value is None:
         return jsonify({'error': 'Not found'}), 404
@@ -2612,8 +2630,8 @@ def get_secret(name):
 @app.route('/api/vault/secrets/<name>', methods=['DELETE'])
 def delete_secret(name):
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     result = app.vault_manager.delete_secret(name)
     return jsonify({'deleted': result})
 
@@ -2621,8 +2639,8 @@ def delete_secret(name):
 @app.route('/api/containers', methods=['POST'])
 def create_container():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     data = request.get_json(silent=True)
     if not data or 'image' not in data:
         return jsonify({'error': 'Missing image parameter'}), 400
@@ -2653,8 +2671,8 @@ def create_container():
 @app.route('/api/containers/<name>', methods=['DELETE'])
 def remove_container(name):
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     force = request.args.get('force', default=False, type=bool)
     success = container_manager.remove_container(name, force=force)
     return jsonify({'removed': success})
@@ -2662,8 +2680,8 @@ def remove_container(name):
 @app.route('/api/images', methods=['GET'])
 def list_images():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     images = container_manager.list_images()
     return jsonify(images)
 
@@ -2690,8 +2708,8 @@ def remove_image(image):
 @app.route('/api/volumes', methods=['GET'])
 def list_volumes():
     # Temporarily disable access control for debugging
-    # if not is_local_request():
-    #     return jsonify({'error': 'Access denied'}), 403
+    if not is_local_request():
+        return jsonify({'error': 'Access denied'}), 403
     volumes = container_manager.list_volumes()
     return jsonify(volumes)
+58 -18

@@ -117,11 +117,15 @@ class ConfigManager:
         return {}
 
     def _save_all_configs(self):
-        """Save all service configurations to the unified config file"""
+        """Save all service configurations to the unified config file (atomic write)."""
         try:
             self.config_file.parent.mkdir(parents=True, exist_ok=True)
-            with open(self.config_file, 'w') as f:
+            tmp = self.config_file.with_suffix('.tmp')
+            with open(tmp, 'w') as f:
                 json.dump(self.configs, f, indent=2)
+                f.flush()
+                os.fsync(f.fileno())
+            os.replace(tmp, self.config_file)
         except (PermissionError, OSError):
             pass
 
@@ -208,31 +212,47 @@ class ConfigManager:
         }
 
     def backup_config(self) -> str:
-        """Create a backup of all configurations"""
+        """Create a backup of cell_config.json, secrets, Caddyfile, .env, Corefile, and DNS zones."""
        try:
             timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
             backup_id = f"backup_{timestamp}"
             backup_path = self.backup_dir / backup_id
 
-            # Create backup directory
             backup_path.mkdir(parents=True, exist_ok=True)
 
-            # Copy all config files
+            # Primary config and secrets
             if self.config_file.exists():
                 shutil.copy2(self.config_file, backup_path / 'cell_config.json')
 
-            # Copy secrets file if it exists
             if self.secrets_file.exists():
                 shutil.copy2(self.secrets_file, backup_path / 'secrets.yaml')
 
-            # Create backup manifest
+            # Runtime-generated files that must match cell_config.json after restore
+            config_dir = Path(os.environ.get('CONFIG_DIR', '/app/config'))
+            data_dir = Path(os.environ.get('DATA_DIR', '/app/data'))
+            env_file = Path(os.environ.get('ENV_FILE', '/app/.env'))
+
+            extra = [
+                (config_dir / 'caddy' / 'Caddyfile', 'Caddyfile'),
+                (config_dir / 'dns' / 'Corefile', 'Corefile'),
+                (env_file, '.env'),
+            ]
+            for src, dest_name in extra:
+                if src.exists():
+                    shutil.copy2(src, backup_path / dest_name)
+
+            # DNS zone files
+            dns_data = data_dir / 'dns'
+            if dns_data.is_dir():
+                zones_dir = backup_path / 'dns_zones'
+                zones_dir.mkdir(exist_ok=True)
+                for zone_file in dns_data.glob('*.zone'):
+                    shutil.copy2(zone_file, zones_dir / zone_file.name)
+
             manifest = {
                 "backup_id": backup_id,
                 "timestamp": datetime.now().isoformat(),
                 "services": list(self.service_schemas.keys()),
-                "files": [f.name for f in backup_path.iterdir()]
+                "files": [f.name for f in backup_path.iterdir()],
             }
 
             with open(backup_path / 'manifest.json', 'w') as f:
                 json.dump(manifest, f, indent=2)
 
@@ -244,26 +264,46 @@ class ConfigManager:
             raise
 
     def restore_config(self, backup_id: str) -> bool:
-        """Restore configuration from backup"""
+        """Restore cell_config.json, secrets, Caddyfile, .env, Corefile, and DNS zones from backup."""
         try:
             backup_path = self.backup_dir / backup_id
             if not backup_path.exists():
                 raise ValueError(f"Backup {backup_id} not found")
-            # Read manifest
             manifest_file = backup_path / 'manifest.json'
             if not manifest_file.exists():
                 raise ValueError(f"Backup manifest not found")
-            with open(manifest_file, 'r') as f:
-                manifest = json.load(f)
-            # Restore config files
+
+            # Restore primary config
             config_backup = backup_path / 'cell_config.json'
             if config_backup.exists():
                 shutil.copy2(config_backup, self.config_file)
-            # Restore secrets file if it exists
             secrets_backup = backup_path / 'secrets.yaml'
             if secrets_backup.exists():
                 shutil.copy2(secrets_backup, self.secrets_file)
-            # Reload configurations — restore only what was in the backup
+
+            # Restore runtime-generated files so they stay consistent with cell_config.json
+            config_dir = Path(os.environ.get('CONFIG_DIR', '/app/config'))
+            data_dir = Path(os.environ.get('DATA_DIR', '/app/data'))
+            env_file = Path(os.environ.get('ENV_FILE', '/app/.env'))
+
+            restore_map = [
+                (backup_path / 'Caddyfile', config_dir / 'caddy' / 'Caddyfile'),
+                (backup_path / 'Corefile', config_dir / 'dns' / 'Corefile'),
+                (backup_path / '.env', env_file),
+            ]
+            for src, dest in restore_map:
+                if src.exists():
+                    dest.parent.mkdir(parents=True, exist_ok=True)
+                    shutil.copy2(src, dest)
+
+            # Restore DNS zone files
+            zones_backup = backup_path / 'dns_zones'
+            if zones_backup.is_dir():
+                dns_data = data_dir / 'dns'
+                dns_data.mkdir(parents=True, exist_ok=True)
+                for zone_file in zones_backup.glob('*.zone'):
+                    shutil.copy2(zone_file, dns_data / zone_file.name)
+
             self.configs = self._load_all_configs()
             logger.info(f"Restored configuration from backup: {backup_id}")
             return True
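The write-to-`.tmp`, `fsync`, `os.replace` sequence these changes apply everywhere can be sketched on its own. `atomic_write_json` is an illustrative helper, not a function from the codebase; the guarantee comes from `os.replace()` being an atomic rename on POSIX, so readers see either the old file or the new one, never a truncated mix:

```python
import json
import os
import tempfile

def atomic_write_json(path: str, payload) -> None:
    """Write JSON to path atomically: tmp file -> fsync -> rename over target."""
    tmp = path + '.tmp'
    with open(tmp, 'w') as f:
        json.dump(payload, f, indent=2)
        f.flush()
        os.fsync(f.fileno())   # force bytes to disk before the rename
    os.replace(tmp, path)      # atomic swap; the .tmp file is gone afterwards

d = tempfile.mkdtemp()
target = os.path.join(d, 'cell_config.json')
atomic_write_json(target, {'dns': {'enabled': True}})
atomic_write_json(target, {'dns': {'enabled': False}})
with open(target) as f:
    print(json.load(f))   # {'dns': {'enabled': False}}
```

A process killed mid-write leaves only a stale `.tmp` behind; the real target is untouched until the rename.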
+11 -9

@@ -276,14 +276,16 @@ def generate_corefile(peers: List[Dict[str, Any]], corefile_path: str = COREFILE
 }}
 
 {primary_zone_block}
-local.{domain} {{
-    file /data/local.zone
-    log
-}}
 """
+    # local.{domain} block intentionally omitted: /data/local.zone does not exist
+    # and CoreDNS logs errors on every reload for a missing zone file.
     os.makedirs(os.path.dirname(corefile_path), exist_ok=True)
-    with open(corefile_path, 'w') as f:
+    tmp_path = corefile_path + '.tmp'
+    with open(tmp_path, 'w') as f:
         f.write(corefile)
+        f.flush()
+        os.fsync(f.fileno())
+    os.replace(tmp_path, corefile_path)
 
     logger.info(f"Wrote Corefile to {corefile_path}")
     return True
@@ -293,13 +295,13 @@ local.{domain} {{
 
 
 def reload_coredns() -> bool:
-    """Send SIGHUP to CoreDNS container to reload config."""
+    """Signal CoreDNS to reload its config. SIGUSR1 triggers the reload plugin; SIGHUP kills the process."""
     try:
-        result = _run(['docker', 'kill', '--signal=SIGHUP', 'cell-dns'], check=False)
+        result = _run(['docker', 'kill', '--signal=SIGUSR1', 'cell-dns'], check=False)
         if result.returncode == 0:
-            logger.info("Sent SIGHUP to cell-dns")
+            logger.info("Sent SIGUSR1 to cell-dns (reload)")
             return True
-        logger.warning(f"SIGHUP to cell-dns failed: {result.stderr.strip()}")
+        logger.warning(f"SIGUSR1 to cell-dns failed: {result.stderr.strip()}")
         return False
     except Exception as e:
         logger.error(f"reload_coredns: {e}")
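The signal choice above can be exercised without Docker by putting the command behind an injectable runner. This is a minimal sketch, not the shape of the codebase's real `_run` helper; the signal semantics follow the fix's own rationale (SIGUSR1 reloads, SIGHUP terminates):

```python
import subprocess
from typing import Callable, List, Optional

# Per the fix above: CoreDNS reloads its config on SIGUSR1, while SIGHUP
# terminates the process.
RELOAD_SIGNAL = 'SIGUSR1'

def reload_coredns(run: Optional[Callable[[List[str]], int]] = None) -> bool:
    """Ask the cell-dns container to re-read its Corefile; True on success."""
    if run is None:
        run = lambda cmd: subprocess.run(cmd).returncode
    return run(['docker', 'kill', f'--signal={RELOAD_SIGNAL}', 'cell-dns']) == 0

# Exercised without Docker by injecting a runner that records the command.
sent = []
ok = reload_coredns(lambda cmd: (sent.append(cmd), 0)[1])
print(ok, sent[0][2])  # True --signal=SIGUSR1
```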
+10 -2

@@ -200,8 +200,12 @@ http://api.{domain} {{
 }}
 """
         os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)
-        with open(path, 'w') as f:
+        tmp = path + '.tmp'
+        with open(tmp, 'w') as f:
             f.write(content)
+            f.flush()
+            os.fsync(f.fileno())
+        os.replace(tmp, path)
         return True
     except Exception:
         return False
@@ -229,8 +233,12 @@ def write_env_file(ip_range: str, path: str, ports: Optional[Dict[str, int]] = N
         for key, var in PORT_ENV_VAR_NAMES.items():
             lines.append(f'{var}={merged_ports[key]}\n')
         os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)
-        with open(path, 'w') as f:
+        tmp = path + '.tmp'
+        with open(tmp, 'w') as f:
             f.writelines(lines)
+            f.flush()
+            os.fsync(f.fileno())
+        os.replace(tmp, path)
         return True
     except Exception:
         return False
@@ -34,8 +34,12 @@ class NetworkManager(BaseServiceManager):
         # Create zone file content
         content = self._generate_zone_content(zone_name, records)
 
-        with open(zone_file, 'w') as f:
+        tmp_file = zone_file + '.tmp'
+        with open(tmp_file, 'w') as f:
             f.write(content)
+            f.flush()
+            os.fsync(f.fileno())
+        os.replace(tmp_file, zone_file)
 
         # Reload DNS service
         self._reload_dns_service()
+21 -7

@@ -2,6 +2,16 @@
 """
 Routing Manager for Personal Internet Cell
 Handles VPN gateway, NAT, iptables, and advanced routing
+
+NOTE: This manager runs iptables/ip-route commands on the HOST (the machine running
+docker-compose), not inside cell-wireguard. This is intentional for host-level
+routing features (exit-node, bridge, split-route) that are not yet wired to any
+UI endpoint. The manager is instantiated but its methods are not called by any
+active API route.
+
+HISTORY: _remove_nat_rule previously flushed ALL of POSTROUTING (-F), which
+would have wiped the WireGuard MASQUERADE rule; it now removes only its own
+rule with a targeted -D match.
 """
 
 import os
@@ -766,14 +776,18 @@ class RoutingManager(BaseServiceManager):
             logger.error(f"Failed to apply NAT rule: {e}")
 
     def _remove_nat_rule(self, rule_id: str):
-        """Remove NAT rule from iptables"""
+        """Remove NAT rule from iptables by rule_id comment tag."""
         try:
-            # This is a simplified removal - in practice you'd need to track the exact rule
-            cmd = ['iptables', '-t', 'nat', '-F', 'POSTROUTING']
-            subprocess.run(cmd, check=True, timeout=10)
-            logger.info(f"Removed NAT rule: {rule_id}")
+            # Use -D with the comment tag to remove the specific rule rather than
+            # flushing the entire POSTROUTING chain (which would wipe WireGuard MASQUERADE).
+            cmd = ['iptables', '-t', 'nat', '-D', 'POSTROUTING',
+                   '-m', 'comment', '--comment', rule_id, '-j', 'MASQUERADE']
+            result = subprocess.run(cmd, timeout=10)
+            if result.returncode != 0:
+                # Rule may not exist — not an error
+                logger.debug(f"NAT rule {rule_id} not found (already removed?)")
+            else:
+                logger.info(f"Removed NAT rule: {rule_id}")
         except Exception as e:
             logger.error(f"Failed to remove NAT rule: {e}")
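The targeted deletion works because `iptables -D` requires the rule's exact match spec and removes only that rule, where `-F` would empty the whole chain. A pure command builder keeps that checkable without root; the `-m comment`/`-j MASQUERADE` spec here assumes rules are inserted with the same comment tag, as in the change above:

```python
from typing import List

def nat_delete_cmd(rule_id: str) -> List[str]:
    """Build the targeted `iptables -D` invocation for a comment-tagged NAT rule.
    -D must repeat the rule's match spec exactly; -F POSTROUTING would also drop
    the WireGuard MASQUERADE rule sharing the chain."""
    return ['iptables', '-t', 'nat', '-D', 'POSTROUTING',
            '-m', 'comment', '--comment', rule_id, '-j', 'MASQUERADE']

print(' '.join(nat_delete_cmd('nat-rule-42')))
```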
@@ -0,0 +1,45 @@
+"""
+Shared pytest fixtures for the PIC test suite.
+"""
+import os
+import sys
+import json
+import tempfile
+import shutil
+import pytest
+
+# Ensure api/ is on the path for all tests
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'api'))
+
+
+@pytest.fixture
+def tmp_dir():
+    """Temporary directory that is cleaned up after each test."""
+    d = tempfile.mkdtemp()
+    yield d
+    shutil.rmtree(d, ignore_errors=True)
+
+
+@pytest.fixture
+def tmp_config_dir(tmp_dir):
+    """Temporary config dir with the sub-directories expected by managers."""
+    for sub in ('api', 'caddy', 'dns', 'dhcp', 'ntp', 'wireguard'):
+        os.makedirs(os.path.join(tmp_dir, sub), exist_ok=True)
+    return tmp_dir
+
+
+@pytest.fixture
+def tmp_data_dir(tmp_dir):
+    """Temporary data dir with the sub-directories expected by managers."""
+    for sub in ('dns', 'mail', 'calendar', 'files', 'wireguard'):
+        os.makedirs(os.path.join(tmp_dir, sub), exist_ok=True)
+    return tmp_dir
+
+
+@pytest.fixture
+def flask_client():
+    """Flask test client with TESTING mode enabled."""
+    from app import app
+    app.config['TESTING'] = True
+    with app.test_client() as client:
+        yield client
@@ -141,17 +141,23 @@ class TestAPIEndpoints(unittest.TestCase):
         mock_network.add_dhcp_reservation.return_value = True
         response = self.client.post('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2', 'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 200)
-        # Simulate error
-        mock_network.add_dhcp_reservation.side_effect = Exception('fail')
+        # Missing mac field → 400, not 500
         response = self.client.post('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
+        self.assertEqual(response.status_code, 400)
+        # Simulate manager error
+        mock_network.add_dhcp_reservation.side_effect = Exception('fail')
+        response = self.client.post('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2', 'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 500)
         # Mock remove_dhcp_reservation
         mock_network.remove_dhcp_reservation.return_value = True
-        response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
+        response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 200)
-        # Simulate error
-        mock_network.remove_dhcp_reservation.side_effect = Exception('fail')
+        # Missing mac → 400
         response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'ip': '10.0.0.2'}), content_type='application/json')
+        self.assertEqual(response.status_code, 400)
+        # Simulate manager error
+        mock_network.remove_dhcp_reservation.side_effect = Exception('fail')
+        response = self.client.delete('/api/dhcp/reservations', data=json.dumps({'mac': '00:11:22:33:44:55'}), content_type='application/json')
         self.assertEqual(response.status_code, 500)
 
     @patch('app.network_manager')
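The POST/DELETE handlers behind these tests were fixed to unpack `mac`/`ip`/`hostname` from the JSON body instead of passing the whole dict positionally to `add_dhcp_reservation`. A minimal sketch of that unpacking, with a hypothetical helper name (the real handlers live in app.py and may differ in shape):

```python
def parse_reservation(body):
    """Unpack a DHCP reservation request body.

    Returns ((mac, ip, hostname), None) on success, or (None, error)
    when a required field is missing: the 400 path tested above.
    """
    if not isinstance(body, dict):
        return None, 'JSON body required'
    mac = body.get('mac')
    ip = body.get('ip')
    if not mac or not ip:
        return None, 'mac and ip are required'
    return (mac, ip, body.get('hostname')), None


# A body missing 'mac' is rejected before the manager is ever called,
# so validation errors surface as 400 rather than a manager-level 500.
args, err = parse_reservation({'ip': '10.0.0.2'})
assert args is None and err is not None
```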
tests/test_app_misc.py (+37 -10)

@@ -45,7 +45,6 @@ class TestAppMisc(unittest.TestCase):
             patch.object(app_module, 'calendar_manager', MagicMock()),
             patch.object(app_module, 'file_manager', MagicMock()),
             patch.object(app_module, 'routing_manager', MagicMock()),
-            patch.object(app_module, 'cell_manager', MagicMock()),
             patch.object(app_module, 'container_manager', MagicMock()),
         ]
         for p in self.patches:
@@ -97,18 +96,46 @@ class TestAppMisc(unittest.TestCase):
         self.assertEqual(ctx['path'], '/test')
         self.assertEqual(ctx['user'], 'user1')
 
-    def test_is_local_request(self):
-        class DummyRequest:
-            remote_addr = '127.0.0.1'
-            headers = {}
-        with patch('app.request', new=DummyRequest()):
+    def _req(self, remote_addr, xff=''):
+        class R:
+            pass
+        r = R()
+        r.remote_addr = remote_addr
+        r.headers = {'X-Forwarded-For': xff} if xff else {}
+        return r
+
+    def test_is_local_request_loopback(self):
+        with patch('app.request', new=self._req('127.0.0.1')):
             self.assertTrue(app_module.is_local_request())
-        class DummyRequest2:
-            remote_addr = '8.8.8.8'
-            headers = {}
-        with patch('app.request', new=DummyRequest2()):
+
+    def test_is_local_request_public_ip(self):
+        with patch('app.request', new=self._req('8.8.8.8')):
             self.assertFalse(app_module.is_local_request())
+
+    def test_is_local_request_private_ip(self):
+        with patch('app.request', new=self._req('192.168.1.5')):
+            self.assertTrue(app_module.is_local_request())
+
+    def test_is_local_request_xff_spoof_rejected(self):
+        # Client sends X-Forwarded-For: 127.0.0.1 but actual IP is public
+        # Old code would trust the first XFF entry — fixed to trust only last
+        with patch('app.request', new=self._req('8.8.8.8', xff='127.0.0.1, 8.8.8.8')):
+            self.assertFalse(app_module.is_local_request())
+
+    def test_is_local_request_xff_last_entry_local(self):
+        # Caddy appends the real client IP; last entry is local → allow
+        with patch('app.request', new=self._req('8.8.8.8', xff='8.8.8.8, 192.168.1.10')):
+            self.assertTrue(app_module.is_local_request())
+
+    def test_is_local_request_xff_single_public_rejected(self):
+        with patch('app.request', new=self._req('8.8.8.8', xff='1.2.3.4')):
+            self.assertFalse(app_module.is_local_request())
+
+    def test_is_local_request_cell_network_ip(self):
+        # 172.20.0.10 is the API container's IP — should be allowed
+        with patch('app.request', new=self._req('172.20.0.10')):
+            self.assertTrue(app_module.is_local_request())
 
     def test_health_check_exception(self):
         # Patch datetime to raise exception
         with patch('app.datetime') as mock_dt, app_module.app.app_context():
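The trust rule these tests pin down (loopback/private source allowed; only the last X-Forwarded-For entry honored, since that is the one Caddy itself appends) can be sketched as a plain function. The name and the header plumbing are illustrative, not the project's actual `is_local_request` implementation:

```python
import ipaddress

def is_local_request_sketch(remote_addr, headers):
    """Return True only when the effective client IP is loopback or private.

    Only the LAST X-Forwarded-For entry is trusted: the reverse proxy
    appended it, so a client cannot spoof it by sending its own header.
    """
    effective = remote_addr
    xff = headers.get('X-Forwarded-For', '')
    if xff:
        effective = xff.split(',')[-1].strip()  # proxy-appended entry only
    try:
        ip = ipaddress.ip_address(effective)
    except ValueError:
        return False  # unparseable address: fail closed
    return ip.is_loopback or ip.is_private

# Spoof attempt: first XFF entry is local, proxy-appended entry is public
assert is_local_request_sketch('8.8.8.8', {'X-Forwarded-For': '127.0.0.1, 8.8.8.8'}) is False
# Caddy append case: real client is on the LAN
assert is_local_request_sketch('8.8.8.8', {'X-Forwarded-For': '8.8.8.8, 192.168.1.10'}) is True
```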
tests/test_config_validation.py (new, +174)

@@ -0,0 +1,174 @@
"""
Tests for PUT /api/config input validation (400 paths).

These are the highest-risk untested paths: the only server-side guard against
bad subnet/port values entering persistent config.
"""
import json
import sys
import os
import unittest
from unittest.mock import patch, MagicMock

sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'api'))


def _make_client():
    from app import app
    app.config['TESTING'] = True
    return app.test_client()


def _put(client, payload):
    return client.put(
        '/api/config',
        data=json.dumps(payload),
        content_type='application/json',
    )
# ---------------------------------------------------------------------------
# ip_range validation
# ---------------------------------------------------------------------------

class TestIpRangeValidation(unittest.TestCase):

    def setUp(self):
        self.client = _make_client()

    def test_non_rfc1918_returns_400(self):
        r = _put(self.client, {'ip_range': '1.2.3.0/24'})
        self.assertEqual(r.status_code, 400)
        body = json.loads(r.data)
        self.assertIn('error', body)
        self.assertIn('RFC-1918', body['error'])

    def test_172_0_subnet_returns_400(self):
        # 172.0.0.0/24 is NOT in 172.16.0.0/12 — was the bug on the dev machine
        r = _put(self.client, {'ip_range': '172.0.0.0/24'})
        self.assertEqual(r.status_code, 400)

    def test_172_15_subnet_returns_400(self):
        # One prefix below the 172.16.0.0/12 boundary
        r = _put(self.client, {'ip_range': '172.15.0.0/24'})
        self.assertEqual(r.status_code, 400)

    def test_172_32_subnet_returns_400(self):
        # One prefix above the 172.31.255.255 boundary
        r = _put(self.client, {'ip_range': '172.32.0.0/24'})
        self.assertEqual(r.status_code, 400)

    def test_public_ip_returns_400(self):
        r = _put(self.client, {'ip_range': '8.8.0.0/16'})
        self.assertEqual(r.status_code, 400)

    def test_172_16_exact_boundary_accepted(self):
        # 172.16.0.0/12 is the exact lower boundary — must be valid
        r = _put(self.client, {'ip_range': '172.16.0.0/12'})
        # 200 or 202 — just not 400
        self.assertNotEqual(r.status_code, 400)

    def test_10_network_accepted(self):
        r = _put(self.client, {'ip_range': '10.0.0.0/8'})
        self.assertNotEqual(r.status_code, 400)

    def test_192_168_network_accepted(self):
        r = _put(self.client, {'ip_range': '192.168.0.0/16'})
        self.assertNotEqual(r.status_code, 400)

    def test_invalid_cidr_syntax_returns_400(self):
        r = _put(self.client, {'ip_range': 'not-a-cidr'})
        self.assertEqual(r.status_code, 400)
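The boundary cases above (172.0.x, 172.15.x, 172.32.x) all hinge on membership in 172.16.0.0/12. A validator of the shape these tests imply can be sketched with Python's ipaddress module; the helper name and error strings are illustrative, not the API's actual code:

```python
import ipaddress

# The three RFC-1918 private blocks the tests accept
RFC1918 = [
    ipaddress.ip_network('10.0.0.0/8'),
    ipaddress.ip_network('172.16.0.0/12'),
    ipaddress.ip_network('192.168.0.0/16'),
]

def validate_ip_range(value):
    """Return an error string for a rejected ip_range, or None if acceptable."""
    try:
        net = ipaddress.ip_network(value, strict=False)
    except ValueError:
        return 'ip_range is not valid CIDR notation'
    if net.version != 4:
        return 'ip_range must be an IPv4 network'
    if not any(net.subnet_of(block) for block in RFC1918):
        return 'ip_range must be inside an RFC-1918 private block'
    return None

assert validate_ip_range('172.0.0.0/24') is not None   # not inside 172.16.0.0/12
assert validate_ip_range('172.16.0.0/12') is None      # exact lower boundary
```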
# ---------------------------------------------------------------------------
# Port range validation
# ---------------------------------------------------------------------------

class TestPortValidation(unittest.TestCase):

    def setUp(self):
        self.client = _make_client()

    def test_dns_port_zero_returns_400(self):
        r = _put(self.client, {'network': {'dns_port': 0}})
        self.assertEqual(r.status_code, 400)
        body = json.loads(r.data)
        self.assertIn('dns_port', body.get('error', ''))

    def test_dns_port_65536_returns_400(self):
        r = _put(self.client, {'network': {'dns_port': 65536}})
        self.assertEqual(r.status_code, 400)

    def test_wireguard_port_zero_returns_400(self):
        r = _put(self.client, {'wireguard': {'port': 0}})
        self.assertEqual(r.status_code, 400)

    def test_wireguard_port_65536_returns_400(self):
        r = _put(self.client, {'wireguard': {'port': 65536}})
        self.assertEqual(r.status_code, 400)

    def test_wireguard_port_1_accepted(self):
        r = _put(self.client, {'wireguard': {'port': 1}})
        self.assertNotEqual(r.status_code, 400)

    def test_wireguard_port_65535_accepted(self):
        r = _put(self.client, {'wireguard': {'port': 65535}})
        self.assertNotEqual(r.status_code, 400)

    def test_email_smtp_port_zero_returns_400(self):
        r = _put(self.client, {'email': {'smtp_port': 0}})
        self.assertEqual(r.status_code, 400)

    def test_calendar_port_negative_returns_400(self):
        r = _put(self.client, {'calendar': {'port': -1}})
        self.assertEqual(r.status_code, 400)
# ---------------------------------------------------------------------------
# WireGuard address validation
# ---------------------------------------------------------------------------

class TestWireguardAddressValidation(unittest.TestCase):

    def setUp(self):
        self.client = _make_client()

    def test_bad_wg_address_returns_400(self):
        r = _put(self.client, {'wireguard': {'address': 'not-an-ip'}})
        self.assertEqual(r.status_code, 400)
        body = json.loads(r.data)
        self.assertIn('wireguard.address', body.get('error', ''))

    def test_ip_without_prefix_returns_400(self):
        r = _put(self.client, {'wireguard': {'address': '10.0.0.1'}})
        self.assertEqual(r.status_code, 400)

    def test_valid_wg_address_accepted(self):
        r = _put(self.client, {'wireguard': {'address': '10.0.0.1/24'}})
        self.assertNotEqual(r.status_code, 400)
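The bare-IP case above is subtler than it looks: `ipaddress.ip_interface('10.0.0.1')` parses successfully (it implies /32), so rejecting addresses without a prefix length needs an explicit check, as the commit message notes ("was accepting bare IPs like 10.0.0.1"). A sketch of a validator with that behavior, hypothetical name and messages:

```python
import ipaddress

def validate_wg_address(value):
    """Return an error string for a bad wireguard.address, or None.

    ip_interface() accepts a bare IP with an implied /32, so the
    prefix-length requirement needs its own explicit '/' check.
    """
    if '/' not in str(value):
        return 'wireguard.address must include a prefix length (e.g. 10.0.0.1/24)'
    try:
        ipaddress.ip_interface(value)
    except ValueError:
        return 'wireguard.address is not a valid address/prefix'
    return None

assert validate_wg_address('10.0.0.1') is not None   # bare IP rejected
assert validate_wg_address('10.0.0.1/24') is None
```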
# ---------------------------------------------------------------------------
# Body validation
# ---------------------------------------------------------------------------

class TestBodyValidation(unittest.TestCase):

    def setUp(self):
        self.client = _make_client()

    def test_no_body_returns_400(self):
        r = self.client.put('/api/config', content_type='application/json')
        self.assertEqual(r.status_code, 400)

    def test_empty_body_returns_400(self):
        r = self.client.put('/api/config', data='', content_type='application/json')
        self.assertEqual(r.status_code, 400)

    def test_valid_cell_name_change_returns_200(self):
        r = _put(self.client, {'cell_name': 'testcell'})
        self.assertEqual(r.status_code, 200)


if __name__ == '__main__':
    unittest.main()
tests/test_ip_utils_caddyfile.py (new, +102)

@@ -0,0 +1,102 @@
"""
Tests for ip_utils.write_caddyfile — this function is called on every
ip_range / domain / cell_name change and was previously untested.
"""
import os
import sys
import tempfile
import unittest

sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'api'))

from ip_utils import write_caddyfile, get_service_ips


class TestWriteCaddyfile(unittest.TestCase):

    def setUp(self):
        self.tmp = tempfile.mkdtemp()
        self.path = os.path.join(self.tmp, 'caddy', 'Caddyfile')

    def _write(self, ip_range='172.20.0.0/16', cell_name='mycell', domain='cell'):
        ok = write_caddyfile(ip_range, cell_name, domain, self.path)
        self.assertTrue(ok, "write_caddyfile returned False")
        with open(self.path) as f:
            return f.read()

    def test_creates_file_in_subdirectory(self):
        self._write()
        self.assertTrue(os.path.isfile(self.path))

    def test_cell_domain_vhost_present(self):
        content = self._write(cell_name='mycell', domain='cell')
        self.assertIn('http://mycell.cell', content)

    def test_custom_domain_used(self):
        content = self._write(cell_name='pic0', domain='dev')
        self.assertIn('http://pic0.dev', content)
        self.assertNotIn('mycell', content)
        self.assertNotIn('.cell', content)

    def test_service_subdomains_use_domain(self):
        content = self._write(domain='mynet')
        self.assertIn('http://calendar.mynet', content)
        self.assertIn('http://files.mynet', content)
        self.assertIn('http://mail.mynet', content)
        self.assertIn('http://webdav.mynet', content)

    def test_virtual_ips_match_ip_range(self):
        ip_range = '10.0.0.0/16'
        content = self._write(ip_range=ip_range)
        ips = get_service_ips(ip_range)
        self.assertIn(ips['vip_calendar'], content)
        self.assertIn(ips['vip_files'], content)
        self.assertIn(ips['vip_mail'], content)
        self.assertIn(ips['vip_webdav'], content)

    def test_reverse_proxy_targets_are_internal_ports(self):
        content = self._write()
        self.assertIn('reverse_proxy cell-radicale:5232', content)
        self.assertIn('reverse_proxy cell-filegator:8080', content)
        self.assertIn('reverse_proxy cell-rainloop:8888', content)
        self.assertIn('reverse_proxy cell-webdav:80', content)

    def test_api_proxy_present(self):
        content = self._write()
        self.assertIn('reverse_proxy cell-api:3000', content)

    def test_overwrite_on_second_call(self):
        self._write(cell_name='first', domain='cell')
        content = self._write(cell_name='second', domain='cell')
        self.assertIn('second.cell', content)
        self.assertNotIn('first.cell', content)

    def test_different_ip_ranges_produce_different_vips(self):
        c1 = self._write(ip_range='10.0.0.0/16')
        os.remove(self.path)
        c2 = self._write(ip_range='192.168.1.0/24')
        self.assertNotEqual(c1, c2)

    def test_auto_https_off(self):
        content = self._write()
        self.assertIn('auto_https off', content)

    def test_catchall_block_present(self):
        content = self._write()
        self.assertIn(':80 {', content)

    def test_invalid_ip_range_returns_false(self):
        result = write_caddyfile('not-a-cidr', 'cell', 'cell', self.path)
        self.assertFalse(result)

    def test_file_is_not_empty(self):
        self._write()
        self.assertGreater(os.path.getsize(self.path), 100)

    def tearDown(self):
        import shutil
        shutil.rmtree(self.tmp, ignore_errors=True)


if __name__ == '__main__':
    unittest.main()
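The write path exercised above was also converted to atomic writes in this change (Sprint 2: write to .tmp, fsync, os.replace, so a kill mid-write cannot leave a truncated Caddyfile). The pattern, sketched as a standalone helper with a hypothetical name rather than the project's actual function:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to path so a crash mid-write never leaves a truncated file.

    Pattern: write to a temp file in the SAME directory (os.replace cannot
    cross filesystems), fsync it, then rename it over the destination.
    os.replace is atomic on POSIX: readers see the old file or the new one,
    never a partial write.
    """
    directory = os.path.dirname(path) or '.'
    os.makedirs(directory, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=directory, suffix='.tmp')
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push bytes to disk before the rename
        os.replace(tmp, path)      # atomic swap into place
    except BaseException:
        os.unlink(tmp)             # clean up the temp file on any failure
        raise
```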