Compare commits

...

21 Commits

Author SHA1 Message Date
b39a502257 fix(images): use Docker Registry v2 API for correct digest comparison
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 15:15:22 +01:00
dee07d7b8e fix(images): use Docker Registry v2 API for correct digest comparison
The Docker Hub REST API returns per-platform manifest digests, while
docker image inspect RepoDigests stores the manifest list digest.
These two values never match, causing update_available to always be
True even after a fresh pull.

Fix: use registry-1.docker.io/v2/{name}/manifests/{tag} with anonymous
auth and read the Docker-Content-Digest response header, which is the
exact same digest that docker pull stores in RepoDigests.
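The check described in this commit can be sketched as follows — `hub_digest` fetches an anonymous pull token and HEADs the manifest, and `update_available` compares the result against a local `RepoDigests` entry. Helper names are illustrative, not the appliance's actual `image_service` code:

```python
import json
import urllib.request


def hub_digest(name: str, tag: str) -> str:
    """Return the manifest-list digest Docker Hub reports for name:tag —
    the same value `docker pull` stores in RepoDigests."""
    # Anonymous pull token for the repository
    token_url = ("https://auth.docker.io/token?service=registry.docker.io"
                 f"&scope=repository:{name}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://registry-1.docker.io/v2/{name}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            # Ask for the manifest LIST, not a per-platform manifest —
            # otherwise the returned digest never matches RepoDigests.
            "Accept": "application/vnd.docker.distribution.manifest.list.v2+json,"
                      "application/vnd.oci.image.index.v1+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Docker-Content-Digest"]


def update_available(local_repo_digest: str, hub: str) -> bool:
    """Compare a 'repo@sha256:...' entry from `docker image inspect`
    RepoDigests against the Docker-Content-Digest header value."""
    return local_repo_digest.split("@", 1)[-1] != hub
```

For `netbirdio/management`, `name` would be `netbirdio/management`; single-name official images need a `library/` prefix.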

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 15:15:05 +01:00
351caec893 docs: update README with all current features and correct settings
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-25 08:34:29 +01:00
dd04408dc2 docs: update README with all current features and correct settings
- Add NetBird Container Updates section (digest check, pull, bulk update)
- Add update indicators, dark mode, LDAP/AD, Windows DNS to features
- Correct Settings table: add all tabs, remove incorrect Monitoring entry
- Split Docker Images tab out of System tab
- Add sudo install note for fresh Debian minimal
- Rewrite Updating NetBird Images section with new UI-based workflow
- Add new monitoring/image API endpoints to API docs

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-25 08:33:20 +01:00
6373722c2b chore(release): merge unstable → main for beta-1.0
Promotes alpha-1.25 to beta-1.0 (stable branch).

Highlights:
- NetBird container update management (check / pull / update per customer + bulk)
- Visual update badges on dashboard and customer detail
- Dark mode toggle with localStorage persistence
- User role management for Azure AD / LDAP users
- Branding logo persistence across updates (Docker volume)
- Favicon, NPM stream removal, MFA (TOTP)
- LDAP / Active Directory and Azure AD SSO
- Windows DNS integration
- Settings restructure and Git branch dropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 21:58:24 +01:00
27c8e4889c feat(updates): visual update indicators, progress feedback, settings pull
- Dashboard: update badge (orange) injected lazily into customer Status cell
  after table renders via GET /monitoring/customers/local-update-status
  (local-only Docker inspect, no Hub call on every page load)
- Customer detail Deployment tab: "Update Images" button with spinner,
  shows success/error inline without page reload
- Monitoring Update All: now synchronous + sequential (one customer at a
  time), shows live spinner + per-customer results table on completion
- Settings > Docker Images: "Pull from Docker Hub" button with spinner
  and inline status message
- /monitoring/customers/local-update-status: new lightweight endpoint
  (no network, pure local Docker inspect)
- /monitoring/customers/update-all: removed BackgroundTasks, now awaits
  each customer sequentially and returns detailed per-customer results

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 21:25:33 +01:00
848ead0b2c feat(updates): NetBird container image update management
- New image_service.py: Docker Hub digest check (no pull), local digest/ID
  comparison, pull_all_images, per-customer container image status, and
  update_customer_containers (docker compose up -d, data-safe)
- Monitoring endpoints: GET /images/check (hub vs local + per-customer
  needs_update), POST /images/pull (background), POST /customers/update-all
- Deployment endpoint: POST /{id}/update-images (single-customer update)
- Monitoring page: "NetBird Container Updates" card with Check / Pull / Update
  All buttons; image status table and per-customer update table with inline
  update buttons
- i18n: added keys in en.json and de.json

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 21:01:56 +01:00
Sascha Lustenberger | techlan gmbh
796824c400 feat(users): allow role assignment for Azure AD and LDAP users
- Backend: add admin-only guard + role validation to PUT /users/{id}
- Backend: prevent admins from changing their own role
- Frontend: role toggle button (person-check / person-dash) per user row
- Frontend: admin badge green, viewer badge secondary, ldap badge blue
- i18n: add makeAdmin / makeViewer translations (de + en)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 20:27:54 +01:00
Sascha Lustenberger | techlan gmbh
8103fffcb8 fix(docker): persist branding uploads across container rebuilds
Mount ./data/uploads into /app/static/uploads so uploaded logos
survive image rebuilds during the update process.
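The corresponding compose fragment would look roughly like this (the service name is illustrative):

```yaml
services:
  appliance:   # service name is illustrative
    volumes:
      # Bind-mount host uploads into the container so logos survive rebuilds
      - ./data/uploads:/app/static/uploads
```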

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 20:19:22 +01:00
Sascha Lustenberger | techlan gmbh
13408225b4 feat(ui): add dark mode toggle to navbar
Uses Bootstrap 5.3 native data-bs-theme with localStorage persistence.
Inline script in <head> prevents flash on page load.
Moon/sun icon in top-right navbar switches between light and dark.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 20:08:18 +01:00
Sascha Lustenberger | techlan gmbh
0f77aaa176 fix(deploy): remove NPM stream creation on customer deploy/undeploy
STUN/TURN UDP relay no longer requires NPM stream entries.
NetBird uses rels:// WebSocket relay via NPM proxy host instead.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 19:42:12 +01:00
Sascha Lustenberger | techlan gmbh
0bc7c0ba9f feat(ui): add SVG favicon for NetBird MSP Appliance
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 16:54:21 +01:00
Sascha Lustenberger | techlan gmbh
27428b69a0 fix(netbird): query customer before use in stop/start/restart
In stop_customer, start_customer and restart_customer the local variable
'customer' was referenced on the instance_dir line before it was assigned
(it was only queried after the docker compose call). This caused an
UnboundLocalError (HTTP 500) on every stop/start/restart action.

Fix: move the customer query to the top of each function alongside the
deployment and config queries.
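The failure mode reduces to a standard Python scoping rule — assigning a name anywhere in a function makes it local for the whole function body, so reading it before the assignment raises `UnboundLocalError`. A minimal demonstration (names are illustrative, not the appliance's actual code):

```python
def stop_customer_buggy():
    # 'customer' is assigned later in this function, so Python treats it
    # as a local variable here — this line raises UnboundLocalError.
    instance_dir = f"/opt/netbird-instances/{customer.subdomain}"
    customer = query_customer()  # assigned only after first use
    return instance_dir
```

The fix is simply to move the query above the first use of the variable.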

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 11:12:17 +01:00
Sascha Lustenberger | techlan gmbh
582f92eec4 fix(update): add git safe.directory and fetch --tags after pull
- Register SOURCE_DIR as git safe.directory before pulling so the
  process (root inside container) can access repos owned by a host user
- Run 'git fetch --tags' after pull so git describe always finds the
  latest tag for version.json — git pull does not reliably fetch all tags
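The sequence can be expressed as an ordered command list (a sketch — error handling and `SOURCE_DIR` resolution are simplified):

```python
def update_commands(source_dir: str) -> list[list[str]]:
    """Git commands the updater runs, in order."""
    return [
        # Running as root inside the container against a host-owned
        # checkout — register it as safe or git refuses to operate.
        ["git", "config", "--global", "--add", "safe.directory", source_dir],
        ["git", "-C", source_dir, "pull"],
        # `git pull` does not reliably fetch all tags; do it explicitly
        # so `git describe` finds the latest tag for version.json.
        ["git", "-C", source_dir, "fetch", "--tags"],
        ["git", "-C", source_dir, "describe", "--tags"],
    ]
```

Each entry would then be executed with something like `subprocess.run(cmd, check=True)`.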

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:58:02 +01:00
Sascha Lustenberger | techlan gmbh
1d27226b6f fix(update): detect compose project name at runtime instead of hardcoding
The project name was hardcoded as 'netbirdmsp-appliance' but Docker Compose
derives the project name from the install directory name ('netbird-msp').
This caused Phase A to build an image under the wrong project name and
Phase B to start the replacement container under a mismatched project,
leaving the old container running indefinitely.

Fix: read the 'com.docker.compose.project' label from the running container
at update time. Both Phase A (build) and Phase B (docker compose up) now
use the detected project name. Falls back to SOURCE_DIR basename if the
inspect fails.
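A sketch of the detection logic (the appliance's actual helper may differ; the fallback mirrors the behavior described above):

```python
import os
import subprocess


def detect_project_name(container_name: str, source_dir: str) -> str:
    """Read the 'com.docker.compose.project' label from a running
    container, falling back to the install directory's basename if
    the inspect fails."""
    try:
        out = subprocess.run(
            ["docker", "inspect", "--format",
             '{{ index .Config.Labels "com.docker.compose.project" }}',
             container_name],
            capture_output=True, text=True, check=True,
        )
        name = out.stdout.strip()
        if name:
            return name
    except Exception:
        pass  # docker missing or container not found — use fallback
    return os.path.basename(source_dir.rstrip("/"))
```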

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:51:25 +01:00
Sascha Lustenberger | techlan gmbh
c70dc33f67 fix(caddy): route relay WebSocket traffic to relay container
Add /relay* location block to Caddyfile template so that NetBird relay
WebSocket connections (rels://) are correctly forwarded to the relay
container instead of falling through to the dashboard handler.

Without this fix, all relay WebSocket connections silently hit the
dashboard container, causing STUN/relay connectivity failures for all
deployed NetBird instances.
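A minimal Caddyfile block matching this description might look like the following — the container name and port are assumptions (NetBird's relay listens on 33080 by default), not the template's exact contents:

```
# Route relay WebSocket traffic before the dashboard catch-all
handle /relay* {
    reverse_proxy netbird-relay:33080
}
```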

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:31:08 +01:00
Sascha Lustenberger | techlan gmbh
fb264bf7c6 Fix: Add grpc_pass to NPM advanced_config for Management and Signal endpoints 2026-02-23 14:49:43 +01:00
Sascha Lustenberger | techlan gmbh
f3304b90c8 Fix: correctly detect update when current version is unknown 2026-02-23 13:11:04 +01:00
Sascha Lustenberger | techlan gmbh
cda916f2af Fix: display dynamic version on login and use subdomain for customer directories instead of kunde{id} 2026-02-23 12:58:39 +01:00
c3ab7a5a67 fix(api): correct extraction of commit date from gitea branches api 2026-02-22 22:57:07 +01:00
2713e67259 German language corrections 2026-02-09 15:55:01 +01:00
19 changed files with 1251 additions and 154 deletions


@@ -91,7 +91,7 @@ netbird-msp-appliance/
1. Validate inputs (subdomain unique, email valid)
2. Allocate ports (Management internal, Relay UDP public)
3. Generate configs from Jinja2 templates
4. Create instance directory: `/opt/netbird-instances/kunde{id}/`
4. Create instance directory: `/opt/netbird-instances/{subdomain}/`
5. Write `docker-compose.yml`, `management.json`, `relay.env`
6. Start Docker containers via Docker SDK
7. Wait for health checks (max 60s)
@@ -113,7 +113,7 @@ No manual config file editing required!
### 4. Nginx Proxy Manager Integration
**Per customer, create proxy host:**
- Domain: `{subdomain}.{base_domain}`
- Forward to: `netbird-kunde{id}-dashboard:80`
- Forward to: `netbird-{subdomain}-dashboard:80`
- SSL: Automatic Let's Encrypt
- Advanced config: Route `/api/*` to management, `/signalexchange.*` to signal, `/relay` to relay
@@ -272,7 +272,7 @@ networks:
services:
netbird-management:
image: {{ netbird_management_image }}
container_name: netbird-kunde{{ customer_id }}-management
container_name: netbird-{{ subdomain }}-management
restart: unless-stopped
networks:
- npm-network
@@ -285,7 +285,7 @@ services:
netbird-signal:
image: {{ netbird_signal_image }}
container_name: netbird-kunde{{ customer_id }}-signal
container_name: netbird-{{ subdomain }}-signal
restart: unless-stopped
networks:
- npm-network
@@ -294,7 +294,7 @@ services:
netbird-relay:
image: {{ netbird_relay_image }}
container_name: netbird-kunde{{ customer_id }}-relay
container_name: netbird-{{ subdomain }}-relay
restart: unless-stopped
networks:
- npm-network
@@ -311,7 +311,7 @@ services:
netbird-dashboard:
image: {{ netbird_dashboard_image }}
container_name: netbird-kunde{{ customer_id }}-dashboard
container_name: netbird-{{ subdomain }}-dashboard
restart: unless-stopped
networks:
- npm-network

217
README.md

@@ -38,11 +38,20 @@ A management solution for running isolated NetBird instances for your MSP busine
- **Docker-Based** — Everything runs in containers for easy deployment
### Dashboard
- **Modern Web UI** — Responsive Bootstrap 5 interface
- **Modern Web UI** — Responsive Bootstrap 5 interface with dark/light mode toggle
- **Real-Time Monitoring** — Container status, health checks, resource usage
- **Container Logs** — View logs per container directly in the browser
- **Start / Stop / Restart** — Control customer instances from the dashboard
- **Customer Status Tracking** — Automatic status sync (active / inactive / error)
- **Update Indicators** — Per-customer badges when container images are outdated
### NetBird Container Updates
- **Docker Hub Digest Check** — Compare locally pulled image digests against Docker Hub without pulling
- **One-Click Pull** — Pull all NetBird images from Docker Hub via Settings
- **Bulk Update** — Update all outdated customer containers at once from the Monitoring page
- **Per-Customer Update** — Update a single customer's containers from the customer detail view
- **Zero Data Loss** — Container recreation preserves all bind-mounted volumes
- **Sequential Updates** — Customers are updated one at a time to minimize risk
### Multi-Language (i18n)
- **English and German** — Full UI translation
@@ -55,13 +64,18 @@ A management solution for running isolated NetBird instances for your MSP busine
- **Login Page** — Branding is applied to the login page automatically
- **Configurable Docker Images** — Use custom or specific NetBird image versions
### Security
### Authentication & User Management
- **JWT Authentication** — Token-based API authentication
- **Multi-Factor Authentication (MFA)** — Optional TOTP-based MFA for all local users, activatable in Security settings
- **Azure AD / OIDC** — Optional single sign-on via Microsoft Entra ID (exempt from MFA)
- **Encrypted Credentials** — NPM passwords, relay secrets, and TOTP secrets are Fernet-encrypted at rest
- **LDAP / Active Directory** — Allow AD users to authenticate; local admin accounts always work as fallback
- **Encrypted Credentials** — NPM passwords, relay secrets, TOTP secrets, and LDAP bind passwords are Fernet-encrypted at rest
- **User Management** — Create, edit, delete admin users, reset passwords and MFA
### Integrations
- **Windows DNS** — Automatically create and delete DNS A-records when deploying or removing customers
- **MSP Updates** — In-UI appliance update check with configurable release branch
---
## Architecture
@@ -95,8 +109,8 @@ A management solution for running isolated NetBird instances for your MSP busine
| | Caddy | | | | Caddy | |
| +------------+ | | +------------+ |
+------------------+ +------------------+
kunde1.domain.de kundeN.domain.de
UDP 3478 UDP 3478+N-1
customer-a.domain.de customer-x.domain.de
UDP 3478              UDP 3478+N-1
```
### Components per Customer Instance (5 containers):
@@ -140,9 +154,9 @@ Example for 3 customers:
| Customer | Dashboard (TCP) | Relay (UDP) |
|----------|----------------|-------------|
| Kunde 1 | 9001 | 3478 |
| Kunde 2 | 9002 | 3479 |
| Kunde 3 | 9003 | 3480 |
| Customer-A | 9001 | 3478 |
| Customer-C | 9002 | 3479 |
| Customer-X | 9003 | 3480 |
**Your firewall must allow both the TCP dashboard ports and the UDP relay ports!**
@@ -178,6 +192,18 @@ The following tools and services must be available **before** running the instal
### Install Prerequisites (Ubuntu/Debian)
> **Note:** On a fresh Debian minimal install, `sudo` is not pre-installed. Install it as root first:
```bash
# As root — only needed on fresh Debian minimal (sudo not pre-installed):
apt update && apt install -y sudo
# Install remaining prerequisites:
sudo apt install -y curl git openssl
```
If `sudo` is already available (Ubuntu, most standard installs):
```bash
sudo apt update
sudo apt install -y curl git openssl
@@ -266,17 +292,32 @@ HOST_IP=<your-server-ip>
### Web UI Settings
Available under **Settings** in the web interface:
Available under **Settings** in the web interface, organized into tabs:
#### User Management
| Tab | Settings |
|-----|----------|
| **System** | Base domain, admin email, Docker images, port ranges, data directory |
| **NPM Integration** | NPM API URL, login credentials, SSL certificate mode (Let's Encrypt / Wildcard), wildcard certificate selection |
| **Branding** | Platform name, subtitle, logo upload, default language |
| **Azure AD** | Azure AD / Entra ID SSO configuration (tenant ID, client ID/secret, optional group restriction) |
| **Users** | Create/edit/delete admin users, per-user language preference, MFA reset |
| **Azure AD** | Azure AD / Entra ID SSO configuration |
| **LDAP / AD** | LDAP/Active Directory authentication (server, base DN, bind credentials, group restriction), enable/disable |
| **Security** | Change admin password, enable/disable MFA globally, manage own TOTP |
| **Monitoring** | System resources, Docker stats |
#### System
| Tab | Settings |
|-----|----------|
| **Branding** | Platform name, subtitle, logo upload, default language |
| **NetBird Docker Images** | Configured NetBird image tags (management, signal, relay, dashboard), pull images from Docker Hub |
| **NetBird MSP System** | Base domain, admin email, port ranges, data directory |
| **NetBird MSP Updates** | Appliance version info, check for updates, switch release branch |
#### External Systems
| Tab | Settings |
|-----|----------|
| **NPM Proxy** | NPM API URL, login credentials, SSL certificate mode (Let's Encrypt / Wildcard), wildcard certificate selection |
| **Windows DNS** | Windows DNS server integration for automatic DNS A-record creation/deletion on customer deploy/delete |
Changes are applied immediately without restart.
@@ -308,11 +349,42 @@ Changes are applied immediately without restart.
### Monitoring
The dashboard shows:
The **Monitoring** page shows:
- **System Overview** — Total customers, active/inactive, errors
- **Resource Usage** — RAM, CPU per container
- **Container Health** — Running/stopped per container with color-coded status
- **Deployment Logs** — Action history per customer
- **Host Resources** — CPU, RAM, disk usage of the host machine
- **Customer Status** — Container health per customer (running/stopped)
- **NetBird Container Updates** — Compare local image digests against Docker Hub, pull new images, and update all outdated customer containers
### NetBird Container Updates
#### Workflow
1. **Check for updates** — Go to **Monitoring > NetBird Container Updates**, click **"Check Updates"**
- Compares local image digests against Docker Hub
- Shows which images have a new version available
- Shows which customer containers are running outdated images
- An orange badge appears next to customers in the dashboard list that need updating
2. **Pull new images** — Go to **Settings > NetBird Docker Images**, click **"Pull from Docker Hub"**
- Pulls all 4 NetBird images (`management`, `signal`, `relay`, `dashboard`) in the background
- Wait for the pull to complete before updating customers
3. **Update customers** — Return to **Monitoring > NetBird Container Updates**, click **"Update All Customers"**
- Recreates containers for all customers whose running image is outdated
- Customers are updated **sequentially** — one at a time
- All bind-mounted volumes (database, keys, config) are preserved — **no data loss**
- A per-customer results table is shown after completion
#### Per-Customer Update
To update a single customer:
1. Open the customer detail view
2. Go to the **Deployment** tab
3. Click **"Update Images"**
#### Update Badges
The dashboard customer list shows an orange **"Update"** badge next to any customer whose running containers are using an outdated local image. This check is fast (local-only, no network call) and runs automatically when the dashboard loads.
### Language Settings
@@ -320,9 +392,13 @@ The dashboard shows:
- **Per-user default** — Set in Settings > Users during user creation
- **System default** — Set in Settings > Branding
### Dark Mode
Toggle dark/light mode using the moon/sun icon in the top navigation bar. The preference is saved in the browser.
### Multi-Factor Authentication (MFA)
TOTP-based MFA can be enabled globally for all local users. Azure AD users are not affected (they use their own MFA).
TOTP-based MFA can be enabled globally for all local users. Azure AD and LDAP users are not affected (they use their own authentication systems).
#### Enable MFA
1. Go to **Settings > Security**
@@ -344,9 +420,30 @@ When MFA is enabled and a user logs in for the first time:
- **Disable own TOTP** — In Settings > Security, click "Disable my TOTP" to remove your own MFA setup
- **Disable MFA globally** — Uncheck the toggle in Settings > Security to allow login without MFA
### LDAP / Active Directory Authentication
Active Directory users can log in to the appliance using their AD credentials. Local admin accounts always work as a fallback regardless of LDAP status.
#### Setup
1. Go to **Settings > LDAP / AD**
2. Enable **"LDAP / AD Authentication"**
3. Enter LDAP server, port, bind DN (service account), bind password, and base DN
4. Optionally restrict access to members of a specific AD group
5. Click **Save LDAP Settings**
### Windows DNS Integration
Automatically create and delete DNS A-records in a Windows DNS server when customers are deployed or deleted.
#### Setup
1. Go to **Settings > Windows DNS**
2. Enable **"Windows DNS Integration"**
3. Enter the DNS server details
4. Click **Save DNS Settings**
### SSL Certificate Mode
The appliance supports two SSL certificate modes for customer proxy hosts, configurable under **Settings > NPM Integration**:
The appliance supports two SSL certificate modes for customer proxy hosts, configurable under **Settings > NPM Proxy**:
#### Let's Encrypt (default)
Each customer gets an individual Let's Encrypt certificate via HTTP-01 validation. This is the default behavior and requires no additional setup beyond a valid admin email.
@@ -356,7 +453,7 @@ Use a pre-existing wildcard certificate (e.g. `*.yourdomain.com`) already upload
**Setup:**
1. Upload a wildcard certificate in Nginx Proxy Manager (e.g. via DNS challenge)
2. Go to **Settings > NPM Integration**
2. Go to **Settings > NPM Proxy**
3. Set **SSL Mode** to "Wildcard Certificate"
4. Click the refresh button to load certificates from NPM
5. Select your wildcard certificate from the dropdown
@@ -385,30 +482,37 @@ http://your-server:8000/api/docs
**Common Endpoints:**
```
POST /api/customers # Create customer + deploy
GET /api/customers # List all customers
GET /api/customers/{id} # Get customer details
PUT /api/customers/{id} # Update customer
DELETE /api/customers/{id} # Delete customer
POST /api/customers # Create customer + deploy
GET /api/customers # List all customers
GET /api/customers/{id} # Get customer details
PUT /api/customers/{id} # Update customer
DELETE /api/customers/{id} # Delete customer
POST /api/customers/{id}/start # Start containers
POST /api/customers/{id}/stop # Stop containers
POST /api/customers/{id}/restart # Restart containers
GET /api/customers/{id}/logs # Get container logs
GET /api/customers/{id}/health # Health check
POST /api/customers/{id}/start # Start containers
POST /api/customers/{id}/stop # Stop containers
POST /api/customers/{id}/restart # Restart containers
GET /api/customers/{id}/logs # Get container logs
GET /api/customers/{id}/health # Health check
POST /api/customers/{id}/update-images # Recreate containers with new images
GET /api/settings/branding # Get branding (public, no auth)
GET /api/settings/npm-certificates # List NPM SSL certificates
PUT /api/settings # Update system settings
GET /api/users # List users
POST /api/users # Create user
POST /api/users/{id}/reset-mfa # Reset user's MFA
GET /api/settings/branding # Get branding (public, no auth)
GET /api/settings/npm-certificates # List NPM SSL certificates
PUT /api/settings # Update system settings
POST /api/auth/mfa/setup # Generate TOTP secret + QR code
POST /api/auth/mfa/setup/complete # Verify first TOTP code
POST /api/auth/mfa/verify # Verify TOTP code on login
GET /api/auth/mfa/status # Get MFA status
POST /api/auth/mfa/disable # Disable own TOTP
GET /api/users # List users
POST /api/users # Create user
POST /api/users/{id}/reset-mfa # Reset user's MFA
POST /api/auth/mfa/setup # Generate TOTP secret + QR code
POST /api/auth/mfa/setup/complete # Verify first TOTP code
POST /api/auth/mfa/verify # Verify TOTP code on login
GET /api/auth/mfa/status # Get MFA status
POST /api/auth/mfa/disable # Disable own TOTP
GET /api/monitoring/images/check # Check Hub vs local digests for all images
POST /api/monitoring/images/pull # Pull all NetBird images from Docker Hub (background)
GET /api/monitoring/customers/local-update-status # Fast local-only update check (no network)
POST /api/monitoring/customers/update-all # Recreate outdated containers for all customers
```
### Example: Create Customer via API
@@ -488,11 +592,28 @@ The database migrations run automatically on startup.
### Updating NetBird Images
Via the Web UI:
1. Settings > System Configuration
2. Change image tags (e.g., `netbirdio/management:0.35.0`)
3. Click "Save"
4. Re-deploy individual customers to apply the new images
NetBird image updates are managed entirely through the Web UI — no manual config changes required.
#### Step 1 — Pull new images
1. Go to **Settings > NetBird Docker Images**
2. Click **"Pull from Docker Hub"**
3. Wait for the pull to complete (progress shown inline)
#### Step 2 — Check which customers need updating
1. Go to **Monitoring > NetBird Container Updates**
2. Click **"Check Updates"**
3. The table shows per-image Hub vs. local digest comparison and which customers are running outdated containers
#### Step 3 — Update customer containers
- **All customers**: Click **"Update All Customers"** in the Monitoring page
- Customers are updated sequentially, one at a time
- A results table is shown after completion
- **Single customer**: Open the customer detail view > **Deployment** tab > **"Update Images"**
> All bind-mounted volumes (database, keys, config files) are preserved. Container recreation does not cause data loss.
---
@@ -546,7 +667,7 @@ MIT License — see [LICENSE](LICENSE) file for details.
## Built With AI
This software was developed with [Claude Code](https://claude.ai/claude-code) (Anthropic Claude Opus 4.6) — from architecture and backend logic to frontend UI and deployment scripts.
This software was developed with [Claude Code](https://claude.ai/claude-code) (Anthropic Claude Sonnet 4.6) — from architecture and backend logic to frontend UI and deployment scripts.
## Acknowledgments


@@ -7,8 +7,8 @@ from sqlalchemy.orm import Session
from app.database import SessionLocal, get_db
from app.dependencies import get_current_user
from app.models import Customer, Deployment, User
from app.services import docker_service, netbird_service
from app.models import Customer, Deployment, SystemConfig, User
from app.services import docker_service, image_service, netbird_service
from app.utils.security import decrypt_value
logger = logging.getLogger(__name__)
@@ -207,6 +207,50 @@ async def get_customer_credentials(
}
@router.post("/{customer_id}/update-images")
async def update_customer_images(
customer_id: int,
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
):
"""Recreate a customer's containers to pick up newly pulled images.
Images must already be pulled via POST /monitoring/images/pull.
Bind-mounted data is preserved — no data loss.
"""
if current_user.role != "admin":
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Admin only.")
customer = _require_customer(db, customer_id)
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
if not deployment:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="No deployment found for this customer.",
)
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(
status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured."
)
instance_dir = f"{config.data_dir}/{customer.subdomain}"
result = await image_service.update_customer_containers(instance_dir, deployment.container_prefix)
if not result["success"]:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.get("error", "Failed to update containers."),
)
logger.info(
"Containers updated for customer '%s' (prefix: %s) by '%s'.",
customer.name, deployment.container_prefix, current_user.username,
)
return {"message": f"Containers updated for '{customer.name}'."}
def _require_customer(db: Session, customer_id: int) -> Customer:
"""Helper to fetch a customer or raise 404.


@@ -5,13 +5,13 @@ import platform
from typing import Any
import psutil
from fastapi import APIRouter, Depends
from fastapi import APIRouter, BackgroundTasks, Depends, HTTPException, status
from sqlalchemy.orm import Session
from app.database import get_db
from app.database import SessionLocal, get_db
from app.dependencies import get_current_user
from app.models import Customer, Deployment, User
from app.services import docker_service
from app.models import Customer, Deployment, SystemConfig, User
from app.services import docker_service, image_service
logger = logging.getLogger(__name__)
router = APIRouter()
@@ -115,3 +115,160 @@ async def host_resources(
"percent": disk.percent,
},
}
@router.get("/images/check")
async def check_image_updates(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
) -> dict[str, Any]:
"""Check all configured NetBird images for available updates on Docker Hub.
Compares local image digests against Docker Hub — no image is pulled.
Returns:
images: dict mapping image name to update status
any_update_available: bool
customer_status: list of per-customer container image status
"""
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured.")
hub_status = await image_service.check_all_images(config)
# Per-customer local check (no network)
deployments = db.query(Deployment).all()
customer_status = []
for dep in deployments:
customer = dep.customer
cs = image_service.get_customer_container_image_status(dep.container_prefix, config)
customer_status.append({
"customer_id": customer.id,
"customer_name": customer.name,
"subdomain": customer.subdomain,
"container_prefix": dep.container_prefix,
"needs_update": cs["needs_update"],
"services": cs["services"],
})
return {**hub_status, "customer_status": customer_status}
@router.post("/images/pull")
async def pull_all_netbird_images(
background_tasks: BackgroundTasks,
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
) -> dict[str, Any]:
"""Pull all configured NetBird images from Docker Hub.
Runs in the background — returns immediately. After pulling, re-check
customer status via GET /images/check to see which customers need updating.
"""
if current_user.role != "admin":
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Admin only.")
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured.")
# Snapshot image list before background task starts
images = [
config.netbird_management_image,
config.netbird_signal_image,
config.netbird_relay_image,
config.netbird_dashboard_image,
]
async def _pull_bg() -> None:
bg_db = SessionLocal()
try:
cfg = bg_db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if cfg:
await image_service.pull_all_images(cfg)
except Exception:
logger.exception("Background image pull failed")
finally:
bg_db.close()
background_tasks.add_task(_pull_bg)
return {"message": "Image pull started in background.", "images": images}
@router.get("/customers/local-update-status")
async def customers_local_update_status(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
) -> list[dict[str, Any]]:
"""Fast local-only check for outdated customer containers.
Compares running container image IDs against locally stored images.
No network call — safe to call on every dashboard load.
"""
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
return []
deployments = db.query(Deployment).all()
results = []
for dep in deployments:
cs = image_service.get_customer_container_image_status(dep.container_prefix, config)
results.append({"customer_id": dep.customer_id, "needs_update": cs["needs_update"]})
return results
@router.post("/customers/update-all")
async def update_all_customers(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
) -> dict[str, Any]:
"""Recreate containers for all customers with outdated images — sequential, synchronous.
Updates customers one at a time so a failing customer does not block others.
Images must already be pulled. Data is preserved (bind mounts).
Returns detailed per-customer results.
"""
if current_user.role != "admin":
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Admin only.")
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured.")
deployments = db.query(Deployment).all()
to_update = []
for dep in deployments:
cs = image_service.get_customer_container_image_status(dep.container_prefix, config)
if cs["needs_update"]:
customer = dep.customer
to_update.append({
"instance_dir": f"{config.data_dir}/{customer.subdomain}",
"project_name": dep.container_prefix,
"customer_name": customer.name,
"customer_id": customer.id,
})
if not to_update:
return {"message": "All customers are already up to date.", "updated": 0, "results": []}
# Update customers sequentially — one at a time
update_results = []
for entry in to_update:
try:
res = await image_service.update_customer_containers(
entry["instance_dir"], entry["project_name"]
)
except Exception as exc:
# A raised exception (e.g. a compose timeout) must not abort the remaining customers.
logger.exception("Update failed for %s", entry["project_name"])
res = {"success": False, "error": str(exc)}
ok = res["success"]
logger.info("Updated %s: %s", entry["project_name"], "OK" if ok else res.get("error"))
update_results.append({
"customer_name": entry["customer_name"],
"customer_id": entry["customer_id"],
"success": ok,
"error": res.get("error"),
})
success_count = sum(1 for r in update_results if r["success"])
return {
"message": f"Updated {success_count} of {len(update_results)} customer(s).",
"updated": success_count,
"results": update_results,
}
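The tail of the endpoint above counts successes and builds the response payload. That aggregation can be read as a small pure helper; a sketch (the helper name is illustrative, not part of the codebase):

```python
def summarize_update_results(results: list[dict]) -> dict:
    # Mirrors the end of update_all_customers: count successful entries and
    # build the payload returned to the UI.
    success_count = sum(1 for r in results if r["success"])
    return {
        "message": f"Updated {success_count} of {len(results)} customer(s).",
        "updated": success_count,
        "results": results,
    }
```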

@@ -237,6 +237,10 @@ async def test_ldap(
@router.get("/branding")
async def get_branding(db: Session = Depends(get_db)):
"""Public endpoint — returns branding info for the login page (no auth required)."""
current_version = update_service.get_current_version().get("tag", "alpha-1.1")
if current_version == "unknown":
current_version = "alpha-1.1"
row = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not row:
return {
@@ -244,12 +248,14 @@ async def get_branding(db: Session = Depends(get_db)):
"branding_subtitle": "Multi-Tenant Management Platform",
"branding_logo_path": None,
"default_language": "en",
"version": current_version
}
return {
"branding_name": row.branding_name or "NetBird MSP Appliance",
"branding_subtitle": row.branding_subtitle or "Multi-Tenant Management Platform",
"branding_logo_path": row.branding_logo_path,
"default_language": row.default_language or "en",
"version": current_version
}

@@ -70,12 +70,31 @@ async def update_user(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
):
"""Update an existing user (email, is_active, role)."""
"""Update an existing user (email, is_active, role). Admin only."""
if current_user.role != "admin":
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Only admins can update users.",
)
user = db.query(User).filter(User.id == user_id).first()
if not user:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="User not found.")
update_data = payload.model_dump(exclude_none=True)
if "role" in update_data:
if update_data["role"] not in ("admin", "viewer"):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Role must be 'admin' or 'viewer'.",
)
if user_id == current_user.id:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="You cannot change your own role.",
)
for field, value in update_data.items():
if hasattr(user, field):
setattr(user, field, value)

@@ -0,0 +1,267 @@
"""NetBird Docker image update service.
Compares locally pulled images against Docker Hub to detect available updates.
Provides pull and per-customer container recreation functions without data loss.
"""
import asyncio
import json
import logging
import os
import subprocess
from typing import Any
import httpx
logger = logging.getLogger(__name__)
# Services that make up a customer's NetBird deployment
NETBIRD_SERVICES = ["management", "signal", "relay", "dashboard"]
async def _run_cmd(cmd: list[str], timeout: int = 300) -> subprocess.CompletedProcess:
"""Run a subprocess command without blocking the event loop."""
loop = asyncio.get_running_loop()  # get_event_loop() is deprecated inside coroutines
return await loop.run_in_executor(
None,
lambda: subprocess.run(cmd, capture_output=True, text=True, timeout=timeout),
)
def _parse_image_name(image: str) -> tuple[str, str]:
"""Split 'repo/name:tag' into ('repo/name', 'tag'). Defaults tag to 'latest'."""
if ":" in image:
name, tag = image.rsplit(":", 1)
else:
name, tag = image, "latest"
return name, tag
async def get_hub_digest(image: str) -> str | None:
"""Fetch the manifest-list digest from the Docker Registry v2 API.
Uses anonymous auth against registry-1.docker.io — does NOT pull the image.
Returns the Docker-Content-Digest header value (sha256:...) which is identical
to the digest stored in local RepoDigests after a pull, enabling correct comparison.
"""
name, tag = _parse_image_name(image)
try:
async with httpx.AsyncClient(timeout=15) as client:
# Step 1: obtain anonymous pull token
token_resp = await client.get(
"https://auth.docker.io/token",
params={"service": "registry.docker.io", "scope": f"repository:{name}:pull"},
)
if token_resp.status_code != 200:
logger.warning("Failed to get registry token for %s", image)
return None
token = token_resp.json().get("token")
# Step 2: fetch manifest — prefer manifest list (multi-arch) so the digest
# matches what `docker pull` stores in RepoDigests.
manifest_resp = await client.get(
f"https://registry-1.docker.io/v2/{name}/manifests/{tag}",
headers={
"Authorization": f"Bearer {token}",
"Accept": (
"application/vnd.docker.distribution.manifest.list.v2+json, "
"application/vnd.oci.image.index.v1+json, "
"application/vnd.docker.distribution.manifest.v2+json"
),
},
)
if manifest_resp.status_code != 200:
logger.warning("Registry API returned %d for %s", manifest_resp.status_code, image)
return None
# The Docker-Content-Digest header is the canonical digest
digest = manifest_resp.headers.get("docker-content-digest")
if digest:
return digest
return None
except Exception as exc:
logger.warning("Failed to fetch registry digest for %s: %s", image, exc)
return None
def get_local_digest(image: str) -> str | None:
"""Get the RepoDigest for a locally pulled image.
Returns the digest (sha256:...) or None if image not found locally.
"""
try:
result = subprocess.run(
["docker", "image", "inspect", image, "--format", "{{json .RepoDigests}}"],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
return None
digests = json.loads(result.stdout.strip())
if not digests:
return None
# RepoDigests look like "netbirdio/management@sha256:abc..."
for d in digests:
if "@" in d:
return d.split("@", 1)[1]
return None
except Exception as exc:
logger.warning("Failed to inspect local image %s: %s", image, exc)
return None
def get_container_image_id(container_name: str) -> str | None:
"""Get the full image ID (sha256:...) of a running or stopped container."""
try:
result = subprocess.run(
["docker", "inspect", container_name, "--format", "{{.Image}}"],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
return None
return result.stdout.strip() or None
except Exception:
return None
def get_local_image_id(image: str) -> str | None:
"""Get the full image ID (sha256:...) of a locally stored image."""
try:
result = subprocess.run(
["docker", "image", "inspect", image, "--format", "{{.Id}}"],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
return None
return result.stdout.strip() or None
except Exception:
return None
async def check_image_status(image: str) -> dict[str, Any]:
"""Check whether a configured image has an update available on Docker Hub.
Returns a dict with:
image: the image name:tag
local_digest: digest of locally cached image (or None)
hub_digest: latest digest from Docker Hub (or None)
update_available: True if hub_digest differs from local_digest
"""
hub_digest, local_digest = await asyncio.gather(
get_hub_digest(image),
asyncio.get_running_loop().run_in_executor(None, get_local_digest, image),
)
if hub_digest and local_digest:
update_available = hub_digest != local_digest
elif hub_digest and not local_digest:
# Image not pulled locally yet — needs pull
update_available = True
else:
# Hub digest unavailable (network or auth failure): report no update to avoid false positives.
update_available = False
return {
"image": image,
"local_digest": local_digest,
"hub_digest": hub_digest,
"update_available": update_available,
}
async def check_all_images(config) -> dict[str, Any]:
"""Check all 4 configured NetBird images for available updates.
Returns a dict with:
images: dict mapping image name -> status dict
any_update_available: bool
"""
images = [
config.netbird_management_image,
config.netbird_signal_image,
config.netbird_relay_image,
config.netbird_dashboard_image,
]
results = await asyncio.gather(*[check_image_status(img) for img in images])
by_image = {r["image"]: r for r in results}
any_update = any(r["update_available"] for r in results)
return {"images": by_image, "any_update_available": any_update}
async def pull_image(image: str) -> dict[str, Any]:
"""Pull a Docker image. Returns success/error dict."""
logger.info("Pulling image: %s", image)
result = await _run_cmd(["docker", "pull", image], timeout=600)
if result.returncode != 0:
logger.error("Failed to pull %s: %s", image, result.stderr)
return {"image": image, "success": False, "error": result.stderr[:500]}
return {"image": image, "success": True}
async def pull_all_images(config) -> dict[str, Any]:
"""Pull all 4 configured NetBird images. Returns results per image."""
images = [
config.netbird_management_image,
config.netbird_signal_image,
config.netbird_relay_image,
config.netbird_dashboard_image,
]
results = await asyncio.gather(*[pull_image(img) for img in images])
return {
"results": {r["image"]: r for r in results},
"all_success": all(r["success"] for r in results),
}
def get_customer_container_image_status(container_prefix: str, config) -> dict[str, Any]:
"""Check which service containers are running outdated local images.
Compares each running container's image ID against the locally stored image ID
for the configured image tag. This is a local check — no network call.
Returns:
services: dict mapping service name to status info
needs_update: True if any service has a different image ID than locally stored
"""
service_images = {
"management": config.netbird_management_image,
"signal": config.netbird_signal_image,
"relay": config.netbird_relay_image,
"dashboard": config.netbird_dashboard_image,
}
services: dict[str, Any] = {}
for svc, image in service_images.items():
container_name = f"{container_prefix}-{svc}"
container_id = get_container_image_id(container_name)
local_id = get_local_image_id(image)
if container_id and local_id:
up_to_date = container_id == local_id
else:
up_to_date = None # container not running or image not pulled
services[svc] = {
"container": container_name,
"image": image,
"up_to_date": up_to_date,
}
needs_update = any(s["up_to_date"] is False for s in services.values())
return {"services": services, "needs_update": needs_update}
async def update_customer_containers(instance_dir: str, project_name: str) -> dict[str, Any]:
"""Recreate customer containers to pick up newly pulled images.
Runs `docker compose up -d` in the customer's instance directory.
Images must already be pulled. Bind-mounted data is preserved — no data loss.
"""
compose_file = os.path.join(instance_dir, "docker-compose.yml")
if not os.path.isfile(compose_file):
return {"success": False, "error": f"docker-compose.yml not found at {compose_file}"}
cmd = [
"docker", "compose",
"-f", compose_file,
"-p", project_name,
"up", "-d", "--remove-orphans",
]
logger.info("Updating containers for %s", project_name)
result = await _run_cmd(cmd, timeout=300)
if result.returncode != 0:
return {"success": False, "error": result.stderr[:1000]}
return {"success": True}

@@ -118,7 +118,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
allocated_port = None
instance_dir = None
container_prefix = f"netbird-kunde{customer_id}"
container_prefix = f"netbird-{customer.subdomain}"
local_mode = _is_local_domain(config.base_domain)
existing_deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
@@ -135,7 +135,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Step 2: Generate secrets (reuse existing key if instance data exists)
relay_secret = generate_relay_secret()
datastore_key = _get_existing_datastore_key(
os.path.join(config.data_dir, f"kunde{customer_id}", "management.json")
os.path.join(config.data_dir, customer.subdomain, "management.json")
)
if datastore_key:
_log_action(db, customer_id, "deploy", "info",
@@ -159,7 +159,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
relay_ws_protocol = "rels"
# Step 4: Create instance directory
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
os.makedirs(instance_dir, exist_ok=True)
os.makedirs(os.path.join(instance_dir, "data", "management"), exist_ok=True)
os.makedirs(os.path.join(instance_dir, "data", "signal"), exist_ok=True)
@@ -225,7 +225,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Step 8: Auto-create admin user via NetBird setup API
admin_email = customer.email
admin_password = secrets.token_urlsafe(16)
management_container = f"netbird-kunde{customer_id}-management"
management_container = f"netbird-{customer.subdomain}-management"
setup_api_url = f"http://{management_container}:80/api/setup"
setup_payload = json.dumps({
"name": customer.name,
@@ -264,7 +264,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
_log_action(db, customer_id, "deploy", "info",
"Auto-setup failed — admin must complete setup manually.")
# Step 9: Create NPM proxy host + stream (production only)
# Step 9: Create NPM proxy host (production only)
npm_proxy_id = None
npm_stream_id = None
if not local_mode:
@@ -294,27 +294,6 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
f"(SSL: {'OK' if ssl_ok else 'FAILED — check DNS and port 80 accessibility'})",
)
# Create NPM UDP stream for relay STUN port
stream_result = await npm_service.create_stream(
api_url=config.npm_api_url,
npm_email=config.npm_api_email,
npm_password=config.npm_api_password,
incoming_port=allocated_port,
forwarding_host=forward_host,
forwarding_port=allocated_port,
)
npm_stream_id = stream_result.get("stream_id")
if stream_result.get("error"):
_log_action(
db, customer_id, "deploy", "error",
f"NPM stream creation failed: {stream_result['error']}",
)
else:
_log_action(
db, customer_id, "deploy", "info",
f"NPM UDP stream created: port {allocated_port} -> {forward_host}:{allocated_port}",
)
# Note: Keep HTTPS configs even if SSL cert creation failed.
# SSL can be set up manually in NPM later. Switching to HTTP
# would break the dashboard when the user accesses via HTTPS.
@@ -387,7 +366,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Rollback: stop containers if they were started
try:
await docker_service.compose_down(
instance_dir or os.path.join(config.data_dir, f"kunde{customer_id}"),
instance_dir or os.path.join(config.data_dir, customer.subdomain),
container_prefix,
remove_volumes=True,
)
@@ -423,7 +402,7 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
config = get_system_config(db)
if deployment and config:
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
# Stop and remove containers
try:
@@ -443,17 +422,6 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
except Exception as exc:
_log_action(db, customer_id, "undeploy", "error", f"NPM removal error: {exc}")
# Remove NPM stream
if deployment.npm_stream_id and config.npm_api_email:
try:
await npm_service.delete_stream(
config.npm_api_url, config.npm_api_email, config.npm_api_password,
deployment.npm_stream_id,
)
_log_action(db, customer_id, "undeploy", "info", "NPM stream removed.")
except Exception as exc:
_log_action(db, customer_id, "undeploy", "error", f"NPM stream removal error: {exc}")
# Remove Windows DNS A-record (non-fatal)
if config and config.dns_enabled and config.dns_server and config.dns_zone:
try:
@@ -484,17 +452,16 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def stop_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Stop containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_stop(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "stopped"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "inactive"
customer.status = "inactive"
db.commit()
_log_action(db, customer_id, "stop", "success", "Containers stopped.")
else:
@@ -505,17 +472,16 @@ async def stop_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def start_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Start containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_start(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "running"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "active"
customer.status = "active"
db.commit()
_log_action(db, customer_id, "start", "success", "Containers started.")
else:
@@ -526,17 +492,16 @@ async def start_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def restart_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Restart containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_restart(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "running"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "active"
customer.status = "active"
db.commit()
_log_action(db, customer_id, "restart", "success", "Containers restarted.")
else:

@@ -259,7 +259,16 @@ async def create_proxy_host(
"block_exploits": True,
"allow_websocket_upgrade": True,
"access_list_id": 0,
"advanced_config": "",
"advanced_config": (
"location ^~ /management.ManagementService/ {\n"
f" grpc_pass grpc://{forward_host}:{forward_port};\n"
" grpc_set_header Host $host;\n"
"}\n"
"location ^~ /signalexchange.SignalExchange/ {\n"
f" grpc_pass grpc://{forward_host}:{forward_port};\n"
" grpc_set_header Host $host;\n"
"}\n"
),
"meta": {
"letsencrypt_agree": True,
"letsencrypt_email": admin_email,

@@ -15,10 +15,45 @@ import httpx
SOURCE_DIR = "/app-source"
VERSION_FILE = "/app/version.json"
BACKUP_DIR = "/app/backups"
CONTAINER_NAME = "netbird-msp-appliance"
SERVICE_NAME = "netbird-msp-appliance"
logger = logging.getLogger(__name__)
def _get_compose_project_name() -> str:
"""Detect the compose project name from the running container's labels.
Docker Compose sets the label ``com.docker.compose.project`` on every
managed container. Reading it at runtime avoids hard-coding a project
name that may differ from the directory name used at deploy time.
Returns:
The compose project name (e.g. ``netbird-msp``).
"""
try:
result = subprocess.run(
[
"docker", "inspect", CONTAINER_NAME,
"--format",
'{{index .Config.Labels "com.docker.compose.project"}}',
],
capture_output=True, text=True, timeout=10,
)
if result.returncode == 0:
project = result.stdout.strip()
if project:
logger.info("Detected compose project name: %s", project)
return project
except Exception as exc:
logger.warning("Could not detect compose project name: %s", exc)
# Fallback: derive from SOURCE_DIR basename (mirrors Compose default behaviour)
fallback = Path(SOURCE_DIR).name
logger.warning("Using fallback compose project name: %s", fallback)
return fallback
def get_current_version() -> dict:
"""Read the version baked at build time from /app/version.json."""
try:
@@ -104,15 +139,19 @@ async def check_for_updates(config: Any) -> dict:
"tag": latest_tag,
"commit": short_sha,
"commit_full": full_sha,
"message": latest_commit.get("commit", {}).get("message", "").split("\n")[0],
"date": latest_commit.get("commit", {}).get("committer", {}).get("date", ""),
"message": latest_commit.get("commit", {}).get("message", "").split("\n")[0] if latest_commit.get("commit") else "",
"date": latest_commit.get("timestamp", ""),
"branch": branch,
}
# Determine if update is needed: prefer tag comparison, fallback to commit
current_tag = current.get("tag", "unknown")
current_sha = current.get("commit", "unknown")
if current_tag != "unknown" and latest_tag != "unknown":
# If we don't know our current version but the remote has one, we should update
if current_tag == "unknown" and current_sha == "unknown":
needs_update = latest_tag != "unknown" or short_sha != "unknown"
elif current_tag != "unknown" and latest_tag != "unknown":
needs_update = current_tag != latest_tag
else:
needs_update = (
@@ -213,6 +252,16 @@ def trigger_update(config: Any, db_path: str) -> dict:
pull_cmd = ["git", "-C", SOURCE_DIR, "pull", "origin", branch]
# 3. Git pull (synchronous — must complete before rebuild)
# Mark SOURCE_DIR as a git "safe.directory": the .git dir may be owned by the
# host user after manual operations, and git refuses to operate on repos with
# "dubious ownership" otherwise.
try:
subprocess.run(
["git", "config", "--global", "--add", "safe.directory", SOURCE_DIR],
capture_output=True, timeout=10,
)
except Exception:
pass
try:
result = subprocess.run(
pull_cmd,
@@ -236,6 +285,15 @@ def trigger_update(config: Any, db_path: str) -> dict:
logger.info("git pull succeeded: %s", result.stdout.strip()[:200])
# Fetch tags separately — git pull does not always pull all tags
try:
subprocess.run(
["git", "-C", SOURCE_DIR, "fetch", "--tags"],
capture_output=True, text=True, timeout=30,
)
except Exception as exc:
logger.warning("git fetch --tags failed (non-fatal): %s", exc)
# 4. Read version info from the freshly-pulled source
build_env = os.environ.copy()
try:
@@ -274,13 +332,20 @@ def trigger_update(config: Any, db_path: str) -> dict:
# ensure the compose-up runs detached on the Docker host via a wrapper.
log_path = Path(BACKUP_DIR) / "update_rebuild.log"
# Detect compose project name at runtime — avoids hard-coding a name that
# may differ from the directory used at deploy time.
project_name = _get_compose_project_name()
# Image name follows Docker Compose convention: {project}-{service}
service_image = f"{project_name}-{SERVICE_NAME}:latest"
logger.info("Using project=%s image=%s", project_name, service_image)
# Phase A — build the new image (does NOT stop anything)
build_cmd = [
"docker", "compose",
"-p", "netbirdmsp-appliance",
"-p", project_name,
"-f", f"{SOURCE_DIR}/docker-compose.yml",
"build", "--no-cache",
"netbird-msp-appliance",
SERVICE_NAME,
]
logger.info("Phase A: building new image …")
try:
@@ -332,22 +397,19 @@ def trigger_update(config: Any, db_path: str) -> dict:
val = build_env.get(key, "unknown")
env_flags.extend(["-e", f"{key}={val}"])
# Use the same image we're already running (it has docker CLI + compose plugin)
own_image = "netbirdmsp-appliance-netbird-msp-appliance:latest"
helper_cmd = [
"docker", "run", "--rm", "-d", "--privileged",
"--name", "msp-updater",
"-v", "/var/run/docker.sock:/var/run/docker.sock:z",
"-v", f"{host_source_dir}:{host_source_dir}:ro,z",
*env_flags,
own_image,
service_image, # freshly built image — has docker CLI + compose plugin
"sh", "-c",
(
"sleep 3 && "
"docker compose -p netbirdmsp-appliance "
f"docker compose -p {project_name} "
f"-f {host_source_dir}/docker-compose.yml "
"up --force-recreate --no-deps -d netbird-msp-appliance"
f"up --force-recreate --no-deps -d {SERVICE_NAME}"
),
]
try:
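The `service_image` derivation above relies on Docker Compose v2's default naming for locally built images, `{project}-{service}:latest`; that is what lets the updater helper re-run itself from the image it just built. A sketch of the convention (assumes the compose file has no explicit `image:` key overriding the default):

```python
def compose_image_name(project: str, service: str) -> str:
    # Docker Compose v2 default tag for an image built from a service
    # definition without an explicit image: key.
    return f"{project}-{service}:latest"
```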

@@ -57,6 +57,7 @@ services:
- "${WEB_UI_PORT:-8000}:8000"
volumes:
- ./data:/app/data:z
- ./data/uploads:/app/static/uploads:z
- ./logs:/app/logs:z
- ./backups:/app/backups:z
- /var/run/docker.sock:/var/run/docker.sock:z

@@ -188,3 +188,36 @@ body.i18n-loading #app-page {
font-weight: 600;
background: rgba(0, 0, 0, 0.02);
}
/* ---------------------------------------------------------------------------
Dark mode overrides (Bootstrap 5.3 data-bs-theme="dark")
Bootstrap handles most components automatically; only custom elements need
explicit overrides here.
--------------------------------------------------------------------------- */
[data-bs-theme="dark"] .card {
border-color: rgba(255, 255, 255, 0.08);
}
[data-bs-theme="dark"] .card-header {
background: rgba(255, 255, 255, 0.04);
}
[data-bs-theme="dark"] .log-entry {
border-bottom-color: rgba(255, 255, 255, 0.07);
}
[data-bs-theme="dark"] .log-time {
color: #9ca3af;
}
[data-bs-theme="dark"] .table th {
color: #9ca3af;
}
[data-bs-theme="dark"] .login-container {
background: linear-gradient(135deg, #0d0d1a 0%, #0a1020 50%, #071525 100%);
}
[data-bs-theme="dark"] .stat-card {
background: var(--bs-card-bg);
}

static/favicon.svg Normal file
@@ -0,0 +1,21 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
<!-- Blue rounded background -->
<rect width="32" height="32" rx="7" fill="#2563EB"/>
<!-- Bird silhouette: top-down view, wings spread, forked tail -->
<path fill="white" d="
M 16 7
C 15 8 14 9.5 14 11
C 11 10.5 7 11 4 14
C 8 15 12 14.5 14 14.5
L 15 22
L 13 26
L 16 24
L 19 26
L 17 22
L 18 14.5
C 20 14.5 24 15 28 14
C 25 11 21 10.5 18 11
C 18 9.5 17 8 16 7 Z
"/>
</svg>

@@ -5,6 +5,14 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>NetBird MSP Appliance</title>
<link rel="icon" type="image/svg+xml" href="/static/favicon.svg">
<script>
// Apply dark mode before page renders to prevent flash
(function () {
const saved = localStorage.getItem('darkMode');
if (saved === 'dark') document.documentElement.setAttribute('data-bs-theme', 'dark');
})();
</script>
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.11.2/font/bootstrap-icons.min.css" rel="stylesheet">
<link href="/static/css/styles.css" rel="stylesheet">
@@ -21,7 +29,7 @@
<h3 class="mt-2" id="login-title">NetBird MSP Appliance</h3>
<p class="text-muted" id="login-subtitle" data-i18n="login.subtitle">Multi-Tenant Management
Platform</p>
<p class="text-muted small mb-0" style="opacity:0.6;"><i class="bi bi-tag me-1"></i>alpha-1.1
<p class="text-muted small mb-0" style="opacity:0.6;"><i class="bi bi-tag me-1"></i>
</p>
</div>
<div id="login-error" class="alert alert-danger d-none"></div>
@@ -108,6 +116,10 @@
<span id="nav-brand-name">NetBird MSP</span>
</a>
<div class="d-flex align-items-center">
<!-- Dark Mode Toggle -->
<button class="btn btn-outline-light btn-sm me-2" id="darkmode-toggle" onclick="toggleDarkMode()" title="Toggle dark mode">
<i id="darkmode-icon" class="bi bi-moon-fill"></i>
</button>
<!-- Language Switcher -->
<div class="dropdown me-2">
<button class="btn btn-outline-light btn-sm dropdown-toggle" id="language-switcher-btn"
@@ -622,6 +634,13 @@
Settings</span></button>
</div>
</form>
<hr>
<h6 data-i18n="settings.pullImagesTitle">Pull Latest Images from Docker Hub</h6>
<p class="text-muted small" data-i18n="settings.pullImagesHint">Downloads the latest versions of all configured NetBird images. After pulling, use Monitoring to update customer containers.</p>
<button class="btn btn-outline-primary" onclick="pullAllImagesSettings()" id="btn-pull-images-settings">
<i class="bi bi-cloud-download me-1"></i><span data-i18n="settings.pullImages">Pull from Docker Hub</span>
</button>
<span id="pull-images-settings-status" class="ms-2 text-muted small"></span>
</div>
</div>
</div>
@@ -1140,7 +1159,7 @@
</div>
<!-- Customer Statuses -->
<div class="card shadow-sm">
<div class="card shadow-sm mb-4">
<div class="card-header" data-i18n="monitoring.allCustomerDeployments">All Customer Deployments
</div>
<div class="table-responsive">
@@ -1166,6 +1185,27 @@
</table>
</div>
</div>
<!-- NetBird Container Updates -->
<div class="card shadow-sm">
<div class="card-header d-flex justify-content-between align-items-center">
<span><i class="bi bi-arrow-repeat me-2"></i><span data-i18n="monitoring.imageUpdates">NetBird Container Updates</span></span>
<div class="d-flex gap-2">
<button class="btn btn-outline-secondary btn-sm" onclick="checkImageUpdates()" id="btn-check-updates">
<i class="bi bi-search me-1"></i><span data-i18n="monitoring.checkUpdates">Check for Updates</span>
</button>
<button class="btn btn-outline-primary btn-sm" onclick="pullAllImages()" id="btn-pull-images">
<i class="bi bi-cloud-download me-1"></i><span data-i18n="monitoring.pullImages">Pull Latest Images</span>
</button>
<button class="btn btn-warning btn-sm d-none" onclick="updateAllCustomers()" id="btn-update-all">
<i class="bi bi-lightning-charge-fill me-1"></i><span data-i18n="monitoring.updateAll">Update All</span>
</button>
</div>
</div>
<div class="card-body" id="image-updates-body">
<p class="text-muted mb-0" data-i18n="monitoring.clickCheckUpdates">Click "Check for Updates" to compare local images with Docker Hub.</p>
</div>
</div>
</div>
</div>
</div>

@@ -12,7 +12,7 @@ let currentPage = 'dashboard';
let currentCustomerId = null;
let currentCustomerData = null;
let customersPage = 1;
let brandingData = { branding_name: 'NetBird MSP Appliance', branding_logo_path: null };
let brandingData = { branding_name: 'NetBird MSP Appliance', branding_logo_path: null, version: 'alpha-1.1' };
let azureConfig = { azure_enabled: false };
// ---------------------------------------------------------------------------
@@ -66,10 +66,35 @@ async function api(method, path, body = null) {
return data;
}
// ---------------------------------------------------------------------------
// Dark mode
// ---------------------------------------------------------------------------
function toggleDarkMode() {
const isDark = document.documentElement.getAttribute('data-bs-theme') === 'dark';
if (isDark) {
document.documentElement.removeAttribute('data-bs-theme');
localStorage.setItem('darkMode', 'light');
document.getElementById('darkmode-icon').className = 'bi bi-moon-fill';
} else {
document.documentElement.setAttribute('data-bs-theme', 'dark');
localStorage.setItem('darkMode', 'dark');
document.getElementById('darkmode-icon').className = 'bi bi-sun-fill';
}
}
function syncDarkmodeIcon() {
const icon = document.getElementById('darkmode-icon');
if (!icon) return;
icon.className = document.documentElement.getAttribute('data-bs-theme') === 'dark'
? 'bi bi-sun-fill'
: 'bi bi-moon-fill';
}
// ---------------------------------------------------------------------------
// Auth
// ---------------------------------------------------------------------------
async function initApp() {
syncDarkmodeIcon();
await initI18n();
await loadBranding();
await loadAzureLoginConfig();
@@ -127,12 +152,19 @@ function applyBranding() {
const name = brandingData.branding_name || 'NetBird MSP Appliance';
const subtitle = brandingData.branding_subtitle || t('login.subtitle');
const logoPath = brandingData.branding_logo_path;
const version = brandingData.version || 'alpha-1.1';
// Login page
document.getElementById('login-title').textContent = name;
const subtitleEl = document.getElementById('login-subtitle');
if (subtitleEl) subtitleEl.textContent = subtitle;
document.title = name;
// Update version string in login page
const versionEl = document.querySelector('#login-page .text-muted.small.mb-0');
if (versionEl) {
versionEl.innerHTML = `<i class="bi bi-tag me-1"></i>${version}`;
}
if (logoPath) {
document.getElementById('login-logo').innerHTML = `<img src="${logoPath}" alt="Logo" style="max-height:64px;max-width:200px;" class="mb-1">`;
} else {
@@ -453,11 +485,11 @@ function renderCustomersTable(data) {
const dashLink = dPort
? `<a href="${esc(dashUrl || 'http://localhost:' + dPort)}" target="_blank" class="text-decoration-none" title="${t('customer.openDashboard')}">:${dPort} <i class="bi bi-box-arrow-up-right"></i></a>`
: '-';
-return `<tr>
+return `<tr data-customer-id="${c.id}">
<td>${c.id}</td>
<td><a href="#" onclick="viewCustomer(${c.id})" class="text-decoration-none fw-semibold">${esc(c.name)}</a></td>
<td><code>${esc(c.subdomain)}</code></td>
-<td>${statusBadge(c.status)}</td>
+<td><span class="customer-status-cell">${statusBadge(c.status)}</span></td>
<td>${dashLink}</td>
<td>${c.max_devices}</td>
<td>${formatDate(c.created_at)}</td>
@@ -485,6 +517,26 @@ function renderCustomersTable(data) {
paginationHtml += `<li class="page-item ${i === data.page ? 'active' : ''}"><a class="page-link" href="#" onclick="goToPage(${i})">${i}</a></li>`;
}
document.getElementById('pagination-controls').innerHTML = paginationHtml;
// Lazy-load update badges after table renders (best-effort, silent fail)
loadCustomerUpdateBadges().catch(() => {});
}
async function loadCustomerUpdateBadges() {
const data = await api('GET', '/monitoring/customers/local-update-status');
data.forEach(s => {
if (!s.needs_update) return;
const tr = document.querySelector(`tr[data-customer-id="${s.customer_id}"]`);
if (!tr) return;
const cell = tr.querySelector('.customer-status-cell');
if (cell && !cell.querySelector('.update-badge')) {
const badge = document.createElement('span');
badge.className = 'badge bg-warning text-dark update-badge ms-1';
badge.title = t('monitoring.updateAvailable');
badge.innerHTML = '<i class="bi bi-arrow-repeat"></i> Update';
cell.appendChild(badge);
}
});
}
function goToPage(page) {
@@ -710,8 +762,13 @@ async function viewCustomer(id) {
<button class="btn btn-success btn-sm me-1" onclick="customerAction(${id},'start')"><i class="bi bi-play-circle me-1"></i>${t('customer.start')}</button>
<button class="btn btn-warning btn-sm me-1" onclick="customerAction(${id},'stop')"><i class="bi bi-stop-circle me-1"></i>${t('customer.stop')}</button>
<button class="btn btn-info btn-sm me-1" onclick="customerAction(${id},'restart')"><i class="bi bi-arrow-repeat me-1"></i>${t('customer.restart')}</button>
-<button class="btn btn-outline-primary btn-sm" onclick="customerAction(${id},'deploy')"><i class="bi bi-rocket me-1"></i>${t('customer.reDeploy')}</button>
+<button class="btn btn-outline-primary btn-sm me-1" onclick="customerAction(${id},'deploy')"><i class="bi bi-rocket me-1"></i>${t('customer.reDeploy')}</button>
<button class="btn btn-outline-warning btn-sm" id="btn-update-images-detail" onclick="updateCustomerImagesFromDetail(${id})">
<span id="update-detail-spinner" class="spinner-border spinner-border-sm d-none me-1"></span>
<i class="bi bi-arrow-repeat me-1"></i>${t('customer.updateImages')}
</button>
</div>
<div id="detail-update-result"></div>
`;
} else {
document.getElementById('detail-deployment-content').innerHTML = `
@@ -1336,8 +1393,8 @@ async function loadUsers() {
<td>${u.id}</td>
<td><strong>${esc(u.username)}</strong></td>
<td>${esc(u.email || '-')}</td>
-<td><span class="badge bg-info">${esc(u.role || 'admin')}</span></td>
-<td><span class="badge bg-${u.auth_provider === 'azure' ? 'primary' : 'secondary'}">${esc(u.auth_provider || 'local')}</span></td>
+<td><span class="badge bg-${u.role === 'admin' ? 'success' : 'secondary'}">${esc(u.role || 'admin')}</span></td>
+<td><span class="badge bg-${u.auth_provider === 'azure' ? 'primary' : u.auth_provider === 'ldap' ? 'info' : 'secondary'}">${esc(u.auth_provider || 'local')}</span></td>
<td>${langDisplay}</td>
<td>${mfaDisplay}</td>
<td>${u.is_active ? `<span class="badge bg-success">${t('common.active')}</span>` : `<span class="badge bg-danger">${t('common.disabled')}</span>`}</td>
@@ -1349,6 +1406,11 @@ async function loadUsers() {
}
${u.auth_provider === 'local' ? `<button class="btn btn-outline-info" title="${t('common.resetPassword')}" onclick="resetUserPassword(${u.id}, '${esc(u.username)}')"><i class="bi bi-key"></i></button>` : ''}
${u.totp_enabled ? `<button class="btn btn-outline-secondary" title="${t('mfa.resetMfa')}" onclick="resetUserMfa(${u.id}, '${esc(u.username)}')"><i class="bi bi-shield-x"></i></button>` : ''}
${currentUser && currentUser.role === 'admin' && u.id !== currentUser.id
? (u.role === 'admin'
? `<button class="btn btn-outline-secondary" title="${t('settings.makeViewer')}" onclick="toggleUserRole(${u.id}, 'admin')"><i class="bi bi-person-dash"></i></button>`
: `<button class="btn btn-outline-success" title="${t('settings.makeAdmin')}" onclick="toggleUserRole(${u.id}, 'viewer')"><i class="bi bi-person-check"></i></button>`)
: ''}
<button class="btn btn-outline-danger" title="${t('common.delete')}" onclick="deleteUser(${u.id}, '${esc(u.username)}')"><i class="bi bi-trash"></i></button>
</div>
</td>
@@ -1408,6 +1470,16 @@ async function toggleUserActive(id, active) {
}
}
async function toggleUserRole(id, currentRole) {
const newRole = currentRole === 'admin' ? 'viewer' : 'admin';
try {
await api('PUT', `/users/${id}`, { role: newRole });
loadUsers();
} catch (err) {
showSettingsAlert('danger', t('errors.updateFailed', { error: err.message }));
}
}
async function resetUserPassword(id, username) {
if (!confirm(t('messages.confirmResetPassword', { username }))) return;
try {
@@ -1586,6 +1658,221 @@ async function loadAllCustomerStatuses() {
}
}
// ---------------------------------------------------------------------------
// Image Updates
// ---------------------------------------------------------------------------
async function checkImageUpdates() {
const btn = document.getElementById('btn-check-updates');
const body = document.getElementById('image-updates-body');
btn.disabled = true;
body.innerHTML = `<div class="text-muted"><span class="spinner-border spinner-border-sm me-2"></span>${t('common.loading')}</div>`;
try {
const data = await api('GET', '/monitoring/images/check');
// Image status table
const imageRows = Object.values(data.images).map(img => {
const badge = img.update_available
? `<span class="badge bg-warning text-dark">${t('monitoring.updateAvailable')}</span>`
: `<span class="badge bg-success">${t('monitoring.upToDate')}</span>`;
const shortDigest = d => d ? d.substring(7, 19) + '…' : '-';
return `<tr>
<td><code class="small">${esc(img.image)}</code></td>
<td class="small text-muted">${shortDigest(img.local_digest)}</td>
<td class="small text-muted">${shortDigest(img.hub_digest)}</td>
<td>${badge}</td>
</tr>`;
}).join('');
// Customer status table
const customerRows = data.customer_status.length === 0
? `<tr><td colspan="3" class="text-center text-muted py-3">${t('monitoring.noCustomers')}</td></tr>`
: data.customer_status.map(c => {
const badge = c.needs_update
? `<span class="badge bg-warning text-dark">${t('monitoring.needsUpdate')}</span>`
: `<span class="badge bg-success">${t('monitoring.upToDate')}</span>`;
const updateBtn = c.needs_update
? `<button class="btn btn-sm btn-outline-warning ms-2 btn-update-customer" onclick="updateCustomerImages(${c.customer_id})"
title="${t('monitoring.updateCustomer')}"><i class="bi bi-arrow-repeat"></i></button>`
: '';
return `<tr>
<td>${c.customer_id}</td>
<td>${esc(c.customer_name)} <code class="small text-muted">${esc(c.subdomain)}</code></td>
<td>${badge}${updateBtn}</td>
</tr>`;
}).join('');
// Show "Update All" button if any customer needs update
const updateAllBtn = document.getElementById('btn-update-all');
if (data.customer_status.some(c => c.needs_update)) {
updateAllBtn.classList.remove('d-none');
} else {
updateAllBtn.classList.add('d-none');
}
body.innerHTML = `
<h6 class="mb-2">${t('monitoring.imageStatusTitle')}</h6>
<div class="table-responsive mb-4">
<table class="table table-sm mb-0">
<thead class="table-light">
<tr>
<th>${t('monitoring.thImage')}</th>
<th>${t('monitoring.thLocalDigest')}</th>
<th>${t('monitoring.thHubDigest')}</th>
<th>${t('monitoring.thStatus')}</th>
</tr>
</thead>
<tbody>${imageRows}</tbody>
</table>
</div>
<h6 class="mb-2">${t('monitoring.customerImageTitle')}</h6>
<div class="table-responsive">
<table class="table table-sm mb-0">
<thead class="table-light">
<tr>
<th>${t('monitoring.thId')}</th>
<th>${t('monitoring.thName')}</th>
<th>${t('monitoring.thStatus')}</th>
</tr>
</thead>
<tbody>${customerRows}</tbody>
</table>
</div>`;
} catch (err) {
body.innerHTML = `<div class="alert alert-danger">${err.message}</div>`;
} finally {
btn.disabled = false;
}
}
async function pullAllImages() {
if (!confirm(t('monitoring.confirmPull'))) return;
const btn = document.getElementById('btn-pull-images');
btn.disabled = true;
try {
await api('POST', '/monitoring/images/pull');
showToast(t('monitoring.pullStarted'));
// Re-check after a few seconds to let pull finish
setTimeout(() => checkImageUpdates(), 5000);
} catch (err) {
showMonitoringAlert('danger', err.message);
} finally {
btn.disabled = false;
}
}
async function updateCustomerImagesFromDetail(id) {
const btn = document.getElementById('btn-update-images-detail');
const spinner = document.getElementById('update-detail-spinner');
const resultDiv = document.getElementById('detail-update-result');
btn.disabled = true;
spinner.classList.remove('d-none');
resultDiv.innerHTML = `<div class="alert alert-info py-2 mt-2"><span class="spinner-border spinner-border-sm me-2"></span>${t('customer.updateInProgress')}</div>`;
try {
const data = await api('POST', `/customers/${id}/update-images`);
resultDiv.innerHTML = `<div class="alert alert-success py-2 mt-2"><i class="bi bi-check-circle me-1"></i>${esc(data.message)}</div>`;
setTimeout(() => { resultDiv.innerHTML = ''; }, 6000);
} catch (err) {
resultDiv.innerHTML = `<div class="alert alert-danger py-2 mt-2"><i class="bi bi-exclamation-circle me-1"></i>${esc(err.message)}</div>`;
} finally {
btn.disabled = false;
spinner.classList.add('d-none');
}
}
async function updateCustomerImages(customerId) {
// Find the update button for this customer row and show a spinner
const btn = document.querySelector(`tr[data-customer-id="${customerId}"] .btn-update-customer`);
if (btn) {
btn.disabled = true;
btn.innerHTML = '<span class="spinner-border spinner-border-sm"></span>';
}
try {
await api('POST', `/customers/${customerId}/update-images`);
showToast(t('monitoring.updateDone'));
setTimeout(() => checkImageUpdates(), 2000);
} catch (err) {
showMonitoringAlert('danger', err.message);
if (btn) {
btn.disabled = false;
btn.innerHTML = '<i class="bi bi-arrow-repeat"></i>';
}
}
}
async function updateAllCustomers() {
if (!confirm(t('monitoring.confirmUpdateAll'))) return;
const btn = document.getElementById('btn-update-all');
const body = document.getElementById('image-updates-body');
btn.disabled = true;
btn.innerHTML = `<span class="spinner-border spinner-border-sm me-1"></span>${t('monitoring.updating')}`;
const progressDiv = document.createElement('div');
progressDiv.className = 'alert alert-info mt-3';
progressDiv.innerHTML = `<span class="spinner-border spinner-border-sm me-2"></span>${t('monitoring.updateAllProgress')}`;
body.appendChild(progressDiv);
try {
const data = await api('POST', '/monitoring/customers/update-all');
progressDiv.remove();
if (data.results && data.results.length > 0) {
const allOk = data.updated === data.results.length;
const rows = data.results.map(r => `<tr>
<td>${esc(r.customer_name)}</td>
<td>${r.success
? '<span class="badge bg-success"><i class="bi bi-check-lg"></i> OK</span>'
: '<span class="badge bg-danger"><i class="bi bi-x-lg"></i> Error</span>'}</td>
<td class="small text-muted">${esc(r.error || '')}</td>
</tr>`).join('');
const resultHtml = `<div class="alert alert-${allOk ? 'success' : 'warning'} mt-3">
<strong>${esc(data.message)}</strong>
<table class="table table-sm mb-0 mt-2">
<thead><tr><th>${t('monitoring.thName')}</th><th>${t('monitoring.thStatus')}</th><th></th></tr></thead>
<tbody>${rows}</tbody>
</table>
</div>`;
body.insertAdjacentHTML('beforeend', resultHtml);
} else {
showToast(data.message);
}
setTimeout(() => checkImageUpdates(), 2000);
} catch (err) {
progressDiv.remove();
showMonitoringAlert('danger', err.message);
} finally {
btn.disabled = false;
btn.innerHTML = `<i class="bi bi-lightning-charge-fill me-1"></i>${t('monitoring.updateAll')}`;
}
}
async function pullAllImagesSettings() {
if (!confirm(t('monitoring.confirmPull'))) return;
const btn = document.getElementById('btn-pull-images-settings');
const statusEl = document.getElementById('pull-images-settings-status');
btn.disabled = true;
statusEl.innerHTML = `<span class="spinner-border spinner-border-sm me-1"></span>${t('monitoring.pulling')}`;
try {
await api('POST', '/monitoring/images/pull');
statusEl.innerHTML = `<i class="bi bi-check-circle text-success me-1"></i>${t('monitoring.pullStartedShort')}`;
setTimeout(() => { statusEl.innerHTML = ''; }, 8000);
} catch (err) {
statusEl.innerHTML = `<span class="text-danger"><i class="bi bi-exclamation-circle me-1"></i>${esc(err.message)}</span>`;
} finally {
btn.disabled = false;
}
}
function showMonitoringAlert(type, msg) {
const body = document.getElementById('image-updates-body');
const existing = body.querySelector('.alert');
if (existing) existing.remove();
const div = document.createElement('div');
div.className = `alert alert-${type} mt-2`;
div.textContent = msg;
body.prepend(div);
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

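As a side note on the digest fix described in the commit message: the registry's `Docker-Content-Digest` header carries the same manifest-list digest that `docker pull` stores in `RepoDigests`, so the comparison reduces to a string match. The following is a minimal illustrative sketch, not the actual backend code; the function name and input shapes are assumptions:

```javascript
// Illustrative sketch of the update_available comparison. The real check
// (per the commit message) queries registry-1.docker.io/v2/{name}/manifests/{tag}
// with anonymous auth and reads the Docker-Content-Digest response header.
//
// `docker image inspect` stores RepoDigests entries like
//   "netbirdio/management@sha256:ab12…"
// while the registry header carries the bare digest
//   "sha256:ab12…"
// so stripping the repo prefix and comparing strings is sufficient.
function updateAvailable(repoDigests, registryDigest) {
  const localDigests = repoDigests
    .map(entry => entry.split('@')[1]) // keep only the "sha256:…" part
    .filter(Boolean);
  return !localDigests.includes(registryDigest);
}
```

Because both sides now refer to the manifest-list digest, a fresh `docker pull` makes the values match and the badge stays hidden, instead of always reporting an update as before.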
View File

@@ -89,7 +89,9 @@
"thHealth": "Zustand",
"thImage": "Image",
"lastCheck": "Letzte Prüfung: {time}",
-"openDashboard": "Dashboard öffnen"
+"openDashboard": "Dashboard öffnen",
"updateImages": "Images aktualisieren",
"updateInProgress": "Container werden aktualisiert — bitte warten…"
},
"settings": {
"title": "Systemeinstellungen",
@@ -108,7 +110,7 @@
"groupExternal": "Umsysteme",
"baseDomain": "Basis-Domain",
"baseDomainPlaceholder": "ihredomain.com",
-"baseDomainHint": "Kunden erhalten Subdomains: kunde.ihredomain.com",
+"baseDomainHint": "Kunden erhalten Subdomains: kundenname.ihredomain.com",
"adminEmail": "Admin E-Mail",
"adminEmailPlaceholder": "admin@ihredomain.com",
"dataDir": "Datenverzeichnis",
@@ -118,7 +120,7 @@
"relayBasePort": "Relay-Basisport",
"relayBasePortHint": "Erster UDP-Port für Relay. Bereich: Basis bis Basis+99",
"dashboardBasePort": "Dashboard-Basisport",
-"dashboardBasePortHint": "Basisport für Kunden-Dashboards. Kunde N erhält Basis+N",
+"dashboardBasePortHint": "Basisport für Kunden-Dashboards. Der erste Kunde erhält Basis+1",
"saveSystemSettings": "Systemeinstellungen speichern",
"npmDescription": "NPM verwendet JWT-Authentifizierung. Geben Sie Ihre NPM-Zugangsdaten ein. Das System meldet sich automatisch an.",
"npmApiUrl": "NPM API URL",
@@ -152,6 +154,9 @@
"dashboardImage": "Dashboard Image",
"dashboardImagePlaceholder": "netbirdio/dashboard:latest",
"saveImageSettings": "Image-Einstellungen speichern",
"pullImagesTitle": "Neueste Images von Docker Hub laden",
"pullImagesHint": "Lädt die neuesten Versionen aller konfigurierten NetBird Images. Danach können Kunden-Container über das Monitoring aktualisiert werden.",
"pullImages": "Von Docker Hub laden",
"brandingTitle": "Branding-Einstellungen",
"companyName": "Firmen- / Anwendungsname",
"companyNamePlaceholder": "NetBird MSP Appliance",
@@ -170,6 +175,8 @@
"saveBranding": "Branding speichern",
"userManagement": "Benutzerverwaltung",
"newUser": "Neuer Benutzer",
"makeAdmin": "Zum Admin befördern",
"makeViewer": "Zum Viewer degradieren",
"thId": "ID",
"thUsername": "Benutzername",
"thEmail": "E-Mail",
@@ -371,6 +378,29 @@
"thDashboard": "Dashboard",
"thRelayPort": "Relay-Port",
"thContainers": "Container",
-"noCustomers": "Keine Kunden."
+"noCustomers": "Keine Kunden.",
"imageUpdates": "NetBird Container Updates",
"checkUpdates": "Auf Updates prüfen",
"pullImages": "Neueste Images laden",
"updateAll": "Alle aktualisieren",
"clickCheckUpdates": "Klicken Sie auf \"Auf Updates prüfen\", um lokale Images mit Docker Hub zu vergleichen.",
"updateAvailable": "Update verfügbar",
"upToDate": "Aktuell",
"needsUpdate": "Update erforderlich",
"updateCustomer": "Diesen Kunden aktualisieren",
"imageStatusTitle": "Image-Status (vs. Docker Hub)",
"customerImageTitle": "Kunden-Container Status",
"thImage": "Image",
"thLocalDigest": "Lokaler Digest",
"thHubDigest": "Hub Digest",
"confirmPull": "Neueste NetBird Images von Docker Hub laden? Dies kann einige Minuten dauern.",
"pullStarted": "Image-Download im Hintergrund gestartet. Prüfung in 5 Sekunden…",
"confirmUpdateAll": "Container aller Kunden mit veralteten Images neu erstellen? Laufende Dienste werden kurz neu gestartet.",
"updateAllStarted": "Aktualisierung im Hintergrund gestartet.",
"updateDone": "Kunden-Container aktualisiert.",
"updating": "Wird aktualisiert…",
"updateAllProgress": "Kunden-Container werden nacheinander aktualisiert — bitte warten…",
"pulling": "Wird geladen…",
"pullStartedShort": "Download im Hintergrund gestartet."
}
}

View File

@@ -89,7 +89,9 @@
"thHealth": "Health",
"thImage": "Image",
"lastCheck": "Last check: {time}",
-"openDashboard": "Open Dashboard"
+"openDashboard": "Open Dashboard",
"updateImages": "Update Images",
"updateInProgress": "Updating containers — please wait…"
},
"customerModal": {
"newCustomer": "New Customer",
@@ -173,6 +175,9 @@
"dashboardImage": "Dashboard Image",
"dashboardImagePlaceholder": "netbirdio/dashboard:latest",
"saveImageSettings": "Save Image Settings",
"pullImagesTitle": "Pull Latest Images from Docker Hub",
"pullImagesHint": "Downloads the latest versions of all configured NetBird images. After pulling, use Monitoring to update customer containers.",
"pullImages": "Pull from Docker Hub",
"brandingTitle": "Branding Settings",
"companyName": "Company / Application Name",
"companyNamePlaceholder": "NetBird MSP Appliance",
@@ -191,6 +196,8 @@
"saveBranding": "Save Branding",
"userManagement": "User Management",
"newUser": "New User",
"makeAdmin": "Promote to admin",
"makeViewer": "Demote to viewer",
"thId": "ID",
"thUsername": "Username",
"thEmail": "Email",
@@ -278,7 +285,30 @@
"thDashboard": "Dashboard",
"thRelayPort": "Relay Port",
"thContainers": "Containers",
-"noCustomers": "No customers."
+"noCustomers": "No customers.",
"imageUpdates": "NetBird Container Updates",
"checkUpdates": "Check for Updates",
"pullImages": "Pull Latest Images",
"updateAll": "Update All",
"clickCheckUpdates": "Click \"Check for Updates\" to compare local images with Docker Hub.",
"updateAvailable": "Update available",
"upToDate": "Up to date",
"needsUpdate": "Needs update",
"updateCustomer": "Update this customer",
"imageStatusTitle": "Image Status (vs. Docker Hub)",
"customerImageTitle": "Customer Container Status",
"thImage": "Image",
"thLocalDigest": "Local Digest",
"thHubDigest": "Hub Digest",
"confirmPull": "Pull the latest NetBird images from Docker Hub? This may take a few minutes.",
"pullStarted": "Image pull started in background. Re-checking in 5 seconds…",
"confirmUpdateAll": "Recreate containers for all customers that have outdated images? Running services will briefly restart.",
"updateAllStarted": "Update started in background.",
"updateDone": "Customer containers updated.",
"updating": "Updating…",
"updateAllProgress": "Updating customer containers one by one — please wait…",
"pulling": "Pulling…",
"pullStartedShort": "Pull started in background."
},
"userModal": {
"title": "New User",

View File

@@ -5,15 +5,15 @@
:80 {
# Embedded IdP OAuth2/OIDC endpoints
handle /oauth2/* {
-reverse_proxy netbird-kunde{{ customer_id }}-management:80
+reverse_proxy netbird-{{ subdomain }}-management:80
}
# NetBird Management API + gRPC
handle /api/* {
-reverse_proxy netbird-kunde{{ customer_id }}-management:80
+reverse_proxy netbird-{{ subdomain }}-management:80
}
handle /management.ManagementService/* {
-reverse_proxy netbird-kunde{{ customer_id }}-management:80 {
+reverse_proxy netbird-{{ subdomain }}-management:80 {
transport http {
versions h2c
}
@@ -22,15 +22,20 @@
# NetBird Signal gRPC
handle /signalexchange.SignalExchange/* {
-reverse_proxy netbird-kunde{{ customer_id }}-signal:80 {
+reverse_proxy netbird-{{ subdomain }}-signal:80 {
transport http {
versions h2c
}
}
}
# NetBird Relay WebSocket (rels://)
handle /relay* {
reverse_proxy netbird-{{ subdomain }}-relay:80
}
# Default: NetBird Dashboard
handle {
-reverse_proxy netbird-kunde{{ customer_id }}-dashboard:80
+reverse_proxy netbird-{{ subdomain }}-dashboard:80
}
}
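To make the templating concrete: with `{{ subdomain }}` rendered for a hypothetical customer `acme`, the relay and default handlers above would produce the following Caddyfile fragment (illustrative only):

```caddyfile
# Rendered from the template above with {{ subdomain }} = "acme" (hypothetical)
handle /relay* {
    reverse_proxy netbird-acme-relay:80
}
# Default: NetBird Dashboard
handle {
    reverse_proxy netbird-acme-dashboard:80
}
```

Naming containers by subdomain rather than numeric `customer_id` presumably also keeps the rendered config readable and independent of ID assignment.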

View File

@@ -6,7 +6,7 @@ services:
# --- Caddy Reverse Proxy (entry point) ---
netbird-caddy:
image: caddy:2-alpine
-container_name: netbird-kunde{{ customer_id }}-caddy
+container_name: netbird-{{ subdomain }}-caddy
restart: unless-stopped
networks:
- {{ docker_network }}
@@ -18,7 +18,7 @@ services:
# --- NetBird Management (with embedded IdP) ---
netbird-management:
image: {{ netbird_management_image }}
-container_name: netbird-kunde{{ customer_id }}-management
+container_name: netbird-{{ subdomain }}-management
restart: unless-stopped
networks:
- {{ docker_network }}
@@ -39,7 +39,7 @@ services:
# --- NetBird Signal ---
netbird-signal:
image: {{ netbird_signal_image }}
-container_name: netbird-kunde{{ customer_id }}-signal
+container_name: netbird-{{ subdomain }}-signal
restart: unless-stopped
networks:
- {{ docker_network }}
@@ -49,7 +49,7 @@ services:
# --- NetBird Relay ---
netbird-relay:
image: {{ netbird_relay_image }}
-container_name: netbird-kunde{{ customer_id }}-relay
+container_name: netbird-{{ subdomain }}-relay
restart: unless-stopped
networks:
- {{ docker_network }}
@@ -61,7 +61,7 @@ services:
# --- NetBird Dashboard ---
netbird-dashboard:
image: {{ netbird_dashboard_image }}
-container_name: netbird-kunde{{ customer_id }}-dashboard
+container_name: netbird-{{ subdomain }}-dashboard
restart: unless-stopped
networks:
- {{ docker_network }}