Compare commits

...

28 Commits

Author SHA1 Message Date
Sascha Lustenberger | techlan gmbh
0f77aaa176 fix(deploy): remove NPM stream creation on customer deploy/undeploy
STUN/TURN UDP relay no longer requires NPM stream entries.
NetBird uses rels:// WebSocket relay via NPM proxy host instead.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 19:42:12 +01:00
Sascha Lustenberger | techlan gmbh
0bc7c0ba9f feat(ui): add SVG favicon for NetBird MSP Appliance
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 16:54:21 +01:00
Sascha Lustenberger | techlan gmbh
27428b69a0 fix(netbird): query customer before use in stop/start/restart
In stop_customer, start_customer and restart_customer the local variable
'customer' was referenced on the instance_dir line before it was assigned
(it was only queried after the docker compose call). This caused an
UnboundLocalError (HTTP 500) on every stop/start/restart action.

Fix: move the customer query to the top of each function alongside the
deployment and config queries.
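The bug pattern described in this commit can be reproduced in a minimal sketch (the names and lookup dicts below are illustrative, not the appliance's actual code): because `customer` is assigned *somewhere* in the function, Python treats it as a local variable for the whole function body, so reading it before the assignment raises `UnboundLocalError`.

```python
from types import SimpleNamespace

# Illustrative stand-ins for the DB queries.
customers = {1: SimpleNamespace(subdomain="customer-a")}
deployments = {1: SimpleNamespace(container_prefix="netbird-customer-a")}

def stop_customer_buggy(customer_id):
    deployment = deployments.get(customer_id)
    # BUG: 'customer' is assigned further down, which makes it a local
    # variable for the entire function — this read therefore raises
    # UnboundLocalError before the assignment is ever reached.
    instance_dir = f"/opt/netbird-instances/{customer.subdomain}"
    customer = customers.get(customer_id)
    return instance_dir

def stop_customer_fixed(customer_id):
    # Fix: query the customer at the top, alongside the deployment.
    deployment = deployments.get(customer_id)
    customer = customers.get(customer_id)
    if not deployment or not customer:
        return None
    return f"/opt/netbird-instances/{customer.subdomain}"
```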

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 11:12:17 +01:00
Sascha Lustenberger | techlan gmbh
582f92eec4 fix(update): add git safe.directory and fetch --tags after pull
- Register SOURCE_DIR as git safe.directory before pulling so the
  process (root inside container) can access repos owned by a host user
- Run 'git fetch --tags' after pull so git describe always finds the
  latest tag for version.json — git pull does not reliably fetch all tags
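The two extra git invocations this commit adds can be sketched as plain command builders (mirroring how the update service drives git through `subprocess`; the function names are illustrative):

```python
def safe_directory_cmd(source_dir: str) -> list[str]:
    # Registers the repo as a safe.directory so a root process inside the
    # container may operate on a .git directory owned by a host user
    # (git refuses mismatched-ownership repos by default).
    return ["git", "config", "--global", "--add", "safe.directory", source_dir]

def fetch_tags_cmd(source_dir: str) -> list[str]:
    # 'git pull' only fetches tags reachable from the fetched branch head;
    # an explicit 'fetch --tags' guarantees 'git describe' sees the latest tag.
    return ["git", "-C", source_dir, "fetch", "--tags"]
```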

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:58:02 +01:00
Sascha Lustenberger | techlan gmbh
1d27226b6f fix(update): detect compose project name at runtime instead of hardcoding
The project name was hardcoded as 'netbirdmsp-appliance' but Docker Compose
derives the project name from the install directory name ('netbird-msp').
This caused Phase A to build an image under the wrong project name and
Phase B to start the replacement container under a mismatched project,
leaving the old container running indefinitely.

Fix: read the 'com.docker.compose.project' label from the running container
at update time. Both Phase A (build) and Phase B (docker compose up) now
use the detected project name. Falls back to SOURCE_DIR basename if the
inspect fails.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:51:25 +01:00
Sascha Lustenberger | techlan gmbh
c70dc33f67 fix(caddy): route relay WebSocket traffic to relay container
Add /relay* location block to Caddyfile template so that NetBird relay
WebSocket connections (rels://) are correctly forwarded to the relay
container instead of falling through to the dashboard handler.

Without this fix, all relay WebSocket connections silently hit the
dashboard container, causing STUN/relay connectivity failures for all
deployed NetBird instances.
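The routing idea behind this commit can be sketched as a minimal Caddyfile fragment (site address, upstream names, and ports are assumptions, not the appliance's actual template — only the `/relay*` matcher ordering is the point):

```caddyfile
{$NETBIRD_DOMAIN} {
    # Relay WebSocket traffic (rels://) must be matched before the
    # catch-all dashboard handler, otherwise it falls through silently.
    handle /relay* {
        reverse_proxy relay:80
    }
    # Catch-all: everything else goes to the dashboard container.
    handle {
        reverse_proxy dashboard:80
    }
}
```

Caddy evaluates `handle` blocks in order and treats them as mutually exclusive, so placing the `/relay*` block first keeps relay traffic off the dashboard upstream.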

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:31:08 +01:00
Sascha Lustenberger | techlan gmbh
fb264bf7c6 Fix: Add grpc_pass to NPM advanced_config for Management and Signal endpoints 2026-02-23 14:49:43 +01:00
Sascha Lustenberger | techlan gmbh
f3304b90c8 Fix: correctly detect update when current version is unknown 2026-02-23 13:11:04 +01:00
Sascha Lustenberger | techlan gmbh
cda916f2af Fix: display dynamic version on login and use subdomain for customer directories instead of kunde{id} 2026-02-23 12:58:39 +01:00
c3ab7a5a67 fix(api): correct extraction of commit date from gitea branches api 2026-02-22 22:57:07 +01:00
b955e4f464 feat(ui): settings menu restructure, git branch dropdown, and repo cleanup 2026-02-22 21:29:30 +01:00
831564762b feat(ui): clean vertical settings menu and improved version formatting 2026-02-22 16:07:08 +01:00
3f177a6993 fix(updater): add --rm to helper container to remove it after use 2026-02-22 15:58:18 +01:00
ea4afbd6ca alpha-1.8: final test with privileged 2026-02-22 15:49:44 +01:00
95ec6765c1 fix(updater): add --privileged to helper container to bypass user namespace restrictions 2026-02-22 15:46:09 +01:00
c40b7d3bc6 alpha-1.7: final test 2026-02-22 15:39:18 +01:00
525b056b91 fix(updater): add :z flag to docker volumes for SELinux 2026-02-22 15:33:42 +01:00
6bc11d4c5e alpha-1.6: test final update 2026-02-22 15:25:50 +01:00
e0aa51bac3 fix(updater): remove log redirection from helper to avoid nonexistent dir error 2026-02-22 15:22:43 +01:00
94d0b989d0 alpha-1.5: trigger update 2026-02-22 15:16:20 +01:00
2780b065d2 fix(updater): add force-recreate and logging to helper container 2026-02-22 15:14:23 +01:00
ef691a4308 alpha-1.4: test helper-container update 2026-02-22 14:51:43 +01:00
0fe68cc6df fix: use helper container for self-update (survives container restart) 2026-02-22 14:50:00 +01:00
314393d61a alpha-1.3: update test 2026-02-22 14:42:53 +01:00
a9fc549cec fix: correct docker compose project name and target only app service for update 2026-02-22 14:40:07 +01:00
41bbd6676b alpha-1.2: test update button 2026-02-22 14:34:46 +01:00
fc9589b6f9 fix: trigger_update sets GIT_TAG/GIT_COMMIT env vars for docker compose rebuild 2026-02-22 14:32:08 +01:00
6d2251bcf5 alpha-1.1: add version marker to login page 2026-02-22 14:25:44 +01:00
14 changed files with 1301 additions and 651 deletions

.gitignore

@@ -69,5 +69,20 @@ PROJECT_SUMMARY.md
QUICKSTART.md
VS_CODE_SETUP.md
# Gemini / Antigravity
.gemini/
# Windows artifacts
nul
# Debug / temp files (generated during development & testing)
out.txt
containers.txt
helper.txt
logs.txt
port.txt
env.txt
network.txt
update_helper.txt
state.txt
hostpath.txt


@@ -91,7 +91,7 @@ netbird-msp-appliance/
1. Validate inputs (subdomain unique, email valid)
2. Allocate ports (Management internal, Relay UDP public)
3. Generate configs from Jinja2 templates
4. Create instance directory: `/opt/netbird-instances/kunde{id}/`
4. Create instance directory: `/opt/netbird-instances/{subdomain}/`
5. Write `docker-compose.yml`, `management.json`, `relay.env`
6. Start Docker containers via Docker SDK
7. Wait for health checks (max 60s)
@@ -113,7 +113,7 @@ No manual config file editing required!
### 4. Nginx Proxy Manager Integration
**Per customer, create proxy host:**
- Domain: `{subdomain}.{base_domain}`
- Forward to: `netbird-kunde{id}-dashboard:80`
- Forward to: `netbird-{subdomain}-dashboard:80`
- SSL: Automatic Let's Encrypt
- Advanced config: Route `/api/*` to management, `/signalexchange.*` to signal, `/relay` to relay
@@ -272,7 +272,7 @@ networks:
services:
netbird-management:
image: {{ netbird_management_image }}
container_name: netbird-kunde{{ customer_id }}-management
container_name: netbird-{{ subdomain }}-management
restart: unless-stopped
networks:
- npm-network
@@ -285,7 +285,7 @@ services:
netbird-signal:
image: {{ netbird_signal_image }}
container_name: netbird-kunde{{ customer_id }}-signal
container_name: netbird-{{ subdomain }}-signal
restart: unless-stopped
networks:
- npm-network
@@ -294,7 +294,7 @@ services:
netbird-relay:
image: {{ netbird_relay_image }}
container_name: netbird-kunde{{ customer_id }}-relay
container_name: netbird-{{ subdomain }}-relay
restart: unless-stopped
networks:
- npm-network
@@ -311,7 +311,7 @@ services:
netbird-dashboard:
image: {{ netbird_dashboard_image }}
container_name: netbird-kunde{{ customer_id }}-dashboard
container_name: netbird-{{ subdomain }}-dashboard
restart: unless-stopped
networks:
- npm-network


@@ -95,8 +95,8 @@ A management solution for running isolated NetBird instances for your MSP busine
| | Caddy | | | | Caddy | |
| +------------+ | | +------------+ |
+------------------+ +------------------+
kunde1.domain.de kundeN.domain.de
UDP 3478 UDP 3478+N-1
customer-a.domain.de customer-x.domain.de
UDP 3478 UDP 3478+N-1
```
### Components per Customer Instance (5 containers):
@@ -140,9 +140,9 @@ Example for 3 customers:
| Customer | Dashboard (TCP) | Relay (UDP) |
|----------|----------------|-------------|
| Kunde 1 | 9001 | 3478 |
| Kunde 2 | 9002 | 3479 |
| Kunde 3 | 9003 | 3480 |
| Customer-A | 9001 | 3478 |
| Customer-C | 9002 | 3479 |
| Customer-X | 9003 | 3480 |
**Your firewall must allow both the TCP dashboard ports and the UDP relay ports!**
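The port scheme in the table above follows a simple offset pattern; a sketch (function name and base constants taken from the table, not from the appliance's allocator):

```python
DASHBOARD_BASE = 9000   # TCP, per the table: customer N -> 9000 + N
RELAY_BASE = 3478       # UDP, per the table: customer N -> 3478 + N - 1

def allocate_ports(n: int) -> tuple[int, int]:
    """Return (dashboard_tcp, relay_udp) for the N-th customer (1-based)."""
    return DASHBOARD_BASE + n, RELAY_BASE + n - 1
```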


@@ -237,6 +237,10 @@ async def test_ldap(
@router.get("/branding")
async def get_branding(db: Session = Depends(get_db)):
"""Public endpoint — returns branding info for the login page (no auth required)."""
current_version = update_service.get_current_version().get("tag", "alpha-1.1")
if current_version == "unknown":
current_version = "alpha-1.1"
row = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not row:
return {
@@ -244,12 +248,14 @@ async def get_branding(db: Session = Depends(get_db)):
"branding_subtitle": "Multi-Tenant Management Platform",
"branding_logo_path": None,
"default_language": "en",
"version": current_version
}
return {
"branding_name": row.branding_name or "NetBird MSP Appliance",
"branding_subtitle": row.branding_subtitle or "Multi-Tenant Management Platform",
"branding_logo_path": row.branding_logo_path,
"default_language": row.default_language or "en",
"version": current_version
}
@@ -334,6 +340,19 @@ async def get_version(
return result
@router.get("/branches")
async def get_branches(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
):
"""Return a list of available branches from the configured git remote."""
config = get_system_config(db)
if not config or not config.git_repo_url:
return []
branches = await update_service.get_remote_branches(config)
return branches
@router.post("/update")
async def trigger_update(
current_user: User = Depends(get_current_user),


@@ -118,7 +118,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
allocated_port = None
instance_dir = None
container_prefix = f"netbird-kunde{customer_id}"
container_prefix = f"netbird-{customer.subdomain}"
local_mode = _is_local_domain(config.base_domain)
existing_deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
@@ -135,7 +135,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Step 2: Generate secrets (reuse existing key if instance data exists)
relay_secret = generate_relay_secret()
datastore_key = _get_existing_datastore_key(
os.path.join(config.data_dir, f"kunde{customer_id}", "management.json")
os.path.join(config.data_dir, customer.subdomain, "management.json")
)
if datastore_key:
_log_action(db, customer_id, "deploy", "info",
@@ -159,7 +159,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
relay_ws_protocol = "rels"
# Step 4: Create instance directory
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
os.makedirs(instance_dir, exist_ok=True)
os.makedirs(os.path.join(instance_dir, "data", "management"), exist_ok=True)
os.makedirs(os.path.join(instance_dir, "data", "signal"), exist_ok=True)
@@ -225,7 +225,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Step 8: Auto-create admin user via NetBird setup API
admin_email = customer.email
admin_password = secrets.token_urlsafe(16)
management_container = f"netbird-kunde{customer_id}-management"
management_container = f"netbird-{customer.subdomain}-management"
setup_api_url = f"http://{management_container}:80/api/setup"
setup_payload = json.dumps({
"name": customer.name,
@@ -264,7 +264,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
_log_action(db, customer_id, "deploy", "info",
"Auto-setup failed — admin must complete setup manually.")
# Step 9: Create NPM proxy host + stream (production only)
# Step 9: Create NPM proxy host (production only)
npm_proxy_id = None
npm_stream_id = None
if not local_mode:
@@ -294,27 +294,6 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
f"(SSL: {'OK' if ssl_ok else 'FAILED — check DNS and port 80 accessibility'})",
)
# Create NPM UDP stream for relay STUN port
stream_result = await npm_service.create_stream(
api_url=config.npm_api_url,
npm_email=config.npm_api_email,
npm_password=config.npm_api_password,
incoming_port=allocated_port,
forwarding_host=forward_host,
forwarding_port=allocated_port,
)
npm_stream_id = stream_result.get("stream_id")
if stream_result.get("error"):
_log_action(
db, customer_id, "deploy", "error",
f"NPM stream creation failed: {stream_result['error']}",
)
else:
_log_action(
db, customer_id, "deploy", "info",
f"NPM UDP stream created: port {allocated_port} -> {forward_host}:{allocated_port}",
)
# Note: Keep HTTPS configs even if SSL cert creation failed.
# SSL can be set up manually in NPM later. Switching to HTTP
# would break the dashboard when the user accesses via HTTPS.
@@ -387,7 +366,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Rollback: stop containers if they were started
try:
await docker_service.compose_down(
instance_dir or os.path.join(config.data_dir, f"kunde{customer_id}"),
instance_dir or os.path.join(config.data_dir, customer.subdomain),
container_prefix,
remove_volumes=True,
)
@@ -423,7 +402,7 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
config = get_system_config(db)
if deployment and config:
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
# Stop and remove containers
try:
@@ -443,17 +422,6 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
except Exception as exc:
_log_action(db, customer_id, "undeploy", "error", f"NPM removal error: {exc}")
# Remove NPM stream
if deployment.npm_stream_id and config.npm_api_email:
try:
await npm_service.delete_stream(
config.npm_api_url, config.npm_api_email, config.npm_api_password,
deployment.npm_stream_id,
)
_log_action(db, customer_id, "undeploy", "info", "NPM stream removed.")
except Exception as exc:
_log_action(db, customer_id, "undeploy", "error", f"NPM stream removal error: {exc}")
# Remove Windows DNS A-record (non-fatal)
if config and config.dns_enabled and config.dns_server and config.dns_zone:
try:
@@ -484,17 +452,16 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def stop_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Stop containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_stop(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "stopped"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "inactive"
customer.status = "inactive"
db.commit()
_log_action(db, customer_id, "stop", "success", "Containers stopped.")
else:
@@ -505,17 +472,16 @@ async def stop_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def start_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Start containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_start(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "running"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "active"
customer.status = "active"
db.commit()
_log_action(db, customer_id, "start", "success", "Containers started.")
else:
@@ -526,17 +492,16 @@ async def start_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def restart_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Restart containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_restart(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "running"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "active"
customer.status = "active"
db.commit()
_log_action(db, customer_id, "restart", "success", "Containers restarted.")
else:


@@ -259,7 +259,16 @@ async def create_proxy_host(
"block_exploits": True,
"allow_websocket_upgrade": True,
"access_list_id": 0,
"advanced_config": "",
"advanced_config": (
"location ^~ /management.ManagementService/ {\n"
f" grpc_pass grpc://{forward_host}:{forward_port};\n"
" grpc_set_header Host $host;\n"
"}\n"
"location ^~ /signalexchange.SignalExchange/ {\n"
f" grpc_pass grpc://{forward_host}:{forward_port};\n"
" grpc_set_header Host $host;\n"
"}\n"
),
"meta": {
"letsencrypt_agree": True,
"letsencrypt_email": admin_email,


@@ -2,8 +2,10 @@
import json
import logging
import os
import shutil
import subprocess
import httpx
from datetime import datetime
from pathlib import Path
from typing import Any
@@ -13,10 +15,45 @@ import httpx
SOURCE_DIR = "/app-source"
VERSION_FILE = "/app/version.json"
BACKUP_DIR = "/app/backups"
CONTAINER_NAME = "netbird-msp-appliance"
SERVICE_NAME = "netbird-msp-appliance"
logger = logging.getLogger(__name__)
def _get_compose_project_name() -> str:
"""Detect the compose project name from the running container's labels.
Docker Compose sets the label ``com.docker.compose.project`` on every
managed container. Reading it at runtime avoids hard-coding a project
name that may differ from the directory name used at deploy time.
Returns:
The compose project name (e.g. ``netbird-msp``).
"""
try:
result = subprocess.run(
[
"docker", "inspect", CONTAINER_NAME,
"--format",
'{{index .Config.Labels "com.docker.compose.project"}}',
],
capture_output=True, text=True, timeout=10,
)
if result.returncode == 0:
project = result.stdout.strip()
if project:
logger.info("Detected compose project name: %s", project)
return project
except Exception as exc:
logger.warning("Could not detect compose project name: %s", exc)
# Fallback: derive from SOURCE_DIR basename (mirrors Compose default behaviour)
fallback = Path(SOURCE_DIR).name
logger.warning("Using fallback compose project name: %s", fallback)
return fallback
def get_current_version() -> dict:
"""Read the version baked at build time from /app/version.json."""
try:
@@ -102,15 +139,19 @@ async def check_for_updates(config: Any) -> dict:
"tag": latest_tag,
"commit": short_sha,
"commit_full": full_sha,
"message": latest_commit.get("commit", {}).get("message", "").split("\n")[0],
"date": latest_commit.get("commit", {}).get("committer", {}).get("date", ""),
"message": latest_commit.get("commit", {}).get("message", "").split("\n")[0] if latest_commit.get("commit") else "",
"date": latest_commit.get("timestamp", ""),
"branch": branch,
}
# Determine if update is needed: prefer tag comparison, fallback to commit
current_tag = current.get("tag", "unknown")
current_sha = current.get("commit", "unknown")
if current_tag != "unknown" and latest_tag != "unknown":
# If we don't know our current version but the remote has one, we should update
if current_tag == "unknown" and current_sha == "unknown":
needs_update = latest_tag != "unknown" or short_sha != "unknown"
elif current_tag != "unknown" and latest_tag != "unknown":
needs_update = current_tag != latest_tag
else:
needs_update = (
@@ -129,6 +170,42 @@ async def check_for_updates(config: Any) -> dict:
}
async def get_remote_branches(config: Any) -> list[str]:
"""Query the Gitea API for available branches on the configured repository.
Returns a list of branch names (e.g., ['main', 'unstable', 'development']).
If the repository URL is not configured or an error occurs, returns an empty list.
"""
if not config.git_repo_url:
return []
repo_url = config.git_repo_url.rstrip("/")
parts = repo_url.split("/")
if len(parts) < 5:
return []
base_url = "/".join(parts[:-2])
owner = parts[-2]
repo = parts[-1]
branches_api = f"{base_url}/api/v1/repos/{owner}/{repo}/branches?limit=100"
headers = {}
if config.git_token:
headers["Authorization"] = f"token {config.git_token}"
try:
async with httpx.AsyncClient(timeout=10) as client:
resp = await client.get(branches_api, headers=headers)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, list):
return [branch.get("name") for branch in data if "name" in branch]
except Exception as exc:
logger.error("Error fetching branches: %s", exc)
return []
def backup_database(db_path: str) -> str:
"""Create a timestamped backup of the SQLite database.
@@ -175,6 +252,16 @@ def trigger_update(config: Any, db_path: str) -> dict:
pull_cmd = ["git", "-C", SOURCE_DIR, "pull", "origin", branch]
# 3. Git pull (synchronous — must complete before rebuild)
# Ensure .git directory is owned by the process user (root inside container).
# The .git dir may be owned by the host user after manual operations.
try:
subprocess.run(
["git", "config", "--global", "--add", "safe.directory", SOURCE_DIR],
capture_output=True, timeout=10,
)
except Exception:
pass
try:
result = subprocess.run(
pull_cmd,
@@ -198,18 +285,156 @@ def trigger_update(config: Any, db_path: str) -> dict:
logger.info("git pull succeeded: %s", result.stdout.strip()[:200])
# 4. Fire-and-forget docker compose rebuild — the container will restart itself
compose_cmd = [
"docker", "compose",
"-f", f"{SOURCE_DIR}/docker-compose.yml",
"up", "--build", "-d",
]
subprocess.Popen(
compose_cmd,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
# Fetch tags separately — git pull does not always pull all tags
try:
subprocess.run(
["git", "-C", SOURCE_DIR, "fetch", "--tags"],
capture_output=True, text=True, timeout=30,
)
except Exception as exc:
logger.warning("git fetch --tags failed (non-fatal): %s", exc)
# 4. Read version info from the freshly-pulled source
build_env = os.environ.copy()
try:
build_env["GIT_COMMIT"] = subprocess.run(
["git", "-C", SOURCE_DIR, "rev-parse", "--short", "HEAD"],
capture_output=True, text=True, timeout=10,
).stdout.strip() or "unknown"
build_env["GIT_BRANCH"] = subprocess.run(
["git", "-C", SOURCE_DIR, "rev-parse", "--abbrev-ref", "HEAD"],
capture_output=True, text=True, timeout=10,
).stdout.strip() or "unknown"
build_env["GIT_COMMIT_DATE"] = subprocess.run(
["git", "-C", SOURCE_DIR, "log", "-1", "--format=%cI"],
capture_output=True, text=True, timeout=10,
).stdout.strip() or "unknown"
tag_result = subprocess.run(
["git", "-C", SOURCE_DIR, "describe", "--tags", "--abbrev=0"],
capture_output=True, text=True, timeout=10,
)
build_env["GIT_TAG"] = tag_result.stdout.strip() if tag_result.returncode == 0 else "unknown"
except Exception as exc:
logger.warning("Could not read version info from source: %s", exc)
logger.info(
"Rebuilding with GIT_TAG=%s GIT_COMMIT=%s GIT_BRANCH=%s",
build_env.get("GIT_TAG", "?"),
build_env.get("GIT_COMMIT", "?"),
build_env.get("GIT_BRANCH", "?"),
)
logger.info("docker compose up --build -d triggered — container will restart shortly.")
# 5. Two-phase rebuild: Build image first, then swap container.
# The swap will kill this process (we ARE the container), so we must
# ensure the compose-up runs detached on the Docker host via a wrapper.
log_path = Path(BACKUP_DIR) / "update_rebuild.log"
# Detect compose project name at runtime — avoids hard-coding a name that
# may differ from the directory used at deploy time.
project_name = _get_compose_project_name()
# Image name follows Docker Compose convention: {project}-{service}
service_image = f"{project_name}-{SERVICE_NAME}:latest"
logger.info("Using project=%s image=%s", project_name, service_image)
# Phase A — build the new image (does NOT stop anything)
build_cmd = [
"docker", "compose",
"-p", project_name,
"-f", f"{SOURCE_DIR}/docker-compose.yml",
"build", "--no-cache",
SERVICE_NAME,
]
logger.info("Phase A: building new image …")
try:
build_result = subprocess.run(
build_cmd,
capture_output=True, text=True,
timeout=600,
env=build_env,
)
with open(log_path, "w") as f:
f.write(build_result.stdout)
f.write(build_result.stderr)
if build_result.returncode != 0:
logger.error("Image build failed: %s", build_result.stderr[:500])
return {
"ok": False,
"message": f"Image build failed: {build_result.stderr[:300]}",
"backup": backup_path,
}
except subprocess.TimeoutExpired:
return {"ok": False, "message": "Image build timed out after 600s.", "backup": backup_path}
logger.info("Phase A complete — image built successfully.")
# Phase B — swap the container using a helper container.
# When compose recreates our container, ALL processes inside die (PID namespace
# is destroyed). So we launch a *separate* helper container via 'docker run -d'
# that has access to the Docker socket and runs 'docker compose up -d'.
# This helper lives outside our container and survives our restart.
# Discover the host-side path of /app-source (docker volumes use host paths)
try:
inspect_result = subprocess.run(
["docker", "inspect", "netbird-msp-appliance",
"--format", '{{range .Mounts}}{{if eq .Destination "/app-source"}}{{.Source}}{{end}}{{end}}'],
capture_output=True, text=True, timeout=10,
)
host_source_dir = inspect_result.stdout.strip()
if not host_source_dir:
raise ValueError("Could not find /app-source mount")
except Exception as exc:
logger.error("Failed to discover host source path: %s", exc)
return {"ok": False, "message": f"Could not find host source path: {exc}", "backup": backup_path}
logger.info("Host source directory: %s", host_source_dir)
env_flags = []
for key in ("GIT_TAG", "GIT_COMMIT", "GIT_BRANCH", "GIT_COMMIT_DATE"):
val = build_env.get(key, "unknown")
env_flags.extend(["-e", f"{key}={val}"])
helper_cmd = [
"docker", "run", "--rm", "-d", "--privileged",
"--name", "msp-updater",
"-v", "/var/run/docker.sock:/var/run/docker.sock:z",
"-v", f"{host_source_dir}:{host_source_dir}:ro,z",
*env_flags,
service_image, # freshly built image — has docker CLI + compose plugin
"sh", "-c",
(
"sleep 3 && "
f"docker compose -p {project_name} "
f"-f {host_source_dir}/docker-compose.yml "
f"up --force-recreate --no-deps -d {SERVICE_NAME}"
),
]
try:
# Remove stale updater container if any
subprocess.run(
["docker", "rm", "-f", "msp-updater"],
capture_output=True, timeout=10,
)
result = subprocess.run(
helper_cmd,
capture_output=True, text=True,
timeout=30,
env=build_env,
)
if result.returncode != 0:
logger.error("Failed to start updater container: %s", result.stderr.strip())
return {
"ok": False,
"message": f"Failed to start updater container: {result.stderr.strip()[:200]}",
"backup": backup_path,
}
logger.info("Phase B: updater container started — this container will restart in ~5s.")
except Exception as exc:
logger.error("Failed to launch updater: %s", exc)
return {"ok": False, "message": f"Updater launch failed: {exc}", "backup": backup_path}
return {
"ok": True,

static/favicon.svg (new file)

@@ -0,0 +1,21 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
<!-- Blue rounded background -->
<rect width="32" height="32" rx="7" fill="#2563EB"/>
<!-- Bird silhouette: top-down view, wings spread, forked tail -->
<path fill="white" d="
M 16 7
C 15 8 14 9.5 14 11
C 11 10.5 7 11 4 14
C 8 15 12 14.5 14 14.5
L 15 22
L 13 26
L 16 24
L 19 26
L 17 22
L 18 14.5
C 20 14.5 24 15 28 14
C 25 11 21 10.5 18 11
C 18 9.5 17 8 16 7 Z
"/>
</svg>


File diff suppressed because it is too large.


@@ -12,7 +12,7 @@ let currentPage = 'dashboard';
let currentCustomerId = null;
let currentCustomerData = null;
let customersPage = 1;
let brandingData = { branding_name: 'NetBird MSP Appliance', branding_logo_path: null };
let brandingData = { branding_name: 'NetBird MSP Appliance', branding_logo_path: null, version: 'alpha-1.1' };
let azureConfig = { azure_enabled: false };
// ---------------------------------------------------------------------------
@@ -127,12 +127,19 @@ function applyBranding() {
const name = brandingData.branding_name || 'NetBird MSP Appliance';
const subtitle = brandingData.branding_subtitle || t('login.subtitle');
const logoPath = brandingData.branding_logo_path;
const version = brandingData.version || 'alpha-1.1';
// Login page
document.getElementById('login-title').textContent = name;
const subtitleEl = document.getElementById('login-subtitle');
if (subtitleEl) subtitleEl.textContent = subtitle;
document.title = name;
// Update version string in login page
const versionEl = document.querySelector('#login-page .text-muted.small.mb-0');
if (versionEl) {
versionEl.innerHTML = `<i class="bi bi-tag me-1"></i>${version}`;
}
if (logoPath) {
document.getElementById('login-logo').innerHTML = `<img src="${logoPath}" alt="Logo" style="max-height:64px;max-width:200px;" class="mb-1">`;
} else {
@@ -366,7 +373,7 @@ function logout() {
'Content-Type': 'application/json',
'Authorization': `Bearer ${authToken}`,
},
}).catch(() => {});
}).catch(() => { });
}
authToken = null;
currentUser = null;
@@ -465,9 +472,9 @@ function renderCustomersTable(data) {
<div class="btn-group btn-group-sm">
<button class="btn btn-outline-primary" title="${t('common.view')}" onclick="viewCustomer(${c.id})"><i class="bi bi-eye"></i></button>
${c.deployment && c.deployment.deployment_status === 'running'
? `<button class="btn btn-outline-warning" title="${t('common.stop')}" onclick="customerAction(${c.id},'stop')"><i class="bi bi-stop-circle"></i></button>`
: `<button class="btn btn-outline-success" title="${t('common.start')}" onclick="customerAction(${c.id},'start')"><i class="bi bi-play-circle"></i></button>`
}
? `<button class="btn btn-outline-warning" title="${t('common.stop')}" onclick="customerAction(${c.id},'stop')"><i class="bi bi-stop-circle"></i></button>`
: `<button class="btn btn-outline-success" title="${t('common.start')}" onclick="customerAction(${c.id},'start')"><i class="bi bi-play-circle"></i></button>`
}
<button class="btn btn-outline-info" title="${t('common.restart')}" onclick="customerAction(${c.id},'restart')"><i class="bi bi-arrow-repeat"></i></button>
<button class="btn btn-outline-danger" title="${t('common.delete')}" onclick="showDeleteModal(${c.id},'${esc(c.name)}')"><i class="bi bi-trash"></i></button>
</div>
@@ -511,7 +518,7 @@ function showNewCustomerModal() {
// Update subdomain suffix
api('GET', '/settings/system').then(cfg => {
document.getElementById('cust-subdomain-suffix').textContent = `.${cfg.base_domain || 'domain.com'}`;
}).catch(() => {});
}).catch(() => { });
const modalEl = document.getElementById('customer-modal');
const modal = bootstrap.Modal.getOrCreateInstance(modalEl);
@@ -872,6 +879,9 @@ async function loadSettings() {
 } catch (err) {
 showSettingsAlert('danger', t('errors.failedToLoadSettings', { error: err.message }));
 }
+// Automatically fetch branches once the base config is populated
+await loadGitBranches();
 }
function updateLogoPreview(logoPath) {
@@ -1183,6 +1193,42 @@ async function testLdapConnection() {
 }
 }
+async function loadGitBranches() {
+const branchSelect = document.getElementById('cfg-git-branch');
+const currentVal = branchSelect.value;
+// Disable the dropdown while loading
+branchSelect.disabled = true;
+branchSelect.innerHTML = `<option value="${currentVal}">${currentVal} (Loading...)</option>`;
+try {
+const branches = await api('GET', '/settings/branches');
+branchSelect.innerHTML = '';
+// Always ensure the currently saved branch is an option
+if (currentVal && !branches.includes(currentVal)) {
+branches.unshift(currentVal);
+}
+if (branches.length === 0) {
+branchSelect.innerHTML = `<option value="main">main</option>`;
+} else {
+branches.forEach(b => {
+const opt = document.createElement('option');
+opt.value = b;
+opt.textContent = b;
+if (b === currentVal) opt.selected = true;
+branchSelect.appendChild(opt);
+});
+}
+} catch (err) {
+showSettingsAlert('warning', `Failed to load branches: ${err.message}`);
+branchSelect.innerHTML = `<option value="${currentVal}">${currentVal}</option>`;
+} finally {
+branchSelect.disabled = false;
+}
+}
 // ---------------------------------------------------------------------------
 // Update / Version Management
 // ---------------------------------------------------------------------------
@@ -1219,12 +1265,12 @@ async function loadVersionInfo() {
 let html = `<div class="row g-3">
 <div class="col-md-6">
-<div class="border rounded p-3">
+<div class="border rounded p-3 h-100">
 <div class="text-muted small mb-1">${t('settings.currentVersion')}</div>
 <div class="fw-bold fs-5">${esc(currentTag || currentCommit)}</div>
 ${currentTag ? `<div class="text-muted small font-monospace">${t('settings.commitHash')}: ${esc(currentCommit)}</div>` : ''}
 <div class="text-muted small">${t('settings.branch')}: <strong>${esc(current.branch || 'unknown')}</strong></div>
-<div class="text-muted small">${esc(current.date || '')}</div>
+<div class="text-muted small mt-2"><i class="bi bi-clock me-1"></i>${formatDate(current.date)}</div>
 </div>
 </div>`;
@@ -1235,17 +1281,17 @@ async function loadVersionInfo() {
 ? `<span class="badge bg-warning text-dark ms-1">${t('settings.updateAvailable')}</span>`
 : `<span class="badge bg-success ms-1">${t('settings.upToDate')}</span>`;
 html += `<div class="col-md-6">
-<div class="border rounded p-3 ${needsUpdate ? 'border-warning' : ''}">
+<div class="border rounded p-3 h-100 ${needsUpdate ? 'border-warning' : ''}">
 <div class="text-muted small mb-1">${t('settings.latestVersion')} ${badge}</div>
 <div class="fw-bold fs-5">${esc(latestTag || latestCommit)}</div>
 ${latestTag ? `<div class="text-muted small font-monospace">${t('settings.commitHash')}: ${esc(latestCommit)}</div>` : ''}
 <div class="text-muted small">${t('settings.branch')}: <strong>${esc(latest.branch || 'unknown')}</strong></div>
-<div class="text-muted small">${esc(latest.message || '')}</div>
-<div class="text-muted small">${esc(latest.date || '')}</div>
+<div class="text-muted small mt-2"><i class="bi bi-clock me-1"></i>${formatDate(latest.date)}</div>
+${latest.message ? `<div class="text-muted small mt-1 border-top pt-1 text-truncate" title="${esc(latest.message)}"><i class="bi bi-chat-text me-1"></i>${esc(latest.message)}</div>` : ''}
 </div>
 </div>`;
 } else if (data.error) {
-html += `<div class="col-md-6"><div class="alert alert-warning mb-0">${esc(data.error)}</div></div>`;
+html += `<div class="col-md-6"><div class="alert alert-warning h-100 mb-0">${esc(data.error)}</div></div>`;
 }
 html += '</div>';
html += '</div>';
@@ -1305,9 +1351,9 @@ async function loadUsers() {
 <td>
 <div class="btn-group btn-group-sm">
 ${u.is_active
-? `<button class="btn btn-outline-warning" title="${t('common.disable')}" onclick="toggleUserActive(${u.id}, false)"><i class="bi bi-pause-circle"></i></button>`
-: `<button class="btn btn-outline-success" title="${t('common.enable')}" onclick="toggleUserActive(${u.id}, true)"><i class="bi bi-play-circle"></i></button>`
-}
+? `<button class="btn btn-outline-warning" title="${t('common.disable')}" onclick="toggleUserActive(${u.id}, false)"><i class="bi bi-pause-circle"></i></button>`
+: `<button class="btn btn-outline-success" title="${t('common.enable')}" onclick="toggleUserActive(${u.id}, true)"><i class="bi bi-play-circle"></i></button>`
+}
 ${u.auth_provider === 'local' ? `<button class="btn btn-outline-info" title="${t('common.resetPassword')}" onclick="resetUserPassword(${u.id}, '${esc(u.username)}')"><i class="bi bi-key"></i></button>` : ''}
 ${u.totp_enabled ? `<button class="btn btn-outline-secondary" title="${t('mfa.resetMfa')}" onclick="resetUserMfa(${u.id}, '${esc(u.username)}')"><i class="bi bi-shield-x"></i></button>` : ''}
 <button class="btn btn-outline-danger" title="${t('common.delete')}" onclick="deleteUser(${u.id}, '${esc(u.username)}')"><i class="bi bi-trash"></i></button>
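The branch-select population added in `loadGitBranches` above boils down to a small merge rule: keep the currently saved branch as an option even when the API no longer lists it, and fall back to `main` when nothing is returned. A minimal sketch of that rule as a pure helper (the name `buildBranchOptions` is hypothetical, not part of the diff):

```javascript
// Hypothetical helper mirroring the option logic in loadGitBranches:
// the saved branch is always kept as an option, and 'main' is the
// fallback when the API returns no branches at all.
function buildBranchOptions(branches, currentVal) {
  const list = [...branches];
  if (currentVal && !list.includes(currentVal)) list.unshift(currentVal);
  return list.length > 0 ? list : ['main'];
}
```

Keeping the saved branch in the list matters because the `<select>` would otherwise silently drop the persisted setting whenever the remote briefly fails to report it.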


@@ -93,19 +93,22 @@
 },
 "settings": {
 "title": "Systemeinstellungen",
-"tabSystem": "Systemkonfiguration",
-"tabNpm": "NPM Integration",
-"tabImages": "Docker Images",
+"tabSystem": "NetBird MSP System",
+"tabNpm": "NPM Proxy",
+"tabImages": "NetBird Docker Images",
 "tabBranding": "Branding",
 "tabUsers": "Benutzer",
 "tabAzure": "Azure AD",
 "tabDns": "Windows DNS",
 "tabLdap": "LDAP / AD",
-"tabUpdate": "Updates",
+"tabUpdate": "NetBird MSP Updates",
 "tabSecurity": "Sicherheit",
+"groupUsers": "Benutzerverwaltung",
+"groupSystem": "Systemkonfiguration",
+"groupExternal": "Umsysteme",
 "baseDomain": "Basis-Domain",
 "baseDomainPlaceholder": "ihredomain.com",
-"baseDomainHint": "Kunden erhalten Subdomains: kunde.ihredomain.com",
+"baseDomainHint": "Kunden erhalten Subdomains: kundenname.ihredomain.com",
 "adminEmail": "Admin E-Mail",
 "adminEmailPlaceholder": "admin@ihredomain.com",
 "dataDir": "Datenverzeichnis",
@@ -115,7 +118,7 @@
 "relayBasePort": "Relay-Basisport",
 "relayBasePortHint": "Erster UDP-Port für Relay. Bereich: Basis bis Basis+99",
 "dashboardBasePort": "Dashboard-Basisport",
-"dashboardBasePortHint": "Basisport für Kunden-Dashboards. Kunde N erhält Basis+N",
+"dashboardBasePortHint": "Basisport für Kunden-Dashboards. Der erste Kunde erhält Basis+1",
 "saveSystemSettings": "Systemeinstellungen speichern",
 "npmDescription": "NPM verwendet JWT-Authentifizierung. Geben Sie Ihre NPM-Zugangsdaten ein. Das System meldet sich automatisch an.",
 "npmApiUrl": "NPM API URL",


@@ -114,16 +114,19 @@
 },
 "settings": {
 "title": "System Settings",
-"tabSystem": "System Configuration",
-"tabNpm": "NPM Integration",
-"tabImages": "Docker Images",
+"tabSystem": "NetBird MSP System",
+"tabNpm": "NPM Proxy",
+"tabImages": "NetBird Docker Images",
 "tabBranding": "Branding",
 "tabUsers": "Users",
 "tabAzure": "Azure AD",
 "tabDns": "Windows DNS",
 "tabLdap": "LDAP / AD",
-"tabUpdate": "Updates",
+"tabUpdate": "NetBird MSP Updates",
 "tabSecurity": "Security",
+"groupUsers": "User Management",
+"groupSystem": "System Configuration",
+"groupExternal": "External Systems",
 "baseDomain": "Base Domain",
 "baseDomainPlaceholder": "yourdomain.com",
 "baseDomainHint": "Customers get subdomains: customer.yourdomain.com",
@@ -370,4 +373,4 @@
 "confirmDeleteUser": "Delete user '{username}'? This cannot be undone.",
 "confirmResetPassword": "Reset password for '{username}'? A new random password will be generated."
 }
-}
+}


@@ -5,15 +5,15 @@
 :80 {
 # Embedded IdP OAuth2/OIDC endpoints
 handle /oauth2/* {
-reverse_proxy netbird-kunde{{ customer_id }}-management:80
+reverse_proxy netbird-{{ subdomain }}-management:80
 }
 # NetBird Management API + gRPC
 handle /api/* {
-reverse_proxy netbird-kunde{{ customer_id }}-management:80
+reverse_proxy netbird-{{ subdomain }}-management:80
 }
 handle /management.ManagementService/* {
-reverse_proxy netbird-kunde{{ customer_id }}-management:80 {
+reverse_proxy netbird-{{ subdomain }}-management:80 {
 transport http {
 versions h2c
 }
@@ -22,15 +22,20 @@
 # NetBird Signal gRPC
 handle /signalexchange.SignalExchange/* {
-reverse_proxy netbird-kunde{{ customer_id }}-signal:80 {
+reverse_proxy netbird-{{ subdomain }}-signal:80 {
 transport http {
 versions h2c
 }
 }
 }
+# NetBird Relay WebSocket (rels://)
+handle /relay* {
+reverse_proxy netbird-{{ subdomain }}-relay:80
+}
 # Default: NetBird Dashboard
 handle {
-reverse_proxy netbird-kunde{{ customer_id }}-dashboard:80
+reverse_proxy netbird-{{ subdomain }}-dashboard:80
 }
 }
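For illustration only: rendered with a hypothetical `subdomain` value of `acme`, the new relay route in the Caddyfile template above would come out as:

```
# NetBird Relay WebSocket (rels://)
handle /relay* {
    reverse_proxy netbird-acme-relay:80
}
```

This is the route that lets NetBird clients reach the relay over the rels:// WebSocket path through the NPM proxy host, which is why the deploy flow no longer needs per-customer UDP stream entries in NPM.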


@@ -6,7 +6,7 @@ services:
 # --- Caddy Reverse Proxy (entry point) ---
 netbird-caddy:
 image: caddy:2-alpine
-container_name: netbird-kunde{{ customer_id }}-caddy
+container_name: netbird-{{ subdomain }}-caddy
 restart: unless-stopped
 networks:
 - {{ docker_network }}
@@ -18,7 +18,7 @@ services:
 # --- NetBird Management (with embedded IdP) ---
 netbird-management:
 image: {{ netbird_management_image }}
-container_name: netbird-kunde{{ customer_id }}-management
+container_name: netbird-{{ subdomain }}-management
 restart: unless-stopped
 networks:
 - {{ docker_network }}
@@ -39,7 +39,7 @@ services:
 # --- NetBird Signal ---
 netbird-signal:
 image: {{ netbird_signal_image }}
-container_name: netbird-kunde{{ customer_id }}-signal
+container_name: netbird-{{ subdomain }}-signal
 restart: unless-stopped
 networks:
 - {{ docker_network }}
@@ -49,7 +49,7 @@ services:
 # --- NetBird Relay ---
 netbird-relay:
 image: {{ netbird_relay_image }}
-container_name: netbird-kunde{{ customer_id }}-relay
+container_name: netbird-{{ subdomain }}-relay
 restart: unless-stopped
 networks:
 - {{ docker_network }}
@@ -61,7 +61,7 @@ services:
 # --- NetBird Dashboard ---
 netbird-dashboard:
 image: {{ netbird_dashboard_image }}
-container_name: netbird-kunde{{ customer_id }}-dashboard
+container_name: netbird-{{ subdomain }}-dashboard
 restart: unless-stopped
 networks:
 - {{ docker_network }}