Compare commits


17 Commits

Author SHA1 Message Date
848ead0b2c feat(updates): NetBird container image update management
- New image_service.py: Docker Hub digest check (no pull), local digest/ID
  comparison, pull_all_images, per-customer container image status, and
  update_customer_containers (docker compose up -d, data-safe)
- Monitoring endpoints: GET /images/check (hub vs local + per-customer
  needs_update), POST /images/pull (background), POST /customers/update-all
- Deployment endpoint: POST /{id}/update-images (single-customer update)
- Monitoring page: "NetBird Container Updates" card with Check / Pull / Update
  All buttons; image status table and per-customer update table with inline
  update buttons
- i18n: added keys in en.json and de.json

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 21:01:56 +01:00
Sascha Lustenberger | techlan gmbh
796824c400 feat(users): allow role assignment for Azure AD and LDAP users
- Backend: add admin-only guard + role validation to PUT /users/{id}
- Backend: prevent admins from changing their own role
- Frontend: role toggle button (person-check / person-dash) per user row
- Frontend: admin badge green, viewer badge secondary, ldap badge blue
- i18n: add makeAdmin / makeViewer translations (de + en)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 20:27:54 +01:00
Sascha Lustenberger | techlan gmbh
8103fffcb8 fix(docker): persist branding uploads across container rebuilds
Mount ./data/uploads into /app/static/uploads so uploaded logos
survive image rebuilds during the update process.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 20:19:22 +01:00
Sascha Lustenberger | techlan gmbh
13408225b4 feat(ui): add dark mode toggle to navbar
Uses Bootstrap 5.3 native data-bs-theme with localStorage persistence.
Inline script in <head> prevents flash on page load.
Moon/sun icon in top-right navbar switches between light and dark.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 20:08:18 +01:00
Sascha Lustenberger | techlan gmbh
0f77aaa176 fix(deploy): remove NPM stream creation on customer deploy/undeploy
STUN/TURN UDP relay no longer requires NPM stream entries.
NetBird uses rels:// WebSocket relay via NPM proxy host instead.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 19:42:12 +01:00
Sascha Lustenberger | techlan gmbh
0bc7c0ba9f feat(ui): add SVG favicon for NetBird MSP Appliance
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 16:54:21 +01:00
Sascha Lustenberger | techlan gmbh
27428b69a0 fix(netbird): query customer before use in stop/start/restart
In stop_customer, start_customer and restart_customer the local variable
'customer' was referenced on the instance_dir line before it was assigned
(it was only queried after the docker compose call). This caused an
UnboundLocalError (HTTP 500) on every stop/start/restart action.

Fix: move the customer query to the top of each function alongside the
deployment and config queries.
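
The failure mode fixed here can be reproduced with a minimal, self-contained sketch (names are illustrative, not the appliance's actual code):

```python
class Customer:
    """Stand-in for the ORM model (illustrative)."""
    subdomain = "acme"

def query_customer(customer_id):
    return Customer()

def stop_customer_buggy(customer_id):
    # BUG: 'customer' is read before it is assigned. Because the name is
    # assigned later in the same function, Python treats it as a local
    # for the entire body -> UnboundLocalError, surfacing as an HTTP 500.
    instance_dir = f"/opt/netbird-instances/{customer.subdomain}"
    customer = query_customer(customer_id)  # assigned too late
    return instance_dir

def stop_customer_fixed(customer_id):
    # Fix: query the customer at the top, before any use.
    customer = query_customer(customer_id)
    instance_dir = f"/opt/netbird-instances/{customer.subdomain}"
    return instance_dir
```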

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 11:12:17 +01:00
Sascha Lustenberger | techlan gmbh
582f92eec4 fix(update): add git safe.directory and fetch --tags after pull
- Register SOURCE_DIR as git safe.directory before pulling so the
  process (root inside container) can access repos owned by a host user
- Run 'git fetch --tags' after pull so git describe always finds the
  latest tag for version.json — git pull does not reliably fetch all tags
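
The resulting sequence can be sketched as follows — a minimal outline assuming a `SOURCE_DIR`-style path and plain `git` CLI calls, not the updater's actual code:

```python
import subprocess

def build_update_commands(source_dir: str) -> list[list[str]]:
    """Commands for the update flow, in order: mark the repo as a git
    safe.directory for the in-container root user, pull, then explicitly
    fetch tags so 'git describe' sees the newest tag (a plain 'git pull'
    does not reliably fetch all tags)."""
    return [
        ["git", "config", "--global", "--add", "safe.directory", source_dir],
        ["git", "-C", source_dir, "pull"],
        ["git", "-C", source_dir, "fetch", "--tags"],
        ["git", "-C", source_dir, "describe", "--tags"],
    ]

def run_update(source_dir: str) -> str:
    """Execute the sequence and return the described tag."""
    *setup, describe = build_update_commands(source_dir)
    for cmd in setup:
        subprocess.run(cmd, check=True)
    out = subprocess.run(describe, capture_output=True, text=True, check=True)
    return out.stdout.strip()
```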

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:58:02 +01:00
Sascha Lustenberger | techlan gmbh
1d27226b6f fix(update): detect compose project name at runtime instead of hardcoding
The project name was hardcoded as 'netbirdmsp-appliance' but Docker Compose
derives the project name from the install directory name ('netbird-msp').
This caused Phase A to build an image under the wrong project name and
Phase B to start the replacement container under a mismatched project,
leaving the old container running indefinitely.

Fix: read the 'com.docker.compose.project' label from the running container
at update time. Both Phase A (build) and Phase B (docker compose up) now
use the detected project name. Falls back to SOURCE_DIR basename if the
inspect fails.
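
The detection step can be sketched like this — a minimal version assuming the appliance container's own name is known (names are illustrative, not the updater's actual code):

```python
import os
import subprocess

def detect_compose_project(container_name: str, source_dir: str) -> str:
    """Read the Compose project name from a running container's
    'com.docker.compose.project' label; fall back to the install
    directory's basename (the name Compose derives by default) if
    'docker inspect' fails or returns nothing."""
    try:
        result = subprocess.run(
            ["docker", "inspect", container_name, "--format",
             '{{index .Config.Labels "com.docker.compose.project"}}'],
            capture_output=True, text=True, timeout=10,
        )
        name = result.stdout.strip()
        if result.returncode == 0 and name:
            return name
    except Exception:
        pass  # docker unavailable or inspect timed out
    return os.path.basename(source_dir.rstrip("/"))
```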

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:51:25 +01:00
Sascha Lustenberger | techlan gmbh
c70dc33f67 fix(caddy): route relay WebSocket traffic to relay container
Add /relay* location block to Caddyfile template so that NetBird relay
WebSocket connections (rels://) are correctly forwarded to the relay
container instead of falling through to the dashboard handler.

Without this fix, all relay WebSocket connections silently hit the
dashboard container, causing STUN/relay connectivity failures for all
deployed NetBird instances.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 10:31:08 +01:00
Sascha Lustenberger | techlan gmbh
fb264bf7c6 Fix: Add grpc_pass to NPM advanced_config for Management and Signal endpoints 2026-02-23 14:49:43 +01:00
Sascha Lustenberger | techlan gmbh
f3304b90c8 Fix: correctly detect update when current version is unknown 2026-02-23 13:11:04 +01:00
Sascha Lustenberger | techlan gmbh
cda916f2af Fix: display dynamic version on login and use subdomain for customer directories instead of kunde{id} 2026-02-23 12:58:39 +01:00
c3ab7a5a67 fix(api): correct extraction of commit date from gitea branches api 2026-02-22 22:57:07 +01:00
b955e4f464 feat(ui): settings menu restructure, git branch dropdown, and repo cleanup 2026-02-22 21:29:30 +01:00
831564762b feat(ui): clean vertical settings menu and improved version formatting 2026-02-22 16:07:08 +01:00
3f177a6993 fix(updater): add --rm to helper container to remove it after use 2026-02-22 15:58:18 +01:00
28 changed files with 1919 additions and 715 deletions

.gitignore (vendored) — 15 additions

@@ -69,5 +69,20 @@ PROJECT_SUMMARY.md
QUICKSTART.md
VS_CODE_SETUP.md
# Gemini / Antigravity
.gemini/
# Windows artifacts
nul
# Debug / temp files (generated during development & testing)
out.txt
containers.txt
helper.txt
logs.txt
port.txt
env.txt
network.txt
update_helper.txt
state.txt
hostpath.txt


@@ -91,7 +91,7 @@ netbird-msp-appliance/
1. Validate inputs (subdomain unique, email valid)
2. Allocate ports (Management internal, Relay UDP public)
3. Generate configs from Jinja2 templates
4. Create instance directory: `/opt/netbird-instances/kunde{id}/`
4. Create instance directory: `/opt/netbird-instances/{subdomain}/`
5. Write `docker-compose.yml`, `management.json`, `relay.env`
6. Start Docker containers via Docker SDK
7. Wait for health checks (max 60s)
@@ -113,7 +113,7 @@ No manual config file editing required!
### 4. Nginx Proxy Manager Integration
**Per customer, create proxy host:**
- Domain: `{subdomain}.{base_domain}`
- Forward to: `netbird-kunde{id}-dashboard:80`
- Forward to: `netbird-{subdomain}-dashboard:80`
- SSL: Automatic Let's Encrypt
- Advanced config: Route `/api/*` to management, `/signalexchange.*` to signal, `/relay` to relay
@@ -272,7 +272,7 @@ networks:
services:
netbird-management:
image: {{ netbird_management_image }}
container_name: netbird-kunde{{ customer_id }}-management
container_name: netbird-{{ subdomain }}-management
restart: unless-stopped
networks:
- npm-network
@@ -285,7 +285,7 @@ services:
netbird-signal:
image: {{ netbird_signal_image }}
container_name: netbird-kunde{{ customer_id }}-signal
container_name: netbird-{{ subdomain }}-signal
restart: unless-stopped
networks:
- npm-network
@@ -294,7 +294,7 @@ services:
netbird-relay:
image: {{ netbird_relay_image }}
container_name: netbird-kunde{{ customer_id }}-relay
container_name: netbird-{{ subdomain }}-relay
restart: unless-stopped
networks:
- npm-network
@@ -311,7 +311,7 @@ services:
netbird-dashboard:
image: {{ netbird_dashboard_image }}
container_name: netbird-kunde{{ customer_id }}-dashboard
container_name: netbird-{{ subdomain }}-dashboard
restart: unless-stopped
networks:
- npm-network


@@ -95,8 +95,8 @@ A management solution for running isolated NetBird instances for your MSP busine
| | Caddy | | | | Caddy | |
| +------------+ | | +------------+ |
+------------------+ +------------------+
kunde1.domain.de kundeN.domain.de
UDP 3478 UDP 3478+N-1
customer-a.domain.de customer-x.domain.de
UDP 3478 UDP 3478+N-1
```
### Components per Customer Instance (5 containers):
@@ -140,9 +140,9 @@ Example for 3 customers:
| Customer | Dashboard (TCP) | Relay (UDP) |
|----------|----------------|-------------|
| Kunde 1 | 9001 | 3478 |
| Kunde 2 | 9002 | 3479 |
| Kunde 3 | 9003 | 3480 |
| Customer-A | 9001 | 3478 |
| Customer-C | 9002 | 3479 |
| Customer-X | 9003 | 3480 |
**Your firewall must allow both the TCP dashboard ports and the UDP relay ports!**


@@ -7,8 +7,8 @@ from sqlalchemy.orm import Session
from app.database import SessionLocal, get_db
from app.dependencies import get_current_user
from app.models import Customer, Deployment, User
from app.services import docker_service, netbird_service
from app.models import Customer, Deployment, SystemConfig, User
from app.services import docker_service, image_service, netbird_service
from app.utils.security import decrypt_value
logger = logging.getLogger(__name__)
@@ -207,6 +207,50 @@ async def get_customer_credentials(
}
@router.post("/{customer_id}/update-images")
async def update_customer_images(
customer_id: int,
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
):
"""Recreate a customer's containers to pick up newly pulled images.
Images must already be pulled via POST /monitoring/images/pull.
Bind-mounted data is preserved — no data loss.
"""
if current_user.role != "admin":
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Admin only.")
customer = _require_customer(db, customer_id)
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
if not deployment:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="No deployment found for this customer.",
)
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(
status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured."
)
instance_dir = f"{config.data_dir}/{customer.subdomain}"
result = await image_service.update_customer_containers(instance_dir, deployment.container_prefix)
if not result["success"]:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.get("error", "Failed to update containers."),
)
logger.info(
"Containers updated for customer '%s' (prefix: %s) by '%s'.",
customer.name, deployment.container_prefix, current_user.username,
)
return {"message": f"Containers updated for '{customer.name}'."}
def _require_customer(db: Session, customer_id: int) -> Customer:
"""Helper to fetch a customer or raise 404.


@@ -5,13 +5,13 @@ import platform
from typing import Any
import psutil
from fastapi import APIRouter, Depends
from fastapi import APIRouter, BackgroundTasks, Depends, HTTPException, status
from sqlalchemy.orm import Session
from app.database import get_db
from app.database import SessionLocal, get_db
from app.dependencies import get_current_user
from app.models import Customer, Deployment, User
from app.services import docker_service
from app.models import Customer, Deployment, SystemConfig, User
from app.services import docker_service, image_service
logger = logging.getLogger(__name__)
router = APIRouter()
@@ -115,3 +115,137 @@ async def host_resources(
"percent": disk.percent,
},
}
@router.get("/images/check")
async def check_image_updates(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
) -> dict[str, Any]:
"""Check all configured NetBird images for available updates on Docker Hub.
Compares local image digests against Docker Hub — no image is pulled.
Returns:
images: dict mapping image name to update status
any_update_available: bool
customer_status: list of per-customer container image status
"""
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured.")
hub_status = await image_service.check_all_images(config)
# Per-customer local check (no network)
deployments = db.query(Deployment).all()
customer_status = []
for dep in deployments:
customer = dep.customer
cs = image_service.get_customer_container_image_status(dep.container_prefix, config)
customer_status.append({
"customer_id": customer.id,
"customer_name": customer.name,
"subdomain": customer.subdomain,
"container_prefix": dep.container_prefix,
"needs_update": cs["needs_update"],
"services": cs["services"],
})
return {**hub_status, "customer_status": customer_status}
@router.post("/images/pull")
async def pull_all_netbird_images(
background_tasks: BackgroundTasks,
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
) -> dict[str, Any]:
"""Pull all configured NetBird images from Docker Hub.
Runs in the background — returns immediately. After pulling, re-check
customer status via GET /images/check to see which customers need updating.
"""
if current_user.role != "admin":
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Admin only.")
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured.")
# Snapshot image list before background task starts
images = [
config.netbird_management_image,
config.netbird_signal_image,
config.netbird_relay_image,
config.netbird_dashboard_image,
]
async def _pull_bg() -> None:
bg_db = SessionLocal()
try:
cfg = bg_db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if cfg:
await image_service.pull_all_images(cfg)
except Exception:
logger.exception("Background image pull failed")
finally:
bg_db.close()
background_tasks.add_task(_pull_bg)
return {"message": "Image pull started in background.", "images": images}
@router.post("/customers/update-all")
async def update_all_customers(
background_tasks: BackgroundTasks,
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
) -> dict[str, Any]:
"""Recreate containers for all customers that have outdated images.
Only customers where at least one container runs an outdated image are updated.
Images must already be pulled. Data is preserved (bind mounts).
"""
if current_user.role != "admin":
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Admin only.")
config = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not config:
raise HTTPException(status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail="System not configured.")
# Collect customers that need updating
deployments = db.query(Deployment).all()
to_update = []
for dep in deployments:
cs = image_service.get_customer_container_image_status(dep.container_prefix, config)
if cs["needs_update"]:
customer = dep.customer
to_update.append({
"instance_dir": f"{config.data_dir}/{customer.subdomain}",
"project_name": dep.container_prefix,
"customer_name": customer.name,
})
if not to_update:
return {"message": "All customers are already up to date.", "updated": 0}
async def _update_all_bg() -> None:
for entry in to_update:
try:
await image_service.update_customer_containers(
entry["instance_dir"], entry["project_name"]
)
logger.info("Updated containers for %s", entry["project_name"])
except Exception:
logger.exception("Failed to update %s", entry["project_name"])
background_tasks.add_task(_update_all_bg)
names = [e["customer_name"] for e in to_update]
return {
"message": f"Updating {len(to_update)} customer(s) in background.",
"customers": names,
}


@@ -237,6 +237,10 @@ async def test_ldap(
@router.get("/branding")
async def get_branding(db: Session = Depends(get_db)):
"""Public endpoint — returns branding info for the login page (no auth required)."""
current_version = update_service.get_current_version().get("tag", "alpha-1.1")
if current_version == "unknown":
current_version = "alpha-1.1"
row = db.query(SystemConfig).filter(SystemConfig.id == 1).first()
if not row:
return {
@@ -244,12 +248,14 @@ async def get_branding(db: Session = Depends(get_db)):
"branding_subtitle": "Multi-Tenant Management Platform",
"branding_logo_path": None,
"default_language": "en",
"version": current_version
}
return {
"branding_name": row.branding_name or "NetBird MSP Appliance",
"branding_subtitle": row.branding_subtitle or "Multi-Tenant Management Platform",
"branding_logo_path": row.branding_logo_path,
"default_language": row.default_language or "en",
"version": current_version
}
@@ -334,6 +340,19 @@ async def get_version(
return result
@router.get("/branches")
async def get_branches(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
):
"""Return a list of available branches from the configured git remote."""
config = get_system_config(db)
if not config or not config.git_repo_url:
return []
branches = await update_service.get_remote_branches(config)
return branches
@router.post("/update")
async def trigger_update(
current_user: User = Depends(get_current_user),


@@ -70,12 +70,31 @@ async def update_user(
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
):
"""Update an existing user (email, is_active, role)."""
"""Update an existing user (email, is_active, role). Admin only."""
if current_user.role != "admin":
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Only admins can update users.",
)
user = db.query(User).filter(User.id == user_id).first()
if not user:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="User not found.")
update_data = payload.model_dump(exclude_none=True)
if "role" in update_data:
if update_data["role"] not in ("admin", "viewer"):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Role must be 'admin' or 'viewer'.",
)
if user_id == current_user.id:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="You cannot change your own role.",
)
for field, value in update_data.items():
if hasattr(user, field):
setattr(user, field, value)


@@ -0,0 +1,251 @@
"""NetBird Docker image update service.
Compares locally pulled images against Docker Hub to detect available updates.
Provides pull and per-customer container recreation functions without data loss.
"""
import asyncio
import json
import logging
import os
import subprocess
from typing import Any
import httpx
logger = logging.getLogger(__name__)
# Services that make up a customer's NetBird deployment
NETBIRD_SERVICES = ["management", "signal", "relay", "dashboard"]
async def _run_cmd(cmd: list[str], timeout: int = 300) -> subprocess.CompletedProcess:
"""Run a subprocess command without blocking the event loop."""
loop = asyncio.get_running_loop()
return await loop.run_in_executor(
None,
lambda: subprocess.run(cmd, capture_output=True, text=True, timeout=timeout),
)
def _parse_image_name(image: str) -> tuple[str, str]:
"""Split 'repo/name:tag' into ('repo/name', 'tag'). Defaults tag to 'latest'."""
if ":" in image:
name, tag = image.rsplit(":", 1)
else:
name, tag = image, "latest"
return name, tag
async def get_hub_digest(image: str) -> str | None:
"""Fetch the current digest from Docker Hub for an image:tag.
Uses the Docker Hub REST API — does NOT pull the image.
Returns the digest string (sha256:...) or None on failure.
"""
name, tag = _parse_image_name(image)
url = f"https://hub.docker.com/v2/repositories/{name}/tags/{tag}/"
try:
async with httpx.AsyncClient(timeout=15) as client:
resp = await client.get(url)
if resp.status_code != 200:
logger.warning("Docker Hub API returned %d for %s", resp.status_code, image)
return None
data = resp.json()
images = data.get("images", [])
# Prefer linux/amd64 digest
for img in images:
if img.get("os") == "linux" and img.get("architecture") in ("amd64", ""):
d = img.get("digest")
if d:
return d
# Fallback: first available digest
if images:
return images[0].get("digest")
return None
except Exception as exc:
logger.warning("Failed to fetch Docker Hub digest for %s: %s", image, exc)
return None
def get_local_digest(image: str) -> str | None:
"""Get the RepoDigest for a locally pulled image.
Returns the digest (sha256:...) or None if image not found locally.
"""
try:
result = subprocess.run(
["docker", "image", "inspect", image, "--format", "{{json .RepoDigests}}"],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
return None
digests = json.loads(result.stdout.strip())
if not digests:
return None
# RepoDigests look like "netbirdio/management@sha256:abc..."
for d in digests:
if "@" in d:
return d.split("@", 1)[1]
return None
except Exception as exc:
logger.warning("Failed to inspect local image %s: %s", image, exc)
return None
def get_container_image_id(container_name: str) -> str | None:
"""Get the full image ID (sha256:...) of a running or stopped container."""
try:
result = subprocess.run(
["docker", "inspect", container_name, "--format", "{{.Image}}"],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
return None
return result.stdout.strip() or None
except Exception:
return None
def get_local_image_id(image: str) -> str | None:
"""Get the full image ID (sha256:...) of a locally stored image."""
try:
result = subprocess.run(
["docker", "image", "inspect", image, "--format", "{{.Id}}"],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
return None
return result.stdout.strip() or None
except Exception:
return None
async def check_image_status(image: str) -> dict[str, Any]:
"""Check whether a configured image has an update available on Docker Hub.
Returns a dict with:
image: the image name:tag
local_digest: digest of locally cached image (or None)
hub_digest: latest digest from Docker Hub (or None)
update_available: True if hub_digest differs from local_digest
"""
hub_digest, local_digest = await asyncio.gather(
get_hub_digest(image),
asyncio.get_running_loop().run_in_executor(None, get_local_digest, image),
)
if hub_digest and local_digest:
update_available = hub_digest != local_digest
elif hub_digest and not local_digest:
# Image not pulled locally yet — needs pull
update_available = True
else:
update_available = False
return {
"image": image,
"local_digest": local_digest,
"hub_digest": hub_digest,
"update_available": update_available,
}
async def check_all_images(config) -> dict[str, Any]:
"""Check all 4 configured NetBird images for available updates.
Returns a dict with:
images: dict mapping image name -> status dict
any_update_available: bool
"""
images = [
config.netbird_management_image,
config.netbird_signal_image,
config.netbird_relay_image,
config.netbird_dashboard_image,
]
results = await asyncio.gather(*[check_image_status(img) for img in images])
by_image = {r["image"]: r for r in results}
any_update = any(r["update_available"] for r in results)
return {"images": by_image, "any_update_available": any_update}
async def pull_image(image: str) -> dict[str, Any]:
"""Pull a Docker image. Returns success/error dict."""
logger.info("Pulling image: %s", image)
result = await _run_cmd(["docker", "pull", image], timeout=600)
if result.returncode != 0:
logger.error("Failed to pull %s: %s", image, result.stderr)
return {"image": image, "success": False, "error": result.stderr[:500]}
return {"image": image, "success": True}
async def pull_all_images(config) -> dict[str, Any]:
"""Pull all 4 configured NetBird images. Returns results per image."""
images = [
config.netbird_management_image,
config.netbird_signal_image,
config.netbird_relay_image,
config.netbird_dashboard_image,
]
results = await asyncio.gather(*[pull_image(img) for img in images])
return {
"results": {r["image"]: r for r in results},
"all_success": all(r["success"] for r in results),
}
def get_customer_container_image_status(container_prefix: str, config) -> dict[str, Any]:
"""Check which service containers are running outdated local images.
Compares each running container's image ID against the locally stored image ID
for the configured image tag. This is a local check — no network call.
Returns:
services: dict mapping service name to status info
needs_update: True if any service has a different image ID than locally stored
"""
service_images = {
"management": config.netbird_management_image,
"signal": config.netbird_signal_image,
"relay": config.netbird_relay_image,
"dashboard": config.netbird_dashboard_image,
}
services: dict[str, Any] = {}
for svc, image in service_images.items():
container_name = f"{container_prefix}-{svc}"
container_id = get_container_image_id(container_name)
local_id = get_local_image_id(image)
if container_id and local_id:
up_to_date = container_id == local_id
else:
up_to_date = None # container not running or image not pulled
services[svc] = {
"container": container_name,
"image": image,
"up_to_date": up_to_date,
}
needs_update = any(s["up_to_date"] is False for s in services.values())
return {"services": services, "needs_update": needs_update}
async def update_customer_containers(instance_dir: str, project_name: str) -> dict[str, Any]:
"""Recreate customer containers to pick up newly pulled images.
Runs `docker compose up -d` in the customer's instance directory.
Images must already be pulled. Bind-mounted data is preserved — no data loss.
"""
compose_file = os.path.join(instance_dir, "docker-compose.yml")
if not os.path.isfile(compose_file):
return {"success": False, "error": f"docker-compose.yml not found at {compose_file}"}
cmd = [
"docker", "compose",
"-f", compose_file,
"-p", project_name,
"up", "-d", "--remove-orphans",
]
logger.info("Updating containers for %s", project_name)
result = await _run_cmd(cmd, timeout=300)
if result.returncode != 0:
return {"success": False, "error": result.stderr[:1000]}
return {"success": True}


@@ -118,7 +118,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
allocated_port = None
instance_dir = None
container_prefix = f"netbird-kunde{customer_id}"
container_prefix = f"netbird-{customer.subdomain}"
local_mode = _is_local_domain(config.base_domain)
existing_deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
@@ -135,7 +135,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Step 2: Generate secrets (reuse existing key if instance data exists)
relay_secret = generate_relay_secret()
datastore_key = _get_existing_datastore_key(
os.path.join(config.data_dir, f"kunde{customer_id}", "management.json")
os.path.join(config.data_dir, customer.subdomain, "management.json")
)
if datastore_key:
_log_action(db, customer_id, "deploy", "info",
@@ -159,7 +159,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
relay_ws_protocol = "rels"
# Step 4: Create instance directory
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
os.makedirs(instance_dir, exist_ok=True)
os.makedirs(os.path.join(instance_dir, "data", "management"), exist_ok=True)
os.makedirs(os.path.join(instance_dir, "data", "signal"), exist_ok=True)
@@ -225,7 +225,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Step 8: Auto-create admin user via NetBird setup API
admin_email = customer.email
admin_password = secrets.token_urlsafe(16)
management_container = f"netbird-kunde{customer_id}-management"
management_container = f"netbird-{customer.subdomain}-management"
setup_api_url = f"http://{management_container}:80/api/setup"
setup_payload = json.dumps({
"name": customer.name,
@@ -264,7 +264,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
_log_action(db, customer_id, "deploy", "info",
"Auto-setup failed — admin must complete setup manually.")
# Step 9: Create NPM proxy host + stream (production only)
# Step 9: Create NPM proxy host (production only)
npm_proxy_id = None
npm_stream_id = None
if not local_mode:
@@ -294,27 +294,6 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
f"(SSL: {'OK' if ssl_ok else 'FAILED — check DNS and port 80 accessibility'})",
)
# Create NPM UDP stream for relay STUN port
stream_result = await npm_service.create_stream(
api_url=config.npm_api_url,
npm_email=config.npm_api_email,
npm_password=config.npm_api_password,
incoming_port=allocated_port,
forwarding_host=forward_host,
forwarding_port=allocated_port,
)
npm_stream_id = stream_result.get("stream_id")
if stream_result.get("error"):
_log_action(
db, customer_id, "deploy", "error",
f"NPM stream creation failed: {stream_result['error']}",
)
else:
_log_action(
db, customer_id, "deploy", "info",
f"NPM UDP stream created: port {allocated_port} -> {forward_host}:{allocated_port}",
)
# Note: Keep HTTPS configs even if SSL cert creation failed.
# SSL can be set up manually in NPM later. Switching to HTTP
# would break the dashboard when the user accesses via HTTPS.
@@ -387,7 +366,7 @@ async def deploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
# Rollback: stop containers if they were started
try:
await docker_service.compose_down(
instance_dir or os.path.join(config.data_dir, f"kunde{customer_id}"),
instance_dir or os.path.join(config.data_dir, customer.subdomain),
container_prefix,
remove_volumes=True,
)
@@ -423,7 +402,7 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
config = get_system_config(db)
if deployment and config:
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
# Stop and remove containers
try:
@@ -443,17 +422,6 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
except Exception as exc:
_log_action(db, customer_id, "undeploy", "error", f"NPM removal error: {exc}")
# Remove NPM stream
if deployment.npm_stream_id and config.npm_api_email:
try:
await npm_service.delete_stream(
config.npm_api_url, config.npm_api_email, config.npm_api_password,
deployment.npm_stream_id,
)
_log_action(db, customer_id, "undeploy", "info", "NPM stream removed.")
except Exception as exc:
_log_action(db, customer_id, "undeploy", "error", f"NPM stream removal error: {exc}")
# Remove Windows DNS A-record (non-fatal)
if config and config.dns_enabled and config.dns_server and config.dns_zone:
try:
@@ -484,17 +452,16 @@ async def undeploy_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def stop_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Stop containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_stop(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "stopped"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "inactive"
customer.status = "inactive"
db.commit()
_log_action(db, customer_id, "stop", "success", "Containers stopped.")
else:
@@ -505,17 +472,16 @@ async def stop_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def start_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Start containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_start(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "running"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "active"
customer.status = "active"
db.commit()
_log_action(db, customer_id, "start", "success", "Containers started.")
else:
@@ -526,17 +492,16 @@ async def start_customer(db: Session, customer_id: int) -> dict[str, Any]:
async def restart_customer(db: Session, customer_id: int) -> dict[str, Any]:
"""Restart containers for a customer."""
deployment = db.query(Deployment).filter(Deployment.customer_id == customer_id).first()
customer = db.query(Customer).filter(Customer.id == customer_id).first()
config = get_system_config(db)
if not deployment or not config:
return {"success": False, "error": "Deployment or config not found."}
if not deployment or not config or not customer:
return {"success": False, "error": "Deployment, customer or config not found."}
instance_dir = os.path.join(config.data_dir, f"kunde{customer_id}")
instance_dir = os.path.join(config.data_dir, customer.subdomain)
ok = await docker_service.compose_restart(instance_dir, deployment.container_prefix)
if ok:
deployment.deployment_status = "running"
customer = db.query(Customer).filter(Customer.id == customer_id).first()
if customer:
customer.status = "active"
customer.status = "active"
db.commit()
_log_action(db, customer_id, "restart", "success", "Containers restarted.")
else:


@@ -259,7 +259,16 @@ async def create_proxy_host(
"block_exploits": True,
"allow_websocket_upgrade": True,
"access_list_id": 0,
"advanced_config": "",
"advanced_config": (
"location ^~ /management.ManagementService/ {\n"
f" grpc_pass grpc://{forward_host}:{forward_port};\n"
" grpc_set_header Host $host;\n"
"}\n"
"location ^~ /signalexchange.SignalExchange/ {\n"
f" grpc_pass grpc://{forward_host}:{forward_port};\n"
" grpc_set_header Host $host;\n"
"}\n"
),
"meta": {
"letsencrypt_agree": True,
"letsencrypt_email": admin_email,

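The gRPC passthrough added above can be sketched as a small renderer. This is a hypothetical helper, not part of the appliance code; nginx needs explicit `location` blocks with `grpc_pass` for NetBird's management and signal services, and the real host/port come from `create_proxy_host`'s `forward_host`/`forward_port`:

```python
# Hypothetical helper rendering the same advanced_config snippet as the diff
# above: one gRPC location block per NetBird service.
def render_grpc_advanced_config(forward_host: str, forward_port: int) -> str:
    paths = [
        "/management.ManagementService/",
        "/signalexchange.SignalExchange/",
    ]
    blocks = []
    for path in paths:
        blocks.append(
            f"location ^~ {path} {{\n"
            f"    grpc_pass grpc://{forward_host}:{forward_port};\n"
            "    grpc_set_header Host $host;\n"
            "}\n"
        )
    return "".join(blocks)

print(render_grpc_advanced_config("netbird-kunde1-caddy", 80))
```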

@@ -5,6 +5,7 @@ import logging
import os
import shutil
import subprocess
import httpx
from datetime import datetime
from pathlib import Path
from typing import Any
@@ -14,10 +15,45 @@ import httpx
SOURCE_DIR = "/app-source"
VERSION_FILE = "/app/version.json"
BACKUP_DIR = "/app/backups"
CONTAINER_NAME = "netbird-msp-appliance"
SERVICE_NAME = "netbird-msp-appliance"
logger = logging.getLogger(__name__)
def _get_compose_project_name() -> str:
"""Detect the compose project name from the running container's labels.
Docker Compose sets the label ``com.docker.compose.project`` on every
managed container. Reading it at runtime avoids hard-coding a project
name that may differ from the directory name used at deploy time.
Returns:
The compose project name (e.g. ``netbird-msp``).
"""
try:
result = subprocess.run(
[
"docker", "inspect", CONTAINER_NAME,
"--format",
'{{index .Config.Labels "com.docker.compose.project"}}',
],
capture_output=True, text=True, timeout=10,
)
if result.returncode == 0:
project = result.stdout.strip()
if project:
logger.info("Detected compose project name: %s", project)
return project
except Exception as exc:
logger.warning("Could not detect compose project name: %s", exc)
# Fallback: derive from SOURCE_DIR basename (mirrors Compose default behaviour)
fallback = Path(SOURCE_DIR).name
logger.warning("Using fallback compose project name: %s", fallback)
return fallback
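The label lookup `_get_compose_project_name` performs is equivalent to parsing `docker inspect` output. A minimal sketch with a hand-written, hypothetical inspect payload (the project name shown is illustrative):

```python
import json

# Trimmed, hand-written sample of `docker inspect <container>` output; Docker
# Compose sets com.docker.compose.project on every container it manages.
inspect_output = '[{"Config": {"Labels": {"com.docker.compose.project": "netbird-msp"}}}]'

containers = json.loads(inspect_output)
labels = containers[0].get("Config", {}).get("Labels", {}) or {}
project = labels.get("com.docker.compose.project", "")
print(project or "unknown")  # netbird-msp
```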
def get_current_version() -> dict:
"""Read the version baked at build time from /app/version.json."""
try:
@@ -103,15 +139,19 @@ async def check_for_updates(config: Any) -> dict:
"tag": latest_tag,
"commit": short_sha,
"commit_full": full_sha,
"message": latest_commit.get("commit", {}).get("message", "").split("\n")[0],
"date": latest_commit.get("commit", {}).get("committer", {}).get("date", ""),
"message": latest_commit.get("commit", {}).get("message", "").split("\n")[0] if latest_commit.get("commit") else "",
"date": latest_commit.get("timestamp", ""),
"branch": branch,
}
# Determine if update is needed: prefer tag comparison, fallback to commit
current_tag = current.get("tag", "unknown")
current_sha = current.get("commit", "unknown")
if current_tag != "unknown" and latest_tag != "unknown":
# If we don't know our current version but the remote has one, we should update
if current_tag == "unknown" and current_sha == "unknown":
needs_update = latest_tag != "unknown" or short_sha != "unknown"
elif current_tag != "unknown" and latest_tag != "unknown":
needs_update = current_tag != latest_tag
else:
needs_update = (
@@ -130,6 +170,42 @@ async def check_for_updates(config: Any) -> dict:
}
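The update decision above condenses into a pure function. This is a sketch: the `else` branch is truncated in the diff, so the SHA comparison below is an assumption, matching the "fallback to commit" comment:

```python
def needs_update(current_tag: str, current_sha: str,
                 latest_tag: str, latest_sha: str) -> bool:
    # Unknown local version: update whenever the remote reports anything.
    if current_tag == "unknown" and current_sha == "unknown":
        return latest_tag != "unknown" or latest_sha != "unknown"
    # Prefer tag comparison when both sides have one.
    if current_tag != "unknown" and latest_tag != "unknown":
        return current_tag != latest_tag
    # Assumed fallback (truncated in the diff): compare short commit SHAs.
    return current_sha != latest_sha

print(needs_update("unknown", "unknown", "alpha-1.7", "c40b7d3"))   # True
print(needs_update("alpha-1.7", "c40b7d3", "alpha-1.7", "c40b7d3"))  # False
```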
async def get_remote_branches(config: Any) -> list[str]:
"""Query the Gitea API for available branches on the configured repository.
Returns a list of branch names (e.g., ['main', 'unstable', 'development']).
If the repository URL is not configured or an error occurs, returns an empty list.
"""
if not config.git_repo_url:
return []
repo_url = config.git_repo_url.rstrip("/")
parts = repo_url.split("/")
if len(parts) < 5:
return []
base_url = "/".join(parts[:-2])
owner = parts[-2]
repo = parts[-1]
branches_api = f"{base_url}/api/v1/repos/{owner}/{repo}/branches?limit=100"
headers = {}
if config.git_token:
headers["Authorization"] = f"token {config.git_token}"
try:
async with httpx.AsyncClient(timeout=10) as client:
resp = await client.get(branches_api, headers=headers)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, list):
return [branch.get("name") for branch in data if "name" in branch]
except Exception as exc:
logger.error("Error fetching branches: %s", exc)
return []
def backup_database(db_path: str) -> str:
"""Create a timestamped backup of the SQLite database.
@@ -176,6 +252,16 @@ def trigger_update(config: Any, db_path: str) -> dict:
pull_cmd = ["git", "-C", SOURCE_DIR, "pull", "origin", branch]
# 3. Git pull (synchronous — must complete before rebuild)
# Mark SOURCE_DIR as a git safe.directory: the .git dir may be owned by the
# host user after manual operations, and git rejects repos owned by another user.
try:
subprocess.run(
["git", "config", "--global", "--add", "safe.directory", SOURCE_DIR],
capture_output=True, timeout=10,
)
except Exception:
pass
try:
result = subprocess.run(
pull_cmd,
@@ -199,6 +285,15 @@ def trigger_update(config: Any, db_path: str) -> dict:
logger.info("git pull succeeded: %s", result.stdout.strip()[:200])
# Fetch tags separately — git pull does not always pull all tags
try:
subprocess.run(
["git", "-C", SOURCE_DIR, "fetch", "--tags"],
capture_output=True, text=True, timeout=30,
)
except Exception as exc:
logger.warning("git fetch --tags failed (non-fatal): %s", exc)
# 4. Read version info from the freshly-pulled source
build_env = os.environ.copy()
try:
@@ -237,13 +332,20 @@ def trigger_update(config: Any, db_path: str) -> dict:
# ensure the compose-up runs detached on the Docker host via a wrapper.
log_path = Path(BACKUP_DIR) / "update_rebuild.log"
# Detect compose project name at runtime — avoids hard-coding a name that
# may differ from the directory used at deploy time.
project_name = _get_compose_project_name()
# Image name follows Docker Compose convention: {project}-{service}
service_image = f"{project_name}-{SERVICE_NAME}:latest"
logger.info("Using project=%s image=%s", project_name, service_image)
# Phase A — build the new image (does NOT stop anything)
build_cmd = [
"docker", "compose",
"-p", "netbirdmsp-appliance",
"-p", project_name,
"-f", f"{SOURCE_DIR}/docker-compose.yml",
"build", "--no-cache",
"netbird-msp-appliance",
SERVICE_NAME,
]
logger.info("Phase A: building new image …")
try:
@@ -295,22 +397,19 @@ def trigger_update(config: Any, db_path: str) -> dict:
val = build_env.get(key, "unknown")
env_flags.extend(["-e", f"{key}={val}"])
# Use the same image we're already running (it has docker CLI + compose plugin)
own_image = "netbirdmsp-appliance-netbird-msp-appliance:latest"
helper_cmd = [
"docker", "run", "-d", "--privileged",
"docker", "run", "--rm", "-d", "--privileged",
"--name", "msp-updater",
"-v", "/var/run/docker.sock:/var/run/docker.sock:z",
"-v", f"{host_source_dir}:{host_source_dir}:ro,z",
*env_flags,
own_image,
service_image, # freshly built image — has docker CLI + compose plugin
"sh", "-c",
(
"sleep 3 && "
"docker compose -p netbirdmsp-appliance "
f"docker compose -p {project_name} "
f"-f {host_source_dir}/docker-compose.yml "
"up --force-recreate --no-deps -d netbird-msp-appliance"
f"up --force-recreate --no-deps -d {SERVICE_NAME}"
),
]
try:

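Phase B's one-shot updater boils down to a single shell command; a sketch of how that string is assembled (hypothetical helper and example arguments, mirroring the diff above):

```python
def build_updater_script(project: str, host_source_dir: str, service: str) -> str:
    # The short sleep lets the API answer the update request before this
    # helper replaces the appliance; --no-deps avoids touching other services.
    return (
        "sleep 3 && "
        f"docker compose -p {project} "
        f"-f {host_source_dir}/docker-compose.yml "
        f"up --force-recreate --no-deps -d {service}"
    )

print(build_updater_script("netbird-msp", "/home/user/appliance",
                           "netbird-msp-appliance"))
```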

@@ -1,9 +0,0 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b25f16030139 netbirdmsp-appliance-netbird-msp-appliance:latest "sh -c 'sleep 3 && d…" 2 minutes ago Exited (1) 2 minutes ago msp-updater
c7acab75017f f4446ac34896 "uvicorn app.main:ap…" 11 minutes ago Up 11 minutes (healthy) 0.0.0.0:8000->8000/tcp, [::]:8000->8000/tcp netbird-msp-appliance
878efa979680 caddy:2-alpine "caddy run --config …" 3 hours ago Up 2 hours 443/tcp, 2019/tcp, 443/udp, 0.0.0.0:9001->80/tcp, [::]:9001->80/tcp netbird-kunde1-caddy
564c613f112a netbirdio/signal:latest "/go/bin/netbird-sig…" 3 hours ago Up 2 hours netbird-kunde1-signal
a98852970815 netbirdio/dashboard:latest "/usr/bin/supervisor…" 3 hours ago Up 2 hours 80/tcp, 443/tcp netbird-kunde1-dashboard
11e100e21d81 netbirdio/relay:latest "/go/bin/netbird-rel…" 3 hours ago Up 2 hours 0.0.0.0:3478->3478/udp, [::]:3478->3478/udp netbird-kunde1-relay
aeae96bf691e netbirdio/management:latest "/go/bin/netbird-mgm…" 3 hours ago Up 2 hours netbird-kunde1-management
9cdda4d58e36 tecnativa/docker-socket-proxy:latest "docker-entrypoint.s…" 3 days ago Up 2 hours 2375/tcp docker-socket-proxy


@@ -57,6 +57,7 @@ services:
- "${WEB_UI_PORT:-8000}:8000"
volumes:
- ./data:/app/data:z
- ./data/uploads:/app/static/uploads:z
- ./logs:/app/logs:z
- ./backups:/app/backups:z
- /var/run/docker.sock:/var/run/docker.sock:z


@@ -1 +0,0 @@
Error response from daemon: No such container: msp-updater


@@ -1,30 +0,0 @@
INFO: 172.18.0.1:34414 - "GET /lang/de.json HTTP/1.1" 304 Not Modified
INFO: 172.18.0.1:34414 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 172.18.0.1:34424 - "GET /api/settings/branding HTTP/1.1" 200 OK
INFO: 172.18.0.1:34424 - "GET /api/auth/azure/config HTTP/1.1" 200 OK
INFO: 172.18.0.1:34424 - "GET /api/auth/me HTTP/1.1" 200 OK
INFO: 172.18.0.1:34424 - "GET /api/monitoring/status HTTP/1.1" 200 OK
INFO: 172.18.0.1:34414 - "GET /api/customers?page=1&per_page=25 HTTP/1.1" 200 OK
INFO: 127.0.0.1:34422 - "GET /api/health HTTP/1.1" 200 OK
INFO: 172.18.0.1:34042 - "GET /api/settings/system HTTP/1.1" 200 OK
INFO: 172.18.0.1:34042 - "GET /api/auth/mfa/status HTTP/1.1" 200 OK
2026-02-22 14:40:01,292 [INFO] httpx: HTTP Request: GET https://git.0x26.ch/api/v1/repos/BurgerGames/NetBirdMSP-Appliance/branches/unstable "HTTP/1.1 200 OK"
2026-02-22 14:40:01,301 [INFO] httpx: HTTP Request: GET https://git.0x26.ch/api/v1/repos/BurgerGames/NetBirdMSP-Appliance/tags?limit=1 "HTTP/1.1 200 OK"
INFO: 172.18.0.1:49812 - "GET /api/settings/version HTTP/1.1" 200 OK
INFO: 127.0.0.1:54492 - "GET /api/health HTTP/1.1" 200 OK
INFO: 127.0.0.1:36052 - "GET /api/health HTTP/1.1" 200 OK
2026-02-22 14:40:57,656 [INFO] app.services.update_service: Database backed up to /app/backups/netbird_msp_20260222_144057.db
2026-02-22 14:40:57,971 [INFO] app.services.update_service: git pull succeeded: Already up to date.
2026-02-22 14:40:57,988 [INFO] app.services.update_service: Rebuilding with GIT_TAG=alpha-1.7 GIT_COMMIT=c40b7d3 GIT_BRANCH=unstable
2026-02-22 14:40:57,988 [INFO] app.services.update_service: Phase A: building new image …
2026-02-22 14:42:44,434 [INFO] app.services.update_service: Phase A complete — image built successfully.
2026-02-22 14:42:44,461 [INFO] app.services.update_service: Host source directory: /home/sascha/NetBirdMSP-Appliance
2026-02-22 14:42:44,973 [INFO] app.services.update_service: Phase B: updater container started — this container will restart in ~5s.
2026-02-22 14:42:44,973 [INFO] app.routers.settings: Update triggered by admin.
INFO: 172.18.0.1:46292 - "POST /api/settings/update HTTP/1.1" 200 OK
INFO: 127.0.0.1:54584 - "GET /api/health HTTP/1.1" 200 OK
INFO: 127.0.0.1:33600 - "GET /api/health HTTP/1.1" 200 OK
INFO: 127.0.0.1:35272 - "GET /api/health HTTP/1.1" 200 OK
INFO: 127.0.0.1:44226 - "GET /api/health HTTP/1.1" 200 OK
INFO: 127.0.0.1:48574 - "GET /api/health HTTP/1.1" 200 OK
INFO: 127.0.0.1:53686 - "GET /api/health HTTP/1.1" 200 OK


out.txt

@@ -1,10 +0,0 @@
[unstable c40b7d3] alpha-1.7: final test
remote:
remote: Create a new pull request for 'unstable':
remote: https://git.0x26.ch/BurgerGames/NetBirdMSP-Appliance/pulls/new/unstable
remote:
remote: .. Processing 2 references
remote: Processed 2 references in total
To https://git.0x26.ch/BurgerGames/NetBirdMSP-Appliance.git
525b056..c40b7d3 unstable -> unstable
* [new tag] alpha-1.7 -> alpha-1.7


@@ -1,2 +0,0 @@
8000/tcp -> 0.0.0.0:8000
8000/tcp -> [::]:8000


@@ -188,3 +188,36 @@ body.i18n-loading #app-page {
font-weight: 600;
background: rgba(0, 0, 0, 0.02);
}
/* ---------------------------------------------------------------------------
Dark mode overrides (Bootstrap 5.3 data-bs-theme="dark")
Bootstrap handles most components automatically; only custom elements need
explicit overrides here.
--------------------------------------------------------------------------- */
[data-bs-theme="dark"] .card {
border-color: rgba(255, 255, 255, 0.08);
}
[data-bs-theme="dark"] .card-header {
background: rgba(255, 255, 255, 0.04);
}
[data-bs-theme="dark"] .log-entry {
border-bottom-color: rgba(255, 255, 255, 0.07);
}
[data-bs-theme="dark"] .log-time {
color: #9ca3af;
}
[data-bs-theme="dark"] .table th {
color: #9ca3af;
}
[data-bs-theme="dark"] .login-container {
background: linear-gradient(135deg, #0d0d1a 0%, #0a1020 50%, #071525 100%);
}
[data-bs-theme="dark"] .stat-card {
background: var(--bs-card-bg);
}

static/favicon.svg Normal file

@@ -0,0 +1,21 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
<!-- Blue rounded background -->
<rect width="32" height="32" rx="7" fill="#2563EB"/>
<!-- Bird silhouette: top-down view, wings spread, forked tail -->
<path fill="white" d="
M 16 7
C 15 8 14 9.5 14 11
C 11 10.5 7 11 4 14
C 8 15 12 14.5 14 14.5
L 15 22
L 13 26
L 16 24
L 19 26
L 17 22
L 18 14.5
C 20 14.5 24 15 28 14
C 25 11 21 10.5 18 11
C 18 9.5 17 8 16 7 Z
"/>
</svg>


File diff suppressed because it is too large


@@ -12,7 +12,7 @@ let currentPage = 'dashboard';
let currentCustomerId = null;
let currentCustomerData = null;
let customersPage = 1;
let brandingData = { branding_name: 'NetBird MSP Appliance', branding_logo_path: null };
let brandingData = { branding_name: 'NetBird MSP Appliance', branding_logo_path: null, version: 'alpha-1.1' };
let azureConfig = { azure_enabled: false };
// ---------------------------------------------------------------------------
@@ -66,10 +66,35 @@ async function api(method, path, body = null) {
return data;
}
// ---------------------------------------------------------------------------
// Dark mode
// ---------------------------------------------------------------------------
function toggleDarkMode() {
const isDark = document.documentElement.getAttribute('data-bs-theme') === 'dark';
if (isDark) {
document.documentElement.removeAttribute('data-bs-theme');
localStorage.setItem('darkMode', 'light');
document.getElementById('darkmode-icon').className = 'bi bi-moon-fill';
} else {
document.documentElement.setAttribute('data-bs-theme', 'dark');
localStorage.setItem('darkMode', 'dark');
document.getElementById('darkmode-icon').className = 'bi bi-sun-fill';
}
}
function syncDarkmodeIcon() {
const icon = document.getElementById('darkmode-icon');
if (!icon) return;
icon.className = document.documentElement.getAttribute('data-bs-theme') === 'dark'
? 'bi bi-sun-fill'
: 'bi bi-moon-fill';
}
// ---------------------------------------------------------------------------
// Auth
// ---------------------------------------------------------------------------
async function initApp() {
syncDarkmodeIcon();
await initI18n();
await loadBranding();
await loadAzureLoginConfig();
@@ -127,12 +152,19 @@ function applyBranding() {
const name = brandingData.branding_name || 'NetBird MSP Appliance';
const subtitle = brandingData.branding_subtitle || t('login.subtitle');
const logoPath = brandingData.branding_logo_path;
const version = brandingData.version || 'alpha-1.1';
// Login page
document.getElementById('login-title').textContent = name;
const subtitleEl = document.getElementById('login-subtitle');
if (subtitleEl) subtitleEl.textContent = subtitle;
document.title = name;
// Update version string in login page
const versionEl = document.querySelector('#login-page .text-muted.small.mb-0');
if (versionEl) {
versionEl.innerHTML = `<i class="bi bi-tag me-1"></i>${version}`;
}
if (logoPath) {
document.getElementById('login-logo').innerHTML = `<img src="${logoPath}" alt="Logo" style="max-height:64px;max-width:200px;" class="mb-1">`;
} else {
@@ -366,7 +398,7 @@ function logout() {
'Content-Type': 'application/json',
'Authorization': `Bearer ${authToken}`,
},
}).catch(() => {});
}).catch(() => { });
}
authToken = null;
currentUser = null;
@@ -465,9 +497,9 @@ function renderCustomersTable(data) {
<div class="btn-group btn-group-sm">
<button class="btn btn-outline-primary" title="${t('common.view')}" onclick="viewCustomer(${c.id})"><i class="bi bi-eye"></i></button>
${c.deployment && c.deployment.deployment_status === 'running'
? `<button class="btn btn-outline-warning" title="${t('common.stop')}" onclick="customerAction(${c.id},'stop')"><i class="bi bi-stop-circle"></i></button>`
: `<button class="btn btn-outline-success" title="${t('common.start')}" onclick="customerAction(${c.id},'start')"><i class="bi bi-play-circle"></i></button>`
}
? `<button class="btn btn-outline-warning" title="${t('common.stop')}" onclick="customerAction(${c.id},'stop')"><i class="bi bi-stop-circle"></i></button>`
: `<button class="btn btn-outline-success" title="${t('common.start')}" onclick="customerAction(${c.id},'start')"><i class="bi bi-play-circle"></i></button>`
}
<button class="btn btn-outline-info" title="${t('common.restart')}" onclick="customerAction(${c.id},'restart')"><i class="bi bi-arrow-repeat"></i></button>
<button class="btn btn-outline-danger" title="${t('common.delete')}" onclick="showDeleteModal(${c.id},'${esc(c.name)}')"><i class="bi bi-trash"></i></button>
</div>
@@ -511,7 +543,7 @@ function showNewCustomerModal() {
// Update subdomain suffix
api('GET', '/settings/system').then(cfg => {
document.getElementById('cust-subdomain-suffix').textContent = `.${cfg.base_domain || 'domain.com'}`;
}).catch(() => {});
}).catch(() => { });
const modalEl = document.getElementById('customer-modal');
const modal = bootstrap.Modal.getOrCreateInstance(modalEl);
@@ -872,6 +904,9 @@ async function loadSettings() {
} catch (err) {
showSettingsAlert('danger', t('errors.failedToLoadSettings', { error: err.message }));
}
// Automatically fetch branches once the base config is populated
await loadGitBranches();
}
function updateLogoPreview(logoPath) {
@@ -1183,6 +1218,42 @@ async function testLdapConnection() {
}
}
async function loadGitBranches() {
const branchSelect = document.getElementById('cfg-git-branch');
const currentVal = branchSelect.value;
// Disable the branch dropdown while loading
branchSelect.disabled = true;
branchSelect.innerHTML = `<option value="${currentVal}">${currentVal} (Loading...)</option>`;
try {
const branches = await api('GET', '/settings/branches');
branchSelect.innerHTML = '';
// Always ensure the currently saved branch is an option
if (currentVal && !branches.includes(currentVal)) {
branches.unshift(currentVal);
}
if (branches.length === 0) {
branchSelect.innerHTML = `<option value="main">main</option>`;
} else {
branches.forEach(b => {
const opt = document.createElement('option');
opt.value = b;
opt.textContent = b;
if (b === currentVal) opt.selected = true;
branchSelect.appendChild(opt);
});
}
} catch (err) {
showSettingsAlert('warning', `Failed to load branches: ${err.message}`);
branchSelect.innerHTML = `<option value="${currentVal}">${currentVal}</option>`;
} finally {
branchSelect.disabled = false;
}
}
// ---------------------------------------------------------------------------
// Update / Version Management
// ---------------------------------------------------------------------------
@@ -1219,12 +1290,12 @@ async function loadVersionInfo() {
let html = `<div class="row g-3">
<div class="col-md-6">
<div class="border rounded p-3">
<div class="border rounded p-3 h-100">
<div class="text-muted small mb-1">${t('settings.currentVersion')}</div>
<div class="fw-bold fs-5">${esc(currentTag || currentCommit)}</div>
${currentTag ? `<div class="text-muted small font-monospace">${t('settings.commitHash')}: ${esc(currentCommit)}</div>` : ''}
<div class="text-muted small">${t('settings.branch')}: <strong>${esc(current.branch || 'unknown')}</strong></div>
<div class="text-muted small">${esc(current.date || '')}</div>
<div class="text-muted small mt-2"><i class="bi bi-clock me-1"></i>${formatDate(current.date)}</div>
</div>
</div>`;
@@ -1235,17 +1306,17 @@ async function loadVersionInfo() {
? `<span class="badge bg-warning text-dark ms-1">${t('settings.updateAvailable')}</span>`
: `<span class="badge bg-success ms-1">${t('settings.upToDate')}</span>`;
html += `<div class="col-md-6">
<div class="border rounded p-3 ${needsUpdate ? 'border-warning' : ''}">
<div class="border rounded p-3 h-100 ${needsUpdate ? 'border-warning' : ''}">
<div class="text-muted small mb-1">${t('settings.latestVersion')} ${badge}</div>
<div class="fw-bold fs-5">${esc(latestTag || latestCommit)}</div>
${latestTag ? `<div class="text-muted small font-monospace">${t('settings.commitHash')}: ${esc(latestCommit)}</div>` : ''}
<div class="text-muted small">${t('settings.branch')}: <strong>${esc(latest.branch || 'unknown')}</strong></div>
<div class="text-muted small">${esc(latest.message || '')}</div>
<div class="text-muted small">${esc(latest.date || '')}</div>
<div class="text-muted small mt-2"><i class="bi bi-clock me-1"></i>${formatDate(latest.date)}</div>
${latest.message ? `<div class="text-muted small mt-1 border-top pt-1 text-truncate" title="${esc(latest.message)}"><i class="bi bi-chat-text me-1"></i>${esc(latest.message)}</div>` : ''}
</div>
</div>`;
} else if (data.error) {
html += `<div class="col-md-6"><div class="alert alert-warning mb-0">${esc(data.error)}</div></div>`;
html += `<div class="col-md-6"><div class="alert alert-warning h-100 mb-0">${esc(data.error)}</div></div>`;
}
html += '</div>';
@@ -1297,19 +1368,24 @@ async function loadUsers() {
<td>${u.id}</td>
<td><strong>${esc(u.username)}</strong></td>
<td>${esc(u.email || '-')}</td>
<td><span class="badge bg-info">${esc(u.role || 'admin')}</span></td>
<td><span class="badge bg-${u.auth_provider === 'azure' ? 'primary' : 'secondary'}">${esc(u.auth_provider || 'local')}</span></td>
<td><span class="badge bg-${u.role === 'admin' ? 'success' : 'secondary'}">${esc(u.role || 'admin')}</span></td>
<td><span class="badge bg-${u.auth_provider === 'azure' ? 'primary' : u.auth_provider === 'ldap' ? 'info' : 'secondary'}">${esc(u.auth_provider || 'local')}</span></td>
<td>${langDisplay}</td>
<td>${mfaDisplay}</td>
<td>${u.is_active ? `<span class="badge bg-success">${t('common.active')}</span>` : `<span class="badge bg-danger">${t('common.disabled')}</span>`}</td>
<td>
<div class="btn-group btn-group-sm">
${u.is_active
? `<button class="btn btn-outline-warning" title="${t('common.disable')}" onclick="toggleUserActive(${u.id}, false)"><i class="bi bi-pause-circle"></i></button>`
: `<button class="btn btn-outline-success" title="${t('common.enable')}" onclick="toggleUserActive(${u.id}, true)"><i class="bi bi-play-circle"></i></button>`
}
? `<button class="btn btn-outline-warning" title="${t('common.disable')}" onclick="toggleUserActive(${u.id}, false)"><i class="bi bi-pause-circle"></i></button>`
: `<button class="btn btn-outline-success" title="${t('common.enable')}" onclick="toggleUserActive(${u.id}, true)"><i class="bi bi-play-circle"></i></button>`
}
${u.auth_provider === 'local' ? `<button class="btn btn-outline-info" title="${t('common.resetPassword')}" onclick="resetUserPassword(${u.id}, '${esc(u.username)}')"><i class="bi bi-key"></i></button>` : ''}
${u.totp_enabled ? `<button class="btn btn-outline-secondary" title="${t('mfa.resetMfa')}" onclick="resetUserMfa(${u.id}, '${esc(u.username)}')"><i class="bi bi-shield-x"></i></button>` : ''}
${currentUser && currentUser.role === 'admin' && u.id !== currentUser.id
? (u.role === 'admin'
? `<button class="btn btn-outline-secondary" title="${t('settings.makeViewer')}" onclick="toggleUserRole(${u.id}, 'admin')"><i class="bi bi-person-dash"></i></button>`
: `<button class="btn btn-outline-success" title="${t('settings.makeAdmin')}" onclick="toggleUserRole(${u.id}, 'viewer')"><i class="bi bi-person-check"></i></button>`)
: ''}
<button class="btn btn-outline-danger" title="${t('common.delete')}" onclick="deleteUser(${u.id}, '${esc(u.username)}')"><i class="bi bi-trash"></i></button>
</div>
</td>
@@ -1369,6 +1445,16 @@ async function toggleUserActive(id, active) {
}
}
async function toggleUserRole(id, currentRole) {
const newRole = currentRole === 'admin' ? 'viewer' : 'admin';
try {
await api('PUT', `/users/${id}`, { role: newRole });
loadUsers();
} catch (err) {
showSettingsAlert('danger', t('errors.updateFailed', { error: err.message }));
}
}
async function resetUserPassword(id, username) {
if (!confirm(t('messages.confirmResetPassword', { username }))) return;
try {
@@ -1547,6 +1633,144 @@ async function loadAllCustomerStatuses() {
}
}
// ---------------------------------------------------------------------------
// Image Updates
// ---------------------------------------------------------------------------
async function checkImageUpdates() {
const btn = document.getElementById('btn-check-updates');
const body = document.getElementById('image-updates-body');
btn.disabled = true;
body.innerHTML = `<div class="text-muted"><span class="spinner-border spinner-border-sm me-2"></span>${t('common.loading')}</div>`;
try {
const data = await api('GET', '/monitoring/images/check');
// Image status table
const imageRows = Object.values(data.images).map(img => {
const badge = img.update_available
? `<span class="badge bg-warning text-dark">${t('monitoring.updateAvailable')}</span>`
: `<span class="badge bg-success">${t('monitoring.upToDate')}</span>`;
const shortDigest = d => d ? d.substring(7, 19) + '…' : '-';
return `<tr>
<td><code class="small">${esc(img.image)}</code></td>
<td class="small text-muted">${shortDigest(img.local_digest)}</td>
<td class="small text-muted">${shortDigest(img.hub_digest)}</td>
<td>${badge}</td>
</tr>`;
}).join('');
// Customer status table
const customerRows = data.customer_status.length === 0
? `<tr><td colspan="3" class="text-center text-muted py-3">${t('monitoring.noCustomers')}</td></tr>`
: data.customer_status.map(c => {
const badge = c.needs_update
? `<span class="badge bg-warning text-dark">${t('monitoring.needsUpdate')}</span>`
: `<span class="badge bg-success">${t('monitoring.upToDate')}</span>`;
const updateBtn = c.needs_update
? `<button class="btn btn-sm btn-outline-warning ms-2" onclick="updateCustomerImages(${c.customer_id})"
title="${t('monitoring.updateCustomer')}"><i class="bi bi-arrow-repeat"></i></button>`
: '';
return `<tr>
<td>${c.customer_id}</td>
<td>${esc(c.customer_name)} <code class="small text-muted">${esc(c.subdomain)}</code></td>
<td>${badge}${updateBtn}</td>
</tr>`;
}).join('');
// Show "Update All" button if any customer needs update
const updateAllBtn = document.getElementById('btn-update-all');
if (data.customer_status.some(c => c.needs_update)) {
updateAllBtn.classList.remove('d-none');
} else {
updateAllBtn.classList.add('d-none');
}
body.innerHTML = `
<h6 class="mb-2">${t('monitoring.imageStatusTitle')}</h6>
<div class="table-responsive mb-4">
<table class="table table-sm mb-0">
<thead class="table-light">
<tr>
<th>${t('monitoring.thImage')}</th>
<th>${t('monitoring.thLocalDigest')}</th>
<th>${t('monitoring.thHubDigest')}</th>
<th>${t('monitoring.thStatus')}</th>
</tr>
</thead>
<tbody>${imageRows}</tbody>
</table>
</div>
<h6 class="mb-2">${t('monitoring.customerImageTitle')}</h6>
<div class="table-responsive">
<table class="table table-sm mb-0">
<thead class="table-light">
<tr>
<th>${t('monitoring.thId')}</th>
<th>${t('monitoring.thName')}</th>
<th>${t('monitoring.thStatus')}</th>
</tr>
</thead>
<tbody>${customerRows}</tbody>
</table>
</div>`;
} catch (err) {
body.innerHTML = `<div class="alert alert-danger">${err.message}</div>`;
} finally {
btn.disabled = false;
}
}
async function pullAllImages() {
if (!confirm(t('monitoring.confirmPull'))) return;
const btn = document.getElementById('btn-pull-images');
btn.disabled = true;
try {
await api('POST', '/monitoring/images/pull');
showToast(t('monitoring.pullStarted'));
// Re-check after a few seconds to let pull finish
setTimeout(() => checkImageUpdates(), 5000);
} catch (err) {
showMonitoringAlert('danger', err.message);
} finally {
btn.disabled = false;
}
}
async function updateCustomerImages(customerId) {
try {
await api('POST', `/customers/${customerId}/update-images`);
showToast(t('monitoring.updateDone'));
setTimeout(() => checkImageUpdates(), 2000);
} catch (err) {
showMonitoringAlert('danger', err.message);
}
}
async function updateAllCustomers() {
if (!confirm(t('monitoring.confirmUpdateAll'))) return;
const btn = document.getElementById('btn-update-all');
btn.disabled = true;
try {
const data = await api('POST', '/monitoring/customers/update-all');
showToast(data.message || t('monitoring.updateAllStarted'));
setTimeout(() => checkImageUpdates(), 5000);
} catch (err) {
showMonitoringAlert('danger', err.message);
} finally {
btn.disabled = false;
}
}
function showMonitoringAlert(type, msg) {
const body = document.getElementById('image-updates-body');
const existing = body.querySelector('.alert');
if (existing) existing.remove();
const div = document.createElement('div');
div.className = `alert alert-${type} mt-2`;
div.textContent = msg;
body.prepend(div);
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------


@@ -93,19 +93,22 @@
},
"settings": {
"title": "Systemeinstellungen",
"tabSystem": "Systemkonfiguration",
"tabNpm": "NPM Integration",
"tabImages": "Docker Images",
"tabSystem": "NetBird MSP System",
"tabNpm": "NPM Proxy",
"tabImages": "NetBird Docker Images",
"tabBranding": "Branding",
"tabUsers": "Benutzer",
"tabAzure": "Azure AD",
"tabDns": "Windows DNS",
"tabLdap": "LDAP / AD",
"tabUpdate": "Updates",
"tabUpdate": "NetBird MSP Updates",
"tabSecurity": "Sicherheit",
"groupUsers": "Benutzerverwaltung",
"groupSystem": "Systemkonfiguration",
"groupExternal": "Umsysteme",
"baseDomain": "Basis-Domain",
"baseDomainPlaceholder": "ihredomain.com",
"baseDomainHint": "Kunden erhalten Subdomains: kunde.ihredomain.com",
"baseDomainHint": "Kunden erhalten Subdomains: kundenname.ihredomain.com",
"adminEmail": "Admin E-Mail",
"adminEmailPlaceholder": "admin@ihredomain.com",
"dataDir": "Datenverzeichnis",
@@ -115,7 +118,7 @@
     "relayBasePort": "Relay-Basisport",
     "relayBasePortHint": "Erster UDP-Port für Relay. Bereich: Basis bis Basis+99",
     "dashboardBasePort": "Dashboard-Basisport",
-    "dashboardBasePortHint": "Basisport für Kunden-Dashboards. Kunde N erhält Basis+N",
+    "dashboardBasePortHint": "Basisport für Kunden-Dashboards. Der erste Kunde erhält Basis+1",
     "saveSystemSettings": "Systemeinstellungen speichern",
     "npmDescription": "NPM verwendet JWT-Authentifizierung. Geben Sie Ihre NPM-Zugangsdaten ein. Das System meldet sich automatisch an.",
     "npmApiUrl": "NPM API URL",
@@ -167,6 +170,8 @@
     "saveBranding": "Branding speichern",
     "userManagement": "Benutzerverwaltung",
     "newUser": "Neuer Benutzer",
+    "makeAdmin": "Zum Admin befördern",
+    "makeViewer": "Zum Viewer degradieren",
     "thId": "ID",
     "thUsername": "Benutzername",
     "thEmail": "E-Mail",
@@ -368,6 +373,25 @@
     "thDashboard": "Dashboard",
     "thRelayPort": "Relay-Port",
     "thContainers": "Container",
-    "noCustomers": "Keine Kunden."
+    "noCustomers": "Keine Kunden.",
+    "imageUpdates": "NetBird Container Updates",
+    "checkUpdates": "Auf Updates prüfen",
+    "pullImages": "Neueste Images laden",
+    "updateAll": "Alle aktualisieren",
+    "clickCheckUpdates": "Klicken Sie auf \"Auf Updates prüfen\", um lokale Images mit Docker Hub zu vergleichen.",
+    "updateAvailable": "Update verfügbar",
+    "upToDate": "Aktuell",
+    "needsUpdate": "Update erforderlich",
+    "updateCustomer": "Diesen Kunden aktualisieren",
+    "imageStatusTitle": "Image-Status (vs. Docker Hub)",
+    "customerImageTitle": "Kunden-Container-Status",
+    "thImage": "Image",
+    "thLocalDigest": "Lokaler Digest",
+    "thHubDigest": "Hub Digest",
+    "confirmPull": "Neueste NetBird Images von Docker Hub laden? Dies kann einige Minuten dauern.",
+    "pullStarted": "Image-Download im Hintergrund gestartet. Prüfung in 5 Sekunden…",
+    "confirmUpdateAll": "Container aller Kunden mit veralteten Images neu erstellen? Laufende Dienste werden kurz neu gestartet.",
+    "updateAllStarted": "Aktualisierung im Hintergrund gestartet.",
+    "updateDone": "Kunden-Container aktualisiert."
   }
 }

View File

@@ -114,16 +114,19 @@
   },
   "settings": {
     "title": "System Settings",
-    "tabSystem": "System Configuration",
-    "tabNpm": "NPM Integration",
-    "tabImages": "Docker Images",
+    "tabSystem": "NetBird MSP System",
+    "tabNpm": "NPM Proxy",
+    "tabImages": "NetBird Docker Images",
     "tabBranding": "Branding",
     "tabUsers": "Users",
     "tabAzure": "Azure AD",
     "tabDns": "Windows DNS",
     "tabLdap": "LDAP / AD",
-    "tabUpdate": "Updates",
+    "tabUpdate": "NetBird MSP Updates",
     "tabSecurity": "Security",
+    "groupUsers": "User Management",
+    "groupSystem": "System Configuration",
+    "groupExternal": "External Systems",
     "baseDomain": "Base Domain",
     "baseDomainPlaceholder": "yourdomain.com",
     "baseDomainHint": "Customers get subdomains: customer.yourdomain.com",
@@ -188,6 +191,8 @@
     "saveBranding": "Save Branding",
     "userManagement": "User Management",
     "newUser": "New User",
+    "makeAdmin": "Promote to admin",
+    "makeViewer": "Demote to viewer",
     "thId": "ID",
     "thUsername": "Username",
     "thEmail": "Email",
@@ -275,7 +280,26 @@
     "thDashboard": "Dashboard",
     "thRelayPort": "Relay Port",
     "thContainers": "Containers",
-    "noCustomers": "No customers."
+    "noCustomers": "No customers.",
+    "imageUpdates": "NetBird Container Updates",
+    "checkUpdates": "Check for Updates",
+    "pullImages": "Pull Latest Images",
+    "updateAll": "Update All",
+    "clickCheckUpdates": "Click \"Check for Updates\" to compare local images with Docker Hub.",
+    "updateAvailable": "Update available",
+    "upToDate": "Up to date",
+    "needsUpdate": "Needs update",
+    "updateCustomer": "Update this customer",
+    "imageStatusTitle": "Image Status (vs. Docker Hub)",
+    "customerImageTitle": "Customer Container Status",
+    "thImage": "Image",
+    "thLocalDigest": "Local Digest",
+    "thHubDigest": "Hub Digest",
+    "confirmPull": "Pull the latest NetBird images from Docker Hub? This may take a few minutes.",
+    "pullStarted": "Image pull started in background. Re-checking in 5 seconds…",
+    "confirmUpdateAll": "Recreate containers for all customers that have outdated images? Running services will briefly restart.",
+    "updateAllStarted": "Update started in background.",
+    "updateDone": "Customer containers updated."
   },
   "userModal": {
     "title": "New User",
@@ -370,4 +394,4 @@
     "confirmDeleteUser": "Delete user '{username}'? This cannot be undone.",
     "confirmResetPassword": "Reset password for '{username}'? A new random password will be generated."
   }
-}
+}

View File

@@ -5,15 +5,15 @@
 :80 {
     # Embedded IdP OAuth2/OIDC endpoints
     handle /oauth2/* {
-        reverse_proxy netbird-kunde{{ customer_id }}-management:80
+        reverse_proxy netbird-{{ subdomain }}-management:80
     }
     # NetBird Management API + gRPC
     handle /api/* {
-        reverse_proxy netbird-kunde{{ customer_id }}-management:80
+        reverse_proxy netbird-{{ subdomain }}-management:80
     }
     handle /management.ManagementService/* {
-        reverse_proxy netbird-kunde{{ customer_id }}-management:80 {
+        reverse_proxy netbird-{{ subdomain }}-management:80 {
             transport http {
                 versions h2c
             }
@@ -22,15 +22,20 @@
     # NetBird Signal gRPC
     handle /signalexchange.SignalExchange/* {
-        reverse_proxy netbird-kunde{{ customer_id }}-signal:80 {
+        reverse_proxy netbird-{{ subdomain }}-signal:80 {
             transport http {
                 versions h2c
             }
         }
     }
+    # NetBird Relay WebSocket (rels://)
+    handle /relay* {
+        reverse_proxy netbird-{{ subdomain }}-relay:80
+    }
     # Default: NetBird Dashboard
     handle {
-        reverse_proxy netbird-kunde{{ customer_id }}-dashboard:80
+        reverse_proxy netbird-{{ subdomain }}-dashboard:80
     }
 }

View File

@@ -6,7 +6,7 @@ services:
   # --- Caddy Reverse Proxy (entry point) ---
   netbird-caddy:
     image: caddy:2-alpine
-    container_name: netbird-kunde{{ customer_id }}-caddy
+    container_name: netbird-{{ subdomain }}-caddy
     restart: unless-stopped
     networks:
       - {{ docker_network }}
@@ -18,7 +18,7 @@ services:
   # --- NetBird Management (with embedded IdP) ---
   netbird-management:
     image: {{ netbird_management_image }}
-    container_name: netbird-kunde{{ customer_id }}-management
+    container_name: netbird-{{ subdomain }}-management
     restart: unless-stopped
     networks:
       - {{ docker_network }}
@@ -39,7 +39,7 @@ services:
   # --- NetBird Signal ---
   netbird-signal:
     image: {{ netbird_signal_image }}
-    container_name: netbird-kunde{{ customer_id }}-signal
+    container_name: netbird-{{ subdomain }}-signal
     restart: unless-stopped
     networks:
       - {{ docker_network }}
@@ -49,7 +49,7 @@ services:
   # --- NetBird Relay ---
   netbird-relay:
     image: {{ netbird_relay_image }}
-    container_name: netbird-kunde{{ customer_id }}-relay
+    container_name: netbird-{{ subdomain }}-relay
    restart: unless-stopped
     networks:
       - {{ docker_network }}
@@ -61,7 +61,7 @@ services:
   # --- NetBird Dashboard ---
   netbird-dashboard:
     image: {{ netbird_dashboard_image }}
-    container_name: netbird-kunde{{ customer_id }}-dashboard
+    container_name: netbird-{{ subdomain }}-dashboard
     restart: unless-stopped
     networks:
       - {{ docker_network }}

View File

@@ -1 +0,0 @@
-unable to get image 'netbirdmsp-appliance-netbird-msp-appliance': permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.51/images/netbirdmsp-appliance-netbird-msp-appliance/json": dial unix /var/run/docker.sock: connect: permission denied