Compare commits

..

143 Commits

Author SHA1 Message Date
codex c35018df05 ci(bstein-home): enforce sonar and supply-chain gates 2026-04-22 01:29:43 -03:00
codex b316cb673c ci(bstein-home): run docs before loc gate 2026-04-22 01:24:30 -03:00
codex c9e0cdf157 fix(frontend): allow non-root nginx pid writes 2026-04-22 00:57:40 -03:00
codex 95911f3137 security(bstein-home): harden containers and dependencies 2026-04-22 00:00:57 -03:00
codex 03e31cb38d ci(bstein-home): use preloaded quality scanner image 2026-04-21 22:50:29 -03:00
codex c6c967b421 ci(bstein-home): pass sonar token as login 2026-04-21 22:17:55 -03:00
codex ad44bc40c3 ci(bstein-home): run sonar and supply-chain scans 2026-04-21 22:09:06 -03:00
codex d2d0daeff1 ci(bstein-home): publish frontend junit to build artifacts 2026-04-21 21:32:32 -03:00
codex 5fd9d3e4b3 ci(bstein-home): bind sonarqube token credential 2026-04-21 20:16:51 -03:00
codex 1194481934 test(bstein-home): harden home e2e status mock 2026-04-21 19:17:20 -03:00
codex caf3d1befd ci(bstein-home): use push-capable harbor robot 2026-04-21 18:51:54 -03:00
codex 30311af78a test(bstein-home): decouple e2e overlay from mermaid render 2026-04-21 18:18:01 -03:00
codex 852ce6a2c0 ci(bstein-home): stabilize e2e preview startup 2026-04-21 17:44:25 -03:00
codex 5d8d1bc465 ci(bstein-home): start docker daemon without tls delay 2026-04-21 16:59:14 -03:00
codex d706bd2b26 ci(bstein-home): avoid forced base image pulls 2026-04-21 16:49:15 -03:00
codex 9a9704897f ci(bstein-home): avoid external buildkit dependency 2026-04-21 15:55:49 -03:00
codex a0c403a64c test(bstein-home): stabilize frontend e2e on ci agents 2026-04-21 15:20:34 -03:00
codex 05a5738213 ci(bstein-home): stabilize playwright e2e server 2026-04-21 14:57:28 -03:00
codex 949d2861cb ci(bstein-home): use unique kubernetes agents 2026-04-21 13:48:56 -03:00
codex a6210b1656 ci(bstein-home): use harbor frontend runners 2026-04-21 13:35:10 -03:00
codex e4e284e39b ci(bstein-home): use harbor python runner 2026-04-21 13:18:07 -03:00
codex 2207a2f814 ci(bstein-home): label test metrics with build artifacts 2026-04-21 11:39:13 -03:00
codex 1356efa02c ci(bstein-home): include primary branch in quality metrics 2026-04-21 11:08:21 -03:00
codex 1908830009 ci(bstein-home): publish canonical build info 2026-04-21 09:35:04 -03:00
codex 951780f115 ci(bstein-home): archive full quality evidence 2026-04-21 09:22:06 -03:00
codex ac2a3155fc test(bstein-home): enforce full source quality contract 2026-04-21 09:09:10 -03:00
codex a9b7e22a13 test(bstein-home): cover ai and frontend entry 2026-04-21 09:00:20 -03:00
codex 3675302596 test(bstein-home): cover account frontend 2026-04-21 08:52:56 -03:00
codex 1348aa85a8 test(bstein-home): cover onboarding frontend 2026-04-21 08:47:36 -03:00
codex db24e3fe88 test(bstein-home): cover request access frontend 2026-04-21 08:40:33 -03:00
codex a66913950c test(bstein-home): cover frontend static shell 2026-04-21 08:33:37 -03:00
codex 1ca83c7499 test(bstein-home): cover account overview route 2026-04-21 08:25:28 -03:00
codex 6c7dabe7ab test(bstein-home): cover account action routes 2026-04-21 08:21:59 -03:00
codex d9dcc54bce test(bstein-home): cover provisioning orchestration 2026-04-21 08:18:54 -03:00
codex 46f5ac9482 test(bstein-home): cover provisioning task helpers 2026-04-21 08:13:20 -03:00
codex 4f132a44cf test(bstein-home): cover access request onboarding 2026-04-21 08:10:56 -03:00
codex e1dde0bf43 test(bstein-home): cover admin access routes 2026-04-21 08:07:51 -03:00
codex 97365ef48c test(bstein-home): cover access request submission 2026-04-21 08:05:01 -03:00
codex 5eacf4f664 test(bstein-home): cover access request status 2026-04-21 08:01:07 -03:00
codex c675ff9787 test(bstein-home): cover access request state 2026-04-21 07:58:20 -03:00
codex 2dfbf86a93 test(bstein-home): cover ai route metadata 2026-04-21 07:50:19 -03:00
codex e6d807ed3f test(bstein-home): cover vaultwarden invite adapter 2026-04-21 07:44:43 -03:00
codex 41f8cdebc1 test(bstein-home): cover lab status routes 2026-04-21 07:41:47 -03:00
codex 11f49ec807 test(bstein-home): cover keycloak integration client 2026-04-21 07:39:34 -03:00
codex 54a9fbde49 test(bstein-home): cover backend platform helpers 2026-04-21 07:34:24 -03:00
codex 2f703005fc docs(bstein-home): document full source gate surface 2026-04-21 07:25:40 -03:00
codex 839c2586a2 refactor(bstein-home): split access request routes 2026-04-21 07:12:35 -03:00
codex 96d3d31b31 refactor(bstein-home): split account route modules 2026-04-21 07:05:08 -03:00
codex aae51f26e1 refactor(bstein-home): extract onboarding flow modules 2026-04-21 07:03:21 -03:00
codex 0273da9e79 refactor(bstein-home): extract account dashboard flow 2026-04-21 06:57:07 -03:00
codex 4ad9803c0c refactor(bstein-home): extract request access flow 2026-04-21 06:53:09 -03:00
codex e11ee72a9e refactor(bstein-home): move large view styles out of sfc files 2026-04-21 06:51:35 -03:00
codex 989dba49aa quality(bstein-home): split provisioning task helpers 2026-04-21 06:47:22 -03:00
codex d69669092f test(bstein-home): migrate unit checks to jest 2026-04-21 06:38:29 -03:00
codex 285b00183a quality(bstein-home): publish real workspace coverage and loc totals 2026-04-20 13:49:46 -03:00
codex eedb77020e ci(bstein-home): keep metrics publish alive on flaky deps 2026-04-20 13:13:07 -03:00
codex ce5d73c514 ci(bstein-home): use mounted harbor auth config 2026-04-20 13:09:28 -03:00
Codex 26231b7465 ci: retrigger quality metrics publish 2026-04-20 13:02:30 -03:00
codex e9623e5bfe ci: retrigger pipeline for metrics freshness 2026-04-20 12:43:20 -03:00
codex 7fcf17ee12 ci: enforce 30d build and artifact retention 2026-04-20 12:29:29 -03:00
codex 3d9f449d72 ci(bstein-home): publish per-test result metrics 2026-04-20 11:55:27 -03:00
codex 85abe72f62 ci(bstein-home): emit placeholder test-case metric when junit is empty 2026-04-20 09:06:38 -03:00
codex d3e47d78a9 ci(bstein-home): publish per-test case metrics for flaky tracking 2026-04-20 08:35:18 -03:00
codex 930b9edbcf ci: retrigger after jenkins lock cleanup 2026-04-19 22:00:46 -03:00
codex e8463d3501 ci: retrigger after jenkins rollout 2026-04-19 21:51:27 -03:00
codex 565efa9465 ci(gate): default sonar and supply checks to observe mode 2026-04-19 21:29:34 -03:00
codex 410824ed2c ci(gate): enforce sonarqube and supply-chain checks 2026-04-19 21:16:09 -03:00
debb49e842 ci(metrics): use Pushgateway PUT for suite metric updates 2026-04-19 16:08:03 -03:00
beccdc3959 ci(bstein-home): keep quality publish path alive when buildx is down 2026-04-19 15:45:15 -03:00
e9a4218ae9 ci(bstein-home): publish metrics even when test stages fail 2026-04-19 15:37:06 -03:00
7d7c719fc7 ci(bstein-home): run frontend test stage under bash 2026-04-19 15:02:11 -03:00
4c390426ca ci: run frontend stage under bash for pipefail support 2026-04-19 14:41:06 -03:00
250edcdad4 ci: add sonar/supply evidence collection and checks metrics 2026-04-19 14:12:46 -03:00
02c7d5b799 ci(jenkins): align Playwright image with test runner version 2026-04-18 17:47:57 -03:00
c0d83a1e5e ci(jenkins): keep shell flags POSIX compatible 2026-04-18 17:29:50 -03:00
f606c0543c ci(metrics): derive checks plus workspace coverage/loc gauges 2026-04-18 16:33:52 -03:00
8245e1aaa7 testing: add unified quality gate 2026-04-11 00:02:26 -03:00
00e6208d97 ci: use Jenkins harbor credential for image push auth 2026-04-10 06:41:02 -03:00
a80d46606e ci: fix backend pytest import path and publish junit test counts 2026-04-10 06:24:22 -03:00
ce14a4a1c2 ci: trigger clean rerun after jenkins restart 2026-04-10 06:03:39 -03:00
f8d63a100a ci: improve docker readiness and isolate dind cache per build pod 2026-04-10 05:59:39 -03:00
a2ae8c1262 ci: trigger run after jenkins controller refresh 2026-04-10 05:37:17 -03:00
4447174108 ci: add backend quality gate and pushgateway status 2026-04-10 05:18:52 -03:00
2a3ee95e74 ai: cover atlasbot timeout behavior 2026-03-30 16:55:22 -03:00
0e7b9452c1 ai: cover atlasbot mode routing 2026-03-30 16:51:25 -03:00
94310ccc2f ai: return mode-specific timeout guidance when atlasbot misses SLA 2026-03-30 03:53:42 -03:00
89ac893a76 ai: default atlas quick timeout to 15s 2026-03-30 03:44:55 -03:00
9b87c8bfbd ai: enforce mode timeouts and accept atlasbot reply payload 2026-03-30 03:37:33 -03:00
03b282da20 docker: use public base images for portal builds 2026-03-30 03:33:44 -03:00
a2deb0fcd5 ai: align web chat modes with atlasbot and remove stock path 2026-03-30 02:52:54 -03:00
94806c4b42 ci: persist dind storage 2026-02-06 19:59:18 -03:00
299e638c6a ai: persist conversation id 2026-01-30 16:59:06 -03:00
3a27cc9fc1 portal: add atlasbot profiles 2026-01-28 13:01:54 -03:00
3ea68a7464 ui: preserve atlas ai line breaks 2026-01-27 18:08:43 -03:00
aec391608d ai: route all chat through atlasbot 2026-01-27 14:54:35 -03:00
d087558c22 ai: avoid fallback for atlasbot queries 2026-01-27 14:09:01 -03:00
ecdb87d9a8 fix(ai): set latency timer before atlasbot call 2026-01-26 23:49:07 -03:00
caf6d87c5d ai: use atlasbot internal answers before LLM 2026-01-26 22:44:16 -03:00
6e970c3b56 portal: highlight approval waiting message 2026-01-25 15:01:18 -03:00
66526bc1fe portal: merge finance into productivity 2026-01-25 14:35:31 -03:00
647c954739 portal: improve onboarding guidance and rotation 2026-01-24 21:02:30 -03:00
27fbad1f05 portal: emphasize email login notes 2026-01-24 11:41:00 -03:00
f973b64ac6 portal: refine onboarding vaultwarden flow 2026-01-24 11:27:45 -03:00
882a9ae513 feat: add retry for blocked automation 2026-01-24 07:12:35 -03:00
077736b598 portal: enrich onboarding copy 2026-01-23 23:01:44 -03:00
4732334c44 portal: add vaultwarden grandfathered claim 2026-01-23 22:30:21 -03:00
498e1f4154 portal: update wger mobile caption image 2026-01-23 20:48:07 -03:00
6c2a6c33a8 portal: add onboarding media for wger and jellyfin 2026-01-23 20:25:07 -03:00
54e1e2f45d portal: add wger mobile step and require jellyfin web 2026-01-23 20:16:21 -03:00
df0c4e5c65 portal: make firefly/wger steps self-verified 2026-01-23 19:37:06 -03:00
66392a1ad0 portal: explain rotation check failures 2026-01-23 19:20:14 -03:00
b017775bdb fix: improve onboarding checks and errors 2026-01-23 18:23:06 -03:00
c7d2b81ad8 ci: use harbor arm64 base images 2026-01-23 17:47:43 -03:00
e3fb66ab81 ci: use arm64 buildkit image 2026-01-23 17:42:27 -03:00
8377f4f013 ci: use harbor buildkit image for buildx 2026-01-23 17:36:16 -03:00
070729f66c onboarding: mark vaultwarden status and adjust guides 2026-01-23 16:51:10 -03:00
e339e17bd4 portal: refine onboarding guides and account access 2026-01-23 16:06:06 -03:00
27ece883cd portal: honor keycloak rotation completion 2026-01-23 03:35:07 -03:00
1b6e58f782 onboarding: relax confirm flow and add element guides 2026-01-23 03:10:54 -03:00
8a3377959c portal: request keycloak rotation on vaultwarden confirm 2026-01-23 00:35:27 -03:00
944cb24538 portal: allow onboarding confirms without login 2026-01-23 00:08:13 -03:00
8677efaa94 portal: tighten onboarding confirmation flow 2026-01-22 23:59:02 -03:00
87c3cb35ab portal: improve onboarding confirmations and guides 2026-01-22 23:44:16 -03:00
23a69c212f build: use harbor python base image 2026-01-22 22:39:32 -03:00
82d79acf4f onboarding: tighten stepper and confirm controls 2026-01-22 22:28:31 -03:00
289e468658 media: add vaultwarden guide image 2026-01-22 22:09:21 -03:00
3d6a373f3f onboarding: keep temp password notice and paginate guides 2026-01-22 22:03:09 -03:00
994146a99a fix: honor reveal flag in status endpoint 2026-01-22 19:13:50 -03:00
d9049adb46 fix: reveal temp password when requested 2026-01-22 19:05:35 -03:00
7893c787b8 onboarding: reveal temp password on demand 2026-01-22 18:48:19 -03:00
3ff868a3ed fix(db): handle advisory lock row shape 2026-01-22 15:44:47 -03:00
bd8fa1fca5 db: move migrations to job and cap pools 2026-01-22 14:12:06 -03:00
e444a52b3d Merge pull request 'portal: add onboarding stepper + budget section' (#12) from feature/ariadne-integration-portal into master
Reviewed-on: #12
2026-01-22 05:35:02 +00:00
ae79b74bf0 portal: add onboarding stepper + budget section 2026-01-22 02:34:37 -03:00
a7a50619f3 Merge pull request 'portal: surface verified status' (#11) from feature/ariadne-integration-portal into master
Reviewed-on: #11
2026-01-22 00:12:26 +00:00
e72527c473 Merge pull request 'portal: harden email verification' (#10) from feature/ariadne-integration-portal into master
Reviewed-on: #10
2026-01-21 23:45:40 +00:00
fad4f3ea2d Merge pull request 'portal: support verification resend' (#9) from feature/ariadne-integration-portal into master
Reviewed-on: #9
2026-01-21 23:22:01 +00:00
8e580381d0 Merge pull request 'portal: collect names for access requests' (#8) from feature/ariadne-integration-portal into master
Reviewed-on: #8
2026-01-21 22:51:42 +00:00
58c60d2636 Merge pull request 'portal: restyle admin approvals' (#7) from feature/ariadne-integration-portal into master
Reviewed-on: #7
2026-01-21 22:25:48 +00:00
3472e1767a Merge pull request 'portal: log and retry ariadne calls' (#6) from feature/ariadne-integration-portal into master
Reviewed-on: #6
2026-01-21 22:01:55 +00:00
6421edfc25 Merge pull request 'ci: retry apk installs' (#5) from feature/ariadne-integration-portal into master
Reviewed-on: #5
2026-01-21 20:30:06 +00:00
631f8ec4b3 Merge pull request 'portal: load admin approvals by API' (#4) from feature/ariadne-integration-portal into master
Reviewed-on: #4
2026-01-21 20:25:00 +00:00
ae8a513c9d Merge pull request 'portal: integrate ariadne onboarding flow' (#3) from feature/ariadne-integration-portal into master
Reviewed-on: #3
2026-01-21 20:00:58 +00:00
253 changed files with 23810 additions and 4695 deletions


@@ -1,4 +1,4 @@
-FROM python:3.12-slim
+FROM registry.bstein.dev/bstein/python:3.12-slim
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
@@ -10,9 +10,13 @@ RUN apt-get update && \
rm -rf /var/lib/apt/lists/*
COPY backend/requirements.txt .
-RUN pip install --no-cache-dir -r requirements.txt
+RUN pip install --no-cache-dir -r requirements.txt && \
+    addgroup --system portal && \
+    adduser --system --ingroup portal --home /app portal && \
+    chown -R portal:portal /app
-COPY backend/ .
+COPY --chown=portal:portal backend/ .
EXPOSE 8080
USER portal
CMD ["gunicorn", "-b", "0.0.0.0:8080", "app:app"]


@@ -1,5 +1,5 @@
# Build stage
-FROM node:20-alpine AS build
+FROM registry.bstein.dev/bstein/node:20-bookworm AS build
WORKDIR /app
COPY frontend/package*.json ./
@@ -15,7 +15,9 @@ WORKDIR /usr/share/nginx/html
# Minimal nginx config with SPA fallback.
COPY frontend/nginx.conf /etc/nginx/conf.d/default.conf
-COPY --from=build /app/dist ./
+COPY --chown=nginx:nginx --from=build /app/dist ./
+RUN chown -R nginx:nginx /etc/nginx/conf.d /usr/share/nginx/html /var/cache/nginx /run /var/run
-EXPOSE 80
+EXPOSE 8080
+USER nginx
CMD ["nginx", "-g", "daemon off;"]

432
Jenkinsfile vendored

@@ -1,7 +1,6 @@
pipeline {
  agent {
    kubernetes {
-      label 'bstein-dev-home'
      defaultContainer 'builder'
      yaml """
apiVersion: v1
@@ -15,7 +14,7 @@ spec:
    node-role.kubernetes.io/worker: "true"
  containers:
  - name: dind
-    image: docker:27-dind
+    image: registry.bstein.dev/bstein/docker:27-dind
    securityContext:
      privileged: true
    env:
@@ -25,11 +24,12 @@ spec:
    - --mtu=1400
    - --host=unix:///var/run/docker.sock
    - --host=tcp://0.0.0.0:2375
+    - --tls=false
    volumeMounts:
    - name: dind-storage
      mountPath: /var/lib/docker
  - name: builder
-    image: docker:27
+    image: registry.bstein.dev/bstein/docker:27
    command: ["cat"]
    tty: true
    env:
@@ -44,6 +44,27 @@ spec:
      mountPath: /root/.docker
    - name: harbor-config
      mountPath: /docker-config
+  - name: tester
+    image: registry.bstein.dev/bstein/python:3.12-slim
+    command: ["cat"]
+    tty: true
+    volumeMounts:
+    - name: workspace-volume
+      mountPath: /home/jenkins/agent
+  - name: frontend
+    image: registry.bstein.dev/bstein/playwright:v1.59.1-jammy
+    command: ["cat"]
+    tty: true
+    volumeMounts:
+    - name: workspace-volume
+      mountPath: /home/jenkins/agent
+  - name: quality-tools
+    image: registry.bstein.dev/bstein/quality-tools:sonar8.0.1-trivy0.70.0-db20260422-arm64
+    command: ["cat"]
+    tty: true
+    volumeMounts:
+    - name: workspace-volume
+      mountPath: /home/jenkins/agent
  volumes:
  - name: workspace-volume
    emptyDir: {}
@@ -53,7 +74,7 @@ spec:
    emptyDir: {}
  - name: harbor-config
    secret:
-      secretName: harbor-bstein-robot
+      secretName: harbor-robot-pipeline
      items:
      - key: .dockerconfigjson
        path: config.json
@@ -66,9 +87,20 @@ spec:
    BACK_IMAGE = "${REGISTRY}/bstein-dev-home-backend"
    VERSION_TAG = 'dev'
    SEMVER = 'dev'
+    SUITE_NAME = 'bstein_home'
+    PUSHGATEWAY_URL = 'http://platform-quality-gateway.monitoring.svc.cluster.local:9091'
+    SONARQUBE_HOST_URL = 'http://sonarqube.quality.svc.cluster.local:9000'
+    SONARQUBE_PROJECT_KEY = 'bstein_home'
+    SONARQUBE_TOKEN = credentials('sonarqube-token')
+    QUALITY_GATE_SONARQUBE_ENFORCE = '1'
+    QUALITY_GATE_SONARQUBE_REPORT = 'build/sonarqube-quality-gate.json'
+    QUALITY_GATE_IRONBANK_ENFORCE = '1'
+    QUALITY_GATE_IRONBANK_REQUIRED = '1'
+    QUALITY_GATE_IRONBANK_REPORT = 'build/ironbank-compliance.json'
  }
  options {
    disableConcurrentBuilds()
+    buildDiscarder(logRotator(daysToKeepStr: '30', numToKeepStr: '200', artifactDaysToKeepStr: '30', artifactNumToKeepStr: '120'))
  }
triggers {
// Poll every 2 minutes; notifyCommit can also trigger, but polling keeps it moving without webhook tokens.
@@ -81,13 +113,130 @@ spec:
}
}
stage('Collect SonarQube evidence') {
steps {
container('quality-tools') {
sh '''#!/usr/bin/env bash
set -euo pipefail
mkdir -p build
args=(
"-Dsonar.host.url=${SONARQUBE_HOST_URL}"
"-Dsonar.login=${SONARQUBE_TOKEN}"
"-Dsonar.projectKey=${SONARQUBE_PROJECT_KEY}"
"-Dsonar.projectName=${SONARQUBE_PROJECT_KEY}"
"-Dsonar.sources=."
"-Dsonar.exclusions=**/.git/**,**/build/**,**/dist/**,**/node_modules/**,**/.venv/**,**/__pycache__/**,**/coverage/**,**/test-results/**,**/playwright-report/**,frontend/public/media/**"
"-Dsonar.test.inclusions=**/tests/**,**/testing/**,**/*_test.go,**/*.test.ts,**/*.test.tsx,**/*.spec.ts,**/*.spec.tsx"
)
[ -f build/backend-coverage.xml ] && args+=("-Dsonar.python.coverage.reportPaths=build/backend-coverage.xml")
[ -f frontend/coverage/lcov.info ] && args+=("-Dsonar.javascript.lcov.reportPaths=frontend/coverage/lcov.info")
set +e
sonar-scanner "${args[@]}" | tee build/sonar-scanner.log
rc=${PIPESTATUS[0]}
set -e
printf '%s\n' "${rc}" > build/sonarqube-analysis.rc
'''
}
container('tester') {
sh '''
set -euo pipefail
mkdir -p build
python3 - <<'PY'
import base64
import json
import os
import urllib.parse
import urllib.request

host = os.getenv('SONARQUBE_HOST_URL', '').strip().rstrip('/')
project_key = os.getenv('SONARQUBE_PROJECT_KEY', '').strip()
token = os.getenv('SONARQUBE_TOKEN', '').strip()
report_path = os.getenv('QUALITY_GATE_SONARQUBE_REPORT', 'build/sonarqube-quality-gate.json')
payload = {"status": "ERROR", "note": "missing SONARQUBE_HOST_URL and/or SONARQUBE_PROJECT_KEY"}
if host and project_key:
    query = urllib.parse.urlencode({"projectKey": project_key})
    request = urllib.request.Request(f"{host}/api/qualitygates/project_status?{query}", method="GET")
    if token:
        encoded = base64.b64encode(f"{token}:".encode("utf-8")).decode("utf-8")
        request.add_header("Authorization", f"Basic {encoded}")
    try:
        with urllib.request.urlopen(request, timeout=12) as response:
            payload = json.loads(response.read().decode("utf-8"))
    except Exception as exc:  # noqa: BLE001
        payload = {"status": "ERROR", "error": str(exc)}
with open(report_path, "w", encoding="utf-8") as handle:
    json.dump(payload, handle, indent=2, sort_keys=True)
    handle.write("\\n")
PY
'''
}
}
}
stage('Collect Supply Chain evidence') {
steps {
container('quality-tools') {
sh '''#!/usr/bin/env bash
set -euo pipefail
mkdir -p build
set +e
trivy fs --cache-dir "${TRIVY_CACHE_DIR}" --skip-db-update --timeout 5m --no-progress --format json --output build/trivy-fs.json --scanners vuln,secret,misconfig --severity HIGH,CRITICAL .
trivy_rc=$?
set -e
if [ ! -s build/trivy-fs.json ]; then
cat > build/ironbank-compliance.json <<EOF
{"status":"failed","compliant":false,"scanner":"trivy","scan_type":"filesystem","error":"trivy did not produce JSON output","trivy_rc":${trivy_rc}}
EOF
exit 0
fi
critical="$(jq '[.Results[]? | .Vulnerabilities[]? | select(.Severity=="CRITICAL")] | length' build/trivy-fs.json)"
high="$(jq '[.Results[]? | .Vulnerabilities[]? | select(.Severity=="HIGH")] | length' build/trivy-fs.json)"
secrets="$(jq '[.Results[]? | .Secrets[]?] | length' build/trivy-fs.json)"
misconfigs="$(jq '[.Results[]? | .Misconfigurations[]? | select(.Status=="FAIL" and (.Severity=="CRITICAL" or .Severity=="HIGH"))] | length' build/trivy-fs.json)"
status=ok
compliant=true
if [ "${critical}" -gt 0 ] || [ "${secrets}" -gt 0 ] || [ "${misconfigs}" -gt 0 ]; then
status=failed
compliant=false
fi
jq -n --arg status "${status}" --argjson compliant "${compliant}" --argjson critical "${critical}" --argjson high "${high}" --argjson secrets "${secrets}" --argjson misconfigs "${misconfigs}" --argjson trivy_rc "${trivy_rc}" \
'{status:$status, compliant:$compliant, category:"artifact_security", scan_type:"filesystem", scanner:"trivy", critical_vulnerabilities:$critical, high_vulnerabilities:$high, secrets:$secrets, high_or_critical_misconfigurations:$misconfigs, trivy_rc:$trivy_rc, high_vulnerability_policy:"observe"}' > build/ironbank-compliance.json
'''
}
container('tester') {
sh '''
set -euo pipefail
mkdir -p build
python3 - <<'PY'
import json
import os
from pathlib import Path

report_path = Path(os.getenv('QUALITY_GATE_IRONBANK_REPORT', 'build/ironbank-compliance.json'))
if report_path.exists():
    raise SystemExit(0)
status = os.getenv('IRONBANK_COMPLIANCE_STATUS', '').strip()
compliant = os.getenv('IRONBANK_COMPLIANT', '').strip().lower()
payload = {"status": status or "unknown", "compliant": compliant in {"1", "true", "yes", "on"} if compliant else None}
payload = {k: v for k, v in payload.items() if v is not None}
if "status" not in payload:
    payload["status"] = "unknown"
payload["note"] = "Set IRONBANK_COMPLIANCE_STATUS/IRONBANK_COMPLIANT or write build/ironbank-compliance.json in image-building repos."
report_path.parent.mkdir(parents=True, exist_ok=True)
report_path.write_text(json.dumps(payload, indent=2, sort_keys=True) + "\\n", encoding="utf-8")
PY
'''
}
}
}
stage('Prep toolchain') {
steps {
container('builder') {
sh '''
set -euo pipefail
for attempt in 1 2 3 4 5; do
-if apk add --no-cache bash git jq; then
+if apk add --no-cache bash git jq curl; then
break
fi
if [ "$attempt" -eq 5 ]; then
@@ -129,18 +278,248 @@ spec:
}
}
-stage('Buildx setup') {
+stage('Docker readiness') {
steps {
container('builder') {
sh '''
set -euo pipefail
mkdir -p build
set +e
ready=0
for i in $(seq 1 10); do
if docker info >/dev/null 2>&1; then
ready=1
break
fi
sleep 2
done
-docker buildx create --name bstein-builder --driver docker-container --bootstrap --use || docker buildx use bstein-builder
rc=0
if [ "${ready}" -ne 1 ]; then
echo "docker daemon did not become ready on ${DOCKER_HOST}" >&2
docker version || true
rc=1
fi
set -e
printf '%s\n' "${rc}" > build/docker-ready.rc
if [ "${rc}" -ne 0 ]; then
echo "warning: docker daemon readiness failed; publish stages will fail later" >&2
fi
'''
}
}
}
stage('Backend unit tests') {
steps {
container('tester') {
sh '''
set -euo pipefail
mkdir -p build
export PYTHONPATH="${WORKSPACE}/backend:${PYTHONPATH:-}"
set +e
pip_rc=1
for attempt in 1 2 3 4 5; do
python -m pip install --no-cache-dir -r backend/requirements.txt -r backend/requirements-dev.txt
pip_rc=$?
if [ "${pip_rc}" -eq 0 ]; then
break
fi
if [ "${attempt}" -lt 5 ]; then
sleep $((attempt * 4))
fi
done
backend_rc=1
if [ "${pip_rc}" -eq 0 ]; then
python -m pytest backend/tests -q --cov=backend/atlas_portal --cov-report=xml:build/backend-coverage.xml --junitxml=build/junit-backend.xml
backend_rc=$?
else
echo "backend dependency install failed after retries" >&2
fi
set -e
printf '%s\n' "${backend_rc}" > build/backend-tests.rc
'''
}
}
}
stage('Frontend tests') {
steps {
container('frontend') {
sh(script: '''#!/usr/bin/env bash
set -euo pipefail
mkdir -p build
ulimit -n 8192 || true
cd frontend
set +e
npm_ci_rc=1
for attempt in 1 2 3 4 5; do
npm ci
npm_ci_rc=$?
if [ "${npm_ci_rc}" -eq 0 ]; then
break
fi
if [ "${attempt}" -lt 5 ]; then
sleep $((attempt * 4))
fi
done
frontend_rc=1
if [ "${npm_ci_rc}" -eq 0 ]; then
npm run lint
npm run test:unit
npm run test:component
npm run build
PLAYWRIGHT_REUSE_DIST=1 npm run test:e2e
frontend_rc=$?
else
echo "frontend dependency install failed after retries" >&2
fi
set -e
printf '%s\n' "${frontend_rc}" > ../build/frontend-tests.rc
''')
}
}
}
stage('Run quality gate') {
steps {
container('tester') {
sh '''
set -euo pipefail
export PYTHONPATH="${WORKSPACE}:${PYTHONPATH:-}"
set +e
python -m testing.ci.quality_gate \
--backend-coverage build/backend-coverage.xml \
--frontend-coverage frontend/coverage/coverage-summary.json \
--report build/quality-gate.json
gate_rc=$?
backend_rc="$(cat build/backend-tests.rc 2>/dev/null || echo 1)"
frontend_rc="$(cat build/frontend-tests.rc 2>/dev/null || echo 1)"
if [ "${backend_rc}" -ne 0 ] || [ "${frontend_rc}" -ne 0 ]; then
gate_rc=1
fi
set -e
printf '%s\n' "${gate_rc}" > build/quality-gate.rc
'''
}
}
}
stage('Publish test metrics') {
steps {
container('tester') {
sh '''
set -euo pipefail
gate_rc="$(cat build/quality-gate.rc 2>/dev/null || echo 1)"
status="ok"
if [ "${gate_rc}" -ne 0 ]; then
status="failed"
fi
python -m testing.ci.publish_metrics \
--gateway "${PUSHGATEWAY_URL}" \
--suite "${SUITE_NAME}" \
--job platform-quality-ci \
--status "${status}" \
--quality-report build/quality-gate.json \
--junit build/junit-backend.xml build/junit-frontend-unit.xml build/junit-frontend-component.xml build/junit-frontend-e2e.xml
'''
}
}
}
stage('Enforce quality gate') {
steps {
container('tester') {
sh '''
set -euo pipefail
gate_rc="$(cat build/quality-gate.rc 2>/dev/null || echo 1)"
fail=0
if [ "${gate_rc}" -ne 0 ]; then
echo "quality gate failed with rc=${gate_rc}" >&2
fail=1
fi
enabled() {
case "$(printf '%s' "${1:-}" | tr '[:upper:]' '[:lower:]')" in
1|true|yes|on) return 0 ;;
*) return 1 ;;
esac
}
if enabled "${QUALITY_GATE_SONARQUBE_ENFORCE:-1}"; then
sonar_status="$(python3 - <<'PY'
import json
from pathlib import Path

path = Path("build/sonarqube-quality-gate.json")
if not path.exists():
    print("missing")
    raise SystemExit(0)
try:
    payload = json.loads(path.read_text(encoding="utf-8"))
except Exception:  # noqa: BLE001
    print("error")
    raise SystemExit(0)
status = (payload.get("status") or payload.get("projectStatus", {}).get("status") or payload.get("qualityGate", {}).get("status") or "").strip().lower()
print(status or "missing")
PY
)"
case "${sonar_status}" in
ok|pass|passed|success) ;;
*)
echo "sonarqube gate failed: ${sonar_status}" >&2
fail=1
;;
esac
fi
ironbank_required="${QUALITY_GATE_IRONBANK_REQUIRED:-1}"
if [ "${PUBLISH_IMAGES:-false}" = "true" ]; then
ironbank_required=1
fi
if enabled "${QUALITY_GATE_IRONBANK_ENFORCE:-1}"; then
supply_status="$(python3 - <<'PY'
import json
from pathlib import Path

path = Path("build/ironbank-compliance.json")
if not path.exists():
    print("missing")
    raise SystemExit(0)
try:
    payload = json.loads(path.read_text(encoding="utf-8"))
except Exception:  # noqa: BLE001
    print("error")
    raise SystemExit(0)
compliant = payload.get("compliant")
if compliant is True:
    print("ok")
elif compliant is False:
    print("failed")
else:
    status = str(payload.get("status") or payload.get("result") or payload.get("compliance") or "").strip().lower()
    print(status or "missing")
PY
)"
case "${supply_status}" in
ok|pass|passed|success|compliant) ;;
not_applicable|na|n/a)
if enabled "${ironbank_required}"; then
echo "supply chain gate required but status=${supply_status}" >&2
fail=1
fi
;;
*)
if enabled "${ironbank_required}"; then
echo "supply chain gate failed: ${supply_status}" >&2
fail=1
else
echo "supply chain gate not passing (${supply_status}) but not required for this run" >&2
fi
;;
esac
fi
exit "${fail}"
'''
}
}
@@ -151,14 +530,19 @@ spec:
container('builder') {
sh '''
set -euo pipefail
test "$(cat build/docker-ready.rc 2>/dev/null || echo 1)" -eq 0
VERSION_TAG="$(cut -d= -f2 build.env)"
-docker buildx build \
-  --platform linux/arm64 \
-  --tag "${FRONT_IMAGE}:${VERSION_TAG}" \
-  --tag "${FRONT_IMAGE}:latest" \
-  --file Dockerfile.frontend \
-  --push \
-  .
for attempt in 1 2 3; do
if docker build --tag "${FRONT_IMAGE}:${VERSION_TAG}" --tag "${FRONT_IMAGE}:latest" --file Dockerfile.frontend .; then
break
fi
if [ "${attempt}" -eq 3 ]; then
exit 1
fi
sleep $((attempt * 10))
done
docker push "${FRONT_IMAGE}:${VERSION_TAG}"
docker push "${FRONT_IMAGE}:latest"
'''
}
}
@@ -169,14 +553,19 @@ spec:
container('builder') {
sh '''
set -euo pipefail
test "$(cat build/docker-ready.rc 2>/dev/null || echo 1)" -eq 0
VERSION_TAG="$(cut -d= -f2 build.env)"
-docker buildx build \
-  --platform linux/arm64 \
-  --tag "${BACK_IMAGE}:${VERSION_TAG}" \
-  --tag "${BACK_IMAGE}:latest" \
-  --file Dockerfile.backend \
-  --push \
-  .
for attempt in 1 2 3; do
if docker build --tag "${BACK_IMAGE}:${VERSION_TAG}" --tag "${BACK_IMAGE}:latest" --file Dockerfile.backend .; then
break
fi
if [ "${attempt}" -eq 3 ]; then
exit 1
fi
sleep $((attempt * 10))
done
docker push "${BACK_IMAGE}:${VERSION_TAG}"
docker push "${BACK_IMAGE}:latest"
'''
}
}
@@ -189,6 +578,7 @@ spec:
def props = fileExists('build.env') ? readProperties(file: 'build.env') : [:]
echo "Build complete for ${props['SEMVER'] ?: env.VERSION_TAG}"
}
archiveArtifacts artifacts: 'build/**, frontend/coverage/**, frontend/test-results/**, frontend/playwright-report/**', allowEmptyArchive: true, fingerprint: true
}
}
}


@@ -7,17 +7,20 @@ from flask import Flask, jsonify, send_from_directory
from flask_cors import CORS
from werkzeug.middleware.proxy_fix import ProxyFix

from .db import ensure_schema
from .routes import access_requests, account, admin_access, ai, auth_config, health, lab, monero


def create_app() -> Flask:
    """Build the Flask app with API routes and SPA fallback handling.

    WHY: the portal needs a single assembly point so the API, auth routes, and
    frontend fallback all stay wired the same way in Flask, tests, and Jenkins.
    """
    app = Flask(__name__, static_folder="../frontend/dist", static_url_path="")
    app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_port=1)
    CORS(app, resources={r"/api/*": {"origins": "*"}})
    ensure_schema()
    health.register(app)
    auth_config.register(app)
    account.register(app)
@@ -30,6 +33,8 @@ def create_app() -> Flask:
    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def serve_frontend(path: str) -> Any:
        """Serve the compiled SPA assets or return a JSON build hint."""
        dist_path = Path(app.static_folder)
        index_path = dist_path / "index.html"


@@ -13,12 +13,16 @@ logger = logging.getLogger(__name__)


class AriadneError(Exception):
    """Carry an upstream-facing error message and HTTP status code."""

    def __init__(self, message: str, status_code: int = 502) -> None:
        super().__init__(message)
        self.status_code = status_code


def enabled() -> bool:
    """Return whether Ariadne proxying is configured for this portal."""
    return bool(settings.ARIADNE_URL)
@@ -40,6 +44,12 @@ def request_raw(
    payload: Any | None = None,
    params: dict[str, Any] | None = None,
) -> httpx.Response:
    """Send one authenticated request to Ariadne and return the raw response.

    WHY: callers need the exact upstream status/body so local routes can act as
    a transparent compatibility proxy while still applying retry telemetry.
    """
    if not enabled():
        raise AriadneError("ariadne not configured", 503)
@@ -84,6 +94,12 @@ def proxy(
    payload: Any | None = None,
    params: dict[str, Any] | None = None,
) -> tuple[Any, int]:
    """Proxy an Ariadne response through Flask as JSON plus status code.

    WHY: route handlers should share upstream error normalization instead of
    duplicating JSON parsing and outage handling at each call site.
    """
    try:
        resp = request_raw(method, path, payload=payload, params=params)
    except AriadneError as exc:


@@ -5,121 +5,208 @@ from typing import Any, Iterator
import psycopg
from psycopg.rows import dict_row
from psycopg_pool import ConnectionPool
from . import settings
MIGRATION_LOCK_ID = 982731
_pool: ConnectionPool | None = None
def configured() -> bool:
"""Return whether the portal has enough database configuration to connect."""
return bool(settings.PORTAL_DATABASE_URL)
def _pool_kwargs() -> dict[str, Any]:
"""Build shared psycopg pool options for Atlas portal connections."""
options = (
f"-c lock_timeout={settings.PORTAL_DB_LOCK_TIMEOUT_SEC}s "
f"-c statement_timeout={settings.PORTAL_DB_STATEMENT_TIMEOUT_SEC}s "
f"-c idle_in_transaction_session_timeout={settings.PORTAL_DB_IDLE_IN_TX_TIMEOUT_SEC}s"
)
return {
"connect_timeout": settings.PORTAL_DB_CONNECT_TIMEOUT_SEC,
"application_name": "atlas_portal",
"options": options,
"row_factory": dict_row,
}
def _get_pool() -> ConnectionPool:
"""Return the singleton Postgres connection pool for request handlers."""
global _pool
if _pool is None:
if not settings.PORTAL_DATABASE_URL:
raise RuntimeError("portal database not configured")
_pool = ConnectionPool(
conninfo=settings.PORTAL_DATABASE_URL,
min_size=settings.PORTAL_DB_POOL_MIN,
max_size=settings.PORTAL_DB_POOL_MAX,
kwargs=_pool_kwargs(),
)
return _pool
@contextmanager
def connect() -> Iterator[psycopg.Connection[Any]]:
"""Yield a dict-row Postgres connection from the shared pool."""
    if not settings.PORTAL_DATABASE_URL:
        raise RuntimeError("portal database not configured")
    with _get_pool().connection() as conn:
        conn.row_factory = dict_row
        yield conn
def _try_advisory_lock(conn: psycopg.Connection[Any], lock_id: int) -> bool:
    """Return whether this session acquired the migration advisory lock without blocking."""
    row = conn.execute("SELECT pg_try_advisory_lock(%s)", (lock_id,)).fetchone()
    if isinstance(row, dict):
        return bool(row.get("pg_try_advisory_lock"))
    return bool(row and row[0])
def _release_advisory_lock(conn: psycopg.Connection[Any], lock_id: int) -> None:
    """Release the advisory lock, ignoring failures on an already-broken connection."""
    try:
        conn.execute("SELECT pg_advisory_unlock(%s)", (lock_id,))
    except Exception:
        pass
def run_migrations() -> None:
"""Create and upgrade the portal schema using an advisory lock.
WHY: every replica may start concurrently, so schema changes must be safe
to run repeatedly without allowing multiple pods to migrate at once.
"""
if not settings.PORTAL_DATABASE_URL or not settings.PORTAL_RUN_MIGRATIONS:
return
    with connect() as conn:
        try:
            conn.execute(f"SET lock_timeout = '{settings.PORTAL_DB_LOCK_TIMEOUT_SEC}s'")
            conn.execute(f"SET statement_timeout = '{settings.PORTAL_DB_STATEMENT_TIMEOUT_SEC}s'")
        except Exception:
            pass
        if not _try_advisory_lock(conn, MIGRATION_LOCK_ID):
            return
        try:
            conn.execute(
                """
                CREATE TABLE IF NOT EXISTS access_requests (
                    request_code TEXT PRIMARY KEY,
                    username TEXT NOT NULL,
                    first_name TEXT,
                    last_name TEXT,
                    contact_email TEXT,
                    note TEXT,
                    status TEXT NOT NULL,
                    email_verification_token_hash TEXT,
                    email_verification_sent_at TIMESTAMPTZ,
                    email_verified_at TIMESTAMPTZ,
                    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
                    decided_at TIMESTAMPTZ,
                    decided_by TEXT,
                    initial_password TEXT,
                    initial_password_revealed_at TIMESTAMPTZ,
                    provision_attempted_at TIMESTAMPTZ,
                    welcome_email_sent_at TIMESTAMPTZ,
                    approval_flags TEXT[],
                    approval_note TEXT,
                    denial_note TEXT
                )
                """
            )
            conn.execute(
                """
                ALTER TABLE access_requests
                    ADD COLUMN IF NOT EXISTS initial_password TEXT,
                    ADD COLUMN IF NOT EXISTS initial_password_revealed_at TIMESTAMPTZ,
                    ADD COLUMN IF NOT EXISTS provision_attempted_at TIMESTAMPTZ,
                    ADD COLUMN IF NOT EXISTS email_verification_token_hash TEXT,
                    ADD COLUMN IF NOT EXISTS email_verification_sent_at TIMESTAMPTZ,
                    ADD COLUMN IF NOT EXISTS email_verified_at TIMESTAMPTZ,
                    ADD COLUMN IF NOT EXISTS welcome_email_sent_at TIMESTAMPTZ,
                    ADD COLUMN IF NOT EXISTS first_name TEXT,
                    ADD COLUMN IF NOT EXISTS last_name TEXT,
                    ADD COLUMN IF NOT EXISTS approval_flags TEXT[],
                    ADD COLUMN IF NOT EXISTS approval_note TEXT,
                    ADD COLUMN IF NOT EXISTS denial_note TEXT
                """
            )
            conn.execute(
                """
                CREATE TABLE IF NOT EXISTS access_request_tasks (
                    request_code TEXT NOT NULL REFERENCES access_requests(request_code) ON DELETE CASCADE,
                    task TEXT NOT NULL,
                    status TEXT NOT NULL,
                    detail TEXT,
                    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
                    PRIMARY KEY (request_code, task)
                )
                """
            )
            conn.execute(
                """
                CREATE TABLE IF NOT EXISTS access_request_onboarding_steps (
                    request_code TEXT NOT NULL REFERENCES access_requests(request_code) ON DELETE CASCADE,
                    step TEXT NOT NULL,
                    completed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
                    PRIMARY KEY (request_code, step)
                )
                """
            )
            conn.execute(
                """
                CREATE TABLE IF NOT EXISTS access_request_onboarding_artifacts (
                    request_code TEXT NOT NULL REFERENCES access_requests(request_code) ON DELETE CASCADE,
                    artifact TEXT NOT NULL,
                    value_hash TEXT NOT NULL,
                    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
                    PRIMARY KEY (request_code, artifact)
                )
                """
            )
            conn.execute(
                """
                CREATE INDEX IF NOT EXISTS access_requests_status_created_at
                ON access_requests (status, created_at)
                """
            )
            conn.execute(
                """
                CREATE INDEX IF NOT EXISTS access_request_tasks_request_code
                ON access_request_tasks (request_code)
                """
            )
            conn.execute(
                """
                CREATE INDEX IF NOT EXISTS access_request_onboarding_steps_request_code
                ON access_request_onboarding_steps (request_code)
                """
            )
            conn.execute(
                """
                CREATE INDEX IF NOT EXISTS access_request_onboarding_artifacts_request_code
                ON access_request_onboarding_artifacts (request_code)
                """
            )
            conn.execute(
                """
                CREATE UNIQUE INDEX IF NOT EXISTS access_requests_username_pending
                ON access_requests (username)
                WHERE status = 'pending'
                """
            )
        finally:
            _release_advisory_lock(conn, MIGRATION_LOCK_ID)
def ensure_schema() -> None:
"""Run startup migrations through a small semantic wrapper."""
run_migrations()
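The migration above ends with a partial unique index, `access_requests_username_pending`, so one username can accumulate many historical requests but hold at most one pending row. The same constraint can be exercised with stdlib SQLite, which also supports partial indexes (table shape simplified for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE access_requests ("
    " request_code TEXT PRIMARY KEY,"
    " username TEXT NOT NULL,"
    " status TEXT NOT NULL)"
)
# Uniqueness applies only to rows WHERE status = 'pending'.
conn.execute(
    """
    CREATE UNIQUE INDEX access_requests_username_pending
    ON access_requests (username)
    WHERE status = 'pending'
    """
)
conn.execute("INSERT INTO access_requests VALUES ('r1', 'alice', 'approved')")
conn.execute("INSERT INTO access_requests VALUES ('r2', 'alice', 'pending')")
try:
    # A second pending row for the same username violates the partial index.
    conn.execute("INSERT INTO access_requests VALUES ('r3', 'alice', 'pending')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```

The approved row and the pending row coexist; only the second pending insert fails, which is exactly the invariant the portal needs while a request awaits a decision.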

View File

@ -21,6 +21,8 @@ def _job_from_cronjob(
email: str,
password: str,
) -> dict[str, Any]:
"""Render a one-off Firefly user sync Job from the managed CronJob template."""
spec = cronjob.get("spec") if isinstance(cronjob.get("spec"), dict) else {}
jt = spec.get("jobTemplate") if isinstance(spec.get("jobTemplate"), dict) else {}
job_spec = jt.get("spec") if isinstance(jt.get("spec"), dict) else {}
@ -70,6 +72,8 @@ def _job_from_cronjob(
def _job_succeeded(job: dict[str, Any]) -> bool:
"""Return whether Kubernetes reports the sync Job as successfully complete."""
status = job.get("status") if isinstance(job.get("status"), dict) else {}
if int(status.get("succeeded") or 0) > 0:
return True
@ -83,6 +87,8 @@ def _job_succeeded(job: dict[str, Any]) -> bool:
def _job_failed(job: dict[str, Any]) -> bool:
"""Return whether Kubernetes reports the sync Job as failed."""
status = job.get("status") if isinstance(job.get("status"), dict) else {}
if int(status.get("failed") or 0) > 0:
return True
@ -96,6 +102,12 @@ def _job_failed(job: dict[str, Any]) -> bool:
def trigger(username: str, email: str, password: str, wait: bool = True) -> dict[str, Any]:
"""Start the Firefly sync Job for one user and optionally wait for completion.
WHY: account self-service should be able to repair one user without waiting
for the broader scheduled reconciliation loop.
"""
username = (username or "").strip()
if not username:
raise RuntimeError("missing username")
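`_job_from_cronjob` above copies `spec.jobTemplate.spec` out of the managed CronJob so the one-off Job reuses the exact scheduled template. A minimal sketch of that extraction, with the same defensive `isinstance` checks (the CronJob contents here are illustrative):

```python
from typing import Any


def job_from_cronjob(cronjob: dict[str, Any], job_name: str) -> dict[str, Any]:
    # Walk spec -> jobTemplate -> spec defensively; any malformed level
    # degrades to an empty dict instead of raising.
    spec = cronjob.get("spec") if isinstance(cronjob.get("spec"), dict) else {}
    jt = spec.get("jobTemplate") if isinstance(spec.get("jobTemplate"), dict) else {}
    job_spec = jt.get("spec") if isinstance(jt.get("spec"), dict) else {}
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": job_name},
        "spec": job_spec,
    }


cron = {"spec": {"jobTemplate": {"spec": {"template": {"spec": {"containers": []}}}}}}
job = job_from_cronjob(cron, "firefly-user-sync-once")
```

Because the Job inherits the CronJob's pod template verbatim, a targeted repair run behaves identically to the scheduled reconciliation, just scoped to one user.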

View File

@ -24,6 +24,8 @@ def _read_service_account() -> tuple[str, str]:
def get_json(path: str) -> dict[str, Any]:
"""Fetch a Kubernetes API object as JSON using the pod service account."""
token, ca_path = _read_service_account()
url = f"{_K8S_BASE_URL}{path}"
with httpx.Client(
@ -40,6 +42,8 @@ def get_json(path: str) -> dict[str, Any]:
def post_json(path: str, payload: dict[str, Any]) -> dict[str, Any]:
"""Post a JSON payload to Kubernetes using the pod service account."""
token, ca_path = _read_service_account()
url = f"{_K8S_BASE_URL}{path}"
with httpx.Client(
@ -53,4 +57,3 @@ def post_json(path: str, payload: dict[str, Any]) -> dict[str, Any]:
if not isinstance(data, dict):
raise RuntimeError("unexpected kubernetes response")
return data
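The client above authenticates with the pod's mounted service-account token and verifies the API server with the mounted CA bundle. The request-parameter side of that pattern can be sketched as a pure function (the path and namespace below are placeholders, not the portal's real resources):

```python
def k8s_request_params(base_url: str, path: str, token: str) -> tuple[str, dict[str, str]]:
    # In-cluster calls send the service-account token as a bearer credential;
    # TLS verification against the mounted CA bundle is handled by the HTTP
    # client and is not shown here.
    url = f"{base_url}{path}"
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
    return url, headers


url, headers = k8s_request_params(
    "https://kubernetes.default.svc",
    "/apis/batch/v1/namespaces/portal/cronjobs/firefly-user-sync",
    "sa-token",
)
```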

View File

@ -14,6 +14,8 @@ from . import settings
class KeycloakOIDC:
"""Verify user-facing Keycloak tokens for portal requests."""
def __init__(self) -> None:
self._jwk_client: PyJWKClient | None = None
@ -23,6 +25,12 @@ class KeycloakOIDC:
return self._jwk_client
def verify(self, token: str) -> dict[str, Any]:
"""Validate a bearer token and return decoded claims.
WHY: the portal trusts Keycloak groups and usernames only after issuer
and client ownership are checked locally.
"""
if not settings.KEYCLOAK_ENABLED:
raise ValueError("keycloak not enabled")
@ -50,6 +58,8 @@ class KeycloakOIDC:
class KeycloakAdminClient:
"""Perform service-account Keycloak admin operations for provisioning."""
def __init__(self) -> None:
self._token: str = ""
self._expires_at: float = 0.0
@ -57,6 +67,12 @@ class KeycloakAdminClient:
@staticmethod
def _safe_update_payload(full: dict[str, Any]) -> dict[str, Any]:
"""Extract mutable fields from a full Keycloak user document.
WHY: partial updates can accidentally clear profile or attribute data,
so callers merge desired changes into a safe copy first.
"""
payload: dict[str, Any] = {}
username = full.get("username")
if isinstance(username, str):
@ -86,9 +102,13 @@ class KeycloakAdminClient:
return payload
def ready(self) -> bool:
"""Return whether admin-client credentials are available."""
return bool(settings.KEYCLOAK_ADMIN_CLIENT_ID and settings.KEYCLOAK_ADMIN_CLIENT_SECRET)
def _get_token(self) -> str:
"""Return a cached service-account token, refreshing before expiry."""
if not self.ready():
raise RuntimeError("keycloak admin client not configured")
@ -120,9 +140,13 @@ class KeycloakAdminClient:
return {"Authorization": f"Bearer {self._get_token()}"}
def headers(self) -> dict[str, str]:
"""Return authorization headers for callers that need raw admin access."""
return self._headers()
def find_user(self, username: str) -> dict[str, Any] | None:
"""Look up one Keycloak user by exact username."""
url = f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}/users"
# Keycloak 26.x in our environment intermittently 400s on filtered user queries unless `max` is set.
# Use `max=1` and exact username match to keep admin calls reliable for portal provisioning.
@ -137,6 +161,8 @@ class KeycloakAdminClient:
return user if isinstance(user, dict) else None
def find_user_by_email(self, email: str) -> dict[str, Any] | None:
"""Look up one Keycloak user by exact email address."""
email = (email or "").strip()
if not email:
return None
@ -161,6 +187,8 @@ class KeycloakAdminClient:
return None
def get_user(self, user_id: str) -> dict[str, Any]:
"""Fetch a full Keycloak user representation by ID."""
url = f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}/users/{quote(user_id, safe='')}"
with httpx.Client(timeout=settings.HTTP_CHECK_TIMEOUT_SEC) as client:
resp = client.get(url, headers=self._headers())
@ -171,12 +199,16 @@ class KeycloakAdminClient:
return data
def update_user(self, user_id: str, payload: dict[str, Any]) -> None:
"""Replace a Keycloak user document with the supplied payload."""
url = f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}/users/{quote(user_id, safe='')}"
with httpx.Client(timeout=settings.HTTP_CHECK_TIMEOUT_SEC) as client:
resp = client.put(url, headers={**self._headers(), "Content-Type": "application/json"}, json=payload)
resp.raise_for_status()
def update_user_safe(self, user_id: str, payload: dict[str, Any]) -> None:
"""Merge selected user changes into the current Keycloak document."""
full = self.get_user(user_id)
merged = self._safe_update_payload(full)
for key, value in payload.items():
@ -192,6 +224,8 @@ class KeycloakAdminClient:
self.update_user(user_id, merged)
def create_user(self, payload: dict[str, Any]) -> str:
"""Create a Keycloak user and return the generated user ID."""
url = f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}/users"
with httpx.Client(timeout=settings.HTTP_CHECK_TIMEOUT_SEC) as client:
resp = client.post(url, headers={**self._headers(), "Content-Type": "application/json"}, json=payload)
@ -202,6 +236,8 @@ class KeycloakAdminClient:
raise RuntimeError("failed to determine created user id")
def reset_password(self, user_id: str, password: str, temporary: bool = True) -> None:
"""Set a Keycloak password credential for a user."""
url = (
f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}"
f"/users/{quote(user_id, safe='')}/reset-password"
@ -212,6 +248,8 @@ class KeycloakAdminClient:
resp.raise_for_status()
def set_user_attribute(self, username: str, key: str, value: str) -> None:
"""Set one single-value Keycloak user attribute by username."""
user = self.find_user(username)
if not user:
raise RuntimeError("user not found")
@ -230,6 +268,8 @@ class KeycloakAdminClient:
self.update_user(user_id, payload)
def get_group_id(self, group_name: str) -> str | None:
"""Resolve and cache the Keycloak ID for a group name."""
cached = self._group_id_cache.get(group_name)
if cached:
return cached
@ -252,6 +292,8 @@ class KeycloakAdminClient:
return None
def list_group_names(self) -> list[str]:
"""Return all Keycloak group names visible to the admin client."""
url = f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}/groups"
with httpx.Client(timeout=settings.HTTP_CHECK_TIMEOUT_SEC) as client:
resp = client.get(url, headers=self._headers())
@ -263,6 +305,8 @@ class KeycloakAdminClient:
names: set[str] = set()
def walk(groups: list[Any]) -> None:
"""Visit nested Keycloak group records and collect names."""
for group in groups:
if not isinstance(group, dict):
continue
@ -276,7 +320,31 @@ class KeycloakAdminClient:
walk(items)
return sorted(names)
def list_user_groups(self, user_id: str) -> list[str]:
"""Return group names assigned to one Keycloak user."""
url = (
f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}"
f"/users/{quote(user_id, safe='')}/groups"
)
with httpx.Client(timeout=settings.HTTP_CHECK_TIMEOUT_SEC) as client:
resp = client.get(url, headers=self._headers())
resp.raise_for_status()
items = resp.json()
if not isinstance(items, list):
return []
names: list[str] = []
for item in items:
if not isinstance(item, dict):
continue
name = item.get("name")
if isinstance(name, str) and name:
names.append(name.lstrip("/"))
return names
def add_user_to_group(self, user_id: str, group_id: str) -> None:
"""Attach one Keycloak user to one group by ID."""
url = (
f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}"
f"/users/{quote(user_id, safe='')}/groups/{quote(group_id, safe='')}"
@ -286,6 +354,8 @@ class KeycloakAdminClient:
resp.raise_for_status()
def execute_actions_email(self, user_id: str, actions: list[str], redirect_uri: str) -> None:
"""Ask Keycloak to email required-account-action links to a user."""
url = (
f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}"
f"/users/{quote(user_id, safe='')}/execute-actions-email"
@ -301,6 +371,8 @@ class KeycloakAdminClient:
resp.raise_for_status()
def get_user_credentials(self, user_id: str) -> list[dict[str, Any]]:
"""Return credential metadata for one Keycloak user."""
url = (
f"{settings.KEYCLOAK_ADMIN_URL}/admin/realms/{settings.KEYCLOAK_REALM}"
f"/users/{quote(user_id, safe='')}/credentials"
@ -319,6 +391,8 @@ _ADMIN: KeycloakAdminClient | None = None
def oidc_client() -> KeycloakOIDC:
"""Return the singleton OIDC verifier."""
global _OIDC
if _OIDC is None:
_OIDC = KeycloakOIDC()
@ -326,6 +400,8 @@ def oidc_client() -> KeycloakOIDC:
def admin_client() -> KeycloakAdminClient:
"""Return the singleton Keycloak admin client."""
global _ADMIN
if _ADMIN is None:
_ADMIN = KeycloakAdminClient()
@ -344,6 +420,8 @@ def _normalize_groups(groups: Any) -> list[str]:
def _extract_bearer_token() -> str | None:
"""Extract a bearer token from the current Flask request."""
header = request.headers.get("Authorization", "")
if not header:
return None
@ -357,8 +435,12 @@ def _extract_bearer_token() -> str | None:
def require_auth(fn):
"""Decorate a Flask route so it requires a valid Keycloak bearer token."""
@wraps(fn)
def wrapper(*args, **kwargs):
"""Validate the request token and place normalized claims on Flask globals."""
token = _extract_bearer_token()
if not token:
return jsonify({"error": "missing bearer token"}), 401
@ -377,6 +459,8 @@ def require_auth(fn):
def require_portal_admin() -> tuple[bool, Any]:
"""Return whether the authenticated user can use portal admin actions."""
if not settings.KEYCLOAK_ENABLED:
return False, (jsonify({"error": "keycloak not enabled"}), 503)
@ -391,11 +475,15 @@ def require_portal_admin() -> tuple[bool, Any]:
def require_account_access() -> tuple[bool, Any]:
"""Return whether the authenticated user can use self-service account pages."""
if not settings.KEYCLOAK_ENABLED:
return False, (jsonify({"error": "keycloak not enabled"}), 503)
if not settings.ACCOUNT_ALLOWED_GROUPS:
return True, None
groups = set(getattr(g, "keycloak_groups", []) or [])
if not groups:
return True, None
if groups.intersection(settings.ACCOUNT_ALLOWED_GROUPS):
return True, None
return False, (jsonify({"error": "forbidden"}), 403)

View File

@ -7,10 +7,14 @@ from . import settings
class MailerError(RuntimeError):
"""Signal a mail delivery problem without leaking provider internals."""
pass
def send_text_email(*, to_addr: str, subject: str, body: str) -> None:
"""Send a plaintext email through the configured SMTP relay."""
if not to_addr:
raise MailerError("missing recipient")
if not settings.SMTP_HOST:
@ -35,6 +39,8 @@ def send_text_email(*, to_addr: str, subject: str, body: str) -> None:
def access_request_verification_body(*, request_code: str, verify_url: str) -> str:
"""Render the access-request email verification message body."""
return "\n".join(
[
"Atlas — confirm your email",
@ -50,4 +56,3 @@ def access_request_verification_body(*, request_code: str, verify_url: str) -> s
"",
]
)

View File

@ -0,0 +1,13 @@
from __future__ import annotations
from .db import run_migrations
def main() -> None:
"""Run database migrations when invoked as a module or script."""
run_migrations()
if __name__ == "__main__": # pragma: no cover - CLI guard is exercised by direct invocation.
main()

View File

@ -16,6 +16,8 @@ def _safe_name_fragment(value: str, max_len: int = 24) -> str:
def _job_from_cronjob(cronjob: dict[str, Any], username: str) -> dict[str, Any]:
"""Render a one-off Nextcloud mail sync Job from the CronJob template."""
spec = cronjob.get("spec") if isinstance(cronjob.get("spec"), dict) else {}
jt = spec.get("jobTemplate") if isinstance(spec.get("jobTemplate"), dict) else {}
job_spec = jt.get("spec") if isinstance(jt.get("spec"), dict) else {}
@ -61,6 +63,8 @@ def _job_from_cronjob(cronjob: dict[str, Any], username: str) -> dict[str, Any]:
def _job_succeeded(job: dict[str, Any]) -> bool:
"""Return whether Kubernetes reports the sync Job as successfully complete."""
status = job.get("status") if isinstance(job.get("status"), dict) else {}
if int(status.get("succeeded") or 0) > 0:
return True
@ -74,6 +78,8 @@ def _job_succeeded(job: dict[str, Any]) -> bool:
def _job_failed(job: dict[str, Any]) -> bool:
"""Return whether Kubernetes reports the sync Job as failed."""
status = job.get("status") if isinstance(job.get("status"), dict) else {}
if int(status.get("failed") or 0) > 0:
return True
@ -87,6 +93,12 @@ def _job_failed(job: dict[str, Any]) -> bool:
def trigger(username: str, wait: bool = True) -> dict[str, Any]:
"""Start the Nextcloud mail sync Job for one user and optionally wait.
WHY: onboarding and account actions need a targeted sync repair path that
reuses the same template as scheduled cluster automation.
"""
username = (username or "").strip()
if not username:
raise RuntimeError("missing username")
@ -120,4 +132,3 @@ def trigger(username: str, wait: bool = True) -> dict[str, Any]:
last_state = "running"
return {"job": job_name, "status": last_state}
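`trigger()` above optionally waits by polling the Job status until Kubernetes reports it succeeded or failed, falling back to a `"running"` result at the deadline. That wait loop can be sketched generically with an injectable clock so the deadline logic is testable without real sleeping (function and state names here are illustrative):

```python
from typing import Callable


def wait_for_job(read_state: Callable[[], str], *, timeout: float, interval: float,
                 clock: Callable[[], float], sleep: Callable[[float], None]) -> str:
    # Poll until a terminal state or the deadline; report "running" on timeout
    # so callers can surface a retryable status instead of raising.
    deadline = clock() + timeout
    while True:
        state = read_state()
        if state in ("ok", "error"):
            return state
        if clock() >= deadline:
            return "running"
        sleep(interval)


# Fake clock: each sleep advances time, so no real waiting happens.
t = [0.0]
states = iter(["pending", "pending", "ok"])
result = wait_for_job(lambda: next(states), timeout=10.0, interval=1.0,
                      clock=lambda: t[0], sleep=lambda s: t.__setitem__(0, t[0] + s))
```

Returning `"running"` rather than raising lets provisioning record the task as incomplete and retry later, matching the `last_state` handling above.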

View File

@ -11,6 +11,13 @@ from . import settings
from .db import connect
from .keycloak import admin_client
from .nextcloud_mail_sync import trigger as trigger_nextcloud_mail_sync
from .provisioning_tasks import (
REQUIRED_PROVISION_TASKS,
all_tasks_ok,
ensure_task_rows,
safe_error_detail,
upsert_task,
)
from .utils import random_password
from .vaultwarden import invite_user
from .firefly_user_sync import trigger as trigger_firefly_user_sync
@ -24,113 +31,40 @@ WGER_PASSWORD_ATTR = "wger_password"
WGER_PASSWORD_UPDATED_ATTR = "wger_password_updated_at"
FIREFLY_PASSWORD_ATTR = "firefly_password"
FIREFLY_PASSWORD_UPDATED_ATTR = "firefly_password_updated_at"
REQUIRED_PROVISION_TASKS: tuple[str, ...] = (
"keycloak_user",
"keycloak_password",
"keycloak_groups",
"mailu_app_password",
"mailu_sync",
"nextcloud_mail_sync",
"wger_account",
"firefly_account",
"vaultwarden_invite",
)
@dataclass(frozen=True)
class ProvisionResult:
"""Outcome returned by one provisioning attempt."""
ok: bool
status: str
def _advisory_lock_id(request_code: str) -> int:
"""Derive a stable Postgres advisory lock id from a request code."""
digest = hashlib.sha256(request_code.encode("utf-8")).digest()
return int.from_bytes(digest[:8], "big", signed=True)
def _upsert_task(conn, request_code: str, task: str, status: str, detail: str | None = None) -> None:
conn.execute(
"""
INSERT INTO access_request_tasks (request_code, task, status, detail, updated_at)
VALUES (%s, %s, %s, %s, NOW())
ON CONFLICT (request_code, task)
DO UPDATE SET status = EXCLUDED.status, detail = EXCLUDED.detail, updated_at = NOW()
""",
(request_code, task, status, detail),
)
def _ensure_task_rows(conn, request_code: str, tasks: list[str]) -> None:
if not tasks:
return
conn.execute(
"""
INSERT INTO access_request_tasks (request_code, task, status, detail, updated_at)
SELECT %s, task, 'pending', NULL, NOW()
FROM UNNEST(%s::text[]) AS task
ON CONFLICT (request_code, task) DO NOTHING
""",
(request_code, tasks),
)
def _safe_error_detail(exc: Exception, fallback: str) -> str:
if isinstance(exc, RuntimeError):
msg = str(exc).strip()
if msg:
return msg
if isinstance(exc, httpx.HTTPStatusError):
detail = f"http {exc.response.status_code}"
try:
payload = exc.response.json()
msg: str | None = None
if isinstance(payload, dict):
raw = payload.get("errorMessage") or payload.get("error") or payload.get("message")
if isinstance(raw, str) and raw.strip():
msg = raw.strip()
elif isinstance(payload, str) and payload.strip():
msg = payload.strip()
if msg:
msg = " ".join(msg.split())
detail = f"{detail}: {msg[:200]}"
except Exception:
text = (exc.response.text or "").strip()
if text:
text = " ".join(text.split())
detail = f"{detail}: {text[:200]}"
return detail
if isinstance(exc, httpx.TimeoutException):
return "timeout"
return fallback
def _task_statuses(conn, request_code: str) -> dict[str, str]:
rows = conn.execute(
"SELECT task, status FROM access_request_tasks WHERE request_code = %s",
(request_code,),
).fetchall()
output: dict[str, str] = {}
for row in rows:
task = row.get("task") if isinstance(row, dict) else None
status = row.get("status") if isinstance(row, dict) else None
if isinstance(task, str) and isinstance(status, str):
output[task] = status
return output
def _all_tasks_ok(conn, request_code: str, tasks: list[str]) -> bool:
statuses = _task_statuses(conn, request_code)
for task in tasks:
if statuses.get(task) != "ok":
return False
return True
def provision_tasks_complete(conn, request_code: str) -> bool:
    """Return whether all required provisioning tasks are marked complete."""
    return all_tasks_ok(conn, request_code, list(REQUIRED_PROVISION_TASKS))
def provision_access_request(request_code: str) -> ProvisionResult:
"""Provision all downstream accounts required for an approved request.
Args:
request_code: Access request code being provisioned.
Returns:
A ``ProvisionResult`` describing whether provisioning reached a terminal
ready state or still needs another retry.
"""
if not request_code:
return ProvisionResult(ok=False, status="unknown")
if not admin_client().ready():
@ -183,7 +117,7 @@ def provision_access_request(request_code: str) -> ProvisionResult:
if status not in {"accounts_building", "awaiting_onboarding", "ready"}:
return ProvisionResult(ok=False, status=status or "unknown")
_ensure_task_rows(conn, request_code, required_tasks)
ensure_task_rows(conn, request_code, required_tasks)
if status == "accounts_building":
now = datetime.now(timezone.utc)
@ -276,9 +210,9 @@ def provision_access_request(request_code: str) -> ProvisionResult:
except Exception:
mailu_email = f"{username}@{settings.MAILU_DOMAIN}"
_upsert_task(conn, request_code, "keycloak_user", "ok", None)
upsert_task(conn, request_code, "keycloak_user", "ok", None)
except Exception as exc:
_upsert_task(conn, request_code, "keycloak_user", "error", _safe_error_detail(exc, "failed to ensure user"))
upsert_task(conn, request_code, "keycloak_user", "error", safe_error_detail(exc, "failed to ensure user"))
if not user_id:
return ProvisionResult(ok=False, status="accounts_building")
@ -310,13 +244,13 @@ def provision_access_request(request_code: str) -> ProvisionResult:
admin_client().reset_password(user_id, password_value, temporary=False)
if isinstance(initial_password, str) and initial_password:
_upsert_task(conn, request_code, "keycloak_password", "ok", None)
upsert_task(conn, request_code, "keycloak_password", "ok", None)
elif revealed_at is not None:
_upsert_task(conn, request_code, "keycloak_password", "ok", "initial password already revealed")
upsert_task(conn, request_code, "keycloak_password", "ok", "initial password already revealed")
else:
raise RuntimeError("initial password missing")
except Exception as exc:
_upsert_task(conn, request_code, "keycloak_password", "error", _safe_error_detail(exc, "failed to set password"))
upsert_task(conn, request_code, "keycloak_password", "error", safe_error_detail(exc, "failed to set password"))
# Task: group membership (default dev)
try:
@ -328,9 +262,9 @@ def provision_access_request(request_code: str) -> ProvisionResult:
if not gid:
raise RuntimeError("group missing")
admin_client().add_user_to_group(user_id, gid)
_upsert_task(conn, request_code, "keycloak_groups", "ok", None)
upsert_task(conn, request_code, "keycloak_groups", "ok", None)
except Exception as exc:
_upsert_task(conn, request_code, "keycloak_groups", "error", _safe_error_detail(exc, "failed to add groups"))
upsert_task(conn, request_code, "keycloak_groups", "error", safe_error_detail(exc, "failed to add groups"))
# Task: ensure mailu_app_password attribute exists
try:
@ -347,14 +281,14 @@ def provision_access_request(request_code: str) -> ProvisionResult:
existing = raw
if not existing:
admin_client().set_user_attribute(username, MAILU_APP_PASSWORD_ATTR, random_password())
_upsert_task(conn, request_code, "mailu_app_password", "ok", None)
upsert_task(conn, request_code, "mailu_app_password", "ok", None)
except Exception as exc:
_upsert_task(conn, request_code, "mailu_app_password", "error", _safe_error_detail(exc, "failed to set mail password"))
upsert_task(conn, request_code, "mailu_app_password", "error", safe_error_detail(exc, "failed to set mail password"))
# Task: trigger Mailu sync if configured
try:
if not settings.MAILU_SYNC_URL:
_upsert_task(conn, request_code, "mailu_sync", "ok", "sync disabled")
upsert_task(conn, request_code, "mailu_sync", "ok", "sync disabled")
else:
with httpx.Client(timeout=30) as client:
resp = client.post(
@ -363,23 +297,23 @@ def provision_access_request(request_code: str) -> ProvisionResult:
)
if resp.status_code != 200:
raise RuntimeError("mailu sync failed")
_upsert_task(conn, request_code, "mailu_sync", "ok", None)
upsert_task(conn, request_code, "mailu_sync", "ok", None)
except Exception as exc:
_upsert_task(conn, request_code, "mailu_sync", "error", _safe_error_detail(exc, "failed to sync mailu"))
upsert_task(conn, request_code, "mailu_sync", "error", safe_error_detail(exc, "failed to sync mailu"))
# Task: trigger Nextcloud mail sync if configured
try:
if not settings.NEXTCLOUD_NAMESPACE or not settings.NEXTCLOUD_MAIL_SYNC_CRONJOB:
_upsert_task(conn, request_code, "nextcloud_mail_sync", "ok", "sync disabled")
upsert_task(conn, request_code, "nextcloud_mail_sync", "ok", "sync disabled")
else:
result = trigger_nextcloud_mail_sync(username, wait=True)
if isinstance(result, dict) and result.get("status") == "ok":
_upsert_task(conn, request_code, "nextcloud_mail_sync", "ok", None)
upsert_task(conn, request_code, "nextcloud_mail_sync", "ok", None)
else:
status_val = result.get("status") if isinstance(result, dict) else "error"
_upsert_task(conn, request_code, "nextcloud_mail_sync", "error", str(status_val))
upsert_task(conn, request_code, "nextcloud_mail_sync", "error", str(status_val))
except Exception as exc:
- _upsert_task(conn, request_code, "nextcloud_mail_sync", "error", _safe_error_detail(exc, "failed to sync nextcloud"))
+ upsert_task(conn, request_code, "nextcloud_mail_sync", "error", safe_error_detail(exc, "failed to sync nextcloud"))
# Task: ensure wger account exists
try:
@@ -417,9 +351,9 @@ def provision_access_request(request_code: str) -> ProvisionResult:
now_iso = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
admin_client().set_user_attribute(username, WGER_PASSWORD_UPDATED_ATTR, now_iso)
- _upsert_task(conn, request_code, "wger_account", "ok", None)
+ upsert_task(conn, request_code, "wger_account", "ok", None)
except Exception as exc:
- _upsert_task(conn, request_code, "wger_account", "error", _safe_error_detail(exc, "failed to provision wger"))
+ upsert_task(conn, request_code, "wger_account", "error", safe_error_detail(exc, "failed to provision wger"))
# Task: ensure firefly account exists
try:
@@ -457,14 +391,14 @@ def provision_access_request(request_code: str) -> ProvisionResult:
now_iso = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
admin_client().set_user_attribute(username, FIREFLY_PASSWORD_UPDATED_ATTR, now_iso)
- _upsert_task(conn, request_code, "firefly_account", "ok", None)
+ upsert_task(conn, request_code, "firefly_account", "ok", None)
except Exception as exc:
- _upsert_task(
+ upsert_task(
conn,
request_code,
"firefly_account",
"error",
- _safe_error_detail(exc, "failed to provision firefly"),
+ safe_error_detail(exc, "failed to provision firefly"),
)
# Task: ensure Vaultwarden account exists (invite flow)
@@ -499,9 +433,9 @@ def provision_access_request(request_code: str) -> ProvisionResult:
vaultwarden_email = fallback_email
result = fallback_result
if result.ok:
- _upsert_task(conn, request_code, "vaultwarden_invite", "ok", result.status)
+ upsert_task(conn, request_code, "vaultwarden_invite", "ok", result.status)
else:
- _upsert_task(conn, request_code, "vaultwarden_invite", "error", result.detail or result.status)
+ upsert_task(conn, request_code, "vaultwarden_invite", "error", result.detail or result.status)
# Persist Vaultwarden association/status on the Keycloak user so the portal can display it quickly.
try:
@@ -512,15 +446,15 @@ def provision_access_request(request_code: str) -> ProvisionResult:
except Exception:
pass
except Exception as exc:
- _upsert_task(
+ upsert_task(
conn,
request_code,
"vaultwarden_invite",
"error",
- _safe_error_detail(exc, "failed to provision vaultwarden"),
+ safe_error_detail(exc, "failed to provision vaultwarden"),
)
- if _all_tasks_ok(conn, request_code, required_tasks):
+ if all_tasks_ok(conn, request_code, required_tasks):
conn.execute(
"""
UPDATE access_requests

View File

@@ -0,0 +1,122 @@
"""Task-row helpers for access request provisioning."""
from __future__ import annotations
import httpx
REQUIRED_PROVISION_TASKS: tuple[str, ...] = (
"keycloak_user",
"keycloak_password",
"keycloak_groups",
"mailu_app_password",
"mailu_sync",
"nextcloud_mail_sync",
"wger_account",
"firefly_account",
"vaultwarden_invite",
)
def upsert_task(conn, request_code: str, task: str, status: str, detail: str | None = None) -> None:
"""Persist the latest status for one provisioning task.
WHY: provisioning is retried across requests, so task rows need to be
idempotent and update in place rather than accumulating duplicates.
"""
conn.execute(
"""
INSERT INTO access_request_tasks (request_code, task, status, detail, updated_at)
VALUES (%s, %s, %s, %s, NOW())
ON CONFLICT (request_code, task)
DO UPDATE SET status = EXCLUDED.status, detail = EXCLUDED.detail, updated_at = NOW()
""",
(request_code, task, status, detail),
)
def ensure_task_rows(conn, request_code: str, tasks: list[str]) -> None:
"""Create pending task rows for any provisioning work not yet tracked.
Args:
conn: Database connection with an ``execute`` method.
request_code: Access request identifier.
tasks: Task names that must exist before provisioning continues.
Returns:
None.
"""
if not tasks:
return
conn.execute(
"""
INSERT INTO access_request_tasks (request_code, task, status, detail, updated_at)
SELECT %s, task, 'pending', NULL, NOW()
FROM UNNEST(%s::text[]) AS task
ON CONFLICT (request_code, task) DO NOTHING
""",
(request_code, tasks),
)
def safe_error_detail(exc: Exception, fallback: str) -> str:
"""Return a bounded, operator-useful detail string for task failures.
WHY: task detail is shown back through the portal UI, so upstream errors
need to be specific enough to act on without dumping unbounded responses.
"""
if isinstance(exc, RuntimeError):
msg = str(exc).strip()
if msg:
return msg
if isinstance(exc, httpx.HTTPStatusError):
detail = f"http {exc.response.status_code}"
try:
payload = exc.response.json()
msg: str | None = None
if isinstance(payload, dict):
raw = payload.get("errorMessage") or payload.get("error") or payload.get("message")
if isinstance(raw, str) and raw.strip():
msg = raw.strip()
elif isinstance(payload, str) and payload.strip():
msg = payload.strip()
if msg:
msg = " ".join(msg.split())
detail = f"{detail}: {msg[:200]}"
except Exception:
text = (exc.response.text or "").strip()
if text:
text = " ".join(text.split())
detail = f"{detail}: {text[:200]}"
return detail
if isinstance(exc, httpx.TimeoutException):
return "timeout"
return fallback
def task_statuses(conn, request_code: str) -> dict[str, str]:
"""Load current task statuses keyed by task name."""
rows = conn.execute(
"SELECT task, status FROM access_request_tasks WHERE request_code = %s",
(request_code,),
).fetchall()
output: dict[str, str] = {}
for row in rows:
task = row.get("task") if isinstance(row, dict) else None
status = row.get("status") if isinstance(row, dict) else None
if isinstance(task, str) and isinstance(status, str):
output[task] = status
return output
def all_tasks_ok(conn, request_code: str, tasks: list[str]) -> bool:
"""Return whether every required task is currently marked ``ok``."""
statuses = task_statuses(conn, request_code)
for task in tasks:
if statuses.get(task) != "ok":
return False
return True
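`safe_error_detail` bounds what operators see in the portal. A self-contained sketch of just its truncation rule for HTTP failures (the function name and 200-character cap mirror the code above; everything else is simplified):

```python
# Collapse internal whitespace, then cap the upstream message at 200
# characters so the portal UI never renders an unbounded payload.
def bound_detail(status_code: int, raw: str) -> str:
    detail = f"http {status_code}"
    msg = " ".join(raw.split())
    if msg:
        detail = f"{detail}: {msg[:200]}"
    return detail

print(bound_detail(502, "  upstream\n exploded  "))  # http 502: upstream exploded
```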

View File

@@ -8,6 +8,12 @@ _RATE_BUCKETS: dict[str, dict[str, list[float]]] = {}
def rate_limit_allow(ip: str, *, key: str, limit: int, window_sec: int) -> bool:
"""Return whether a request bucket still has capacity.
WHY: access-request endpoints need a simple in-process guard that is easy to
exercise in tests and cheap to apply before any heavier work starts.
"""
if limit <= 0:
return True
now = time.time()

View File

@@ -0,0 +1,254 @@
from __future__ import annotations
from datetime import datetime, timezone
from typing import Any
from flask import jsonify, redirect, request
def register_access_request_onboarding(app, deps) -> None:
"""Register access request onboarding routes."""
@app.route("/api/access/request/onboarding/attest", methods=["POST"])
def request_access_onboarding_attest() -> Any:
"""Record or clear a user-attested onboarding step.
WHY: onboarding mixes manual tasks with Keycloak-managed tasks, so this
route enforces prerequisites and only accepts attestations for UI-owned
steps.
"""
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
payload = request.get_json(silent=True) or {}
code = (payload.get("request_code") or payload.get("code") or "").strip()
step = (payload.get("step") or "").strip()
completed = payload.get("completed")
vaultwarden_claim = bool(payload.get("vaultwarden_claim"))
if not code:
return jsonify({"error": "request_code is required"}), 400
if step not in deps.ONBOARDING_STEPS:
return jsonify({"error": "invalid step"}), 400
if step in deps.KEYCLOAK_MANAGED_STEPS:
return jsonify({"error": "step is managed by keycloak"}), 400
username = ""
token_groups: set[str] = set()
bearer = request.headers.get("Authorization", "")
if bearer:
parts = bearer.split(None, 1)
if len(parts) != 2 or parts[0].lower() != "bearer":
return jsonify({"error": "invalid token"}), 401
token = parts[1].strip()
if not token:
return jsonify({"error": "invalid token"}), 401
try:
claims = deps.oidc_client().verify(token)
except Exception:
return jsonify({"error": "invalid token"}), 401
username = claims.get("preferred_username") or ""
groups = claims.get("groups")
if isinstance(groups, list):
token_groups = {g.lstrip("/") for g in groups if isinstance(g, str) and g}
try:
with deps.connect() as conn:
row = conn.execute(
"SELECT username, status, approval_flags, contact_email FROM access_requests WHERE request_code = %s",
(code,),
).fetchone()
if not row:
return jsonify({"error": "not found"}), 404
if username and (row.get("username") or "") != username:
return jsonify({"error": "forbidden"}), 403
status = deps._normalize_status(row.get("status") or "")
if status not in {"awaiting_onboarding", "ready"}:
return jsonify({"error": "onboarding not available"}), 409
mark_done = True
if isinstance(completed, bool):
mark_done = completed
request_username = row.get("username") or ""
approval_flags = deps._normalize_flag_list(row.get("approval_flags"))
contact_email = (row.get("contact_email") or "").strip()
if mark_done:
prerequisites = deps.ONBOARDING_STEP_PREREQUISITES.get(step, set())
if prerequisites:
current_completed = deps._completed_onboarding_steps(conn, code, request_username)
missing = sorted(prerequisites - current_completed)
if missing:
return jsonify({"error": "step is blocked", "blocked_by": missing}), 409
if step in {"vaultwarden_master_password", "vaultwarden_store_temp_password"}:
if not deps._password_rotation_requested(conn, code):
try:
deps._request_keycloak_password_rotation(conn, code, request_username)
except Exception:
return jsonify({"error": "failed to request keycloak password rotation"}), 502
if step == "vaultwarden_master_password":
if vaultwarden_claim and not username:
return jsonify({"error": "login required"}), 401
grandfathered = (
deps.VAULTWARDEN_GRANDFATHERED_FLAG in approval_flags
or deps.VAULTWARDEN_GRANDFATHERED_FLAG in token_groups
or deps._user_in_group(request_username, deps.VAULTWARDEN_GRANDFATHERED_FLAG)
)
if vaultwarden_claim and not grandfathered:
return jsonify({"error": "vaultwarden claim not allowed"}), 403
if vaultwarden_claim and not deps.admin_client().ready():
return jsonify({"error": "keycloak admin unavailable"}), 503
if request_username and deps.admin_client().ready():
try:
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
if vaultwarden_claim:
recovery_email = deps._resolve_recovery_email(request_username, contact_email)
if not recovery_email:
return jsonify({"error": "recovery email missing"}), 409
deps.admin_client().set_user_attribute(
request_username,
"vaultwarden_email",
recovery_email,
)
deps.admin_client().set_user_attribute(
request_username,
"vaultwarden_status",
"grandfathered",
)
deps.admin_client().set_user_attribute(
request_username,
"vaultwarden_synced_at",
now,
)
else:
deps.admin_client().set_user_attribute(
request_username,
"vaultwarden_status",
"already_present",
)
deps.admin_client().set_user_attribute(
request_username,
"vaultwarden_master_password_set_at",
now,
)
except Exception:
return jsonify({"error": "failed to update vaultwarden status"}), 502
conn.execute(
"""
INSERT INTO access_request_onboarding_steps (request_code, step)
VALUES (%s, %s)
ON CONFLICT (request_code, step) DO NOTHING
""",
(code, step),
)
else:
conn.execute(
"DELETE FROM access_request_onboarding_steps WHERE request_code = %s AND step = %s",
(code, step),
)
# Re-evaluate completion to update request status to ready if applicable.
status = deps._advance_status(conn, code, request_username, status)
onboarding_payload = deps._onboarding_payload(conn, code, request_username)
except Exception:
return jsonify({"error": "failed to update onboarding"}), 502
return jsonify(
{
"ok": True,
"status": status,
"onboarding": onboarding_payload,
}
)
@app.route("/api/access/request/onboarding/keycloak-password-rotate", methods=["POST"])
def request_access_onboarding_keycloak_password_rotate() -> Any:
"""Request Keycloak password rotation for an onboarding user."""
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
payload = request.get_json(silent=True) or {}
code = (payload.get("request_code") or payload.get("code") or "").strip()
if not code:
return jsonify({"error": "request_code is required"}), 400
token_username = ""
bearer = request.headers.get("Authorization", "")
if bearer:
parts = bearer.split(None, 1)
if len(parts) != 2 or parts[0].lower() != "bearer":
return jsonify({"error": "invalid token"}), 401
token = parts[1].strip()
if not token:
return jsonify({"error": "invalid token"}), 401
try:
claims = deps.oidc_client().verify(token)
except Exception:
return jsonify({"error": "invalid token"}), 401
token_username = claims.get("preferred_username") or ""
if not deps.admin_client().ready():
return jsonify({"error": "keycloak admin unavailable"}), 503
try:
with deps.connect() as conn:
row = conn.execute(
"SELECT username, status FROM access_requests WHERE request_code = %s",
(code,),
).fetchone()
if not row:
return jsonify({"error": "not found"}), 404
request_username = row.get("username") or ""
if token_username and request_username != token_username:
return jsonify({"error": "forbidden"}), 403
status = deps._normalize_status(row.get("status") or "")
if status not in {"awaiting_onboarding", "ready"}:
return jsonify({"error": "onboarding not available"}), 409
prerequisites = deps.ONBOARDING_STEP_PREREQUISITES.get("keycloak_password_rotated", set())
if prerequisites:
current_completed = deps._completed_onboarding_steps(conn, code, request_username)
missing = sorted(prerequisites - current_completed)
if missing:
return jsonify({"error": "step is blocked", "blocked_by": missing}), 409
user = deps.admin_client().find_user(request_username) or {}
user_id = user.get("id") if isinstance(user, dict) else None
if not isinstance(user_id, str) or not user_id:
return jsonify({"error": "keycloak user not found"}), 409
full = deps.admin_client().get_user(user_id)
actions = full.get("requiredActions")
actions_list: list[str] = []
if isinstance(actions, list):
actions_list = [a for a in actions if isinstance(a, str)]
rotation_requested = deps._password_rotation_requested(conn, code)
already_rotated = rotation_requested and "UPDATE_PASSWORD" not in actions_list
if not already_rotated:
if "UPDATE_PASSWORD" not in actions_list:
actions_list.append("UPDATE_PASSWORD")
deps.admin_client().update_user_safe(user_id, {"requiredActions": actions_list})
if not rotation_requested:
conn.execute(
"""
INSERT INTO access_request_onboarding_artifacts (request_code, artifact, value_hash)
VALUES (%s, %s, NOW()::text)
ON CONFLICT (request_code, artifact) DO NOTHING
""",
(code, deps._KEYCLOAK_PASSWORD_ROTATION_REQUESTED_ARTIFACT),
)
onboarding_payload = deps._onboarding_payload(conn, code, request_username)
except Exception:
return jsonify({"error": "failed to request password rotation"}), 502
return jsonify({"ok": True, "status": status, "onboarding": onboarding_payload})

View File

@@ -0,0 +1,80 @@
"""Onboarding step policy for access requests."""
from __future__ import annotations
ONBOARDING_STEPS: tuple[str, ...] = (
"vaultwarden_master_password",
"vaultwarden_store_temp_password",
"vaultwarden_browser_extension",
"vaultwarden_mobile_app",
"keycloak_password_rotated",
"element_recovery_key",
"element_mobile_app",
"mail_client_setup",
"nextcloud_web_access",
"nextcloud_mail_integration",
"nextcloud_desktop_app",
"nextcloud_mobile_app",
"budget_encryption_ack",
"firefly_password_rotated",
"firefly_mobile_app",
"wger_password_rotated",
"wger_mobile_app",
"jellyfin_web_access",
"jellyfin_mobile_app",
"jellyfin_tv_setup",
)
ONBOARDING_OPTIONAL_STEPS: set[str] = {
"element_mobile_app",
"nextcloud_desktop_app",
"nextcloud_mobile_app",
"firefly_mobile_app",
"jellyfin_mobile_app",
"jellyfin_tv_setup",
}
ONBOARDING_REQUIRED_STEPS: tuple[str, ...] = (
"vaultwarden_master_password",
"vaultwarden_browser_extension",
"vaultwarden_mobile_app",
"keycloak_password_rotated",
"element_recovery_key",
"mail_client_setup",
"nextcloud_web_access",
"nextcloud_mail_integration",
"budget_encryption_ack",
"firefly_password_rotated",
"wger_password_rotated",
"jellyfin_web_access",
)
KEYCLOAK_MANAGED_STEPS: set[str] = {
"keycloak_password_rotated",
"nextcloud_mail_integration",
}
_KEYCLOAK_PASSWORD_ROTATION_REQUESTED_ARTIFACT = "keycloak_password_rotation_requested_at"
ONBOARDING_STEP_PREREQUISITES: dict[str, set[str]] = {
"vaultwarden_master_password": set(),
"vaultwarden_store_temp_password": {"vaultwarden_master_password"},
"vaultwarden_browser_extension": {"vaultwarden_master_password"},
"vaultwarden_mobile_app": {"vaultwarden_master_password"},
"keycloak_password_rotated": {"vaultwarden_master_password"},
"element_recovery_key": {"keycloak_password_rotated"},
"element_mobile_app": {"element_recovery_key"},
"mail_client_setup": {"vaultwarden_master_password"},
"nextcloud_web_access": {"vaultwarden_master_password"},
"nextcloud_mail_integration": {"nextcloud_web_access"},
"nextcloud_desktop_app": {"nextcloud_web_access"},
"nextcloud_mobile_app": {"nextcloud_web_access"},
"budget_encryption_ack": {"nextcloud_mail_integration"},
"firefly_password_rotated": {"element_recovery_key"},
"wger_password_rotated": {"firefly_password_rotated"},
"wger_mobile_app": {"wger_password_rotated"},
"jellyfin_web_access": {"vaultwarden_master_password"},
"jellyfin_mobile_app": {"jellyfin_web_access"},
"jellyfin_tv_setup": {"jellyfin_web_access"},
}
VAULTWARDEN_GRANDFATHERED_FLAG = "vaultwarden_grandfathered"
_VAULTWARDEN_READY_STATUSES = {"already_present", "active", "ready", "grandfathered"}
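The prerequisite map above is what drives the 409 "step is blocked" responses. A small sketch of the gating rule, using a trimmed copy of the mapping (the helper name `blocked_by` is illustrative, not part of the module):

```python
# Trimmed excerpt of the prerequisite DAG defined above.
PREREQS: dict[str, set[str]] = {
    "vaultwarden_master_password": set(),
    "keycloak_password_rotated": {"vaultwarden_master_password"},
    "element_recovery_key": {"keycloak_password_rotated"},
}

def blocked_by(step: str, completed: set[str]) -> list[str]:
    # Sorted missing prerequisites, matching the blocked_by field in the 409 body.
    return sorted(PREREQS.get(step, set()) - completed)

print(blocked_by("element_recovery_key", {"vaultwarden_master_password"}))
# ['keycloak_password_rotated']
```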

View File

@@ -0,0 +1,495 @@
from __future__ import annotations
from datetime import datetime, timezone
import hashlib
import hmac
import re
import secrets
import string
from typing import Any
from urllib.parse import quote
from flask import request
from .. import ariadne_client
from ..db import connect, configured
from ..keycloak import admin_client, oidc_client
from ..mailer import MailerError, access_request_verification_body, send_text_email
from ..rate_limit import rate_limit_allow
from ..provisioning import provision_access_request, provision_tasks_complete
from .. import settings
from .access_request_onboarding_policy import (
KEYCLOAK_MANAGED_STEPS,
ONBOARDING_OPTIONAL_STEPS,
ONBOARDING_REQUIRED_STEPS,
ONBOARDING_STEP_PREREQUISITES,
ONBOARDING_STEPS,
VAULTWARDEN_GRANDFATHERED_FLAG,
_KEYCLOAK_PASSWORD_ROTATION_REQUESTED_ARTIFACT,
_VAULTWARDEN_READY_STATUSES,
)
def _extract_request_payload() -> tuple[str, str, str, str, str]:
payload = request.get_json(silent=True) or {}
username = (payload.get("username") or "").strip()
email = (payload.get("email") or "").strip()
note = (payload.get("note") or "").strip()
first_name = (payload.get("first_name") or "").strip()
last_name = (payload.get("last_name") or "").strip()
return username, email, note, first_name, last_name
def _normalize_name(value: str) -> str:
return " ".join(value.strip().split())
def _validate_name(value: str, *, label: str, required: bool) -> str | None:
if any(ch in "\r\n\t" for ch in value):
return f"{label} contains invalid whitespace"
cleaned = _normalize_name(value)
if not cleaned:
return f"{label} is required" if required else None
if len(cleaned) > 80:
return f"{label} must be 1-80 characters"
return None
def _validate_username(username: str) -> str | None:
if not username:
return "username is required"
if len(username) < 3 or len(username) > 32:
return "username must be 3-32 characters"
if not re.fullmatch(r"[a-zA-Z0-9._-]+", username):
return "username contains invalid characters"
return None
def _random_request_code(username: str) -> str:
suffix = "".join(secrets.choice(string.ascii_uppercase + string.digits) for _ in range(10))
return f"{username}~{suffix}"
def _client_ip() -> str:
xff = (request.headers.get("X-Forwarded-For") or "").strip()
if xff:
return xff.split(",", 1)[0].strip() or "unknown"
x_real_ip = (request.headers.get("X-Real-IP") or "").strip()
if x_real_ip:
return x_real_ip
return request.remote_addr or "unknown"
EMAIL_VERIFY_PENDING_STATUS = "pending_email_verification"
def _hash_verification_token(token: str) -> str:
return hashlib.sha256(token.encode("utf-8")).hexdigest()
def _verify_url(request_code: str, token: str) -> str:
base = settings.PORTAL_PUBLIC_BASE_URL.rstrip("/")
return f"{base}/api/access/request/verify-link?code={quote(request_code, safe='')}&token={quote(token, safe='')}"
def _send_verification_email(*, request_code: str, email: str, token: str) -> None:
verify_url = _verify_url(request_code, token)
send_text_email(
to_addr=email,
subject="Atlas: confirm your email",
body=access_request_verification_body(request_code=request_code, verify_url=verify_url),
)
class VerificationError(Exception):
"""Describe an email verification failure with an HTTP status."""
def __init__(self, status_code: int, message: str) -> None:
super().__init__(message)
self.status_code = status_code
self.message = message
def _verify_request(conn, code: str, token: str) -> str:
"""Validate email proof and atomically advance a pending request."""
row = conn.execute(
"""
SELECT status, email_verification_token_hash, email_verification_sent_at, email_verified_at
FROM access_requests
WHERE request_code = %s
""",
(code,),
).fetchone()
if not row:
raise VerificationError(404, "not found")
status = _normalize_status(row.get("status") or "")
if status != EMAIL_VERIFY_PENDING_STATUS:
return status
stored_hash = str(row.get("email_verification_token_hash") or "")
if not stored_hash:
raise VerificationError(409, "verification token missing")
provided_hash = _hash_verification_token(token)
if not hmac.compare_digest(stored_hash, provided_hash):
raise VerificationError(401, "invalid token")
sent_at = row.get("email_verification_sent_at")
if isinstance(sent_at, datetime):
now = datetime.now(timezone.utc)
if sent_at.tzinfo is None:
sent_at = sent_at.replace(tzinfo=timezone.utc)
age_sec = (now - sent_at).total_seconds()
if age_sec > settings.ACCESS_REQUEST_EMAIL_VERIFY_TTL_SEC:
raise VerificationError(410, "verification token expired")
conn.execute(
"""
UPDATE access_requests
SET status = 'pending',
email_verified_at = NOW(),
email_verification_token_hash = NULL
WHERE request_code = %s AND status = %s
""",
(code, EMAIL_VERIFY_PENDING_STATUS),
)
return "pending"
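The verification flow above never stores the raw token, only its SHA-256 hash, and compares hashes in constant time. The core handshake can be sketched as:

```python
import hashlib
import hmac
import secrets

# Only the hash is persisted; the raw token travels in the emailed link.
def hash_token(token: str) -> str:
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

token = secrets.token_urlsafe(32)   # sent to the user
stored_hash = hash_token(token)     # persisted on the access request row

# Constant-time comparison avoids leaking how many characters matched.
assert hmac.compare_digest(stored_hash, hash_token(token))
assert not hmac.compare_digest(stored_hash, hash_token("wrong-token"))
```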
def _normalize_status(status: str) -> str:
cleaned = (status or "").strip().lower()
if cleaned == "approved":
return "accounts_building"
return cleaned or "unknown"
def _fetch_completed_onboarding_steps(conn, request_code: str) -> set[str]:
"""Return manually attested onboarding steps for one request."""
rows = conn.execute(
"SELECT step FROM access_request_onboarding_steps WHERE request_code = %s",
(request_code,),
).fetchall()
completed: set[str] = set()
for row in rows:
step = row.get("step") if isinstance(row, dict) else None
if isinstance(step, str) and step:
completed.add(step)
return completed
def _normalize_flag_list(raw: Any) -> set[str]:
if isinstance(raw, list):
return {item for item in raw if isinstance(item, str) and item}
if isinstance(raw, str) and raw:
return {raw}
return set()
def _fetch_request_flags_and_email(conn, request_code: str) -> tuple[set[str], str]:
"""Return approval flags and contact email used by onboarding decisions."""
row = conn.execute(
"SELECT approval_flags, contact_email FROM access_requests WHERE request_code = %s",
(request_code,),
).fetchone()
if not row:
return set(), ""
flags = _normalize_flag_list(row.get("approval_flags"))
email = row.get("contact_email") if isinstance(row, dict) else ""
return flags, (email or "").strip()
def _user_in_group(username: str, group_name: str) -> bool:
"""Return whether a Keycloak user belongs to a named group."""
if not username or not group_name:
return False
if not admin_client().ready():
return False
try:
user = admin_client().find_user(username) or {}
user_id = user.get("id") if isinstance(user, dict) else None
if not isinstance(user_id, str) or not user_id:
return False
groups = admin_client().list_user_groups(user_id)
except Exception:
return False
return group_name in groups
def _vaultwarden_grandfathered(conn, request_code: str, username: str) -> tuple[bool, str]:
flags, contact_email = _fetch_request_flags_and_email(conn, request_code)
if VAULTWARDEN_GRANDFATHERED_FLAG in flags:
return True, contact_email
if _user_in_group(username, VAULTWARDEN_GRANDFATHERED_FLAG):
return True, contact_email
return False, contact_email
def _resolve_recovery_email(username: str, fallback: str) -> str:
"""Find the best recovery email for Vaultwarden onboarding."""
if username and admin_client().ready():
try:
user = admin_client().find_user(username) or {}
user_id = user.get("id") if isinstance(user, dict) else None
if isinstance(user_id, str) and user_id:
full = admin_client().get_user(user_id)
email = full.get("email")
if isinstance(email, str) and email.strip():
return email.strip()
except Exception:
pass
return (fallback or "").strip()
def _password_rotation_requested(conn, request_code: str) -> bool:
"""Return whether Keycloak password rotation was requested for this request."""
row = conn.execute(
"""
SELECT 1
FROM access_request_onboarding_artifacts
WHERE request_code = %s AND artifact = %s
LIMIT 1
""",
(request_code, _KEYCLOAK_PASSWORD_ROTATION_REQUESTED_ARTIFACT),
).fetchone()
return bool(row)
def _request_keycloak_password_rotation(conn, request_code: str, username: str) -> None:
"""Require Keycloak password rotation and persist the request marker."""
if not username:
raise ValueError("username missing")
if not admin_client().ready():
raise RuntimeError("keycloak admin unavailable")
user = admin_client().find_user(username) or {}
user_id = user.get("id") if isinstance(user, dict) else None
if not isinstance(user_id, str) or not user_id:
raise RuntimeError("keycloak user not found")
full = admin_client().get_user(user_id)
actions = full.get("requiredActions")
actions_list: list[str] = []
if isinstance(actions, list):
actions_list = [a for a in actions if isinstance(a, str)]
if "UPDATE_PASSWORD" not in actions_list:
actions_list.append("UPDATE_PASSWORD")
admin_client().update_user_safe(user_id, {"requiredActions": actions_list})
conn.execute(
"""
INSERT INTO access_request_onboarding_artifacts (request_code, artifact, value_hash)
VALUES (%s, %s, NOW()::text)
ON CONFLICT (request_code, artifact) DO NOTHING
""",
(request_code, _KEYCLOAK_PASSWORD_ROTATION_REQUESTED_ARTIFACT),
)
def _extract_attr(attrs: Any, key: str) -> str:
"""Return the first string value for a Keycloak attribute."""
if not isinstance(attrs, dict):
return ""
raw = attrs.get(key)
if isinstance(raw, list):
for item in raw:
if isinstance(item, str) and item.strip():
return item.strip()
return ""
if isinstance(raw, str) and raw.strip():
return raw.strip()
return ""
def _vaultwarden_status_for_user(username: str) -> str:
"""Read the Vaultwarden lifecycle status mirrored on a Keycloak user."""
if not username:
return ""
if not admin_client().ready():
return ""
try:
user = admin_client().find_user(username) or {}
user_id = user.get("id") if isinstance(user, dict) else None
if not isinstance(user_id, str) or not user_id:
return ""
full = admin_client().get_user(user_id)
attrs = full.get("attributes") if isinstance(full, dict) else {}
return _extract_attr(attrs, "vaultwarden_status")
except Exception:
return ""
def _auto_completed_service_steps(attrs: Any) -> set[str]:
"""Infer onboarding steps completed by backend service automation."""
completed: set[str] = set()
if not isinstance(attrs, dict):
return completed
vaultwarden_status = _extract_attr(attrs, "vaultwarden_status")
vaultwarden_master = _extract_attr(attrs, "vaultwarden_master_password_set_at")
if vaultwarden_master or vaultwarden_status in _VAULTWARDEN_READY_STATUSES:
completed.add("vaultwarden_master_password")
nextcloud_synced_at = _extract_attr(attrs, "nextcloud_mail_synced_at")
if nextcloud_synced_at:
completed.add("nextcloud_mail_integration")
firefly_rotated_at = _extract_attr(attrs, "firefly_password_rotated_at")
if firefly_rotated_at:
completed.add("firefly_password_rotated")
wger_rotated_at = _extract_attr(attrs, "wger_password_rotated_at")
if wger_rotated_at:
completed.add("wger_password_rotated")
return completed
def _auto_completed_keycloak_steps(conn, request_code: str, username: str) -> set[str]:
"""Infer onboarding steps from Keycloak profile state."""
if not username:
return set()
if not admin_client().ready():
return set()
if not request_code:
return set()
completed: set[str] = set()
try:
user = admin_client().find_user(username) or {}
user_id = user.get("id") if isinstance(user, dict) else None
if not isinstance(user_id, str) or not user_id:
return set()
full = {}
try:
full = admin_client().get_user(user_id)
except Exception:
full = user if isinstance(user, dict) else {}
attrs = full.get("attributes") if isinstance(full, dict) else {}
completed |= _auto_completed_service_steps(attrs)
actions = full.get("requiredActions")
required_actions: set[str] = set()
actions_list: list[str] = []
if isinstance(actions, list):
actions_list = [a for a in actions if isinstance(a, str)]
required_actions = set(actions_list)
if _password_rotation_requested(conn, request_code) and "UPDATE_PASSWORD" not in required_actions:
completed.add("keycloak_password_rotated")
# Backfill: earlier accounts were created with CONFIGURE_TOTP as a required action,
# which forces users to enroll MFA at first login. We no longer require that, so
# remove it if present.
if "CONFIGURE_TOTP" in required_actions:
try:
admin_client().update_user_safe(
user_id,
{"requiredActions": [a for a in actions_list if a != "CONFIGURE_TOTP"]},
)
except Exception:
pass
except Exception:
return set()
return completed
def _completed_onboarding_steps(conn, request_code: str, username: str) -> set[str]:
completed = _fetch_completed_onboarding_steps(conn, request_code)
return completed | _auto_completed_keycloak_steps(conn, request_code, username)
def _automation_ready(conn, request_code: str, username: str) -> bool:
"""Return whether account provisioning has finished enough for onboarding."""
if not username:
return False
if not admin_client().ready():
return False
# Prefer task-based readiness when we have task rows for the request.
task_row = conn.execute(
"SELECT 1 FROM access_request_tasks WHERE request_code = %s LIMIT 1",
(request_code,),
).fetchone()
if task_row:
return provision_tasks_complete(conn, request_code)
# Fallback for legacy requests: confirm user exists and has a mail app password.
try:
user = admin_client().find_user(username)
if not user:
return False
user_id = user.get("id") if isinstance(user, dict) else None
if not user_id:
return False
full = admin_client().get_user(str(user_id))
attrs = full.get("attributes") or {}
if not isinstance(attrs, dict):
return False
raw_pw = attrs.get("mailu_app_password")
if isinstance(raw_pw, list):
return bool(raw_pw and isinstance(raw_pw[0], str) and raw_pw[0])
return bool(isinstance(raw_pw, str) and raw_pw)
except Exception:
return False
def _advance_status(conn, request_code: str, username: str, status: str) -> str:
"""Advance an access request through automatic status transitions."""
status = _normalize_status(status)
if status == "accounts_building" and _automation_ready(conn, request_code, username):
conn.execute(
"UPDATE access_requests SET status = 'awaiting_onboarding' WHERE request_code = %s AND status = 'accounts_building'",
(request_code,),
)
return "awaiting_onboarding"
if status == "awaiting_onboarding":
completed = _completed_onboarding_steps(conn, request_code, username)
required_steps = set(ONBOARDING_REQUIRED_STEPS)
grandfathered, _ = _vaultwarden_grandfathered(conn, request_code, username)
vaultwarden_status = _vaultwarden_status_for_user(username)
if grandfathered and vaultwarden_status == "grandfathered":
required_steps.add("vaultwarden_store_temp_password")
if required_steps.issubset(completed):
conn.execute(
"UPDATE access_requests SET status = 'ready' WHERE request_code = %s AND status = 'awaiting_onboarding'",
(request_code,),
)
return "ready"
return status
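As a sketch, the automatic transitions `_advance_status` performs reduce to a two-entry state table; manual transitions (approval, rejection) happen elsewhere, and each step only advances when its completion condition holds:

```python
# Automatic forward transitions; all other statuses are left untouched.
TRANSITIONS = {
    "accounts_building": "awaiting_onboarding",  # provisioning finished
    "awaiting_onboarding": "ready",              # required onboarding steps done
}

def next_status(status: str, condition_met: bool) -> str:
    """Advance one automatic step when its completion condition holds."""
    return TRANSITIONS.get(status, status) if condition_met else status
```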
def _onboarding_payload(conn, request_code: str, username: str) -> dict[str, Any]:
"""Build the onboarding progress payload returned to the frontend."""
completed_steps = sorted(_completed_onboarding_steps(conn, request_code, username))
password_rotation_requested = _password_rotation_requested(conn, request_code)
grandfathered, contact_email = _vaultwarden_grandfathered(conn, request_code, username)
recovery_email = _resolve_recovery_email(username, contact_email) if grandfathered else ""
vaultwarden_status = _vaultwarden_status_for_user(username)
vaultwarden_matched = grandfathered and vaultwarden_status == "grandfathered"
required_steps = list(ONBOARDING_REQUIRED_STEPS)
if vaultwarden_matched:
required_steps.append("vaultwarden_store_temp_password")
return {
"required_steps": required_steps,
"optional_steps": sorted(ONBOARDING_OPTIONAL_STEPS),
"completed_steps": completed_steps,
"keycloak": {
"password_rotation_requested": password_rotation_requested,
},
"vaultwarden": {
"grandfathered": grandfathered,
"recovery_email": recovery_email,
"matched": vaultwarden_matched,
},
}
# Keep the historical access_requests module patch surface intact for tests and
# callers while the route handlers live in smaller focused modules.
__all__ = [name for name in globals() if not name.startswith("__")]
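The comprehension above exports every non-dunder name, including single-underscore helpers, so tests can keep patching them through this module. A minimal sketch of its behavior against a hypothetical module namespace:

```python
import types

# Hypothetical namespace standing in for the access_requests module.
mod = types.ModuleType("access_requests")
mod.__dict__["_advance_status"] = lambda *args: "ready"  # single-underscore helper
mod.__dict__["__version__"] = "1.0"                      # dunder, filtered out

# Same comprehension as above: everything except dunders is exported, so
# tests can monkeypatch private helpers through the module surface.
exported = [name for name in vars(mod) if not name.startswith("__")]
```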

View File

@ -0,0 +1,228 @@
from __future__ import annotations
from datetime import datetime, timezone
from typing import Any
from flask import jsonify, request
def register_access_request_status(app, deps) -> None:
"""Register access request status routes."""
@app.route("/api/access/request/status", methods=["POST"])
def request_access_status() -> Any:
"""Return current provisioning and onboarding status for a request.
WHY: this endpoint is polled by the public flow, so it also advances
safe automatic transitions before rendering the latest state.
"""
if not deps.settings.ACCESS_REQUEST_ENABLED:
return jsonify({"error": "request access disabled"}), 503
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
ip = deps._client_ip()
if not deps.rate_limit_allow(
ip,
key="access_request_status",
limit=deps.settings.ACCESS_REQUEST_STATUS_RATE_LIMIT,
window_sec=deps.settings.ACCESS_REQUEST_STATUS_RATE_WINDOW_SEC,
):
return jsonify({"error": "rate limited"}), 429
payload = request.get_json(silent=True) or {}
code = (payload.get("request_code") or payload.get("code") or "").strip()
reveal_initial_password = bool(
payload.get("reveal_initial_password") or payload.get("reveal_password")
)
if not code:
return jsonify({"error": "request_code is required"}), 400
# Additional per-code limiter to avoid global NAT rate-limit blowups.
if not deps.rate_limit_allow(
f"{ip}:{code}",
key="access_request_status_code",
limit=max(20, deps.settings.ACCESS_REQUEST_STATUS_RATE_LIMIT),
window_sec=deps.settings.ACCESS_REQUEST_STATUS_RATE_WINDOW_SEC,
):
return jsonify({"error": "rate limited"}), 429
try:
with deps.connect() as conn:
row = conn.execute(
"""
SELECT status,
username,
initial_password,
initial_password_revealed_at,
email_verified_at
FROM access_requests
WHERE request_code = %s
""",
(code,),
).fetchone()
if not row:
return jsonify({"error": "not found"}), 404
current_status = deps._normalize_status(row.get("status") or "")
if current_status == "accounts_building" and not deps.ariadne_client.enabled():
try:
deps.provision_access_request(code)
except Exception:
pass
row = conn.execute(
"""
SELECT status,
username,
initial_password,
initial_password_revealed_at,
email_verified_at
FROM access_requests
WHERE request_code = %s
""",
(code,),
).fetchone()
if not row:
return jsonify({"error": "not found"}), 404
status = deps._advance_status(conn, code, row.get("username") or "", row.get("status") or "")
response: dict[str, Any] = {
"ok": True,
"status": status,
"username": row.get("username") or "",
"email_verified": bool(row.get("email_verified_at")),
}
task_rows = conn.execute(
"""
SELECT task, status, detail, updated_at
FROM access_request_tasks
WHERE request_code = %s
ORDER BY task
""",
(code,),
).fetchall()
if task_rows:
tasks: list[dict[str, Any]] = []
blocked = False
for task_row in task_rows:
task_name = task_row.get("task") if isinstance(task_row, dict) else None
task_status = task_row.get("status") if isinstance(task_row, dict) else None
detail = task_row.get("detail") if isinstance(task_row, dict) else None
updated_at = task_row.get("updated_at") if isinstance(task_row, dict) else None
if isinstance(task_status, str) and task_status == "error":
blocked = True
task_payload: dict[str, Any] = {
"task": task_name or "",
"status": task_status or "",
}
if isinstance(detail, str) and detail:
task_payload["detail"] = detail
if isinstance(updated_at, datetime):
task_payload["updated_at"] = updated_at.astimezone(timezone.utc).isoformat()
tasks.append(task_payload)
response["tasks"] = tasks
response["automation_complete"] = deps.provision_tasks_complete(conn, code)
response["blocked"] = blocked
if status in {"awaiting_onboarding", "ready"}:
revealed_at = row.get("initial_password_revealed_at")
if isinstance(revealed_at, datetime):
response["initial_password_revealed_at"] = revealed_at.astimezone(timezone.utc).isoformat()
if reveal_initial_password:
password = row.get("initial_password")
if isinstance(password, str) and password and revealed_at is None:
response["initial_password"] = password
conn.execute(
"UPDATE access_requests SET initial_password_revealed_at = NOW() WHERE request_code = %s AND initial_password_revealed_at IS NULL",
(code,),
)
if status in {"awaiting_onboarding", "ready"}:
response["onboarding_url"] = f"/onboarding?code={code}"
response["onboarding"] = deps._onboarding_payload(conn, code, row.get("username") or "")
return jsonify(response)
except Exception:
return jsonify({"error": "failed to load status"}), 502
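The initial-password handling above is a one-time reveal: the password is returned only while `initial_password_revealed_at` is NULL, and the guarded UPDATE stamps the reveal so repeat polls never see it again. A minimal in-memory sketch of the same idempotent reveal (names are illustrative, not from this module):

```python
from datetime import datetime, timezone

def reveal_once(row: dict, want_reveal: bool) -> tuple[str, dict]:
    """Return (password_to_show, updated_row); empty after the first reveal."""
    if not want_reveal or row.get("revealed_at") is not None:
        return "", row
    # Stamp the reveal so subsequent polls cannot retrieve the password.
    stamped = {**row, "revealed_at": datetime.now(timezone.utc)}
    return str(row.get("initial_password") or ""), stamped
```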
@app.route("/api/access/request/retry", methods=["POST"])
def request_access_retry() -> Any:
"""Retry failed provisioning tasks for an access request."""
if not deps.settings.ACCESS_REQUEST_ENABLED:
return jsonify({"error": "request access disabled"}), 503
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
ip = deps._client_ip()
if not deps.rate_limit_allow(
ip,
key="access_request_retry",
limit=deps.settings.ACCESS_REQUEST_STATUS_RATE_LIMIT,
window_sec=deps.settings.ACCESS_REQUEST_STATUS_RATE_WINDOW_SEC,
):
return jsonify({"error": "rate limited"}), 429
payload = request.get_json(silent=True) or {}
code = (payload.get("request_code") or payload.get("code") or "").strip()
tasks = payload.get("tasks")
task_list = [task for task in tasks if isinstance(task, str) and task.strip()] if isinstance(tasks, list) else []
if not code:
return jsonify({"error": "request_code is required"}), 400
if deps.ariadne_client.enabled():
retry_payload = {"tasks": task_list} if task_list else None
return deps.ariadne_client.proxy(
"POST",
f"/api/access/requests/{code}/retry",
payload=retry_payload,
)
try:
with deps.connect() as conn:
row = conn.execute(
"SELECT status FROM access_requests WHERE request_code = %s",
(code,),
).fetchone()
if not row:
return jsonify({"error": "not found"}), 404
status = deps._normalize_status(row.get("status") or "")
if status not in {"accounts_building", "approved"}:
return jsonify({"error": "request not retryable"}), 409
conn.execute(
"UPDATE access_requests SET provision_attempted_at = NULL WHERE request_code = %s",
(code,),
)
if task_list:
conn.execute(
"""
UPDATE access_request_tasks
SET status = 'pending',
detail = 'retry requested',
updated_at = NOW()
WHERE request_code = %s
AND task = ANY(%s::text[])
AND status = 'error'
""",
(code, task_list),
)
else:
conn.execute(
"""
UPDATE access_request_tasks
SET status = 'pending',
detail = 'retry requested',
updated_at = NOW()
WHERE request_code = %s AND status = 'error'
""",
(code,),
)
except Exception:
return jsonify({"error": "failed to retry request"}), 502
try:
deps.provision_access_request(code)
except Exception:
pass
return jsonify({"ok": True, "status": "accounts_building"})
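Both routes above layer a per-IP limiter with a per-`ip:code` limiter, so heavy polling on one request code cannot exhaust the shared bucket for everyone behind the same NAT. The real `deps.rate_limit_allow` is not shown here; a minimal sliding-window sketch of the same interface:

```python
import time
from collections import defaultdict

_windows: dict[tuple[str, str], list[float]] = defaultdict(list)

def rate_limit_allow(identity: str, key: str, limit: int, window_sec: int) -> bool:
    """Sliding-window limiter keyed by (bucket name, caller identity)."""
    now = time.monotonic()
    bucket = _windows[(key, identity)]
    # Drop timestamps that have aged out of the window, then check capacity.
    bucket[:] = [t for t in bucket if now - t < window_sec]
    if len(bucket) >= limit:
        return False
    bucket.append(now)
    return True
```

Because the identity is part of the key, `203.0.113.7` and `203.0.113.7:REQ123` consume independent buckets.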

View File

@ -0,0 +1,350 @@
from __future__ import annotations
import secrets
from typing import Any
from urllib.parse import quote
from flask import jsonify, redirect, request
import psycopg
def register_access_request_submission(app, deps) -> None:
"""Register access request submission routes."""
@app.route("/api/access/request/availability", methods=["GET"])
def request_access_availability() -> Any:
"""Report whether a requested username can start access signup."""
if not deps.settings.ACCESS_REQUEST_ENABLED:
return jsonify({"error": "request access disabled"}), 503
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
username = (request.args.get("username") or "").strip()
error = deps._validate_username(username)
if error:
return jsonify({"available": False, "reason": "invalid", "detail": error})
if deps.admin_client().ready() and deps.admin_client().find_user(username):
return jsonify({"available": False, "reason": "exists", "detail": "username already exists"})
try:
with deps.connect() as conn:
existing = conn.execute(
"""
SELECT status
FROM access_requests
WHERE username = %s
ORDER BY created_at DESC
LIMIT 1
""",
(username,),
).fetchone()
except Exception:
return jsonify({"error": "failed to check availability"}), 502
if existing:
status = str(existing.get("status") or "")
return jsonify(
{
"available": False,
"reason": "requested",
"status": deps._normalize_status(status),
}
)
return jsonify({"available": True})
@app.route("/api/access/request", methods=["POST"])
def request_access() -> Any:
"""Create or refresh an email-verified access request.
WHY: submissions are anonymous, so this route validates names, rate
limits by client/request, and emails a proof token before queuing work.
"""
if not deps.settings.ACCESS_REQUEST_ENABLED:
return jsonify({"error": "request access disabled"}), 503
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
ip = deps._client_ip()
username, email, note, first_name, last_name = deps._extract_request_payload()
first_name = deps._normalize_name(first_name)
last_name = deps._normalize_name(last_name)
rate_key = ip
if username:
rate_key = f"{ip}:{username}"
if not deps.rate_limit_allow(
rate_key,
key="access_request_submit",
limit=deps.settings.ACCESS_REQUEST_SUBMIT_RATE_LIMIT,
window_sec=deps.settings.ACCESS_REQUEST_SUBMIT_RATE_WINDOW_SEC,
):
return jsonify({"error": "rate limited"}), 429
username_error = deps._validate_username(username)
if username_error:
return jsonify({"error": username_error}), 400
name_error = deps._validate_name(first_name, label="first name", required=False)
if name_error:
return jsonify({"error": name_error}), 400
name_error = deps._validate_name(last_name, label="last name", required=True)
if name_error:
return jsonify({"error": name_error}), 400
if not email:
return jsonify({"error": "email is required"}), 400
if "@" not in email:
return jsonify({"error": "invalid email"}), 400
email_lower = email.lower()
if email_lower.endswith(f"@{deps.settings.MAILU_DOMAIN.lower()}") and (
email_lower not in deps.settings.ACCESS_REQUEST_INTERNAL_EMAIL_ALLOWLIST
):
return jsonify({"error": "email must be an external address"}), 400
if deps.admin_client().ready() and deps.admin_client().find_user(username):
return jsonify({"error": "username already exists"}), 409
if deps.admin_client().ready() and deps.admin_client().find_user_by_email(email):
return jsonify({"error": "email is already associated with an existing Atlas account"}), 409
try:
with deps.connect() as conn:
existing = conn.execute(
"""
SELECT request_code, status
FROM access_requests
WHERE username = %s AND status IN (%s, 'pending')
ORDER BY created_at DESC
LIMIT 1
""",
(username, deps.EMAIL_VERIFY_PENDING_STATUS),
).fetchone()
if existing:
existing_status = str(existing.get("status") or "")
request_code = str(existing.get("request_code") or "")
if existing_status != deps.EMAIL_VERIFY_PENDING_STATUS:
return jsonify({"ok": True, "request_code": request_code, "status": existing_status})
token = secrets.token_urlsafe(24)
token_hash = deps._hash_verification_token(token)
conn.execute(
"""
UPDATE access_requests
SET contact_email = %s,
note = %s,
first_name = %s,
last_name = %s,
email_verification_token_hash = %s,
email_verification_sent_at = NOW(),
email_verified_at = NULL
WHERE request_code = %s AND status = %s
""",
(
email,
note or None,
first_name or None,
last_name or None,
token_hash,
request_code,
deps.EMAIL_VERIFY_PENDING_STATUS,
),
)
try:
deps._send_verification_email(request_code=request_code, email=email, token=token)
except deps.MailerError:
return (
jsonify({"error": "failed to send verification email", "request_code": request_code}),
502,
)
return jsonify({"ok": True, "request_code": request_code, "status": deps.EMAIL_VERIFY_PENDING_STATUS})
request_code = deps._random_request_code(username)
token = secrets.token_urlsafe(24)
token_hash = deps._hash_verification_token(token)
try:
conn.execute(
"""
INSERT INTO access_requests
(request_code, username, contact_email, note, first_name, last_name, status,
email_verification_token_hash, email_verification_sent_at)
VALUES
(%s, %s, %s, %s, %s, %s, %s, %s, NOW())
""",
(
request_code,
username,
email,
note or None,
first_name or None,
last_name or None,
deps.EMAIL_VERIFY_PENDING_STATUS,
token_hash,
),
)
except psycopg.errors.UniqueViolation:
conn.rollback()
existing = conn.execute(
"""
SELECT request_code, status
FROM access_requests
WHERE username = %s AND status IN (%s, 'pending')
ORDER BY created_at DESC
LIMIT 1
""",
(username, deps.EMAIL_VERIFY_PENDING_STATUS),
).fetchone()
if not existing:
raise
return jsonify({"ok": True, "request_code": existing["request_code"], "status": existing["status"]})
try:
deps._send_verification_email(request_code=request_code, email=email, token=token)
except deps.MailerError:
return jsonify({"error": "failed to send verification email", "request_code": request_code}), 502
except Exception:
return jsonify({"error": "failed to submit request"}), 502
return jsonify({"ok": True, "request_code": request_code, "status": deps.EMAIL_VERIFY_PENDING_STATUS})
@app.route("/api/access/request/verify", methods=["POST"])
def request_access_verify() -> Any:
"""Verify a submitted access request using a request code and token."""
if not deps.settings.ACCESS_REQUEST_ENABLED:
return jsonify({"error": "request access disabled"}), 503
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
ip = deps._client_ip()
if not deps.rate_limit_allow(
ip,
key="access_request_verify",
limit=60,
window_sec=60,
):
return jsonify({"error": "rate limited"}), 429
payload = request.get_json(silent=True) or {}
code = (payload.get("request_code") or payload.get("code") or "").strip()
reveal_initial_password = bool(
payload.get("reveal_initial_password") or payload.get("reveal_password")
)
token = (payload.get("token") or payload.get("verify") or "").strip()
if not code:
return jsonify({"error": "request_code is required"}), 400
if not token:
return jsonify({"error": "token is required"}), 400
if not deps.rate_limit_allow(
f"{ip}:{code}",
key="access_request_verify_code",
limit=30,
window_sec=60,
):
return jsonify({"error": "rate limited"}), 429
try:
with deps.connect() as conn:
status = deps._verify_request(conn, code, token)
return jsonify({"ok": True, "status": status})
except deps.VerificationError as exc:
return jsonify({"error": exc.message}), exc.status_code
except Exception:
return jsonify({"error": "failed to verify"}), 502
@app.route("/api/access/request/verify-link", methods=["GET"])
def request_access_verify_link() -> Any:
"""Verify an emailed access-request link and redirect to the UI."""
if not deps.settings.ACCESS_REQUEST_ENABLED:
return jsonify({"error": "request access disabled"}), 503
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
code = (request.args.get("code") or "").strip()
token = (request.args.get("token") or "").strip()
if not code or not token:
return redirect(f"/request-access?code={quote(code)}&verify_error=missing+token")
try:
with deps.connect() as conn:
deps._verify_request(conn, code, token)
return redirect(f"/request-access?code={quote(code)}&verified=1")
except deps.VerificationError as exc:
return redirect(f"/request-access?code={quote(code)}&verify_error={quote(exc.message)}")
except Exception:
return redirect(f"/request-access?code={quote(code)}&verify_error=failed+to+verify")
@app.route("/api/access/request/resend", methods=["POST"])
def request_access_resend() -> Any:
"""Send a fresh verification token for a pending access request."""
if not deps.settings.ACCESS_REQUEST_ENABLED:
return jsonify({"error": "request access disabled"}), 503
if not deps.configured():
return jsonify({"error": "server not configured"}), 503
ip = deps._client_ip()
if not deps.rate_limit_allow(
ip,
key="access_request_resend",
limit=30,
window_sec=60,
):
return jsonify({"error": "rate limited"}), 429
payload = request.get_json(silent=True) or {}
code = (payload.get("request_code") or payload.get("code") or "").strip()
if not code:
return jsonify({"error": "request_code is required"}), 400
if not deps.rate_limit_allow(
f"{ip}:{code}",
key="access_request_resend_code",
limit=10,
window_sec=300,
):
return jsonify({"error": "rate limited"}), 429
try:
with deps.connect() as conn:
row = conn.execute(
"""
SELECT status, contact_email
FROM access_requests
WHERE request_code = %s
""",
(code,),
).fetchone()
if not row:
return jsonify({"error": "not found"}), 404
status = deps._normalize_status(row.get("status") or "")
if status != deps.EMAIL_VERIFY_PENDING_STATUS:
return jsonify({"ok": True, "status": status})
email = str(row.get("contact_email") or "").strip()
if not email:
return jsonify({"error": "missing email"}), 409
token = secrets.token_urlsafe(24)
token_hash = deps._hash_verification_token(token)
conn.execute(
"""
UPDATE access_requests
SET email_verification_token_hash = %s,
email_verification_sent_at = NOW()
WHERE request_code = %s AND status = %s
""",
(token_hash, code, deps.EMAIL_VERIFY_PENDING_STATUS),
)
try:
deps._send_verification_email(request_code=code, email=email, token=token)
except deps.MailerError:
return jsonify({"error": "failed to send verification email", "request_code": code}), 502
return jsonify({"ok": True, "status": deps.EMAIL_VERIFY_PENDING_STATUS})
except Exception:
return jsonify({"error": "failed to resend verification"}), 502
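The submission and resend flows above generate a `secrets.token_urlsafe(24)` token, persist only its hash via `_hash_verification_token`, and email the raw value. That implementation is not shown in this diff; a sketch assuming a SHA-256 digest with constant-time comparison:

```python
import hashlib
import hmac
import secrets

def hash_verification_token(token: str) -> str:
    """Persist only a digest of the emailed token, never the raw value."""
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

def verify_token(candidate: str, stored_hash: str) -> bool:
    """Compare in constant time to avoid timing side channels."""
    return hmac.compare_digest(hash_verification_token(candidate), stored_hash)

token = secrets.token_urlsafe(24)        # raw value goes into the email link
stored = hash_verification_token(token)  # only this digest reaches the database
```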

File diff suppressed because it is too large

View File

@ -1,524 +1,13 @@
"""Account route registration facade."""
from __future__ import annotations
import socket
import time
from urllib.parse import quote
from typing import Any
import httpx
from flask import jsonify, g, request
from .. import settings
from .. import ariadne_client
from ..db import connect
from ..keycloak import admin_client, require_auth, require_account_access
from ..nextcloud_mail_sync import trigger as trigger_nextcloud_mail_sync
from ..utils import random_password
from ..firefly_user_sync import trigger as trigger_firefly_user_sync
from ..wger_user_sync import trigger as trigger_wger_user_sync
def _tcp_check(host: str, port: int, timeout_sec: float) -> bool:
if not host or port <= 0:
return False
try:
with socket.create_connection((host, port), timeout=timeout_sec):
return True
except OSError:
return False
from .account_actions import register_account_actions
from .account_overview import register_account_overview
def register(app) -> None:
"""Register all account self-service and admin routes."""
@app.route("/api/account/overview", methods=["GET"])
@require_auth
def account_overview() -> Any:
ok, resp = require_account_access()
if not ok:
return resp
username = g.keycloak_username
keycloak_email = g.keycloak_email or ""
mailu_email = ""
mailu_app_password = ""
mailu_status = "ready"
nextcloud_mail_status = "unknown"
nextcloud_mail_primary_email = ""
nextcloud_mail_account_count = ""
nextcloud_mail_synced_at = ""
wger_status = "ready"
wger_password = ""
wger_password_updated_at = ""
firefly_status = "ready"
firefly_password = ""
firefly_password_updated_at = ""
vaultwarden_email = ""
vaultwarden_status = ""
vaultwarden_synced_at = ""
jellyfin_status = "ready"
jellyfin_sync_status = "unknown"
jellyfin_sync_detail = ""
jellyfin_user_is_ldap = False
onboarding_url = ""
if not admin_client().ready():
mailu_status = "server not configured"
wger_status = "server not configured"
firefly_status = "server not configured"
jellyfin_status = "server not configured"
jellyfin_sync_status = "unknown"
jellyfin_sync_detail = "keycloak admin not configured"
elif username:
try:
user = admin_client().find_user(username) or {}
if isinstance(user, dict):
jellyfin_user_is_ldap = bool(user.get("federationLink"))
if not keycloak_email:
keycloak_email = str(user.get("email") or "")
attrs = user.get("attributes") if isinstance(user, dict) else None
if isinstance(attrs, dict):
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu:
mailu_email = str(raw_mailu[0])
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
raw_pw = attrs.get("mailu_app_password")
if isinstance(raw_pw, list) and raw_pw:
mailu_app_password = str(raw_pw[0])
elif isinstance(raw_pw, str) and raw_pw:
mailu_app_password = raw_pw
raw_primary = attrs.get("nextcloud_mail_primary_email")
if isinstance(raw_primary, list) and raw_primary:
nextcloud_mail_primary_email = str(raw_primary[0])
elif isinstance(raw_primary, str) and raw_primary:
nextcloud_mail_primary_email = raw_primary
raw_count = attrs.get("nextcloud_mail_account_count")
if isinstance(raw_count, list) and raw_count:
nextcloud_mail_account_count = str(raw_count[0])
elif isinstance(raw_count, str) and raw_count:
nextcloud_mail_account_count = raw_count
raw_synced = attrs.get("nextcloud_mail_synced_at")
if isinstance(raw_synced, list) and raw_synced:
nextcloud_mail_synced_at = str(raw_synced[0])
elif isinstance(raw_synced, str) and raw_synced:
nextcloud_mail_synced_at = raw_synced
raw_wger_password = attrs.get("wger_password")
if isinstance(raw_wger_password, list) and raw_wger_password:
wger_password = str(raw_wger_password[0])
elif isinstance(raw_wger_password, str) and raw_wger_password:
wger_password = raw_wger_password
raw_wger_updated = attrs.get("wger_password_updated_at")
if isinstance(raw_wger_updated, list) and raw_wger_updated:
wger_password_updated_at = str(raw_wger_updated[0])
elif isinstance(raw_wger_updated, str) and raw_wger_updated:
wger_password_updated_at = raw_wger_updated
raw_firefly_password = attrs.get("firefly_password")
if isinstance(raw_firefly_password, list) and raw_firefly_password:
firefly_password = str(raw_firefly_password[0])
elif isinstance(raw_firefly_password, str) and raw_firefly_password:
firefly_password = raw_firefly_password
raw_firefly_updated = attrs.get("firefly_password_updated_at")
if isinstance(raw_firefly_updated, list) and raw_firefly_updated:
firefly_password_updated_at = str(raw_firefly_updated[0])
elif isinstance(raw_firefly_updated, str) and raw_firefly_updated:
firefly_password_updated_at = raw_firefly_updated
raw_vw_email = attrs.get("vaultwarden_email")
if isinstance(raw_vw_email, list) and raw_vw_email:
vaultwarden_email = str(raw_vw_email[0])
elif isinstance(raw_vw_email, str) and raw_vw_email:
vaultwarden_email = raw_vw_email
raw_vw_status = attrs.get("vaultwarden_status")
if isinstance(raw_vw_status, list) and raw_vw_status:
vaultwarden_status = str(raw_vw_status[0])
elif isinstance(raw_vw_status, str) and raw_vw_status:
vaultwarden_status = raw_vw_status
raw_vw_synced = attrs.get("vaultwarden_synced_at")
if isinstance(raw_vw_synced, list) and raw_vw_synced:
vaultwarden_synced_at = str(raw_vw_synced[0])
elif isinstance(raw_vw_synced, str) and raw_vw_synced:
vaultwarden_synced_at = raw_vw_synced
user_id = user.get("id") if isinstance(user, dict) else None
if user_id and (
not keycloak_email
or not mailu_email
or not mailu_app_password
or not wger_password
or not wger_password_updated_at
or not firefly_password
or not firefly_password_updated_at
or not vaultwarden_email
or not vaultwarden_status
or not vaultwarden_synced_at
):
full = admin_client().get_user(str(user_id))
if not keycloak_email:
keycloak_email = str(full.get("email") or "")
attrs = full.get("attributes") or {}
if isinstance(attrs, dict):
if not mailu_email:
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu and isinstance(raw_mailu[0], str):
mailu_email = raw_mailu[0]
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
if not mailu_app_password:
raw_pw = attrs.get("mailu_app_password")
if isinstance(raw_pw, list) and raw_pw:
mailu_app_password = str(raw_pw[0])
elif isinstance(raw_pw, str) and raw_pw:
mailu_app_password = raw_pw
if not nextcloud_mail_primary_email:
raw_primary = attrs.get("nextcloud_mail_primary_email")
if isinstance(raw_primary, list) and raw_primary:
nextcloud_mail_primary_email = str(raw_primary[0])
elif isinstance(raw_primary, str) and raw_primary:
nextcloud_mail_primary_email = raw_primary
if not nextcloud_mail_account_count:
raw_count = attrs.get("nextcloud_mail_account_count")
if isinstance(raw_count, list) and raw_count:
nextcloud_mail_account_count = str(raw_count[0])
elif isinstance(raw_count, str) and raw_count:
nextcloud_mail_account_count = raw_count
if not nextcloud_mail_synced_at:
raw_synced = attrs.get("nextcloud_mail_synced_at")
if isinstance(raw_synced, list) and raw_synced:
nextcloud_mail_synced_at = str(raw_synced[0])
elif isinstance(raw_synced, str) and raw_synced:
nextcloud_mail_synced_at = raw_synced
if not wger_password:
raw_wger_password = attrs.get("wger_password")
if isinstance(raw_wger_password, list) and raw_wger_password:
wger_password = str(raw_wger_password[0])
elif isinstance(raw_wger_password, str) and raw_wger_password:
wger_password = raw_wger_password
if not wger_password_updated_at:
raw_wger_updated = attrs.get("wger_password_updated_at")
if isinstance(raw_wger_updated, list) and raw_wger_updated:
wger_password_updated_at = str(raw_wger_updated[0])
elif isinstance(raw_wger_updated, str) and raw_wger_updated:
wger_password_updated_at = raw_wger_updated
if not firefly_password:
raw_firefly_password = attrs.get("firefly_password")
if isinstance(raw_firefly_password, list) and raw_firefly_password:
firefly_password = str(raw_firefly_password[0])
elif isinstance(raw_firefly_password, str) and raw_firefly_password:
firefly_password = raw_firefly_password
if not firefly_password_updated_at:
raw_firefly_updated = attrs.get("firefly_password_updated_at")
if isinstance(raw_firefly_updated, list) and raw_firefly_updated:
firefly_password_updated_at = str(raw_firefly_updated[0])
elif isinstance(raw_firefly_updated, str) and raw_firefly_updated:
firefly_password_updated_at = raw_firefly_updated
if not vaultwarden_email:
raw_vw_email = attrs.get("vaultwarden_email")
if isinstance(raw_vw_email, list) and raw_vw_email:
vaultwarden_email = str(raw_vw_email[0])
elif isinstance(raw_vw_email, str) and raw_vw_email:
vaultwarden_email = raw_vw_email
if not vaultwarden_status:
raw_vw_status = attrs.get("vaultwarden_status")
if isinstance(raw_vw_status, list) and raw_vw_status:
vaultwarden_status = str(raw_vw_status[0])
elif isinstance(raw_vw_status, str) and raw_vw_status:
vaultwarden_status = raw_vw_status
if not vaultwarden_synced_at:
raw_vw_synced = attrs.get("vaultwarden_synced_at")
if isinstance(raw_vw_synced, list) and raw_vw_synced:
vaultwarden_synced_at = str(raw_vw_synced[0])
elif isinstance(raw_vw_synced, str) and raw_vw_synced:
vaultwarden_synced_at = raw_vw_synced
except Exception:
mailu_status = "unavailable"
nextcloud_mail_status = "unavailable"
wger_status = "unavailable"
firefly_status = "unavailable"
vaultwarden_status = "unavailable"
jellyfin_status = "unavailable"
jellyfin_sync_status = "unknown"
jellyfin_sync_detail = "unavailable"
mailu_username = mailu_email or (f"{username}@{settings.MAILU_DOMAIN}" if username else "")
firefly_username = mailu_username
vaultwarden_username = vaultwarden_email or mailu_username
if not mailu_app_password and mailu_status == "ready":
mailu_status = "needs app password"
if not wger_password and wger_status == "ready":
wger_status = "needs provisioning"
if not firefly_password and firefly_status == "ready":
firefly_status = "needs provisioning"
if nextcloud_mail_status == "unknown":
try:
count_val = int(nextcloud_mail_account_count) if nextcloud_mail_account_count else 0
except ValueError:
count_val = 0
if count_val > 0:
nextcloud_mail_status = "ready"
else:
nextcloud_mail_status = "needs sync"
if jellyfin_status == "ready":
ldap_reachable = _tcp_check(
settings.JELLYFIN_LDAP_HOST,
settings.JELLYFIN_LDAP_PORT,
settings.JELLYFIN_LDAP_CHECK_TIMEOUT_SEC,
)
if not ldap_reachable:
jellyfin_sync_status = "degraded"
jellyfin_sync_detail = "LDAP unreachable"
elif not jellyfin_user_is_ldap:
jellyfin_sync_status = "degraded"
jellyfin_sync_detail = "Keycloak user is not LDAP-backed"
else:
jellyfin_sync_status = "ok"
jellyfin_sync_detail = "LDAP-backed (Keycloak is source of truth)"
if not vaultwarden_status:
vaultwarden_status = "needs provisioning"
if settings.PORTAL_DATABASE_URL and username:
request_code = ""
try:
with connect() as conn:
row = conn.execute(
"SELECT request_code FROM access_requests WHERE username = %s ORDER BY created_at DESC LIMIT 1",
(username,),
).fetchone()
if not row and keycloak_email:
row = conn.execute(
"SELECT request_code FROM access_requests WHERE contact_email = %s ORDER BY created_at DESC LIMIT 1",
(keycloak_email,),
).fetchone()
if row and isinstance(row, dict):
request_code = str(row.get("request_code") or "").strip()
except Exception:
request_code = ""
if request_code:
onboarding_url = f"{settings.PORTAL_PUBLIC_BASE_URL}/onboarding?code={quote(request_code)}"
return jsonify(
{
"user": {"username": username, "email": keycloak_email, "groups": g.keycloak_groups},
"onboarding_url": onboarding_url,
"mailu": {"status": mailu_status, "username": mailu_username, "app_password": mailu_app_password},
"nextcloud_mail": {
"status": nextcloud_mail_status,
"primary_email": nextcloud_mail_primary_email,
"account_count": nextcloud_mail_account_count,
"synced_at": nextcloud_mail_synced_at,
},
"wger": {
"status": wger_status,
"username": username,
"password": wger_password,
"password_updated_at": wger_password_updated_at,
},
"firefly": {
"status": firefly_status,
"username": firefly_username,
"password": firefly_password,
"password_updated_at": firefly_password_updated_at,
},
"vaultwarden": {
"status": vaultwarden_status,
"username": vaultwarden_username,
"synced_at": vaultwarden_synced_at,
},
"jellyfin": {
"status": jellyfin_status,
"username": username,
"sync_status": jellyfin_sync_status,
"sync_detail": jellyfin_sync_detail,
},
}
)
@app.route("/api/account/mailu/rotate", methods=["POST"])
@require_auth
def account_mailu_rotate() -> Any:
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/mailu/rotate")
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
password = random_password()
try:
admin_client().set_user_attribute(username, "mailu_app_password", password)
except Exception:
return jsonify({"error": "failed to update mail password"}), 502
sync_enabled = bool(settings.MAILU_SYNC_URL)
sync_ok = False
sync_error = ""
if sync_enabled:
try:
with httpx.Client(timeout=30) as client:
resp = client.post(
settings.MAILU_SYNC_URL,
json={"ts": int(time.time()), "wait": True, "reason": "portal_mailu_rotate"},
)
sync_ok = resp.status_code == 200
if not sync_ok:
sync_error = f"sync status {resp.status_code}"
except Exception:
sync_error = "sync request failed"
nextcloud_sync: dict[str, Any] = {"status": "skipped"}
try:
nextcloud_sync = trigger_nextcloud_mail_sync(username, wait=True)
except Exception:
nextcloud_sync = {"status": "error"}
return jsonify(
{
"password": password,
"sync_enabled": sync_enabled,
"sync_ok": sync_ok,
"sync_error": sync_error,
"nextcloud_sync": nextcloud_sync,
}
)
@app.route("/api/account/wger/reset", methods=["POST"])
@require_auth
def account_wger_reset() -> Any:
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/wger/reset")
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
keycloak_email = g.keycloak_email or ""
mailu_email = ""
try:
user = admin_client().find_user(username) or {}
attrs = user.get("attributes") if isinstance(user, dict) else None
if isinstance(attrs, dict):
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu:
mailu_email = str(raw_mailu[0])
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
except Exception:
pass
email = mailu_email or f"{username}@{settings.MAILU_DOMAIN}"
password = random_password()
try:
result = trigger_wger_user_sync(username, email, password, wait=True)
status_val = result.get("status") if isinstance(result, dict) else "error"
if status_val != "ok":
raise RuntimeError(f"wger sync {status_val}")
except Exception as exc:
message = str(exc).strip() or "wger sync failed"
return jsonify({"error": message}), 502
try:
admin_client().set_user_attribute(username, "wger_password", password)
admin_client().set_user_attribute(
username,
"wger_password_updated_at",
time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
)
except Exception:
return jsonify({"error": "failed to store wger password"}), 502
return jsonify({"status": "ok", "password": password})
@app.route("/api/account/firefly/reset", methods=["POST"])
@require_auth
def account_firefly_reset() -> Any:
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/firefly/reset")
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
keycloak_email = g.keycloak_email or ""
mailu_email = ""
try:
user = admin_client().find_user(username) or {}
attrs = user.get("attributes") if isinstance(user, dict) else None
if isinstance(attrs, dict):
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu:
mailu_email = str(raw_mailu[0])
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
except Exception:
pass
email = mailu_email or f"{username}@{settings.MAILU_DOMAIN}"
password = random_password(24)
try:
result = trigger_firefly_user_sync(username, email, password, wait=True)
status_val = result.get("status") if isinstance(result, dict) else "error"
if status_val != "ok":
raise RuntimeError(f"firefly sync {status_val}")
except Exception as exc:
message = str(exc).strip() or "firefly sync failed"
return jsonify({"error": message}), 502
try:
admin_client().set_user_attribute(username, "firefly_password", password)
admin_client().set_user_attribute(
username,
"firefly_password_updated_at",
time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
)
except Exception:
return jsonify({"error": "failed to store firefly password"}), 502
return jsonify({"status": "ok", "password": password})
@app.route("/api/account/nextcloud/mail/sync", methods=["POST"])
@require_auth
def account_nextcloud_mail_sync() -> Any:
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
payload = request.get_json(silent=True) or {}
return ariadne_client.proxy("POST", "/api/account/nextcloud/mail/sync", payload=payload)
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
payload = request.get_json(silent=True) or {}
wait = bool(payload.get("wait", True))
try:
result = trigger_nextcloud_mail_sync(username, wait=wait)
return jsonify(result)
except Exception as exc:
message = str(exc).strip() or "failed to sync nextcloud mail"
return jsonify({"error": message}), 502
register_account_overview(app)
register_account_actions(app)
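The rotate and reset handlers above all mint credentials via `random_password` from `..utils`, whose implementation is not shown in this diff. A minimal stand-in — assuming it returns a uniformly random alphanumeric secret of an optional length, which is a guess about the real helper — could look like:

```python
import secrets
import string


def random_password(length: int = 32) -> str:
    """Hypothetical stand-in for ..utils.random_password: an alphanumeric
    secret drawn with the cryptographically secure secrets module."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The `firefly` reset passes an explicit `random_password(24)`, so the sketch keeps length as the only parameter.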

View File

@ -0,0 +1,249 @@
from __future__ import annotations
import socket
import time
from urllib.parse import quote
from typing import Any
import httpx
from flask import jsonify, g, request
from .. import settings
from .. import ariadne_client
from ..db import connect
from ..keycloak import admin_client, require_auth, require_account_access
from ..nextcloud_mail_sync import trigger as trigger_nextcloud_mail_sync
from ..utils import random_password
from ..firefly_user_sync import trigger as trigger_firefly_user_sync
from ..wger_user_sync import trigger as trigger_wger_user_sync
def _tcp_check(host: str, port: int, timeout_sec: float) -> bool:
if not host or port <= 0:
return False
try:
with socket.create_connection((host, port), timeout=timeout_sec):
return True
except OSError:
return False
def register_account_actions(app) -> None:
"""Register account mutation and admin-action endpoints."""
@app.route("/api/account/mailu/rotate", methods=["POST"])
@require_auth
def account_mailu_rotate() -> Any:
"""Rotate the user's Mailu app password and trigger dependent syncs."""
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/mailu/rotate")
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
password = random_password()
try:
admin_client().set_user_attribute(username, "mailu_app_password", password)
except Exception:
return jsonify({"error": "failed to update mail password"}), 502
sync_enabled = bool(settings.MAILU_SYNC_URL)
sync_ok = False
sync_error = ""
if sync_enabled:
try:
with httpx.Client(timeout=30) as client:
resp = client.post(
settings.MAILU_SYNC_URL,
json={"ts": int(time.time()), "wait": True, "reason": "portal_mailu_rotate"},
)
sync_ok = resp.status_code == 200
if not sync_ok:
sync_error = f"sync status {resp.status_code}"
except Exception:
sync_error = "sync request failed"
nextcloud_sync: dict[str, Any] = {"status": "skipped"}
try:
nextcloud_sync = trigger_nextcloud_mail_sync(username, wait=True)
except Exception:
nextcloud_sync = {"status": "error"}
return jsonify(
{
"password": password,
"sync_enabled": sync_enabled,
"sync_ok": sync_ok,
"sync_error": sync_error,
"nextcloud_sync": nextcloud_sync,
}
)
@app.route("/api/account/wger/reset", methods=["POST"])
@require_auth
def account_wger_reset() -> Any:
"""Reset the user's Wger password through the sync Job path."""
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/wger/reset")
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
keycloak_email = g.keycloak_email or ""
mailu_email = ""
try:
user = admin_client().find_user(username) or {}
attrs = user.get("attributes") if isinstance(user, dict) else None
if isinstance(attrs, dict):
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu:
mailu_email = str(raw_mailu[0])
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
except Exception:
pass
email = mailu_email or f"{username}@{settings.MAILU_DOMAIN}"
password = random_password()
try:
result = trigger_wger_user_sync(username, email, password, wait=True)
status_val = result.get("status") if isinstance(result, dict) else "error"
if status_val != "ok":
raise RuntimeError(f"wger sync {status_val}")
except Exception as exc:
message = str(exc).strip() or "wger sync failed"
return jsonify({"error": message}), 502
try:
admin_client().set_user_attribute(username, "wger_password", password)
admin_client().set_user_attribute(
username,
"wger_password_updated_at",
time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
)
except Exception:
return jsonify({"error": "failed to store wger password"}), 502
return jsonify({"status": "ok", "password": password})
@app.route("/api/account/wger/rotation/check", methods=["POST"])
@require_auth
def account_wger_rotation_check() -> Any:
"""Proxy or reject Wger rotation status checks for this account."""
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/wger/rotation/check")
return jsonify({"error": "server not configured"}), 503
@app.route("/api/account/firefly/reset", methods=["POST"])
@require_auth
def account_firefly_reset() -> Any:
"""Reset the user's Firefly password through the sync Job path."""
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/firefly/reset")
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
keycloak_email = g.keycloak_email or ""
mailu_email = ""
try:
user = admin_client().find_user(username) or {}
attrs = user.get("attributes") if isinstance(user, dict) else None
if isinstance(attrs, dict):
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu:
mailu_email = str(raw_mailu[0])
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
except Exception:
pass
email = mailu_email or f"{username}@{settings.MAILU_DOMAIN}"
password = random_password(24)
try:
result = trigger_firefly_user_sync(username, email, password, wait=True)
status_val = result.get("status") if isinstance(result, dict) else "error"
if status_val != "ok":
raise RuntimeError(f"firefly sync {status_val}")
except Exception as exc:
message = str(exc).strip() or "firefly sync failed"
return jsonify({"error": message}), 502
try:
admin_client().set_user_attribute(username, "firefly_password", password)
admin_client().set_user_attribute(
username,
"firefly_password_updated_at",
time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
)
except Exception:
return jsonify({"error": "failed to store firefly password"}), 502
return jsonify({"status": "ok", "password": password})
@app.route("/api/account/firefly/rotation/check", methods=["POST"])
@require_auth
def account_firefly_rotation_check() -> Any:
"""Proxy or reject Firefly rotation status checks for this account."""
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
return ariadne_client.proxy("POST", "/api/account/firefly/rotation/check")
return jsonify({"error": "server not configured"}), 503
@app.route("/api/account/nextcloud/mail/sync", methods=["POST"])
@require_auth
def account_nextcloud_mail_sync() -> Any:
"""Trigger a targeted Nextcloud mail sync for the signed-in user."""
ok, resp = require_account_access()
if not ok:
return resp
if ariadne_client.enabled():
payload = request.get_json(silent=True) or {}
return ariadne_client.proxy("POST", "/api/account/nextcloud/mail/sync", payload=payload)
if not admin_client().ready():
return jsonify({"error": "server not configured"}), 503
username = g.keycloak_username
if not username:
return jsonify({"error": "missing username"}), 400
payload = request.get_json(silent=True) or {}
wait = bool(payload.get("wait", True))
try:
result = trigger_nextcloud_mail_sync(username, wait=wait)
return jsonify(result)
except Exception as exc:
message = str(exc).strip() or "failed to sync nextcloud mail"
return jsonify({"error": message}), 502

View File

@ -0,0 +1,410 @@
from __future__ import annotations
import socket
import time
from urllib.parse import quote
from typing import Any
import httpx
from flask import jsonify, g, request
from .. import settings
from .. import ariadne_client
from ..db import connect
from ..keycloak import admin_client, require_auth, require_account_access
from ..nextcloud_mail_sync import trigger as trigger_nextcloud_mail_sync
from ..utils import random_password
from ..firefly_user_sync import trigger as trigger_firefly_user_sync
from ..wger_user_sync import trigger as trigger_wger_user_sync
def _tcp_check(host: str, port: int, timeout_sec: float) -> bool:
if not host or port <= 0:
return False
try:
with socket.create_connection((host, port), timeout=timeout_sec):
return True
except OSError:
return False
def register_account_overview(app) -> None:
"""Register the account overview endpoint."""
@app.route("/api/account/overview", methods=["GET"])
@require_auth
def account_overview() -> Any:
"""Build the signed-in user's self-service account dashboard state.
WHY: the UI needs one coherent status payload assembled from Keycloak,
service sync markers, and legacy fallback checks.
"""
ok, resp = require_account_access()
if not ok:
return resp
username = g.keycloak_username
keycloak_email = g.keycloak_email or ""
mailu_email = ""
mailu_app_password = ""
mailu_status = "ready"
nextcloud_mail_status = "unknown"
nextcloud_mail_primary_email = ""
nextcloud_mail_account_count = ""
nextcloud_mail_synced_at = ""
wger_status = "ready"
wger_password = ""
wger_password_updated_at = ""
firefly_status = "ready"
firefly_password = ""
firefly_password_updated_at = ""
vaultwarden_email = ""
vaultwarden_status = ""
vaultwarden_synced_at = ""
vaultwarden_master_set_at = ""
jellyfin_status = "ready"
jellyfin_sync_status = "unknown"
jellyfin_sync_detail = ""
jellyfin_user_is_ldap = False
onboarding_url = ""
if not admin_client().ready():
mailu_status = "server not configured"
wger_status = "server not configured"
firefly_status = "server not configured"
jellyfin_status = "server not configured"
jellyfin_sync_status = "unknown"
jellyfin_sync_detail = "keycloak admin not configured"
elif username:
try:
user = admin_client().find_user(username) or {}
if isinstance(user, dict):
jellyfin_user_is_ldap = bool(user.get("federationLink"))
if not keycloak_email:
keycloak_email = str(user.get("email") or "")
attrs = user.get("attributes") if isinstance(user, dict) else None
if isinstance(attrs, dict):
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu:
mailu_email = str(raw_mailu[0])
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
raw_pw = attrs.get("mailu_app_password")
if isinstance(raw_pw, list) and raw_pw:
mailu_app_password = str(raw_pw[0])
elif isinstance(raw_pw, str) and raw_pw:
mailu_app_password = raw_pw
raw_primary = attrs.get("nextcloud_mail_primary_email")
if isinstance(raw_primary, list) and raw_primary:
nextcloud_mail_primary_email = str(raw_primary[0])
elif isinstance(raw_primary, str) and raw_primary:
nextcloud_mail_primary_email = raw_primary
raw_count = attrs.get("nextcloud_mail_account_count")
if isinstance(raw_count, list) and raw_count:
nextcloud_mail_account_count = str(raw_count[0])
elif isinstance(raw_count, str) and raw_count:
nextcloud_mail_account_count = raw_count
raw_synced = attrs.get("nextcloud_mail_synced_at")
if isinstance(raw_synced, list) and raw_synced:
nextcloud_mail_synced_at = str(raw_synced[0])
elif isinstance(raw_synced, str) and raw_synced:
nextcloud_mail_synced_at = raw_synced
raw_wger_password = attrs.get("wger_password")
if isinstance(raw_wger_password, list) and raw_wger_password:
wger_password = str(raw_wger_password[0])
elif isinstance(raw_wger_password, str) and raw_wger_password:
wger_password = raw_wger_password
raw_wger_updated = attrs.get("wger_password_updated_at")
if isinstance(raw_wger_updated, list) and raw_wger_updated:
wger_password_updated_at = str(raw_wger_updated[0])
elif isinstance(raw_wger_updated, str) and raw_wger_updated:
wger_password_updated_at = raw_wger_updated
raw_firefly_password = attrs.get("firefly_password")
if isinstance(raw_firefly_password, list) and raw_firefly_password:
firefly_password = str(raw_firefly_password[0])
elif isinstance(raw_firefly_password, str) and raw_firefly_password:
firefly_password = raw_firefly_password
raw_firefly_updated = attrs.get("firefly_password_updated_at")
if isinstance(raw_firefly_updated, list) and raw_firefly_updated:
firefly_password_updated_at = str(raw_firefly_updated[0])
elif isinstance(raw_firefly_updated, str) and raw_firefly_updated:
firefly_password_updated_at = raw_firefly_updated
raw_vw_email = attrs.get("vaultwarden_email")
if isinstance(raw_vw_email, list) and raw_vw_email:
vaultwarden_email = str(raw_vw_email[0])
elif isinstance(raw_vw_email, str) and raw_vw_email:
vaultwarden_email = raw_vw_email
raw_vw_status = attrs.get("vaultwarden_status")
if isinstance(raw_vw_status, list) and raw_vw_status:
vaultwarden_status = str(raw_vw_status[0])
elif isinstance(raw_vw_status, str) and raw_vw_status:
vaultwarden_status = raw_vw_status
raw_vw_synced = attrs.get("vaultwarden_synced_at")
if isinstance(raw_vw_synced, list) and raw_vw_synced:
vaultwarden_synced_at = str(raw_vw_synced[0])
elif isinstance(raw_vw_synced, str) and raw_vw_synced:
vaultwarden_synced_at = raw_vw_synced
raw_vw_master = attrs.get("vaultwarden_master_password_set_at")
if isinstance(raw_vw_master, list) and raw_vw_master:
vaultwarden_master_set_at = str(raw_vw_master[0])
elif isinstance(raw_vw_master, str) and raw_vw_master:
vaultwarden_master_set_at = raw_vw_master
user_id = user.get("id") if isinstance(user, dict) else None
if user_id and (
not keycloak_email
or not mailu_email
or not mailu_app_password
or not wger_password
or not wger_password_updated_at
or not firefly_password
or not firefly_password_updated_at
or not vaultwarden_email
or not vaultwarden_status
or not vaultwarden_synced_at
or not vaultwarden_master_set_at
):
full = admin_client().get_user(str(user_id))
if not keycloak_email:
keycloak_email = str(full.get("email") or "")
attrs = full.get("attributes") or {}
if isinstance(attrs, dict):
if not mailu_email:
raw_mailu = attrs.get("mailu_email")
if isinstance(raw_mailu, list) and raw_mailu and isinstance(raw_mailu[0], str):
mailu_email = raw_mailu[0]
elif isinstance(raw_mailu, str) and raw_mailu:
mailu_email = raw_mailu
if not mailu_app_password:
raw_pw = attrs.get("mailu_app_password")
if isinstance(raw_pw, list) and raw_pw:
mailu_app_password = str(raw_pw[0])
elif isinstance(raw_pw, str) and raw_pw:
mailu_app_password = raw_pw
if not nextcloud_mail_primary_email:
raw_primary = attrs.get("nextcloud_mail_primary_email")
if isinstance(raw_primary, list) and raw_primary:
nextcloud_mail_primary_email = str(raw_primary[0])
elif isinstance(raw_primary, str) and raw_primary:
nextcloud_mail_primary_email = raw_primary
if not nextcloud_mail_account_count:
raw_count = attrs.get("nextcloud_mail_account_count")
if isinstance(raw_count, list) and raw_count:
nextcloud_mail_account_count = str(raw_count[0])
elif isinstance(raw_count, str) and raw_count:
nextcloud_mail_account_count = raw_count
if not nextcloud_mail_synced_at:
raw_synced = attrs.get("nextcloud_mail_synced_at")
if isinstance(raw_synced, list) and raw_synced:
nextcloud_mail_synced_at = str(raw_synced[0])
elif isinstance(raw_synced, str) and raw_synced:
nextcloud_mail_synced_at = raw_synced
if not wger_password:
raw_wger_password = attrs.get("wger_password")
if isinstance(raw_wger_password, list) and raw_wger_password:
wger_password = str(raw_wger_password[0])
elif isinstance(raw_wger_password, str) and raw_wger_password:
wger_password = raw_wger_password
if not wger_password_updated_at:
raw_wger_updated = attrs.get("wger_password_updated_at")
if isinstance(raw_wger_updated, list) and raw_wger_updated:
wger_password_updated_at = str(raw_wger_updated[0])
elif isinstance(raw_wger_updated, str) and raw_wger_updated:
wger_password_updated_at = raw_wger_updated
if not firefly_password:
raw_firefly_password = attrs.get("firefly_password")
if isinstance(raw_firefly_password, list) and raw_firefly_password:
firefly_password = str(raw_firefly_password[0])
elif isinstance(raw_firefly_password, str) and raw_firefly_password:
firefly_password = raw_firefly_password
if not firefly_password_updated_at:
raw_firefly_updated = attrs.get("firefly_password_updated_at")
if isinstance(raw_firefly_updated, list) and raw_firefly_updated:
firefly_password_updated_at = str(raw_firefly_updated[0])
elif isinstance(raw_firefly_updated, str) and raw_firefly_updated:
firefly_password_updated_at = raw_firefly_updated
if not vaultwarden_email:
raw_vw_email = attrs.get("vaultwarden_email")
if isinstance(raw_vw_email, list) and raw_vw_email:
vaultwarden_email = str(raw_vw_email[0])
elif isinstance(raw_vw_email, str) and raw_vw_email:
vaultwarden_email = raw_vw_email
if not vaultwarden_status:
raw_vw_status = attrs.get("vaultwarden_status")
if isinstance(raw_vw_status, list) and raw_vw_status:
vaultwarden_status = str(raw_vw_status[0])
elif isinstance(raw_vw_status, str) and raw_vw_status:
vaultwarden_status = raw_vw_status
if not vaultwarden_synced_at:
raw_vw_synced = attrs.get("vaultwarden_synced_at")
if isinstance(raw_vw_synced, list) and raw_vw_synced:
vaultwarden_synced_at = str(raw_vw_synced[0])
elif isinstance(raw_vw_synced, str) and raw_vw_synced:
vaultwarden_synced_at = raw_vw_synced
if not vaultwarden_master_set_at:
raw_vw_master = attrs.get("vaultwarden_master_password_set_at")
if isinstance(raw_vw_master, list) and raw_vw_master:
vaultwarden_master_set_at = str(raw_vw_master[0])
elif isinstance(raw_vw_master, str) and raw_vw_master:
vaultwarden_master_set_at = raw_vw_master
if vaultwarden_master_set_at:
vaultwarden_status = "ready"
except Exception:
mailu_status = "unavailable"
nextcloud_mail_status = "unavailable"
wger_status = "unavailable"
firefly_status = "unavailable"
vaultwarden_status = "unavailable"
jellyfin_status = "unavailable"
jellyfin_sync_status = "unknown"
jellyfin_sync_detail = "unavailable"
if (
username
and not vaultwarden_master_set_at
and vaultwarden_status in {"", "invited", "needs provisioning"}
and settings.PORTAL_DATABASE_URL
):
try:
with connect() as conn:
row = conn.execute(
"""
SELECT request_code
FROM access_requests
WHERE username = %s AND status IN ('awaiting_onboarding', 'ready')
ORDER BY created_at DESC
LIMIT 1
""",
(username,),
).fetchone()
if not row:
row = conn.execute(
"""
SELECT request_code
FROM access_requests
WHERE username = %s
ORDER BY created_at DESC
LIMIT 1
""",
(username,),
).fetchone()
if row and isinstance(row, dict):
request_code = str(row.get("request_code") or "").strip()
if request_code:
step = conn.execute(
"""
SELECT 1
FROM access_request_onboarding_steps
WHERE request_code = %s AND step = %s
LIMIT 1
""",
(request_code, "vaultwarden_master_password"),
).fetchone()
if step:
vaultwarden_master_set_at = "confirmed"
vaultwarden_status = "ready"
except Exception:
pass
mailu_username = mailu_email or (f"{username}@{settings.MAILU_DOMAIN}" if username else "")
firefly_username = mailu_username
vaultwarden_username = vaultwarden_email or mailu_username
if not mailu_app_password and mailu_status == "ready":
mailu_status = "needs app password"
if not wger_password and wger_status == "ready":
wger_status = "needs provisioning"
if not firefly_password and firefly_status == "ready":
firefly_status = "needs provisioning"
if nextcloud_mail_status == "unknown":
try:
count_val = int(nextcloud_mail_account_count) if nextcloud_mail_account_count else 0
except ValueError:
count_val = 0
if count_val > 0:
nextcloud_mail_status = "ready"
else:
nextcloud_mail_status = "needs sync"
if jellyfin_status == "ready":
ldap_reachable = _tcp_check(
settings.JELLYFIN_LDAP_HOST,
settings.JELLYFIN_LDAP_PORT,
settings.JELLYFIN_LDAP_CHECK_TIMEOUT_SEC,
)
if not ldap_reachable:
jellyfin_sync_status = "degraded"
jellyfin_sync_detail = "LDAP unreachable"
elif not jellyfin_user_is_ldap:
jellyfin_sync_status = "degraded"
jellyfin_sync_detail = "Keycloak user is not LDAP-backed"
else:
jellyfin_sync_status = "ok"
jellyfin_sync_detail = "LDAP-backed (Keycloak is source of truth)"
if not vaultwarden_status:
vaultwarden_status = "needs provisioning"
if settings.PORTAL_DATABASE_URL and username:
request_code = ""
try:
with connect() as conn:
row = conn.execute(
"SELECT request_code FROM access_requests WHERE username = %s ORDER BY created_at DESC LIMIT 1",
(username,),
).fetchone()
if not row and keycloak_email:
row = conn.execute(
"SELECT request_code FROM access_requests WHERE contact_email = %s ORDER BY created_at DESC LIMIT 1",
(keycloak_email,),
).fetchone()
if row and isinstance(row, dict):
request_code = str(row.get("request_code") or "").strip()
except Exception:
request_code = ""
if request_code:
onboarding_url = f"{settings.PORTAL_PUBLIC_BASE_URL}/onboarding?code={quote(request_code)}"
return jsonify(
{
"user": {"username": username, "email": keycloak_email, "groups": g.keycloak_groups},
"onboarding_url": onboarding_url,
"mailu": {"status": mailu_status, "username": mailu_username, "app_password": mailu_app_password},
"nextcloud_mail": {
"status": nextcloud_mail_status,
"primary_email": nextcloud_mail_primary_email,
"account_count": nextcloud_mail_account_count,
"synced_at": nextcloud_mail_synced_at,
},
"wger": {
"status": wger_status,
"username": username,
"password": wger_password,
"password_updated_at": wger_password_updated_at,
},
"firefly": {
"status": firefly_status,
"username": firefly_username,
"password": firefly_password,
"password_updated_at": firefly_password_updated_at,
},
"vaultwarden": {
"status": vaultwarden_status,
"username": vaultwarden_username,
"synced_at": vaultwarden_synced_at,
},
"jellyfin": {
"status": jellyfin_status,
"username": username,
"sync_status": jellyfin_sync_status,
"sync_detail": jellyfin_sync_detail,
},
}
)
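The Jellyfin branch of the overview reduces to a small decision table over two booleans — LDAP reachability and whether the Keycloak user is federation-backed. Pulled out as a standalone function (hypothetical, for illustration only), the same logic reads:

```python
def jellyfin_sync_state(ldap_reachable: bool, user_is_ldap: bool) -> tuple[str, str]:
    """Mirror the overview's Jellyfin health decision: check LDAP
    reachability first, then the user's federation link."""
    if not ldap_reachable:
        return "degraded", "LDAP unreachable"
    if not user_is_ldap:
        return "degraded", "Keycloak user is not LDAP-backed"
    return "ok", "LDAP-backed (Keycloak is source of truth)"
```

Ordering matters: an unreachable LDAP server masks the federation check, which matches the `if/elif/else` chain in the handler.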

View File

@ -12,9 +12,13 @@ from ..provisioning import provision_access_request
def register(app) -> None:
"""Register administrator routes for access-request decisions."""
@app.route("/api/admin/access/requests", methods=["GET"])
@require_auth
def admin_list_requests() -> Any:
"""List pending access requests for portal administrators."""
ok, resp = require_portal_admin()
if not ok:
return resp
@ -56,6 +60,8 @@ def register(app) -> None:
@app.route("/api/admin/access/flags", methods=["GET"])
@require_auth
def admin_list_flags() -> Any:
"""List Keycloak groups that can be applied as approval flags."""
ok, resp = require_portal_admin()
if not ok:
return resp
@ -74,6 +80,12 @@ def register(app) -> None:
@app.route("/api/admin/access/requests/<username>/approve", methods=["POST"])
@require_auth
def admin_approve_request(username: str) -> Any:
"""Approve one verified access request and start provisioning.
WHY: approval should atomically record the admin decision before
best-effort provisioning so status polling can surface any later issue.
"""
ok, resp = require_portal_admin()
if not ok:
return resp
@ -125,6 +137,8 @@ def register(app) -> None:
@app.route("/api/admin/access/requests/<username>/deny", methods=["POST"])
@require_auth
def admin_deny_request(username: str) -> Any:
"""Deny one pending access request with optional admin context."""
ok, resp = require_portal_admin()
if not ok:
return resp
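The approve docstring above notes that the decision is recorded atomically before best-effort provisioning, so status polling can surface later failures. That ordering can be sketched with stand-in `record_status` and `provision` callables (both hypothetical — the real handler uses the access-request tables and `provision_access_request`):

```python
from typing import Callable


def approve_then_provision(
    record_status: Callable[[str], None],
    provision: Callable[[], None],
) -> str:
    """Record the admin decision first, then provision best-effort so a
    later status poll can surface any provisioning error."""
    record_status("approved")  # durable decision, written before side effects
    try:
        provision()
    except Exception:
        record_status("provisioning_error")  # visible to status polling
        return "provisioning_error"
    return "approved"
```

The key property: even when `provision` raises, the "approved" record already exists, so the failure shows up as a follow-up status rather than a lost request.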

View File

@ -13,58 +13,122 @@ from .. import settings
def register(app) -> None:
"""Register the Atlas AI chat and model-info endpoints."""
@app.route("/api/chat", methods=["POST"])
@app.route("/api/ai/chat", methods=["POST"])
def ai_chat() -> Any:
"""Return an Atlasbot answer or a budget-aware fallback message."""
payload = request.get_json(silent=True) or {}
user_message = (payload.get("message") or "").strip()
history = payload.get("history") or []
profile = (payload.get("profile") or payload.get("mode") or "atlas-quick").strip().lower()
conversation_id = payload.get("conversation_id") if isinstance(payload.get("conversation_id"), str) else ""
if not user_message:
return jsonify({"error": "message required"}), 400
started = time.time()
mode = "quick"
if profile in {"atlas-smart", "smart"}:
mode = "smart"
elif profile in {"atlas-genius", "genius"}:
mode = "genius"
reply = _atlasbot_answer(user_message, mode, conversation_id)
source = f"atlas-{mode}"
if reply:
elapsed_ms = int((time.time() - started) * 1000)
return jsonify({"reply": reply, "latency_ms": elapsed_ms, "source": source})
elapsed_ms = int((time.time() - started) * 1000)
if mode == "quick":
budget = max(1, int(round(settings.AI_ATLASBOT_TIMEOUT_QUICK_SEC)))
fallback = (
f"Quick mode hit the {budget}s response budget before finishing. "
"Try atlas-smart for a deeper answer."
)
elif mode == "smart":
budget = max(1, int(round(settings.AI_ATLASBOT_TIMEOUT_SMART_SEC)))
fallback = (
f"Smart mode hit the {budget}s response budget before finishing. "
"Try atlas-genius or ask a narrower follow-up."
)
else:
fallback = "Atlas genius mode timed out before it could finish. Please retry with a narrower prompt."
return jsonify(
{
"reply": fallback,
"latency_ms": elapsed_ms,
"source": source,
}
)
@app.route("/api/chat/info", methods=["GET"])
@app.route("/api/ai/info", methods=["GET"])
def ai_info() -> Any:
"""Return model and placement metadata for the requested AI profile."""
profile = (request.args.get("profile") or "atlas-quick").strip().lower()
meta = _discover_ai_meta(profile)
return jsonify(meta)
_start_keep_warm()
def _atlasbot_answer(message: str, mode: str, conversation_id: str) -> str:
"""Ask Atlasbot for one answer and return an empty string on soft failure."""
endpoint = settings.AI_ATLASBOT_ENDPOINT
if not endpoint:
return ""
headers: dict[str, str] = {}
if settings.AI_ATLASBOT_TOKEN:
headers["X-Internal-Token"] = settings.AI_ATLASBOT_TOKEN
try:
payload = {"prompt": message, "mode": mode}
if conversation_id:
payload["conversation_id"] = conversation_id
with httpx.Client(timeout=_atlasbot_timeout_sec(mode)) as client:
resp = client.post(endpoint, json=payload, headers=headers)
if resp.status_code != 200:
return ""
data = resp.json()
answer = (data.get("reply") or data.get("answer") or "").strip()
return answer
except (httpx.RequestError, ValueError):
return ""
def _atlasbot_timeout_sec(mode: str) -> float:
if mode == "genius":
return settings.AI_ATLASBOT_TIMEOUT_GENIUS_SEC
if mode == "smart":
return settings.AI_ATLASBOT_TIMEOUT_SMART_SEC
return settings.AI_ATLASBOT_TIMEOUT_QUICK_SEC
def _discover_ai_meta(profile: str) -> dict[str, str]:
"""Discover AI model metadata from settings and the running Kubernetes pod.
WHY: the frontend needs a human-readable model/GPU hint even when the model
image or GPU placement changes outside the portal code.
"""
meta = {
"node": settings.AI_NODE_NAME,
"gpu": settings.AI_GPU_DESC,
"model": settings.AI_CHAT_MODEL,
"endpoint": settings.AI_PUBLIC_ENDPOINT or "/api/chat",
"profile": profile,
}
if profile in {"atlas-smart", "smart"}:
meta["model"] = settings.AI_ATLASBOT_MODEL_SMART or settings.AI_CHAT_MODEL
meta["endpoint"] = "/api/ai/chat"
elif profile in {"atlas-genius", "genius"}:
meta["model"] = settings.AI_ATLASBOT_MODEL_GENIUS or settings.AI_CHAT_MODEL
meta["endpoint"] = "/api/ai/chat"
elif profile in {"atlas-quick", "quick"}:
meta["model"] = settings.AI_ATLASBOT_MODEL_FAST or settings.AI_CHAT_MODEL
meta["endpoint"] = "/api/ai/chat"
sa_path = Path("/var/run/secrets/kubernetes.io/serviceaccount")
token_path = sa_path / "token"
@ -118,10 +182,14 @@ def _discover_ai_meta() -> dict[str, str]:
def _start_keep_warm() -> None:
"""Start the optional background keep-warm loop for the chat backend."""
if not settings.AI_WARM_ENABLED or settings.AI_WARM_INTERVAL_SEC <= 0:
return
def loop() -> None:
"""Periodically send a tiny chat request so the backend stays warm."""
while True:
time.sleep(settings.AI_WARM_INTERVAL_SEC)
try:
@ -136,4 +204,3 @@ def _start_keep_warm() -> None:
continue
threading.Thread(target=loop, daemon=True, name="ai-keep-warm").start()
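The profile handling in `_discover_ai_meta` reduces to a plain fallback table: a per-profile model setting wins when configured, otherwise the base chat model applies. A minimal standalone sketch of that rule (the `resolve_model` helper and the model names are illustrative, not portal code):

```python
# Sketch of the profile -> model fallback used by _discover_ai_meta.
# DEFAULT_MODEL plays the role of settings.AI_CHAT_MODEL; names are made up.
DEFAULT_MODEL = "chat-default"

PROFILE_MODELS = {
    "atlas-quick": "fast-model",
    "quick": "fast-model",
    "atlas-smart": "",  # unset per-profile model: falls back to the default
    "smart": "",
    "atlas-genius": "genius-model",
    "genius": "genius-model",
}

def resolve_model(profile: str) -> str:
    """Return the configured model for a profile, falling back to the default."""
    configured = PROFILE_MODELS.get(profile.strip().lower(), "")
    return configured or DEFAULT_MODEL
```

Unknown profiles and profiles with an empty per-profile setting both resolve to the default, which matches the `or settings.AI_CHAT_MODEL` fallbacks above.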


@ -9,8 +9,12 @@ from .. import settings
def register(app) -> None:
"""Expose the login URLs the frontend needs for auth state rendering."""
@app.route("/api/auth/config", methods=["GET"])
def auth_config() -> Any:
"""Render the auth configuration payload consumed by the SPA."""
if not settings.KEYCLOAK_ENABLED:
return jsonify({"enabled": False})


@ -6,7 +6,10 @@ from flask import jsonify
def register(app) -> None:
"""Register the lightweight health endpoint on the Flask app."""
@app.route("/api/healthz")
def healthz() -> Any:
return jsonify({"ok": True})
"""Return the basic liveness payload used by probes and tests."""
return jsonify({"ok": True})


@ -15,6 +15,8 @@ _LAB_STATUS_CACHE: dict[str, Any] = {"ts": 0.0, "value": None}
def _vm_query(expr: str) -> float | None:
"""Run one instant VictoriaMetrics query and return the largest value."""
url = f"{settings.VM_BASE_URL}/api/v1/query?{urlencode({'query': expr})}"
with urlopen(url, timeout=settings.VM_QUERY_TIMEOUT_SEC) as resp:
payload = json.loads(resp.read().decode("utf-8"))
@ -40,6 +42,8 @@ def _vm_query(expr: str) -> float | None:
def _http_ok(url: str, expect_substring: str | None = None) -> bool:
"""Return whether a URL responds successfully and optionally contains text."""
try:
with urlopen(url, timeout=settings.HTTP_CHECK_TIMEOUT_SEC) as resp:
if getattr(resp, "status", 200) != 200:
@ -53,8 +57,12 @@ def _http_ok(url: str, expect_substring: str | None = None) -> bool:
def register(app) -> None:
"""Register the lightweight lab connectivity status endpoint."""
@app.route("/api/lab/status")
def lab_status() -> Any:
"""Return cached Atlas/Oceanus health hints for the home page."""
now = time.time()
cached = _LAB_STATUS_CACHE.get("value")
if cached and (now - float(_LAB_STATUS_CACHE.get("ts", 0.0)) < settings.LAB_STATUS_CACHE_SEC):


@ -11,12 +11,15 @@ from .. import settings
def register(app) -> None:
"""Expose the Monero node health endpoint through Flask."""
@app.route("/api/monero/get_info")
def monero_get_info() -> Any:
"""Proxy `get_info` from the Monero daemon with a predictable response."""
try:
with urlopen(settings.MONERO_GET_INFO_URL, timeout=2) as resp:
payload = json.loads(resp.read().decode("utf-8"))
return jsonify(payload)
except (URLError, TimeoutError, ValueError) as exc:
return jsonify({"error": str(exc), "url": settings.MONERO_GET_INFO_URL}), 503


@ -4,6 +4,8 @@ import os
def _env_bool(name: str, default: str = "false") -> bool:
"""Parse a truthy environment variable with the repo's boolean semantics."""
return os.getenv(name, default).lower() in ("1", "true", "yes")
@ -26,6 +28,19 @@ AI_CHAT_SYSTEM_PROMPT = os.getenv(
"You are the Titan Lab assistant for bstein.dev. Be concise and helpful.",
)
AI_CHAT_TIMEOUT_SEC = float(os.getenv("AI_CHAT_TIMEOUT_SEC", "20"))
AI_ATLASBOT_ENDPOINT = os.getenv("AI_ATLASBOT_ENDPOINT", "").strip()
AI_ATLASBOT_TOKEN = os.getenv("AI_ATLASBOT_TOKEN", "").strip()
AI_ATLASBOT_TIMEOUT_SEC = float(os.getenv("AI_ATLASBOT_TIMEOUT_SEC", "5"))
AI_ATLASBOT_TIMEOUT_QUICK_SEC = float(os.getenv("AI_ATLASBOT_TIMEOUT_QUICK_SEC", "15"))
AI_ATLASBOT_TIMEOUT_SMART_SEC = float(
os.getenv("AI_ATLASBOT_TIMEOUT_SMART_SEC", str(max(AI_ATLASBOT_TIMEOUT_SEC, 45)))
)
AI_ATLASBOT_TIMEOUT_GENIUS_SEC = float(
os.getenv("AI_ATLASBOT_TIMEOUT_GENIUS_SEC", str(max(AI_ATLASBOT_TIMEOUT_SEC, 180)))
)
AI_ATLASBOT_MODEL_FAST = os.getenv("AI_ATLASBOT_MODEL_FAST", "").strip()
AI_ATLASBOT_MODEL_SMART = os.getenv("AI_ATLASBOT_MODEL_SMART", "").strip()
AI_ATLASBOT_MODEL_GENIUS = os.getenv("AI_ATLASBOT_MODEL_GENIUS", "").strip()
AI_NODE_NAME = os.getenv("AI_CHAT_NODE_NAME") or os.getenv("AI_NODE_NAME") or "ai-cluster"
AI_GPU_DESC = os.getenv("AI_CHAT_GPU_DESC") or "local GPU (dynamic)"
AI_PUBLIC_ENDPOINT = os.getenv("AI_PUBLIC_CHAT_ENDPOINT", "https://chat.ai.bstein.dev/api/chat")
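The smart/genius timeout defaults use a `max()` floor: the derived timeout is never below its floor (45s or 180s) and never below the base `AI_ATLASBOT_TIMEOUT_SEC`, so raising the base timeout carries through. A standalone sketch with illustrative values (env names match the settings above):

```python
import os

# Illustrative values; the env names match the settings module above.
os.environ["AI_ATLASBOT_TIMEOUT_SEC"] = "60"
os.environ.pop("AI_ATLASBOT_TIMEOUT_SMART_SEC", None)

base = float(os.getenv("AI_ATLASBOT_TIMEOUT_SEC", "5"))
# Default floored at 45s, but also never below the base timeout, so a base
# raised above the floor automatically raises the derived smart timeout too.
smart = float(os.getenv("AI_ATLASBOT_TIMEOUT_SMART_SEC", str(max(base, 45))))

assert base == 60.0
assert smart == 60.0
```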
@ -60,6 +75,13 @@ ACCOUNT_ALLOWED_GROUPS = [
]
PORTAL_DATABASE_URL = os.getenv("PORTAL_DATABASE_URL", "").strip()
PORTAL_DB_POOL_MIN = int(os.getenv("PORTAL_DB_POOL_MIN", "0"))
PORTAL_DB_POOL_MAX = int(os.getenv("PORTAL_DB_POOL_MAX", "5"))
PORTAL_DB_CONNECT_TIMEOUT_SEC = int(os.getenv("PORTAL_DB_CONNECT_TIMEOUT_SEC", "5"))
PORTAL_DB_LOCK_TIMEOUT_SEC = int(os.getenv("PORTAL_DB_LOCK_TIMEOUT_SEC", "5"))
PORTAL_DB_STATEMENT_TIMEOUT_SEC = int(os.getenv("PORTAL_DB_STATEMENT_TIMEOUT_SEC", "30"))
PORTAL_DB_IDLE_IN_TX_TIMEOUT_SEC = int(os.getenv("PORTAL_DB_IDLE_IN_TX_TIMEOUT_SEC", "10"))
PORTAL_RUN_MIGRATIONS = _env_bool("PORTAL_RUN_MIGRATIONS", "false")
PORTAL_ADMIN_USERS = [u.strip() for u in os.getenv("PORTAL_ADMIN_USERS", "bstein").split(",") if u.strip()]
PORTAL_ADMIN_GROUPS = [g.strip() for g in os.getenv("PORTAL_ADMIN_GROUPS", "admin").split(",") if g.strip()]


@ -10,11 +10,19 @@ from . import settings
def random_password(length: int = 32) -> str:
"""Generate a URL-safe mixed-case password for one-off account bootstrap."""
alphabet = string.ascii_letters + string.digits
return "".join(secrets.choice(alphabet) for _ in range(length))
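`random_password` draws from the `secrets` CSPRNG over letters and digits, so the result is URL-safe by construction. A self-contained usage sketch:

```python
import secrets
import string

def random_password(length: int = 32) -> str:
    """Generate an alphanumeric password via the CSPRNG in `secrets`."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
assert len(pw) == 32
assert pw.isalnum()  # letters and digits only, hence URL-safe
```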
def best_effort_post(url: str) -> None:
"""Fire-and-forget a JSON ping without letting transport failures bubble.
WHY: background sync helpers should keep moving even if the destination is
briefly unavailable or the cluster network is in a bad state.
"""
if not url:
return
try:
@ -22,4 +30,3 @@ def best_effort_post(url: str) -> None:
client.post(url, json={"ts": int(time.time())})
except Exception:
return


@ -28,6 +28,8 @@ def _read_service_account() -> tuple[str, str]:
def _k8s_get_json(path: str) -> dict[str, Any]:
"""Fetch a Kubernetes object as JSON through the pod service account."""
token, ca_path = _read_service_account()
url = f"{_K8S_BASE_URL}{path}"
with httpx.Client(
@ -44,12 +46,16 @@ def _k8s_get_json(path: str) -> dict[str, Any]:
def _k8s_find_pod_ip(namespace: str, label_selector: str) -> str:
"""Find a usable Vaultwarden pod IP for direct admin fallback."""
data = _k8s_get_json(f"/api/v1/namespaces/{namespace}/pods?labelSelector={label_selector}")
items = data.get("items") or []
if not isinstance(items, list) or not items:
raise RuntimeError("no vaultwarden pods found")
def _pod_ready(pod: dict[str, Any]) -> bool:
"""Return whether a listed pod is running and ready enough to contact."""
status = pod.get("status") if isinstance(pod.get("status"), dict) else {}
if status.get("phase") != "Running":
return False
@ -74,6 +80,8 @@ def _k8s_find_pod_ip(namespace: str, label_selector: str) -> str:
def _k8s_get_secret_value(namespace: str, name: str, key: str) -> str:
"""Read and decode one Kubernetes Secret value."""
data = _k8s_get_json(f"/api/v1/namespaces/{namespace}/secrets/{name}")
blob = data.get("data") if isinstance(data.get("data"), dict) else {}
raw = blob.get(key)
@ -90,6 +98,8 @@ def _k8s_get_secret_value(namespace: str, name: str, key: str) -> str:
@dataclass(frozen=True)
class VaultwardenInvite:
"""Describe the result of attempting to create a Vaultwarden invite."""
ok: bool
status: str
detail: str = ""
@ -103,6 +113,12 @@ _ADMIN_RATE_LIMITED_UNTIL: float = 0.0
def _admin_session(base_url: str) -> httpx.Client:
"""Return a cached authenticated Vaultwarden admin session.
WHY: Vaultwarden rate-limits admin login attempts, so invite creation must
reuse a short-lived session instead of logging in for every user action.
"""
global _ADMIN_SESSION, _ADMIN_SESSION_EXPIRES_AT, _ADMIN_SESSION_BASE_URL, _ADMIN_RATE_LIMITED_UNTIL
now = time.time()
with _ADMIN_LOCK:
@ -146,6 +162,8 @@ def _admin_session(base_url: str) -> httpx.Client:
def invite_user(email: str) -> VaultwardenInvite:
"""Invite one email address to Vaultwarden through the admin UI."""
global _ADMIN_RATE_LIMITED_UNTIL
email = (email or "").strip()
if not email or "@" not in email:


@ -21,6 +21,8 @@ def _job_from_cronjob(
email: str,
password: str,
) -> dict[str, Any]:
"""Render a one-off Wger user sync Job from the managed CronJob template."""
spec = cronjob.get("spec") if isinstance(cronjob.get("spec"), dict) else {}
jt = spec.get("jobTemplate") if isinstance(spec.get("jobTemplate"), dict) else {}
job_spec = jt.get("spec") if isinstance(jt.get("spec"), dict) else {}
@ -71,6 +73,8 @@ def _job_from_cronjob(
def _job_succeeded(job: dict[str, Any]) -> bool:
"""Return whether Kubernetes reports the sync Job as successfully complete."""
status = job.get("status") if isinstance(job.get("status"), dict) else {}
if int(status.get("succeeded") or 0) > 0:
return True
@ -84,6 +88,8 @@ def _job_succeeded(job: dict[str, Any]) -> bool:
def _job_failed(job: dict[str, Any]) -> bool:
"""Return whether Kubernetes reports the sync Job as failed."""
status = job.get("status") if isinstance(job.get("status"), dict) else {}
if int(status.get("failed") or 0) > 0:
return True
@ -97,6 +103,12 @@ def _job_failed(job: dict[str, Any]) -> bool:
def trigger(username: str, email: str, password: str, wait: bool = True) -> dict[str, Any]:
"""Start the Wger sync Job for one user and optionally wait for completion.
WHY: account actions need an immediate per-user repair path without
mutating the reusable CronJob template that Flux owns.
"""
username = (username or "").strip()
if not username:
raise RuntimeError("missing username")


@ -0,0 +1,3 @@
pytest==8.3.4
pytest-cov==6.0.0
pytest-mock==3.14.0


@ -1,6 +1,8 @@
flask==3.0.3
flask-cors==6.0.2
gunicorn==25.3.0
httpx==0.27.2
PyJWT[crypto]==2.12.1
# Keep the binary extra so CI runners do not need host libpq packages.
psycopg[binary]==3.2.13
psycopg-pool==3.2.6

backend/tests/conftest.py

@ -0,0 +1,15 @@
"""Pytest bootstrap for backend tests.
The backend package lives under `backend/`, so test runs from the repository
root need that directory on `sys.path` before importing `atlas_portal`.
"""
from __future__ import annotations
import sys
from pathlib import Path
ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
sys.path.insert(0, str(ROOT))


@ -0,0 +1,356 @@
from __future__ import annotations
from contextlib import contextmanager
from types import SimpleNamespace
from typing import Any
from flask import Flask
from atlas_portal.routes.access_request_onboarding import register_access_request_onboarding
class DummyResult:
def __init__(self, row: dict[str, Any] | None = None) -> None:
self.row = row
def fetchone(self) -> dict[str, Any] | None:
return self.row
class DummyConn:
def __init__(self, row: dict[str, Any] | None = None, *, fail: bool = False) -> None:
self.row = row
self.fail = fail
self.executed: list[tuple[str, object | None]] = []
def execute(self, query: str, params: object | None = None) -> DummyResult:
self.executed.append((query, params))
if self.fail:
raise RuntimeError("database failed")
return DummyResult(self.row)
class DummyOidc:
def __init__(self, *, fail: bool = False, claims: dict[str, Any] | None = None) -> None:
self.fail = fail
self.claims = claims or {"preferred_username": "alice", "groups": ["/vaultwarden_grandfathered"]}
def verify(self, token: str) -> dict[str, Any]:
if self.fail:
raise RuntimeError("bad token")
return self.claims
class DummyAdmin:
def __init__(
self,
*,
ready: bool = True,
user: dict[str, Any] | None = None,
full: dict[str, Any] | None = None,
fail_attrs: bool = False,
) -> None:
self._ready = ready
self.user = user if user is not None else {"id": "user-1"}
self.full = full if full is not None else {"requiredActions": []}
self.fail_attrs = fail_attrs
self.attributes: list[tuple[str, str, str]] = []
self.updated: list[tuple[str, dict[str, Any]]] = []
def ready(self) -> bool:
return self._ready
def set_user_attribute(self, username: str, key: str, value: str) -> None:
if self.fail_attrs:
raise RuntimeError("attribute update failed")
self.attributes.append((username, key, value))
def find_user(self, username: str) -> dict[str, Any] | None:
return self.user
def get_user(self, user_id: str) -> dict[str, Any]:
return self.full
def update_user_safe(self, user_id: str, payload: dict[str, Any]) -> None:
self.updated.append((user_id, payload))
class DummyDeps:
ONBOARDING_STEPS = {
"profile_reviewed",
"vaultwarden_master_password",
"vaultwarden_store_temp_password",
"keycloak_password_rotated",
}
KEYCLOAK_MANAGED_STEPS = {"keycloak_password_rotated"}
ONBOARDING_STEP_PREREQUISITES = {
"vaultwarden_master_password": {"profile_reviewed"},
"keycloak_password_rotated": {"profile_reviewed"},
}
VAULTWARDEN_GRANDFATHERED_FLAG = "vaultwarden_grandfathered"
_KEYCLOAK_PASSWORD_ROTATION_REQUESTED_ARTIFACT = "keycloak_password_rotation_requested"
def __init__(self, conn: DummyConn | None = None) -> None:
self.configured_value = True
self.conn = conn or DummyConn(self.request_row())
self.oidc = DummyOidc()
self.admin = DummyAdmin()
self.completed_steps: set[str] = {"profile_reviewed"}
self.rotation_requested = True
self.request_rotation_fails = False
self.user_in_group = False
self.recovery_email = "alice@example.dev"
self.advanced_status = "ready"
def request_row(self, **overrides: Any) -> dict[str, Any]:
row = {
"username": "alice",
"status": "awaiting_onboarding",
"approval_flags": [],
"contact_email": "alice@example.dev",
}
row.update(overrides)
return row
def configured(self) -> bool:
return self.configured_value
@contextmanager
def connect(self):
yield self.conn
def oidc_client(self) -> DummyOidc:
return self.oidc
def admin_client(self) -> DummyAdmin:
return self.admin
def _normalize_status(self, status: str) -> str:
return "accounts_building" if status == "approved" else (status or "unknown")
def _normalize_flag_list(self, raw: Any) -> set[str]:
return {item for item in raw if isinstance(item, str)} if isinstance(raw, list) else set()
def _completed_onboarding_steps(self, conn: DummyConn, code: str, username: str) -> set[str]:
return self.completed_steps
def _password_rotation_requested(self, conn: DummyConn, code: str) -> bool:
return self.rotation_requested
def _request_keycloak_password_rotation(self, conn: DummyConn, code: str, username: str) -> None:
if self.request_rotation_fails:
raise RuntimeError("rotation request failed")
self.rotation_requested = True
def _user_in_group(self, username: str, group: str) -> bool:
return self.user_in_group
def _resolve_recovery_email(self, username: str, fallback: str) -> str:
return self.recovery_email or fallback
def _advance_status(self, conn: DummyConn, code: str, username: str, status: str) -> str:
return self.advanced_status
def _onboarding_payload(self, conn: DummyConn, code: str, username: str) -> dict[str, str]:
return {"code": code, "username": username}
def make_client(deps: DummyDeps):
app = Flask(__name__)
register_access_request_onboarding(app, deps)
return app.test_client()
def test_attest_preflight_token_and_lookup_paths() -> None:
deps = DummyDeps()
client = make_client(deps)
deps.configured_value = False
assert client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "profile_reviewed"}).status_code == 503
deps.configured_value = True
assert client.post("/api/access/request/onboarding/attest", json={"step": "profile_reviewed"}).status_code == 400
assert client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "missing"}).status_code == 400
assert client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "keycloak_password_rotated"}).status_code == 400
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "profile_reviewed"},
headers={"Authorization": "bad"},
).status_code == 401
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "profile_reviewed"},
headers={"Authorization": "Bearer "},
).status_code == 401
deps.oidc = DummyOidc(fail=True)
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "profile_reviewed"},
headers={"Authorization": "Bearer token"},
).status_code == 401
deps.oidc = DummyOidc(claims={"preferred_username": "other", "groups": []})
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "profile_reviewed"},
headers={"Authorization": "Bearer token"},
).status_code == 403
deps.oidc = DummyOidc()
deps.conn = DummyConn(None)
assert client.post("/api/access/request/onboarding/attest", json={"code": "missing", "step": "profile_reviewed"}).status_code == 404
deps.conn = DummyConn(deps.request_row(status="pending"))
assert client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "profile_reviewed"}).status_code == 409
def test_attest_prerequisites_rotation_and_manual_clear_paths() -> None:
deps = DummyDeps()
client = make_client(deps)
deps.completed_steps = set()
response = client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "vaultwarden_master_password"})
assert response.status_code == 409
assert response.get_json()["blocked_by"] == ["profile_reviewed"]
deps.completed_steps = {"profile_reviewed"}
deps.rotation_requested = False
deps.request_rotation_fails = True
assert client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "vaultwarden_store_temp_password"}).status_code == 502
deps.request_rotation_fails = False
response = client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "profile_reviewed", "completed": False})
assert response.status_code == 200
assert any("DELETE FROM access_request_onboarding_steps" in query for query, _ in deps.conn.executed)
deps.conn = DummyConn(deps.request_row(), fail=True)
assert client.post("/api/access/request/onboarding/attest", json={"code": "code", "step": "profile_reviewed"}).status_code == 502
def test_attest_vaultwarden_claim_and_attribute_paths() -> None:
deps = DummyDeps()
client = make_client(deps)
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "vaultwarden_master_password", "vaultwarden_claim": True},
).status_code == 401
deps.oidc = DummyOidc(claims={"preferred_username": "alice", "groups": []})
deps.completed_steps = {"profile_reviewed"}
deps.user_in_group = False
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "vaultwarden_master_password", "vaultwarden_claim": True},
headers={"Authorization": "Bearer token"},
).status_code == 403
deps.conn = DummyConn(deps.request_row(approval_flags=[deps.VAULTWARDEN_GRANDFATHERED_FLAG]))
deps.admin = DummyAdmin(ready=False)
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "vaultwarden_master_password", "vaultwarden_claim": True},
headers={"Authorization": "Bearer token"},
).status_code == 503
deps.admin = DummyAdmin()
deps.recovery_email = ""
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "vaultwarden_master_password", "vaultwarden_claim": True},
headers={"Authorization": "Bearer token"},
).status_code == 200
deps.recovery_email = "recovery@example.dev"
response = client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "vaultwarden_master_password", "vaultwarden_claim": True},
headers={"Authorization": "Bearer token"},
)
assert response.status_code == 200
assert ("alice", "vaultwarden_email", "recovery@example.dev") in deps.admin.attributes
assert any("INSERT INTO access_request_onboarding_steps" in query for query, _ in deps.conn.executed)
deps.admin = DummyAdmin()
response = client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "vaultwarden_master_password"},
)
assert response.status_code == 200
assert ("alice", "vaultwarden_status", "already_present") in deps.admin.attributes
deps.admin = DummyAdmin(fail_attrs=True)
assert client.post(
"/api/access/request/onboarding/attest",
json={"code": "code", "step": "vaultwarden_master_password"},
).status_code == 502
def test_keycloak_rotate_preflight_and_lookup_paths() -> None:
deps = DummyDeps()
client = make_client(deps)
deps.configured_value = False
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"}).status_code == 503
deps.configured_value = True
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={}).status_code == 400
assert client.post(
"/api/access/request/onboarding/keycloak-password-rotate",
json={"code": "code"},
headers={"Authorization": "bad"},
).status_code == 401
deps.oidc = DummyOidc(fail=True)
assert client.post(
"/api/access/request/onboarding/keycloak-password-rotate",
json={"code": "code"},
headers={"Authorization": "Bearer token"},
).status_code == 401
deps.oidc = DummyOidc()
deps.admin = DummyAdmin(ready=False)
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"}).status_code == 503
deps.admin = DummyAdmin()
deps.conn = DummyConn(None)
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "missing"}).status_code == 404
deps.conn = DummyConn(deps.request_row())
deps.oidc = DummyOidc(claims={"preferred_username": "other"})
assert client.post(
"/api/access/request/onboarding/keycloak-password-rotate",
json={"code": "code"},
headers={"Authorization": "Bearer token"},
).status_code == 403
deps.oidc = DummyOidc()
deps.conn = DummyConn(deps.request_row(status="pending"))
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"}).status_code == 409
deps.conn = DummyConn(deps.request_row())
deps.completed_steps = set()
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"}).status_code == 409
def test_keycloak_rotate_success_and_error_paths() -> None:
deps = DummyDeps()
deps.rotation_requested = False
deps.completed_steps = {"profile_reviewed"}
deps.admin = DummyAdmin(full={"requiredActions": ["CONFIGURE_TOTP"]})
client = make_client(deps)
response = client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"})
assert response.status_code == 200
assert deps.admin.updated == [("user-1", {"requiredActions": ["CONFIGURE_TOTP", "UPDATE_PASSWORD"]})]
assert any("INSERT INTO access_request_onboarding_artifacts" in query for query, _ in deps.conn.executed)
deps.rotation_requested = True
deps.admin = DummyAdmin(full={"requiredActions": []})
response = client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"})
assert response.status_code == 200
assert deps.admin.updated == []
deps.admin = DummyAdmin(user={})
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"}).status_code == 409
deps.conn = DummyConn(deps.request_row(), fail=True)
assert client.post("/api/access/request/onboarding/keycloak-password-rotate", json={"code": "code"}).status_code == 502


@ -0,0 +1,389 @@
from __future__ import annotations
from datetime import datetime, timedelta, timezone
from typing import Any
from flask import Flask
import pytest
from atlas_portal.routes import access_request_state as state
class DummyResult:
def __init__(self, row: dict[str, Any] | None = None, rows: list[dict[str, Any]] | None = None) -> None:
self.row = row
self.rows = rows or []
def fetchone(self) -> dict[str, Any] | None:
return self.row
def fetchall(self) -> list[dict[str, Any]]:
return self.rows
class DummyConn:
def __init__(
self,
*,
rows_by_query: dict[str, dict[str, Any] | None] | None = None,
many_by_query: dict[str, list[dict[str, Any]]] | None = None,
) -> None:
self.rows_by_query = rows_by_query or {}
self.many_by_query = many_by_query or {}
self.executed: list[tuple[str, object | None]] = []
def execute(self, query: str, params: object | None = None) -> DummyResult:
self.executed.append((query, params))
for key, rows in self.many_by_query.items():
if key in query:
return DummyResult(rows=rows)
for key, row in self.rows_by_query.items():
if key in query:
return DummyResult(row=row)
return DummyResult()
class DummyAdmin:
def __init__(
self,
*,
ready: bool = True,
user: dict[str, Any] | None = None,
full: dict[str, Any] | None = None,
groups: list[str] | None = None,
fail_find: bool = False,
fail_get: bool = False,
fail_update: bool = False,
) -> None:
self._ready = ready
self.user = user if user is not None else {"id": "user-1"}
self.full = full if full is not None else {}
self.groups = groups or []
self.fail_find = fail_find
self.fail_get = fail_get
self.fail_update = fail_update
self.updated: list[tuple[str, dict[str, Any]]] = []
def ready(self) -> bool:
return self._ready
def find_user(self, username: str) -> dict[str, Any] | None:
if self.fail_find:
raise RuntimeError("lookup failed")
return self.user
def list_user_groups(self, user_id: str) -> list[str]:
return self.groups
def get_user(self, user_id: str) -> dict[str, Any]:
if self.fail_get:
raise RuntimeError("get failed")
return self.full
def update_user_safe(self, user_id: str, payload: dict[str, Any]) -> None:
if self.fail_update:
raise RuntimeError("update failed")
self.updated.append((user_id, payload))
def test_request_payload_names_username_and_client_ip(monkeypatch) -> None:
app = Flask(__name__)
monkeypatch.setattr(state.secrets, "choice", lambda alphabet: "A")
with app.test_request_context(
"/request",
json={
"username": " alice ",
"email": " alice@example.com ",
"note": " hello ",
"first_name": " Alice ",
"last_name": " Atlas ",
},
headers={"X-Forwarded-For": "203.0.113.10, 10.0.0.1"},
):
assert state._extract_request_payload() == (
"alice",
"alice@example.com",
"hello",
"Alice",
"Atlas",
)
assert state._client_ip() == "203.0.113.10"
with app.test_request_context("/request", headers={"X-Real-IP": "198.51.100.5"}):
assert state._client_ip() == "198.51.100.5"
with app.test_request_context("/request", environ_base={"REMOTE_ADDR": "192.0.2.10"}):
assert state._client_ip() == "192.0.2.10"
assert state._normalize_name(" Alice Atlas ") == "Alice Atlas"
assert state._validate_name("", label="last name", required=True) == "last name is required"
assert state._validate_name("A" * 81, label="last name", required=True) == "last name must be 1-80 characters"
assert state._validate_name("Alice\tAtlas", label="last name", required=True) == "last name contains invalid whitespace"
assert state._validate_name("", label="first name", required=False) is None
assert state._validate_name("Alice Atlas", label="last name", required=True) is None
assert state._validate_username("") == "username is required"
assert state._validate_username("ab") == "username must be 3-32 characters"
assert state._validate_username("bad name") == "username contains invalid characters"
assert state._validate_username("alice_ok-1") is None
assert state._random_request_code("alice") == "alice~AAAAAAAAAA"
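The client-IP resolution exercised above takes the first hop of `X-Forwarded-For`, then `X-Real-IP`, then the socket address. A minimal sketch of that precedence (the standalone `client_ip` helper is illustrative, not the portal's `_client_ip`):

```python
def client_ip(headers: dict[str, str], remote_addr: str) -> str:
    """Resolve the client IP: first X-Forwarded-For hop, then X-Real-IP, then peer."""
    forwarded = headers.get("X-Forwarded-For", "")
    if forwarded:
        # The left-most entry is the original client; later hops are proxies.
        return forwarded.split(",")[0].strip()
    return headers.get("X-Real-IP", "").strip() or remote_addr

assert client_ip({"X-Forwarded-For": "203.0.113.10, 10.0.0.1"}, "192.0.2.10") == "203.0.113.10"
assert client_ip({"X-Real-IP": "198.51.100.5"}, "192.0.2.10") == "198.51.100.5"
assert client_ip({}, "192.0.2.10") == "192.0.2.10"
```

Trusting the left-most `X-Forwarded-For` entry assumes the proxy in front strips client-supplied copies of that header, as the tests' mocked values do.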
def test_verification_url_email_and_error_paths(monkeypatch) -> None:
sent: dict[str, str] = {}
def fake_send_text_email(*, to_addr: str, subject: str, body: str) -> None:
sent.update({"to": to_addr, "subject": subject, "body": body})
monkeypatch.setattr(state.settings, "PORTAL_PUBLIC_BASE_URL", "https://portal.example.dev/")
monkeypatch.setattr(state, "send_text_email", fake_send_text_email)
state._send_verification_email(request_code="alice CODE", email="alice@example.dev", token="tok/1")
assert sent["to"] == "alice@example.dev"
assert "confirm your email" in sent["subject"]
assert "alice%20CODE" in sent["body"]
assert "tok/1" not in sent["body"]
with pytest.raises(state.VerificationError) as missing:
state._verify_request(DummyConn(), "missing", "tok")
assert missing.value.status_code == 404
non_pending = DummyConn(rows_by_query={"SELECT status": {"status": "approved"}})
assert state._verify_request(non_pending, "code", "tok") == "accounts_building"
no_hash = DummyConn(rows_by_query={"SELECT status": {"status": state.EMAIL_VERIFY_PENDING_STATUS}})
with pytest.raises(state.VerificationError) as no_token:
state._verify_request(no_hash, "code", "tok")
assert no_token.value.status_code == 409
bad_hash = DummyConn(
rows_by_query={
"SELECT status": {
"status": state.EMAIL_VERIFY_PENDING_STATUS,
"email_verification_token_hash": state._hash_verification_token("other"),
}
}
)
with pytest.raises(state.VerificationError) as invalid:
state._verify_request(bad_hash, "code", "tok")
assert invalid.value.status_code == 401
expired_at = datetime.now(timezone.utc) - timedelta(seconds=state.settings.ACCESS_REQUEST_EMAIL_VERIFY_TTL_SEC + 5)
expired = DummyConn(
rows_by_query={
"SELECT status": {
"status": state.EMAIL_VERIFY_PENDING_STATUS,
"email_verification_token_hash": state._hash_verification_token("tok"),
"email_verification_sent_at": expired_at,
}
}
)
with pytest.raises(state.VerificationError) as expired_error:
state._verify_request(expired, "code", "tok")
assert expired_error.value.status_code == 410
success = DummyConn(
rows_by_query={
"SELECT status": {
"status": state.EMAIL_VERIFY_PENDING_STATUS,
"email_verification_token_hash": state._hash_verification_token("tok"),
"email_verification_sent_at": datetime.now(timezone.utc),
}
}
)
assert state._verify_request(success, "code", "tok") == "pending"
assert any("UPDATE access_requests" in query for query, _ in success.executed)
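These tests compare a hash of the presented token against the stored `email_verification_token_hash` rather than the raw token. A sketch of that store-only-the-digest pattern using SHA-256 and a constant-time compare (the real `_hash_verification_token` internals may differ):

```python
import hashlib
import hmac

def hash_token(token: str) -> str:
    """Hash the one-time token so only a digest is ever persisted."""
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

def token_matches(presented: str, stored_hash: str) -> bool:
    """Constant-time comparison to avoid leaking digest prefixes via timing."""
    return hmac.compare_digest(hash_token(presented), stored_hash)

stored = hash_token("tok")
assert token_matches("tok", stored) is True
assert token_matches("other", stored) is False
```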
def test_onboarding_flags_groups_and_recovery_email(monkeypatch) -> None:
    conn = DummyConn(
        rows_by_query={
            "SELECT approval_flags": {
                "approval_flags": [state.VAULTWARDEN_GRANDFATHERED_FLAG, "", 7],
                "contact_email": " contact@example.dev ",
            }
        },
        many_by_query={
            "SELECT step FROM access_request_onboarding_steps": [
                {"step": "keycloak_password_rotated"},
                {"step": ""},
                {"step": 7},
            ]
        },
    )
    assert state._fetch_completed_onboarding_steps(conn, "code") == {"keycloak_password_rotated"}
    assert state._normalize_flag_list("one") == {"one"}
    assert state._normalize_flag_list(["one", "", 2]) == {"one"}
    assert state._normalize_flag_list(None) == set()
    assert state._fetch_request_flags_and_email(conn, "code") == (
        {state.VAULTWARDEN_GRANDFATHERED_FLAG},
        "contact@example.dev",
    )
    assert state._fetch_request_flags_and_email(DummyConn(), "missing") == (set(), "")
    assert state._vaultwarden_grandfathered(conn, "code", "alice") == (True, "contact@example.dev")
    admin = DummyAdmin(groups=[state.VAULTWARDEN_GRANDFATHERED_FLAG], full={"email": "real@example.dev"})
    monkeypatch.setattr(state, "admin_client", lambda: admin)
    assert state._user_in_group("", "group") is False
    assert state._user_in_group("alice", "") is False
    assert state._user_in_group("alice", state.VAULTWARDEN_GRANDFATHERED_FLAG) is True
    assert state._vaultwarden_grandfathered(DummyConn(), "missing", "alice") == (True, "")
    assert state._resolve_recovery_email("alice", "fallback@example.dev") == "real@example.dev"
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(ready=False))
    assert state._user_in_group("alice", "group") is False
    assert state._resolve_recovery_email("alice", "fallback@example.dev") == "fallback@example.dev"
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(user={"id": ""}))
    assert state._user_in_group("alice", "group") is False
    assert state._vaultwarden_grandfathered(DummyConn(), "missing", "alice") == (False, "")
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(fail_find=True))
    assert state._user_in_group("alice", "group") is False
    assert state._resolve_recovery_email("alice", "fallback@example.dev") == "fallback@example.dev"


def test_keycloak_rotation_and_auto_completed_steps(monkeypatch) -> None:
    conn = DummyConn(rows_by_query={"SELECT 1": {"exists": True}})
    admin = DummyAdmin(full={"requiredActions": ["CONFIGURE_TOTP"], "attributes": {"mailu_app_password": ["pw"]}})
    monkeypatch.setattr(state, "admin_client", lambda: admin)
    state._request_keycloak_password_rotation(conn, "code", "alice")
    assert admin.updated == [("user-1", {"requiredActions": ["CONFIGURE_TOTP", "UPDATE_PASSWORD"]})]
    assert any("INSERT INTO access_request_onboarding_artifacts" in query for query, _ in conn.executed)
    assert state._password_rotation_requested(conn, "code") is True
    with pytest.raises(ValueError):
        state._request_keycloak_password_rotation(conn, "code", "")
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(ready=False))
    with pytest.raises(RuntimeError):
        state._request_keycloak_password_rotation(conn, "code", "alice")
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(user={}))
    with pytest.raises(RuntimeError):
        state._request_keycloak_password_rotation(conn, "code", "alice")
    attrs = {
        "vaultwarden_status": ["active"],
        "nextcloud_mail_synced_at": ["now"],
        "firefly_password_rotated_at": "now",
        "wger_password_rotated_at": "now",
    }
    assert state._extract_attr({"key": ["", "value"]}, "key") == "value"
    assert state._extract_attr({"key": "value"}, "key") == "value"
    assert state._extract_attr({"key": ["", 7]}, "key") == ""
    assert state._extract_attr([], "key") == ""
    assert state._auto_completed_service_steps(attrs) == {
        "vaultwarden_master_password",
        "nextcloud_mail_integration",
        "firefly_password_rotated",
        "wger_password_rotated",
    }
    assert state._auto_completed_service_steps([]) == set()
    admin = DummyAdmin(full={"requiredActions": ["CONFIGURE_TOTP"], "attributes": attrs})
    monkeypatch.setattr(state, "admin_client", lambda: admin)
    completed = state._auto_completed_keycloak_steps(conn, "code", "alice")
    assert "keycloak_password_rotated" in completed
    assert admin.updated[-1] == ("user-1", {"requiredActions": []})
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(user={"id": ""}))
    assert state._auto_completed_keycloak_steps(conn, "code", "alice") == set()
    fallback_admin = DummyAdmin(
        user={
            "id": "user-1",
            "requiredActions": [],
            "attributes": {"vaultwarden_master_password_set_at": ["now"]},
        },
        fail_get=True,
    )
    monkeypatch.setattr(state, "admin_client", lambda: fallback_admin)
    assert state._auto_completed_keycloak_steps(conn, "code", "alice") == {
        "keycloak_password_rotated",
        "vaultwarden_master_password",
    }
    update_fails = DummyAdmin(full={"requiredActions": ["CONFIGURE_TOTP"], "attributes": {}}, fail_update=True)
    monkeypatch.setattr(state, "admin_client", lambda: update_fails)
    assert state._auto_completed_keycloak_steps(conn, "code", "alice") == {"keycloak_password_rotated"}
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(ready=False))
    assert state._auto_completed_keycloak_steps(conn, "code", "alice") == set()
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(fail_find=True))
    assert state._auto_completed_keycloak_steps(conn, "code", "alice") == set()
    assert state._auto_completed_keycloak_steps(conn, "", "alice") == set()
    assert state._auto_completed_keycloak_steps(conn, "code", "") == set()


def test_vaultwarden_status_and_automation_readiness(monkeypatch) -> None:
    ready_admin = DummyAdmin(full={"attributes": {"vaultwarden_status": ["grandfathered"], "mailu_app_password": ["pw"]}})
    monkeypatch.setattr(state, "admin_client", lambda: ready_admin)
    assert state._vaultwarden_status_for_user("") == ""
    assert state._vaultwarden_status_for_user("alice") == "grandfathered"
    assert state._automation_ready(DummyConn(), "code", "alice") is True
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(ready=False))
    assert state._vaultwarden_status_for_user("alice") == ""
    assert state._automation_ready(DummyConn(), "code", "alice") is False
    assert state._automation_ready(DummyConn(), "code", "") is False
    monkeypatch.setattr(state, "admin_client", lambda: ready_admin)
    task_conn = DummyConn(rows_by_query={"SELECT 1 FROM access_request_tasks": {"exists": True}})
    monkeypatch.setattr(state, "provision_tasks_complete", lambda conn, code: True)
    assert state._automation_ready(task_conn, "code", "alice") is True
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(user=None))
    assert state._automation_ready(DummyConn(), "code", "alice") is False
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(user={}))
    assert state._automation_ready(DummyConn(), "code", "alice") is False
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(user={"id": ""}))
    assert state._vaultwarden_status_for_user("alice") == ""
    assert state._automation_ready(DummyConn(), "code", "alice") is False
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(full={"attributes": "bad"}))
    assert state._automation_ready(DummyConn(), "code", "alice") is False
    monkeypatch.setattr(state, "admin_client", lambda: DummyAdmin(fail_find=True))
    assert state._vaultwarden_status_for_user("alice") == ""
    assert state._automation_ready(DummyConn(), "code", "alice") is False


def test_status_transitions_and_payload(monkeypatch) -> None:
    conn = DummyConn(rows_by_query={"SELECT approval_flags": {"approval_flags": [], "contact_email": "owner@example.dev"}})
    monkeypatch.setattr(state, "_automation_ready", lambda conn, code, username: True)
    assert state._advance_status(conn, "code", "alice", "approved") == "awaiting_onboarding"
    assert state._advance_status(conn, "code", "alice", "pending") == "pending"
    required = set(state.ONBOARDING_REQUIRED_STEPS)
    monkeypatch.setattr(state, "_completed_onboarding_steps", lambda conn, code, username: required)
    monkeypatch.setattr(state, "_vaultwarden_grandfathered", lambda conn, code, username: (False, "owner@example.dev"))
    monkeypatch.setattr(state, "_vaultwarden_status_for_user", lambda username: "")
    assert state._advance_status(conn, "code", "alice", "awaiting_onboarding") == "ready"
    required_with_vault = required | {"vaultwarden_store_temp_password"}
    monkeypatch.setattr(state, "_completed_onboarding_steps", lambda conn, code, username: required_with_vault)
    monkeypatch.setattr(state, "_vaultwarden_grandfathered", lambda conn, code, username: (True, "owner@example.dev"))
    monkeypatch.setattr(state, "_vaultwarden_status_for_user", lambda username: "grandfathered")
    assert state._advance_status(conn, "code", "alice", "awaiting_onboarding") == "ready"
    monkeypatch.setattr(state, "_password_rotation_requested", lambda conn, code: True)
    monkeypatch.setattr(state, "_vaultwarden_grandfathered", lambda conn, code, username: (False, "owner@example.dev"))
    monkeypatch.setattr(state, "_vaultwarden_status_for_user", lambda username: "")
    payload = state._onboarding_payload(conn, "code", "alice")
    assert payload["keycloak"]["password_rotation_requested"] is True
    assert payload["vaultwarden"]["grandfathered"] is False
    monkeypatch.setattr(state, "_vaultwarden_grandfathered", lambda conn, code, username: (True, "owner@example.dev"))
    monkeypatch.setattr(state, "_vaultwarden_status_for_user", lambda username: "grandfathered")
    monkeypatch.setattr(state, "_resolve_recovery_email", lambda username, fallback: fallback)
    payload = state._onboarding_payload(conn, "code", "alice")
    assert "vaultwarden_store_temp_password" in payload["required_steps"]
    assert payload["vaultwarden"]["matched"] is True


def test_completed_onboarding_steps_merges_manual_and_auto(monkeypatch) -> None:
    manual_conn = DummyConn(many_by_query={"SELECT step FROM access_request_onboarding_steps": [{"step": "manual"}]})
    monkeypatch.setattr(state, "_auto_completed_keycloak_steps", lambda conn, code, username: {"auto"})
    assert state._completed_onboarding_steps(manual_conn, "code", "alice") == {"manual", "auto"}


@ -0,0 +1,242 @@
from __future__ import annotations

from contextlib import contextmanager
from datetime import datetime, timezone
from types import SimpleNamespace
from typing import Any

from flask import Flask, jsonify

from atlas_portal.routes.access_request_status import register_access_request_status


class DummyResult:
    def __init__(self, row: dict[str, Any] | None = None, rows: list[dict[str, Any]] | None = None) -> None:
        self.row = row
        self.rows = rows or []

    def fetchone(self) -> dict[str, Any] | None:
        return self.row

    def fetchall(self) -> list[dict[str, Any]]:
        return self.rows


class DummyConn:
    def __init__(
        self,
        *,
        rows_by_query: dict[str, dict[str, Any] | None] | None = None,
        many_by_query: dict[str, list[dict[str, Any]]] | None = None,
    ) -> None:
        self.rows_by_query = rows_by_query or {}
        self.many_by_query = many_by_query or {}
        self.executed: list[tuple[str, object | None]] = []

    def execute(self, query: str, params: object | None = None) -> DummyResult:
        self.executed.append((query, params))
        for key, rows in self.many_by_query.items():
            if key in query:
                return DummyResult(rows=rows)
        for key, row in self.rows_by_query.items():
            if key in query:
                return DummyResult(row=row)
        return DummyResult()


class DummyAriadne:
    def __init__(self, *, enabled: bool = False) -> None:
        self._enabled = enabled
        self.proxy_calls: list[tuple[str, str, object | None]] = []

    def enabled(self) -> bool:
        return self._enabled

    def proxy(self, method: str, path: str, payload: object | None = None):
        self.proxy_calls.append((method, path, payload))
        return jsonify({"proxied": True, "method": method, "path": path, "payload": payload})


class DummyDeps:
    def __init__(self, conn: DummyConn | None = None) -> None:
        self.settings = SimpleNamespace(
            ACCESS_REQUEST_ENABLED=True,
            ACCESS_REQUEST_STATUS_RATE_LIMIT=5,
            ACCESS_REQUEST_STATUS_RATE_WINDOW_SEC=60,
        )
        self.conn = conn or DummyConn()
        self.ariadne_client = DummyAriadne()
        self.rate_limit_results: list[bool] = []
        self.provisioned: list[str] = []
        self.fail_connect = False
        self.fail_provision = False
        self.configured_value = True

    def configured(self) -> bool:
        return self.configured_value

    def _client_ip(self) -> str:
        return "203.0.113.20"

    def rate_limit_allow(self, *args, **kwargs) -> bool:
        if self.rate_limit_results:
            return self.rate_limit_results.pop(0)
        return True

    @contextmanager
    def connect(self):
        if self.fail_connect:
            raise RuntimeError("database offline")
        yield self.conn

    def _normalize_status(self, status: str) -> str:
        return "accounts_building" if status == "approved" else (status or "unknown")

    def _advance_status(self, conn: DummyConn, code: str, username: str, status: str) -> str:
        return self._normalize_status(status)

    def provision_access_request(self, code: str) -> None:
        self.provisioned.append(code)
        if self.fail_provision:
            raise RuntimeError("provision failed")

    def provision_tasks_complete(self, conn: DummyConn, code: str) -> bool:
        return True

    def _onboarding_payload(self, conn: DummyConn, code: str, username: str) -> dict[str, str]:
        return {"code": code, "username": username}


def make_client(deps: DummyDeps):
    app = Flask(__name__)
    register_access_request_status(app, deps)
    return app.test_client()


def test_status_preflight_and_rate_limit_paths() -> None:
    deps = DummyDeps()
    client = make_client(deps)
    deps.settings.ACCESS_REQUEST_ENABLED = False
    assert client.post("/api/access/request/status", json={"request_code": "code"}).status_code == 503
    deps.settings.ACCESS_REQUEST_ENABLED = True
    deps.configured_value = False
    assert client.post("/api/access/request/status", json={"request_code": "code"}).status_code == 503
    deps.configured_value = True
    deps.rate_limit_results = [False]
    assert client.post("/api/access/request/status", json={"request_code": "code"}).status_code == 429
    assert client.post("/api/access/request/status", json={}).status_code == 400
    deps.rate_limit_results = [True, False]
    assert client.post("/api/access/request/status", json={"request_code": "code"}).status_code == 429


def test_status_returns_tasks_onboarding_and_reveals_password() -> None:
    now = datetime(2026, 4, 20, tzinfo=timezone.utc)
    conn = DummyConn(
        rows_by_query={
            "SELECT status,": {
                "status": "awaiting_onboarding",
                "username": "alice",
                "initial_password": "temp-pass",
                "initial_password_revealed_at": now,
                "email_verified_at": now,
            }
        },
        many_by_query={
            "SELECT task, status, detail, updated_at": [
                {"task": "mail", "status": "error", "detail": "smtp failed", "updated_at": now},
                {"task": "apps", "status": "ok", "detail": "", "updated_at": "not-a-date"},
            ]
        },
    )
    deps = DummyDeps(conn)
    client = make_client(deps)
    response = client.post("/api/access/request/status", json={"request_code": "code", "reveal_initial_password": True})
    data = response.get_json()
    assert response.status_code == 200
    assert data["status"] == "awaiting_onboarding"
    assert data["email_verified"] is True
    assert data["blocked"] is True
    assert data["automation_complete"] is True
    assert data["tasks"][0]["detail"] == "smtp failed"
    assert data["tasks"][0]["updated_at"].startswith("2026-04-20T00:00:00")
    assert data["initial_password_revealed_at"].startswith("2026-04-20T00:00:00")
    assert "initial_password" not in data
    assert data["onboarding_url"] == "/onboarding?code=code"
    assert data["onboarding"] == {"code": "code", "username": "alice"}


def test_status_autoprovisions_and_handles_failure_paths() -> None:
    conn = DummyConn(rows_by_query={"SELECT status,": {"status": "approved", "username": "alice"}})
    deps = DummyDeps(conn)
    deps.fail_provision = True
    client = make_client(deps)
    response = client.post("/api/access/request/status", json={"request_code": "code", "reveal_password": True})
    assert response.status_code == 200
    assert deps.provisioned == ["code"]
    not_found = DummyDeps(DummyConn(rows_by_query={"SELECT status,": None}))
    assert make_client(not_found).post("/api/access/request/status", json={"request_code": "missing"}).status_code == 404
    broken = DummyDeps()
    broken.fail_connect = True
    assert make_client(broken).post("/api/access/request/status", json={"request_code": "code"}).status_code == 502


def test_retry_preflight_proxy_and_validation_paths() -> None:
    deps = DummyDeps()
    client = make_client(deps)
    deps.settings.ACCESS_REQUEST_ENABLED = False
    assert client.post("/api/access/request/retry", json={"request_code": "code"}).status_code == 503
    deps.settings.ACCESS_REQUEST_ENABLED = True
    deps.configured_value = False
    assert client.post("/api/access/request/retry", json={"request_code": "code"}).status_code == 503
    deps.configured_value = True
    deps.rate_limit_results = [False]
    assert client.post("/api/access/request/retry", json={"request_code": "code"}).status_code == 429
    assert client.post("/api/access/request/retry", json={}).status_code == 400
    deps.ariadne_client = DummyAriadne(enabled=True)
    response = client.post("/api/access/request/retry", json={"request_code": "code", "tasks": ["mail", "", 5]})
    assert response.status_code == 200
    assert deps.ariadne_client.proxy_calls == [
        ("POST", "/api/access/requests/code/retry", {"tasks": ["mail"]})
    ]


def test_retry_updates_failed_tasks_and_swallows_provision_errors() -> None:
    conn = DummyConn(rows_by_query={"SELECT status FROM access_requests": {"status": "accounts_building"}})
    deps = DummyDeps(conn)
    deps.fail_provision = True
    client = make_client(deps)
    response = client.post("/api/access/request/retry", json={"request_code": "code", "tasks": ["mail"]})
    data = response.get_json()
    assert response.status_code == 200
    assert data == {"ok": True, "status": "accounts_building"}
    assert any("task = ANY" in query for query, _ in conn.executed)
    no_tasks_conn = DummyConn(rows_by_query={"SELECT status FROM access_requests": {"status": "approved"}})
    no_tasks = DummyDeps(no_tasks_conn)
    assert make_client(no_tasks).post("/api/access/request/retry", json={"request_code": "code"}).status_code == 200
    assert any("WHERE request_code = %s AND status = 'error'" in query for query, _ in no_tasks_conn.executed)
    missing = DummyDeps(DummyConn(rows_by_query={"SELECT status FROM access_requests": None}))
    assert make_client(missing).post("/api/access/request/retry", json={"request_code": "missing"}).status_code == 404
    rejected = DummyDeps(DummyConn(rows_by_query={"SELECT status FROM access_requests": {"status": "ready"}}))
    assert make_client(rejected).post("/api/access/request/retry", json={"request_code": "code"}).status_code == 409
    broken = DummyDeps()
    broken.fail_connect = True
    assert make_client(broken).post("/api/access/request/retry", json={"request_code": "code"}).status_code == 502


@ -0,0 +1,386 @@
from __future__ import annotations

from contextlib import contextmanager
from types import SimpleNamespace
from typing import Any

from flask import Flask, request
import psycopg

from atlas_portal.routes.access_request_submission import register_access_request_submission


class DummyResult:
    def __init__(self, row: dict[str, Any] | None = None) -> None:
        self.row = row

    def fetchone(self) -> dict[str, Any] | None:
        return self.row


class DummyConn:
    def __init__(self, rows_by_query: dict[str, dict[str, Any] | None] | None = None) -> None:
        self.rows_by_query = rows_by_query or {}
        self.executed: list[tuple[str, object | None]] = []
        self.raise_unique_on_insert = False
        self.raise_on_any = False
        self.rolled_back = False

    def execute(self, query: str, params: object | None = None) -> DummyResult:
        self.executed.append((query, params))
        if self.raise_on_any:
            raise RuntimeError("database failed")
        if self.raise_unique_on_insert and "INSERT INTO access_requests" in query:
            raise psycopg.errors.UniqueViolation("duplicate")
        for key, row in self.rows_by_query.items():
            if key in query:
                return DummyResult(row)
        return DummyResult()

    def rollback(self) -> None:
        self.rolled_back = True


class UniqueRaceConn(DummyConn):
    def __init__(self, row_after_rollback: dict[str, Any] | None) -> None:
        super().__init__()
        self.row_after_rollback = row_after_rollback

    def execute(self, query: str, params: object | None = None) -> DummyResult:
        self.executed.append((query, params))
        if "INSERT INTO access_requests" in query:
            raise psycopg.errors.UniqueViolation("duplicate")
        if "SELECT request_code, status" in query:
            return DummyResult(self.row_after_rollback if self.rolled_back else None)
        return DummyResult()


class DummyAdmin:
    def __init__(
        self,
        *,
        ready: bool = False,
        user: dict[str, Any] | None = None,
        email_user: dict[str, Any] | None = None,
    ) -> None:
        self._ready = ready
        self.user = user
        self.email_user = email_user

    def ready(self) -> bool:
        return self._ready

    def find_user(self, username: str) -> dict[str, Any] | None:
        return self.user

    def find_user_by_email(self, email: str) -> dict[str, Any] | None:
        return self.email_user


class MailerError(Exception):
    pass


class VerificationError(Exception):
    def __init__(self, status_code: int, message: str) -> None:
        super().__init__(message)
        self.status_code = status_code
        self.message = message


class DummyDeps:
    EMAIL_VERIFY_PENDING_STATUS = "pending_email_verification"
    MailerError = MailerError
    VerificationError = VerificationError

    def __init__(self, conn: DummyConn | None = None) -> None:
        self.settings = SimpleNamespace(
            ACCESS_REQUEST_ENABLED=True,
            ACCESS_REQUEST_SUBMIT_RATE_LIMIT=5,
            ACCESS_REQUEST_SUBMIT_RATE_WINDOW_SEC=60,
            MAILU_DOMAIN="bstein.dev",
            ACCESS_REQUEST_INTERNAL_EMAIL_ALLOWLIST={"allowed@bstein.dev"},
        )
        self.conn = conn or DummyConn()
        self.configured_value = True
        self.admin = DummyAdmin()
        self.rate_limit_results: list[bool] = []
        self.sent: list[tuple[str, str, str]] = []
        self.fail_connect = False
        self.fail_send = False
        self.verify_status = "pending"
        self.verify_error: VerificationError | None = None
        self.verify_runtime_error = False

    def configured(self) -> bool:
        return self.configured_value

    def admin_client(self) -> DummyAdmin:
        return self.admin

    def _client_ip(self) -> str:
        return "203.0.113.30"

    def rate_limit_allow(self, *args, **kwargs) -> bool:
        if self.rate_limit_results:
            return self.rate_limit_results.pop(0)
        return True

    @contextmanager
    def connect(self):
        if self.fail_connect:
            raise RuntimeError("database offline")
        yield self.conn

    def _extract_request_payload(self) -> tuple[str, str, str, str, str]:
        payload = request.get_json(silent=True) or {}
        return (
            (payload.get("username") or "").strip(),
            (payload.get("email") or "").strip(),
            (payload.get("note") or "").strip(),
            (payload.get("first_name") or "").strip(),
            (payload.get("last_name") or "").strip(),
        )

    def _normalize_name(self, value: str) -> str:
        return " ".join(value.strip().split())

    def _validate_username(self, username: str) -> str | None:
        return None if username and username != "bad" else "username is required"

    def _validate_name(self, value: str, *, label: str, required: bool) -> str | None:
        if value == "bad":
            return f"{label} is invalid"
        if required and not value:
            return f"{label} is required"
        return None

    def _normalize_status(self, status: str) -> str:
        return "accounts_building" if status == "approved" else (status or "unknown")

    def _random_request_code(self, username: str) -> str:
        return f"{username}~CODE"

    def _hash_verification_token(self, token: str) -> str:
        return f"hash:{token}"

    def _send_verification_email(self, *, request_code: str, email: str, token: str) -> None:
        if self.fail_send:
            raise self.MailerError("send failed")
        self.sent.append((request_code, email, token))

    def _verify_request(self, conn: DummyConn, code: str, token: str) -> str:
        if self.verify_runtime_error:
            raise RuntimeError("verify failed")
        if self.verify_error:
            raise self.verify_error
        return self.verify_status


def make_client(deps: DummyDeps):
    app = Flask(__name__)
    register_access_request_submission(app, deps)
    return app.test_client()


def request_payload(**overrides: str) -> dict[str, str]:
    payload = {
        "username": "alice",
        "email": "alice@example.dev",
        "first_name": "Alice",
        "last_name": "Atlas",
        "note": "please",
    }
    payload.update(overrides)
    return payload


def test_availability_preflight_existing_and_available_paths() -> None:
    deps = DummyDeps()
    client = make_client(deps)
    deps.settings.ACCESS_REQUEST_ENABLED = False
    assert client.get("/api/access/request/availability?username=alice").status_code == 503
    deps.settings.ACCESS_REQUEST_ENABLED = True
    deps.configured_value = False
    assert client.get("/api/access/request/availability?username=alice").status_code == 503
    deps.configured_value = True
    assert client.get("/api/access/request/availability?username=bad").get_json()["reason"] == "invalid"
    deps.admin = DummyAdmin(ready=True, user={"id": "user-1"})
    assert client.get("/api/access/request/availability?username=alice").get_json()["reason"] == "exists"
    deps.admin = DummyAdmin()
    deps.conn = DummyConn({"SELECT status": {"status": "approved"}})
    data = client.get("/api/access/request/availability?username=alice").get_json()
    assert data == {"available": False, "reason": "requested", "status": "accounts_building"}
    deps.conn = DummyConn()
    assert client.get("/api/access/request/availability?username=alice").get_json() == {"available": True}
    deps.fail_connect = True
    assert client.get("/api/access/request/availability?username=alice").status_code == 502


def test_submit_preflight_validation_and_admin_conflicts() -> None:
    deps = DummyDeps()
    client = make_client(deps)
    deps.settings.ACCESS_REQUEST_ENABLED = False
    assert client.post("/api/access/request", json=request_payload()).status_code == 503
    deps.settings.ACCESS_REQUEST_ENABLED = True
    deps.configured_value = False
    assert client.post("/api/access/request", json=request_payload()).status_code == 503
    deps.configured_value = True
    deps.rate_limit_results = [False]
    assert client.post("/api/access/request", json=request_payload()).status_code == 429
    assert client.post("/api/access/request", json=request_payload(username="bad")).status_code == 400
    assert client.post("/api/access/request", json=request_payload(first_name="bad")).status_code == 400
    assert client.post("/api/access/request", json=request_payload(last_name="")).status_code == 400
    assert client.post("/api/access/request", json=request_payload(email="")).status_code == 400
    assert client.post("/api/access/request", json=request_payload(email="not-email")).status_code == 400
    assert client.post("/api/access/request", json=request_payload(email="new@bstein.dev")).status_code == 400
    deps.admin = DummyAdmin(ready=True, user={"id": "user-1"})
    assert client.post("/api/access/request", json=request_payload()).status_code == 409
    deps.admin = DummyAdmin(ready=True, email_user={"id": "user-2"})
    assert client.post("/api/access/request", json=request_payload()).status_code == 409


def test_submit_existing_pending_new_unique_and_failure_paths() -> None:
    pending = DummyConn({"SELECT request_code, status": {"request_code": "alice~OLD", "status": "pending"}})
    deps = DummyDeps(pending)
    client = make_client(deps)
    assert client.post("/api/access/request", json=request_payload()).get_json() == {
        "ok": True,
        "request_code": "alice~OLD",
        "status": "pending",
    }
    existing = DummyConn(
        {"SELECT request_code, status": {"request_code": "alice~VERIFY", "status": deps.EMAIL_VERIFY_PENDING_STATUS}}
    )
    deps.conn = existing
    data = client.post("/api/access/request", json=request_payload()).get_json()
    assert data["request_code"] == "alice~VERIFY"
    assert deps.sent[-1][0] == "alice~VERIFY"
    assert any("UPDATE access_requests" in query for query, _ in existing.executed)
    deps.fail_send = True
    assert client.post("/api/access/request", json=request_payload()).status_code == 502
    deps.fail_send = False
    new_conn = DummyConn()
    deps.conn = new_conn
    data = client.post("/api/access/request", json=request_payload(username="brad")).get_json()
    assert data["request_code"] == "brad~CODE"
    assert any("INSERT INTO access_requests" in query for query, _ in new_conn.executed)
    deps.fail_send = True
    assert client.post("/api/access/request", json=request_payload(username="casey")).status_code == 502
    deps.fail_send = False
    unique = UniqueRaceConn({"request_code": "alice~RACE", "status": "pending"})
    deps.conn = unique
    assert client.post("/api/access/request", json=request_payload(username="dana")).get_json()["request_code"] == "alice~RACE"
    assert unique.rolled_back is True
    unique_missing = UniqueRaceConn(None)
    deps.conn = unique_missing
    assert client.post("/api/access/request", json=request_payload(username="erin")).status_code == 502
    deps.fail_connect = True
    assert client.post("/api/access/request", json=request_payload(username="fran")).status_code == 502


def test_verify_and_verify_link_paths() -> None:
    deps = DummyDeps()
    client = make_client(deps)
    deps.settings.ACCESS_REQUEST_ENABLED = False
    assert client.post("/api/access/request/verify", json={"request_code": "code", "token": "tok"}).status_code == 503
    deps.settings.ACCESS_REQUEST_ENABLED = True
    deps.configured_value = False
    assert client.post("/api/access/request/verify", json={"request_code": "code", "token": "tok"}).status_code == 503
    deps.configured_value = True
    deps.rate_limit_results = [False]
    assert client.post("/api/access/request/verify", json={"request_code": "code", "token": "tok"}).status_code == 429
    assert client.post("/api/access/request/verify", json={"token": "tok"}).status_code == 400
    assert client.post("/api/access/request/verify", json={"request_code": "code"}).status_code == 400
    deps.rate_limit_results = [True, False]
    assert client.post("/api/access/request/verify", json={"request_code": "code", "verify": "tok"}).status_code == 429
    assert client.post("/api/access/request/verify", json={"code": "code", "token": "tok"}).get_json() == {
        "ok": True,
        "status": "pending",
    }
    deps.verify_error = VerificationError(410, "expired")
    assert client.post("/api/access/request/verify", json={"code": "code", "token": "tok"}).status_code == 410
    deps.verify_error = None
    deps.verify_runtime_error = True
    assert client.post("/api/access/request/verify", json={"code": "code", "token": "tok"}).status_code == 502
    deps.verify_runtime_error = False
    assert client.get("/api/access/request/verify-link").headers["Location"].endswith("verify_error=missing+token")
    assert "verified=1" in client.get("/api/access/request/verify-link?code=code&token=tok").headers["Location"]
    deps.verify_error = VerificationError(401, "bad token")
    assert "bad%20token" in client.get("/api/access/request/verify-link?code=code&token=tok").headers["Location"]
    deps.verify_error = None
    deps.verify_runtime_error = True
    assert "failed+to+verify" in client.get("/api/access/request/verify-link?code=code&token=tok").headers["Location"]
    deps.verify_runtime_error = False
    deps.settings.ACCESS_REQUEST_ENABLED = False
    assert client.get("/api/access/request/verify-link?code=code&token=tok").status_code == 503
    deps.settings.ACCESS_REQUEST_ENABLED = True
    deps.configured_value = False
    assert client.get("/api/access/request/verify-link?code=code&token=tok").status_code == 503


def test_resend_preflight_success_and_failure_paths() -> None:
    deps = DummyDeps()
    client = make_client(deps)
    deps.settings.ACCESS_REQUEST_ENABLED = False
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).status_code == 503
    deps.settings.ACCESS_REQUEST_ENABLED = True
    deps.configured_value = False
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).status_code == 503
    deps.configured_value = True
    deps.rate_limit_results = [False]
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).status_code == 429
    assert client.post("/api/access/request/resend", json={}).status_code == 400
    deps.rate_limit_results = [True, False]
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).status_code == 429
    deps.conn = DummyConn({"SELECT status, contact_email": None})
    assert client.post("/api/access/request/resend", json={"request_code": "missing"}).status_code == 404
    deps.conn = DummyConn({"SELECT status, contact_email": {"status": "approved", "contact_email": "alice@example.dev"}})
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).get_json()["status"] == "accounts_building"
    deps.conn = DummyConn({"SELECT status, contact_email": {"status": deps.EMAIL_VERIFY_PENDING_STATUS, "contact_email": ""}})
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).status_code == 409
    success_conn = DummyConn(
        {"SELECT status, contact_email": {"status": deps.EMAIL_VERIFY_PENDING_STATUS, "contact_email": "alice@example.dev"}}
    )
    deps.conn = success_conn
    assert client.post("/api/access/request/resend", json={"code": "code"}).get_json()["ok"] is True
    assert any("UPDATE access_requests" in query for query, _ in success_conn.executed)
    deps.fail_send = True
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).status_code == 502
    deps.fail_send = False
    deps.fail_connect = True
    assert client.post("/api/access/request/resend", json={"request_code": "code"}).status_code == 502


@ -51,14 +51,12 @@ def dummy_connect(rows_by_query=None):
class AccessRequestTests(TestCase):
@classmethod
def setUpClass(cls):
cls.schema_patch = mock.patch("atlas_portal.app_factory.ensure_schema", lambda: None)
cls.schema_patch.start()
cls.app = create_app()
cls.client = cls.app.test_client()
@classmethod
def tearDownClass(cls):
cls.schema_patch.stop()
return None
def setUp(self):
self.configured_patch = mock.patch.object(ar, "configured", lambda: True)
@ -226,3 +224,103 @@ class AccessRequestTests(TestCase):
        data = resp.get_json()
        self.assertEqual(resp.status_code, 200)
        self.assertTrue(data.get("email_verified"))

    def test_status_hides_initial_password_without_reveal_flag(self):
        rows = {
            "SELECT status": {
                "status": "awaiting_onboarding",
                "username": "alice",
                "initial_password": "temp-pass",
                "initial_password_revealed_at": None,
                "email_verified_at": None,
            }
        }
        with (
            mock.patch.object(ar, "connect", lambda: dummy_connect(rows)),
            mock.patch.object(ar, "_advance_status", lambda *args, **kwargs: "awaiting_onboarding"),
        ):
            resp = self.client.post(
                "/api/access/request/status",
                data=json.dumps({"request_code": "alice~CODE123"}),
                content_type="application/json",
            )
        data = resp.get_json()
        self.assertEqual(resp.status_code, 200)
        self.assertIsNone(data.get("initial_password"))

    def test_status_reveals_initial_password_with_flag(self):
        rows = {
            "SELECT status": {
                "status": "awaiting_onboarding",
                "username": "alice",
                "initial_password": "temp-pass",
                "initial_password_revealed_at": None,
                "email_verified_at": None,
            }
        }
        with (
            mock.patch.object(ar, "connect", lambda: dummy_connect(rows)),
            mock.patch.object(ar, "_advance_status", lambda *args, **kwargs: "awaiting_onboarding"),
        ):
            resp = self.client.post(
                "/api/access/request/status",
                data=json.dumps({"request_code": "alice~CODE123", "reveal_initial_password": True}),
                content_type="application/json",
            )
        data = resp.get_json()
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(data.get("initial_password"), "temp-pass")

    def test_onboarding_payload_includes_vaultwarden_grandfathered(self):
        rows = {
            "SELECT approval_flags": {
                "approval_flags": ["vaultwarden_grandfathered"],
                "contact_email": "alice@example.com",
            }
        }
        conn = DummyConn(rows_by_query=rows)
        with (
            mock.patch.object(ar, "_completed_onboarding_steps", lambda *args, **kwargs: set()),
            mock.patch.object(ar, "_password_rotation_requested", lambda *args, **kwargs: False),
        ):
            payload = ar._onboarding_payload(conn, "alice~CODE123", "alice")
        vault = payload.get("vaultwarden") or {}
        self.assertTrue(vault.get("grandfathered"))
        self.assertEqual(vault.get("recovery_email"), "alice@example.com")

    def test_retry_request_fallback_updates_tasks(self):
        rows = {"SELECT status": {"status": "accounts_building"}}
        conn = DummyConn(rows_by_query=rows)

        @contextmanager
        def connect_override():
            yield conn

        with (
            mock.patch.object(ar.ariadne_client, "enabled", lambda: False),
            mock.patch.object(ar, "connect", lambda: connect_override()),
            mock.patch.object(ar, "provision_access_request", lambda *_args, **_kwargs: None),
        ):
            resp = self.client.post(
                "/api/access/request/retry",
                data=json.dumps({"request_code": "alice~CODE123"}),
                content_type="application/json",
            )
        data = resp.get_json()
        self.assertEqual(resp.status_code, 200)
        self.assertTrue(data.get("ok"))
        self.assertTrue(any("provision_attempted_at" in query for query, _params in conn.executed))

    def test_retry_request_rejects_non_retryable(self):
        rows = {"SELECT status": {"status": "ready"}}
        with (
            mock.patch.object(ar.ariadne_client, "enabled", lambda: False),
            mock.patch.object(ar, "connect", lambda: dummy_connect(rows)),
        ):
            resp = self.client.post(
                "/api/access/request/retry",
                data=json.dumps({"request_code": "alice~CODE123"}),
                content_type="application/json",
            )
        self.assertEqual(resp.status_code, 409)


@@ -0,0 +1,264 @@
from __future__ import annotations

from types import SimpleNamespace
from typing import Any

from flask import Flask, g, jsonify

from atlas_portal.routes import account_actions as actions


class DummyAriadne:
    def __init__(self, enabled: bool = False) -> None:
        self._enabled = enabled
        self.calls: list[tuple[str, str, object | None]] = []

    def enabled(self) -> bool:
        return self._enabled

    def proxy(self, method: str, path: str, payload: object | None = None):
        self.calls.append((method, path, payload))
        return jsonify({"proxied": True, "path": path, "payload": payload})


class DummyAdmin:
    def __init__(
        self,
        *,
        ready: bool = True,
        user: dict[str, Any] | None = None,
        fail_find: bool = False,
        fail_set: bool = False,
    ) -> None:
        self._ready = ready
        self.user = user if user is not None else {"attributes": {}}
        self.fail_find = fail_find
        self.fail_set = fail_set
        self.attributes: list[tuple[str, str, str]] = []

    def ready(self) -> bool:
        return self._ready

    def find_user(self, username: str) -> dict[str, Any] | None:
        if self.fail_find:
            raise RuntimeError("lookup failed")
        return self.user

    def set_user_attribute(self, username: str, key: str, value: str) -> None:
        if self.fail_set:
            raise RuntimeError("write failed")
        self.attributes.append((username, key, value))


class MailuClient:
    status_code = 200
    raises = False

    def __init__(self, *, timeout: int) -> None:
        self.timeout = timeout

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False

    def post(self, url: str, json: dict[str, Any] | None = None):
        if self.raises:
            raise RuntimeError("mailu unavailable")
        return SimpleNamespace(status_code=self.status_code)


def make_client(
    monkeypatch,
    *,
    admin: DummyAdmin | None = None,
    ariadne: DummyAriadne | None = None,
    account_ok: bool = True,
    username: str = "alice",
    email: str = "alice@example.dev",
):
    app = Flask(__name__)
    active_admin = admin or DummyAdmin()
    active_ariadne = ariadne or DummyAriadne()
    monkeypatch.setattr(actions, "require_auth", lambda fn: fn)
    monkeypatch.setattr(
        actions,
        "require_account_access",
        lambda: (True, None) if account_ok else (False, (jsonify({"error": "forbidden"}), 403)),
    )
    monkeypatch.setattr(actions, "admin_client", lambda: active_admin)
    monkeypatch.setattr(actions, "ariadne_client", active_ariadne)
    monkeypatch.setattr(actions, "random_password", lambda length=16: f"pw-{length}")
    monkeypatch.setattr(actions.settings, "MAILU_DOMAIN", "bstein.dev")
    monkeypatch.setattr(actions.settings, "MAILU_SYNC_URL", "")
    monkeypatch.setattr(actions, "trigger_nextcloud_mail_sync", lambda user, wait=True: {"status": "ok", "wait": wait})
    monkeypatch.setattr(actions, "trigger_wger_user_sync", lambda *args, **kwargs: {"status": "ok"})
    monkeypatch.setattr(actions, "trigger_firefly_user_sync", lambda *args, **kwargs: {"status": "ok"})

    @app.before_request
    def set_user() -> None:
        g.keycloak_username = username
        g.keycloak_email = email

    actions.register_account_actions(app)
    return app.test_client(), active_admin, active_ariadne


def test_tcp_check_success_and_failure(monkeypatch) -> None:
    class SocketContext:
        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

    monkeypatch.setattr(actions.socket, "create_connection", lambda *args, **kwargs: SocketContext())
    assert actions._tcp_check("host", 443, 1) is True
    assert actions._tcp_check("", 443, 1) is False
    assert actions._tcp_check("host", 0, 1) is False
    monkeypatch.setattr(actions.socket, "create_connection", lambda *args, **kwargs: (_ for _ in ()).throw(OSError()))
    assert actions._tcp_check("host", 443, 1) is False
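The TCP-check tests above monkeypatch `socket.create_connection` with `(_ for _ in ()).throw(OSError())`, a generator-expression trick that lets a single-expression lambda raise. A minimal standalone illustration of the idiom (names here are hypothetical, not from the codebase):

```python
def raiser(exc: BaseException):
    # A lambda cannot contain a `raise` statement, so the exception is
    # .throw()n through an empty generator, which propagates it immediately.
    return lambda *args, **kwargs: (_ for _ in ()).throw(exc)


fail = raiser(OSError("boom"))
try:
    fail("host", 443)
    caught = ""
except OSError as err:
    caught = str(err)
```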
def test_mailu_rotate_paths(monkeypatch) -> None:
    client, _admin, _ariadne = make_client(monkeypatch, account_ok=False)
    assert client.post("/api/account/mailu/rotate").status_code == 403
    ariadne = DummyAriadne(enabled=True)
    client, _admin, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.post("/api/account/mailu/rotate").get_json()["proxied"] is True
    assert proxied.calls == [("POST", "/api/account/mailu/rotate", None)]
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(ready=False))
    assert client.post("/api/account/mailu/rotate").status_code == 503
    client, _admin, _ariadne = make_client(monkeypatch, username="")
    assert client.post("/api/account/mailu/rotate").status_code == 400
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(fail_set=True))
    assert client.post("/api/account/mailu/rotate").status_code == 502
    client, admin, _ariadne = make_client(monkeypatch)
    data = client.post("/api/account/mailu/rotate").get_json()
    assert data["password"] == "pw-16"
    assert data["sync_enabled"] is False
    assert data["nextcloud_sync"] == {"status": "ok", "wait": True}
    assert admin.attributes[0] == ("alice", "mailu_app_password", "pw-16")
    for status_code, expected_error in ((200, ""), (503, "sync status 503")):
        client, _admin, _ariadne = make_client(monkeypatch)
        monkeypatch.setattr(actions.settings, "MAILU_SYNC_URL", "https://mailu-sync.example.dev")
        MailuClient.status_code = status_code
        MailuClient.raises = False
        monkeypatch.setattr(actions, "httpx", SimpleNamespace(Client=MailuClient))
        data = client.post("/api/account/mailu/rotate").get_json()
        assert data["sync_enabled"] is True
        assert data["sync_error"] == expected_error
    client, _admin, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(actions.settings, "MAILU_SYNC_URL", "https://mailu-sync.example.dev")
    MailuClient.raises = True
    monkeypatch.setattr(actions, "httpx", SimpleNamespace(Client=MailuClient))
    assert client.post("/api/account/mailu/rotate").get_json()["sync_error"] == "sync request failed"
    MailuClient.raises = False
    client, _admin, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(actions, "trigger_nextcloud_mail_sync", lambda *args, **kwargs: (_ for _ in ()).throw(RuntimeError()))
    assert client.post("/api/account/mailu/rotate").get_json()["nextcloud_sync"] == {"status": "error"}


def test_wger_reset_and_rotation_check_paths(monkeypatch) -> None:
    client, _admin, _ariadne = make_client(monkeypatch, account_ok=False)
    assert client.post("/api/account/wger/reset").status_code == 403
    ariadne = DummyAriadne(enabled=True)
    client, _admin, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.post("/api/account/wger/reset").get_json()["path"] == "/api/account/wger/reset"
    assert client.post("/api/account/wger/rotation/check").get_json()["path"] == "/api/account/wger/rotation/check"
    assert len(proxied.calls) == 2
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(ready=False))
    assert client.post("/api/account/wger/reset").status_code == 503
    client, _admin, _ariadne = make_client(monkeypatch, username="")
    assert client.post("/api/account/wger/reset").status_code == 400
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(user={"attributes": {"mailu_email": ["mail@example.dev"]}}))
    assert client.post("/api/account/wger/reset").get_json()["password"] == "pw-16"
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(user={"attributes": {"mailu_email": "mail@example.dev"}}))
    assert client.post("/api/account/wger/reset").status_code == 200
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(fail_find=True))
    assert client.post("/api/account/wger/reset").status_code == 200
    client, _admin, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(actions, "trigger_wger_user_sync", lambda *args, **kwargs: {"status": "failed"})
    assert client.post("/api/account/wger/reset").status_code == 502
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(fail_set=True))
    assert client.post("/api/account/wger/reset").status_code == 502
    client, _admin, _ariadne = make_client(monkeypatch)
    assert client.post("/api/account/wger/rotation/check").status_code == 503


def test_firefly_reset_and_rotation_check_paths(monkeypatch) -> None:
    client, _admin, _ariadne = make_client(monkeypatch, account_ok=False)
    assert client.post("/api/account/firefly/reset").status_code == 403
    ariadne = DummyAriadne(enabled=True)
    client, _admin, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.post("/api/account/firefly/reset").get_json()["path"] == "/api/account/firefly/reset"
    assert client.post("/api/account/firefly/rotation/check").get_json()["path"] == "/api/account/firefly/rotation/check"
    assert len(proxied.calls) == 2
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(ready=False))
    assert client.post("/api/account/firefly/reset").status_code == 503
    client, _admin, _ariadne = make_client(monkeypatch, username="")
    assert client.post("/api/account/firefly/reset").status_code == 400
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(user={"attributes": {"mailu_email": ["mail@example.dev"]}}))
    assert client.post("/api/account/firefly/reset").get_json()["password"] == "pw-24"
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(user={"attributes": {"mailu_email": "mail@example.dev"}}))
    assert client.post("/api/account/firefly/reset").status_code == 200
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(fail_find=True))
    assert client.post("/api/account/firefly/reset").status_code == 200
    client, _admin, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(actions, "trigger_firefly_user_sync", lambda *args, **kwargs: {"status": "failed"})
    assert client.post("/api/account/firefly/reset").status_code == 502
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(fail_set=True))
    assert client.post("/api/account/firefly/reset").status_code == 502
    client, _admin, _ariadne = make_client(monkeypatch)
    assert client.post("/api/account/firefly/rotation/check").status_code == 503


def test_nextcloud_mail_sync_paths(monkeypatch) -> None:
    client, _admin, _ariadne = make_client(monkeypatch, account_ok=False)
    assert client.post("/api/account/nextcloud/mail/sync").status_code == 403
    ariadne = DummyAriadne(enabled=True)
    client, _admin, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.post("/api/account/nextcloud/mail/sync", json={"wait": False}).get_json()["proxied"] is True
    assert proxied.calls == [("POST", "/api/account/nextcloud/mail/sync", {"wait": False})]
    client, _admin, _ariadne = make_client(monkeypatch, admin=DummyAdmin(ready=False))
    assert client.post("/api/account/nextcloud/mail/sync").status_code == 503
    client, _admin, _ariadne = make_client(monkeypatch, username="")
    assert client.post("/api/account/nextcloud/mail/sync").status_code == 400
    client, _admin, _ariadne = make_client(monkeypatch)
    assert client.post("/api/account/nextcloud/mail/sync", json={"wait": False}).get_json() == {
        "status": "ok",
        "wait": False,
    }
    client, _admin, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(actions, "trigger_nextcloud_mail_sync", lambda *args, **kwargs: (_ for _ in ()).throw(RuntimeError("sync failed")))
    assert client.post("/api/account/nextcloud/mail/sync").status_code == 502


@@ -0,0 +1,234 @@
from __future__ import annotations

from contextlib import contextmanager
from typing import Any

from flask import Flask, g, jsonify

from atlas_portal.routes import account_overview as overview


class DummyResult:
    def __init__(self, row: dict[str, Any] | None = None) -> None:
        self.row = row

    def fetchone(self) -> dict[str, Any] | None:
        return self.row


class DummyConn:
    def __init__(self, *, request_code: str = "alice~CODE", step_done: bool = True, fail: bool = False) -> None:
        self.request_code = request_code
        self.step_done = step_done
        self.fail = fail
        self.executed: list[tuple[str, object | None]] = []

    def execute(self, query: str, params: object | None = None) -> DummyResult:
        self.executed.append((query, params))
        if self.fail:
            raise RuntimeError("database failed")
        if "access_request_onboarding_steps" in query:
            return DummyResult({"exists": 1} if self.step_done else None)
        if "FROM access_requests" in query:
            return DummyResult({"request_code": self.request_code} if self.request_code else None)
        return DummyResult()
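`DummyConn` routes each query to a canned row by SQL-substring matching rather than talking to a real database. The same pattern in isolation, with a hypothetical table name used only for illustration:

```python
class FakeResult:
    def __init__(self, row=None):
        self.row = row

    def fetchone(self):
        return self.row


class FakeConn:
    """Dispatch queries by substring match; record every call for assertions."""

    def __init__(self, rows_by_fragment):
        self.rows_by_fragment = rows_by_fragment
        self.executed = []

    def execute(self, query, params=None):
        self.executed.append((query, params))
        for fragment, row in self.rows_by_fragment.items():
            if fragment in query:
                return FakeResult(row)
        return FakeResult()


conn = FakeConn({"FROM widgets": {"name": "gear"}})
hit = conn.execute("SELECT name FROM widgets WHERE id = %s", (1,)).fetchone()
miss = conn.execute("SELECT 1").fetchone()
```

Substring dispatch keeps the fake decoupled from exact SQL text, so incidental query edits do not break every test.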
class DummyAdmin:
    def __init__(
        self,
        *,
        ready: bool = True,
        user: dict[str, Any] | None = None,
        full: dict[str, Any] | None = None,
        fail_find: bool = False,
    ) -> None:
        self._ready = ready
        self.user = user if user is not None else {"id": "user-1", "email": "alice@idp.dev", "attributes": {}}
        self.full = full if full is not None else {"email": "full@idp.dev", "attributes": {}}
        self.fail_find = fail_find

    def ready(self) -> bool:
        return self._ready

    def find_user(self, username: str) -> dict[str, Any] | None:
        if self.fail_find:
            raise RuntimeError("keycloak failed")
        return self.user

    def get_user(self, user_id: str) -> dict[str, Any]:
        return self.full


class DummyAriadne:
    def enabled(self) -> bool:
        return False


def make_client(
    monkeypatch,
    *,
    admin: DummyAdmin | None = None,
    conn: DummyConn | None = None,
    account_ok: bool = True,
    username: str = "alice",
    email: str = "",
):
    app = Flask(__name__)
    active_admin = admin or DummyAdmin()
    active_conn = conn or DummyConn()
    monkeypatch.setattr(overview, "require_auth", lambda fn: fn)
    monkeypatch.setattr(
        overview,
        "require_account_access",
        lambda: (True, None) if account_ok else (False, (jsonify({"error": "forbidden"}), 403)),
    )
    monkeypatch.setattr(overview, "admin_client", lambda: active_admin)
    monkeypatch.setattr(overview, "ariadne_client", DummyAriadne())
    monkeypatch.setattr(overview.settings, "MAILU_DOMAIN", "bstein.dev")
    monkeypatch.setattr(overview.settings, "PORTAL_DATABASE_URL", "postgres://portal")
    monkeypatch.setattr(overview.settings, "PORTAL_PUBLIC_BASE_URL", "https://portal.example.dev")
    monkeypatch.setattr(overview.settings, "JELLYFIN_LDAP_HOST", "ldap.example.dev")
    monkeypatch.setattr(overview.settings, "JELLYFIN_LDAP_PORT", 389)
    monkeypatch.setattr(overview.settings, "JELLYFIN_LDAP_CHECK_TIMEOUT_SEC", 1)

    @contextmanager
    def connect():
        yield active_conn

    monkeypatch.setattr(overview, "connect", connect)

    @app.before_request
    def set_user() -> None:
        g.keycloak_username = username
        g.keycloak_email = email
        g.keycloak_groups = ["dev"]

    overview.register_account_overview(app)
    return app.test_client(), active_conn


def full_attrs(prefix: str = "") -> dict[str, Any]:
    return {
        "mailu_email": f"{prefix}mail@example.dev",
        "mailu_app_password": f"{prefix}mail-pw",
        "nextcloud_mail_primary_email": f"{prefix}mail@example.dev",
        "nextcloud_mail_account_count": "2",
        "nextcloud_mail_synced_at": f"{prefix}synced",
        "wger_password": f"{prefix}wger-pw",
        "wger_password_updated_at": f"{prefix}wger-updated",
        "firefly_password": f"{prefix}firefly-pw",
        "firefly_password_updated_at": f"{prefix}firefly-updated",
        "vaultwarden_email": f"{prefix}vault@example.dev",
        "vaultwarden_status": f"{prefix}active",
        "vaultwarden_synced_at": f"{prefix}vault-synced",
        "vaultwarden_master_password_set_at": f"{prefix}vault-master",
    }


def list_attrs(attrs: dict[str, Any]) -> dict[str, list[str]]:
    return {key: [str(value)] for key, value in attrs.items()}


def test_tcp_check_paths(monkeypatch) -> None:
    class SocketContext:
        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

    monkeypatch.setattr(overview.socket, "create_connection", lambda *args, **kwargs: SocketContext())
    assert overview._tcp_check("host", 443, 1) is True
    assert overview._tcp_check("", 443, 1) is False
    assert overview._tcp_check("host", 0, 1) is False
    monkeypatch.setattr(overview.socket, "create_connection", lambda *args, **kwargs: (_ for _ in ()).throw(OSError()))
    assert overview._tcp_check("host", 443, 1) is False


def test_overview_preflight_and_admin_unavailable(monkeypatch) -> None:
    client, _conn = make_client(monkeypatch, account_ok=False)
    assert client.get("/api/account/overview").status_code == 403
    client, _conn = make_client(monkeypatch, admin=DummyAdmin(ready=False))
    data = client.get("/api/account/overview").get_json()
    assert data["mailu"]["status"] == "server not configured"
    assert data["jellyfin"]["sync_detail"] == "keycloak admin not configured"


def test_overview_reads_list_attributes_and_reports_ldap_ok(monkeypatch) -> None:
    attrs = list_attrs(full_attrs())
    user = {"id": "user-1", "email": "alice@idp.dev", "federationLink": "ldap", "attributes": attrs}
    client, _conn = make_client(monkeypatch, admin=DummyAdmin(user=user))
    monkeypatch.setattr(overview, "_tcp_check", lambda *args, **kwargs: True)
    data = client.get("/api/account/overview").get_json()
    assert data["user"] == {"username": "alice", "email": "alice@idp.dev", "groups": ["dev"]}
    assert data["mailu"]["app_password"] == "mail-pw"
    assert data["nextcloud_mail"]["status"] == "ready"
    assert data["wger"]["password"] == "wger-pw"
    assert data["firefly"]["password"] == "firefly-pw"
    assert data["vaultwarden"]["status"] == "ready"
    assert data["jellyfin"]["sync_status"] == "ok"


def test_overview_reads_string_attributes_and_database_confirmed_step(monkeypatch) -> None:
    attrs = full_attrs("str-")
    attrs["vaultwarden_master_password_set_at"] = ""
    attrs["vaultwarden_status"] = "invited"
    attrs["nextcloud_mail_account_count"] = "not-a-number"
    user = {"id": "user-1", "email": "alice@idp.dev", "attributes": attrs}
    client, _conn = make_client(monkeypatch, admin=DummyAdmin(user=user))
    monkeypatch.setattr(overview, "_tcp_check", lambda *args, **kwargs: False)
    data = client.get("/api/account/overview").get_json()
    assert data["nextcloud_mail"]["status"] == "needs sync"
    assert data["vaultwarden"]["status"] == "ready"
    assert data["jellyfin"]["sync_status"] == "degraded"
    assert data["jellyfin"]["sync_detail"] == "LDAP unreachable"
    assert data["onboarding_url"] == "https://portal.example.dev/onboarding?code=alice~CODE"


def test_overview_falls_back_to_full_user_list_attributes(monkeypatch) -> None:
    user = {"id": "user-1", "attributes": {}}
    full = {"email": "full@example.dev", "attributes": list_attrs(full_attrs("full-"))}
    client, _conn = make_client(monkeypatch, admin=DummyAdmin(user=user, full=full), email="")
    monkeypatch.setattr(overview, "_tcp_check", lambda *args, **kwargs: True)
    data = client.get("/api/account/overview").get_json()
    assert data["user"]["email"] == "full@example.dev"
    assert data["mailu"]["username"] == "full-mail@example.dev"
    assert data["nextcloud_mail"]["primary_email"] == "full-mail@example.dev"
    assert data["vaultwarden"]["username"] == "full-vault@example.dev"
    assert data["jellyfin"]["sync_status"] == "degraded"
    assert data["jellyfin"]["sync_detail"] == "Keycloak user is not LDAP-backed"


def test_overview_falls_back_to_full_user_string_attributes(monkeypatch) -> None:
    user = {"id": "user-1", "attributes": {}}
    full = {"email": "full@example.dev", "attributes": full_attrs("full-str-")}
    client, _conn = make_client(monkeypatch, admin=DummyAdmin(user=user, full=full), email="")
    monkeypatch.setattr(overview, "_tcp_check", lambda *args, **kwargs: True)
    data = client.get("/api/account/overview").get_json()
    assert data["mailu"]["app_password"] == "full-str-mail-pw"
    assert data["wger"]["password_updated_at"] == "full-str-wger-updated"
    assert data["firefly"]["password_updated_at"] == "full-str-firefly-updated"
    assert data["vaultwarden"]["synced_at"] == "full-str-vault-synced"


def test_overview_handles_keycloak_and_database_failures(monkeypatch) -> None:
    client, _conn = make_client(monkeypatch, admin=DummyAdmin(fail_find=True), conn=DummyConn(fail=True))
    data = client.get("/api/account/overview").get_json()
    assert data["mailu"]["status"] == "unavailable"
    assert data["nextcloud_mail"]["status"] == "unavailable"
    assert data["wger"]["status"] == "unavailable"
    assert data["firefly"]["status"] == "unavailable"
    assert data["vaultwarden"]["status"] == "unavailable"
    assert data["jellyfin"]["sync_detail"] == "unavailable"


@@ -0,0 +1,226 @@
from __future__ import annotations

from contextlib import contextmanager
from datetime import datetime, timezone
from typing import Any

from flask import Flask, g, jsonify

from atlas_portal.routes import admin_access as admin


class DummyResult:
    def __init__(self, row: dict[str, Any] | None = None, rows: list[dict[str, Any]] | None = None) -> None:
        self.row = row
        self.rows = rows or []

    def fetchone(self) -> dict[str, Any] | None:
        return self.row

    def fetchall(self) -> list[dict[str, Any]]:
        return self.rows


class DummyConn:
    def __init__(
        self,
        *,
        row: dict[str, Any] | None = None,
        rows: list[dict[str, Any]] | None = None,
        fail: bool = False,
    ) -> None:
        self.row = row
        self.rows = rows or []
        self.fail = fail
        self.executed: list[tuple[str, object | None]] = []

    def execute(self, query: str, params: object | None = None) -> DummyResult:
        self.executed.append((query, params))
        if self.fail:
            raise RuntimeError("database failed")
        return DummyResult(row=self.row, rows=self.rows)


class DummyAriadne:
    def __init__(self, enabled: bool = False) -> None:
        self._enabled = enabled
        self.calls: list[tuple[str, str, object | None]] = []

    def enabled(self) -> bool:
        return self._enabled

    def proxy(self, method: str, path: str, payload: object | None = None):
        self.calls.append((method, path, payload))
        return jsonify({"proxied": True, "method": method, "path": path, "payload": payload})


class DummyAdmin:
    def __init__(self, *, ready: bool = True, groups: list[str] | None = None, fail: bool = False) -> None:
        self._ready = ready
        self.groups = groups or []
        self.fail = fail

    def ready(self) -> bool:
        return self._ready

    def list_group_names(self) -> list[str]:
        if self.fail:
            raise RuntimeError("keycloak failed")
        return self.groups


def make_client(monkeypatch, *, conn: DummyConn | None = None, ariadne: DummyAriadne | None = None, is_admin: bool = True):
    app = Flask(__name__)
    app.secret_key = "test"
    active_conn = conn or DummyConn()
    active_ariadne = ariadne or DummyAriadne()
    monkeypatch.setattr(admin, "require_auth", lambda fn: fn)
    monkeypatch.setattr(
        admin,
        "require_portal_admin",
        lambda: (True, None) if is_admin else (False, (jsonify({"error": "forbidden"}), 403)),
    )
    monkeypatch.setattr(admin, "configured", lambda: True)
    monkeypatch.setattr(admin, "ariadne_client", active_ariadne)
    monkeypatch.setattr(admin, "admin_client", lambda: DummyAdmin(groups=["dev", "admin", "quality"]))

    @contextmanager
    def connect():
        yield active_conn

    monkeypatch.setattr(admin, "connect", connect)

    @app.before_request
    def set_user() -> None:
        g.keycloak_username = "brad"

    admin.register(app)
    return app.test_client(), active_conn, active_ariadne
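These fixtures neutralize `require_auth` by patching it to the identity function `lambda fn: fn` before `admin.register(app)` runs, so the routes are registered undecorated. The ordering matters because decoration happens at registration time. A small illustration with a hypothetical decorator (not the real `require_auth`):

```python
def require_auth(fn):
    # Stand-in for a real auth decorator: reject calls without a token.
    def wrapper(*args, token=None, **kwargs):
        if token is None:
            raise PermissionError("no token")
        return fn(*args, **kwargs)
    return wrapper


def identity(fn):
    return fn


def make_route(decorator):
    # Decoration happens here, mirroring the register(app) call in the tests.
    @decorator
    def route():
        return "ok"
    return route


guarded = make_route(require_auth)
unguarded = make_route(identity)
result = unguarded()
```

Patching after registration would have no effect, since the real decorator would already be baked into the route.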
def test_admin_list_requests_preflight_proxy_and_database_paths(monkeypatch) -> None:
    client, _conn, _ariadne = make_client(monkeypatch, is_admin=False)
    assert client.get("/api/admin/access/requests").status_code == 403
    client, _conn, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(admin, "configured", lambda: False)
    assert client.get("/api/admin/access/requests").status_code == 503
    ariadne = DummyAriadne(enabled=True)
    client, _conn, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.get("/api/admin/access/requests").get_json()["proxied"] is True
    assert proxied.calls == [("GET", "/api/admin/access/requests", None)]
    now = datetime(2026, 4, 20, tzinfo=timezone.utc)
    conn = DummyConn(
        rows=[
            {
                "request_code": "alice~CODE",
                "username": "alice",
                "contact_email": "alice@example.dev",
                "first_name": "Alice",
                "last_name": "Atlas",
                "created_at": now,
                "note": "please",
            }
        ]
    )
    client, _conn, _ariadne = make_client(monkeypatch, conn=conn)
    data = client.get("/api/admin/access/requests").get_json()
    assert data["requests"][0]["id"] == "alice~CODE"
    assert data["requests"][0]["created_at"].startswith("2026-04-20T00:00:00")
    broken = DummyConn(fail=True)
    client, _conn, _ariadne = make_client(monkeypatch, conn=broken)
    assert client.get("/api/admin/access/requests").status_code == 502


def test_admin_flags_paths(monkeypatch) -> None:
    client, _conn, _ariadne = make_client(monkeypatch, is_admin=False)
    assert client.get("/api/admin/access/flags").status_code == 403
    ariadne = DummyAriadne(enabled=True)
    client, _conn, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.get("/api/admin/access/flags").get_json()["proxied"] is True
    assert proxied.calls == [("GET", "/api/admin/access/flags", None)]
    client, _conn, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(admin, "admin_client", lambda: DummyAdmin(ready=False))
    assert client.get("/api/admin/access/flags").status_code == 503
    monkeypatch.setattr(admin, "admin_client", lambda: DummyAdmin(groups=["dev", "admin", "quality"], fail=True))
    assert client.get("/api/admin/access/flags").status_code == 502
    monkeypatch.setattr(admin.settings, "PORTAL_ADMIN_GROUPS", {"admin"})
    monkeypatch.setattr(admin, "admin_client", lambda: DummyAdmin(groups=["dev", "admin", "quality"]))
    assert client.get("/api/admin/access/flags").get_json() == {"flags": ["dev", "quality"]}
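The final assertion implies the flags endpoint offers every Keycloak group except those in `PORTAL_ADMIN_GROUPS`, preserving listing order. A sketch of that filtering, inferred from the test rather than taken from the route code:

```python
def available_flags(group_names, admin_groups):
    # Offer every group as an approval flag, excluding the admin groups.
    return [name for name in group_names if name not in admin_groups]


flags = available_flags(["dev", "admin", "quality"], {"admin"})
```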
def test_admin_approve_paths(monkeypatch) -> None:
    client, _conn, _ariadne = make_client(monkeypatch, is_admin=False)
    assert client.post("/api/admin/access/requests/alice/approve", json={}).status_code == 403
    client, _conn, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(admin, "configured", lambda: False)
    assert client.post("/api/admin/access/requests/alice/approve", json={}).status_code == 503
    ariadne = DummyAriadne(enabled=True)
    client, _conn, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.post("/api/admin/access/requests/alice space/approve", json={"flags": ["dev"]}).get_json()["proxied"]
    assert proxied.calls == [
        ("POST", "/api/admin/access/requests/alice%20space/approve", {"flags": ["dev"]})
    ]
    conn = DummyConn(row={"request_code": "alice~CODE"})
    provisioned: list[str] = []
    monkeypatch.setattr(admin, "provision_access_request", lambda code: provisioned.append(code))
    client, active_conn, _ariadne = make_client(monkeypatch, conn=conn)
    response = client.post("/api/admin/access/requests/alice/approve", json={"flags": ["dev", 7], "note": " ok "})
    assert response.get_json() == {"ok": True, "request_code": "alice~CODE"}
    assert provisioned == ["alice~CODE"]
    assert active_conn.executed[0][1] == ("brad", ["dev"], "ok", "alice")
    monkeypatch.setattr(admin, "provision_access_request", lambda code: (_ for _ in ()).throw(RuntimeError("boom")))
    client, _conn, _ariadne = make_client(monkeypatch, conn=conn)
    assert client.post("/api/admin/access/requests/alice/approve", json={}).status_code == 200
    client, _conn, _ariadne = make_client(monkeypatch, conn=DummyConn(row=None))
    assert client.post("/api/admin/access/requests/alice/approve", json={}).get_json() == {
        "ok": True,
        "request_code": "",
    }
    client, _conn, _ariadne = make_client(monkeypatch, conn=DummyConn(fail=True))
    assert client.post("/api/admin/access/requests/alice/approve", json={}).status_code == 502


def test_admin_deny_paths(monkeypatch) -> None:
    client, _conn, _ariadne = make_client(monkeypatch, is_admin=False)
    assert client.post("/api/admin/access/requests/alice/deny", json={}).status_code == 403
    client, _conn, _ariadne = make_client(monkeypatch)
    monkeypatch.setattr(admin, "configured", lambda: False)
    assert client.post("/api/admin/access/requests/alice/deny", json={}).status_code == 503
    ariadne = DummyAriadne(enabled=True)
    client, _conn, proxied = make_client(monkeypatch, ariadne=ariadne)
    assert client.post("/api/admin/access/requests/alice space/deny", json={"note": "no"}).get_json()["proxied"]
    assert proxied.calls == [("POST", "/api/admin/access/requests/alice%20space/deny", {"note": "no"})]
    conn = DummyConn(row={"request_code": "alice~DENY"})
    client, active_conn, _ariadne = make_client(monkeypatch, conn=conn)
    assert client.post("/api/admin/access/requests/alice/deny", json={"note": " no "}).get_json() == {
        "ok": True,
        "request_code": "alice~DENY",
    }
    assert active_conn.executed[0][1] == ("brad", "no", "alice")
    client, _conn, _ariadne = make_client(monkeypatch, conn=DummyConn(row=None))
    assert client.post("/api/admin/access/requests/alice/deny", json={}).get_json() == {
        "ok": True,
        "request_code": "",
    }
    client, _conn, _ariadne = make_client(monkeypatch, conn=DummyConn(fail=True))
    assert client.post("/api/admin/access/requests/alice/deny", json={}).status_code == 502

backend/tests/test_ai.py (new file, 357 lines)

@@ -0,0 +1,357 @@
from __future__ import annotations

import json
from types import SimpleNamespace
from unittest import TestCase, mock

import pytest

from atlas_portal.app_factory import create_app
from atlas_portal.routes import ai


class AiRouteTests(TestCase):
    @classmethod
    def setUpClass(cls):
        cls.app = create_app()
        cls.client = cls.app.test_client()

    def test_chat_routes_profiles_to_modes(self):
        seen: list[tuple[str, str]] = []

        def fake_atlasbot_answer(message: str, mode: str, conversation_id: str) -> str:
            seen.append((mode, conversation_id))
            return f"{mode}:{conversation_id}"

        with mock.patch.object(ai, "_atlasbot_answer", side_effect=fake_atlasbot_answer):
            for profile, expected_mode in (
                ("atlas-quick", "quick"),
                ("atlas-smart", "smart"),
                ("atlas-genius", "genius"),
            ):
                resp = self.client.post(
                    "/api/chat",
                    data=json.dumps(
                        {
                            "message": "How is Titan doing?",
                            "profile": profile,
                            "conversation_id": f"conv-{profile}",
                        }
                    ),
                    content_type="application/json",
                )
                data = resp.get_json()
                self.assertEqual(resp.status_code, 200)
                self.assertEqual(data.get("source"), f"atlas-{expected_mode}")
                self.assertEqual(data.get("reply"), f"{expected_mode}:conv-{profile}")
        self.assertEqual(
            seen,
            [
                ("quick", "conv-atlas-quick"),
                ("smart", "conv-atlas-smart"),
                ("genius", "conv-atlas-genius"),
            ],
        )

    def test_info_endpoint_exposes_profile_specific_model(self):
        with mock.patch.object(ai.settings, "AI_ATLASBOT_MODEL_GENIUS", "genius-model"):
            resp = self.client.get("/api/ai/info?profile=atlas-genius")
        data = resp.get_json()
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(data.get("profile"), "atlas-genius")
        self.assertEqual(data.get("model"), "genius-model")

    def test_atlasbot_answer_uses_profile_specific_timeout(self):
        captured: dict[str, object] = {}

        class DummyResponse:
            status_code = 200

            def json(self):
                return {"reply": "atlas reply"}

        class DummyClient:
            def __init__(self, timeout):
                captured["timeout"] = timeout

            def __enter__(self):
                return self

            def __exit__(self, exc_type, exc, tb):
                return False

            def post(self, endpoint, json=None, headers=None):
                captured["endpoint"] = endpoint
                captured["json"] = json
                captured["headers"] = headers or {}
                return DummyResponse()

        with (
            mock.patch.object(ai.httpx, "Client", DummyClient),
            mock.patch.object(ai.settings, "AI_ATLASBOT_ENDPOINT", "http://atlasbot.invalid/v1/answer"),
            mock.patch.object(ai.settings, "AI_ATLASBOT_TOKEN", "internal-token"),
        ):
            reply = ai._atlasbot_answer("How is Titan doing?", "genius", "conv-1")
        self.assertEqual(reply, "atlas reply")
        self.assertEqual(captured["timeout"], ai.settings.AI_ATLASBOT_TIMEOUT_GENIUS_SEC)
        self.assertEqual(captured["json"], {"prompt": "How is Titan doing?", "mode": "genius", "conversation_id": "conv-1"})
        self.assertEqual(captured["headers"], {"X-Internal-Token": "internal-token"})
def test_chat_returns_fallback_when_atlasbot_returns_empty(self):
for profile, expected in (
("atlas-quick", "Quick mode hit"),
("atlas-smart", "Smart mode hit"),
("atlas-genius", "Atlas genius mode timed out"),
):
with mock.patch.object(ai, "_atlasbot_answer", return_value=""):
resp = self.client.post(
"/api/chat",
data=json.dumps(
{
"message": "How is Titan doing?",
"profile": profile,
}
),
content_type="application/json",
)
data = resp.get_json()
self.assertEqual(resp.status_code, 200)
self.assertIn(expected, data.get("reply", ""))
def test_chat_requires_message() -> None:
client = create_app().test_client()
response = client.post("/api/chat", data=json.dumps({"message": ""}), content_type="application/json")
assert response.status_code == 400
assert response.get_json()["error"] == "message required"
def test_atlasbot_answer_soft_failure_paths(monkeypatch) -> None:
monkeypatch.setattr(ai.settings, "AI_ATLASBOT_ENDPOINT", "")
assert ai._atlasbot_answer("hello", "quick", "") == ""
class NonOkClient:
def __init__(self, timeout):
pass
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def post(self, endpoint, json=None, headers=None):
return SimpleNamespace(status_code=503)
monkeypatch.setattr(ai.settings, "AI_ATLASBOT_ENDPOINT", "http://atlasbot")
monkeypatch.setattr(ai.httpx, "Client", NonOkClient)
assert ai._atlasbot_answer("hello", "smart", "") == ""
class BadJsonClient(NonOkClient):
def post(self, endpoint, json=None, headers=None):
return SimpleNamespace(status_code=200, json=lambda: (_ for _ in ()).throw(ValueError("bad")))
monkeypatch.setattr(ai.httpx, "Client", BadJsonClient)
assert ai._atlasbot_answer("hello", "quick", "") == ""
assert ai._atlasbot_timeout_sec("smart") == ai.settings.AI_ATLASBOT_TIMEOUT_SMART_SEC
assert ai._atlasbot_timeout_sec("quick") == ai.settings.AI_ATLASBOT_TIMEOUT_QUICK_SEC
def test_discover_ai_meta_reads_pod_annotations(monkeypatch) -> None:
class FakePath:
def __init__(self, value):
self.value = value
def __truediv__(self, child):
return FakePath(child)
def exists(self):
return True
def read_text(self):
if self.value == "token":
return "token"
if self.value == "namespace":
return "ai"
return "ca"
def __str__(self):
return self.value
class PodClient:
def __init__(self, **kwargs):
self.kwargs = kwargs
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def get(self, url):
return SimpleNamespace(
raise_for_status=lambda: None,
json=lambda: {
"items": [
{
"status": {"phase": "Pending"},
"spec": {
"nodeName": "titan-24",
"containers": [{"image": "registry/atlasbot:model-from-image"}],
},
"metadata": {
"annotations": {
ai.settings.AI_GPU_ANNOTATION: "RTX 3090",
ai.settings.AI_MODEL_ANNOTATION: "annotated-model",
}
},
}
]
},
)
monkeypatch.setattr(ai, "Path", FakePath)
monkeypatch.setattr(ai.httpx, "Client", PodClient)
meta = ai._discover_ai_meta("atlas-quick")
assert meta["node"] == "titan-24"
assert meta["gpu"] == "RTX 3090"
assert meta["model"] == "annotated-model"
class ImageOnlyClient(PodClient):
def get(self, url):
return SimpleNamespace(
raise_for_status=lambda: None,
json=lambda: {
"items": [
{
"status": {"phase": "Running"},
"spec": {
"nodeName": "titan-22",
"containers": [{"image": "registry/atlasbot:model-from-image"}],
},
"metadata": {"annotations": {}},
}
]
},
)
monkeypatch.setattr(ai.httpx, "Client", ImageOnlyClient)
image_meta = ai._discover_ai_meta("atlas-smart")
assert image_meta["endpoint"] == "/api/ai/chat"
assert image_meta["model"] == "model-from-image"
def test_discover_ai_meta_handles_probe_errors(monkeypatch) -> None:
class MissingPath:
def __init__(self, value):
self.value = value
def __truediv__(self, child):
return MissingPath(child)
def exists(self):
return False
monkeypatch.setattr(ai, "Path", MissingPath)
assert ai._discover_ai_meta("quick")["endpoint"] == "/api/ai/chat"
class ExistingPath(MissingPath):
def __truediv__(self, child):
return ExistingPath(child)
def exists(self):
return True
def read_text(self):
return "token"
def __str__(self):
return self.value
class FailingClient:
def __init__(self, **kwargs):
pass
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def get(self, url):
raise RuntimeError("offline")
monkeypatch.setattr(ai, "Path", ExistingPath)
monkeypatch.setattr(ai.httpx, "Client", FailingClient)
assert ai._discover_ai_meta("atlas-genius")["endpoint"] == "/api/ai/chat"
def test_start_keep_warm_disabled_and_loop(monkeypatch) -> None:
monkeypatch.setattr(ai.settings, "AI_WARM_ENABLED", False)
ai._start_keep_warm()
posts: list[dict] = []
class WarmClient:
def __init__(self, timeout):
self.timeout = timeout
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def post(self, url, json=None):
posts.append({"url": url, "json": json})
sleeps = {"count": 0}
def fake_sleep(seconds):
sleeps["count"] += 1
if sleeps["count"] > 1:
raise KeyboardInterrupt()
class InlineThread:
def __init__(self, target, daemon, name):
self.target = target
def start(self):
self.target()
monkeypatch.setattr(ai.settings, "AI_WARM_ENABLED", True)
monkeypatch.setattr(ai.settings, "AI_WARM_INTERVAL_SEC", 1)
monkeypatch.setattr(ai.time, "sleep", fake_sleep)
monkeypatch.setattr(ai.httpx, "Client", WarmClient)
monkeypatch.setattr(ai.threading, "Thread", InlineThread)
with pytest.raises(KeyboardInterrupt):
ai._start_keep_warm()
assert posts
class RaisingWarmClient(WarmClient):
def post(self, url, json=None):
raise RuntimeError("keep-warm backend unavailable")
loop_sleeps = {"count": 0}
def stop_after_exception(seconds):
loop_sleeps["count"] += 1
if loop_sleeps["count"] > 1:
raise KeyboardInterrupt()
monkeypatch.setattr(ai.time, "sleep", stop_after_exception)
monkeypatch.setattr(ai.httpx, "Client", RaisingWarmClient)
with pytest.raises(KeyboardInterrupt):
ai._start_keep_warm()


@ -0,0 +1,50 @@
"""Tests for Flask application assembly and frontend fallback behavior."""
from __future__ import annotations

from pathlib import Path

from atlas_portal.app_factory import create_app


def test_create_app_exposes_health_endpoint() -> None:
    app = create_app()
    client = app.test_client()
    resp = client.get("/api/healthz")
    assert resp.status_code == 200
    assert resp.get_json() == {"ok": True}


def test_create_app_returns_json_when_frontend_is_missing() -> None:
    app = create_app()
    client = app.test_client()
    original = app.static_folder
    app.static_folder = str(Path("/tmp") / "missing-frontend-dist")
    try:
        resp = client.get("/")
    finally:
        app.static_folder = original
    data = resp.get_json()
    assert resp.status_code == 200
    assert "Frontend not built yet" in data["message"]


def test_create_app_serves_existing_static_assets(tmp_path) -> None:
    app = create_app()
    (tmp_path / "index.html").write_text("<html>ok</html>")
    (tmp_path / "asset.txt").write_text("payload")
    original = app.static_folder
    app.static_folder = str(tmp_path)
    try:
        with app.test_request_context("/asset.txt"):
            resp = app.view_functions["serve_frontend"]("asset.txt")
    finally:
        app.static_folder = original
    assert resp.status_code == 200
    resp.direct_passthrough = False
    assert resp.get_data() == b"payload"


@ -0,0 +1,36 @@
"""Tests for the Keycloak auth config route."""
from __future__ import annotations

from atlas_portal import settings
from atlas_portal.app_factory import create_app


def test_auth_config_disabled_by_default() -> None:
    app = create_app()
    client = app.test_client()
    resp = client.get("/api/auth/config")
    assert resp.status_code == 200
    assert resp.get_json() == {"enabled": False}


def test_auth_config_builds_urls_when_enabled(monkeypatch) -> None:
    monkeypatch.setattr(settings, "KEYCLOAK_ENABLED", True)
    monkeypatch.setattr(settings, "KEYCLOAK_URL", "https://sso.example.dev")
    monkeypatch.setattr(settings, "KEYCLOAK_REALM", "atlas")
    monkeypatch.setattr(settings, "KEYCLOAK_CLIENT_ID", "portal-client")
    monkeypatch.setattr(settings, "KEYCLOAK_ISSUER", "https://sso.example.dev/realms/atlas")
    app = create_app()
    client = app.test_client()
    resp = client.get("/api/auth/config", base_url="https://portal.example.dev")
    data = resp.get_json()
    assert resp.status_code == 200
    assert data["enabled"] is True
    assert data["login_url"].startswith("https://sso.example.dev/realms/atlas/protocol/openid-connect/auth")
    assert "client_id=portal-client" in data["login_url"]
    assert data["account_password_url"].endswith("#/security/signingin")


@ -0,0 +1,46 @@
"""Tests for the tiny health and Monero endpoints."""
from __future__ import annotations

import json
from urllib.error import URLError

from atlas_portal.app_factory import create_app
from atlas_portal.routes import monero


def test_monero_endpoint_returns_upstream_json(monkeypatch) -> None:
    class DummyResponse:
        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def read(self):
            return json.dumps({"status": "OK", "nettype": "mainnet"}).encode("utf-8")

    monkeypatch.setattr(monero, "urlopen", lambda *args, **kwargs: DummyResponse())
    app = create_app()
    client = app.test_client()
    resp = client.get("/api/monero/get_info")
    assert resp.status_code == 200
    assert resp.get_json()["status"] == "OK"


def test_monero_endpoint_handles_upstream_failure(monkeypatch) -> None:
    def boom(*args, **kwargs):
        raise URLError("boom")

    monkeypatch.setattr(monero, "urlopen", boom)
    app = create_app()
    client = app.test_client()
    resp = client.get("/api/monero/get_info")
    assert resp.status_code == 503
    assert resp.get_json()["url"].startswith("http://")


@ -0,0 +1,380 @@
"""Coverage for Keycloak token verification, admin operations, and route guards."""
from __future__ import annotations

from types import SimpleNamespace

import pytest

from atlas_portal import keycloak
from atlas_portal.app_factory import create_app


class DummyResponse:
    """Small HTTP response double for Keycloak admin tests."""

    def __init__(self, payload=None, *, headers=None, status_code: int = 200) -> None:
        self._payload = payload if payload is not None else {}
        self.headers = headers or {}
        self.status_code = status_code

    def json(self):
        """Return the configured response payload."""
        return self._payload

    def raise_for_status(self) -> None:
        """Raise for configured error statuses."""
        if self.status_code >= 400:
            raise RuntimeError("bad status")


class SequenceClient:
    """httpx.Client replacement that returns queued responses."""

    responses: list[DummyResponse] = []
    calls: list[tuple[str, str, dict]] = []

    def __init__(self, timeout):
        self.timeout = timeout

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False

    @classmethod
    def reset(cls, *responses: DummyResponse) -> None:
        """Replace queued responses and clear captured calls."""
        cls.responses = list(responses)
        cls.calls = []

    def _next(self) -> DummyResponse:
        return self.responses.pop(0)

    def get(self, url, **kwargs):
        self.calls.append(("GET", url, kwargs))
        return self._next()

    def post(self, url, **kwargs):
        self.calls.append(("POST", url, kwargs))
        return self._next()

    def put(self, url, **kwargs):
        self.calls.append(("PUT", url, kwargs))
        return self._next()


def test_oidc_verify_checks_issuer_and_client(monkeypatch) -> None:
    verifier = keycloak.KeycloakOIDC()
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ENABLED", False)
    with pytest.raises(ValueError, match="not enabled"):
        verifier.verify("token")

    class DummyJwk:
        key = "signing-key"

    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ENABLED", True)
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_CLIENT_ID", "portal")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ISSUER", "https://sso/realms/atlas")
    monkeypatch.setattr(verifier, "_client", lambda: SimpleNamespace(get_signing_key_from_jwt=lambda token: DummyJwk()))
    monkeypatch.setattr(keycloak.jwt, "decode", lambda *a, **k: {"azp": "portal", "aud": ["other"]})
    assert verifier.verify("token")["azp"] == "portal"
    monkeypatch.setattr(keycloak.jwt, "decode", lambda *a, **k: {"azp": "other", "aud": ["not-portal"]})
    with pytest.raises(ValueError, match="not issued"):
        verifier.verify("token")

    made: list[str] = []

    class DummyJwkClient:
        def __init__(self, url):
            made.append(url)

    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_JWKS_URL", "https://sso/jwks")
    monkeypatch.setattr(keycloak, "PyJWKClient", DummyJwkClient)
    verifier = keycloak.KeycloakOIDC()
    assert verifier._client() is verifier._client()
    assert made == ["https://sso/jwks"]

    verifier = keycloak.KeycloakOIDC()
    monkeypatch.setattr(verifier, "_client", lambda: SimpleNamespace(get_signing_key_from_jwt=lambda token: SimpleNamespace(key="k")))
    monkeypatch.setattr(keycloak.jwt, "decode", lambda *a, **k: {"azp": "other", "aud": "portal"})
    assert verifier.verify("token")["aud"] == "portal"


def test_admin_token_cache_and_basic_user_operations(monkeypatch) -> None:
    monkeypatch.setattr(keycloak.httpx, "Client", SequenceClient)
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_ID", "admin-client")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_SECRET", "secret")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_URL", "https://sso.example.dev")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_REALM", "master")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_REALM", "atlas")
    client = keycloak.KeycloakAdminClient()
    SequenceClient.reset(DummyResponse({"access_token": "tok", "expires_in": 120}))
    assert client.ready()
    assert client.headers() == {"Authorization": "Bearer tok"}
    assert client.headers() == {"Authorization": "Bearer tok"}
    assert len([call for call in SequenceClient.calls if call[0] == "POST"]) == 1
    SequenceClient.reset(DummyResponse([{"id": "u1", "username": "alice"}]))
    assert client.find_user("alice") == {"id": "u1", "username": "alice"}
    SequenceClient.reset(DummyResponse([{"email": "ALICE@example.dev"}]))
    assert client.find_user_by_email("alice@example.dev") == {"email": "ALICE@example.dev"}
    assert client.find_user_by_email("") is None
    SequenceClient.reset(DummyResponse({"id": "u1", "username": "alice"}))
    assert client.get_user("u1")["username"] == "alice"
    SequenceClient.reset(DummyResponse({}))
    client.update_user("u1", {"enabled": True})
    assert SequenceClient.calls[-1][0] == "PUT"
    SequenceClient.reset(
        DummyResponse({"username": "alice", "enabled": True, "attributes": {"old": ["1"]}}),
        DummyResponse({}),
    )
    client.update_user_safe("u1", {"attributes": {"new": ["2"]}, "email": "alice@example.dev"})
    sent = SequenceClient.calls[-1][2]["json"]
    assert sent["attributes"]["old"] == ["1"]
    assert sent["attributes"]["new"] == ["2"]
    assert sent["email"] == "alice@example.dev"
    full_payload = keycloak.KeycloakAdminClient._safe_update_payload(
        {
            "username": "alice",
            "enabled": True,
            "email": "alice@example.dev",
            "emailVerified": True,
            "firstName": "Alice",
            "lastName": "Atlas",
            "requiredActions": ["UPDATE_PASSWORD", 7],
            "attributes": "bad",
        }
    )
    assert full_payload["emailVerified"] is True
    assert full_payload["firstName"] == "Alice"
    assert full_payload["lastName"] == "Atlas"
    assert full_payload["requiredActions"] == ["UPDATE_PASSWORD"]
    assert full_payload["attributes"] == {}
    unready = keycloak.KeycloakAdminClient()
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_ID", "")
    with pytest.raises(RuntimeError, match="not configured"):
        unready._get_token()
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_ID", "admin-client")
    SequenceClient.reset(DummyResponse({}))
    client = keycloak.KeycloakAdminClient()
    with pytest.raises(RuntimeError, match="no access_token"):
        client._get_token()
    client._token = "tok"
    client._expires_at = 9999999999
    SequenceClient.reset(DummyResponse([]), DummyResponse("not-list"), DummyResponse([{"email": "other@example.dev"}]))
    assert client.find_user("missing") is None
    assert client.find_user_by_email("alice@example.dev") is None
    assert client.find_user_by_email("alice@example.dev") is None
    SequenceClient.reset(DummyResponse([]))
    assert client.find_user_by_email("alice@example.dev") is None
    SequenceClient.reset(DummyResponse("not-dict"))
    with pytest.raises(RuntimeError, match="unexpected user payload"):
        client.get_user("u1")
    SequenceClient.reset(DummyResponse({"username": "alice", "attributes": "bad"}), DummyResponse({}))
    client.update_user_safe("u1", {"attributes": {"new": ["2"]}})
    assert SequenceClient.calls[-1][2]["json"]["attributes"] == {"new": ["2"]}


def test_admin_create_password_groups_and_credentials(monkeypatch) -> None:
    monkeypatch.setattr(keycloak.httpx, "Client", SequenceClient)
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_ID", "admin-client")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_SECRET", "secret")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_URL", "https://sso.example.dev")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_REALM", "master")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_REALM", "atlas")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_CLIENT_ID", "portal")
    client = keycloak.KeycloakAdminClient()
    client._token = "tok"
    client._expires_at = 9999999999
    SequenceClient.reset(DummyResponse({}, headers={"Location": "https://sso/admin/users/u1"}))
    assert client.create_user({"username": "alice"}) == "u1"
    SequenceClient.reset(DummyResponse({}))
    client.reset_password("u1", "pw", temporary=False)
    assert SequenceClient.calls[-1][2]["json"]["temporary"] is False
    SequenceClient.reset(
        DummyResponse([{"name": "dev", "id": "g1"}]),
        DummyResponse([{"name": "ignored", "id": "g2"}]),
    )
    assert client.get_group_id("dev") == "g1"
    assert client.get_group_id("dev") == "g1"
    assert len(SequenceClient.calls) == 1
    groups = [{"name": "root", "subGroups": [{"name": "child"}]}]
    SequenceClient.reset(DummyResponse(groups))
    assert client.list_group_names() == ["child", "root"]
    SequenceClient.reset(DummyResponse([{"name": "/dev"}, {"name": "admin"}, "bad"]))
    assert client.list_user_groups("u1") == ["dev", "admin"]
    SequenceClient.reset(DummyResponse({}))
    client.add_user_to_group("u1", "g1")
    assert "/groups/g1" in SequenceClient.calls[-1][1]
    SequenceClient.reset(DummyResponse({}))
    client.execute_actions_email("u1", ["UPDATE_PASSWORD"], "https://portal/account")
    assert SequenceClient.calls[-1][2]["json"] == ["UPDATE_PASSWORD"]
    SequenceClient.reset(DummyResponse([{"type": "password"}, "bad"]))
    assert client.get_user_credentials("u1") == [{"type": "password"}]
    SequenceClient.reset(DummyResponse({"not": "a-list"}))
    assert client.get_user_credentials("u1") == []
    SequenceClient.reset(DummyResponse([{"name": "other", "id": "g2"}, "bad"]))
    assert client.get_group_id("missing") is None
    SequenceClient.reset(DummyResponse([{"name": "root", "subGroups": ["bad"]}]))
    assert client.list_group_names() == ["root"]


def test_admin_set_attribute_and_error_edges(monkeypatch) -> None:
    client = keycloak.KeycloakAdminClient()
    monkeypatch.setattr(client, "find_user", lambda username: {"id": "u1"} if username == "alice" else None)
    monkeypatch.setattr(client, "get_user", lambda user_id: {"username": "alice", "attributes": {"old": ["1"]}})
    updated: list[dict] = []
    monkeypatch.setattr(client, "update_user", lambda user_id, payload: updated.append(payload))
    client.set_user_attribute("alice", "mailu", "pw")
    assert updated[0]["attributes"]["mailu"] == ["pw"]
    monkeypatch.setattr(client, "get_user", lambda user_id: {"username": "alice", "attributes": "bad"})
    client.set_user_attribute("alice", "next", "value")
    assert updated[-1]["attributes"]["next"] == ["value"]
    with pytest.raises(RuntimeError, match="user not found"):
        client.set_user_attribute("nobody", "mailu", "pw")
    monkeypatch.setattr(client, "find_user", lambda username: {"id": ""})
    with pytest.raises(RuntimeError, match="user id missing"):
        client.set_user_attribute("alice", "mailu", "pw")
    monkeypatch.setattr(keycloak.httpx, "Client", SequenceClient)
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_ID", "admin-client")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_CLIENT_SECRET", "secret")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ADMIN_URL", "https://sso.example.dev")
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_REALM", "atlas")
    client = keycloak.KeycloakAdminClient()
    client._token = "tok"
    client._expires_at = 9999999999
    SequenceClient.reset(DummyResponse({}))
    with pytest.raises(RuntimeError, match="created user id"):
        client.create_user({"username": "alice"})
    SequenceClient.reset(DummyResponse([]))
    assert client.get_group_id("missing") is None
    SequenceClient.reset(DummyResponse({"not": "groups"}))
    assert client.list_group_names() == []
    SequenceClient.reset(DummyResponse({"not": "groups"}))
    assert client.list_user_groups("u1") == []


def test_singletons_and_auth_guards(monkeypatch) -> None:
    app = create_app()
    monkeypatch.setattr(keycloak, "_OIDC", None)
    monkeypatch.setattr(keycloak, "_ADMIN", None)
    assert keycloak.oidc_client() is keycloak.oidc_client()
    assert keycloak.admin_client() is keycloak.admin_client()
    with app.test_request_context():
        assert keycloak._extract_bearer_token() is None
    with app.test_request_context(headers={"Authorization": "Bearer tok"}):
        assert keycloak._extract_bearer_token() == "tok"
    with app.test_request_context(headers={"Authorization": "Token tok"}):
        assert keycloak._extract_bearer_token() is None
    with app.test_request_context(headers={"Authorization": "Bearer "}):
        assert keycloak._extract_bearer_token() is None
    assert keycloak._normalize_groups("bad") == []
    assert keycloak._normalize_groups(["/dev", 7, ""]) == ["dev"]

    @keycloak.require_auth
    def protected():
        return {"ok": True}

    monkeypatch.setattr(
        keycloak,
        "oidc_client",
        lambda: SimpleNamespace(verify=lambda token: {"preferred_username": "alice", "email": "a@example.dev", "groups": ["/dev"]}),
    )
    with app.test_request_context(headers={"Authorization": "Bearer tok"}):
        assert protected() == {"ok": True}
        assert keycloak.g.keycloak_groups == ["dev"]
    monkeypatch.setattr(
        keycloak,
        "oidc_client",
        lambda: SimpleNamespace(verify=lambda token: (_ for _ in ()).throw(ValueError("bad"))),
    )
    with app.test_request_context(headers={"Authorization": "Bearer tok"}):
        response, status = protected()
        assert status == 401
        assert response.get_json()["error"] == "invalid token"
    with app.test_request_context():
        response, status = protected()
        assert status == 401
        assert response.get_json()["error"] == "missing bearer token"
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ENABLED", False)
    with app.test_request_context():
        ok, response = keycloak.require_portal_admin()
        assert not ok and response[1] == 503
        ok, response = keycloak.require_account_access()
        assert not ok and response[1] == 503
    monkeypatch.setattr(keycloak.settings, "KEYCLOAK_ENABLED", True)
    monkeypatch.setattr(keycloak.settings, "PORTAL_ADMIN_USERS", {"alice"})
    monkeypatch.setattr(keycloak.settings, "PORTAL_ADMIN_GROUPS", {"admin"})
    monkeypatch.setattr(keycloak.settings, "ACCOUNT_ALLOWED_GROUPS", {"dev"})
    with app.test_request_context():
        keycloak.g.keycloak_username = "alice"
        keycloak.g.keycloak_groups = ["dev"]
        assert keycloak.require_portal_admin() == (True, None)
        assert keycloak.require_account_access() == (True, None)
    monkeypatch.setattr(keycloak.settings, "PORTAL_ADMIN_USERS", set())
    with app.test_request_context():
        keycloak.g.keycloak_username = "carol"
        keycloak.g.keycloak_groups = ["admin"]
        assert keycloak.require_portal_admin() == (True, None)
    monkeypatch.setattr(keycloak.settings, "ACCOUNT_ALLOWED_GROUPS", set())
    with app.test_request_context():
        assert keycloak.require_account_access() == (True, None)
    monkeypatch.setattr(keycloak.settings, "ACCOUNT_ALLOWED_GROUPS", {"dev"})
    with app.test_request_context():
        keycloak.g.keycloak_groups = []
        assert keycloak.require_account_access() == (True, None)
    with app.test_request_context():
        keycloak.g.keycloak_username = "bob"
        keycloak.g.keycloak_groups = ["other"]
        assert keycloak.require_portal_admin()[0] is False
        assert keycloak.require_account_access()[0] is False


@ -0,0 +1,132 @@
"""Tests for lab health query helpers and route payloads."""
from __future__ import annotations

import json
from urllib.error import URLError

from atlas_portal.app_factory import create_app
from atlas_portal.routes import lab


class DummyUrlResponse:
    """Small context-manager response for urlopen tests."""

    def __init__(self, payload: dict | str, *, status: int = 200) -> None:
        self.payload = payload
        self.status = status

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False

    def read(self, size: int | None = None) -> bytes:
        """Return JSON or string content as response bytes."""
        if isinstance(self.payload, str):
            return self.payload.encode("utf-8")
        return json.dumps(self.payload).encode("utf-8")


def test_vm_query_success_and_empty_paths(monkeypatch) -> None:
    payloads = [
        {"status": "success", "data": {"result": [{"value": [0, "2"]}, {"value": [0, "5"]}]}},
        {"status": "error"},
        {"status": "success", "data": {"result": []}},
        {"status": "success", "data": {"result": [{"bad": []}]}},
    ]

    def fake_urlopen(url, timeout):
        return DummyUrlResponse(payloads.pop(0))

    monkeypatch.setattr(lab, "urlopen", fake_urlopen)
    assert lab._vm_query("up") == 5.0
    assert lab._vm_query("up") is None
    assert lab._vm_query("up") is None
    assert lab._vm_query("up") is None


def test_http_ok_status_substring_and_errors(monkeypatch) -> None:
    responses = [
        DummyUrlResponse("service ok"),
        DummyUrlResponse("wrong body"),
        DummyUrlResponse("bad", status=503),
    ]

    def fake_urlopen(url, timeout):
        item = responses.pop(0)
        if item == "raise":
            raise URLError("offline")
        return item

    monkeypatch.setattr(lab, "urlopen", fake_urlopen)
    assert lab._http_ok("https://grafana", expect_substring="ok")
    assert not lab._http_ok("https://grafana", expect_substring="ok")
    assert not lab._http_ok("https://grafana")
    responses.append("raise")
    assert not lab._http_ok("https://grafana")


def test_lab_status_uses_cache_and_probe_fallbacks(monkeypatch) -> None:
    app = create_app()
    client = app.test_client()
    lab._LAB_STATUS_CACHE["ts"] = 0.0
    lab._LAB_STATUS_CACHE["value"] = None
    monkeypatch.setattr(lab.settings, "OCEANUS_NODE_EXPORTER_URL", "https://oceanus.example.dev/metrics")
    calls: list[str] = []

    def fake_http_ok(url, expect_substring=None):
        calls.append(url)
        return "grafana" in url or "oceanus" in url

    monkeypatch.setattr(lab, "_http_ok", fake_http_ok)
    monkeypatch.setattr(lab, "_vm_query", lambda expr: 1.0)
    response = client.get("/api/lab/status")
    payload = response.get_json()
    assert response.status_code == 200
    assert payload["connected"] is True
    assert payload["atlas"]["source"] == "grafana"
    assert payload["oceanus"]["source"] == "node-exporter"
    second = client.get("/api/lab/status")
    assert second.get_json() == payload
    lab._LAB_STATUS_CACHE["ts"] = 0.0
    lab._LAB_STATUS_CACHE["value"] = None
    monkeypatch.setattr(lab, "_http_ok", lambda *a, **k: False)
    monkeypatch.setattr(lab, "_vm_query", lambda expr: 0.0)
    response = client.get("/api/lab/status")
    payload = response.get_json()
    assert payload["atlas"]["source"] == "victoria-metrics"
    assert payload["atlas"]["up"] is False
    assert payload["oceanus"]["known"] is False


def test_lab_status_handles_probe_exceptions(monkeypatch) -> None:
    app = create_app()
    client = app.test_client()
    lab._LAB_STATUS_CACHE["ts"] = 0.0
    lab._LAB_STATUS_CACHE["value"] = None

    def boom(*args, **kwargs):
        raise RuntimeError("offline")

    monkeypatch.setattr(lab, "_http_ok", boom)
    monkeypatch.setattr(lab, "_vm_query", boom)
    response = client.get("/api/lab/status")
    payload = response.get_json()
    assert response.status_code == 200
    assert payload["connected"] is False
    assert payload["atlas"]["known"] is False
    assert payload["oceanus"]["known"] is False


@ -0,0 +1,353 @@
from __future__ import annotations
"""Coverage for backend integration helper modules."""
from contextlib import contextmanager
from types import SimpleNamespace
import httpx
import pytest
from atlas_portal import ariadne_client, db, k8s, mailer, migrate
from atlas_portal.app_factory import create_app
class DummyResponse:
"""Small httpx-like response for helper tests."""
def __init__(self, payload=None, *, status_code: int = 200, text: str = "", headers=None) -> None:
self._payload = payload if payload is not None else {}
self.status_code = status_code
self.text = text
self.headers = headers or {}
def json(self):
"""Return the configured JSON payload."""
if isinstance(self._payload, BaseException):
raise self._payload
return self._payload
def raise_for_status(self) -> None:
"""Raise for non-success responses like httpx does."""
if self.status_code >= 400:
raise httpx.HTTPStatusError("bad status", request=None, response=None)
def test_migrate_main_delegates_to_db(monkeypatch) -> None:
calls: list[str] = []
monkeypatch.setattr(migrate, "run_migrations", lambda: calls.append("run"))
migrate.main()
assert calls == ["run"]
def test_mailer_validates_configuration_and_sends(monkeypatch) -> None:
sent: list[tuple[str, str]] = []
class DummySMTP:
def __init__(self, host, port, timeout):
self.host = host
self.port = port
self.timeout = timeout
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def starttls(self) -> None:
sent.append(("starttls", ""))
def login(self, username, password) -> None:
sent.append(("login", f"{username}:{password}"))
def send_message(self, message) -> None:
sent.append(("send", message["To"]))
monkeypatch.setattr(mailer.settings, "SMTP_HOST", "")
with pytest.raises(mailer.MailerError):
mailer.send_text_email(to_addr="a@example.dev", subject="Subject", body="Body")
monkeypatch.setattr(mailer.settings, "SMTP_HOST", "smtp.example.dev")
monkeypatch.setattr(mailer.settings, "SMTP_PORT", 587)
monkeypatch.setattr(mailer.settings, "SMTP_USE_TLS", False)
monkeypatch.setattr(mailer.settings, "SMTP_STARTTLS", True)
monkeypatch.setattr(mailer.settings, "SMTP_USERNAME", "user")
monkeypatch.setattr(mailer.settings, "SMTP_PASSWORD", "pw")
monkeypatch.setattr(mailer.smtplib, "SMTP", DummySMTP)
mailer.send_text_email(to_addr="a@example.dev", subject="Subject", body="Body")
assert ("starttls", "") in sent
assert ("login", "user:pw") in sent
assert ("send", "a@example.dev") in sent
body = mailer.access_request_verification_body(request_code="REQ", verify_url="https://verify.example.dev")
assert "REQ" in body and "https://verify.example.dev" in body
def test_mailer_reports_missing_recipient_and_send_errors(monkeypatch) -> None:
    class FailingSMTP:
        def __init__(self, *args, **kwargs):
            pass

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def send_message(self, message) -> None:
            raise OSError("offline")

    with pytest.raises(mailer.MailerError, match="missing recipient"):
        mailer.send_text_email(to_addr="", subject="Subject", body="Body")
    monkeypatch.setattr(mailer.settings, "SMTP_HOST", "smtp.example.dev")
    monkeypatch.setattr(mailer.settings, "SMTP_USE_TLS", True)
    monkeypatch.setattr(mailer.settings, "SMTP_STARTTLS", False)
    monkeypatch.setattr(mailer.settings, "SMTP_USERNAME", "")
    monkeypatch.setattr(mailer.smtplib, "SMTP_SSL", FailingSMTP)
    with pytest.raises(mailer.MailerError, match="failed to send email"):
        mailer.send_text_email(to_addr="a@example.dev", subject="Subject", body="Body")
def test_k8s_get_and_post_json(monkeypatch) -> None:
    calls: list[tuple[str, str, object]] = []

    class DummyClient:
        def __init__(self, **kwargs):
            calls.append(("init", "", kwargs))

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def get(self, url):
            calls.append(("get", url, None))
            return DummyResponse({"kind": "Pod"})

        def post(self, url, json=None):
            calls.append(("post", url, json))
            return DummyResponse({"kind": "Job"})

    monkeypatch.setattr(k8s, "_read_service_account", lambda: ("token", "/ca.crt"))
    monkeypatch.setattr(k8s.httpx, "Client", DummyClient)
    assert k8s.get_json("/api/v1/pods") == {"kind": "Pod"}
    assert k8s.post_json("/apis/batch/v1/jobs", {"metadata": {"name": "job"}}) == {"kind": "Job"}
    assert calls[1][0] == "get"
    assert calls[2][0] == "init"
    assert calls[3][0] == "post"
def test_k8s_service_account_and_bad_json(monkeypatch, tmp_path) -> None:
    sa_path = tmp_path / "sa"
    sa_path.mkdir()
    monkeypatch.setattr(k8s, "_SA_PATH", sa_path)
    with pytest.raises(RuntimeError, match="token missing"):
        k8s._read_service_account()
    (sa_path / "token").write_text(" ")
    (sa_path / "ca.crt").write_text("ca")
    with pytest.raises(RuntimeError, match="token empty"):
        k8s._read_service_account()
    (sa_path / "token").write_text("token")
    assert k8s._read_service_account() == ("token", str(sa_path / "ca.crt"))

    class BadClient:
        def __init__(self, **kwargs):
            pass

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def get(self, url):
            return DummyResponse([])

        def post(self, url, json=None):
            return DummyResponse([])

    monkeypatch.setattr(k8s.httpx, "Client", BadClient)
    with pytest.raises(RuntimeError, match="unexpected kubernetes response"):
        k8s.get_json("/api/v1/pods")
    with pytest.raises(RuntimeError, match="unexpected kubernetes response"):
        k8s.post_json("/api/v1/pods", {})
def test_ariadne_proxy_paths(monkeypatch) -> None:
    monkeypatch.setattr(ariadne_client.settings, "ARIADNE_URL", "")
    assert not ariadne_client.enabled()
    with pytest.raises(ariadne_client.AriadneError):
        ariadne_client.request_raw("GET", "/health")

    class DummyClient:
        def __init__(self, timeout):
            self.timeout = timeout

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def request(self, method, url, headers=None, json=None, params=None):
            assert headers == {"Authorization": "Bearer token"}
            return DummyResponse({"ok": True})

    monkeypatch.setattr(ariadne_client.settings, "ARIADNE_URL", "https://ariadne.example.dev")
    monkeypatch.setattr(ariadne_client.httpx, "Client", DummyClient)
    app = create_app()
    with app.test_request_context(headers={"Authorization": "Bearer token"}):
        response = ariadne_client.request_raw("POST", "/path", payload={"a": 1})
        assert response.json() == {"ok": True}
        flask_response, status = ariadne_client.proxy("POST", "/path")
        assert status == 200
        assert flask_response.get_json() == {"ok": True}
def test_ariadne_error_and_proxy_fallback_paths(monkeypatch) -> None:
    class ServerErrorClient:
        def __init__(self, timeout):
            pass

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def request(self, method, url, headers=None, json=None, params=None):
            return DummyResponse({"error": "upstream"}, status_code=503)

    monkeypatch.setattr(ariadne_client.settings, "ARIADNE_URL", "https://ariadne.example.dev")
    monkeypatch.setattr(ariadne_client.httpx, "Client", ServerErrorClient)
    app = create_app()
    with app.test_request_context():
        assert ariadne_client.request_raw("GET", "/health").status_code == 503
    attempts = {"count": 0}

    class FailingClient:
        def __init__(self, timeout):
            pass

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def request(self, method, url, headers=None, json=None, params=None):
            attempts["count"] += 1
            raise httpx.RequestError("offline")

    monkeypatch.setattr(ariadne_client.settings, "ARIADNE_RETRY_COUNT", 2)
    monkeypatch.setattr(ariadne_client.settings, "ARIADNE_RETRY_BACKOFF_SEC", 0)
    monkeypatch.setattr(ariadne_client.httpx, "Client", FailingClient)
    with app.test_request_context():
        with pytest.raises(ariadne_client.AriadneError):
            ariadne_client.request_raw("GET", "/health")
    assert attempts["count"] == 2
    with app.test_request_context():
        monkeypatch.setattr(ariadne_client, "request_raw", lambda *a, **k: (_ for _ in ()).throw(ariadne_client.AriadneError("down", 504)))
        response, status = ariadne_client.proxy("GET", "/health")
        assert status == 504
        assert response.get_json()["error"] == "down"
        monkeypatch.setattr(ariadne_client, "request_raw", lambda *a, **k: DummyResponse(ValueError("bad json"), text="plain", status_code=502))
        response, status = ariadne_client.proxy("GET", "/health")
        assert status == 502
        assert response.get_json()["error"] == "plain"
def test_db_pool_and_migration_paths(monkeypatch) -> None:
    executed: list[tuple[str, object]] = []

    class DummyConn:
        row_factory = None

        def execute(self, query, params=None):
            executed.append((str(query), params))
            if "pg_try_advisory_lock" in str(query):
                return SimpleNamespace(fetchone=lambda: {"pg_try_advisory_lock": True})
            return SimpleNamespace(fetchone=lambda: None)

    @contextmanager
    def fake_connect():
        yield DummyConn()

    monkeypatch.setattr(db.settings, "PORTAL_DATABASE_URL", "")
    assert not db.configured()
    with pytest.raises(RuntimeError):
        db._get_pool()
    with pytest.raises(RuntimeError):
        with db.connect():
            pass
    monkeypatch.setattr(db.settings, "PORTAL_DATABASE_URL", "postgres://portal")
    monkeypatch.setattr(db.settings, "PORTAL_RUN_MIGRATIONS", True)
    monkeypatch.setattr(db, "connect", fake_connect)
    monkeypatch.setattr(db, "_release_advisory_lock", lambda conn, lock_id: executed.append(("release", lock_id)))
    db.run_migrations()
    db.ensure_schema()
    assert any("CREATE TABLE IF NOT EXISTS access_requests" in query for query, _ in executed)
    assert ("release", db.MIGRATION_LOCK_ID) in executed
def test_db_pool_connect_and_lock_edge_paths(monkeypatch) -> None:
    class DummyPool:
        def __init__(self, **kwargs):
            self.kwargs = kwargs

        @contextmanager
        def connection(self):
            yield SimpleNamespace(row_factory=None)

    monkeypatch.setattr(db.settings, "PORTAL_DATABASE_URL", "postgres://portal")
    monkeypatch.setattr(db, "ConnectionPool", DummyPool)
    monkeypatch.setattr(db, "_pool", None)
    assert db.configured()
    pool = db._get_pool()
    assert pool.kwargs["conninfo"] == "postgres://portal"
    assert "statement_timeout" in db._pool_kwargs()["options"]
    with db.connect() as conn:
        assert conn.row_factory is db.dict_row
    assert db._get_pool() is pool
    tuple_conn = SimpleNamespace(execute=lambda *a, **k: SimpleNamespace(fetchone=lambda: (0,)))
    assert not db._try_advisory_lock(tuple_conn, 1)

    class BadConn:
        def execute(self, *args, **kwargs):
            raise RuntimeError("ignore")

    db._release_advisory_lock(BadConn(), 1)
def test_db_migration_lock_skip(monkeypatch) -> None:
    @contextmanager
    def fake_connect():
        yield SimpleNamespace(execute=lambda *a, **k: SimpleNamespace(fetchone=lambda: {"pg_try_advisory_lock": False}))

    monkeypatch.setattr(db.settings, "PORTAL_DATABASE_URL", "postgres://portal")
    monkeypatch.setattr(db.settings, "PORTAL_RUN_MIGRATIONS", True)
    monkeypatch.setattr(db, "connect", fake_connect)
    db.run_migrations()


@@ -0,0 +1,433 @@
from __future__ import annotations

from contextlib import contextmanager
from datetime import datetime, timedelta, timezone
from types import SimpleNamespace
from typing import Any

from atlas_portal import provisioning


class DummyResult:
    def __init__(self, row: dict[str, Any] | None = None) -> None:
        self.row = row

    def fetchone(self) -> dict[str, Any] | None:
        return self.row


class DummyConn:
    def __init__(self, row: dict[str, Any] | None = None, *, locked: bool = True) -> None:
        self.row = row
        self.locked = locked
        self.executed: list[tuple[str, object | None]] = []

    def execute(self, query: str, params: object | None = None) -> DummyResult:
        self.executed.append((query, params))
        if "pg_try_advisory_lock" in query:
            return DummyResult({"locked": self.locked})
        if "FROM access_requests" in query and "SELECT username" in query:
            return DummyResult(self.row)
        return DummyResult()
class DummyAdmin:
    def __init__(
        self,
        *,
        ready: bool = True,
        user: dict[str, Any] | None = None,
        full: dict[str, Any] | None = None,
        group_id: str | None = "group-1",
        email_user: dict[str, Any] | None = None,
    ) -> None:
        self._ready = ready
        self.user = user
        self.full = full if full is not None else {"id": "user-1", "attributes": {}, "requiredActions": []}
        self.group_id = group_id
        self.email_user = email_user
        self.created: list[dict[str, Any]] = []
        self.updated: list[tuple[str, dict[str, Any]]] = []
        self.attributes: list[tuple[str, str, str]] = []
        self.passwords: list[tuple[str, str, bool]] = []
        self.groups: list[tuple[str, str]] = []

    def ready(self) -> bool:
        return self._ready

    def find_user(self, username: str) -> dict[str, Any] | None:
        return self.user

    def find_user_by_email(self, email: str) -> dict[str, Any] | None:
        return self.email_user

    def create_user(self, payload: dict[str, Any]) -> str:
        self.created.append(payload)
        self.user = {"id": "user-1"}
        return "user-1"

    def get_user(self, user_id: str) -> dict[str, Any]:
        return self.full

    def update_user_safe(self, user_id: str, payload: dict[str, Any]) -> None:
        self.updated.append((user_id, payload))

    def set_user_attribute(self, username: str, key: str, value: str) -> None:
        self.attributes.append((username, key, value))

    def reset_password(self, user_id: str, password: str, *, temporary: bool) -> None:
        self.passwords.append((user_id, password, temporary))

    def get_group_id(self, group_name: str) -> str | None:
        return self.group_id

    def add_user_to_group(self, user_id: str, group_id: str) -> None:
        self.groups.append((user_id, group_id))
class MailuClient:
    def __init__(self, *, timeout: int) -> None:
        self.timeout = timeout

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False

    def post(self, url: str, json: dict[str, Any] | None = None):
        return SimpleNamespace(status_code=200)


class FailingMailuClient(MailuClient):
    def post(self, url: str, json: dict[str, Any] | None = None):
        return SimpleNamespace(status_code=503)


class ExplodingMailuClient(MailuClient):
    def post(self, url: str, json: dict[str, Any] | None = None):
        raise RuntimeError("mailu offline")
def request_row(**overrides: Any) -> dict[str, Any]:
    row = {
        "username": "alice",
        "contact_email": "alice@example.dev",
        "email_verified_at": datetime.now(timezone.utc),
        "status": "accounts_building",
        "initial_password": None,
        "initial_password_revealed_at": None,
        "provision_attempted_at": None,
    }
    row.update(overrides)
    return row
def install_common_patches(monkeypatch, conn: DummyConn, admin: DummyAdmin, *, all_ok: bool = True) -> None:
    @contextmanager
    def connect():
        yield conn

    monkeypatch.setattr(provisioning, "connect", connect)
    monkeypatch.setattr(provisioning, "admin_client", lambda: admin)
    monkeypatch.setattr(provisioning.settings, "MAILU_DOMAIN", "bstein.dev")
    monkeypatch.setattr(provisioning.settings, "DEFAULT_USER_GROUPS", ["dev"])
    monkeypatch.setattr(provisioning.settings, "MAILU_SYNC_URL", "")
    monkeypatch.setattr(provisioning.settings, "NEXTCLOUD_NAMESPACE", "")
    monkeypatch.setattr(provisioning.settings, "NEXTCLOUD_MAIL_SYNC_CRONJOB", "")
    monkeypatch.setattr(provisioning.settings, "ACCESS_REQUEST_PROVISION_RETRY_COOLDOWN_SEC", 300)
    monkeypatch.setattr(provisioning, "random_password", lambda length=16: f"pw-{length}")
    monkeypatch.setattr(provisioning, "trigger_wger_user_sync", lambda *args, **kwargs: {"status": "ok"})
    monkeypatch.setattr(provisioning, "trigger_firefly_user_sync", lambda *args, **kwargs: {"status": "ok"})
    monkeypatch.setattr(provisioning, "invite_user", lambda email: SimpleNamespace(ok=True, status="invited", detail=""))
    monkeypatch.setattr(provisioning, "all_tasks_ok", lambda conn, code, tasks: all_ok)


def task_statuses(conn: DummyConn) -> dict[str, str]:
    statuses: dict[str, str] = {}
    for query, params in conn.executed:
        if "INSERT INTO access_request_tasks" in query and isinstance(params, tuple) and len(params) >= 3:
            statuses[str(params[1])] = str(params[2])
    return statuses
def test_provision_preflight_lock_and_status_paths(monkeypatch) -> None:
    monkeypatch.setattr(provisioning, "all_tasks_ok", lambda conn, code, task_list: task_list == ["x"])
    assert provisioning.provision_tasks_complete(DummyConn(), "code") is False
    assert provisioning.provision_access_request("").status == "unknown"
    monkeypatch.setattr(provisioning, "admin_client", lambda: DummyAdmin(ready=False))
    assert provisioning.provision_access_request("code").status == "accounts_building"
    conn = DummyConn(request_row(), locked=False)
    install_common_patches(monkeypatch, conn, DummyAdmin())
    assert provisioning.provision_access_request("code").status == "accounts_building"
    conn = DummyConn(None)
    install_common_patches(monkeypatch, conn, DummyAdmin())
    assert provisioning.provision_access_request("code").status == "unknown"
    conn = DummyConn(request_row(status="denied"))
    install_common_patches(monkeypatch, conn, DummyAdmin())
    assert provisioning.provision_access_request("code").status == "denied"
    recent = datetime.now(timezone.utc) - timedelta(seconds=30)
    conn = DummyConn(request_row(provision_attempted_at=recent))
    install_common_patches(monkeypatch, conn, DummyAdmin())
    assert provisioning.provision_access_request("code").status == "accounts_building"
    naive_recent = datetime.now(timezone.utc).replace(tzinfo=None) - timedelta(seconds=30)
    conn = DummyConn(request_row(provision_attempted_at=naive_recent))
    install_common_patches(monkeypatch, conn, DummyAdmin())
    assert provisioning.provision_access_request("code").status == "accounts_building"
def test_provision_happy_path_creates_user_and_downstream_accounts(monkeypatch) -> None:
    conn = DummyConn(request_row(status="approved"))
    admin = DummyAdmin(full={"id": "user-1", "attributes": {}, "requiredActions": ["CONFIGURE_TOTP"]})
    install_common_patches(monkeypatch, conn, admin)
    result = provisioning.provision_access_request("code")
    statuses = task_statuses(conn)
    assert result.ok is True
    assert result.status == "awaiting_onboarding"
    assert admin.created[0]["username"] == "alice"
    assert admin.updated == [("user-1", {"requiredActions": []})]
    assert admin.passwords == [("user-1", "pw-20", False)]
    assert admin.groups == [("user-1", "group-1")]
    assert statuses["keycloak_user"] == "ok"
    assert statuses["keycloak_password"] == "ok"
    assert statuses["keycloak_groups"] == "ok"
    assert statuses["mailu_app_password"] == "ok"
    assert statuses["mailu_sync"] == "ok"
    assert statuses["nextcloud_mail_sync"] == "ok"
    assert statuses["wger_account"] == "ok"
    assert statuses["firefly_account"] == "ok"
    assert statuses["vaultwarden_invite"] == "ok"
    assert any("pg_advisory_unlock" in query for query, _ in conn.executed)
def test_provision_uses_existing_user_attributes_and_enabled_syncs(monkeypatch) -> None:
    attrs = {
        provisioning.MAILU_EMAIL_ATTR: ["custom@example.dev"],
        provisioning.MAILU_ENABLED_ATTR: ["yes"],
        provisioning.MAILU_APP_PASSWORD_ATTR: ["mail-pw"],
        provisioning.WGER_PASSWORD_ATTR: ["wger-pw"],
        provisioning.WGER_PASSWORD_UPDATED_ATTR: ["done"],
        provisioning.FIREFLY_PASSWORD_ATTR: "firefly-pw",
        provisioning.FIREFLY_PASSWORD_UPDATED_ATTR: "done",
        "vaultwarden_email": ["vault@example.dev"],
    }
    conn = DummyConn(request_row(initial_password="existing-pw"))
    admin = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": attrs, "requiredActions": []})
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    monkeypatch.setattr(provisioning.settings, "MAILU_SYNC_URL", "https://mailu-sync.example.dev")
    monkeypatch.setattr(provisioning, "httpx", SimpleNamespace(Client=MailuClient))
    monkeypatch.setattr(provisioning.settings, "NEXTCLOUD_NAMESPACE", "nextcloud")
    monkeypatch.setattr(provisioning.settings, "NEXTCLOUD_MAIL_SYNC_CRONJOB", "sync")
    monkeypatch.setattr(provisioning, "trigger_nextcloud_mail_sync", lambda *args, **kwargs: {"status": "ok"})
    result = provisioning.provision_access_request("code")
    statuses = task_statuses(conn)
    assert result.ok is False
    assert result.status == "accounts_building"
    assert admin.created == []
    assert admin.passwords == [("user-1", "existing-pw", False)]
    assert statuses["mailu_sync"] == "ok"
    assert statuses["nextcloud_mail_sync"] == "ok"
def test_provision_existing_user_attribute_variants(monkeypatch) -> None:
    attrs = {
        provisioning.MAILU_EMAIL_ATTR: "custom@example.dev",
        provisioning.MAILU_ENABLED_ATTR: "yes",
        provisioning.MAILU_APP_PASSWORD_ATTR: "mail-pw",
        provisioning.WGER_PASSWORD_ATTR: "wger-pw",
        provisioning.WGER_PASSWORD_UPDATED_ATTR: "done",
        provisioning.FIREFLY_PASSWORD_ATTR: ["firefly-pw"],
        provisioning.FIREFLY_PASSWORD_UPDATED_ATTR: ["done"],
        "vaultwarden_email": "vault@example.dev",
    }
    conn = DummyConn(request_row(initial_password="existing-pw"))
    admin = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": attrs, "requiredActions": []})
    install_common_patches(monkeypatch, conn, admin)
    provisioning.provision_access_request("code")
    assert ("alice", "vaultwarden_email", "vault@example.dev") in admin.attributes
def test_provision_keycloak_user_error_paths(monkeypatch) -> None:
    conn = DummyConn(request_row(contact_email="alice@example.dev"))
    admin = DummyAdmin(email_user={"username": "other"})
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    provisioning.provision_access_request("code")
    assert task_statuses(conn)["keycloak_user"] == "error"
    conn = DummyConn(request_row(contact_email="alice@example.dev"))
    admin = DummyAdmin(user={"username": "alice"})
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    provisioning.provision_access_request("code")
    assert task_statuses(conn)["keycloak_user"] == "error"
def test_provision_attribute_and_sync_error_paths(monkeypatch) -> None:
    class SelectiveFailAdmin(DummyAdmin):
        def set_user_attribute(self, username: str, key: str, value: str) -> None:
            if key == provisioning.MAILU_ENABLED_ATTR:
                raise RuntimeError("mailu enabled write failed")
            super().set_user_attribute(username, key, value)

    conn = DummyConn(request_row(initial_password="existing-pw"))
    admin = SelectiveFailAdmin(
        user={"id": "user-1"},
        full={"id": "user-1", "attributes": {provisioning.MAILU_ENABLED_ATTR: "no"}, "requiredActions": []},
    )
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    provisioning.provision_access_request("code")
    assert task_statuses(conn)["mailu_app_password"] == "ok"

    class GetUserFailsAdmin(DummyAdmin):
        def get_user(self, user_id: str) -> dict[str, Any]:
            raise RuntimeError("keycloak read failed")

    conn = DummyConn(request_row(initial_password="existing-pw"))
    admin = GetUserFailsAdmin(user={"id": "user-1"})
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    provisioning.provision_access_request("code")
    statuses = task_statuses(conn)
    assert statuses["mailu_app_password"] == "error"
    assert statuses["wger_account"] == "error"
    for client_cls in (FailingMailuClient, ExplodingMailuClient):
        conn = DummyConn(request_row(initial_password="existing-pw"))
        admin = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": {}, "requiredActions": []})
        install_common_patches(monkeypatch, conn, admin, all_ok=False)
        monkeypatch.setattr(provisioning.settings, "MAILU_SYNC_URL", "https://mailu-sync.example.dev")
        monkeypatch.setattr(provisioning, "httpx", SimpleNamespace(Client=client_cls))
        provisioning.provision_access_request("code")
        assert task_statuses(conn)["mailu_sync"] == "error"
def test_provision_nextcloud_and_password_edge_paths(monkeypatch) -> None:
    conn = DummyConn(request_row(initial_password=None, initial_password_revealed_at=datetime.now(timezone.utc)))
    admin = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": {}, "requiredActions": []})
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    provisioning.provision_access_request("code")
    assert task_statuses(conn)["keycloak_password"] == "ok"
    for sync_result in ({"status": "failed"}, RuntimeError("nextcloud failed")):
        conn = DummyConn(request_row(initial_password="existing-pw"))
        admin = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": {}, "requiredActions": []})
        install_common_patches(monkeypatch, conn, admin, all_ok=False)
        monkeypatch.setattr(provisioning.settings, "NEXTCLOUD_NAMESPACE", "nextcloud")
        monkeypatch.setattr(provisioning.settings, "NEXTCLOUD_MAIL_SYNC_CRONJOB", "sync")
        if isinstance(sync_result, Exception):
            monkeypatch.setattr(provisioning, "trigger_nextcloud_mail_sync", lambda *args, **kwargs: (_ for _ in ()).throw(sync_result))
        else:
            monkeypatch.setattr(provisioning, "trigger_nextcloud_mail_sync", lambda *args, **kwargs: sync_result)
        provisioning.provision_access_request("code")
        assert task_statuses(conn)["nextcloud_mail_sync"] == "error"
def test_provision_records_task_errors_without_throwing(monkeypatch) -> None:
    conn = DummyConn(request_row(contact_email=""))
    admin = DummyAdmin(group_id=None)
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    monkeypatch.setattr(provisioning, "trigger_wger_user_sync", lambda *args, **kwargs: {"status": "failed"})
    monkeypatch.setattr(provisioning, "trigger_firefly_user_sync", lambda *args, **kwargs: {"status": "failed"})
    monkeypatch.setattr(
        provisioning,
        "invite_user",
        lambda email: SimpleNamespace(ok=False, status="error", detail="invite failed"),
    )
    result = provisioning.provision_access_request("code")
    statuses = task_statuses(conn)
    assert result.ok is False
    assert statuses["keycloak_user"] == "error"
    assert "keycloak_password" not in statuses
    existing = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": {}, "requiredActions": []}, group_id=None)
    conn = DummyConn(request_row(initial_password=None))
    install_common_patches(monkeypatch, conn, existing, all_ok=False)
    monkeypatch.setattr(provisioning, "trigger_wger_user_sync", lambda *args, **kwargs: {"status": "failed"})
    monkeypatch.setattr(provisioning, "trigger_firefly_user_sync", lambda *args, **kwargs: {"status": "failed"})
    monkeypatch.setattr(
        provisioning,
        "invite_user",
        lambda email: SimpleNamespace(ok=False, status="error", detail="invite failed"),
    )
    provisioning.provision_access_request("code")
    statuses = task_statuses(conn)
    assert statuses["keycloak_groups"] == "error"
    assert statuses["wger_account"] == "error"
    assert statuses["firefly_account"] == "error"
    assert statuses["vaultwarden_invite"] == "error"
    conn = DummyConn(request_row(initial_password="existing-pw"))
    existing = DummyAdmin(
        user={"id": "user-1"},
        full={
            "id": "user-1",
            "attributes": {
                provisioning.WGER_PASSWORD_ATTR: "wger-pw",
                provisioning.FIREFLY_PASSWORD_ATTR: ["firefly-pw"],
            },
            "requiredActions": [],
        },
    )
    install_common_patches(monkeypatch, conn, existing, all_ok=False)
    monkeypatch.setattr(provisioning, "trigger_wger_user_sync", lambda *args, **kwargs: {"status": "failed"})
    monkeypatch.setattr(provisioning, "trigger_firefly_user_sync", lambda *args, **kwargs: {"status": "failed"})
    provisioning.provision_access_request("code")
    statuses = task_statuses(conn)
    assert statuses["wger_account"] == "error"
    assert statuses["firefly_account"] == "error"
def test_provision_falls_back_for_vaultwarden_invite(monkeypatch) -> None:
    conn = DummyConn(request_row(contact_email="fallback@example.dev"))
    admin = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": {}, "requiredActions": []})
    install_common_patches(monkeypatch, conn, admin)
    invited: list[str] = []

    def fake_invite(email: str):
        invited.append(email)
        if email == "alice@bstein.dev":
            return SimpleNamespace(ok=False, status="error", detail="primary failed")
        return SimpleNamespace(ok=True, status="fallback_invited", detail="")

    monkeypatch.setattr(provisioning, "invite_user", fake_invite)
    provisioning.provision_access_request("code")
    assert invited == ["alice@bstein.dev", "fallback@example.dev"]
    assert ("alice", "vaultwarden_email", "fallback@example.dev") in admin.attributes

    class VaultAttrFailAdmin(DummyAdmin):
        def set_user_attribute(self, username: str, key: str, value: str) -> None:
            if key.startswith("vaultwarden_"):
                raise RuntimeError("vault attr failed")
            super().set_user_attribute(username, key, value)

    conn = DummyConn(request_row(initial_password="existing-pw"))
    admin = VaultAttrFailAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": {}, "requiredActions": []})
    install_common_patches(monkeypatch, conn, admin)
    provisioning.provision_access_request("code")
    assert task_statuses(conn)["vaultwarden_invite"] == "ok"
    conn = DummyConn(request_row(initial_password="existing-pw"))
    admin = DummyAdmin(user={"id": "user-1"}, full={"id": "user-1", "attributes": {}, "requiredActions": []})
    install_common_patches(monkeypatch, conn, admin, all_ok=False)
    monkeypatch.setattr(provisioning, "invite_user", lambda email: (_ for _ in ()).throw(RuntimeError("vault down")))
    provisioning.provision_access_request("code")
    assert task_statuses(conn)["vaultwarden_invite"] == "error"


@@ -0,0 +1,60 @@
from __future__ import annotations

import httpx

from atlas_portal import provisioning_tasks as tasks


class DummyResult:
    def __init__(self, rows=None) -> None:
        self.rows = rows or []

    def fetchall(self):
        return self.rows


class DummyConn:
    def __init__(self, rows=None) -> None:
        self.rows = rows or []
        self.executed: list[tuple[str, object | None]] = []

    def execute(self, query: str, params: object | None = None) -> DummyResult:
        self.executed.append((query, params))
        return DummyResult(self.rows)


def test_task_row_helpers_are_idempotent_and_status_based() -> None:
    conn = DummyConn(rows=[{"task": "keycloak_user", "status": "ok"}, {"task": 7, "status": "ignored"}])
    tasks.upsert_task(conn, "code", "keycloak_user", "ok", "created")
    tasks.ensure_task_rows(conn, "code", ["keycloak_user", "mailu_sync"])
    tasks.ensure_task_rows(conn, "code", [])
    assert conn.executed[0][1] == ("code", "keycloak_user", "ok", "created")
    assert conn.executed[1][1] == ("code", ["keycloak_user", "mailu_sync"])
    assert tasks.task_statuses(conn, "code") == {"keycloak_user": "ok"}
    assert tasks.all_tasks_ok(conn, "code", ["keycloak_user"]) is True
    assert tasks.all_tasks_ok(conn, "code", ["keycloak_user", "mailu_sync"]) is False


def test_safe_error_detail_prefers_actionable_messages() -> None:
    assert tasks.safe_error_detail(RuntimeError(" explicit failure "), "fallback") == "explicit failure"
    assert tasks.safe_error_detail(RuntimeError(" "), "fallback") == "fallback"
    assert tasks.safe_error_detail(httpx.TimeoutException("slow"), "fallback") == "timeout"
    assert tasks.safe_error_detail(ValueError("bad"), "fallback") == "fallback"


def test_safe_error_detail_formats_http_status_payloads() -> None:
    request = httpx.Request("GET", "https://example.invalid")
    response = httpx.Response(409, request=request, json={"errorMessage": " duplicate user "})
    exc = httpx.HTTPStatusError("conflict", request=request, response=response)
    assert tasks.safe_error_detail(exc, "fallback") == "http 409: duplicate user"
    text_response = httpx.Response(502, request=request, content=b" upstream offline ")
    text_exc = httpx.HTTPStatusError("bad gateway", request=request, response=text_response)
    assert tasks.safe_error_detail(text_exc, "fallback") == "http 502: upstream offline"
    string_response = httpx.Response(400, request=request, json=" plain text ")
    string_exc = httpx.HTTPStatusError("bad request", request=request, response=string_response)
    assert tasks.safe_error_detail(string_exc, "fallback") == "http 400: plain text"


@@ -0,0 +1,75 @@
"""Tests for generic backend utilities used across routes."""
from __future__ import annotations

from atlas_portal import rate_limit, utils


def test_rate_limit_allows_when_limit_is_non_positive() -> None:
    assert rate_limit.rate_limit_allow("1.2.3.4", key="access", limit=0, window_sec=60)
    assert rate_limit.rate_limit_allow("1.2.3.4", key="access", limit=-1, window_sec=60)


def test_rate_limit_rejects_after_limit(monkeypatch) -> None:
    monkeypatch.setattr(rate_limit.time, "time", lambda: 100.0)
    assert rate_limit.rate_limit_allow("1.2.3.4", key="access", limit=2, window_sec=60)
    assert rate_limit.rate_limit_allow("1.2.3.4", key="access", limit=2, window_sec=60)
    assert not rate_limit.rate_limit_allow("1.2.3.4", key="access", limit=2, window_sec=60)


def test_random_password_has_requested_length() -> None:
    password = utils.random_password(24)
    assert len(password) == 24
    assert password.isalnum()


def test_best_effort_post_ignores_errors(monkeypatch) -> None:
    calls = []

    class DummyClient:
        def __init__(self, timeout):
            calls.append(timeout)

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def post(self, url, json=None):
            raise RuntimeError("boom")

    monkeypatch.setattr(utils.httpx, "Client", DummyClient)
    utils.best_effort_post("https://example.dev/hook")
    assert calls


def test_best_effort_post_success(monkeypatch) -> None:
    posts = []

    class DummyClient:
        def __init__(self, timeout):
            self.timeout = timeout

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            return False

        def post(self, url, json=None):
            posts.append((url, json))
            return None

    monkeypatch.setattr(utils.httpx, "Client", DummyClient)
    utils.best_effort_post("https://example.dev/hook")
    assert posts and posts[0][0] == "https://example.dev/hook"


def test_best_effort_post_ignores_empty_url() -> None:
    utils.best_effort_post("")


@@ -0,0 +1,26 @@
"""Tests for environment-backed settings parsing."""
from __future__ import annotations

import importlib


def test_env_bool_handles_truthy_and_falsey(monkeypatch) -> None:
    import atlas_portal.settings as settings

    monkeypatch.setenv("TEST_FLAG", "YES")
    assert settings._env_bool("TEST_FLAG") is True
    monkeypatch.setenv("TEST_FLAG", "0")
    assert settings._env_bool("TEST_FLAG") is False


def test_settings_reload_picks_up_environment(monkeypatch) -> None:
    monkeypatch.setenv("KEYCLOAK_ENABLED", "true")
    monkeypatch.setenv("PORTAL_ADMIN_USERS", "alice,bob")
    import atlas_portal.settings as settings

    reloaded = importlib.reload(settings)
    assert reloaded.KEYCLOAK_ENABLED is True
    assert reloaded.PORTAL_ADMIN_USERS == ["alice", "bob"]


@@ -0,0 +1,178 @@
"""Tests for per-user Kubernetes sync Job adapters."""
from __future__ import annotations

import pytest

from atlas_portal import firefly_user_sync, nextcloud_mail_sync, wger_user_sync


def _cronjob_template() -> dict:
    """Build a CronJob payload shaped like the templates used in the cluster."""
    return {
        "spec": {
            "jobTemplate": {
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [
                                {
                                    "name": "worker",
                                    "env": [
                                        {"name": "ONLY_USERNAME", "value": "old"},
                                        {"name": "FIREFLY_USER_EMAIL", "value": "old"},
                                        {"name": "WGER_USERNAME", "value": "old"},
                                    ],
                                }
                            ]
                        }
                    }
                }
            }
        }
    }
@pytest.mark.parametrize(
    ("module", "namespace_attr", "cronjob_attr", "timeout_attr", "args", "expected_env"),
    [
        (
            nextcloud_mail_sync,
            "NEXTCLOUD_NAMESPACE",
            "NEXTCLOUD_MAIL_SYNC_CRONJOB",
            "NEXTCLOUD_MAIL_SYNC_WAIT_TIMEOUT_SEC",
            ("alice",),
            {"ONLY_USERNAME": "alice"},
        ),
        (
            firefly_user_sync,
            "FIREFLY_NAMESPACE",
            "FIREFLY_USER_SYNC_CRONJOB",
            "FIREFLY_USER_SYNC_WAIT_TIMEOUT_SEC",
            ("alice", "alice@example.dev", "pw"),
            {"FIREFLY_USER_EMAIL": "alice@example.dev", "FIREFLY_USER_PASSWORD": "pw"},
        ),
        (
            wger_user_sync,
            "WGER_NAMESPACE",
            "WGER_USER_SYNC_CRONJOB",
            "WGER_USER_SYNC_WAIT_TIMEOUT_SEC",
            ("alice", "alice@example.dev", "pw"),
            {"WGER_USERNAME": "alice", "WGER_EMAIL": "alice@example.dev", "WGER_PASSWORD": "pw"},
        ),
    ],
)
def test_user_sync_modules_render_jobs_and_trigger(monkeypatch, module, namespace_attr, cronjob_attr, timeout_attr, args, expected_env) -> None:
    monkeypatch.setattr(module.settings, namespace_attr, "apps")
    monkeypatch.setattr(module.settings, cronjob_attr, "sync-cron")
    monkeypatch.setattr(module.settings, timeout_attr, 0)
    monkeypatch.setattr(module.time, "time", lambda: 1000)
    posted: list[dict] = []

    def fake_get_json(path: str) -> dict:
        if "cronjobs" in path:
            return _cronjob_template()
        return {"status": {"conditions": [{"type": "Complete", "status": "True"}]}}

    def fake_post_json(path: str, payload: dict) -> dict:
        posted.append(payload)
        return {"metadata": {"name": payload["metadata"]["name"]}}

    monkeypatch.setattr(module, "get_json", fake_get_json)
    monkeypatch.setattr(module, "post_json", fake_post_json)
    result = module.trigger(*args, wait=True)
    assert result["status"] in {"ok", "running"}
    env = posted[0]["spec"]["template"]["spec"]["containers"][0]["env"]
    env_map = {item["name"]: item["value"] for item in env}
    for key, value in expected_env.items():
        assert env_map[key] == value
    assert module._job_succeeded({"status": {"succeeded": 1}})
    assert module._job_failed({"status": {"failed": 1}})
@pytest.mark.parametrize(
("module", "namespace_attr", "cronjob_attr", "timeout_attr", "args"),
[
(
nextcloud_mail_sync,
"NEXTCLOUD_NAMESPACE",
"NEXTCLOUD_MAIL_SYNC_CRONJOB",
"NEXTCLOUD_MAIL_SYNC_WAIT_TIMEOUT_SEC",
("alice",),
),
(
firefly_user_sync,
"FIREFLY_NAMESPACE",
"FIREFLY_USER_SYNC_CRONJOB",
"FIREFLY_USER_SYNC_WAIT_TIMEOUT_SEC",
("alice", "alice@example.dev", "pw"),
),
(
wger_user_sync,
"WGER_NAMESPACE",
"WGER_USER_SYNC_CRONJOB",
"WGER_USER_SYNC_WAIT_TIMEOUT_SEC",
("alice", "alice@example.dev", "pw"),
),
],
)
def test_user_sync_modules_cover_edge_paths(monkeypatch, module, namespace_attr, cronjob_attr, timeout_attr, args) -> None:
assert module._safe_name_fragment("!!!") == "user"
assert module._job_succeeded({"status": {"conditions": [None, {"type": "Complete", "status": "True"}]}})
assert not module._job_succeeded({"status": {"conditions": [{"type": "Complete", "status": "False"}]}})
assert module._job_failed({"status": {"conditions": [None, {"type": "Failed", "status": "True"}]}})
assert not module._job_failed({"status": {"conditions": [{"type": "Failed", "status": "False"}]}})
cronjob = _cronjob_template()
container = cronjob["spec"]["jobTemplate"]["spec"]["template"]["spec"]["containers"][0]
container["env"] = "not-a-list"
job = module._job_from_cronjob(cronjob, *args)
assert job["spec"]["template"]["spec"]["containers"][0]["env"]
monkeypatch.setattr(module.settings, namespace_attr, "apps")
monkeypatch.setattr(module.settings, cronjob_attr, "sync-cron")
monkeypatch.setattr(module.settings, timeout_attr, 5)
monkeypatch.setattr(module.time, "sleep", lambda *_: None)
with pytest.raises(RuntimeError, match="missing username"):
module.trigger("", *args[1:])
if module in {firefly_user_sync, wger_user_sync}:
with pytest.raises(RuntimeError, match="missing password"):
module.trigger(args[0], args[1], "")
monkeypatch.setattr(module.settings, namespace_attr, "")
with pytest.raises(RuntimeError, match="not configured"):
module.trigger(*args)
monkeypatch.setattr(module.settings, namespace_attr, "apps")
def cron_then_complete(path: str) -> dict:
if "cronjobs" in path:
return _cronjob_template()
return {"status": {"conditions": [{"type": "Complete", "status": "True"}]}}
monkeypatch.setattr(module, "get_json", cron_then_complete)
monkeypatch.setattr(module, "post_json", lambda path, payload: {})
assert module.trigger(*args, wait=False)["status"] == "queued"
monkeypatch.setattr(module, "post_json", lambda path, payload: {"metadata": {"name": ""}})
with pytest.raises(RuntimeError, match="job name missing"):
module.trigger(*args, wait=True)
monkeypatch.setattr(module, "post_json", lambda path, payload: {"metadata": {"name": payload["metadata"]["name"]}})
clock = iter([0, 1, 2])
monkeypatch.setattr(module.time, "time", lambda: next(clock))
assert module.trigger(*args, wait=True)["status"] == "ok"
def cron_then_failed(path: str) -> dict:
if "cronjobs" in path:
return _cronjob_template()
return {"status": {"conditions": [{"type": "Failed", "status": "True"}]}}
clock = iter([0, 1, 2])
monkeypatch.setattr(module.time, "time", lambda: next(clock))
monkeypatch.setattr(module, "get_json", cron_then_failed)
assert module.trigger(*args, wait=True)["status"] == "error"

View File

@ -0,0 +1,260 @@
"""Tests for Vaultwarden invite and Kubernetes discovery helpers."""
from __future__ import annotations

import base64
from pathlib import Path

import pytest

from atlas_portal import vaultwarden


class DummyResponse:
    """HTTP response double used by Vaultwarden tests."""

    def __init__(self, payload=None, *, status_code: int = 200, text: str = "") -> None:
        self._payload = payload if payload is not None else {}
        self.status_code = status_code
        self.text = text

    def json(self):
        """Return configured JSON payload."""
        return self._payload

    def raise_for_status(self) -> None:
        """Raise for HTTP failure statuses."""
        if self.status_code >= 400:
            raise RuntimeError("bad status")


class DummyClient:
    """httpx.Client replacement with queued responses."""

    responses: list[DummyResponse] = []
    calls: list[tuple[str, str, dict]] = []
    closed = 0

    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False

    @classmethod
    def reset(cls, *responses: DummyResponse) -> None:
        """Replace queued responses and clear captured calls."""
        cls.responses = list(responses)
        cls.calls = []
        cls.closed = 0

    def close(self) -> None:
        """Count close calls so tests can assert cache eviction."""
        type(self).closed += 1

    def get(self, url):
        self.calls.append(("GET", url, {}))
        return self.responses.pop(0)

    def post(self, url, **kwargs):
        self.calls.append(("POST", url, kwargs))
        return self.responses.pop(0)


def _reset_admin_state(monkeypatch) -> None:
    """Clear vaultwarden's module-level admin session cache between tests."""
    monkeypatch.setattr(vaultwarden, "_ADMIN_SESSION", None)
    monkeypatch.setattr(vaultwarden, "_ADMIN_SESSION_EXPIRES_AT", 0.0)
    monkeypatch.setattr(vaultwarden, "_ADMIN_SESSION_BASE_URL", "")
    monkeypatch.setattr(vaultwarden, "_ADMIN_RATE_LIMITED_UNTIL", 0.0)


def test_service_account_and_k8s_json_paths(monkeypatch, tmp_path: Path) -> None:
    sa_path = tmp_path / "sa"
    sa_path.mkdir()
    monkeypatch.setattr(vaultwarden, "_SA_PATH", sa_path)
    with pytest.raises(RuntimeError, match="token missing"):
        vaultwarden._read_service_account()
    (sa_path / "token").write_text(" ")
    (sa_path / "ca.crt").write_text("ca")
    with pytest.raises(RuntimeError, match="token empty"):
        vaultwarden._read_service_account()
    (sa_path / "token").write_text("token")
    assert vaultwarden._read_service_account() == ("token", str(sa_path / "ca.crt"))
    monkeypatch.setattr(vaultwarden, "_read_service_account", lambda: ("token", "/ca.crt"))
    monkeypatch.setattr(vaultwarden.httpx, "Client", DummyClient)
    DummyClient.reset(DummyResponse({"items": []}), DummyResponse([]))
    assert vaultwarden._k8s_get_json("/api") == {"items": []}
    with pytest.raises(RuntimeError, match="unexpected kubernetes response"):
        vaultwarden._k8s_get_json("/api")


def test_k8s_pod_and_secret_helpers(monkeypatch) -> None:
    encoded = base64.b64encode(b"admin-token").decode("ascii")
    pods = {
        "items": [
            {"status": {"phase": "Pending", "podIP": "10.0.0.1"}},
            {
                "status": {
                    "phase": "Running",
                    "podIP": "10.0.0.2",
                    "conditions": [{"type": "Ready", "status": "True"}],
                }
            },
        ]
    }
    monkeypatch.setattr(vaultwarden, "_k8s_get_json", lambda path: pods if "pods" in path else {"data": {"token": encoded}})
    assert vaultwarden._k8s_find_pod_ip("apps", "app=vaultwarden") == "10.0.0.2"
    assert vaultwarden._k8s_get_secret_value("apps", "secret", "token") == "admin-token"
    no_condition_pod = {"items": [{"status": {"phase": "Running", "podIP": "10.0.0.3", "conditions": [None]}}]}
    monkeypatch.setattr(vaultwarden, "_k8s_get_json", lambda path: no_condition_pod)
    assert vaultwarden._k8s_find_pod_ip("apps", "app=vaultwarden") == "10.0.0.3"
    monkeypatch.setattr(vaultwarden, "_k8s_get_json", lambda path: {"items": []})
    with pytest.raises(RuntimeError, match="no vaultwarden pods"):
        vaultwarden._k8s_find_pod_ip("apps", "app=vaultwarden")
    monkeypatch.setattr(vaultwarden, "_k8s_get_json", lambda path: {"items": [{"status": {"phase": "Running"}}]})
    with pytest.raises(RuntimeError, match="no IP"):
        vaultwarden._k8s_find_pod_ip("apps", "app=vaultwarden")
    monkeypatch.setattr(vaultwarden, "_k8s_get_json", lambda path: {"data": {}})
    with pytest.raises(RuntimeError, match="secret key missing"):
        vaultwarden._k8s_get_secret_value("apps", "secret", "token")
    monkeypatch.setattr(vaultwarden, "_k8s_get_json", lambda path: {"data": {"token": "bad-base64"}})
    with pytest.raises(RuntimeError, match="failed to decode"):
        vaultwarden._k8s_get_secret_value("apps", "secret", "token")
    empty = base64.b64encode(b" ").decode("ascii")
    monkeypatch.setattr(vaultwarden, "_k8s_get_json", lambda path: {"data": {"token": empty}})
    with pytest.raises(RuntimeError, match="value empty"):
        vaultwarden._k8s_get_secret_value("apps", "secret", "token")
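The secret-value error paths tested above follow a decode-and-validate shape. A minimal sketch of that pattern — the function name and error messages below are assumptions, not the actual `_k8s_get_secret_value` implementation:

```python
import base64

def secret_value(secret: dict, key: str) -> str:
    """Decode one key of a Kubernetes Secret payload, validating each step."""
    data = secret.get("data") or {}
    if key not in data:
        raise RuntimeError("secret key missing")
    try:
        # validate=True rejects characters outside the base64 alphabet
        # instead of silently discarding them.
        value = base64.b64decode(data[key], validate=True).decode("utf-8")
    except Exception as exc:
        raise RuntimeError("failed to decode secret value") from exc
    if not value.strip():
        raise RuntimeError("secret value empty")
    return value.strip()

encoded = base64.b64encode(b"admin-token").decode("ascii")
assert secret_value({"data": {"token": encoded}}, "token") == "admin-token"
```

Kubernetes stores Secret `data` values base64-encoded, which is why each failure mode (missing key, undecodable value, blank value) needs its own guard.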
def test_admin_session_cache_and_rate_limit(monkeypatch) -> None:
    _reset_admin_state(monkeypatch)
    monkeypatch.setattr(vaultwarden, "_k8s_get_secret_value", lambda *args: "admin-token")
    monkeypatch.setattr(vaultwarden.httpx, "Client", DummyClient)
    monkeypatch.setattr(vaultwarden.time, "time", lambda: 100.0)
    DummyClient.reset(DummyResponse({}, status_code=200))
    session = vaultwarden._admin_session("http://vaultwarden")
    assert vaultwarden._admin_session("http://vaultwarden") is session
    DummyClient.reset(DummyResponse({}, status_code=200))
    vaultwarden._admin_session("http://other")
    assert DummyClient.closed == 1

    class BadCloseSession:
        def close(self) -> None:
            raise RuntimeError("ignore")

    monkeypatch.setattr(vaultwarden, "_ADMIN_SESSION", BadCloseSession())
    monkeypatch.setattr(vaultwarden, "_ADMIN_SESSION_EXPIRES_AT", 999.0)
    monkeypatch.setattr(vaultwarden, "_ADMIN_SESSION_BASE_URL", "http://old")
    DummyClient.reset(DummyResponse({}, status_code=200))
    vaultwarden._admin_session("http://new")
    _reset_admin_state(monkeypatch)
    DummyClient.reset(DummyResponse({}, status_code=429))
    with pytest.raises(RuntimeError, match="rate limited"):
        vaultwarden._admin_session("http://vaultwarden")
    with pytest.raises(RuntimeError, match="rate limited"):
        vaultwarden._admin_session("http://vaultwarden")


def test_invite_user_success_and_idempotent_paths(monkeypatch) -> None:
    _reset_admin_state(monkeypatch)
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_SERVICE_HOST", "vaultwarden.apps.svc:80")
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_NAMESPACE", "apps")
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_POD_LABEL", "app=vaultwarden")
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_POD_PORT", 8080)
    monkeypatch.setattr(vaultwarden, "_k8s_find_pod_ip", lambda *args: "10.0.0.2")

    class InviteSession:
        def __init__(self, response):
            self.response = response

        def post(self, path, json=None):
            return self.response

    monkeypatch.setattr(vaultwarden, "_admin_session", lambda base_url: InviteSession(DummyResponse(status_code=200)))
    assert vaultwarden.invite_user("alice@example.dev").status == "invited"
    monkeypatch.setattr(
        vaultwarden,
        "_admin_session",
        lambda base_url: InviteSession(DummyResponse(status_code=409, text="user already exists")),
    )
    result = vaultwarden.invite_user("alice@example.dev")
    assert result.ok and result.status == "already_present"
    monkeypatch.setattr(vaultwarden, "_ADMIN_RATE_LIMITED_UNTIL", 9999999999.0)
    assert vaultwarden.invite_user("alice@example.dev").status == "rate_limited"
    assert vaultwarden.invite_user("not-email").status == "invalid_email"
    monkeypatch.setattr(vaultwarden, "_ADMIN_RATE_LIMITED_UNTIL", 0.0)
    monkeypatch.setattr(vaultwarden, "_admin_session", lambda base_url: InviteSession(DummyResponse(status_code=429)))
    assert vaultwarden.invite_user("alice@example.dev").status == "rate_limited"


def test_invite_user_fallback_and_error_paths(monkeypatch) -> None:
    _reset_admin_state(monkeypatch)
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_SERVICE_HOST", "vaultwarden.apps.svc:80")
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_NAMESPACE", "apps")
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_POD_LABEL", "app=vaultwarden")
    monkeypatch.setattr(vaultwarden.settings, "VAULTWARDEN_POD_PORT", 8080)
    monkeypatch.setattr(vaultwarden, "_k8s_find_pod_ip", lambda *args: "10.0.0.2")

    class SequenceSession:
        def __init__(self):
            self.calls = 0

        def post(self, path, json=None):
            self.calls += 1
            if self.calls == 1:
                raise RuntimeError("service offline")
            return DummyResponse(status_code=201)

    session = SequenceSession()
    monkeypatch.setattr(vaultwarden, "_admin_session", lambda base_url: session)
    assert vaultwarden.invite_user("alice@example.dev").status == "invited"

    class BadTextResponse:
        status_code = 500

        @property
        def text(self):
            raise RuntimeError("no body")

    class BadTextSession:
        def post(self, path, json=None):
            return BadTextResponse()

    monkeypatch.setattr(vaultwarden, "_admin_session", lambda base_url: BadTextSession())
    result = vaultwarden.invite_user("alice@example.dev")
    assert result.status == "error"
    assert "status 500" in result.detail
    monkeypatch.setattr(vaultwarden, "_k8s_find_pod_ip", lambda *args: (_ for _ in ()).throw(RuntimeError("no pod")))
    monkeypatch.setattr(vaultwarden, "_admin_session", lambda base_url: (_ for _ in ()).throw(RuntimeError("rate limited")))
    assert vaultwarden.invite_user("alice@example.dev").status == "rate_limited"
    monkeypatch.setattr(vaultwarden, "_admin_session", lambda base_url: (_ for _ in ()).throw(RuntimeError("offline")))
    result = vaultwarden.invite_user("alice@example.dev")
    assert not result.ok
    assert result.status == "error"
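The `(_ for _ in ()).throw(...)` expressions above abuse generator `.throw()` to get an expression that raises as soon as the patched lambda is called. A named helper expresses the same intent more readably (a sketch, not part of the codebase):

```python
def throwing(exc: BaseException):
    """Return a callable that raises exc when invoked with any arguments."""
    def _raise(*args, **kwargs):
        raise exc
    return _raise

# Equivalent to: lambda base_url: (_ for _ in ()).throw(RuntimeError("offline"))
fail = throwing(RuntimeError("offline"))
try:
    fail("http://vaultwarden")
except RuntimeError as err:
    assert str(err) == "offline"
```

The generator one-liner exists only because `raise` is a statement and cannot appear inside a lambda; a module-level helper like this avoids the trick entirely.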

frontend/babel.config.cjs Normal file
View File

@ -0,0 +1,13 @@
const path = require("node:path");

module.exports = {
  presets: [
    [
      "@babel/preset-env",
      {
        targets: { node: "current" },
      },
    ],
  ],
  plugins: [path.resolve(__dirname, "../testing/frontend/babel-plugin-import-meta-env.cjs")],
};

View File

@ -1,5 +1,5 @@
server {
listen 80;
listen 8080;
server_name _;
root /usr/share/nginx/html;

File diff suppressed because it is too large

View File

@ -7,10 +7,15 @@
"dev": "vite",
"prebuild": "node scripts/build_media_manifest.mjs",
"build": "vite build",
"preview": "vite preview"
"preview": "vite preview",
"test:unit": "JEST_JUNIT_OUTPUT_FILE=../build/junit-frontend-unit.xml jest --ci --runInBand --config ../testing/frontend/jest.config.cjs --coverage --coverageReporters=text --coverageReporters=lcov --coverageReporters=json-summary --coverageDirectory=coverage --reporters=default --reporters=jest-junit",
"test:component": "playwright test --config ../testing/frontend/playwright-ct.config.mjs",
"test:e2e": "playwright test --config ../testing/frontend/playwright.config.mjs",
"test": "npm run test:unit && npm run test:component && npm run test:e2e",
"lint": "cd .. && eslint --config testing/frontend/eslint.config.js $(find frontend/src testing/frontend -type f \\( -name '*.js' -o -name '*.mjs' \\) | sort)"
},
"dependencies": {
"axios": "^1.6.7",
"axios": "^1.15.2",
"keycloak-js": "^26.2.2",
"mermaid": "^10.9.1",
"qrcode": "^1.5.4",
@ -18,7 +23,28 @@
"vue-router": "^4.3.2"
},
"devDependencies": {
"@babel/core": "^7.26.0",
"@babel/preset-env": "^7.26.0",
"@eslint/js": "^9.22.0",
"@playwright/experimental-ct-vue": "^1.51.0",
"@playwright/test": "^1.51.0",
"@vitejs/plugin-vue": "^5.0.4",
"@vue/vue3-jest": "^29.2.6",
"@vue/test-utils": "^2.4.6",
"babel-jest": "^29.7.0",
"eslint": "^9.22.0",
"globals": "^16.0.0",
"jest": "^29.7.0",
"jest-environment-jsdom": "^29.7.0",
"jest-junit": "^16.0.0",
"jsdom": "^26.0.0",
"vite": "^5.2.0"
},
"overrides": {
"diff": "^5.2.2",
"dompurify": "^3.3.4",
"follow-redirects": "^1.15.12",
"lodash-es": "^4.18.1",
"rollup": "^4.59.0"
}
}

View File

@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Playwright CT</title>
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="./index.ts"></script>
  </body>
</html>

View File

@ -0,0 +1 @@
export {};

View File

@ -1,6 +1,7 @@
import { promises as fs } from "fs";
import path from "path";
const SOURCE = path.resolve("..", "media", "onboarding");
const ROOT = path.resolve("public", "media", "onboarding");
const MANIFEST = path.join(ROOT, "manifest.json");
const EXTENSIONS = new Set([".png", ".jpg", ".jpeg", ".webp"]);
@ -24,10 +25,30 @@ async function ensureDir(dir) {
await fs.mkdir(dir, { recursive: true });
}
async function exists(dir) {
try {
await fs.access(dir);
return true;
} catch {
return false;
}
}
async function main() {
try {
const sourceExists = await exists(SOURCE);
const rootExists = await exists(ROOT);
const source = sourceExists ? SOURCE : rootExists ? ROOT : null;
await ensureDir(ROOT);
const files = await walk(ROOT).catch(() => []);
const files = source ? await walk(source) : [];
if (source && source !== ROOT) {
for (const file of files) {
const src = path.join(source, file);
const dest = path.join(ROOT, file);
await ensureDir(path.dirname(dest));
await fs.copyFile(src, dest);
}
}
const payload = {
generated_at: new Date().toISOString(),
files: files.sort(),

View File

@ -0,0 +1,500 @@
import { computed, onMounted, reactive, ref, watch } from "vue";
import { auth, authFetch, login } from "@/auth";
/**
* Build the Account page dashboard and admin action state.
*
* WHY: this page coordinates several downstream services plus admin
* approval flows, so isolating the orchestration keeps the SFC readable and
* gives tests a direct seam for service state behavior.
*
* @returns {object} reactive service cards, admin state, and event handlers.
*/
export function useAccountDashboard() {
const mailu = reactive({
status: "loading",
imap: "mail.bstein.dev:993 (TLS)",
smtp: "mail.bstein.dev:587 (STARTTLS)",
username: "",
currentPassword: "",
revealPassword: false,
rotating: false,
newPassword: "",
error: "",
});
const jellyfin = reactive({
status: "loading",
username: "",
syncStatus: "",
syncDetail: "",
error: "",
});
const vaultwarden = reactive({
status: "loading",
username: "",
syncedAt: "",
error: "",
});
const nextcloudMail = reactive({
status: "loading",
primaryEmail: "",
accountCount: "",
syncedAt: "",
syncing: false,
error: "",
});
const wger = reactive({
status: "loading",
username: "",
password: "",
passwordUpdatedAt: "",
revealPassword: false,
resetting: false,
error: "",
});
const firefly = reactive({
status: "loading",
username: "",
password: "",
passwordUpdatedAt: "",
revealPassword: false,
resetting: false,
error: "",
});
const admin = reactive({
enabled: false,
loading: false,
requests: [],
error: "",
acting: {},
flags: [],
flagsLoading: false,
notes: {},
selectedFlags: {},
});
const onboardingUrl = ref("/onboarding");
const vaultwardenReady = computed(() =>
["ready", "already_present", "active", "grandfathered"].includes(vaultwarden.status),
);
const vaultwardenDisplayStatus = computed(() => (vaultwardenReady.value ? "ready" : vaultwarden.status));
const vaultwardenOrder = computed(() => (vaultwardenReady.value ? 3 : 0));
const doLogin = () => login("/account");
const copied = reactive({});
const normalizeEmail = (value) => (typeof value === "string" ? value.toLowerCase() : "");
onMounted(() => {
if (auth.ready && auth.authenticated) {
refreshOverview();
refreshAdminRequests();
refreshAdminFlags();
} else {
mailu.status = "login required";
nextcloudMail.status = "login required";
jellyfin.status = "login required";
vaultwarden.status = "login required";
wger.status = "login required";
firefly.status = "login required";
}
});
watch(
() => [auth.ready, auth.authenticated],
([ready, authenticated]) => {
if (!ready) return;
if (!authenticated) {
mailu.status = "login required";
nextcloudMail.status = "login required";
jellyfin.status = "login required";
vaultwarden.status = "login required";
wger.status = "login required";
firefly.status = "login required";
onboardingUrl.value = "/onboarding";
admin.enabled = false;
admin.requests = [];
admin.flags = [];
return;
}
refreshOverview();
refreshAdminRequests();
refreshAdminFlags();
},
{ immediate: false },
);
async function refreshOverview() {
mailu.error = "";
jellyfin.error = "";
vaultwarden.error = "";
nextcloudMail.error = "";
wger.error = "";
firefly.error = "";
try {
const resp = await authFetch("/api/account/overview", {
headers: { Accept: "application/json" },
cache: "no-store",
});
if (!resp.ok) {
const data = await resp.json().catch(() => ({}));
throw new Error(data?.error || `status ${resp.status}`);
}
const data = await resp.json();
mailu.status = data.mailu?.status || "ready";
mailu.username = normalizeEmail(data.mailu?.username) || normalizeEmail(auth.email) || auth.username;
mailu.currentPassword = data.mailu?.app_password || "";
nextcloudMail.status = data.nextcloud_mail?.status || "unknown";
nextcloudMail.primaryEmail = normalizeEmail(data.nextcloud_mail?.primary_email) || "";
nextcloudMail.accountCount = data.nextcloud_mail?.account_count || "";
nextcloudMail.syncedAt = data.nextcloud_mail?.synced_at || "";
wger.status = data.wger?.status || "unknown";
wger.username = normalizeEmail(data.wger?.username) || mailu.username || auth.username;
wger.password = data.wger?.password || "";
wger.passwordUpdatedAt = data.wger?.password_updated_at || "";
firefly.status = data.firefly?.status || "unknown";
firefly.username = normalizeEmail(data.firefly?.username) || mailu.username || auth.username;
firefly.password = data.firefly?.password || "";
firefly.passwordUpdatedAt = data.firefly?.password_updated_at || "";
vaultwarden.status = data.vaultwarden?.status || "unknown";
vaultwarden.username = normalizeEmail(data.vaultwarden?.username) || mailu.username || auth.username;
vaultwarden.syncedAt = data.vaultwarden?.synced_at || "";
jellyfin.status = data.jellyfin?.status || "ready";
jellyfin.username = data.jellyfin?.username || auth.username;
jellyfin.syncStatus = data.jellyfin?.sync_status || "";
jellyfin.syncDetail = data.jellyfin?.sync_detail || "";
onboardingUrl.value = data.onboarding_url || "/onboarding";
} catch (err) {
mailu.status = "unavailable";
nextcloudMail.status = "unavailable";
wger.status = "unavailable";
firefly.status = "unavailable";
vaultwarden.status = "unavailable";
jellyfin.status = "unavailable";
jellyfin.syncStatus = "";
jellyfin.syncDetail = "";
onboardingUrl.value = "/onboarding";
const message = err?.message ? `Failed to load account status (${err.message})` : "Failed to load account status.";
mailu.error = message;
nextcloudMail.error = message;
wger.error = message;
firefly.error = message;
vaultwarden.error = message;
jellyfin.error = message;
}
}
async function refreshAdminRequests() {
if (!auth.authenticated) {
admin.enabled = false;
admin.requests = [];
return;
}
admin.error = "";
admin.loading = true;
try {
const resp = await authFetch("/api/admin/access/requests", {
headers: { Accept: "application/json" },
cache: "no-store",
});
if (resp.status === 403) {
admin.enabled = false;
admin.requests = [];
return;
}
if (!resp.ok) throw new Error(`status ${resp.status}`);
const data = await resp.json();
admin.enabled = true;
admin.requests = Array.isArray(data.requests) ? data.requests : [];
for (const req of admin.requests) {
if (!req?.username) continue;
if (!(req.username in admin.notes)) admin.notes[req.username] = "";
if (!(req.username in admin.selectedFlags)) admin.selectedFlags[req.username] = [];
}
} catch (err) {
admin.enabled = false;
admin.requests = [];
admin.error = err.message || "Failed to load access requests.";
} finally {
admin.loading = false;
}
}
async function refreshAdminFlags() {
if (!auth.authenticated) {
admin.flags = [];
admin.flagsLoading = false;
return;
}
admin.flagsLoading = true;
try {
const resp = await authFetch("/api/admin/access/flags", {
headers: { Accept: "application/json" },
cache: "no-store",
});
if (resp.status === 403) {
admin.flags = [];
return;
}
if (!resp.ok) throw new Error(`status ${resp.status}`);
const data = await resp.json();
admin.flags = Array.isArray(data.flags) ? data.flags : [];
} catch (err) {
admin.flags = [];
admin.error = admin.error || err.message || "Failed to load access flags.";
} finally {
admin.flagsLoading = false;
}
}
function hasFlag(username, flag) {
const selected = admin.selectedFlags[username];
return Array.isArray(selected) && selected.includes(flag);
}
/**
* Format an applicant's display name from optional profile fields.
*
* WHY: admins need readable applicant names even when optional fields are missing.
*
* @returns {string} The display name, or "unknown" when no name parts are present.
*/
function formatName(req) {
if (!req) return "unknown";
const parts = [];
if (req.first_name && String(req.first_name).trim()) {
parts.push(String(req.first_name).trim());
}
if (req.last_name && String(req.last_name).trim()) {
parts.push(String(req.last_name).trim());
}
return parts.length ? parts.join(" ") : "unknown";
}
function formatActionError(err, fallback) {
const message = err?.message || "";
if (!message) return fallback;
const normalized = message.toLowerCase();
if (normalized.includes("ariadne unavailable") || normalized.includes("status 502") || normalized.includes("status 503")) {
return "Ariadne is busy. Please try again in a moment.";
}
return message;
}
function toggleFlag(username, flag, event) {
const checked = Boolean(event?.target?.checked);
const selected = Array.isArray(admin.selectedFlags[username]) ? [...admin.selectedFlags[username]] : [];
const next = checked ? Array.from(new Set([...selected, flag])) : selected.filter((item) => item !== flag);
admin.selectedFlags[username] = next;
}
async function rotateMailu() {
mailu.error = "";
mailu.newPassword = "";
mailu.rotating = true;
try {
const resp = await authFetch("/api/account/mailu/rotate", { method: "POST" });
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
mailu.newPassword = data.password || "";
if (mailu.newPassword) {
mailu.currentPassword = mailu.newPassword;
mailu.revealPassword = true;
}
const syncEnabled = Boolean(data.sync_enabled);
const syncOk = Boolean(data.sync_ok);
const syncError = data.sync_error || "";
let syncWarning = "";
if (!syncEnabled) {
mailu.status = "updated";
syncWarning = "Mail sync is not configured; password may not take effect until an admin sync runs.";
} else if (!syncOk) {
mailu.status = "sync pending";
syncWarning = syncError || "Mail sync did not confirm success yet. Try again in a moment.";
} else {
mailu.status = "updated";
}
await refreshOverview();
if (syncWarning) mailu.error = syncWarning;
} catch (err) {
mailu.error = formatActionError(err, "Rotation failed");
} finally {
mailu.rotating = false;
}
}
async function resetWger() {
wger.error = "";
wger.resetting = true;
try {
const resp = await authFetch("/api/account/wger/reset", { method: "POST" });
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
if (data.password) {
wger.password = data.password;
wger.revealPassword = true;
}
await refreshOverview();
} catch (err) {
wger.error = formatActionError(err, "Reset failed");
} finally {
wger.resetting = false;
}
}
async function resetFirefly() {
firefly.error = "";
firefly.resetting = true;
try {
const resp = await authFetch("/api/account/firefly/reset", { method: "POST" });
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
if (data.password) {
firefly.password = data.password;
firefly.revealPassword = true;
}
await refreshOverview();
} catch (err) {
firefly.error = formatActionError(err, "Reset failed");
} finally {
firefly.resetting = false;
}
}
async function syncNextcloudMail() {
nextcloudMail.error = "";
nextcloudMail.syncing = true;
try {
const resp = await authFetch("/api/account/nextcloud/mail/sync", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ wait: true }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
await refreshOverview();
} catch (err) {
const message = formatActionError(err, "Sync failed");
if (message.toLowerCase().includes("ariadne is busy")) {
nextcloudMail.error = "Ariadne is busy. Refresh in a moment; the sync may have completed.";
} else {
nextcloudMail.error = message;
}
} finally {
nextcloudMail.syncing = false;
}
}
/**
* Copy text to the clipboard via a hidden textarea.
*
* WHY: Safari/private-mode clipboard support varies, so this execCommand
* fallback is used when the async Clipboard API is unavailable.
*/
function fallbackCopy(text) {
const textarea = document.createElement("textarea");
textarea.value = text;
textarea.setAttribute("readonly", "");
textarea.style.position = "fixed";
textarea.style.top = "-9999px";
textarea.style.left = "-9999px";
document.body.appendChild(textarea);
textarea.select();
textarea.setSelectionRange(0, textarea.value.length);
document.execCommand("copy");
document.body.removeChild(textarea);
}
async function approve(username) {
admin.error = "";
admin.acting[username] = true;
try {
const flags = Array.isArray(admin.selectedFlags[username]) ? admin.selectedFlags[username] : [];
const note = (admin.notes[username] || "").trim();
const payload = {};
if (flags.length) payload.flags = flags;
if (note) payload.note = note;
const resp = await authFetch(`/api/admin/access/requests/${encodeURIComponent(username)}/approve`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
if (!resp.ok) {
const data = await resp.json().catch(() => ({}));
throw new Error(data.error || `status ${resp.status}`);
}
await refreshAdminRequests();
} catch (err) {
admin.error = err.message || "Approve failed";
} finally {
admin.acting[username] = false;
}
}
async function deny(username) {
admin.error = "";
admin.acting[username] = true;
try {
const note = (admin.notes[username] || "").trim();
const payload = note ? { note } : {};
const resp = await authFetch(`/api/admin/access/requests/${encodeURIComponent(username)}/deny`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
if (!resp.ok) {
const data = await resp.json().catch(() => ({}));
throw new Error(data.error || `status ${resp.status}`);
}
await refreshAdminRequests();
} catch (err) {
admin.error = err.message || "Deny failed";
} finally {
admin.acting[username] = false;
}
}
async function copy(key, text) {
if (!text) return;
try {
if (navigator?.clipboard?.writeText) {
await navigator.clipboard.writeText(text);
} else {
fallbackCopy(text);
}
copied[key] = true;
window.setTimeout(() => {
copied[key] = false;
}, 1500);
} catch {
try {
fallbackCopy(text);
copied[key] = true;
window.setTimeout(() => {
copied[key] = false;
}, 1500);
} catch {
// ignore
}
}
}
return {
auth,
mailu,
jellyfin,
vaultwarden,
nextcloudMail,
wger,
firefly,
admin,
onboardingUrl,
vaultwardenReady,
vaultwardenDisplayStatus,
vaultwardenOrder,
doLogin,
copied,
hasFlag,
formatName,
toggleFlag,
rotateMailu,
resetWger,
resetFirefly,
syncNextcloudMail,
approve,
deny,
copy,
};
}

View File

@ -52,3 +52,9 @@ p {
.page > section + section {
  margin-top: 32px;
}

@media (max-width: 720px) {
  .page {
    padding: 24px 16px 56px;
  }
}

View File

@ -18,7 +18,27 @@ export const auth = reactive({
let keycloak = null;
let initPromise = null;
function normalizeGroups(groups) {
/**
* Build a Keycloak client for the current environment.
*
* WHY: tests need to inject a predictable client without changing the runtime
* behavior for the browser.
*/
export function createKeycloak(config) {
const factory = globalThis.__ATLAS_KEYCLOAK_FACTORY__;
if (typeof factory === "function") return factory(config);
const ctor = globalThis.__ATLAS_KEYCLOAK_CONSTRUCTOR__;
if (typeof ctor === "function") return new ctor(config);
return new Keycloak(config);
}
/**
* Normalize Keycloak groups into the format the UI expects.
*
* @param {unknown} groups - Raw group list from the access token.
* @returns {string[]} A cleaned list of group names without leading slashes.
*/
export function normalizeGroups(groups) {
if (!Array.isArray(groups)) return [];
return groups
.filter((g) => typeof g === "string")
@ -26,6 +46,12 @@ function normalizeGroups(groups) {
.filter(Boolean);
}
/**
* Refresh the reactive auth state from the current Keycloak token.
*
* WHY: the UI reads from a shared reactive object, so a token refresh needs to
* update all dependent fields in one place.
*/
function updateFromToken() {
const parsed = keycloak?.tokenParsed || {};
auth.authenticated = Boolean(keycloak?.authenticated);
@@ -35,6 +61,11 @@ function updateFromToken() {
auth.groups = normalizeGroups(parsed.groups);
}
/**
* Initialize Keycloak session probing and populate the reactive auth state.
*
* @returns {Promise<void>} A singleton promise so callers can await startup.
*/
export async function initAuth() {
if (initPromise) return initPromise;
@@ -51,7 +82,7 @@ export async function initAuth() {
if (!auth.enabled) return;
keycloak = new Keycloak({
keycloak = createKeycloak({
url: cfg.url,
realm: cfg.realm,
clientId: cfg.client_id,
@@ -92,6 +123,10 @@ export async function initAuth() {
return initPromise;
}
/**
* Open the Keycloak login flow and preserve the current location as the return
* target.
*/
export async function login(
redirectPath = window.location.pathname + window.location.search + window.location.hash,
loginHint = "",
@@ -105,11 +140,21 @@ export async function login(
await keycloak.login(options);
}
/**
* Log the current user out of Keycloak and return them to the portal root.
*/
export async function logout() {
if (!keycloak) return;
await keycloak.logout({ redirectUri: window.location.origin });
}
/**
* Perform a fetch with the current bearer token attached when available.
*
* @param {string} url - Target URL.
* @param {RequestInit} options - Standard fetch options.
* @returns {Promise<Response>} The browser fetch response.
*/
export async function authFetch(url, options = {}) {
const headers = new Headers(options.headers || {});
if (keycloak?.authenticated) {

View File

@@ -67,6 +67,11 @@ const renderDiagram = async () => {
}
};
/**
* Cancel pending Mermaid rendering work before scheduling a replacement.
*
* @returns {void}
*/
function cancelScheduledRender() {
if (!scheduledHandle) return;
if (scheduledKind === "idle" && window.cancelIdleCallback) {
@@ -78,6 +83,14 @@ function cancelScheduledRender() {
scheduledKind = "";
}
/**
* Schedule Mermaid rendering during idle time when the browser supports it.
*
* WHY: diagrams are decorative and can be expensive, so rendering should not
* compete with first paint or input responsiveness.
*
* @returns {void}
*/
function scheduleRenderDiagram() {
cancelScheduledRender();
if (!props.diagram) return;
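The idle-first scheduling strategy can be sketched in isolation. The names below are illustrative, not the component's actual fields; the pattern is the same as `scheduleRenderDiagram`/`cancelScheduledRender` above:

```javascript
// Schedule work during idle time when the browser supports it, falling back to
// a zero-delay timeout elsewhere. Returns a handle pair so the caller can cancel.
function scheduleIdle(task) {
  if (typeof window !== "undefined" && window.requestIdleCallback) {
    return { kind: "idle", handle: window.requestIdleCallback(task, { timeout: 500 }) };
  }
  return { kind: "timeout", handle: setTimeout(task, 0) };
}

function cancelIdle({ kind, handle }) {
  if (kind === "idle" && typeof window !== "undefined" && window.cancelIdleCallback) {
    window.cancelIdleCallback(handle);
  } else {
    clearTimeout(handle);
  }
}
```

Outside a browser (no `window`), `scheduleIdle` takes the timeout branch, which is also what older Safari releases without `requestIdleCallback` would do.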

View File

@@ -1,3 +1,9 @@
/**
* Return the static Atlas and Oceanus hardware inventory used as fallback data.
*
* WHY: the home page needs stable content when live cluster data cannot be
* fetched during startup or testing.
*/
export function fallbackHardware() {
return {
clusters: [
@@ -39,6 +45,11 @@ export function fallbackHardware() {
};
}
/**
* Return the curated service catalog shown on the home page when live data is absent.
*
* WHY: the service grid must stay useful without a live backend response.
*/
export function fallbackServices() {
return {
services: [
@@ -262,6 +273,11 @@ export function fallbackServices() {
};
}
/**
* Return the static ingress and egress relationships that power the network diagram.
*
* WHY: the topology diagram needs deterministic fallback data for offline runs.
*/
export function fallbackNetwork() {
return {
ingress: [
@@ -297,6 +313,12 @@ export function fallbackNetwork() {
};
}
/**
* Return the Atlas metrics summary card content used on the overview page.
*
* WHY: the metrics cards should still render a coherent overview if live
* dashboard links are unavailable.
*/
export function fallbackMetrics() {
return {
dashboard: "https://metrics.bstein.dev",
@@ -304,7 +326,16 @@ export function fallbackMetrics() {
};
}
export function buildHardwareDiagram(data) {
/**
* Render the hardware topology diagram used on the home page.
*
* WHY: the landing page needs a deterministic Mermaid diagram even before
* live cluster state is available.
*
* @param {object} _data - Live hardware state, accepted for future shaping.
* @returns {string} Mermaid flowchart text for the Atlas hardware overview.
*/
export function buildHardwareDiagram(_data) {
return `
flowchart TB
subgraph TitanLab["Titan Lab (25 nodes)"]
@@ -370,6 +401,12 @@ flowchart TB
`;
}
/**
* Render the ingress and auth sequence for the portal network flow.
*
* WHY: the home page should explain request routing without depending on live
* cluster state.
*/
export function buildNetworkDiagram() {
return `
sequenceDiagram
@@ -394,6 +431,12 @@ sequenceDiagram
`;
}
/**
* Render the delivery pipeline from developer push to Flux reconciliation.
*
* WHY: the overview page needs a compact visual of the release path even when
* the CI backend is not reachable.
*/
export function buildPipelineDiagram() {
return `
flowchart LR

View File

@@ -0,0 +1,57 @@
/**
* Parse onboarding media manifests into guide groups.
*
* WHY: keeping manifest shaping outside the view makes guide behavior
* testable without mounting the whole onboarding page.
*
* @param {string[]} files - Manifest file paths relative to the onboarding media root.
* @returns {object} Guide groups keyed by service, step, and variant.
*/
export function parseManifest(files) {
const grouped = {};
for (const path of files) {
if (typeof path !== "string") continue;
const cleaned = path.replace(/^\/+/, "").replace(/\\/g, "/");
const parts = cleaned.split("/");
if (parts.length < 3) continue;
const service = parts[0];
const step = parts[1];
const rest = parts.slice(2);
let variant = "default";
let filename = rest.join("/");
if (rest.length > 1) {
variant = rest[0];
filename = rest.slice(1).join("/");
}
const order = guideOrder(filename);
const label = guideLabel(filename);
const url = `/media/onboarding/${cleaned}`;
grouped[service] = grouped[service] || {};
grouped[service][step] = grouped[service][step] || {};
grouped[service][step][variant] = grouped[service][step][variant] || {
id: variant,
title: variant === "default" ? "" : variant,
shots: [],
};
grouped[service][step][variant].shots.push({ url, order, label, file: filename });
}
Object.values(grouped).forEach((serviceSteps) => {
Object.values(serviceSteps).forEach((variants) => {
Object.values(variants).forEach((group) => {
group.shots.sort((a, b) => (a.order - b.order) || a.file.localeCompare(b.file));
});
});
});
return grouped;
}
function guideOrder(filename) {
const prefix = filename.match(/^(\d{1,3})/);
if (prefix) return Number(prefix[1]);
const step = filename.match(/step[-_ ]?(\d{1,3})/i);
if (step) return Number(step[1]);
return Number.MAX_SAFE_INTEGER;
}
function guideLabel(filename) {
const base = filename.replace(/\.(png|jpe?g|webp)$/i, "");
return base.replace(/^\d+[-_]?/, "").replace(/[-_]/g, " ").trim();
}
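The ordering and labeling rules above can be exercised directly. This is a standalone mirror of `guideOrder`/`guideLabel` (the diff view itself is not importable), showing how numeric prefixes win, `step<N>` names come next, and everything else sorts last:

```javascript
// Mirror of guideOrder: numeric prefix first, then "step<N>" patterns,
// then MAX_SAFE_INTEGER so unordered files sort to the end.
function guideOrderSketch(filename) {
  const prefix = filename.match(/^(\d{1,3})/);
  if (prefix) return Number(prefix[1]);
  const step = filename.match(/step[-_ ]?(\d{1,3})/i);
  if (step) return Number(step[1]);
  return Number.MAX_SAFE_INTEGER;
}

// Mirror of guideLabel: drop the image extension and numeric prefix,
// turn separators into spaces.
function guideLabelSketch(filename) {
  const base = filename.replace(/\.(png|jpe?g|webp)$/i, "");
  return base.replace(/^\d+[-_]?/, "").replace(/[-_]/g, " ").trim();
}

guideOrderSketch("03-open-vault.png"); // 3
guideLabelSketch("03-open-vault.png"); // "open vault"
```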

View File

@@ -0,0 +1,46 @@
/**
* Convert an onboarding status code into display copy.
*
* @param {string} value - Raw status returned by the backend.
* @returns {string} Human-readable status label.
*/
export function statusLabel(value) {
const key = (value || "").trim();
if (key === "pending_email_verification") return "confirm email";
if (key === "pending") return "awaiting approval";
if (key === "accounts_building") return "accounts building";
if (key === "awaiting_onboarding") return "awaiting onboarding";
if (key === "ready") return "ready";
if (key === "denied") return "rejected";
return key || "unknown";
}
/**
* Convert an onboarding status code into the matching pill class.
*
* @param {string} value - Raw status returned by the backend.
* @returns {string} CSS class for status emphasis.
*/
export function statusPillClass(value) {
const key = (value || "").trim();
if (key === "pending_email_verification") return "pill-warn";
if (key === "pending") return "pill-wait";
if (key === "accounts_building") return "pill-warn";
if (key === "awaiting_onboarding") return "pill-ok";
if (key === "ready") return "pill-info";
if (key === "denied") return "pill-bad";
return "pill-warn";
}
/**
* Convert a provisioning task status into a pill class.
*
* @param {string} value - Raw task status returned by the backend.
* @returns {string} CSS class for task emphasis.
*/
export function taskPillClass(value) {
const key = (value || "").trim();
if (key === "ok") return "pill-ok";
if (key === "error") return "pill-bad";
return "pill-warn";
}

View File

@@ -0,0 +1,299 @@
/** Static onboarding section and prerequisite definitions. */
export const STEP_PREREQS = {
vaultwarden_master_password: [],
vaultwarden_store_temp_password: ["vaultwarden_master_password"],
vaultwarden_browser_extension: ["vaultwarden_master_password"],
vaultwarden_mobile_app: ["vaultwarden_master_password"],
keycloak_password_rotated: ["vaultwarden_master_password"],
element_recovery_key: ["keycloak_password_rotated"],
element_mobile_app: ["element_recovery_key"],
mail_client_setup: ["vaultwarden_master_password"],
nextcloud_web_access: ["vaultwarden_master_password"],
nextcloud_mail_integration: ["nextcloud_web_access"],
nextcloud_desktop_app: ["nextcloud_web_access"],
nextcloud_mobile_app: ["nextcloud_web_access"],
budget_encryption_ack: ["nextcloud_mail_integration"],
firefly_password_rotated: ["element_recovery_key"],
firefly_mobile_app: ["firefly_password_rotated"],
wger_password_rotated: ["firefly_password_rotated"],
wger_mobile_app: ["wger_password_rotated"],
jellyfin_web_access: ["vaultwarden_master_password"],
jellyfin_mobile_app: ["jellyfin_web_access"],
jellyfin_tv_setup: ["jellyfin_web_access"],
};
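A minimal sketch of how a prerequisite table like this gates steps, mirroring the `isStepBlocked` predicate used later in the onboarding flow (the table below is a trimmed sample, not the full map):

```javascript
// Sample slice of the prerequisite table: each step lists the steps that must
// be completed before it unlocks.
const PREREQS = {
  vaultwarden_master_password: [],
  keycloak_password_rotated: ["vaultwarden_master_password"],
  element_recovery_key: ["keycloak_password_rotated"],
};

// A step is blocked while any of its prerequisites remains incomplete.
function isBlockedSketch(stepId, completedSteps) {
  const prereqs = PREREQS[stepId] || [];
  return prereqs.some((req) => !completedSteps.includes(req));
}

isBlockedSketch("element_recovery_key", ["vaultwarden_master_password"]); // true
isBlockedSketch("keycloak_password_rotated", ["vaultwarden_master_password"]); // false
```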
export const SECTION_DEFS = [
{
id: "vaultwarden",
title: "Vaultwarden",
summary: "Self-hosted password manager for Atlas credentials.",
benefit: "Keeps every lab password encrypted and synced across devices.",
steps: [
{
id: "vaultwarden_master_password",
title: "Set your Vaultwarden master password",
action: "confirm",
description:
"Open Nextcloud Mail to find the invite, then visit vault.bstein.dev and create your master password. Use the temporary Keycloak password to sign in to Nextcloud for the first time.",
bullets: [
"Prefer a long (64+ character) multi-word phrase over a single word. Length is stronger than complexity.",
"Never share, write down, or store your password anywhere, with anyone, for any reason. It must live only between your ears.",
"Pick something you will not forget: ideally a phrase you already know, easy to remember, and personal to you.",
],
links: [
{ href: "https://cloud.bstein.dev", text: "Nextcloud Mail" },
{ href: "https://vault.bstein.dev", text: "Vaultwarden" },
],
guide: { service: "vaultwarden", step: "step1_website" },
},
{
id: "vaultwarden_browser_extension",
title: "Install the browser extension",
action: "checkbox",
description:
"Install Bitwarden in your browser and point it at vault.bstein.dev (Settings → Account → Environment → Self-hosted).",
links: [
{ href: "https://addons.mozilla.org/en-US/firefox/addon/bitwarden-password-manager/", text: "Firefox" },
{ href: "https://chromewebstore.google.com/detail/bitwarden-free-password-m/nngceckbapebfimnlniiiahkandclblb", text: "Chrome" },
{ href: "https://apps.apple.com/app/bitwarden/id1352778147", text: "Safari" },
{ href: "https://www.mozilla.org/firefox/new/", text: "Need a browser? Get Firefox" },
],
guide: { service: "vaultwarden", step: "step2_browser_extension" },
},
{
id: "vaultwarden_mobile_app",
title: "Install the mobile app",
action: "checkbox",
description: "Install Bitwarden on your phone, set the server to vault.bstein.dev, and enable biometrics.",
links: [{ href: "https://bitwarden.com/download/", text: "Bitwarden downloads" }],
guide: { service: "vaultwarden", step: "step3_mobile_app" },
},
],
},
{
id: "element",
title: "Element",
summary: "Secure chat, calls, and video for the lab.",
benefit: "Private messaging with encryption and recovery controls you own.",
steps: [
{
id: "keycloak_password_rotated",
title: "Connect to Element web",
action: "confirm",
description:
"Sign in to Element with the temporary password. Keycloak will prompt you to set a new password. Store the new password in Vaultwarden.",
links: [
{ href: "https://live.bstein.dev", text: "Element" },
{ href: "https://sso.bstein.dev/realms/atlas/account", text: "Keycloak account" },
],
guide: { service: "element", step: "step1_web_access" },
},
{
id: "element_recovery_key",
title: "Create your recovery key",
action: "confirm",
description:
"In Element settings → Encryption, create a recovery key and store it in Vaultwarden.",
guide: { service: "element", step: "step2_record_recovery_key" },
},
{
id: "element_mobile_app",
title: "Optional: install Element X on mobile",
action: "checkbox",
description:
"Install Element X and sign in. Use Element Web → Settings → Sessions to connect your phone via QR.",
links: [{ href: "https://element.io/download", text: "Element X downloads" }],
guide: { service: "element", step: "step3_mobile_app_and_qr_code_login" },
},
],
},
{
id: "mail",
title: "Mail",
summary: "Your @bstein.dev inbox for lab notifications and contact.",
benefit: "One address for every Atlas service and shared communication.",
steps: [
{
id: "mail_client_setup",
title: "Set up mail on a device",
action: "checkbox",
description:
"Use the IMAP/SMTP details on your Account page to add mail to your phone or desktop client (Thunderbird, Apple Mail, FairEmail).",
links: [{ href: "/account", text: "Open Account details" }],
guide: { service: "mail", step: "step1_mail_app" },
},
],
},
{
id: "nextcloud",
title: "Nextcloud",
summary: "File storage, calendar, and mail hub for the lab.",
benefit: "Central workspace for docs, sharing, and your mailbox.",
steps: [
{
id: "nextcloud_web_access",
title: "Sign in to Nextcloud",
action: "checkbox",
description:
"Open Nextcloud, confirm you can access Files, Calendar, and Mail, and keep the tab handy during onboarding.",
links: [{ href: "https://cloud.bstein.dev", text: "Nextcloud" }],
guide: { service: "nextcloud", step: "step1_web_access" },
},
{
id: "nextcloud_mail_integration",
title: "Mail integration ready",
action: "auto",
description:
"Atlas configures your mailbox inside Nextcloud automatically. If this stays pending, use Accounts → Sync Mail and retry.",
guide: { service: "nextcloud", step: "step2_mail_integration" },
},
{
id: "nextcloud_desktop_app",
title: "Optional: install the desktop sync app",
action: "checkbox",
description: "Install the Nextcloud desktop app to sync files locally.",
links: [{ href: "https://nextcloud.com/install/", text: "Nextcloud desktop" }],
guide: { service: "nextcloud", step: "step3_desktop_storage_app" },
},
{
id: "nextcloud_mobile_app",
title: "Optional: install the mobile app",
action: "checkbox",
description: "Install the Nextcloud mobile app for files and photos on the go.",
links: [{ href: "https://nextcloud.com/install/", text: "Nextcloud mobile" }],
guide: { service: "nextcloud", step: "step4_mobile_app" },
},
],
},
{
id: "budget",
title: "Budget Encryption",
summary: "Actual Budget for private personal finance.",
benefit: "Encryption keeps your budget data safe and portable.",
steps: [
{
id: "budget_encryption_ack",
title: "Enable encryption inside Actual Budget",
action: "checkbox",
description:
"Actual Budget does not encrypt by default. Open Settings → Encryption, enable it, and store the key in Vaultwarden.",
bullets: [
"Keep the encryption key only in Vaultwarden.",
"If you lose the key, your budget data cannot be recovered.",
],
links: [
{ href: "https://budget.bstein.dev", text: "Actual Budget" },
{ href: "https://vault.bstein.dev", text: "Vaultwarden" },
],
guide: { service: "budget", step: "step1_encrypt_data" },
},
],
},
{
id: "firefly",
title: "Firefly III",
summary: "Personal finance tracker for transactions and reporting.",
benefit: "Detailed insights, budgets, and exports under your control.",
steps: [
{
id: "firefly_password_rotated",
title: "Change your Firefly password",
action: "confirm",
description:
"Sign in to money.bstein.dev with the credentials on your Account page, change the password, then confirm here.",
links: [
{ href: "https://money.bstein.dev", text: "Firefly III" },
{ href: "/account", text: "Account credentials" },
],
guide: { service: "firefly", step: "step1_web_access" },
},
{
id: "firefly_mobile_app",
title: "Optional: set up the mobile app",
action: "checkbox",
description:
"Install Abacus (Firefly III), connect to money.bstein.dev, and keep the OAuth credentials in Vaultwarden.",
links: [
{ href: "https://github.com/vgsmar/Abacus/releases", text: "Abacus releases" },
{ href: "/account", text: "Account credentials" },
],
guide: { service: "firefly", step: "step2_mobile_app" },
},
],
},
{
id: "wger",
title: "Wger",
summary: "Fitness tracking for workouts and nutrition.",
benefit: "Keeps training plans and progress in one place.",
steps: [
{
id: "wger_password_rotated",
title: "Change your Wger password",
action: "confirm",
description:
"Sign in to health.bstein.dev with the credentials on your Account page, change the password, then confirm here.",
links: [
{ href: "https://health.bstein.dev", text: "Wger" },
{ href: "/account", text: "Account credentials" },
],
guide: { service: "wger", step: "step1_web_access" },
},
{
id: "wger_mobile_app",
title: "Optional: set up the mobile app",
action: "checkbox",
description:
"Install the Wger mobile app, sign in with your updated credentials, and store the password in Vaultwarden.",
links: [
{ href: "https://github.com/wger-project/wger", text: "Wger project" },
{ href: "/account", text: "Account credentials" },
],
guide: { service: "wger", step: "step2_mobile_app" },
},
],
},
{
id: "jellyfin",
title: "Jellyfin",
summary: "Self-hosted media streaming for the lab.",
benefit: "Watch your media anywhere without third-party accounts.",
steps: [
{
id: "jellyfin_web_access",
title: "Sign in to Jellyfin",
action: "checkbox",
description:
"Sign in with your Atlas username/password (LDAP-backed).",
links: [{ href: "https://stream.bstein.dev", text: "Jellyfin" }],
guide: { service: "jellyfin", step: "step1_web_access" },
},
{
id: "jellyfin_mobile_app",
title: "Optional: install the mobile app",
action: "checkbox",
description: "Install Jellyfin on mobile and connect to stream.bstein.dev.",
links: [{ href: "https://jellyfin.org/downloads/", text: "Jellyfin downloads" }],
guide: { service: "jellyfin", step: "step2_mobile_app" },
},
{
id: "jellyfin_tv_setup",
title: "Optional: connect a TV client",
action: "checkbox",
description:
"Use the Jellyfin app on your TV or streaming device (LG, Samsung, Roku, Apple TV, Xbox).",
links: [{ href: "https://jellyfin.org/downloads/", text: "Jellyfin TV apps" }],
},
],
},
];
export const VAULTWARDEN_TEMP_STEP = {
id: "vaultwarden_store_temp_password",
title: "Store the temporary Keycloak password",
action: "confirm",
description:
"Save the temporary Keycloak password in Vaultwarden so you can rotate it later without losing access.",
links: [{ href: "https://vault.bstein.dev", text: "Vaultwarden" }],
guide: { service: "vaultwarden", step: "step1_website", tail: 4 },
};

View File

@@ -0,0 +1,500 @@
import { computed, onMounted, ref } from "vue";
import { auth, authFetch } from "../auth";
import { useOnboardingGuides } from "./useOnboardingGuides";
import { useOnboardingNavigation } from "./useOnboardingNavigation";
import { statusLabel, statusPillClass, taskPillClass } from "./onboardingLabels";
import { SECTION_DEFS, STEP_PREREQS, VAULTWARDEN_TEMP_STEP } from "./onboardingSections";
/**
* Build the Onboarding page state machine.
* WHY: onboarding coordinates request status, guide media, password reveal,
* and service attestation flow; isolating that state keeps the view focused
* on layout and makes the workflow independently testable.
* @param {import("vue-router").RouteLocationNormalizedLoaded} route - active route with optional request code query params.
* @returns {object} reactive onboarding state and event handlers.
*/
export function useOnboardingFlow(route) {
const requestCode = ref("");
const requestUsername = ref("");
const status = ref("");
const loading = ref(false);
const error = ref("");
const onboarding = ref({ required_steps: [], optional_steps: [], completed_steps: [] });
const initialPassword = ref("");
const initialPasswordRevealedAt = ref("");
const revealPassword = ref(false);
const passwordCopied = ref(false);
const usernameCopied = ref(false);
const tasks = ref([]);
const blocked = ref(false);
const retrying = ref(false);
const retryMessage = ref("");
const keycloakPasswordRotationRequested = ref(false);
const activeSectionId = ref("vaultwarden");
const {
guideShots,
guidePage,
lightboxShot,
guideGroups,
guideKey,
guideIndex,
guideSet,
guidePrev,
guideNext,
guideShot,
shouldOpenGuide,
openLightbox,
closeLightbox,
loadGuideShots,
} = useOnboardingGuides({ isStepDone, isStepBlocked });
const confirmingStepId = ref("");
const showPasswordCard = computed(() => Boolean(initialPassword.value || initialPasswordRevealedAt.value));
const passwordRevealLocked = computed(() => Boolean(!initialPassword.value && initialPasswordRevealedAt.value));
const passwordRevealHint = computed(() =>
passwordRevealLocked.value
? "This password was already revealed and cannot be shown again. Ask an admin to reset it if you missed it."
: "",
);
const vaultwardenRecoveryEmail = computed(() => onboarding.value?.vaultwarden?.recovery_email || "");
const vaultwardenMatched = computed(() => Boolean(onboarding.value?.vaultwarden?.matched));
const vaultwardenLoginEmail = computed(() => {
if (vaultwardenMatched.value) {
return vaultwardenRecoveryEmail.value || "your recovery email";
}
if (requestUsername.value) {
return `${requestUsername.value}@bstein.dev`;
}
return "your @bstein.dev address";
});
const vaultwardenLoginEmailLower = computed(() => (vaultwardenLoginEmail.value || "").toLowerCase());
const mailAddress = computed(() => (requestUsername.value ? `${requestUsername.value}@bstein.dev` : "your @bstein.dev address"));
const mailAddressLower = computed(() => (mailAddress.value || "").toLowerCase());
const sections = computed(() =>
SECTION_DEFS.map((section) => {
if (section.id !== "vaultwarden") return section;
const steps = [...section.steps];
if (vaultwardenMatched.value) {
steps.splice(1, 0, VAULTWARDEN_TEMP_STEP);
}
return { ...section, steps };
}),
);
const {
activeSection,
nextSectionItem,
hasPrevSection,
hasNextSection,
selectSection,
prevSection,
nextSection,
stepCardClass,
sectionProgress,
sectionStatusLabel,
sectionPillClass,
isSectionLocked,
isSectionDone,
sectionCardClass,
sectionGateComplete,
} = useOnboardingNavigation({ sections, activeSectionId, isStepDone, isStepRequired, isStepBlocked });
const showOnboarding = computed(() => status.value === "awaiting_onboarding" || status.value === "ready");
function isStepDone(stepId) {
const steps = onboarding.value?.completed_steps || [];
return Array.isArray(steps) ? steps.includes(stepId) : false;
}
function isStepRequired(stepId) {
const required = onboarding.value?.required_steps || [];
return Array.isArray(required) && required.includes(stepId);
}
function isStepBlocked(stepId) {
const prereqs = STEP_PREREQS[stepId] || [];
if (!prereqs.length) return false;
return prereqs.some((req) => !isStepDone(req));
}
/**
* WHY: service login rules vary by step.
* @returns {string} Optional helper copy for the step.
*/
function stepNote(step) {
if (step.id === "vaultwarden_master_password") {
return `Vaultwarden uses an email login. Use ${vaultwardenLoginEmailLower.value} to sign in.`;
}
if (step.id === "vaultwarden_store_temp_password") {
return "Store the temporary Keycloak password in Vaultwarden so you can rotate it safely later.";
}
if (step.id === "firefly_password_rotated") {
return `Firefly uses an email login. Use ${mailAddressLower.value} to sign in.`;
}
if (step.id === "mail_client_setup") {
return `Your mailbox address is ${mailAddressLower.value}.`;
}
return "";
}
/**
* WHY: step state combines completion, prerequisites, and backend automation.
* @returns {string} Pill label text for the step.
*/
function stepPillLabel(step) {
if (isStepDone(step.id)) return "done";
if (isStepBlocked(step.id)) return "blocked";
if (step.action === "auto") return "pending";
if (!isStepRequired(step.id)) return "optional";
if (step.id === "keycloak_password_rotated") {
return keycloakPasswordRotationRequested.value ? "rotate now" : "ready";
}
return "pending";
}
function stepPillClass(step) {
if (isStepDone(step.id)) return "pill-ok";
if (isStepBlocked(step.id)) return "pill-wait";
if (!isStepRequired(step.id)) return "pill-info";
if (step.id === "keycloak_password_rotated" && !keycloakPasswordRotationRequested.value) {
return "pill-info";
}
return "pill-warn";
}
function isConfirming(step) {
return confirmingStepId.value === step.id;
}
function confirmLabel(step) {
return isConfirming(step) ? "Confirming..." : "Confirm";
}
function selectDefaultSection() {
const list = sections.value;
const firstIncomplete = list.find((section) => !isSectionDone(section) && !isSectionLocked(section));
activeSectionId.value = (firstIncomplete || list[0] || {}).id || "vaultwarden";
}
async function check() {
if (loading.value) return;
error.value = "";
loading.value = true;
try {
const resp = await fetch("/api/access/request/status", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
request_code: requestCode.value.trim(),
reveal_initial_password: true,
}),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
status.value = data.status || "unknown";
requestUsername.value = data.username || "";
onboarding.value = data.onboarding || { required_steps: [], optional_steps: [], completed_steps: [] };
keycloakPasswordRotationRequested.value = Boolean(data.onboarding?.keycloak?.password_rotation_requested);
tasks.value = Array.isArray(data.tasks) ? data.tasks : [];
blocked.value = Boolean(data.blocked);
initialPassword.value = data.initial_password || "";
initialPasswordRevealedAt.value = data.initial_password_revealed_at || "";
if (showOnboarding.value) {
selectDefaultSection();
}
} catch (err) {
error.value = err?.message || "Failed to check status";
tasks.value = [];
blocked.value = false;
keycloakPasswordRotationRequested.value = false;
} finally {
loading.value = false;
}
}
async function retryProvisioning() {
if (retrying.value) return;
retryMessage.value = "";
const code = requestCode.value.trim();
if (!code) return;
retrying.value = true;
try {
const retryTasks = tasks.value
.filter((item) => item.status === "error")
.map((item) => item.task)
.filter(Boolean);
const resp = await fetch("/api/access/request/retry", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: code, tasks: retryTasks }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
retryMessage.value = "Retry requested. Check again in a moment.";
await check();
} catch (err) {
retryMessage.value = err?.message || "Retry request failed.";
} finally {
retrying.value = false;
}
}
function togglePassword() {
revealPassword.value = !revealPassword.value;
}
async function copyText(text, setFlag) {
if (!text) return;
try {
if (navigator?.clipboard?.writeText) {
await navigator.clipboard.writeText(text);
} else {
const fallback = document.createElement("textarea");
fallback.value = text;
fallback.setAttribute("readonly", "");
fallback.style.position = "fixed";
fallback.style.top = "-9999px";
fallback.style.left = "-9999px";
document.body.appendChild(fallback);
fallback.select();
fallback.setSelectionRange(0, fallback.value.length);
document.execCommand("copy");
document.body.removeChild(fallback);
}
setFlag(true);
setTimeout(() => setFlag(false), 1500);
} catch (err) {
error.value = err?.message || "Copy failed";
}
}
function copyInitialPassword() {
copyText(initialPassword.value, (value) => (passwordCopied.value = value));
}
function copyUsername() {
copyText(requestUsername.value, (value) => (usernameCopied.value = value));
}
async function toggleStep(stepId, event) {
const checked = Boolean(event?.target?.checked);
await setStepCompletion(stepId, checked);
}
async function setStepCompletion(stepId, completed, extra = {}) {
if (!requestCode.value.trim()) {
error.value = "Request code is missing.";
return;
}
if (isStepBlocked(stepId)) {
return;
}
loading.value = true;
error.value = "";
try {
const requester = auth.authenticated ? authFetch : fetch;
let resp = await requester("/api/access/request/onboarding/attest", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ request_code: requestCode.value.trim(), step: stepId, completed, ...extra }),
});
if ([401, 403].includes(resp.status) && requester === authFetch) {
resp = await fetch("/api/access/request/onboarding/attest", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ request_code: requestCode.value.trim(), step: stepId, completed, ...extra }),
});
}
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
status.value = data.status || status.value;
onboarding.value = data.onboarding || onboarding.value;
} catch (err) {
error.value = err?.message || "Failed to update onboarding";
} finally {
loading.value = false;
}
}
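The authenticated-then-anonymous retry in `setStepCompletion` is a reusable pattern: attest with the bearer token when a session exists, and fall back to an anonymous request if the token is rejected. A sketch with injected requesters (the function and parameter names here are illustrative, not part of the codebase):

```javascript
// Try the authenticated requester first; if the server rejects the token
// (401/403), retry the same POST anonymously so the request code alone
// can still authorize the attestation.
async function postWithAuthFallback(authedFetch, anonFetch, url, payload) {
  const init = {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  };
  let resp = await authedFetch(url, init);
  if ([401, 403].includes(resp.status)) {
    resp = await anonFetch(url, init);
  }
  return resp;
}
```

Injecting the two fetchers keeps the fallback logic trivially testable with stubs, which is the same reason the flow picks `auth.authenticated ? authFetch : fetch` at the call site.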
async function confirmStep(step) {
if (!step || isStepBlocked(step.id) || isStepDone(step.id)) return;
confirmingStepId.value = step.id;
try {
if (step.id === "keycloak_password_rotated") {
await requestKeycloakPasswordRotation();
await check();
return;
}
if (step.action === "auto") {
if (step.id === "firefly_password_rotated") {
const result = await runRotationCheck("firefly");
if (result && result.rotated === false) {
throw new Error("Firefly still uses the initial password. Change it in Firefly, then confirm again.");
}
}
if (step.id === "wger_password_rotated") {
const result = await runRotationCheck("wger");
if (result && result.rotated === false) {
throw new Error("Wger still uses the initial password. Change it in Wger, then confirm again.");
}
}
await check();
return;
}
if (step.action === "confirm") {
await check();
if (!isStepDone(step.id)) {
await setStepCompletion(step.id, true);
}
return;
}
await setStepCompletion(step.id, true);
} catch (err) {
error.value = err?.message || "Failed to confirm step";
} finally {
confirmingStepId.value = "";
}
}
async function runRotationCheck(service) {
if (!auth.authenticated) {
throw new Error("Log in to update onboarding steps.");
}
const endpoint =
service === "firefly"
? "/api/account/firefly/rotation/check"
: "/api/account/wger/rotation/check";
const resp = await authFetch(endpoint, { method: "POST" });
const data = await resp.json().catch(() => ({}));
if (!resp.ok) {
throw new Error(data.error || resp.statusText || `status ${resp.status}`);
}
return data;
}
async function requestKeycloakPasswordRotation() {
if (!requestCode.value.trim()) {
error.value = "Request code is missing.";
return;
}
if (isStepBlocked("keycloak_password_rotated")) {
error.value = "Complete earlier onboarding steps first.";
return;
}
if (keycloakPasswordRotationRequested.value) return;
loading.value = true;
error.value = "";
try {
const requester = auth.authenticated ? authFetch : fetch;
let resp = await requester("/api/access/request/onboarding/keycloak-password-rotate", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ request_code: requestCode.value.trim() }),
});
if ([401, 403].includes(resp.status) && requester === authFetch) {
resp = await fetch("/api/access/request/onboarding/keycloak-password-rotate", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ request_code: requestCode.value.trim() }),
});
}
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
onboarding.value = data.onboarding || onboarding.value;
status.value = data.status || status.value;
keycloakPasswordRotationRequested.value = Boolean(data.onboarding?.keycloak?.password_rotation_requested);
} catch (err) {
error.value = err?.message || "Failed to request password rotation";
} finally {
loading.value = false;
}
}
onMounted(async () => {
const code = route.query.code || route.query.request_code || "";
if (typeof code === "string" && code.trim()) {
requestCode.value = code.trim();
await check();
}
await loadGuideShots();
});
return {
requestCode,
requestUsername,
status,
loading,
error,
onboarding,
initialPassword,
initialPasswordRevealedAt,
revealPassword,
passwordCopied,
usernameCopied,
tasks,
blocked,
retrying,
retryMessage,
keycloakPasswordRotationRequested,
activeSectionId,
guideShots,
guidePage,
lightboxShot,
confirmingStepId,
showPasswordCard,
passwordRevealLocked,
passwordRevealHint,
vaultwardenRecoveryEmail,
vaultwardenMatched,
vaultwardenLoginEmail,
vaultwardenLoginEmailLower,
mailAddress,
mailAddressLower,
sections,
activeSection,
nextSectionItem,
hasPrevSection,
hasNextSection,
showOnboarding,
selectSection,
prevSection,
nextSection,
statusLabel,
statusPillClass,
isStepDone,
isStepRequired,
isStepBlocked,
stepNote,
stepPillLabel,
stepPillClass,
isConfirming,
confirmLabel,
stepCardClass,
sectionProgress,
sectionStatusLabel,
sectionPillClass,
isSectionLocked,
isSectionDone,
sectionCardClass,
sectionGateComplete,
guideGroups,
guideKey,
guideIndex,
guideSet,
guidePrev,
guideNext,
guideShot,
shouldOpenGuide,
openLightbox,
closeLightbox,
taskPillClass,
selectDefaultSection,
check,
retryProvisioning,
togglePassword,
copyText,
copyInitialPassword,
copyUsername,
toggleStep,
setStepCompletion,
confirmStep,
runRotationCheck,
requestKeycloakPasswordRotation,
loadGuideShots,
};
}
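The authenticated-then-anonymous retry in requestKeycloakPasswordRotation above is a small reusable pattern: try the logged-in client first, and fall back to an anonymous request only on auth failures. A minimal standalone sketch, with `postWithAuthFallback`, `fakeAuthed`, and `fakeAnon` as illustrative names that are not part of the composable:

```javascript
// Try the authenticated client; on 401/403 retry once with the anonymous one.
// Both clients are assumed to resolve to a Response-like object with `status`.
async function postWithAuthFallback(authed, anon, url, body) {
  let resp = await authed(url, body);
  if ([401, 403].includes(resp.status)) {
    resp = await anon(url, body);
  }
  return resp;
}

// Stand-ins: the authenticated call is rejected, the anonymous retry succeeds.
const fakeAuthed = async () => ({ status: 401 });
const fakeAnon = async () => ({ status: 200 });
postWithAuthFallback(fakeAuthed, fakeAnon, "/api/x", {}).then((resp) =>
  console.log(resp.status),
); // 200
```

The single retry keeps the flow usable mid-onboarding, when a session may exist but not yet carry the roles the endpoint checks.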


@@ -0,0 +1,114 @@
import { ref } from "vue";
import { parseManifest } from "./onboardingGuides";
/**
* Manage onboarding guide media, pagination, and lightbox state.
*
* @param {object} gates - Step completion/blocking predicates from the onboarding flow.
* @returns {object} guide state and guide UI helpers.
*/
export function useOnboardingGuides({ isStepDone, isStepBlocked }) {
const guideShots = ref({});
const guidePage = ref({});
const lightboxShot = ref(null);
/**
* Return screenshot groups for a step, honoring configured head/tail limits.
*
* @param {object} step - Onboarding step definition with optional guide metadata.
* @returns {Array<object>} Screenshot groups to render for the guide carousel.
*/
function guideGroups(step) {
if (!step.guide) return [];
const service = step.guide.service;
const stepKey = step.guide.step;
const serviceShots = guideShots.value?.[service] || {};
const stepShots = serviceShots?.[stepKey] || {};
const groups = Object.values(stepShots);
const take = step.guide.take || step.guide.tail || 0;
if (!take) return groups;
const useTail = Boolean(step.guide.tail);
return groups.map((group) => {
const shots = useTail ? group.shots.slice(-take) : group.shots.slice(0, take);
return { ...group, shots };
});
}
function guideKey(step, group) {
const service = step.guide?.service || "unknown";
const stepKey = step.guide?.step || "unknown";
return `${service}:${stepKey}:${group.id}`;
}
function guideIndex(step, group) {
const key = guideKey(step, group);
const index = guidePage.value[key] ?? 0;
const maxIndex = Math.max(group.shots.length - 1, 0);
return Math.min(Math.max(index, 0), maxIndex);
}
function guideSet(step, group, index) {
const key = guideKey(step, group);
const next = Math.min(Math.max(index, 0), group.shots.length - 1);
guidePage.value = { ...guidePage.value, [key]: next };
}
function guidePrev(step, group) {
guideSet(step, group, guideIndex(step, group) - 1);
}
function guideNext(step, group) {
guideSet(step, group, guideIndex(step, group) + 1);
}
function guideShot(step, group) {
return group.shots[guideIndex(step, group)] || {};
}
function shouldOpenGuide(step, section) {
if (!step || !step.guide || !section) return false;
const first = section.steps.find(
(item) => item.guide && !isStepDone(item.id) && !isStepBlocked(item.id),
);
return Boolean(first && first.id === step.id);
}
function openLightbox(shot) {
if (!shot || !shot.url) return;
lightboxShot.value = shot;
}
function closeLightbox() {
lightboxShot.value = null;
}
async function loadGuideShots() {
try {
const resp = await fetch("/media/onboarding/manifest.json", { headers: { Accept: "application/json" } });
if (!resp.ok) return;
const payload = await resp.json();
const files = Array.isArray(payload?.files) ? payload.files : [];
guideShots.value = parseManifest(files);
} catch {
guideShots.value = {};
}
}
return {
guideShots,
guidePage,
lightboxShot,
guideGroups,
guideKey,
guideIndex,
guideSet,
guidePrev,
guideNext,
guideShot,
shouldOpenGuide,
openLightbox,
closeLightbox,
loadGuideShots,
};
}
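guideIndex and guideSet above both funnel through the same clamp into `[0, shots.length - 1]`. A minimal standalone sketch of that shared logic (`clampGuideIndex` is an illustrative name, not a function from the composable):

```javascript
// Clamp a requested page index into [0, shotCount - 1]; an empty group
// clamps to 0, matching guideIndex's Math.max(length - 1, 0) guard above.
function clampGuideIndex(requested, shotCount) {
  const maxIndex = Math.max(shotCount - 1, 0);
  return Math.min(Math.max(requested, 0), maxIndex);
}

// Out-of-range requests snap to the nearest valid page.
console.log(clampGuideIndex(-2, 5)); // 0
console.log(clampGuideIndex(7, 5)); // 4
console.log(clampGuideIndex(0, 0)); // 0
```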


@@ -0,0 +1,123 @@
import { computed } from "vue";
/**
* Manage onboarding section navigation and section completion state.
*
* @param {object} options - Section refs and step predicates from the flow.
* @returns {object} section navigation state and helpers.
*/
export function useOnboardingNavigation({ sections, activeSectionId, isStepDone, isStepRequired, isStepBlocked }) {
const activeSection = computed(() => sections.value.find((item) => item.id === activeSectionId.value));
const nextSectionItem = computed(() => {
const list = sections.value;
const index = list.findIndex((item) => item.id === activeSectionId.value);
return index >= 0 ? (list[index + 1] ?? null) : null;
});
const hasPrevSection = computed(() => {
const list = sections.value;
const index = list.findIndex((item) => item.id === activeSectionId.value);
return index > 0;
});
const hasNextSection = computed(() => Boolean(nextSectionItem.value));
function selectSection(sectionId) {
if (!sectionId) return;
const section = sections.value.find((item) => item.id === sectionId);
if (!section) return;
if (isSectionLocked(section)) return;
activeSectionId.value = sectionId;
}
function prevSection() {
const list = sections.value;
const index = list.findIndex((item) => item.id === activeSectionId.value);
if (index > 0) {
activeSectionId.value = list[index - 1].id;
}
}
function nextSection() {
const nextItem = nextSectionItem.value;
if (nextItem && !isSectionLocked(nextItem)) {
activeSectionId.value = nextItem.id;
}
}
function stepCardClass(step) {
return {
done: isStepDone(step.id),
blocked: isStepBlocked(step.id),
optional: !isStepRequired(step.id),
};
}
function sectionProgress(section) {
const requiredSteps = section.steps.filter((step) => isStepRequired(step.id));
if (!requiredSteps.length) return "optional";
if (isSectionLocked(section)) return `0/${requiredSteps.length} done`;
const doneCount = requiredSteps.filter((step) => isStepDone(step.id) && !isStepBlocked(step.id)).length;
return `${doneCount}/${requiredSteps.length} done`;
}
function sectionStatusLabel(section) {
if (isSectionDone(section)) return "";
if (isSectionLocked(section)) return "locked";
return "active";
}
function sectionPillClass(section) {
if (isSectionLocked(section)) return "pill-wait";
return "pill-info";
}
function isSectionLocked(section) {
const list = sections.value;
const index = list.findIndex((item) => item.id === section.id);
if (index <= 0) return false;
const previous = list[index - 1];
return !sectionGateComplete(previous);
}
function isSectionDone(section) {
const requiredSteps = section.steps.filter((step) => isStepRequired(step.id));
const stepsToCheck = requiredSteps.length ? requiredSteps : section.steps;
if (!stepsToCheck.length) return false;
return stepsToCheck.every((step) => isStepDone(step.id));
}
function sectionCardClass(section) {
return {
active: section.id === activeSectionId.value,
done: isSectionDone(section),
locked: isSectionLocked(section),
};
}
function sectionGateComplete(section) {
const requiredSteps = section.steps.filter((step) => isStepRequired(step.id));
if (!requiredSteps.length) return true;
return requiredSteps.every((step) => isStepDone(step.id));
}
return {
activeSection,
nextSectionItem,
hasPrevSection,
hasNextSection,
selectSection,
prevSection,
nextSection,
stepCardClass,
sectionProgress,
sectionStatusLabel,
sectionPillClass,
isSectionLocked,
isSectionDone,
sectionCardClass,
sectionGateComplete,
};
}
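The gating rule above (a section locks when the previous section still has incomplete required steps) can be sketched over plain data. The shapes here are illustrative, not the composable's actual inputs:

```javascript
// A section's gate is complete when every required step is done;
// a section with no required steps never blocks the next one.
function gateComplete(section, doneIds) {
  const required = section.steps.filter((step) => step.required);
  if (!required.length) return true;
  return required.every((step) => doneIds.has(step.id));
}

// The first section is never locked; later sections look one back.
function isLocked(sections, index, doneIds) {
  if (index <= 0) return false;
  return !gateComplete(sections[index - 1], doneIds);
}

const demoSections = [
  { id: "a", steps: [{ id: "a1", required: true }] },
  { id: "b", steps: [{ id: "b1", required: true }] },
];
console.log(isLocked(demoSections, 1, new Set())); // true
console.log(isLocked(demoSections, 1, new Set(["a1"]))); // false
```

Because each section only inspects its immediate predecessor, completing section N unlocks N+1 without re-walking the whole list.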


@@ -0,0 +1,454 @@
import { onMounted, reactive, ref, watch } from "vue";
/**
* Build the Request Access page state machine.
*
* WHY: the view combines form submission, verification-link handling,
* provisioning retry, and status polling; keeping that orchestration in a
* composable makes the SFC small and gives the behavior a testable seam.
*
* @param {import("vue-router").RouteLocationNormalizedLoaded} route - active route with optional verification query params.
* @returns {object} reactive state and event handlers used by the view template.
*/
export function useRequestAccessFlow(route) {
/**
* Convert backend request status into copy for the public flow.
*
* @param {string} value - Raw access request status.
* @returns {string} Human-readable status label.
*/
function statusLabel(value) {
const key = (value || "").trim();
if (key === "pending_email_verification") return "confirm email";
if (key === "pending") return "awaiting approval";
if (key === "accounts_building") return "accounts building";
if (key === "awaiting_onboarding") return "awaiting onboarding";
if (key === "ready") return "ready";
if (key === "denied") return "rejected";
return key || "unknown";
}
/**
* Convert backend request status into the matching pill class.
*
* @param {string} value - Raw access request status.
* @returns {string} CSS class for status emphasis.
*/
function statusPillClass(value) {
const key = (value || "").trim();
if (key === "pending_email_verification") return "pill-warn";
if (key === "pending") return "pill-wait";
if (key === "accounts_building") return "pill-warn";
if (key === "awaiting_onboarding") return "pill-ok";
if (key === "ready") return "pill-info";
if (key === "denied") return "pill-bad";
return "pill-warn";
}
const form = reactive({
username: "",
first_name: "",
last_name: "",
email: "",
note: "",
});
const submitting = ref(false);
const submitted = ref(false);
const error = ref("");
const requestCode = ref("");
const copied = ref(false);
const verifying = ref(false);
const mailDomain = import.meta.env?.VITE_MAILU_DOMAIN || "bstein.dev";
const availability = reactive({
label: "",
detail: "",
pillClass: "",
checking: false,
blockSubmit: false,
});
let availabilityTimer = 0;
let availabilityToken = 0;
const statusForm = reactive({
request_code: "",
});
const checking = ref(false);
const status = ref("");
const onboardingUrl = ref("");
const tasks = ref([]);
const blocked = ref(false);
const retrying = ref(false);
const retryMessage = ref("");
const resending = ref(false);
const resendMessage = ref("");
const verifyBanner = ref(null);
function taskPillClass(value) {
const key = (value || "").trim();
if (key === "ok") return "pill-ok";
if (key === "error") return "pill-bad";
if (key === "pending") return "pill-warn";
return "pill-warn";
}
function resetAvailability() {
availability.label = "";
availability.detail = "";
availability.pillClass = "";
availability.blockSubmit = false;
}
/**
* Update username availability UI state from a normalized backend result.
*
* @param {string} state - Availability state key.
* @param {string} detail - Optional human-readable explanation.
* @returns {void}
*/
function setAvailability(state, detail = "") {
availability.detail = detail;
availability.blockSubmit = false;
if (state === "checking") {
availability.label = "checking";
availability.pillClass = "pill-warn";
return;
}
if (state === "available") {
availability.label = "available";
availability.pillClass = "pill-ok";
return;
}
if (state === "invalid") {
availability.label = "invalid";
availability.pillClass = "pill-bad";
availability.blockSubmit = true;
return;
}
if (state === "requested") {
availability.label = "requested";
availability.pillClass = "pill-warn";
availability.blockSubmit = true;
return;
}
if (state === "exists") {
availability.label = "taken";
availability.pillClass = "pill-bad";
availability.blockSubmit = true;
return;
}
if (state === "error") {
availability.label = "error";
availability.pillClass = "pill-warn";
return;
}
resetAvailability();
}
async function checkAvailability(name) {
const token = (availabilityToken += 1);
setAvailability("checking");
availability.checking = true;
try {
const resp = await fetch(`/api/access/request/availability?username=${encodeURIComponent(name)}`, {
headers: { Accept: "application/json" },
cache: "no-store",
});
const data = await resp.json().catch(() => ({}));
if (token !== availabilityToken) return;
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
if (data.available) {
setAvailability("available", "Username is available.");
return;
}
const reason = data.reason || "";
const requestStatus = data.status || "";
if (reason === "invalid") {
setAvailability("invalid", data.detail || "Use 3-32 characters (letters, numbers, . _ -).");
return;
}
if (reason === "exists") {
setAvailability("exists", "Already in use. Choose another name.");
return;
}
if (reason === "requested") {
const label = requestStatus ? `Existing request: ${statusLabel(requestStatus)}` : "Request already exists.";
setAvailability("requested", label);
return;
}
setAvailability("error", "Unable to confirm availability.");
} catch (err) {
if (token !== availabilityToken) return;
setAvailability("error", err.message || "Availability check failed.");
} finally {
if (token === availabilityToken) availability.checking = false;
}
}
async function submit() {
if (submitting.value) return;
error.value = "";
submitting.value = true;
try {
const resp = await fetch("/api/access/request", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({
username: form.username.trim(),
first_name: form.first_name.trim(),
last_name: form.last_name.trim(),
email: form.email.trim(),
note: form.note.trim(),
}),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
submitted.value = true;
requestCode.value = data.request_code || "";
statusForm.request_code = requestCode.value;
status.value = data.status || "pending_email_verification";
} catch (err) {
error.value = err.message || "Failed to submit request";
} finally {
submitting.value = false;
}
}
watch(
() => form.username,
(value) => {
const trimmed = value.trim();
if (availabilityTimer) {
window.clearTimeout(availabilityTimer);
availabilityTimer = 0;
}
availabilityToken += 1;
if (!trimmed) {
resetAvailability();
return;
}
if (trimmed.length < 3 || trimmed.length > 32) {
setAvailability("invalid", "Use 3-32 characters (letters, numbers, . _ -).");
return;
}
if (!/^[a-zA-Z0-9._-]+$/.test(trimmed)) {
setAvailability("invalid", "Use letters, numbers, and . _ - only.");
return;
}
availabilityTimer = window.setTimeout(() => {
checkAvailability(trimmed);
}, 350);
},
);
async function copyRequestCode() {
if (!requestCode.value) return;
try {
if (navigator?.clipboard?.writeText) {
await navigator.clipboard.writeText(requestCode.value);
} else {
const textarea = document.createElement("textarea");
textarea.value = requestCode.value;
textarea.setAttribute("readonly", "");
textarea.style.position = "fixed";
textarea.style.top = "-9999px";
textarea.style.left = "-9999px";
document.body.appendChild(textarea);
textarea.select();
textarea.setSelectionRange(0, textarea.value.length);
document.execCommand("copy");
document.body.removeChild(textarea);
}
copied.value = true;
setTimeout(() => (copied.value = false), 1500);
} catch (err) {
error.value = err?.message || "Failed to copy request code";
}
}
async function checkStatus() {
if (checking.value) return;
error.value = "";
verifyBanner.value = null;
const trimmed = statusForm.request_code.trim();
if (!trimmed) return;
if (!trimmed.includes("~")) {
error.value = "Request code should look like username~XXXXXXXXXX. Copy it from the submit step.";
status.value = "unknown";
onboardingUrl.value = "";
tasks.value = [];
blocked.value = false;
return;
}
checking.value = true;
try {
const resp = await fetch("/api/access/request/status", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: trimmed }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
status.value = data.status || "unknown";
onboardingUrl.value = data.onboarding_url || "";
tasks.value = Array.isArray(data.tasks) ? data.tasks : [];
blocked.value = Boolean(data.blocked);
if (data.email_verified && status.value === "pending") {
verifyBanner.value = {
title: "Email confirmed",
body: "Your request is now waiting for manual approval. Check back here after an admin reviews it.",
};
} else {
verifyBanner.value = null;
}
} catch (err) {
error.value = err.message || "Failed to check status";
status.value = "unknown";
onboardingUrl.value = "";
tasks.value = [];
blocked.value = false;
} finally {
checking.value = false;
}
}
async function retryProvisioning() {
if (retrying.value) return;
retryMessage.value = "";
const code = statusForm.request_code.trim();
if (!code) return;
retrying.value = true;
try {
const retryTasks = tasks.value
.filter((item) => item.status === "error")
.map((item) => item.task)
.filter(Boolean);
const resp = await fetch("/api/access/request/retry", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: code, tasks: retryTasks }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
retryMessage.value = "Retry requested. Check again in a moment.";
await checkStatus();
} catch (err) {
retryMessage.value = err?.message || "Retry request failed.";
} finally {
retrying.value = false;
}
}
async function verifyFromLink(code, token) {
verifying.value = true;
try {
const resp = await fetch("/api/access/request/verify", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: code, token }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
status.value = data.status || status.value;
if (status.value === "pending") {
verifyBanner.value = {
title: "Email confirmed",
body: "Your request is now waiting for manual approval. Check back here after an admin reviews it.",
};
} else {
verifyBanner.value = null;
}
} finally {
verifying.value = false;
}
}
async function resendVerification() {
if (resending.value) return;
const code = statusForm.request_code.trim();
if (!code) return;
resending.value = true;
resendMessage.value = "";
try {
const resp = await fetch("/api/access/request/resend", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: code }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
resendMessage.value = "Verification email sent.";
} catch (err) {
resendMessage.value = err?.message || "Failed to resend verification email.";
} finally {
resending.value = false;
}
}
onMounted(async () => {
const code = typeof route.query.code === "string" ? route.query.code.trim() : "";
const token = typeof route.query.verify === "string" ? route.query.verify.trim() : "";
const verified = typeof route.query.verified === "string" ? route.query.verified.trim() : "";
const verifyError = typeof route.query.verify_error === "string" ? route.query.verify_error.trim() : "";
if (code) {
requestCode.value = code;
statusForm.request_code = code;
submitted.value = true;
}
if (code && token) {
try {
await verifyFromLink(code, token);
} catch (err) {
error.value = err?.message || "Failed to verify email";
}
}
if (code) {
await checkStatus();
}
if (verified && status.value === "pending") {
verifyBanner.value = {
title: "Email confirmed",
body: "Your request is now waiting for manual approval. Check back here after an admin reviews it.",
};
}
if (verifyError) {
error.value = `Email verification failed: ${decodeURIComponent(verifyError)}`;
}
});
return {
statusLabel,
statusPillClass,
form,
submitting,
submitted,
error,
requestCode,
copied,
verifying,
mailDomain,
availability,
statusForm,
checking,
status,
onboardingUrl,
tasks,
blocked,
retrying,
retryMessage,
resending,
resendMessage,
verifyBanner,
taskPillClass,
submit,
copyRequestCode,
checkStatus,
retryProvisioning,
resendVerification,
};
}
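The `availabilityToken` counter above is a stale-response guard: every new lookup bumps a shared token, and a response is only applied if its token still matches, so a slow older request can never overwrite a newer result. A minimal sketch of that pattern in isolation (`lookup`, `fakeLookup`, and `lastResult` are illustrative names):

```javascript
// Shared monotonic token and the last accepted result.
let token = 0;
let lastResult = "";

// Start a lookup; drop the response if a newer lookup has started since.
async function lookup(name, fakeLookup) {
  const mine = (token += 1);
  const result = await fakeLookup(name);
  if (mine !== token) return; // superseded by a newer lookup
  lastResult = result;
}

// Two overlapping lookups where the first resolves last: the stale
// response is discarded and the newer result wins.
async function demo() {
  let resolveSlow;
  const slow = new Promise((resolve) => (resolveSlow = resolve));
  const first = lookup("alice", () => slow); // starts first, finishes last
  const second = lookup("alicia", async () => "alicia:available");
  await second;
  resolveSlow("alice:available"); // arrives late; its token no longer matches
  await first;
  return lastResult;
}
demo().then((result) => console.log(result)); // "alicia:available"
```

The composable pairs the same check with the debounce timer in the username watcher, so only the latest keystroke's lookup ever touches the availability pill.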


@@ -0,0 +1,348 @@
.page {
max-width: 1200px;
margin: 0 auto;
padding: 32px 22px 72px;
}
.hero {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 18px;
margin-bottom: 12px;
}
.hero-actions {
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: flex-end;
gap: 28px;
}
.divider {
height: 1px;
background: rgba(255, 255, 255, 0.08);
margin: 18px 0;
}
.subhead h3 {
margin: 0;
font-size: 16px;
}
.eyebrow {
text-transform: uppercase;
letter-spacing: 0.08em;
color: var(--text-muted);
margin: 0 0 6px;
font-size: 13px;
}
h1 {
margin: 0 0 6px;
font-size: 32px;
}
.lede {
margin: 0;
color: var(--text-muted);
max-width: 640px;
}
.account-grid {
display: grid;
grid-template-columns: minmax(0, 1fr) minmax(0, 1fr);
gap: 12px;
margin-top: 12px;
align-items: stretch;
}
.account-column,
.account-stack {
display: grid;
gap: 12px;
align-content: start;
}
.account-column .module,
.account-stack .module {
min-height: 0;
display: flex;
flex-direction: column;
}
.module {
padding: 18px;
}
.module-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 10px;
}
.muted {
color: var(--text-muted);
margin: 10px 0 0;
}
.kv {
margin-top: 12px;
border: 1px solid var(--card-border);
border-radius: 12px;
overflow: hidden;
}
.row {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
padding: 10px 12px;
background: rgba(255, 255, 255, 0.02);
border-top: 1px solid rgba(255, 255, 255, 0.06);
}
.row:first-child {
border-top: none;
}
.k {
color: var(--text-muted);
}
.v {
color: var(--text-strong);
}
.link {
color: var(--accent-cyan);
text-decoration: none;
}
.actions {
margin-top: 12px;
display: flex;
gap: 10px;
}
button.primary {
background: linear-gradient(90deg, #4f8bff, #7dd0ff);
color: #0b1222;
padding: 10px 14px;
border: none;
border-radius: 10px;
cursor: pointer;
font-weight: 700;
}
.secret-box {
margin-top: 12px;
border-radius: 12px;
border: 1px solid rgba(255, 255, 255, 0.14);
background: rgba(255, 255, 255, 0.03);
padding: 12px;
}
.secret-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 10px;
margin-bottom: 8px;
}
.secret-actions {
display: inline-flex;
align-items: center;
gap: 8px;
}
.secret {
word-break: break-word;
color: var(--text-strong);
}
.hint {
margin-top: 6px;
color: var(--text-muted);
font-size: 12px;
}
.jellyfin-detail {
margin-top: auto;
}
.copy {
background: transparent;
border: 1px solid rgba(255, 255, 255, 0.14);
color: var(--text-primary);
border-radius: 10px;
padding: 6px 10px;
cursor: pointer;
display: inline-flex;
align-items: center;
gap: 8px;
}
.copied {
font-size: 12px;
color: rgba(120, 255, 160, 0.9);
}
.error-box {
margin-top: 12px;
border-radius: 12px;
border: 1px solid rgba(255, 87, 87, 0.5);
background: rgba(255, 87, 87, 0.06);
padding: 10px 12px;
}
@media (max-width: 820px) {
.account-grid {
grid-template-columns: 1fr;
}
.account-stack .module {
flex: none;
}
.account-column .module {
flex: none;
}
}
@media (max-width: 720px) {
.page {
padding: 24px 16px 56px;
}
.hero {
flex-direction: column;
align-items: flex-start;
}
.hero-actions {
width: 100%;
justify-content: flex-start;
gap: 12px;
}
.row {
flex-direction: column;
align-items: flex-start;
}
.actions {
flex-direction: column;
align-items: stretch;
}
.secret-head {
flex-direction: column;
align-items: flex-start;
}
.req-row {
grid-template-columns: 1fr;
}
}
.admin {
margin-top: 12px;
}
.requests {
margin-top: 12px;
display: grid;
gap: 10px;
}
.req-row {
display: grid;
grid-template-columns: minmax(220px, 1.2fr) minmax(200px, 1fr) minmax(200px, 1fr) minmax(140px, 0.6fr);
gap: 16px;
align-items: start;
border: 1px solid rgba(255, 255, 255, 0.08);
background: rgba(0, 0, 0, 0.18);
border-radius: 14px;
padding: 10px 12px;
}
.req-summary {
display: grid;
gap: 6px;
}
.req-label {
color: var(--text-muted);
font-size: 11px;
letter-spacing: 0.06em;
text-transform: uppercase;
}
.req-flags {
display: grid;
gap: 8px;
align-content: start;
}
.req-flag-grid {
display: flex;
flex-wrap: wrap;
gap: 6px;
}
.note {
color: var(--text-muted);
}
.flag-pill {
display: inline-flex;
align-items: center;
gap: 6px;
padding: 4px 8px;
border-radius: 999px;
border: 1px solid rgba(255, 255, 255, 0.12);
background: rgba(0, 0, 0, 0.2);
font-size: 12px;
}
.flag-pill input {
width: 14px;
height: 14px;
}
.req-note .input {
width: 100%;
border-radius: 10px;
border: 1px solid rgba(255, 255, 255, 0.14);
background: rgba(0, 0, 0, 0.22);
color: var(--text-primary);
padding: 8px 10px;
}
.req-actions {
display: grid;
gap: 8px;
align-content: start;
}
.req-action-stack {
display: grid;
gap: 8px;
}
@media (max-width: 900px) {
.req-row {
grid-template-columns: 1fr;
gap: 12px;
align-items: start;
}
.note {
white-space: normal;
}
}


@@ -0,0 +1,146 @@
.guide-details {
margin-top: 10px;
}
.guide-summary {
cursor: pointer;
display: inline-flex;
align-items: center;
gap: 10px;
padding: 6px 12px;
border-radius: 999px;
border: 1px solid rgba(92, 214, 167, 0.5);
background: rgba(92, 214, 167, 0.15);
color: var(--text-strong);
font-weight: 600;
}
.guide-summary::after {
content: "Tap to open";
font-size: 12px;
color: var(--text-muted);
}
.guide-details[open] .guide-summary::after {
content: "Tap to close";
}
.guide-summary::-webkit-details-marker {
display: none;
}
.guide-groups {
display: grid;
gap: 12px;
margin-top: 8px;
}
.guide-title {
margin: 0 0 6px;
}
.guide-images {
display: grid;
gap: 10px;
}
.guide-shot {
border-radius: 10px;
overflow: hidden;
border: 1px solid rgba(255, 255, 255, 0.1);
background: rgba(0, 0, 0, 0.2);
padding: 0;
cursor: zoom-in;
display: flex;
flex-direction: column;
gap: 6px;
}
.guide-shot figcaption {
margin: 0;
padding: 10px 12px 6px;
font-size: 18px;
font-weight: 600;
color: var(--text-strong);
}
.guide-shot img {
display: block;
border-radius: 10px;
max-width: 100%;
width: auto;
height: auto;
max-height: min(60vh, 520px);
margin: 0 auto 10px;
}
.guide-pagination {
display: flex;
align-items: center;
justify-content: space-between;
gap: 8px;
flex-wrap: wrap;
}
.guide-dots {
display: flex;
gap: 6px;
flex-wrap: wrap;
}
.guide-dot {
border: 1px solid rgba(255, 255, 255, 0.18);
background: rgba(0, 0, 0, 0.3);
color: var(--text-muted);
border-radius: 999px;
padding: 4px 8px;
cursor: pointer;
font-size: 12px;
}
.guide-dot.active {
border-color: rgba(120, 180, 255, 0.5);
color: var(--text-strong);
}
.lightbox {
position: fixed;
inset: 0;
background: rgba(6, 8, 12, 0.82);
display: flex;
align-items: center;
justify-content: center;
padding: 28px;
z-index: 2000;
}
.lightbox-card {
width: min(1400px, 96vw);
max-height: 94vh;
background: rgba(10, 14, 24, 0.96);
border: 1px solid rgba(255, 255, 255, 0.12);
border-radius: 16px;
padding: 16px;
display: flex;
flex-direction: column;
gap: 12px;
}
.lightbox-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
}
.lightbox-label {
color: var(--text-muted);
}
.lightbox-card img {
width: 100%;
max-height: 82vh;
object-fit: contain;
border-radius: 12px;
background: rgba(0, 0, 0, 0.35);
}


@@ -0,0 +1,500 @@
.page {
max-width: 1080px;
margin: 0 auto;
padding: 32px 22px 72px;
}
.hero {
margin-bottom: 12px;
padding: 18px;
}
.eyebrow {
text-transform: uppercase;
letter-spacing: 0.08em;
color: var(--text-muted);
margin: 0 0 6px;
font-size: 13px;
}
h1 {
margin: 0 0 6px;
font-size: 32px;
}
.lede {
margin: 0;
color: var(--text-muted);
max-width: 640px;
}
.module {
padding: 18px;
}
.module-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 18px;
}
.status-form {
display: flex;
gap: 10px;
margin-top: 12px;
}
.status-meta {
margin-top: 12px;
padding: 12px;
border-radius: 14px;
border: 1px solid rgba(255, 255, 255, 0.1);
background: rgba(0, 0, 0, 0.18);
}
.meta-row {
display: flex;
align-items: center;
justify-content: space-between;
gap: 14px;
}
.meta-row .label {
color: var(--text-muted);
}
.input {
flex: 1;
padding: 10px 12px;
border-radius: 10px;
border: 1px solid rgba(255, 255, 255, 0.14);
background: rgba(0, 0, 0, 0.25);
color: var(--text-primary);
}
button.primary {
background: linear-gradient(90deg, #4f8bff, #7dd0ff);
color: #0b1222;
padding: 10px 14px;
border: none;
border-radius: 10px;
cursor: pointer;
font-weight: 700;
}
button.secondary {
background: rgba(255, 255, 255, 0.08);
border: 1px solid rgba(255, 255, 255, 0.16);
color: var(--text-primary);
padding: 8px 12px;
border-radius: 10px;
cursor: pointer;
font-weight: 600;
}
button.primary:disabled,
button.secondary:disabled,
button.copy:disabled {
opacity: 0.45;
cursor: not-allowed;
}
.steps {
margin-top: 16px;
}
.onboarding-head {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 14px;
margin-bottom: 8px;
}
.section-stepper {
margin: 16px 0 18px;
list-style: none;
display: grid;
grid-template-columns: repeat(4, minmax(0, 1fr));
gap: 10px;
padding: 0;
}
.stepper-item {
min-width: 0;
}
.stepper-card {
width: 100%;
text-align: left;
padding: 12px 12px 12px 36px;
border-radius: 12px;
border: 1px solid rgba(255, 255, 255, 0.12);
background: rgba(0, 0, 0, 0.2);
color: var(--text-primary);
display: flex;
gap: 8px;
position: relative;
}
.stepper-card::after {
display: none;
}
.stepper-dot {
position: absolute;
left: 12px;
top: 12px;
width: 10px;
height: 10px;
border-radius: 999px;
border: 2px solid rgba(255, 255, 255, 0.22);
background: rgba(255, 255, 255, 0.16);
z-index: 1;
}
.stepper-body {
display: flex;
flex-direction: column;
gap: 6px;
min-width: 0;
}
.stepper-title {
font-weight: 700;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.stepper-meta {
display: flex;
align-items: center;
justify-content: center;
gap: 6px 10px;
flex-wrap: wrap;
color: var(--text-muted);
width: 100%;
}
.stepper-meta .pill {
padding: 4px 6px;
font-size: 11px;
white-space: nowrap;
}
.pill-compact {
padding: 4px 6px;
font-size: 11px;
white-space: nowrap;
max-width: 100%;
overflow: hidden;
text-overflow: ellipsis;
}
.stepper-card.active {
border-color: rgba(125, 208, 255, 0.5);
box-shadow: 0 0 0 1px rgba(79, 139, 255, 0.3);
}
.stepper-card.active .stepper-dot {
background: rgba(125, 208, 255, 0.8);
border-color: rgba(125, 208, 255, 0.85);
}
.stepper-card.done {
border-color: rgba(92, 214, 167, 0.35);
background: rgba(92, 214, 167, 0.08);
}
.stepper-card.done .stepper-dot {
background: rgba(92, 214, 167, 0.9);
border-color: rgba(92, 214, 167, 0.95);
}
.stepper-card.locked {
opacity: 0.6;
}
@media (max-width: 1200px) {
.section-stepper {
grid-template-columns: repeat(3, minmax(0, 1fr));
}
}
@media (max-width: 860px) {
.section-stepper {
grid-template-columns: repeat(2, minmax(0, 1fr));
}
}
@media (max-width: 520px) {
.section-stepper {
grid-template-columns: 1fr;
}
}
.credential-card {
margin-top: 14px;
padding: 14px;
border-radius: 14px;
border: 1px solid rgba(255, 255, 255, 0.12);
background: rgba(0, 0, 0, 0.2);
}
.credential-head {
margin-bottom: 10px;
}
.credential-head h4 {
margin: 0 0 4px;
font-size: 18px;
}
.credential-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(220px, 1fr));
gap: 14px;
}
.credential-field .label {
display: block;
margin-bottom: 6px;
color: var(--text-muted);
}
.credential-field .input[readonly] {
opacity: 0.8;
}
.password-row {
display: flex;
gap: 8px;
align-items: center;
}
.password-row .input {
flex: 1;
}
.section-shell {
margin-top: 16px;
padding-top: 12px;
border-top: 1px solid rgba(255, 255, 255, 0.08);
}
.section-header {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 12px;
margin-bottom: 12px;
}
.section-actions {
display: flex;
align-items: center;
justify-content: space-between;
gap: 8px;
width: 100%;
margin-top: 12px;
}
.step-grid {
display: grid;
gap: 12px;
}
.step-card {
border: 1px solid rgba(255, 255, 255, 0.08);
border-radius: 14px;
padding: 12px 12px 10px;
background: rgba(255, 255, 255, 0.02);
display: flex;
flex-direction: column;
}
.step-note {
margin-top: 8px;
padding: 8px 10px;
border-radius: 10px;
border: 1px solid rgba(0, 229, 197, 0.35);
background: rgba(0, 229, 197, 0.08);
color: var(--text-strong);
font-weight: 600;
}
.step-card.blocked {
opacity: 0.55;
pointer-events: none;
}
.step-card.done {
border-color: rgba(92, 214, 167, 0.35);
background: rgba(92, 214, 167, 0.05);
}
.step-head {
display: flex;
align-items: center;
gap: 12px;
}
.auto-pill {
margin-left: auto;
font-size: 12px;
padding: 3px 10px;
border-radius: 999px;
}
.step-title {
font-weight: 650;
color: var(--text-strong);
}
.step-label {
display: flex;
align-items: center;
gap: 10px;
}
.step-label input {
width: 18px;
height: 18px;
}
.step-bullets {
margin: 8px 0 0;
padding-left: 18px;
color: var(--text-muted);
}
.step-links {
margin-top: 10px;
display: flex;
flex-wrap: wrap;
justify-content: center;
gap: 10px;
}
.step-links a {
color: rgba(92, 214, 167, 0.95);
text-decoration: none;
font-weight: 600;
border-radius: 999px;
padding: 6px 14px;
border: 1px solid rgba(92, 214, 167, 0.35);
background: rgba(92, 214, 167, 0.12);
display: inline-flex;
align-items: center;
gap: 6px;
}
.step-links a:hover {
text-decoration: none;
background: rgba(92, 214, 167, 0.2);
}
.step-actions {
display: flex;
align-items: center;
gap: 12px;
justify-content: flex-end;
margin-top: auto;
padding-top: 10px;
}
.recovery-verify {
display: flex;
gap: 10px;
margin-top: 10px;
align-items: stretch;
}
.recovery-verify .input {
flex: 1;
}
.ready-box {
margin-top: 18px;
padding: 14px;
border-radius: 14px;
border: 1px solid rgba(92, 214, 167, 0.3);
background: rgba(92, 214, 167, 0.08);
}
.task-list {
margin: 0;
padding: 0;
list-style: none;
display: grid;
gap: 8px;
}
.task-row {
display: flex;
align-items: center;
gap: 12px;
}
.task-name {
min-width: 180px;
}
.task-detail {
color: var(--text-muted);
}
.error-box {
margin-top: 12px;
padding: 12px;
border-radius: 14px;
border: 1px solid rgba(255, 120, 120, 0.4);
background: rgba(255, 70, 70, 0.1);
}
.tooltip-wrap {
display: inline-flex;
}
@media (max-width: 720px) {
.page {
padding: 24px 16px 56px;
}
.status-form {
flex-direction: column;
}
.onboarding-head {
flex-direction: column;
align-items: flex-start;
}
.credential-grid {
grid-template-columns: 1fr;
}
.step-head {
flex-direction: column;
align-items: flex-start;
}
.auto-pill {
margin-left: 0;
}
.section-actions {
flex-direction: column;
align-items: stretch;
width: 100%;
justify-content: flex-start;
}
.step-actions {
justify-content: flex-start;
}
.password-row {
flex-direction: column;
align-items: stretch;
}
}

View File

@@ -0,0 +1,254 @@
.page {
max-width: 960px;
margin: 0 auto;
padding: 32px 22px 72px;
}
.hero {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 18px;
margin-bottom: 12px;
}
.eyebrow {
text-transform: uppercase;
letter-spacing: 0.08em;
color: var(--text-muted);
margin: 0 0 6px;
font-size: 13px;
}
h1 {
margin: 0 0 6px;
font-size: 32px;
}
.lede {
margin: 0;
color: var(--text-muted);
max-width: 640px;
}
.module {
padding: 18px;
}
.status-module {
margin-top: 14px;
}
.module-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
}
.muted {
color: var(--text-muted);
margin: 10px 0 0;
}
.mono {
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
}
.form {
margin-top: 14px;
display: grid;
gap: 12px;
}
.field {
display: grid;
gap: 6px;
}
.availability {
display: flex;
align-items: center;
gap: 8px;
}
.label {
color: var(--text-muted);
font-size: 12px;
letter-spacing: 0.04em;
text-transform: uppercase;
}
.input,
.textarea {
width: 100%;
border-radius: 12px;
border: 1px solid rgba(255, 255, 255, 0.1);
background: rgba(0, 0, 0, 0.22);
color: var(--text);
padding: 10px 12px;
outline: none;
}
.textarea {
resize: vertical;
}
.actions {
display: flex;
align-items: center;
gap: 12px;
margin-top: 6px;
}
button.primary,
a.primary {
background: linear-gradient(90deg, #4f8bff, #7dd0ff);
color: #0b1222;
padding: 10px 14px;
border: none;
border-radius: 10px;
cursor: pointer;
font-weight: 700;
text-decoration: none;
display: inline-flex;
align-items: center;
justify-content: center;
}
button.primary:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.onboarding-actions {
margin-top: 18px;
flex-direction: column;
align-items: stretch;
padding: 14px;
border-radius: 14px;
border: 1px solid rgba(120, 180, 255, 0.2);
background: rgba(0, 0, 0, 0.24);
}
.onboarding-copy {
display: grid;
gap: 6px;
}
.onboarding-cta {
text-align: center;
width: 100%;
}
.status-form {
display: flex;
gap: 10px;
margin-top: 12px;
}
.hint {
color: var(--text-muted);
font-size: 12px;
}
.error-box {
margin-top: 14px;
border: 1px solid rgba(255, 120, 120, 0.35);
background: rgba(255, 64, 64, 0.12);
border-radius: 14px;
padding: 12px;
}
.success-box {
margin-top: 14px;
border: 1px solid rgba(120, 255, 160, 0.25);
background: rgba(48, 255, 160, 0.1);
border-radius: 14px;
padding: 12px;
}
.request-code-row {
margin-top: 12px;
display: flex;
flex-direction: column;
gap: 6px;
}
.copy {
display: inline-flex;
align-items: center;
gap: 10px;
border-radius: 12px;
border: 1px solid rgba(255, 255, 255, 0.14);
background: rgba(0, 0, 0, 0.22);
color: var(--text);
padding: 10px 12px;
cursor: pointer;
}
.copied {
font-size: 12px;
color: rgba(120, 255, 160, 0.9);
}
.pill {
padding: 6px 10px;
border-radius: 999px;
font-size: 12px;
}
.verify-box {
margin-top: 12px;
padding: 12px 14px;
border: 1px solid rgba(120, 200, 255, 0.35);
border-radius: 14px;
background: rgba(48, 120, 200, 0.16);
display: grid;
gap: 4px;
}
.verify-title {
font-size: 12px;
text-transform: uppercase;
letter-spacing: 0.08em;
color: rgba(150, 220, 255, 0.95);
}
.verify-body {
font-size: 13px;
color: var(--text);
}
.task-box {
margin-top: 14px;
padding: 14px;
border: 1px solid rgba(255, 255, 255, 0.08);
border-radius: 14px;
background: rgba(0, 0, 0, 0.25);
}
.task-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 10px;
}
.task-row {
display: grid;
gap: 6px;
grid-template-columns: 1fr auto;
align-items: center;
}
.task-name {
color: var(--text);
}
.task-detail {
grid-column: 1 / -1;
color: var(--text-muted);
font-size: 12px;
}

View File

@@ -118,7 +118,7 @@
<div class="actions">
<button class="primary" type="button" :disabled="mailu.rotating" @click="rotateMailu">
{{ mailu.rotating ? "Rotating..." : "Rotate mail app password" }}
{{ mailu.rotating ? "Resetting..." : "Reset mail app password" }}
</button>
</div>
@@ -192,23 +192,23 @@
</div>
<div class="account-stack">
<div class="card module">
<div class="card module" :style="{ order: vaultwardenOrder }">
<div class="module-head">
<h2>Vaultwarden</h2>
<span
class="pill mono"
:class="
vaultwarden.status === 'ready' || vaultwarden.status === 'already_present'
vaultwardenReady
? 'pill-ok'
: vaultwarden.status === 'unavailable' || vaultwarden.status === 'error'
? 'pill-bad'
: ''
"
>
{{ vaultwarden.status }}
{{ vaultwardenDisplayStatus }}
</span>
</div>
<p v-if="vaultwarden.status !== 'ready' && vaultwarden.status !== 'already_present'" class="muted">
<p v-if="!vaultwardenReady" class="muted">
Password manager for Atlas accounts. Store your Element recovery key here. Signups are admin-provisioned.
</p>
<div class="kv">
@@ -233,7 +233,7 @@
</div>
</div>
<div class="card module">
<div class="card module" :style="{ order: 1 }">
<div class="module-head">
<h2>Wger</h2>
<span
@@ -298,7 +298,7 @@
</div>
</div>
<div class="card module">
<div class="card module" :style="{ order: 2 }">
<div class="module-head">
<h2>Jellyfin</h2>
<span
@@ -418,757 +418,34 @@
</template>
<script setup>
import { onMounted, reactive, ref, watch } from "vue";
import { auth, authFetch, login } from "@/auth";
import { useAccountDashboard } from "../account/useAccountDashboard";
const mailu = reactive({
status: "loading",
imap: "mail.bstein.dev:993 (TLS)",
smtp: "mail.bstein.dev:587 (STARTTLS)",
username: "",
currentPassword: "",
revealPassword: false,
rotating: false,
newPassword: "",
error: "",
});
const jellyfin = reactive({
status: "loading",
username: "",
syncStatus: "",
syncDetail: "",
error: "",
});
const vaultwarden = reactive({
status: "loading",
username: "",
syncedAt: "",
error: "",
});
const nextcloudMail = reactive({
status: "loading",
primaryEmail: "",
accountCount: "",
syncedAt: "",
syncing: false,
error: "",
});
const wger = reactive({
status: "loading",
username: "",
password: "",
passwordUpdatedAt: "",
revealPassword: false,
resetting: false,
error: "",
});
const firefly = reactive({
status: "loading",
username: "",
password: "",
passwordUpdatedAt: "",
revealPassword: false,
resetting: false,
error: "",
});
const admin = reactive({
enabled: false,
loading: false,
requests: [],
error: "",
acting: {},
flags: [],
flagsLoading: false,
notes: {},
selectedFlags: {},
});
const onboardingUrl = ref("/onboarding");
const doLogin = () => login("/account");
const copied = reactive({});
onMounted(() => {
if (auth.ready && auth.authenticated) {
refreshOverview();
refreshAdminRequests();
refreshAdminFlags();
} else {
mailu.status = "login required";
nextcloudMail.status = "login required";
jellyfin.status = "login required";
vaultwarden.status = "login required";
wger.status = "login required";
firefly.status = "login required";
}
});
watch(
() => [auth.ready, auth.authenticated],
([ready, authenticated]) => {
if (!ready) return;
if (!authenticated) {
mailu.status = "login required";
nextcloudMail.status = "login required";
jellyfin.status = "login required";
vaultwarden.status = "login required";
wger.status = "login required";
firefly.status = "login required";
onboardingUrl.value = "/onboarding";
admin.enabled = false;
admin.requests = [];
admin.flags = [];
return;
}
refreshOverview();
refreshAdminRequests();
refreshAdminFlags();
},
{ immediate: false },
);
async function refreshOverview() {
mailu.error = "";
jellyfin.error = "";
vaultwarden.error = "";
nextcloudMail.error = "";
wger.error = "";
firefly.error = "";
try {
const resp = await authFetch("/api/account/overview", {
headers: { Accept: "application/json" },
cache: "no-store",
});
if (!resp.ok) {
const data = await resp.json().catch(() => ({}));
throw new Error(data?.error || `status ${resp.status}`);
}
const data = await resp.json();
mailu.status = data.mailu?.status || "ready";
mailu.username = data.mailu?.username || auth.email || auth.username;
mailu.currentPassword = data.mailu?.app_password || "";
nextcloudMail.status = data.nextcloud_mail?.status || "unknown";
nextcloudMail.primaryEmail = data.nextcloud_mail?.primary_email || "";
nextcloudMail.accountCount = data.nextcloud_mail?.account_count || "";
nextcloudMail.syncedAt = data.nextcloud_mail?.synced_at || "";
wger.status = data.wger?.status || "unknown";
wger.username = data.wger?.username || mailu.username || auth.username;
wger.password = data.wger?.password || "";
wger.passwordUpdatedAt = data.wger?.password_updated_at || "";
firefly.status = data.firefly?.status || "unknown";
firefly.username = data.firefly?.username || mailu.username || auth.username;
firefly.password = data.firefly?.password || "";
firefly.passwordUpdatedAt = data.firefly?.password_updated_at || "";
vaultwarden.status = data.vaultwarden?.status || "unknown";
vaultwarden.username = data.vaultwarden?.username || mailu.username || auth.username;
vaultwarden.syncedAt = data.vaultwarden?.synced_at || "";
jellyfin.status = data.jellyfin?.status || "ready";
jellyfin.username = data.jellyfin?.username || auth.username;
jellyfin.syncStatus = data.jellyfin?.sync_status || "";
jellyfin.syncDetail = data.jellyfin?.sync_detail || "";
onboardingUrl.value = data.onboarding_url || "/onboarding";
} catch (err) {
mailu.status = "unavailable";
nextcloudMail.status = "unavailable";
wger.status = "unavailable";
firefly.status = "unavailable";
vaultwarden.status = "unavailable";
jellyfin.status = "unavailable";
jellyfin.syncStatus = "";
jellyfin.syncDetail = "";
onboardingUrl.value = "/onboarding";
const message = err?.message ? `Failed to load account status (${err.message})` : "Failed to load account status.";
mailu.error = message;
nextcloudMail.error = message;
wger.error = message;
firefly.error = message;
vaultwarden.error = message;
jellyfin.error = message;
}
}
async function refreshAdminRequests() {
if (!auth.authenticated) {
admin.enabled = false;
admin.requests = [];
return;
}
admin.error = "";
admin.loading = true;
try {
const resp = await authFetch("/api/admin/access/requests", {
headers: { Accept: "application/json" },
cache: "no-store",
});
if (resp.status === 403) {
admin.enabled = false;
admin.requests = [];
return;
}
if (!resp.ok) throw new Error(`status ${resp.status}`);
const data = await resp.json();
admin.enabled = true;
admin.requests = Array.isArray(data.requests) ? data.requests : [];
for (const req of admin.requests) {
if (!req?.username) continue;
if (!(req.username in admin.notes)) admin.notes[req.username] = "";
if (!(req.username in admin.selectedFlags)) admin.selectedFlags[req.username] = [];
}
} catch (err) {
admin.enabled = false;
admin.requests = [];
admin.error = err.message || "Failed to load access requests.";
} finally {
admin.loading = false;
}
}
async function refreshAdminFlags() {
if (!auth.authenticated) {
admin.flags = [];
admin.flagsLoading = false;
return;
}
admin.flagsLoading = true;
try {
const resp = await authFetch("/api/admin/access/flags", {
headers: { Accept: "application/json" },
cache: "no-store",
});
if (resp.status === 403) {
admin.flags = [];
return;
}
if (!resp.ok) throw new Error(`status ${resp.status}`);
const data = await resp.json();
admin.flags = Array.isArray(data.flags) ? data.flags : [];
} catch (err) {
admin.flags = [];
admin.error = admin.error || err.message || "Failed to load access flags.";
} finally {
admin.flagsLoading = false;
}
}
function hasFlag(username, flag) {
const selected = admin.selectedFlags[username];
return Array.isArray(selected) && selected.includes(flag);
}
function formatName(req) {
if (!req) return "unknown";
const parts = [];
if (req.first_name && String(req.first_name).trim()) {
parts.push(String(req.first_name).trim());
}
if (req.last_name && String(req.last_name).trim()) {
parts.push(String(req.last_name).trim());
}
return parts.length ? parts.join(" ") : "unknown";
}
function toggleFlag(username, flag, event) {
const checked = Boolean(event?.target?.checked);
const selected = Array.isArray(admin.selectedFlags[username]) ? [...admin.selectedFlags[username]] : [];
const next = checked ? Array.from(new Set([...selected, flag])) : selected.filter((item) => item !== flag);
admin.selectedFlags[username] = next;
}
async function rotateMailu() {
mailu.error = "";
mailu.newPassword = "";
mailu.rotating = true;
try {
const resp = await authFetch("/api/account/mailu/rotate", { method: "POST" });
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
mailu.newPassword = data.password || "";
if (mailu.newPassword) {
mailu.currentPassword = mailu.newPassword;
mailu.revealPassword = true;
}
const syncEnabled = Boolean(data.sync_enabled);
const syncOk = Boolean(data.sync_ok);
const syncError = data.sync_error || "";
if (!syncEnabled) {
mailu.status = "updated";
mailu.error = "Mail sync is not configured; the new password may not take effect until an admin sync runs.";
} else if (!syncOk) {
mailu.status = "sync pending";
mailu.error = syncError || "Mail sync did not confirm success yet. Try again in a moment.";
} else {
mailu.status = "updated";
}
await refreshOverview();
} catch (err) {
mailu.error = err.message || "Rotation failed";
} finally {
mailu.rotating = false;
}
}
async function resetWger() {
wger.error = "";
wger.resetting = true;
try {
const resp = await authFetch("/api/account/wger/reset", { method: "POST" });
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
if (data.password) {
wger.password = data.password;
wger.revealPassword = true;
}
await refreshOverview();
} catch (err) {
wger.error = err.message || "Reset failed";
} finally {
wger.resetting = false;
}
}
async function resetFirefly() {
firefly.error = "";
firefly.resetting = true;
try {
const resp = await authFetch("/api/account/firefly/reset", { method: "POST" });
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
if (data.password) {
firefly.password = data.password;
firefly.revealPassword = true;
}
await refreshOverview();
} catch (err) {
firefly.error = err.message || "Reset failed";
} finally {
firefly.resetting = false;
}
}
async function syncNextcloudMail() {
nextcloudMail.error = "";
nextcloudMail.syncing = true;
try {
const resp = await authFetch("/api/account/nextcloud/mail/sync", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ wait: true }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
await refreshOverview();
} catch (err) {
nextcloudMail.error = err.message || "Sync failed";
} finally {
nextcloudMail.syncing = false;
}
}
function fallbackCopy(text) {
const textarea = document.createElement("textarea");
textarea.value = text;
textarea.setAttribute("readonly", "");
textarea.style.position = "fixed";
textarea.style.top = "-9999px";
textarea.style.left = "-9999px";
document.body.appendChild(textarea);
textarea.select();
textarea.setSelectionRange(0, textarea.value.length);
document.execCommand("copy");
document.body.removeChild(textarea);
}
async function approve(username) {
admin.error = "";
admin.acting[username] = true;
try {
const flags = Array.isArray(admin.selectedFlags[username]) ? admin.selectedFlags[username] : [];
const note = (admin.notes[username] || "").trim();
const payload = {};
if (flags.length) payload.flags = flags;
if (note) payload.note = note;
const resp = await authFetch(`/api/admin/access/requests/${encodeURIComponent(username)}/approve`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
if (!resp.ok) {
const data = await resp.json().catch(() => ({}));
throw new Error(data.error || `status ${resp.status}`);
}
await refreshAdminRequests();
} catch (err) {
admin.error = err.message || "Approve failed";
} finally {
admin.acting[username] = false;
}
}
async function deny(username) {
admin.error = "";
admin.acting[username] = true;
try {
const note = (admin.notes[username] || "").trim();
const payload = note ? { note } : {};
const resp = await authFetch(`/api/admin/access/requests/${encodeURIComponent(username)}/deny`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
if (!resp.ok) {
const data = await resp.json().catch(() => ({}));
throw new Error(data.error || `status ${resp.status}`);
}
await refreshAdminRequests();
} catch (err) {
admin.error = err.message || "Deny failed";
} finally {
admin.acting[username] = false;
}
}
async function copy(key, text) {
if (!text) return;
try {
if (navigator?.clipboard?.writeText) {
await navigator.clipboard.writeText(text);
} else {
fallbackCopy(text);
}
copied[key] = true;
window.setTimeout(() => {
copied[key] = false;
}, 1500);
} catch (err) {
try {
fallbackCopy(text);
copied[key] = true;
window.setTimeout(() => {
copied[key] = false;
}, 1500);
} catch {
// ignore
}
}
}
const {
auth,
mailu,
jellyfin,
vaultwarden,
nextcloudMail,
wger,
firefly,
admin,
onboardingUrl,
vaultwardenReady,
vaultwardenDisplayStatus,
vaultwardenOrder,
doLogin,
copied,
hasFlag,
formatName,
toggleFlag,
rotateMailu,
resetWger,
resetFirefly,
syncNextcloudMail,
approve,
deny,
copy,
} = useAccountDashboard();
</script>
<style scoped>
.page {
max-width: 1200px;
margin: 0 auto;
padding: 32px 22px 72px;
}
.hero {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 18px;
margin-bottom: 12px;
}
.hero-actions {
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: flex-end;
gap: 28px;
}
.divider {
height: 1px;
background: rgba(255, 255, 255, 0.08);
margin: 18px 0;
}
.subhead h3 {
margin: 0;
font-size: 16px;
}
.eyebrow {
text-transform: uppercase;
letter-spacing: 0.08em;
color: var(--text-muted);
margin: 0 0 6px;
font-size: 13px;
}
h1 {
margin: 0 0 6px;
font-size: 32px;
}
.lede {
margin: 0;
color: var(--text-muted);
max-width: 640px;
}
.account-grid {
display: grid;
grid-template-columns: minmax(0, 1fr) minmax(0, 1fr);
gap: 12px;
margin-top: 12px;
align-items: stretch;
}
.account-column,
.account-stack {
display: grid;
gap: 12px;
align-content: start;
}
.account-column .module,
.account-stack .module {
min-height: 0;
display: flex;
flex-direction: column;
}
.module {
padding: 18px;
}
.module-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 10px;
}
.muted {
color: var(--text-muted);
margin: 10px 0 0;
}
.kv {
margin-top: 12px;
border: 1px solid var(--card-border);
border-radius: 12px;
overflow: hidden;
}
.row {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
padding: 10px 12px;
background: rgba(255, 255, 255, 0.02);
border-top: 1px solid rgba(255, 255, 255, 0.06);
}
.row:first-child {
border-top: none;
}
.k {
color: var(--text-muted);
}
.v {
color: var(--text-strong);
}
.link {
color: var(--accent-cyan);
text-decoration: none;
}
.actions {
margin-top: 12px;
display: flex;
gap: 10px;
}
button.primary {
background: linear-gradient(90deg, #4f8bff, #7dd0ff);
color: #0b1222;
padding: 10px 14px;
border: none;
border-radius: 10px;
cursor: pointer;
font-weight: 700;
}
.secret-box {
margin-top: 12px;
border-radius: 12px;
border: 1px solid rgba(255, 255, 255, 0.14);
background: rgba(255, 255, 255, 0.03);
padding: 12px;
}
.secret-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 10px;
margin-bottom: 8px;
}
.secret-actions {
display: inline-flex;
align-items: center;
gap: 8px;
}
.secret {
word-break: break-word;
color: var(--text-strong);
}
.hint {
margin-top: 6px;
color: var(--text-muted);
font-size: 12px;
}
.jellyfin-detail {
margin-top: auto;
}
.copy {
background: transparent;
border: 1px solid rgba(255, 255, 255, 0.14);
color: var(--text-primary);
border-radius: 10px;
padding: 6px 10px;
cursor: pointer;
display: inline-flex;
align-items: center;
gap: 8px;
}
.copied {
font-size: 12px;
color: rgba(120, 255, 160, 0.9);
}
.error-box {
margin-top: 12px;
border-radius: 12px;
border: 1px solid rgba(255, 87, 87, 0.5);
background: rgba(255, 87, 87, 0.06);
padding: 10px 12px;
}
@media (max-width: 820px) {
.account-grid {
grid-template-columns: 1fr;
}
.account-stack .module {
flex: none;
}
.account-column .module {
flex: none;
}
}
.admin {
margin-top: 12px;
}
.requests {
margin-top: 12px;
display: grid;
gap: 10px;
}
.req-row {
display: grid;
grid-template-columns: minmax(220px, 1.2fr) minmax(200px, 1fr) minmax(200px, 1fr) minmax(140px, 0.6fr);
gap: 16px;
align-items: start;
border: 1px solid rgba(255, 255, 255, 0.08);
background: rgba(0, 0, 0, 0.18);
border-radius: 14px;
padding: 10px 12px;
}
.req-summary {
display: grid;
gap: 6px;
}
.req-label {
color: var(--text-muted);
font-size: 11px;
letter-spacing: 0.06em;
text-transform: uppercase;
}
.req-flags {
display: grid;
gap: 8px;
align-content: start;
}
.req-flag-grid {
display: flex;
flex-wrap: wrap;
gap: 6px;
}
.note {
color: var(--text-muted);
}
.flag-pill {
display: inline-flex;
align-items: center;
gap: 6px;
padding: 4px 8px;
border-radius: 999px;
border: 1px solid rgba(255, 255, 255, 0.12);
background: rgba(0, 0, 0, 0.2);
font-size: 12px;
}
.flag-pill input {
width: 14px;
height: 14px;
}
.req-note .input {
width: 100%;
border-radius: 10px;
border: 1px solid rgba(255, 255, 255, 0.14);
background: rgba(0, 0, 0, 0.22);
color: var(--text-primary);
padding: 8px 10px;
}
.req-actions {
display: grid;
gap: 8px;
align-content: start;
}
.req-action-stack {
display: grid;
gap: 8px;
}
@media (max-width: 900px) {
.req-row {
grid-template-columns: 1fr;
gap: 12px;
align-items: start;
}
.note {
white-space: normal;
}
}
</style>
<style scoped src="../styles/account.css"></style>

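The diff above replaces the component's inline state and handlers with a single `useAccountDashboard()` composable. A framework-free sketch of that extraction pattern (hypothetical names; the real composable lives in `../account/useAccountDashboard` and returns Vue reactive objects, while plain objects stand in here so the snippet is self-contained):

```javascript
// Sketch of the composable pattern: state and actions move into one
// factory, so the component template only destructures what it renders.
// The fetcher is injected, which also makes the logic easy to unit test.
function useAccountDashboard(fetchOverview) {
  const mailu = { status: "loading", username: "", error: "" };

  async function refreshOverview() {
    mailu.error = "";
    try {
      const data = await fetchOverview();
      mailu.status = data.mailu?.status || "ready";
      mailu.username = data.mailu?.username || "";
    } catch (err) {
      // Mirror the real component: mark the card unavailable on any failure.
      mailu.status = "unavailable";
      mailu.error = err?.message || "Failed to load account status.";
    }
  }

  return { mailu, refreshOverview };
}
```

Injecting the fetcher means tests can pass a stub that resolves or throws, exercising both branches without a network.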
View File

@@ -12,16 +12,16 @@
<div class="hero-facts">
<div class="fact">
<span class="label mono">Model</span>
<span class="value mono">{{ meta.model }}</span>
<span class="value mono">{{ current.meta.model }}</span>
</div>
<div class="fact">
<span class="label mono">GPU</span>
<span class="value mono">{{ meta.gpu }}</span>
<span class="value mono">{{ current.meta.gpu }}</span>
</div>
<div class="fact">
<span class="label mono">Endpoint</span>
<button class="endpoint-copy mono" type="button" @click="copyCurl">
{{ meta.endpoint || apiDisplay }}
{{ current.meta.endpoint || apiDisplay }}
<span v-if="copied" class="copied">copied</span>
</button>
</div>
@@ -29,19 +29,31 @@
</section>
<section class="card chat-card">
<div class="profile-tabs">
<button
v-for="profile in profiles"
:key="profile.id"
type="button"
class="profile-tab mono"
:class="{ active: activeProfile === profile.id }"
@click="activeProfile = profile.id"
>
{{ profile.label }}
</button>
</div>
<div class="chat-window" ref="chatWindow">
<div v-for="(msg, idx) in messages" :key="idx" :class="['chat-row', msg.role]">
<div v-for="(msg, idx) in current.messages" :key="idx" :class="['chat-row', msg.role]">
<div class="bubble" :class="{ streaming: msg.streaming }">
<div class="role mono">{{ msg.role === 'assistant' ? 'Atlas AI' : 'you' }}</div>
<p>{{ msg.content }}</p>
<p class="message">{{ msg.content }}</p>
<div v-if="msg.streaming" class="meta mono typing">streaming</div>
<div v-else-if="msg.latency_ms" class="meta mono">{{ msg.latency_ms }} ms</div>
</div>
</div>
<div v-if="error" class="chat-row error">
<div v-if="current.error" class="chat-row error">
<div class="bubble">
<div class="role mono">error</div>
<p>{{ error }}</p>
<p>{{ current.error }}</p>
</div>
</div>
</div>
@@ -65,32 +77,70 @@
</template>
<script setup>
import { onMounted, onUpdated, ref } from "vue";
import { computed, onMounted, onUpdated, reactive, ref, watch } from "vue";
const API_URL = (import.meta.env.VITE_AI_ENDPOINT || "/api/chat").trim();
const apiUrl = new URL(API_URL, window.location.href);
const apiDisplay = apiUrl.host + apiUrl.pathname;
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const meta = ref({
model: "loading...",
gpu: "local GPU (dynamic)",
node: "unknown",
endpoint: apiUrl.toString(),
});
const messages = ref([
{
role: "assistant",
content: "Hi! I'm Atlas AI. How can I help?",
},
]);
const profiles = [
{ id: "atlas-quick", label: "Atlas Quick" },
{ id: "atlas-smart", label: "Atlas Smart" },
{ id: "atlas-genius", label: "Atlas Genius" },
];
const activeProfile = ref("atlas-quick");
const profileState = reactive(
Object.fromEntries(
profiles.map((profile) => [
profile.id,
{
meta: {
model: "loading...",
gpu: "local GPU (dynamic)",
node: "unknown",
endpoint: apiUrl.toString(),
},
messages: [
{
role: "assistant",
content: "Hi! I'm Atlas AI. How can I help?",
},
],
error: "",
},
])
)
);
const current = computed(() => profileState[activeProfile.value]);
const draft = ref("");
const sending = ref(false);
const error = ref("");
const chatWindow = ref(null);
const copied = ref(false);
const conversationIds = reactive({});
onMounted(fetchMeta);
/**
* Return a stable local conversation ID for one AI profile.
*
* @param {string} profile - Active AI profile key.
* @returns {string} Stable conversation identifier persisted in local storage.
*/
function ensureConversationId(profile) {
if (conversationIds[profile]) return conversationIds[profile];
const key = `atlas-ai-conversation:${profile}`;
let value = localStorage.getItem(key);
if (!value) {
const suffix =
typeof crypto !== "undefined" && crypto.randomUUID ? crypto.randomUUID() : `${Math.random()}`.slice(2);
value = `${profile}-${Date.now()}-${suffix}`;
localStorage.setItem(key, value);
}
conversationIds[profile] = value;
return value;
}
onMounted(() => fetchMeta(activeProfile.value));
watch(activeProfile, (profile) => fetchMeta(profile));
onUpdated(() => {
if (chatWindow.value) {
@@ -98,16 +148,16 @@ onUpdated(() => {
}
});
async function fetchMeta() {
async function fetchMeta(profile) {
try {
const resp = await fetch("/api/ai/info");
const resp = await fetch(`/api/ai/info?profile=${encodeURIComponent(profile)}`);
if (!resp.ok) return;
const data = await resp.json();
meta.value = {
model: data.model || meta.value.model,
gpu: data.gpu || meta.value.gpu,
node: data.node || meta.value.node,
endpoint: data.endpoint || meta.value.endpoint || apiDisplay,
current.value.meta = {
model: data.model || current.value.meta.model,
gpu: data.gpu || current.value.meta.gpu,
node: data.node || current.value.meta.node,
endpoint: data.endpoint || current.value.meta.endpoint || apiDisplay,
};
} catch {
// swallow
@@ -118,20 +168,22 @@ async function sendMessage() {
if (!draft.value.trim() || sending.value) return;
const text = draft.value.trim();
draft.value = "";
error.value = "";
const state = current.value;
state.error = "";
const userEntry = { role: "user", content: text };
messages.value.push(userEntry);
state.messages.push(userEntry);
const assistantEntry = { role: "assistant", content: "", streaming: true };
messages.value.push(assistantEntry);
state.messages.push(assistantEntry);
sending.value = true;
try {
const history = messages.value.filter((m) => !m.streaming).map((m) => ({ role: m.role, content: m.content }));
const history = state.messages.filter((m) => !m.streaming).map((m) => ({ role: m.role, content: m.content }));
const conversation_id = ensureConversationId(activeProfile.value);
const start = performance.now();
const resp = await fetch(API_URL, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ message: text, history }),
body: JSON.stringify({ message: text, history, profile: activeProfile.value, conversation_id }),
});
const contentType = resp.headers.get("content-type") || "";
@@ -164,7 +216,7 @@ async function sendMessage() {
await typeReveal(assistantEntry, textReply);
}
} catch (err) {
error.value = err.message || "Unexpected error";
state.error = err.message || "Unexpected error";
assistantEntry.content = assistantEntry.content || "(no response)";
assistantEntry.streaming = false;
} finally {
@@ -191,7 +243,7 @@ function handleKeydown(e) {
}
async function copyCurl() {
const target = meta.value.endpoint || apiUrl.toString();
const target = current.value.meta.endpoint || apiUrl.toString();
const curl = `curl -X POST ${target} -H 'content-type: application/json' -d '{\"message\":\"hi\"}'`;
try {
await navigator.clipboard.writeText(curl);
@@ -264,11 +316,34 @@ async function copyCurl() {
.chat-card {
margin-top: 18px;
display: grid;
grid-template-rows: 1fr auto;
grid-template-rows: auto 1fr auto;
gap: 12px;
min-height: 60vh;
}
.profile-tabs {
display: flex;
flex-wrap: wrap;
gap: 8px;
}
.profile-tab {
border: 1px solid var(--card-border);
background: rgba(255, 255, 255, 0.03);
color: var(--text-muted);
padding: 6px 12px;
border-radius: 999px;
cursor: pointer;
transition: border-color 0.2s ease, color 0.2s ease, background 0.2s ease;
}
.profile-tab.active {
border-color: rgba(0, 229, 197, 0.6);
color: var(--text-primary);
background: rgba(0, 229, 197, 0.12);
box-shadow: var(--glow-soft);
}
.chat-window {
background: rgba(255, 255, 255, 0.02);
border: 1px solid var(--card-border);
@@ -297,6 +372,10 @@ async function copyCurl() {
border: 1px solid var(--card-border);
background: rgba(255, 255, 255, 0.04);
}
.message {
white-space: pre-wrap;
word-break: break-word;
}
.bubble.streaming {
border-color: rgba(0, 229, 197, 0.4);
box-shadow: var(--glow-soft);


@@ -43,7 +43,7 @@
const sections = [
{
title: "Productivity",
- description: "Docs, planning, and cloud workspace for Atlas users.",
+ description: "Docs, planning, cloud workspace, and personal finance for Atlas users.",
groups: [
{
title: "Workspace",
@@ -66,23 +66,6 @@ const sections = [
target: "_blank",
description: "Kanban planning boards for projects.",
},
- {
- name: "Wger",
- url: "https://health.bstein.dev",
- target: "_blank",
- description: "Workout + nutrition tracking with the wger mobile app.",
- },
- ],
- },
- ],
- },
- {
- title: "Finance",
- description: "Personal budgeting and expense tracking.",
- groups: [
- {
- title: "Money",
- apps: [
{
name: "Actual Budget",
url: "https://budget.bstein.dev",
@@ -95,6 +78,29 @@ const sections = [
target: "_blank",
description: "Expense tracking with Abacus mobile sync.",
},
+ {
+ name: "Wger",
+ url: "https://health.bstein.dev",
+ target: "_blank",
+ description: "Workout + nutrition tracking with the wger mobile app.",
+ },
+ ],
+ },
+ ],
+ },
+ {
+ title: "Dev",
+ description: "Build and ship: source control, CI, registry, and GitOps.",
+ groups: [
+ {
+ title: "Dev Stack",
+ apps: [
+ { name: "Gitea", url: "https://scm.bstein.dev", target: "_blank", description: "Git hosting and collaboration." },
+ { name: "Jenkins", url: "https://ci.bstein.dev", target: "_blank", description: "CI pipelines and automation." },
+ { name: "Harbor", url: "https://registry.bstein.dev", target: "_blank", description: "Artifact registry." },
+ { name: "GitOps", url: "https://cd.bstein.dev", target: "_blank", description: "GitOps UI for Flux." },
+ { name: "OpenSearch", url: "https://logs.bstein.dev", target: "_blank", description: "Centralized logs powered by Fluent Bit." },
+ { name: "Grafana", url: "https://metrics.bstein.dev", target: "_blank", description: "Dashboards and monitoring." },
],
},
],
@@ -179,23 +185,6 @@ const sections = [
},
],
},
- {
- title: "Dev",
- description: "Build and ship: source control, CI, registry, and GitOps.",
- groups: [
- {
- title: "Dev Stack",
- apps: [
- { name: "Gitea", url: "https://scm.bstein.dev", target: "_blank", description: "Git hosting and collaboration." },
- { name: "Jenkins", url: "https://ci.bstein.dev", target: "_blank", description: "CI pipelines and automation." },
- { name: "Harbor", url: "https://registry.bstein.dev", target: "_blank", description: "Artifact registry." },
- { name: "GitOps", url: "https://cd.bstein.dev", target: "_blank", description: "GitOps UI for Flux." },
- { name: "OpenSearch", url: "https://logs.bstein.dev", target: "_blank", description: "Centralized logs powered by Fluent Bit." },
- { name: "Grafana", url: "https://metrics.bstein.dev", target: "_blank", description: "Dashboards and monitoring." },
- ],
- },
- ],
- },
{
title: "Crypto",
description: "Local infrastructure for crypto workloads.",


@@ -99,7 +99,9 @@ const atlasPillClass = computed(() => (props.labStatus?.atlas?.up ? "pill-ok" :
const oceanusPillClass = computed(() => (props.labStatus?.oceanus?.up ? "pill-ok" : "pill-bad"));
const metricItems = computed(() => {
- const items = [
+ const items = props.metricsData?.items?.length
+ ? props.metricsData.items
+ : [
{ label: "Lab nodes", value: "26", note: "Workers: 8 rpi5s, 8 rpi4s, 2 jetsons,\n\t\t\t\t 1 minipc\nControl plane: 3 rpi5\nDedicated Hosts: oceanus, titan-db,\n\t\t\t\t\t\t\t\t tethys, theia" },
{ label: "CPU cores", value: "142", note: "32 arm64 cores @ 1.5Ghz\n12 arm64 cores @ 1.9Ghz\n52 arm64 cores @ 2.4Ghz\n10 amd64 cores @ 5.00Ghz\n12 amd64 cores @ 4.67Ghz\n24 amd64 cores @ 4.04Ghz" },
{
@@ -108,7 +110,7 @@ const metricItems = computed(() => {
note: "64GB Raspberry Pi 4\n104GB Raspberry Pi 5\n32GB NVIDIA Jetson Xavier\n352GB AMD64 Chipsets",
},
{ label: "Storage", value: "80 TB", note: "astreae: 32GB/4xRPI4\nasteria: 48GB/4xRPI4" },
- ];
+ ];
return items.map((item) => ({
...item,
note: item.note ? item.note.replaceAll("\t", " ") : "",
@@ -127,6 +129,15 @@ const hardwareDiagram = computed(() => buildHardwareDiagram(props.labData || {})
const networkDiagram = computed(() => buildNetworkDiagram(props.networkData || {}));
const pipelineDiagram = computed(() => buildPipelineDiagram());
+ /**
+ * Pick a friendly emoji icon for a service name.
+ *
+ * WHY: the service grid should stay readable even when upstream service data
+ * omits a custom icon, so the default icon needs to be deterministic.
+ *
+ * @param {string} name - Service display name.
+ * @returns {string} Emoji used in the service grid card.
+ */
function pickIcon(name) {
const h = name.toLowerCase();
if (h.includes("nextcloud")) return "☁️";
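The diff truncates `pickIcon` after its first branch, but the docstring's requirement (a deterministic default when no substring matches) can be sketched as follows. The `FALLBACK_ICONS` set and the hashing scheme are illustrative assumptions, not taken from the source:

```javascript
// Hypothetical sketch of a deterministic fallback for pickIcon; the
// FALLBACK_ICONS list and the hash below are assumptions for illustration.
const FALLBACK_ICONS = ["🧩", "🛠️", "📦", "🌐"];

function pickIconSketch(name) {
  const h = name.toLowerCase();
  if (h.includes("nextcloud")) return "☁️"; // branch shown in the diff
  // No match: fold the lowercased name into an index so the same service
  // always maps to the same emoji across renders and page loads.
  let sum = 0;
  for (const ch of h) sum = (sum + ch.charCodeAt(0)) % FALLBACK_ICONS.length;
  return FALLBACK_ICONS[sum];
}
```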

File diff suppressed because it is too large.


@@ -148,8 +148,9 @@
Verifying email
</div>
- <div v-if="verifyMessage" class="hint mono" style="margin-top: 10px;">
- {{ verifyMessage }}
+ <div v-if="verifyBanner" class="verify-box">
+ <div class="verify-title mono">{{ verifyBanner.title }}</div>
+ <div class="verify-body">{{ verifyBanner.body }}</div>
</div>
<div v-if="status === 'pending_email_verification'" class="actions" style="margin-top: 10px;">
@@ -176,6 +177,16 @@
<p v-if="blocked" class="muted" style="margin-top: 10px;">
One or more automation steps failed. Fix the error above, then check again.
</p>
+ <div v-if="blocked" class="actions" style="margin-top: 10px;">
+ <button class="pill mono" type="button" :disabled="retrying" @click="retryProvisioning">
+ {{ retrying ? "Retrying..." : "Retry failed steps" }}
+ </button>
+ <span v-if="retryMessage" class="hint mono">{{ retryMessage }}</span>
+ </div>
+ <p v-if="blocked" class="muted" style="margin-top: 8px;">
+ If the error mentions rate limiting or a temporary outage, wait a few minutes and retry. If it keeps failing,
+ contact an admin.
+ </p>
</div>
<div
@@ -199,591 +210,39 @@
</template>
<script setup>
import { onMounted, reactive, ref, watch } from "vue";
import { useRoute } from "vue-router";
import { useRequestAccessFlow } from "../request-access/useRequestAccessFlow";
const route = useRoute();
function statusLabel(value) {
const key = (value || "").trim();
if (key === "pending_email_verification") return "confirm email";
if (key === "pending") return "awaiting approval";
if (key === "accounts_building") return "accounts building";
if (key === "awaiting_onboarding") return "awaiting onboarding";
if (key === "ready") return "ready";
if (key === "denied") return "rejected";
return key || "unknown";
}
function statusPillClass(value) {
const key = (value || "").trim();
if (key === "pending_email_verification") return "pill-warn";
if (key === "pending") return "pill-wait";
if (key === "accounts_building") return "pill-warn";
if (key === "awaiting_onboarding") return "pill-ok";
if (key === "ready") return "pill-info";
if (key === "denied") return "pill-bad";
return "pill-warn";
}
const form = reactive({
username: "",
first_name: "",
last_name: "",
email: "",
note: "",
});
const submitting = ref(false);
const submitted = ref(false);
const error = ref("");
const requestCode = ref("");
const copied = ref(false);
const verifying = ref(false);
const mailDomain = import.meta.env?.VITE_MAILU_DOMAIN || "bstein.dev";
const availability = reactive({
label: "",
detail: "",
pillClass: "",
checking: false,
blockSubmit: false,
});
let availabilityTimer = 0;
let availabilityToken = 0;
const statusForm = reactive({
request_code: "",
});
const checking = ref(false);
const status = ref("");
const onboardingUrl = ref("");
const tasks = ref([]);
const blocked = ref(false);
const resending = ref(false);
const resendMessage = ref("");
const verifyMessage = ref("");
function taskPillClass(status) {
const key = (status || "").trim();
if (key === "ok") return "pill-ok";
if (key === "error") return "pill-bad";
if (key === "pending") return "pill-warn";
return "pill-warn";
}
function resetAvailability() {
availability.label = "";
availability.detail = "";
availability.pillClass = "";
availability.blockSubmit = false;
}
function setAvailability(state, detail = "") {
availability.detail = detail;
availability.blockSubmit = false;
if (state === "checking") {
availability.label = "checking";
availability.pillClass = "pill-warn";
return;
}
if (state === "available") {
availability.label = "available";
availability.pillClass = "pill-ok";
return;
}
if (state === "invalid") {
availability.label = "invalid";
availability.pillClass = "pill-bad";
availability.blockSubmit = true;
return;
}
if (state === "requested") {
availability.label = "requested";
availability.pillClass = "pill-warn";
availability.blockSubmit = true;
return;
}
if (state === "exists") {
availability.label = "taken";
availability.pillClass = "pill-bad";
availability.blockSubmit = true;
return;
}
if (state === "error") {
availability.label = "error";
availability.pillClass = "pill-warn";
return;
}
resetAvailability();
}
async function checkAvailability(name) {
const token = (availabilityToken += 1);
setAvailability("checking");
availability.checking = true;
try {
const resp = await fetch(`/api/access/request/availability?username=${encodeURIComponent(name)}`, {
headers: { Accept: "application/json" },
cache: "no-store",
});
const data = await resp.json().catch(() => ({}));
if (token !== availabilityToken) return;
if (!resp.ok) throw new Error(data.error || `status ${resp.status}`);
if (data.available) {
setAvailability("available", "Username is available.");
return;
}
const reason = data.reason || "";
const status = data.status || "";
if (reason === "invalid") {
setAvailability("invalid", data.detail || "Use 3-32 characters (letters, numbers, . _ -).");
return;
}
if (reason === "exists") {
setAvailability("exists", "Already in use. Choose another name.");
return;
}
if (reason === "requested") {
const label = status ? `Existing request: ${statusLabel(status)}` : "Request already exists.";
setAvailability("requested", label);
return;
}
setAvailability("error", "Unable to confirm availability.");
} catch (err) {
if (token !== availabilityToken) return;
setAvailability("error", err.message || "Availability check failed.");
} finally {
if (token === availabilityToken) availability.checking = false;
}
}
async function submit() {
if (submitting.value) return;
error.value = "";
submitting.value = true;
try {
const resp = await fetch("/api/access/request", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({
username: form.username.trim(),
first_name: form.first_name.trim(),
last_name: form.last_name.trim(),
email: form.email.trim(),
note: form.note.trim(),
}),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
submitted.value = true;
requestCode.value = data.request_code || "";
statusForm.request_code = requestCode.value;
status.value = data.status || "pending_email_verification";
} catch (err) {
error.value = err.message || "Failed to submit request";
} finally {
submitting.value = false;
}
}
watch(
() => form.username,
(value) => {
const trimmed = value.trim();
if (availabilityTimer) {
window.clearTimeout(availabilityTimer);
availabilityTimer = 0;
}
availabilityToken += 1;
if (!trimmed) {
resetAvailability();
return;
}
if (trimmed.length < 3 || trimmed.length > 32) {
setAvailability("invalid", "Use 3-32 characters (letters, numbers, . _ -).");
return;
}
if (!/^[a-zA-Z0-9._-]+$/.test(trimmed)) {
setAvailability("invalid", "Use letters, numbers, and . _ - only.");
return;
}
availabilityTimer = window.setTimeout(() => {
checkAvailability(trimmed);
}, 350);
},
);
async function copyRequestCode() {
if (!requestCode.value) return;
try {
if (navigator?.clipboard?.writeText) {
await navigator.clipboard.writeText(requestCode.value);
} else {
const textarea = document.createElement("textarea");
textarea.value = requestCode.value;
textarea.setAttribute("readonly", "");
textarea.style.position = "fixed";
textarea.style.top = "-9999px";
textarea.style.left = "-9999px";
document.body.appendChild(textarea);
textarea.select();
textarea.setSelectionRange(0, textarea.value.length);
document.execCommand("copy");
document.body.removeChild(textarea);
}
copied.value = true;
setTimeout(() => (copied.value = false), 1500);
} catch (err) {
error.value = err?.message || "Failed to copy request code";
}
}
async function checkStatus() {
if (checking.value) return;
error.value = "";
verifyMessage.value = "";
const trimmed = statusForm.request_code.trim();
if (!trimmed) return;
if (!trimmed.includes("~")) {
error.value = "Request code should look like username~XXXXXXXXXX. Copy it from the submit step.";
status.value = "unknown";
onboardingUrl.value = "";
tasks.value = [];
blocked.value = false;
return;
}
checking.value = true;
try {
const resp = await fetch("/api/access/request/status", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: trimmed }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
status.value = data.status || "unknown";
onboardingUrl.value = data.onboarding_url || "";
tasks.value = Array.isArray(data.tasks) ? data.tasks : [];
blocked.value = Boolean(data.blocked);
if (data.email_verified) {
verifyMessage.value = "Email confirmed.";
}
} catch (err) {
error.value = err.message || "Failed to check status";
status.value = "unknown";
onboardingUrl.value = "";
tasks.value = [];
blocked.value = false;
} finally {
checking.value = false;
}
}
async function verifyFromLink(code, token) {
verifying.value = true;
try {
const resp = await fetch("/api/access/request/verify", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: code, token }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
status.value = data.status || status.value;
verifyMessage.value = "Email confirmed.";
} finally {
verifying.value = false;
}
}
async function resendVerification() {
if (resending.value) return;
const code = statusForm.request_code.trim();
if (!code) return;
resending.value = true;
resendMessage.value = "";
try {
const resp = await fetch("/api/access/request/resend", {
method: "POST",
headers: { "Content-Type": "application/json" },
cache: "no-store",
body: JSON.stringify({ request_code: code }),
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) throw new Error(data.error || resp.statusText || `status ${resp.status}`);
resendMessage.value = "Verification email sent.";
} catch (err) {
resendMessage.value = err?.message || "Failed to resend verification email.";
} finally {
resending.value = false;
}
}
onMounted(async () => {
const code = typeof route.query.code === "string" ? route.query.code.trim() : "";
const token = typeof route.query.verify === "string" ? route.query.verify.trim() : "";
const verified = typeof route.query.verified === "string" ? route.query.verified.trim() : "";
const verifyError = typeof route.query.verify_error === "string" ? route.query.verify_error.trim() : "";
if (code) {
requestCode.value = code;
statusForm.request_code = code;
submitted.value = true;
}
if (code && token) {
try {
await verifyFromLink(code, token);
} catch (err) {
error.value = err?.message || "Failed to verify email";
}
}
if (code) {
await checkStatus();
}
if (verified) {
verifyMessage.value = "Email confirmed.";
}
if (verifyError) {
error.value = `Email verification failed: ${decodeURIComponent(verifyError)}`;
}
});
const {
statusLabel,
statusPillClass,
form,
submitting,
submitted,
error,
requestCode,
copied,
verifying,
mailDomain,
availability,
statusForm,
checking,
status,
onboardingUrl,
tasks,
blocked,
retrying,
retryMessage,
resending,
resendMessage,
verifyBanner,
taskPillClass,
submit,
copyRequestCode,
checkStatus,
retryProvisioning,
resendVerification,
} = useRequestAccessFlow(useRoute());
</script>
<style scoped>
.page {
max-width: 960px;
margin: 0 auto;
padding: 32px 22px 72px;
}
.hero {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 18px;
margin-bottom: 12px;
}
.eyebrow {
text-transform: uppercase;
letter-spacing: 0.08em;
color: var(--text-muted);
margin: 0 0 6px;
font-size: 13px;
}
h1 {
margin: 0 0 6px;
font-size: 32px;
}
.lede {
margin: 0;
color: var(--text-muted);
max-width: 640px;
}
.module {
padding: 18px;
}
.status-module {
margin-top: 14px;
}
.module-head {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
}
.muted {
color: var(--text-muted);
margin: 10px 0 0;
}
.mono {
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
}
.form {
margin-top: 14px;
display: grid;
gap: 12px;
}
.field {
display: grid;
gap: 6px;
}
.availability {
display: flex;
align-items: center;
gap: 8px;
}
.label {
color: var(--text-muted);
font-size: 12px;
letter-spacing: 0.04em;
text-transform: uppercase;
}
.input,
.textarea {
width: 100%;
border-radius: 12px;
border: 1px solid rgba(255, 255, 255, 0.1);
background: rgba(0, 0, 0, 0.22);
color: var(--text);
padding: 10px 12px;
outline: none;
}
.textarea {
resize: vertical;
}
.actions {
display: flex;
align-items: center;
gap: 12px;
margin-top: 6px;
}
button.primary,
a.primary {
background: linear-gradient(90deg, #4f8bff, #7dd0ff);
color: #0b1222;
padding: 10px 14px;
border: none;
border-radius: 10px;
cursor: pointer;
font-weight: 700;
text-decoration: none;
display: inline-flex;
align-items: center;
justify-content: center;
}
button.primary:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.onboarding-actions {
margin-top: 18px;
flex-direction: column;
align-items: stretch;
padding: 14px;
border-radius: 14px;
border: 1px solid rgba(120, 180, 255, 0.2);
background: rgba(0, 0, 0, 0.24);
}
.onboarding-copy {
display: grid;
gap: 6px;
}
.onboarding-cta {
text-align: center;
width: 100%;
}
.status-form {
display: flex;
gap: 10px;
margin-top: 12px;
}
.hint {
color: var(--text-muted);
font-size: 12px;
}
.error-box {
margin-top: 14px;
border: 1px solid rgba(255, 120, 120, 0.35);
background: rgba(255, 64, 64, 0.12);
border-radius: 14px;
padding: 12px;
}
.success-box {
margin-top: 14px;
border: 1px solid rgba(120, 255, 160, 0.25);
background: rgba(48, 255, 160, 0.1);
border-radius: 14px;
padding: 12px;
}
.request-code-row {
margin-top: 12px;
display: flex;
flex-direction: column;
gap: 6px;
}
.copy {
display: inline-flex;
align-items: center;
gap: 10px;
border-radius: 12px;
border: 1px solid rgba(255, 255, 255, 0.14);
background: rgba(0, 0, 0, 0.22);
color: var(--text);
padding: 10px 12px;
cursor: pointer;
}
.copied {
font-size: 12px;
color: rgba(120, 255, 160, 0.9);
}
.pill {
padding: 6px 10px;
border-radius: 999px;
font-size: 12px;
}
</style>
<style scoped>
.task-box {
margin-top: 14px;
padding: 14px;
border: 1px solid rgba(255, 255, 255, 0.08);
border-radius: 14px;
background: rgba(0, 0, 0, 0.25);
}
.task-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 10px;
}
.task-row {
display: grid;
gap: 6px;
grid-template-columns: 1fr auto;
align-items: center;
}
.task-name {
color: var(--text);
}
.task-detail {
grid-column: 1 / -1;
color: var(--text-muted);
font-size: 12px;
}
</style>
<style scoped src="../styles/request-access.css"></style>

(11 binary image files added, 52 KiB to 1.0 MiB each; contents not shown in this diff.)

Some files were not shown because too many files have changed in this diff.