Merge pull request 'feature/bstein-dev-home' (#7) from feature/bstein-dev-home into main

Reviewed-on: #7
This commit is contained in:
bstein 2025-12-18 04:23:01 +00:00
commit be23851878
74 changed files with 5186 additions and 3956 deletions

3
.gitignore vendored
View File

@ -1,5 +1,2 @@
# Ignore markdown by default, but keep top-level docs
*.md
!README.md
!AGENTS.md
!**/NOTES.md

View File

@ -1,81 +0,0 @@
Repository Guidelines
> Local-only note: apply changes through Flux-tracked manifests, not by manual kubectl edits in-cluster—manual tweaks will be reverted by Flux.
## Project Structure & Module Organization
- `infrastructure/`: cluster-scoped building blocks (core, flux-system, traefik, longhorn). Add new platform features by mirroring this layout.
- `services/`: workload manifests per app (`services/gitea/`, etc.) with `kustomization.yaml` plus one file per kind; keep diffs small and focused.
- `dockerfiles/` hosts bespoke images, while `scripts/` stores operational Fish/Bash helpers—extend these directories instead of relying on ad-hoc commands.
## Build, Test, and Development Commands
- `kustomize build services/<app>` (or `kubectl kustomize ...`) renders manifests exactly as Flux will.
- `kubectl apply --server-side --dry-run=server -k services/<app>` validates manifests against the API server schema without persisting anything (see the combined sketch after this list).
- `flux reconcile kustomization <name> --namespace flux-system --with-source` pulls the latest Git state after merges or hotfixes.
- `fish scripts/flux_hammer.fish --help` explains the recovery tool; read it before running against production workloads.
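A combined validation pass for one service might look like the following (a sketch; `gitea` is just an example app, and the Flux Kustomization is assumed to share the service's name):
```bash
# Sketch: render, dry-run, and reconcile a single service (gitea used as the example).
set -euo pipefail
APP=${1:-gitea}

# Render manifests exactly as Flux will apply them.
kustomize build "services/${APP}"

# Validate against the API server schema without persisting anything.
kubectl apply --server-side --dry-run=server -k "services/${APP}"

# Pull the latest Git state for the matching Flux Kustomization.
flux reconcile kustomization "${APP}" --namespace flux-system --with-source
```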
## Coding Style & Naming Conventions
- YAML uses two-space indents; retain the leading path comment (e.g. `# services/gitea/deployment.yaml`) to speed code review.
- Keep resource names lowercase kebab-case, align labels/selectors, and mirror namespaces with directory names.
- List resources in `kustomization.yaml` in a consistent order (namespace/config, then storage, then workloads and networking) so diffs stay predictable.
- Scripts start with `#!/usr/bin/env fish` or bash, stay executable, and follow snake_case names such as `flux_hammer.fish`.
## Testing Guidelines
- Run `kustomize build` and the dry-run apply for every service you touch; capture failures before opening a PR.
- `flux diff kustomization <name> --path services/<app>` previews reconciliations—link notable output when behavior shifts.
- Docker edits: run `docker build -f dockerfiles/Dockerfile.monerod .` (swapping in the file you changed) to verify the image still builds; see the loop sketch below.
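As a sketch of a pre-PR check, the loop below previews reconciliation for every service changed on the current branch and rebuilds any touched Dockerfile (it assumes the branch targets `main` and that each changed service has a same-named Flux Kustomization):
```bash
# Sketch: preview reconciliation for changed services, then rebuild touched Dockerfiles.
set -euo pipefail

for app in $(git diff --name-only main... -- services/ | cut -d/ -f2 | sort -u); do
  # flux diff exits 1 when drift is found; that is expected output, not a failure.
  flux diff kustomization "${app}" --path "services/${app}" || true
done

for df in $(git diff --name-only main... -- dockerfiles/); do
  docker build -f "${df}" .
done
```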
## Commit & Pull Request Guidelines
- Keep commit subjects short, present-tense, and optionally scoped (`gpu(titan-24): add RuntimeClass`); squash fixups before review.
- Describe linked issues, affected services, and required operator steps (e.g. `flux reconcile kustomization services-gitea`) in the PR body.
- Focus each PR on one kustomization or service and update `infrastructure/flux-system` when Flux must track new folders.
- Record the validation you ran (dry-runs, diffs, builds) and add screenshots only when ingress or UI behavior changes.
## Security & Configuration Tips
- Never commit credentials; use Vault workflows (`services/vault/`) or SOPS-encrypted manifests wired through `infrastructure/flux-system`.
- Node selectors and tolerations gate workloads to hardware labels like `hardware: rpi4`; confirm node labels before scaling or renaming nodes (label check below).
- Pin external images by digest or rely on Flux image automation to follow approved tags and avoid drift.
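A quick way to confirm those labels before touching scheduling (a sketch using stock `kubectl` label columns and selectors):
```bash
# Show the labels that gate scheduling (arch, hardware class, worker role) per node.
kubectl get nodes -L kubernetes.io/arch,hardware,node-role.kubernetes.io/worker

# List only the nodes a selector such as `hardware: rpi4` would match.
kubectl get nodes -l hardware=rpi4
```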
## Dashboard roadmap / context (2025-12-02)
- Atlas dashboards are generated via `scripts/dashboards_render_atlas.py --build`, which writes JSON under `services/monitoring/dashboards/` and ConfigMaps under `services/monitoring/`. Keep the Grafana manifests in sync by regenerating after edits (see the regeneration sketch after this list).
- Atlas Overview panels are paired with internal dashboards (pods, nodes, storage, network, GPU). A new `atlas-gpu` internal dashboard holds the detailed GPU metrics that feed the overview share pie.
- Old Grafana folders (`Atlas Storage`, `Atlas SRE`, `Atlas Public`, `Atlas Nodes`) should be removed in Grafana UI when convenient; only `Atlas Overview` and `Atlas Internal` should remain provisioned.
- Future work: add a separate generator (e.g., `dashboards_render_oceanus.py`) for SUI/oceanus validation dashboards, mirroring the atlas pattern of internal dashboards feeding a public overview.
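A typical regeneration pass after editing the generator (a sketch; assumes a local Python 3 and `kustomize`):
```bash
# Regenerate Atlas dashboard JSON and ConfigMaps, then confirm the kustomization still renders.
set -euo pipefail
python3 scripts/dashboards_render_atlas.py --build
kustomize build services/monitoring > /dev/null
git status --short services/monitoring
```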
## Monitoring state (2025-12-03)
- dcgm-exporter DaemonSet pulls `registry.bstein.dev/monitoring/dcgm-exporter:4.4.2-4.7.0-ubuntu22.04` with nvidia runtime/imagePullSecret; titan-24 exports metrics, titan-22 remains NotReady.
- Atlas Overview is the Grafana home (1h range, 1m refresh), Overview folder UID `overview`, internal folder `atlas-internal` (oceanus-internal stub).
- Panels are standardized via the generator; the hottest row is compressed, worker/control rows are taller, and the root disk row is taller with a top-12 bar gauge showing labels. The GPU share pie uses a 1h avg_over_time so idle activity stays visible.
- Internal dashboards are provisioned without the Viewer role; if anonymous users can still see them, restart Grafana and tighten auth if needed.
- GPU share panel updated (feature/sso) to use `max_over_time(…[$__range])`, so longer ranges (e.g., 12h) keep recent activity visible. Flux tracking `feature/sso`.
## Upcoming priorities (SSO/storage/mail)
- Establish SSO (Keycloak or similar) and federate Grafana, Gitea, Zot, Nextcloud, Pegasus/Jellyfin; keep Vaultwarden separate until safe.
- Add Nextcloud (limit to rpi5 workers) with office suite; integrate with SSO; plan storage class and ingress.
- Plan mail: mostly self-hosted, relay through trusted provider for outbound; integrate with services (Nextcloud, Vaultwarden, etc.) for notifications and account flows.
## SSO plan sketch (2025-12-03)
- IdP: use Keycloak (preferred) in a new `sso` namespace, Bitnami or codecentric chart with Postgres backing store (single PVC), ingress `sso.bstein.dev`, admin user bound to brad@bstein.dev; stick with local DB initially (no external IdP).
- Auth flow goals: Grafana (OIDC), Gitea (OAuth2/Keycloak), Zot (via Traefik forward-auth/oauth2-proxy), Jellyfin/Pegasus via Jellyfin OAuth/OpenID plugin (map existing usernames; run migration to pre-create users in Keycloak with same usernames/emails and temporary passwords), Pegasus keeps using Jellyfin tokens.
- Steps to implement:
1) Add service folder `services/keycloak/` (namespace, PVC, HelmRelease, ingress, secret for admin creds). Verify with kustomize + Flux reconcile.
2) Seed realm `atlas` with users (import CSV/realm). Create clients for Grafana (public/implicit), Gitea (confidential), and a “jellyfin” client for the OAuth plugin; set brad@bstein.dev as the admin email. A discovery-endpoint check follows this list.
3) Reconfigure Grafana to OIDC (disable anonymous to internal folders, leave Overview public via folder permissions). Reconfigure Gitea to OIDC (app.ini).
4) Add Traefik forward-auth (oauth2-proxy) in front of Zot and any other services needing headers-based auth.
5) Deploy Jellyfin OpenID plugin; map Keycloak users to existing Jellyfin usernames; communicate password reset path.
- Migration caution: do not delete existing local creds until SSO validated; keep Pegasus working via Jellyfin tokens during transition.
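Once the realm exists, a quick sanity check from any host with DNS access is to pull the OIDC discovery document; the issuer URL below follows from `sso.bstein.dev` and realm `atlas` above (a sketch, not a verified endpoint list):
```bash
# Confirm Keycloak serves the atlas realm and note the endpoints the OIDC clients will use.
curl -fsS https://sso.bstein.dev/realms/atlas/.well-known/openid-configuration \
  | jq -r '.issuer, .authorization_endpoint, .token_endpoint'
```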
## Postgres centralization (2025-12-03)
- Prefer a shared in-cluster Postgres deployment with per-service databases to reduce resource sprawl on Pi nodes. Use it for services that can easily point at an external DB (provisioning sketch below).
- Candidates to migrate to shared Postgres: Keycloak (realm DB), Gitea (git DB), Nextcloud (app DB), possibly Grafana (if persistence needed beyond current provisioner), Jitsi prosody/JVB state (if external DB supported). Keep tightly-coupled or lightweight embedded DBs as-is when migration is painful or not supported.
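Provisioning a per-service database on the shared instance is a few SQL statements; the sketch below assumes a `deploy/postgres` workload in a `postgres` namespace and uses `keycloak` purely as an illustration:
```bash
# Sketch: create a dedicated database and role for one service (names are illustrative).
kubectl -n postgres exec -i deploy/postgres -- psql -U postgres <<'SQL'
CREATE USER keycloak WITH PASSWORD 'CHANGE_ME';
CREATE DATABASE keycloak OWNER keycloak;
GRANT ALL PRIVILEGES ON DATABASE keycloak TO keycloak;
SQL
```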
## SSO integration snapshot (2025-12-08)
- Current blockers: Zot still prompts for basic auth/double-login; Vault still wants the token UI after Keycloak (previously 502/404 when vault-0 sealed). Forward-auth middleware on Zot Ingress likely still causing the 401/Found hop; Vault OIDC mount not completing UI flow unless unsealed and preferred login is set.
- Flux-only changes required: remove zot forward-auth middleware from Ingress (let oauth2-proxy handle redirect), ensure Vault OIDC mount is preferred UI login and bound to admin group; keep all edits in repo so Flux enforces them.
- Secrets present (per user): `zot-oidc-client` (client_secret only), `oauth2-proxy-zot-oidc`, `oauth2-proxy-vault-oidc`, `vault-oidc-admin-token`. Zot needs its regcred in the zot namespace if image pulls fail.
- Cluster validation is blocked here: `kubectl get nodes` fails (403/permission) and DNS to `*.bstein.dev` fails in this session, so no live curl verification could be run. Re-test on a host with cluster/DNS access after Flux applies the fixes; a re-test sketch follows.
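When a host with cluster and DNS access is available, a minimal re-test is to confirm the registry redirects into the OIDC flow instead of prompting for basic auth and that the Vault UI answers (a sketch; the expected codes describe the goal state, not verified behaviour):
```bash
# Registry: expect a redirect into the OIDC flow (via oauth2-proxy), not a 401 basic-auth prompt.
curl -sS -o /dev/null -w 'registry /v2/: %{http_code}\n' https://registry.bstein.dev/v2/

# Vault UI: expect 200 once vault-0 is unsealed and OIDC is the preferred login method.
curl -sS -o /dev/null -w 'vault /ui/:    %{http_code}\n' https://secret.bstein.dev/ui/
```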
## Docs hygiene
- Do not add per-service `README.md` files; use `NOTES.md` if documentation is needed inside service folders. Keep only the top-level repo README.
- Keep comments succinct and in a human voice—no AI-sounding notes. Use `NOTES.md` for scratch notes instead of sprinkling reminders into code or extra READMEs.

View File

@ -1,3 +0,0 @@
# Rotation reminders (temporary secrets set by automation)
- Weave GitOps UI (`cd.bstein.dev`) admin: `admin` / `G1tOps!2025` — rotate immediately after first login.

View File

@ -9,4 +9,4 @@ resources:
- ../../services/monitoring
- ../../services/pegasus
- ../../services/vault
- ../../services/zot
- ../../services/bstein-dev-home

View File

@ -0,0 +1,26 @@
# clusters/atlas/flux-system/applications/bstein-dev-home/image-automation.yaml
apiVersion: image.toolkit.fluxcd.io/v1
kind: ImageUpdateAutomation
metadata:
name: bstein-dev-home
namespace: flux-system
spec:
interval: 1m0s
sourceRef:
kind: GitRepository
name: flux-system
namespace: flux-system
git:
checkout:
ref:
branch: feature/ci-gitops
commit:
author:
email: ops@bstein.dev
name: flux-bot
messageTemplate: "chore(bstein-dev-home): update images to {{range .Updated.Images}}{{.}}{{end}}"
push:
branch: feature/ci-gitops
update:
strategy: Setters
path: services/bstein-dev-home

View File

@ -0,0 +1,15 @@
# clusters/atlas/flux-system/applications/bstein-dev-home/kustomization.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: bstein-dev-home
namespace: flux-system
spec:
interval: 10m
path: ./services/bstein-dev-home
prune: true
sourceRef:
kind: GitRepository
name: flux-system
targetNamespace: bstein-dev-home
wait: false

View File

@ -0,0 +1,26 @@
# clusters/atlas/flux-system/applications/ci-demo/image-automation.yaml
apiVersion: image.toolkit.fluxcd.io/v1
kind: ImageUpdateAutomation
metadata:
name: ci-demo
namespace: flux-system
spec:
interval: 1m0s
sourceRef:
kind: GitRepository
name: flux-system
namespace: flux-system
git:
checkout:
ref:
branch: feature/ci-gitops
commit:
author:
email: ops@bstein.dev
name: flux-bot
messageTemplate: "chore(ci-demo): update image to {{range .Updated.Images}}{{.}}{{end}}"
push:
branch: feature/ci-gitops
update:
strategy: Setters
path: services/ci-demo

View File

@ -1,18 +1,19 @@
# clusters/atlas/flux-system/applications/zot/kustomization.yaml
# clusters/atlas/flux-system/applications/ci-demo/kustomization.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: zot
name: ci-demo
namespace: flux-system
spec:
interval: 10m
path: ./services/zot
targetNamespace: zot
prune: false
path: ./services/ci-demo
prune: true
sourceRef:
kind: GitRepository
name: flux-system
namespace: flux-system
wait: true
targetNamespace: ci-demo
dependsOn:
- name: core
- name: core
wait: false

View File

@ -0,0 +1,27 @@
# clusters/atlas/flux-system/applications/harbor/image-automation.yaml
apiVersion: image.toolkit.fluxcd.io/v1
kind: ImageUpdateAutomation
metadata:
name: harbor
namespace: harbor
spec:
suspend: true
interval: 5m0s
sourceRef:
kind: GitRepository
name: flux-system
namespace: flux-system
git:
checkout:
ref:
branch: feature/ci-gitops
commit:
author:
email: ops@bstein.dev
name: flux-bot
messageTemplate: "chore(harbor): update images to {{range .Updated.Images}}{{.}}{{end}}"
push:
branch: feature/ci-gitops
update:
strategy: Setters
path: ./services/harbor

View File

@ -0,0 +1,23 @@
# clusters/atlas/flux-system/applications/harbor/kustomization.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: harbor
namespace: flux-system
spec:
interval: 10m
path: ./services/harbor
targetNamespace: harbor
prune: false
sourceRef:
kind: GitRepository
name: flux-system
namespace: flux-system
healthChecks:
- apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
name: harbor
namespace: harbor
wait: false
dependsOn:
- name: core

View File

@ -0,0 +1,23 @@
# clusters/atlas/flux-system/applications/jenkins/kustomization.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: jenkins
namespace: flux-system
spec:
interval: 10m
path: ./services/jenkins
prune: true
sourceRef:
kind: GitRepository
name: flux-system
targetNamespace: jenkins
dependsOn:
- name: helm
- name: traefik
healthChecks:
- apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
name: jenkins
namespace: jenkins
wait: false

View File

@ -2,7 +2,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- zot/kustomization.yaml
- gitea/kustomization.yaml
- vault/kustomization.yaml
- jitsi/kustomization.yaml
@ -10,9 +9,16 @@ resources:
- monerod/kustomization.yaml
- pegasus/kustomization.yaml
- pegasus/image-automation.yaml
- bstein-dev-home/kustomization.yaml
- bstein-dev-home/image-automation.yaml
- harbor/kustomization.yaml
- harbor/image-automation.yaml
- jellyfin/kustomization.yaml
- xmr-miner/kustomization.yaml
- sui-metrics/kustomization.yaml
- keycloak/kustomization.yaml
- oauth2-proxy/kustomization.yaml
- mailu/kustomization.yaml
- jenkins/kustomization.yaml
- ci-demo/kustomization.yaml
- ci-demo/image-automation.yaml

View File

@ -1,5 +1,5 @@
# clusters/atlas/flux-system/applications/pegasus/image-automation.yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
apiVersion: image.toolkit.fluxcd.io/v1
kind: ImageUpdateAutomation
metadata:
name: pegasus
@ -9,12 +9,18 @@ spec:
sourceRef:
kind: GitRepository
name: flux-system
namespace: flux-system
git:
checkout:
ref:
branch: feature/ci-gitops
commit:
author:
email: ops@bstein.dev
name: flux-bot
messageTemplate: "chore(pegasus): update image to {{range .Updated.Images}}{{.}}{{end}}"
push:
branch: feature/ci-gitops
update:
strategy: Setters
path: ./services/pegasus
path: services/pegasus

File diff suppressed because it is too large

View File

@ -8,7 +8,7 @@ metadata:
spec:
interval: 1m0s
ref:
branch: feature/mailu
branch: feature/ci-gitops
secretRef:
name: flux-system-gitea
url: ssh://git@scm.bstein.dev:2242/bstein/titan-iac.git

View File

@ -1,15 +0,0 @@
# Titan Homelab Topology
| Hostname | Role / Function | Managed By | Notes |
|------------|--------------------------------|---------------------|-------|
| titan-db | HA control plane database | Ansible | PostgreSQL / etcd backing services |
| titan-0a | Kubernetes control-plane | Flux (atlas cluster)| HA leader, tainted for control only |
| titan-0b | Kubernetes control-plane | Flux (atlas cluster)| Standby control node |
| titan-0c | Kubernetes control-plane | Flux (atlas cluster)| Standby control node |
| titan-04-19| Raspberry Pi workers | Flux (atlas cluster)| Workload nodes, labelled per hardware |
| titan-20&21| NVIDIA Jetson workers | Flux (atlas cluster)| Workload nodes, labelled per hardware |
| titan-22 | GPU mini-PC (Jellyfin) | Flux + Ansible | NVIDIA runtime managed via `modules/profiles/atlas-ha` |
| titan-23 | Dedicated SUI validator Oceanus| Manual + Ansible | Baremetal validator workloads, exposes metrics to atlas |
| titan-24 | Tethys hybrid node | Flux + Ansible | Runs SUI metrics via K8s, validator via Ansible |
| titan-jh | Jumphost & bastion & lesavka | Ansible | Entry point / future KVM services / custom kvm - lesavaka |
| styx | Air-gapped workstation | Manual / Scripts | Remains isolated, scripts tracked in `hosts/styx` |

View File

@ -1,2 +0,0 @@
# hosts/styx/README.md
Styx is air-gapped; provisioning scripts live under `scripts/`.

View File

@ -0,0 +1,9 @@
# infrastructure/sources/helm/harbor.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: harbor
namespace: flux-system
spec:
interval: 10m
url: https://helm.goharbor.io

View File

@ -0,0 +1,9 @@
# infrastructure/sources/helm/jenkins.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: jenkins
namespace: flux-system
spec:
interval: 1h
url: https://charts.jenkins.io

View File

@ -5,6 +5,8 @@ resources:
- grafana.yaml
- hashicorp.yaml
- jetstack.yaml
- jenkins.yaml
- mailu.yaml
- harbor.yaml
- prometheus.yaml
- victoria-metrics.yaml

View File

@ -371,9 +371,9 @@ function xmrwallet_bootstrap --description "Interactive setup of monero-wallet-r
echo "Skipping daemon probe due to xmrwallet_SKIP_DAEMON_CHECK=1"
end
# Use your private image by default (in Zot)
read -P "Container image for wallet RPC [registry.bstein.dev/infra/monero-wallet-rpc:0.18.4.1]: " image
if test -z "$image"; set image registry.bstein.dev/infra/monero-wallet-rpc:0.18.4.1; end
# Use your private image by default (in Harbor)
read -P "Container image for wallet RPC [registry.bstein.dev/crypto/monero-wallet-rpc:0.18.4.1]: " image
if test -z "$image"; set image registry.bstein.dev/crypto/monero-wallet-rpc:0.18.4.1; end
_require "Container image" $image; or return 1
# --- Secrets (defaults: RPC user=wallet name, passwords auto if missing)
@ -1375,4 +1375,3 @@ function xmrwallet_help_detailed
echo " Probes it via a temporary port-forward so it works from your workstation."
echo " Set xmrwallet_SKIP_DAEMON_CHECK=1 to bypass the daemon probe (not recommended)."
end

View File

@ -23,7 +23,7 @@ end
# Default image chooser (you should override with your own multi-arch image)
function _sui_default_image -a NET
echo registry.bstein.dev/infra/sui-tools:1.53.2
echo registry.bstein.dev/crypto/sui-tools:1.53.2
end
# Convert any string to a k8s-safe name (RFC-1123 label-ish)

View File

@ -241,9 +241,10 @@ UPTIME_PERCENT_THRESHOLDS = {
"mode": "absolute",
"steps": [
{"color": "red", "value": None},
{"color": "orange", "value": 0.999},
{"color": "yellow", "value": 0.9999},
{"color": "green", "value": 0.99999},
{"color": "orange", "value": 0.99},
{"color": "yellow", "value": 0.999},
{"color": "green", "value": 0.9999},
{"color": "blue", "value": 0.99999},
],
}
PROBLEM_TABLE_EXPR = (

92
scripts/gitea_cred_sync.sh Executable file
View File

@ -0,0 +1,92 @@
#!/usr/bin/env bash
# Sync Keycloak users into Gitea local accounts (for CLI + tokens).
# Requires: curl, jq, kubectl. Expects a Keycloak client with realm-management
# permissions (manage-users) and a Gitea admin token stored in a secret.
set -euo pipefail
require() { command -v "$1" >/dev/null 2>&1 || { echo "missing required binary: $1" >&2; exit 1; }; }
require curl; require jq; require kubectl
: "${KEYCLOAK_URL:=https://sso.bstein.dev}"
: "${KEYCLOAK_REALM:=atlas}"
: "${KEYCLOAK_CLIENT_ID:?set KEYCLOAK_CLIENT_ID or export via secret}"
: "${KEYCLOAK_CLIENT_SECRET:?set KEYCLOAK_CLIENT_SECRET or export via secret}"
: "${GITEA_BASE_URL:=https://scm.bstein.dev}"
: "${GITEA_NAMESPACE:=gitea}"
: "${GITEA_TOKEN_SECRET_NAME:=gitea-admin-token}"
: "${GITEA_TOKEN_SECRET_KEY:=token}"
: "${DEFAULT_PASSWORD:=TempSsoPass!2025}"
fetch_token() {
curl -fsS -X POST \
-d "grant_type=client_credentials" \
-d "client_id=${KEYCLOAK_CLIENT_ID}" \
-d "client_secret=${KEYCLOAK_CLIENT_SECRET}" \
"${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}/protocol/openid-connect/token" \
| jq -r '.access_token'
}
pull_users() {
local token="$1"
curl -fsS -H "Authorization: Bearer ${token}" \
"${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?max=500" \
| jq -r '.[] | select(.enabled == true) | select(.username | startswith("service-account-") | not) | [.username, (.email // ""), (.firstName // ""), (.lastName // "")] | @tsv'
}
get_gitea_token() {
if [[ -n "${GITEA_ADMIN_TOKEN:-}" ]]; then
echo "${GITEA_ADMIN_TOKEN}"
return
fi
kubectl -n "${GITEA_NAMESPACE}" get secret "${GITEA_TOKEN_SECRET_NAME}" -o "jsonpath={.data.${GITEA_TOKEN_SECRET_KEY}}" \
| base64 -d
}
user_exists() {
local token="$1" username="$2"
local code
code=$(curl -s -o /dev/null -w '%{http_code}' \
-H "Authorization: token ${token}" \
"${GITEA_BASE_URL}/api/v1/admin/users/${username}")
[[ "${code}" == "200" ]]
}
create_user() {
local token="$1" username="$2" email="$3" fname="$4" lname="$5"
local body status fullname
fullname="$(echo "${fname} ${lname}" | xargs)"
if [[ -z "${email}" ]]; then
email="${username}@example.local"
fi
body=$(jq -n --arg u "${username}" --arg e "${email}" --arg p "${DEFAULT_PASSWORD}" \
--arg fn "${fullname}" '{username:$u, email:$e, password:$p, must_change_password:false, full_name:$fn}')
status=$(curl -s -o /dev/null -w '%{http_code}' \
-H "Authorization: token ${token}" \
-H "Content-Type: application/json" \
-X POST \
-d "${body}" \
"${GITEA_BASE_URL}/api/v1/admin/users")
if [[ "${status}" == "201" ]]; then
echo "created gitea user ${username}"
elif [[ "${status}" == "409" ]]; then
echo "gitea user ${username} already exists (409)" >&2
else
echo "failed to create gitea user ${username} (status ${status})" >&2
fi
}
main() {
local kc_token gitea_token
kc_token="$(fetch_token)"
gitea_token="$(get_gitea_token)"
while IFS=$'\t' read -r username email fname lname; do
if user_exists "${gitea_token}" "${username}"; then
continue
fi
create_user "${gitea_token}" "${username}" "${email}" "${fname}" "${lname}"
done < <(pull_users "${kc_token}")
}
main "$@"

87
scripts/gitops_cred_sync.sh Executable file
View File

@ -0,0 +1,87 @@
#!/usr/bin/env bash
# Ensure Keycloak users are in the GitOps admin group used by weave-gitops (cd.bstein.dev).
# Weave GitOps relies on OIDC; membership in the "admin" group maps to cluster-admin via RBAC.
# Requires: curl, jq. Needs a Keycloak client with realm-management (manage-users/groups).
set -euo pipefail
require() { command -v "$1" >/dev/null 2>&1 || { echo "missing required binary: $1" >&2; exit 1; }; }
require curl; require jq
: "${KEYCLOAK_URL:=https://sso.bstein.dev}"
: "${KEYCLOAK_REALM:=atlas}"
: "${KEYCLOAK_CLIENT_ID:?set KEYCLOAK_CLIENT_ID or export via secret}"
: "${KEYCLOAK_CLIENT_SECRET:?set KEYCLOAK_CLIENT_SECRET or export via secret}"
: "${GITOPS_GROUP:=admin}"
# Comma-separated usernames to sync; set SYNC_ALL_USERS=true to include all Keycloak users.
: "${TARGET_USERNAMES:=bstein}"
: "${SYNC_ALL_USERS:=false}"
fetch_token() {
curl -fsS -X POST \
-d "grant_type=client_credentials" \
-d "client_id=${KEYCLOAK_CLIENT_ID}" \
-d "client_secret=${KEYCLOAK_CLIENT_SECRET}" \
"${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}/protocol/openid-connect/token" \
| jq -r '.access_token'
}
ensure_group() {
local token="$1" group="$2" id
id=$(curl -fsS -H "Authorization: Bearer ${token}" \
"${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/groups?search=${group}" \
| jq -r --arg g "${group}" '.[] | select(.name==$g) | .id' | head -n1)
if [[ -n "${id}" ]]; then
echo "${id}"
return
fi
curl -fsS -H "Authorization: Bearer ${token}" \
-H "Content-Type: application/json" \
-d "{\"name\":\"${group}\"}" \
-X POST "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/groups"
# Fetch again to get id
curl -fsS -H "Authorization: Bearer ${token}" \
"${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/groups?search=${group}" \
| jq -r --arg g "${group}" '.[] | select(.name==$g) | .id' | head -n1
}
user_id_by_name() {
local token="$1" username="$2"
curl -fsS -H "Authorization: Bearer ${token}" \
"${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?username=${username}" \
| jq -r '.[0].id'
}
add_user_to_group() {
local token="$1" user_id="$2" group_id="$3" username="$4"
if [[ -z "${user_id}" ]]; then
echo "user ${username} not found in Keycloak; skip" >&2
return
fi
curl -fsS -o /dev/null -w '%{http_code}' \
-H "Authorization: Bearer ${token}" \
-X PUT "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users/${user_id}/groups/${group_id}" \
| grep -qE '^(204|409)$' || echo "failed adding ${username} to group" >&2
}
main() {
local token group_id users=()
token="$(fetch_token)"
group_id="$(ensure_group "${token}" "${GITOPS_GROUP}")"
if [[ "${SYNC_ALL_USERS}" == "true" ]]; then
readarray -t users < <(curl -fsS -H "Authorization: Bearer ${token}" \
"${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?max=500" \
| jq -r '.[] | select(.enabled==true) | .username')
else
IFS=',' read -ra users <<< "${TARGET_USERNAMES}"
fi
for user in "${users[@]}"; do
user="$(echo "${user}" | xargs)"
[[ -z "${user}" ]] && continue
add_user_to_group "${token}" "$(user_id_by_name "${token}" "${user}")" "${group_id}" "${user}"
done
}
main "$@"

94
scripts/jenkins_cred_sync.sh Executable file
View File

@ -0,0 +1,94 @@
#!/usr/bin/env bash
# Sync Keycloak users into Jenkins local accounts (for CLI/API use).
# Jenkins is OIDC-enabled, but local users can still be provisioned for tokens.
# Requires: curl, jq, kubectl. Needs Jenkins admin user+API token.
set -euo pipefail
require() { command -v "$1" >/dev/null 2>&1 || { echo "missing required binary: $1" >&2; exit 1; }; }
require curl; require jq; require kubectl
: "${KEYCLOAK_URL:=https://sso.bstein.dev}"
: "${KEYCLOAK_REALM:=atlas}"
: "${KEYCLOAK_CLIENT_ID:?set KEYCLOAK_CLIENT_ID or export via secret}"
: "${KEYCLOAK_CLIENT_SECRET:?set KEYCLOAK_CLIENT_SECRET or export via secret}"
: "${JENKINS_URL:=https://ci.bstein.dev}"
: "${JENKINS_NAMESPACE:=jenkins}"
: "${JENKINS_ADMIN_SECRET_NAME:=jenkins-admin-token}"
: "${JENKINS_ADMIN_USER_KEY:=username}"
: "${JENKINS_ADMIN_TOKEN_KEY:=token}"
: "${DEFAULT_PASSWORD:=TempSsoPass!2025}"
fetch_token() {
curl -fsS -X POST \
-d "grant_type=client_credentials" \
-d "client_id=${KEYCLOAK_CLIENT_ID}" \
-d "client_secret=${KEYCLOAK_CLIENT_SECRET}" \
"${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}/protocol/openid-connect/token" \
| jq -r '.access_token'
}
pull_users() {
local token="$1"
curl -fsS -H "Authorization: Bearer ${token}" \
"${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?max=500" \
| jq -r '.[] | select(.enabled == true) | select(.username | startswith("service-account-") | not) | [.id, .username, (.email // "")] | @tsv'
}
get_admin_auth() {
local user token
if [[ -n "${JENKINS_ADMIN_USER:-}" && -n "${JENKINS_ADMIN_TOKEN:-}" ]]; then
echo "${JENKINS_ADMIN_USER}:${JENKINS_ADMIN_TOKEN}"
return
fi
user=$(kubectl -n "${JENKINS_NAMESPACE}" get secret "${JENKINS_ADMIN_SECRET_NAME}" -o "jsonpath={.data.${JENKINS_ADMIN_USER_KEY}}" | base64 -d)
token=$(kubectl -n "${JENKINS_NAMESPACE}" get secret "${JENKINS_ADMIN_SECRET_NAME}" -o "jsonpath={.data.${JENKINS_ADMIN_TOKEN_KEY}}" | base64 -d)
echo "${user}:${token}"
}
get_crumb() {
local auth="$1"
curl -fsS -u "${auth}" "${JENKINS_URL}/crumbIssuer/api/json" | jq -r .crumb
}
user_exists() {
local auth="$1" user="$2"
local code
code=$(curl -s -o /dev/null -w '%{http_code}' -u "${auth}" "${JENKINS_URL}/user/${user}/api/json")
[[ "${code}" == "200" ]]
}
create_user() {
local auth="$1" crumb="$2" username="$3" email="$4"
local status
status=$(curl -s -o /dev/null -w '%{http_code}' \
-u "${auth}" \
-H "Jenkins-Crumb: ${crumb}" \
-X POST \
--data "username=${username}&password1=${DEFAULT_PASSWORD}&password2=${DEFAULT_PASSWORD}&fullname=${username}&email=${email}" \
"${JENKINS_URL}/securityRealm/createAccountByAdmin")
if [[ "${status}" == "200" || "${status}" == "302" ]]; then
echo "created jenkins user ${username}"
elif [[ "${status}" == "400" ]]; then
echo "jenkins user ${username} already exists (400)" >&2
else
echo "failed to create jenkins user ${username} (status ${status})" >&2
fi
}
main() {
local kc_token auth crumb
kc_token="$(fetch_token)"
auth="$(get_admin_auth)"
crumb="$(get_crumb "${auth}")"
while IFS=$'\t' read -r _ uid email; do
if user_exists "${auth}" "${uid}"; then
continue
fi
create_user "${auth}" "${crumb}" "${uid}" "${email}"
done < <(pull_users "${kc_token}")
}
main "$@"

View File

@ -1,6 +1,6 @@
#!/usr/bin/env fish
function pvc-usage --description "Show Longhorn PVC usage (human-readable) mapped to namespace/name"
function pvc-usage --description "Show Longhorn PVC usage mapped to namespace/name"
begin
kubectl -n longhorn-system get volumes.longhorn.io -o json \
| jq -r '.items[] | "\(.metadata.name)\t\(.status.actualSize)\t\(.spec.size)"' \

View File

@ -39,7 +39,7 @@ SITES=(
"Jellyfin|https://stream.bstein.dev"
"Gitea|https://scm.bstein.dev"
"Jenkins|https://ci.bstein.dev"
"Zot|https://registry.bstein.dev"
"Harbor|https://registry.bstein.dev"
"Vault|https://secret.bstein.dev"
"Jitsi|https://meet.bstein.dev"
"Grafana|https://metrics.bstein.dev"

View File

@ -0,0 +1,48 @@
# services/bstein-dev-home/backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: bstein-dev-home-backend
namespace: bstein-dev-home
spec:
replicas: 2
revisionHistoryLimit: 3
selector:
matchLabels:
app: bstein-dev-home-backend
template:
metadata:
labels:
app: bstein-dev-home-backend
spec:
nodeSelector:
kubernetes.io/arch: arm64
node-role.kubernetes.io/worker: "true"
imagePullSecrets:
- name: harbor-bstein-robot
containers:
- name: backend
image: registry.bstein.dev/bstein/bstein-dev-home-backend:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /api/healthz
port: http
initialDelaySeconds: 2
periodSeconds: 5
livenessProbe:
httpGet:
path: /api/healthz
port: http
initialDelaySeconds: 10
periodSeconds: 10
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 300m
memory: 256Mi

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: bstein-dev-home-backend
namespace: bstein-dev-home
spec:
selector:
app: bstein-dev-home-backend
ports:
- name: http
port: 80
targetPort: 8080

View File

@ -0,0 +1,48 @@
# services/bstein-dev-home/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: bstein-dev-home-frontend
namespace: bstein-dev-home
spec:
replicas: 2
revisionHistoryLimit: 3
selector:
matchLabels:
app: bstein-dev-home-frontend
template:
metadata:
labels:
app: bstein-dev-home-frontend
spec:
nodeSelector:
kubernetes.io/arch: arm64
node-role.kubernetes.io/worker: "true"
imagePullSecrets:
- name: harbor-bstein-robot
containers:
- name: frontend
image: registry.bstein.dev/bstein/bstein-dev-home-frontend:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
readinessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 2
periodSeconds: 5
livenessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 10
periodSeconds: 10
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 300m
memory: 256Mi

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: bstein-dev-home-frontend
namespace: bstein-dev-home
spec:
selector:
app: bstein-dev-home-frontend
ports:
- name: http
port: 80
targetPort: 80

View File

@ -0,0 +1,48 @@
# services/bstein-dev-home/image.yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: bstein-dev-home-frontend
namespace: bstein-dev-home
spec:
image: registry.bstein.dev/bstein/bstein-dev-home-frontend
interval: 1m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: bstein-dev-home-frontend
namespace: bstein-dev-home
spec:
imageRepositoryRef:
name: bstein-dev-home-frontend
filterTags:
pattern: '^v?(?P<version>[0-9]+\.[0-9]+\.[0-9]+(?:[-.][0-9A-Za-z]+)?)$'
extract: '$version'
policy:
semver:
range: ">=0.1.0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: bstein-dev-home-backend
namespace: bstein-dev-home
spec:
image: registry.bstein.dev/bstein/bstein-dev-home-backend
interval: 1m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: bstein-dev-home-backend
namespace: bstein-dev-home
spec:
imageRepositoryRef:
name: bstein-dev-home-backend
filterTags:
pattern: '^v?(?P<version>[0-9]+\.[0-9]+\.[0-9]+(?:[-.][0-9A-Za-z]+)?)$'
extract: '$version'
policy:
semver:
range: ">=0.1.0"

View File

@ -0,0 +1,31 @@
# services/bstein-dev-home/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: bstein-dev-home
namespace: bstein-dev-home
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts: [ "bstein.dev" ]
secretName: bstein-dev-home-tls
rules:
- host: bstein.dev
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: bstein-dev-home-backend
port: { number: 80 }
- path: /
pathType: Prefix
backend:
service:
name: bstein-dev-home-frontend
port: { number: 80 }

View File

@ -0,0 +1,17 @@
# services/bstein-dev-home/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: bstein-dev-home
resources:
- namespace.yaml
- image.yaml
- frontend-deployment.yaml
- frontend-service.yaml
- backend-deployment.yaml
- backend-service.yaml
- ingress.yaml
images:
- name: registry.bstein.dev/bstein/bstein-dev-home-frontend
newTag: latest # {"$imagepolicy": "bstein-dev-home:bstein-dev-home-frontend:tag"}
- name: registry.bstein.dev/bstein/bstein-dev-home-backend
newTag: latest # {"$imagepolicy": "bstein-dev-home:bstein-dev-home-backend:tag"}

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: bstein-dev-home

View File

@ -0,0 +1,31 @@
# services/ci-demo/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ci-demo
namespace: ci-demo
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ci-demo
template:
metadata:
labels:
app.kubernetes.io/name: ci-demo
spec:
nodeSelector:
hardware: rpi4
containers:
- name: ci-demo
image: registry.bstein.dev/infra/ci-demo:latest
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 2
periodSeconds: 5

View File

@ -0,0 +1,24 @@
# services/ci-demo/image.yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: ci-demo
namespace: ci-demo
spec:
image: registry.bstein.dev/infra/ci-demo
interval: 1m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: ci-demo
namespace: ci-demo
spec:
imageRepositoryRef:
name: ci-demo
filterTags:
pattern: '^v(?P<version>0\.0\.0-\d+)$'
extract: '$version'
policy:
semver:
range: ">=0.0.0-0"

View File

@ -0,0 +1,11 @@
# services/ci-demo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- image.yaml
- deployment.yaml
- service.yaml
images:
- name: registry.bstein.dev/infra/ci-demo
newTag: v0.0.0-2 # {"$imagepolicy": "ci-demo:ci-demo:tag"}

View File

@ -0,0 +1,6 @@
# services/ci-demo/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ci-demo

View File

@ -0,0 +1,14 @@
# services/ci-demo/service.yaml
apiVersion: v1
kind: Service
metadata:
name: ci-demo
namespace: ci-demo
spec:
selector:
app.kubernetes.io/name: ci-demo
ports:
- name: http
port: 80
targetPort: http

View File

@ -35,7 +35,7 @@ spec:
values: ["rpi4"]
containers:
- name: monerod
image: registry.bstein.dev/infra/monerod:0.18.4.1
image: registry.bstein.dev/crypto/monerod:0.18.4.1
command: ["/opt/monero/monerod"]
args:
- --data-dir=/data

View File

@ -32,7 +32,7 @@ spec:
values: ["rpi4"]
containers:
- name: monero-p2pool
image: registry.bstein.dev/infra/monero-p2pool:4.9
image: registry.bstein.dev/crypto/monero-p2pool:4.9
imagePullPolicy: Always
command: ["p2pool"]
args:

View File

@ -21,6 +21,72 @@ spec:
labels:
app: gitea
spec:
initContainers:
- name: configure-oidc
image: gitea/gitea:1.23
securityContext:
runAsUser: 1000
runAsGroup: 1000
env:
- name: CLIENT_ID
valueFrom:
secretKeyRef:
name: gitea-oidc
key: client_id
- name: CLIENT_SECRET
valueFrom:
secretKeyRef:
name: gitea-oidc
key: client_secret
- name: DISCOVERY_URL
valueFrom:
secretKeyRef:
name: gitea-oidc
key: openid_auto_discovery_url
command:
- /bin/bash
- -c
- |
set -euo pipefail
APPINI=/data/gitea/conf/app.ini
BIN=/usr/local/bin/gitea
list="$($BIN -c "$APPINI" admin auth list)"
id=$(echo "$list" | awk '$2=="keycloak"{print $1}')
if [ -n "$id" ]; then
echo "Updating existing auth source id=$id"
$BIN -c "$APPINI" admin auth update-oauth \
--id "$id" \
--name keycloak \
--provider openidConnect \
--key "$CLIENT_ID" \
--secret "$CLIENT_SECRET" \
--auto-discover-url "$DISCOVERY_URL" \
--scopes "openid profile email groups" \
--required-claim-name "" \
--required-claim-value "" \
--group-claim-name groups \
--admin-group admin \
--skip-local-2fa
else
echo "Creating keycloak auth source"
$BIN -c "$APPINI" admin auth add-oauth \
--name keycloak \
--provider openidConnect \
--key "$CLIENT_ID" \
--secret "$CLIENT_SECRET" \
--auto-discover-url "$DISCOVERY_URL" \
--scopes "openid profile email groups" \
--required-claim-name "" \
--required-claim-value "" \
--group-claim-name groups \
--admin-group admin \
--skip-local-2fa
fi
volumeMounts:
- name: gitea-data
mountPath: /data
nodeSelector:
node-role.kubernetes.io/worker: "true"
affinity:
@ -55,6 +121,36 @@ spec:
value: "master"
- name: ROOT_URL
value: "https://scm.bstein.dev"
- name: GITEA__service__ENABLE_OPENID_SIGNIN
value: "true"
- name: GITEA__oauth2_client__ENABLE_AUTO_REGISTRATION
value: "true"
- name: GITEA__service__ALLOW_ONLY_EXTERNAL_REGISTRATION
value: "true"
- name: GITEA__service__DISABLE_REGISTRATION
value: "false"
- name: GITEA__log__LEVEL
value: "trace"
- name: GITEA__service__REQUIRE_SIGNIN_VIEW
value: "false"
- name: GITEA__server__PROXY_HEADERS
value: "X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Host"
- name: GITEA__session__COOKIE_SECURE
value: "true"
- name: GITEA__session__DOMAIN
value: "scm.bstein.dev"
- name: GITEA__session__SAME_SITE
value: "lax"
- name: GITEA__security__SECRET_KEY
valueFrom:
secretKeyRef:
name: gitea-secret
key: SECRET_KEY
- name: GITEA__security__INTERNAL_TOKEN
valueFrom:
secretKeyRef:
name: gitea-secret
key: INTERNAL_TOKEN
- name: DB_TYPE
value: "postgres"
- name: DB_HOST

View File

@ -0,0 +1,13 @@
# services/gitops-ui/certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: gitops-ui-tls
namespace: flux-system
spec:
secretName: gitops-ui-tls
issuerRef:
kind: ClusterIssuer
name: letsencrypt
dnsNames:
- cd.bstein.dev

View File

@ -23,18 +23,15 @@ spec:
remediateLastFailure: true
cleanupOnFail: true
values:
additionalArgs:
- --auth-methods=oidc
adminUser:
create: true
createClusterRole: true
createSecret: true
username: admin
# bcrypt hash for temporary password "G1tOps!2025" (rotate after login)
passwordHash: "$2y$12$wDEOzR1Gc2dbvNSJ3ZXNdOBVFEjC6YASIxnZmHIbO.W1m0fie/QVi"
create: false
ingress:
enabled: true
className: traefik
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.entrypoints: websecure
hosts:
- host: cd.bstein.dev
@ -45,5 +42,7 @@ spec:
- secretName: gitops-ui-tls
hosts:
- cd.bstein.dev
oidcSecret:
create: false
metrics:
enabled: true

View File

@ -5,3 +5,6 @@ namespace: flux-system
resources:
- source.yaml
- helmrelease.yaml
- certificate.yaml
- networkpolicy-acme.yaml
- rbac.yaml

View File

@ -0,0 +1,14 @@
# services/gitops-ui/networkpolicy-acme.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-acme-solver
namespace: flux-system
spec:
podSelector:
matchLabels:
acme.cert-manager.io/http01-solver: "true"
policyTypes:
- Ingress
ingress:
- {}

View File

@ -0,0 +1,15 @@
# services/gitops-ui/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gitops-admins
labels:
app.kubernetes.io/name: weave-gitops
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: Group
name: admin
apiGroup: rbac.authorization.k8s.io

View File

@ -0,0 +1,12 @@
# services/harbor/certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: registry-bstein-dev
namespace: harbor
spec:
secretName: registry-bstein-dev-tls
dnsNames: [ "registry.bstein.dev" ]
issuerRef:
name: letsencrypt
kind: ClusterIssuer

View File

@ -0,0 +1,259 @@
# services/harbor/helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: harbor
namespace: harbor
spec:
interval: 10m
install:
timeout: 20m
remediation:
retries: 3
upgrade:
timeout: 20m
remediation:
retries: 3
remediateLastFailure: true
cleanupOnFail: true
rollback:
timeout: 20m
chart:
spec:
chart: harbor
version: 1.18.1
sourceRef:
kind: HelmRepository
name: harbor
namespace: flux-system
values:
externalURL: https://registry.bstein.dev
imagePullPolicy: IfNotPresent
expose:
type: ingress
tls:
enabled: true
certSource: secret
secret:
secretName: registry-bstein-dev-tls
ingress:
className: traefik
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
hosts:
core: registry.bstein.dev
persistence:
enabled: true
resourcePolicy: keep
persistentVolumeClaim:
registry:
existingClaim: harbor-registry
accessMode: ReadWriteOnce
size: 50Gi
jobservice:
jobLog:
existingClaim: harbor-jobservice-logs
accessMode: ReadWriteOnce
size: 5Gi
imageChartStorage:
type: filesystem
filesystem:
rootdirectory: /storage
database:
type: external
external:
host: postgres-service.postgres.svc.cluster.local
port: "5432"
username: harbor
coreDatabase: harbor
existingSecret: harbor-db
sslmode: disable
redis:
type: internal
internal:
image:
repository: registry.bstein.dev/infra/harbor-redis
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-redis:tag"}
nodeSelector:
kubernetes.io/hostname: titan-05
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values: ["arm64"]
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 90
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi5"]
- weight: 50
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi4"]
trivy:
enabled: false
metrics:
enabled: false
cache:
enabled: false
existingSecretAdminPassword: harbor-core
existingSecretAdminPasswordKey: harbor_admin_password
existingSecretSecretKey: harbor-core
core:
image:
repository: registry.bstein.dev/infra/harbor-core
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-core:tag"}
nodeSelector:
kubernetes.io/hostname: titan-05
existingSecret: harbor-core
existingXsrfSecret: harbor-core
existingXsrfSecretKey: CSRF_KEY
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values: ["arm64"]
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 90
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi5"]
- weight: 50
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi4"]
jobservice:
image:
repository: registry.bstein.dev/infra/harbor-jobservice
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-jobservice:tag"}
nodeSelector:
kubernetes.io/hostname: titan-05
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values: ["arm64"]
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 90
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi5"]
- weight: 50
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi4"]
portal:
image:
repository: registry.bstein.dev/infra/harbor-portal
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-portal:tag"}
nodeSelector:
kubernetes.io/hostname: titan-05
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values: ["arm64"]
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 90
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi5"]
- weight: 50
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi4"]
registry:
registry:
image:
repository: registry.bstein.dev/infra/harbor-registry
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-registry:tag"}
controller:
image:
repository: registry.bstein.dev/infra/harbor-registryctl
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-registryctl:tag"}
nodeSelector:
kubernetes.io/hostname: titan-05
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values: ["arm64"]
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 90
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi5"]
- weight: 50
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi4"]
nginx:
image:
repository: registry.bstein.dev/infra/harbor-nginx
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-nginx:tag"}
nodeSelector:
kubernetes.io/hostname: titan-05
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values: ["arm64"]
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 90
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi5"]
- weight: 50
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi4"]
prepare:
image:
repository: registry.bstein.dev/infra/harbor-prepare
tag: v2.14.1-arm64 # {"$imagepolicy": "harbor:harbor-prepare:tag"}
updateStrategy:
type: Recreate

192
services/harbor/image.yaml Normal file
View File

@ -0,0 +1,192 @@
# services/harbor/image.yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-core
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-core
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-core
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-core
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-jobservice
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-jobservice
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-jobservice
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-jobservice
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-portal
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-portal
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-portal
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-portal
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-registry
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-registry
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-registry
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-registry
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-registryctl
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-registryctl
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-registryctl
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-registryctl
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-redis
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-redis
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-redis
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-redis
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-nginx
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-nginx
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-nginx
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-nginx
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: harbor-prepare
namespace: harbor
spec:
image: registry.bstein.dev/infra/harbor-prepare
interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: harbor-prepare
namespace: harbor
spec:
imageRepositoryRef:
name: harbor-prepare
filterTags:
pattern: '^v(?P<version>\d+\.\d+\.\d+-arm64(\.\d+)?)$'
extract: '$version'
policy:
semver:
range: ">=2.14.0-0 <2.15.0-0"

View File

@ -0,0 +1,10 @@
# services/harbor/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: harbor
resources:
- namespace.yaml
- pvc.yaml
- certificate.yaml
- helmrelease.yaml
- image.yaml

View File

@ -0,0 +1,5 @@
# services/harbor/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: harbor

24
services/harbor/pvc.yaml Normal file
View File

@ -0,0 +1,24 @@
# services/harbor/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: harbor-registry
namespace: harbor
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 50Gi
storageClassName: astreae
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: harbor-jobservice-logs
namespace: harbor
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
storageClassName: astreae

View File

@ -0,0 +1,314 @@
# services/jenkins/helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: jenkins
namespace: jenkins
spec:
interval: 30m
chart:
spec:
chart: jenkins
version: 5.8.114
sourceRef:
kind: HelmRepository
name: jenkins
namespace: flux-system
install:
timeout: 15m
remediation:
retries: 3
upgrade:
timeout: 15m
remediation:
retries: 3
remediateLastFailure: true
cleanupOnFail: true
rollback:
timeout: 15m
values:
controller:
nodeSelector:
hardware: rpi4
jenkinsUrl: https://ci.bstein.dev
ingress:
enabled: true
hostName: ci.bstein.dev
ingressClassName: traefik
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.entrypoints: websecure
tls:
- secretName: jenkins-tls
hosts:
- ci.bstein.dev
installPlugins:
- kubernetes
- workflow-aggregator
- git
- configuration-as-code
- oic-auth
- job-dsl
- configuration-as-code-support
containerEnv:
- name: ENABLE_OIDC
value: "true"
- name: OIDC_ISSUER
value: "https://sso.bstein.dev/realms/atlas"
- name: OIDC_CLIENT_ID
valueFrom:
secretKeyRef:
name: jenkins-oidc
key: clientId
- name: OIDC_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: jenkins-oidc
key: clientSecret
- name: OIDC_AUTH_URL
valueFrom:
secretKeyRef:
name: jenkins-oidc
key: authorizationUrl
- name: OIDC_TOKEN_URL
valueFrom:
secretKeyRef:
name: jenkins-oidc
key: tokenUrl
- name: OIDC_USERINFO_URL
valueFrom:
secretKeyRef:
name: jenkins-oidc
key: userInfoUrl
- name: OIDC_LOGOUT_URL
valueFrom:
secretKeyRef:
name: jenkins-oidc
key: logoutUrl
- name: GITEA_PAT_USERNAME
valueFrom:
secretKeyRef:
name: gitea-pat
key: username
- name: GITEA_PAT_TOKEN
valueFrom:
secretKeyRef:
name: gitea-pat
key: token
customInitContainers:
- name: clean-jcasc-stale
image: alpine:3.20
imagePullPolicy: IfNotPresent
command:
- sh
- -c
- |
set -euo pipefail
rm -f /var/jenkins_home/casc_configs/* || true
securityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
initScripts:
oidc.groovy: |
import hudson.util.Secret
import jenkins.model.IdStrategy
import jenkins.model.Jenkins
import org.jenkinsci.plugins.oic.OicSecurityRealm
import org.jenkinsci.plugins.oic.OicServerWellKnownConfiguration
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy
def env = System.getenv()
if (!(env['ENABLE_OIDC'] ?: 'false').toBoolean()) {
println("OIDC disabled (ENABLE_OIDC=false); keeping default security realm")
return
}
def required = ['OIDC_CLIENT_ID','OIDC_CLIENT_SECRET','OIDC_ISSUER']
if (!required.every { env[it] }) {
throw new IllegalStateException("OIDC enabled but missing vars: ${required.findAll { !env[it] }}")
}
try {
def wellKnown = "${env['OIDC_ISSUER']}/.well-known/openid-configuration"
def serverCfg = new OicServerWellKnownConfiguration(wellKnown)
serverCfg.setScopesOverride('openid profile email')
def realm = new OicSecurityRealm(
env['OIDC_CLIENT_ID'],
Secret.fromString(env['OIDC_CLIENT_SECRET']),
serverCfg,
false,
IdStrategy.CASE_INSENSITIVE,
IdStrategy.CASE_INSENSITIVE
)
realm.createProxyAwareResourceRetriver()
realm.setLogoutFromOpenidProvider(true)
realm.setPostLogoutRedirectUrl('https://ci.bstein.dev')
realm.setUserNameField('preferred_username')
realm.setFullNameFieldName('name')
realm.setEmailFieldName('email')
realm.setGroupsFieldName('groups')
realm.setRootURLFromRequest(true)
realm.setSendScopesInTokenRequest(true)
def j = Jenkins.get()
j.setSecurityRealm(realm)
def auth = new FullControlOnceLoggedInAuthorizationStrategy()
auth.setAllowAnonymousRead(false)
j.setAuthorizationStrategy(auth)
j.save()
println("Configured OIDC realm from init script (well-known)")
} catch (Exception e) {
println("Failed to configure OIDC realm: ${e}")
throw e
}
JCasC:
defaultConfig: false
securityRealm: ""
authorizationStrategy: ""
configScripts:
base.yaml: |
jenkins:
disableRememberMe: false
mode: NORMAL
numExecutors: 0
labelString: ""
projectNamingStrategy: "standard"
markupFormatter:
plainText
clouds:
- kubernetes:
containerCapStr: "10"
defaultsProviderTemplate: ""
connectTimeout: "5"
readTimeout: "15"
jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
skipTlsVerify: false
usageRestricted: false
maxRequestsPerHostStr: "32"
retentionTimeout: "5"
waitForPodSec: "600"
name: "kubernetes"
namespace: "jenkins"
restrictedPssSecurityContext: false
serverUrl: "https://kubernetes.default"
credentialsId: ""
podLabels:
- key: "jenkins/jenkins-jenkins-agent"
value: "true"
templates:
- name: "default"
namespace: "jenkins"
id: a23c9bbcd21e360a77d51b426f05bd7b8032d8fdedd6ffb97c436883ce6c5ffa
containers:
- name: "jnlp"
alwaysPullImage: false
args: "^${computer.jnlpmac} ^${computer.name}"
envVars:
- envVar:
key: "JENKINS_URL"
value: "http://jenkins.jenkins.svc.cluster.local:8080/"
image: "jenkins/inbound-agent:3355.v388858a_47b_33-3"
privileged: "false"
resourceLimitCpu: 512m
resourceLimitMemory: 512Mi
resourceRequestCpu: 512m
resourceRequestMemory: 512Mi
ttyEnabled: false
workingDir: /home/jenkins/agent
idleMinutes: 0
instanceCap: 2147483647
label: "jenkins-jenkins-agent "
nodeUsageMode: "NORMAL"
podRetention: Never
showRawYaml: true
serviceAccount: "default"
slaveConnectTimeoutStr: "100"
yamlMergeStrategy: override
inheritYamlMergeStrategy: false
slaveAgentPort: 50000
crumbIssuer:
standard:
excludeClientIPFromCrumb: true
security:
apiToken:
creationOfLegacyTokenEnabled: false
tokenGenerationOnCreationEnabled: false
usageStatisticsEnabled: true
unclassified:
creds.yaml: |
credentials:
system:
domainCredentials:
- credentials:
- usernamePassword:
scope: GLOBAL
id: gitea-pat
username: "${GITEA_PAT_USERNAME}"
password: "${GITEA_PAT_TOKEN}"
description: "Gitea PAT for pipelines"
jobs.yaml: |
jobs:
- script: |
pipelineJob('harbor-arm-build') {
triggers {
scm('H/5 * * * *')
}
definition {
cpsScm {
scm {
git {
remote {
url('https://scm.bstein.dev/bstein/harbor-arm-build.git')
credentials('gitea-pat')
}
branches('*/master')
}
}
}
}
}
pipelineJob('ci-demo') {
triggers {
scm('H/1 * * * *')
}
definition {
cpsScm {
scm {
git {
remote {
url('https://scm.bstein.dev/bstein/ci-demo.git')
credentials('gitea-pat')
}
branches('*/master')
}
}
scriptPath('Jenkinsfile')
}
}
}
pipelineJob('bstein-dev-home') {
triggers {
scm('H/2 * * * *')
}
definition {
cpsScm {
scm {
git {
remote {
url('https://scm.bstein.dev/bstein/bstein-dev-home.git')
credentials('gitea-pat')
}
branches('*/master')
}
}
scriptPath('Jenkinsfile')
}
}
}
persistence:
enabled: true
storageClass: astreae
size: 50Gi
serviceAccount:
create: true

View File

@ -0,0 +1,7 @@
# services/jenkins/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: jenkins
resources:
- namespace.yaml
- helmrelease.yaml

View File

@ -0,0 +1,5 @@
# services/jenkins/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: jenkins

View File

@ -1,27 +0,0 @@
# services/keycloak
Keycloak is deployed via raw manifests and backed by the shared Postgres (`postgres-service.postgres.svc.cluster.local:5432`). Create these secrets before applying:
```bash
# DB creds (per-service DB/user in shared Postgres)
kubectl -n sso create secret generic keycloak-db \
--from-literal=username=keycloak \
--from-literal=password='<DB_PASSWORD>' \
--from-literal=database=keycloak
# Admin console creds (maps to KC admin user)
kubectl -n sso create secret generic keycloak-admin \
--from-literal=username=brad@bstein.dev \
--from-literal=password='<ADMIN_PASSWORD>'
```
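For orientation, the deployment consumes these secrets through `secretKeyRef` environment variables roughly as sketched below. The secret names and keys match the commands above, but the env var names here are assumptions (they vary by Keycloak version and are not shown in this diff):
```yaml
# Illustrative container-level excerpt — env var names are assumptions, not copied from the manifest.
env:
  - name: KC_DB_USERNAME
    valueFrom:
      secretKeyRef: { name: keycloak-db, key: username }
  - name: KC_DB_PASSWORD
    valueFrom:
      secretKeyRef: { name: keycloak-db, key: password }
  - name: KEYCLOAK_ADMIN            # KC_BOOTSTRAP_ADMIN_USERNAME on Keycloak >= 26
    valueFrom:
      secretKeyRef: { name: keycloak-admin, key: username }
  - name: KEYCLOAK_ADMIN_PASSWORD   # KC_BOOTSTRAP_ADMIN_PASSWORD on Keycloak >= 26
    valueFrom:
      secretKeyRef: { name: keycloak-admin, key: password }
```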
Apply:
```bash
kubectl apply -k services/keycloak
```
Notes:
- Service: `keycloak.sso.svc:80` (Ingress `sso.bstein.dev`, TLS via cert-manager).
- Uses the Postgres schema `public`; the DB and user must be provisioned in the shared Postgres instance beforehand.
- Keycloak's health endpoints on the :9000 management port back the readiness and liveness probes (see the sketch below).
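A container-level excerpt of the probe shape that note refers to, assuming `KC_HEALTH_ENABLED=true` and Keycloak's default `/health/ready` and `/health/live` paths on the :9000 management port (timings are illustrative, not the manifest's actual values):
```yaml
# Sketch only — mirrors the wiring described above; tune delays to the real deployment.
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9000
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /health/live
    port: 9000
  initialDelaySeconds: 60
  periodSeconds: 15
```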

View File

@ -48,8 +48,6 @@ spec:
runAsGroup: 0
fsGroup: 1000
fsGroupChangePolicy: OnRootMismatch
imagePullSecrets:
- name: zot-regcred
initContainers:
- name: mailu-http-listener
image: registry.bstein.dev/sso/mailu-http-listener:0.1.0

View File

@ -245,14 +245,18 @@
},
{
"color": "orange",
"value": 0.999
"value": 0.99
},
{
"color": "yellow",
"value": 0.9999
"value": 0.999
},
{
"color": "green",
"value": 0.9999
},
{
"color": "blue",
"value": 0.99999
}
]

View File

@ -56,8 +56,6 @@ spec:
volumeMounts:
- name: pod-resources
mountPath: /var/lib/kubelet/pod-resources
imagePullSecrets:
- name: zot-regcred
volumes:
- name: pod-resources
hostPath:

View File

@ -254,14 +254,18 @@ data:
},
{
"color": "orange",
"value": 0.999
"value": 0.99
},
{
"color": "yellow",
"value": 0.9999
"value": 0.999
},
{
"color": "green",
"value": 0.9999
},
{
"color": "blue",
"value": 0.99999
}
]

View File

@ -6,7 +6,7 @@ metadata:
namespace: sso
spec:
forwardAuth:
address: http://oauth2-proxy.sso.svc.cluster.local:4180/oauth2/auth
address: http://oauth2-proxy.sso.svc.cluster.local/oauth2/auth
trustForwardHeader: true
authResponseHeaders:
- Authorization

View File

@ -17,8 +17,6 @@ spec:
spec:
nodeSelector:
kubernetes.io/arch: amd64
imagePullSecrets:
- name: zot-regcred
securityContext:
runAsNonRoot: true
runAsUser: 65532
@ -58,7 +56,7 @@ spec:
containers:
- name: pegasus
image: registry.bstein.dev/pegasus:1.2.32 # {"$imagepolicy": "jellyfin:pegasus"}
image: registry.bstein.dev/streaming/pegasus:1.2.32 # {"$imagepolicy": "jellyfin:pegasus"}
imagePullPolicy: Always
command: ["/pegasus"]
env:

View File

@ -5,10 +5,8 @@ metadata:
name: pegasus
namespace: jellyfin
spec:
image: registry.bstein.dev/pegasus
image: registry.bstein.dev/streaming/pegasus
interval: 1m0s
secretRef:
name: zot-regcred
---

View File

@ -1,47 +0,0 @@
# services/zot/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: zot-config
namespace: zot
data:
config.json: |
{
"storage": {
"rootDirectory": "/var/lib/registry",
"dedupe": true,
"gc": true,
"gcDelay": "1h",
"gcInterval": "1h"
},
"http": {
"address": "0.0.0.0",
"port": "5000",
"realm": "zot-registry",
"compat": ["docker2s2"],
"auth": {
"htpasswd": { "path": "/etc/zot/htpasswd" }
},
"accessControl": {
"repositories": {
"**": {
"policies": [
{ "users": ["bstein"], "actions": ["read", "create", "update", "delete"] }
],
"defaultPolicy": [],
"anonymousPolicy": []
}
},
"adminPolicy": {
"users": ["bstein"],
"actions": ["read", "create", "update", "delete"]
}
}
},
"log": { "level": "info" },
"extensions": {
"ui": { "enable": true },
"search": { "enable": true },
"metrics": { "enable": true }
}
}

View File

@ -1,72 +0,0 @@
# services/zot/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: zot
namespace: zot
labels: { app: zot }
spec:
replicas: 1
selector:
matchLabels: { app: zot }
template:
metadata:
labels: { app: zot }
spec:
nodeSelector:
node-role.kubernetes.io/worker: "true"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: hardware
operator: In
values: ["rpi4","rpi5"]
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 50
preference:
matchExpressions:
- key: hardware
operator: In
values: ["rpi4"]
containers:
- name: zot
image: ghcr.io/project-zot/zot-linux-arm64:v2.1.8
imagePullPolicy: IfNotPresent
args: ["serve", "/etc/zot/config.json"]
ports:
- { name: http, containerPort: 5000 }
volumeMounts:
- name: cfg
mountPath: /etc/zot/config.json
subPath: config.json
readOnly: true
- name: htpasswd
mountPath: /etc/zot/htpasswd
subPath: htpasswd
readOnly: true
- name: zot-data
mountPath: /var/lib/registry
readinessProbe:
tcpSocket:
port: 5000
initialDelaySeconds: 2
periodSeconds: 5
livenessProbe:
tcpSocket:
port: 5000
initialDelaySeconds: 5
periodSeconds: 10
resources:
requests: { cpu: "50m", memory: "64Mi" }
volumes:
- name: cfg
configMap:
name: zot-config
- name: htpasswd
secret:
secretName: zot-htpasswd
- name: zot-data
persistentVolumeClaim:
claimName: zot-data

View File

@ -1,27 +0,0 @@
# services/zot/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: zot
namespace: zot
annotations:
cert-manager.io/cluster-issuer: letsencrypt
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
traefik.ingress.kubernetes.io/router.middlewares: zot-zot-resp-headers@kubernetescrd
spec:
ingressClassName: traefik
tls:
- hosts: [ "registry.bstein.dev" ]
secretName: registry-bstein-dev-tls
rules:
- host: registry.bstein.dev
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: zot
port:
number: 5000

View File

@ -1,11 +0,0 @@
# services/zot/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- pvc.yaml
- deployment.yaml
- configmap.yaml
- service.yaml
- ingress.yaml
- middleware.yaml

View File

@ -1,26 +0,0 @@
# services/zot/middleware.yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: zot-resp-headers
namespace: zot
spec:
headers:
customResponseHeaders:
Docker-Distribution-Api-Version: "registry/2.0"
accessControlAllowOriginList:
- "*"
accessControlAllowCredentials: true
accessControlAllowHeaders:
- Authorization
- Content-Type
- Docker-Distribution-Api-Version
- X-Registry-Auth
accessControlAllowMethods:
- GET
- HEAD
- OPTIONS
- POST
- PUT
- PATCH
- DELETE

View File

@ -1,5 +0,0 @@
# services/zot/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: zot

View File

@ -1,13 +0,0 @@
# services/zot/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zot-data
namespace: zot
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 25Gi
storageClassName: asteria

View File

@ -1,14 +0,0 @@
# services/zot/service.yaml
apiVersion: v1
kind: Service
metadata:
name: zot
namespace: zot
labels: { app: zot }
spec:
type: ClusterIP
selector: { app: zot }
ports:
- name: http
port: 5000
targetPort: 5000