Compare commits: main...feature/ci
49 commits (SHA1):
8b9fc8ff1c, 3066db793d, 759a77c745, c661658a12, 144a860a88, bd64a36165,
22b611f8ea, a8bde2edc7, d51a19cab9, 3e3cab6845, 9cda32c0bf, 0f49849761,
252743e416, dba7cf00a4, aa0df1f62b, aa2bb09873, 54406661f2, caef505677,
54eb9e1ac5, 1899bb7677, 0416493f49, b87f06f6ff, 828f66d18c, 7a1f3bfc3f,
294542e718, c3a8c7ddae, 29da4be557, fc5b0cccf8, c8b89c3120, 9b994111cb,
a174e451d9, d8dab08cd8, 0d93929e3d, 2ffc906487, 37761fa118, a46226bb0a,
04602a2914, fc0fa59981, 0286f4f317, 90bf1f7d8e, 6def1aa479, 4eff9ebcc1,
ccfc473521, b575c64de1, 39d732d74d, b28e393524, 694bb4d12e, 6993f51ef7,
85cea34fe8
.gitignore (vendored) | 2 changes
@@ -1,5 +1,5 @@
-# Ignore markdown by default, but keep top-level docs
 *.md
 !README.md
 !AGENTS.md
 !**/NOTES.md
+!NOTES.md
AGENTS.md | deleted (81 lines)
@@ -1,81 +0,0 @@
-
-
-Repository Guidelines
-
-> Local-only note: apply changes through Flux-tracked manifests, not by manual kubectl edits in-cluster—manual tweaks will be reverted by Flux.
-
-## Project Structure & Module Organization
-- `infrastructure/`: cluster-scoped building blocks (core, flux-system, traefik, longhorn). Add new platform features by mirroring this layout.
-- `services/`: workload manifests per app (`services/gitea/`, etc.) with `kustomization.yaml` plus one file per kind; keep diffs small and focused.
-- `dockerfiles/` hosts bespoke images, while `scripts/` stores operational Fish/Bash helpers—extend these directories instead of relying on ad-hoc commands.
-
-## Build, Test, and Development Commands
-- `kustomize build services/<app>` (or `kubectl kustomize ...`) renders manifests exactly as Flux will.
-- `kubectl apply --server-side --dry-run=client -k services/<app>` checks schema compatibility without touching the cluster.
-- `flux reconcile kustomization <name> --namespace flux-system --with-source` pulls the latest Git state after merges or hotfixes.
-- `fish scripts/flux_hammer.fish --help` explains the recovery tool; read it before running against production workloads.
-
-## Coding Style & Naming Conventions
-- YAML uses two-space indents; retain the leading path comment (e.g. `# services/gitea/deployment.yaml`) to speed code review.
-- Keep resource names lowercase kebab-case, align labels/selectors, and mirror namespaces with directory names.
-- List resources in `kustomization.yaml` from namespace/config, through storage, then workloads and networking for predictable diffs.
-- Scripts start with `#!/usr/bin/env fish` or bash, stay executable, and follow snake_case names such as `flux_hammer.fish`.
-
-## Testing Guidelines
-- Run `kustomize build` and the dry-run apply for every service you touch; capture failures before opening a PR.
-- `flux diff kustomization <name> --path services/<app>` previews reconciliations—link notable output when behavior shifts.
-- Docker edits: `docker build -f dockerfiles/Dockerfile.monerod .` (swap the file you changed) to verify image builds.
-
-## Commit & Pull Request Guidelines
-- Keep commit subjects short, present-tense, and optionally scoped (`gpu(titan-24): add RuntimeClass`); squash fixups before review.
-- Describe linked issues, affected services, and required operator steps (e.g. `flux reconcile kustomization services-gitea`) in the PR body.
-- Focus each PR on one kustomization or service and update `infrastructure/flux-system` when Flux must track new folders.
-- Record the validation you ran (dry-runs, diffs, builds) and add screenshots only when ingress or UI behavior changes.
-
-## Security & Configuration Tips
-- Never commit credentials; use Vault workflows (`services/vault/`) or SOPS-encrypted manifests wired through `infrastructure/flux-system`.
-- Node selectors and tolerations gate workloads to hardware like `hardware: rpi4`; confirm labels before scaling or renaming nodes.
-- Pin external images by digest or rely on Flux image automation to follow approved tags and avoid drift.
-
-## Dashboard roadmap / context (2025-12-02)
-- Atlas dashboards are generated via `scripts/dashboards_render_atlas.py --build`, which writes JSON under `services/monitoring/dashboards/` and ConfigMaps under `services/monitoring/`. Keep the Grafana manifests in sync by regenerating after edits.
-- Atlas Overview panels are paired with internal dashboards (pods, nodes, storage, network, GPU). A new `atlas-gpu` internal dashboard holds the detailed GPU metrics that feed the overview share pie.
-- Old Grafana folders (`Atlas Storage`, `Atlas SRE`, `Atlas Public`, `Atlas Nodes`) should be removed in Grafana UI when convenient; only `Atlas Overview` and `Atlas Internal` should remain provisioned.
-- Future work: add a separate generator (e.g., `dashboards_render_oceanus.py`) for SUI/oceanus validation dashboards, mirroring the atlas pattern of internal dashboards feeding a public overview.
-
-## Monitoring state (2025-12-03)
-- dcgm-exporter DaemonSet pulls `registry.bstein.dev/monitoring/dcgm-exporter:4.4.2-4.7.0-ubuntu22.04` with nvidia runtime/imagePullSecret; titan-24 exports metrics, titan-22 remains NotReady.
-- Atlas Overview is the Grafana home (1h range, 1m refresh), Overview folder UID `overview`, internal folder `atlas-internal` (oceanus-internal stub).
-- Panels standardized via generator; hottest row compressed, worker/control rows taller, root disk row taller and top12 bar gauge with labels. GPU share pie uses 1h avg_over_time to persist idle activity.
-- Internal dashboards are provisioned without Viewer role; if anonymous still sees them, restart Grafana and tighten auth if needed.
-- GPU share panel updated (feature/sso) to use `max_over_time(…[$__range])`, so longer ranges (e.g., 12h) keep recent activity visible. Flux tracking `feature/sso`.
-
-## Upcoming priorities (SSO/storage/mail)
-- Establish SSO (Keycloak or similar) and federate Grafana, Gitea, Zot, Nextcloud, Pegasus/Jellyfin; keep Vaultwarden separate until safe.
-- Add Nextcloud (limit to rpi5 workers) with office suite; integrate with SSO; plan storage class and ingress.
-- Plan mail: mostly self-hosted, relay through trusted provider for outbound; integrate with services (Nextcloud, Vaultwarden, etc.) for notifications and account flows.
-
-## SSO plan sketch (2025-12-03)
-- IdP: use Keycloak (preferred) in a new `sso` namespace, Bitnami or codecentric chart with Postgres backing store (single PVC), ingress `sso.bstein.dev`, admin user bound to brad@bstein.dev; stick with local DB initially (no external IdP).
-- Auth flow goals: Grafana (OIDC), Gitea (OAuth2/Keycloak), Zot (via Traefik forward-auth/oauth2-proxy), Jellyfin/Pegasus via Jellyfin OAuth/OpenID plugin (map existing usernames; run migration to pre-create users in Keycloak with same usernames/emails and temporary passwords), Pegasus keeps using Jellyfin tokens.
-- Steps to implement:
-  1) Add service folder `services/keycloak/` (namespace, PVC, HelmRelease, ingress, secret for admin creds). Verify with kustomize + Flux reconcile.
-  2) Seed realm `atlas` with users (import CSV/realm). Create client for Grafana (public/implicit), Gitea (confidential), and a “jellyfin” client for the OAuth plugin; set email for brad@bstein.dev as admin.
-  3) Reconfigure Grafana to OIDC (disable anonymous to internal folders, leave Overview public via folder permissions). Reconfigure Gitea to OIDC (app.ini).
-  4) Add Traefik forward-auth (oauth2-proxy) in front of Zot and any other services needing headers-based auth.
-  5) Deploy Jellyfin OpenID plugin; map Keycloak users to existing Jellyfin usernames; communicate password reset path.
-- Migration caution: do not delete existing local creds until SSO validated; keep Pegasus working via Jellyfin tokens during transition.
-
-## Postgres centralization (2025-12-03)
-- Prefer a shared in-cluster Postgres deployment with per-service databases to reduce resource sprawl on Pi nodes. Use it for services that can easily point at an external DB.
-- Candidates to migrate to shared Postgres: Keycloak (realm DB), Gitea (git DB), Nextcloud (app DB), possibly Grafana (if persistence needed beyond current provisioner), Jitsi prosody/JVB state (if external DB supported). Keep tightly-coupled or lightweight embedded DBs as-is when migration is painful or not supported.
-
-## SSO integration snapshot (2025-12-08)
-- Current blockers: Zot still prompts for basic auth/double-login; Vault still wants the token UI after Keycloak (previously 502/404 when vault-0 sealed). Forward-auth middleware on Zot Ingress likely still causing the 401/Found hop; Vault OIDC mount not completing UI flow unless unsealed and preferred login is set.
-- Flux-only changes required: remove zot forward-auth middleware from Ingress (let oauth2-proxy handle redirect), ensure Vault OIDC mount is preferred UI login and bound to admin group; keep all edits in repo so Flux enforces them.
-- Secrets present (per user): `zot-oidc-client` (client_secret only), `oauth2-proxy-zot-oidc`, `oauth2-proxy-vault-oidc`, `vault-oidc-admin-token`. Zot needs its regcred in the zot namespace if image pulls fail.
-- Cluster validation blocked here: `kubectl get nodes` fails (403/permission) and DNS to `*.bstein.dev` fails in this session, so no live curl verification could be run. Re-test on a host with cluster/DNS access after Flux applies fixes.
-
-## Docs hygiene
-- Do not add per-service `README.md` files; use `NOTES.md` if documentation is needed inside service folders. Keep only the top-level repo README.
-- Keep comments succinct and in a human voice—no AI-sounding notes. Use `NOTES.md` for scratch notes instead of sprinkling reminders into code or extra READMEs.
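
The deleted guidelines above spell out the per-service validation loop (render with kustomize, then a dry-run apply). A minimal sketch of that loop as a standalone helper, hypothetical and not a file in this diff, with the commands and flags quoted from the guidelines:

```python
#!/usr/bin/env python3
"""Hypothetical helper: run the validation loop the deleted guidelines describe."""
import subprocess
import sys
from pathlib import Path


def validate(app_dir: Path) -> bool:
    """Render manifests the way Flux will, then dry-run apply for schema checks."""
    checks = [
        ["kustomize", "build", str(app_dir)],
        ["kubectl", "apply", "--server-side", "--dry-run=client", "-k", str(app_dir)],
    ]
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAIL: {' '.join(cmd)}\n{result.stderr}", file=sys.stderr)
            return False
    return True


if __name__ == "__main__":
    # Example: python3 validate_services.py services/gitea services/harbor
    targets = [Path(p) for p in sys.argv[1:]]
    sys.exit(0 if all(validate(t) for t in targets) else 1)
```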
NOTES.md | 40 changes
@@ -1,3 +1,39 @@
-# Rotation reminders (temporary secrets set by automation)
-
-- Weave GitOps UI (`cd.bstein.dev`) admin: `admin` / `G1tOps!2025` — rotate immediately after first login.
+AC Infinity metrics → Grafana plan
+==================================
+
+Goal: expose tent ambient temperature and fan speeds (inlet indoor/outdoor, outlet, internal) on metrics.bstein.dev overview. Keep the footprint minimal; avoid full Home Assistant if possible.
+
+Option A (slim exporter; recommended)
+- Use the community AC Infinity Python client (from the Home Assistant custom component) wrapped in a tiny Prometheus exporter.
+- Flow:
+  1) Deploy a `Deployment` in `monitoring` (e.g., `acinfinity-exporter`) that:
+     - Reads AC Infinity cloud creds from a `Secret` (populated via Vault).
+     - Polls the AC Infinity cloud API on an interval (e.g., 30–60s) using the client to fetch current sensor state.
+     - Exposes `/metrics` with gauges: `acinfinity_temp_c`, `acinfinity_humidity_percent`, `acinfinity_fan_speed_percent{fan="inlet_indoor|inlet_outdoor|outlet|internal"}`, `acinfinity_mode`, `acinfinity_alarm`, etc.
+  2) Add a `Service` + `ServiceMonitor` to scrape the exporter. Metric relabel to friendly names if needed.
+  3) Update `scripts/dashboards_render_atlas.py` to add:
+     - Ambient temp gauge (°C/°F) on Overview.
+     - Fan speed gauges (3–4 panels) for inlet/outlet/internal.
+  4) Regenerate dashboards (`python3 scripts/dashboards_render_atlas.py --build`), commit, push, reconcile `monitoring`.
+- Pros: minimal footprint, no HA. Cons: need to vendor the client library and keep it in sync if AC Infinity changes their API.
+
+Option B (minimal Home Assistant)
+- Run a stripped-down Home Assistant in `monitoring` with only the AC Infinity custom integration and Prometheus exporter enabled.
+- Flow:
+  1) HA `Deployment` + PVC or emptyDir for config; `Secret` for AC Infinity creds; HA API password in Secret.
+  2) Prometheus scrapes `http://ha-acinfinity:8123/api/prometheus?api_password=...` via `ServiceMonitor`.
+  3) Same dashboard steps as above after metric relabeling to `acinfinity_*`.
+- Pros: reuse maintained HA integration. Cons: heavier than exporter, HA maintenance overhead.
+
+Secrets and ops
+- Store AC Infinity username/password in Vault; template into a Secret consumed by the exporter/HA.
+- Network: allow outbound HTTPS to AC Infinity cloud from the Pod.
+- Interval: 30–60s polling is usually enough; avoid hammering the API.
+
+Implementation sketch (Option A)
+- New image `monitoring/acinfinity-exporter` (Python + prometheus_client + AC Infinity client).
+- Deployment (namespace `monitoring`): env AC_INFINITY_USER/PASS, POLL_INTERVAL, LISTEN_ADDR.
+- Service: `port: 9100` (or similar).
+- ServiceMonitor: scrape `/metrics` every 30–60s, metricRelabel to normalize names/labels.
+- Dashboard panels: add to Overview top/mid rows; regenerate JSONs; push; reconcile `monitoring`.
+
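
A minimal sketch of the Option A exporter those notes describe, assuming a hypothetical `StubClient` in place of the community AC Infinity client; the gauge names, port, and `POLL_INTERVAL` env var come from the plan itself:

```python
#!/usr/bin/env python3
"""Sketch of the Option A exporter; the real AC Infinity client would replace StubClient."""
import os
import time

from prometheus_client import Gauge, start_http_server

TEMP_C = Gauge("acinfinity_temp_c", "Tent ambient temperature (Celsius)")
HUMIDITY = Gauge("acinfinity_humidity_percent", "Tent relative humidity")
FAN_SPEED = Gauge("acinfinity_fan_speed_percent", "Fan speed percent", ["fan"])


class StubClient:
    """Placeholder for the community AC Infinity cloud client (hypothetical interface)."""

    def fetch_state(self) -> dict:
        return {
            "temp_c": 24.0,
            "humidity": 55.0,
            "fans": {"inlet_indoor": 40, "inlet_outdoor": 0, "outlet": 60, "internal": 35},
        }


def poll_once(client: StubClient) -> None:
    """Pull current sensor state from the cloud API and update the gauges."""
    state = client.fetch_state()
    TEMP_C.set(state["temp_c"])
    HUMIDITY.set(state["humidity"])
    for fan, speed in state["fans"].items():
        FAN_SPEED.labels(fan=fan).set(speed)


if __name__ == "__main__":
    start_http_server(int(os.environ.get("LISTEN_PORT", "9100")))
    interval = int(os.environ.get("POLL_INTERVAL", "60"))  # 30-60s per the note
    client = StubClient()  # swap for the real client wired to AC_INFINITY_USER/PASS
    while True:
        poll_once(client)
        time.sleep(interval)
```

The `Deployment` sketched in the notes would run this image and expose port 9100 for the `ServiceMonitor` to scrape.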
@@ -9,4 +9,3 @@ resources:
 - ../../services/monitoring
 - ../../services/pegasus
 - ../../services/vault
-- ../../services/zot
clusters/atlas/flux-system/applications/zot/kustomization.yaml → applications/harbor/kustomization.yaml
@@ -1,13 +1,13 @@
-# clusters/atlas/flux-system/applications/zot/kustomization.yaml
+# clusters/atlas/flux-system/applications/harbor/kustomization.yaml
 apiVersion: kustomize.toolkit.fluxcd.io/v1
 kind: Kustomization
 metadata:
-  name: zot
+  name: harbor
   namespace: flux-system
 spec:
   interval: 10m
-  path: ./services/zot
-  targetNamespace: zot
+  path: ./services/harbor
+  targetNamespace: harbor
   prune: false
   sourceRef:
     kind: GitRepository
@@ -15,4 +15,4 @@ spec:
     namespace: flux-system
   wait: true
   dependsOn:
     - name: core
clusters/atlas/flux-system/applications/jenkins/kustomization.yaml (new file)
@@ -0,0 +1,18 @@
+# clusters/atlas/flux-system/applications/jenkins/kustomization.yaml
+apiVersion: kustomize.toolkit.fluxcd.io/v1
+kind: Kustomization
+metadata:
+  name: jenkins
+  namespace: flux-system
+spec:
+  interval: 10m
+  path: ./services/jenkins
+  prune: true
+  sourceRef:
+    kind: GitRepository
+    name: flux-system
+  targetNamespace: jenkins
+  dependsOn:
+    - name: helm
+    - name: traefik
+  wait: true
@@ -2,7 +2,6 @@
 apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
-- zot/kustomization.yaml
 - gitea/kustomization.yaml
 - vault/kustomization.yaml
 - jitsi/kustomization.yaml
@@ -10,9 +9,11 @@ resources:
 - monerod/kustomization.yaml
 - pegasus/kustomization.yaml
 - pegasus/image-automation.yaml
+- harbor/kustomization.yaml
 - jellyfin/kustomization.yaml
 - xmr-miner/kustomization.yaml
 - sui-metrics/kustomization.yaml
 - keycloak/kustomization.yaml
 - oauth2-proxy/kustomization.yaml
 - mailu/kustomization.yaml
+- jenkins/kustomization.yaml
@@ -8,7 +8,7 @@ metadata:
 spec:
   interval: 1m0s
   ref:
-    branch: feature/mailu
+    branch: feature/ci-gitops
   secretRef:
     name: flux-system-gitea
   url: ssh://git@scm.bstein.dev:2242/bstein/titan-iac.git
Deleted file (15 lines):
@@ -1,15 +0,0 @@
-# Titan Homelab Topology
-
-| Hostname   | Role / Function                | Managed By          | Notes |
-|------------|--------------------------------|---------------------|-------|
-| titan-db   | HA control plane database      | Ansible             | PostgreSQL / etcd backing services |
-| titan-0a   | Kubernetes control-plane       | Flux (atlas cluster)| HA leader, tainted for control only |
-| titan-0b   | Kubernetes control-plane       | Flux (atlas cluster)| Standby control node |
-| titan-0c   | Kubernetes control-plane       | Flux (atlas cluster)| Standby control node |
-| titan-04-19| Raspberry Pi workers           | Flux (atlas cluster)| Workload nodes, labelled per hardware |
-| titan-20&21| NVIDIA Jetson workers          | Flux (atlas cluster)| Workload nodes, labelled per hardware |
-| titan-22   | GPU mini-PC (Jellyfin)         | Flux + Ansible      | NVIDIA runtime managed via `modules/profiles/atlas-ha` |
-| titan-23   | Dedicated SUI validator Oceanus| Manual + Ansible    | Baremetal validator workloads, exposes metrics to atlas |
-| titan-24   | Tethys hybrid node             | Flux + Ansible      | Runs SUI metrics via K8s, validator via Ansible |
-| titan-jh   | Jumphost & bastion & lesavka   | Ansible             | Entry point / future KVM services / custom kvm - lesavaka |
-| styx       | Air-gapped workstation         | Manual / Scripts    | Remains isolated, scripts tracked in `hosts/styx` |
hosts/styx/README.md (deleted)
@@ -1,2 +0,0 @@
-# hosts/styx/README.md
-Styx is air-gapped; provisioning scripts live under `scripts/`.
infrastructure/sources/helm/harbor.yaml (new file)
@@ -0,0 +1,9 @@
+# infrastructure/sources/helm/harbor.yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta2
+kind: HelmRepository
+metadata:
+  name: harbor
+  namespace: flux-system
+spec:
+  interval: 10m
+  url: https://helm.goharbor.io
infrastructure/sources/helm/jenkins.yaml (new file)
@@ -0,0 +1,9 @@
+# infrastructure/sources/helm/jenkins.yaml
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: HelmRepository
+metadata:
+  name: jenkins
+  namespace: flux-system
+spec:
+  interval: 1h
+  url: https://charts.jenkins.io
@@ -5,6 +5,8 @@ resources:
 - grafana.yaml
 - hashicorp.yaml
 - jetstack.yaml
+- jenkins.yaml
 - mailu.yaml
+- harbor.yaml
 - prometheus.yaml
 - victoria-metrics.yaml
@@ -372,8 +372,8 @@ function xmrwallet_bootstrap --description "Interactive setup of monero-wallet-r
     end

     # Use your private image by default (in Zot)
-    read -P "Container image for wallet RPC [registry.bstein.dev/infra/monero-wallet-rpc:0.18.4.1]: " image
-    if test -z "$image"; set image registry.bstein.dev/infra/monero-wallet-rpc:0.18.4.1; end
+    read -P "Container image for wallet RPC [registry.bstein.dev/crypto/monero-wallet-rpc:0.18.4.1]: " image
+    if test -z "$image"; set image registry.bstein.dev/crypto/monero-wallet-rpc:0.18.4.1; end
     _require "Container image" $image; or return 1

     # --- Secrets (defaults: RPC user=wallet name, passwords auto if missing)
@@ -1375,4 +1375,3 @@ function xmrwallet_help_detailed
     echo "  Probes it via a temporary port-forward so it works from your workstation."
     echo "  Set xmrwallet_SKIP_DAEMON_CHECK=1 to bypass the daemon probe (not recommended)."
 end
-
@@ -23,7 +23,7 @@ end

 # Default image chooser (you should override with your own multi-arch image)
 function _sui_default_image -a NET
-    echo registry.bstein.dev/infra/sui-tools:1.53.2
+    echo registry.bstein.dev/crypto/sui-tools:1.53.2
 end

 # Convert any string to a k8s-safe name (RFC-1123 label-ish)
@@ -241,9 +241,10 @@ UPTIME_PERCENT_THRESHOLDS = {
     "mode": "absolute",
     "steps": [
         {"color": "red", "value": None},
-        {"color": "orange", "value": 0.999},
-        {"color": "yellow", "value": 0.9999},
-        {"color": "green", "value": 0.99999},
+        {"color": "orange", "value": 0.99},
+        {"color": "yellow", "value": 0.999},
+        {"color": "green", "value": 0.9999},
+        {"color": "blue", "value": 0.99999},
     ],
 }
 PROBLEM_TABLE_EXPR = (
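
An aside on the re-banded thresholds: allowed downtime is (1 - SLO) x window, so over a 30-day (720 h) window the orange step at 0.99 tolerates about 7.2 h, yellow at 0.999 about 43 min, green at 0.9999 about 4.3 min, and the new blue step at 0.99999 about 26 s.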
scripts/gitea_cred_sync.sh (new executable file, 92 lines)
@@ -0,0 +1,92 @@
+#!/usr/bin/env bash
+# Sync Keycloak users into Gitea local accounts (for CLI + tokens).
+# Requires: curl, jq, kubectl. Expects a Keycloak client with realm-management
+# permissions (manage-users) and a Gitea admin token stored in a secret.
+
+set -euo pipefail
+
+require() { command -v "$1" >/dev/null 2>&1 || { echo "missing required binary: $1" >&2; exit 1; }; }
+require curl; require jq; require kubectl
+
+: "${KEYCLOAK_URL:=https://sso.bstein.dev}"
+: "${KEYCLOAK_REALM:=atlas}"
+: "${KEYCLOAK_CLIENT_ID:?set KEYCLOAK_CLIENT_ID or export via secret}"
+: "${KEYCLOAK_CLIENT_SECRET:?set KEYCLOAK_CLIENT_SECRET or export via secret}"
+: "${GITEA_BASE_URL:=https://scm.bstein.dev}"
+: "${GITEA_NAMESPACE:=gitea}"
+: "${GITEA_TOKEN_SECRET_NAME:=gitea-admin-token}"
+: "${GITEA_TOKEN_SECRET_KEY:=token}"
+: "${DEFAULT_PASSWORD:=TempSsoPass!2025}"
+
+fetch_token() {
+  curl -fsS -X POST \
+    -d "grant_type=client_credentials" \
+    -d "client_id=${KEYCLOAK_CLIENT_ID}" \
+    -d "client_secret=${KEYCLOAK_CLIENT_SECRET}" \
+    "${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}/protocol/openid-connect/token" \
+    | jq -r '.access_token'
+}
+
+pull_users() {
+  local token="$1"
+  curl -fsS -H "Authorization: Bearer ${token}" \
+    "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?max=500" \
+    | jq -r '.[] | select(.enabled == true) | select(.username | startswith("service-account-") | not) | [.username, (.email // ""), (.firstName // ""), (.lastName // "")] | @tsv'
+}
+
+get_gitea_token() {
+  if [[ -n "${GITEA_ADMIN_TOKEN:-}" ]]; then
+    echo "${GITEA_ADMIN_TOKEN}"
+    return
+  fi
+  kubectl -n "${GITEA_NAMESPACE}" get secret "${GITEA_TOKEN_SECRET_NAME}" -o "jsonpath={.data.${GITEA_TOKEN_SECRET_KEY}}" \
+    | base64 -d
+}
+
+user_exists() {
+  local token="$1" username="$2"
+  local code
+  code=$(curl -s -o /dev/null -w '%{http_code}' \
+    -H "Authorization: token ${token}" \
+    "${GITEA_BASE_URL}/api/v1/admin/users/${username}")
+  [[ "${code}" == "200" ]]
+}
+
+create_user() {
+  local token="$1" username="$2" email="$3" fname="$4" lname="$5"
+  local body status fullname
+  fullname="$(echo "${fname} ${lname}" | xargs)"
+  if [[ -z "${email}" ]]; then
+    email="${username}@example.local"
+  fi
+  body=$(jq -n --arg u "${username}" --arg e "${email}" --arg p "${DEFAULT_PASSWORD}" \
+    --arg fn "${fullname}" '{username:$u, email:$e, password:$p, must_change_password:false, full_name:$fn}')
+  status=$(curl -s -o /dev/null -w '%{http_code}' \
+    -H "Authorization: token ${token}" \
+    -H "Content-Type: application/json" \
+    -X POST \
+    -d "${body}" \
+    "${GITEA_BASE_URL}/api/v1/admin/users")
+  if [[ "${status}" == "201" ]]; then
+    echo "created gitea user ${username}"
+  elif [[ "${status}" == "409" ]]; then
+    echo "gitea user ${username} already exists (409)" >&2
+  else
+    echo "failed to create gitea user ${username} (status ${status})" >&2
+  fi
+}
+
+main() {
+  local kc_token gitea_token
+  kc_token="$(fetch_token)"
+  gitea_token="$(get_gitea_token)"
+
+  while IFS=$'\t' read -r username email fname lname; do
+    if user_exists "${gitea_token}" "${username}"; then
+      continue
+    fi
+    create_user "${gitea_token}" "${username}" "${email}" "${fname}" "${lname}"
+  done < <(pull_users "${kc_token}")
+}
+
+main "$@"
scripts/gitops_cred_sync.sh (new executable file, 87 lines)
@@ -0,0 +1,87 @@
+#!/usr/bin/env bash
+# Ensure Keycloak users are in the GitOps admin group used by weave-gitops (cd.bstein.dev).
+# Weave GitOps relies on OIDC; membership in the "admin" group maps to cluster-admin via RBAC.
+# Requires: curl, jq. Needs a Keycloak client with realm-management (manage-users/groups).
+
+set -euo pipefail
+
+require() { command -v "$1" >/dev/null 2>&1 || { echo "missing required binary: $1" >&2; exit 1; }; }
+require curl; require jq
+
+: "${KEYCLOAK_URL:=https://sso.bstein.dev}"
+: "${KEYCLOAK_REALM:=atlas}"
+: "${KEYCLOAK_CLIENT_ID:?set KEYCLOAK_CLIENT_ID or export via secret}"
+: "${KEYCLOAK_CLIENT_SECRET:?set KEYCLOAK_CLIENT_SECRET or export via secret}"
+: "${GITOPS_GROUP:=admin}"
+# Comma-separated usernames to sync; set SYNC_ALL_USERS=true to include all Keycloak users.
+: "${TARGET_USERNAMES:=bstein}"
+: "${SYNC_ALL_USERS:=false}"
+
+fetch_token() {
+  curl -fsS -X POST \
+    -d "grant_type=client_credentials" \
+    -d "client_id=${KEYCLOAK_CLIENT_ID}" \
+    -d "client_secret=${KEYCLOAK_CLIENT_SECRET}" \
+    "${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}/protocol/openid-connect/token" \
+    | jq -r '.access_token'
+}
+
+ensure_group() {
+  local token="$1" group="$2" id
+  id=$(curl -fsS -H "Authorization: Bearer ${token}" \
+    "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/groups?search=${group}" \
+    | jq -r --arg g "${group}" '.[] | select(.name==$g) | .id' | head -n1)
+  if [[ -n "${id}" ]]; then
+    echo "${id}"
+    return
+  fi
+  curl -fsS -H "Authorization: Bearer ${token}" \
+    -H "Content-Type: application/json" \
+    -d "{\"name\":\"${group}\"}" \
+    -X POST "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/groups"
+  # Fetch again to get id
+  curl -fsS -H "Authorization: Bearer ${token}" \
+    "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/groups?search=${group}" \
+    | jq -r --arg g "${group}" '.[] | select(.name==$g) | .id' | head -n1
+}
+
+user_id_by_name() {
+  local token="$1" username="$2"
+  curl -fsS -H "Authorization: Bearer ${token}" \
+    "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?username=${username}" \
+    | jq -r '.[0].id'
+}
+
+add_user_to_group() {
+  local token="$1" user_id="$2" group_id="$3" username="$4"
+  if [[ -z "${user_id}" ]]; then
+    echo "user ${username} not found in Keycloak; skip" >&2
+    return
+  fi
+  curl -fsS -o /dev/null -w '%{http_code}' \
+    -H "Authorization: Bearer ${token}" \
+    -X PUT "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users/${user_id}/groups/${group_id}" \
+    | grep -qE '^(204|409)$' || echo "failed adding ${username} to group" >&2
+}
+
+main() {
+  local token group_id users=()
+  token="$(fetch_token)"
+  group_id="$(ensure_group "${token}" "${GITOPS_GROUP}")"
+
+  if [[ "${SYNC_ALL_USERS}" == "true" ]]; then
+    readarray -t users < <(curl -fsS -H "Authorization: Bearer ${token}" \
+      "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?max=500" \
+      | jq -r '.[] | select(.enabled==true) | .username')
+  else
+    IFS=',' read -ra users <<< "${TARGET_USERNAMES}"
+  fi
+
+  for user in "${users[@]}"; do
+    user="$(echo "${user}" | xargs)"
+    [[ -z "${user}" ]] && continue
+    add_user_to_group "${token}" "$(user_id_by_name "${token}" "${user}")" "${group_id}" "${user}"
+  done
+}
+
+main "$@"
scripts/jenkins_cred_sync.sh (new executable file, 94 lines)
@@ -0,0 +1,94 @@
+#!/usr/bin/env bash
+# Sync Keycloak users into Jenkins local accounts (for CLI/API use).
+# Jenkins is OIDC-enabled, but local users can still be provisioned for tokens.
+# Requires: curl, jq, kubectl. Needs Jenkins admin user+API token.
+
+set -euo pipefail
+
+require() { command -v "$1" >/dev/null 2>&1 || { echo "missing required binary: $1" >&2; exit 1; }; }
+require curl; require jq; require kubectl
+
+: "${KEYCLOAK_URL:=https://sso.bstein.dev}"
+: "${KEYCLOAK_REALM:=atlas}"
+: "${KEYCLOAK_CLIENT_ID:?set KEYCLOAK_CLIENT_ID or export via secret}"
+: "${KEYCLOAK_CLIENT_SECRET:?set KEYCLOAK_CLIENT_SECRET or export via secret}"
+: "${JENKINS_URL:=https://ci.bstein.dev}"
+: "${JENKINS_NAMESPACE:=jenkins}"
+: "${JENKINS_ADMIN_SECRET_NAME:=jenkins-admin-token}"
+: "${JENKINS_ADMIN_USER_KEY:=username}"
+: "${JENKINS_ADMIN_TOKEN_KEY:=token}"
+: "${DEFAULT_PASSWORD:=TempSsoPass!2025}"
+
+fetch_token() {
+  curl -fsS -X POST \
+    -d "grant_type=client_credentials" \
+    -d "client_id=${KEYCLOAK_CLIENT_ID}" \
+    -d "client_secret=${KEYCLOAK_CLIENT_SECRET}" \
+    "${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}/protocol/openid-connect/token" \
+    | jq -r '.access_token'
+}
+
+pull_users() {
+  local token="$1"
+  curl -fsS -H "Authorization: Bearer ${token}" \
+    "${KEYCLOAK_URL}/admin/realms/${KEYCLOAK_REALM}/users?max=500" \
+    | jq -r '.[] | select(.enabled == true) | select(.username | startswith("service-account-") | not) | [.id, .username, (.email // "")] | @tsv'
+}
+
+get_admin_auth() {
+  local user token
+  if [[ -n "${JENKINS_ADMIN_USER:-}" && -n "${JENKINS_ADMIN_TOKEN:-}" ]]; then
+    echo "${JENKINS_ADMIN_USER}:${JENKINS_ADMIN_TOKEN}"
+    return
+  fi
+  user=$(kubectl -n "${JENKINS_NAMESPACE}" get secret "${JENKINS_ADMIN_SECRET_NAME}" -o "jsonpath={.data.${JENKINS_ADMIN_USER_KEY}}" | base64 -d)
+  token=$(kubectl -n "${JENKINS_NAMESPACE}" get secret "${JENKINS_ADMIN_SECRET_NAME}" -o "jsonpath={.data.${JENKINS_ADMIN_TOKEN_KEY}}" | base64 -d)
+  echo "${user}:${token}"
+}
+
+get_crumb() {
+  local auth="$1"
+  curl -fsS -u "${auth}" "${JENKINS_URL}/crumbIssuer/api/json" | jq -r .crumb
+}
+
+user_exists() {
+  local auth="$1" user="$2"
+  local code
+  code=$(curl -s -o /dev/null -w '%{http_code}' -u "${auth}" "${JENKINS_URL}/user/${user}/api/json")
+  [[ "${code}" == "200" ]]
+}
+
+create_user() {
+  local auth="$1" crumb="$2" username="$3" email="$4"
+  local status
+  status=$(curl -s -o /dev/null -w '%{http_code}' \
+    -u "${auth}" \
+    -H "Jenkins-Crumb: ${crumb}" \
+    -X POST \
+    --data "username=${username}&password1=${DEFAULT_PASSWORD}&password2=${DEFAULT_PASSWORD}&fullname=${username}&email=${email}" \
+    "${JENKINS_URL}/securityRealm/createAccountByAdmin")
+
+  if [[ "${status}" == "200" || "${status}" == "302" ]]; then
+    echo "created jenkins user ${username}"
+  elif [[ "${status}" == "400" ]]; then
+    echo "jenkins user ${username} already exists (400)" >&2
+  else
+    echo "failed to create jenkins user ${username} (status ${status})" >&2
+  fi
+}
+
+main() {
+  local kc_token auth crumb
+  kc_token="$(fetch_token)"
+  auth="$(get_admin_auth)"
+  crumb="$(get_crumb "${auth}")"
+
+  while IFS=$'\t' read -r _ uid email; do
+    if user_exists "${auth}" "${uid}"; then
+      continue
+    fi
+    create_user "${auth}" "${crumb}" "${uid}" "${email}"
+  done < <(pull_users "${kc_token}")
+}
+
+main "$@"
@@ -1,6 +1,6 @@
 #!/usr/bin/env fish

-function pvc-usage --description "Show Longhorn PVC usage (human-readable) mapped to namespace/name"
+function pvc-usage --description "Show Longhorn PVC usage mapped to namespace/name"
     begin
         kubectl -n longhorn-system get volumes.longhorn.io -o json \
         | jq -r '.items[] | "\(.metadata.name)\t\(.status.actualSize)\t\(.spec.size)"' \
@@ -39,7 +39,7 @@ SITES=(
   "Jellyfin|https://stream.bstein.dev"
   "Gitea|https://scm.bstein.dev"
   "Jenkins|https://ci.bstein.dev"
-  "Zot|https://registry.bstein.dev"
+  "Harbor|https://registry.bstein.dev"
   "Vault|https://secret.bstein.dev"
   "Jitsi|https://meet.bstein.dev"
   "Grafana|https://metrics.bstein.dev"
@@ -35,7 +35,7 @@ spec:
                 values: ["rpi4"]
       containers:
         - name: monerod
-          image: registry.bstein.dev/infra/monerod:0.18.4.1
+          image: registry.bstein.dev/crypto/monerod:0.18.4.1
          command: ["/opt/monero/monerod"]
          args:
            - --data-dir=/data
@@ -32,7 +32,7 @@ spec:
                 values: ["rpi4"]
       containers:
         - name: monero-p2pool
-          image: registry.bstein.dev/infra/monero-p2pool:4.9
+          image: registry.bstein.dev/crypto/monero-p2pool:4.9
          imagePullPolicy: Always
          command: ["p2pool"]
          args:
@@ -21,6 +21,72 @@ spec:
       labels:
         app: gitea
     spec:
+      initContainers:
+        - name: configure-oidc
+          image: gitea/gitea:1.23
+          securityContext:
+            runAsUser: 1000
+            runAsGroup: 1000
+          env:
+            - name: CLIENT_ID
+              valueFrom:
+                secretKeyRef:
+                  name: gitea-oidc
+                  key: client_id
+            - name: CLIENT_SECRET
+              valueFrom:
+                secretKeyRef:
+                  name: gitea-oidc
+                  key: client_secret
+            - name: DISCOVERY_URL
+              valueFrom:
+                secretKeyRef:
+                  name: gitea-oidc
+                  key: openid_auto_discovery_url
+          command:
+            - /bin/bash
+            - -c
+            - |
+              set -euo pipefail
+              APPINI=/data/gitea/conf/app.ini
+              BIN=/usr/local/bin/gitea
+
+              list="$($BIN -c "$APPINI" admin auth list)"
+              id=$(echo "$list" | awk '$2=="keycloak"{print $1}')
+
+              if [ -n "$id" ]; then
+                echo "Updating existing auth source id=$id"
+                $BIN -c "$APPINI" admin auth update-oauth \
+                  --id "$id" \
+                  --name keycloak \
+                  --provider openidConnect \
+                  --key "$CLIENT_ID" \
+                  --secret "$CLIENT_SECRET" \
+                  --auto-discover-url "$DISCOVERY_URL" \
+                  --scopes "openid profile email groups" \
+                  --required-claim-name "" \
+                  --required-claim-value "" \
+                  --group-claim-name groups \
+                  --admin-group admin \
+                  --skip-local-2fa
+              else
+                echo "Creating keycloak auth source"
+                $BIN -c "$APPINI" admin auth add-oauth \
+                  --name keycloak \
+                  --provider openidConnect \
+                  --key "$CLIENT_ID" \
+                  --secret "$CLIENT_SECRET" \
+                  --auto-discover-url "$DISCOVERY_URL" \
+                  --scopes "openid profile email groups" \
+                  --required-claim-name "" \
+                  --required-claim-value "" \
+                  --group-claim-name groups \
+                  --admin-group admin \
+                  --skip-local-2fa
+              fi
+          volumeMounts:
+            - name: gitea-data
+              mountPath: /data
       nodeSelector:
         node-role.kubernetes.io/worker: "true"
       affinity:
@@ -55,6 +121,36 @@ spec:
               value: "master"
             - name: ROOT_URL
               value: "https://scm.bstein.dev"
+            - name: GITEA__service__ENABLE_OPENID_SIGNIN
+              value: "true"
+            - name: GITEA__oauth2_client__ENABLE_AUTO_REGISTRATION
+              value: "true"
+            - name: GITEA__service__ALLOW_ONLY_EXTERNAL_REGISTRATION
+              value: "true"
+            - name: GITEA__service__DISABLE_REGISTRATION
+              value: "false"
+            - name: GITEA__log__LEVEL
+              value: "trace"
+            - name: GITEA__service__REQUIRE_SIGNIN_VIEW
+              value: "false"
+            - name: GITEA__server__PROXY_HEADERS
+              value: "X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Host"
+            - name: GITEA__session__COOKIE_SECURE
+              value: "true"
+            - name: GITEA__session__DOMAIN
+              value: "scm.bstein.dev"
+            - name: GITEA__session__SAME_SITE
+              value: "lax"
+            - name: GITEA__security__SECRET_KEY
+              valueFrom:
+                secretKeyRef:
+                  name: gitea-secret
+                  key: SECRET_KEY
+            - name: GITEA__security__INTERNAL_TOKEN
+              valueFrom:
+                secretKeyRef:
+                  name: gitea-secret
+                  key: INTERNAL_TOKEN
             - name: DB_TYPE
               value: "postgres"
             - name: DB_HOST
services/gitops-ui/certificate.yaml (new file)
@@ -0,0 +1,13 @@
+# services/gitops-ui/certificate.yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: gitops-ui-tls
+  namespace: flux-system
+spec:
+  secretName: gitops-ui-tls
+  issuerRef:
+    kind: ClusterIssuer
+    name: letsencrypt
+  dnsNames:
+    - cd.bstein.dev
@@ -23,18 +23,15 @@
       remediateLastFailure: true
     cleanupOnFail: true
   values:
+    additionalArgs:
+      - --auth-methods=oidc
     adminUser:
-      create: true
-      createClusterRole: true
-      createSecret: true
-      username: admin
-      # bcrypt hash for temporary password "G1tOps!2025" (rotate after login)
-      passwordHash: "$2y$12$wDEOzR1Gc2dbvNSJ3ZXNdOBVFEjC6YASIxnZmHIbO.W1m0fie/QVi"
+      create: false
     ingress:
       enabled: true
       className: traefik
       annotations:
-        cert-manager.io/cluster-issuer: letsencrypt-prod
+        cert-manager.io/cluster-issuer: letsencrypt
         traefik.ingress.kubernetes.io/router.entrypoints: websecure
       hosts:
         - host: cd.bstein.dev
@@ -45,5 +42,7 @@ spec:
       - secretName: gitops-ui-tls
        hosts:
          - cd.bstein.dev
+    oidcSecret:
+      create: false
     metrics:
       enabled: true
@@ -5,3 +5,6 @@ namespace: flux-system
 resources:
 - source.yaml
 - helmrelease.yaml
+- certificate.yaml
+- networkpolicy-acme.yaml
+- rbac.yaml
services/gitops-ui/networkpolicy-acme.yaml (new file)
@@ -0,0 +1,14 @@
+# services/gitops-ui/networkpolicy-acme.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-acme-solver
+  namespace: flux-system
+spec:
+  podSelector:
+    matchLabels:
+      acme.cert-manager.io/http01-solver: "true"
+  policyTypes:
+    - Ingress
+  ingress:
+    - {}
services/gitops-ui/rbac.yaml (new file)
@@ -0,0 +1,15 @@
+# services/gitops-ui/rbac.yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: gitops-admins
+  labels:
+    app.kubernetes.io/name: weave-gitops
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cluster-admin
+subjects:
+  - kind: Group
+    name: admin
+    apiGroup: rbac.authorization.k8s.io
services/harbor/certificate.yaml (new file)
@@ -0,0 +1,12 @@
+# services/harbor/certificate.yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: registry-bstein-dev
+  namespace: harbor
+spec:
+  secretName: registry-bstein-dev-tls
+  dnsNames: [ "registry.bstein.dev" ]
+  issuerRef:
+    name: letsencrypt
+    kind: ClusterIssuer
services/harbor/helmrelease.yaml (new file, 248 lines)
@@ -0,0 +1,248 @@
+# services/harbor/helmrelease.yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta2
+kind: HelmRelease
+metadata:
+  name: harbor
+  namespace: harbor
+spec:
+  interval: 10m
+  timeout: 10m
+  chart:
+    spec:
+      chart: harbor
+      version: 1.18.1
+      sourceRef:
+        kind: HelmRepository
+        name: harbor
+        namespace: flux-system
+  values:
+    externalURL: https://registry.bstein.dev
+    imagePullPolicy: IfNotPresent
+    expose:
+      type: ingress
+      tls:
+        enabled: true
+        certSource: secret
+        secret:
+          secretName: registry-bstein-dev-tls
+      ingress:
+        className: traefik
+        annotations:
+          cert-manager.io/cluster-issuer: letsencrypt
+          traefik.ingress.kubernetes.io/router.entrypoints: websecure
+          traefik.ingress.kubernetes.io/router.tls: "true"
+        hosts:
+          core: registry.bstein.dev
+    persistence:
+      enabled: true
+      resourcePolicy: keep
+      persistentVolumeClaim:
+        registry:
+          existingClaim: harbor-registry
+          accessMode: ReadWriteOnce
+          size: 50Gi
+        jobservice:
+          jobLog:
+            existingClaim: harbor-jobservice-logs
+            accessMode: ReadWriteOnce
+            size: 5Gi
+      imageChartStorage:
+        type: filesystem
+        filesystem:
+          rootdirectory: /storage
+    database:
+      type: external
+      external:
+        host: postgres-service.postgres.svc.cluster.local
+        port: "5432"
+        username: harbor
+        coreDatabase: harbor
+        existingSecret: harbor-db
+        sslmode: disable
+    redis:
+      type: internal
+      internal:
+        image:
+          repository: registry.bstein.dev/infra/harbor-redis
+          tag: v2.14.1-arm64
+        nodeSelector:
+          kubernetes.io/hostname: titan-05
+        affinity:
+          nodeAffinity:
+            requiredDuringSchedulingIgnoredDuringExecution:
+              nodeSelectorTerms:
+                - matchExpressions:
+                    - key: kubernetes.io/arch
+                      operator: In
+                      values: [ "arm64" ]
+            preferredDuringSchedulingIgnoredDuringExecution:
+              - weight: 90
+                preference:
+                  matchExpressions:
+                    - key: hardware
+                      operator: In
+                      values: [ "rpi5" ]
+              - weight: 50
+                preference:
+                  matchExpressions:
+                    - key: hardware
+                      operator: In
+                      values: [ "rpi4" ]
+    trivy:
+      enabled: false
+    metrics:
+      enabled: false
+    cache:
+      enabled: false
+    existingSecretAdminPassword: harbor-core
+    existingSecretAdminPasswordKey: harbor_admin_password
+    existingSecretSecretKey: harbor-core
+    core:
+      image:
+        repository: registry.bstein.dev/infra/harbor-core
+        tag: v2.14.1-arm64
+      nodeSelector:
+        kubernetes.io/hostname: titan-05
+      existingSecret: harbor-core
+      existingXsrfSecret: harbor-core
+      existingXsrfSecretKey: CSRF_KEY
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+              - matchExpressions:
+                  - key: kubernetes.io/arch
+                    operator: In
+                    values: [ "arm64" ]
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - weight: 90
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi5" ]
+            - weight: 50
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi4" ]
+    jobservice:
+      image:
+        repository: registry.bstein.dev/infra/harbor-jobservice
+        tag: v2.14.1-arm64
+      nodeSelector:
+        kubernetes.io/hostname: titan-05
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+              - matchExpressions:
+                  - key: kubernetes.io/arch
+                    operator: In
+                    values: [ "arm64" ]
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - weight: 90
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi5" ]
+            - weight: 50
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi4" ]
+    portal:
+      image:
+        repository: registry.bstein.dev/infra/harbor-portal
+        tag: v2.14.1-arm64
+      nodeSelector:
+        kubernetes.io/hostname: titan-05
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+              - matchExpressions:
+                  - key: kubernetes.io/arch
+                    operator: In
+                    values: [ "arm64" ]
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - weight: 90
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi5" ]
+            - weight: 50
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi4" ]
+    registry:
+      registry:
+        image:
+          repository: registry.bstein.dev/infra/harbor-registry
+          tag: v2.14.1-arm64
+      controller:
+        image:
+          repository: registry.bstein.dev/infra/harbor-registryctl
+          tag: v2.14.1-arm64
+      nodeSelector:
+        kubernetes.io/hostname: titan-05
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+              - matchExpressions:
+                  - key: kubernetes.io/arch
+                    operator: In
+                    values: [ "arm64" ]
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - weight: 90
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi5" ]
+            - weight: 50
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi4" ]
+    nginx:
+      image:
+        repository: registry.bstein.dev/infra/harbor-nginx
+        tag: v2.14.1-arm64
+      nodeSelector:
+        kubernetes.io/hostname: titan-05
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+              - matchExpressions:
+                  - key: kubernetes.io/arch
+                    operator: In
+                    values: [ "arm64" ]
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - weight: 90
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi5" ]
+            - weight: 50
+              preference:
+                matchExpressions:
+                  - key: hardware
+                    operator: In
+                    values: [ "rpi4" ]
+    prepare:
+      image:
+        repository: registry.bstein.dev/infra/harbor-prepare
+        tag: v2.14.1-arm64
+    updateStrategy:
+      type: Recreate
services/harbor/kustomization.yaml (new file)
@@ -0,0 +1,9 @@
+# services/harbor/kustomization.yaml
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+namespace: harbor
+resources:
+- namespace.yaml
+- pvc.yaml
+- certificate.yaml
+- helmrelease.yaml
services/harbor/namespace.yaml (new file)
@@ -0,0 +1,5 @@
+# services/harbor/namespace.yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: harbor
services/harbor/pvc.yaml (new file)
@@ -0,0 +1,24 @@
+# services/harbor/pvc.yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: harbor-registry
+  namespace: harbor
+spec:
+  accessModes: [ "ReadWriteOnce" ]
+  resources:
+    requests:
+      storage: 50Gi
+  storageClassName: astreae
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: harbor-jobservice-logs
+  namespace: harbor
+spec:
+  accessModes: [ "ReadWriteOnce" ]
+  resources:
+    requests:
+      storage: 5Gi
+  storageClassName: astreae
136
services/jenkins/helmrelease.yaml
Normal file
@@ -0,0 +1,136 @@
# services/jenkins/helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
spec:
  interval: 30m
  chart:
    spec:
      chart: jenkins
      version: 5.8.114
      sourceRef:
        kind: HelmRepository
        name: jenkins
        namespace: flux-system
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
      remediateLastFailure: true
    cleanupOnFail: true
  values:
    controller:
      jenkinsUrl: https://ci.bstein.dev
      ingress:
        enabled: true
        hostName: ci.bstein.dev
        ingressClassName: traefik
        annotations:
          cert-manager.io/cluster-issuer: letsencrypt
          traefik.ingress.kubernetes.io/router.entrypoints: websecure
        tls:
          - secretName: jenkins-tls
            hosts:
              - ci.bstein.dev
      installPlugins:
        - kubernetes
        - workflow-aggregator
        - git
        - configuration-as-code
        - oic-auth
      containerEnv:
        - name: ENABLE_OIDC
          value: "true"
        - name: OIDC_ISSUER
          value: "https://sso.bstein.dev/realms/atlas"
        - name: OIDC_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: jenkins-oidc
              key: clientId
              optional: true
        - name: OIDC_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: jenkins-oidc
              key: clientSecret
              optional: true
        - name: OIDC_AUTH_URL
          valueFrom:
            secretKeyRef:
              name: jenkins-oidc
              key: authorizationUrl
              optional: true
        - name: OIDC_TOKEN_URL
          valueFrom:
            secretKeyRef:
              name: jenkins-oidc
              key: tokenUrl
              optional: true
        - name: OIDC_USERINFO_URL
          valueFrom:
            secretKeyRef:
              name: jenkins-oidc
              key: userInfoUrl
              optional: true
        - name: OIDC_LOGOUT_URL
          valueFrom:
            secretKeyRef:
              name: jenkins-oidc
              key: logoutUrl
              optional: true
      initScripts:
        oidc.groovy: |
          import hudson.util.Secret
          import jenkins.model.IdStrategy
          import jenkins.model.Jenkins
          import org.jenkinsci.plugins.oic.OicSecurityRealm
          import org.jenkinsci.plugins.oic.OicServerWellKnownConfiguration
          def env = System.getenv()
          if (!(env['ENABLE_OIDC'] ?: 'false').toBoolean()) {
            println("OIDC disabled (ENABLE_OIDC=false); keeping default security realm")
            return
          }
          def required = ['OIDC_CLIENT_ID','OIDC_CLIENT_SECRET','OIDC_ISSUER']
          if (!required.every { env[it] }) {
            println("OIDC enabled but missing vars: ${required.findAll { !env[it] }}")
            return
          }
          try {
            def wellKnown = "${env['OIDC_ISSUER']}/.well-known/openid-configuration"
            def serverCfg = new OicServerWellKnownConfiguration(wellKnown)
            serverCfg.setScopesOverride('openid profile email')
            def realm = new OicSecurityRealm(
              env['OIDC_CLIENT_ID'],
              Secret.fromString(env['OIDC_CLIENT_SECRET']),
              serverCfg,
              false,
              IdStrategy.CASE_INSENSITIVE,
              IdStrategy.CASE_INSENSITIVE
            )
            realm.createProxyAwareResourceRetriver()
            realm.setLogoutFromOpenidProvider(true)
            realm.setPostLogoutRedirectUrl('https://ci.bstein.dev')
            realm.setUserNameField('preferred_username')
            realm.setFullNameFieldName('name')
            realm.setEmailFieldName('email')
            realm.setGroupsFieldName('groups')
            realm.setRootURLFromRequest(true)
            realm.setSendScopesInTokenRequest(true)
            def j = Jenkins.get()
            j.setSecurityRealm(realm)
            j.save()
            println("Configured OIDC realm from init script (well-known)")
          } catch (Exception e) {
            println("Failed to configure OIDC realm: ${e}")
          }
    persistence:
      enabled: true
      storageClass: astreae
      size: 50Gi
    serviceAccount:
      create: true
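Every OIDC env var above reads from a jenkins-oidc secret with `optional: true`, so the controller boots even before the secret exists; the init script then skips OIDC unless at least the issuer, client ID, and client secret resolve. A sketch of seeding that secret; the endpoint paths are assumptions based on the standard Keycloak layout for the `atlas` realm named in OIDC_ISSUER, and the URL keys are strictly optional since the realm is built from well-known discovery:

```bash
# Keys match the secretKeyRef names the HelmRelease expects.
kubectl -n jenkins create secret generic jenkins-oidc \
  --from-literal=clientId='<CLIENT_ID>' \
  --from-literal=clientSecret='<CLIENT_SECRET>' \
  --from-literal=authorizationUrl='https://sso.bstein.dev/realms/atlas/protocol/openid-connect/auth' \
  --from-literal=tokenUrl='https://sso.bstein.dev/realms/atlas/protocol/openid-connect/token' \
  --from-literal=userInfoUrl='https://sso.bstein.dev/realms/atlas/protocol/openid-connect/userinfo' \
  --from-literal=logoutUrl='https://sso.bstein.dev/realms/atlas/protocol/openid-connect/logout'
```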
7
services/jenkins/kustomization.yaml
Normal file
@@ -0,0 +1,7 @@
# services/jenkins/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: jenkins
resources:
  - namespace.yaml
  - helmrelease.yaml
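The chart's sourceRef points at a HelmRepository named jenkins in flux-system, which is not part of this overlay; if it is missing, the release never resolves a chart. A quick sanity check, as a sketch:

```bash
# The jenkins HelmRepository must exist and report Ready in flux-system.
flux get sources helm -n flux-system
```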
5
services/jenkins/namespace.yaml
Normal file
@@ -0,0 +1,5 @@
# services/jenkins/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins
@@ -1,27 +0,0 @@
# services/keycloak

Keycloak is deployed via raw manifests and backed by the shared Postgres (`postgres-service.postgres.svc.cluster.local:5432`). Create these secrets before applying:

```bash
# DB creds (per-service DB/user in shared Postgres)
kubectl -n sso create secret generic keycloak-db \
  --from-literal=username=keycloak \
  --from-literal=password='<DB_PASSWORD>' \
  --from-literal=database=keycloak

# Admin console creds (maps to KC admin user)
kubectl -n sso create secret generic keycloak-admin \
  --from-literal=username=brad@bstein.dev \
  --from-literal=password='<ADMIN_PASSWORD>'
```

Apply:

```bash
kubectl apply -k services/keycloak
```

Notes
- Service: `keycloak.sso.svc:80` (Ingress `sso.bstein.dev`, TLS via cert-manager).
- Uses Postgres schema `public`; DB/user should be provisioned in the shared Postgres instance.
- Health endpoints on :9000 are wired for probes.
@@ -48,8 +48,6 @@ spec:
         runAsGroup: 0
         fsGroup: 1000
         fsGroupChangePolicy: OnRootMismatch
-      imagePullSecrets:
-        - name: zot-regcred
       initContainers:
         - name: mailu-http-listener
           image: registry.bstein.dev/sso/mailu-http-listener:0.1.0
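Dropping the zot-regcred pull secret only works if registry.bstein.dev now serves these images anonymously (or kubelet carries node-level credentials). A quick check, assuming crane is installed:

```bash
# Fetching the manifest without credentials mirrors what kubelet will do
# once the imagePullSecrets entry is gone.
crane manifest registry.bstein.dev/sso/mailu-http-listener:0.1.0 > /dev/null \
  && echo "anonymous pull OK"
```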
@@ -245,14 +245,18 @@
          },
          {
            "color": "orange",
-            "value": 0.999
+            "value": 0.99
          },
          {
            "color": "yellow",
-            "value": 0.9999
+            "value": 0.999
          },
          {
            "color": "green",
+            "value": 0.9999
+          },
+          {
+            "color": "blue",
            "value": 0.99999
          }
        ]
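The regrade shifts each color down one step (orange now fires at 0.99, yellow at 0.999, green at 0.9999) and adds a blue band at 0.99999. Grafana expects threshold steps in ascending order; a hedged local check, where dashboard.json is a placeholder for the file this hunk edits:

```bash
# Collect every {color, value} step and verify the values ascend.
jq '[.. | objects | select(has("color") and has("value")) | .value] | . == sort' dashboard.json
```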
@@ -56,8 +56,6 @@ spec:
         volumeMounts:
           - name: pod-resources
             mountPath: /var/lib/kubelet/pod-resources
-      imagePullSecrets:
-        - name: zot-regcred
       volumes:
         - name: pod-resources
           hostPath:
@@ -254,14 +254,18 @@ data:
          },
          {
            "color": "orange",
-            "value": 0.999
+            "value": 0.99
          },
          {
            "color": "yellow",
-            "value": 0.9999
+            "value": 0.999
          },
          {
            "color": "green",
+            "value": 0.9999
+          },
+          {
+            "color": "blue",
            "value": 0.99999
          }
        ]
@@ -6,7 +6,7 @@ metadata:
   namespace: sso
 spec:
   forwardAuth:
-    address: http://oauth2-proxy.sso.svc.cluster.local:4180/oauth2/auth
+    address: http://oauth2-proxy.sso.svc.cluster.local/oauth2/auth
     trustForwardHeader: true
     authResponseHeaders:
       - Authorization
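Dropping :4180 means Traefik now targets the Service's default port, so this only works if the oauth2-proxy Service exposes port 80 in front of the pod's 4180. A sketch for verifying, assuming the Service name matches the address:

```bash
# Should print 80 for the portless forwardAuth address to resolve.
kubectl -n sso get svc oauth2-proxy -o jsonpath='{.spec.ports[*].port}'
```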
@@ -17,8 +17,6 @@ spec:
     spec:
       nodeSelector:
         kubernetes.io/arch: amd64
-      imagePullSecrets:
-        - name: zot-regcred
       securityContext:
         runAsNonRoot: true
         runAsUser: 65532
@@ -58,7 +56,7 @@ spec:

       containers:
         - name: pegasus
-          image: registry.bstein.dev/pegasus:1.2.32 # {"$imagepolicy": "jellyfin:pegasus"}
+          image: registry.bstein.dev/streaming/pegasus:1.2.32 # {"$imagepolicy": "jellyfin:pegasus"}
           imagePullPolicy: Always
           command: ["/pegasus"]
           env:
@@ -5,10 +5,8 @@ metadata:
   name: pegasus
   namespace: jellyfin
 spec:
-  image: registry.bstein.dev/pegasus
+  image: registry.bstein.dev/streaming/pegasus
   interval: 1m0s
-  secretRef:
-    name: zot-regcred

 ---
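With the secretRef gone, Flux's image scan of the renamed repository must also succeed anonymously. One way to confirm the scanner still sees tags, as a sketch:

```bash
# The last scan should be recent and list tags from the new streaming/ path.
flux get image repository pegasus -n jellyfin
```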
@@ -1,47 +0,0 @@
# services/zot/config.map
apiVersion: v1
kind: ConfigMap
metadata:
  name: zot-config
  namespace: zot
data:
  config.json: |
    {
      "storage": {
        "rootDirectory": "/var/lib/registry",
        "dedupe": true,
        "gc": true,
        "gcDelay": "1h",
        "gcInterval": "1h"
      },
      "http": {
        "address": "0.0.0.0",
        "port": "5000",
        "realm": "zot-registry",
        "compat": ["docker2s2"],
        "auth": {
          "htpasswd": { "path": "/etc/zot/htpasswd" }
        },
        "accessControl": {
          "repositories": {
            "**": {
              "policies": [
                { "users": ["bstein"], "actions": ["read", "create", "update", "delete"] }
              ],
              "defaultPolicy": [],
              "anonymousPolicy": []
            }
          },
          "adminPolicy": {
            "users": ["bstein"],
            "actions": ["read", "create", "update", "delete"]
          }
        }
      },
      "log": { "level": "info" },
      "extensions": {
        "ui": { "enable": true },
        "search": { "enable": true },
        "metrics": { "enable": true }
      }
    }
@@ -1,72 +0,0 @@
# services/zot/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zot
  namespace: zot
  labels: { app: zot }
spec:
  replicas: 1
  selector:
    matchLabels: { app: zot }
  template:
    metadata:
      labels: { app: zot }
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: "true"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: hardware
                    operator: In
                    values: ["rpi4","rpi5"]
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 50
              preference:
                matchExpressions:
                  - key: hardware
                    operator: In
                    values: ["rpi4"]
      containers:
        - name: zot
          image: ghcr.io/project-zot/zot-linux-arm64:v2.1.8
          imagePullPolicy: IfNotPresent
          args: ["serve", "/etc/zot/config.json"]
          ports:
            - { name: http, containerPort: 5000 }
          volumeMounts:
            - name: cfg
              mountPath: /etc/zot/config.json
              subPath: config.json
              readOnly: true
            - name: htpasswd
              mountPath: /etc/zot/htpasswd
              subPath: htpasswd
              readOnly: true
            - name: zot-data
              mountPath: /var/lib/registry
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 2
            periodSeconds: 5
          livenessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests: { cpu: "50m", memory: "64Mi" }
      volumes:
        - name: cfg
          configMap:
            name: zot-config
        - name: htpasswd
          secret:
            secretName: zot-htpasswd
        - name: zot-data
          persistentVolumeClaim:
            claimName: zot-data
@@ -1,27 +0,0 @@
# services/zot/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zot
  namespace: zot
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.middlewares: zot-zot-resp-headers@kubernetescrd
spec:
  ingressClassName: traefik
  tls:
    - hosts: [ "registry.bstein.dev" ]
      secretName: registry-bstein-dev-tls
  rules:
    - host: registry.bstein.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: zot
                port:
                  number: 5000
@@ -1,11 +0,0 @@
# services/zot/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - pvc.yaml
  - deployment.yaml
  - configmap.yaml
  - service.yaml
  - ingress.yaml
  - middleware.yaml
@@ -1,26 +0,0 @@
# services/zot/middleware.yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: zot-resp-headers
  namespace: zot
spec:
  headers:
    customResponseHeaders:
      Docker-Distribution-Api-Version: "registry/2.0"
    accessControlAllowOriginList:
      - "*"
    accessControlAllowCredentials: true
    accessControlAllowHeaders:
      - Authorization
      - Content-Type
      - Docker-Distribution-Api-Version
      - X-Registry-Auth
    accessControlAllowMethods:
      - GET
      - HEAD
      - OPTIONS
      - POST
      - PUT
      - PATCH
      - DELETE
@@ -1,5 +0,0 @@
# services/zot/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: zot
@@ -1,13 +0,0 @@
# services/zot/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zot-data
  namespace: zot
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi
  storageClassName: asteria
@@ -1,14 +0,0 @@
# services/zot/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: zot
  namespace: zot
  labels: { app: zot }
spec:
  type: ClusterIP
  selector: { app: zot }
  ports:
    - name: http
      port: 5000
      targetPort: 5000
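With the whole zot service deleted, nothing else in the tree should still reference the retired registry deployment or its pull secret. A final sweep before merge, as a sketch:

```bash
# Any hit here is a manifest that still depends on the retired zot stack.
grep -Rn "zot" services/ infrastructure/ || echo "no remaining references"
```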