6f4cc58941  vault: prep helm releases and image pins                  2026-01-13 19:29:14 -03:00
9eac335d53  ai-llm: serialize rollout for RWO pvc                     2026-01-01 14:48:54 -03:00
ceea2539bc  monitoring: per-panel namespace share filters             2026-01-01 14:44:33 -03:00
91de1c1d8d  gpu: enable time-slicing and refresh dashboards           2026-01-01 14:16:08 -03:00
6ac5a0ac46  chore(ai-llm): annotate pod with model and gpu            2025-12-21 00:47:57 -03:00
fb6e71a62a  ai-llm: GPU qwen2.5-coder on titan-24; add chat.ai host   2025-12-20 15:19:03 -03:00
497ac90858  ai-llm: use phi3 mini model                               2025-12-20 14:24:52 -03:00
b50977c5a0  ai: allow ollama to share titan-24 gpu                    2025-12-20 14:16:22 -03:00
95ebdce813  ai: add ollama service and wire chat backend              2025-12-20 14:10:34 -03:00