| Author  | Commit       | Subject                                                 | Date                      |
|---------|--------------|---------------------------------------------------------|---------------------------|
| jenkins | `c46764e80c` | recovery(atlas): stop post-outage control-plane churn   | 2026-05-05 10:42:28 -03:00 |
| jenkins | `b81053aaec` | ai(ollama): recover onto live jetson gpu pool           | 2026-05-05 06:42:15 -03:00 |
|         | `91d4da9397` | atlasbot: shift to facts context and upgrade model      | 2026-01-27 06:28:26 -03:00 |
|         | `ef7946b4f2` | atlasbot: use cluster snapshot + model update           | 2026-01-27 05:42:28 -03:00 |
|         | `6c84cf60c6` | ai-llm: tighten gpu placement and resources             | 2026-01-26 11:44:28 -03:00 |
|         | `712bba23a1` | ai: restart ollama deployment                           | 2026-01-25 16:19:15 -03:00 |
|         | `6f4cc58941` | vault: prep helm releases and image pins                | 2026-01-13 19:29:14 -03:00 |
|         | `9eac335d53` | ai-llm: serialize rollout for RWO pvc                   | 2026-01-01 14:48:54 -03:00 |
|         | `ceea2539bc` | monitoring: per-panel namespace share filters           | 2026-01-01 14:44:33 -03:00 |
|         | `91de1c1d8d` | gpu: enable time-slicing and refresh dashboards         | 2026-01-01 14:16:08 -03:00 |
|         | `6ac5a0ac46` | chore(ai-llm): annotate pod with model and gpu          | 2025-12-21 00:47:57 -03:00 |
|         | `fb6e71a62a` | ai-llm: GPU qwen2.5-coder on titan-24; add chat.ai host | 2025-12-20 15:19:03 -03:00 |
|         | `497ac90858` | ai-llm: use phi3 mini model                             | 2025-12-20 14:24:52 -03:00 |
|         | `b50977c5a0` | ai: allow ollama to share titan-24 gpu                  | 2025-12-20 14:16:22 -03:00 |
|         | `95ebdce813` | ai: add ollama service and wire chat backend            | 2025-12-20 14:10:34 -03:00 |