refactor: complete hygiene gate cleanup
Parent: 1f6d34b6fa
Commit: f900d7e582

.gitignore (vendored, 3 lines changed)
```diff
@@ -18,3 +18,6 @@ override.toml
 **/*~
 *.swp
 *.swo
+
+# Local Codex/agent instructions stay machine-local.
+AGENTS.md
```
AGENTS.md (848 lines removed)

@@ -1,848 +0,0 @@
# Lesavka Agent Notes

## 0.19.0 Upstream Media v2 Rebuild Checklist

Context: manual Google Meet testing showed the 0.18.x upstream media stack was
still capable of seconds-scale lag and A/V skew. Treat that implementation as
quarantined v1, not as something to tune. The v2 contract is deliberately small:
when webcam video is active, microphone audio and camera frames travel through
one client-owned bundle path; the server maps that client capture clock onto one
local epoch, applies only explicit UVC/UAC output-path offsets, and drops stale
bundles as a unit. Microphone-only remains supported as the explicit no-camera
path.
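The epoch-rebase step of that contract can be sketched as follows. This is a minimal illustration, not the Lesavka implementation; all names and the budget value are assumptions. The first accepted bundle's client capture timestamp is pinned to a fresh server-local instant, every later bundle is scheduled at that epoch plus its capture delta, and bundles whose target playout time is already past the live age budget are dropped as a unit.

```rust
use std::time::{Duration, Instant};

/// Maps a client capture clock onto one server-local epoch (hypothetical sketch).
struct BundleClock {
    /// Client capture timestamp of the first accepted bundle, in microseconds.
    epoch_capture_us: u64,
    /// Server-local instant paired with that first capture timestamp.
    epoch_local: Instant,
    /// Bundles older than this are dropped as a unit.
    max_live_age: Duration,
}

enum Playout {
    At(Instant),
    DropStale,
}

impl BundleClock {
    fn new(first_capture_us: u64, now: Instant, max_live_age: Duration) -> Self {
        Self { epoch_capture_us: first_capture_us, epoch_local: now, max_live_age }
    }

    /// Rebase one bundle's capture timestamp onto the local epoch.
    fn schedule(&self, capture_us: u64, now: Instant) -> Playout {
        let delta = Duration::from_micros(capture_us.saturating_sub(self.epoch_capture_us));
        let target = self.epoch_local + delta;
        // Drop the whole bundle if its target playout time is already too old.
        if now.saturating_duration_since(target) > self.max_live_age {
            Playout::DropStale
        } else {
            Playout::At(target)
        }
    }
}
```

Because both streams ride one bundle clock, a stale bundle removes audio and video together; neither side can play alone.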
### Product Invariants
- [x] Webcam-enabled sessions use one bundled upstream media RPC by default.
- [x] Webcam-enabled sessions imply microphone capture when the server supports UAC.
- [x] The previous upstream media runtime/planner is quarantined under
  `quarantine/upstream-media-v1/` with retained-idea notes.
- [x] The UI-selected camera, camera quality, microphone, speaker, gain, and
  enable switches remain authoritative; defaults may not override visible UI state.
- [x] Client capture timestamps are the source of A/V sync truth for webcam sessions.
- [x] Server v2 playout rebases that client timeline onto a fresh local epoch.
- [x] Server v2 may drop stale bundles, but must not rebuild sync by independently
  pairing separate camera and microphone streams.
- [x] Mic-only sessions keep an explicit no-camera audio path.
- [x] Legacy split webcam/mic uplink is only an explicit compatibility escape hatch.
- [x] Manual probes and diagnostics clearly label `bundled-webcam-media` versus
  `mic-only` so we never confuse the architectures during debugging.
- [x] Sync protection takes precedence over freshness and smoothness: bad mixed
  bundle timing is dropped coherently instead of letting one side play alone.
- [x] An already-attached UVC gadget descriptor is the physical browser contract:
  if it still advertises an older profile, server handshake/capture sizing
  follows that live descriptor until a controlled gadget rebuild is allowed.
- [x] Bundled webcam sessions use the shared client capture timeline for transit
  sync, then apply runtime output-path calibration as explicit per-device
  handoff delays.
- [x] Optional common playout delay is only smoothness slack; it cannot clip or
  replace sync-critical UVC/UAC offsets.
- [x] Direct UVC/UAC hardware probes produce a first-class
  `output-delay-calibration.json` artifact for the server-to-host device
  delay that v2 consumes through the same active output-offset calibration.
- [x] Treat the lab-attached host as measuring equipment only; future remote
  hosts are not expected to expose SSH, browser probes, or local capture
  access for lip-sync calibration.
### Wire Protocol
- [x] Add `UpstreamMediaBundle` containing one optional video frame plus zero or
  more audio packets from the same client capture timeline.
- [x] Add `StreamWebcamMedia(stream UpstreamMediaBundle)` to the relay service.
- [x] Advertise bundled support in the handshake capability set.
- [ ] Add compatibility tests proving older split RPCs are not used by normal
  webcam sessions when bundled support is advertised.
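A minimal Rust mirror of what the bundle might carry, for orientation only: the field names here are assumptions for illustration, and the real proto definition of `UpstreamMediaBundle` governs. The shape is one optional video frame plus zero or more audio packets, all stamped on the same client capture timeline.

```rust
/// Hypothetical sketch of the bundle shape; the actual proto message governs.
#[derive(Debug, Default)]
struct VideoFrame {
    capture_us: u64, // client capture timestamp, microseconds
    data: Vec<u8>,
}

#[derive(Debug)]
struct AudioPacket {
    capture_us: u64, // same client capture timeline as the video frame
    data: Vec<u8>,
}

#[derive(Debug, Default)]
struct UpstreamMediaBundle {
    video: Option<VideoFrame>, // at most one frame per bundle
    audio: Vec<AudioPacket>,   // zero or more packets
}

impl UpstreamMediaBundle {
    /// Earliest capture timestamp in the bundle, if any media is present.
    fn earliest_capture_us(&self) -> Option<u64> {
        let v = self.video.as_ref().map(|f| f.capture_us);
        let a = self.audio.iter().map(|p| p.capture_us).min();
        match (v, a) {
            (Some(v), Some(a)) => Some(v.min(a)),
            (x, None) => x,
            (None, y) => y,
        }
    }
}
```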
### Client Migration
- [x] Change session startup to prefer bundled webcam media whenever camera and
  microphone are both available.
- [x] Spawn one bundled capture/uplink task instead of separate camera and mic
  tasks for webcam sessions.
- [x] Bundle camera frames and microphone packets into one freshness-bounded queue.
- [x] With a camera active, do not flush microphone packets as standalone upstream
  bundles; trim old pending audio and emit with a video frame instead.
- [x] Add a short video/audio grace window so audio captured beside a frame has
  a chance to join that frame's bundle before uplink.
- [x] Keep the microphone-only RPC running as the no-camera path even when the
  server supports bundled webcam media; it yields while camera is active.
- [x] Bound the pre-bundle capture handoff channel so camera/mic workers drop
  old events under pressure instead of building unbounded latency.
- [x] Drop lag-clamped camera and microphone source buffers before bundling;
  stale source data may not be relabeled as fresh media.
- [x] Stamp all packets at capture/uplink enqueue before the async gRPC stream
  can add misleading delay.
- [x] Preserve live UI device/profile changes by restarting the bundled capture
  pipeline when selected camera, camera quality, or microphone changes.
- [x] Make launcher diagnostics expose the active upstream mode as first-class
  text rather than inferring from separate camera/mic telemetry.
- [x] Migrate sync-probe runner to the bundled path explicitly and remove any
  normal probe dependence on split `StreamCamera` + `StreamMicrophone`.
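The bounded handoff idea above, dropping old events under pressure rather than queueing latency, can be sketched as an evict-oldest buffer. This is a simplified stand-in for the policy, assuming nothing about the real channel implementation:

```rust
use std::collections::VecDeque;

/// Bounded handoff: when full, the oldest event is evicted so workers never
/// build unbounded latency (hypothetical sketch of the drop policy).
struct BoundedHandoff<T> {
    queue: VecDeque<T>,
    capacity: usize,
    dropped: u64, // diagnostics counter for evicted-old events
}

impl<T> BoundedHandoff<T> {
    fn new(capacity: usize) -> Self {
        Self { queue: VecDeque::new(), capacity, dropped: 0 }
    }

    /// Push a fresh event; evict the oldest instead of blocking when full.
    fn push(&mut self, event: T) {
        if self.queue.len() == self.capacity {
            self.queue.pop_front();
            self.dropped += 1;
        }
        self.queue.push_back(event);
    }

    fn pop(&mut self) -> Option<T> {
        self.queue.pop_front()
    }
}
```

The design choice matters: under backpressure the consumer always sees the freshest events, and the dropped counter makes the pressure visible instead of silently converting it into delay.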
### Server Migration
- [x] Implement `StreamWebcamMedia` and make it own both UAC and UVC/HDMI sinks
  for one upstream session.
- [x] Schedule bundled packets by shared client capture timestamp instead of
  startup-pairing independent streams.
- [x] Replace the old bundled event sorter/reanchor planner with one v2 bundle
  clock and explicit per-device handoff scheduling.
- [x] Sanitize packet timestamps before bundling so stale/future source PTS values
  cannot become the server's A/V sync truth.
- [x] Make server bundled scheduling use the client capture sidecar rather than
  raw packet `pts`, and reset the bundled epoch on client-session changes.
- [x] Drop bundles coherently when they are already outside the live age budget.
- [x] Drop mixed A/V bundles coherently when capture timestamps are too far apart
  to represent one real capture moment.
- [x] Keep bundled UVC/UAC output-path compensation authoritative; do not clip
  measured offsets just to improve freshness when that would break sync.
- [x] Activate the camera relay before opening the microphone sink so UVC can
  become ready even if UAC setup is slow.
- [x] Log the first bundled video frame handed to the camera sink.
- [x] Honor the live configfs UVC descriptor when it differs from configured
  defaults, preventing browsers from receiving frames outside negotiated caps.
- [x] Make the UVC control helper answer probe/commit requests from the same
  live descriptor so Firefox/Chrome negotiation matches server frame output.
- [x] Bound UVC helper buffering and stale MJPEG replay so server-to-host video
  freshness can be tightened without changing the bundled sync architecture.
- [x] Make direct UVC/UAC output-delay application absolute by default so stale
  legacy calibration does not keep a hidden multi-second video delay alive.
- [x] Make the direct output probe report separate sync and clock-corrected
  freshness verdicts from the same paired server-generated signatures.
- [x] Make applied direct output-delay calibrations run a fixed-delay
  confirmation probe that must pass sync before the run can be trusted.
- [x] Refuse freshness passes when Theia/Tethys clock alignment is too uncertain
  or would imply impossible negative media age.
- [x] Use persistent SSH midpoint clock sampling for freshness checks so SSH
  startup latency does not masquerade as seconds of timing uncertainty.
- [x] Measure freshness clock alignment from the server host to Tethys directly
  instead of routing both clock samples through the client laptop.
- [x] Include clock uncertainty as a margin in freshness pass/fail decisions.
- [x] Recenter the normal MJPEG/UVC output-path video baseline to `+170ms`
  from repeated direct UVC/UAC target estimates, and migrate untouched
  `+1090ms` factory/env baselines down to that center.
- [x] Split freshness reporting into fixed sync delay, last-hop device
  overhead, and total RC target event age so freshness work cannot hide
  the sync cost.
- [x] Add dense server-generated smoothness evidence on the normal UVC/UAC
  path: per-frame video continuity watermark, quiet audio pilot, cadence
  jitter, duplicate/missing frame estimates, and low-RMS audio gap counts.
- [ ] Keep UI/profile controls authoritative for webcam-backed UVC output
  profiles; validate `1280x720@20/30` and `1920x1080@20/30` after sync is
  locked.
- [x] Add a server-to-RC mode-matrix harness so the same sync/freshness/
  smoothness contract can be run against the core Logitech-backed modes:
  `1280x720@20`, `1280x720@30`, `1920x1080@20`, and `1920x1080@30`.
- [ ] Run the mode matrix on Theia/Tethys and record per-mode static delay
  center points before changing the normal advertised profiles.
- [ ] Keep the UI +/-5ms calibration nudges available as small post-baseline
  operator trims for future non-probeable remote hosts.
- [x] Continue reporting client timing and sink handoff diagnostics from bundled packets.
- [ ] Add bundled-mode counters for first bundle, first audio push, first video feed,
  dropped stale bundles, and bundle queue age.
- [ ] Retire split-stream planner assumptions from the webcam path after the
  bundled mode passes manual Google Meet and mirrored-probe validation.
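Two of the drop rules above, bundles outside the live age budget and mixed bundles whose capture timestamps cannot be one real capture moment, combine into a single coherent accept/drop decision. The sketch below is illustrative; the function name, parameters, and threshold values are assumptions, not the server's actual API:

```rust
/// Why a whole bundle was rejected; one side never plays alone (sketch).
#[derive(Debug, PartialEq)]
enum BundleVerdict {
    Accept,
    DropStale,      // bundle as a whole is past the live age budget
    DropIncoherent, // video and audio capture times cannot be one moment
}

/// All times are microseconds on the client capture timeline.
fn judge_bundle(
    video_capture_us: u64,
    audio_capture_us: u64,
    freshest_frontier_us: u64, // freshest known client capture frontier
    max_live_age_us: u64,
    max_intra_bundle_skew_us: u64,
) -> BundleVerdict {
    // Age is judged from the oldest component so nothing stale sneaks through.
    let oldest = video_capture_us.min(audio_capture_us);
    if freshest_frontier_us.saturating_sub(oldest) > max_live_age_us {
        return BundleVerdict::DropStale;
    }
    // A/V capture times too far apart cannot represent one capture moment.
    let skew = video_capture_us.abs_diff(audio_capture_us);
    if skew > max_intra_bundle_skew_us {
        return BundleVerdict::DropIncoherent;
    }
    BundleVerdict::Accept
}
```

Returning one verdict for the whole bundle is what makes the drop coherent: the server never salvages the fresh half and plays it alone.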
### Validation
- [x] `cargo check -p lesavka_common -p lesavka_client -p lesavka_server --bins`
- [x] Focused handshake and launcher tests.
- [x] Focused UVC profile test for stale configured profile vs live attached descriptor.
- [ ] Focused server upstream-media tests including bundled stream acceptance.
- [x] Direct UVC/UAC probe can derive, gate, apply, and optionally save measured
  output-delay calibration without using the fragile webcam-at-screen path.
- [x] Direct UVC/UAC probe confirms a newly applied static delay with a second
  pass/fail sync run before moving on to freshness or smoothness claims.
- [x] Saved output-delay calibration is a static server-side baseline for the
  UVC/UAC gadget path, not a dependency on probing every future attached host.
- [ ] Install on both ends and verify diagnostics show bundled webcam media.
- [ ] Manual Google Meet test: camera starts, video is not black/unsupported,
  audio is intelligible, and lip sync is inside the acceptable band.
- [ ] Mirrored browser probe: recordings open in Dolphin and automated analyzer
  no longer depends on fragile split-channel assumptions.
## A/V Sync Probe And Lip-Sync Validation Checklist

Context: Google Meet testing on 2026-04-30 showed audio roughly 8 seconds behind
video even though internal client/server telemetry reported fresh uplink packets.
Treat this as a product correctness failure, not a calibration issue. Do not
resume blind lip-sync tuning until the probe can explain where delay appears.

### Operating Principles
- Avoid hard-resetting USB, UVC, UAC, display managers, or remote hosts unless the user explicitly approves it.
- Prefer observation and reversible user-space probes before changing media pipelines.
- Treat Tethys-only SSH/device inspection as a development luxury, not a production dependency.
- Do not claim lip sync is fixed from internal telemetry alone; require end-to-end device-level evidence.
- Keep this checklist updated as work lands.
### Phase 1: Build The Probe
- [x] Create this tracked checklist in `AGENTS.md`.
- [x] Inventory existing `client/src/sync_probe/` code and decide what can be reused.
  - Reuse the existing analyzer/coded-pulse model, but keep direct UVC/UAC
    output-delay media server-generated.
  - Reuse the existing Tethys capture harness in `scripts/manual/run_upstream_av_sync.sh`.
  - Reuse and extend `lesavka-sync-analyze`; current gap is structured evidence output, not capture generation.
- [x] Define the phase-1 output contract:
  - [x] `report.json`
  - [x] `report.txt`
  - [x] per-event rows with event id, video time, audio time, skew, and confidence
  - [x] server-output timeline and correlation artifacts that separate Theia
    feed timing from Tethys-observed UVC/UAC skew
  - [x] pass/fail verdict using preferred/acceptable/catastrophic thresholds
- [x] Add a deterministic server-output sync beacon source:
  - [x] video flash pattern with event identity or cadence
  - [x] simultaneous audio click/beep
  - [x] stable event schedule suitable for automated detection
- [x] Add a Tethys-side capture probe:
  - [x] capture Lesavka UVC video device
  - [x] capture Lesavka UAC microphone device
  - [x] record enough raw evidence for debugging when detection fails
  - [x] detect video flashes
  - [x] detect audio clicks
  - [x] pair events and compute skew
- [x] Add a runner that can launch or instruct the Tethys probe safely over SSH without rebooting or restarting the desktop.
- [x] Store direct UVC/UAC probe artifacts under `/tmp/lesavka-output-delay-probe-*` by default.
- [x] Keep the probe usable without Google Meet first; Google Meet validation is a later application-level check.
### Phase 2: Use Probe To Root-Cause Desync
- [x] Run probe through direct Lesavka UVC/UAC devices on Tethys.
  - First live run reached the devices but exposed analyzer/tooling gaps instead of a valid skew report.
  - Fixed the manual probe tunnel to preserve HTTPS/mTLS through SSH (`LESAVKA_SERVER_SCHEME=https`, `LESAVKA_TLS_DOMAIN=lesavka-server`).
  - Fixed analyzer handling for MJPEG captures whose FFprobe metadata over-reports frames versus decodable video frames.
- [x] Compare server-generated paired output signatures against Tethys-observed device capture times.
  - The preserved Tethys capture had 323 decodable frames with constant brightness, so no video flash reached UVC.
  - Server logs show the probe entered a stale upstream session and dropped audio as ~326 seconds late.
- [x] Identify whether delay appears before server planning, at server UAC sink, at UVC helper, inside Tethys device capture, or inside browser/WebRTC.
  - Current root cause is server planning/session lifecycle, before UVC/UAC sink output.
  - A previous one-sided microphone session started at 2026-04-30T22:59:52Z; the new probe at 2026-05-01T00:57:08Z inherited its stale playout epoch.
- [x] Add diagnostics for whichever stage is hiding delay.
  - Existing server lifecycle/planning logs were enough to isolate this run; next gate should preserve these as structured artifacts.
- [x] Do not tune calibration offsets until gross backlog is ruled out.
  - No calibration offsets were changed during the stale-session investigation.
  - Current evidence points at lifecycle/session planning, not an offset problem.
### Phase 3: Fix Lesavka With Evidence
- [x] If stale upstream lifecycle is confirmed, reset shared A/V timing anchors when a new stream replaces an existing owner.
  - Added a lifecycle guard so normal camera/microphone stream replacement clears stale shared timing anchors before re-pairing.
  - Kept soft microphone recovery intentionally separate so it supersedes the mic owner without disturbing an active healthy camera/shared clock.
  - Added regression coverage for stale timing-anchor replacement and soft microphone recovery preservation.
- [ ] If UAC sink backlog is confirmed, make UAC output freshness-bounded.
- [ ] If audio progress is marked too early, move/augment progress telemetry to reflect actual sink emission readiness.
- [ ] If UVC and UAC are using incompatible freshness semantics, unify them behind one live-media policy.
- [ ] If browser/WebRTC adds delay after devices are already synced, document the application boundary and add browser-specific mitigation or guidance.
### Phase 4: Gate And Release Criteria
- [x] Add deterministic unit/integration tests for probe analysis logic.
- [x] Add a hardware-in-the-loop/manual gate artifact schema for real Tethys probe runs.
- [x] Update `scripts/ci/media_reliability_gate.sh` to report probe evidence when present.
  - Gate now reads `LESAVKA_SYNC_PROBE_REPORT_JSON`, `LESAVKA_SYNC_PROBE_REPORT_DIR`, or `target/media-reliability-gate/sync-probe/report.json`.
  - Gate emits sync-probe verdict/check metrics, skew metrics, event counts, and a verdict info metric.
- [x] Require a fresh probe report before declaring lip sync fixed.
  - Gate now supports `LESAVKA_REQUIRE_SYNC_PROBE=1`, which fails media reliability when a valid passing probe report is absent.
  - Product/release judgment still requires a new live Theia/Tethys probe after the lifecycle fix is installed.
- [ ] Suggested thresholds:
  - [x] preferred: p95 skew <= 35 ms
  - [x] acceptable: p95 skew <= 80 ms
  - [x] gross failure: sustained skew > 250 ms
  - [x] catastrophic failure: any sustained skew near or above 1000 ms
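The threshold ladder above can be sketched as a verdict function over paired skew samples. This is a simplified illustration: the real analyzer's pairing and sustained-skew logic is richer than a plain p95, and the unnamed `Failure` band between 80 ms and 250 ms is an assumption added to keep the ladder total.

```rust
#[derive(Debug, PartialEq)]
enum Verdict {
    Preferred,
    Acceptable,
    Failure,             // assumed band between acceptable and gross failure
    GrossFailure,
    CatastrophicFailure,
}

/// p95 of absolute skew samples in milliseconds (nearest-rank method).
/// Panics on an empty slice; callers must have at least one paired sample.
fn p95_abs_skew_ms(skews_ms: &[f64]) -> f64 {
    let mut abs: Vec<f64> = skews_ms.iter().map(|s| s.abs()).collect();
    abs.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let rank = ((abs.len() as f64) * 0.95).ceil() as usize;
    abs[rank.saturating_sub(1)]
}

/// Map a p95 skew onto the checklist's threshold ladder (sketch).
fn verdict(p95_ms: f64) -> Verdict {
    if p95_ms <= 35.0 {
        Verdict::Preferred
    } else if p95_ms <= 80.0 {
        Verdict::Acceptable
    } else if p95_ms <= 250.0 {
        Verdict::Failure
    } else if p95_ms < 1000.0 {
        Verdict::GrossFailure
    } else {
        Verdict::CatastrophicFailure
    }
}
```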
### Open Questions
- [x] Decide whether the phase-1 beacon should run as a separate binary, a hidden client mode, or both.
- [x] Decide whether the Tethys probe should be Rust-only, shell plus GStreamer, or a hybrid.
- [ ] Confirm whether sudo/Vault access is available for installing missing probe dependencies on Theia/Tethys.
  - Non-sudo server journal inspection worked; noninteractive sudo over SSH still needs an explicit TTY/password path.
### Validation Evidence
- [x] `cargo test -p lesavka_server upstream_media_runtime::tests::lifecycle`
- [x] `cargo test -p lesavka_client sync_probe::analyze`
- [x] `cargo test -p lesavka_testing upstream_sync_script_tunnels_auto_server_addr_through_ssh`
- [x] `bash -n scripts/ci/media_reliability_gate.sh`
- [x] `cargo test -p lesavka_testing media_reliability_gate_reports_direct_sync_probe_evidence`
- [x] `LESAVKA_REQUIRE_SYNC_PROBE=1 ./scripts/ci/media_reliability_gate.sh`
  - Used a synthetic passing report at `target/media-reliability-gate/sync-probe/report.json` to verify gate parsing/enforcement.
  - This validates CI glue only; a real Theia/Tethys probe is still required for product judgment.
## Real Upstream Lip-Sync Fix Checklist

Context: the mirrored browser probe finally reproduced the real failure class on 2026-05-01:
`activity_start_delta_ms=+9591.1`. This means the end-to-end browser-visible path can still start video far ahead of audio. The fix target is not silence in the logs; it is a freshness-first A/V uplink whose startup can heal briefly but cannot drift into seconds of skew.

### Acceptance Criteria
- [ ] Mirrored browser probe passes with `activity_start_delta_ms <= 1000`.
- [ ] Steady-state preferred sync: median skew within `35 ms`.
- [ ] Steady-state acceptable sync: p95 absolute skew within `80 ms`.
- [ ] Any sustained or startup A/V split near `1000 ms` remains a hard failure.
- [ ] No stale audio backlog is ever drained into UAC to catch up.
- [ ] No stale video backlog is ever drained into UVC to catch up.
- [ ] Google Meet manual testing agrees with the mirrored probe instead of revealing hidden seconds-scale skew.
### Phase 0: Keep The Probe Honest
- [x] Split raw activity-start fields from filtered/coded paired-pulse fields in probe reports.
- [x] Print explicit raw first-video and first-audio timestamps in `report.txt`.
- [x] Root-cause the 0.16.17 `raw_first_video_activity_s=0.000` artifact as the mirrored probe counting its own bright pre-start positioning card.
- [x] Make the mirrored stimulus pre-start screen dark/dim so only real flash pulses can be detected as video activity.
- [x] Add analyzer coverage proving dim pre-start positioning frames are ignored.
- [x] Replace generic light/dark mirrored flashes with color-coded event IDs.
- [x] Make mirrored audio pulses unique by the same event ID via pulse width plus tone frequency.
- [x] Teach the analyzer to decode mirrored video event IDs from color, not grayscale brightness.
- [x] Tighten real-camera color matching after 0.16.18 accepted washed-out brown/gray remnants as red/yellow events.
- [x] Preserve raw activity-start timing before cadence cleanup in coded reports.
- [x] Merge short audio envelope dropouts inside one coded pulse so a single tone burst cannot become two fake events.
- [x] Add diagnostic coded-pair correlation so stable large skew reports as measured failure instead of `not enough pairs`.
- [x] Make coded mirrored verdicts/calibration use matched coded pulses as authority; raw activity-start deltas are reported separately unless they agree with the coded pairs.
- [x] Print unpaired video/audio onsets in the human report so missed coded pulses are visible during probe triage.
- [ ] Keep the mirrored browser probe as the release/blocking upstream A/V gate.
- [ ] Keep the old raw-device probe as a lower-level diagnostic only.
### Phase 1: Stop One-Sided Startup Drift
- [x] Default upstream planning must require both camera and microphone before live playout.
- [x] One-sided playout may only happen through an explicit compatibility override.
- [x] While pairing is overdue, keep replacing the waiting-side anchor with fresh packets instead of preserving stale startup anchors.
- [x] While awaiting the peer stream, keep only fresh pending camera packets.
- [x] While awaiting the peer stream, keep only fresh pending microphone packets.
- [x] Send the latest camera packet from the client uplink queue instead of draining old-but-not-yet-stale video backlog.
- [x] Add tests proving the pairing window no longer expires into one-sided playout by default.
- [x] Add tests proving the explicit one-sided override still works for intentional single-stream scenarios.
### Phase 2: Bound UAC Freshness
- [x] Configure UAC `appsrc` as non-blocking and bounded.
- [x] Log and drop UAC appsrc push failures instead of treating enqueue as guaranteed playback.
- [x] Raise calibration offset limits enough to cover the measured MJPEG/UVC path delta without rejecting probe corrections.
- [x] Update the MJPEG/UVC factory audio baseline from the old `-45ms`/`+720ms` values to `+1260ms` as the mirrored probe exposes the fresh UAC-vs-UVC path delta.
- [x] Migrate untouched legacy `-45ms` factory/env calibration files on load so old installs actually receive the new baseline.
- [x] Make the video/audio-master wait offset-aware so a positive audio playout delay does not freeze UVC video while UAC sleeps before emission.
- [ ] Flush/stop UAC cleanly on session close, replacement, and recovery.
- [x] Add tests or contract coverage for bounded UAC settings where practical.
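The non-blocking bounded push policy above can be sketched with a bounded channel whose producer drops and counts on overflow instead of blocking. This is a generic Rust illustration of the policy only; the real sink is a GStreamer `appsrc`, and the type and method names here are assumptions:

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};

/// Non-blocking bounded audio push: enqueue is not guaranteed playback,
/// and overflow is counted and dropped (hypothetical sketch of the policy).
struct BoundedAudioSink {
    tx: SyncSender<Vec<u8>>,
    dropped: u64,
}

impl BoundedAudioSink {
    fn new(capacity: usize) -> (Self, Receiver<Vec<u8>>) {
        let (tx, rx) = sync_channel(capacity);
        (Self { tx, dropped: 0 }, rx)
    }

    /// Returns true when the packet was queued; drops instead of blocking.
    fn push(&mut self, packet: Vec<u8>) -> bool {
        match self.tx.try_send(packet) {
            Ok(()) => true,
            Err(TrySendError::Full(_)) | Err(TrySendError::Disconnected(_)) => {
                // Real code would log here: enqueue failure must stay visible.
                self.dropped += 1;
                false
            }
        }
    }
}
```

The key behavior is that a slow consumer can never make the producer accumulate stale audio; the backlog is converted into visible drops rather than seconds of hidden delay.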
### Phase 3: Add Real Timing Evidence
- [ ] Add server timing counters for first camera packet, first mic packet, first UVC write, and first UAC push per session.
- [ ] Add dropped-stale audio/video counters to diagnostics.
- [ ] Add a concise health explanation when startup pairing exceeds the healing window.
- [ ] Surface `Starting`, `Healing`, `Flowing`, `Lagging`, `Dropping`, and `Stale` states in chips/diagnostics from real path evidence.
### Phase 4: Recovery And Mid-Session Changes
- [x] Make device changes trigger soft-pause, stream replacement, queue flush, and re-pairing.
- [ ] Keep recovery soft-first; reserve hard UVC/UAC gadget rebuilds for explicit guarded recoveries.
- [ ] Add cooldown/state guards so recovery buttons cannot wedge Theia.
- [ ] Ensure disconnect closes all client/server media tasks for the session.
### Phase 5: Verification Loop
- [x] Run focused upstream runtime tests.
- [x] Run server/client media contract tests.
- [x] Run `cargo check` for touched packages.
- [x] Bump version for the fix release.
- [x] Run the mirrored browser probe on installed client/server.
  - 0.16.17 still failed: it reported `activity_start_delta_ms=+6735.0`, but `raw_first_video_activity_s=0.000` exposed a probe false positive from the pre-start screen. Paired pulses still showed real steady-state skew (`p95=411.8 ms`, `median=-99.0 ms`), so the product remains unfixed.
  - 0.16.18 captured real colored/audio-coded events but the analyzer still bailed with `need at least 3 matching coded pulse pairs; saw 1`. Replaying that artifact after analyzer hardening now reports `gross_failure`: 16/16 coded pairs, p95 `775.7 ms`, activity start `-766.4 ms`, and drift `-2.8 ms`; the failure is stable audio-ahead/video-late skew, not random detector noise.
  - 0.16.19 changes the shipped MJPEG/UVC audio playout baseline to `+720ms`; the next mirrored browser probe should move the measured median from about `-766ms` toward roughly `-46ms` before fine calibration.
  - The 0.16.19 mirrored browser probe did not move the measured skew: p95 `885.7 ms`, median `-788.4 ms`, activity start `-659.1 ms`, drift `-81.2 ms`. SSH inspection showed Theia was on commit `c348597`, but `/etc/lesavka/server.env` still contained `LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US=-45000`; the new `+720ms` baseline was not actually installed. Patch the installer to migrate leaked legacy ambient `-45000` to `+720000` unless `LESAVKA_INSTALL_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US` explicitly asks for the legacy value.
  - 0.16.20 installed the `+720ms` offset (`/etc/lesavka/server.env` had `LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US=720000`), but the mirrored browser capture contained no recognizable color pulses. Theia server logs showed repeated `upstream video frame dropped because the audio master never caught up inside the pairing window`; UVC was effectively starved by the positive audio delay instead of flowing delayed-but-fresh frames.
  - 0.16.21 makes that wait offset-aware and adds a regression test proving a configured positive audio delay does not freeze UVC video while UAC sleeps before playout.
  - Replaying the 0.16.21 artifact after 0.16.22 analyzer hardening changes the verdict from a false `catastrophic_failure` to `gross_failure`: p95 `273.8 ms`, median `-188.4 ms`, 7 paired coded pulses. The raw activity-start delta (`-3620.7 ms`) is still printed, but it is ignored for verdict/calibration because it disagrees with coded pairs by `3432.3 ms`; unpaired video/audio onsets are printed for triage.
  - The 0.16.22 live mirrored run still failed with p95 `433.7 ms`, median `-359.4 ms`, and 5 paired coded pulses. Client telemetry showed camera uplink `latest_age_ms` repeatedly around `300-350 ms`, matching the measured skew; patch 0.16.23 makes video queues latest-only instead of draining stale-but-under-budget backlog.
  - 0.16.23 local validation passed for fresh-queue behavior, uplink/probe freshness contracts, sync analyzer tests, client/server binary checks, and whitespace checks.
  - The 0.16.23 live mirrored run improved to p95 `215.2 ms`, median `+142.2 ms`, 13 paired coded pulses, and raw activity alignment within `6.6 ms` of coded pairs. Patch 0.16.24 makes the probe print local client and remote server versions before capture so every run records what was actually tested.
  - The 0.16.24 live mirrored run improved again to p95 `168.4 ms`, median `-19.1 ms`, 11 paired coded pulses, but still failed because individual paired pulses bounced between about `-168 ms` and `+45 ms`. Client logs showed the microphone uplink queue still accumulating depth `16`; patch 0.16.25 makes microphone uplink queues latest-only too so stale audio PTS cannot continue acting as the server timing master under backpressure.
  - 0.16.25 removed the client mic backlog but exposed a stable hardware/browser path delta: p95 `557.3 ms`, median `-540.5 ms`, drift `+9.0 ms`, and fresh mic delivery ages around `2-10 ms`. Patch 0.16.26 raises the MJPEG/UVC factory audio delay to `+1260 ms` and expands the calibration clamp so this stable offset can actually be corrected instead of rejected.
- [ ] Re-run the mirrored browser probe after the pre-start false-positive fix.
- [ ] Run Google Meet manual validation.
## 0.17.0 Tyrannical Upstream Playout Checklist

Context: 0.16.x proved that queue tweaks and static calibration cannot guarantee lip sync. 0.17.0 changes the upstream contract: the server planner is authoritative, audio is the master, video follows by timestamp or freezes, and freshness wins over smooth-but-wrong playback.

### Hard Product Invariants
- [x] No normal live upstream playout may be more than 1000 ms behind the freshest known client capture frontier.
- [x] Video may not advance outside the audio playhead sync budget.
- [x] Audio should be continuous when possible, but stale audio must be dropped/skipped rather than drained.
- [x] Missing or late video should freeze/stutter instead of pulling audio backward.
- [x] Startup may be ugly, but must either enter fresh synced live mode or declare failure within 60 seconds.
- [x] Healing may be visible, but must prevent persistent seconds-scale skew.
- [x] Calibration may fine-tune sub-frame offsets only; it must not be required to rescue seconds-scale desync.
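The audio-master/video-follower rule in these invariants can be sketched as a per-frame decision against the audio playhead. This is an illustrative model with assumed names and an assumed skew budget, not the real planner: frames ahead of audio wait, frames inside the budget emit, and frames too far behind are dropped so the previous frame freezes rather than pulling audio backward.

```rust
#[derive(Debug, PartialEq)]
enum VideoDecision {
    Emit,       // inside the sync budget relative to audio
    Wait,       // frame is ahead of the audio playhead: hold it
    DropFreeze, // frame is too far behind audio: drop; previous frame freezes
}

/// All times are milliseconds on the shared playout timeline.
fn follow_audio(frame_ts_ms: i64, audio_playhead_ms: i64, max_skew_ms: i64) -> VideoDecision {
    let lead = frame_ts_ms - audio_playhead_ms; // positive: video ahead of audio
    if lead > max_skew_ms {
        VideoDecision::Wait
    } else if lead < -max_skew_ms {
        VideoDecision::DropFreeze
    } else {
        VideoDecision::Emit
    }
}
```

Audio never waits for video in this model; only the video side changes behavior, which is what keeps a late frame from dragging the whole session backward.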

- [x] Bump Lesavka to 0.17.0 because this is a media-contract change, not a patch tune.
- [x] Bump patch follow-ups to 0.17.1 instead of reporting `version+revision` as the release version.
- [x] Add planner policy config: max live lag, max skew, startup timeout, target playout delay, and healing cooldown.
- [x] Reset 0.17 defaults so shipped audio/video offsets do not intentionally exceed the freshness budget.
- [x] Track latest camera/audio input timestamps in the server planner.
- [x] Track actual planned/emitted audio and video playheads.
- [x] Enforce audio as master: stale audio is dropped/skipped; it does not drain backlog.
- [x] Enforce video follower behavior: frames ahead of audio wait; frames too far behind audio are dropped so the previous frame freezes.
- [x] Re-anchor continuously when the live playhead falls outside the freshness budget, not only once at startup.
- [x] Keep startup paired-only by default and fail visibly after the startup timeout.
- [x] Add planner phases and counters to diagnostics/logs: acquiring, syncing, live, healing, failed; stale drops, skew drops, freshness heals, video freezes.
- [x] Keep UVC/UAC sinks as dumb consumers of planner-approved packets.
- [x] Update tests to prove stale media cannot be emitted and video cannot outrun/lag audio beyond policy.
- [x] Update manual/probe diagnostics so 0.17 reports the planner state being tested.
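
The planner phases and counters in the checklist above can be modeled as a small diagnostics snapshot. A hedged sketch; `PlannerDiagnostics` and its field names are invented for illustration, not the real server structs:

```python
# Hypothetical planner diagnostics snapshot; names are illustrative only.
from dataclasses import dataclass

PHASES = ("acquiring", "syncing", "live", "healing", "failed")

@dataclass
class PlannerDiagnostics:
    phase: str = "acquiring"
    stale_drops: int = 0
    skew_drops: int = 0
    freshness_heals: int = 0
    video_freezes: int = 0

    def enter(self, phase: str) -> None:
        # Reject unknown phases so logs never carry an unnamed state.
        if phase not in PHASES:
            raise ValueError(f"unknown planner phase: {phase}")
        self.phase = phase

diag = PlannerDiagnostics()
diag.enter("live")
diag.stale_drops += 1
```
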

### Validation Targets Before Human Test
- [x] Unit tests for startup pairing, stale audio drop, stale video drop, reanchor, startup timeout, audio-master/video-follower rules.
- [x] Contract tests for installer defaults and version reporting.
- [x] `cargo check -p lesavka_client -p lesavka_server --bins`.
- [x] Focused `lesavka_testing` media/runtime contracts.
- [x] Only after all of the above, run the mirrored browser probe.

### Progress Log
- 2026-05-01: Added 0.17 planner defaults (`350ms` target playout, `1000ms` max live lag, `60000ms` startup timeout, `80000us` pair slack), reset MJPEG audio factory offset to `0`, and migrated old `-45ms`, `+720ms`, and `+1260ms` untouched baselines.
- 2026-05-01: Server planner now tracks latest input frontier, presented audio/video playheads, phase, stale drops, skew drops, reanchors, startup timeouts, and freezes.
- 2026-05-01: Runtime tests green for stale audio drop, stale video drop, audio-master/video-follower freeze, repeated reanchor, paired startup timeout, and planner snapshot basics: `cargo test -p lesavka_server upstream_media_runtime::tests -- --nocapture`.
- 2026-05-01: Added `GetUpstreamSync` RPC, `lesavka-relayctl upstream-sync`, launcher diagnostics text, and mirrored-probe before/after planner snapshots so 0.17 probe runs report the exact planner state under test.
- 2026-05-01: Validation green: `cargo test -p lesavka_server --lib --bins`, `cargo test -p lesavka_testing`, `cargo test -p lesavka_client --bins --lib`, and targeted installer/RPC/layout contracts.
- 2026-05-01: First installed 0.17.0 mirrored browser probe on client/server commit `3920e0a` failed honestly: planner reported fresh live state (`live_lag_ms=10`, `skew_ms=+20.7`) but browser-observed paired pulses showed audio late by median `+349.1ms`, p95 `429.1ms`, with 6 video freezes/skew drops. Replayed artifact after analyzer hardening now reports `gross_failure` instead of false raw-start `catastrophic_failure`.
- 2026-05-01: Patch follow-up models the observed MJPEG/UVC browser egress delta by defaulting video playout offset to `+350ms` and preserving the 1s freshness ceiling. Raw activity-start evidence is now ignored for verdict/calibration when it disagrees with paired pulses that are already failing directly. Existing early-0.17 `audio=0/video=0` factory/env calibration files migrate to the new `video=+350ms` default on load.
- 2026-05-01: Release identity cleanup: bumped the patched build to clean semver `0.17.1`; probe attribution now prints `client_version`/`server_version` separately from `client_revision`/`server_revision` and refuses old `client_full_version` output.
- 2026-05-01: 0.17.1 mirrored probe failed with video about `1.18-1.31s` behind audio and 761 planner video freezes. Root cause candidate: the client rebaser forced independent camera/mic pipelines onto one first-packet capture base, so a later-starting camera path was timestamped too early and looked permanently behind audio. Patch 0.17.2 anchors each stream to the shared monotonic clock at its own first packet time.
- 2026-05-02: 0.17.2 mirrored probe and Google Meet test showed major improvement but persistent sub-second late video. Root cause follow-up: the temporary `+350ms` factory MJPEG video playout offset matched the observed browser skew and also made the server skew guard freeze video against its own offset. Patch 0.17.3 restores factory video offset to `0ms`, migrates untouched `+350ms` install/calibration defaults back to `0ms`, and makes the skew guard offset-aware for intentional site calibration.
- 2026-05-02: 0.17.3 Google Meet manual test improved to roughly sub-second/near-quarter-second lip sync, but the mirrored analyzer could not pair pulses and the user still heard choppy background audio. Client logs showed Pulse microphone packets arriving unevenly with ages around `90-240ms`; patch 0.17.4 lowers Pulse mic `buffer-time`/`latency-time`, bounds the mic queue/appsink, and keeps mirrored-probe after-run planner diagnostics even when analysis fails.
- 2026-05-02: 0.17.4 mirrored run was salvageable after an SCP banner timeout, but analysis still failed with no close pulse pairs. The client log still showed `180-240ms` microphone delivery ages, pointing at server playout sleeps backpressuring the gRPC microphone stream. Patch 0.17.5 drains inbound microphone packets while waiting for scheduled UAC playout and retries browser-capture SCP fetches.
- 2026-05-02: 0.17.5 mirrored run still failed with insufficient paired evidence, and the client log still showed recurring `180-240ms` microphone packet age while camera age stayed near zero. Patch 0.17.6 splits oversized mic samples into `20ms` timestamped packets and keeps a short fresh server-side audio window instead of collapsing every pending burst to one newest chunk, aiming to preserve lip sync without making background audio choppy.
- 2026-05-02: 0.17.6 Bumblebee mirrored run proved Bumblebee mic packets are already `10ms`, but camera source timestamps were being rebased up to roughly `1.8s` into the future while mic packets sat around `180-240ms` old. Patch 0.17.7 adds a source lead cap (`80ms` default) to both direct and duration-paced client timestamp rebasing so bursty camera buffers cannot make the server wait for fake future video while fresh audio keeps moving.
- 2026-05-02: The launcher UI was still writing live control files with only camera/mic/speaker booleans, so media device combo changes were honestly only staged for the next child launch. Patch 0.17.7 extends the live media control file with base64-encoded camera source, camera profile, microphone source, and speaker sink choices; the relay child now rebuilds the affected camera, mic, or speaker pipeline when those selections change.
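
The 0.17.6 mic-splitting entry above can be illustrated as follows. This is a sketch only: it assumes 16-bit mono audio at 48 kHz, and `split_mic_burst` is a hypothetical name, not the client's real packetizer:

```python
# Sketch of splitting an oversized microphone burst into 20 ms timestamped
# packets (the 0.17.6 idea). Assumes 16-bit mono at 48 kHz; the real packet
# format and sample rates are not shown here.
RATE_HZ = 48_000
BYTES_PER_SAMPLE = 2
CHUNK_MS = 20
CHUNK_BYTES = RATE_HZ * BYTES_PER_SAMPLE * CHUNK_MS // 1000  # 1920 bytes

def split_mic_burst(payload: bytes, base_ts_ms: float):
    """Yield (timestamp_ms, chunk) pairs covering the burst in capture order."""
    for i in range(0, len(payload), CHUNK_BYTES):
        chunk = payload[i:i + CHUNK_BYTES]
        # Each chunk advances the capture timestamp by its own duration,
        # so downstream freshness checks see real per-chunk ages.
        ts = base_ts_ms + (i // CHUNK_BYTES) * CHUNK_MS
        yield ts, chunk

packets = list(split_mic_burst(b"\x00" * (CHUNK_BYTES * 3), 100.0))
```
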

## 0.17.8 Sync-Only Output Bias Checklist

Context: 0.17.7 with the Bumblebee mic and BRIO camera removed the seconds-scale failure and left a stable browser-visible output skew: paired pulses were audio-late by roughly `+95ms` to `+183ms` (`median=+110.8ms`, `mean=+132.6ms`, `p95=+183.1ms`). Per user direction, 0.17.8 is only about establishing sync. Freshness and smoothness tuning are explicitly deferred until the mirrored probe is inside the sync band.

- [x] Do not change freshness ceilings, reanchor thresholds, queue policy, UAC smoothness, or startup healing behavior in this version.
- [x] Set the MJPEG/UVC factory video playout baseline to `+130ms` to counter the measured browser output audio-late bias.
- [x] Migrate only untouched old `0ms` and `+350ms` video defaults to the new `+130ms` baseline.
- [x] Preserve manual/site calibration values exactly as-is.
- [x] Update installer defaults so Theia receives the same `+130ms` baseline after reinstall.
- [x] Update docs and contracts to state the measured sync baseline clearly.
- [x] Run focused calibration/installer/runtime tests.
- [x] Run package checks before push.
- [x] Push clean semver `0.17.8` for installed client/server testing.
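
The "migrate only untouched defaults, never manual calibration" rule above reduces to a membership check against known old factory values. A sketch with illustrative constants:

```python
# Sketch of the untouched-default migration rule used across 0.17.x.
# Values are illustrative; the real code also tracks where the offset
# came from (factory vs. env vs. site calibration).
OLD_FACTORY_DEFAULTS_MS = {0, 350}   # previous shipped video baselines
NEW_BASELINE_MS = 130                # new measured baseline

def migrate_video_offset(stored_ms: int) -> int:
    """Move only old factory defaults; leave any other value untouched,
    since a non-default value means someone calibrated it deliberately."""
    if stored_ms in OLD_FACTORY_DEFAULTS_MS:
        return NEW_BASELINE_MS
    return stored_ms
```
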

## 0.17.9 Sync-Only Audio-Master Presentation Checklist

Context: 0.17.8 installed cleanly on both ends (`314c55b`) but the mirrored probe failed with insufficient data: only 2 paired events, 1187 video freezes, and planner phase `healing`. The server was using the newest planned audio packet as the video-drop reference, so future audio planning could make current video look falsely behind before that audio was actually handed to UAC.

- [x] Keep 0.17.9 scoped to sync enforcement only; no freshness ceilings, queue policy, or smoothness changes.
- [x] Make video freeze/drop decisions compare against audio actually presented to UAC, not merely planned audio.
- [x] Make `wait_for_audio_master` wake on `mark_audio_presented` so video waits for real audio progress.
- [x] Add/adjust tests proving future planned audio alone cannot freeze video.
- [x] Run focused upstream planner tests.
- [x] Run package checks before push.
- [x] Push clean semver `0.17.9` for installed client/server testing.

## 0.17.10 Sync-Only Audio Catch-Up Grace Checklist

Context: 0.17.9 installed cleanly on both ends (`fbf274d`) and improved the mirrored probe to `median=+19.8ms`, `mean=-42.0ms`, and planner phase `live`, but it still failed with `p95=254.1ms`, only 6 paired pulses, `drift=341.9ms`, and 591 video freezes. The Theia server log showed repeated `upstream video frame dropped because the audio master never caught up inside the pairing window`, so the video follower was still giving up at the nominal video due time instead of spending a bounded sync grace to let audio catch up.

- [x] Keep 0.17.10 scoped to establishing sync; defer freshness and smoothness tuning until paired skew is stable.
- [x] Add `LESAVKA_UPSTREAM_AUDIO_MASTER_WAIT_GRACE_MS` with a `350ms` default so video can wait past nominal due time for UAC audio progress.
- [x] Stop dropping video solely because it woke late after a successful audio-master wait.
- [x] Preserve the global `1000ms` live-lag ceiling and existing stale-input planner rules.
- [x] Update installer defaults and operational docs for the sync grace.
- [x] Add/adjust tests proving video can wait through sync grace and still times out after grace expires.
- [x] Run focused upstream planner tests.
- [x] Run package checks before push.
- [x] Push clean semver `0.17.10` for installed client/server testing.
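
The sync grace described above can be simulated as a bounded wait on presented-audio progress. A hedged sketch; the event-list shape and `follow_audio` name are assumptions, not the real runtime:

```python
# Simulation of the 0.17.10 sync grace: a video frame may wait past its
# nominal due time, up to a bounded grace, for presented-audio progress.
# Timeline shape and names are illustrative, not the real runtime.
GRACE_MS = 350

def follow_audio(video_due_ms, audio_presented_events):
    """audio_presented_events: sorted (wall_ms, audio_playhead_ms) samples
    of audio actually handed to UAC. Return 'emit' if audio reaches the
    frame inside due+grace, else 'drop'."""
    deadline = video_due_ms + GRACE_MS
    for wall_ms, playhead_ms in audio_presented_events:
        if wall_ms > deadline:
            break                 # grace window exhausted
        if playhead_ms >= video_due_ms:
            return "emit"         # a late wake after a successful wait still emits
    return "drop"                 # audio never caught up inside the grace
```
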

## 0.17.11 Sync-Only Browser Egress Compensation Checklist

Context: 0.17.10 installed cleanly on both ends (`4bb0f4a`) and produced a high-confidence coded-pulse failure instead of probe ambiguity. Browser-visible audio on Tethys arrived about `+891ms` to `+971ms` after the matching video (`median=+962.1ms`, `mean=+946.7ms`, `p95=+971.5ms`), while the server planner reported internal skew near zero (`planner_skew_ms=-56.9`). The missing model is UAC/browser output egress latency: Lesavka was treating `appsrc.push_buffer`/UAC enqueue as audio presentation, but the browser consumes that audio about one second later.

- [x] Keep 0.17.11 scoped to establishing sync; do not tune freshness ceilings or smoothness policy.
- [x] Raise the MJPEG/UVC factory video playout baseline from `+130ms` to `+1090ms` to align video with browser-visible UAC audio.
- [x] Allow intentional A/V playout offsets to exceed the generic future-wait freshness guard so the planner does not immediately reanchor away the sync compensation.
- [x] Widen calibration offset bounds so the measured browser egress baseline is representable instead of silently clamped.
- [x] Migrate untouched `0ms`, `+130ms`, and `+350ms` MJPEG/UVC video baselines to the new browser-visible baseline.
- [x] Preserve manual/site calibration values exactly as-is.
- [x] Update installer defaults so Theia receives the same browser-visible baseline after reinstall.
- [x] Update operational docs and installer contract tests for the new baseline.
- [x] Run focused calibration, installer, and runtime checks.
- [x] Push clean semver `0.17.11` for installed client/server testing.

## 0.17.12 Sync-Only Audio Marker Continuity Checklist

Context: 0.17.11 installed cleanly on both ends (`092c03a`) and fixed the constant browser-visible offset: raw activity start was `-17.0ms`, median coded skew was `+13.1ms`, and no server planner freeze/drop counters moved. The run still failed because only 5 coded pairs were usable and the last two had weak confidence (`0.58`, `0.33`) while the client log showed repeated microphone stale/superseded drops. The remaining sync blocker is audio-marker continuity, not another constant offset.

- [x] Keep 0.17.12 scoped to sync establishment; do not change video freshness or server freshness ceilings.
- [x] Stop using latest-only policy for microphone uplink packets; preserve oldest fresh audio chunks so speech/pulses stay continuous.
- [x] Keep microphone queue age bounded at `400ms` so continuity does not become unbounded backlog.
- [x] Apply the same bounded audio-continuity policy to the synthetic sync-probe audio queue.
- [x] Update contract tests to encode the new audio continuity policy.
- [x] Run focused client queue/probe contracts and package checks.
- [x] Push clean semver `0.17.12` for installed client/server testing.
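
The bounded audio-continuity policy above (oldest fresh chunks survive, capped at `400ms` of age) can be sketched as a FIFO with front-side age eviction; `BoundedMicQueue` is an illustrative name, not the client's real queue type:

```python
# Sketch of the 0.17.12 audio continuity policy: keep the oldest still-fresh
# microphone chunks (FIFO) instead of keeping only the newest, with queue age
# bounded so continuity never becomes unbounded backlog.
from collections import deque

MAX_AGE_MS = 400

class BoundedMicQueue:
    def __init__(self):
        self._q = deque()          # (capture_ts_ms, chunk), oldest first

    def push(self, now_ms, capture_ts_ms, chunk):
        self._q.append((capture_ts_ms, chunk))
        # Evict from the front so the oldest *fresh* chunk survives;
        # latest-only collapse is exactly what this policy replaces.
        while self._q and now_ms - self._q[0][0] > MAX_AGE_MS:
            self._q.popleft()

    def pop_oldest(self):
        return self._q.popleft() if self._q else None
```
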

## 0.17.13 Probe-Driven Calibration Loop Checklist

Context: 0.17.12 installed cleanly on both ends (`2b26fde`) and moved the paired-pulse median into the near-sync zone (`median=+71.6ms`, first/last `+59.8ms/-60.1ms`), but the run still failed with only 4 usable coded pairs and `p95=206.8ms`. Static guesses have reached diminishing returns. 0.17.13 adds a measured calibration loop so the mirrored probe can become the authority for site-specific browser-visible output compensation when, and only when, the analyzer has enough stable evidence.

- [x] Keep 0.17.13 scoped to probe-driven sync calibration tooling; do not change freshness ceilings, queue policy, UAC smoothness, or startup healing behavior.
- [x] Expose relay CLI calibration state and safe calibration actions for scripts.
- [x] Make the mirrored probe locate the analyzer `report.json` after a run and print the calibration decision.
- [x] Add opt-in probe calibration apply mode gated by `LESAVKA_SYNC_APPLY_CALIBRATION=1`.
- [x] Apply only when analyzer `calibration.ready=true`; otherwise refuse and print the analyzer reason.
- [x] Default probe-driven correction to video offset adjustment, because the measured residual is browser-visible UAC egress delay relative to UVC video.
- [x] Keep saving the measured offset as the site default an explicit opt-in via `LESAVKA_SYNC_SAVE_CALIBRATION=1`.
- [x] Update contract tests for relay CLI and manual probe script behavior.
- [x] Run focused relay/manual-script tests and package checks.
- [x] Push clean semver `0.17.13` for installed client/server testing.

Follow-up candidate: after 0.17.13 proves safe measured apply/refuse behavior, add a segmented live-calibration probe. The current browser probe uploads one WebM after recording ends, so it can only do measure/apply/rerun. A true same-session loop should run a longer stimulus, capture/analyze separate Tethys browser windows, apply calibration only between windows, and use the next window as the confirmation segment so before/after evidence is not mixed.
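
The ready/refuse/apply gate above can be sketched as one decision function. The report shape and return strings are illustrative; only the `calibration.ready` field and the `LESAVKA_SYNC_APPLY_CALIBRATION=1` gate come from the notes above:

```python
# Sketch of the 0.17.13 apply gate: calibration mutates the server only when
# the analyzer report says ready AND the operator opted in via environment.
import os

def calibration_decision(report: dict) -> str:
    calib = report.get("calibration", {})
    if not calib.get("ready", False):
        # Refusal always carries the analyzer's own reason.
        return f"refuse: {calib.get('reason', 'analyzer not ready')}"
    if os.environ.get("LESAVKA_SYNC_APPLY_CALIBRATION") != "1":
        return "measured-only: apply not enabled"
    return f"apply: video_offset_ms={calib.get('video_offset_ms')}"
```
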

## 0.17.14 Segmented Live Calibration Probe Checklist

Context: 0.17.13 adds safe measured calibration apply/refuse plumbing, but it is still a single-window measure-then-rerun workflow. The next probe should keep the same Lesavka client/server session alive across multiple browser-capture windows so we can measure, apply, and re-measure without reinstalling or restarting the media path. This is the bridge from probe truth to blind/server-side calibration targets.

- [x] Keep 0.17.14 scoped to probe tooling and observability; do not change media planner policy.
- [x] Add optional multi-segment mirrored probe mode via `LESAVKA_SYNC_CALIBRATION_SEGMENTS`.
- [x] Keep one local stimulus browser and one headless Lesavka sender alive across all segments.
- [x] Run a fresh Tethys browser recording/analyzer pass per segment so before/after calibration evidence is not mixed in one WebM.
- [x] Allow calibration apply between segments using the 0.17.13 ready/refuse gate.
- [x] Capture planner and calibration snapshots before and after each segment for metric correlation.
- [x] Preserve single-segment default behavior for normal manual probes.
- [x] Update manual probe contract tests for segmented live calibration mode.
- [x] Run focused script/CLI checks and package checks.
- [x] Push clean semver `0.17.14` for installed client/server testing.

## 0.17.15 Adaptive Probe Metrics and Blind Target Checklist

Context: 0.17.14 can keep one Lesavka session alive across multiple measured segments, but we still need the probe to teach Lesavka what "good" looks like from server-only telemetry. 0.17.15 turns segmented runs into an adaptive calibration dataset: every segment gets probe truth, planner state, and calibration state joined into artifacts that can drive blind calibration/healing targets when Tethys/browser probe access is not available.

- [x] Keep 0.17.15 scoped to probe intelligence and metrics correlation; do not change media playout policy.
- [x] Add adaptive calibration ergonomics for longer near-continuous runs without changing the default one-segment probe.
- [x] Write per-run segment metrics as CSV and JSONL, joining analyzer verdicts with planner/calibration before/after snapshots.
- [x] Emit a blind-target candidate JSON from segments whose probe verdict passes, including server-visible planner lag/skew ranges.
- [x] Record when no segment is probe-good enough so blind-target generation refuses instead of inventing targets.
- [x] Keep calibration mutation gated by the existing ready/refuse logic and `LESAVKA_SYNC_APPLY_CALIBRATION=1`.
- [x] Update manual probe contract tests for the adaptive artifacts and controls.
- [x] Run focused script checks and package checks.
- [x] Push clean semver `0.17.15` for installed client/server testing.

## 0.17.16 Continuous Browser Evidence Checklist

Context: 0.17.15 proved the adaptive/live-edit loop is structurally useful, but each segment restarted the Tethys browser/getUserMedia receiver. That contaminates calibration segments with receiver startup noise and keeps usable coded pairs too low (`3-5` pairs instead of the `8+` needed for safe calibration). 0.17.16 is scoped to making the probe evidence continuous and attributable before any more media playout changes.

- [x] Keep 0.17.16 scoped to probe/tooling reliability; do not change server media playout policy, freshness ceilings, queue policy, or UAC smoothness.
- [x] Make adaptive mirrored runs keep one Tethys browser/getUserMedia session alive after the first segment.
- [x] Preserve single-segment/manual probe behavior by default.
- [x] Add an explicit `BROWSER_CONSUMER_REUSE_SESSION` control to the browser probe runner.
- [x] Verify browser uploads by start token so the fetched WebM belongs to the segment just triggered.
- [x] Track browser upload counts/tokens in `browser_consumer_probe.py` status JSON for post-run debugging.
- [x] Wire adaptive mirrored mode to reuse the existing Tethys receiver for segments 2+.
- [x] Keep calibration mutation behind the existing analyzer `calibration.ready=true` gate.
- [x] Update manual probe contract tests for continuous browser session behavior.
- [x] Run shell syntax checks, focused contract tests, and package checks.
- [x] Push clean semver `0.17.16` for installed client/server testing.

## 0.17.17 Adaptive Segment Failure Continuation Checklist

Context: 0.17.16 successfully installed and token-verified the first browser capture, but the adaptive run still aborted at segment 1 because `lesavka-sync-analyze` could not form coded pairs (`saw 0`) even though raw activity was nearly synced (`-32.2ms`). That prevented segments 2-4 from exercising the same long-lived Tethys browser session. 0.17.17 keeps adaptive data gathering alive across analyzer-only failures and preserves the failure evidence for summaries.

- [x] Keep 0.17.17 scoped to probe/tooling reliability; do not change media playout policy.
- [x] Add `BROWSER_ANALYSIS_REQUIRED` so browser captures can keep artifacts even when analyzer exits nonzero.
- [x] In adaptive mirrored mode, treat analyzer-only failures as nonfatal segment evidence.
- [x] Preserve analyzer failure reason, raw activity delta, and log path as structured segment artifacts.
- [x] Continue to abort on browser startup, recording, upload, or capture fetch failures.
- [x] Include analyzer failure artifacts in segment CSV/JSONL summaries.
- [x] Keep calibration apply impossible without a real analyzer `report.json` and `calibration.ready=true`.
- [x] Update manual probe contract tests for analyzer-failure continuation.
- [x] Run shell syntax checks, focused contract tests, and package checks.
- [x] Push clean semver `0.17.17` for installed client/server testing.

## 0.17.18 Explicit Probe Source Integrity Checklist

Context: the 0.17.17 Bumblebee mirrored run was configured with
`LESAVKA_MIC_SOURCE=alsa_input.usb-Neat_Microphones_Bumblebee_II_USB_Microphone-00.mono-fallback`,
but the Bumblebee was unplugged. The client log recorded `requested mic ... not found; using default`,
so the run measured the fallback/default microphone and must not be treated as Bumblebee calibration
evidence.

- [x] Treat 0.17.17 Bumblebee probe metrics as invalid for Bumblebee-specific calibration.
- [x] Keep ordinary launcher/client source selection forgiving by default.
- [x] Add strict explicit-source mode with `LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES=1`.
- [x] In strict mode, fail client startup when requested `LESAVKA_MIC_SOURCE` is unavailable instead of falling back to default.
- [x] Make the mirrored manual probe launch the real client with strict explicit-source mode by default.
- [x] Add contract coverage so the mirrored probe cannot regress to silent explicit-source fallback.
- [x] Run shell syntax checks, focused contract tests, and package checks.
- [x] Push clean semver `0.17.18` for installed client/server testing.
- [ ] Re-run the mirrored probe only after confirming the intended microphone is physically present and selected.

## 0.17.19 Fatal Required Source Failure Checklist

Context: the 0.17.18 run proved fallback was blocked, but the headless client kept running
camera-only after the required Bumblebee source failed to open. The server stayed in `acquiring`
for all four segments (`awaiting both upstream media streams`), the analyzer saw no color-coded
video pulses, and no calibration data was produced. Required-source failure must fail the probe,
not degrade into camera-only evidence.

- [x] Treat the 0.17.18 run as a required-microphone setup failure, not a lip-sync measurement.
- [x] Keep strict no-fallback behavior from 0.17.18.
- [x] Abort the client process when an explicit required microphone source cannot start.
- [x] Abort the client process when an explicit required camera source cannot start.
- [x] Run shell syntax checks, focused contract tests, and package checks.
- [x] Push clean semver `0.17.19` for installed client/server testing.
- [ ] Re-run only after `LESAVKA_MIC_SOURCE` is listed by the local audio stack.

## 0.17.20 Provisional Adaptive Calibration Checklist

Context: the 0.17.19 adaptive run reached live media and produced browser-visible sync evidence,
but the probe still did not calibrate. Each segment either had only 3 coded pairs or an analyzer
failure, so the existing `calibration.ready=true` gate refused every adjustment. That was safe for
saved defaults, but wrong for the user-directed adaptive probe goal: live segments should be allowed
to make bounded provisional corrections from imperfect-but-usable evidence, then let the next segment
judge whether the correction helped.

- [x] Treat the 0.17.19 run as a measurement-only run, not a calibration run.
- [x] Keep permanent saved calibration gated by analyzer-ready evidence.
- [x] Add provisional adaptive calibration mode enabled by adaptive runs by default.
- [x] Allow provisional corrections from low-but-usable paired-pulse reports when p95 and drift are bounded.
- [x] Use median skew as the provisional correction source and default to video offset adjustment.
- [x] Clamp provisional correction gain and max step so one noisy segment cannot swing the site offset wildly.
- [x] Record decision mode and provisional recommendations in segment metrics.
- [x] Update manual probe contract tests for provisional calibration controls and output.
- [x] Run shell syntax checks, focused contract tests, and package checks.
- [x] Push clean semver `0.17.20` for installed client/server testing.
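
The provisional correction clamps above reduce to a gain plus a hard per-segment step cap. A sketch with assumed constants (`GAIN`, `MAX_STEP_MS`, and the p95/drift bounds are illustrative, not the shipped values):

```python
# Sketch of the 0.17.20 provisional correction: derive a bounded nudge from
# the segment's median skew, with gain and max-step clamps so one noisy
# segment cannot swing the site offset wildly.
GAIN = 0.5            # apply only part of the measured median skew
MAX_STEP_MS = 120.0   # hard cap per segment

def provisional_step(median_skew_ms, p95_ms, drift_ms,
                     max_p95_ms=400.0, max_drift_ms=500.0):
    """Return the bounded video-offset adjustment in ms, or None when the
    segment's p95/drift is outside the usable-evidence bounds."""
    if p95_ms > max_p95_ms or abs(drift_ms) > max_drift_ms:
        return None           # evidence too unstable for even a nudge
    step = GAIN * median_skew_ms
    return max(-MAX_STEP_MS, min(MAX_STEP_MS, step))
```

The next segment then judges whether the nudge helped, so an overshoot is self-correcting rather than compounding.
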

## 0.17.21 Calibrate-Then-Confirm Probe Checklist

Context: 0.17.20 made adaptive runs capable of provisional calibration between measured
segments, but that still did not strictly guarantee the user-requested flow: run the probe,
calibrate the server while it is running, then run a post-calibration test segment. It also
still ignored analyzer-failure captures that contained a bounded raw activity delta. 0.17.21
makes the probe behavior explicit: calibration segments mutate active server calibration,
confirmation segments do not mutate it, and adaptive runs fail unless confirmation passes.

- [x] Treat `LESAVKA_SYNC_CALIBRATION_SEGMENTS` as calibration windows in adaptive confirm mode.
- [x] Add post-calibration confirmation windows via `LESAVKA_SYNC_CONFIRMATION_SEGMENTS`.
- [x] Disable calibration apply during confirmation windows so they are a clean test.
- [x] Require confirmation pass by default in adaptive confirm mode.
- [x] Add bounded raw-activity provisional calibration for analyzer failures that still report a raw A/V delta.
- [x] Include confirmation summaries and segment phase in adaptive artifacts.
- [x] Update manual probe contract tests for calibrate-then-confirm behavior.
- [x] Run shell syntax checks, focused contract tests, and package checks.
- [x] Push clean semver `0.17.21` for installed client/server testing.

## 0.17.22 Adaptive Calibration Export Fix Checklist

Context: the 0.17.21 run proved the calibrate-then-confirm shape worked, but no calibration
was actually applied. The calibration decision printed usable provisional recommendations, then
reported `provisional_calibration_enabled=false` and left the active offset unchanged. Root cause:
the Bash defaults were unexported shell variables, while the embedded Python decision helper reads
environment variables via `os.environ`, so the helper never saw them.

- [x] Treat the 0.17.21 run as confirmation that the probe shape works but calibration was disabled by script plumbing.
- [x] Export adaptive/provisional/raw-failure/confirmation knobs after defaulting them.
- [x] Add contract coverage so provisional calibration defaults cannot silently stop reaching the Python decision helper.
- [x] Run shell syntax checks, focused contract tests, and package checks.
- [x] Push clean semver `0.17.22` for installed client/server testing.

## 0.17.23 Audio Probe Robustness Checklist

Context: recent mirrored runs consistently detected more video events than audio events. That can
represent a real audio path problem, but the probe should not under-count audio just because the
room/speaker/mic path is quieter or mildly chopped. Harden the test tooling before interpreting
low paired-pulse counts as product failure.

- [x] Raise the default local stimulus tone level and expose it as `PROBE_AUDIO_GAIN`.
- [x] Pass the configured audio gain into the local stimulus browser page.
- [x] Lower the analyzer audio peak floor so faint but valid probe tones are accepted.
- [x] Smooth the audio envelope before thresholding so single-window dips do not erase pulses.
- [x] Merge longer internal tone dropouts inside one pulse without merging adjacent 1s pulses.
- [x] Add analyzer tests for faint tones and longer within-pulse audio dropouts.
- [x] Update manual probe contract coverage for the audio-gain control.
- [x] Run focused analyzer/manual-probe tests and package checks.
- [x] Push clean semver `0.17.23` for installed client/server testing.
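
The envelope smoothing and dropout-merging hardening above can be sketched as a moving average plus gap-bridging pulse detection. Window sizes, thresholds, and names are illustrative, not the real analyzer code:

```python
# Sketch of the 0.17.23 analyzer hardening: smooth the audio envelope before
# thresholding, then bridge brief dropouts inside one pulse without fusing
# adjacent pulses.
def smooth(envelope, window=3):
    """Centered moving average; shrinks the window at the edges."""
    half = window // 2
    return [
        sum(envelope[max(0, i - half):i + half + 1])
        / len(envelope[max(0, i - half):i + half + 1])
        for i in range(len(envelope))
    ]

def detect_pulses(envelope, threshold, max_gap):
    """Return (start, end) index pairs; sub-threshold gaps of at most
    max_gap samples are bridged, longer gaps split the pulse."""
    pulses, start, gap = [], None, 0
    for i, level in enumerate(envelope):
        if level >= threshold:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:
                pulses.append((start, i - gap))
                start, gap = None, 0
    if start is not None:
        pulses.append((start, len(envelope) - 1 - gap))
    return pulses
```
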

## 0.17.24 Probe Truthfulness And Localization Checklist

Context: the 0.17.23 run proved adaptive calibration is now live-editing the server,
but confirmation still failed. Segment 3 passed and triggered a provisional calibration
nudge, while the confirmation segment failed with a near-centered median but high p95/drift.
This means the fastest high-quality path is localization tooling, not another static offset
guess.

- [x] Treat the latest failure as timing instability/outlier drift until the probe proves otherwise.
- [x] Fix analyzer-failure raw activity delta parsing so bounded raw-delta calibration can use the evidence it prints.
- [x] Stop marking `blind-targets.json` ready from calibration-only passes when confirmation segments exist and fail.
- [x] Emit combined `segment-events.csv` and `segment-events.jsonl` artifacts so each run exposes per-pulse skew and confidence across segments.
- [ ] Use the next run to decide whether bad p95 is caused by low-confidence analyzer pairings, camera/mic capture instability, or server planner/output jitter.
- [ ] Add stage-local timing evidence for stimulus schedule, client capture onsets, server output timing, and browser/device capture if the event table still cannot isolate the source.
- [ ] Only save calibration defaults after a confirmation segment passes.

## 0.17.25 Client/Server Timing Sidecar Checklist

Context: the probe should remain an external truth check, not a runtime dependency.
Production sync needs client/server-only timing evidence that can predict and heal jitter before
browser/probe validation. Attach timing metadata to media packets first; add a separate timing RPC
only if packet-attached metadata cannot explain the next failure.

- [x] Add client send/capture/queue timing metadata to upstream camera packets.
- [x] Add client send/capture/queue timing metadata to upstream microphone packets.
- [x] Record the latest packet timing samples on the server when media packets arrive.
- [x] Expose blind client/server timing metrics through `GetUpstreamSync` and `lesavka-relayctl upstream-sync`.
- [x] Include those timing metrics in segmented mirrored-probe summaries.
- [x] Add planner tests covering client capture skew, client send skew, server receive skew, and queue ages.
- [ ] Use the next mirrored run to compare browser p95/drift against client capture/send skew and server receive skew.
- [x] Instrument UVC/UAC/HDMI sink handoff timing before waiting for another run.
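
The packet timing sidecar above enables blind cross-stream comparisons at the server. A sketch assuming per-stream clock bases, so only differences of like quantities are meaningful; the field names are illustrative, not the real proto schema:

```python
# Sketch of 0.17.25-style blind timing: compare the camera and microphone
# packet timelines at the server without any browser probe. Positive skew
# means the audio instant is later than the matching video instant.
def blind_skews(audio, video):
    """audio/video: dicts with capture_ms, send_ms, receive_ms samples
    for the most recent packet of each stream (illustrative shape)."""
    return {
        "capture_skew_ms": audio["capture_ms"] - video["capture_ms"],
        "send_skew_ms": audio["send_ms"] - video["send_ms"],
        "receive_skew_ms": audio["receive_ms"] - video["receive_ms"],
        # Queue age is a within-stream difference, so it is clock-safe.
        "audio_queue_age_ms": audio["send_ms"] - audio["capture_ms"],
        "video_queue_age_ms": video["send_ms"] - video["capture_ms"],
    }
```
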
## 0.17.26 Blind Timing Window And Sink Handoff Checklist
|
||||
|
||||
Context: the next probe should not be required to discover that the server is
|
||||
blind between "packet arrived" and "packet handed to UAC/UVC/HDMI". Close
|
||||
measurement gaps before tuning any new healing controller.
|
||||
|
||||
- [x] Retain rolling client capture/send skew windows inside the server.
|
||||
- [x] Retain rolling server receive skew and client queue age windows.
|
||||
- [x] Record audio/video sink handoff instants and schedule lateness at the server boundary.
|
||||
- [x] Expose sink handoff skew, sink lateness, and rolling p95 timing metrics through `GetUpstreamSync`.
|
||||
- [x] Include rolling blind metrics in mirrored-probe CSV/JSONL summaries and blind targets.
|
||||
- [x] Add planner tests for rolling timing windows and sink handoff timing.
|
||||
- [ ] Use the next mirrored run only for correlation/tuning: decide whether the controller should adjust playout delay, offset, or drop/freeze policy from these blind metrics.
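
The rolling p95 windows above can be sketched minimally. `RollingWindow` is an illustrative name, not the real planner type; it keeps the last N absolute-skew samples and reports a nearest-rank p95.

```rust
/// Minimal rolling window for blind timing metrics (illustrative, not the
/// shipped planner type): retain the newest `cap` samples, report p95.
struct RollingWindow {
    cap: usize,
    samples: Vec<f64>,
}

impl RollingWindow {
    fn new(cap: usize) -> Self {
        Self { cap, samples: Vec::new() }
    }

    fn push(&mut self, abs_skew_ms: f64) {
        if self.samples.len() == self.cap {
            self.samples.remove(0); // drop the oldest sample
        }
        self.samples.push(abs_skew_ms);
    }

    /// Nearest-rank p95 over retained samples; None until a sample exists.
    fn p95(&self) -> Option<f64> {
        if self.samples.is_empty() {
            return None;
        }
        let mut sorted = self.samples.clone();
        sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let rank = ((sorted.len() as f64) * 0.95).ceil() as usize;
        Some(sorted[rank.saturating_sub(1)])
    }
}

fn main() {
    let mut w = RollingWindow::new(100);
    for ms in [5.0, 7.0, 6.0, 250.0] {
        w.push(ms);
    }
    println!("p95 = {:?}", w.p95());
}
```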

## 0.17.27 Runtime Blind Healing Checklist

Context: if client/server-only timing is stable enough, Lesavka should make
small runtime corrections without waiting for the external probe. The probe
remains the truth judge and root-cause localizer, not the production dependency.

- [x] Add a server-owned blind healer loop enabled by default with `LESAVKA_UPSTREAM_BLIND_HEAL=0` escape hatch.
- [x] Gate blind healing on rolling sample counts plus client-send, server-receive, queue-age, sink-late, and sink-handoff p95 limits.
- [x] Apply bounded transient offset nudges from sink handoff skew without saving them as site defaults.
- [x] Expose sample counts in `upstream-sync` and mirrored probe artifacts so failed runs can separate "insufficient evidence" from real timing failure.
- [x] Emit `root-cause-summary.json` from mirrored probe runs to classify failing layers instead of eyeballing raw metrics.
- [x] Add unit tests for apply/refuse/target behavior in the blind healer.
- [ ] Next run should identify the failing layer if confirmation still fails: client capture/uplink, network/server receive, server planner, server sink handoff, or external USB/browser/probe boundary.
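
The sample-count-plus-p95 gate above can be sketched as a pure predicate. Field names and the concrete thresholds here are illustrative assumptions, not the shipped healer's values.

```rust
/// Illustrative rolling-window summary the healer would consult.
struct BlindWindowStats {
    samples: usize,
    send_p95_ms: f64,
    recv_p95_ms: f64,
    queue_age_p95_ms: f64,
    sink_late_p95_ms: f64,
    sink_handoff_p95_ms: f64,
}

/// Refuse blind nudges unless there is enough evidence and every rolling
/// p95 stays under its limit (thresholds are example values).
fn blind_heal_allowed(s: &BlindWindowStats, min_samples: usize) -> bool {
    s.samples >= min_samples
        && s.send_p95_ms <= 50.0
        && s.recv_p95_ms <= 50.0
        && s.queue_age_p95_ms <= 100.0
        && s.sink_late_p95_ms <= 100.0
        && s.sink_handoff_p95_ms <= 120.0
}

fn main() {
    let stable = BlindWindowStats {
        samples: 200,
        send_p95_ms: 8.0,
        recv_p95_ms: 40.0,
        queue_age_p95_ms: 20.0,
        sink_late_p95_ms: 30.0,
        sink_handoff_p95_ms: 60.0,
    };
    println!("allowed: {}", blind_heal_allowed(&stable, 100));
}
```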

## 0.17.28 Blind Timing Normalization Checklist

Context: the first preferred confirmation pass showed the probe-calibrate-confirm
loop can work, but also revealed two blind-healing blockers: sink handoff samples
stayed empty, and client timing skew included a false cross-pipeline PTS offset.

- [x] Pair server sink handoff samples by planned due time, not raw local PTS, so offset-compensated streams still produce handoff evidence.
- [x] Normalize client sidecar capture/send windows onto the shared capture clock using queue delivery age instead of raw per-pipeline packet PTS.
- [x] Add tests proving sink handoff survives large offset-compensated local PTS gaps.
- [x] Add tests proving audio/video timing metadata no longer copies packet PTS domains into blind sidecar fields.
- [ ] Next mirrored run should show non-zero `planner_sink_handoff_window_samples` and much smaller client send/capture p95 skew before trusting blind healing.
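
The due-time pairing idea can be sketched as follows: match each observed handoff instant to its nearest planned due time instead of comparing raw local PTS values, so an offset-compensated stream still yields handoff evidence. Function and parameter names are illustrative.

```rust
/// Pair each sink handoff instant (ms) with the nearest planned due time
/// (ms); returns (handoff_instant, lateness) pairs. Illustrative sketch.
fn pair_by_due_time(handoff_ms: &[i64], due_ms: &[i64]) -> Vec<(i64, i64)> {
    handoff_ms
        .iter()
        .filter_map(|&h| {
            due_ms
                .iter()
                .copied()
                .min_by_key(|&d| (d - h).abs()) // nearest planned due time
                .map(|d| (h, h - d))            // positive lateness = late
        })
        .collect()
}

fn main() {
    let pairs = pair_by_due_time(&[105, 203], &[100, 200, 300]);
    println!("{pairs:?}");
}
```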

## 0.17.29 Enqueue-Bound Client Timing Checklist

Context: the first blind-healing runs showed huge client capture/send skew even though media packets
were latest-only. The sidecar timestamps were being written in async sender tasks after queueing, so
parallel scheduling delay leaked into the diagnostic clock and made blind healing distrust the wrong
layer.

- [x] Stamp client timing metadata at the capture/enqueue boundary instead of the async gRPC send boundary.
- [x] Keep async sender updates limited to queue depth and queue age so scheduling delay stays observable but does not rewrite capture/send time.
- [x] Pair server-side client timing samples by nearby enqueue/send time before reporting rolling skew windows.
- [x] Add regression tests proving queue delay no longer changes capture/send timestamps.
- [x] Push clean semver `0.17.29` for installed client/server testing.
- [x] Use the next mirrored run to confirm client capture/send p95 drops from seconds to single-digit milliseconds.
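
The enqueue-boundary rule can be sketched as a two-phase stamp: the timing instant is fixed when the packet enters the queue, and the async sender may only add queue age afterwards. Types and names here are illustrative, not the shipped packet fields.

```rust
use std::time::Instant;

/// Illustrative packet timing: `enqueued_at` is the authoritative boundary.
struct StampedPacket {
    enqueued_at: Instant, // fixed at capture/enqueue time
    queue_age_ms: f32,    // updated at send time; never rewrites enqueued_at
}

fn stamp_at_enqueue() -> StampedPacket {
    StampedPacket { enqueued_at: Instant::now(), queue_age_ms: 0.0 }
}

/// Async sender update: scheduling delay stays observable as queue age only.
fn update_at_send(p: &mut StampedPacket) {
    p.queue_age_ms = p.enqueued_at.elapsed().as_secs_f32() * 1000.0;
}

fn main() {
    let mut p = stamp_at_enqueue();
    std::thread::sleep(std::time::Duration::from_millis(5));
    update_at_send(&mut p);
    println!("queue age = {:.1} ms", p.queue_age_ms);
}
```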

## 0.17.30 Raw-Failure Calibration Safety Checklist

Context: the 0.17.29 mirrored run confirmed the client-side scheduling leak is fixed, but the probe
then applied large opposite calibration nudges from analyzer failures with zero or one coded pair.
Raw activity deltas are useful diagnostic breadcrumbs; they are not safe steering evidence when coded
pairing collapses.

- [x] Treat the 0.17.29 run as proof that client sidecar timing is now trustworthy enough to move the investigation downstream.
- [x] Default raw analyzer-failure calibration to off instead of inheriting provisional calibration.
- [x] Add `LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS` so even explicit raw-failure calibration refuses weak coded evidence.
- [x] Print the raw-failure pair floor in calibration decisions and segment artifacts.
- [x] Prefer server-side receive/sink blockers over probe-pairing blockers when root-cause evidence is available.
- [x] Update manual probe contract coverage for the safer defaults and refusal reason.
- [ ] Re-run the probe-calibrate-confirm flow; analyzer failures should diagnose but not mutate calibration unless raw fallback is explicitly enabled and has enough coded support.
- [ ] If client send/capture p95 stays low and server receive p95 stays high, localize the transport/server-receive timing layer next.
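
The raw-failure guard above reduces to two refusals: no explicit opt-in, or too few coded pairs. A minimal sketch, assuming illustrative names (the env knob is real; the function and error strings are not):

```rust
/// Decide whether raw analyzer-failure deltas may steer calibration.
/// `min_pairs` corresponds to LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS.
fn raw_failure_calibration_allowed(
    opt_in: bool,
    coded_pairs: usize,
    min_pairs: usize,
) -> Result<(), &'static str> {
    if !opt_in {
        return Err("raw-failure calibration disabled by default");
    }
    if coded_pairs < min_pairs {
        return Err("insufficient coded pairs for raw fallback");
    }
    Ok(())
}

fn main() {
    // Diagnose-only by default: analyzer failures never mutate calibration.
    println!("{:?}", raw_failure_calibration_allowed(false, 10, 3));
}
```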

## 0.17.31 Server Receive Timing Drain Checklist

Context: the 0.17.30 mirrored run kept calibration stable and proved the client enqueue-side timing
fix held: client capture/send p95 stayed in single-digit milliseconds after startup. The remaining
blind blocker moved to server receive timing, where video packets were only timestamped when the
camera playout loop woke up after waiting for the audio master.

- [x] Treat the 0.17.30 run as confirmation that raw-failure calibration no longer whipsaws offsets.
- [x] Keep polling and timing inbound camera packets while video waits for the audio master.
- [x] Keep polling and timing inbound camera packets while video waits for its due time.
- [x] Coalesce pending video to the freshest packet during those waits so the server does not build a stale video backlog.
- [x] Add regression coverage that video timing is recorded at enqueue/drain time before scheduler waits.
- [x] Re-run the probe-calibrate-confirm flow; `planner_server_receive_abs_skew_p95_ms` should fall if this was the receive-side scheduling leak.
- [ ] If receive p95 remains high after this, inspect actual gRPC/HTTP2 stream delivery and OS/network scheduling rather than static calibration.
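
The coalescing step can be sketched in isolation: while the video loop waits, keep draining inbound packets and retain only the freshest, counting what was superseded. This is an illustrative reduction with packets modeled as PTS values, not the real drain loop.

```rust
/// Drain inbound packets into a single freshest slot; returns how many
/// pending packets were superseded (and would be counted as stale drops).
fn coalesce_to_freshest(
    pending: &mut Option<u64>,
    inbound: impl Iterator<Item = u64>,
) -> usize {
    let mut dropped = 0;
    for pts in inbound {
        // replace() hands back the superseded packet, if any.
        if pending.replace(pts).is_some() {
            dropped += 1;
        }
    }
    dropped
}

fn main() {
    let mut pending = Some(1);
    let dropped = coalesce_to_freshest(&mut pending, [2, 3].into_iter());
    println!("dropped {dropped}, pending {pending:?}");
}
```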

## 0.17.32 Blind Heal Opt-In And Stability Checklist

Context: the 0.17.31 mirrored run confirmed the receive-side drain worked: client send and server
receive p95 both stayed near 50ms. The run still failed as probe pairing, and the server-side blind
healer silently changed calibration during the probe run because it was enabled by default and allowed
sink handoff p95 near 240ms.

- [x] Treat the 0.17.31 run as confirmation that the server receive scheduling leak is fixed.
- [x] Default runtime blind healing to disabled so probe-calibration runs cannot be contaminated by hidden server nudges.
- [x] Require explicit server-side `LESAVKA_UPSTREAM_BLIND_HEAL=1` before blind healing mutates transient calibration.
- [x] Tighten the blind-heal sink handoff p95 gate from 250ms to 120ms before applying runtime nudges.
- [x] Align mirrored-run root-cause summaries with the stricter sink handoff stability threshold.
- [x] Add regression coverage for default-disabled blind healing and noisy sink-handoff refusal.
- [ ] Re-run the normal probe-calibrate-confirm flow; `calibration_source` should remain non-blind unless the server was explicitly started with blind healing.
- [ ] If the probe still produces only one or two visual events while blind metrics stay stable, move the next fix to stimulus/browser/probe detection instead of transport timing.
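
The opt-in semantics above reduce to: only the exact value `1` enables healing; absence or any other value leaves calibration untouched. A minimal sketch (the env variable name is real; the function is illustrative and takes the raw value so the default-off behavior is testable):

```rust
/// Strict opt-in: blind healing runs only for LESAVKA_UPSTREAM_BLIND_HEAL=1.
fn blind_heal_enabled(raw: Option<&str>) -> bool {
    matches!(raw, Some("1"))
}

fn main() {
    let raw = std::env::var("LESAVKA_UPSTREAM_BLIND_HEAL").ok();
    println!("blind heal enabled: {}", blind_heal_enabled(raw.as_deref()));
}
```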

## 0.17.33 Probe Detection Robustness Checklist

Context: the 0.17.32 mirrored run proved hidden blind healing stayed off and calibration remained
stable, but the external browser probe still produced too few pairs. The capture path is now limited
by analyzer robustness: the webcam sees the screen plus room background, and the microphone hears the
stimulus plus environmental noise.

- [x] Treat probe pairing as the top priority before applying more calibration logic.
- [x] Replace whole-frame color/brightness averaging with adaptive video ROI detection that follows the changing stimulus region.
- [x] Add regression coverage for a small flashing screen region inside a larger static frame.
- [x] Add tone-aware audio detection using the stimulus frequency palette so steady hum/noise is less likely to become a pulse.
- [x] Add regression coverage for test-tone pulses under strong low-frequency background hum.
- [x] Add coded-video fallbacks for overexposed screen captures: pulse-shaped color filtering, brightness fallback, and duplicate-frame normalization.
- [x] Make the local stimulus more probe-friendly by defaulting to kiosk mode and darker saturated colors.
- [x] Generate a `manual-review/index.html` with embedded segment captures so runs are easy to inspect by eye.
- [ ] Re-run the mirrored probe and confirm pair counts rise enough for calibration-ready evidence.
- [ ] If pair counts improve but p95 remains high, move next to server sink handoff jitter and late-run queue pressure.

## 0.17.34 Stimulus Verification Checklist

Context: the first 0.17.33 run did not prove the analyzer changes because the
browser-control path timed out before the local stimulus was started, and the
operator did not see colored flashes or hear coded tones. A sync probe must
verify its own source before asking the analyzer to explain the capture.

- [x] Add a short visible/audible local stimulus preview before the real client starts so framing and audio audibility are human-checkable.
- [x] Record stimulus start/preview tokens, audio context state, and active pulse metadata in `stimulus-status.json`.
- [x] Make each mirrored segment fail early if the local page does not observe `/start` or WebAudio is not running.
- [x] Retry the Tethys browser recording `/start` request to survive transient SSH banner timeouts.
- [x] Open the manual review capture directory in Dolphin after summarization so copied Tethys captures are immediately inspectable.
- [ ] Re-run the mirrored probe and confirm the preview is visible/audible before trusting any pairing diagnosis.

## 0.17.35 Right-Eye Capture Diagnostics Checklist

Context: manual Tethys testing showed the desktop was awake and HDMI was on, but the right-eye feed stayed black. Server logs showed `eye=r` reached `PLAYING` and then hit a V4L2/GStreamer `Invalid argument (22)` poll error before any frame was pushed, while the left eye streamed normally.

- [x] Confirm Tethys was not asleep: X11 reported HDMI connected at 1920x1080 and `Monitor is On`.
- [x] Remove stale Tethys browser-probe processes after mirrored probe runs so manual Google Meet testing does not compete with old recorder sessions.
- [x] Propagate late eye-capture GStreamer bus errors into the gRPC video stream so the launcher reports a preview stream error instead of a silent black window.
- [x] Add a first-frame watchdog for eye capture streams so opened-but-empty sources surface as explicit diagnostics.
- [ ] Re-run a manual two-eye session and confirm right-eye failures now appear in the session log with the concrete source error.
- [ ] If `eye-r` still reports `poll error ... EINVAL`, recover/reseat the right HDMI capture path or add a dedicated eye-capture soft recovery path separate from UVC/UAC.

## 0.17.36 Call Media Stability Checklist

Context: a manual Google Meet test on 0.17.34/0.17.35 was much worse than the earlier baseline:
remote audio sounded like choppy chunks/clicks and the video was visibly choppy. Live Theia
configuration showed the installer-generated UVC gadget was advertising `640x480 @ 20fps`, the
client camera pipeline was using that server profile as the outgoing packet profile even when the UI
selected `720p@30`, and the UAC speech path used a very tight 20ms/5ms sink buffer/latency with
downstream appsrc dropping.

- [x] Treat probe/analyzer measurements as suspect until the copied Tethys capture is visually and audibly stable.
- [x] Make UI-selected camera quality king: the launcher camera profile now drives the outgoing uplink profile by default instead of being downscaled to stale server caps.
- [x] Keep an explicit `LESAVKA_CAM_LOCK_TO_SERVER_PROFILE=1` lab escape hatch for debugging the server UVC gadget contract.
- [x] Restore generated UVC gadget fallback defaults to `1280x720 @ 30fps` for sessions without an explicit UI/session profile.
- [x] Align runtime UVC fallback defaults with the generated 30fps gadget profile.
- [x] Raise UAC speech sink buffering and appsrc limits so speech favors intelligibility over bare-minimum latency under jitter.
- [x] Stop the default downstream appsrc leaking (buffer-dropping) on the UAC speech path; shredded chunks are worse than modest added latency for calls.
- [ ] Reinstall/restart Theia services so `/etc/lesavka/uvc.env` is refreshed from `640x480 @ 20fps` to `1280x720 @ 30fps`.
- [ ] Re-run manual Google Meet before trusting mirrored probe calibration; verify speech is intelligible and video cadence is stable by eye.

## 0.17.37 UVC Format Compatibility Checklist

Context: after 0.17.36, Google Meet showed `Video Format Not Supported`. The client correctly
captured the UI-selected `720p@30` profile, but it emitted those frames into a server UVC gadget still
advertising `640x480 @ 20fps`. USB camera consumers require advertised caps and frame payloads to
match; otherwise the feed is rejected before we can evaluate smoothness or sync.

- [x] Preserve UI-selected capture quality as the source capture profile.
- [x] Restore safe default UVC emission to the negotiated server gadget profile so browsers see frames matching the camera format they negotiated.
- [x] Keep `LESAVKA_CAM_EMIT_UI_PROFILE=1` as an explicit lab-only opt-in until the server can reconfigure the UVC gadget from the UI/session profile.
- [x] Keep `LESAVKA_CAM_LOCK_TO_SERVER_PROFILE=1` as a safety override that wins over experimental UI-profile emission.
- [ ] Add a real server-side UVC profile reconfigure path before making UI-selected quality drive the gadget-advertised output format.
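
The emission precedence above can be sketched as a pure selection: the lock override wins, then the lab-only UI-profile opt-in, else the negotiated server gadget profile. The env names are real; the function and profile strings are illustrative.

```rust
/// Choose the emitted camera profile from the two env overrides:
/// LESAVKA_CAM_LOCK_TO_SERVER_PROFILE=1 beats LESAVKA_CAM_EMIT_UI_PROFILE=1.
fn emission_profile<'a>(
    lock_to_server: bool,
    emit_ui_profile: bool,
    ui: &'a str,
    server: &'a str,
) -> &'a str {
    if lock_to_server {
        server // safety override always wins
    } else if emit_ui_profile {
        ui // experimental lab-only opt-in
    } else {
        server // safe default: match what the browser negotiated
    }
}

fn main() {
    println!("{}", emission_profile(false, false, "1280x720@30", "640x480@20"));
}
```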
6
Cargo.lock
generated
@@ -1652,7 +1652,7 @@ checksum = "09edd9e8b54e49e587e4f6295a7d29c3ea94d469cb40ab8ca70b288248a81db2"

 [[package]]
 name = "lesavka_client"
-version = "0.19.30"
+version = "0.20.0"
 dependencies = [
  "anyhow",
  "async-stream",
@@ -1686,7 +1686,7 @@ dependencies = [

 [[package]]
 name = "lesavka_common"
-version = "0.19.30"
+version = "0.20.0"
 dependencies = [
  "anyhow",
  "base64",
@@ -1698,7 +1698,7 @@ dependencies = [

 [[package]]
 name = "lesavka_server"
-version = "0.19.30"
+version = "0.20.0"
 dependencies = [
  "anyhow",
  "base64",

@@ -4,7 +4,7 @@ path = "src/main.rs"

 [package]
 name = "lesavka_client"
-version = "0.19.30"
+version = "0.20.0"
 edition = "2024"

 [dependencies]
File diff suppressed because it is too large
204
client/src/app/uplink_media/bundled_media_queue.rs
Normal file
@@ -0,0 +1,204 @@
#[cfg(not(coverage))]
const VIDEO_UPLINK_QUEUE: crate::uplink_fresh_queue::FreshQueueConfig =
    crate::uplink_fresh_queue::FreshQueueConfig {
        capacity: 32,
        max_age: Duration::from_millis(350),
        policy: crate::uplink_fresh_queue::FreshQueuePolicy::LatestOnly,
    };

#[cfg(not(coverage))]
const AUDIO_UPLINK_QUEUE: crate::uplink_fresh_queue::FreshQueueConfig =
    crate::uplink_fresh_queue::FreshQueueConfig {
        capacity: 64,
        max_age: Duration::from_millis(400),
        policy: crate::uplink_fresh_queue::FreshQueuePolicy::DrainOldest,
    };

#[cfg(not(coverage))]
const BUNDLED_MEDIA_UPLINK_QUEUE: crate::uplink_fresh_queue::FreshQueueConfig =
    crate::uplink_fresh_queue::FreshQueueConfig {
        capacity: 4,
        max_age: Duration::from_secs(1),
        policy: crate::uplink_fresh_queue::FreshQueuePolicy::LatestOnly,
    };

#[cfg(not(coverage))]
const BUNDLED_AUDIO_FLUSH_INTERVAL: Duration = Duration::from_millis(20);

#[cfg(not(coverage))]
const BUNDLED_AUDIO_MAX_PENDING: usize = 8;

#[cfg(not(coverage))]
const BUNDLED_VIDEO_AUDIO_GRACE: Duration = Duration::from_millis(30);

#[cfg(not(coverage))]
const BUNDLED_CAPTURE_EVENT_CHANNEL_CAPACITY: usize = 64;

#[cfg(not(coverage))]
#[derive(Debug)]
enum BundledCaptureEvent {
    Audio(AudioPacket),
    Video(VideoPacket),
    Restart,
}

#[cfg(not(coverage))]
/// Keeps `bundle_captured_media` explicit because it sits on the live uplink path, where stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
fn bundle_captured_media(
    event_rx: std::sync::mpsc::Receiver<BundledCaptureEvent>,
    stop: Arc<AtomicBool>,
    queue: crate::uplink_fresh_queue::FreshPacketQueue<UpstreamMediaBundle>,
    video_required: bool,
    camera_telemetry: crate::uplink_telemetry::UplinkTelemetryHandle,
    microphone_telemetry: crate::uplink_telemetry::UplinkTelemetryHandle,
    drop_log: Arc<std::sync::Mutex<UplinkDropLogLimiter>>,
) {
    static BUNDLED_SESSION: AtomicU64 = AtomicU64::new(0);
    let session_id = BUNDLED_SESSION
        .fetch_add(1, Ordering::Relaxed)
        .saturating_add(1);
    let mut bundle_seq = 0_u64;
    let mut pending_audio = Vec::<AudioPacket>::new();
    let mut pending_video = None::<VideoPacket>;
    let mut pending_video_deadline = None::<Instant>;
    let mut next_audio_flush = Instant::now() + BUNDLED_AUDIO_FLUSH_INTERVAL;

    loop {
        if stop.load(Ordering::Relaxed) {
            break;
        }
        let now = Instant::now();
        let timeout = if video_required {
            pending_video_deadline
                .unwrap_or(now + BUNDLED_AUDIO_FLUSH_INTERVAL)
                .saturating_duration_since(now)
        } else {
            next_audio_flush.saturating_duration_since(now)
        };
        match event_rx.recv_timeout(timeout) {
            Ok(BundledCaptureEvent::Audio(packet)) => {
                pending_audio.push(packet);
                if video_required {
                    let dropped = retain_newest_pending_audio(&mut pending_audio);
                    if dropped > 0 {
                        microphone_telemetry.record_stale_drop(dropped as u64);
                        log_uplink_drop(
                            &drop_log,
                            UplinkDropReason::Stale,
                            dropped as u64,
                            pending_audio.len(),
                            0.0,
                        );
                    }
                } else if pending_audio.len() >= BUNDLED_AUDIO_MAX_PENDING {
                    emit_bundled_media(
                        session_id,
                        &mut bundle_seq,
                        None,
                        std::mem::take(&mut pending_audio),
                        &queue,
                        &camera_telemetry,
                        &microphone_telemetry,
                        &drop_log,
                    );
                    next_audio_flush = Instant::now() + BUNDLED_AUDIO_FLUSH_INTERVAL;
                }
            }
            Ok(BundledCaptureEvent::Video(packet)) => {
                if let Some(video) = pending_video.take() {
                    emit_bundled_media(
                        session_id,
                        &mut bundle_seq,
                        Some(video),
                        std::mem::take(&mut pending_audio),
                        &queue,
                        &camera_telemetry,
                        &microphone_telemetry,
                        &drop_log,
                    );
                }
                pending_video = Some(packet);
                pending_video_deadline = Some(Instant::now() + BUNDLED_VIDEO_AUDIO_GRACE);
                if !video_required {
                    emit_bundled_media(
                        session_id,
                        &mut bundle_seq,
                        pending_video.take(),
                        std::mem::take(&mut pending_audio),
                        &queue,
                        &camera_telemetry,
                        &microphone_telemetry,
                        &drop_log,
                    );
                    pending_video_deadline = None;
                    next_audio_flush = Instant::now() + BUNDLED_AUDIO_FLUSH_INTERVAL;
                }
            }
            Ok(BundledCaptureEvent::Restart) => {
                if pending_video.is_some() || (!video_required && !pending_audio.is_empty()) {
                    emit_bundled_media(
                        session_id,
                        &mut bundle_seq,
                        pending_video.take(),
                        std::mem::take(&mut pending_audio),
                        &queue,
                        &camera_telemetry,
                        &microphone_telemetry,
                        &drop_log,
                    );
                }
                stop.store(true, Ordering::Relaxed);
                break;
            }
            Err(std::sync::mpsc::RecvTimeoutError::Timeout) => {
                if video_required {
                    if pending_video_deadline.is_some_and(|deadline| Instant::now() >= deadline) {
                        emit_bundled_media(
                            session_id,
                            &mut bundle_seq,
                            pending_video.take(),
                            std::mem::take(&mut pending_audio),
                            &queue,
                            &camera_telemetry,
                            &microphone_telemetry,
                            &drop_log,
                        );
                        pending_video_deadline = None;
                    } else {
                        let dropped = retain_newest_pending_audio(&mut pending_audio);
                        if dropped > 0 {
                            microphone_telemetry.record_stale_drop(dropped as u64);
                        }
                    }
                } else if !pending_audio.is_empty() {
                    emit_bundled_media(
                        session_id,
                        &mut bundle_seq,
                        None,
                        std::mem::take(&mut pending_audio),
                        &queue,
                        &camera_telemetry,
                        &microphone_telemetry,
                        &drop_log,
                    );
                }
                next_audio_flush = Instant::now() + BUNDLED_AUDIO_FLUSH_INTERVAL;
            }
            Err(std::sync::mpsc::RecvTimeoutError::Disconnected) => break,
        }
    }
    if pending_video.is_some() || (!video_required && !pending_audio.is_empty()) {
        emit_bundled_media(
            session_id,
            &mut bundle_seq,
            pending_video.take(),
            std::mem::take(&mut pending_audio),
            &queue,
            &camera_telemetry,
            &microphone_telemetry,
            &drop_log,
        );
    }
    queue.close();
}
233
client/src/app/uplink_media/camera_loop.rs
Normal file
@@ -0,0 +1,233 @@
impl LesavkaClientApp {
    /*──────────────── cam stream ───────────────────*/
    #[cfg(not(coverage))]
    /// Keeps `cam_loop` explicit because it sits on the live uplink path, where stale media must be dropped instead of queued into latency.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn cam_loop(
        ep: Channel,
        initial_source: Option<String>,
        initial_profile: Option<String>,
        camera_cfg: Option<crate::input::camera::CameraConfig>,
        telemetry: crate::uplink_telemetry::UplinkTelemetryHandle,
        media_controls: crate::live_media_control::LiveMediaControls,
    ) {
        let mut delay = Duration::from_secs(1);

        loop {
            let state = media_controls.refresh();
            if !state.camera {
                telemetry.record_enabled(false);
                tokio::time::sleep(Duration::from_millis(100)).await;
                continue;
            }
            let active_source = state.camera_source.resolve(initial_source.as_deref());
            let active_profile = state.camera_profile.resolve(initial_profile.as_deref());
            let capture_profile = active_profile
                .as_deref()
                .and_then(parse_camera_profile_id);
            let setup_source = active_source.clone();
            let result = tokio::task::spawn_blocking(move || {
                CameraCapture::new_with_capture_profile(
                    setup_source.as_deref(),
                    camera_cfg,
                    capture_profile,
                )
            })
            .await;
            let cam = match result {
                Ok(Ok(cam)) => Arc::new(cam),
                Ok(Err(err)) => {
                    telemetry.record_disconnect(format!("webcam uplink setup failed: {err:#}"));
                    warn!(
                        "📸 webcam uplink setup failed for {:?}: {err:#}",
                        active_source.as_deref().unwrap_or("auto")
                    );
                    abort_if_required_media_source_failed("camera", "📸", active_source.as_deref(), &err);
                    delay = app_support::next_delay(delay);
                    tokio::time::sleep(delay).await;
                    continue;
                }
                Err(err) => {
                    telemetry.record_disconnect(format!("webcam uplink setup task failed: {err}"));
                    warn!("📸 webcam uplink setup task failed before StreamCamera could start: {err}");
                    abort_if_required_media_source_failed(
                        "camera",
                        "📸",
                        active_source.as_deref(),
                        &err,
                    );
                    delay = app_support::next_delay(delay);
                    tokio::time::sleep(delay).await;
                    continue;
                }
            };

            telemetry.record_reconnect_attempt();
            let mut cli = RelayClient::new(ep.clone());
            let queue = crate::uplink_fresh_queue::FreshPacketQueue::new(VIDEO_UPLINK_QUEUE);
            let drop_log =
                Arc::new(std::sync::Mutex::new(UplinkDropLogLimiter::new("camera", "📸")));

            let queue_stream = queue.clone();
            let telemetry_stream = telemetry.clone();
            let drop_log_stream = Arc::clone(&drop_log);
            let outbound = async_stream::stream! {
                loop {
                    let next = queue_stream.pop_fresh().await;
                    if next.dropped_stale > 0 {
                        telemetry_stream.record_stale_drop(next.dropped_stale);
                        log_uplink_drop(
                            &drop_log_stream,
                            UplinkDropReason::Stale,
                            next.dropped_stale,
                            next.queue_depth,
                            duration_ms(next.delivery_age),
                        );
                    }
                    if let Some(mut packet) = next.packet {
                        telemetry_stream.record_streamed(
                            queue_depth_u32(next.queue_depth),
                            duration_ms(next.delivery_age),
                        );
                        attach_video_queue_metadata(
                            &mut packet,
                            next.queue_depth,
                            next.delivery_age,
                        );
                        yield packet;
                        continue;
                    }
                    break;
                }
            };

            match cli.stream_camera(Request::new(outbound)).await {
                Ok(mut resp) => {
                    let (stop_tx, stop_rx) = std::sync::mpsc::channel::<()>();
                    let cam_worker = std::thread::spawn({
                        let cam = cam.clone();
                        let telemetry = telemetry.clone();
                        let queue = queue.clone();
                        let drop_log = Arc::clone(&drop_log);
                        let media_controls = media_controls.clone();
                        let initial_source_thread = initial_source.clone();
                        let active_source_thread = active_source.clone();
                        let initial_profile_thread = initial_profile.clone();
                        let active_profile_thread = active_profile.clone();
                        move || loop {
                            if stop_rx.try_recv().is_ok() {
                                break;
                            }
                            let state = media_controls.refresh();
                            let desired_source =
                                state.camera_source.resolve(initial_source_thread.as_deref());
                            let desired_profile =
                                state.camera_profile.resolve(initial_profile_thread.as_deref());
                            if desired_source != active_source_thread
                                || desired_profile != active_profile_thread
                            {
                                tracing::info!(
                                    from = active_source_thread.as_deref().unwrap_or("auto"),
                                    to = desired_source.as_deref().unwrap_or("auto"),
                                    "📸 webcam source changed; restarting live uplink pipeline"
                                );
                                break;
                            }
                            if !state.camera {
                                telemetry.record_enabled(false);
                                tracing::info!("📸 webcam uplink soft-paused");
                                while stop_rx.try_recv().is_err() {
                                    let state = media_controls.refresh();
                                    let desired_source =
                                        state.camera_source.resolve(initial_source_thread.as_deref());
                                    let desired_profile = state
                                        .camera_profile
                                        .resolve(initial_profile_thread.as_deref());
                                    if desired_source != active_source_thread
                                        || desired_profile != active_profile_thread
                                    {
                                        break;
                                    }
                                    if state.camera {
                                        break;
                                    }
                                    std::thread::sleep(Duration::from_millis(25));
                                }
                                if stop_rx.try_recv().is_ok() {
                                    break;
                                }
                                let state = media_controls.refresh();
                                let desired_source =
                                    state.camera_source.resolve(initial_source_thread.as_deref());
                                let desired_profile = state
                                    .camera_profile
                                    .resolve(initial_profile_thread.as_deref());
                                if desired_source != active_source_thread
                                    || desired_profile != active_profile_thread
                                {
                                    tracing::info!(
                                        from = active_source_thread.as_deref().unwrap_or("auto"),
                                        to = desired_source.as_deref().unwrap_or("auto"),
                                        "📸 webcam source changed while paused; restarting live uplink pipeline"
                                    );
                                    break;
                                }
                                telemetry.record_enabled(true);
                                tracing::info!("📸 webcam uplink resumed");
                            }
                            let Some(mut pkt) = cam.pull() else {
                                std::thread::sleep(Duration::from_millis(5));
                                continue;
                            };
                            static CNT: std::sync::atomic::AtomicU64 =
                                std::sync::atomic::AtomicU64::new(0);
                            let n = CNT.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
                            if n < 10 || n.is_multiple_of(120) {
                                tracing::trace!("📸 cli frame#{n} {} B", pkt.data.len());
                            }
                            tracing::trace!("📸⬆️ sent webcam AU pts={} {} B", pkt.pts, pkt.data.len());
                            let enqueue_age = stamp_video_timing_metadata_at_enqueue(&mut pkt);
                            let stats = queue.push(pkt, enqueue_age);
                            if stats.dropped_queue_full > 0 {
                                telemetry.record_queue_full_drop(stats.dropped_queue_full);
                                log_uplink_drop(
                                    &drop_log,
                                    UplinkDropReason::QueueFull,
                                    stats.dropped_queue_full,
                                    stats.queue_depth,
                                    duration_ms(enqueue_age),
                                );
                            }
                            telemetry.record_enqueue(
                                queue_depth_u32(stats.queue_depth),
                                duration_ms(enqueue_age),
                                0.0,
                            );
                        }
                    });
                    delay = Duration::from_secs(1);
                    telemetry.record_connected();
                    while resp.get_mut().message().await.transpose().is_some() {}
                    telemetry.record_disconnect("camera uplink stream ended");
                    queue.close();
                    let _ = stop_tx.send(());
                    let _ = cam_worker.join();
                }
                Err(e) if e.code() == tonic::Code::Unimplemented => {
                    tracing::warn!("📸 server does not support StreamCamera – giving up");
                    telemetry.record_disconnect("camera uplink unavailable on server");
                    queue.close();
                    return;
                }
                Err(e) => {
                    telemetry.record_disconnect(format!("camera uplink connect failed: {e}"));
                    tracing::warn!("❌📸 connect failed: {e:?}");
                    delay = app_support::next_delay(delay);
                }
            }

            queue.close();
            tokio::time::sleep(delay).await;
        }
    }
}
|
||||
88
client/src/app/uplink_media/drop_logging.rs
Normal file
@@ -0,0 +1,88 @@
#[cfg(not(coverage))]
#[derive(Clone, Copy, Debug)]
enum UplinkDropReason {
    QueueFull,
    Stale,
}

#[cfg(not(coverage))]
#[derive(Debug)]
struct UplinkDropLogLimiter {
    stream: &'static str,
    icon: &'static str,
    last_warn_at: Option<Instant>,
    suppressed_full: u64,
    suppressed_stale: u64,
}

/// Aggregate freshness-first upstream drops into periodic warnings per stream.
#[cfg(not(coverage))]
impl UplinkDropLogLimiter {
    fn new(stream: &'static str, icon: &'static str) -> Self {
        Self {
            stream,
            icon,
            last_warn_at: None,
            suppressed_full: 0,
            suppressed_stale: 0,
        }
    }

    /// Fold full-queue and stale-packet drops into one periodic warning.
    fn record(&mut self, reason: UplinkDropReason, count: u64, queue_depth: usize, age_ms: f32) {
        match reason {
            UplinkDropReason::QueueFull => {
                self.suppressed_full = self.suppressed_full.saturating_add(count)
            }
            UplinkDropReason::Stale => {
                self.suppressed_stale = self.suppressed_stale.saturating_add(count)
            }
        }

        let should_warn = self
            .last_warn_at
            .map(|last| last.elapsed() >= UPLINK_DROP_WARN_INTERVAL)
            .unwrap_or(true);
        if should_warn {
            warn!(
                stream = self.stream,
                dropped_queue_full = self.suppressed_full,
                dropped_stale = self.suppressed_stale,
                queue_depth,
                latest_age_ms = age_ms,
                "{} upstream {} queue is dropping stale/superseded packets to preserve live A/V sync",
                self.icon,
                self.stream
            );
            self.suppressed_full = 0;
            self.suppressed_stale = 0;
            self.last_warn_at = Some(Instant::now());
        } else {
            debug!(
                stream = self.stream,
                ?reason,
                count,
                queue_depth,
                latest_age_ms = age_ms,
                "upstream media queue drop suppressed from WARN noise"
            );
        }
    }
}

#[cfg(not(coverage))]
const UPLINK_DROP_WARN_INTERVAL: Duration = Duration::from_secs(5);

/// Report an upstream queue drop through the shared rate limiter.
#[cfg(not(coverage))]
fn log_uplink_drop(
    limiter: &Arc<std::sync::Mutex<UplinkDropLogLimiter>>,
    reason: UplinkDropReason,
    count: u64,
    queue_depth: usize,
    age_ms: f32,
) {
    if let Ok(mut limiter) = limiter.lock() {
        limiter.record(reason, count, queue_depth, age_ms);
    }
}
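The limiter's counting behavior above (fold drops, warn at most once per interval, reset counters after warning) can be sketched without the `tracing` macros. Everything below is an illustrative stand-in, not the crate's own type; the 5-second interval mirrors `UPLINK_DROP_WARN_INTERVAL` by assumption:

```rust
use std::time::{Duration, Instant};

// Assumed interval, mirroring UPLINK_DROP_WARN_INTERVAL.
const WARN_INTERVAL: Duration = Duration::from_secs(5);

struct DropCounter {
    last_warn_at: Option<Instant>,
    suppressed: u64,
}

impl DropCounter {
    fn new() -> Self {
        Self { last_warn_at: None, suppressed: 0 }
    }

    /// Returns Some(total) when a warning is due, folding counts otherwise.
    fn record(&mut self, count: u64) -> Option<u64> {
        self.suppressed = self.suppressed.saturating_add(count);
        let due = self
            .last_warn_at
            .map(|last| last.elapsed() >= WARN_INTERVAL)
            .unwrap_or(true);
        if due {
            let total = self.suppressed;
            self.suppressed = 0;
            self.last_warn_at = Some(Instant::now());
            Some(total)
        } else {
            None
        }
    }
}

fn main() {
    let mut counter = DropCounter::new();
    assert_eq!(counter.record(3), Some(3)); // first drop warns immediately
    assert_eq!(counter.record(2), None);    // within the interval: folded
    assert_eq!(counter.record(4), None);    // still folded (now 6 pending)
    println!("drop folding behaves as described");
}
```

The key property, as in the real limiter, is that no drop is ever silently lost: suppressed counts accumulate and surface in the next emitted warning.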
52
client/src/app/uplink_media/media_source_requirements.rs
Normal file
@@ -0,0 +1,52 @@
#[cfg(not(coverage))]
fn initial_camera_profile_id_from_env() -> Option<String> {
    let width = std::env::var("LESAVKA_CAM_WIDTH").ok()?;
    let height = std::env::var("LESAVKA_CAM_HEIGHT").ok()?;
    let fps = std::env::var("LESAVKA_CAM_FPS").ok()?;
    Some(format!("{width}x{height}@{fps}"))
}

#[cfg(not(coverage))]
fn parse_camera_profile_id(raw: &str) -> Option<(u32, u32, u32)> {
    let (size, fps) = raw.split_once('@')?;
    let (width, height) = size.split_once('x')?;
    let width = width.parse().ok()?;
    let height = height.parse().ok()?;
    let fps = fps.parse().ok()?;
    (width > 0 && height > 0 && fps > 0).then_some((width, height, fps))
}

/// Keeps `abort_if_required_media_source_failed` explicit because it sits on the live uplink
/// path, where stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn abort_if_required_media_source_failed(
    kind: &str,
    icon: &str,
    source: Option<&str>,
    err: &dyn std::fmt::Display,
) {
    if !explicit_media_sources_required() || source.is_none_or(|source| source.trim().is_empty()) {
        return;
    }
    let source = source.expect("checked source presence");
    error!(
        "{icon} required {kind} source '{source}' failed to start; aborting client because LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES=1: {err}"
    );
    eprintln!(
        "{icon} required {kind} source '{source}' failed to start; aborting client because LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES=1: {err}"
    );
    std::process::exit(2);
}

#[cfg(not(coverage))]
fn explicit_media_sources_required() -> bool {
    std::env::var("LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES")
        .ok()
        .is_some_and(|value| {
            let value = value.trim();
            value == "1"
                || value.eq_ignore_ascii_case("true")
                || value.eq_ignore_ascii_case("yes")
                || value.eq_ignore_ascii_case("on")
        })
}
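The `WIDTHxHEIGHT@FPS` profile-id convention above can be exercised with a standalone sketch. The parser below mirrors `parse_camera_profile_id` for illustration only; it is a copy, not the crate's function:

```rust
/// Mirror of the profile-id parser: "1280x720@30" -> (1280, 720, 30).
fn parse_profile(raw: &str) -> Option<(u32, u32, u32)> {
    let (size, fps) = raw.split_once('@')?;
    let (width, height) = size.split_once('x')?;
    let width: u32 = width.parse().ok()?;
    let height: u32 = height.parse().ok()?;
    let fps: u32 = fps.parse().ok()?;
    // Zero dimensions or zero fps are rejected rather than clamped.
    (width > 0 && height > 0 && fps > 0).then_some((width, height, fps))
}

fn main() {
    assert_eq!(parse_profile("1280x720@30"), Some((1280, 720, 30)));
    assert_eq!(parse_profile("0x720@30"), None); // zero width rejected
    assert_eq!(parse_profile("1280x720"), None); // missing "@fps" part
    println!("profile parsing behaves as documented");
}
```

Note that the parse and the env-var builder (`initial_camera_profile_id_from_env`) are deliberately symmetric, so a profile round-trips through the same string format.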
121
client/src/app/uplink_media/tests/mod.rs
Normal file
@@ -0,0 +1,121 @@
use super::*;

/// Keeps `audio_timing_metadata_is_stamped_before_async_queue_pop` explicit because it sits on
/// the live uplink path, where stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[test]
fn audio_timing_metadata_is_stamped_before_async_queue_pop() {
    std::thread::sleep(Duration::from_millis(5));
    let packet_pts_us = crate::live_capture_clock::capture_pts_us().saturating_sub(2_000);
    let mut packet = AudioPacket {
        pts: packet_pts_us,
        ..AudioPacket::default()
    };

    let enqueue_age = stamp_audio_timing_metadata_at_enqueue(&mut packet);
    let capture_pts_us = packet.client_capture_pts_us;
    let send_pts_us = packet.client_send_pts_us;
    std::thread::sleep(Duration::from_millis(5));
    attach_audio_queue_metadata(
        &mut packet,
        3,
        enqueue_age.saturating_add(Duration::from_millis(5)),
    );

    assert!(packet.seq > 0);
    assert_eq!(packet.client_queue_depth, 3);
    assert!(packet.client_queue_age_ms >= 5);
    assert_eq!(packet.client_capture_pts_us, capture_pts_us);
    assert_eq!(packet.client_send_pts_us, send_pts_us);
    assert!(
        packet.client_send_pts_us >= packet.client_capture_pts_us,
        "enqueue/send stamp must be on or after the shared-clock capture estimate"
    );
    let capture_to_enqueue = Duration::from_micros(
        packet
            .client_send_pts_us
            .saturating_sub(packet.client_capture_pts_us),
    );
    assert_eq!(
        capture_to_enqueue, enqueue_age,
        "enqueue timing metadata should stay anchored to the pre-pop stamp"
    );
    assert!(
        packet.client_queue_age_ms >= duration_ms_u32(enqueue_age).saturating_add(4),
        "queue age should include the simulated async pop delay without mutating send timing"
    );
}

/// Keeps `video_timing_metadata_is_stamped_before_async_queue_pop` explicit because it sits on
/// the live uplink path, where stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[test]
fn video_timing_metadata_is_stamped_before_async_queue_pop() {
    std::thread::sleep(Duration::from_millis(5));
    let packet_pts_us = crate::live_capture_clock::capture_pts_us().saturating_sub(3_000);
    let mut packet = VideoPacket {
        pts: packet_pts_us,
        ..VideoPacket::default()
    };

    let enqueue_age = stamp_video_timing_metadata_at_enqueue(&mut packet);
    let capture_pts_us = packet.client_capture_pts_us;
    let send_pts_us = packet.client_send_pts_us;
    std::thread::sleep(Duration::from_millis(5));
    attach_video_queue_metadata(
        &mut packet,
        4,
        enqueue_age.saturating_add(Duration::from_millis(5)),
    );

    assert!(packet.seq > 0);
    assert_eq!(packet.client_queue_depth, 4);
    assert!(packet.client_queue_age_ms >= 5);
    assert_eq!(packet.client_capture_pts_us, capture_pts_us);
    assert_eq!(packet.client_send_pts_us, send_pts_us);
    assert!(
        packet.client_send_pts_us >= packet.client_capture_pts_us,
        "enqueue/send stamp must be on or after the shared-clock capture estimate"
    );
    let capture_to_enqueue = Duration::from_micros(
        packet
            .client_send_pts_us
            .saturating_sub(packet.client_capture_pts_us),
    );
    assert_eq!(
        capture_to_enqueue, enqueue_age,
        "enqueue timing metadata should stay anchored to the pre-pop stamp"
    );
    assert!(
        packet.client_queue_age_ms >= duration_ms_u32(enqueue_age).saturating_add(4),
        "queue age should include the simulated async pop delay without mutating send timing"
    );
}

/// Keeps `stale_source_timestamps_are_clamped_before_bundling` explicit because it sits on the
/// live uplink path, where stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[test]
fn stale_source_timestamps_are_clamped_before_bundling() {
    let enqueue_pts_us = crate::live_capture_clock::capture_pts_us();
    let stale_pts_us = enqueue_pts_us.saturating_sub(30_000_000);
    let mut audio = AudioPacket {
        pts: stale_pts_us,
        ..AudioPacket::default()
    };
    let mut video = VideoPacket {
        pts: stale_pts_us,
        ..VideoPacket::default()
    };

    let audio_age = stamp_audio_timing_metadata_at_enqueue(&mut audio);
    let video_age = stamp_video_timing_metadata_at_enqueue(&mut video);

    assert_eq!(audio.pts, audio.client_capture_pts_us);
    assert_eq!(video.pts, video.client_capture_pts_us);
    assert!(
        audio_age <= crate::live_capture_clock::upstream_source_lag_cap(),
        "audio capture timestamp should not resurrect stale source timing"
    );
    assert!(
        video_age <= crate::live_capture_clock::upstream_source_lag_cap(),
        "video capture timestamp should not resurrect stale source timing"
    );
}
220
client/src/app/uplink_media/uplink_queue_metadata.rs
Normal file
@@ -0,0 +1,220 @@
/// Keeps `retain_newest_pending_audio` explicit because it sits on the live uplink path, where
/// stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn retain_newest_pending_audio(pending_audio: &mut Vec<AudioPacket>) -> usize {
    if pending_audio.len() <= BUNDLED_AUDIO_MAX_PENDING {
        return 0;
    }
    let dropped = pending_audio.len() - BUNDLED_AUDIO_MAX_PENDING;
    pending_audio.drain(..dropped);
    dropped
}

/// Keeps `emit_bundled_media` explicit because it sits on the live uplink path, where stale
/// media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
#[allow(clippy::too_many_arguments)]
fn emit_bundled_media(
    session_id: u64,
    bundle_seq: &mut u64,
    video: Option<VideoPacket>,
    audio: Vec<AudioPacket>,
    queue: &crate::uplink_fresh_queue::FreshPacketQueue<UpstreamMediaBundle>,
    camera_telemetry: &crate::uplink_telemetry::UplinkTelemetryHandle,
    microphone_telemetry: &crate::uplink_telemetry::UplinkTelemetryHandle,
    drop_log: &Arc<std::sync::Mutex<UplinkDropLogLimiter>>,
) {
    if video.is_none() && audio.is_empty() {
        return;
    }
    *bundle_seq = bundle_seq.saturating_add(1);
    let (capture_start_us, capture_end_us) = bundled_capture_bounds(video.as_ref(), &audio);
    let enqueue_now_us = crate::live_capture_clock::capture_pts_us();
    let enqueue_age = Duration::from_micros(enqueue_now_us.saturating_sub(capture_start_us));
    let has_video = video.is_some();
    let has_audio = !audio.is_empty();
    let mut bundle = UpstreamMediaBundle {
        session_id,
        seq: *bundle_seq,
        capture_start_us,
        capture_end_us,
        video,
        audio,
        ..UpstreamMediaBundle::default()
    };
    attach_bundle_queue_metadata(&mut bundle, 0, enqueue_age);
    let stats = queue.push(bundle, enqueue_age);
    if stats.dropped_queue_full > 0 {
        if has_video {
            camera_telemetry.record_queue_full_drop(stats.dropped_queue_full);
        }
        if has_audio {
            microphone_telemetry.record_queue_full_drop(stats.dropped_queue_full);
        }
        log_uplink_drop(
            drop_log,
            UplinkDropReason::QueueFull,
            stats.dropped_queue_full,
            stats.queue_depth,
            duration_ms(enqueue_age),
        );
    }
    let queue_depth = queue_depth_u32(stats.queue_depth);
    let age_ms = duration_ms(enqueue_age);
    if has_video {
        camera_telemetry.record_enqueue(queue_depth, age_ms, 0.0);
    }
    if has_audio {
        microphone_telemetry.record_enqueue(queue_depth, age_ms, 0.0);
    }
}

/// Keeps `bundled_capture_bounds` explicit because it sits on the live uplink path, where stale
/// media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn bundled_capture_bounds(video: Option<&VideoPacket>, audio: &[AudioPacket]) -> (u64, u64) {
    let mut start = u64::MAX;
    let mut end = 0_u64;
    if let Some(video) = video {
        let pts = packet_video_capture_pts_us(video);
        start = start.min(pts);
        end = end.max(pts);
    }
    for packet in audio {
        let pts = packet_audio_capture_pts_us(packet);
        start = start.min(pts);
        end = end.max(pts);
    }
    if start == u64::MAX {
        let now = crate::live_capture_clock::capture_pts_us();
        return (now, now);
    }
    (start, end.max(start))
}

/// Keeps `packet_audio_capture_pts_us` explicit because it sits on the live uplink path, where
/// stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn packet_audio_capture_pts_us(packet: &AudioPacket) -> u64 {
    if packet.client_capture_pts_us == 0 {
        packet.pts
    } else {
        packet.client_capture_pts_us
    }
}

/// Keeps `packet_video_capture_pts_us` explicit because it sits on the live uplink path, where
/// stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn packet_video_capture_pts_us(packet: &VideoPacket) -> u64 {
    if packet.client_capture_pts_us == 0 {
        packet.pts
    } else {
        packet.client_capture_pts_us
    }
}

#[cfg(not(coverage))]
fn queue_depth_u32(depth: usize) -> u32 {
    depth.try_into().unwrap_or(u32::MAX)
}

#[cfg(not(coverage))]
fn duration_ms(duration: Duration) -> f32 {
    duration.as_secs_f32() * 1_000.0
}

#[cfg(not(coverage))]
fn duration_ms_u32(duration: Duration) -> u32 {
    duration.as_millis().min(u128::from(u32::MAX)) as u32
}

#[cfg(not(coverage))]
fn age_between_capture_and_enqueue(capture_pts_us: u64, enqueue_pts_us: u64) -> Duration {
    Duration::from_micros(enqueue_pts_us.saturating_sub(capture_pts_us))
}

#[cfg(not(coverage))]
fn stamp_audio_timing_metadata_at_enqueue(packet: &mut AudioPacket) -> Duration {
    static AUDIO_SEQUENCE: AtomicU64 = AtomicU64::new(0);
    let enqueue_pts_us = crate::live_capture_clock::capture_pts_us();
    let capture_pts_us = sanitized_capture_pts_us(packet.pts, enqueue_pts_us);
    packet.pts = capture_pts_us;
    packet.seq = AUDIO_SEQUENCE.fetch_add(1, Ordering::Relaxed).saturating_add(1);
    packet.client_capture_pts_us = capture_pts_us;
    packet.client_send_pts_us = enqueue_pts_us;
    age_between_capture_and_enqueue(capture_pts_us, enqueue_pts_us)
}

#[cfg(not(coverage))]
fn stamp_video_timing_metadata_at_enqueue(packet: &mut VideoPacket) -> Duration {
    static VIDEO_SEQUENCE: AtomicU64 = AtomicU64::new(0);
    let enqueue_pts_us = crate::live_capture_clock::capture_pts_us();
    let capture_pts_us = sanitized_capture_pts_us(packet.pts, enqueue_pts_us);
    packet.pts = capture_pts_us;
    packet.seq = VIDEO_SEQUENCE.fetch_add(1, Ordering::Relaxed).saturating_add(1);
    packet.client_capture_pts_us = capture_pts_us;
    packet.client_send_pts_us = enqueue_pts_us;
    age_between_capture_and_enqueue(capture_pts_us, enqueue_pts_us)
}

/// Keeps `sanitized_capture_pts_us` explicit because it sits on the live uplink path, where
/// stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn sanitized_capture_pts_us(packet_pts_us: u64, enqueue_pts_us: u64) -> u64 {
    let mut capture_pts_us = packet_pts_us.min(enqueue_pts_us);
    let max_lag_us = crate::live_capture_clock::upstream_source_lag_cap()
        .as_micros()
        .min(u64::MAX as u128) as u64;
    let lag_floor_us = enqueue_pts_us.saturating_sub(max_lag_us);
    if capture_pts_us < lag_floor_us {
        capture_pts_us = lag_floor_us;
    }
    capture_pts_us
}

/// Keeps `attach_audio_queue_metadata` explicit because it sits on the live uplink path, where
/// stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn attach_audio_queue_metadata(
    packet: &mut AudioPacket,
    queue_depth: usize,
    delivery_age: Duration,
) {
    if packet.seq == 0 {
        let _ = stamp_audio_timing_metadata_at_enqueue(packet);
    }
    packet.client_queue_depth = queue_depth_u32(queue_depth);
    packet.client_queue_age_ms = duration_ms_u32(delivery_age);
}

/// Keeps `attach_video_queue_metadata` explicit because it sits on the live uplink path, where
/// stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn attach_video_queue_metadata(
    packet: &mut VideoPacket,
    queue_depth: usize,
    delivery_age: Duration,
) {
    if packet.seq == 0 {
        let _ = stamp_video_timing_metadata_at_enqueue(packet);
    }
    packet.client_queue_depth = queue_depth_u32(queue_depth);
    packet.client_queue_age_ms = duration_ms_u32(delivery_age);
}

/// Keeps `attach_bundle_queue_metadata` explicit because it sits on the live uplink path, where
/// stale media must be dropped instead of queued into latency.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg(not(coverage))]
fn attach_bundle_queue_metadata(
    bundle: &mut UpstreamMediaBundle,
    queue_depth: usize,
    delivery_age: Duration,
) {
    for packet in &mut bundle.audio {
        attach_audio_queue_metadata(packet, queue_depth, delivery_age);
    }
    if let Some(packet) = bundle.video.as_mut() {
        attach_video_queue_metadata(packet, queue_depth, delivery_age);
    }
}
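The timestamp sanitizer clamps in two directions: a capture PTS may never exceed the enqueue stamp, and may never lag it by more than the source-lag cap. A standalone sketch of that clamp (the 200 ms cap here is an assumed value for illustration; the real cap comes from `live_capture_clock::upstream_source_lag_cap`):

```rust
// Assumed lag cap for illustration only (200 ms in microseconds).
const MAX_LAG_US: u64 = 200_000;

/// Clamp a source timestamp into [enqueue - cap, enqueue].
fn sanitize_pts(packet_pts_us: u64, enqueue_pts_us: u64) -> u64 {
    let capped = packet_pts_us.min(enqueue_pts_us);        // never in the future
    let floor = enqueue_pts_us.saturating_sub(MAX_LAG_US); // never older than the cap
    capped.max(floor)
}

fn main() {
    let now = 10_000_000_u64;
    assert_eq!(sanitize_pts(now + 5_000, now), now);            // future -> clamped to now
    assert_eq!(sanitize_pts(now - 50_000, now), now - 50_000);  // fresh -> kept as-is
    assert_eq!(sanitize_pts(now - 30_000_000, now), now - 200_000); // stale -> lag floor
    println!("pts sanitizer clamps as described");
}
```

This is why the `stale_source_timestamps_are_clamped_before_bundling` test can assert that a 30-second-old source PTS never produces an enqueue age beyond the cap: the floor silently discards stale source timing rather than replaying it into latency.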
202
client/src/app/uplink_media/voice_loop.rs
Normal file
@@ -0,0 +1,202 @@
impl LesavkaClientApp {
    /*──────────────── mic stream ─────────────────*/
    /// Keeps `voice_loop` explicit because it sits on the live uplink path, where stale media
    /// must be dropped instead of queued into latency.
    /// Inputs are the typed parameters; output is the return value or side effect.
    #[cfg(not(coverage))]
    async fn voice_loop(
        ep: Channel,
        initial_source: Option<String>,
        telemetry: crate::uplink_telemetry::UplinkTelemetryHandle,
        media_controls: crate::live_media_control::LiveMediaControls,
        pause_when_camera_active: bool,
    ) {
        let mut delay = Duration::from_secs(1);
        static FAIL_CNT: AtomicUsize = AtomicUsize::new(0);

        loop {
            let state = media_controls.refresh();
            if pause_when_camera_active && state.camera {
                tokio::time::sleep(Duration::from_millis(100)).await;
                continue;
            }
            if !state.microphone {
                telemetry.record_enabled(false);
                tokio::time::sleep(Duration::from_millis(100)).await;
                continue;
            }
            let microphone_source_choice = state.microphone_source.clone();
            let active_source = microphone_source_choice.resolve(initial_source.as_deref());
            let use_default_source = matches!(
                microphone_source_choice,
                crate::live_media_control::MediaDeviceChoice::Auto
            ) && active_source.is_none();
            let setup_source = active_source.clone();
            let result = tokio::task::spawn_blocking(move || {
                if use_default_source {
                    MicrophoneCapture::new_default_source()
                } else {
                    MicrophoneCapture::new_with_source(setup_source.as_deref())
                }
            })
            .await;
            let mic = match result {
                Ok(Ok(mic)) => Arc::new(mic),
                Ok(Err(err)) => {
                    telemetry.record_disconnect(format!("microphone uplink setup failed: {err:#}"));
                    warn!(
                        "🎤 microphone uplink setup failed for {:?}: {err:#}",
                        active_source.as_deref().unwrap_or("auto")
                    );
                    abort_if_required_media_source_failed(
                        "microphone",
                        "🎤",
                        active_source.as_deref(),
                        &err,
                    );
                    delay = app_support::next_delay(delay);
                    tokio::time::sleep(delay).await;
                    continue;
                }
                Err(err) => {
                    telemetry.record_disconnect(format!("microphone uplink setup task failed: {err}"));
                    warn!("🎤 microphone uplink setup task failed before StreamMicrophone could start: {err}");
                    abort_if_required_media_source_failed(
                        "microphone",
                        "🎤",
                        active_source.as_deref(),
                        &err,
                    );
                    delay = app_support::next_delay(delay);
                    tokio::time::sleep(delay).await;
                    continue;
                }
            };

            telemetry.record_reconnect_attempt();
            let mut cli = RelayClient::new(ep.clone());
            let queue = crate::uplink_fresh_queue::FreshPacketQueue::new(AUDIO_UPLINK_QUEUE);
            let drop_log = Arc::new(std::sync::Mutex::new(UplinkDropLogLimiter::new(
                "microphone",
                "🎤",
            )));

            let queue_stream = queue.clone();
            let telemetry_stream = telemetry.clone();
            let drop_log_stream = Arc::clone(&drop_log);
            let outbound = async_stream::stream! {
                loop {
                    let next = queue_stream.pop_fresh().await;
                    if next.dropped_stale > 0 {
                        telemetry_stream.record_stale_drop(next.dropped_stale);
                        log_uplink_drop(
                            &drop_log_stream,
                            UplinkDropReason::Stale,
                            next.dropped_stale,
                            next.queue_depth,
                            duration_ms(next.delivery_age),
                        );
                    }
                    if let Some(mut packet) = next.packet {
                        telemetry_stream.record_streamed(
                            queue_depth_u32(next.queue_depth),
                            duration_ms(next.delivery_age),
                        );
                        attach_audio_queue_metadata(
                            &mut packet,
                            next.queue_depth,
                            next.delivery_age,
                        );
                        yield packet;
                        continue;
                    }
                    break;
                }
            };

            match cli.stream_microphone(Request::new(outbound)).await {
                Ok(mut resp) => {
                    let (stop_tx, stop_rx) = std::sync::mpsc::channel::<()>();
                    let mic_clone = mic.clone();
                    let telemetry_thread = telemetry.clone();
                    let queue_thread = queue.clone();
                    let drop_log_thread = Arc::clone(&drop_log);
                    let media_controls_thread = media_controls.clone();
                    let initial_source_thread = initial_source.clone();
                    let active_source_thread = active_source.clone();
                    let mic_worker = std::thread::spawn(move || {
                        let mut paused = false;
                        while stop_rx.try_recv().is_err() {
                            let state = media_controls_thread.refresh();
                            let desired_source = state
                                .microphone_source
                                .resolve(initial_source_thread.as_deref());
                            if pause_when_camera_active && state.camera {
                                tracing::info!(
                                    "🎤 microphone-only uplink yielding to bundled webcam A/V"
                                );
                                break;
                            }
                            if desired_source != active_source_thread {
                                tracing::info!(
                                    from = active_source_thread.as_deref().unwrap_or("auto"),
                                    to = desired_source.as_deref().unwrap_or("auto"),
                                    "🎤 microphone source changed; restarting live uplink pipeline"
                                );
                                break;
                            }
                            if !state.microphone {
                                if !paused {
                                    telemetry_thread.record_enabled(false);
                                    tracing::info!("🎤 microphone uplink soft-paused");
                                    paused = true;
                                }
                                std::thread::sleep(Duration::from_millis(20));
                                continue;
                            }
                            if paused {
                                telemetry_thread.record_enabled(true);
                                tracing::info!("🎤 microphone uplink resumed");
                                paused = false;
                            }
                            if let Some(mut pkt) = mic_clone.pull() {
                                trace!("🎤📤 cli {} bytes → gRPC", pkt.data.len());
                                let enqueue_age = stamp_audio_timing_metadata_at_enqueue(&mut pkt);
                                let stats = queue_thread.push(pkt, enqueue_age);
                                if stats.dropped_queue_full > 0 {
                                    telemetry_thread.record_queue_full_drop(stats.dropped_queue_full);
                                    log_uplink_drop(
                                        &drop_log_thread,
                                        UplinkDropReason::QueueFull,
                                        stats.dropped_queue_full,
                                        stats.queue_depth,
                                        duration_ms(enqueue_age),
                                    );
                                }
                                telemetry_thread.record_enqueue(
                                    queue_depth_u32(stats.queue_depth),
                                    duration_ms(enqueue_age),
                                    0.0,
                                );
                            }
                        }
                    });
                    delay = Duration::from_secs(1);
                    telemetry.record_connected();
                    while resp.get_mut().message().await.transpose().is_some() {}
                    telemetry.record_disconnect("microphone uplink stream ended");
                    queue.close();
                    let _ = stop_tx.send(());
                    let _ = mic_worker.join();
                }
                Err(e) => {
                    telemetry.record_disconnect(format!("microphone uplink connect failed: {e}"));
                    if FAIL_CNT.fetch_add(1, Ordering::Relaxed) == 0 {
                        warn!("❌🎤 connect failed: {e}");
                        warn!("⚠️🎤 further microphone‑stream failures will be logged at DEBUG");
                    } else {
                        debug!("❌🎤 reconnect failed: {e}");
                    }
                    delay = app_support::next_delay(delay);
                }
            }

            queue.close();
            tokio::time::sleep(delay).await;
        }
    }
}
295
client/src/app/uplink_media/webcam_media_loop.rs
Normal file
@@ -0,0 +1,295 @@
|
||||
impl LesavkaClientApp {
|
||||
/*──────────────── bundled webcam + mic stream ─────────────────*/
|
||||
#[cfg(not(coverage))]
|
||||
#[allow(clippy::too_many_arguments)]
|
||||
/// Keeps `webcam_media_loop` explicit because it sits on the live uplink path, where stale media must be dropped instead of queued into latency.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
async fn webcam_media_loop(
|
||||
ep: Channel,
|
||||
initial_camera_source: Option<String>,
|
||||
initial_camera_profile: Option<String>,
|
||||
initial_microphone_source: Option<String>,
|
||||
camera_cfg: Option<crate::input::camera::CameraConfig>,
|
||||
camera_telemetry: crate::uplink_telemetry::UplinkTelemetryHandle,
|
||||
microphone_telemetry: crate::uplink_telemetry::UplinkTelemetryHandle,
|
||||
media_controls: crate::live_media_control::LiveMediaControls,
|
||||
) {
|
||||
let mut delay = Duration::from_secs(1);
|
||||
static FAIL_CNT: AtomicUsize = AtomicUsize::new(0);
|
||||
|
||||
loop {
|
||||
let state = media_controls.refresh();
|
||||
let camera_requested = state.camera;
|
||||
if !camera_requested {
|
||||
camera_telemetry.record_enabled(false);
|
||||
tokio::time::sleep(Duration::from_millis(100)).await;
|
||||
continue;
|
||||
}
|
||||
|
||||
let active_camera_source = state.camera_source.resolve(initial_camera_source.as_deref());
|
||||
let active_camera_profile =
|
||||
state.camera_profile.resolve(initial_camera_profile.as_deref());
|
||||
let active_microphone_source = state
|
||||
.microphone_source
|
||||
.resolve(initial_microphone_source.as_deref());
|
||||
let capture_profile = active_camera_profile
|
||||
.as_deref()
|
||||
.and_then(parse_camera_profile_id);
|
||||
let use_default_microphone = matches!(
|
||||
state.microphone_source,
|
||||
crate::live_media_control::MediaDeviceChoice::Auto
|
||||
) && active_microphone_source.is_none();
|
||||
let setup_camera_source = active_camera_source.clone();
|
||||
let setup_microphone_source = active_microphone_source.clone();
|
||||
|
||||
let setup = tokio::task::spawn_blocking(move || {
|
||||
let microphone = if use_default_microphone {
|
||||
MicrophoneCapture::new_default_source()
|
||||
} else {
|
||||
MicrophoneCapture::new_with_source(setup_microphone_source.as_deref())
|
||||
}?;
|
||||
let camera = if camera_requested {
|
||||
Some(CameraCapture::new_with_capture_profile(
|
||||
setup_camera_source.as_deref(),
|
||||
camera_cfg,
|
||||
capture_profile,
|
||||
)?)
|
||||
} else {
|
||||
None
|
||||
};
|
||||
Ok::<_, anyhow::Error>((camera.map(Arc::new), Arc::new(microphone)))
|
||||
})
|
||||
.await;
|
||||
|
||||
let (camera, microphone) = match setup {
|
||||
Ok(Ok(captures)) => captures,
|
||||
Ok(Err(err)) => {
|
||||
camera_telemetry.record_disconnect(format!(
|
||||
"bundled webcam media setup failed: {err:#}"
|
||||
));
|
||||
microphone_telemetry.record_disconnect(format!(
|
||||
"bundled webcam media setup failed: {err:#}"
|
||||
));
|
||||
warn!(
|
||||
"📦 bundled webcam media setup failed for camera={:?} mic={:?}: {err:#}",
|
||||
active_camera_source.as_deref().unwrap_or("auto"),
|
||||
active_microphone_source.as_deref().unwrap_or("auto")
|
||||
);
|
||||
if camera_requested {
|
||||
abort_if_required_media_source_failed(
|
||||
"camera",
|
||||
"📸",
|
||||
active_camera_source.as_deref(),
|
||||
&err,
|
||||
);
|
||||
}
|
||||
abort_if_required_media_source_failed(
|
||||
"microphone",
|
||||
"🎤",
|
||||
active_microphone_source.as_deref(),
|
||||
&err,
|
||||
);
|
||||
delay = app_support::next_delay(delay);
|
||||
tokio::time::sleep(delay).await;
|
||||
continue;
|
||||
}
|
||||
Err(err) => {
|
||||
camera_telemetry.record_disconnect(format!(
|
||||
"bundled webcam media setup task failed: {err}"
|
||||
));
|
||||
microphone_telemetry.record_disconnect(format!(
|
||||
"bundled webcam media setup task failed: {err}"
|
||||
));
|
||||
warn!("📦 bundled webcam media setup task failed: {err}");
|
||||
delay = app_support::next_delay(delay);
|
||||
tokio::time::sleep(delay).await;
|
||||
continue;
|
||||
}
|
||||
};
|
||||
|
||||
        camera_telemetry.record_reconnect_attempt();
        microphone_telemetry.record_reconnect_attempt();
        let mut cli = RelayClient::new(ep.clone());
        let queue: crate::uplink_fresh_queue::FreshPacketQueue<UpstreamMediaBundle> =
            crate::uplink_fresh_queue::FreshPacketQueue::new(BUNDLED_MEDIA_UPLINK_QUEUE);
        let drop_log = Arc::new(std::sync::Mutex::new(UplinkDropLogLimiter::new(
            "bundled-webcam-media",
            "📦",
        )));

        let queue_stream = queue.clone();
        let camera_telemetry_stream = camera_telemetry.clone();
        let microphone_telemetry_stream = microphone_telemetry.clone();
        let drop_log_stream = Arc::clone(&drop_log);
        let outbound = async_stream::stream! {
            loop {
                let next = queue_stream.pop_fresh().await;
                if next.dropped_stale > 0 {
                    camera_telemetry_stream.record_stale_drop(next.dropped_stale);
                    microphone_telemetry_stream.record_stale_drop(next.dropped_stale);
                    log_uplink_drop(
                        &drop_log_stream,
                        UplinkDropReason::Stale,
                        next.dropped_stale,
                        next.queue_depth,
                        duration_ms(next.delivery_age),
                    );
                }
                if let Some(mut bundle) = next.packet {
                    let queue_depth = queue_depth_u32(next.queue_depth);
                    let delivery_age_ms = duration_ms(next.delivery_age);
                    if bundle.video.is_some() {
                        camera_telemetry_stream.record_streamed(
                            queue_depth,
                            delivery_age_ms,
                        );
                    }
                    if !bundle.audio.is_empty() {
                        microphone_telemetry_stream.record_streamed(
                            queue_depth,
                            delivery_age_ms,
                        );
                    }
                    attach_bundle_queue_metadata(&mut bundle, next.queue_depth, next.delivery_age);
                    yield bundle;
                    continue;
                }
                break;
            }
        };

        match cli.stream_webcam_media(Request::new(outbound)).await {
            Ok(mut resp) => {
                let stop = Arc::new(AtomicBool::new(false));
                let (event_tx, event_rx) = std::sync::mpsc::sync_channel::<BundledCaptureEvent>(
                    BUNDLED_CAPTURE_EVENT_CHANNEL_CAPACITY,
                );
                let camera_worker = camera.as_ref().map(|camera| {
                    let camera = Arc::clone(camera);
                    let stop = Arc::clone(&stop);
                    let event_tx = event_tx.clone();
                    let media_controls = media_controls.clone();
                    let initial_camera_source = initial_camera_source.clone();
                    let initial_camera_profile = initial_camera_profile.clone();
                    let active_camera_source = active_camera_source.clone();
                    let active_camera_profile = active_camera_profile.clone();
                    std::thread::spawn(move || {
                        while !stop.load(Ordering::Relaxed) {
                            let state = media_controls.refresh();
                            let desired_source =
                                state.camera_source.resolve(initial_camera_source.as_deref());
                            let desired_profile =
                                state.camera_profile.resolve(initial_camera_profile.as_deref());
                            if !state.camera
                                || desired_source != active_camera_source
                                || desired_profile != active_camera_profile
                            {
                                stop.store(true, Ordering::Relaxed);
                                let _ = event_tx.try_send(BundledCaptureEvent::Restart);
                                break;
                            }
                            if let Some(mut pkt) = camera.pull() {
                                let _ = stamp_video_timing_metadata_at_enqueue(&mut pkt);
                                match event_tx.try_send(BundledCaptureEvent::Video(pkt)) {
                                    Ok(()) => {}
                                    Err(std::sync::mpsc::TrySendError::Full(_)) => continue,
                                    Err(std::sync::mpsc::TrySendError::Disconnected(_)) => break,
                                }
                            }
                        }
                    })
                });
                let microphone_worker = {
                    let microphone = Arc::clone(&microphone);
                    let stop = Arc::clone(&stop);
                    let event_tx = event_tx.clone();
                    let media_controls = media_controls.clone();
                    let initial_microphone_source = initial_microphone_source.clone();
                    let active_microphone_source = active_microphone_source.clone();
                    let active_camera_requested = camera_requested;
                    std::thread::spawn(move || {
                        while !stop.load(Ordering::Relaxed) {
                            let state = media_controls.refresh();
                            let desired_source = state
                                .microphone_source
                                .resolve(initial_microphone_source.as_deref());
                            if state.camera != active_camera_requested
                                || !(state.microphone || state.camera)
                                || desired_source != active_microphone_source
                            {
                                stop.store(true, Ordering::Relaxed);
                                let _ = event_tx.try_send(BundledCaptureEvent::Restart);
                                break;
                            }
                            if let Some(mut pkt) = microphone.pull() {
                                let _ = stamp_audio_timing_metadata_at_enqueue(&mut pkt);
                                match event_tx.try_send(BundledCaptureEvent::Audio(pkt)) {
                                    Ok(()) => {}
                                    Err(std::sync::mpsc::TrySendError::Full(_)) => continue,
                                    Err(std::sync::mpsc::TrySendError::Disconnected(_)) => break,
                                }
                            }
                        }
                    })
                };
                drop(event_tx);

                let bundle_worker = {
                    let stop = Arc::clone(&stop);
                    let queue = queue.clone();
                    let camera_telemetry = camera_telemetry.clone();
                    let microphone_telemetry = microphone_telemetry.clone();
                    let drop_log = Arc::clone(&drop_log);
                    std::thread::spawn(move || {
                        bundle_captured_media(
                            event_rx,
                            stop,
                            queue,
                            camera.is_some(),
                            camera_telemetry,
                            microphone_telemetry,
                            drop_log,
                        );
                    })
                };

                delay = Duration::from_secs(1);
                camera_telemetry.record_connected();
                microphone_telemetry.record_connected();
                while resp.get_mut().message().await.transpose().is_some() {}
                camera_telemetry.record_disconnect("bundled webcam media stream ended");
                microphone_telemetry.record_disconnect("bundled webcam media stream ended");
                stop.store(true, Ordering::Relaxed);
                queue.close();
                if let Some(worker) = camera_worker {
                    let _ = worker.join();
                }
                let _ = microphone_worker.join();
                let _ = bundle_worker.join();
            }
            Err(e) if e.code() == tonic::Code::Unimplemented => {
                camera_telemetry.record_disconnect("bundled webcam media unavailable on server");
                microphone_telemetry
                    .record_disconnect("bundled webcam media unavailable on server");
                warn!("📦 server does not support bundled webcam media – retrying");
                delay = app_support::next_delay(delay);
            }
            Err(e) => {
                camera_telemetry
                    .record_disconnect(format!("bundled webcam media connect failed: {e}"));
                microphone_telemetry
                    .record_disconnect(format!("bundled webcam media connect failed: {e}"));
                if FAIL_CNT.fetch_add(1, Ordering::Relaxed) == 0 {
                    warn!("❌📦 bundled webcam media connect failed: {e}");
                } else {
                    debug!("❌📦 bundled webcam media reconnect failed: {e}");
                }
                delay = app_support::next_delay(delay);
            }
        }

        queue.close();
        tokio::time::sleep(delay).await;
    }
}
}
@@ -100,6 +100,8 @@ fn usage() -> &'static str {
    "Usage: lesavka-relayctl [--server http://HOST:50051] <status|version|upstream-sync|output-delay-probe [duration_s warmup_s period_ms width_ms codes audio_delay_us video_delay_us]|calibration|calibrate <audio_delta_us> <video_delta_us> [note]|calibration-save-default|calibration-restore-default|calibration-restore-factory|auto|on|off|recover-usb|recover-uac|recover-uvc|reset-usb>"
}

/// Keeps `parse_args_outcome_from` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_args_outcome_from<I, S>(args: I) -> Result<ParseOutcome>
where
    I: IntoIterator<Item = S>,
@@ -150,6 +152,8 @@ where
    }))
}

/// Keeps `parse_command_args` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_command_args(command: CommandKind, args: Vec<String>) -> Result<ParsedCommandArgs> {
    if command == CommandKind::OutputDelayProbe {
        return parse_output_delay_probe_args(args);
@@ -184,6 +188,8 @@ fn parse_command_args(command: CommandKind, args: Vec<String>) -> Result<ParsedC
    })
}

/// Keeps `parse_output_delay_probe_args` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_output_delay_probe_args(args: Vec<String>) -> Result<ParsedCommandArgs> {
    if args.len() > 7 {
        bail!(
@@ -224,6 +230,8 @@ fn parse_output_delay_probe_args(args: Vec<String>) -> Result<ParsedCommandArgs>
}

#[cfg(test)]
/// Keeps `parse_args_from` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_args_from<I, S>(args: I) -> Result<Config>
where
    I: IntoIterator<Item = S>,
@@ -239,6 +247,8 @@ fn parse_args() -> Result<ParseOutcome> {
    parse_args_outcome_from(std::env::args().skip(1))
}

/// Keeps `capture_power_request` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn capture_power_request(command: CommandKind) -> Option<SetCapturePowerRequest> {
    let (enabled, command) = match command {
        CommandKind::Status
@@ -276,6 +286,8 @@ async fn connect(server_addr: &str) -> Result<RelayClient<Channel>> {
}

#[cfg(not(coverage))]
/// Keeps `get_server_capabilities` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
async fn get_server_capabilities(server_addr: &str) -> Result<HandshakeSet> {
    let channel = relay_transport::endpoint(server_addr)?
        .tcp_nodelay(true)
@@ -308,6 +320,8 @@ fn print_state(state: lesavka_common::lesavka::CapturePowerState) {
    println!("detail={}", state.detail);
}

/// Keeps `print_versions` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn print_versions(server_addr: &str, caps: &HandshakeSet) {
    let server_version = if caps.server_version.is_empty() {
        "unknown"
@@ -332,166 +346,9 @@ fn print_versions(server_addr: &str, caps: &HandshakeSet) {
    println!("server_bundled_webcam_media={}", caps.bundled_webcam_media);
}

fn print_upstream_sync(state: lesavka_common::lesavka::UpstreamSyncState) {
    println!("planner_session_id={}", state.session_id);
    println!("planner_phase={}", state.phase);
    println!(
        "planner_live_lag_ms={}",
        state
            .live_lag_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_skew_ms={}",
        state
            .planner_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!("planner_stale_audio_drops={}", state.stale_audio_drops);
    println!("planner_stale_video_drops={}", state.stale_video_drops);
    println!("planner_skew_video_drops={}", state.skew_video_drops);
    println!("planner_freshness_reanchors={}", state.freshness_reanchors);
    println!("planner_startup_timeouts={}", state.startup_timeouts);
    println!("planner_video_freezes={}", state.video_freezes);
    println!(
        "planner_client_capture_skew_ms={}",
        state
            .client_capture_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_send_skew_ms={}",
        state
            .client_send_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_server_receive_skew_ms={}",
        state
            .server_receive_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_client_queue_age_ms={}",
        state
            .camera_client_queue_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_client_queue_age_ms={}",
        state
            .microphone_client_queue_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_server_receive_age_ms={}",
        state
            .camera_server_receive_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_server_receive_age_ms={}",
        state
            .microphone_server_receive_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_capture_abs_skew_p95_ms={}",
        state
            .client_capture_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_send_abs_skew_p95_ms={}",
        state
            .client_send_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_server_receive_abs_skew_p95_ms={}",
        state
            .server_receive_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_client_queue_age_p95_ms={}",
        state
            .camera_client_queue_age_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_client_queue_age_p95_ms={}",
        state
            .microphone_client_queue_age_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_sink_handoff_skew_ms={}",
        state
            .sink_handoff_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_sink_handoff_abs_skew_p95_ms={}",
        state
            .sink_handoff_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_sink_late_ms={}",
        state
            .camera_sink_late_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_sink_late_ms={}",
        state
            .microphone_sink_late_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_sink_late_p95_ms={}",
        state
            .camera_sink_late_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_sink_late_p95_ms={}",
        state
            .microphone_sink_late_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_timing_window_samples={}",
        state.client_timing_window_samples
    );
    println!(
        "planner_sink_handoff_window_samples={}",
        state.sink_handoff_window_samples
    );
    println!("planner_detail={}", state.last_reason);
}

include!("lesavka_relayctl/upstream_sync_formatting.rs");
/// Keeps `print_calibration_state` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn print_calibration_state(state: CalibrationState) {
    println!("calibration_profile={}", state.profile);
    println!(
@@ -524,6 +381,8 @@ fn print_calibration_state(state: CalibrationState) {
    println!("calibration_detail={}", state.detail);
}

/// Keeps `calibration_request_for` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_request_for(config: &Config) -> Option<CalibrationRequest> {
    let action = match config.command {
        CommandKind::CalibrationAdjust => CalibrationAction::AdjustActive,
@@ -553,441 +412,12 @@ fn calibration_request_for(config: &Config) -> Option<CalibrationRequest> {
    })
}

#[cfg(not(coverage))]
#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<()> {
    let config = match parse_args()? {
        ParseOutcome::Run(config) => config,
        ParseOutcome::Help => {
            println!("{}", usage());
            return Ok(());
        }
    };

    if config.command == CommandKind::Version {
        let caps = get_server_capabilities(config.server.as_str()).await?;
        print_versions(config.server.as_str(), &caps);
        return Ok(());
    }

    let mut client = connect(config.server.as_str()).await?;

    if let Some(request) = capture_power_request(config.command) {
        let context = match config.command {
            CommandKind::Auto => "setting capture power to auto",
            CommandKind::On => "forcing capture power on",
            CommandKind::Off => "forcing capture power off",
            CommandKind::Status
            | CommandKind::Version
            | CommandKind::CalibrationStatus
            | CommandKind::CalibrationAdjust
            | CommandKind::CalibrationRestoreDefault
            | CommandKind::CalibrationRestoreFactory
            | CommandKind::CalibrationSaveDefault
            | CommandKind::RecoverUsb
            | CommandKind::RecoverUac
            | CommandKind::RecoverUvc
            | CommandKind::ResetUsb
            | CommandKind::UpstreamSync
            | CommandKind::OutputDelayProbe => unreachable!(),
        };
        let reply = client
            .set_capture_power(Request::new(request))
            .await
            .context(context)?
            .into_inner();
        print_state(reply);
        return Ok(());
    }

    if let Some(request) = calibration_request_for(&config) {
        let reply = client
            .calibrate(Request::new(request))
            .await
            .context("applying upstream A/V calibration")?
            .into_inner();
        print_calibration_state(reply);
        return Ok(());
    }

    if config.command == CommandKind::OutputDelayProbe {
        let request = OutputDelayProbeRequest {
            duration_seconds: config.probe_duration_seconds,
            warmup_seconds: config.probe_warmup_seconds,
            pulse_period_ms: config.probe_pulse_period_ms,
            pulse_width_ms: config.probe_pulse_width_ms,
            event_width_codes: config.probe_event_width_codes.clone(),
            audio_delay_us: config.probe_audio_delay_us,
            video_delay_us: config.probe_video_delay_us,
        };
        let mut stream = client
            .run_output_delay_probe(Request::new(request))
            .await
            .context("running server-generated UVC/UAC output-delay probe")?
            .into_inner();
        while let Some(reply) = stream
            .message()
            .await
            .context("reading output-delay probe reply")?
        {
            println!("ok={}", reply.ok);
            println!("detail={}", reply.detail);
            if !reply.server_timeline_json.trim().is_empty() {
                println!("server_timeline_json={}", reply.server_timeline_json);
            }
        }
        return Ok(());
    }

    let reply = match config.command {
        CommandKind::Status => client
            .get_capture_power(Request::new(Empty {}))
            .await
            .context("querying capture power state")?
            .into_inner(),
        CommandKind::RecoverUsb => {
            let reply = client
                .recover_usb(Request::new(Empty {}))
                .await
                .context("requesting soft USB recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::RecoverUac => {
            let reply = client
                .recover_uac(Request::new(Empty {}))
                .await
                .context("requesting soft UAC recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::RecoverUvc => {
            let reply = client
                .recover_uvc(Request::new(Empty {}))
                .await
                .context("requesting soft UVC recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::ResetUsb => {
            let reply = client
                .reset_usb(Request::new(Empty {}))
                .await
                .context("forcing USB gadget recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::UpstreamSync => {
            let reply = client
                .get_upstream_sync(Request::new(Empty {}))
                .await
                .context("querying upstream sync planner state")?
                .into_inner();
            print_upstream_sync(reply);
            return Ok(());
        }
        CommandKind::CalibrationStatus => {
            let reply = client
                .get_calibration(Request::new(Empty {}))
                .await
                .context("querying upstream A/V calibration")?
                .into_inner();
            print_calibration_state(reply);
            return Ok(());
        }
        CommandKind::Version | CommandKind::Auto | CommandKind::On | CommandKind::Off => {
            unreachable!()
        }
        CommandKind::OutputDelayProbe => unreachable!(),
        CommandKind::CalibrationAdjust
        | CommandKind::CalibrationRestoreDefault
        | CommandKind::CalibrationRestoreFactory
        | CommandKind::CalibrationSaveDefault => unreachable!(),
    };

    print_state(reply);
    Ok(())
}

include!("lesavka_relayctl/command_dispatch.rs");
#[cfg(coverage)]
fn main() {
    let _ = parse_args as fn() -> Result<ParseOutcome>;
}

#[cfg(test)]
mod tests {
    use super::{
        CalibrationAction, CapturePowerCommand, CommandKind, Config, ParseOutcome,
        calibration_request_for, capture_power_request, parse_args_from, parse_args_outcome_from,
    };
    use lesavka_common::lesavka::CapturePowerState;

    #[test]
    /// Verifies safe recovery commands stay separate from explicit hard reset.
    fn command_aliases_parse_to_stable_actions() {
        assert_eq!(CommandKind::parse("status"), Some(CommandKind::Status));
        assert_eq!(CommandKind::parse("get"), Some(CommandKind::Status));
        assert_eq!(CommandKind::parse("version"), Some(CommandKind::Version));
        assert_eq!(CommandKind::parse("versions"), Some(CommandKind::Version));
        assert_eq!(
            CommandKind::parse("calibration"),
            Some(CommandKind::CalibrationStatus)
        );
        assert_eq!(
            CommandKind::parse("calibrate"),
            Some(CommandKind::CalibrationAdjust)
        );
        assert_eq!(
            CommandKind::parse("calibration-restore-default"),
            Some(CommandKind::CalibrationRestoreDefault)
        );
        assert_eq!(
            CommandKind::parse("calibration-restore-factory"),
            Some(CommandKind::CalibrationRestoreFactory)
        );
        assert_eq!(
            CommandKind::parse("calibration-save-default"),
            Some(CommandKind::CalibrationSaveDefault)
        );
        assert_eq!(
            CommandKind::parse("upstream-sync"),
            Some(CommandKind::UpstreamSync)
        );
        assert_eq!(CommandKind::parse("sync"), Some(CommandKind::UpstreamSync));
        assert_eq!(
            CommandKind::parse("output-delay-probe"),
            Some(CommandKind::OutputDelayProbe)
        );
        assert_eq!(
            CommandKind::parse("probe-output-delay"),
            Some(CommandKind::OutputDelayProbe)
        );
        assert_eq!(CommandKind::parse("force-on"), Some(CommandKind::On));
        assert_eq!(CommandKind::parse("force-off"), Some(CommandKind::Off));
        assert_eq!(
            CommandKind::parse("recover-usb"),
            Some(CommandKind::RecoverUsb)
        );
        assert_eq!(
            CommandKind::parse("recover-uac"),
            Some(CommandKind::RecoverUac)
        );
        assert_eq!(
            CommandKind::parse("recover-uvc"),
            Some(CommandKind::RecoverUvc)
        );
        assert_eq!(
            CommandKind::parse("hard-reset-usb"),
            Some(CommandKind::ResetUsb)
        );
        assert_eq!(CommandKind::parse("wat"), None);
    }

    #[test]
    fn parse_args_defaults_to_local_status() {
        let config = parse_args_from(std::iter::empty::<&str>()).expect("default config");
        assert_eq!(config.server, "http://127.0.0.1:50051");
        assert_eq!(config.command, CommandKind::Status);
        assert_eq!(config.audio_delta_us, 0);
        assert_eq!(config.video_delta_us, 0);
        assert!(config.note.is_empty());
    }

    #[test]
    fn parse_args_accepts_server_and_command() {
        let config =
            parse_args_from(["--server", " http://lab:50051 ", "upstream-sync"]).expect("config");
        assert_eq!(config.server, "http://lab:50051");
        assert_eq!(config.command, CommandKind::UpstreamSync);
    }

    #[test]
    fn parse_args_accepts_output_delay_probe_config() {
        let config = parse_args_from([
            "--server",
            "http://lab:50051",
            "output-delay-probe",
            "20",
            "4",
            "1000",
            "120",
            "1,2,3,4",
            "0",
            "157712",
        ])
        .expect("probe config");

        assert_eq!(config.command, CommandKind::OutputDelayProbe);
        assert_eq!(config.probe_duration_seconds, 20);
        assert_eq!(config.probe_warmup_seconds, 4);
        assert_eq!(config.probe_pulse_period_ms, 1000);
        assert_eq!(config.probe_pulse_width_ms, 120);
        assert_eq!(config.probe_event_width_codes, "1,2,3,4");
        assert_eq!(config.probe_audio_delay_us, 0);
        assert_eq!(config.probe_video_delay_us, 157_712);
    }

    #[test]
    fn parse_args_accepts_calibration_adjustment() {
        let config = parse_args_from([
            "--server",
            "http://lab:50051",
            "calibrate",
            "0",
            "71600",
            "probe",
            "median",
        ])
        .expect("calibration config");
        assert_eq!(config.command, CommandKind::CalibrationAdjust);
        assert_eq!(config.audio_delta_us, 0);
        assert_eq!(config.video_delta_us, 71_600);
        assert_eq!(config.note, "probe median");
    }

    #[test]
    fn parse_args_rejects_bad_inputs() {
        assert!(parse_args_from(["--server"]).is_err());
        assert!(parse_args_from(["nope"]).is_err());
        assert!(parse_args_from(["status", "extra"]).is_err());
        assert!(parse_args_from(["calibrate"]).is_err());
        assert!(parse_args_from(["calibrate", "0", "not-int"]).is_err());
        assert!(
            parse_args_from([
                "output-delay-probe",
                "1",
                "2",
                "3",
                "4",
                "1",
                "0",
                "0",
                "extra"
            ])
            .is_err()
        );
    }

    #[test]
    fn parse_args_reports_help_without_exiting_test_process() {
        assert_eq!(
            parse_args_outcome_from(["--help"]).unwrap(),
            ParseOutcome::Help
        );
        assert_eq!(parse_args_outcome_from(["-h"]).unwrap(), ParseOutcome::Help);
        assert!(parse_args_from(["--help"]).is_err());
    }

    #[test]
    fn parse_args_runtime_wrapper_is_non_panicking_under_tests() {
        let _ = super::parse_args();
    }

    #[cfg(coverage)]
    #[test]
    fn coverage_main_references_runtime_parser() {
        super::main();
    }

    #[cfg(coverage)]
    #[tokio::test(flavor = "current_thread")]
    async fn coverage_connect_uses_lazy_channel_after_endpoint_validation() {
        let client = super::connect("http://127.0.0.1:1")
            .await
            .expect("coverage lazy channel");
        drop(client);
    }

    #[test]
    /// Keeps status/read commands from accidentally mutating capture power.
    fn mutating_commands_map_to_capture_power_requests() {
        let auto = capture_power_request(CommandKind::Auto).expect("auto request");
        assert!(!auto.enabled);
        assert_eq!(auto.command, CapturePowerCommand::Auto as i32);

        let on = capture_power_request(CommandKind::On).expect("on request");
        assert!(on.enabled);
        assert_eq!(on.command, CapturePowerCommand::ForceOn as i32);

        let off = capture_power_request(CommandKind::Off).expect("off request");
        assert!(!off.enabled);
        assert_eq!(off.command, CapturePowerCommand::ForceOff as i32);

        assert!(capture_power_request(CommandKind::Status).is_none());
        assert!(capture_power_request(CommandKind::Version).is_none());
        assert!(capture_power_request(CommandKind::CalibrationStatus).is_none());
        assert!(capture_power_request(CommandKind::CalibrationAdjust).is_none());
        assert!(capture_power_request(CommandKind::CalibrationRestoreDefault).is_none());
        assert!(capture_power_request(CommandKind::CalibrationRestoreFactory).is_none());
        assert!(capture_power_request(CommandKind::CalibrationSaveDefault).is_none());
        assert!(capture_power_request(CommandKind::RecoverUsb).is_none());
        assert!(capture_power_request(CommandKind::RecoverUac).is_none());
        assert!(capture_power_request(CommandKind::RecoverUvc).is_none());
        assert!(capture_power_request(CommandKind::ResetUsb).is_none());
        assert!(capture_power_request(CommandKind::UpstreamSync).is_none());
        assert!(capture_power_request(CommandKind::OutputDelayProbe).is_none());
    }

    #[test]
    fn print_state_accepts_full_capture_power_payload() {
        super::print_state(CapturePowerState {
            available: true,
            enabled: false,
            mode: "auto".to_string(),
            detected_devices: 2,
            active_leases: 1,
            unit: "lesavka-capture-power.service".to_string(),
            detail: "ready".to_string(),
        });
    }

    #[test]
    fn print_calibration_accepts_full_payload() {
        super::print_calibration_state(lesavka_common::lesavka::CalibrationState {
            profile: "mjpeg".to_string(),
            factory_audio_offset_us: 0,
            factory_video_offset_us: 1_090_000,
            default_audio_offset_us: 0,
            default_video_offset_us: 1_090_000,
            active_audio_offset_us: 0,
            active_video_offset_us: 1_161_600,
            source: "manual".to_string(),
            confidence: "manual".to_string(),
            updated_at: "2026-05-02T00:00:00Z".to_string(),
            detail: "probe nudge".to_string(),
        });
    }

    #[test]
    fn calibration_requests_are_only_built_for_calibration_mutations() {
        let config = Config {
            server: "http://127.0.0.1:50051".to_string(),
            command: CommandKind::CalibrationAdjust,
            audio_delta_us: 0,
            video_delta_us: 71_600,
            note: "probe".to_string(),
            probe_duration_seconds: 0,
            probe_warmup_seconds: 0,
            probe_pulse_period_ms: 0,
            probe_pulse_width_ms: 0,
            probe_event_width_codes: String::new(),
            probe_audio_delay_us: 0,
            probe_video_delay_us: 0,
        };
        let request = calibration_request_for(&config).expect("request");
        assert_eq!(request.action, CalibrationAction::AdjustActive as i32);
        assert_eq!(request.audio_delta_us, 0);
        assert_eq!(request.video_delta_us, 71_600);
        assert_eq!(request.note, "probe");

        let status = Config {
            command: CommandKind::CalibrationStatus,
            ..config
        };
        assert!(calibration_request_for(&status).is_none());
    }
}
#[path = "tests/lesavka_relayctl.rs"]
mod tests;

@@ -38,6 +38,8 @@ struct SignatureCoverage {
}

#[cfg(not(coverage))]
/// Keeps `main` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn main() -> Result<()> {
    let args = parse_args(std::env::args().skip(1))?;
    let report = analyze_capture(&args.capture_path, &args.options)
@@ -85,6 +87,8 @@ struct AnalyzeArgs {
}

#[cfg(any(not(coverage), test))]
/// Keeps `parse_args` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_args<I, S>(args: I) -> Result<AnalyzeArgs>
where
    I: IntoIterator<Item = S>,
@@ -194,6 +198,8 @@ where
}

#[cfg(any(not(coverage), test))]
/// Keeps `parse_event_width_codes` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_event_width_codes(raw: &str) -> Result<Vec<u32>> {
    let codes = raw
        .split(',')
@@ -218,6 +224,8 @@ fn parse_event_width_codes(raw: &str) -> Result<Vec<u32>> {
}

#[cfg(any(not(coverage), test))]
/// Keeps `parse_analysis_window` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_analysis_window(raw: &str, options: &mut SyncAnalysisOptions) -> Result<()> {
    let Some((start, end)) = raw.split_once(':') else {
        bail!("--analysis-window-s requires START:END seconds");
@@ -236,6 +244,8 @@ fn parse_analysis_window(raw: &str, options: &mut SyncAnalysisOptions) -> Result
}

#[cfg(any(not(coverage), test))]
/// Keeps `parse_analysis_seconds` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_analysis_seconds(raw: &str, label: &str) -> Result<f64> {
    let value = raw
        .trim()
@@ -247,99 +257,10 @@ fn parse_analysis_seconds(raw: &str, label: &str) -> Result<f64> {
    Ok(value)
}

include!("lesavka_sync_analyze/human_report.rs");
#[cfg(not(coverage))]
fn format_human_report(
    capture_path: &std::path::Path,
    report: &SyncAnalysisReport,
    signature_coverage: Option<&SignatureCoverage>,
    calibration: &SyncCalibrationRecommendation,
    verdict: &SyncAnalysisVerdict,
) -> String {
    let first_paired_video = report
        .paired_events
        .first()
        .map(|event| event.video_time_s)
        .unwrap_or(0.0);
    let first_paired_audio = report
        .paired_events
        .first()
        .map(|event| event.audio_time_s)
        .unwrap_or(0.0);
    let unpaired_video = format_onset_list(&unpaired_video_onsets(report));
    let unpaired_audio = format_onset_list(&unpaired_audio_onsets(report));
    let raw_activity_handling = if report.raw_activity_start_is_verdict_relevant() {
        "used as verdict evidence"
    } else {
        "reported only; ignored for verdict/calibration because it disagrees with paired pulses"
    };
    let signature_coverage = format_signature_coverage(signature_coverage);
    format!(
        "\
A/V sync report for {capture}
- verdict: {status} ({passed})
- verdict reason: {reason}
- evidence mode: {evidence_mode}
- p95 abs skew: {p95:.1} ms
- video onsets: {video_events}
- audio onsets: {audio_events}
- paired pulses: {paired_events}
- activity start delta: {activity_start_delta:+.1} ms (audio after video is positive)
- raw first video activity: {raw_video:.3} s
- raw first audio activity: {raw_audio:.3} s
- raw-vs-paired disagreement: {raw_pair_disagreement:.1} ms
- raw activity handling: {raw_activity_handling}
- paired window first video/audio: {paired_video:.3} s / {paired_audio:.3} s
- unpaired video onsets: {unpaired_video}
- unpaired audio onsets: {unpaired_audio}
{signature_coverage}\
- first skew: {first_skew:+.1} ms (audio after video is positive)
- last skew: {last_skew:+.1} ms
- mean skew: {mean_skew:+.1} ms
- median skew: {median_skew:+.1} ms
- max abs skew: {max_abs:.1} ms
- drift: {drift:+.1} ms
- calibration ready: {cal_ready}
- recommended audio offset adjust: {audio_adjust:+} us
- alternative video offset adjust: {video_adjust:+} us
- calibration note: {cal_note}
",
        capture = capture_path.display(),
        status = verdict.status,
        passed = if verdict.passed { "pass" } else { "fail" },
        reason = verdict.reason,
        evidence_mode = if report.coded_events {
            "coded pulses"
        } else {
            "cadence/brightness pulses"
        },
        p95 = verdict.p95_abs_skew_ms,
        video_events = report.video_event_count,
        audio_events = report.audio_event_count,
        paired_events = report.paired_event_count,
        activity_start_delta = report.activity_start_delta_ms,
        raw_video = report.raw_first_video_activity_s,
        raw_audio = report.raw_first_audio_activity_s,
        raw_pair_disagreement = report.activity_start_pair_disagreement_ms().abs(),
        raw_activity_handling = raw_activity_handling,
        paired_video = first_paired_video,
        paired_audio = first_paired_audio,
        unpaired_video = unpaired_video,
        unpaired_audio = unpaired_audio,
        signature_coverage = signature_coverage,
        first_skew = report.first_skew_ms,
        last_skew = report.last_skew_ms,
        mean_skew = report.mean_skew_ms,
        median_skew = report.median_skew_ms,
        max_abs = report.max_abs_skew_ms,
        drift = report.drift_ms,
        cal_ready = calibration.ready,
        audio_adjust = calibration.recommended_audio_offset_adjust_us,
        video_adjust = calibration.recommended_video_offset_adjust_us,
        cal_note = calibration.note,
    )
}

#[cfg(not(coverage))]
/// Keeps `signature_coverage` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn signature_coverage(
    expected_codes: &[u32],
    report: &SyncAnalysisReport,
@@ -385,6 +306,8 @@ fn signature_coverage(
}

#[cfg(not(coverage))]
/// Keeps `format_signature_coverage` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn format_signature_coverage(coverage: Option<&SignatureCoverage>) -> String {
    let Some(coverage) = coverage else {
        return String::new();
@@ -427,6 +350,8 @@ fn unpaired_audio_onsets(report: &SyncAnalysisReport) -> Vec<f64> {
}

#[cfg(not(coverage))]
/// Keeps `unpaired_onsets` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn unpaired_onsets(all_onsets: &[f64], paired_onsets: &[f64]) -> Vec<f64> {
    const SAME_ONSET_EPSILON_S: f64 = 0.000_001;
    all_onsets
@@ -441,6 +366,8 @@ fn unpaired_onsets(all_onsets: &[f64], paired_onsets: &[f64]) -> Vec<f64> {
}

#[cfg(not(coverage))]
/// Keeps `format_onset_list` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn format_onset_list(onsets: &[f64]) -> String {
    const MAX_PRINTED_ONSETS: usize = 12;
    if onsets.is_empty() {
@@ -458,6 +385,8 @@ fn format_onset_list(onsets: &[f64]) -> String {
}

#[cfg(not(coverage))]
/// Keeps `format_usize_list` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn format_usize_list(values: &[usize]) -> String {
    if values.is_empty() {
        return "none".to_string();
@@ -470,6 +399,8 @@ fn format_usize_list(values: &[usize]) -> String {
}

#[cfg(not(coverage))]
/// Keeps `format_u32_list` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn format_u32_list(values: &[u32]) -> String {
    if values.is_empty() {
        return "none".to_string();
@@ -481,251 +412,11 @@ fn format_u32_list(values: &[u32]) -> String {
        .join(", ")
}

#[cfg(not(coverage))]
fn write_report_dir(
    report_dir: &std::path::Path,
    human_report: &str,
    output: &SyncAnalyzeOutput<'_>,
) -> Result<()> {
    std::fs::create_dir_all(report_dir)
        .with_context(|| format!("creating report directory {}", report_dir.display()))?;
    std::fs::write(report_dir.join("report.txt"), human_report)
        .with_context(|| format!("writing {}", report_dir.join("report.txt").display()))?;
    std::fs::write(
        report_dir.join("report.json"),
        serde_json::to_string_pretty(output).context("serializing JSON report")?,
    )
    .with_context(|| format!("writing {}", report_dir.join("report.json").display()))?;
    write_events_csv(&report_dir.join("events.csv"), output.report)?;
    Ok(())
}

#[cfg(not(coverage))]
fn write_events_csv(path: &std::path::Path, report: &SyncAnalysisReport) -> Result<()> {
    let mut csv = String::from(
        "event_id,server_event_id,event_code,video_time_s,audio_time_s,skew_ms,confidence\n",
    );
    for event in &report.paired_events {
        csv.push_str(&format!(
            "{},{},{},{:.9},{:.9},{:.6},{:.6}\n",
            event.event_id,
            optional_usize(event.server_event_id),
            optional_u32(event.event_code),
            event.video_time_s,
            event.audio_time_s,
            event.skew_ms,
            event.confidence
        ));
    }
    std::fs::write(path, csv).with_context(|| format!("writing {}", path.display()))
}

#[cfg(not(coverage))]
fn optional_usize(value: Option<usize>) -> String {
    value.map(|value| value.to_string()).unwrap_or_default()
}

#[cfg(not(coverage))]
fn optional_u32(value: Option<u32>) -> String {
    value.map(|value| value.to_string()).unwrap_or_default()
}
include!("lesavka_sync_analyze/report_output_files.rs");

#[cfg(coverage)]
fn main() {}

#[cfg(test)]
mod tests {
    use super::parse_args;
    use lesavka_client::sync_probe::analyze::{
        SyncAnalysisReport, SyncAnalysisVerdict, SyncCalibrationRecommendation, SyncEventPair,
    };

    #[test]
    fn parse_args_accepts_capture_path_and_json_flag() {
        let args = parse_args(["capture.mkv", "--json"]).expect("args");
        assert_eq!(args.capture_path, std::path::PathBuf::from("capture.mkv"));
        assert!(args.emit_json);
        assert_eq!(args.report_dir, None);
    }

    #[test]
    fn parse_args_accepts_report_dir() {
        let args = parse_args(["capture.mkv", "--report-dir", "/tmp/probe"]).expect("args");
        assert_eq!(args.capture_path, std::path::PathBuf::from("capture.mkv"));
        assert_eq!(
            args.report_dir,
            Some(std::path::PathBuf::from("/tmp/probe"))
        );
    }

    #[test]
    fn parse_args_accepts_event_width_codes() {
        let args = parse_args(["capture.mkv", "--event-width-codes", "1,2,1,3"]).expect("args");
        assert_eq!(args.options.event_width_codes, vec![1, 2, 1, 3]);
    }

    #[test]
    fn parse_args_accepts_analysis_window() {
        let args = parse_args(["capture.mkv", "--analysis-window-s", "8.25:26.5"]).expect("args");
        assert_eq!(args.options.analysis_start_s, Some(8.25));
        assert_eq!(args.options.analysis_end_s, Some(26.5));
    }

    #[test]
    fn parse_args_rejects_extra_positional_arguments() {
        assert!(parse_args(["one.mkv", "two.mkv"]).is_err());
        assert!(parse_args(["one.mkv", "--event-width-codes", ""]).is_err());
        assert!(parse_args(["one.mkv", "--event-width-codes", "0"]).is_err());
        assert!(parse_args(["one.mkv", "--analysis-window-s", "wat:10"]).is_err());
        assert!(parse_args(["one.mkv", "--analysis-start-s", "-1"]).is_err());
    }

    #[test]
    fn parse_args_requires_a_capture_path() {
        let error = parse_args(["--json"]).expect_err("missing capture path should fail");
        assert!(
            error.to_string().contains("capture path is required"),
            "unexpected error: {error:#}"
        );
    }

    #[test]
    fn coverage_main_stub_is_non_panicking() {
        let _ = super::main();
    }

    #[test]
    fn human_report_explains_coded_raw_activity_and_unpaired_onsets() {
        let report = SyncAnalysisReport {
            video_event_count: 3,
            audio_event_count: 3,
            paired_event_count: 1,
            coded_events: true,
            activity_start_delta_ms: -3_620.7,
            raw_first_video_activity_s: 9.361,
            raw_first_audio_activity_s: 5.740,
            first_skew_ms: -188.4,
            last_skew_ms: -188.4,
            mean_skew_ms: -188.4,
            median_skew_ms: -188.4,
            max_abs_skew_ms: 188.4,
            drift_ms: 0.0,
            skews_ms: vec![-188.4],
            video_onsets_s: vec![9.461, 11.420, 13.367],
            audio_onsets_s: vec![9.135, 11.146, 13.135],
            paired_events: vec![SyncEventPair {
                event_id: 0,
                server_event_id: None,
                event_code: None,
                video_time_s: 11.420,
                audio_time_s: 11.146,
                skew_ms: -188.4,
                confidence: 0.62,
            }],
        };
        let calibration = SyncCalibrationRecommendation {
            ready: false,
            recommended_audio_offset_adjust_us: 0,
            recommended_video_offset_adjust_us: 0,
            note: "need more pairs".to_string(),
        };
        let verdict = SyncAnalysisVerdict {
            status: "gross_failure".to_string(),
            passed: false,
            p95_abs_skew_ms: 188.4,
            max_abs_skew_ms: 188.4,
            preferred_p95_abs_skew_ms: 35.0,
            acceptable_p95_abs_skew_ms: 80.0,
            gross_failure_p95_abs_skew_ms: 250.0,
            catastrophic_max_abs_skew_ms: 1_000.0,
            reason: "coded pulse skew is too high".to_string(),
        };

        let text = super::format_human_report(
            std::path::Path::new("/tmp/capture.webm"),
            &report,
            None,
            &calibration,
            &verdict,
        );

        assert!(text.contains("- evidence mode: coded pulses"));
        assert!(text.contains("raw activity handling: reported only"));
        assert!(text.contains("- paired window first video/audio: 11.420 s / 11.146 s"));
        assert!(text.contains("- unpaired video onsets: 9.461s, 13.367s"));
        assert!(text.contains("- unpaired audio onsets: 9.135s, 13.135s"));
    }

    #[test]
    fn signature_coverage_reports_missing_and_unknown_coded_pairs() {
        let report = SyncAnalysisReport {
            video_event_count: 3,
            audio_event_count: 3,
            paired_event_count: 2,
            coded_events: true,
            activity_start_delta_ms: 0.0,
            raw_first_video_activity_s: 1.0,
            raw_first_audio_activity_s: 1.0,
            first_skew_ms: 0.0,
            last_skew_ms: 0.0,
            mean_skew_ms: 0.0,
            median_skew_ms: 0.0,
            max_abs_skew_ms: 0.0,
            drift_ms: 0.0,
            skews_ms: vec![0.0, 0.0],
            video_onsets_s: vec![1.0, 2.0, 3.0],
            audio_onsets_s: vec![1.0, 2.0, 3.0],
            paired_events: vec![
                SyncEventPair {
                    event_id: 0,
                    server_event_id: Some(0),
                    event_code: Some(1),
                    video_time_s: 1.0,
                    audio_time_s: 1.0,
                    skew_ms: 0.0,
                    confidence: 1.0,
                },
                SyncEventPair {
                    event_id: 1,
                    server_event_id: None,
                    event_code: None,
                    video_time_s: 2.0,
                    audio_time_s: 2.0,
                    skew_ms: 0.0,
                    confidence: 0.4,
                },
            ],
        };
        let calibration = SyncCalibrationRecommendation {
            ready: false,
            recommended_audio_offset_adjust_us: 0,
            recommended_video_offset_adjust_us: 0,
            note: "need more pairs".to_string(),
        };
        let verdict = SyncAnalysisVerdict {
            status: "insufficient_data".to_string(),
            passed: false,
            p95_abs_skew_ms: 0.0,
            max_abs_skew_ms: 0.0,
            preferred_p95_abs_skew_ms: 35.0,
            acceptable_p95_abs_skew_ms: 80.0,
            gross_failure_p95_abs_skew_ms: 250.0,
            catastrophic_max_abs_skew_ms: 1_000.0,
            reason: "need more pairs".to_string(),
        };
        let coverage = super::signature_coverage(&[1, 2, 3], &report);
        let text = super::format_human_report(
            std::path::Path::new("/tmp/capture.webm"),
            &report,
            coverage.as_ref(),
            &calibration,
            &verdict,
        );

        assert!(text.contains("- expected coded signatures: 3"));
        assert!(text.contains("- paired coded signatures: 1/3"));
        assert!(text.contains("- missing paired signature ids: 1, 2"));
        assert!(text.contains("- missing paired signature codes: 2, 3"));
        assert!(text.contains("- paired signatures without identity: 1"));
    }
}
#[path = "tests/lesavka_sync_analyze.rs"]
mod tests;

161
client/src/bin/lesavka_relayctl/command_dispatch.rs
Normal file
@@ -0,0 +1,161 @@
#[cfg(not(coverage))]
#[tokio::main(flavor = "current_thread")]
/// Keeps `main` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
async fn main() -> Result<()> {
    let config = match parse_args()? {
        ParseOutcome::Run(config) => config,
        ParseOutcome::Help => {
            println!("{}", usage());
            return Ok(());
        }
    };

    if config.command == CommandKind::Version {
        let caps = get_server_capabilities(config.server.as_str()).await?;
        print_versions(config.server.as_str(), &caps);
        return Ok(());
    }

    let mut client = connect(config.server.as_str()).await?;

    if let Some(request) = capture_power_request(config.command) {
        let context = match config.command {
            CommandKind::Auto => "setting capture power to auto",
            CommandKind::On => "forcing capture power on",
            CommandKind::Off => "forcing capture power off",
            CommandKind::Status
            | CommandKind::Version
            | CommandKind::CalibrationStatus
            | CommandKind::CalibrationAdjust
            | CommandKind::CalibrationRestoreDefault
            | CommandKind::CalibrationRestoreFactory
            | CommandKind::CalibrationSaveDefault
            | CommandKind::RecoverUsb
            | CommandKind::RecoverUac
            | CommandKind::RecoverUvc
            | CommandKind::ResetUsb
            | CommandKind::UpstreamSync
            | CommandKind::OutputDelayProbe => unreachable!(),
        };
        let reply = client
            .set_capture_power(Request::new(request))
            .await
            .context(context)?
            .into_inner();
        print_state(reply);
        return Ok(());
    }

    if let Some(request) = calibration_request_for(&config) {
        let reply = client
            .calibrate(Request::new(request))
            .await
            .context("applying upstream A/V calibration")?
            .into_inner();
        print_calibration_state(reply);
        return Ok(());
    }

    if config.command == CommandKind::OutputDelayProbe {
        let request = OutputDelayProbeRequest {
            duration_seconds: config.probe_duration_seconds,
            warmup_seconds: config.probe_warmup_seconds,
            pulse_period_ms: config.probe_pulse_period_ms,
            pulse_width_ms: config.probe_pulse_width_ms,
            event_width_codes: config.probe_event_width_codes.clone(),
            audio_delay_us: config.probe_audio_delay_us,
            video_delay_us: config.probe_video_delay_us,
        };
        let mut stream = client
            .run_output_delay_probe(Request::new(request))
            .await
            .context("running server-generated UVC/UAC output-delay probe")?
            .into_inner();
        while let Some(reply) = stream
            .message()
            .await
            .context("reading output-delay probe reply")?
        {
            println!("ok={}", reply.ok);
            println!("detail={}", reply.detail);
            if !reply.server_timeline_json.trim().is_empty() {
                println!("server_timeline_json={}", reply.server_timeline_json);
            }
        }
        return Ok(());
    }

    let reply = match config.command {
        CommandKind::Status => client
            .get_capture_power(Request::new(Empty {}))
            .await
            .context("querying capture power state")?
            .into_inner(),
        CommandKind::RecoverUsb => {
            let reply = client
                .recover_usb(Request::new(Empty {}))
                .await
                .context("requesting soft USB recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::RecoverUac => {
            let reply = client
                .recover_uac(Request::new(Empty {}))
                .await
                .context("requesting soft UAC recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::RecoverUvc => {
            let reply = client
                .recover_uvc(Request::new(Empty {}))
                .await
                .context("requesting soft UVC recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::ResetUsb => {
            let reply = client
                .reset_usb(Request::new(Empty {}))
                .await
                .context("forcing USB gadget recovery")?
                .into_inner();
            println!("ok={}", reply.ok);
            return Ok(());
        }
        CommandKind::UpstreamSync => {
            let reply = client
                .get_upstream_sync(Request::new(Empty {}))
                .await
                .context("querying upstream sync planner state")?
                .into_inner();
            print_upstream_sync(reply);
            return Ok(());
        }
        CommandKind::CalibrationStatus => {
            let reply = client
                .get_calibration(Request::new(Empty {}))
                .await
                .context("querying upstream A/V calibration")?
                .into_inner();
            print_calibration_state(reply);
            return Ok(());
        }
        CommandKind::Version | CommandKind::Auto | CommandKind::On | CommandKind::Off => {
            unreachable!()
        }
        CommandKind::OutputDelayProbe => unreachable!(),
        CommandKind::CalibrationAdjust
        | CommandKind::CalibrationRestoreDefault
        | CommandKind::CalibrationRestoreFactory
        | CommandKind::CalibrationSaveDefault => unreachable!(),
    };

    print_state(reply);
    Ok(())
}
161
client/src/bin/lesavka_relayctl/upstream_sync_formatting.rs
Normal file
@@ -0,0 +1,161 @@
/// Keeps `print_upstream_sync` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn print_upstream_sync(state: lesavka_common::lesavka::UpstreamSyncState) {
    println!("planner_session_id={}", state.session_id);
    println!("planner_phase={}", state.phase);
    println!(
        "planner_live_lag_ms={}",
        state
            .live_lag_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_skew_ms={}",
        state
            .planner_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!("planner_stale_audio_drops={}", state.stale_audio_drops);
    println!("planner_stale_video_drops={}", state.stale_video_drops);
    println!("planner_skew_video_drops={}", state.skew_video_drops);
    println!("planner_freshness_reanchors={}", state.freshness_reanchors);
    println!("planner_startup_timeouts={}", state.startup_timeouts);
    println!("planner_video_freezes={}", state.video_freezes);
    println!(
        "planner_client_capture_skew_ms={}",
        state
            .client_capture_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_send_skew_ms={}",
        state
            .client_send_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_server_receive_skew_ms={}",
        state
            .server_receive_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_client_queue_age_ms={}",
        state
            .camera_client_queue_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_client_queue_age_ms={}",
        state
            .microphone_client_queue_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_server_receive_age_ms={}",
        state
            .camera_server_receive_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_server_receive_age_ms={}",
        state
            .microphone_server_receive_age_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_capture_abs_skew_p95_ms={}",
        state
            .client_capture_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_send_abs_skew_p95_ms={}",
        state
            .client_send_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_server_receive_abs_skew_p95_ms={}",
        state
            .server_receive_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_client_queue_age_p95_ms={}",
        state
            .camera_client_queue_age_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_client_queue_age_p95_ms={}",
        state
            .microphone_client_queue_age_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_sink_handoff_skew_ms={}",
        state
            .sink_handoff_skew_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_sink_handoff_abs_skew_p95_ms={}",
        state
            .sink_handoff_abs_skew_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_sink_late_ms={}",
        state
            .camera_sink_late_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_sink_late_ms={}",
        state
            .microphone_sink_late_ms
            .map(|value| format!("{value:+.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_camera_sink_late_p95_ms={}",
        state
            .camera_sink_late_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_microphone_sink_late_p95_ms={}",
        state
            .microphone_sink_late_p95_ms
            .map(|value| format!("{value:.1}"))
            .unwrap_or_else(|| "pending".to_string())
    );
    println!(
        "planner_client_timing_window_samples={}",
        state.client_timing_window_samples
    );
    println!(
        "planner_sink_handoff_window_samples={}",
        state.sink_handoff_window_samples
    );
    println!("planner_detail={}", state.last_reason);
}
93
client/src/bin/lesavka_sync_analyze/human_report.rs
Normal file
@@ -0,0 +1,93 @@
#[cfg(not(coverage))]
/// Keeps `format_human_report` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn format_human_report(
    capture_path: &std::path::Path,
    report: &SyncAnalysisReport,
    signature_coverage: Option<&SignatureCoverage>,
    calibration: &SyncCalibrationRecommendation,
    verdict: &SyncAnalysisVerdict,
) -> String {
    let first_paired_video = report
        .paired_events
        .first()
        .map(|event| event.video_time_s)
        .unwrap_or(0.0);
    let first_paired_audio = report
        .paired_events
        .first()
        .map(|event| event.audio_time_s)
        .unwrap_or(0.0);
    let unpaired_video = format_onset_list(&unpaired_video_onsets(report));
    let unpaired_audio = format_onset_list(&unpaired_audio_onsets(report));
    let raw_activity_handling = if report.raw_activity_start_is_verdict_relevant() {
        "used as verdict evidence"
    } else {
        "reported only; ignored for verdict/calibration because it disagrees with paired pulses"
    };
    let signature_coverage = format_signature_coverage(signature_coverage);
    format!(
        "\
A/V sync report for {capture}
- verdict: {status} ({passed})
- verdict reason: {reason}
- evidence mode: {evidence_mode}
- p95 abs skew: {p95:.1} ms
- video onsets: {video_events}
- audio onsets: {audio_events}
- paired pulses: {paired_events}
- activity start delta: {activity_start_delta:+.1} ms (audio after video is positive)
- raw first video activity: {raw_video:.3} s
- raw first audio activity: {raw_audio:.3} s
- raw-vs-paired disagreement: {raw_pair_disagreement:.1} ms
- raw activity handling: {raw_activity_handling}
- paired window first video/audio: {paired_video:.3} s / {paired_audio:.3} s
- unpaired video onsets: {unpaired_video}
- unpaired audio onsets: {unpaired_audio}
{signature_coverage}\
- first skew: {first_skew:+.1} ms (audio after video is positive)
- last skew: {last_skew:+.1} ms
- mean skew: {mean_skew:+.1} ms
- median skew: {median_skew:+.1} ms
- max abs skew: {max_abs:.1} ms
- drift: {drift:+.1} ms
- calibration ready: {cal_ready}
- recommended audio offset adjust: {audio_adjust:+} us
- alternative video offset adjust: {video_adjust:+} us
- calibration note: {cal_note}
",
        capture = capture_path.display(),
        status = verdict.status,
        passed = if verdict.passed { "pass" } else { "fail" },
        reason = verdict.reason,
        evidence_mode = if report.coded_events {
            "coded pulses"
        } else {
            "cadence/brightness pulses"
        },
        p95 = verdict.p95_abs_skew_ms,
        video_events = report.video_event_count,
        audio_events = report.audio_event_count,
        paired_events = report.paired_event_count,
        activity_start_delta = report.activity_start_delta_ms,
        raw_video = report.raw_first_video_activity_s,
        raw_audio = report.raw_first_audio_activity_s,
        raw_pair_disagreement = report.activity_start_pair_disagreement_ms().abs(),
        raw_activity_handling = raw_activity_handling,
        paired_video = first_paired_video,
        paired_audio = first_paired_audio,
        unpaired_video = unpaired_video,
        unpaired_audio = unpaired_audio,
        signature_coverage = signature_coverage,
        first_skew = report.first_skew_ms,
        last_skew = report.last_skew_ms,
        mean_skew = report.mean_skew_ms,
        median_skew = report.median_skew_ms,
|
||||
max_abs = report.max_abs_skew_ms,
|
||||
drift = report.drift_ms,
|
||||
cal_ready = calibration.ready,
|
||||
audio_adjust = calibration.recommended_audio_offset_adjust_us,
|
||||
video_adjust = calibration.recommended_video_offset_adjust_us,
|
||||
cal_note = calibration.note,
|
||||
)
|
||||
}
|
52 client/src/bin/lesavka_sync_analyze/report_output_files.rs Normal file
@ -0,0 +1,52 @@
#[cfg(not(coverage))]
/// Keeps `write_report_dir` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn write_report_dir(
    report_dir: &std::path::Path,
    human_report: &str,
    output: &SyncAnalyzeOutput<'_>,
) -> Result<()> {
    std::fs::create_dir_all(report_dir)
        .with_context(|| format!("creating report directory {}", report_dir.display()))?;
    std::fs::write(report_dir.join("report.txt"), human_report)
        .with_context(|| format!("writing {}", report_dir.join("report.txt").display()))?;
    std::fs::write(
        report_dir.join("report.json"),
        serde_json::to_string_pretty(output).context("serializing JSON report")?,
    )
    .with_context(|| format!("writing {}", report_dir.join("report.json").display()))?;
    write_events_csv(&report_dir.join("events.csv"), output.report)?;
    Ok(())
}

#[cfg(not(coverage))]
/// Keeps `write_events_csv` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn write_events_csv(path: &std::path::Path, report: &SyncAnalysisReport) -> Result<()> {
    let mut csv = String::from(
        "event_id,server_event_id,event_code,video_time_s,audio_time_s,skew_ms,confidence\n",
    );
    for event in &report.paired_events {
        csv.push_str(&format!(
            "{},{},{},{:.9},{:.9},{:.6},{:.6}\n",
            event.event_id,
            optional_usize(event.server_event_id),
            optional_u32(event.event_code),
            event.video_time_s,
            event.audio_time_s,
            event.skew_ms,
            event.confidence
        ));
    }
    std::fs::write(path, csv).with_context(|| format!("writing {}", path.display()))
}

#[cfg(not(coverage))]
fn optional_usize(value: Option<usize>) -> String {
    value.map(|value| value.to_string()).unwrap_or_default()
}

#[cfg(not(coverage))]
fn optional_u32(value: Option<u32>) -> String {
    value.map(|value| value.to_string()).unwrap_or_default()
}
280 client/src/bin/tests/lesavka_relayctl.rs Normal file
@ -0,0 +1,280 @@
use super::{
    CalibrationAction, CapturePowerCommand, CommandKind, Config, ParseOutcome,
    calibration_request_for, capture_power_request, parse_args_from, parse_args_outcome_from,
};
use lesavka_common::lesavka::CapturePowerState;

#[test]
/// Verifies safe recovery commands stay separate from explicit hard reset.
fn command_aliases_parse_to_stable_actions() {
    assert_eq!(CommandKind::parse("status"), Some(CommandKind::Status));
    assert_eq!(CommandKind::parse("get"), Some(CommandKind::Status));
    assert_eq!(CommandKind::parse("version"), Some(CommandKind::Version));
    assert_eq!(CommandKind::parse("versions"), Some(CommandKind::Version));
    assert_eq!(
        CommandKind::parse("calibration"),
        Some(CommandKind::CalibrationStatus)
    );
    assert_eq!(
        CommandKind::parse("calibrate"),
        Some(CommandKind::CalibrationAdjust)
    );
    assert_eq!(
        CommandKind::parse("calibration-restore-default"),
        Some(CommandKind::CalibrationRestoreDefault)
    );
    assert_eq!(
        CommandKind::parse("calibration-restore-factory"),
        Some(CommandKind::CalibrationRestoreFactory)
    );
    assert_eq!(
        CommandKind::parse("calibration-save-default"),
        Some(CommandKind::CalibrationSaveDefault)
    );
    assert_eq!(
        CommandKind::parse("upstream-sync"),
        Some(CommandKind::UpstreamSync)
    );
    assert_eq!(CommandKind::parse("sync"), Some(CommandKind::UpstreamSync));
    assert_eq!(
        CommandKind::parse("output-delay-probe"),
        Some(CommandKind::OutputDelayProbe)
    );
    assert_eq!(
        CommandKind::parse("probe-output-delay"),
        Some(CommandKind::OutputDelayProbe)
    );
    assert_eq!(CommandKind::parse("force-on"), Some(CommandKind::On));
    assert_eq!(CommandKind::parse("force-off"), Some(CommandKind::Off));
    assert_eq!(
        CommandKind::parse("recover-usb"),
        Some(CommandKind::RecoverUsb)
    );
    assert_eq!(
        CommandKind::parse("recover-uac"),
        Some(CommandKind::RecoverUac)
    );
    assert_eq!(
        CommandKind::parse("recover-uvc"),
        Some(CommandKind::RecoverUvc)
    );
    assert_eq!(
        CommandKind::parse("hard-reset-usb"),
        Some(CommandKind::ResetUsb)
    );
    assert_eq!(CommandKind::parse("wat"), None);
}

#[test]
fn parse_args_defaults_to_local_status() {
    let config = parse_args_from(std::iter::empty::<&str>()).expect("default config");
    assert_eq!(config.server, "http://127.0.0.1:50051");
    assert_eq!(config.command, CommandKind::Status);
    assert_eq!(config.audio_delta_us, 0);
    assert_eq!(config.video_delta_us, 0);
    assert!(config.note.is_empty());
}

#[test]
fn parse_args_accepts_server_and_command() {
    let config =
        parse_args_from(["--server", " http://lab:50051 ", "upstream-sync"]).expect("config");
    assert_eq!(config.server, "http://lab:50051");
    assert_eq!(config.command, CommandKind::UpstreamSync);
}

#[test]
/// Keeps `parse_args_accepts_output_delay_probe_config` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_args_accepts_output_delay_probe_config() {
    let config = parse_args_from([
        "--server",
        "http://lab:50051",
        "output-delay-probe",
        "20",
        "4",
        "1000",
        "120",
        "1,2,3,4",
        "0",
        "157712",
    ])
    .expect("probe config");

    assert_eq!(config.command, CommandKind::OutputDelayProbe);
    assert_eq!(config.probe_duration_seconds, 20);
    assert_eq!(config.probe_warmup_seconds, 4);
    assert_eq!(config.probe_pulse_period_ms, 1000);
    assert_eq!(config.probe_pulse_width_ms, 120);
    assert_eq!(config.probe_event_width_codes, "1,2,3,4");
    assert_eq!(config.probe_audio_delay_us, 0);
    assert_eq!(config.probe_video_delay_us, 157_712);
}

#[test]
/// Keeps `parse_args_accepts_calibration_adjustment` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_args_accepts_calibration_adjustment() {
    let config = parse_args_from([
        "--server",
        "http://lab:50051",
        "calibrate",
        "0",
        "71600",
        "probe",
        "median",
    ])
    .expect("calibration config");
    assert_eq!(config.command, CommandKind::CalibrationAdjust);
    assert_eq!(config.audio_delta_us, 0);
    assert_eq!(config.video_delta_us, 71_600);
    assert_eq!(config.note, "probe median");
}

#[test]
/// Keeps `parse_args_rejects_bad_inputs` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_args_rejects_bad_inputs() {
    assert!(parse_args_from(["--server"]).is_err());
    assert!(parse_args_from(["nope"]).is_err());
    assert!(parse_args_from(["status", "extra"]).is_err());
    assert!(parse_args_from(["calibrate"]).is_err());
    assert!(parse_args_from(["calibrate", "0", "not-int"]).is_err());
    assert!(
        parse_args_from([
            "output-delay-probe",
            "1",
            "2",
            "3",
            "4",
            "1",
            "0",
            "0",
            "extra"
        ])
        .is_err()
    );
}

#[test]
fn parse_args_reports_help_without_exiting_test_process() {
    assert_eq!(
        parse_args_outcome_from(["--help"]).unwrap(),
        ParseOutcome::Help
    );
    assert_eq!(parse_args_outcome_from(["-h"]).unwrap(), ParseOutcome::Help);
    assert!(parse_args_from(["--help"]).is_err());
}

#[test]
fn parse_args_runtime_wrapper_is_non_panicking_under_tests() {
    let _ = super::parse_args();
}

#[cfg(coverage)]
#[test]
fn coverage_main_references_runtime_parser() {
    super::main();
}

#[cfg(coverage)]
#[tokio::test(flavor = "current_thread")]
async fn coverage_connect_uses_lazy_channel_after_endpoint_validation() {
    let client = super::connect("http://127.0.0.1:1")
        .await
        .expect("coverage lazy channel");
    drop(client);
}

#[test]
/// Keeps status/read commands from accidentally mutating capture power.
fn mutating_commands_map_to_capture_power_requests() {
    let auto = capture_power_request(CommandKind::Auto).expect("auto request");
    assert!(!auto.enabled);
    assert_eq!(auto.command, CapturePowerCommand::Auto as i32);

    let on = capture_power_request(CommandKind::On).expect("on request");
    assert!(on.enabled);
    assert_eq!(on.command, CapturePowerCommand::ForceOn as i32);

    let off = capture_power_request(CommandKind::Off).expect("off request");
    assert!(!off.enabled);
    assert_eq!(off.command, CapturePowerCommand::ForceOff as i32);

    assert!(capture_power_request(CommandKind::Status).is_none());
    assert!(capture_power_request(CommandKind::Version).is_none());
    assert!(capture_power_request(CommandKind::CalibrationStatus).is_none());
    assert!(capture_power_request(CommandKind::CalibrationAdjust).is_none());
    assert!(capture_power_request(CommandKind::CalibrationRestoreDefault).is_none());
    assert!(capture_power_request(CommandKind::CalibrationRestoreFactory).is_none());
    assert!(capture_power_request(CommandKind::CalibrationSaveDefault).is_none());
    assert!(capture_power_request(CommandKind::RecoverUsb).is_none());
    assert!(capture_power_request(CommandKind::RecoverUac).is_none());
    assert!(capture_power_request(CommandKind::RecoverUvc).is_none());
    assert!(capture_power_request(CommandKind::ResetUsb).is_none());
    assert!(capture_power_request(CommandKind::UpstreamSync).is_none());
    assert!(capture_power_request(CommandKind::OutputDelayProbe).is_none());
}

#[test]
fn print_state_accepts_full_capture_power_payload() {
    super::print_state(CapturePowerState {
        available: true,
        enabled: false,
        mode: "auto".to_string(),
        detected_devices: 2,
        active_leases: 1,
        unit: "lesavka-capture-power.service".to_string(),
        detail: "ready".to_string(),
    });
}

#[test]
/// Keeps `print_calibration_accepts_full_payload` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn print_calibration_accepts_full_payload() {
    super::print_calibration_state(lesavka_common::lesavka::CalibrationState {
        profile: "mjpeg".to_string(),
        factory_audio_offset_us: 0,
        factory_video_offset_us: 1_090_000,
        default_audio_offset_us: 0,
        default_video_offset_us: 1_090_000,
        active_audio_offset_us: 0,
        active_video_offset_us: 1_161_600,
        source: "manual".to_string(),
        confidence: "manual".to_string(),
        updated_at: "2026-05-02T00:00:00Z".to_string(),
        detail: "probe nudge".to_string(),
    });
}

#[test]
/// Keeps `calibration_requests_are_only_built_for_calibration_mutations` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_requests_are_only_built_for_calibration_mutations() {
    let config = Config {
        server: "http://127.0.0.1:50051".to_string(),
        command: CommandKind::CalibrationAdjust,
        audio_delta_us: 0,
        video_delta_us: 71_600,
        note: "probe".to_string(),
        probe_duration_seconds: 0,
        probe_warmup_seconds: 0,
        probe_pulse_period_ms: 0,
        probe_pulse_width_ms: 0,
        probe_event_width_codes: String::new(),
        probe_audio_delay_us: 0,
        probe_video_delay_us: 0,
    };
    let request = calibration_request_for(&config).expect("request");
    assert_eq!(request.action, CalibrationAction::AdjustActive as i32);
    assert_eq!(request.audio_delta_us, 0);
    assert_eq!(request.video_delta_us, 71_600);
    assert_eq!(request.note, "probe");

    let status = Config {
        command: CommandKind::CalibrationStatus,
        ..config
    };
    assert!(calibration_request_for(&status).is_none());
}
197 client/src/bin/tests/lesavka_sync_analyze.rs Normal file
@ -0,0 +1,197 @@
use super::parse_args;
use lesavka_client::sync_probe::analyze::{
    SyncAnalysisReport, SyncAnalysisVerdict, SyncCalibrationRecommendation, SyncEventPair,
};

#[test]
fn parse_args_accepts_capture_path_and_json_flag() {
    let args = parse_args(["capture.mkv", "--json"]).expect("args");
    assert_eq!(args.capture_path, std::path::PathBuf::from("capture.mkv"));
    assert!(args.emit_json);
    assert_eq!(args.report_dir, None);
}

#[test]
fn parse_args_accepts_report_dir() {
    let args = parse_args(["capture.mkv", "--report-dir", "/tmp/probe"]).expect("args");
    assert_eq!(args.capture_path, std::path::PathBuf::from("capture.mkv"));
    assert_eq!(
        args.report_dir,
        Some(std::path::PathBuf::from("/tmp/probe"))
    );
}

#[test]
fn parse_args_accepts_event_width_codes() {
    let args = parse_args(["capture.mkv", "--event-width-codes", "1,2,1,3"]).expect("args");
    assert_eq!(args.options.event_width_codes, vec![1, 2, 1, 3]);
}

#[test]
fn parse_args_accepts_analysis_window() {
    let args = parse_args(["capture.mkv", "--analysis-window-s", "8.25:26.5"]).expect("args");
    assert_eq!(args.options.analysis_start_s, Some(8.25));
    assert_eq!(args.options.analysis_end_s, Some(26.5));
}

#[test]
fn parse_args_rejects_extra_positional_arguments() {
    assert!(parse_args(["one.mkv", "two.mkv"]).is_err());
    assert!(parse_args(["one.mkv", "--event-width-codes", ""]).is_err());
    assert!(parse_args(["one.mkv", "--event-width-codes", "0"]).is_err());
    assert!(parse_args(["one.mkv", "--analysis-window-s", "wat:10"]).is_err());
    assert!(parse_args(["one.mkv", "--analysis-start-s", "-1"]).is_err());
}

#[test]
fn parse_args_requires_a_capture_path() {
    let error = parse_args(["--json"]).expect_err("missing capture path should fail");
    assert!(
        error.to_string().contains("capture path is required"),
        "unexpected error: {error:#}"
    );
}

#[test]
fn coverage_main_stub_is_non_panicking() {
    let _ = super::main();
}

#[test]
/// Keeps `human_report_explains_coded_raw_activity_and_unpaired_onsets` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn human_report_explains_coded_raw_activity_and_unpaired_onsets() {
    let report = SyncAnalysisReport {
        video_event_count: 3,
        audio_event_count: 3,
        paired_event_count: 1,
        coded_events: true,
        activity_start_delta_ms: -3_620.7,
        raw_first_video_activity_s: 9.361,
        raw_first_audio_activity_s: 5.740,
        first_skew_ms: -188.4,
        last_skew_ms: -188.4,
        mean_skew_ms: -188.4,
        median_skew_ms: -188.4,
        max_abs_skew_ms: 188.4,
        drift_ms: 0.0,
        skews_ms: vec![-188.4],
        video_onsets_s: vec![9.461, 11.420, 13.367],
        audio_onsets_s: vec![9.135, 11.146, 13.135],
        paired_events: vec![SyncEventPair {
            event_id: 0,
            server_event_id: None,
            event_code: None,
            video_time_s: 11.420,
            audio_time_s: 11.146,
            skew_ms: -188.4,
            confidence: 0.62,
        }],
    };
    let calibration = SyncCalibrationRecommendation {
        ready: false,
        recommended_audio_offset_adjust_us: 0,
        recommended_video_offset_adjust_us: 0,
        note: "need more pairs".to_string(),
    };
    let verdict = SyncAnalysisVerdict {
        status: "gross_failure".to_string(),
        passed: false,
        p95_abs_skew_ms: 188.4,
        max_abs_skew_ms: 188.4,
        preferred_p95_abs_skew_ms: 35.0,
        acceptable_p95_abs_skew_ms: 80.0,
        gross_failure_p95_abs_skew_ms: 250.0,
        catastrophic_max_abs_skew_ms: 1_000.0,
        reason: "coded pulse skew is too high".to_string(),
    };

    let text = super::format_human_report(
        std::path::Path::new("/tmp/capture.webm"),
        &report,
        None,
        &calibration,
        &verdict,
    );

    assert!(text.contains("- evidence mode: coded pulses"));
    assert!(text.contains("raw activity handling: reported only"));
    assert!(text.contains("- paired window first video/audio: 11.420 s / 11.146 s"));
    assert!(text.contains("- unpaired video onsets: 9.461s, 13.367s"));
    assert!(text.contains("- unpaired audio onsets: 9.135s, 13.135s"));
}

#[test]
/// Keeps `signature_coverage_reports_missing_and_unknown_coded_pairs` explicit because it sits on CLI orchestration, where operators need deterministic exits and artifact paths.
/// Inputs are the typed parameters; output is the return value or side effect.
fn signature_coverage_reports_missing_and_unknown_coded_pairs() {
    let report = SyncAnalysisReport {
        video_event_count: 3,
        audio_event_count: 3,
        paired_event_count: 2,
        coded_events: true,
        activity_start_delta_ms: 0.0,
        raw_first_video_activity_s: 1.0,
        raw_first_audio_activity_s: 1.0,
        first_skew_ms: 0.0,
        last_skew_ms: 0.0,
        mean_skew_ms: 0.0,
        median_skew_ms: 0.0,
        max_abs_skew_ms: 0.0,
        drift_ms: 0.0,
        skews_ms: vec![0.0, 0.0],
        video_onsets_s: vec![1.0, 2.0, 3.0],
        audio_onsets_s: vec![1.0, 2.0, 3.0],
        paired_events: vec![
            SyncEventPair {
                event_id: 0,
                server_event_id: Some(0),
                event_code: Some(1),
                video_time_s: 1.0,
                audio_time_s: 1.0,
                skew_ms: 0.0,
                confidence: 1.0,
            },
            SyncEventPair {
                event_id: 1,
                server_event_id: None,
                event_code: None,
                video_time_s: 2.0,
                audio_time_s: 2.0,
                skew_ms: 0.0,
                confidence: 0.4,
            },
        ],
    };
    let calibration = SyncCalibrationRecommendation {
        ready: false,
        recommended_audio_offset_adjust_us: 0,
        recommended_video_offset_adjust_us: 0,
        note: "need more pairs".to_string(),
    };
    let verdict = SyncAnalysisVerdict {
        status: "insufficient_data".to_string(),
        passed: false,
        p95_abs_skew_ms: 0.0,
        max_abs_skew_ms: 0.0,
        preferred_p95_abs_skew_ms: 35.0,
        acceptable_p95_abs_skew_ms: 80.0,
        gross_failure_p95_abs_skew_ms: 250.0,
        catastrophic_max_abs_skew_ms: 1_000.0,
        reason: "need more pairs".to_string(),
    };
    let coverage = super::signature_coverage(&[1, 2, 3], &report);
    let text = super::format_human_report(
        std::path::Path::new("/tmp/capture.webm"),
        &report,
        coverage.as_ref(),
        &calibration,
        &verdict,
    );

    assert!(text.contains("- expected coded signatures: 3"));
    assert!(text.contains("- paired coded signatures: 1/3"));
    assert!(text.contains("- missing paired signature ids: 1, 2"));
    assert!(text.contains("- missing paired signature codes: 2, 3"));
    assert!(text.contains("- paired signatures without identity: 1"));
}
@ -3,6 +3,8 @@ impl CameraCapture {
        Self::new_with_capture_profile(device_fragment, cfg, None)
    }

    /// Keeps `new_with_capture_profile` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn new_with_capture_profile(
        device_fragment: Option<&str>,
        cfg: Option<CameraConfig>,
@ -243,6 +245,8 @@ impl CameraCapture {
        })
    }

    /// Keeps `pull` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn pull(&self) -> Option<VideoPacket> {
        let sample = self.sink.pull_sample().ok()?;
        let buf = sample.buffer()?;
@ -316,6 +320,8 @@ fn env_flag_enabled(name: &str) -> bool {
    })
}

/// Keeps `log_camera_first_packet` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn log_camera_first_packet(packet_index: u64, bytes: usize, pts_us: u64) {
    if packet_index == 0 {
        tracing::info!(bytes, pts_us, "📸 upstream webcam frames flowing");
@ -327,6 +333,8 @@ fn should_log_camera_timing_sample(packet_index: u64) -> bool {
        && (packet_index < 10 || packet_index.is_multiple_of(300))
}

/// Keeps `log_camera_timing_sample` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn log_camera_timing_sample(
    packet_index: u64,
    timing: crate::live_capture_clock::RebasedSourcePts,
@ -350,6 +358,8 @@ fn log_camera_timing_sample(
    }
}

/// Keeps `log_camera_stale_source_drop` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn log_camera_stale_source_drop(timing: crate::live_capture_clock::RebasedSourcePts, bytes: usize) {
    static CAMERA_STALE_SOURCE_DROPS: std::sync::atomic::AtomicU64 =
        std::sync::atomic::AtomicU64::new(0);
@ -47,289 +47,7 @@ pub struct MicrophoneCapture {
    pending_packets: Mutex<VecDeque<AudioPacket>>,
}

impl MicrophoneCapture {
    pub fn new() -> Result<Self> {
        Self::new_with_source_and_env(None, true)
    }

    pub fn new_with_source(source_override: Option<&str>) -> Result<Self> {
        Self::new_with_source_and_env(source_override, true)
    }

    pub fn new_default_source() -> Result<Self> {
        Self::new_with_source_and_env(None, false)
    }

    fn new_with_source_and_env(
        source_override: Option<&str>,
        allow_env_source: bool,
    ) -> Result<Self> {
        gst::init().ok(); // idempotent

        /* preferred path: pipewiresrc; fallback: pulsesrc ----------------*/
        let selected_source = source_override.map(str::to_string).or_else(|| {
            allow_env_source
                .then(|| std::env::var("LESAVKA_MIC_SOURCE").ok())
                .flatten()
        });
        let source_desc = match selected_source {
            Some(s) if !s.is_empty() => match Self::resolve_source_desc(&s) {
                Some(desc) => desc,
                None => {
                    if explicit_media_sources_required() {
                        bail!(
                            "requested mic '{s}' was not found; refusing to use default because {REQUIRE_EXPLICIT_MEDIA_SOURCES_ENV}=1"
                        );
                    }
                    warn!("🎤 requested mic '{s}' not found; using default");
                    Self::default_source_desc()
                }
            },
            _ => Self::default_source_desc(),
        };
        debug!("🎤 source: {source_desc}");
        let gain = mic_gain_from_env();
        let level_tap_path = mic_level_tap_path();
        let desc = microphone_pipeline_desc(&source_desc, gain, level_tap_path.is_some());

        let pipeline: gst::Pipeline = gst::parse::launch(&desc)?.downcast().expect("pipeline");
        let sink: gst_app::AppSink = pipeline.by_name("asink").unwrap().downcast().unwrap();
        let volume = pipeline
            .by_name("mic_input_gain")
            .context("missing mic_input_gain volume")?;

        #[cfg(not(coverage))]
        {
            /* ─── bus for diagnostics ───────────────────────────────────────*/
            let bus = pipeline.bus().unwrap();
            std::thread::spawn(move || {
                use gst::MessageView::*;
                for msg in bus.iter_timed(gst::ClockTime::NONE) {
                    match msg.view() {
                        StateChanged(s)
                            if s.current() == gst::State::Playing
                                && msg.src().map(|s| s.is::<gst::Pipeline>()).unwrap_or(false) =>
                        {
                            info!("🎤 mic pipeline ▶️")
                        }
                        Error(e) => error!(
                            "🎤💥 mic: {} ({})",
                            e.error(),
                            e.debug().unwrap_or_default()
                        ),
                        Warning(w) => warn!(
                            "🎤⚠️ mic: {} ({})",
                            w.error(),
                            w.debug().unwrap_or_default()
                        ),
                        _ => {}
                    }
                }
            });
        }

        if let Err(err) = pipeline.set_state(gst::State::Playing) {
            let _ = pipeline.set_state(gst::State::Null);
            return Err(err).context("start mic pipeline");
        }
        maybe_spawn_mic_gain_control(volume);
        let level_tap_running = if let Some(path) = level_tap_path {
            let level_sink = pipeline
                .by_name("level_sink")
                .context("missing microphone level tap appsink")?
                .downcast::<gst_app::AppSink>()
                .expect("microphone level tap appsink");
            Some(spawn_mic_level_tap(level_sink, path))
        } else {
            None
        };

        Ok(Self {
            pipeline,
            sink,
            level_tap_running,
            pts_rebaser: crate::live_capture_clock::DurationPacedSourcePtsRebaser::default(),
            pending_packets: Mutex::default(),
        })
    }

    /// Blocking pull; call from an async wrapper
    pub fn pull(&self) -> Option<AudioPacket> {
        if let Some(packet) = self.pending_packets.lock().ok()?.pop_front() {
            return Some(packet);
        }
        match self.sink.pull_sample() {
            Ok(sample) => {
                let buf = sample.buffer().unwrap();
                let map = buf.map_readable().unwrap();
                let source_pts_us = buf.pts().map(|ts| ts.nseconds() / 1_000);
                let packet_duration_us = buffer_duration_us(buf, map.len());
                let timing = self.pts_rebaser.rebase_with_packet_duration(
                    source_pts_us,
                    packet_duration_us,
                    crate::live_capture_clock::upstream_source_lag_cap(),
                );
                if timing.lag_clamped {
                    log_microphone_stale_source_drop(timing, map.len());
                    return None;
                }
                let pts = timing.packet_pts_us;
                let target_bytes = mic_packet_target_bytes();
                let mut packets = split_audio_sample(pts, map.as_slice(), target_bytes);
                let packet_count = packets.len();
                let first_packet = packets.pop_front();
                #[cfg(not(coverage))]
                {
                    static CNT: AtomicU64 = AtomicU64::new(0);
                    let n = CNT.fetch_add(1, Ordering::Relaxed);
                    if crate::live_capture_clock::upstream_timing_trace_enabled()
                        && (n < 10 || n.is_multiple_of(300))
                    {
                        info!(
                            packet_index = n,
                            source_pts_us = timing.source_pts_us.unwrap_or_default(),
                            source_base_us = timing.source_base_us.unwrap_or_default(),
                            capture_base_us = timing.capture_base_us.unwrap_or_default(),
                            capture_now_us = timing.capture_now_us,
                            packet_pts_us = timing.packet_pts_us,
                            pull_path_delay_us =
                                timing.capture_now_us as i128 - timing.packet_pts_us as i128,
                            used_source_pts = timing.used_source_pts,
                            lag_clamped = timing.lag_clamped,
                            lead_clamped = timing.lead_clamped,
                            bytes = map.len(),
                            packet_duration_us,
                            split_packets = packet_count,
                            target_packet_bytes = target_bytes,
                            "🎤 upstream microphone timing sample"
                        );
                    }
                    if n < 10 || n.is_multiple_of(300) {
                        trace!(
                            "🎤⇧ cli sample#{n} {} bytes -> {} packet(s)",
                            map.len(),
                            packet_count
                        );
                    }
                }
                if !packets.is_empty()
                    && let Ok(mut pending) = self.pending_packets.lock()
                {
                    pending.extend(packets);
                }
                first_packet
            }
            Err(_) => None,
        }
    }

    /// Resolve launcher-selected mic names while preserving Pulse catalog routing.
    fn resolve_source_desc(fragment: &str) -> Option<String> {
        if looks_like_pulse_source_name(fragment)
            && let Some(full) = Self::pulse_source_by_substr(fragment)
        {
            return Some(Self::pulse_source_desc(Some(&full)));
        }
        if Self::pipewire_source_available()
            && let Some(full) = Self::pipewire_source_by_substr(fragment)
        {
            return Some(Self::pipewire_source_desc(Some(&full)));
        }
        Self::pulse_source_by_substr(fragment).map(|full| Self::pulse_source_desc(Some(&full)))
    }

    fn pipewire_source_available() -> bool {
        #[cfg(coverage)]
        if std::env::var("LESAVKA_MIC_DISABLE_PIPEWIRE").is_ok() {
            return false;
        }
        gst::ElementFactory::find("pipewiresrc").is_some()
    }

    fn pipewire_source_desc(source: Option<&str>) -> String {
        match source {
            Some(source) if !source.trim().is_empty() => {
                format!(
                    "pipewiresrc target-object={} do-timestamp=true",
                    escape(source.to_string().into())
                )
            }
            _ => "pipewiresrc do-timestamp=true".to_string(),
        }
    }

    fn pulse_source_desc(source: Option<&str>) -> String {
        let buffer_time_us = mic_pulse_buffer_time_us();
        let latency_time_us = mic_pulse_latency_time_us().min(buffer_time_us);
        match source {
            Some(source) if !source.trim().is_empty() => {
                format!(
                    "pulsesrc device={} do-timestamp=true buffer-time={buffer_time_us} latency-time={latency_time_us}",
                    escape(source.to_string().into())
                )
            }
            _ => {
                format!(
                    "pulsesrc do-timestamp=true buffer-time={buffer_time_us} latency-time={latency_time_us}"
                )
            }
        }
    }
fn pipewire_source_by_substr(fragment: &str) -> Option<String> {
|
||||
let out = std::process::Command::new("pw-dump").output().ok()?;
|
||||
let list = serde_json::from_slice::<serde_json::Value>(&out.stdout).ok()?;
|
||||
let objects = list.as_array()?;
|
||||
objects.iter().find_map(|object| {
|
||||
let props = object.get("info")?.get("props")?.as_object()?;
|
||||
if props.get("media.class")?.as_str()? != "Audio/Source" {
|
||||
return None;
|
||||
}
|
||||
let name = props
|
||||
.get("node.name")
|
||||
.or_else(|| props.get("node.nick"))?
|
||||
.as_str()?;
|
||||
if name.contains(fragment) && !name.ends_with(".monitor") {
|
||||
Some(name.to_owned())
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
fn pulse_source_by_substr(fragment: &str) -> Option<String> {
|
||||
use std::process::Command;
|
||||
let out = Command::new("pactl")
|
||||
.args(["list", "short", "sources"])
|
||||
.output()
|
||||
.ok()?;
|
||||
let list = String::from_utf8_lossy(&out.stdout);
|
||||
list.lines().find_map(|ln| {
|
||||
let mut cols = ln.split_whitespace();
|
||||
let _id = cols.next()?;
|
||||
let name = cols.next()?; // column #1
|
||||
if name.contains(fragment) {
|
||||
Some(name.to_owned())
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
fn default_source_desc() -> String {
|
||||
#[cfg(coverage)]
|
||||
if let Ok(source) = std::env::var("LESAVKA_MIC_TEST_SOURCE_DESC")
|
||||
&& !source.trim().is_empty()
|
||||
{
|
||||
return source;
|
||||
}
|
||||
if Self::pipewire_source_available() {
|
||||
return Self::pipewire_source_desc(None);
|
||||
}
|
||||
Self::pulse_source_desc(None)
|
||||
}
|
||||
}
|
||||
|
||||
include!("microphone/capture_runtime.rs");
|
||||
fn mic_level_tap_path() -> Option<PathBuf> {
|
||||
std::env::var(MIC_LEVEL_TAP_ENV)
|
||||
.ok()
|
||||
@ -338,6 +56,8 @@ fn mic_level_tap_path() -> Option<PathBuf> {
|
||||
.map(PathBuf::from)
|
||||
}
|
||||
|
||||
/// Keeps `microphone_pipeline_desc` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn microphone_pipeline_desc(source_desc: &str, gain: f64, level_tap_enabled: bool) -> String {
|
||||
let gain = format_mic_gain_for_gst(gain);
|
||||
if level_tap_enabled {
|
||||
@ -382,6 +102,8 @@ fn pcm_payload_duration_us(bytes: usize) -> u64 {
|
||||
}
|
||||
|
||||
#[cfg(not(coverage))]
|
||||
/// Keeps `log_microphone_stale_source_drop` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn log_microphone_stale_source_drop(
|
||||
timing: crate::live_capture_clock::RebasedSourcePts,
|
||||
bytes: usize,
|
||||
@ -407,6 +129,8 @@ fn log_microphone_stale_source_drop(
|
||||
) {
|
||||
}
|
||||
|
||||
/// Keeps `split_audio_sample` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn split_audio_sample(base_pts_us: u64, data: &[u8], target_bytes: usize) -> VecDeque<AudioPacket> {
|
||||
let frame_bytes = (MIC_CHANNELS * MIC_SAMPLE_BYTES).max(1);
|
||||
let target_bytes = frame_aligned_packet_bytes(target_bytes.max(frame_bytes));
|
||||
@ -518,6 +242,8 @@ fn mic_gain_from_env() -> f64 {
|
||||
.unwrap_or(1.0)
|
||||
}
|
||||
|
||||
/// Keeps `parse_mic_gain` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn parse_mic_gain(raw: &str) -> Option<f64> {
|
||||
let value = raw.split_ascii_whitespace().next()?.parse::<f64>().ok()?;
|
||||
value.is_finite().then_some(clamp_mic_gain(value))
|
||||
@ -531,6 +257,8 @@ fn format_mic_gain_for_gst(gain: f64) -> String {
|
||||
format!("{:.3}", clamp_mic_gain(gain))
|
||||
}
|
||||
|
||||
/// Keeps `maybe_spawn_mic_gain_control` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn maybe_spawn_mic_gain_control(volume: gst::Element) {
|
||||
let Ok(path) = std::env::var(MIC_GAIN_CONTROL_ENV) else {
|
||||
return;
|
||||
@ -551,6 +279,8 @@ fn maybe_spawn_mic_gain_control(volume: gst::Element) {
|
||||
});
|
||||
}
|
||||
|
||||
/// Keeps `spawn_mic_level_tap` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn spawn_mic_level_tap(sink: gst_app::AppSink, path: PathBuf) -> Arc<AtomicBool> {
|
||||
let running = Arc::new(AtomicBool::new(true));
|
||||
let thread_running = Arc::clone(&running);
|
||||
@ -593,6 +323,8 @@ fn read_mic_gain_control(path: &StdPath) -> Option<f64> {
|
||||
}
|
||||
|
||||
impl Drop for MicrophoneCapture {
|
||||
/// Keeps `drop` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn drop(&mut self) {
|
||||
if let Some(running) = &self.level_tap_running {
|
||||
running.store(false, AtomicOrdering::Release);
|
||||
@ -602,82 +334,5 @@ impl Drop for MicrophoneCapture {
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::{
|
||||
MIC_CHANNELS, MIC_SAMPLE_BYTES, MIC_SAMPLE_RATE, buffer_duration_us,
|
||||
mic_packet_target_bytes, pcm_payload_duration_us, split_audio_sample,
|
||||
};
|
||||
use gstreamer as gst;
|
||||
|
||||
fn buffer_with_duration(size: usize, duration: Option<gst::ClockTime>) -> gst::Buffer {
|
||||
gst::init().ok();
|
||||
let mut buffer = gst::Buffer::with_size(size).expect("test buffer");
|
||||
buffer
|
||||
.get_mut()
|
||||
.expect("test buffer should be uniquely owned")
|
||||
.set_duration(duration);
|
||||
buffer
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn mic_payload_duration_uses_pcm_frame_count() {
|
||||
let ten_ms_bytes = (MIC_SAMPLE_RATE as usize / 100) * MIC_CHANNELS * MIC_SAMPLE_BYTES;
|
||||
|
||||
assert_eq!(pcm_payload_duration_us(ten_ms_bytes), 10_000);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn zero_reported_duration_falls_back_to_pcm_payload_duration() {
|
||||
let bytes = 1_024 * MIC_CHANNELS * MIC_SAMPLE_BYTES;
|
||||
let buffer = buffer_with_duration(bytes, Some(gst::ClockTime::ZERO));
|
||||
|
||||
assert_eq!(buffer_duration_us(buffer.as_ref(), bytes), 21_333);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn implausibly_tiny_reported_duration_falls_back_to_payload_duration() {
|
||||
let bytes = 1_024 * MIC_CHANNELS * MIC_SAMPLE_BYTES;
|
||||
let buffer = buffer_with_duration(bytes, Some(gst::ClockTime::from_useconds(1)));
|
||||
|
||||
assert_eq!(buffer_duration_us(buffer.as_ref(), bytes), 21_333);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn plausible_reported_duration_is_preserved() {
|
||||
let bytes = 1_024 * MIC_CHANNELS * MIC_SAMPLE_BYTES;
|
||||
let buffer = buffer_with_duration(bytes, Some(gst::ClockTime::from_useconds(20_000)));
|
||||
|
||||
assert_eq!(buffer_duration_us(buffer.as_ref(), bytes), 20_000);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn oversized_microphone_samples_split_into_live_sized_packets() {
|
||||
let bytes_per_frame = MIC_CHANNELS * MIC_SAMPLE_BYTES;
|
||||
let hundred_ms_bytes = (MIC_SAMPLE_RATE as usize / 10) * bytes_per_frame;
|
||||
let data = vec![7_u8; hundred_ms_bytes];
|
||||
|
||||
let packets = split_audio_sample(1_000_000, &data, mic_packet_target_bytes());
|
||||
|
||||
assert_eq!(packets.len(), 5);
|
||||
assert!(packets.iter().all(|packet| packet.id == 0));
|
||||
assert!(packets.iter().all(|packet| packet.data.len() == 3_840));
|
||||
assert_eq!(packets.front().map(|packet| packet.pts), Some(1_000_000));
|
||||
assert_eq!(packets.get(1).map(|packet| packet.pts), Some(1_020_000));
|
||||
assert_eq!(packets.back().map(|packet| packet.pts), Some(1_080_000));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn trailing_microphone_packet_keeps_remaining_bytes() {
|
||||
let bytes_per_frame = MIC_CHANNELS * MIC_SAMPLE_BYTES;
|
||||
let forty_five_ms_bytes = ((MIC_SAMPLE_RATE as usize * 45) / 1_000) * bytes_per_frame;
|
||||
let data = vec![9_u8; forty_five_ms_bytes];
|
||||
|
||||
let packets = split_audio_sample(5_000, &data, mic_packet_target_bytes());
|
||||
|
||||
assert_eq!(packets.len(), 3);
|
||||
assert_eq!(packets[0].data.len(), 3_840);
|
||||
assert_eq!(packets[1].data.len(), 3_840);
|
||||
assert_eq!(packets[2].data.len(), 960);
|
||||
assert_eq!(packets[2].pts, 45_000);
|
||||
}
|
||||
}
|
||||
#[path = "microphone/tests/mod.rs"]
|
||||
mod tests;
|
||||
|
||||
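The split tests removed above pin down concrete packet arithmetic. As a standalone sketch that is not part of this commit (the 48 kHz / 2-channel / s16le constants are inferred from the test numbers, and `split_offsets` is a hypothetical simplification of `split_audio_sample`), the frame-aligned split-and-stamp behavior they assert can be reproduced like this:

```rust
const FRAME_BYTES: usize = 4; // 2 channels * 2 bytes per sample (s16le), assumed
const SAMPLE_RATE: u64 = 48_000; // assumed capture rate

/// Split a PCM payload into frame-aligned packets of at most `target` bytes,
/// stamping each packet with the PTS of its first frame.
fn split_offsets(base_pts_us: u64, total_bytes: usize, target: usize) -> Vec<(u64, usize)> {
    let mut out = Vec::new();
    let mut offset = 0;
    while offset < total_bytes {
        let len = target.min(total_bytes - offset);
        // PTS advances by the duration of the frames already emitted.
        let pts = base_pts_us + (offset / FRAME_BYTES) as u64 * 1_000_000 / SAMPLE_RATE;
        out.push((pts, len));
        offset += len;
    }
    out
}

fn main() {
    // 45 ms at 48 kHz stereo s16 = 2_160 frames * 4 bytes = 8_640 bytes,
    // mirroring `trailing_microphone_packet_keeps_remaining_bytes`:
    // two full 20 ms packets (3_840 bytes) plus a 5 ms remainder (960 bytes).
    let packets = split_offsets(5_000, 8_640, 3_840);
    assert_eq!(packets, vec![(5_000, 3_840), (25_000, 3_840), (45_000, 960)]);
}
```

The trailing short packet keeps its remaining bytes rather than being padded, so the PTS of each packet stays exact.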
297
client/src/input/microphone/capture_runtime.rs
Normal file
@@ -0,0 +1,297 @@
impl MicrophoneCapture {
    pub fn new() -> Result<Self> {
        Self::new_with_source_and_env(None, true)
    }

    pub fn new_with_source(source_override: Option<&str>) -> Result<Self> {
        Self::new_with_source_and_env(source_override, true)
    }

    pub fn new_default_source() -> Result<Self> {
        Self::new_with_source_and_env(None, false)
    }

    /// Keeps `new_with_source_and_env` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn new_with_source_and_env(
        source_override: Option<&str>,
        allow_env_source: bool,
    ) -> Result<Self> {
        gst::init().ok(); // idempotent

        /* preferred path: pipewiresrc; fallback: pulsesrc ----------------*/
        let selected_source = source_override.map(str::to_string).or_else(|| {
            allow_env_source
                .then(|| std::env::var("LESAVKA_MIC_SOURCE").ok())
                .flatten()
        });
        let source_desc = match selected_source {
            Some(s) if !s.is_empty() => match Self::resolve_source_desc(&s) {
                Some(desc) => desc,
                None => {
                    if explicit_media_sources_required() {
                        bail!(
                            "requested mic '{s}' was not found; refusing to use default because {REQUIRE_EXPLICIT_MEDIA_SOURCES_ENV}=1"
                        );
                    }
                    warn!("🎤 requested mic '{s}' not found; using default");
                    Self::default_source_desc()
                }
            },
            _ => Self::default_source_desc(),
        };
        debug!("🎤 source: {source_desc}");
        let gain = mic_gain_from_env();
        let level_tap_path = mic_level_tap_path();
        let desc = microphone_pipeline_desc(&source_desc, gain, level_tap_path.is_some());

        let pipeline: gst::Pipeline = gst::parse::launch(&desc)?.downcast().expect("pipeline");
        let sink: gst_app::AppSink = pipeline.by_name("asink").unwrap().downcast().unwrap();
        let volume = pipeline
            .by_name("mic_input_gain")
            .context("missing mic_input_gain volume")?;

        #[cfg(not(coverage))]
        {
            /* ─── bus for diagnostics ───────────────────────────────────────*/
            let bus = pipeline.bus().unwrap();
            std::thread::spawn(move || {
                use gst::MessageView::*;
                for msg in bus.iter_timed(gst::ClockTime::NONE) {
                    match msg.view() {
                        StateChanged(s)
                            if s.current() == gst::State::Playing
                                && msg.src().map(|s| s.is::<gst::Pipeline>()).unwrap_or(false) =>
                        {
                            info!("🎤 mic pipeline ▶️")
                        }
                        Error(e) => error!(
                            "🎤💥 mic: {} ({})",
                            e.error(),
                            e.debug().unwrap_or_default()
                        ),
                        Warning(w) => warn!(
                            "🎤⚠️ mic: {} ({})",
                            w.error(),
                            w.debug().unwrap_or_default()
                        ),
                        _ => {}
                    }
                }
            });
        }

        if let Err(err) = pipeline.set_state(gst::State::Playing) {
            let _ = pipeline.set_state(gst::State::Null);
            return Err(err).context("start mic pipeline");
        }
        maybe_spawn_mic_gain_control(volume);
        let level_tap_running = if let Some(path) = level_tap_path {
            let level_sink = pipeline
                .by_name("level_sink")
                .context("missing microphone level tap appsink")?
                .downcast::<gst_app::AppSink>()
                .expect("microphone level tap appsink");
            Some(spawn_mic_level_tap(level_sink, path))
        } else {
            None
        };

        Ok(Self {
            pipeline,
            sink,
            level_tap_running,
            pts_rebaser: crate::live_capture_clock::DurationPacedSourcePtsRebaser::default(),
            pending_packets: Mutex::default(),
        })
    }

    /// Blocking pull; call from an async wrapper
    pub fn pull(&self) -> Option<AudioPacket> {
        if let Some(packet) = self.pending_packets.lock().ok()?.pop_front() {
            return Some(packet);
        }
        match self.sink.pull_sample() {
            Ok(sample) => {
                let buf = sample.buffer().unwrap();
                let map = buf.map_readable().unwrap();
                let source_pts_us = buf.pts().map(|ts| ts.nseconds() / 1_000);
                let packet_duration_us = buffer_duration_us(buf, map.len());
                let timing = self.pts_rebaser.rebase_with_packet_duration(
                    source_pts_us,
                    packet_duration_us,
                    crate::live_capture_clock::upstream_source_lag_cap(),
                );
                if timing.lag_clamped {
                    log_microphone_stale_source_drop(timing, map.len());
                    return None;
                }
                let pts = timing.packet_pts_us;
                let target_bytes = mic_packet_target_bytes();
                let mut packets = split_audio_sample(pts, map.as_slice(), target_bytes);
                let packet_count = packets.len();
                let first_packet = packets.pop_front();
                #[cfg(not(coverage))]
                {
                    static CNT: AtomicU64 = AtomicU64::new(0);
                    let n = CNT.fetch_add(1, Ordering::Relaxed);
                    if crate::live_capture_clock::upstream_timing_trace_enabled()
                        && (n < 10 || n.is_multiple_of(300))
                    {
                        info!(
                            packet_index = n,
                            source_pts_us = timing.source_pts_us.unwrap_or_default(),
                            source_base_us = timing.source_base_us.unwrap_or_default(),
                            capture_base_us = timing.capture_base_us.unwrap_or_default(),
                            capture_now_us = timing.capture_now_us,
                            packet_pts_us = timing.packet_pts_us,
                            pull_path_delay_us =
                                timing.capture_now_us as i128 - timing.packet_pts_us as i128,
                            used_source_pts = timing.used_source_pts,
                            lag_clamped = timing.lag_clamped,
                            lead_clamped = timing.lead_clamped,
                            bytes = map.len(),
                            packet_duration_us,
                            split_packets = packet_count,
                            target_packet_bytes = target_bytes,
                            "🎤 upstream microphone timing sample"
                        );
                    }
                    if n < 10 || n.is_multiple_of(300) {
                        trace!(
                            "🎤⇧ cli sample#{n} {} bytes -> {} packet(s)",
                            map.len(),
                            packet_count
                        );
                    }
                }
                if !packets.is_empty()
                    && let Ok(mut pending) = self.pending_packets.lock()
                {
                    pending.extend(packets);
                }
                first_packet
            }
            Err(_) => None,
        }
    }
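The timing logic in `pull` leans on the crate's `DurationPacedSourcePtsRebaser`, whose internals are not shown in this diff. A hypothetical, heavily simplified model of the rebase-and-drop policy (the real type also paces by packet duration and tracks lead clamping; the field names and `lag_cap_us` values here are illustrative only) is:

```rust
// Hypothetical sketch: pin the first source PTS to a local capture epoch,
// keep source spacing for later packets, and drop anything that lags the
// local clock by more than the cap instead of replaying it late.
struct Rebaser {
    source_base_us: Option<u64>, // first source PTS seen
    capture_base_us: u64,        // local epoch the stream is rebased onto
}

impl Rebaser {
    fn rebase(&mut self, source_pts_us: u64, now_us: u64, lag_cap_us: u64) -> Option<u64> {
        let base = *self.source_base_us.get_or_insert(source_pts_us);
        let pts = self.capture_base_us + source_pts_us.saturating_sub(base);
        // Stale packet: arrived more than lag_cap_us behind the local clock.
        if now_us.saturating_sub(pts) > lag_cap_us {
            return None; // caller logs the drop, as `pull` does
        }
        Some(pts)
    }
}

fn main() {
    let mut r = Rebaser { source_base_us: None, capture_base_us: 100 };
    assert_eq!(r.rebase(7_000, 100, 50_000), Some(100));        // pins the epoch
    assert_eq!(r.rebase(27_000, 20_100, 50_000), Some(20_100)); // keeps spacing
    // 90 ms behind the local clock with a 50 ms cap -> dropped.
    assert_eq!(r.rebase(47_000, 130_100, 50_000), None);
}
```

Dropping rather than clamping keeps the downstream playout clock monotonic, which matches the `lag_clamped` early-return in `pull`.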

    /// Resolve launcher-selected mic names while preserving Pulse catalog routing.
    fn resolve_source_desc(fragment: &str) -> Option<String> {
        if looks_like_pulse_source_name(fragment)
            && let Some(full) = Self::pulse_source_by_substr(fragment)
        {
            return Some(Self::pulse_source_desc(Some(&full)));
        }
        if Self::pipewire_source_available()
            && let Some(full) = Self::pipewire_source_by_substr(fragment)
        {
            return Some(Self::pipewire_source_desc(Some(&full)));
        }
        Self::pulse_source_by_substr(fragment).map(|full| Self::pulse_source_desc(Some(&full)))
    }

    /// Keeps `pipewire_source_available` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn pipewire_source_available() -> bool {
        #[cfg(coverage)]
        if std::env::var("LESAVKA_MIC_DISABLE_PIPEWIRE").is_ok() {
            return false;
        }
        gst::ElementFactory::find("pipewiresrc").is_some()
    }

    /// Keeps `pipewire_source_desc` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn pipewire_source_desc(source: Option<&str>) -> String {
        match source {
            Some(source) if !source.trim().is_empty() => {
                format!(
                    "pipewiresrc target-object={} do-timestamp=true",
                    escape(source.to_string().into())
                )
            }
            _ => "pipewiresrc do-timestamp=true".to_string(),
        }
    }

    /// Keeps `pulse_source_desc` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn pulse_source_desc(source: Option<&str>) -> String {
        let buffer_time_us = mic_pulse_buffer_time_us();
        let latency_time_us = mic_pulse_latency_time_us().min(buffer_time_us);
        match source {
            Some(source) if !source.trim().is_empty() => {
                format!(
                    "pulsesrc device={} do-timestamp=true buffer-time={buffer_time_us} latency-time={latency_time_us}",
                    escape(source.to_string().into())
                )
            }
            _ => {
                format!(
                    "pulsesrc do-timestamp=true buffer-time={buffer_time_us} latency-time={latency_time_us}"
                )
            }
        }
    }

    /// Keeps `pipewire_source_by_substr` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn pipewire_source_by_substr(fragment: &str) -> Option<String> {
        let out = std::process::Command::new("pw-dump").output().ok()?;
        let list = serde_json::from_slice::<serde_json::Value>(&out.stdout).ok()?;
        let objects = list.as_array()?;
        objects.iter().find_map(|object| {
            let props = object.get("info")?.get("props")?.as_object()?;
            if props.get("media.class")?.as_str()? != "Audio/Source" {
                return None;
            }
            let name = props
                .get("node.name")
                .or_else(|| props.get("node.nick"))?
                .as_str()?;
            if name.contains(fragment) && !name.ends_with(".monitor") {
                Some(name.to_owned())
            } else {
                None
            }
        })
    }

    /// Keeps `pulse_source_by_substr` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn pulse_source_by_substr(fragment: &str) -> Option<String> {
        use std::process::Command;
        let out = Command::new("pactl")
            .args(["list", "short", "sources"])
            .output()
            .ok()?;
        let list = String::from_utf8_lossy(&out.stdout);
        list.lines().find_map(|ln| {
            let mut cols = ln.split_whitespace();
            let _id = cols.next()?;
            let name = cols.next()?; // column #1
            if name.contains(fragment) {
                Some(name.to_owned())
            } else {
                None
            }
        })
    }

    /// Keeps `default_source_desc` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn default_source_desc() -> String {
        #[cfg(coverage)]
        if let Ok(source) = std::env::var("LESAVKA_MIC_TEST_SOURCE_DESC")
            && !source.trim().is_empty()
        {
            return source;
        }
        if Self::pipewire_source_available() {
            return Self::pipewire_source_desc(None);
        }
        Self::pulse_source_desc(None)
    }
}
79
client/src/input/microphone/tests/mod.rs
Normal file
@@ -0,0 +1,79 @@
use super::{
    MIC_CHANNELS, MIC_SAMPLE_BYTES, MIC_SAMPLE_RATE, buffer_duration_us, mic_packet_target_bytes,
    pcm_payload_duration_us, split_audio_sample,
};
use gstreamer as gst;

fn buffer_with_duration(size: usize, duration: Option<gst::ClockTime>) -> gst::Buffer {
    gst::init().ok();
    let mut buffer = gst::Buffer::with_size(size).expect("test buffer");
    buffer
        .get_mut()
        .expect("test buffer should be uniquely owned")
        .set_duration(duration);
    buffer
}

#[test]
fn mic_payload_duration_uses_pcm_frame_count() {
    let ten_ms_bytes = (MIC_SAMPLE_RATE as usize / 100) * MIC_CHANNELS * MIC_SAMPLE_BYTES;

    assert_eq!(pcm_payload_duration_us(ten_ms_bytes), 10_000);
}

#[test]
fn zero_reported_duration_falls_back_to_pcm_payload_duration() {
    let bytes = 1_024 * MIC_CHANNELS * MIC_SAMPLE_BYTES;
    let buffer = buffer_with_duration(bytes, Some(gst::ClockTime::ZERO));

    assert_eq!(buffer_duration_us(buffer.as_ref(), bytes), 21_333);
}

#[test]
fn implausibly_tiny_reported_duration_falls_back_to_payload_duration() {
    let bytes = 1_024 * MIC_CHANNELS * MIC_SAMPLE_BYTES;
    let buffer = buffer_with_duration(bytes, Some(gst::ClockTime::from_useconds(1)));

    assert_eq!(buffer_duration_us(buffer.as_ref(), bytes), 21_333);
}

#[test]
fn plausible_reported_duration_is_preserved() {
    let bytes = 1_024 * MIC_CHANNELS * MIC_SAMPLE_BYTES;
    let buffer = buffer_with_duration(bytes, Some(gst::ClockTime::from_useconds(20_000)));

    assert_eq!(buffer_duration_us(buffer.as_ref(), bytes), 20_000);
}

#[test]
/// Keeps `oversized_microphone_samples_split_into_live_sized_packets` explicit because it sits on microphone capture setup, where host audio stacks expose different source names and latency controls.
/// Inputs are the typed parameters; output is the return value or side effect.
fn oversized_microphone_samples_split_into_live_sized_packets() {
    let bytes_per_frame = MIC_CHANNELS * MIC_SAMPLE_BYTES;
    let hundred_ms_bytes = (MIC_SAMPLE_RATE as usize / 10) * bytes_per_frame;
    let data = vec![7_u8; hundred_ms_bytes];

    let packets = split_audio_sample(1_000_000, &data, mic_packet_target_bytes());

    assert_eq!(packets.len(), 5);
    assert!(packets.iter().all(|packet| packet.id == 0));
    assert!(packets.iter().all(|packet| packet.data.len() == 3_840));
    assert_eq!(packets.front().map(|packet| packet.pts), Some(1_000_000));
    assert_eq!(packets.get(1).map(|packet| packet.pts), Some(1_020_000));
    assert_eq!(packets.back().map(|packet| packet.pts), Some(1_080_000));
}

#[test]
fn trailing_microphone_packet_keeps_remaining_bytes() {
    let bytes_per_frame = MIC_CHANNELS * MIC_SAMPLE_BYTES;
    let forty_five_ms_bytes = ((MIC_SAMPLE_RATE as usize * 45) / 1_000) * bytes_per_frame;
    let data = vec![9_u8; forty_five_ms_bytes];

    let packets = split_audio_sample(5_000, &data, mic_packet_target_bytes());

    assert_eq!(packets.len(), 3);
    assert_eq!(packets[0].data.len(), 3_840);
    assert_eq!(packets[1].data.len(), 3_840);
    assert_eq!(packets[2].data.len(), 960);
    assert_eq!(packets[2].pts, 45_000);
}
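The duration tests above all reduce to one piece of arithmetic: payload microseconds from a PCM frame count. A standalone sketch (not this crate's code; the 48 kHz / 2-channel / 2-bytes-per-sample constants are inferred from the test values) reproducing the numbers those tests assert:

```rust
// Assumed capture format, inferred from the test arithmetic above.
const MIC_SAMPLE_RATE: u64 = 48_000;
const MIC_CHANNELS: usize = 2;
const MIC_SAMPLE_BYTES: usize = 2;

/// Duration in microseconds of a raw PCM payload, derived purely from its
/// frame count (one frame = one sample per channel).
fn pcm_payload_duration_us(bytes: usize) -> u64 {
    let frame_bytes = (MIC_CHANNELS * MIC_SAMPLE_BYTES).max(1);
    let frames = (bytes / frame_bytes) as u64;
    frames * 1_000_000 / MIC_SAMPLE_RATE
}

fn main() {
    // 480 frames = 10 ms at 48 kHz, as `mic_payload_duration_uses_pcm_frame_count` asserts.
    assert_eq!(pcm_payload_duration_us(480 * MIC_CHANNELS * MIC_SAMPLE_BYTES), 10_000);
    // 1_024 frames -> 21_333 us by integer division: the fallback value the
    // zero/implausible-duration tests expect from `buffer_duration_us`.
    assert_eq!(pcm_payload_duration_us(1_024 * MIC_CHANNELS * MIC_SAMPLE_BYTES), 21_333);
    // One 3_840-byte packet (960 frames) spans exactly 20 ms.
    assert_eq!(pcm_payload_duration_us(3_840), 20_000);
}
```

This is why a zero or implausibly tiny GStreamer-reported duration can be safely replaced: the payload length alone determines the real duration.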
@ -17,6 +17,8 @@ pub enum InputRouting {
|
||||
}
|
||||
|
||||
impl InputRouting {
|
||||
/// Keeps `as_env` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn as_env(self) -> &'static str {
|
||||
match self {
|
||||
Self::Local => "0",
|
||||
@ -32,6 +34,8 @@ pub enum ViewMode {
|
||||
}
|
||||
|
||||
impl ViewMode {
|
||||
/// Keeps `as_env` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn as_env(self) -> &'static str {
|
||||
match self {
|
||||
Self::Unified => "unified",
|
||||
@ -47,6 +51,8 @@ pub enum DisplaySurface {
|
||||
}
|
||||
|
||||
impl DisplaySurface {
|
||||
/// Keeps `label` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn label(self) -> &'static str {
|
||||
match self {
|
||||
Self::Preview => "preview",
|
||||
@ -63,6 +69,8 @@ pub enum FeedSourcePreset {
|
||||
}
|
||||
|
||||
impl FeedSourcePreset {
|
||||
/// Keeps `as_id` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn as_id(self) -> &'static str {
|
||||
match self {
|
||||
Self::ThisEye => "self",
|
||||
@ -71,6 +79,8 @@ impl FeedSourcePreset {
|
||||
}
|
||||
}
|
||||
|
||||
/// Keeps `from_id` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn from_id(raw: &str) -> Option<Self> {
|
||||
match raw {
|
||||
"self" => Some(Self::ThisEye),
|
||||
@ -80,6 +90,8 @@ impl FeedSourcePreset {
|
||||
}
|
||||
}
|
||||
|
||||
/// Keeps `label` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn label(self, monitor_id: usize) -> &'static str {
|
||||
match (monitor_id, self) {
|
||||
(_, Self::Off) => "Off",
|
||||
@ -105,6 +117,8 @@ pub enum BreakoutSizePreset {
|
||||
}
|
||||
|
||||
impl BreakoutSizePreset {
|
||||
/// Keeps `as_id` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn as_id(self) -> &'static str {
|
||||
match self {
|
||||
Self::P360 => "360p",
|
||||
@ -118,6 +132,8 @@ impl BreakoutSizePreset {
|
||||
}
|
||||
}
|
||||
|
||||
/// Keeps `from_id` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn from_id(raw: &str) -> Option<Self> {
|
||||
match raw {
|
||||
"360p" => Some(Self::P360),
|
||||
@ -132,6 +148,8 @@ impl BreakoutSizePreset {
|
||||
}
|
||||
}
|
||||
|
||||
/// Keeps `label` explicit because it sits on launcher state/UI wiring, where device choices should remain explainable across refreshes.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
pub fn label(self) -> &'static str {
|
||||
match self {
|
||||
Self::P360 => "360p",
|
||||
@ -159,6 +177,8 @@ pub enum CaptureSizePreset {
}

impl CaptureSizePreset {
    /// Stable string id for this preset; launcher state/UI wiring persists it so device choices stay explainable across refreshes.
    pub fn as_id(self) -> &'static str {
        match self {
            Self::Vga => "vga",
@ -169,6 +189,8 @@ impl CaptureSizePreset {
        }
    }

    /// Parse a stored preset id (including legacy aliases such as "360p") back into a preset.
    pub fn from_id(raw: &str) -> Option<Self> {
        match raw {
            "vga" | "360p" => Some(Self::Vga),
@ -180,6 +202,8 @@ impl CaptureSizePreset {
        }
    }

    /// Human-readable label for the launcher UI.
    pub fn label(self) -> &'static str {
        match self {
            Self::Vga => "VGA",
@ -194,6 +218,8 @@ impl CaptureSizePreset {
        "device H.264 pass-through"
    }

    /// Map this preset onto the native eye source mode it requests.
    pub fn source_mode(self) -> EyeSourceMode {
        match normalize_capture_size_preset(self) {
            Self::P720 => native_eye_source_modes()[1],
@ -202,6 +228,8 @@ impl CaptureSizePreset {
        }
    }

    /// Recover the preset that corresponds to a native source mode.
    pub fn from_source_mode(mode: EyeSourceMode) -> Self {
        match (mode.width, mode.height, mode.fps) {
            (1280, 720, 60) => Self::P720,
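The `as_id`/`from_id` pair above is a round-trip contract: canonical ids survive a save/load cycle, and legacy aliases such as `"360p"` fold into one canonical variant. A minimal sketch of that contract, using a hypothetical two-variant enum rather than the real `CaptureSizePreset`:

```rust
// Hypothetical stand-in for CaptureSizePreset: stable ids round-trip,
// and legacy aliases parse but re-serialize to the canonical id.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Preset {
    Vga,
    P720,
}

impl Preset {
    fn as_id(self) -> &'static str {
        match self {
            Self::Vga => "vga",
            Self::P720 => "720p",
        }
    }

    fn from_id(raw: &str) -> Option<Self> {
        match raw {
            "vga" | "360p" => Some(Self::Vga), // "360p" is a legacy alias
            "720p" => Some(Self::P720),
            _ => None,
        }
    }
}

fn main() {
    // Canonical ids round-trip exactly.
    for preset in [Preset::Vga, Preset::P720] {
        assert_eq!(Preset::from_id(preset.as_id()), Some(preset));
    }
    // Legacy aliases normalize to the canonical id.
    assert_eq!(Preset::from_id("360p").map(Preset::as_id), Some("vga"));
}
```

The property worth preserving when new presets land is `from_id(p.as_id()) == Some(p)` for every variant; aliases only need the forward direction.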
@ -294,240 +322,4 @@ impl Default for CapturePowerStatus {
    }
}

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct CalibrationStatus {
    pub available: bool,
    pub profile: String,
    pub factory_audio_offset_us: i64,
    pub factory_video_offset_us: i64,
    pub default_audio_offset_us: i64,
    pub default_video_offset_us: i64,
    pub active_audio_offset_us: i64,
    pub active_video_offset_us: i64,
    pub source: String,
    pub confidence: String,
    pub updated_at: String,
    pub detail: String,
}

/// Convert relay calibration payloads into visible launcher state.
impl CalibrationStatus {
    /// Convert the relay calibration RPC payload into launcher state.
    #[must_use]
    pub fn from_proto(reply: lesavka_common::lesavka::CalibrationState) -> Self {
        Self {
            available: true,
            profile: reply.profile,
            factory_audio_offset_us: reply.factory_audio_offset_us,
            factory_video_offset_us: reply.factory_video_offset_us,
            default_audio_offset_us: reply.default_audio_offset_us,
            default_video_offset_us: reply.default_video_offset_us,
            active_audio_offset_us: reply.active_audio_offset_us,
            active_video_offset_us: reply.active_video_offset_us,
            source: reply.source,
            confidence: reply.confidence,
            updated_at: reply.updated_at,
            detail: reply.detail,
        }
    }

    #[must_use]
    pub fn unavailable(detail: impl Into<String>) -> Self {
        Self {
            detail: detail.into(),
            ..Self::default()
        }
    }
}

/// Provide factory MJPEG offsets until the relay reports saved calibration.
impl Default for CalibrationStatus {
    /// Start with the current lab-validated MJPEG baseline.
    fn default() -> Self {
        Self {
            available: false,
            profile: "mjpeg".to_string(),
            factory_audio_offset_us: 0,
            factory_video_offset_us: 0,
            default_audio_offset_us: 0,
            default_video_offset_us: 0,
            active_audio_offset_us: 0,
            active_video_offset_us: 0,
            source: "unknown".to_string(),
            confidence: "unknown".to_string(),
            updated_at: String::new(),
            detail: "calibration status unavailable".to_string(),
        }
    }
}

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct UpstreamSyncStatus {
    pub available: bool,
    pub session_id: u64,
    pub phase: String,
    pub latest_camera_remote_pts_us: Option<u64>,
    pub latest_microphone_remote_pts_us: Option<u64>,
    pub last_video_presented_pts_us: Option<u64>,
    pub last_audio_presented_pts_us: Option<u64>,
    pub live_lag_ms: Option<f32>,
    pub planner_skew_ms: Option<f32>,
    pub stale_audio_drops: u64,
    pub stale_video_drops: u64,
    pub skew_video_drops: u64,
    pub freshness_reanchors: u64,
    pub startup_timeouts: u64,
    pub video_freezes: u64,
    pub detail: String,
}

impl UpstreamSyncStatus {
    #[must_use]
    pub fn from_proto(reply: lesavka_common::lesavka::UpstreamSyncState) -> Self {
        Self {
            available: true,
            session_id: reply.session_id,
            phase: reply.phase,
            latest_camera_remote_pts_us: reply.latest_camera_remote_pts_us,
            latest_microphone_remote_pts_us: reply.latest_microphone_remote_pts_us,
            last_video_presented_pts_us: reply.last_video_presented_pts_us,
            last_audio_presented_pts_us: reply.last_audio_presented_pts_us,
            live_lag_ms: reply.live_lag_ms,
            planner_skew_ms: reply.planner_skew_ms,
            stale_audio_drops: reply.stale_audio_drops,
            stale_video_drops: reply.stale_video_drops,
            skew_video_drops: reply.skew_video_drops,
            freshness_reanchors: reply.freshness_reanchors,
            startup_timeouts: reply.startup_timeouts,
            video_freezes: reply.video_freezes,
            detail: reply.last_reason,
        }
    }

    #[must_use]
    pub fn unavailable(detail: impl Into<String>) -> Self {
        Self {
            detail: detail.into(),
            ..Self::default()
        }
    }
}

impl Default for UpstreamSyncStatus {
    fn default() -> Self {
        Self {
            available: false,
            session_id: 0,
            phase: "unknown".to_string(),
            latest_camera_remote_pts_us: None,
            latest_microphone_remote_pts_us: None,
            last_video_presented_pts_us: None,
            last_audio_presented_pts_us: None,
            live_lag_ms: None,
            planner_skew_ms: None,
            stale_audio_drops: 0,
            stale_video_drops: 0,
            skew_video_drops: 0,
            freshness_reanchors: 0,
            startup_timeouts: 0,
            video_freezes: 0,
            detail: "upstream sync planner unavailable".to_string(),
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
pub struct DeviceSelection {
    pub camera: Option<String>,
    pub microphone: Option<String>,
    pub speaker: Option<String>,
    pub keyboard: Option<String>,
    pub mouse: Option<String>,
}

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ChannelSelection {
    pub camera: bool,
    pub microphone: bool,
    pub audio: bool,
}

impl Default for ChannelSelection {
    fn default() -> Self {
        Self {
            camera: false,
            microphone: false,
            audio: true,
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LauncherState {
    pub server_available: bool,
    pub server_version: Option<String>,
    pub server_camera: Option<bool>,
    pub server_microphone: Option<bool>,
    pub server_camera_output: Option<String>,
    pub server_camera_codec: Option<String>,
    pub routing: InputRouting,
    pub view_mode: ViewMode,
    pub displays: [DisplaySurface; 2],
    pub feed_sources: [FeedSourcePreset; 2],
    pub preview_source: PreviewSourceSize,
    pub breakout_limit: PreviewSourceSize,
    pub breakout_display: PreviewSourceSize,
    pub capture_sizes: [CaptureSizePreset; 2],
    pub capture_fps: [u32; 2],
    pub capture_bitrates_kbit: [u32; 2],
    pub breakout_sizes: [BreakoutSizePreset; 2],
    pub devices: DeviceSelection,
    pub camera_quality: Option<CameraMode>,
    pub channels: ChannelSelection,
    pub audio_gain_percent: u32,
    pub mic_gain_percent: u32,
    pub swap_key: String,
    pub swap_key_binding: bool,
    pub swap_key_binding_token: u64,
    pub capture_power: CapturePowerStatus,
    pub calibration: CalibrationStatus,
    pub upstream_sync: UpstreamSyncStatus,
    pub remote_active: bool,
    pub notes: Vec<String>,
}

impl Default for LauncherState {
    fn default() -> Self {
        Self {
            server_available: false,
            server_version: None,
            server_camera: None,
            server_microphone: None,
            server_camera_output: None,
            server_camera_codec: None,
            routing: InputRouting::Remote,
            view_mode: ViewMode::Unified,
            displays: [DisplaySurface::Preview, DisplaySurface::Preview],
            feed_sources: [FeedSourcePreset::ThisEye, FeedSourcePreset::ThisEye],
            preview_source: PreviewSourceSize::default(),
            breakout_limit: PreviewSourceSize::default(),
            breakout_display: PreviewSourceSize::default(),
            capture_sizes: [CaptureSizePreset::P1080, CaptureSizePreset::P1080],
            capture_fps: [60, 60],
            capture_bitrates_kbit: [18_000, 18_000],
            breakout_sizes: [BreakoutSizePreset::Source, BreakoutSizePreset::Source],
            devices: DeviceSelection::default(),
            camera_quality: None,
            channels: ChannelSelection::default(),
            audio_gain_percent: DEFAULT_AUDIO_GAIN_PERCENT,
            mic_gain_percent: DEFAULT_MIC_GAIN_PERCENT,
            swap_key: "pause".to_string(),
            swap_key_binding: false,
            swap_key_binding_token: 0,
            capture_power: CapturePowerStatus::default(),
            calibration: CalibrationStatus::default(),
            upstream_sync: UpstreamSyncStatus::default(),
            remote_active: false,
            notes: Vec::new(),
        }
    }
}
include!("selection_models/sync_and_state_status.rs");

@ -0,0 +1,243 @@
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct CalibrationStatus {
    pub available: bool,
    pub profile: String,
    pub factory_audio_offset_us: i64,
    pub factory_video_offset_us: i64,
    pub default_audio_offset_us: i64,
    pub default_video_offset_us: i64,
    pub active_audio_offset_us: i64,
    pub active_video_offset_us: i64,
    pub source: String,
    pub confidence: String,
    pub updated_at: String,
    pub detail: String,
}

/// Convert relay calibration payloads into visible launcher state.
impl CalibrationStatus {
    /// Convert the relay calibration RPC payload into launcher state.
    #[must_use]
    pub fn from_proto(reply: lesavka_common::lesavka::CalibrationState) -> Self {
        Self {
            available: true,
            profile: reply.profile,
            factory_audio_offset_us: reply.factory_audio_offset_us,
            factory_video_offset_us: reply.factory_video_offset_us,
            default_audio_offset_us: reply.default_audio_offset_us,
            default_video_offset_us: reply.default_video_offset_us,
            active_audio_offset_us: reply.active_audio_offset_us,
            active_video_offset_us: reply.active_video_offset_us,
            source: reply.source,
            confidence: reply.confidence,
            updated_at: reply.updated_at,
            detail: reply.detail,
        }
    }

    #[must_use]
    pub fn unavailable(detail: impl Into<String>) -> Self {
        Self {
            detail: detail.into(),
            ..Self::default()
        }
    }
}

/// Provide factory MJPEG offsets until the relay reports saved calibration.
impl Default for CalibrationStatus {
    /// Start with the current lab-validated MJPEG baseline.
    fn default() -> Self {
        Self {
            available: false,
            profile: "mjpeg".to_string(),
            factory_audio_offset_us: 0,
            factory_video_offset_us: 0,
            default_audio_offset_us: 0,
            default_video_offset_us: 0,
            active_audio_offset_us: 0,
            active_video_offset_us: 0,
            source: "unknown".to_string(),
            confidence: "unknown".to_string(),
            updated_at: String::new(),
            detail: "calibration status unavailable".to_string(),
        }
    }
}
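`CalibrationStatus` carries three offset tiers (factory, default, active) plus an `available` flag. As a sketch of how a launcher might decide which audio offset to surface, assuming active wins once saved calibration exists and the factory baseline applies otherwise (an assumption about UI display logic, not the relay's actual resolution):

```rust
// Sketch only: pick which audio offset to surface in the UI.
// Field names mirror CalibrationStatus; the precedence rule is assumed.
struct Offsets {
    available: bool,
    factory_audio_offset_us: i64,
    active_audio_offset_us: i64,
}

fn displayed_audio_offset_us(status: &Offsets) -> i64 {
    if status.available {
        status.active_audio_offset_us // relay reported saved calibration
    } else {
        status.factory_audio_offset_us // fall back to the factory baseline
    }
}

fn main() {
    let saved = Offsets {
        available: true,
        factory_audio_offset_us: -12_000,
        active_audio_offset_us: -8_500,
    };
    let fresh = Offsets {
        available: false,
        factory_audio_offset_us: -12_000,
        active_audio_offset_us: 0,
    };
    assert_eq!(displayed_audio_offset_us(&saved), -8_500);
    assert_eq!(displayed_audio_offset_us(&fresh), -12_000);
}
```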

#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct UpstreamSyncStatus {
    pub available: bool,
    pub session_id: u64,
    pub phase: String,
    pub latest_camera_remote_pts_us: Option<u64>,
    pub latest_microphone_remote_pts_us: Option<u64>,
    pub last_video_presented_pts_us: Option<u64>,
    pub last_audio_presented_pts_us: Option<u64>,
    pub live_lag_ms: Option<f32>,
    pub planner_skew_ms: Option<f32>,
    pub stale_audio_drops: u64,
    pub stale_video_drops: u64,
    pub skew_video_drops: u64,
    pub freshness_reanchors: u64,
    pub startup_timeouts: u64,
    pub video_freezes: u64,
    pub detail: String,
}

impl UpstreamSyncStatus {
    /// Convert the relay upstream-sync RPC payload into launcher state.
    #[must_use]
    pub fn from_proto(reply: lesavka_common::lesavka::UpstreamSyncState) -> Self {
        Self {
            available: true,
            session_id: reply.session_id,
            phase: reply.phase,
            latest_camera_remote_pts_us: reply.latest_camera_remote_pts_us,
            latest_microphone_remote_pts_us: reply.latest_microphone_remote_pts_us,
            last_video_presented_pts_us: reply.last_video_presented_pts_us,
            last_audio_presented_pts_us: reply.last_audio_presented_pts_us,
            live_lag_ms: reply.live_lag_ms,
            planner_skew_ms: reply.planner_skew_ms,
            stale_audio_drops: reply.stale_audio_drops,
            stale_video_drops: reply.stale_video_drops,
            skew_video_drops: reply.skew_video_drops,
            freshness_reanchors: reply.freshness_reanchors,
            startup_timeouts: reply.startup_timeouts,
            video_freezes: reply.video_freezes,
            detail: reply.last_reason,
        }
    }

    #[must_use]
    pub fn unavailable(detail: impl Into<String>) -> Self {
        Self {
            detail: detail.into(),
            ..Self::default()
        }
    }
}

impl Default for UpstreamSyncStatus {
    /// Start with planner-unavailable placeholders until the relay reports sync state.
    fn default() -> Self {
        Self {
            available: false,
            session_id: 0,
            phase: "unknown".to_string(),
            latest_camera_remote_pts_us: None,
            latest_microphone_remote_pts_us: None,
            last_video_presented_pts_us: None,
            last_audio_presented_pts_us: None,
            live_lag_ms: None,
            planner_skew_ms: None,
            stale_audio_drops: 0,
            stale_video_drops: 0,
            skew_video_drops: 0,
            freshness_reanchors: 0,
            startup_timeouts: 0,
            video_freezes: 0,
            detail: "upstream sync planner unavailable".to_string(),
        }
    }
}
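`live_lag_ms` and `planner_skew_ms` are computed on the relay; the launcher only displays them. As a rough mental model for readers (an assumption for intuition, not the planner's exact formula), live lag is the gap between the newest remote capture pts and the last presented pts:

```rust
// Assumed model only: lag is how far playout trails the newest capture
// timestamp, converted from microseconds to milliseconds. Missing
// observations propagate as None, matching the Option fields above.
fn live_lag_ms(
    latest_remote_pts_us: Option<u64>,
    last_presented_pts_us: Option<u64>,
) -> Option<f32> {
    let latest = latest_remote_pts_us?;
    let presented = last_presented_pts_us?;
    Some(latest.saturating_sub(presented) as f32 / 1_000.0)
}

fn main() {
    assert_eq!(live_lag_ms(Some(2_500_000), Some(2_450_000)), Some(50.0));
    assert_eq!(live_lag_ms(None, Some(1)), None);
}
```

Under the v2 contract, a seconds-scale value here is exactly the regression the stale-bundle drops are meant to prevent.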

#[derive(Debug, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
pub struct DeviceSelection {
    pub camera: Option<String>,
    pub microphone: Option<String>,
    pub speaker: Option<String>,
    pub keyboard: Option<String>,
    pub mouse: Option<String>,
}

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ChannelSelection {
    pub camera: bool,
    pub microphone: bool,
    pub audio: bool,
}

impl Default for ChannelSelection {
    fn default() -> Self {
        Self {
            camera: false,
            microphone: false,
            audio: true,
        }
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LauncherState {
    pub server_available: bool,
    pub server_version: Option<String>,
    pub server_camera: Option<bool>,
    pub server_microphone: Option<bool>,
    pub server_camera_output: Option<String>,
    pub server_camera_codec: Option<String>,
    pub routing: InputRouting,
    pub view_mode: ViewMode,
    pub displays: [DisplaySurface; 2],
    pub feed_sources: [FeedSourcePreset; 2],
    pub preview_source: PreviewSourceSize,
    pub breakout_limit: PreviewSourceSize,
    pub breakout_display: PreviewSourceSize,
    pub capture_sizes: [CaptureSizePreset; 2],
    pub capture_fps: [u32; 2],
    pub capture_bitrates_kbit: [u32; 2],
    pub breakout_sizes: [BreakoutSizePreset; 2],
    pub devices: DeviceSelection,
    pub camera_quality: Option<CameraMode>,
    pub channels: ChannelSelection,
    pub audio_gain_percent: u32,
    pub mic_gain_percent: u32,
    pub swap_key: String,
    pub swap_key_binding: bool,
    pub swap_key_binding_token: u64,
    pub capture_power: CapturePowerStatus,
    pub calibration: CalibrationStatus,
    pub upstream_sync: UpstreamSyncStatus,
    pub remote_active: bool,
    pub notes: Vec<String>,
}

impl Default for LauncherState {
    /// Conservative launcher defaults shown before the first server handshake completes.
    fn default() -> Self {
        Self {
            server_available: false,
            server_version: None,
            server_camera: None,
            server_microphone: None,
            server_camera_output: None,
            server_camera_codec: None,
            routing: InputRouting::Remote,
            view_mode: ViewMode::Unified,
            displays: [DisplaySurface::Preview, DisplaySurface::Preview],
            feed_sources: [FeedSourcePreset::ThisEye, FeedSourcePreset::ThisEye],
            preview_source: PreviewSourceSize::default(),
            breakout_limit: PreviewSourceSize::default(),
            breakout_display: PreviewSourceSize::default(),
            capture_sizes: [CaptureSizePreset::P1080, CaptureSizePreset::P1080],
            capture_fps: [60, 60],
            capture_bitrates_kbit: [18_000, 18_000],
            breakout_sizes: [BreakoutSizePreset::Source, BreakoutSizePreset::Source],
            devices: DeviceSelection::default(),
            camera_quality: None,
            channels: ChannelSelection::default(),
            audio_gain_percent: DEFAULT_AUDIO_GAIN_PERCENT,
            mic_gain_percent: DEFAULT_MIC_GAIN_PERCENT,
            swap_key: "pause".to_string(),
            swap_key_binding: false,
            swap_key_binding_token: 0,
            capture_power: CapturePowerStatus::default(),
            calibration: CalibrationStatus::default(),
            upstream_sync: UpstreamSyncStatus::default(),
            remote_active: false,
            notes: Vec::new(),
        }
    }
}
@ -8,6 +8,8 @@ fn compact_device_name(value: &str) -> String {
    trimmed.rsplit('/').next().unwrap_or(trimmed).to_string()
}

/// Uppercase the first character of a value for display.
pub fn capitalize(value: &str) -> String {
    let mut chars = value.chars();
    match chars.next() {
@ -16,6 +18,8 @@ pub fn capitalize(value: &str) -> String {
    }
}

/// Read the active selection out of a combo box, if one is set.
pub fn selected_combo_value(combo: &gtk::ComboBoxText) -> Option<String> {
    combo
        .active_id()
@ -35,6 +39,8 @@ pub fn selected_combo_value(combo: &gtk::ComboBoxText) -> Option<String> {
        })
}

/// Read the server address from the entry, falling back when it is blank.
pub fn selected_server_addr(entry: &gtk::Entry, fallback: &str) -> String {
    let current = entry.text();
    let trimmed = current.trim();
@ -49,6 +55,8 @@ pub fn selected_server_addr(entry: &gtk::Entry, fallback: &str) -> String {
    }
}

/// Normalize a user-entered server address before dialing.
pub fn normalize_server_addr(raw: &str) -> String {
    let trimmed = raw.trim();
    if trimmed.is_empty() || trimmed.contains("://") {
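The hunk above truncates before the interesting branches of `normalize_server_addr`. A plausible sketch of a normalizer in this shape, where the fallback scheme and the empty-input handling are hypothetical (the real defaults are not visible in the diff):

```rust
// Hypothetical sketch: pass through anything that already carries a
// scheme, otherwise prefix one. The "http://" default is an assumption.
fn normalize_server_addr(raw: &str) -> String {
    let trimmed = raw.trim();
    if trimmed.is_empty() || trimmed.contains("://") {
        return trimmed.to_string();
    }
    format!("http://{trimmed}")
}

fn main() {
    assert_eq!(normalize_server_addr("  grpc://host:50051 "), "grpc://host:50051");
    assert_eq!(normalize_server_addr("host:50051"), "http://host:50051");
    assert_eq!(normalize_server_addr("   "), "");
}
```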
@ -132,6 +140,8 @@ pub fn write_mic_gain_request(path: &Path, gain_percent: u32) -> Result<()> {
    Ok(())
}

/// Persist the launcher's media control choices where the relay can pick them up.
pub fn write_media_control_request(path: &Path, state: &LauncherState) -> Result<()> {
    crate::live_media_control::write_media_control_request(
        path,
@ -156,6 +166,8 @@ pub fn write_input_toggle_key_request(path: &Path, swap_key: &str) -> Result<()>
    Ok(())
}

/// Read the persisted input routing state, if the file exists and parses.
pub fn read_input_routing_state(path: &Path) -> Option<InputRouting> {
    let raw = std::fs::read_to_string(path).ok()?;
    match raw
@ -177,6 +189,8 @@ fn control_request_nonce() -> u128 {
        .unwrap_or_default()
}

/// Stable on-disk name for an input routing mode.
pub fn routing_name(routing: InputRouting) -> &'static str {
    match routing {
        InputRouting::Local => "local",
@ -193,6 +207,8 @@ pub fn path_marker(path: &Path) -> u128 {
        .unwrap_or_default()
}

/// Activate the wanted id in a combo box, falling back to "auto"/"all" when it is absent.
pub fn set_combo_active_text(combo: &gtk::ComboBoxText, wanted: Option<&str>) {
    let wanted = wanted.unwrap_or("auto");
    if combo.set_active_id(Some(wanted)) {
@ -204,6 +220,8 @@ pub fn set_combo_active_text(combo: &gtk::ComboBoxText, wanted: Option<&str>) {
    let _ = combo.set_active_id(Some("all"));
}

/// Human-readable label for the configured input toggle key.
pub fn toggle_key_label(raw: &str) -> String {
    match raw.trim().to_ascii_lowercase().as_str() {
        "" | "off" | "none" | "disabled" => "Disabled".to_string(),
@ -230,6 +248,8 @@ pub fn toggle_key_label(raw: &str) -> String {
    }
}

/// Translate a pressed GDK key into the stored swap-key name, if it is supported.
pub fn capture_swap_key(key: gtk::gdk::Key) -> Option<String> {
    let normalized_name = key.name()?.to_string().to_ascii_lowercase();
    match normalized_name.as_str() {

@ -29,6 +29,8 @@ impl MediaDeviceChoice {
    }

    /// Resolve this choice against an inherited fallback device name.
    #[must_use]
    pub fn resolve(&self, fallback: Option<&str>) -> Option<String> {
        match self {
            Self::Inherit => fallback.map(str::to_string),
@ -182,6 +184,8 @@ fn parse_media_control_state(raw: &str) -> Option<MediaControlState> {
    })
}

/// Encode a device choice for the on-disk control file.
fn encode_choice(choice: &MediaDeviceChoice) -> String {
    match choice {
        MediaDeviceChoice::Inherit => "inherit".to_string(),
@ -190,6 +194,8 @@ fn encode_choice(choice: &MediaDeviceChoice) -> String {
    }
}

/// Parse a device choice from the control file, treating blank or "auto" as inherit.
fn parse_choice(value: &str) -> Option<MediaDeviceChoice> {
    let value = value.trim();
    if value.is_empty() || value.eq_ignore_ascii_case("auto") {
@ -206,6 +212,8 @@ fn parse_choice(value: &str) -> Option<MediaDeviceChoice> {
    Some(MediaDeviceChoice::from_selection(Some(value.to_string())))
}

/// Parse a lenient boolean flag from the control file.
fn parse_bool_flag(value: &str) -> Option<bool> {
    match value.trim().to_ascii_lowercase().as_str() {
        "1" | "true" | "on" | "yes" => Some(true),
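The hunk shows only the truthy arms of `parse_bool_flag`; a mirrored falsy set and a `None` fallback are the natural completion, sketched here as an assumption about the truncated remainder:

```rust
// The "0"/"false"/"off"/"no" arms and the None fallback are assumed;
// only the truthy arms are visible in the diff.
fn parse_bool_flag(value: &str) -> Option<bool> {
    match value.trim().to_ascii_lowercase().as_str() {
        "1" | "true" | "on" | "yes" => Some(true),
        "0" | "false" | "off" | "no" => Some(false),
        _ => None, // unrecognized values leave the flag unset
    }
}

fn main() {
    assert_eq!(parse_bool_flag(" YES "), Some(true));
    assert_eq!(parse_bool_flag("off"), Some(false));
    assert_eq!(parse_bool_flag("maybe"), None);
}
```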
@ -238,6 +246,8 @@ mod tests {
    }

    /// Media control state round-trips live device choices through the file format.
    #[test]
    fn parses_media_control_state_with_live_device_choices() {
        let state = MediaControlState::with_devices(
            true,
@ -269,6 +279,8 @@ mod tests {
    }

    /// Live media controls pick up edits after the control file changes on disk.
    #[test]
    fn live_media_controls_refresh_after_file_changes() {
        let dir = tempfile::tempdir().expect("tempdir");
        let path = dir.path().join("media.control");
@ -313,6 +325,8 @@ mod tests {
    }

    /// A poisoned lock degrades to the safe all-enabled state instead of panicking.
    #[test]
    fn refresh_falls_back_to_all_enabled_if_lock_is_poisoned() {
        let controls = LiveMediaControls {
            path: PathBuf::from("/definitely/not/a/real/lesavka-media.control"),

@ -89,6 +89,8 @@ pub fn analyze_capture(
    }
}

/// Trim detected pulse segments to the configured analysis window.
fn filter_segments_to_analysis_window(
    segments: Vec<PulseSegment>,
    options: &SyncAnalysisOptions,
@ -128,6 +130,8 @@ fn filter_segments_to_analysis_window(
    Ok(filtered)
}

/// Reconcile probe timestamps with the decoded frame count, synthesizing a timeline when metadata disagrees.
fn reconcile_video_timestamps(timestamps: Vec<f64>, frame_count: usize) -> Result<Vec<f64>> {
    if frame_count == 0 {
        bail!("capture did not contain any decoded video brightness frames");
@ -168,6 +172,8 @@ mod tests {
    use crate::sync_probe::analyze::reconcile_video_timestamps;

    /// End-to-end analysis succeeds against fake media tools.
    #[test]
    fn analyze_capture_runs_against_fake_media_tools() {
        let timestamps = (0..15).map(|index| index as f64 / 10.0).collect::<Vec<_>>();
        let brightness = timestamps
@ -201,6 +207,8 @@ mod tests {
    }

    /// Timestamps are synthesized when MJPEG metadata over-reports frames.
    #[test]
    fn analyze_capture_synthesizes_timestamps_when_mjpeg_metadata_overreports_frames() {
        let metadata_timestamps = (0..301)
            .map(|index| index as f64 * 0.004)
@ -232,6 +240,8 @@ mod tests {
    }

    /// Mirrored video events are told apart by their color codes.
    #[test]
    fn analyze_capture_uses_color_codes_for_mirrored_video_events() {
        let timestamps = (0..45).map(|index| index as f64 * 0.1).collect::<Vec<_>>();
        let colors = timestamps
@ -277,6 +287,8 @@ mod tests {
        assert_eq!(reconciled, vec![0.0, 0.5, 1.0]);
    }

    /// Mix a sine tone into the sample buffer for synthetic audio fixtures.
    fn add_sine(
        samples: &mut [i16],
        sample_rate_hz: u32,

@ -24,6 +24,8 @@ struct ProbeFrameEntry {
    best_effort_timestamp_time: Option<String>,
}

/// Extract per-frame timestamps from the capture via ffprobe.
pub(super) fn extract_video_timestamps(capture_path: &Path) -> Result<Vec<f64>> {
    let output = run_command(
        Command::new("ffprobe")
@ -54,6 +56,8 @@ pub(super) fn extract_video_timestamps(capture_path: &Path) -> Result<Vec<f64>>
    Ok(timestamps)
}

/// Decode the capture and reduce each frame to a mean brightness value via ffmpeg.
pub(super) fn extract_video_brightness(capture_path: &Path) -> Result<Vec<u8>> {
    let output = run_command(
        Command::new("ffmpeg")
@ -97,6 +101,8 @@ pub(super) fn extract_video_brightness(capture_path: &Path) -> Result<Vec<u8>> {
    ))
}

/// Decode the capture and reduce each frame to an average RGB color via ffmpeg.
pub(super) fn extract_video_colors(capture_path: &Path) -> Result<Vec<VideoColorFrame>> {
    let output = run_command(
        Command::new("ffmpeg")
@ -140,6 +146,8 @@ pub(super) fn extract_video_colors(capture_path: &Path) -> Result<Vec<VideoColor
    ))
}

/// Decode the capture's audio track into signed 16-bit samples via ffmpeg.
pub(super) fn extract_audio_samples(capture_path: &Path) -> Result<Vec<i16>> {
    let output = run_command(
        Command::new("ffmpeg")
@ -170,6 +178,8 @@ pub(super) fn extract_audio_samples(capture_path: &Path) -> Result<Vec<i16>> {
        .collect())
}

/// Run an external media tool, surfacing a description of the failure on error.
pub(super) fn run_command(command: &mut Command, description: &str) -> Result<Vec<u8>> {
    let output = command
        .output()
@ -205,6 +215,8 @@ fn summarize_rgb_frames_with_adaptive_roi<'a>(
        .collect()
}

/// Mean brightness of a frame, optionally restricted to a region-of-interest mask.
fn summarize_frame_brightness(frame: &[u8], mask: Option<&[bool]>) -> u8 {
    let mut sum = 0u64;
    let mut selected = 0u64;
@ -222,6 +234,8 @@ fn summarize_frame_brightness(frame: &[u8], mask: Option<&[bool]>) -> u8 {
    mean.min(u64::from(u8::MAX)) as u8
}

/// Mean RGB color of a frame, optionally restricted to a region-of-interest mask.
fn summarize_frame_color(frame: &[u8], mask: Option<&[bool]>) -> VideoColorFrame {
    let mut r_sum = 0u64;
    let mut g_sum = 0u64;
@ -274,6 +288,8 @@ fn summarize_frame_color(frame: &[u8], mask: Option<&[bool]>) -> VideoColorFrame
    }
}

/// Build an adaptive region-of-interest mask from inter-frame gray variation.
fn adaptive_gray_roi_mask(frames: &[&[u8]], pixel_count: usize) -> Option<Vec<bool>> {
    if frames.len() < 2 || pixel_count == 0 {
        return None;
@ -292,6 +308,8 @@ fn adaptive_gray_roi_mask(frames: &[&[u8]], pixel_count: usize) -> Option<Vec<bo
    adaptive_roi_mask_from_scores(&scores, MIN_GRAY_ROI_SCORE)
}
|
||||
/// Keeps `adaptive_rgb_roi_mask` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn adaptive_rgb_roi_mask(frames: &[&[u8]], pixel_count: usize) -> Option<Vec<bool>> {
|
||||
if frames.len() < 2 || pixel_count == 0 {
|
||||
return None;
|
||||
@ -335,6 +353,8 @@ fn adaptive_rgb_roi_mask(frames: &[&[u8]], pixel_count: usize) -> Option<Vec<boo
|
||||
adaptive_roi_mask_from_scores(&scores, MIN_RGB_ROI_SCORE)
|
||||
}
|
||||
|
||||
/// Keeps `adaptive_roi_mask_from_scores` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn adaptive_roi_mask_from_scores(scores: &[f64], min_score: f64) -> Option<Vec<bool>> {
|
||||
let max_score = scores.iter().copied().fold(0.0_f64, f64::max);
|
||||
if max_score < min_score {
|
||||
@ -366,6 +386,8 @@ fn adaptive_roi_mask_from_scores(scores: &[f64], min_score: f64) -> Option<Vec<b
|
||||
(selected >= MIN_ADAPTIVE_ROI_PIXELS).then_some(mask)
|
||||
}
|
||||
|
||||
/// Keeps `retain_largest_connected_roi` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn retain_largest_connected_roi(mask: Vec<bool>) -> Vec<bool> {
|
||||
let side = (mask.len() as f64).sqrt().round() as usize;
|
||||
if side == 0 || side * side != mask.len() {
|
||||
@ -423,6 +445,8 @@ fn luma_u8(r: u8, g: u8, b: u8) -> u8 {
|
||||
((u16::from(r) * 77 + u16::from(g) * 150 + u16::from(b) * 29) / 256) as u8
|
||||
}
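The `luma_u8` weights above (77, 150, and 29 out of 256) are a fixed-point approximation of the Rec. 601 luma coefficients (0.299, 0.587, 0.114). A minimal standalone sketch of the same conversion, with illustrative sample values:

```rust
/// Integer approximation of Rec. 601 luma: Y ~= 0.299 R + 0.587 G + 0.114 B,
/// scaled by 256 so the whole conversion stays in 16-bit arithmetic.
fn luma_u8(r: u8, g: u8, b: u8) -> u8 {
    ((u16::from(r) * 77 + u16::from(g) * 150 + u16::from(b) * 29) / 256) as u8
}

fn main() {
    // Black maps to zero, white to full brightness (77 + 150 + 29 = 256).
    assert_eq!(luma_u8(0, 0, 0), 0);
    assert_eq!(luma_u8(255, 255, 255), 255);
    // Green contributes more to perceived brightness than red or blue.
    assert!(luma_u8(0, 255, 0) > luma_u8(255, 0, 0));
    assert!(luma_u8(255, 0, 0) > luma_u8(0, 0, 255));
}
```

Staying in integer arithmetic keeps the per-pixel cost low for the ROI scans that call this helper on every frame.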

/// Keeps `dark_roi_factor` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn dark_roi_factor(min_luma: u8) -> f64 {
    match min_luma {
        0..=80 => 1.0,
@@ -432,6 +456,8 @@ fn dark_roi_factor(min_luma: u8) -> f64 {
    }
}

/// Keeps `palette_match_score` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn palette_match_score(r: u8, g: u8, b: u8) -> f64 {
    let max = r.max(g).max(b);
    let min = r.min(g).min(b);
@@ -470,207 +496,5 @@ fn palette_match_score(r: u8, g: u8, b: u8) -> f64 {
    }
}

#[cfg(test)]
mod tests {
    use super::{
        extract_audio_samples, extract_video_brightness, extract_video_colors,
        extract_video_timestamps, run_command,
    };
    use crate::sync_probe::analyze::test_support::{
        audio_samples_to_bytes, frame_json, thumbnail_rgb_video_bytes, thumbnail_video_bytes,
        with_fake_media_tools,
    };
    use std::process::Command;

    #[test]
    fn extract_video_timestamps_reads_fake_ffprobe_output() {
        let timestamps = vec![0.0, 0.5, 1.0];
        with_fake_media_tools(
            &frame_json(&timestamps),
            &[1, 2, 3],
            &[1, 0],
            |capture_path| {
                let parsed = extract_video_timestamps(capture_path).expect("video timestamps");
                assert_eq!(parsed, timestamps);
            },
        );
    }

    #[test]
    fn extract_video_timestamps_rejects_empty_and_invalid_outputs() {
        with_fake_media_tools(br#"{"frames":[]}"#, &[1], &[1, 0], |capture_path| {
            let error = extract_video_timestamps(capture_path).expect_err("empty frames fail");
            assert!(
                error
                    .to_string()
                    .contains("did not return any video frame timestamps")
            );
        });

        with_fake_media_tools(
            br#"{"frames":[{"best_effort_timestamp_time":"bad"}]}"#,
            &[1],
            &[1, 0],
            |capture_path| {
                let error =
                    extract_video_timestamps(capture_path).expect_err("invalid timestamp fails");
                assert!(error.to_string().contains("parsing frame timestamp"));
            },
        );
    }

    #[test]
    fn extract_video_brightness_reads_fake_ffmpeg_output() {
        let brightness = vec![5u8, 100, 250];
        with_fake_media_tools(
            br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
            &thumbnail_video_bytes(&brightness),
            &[1, 0],
            |capture_path| {
                let parsed = extract_video_brightness(capture_path).expect("video brightness");
                assert_eq!(parsed, brightness);
            },
        );
    }

    #[test]
    fn extract_video_brightness_rejects_empty_output() {
        with_fake_media_tools(
            br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
            &[],
            &[1, 0],
            |capture_path| {
                let error = extract_video_brightness(capture_path).expect_err("empty brightness");
                assert!(
                    error
                        .to_string()
                        .contains("did not emit any video brightness data")
                );
            },
        );
    }

    #[test]
    fn extract_video_brightness_uses_full_frame_thumbnail_average() {
        let brightness = vec![20u8, 45, 20];
        with_fake_media_tools(
            &frame_json(&[0.0, 0.1, 0.2]),
            &thumbnail_video_bytes(&brightness),
            &[1, 0],
            |capture_path| {
                let parsed = extract_video_brightness(capture_path).expect("video brightness");
                assert_eq!(parsed, brightness);
            },
        );
    }

    #[test]
    fn extract_video_brightness_rejects_truncated_frame_data() {
        with_fake_media_tools(&frame_json(&[0.0]), &[1, 2, 3], &[1, 0], |capture_path| {
            let error = extract_video_brightness(capture_path).expect_err("truncated frame bytes");
            assert!(error.to_string().contains("not divisible"));
        });
    }

    #[test]
    fn extract_video_colors_reads_fake_ffmpeg_output() {
        let colors = vec![(255, 45, 45), (0, 230, 118), (41, 121, 255)];
        with_fake_media_tools(
            &frame_json(&[0.0, 0.1, 0.2]),
            &thumbnail_rgb_video_bytes(&colors),
            &[1, 0],
            |capture_path| {
                let parsed = extract_video_colors(capture_path).expect("video colors");
                assert_eq!(parsed[0].r, 255);
                assert_eq!(parsed[1].g, 230);
                assert_eq!(parsed[2].b, 255);
            },
        );
    }

    #[test]
    fn extract_video_colors_tracks_small_flashing_screen_region() {
        const SIDE: usize = 64;
        let mut bytes = Vec::new();
        for color in [(24, 28, 32), (255, 45, 45), (24, 28, 32), (0, 230, 118)] {
            let mut frame = vec![34u8; SIDE * SIDE * 3];
            for y in 6..18 {
                for x in 40..54 {
                    let offset = (y * SIDE + x) * 3;
                    frame[offset] = color.0;
                    frame[offset + 1] = color.1;
                    frame[offset + 2] = color.2;
                }
            }
            bytes.extend_from_slice(&frame);
        }

        with_fake_media_tools(
            &frame_json(&[0.0, 0.1, 0.2, 0.3]),
            &bytes,
            &[1, 0],
            |capture_path| {
                let parsed = extract_video_colors(capture_path).expect("video colors");
                assert!(
                    parsed[1].r > 220 && parsed[1].g < 80,
                    "red pulse should dominate selected ROI: {:?}",
                    parsed[1]
                );
                assert!(
                    parsed[3].g > 190 && parsed[3].r < 60,
                    "green pulse should dominate selected ROI: {:?}",
                    parsed[3]
                );
            },
        );
    }

    #[test]
    fn extract_audio_samples_reads_fake_ffmpeg_output() {
        let samples = vec![1i16, -2, 32_000];
        with_fake_media_tools(
            br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
            &[1],
            &audio_samples_to_bytes(&samples),
            |capture_path| {
                let parsed = extract_audio_samples(capture_path).expect("audio samples");
                assert_eq!(parsed, samples);
            },
        );
    }

    #[test]
    fn extract_audio_samples_rejects_too_short_output() {
        with_fake_media_tools(
            br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
            &[1],
            &[7],
            |capture_path| {
                let error = extract_audio_samples(capture_path).expect_err("short audio");
                assert!(
                    error
                        .to_string()
                        .contains("did not emit enough audio data to analyze")
                );
            },
        );
    }

    #[test]
    fn run_command_reports_success_and_failure() {
        let output = run_command(
            Command::new("sh").arg("-c").arg("printf 'ok'"),
            "success command",
        )
        .expect("success output");
        assert_eq!(output, b"ok");

        let error = run_command(
            Command::new("sh")
                .arg("-c")
                .arg("printf 'boom' >&2; exit 7"),
            "failing command",
        )
        .expect_err("failing command should error");
        assert!(error.to_string().contains("failing command failed: boom"));
    }
}
#[path = "media_extract/tests/mod.rs"]
mod tests;

222 client/src/sync_probe/analyze/media_extract/tests/mod.rs Normal file
@@ -0,0 +1,222 @@
use super::{
    extract_audio_samples, extract_video_brightness, extract_video_colors,
    extract_video_timestamps, run_command,
};
use crate::sync_probe::analyze::test_support::{
    audio_samples_to_bytes, frame_json, thumbnail_rgb_video_bytes, thumbnail_video_bytes,
    with_fake_media_tools,
};
use std::process::Command;

#[test]
/// Keeps `extract_video_timestamps_reads_fake_ffprobe_output` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_video_timestamps_reads_fake_ffprobe_output() {
    let timestamps = vec![0.0, 0.5, 1.0];
    with_fake_media_tools(
        &frame_json(&timestamps),
        &[1, 2, 3],
        &[1, 0],
        |capture_path| {
            let parsed = extract_video_timestamps(capture_path).expect("video timestamps");
            assert_eq!(parsed, timestamps);
        },
    );
}

#[test]
/// Keeps `extract_video_timestamps_rejects_empty_and_invalid_outputs` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_video_timestamps_rejects_empty_and_invalid_outputs() {
    with_fake_media_tools(br#"{"frames":[]}"#, &[1], &[1, 0], |capture_path| {
        let error = extract_video_timestamps(capture_path).expect_err("empty frames fail");
        assert!(
            error
                .to_string()
                .contains("did not return any video frame timestamps")
        );
    });

    with_fake_media_tools(
        br#"{"frames":[{"best_effort_timestamp_time":"bad"}]}"#,
        &[1],
        &[1, 0],
        |capture_path| {
            let error =
                extract_video_timestamps(capture_path).expect_err("invalid timestamp fails");
            assert!(error.to_string().contains("parsing frame timestamp"));
        },
    );
}

#[test]
/// Keeps `extract_video_brightness_reads_fake_ffmpeg_output` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_video_brightness_reads_fake_ffmpeg_output() {
    let brightness = vec![5u8, 100, 250];
    with_fake_media_tools(
        br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
        &thumbnail_video_bytes(&brightness),
        &[1, 0],
        |capture_path| {
            let parsed = extract_video_brightness(capture_path).expect("video brightness");
            assert_eq!(parsed, brightness);
        },
    );
}

#[test]
/// Keeps `extract_video_brightness_rejects_empty_output` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_video_brightness_rejects_empty_output() {
    with_fake_media_tools(
        br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
        &[],
        &[1, 0],
        |capture_path| {
            let error = extract_video_brightness(capture_path).expect_err("empty brightness");
            assert!(
                error
                    .to_string()
                    .contains("did not emit any video brightness data")
            );
        },
    );
}

#[test]
/// Keeps `extract_video_brightness_uses_full_frame_thumbnail_average` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_video_brightness_uses_full_frame_thumbnail_average() {
    let brightness = vec![20u8, 45, 20];
    with_fake_media_tools(
        &frame_json(&[0.0, 0.1, 0.2]),
        &thumbnail_video_bytes(&brightness),
        &[1, 0],
        |capture_path| {
            let parsed = extract_video_brightness(capture_path).expect("video brightness");
            assert_eq!(parsed, brightness);
        },
    );
}

#[test]
fn extract_video_brightness_rejects_truncated_frame_data() {
    with_fake_media_tools(&frame_json(&[0.0]), &[1, 2, 3], &[1, 0], |capture_path| {
        let error = extract_video_brightness(capture_path).expect_err("truncated frame bytes");
        assert!(error.to_string().contains("not divisible"));
    });
}

#[test]
/// Keeps `extract_video_colors_reads_fake_ffmpeg_output` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_video_colors_reads_fake_ffmpeg_output() {
    let colors = vec![(255, 45, 45), (0, 230, 118), (41, 121, 255)];
    with_fake_media_tools(
        &frame_json(&[0.0, 0.1, 0.2]),
        &thumbnail_rgb_video_bytes(&colors),
        &[1, 0],
        |capture_path| {
            let parsed = extract_video_colors(capture_path).expect("video colors");
            assert_eq!(parsed[0].r, 255);
            assert_eq!(parsed[1].g, 230);
            assert_eq!(parsed[2].b, 255);
        },
    );
}

#[test]
/// Keeps `extract_video_colors_tracks_small_flashing_screen_region` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_video_colors_tracks_small_flashing_screen_region() {
    const SIDE: usize = 64;
    let mut bytes = Vec::new();
    for color in [(24, 28, 32), (255, 45, 45), (24, 28, 32), (0, 230, 118)] {
        let mut frame = vec![34u8; SIDE * SIDE * 3];
        for y in 6..18 {
            for x in 40..54 {
                let offset = (y * SIDE + x) * 3;
                frame[offset] = color.0;
                frame[offset + 1] = color.1;
                frame[offset + 2] = color.2;
            }
        }
        bytes.extend_from_slice(&frame);
    }

    with_fake_media_tools(
        &frame_json(&[0.0, 0.1, 0.2, 0.3]),
        &bytes,
        &[1, 0],
        |capture_path| {
            let parsed = extract_video_colors(capture_path).expect("video colors");
            assert!(
                parsed[1].r > 220 && parsed[1].g < 80,
                "red pulse should dominate selected ROI: {:?}",
                parsed[1]
            );
            assert!(
                parsed[3].g > 190 && parsed[3].r < 60,
                "green pulse should dominate selected ROI: {:?}",
                parsed[3]
            );
        },
    );
}

#[test]
/// Keeps `extract_audio_samples_reads_fake_ffmpeg_output` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_audio_samples_reads_fake_ffmpeg_output() {
    let samples = vec![1i16, -2, 32_000];
    with_fake_media_tools(
        br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
        &[1],
        &audio_samples_to_bytes(&samples),
        |capture_path| {
            let parsed = extract_audio_samples(capture_path).expect("audio samples");
            assert_eq!(parsed, samples);
        },
    );
}

#[test]
/// Keeps `extract_audio_samples_rejects_too_short_output` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn extract_audio_samples_rejects_too_short_output() {
    with_fake_media_tools(
        br#"{"frames":[{"best_effort_timestamp_time":"0.0"}]}"#,
        &[1],
        &[7],
        |capture_path| {
            let error = extract_audio_samples(capture_path).expect_err("short audio");
            assert!(
                error
                    .to_string()
                    .contains("did not emit enough audio data to analyze")
            );
        },
    );
}

#[test]
/// Keeps `run_command_reports_success_and_failure` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn run_command_reports_success_and_failure() {
    let output = run_command(
        Command::new("sh").arg("-c").arg("printf 'ok'"),
        "success command",
    )
    .expect("success output");
    assert_eq!(output, b"ok");

    let error = run_command(
        Command::new("sh")
            .arg("-c")
            .arg("printf 'boom' >&2; exit 7"),
        "failing command",
    )
    .expect_err("failing command should error");
    assert!(error.to_string().contains("failing command failed: boom"));
}
@@ -45,727 +45,8 @@ pub(crate) struct PulseSegment {
    pub duration_s: f64,
}

pub fn detect_video_onsets(timestamps_s: &[f64], brightness: &[u8]) -> Result<Vec<f64>> {
    Ok(detect_video_segments(timestamps_s, brightness)?
        .into_iter()
        .map(|segment| segment.start_s)
        .collect())
}

pub(crate) fn detect_video_segments(
    timestamps_s: &[f64],
    brightness: &[u8],
) -> Result<Vec<PulseSegment>> {
    let frame_count = timestamps_s.len().min(brightness.len());
    if frame_count == 0 {
        bail!("capture did not contain any video frames");
    }

    let slice = &brightness[..frame_count];
    let min = *slice.iter().min().expect("non-empty brightness slice");
    let max = *slice.iter().max().expect("non-empty brightness slice");
    if max.saturating_sub(min) < MIN_VIDEO_CONTRAST {
        bail!("video flash contrast is too low to detect sync pulses");
    }
    let threshold = ((u16::from(min) + u16::from(max)) / 2) as u8;
    let frame_step_s = median_frame_step_seconds(&timestamps_s[..frame_count]).max(1.0 / 120.0);
    let active_frames = slice
        .iter()
        .copied()
        .filter(|level| *level >= threshold)
        .count();
    let mut segments = Vec::new();
    let mut previous_active = false;
    let mut segment_start = 0.0_f64;
    let mut previous_timestamp = None;
    let mut last_active_timestamp = None;
    for (timestamp, level) in timestamps_s.iter().copied().zip(slice.iter().copied()) {
        let active = level >= threshold;
        if active && !previous_active {
            segment_start = previous_timestamp
                .map(|prior| edge_midpoint(prior, timestamp))
                .unwrap_or(timestamp);
        }
        if active {
            last_active_timestamp = Some(timestamp);
        }
        if previous_active && !active {
            let end_s = edge_midpoint(
                last_active_timestamp.unwrap_or(timestamp - frame_step_s),
                timestamp,
            )
            .max(segment_start + frame_step_s / 2.0);
            segments.push(PulseSegment {
                start_s: segment_start,
                end_s,
                duration_s: end_s - segment_start,
            });
        }
        previous_active = active;
        previous_timestamp = Some(timestamp);
    }
    if previous_active {
        let last_timestamp = timestamps_s[frame_count - 1];
        let end_s = last_timestamp + frame_step_s / 2.0;
        segments.push(PulseSegment {
            start_s: segment_start,
            end_s,
            duration_s: end_s - segment_start,
        });
    }

    let active_fraction = active_frames as f64 / frame_count as f64;
    let median_segment_duration_s =
        median(segments.iter().map(|segment| segment.duration_s).collect());
    if active_fraction > MAX_VIDEO_ACTIVE_FRAME_FRACTION
        && median_segment_duration_s <= frame_step_s * MAX_VIDEO_FLICKER_SEGMENT_FRAME_MULTIPLIER
    {
        bail!("video flash trace looks like frame-to-frame flicker, not sync pulses");
    }

    Ok(segments)
}

pub(crate) fn detect_color_coded_video_segments(
    timestamps_s: &[f64],
    frames: &[VideoColorFrame],
    event_codes: &[u32],
    pulse_width_s: f64,
) -> Result<Vec<PulseSegment>> {
    let frame_count = timestamps_s.len().min(frames.len());
    if frame_count == 0 {
        bail!("capture did not contain any video frames");
    }
    if pulse_width_s <= 0.0 {
        bail!("pulse width must stay positive");
    }
    if event_codes.is_empty() {
        bail!("event code list must not be empty");
    }
    if let Some(unsupported) = event_codes
        .iter()
        .find(|code| color_for_event_code(**code).is_none())
    {
        bail!("event code {unsupported} has no video color signature");
    }

    let frame_step_s = median_frame_step_seconds(&timestamps_s[..frame_count]).max(1.0 / 120.0);
    let mut segments = Vec::new();
    let mut previous_code = None::<u32>;
    let mut segment_start = 0.0_f64;
    let mut previous_timestamp = None;
    let mut last_active_timestamp = None;
    let mut segment_codes = Vec::<u32>::new();

    for (timestamp, frame) in timestamps_s.iter().copied().zip(frames.iter().copied()) {
        let code = color_event_code_for_codes(frame, event_codes);
        if code.is_some() && previous_code.is_none() {
            segment_start = previous_timestamp
                .map(|prior| edge_midpoint(prior, timestamp))
                .unwrap_or(timestamp);
            segment_codes.clear();
        }
        if let Some(code) = code {
            last_active_timestamp = Some(timestamp);
            segment_codes.push(code);
        }
        if previous_code.is_some() && code.is_none() {
            push_color_segment(
                &mut segments,
                segment_start,
                edge_midpoint(
                    last_active_timestamp.unwrap_or(timestamp - frame_step_s),
                    timestamp,
                ),
                pulse_width_s,
                &segment_codes,
                frame_step_s,
            );
            segment_codes.clear();
        }
        previous_code = code;
        previous_timestamp = Some(timestamp);
    }
    if previous_code.is_some() {
        let last_timestamp = timestamps_s[frame_count - 1];
        push_color_segment(
            &mut segments,
            segment_start,
            last_timestamp + frame_step_s / 2.0,
            pulse_width_s,
            &segment_codes,
            frame_step_s,
        );
    }

    if segments.is_empty() {
        bail!("video did not contain any recognizable color-coded sync pulses");
    }

    Ok(segments)
}

fn merge_nearby_audio_segments(segments: Vec<PulseSegment>) -> Vec<PulseSegment> {
    let mut merged = Vec::<PulseSegment>::new();
    for segment in segments {
        match merged.last_mut() {
            Some(prior) if segment.start_s - prior.end_s <= MAX_AUDIO_PULSE_INTERNAL_GAP_S => {
                prior.end_s = segment.end_s;
                prior.duration_s = prior.end_s - prior.start_s;
            }
            _ => merged.push(segment),
        }
    }
    merged
}
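The merge step above folds two detected pulses into one whenever the silence between them is at most `MAX_AUDIO_PULSE_INTERNAL_GAP_S`, so a brief dropout inside one tone does not split it into two onsets. A self-contained sketch of the same logic, with a local `PulseSegment` and an illustrative 50 ms gap constant standing in for the crate's types:

```rust
#[derive(Clone, Copy, Debug)]
struct PulseSegment {
    start_s: f64,
    end_s: f64,
    duration_s: f64,
}

// Illustrative stand-in for the crate-level constant.
const MAX_AUDIO_PULSE_INTERNAL_GAP_S: f64 = 0.05;

/// Merge segments whose inter-segment gap is within the tolerated internal gap.
fn merge_nearby_audio_segments(segments: Vec<PulseSegment>) -> Vec<PulseSegment> {
    let mut merged = Vec::<PulseSegment>::new();
    for segment in segments {
        match merged.last_mut() {
            Some(prior) if segment.start_s - prior.end_s <= MAX_AUDIO_PULSE_INTERNAL_GAP_S => {
                prior.end_s = segment.end_s;
                prior.duration_s = prior.end_s - prior.start_s;
            }
            _ => merged.push(segment),
        }
    }
    merged
}

fn main() {
    let seg = |start_s: f64, end_s: f64| PulseSegment {
        start_s,
        end_s,
        duration_s: end_s - start_s,
    };
    // A 10 ms dropout merges into the prior pulse; a 200 ms gap starts a new one.
    let merged = merge_nearby_audio_segments(vec![seg(0.00, 0.10), seg(0.11, 0.20), seg(0.40, 0.50)]);
    assert_eq!(merged.len(), 2);
    assert!((merged[0].duration_s - 0.20).abs() < 1e-9);
}
```

Because the input arrives sorted by start time, a single pass over `last_mut()` is enough; no lookahead is needed.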

fn push_color_segment(
    segments: &mut Vec<PulseSegment>,
    start_s: f64,
    observed_end_s: f64,
    pulse_width_s: f64,
    codes: &[u32],
    frame_step_s: f64,
) {
    let Some(code) = dominant_event_code(codes) else {
        return;
    };
    let observed_duration_s = observed_end_s - start_s;
    let max_observed_duration_s = (pulse_width_s * MAX_COLOR_OBSERVED_DURATION_MULTIPLIER)
        + MAX_COLOR_OBSERVED_DURATION_SLACK_S;
    if observed_duration_s > max_observed_duration_s {
        return;
    }
    let encoded_duration_s = pulse_width_s * f64::from(code);
    segments.push(PulseSegment {
        start_s,
        end_s: observed_end_s.max(start_s + frame_step_s / 2.0),
        duration_s: encoded_duration_s,
    });
}

fn dominant_event_code(codes: &[u32]) -> Option<u32> {
    let mut counts = std::collections::BTreeMap::<u32, usize>::new();
    for code in codes {
        *counts.entry(*code).or_default() += 1;
    }
    counts
        .into_iter()
        .max_by(|(left_code, left_count), (right_code, right_count)| {
            left_count
                .cmp(right_count)
                .then_with(|| right_code.cmp(left_code))
        })
        .map(|(code, _)| code)
}
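`dominant_event_code` is a majority vote over the per-frame codes collected for one segment; on equal counts, the reversed `right_code.cmp(left_code)` tie-break makes the smaller code win. A standalone sketch of the same comparator behavior:

```rust
use std::collections::BTreeMap;

/// Majority vote over a segment's per-frame event codes. Ties resolve to the
/// smaller code because the comparator reverses the code order on equal counts.
fn dominant_event_code(codes: &[u32]) -> Option<u32> {
    let mut counts = BTreeMap::<u32, usize>::new();
    for code in codes {
        *counts.entry(*code).or_default() += 1;
    }
    counts
        .into_iter()
        .max_by(|(left_code, left_count), (right_code, right_count)| {
            left_count
                .cmp(right_count)
                .then_with(|| right_code.cmp(left_code))
        })
        .map(|(code, _)| code)
}

fn main() {
    // No observations, no dominant code.
    assert_eq!(dominant_event_code(&[]), None);
    // Plain majority wins.
    assert_eq!(dominant_event_code(&[3, 1, 3]), Some(3));
    // Equal counts: the smaller code wins the tie.
    assert_eq!(dominant_event_code(&[2, 5]), Some(2));
}
```

Voting across the whole segment makes a single misclassified frame (e.g. one frame caught mid-transition) unable to change the decoded code.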

fn color_event_code_for_codes(frame: VideoColorFrame, event_codes: &[u32]) -> Option<u32> {
    let max = frame.r.max(frame.g).max(frame.b);
    let min = frame.r.min(frame.g).min(frame.b);
    if max < MIN_COLOR_PULSE_VALUE || max.saturating_sub(min) < MIN_COLOR_PULSE_SATURATION {
        return None;
    }

    event_codes
        .iter()
        .copied()
        .filter_map(|code| {
            color_for_event_code(code).map(|color| (code, color_distance_squared(frame, color)))
        })
        .min_by_key(|(_, distance)| *distance)
        .and_then(|(code, distance)| (distance <= MAX_COLOR_DISTANCE_SQUARED).then_some(code))
        .or_else(|| dominant_color_event_code(frame).filter(|code| event_codes.contains(code)))
}

fn dominant_color_event_code(frame: VideoColorFrame) -> Option<u32> {
    let r = i16::from(frame.r);
    let g = i16::from(frame.g);
    let b = i16::from(frame.b);

    if r - b >= DOMINANT_COLOR_MARGIN
        && g - b >= DOMINANT_COLOR_MARGIN
        && (r - g).abs() <= DOMINANT_COLOR_MARGIN * 3
    {
        return Some(4);
    }
    if r - g >= DOMINANT_COLOR_MARGIN && r - b >= DOMINANT_COLOR_MARGIN {
        return Some(1);
    }
    if g - r >= DOMINANT_COLOR_MARGIN && g - b >= DOMINANT_COLOR_MARGIN {
        return Some(2);
    }
    if b - r >= DOMINANT_COLOR_MARGIN && b - g >= DOMINANT_COLOR_MARGIN {
        return Some(3);
    }
    None
}

fn color_for_event_code(code: u32) -> Option<VideoColorFrame> {
    color_palette()
        .into_iter()
        .find_map(|(palette_code, color)| (palette_code == code).then_some(color))
}

fn color_palette() -> [(u32, VideoColorFrame); 16] {
    [
        (1, VideoColorFrame { r: 255, g: 45, b: 45 }),
        (2, VideoColorFrame { r: 0, g: 230, b: 118 }),
        (3, VideoColorFrame { r: 41, g: 121, b: 255 }),
        (4, VideoColorFrame { r: 255, g: 179, b: 0 }),
        (5, VideoColorFrame { r: 216, g: 27, b: 96 }),
        (6, VideoColorFrame { r: 0, g: 188, b: 212 }),
        (7, VideoColorFrame { r: 205, g: 220, b: 57 }),
        (8, VideoColorFrame { r: 126, g: 87, b: 194 }),
        (9, VideoColorFrame { r: 255, g: 112, b: 67 }),
        (10, VideoColorFrame { r: 38, g: 166, b: 154 }),
        (11, VideoColorFrame { r: 255, g: 64, b: 129 }),
        (12, VideoColorFrame { r: 92, g: 107, b: 192 }),
        (13, VideoColorFrame { r: 255, g: 235, b: 59 }),
        (14, VideoColorFrame { r: 105, g: 240, b: 174 }),
        (15, VideoColorFrame { r: 171, g: 71, b: 188 }),
        (16, VideoColorFrame { r: 3, g: 169, b: 244 }),
    ]
}

fn color_distance_squared(left: VideoColorFrame, right: VideoColorFrame) -> u32 {
    let dr = i32::from(left.r) - i32::from(right.r);
    let dg = i32::from(left.g) - i32::from(right.g);
    let db = i32::from(left.b) - i32::from(right.b);
    (dr * dr + dg * dg + db * db) as u32
}
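The squared-distance metric above drives nearest-palette-entry matching: squaring keeps the ordering identical to Euclidean distance while avoiding a square root in integer arithmetic. A minimal sketch of that matching step, with a local `VideoColorFrame` and a toy two-entry palette standing in for the crate's 16-entry table:

```rust
#[derive(Clone, Copy, Debug)]
struct VideoColorFrame {
    r: u8,
    g: u8,
    b: u8,
}

/// Squared Euclidean distance in RGB space; monotone in true distance,
/// so it is safe to use directly as a `min_by_key` key.
fn color_distance_squared(left: VideoColorFrame, right: VideoColorFrame) -> u32 {
    let dr = i32::from(left.r) - i32::from(right.r);
    let dg = i32::from(left.g) - i32::from(right.g);
    let db = i32::from(left.b) - i32::from(right.b);
    (dr * dr + dg * dg + db * db) as u32
}

fn main() {
    // Toy palette: codes 1 (red) and 2 (green), mirroring two real entries.
    let palette = [
        (1u32, VideoColorFrame { r: 255, g: 45, b: 45 }),
        (2u32, VideoColorFrame { r: 0, g: 230, b: 118 }),
    ];
    // A noisy, mostly-red observation should still decode as code 1.
    let observed = VideoColorFrame { r: 240, g: 60, b: 50 };
    let nearest = palette
        .iter()
        .min_by_key(|(_, color)| color_distance_squared(observed, *color))
        .map(|(code, _)| *code);
    assert_eq!(nearest, Some(1));
}
```

A distance cap like `MAX_COLOR_DISTANCE_SQUARED` in the real code then rejects observations that are not close enough to any palette entry, rather than decoding noise as the least-bad code.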

pub fn detect_audio_onsets(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
) -> Result<Vec<f64>> {
    Ok(detect_audio_segments(samples, sample_rate_hz, window_ms)?
        .into_iter()
        .map(|segment| segment.start_s)
        .collect())
}

pub(crate) fn detect_audio_segments(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
) -> Result<Vec<PulseSegment>> {
    detect_audio_segments_with_optional_codes(samples, sample_rate_hz, window_ms, &[], 0.0)
}

pub(crate) fn detect_coded_audio_segments(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
    event_codes: &[u32],
    pulse_width_s: f64,
) -> Result<Vec<PulseSegment>> {
    if pulse_width_s <= 0.0 {
        bail!("pulse width must stay positive");
    }
    if event_codes.is_empty() {
        bail!("event code list must not be empty");
    }
    if let Some(unsupported) = event_codes
        .iter()
        .find(|code| audio_frequency_for_event_code(**code).is_none())
    {
        bail!("event code {unsupported} has no audio tone signature");
    }

    detect_audio_segments_with_optional_codes(
        samples,
        sample_rate_hz,
        window_ms,
        event_codes,
        pulse_width_s,
    )
}

fn detect_audio_segments_with_optional_codes(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
    event_codes: &[u32],
    pulse_width_s: f64,
) -> Result<Vec<PulseSegment>> {
    if samples.is_empty() {
        bail!("capture did not contain any audio samples");
    }
    if sample_rate_hz == 0 {
        bail!("audio sample rate must stay positive");
    }
    if window_ms == 0 {
        bail!("audio analysis window must stay positive");
    }

    let window_samples = ((sample_rate_hz as usize * window_ms as usize) / 1000).max(1);
    let amplitude_envelope = samples
        .chunks(window_samples)
        .map(|chunk| {
            let total: u64 = chunk
                .iter()
                .map(|sample| i32::from(*sample).unsigned_abs() as u64)
                .sum();
            total as f64 / chunk.len() as f64
        })
        .collect::<Vec<_>>();
    let tone_windows = samples
        .chunks(window_samples)
        .map(|chunk| strongest_probe_tone_window(chunk, sample_rate_hz, event_codes))
        .collect::<Vec<_>>();
    let tone_envelope = tone_windows
        .iter()
        .map(|window| window.level)
        .collect::<Vec<_>>();
    let envelope = choose_audio_detection_envelope(&amplitude_envelope, &tone_envelope);
    let peak = envelope.iter().copied().fold(0.0_f64, f64::max);
    if peak < MIN_AUDIO_PROBE_PEAK {
        bail!("audio probe peaks are too quiet to detect sync pulses");
    }
    let baseline = median(envelope.clone());
    let threshold = baseline + ((peak - baseline) * AUDIO_ENVELOPE_THRESHOLD_FRACTION);
    let sample_abs = samples
        .iter()
        .map(|sample| i32::from(*sample).unsigned_abs() as f64)
        .collect::<Vec<_>>();
    let sample_peak = sample_abs.iter().copied().fold(0.0_f64, f64::max);
    let sample_baseline = median(sample_abs.clone());
    let sample_threshold =
        sample_baseline + ((sample_peak - sample_baseline) * AUDIO_SAMPLE_THRESHOLD_FRACTION);
    let mut segments = Vec::new();
    let mut previous_active = false;
    let mut segment_start = 0usize;
    let mut segment_codes = Vec::<u32>::new();
    for (index, level) in envelope.iter().copied().enumerate() {
        let active = level >= threshold;
        if active && !previous_active {
            segment_start = index;
            segment_codes.clear();
        }
        if !event_codes.is_empty()
            && active
            && let Some(code) = tone_windows.get(index).and_then(|window| window.code)
        {
            segment_codes.push(code);
        }
        if previous_active && !active {
            push_audio_segment(
                &mut segments,
                samples,
                sample_rate_hz,
                window_samples,
                segment_start,
                index,
                sample_threshold,
                dominant_event_code(&segment_codes).map(|code| pulse_width_s * f64::from(code)),
            );
            segment_codes.clear();
        }
        previous_active = active;
    }
    if previous_active {
        push_audio_segment(
            &mut segments,
            samples,
            sample_rate_hz,
            window_samples,
            segment_start,
            envelope.len(),
            sample_threshold,
            dominant_event_code(&segment_codes).map(|code| pulse_width_s * f64::from(code)),
        );
    }

    if event_codes.is_empty() {
        Ok(merge_nearby_audio_segments(segments))
    } else {
        Ok(merge_nearby_coded_audio_segments(segments))
    }
}

fn choose_audio_detection_envelope(amplitude_envelope: &[f64], tone_envelope: &[f64]) -> Vec<f64> {
    let smoothed_amplitude = smooth_envelope(amplitude_envelope);
    let smoothed_tone = smooth_envelope(tone_envelope);
    let amplitude_peak = smoothed_amplitude.iter().copied().fold(0.0_f64, f64::max);
    let amplitude_baseline = median(smoothed_amplitude.clone());
    let tone_peak = smoothed_tone.iter().copied().fold(0.0_f64, f64::max);
    let tone_baseline = median(smoothed_tone.clone());
    let amplitude_contrast = (amplitude_peak - amplitude_baseline).max(0.0);
    let tone_contrast = (tone_peak - tone_baseline).max(0.0);

    if tone_peak >= MIN_TONE_ENVELOPE_PEAK
        && tone_contrast >= amplitude_contrast * MIN_TONE_CONTRAST_FRACTION_OF_AMPLITUDE
    {
        smoothed_tone
    } else {
        smoothed_amplitude
    }
}

#[derive(Clone, Copy, Debug)]
struct ProbeToneWindow {
    code: Option<u32>,
    level: f64,
}

fn strongest_probe_tone_window(
    samples: &[i16],
    sample_rate_hz: u32,
    event_codes: &[u32],
) -> ProbeToneWindow {
    let code_iter: Box<dyn Iterator<Item = u32> + '_> = if event_codes.is_empty() {
        Box::new(1..=AUDIO_TONE_FREQUENCIES_HZ.len() as u32)
    } else {
        Box::new(event_codes.iter().copied())
    };
    let mut candidates = code_iter
        .filter_map(|code| {
            audio_frequency_for_event_code(code)
                .map(|frequency_hz| (code, goertzel_level(samples, sample_rate_hz, frequency_hz)))
        })
        .collect::<Vec<_>>();
    candidates.sort_by(|(_, left_level), (_, right_level)| right_level.total_cmp(left_level));

    let Some((code, level)) = candidates.first().copied() else {
        return ProbeToneWindow {
            code: None,
            level: 0.0,
        };
    };
    let runner_up = candidates.get(1).map(|(_, level)| *level).unwrap_or(0.0);
    ProbeToneWindow {
        code: (level >= MIN_TONE_ENVELOPE_PEAK
            && level >= runner_up * MIN_TONE_CODE_DOMINANCE_RATIO)
            .then_some(code),
        level,
    }
}

fn audio_frequency_for_event_code(code: u32) -> Option<f64> {
    AUDIO_TONE_FREQUENCIES_HZ
        .get(code.checked_sub(1)? as usize)
        .copied()
}

fn goertzel_level(samples: &[i16], sample_rate_hz: u32, frequency_hz: f64) -> f64 {
    if samples.is_empty() || sample_rate_hz == 0 {
        return 0.0;
    }
    let omega = 2.0 * std::f64::consts::PI * frequency_hz / f64::from(sample_rate_hz);
    let coefficient = 2.0 * omega.cos();
    let mut q1 = 0.0_f64;
    let mut q2 = 0.0_f64;
    for sample in samples {
        let q0 = f64::from(*sample) + coefficient * q1 - q2;
        q2 = q1;
        q1 = q0;
    }
    let power = q1 * q1 + q2 * q2 - coefficient * q1 * q2;
    power.max(0.0).sqrt() / samples.len() as f64
}

fn smooth_envelope(envelope: &[f64]) -> Vec<f64> {
    if envelope.len() < 3 {
        return envelope.to_vec();
    }

    (0..envelope.len())
        .map(|index| {
            let start = index.saturating_sub(1);
            let end = (index + 2).min(envelope.len());
            envelope[start..end].iter().sum::<f64>() / (end - start) as f64
        })
        .collect()
}
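The per-window tone level uses the Goertzel recurrence: a second-order filter that estimates the magnitude of a single frequency bin without a full FFT. The standalone sketch below reuses the function body from this diff and drives it with a synthesized tone; the 48 kHz rate and 1 kHz / 3 kHz probe frequencies are example values, not the crate's `AUDIO_TONE_FREQUENCIES_HZ`:

```rust
// Goertzel single-bin magnitude estimate, normalized by window length.
fn goertzel_level(samples: &[i16], sample_rate_hz: u32, frequency_hz: f64) -> f64 {
    if samples.is_empty() || sample_rate_hz == 0 {
        return 0.0;
    }
    let omega = 2.0 * std::f64::consts::PI * frequency_hz / f64::from(sample_rate_hz);
    let coefficient = 2.0 * omega.cos();
    let (mut q1, mut q2) = (0.0_f64, 0.0_f64);
    for sample in samples {
        let q0 = f64::from(*sample) + coefficient * q1 - q2;
        q2 = q1;
        q1 = q0;
    }
    let power = q1 * q1 + q2 * q2 - coefficient * q1 * q2;
    power.max(0.0).sqrt() / samples.len() as f64
}

fn main() {
    // Synthesize 50 ms of a 1 kHz tone at 48 kHz (2,400 samples).
    let sample_rate_hz = 48_000u32;
    let samples: Vec<i16> = (0..2_400)
        .map(|n| {
            let t = n as f64 / f64::from(sample_rate_hz);
            (10_000.0 * (2.0 * std::f64::consts::PI * 1_000.0 * t).sin()) as i16
        })
        .collect();
    let on_bin = goertzel_level(&samples, sample_rate_hz, 1_000.0);
    let off_bin = goertzel_level(&samples, sample_rate_hz, 3_000.0);
    // The probed 1 kHz bin dominates an unrelated 3 kHz bin by a wide margin.
    assert!(on_bin > 10.0 * off_bin);
}
```

This is why `strongest_probe_tone_window` can rank candidate event codes per window with one cheap pass per frequency instead of a spectral transform.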

pub(super) fn edge_midpoint(previous_s: f64, current_s: f64) -> f64 {
    previous_s + ((current_s - previous_s) / 2.0)
}

pub(super) fn window_segment(
    samples: &[i16],
    sample_rate_hz: u32,
    window_samples: usize,
    start_window_index: usize,
    end_window_index_exclusive: usize,
    sample_threshold: f64,
) -> PulseSegment {
    let start_sample = start_window_index.saturating_mul(window_samples);
    let end_sample = end_window_index_exclusive
        .saturating_mul(window_samples)
        .min(samples.len());

    let refined_start_sample = samples[start_sample..end_sample]
        .iter()
        .position(|sample| i32::from(*sample).unsigned_abs() as f64 >= sample_threshold)
        .map(|offset| start_sample + offset)
        .unwrap_or(start_sample);
    let refined_end_sample = samples[start_sample..end_sample]
        .iter()
        .rposition(|sample| i32::from(*sample).unsigned_abs() as f64 >= sample_threshold)
        .map(|offset| start_sample + offset + 1)
        .unwrap_or(end_sample);

    let start_s = refined_start_sample as f64 / f64::from(sample_rate_hz);
    let end_s = refined_end_sample.max(refined_start_sample + 1) as f64 / f64::from(sample_rate_hz);
    PulseSegment {
        start_s,
        end_s,
        duration_s: end_s - start_s,
    }
}

#[allow(clippy::too_many_arguments)]
fn push_audio_segment(
    segments: &mut Vec<PulseSegment>,
    samples: &[i16],
    sample_rate_hz: u32,
    window_samples: usize,
    start_window_index: usize,
    end_window_index_exclusive: usize,
    sample_threshold: f64,
    encoded_duration_s: Option<f64>,
) {
    let mut segment = window_segment(
        samples,
        sample_rate_hz,
        window_samples,
        start_window_index,
        end_window_index_exclusive,
        sample_threshold,
    );
    if let Some(encoded_duration_s) = encoded_duration_s {
        segment.duration_s = encoded_duration_s;
    }
    segments.push(segment);
}

fn merge_nearby_coded_audio_segments(segments: Vec<PulseSegment>) -> Vec<PulseSegment> {
    let mut merged = Vec::<PulseSegment>::new();
    for segment in segments {
        match merged.last_mut() {
            Some(prior) if segment.start_s - prior.end_s <= MAX_AUDIO_PULSE_INTERNAL_GAP_S => {
                prior.end_s = segment.end_s;
                prior.duration_s = prior.duration_s.max(segment.duration_s);
            }
            _ => merged.push(segment),
        }
    }
    merged
}
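The merge loop above collapses pulse fragments whose inter-segment gap falls under a threshold, so one physical pulse that briefly dips below the envelope threshold is not reported twice. A minimal sketch with a local `Seg` type and an assumed 50 ms gap (not the crate's `MAX_AUDIO_PULSE_INTERNAL_GAP_S`):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Seg { start_s: f64, end_s: f64 }

// Merge consecutive segments whose gap is at most `max_gap_s`; the input is
// assumed sorted by start time, as the detection loop produces it.
fn merge_nearby(segments: Vec<Seg>, max_gap_s: f64) -> Vec<Seg> {
    let mut merged: Vec<Seg> = Vec::new();
    for segment in segments {
        match merged.last_mut() {
            Some(prior) if segment.start_s - prior.end_s <= max_gap_s => {
                prior.end_s = segment.end_s;
            }
            _ => merged.push(segment),
        }
    }
    merged
}

fn main() {
    let raw = vec![
        Seg { start_s: 0.00, end_s: 0.10 },
        Seg { start_s: 0.12, end_s: 0.20 }, // 20 ms gap: folded into the first pulse
        Seg { start_s: 1.00, end_s: 1.10 }, // 800 ms gap: a genuinely separate pulse
    ];
    let merged = merge_nearby(raw, 0.05);
    assert_eq!(merged.len(), 2);
    assert_eq!(merged[0].end_s, 0.20);
}
```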
include!("onset_detection/video_segment_detection.rs");
include!("onset_detection/audio_tone_detection.rs");

pub(super) fn median_frame_step_seconds(timestamps_s: &[f64]) -> f64 {
    let diffs = timestamps_s
@@ -778,6 +59,8 @@ pub(super) fn median_frame_step_seconds(timestamps_s: &[f64]) -> f64 {
    median(diffs)
}

/// Keeps `median` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn median(mut values: Vec<f64>) -> f64 {
    if values.is_empty() {
        return 0.0;
@@ -0,0 +1,338 @@
pub fn detect_audio_onsets(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
) -> Result<Vec<f64>> {
    Ok(detect_audio_segments(samples, sample_rate_hz, window_ms)?
        .into_iter()
        .map(|segment| segment.start_s)
        .collect())
}

pub(crate) fn detect_audio_segments(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
) -> Result<Vec<PulseSegment>> {
    detect_audio_segments_with_optional_codes(samples, sample_rate_hz, window_ms, &[], 0.0)
}

/// Keeps `detect_coded_audio_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(crate) fn detect_coded_audio_segments(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
    event_codes: &[u32],
    pulse_width_s: f64,
) -> Result<Vec<PulseSegment>> {
    if pulse_width_s <= 0.0 {
        bail!("pulse width must stay positive");
    }
    if event_codes.is_empty() {
        bail!("event code list must not be empty");
    }
    if let Some(unsupported) = event_codes
        .iter()
        .find(|code| audio_frequency_for_event_code(**code).is_none())
    {
        bail!("event code {unsupported} has no audio tone signature");
    }

    detect_audio_segments_with_optional_codes(
        samples,
        sample_rate_hz,
        window_ms,
        event_codes,
        pulse_width_s,
    )
}

/// Keeps `detect_audio_segments_with_optional_codes` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_audio_segments_with_optional_codes(
    samples: &[i16],
    sample_rate_hz: u32,
    window_ms: u32,
    event_codes: &[u32],
    pulse_width_s: f64,
) -> Result<Vec<PulseSegment>> {
    if samples.is_empty() {
        bail!("capture did not contain any audio samples");
    }
    if sample_rate_hz == 0 {
        bail!("audio sample rate must stay positive");
    }
    if window_ms == 0 {
        bail!("audio analysis window must stay positive");
    }

    let window_samples = ((sample_rate_hz as usize * window_ms as usize) / 1000).max(1);
    let amplitude_envelope = samples
        .chunks(window_samples)
        .map(|chunk| {
            let total: u64 = chunk
                .iter()
                .map(|sample| i32::from(*sample).unsigned_abs() as u64)
                .sum();
            total as f64 / chunk.len() as f64
        })
        .collect::<Vec<_>>();
    let tone_windows = samples
        .chunks(window_samples)
        .map(|chunk| strongest_probe_tone_window(chunk, sample_rate_hz, event_codes))
        .collect::<Vec<_>>();
    let tone_envelope = tone_windows
        .iter()
        .map(|window| window.level)
        .collect::<Vec<_>>();
    let envelope = choose_audio_detection_envelope(&amplitude_envelope, &tone_envelope);
    let peak = envelope.iter().copied().fold(0.0_f64, f64::max);
    if peak < MIN_AUDIO_PROBE_PEAK {
        bail!("audio probe peaks are too quiet to detect sync pulses");
    }
    let baseline = median(envelope.clone());
    let threshold = baseline + ((peak - baseline) * AUDIO_ENVELOPE_THRESHOLD_FRACTION);
    let sample_abs = samples
        .iter()
        .map(|sample| i32::from(*sample).unsigned_abs() as f64)
        .collect::<Vec<_>>();
    let sample_peak = sample_abs.iter().copied().fold(0.0_f64, f64::max);
    let sample_baseline = median(sample_abs.clone());
    let sample_threshold =
        sample_baseline + ((sample_peak - sample_baseline) * AUDIO_SAMPLE_THRESHOLD_FRACTION);
    let mut segments = Vec::new();
    let mut previous_active = false;
    let mut segment_start = 0usize;
    let mut segment_codes = Vec::<u32>::new();
    for (index, level) in envelope.iter().copied().enumerate() {
        let active = level >= threshold;
        if active && !previous_active {
            segment_start = index;
            segment_codes.clear();
        }
        if !event_codes.is_empty()
            && active
            && let Some(code) = tone_windows.get(index).and_then(|window| window.code)
        {
            segment_codes.push(code);
        }
        if previous_active && !active {
            push_audio_segment(
                &mut segments,
                samples,
                sample_rate_hz,
                window_samples,
                segment_start,
                index,
                sample_threshold,
                dominant_event_code(&segment_codes).map(|code| pulse_width_s * f64::from(code)),
            );
            segment_codes.clear();
        }
        previous_active = active;
    }
    if previous_active {
        push_audio_segment(
            &mut segments,
            samples,
            sample_rate_hz,
            window_samples,
            segment_start,
            envelope.len(),
            sample_threshold,
            dominant_event_code(&segment_codes).map(|code| pulse_width_s * f64::from(code)),
        );
    }

    if event_codes.is_empty() {
        Ok(merge_nearby_audio_segments(segments))
    } else {
        Ok(merge_nearby_coded_audio_segments(segments))
    }
}

/// Keeps `choose_audio_detection_envelope` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn choose_audio_detection_envelope(amplitude_envelope: &[f64], tone_envelope: &[f64]) -> Vec<f64> {
    let smoothed_amplitude = smooth_envelope(amplitude_envelope);
    let smoothed_tone = smooth_envelope(tone_envelope);
    let amplitude_peak = smoothed_amplitude.iter().copied().fold(0.0_f64, f64::max);
    let amplitude_baseline = median(smoothed_amplitude.clone());
    let tone_peak = smoothed_tone.iter().copied().fold(0.0_f64, f64::max);
    let tone_baseline = median(smoothed_tone.clone());
    let amplitude_contrast = (amplitude_peak - amplitude_baseline).max(0.0);
    let tone_contrast = (tone_peak - tone_baseline).max(0.0);

    if tone_peak >= MIN_TONE_ENVELOPE_PEAK
        && tone_contrast >= amplitude_contrast * MIN_TONE_CONTRAST_FRACTION_OF_AMPLITUDE
    {
        smoothed_tone
    } else {
        smoothed_amplitude
    }
}

#[derive(Clone, Copy, Debug)]
struct ProbeToneWindow {
    code: Option<u32>,
    level: f64,
}

/// Keeps `strongest_probe_tone_window` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn strongest_probe_tone_window(
    samples: &[i16],
    sample_rate_hz: u32,
    event_codes: &[u32],
) -> ProbeToneWindow {
    let code_iter: Box<dyn Iterator<Item = u32> + '_> = if event_codes.is_empty() {
        Box::new(1..=AUDIO_TONE_FREQUENCIES_HZ.len() as u32)
    } else {
        Box::new(event_codes.iter().copied())
    };
    let mut candidates = code_iter
        .filter_map(|code| {
            audio_frequency_for_event_code(code)
                .map(|frequency_hz| (code, goertzel_level(samples, sample_rate_hz, frequency_hz)))
        })
        .collect::<Vec<_>>();
    candidates.sort_by(|(_, left_level), (_, right_level)| right_level.total_cmp(left_level));

    let Some((code, level)) = candidates.first().copied() else {
        return ProbeToneWindow {
            code: None,
            level: 0.0,
        };
    };
    let runner_up = candidates.get(1).map(|(_, level)| *level).unwrap_or(0.0);
    ProbeToneWindow {
        code: (level >= MIN_TONE_ENVELOPE_PEAK
            && level >= runner_up * MIN_TONE_CODE_DOMINANCE_RATIO)
            .then_some(code),
        level,
    }
}

fn audio_frequency_for_event_code(code: u32) -> Option<f64> {
    AUDIO_TONE_FREQUENCIES_HZ
        .get(code.checked_sub(1)? as usize)
        .copied()
}

/// Keeps `goertzel_level` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn goertzel_level(samples: &[i16], sample_rate_hz: u32, frequency_hz: f64) -> f64 {
    if samples.is_empty() || sample_rate_hz == 0 {
        return 0.0;
    }
    let omega = 2.0 * std::f64::consts::PI * frequency_hz / f64::from(sample_rate_hz);
    let coefficient = 2.0 * omega.cos();
    let mut q1 = 0.0_f64;
    let mut q2 = 0.0_f64;
    for sample in samples {
        let q0 = f64::from(*sample) + coefficient * q1 - q2;
        q2 = q1;
        q1 = q0;
    }
    let power = q1 * q1 + q2 * q2 - coefficient * q1 * q2;
    power.max(0.0).sqrt() / samples.len() as f64
}

/// Keeps `smooth_envelope` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn smooth_envelope(envelope: &[f64]) -> Vec<f64> {
    if envelope.len() < 3 {
        return envelope.to_vec();
    }

    (0..envelope.len())
        .map(|index| {
            let start = index.saturating_sub(1);
            let end = (index + 2).min(envelope.len());
            envelope[start..end].iter().sum::<f64>() / (end - start) as f64
        })
        .collect()
}

pub(super) fn edge_midpoint(previous_s: f64, current_s: f64) -> f64 {
    previous_s + ((current_s - previous_s) / 2.0)
}

/// Keeps `window_segment` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn window_segment(
    samples: &[i16],
    sample_rate_hz: u32,
    window_samples: usize,
    start_window_index: usize,
    end_window_index_exclusive: usize,
    sample_threshold: f64,
) -> PulseSegment {
    let start_sample = start_window_index.saturating_mul(window_samples);
    let end_sample = end_window_index_exclusive
        .saturating_mul(window_samples)
        .min(samples.len());

    let refined_start_sample = samples[start_sample..end_sample]
        .iter()
        .position(|sample| i32::from(*sample).unsigned_abs() as f64 >= sample_threshold)
        .map(|offset| start_sample + offset)
        .unwrap_or(start_sample);
    let refined_end_sample = samples[start_sample..end_sample]
        .iter()
        .rposition(|sample| i32::from(*sample).unsigned_abs() as f64 >= sample_threshold)
        .map(|offset| start_sample + offset + 1)
        .unwrap_or(end_sample);

    let start_s = refined_start_sample as f64 / f64::from(sample_rate_hz);
    let end_s = refined_end_sample.max(refined_start_sample + 1) as f64 / f64::from(sample_rate_hz);
    PulseSegment {
        start_s,
        end_s,
        duration_s: end_s - start_s,
    }
}

/// Keeps `push_audio_segment` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
#[allow(clippy::too_many_arguments)]
fn push_audio_segment(
    segments: &mut Vec<PulseSegment>,
    samples: &[i16],
    sample_rate_hz: u32,
    window_samples: usize,
    start_window_index: usize,
    end_window_index_exclusive: usize,
    sample_threshold: f64,
    encoded_duration_s: Option<f64>,
) {
    let mut segment = window_segment(
        samples,
        sample_rate_hz,
        window_samples,
        start_window_index,
        end_window_index_exclusive,
        sample_threshold,
    );
    if let Some(encoded_duration_s) = encoded_duration_s {
        segment.duration_s = encoded_duration_s;
    }
    segments.push(segment);
}

/// Keeps `merge_nearby_coded_audio_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn merge_nearby_coded_audio_segments(segments: Vec<PulseSegment>) -> Vec<PulseSegment> {
    let mut merged = Vec::<PulseSegment>::new();
    for segment in segments {
        match merged.last_mut() {
            Some(prior) if segment.start_s - prior.end_s <= MAX_AUDIO_PULSE_INTERNAL_GAP_S => {
                prior.end_s = segment.end_s;
                prior.duration_s = prior.duration_s.max(segment.duration_s);
            }
            _ => merged.push(segment),
        }
    }
    merged
}
File diff suppressed because it is too large
@@ -0,0 +1,401 @@
/// Keeps `correlate_onsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
#[cfg_attr(not(test), allow(dead_code))]
pub(super) fn correlate_onsets(
    video_onsets_s: &[f64],
    audio_onsets_s: &[f64],
    pulse_period_s: f64,
    max_pair_gap_s: f64,
) -> Result<SyncAnalysisReport> {
    if video_onsets_s.is_empty() {
        bail!("video onset list is empty");
    }
    if audio_onsets_s.is_empty() {
        bail!("audio onset list is empty");
    }
    if max_pair_gap_s <= 0.0 {
        bail!("max pair gap must stay positive");
    }
    if pulse_period_s <= 0.0 {
        bail!("pulse period must stay positive");
    }

    let raw_first_video_activity_s = video_onsets_s[0];
    let raw_first_audio_activity_s = audio_onsets_s[0];
    let activity_start_delta_ms =
        (raw_first_audio_activity_s - raw_first_video_activity_s) * 1000.0;
    let (video_onsets_s, audio_onsets_s, common_window) =
        trim_onsets_to_common_activity_window(video_onsets_s, audio_onsets_s, max_pair_gap_s);
    let expected_start_skew_ms = (audio_onsets_s[0] - video_onsets_s[0]) * 1000.0;
    let video_pulses = index_onsets_by_spacing(video_onsets_s, pulse_period_s);
    let audio_pulses = index_onsets_by_spacing(audio_onsets_s, pulse_period_s);
    let offset_candidates = candidate_index_offsets(&video_pulses, &audio_pulses);
    let mut pairs = best_pairs_for_index_offsets(
        &video_pulses,
        &audio_pulses,
        &offset_candidates,
        max_pair_gap_s,
        expected_start_skew_ms,
    );

    if pairs.is_empty() && video_onsets_s.len() == 1 && audio_onsets_s.len() == 1 {
        let video_phase_s = estimate_phase(video_onsets_s, pulse_period_s);
        let audio_phase_s = estimate_phase(audio_onsets_s, pulse_period_s);
        let phase_skew_ms =
            shortest_wrapped_difference(audio_phase_s - video_phase_s, pulse_period_s) * 1000.0;
        if phase_skew_ms.abs() <= max_pair_gap_s * 1000.0 {
            pairs.push(MatchedOnsetPair::new(
                video_onsets_s[0],
                audio_onsets_s[0],
                phase_skew_ms,
                max_pair_gap_s,
            ));
        }
    }

    if pairs.is_empty() {
        bail!("no audio/video pulse pairs were close enough to compare");
    }

    Ok(sync_report_from_pairs(
        common_window.filter_onsets(video_onsets_s),
        common_window.filter_onsets(audio_onsets_s),
        false,
        activity_start_delta_ms,
        raw_first_video_activity_s,
        raw_first_audio_activity_s,
        pairs,
    ))
}
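The single-pulse fallback compares phases modulo the pulse period through `shortest_wrapped_difference`. That helper's body is not part of this diff, so the sketch below is one plausible implementation of the call site's semantics, not the crate's definition:

```rust
// Map a raw difference onto (-period/2, period/2]: the shortest signed
// distance between two phases on a circle of circumference `period_s`.
fn shortest_wrapped_difference(delta_s: f64, period_s: f64) -> f64 {
    let wrapped = delta_s.rem_euclid(period_s);
    if wrapped > period_s / 2.0 {
        wrapped - period_s
    } else {
        wrapped
    }
}

fn main() {
    // With a 1 s period, a 0.9 s lag reads as a 0.1 s lead, and vice versa.
    assert!((shortest_wrapped_difference(0.9, 1.0) + 0.1).abs() < 1e-9);
    assert!((shortest_wrapped_difference(-0.9, 1.0) - 0.1).abs() < 1e-9);
    // Differences already inside half a period pass through unchanged.
    assert!((shortest_wrapped_difference(0.25, 1.0) - 0.25).abs() < 1e-9);
}
```

Wrapping matters here because a lone onset carries no absolute pulse index: without it, an audio pulse that landed one period late would read as a full period of skew instead of zero.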

/// Keeps `correlate_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(crate) fn correlate_segments(
    video_segments: &[PulseSegment],
    audio_segments: &[PulseSegment],
    pulse_period_s: f64,
    pulse_width_s: f64,
    marker_tick_period: u32,
    max_pair_gap_s: f64,
) -> Result<SyncAnalysisReport> {
    if video_segments.is_empty() {
        bail!("video onset list is empty");
    }
    if audio_segments.is_empty() {
        bail!("audio onset list is empty");
    }
    if pulse_period_s <= 0.0 {
        bail!("pulse period must stay positive");
    }
    if pulse_width_s <= 0.0 {
        bail!("pulse width must stay positive");
    }
    if marker_tick_period == 0 {
        bail!("marker tick period must stay positive");
    }
    if max_pair_gap_s <= 0.0 {
        bail!("max pair gap must stay positive");
    }

    let raw_first_video_activity_s = video_segments
        .first()
        .expect("validated video segment list is not empty")
        .start_s;
    let raw_first_audio_activity_s = audio_segments
        .first()
        .expect("validated audio segment list is not empty")
        .start_s;
    let activity_start_delta_ms =
        (raw_first_audio_activity_s - raw_first_video_activity_s) * 1000.0;

    let phase_tolerance_s = segment_phase_tolerance(pulse_period_s, pulse_width_s, max_pair_gap_s);
    let video_segments =
        collapse_segments_by_phase(video_segments, pulse_period_s, phase_tolerance_s);
    let audio_segments =
        collapse_segments_by_phase(audio_segments, pulse_period_s, phase_tolerance_s);

    let video_onsets_s = video_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    let audio_onsets_s = audio_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    if video_onsets_s.is_empty() {
        bail!("video onset list is empty");
    }
    if audio_onsets_s.is_empty() {
        bail!("audio onset list is empty");
    }

    let (video_onsets_s, audio_onsets_s, common_window) =
        trim_onsets_to_common_activity_window(&video_onsets_s, &audio_onsets_s, max_pair_gap_s);
    let expected_start_skew_ms = (audio_onsets_s[0] - video_onsets_s[0]) * 1000.0;
    let video_marker_onsets = marker_onsets(&video_segments, pulse_width_s);
    let audio_marker_onsets = marker_onsets(&audio_segments, pulse_width_s);
    let video_marker_onsets = common_window.filter_onsets(&video_marker_onsets);
    let audio_marker_onsets = common_window.filter_onsets(&audio_marker_onsets);
    let video_indexed = index_onsets_by_spacing(video_onsets_s, pulse_period_s);
    let audio_indexed = index_onsets_by_spacing(audio_onsets_s, pulse_period_s);
    let offset_candidates = marker_index_offsets(
        &video_indexed,
        &audio_indexed,
        video_marker_onsets,
        audio_marker_onsets,
    );
    let mut pairs = best_pairs_for_index_offsets(
        &video_indexed,
        &audio_indexed,
        &offset_candidates,
        max_pair_gap_s,
        expected_start_skew_ms,
    );

    if pairs.is_empty() && video_onsets_s.len() == 1 && audio_onsets_s.len() == 1 {
        let video_phase_s = estimate_phase(video_onsets_s, pulse_period_s);
        let audio_phase_s = estimate_phase(audio_onsets_s, pulse_period_s);
        let phase_skew_ms =
            shortest_wrapped_difference(audio_phase_s - video_phase_s, pulse_period_s) * 1000.0;
        if phase_skew_ms.abs() <= max_pair_gap_s * 1000.0 {
            pairs.push(MatchedOnsetPair::new(
                video_onsets_s[0],
                audio_onsets_s[0],
                phase_skew_ms,
                max_pair_gap_s,
            ));
        }
    }

    if pairs.is_empty() {
        bail!("no audio/video pulse pairs were close enough to compare");
    }

    Ok(sync_report_from_pairs(
        video_onsets_s,
        audio_onsets_s,
        false,
        activity_start_delta_ms,
        raw_first_video_activity_s,
        raw_first_audio_activity_s,
        pairs,
    ))
}
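Both correlators assign integer pulse indices from onset spacing (`index_onsets_by_spacing`) before pairing, so a dropped pulse shifts indices rather than silently re-pairing neighbors. The helper's body is outside this diff; the rounding-based version below, including its `(index, onset)` return shape, is an assumption about its behavior:

```rust
// Assign each onset an integer pulse index by rounding its offset from the
// first onset to the nearest multiple of the nominal pulse period.
fn index_onsets_by_spacing(onsets_s: &[f64], pulse_period_s: f64) -> Vec<(i64, f64)> {
    let Some(&first_s) = onsets_s.first() else {
        return Vec::new();
    };
    onsets_s
        .iter()
        .map(|&onset_s| {
            let index = ((onset_s - first_s) / pulse_period_s).round() as i64;
            (index, onset_s)
        })
        .collect()
}

fn main() {
    // 0.5 s pulses with small jitter; the missing third pulse leaves index 2 vacant
    // instead of pulling later onsets onto the wrong index.
    let onsets = [10.00, 10.51, 11.49, 12.02];
    let indexed = index_onsets_by_spacing(&onsets, 0.5);
    let indices: Vec<i64> = indexed.iter().map(|(i, _)| *i).collect();
    assert_eq!(indices, vec![0, 1, 3, 4]);
}
```

Given indices on both streams, candidate offsets reduce to integer shifts, which is what `best_pairs_for_index_offsets` searches over.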

/// Keeps `correlate_coded_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(crate) fn correlate_coded_segments(
    video_segments: &[PulseSegment],
    audio_segments: &[PulseSegment],
    pulse_period_s: f64,
    pulse_width_s: f64,
    event_width_codes: &[u32],
    max_pair_gap_s: f64,
) -> Result<SyncAnalysisReport> {
    if event_width_codes.is_empty() {
        bail!("event width code sequence must not be empty");
    }
    if event_width_codes.contains(&0) {
        bail!("event width codes must stay positive");
    }
    if pulse_period_s <= 0.0 {
        bail!("pulse period must stay positive");
    }
    if pulse_width_s <= 0.0 {
        bail!("pulse width must stay positive");
    }
    if max_pair_gap_s <= 0.0 {
        bail!("max pair gap must stay positive");
    }
    // Reject empty inputs up front so the `expect` calls below cannot panic.
    if video_segments.is_empty() {
        bail!("video segment list is empty");
    }
    if audio_segments.is_empty() {
        bail!("audio segment list is empty");
    }

    let raw_first_video_activity_s = video_segments
        .first()
        .expect("validated video segment list is not empty")
        .start_s;
    let raw_first_audio_activity_s = audio_segments
        .first()
        .expect("validated audio segment list is not empty")
        .start_s;
    let activity_start_delta_ms =
        (raw_first_audio_activity_s - raw_first_video_activity_s) * 1000.0;

    let raw_video_segments = video_segments.to_vec();
    let raw_audio_segments = audio_segments.to_vec();
    let phase_tolerance_s = segment_phase_tolerance(pulse_period_s, pulse_width_s, max_pair_gap_s);
    let video_segments =
        collapse_segments_by_phase(video_segments, pulse_period_s, phase_tolerance_s);
    let audio_segments =
        collapse_segments_by_phase(audio_segments, pulse_period_s, phase_tolerance_s);
    if video_segments.is_empty() {
        bail!("video onset list is empty");
    }
    if audio_segments.is_empty() {
        bail!("audio onset list is empty");
    }

    let video_onsets_s = video_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    let audio_onsets_s = audio_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    let (_, _, common_window) =
        trim_onsets_to_common_activity_window(&video_onsets_s, &audio_onsets_s, max_pair_gap_s);
    let filtered_video_segments = filter_segments_to_window(&video_segments, common_window);
    let filtered_audio_segments = filter_segments_to_window(&audio_segments, common_window);
    if filtered_video_segments.is_empty() || filtered_audio_segments.is_empty() {
        bail!(
            "coded pulse common window removed one stream entirely; refusing cadence-only fallback for coded proof (video={} audio={} raw activity delta {activity_start_delta_ms:+.1} ms)",
            video_segments.len(),
            audio_segments.len(),
        );
    }

    let expected_start_skew_ms =
        (filtered_audio_segments[0].start_s - filtered_video_segments[0].start_s) * 1000.0;
    let video_indexed = index_coded_segments_by_spacing(
        &filtered_video_segments,
        pulse_period_s,
        pulse_width_s,
        event_width_codes,
    );
    let audio_indexed = index_coded_segments_by_spacing(
        &filtered_audio_segments,
        pulse_period_s,
        pulse_width_s,
        event_width_codes,
    );
    let offset_candidates = candidate_coded_index_offsets(&video_indexed, &audio_indexed);
    let pairs = best_coded_pairs_for_index_offsets(
        &video_indexed,
        &audio_indexed,
        &offset_candidates,
        event_width_codes,
        max_pair_gap_s,
        expected_start_skew_ms,
    );

    if pairs.len() < MIN_CODED_PAIRS {
        let time_pairs = best_coded_pairs_by_time(
            &filtered_video_segments,
            &filtered_audio_segments,
            pulse_width_s,
            event_width_codes,
            max_pair_gap_s,
        );
        if time_pairs.len() >= MIN_CODED_PAIRS {
            let video_onsets_s = filtered_video_segments
                .iter()
                .map(|segment| segment.start_s)
                .collect::<Vec<_>>();
            let audio_onsets_s = filtered_audio_segments
                .iter()
                .map(|segment| segment.start_s)
                .collect::<Vec<_>>();
            return Ok(sync_report_from_pairs(
                &video_onsets_s,
                &audio_onsets_s,
                true,
                activity_start_delta_ms,
                raw_first_video_activity_s,
                raw_first_audio_activity_s,
                time_pairs,
            ));
        }

        if let Some((raw_filtered_video_segments, raw_filtered_audio_segments, raw_pairs)) =
            best_coded_pairs_for_raw_segments(
                &raw_video_segments,
                &raw_audio_segments,
                pulse_period_s,
                pulse_width_s,
                event_width_codes,
                max_pair_gap_s,
            )
            && raw_pairs.len() >= MIN_CODED_PAIRS
        {
            let video_onsets_s = raw_filtered_video_segments
                .iter()
                .map(|segment| segment.start_s)
                .collect::<Vec<_>>();
            let audio_onsets_s = raw_filtered_audio_segments
                .iter()
                .map(|segment| segment.start_s)
                .collect::<Vec<_>>();
            return Ok(sync_report_from_pairs(
                &video_onsets_s,
                &audio_onsets_s,
                true,
                activity_start_delta_ms,
                raw_first_video_activity_s,
                raw_first_audio_activity_s,
                raw_pairs,
            ));
        }

        let raw_full_video_indexed = index_coded_segments_by_spacing(
            &raw_video_segments,
            pulse_period_s,
            pulse_width_s,
            event_width_codes,
        );
        let raw_full_audio_indexed = index_coded_segments_by_spacing(
            &raw_audio_segments,
            pulse_period_s,
            pulse_width_s,
            event_width_codes,
        );
        let diagnostic_pairs = diagnostic_coded_pairs_for_index_offsets(
            &raw_full_video_indexed,
            &raw_full_audio_indexed,
            &candidate_coded_index_offsets(&raw_full_video_indexed, &raw_full_audio_indexed),
            event_width_codes,
            DIAGNOSTIC_CODED_MAX_PAIR_GAP_S,
            activity_start_delta_ms,
        );
        if diagnostic_pairs.len() >= MIN_CODED_PAIRS {
            let raw_full_video_onsets_s = raw_video_segments
                .iter()
                .map(|segment| segment.start_s)
                .collect::<Vec<_>>();
            let raw_full_audio_onsets_s = raw_audio_segments
                .iter()
                .map(|segment| segment.start_s)
                .collect::<Vec<_>>();
            return Ok(sync_report_from_pairs(
                &raw_full_video_onsets_s,
                &raw_full_audio_onsets_s,
                true,
                activity_start_delta_ms,
                raw_first_video_activity_s,
                raw_first_audio_activity_s,
                diagnostic_pairs,
            ));
        }

        bail!(
            "need at least {MIN_CODED_PAIRS} matching coded pulse pairs; saw {}; raw activity delta was {activity_start_delta_ms:+.1} ms (video={raw_first_video_activity_s:.3}s audio={raw_first_audio_activity_s:.3}s)",
            pairs.len()
        );
    }

    let video_onsets_s = filtered_video_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    let audio_onsets_s = filtered_audio_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    Ok(sync_report_from_pairs(
        &video_onsets_s,
        &audio_onsets_s,
        true,
        activity_start_delta_ms,
        raw_first_video_activity_s,
        raw_first_audio_activity_s,
        pairs,
    ))
}
@@ -0,0 +1,183 @@
/// Keeps `best_coded_pairs_for_raw_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn best_coded_pairs_for_raw_segments(
    video_segments: &[PulseSegment],
    audio_segments: &[PulseSegment],
    pulse_period_s: f64,
    pulse_width_s: f64,
    event_width_codes: &[u32],
    max_pair_gap_s: f64,
) -> Option<(Vec<PulseSegment>, Vec<PulseSegment>, Vec<MatchedOnsetPair>)> {
    if video_segments.is_empty() || audio_segments.is_empty() {
        return None;
    }

    let video_onsets_s = video_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    let audio_onsets_s = audio_segments
        .iter()
        .map(|segment| segment.start_s)
        .collect::<Vec<_>>();
    let (_, _, common_window) =
        trim_onsets_to_common_activity_window(&video_onsets_s, &audio_onsets_s, max_pair_gap_s);
    let filtered_video_segments = filter_segments_to_window(video_segments, common_window);
    let filtered_audio_segments = filter_segments_to_window(audio_segments, common_window);
    if filtered_video_segments.is_empty() || filtered_audio_segments.is_empty() {
        return None;
    }

    let expected_start_skew_ms =
        (filtered_audio_segments[0].start_s - filtered_video_segments[0].start_s) * 1000.0;
    let video_indexed = index_coded_segments_by_spacing(
        &filtered_video_segments,
        pulse_period_s,
        pulse_width_s,
        event_width_codes,
    );
    let audio_indexed = index_coded_segments_by_spacing(
        &filtered_audio_segments,
        pulse_period_s,
        pulse_width_s,
        event_width_codes,
    );
    let index_pairs = best_coded_pairs_for_index_offsets(
        &video_indexed,
        &audio_indexed,
        &candidate_coded_index_offsets(&video_indexed, &audio_indexed),
        event_width_codes,
        max_pair_gap_s,
        expected_start_skew_ms,
    );
    let time_pairs = best_coded_pairs_by_time(
        &filtered_video_segments,
        &filtered_audio_segments,
        pulse_width_s,
        event_width_codes,
        max_pair_gap_s,
    );
    let pairs = if time_pairs.len() > index_pairs.len() {
        time_pairs
    } else {
        index_pairs
    };

    Some((filtered_video_segments, filtered_audio_segments, pairs))
}

#[derive(Clone, Copy)]
struct CommonActivityWindow {
    start_s: f64,
    end_s: f64,
}

impl CommonActivityWindow {
    fn filter_onsets(self, onsets_s: &[f64]) -> &[f64] {
        let start = onsets_s.partition_point(|onset_s| *onset_s < self.start_s);
        let end = onsets_s.partition_point(|onset_s| *onset_s <= self.end_s);
        &onsets_s[start..end]
    }
}
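The `filter_onsets` method above relies on two `partition_point` binary searches over a sorted slice to carve out the inclusive window. A minimal standalone sketch of the same idea (hypothetical free function, not the crate's API):

```rust
/// Returns the sub-slice of sorted onsets inside `[start_s, end_s]`.
/// Two binary searches avoid scanning every element.
fn onsets_in_window(onsets_s: &[f64], start_s: f64, end_s: f64) -> &[f64] {
    // First index whose onset is >= start_s.
    let start = onsets_s.partition_point(|onset| *onset < start_s);
    // First index whose onset is > end_s, so end_s itself is kept.
    let end = onsets_s.partition_point(|onset| *onset <= end_s);
    &onsets_s[start..end]
}
```

Both bounds are inclusive, matching the segment filter used elsewhere in this file.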

/// Keeps `trim_onsets_to_common_activity_window` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn trim_onsets_to_common_activity_window<'a>(
    video_onsets_s: &'a [f64],
    audio_onsets_s: &'a [f64],
    max_pair_gap_s: f64,
) -> (&'a [f64], &'a [f64], CommonActivityWindow) {
    let common_window = CommonActivityWindow {
        start_s: (video_onsets_s
            .first()
            .copied()
            .expect("validated video onset list is not empty")
            .max(
                audio_onsets_s
                    .first()
                    .copied()
                    .expect("validated audio onset list is not empty"),
            )
            - max_pair_gap_s)
            .max(0.0),
        end_s: video_onsets_s
            .last()
            .copied()
            .expect("validated video onset list is not empty")
            .min(
                audio_onsets_s
                    .last()
                    .copied()
                    .expect("validated audio onset list is not empty"),
            )
            + max_pair_gap_s,
    };
    let trimmed_video_onsets_s = common_window.filter_onsets(video_onsets_s);
    let trimmed_audio_onsets_s = common_window.filter_onsets(audio_onsets_s);
    if trimmed_video_onsets_s.is_empty() || trimmed_audio_onsets_s.is_empty() {
        return (video_onsets_s, audio_onsets_s, common_window);
    }

    (
        trimmed_video_onsets_s,
        trimmed_audio_onsets_s,
        common_window,
    )
}

fn filter_segments_to_window(
    segments: &[PulseSegment],
    common_window: CommonActivityWindow,
) -> Vec<PulseSegment> {
    segments
        .iter()
        .copied()
        .filter(|segment| {
            segment.start_s >= common_window.start_s && segment.start_s <= common_window.end_s
        })
        .collect()
}

fn segment_phase_tolerance(pulse_period_s: f64, pulse_width_s: f64, max_pair_gap_s: f64) -> f64 {
    (pulse_width_s * PHASE_TOLERANCE_WIDTH_MULTIPLIER)
        .max(max_pair_gap_s.min(pulse_period_s / 3.0))
        .min(pulse_period_s / 2.5)
}
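The tolerance above grows with pulse width, is floored by the pair gap (itself capped at a third of the period), and is hard-capped at period/2.5 so two distinct pulses can never collapse into one phase bucket. A standalone sketch of the clamp; the multiplier value here is an assumed stand-in for the crate's `PHASE_TOLERANCE_WIDTH_MULTIPLIER`, which is defined outside this excerpt:

```rust
// Assumed value for illustration only; the real constant lives elsewhere in the crate.
const WIDTH_MULTIPLIER: f64 = 2.0;

/// Width-driven tolerance, floored by the capped pair gap, capped at period / 2.5.
fn phase_tolerance(pulse_period_s: f64, pulse_width_s: f64, max_pair_gap_s: f64) -> f64 {
    (pulse_width_s * WIDTH_MULTIPLIER)
        .max(max_pair_gap_s.min(pulse_period_s / 3.0))
        .min(pulse_period_s / 2.5)
}
```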

/// Keeps `estimate_phase` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn estimate_phase(onsets_s: &[f64], pulse_period_s: f64) -> f64 {
    let (sum_sin, sum_cos) =
        onsets_s
            .iter()
            .copied()
            .fold((0.0_f64, 0.0_f64), |(sum_sin, sum_cos), onset| {
                let wrapped = onset.rem_euclid(pulse_period_s);
                let angle = (wrapped / pulse_period_s) * std::f64::consts::TAU;
                (sum_sin + angle.sin(), sum_cos + angle.cos())
            });

    let mean_angle = sum_sin.atan2(sum_cos).rem_euclid(std::f64::consts::TAU);
    (mean_angle / std::f64::consts::TAU) * pulse_period_s
}
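`estimate_phase` is a circular mean: each onset becomes an angle on the unit circle (one full turn per pulse period), the sin/cos components are averaged, and the mean angle maps back to seconds. This is what keeps onsets straddling a period boundary (say 0.99 s and 1.01 s for a 1 s period) from averaging to a bogus mid-period phase. A standalone restatement:

```rust
use std::f64::consts::TAU;

/// Circular-mean phase of onsets modulo the pulse period, in seconds.
fn circular_phase(onsets_s: &[f64], period_s: f64) -> f64 {
    let (sum_sin, sum_cos) = onsets_s.iter().fold((0.0, 0.0), |(s, c), onset| {
        // Map the onset into [0, period) and then onto the unit circle.
        let angle = (onset.rem_euclid(period_s) / period_s) * TAU;
        (s + angle.sin(), c + angle.cos())
    });
    // Mean direction of the unit vectors, mapped back to seconds.
    (sum_sin.atan2(sum_cos).rem_euclid(TAU) / TAU) * period_s
}
```

A naive arithmetic mean of 0.99 and 0.01 would give 0.5; the circular mean stays near the period boundary.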

/// Keeps `index_onsets_by_spacing` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn index_onsets_by_spacing(onsets_s: &[f64], pulse_period_s: f64) -> BTreeMap<i64, f64> {
    let mut indexed = BTreeMap::new();
    let Some(first_onset) = onsets_s.first().copied() else {
        return indexed;
    };

    let mut pulse_index = 0_i64;
    let mut previous_onset = first_onset;
    indexed.insert(pulse_index, first_onset);
    for onset in onsets_s.iter().copied().skip(1) {
        let pulse_steps = ((onset - previous_onset) / pulse_period_s).round().max(1.0) as i64;
        pulse_index += pulse_steps;
        indexed.insert(pulse_index, onset);
        previous_onset = onset;
    }
    indexed
}
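The point of spacing-based indexing is that a dropped pulse leaves a hole in the index space instead of shifting every later onset by one, which is what makes the later index-offset alignment robust to detector dropouts. A standalone sketch of the same loop:

```rust
use std::collections::BTreeMap;

/// Assigns each onset an integer pulse index by rounding the gap to the
/// previous onset to whole pulse periods (minimum step of 1).
fn index_by_spacing(onsets_s: &[f64], period_s: f64) -> BTreeMap<i64, f64> {
    let mut indexed = BTreeMap::new();
    let Some(first) = onsets_s.first().copied() else {
        return indexed;
    };
    let mut index = 0_i64;
    let mut previous = first;
    indexed.insert(index, first);
    for onset in onsets_s.iter().copied().skip(1) {
        // A ~2.0-period gap advances the index by 2, skipping the lost pulse.
        index += ((onset - previous) / period_s).round().max(1.0) as i64;
        indexed.insert(index, onset);
        previous = onset;
    }
    indexed
}
```

With a 1 s period, onsets at 10.0, 11.02, and 13.01 land on indices 0, 1, and 3: the missing pulse near 12 s stays missing rather than renumbering the tail.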

@@ -0,0 +1,441 @@
#[derive(Clone, Copy, Debug, PartialEq)]
struct CodedPulseSegment {
    start_s: f64,
    code: u32,
}

/// Keeps `index_coded_segments_by_spacing` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn index_coded_segments_by_spacing(
    segments: &[PulseSegment],
    pulse_period_s: f64,
    pulse_width_s: f64,
    event_width_codes: &[u32],
) -> BTreeMap<i64, CodedPulseSegment> {
    let mut indexed = BTreeMap::new();
    let Some(first_segment) = segments.first().copied() else {
        return indexed;
    };

    let mut pulse_index = 0_i64;
    let mut previous_start = first_segment.start_s;
    indexed.insert(
        pulse_index,
        CodedPulseSegment {
            start_s: first_segment.start_s,
            code: nearest_event_width_code(
                first_segment.duration_s,
                pulse_width_s,
                event_width_codes,
            ),
        },
    );
    for segment in segments.iter().copied().skip(1) {
        let pulse_steps = ((segment.start_s - previous_start) / pulse_period_s)
            .round()
            .max(1.0) as i64;
        pulse_index += pulse_steps;
        indexed.insert(
            pulse_index,
            CodedPulseSegment {
                start_s: segment.start_s,
                code: nearest_event_width_code(
                    segment.duration_s,
                    pulse_width_s,
                    event_width_codes,
                ),
            },
        );
        previous_start = segment.start_s;
    }
    indexed
}

/// Keeps `nearest_event_width_code` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn nearest_event_width_code(duration_s: f64, pulse_width_s: f64, event_width_codes: &[u32]) -> u32 {
    let ratio = duration_s / pulse_width_s.max(f64::EPSILON);
    event_width_codes
        .iter()
        .copied()
        .min_by(|left, right| {
            let left_error = (ratio - f64::from(*left)).abs();
            let right_error = (ratio - f64::from(*right)).abs();
            left_error.total_cmp(&right_error)
        })
        .unwrap_or(1)
}
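Width-code recovery divides the measured pulse duration by the nominal single-pulse width and snaps the ratio to the closest advertised code, so moderate measurement noise still recovers the intended code. A standalone sketch:

```rust
/// Snaps a measured duration to the closest width code, by ratio to the
/// nominal pulse width. Falls back to 1 for an empty code list, mirroring
/// the original's `unwrap_or(1)`.
fn snap_to_width_code(duration_s: f64, pulse_width_s: f64, codes: &[u32]) -> u32 {
    let ratio = duration_s / pulse_width_s.max(f64::EPSILON);
    codes
        .iter()
        .copied()
        .min_by(|a, b| {
            (ratio - f64::from(*a))
                .abs()
                .total_cmp(&(ratio - f64::from(*b)).abs())
        })
        .unwrap_or(1)
}
```

For a 10 ms nominal width and codes 1..3, a 29 ms pulse (ratio 2.9) snaps to code 3, and a noisy 12 ms pulse (ratio 1.2) still reads as code 1.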

/// Keeps `candidate_index_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn candidate_index_offsets(
    video_indexed: &BTreeMap<i64, f64>,
    audio_indexed: &BTreeMap<i64, f64>,
) -> Vec<i64> {
    if video_indexed.is_empty() || audio_indexed.is_empty() {
        return Vec::new();
    }

    let video_min = *video_indexed
        .keys()
        .next()
        .expect("non-empty indexed video onset map has a first key");
    let video_max = *video_indexed
        .keys()
        .next_back()
        .expect("non-empty indexed video onset map has a last key");
    let audio_min = *audio_indexed
        .keys()
        .next()
        .expect("non-empty indexed audio onset map has a first key");
    let audio_max = *audio_indexed
        .keys()
        .next_back()
        .expect("non-empty indexed audio onset map has a last key");

    (audio_min - video_max..=audio_max - video_min).collect()
}
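The candidate range is exhaustive by construction: any integer offset that could align at least one video index with one audio index must lie in `[audio_min - video_max, audio_max - video_min]`. A standalone sketch of the same bound, written with a `match` instead of `expect` so it needs no prior emptiness validation:

```rust
use std::collections::BTreeMap;

/// Every integer offset that could pair at least one video key with one
/// audio key; empty if either map is empty.
fn offset_range(video: &BTreeMap<i64, f64>, audio: &BTreeMap<i64, f64>) -> Vec<i64> {
    match (
        video.keys().next(),
        video.keys().next_back(),
        audio.keys().next(),
        audio.keys().next_back(),
    ) {
        (Some(_v_min), Some(v_max), Some(a_min), Some(a_max)) => {
            // v_min is only needed for the upper bound below.
            let v_min = *video.keys().next().unwrap();
            (a_min - v_max..=a_max - v_min).collect()
        }
        _ => Vec::new(),
    }
}
```

For video keys spanning 0..=5 and audio keys spanning 2..=9, the candidates are -3..=9.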

/// Keeps `candidate_coded_index_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn candidate_coded_index_offsets(
    video_indexed: &BTreeMap<i64, CodedPulseSegment>,
    audio_indexed: &BTreeMap<i64, CodedPulseSegment>,
) -> Vec<i64> {
    if video_indexed.is_empty() || audio_indexed.is_empty() {
        return Vec::new();
    }

    let video_min = *video_indexed
        .keys()
        .next()
        .expect("non-empty indexed video map has a first key");
    let video_max = *video_indexed
        .keys()
        .next_back()
        .expect("non-empty indexed video map has a last key");
    let audio_min = *audio_indexed
        .keys()
        .next()
        .expect("non-empty indexed audio map has a first key");
    let audio_max = *audio_indexed
        .keys()
        .next_back()
        .expect("non-empty indexed audio map has a last key");

    (audio_min - video_max..=audio_max - video_min).collect()
}

/// Keeps `marker_index_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn marker_index_offsets(
    video_indexed: &BTreeMap<i64, f64>,
    audio_indexed: &BTreeMap<i64, f64>,
    video_marker_onsets: &[f64],
    audio_marker_onsets: &[f64],
) -> Vec<i64> {
    let mut offsets = Vec::new();
    if !video_marker_onsets.is_empty() && !audio_marker_onsets.is_empty() {
        let video_markers = pulse_indices_for_onsets(video_indexed, video_marker_onsets);
        let audio_markers = pulse_indices_for_onsets(audio_indexed, audio_marker_onsets);
        for video_marker in &video_markers {
            for audio_marker in &audio_markers {
                offsets.push(audio_marker - video_marker);
            }
        }
    }

    offsets.extend(candidate_index_offsets(video_indexed, audio_indexed));
    offsets.sort_unstable();
    offsets.dedup();
    offsets
}

fn pulse_indices_for_onsets(indexed: &BTreeMap<i64, f64>, marker_onsets: &[f64]) -> Vec<i64> {
    marker_onsets
        .iter()
        .filter_map(|marker_onset| {
            indexed.iter().find_map(|(pulse_index, onset)| {
                ((onset - marker_onset).abs() < 0.000_001).then_some(*pulse_index)
            })
        })
        .collect()
}

#[derive(Clone, Debug, PartialEq)]
struct MatchedOnsetPair {
    video_time_s: f64,
    audio_time_s: f64,
    skew_ms: f64,
    confidence: f64,
    server_event_id: Option<usize>,
    event_code: Option<u32>,
}

impl MatchedOnsetPair {
    /// Keeps `new` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn new(video_time_s: f64, audio_time_s: f64, skew_ms: f64, max_pair_gap_s: f64) -> Self {
        let max_pair_gap_ms = max_pair_gap_s * 1000.0;
        let confidence = if max_pair_gap_ms <= 0.0 {
            0.0
        } else {
            (1.0 - (skew_ms.abs() / max_pair_gap_ms)).clamp(0.0, 1.0)
        };
        Self {
            video_time_s,
            audio_time_s,
            skew_ms,
            confidence,
            server_event_id: None,
            event_code: None,
        }
    }

    fn with_identity(mut self, event_width_codes: &[u32], event_code: u32) -> Self {
        self.server_event_id = unique_event_id_for_code(event_width_codes, event_code);
        self.event_code = Some(event_code);
        self
    }
}
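The confidence in `MatchedOnsetPair::new` is a linear ramp: 1.0 at zero skew, falling to 0.0 at the maximum allowed pair gap, with a non-positive gap treated as "no confidence" rather than dividing by zero. A standalone sketch of just that mapping:

```rust
/// Linear confidence: 1.0 at zero skew, 0.0 at (or beyond) the max pair gap.
/// A non-positive gap yields 0.0 instead of a division by zero.
fn pair_confidence(skew_ms: f64, max_pair_gap_ms: f64) -> f64 {
    if max_pair_gap_ms <= 0.0 {
        0.0
    } else {
        (1.0 - skew_ms.abs() / max_pair_gap_ms).clamp(0.0, 1.0)
    }
}
```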

/// Keeps `best_pairs_for_index_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn best_pairs_for_index_offsets(
    video_indexed: &BTreeMap<i64, f64>,
    audio_indexed: &BTreeMap<i64, f64>,
    offset_candidates: &[i64],
    max_pair_gap_s: f64,
    expected_start_skew_ms: f64,
) -> Vec<MatchedOnsetPair> {
    let max_pair_gap_ms = max_pair_gap_s * 1000.0;
    let startup_phase_anchor_tolerance_ms =
        max_pair_gap_ms * STARTUP_PHASE_ANCHOR_TOLERANCE_FRACTION;
    let mut best: Option<(bool, usize, f64, f64, Vec<MatchedOnsetPair>)> = None;

    for offset in offset_candidates.iter().copied() {
        let pairs = video_indexed
            .iter()
            .filter_map(|(pulse_index, video_time)| {
                audio_indexed
                    .get(&(pulse_index + offset))
                    .map(|audio_time| {
                        let skew_ms = (audio_time - video_time) * 1000.0;
                        MatchedOnsetPair::new(*video_time, *audio_time, skew_ms, max_pair_gap_s)
                    })
            })
            .filter(|pair| pair.skew_ms.abs() <= max_pair_gap_ms)
            .collect::<Vec<_>>();
        if pairs.is_empty() {
            continue;
        }

        let mean_abs_skew_ms =
            pairs.iter().map(|pair| pair.skew_ms.abs()).sum::<f64>() / pairs.len() as f64;
        let startup_phase_anchor_error_ms = (pairs[0].skew_ms - expected_start_skew_ms).abs();
        let startup_phase_anchor_consistent =
            startup_phase_anchor_error_ms <= startup_phase_anchor_tolerance_ms;
        match &best {
            Some((
                best_anchor_consistent,
                best_count,
                best_anchor_error_ms,
                best_mean_abs_skew_ms,
                _,
            )) if (!startup_phase_anchor_consistent && *best_anchor_consistent)
                || (startup_phase_anchor_consistent == *best_anchor_consistent
                    && (pairs.len() < *best_count
                        || (pairs.len() == *best_count
                            && (startup_phase_anchor_error_ms > *best_anchor_error_ms
                                || (startup_phase_anchor_error_ms == *best_anchor_error_ms
                                    && mean_abs_skew_ms >= *best_mean_abs_skew_ms))))) => {}
            _ => {
                best = Some((
                    startup_phase_anchor_consistent,
                    pairs.len(),
                    startup_phase_anchor_error_ms,
                    mean_abs_skew_ms,
                    pairs,
                ))
            }
        }
    }

    best.map(|(_, _, _, _, pairs)| pairs).unwrap_or_default()
}

/// Keeps `best_coded_pairs_for_index_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn best_coded_pairs_for_index_offsets(
    video_indexed: &BTreeMap<i64, CodedPulseSegment>,
    audio_indexed: &BTreeMap<i64, CodedPulseSegment>,
    offset_candidates: &[i64],
    event_width_codes: &[u32],
    max_pair_gap_s: f64,
    expected_start_skew_ms: f64,
) -> Vec<MatchedOnsetPair> {
    let max_pair_gap_ms = max_pair_gap_s * 1000.0;
    let startup_phase_anchor_tolerance_ms =
        max_pair_gap_ms * STARTUP_PHASE_ANCHOR_TOLERANCE_FRACTION;
    let mut best: Option<(bool, usize, f64, f64, Vec<MatchedOnsetPair>)> = None;

    for offset in offset_candidates.iter().copied() {
        let pairs = video_indexed
            .iter()
            .filter_map(|(pulse_index, video)| {
                audio_indexed
                    .get(&(pulse_index + offset))
                    .filter(|audio| audio.code == video.code)
                    .map(|audio| {
                        let skew_ms = (audio.start_s - video.start_s) * 1000.0;
                        MatchedOnsetPair::new(video.start_s, audio.start_s, skew_ms, max_pair_gap_s)
                            .with_identity(event_width_codes, video.code)
                    })
            })
            .filter(|pair| pair.skew_ms.abs() <= max_pair_gap_ms)
            .collect::<Vec<_>>();
        if pairs.is_empty() {
            continue;
        }

        let mean_abs_skew_ms =
            pairs.iter().map(|pair| pair.skew_ms.abs()).sum::<f64>() / pairs.len() as f64;
        let startup_phase_anchor_error_ms = (pairs[0].skew_ms - expected_start_skew_ms).abs();
        let startup_phase_anchor_consistent =
            startup_phase_anchor_error_ms <= startup_phase_anchor_tolerance_ms;
        match &best {
            Some((
                best_anchor_consistent,
                best_count,
                best_anchor_error_ms,
                best_mean_abs_skew_ms,
                _,
            )) if (!startup_phase_anchor_consistent && *best_anchor_consistent)
                || (startup_phase_anchor_consistent == *best_anchor_consistent
                    && (pairs.len() < *best_count
                        || (pairs.len() == *best_count
                            && (startup_phase_anchor_error_ms > *best_anchor_error_ms
                                || (startup_phase_anchor_error_ms == *best_anchor_error_ms
                                    && mean_abs_skew_ms >= *best_mean_abs_skew_ms))))) => {}
            _ => {
                best = Some((
                    startup_phase_anchor_consistent,
                    pairs.len(),
                    startup_phase_anchor_error_ms,
                    mean_abs_skew_ms,
                    pairs,
                ))
            }
        }
    }

    best.map(|(_, _, _, _, pairs)| pairs).unwrap_or_default()
}

/// Keeps `best_coded_pairs_by_time` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn best_coded_pairs_by_time(
    video_segments: &[PulseSegment],
    audio_segments: &[PulseSegment],
    pulse_width_s: f64,
    event_width_codes: &[u32],
    max_pair_gap_s: f64,
) -> Vec<MatchedOnsetPair> {
    let max_pair_gap_ms = max_pair_gap_s * 1000.0;
    let mut used_audio = vec![false; audio_segments.len()];
    let mut pairs = Vec::new();

    for video in video_segments {
        let video_code =
            nearest_event_width_code(video.duration_s, pulse_width_s, event_width_codes);
        let best_audio = audio_segments
            .iter()
            .enumerate()
            .filter(|(index, audio)| {
                !used_audio[*index]
                    && nearest_event_width_code(audio.duration_s, pulse_width_s, event_width_codes)
                        == video_code
            })
            .map(|(index, audio)| {
                let skew_ms = (audio.start_s - video.start_s) * 1000.0;
                (index, audio, skew_ms)
            })
            .filter(|(_, _, skew_ms)| skew_ms.abs() <= max_pair_gap_ms)
            .min_by(|(_, _, left_skew), (_, _, right_skew)| {
                left_skew.abs().total_cmp(&right_skew.abs())
            });

        if let Some((audio_index, audio, skew_ms)) = best_audio {
            used_audio[audio_index] = true;
            pairs.push(
                MatchedOnsetPair::new(video.start_s, audio.start_s, skew_ms, max_pair_gap_s)
                    .with_identity(event_width_codes, video_code),
            );
        }
    }

    pairs
}

/// Keeps `diagnostic_coded_pairs_for_index_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn diagnostic_coded_pairs_for_index_offsets(
    video_indexed: &BTreeMap<i64, CodedPulseSegment>,
    audio_indexed: &BTreeMap<i64, CodedPulseSegment>,
    offset_candidates: &[i64],
    event_width_codes: &[u32],
    max_pair_gap_s: f64,
    expected_start_skew_ms: f64,
) -> Vec<MatchedOnsetPair> {
    let max_pair_gap_ms = max_pair_gap_s * 1000.0;
    let mut best: Option<(f64, usize, f64, Vec<MatchedOnsetPair>)> = None;

    for offset in offset_candidates.iter().copied() {
        let pairs = video_indexed
            .iter()
            .filter_map(|(pulse_index, video)| {
                audio_indexed
                    .get(&(pulse_index + offset))
                    .filter(|audio| audio.code == video.code)
                    .map(|audio| {
                        let skew_ms = (audio.start_s - video.start_s) * 1000.0;
                        MatchedOnsetPair::new(video.start_s, audio.start_s, skew_ms, max_pair_gap_s)
                            .with_identity(event_width_codes, video.code)
                    })
            })
            .filter(|pair| pair.skew_ms.abs() <= max_pair_gap_ms)
            .collect::<Vec<_>>();
        if pairs.is_empty() {
            continue;
        }

        let anchor_error_ms = (pairs[0].skew_ms - expected_start_skew_ms).abs();
        let mean_abs_skew_ms =
            pairs.iter().map(|pair| pair.skew_ms.abs()).sum::<f64>() / pairs.len() as f64;
        match &best {
            Some((best_anchor_error_ms, best_count, best_mean_abs_skew_ms, _))
                if anchor_error_ms > *best_anchor_error_ms
                    || (anchor_error_ms == *best_anchor_error_ms
                        && (pairs.len() < *best_count
                            || (pairs.len() == *best_count
                                && mean_abs_skew_ms >= *best_mean_abs_skew_ms))) => {}
            _ => best = Some((anchor_error_ms, pairs.len(), mean_abs_skew_ms, pairs)),
        }
    }

    best.map(|(_, _, _, pairs)| pairs).unwrap_or_default()
}

pub(super) fn marker_onsets(segments: &[PulseSegment], pulse_width_s: f64) -> Vec<f64> {
    let threshold = pulse_width_s * MARKER_WIDTH_MULTIPLIER;
    segments
        .iter()
        .filter(|segment| segment.duration_s >= threshold)
        .map(|segment| segment.start_s)
        .collect()
}

pub(super) fn shortest_wrapped_difference(delta_s: f64, pulse_period_s: f64) -> f64 {
    let half_period = pulse_period_s / 2.0;
    ((delta_s + half_period).rem_euclid(pulse_period_s)) - half_period
}
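`shortest_wrapped_difference` shifts the delta by half a period, wraps with `rem_euclid`, and shifts back, which maps any delta into `[-period/2, period/2)`: the shortest signed distance around the pulse circle. A standalone restatement:

```rust
/// Shortest signed distance between two phases on a circle of the given
/// period: a delta of 0.9 with period 1.0 is really -0.1.
fn wrapped_difference(delta_s: f64, period_s: f64) -> f64 {
    let half = period_s / 2.0;
    (delta_s + half).rem_euclid(period_s) - half
}
```

This is what lets the single-onset fallback in the mic-only path compare phases near a period boundary without inflating the skew by a whole period.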
@@ -0,0 +1,70 @@
/// Keeps `sync_report_from_pairs` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn sync_report_from_pairs(
    video_onsets_s: &[f64],
    audio_onsets_s: &[f64],
    coded_events: bool,
    activity_start_delta_ms: f64,
    raw_first_video_activity_s: f64,
    raw_first_audio_activity_s: f64,
    pairs: Vec<MatchedOnsetPair>,
) -> SyncAnalysisReport {
    let paired_events = pairs
        .iter()
        .enumerate()
        .map(|(event_id, pair)| SyncEventPair {
            event_id: pair.server_event_id.unwrap_or(event_id),
            server_event_id: pair.server_event_id,
            event_code: pair.event_code,
            video_time_s: pair.video_time_s,
            audio_time_s: pair.audio_time_s,
            skew_ms: pair.skew_ms,
            confidence: pair.confidence,
        })
        .collect::<Vec<_>>();
    let skews_ms = paired_events
        .iter()
        .map(|event| event.skew_ms)
        .collect::<Vec<_>>();
    let mut sorted_skews = skews_ms.clone();
    sorted_skews.sort_by(|left, right| left.total_cmp(right));
    let first_skew_ms = *skews_ms.first().expect("paired skew list is not empty");
    let last_skew_ms = *skews_ms.last().expect("paired skew list is not empty");
    let mean_skew_ms = skews_ms.iter().sum::<f64>() / skews_ms.len() as f64;
    let median_skew_ms = median(sorted_skews);
    let max_abs_skew_ms = skews_ms
        .iter()
        .copied()
        .map(f64::abs)
        .fold(0.0_f64, f64::max);

    SyncAnalysisReport {
        video_event_count: video_onsets_s.len(),
        audio_event_count: audio_onsets_s.len(),
        paired_event_count: skews_ms.len(),
        coded_events,
        activity_start_delta_ms,
        raw_first_video_activity_s,
        raw_first_audio_activity_s,
        first_skew_ms,
        last_skew_ms,
        mean_skew_ms,
        median_skew_ms,
        max_abs_skew_ms,
        drift_ms: last_skew_ms - first_skew_ms,
        skews_ms,
        video_onsets_s: video_onsets_s.to_vec(),
        audio_onsets_s: audio_onsets_s.to_vec(),
        paired_events,
    }
}

fn unique_event_id_for_code(event_width_codes: &[u32], event_code: u32) -> Option<usize> {
    let mut matches = event_width_codes
        .iter()
        .copied()
        .enumerate()
        .filter(|(_, code)| *code == event_code);
    let (index, _) = matches.next()?;
    matches.next().is_none().then_some(index)
}
|
||||
@ -11,6 +11,8 @@ use crate::sync_probe::analyze::report::SyncAnalysisReport;
use std::collections::BTreeMap;

#[test]
/// Keeps `detect_video_onsets_finds_bright_transitions` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_video_onsets_finds_bright_transitions() {
    let timestamps = (0..60).map(|idx| idx as f64 / 10.0).collect::<Vec<_>>();
    let brightness = timestamps
@ -30,6 +32,8 @@ fn detect_video_onsets_finds_bright_transitions() {
}

#[test]
/// Keeps `detect_audio_onsets_finds_click_bursts` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_audio_onsets_finds_click_bursts() {
    let mut samples = vec![0i16; 48_000];
    for start in [0usize, 48_000 / 2] {
@ -57,6 +61,8 @@ fn detect_video_segments_keeps_regular_and_marker_durations_distinct() {
}

#[test]
/// Keeps `detect_video_segments_ignores_dim_positioning_prelude` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_video_segments_ignores_dim_positioning_prelude() {
    let timestamps = (0..90).map(|idx| idx as f64 / 30.0).collect::<Vec<_>>();
    let brightness = timestamps
@ -80,6 +86,8 @@ fn detect_video_segments_ignores_dim_positioning_prelude() {
}

#[test]
/// Keeps `detect_color_coded_video_segments_ignores_generic_bright_changes` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_color_coded_video_segments_ignores_generic_bright_changes() {
    let timestamps = (0..80).map(|idx| idx as f64 / 20.0).collect::<Vec<_>>();
    let frames = timestamps
@ -119,6 +127,8 @@ fn detect_color_coded_video_segments_ignores_generic_bright_changes() {
}

#[test]
/// Keeps `detect_color_coded_video_segments_accepts_camera_washed_palette` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_color_coded_video_segments_accepts_camera_washed_palette() {
    let timestamps = (0..90).map(|idx| idx as f64 / 30.0).collect::<Vec<_>>();
    let frames = timestamps
@ -164,6 +174,8 @@ fn detect_color_coded_video_segments_accepts_camera_washed_palette() {
}

#[test]
/// Keeps `detect_audio_segments_keeps_regular_and_marker_durations_distinct` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_audio_segments_keeps_regular_and_marker_durations_distinct() {
    let mut samples = vec![0i16; 48_000];
    for sample in samples.iter_mut().take(3_000) {
@ -178,6 +190,8 @@ fn detect_audio_segments_keeps_regular_and_marker_durations_distinct() {
}

#[test]
/// Keeps `detect_audio_segments_merges_short_internal_dropouts_inside_one_pulse` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_audio_segments_merges_short_internal_dropouts_inside_one_pulse() {
    let mut samples = vec![0i16; 48_000];
    for sample in samples.iter_mut().skip(4_800).take(5_760) {
@ -193,6 +207,8 @@ fn detect_audio_segments_merges_short_internal_dropouts_inside_one_pulse() {
}

#[test]
/// Keeps `detect_audio_segments_accepts_faint_probe_tones` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_audio_segments_accepts_faint_probe_tones() {
    let mut samples = vec![0i16; 48_000];
    for start in [4_800usize, 24_000] {
@ -221,6 +237,8 @@ fn detect_audio_segments_locks_onto_probe_tone_over_background_hum() {
}

#[test]
/// Keeps `detect_coded_audio_segments_uses_probe_tone_frequency_for_event_code` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_coded_audio_segments_uses_probe_tone_frequency_for_event_code() {
    let mut samples = vec![0i16; 96_000];
    add_sine(&mut samples, 48_000, 0.0, 2.0, 120.0, 7_000.0);
@ -238,6 +256,8 @@ fn detect_coded_audio_segments_uses_probe_tone_frequency_for_event_code() {
}

#[test]
/// Keeps `detect_audio_segments_merges_longer_probe_dropouts_inside_one_pulse` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_audio_segments_merges_longer_probe_dropouts_inside_one_pulse() {
    let mut samples = vec![0i16; 48_000];
    for sample in samples.iter_mut().skip(4_800).take(12_000) {
@ -252,6 +272,8 @@ fn detect_audio_segments_merges_longer_probe_dropouts_inside_one_pulse() {
    assert!(segments[0].duration_s > 0.24);
}

/// Keeps `add_sine` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn add_sine(
    samples: &mut [i16],
    sample_rate_hz: u32,
@ -283,6 +305,8 @@ fn detect_video_segments_closes_a_pulse_that_stays_active_until_the_last_frame()
}

#[test]
/// Keeps `detect_audio_segments_closes_a_click_that_stays_active_until_the_capture_ends` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_audio_segments_closes_a_click_that_stays_active_until_the_capture_ends() {
    let mut samples = vec![0i16; 4_800];
    let midpoint = samples.len() / 2;
@ -314,6 +338,8 @@ fn correlate_onsets_single_pulse_uses_phase_fallback() {
}

#[test]
/// Keeps `correlate_onsets_ignores_leading_video_cadence_before_audio_becomes_active` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_onsets_ignores_leading_video_cadence_before_audio_becomes_active() {
    let report = correlate_onsets(
        &[0.15, 1.15, 2.15, 3.15, 10.15, 11.15, 12.15],
@ -331,6 +357,8 @@ fn correlate_onsets_ignores_leading_video_cadence_before_audio_becomes_active()
}

#[test]
/// Keeps `correlate_onsets_prefers_the_phase_consistent_basin_over_a_larger_alias_cluster` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_onsets_prefers_the_phase_consistent_basin_over_a_larger_alias_cluster() {
    let report = correlate_onsets(
        &[
@ -378,6 +406,8 @@ fn detect_video_onsets_rejects_empty_low_contrast_and_missing_edges() {
}

#[test]
/// Keeps `detect_video_onsets_rejects_frame_to_frame_flicker` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_video_onsets_rejects_frame_to_frame_flicker() {
    let timestamps = (0..120)
        .map(|index| index as f64 / 30.0)
@ -412,6 +442,8 @@ fn correlate_onsets_rejects_empty_inputs_invalid_gap_and_unpairable_events() {
}

#[test]
/// Keeps `correlate_segments_validate_inputs_and_support_single_pulse_fallback` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_segments_validate_inputs_and_support_single_pulse_fallback() {
    let video = [PulseSegment {
        start_s: 0.95,
@ -438,6 +470,8 @@ fn correlate_segments_validate_inputs_and_support_single_pulse_fallback() {
}

#[test]
/// Keeps `correlate_segments_preserves_whole_period_delay_evidence` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_segments_preserves_whole_period_delay_evidence() {
    fn segment(start_s: f64, duration_s: f64) -> PulseSegment {
        PulseSegment {
@ -476,6 +510,8 @@ fn correlate_segments_preserves_whole_period_delay_evidence() {
}

#[test]
/// Keeps `correlate_coded_segments_preserves_raw_activity_before_cadence_cleanup` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_coded_segments_preserves_raw_activity_before_cadence_cleanup() {
    fn segment(start_s: f64, code: u32) -> PulseSegment {
        let duration_s = 0.12 * f64::from(code);
@ -505,6 +541,8 @@ fn correlate_coded_segments_preserves_raw_activity_before_cadence_cleanup() {
}

#[test]
/// Keeps `correlate_coded_segments_reports_large_but_decodable_skew` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_coded_segments_reports_large_but_decodable_skew() {
    fn segment(start_s: f64, code: u32) -> PulseSegment {
        let duration_s = 0.12 * f64::from(code);
@ -537,6 +575,8 @@ fn correlate_coded_segments_reports_large_but_decodable_skew() {
}

#[test]
/// Keeps `correlate_coded_segments_matches_preserved_event_width_codes` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_coded_segments_matches_preserved_event_width_codes() {
    fn segment(start_s: f64, code: u32) -> PulseSegment {
        let duration_s = 0.12 * f64::from(code);
@ -568,6 +608,8 @@ fn correlate_coded_segments_matches_preserved_event_width_codes() {
}

#[test]
/// Keeps `correlate_coded_segments_recovers_when_extra_video_detections_win_phase_collapse` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_coded_segments_recovers_when_extra_video_detections_win_phase_collapse() {
    fn segment(start_s: f64, code: u32) -> PulseSegment {
        let duration_s = 0.12 * f64::from(code);
@ -599,6 +641,8 @@ fn correlate_coded_segments_recovers_when_extra_video_detections_win_phase_colla
}

#[test]
/// Keeps `correlate_coded_segments_rejects_nearby_wrong_width_codes` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_coded_segments_rejects_nearby_wrong_width_codes() {
    fn segment(start_s: f64, code: u32) -> PulseSegment {
        let duration_s = 0.12 * f64::from(code);
@ -626,6 +670,8 @@ fn correlate_coded_segments_rejects_nearby_wrong_width_codes() {
}

#[test]
/// Keeps `correlate_coded_segments_refuses_cadence_fallback_when_windows_do_not_overlap` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn correlate_coded_segments_refuses_cadence_fallback_when_windows_do_not_overlap() {
    fn segment(start_s: f64, code: u32) -> PulseSegment {
        let duration_s = 0.12 * f64::from(code);
@ -0,0 +1,417 @@
pub fn detect_video_onsets(timestamps_s: &[f64], brightness: &[u8]) -> Result<Vec<f64>> {
    Ok(detect_video_segments(timestamps_s, brightness)?
        .into_iter()
        .map(|segment| segment.start_s)
        .collect())
}

/// Keeps `detect_video_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(crate) fn detect_video_segments(
    timestamps_s: &[f64],
    brightness: &[u8],
) -> Result<Vec<PulseSegment>> {
    let frame_count = timestamps_s.len().min(brightness.len());
    if frame_count == 0 {
        bail!("capture did not contain any video frames");
    }

    let slice = &brightness[..frame_count];
    let min = *slice.iter().min().expect("non-empty brightness slice");
    let max = *slice.iter().max().expect("non-empty brightness slice");
    if max.saturating_sub(min) < MIN_VIDEO_CONTRAST {
        bail!("video flash contrast is too low to detect sync pulses");
    }
    let threshold = ((u16::from(min) + u16::from(max)) / 2) as u8;
    let frame_step_s = median_frame_step_seconds(&timestamps_s[..frame_count]).max(1.0 / 120.0);
    let active_frames = slice
        .iter()
        .copied()
        .filter(|level| *level >= threshold)
        .count();
    let mut segments = Vec::new();
    let mut previous_active = false;
    let mut segment_start = 0.0_f64;
    let mut previous_timestamp = None;
    let mut last_active_timestamp = None;
    for (timestamp, level) in timestamps_s.iter().copied().zip(slice.iter().copied()) {
        let active = level >= threshold;
        if active && !previous_active {
            segment_start = previous_timestamp
                .map(|prior| edge_midpoint(prior, timestamp))
                .unwrap_or(timestamp);
        }
        if active {
            last_active_timestamp = Some(timestamp);
        }
        if previous_active && !active {
            let end_s = edge_midpoint(
                last_active_timestamp.unwrap_or(timestamp - frame_step_s),
                timestamp,
            )
            .max(segment_start + frame_step_s / 2.0);
            segments.push(PulseSegment {
                start_s: segment_start,
                end_s,
                duration_s: end_s - segment_start,
            });
        }
        previous_active = active;
        previous_timestamp = Some(timestamp);
    }
    if previous_active {
        let last_timestamp = timestamps_s[frame_count - 1];
        let end_s = last_timestamp + frame_step_s / 2.0;
        segments.push(PulseSegment {
            start_s: segment_start,
            end_s,
            duration_s: end_s - segment_start,
        });
    }

    let active_fraction = active_frames as f64 / frame_count as f64;
    let median_segment_duration_s =
        median(segments.iter().map(|segment| segment.duration_s).collect());
    if active_fraction > MAX_VIDEO_ACTIVE_FRAME_FRACTION
        && median_segment_duration_s <= frame_step_s * MAX_VIDEO_FLICKER_SEGMENT_FRAME_MULTIPLIER
    {
        bail!("video flash trace looks like frame-to-frame flicker, not sync pulses");
    }

    Ok(segments)
}

/// Keeps `detect_color_coded_video_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(crate) fn detect_color_coded_video_segments(
    timestamps_s: &[f64],
    frames: &[VideoColorFrame],
    event_codes: &[u32],
    pulse_width_s: f64,
) -> Result<Vec<PulseSegment>> {
    let frame_count = timestamps_s.len().min(frames.len());
    if frame_count == 0 {
        bail!("capture did not contain any video frames");
    }
    if pulse_width_s <= 0.0 {
        bail!("pulse width must stay positive");
    }
    if event_codes.is_empty() {
        bail!("event code list must not be empty");
    }
    if let Some(unsupported) = event_codes
        .iter()
        .find(|code| color_for_event_code(**code).is_none())
    {
        bail!("event code {unsupported} has no video color signature");
    }

    let frame_step_s = median_frame_step_seconds(&timestamps_s[..frame_count]).max(1.0 / 120.0);
    let mut segments = Vec::new();
    let mut previous_code = None::<u32>;
    let mut segment_start = 0.0_f64;
    let mut previous_timestamp = None;
    let mut last_active_timestamp = None;
    let mut segment_codes = Vec::<u32>::new();

    for (timestamp, frame) in timestamps_s.iter().copied().zip(frames.iter().copied()) {
        let code = color_event_code_for_codes(frame, event_codes);
        if code.is_some() && previous_code.is_none() {
            segment_start = previous_timestamp
                .map(|prior| edge_midpoint(prior, timestamp))
                .unwrap_or(timestamp);
            segment_codes.clear();
        }
        if let Some(code) = code {
            last_active_timestamp = Some(timestamp);
            segment_codes.push(code);
        }
        if previous_code.is_some() && code.is_none() {
            push_color_segment(
                &mut segments,
                segment_start,
                edge_midpoint(
                    last_active_timestamp.unwrap_or(timestamp - frame_step_s),
                    timestamp,
                ),
                pulse_width_s,
                &segment_codes,
                frame_step_s,
            );
            segment_codes.clear();
        }
        previous_code = code;
        previous_timestamp = Some(timestamp);
    }
    if previous_code.is_some() {
        let last_timestamp = timestamps_s[frame_count - 1];
        push_color_segment(
            &mut segments,
            segment_start,
            last_timestamp + frame_step_s / 2.0,
            pulse_width_s,
            &segment_codes,
            frame_step_s,
        );
    }

    if segments.is_empty() {
        bail!("video did not contain any recognizable color-coded sync pulses");
    }

    Ok(segments)
}

/// Keeps `merge_nearby_audio_segments` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn merge_nearby_audio_segments(segments: Vec<PulseSegment>) -> Vec<PulseSegment> {
    let mut merged = Vec::<PulseSegment>::new();
    for segment in segments {
        match merged.last_mut() {
            Some(prior) if segment.start_s - prior.end_s <= MAX_AUDIO_PULSE_INTERNAL_GAP_S => {
                prior.end_s = segment.end_s;
                prior.duration_s = prior.end_s - prior.start_s;
            }
            _ => merged.push(segment),
        }
    }
    merged
}

/// Keeps `push_color_segment` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn push_color_segment(
    segments: &mut Vec<PulseSegment>,
    start_s: f64,
    observed_end_s: f64,
    pulse_width_s: f64,
    codes: &[u32],
    frame_step_s: f64,
) {
    let Some(code) = dominant_event_code(codes) else {
        return;
    };
    let observed_duration_s = observed_end_s - start_s;
    let max_observed_duration_s = (pulse_width_s * MAX_COLOR_OBSERVED_DURATION_MULTIPLIER)
        + MAX_COLOR_OBSERVED_DURATION_SLACK_S;
    if observed_duration_s > max_observed_duration_s {
        return;
    }
    let encoded_duration_s = pulse_width_s * f64::from(code);
    segments.push(PulseSegment {
        start_s,
        end_s: observed_end_s.max(start_s + frame_step_s / 2.0),
        duration_s: encoded_duration_s,
    });
}

/// Keeps `dominant_event_code` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn dominant_event_code(codes: &[u32]) -> Option<u32> {
    let mut counts = std::collections::BTreeMap::<u32, usize>::new();
    for code in codes {
        *counts.entry(*code).or_default() += 1;
    }
    counts
        .into_iter()
        .max_by(|(left_code, left_count), (right_code, right_count)| {
            left_count
                .cmp(right_count)
                .then_with(|| right_code.cmp(left_code))
        })
        .map(|(code, _)| code)
}

/// Keeps `color_event_code_for_codes` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn color_event_code_for_codes(frame: VideoColorFrame, event_codes: &[u32]) -> Option<u32> {
    let max = frame.r.max(frame.g).max(frame.b);
    let min = frame.r.min(frame.g).min(frame.b);
    if max < MIN_COLOR_PULSE_VALUE || max.saturating_sub(min) < MIN_COLOR_PULSE_SATURATION {
        return None;
    }

    event_codes
        .iter()
        .copied()
        .filter_map(|code| {
            color_for_event_code(code).map(|color| (code, color_distance_squared(frame, color)))
        })
        .min_by_key(|(_, distance)| *distance)
        .and_then(|(code, distance)| (distance <= MAX_COLOR_DISTANCE_SQUARED).then_some(code))
        .or_else(|| dominant_color_event_code(frame).filter(|code| event_codes.contains(code)))
}

/// Keeps `dominant_color_event_code` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn dominant_color_event_code(frame: VideoColorFrame) -> Option<u32> {
    let r = i16::from(frame.r);
    let g = i16::from(frame.g);
    let b = i16::from(frame.b);

    if r - b >= DOMINANT_COLOR_MARGIN
        && g - b >= DOMINANT_COLOR_MARGIN
        && (r - g).abs() <= DOMINANT_COLOR_MARGIN * 3
    {
        return Some(4);
    }
    if r - g >= DOMINANT_COLOR_MARGIN && r - b >= DOMINANT_COLOR_MARGIN {
        return Some(1);
    }
    if g - r >= DOMINANT_COLOR_MARGIN && g - b >= DOMINANT_COLOR_MARGIN {
        return Some(2);
    }
    if b - r >= DOMINANT_COLOR_MARGIN && b - g >= DOMINANT_COLOR_MARGIN {
        return Some(3);
    }
    None
}

fn color_for_event_code(code: u32) -> Option<VideoColorFrame> {
    color_palette()
        .into_iter()
        .find_map(|(palette_code, color)| (palette_code == code).then_some(color))
}

/// Keeps `color_palette` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn color_palette() -> [(u32, VideoColorFrame); 16] {
    [
        (
            1,
            VideoColorFrame {
                r: 255,
                g: 45,
                b: 45,
            },
        ),
        (
            2,
            VideoColorFrame {
                r: 0,
                g: 230,
                b: 118,
            },
        ),
        (
            3,
            VideoColorFrame {
                r: 41,
                g: 121,
                b: 255,
            },
        ),
        (
            4,
            VideoColorFrame {
                r: 255,
                g: 179,
                b: 0,
            },
        ),
        (
            5,
            VideoColorFrame {
                r: 216,
                g: 27,
                b: 96,
            },
        ),
        (
            6,
            VideoColorFrame {
                r: 0,
                g: 188,
                b: 212,
            },
        ),
        (
            7,
            VideoColorFrame {
                r: 205,
                g: 220,
                b: 57,
            },
        ),
        (
            8,
            VideoColorFrame {
                r: 126,
                g: 87,
                b: 194,
            },
        ),
        (
            9,
            VideoColorFrame {
                r: 255,
                g: 112,
                b: 67,
            },
        ),
        (
            10,
            VideoColorFrame {
                r: 38,
                g: 166,
                b: 154,
            },
        ),
        (
            11,
            VideoColorFrame {
                r: 255,
                g: 64,
                b: 129,
            },
        ),
        (
            12,
            VideoColorFrame {
                r: 92,
                g: 107,
                b: 192,
            },
        ),
        (
            13,
            VideoColorFrame {
                r: 255,
                g: 235,
                b: 59,
            },
        ),
        (
            14,
            VideoColorFrame {
                r: 105,
                g: 240,
                b: 174,
            },
        ),
        (
            15,
            VideoColorFrame {
                r: 171,
                g: 71,
                b: 188,
            },
        ),
        (
            16,
            VideoColorFrame {
                r: 3,
                g: 169,
                b: 244,
            },
        ),
    ]
}

fn color_distance_squared(left: VideoColorFrame, right: VideoColorFrame) -> u32 {
    let dr = i32::from(left.r) - i32::from(right.r);
    let dg = i32::from(left.g) - i32::from(right.g);
    let db = i32::from(left.b) - i32::from(right.b);
    (dr * dr + dg * dg + db * db) as u32
}

@ -73,6 +73,8 @@ pub struct SyncCalibrationRecommendation {

impl SyncAnalysisReport {
    #[must_use]
    /// Keeps `verdict` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn verdict(&self) -> SyncAnalysisVerdict {
        let p95_abs_skew_ms = percentile_abs(&self.skews_ms, 0.95);
        let base = SyncAnalysisVerdict {
@ -169,6 +171,8 @@ impl SyncAnalysisReport {
    }

    #[must_use]
    /// Keeps `calibration_recommendation` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn calibration_recommendation(&self) -> SyncCalibrationRecommendation {
        if self.paired_event_count < CALIBRATION_MIN_PAIRED_EVENTS {
            return SyncCalibrationRecommendation {
@ -246,6 +250,8 @@ impl SyncAnalysisReport {
            <= RAW_ACTIVITY_CONFIRMS_PAIR_MAX_DISAGREEMENT_MS
    }

    /// Keeps `raw_activity_start_is_verdict_relevant` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn raw_activity_start_is_verdict_relevant(&self) -> bool {
        if self.raw_activity_start_confirms_pairs() {
            return true;
@ -260,6 +266,8 @@ impl SyncAnalysisReport {
        percentile_abs(&self.skews_ms, 0.95) <= VERDICT_ACCEPTABLE_P95_ABS_SKEW_MS
    }

    /// Keeps `raw_activity_note` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn raw_activity_note(&self) -> String {
        if self.activity_start_delta_ms.abs() < VERDICT_CATASTROPHIC_MAX_ABS_SKEW_MS
            || self.raw_activity_start_is_verdict_relevant()
@ -274,6 +282,8 @@ impl SyncAnalysisReport {
        }
    }

    /// Keeps `raw_activity_calibration_note` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn raw_activity_calibration_note(&self) -> String {
        if self.raw_activity_start_is_verdict_relevant() {
            String::new()
@ -287,6 +297,8 @@ impl SyncAnalysisReport {
    }
}

/// Keeps `percentile_abs` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn percentile_abs(values: &[f64], percentile: f64) -> f64 {
    if values.is_empty() {
        return 0.0;
@ -313,6 +325,8 @@ pub struct SyncAnalysisOptions {
}

impl Default for SyncAnalysisOptions {
    /// Keeps `default` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn default() -> Self {
        Self {
            audio_window_ms: DEFAULT_AUDIO_WINDOW_MS,
@ -328,328 +342,5 @@ impl Default for SyncAnalysisOptions {
}

#[cfg(test)]
mod tests {
    use super::{SyncAnalysisOptions, SyncAnalysisReport};

    #[test]
    fn default_options_match_live_probe_expectations() {
        let options = SyncAnalysisOptions::default();
        assert_eq!(options.audio_window_ms, 5);
        assert!((options.max_pair_gap_s - 0.5).abs() < f64::EPSILON);
        assert!((options.pulse_period_s - 1.0).abs() < f64::EPSILON);
        assert!((options.pulse_width_s - 0.12).abs() < f64::EPSILON);
        assert_eq!(options.marker_tick_period, 5);
        assert!(options.event_width_codes.is_empty());
    }

    #[test]
    fn calibration_recommendation_requires_enough_pairs() {
        let report = SyncAnalysisReport {
            video_event_count: 4,
            audio_event_count: 4,
            paired_event_count: 4,
            coded_events: false,
            activity_start_delta_ms: 0.0,
            raw_first_video_activity_s: 0.0,
            raw_first_audio_activity_s: 0.0,
            first_skew_ms: 20.0,
            last_skew_ms: 20.0,
            mean_skew_ms: 20.0,
            median_skew_ms: 20.0,
            max_abs_skew_ms: 20.0,
            drift_ms: 0.0,
            skews_ms: vec![20.0; 4],
            video_onsets_s: vec![],
            audio_onsets_s: vec![],
            paired_events: vec![],
        };

        let recommendation = report.calibration_recommendation();
        assert!(!recommendation.ready);
        assert_eq!(recommendation.recommended_audio_offset_adjust_us, 0);
        assert!(
            recommendation
                .note
                .contains("need at least 13 paired pulses")
        );
    }

    #[test]
    fn calibration_recommendation_rejects_unstable_drift() {
        let report = SyncAnalysisReport {
            video_event_count: 12,
            audio_event_count: 12,
            paired_event_count: 14,
            coded_events: false,
            activity_start_delta_ms: 0.0,
            raw_first_video_activity_s: 0.0,
            raw_first_audio_activity_s: 0.0,
            first_skew_ms: 10.0,
            last_skew_ms: 70.0,
            mean_skew_ms: 40.0,
            median_skew_ms: 35.0,
            max_abs_skew_ms: 70.0,
            drift_ms: 60.0,
            skews_ms: vec![10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0],
            video_onsets_s: vec![],
            audio_onsets_s: vec![],
            paired_events: vec![],
        };

        let recommendation = report.calibration_recommendation();
        assert!(!recommendation.ready);
|
||||
assert_eq!(recommendation.recommended_audio_offset_adjust_us, 0);
|
||||
assert!(recommendation.note.contains("drift 60.0 ms exceeds"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn calibration_recommendation_maps_median_skew_to_audio_and_video_offsets() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 14,
|
||||
audio_event_count: 14,
|
||||
paired_event_count: 14,
|
||||
coded_events: false,
|
||||
activity_start_delta_ms: 0.0,
|
||||
raw_first_video_activity_s: 0.0,
|
||||
raw_first_audio_activity_s: 0.0,
|
||||
first_skew_ms: 28.0,
|
||||
last_skew_ms: 32.0,
|
||||
mean_skew_ms: 30.0,
|
||||
median_skew_ms: 30.0,
|
||||
max_abs_skew_ms: 32.0,
|
||||
drift_ms: 4.0,
|
||||
skews_ms: vec![28.0, 30.0, 32.0],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let recommendation = report.calibration_recommendation();
|
||||
assert!(recommendation.ready);
|
||||
assert_eq!(recommendation.recommended_audio_offset_adjust_us, -30_000);
|
||||
assert_eq!(recommendation.recommended_video_offset_adjust_us, 30_000);
|
||||
assert!(recommendation.note.contains("move median skew"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn calibration_recommendation_accepts_large_stable_measured_offsets() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 16,
|
||||
audio_event_count: 16,
|
||||
paired_event_count: 16,
|
||||
coded_events: false,
|
||||
activity_start_delta_ms: -766.4,
|
||||
raw_first_video_activity_s: 7.491,
|
||||
raw_first_audio_activity_s: 6.725,
|
||||
first_skew_ms: -766.4,
|
||||
last_skew_ms: -769.2,
|
||||
mean_skew_ms: -756.0,
|
||||
median_skew_ms: -766.4,
|
||||
max_abs_skew_ms: 775.7,
|
||||
drift_ms: -2.8,
|
||||
skews_ms: vec![-766.4; 16],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let recommendation = report.calibration_recommendation();
|
||||
assert!(recommendation.ready);
|
||||
assert_eq!(recommendation.recommended_audio_offset_adjust_us, 766_400);
|
||||
assert!(recommendation.note.contains("move median skew"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn calibration_recommendation_uses_pairs_when_raw_activity_disagrees() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 16,
|
||||
audio_event_count: 16,
|
||||
paired_event_count: 16,
|
||||
coded_events: false,
|
||||
activity_start_delta_ms: 6_735.0,
|
||||
raw_first_video_activity_s: 0.0,
|
||||
raw_first_audio_activity_s: 6.735,
|
||||
first_skew_ms: -90.0,
|
||||
last_skew_ms: -100.0,
|
||||
mean_skew_ms: -95.0,
|
||||
median_skew_ms: -99.0,
|
||||
max_abs_skew_ms: 120.0,
|
||||
drift_ms: -10.0,
|
||||
skews_ms: vec![-99.0; 16],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let recommendation = report.calibration_recommendation();
|
||||
assert!(recommendation.ready);
|
||||
assert_eq!(recommendation.recommended_audio_offset_adjust_us, 99_000);
|
||||
assert!(recommendation.note.contains("paired pulses disagree"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn calibration_recommendation_uses_coded_pairs_when_raw_activity_disagrees() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 14,
|
||||
audio_event_count: 14,
|
||||
paired_event_count: 13,
|
||||
coded_events: true,
|
||||
activity_start_delta_ms: -3_620.7,
|
||||
raw_first_video_activity_s: 9.361,
|
||||
raw_first_audio_activity_s: 5.740,
|
||||
first_skew_ms: -270.0,
|
||||
last_skew_ms: -230.0,
|
||||
mean_skew_ms: -205.0,
|
||||
median_skew_ms: -188.4,
|
||||
max_abs_skew_ms: 273.8,
|
||||
drift_ms: 40.0,
|
||||
skews_ms: vec![-270.0, -240.0, -188.4, -175.0, -130.0],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let recommendation = report.calibration_recommendation();
|
||||
assert!(recommendation.ready);
|
||||
assert_eq!(recommendation.recommended_audio_offset_adjust_us, 188_400);
|
||||
assert!(recommendation.note.contains("paired pulses disagree"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn calibration_recommendation_reports_when_skew_is_already_settled() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 14,
|
||||
audio_event_count: 14,
|
||||
paired_event_count: 14,
|
||||
coded_events: false,
|
||||
activity_start_delta_ms: 0.0,
|
||||
raw_first_video_activity_s: 0.0,
|
||||
raw_first_audio_activity_s: 0.0,
|
||||
first_skew_ms: 3.0,
|
||||
last_skew_ms: 4.0,
|
||||
mean_skew_ms: 3.5,
|
||||
median_skew_ms: 4.0,
|
||||
max_abs_skew_ms: 4.0,
|
||||
drift_ms: 1.0,
|
||||
skews_ms: vec![3.0, 4.0],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let recommendation = report.calibration_recommendation();
|
||||
assert!(recommendation.ready);
|
||||
assert_eq!(recommendation.recommended_audio_offset_adjust_us, -4_000);
|
||||
assert!(recommendation.note.contains("already within the settled"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn verdict_passes_preferred_skew_band() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 5,
|
||||
audio_event_count: 5,
|
||||
paired_event_count: 5,
|
||||
coded_events: false,
|
||||
activity_start_delta_ms: 0.0,
|
||||
raw_first_video_activity_s: 0.0,
|
||||
raw_first_audio_activity_s: 0.0,
|
||||
first_skew_ms: 10.0,
|
||||
last_skew_ms: 20.0,
|
||||
mean_skew_ms: 15.0,
|
||||
median_skew_ms: 15.0,
|
||||
max_abs_skew_ms: 20.0,
|
||||
drift_ms: 10.0,
|
||||
skews_ms: vec![10.0, 12.0, 15.0, 18.0, 20.0],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let verdict = report.verdict();
|
||||
assert!(verdict.passed);
|
||||
assert_eq!(verdict.status, "preferred");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn verdict_flags_catastrophic_desync() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 5,
|
||||
audio_event_count: 5,
|
||||
paired_event_count: 5,
|
||||
coded_events: false,
|
||||
activity_start_delta_ms: 0.0,
|
||||
raw_first_video_activity_s: 0.0,
|
||||
raw_first_audio_activity_s: 0.0,
|
||||
first_skew_ms: 8_000.0,
|
||||
last_skew_ms: 8_000.0,
|
||||
mean_skew_ms: 8_000.0,
|
||||
median_skew_ms: 8_000.0,
|
||||
max_abs_skew_ms: 8_000.0,
|
||||
drift_ms: 0.0,
|
||||
skews_ms: vec![8_000.0; 5],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let verdict = report.verdict();
|
||||
assert!(!verdict.passed);
|
||||
assert_eq!(verdict.status, "catastrophic_failure");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn verdict_flags_catastrophic_activity_start_delta() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 20,
|
||||
audio_event_count: 20,
|
||||
paired_event_count: 20,
|
||||
coded_events: false,
|
||||
activity_start_delta_ms: 20_000.0,
|
||||
raw_first_video_activity_s: 0.0,
|
||||
raw_first_audio_activity_s: 0.0,
|
||||
first_skew_ms: 900.0,
|
||||
last_skew_ms: 900.0,
|
||||
mean_skew_ms: 19_900.0,
|
||||
median_skew_ms: 19_900.0,
|
||||
max_abs_skew_ms: 900.0,
|
||||
drift_ms: 0.0,
|
||||
skews_ms: vec![900.0; 20],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let verdict = report.verdict();
|
||||
assert!(!verdict.passed);
|
||||
assert_eq!(verdict.status, "catastrophic_failure");
|
||||
assert!(verdict.reason.contains("activity starts"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn verdict_ignores_uncorroborated_raw_activity_for_coded_runs() {
|
||||
let report = SyncAnalysisReport {
|
||||
video_event_count: 20,
|
||||
audio_event_count: 20,
|
||||
paired_event_count: 20,
|
||||
coded_events: true,
|
||||
activity_start_delta_ms: -3_620.7,
|
||||
raw_first_video_activity_s: 9.361,
|
||||
raw_first_audio_activity_s: 5.740,
|
||||
first_skew_ms: -20.0,
|
||||
last_skew_ms: -18.0,
|
||||
mean_skew_ms: -19.0,
|
||||
median_skew_ms: -19.0,
|
||||
max_abs_skew_ms: 20.0,
|
||||
drift_ms: 2.0,
|
||||
skews_ms: vec![-20.0; 20],
|
||||
video_onsets_s: vec![],
|
||||
audio_onsets_s: vec![],
|
||||
paired_events: vec![],
|
||||
};
|
||||
|
||||
let verdict = report.verdict();
|
||||
assert!(verdict.passed);
|
||||
assert_eq!(verdict.status, "preferred");
|
||||
assert!(verdict.reason.contains("raw activity start delta"));
|
||||
}
|
||||
}
|
||||
#[path = "report/tests/mod.rs"]
|
||||
mod tests;
|
||||
|
||||
client/src/sync_probe/analyze/report/tests/mod.rs (new file, 345 lines)
@@ -0,0 +1,345 @@
use super::{SyncAnalysisOptions, SyncAnalysisReport};

#[test]
fn default_options_match_live_probe_expectations() {
    let options = SyncAnalysisOptions::default();
    assert_eq!(options.audio_window_ms, 5);
    assert!((options.max_pair_gap_s - 0.5).abs() < f64::EPSILON);
    assert!((options.pulse_period_s - 1.0).abs() < f64::EPSILON);
    assert!((options.pulse_width_s - 0.12).abs() < f64::EPSILON);
    assert_eq!(options.marker_tick_period, 5);
    assert!(options.event_width_codes.is_empty());
}

#[test]
/// Keeps `calibration_recommendation_requires_enough_pairs` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_recommendation_requires_enough_pairs() {
    let report = SyncAnalysisReport {
        video_event_count: 4,
        audio_event_count: 4,
        paired_event_count: 4,
        coded_events: false,
        activity_start_delta_ms: 0.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 0.0,
        first_skew_ms: 20.0,
        last_skew_ms: 20.0,
        mean_skew_ms: 20.0,
        median_skew_ms: 20.0,
        max_abs_skew_ms: 20.0,
        drift_ms: 0.0,
        skews_ms: vec![20.0; 4],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let recommendation = report.calibration_recommendation();
    assert!(!recommendation.ready);
    assert_eq!(recommendation.recommended_audio_offset_adjust_us, 0);
    assert!(
        recommendation
            .note
            .contains("need at least 13 paired pulses")
    );
}

#[test]
/// Keeps `calibration_recommendation_rejects_unstable_drift` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_recommendation_rejects_unstable_drift() {
    let report = SyncAnalysisReport {
        video_event_count: 12,
        audio_event_count: 12,
        paired_event_count: 14,
        coded_events: false,
        activity_start_delta_ms: 0.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 0.0,
        first_skew_ms: 10.0,
        last_skew_ms: 70.0,
        mean_skew_ms: 40.0,
        median_skew_ms: 35.0,
        max_abs_skew_ms: 70.0,
        drift_ms: 60.0,
        skews_ms: vec![10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let recommendation = report.calibration_recommendation();
    assert!(!recommendation.ready);
    assert_eq!(recommendation.recommended_audio_offset_adjust_us, 0);
    assert!(recommendation.note.contains("drift 60.0 ms exceeds"));
}

#[test]
/// Keeps `calibration_recommendation_maps_median_skew_to_audio_and_video_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_recommendation_maps_median_skew_to_audio_and_video_offsets() {
    let report = SyncAnalysisReport {
        video_event_count: 14,
        audio_event_count: 14,
        paired_event_count: 14,
        coded_events: false,
        activity_start_delta_ms: 0.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 0.0,
        first_skew_ms: 28.0,
        last_skew_ms: 32.0,
        mean_skew_ms: 30.0,
        median_skew_ms: 30.0,
        max_abs_skew_ms: 32.0,
        drift_ms: 4.0,
        skews_ms: vec![28.0, 30.0, 32.0],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let recommendation = report.calibration_recommendation();
    assert!(recommendation.ready);
    assert_eq!(recommendation.recommended_audio_offset_adjust_us, -30_000);
    assert_eq!(recommendation.recommended_video_offset_adjust_us, 30_000);
    assert!(recommendation.note.contains("move median skew"));
}

#[test]
/// Keeps `calibration_recommendation_accepts_large_stable_measured_offsets` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_recommendation_accepts_large_stable_measured_offsets() {
    let report = SyncAnalysisReport {
        video_event_count: 16,
        audio_event_count: 16,
        paired_event_count: 16,
        coded_events: false,
        activity_start_delta_ms: -766.4,
        raw_first_video_activity_s: 7.491,
        raw_first_audio_activity_s: 6.725,
        first_skew_ms: -766.4,
        last_skew_ms: -769.2,
        mean_skew_ms: -756.0,
        median_skew_ms: -766.4,
        max_abs_skew_ms: 775.7,
        drift_ms: -2.8,
        skews_ms: vec![-766.4; 16],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let recommendation = report.calibration_recommendation();
    assert!(recommendation.ready);
    assert_eq!(recommendation.recommended_audio_offset_adjust_us, 766_400);
    assert!(recommendation.note.contains("move median skew"));
}

#[test]
/// Keeps `calibration_recommendation_uses_pairs_when_raw_activity_disagrees` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_recommendation_uses_pairs_when_raw_activity_disagrees() {
    let report = SyncAnalysisReport {
        video_event_count: 16,
        audio_event_count: 16,
        paired_event_count: 16,
        coded_events: false,
        activity_start_delta_ms: 6_735.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 6.735,
        first_skew_ms: -90.0,
        last_skew_ms: -100.0,
        mean_skew_ms: -95.0,
        median_skew_ms: -99.0,
        max_abs_skew_ms: 120.0,
        drift_ms: -10.0,
        skews_ms: vec![-99.0; 16],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let recommendation = report.calibration_recommendation();
    assert!(recommendation.ready);
    assert_eq!(recommendation.recommended_audio_offset_adjust_us, 99_000);
    assert!(recommendation.note.contains("paired pulses disagree"));
}

#[test]
/// Keeps `calibration_recommendation_uses_coded_pairs_when_raw_activity_disagrees` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_recommendation_uses_coded_pairs_when_raw_activity_disagrees() {
    let report = SyncAnalysisReport {
        video_event_count: 14,
        audio_event_count: 14,
        paired_event_count: 13,
        coded_events: true,
        activity_start_delta_ms: -3_620.7,
        raw_first_video_activity_s: 9.361,
        raw_first_audio_activity_s: 5.740,
        first_skew_ms: -270.0,
        last_skew_ms: -230.0,
        mean_skew_ms: -205.0,
        median_skew_ms: -188.4,
        max_abs_skew_ms: 273.8,
        drift_ms: 40.0,
        skews_ms: vec![-270.0, -240.0, -188.4, -175.0, -130.0],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let recommendation = report.calibration_recommendation();
    assert!(recommendation.ready);
    assert_eq!(recommendation.recommended_audio_offset_adjust_us, 188_400);
    assert!(recommendation.note.contains("paired pulses disagree"));
}

#[test]
/// Keeps `calibration_recommendation_reports_when_skew_is_already_settled` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn calibration_recommendation_reports_when_skew_is_already_settled() {
    let report = SyncAnalysisReport {
        video_event_count: 14,
        audio_event_count: 14,
        paired_event_count: 14,
        coded_events: false,
        activity_start_delta_ms: 0.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 0.0,
        first_skew_ms: 3.0,
        last_skew_ms: 4.0,
        mean_skew_ms: 3.5,
        median_skew_ms: 4.0,
        max_abs_skew_ms: 4.0,
        drift_ms: 1.0,
        skews_ms: vec![3.0, 4.0],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let recommendation = report.calibration_recommendation();
    assert!(recommendation.ready);
    assert_eq!(recommendation.recommended_audio_offset_adjust_us, -4_000);
    assert!(recommendation.note.contains("already within the settled"));
}

#[test]
/// Keeps `verdict_passes_preferred_skew_band` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn verdict_passes_preferred_skew_band() {
    let report = SyncAnalysisReport {
        video_event_count: 5,
        audio_event_count: 5,
        paired_event_count: 5,
        coded_events: false,
        activity_start_delta_ms: 0.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 0.0,
        first_skew_ms: 10.0,
        last_skew_ms: 20.0,
        mean_skew_ms: 15.0,
        median_skew_ms: 15.0,
        max_abs_skew_ms: 20.0,
        drift_ms: 10.0,
        skews_ms: vec![10.0, 12.0, 15.0, 18.0, 20.0],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let verdict = report.verdict();
    assert!(verdict.passed);
    assert_eq!(verdict.status, "preferred");
}

#[test]
/// Keeps `verdict_flags_catastrophic_desync` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn verdict_flags_catastrophic_desync() {
    let report = SyncAnalysisReport {
        video_event_count: 5,
        audio_event_count: 5,
        paired_event_count: 5,
        coded_events: false,
        activity_start_delta_ms: 0.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 0.0,
        first_skew_ms: 8_000.0,
        last_skew_ms: 8_000.0,
        mean_skew_ms: 8_000.0,
        median_skew_ms: 8_000.0,
        max_abs_skew_ms: 8_000.0,
        drift_ms: 0.0,
        skews_ms: vec![8_000.0; 5],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let verdict = report.verdict();
    assert!(!verdict.passed);
    assert_eq!(verdict.status, "catastrophic_failure");
}

#[test]
/// Keeps `verdict_flags_catastrophic_activity_start_delta` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn verdict_flags_catastrophic_activity_start_delta() {
    let report = SyncAnalysisReport {
        video_event_count: 20,
        audio_event_count: 20,
        paired_event_count: 20,
        coded_events: false,
        activity_start_delta_ms: 20_000.0,
        raw_first_video_activity_s: 0.0,
        raw_first_audio_activity_s: 0.0,
        first_skew_ms: 900.0,
        last_skew_ms: 900.0,
        mean_skew_ms: 19_900.0,
        median_skew_ms: 19_900.0,
        max_abs_skew_ms: 900.0,
        drift_ms: 0.0,
        skews_ms: vec![900.0; 20],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let verdict = report.verdict();
    assert!(!verdict.passed);
    assert_eq!(verdict.status, "catastrophic_failure");
    assert!(verdict.reason.contains("activity starts"));
}

#[test]
/// Keeps `verdict_ignores_uncorroborated_raw_activity_for_coded_runs` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
fn verdict_ignores_uncorroborated_raw_activity_for_coded_runs() {
    let report = SyncAnalysisReport {
        video_event_count: 20,
        audio_event_count: 20,
        paired_event_count: 20,
        coded_events: true,
        activity_start_delta_ms: -3_620.7,
        raw_first_video_activity_s: 9.361,
        raw_first_audio_activity_s: 5.740,
        first_skew_ms: -20.0,
        last_skew_ms: -18.0,
        mean_skew_ms: -19.0,
        median_skew_ms: -19.0,
        max_abs_skew_ms: 20.0,
        drift_ms: 2.0,
        skews_ms: vec![-20.0; 20],
        video_onsets_s: vec![],
        audio_onsets_s: vec![],
        paired_events: vec![],
    };

    let verdict = report.verdict();
    assert!(verdict.passed);
    assert_eq!(verdict.status, "preferred");
    assert!(verdict.reason.contains("raw activity start delta"));
}
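The recommendation tests in this file pin a simple unit relationship: a median skew measured in milliseconds maps to microsecond offset adjustments of opposite sign for audio and video (30.0 ms becomes -30_000 µs of audio offset and +30_000 µs of video offset). A hedged sketch of that conversion follows; the function name and rounding are assumptions, and only the sign convention and ms-to-µs scaling come from the test expectations:

```rust
/// Hypothetical sketch of the skew-to-offset mapping the tests pin down.
/// Audio moves opposite to the measured median skew; video moves with it.
fn offsets_from_median_skew(median_skew_ms: f64) -> (i64, i64) {
    // Scale milliseconds to microseconds, rounding to the nearest whole us.
    let skew_us = (median_skew_ms * 1_000.0).round() as i64;
    (-skew_us, skew_us)
}
```

Note how the large-stable-offset case (-766.4 ms median) yields +766_400 µs of audio adjustment under this convention, matching `calibration_recommendation_accepts_large_stable_measured_offsets`.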
@@ -6,6 +6,8 @@ use std::path::Path;
use temp_env::with_var;
use tempfile::tempdir;

/// Keeps `with_fake_media_tools` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn with_fake_media_tools<T>(
    ffprobe_output: &[u8],
    ffmpeg_video_output: &[u8],
@@ -58,6 +60,8 @@ pub(super) fn frame_json(timestamps: &[f64]) -> Vec<u8> {
    serde_json::to_vec(&serde_json::json!({ "frames": frames })).expect("frame json")
}

/// Keeps `click_track_samples` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn click_track_samples(click_times_s: &[f64], total_samples: usize) -> Vec<i16> {
    let mut samples = vec![0i16; total_samples];
    for click_time_s in click_times_s {
@@ -69,6 +73,8 @@ pub(super) fn click_track_samples(click_times_s: &[f64], total_samples: usize) -
    samples
}
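The `click_track_samples` hunk shows only the skeleton: a zeroed buffer and a loop over click times. One plausible completion is sketched below; the original presumably uses a module-level sample rate and possibly a wider click shape, so the explicit `sample_rate_hz` parameter and single-impulse click here are assumptions:

```rust
/// Hypothetical completion of a click-track generator: the diff elides the
/// loop body, so the sample rate is an explicit parameter here and each click
/// is written as one full-scale impulse at its sample index.
fn click_track_samples(click_times_s: &[f64], total_samples: usize, sample_rate_hz: f64) -> Vec<i16> {
    let mut samples = vec![0i16; total_samples];
    for click_time_s in click_times_s {
        // Convert the click time in seconds to a sample index.
        let index = (click_time_s * sample_rate_hz) as usize;
        if index < total_samples {
            // A lone full-scale impulse is enough for onset detection in tests.
            samples[index] = i16::MAX;
        }
    }
    samples
}
```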

/// Keeps `thumbnail_video_bytes` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn thumbnail_video_bytes(brightness_values: &[u8]) -> Vec<u8> {
    const SIDE: usize = 64;
    let mut bytes = Vec::with_capacity(brightness_values.len() * SIDE * SIDE);
@@ -84,6 +90,8 @@ pub(super) fn thumbnail_video_bytes(brightness_values: &[u8]) -> Vec<u8> {
    bytes
}

/// Keeps `thumbnail_rgb_video_bytes` explicit because it sits on sync-probe analysis, where small timestamp or pairing mistakes can hide real A/V skew.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn thumbnail_rgb_video_bytes(colors: &[(u8, u8, u8)]) -> Vec<u8> {
    const SIDE: usize = 64;
    let mut bytes = Vec::with_capacity(colors.len() * SIDE * SIDE * 3);

@@ -32,6 +32,8 @@ const PROBE_BUNDLE_MAX_AUDIO_PACKETS: usize = 16;
#[cfg(not(coverage))]
const PROBE_BUNDLE_SESSION_ID: u64 = 1;

/// Keeps `run_sync_probe_from_args` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
pub async fn run_sync_probe_from_args<I, S>(args: I) -> Result<()>
where
    I: IntoIterator<Item = S>,
@@ -47,6 +49,8 @@ where
}

#[cfg(not(coverage))]
/// Keeps `run_sync_probe` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
async fn run_sync_probe(config: ProbeConfig) -> Result<()> {
    let caps = handshake::negotiate(config.server.as_str()).await;
    if !caps.camera || !caps.microphone {
@@ -186,6 +190,8 @@ async fn run_sync_probe(config: ProbeConfig) -> Result<()> {
}

#[cfg(not(coverage))]
/// Keeps `collect_probe_audio_grace` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
async fn collect_probe_audio_grace(
    audio_queue: &crate::uplink_fresh_queue::FreshPacketQueue<AudioPacket>,
    pending_audio: &mut Vec<AudioPacket>,
@@ -243,6 +249,8 @@ fn packet_age_ms(capture_pts_us: u64, send_pts_us: u64) -> u32 {
}

#[cfg(not(coverage))]
/// Keeps `write_probe_audio_dump` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn write_probe_audio_dump(file: Option<&mut File>, packet: &AudioPacket) {
    if let Some(file) = file {
        let _ = file.write_all(&packet.data);
@@ -250,6 +258,8 @@ fn write_probe_audio_dump(file: Option<&mut File>, packet: &AudioPacket) {
}

#[cfg(not(coverage))]
/// Keeps `retain_newest_probe_audio` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn retain_newest_probe_audio(pending_audio: &mut Vec<AudioPacket>) {
    if pending_audio.len() > PROBE_BUNDLE_MAX_AUDIO_PACKETS {
        let dropped = pending_audio.len() - PROBE_BUNDLE_MAX_AUDIO_PACKETS;
@@ -269,6 +279,8 @@ fn retain_probe_audio_for_video(pending_audio: &mut Vec<AudioPacket>, video_pts_
}

#[cfg(not(coverage))]
/// Keeps `probe_bundle_capture_bounds` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn probe_bundle_capture_bounds(video: Option<&VideoPacket>, audio: &[AudioPacket]) -> (u64, u64) {
    let mut start = u64::MAX;
    let mut end = 0_u64;
@@ -290,6 +302,8 @@ fn probe_bundle_capture_bounds(video: Option<&VideoPacket>, audio: &[AudioPacket
}

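`probe_bundle_capture_bounds` is shown only down to its `start`/`end` initializers, but the shape of the fold is clear: the bundle's bounds are the minimum and maximum capture timestamps across the optional video packet and all pending audio packets. A sketch under those assumptions, with stand-in packet types since the real `VideoPacket`/`AudioPacket` fields are not fully visible here:

```rust
/// Stand-in for the real packet types; only the capture timestamp matters.
struct Pkt {
    capture_pts_us: u64,
}

/// Hypothetical sketch of the capture-bounds fold: start is the minimum and
/// end the maximum capture timestamp across video and audio.
fn capture_bounds(video: Option<&Pkt>, audio: &[Pkt]) -> (u64, u64) {
    let mut start = u64::MAX;
    let mut end = 0_u64;
    if let Some(video) = video {
        start = start.min(video.capture_pts_us);
        end = end.max(video.capture_pts_us);
    }
    for packet in audio {
        start = start.min(packet.capture_pts_us);
        end = end.max(packet.capture_pts_us);
    }
    // An empty bundle collapses to (0, 0) rather than a reversed range.
    if start == u64::MAX { (0, 0) } else { (start, end) }
}
```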
#[cfg(not(coverage))]
|
||||
/// Keeps `packet_audio_capture_pts_us` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn packet_audio_capture_pts_us(packet: &AudioPacket) -> u64 {
|
||||
if packet.client_capture_pts_us == 0 {
|
||||
packet.pts
|
||||
@ -299,6 +313,8 @@ fn packet_audio_capture_pts_us(packet: &AudioPacket) -> u64 {
|
||||
}
|
||||
|
||||
#[cfg(not(coverage))]
|
||||
/// Keeps `packet_video_capture_pts_us` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn packet_video_capture_pts_us(packet: &VideoPacket) -> u64 {
|
||||
if packet.client_capture_pts_us == 0 {
|
||||
packet.pts
|
||||
@@ -358,6 +374,8 @@ mod tests {

    #[cfg(coverage)]
    #[test]
    /// Keeps `coverage_run_path_accepts_custom_probe_args` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn coverage_run_path_accepts_custom_probe_args() {
        let rt = tokio::runtime::Runtime::new().expect("runtime");
        rt.block_on(async {

@@ -188,6 +188,8 @@ mod tests {
    use super::*;

    #[tokio::test]
    /// Keeps `push_drops_oldest_when_queue_is_full` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn push_drops_oldest_when_queue_is_full() {
        let queue = FreshPacketQueue::new(FreshQueueConfig {
            capacity: 2,
@@ -210,6 +212,8 @@ mod tests {
    }

    #[tokio::test]
    /// Keeps `pop_fresh_discards_stale_packets_before_returning_live_media` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn pop_fresh_discards_stale_packets_before_returning_live_media() {
        let queue = FreshPacketQueue::new(FreshQueueConfig {
            capacity: 3,
@@ -242,6 +246,8 @@ mod tests {
    }

    #[tokio::test]
    /// Keeps `clone_shares_the_same_underlying_queue` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn clone_shares_the_same_underlying_queue() {
        let queue = FreshPacketQueue::new(FreshQueueConfig {
            capacity: 2,
@@ -258,6 +264,8 @@ mod tests {
    }

    #[tokio::test]
    /// Keeps `push_returns_default_after_close` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn push_returns_default_after_close() {
        let queue = FreshPacketQueue::new(FreshQueueConfig {
            capacity: 2,
@@ -275,6 +283,8 @@ mod tests {
    }

    #[tokio::test]
    /// Keeps `pop_fresh_waits_for_a_future_packet` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn pop_fresh_waits_for_a_future_packet() {
        let queue = FreshPacketQueue::new(FreshQueueConfig {
            capacity: 2,
@@ -294,6 +304,8 @@ mod tests {
    }

    #[tokio::test]
    /// Keeps `pop_fresh_waiter_wakes_when_the_queue_closes` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn pop_fresh_waiter_wakes_when_the_queue_closes() {
        let queue = FreshPacketQueue::<u8>::new(FreshQueueConfig {
            capacity: 2,
@@ -312,6 +324,8 @@ mod tests {
    }

    #[tokio::test]
    /// Keeps `latest_only_policy_returns_newest_packet_and_drops_superseded_backlog` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn latest_only_policy_returns_newest_packet_and_drops_superseded_backlog() {
        let queue = FreshPacketQueue::new(FreshQueueConfig {
            capacity: 4,
@@ -1,6 +1,6 @@
[package]
name = "lesavka_common"
-version = "0.19.30"
+version = "0.20.0"
edition = "2024"
build = "build.rs"

@@ -343,3 +343,148 @@ from `LESAVKA_CLIENT_PKI_SSH_SOURCE` over SSH. Runtime clients require the insta
| `LESAVKA_UVC_MJPEG_IO_MODE` | UVC helper MJPEG streaming mode override |
| `LESAVKA_UVC_MJPEG_SPOOL` | UVC helper MJPEG spool toggle |
| `LESAVKA_UVC_SESSION_CLOCK_ALIGN` | UVC helper timing override |

## Recently documented upstream probe and calibration knobs

These entries are intentionally concise because most are manual lab or CI harness controls. The detailed behavior lives in the scripts and source that consume them; this table keeps every `LESAVKA_*` knob discoverable for operators and the hygiene gate.

| `LESAVKA_CAM_EMIT_UI_PROFILE` | client camera/profile negotiation override; used by launcher or lab probes to control emitted capture profile metadata |
| `LESAVKA_CAM_LOCK_TO_SERVER_PROFILE` | client camera/profile negotiation override; used by launcher or lab probes to control emitted capture profile metadata |
| `LESAVKA_EYE_FIRST_FRAME_TIMEOUT_MS` | runtime/install/session override; document near use before promoting to broader operator config |
| `LESAVKA_INSTALL_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US` | installer default override; seeds server calibration env files with known lab-measured output-path offsets |
| `LESAVKA_INSTALL_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US` | installer default override; seeds server calibration env files with known lab-measured output-path offsets |
| `LESAVKA_LEGACY_SPLIT_UPLINK` | runtime/install/session override; document near use before promoting to broader operator config |
| `LESAVKA_MIC_PACKET_TARGET_US` | client microphone capture override; tunes Pulse/PipeWire packet sizing, buffering, or selected source behavior |
| `LESAVKA_MIC_PULSE_BUFFER_TIME_US` | client microphone capture override; tunes Pulse/PipeWire packet sizing, buffering, or selected source behavior |
| `LESAVKA_MIC_PULSE_LATENCY_TIME_US` | client microphone capture override; tunes Pulse/PipeWire packet sizing, buffering, or selected source behavior |
| `LESAVKA_MODE` | manual probe/server connection override used to resolve the target Lesavka server and mode under test |
| `LESAVKA_OPEN_MANUAL_REVIEW_DOLPHIN` | manual browser-stimulus probe override; controls local review browser behavior for mirrored upstream A/V tests |
| `LESAVKA_OUTPUT_DELAY_CONFIRMING` | manual direct UVC/UAC output-delay probe override; controls server-generated output calibration, confirmation, freshness, or reporting |
| `LESAVKA_OUTPUT_DELAY_PROBE_AUDIO_DELAY_US` | manual direct UVC/UAC output-delay probe override; controls server-generated output calibration, confirmation, freshness, or reporting |
| `LESAVKA_OUTPUT_DELAY_PROBE_VIDEO_DELAY_US` | manual direct UVC/UAC output-delay probe override; controls server-generated output calibration, confirmation, freshness, or reporting |
| `LESAVKA_OUTPUT_FRESHNESS_MIN_PAIRS` | manual direct UVC/UAC output-delay probe override; controls server-generated output calibration, confirmation, freshness, or reporting |
| `LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES` | test/build contract variable; keeps deterministic coverage and probe preconditions explicit in CI or manual harnesses |
| `LESAVKA_REQUIRE_SYNC_PROBE` | test/build contract variable; keeps deterministic coverage and probe preconditions explicit in CI or manual harnesses |
| `LESAVKA_SERVER_PORT` | manual probe/server connection override used to resolve the target Lesavka server and mode under test |
| `LESAVKA_SERVER_RC_ALLOW_GADGET_RESET` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_ANALYSIS_TIMELINE_WINDOW` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_CONTINUE_ON_FAIL` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_CORE_WEBCAM_MODES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_DEFAULT_VIDEO_DELAY_US` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_FORCE_GADGET_REBUILD` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_FRESHNESS_MAX_AGE_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_FRESHNESS_MAX_CLOCK_UNCERTAINTY_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_FRESHNESS_MAX_DRIFT_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_FRESHNESS_MIN_PAIRS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_AUDIO_HICCUPS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_AUDIO_LOW_RMS_WINDOWS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_AUDIO_P95_JITTER_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_VIDEO_DUPLICATE_FRAMES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_VIDEO_HICCUPS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_VIDEO_MISSING_FRAMES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_VIDEO_P95_JITTER_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MAX_VIDEO_UNDECODABLE_FRAMES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MIN_CODED_PAIRS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODE_AUDIO_DELAYS_US` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODE_DELAYS_US` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODE_DISCOVERY_EXCLUDE_REGEX` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODE_DISCOVERY_FPS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODE_DISCOVERY_INCLUDE_REGEX` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODE_DISCOVERY_SIZES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_MODE_SOURCE` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_PREROLL_DISCARD_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_PROBE_PREBUILD` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_PROMPT_SUDO_EARLY` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE_CODEC` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE_COMMAND` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE_REF` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE_SETTLE_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE_STRATEGY` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE_UPDATE` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_RECONFIGURE_VERBOSE` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_REMOTE_SUDO_PASSWORD` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_REPEAT_COUNT` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_REQUIRE_ALL_CODED_PAIRS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_REQUIRE_FRESHNESS_PASS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_REQUIRE_SMOOTHNESS_PASS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_REQUIRE_SYNC_PASS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_CONDITION_EVENT_WIDTH_CODES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_CONDITION_GAP_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_CONDITION_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_CONDITION_WARMUP_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_READY` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_READY_ATTEMPTS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_READY_DURATION_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_READY_MIN_PAIRS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_READY_MODE` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_READY_RETRY_DELAY_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_SIGNAL_READY_WARMUP_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_START_DELAY_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_STATIC_MAX_MEDIAN_SKEW_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_STATIC_MAX_P95_SKEW_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_STATIC_MAX_SPREAD_US` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_STATIC_MIN_RUNS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_STATIC_REQUIRE_FRESHNESS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_STATIC_REQUIRE_SMOOTHNESS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TETHYS_READY_TIMEOUT_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TETHYS_SETTLE_SECONDS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TUNE_CONFIRM` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TUNE_DELAYS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TUNE_MAX_ABS_SKEW_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TUNE_MAX_DRIFT_MS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TUNE_MAX_STEP_US` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TUNE_MIN_CHANGE_US` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_TUNE_MIN_PAIRS` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_VERBOSE_PROBES` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_RC_WAIT_TETHYS_READY` | manual server-to-RCT mode-matrix probe override; used to tune, confirm, or summarize server-generated UVC/UAC output against the RC capture target |
| `LESAVKA_SERVER_REPO` | manual probe/server connection override used to resolve the target Lesavka server and mode under test |
| `LESAVKA_SERVER_SCHEME` | manual probe/server connection override used to resolve the target Lesavka server and mode under test |
| `LESAVKA_STIMULUS_BROWSER_KIOSK` | manual browser-stimulus probe override; controls local review browser behavior for mirrored upstream A/V tests |
| `LESAVKA_STIMULUS_PREVIEW_SECONDS` | manual browser-stimulus probe override; controls local review browser behavior for mirrored upstream A/V tests |
| `LESAVKA_SYNC_ADAPTIVE_CALIBRATION` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_APPLY_CALIBRATION` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_CALIBRATION_SEGMENTS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_CALIBRATION_SEGMENTS_SET` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_CALIBRATION_TARGET` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_CONFIRMATION_SEGMENTS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_CONFIRM_AFTER_CALIBRATION` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_CONTINUE_ON_ANALYSIS_FAILURE` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_CONTINUOUS_BROWSER` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROBE_REPORT_DIR` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROBE_REPORT_JSON` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROVISIONAL_CALIBRATION` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROVISIONAL_GAIN` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROVISIONAL_MAX_DRIFT_MS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROVISIONAL_MAX_P95_MS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROVISIONAL_MAX_STEP_US` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_PROVISIONAL_MIN_PAIRS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_RAW_FAILURE_CALIBRATION` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_RAW_FAILURE_MAX_ABS_DELTA_MS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_REQUIRE_CONFIRMATION_PASS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_SAVE_CALIBRATION` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_SEGMENT_SETTLE_SECONDS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_SYNC_TOTAL_SEGMENTS` | manual upstream A/V sync probe override; controls mirrored/client-to-server calibration, confirmation, reporting, or failure handling |
| `LESAVKA_TEST_CAP_BUNDLED` | test/build contract variable; keeps deterministic coverage and probe preconditions explicit in CI or manual harnesses |
| `LESAVKA_UAC_APP_MAX_BUFFERS` | server UAC appsrc buffering override for lab tuning of microphone gadget output latency and stability |
| `LESAVKA_UAC_APP_MAX_BYTES` | server UAC appsrc buffering override for lab tuning of microphone gadget output latency and stability |
| `LESAVKA_UAC_APP_MAX_TIME_NS` | server UAC appsrc buffering override for lab tuning of microphone gadget output latency and stability |
| `LESAVKA_UPSTREAM_BLIND_HEAL` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_COOLDOWN_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_DEADBAND_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_GAIN` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_INTERVAL_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_MAX_CLIENT_SEND_P95_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_MAX_HANDOFF_P95_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_MAX_QUEUE_AGE_P95_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_MAX_SERVER_RECEIVE_P95_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_MAX_SINK_LATE_P95_MS` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_MAX_STEP_US` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_MIN_SAMPLES` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_BLIND_HEAL_TARGET` | server upstream media blind-healer tuning knob; adjusts cautious runtime offset correction when telemetry indicates persistent skew |
| `LESAVKA_UPSTREAM_SOURCE_LEAD_CAP_MS` | server upstream media timing override; bounds live source lead or playout behavior while tuning client-to-server transport |
| `LESAVKA_UVC_CONFIGFS_BASE` | server UVC gadget mode/configfs override used by runtime reconfiguration and hardware-in-the-loop probes |
| `LESAVKA_UVC_MODE` | server UVC gadget mode/configfs override used by runtime reconfiguration and hardware-in-the-loop probes |
@ -267,581 +267,8 @@ impl UpstreamMediaRuntime {
include!("upstream_media_runtime/lease_lifecycle.rs");

impl UpstreamMediaRuntime {
    /// Rebase one upstream video packet timestamp onto the shared session clock.
    #[must_use]
    pub fn map_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> Option<u64> {
        match self.plan_video_pts(remote_pts_us, frame_step_us.max(1)) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    /// Rebase one upstream audio packet timestamp onto the shared session clock.
    #[must_use]
    pub fn map_audio_pts(&self, remote_pts_us: u64) -> Option<u64> {
        match self.plan_audio_pts(remote_pts_us) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    /// Rebase and schedule one upstream video packet on the shared playout epoch.
    #[must_use]
    pub fn plan_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> UpstreamPlanDecision {
        self.plan_pts(
            UpstreamMediaKind::Camera,
            remote_pts_us,
            frame_step_us.max(1),
        )
    }

    /// Rebase and schedule one upstream audio packet on the shared playout epoch.
    #[must_use]
    pub fn plan_audio_pts(&self, remote_pts_us: u64) -> UpstreamPlanDecision {
        self.plan_pts(UpstreamMediaKind::Microphone, remote_pts_us, 1)
    }

    /// Schedule a packet from the bundled webcam/microphone transport.
    ///
    /// Inputs: the media kind, client capture timestamp, packet cadence floor,
    /// and the client-owned bundle epoch chosen for this gRPC stream.
    /// Outputs: the server playout decision for that packet.
    /// Why: bundled webcam media has already been synchronized on the client,
    /// so the server should not re-solve cross-stream startup pairing. It only
    /// rebases the shared client clock onto a fresh local playout epoch.
    #[must_use]
    pub fn plan_bundled_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
        bundle_base_remote_pts_us: u64,
        bundle_epoch: Instant,
    ) -> UpstreamPlanDecision {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let session_id = state.session_id;
        match kind {
            UpstreamMediaKind::Camera => {
                state.camera_packet_count = state.camera_packet_count.saturating_add(1);
                state
                    .first_camera_remote_pts_us
                    .get_or_insert(remote_pts_us);
                state.camera_startup_ready = true;
            }
            UpstreamMediaKind::Microphone => {
                state.microphone_packet_count = state.microphone_packet_count.saturating_add(1);
                state
                    .first_microphone_remote_pts_us
                    .get_or_insert(remote_pts_us);
            }
        }
        update_latest_remote_pts(&mut state, kind, remote_pts_us);
        if state.session_base_remote_pts_us.is_none() {
            state.session_base_remote_pts_us = Some(bundle_base_remote_pts_us);
            state.playout_epoch = Some(bundle_epoch);
            state.pairing_anchor_deadline = Some(bundle_epoch);
            state.phase = UpstreamSyncPhase::Syncing;
            state.last_reason = "client-bundled upstream media epoch established".to_string();
            self.pairing_state_notify.notify_waiters();
            info!(
                session_id,
                bundle_base_remote_pts_us, "client-bundled upstream media epoch established"
            );
        }

        let session_base_remote_pts_us = state
            .session_base_remote_pts_us
            .unwrap_or(bundle_base_remote_pts_us);
        if remote_pts_us < session_base_remote_pts_us {
            return UpstreamPlanDecision::DropBeforeOverlap;
        }

        let max_live_lag = upstream_max_live_lag();
        let source_lag = source_lag_for_kind(&state, kind, remote_pts_us);
        if source_lag > max_live_lag {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason =
                        "dropped stale bundled video beyond max live lag".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason =
                        "dropped stale bundled audio beyond max live lag".to_string();
                }
            }
            state.phase = UpstreamSyncPhase::Healing;
            return UpstreamPlanDecision::DropStale("bundled packet exceeded max live lag");
        }

        let mut local_pts_us = remote_pts_us.saturating_sub(session_base_remote_pts_us);
        let last_slot = match kind {
            UpstreamMediaKind::Camera => &mut state.last_video_local_pts_us,
            UpstreamMediaKind::Microphone => &mut state.last_audio_local_pts_us,
        };
        if let Some(last_pts_us) = *last_slot
            && local_pts_us <= last_pts_us
        {
            local_pts_us = last_pts_us.saturating_add(min_step_us.max(1));
        }
        *last_slot = Some(local_pts_us);

        let sink_offset_us = self.bundled_playout_offset_us(kind);
        let epoch = state.playout_epoch.unwrap_or(bundle_epoch);
        let mut due_at =
            apply_playout_offset(epoch + Duration::from_micros(local_pts_us), sink_offset_us);
        let now = Instant::now();
        let mut late_by = now.checked_duration_since(due_at).unwrap_or_default();
        let playout_delay = upstream_bundled_playout_delay().min(max_live_lag);
        let reanchor_threshold = upstream_reanchor_late_threshold(playout_delay);
        let max_future_wait = max_live_lag.saturating_sub(source_lag);
        let output_offset = if sink_offset_us >= 0 {
            Duration::from_micros(sink_offset_us as u64)
        } else {
            Duration::ZERO
        };
        let due_future_wait = due_at.saturating_duration_since(now);
        let due_future_wait_before_output_compensation =
            due_future_wait.saturating_sub(output_offset);
        if late_by > reanchor_threshold
            || due_future_wait_before_output_compensation > max_future_wait
        {
            let desired_delay = playout_delay.min(max_future_wait);
            let desired_due_at = now + desired_delay;
            let unoffset_due_at = apply_playout_offset(desired_due_at, -sink_offset_us);
            let recovered_epoch = unoffset_due_at
                .checked_sub(Duration::from_micros(local_pts_us))
                .unwrap_or(unoffset_due_at);
            state.playout_epoch = Some(recovered_epoch);
            state.pairing_anchor_deadline = Some(desired_due_at);
            state.freshness_reanchors = state.freshness_reanchors.saturating_add(1);
            state.phase = UpstreamSyncPhase::Healing;
            state.last_reason =
                "reanchored bundled upstream playhead to preserve freshness".to_string();
            due_at = apply_playout_offset(
                recovered_epoch + Duration::from_micros(local_pts_us),
                sink_offset_us,
            );
            late_by = now.checked_duration_since(due_at).unwrap_or_default();
            info!(
                session_id,
                ?kind,
                local_pts_us,
                remote_pts_us,
                recovery_buffer_ms = desired_delay.as_millis(),
                max_live_lag_ms = max_live_lag.as_millis(),
                source_lag_ms = source_lag.as_millis(),
                output_offset_ms = output_offset.as_millis(),
                "bundled upstream media playhead reanchored to preserve freshness"
            );
        }
        let predicted_lag_before_output_compensation = source_lag.saturating_add(
            due_at
                .saturating_duration_since(now)
                .saturating_sub(output_offset),
        );
        if predicted_lag_before_output_compensation > max_live_lag {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason = "dropped bundled video that would exceed max live lag before output compensation".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason = "dropped bundled audio that would exceed max live lag before output compensation".to_string();
                }
            }
            state.phase = UpstreamSyncPhase::Healing;
            return UpstreamPlanDecision::DropStale(
                "bundled packet would exceed max live lag before output compensation",
            );
        }

        if kind == UpstreamMediaKind::Microphone {
            self.audio_progress_notify.notify_waiters();
        }
        UpstreamPlanDecision::Play(PlannedUpstreamPacket {
            local_pts_us,
            due_at,
            late_by,
            source_lag,
        })
    }

    /// Hold video until the audio master has at least reached the same capture
    /// moment, or until the bounded sync grace is exhausted.
    pub async fn wait_for_audio_master(&self, video_local_pts_us: u64, due_at: Instant) -> bool {
        let slack_us = upstream_pairing_master_slack()
            .as_micros()
            .min(u64::MAX as u128) as u64;
        let audio_delay_allowance_us = self.positive_audio_delay_allowance_us();
        let deadline = due_at + upstream_audio_master_wait_grace();
        loop {
            let notified = self.audio_progress_notify.notified();
            {
                let state = self
                    .state
                    .lock()
                    .expect("upstream media state mutex poisoned");
                if state.active_microphone_generation.is_none() {
                    return true;
                }
                let audio_presented_pts_us = state.last_audio_presented_pts_us.unwrap_or(0);
                if audio_presented_pts_us
                    .saturating_add(slack_us)
                    .saturating_add(audio_delay_allowance_us)
                    >= video_local_pts_us
                {
                    return true;
                }
            }
            if Instant::now() >= deadline {
                return false;
            }
            tokio::select! {
                _ = notified => {}
                _ = tokio::time::sleep_until(deadline) => return false,
            }
        }
    }

    fn plan_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
    ) -> UpstreamPlanDecision {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let session_id = state.session_id;
        let packet_count = match kind {
            UpstreamMediaKind::Camera => {
                state.camera_packet_count = state.camera_packet_count.saturating_add(1);
                state.camera_packet_count
            }
            UpstreamMediaKind::Microphone => {
                state.microphone_packet_count = state.microphone_packet_count.saturating_add(1);
                state.microphone_packet_count
            }
        };
        update_latest_remote_pts(&mut state, kind, remote_pts_us);
        let mut first_remote_for_kind = match kind {
            UpstreamMediaKind::Camera => {
                let first_slot = &mut state.first_camera_remote_pts_us;
                *first_slot.get_or_insert(remote_pts_us)
            }
            UpstreamMediaKind::Microphone => {
                let first_slot = &mut state.first_microphone_remote_pts_us;
                *first_slot.get_or_insert(remote_pts_us)
            }
        };
        if kind == UpstreamMediaKind::Camera {
            let startup_grace_us = upstream_camera_startup_grace_us();
            if !state.camera_startup_ready
                && (startup_grace_us == 0
                    || remote_pts_us.saturating_sub(first_remote_for_kind) >= startup_grace_us)
            {
                state.camera_startup_ready = true;
                state.first_camera_remote_pts_us = Some(remote_pts_us);
                first_remote_for_kind = remote_pts_us;
            }
        }
        let now = Instant::now();
        let pairing_deadline = *state
            .pairing_anchor_deadline
            .get_or_insert_with(|| now + upstream_playout_delay());
        let playout_delay = upstream_playout_delay();
        let max_live_lag = upstream_max_live_lag();

        if state.session_base_remote_pts_us.is_none() {
            if state.session_started_at.is_some_and(|started_at| {
                now.saturating_duration_since(started_at) > upstream_startup_timeout()
            }) {
                state.phase = UpstreamSyncPhase::Failed;
                state.startup_timeouts = state.startup_timeouts.saturating_add(1);
                state.last_reason =
                    "paired upstream startup did not converge before timeout".to_string();
                return UpstreamPlanDecision::StartupFailed(
                    "paired upstream startup did not converge before timeout",
                );
            }
            if state.first_camera_remote_pts_us.is_some()
                && state.first_microphone_remote_pts_us.is_some()
                && state.camera_startup_ready
            {
                let first_camera_remote_pts_us =
                    state.first_camera_remote_pts_us.unwrap_or_default();
                let first_microphone_remote_pts_us =
                    state.first_microphone_remote_pts_us.unwrap_or_default();
                state.session_base_remote_pts_us =
                    Some(first_camera_remote_pts_us.max(first_microphone_remote_pts_us));
                let overlap_epoch = now + playout_delay;
                state.playout_epoch = Some(overlap_epoch);
                state.pairing_anchor_deadline = Some(overlap_epoch);
                state.phase = UpstreamSyncPhase::Syncing;
                state.last_reason = "fresh audio/video overlap anchor established".to_string();
                if !state.startup_anchor_logged {
                    let startup_delta_us =
                        first_camera_remote_pts_us as i128 - first_microphone_remote_pts_us as i128;
                    info!(
                        session_id,
                        first_camera_remote_pts_us,
                        first_microphone_remote_pts_us,
                        overlap_base_remote_pts_us =
                            state.session_base_remote_pts_us.unwrap_or_default(),
                        startup_delta_us,
                        "upstream media overlap anchors established"
                    );
                    state.startup_anchor_logged = true;
                }
                self.pairing_state_notify.notify_waiters();
            } else if now < pairing_deadline {
                state.phase = UpstreamSyncPhase::Acquiring;
                state.last_reason = "awaiting both upstream media streams".to_string();
                if upstream_timing_trace_enabled()
                    && (packet_count <= 10 || packet_count.is_multiple_of(300))
                {
                    info!(
                        session_id,
                        ?kind,
                        packet_count,
                        remote_pts_us,
                        wait_ms = pairing_deadline.saturating_duration_since(now).as_millis(),
                        "upstream media packet buffered while awaiting the counterpart stream"
                    );
                }
                return UpstreamPlanDecision::AwaitingPair;
            } else if state.first_camera_remote_pts_us.is_some() && !state.camera_startup_ready {
                state.phase = UpstreamSyncPhase::Syncing;
                state.last_reason = "camera startup warm-up is still in progress".to_string();
                if upstream_timing_trace_enabled()
                    && (packet_count <= 10 || packet_count.is_multiple_of(300))
                {
                    info!(
                        session_id,
                        ?kind,
                        packet_count,
                        remote_pts_us,
                        "upstream media packet buffered while camera startup warm-up is still in progress"
                    );
                }
                return UpstreamPlanDecision::AwaitingPair;
            } else if upstream_require_paired_startup() {
                let refreshed = refresh_unpaired_pairing_anchor(
                    &mut state,
                    kind,
                    remote_pts_us,
                    now + playout_delay,
                );
                if refreshed || upstream_timing_trace_enabled() {
                    info!(
                        session_id,
                        ?kind,
                        packet_count,
                        remote_pts_us,
                        refreshed_anchor = refreshed,
                        healing_window_ms = playout_delay.as_millis(),
                        "upstream media pairing window expired; holding one-sided stream for synced startup"
                    );
                }
                state.phase = UpstreamSyncPhase::Syncing;
                state.last_reason = "holding one-sided stream for synced startup".to_string();
                return UpstreamPlanDecision::AwaitingPair;
            } else {
                let single_stream_base_remote_pts_us = match kind {
                    UpstreamMediaKind::Camera => {
                        state.first_camera_remote_pts_us.unwrap_or(remote_pts_us)
                    }
                    UpstreamMediaKind::Microphone => state
                        .first_microphone_remote_pts_us
                        .unwrap_or(remote_pts_us),
                };
                state.session_base_remote_pts_us = Some(single_stream_base_remote_pts_us);
                let one_sided_epoch = now + playout_delay;
                state.playout_epoch = Some(one_sided_epoch);
                state.pairing_anchor_deadline = Some(one_sided_epoch);
                info!(
                    session_id,
                    ?kind,
                    single_stream_base_remote_pts_us,
                    "upstream media pairing window expired; continuing with one-sided playout"
                );
                self.pairing_state_notify.notify_waiters();
            }
        }

        let session_base_remote_pts_us = state.session_base_remote_pts_us.unwrap_or(remote_pts_us);
        if remote_pts_us < session_base_remote_pts_us {
            if upstream_timing_trace_enabled()
                && (packet_count <= 10 || packet_count.is_multiple_of(300))
            {
                info!(
                    session_id,
                    ?kind,
                    packet_count,
                    remote_pts_us,
                    session_base_remote_pts_us,
                    "upstream media packet dropped before the shared overlap base"
                );
            }
            return UpstreamPlanDecision::DropBeforeOverlap;
        }

        let source_lag = source_lag_for_kind(&state, kind, remote_pts_us);
        if source_lag > max_live_lag {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason = "dropped stale video beyond max live lag".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason = "dropped stale audio beyond max live lag".to_string();
                }
            }
            state.phase = UpstreamSyncPhase::Healing;
            return UpstreamPlanDecision::DropStale("packet exceeded max live lag");
        }

        let mut local_pts_us = remote_pts_us.saturating_sub(session_base_remote_pts_us);
        let last_slot = match kind {
            UpstreamMediaKind::Camera => &mut state.last_video_local_pts_us,
            UpstreamMediaKind::Microphone => &mut state.last_audio_local_pts_us,
        };
        if let Some(last_pts_us) = *last_slot
            && local_pts_us <= last_pts_us
        {
            local_pts_us = last_pts_us.saturating_add(min_step_us.max(1));
        }
        *last_slot = Some(local_pts_us);
        let audio_ahead_video_allowance_us = self.audio_ahead_video_allowance_us();
        if kind == UpstreamMediaKind::Camera
            && state
                .last_audio_presented_pts_us
                .is_some_and(|audio_pts_us| {
                    video_is_too_far_behind_audio(
                        local_pts_us,
                        audio_pts_us,
                        audio_ahead_video_allowance_us,
                    )
                })
        {
            state.skew_video_drops = state.skew_video_drops.saturating_add(1);
            state.video_freezes = state.video_freezes.saturating_add(1);
            state.phase = UpstreamSyncPhase::Healing;
            state.last_reason =
                "dropped video frame that was too far behind the audio master".to_string();
            return UpstreamPlanDecision::DropStale("video frame was too far behind audio master");
        }
        let epoch = *state.playout_epoch.get_or_insert(pairing_deadline);
        let sink_offset_us = self.playout_offset_us(kind);
        let playout_delay = upstream_playout_delay().min(max_live_lag);
        let mut due_at =
            apply_playout_offset(epoch + Duration::from_micros(local_pts_us), sink_offset_us);
        let mut late_by = now.checked_duration_since(due_at).unwrap_or_default();
        let reanchor_threshold = upstream_reanchor_late_threshold(playout_delay);
        let intentional_future_wait_allowance =
            Duration::from_micros(self.intentional_future_wait_allowance_us(kind));
        let max_future_wait = max_live_lag
            .saturating_sub(source_lag)
            .saturating_add(intentional_future_wait_allowance);
        let due_future_wait = due_at.saturating_duration_since(now);
        if late_by > reanchor_threshold || due_future_wait > max_future_wait {
            let old_late_by = late_by;
            let old_future_wait = due_future_wait;
            let desired_delay = playout_delay.min(max_future_wait);
            let desired_due_at = now + desired_delay;
            let unoffset_due_at = apply_playout_offset(desired_due_at, -sink_offset_us);
            let recovered_epoch = unoffset_due_at
                .checked_sub(Duration::from_micros(local_pts_us))
                .unwrap_or(unoffset_due_at);
            state.playout_epoch = Some(recovered_epoch);
            state.pairing_anchor_deadline = Some(desired_due_at);
            state.freshness_reanchors = state.freshness_reanchors.saturating_add(1);
            state.phase = UpstreamSyncPhase::Healing;
            state.last_reason = "reanchored upstream playhead to preserve freshness".to_string();
            due_at = apply_playout_offset(
                recovered_epoch + Duration::from_micros(local_pts_us),
                sink_offset_us,
            );
            late_by = now.checked_duration_since(due_at).unwrap_or_default();
            info!(
                session_id,
                ?kind,
                packet_count,
                local_pts_us,
                remote_pts_us,
                old_late_by_ms = old_late_by.as_millis(),
                old_future_wait_ms = old_future_wait.as_millis(),
                recovery_buffer_ms = playout_delay.as_millis(),
                reanchor_threshold_ms = reanchor_threshold.as_millis(),
                max_live_lag_ms = max_live_lag.as_millis(),
                source_lag_ms = source_lag.as_millis(),
                "upstream media playhead reanchored to preserve freshness"
            );
        }
        let predicted_lag_at_playout =
            source_lag.saturating_add(due_at.saturating_duration_since(now));
        if predicted_lag_at_playout > max_live_lag.saturating_add(intentional_future_wait_allowance)
        {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason =
                        "dropped video that would exceed max live lag at playout".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason =
                        "dropped audio that would exceed max live lag at playout".to_string();
                }
            }
            state.phase = UpstreamSyncPhase::Healing;
            return UpstreamPlanDecision::DropStale("packet would exceed max live lag at playout");
        }
        if upstream_timing_trace_enabled()
            && (packet_count <= 10 || packet_count.is_multiple_of(300))
        {
            let playout_delay_us = due_at.saturating_duration_since(now).as_micros();
            let late_by_us = late_by.as_micros();
            info!(
                session_id,
                ?kind,
                packet_count,
                remote_pts_us,
                session_base_remote_pts_us,
                first_remote_for_kind,
                remote_elapsed_us = remote_pts_us.saturating_sub(session_base_remote_pts_us),
                local_pts_us,
                playout_delay_us,
                sink_offset_us,
                late_by_us,
                source_lag_us = source_lag.as_micros(),
                "upstream media rebase sample"
            );
        }
        if kind == UpstreamMediaKind::Microphone {
            self.audio_progress_notify.notify_waiters();
        }
        UpstreamPlanDecision::Play(PlannedUpstreamPacket {
            local_pts_us,
            due_at,
            late_by,
            source_lag,
        })
    }
}
include!("upstream_media_runtime/playout_planning_methods.rs");
include!("upstream_media_runtime/playout_decision_core.rs");

fn update_latest_remote_pts(
    state: &mut UpstreamClockState,
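The core of the rebase logic above reduces to a small pure function: subtract the session base from the client capture timestamp, then clamp the result to be strictly after the previous packet's local PTS by at least one cadence step. A minimal sketch (helper name and signature are ours, not the crate's API):

```rust
/// Hypothetical helper illustrating the v2 rebase arithmetic: map a client
/// capture timestamp onto the local playout timeline and enforce a
/// monotonic minimum step between consecutive packets.
fn rebase_pts(
    remote_pts_us: u64,
    base_remote_pts_us: u64,
    last_local_pts_us: Option<u64>,
    min_step_us: u64,
) -> u64 {
    // Packets before the base saturate to 0 (the real code drops them).
    let mut local = remote_pts_us.saturating_sub(base_remote_pts_us);
    if let Some(last) = last_local_pts_us {
        if local <= last {
            // Never emit a PTS at or before the previous one; push it one
            // cadence step past the last emitted timestamp.
            local = last.saturating_add(min_step_us.max(1));
        }
    }
    local
}

fn main() {
    // A packet captured 10ms after the bundle base lands at 10ms locally.
    assert_eq!(rebase_pts(1_010_000, 1_000_000, None, 33_333), 10_000);
    // A non-increasing timestamp is pushed one step past the last (10ms + 33.333ms).
    assert_eq!(rebase_pts(1_005_000, 1_000_000, Some(10_000), 33_333), 43_333);
    println!("rebase sketch ok");
}
```

The playout instant then follows as `epoch + Duration::from_micros(local_pts_us)` plus the explicit sink offset, which is the only place device-specific delay enters in v2.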
@ -0,0 +1,331 @@
impl UpstreamMediaRuntime {
    fn plan_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
    ) -> UpstreamPlanDecision {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let session_id = state.session_id;
        let packet_count = match kind {
            UpstreamMediaKind::Camera => {
                state.camera_packet_count = state.camera_packet_count.saturating_add(1);
                state.camera_packet_count
            }
            UpstreamMediaKind::Microphone => {
                state.microphone_packet_count = state.microphone_packet_count.saturating_add(1);
                state.microphone_packet_count
            }
        };
        update_latest_remote_pts(&mut state, kind, remote_pts_us);
        let mut first_remote_for_kind = match kind {
            UpstreamMediaKind::Camera => {
                let first_slot = &mut state.first_camera_remote_pts_us;
                *first_slot.get_or_insert(remote_pts_us)
            }
            UpstreamMediaKind::Microphone => {
                let first_slot = &mut state.first_microphone_remote_pts_us;
                *first_slot.get_or_insert(remote_pts_us)
            }
        };
        if kind == UpstreamMediaKind::Camera {
            let startup_grace_us = upstream_camera_startup_grace_us();
            if !state.camera_startup_ready
                && (startup_grace_us == 0
                    || remote_pts_us.saturating_sub(first_remote_for_kind) >= startup_grace_us)
            {
                state.camera_startup_ready = true;
                state.first_camera_remote_pts_us = Some(remote_pts_us);
                first_remote_for_kind = remote_pts_us;
            }
        }
        let now = Instant::now();
        let pairing_deadline = *state
            .pairing_anchor_deadline
            .get_or_insert_with(|| now + upstream_playout_delay());
        let playout_delay = upstream_playout_delay();
        let max_live_lag = upstream_max_live_lag();

        if state.session_base_remote_pts_us.is_none() {
            if state.session_started_at.is_some_and(|started_at| {
                now.saturating_duration_since(started_at) > upstream_startup_timeout()
            }) {
                state.phase = UpstreamSyncPhase::Failed;
                state.startup_timeouts = state.startup_timeouts.saturating_add(1);
                state.last_reason =
                    "paired upstream startup did not converge before timeout".to_string();
                return UpstreamPlanDecision::StartupFailed(
                    "paired upstream startup did not converge before timeout",
                );
            }
            if state.first_camera_remote_pts_us.is_some()
                && state.first_microphone_remote_pts_us.is_some()
                && state.camera_startup_ready
            {
                let first_camera_remote_pts_us =
                    state.first_camera_remote_pts_us.unwrap_or_default();
                let first_microphone_remote_pts_us =
                    state.first_microphone_remote_pts_us.unwrap_or_default();
                state.session_base_remote_pts_us =
                    Some(first_camera_remote_pts_us.max(first_microphone_remote_pts_us));
                let overlap_epoch = now + playout_delay;
                state.playout_epoch = Some(overlap_epoch);
                state.pairing_anchor_deadline = Some(overlap_epoch);
                state.phase = UpstreamSyncPhase::Syncing;
                state.last_reason = "fresh audio/video overlap anchor established".to_string();
                if !state.startup_anchor_logged {
                    let startup_delta_us =
                        first_camera_remote_pts_us as i128 - first_microphone_remote_pts_us as i128;
                    info!(
                        session_id,
                        first_camera_remote_pts_us,
                        first_microphone_remote_pts_us,
                        overlap_base_remote_pts_us =
                            state.session_base_remote_pts_us.unwrap_or_default(),
                        startup_delta_us,
                        "upstream media overlap anchors established"
                    );
                    state.startup_anchor_logged = true;
                }
                self.pairing_state_notify.notify_waiters();
            } else if now < pairing_deadline {
                state.phase = UpstreamSyncPhase::Acquiring;
                state.last_reason = "awaiting both upstream media streams".to_string();
                if upstream_timing_trace_enabled()
                    && (packet_count <= 10 || packet_count.is_multiple_of(300))
                {
                    info!(
                        session_id,
                        ?kind,
                        packet_count,
                        remote_pts_us,
                        wait_ms = pairing_deadline.saturating_duration_since(now).as_millis(),
                        "upstream media packet buffered while awaiting the counterpart stream"
                    );
                }
                return UpstreamPlanDecision::AwaitingPair;
            } else if state.first_camera_remote_pts_us.is_some() && !state.camera_startup_ready {
                state.phase = UpstreamSyncPhase::Syncing;
                state.last_reason = "camera startup warm-up is still in progress".to_string();
                if upstream_timing_trace_enabled()
                    && (packet_count <= 10 || packet_count.is_multiple_of(300))
                {
                    info!(
                        session_id,
                        ?kind,
                        packet_count,
                        remote_pts_us,
                        "upstream media packet buffered while camera startup warm-up is still in progress"
                    );
                }
                return UpstreamPlanDecision::AwaitingPair;
            } else if upstream_require_paired_startup() {
                let refreshed = refresh_unpaired_pairing_anchor(
                    &mut state,
                    kind,
                    remote_pts_us,
                    now + playout_delay,
                );
                if refreshed || upstream_timing_trace_enabled() {
                    info!(
                        session_id,
                        ?kind,
                        packet_count,
                        remote_pts_us,
                        refreshed_anchor = refreshed,
                        healing_window_ms = playout_delay.as_millis(),
                        "upstream media pairing window expired; holding one-sided stream for synced startup"
                    );
                }
                state.phase = UpstreamSyncPhase::Syncing;
                state.last_reason = "holding one-sided stream for synced startup".to_string();
                return UpstreamPlanDecision::AwaitingPair;
            } else {
                let single_stream_base_remote_pts_us = match kind {
                    UpstreamMediaKind::Camera => {
                        state.first_camera_remote_pts_us.unwrap_or(remote_pts_us)
                    }
                    UpstreamMediaKind::Microphone => state
                        .first_microphone_remote_pts_us
                        .unwrap_or(remote_pts_us),
                };
                state.session_base_remote_pts_us = Some(single_stream_base_remote_pts_us);
                let one_sided_epoch = now + playout_delay;
                state.playout_epoch = Some(one_sided_epoch);
                state.pairing_anchor_deadline = Some(one_sided_epoch);
                info!(
                    session_id,
                    ?kind,
                    single_stream_base_remote_pts_us,
                    "upstream media pairing window expired; continuing with one-sided playout"
                );
                self.pairing_state_notify.notify_waiters();
            }
        }

        let session_base_remote_pts_us = state.session_base_remote_pts_us.unwrap_or(remote_pts_us);
        if remote_pts_us < session_base_remote_pts_us {
            if upstream_timing_trace_enabled()
                && (packet_count <= 10 || packet_count.is_multiple_of(300))
            {
                info!(
                    session_id,
                    ?kind,
                    packet_count,
                    remote_pts_us,
                    session_base_remote_pts_us,
                    "upstream media packet dropped before the shared overlap base"
                );
            }
            return UpstreamPlanDecision::DropBeforeOverlap;
        }

        let source_lag = source_lag_for_kind(&state, kind, remote_pts_us);
        if source_lag > max_live_lag {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason = "dropped stale video beyond max live lag".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason = "dropped stale audio beyond max live lag".to_string();
                }
            }
            state.phase = UpstreamSyncPhase::Healing;
            return UpstreamPlanDecision::DropStale("packet exceeded max live lag");
        }

        let mut local_pts_us = remote_pts_us.saturating_sub(session_base_remote_pts_us);
        let last_slot = match kind {
            UpstreamMediaKind::Camera => &mut state.last_video_local_pts_us,
            UpstreamMediaKind::Microphone => &mut state.last_audio_local_pts_us,
        };
        if let Some(last_pts_us) = *last_slot
            && local_pts_us <= last_pts_us
        {
            local_pts_us = last_pts_us.saturating_add(min_step_us.max(1));
        }
        *last_slot = Some(local_pts_us);
        let audio_ahead_video_allowance_us = self.audio_ahead_video_allowance_us();
        if kind == UpstreamMediaKind::Camera
            && state
                .last_audio_presented_pts_us
                .is_some_and(|audio_pts_us| {
                    video_is_too_far_behind_audio(
                        local_pts_us,
                        audio_pts_us,
                        audio_ahead_video_allowance_us,
                    )
                })
        {
            state.skew_video_drops = state.skew_video_drops.saturating_add(1);
            state.video_freezes = state.video_freezes.saturating_add(1);
            state.phase = UpstreamSyncPhase::Healing;
            state.last_reason =
                "dropped video frame that was too far behind the audio master".to_string();
            return UpstreamPlanDecision::DropStale("video frame was too far behind audio master");
        }
        let epoch = *state.playout_epoch.get_or_insert(pairing_deadline);
        let sink_offset_us = self.playout_offset_us(kind);
        let playout_delay = upstream_playout_delay().min(max_live_lag);
        let mut due_at =
            apply_playout_offset(epoch + Duration::from_micros(local_pts_us), sink_offset_us);
        let mut late_by = now.checked_duration_since(due_at).unwrap_or_default();
        let reanchor_threshold = upstream_reanchor_late_threshold(playout_delay);
        let intentional_future_wait_allowance =
            Duration::from_micros(self.intentional_future_wait_allowance_us(kind));
        let max_future_wait = max_live_lag
            .saturating_sub(source_lag)
            .saturating_add(intentional_future_wait_allowance);
        let due_future_wait = due_at.saturating_duration_since(now);
        if late_by > reanchor_threshold || due_future_wait > max_future_wait {
            let old_late_by = late_by;
            let old_future_wait = due_future_wait;
            let desired_delay = playout_delay.min(max_future_wait);
            let desired_due_at = now + desired_delay;
            let unoffset_due_at = apply_playout_offset(desired_due_at, -sink_offset_us);
            let recovered_epoch = unoffset_due_at
                .checked_sub(Duration::from_micros(local_pts_us))
                .unwrap_or(unoffset_due_at);
            state.playout_epoch = Some(recovered_epoch);
            state.pairing_anchor_deadline = Some(desired_due_at);
            state.freshness_reanchors = state.freshness_reanchors.saturating_add(1);
            state.phase = UpstreamSyncPhase::Healing;
            state.last_reason = "reanchored upstream playhead to preserve freshness".to_string();
            due_at = apply_playout_offset(
                recovered_epoch + Duration::from_micros(local_pts_us),
                sink_offset_us,
            );
            late_by = now.checked_duration_since(due_at).unwrap_or_default();
            info!(
                session_id,
                ?kind,
                packet_count,
                local_pts_us,
                remote_pts_us,
                old_late_by_ms = old_late_by.as_millis(),
                old_future_wait_ms = old_future_wait.as_millis(),
                recovery_buffer_ms = playout_delay.as_millis(),
                reanchor_threshold_ms = reanchor_threshold.as_millis(),
                max_live_lag_ms = max_live_lag.as_millis(),
                source_lag_ms = source_lag.as_millis(),
                "upstream media playhead reanchored to preserve freshness"
            );
        }
        let predicted_lag_at_playout =
            source_lag.saturating_add(due_at.saturating_duration_since(now));
        if predicted_lag_at_playout > max_live_lag.saturating_add(intentional_future_wait_allowance)
        {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason =
                        "dropped video that would exceed max live lag at playout".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason =
|
||||
"dropped audio that would exceed max live lag at playout".to_string();
|
||||
}
|
||||
}
|
||||
state.phase = UpstreamSyncPhase::Healing;
|
||||
return UpstreamPlanDecision::DropStale("packet would exceed max live lag at playout");
|
||||
}
|
||||
if upstream_timing_trace_enabled()
|
||||
&& (packet_count <= 10 || packet_count.is_multiple_of(300))
|
||||
{
|
||||
let playout_delay_us = due_at.saturating_duration_since(now).as_micros();
|
||||
let late_by_us = late_by.as_micros();
|
||||
info!(
|
||||
session_id,
|
||||
?kind,
|
||||
packet_count,
|
||||
remote_pts_us,
|
||||
session_base_remote_pts_us,
|
||||
first_remote_for_kind,
|
||||
remote_elapsed_us = remote_pts_us.saturating_sub(session_base_remote_pts_us),
|
||||
local_pts_us,
|
||||
playout_delay_us,
|
||||
sink_offset_us,
|
||||
late_by_us,
|
||||
source_lag_us = source_lag.as_micros(),
|
||||
"upstream media rebase sample"
|
||||
);
|
||||
}
|
||||
if kind == UpstreamMediaKind::Microphone {
|
||||
self.audio_progress_notify.notify_waiters();
|
||||
}
|
||||
UpstreamPlanDecision::Play(PlannedUpstreamPacket {
|
||||
local_pts_us,
|
||||
due_at,
|
||||
late_by,
|
||||
source_lag,
|
||||
})
|
||||
}
|
||||
}
|
||||
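The freshness gate above drops any packet whose lag, measured at the moment it would actually play, would exceed the live-lag budget. A minimal standalone sketch of that decision (the function name and millisecond values here are illustrative, not the crate's API):

```rust
use std::time::Duration;

// Sketch of the freshness gate: the lag a packet carries at playout time is
// its lag at arrival plus however long it still has to wait before it is due.
fn would_exceed_live_lag(
    source_lag: Duration,
    wait_until_due: Duration,
    max_live_lag: Duration,
) -> bool {
    source_lag.saturating_add(wait_until_due) > max_live_lag
}

fn main() {
    let budget = Duration::from_millis(250);
    // 100 ms behind and due in 100 ms: still inside the budget, so it plays.
    assert!(!would_exceed_live_lag(
        Duration::from_millis(100),
        Duration::from_millis(100),
        budget
    ));
    // 200 ms behind and due in 100 ms: projected 300 ms lag, so it is dropped.
    assert!(would_exceed_live_lag(
        Duration::from_millis(200),
        Duration::from_millis(100),
        budget
    ));
}
```

Using the projected lag rather than the arrival lag means a packet that is fresh now but scheduled far in the future is still rejected, which is why the planner subtracts the waiting time from the budget before agreeing to hold a frame.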
@ -0,0 +1,245 @@
impl UpstreamMediaRuntime {
    /// Rebase one upstream video packet timestamp onto the shared session clock.
    #[must_use]
    pub fn map_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> Option<u64> {
        match self.plan_video_pts(remote_pts_us, frame_step_us.max(1)) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    /// Rebase one upstream audio packet timestamp onto the shared session clock.
    #[must_use]
    pub fn map_audio_pts(&self, remote_pts_us: u64) -> Option<u64> {
        match self.plan_audio_pts(remote_pts_us) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    /// Rebase and schedule one upstream video packet on the shared playout epoch.
    #[must_use]
    pub fn plan_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> UpstreamPlanDecision {
        self.plan_pts(
            UpstreamMediaKind::Camera,
            remote_pts_us,
            frame_step_us.max(1),
        )
    }

    /// Rebase and schedule one upstream audio packet on the shared playout epoch.
    #[must_use]
    pub fn plan_audio_pts(&self, remote_pts_us: u64) -> UpstreamPlanDecision {
        self.plan_pts(UpstreamMediaKind::Microphone, remote_pts_us, 1)
    }

    /// Schedule a packet from the bundled webcam/microphone transport.
    ///
    /// Inputs: the media kind, client capture timestamp, packet cadence floor,
    /// and the client-owned bundle epoch chosen for this gRPC stream.
    /// Outputs: the server playout decision for that packet.
    /// Why: bundled webcam media has already been synchronized on the client,
    /// so the server should not re-solve cross-stream startup pairing. It only
    /// rebases the shared client clock onto a fresh local playout epoch.
    #[must_use]
    pub fn plan_bundled_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
        bundle_base_remote_pts_us: u64,
        bundle_epoch: Instant,
    ) -> UpstreamPlanDecision {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let session_id = state.session_id;
        match kind {
            UpstreamMediaKind::Camera => {
                state.camera_packet_count = state.camera_packet_count.saturating_add(1);
                state
                    .first_camera_remote_pts_us
                    .get_or_insert(remote_pts_us);
                state.camera_startup_ready = true;
            }
            UpstreamMediaKind::Microphone => {
                state.microphone_packet_count = state.microphone_packet_count.saturating_add(1);
                state
                    .first_microphone_remote_pts_us
                    .get_or_insert(remote_pts_us);
            }
        }
        update_latest_remote_pts(&mut state, kind, remote_pts_us);
        if state.session_base_remote_pts_us.is_none() {
            state.session_base_remote_pts_us = Some(bundle_base_remote_pts_us);
            state.playout_epoch = Some(bundle_epoch);
            state.pairing_anchor_deadline = Some(bundle_epoch);
            state.phase = UpstreamSyncPhase::Syncing;
            state.last_reason = "client-bundled upstream media epoch established".to_string();
            self.pairing_state_notify.notify_waiters();
            info!(
                session_id,
                bundle_base_remote_pts_us, "client-bundled upstream media epoch established"
            );
        }

        let session_base_remote_pts_us = state
            .session_base_remote_pts_us
            .unwrap_or(bundle_base_remote_pts_us);
        if remote_pts_us < session_base_remote_pts_us {
            return UpstreamPlanDecision::DropBeforeOverlap;
        }

        let max_live_lag = upstream_max_live_lag();
        let source_lag = source_lag_for_kind(&state, kind, remote_pts_us);
        if source_lag > max_live_lag {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason =
                        "dropped stale bundled video beyond max live lag".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason =
                        "dropped stale bundled audio beyond max live lag".to_string();
                }
            }
            state.phase = UpstreamSyncPhase::Healing;
            return UpstreamPlanDecision::DropStale("bundled packet exceeded max live lag");
        }

        let mut local_pts_us = remote_pts_us.saturating_sub(session_base_remote_pts_us);
        let last_slot = match kind {
            UpstreamMediaKind::Camera => &mut state.last_video_local_pts_us,
            UpstreamMediaKind::Microphone => &mut state.last_audio_local_pts_us,
        };
        if let Some(last_pts_us) = *last_slot
            && local_pts_us <= last_pts_us
        {
            local_pts_us = last_pts_us.saturating_add(min_step_us.max(1));
        }
        *last_slot = Some(local_pts_us);

        let sink_offset_us = self.bundled_playout_offset_us(kind);
        let epoch = state.playout_epoch.unwrap_or(bundle_epoch);
        let mut due_at =
            apply_playout_offset(epoch + Duration::from_micros(local_pts_us), sink_offset_us);
        let now = Instant::now();
        let mut late_by = now.checked_duration_since(due_at).unwrap_or_default();
        let playout_delay = upstream_bundled_playout_delay().min(max_live_lag);
        let reanchor_threshold = upstream_reanchor_late_threshold(playout_delay);
        let max_future_wait = max_live_lag.saturating_sub(source_lag);
        let output_offset = if sink_offset_us >= 0 {
            Duration::from_micros(sink_offset_us as u64)
        } else {
            Duration::ZERO
        };
        let due_future_wait = due_at.saturating_duration_since(now);
        let due_future_wait_before_output_compensation =
            due_future_wait.saturating_sub(output_offset);
        if late_by > reanchor_threshold
            || due_future_wait_before_output_compensation > max_future_wait
        {
            let desired_delay = playout_delay.min(max_future_wait);
            let desired_due_at = now + desired_delay;
            let unoffset_due_at = apply_playout_offset(desired_due_at, -sink_offset_us);
            let recovered_epoch = unoffset_due_at
                .checked_sub(Duration::from_micros(local_pts_us))
                .unwrap_or(unoffset_due_at);
            state.playout_epoch = Some(recovered_epoch);
            state.pairing_anchor_deadline = Some(desired_due_at);
            state.freshness_reanchors = state.freshness_reanchors.saturating_add(1);
            state.phase = UpstreamSyncPhase::Healing;
            state.last_reason =
                "reanchored bundled upstream playhead to preserve freshness".to_string();
            due_at = apply_playout_offset(
                recovered_epoch + Duration::from_micros(local_pts_us),
                sink_offset_us,
            );
            late_by = now.checked_duration_since(due_at).unwrap_or_default();
            info!(
                session_id,
                ?kind,
                local_pts_us,
                remote_pts_us,
                recovery_buffer_ms = desired_delay.as_millis(),
                max_live_lag_ms = max_live_lag.as_millis(),
                source_lag_ms = source_lag.as_millis(),
                output_offset_ms = output_offset.as_millis(),
                "bundled upstream media playhead reanchored to preserve freshness"
            );
        }
        let predicted_lag_before_output_compensation = source_lag.saturating_add(
            due_at
                .saturating_duration_since(now)
                .saturating_sub(output_offset),
        );
        if predicted_lag_before_output_compensation > max_live_lag {
            match kind {
                UpstreamMediaKind::Camera => {
                    state.stale_video_drops = state.stale_video_drops.saturating_add(1);
                    state.video_freezes = state.video_freezes.saturating_add(1);
                    state.last_reason = "dropped bundled video that would exceed max live lag before output compensation".to_string();
                }
                UpstreamMediaKind::Microphone => {
                    state.stale_audio_drops = state.stale_audio_drops.saturating_add(1);
                    state.last_reason = "dropped bundled audio that would exceed max live lag before output compensation".to_string();
                }
            }
            state.phase = UpstreamSyncPhase::Healing;
            return UpstreamPlanDecision::DropStale(
                "bundled packet would exceed max live lag before output compensation",
            );
        }

        if kind == UpstreamMediaKind::Microphone {
            self.audio_progress_notify.notify_waiters();
        }
        UpstreamPlanDecision::Play(PlannedUpstreamPacket {
            local_pts_us,
            due_at,
            late_by,
            source_lag,
        })
    }

    /// Hold video until the audio master has at least reached the same capture
    /// moment, or until the bounded sync grace is exhausted.
    pub async fn wait_for_audio_master(&self, video_local_pts_us: u64, due_at: Instant) -> bool {
        let slack_us = upstream_pairing_master_slack()
            .as_micros()
            .min(u64::MAX as u128) as u64;
        let audio_delay_allowance_us = self.positive_audio_delay_allowance_us();
        let deadline = due_at + upstream_audio_master_wait_grace();
        loop {
            let notified = self.audio_progress_notify.notified();
            {
                let state = self
                    .state
                    .lock()
                    .expect("upstream media state mutex poisoned");
                if state.active_microphone_generation.is_none() {
                    return true;
                }
                let audio_presented_pts_us = state.last_audio_presented_pts_us.unwrap_or(0);
                if audio_presented_pts_us
                    .saturating_add(slack_us)
                    .saturating_add(audio_delay_allowance_us)
                    >= video_local_pts_us
                {
                    return true;
                }
            }
            if Instant::now() >= deadline {
                return false;
            }
            tokio::select! {
                _ = notified => {}
                _ = tokio::time::sleep_until(deadline) => return false,
            }
        }
    }
}
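Both planners share the same monotonic rebase step: the local PTS is the capture delta from the session base, and any packet that would repeat or rewind the timeline is bumped forward by at least the cadence floor. A self-contained sketch under those assumptions (the free function here is illustrative; in the diff this logic lives inline on the locked state):

```rust
// Hypothetical standalone version of the rebase-and-monotonize step.
fn rebase_pts(
    remote_pts_us: u64,
    base_remote_pts_us: u64,
    last_local_pts_us: &mut Option<u64>,
    min_step_us: u64,
) -> u64 {
    // Local PTS is the capture-clock delta from the session base.
    let mut local = remote_pts_us.saturating_sub(base_remote_pts_us);
    if let Some(last) = *last_local_pts_us {
        if local <= last {
            // Never reuse or rewind a slot; advance by at least one step.
            local = last.saturating_add(min_step_us.max(1));
        }
    }
    *last_local_pts_us = Some(local);
    local
}

fn main() {
    let mut last = None;
    // First packet lands on the base: local PTS 0.
    assert_eq!(rebase_pts(1_000, 1_000, &mut last, 33_333), 0);
    // A normal 30 fps step maps straight through.
    assert_eq!(rebase_pts(34_333, 1_000, &mut last, 33_333), 33_333);
    // A duplicate capture timestamp is pushed forward by the cadence floor.
    assert_eq!(rebase_pts(34_333, 1_000, &mut last, 33_333), 66_666);
}
```

Keeping the bump bounded by `min_step_us.max(1)` means a burst of duplicate timestamps only ever advances the timeline at the nominal frame cadence instead of exploding it.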
File diff suppressed because it is too large
@ -4,6 +4,7 @@
# the UVC modes the UI advertises. This is still a hardware-in-the-loop probe:
# it captures the real Tethys UVC/UAC endpoints and summarizes sync,
# freshness, and smoothness for each mode.
# Not part of CI: it needs the Theia/Tethys lab hosts and live USB gadget state.
#
# Reconfigure mode is intentionally a fast runtime path: it updates the remote
# Lesavka env files and cycles the UVC gadget, but it does not rebuild or
@ -1,6 +1,7 @@
#!/usr/bin/env bash
# scripts/manual/run_upstream_mirrored_av_sync.sh
# Manual: full mirrored upstream A/V sync probe.
# Not part of CI: it needs the workstation browser, Theia server, and Tethys recorder.
#
# This probe intentionally uses the normal lesavka-client capture path as the
# sender. A local browser stimulus is captured by the real webcam and real mic,
@ -10,7 +10,7 @@ bench = false

[package]
name = "lesavka_server"
version = "0.19.30"
version = "0.20.0"
edition = "2024"
autobins = false

@ -14,6 +14,8 @@ impl ClipTap {
            period,
        }
    }
    /// Keeps `feed` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn feed(&mut self, bytes: &[u8]) {
        self.buf.extend_from_slice(bytes);
        if self.buf.len() > 256_000 {
@ -24,6 +26,8 @@ impl ClipTap {
            self.next_dump += self.period;
        }
    }
    /// Keeps `flush` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn flush(&mut self) {
        if self.buf.is_empty() {
            return;
@ -83,6 +87,8 @@ fn voice_sink_compensation_us() -> i64 {
        .unwrap_or_else(default_voice_sink_compensation_us)
}

/// Keeps `default_voice_sink_compensation_us` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn default_voice_sink_compensation_us() -> i64 {
    let cfg = crate::camera::current_camera_config();
    if cfg.output == crate::camera::CameraOutput::Hdmi {
@ -111,6 +117,8 @@ fn non_negative_voice_sink_timing_env(name: &str, default: i64) -> i64 {
        .unwrap_or(default)
}

/// Keeps `voice_sink_session_clock_align_enabled` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn voice_sink_session_clock_align_enabled() -> bool {
    std::env::var("LESAVKA_UAC_SESSION_CLOCK_ALIGN")
        .ok()
@ -149,6 +157,8 @@ fn positive_voice_appsrc_limit_env(name: &str, default: u64) -> u64 {
}

#[cfg(not(coverage))]
/// Keeps `configure_voice_appsrc` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn configure_voice_appsrc(appsrc: &gst_app::AppSrc) {
    use gst::prelude::*;

@ -169,6 +179,8 @@ fn configure_voice_appsrc(appsrc: &gst_app::AppSrc) {

impl Voice {
    #[cfg(coverage)]
    /// Keeps `new` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub async fn new(_alsa_dev: &str) -> anyhow::Result<Self> {
        gst::init().context("gst init")?;

@ -198,6 +210,8 @@ impl Voice {
    }

    #[cfg(not(coverage))]
    /// Keeps `new` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub async fn new(alsa_dev: &str) -> anyhow::Result<Self> {
        use gst::prelude::*;

@ -351,6 +365,8 @@ impl Voice {
        })
    }

    /// Keeps `push` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn push(&mut self, pkt: &AudioPacket) {
        self.tap.feed(&pkt.data);
        if !self.clock_aligned {
@ -393,154 +409,5 @@ fn voice_packet_duration_us(bytes: usize) -> u64 {
}

#[cfg(test)]
mod voice_sink_timing_tests {
    use crate::camera::update_camera_config;
    use super::{voice_sink_buffer_time_us, voice_sink_latency_time_us};
    use super::{default_voice_sink_compensation_us, voice_sink_compensation_us};
    use super::{voice_appsrc_max_buffers, voice_appsrc_max_bytes, voice_appsrc_max_time_ns};

    #[test]
    fn voice_sink_timing_defaults_stay_live_call_friendly() {
        temp_env::with_var_unset("LESAVKA_UAC_BUFFER_TIME_US", || {
            temp_env::with_var_unset("LESAVKA_UAC_LATENCY_TIME_US", || {
                temp_env::with_var_unset("LESAVKA_UAC_COMPENSATION_US", || {
                    temp_env::with_var_unset("LESAVKA_UAC_HDMI_COMPENSATION_US", || {
                        temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
                            update_camera_config();
                            assert_eq!(voice_sink_buffer_time_us(), 120_000);
                            assert_eq!(voice_sink_latency_time_us(), 40_000);
                            assert_eq!(voice_sink_compensation_us(), 0);
                        });
                    });
                });
            });
        });
    }

    #[test]
    fn voice_appsrc_limits_default_to_a_call_stability_window() {
        temp_env::with_var_unset("LESAVKA_UAC_APP_MAX_BUFFERS", || {
            temp_env::with_var_unset("LESAVKA_UAC_APP_MAX_BYTES", || {
                temp_env::with_var_unset("LESAVKA_UAC_APP_MAX_TIME_NS", || {
                    assert_eq!(voice_appsrc_max_buffers(), 16);
                    assert_eq!(voice_appsrc_max_bytes(), 65_536);
                    assert_eq!(voice_appsrc_max_time_ns(), 200_000_000);
                });
            });
        });
    }

    #[test]
    fn voice_appsrc_limits_accept_positive_overrides_only() {
        temp_env::with_var("LESAVKA_UAC_APP_MAX_BUFFERS", Some("12"), || {
            temp_env::with_var("LESAVKA_UAC_APP_MAX_BYTES", Some("65536"), || {
                temp_env::with_var("LESAVKA_UAC_APP_MAX_TIME_NS", Some("10000000"), || {
                    assert_eq!(voice_appsrc_max_buffers(), 12);
                    assert_eq!(voice_appsrc_max_bytes(), 65_536);
                    assert_eq!(voice_appsrc_max_time_ns(), 10_000_000);
                });
            });
        });

        temp_env::with_var("LESAVKA_UAC_APP_MAX_BUFFERS", Some("0"), || {
            temp_env::with_var("LESAVKA_UAC_APP_MAX_BYTES", Some("nope"), || {
                temp_env::with_var("LESAVKA_UAC_APP_MAX_TIME_NS", Some("0"), || {
                    assert_eq!(voice_appsrc_max_buffers(), 16);
                    assert_eq!(voice_appsrc_max_bytes(), 65_536);
                    assert_eq!(voice_appsrc_max_time_ns(), 200_000_000);
                });
            });
        });
    }

    #[test]
    fn voice_sink_timing_env_accepts_positive_overrides_only() {
        temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
            update_camera_config();
            temp_env::with_var("LESAVKA_UAC_BUFFER_TIME_US", Some("42000"), || {
                temp_env::with_var("LESAVKA_UAC_LATENCY_TIME_US", Some("7000"), || {
                    assert_eq!(voice_sink_buffer_time_us(), 42_000);
                    assert_eq!(voice_sink_latency_time_us(), 7_000);
                    assert_eq!(voice_sink_compensation_us(), 0);
                });
            });
        });

        temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
            update_camera_config();
            temp_env::with_var("LESAVKA_UAC_BUFFER_TIME_US", Some("0"), || {
                temp_env::with_var("LESAVKA_UAC_LATENCY_TIME_US", Some("-5"), || {
                    temp_env::with_var("LESAVKA_UAC_COMPENSATION_US", Some("166667"), || {
                        assert_eq!(voice_sink_buffer_time_us(), 120_000);
                        assert_eq!(voice_sink_latency_time_us(), 40_000);
                        assert_eq!(voice_sink_compensation_us(), 166_667);
                    });
                });
            });
        });

        temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
            update_camera_config();
            temp_env::with_var("LESAVKA_UAC_COMPENSATION_US", Some("-5"), || {
                temp_env::with_var("LESAVKA_UAC_BUFFER_TIME_US", Some("0"), || {
                    temp_env::with_var("LESAVKA_UAC_LATENCY_TIME_US", Some("-5"), || {
                        assert_eq!(voice_sink_buffer_time_us(), 120_000);
                        assert_eq!(voice_sink_latency_time_us(), 40_000);
                        assert_eq!(voice_sink_compensation_us(), 0);
                    });
                });
            });
        });
    }

    #[test]
    fn hdmi_sink_compensation_defaults_to_hdmi_specific_delay() {
        temp_env::with_var_unset("LESAVKA_UAC_COMPENSATION_US", || {
            temp_env::with_var_unset("LESAVKA_UAC_HDMI_COMPENSATION_US", || {
                temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("hdmi"), || {
                    update_camera_config();
                    assert_eq!(default_voice_sink_compensation_us(), 205_000);
                    assert_eq!(voice_sink_compensation_us(), 205_000);
                });
            });
        });
    }

    #[test]
    fn explicit_compensation_override_wins_over_hdmi_default() {
        temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("hdmi"), || {
            update_camera_config();
            temp_env::with_var("LESAVKA_UAC_HDMI_COMPENSATION_US", Some("120000"), || {
                temp_env::with_var("LESAVKA_UAC_COMPENSATION_US", Some("90000"), || {
                    assert_eq!(default_voice_sink_compensation_us(), 120_000);
                    assert_eq!(voice_sink_compensation_us(), 90_000);
                });
            });
        });
    }

    #[test]
    fn session_clock_alignment_defaults_on_and_accepts_disable_overrides() {
        temp_env::with_var_unset("LESAVKA_UAC_SESSION_CLOCK_ALIGN", || {
            assert!(super::voice_sink_session_clock_align_enabled());
        });

        for disabled in ["0", "false", "no", "off"] {
            temp_env::with_var("LESAVKA_UAC_SESSION_CLOCK_ALIGN", Some(disabled), || {
                assert!(!super::voice_sink_session_clock_align_enabled());
            });
        }

        temp_env::with_var("LESAVKA_UAC_SESSION_CLOCK_ALIGN", Some("1"), || {
            assert!(super::voice_sink_session_clock_align_enabled());
        });
    }

    #[test]
    fn delay_queue_turns_on_only_for_positive_compensation() {
        assert!(!super::voice_sink_delay_queue_enabled(-1));
        assert!(!super::voice_sink_delay_queue_enabled(0));
        assert!(super::voice_sink_delay_queue_enabled(1));
        assert!(super::voice_sink_delay_queue_enabled(90_000));
    }
}
#[path = "voice_input/tests/mod.rs"]
mod voice_sink_timing_tests;

157
server/src/audio/voice_input/tests/mod.rs
Normal file
@ -0,0 +1,157 @@
use crate::camera::update_camera_config;
use super::{voice_sink_buffer_time_us, voice_sink_latency_time_us};
use super::{default_voice_sink_compensation_us, voice_sink_compensation_us};
use super::{voice_appsrc_max_buffers, voice_appsrc_max_bytes, voice_appsrc_max_time_ns};

#[test]
/// Keeps `voice_sink_timing_defaults_stay_live_call_friendly` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn voice_sink_timing_defaults_stay_live_call_friendly() {
    temp_env::with_var_unset("LESAVKA_UAC_BUFFER_TIME_US", || {
        temp_env::with_var_unset("LESAVKA_UAC_LATENCY_TIME_US", || {
            temp_env::with_var_unset("LESAVKA_UAC_COMPENSATION_US", || {
                temp_env::with_var_unset("LESAVKA_UAC_HDMI_COMPENSATION_US", || {
                    temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
                        update_camera_config();
                        assert_eq!(voice_sink_buffer_time_us(), 120_000);
                        assert_eq!(voice_sink_latency_time_us(), 40_000);
                        assert_eq!(voice_sink_compensation_us(), 0);
                    });
                });
            });
        });
    });
}

#[test]
fn voice_appsrc_limits_default_to_a_call_stability_window() {
    temp_env::with_var_unset("LESAVKA_UAC_APP_MAX_BUFFERS", || {
        temp_env::with_var_unset("LESAVKA_UAC_APP_MAX_BYTES", || {
            temp_env::with_var_unset("LESAVKA_UAC_APP_MAX_TIME_NS", || {
                assert_eq!(voice_appsrc_max_buffers(), 16);
                assert_eq!(voice_appsrc_max_bytes(), 65_536);
                assert_eq!(voice_appsrc_max_time_ns(), 200_000_000);
            });
        });
    });
}

#[test]
/// Keeps `voice_appsrc_limits_accept_positive_overrides_only` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn voice_appsrc_limits_accept_positive_overrides_only() {
    temp_env::with_var("LESAVKA_UAC_APP_MAX_BUFFERS", Some("12"), || {
        temp_env::with_var("LESAVKA_UAC_APP_MAX_BYTES", Some("65536"), || {
            temp_env::with_var("LESAVKA_UAC_APP_MAX_TIME_NS", Some("10000000"), || {
                assert_eq!(voice_appsrc_max_buffers(), 12);
                assert_eq!(voice_appsrc_max_bytes(), 65_536);
                assert_eq!(voice_appsrc_max_time_ns(), 10_000_000);
            });
        });
    });

    temp_env::with_var("LESAVKA_UAC_APP_MAX_BUFFERS", Some("0"), || {
        temp_env::with_var("LESAVKA_UAC_APP_MAX_BYTES", Some("nope"), || {
            temp_env::with_var("LESAVKA_UAC_APP_MAX_TIME_NS", Some("0"), || {
                assert_eq!(voice_appsrc_max_buffers(), 16);
                assert_eq!(voice_appsrc_max_bytes(), 65_536);
                assert_eq!(voice_appsrc_max_time_ns(), 200_000_000);
            });
        });
    });
}

#[test]
/// Keeps `voice_sink_timing_env_accepts_positive_overrides_only` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn voice_sink_timing_env_accepts_positive_overrides_only() {
    temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
        update_camera_config();
        temp_env::with_var("LESAVKA_UAC_BUFFER_TIME_US", Some("42000"), || {
            temp_env::with_var("LESAVKA_UAC_LATENCY_TIME_US", Some("7000"), || {
                assert_eq!(voice_sink_buffer_time_us(), 42_000);
                assert_eq!(voice_sink_latency_time_us(), 7_000);
                assert_eq!(voice_sink_compensation_us(), 0);
            });
        });
    });

    temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
        update_camera_config();
        temp_env::with_var("LESAVKA_UAC_BUFFER_TIME_US", Some("0"), || {
            temp_env::with_var("LESAVKA_UAC_LATENCY_TIME_US", Some("-5"), || {
                temp_env::with_var("LESAVKA_UAC_COMPENSATION_US", Some("166667"), || {
                    assert_eq!(voice_sink_buffer_time_us(), 120_000);
                    assert_eq!(voice_sink_latency_time_us(), 40_000);
                    assert_eq!(voice_sink_compensation_us(), 166_667);
                });
            });
        });
    });

    temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("uvc"), || {
        update_camera_config();
        temp_env::with_var("LESAVKA_UAC_COMPENSATION_US", Some("-5"), || {
            temp_env::with_var("LESAVKA_UAC_BUFFER_TIME_US", Some("0"), || {
                temp_env::with_var("LESAVKA_UAC_LATENCY_TIME_US", Some("-5"), || {
                    assert_eq!(voice_sink_buffer_time_us(), 120_000);
                    assert_eq!(voice_sink_latency_time_us(), 40_000);
                    assert_eq!(voice_sink_compensation_us(), 0);
                });
            });
        });
    });
}

#[test]
fn hdmi_sink_compensation_defaults_to_hdmi_specific_delay() {
    temp_env::with_var_unset("LESAVKA_UAC_COMPENSATION_US", || {
        temp_env::with_var_unset("LESAVKA_UAC_HDMI_COMPENSATION_US", || {
            temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("hdmi"), || {
                update_camera_config();
                assert_eq!(default_voice_sink_compensation_us(), 205_000);
                assert_eq!(voice_sink_compensation_us(), 205_000);
            });
        });
    });
}

#[test]
fn explicit_compensation_override_wins_over_hdmi_default() {
    temp_env::with_var("LESAVKA_CAM_OUTPUT", Some("hdmi"), || {
        update_camera_config();
        temp_env::with_var("LESAVKA_UAC_HDMI_COMPENSATION_US", Some("120000"), || {
            temp_env::with_var("LESAVKA_UAC_COMPENSATION_US", Some("90000"), || {
                assert_eq!(default_voice_sink_compensation_us(), 120_000);
                assert_eq!(voice_sink_compensation_us(), 90_000);
            });
        });
    });
}

#[test]
/// Keeps `session_clock_alignment_defaults_on_and_accepts_disable_overrides` explicit because it sits on this module contract, where hidden behavior would make regressions difficult to diagnose.
/// Inputs are the typed parameters; output is the return value or side effect.
fn session_clock_alignment_defaults_on_and_accepts_disable_overrides() {
    temp_env::with_var_unset("LESAVKA_UAC_SESSION_CLOCK_ALIGN", || {
        assert!(super::voice_sink_session_clock_align_enabled());
    });

    for disabled in ["0", "false", "no", "off"] {
        temp_env::with_var("LESAVKA_UAC_SESSION_CLOCK_ALIGN", Some(disabled), || {
            assert!(!super::voice_sink_session_clock_align_enabled());
        });
    }

    temp_env::with_var("LESAVKA_UAC_SESSION_CLOCK_ALIGN", Some("1"), || {
        assert!(super::voice_sink_session_clock_align_enabled());
    });
}

#[test]
fn delay_queue_turns_on_only_for_positive_compensation() {
    assert!(!super::voice_sink_delay_queue_enabled(-1));
    assert!(!super::voice_sink_delay_queue_enabled(0));
    assert!(super::voice_sink_delay_queue_enabled(1));
    assert!(super::voice_sink_delay_queue_enabled(90_000));
}
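The compensation tests above pin down a simple precedence: an explicit `LESAVKA_UAC_COMPENSATION_US` override always wins, and only when it is absent does the HDMI output path fall back to its larger fixed delay (205 ms versus 0 for UVC). A minimal sketch of that rule, with an illustrative free function rather than the env-reading helpers in the diff:

```rust
// Hypothetical distillation of the compensation precedence the tests verify:
// explicit override first, then the output-path default.
fn compensation_us(
    explicit_override_us: Option<i64>,
    output_is_hdmi: bool,
    hdmi_default_us: i64,
) -> i64 {
    explicit_override_us.unwrap_or(if output_is_hdmi { hdmi_default_us } else { 0 })
}

fn main() {
    // HDMI without an override falls back to the HDMI-specific delay.
    assert_eq!(compensation_us(None, true, 205_000), 205_000);
    // UVC without an override applies no compensation.
    assert_eq!(compensation_us(None, false, 205_000), 0);
    // An explicit override beats the HDMI default.
    assert_eq!(compensation_us(Some(90_000), true, 205_000), 90_000);
}
```

Encoding the fallback as a single `unwrap_or` keeps the precedence auditable in one place, which is what the paired `hdmi_sink_compensation_defaults_to_hdmi_specific_delay` and `explicit_compensation_override_wins_over_hdmi_default` tests are guarding.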
@ -58,6 +58,8 @@ pub struct CalibrationStore
}

impl CalibrationStore {
    /// Keeps `load` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn load(runtime: Arc<UpstreamMediaRuntime>) -> Self {
        let path = calibration_path();
        let state = std::fs::read_to_string(&path)
@ -79,6 +81,8 @@ impl CalibrationStore {
            .to_proto()
    }

    /// Keeps `apply` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn apply(&self, request: CalibrationRequest) -> Result<ProtoCalibrationState> {
        let mut state = self.state.lock().expect("calibration mutex poisoned");
        let action =
@ -159,6 +163,8 @@ impl CalibrationStore {
        Ok(state.to_proto())
    }

    /// Keeps `apply_transient_blind_estimate` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn apply_transient_blind_estimate(
        &self,
        audio_delta_us: i64,
@ -191,6 +197,8 @@ impl CalibrationStore {
}

impl CalibrationSnapshot {
    /// Keeps `to_proto` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn to_proto(&self) -> ProtoCalibrationState {
        ProtoCalibrationState {
            profile: self.profile.clone(),
@ -216,6 +224,8 @@ pub fn calibration_path() -> PathBuf {
        .unwrap_or_else(|| PathBuf::from("/var/lib/lesavka/calibration.toml"))
}

/// Keeps `snapshot_from_env` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
/// Inputs are the typed parameters; output is the return value or side effect.
fn snapshot_from_env() -> CalibrationSnapshot {
    let mode = current_uvc_mode();
    let factory_audio_offset_us = mode
|
||||
@ -265,6 +275,8 @@ fn snapshot_from_env() -> CalibrationSnapshot {
|
||||
}
|
||||
}
|
||||
|
||||
/// Keeps `parse_snapshot` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn parse_snapshot(raw: &str) -> CalibrationSnapshot {
|
||||
let fallback = snapshot_from_env();
|
||||
let value = |key: &str| -> Option<String> {
|
||||
@ -301,6 +313,8 @@ fn parse_snapshot(raw: &str) -> CalibrationSnapshot {
|
||||
}
|
||||
}
|
||||
|
||||
/// Keeps `migrate_legacy_snapshot` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn migrate_legacy_snapshot(mut state: CalibrationSnapshot) -> CalibrationSnapshot {
|
||||
let source_allows_migration = matches!(state.source.as_str(), "factory" | "env");
|
||||
let confidence_allows_migration = matches!(state.confidence.as_str(), "factory" | "configured");
|
||||
@ -342,6 +356,8 @@ fn migrate_legacy_snapshot(mut state: CalibrationSnapshot) -> CalibrationSnapsho
|
||||
state
|
||||
}
|
||||
|
||||
/// Keeps `persist_snapshot` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn persist_snapshot(path: &PathBuf, state: &CalibrationSnapshot) -> Result<()> {
|
||||
if let Some(parent) = path.parent() {
|
||||
std::fs::create_dir_all(parent)
|
||||
@ -351,6 +367,8 @@ fn persist_snapshot(path: &PathBuf, state: &CalibrationSnapshot) -> Result<()> {
|
||||
.with_context(|| format!("writing calibration state {}", path.display()))
|
||||
}
|
||||
|
||||
/// Keeps `serialize_snapshot` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn serialize_snapshot(state: &CalibrationSnapshot) -> String {
|
||||
format!(
|
||||
"profile=\"{}\"\ndefault_audio_offset_us={}\ndefault_video_offset_us={}\nactive_audio_offset_us={}\nactive_video_offset_us={}\nsource=\"{}\"\nconfidence=\"{}\"\nupdated_at=\"{}\"\ndetail=\"{}\"\n",
|
||||
@ -380,6 +398,8 @@ fn configured_offset_us(
|
||||
.or_else(|| env_i64(scalar_name).filter(|offset| !is_stale_scalar(*offset)))
|
||||
}
|
||||
|
||||
/// Keeps `current_uvc_mode` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn current_uvc_mode() -> Option<String> {
|
||||
env_mode("UVC_MODE")
|
||||
.or_else(|| env_mode("LESAVKA_UVC_MODE"))
|
||||
@ -402,6 +422,8 @@ fn current_uvc_mode() -> Option<String> {
|
||||
})
|
||||
}
|
||||
|
||||
/// Keeps `env_mode` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn env_mode(name: &str) -> Option<String> {
|
||||
std::env::var(name).ok().and_then(|value| {
|
||||
let trimmed = value.trim();
|
||||
@ -473,632 +495,5 @@ fn escape_value(value: &str) -> String {
|
||||
}
|
||||
|
||||
#[cfg(test)]
mod tests {
    use super::*;
    use tempfile::NamedTempFile;

    const BLESSED_SERVER_TO_RCT_VIDEO_OFFSETS: &[(&str, i64)] = &[
        ("1280x720@20", 162_659),
        ("1280x720@30", 135_090),
        ("1920x1080@20", 160_045),
        ("1920x1080@30", 127_952),
    ];

    fn with_clean_offset_env(test: impl FnOnce()) {
        temp_env::with_vars(
            [
                ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
                (
                    "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                (
                    "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                ("UVC_MODE", None::<&str>),
                ("LESAVKA_UVC_MODE", None::<&str>),
                ("LESAVKA_UVC_WIDTH", None::<&str>),
                ("LESAVKA_UVC_HEIGHT", None::<&str>),
                ("LESAVKA_UVC_FPS", None::<&str>),
                ("LESAVKA_UVC_INTERVAL", None::<&str>),
                ("LESAVKA_CAM_WIDTH", None::<&str>),
                ("LESAVKA_CAM_HEIGHT", None::<&str>),
                ("LESAVKA_CAM_FPS", None::<&str>),
            ],
            test,
        );
    }

    #[test]
    fn blessed_server_to_rct_offsets_are_release_defaults() {
        assert_eq!(
            FACTORY_MJPEG_VIDEO_OFFSET_US, FACTORY_MJPEG_VIDEO_OFFSET_1280X720_30_US,
            "720p30 is the blessed default profile until a new lab matrix replaces it"
        );
        assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1280X720_20_US, 162_659);
        assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1280X720_30_US, 135_090);
        assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_20_US, 160_045);
        assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_30_US, 127_952);
        assert_eq!(
            FACTORY_MJPEG_AUDIO_MODE_OFFSETS_US,
            "1280x720@20=0,1280x720@30=0,1920x1080@20=0,1920x1080@30=0"
        );
        assert_eq!(
            FACTORY_MJPEG_VIDEO_MODE_OFFSETS_US,
            "1280x720@20=162659,1280x720@30=135090,1920x1080@20=160045,1920x1080@30=127952"
        );
    }

    #[test]
    fn every_supported_uvc_mode_loads_tailored_factory_offset() {
        for (mode, expected_offset_us) in BLESSED_SERVER_TO_RCT_VIDEO_OFFSETS {
            with_clean_offset_env(|| {
                temp_env::with_var("UVC_MODE", Some(*mode), || {
                    let state = snapshot_from_env();
                    assert_eq!(
                        state.factory_video_offset_us, *expected_offset_us,
                        "{mode} should use its baked server-to-RCT factory offset"
                    );
                    assert_eq!(
                        state.default_video_offset_us, *expected_offset_us,
                        "{mode} should default to its baked server-to-RCT offset"
                    );
                    assert_eq!(state.default_audio_offset_us, 0);
                    assert_eq!(state.source, "factory");
                    assert_eq!(state.confidence, FACTORY_CONFIDENCE);
                });
            });
        }
    }

    #[test]
    fn default_snapshot_uses_factory_mjpeg_calibration() {
        with_clean_offset_env(|| {
            let state = snapshot_from_env();
            assert_eq!(state.default_audio_offset_us, 0);
            assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.source, "factory");
        });
    }

    #[test]
    fn default_snapshot_uses_uvc_mode_factory_calibration() {
        temp_env::with_vars(
            [
                ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
                (
                    "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                (
                    "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                ("LESAVKA_UVC_WIDTH", Some("1920")),
                ("LESAVKA_UVC_HEIGHT", Some("1080")),
                ("LESAVKA_UVC_FPS", Some("30")),
                ("LESAVKA_UVC_INTERVAL", None::<&str>),
            ],
            || {
                let state = snapshot_from_env();
                assert_eq!(
                    state.default_video_offset_us,
                    FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_30_US
                );
                assert_eq!(
                    state.factory_video_offset_us,
                    FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_30_US
                );
                assert_eq!(state.source, "factory");
            },
        );
    }

    #[test]
    fn mode_offset_map_overrides_stale_scalar_offset() {
        temp_env::with_vars(
            [
                ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("170000")),
                (
                    "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                (
                    "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                    Some("1280x720@20=123456"),
                ),
                ("LESAVKA_UVC_WIDTH", Some("1280")),
                ("LESAVKA_UVC_HEIGHT", Some("720")),
                ("LESAVKA_UVC_FPS", Some("20")),
                ("LESAVKA_UVC_INTERVAL", None::<&str>),
            ],
            || {
                let state = snapshot_from_env();
                assert_eq!(state.default_video_offset_us, 123_456);
                assert_eq!(state.source, "env");
                assert_eq!(state.confidence, "configured");
            },
        );
    }

    #[test]
    fn stale_scalar_video_offset_falls_back_to_mode_factory() {
        temp_env::with_vars(
            [
                ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("170000")),
                (
                    "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                (
                    "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                ("LESAVKA_UVC_WIDTH", Some("1920")),
                ("LESAVKA_UVC_HEIGHT", Some("1080")),
                ("LESAVKA_UVC_FPS", Some("20")),
                ("LESAVKA_UVC_INTERVAL", None::<&str>),
            ],
            || {
                let state = snapshot_from_env();
                assert_eq!(
                    state.default_video_offset_us,
                    FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_20_US
                );
                assert_eq!(state.source, "factory");
            },
        );
    }

    #[test]
    fn store_persists_manual_adjustments_and_updates_runtime() {
        let file = NamedTempFile::new().expect("temp calibration file");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store
                .apply(CalibrationRequest {
                    action: CalibrationAction::AdjustActive as i32,
                    audio_delta_us: -5_000,
                    video_delta_us: 0,
                    observed_delivery_skew_ms: 0.0,
                    observed_enqueue_skew_ms: 0.0,
                    note: String::new(),
                })
                .expect("manual adjust applies");
            assert_eq!(state.active_audio_offset_us, -5_000);
            assert_eq!(
                runtime.playout_offsets(),
                (FACTORY_MJPEG_VIDEO_OFFSET_US, -5_000)
            );
            let raw = std::fs::read_to_string(file.path()).expect("persisted calibration");
            assert!(raw.contains("active_audio_offset_us=-5000"));
        });
    }

    #[test]
    fn calibration_path_uses_default_for_blank_override() {
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(""), || {
            assert_eq!(
                calibration_path(),
                PathBuf::from("/var/lib/lesavka/calibration.toml")
            );
        });
    }

    #[test]
    fn snapshot_from_env_uses_configured_offsets_and_clamps_extremes() {
        temp_env::with_vars(
            [
                ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", Some("-9999999")),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("12345")),
            ],
            || {
                let state = snapshot_from_env();
                assert_eq!(state.default_audio_offset_us, -OFFSET_LIMIT_US);
                assert_eq!(state.default_video_offset_us, 12_345);
                assert_eq!(state.source, "env");
                assert_eq!(state.confidence, "configured");
            },
        );
    }

    #[test]
    fn parse_snapshot_falls_back_for_missing_and_malformed_fields() {
        temp_env::with_vars(
            [
                ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
            ],
            || {
                let state = parse_snapshot(
                    r#"
profile="mjpeg"
default_audio_offset_us=bad
default_video_offset_us=2500
active_audio_offset_us=-1600000
source="saved"
detail="loaded \"quoted\" value"
"#,
                );
                assert_eq!(state.default_audio_offset_us, FACTORY_MJPEG_AUDIO_OFFSET_US);
                assert_eq!(state.default_video_offset_us, 2_500);
                assert_eq!(state.active_audio_offset_us, -OFFSET_LIMIT_US);
                assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
                assert_eq!(state.source, "saved");
                assert_eq!(state.confidence, FACTORY_CONFIDENCE);
            },
        );
    }

    #[test]
    fn load_migrates_untouched_legacy_factory_mjpeg_baseline() {
        let file = NamedTempFile::new().expect("temp calibration file");
        std::fs::write(
            file.path(),
            r#"
profile="mjpeg"
default_audio_offset_us=-45000
default_video_offset_us=0
active_audio_offset_us=-45000
active_video_offset_us=0
source="env"
confidence="configured"
detail="loaded upstream A/V calibration defaults"
"#,
        )
        .expect("legacy calibration seed");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store.current();
            assert_eq!(state.active_audio_offset_us, 0);
            assert_eq!(state.default_audio_offset_us, 0);
            assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.source, "factory");
            assert_eq!(
                runtime.playout_offsets(),
                (FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
            );
            assert!(state.detail.contains("migrated legacy MJPEG"));
        });
    }

    #[test]
    fn load_migrates_untouched_previous_factory_mjpeg_baseline() {
        let file = NamedTempFile::new().expect("temp calibration file");
        std::fs::write(
            file.path(),
            r#"
profile="mjpeg"
default_audio_offset_us=720000
default_video_offset_us=0
active_audio_offset_us=720000
active_video_offset_us=0
source="env"
confidence="configured"
detail="loaded upstream A/V calibration defaults"
"#,
        )
        .expect("previous calibration seed");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store.current();
            assert_eq!(state.active_audio_offset_us, 0);
            assert_eq!(state.default_audio_offset_us, 0);
            assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.source, "factory");
            assert_eq!(
                runtime.playout_offsets(),
                (FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
            );
            assert!(state.detail.contains("to audio +0.0ms/video +135.1ms"));
        });
    }

    #[test]
    fn load_migrates_overshot_video_factory_mjpeg_baseline() {
        let file = NamedTempFile::new().expect("temp calibration file");
        std::fs::write(
            file.path(),
            r#"
profile="mjpeg"
default_audio_offset_us=0
default_video_offset_us=1090000
active_audio_offset_us=0
active_video_offset_us=1090000
source="factory"
confidence="factory"
detail="loaded upstream A/V calibration defaults"
"#,
        )
        .expect("overshot video calibration seed");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store.current();
            assert_eq!(state.active_audio_offset_us, 0);
            assert_eq!(state.default_audio_offset_us, 0);
            assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.source, "factory");
            assert_eq!(
                runtime.playout_offsets(),
                (FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
            );
            assert!(state.detail.contains("from audio +0.0ms/video +1090.0ms"));
        });
    }

    #[test]
    fn load_migrates_browser_video_factory_mjpeg_baseline() {
        let file = NamedTempFile::new().expect("temp calibration file");
        std::fs::write(
            file.path(),
            r#"
profile="mjpeg"
default_audio_offset_us=0
default_video_offset_us=130000
active_audio_offset_us=0
active_video_offset_us=130000
source="factory"
confidence="factory"
detail="loaded upstream A/V calibration defaults"
"#,
        )
        .expect("browser video calibration seed");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store.current();
            assert_eq!(state.active_audio_offset_us, 0);
            assert_eq!(state.default_audio_offset_us, 0);
            assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.source, "factory");
            assert_eq!(
                runtime.playout_offsets(),
                (FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
            );
        });
    }

    #[test]
    fn load_migrates_delayed_video_factory_mjpeg_baseline() {
        let file = NamedTempFile::new().expect("temp calibration file");
        std::fs::write(
            file.path(),
            r#"
profile="mjpeg"
default_audio_offset_us=0
default_video_offset_us=350000
active_audio_offset_us=0
active_video_offset_us=350000
source="factory"
confidence="factory"
detail="loaded upstream A/V calibration defaults"
"#,
        )
        .expect("delayed video calibration seed");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store.current();
            assert_eq!(state.active_audio_offset_us, 0);
            assert_eq!(state.default_audio_offset_us, 0);
            assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.source, "factory");
            assert_eq!(
                runtime.playout_offsets(),
                (FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
            );
        });
    }

    #[test]
    fn load_migrates_early_zero_video_factory_mjpeg_baseline() {
        let file = NamedTempFile::new().expect("temp calibration file");
        std::fs::write(
            file.path(),
            r#"
profile="mjpeg"
default_audio_offset_us=0
default_video_offset_us=0
active_audio_offset_us=0
active_video_offset_us=0
source="factory"
confidence="factory"
detail="loaded upstream A/V calibration defaults"
"#,
        )
        .expect("early zero calibration seed");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store.current();
            assert_eq!(state.active_audio_offset_us, 0);
            assert_eq!(state.default_audio_offset_us, 0);
            assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
            assert_eq!(state.source, "factory");
            assert_eq!(
                runtime.playout_offsets(),
                (FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
            );
        });
    }

    #[test]
    fn load_keeps_manual_legacy_sized_calibration() {
        let file = NamedTempFile::new().expect("temp calibration file");
        std::fs::write(
            file.path(),
            r#"
profile="mjpeg"
default_audio_offset_us=-45000
default_video_offset_us=0
active_audio_offset_us=-45000
active_video_offset_us=0
source="manual"
confidence="manual"
detail="operator-set"
"#,
        )
        .expect("manual calibration seed");
        let path = file.path().to_string_lossy().to_string();
        temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
            let runtime = Arc::new(UpstreamMediaRuntime::new());
            let store = CalibrationStore::load(runtime.clone());
            let state = store.current();
            assert_eq!(state.active_audio_offset_us, -45_000);
            assert_eq!(state.active_video_offset_us, 0);
            assert_eq!(state.source, "manual");
            assert_eq!(runtime.playout_offsets(), (0, -45_000));
        });
    }

    #[test]
    fn store_applies_all_calibration_actions_and_persists_defaults() {
        let dir = tempfile::tempdir().expect("calibration dir");
        let path = dir.path().join("calibration.toml");
        let path_string = path.to_string_lossy().to_string();
        temp_env::with_var(
            "LESAVKA_CALIBRATION_PATH",
            Some(path_string.as_str()),
            || {
                let runtime = Arc::new(UpstreamMediaRuntime::new());
                let store = CalibrationStore::load(runtime.clone());

                let blind = store
                    .apply(CalibrationRequest {
                        action: CalibrationAction::BlindEstimate as i32,
                        audio_delta_us: 5_000,
                        video_delta_us: -2_000,
                        observed_delivery_skew_ms: 44.0,
                        observed_enqueue_skew_ms: 3.5,
                        note: String::new(),
                    })
                    .expect("blind estimate");
                assert_eq!(blind.source, "blind");
                assert!(blind.detail.contains("delivery skew 44.0ms"));
                assert_eq!(
                    runtime.playout_offsets(),
                    (FACTORY_MJPEG_VIDEO_OFFSET_US - 2_000, 5_000)
                );

                let manual = store
                    .apply(CalibrationRequest {
                        action: CalibrationAction::AdjustActive as i32,
                        audio_delta_us: 1_999_999,
                        video_delta_us: 0,
                        observed_delivery_skew_ms: 0.0,
                        observed_enqueue_skew_ms: 0.0,
                        note: String::new(),
                    })
                    .expect("manual clamp");
                assert_eq!(manual.active_audio_offset_us, OFFSET_LIMIT_US);

                let saved = store
                    .apply(CalibrationRequest {
                        action: CalibrationAction::SaveActiveAsDefault as i32,
                        ..CalibrationRequest::default()
                    })
                    .expect("save default");
                assert_eq!(saved.default_audio_offset_us, saved.active_audio_offset_us);
                assert_eq!(saved.confidence, "measured");

                let factory = store
                    .apply(CalibrationRequest {
                        action: CalibrationAction::RestoreFactory as i32,
                        ..CalibrationRequest::default()
                    })
                    .expect("factory restore");
                assert_eq!(
                    factory.active_audio_offset_us,
                    FACTORY_MJPEG_AUDIO_OFFSET_US
                );
                assert_eq!(
                    factory.active_video_offset_us,
                    FACTORY_MJPEG_VIDEO_OFFSET_US
                );
                assert_eq!(factory.source, "factory");

                let restored = store
                    .apply(CalibrationRequest {
                        action: CalibrationAction::RestoreDefault as i32,
                        ..CalibrationRequest::default()
                    })
                    .expect("default restore");
                assert_eq!(
                    restored.active_audio_offset_us,
                    restored.default_audio_offset_us
                );
                assert_eq!(
                    store.current().active_audio_offset_us,
                    restored.active_audio_offset_us
                );

                let no_op = store
                    .apply(CalibrationRequest::default())
                    .expect("unspecified action is ok");
                assert_eq!(
                    no_op.active_audio_offset_us,
                    restored.active_audio_offset_us
                );

                let raw = std::fs::read_to_string(&path).expect("persisted actions");
                assert!(raw.contains("confidence="));
                assert!(raw.contains("detail="));
                assert_eq!(escape_value("a\\b\"c"), "a\\\\b\\\"c");
            },
        );
    }

    #[test]
    fn transient_blind_estimate_updates_runtime_without_persisting_active_file_state() {
        let dir = tempfile::tempdir().expect("calibration dir");
        let path = dir.path().join("calibration.toml");
        let path_string = path.to_string_lossy().to_string();
        temp_env::with_var(
            "LESAVKA_CALIBRATION_PATH",
            Some(path_string.as_str()),
            || {
                let runtime = Arc::new(UpstreamMediaRuntime::new());
                let store = CalibrationStore::load(runtime.clone());
                let before_raw = std::fs::read_to_string(&path).ok();

                let state = store.apply_transient_blind_estimate(
                    0,
                    -12_000,
                    48.0,
                    7.0,
                    "runtime blind healer nudge",
                );

                assert_eq!(state.source, "blind");
                assert_eq!(state.confidence, "runtime-estimated");
                assert_eq!(
                    runtime.playout_offsets(),
                    (FACTORY_MJPEG_VIDEO_OFFSET_US - 12_000, 0)
                );
                assert_eq!(std::fs::read_to_string(&path).ok(), before_raw);
            },
        );
    }
}
#[path = "calibration/tests/mod.rs"]
mod tests;

663
server/src/calibration/tests/mod.rs
Normal file
@@ -0,0 +1,663 @@
use super::*;
use tempfile::NamedTempFile;

const BLESSED_SERVER_TO_RCT_VIDEO_OFFSETS: &[(&str, i64)] = &[
    ("1280x720@20", 162_659),
    ("1280x720@30", 135_090),
    ("1920x1080@20", 160_045),
    ("1920x1080@30", 127_952),
];

/// Keeps `with_clean_offset_env` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
/// Inputs are the typed parameters; output is the return value or side effect.
fn with_clean_offset_env(test: impl FnOnce()) {
    temp_env::with_vars(
        [
            ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
            ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
            (
                "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                None::<&str>,
            ),
            (
                "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                None::<&str>,
            ),
            ("UVC_MODE", None::<&str>),
            ("LESAVKA_UVC_MODE", None::<&str>),
            ("LESAVKA_UVC_WIDTH", None::<&str>),
            ("LESAVKA_UVC_HEIGHT", None::<&str>),
            ("LESAVKA_UVC_FPS", None::<&str>),
            ("LESAVKA_UVC_INTERVAL", None::<&str>),
            ("LESAVKA_CAM_WIDTH", None::<&str>),
            ("LESAVKA_CAM_HEIGHT", None::<&str>),
            ("LESAVKA_CAM_FPS", None::<&str>),
        ],
        test,
    );
}

#[test]
/// Keeps `blessed_server_to_rct_offsets_are_release_defaults` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
/// Inputs are the typed parameters; output is the return value or side effect.
fn blessed_server_to_rct_offsets_are_release_defaults() {
    assert_eq!(
        FACTORY_MJPEG_VIDEO_OFFSET_US, FACTORY_MJPEG_VIDEO_OFFSET_1280X720_30_US,
        "720p30 is the blessed default profile until a new lab matrix replaces it"
    );
    assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1280X720_20_US, 162_659);
    assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1280X720_30_US, 135_090);
    assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_20_US, 160_045);
    assert_eq!(FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_30_US, 127_952);
    assert_eq!(
        FACTORY_MJPEG_AUDIO_MODE_OFFSETS_US,
        "1280x720@20=0,1280x720@30=0,1920x1080@20=0,1920x1080@30=0"
    );
    assert_eq!(
        FACTORY_MJPEG_VIDEO_MODE_OFFSETS_US,
        "1280x720@20=162659,1280x720@30=135090,1920x1080@20=160045,1920x1080@30=127952"
    );
}

#[test]
/// Keeps `every_supported_uvc_mode_loads_tailored_factory_offset` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
/// Inputs are the typed parameters; output is the return value or side effect.
fn every_supported_uvc_mode_loads_tailored_factory_offset() {
    for (mode, expected_offset_us) in BLESSED_SERVER_TO_RCT_VIDEO_OFFSETS {
        with_clean_offset_env(|| {
            temp_env::with_var("UVC_MODE", Some(*mode), || {
                let state = snapshot_from_env();
                assert_eq!(
                    state.factory_video_offset_us, *expected_offset_us,
                    "{mode} should use its baked server-to-RCT factory offset"
                );
                assert_eq!(
                    state.default_video_offset_us, *expected_offset_us,
                    "{mode} should default to its baked server-to-RCT offset"
                );
                assert_eq!(state.default_audio_offset_us, 0);
                assert_eq!(state.source, "factory");
                assert_eq!(state.confidence, FACTORY_CONFIDENCE);
            });
        });
    }
}

#[test]
fn default_snapshot_uses_factory_mjpeg_calibration() {
    with_clean_offset_env(|| {
        let state = snapshot_from_env();
        assert_eq!(state.default_audio_offset_us, 0);
        assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
        assert_eq!(state.source, "factory");
    });
}

#[test]
/// Keeps `default_snapshot_uses_uvc_mode_factory_calibration` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
/// Inputs are the typed parameters; output is the return value or side effect.
fn default_snapshot_uses_uvc_mode_factory_calibration() {
    temp_env::with_vars(
        [
            ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
            ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
            (
                "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                None::<&str>,
            ),
            (
                "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                None::<&str>,
            ),
            ("LESAVKA_UVC_WIDTH", Some("1920")),
            ("LESAVKA_UVC_HEIGHT", Some("1080")),
            ("LESAVKA_UVC_FPS", Some("30")),
            ("LESAVKA_UVC_INTERVAL", None::<&str>),
        ],
        || {
            let state = snapshot_from_env();
            assert_eq!(
                state.default_video_offset_us,
                FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_30_US
            );
            assert_eq!(
                state.factory_video_offset_us,
                FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_30_US
            );
            assert_eq!(state.source, "factory");
        },
    );
}

#[test]
/// Keeps `mode_offset_map_overrides_stale_scalar_offset` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
/// Inputs are the typed parameters; output is the return value or side effect.
fn mode_offset_map_overrides_stale_scalar_offset() {
    temp_env::with_vars(
        [
            ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
            ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("170000")),
            (
                "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                None::<&str>,
            ),
            (
                "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                Some("1280x720@20=123456"),
            ),
            ("LESAVKA_UVC_WIDTH", Some("1280")),
            ("LESAVKA_UVC_HEIGHT", Some("720")),
            ("LESAVKA_UVC_FPS", Some("20")),
            ("LESAVKA_UVC_INTERVAL", None::<&str>),
        ],
        || {
            let state = snapshot_from_env();
            assert_eq!(state.default_video_offset_us, 123_456);
            assert_eq!(state.source, "env");
            assert_eq!(state.confidence, "configured");
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `stale_scalar_video_offset_falls_back_to_mode_factory` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn stale_scalar_video_offset_falls_back_to_mode_factory() {
|
||||
temp_env::with_vars(
|
||||
[
|
||||
("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
|
||||
("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("170000")),
|
||||
(
|
||||
"LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
|
||||
None::<&str>,
|
||||
),
|
||||
(
|
||||
"LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
|
||||
None::<&str>,
|
||||
),
|
||||
("LESAVKA_UVC_WIDTH", Some("1920")),
|
||||
("LESAVKA_UVC_HEIGHT", Some("1080")),
|
||||
("LESAVKA_UVC_FPS", Some("20")),
|
||||
("LESAVKA_UVC_INTERVAL", None::<&str>),
|
||||
],
|
||||
|| {
|
||||
let state = snapshot_from_env();
|
||||
assert_eq!(
|
||||
state.default_video_offset_us,
|
||||
FACTORY_MJPEG_VIDEO_OFFSET_1920X1080_20_US
|
||||
);
|
||||
assert_eq!(state.source, "factory");
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `store_persists_manual_adjustments_and_updates_runtime` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn store_persists_manual_adjustments_and_updates_runtime() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store
|
||||
.apply(CalibrationRequest {
|
||||
action: CalibrationAction::AdjustActive as i32,
|
||||
audio_delta_us: -5_000,
|
||||
video_delta_us: 0,
|
||||
observed_delivery_skew_ms: 0.0,
|
||||
observed_enqueue_skew_ms: 0.0,
|
||||
note: String::new(),
|
||||
})
|
||||
.expect("manual adjust applies");
|
||||
assert_eq!(state.active_audio_offset_us, -5_000);
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US, -5_000)
|
||||
);
|
||||
let raw = std::fs::read_to_string(file.path()).expect("persisted calibration");
|
||||
assert!(raw.contains("active_audio_offset_us=-5000"));
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn calibration_path_uses_default_for_blank_override() {
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(""), || {
|
||||
assert_eq!(
|
||||
calibration_path(),
|
||||
PathBuf::from("/var/lib/lesavka/calibration.toml")
|
||||
);
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `snapshot_from_env_uses_configured_offsets_and_clamps_extremes` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn snapshot_from_env_uses_configured_offsets_and_clamps_extremes() {
|
||||
temp_env::with_vars(
|
||||
[
|
||||
("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", Some("-9999999")),
|
||||
("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("12345")),
|
||||
],
|
||||
|| {
|
||||
let state = snapshot_from_env();
|
||||
assert_eq!(state.default_audio_offset_us, -OFFSET_LIMIT_US);
|
||||
assert_eq!(state.default_video_offset_us, 12_345);
|
||||
assert_eq!(state.source, "env");
|
||||
assert_eq!(state.confidence, "configured");
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `parse_snapshot_falls_back_for_missing_and_malformed_fields` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn parse_snapshot_falls_back_for_missing_and_malformed_fields() {
|
||||
temp_env::with_vars(
|
||||
[
|
||||
("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
|
||||
("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
|
||||
],
|
||||
|| {
|
||||
let state = parse_snapshot(
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=bad
|
||||
default_video_offset_us=2500
|
||||
active_audio_offset_us=-1600000
|
||||
source="saved"
|
||||
detail="loaded \"quoted\" value"
|
||||
"#,
|
||||
);
|
||||
assert_eq!(state.default_audio_offset_us, FACTORY_MJPEG_AUDIO_OFFSET_US);
|
||||
assert_eq!(state.default_video_offset_us, 2_500);
|
||||
assert_eq!(state.active_audio_offset_us, -OFFSET_LIMIT_US);
|
||||
assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.source, "saved");
|
||||
assert_eq!(state.confidence, FACTORY_CONFIDENCE);
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `load_migrates_untouched_legacy_factory_mjpeg_baseline` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn load_migrates_untouched_legacy_factory_mjpeg_baseline() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
std::fs::write(
|
||||
file.path(),
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=-45000
|
||||
default_video_offset_us=0
|
||||
active_audio_offset_us=-45000
|
||||
active_video_offset_us=0
|
||||
source="env"
|
||||
confidence="configured"
|
||||
detail="loaded upstream A/V calibration defaults"
|
||||
"#,
|
||||
)
|
||||
.expect("legacy calibration seed");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store.current();
|
||||
assert_eq!(state.active_audio_offset_us, 0);
|
||||
assert_eq!(state.default_audio_offset_us, 0);
|
||||
assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.source, "factory");
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
|
||||
);
|
||||
assert!(state.detail.contains("migrated legacy MJPEG"));
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `load_migrates_untouched_previous_factory_mjpeg_baseline` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn load_migrates_untouched_previous_factory_mjpeg_baseline() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
std::fs::write(
|
||||
file.path(),
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=720000
|
||||
default_video_offset_us=0
|
||||
active_audio_offset_us=720000
|
||||
active_video_offset_us=0
|
||||
source="env"
|
||||
confidence="configured"
|
||||
detail="loaded upstream A/V calibration defaults"
|
||||
"#,
|
||||
)
|
||||
.expect("previous calibration seed");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store.current();
|
||||
assert_eq!(state.active_audio_offset_us, 0);
|
||||
assert_eq!(state.default_audio_offset_us, 0);
|
||||
assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.source, "factory");
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
|
||||
);
|
||||
assert!(state.detail.contains("to audio +0.0ms/video +135.1ms"));
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `load_migrates_overshot_video_factory_mjpeg_baseline` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn load_migrates_overshot_video_factory_mjpeg_baseline() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
std::fs::write(
|
||||
file.path(),
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=0
|
||||
default_video_offset_us=1090000
|
||||
active_audio_offset_us=0
|
||||
active_video_offset_us=1090000
|
||||
source="factory"
|
||||
confidence="factory"
|
||||
detail="loaded upstream A/V calibration defaults"
|
||||
"#,
|
||||
)
|
||||
.expect("overshot video calibration seed");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store.current();
|
||||
assert_eq!(state.active_audio_offset_us, 0);
|
||||
assert_eq!(state.default_audio_offset_us, 0);
|
||||
assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.source, "factory");
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
|
||||
);
|
||||
assert!(state.detail.contains("from audio +0.0ms/video +1090.0ms"));
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `load_migrates_browser_video_factory_mjpeg_baseline` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn load_migrates_browser_video_factory_mjpeg_baseline() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
std::fs::write(
|
||||
file.path(),
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=0
|
||||
default_video_offset_us=130000
|
||||
active_audio_offset_us=0
|
||||
active_video_offset_us=130000
|
||||
source="factory"
|
||||
confidence="factory"
|
||||
detail="loaded upstream A/V calibration defaults"
|
||||
"#,
|
||||
)
|
||||
.expect("browser video calibration seed");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store.current();
|
||||
assert_eq!(state.active_audio_offset_us, 0);
|
||||
assert_eq!(state.default_audio_offset_us, 0);
|
||||
assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.source, "factory");
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
|
||||
);
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `load_migrates_delayed_video_factory_mjpeg_baseline` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn load_migrates_delayed_video_factory_mjpeg_baseline() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
std::fs::write(
|
||||
file.path(),
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=0
|
||||
default_video_offset_us=350000
|
||||
active_audio_offset_us=0
|
||||
active_video_offset_us=350000
|
||||
source="factory"
|
||||
confidence="factory"
|
||||
detail="loaded upstream A/V calibration defaults"
|
||||
"#,
|
||||
)
|
||||
.expect("delayed video calibration seed");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store.current();
|
||||
assert_eq!(state.active_audio_offset_us, 0);
|
||||
assert_eq!(state.default_audio_offset_us, 0);
|
||||
assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.source, "factory");
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
|
||||
);
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `load_migrates_early_zero_video_factory_mjpeg_baseline` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn load_migrates_early_zero_video_factory_mjpeg_baseline() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
std::fs::write(
|
||||
file.path(),
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=0
|
||||
default_video_offset_us=0
|
||||
active_audio_offset_us=0
|
||||
active_video_offset_us=0
|
||||
source="factory"
|
||||
confidence="factory"
|
||||
detail="loaded upstream A/V calibration defaults"
|
||||
"#,
|
||||
)
|
||||
.expect("early zero calibration seed");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store.current();
|
||||
assert_eq!(state.active_audio_offset_us, 0);
|
||||
assert_eq!(state.default_audio_offset_us, 0);
|
||||
assert_eq!(state.active_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.default_video_offset_us, FACTORY_MJPEG_VIDEO_OFFSET_US);
|
||||
assert_eq!(state.source, "factory");
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US, 0)
|
||||
);
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `load_keeps_manual_legacy_sized_calibration` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn load_keeps_manual_legacy_sized_calibration() {
|
||||
let file = NamedTempFile::new().expect("temp calibration file");
|
||||
std::fs::write(
|
||||
file.path(),
|
||||
r#"
|
||||
profile="mjpeg"
|
||||
default_audio_offset_us=-45000
|
||||
default_video_offset_us=0
|
||||
active_audio_offset_us=-45000
|
||||
active_video_offset_us=0
|
||||
source="manual"
|
||||
confidence="manual"
|
||||
detail="operator-set"
|
||||
"#,
|
||||
)
|
||||
.expect("manual calibration seed");
|
||||
let path = file.path().to_string_lossy().to_string();
|
||||
temp_env::with_var("LESAVKA_CALIBRATION_PATH", Some(path.as_str()), || {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let state = store.current();
|
||||
assert_eq!(state.active_audio_offset_us, -45_000);
|
||||
assert_eq!(state.active_video_offset_us, 0);
|
||||
assert_eq!(state.source, "manual");
|
||||
assert_eq!(runtime.playout_offsets(), (0, -45_000));
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `store_applies_all_calibration_actions_and_persists_defaults` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn store_applies_all_calibration_actions_and_persists_defaults() {
|
||||
let dir = tempfile::tempdir().expect("calibration dir");
|
||||
let path = dir.path().join("calibration.toml");
|
||||
let path_string = path.to_string_lossy().to_string();
|
||||
temp_env::with_var(
|
||||
"LESAVKA_CALIBRATION_PATH",
|
||||
Some(path_string.as_str()),
|
||||
|| {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
|
||||
let blind = store
|
||||
.apply(CalibrationRequest {
|
||||
action: CalibrationAction::BlindEstimate as i32,
|
||||
audio_delta_us: 5_000,
|
||||
video_delta_us: -2_000,
|
||||
observed_delivery_skew_ms: 44.0,
|
||||
observed_enqueue_skew_ms: 3.5,
|
||||
note: String::new(),
|
||||
})
|
||||
.expect("blind estimate");
|
||||
assert_eq!(blind.source, "blind");
|
||||
assert!(blind.detail.contains("delivery skew 44.0ms"));
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US - 2_000, 5_000)
|
||||
);
|
||||
|
||||
let manual = store
|
||||
.apply(CalibrationRequest {
|
||||
action: CalibrationAction::AdjustActive as i32,
|
||||
audio_delta_us: 1_999_999,
|
||||
video_delta_us: 0,
|
||||
observed_delivery_skew_ms: 0.0,
|
||||
observed_enqueue_skew_ms: 0.0,
|
||||
note: String::new(),
|
||||
})
|
||||
.expect("manual clamp");
|
||||
assert_eq!(manual.active_audio_offset_us, OFFSET_LIMIT_US);
|
||||
|
||||
let saved = store
|
||||
.apply(CalibrationRequest {
|
||||
action: CalibrationAction::SaveActiveAsDefault as i32,
|
||||
..CalibrationRequest::default()
|
||||
})
|
||||
.expect("save default");
|
||||
assert_eq!(saved.default_audio_offset_us, saved.active_audio_offset_us);
|
||||
assert_eq!(saved.confidence, "measured");
|
||||
|
||||
let factory = store
|
||||
.apply(CalibrationRequest {
|
||||
action: CalibrationAction::RestoreFactory as i32,
|
||||
..CalibrationRequest::default()
|
||||
})
|
||||
.expect("factory restore");
|
||||
assert_eq!(
|
||||
factory.active_audio_offset_us,
|
||||
FACTORY_MJPEG_AUDIO_OFFSET_US
|
||||
);
|
||||
assert_eq!(
|
||||
factory.active_video_offset_us,
|
||||
FACTORY_MJPEG_VIDEO_OFFSET_US
|
||||
);
|
||||
assert_eq!(factory.source, "factory");
|
||||
|
||||
let restored = store
|
||||
.apply(CalibrationRequest {
|
||||
action: CalibrationAction::RestoreDefault as i32,
|
||||
..CalibrationRequest::default()
|
||||
})
|
||||
.expect("default restore");
|
||||
assert_eq!(
|
||||
restored.active_audio_offset_us,
|
||||
restored.default_audio_offset_us
|
||||
);
|
||||
assert_eq!(
|
||||
store.current().active_audio_offset_us,
|
||||
restored.active_audio_offset_us
|
||||
);
|
||||
|
||||
let no_op = store
|
||||
.apply(CalibrationRequest::default())
|
||||
.expect("unspecified action is ok");
|
||||
assert_eq!(
|
||||
no_op.active_audio_offset_us,
|
||||
restored.active_audio_offset_us
|
||||
);
|
||||
|
||||
let raw = std::fs::read_to_string(&path).expect("persisted actions");
|
||||
assert!(raw.contains("confidence="));
|
||||
assert!(raw.contains("detail="));
|
||||
assert_eq!(escape_value("a\\b\"c"), "a\\\\b\\\"c");
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Keeps `transient_blind_estimate_updates_runtime_without_persisting_active_file_state` explicit because it sits on calibration state, where persisted and factory offsets must stay auditable.
|
||||
/// Inputs are the typed parameters; output is the return value or side effect.
|
||||
fn transient_blind_estimate_updates_runtime_without_persisting_active_file_state() {
|
||||
let dir = tempfile::tempdir().expect("calibration dir");
|
||||
let path = dir.path().join("calibration.toml");
|
||||
let path_string = path.to_string_lossy().to_string();
|
||||
temp_env::with_var(
|
||||
"LESAVKA_CALIBRATION_PATH",
|
||||
Some(path_string.as_str()),
|
||||
|| {
|
||||
let runtime = Arc::new(UpstreamMediaRuntime::new());
|
||||
let store = CalibrationStore::load(runtime.clone());
|
||||
let before_raw = std::fs::read_to_string(&path).ok();
|
||||
|
||||
let state = store.apply_transient_blind_estimate(
|
||||
0,
|
||||
-12_000,
|
||||
48.0,
|
||||
7.0,
|
||||
"runtime blind healer nudge",
|
||||
);
|
||||
|
||||
assert_eq!(state.source, "blind");
|
||||
assert_eq!(state.confidence, "runtime-estimated");
|
||||
assert_eq!(
|
||||
runtime.playout_offsets(),
|
||||
(FACTORY_MJPEG_VIDEO_OFFSET_US - 12_000, 0)
|
||||
);
|
||||
assert_eq!(std::fs::read_to_string(&path).ok(), before_raw);
|
||||
},
|
||||
);
|
||||
}
|
||||
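The tests above exercise per-mode offset maps in a comma-separated `WIDTHxHEIGHT@FPS=offset_us` format (e.g. `1280x720@20=162659`). As a hedged sketch of how such a map can be decoded — the real parser lives in the calibration module and may differ in validation and error handling — the shape is roughly:

```rust
use std::collections::HashMap;

/// Illustrative sketch (not the crate's actual parser): decode a
/// "WIDTHxHEIGHT@FPS=offset_us" map such as
/// "1280x720@20=162659,1920x1080@30=127952" into mode -> microsecond offset.
fn parse_mode_offsets(raw: &str) -> HashMap<String, i64> {
    raw.split(',')
        .filter_map(|entry| {
            // Each entry is "mode=offset"; skip malformed entries silently.
            let (mode, offset) = entry.trim().split_once('=')?;
            let offset = offset.trim().parse::<i64>().ok()?;
            Some((mode.trim().to_string(), offset))
        })
        .collect()
}
```

A lookup for the active UVC mode key (built from width, height, and fps) then selects the matching factory offset, with malformed entries simply dropped.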
@@ -21,6 +21,8 @@ struct LiveUvcProfile {
}

#[cfg(coverage)]
/// Keeps `select_camera_config` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn select_camera_config() -> CameraConfig {
    let output_override = std::env::var("LESAVKA_CAM_OUTPUT")
        .ok()
@@ -34,6 +36,8 @@ pub(super) fn select_camera_config() -> CameraConfig {
}

#[cfg(not(coverage))]
/// Keeps `select_camera_config` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(super) fn select_camera_config() -> CameraConfig {
    let output_env = std::env::var("LESAVKA_CAM_OUTPUT").ok();
    let output_override = output_env.as_deref().and_then(parse_camera_output);
@@ -75,6 +79,8 @@ pub(super) fn select_camera_config() -> CameraConfig {
    cfg
}

/// Keeps `parse_camera_output` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_camera_output(raw: &str) -> Option<CameraOutput> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "uvc" => Some(CameraOutput::Uvc),
@@ -84,6 +90,8 @@ fn parse_camera_output(raw: &str) -> Option<CameraOutput> {
    }
}

/// Keeps `parse_camera_codec` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_camera_codec(raw: &str) -> Option<CameraCodec> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "h264" => Some(CameraCodec::H264),
@@ -115,6 +123,8 @@ fn select_uvc_codec(uvc_env: Option<&HashMap<String, String>>) -> CameraCodec {
        .unwrap_or(CameraCodec::Mjpeg)
}

/// Keeps `select_hdmi_config` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn select_hdmi_config(hdmi: Option<HdmiConnector>) -> CameraConfig {
    let hw_decode = has_hw_h264_decode();
    let (default_width, default_height) = if hw_decode { (1920, 1080) } else { (1280, 720) };
@@ -155,6 +165,8 @@ fn select_hdmi_config(hdmi: Option<HdmiConnector>) -> CameraConfig {
}

#[cfg(coverage)]
/// Keeps `select_uvc_config` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn select_uvc_config() -> CameraConfig {
    let width = read_u32_from_env("LESAVKA_UVC_WIDTH").unwrap_or(1280);
    let height = read_u32_from_env("LESAVKA_UVC_HEIGHT").unwrap_or(720);
@@ -182,6 +194,8 @@ fn select_uvc_config() -> CameraConfig {
}

#[cfg(not(coverage))]
/// Keeps `select_uvc_config` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn select_uvc_config() -> CameraConfig {
    let mut uvc_env = HashMap::new();
    if let Ok(text) = fs::read_to_string("/etc/lesavka/uvc.env") {
@@ -239,6 +253,8 @@ fn select_uvc_config() -> CameraConfig {
}

#[cfg(not(coverage))]
/// Keeps `read_live_uvc_configfs_profile` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn read_live_uvc_configfs_profile() -> Option<LiveUvcProfile> {
    let base = std::env::var(UVC_CONFIGFS_BASE_ENV)
        .map(PathBuf::from)
@@ -260,6 +276,8 @@ fn read_live_uvc_configfs_profile() -> Option<LiveUvcProfile> {
}

#[cfg(not(coverage))]
/// Keeps `live_uvc_frame_dir` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn live_uvc_frame_dir(base: &Path) -> Option<PathBuf> {
    let preferred = base.join("streaming/mjpeg/m/720p");
    if preferred.join("wWidth").is_file() && preferred.join("wHeight").is_file() {
@@ -279,6 +297,8 @@ fn live_uvc_frame_dir(base: &Path) -> Option<PathBuf> {
}

#[cfg(not(coverage))]
/// Keeps `read_u32_file` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn read_u32_file(path: impl AsRef<Path>) -> Option<u32> {
    fs::read_to_string(path).ok()?.trim().parse::<u32>().ok()
}
@@ -297,6 +317,8 @@ fn has_hw_h264_decode() -> bool {
}

#[cfg(not(coverage))]
/// Keeps `has_hw_h264_decode` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn has_hw_h264_decode() -> bool {
    if gst::init().is_err() {
        return false;
@@ -310,6 +332,8 @@ fn has_hw_h264_decode() -> bool {
}

#[cfg(coverage)]
/// Keeps `detect_hdmi_connector` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_hdmi_connector(require_connected: bool) -> Option<HdmiConnector> {
    let _ = require_connected;
    std::env::var("LESAVKA_HDMI_CONNECTOR")
@@ -325,6 +349,8 @@ fn detect_hdmi_connector(require_connected: bool) -> Option<HdmiConnector> {
}

#[cfg(not(coverage))]
/// Keeps `detect_hdmi_connector` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn detect_hdmi_connector(require_connected: bool) -> Option<HdmiConnector> {
    let preferred = std::env::var("LESAVKA_HDMI_CONNECTOR").ok();
    let entries = fs::read_dir("/sys/class/drm").ok()?;
@@ -400,6 +426,8 @@ pub(crate) fn parse_hdmi_modes(raw: &str) -> Vec<HdmiMode> {
        .collect()
}

/// Keeps `parse_hdmi_mode` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(crate) fn parse_hdmi_mode(raw: &str) -> Option<HdmiMode> {
    let raw = raw.trim();
    let (width, rest) = raw.split_once('x')?;
@@ -413,6 +441,8 @@ pub(crate) fn parse_hdmi_mode(raw: &str) -> Option<HdmiMode> {
    (width > 0 && height > 0).then_some(HdmiMode { width, height })
}

/// Keeps `preferred_hdmi_mode` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
pub(crate) fn preferred_hdmi_mode(modes: &[HdmiMode]) -> Option<HdmiMode> {
    for preferred in [
        HdmiMode {
@@ -438,33 +468,4 @@ pub(crate) fn preferred_hdmi_mode(modes: &[HdmiMode]) -> Option<HdmiMode> {
        .or_else(|| modes.first().copied())
}

#[cfg(not(coverage))]
fn parse_env_file(text: &str) -> HashMap<String, String> {
    let mut out = HashMap::new();
    for line in text.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        let mut parts = line.splitn(2, '=');
        let key = match parts.next() {
            Some(v) => v.trim(),
            None => continue,
        };
        let val = match parts.next() {
            Some(v) => v.trim(),
            None => continue,
        };
        out.insert(key.to_string(), val.to_string());
    }
    out
}

pub(crate) fn read_u32_from_env(key: &str) -> Option<u32> {
    std::env::var(key).ok().and_then(|v| v.parse::<u32>().ok())
}

#[cfg(not(coverage))]
fn read_u32_from_map(map: &HashMap<String, String>, key: &str) -> Option<u32> {
    map.get(key).and_then(|v| v.parse::<u32>().ok())
}
include!("selection/config_env.rs");
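The truncated `parse_hdmi_mode` hunk above splits a DRM mode string on `x` and rejects zero-sized dimensions. A self-contained sketch of that shape — the function name and exact validation here are illustrative, not the crate's code — looks like:

```rust
/// Illustrative sketch: parse a DRM-style mode string such as "1920x1080"
/// into (width, height), rejecting zero or malformed dimensions.
fn parse_mode(raw: &str) -> Option<(u32, u32)> {
    // Split "WIDTHxHEIGHT" on the first 'x'; bail out on anything else.
    let (w, h) = raw.trim().split_once('x')?;
    let w = w.trim().parse::<u32>().ok()?;
    let h = h.trim().parse::<u32>().ok()?;
    // A mode with a zero dimension is never usable.
    (w > 0 && h > 0).then_some((w, h))
}
```

Returning `Option` lets the caller fall back to a preferred default mode when a connector reports nothing parseable.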
32
server/src/camera/selection/config_env.rs
Normal file
@ -0,0 +1,32 @@
#[cfg(not(coverage))]
/// Keeps `parse_env_file` explicit because it sits on camera selection, where negotiated profiles must match the server output contract.
/// Inputs are the typed parameters; output is the return value or side effect.
fn parse_env_file(text: &str) -> HashMap<String, String> {
    let mut out = HashMap::new();
    for line in text.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        let mut parts = line.splitn(2, '=');
        let key = match parts.next() {
            Some(v) => v.trim(),
            None => continue,
        };
        let val = match parts.next() {
            Some(v) => v.trim(),
            None => continue,
        };
        out.insert(key.to_string(), val.to_string());
    }
    out
}

pub(crate) fn read_u32_from_env(key: &str) -> Option<u32> {
    std::env::var(key).ok().and_then(|v| v.parse::<u32>().ok())
}

#[cfg(not(coverage))]
fn read_u32_from_map(map: &HashMap<String, String>, key: &str) -> Option<u32> {
    map.get(key).and_then(|v| v.parse::<u32>().ok())
}
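The env-file rules in `parse_env_file` (skip blanks and `#` comments, split on the first `=`, trim both sides, ignore lines without `=`) can be exercised standalone. This sketch reproduces those semantics outside the `#[cfg(not(coverage))]` gate; the sample keys are illustrative, not real config names.

```rust
use std::collections::HashMap;

// Standalone sketch of the parse_env_file semantics: blank lines and `#`
// comments are skipped, the first `=` splits key from value, both sides are
// trimmed, and lines without `=` are silently ignored.
fn parse_env_file(text: &str) -> HashMap<String, String> {
    let mut out = HashMap::new();
    for line in text.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        let mut parts = line.splitn(2, '=');
        let (Some(key), Some(val)) = (parts.next(), parts.next()) else {
            continue;
        };
        out.insert(key.trim().to_string(), val.trim().to_string());
    }
    out
}

fn main() {
    // Hypothetical keys, chosen only to show the trimming and skip rules.
    let cfg = parse_env_file("# comment\nEXAMPLE_WIDTH = 1920\nbad line\nEXAMPLE_FPS=30\n");
    assert_eq!(cfg.get("EXAMPLE_WIDTH").map(String::as_str), Some("1920"));
    assert_eq!(cfg.len(), 2);
}
```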
File diff suppressed because it is too large
271
server/src/main/relay_service/camera_stream_rpc.rs
Normal file
@ -0,0 +1,271 @@
impl Handler {
    /// Accept synthetic upstream webcam packets without UVC/HDMI hardware.
    async fn stream_camera_rpc(
        &self,
        req: Request<tonic::Streaming<VideoPacket>>,
    ) -> Result<Response<ReceiverStream<Result<Empty, Status>>>, Status> {
        let rpc_id = runtime_support::next_stream_id();
        let cfg = camera::current_camera_config();
        info!(
            rpc_id,
            output = cfg.output.as_str(),
            codec = cfg.codec.as_str(),
            width = cfg.width,
            height = cfg.height,
            fps = cfg.fps,
            hdmi = cfg.hdmi.as_ref().map(|h| h.name.as_str()).unwrap_or("none"),
            "🎥 stream_camera output selected"
        );

        let upstream_lease = self.upstream_media_rt.activate_camera();
        let (camera_session_id, relay, _relay_reused) = self.camera_rt.activate(&cfg).await?;
        let camera_rt = self.camera_rt.clone();
        let upstream_media_rt = self.upstream_media_rt.clone();
        info!(
            rpc_id,
            session_id = upstream_lease.session_id,
            camera_session_id,
            "🎥 stream_camera opened"
        );
        let frame_step_us = (1_000_000u64 / u64::from(cfg.fps.max(1))).max(1);

        let (tx, rx) = tokio::sync::mpsc::channel(1);

        tokio::spawn(async move {
            let mut cleanup = UpstreamStreamCleanup::camera(
                upstream_media_rt.clone(),
                upstream_lease.generation,
                rpc_id,
                upstream_lease.session_id,
                camera_session_id,
            );
            let mut s = req.into_inner();
            let mut pending = std::collections::VecDeque::new();
            let mut inbound_closed = false;
            let stale_drop_budget = upstream_stale_drop_budget();
            let mut startup_video_settled = false;
            'camera_loop: loop {
                if !camera_rt.is_active(camera_session_id)
                    || !upstream_media_rt.is_camera_active(upstream_lease.generation)
                {
                    info!(
                        rpc_id,
                        session_id = upstream_lease.session_id,
                        camera_session_id,
                        "🎥 stream_camera session superseded"
                    );
                    cleanup.mark_superseded();
                    break;
                }
                if !inbound_closed {
                    let next_packet = tokio::select! {
                        packet = s.next() => Some(packet),
                        _ = tokio::time::sleep(Duration::from_millis(50)) => None,
                    };
                    if let Some(next_packet) = next_packet {
                        match next_packet.transpose() {
                            Ok(Some(pkt)) => {
                                enqueue_pending_upstream_video_packet(
                                    &upstream_media_rt,
                                    &mut pending,
                                    pkt,
                                    rpc_id,
                                    upstream_lease.session_id,
                                    camera_session_id,
                                    "poll",
                                );
                            }
                            Ok(None) => inbound_closed = true,
                            Err(err) => {
                                cleanup.mark_aborted();
                                warn!(
                                    rpc_id,
                                    session_id = upstream_lease.session_id,
                                    camera_session_id,
                                    "🎥 stream_camera inbound error before clean EOF: {err}"
                                );
                                break 'camera_loop;
                            }
                        }
                    }
                }
                let Some(mut pkt) = pending.pop_front() else {
                    if inbound_closed {
                        cleanup.mark_closed();
                        break;
                    }
                    continue;
                };
                let plan = match upstream_media_rt.plan_video_pts(pkt.pts, frame_step_us) {
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::AwaitingPair => {
                        if inbound_closed {
                            tracing::debug!(
                                rpc_id,
                                session_id = upstream_lease.session_id,
                                camera_session_id,
                                pts = pkt.pts,
                                "🎥 dropping trailing upstream video frame because no paired audio arrived before stream close"
                            );
                            continue;
                        }
                        pending.push_front(pkt);
                        continue;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::DropBeforeOverlap => {
                        continue;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::DropStale(reason) => {
                        tracing::warn!(
                            rpc_id,
                            session_id = upstream_lease.session_id,
                            camera_session_id,
                            pts = pkt.pts,
                            reason,
                            "🎥 upstream video frame dropped by authoritative freshness planner"
                        );
                        continue;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::StartupFailed(reason) => {
                        tracing::error!(
                            rpc_id,
                            session_id = upstream_lease.session_id,
                            camera_session_id,
                            reason,
                            "🎥 upstream video startup failed"
                        );
                        cleanup.mark_aborted();
                        break;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::Play(plan) => plan,
                };
                let mut audio_master_wait = std::pin::pin!(
                    upstream_media_rt.wait_for_audio_master(plan.local_pts_us, plan.due_at)
                );
                let audio_master_ready = loop {
                    tokio::select! {
                        ready = &mut audio_master_wait => break ready,
                        next_packet = s.next(), if !inbound_closed => {
                            match next_packet.transpose() {
                                Ok(Some(next_pkt)) => {
                                    enqueue_pending_upstream_video_packet(
                                        &upstream_media_rt,
                                        &mut pending,
                                        next_pkt,
                                        rpc_id,
                                        upstream_lease.session_id,
                                        camera_session_id,
                                        "audio-master-wait",
                                    );
                                }
                                Ok(None) => inbound_closed = true,
                                Err(err) => {
                                    cleanup.mark_aborted();
                                    warn!(
                                        rpc_id,
                                        session_id = upstream_lease.session_id,
                                        camera_session_id,
                                        "🎥 stream_camera inbound error while waiting for audio master: {err}"
                                    );
                                    break 'camera_loop;
                                }
                            }
                        }
                    }
                };
                if !audio_master_ready {
                    upstream_media_rt.record_video_freeze(
                        "video froze because audio master did not reach the frame timestamp",
                    );
                    tracing::warn!(
                        rpc_id,
                        session_id = upstream_lease.session_id,
                        camera_session_id,
                        pts = plan.local_pts_us,
                        "🎥 upstream video frame dropped because the audio master never caught up inside the pairing window"
                    );
                    continue;
                }
                if plan.late_by > stale_drop_budget {
                    let coalesced = retain_freshest_video_packet(&mut pending);
                    if startup_video_settled {
                        tracing::warn!(
                            rpc_id,
                            session_id = upstream_lease.session_id,
                            camera_session_id,
                            late_by_ms = plan.late_by.as_millis(),
                            pts = plan.local_pts_us,
                            dropped_pending = coalesced,
                            "🎥 upstream video frame dropped after missing its freshness budget"
                        );
                    } else {
                        tracing::debug!(
                            rpc_id,
                            session_id = upstream_lease.session_id,
                            camera_session_id,
                            late_by_ms = plan.late_by.as_millis(),
                            pts = plan.local_pts_us,
                            dropped_pending = coalesced,
                            "🎥 dropping startup-stale upstream video until the playout window settles"
                        );
                    }
                    continue;
                }
                let sleep_until_due = tokio::time::sleep_until(plan.due_at);
                tokio::pin!(sleep_until_due);
                loop {
                    tokio::select! {
                        _ = &mut sleep_until_due => break,
                        next_packet = s.next(), if !inbound_closed => {
                            match next_packet.transpose() {
                                Ok(Some(next_pkt)) => {
                                    enqueue_pending_upstream_video_packet(
                                        &upstream_media_rt,
                                        &mut pending,
                                        next_pkt,
                                        rpc_id,
                                        upstream_lease.session_id,
                                        camera_session_id,
                                        "due-wait",
                                    );
                                }
                                Ok(None) => inbound_closed = true,
                                Err(err) => {
                                    cleanup.mark_aborted();
                                    warn!(
                                        rpc_id,
                                        session_id = upstream_lease.session_id,
                                        camera_session_id,
                                        "🎥 stream_camera inbound error while waiting for frame due time: {err}"
                                    );
                                    break 'camera_loop;
                                }
                            }
                        }
                    }
                }
                let actual_late_by = tokio::time::Instant::now()
                    .checked_duration_since(plan.due_at)
                    .unwrap_or_default();
                if actual_late_by > stale_drop_budget {
                    tracing::debug!(
                        rpc_id,
                        session_id = upstream_lease.session_id,
                        camera_session_id,
                        late_by_ms = actual_late_by.as_millis(),
                        pts = plan.local_pts_us,
                        "🎥 emitting video after waiting for the audio master to preserve sync"
                    );
                }
                pkt.pts = plan.local_pts_us;
                startup_video_settled = true;
                let presented_pts = pkt.pts;
                relay.feed(pkt); // ← all logging inside video.rs
                upstream_media_rt.mark_video_presented(presented_pts, plan.due_at);
            }
            tx.send(Ok(Empty {})).await.ok();
            Ok::<(), Status>(())
        });

        Ok(Response::new(ReceiverStream::new(rx)))
    }
}
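The camera loop calls `retain_freshest_video_packet` when a frame misses its freshness budget, but that helper is not defined in this diff. A minimal sketch of its assumed behavior, collapsing the pending backlog to the single newest frame and returning the number of older frames discarded; the `VideoPacket` struct here is a stand-in for the real protobuf type:

```rust
use std::collections::VecDeque;

// Stand-in for the real VideoPacket; only the capture timestamp matters here.
struct VideoPacket {
    pts: u64,
}

// Assumed coalescing behavior: keep only the freshest queued frame (packets
// arrive in capture order, so the back of the deque is newest) and report how
// many stale frames were dropped.
fn retain_freshest_video_packet(pending: &mut VecDeque<VideoPacket>) -> usize {
    if pending.len() <= 1 {
        return 0;
    }
    let dropped = pending.len() - 1;
    let freshest = pending.pop_back().expect("len checked above");
    pending.clear();
    pending.push_back(freshest);
    dropped
}

fn main() {
    let mut pending: VecDeque<VideoPacket> =
        (0..4u64).map(|i| VideoPacket { pts: i * 33_333 }).collect();
    let dropped = retain_freshest_video_packet(&mut pending);
    assert_eq!(dropped, 3);
    assert_eq!(pending.front().map(|p| p.pts), Some(99_999));
}
```

Coalescing to the freshest frame rather than draining one-by-one matches the loop's logging, which reports a `dropped_pending` count per late frame.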
99
server/src/main/relay_service/input_stream_rpc.rs
Normal file
@ -0,0 +1,99 @@
impl Handler {
    /// Keeps `stream_keyboard` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn stream_keyboard_rpc(
        &self,
        req: Request<tonic::Streaming<KeyboardReport>>,
    ) -> Result<Response<ReceiverStream<Result<KeyboardReport, Status>>>, Status> {
        let rpc_id = runtime_support::next_stream_id();
        info!(rpc_id, "⌨️ stream_keyboard opened");
        let (tx, rx) = tokio::sync::mpsc::channel(32);
        let kb = self.kb.clone();
        let ms = self.ms.clone();
        let kb_path = hid_endpoint(0);
        let ms_path = hid_endpoint(1);
        let gadget = self.gadget.clone();
        let did_cycle = self.did_cycle.clone();
        let session_lease = self.capture_power.acquire_session().await;
        let report_delay = live_keyboard_report_delay();

        tokio::spawn(async move {
            let _session_lease = session_lease;
            let mut s = req.into_inner();
            while let Some(pkt) = s.next().await.transpose()? {
                if let Err(e) = runtime_support::write_hid_report(&kb, &kb_path, &pkt.data).await {
                    if e.raw_os_error() == Some(libc::EAGAIN) {
                        debug!(rpc_id, "⌨️ write would block (dropped)");
                    } else {
                        warn!(rpc_id, "⌨️ write failed: {e} (dropped)");
                        runtime_support::recover_hid_if_needed(
                            &e,
                            gadget.clone(),
                            kb.clone(),
                            ms.clone(),
                            kb_path.clone(),
                            ms_path.clone(),
                            did_cycle.clone(),
                        )
                        .await;
                    }
                }
                tx.send(Ok(pkt)).await.ok();
                if !report_delay.is_zero() {
                    tokio::time::sleep(report_delay).await;
                }
            }
            info!(rpc_id, "⌨️ stream_keyboard closed");
            Ok::<(), Status>(())
        });

        Ok(Response::new(ReceiverStream::new(rx)))
    }

    /// Keeps `stream_mouse` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn stream_mouse_rpc(
        &self,
        req: Request<tonic::Streaming<MouseReport>>,
    ) -> Result<Response<ReceiverStream<Result<MouseReport, Status>>>, Status> {
        let rpc_id = runtime_support::next_stream_id();
        info!(rpc_id, "🖱️ stream_mouse opened");
        let (tx, rx) = tokio::sync::mpsc::channel(1024);
        let ms = self.ms.clone();
        let kb = self.kb.clone();
        let kb_path = hid_endpoint(0);
        let ms_path = hid_endpoint(1);
        let gadget = self.gadget.clone();
        let did_cycle = self.did_cycle.clone();
        let session_lease = self.capture_power.acquire_session().await;

        tokio::spawn(async move {
            let _session_lease = session_lease;
            let mut s = req.into_inner();
            while let Some(pkt) = s.next().await.transpose()? {
                if let Err(e) = runtime_support::write_hid_report(&ms, &ms_path, &pkt.data).await {
                    if e.raw_os_error() == Some(libc::EAGAIN) {
                        debug!(rpc_id, "🖱️ write would block (dropped)");
                    } else {
                        warn!(rpc_id, "🖱️ write failed: {e} (dropped)");
                        runtime_support::recover_hid_if_needed(
                            &e,
                            gadget.clone(),
                            kb.clone(),
                            ms.clone(),
                            kb_path.clone(),
                            ms_path.clone(),
                            did_cycle.clone(),
                        )
                        .await;
                    }
                }
                tx.send(Ok(pkt)).await.ok();
            }
            info!(rpc_id, "🖱️ stream_mouse closed");
            Ok::<(), Status>(())
        });

        Ok(Response::new(ReceiverStream::new(rx)))
    }
}
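Both HID write loops make the same three-way decision: a successful write passes through, a would-block error silently drops the report, and any other error triggers gadget recovery. A sketch of that classification; the real code compares `raw_os_error()` against `libc::EAGAIN`, which the standard library surfaces as `ErrorKind::WouldBlock`, and the `WriteOutcome` enum is illustrative rather than part of the codebase:

```rust
use std::io;

#[derive(Debug, PartialEq)]
enum WriteOutcome {
    Sent,
    Dropped,        // EAGAIN: HID endpoint busy, report is expendable
    NeedsRecovery,  // anything else: cycle the USB gadget
}

// The relay treats a nonblocking HID write that would block as a dropped
// report (input reports are superseded by the next one anyway); only real
// failures are worth recovering the gadget for.
fn classify(result: io::Result<()>) -> WriteOutcome {
    match result {
        Ok(()) => WriteOutcome::Sent,
        Err(e) if e.kind() == io::ErrorKind::WouldBlock => WriteOutcome::Dropped,
        Err(_) => WriteOutcome::NeedsRecovery,
    }
}

fn main() {
    assert_eq!(classify(Ok(())), WriteOutcome::Sent);
    assert_eq!(
        classify(Err(io::Error::from(io::ErrorKind::WouldBlock))),
        WriteOutcome::Dropped
    );
    assert_eq!(
        classify(Err(io::Error::from(io::ErrorKind::BrokenPipe))),
        WriteOutcome::NeedsRecovery
    );
}
```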
214
server/src/main/relay_service/microphone_stream_rpc.rs
Normal file
@ -0,0 +1,214 @@
impl Handler {
    /// Accept synthetic upstream microphone packets without ALSA hardware.
    async fn stream_microphone_rpc(
        &self,
        req: Request<tonic::Streaming<AudioPacket>>,
    ) -> Result<Response<ReceiverStream<Result<Empty, Status>>>, Status> {
        let rpc_id = runtime_support::next_stream_id();
        let lease = self.upstream_media_rt.activate_microphone();
        info!(rpc_id, session_id = lease.session_id, "🎤 stream_microphone opened");
        let Some(microphone_sink_permit) = self
            .upstream_media_rt
            .reserve_microphone_sink(lease.generation)
            .await
        else {
            info!(
                rpc_id,
                session_id = lease.session_id,
                "🎤 stream_microphone stood down before the sink became available"
            );
            self.upstream_media_rt.close_microphone(lease.generation);
            return Err(Status::aborted(
                "microphone stream superseded before sink became available",
            ));
        };
        let uac_dev = std::env::var("LESAVKA_UAC_DEV").unwrap_or_else(|_| "hw:UAC2Gadget,0".into());
        info!(%uac_dev, "🎤 stream_microphone using UAC sink");
        let mut sink = runtime_support::open_voice_with_retry(&uac_dev)
            .await
            .map_err(|e| {
                self.upstream_media_rt.close_microphone(lease.generation);
                Status::internal(format!("{e:#}"))
            })?;

        let (tx, rx) = tokio::sync::mpsc::channel(1);
        let upstream_media_rt = self.upstream_media_rt.clone();

        tokio::spawn(async move {
            let _microphone_sink_permit = microphone_sink_permit;
            let mut cleanup = UpstreamStreamCleanup::microphone(
                upstream_media_rt.clone(),
                lease.generation,
                rpc_id,
                lease.session_id,
            );
            let mut inbound = req.into_inner();
            let mut pending = std::collections::VecDeque::new();
            let mut inbound_closed = false;
            let stale_drop_budget = upstream_stale_drop_budget();
            static CNT: std::sync::atomic::AtomicU64 = std::sync::atomic::AtomicU64::new(0);

            'microphone_loop: loop {
                if !upstream_media_rt.is_microphone_active(lease.generation) {
                    info!(rpc_id, session_id = lease.session_id, "🎤 stream_microphone session superseded");
                    cleanup.mark_superseded();
                    break;
                }
                if !inbound_closed {
                    let next_packet = tokio::select! {
                        packet = inbound.next() => Some(packet),
                        _ = tokio::time::sleep(Duration::from_millis(50)) => None,
                    };
                    if let Some(next_packet) = next_packet {
                        match next_packet.transpose() {
                            Ok(Some(pkt)) => {
                                upstream_media_rt.record_client_timing(
                                    UpstreamMediaKind::Microphone,
                                    audio_client_timing(&pkt),
                                );
                                pending.push_back(pkt);
                                let coalesced = retain_freshest_audio_packet(&mut pending);
                                if coalesced > 0 {
                                    tracing::debug!(
                                        rpc_id,
                                        session_id = lease.session_id,
                                        dropped = coalesced,
                                        "🎤 coalesced stale upstream audio backlog down to the freshest chunk"
                                    );
                                }
                            }
                            Ok(None) => inbound_closed = true,
                            Err(err) => {
                                cleanup.mark_aborted();
                                warn!(
                                    rpc_id,
                                    session_id = lease.session_id,
                                    "🎤 stream_microphone inbound error before clean EOF: {err}"
                                );
                                break;
                            }
                        }
                    }
                }
                let Some(mut pkt) = pending.pop_front() else {
                    if inbound_closed {
                        cleanup.mark_closed();
                        break;
                    }
                    continue;
                };
                let plan = match upstream_media_rt.plan_audio_pts(pkt.pts) {
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::AwaitingPair => {
                        if inbound_closed {
                            tracing::debug!(
                                rpc_id,
                                session_id = lease.session_id,
                                pts = pkt.pts,
                                "🎤 dropping trailing upstream audio because no paired video arrived before stream close"
                            );
                            continue;
                        }
                        pending.push_front(pkt);
                        continue;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::DropBeforeOverlap => {
                        continue;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::DropStale(reason) => {
                        tracing::warn!(
                            rpc_id,
                            session_id = lease.session_id,
                            pts = pkt.pts,
                            reason,
                            "🎤 upstream audio packet dropped by authoritative freshness planner"
                        );
                        continue;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::StartupFailed(reason) => {
                        tracing::error!(
                            rpc_id,
                            session_id = lease.session_id,
                            reason,
                            "🎤 upstream audio startup failed"
                        );
                        cleanup.mark_aborted();
                        break;
                    }
                    lesavka_server::upstream_media_runtime::UpstreamPlanDecision::Play(plan) => plan,
                };
                if plan.late_by > stale_drop_budget {
                    tracing::warn!(
                        rpc_id,
                        session_id = lease.session_id,
                        late_by_ms = plan.late_by.as_millis(),
                        pts = plan.local_pts_us,
                        "🎤 upstream audio packet dropped after missing its freshness budget"
                    );
                    continue;
                }
                while tokio::time::Instant::now() < plan.due_at {
                    let sleep_until_due = tokio::time::sleep_until(plan.due_at);
                    tokio::pin!(sleep_until_due);
                    tokio::select! {
                        _ = &mut sleep_until_due => break,
                        next_packet = inbound.next(), if !inbound_closed => {
                            match next_packet.transpose() {
                                Ok(Some(next_pkt)) => {
                                    upstream_media_rt.record_client_timing(
                                        UpstreamMediaKind::Microphone,
                                        audio_client_timing(&next_pkt),
                                    );
                                    pending.push_back(next_pkt);
                                    let coalesced = retain_freshest_audio_packet(&mut pending);
                                    if coalesced > 0 {
                                        tracing::debug!(
                                            rpc_id,
                                            session_id = lease.session_id,
                                            dropped = coalesced,
                                            "🎤 coalesced stale upstream audio while waiting for scheduled playout"
                                        );
                                    }
                                }
                                Ok(None) => inbound_closed = true,
                                Err(err) => {
                                    cleanup.mark_aborted();
                                    warn!(
                                        rpc_id,
                                        session_id = lease.session_id,
                                        "🎤 stream_microphone inbound error before clean EOF: {err}"
                                    );
                                    break 'microphone_loop;
                                }
                            }
                        }
                    }
                }
                let actual_late_by = tokio::time::Instant::now()
                    .checked_duration_since(plan.due_at)
                    .unwrap_or_default();
                if actual_late_by > stale_drop_budget {
                    tracing::warn!(
                        rpc_id,
                        session_id = lease.session_id,
                        late_by_ms = actual_late_by.as_millis(),
                        pts = plan.local_pts_us,
                        "🎤 upstream audio packet dropped after waking too late for fresh playout"
                    );
                    continue;
                }
                pkt.pts = plan.local_pts_us;
                let n = CNT.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
                if n < 5 || n.is_multiple_of(3_000) {
                    tracing::info!(rpc_id, "🎤⬇ srv pkt#{n} {} bytes", pkt.data.len());
                }
                sink.push(&pkt);
                upstream_media_rt.mark_audio_presented(pkt.pts, plan.due_at);
            }
            sink.finish(); // flush on EOS
            let _ = tx.send(Ok(Empty {})).await;
            Ok::<(), Status>(())
        });

        Ok(Response::new(ReceiverStream::new(rx)))
    }
}
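Both streams apply the same post-wakeup guard: after sleeping until the planned due time, the actual lateness is recomputed and the packet is dropped if it exceeds the stale-drop budget; `checked_duration_since` yields `None` on an early wake-up, which `unwrap_or_default` turns into zero lateness. A sketch of that check using `std::time::Instant` in place of tokio's instant type:

```rust
use std::time::{Duration, Instant};

// Mirrors the actual_late_by computation above: lateness is how far "now" is
// past the scheduled due time (zero if we woke early), and a packet is only
// playable while lateness stays inside the stale-drop budget.
fn is_too_late(now: Instant, due_at: Instant, budget: Duration) -> bool {
    let late_by = now.checked_duration_since(due_at).unwrap_or_default();
    late_by > budget
}

fn main() {
    let due = Instant::now();
    // Illustrative budget; the real value comes from upstream_stale_drop_budget().
    let budget = Duration::from_millis(40);
    // Waking early is never "too late".
    assert!(!is_too_late(due - Duration::from_millis(10), due, budget));
    // 10 ms late is inside a 40 ms budget.
    assert!(!is_too_late(due + Duration::from_millis(10), due, budget));
    // 100 ms late blows the budget and the packet is dropped.
    assert!(is_too_late(due + Duration::from_millis(100), due, budget));
}
```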
109
server/src/main/relay_service/output_delay_probe_rpc.rs
Normal file
@ -0,0 +1,109 @@
impl Handler {
    /// Generate deterministic media on the server and feed UVC/UAC directly.
    async fn run_output_delay_probe_rpc(
        &self,
        req: Request<OutputDelayProbeRequest>,
    ) -> Result<Response<ReceiverStream<Result<OutputDelayProbeReply, Status>>>, Status> {
        let rpc_id = runtime_support::next_stream_id();
        let request = req.into_inner();
        let camera_cfg = camera::current_camera_config();
        let microphone_lease = self.upstream_media_rt.activate_microphone();
        let camera_lease = self.upstream_media_rt.activate_camera();
        info!(
            rpc_id,
            session_id = camera_lease.session_id,
            camera_generation = camera_lease.generation,
            microphone_generation = microphone_lease.generation,
            output = camera_cfg.output.as_str(),
            codec = camera_cfg.codec.as_str(),
            width = camera_cfg.width,
            height = camera_cfg.height,
            fps = camera_cfg.fps,
            "🧪 server output-delay probe opened"
        );
        let (camera_session_id, relay, _relay_reused) =
            match self.camera_rt.activate(&camera_cfg).await {
                Ok(active) => active,
                Err(err) => {
                    self.upstream_media_rt.close_camera(camera_lease.generation);
                    self.upstream_media_rt.close_microphone(microphone_lease.generation);
                    return Err(err);
                }
            };
        let Some(microphone_sink_permit) = self
            .upstream_media_rt
            .reserve_microphone_sink(microphone_lease.generation)
            .await
        else {
            self.upstream_media_rt.close_camera(camera_lease.generation);
            self.upstream_media_rt.close_microphone(microphone_lease.generation);
            return Err(Status::aborted(
                "output-delay probe superseded before microphone sink became available",
            ));
        };
        let uac_dev = std::env::var("LESAVKA_UAC_DEV").unwrap_or_else(|_| "hw:UAC2Gadget,0".into());
        let mut sink = runtime_support::open_voice_with_retry(&uac_dev)
            .await
            .map_err(|e| {
                self.upstream_media_rt.close_camera(camera_lease.generation);
                self.upstream_media_rt.close_microphone(microphone_lease.generation);
                Status::internal(format!("{e:#}"))
            })?;
        let camera_rt = self.camera_rt.clone();
        let upstream_media_rt = self.upstream_media_rt.clone();
        let (tx, rx) = tokio::sync::mpsc::channel(1);

        tokio::spawn(async move {
            let _microphone_sink_permit = microphone_sink_permit;
            let result = if !camera_rt.is_active(camera_session_id)
                || !upstream_media_rt.is_camera_active(camera_lease.generation)
                || !upstream_media_rt.is_microphone_active(microphone_lease.generation)
            {
                Err(anyhow::anyhow!("output-delay probe superseded before start"))
            } else {
                lesavka_server::output_delay_probe::run_server_output_delay_probe(
                    relay,
                    &mut sink,
                    &camera_cfg,
                    &request,
                )
                .await
            };
            upstream_media_rt.close_camera(camera_lease.generation);
            upstream_media_rt.close_microphone(microphone_lease.generation);
            match result {
                Ok(summary) => {
                    let detail = format!(
                        "server-generated UVC/UAC output-delay probe complete: video_frames={} audio_packets={} events={}",
                        summary.video_frames, summary.audio_packets, summary.event_count
                    );
                    info!(
                        rpc_id,
                        session_id = camera_lease.session_id,
                        camera_session_id,
                        detail,
                        "🧪 server output-delay probe closed"
                    );
                    tx.send(Ok(OutputDelayProbeReply {
                        ok: true,
                        detail,
                        server_timeline_json: summary.timeline_json,
                    }))
                    .await
                    .ok();
                }
                Err(err) => {
                    warn!(
                        rpc_id,
                        session_id = camera_lease.session_id,
                        camera_session_id,
                        "🧪 server output-delay probe failed: {err:#}"
                    );
                    tx.send(Err(Status::internal(format!("{err:#}")))).await.ok();
                }
            }
        });

        Ok(Response::new(ReceiverStream::new(rx)))
    }
}
72
server/src/main/relay_service/state_control_rpc.rs
Normal file
@ -0,0 +1,72 @@
impl Handler {
    async fn capture_video_rpc(
        &self,
        req: Request<MonitorRequest>,
    ) -> Result<Response<VideoStream>, Status> {
        self.capture_video_reply(req.into_inner()).await
    }

    /// Keeps `capture_audio` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn capture_audio_rpc(
        &self,
        req: Request<MonitorRequest>,
    ) -> Result<Response<AudioStream>, Status> {
        let rpc_id = runtime_support::next_stream_id();
        // Only one speaker stream for now; both 0/1 → same ALSA dev.
        let _id = req.into_inner().id;
        // Allow override (`LESAVKA_ALSA_DEV=hw:2,0` for debugging).
        let dev = std::env::var("LESAVKA_ALSA_DEV").unwrap_or_else(|_| "hw:UAC2Gadget,0".into());
        info!(rpc_id, %dev, "🔊 capture_audio opened");

        let s = runtime_support::open_ear_with_retry(&dev, 0)
            .await
            .map_err(|e| remote_audio_status(format!("{e:#}")))?;

        Ok(Response::new(Box::pin(s)))
    }

    async fn paste_text_rpc(&self, req: Request<PasteRequest>) -> Result<Response<PasteReply>, Status> {
        self.paste_text_reply(req).await
    }

    async fn recover_usb_rpc(&self, _req: Request<Empty>) -> ResetReply { self.recover_usb_reply().await }
    async fn recover_uac_rpc(&self, _req: Request<Empty>) -> ResetReply { self.recover_uac_reply().await }
    async fn recover_uvc_rpc(&self, _req: Request<Empty>) -> ResetReply { self.recover_uvc_reply().await }
    async fn reset_usb_rpc(&self, _req: Request<Empty>) -> ResetReply { self.reset_usb_reply().await }

    async fn get_capture_power_rpc(
        &self,
        _req: Request<Empty>,
    ) -> Result<Response<CapturePowerState>, Status> {
        self.get_capture_power_reply().await
    }

    async fn set_capture_power_rpc(
        &self,
        req: Request<SetCapturePowerRequest>,
    ) -> Result<Response<CapturePowerState>, Status> {
        self.set_capture_power_reply(req).await
    }

    async fn get_calibration_rpc(
        &self,
        _req: Request<Empty>,
    ) -> Result<Response<CalibrationState>, Status> {
        self.get_calibration_reply().await
    }

    async fn calibrate_rpc(
        &self,
        req: Request<CalibrationRequest>,
    ) -> Result<Response<CalibrationState>, Status> {
        self.calibrate_reply(req).await
    }

    async fn get_upstream_sync_rpc(
        &self,
        _req: Request<Empty>,
    ) -> Result<Response<UpstreamSyncState>, Status> {
        self.get_upstream_sync_reply().await
    }
}
254
server/src/main/relay_service/upstream_media_rpc.rs
Normal file
@ -0,0 +1,254 @@
impl Handler {
    /// Accept client-bundled webcam and microphone packets on the v2 upstream path.
    async fn stream_webcam_media_rpc(
        &self,
        req: Request<tonic::Streaming<UpstreamMediaBundle>>,
    ) -> Result<Response<ReceiverStream<Result<Empty, Status>>>, Status> {
        let rpc_id = runtime_support::next_stream_id();
        let camera_cfg = camera::current_camera_config();
        let microphone_lease = self.upstream_media_rt.activate_microphone();
        let camera_lease = self.upstream_media_rt.activate_camera();
        info!(
            rpc_id,
            session_id = camera_lease.session_id,
            camera_generation = camera_lease.generation,
            microphone_generation = microphone_lease.generation,
            output = camera_cfg.output.as_str(),
            codec = camera_cfg.codec.as_str(),
            width = camera_cfg.width,
            height = camera_cfg.height,
            fps = camera_cfg.fps,
            "📦 stream_webcam_media v2 opened"
        );
        let (camera_session_id, relay, _relay_reused) =
            match self.camera_rt.activate(&camera_cfg).await {
                Ok(active) => active,
                Err(err) => {
                    self.upstream_media_rt.close_camera(camera_lease.generation);
                    self.upstream_media_rt.close_microphone(microphone_lease.generation);
                    return Err(err);
                }
            };
        let Some(microphone_sink_permit) = self
            .upstream_media_rt
            .reserve_microphone_sink(microphone_lease.generation)
            .await
        else {
            self.upstream_media_rt.close_camera(camera_lease.generation);
            self.upstream_media_rt.close_microphone(microphone_lease.generation);
            return Err(Status::aborted(
                "v2 bundled media stream superseded before microphone sink became available",
            ));
        };
        let uac_dev = std::env::var("LESAVKA_UAC_DEV").unwrap_or_else(|_| "hw:UAC2Gadget,0".into());
        let mut sink = runtime_support::open_voice_with_retry(&uac_dev)
            .await
            .map_err(|e| {
                self.upstream_media_rt.close_camera(camera_lease.generation);
                self.upstream_media_rt.close_microphone(microphone_lease.generation);
                Status::internal(format!("{e:#}"))
            })?;
        let camera_rt = self.camera_rt.clone();
        let upstream_media_rt = self.upstream_media_rt.clone();
        let (tx, rx) = tokio::sync::mpsc::channel(1);

        tokio::spawn(async move {
            let _microphone_sink_permit = microphone_sink_permit;
            let mut inbound = req.into_inner();
            let mut clock = MediaV2Clock::default();
            let mut last_bundle_session_id = None;
            let mut last_bundle_seq = None;
            let mut video_presented_once = false;
            let mut outcome = "aborted";

            while let Some(bundle_result) = inbound.next().await {
                let mut bundle = match bundle_result {
                    Ok(bundle) => bundle,
                    Err(err) => {
                        warn!(
                            rpc_id,
                            session_id = camera_lease.session_id,
                            "📦 stream_webcam_media v2 inbound error before clean EOF: {err}"
                        );
                        break;
                    }
                };
                if !camera_rt.is_active(camera_session_id)
                    || !upstream_media_rt.is_camera_active(camera_lease.generation)
                    || !upstream_media_rt.is_microphone_active(microphone_lease.generation)
                {
                    outcome = "superseded";
                    break;
                }
                if last_bundle_session_id.is_some_and(|session_id| session_id != bundle.session_id) {
                    warn!(
                        rpc_id,
                        previous_session_id = last_bundle_session_id.unwrap_or_default(),
                        next_session_id = bundle.session_id,
                        "📦 v2 bundled client session changed; resetting local media clock"
                    );
                    clock = MediaV2Clock::default();
                    last_bundle_seq = None;
                }
                last_bundle_session_id = Some(bundle.session_id);
                if last_bundle_seq.is_some_and(|seq| bundle.seq <= seq) {
                    warn!(
                        rpc_id,
                        session_id = camera_lease.session_id,
                        client_bundle_session_id = bundle.session_id,
                        bundle_seq = bundle.seq,
                        previous_bundle_seq = last_bundle_seq.unwrap_or_default(),
                        "📦 v2 dropping duplicate/stale bundled packet"
                    );
                    continue;
                }
                last_bundle_seq = Some(bundle.seq);

                let Some(facts) = summarize_media_v2_bundle(&bundle) else {
                    continue;
                };
                if facts.has_audio && facts.has_video && facts.capture_span_us > MEDIA_V2_MAX_MIXED_CAPTURE_SPAN_US {
                    warn!(
                        rpc_id,
                        session_id = camera_lease.session_id,
                        client_bundle_session_id = bundle.session_id,
                        bundle_seq = bundle.seq,
                        span_ms = facts.capture_span_us / 1000,
                        "📦 v2 dropping mixed bundle with impossible A/V capture span"
                    );
                    continue;
                }
                for audio in &bundle.audio {
                    upstream_media_rt.record_client_timing(
                        UpstreamMediaKind::Microphone,
                        audio_client_timing(audio),
                    );
                }
                if let Some(video) = bundle.video.as_ref() {
                    upstream_media_rt.record_client_timing(
                        UpstreamMediaKind::Camera,
                        video_client_timing(video),
                    );
                }
                let (video_offset_us, audio_offset_us) = upstream_media_rt.playout_offsets();
                let Some(schedule) = media_v2_handoff_schedule(facts, audio_offset_us, video_offset_us) else {
                    if facts.has_video {
                        upstream_media_rt.record_video_freeze(
                            "v2 dropped stale bundled A/V before UVC/UAC handoff",
                        );
                    }
                    warn!(
                        rpc_id,
                        session_id = camera_lease.session_id,
                        client_bundle_session_id = bundle.session_id,
                        bundle_seq = bundle.seq,
                        max_queue_age_ms = facts.max_queue_age_ms,
                        "📦 v2 dropping whole bundle because it is already outside the freshness budget
|
||||
);
|
||||
continue;
|
||||
};
|
||||
debug!(
|
||||
rpc_id,
|
||||
session_id = camera_lease.session_id,
|
||||
client_bundle_session_id = bundle.session_id,
|
||||
bundle_seq = bundle.seq,
|
||||
max_queue_age_ms = facts.max_queue_age_ms,
|
||||
common_delay_ms = schedule.common_delay.as_millis(),
|
||||
relative_audio_delay_ms = schedule.relative_audio_delay.as_millis(),
|
||||
relative_video_delay_ms = schedule.relative_video_delay.as_millis(),
|
||||
audio_offset_us,
|
||||
video_offset_us,
|
||||
"📦 v2 scheduled bundled UAC/UVC handoff from one capture clock"
|
||||
);
|
||||
|
||||
match (schedule.audio_due_at, schedule.video_due_at) {
|
||||
(Some(audio_due_at), Some(video_due_at)) if audio_due_at <= video_due_at => {
|
||||
push_media_v2_audio(
|
||||
&mut bundle.audio,
|
||||
&mut clock,
|
||||
&mut sink,
|
||||
&upstream_media_rt,
|
||||
audio_due_at,
|
||||
)
|
||||
.await;
|
||||
feed_media_v2_video(
|
||||
bundle.video.take(),
|
||||
&mut clock,
|
||||
&relay,
|
||||
&upstream_media_rt,
|
||||
video_due_at,
|
||||
&mut video_presented_once,
|
||||
rpc_id,
|
||||
camera_lease.session_id,
|
||||
camera_session_id,
|
||||
)
|
||||
.await;
|
||||
}
|
||||
(Some(audio_due_at), Some(video_due_at)) => {
|
||||
feed_media_v2_video(
|
||||
bundle.video.take(),
|
||||
&mut clock,
|
||||
&relay,
|
||||
&upstream_media_rt,
|
||||
video_due_at,
|
||||
&mut video_presented_once,
|
||||
rpc_id,
|
||||
camera_lease.session_id,
|
||||
camera_session_id,
|
||||
)
|
||||
.await;
|
||||
push_media_v2_audio(
|
||||
&mut bundle.audio,
|
||||
&mut clock,
|
||||
&mut sink,
|
||||
&upstream_media_rt,
|
||||
audio_due_at,
|
||||
)
|
||||
.await;
|
||||
}
|
||||
(Some(audio_due_at), None) => {
|
||||
push_media_v2_audio(
|
||||
&mut bundle.audio,
|
||||
&mut clock,
|
||||
&mut sink,
|
||||
&upstream_media_rt,
|
||||
audio_due_at,
|
||||
)
|
||||
.await;
|
||||
}
|
||||
(None, Some(video_due_at)) => {
|
||||
feed_media_v2_video(
|
||||
bundle.video.take(),
|
||||
&mut clock,
|
||||
&relay,
|
||||
&upstream_media_rt,
|
||||
video_due_at,
|
||||
&mut video_presented_once,
|
||||
rpc_id,
|
||||
camera_lease.session_id,
|
||||
camera_session_id,
|
||||
)
|
||||
.await;
|
||||
}
|
||||
(None, None) => {}
|
||||
}
|
||||
}
|
||||
|
||||
outcome = if outcome == "aborted" { "closed" } else { outcome };
|
||||
sink.finish();
|
||||
upstream_media_rt.close_camera(camera_lease.generation);
|
||||
upstream_media_rt.close_microphone(microphone_lease.generation);
|
||||
info!(
|
||||
rpc_id,
|
||||
session_id = camera_lease.session_id,
|
||||
camera_session_id,
|
||||
outcome,
|
||||
"📦 stream_webcam_media v2 lifecycle ended"
|
||||
);
|
||||
tx.send(Ok(Empty {})).await.ok();
|
||||
Ok::<(), Status>(())
|
||||
});
|
||||
|
||||
Ok(Response::new(ReceiverStream::new(rx)))
|
||||
}
|
||||
}
|
||||
@@ -8,6 +8,8 @@ fn upstream_stale_drop_budget() -> Duration {
}

#[cfg(coverage)]
/// Keeps `retain_freshest_video_packet` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
/// Inputs are the typed parameters; output is the return value or side effect.
fn retain_freshest_video_packet(
    pending: &mut std::collections::VecDeque<VideoPacket>,
) -> usize {
@@ -25,6 +27,8 @@ fn retain_freshest_video_packet(
const AUDIO_PENDING_LIVE_WINDOW_PACKETS: usize = 8;

#[cfg(coverage)]
/// Keeps `retain_freshest_audio_packet` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
/// Inputs are the typed parameters; output is the return value or side effect.
fn retain_freshest_audio_packet(
    pending: &mut std::collections::VecDeque<AudioPacket>,
) -> usize {
@@ -48,6 +52,8 @@ impl Relay for Handler {
    type StreamWebcamMediaStream = ReceiverStream<Result<Empty, Status>>;
    type RunOutputDelayProbeStream = ReceiverStream<Result<OutputDelayProbeReply, Status>>;

    /// Keeps `stream_keyboard` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn stream_keyboard(
        &self,
        req: Request<tonic::Streaming<KeyboardReport>>,
@@ -72,6 +78,8 @@ impl Relay for Handler {
        Ok(Response::new(ReceiverStream::new(rx)))
    }

    /// Keeps `stream_mouse` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn stream_mouse(
        &self,
        req: Request<tonic::Streaming<MouseReport>>,
@@ -91,6 +99,8 @@ impl Relay for Handler {
        Ok(Response::new(ReceiverStream::new(rx)))
    }

    /// Keeps `stream_microphone` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn stream_microphone(
        &self,
        req: Request<tonic::Streaming<AudioPacket>>,
@@ -188,6 +198,8 @@ impl Relay for Handler {
        Ok(Response::new(ReceiverStream::new(rx)))
    }

    /// Keeps `stream_webcam_media` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn stream_webcam_media(
        &self,
        req: Request<tonic::Streaming<UpstreamMediaBundle>>,
@@ -222,6 +234,8 @@ impl Relay for Handler {
        Ok(Response::new(ReceiverStream::new(rx)))
    }

    /// Keeps `stream_camera` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn stream_camera(
        &self,
        req: Request<tonic::Streaming<VideoPacket>>,
@@ -310,6 +324,8 @@ impl Relay for Handler {
        Ok(Response::new(ReceiverStream::new(rx)))
    }

    /// Keeps `run_output_delay_probe` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn run_output_delay_probe(
        &self,
        req: Request<OutputDelayProbeRequest>,

@@ -12,6 +12,8 @@ mod tests {
    use std::sync::Arc;

    #[test]
    /// Keeps `retain_freshest_video_packet_keeps_only_the_latest_frame` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn retain_freshest_video_packet_keeps_only_the_latest_frame() {
        let mut pending = std::collections::VecDeque::from(vec![
            VideoPacket {
@@ -36,6 +38,8 @@ mod tests {
    }

    #[test]
    /// Keeps `retain_freshest_audio_packet_keeps_a_short_live_window` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn retain_freshest_audio_packet_keeps_a_short_live_window() {
        let mut pending = (0..10)
            .map(|idx| AudioPacket {
@@ -53,6 +57,8 @@ mod tests {
    }

    #[test]
    /// Keeps `enqueue_pending_upstream_video_packet_records_timing_before_scheduler_waits` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn enqueue_pending_upstream_video_packet_records_timing_before_scheduler_waits() {
        let runtime = UpstreamMediaRuntime::new();
        let mut pending = std::collections::VecDeque::from(vec![VideoPacket {
@@ -98,6 +104,8 @@ mod tests {
    }

    #[test]
    /// Keeps `media_v2_bundle_summary_uses_client_capture_sidecar_not_packet_pts` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn media_v2_bundle_summary_uses_client_capture_sidecar_not_packet_pts() {
        let bundle = UpstreamMediaBundle {
            capture_start_us: 1,
@@ -128,6 +136,8 @@ mod tests {
    }

    #[test]
    /// Keeps `media_v2_schedule_offsets_outputs_without_creating_split_planner` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn media_v2_schedule_offsets_outputs_without_creating_split_planner() {
        let facts = MediaV2BundleFacts {
            has_audio: true,
@@ -160,6 +170,8 @@ mod tests {
    }

    #[test]
    /// Keeps `legacy_bundled_event_timing_example_documents_quarantined_v1_behavior` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn legacy_bundled_event_timing_example_documents_quarantined_v1_behavior() {
        let video = VideoPacket {
            pts: 9_999_000,

@@ -23,6 +23,8 @@ fn retain_freshest_video_packet(
}

#[cfg(not(coverage))]
/// Keeps `enqueue_pending_upstream_video_packet` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
/// Inputs are the typed parameters; output is the return value or side effect.
fn enqueue_pending_upstream_video_packet(
    runtime: &UpstreamMediaRuntime,
    pending: &mut std::collections::VecDeque<VideoPacket>,

@@ -48,6 +48,8 @@ impl Handler {
        Ok(Response::new(ResetUsbReply { ok: true }))
    }

    /// Keeps `paste_text_reply` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn paste_text_reply(
        &self,
        req: Request<PasteRequest>,
@@ -66,6 +68,8 @@ impl Handler {
        }))
    }

    /// Keeps `reset_usb_reply` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn reset_usb_reply(&self) -> Result<Response<ResetUsbReply>, Status> {
        #[cfg(not(coverage))]
        info!("🔴 explicit ResetUsb() called");
@@ -132,6 +136,8 @@ impl Handler {
        ))
    }

    /// Keeps `set_capture_power_reply` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn set_capture_power_reply(
        &self,
        req: Request<SetCapturePowerRequest>,
@@ -165,6 +171,8 @@ impl Handler {
            .map_err(|e| Status::internal(format!("{e:#}")))
    }

    /// Keeps `get_upstream_sync_reply` explicit because it sits on relay RPC orchestration, where hardware failures must surface without stopping the server.
    /// Inputs are the typed parameters; output is the return value or side effect.
    async fn get_upstream_sync_reply(&self) -> Result<Response<UpstreamSyncState>, Status> {
        let snapshot = self.upstream_media_rt.snapshot();
        Ok(Response::new(UpstreamSyncState {
|
||||
@@ -187,799 +187,10 @@ struct OutputDelayProbeEventTimeline {
    server_feed_delta_ms: Option<f64>,
}

impl OutputDelayProbeTimeline {
    fn new(config: &ProbeConfig, camera: &CameraConfig, server_start_unix_ns: u128) -> Self {
        let event_count = config.event_count();
        let events = (0..event_count)
            .map(|event_id| {
                let slot = config.event_slot_by_id(event_id as usize);
                OutputDelayProbeEventTimeline {
                    event_id: event_id as usize,
                    code: slot.code,
                    planned_start_us: slot.planned_start_us,
                    planned_end_us: slot.planned_end_us,
                    video_seq: None,
                    audio_seq: None,
                    video_feed_monotonic_us: None,
                    audio_push_monotonic_us: None,
                    video_feed_unix_ns: None,
                    audio_push_unix_ns: None,
                    server_feed_delta_ms: None,
                }
            })
            .collect();
        Self {
            schema: "lesavka.output-delay-server-timeline.v1",
            origin: "theia-server-generated",
            media_path: "server generator -> UVC/UAC sinks",
            injection_scope: "server-final-output-handoff",
            server_pipeline_reference: "generated media PTS before intentional sync delay",
            sink_handoff_path: "video CameraRelay::feed; audio Voice::push",
            client_uplink_included: false,
            camera_width: camera.width,
            camera_height: camera.height,
            camera_fps: camera.fps,
            audio_sample_rate: AUDIO_SAMPLE_RATE,
            audio_channels: AUDIO_CHANNELS,
            audio_chunk_ms: AUDIO_CHUNK_MS,
            audio_delay_us: duration_us(config.audio_delay),
            video_delay_us: duration_us(config.video_delay),
            server_start_unix_ns,
            pulse_period_ms: config.pulse_period.as_millis() as u64,
            pulse_width_ms: config.pulse_width.as_millis() as u64,
            warmup_us: duration_us(config.warmup),
            duration_us: duration_us(config.duration),
            events,
        }
    }

    fn mark_audio(&mut self, slot: ProbeEventSlot, seq: u64, monotonic_us: u64, unix_ns: u128) {
        let Some(event) = self.events.get_mut(slot.event_id) else {
            return;
        };
        if event.audio_push_monotonic_us.is_none() {
            event.audio_seq = Some(seq);
            event.audio_push_monotonic_us = Some(monotonic_us);
            event.audio_push_unix_ns = Some(unix_ns);
            event.update_delta();
        }
    }

    fn mark_video(&mut self, slot: ProbeEventSlot, seq: u64, monotonic_us: u64, unix_ns: u128) {
        let Some(event) = self.events.get_mut(slot.event_id) else {
            return;
        };
        if event.video_feed_monotonic_us.is_none() {
            event.video_seq = Some(seq);
            event.video_feed_monotonic_us = Some(monotonic_us);
            event.video_feed_unix_ns = Some(unix_ns);
            event.update_delta();
        }
    }
}

impl OutputDelayProbeEventTimeline {
    fn update_delta(&mut self) {
        let (Some(audio_us), Some(video_us)) =
            (self.audio_push_monotonic_us, self.video_feed_monotonic_us)
        else {
            return;
        };
        self.server_feed_delta_ms = Some((audio_us as f64 - video_us as f64) / 1000.0);
    }
}

impl ProbeConfig {
    fn from_request(request: &OutputDelayProbeRequest) -> Result<Self> {
        let duration_seconds = non_zero_or_default(
            request.duration_seconds,
            DEFAULT_DURATION_SECONDS,
            "duration_seconds",
        )?;
        let warmup_seconds = if request.warmup_seconds == 0 {
            DEFAULT_WARMUP_SECONDS
        } else {
            request.warmup_seconds
        };
        let pulse_period_ms = non_zero_or_default(
            request.pulse_period_ms,
            DEFAULT_PULSE_PERIOD_MS,
            "pulse_period_ms",
        )?;
        let pulse_width_ms = non_zero_or_default(
            request.pulse_width_ms,
            DEFAULT_PULSE_WIDTH_MS,
            "pulse_width_ms",
        )?;
        if pulse_width_ms >= pulse_period_ms {
            bail!("pulse_width_ms must stay smaller than pulse_period_ms");
        }

        let event_width_codes = parse_event_width_codes(&request.event_width_codes)?;
        Ok(Self {
            duration: Duration::from_secs(u64::from(duration_seconds)),
            warmup: Duration::from_secs(u64::from(warmup_seconds)),
            pulse_period: Duration::from_millis(u64::from(pulse_period_ms)),
            pulse_width: Duration::from_millis(u64::from(pulse_width_ms)),
            event_width_codes,
            audio_delay: positive_delay(request.audio_delay_us, "audio_delay_us")?,
            video_delay: positive_delay(request.video_delay_us, "video_delay_us")?,
        })
    }

    fn event_code_at(&self, pts: Duration) -> Option<u32> {
        self.event_slot_at(pts).map(|slot| slot.code)
    }

    fn event_slot_at(&self, pts: Duration) -> Option<ProbeEventSlot> {
        if pts < self.warmup {
            return None;
        }
        let since_warmup = pts.saturating_sub(self.warmup);
        let period_ns = self.pulse_period.as_nanos().max(1);
        let pulse_index = (since_warmup.as_nanos() / period_ns) as usize;
        let pulse_offset_ns = since_warmup.as_nanos() % period_ns;
        let active_ns = self.pulse_width.as_nanos();
        (pulse_offset_ns < active_ns).then(|| self.event_slot_by_id(pulse_index))
    }

    fn event_slot_by_id(&self, event_id: usize) -> ProbeEventSlot {
        let code = self.event_width_codes[event_id % self.event_width_codes.len()];
        let planned_start = self
            .warmup
            .saturating_add(duration_mul(self.pulse_period, event_id as u64));
        let planned_end = planned_start.saturating_add(self.pulse_width);
        ProbeEventSlot {
            event_id,
            code,
            planned_start_us: duration_us(planned_start),
            planned_end_us: duration_us(planned_end),
        }
    }

    fn event_count(&self) -> u64 {
        if self.duration <= self.warmup {
            return 0;
        }
        let active = self.duration - self.warmup;
        let count = active.as_nanos() / self.pulse_period.as_nanos().max(1);
        count.try_into().unwrap_or(u64::MAX)
    }
}

fn non_zero_or_default(value: u32, default: u32, name: &str) -> Result<u32> {
    if value == 0 {
        return Ok(default);
    }
    if value == u32::MAX {
        bail!("{name} is too large");
    }
    Ok(value)
}

fn positive_delay(value_us: i64, name: &str) -> Result<Duration> {
    if value_us < 0 {
        bail!("{name} must be zero or positive for the direct output-delay probe");
    }
    Ok(Duration::from_micros(value_us as u64))
}

fn parse_event_width_codes(raw: &str) -> Result<Vec<u32>> {
    let trimmed = raw.trim();
    if trimmed.is_empty() {
        return Ok(DEFAULT_EVENT_WIDTH_CODES.to_vec());
    }
    let codes = trimmed
        .split(',')
        .filter_map(|part| {
            let part = part.trim();
            (!part.is_empty()).then_some(part)
        })
        .map(|part| {
            let code = part
                .parse::<u32>()
                .with_context(|| format!("parsing event width code `{part}`"))?;
            if !(1..=16).contains(&code) {
                bail!("event signature code {code} is unsupported; use values 1..16");
            }
            Ok(code)
        })
        .collect::<Result<Vec<_>>>()?;
    if codes.is_empty() {
        bail!("event_width_codes must contain at least one code");
    }
    Ok(codes)
}

/// Generate a server-local A/V signature and feed the physical UVC/UAC sinks.
///
/// Inputs: the active camera relay, active UAC voice sink, camera profile, and
/// probe request timing.
/// Outputs: a small count summary after the last generated packet.
/// Why: this probe intentionally bypasses client capture/uplink but uses the
/// same final server output handoff calls as received client media, so the
/// measured skew/freshness is the server final-handoff-to-RCT path.
#[cfg(not(coverage))]
pub async fn run_server_output_delay_probe(
    relay: Arc<CameraRelay>,
    sink: &mut Voice,
    camera: &CameraConfig,
    request: &OutputDelayProbeRequest,
) -> Result<OutputDelayProbeSummary> {
    let config = ProbeConfig::from_request(request)?;
    if config.event_count() == 0 {
        bail!("probe duration must extend beyond warmup");
    }

    let frame_step = Duration::from_nanos(1_000_000_000u64 / u64::from(camera.fps.max(1)));
    let audio_chunk = Duration::from_millis(AUDIO_CHUNK_MS);
    let samples_per_chunk = ((u64::from(AUDIO_SAMPLE_RATE) * AUDIO_CHUNK_MS) / 1_000) as usize;
    let frames = EncodedProbeFrames::new(camera, &config, frame_step)?;
    let server_start_unix_ns = unix_ns_now();
    let start = tokio::time::Instant::now();
    let mut timeline = OutputDelayProbeTimeline::new(&config, camera, server_start_unix_ns);
    let mut frame_index = 0u64;
    let mut audio_index = 0u64;
    let mut video_frames = 0u64;
    let mut audio_packets = 0u64;

    loop {
        let next_frame_pts = duration_mul(frame_step, frame_index);
        let next_audio_pts = duration_mul(audio_chunk, audio_index);
        let frame_active = next_frame_pts <= config.duration;
        let audio_active = next_audio_pts <= config.duration;
        if !frame_active && !audio_active {
            break;
        }
        let next_frame_due = if frame_active {
            next_frame_pts.saturating_add(config.video_delay)
        } else {
            Duration::MAX
        };
        let next_audio_due = if audio_active {
            next_audio_pts.saturating_add(config.audio_delay)
        } else {
            Duration::MAX
        };
        tokio::time::sleep_until(start + next_frame_due.min(next_audio_due)).await;

        if audio_active && next_audio_due <= next_frame_due {
            let pts_us = duration_us(next_audio_pts);
            let event_slot = config.event_slot_at(next_audio_pts);
            let data = render_audio_chunk(&config, next_audio_pts, samples_per_chunk);
            let seq = audio_index.saturating_add(1);
            sink.push(&AudioPacket {
                id: 0,
                pts: pts_us,
                data,
                seq,
                client_capture_pts_us: pts_us,
                client_send_pts_us: pts_us,
                client_queue_depth: 0,
                client_queue_age_ms: 0,
            });
            if let Some(slot) = event_slot {
                let monotonic_us = monotonic_us_since(start);
                timeline.mark_audio(
                    slot,
                    seq,
                    monotonic_us,
                    unix_ns_from_start(server_start_unix_ns, monotonic_us),
                );
            }
            audio_packets = audio_packets.saturating_add(1);
            audio_index = audio_index.saturating_add(1);
        }

        if frame_active && next_frame_due <= next_audio_due {
            let pts_us = duration_us(next_frame_pts);
            let event_slot = config.event_slot_at(next_frame_pts);
            let seq = frame_index.saturating_add(1);
            relay.feed(VideoPacket {
                id: 0,
                pts: pts_us,
                data: frames.packet_for_frame(frame_index)?.to_vec(),
                seq,
                effective_fps: camera.fps,
                client_capture_pts_us: pts_us,
                client_send_pts_us: pts_us,
                client_queue_depth: 0,
                client_queue_age_ms: 0,
                ..Default::default()
            });
            if let Some(slot) = event_slot {
                let monotonic_us = monotonic_us_since(start);
                timeline.mark_video(
                    slot,
                    seq,
                    monotonic_us,
                    unix_ns_from_start(server_start_unix_ns, monotonic_us),
                );
            }
            video_frames = video_frames.saturating_add(1);
            frame_index = frame_index.saturating_add(1);
        }
    }

    sink.finish();
    Ok(OutputDelayProbeSummary {
        video_frames,
        audio_packets,
        event_count: config.event_count(),
        timeline_json: serde_json::to_string(&timeline)
            .context("serializing output-delay server timeline")?,
    })
}

#[cfg(coverage)]
pub async fn run_server_output_delay_probe(
    _relay: Arc<CameraRelay>,
    _sink: &mut Voice,
    camera: &CameraConfig,
    request: &OutputDelayProbeRequest,
) -> Result<OutputDelayProbeSummary> {
    let config = ProbeConfig::from_request(request)?;
    Ok(OutputDelayProbeSummary {
        video_frames: 1,
        audio_packets: 1,
        event_count: config.event_count(),
        timeline_json: serde_json::to_string(&OutputDelayProbeTimeline::new(
            &config,
            camera,
            unix_ns_now(),
        ))
        .unwrap_or_else(|_| "{}".to_string()),
    })
}

#[cfg(not(coverage))]
struct EncodedProbeFrames {
    frames: Vec<Vec<u8>>,
}

#[cfg(not(coverage))]
impl EncodedProbeFrames {
    fn new(camera: &CameraConfig, config: &ProbeConfig, frame_step: Duration) -> Result<Self> {
        if !matches!(camera.codec, CameraCodec::Mjpeg) {
            bail!(
                "server-generated output-delay probe currently requires MJPEG UVC output, got {}",
                camera.codec.as_str()
            );
        }

        let mut encoder = MjpegFrameEncoder::new(camera)?;
        let mut frames = Vec::new();
        let mut frame_index = 0u64;
        loop {
            let pts = duration_mul(frame_step, frame_index);
            if pts > config.duration {
                break;
            }
            let code = config.event_code_at(pts);
            frames.push(encoder.encode_probe_frame(probe_color_for_code(code), frame_index)?);
            frame_index = frame_index.saturating_add(1);
        }
        Ok(Self { frames })
    }

    fn packet_for_frame(&self, frame_index: u64) -> Result<&[u8]> {
        self.frames
            .get(usize::try_from(frame_index).unwrap_or(usize::MAX))
            .map(Vec::as_slice)
            .with_context(|| format!("missing pre-encoded probe frame {frame_index}"))
    }
}

fn probe_color_for_code(code: Option<u32>) -> Rgb {
    code.and_then(|code| EVENT_COLORS.get(code.checked_sub(1)? as usize).copied())
        .unwrap_or(DARK_FRAME_RGB)
}

#[cfg(not(coverage))]
struct MjpegFrameEncoder {
    src: gst_app::AppSrc,
    sink: gst_app::AppSink,
    pipeline: gst::Pipeline,
    width: usize,
    height: usize,
    frame_step_us: u64,
}

#[cfg(not(coverage))]
impl MjpegFrameEncoder {
    fn new(camera: &CameraConfig) -> Result<Self> {
        gst::init().context("gst init")?;
        let width = camera.width as i32;
        let height = camera.height as i32;
        let fps = camera.fps.max(1) as i32;
        let raw_caps = gst::Caps::builder("video/x-raw")
            .field("format", "RGB")
            .field("width", width)
            .field("height", height)
            .field("framerate", gst::Fraction::new(fps, 1))
            .build();
        let jpeg_caps = gst::Caps::builder("image/jpeg")
            .field("parsed", true)
            .field("width", width)
            .field("height", height)
            .field("framerate", gst::Fraction::new(fps, 1))
            .build();
        let pipeline = gst::Pipeline::new();
        let src = gst::ElementFactory::make("appsrc")
            .name("output_delay_probe_src")
            .build()?
            .downcast::<gst_app::AppSrc>()
            .expect("appsrc");
        src.set_is_live(false);
        src.set_format(gst::Format::Time);
        src.set_property("do-timestamp", false);
        src.set_caps(Some(&raw_caps));
        let convert = gst::ElementFactory::make("videoconvert").build()?;
        let encoder = gst::ElementFactory::make("jpegenc")
            .property("quality", 95i32)
            .build()?;
        let capsfilter = gst::ElementFactory::make("capsfilter")
            .property("caps", &jpeg_caps)
            .build()?;
        let sink = gst::ElementFactory::make("appsink")
            .name("output_delay_probe_sink")
            .property("sync", false)
            .property("emit-signals", false)
            .property("max-buffers", 8u32)
            .build()?
            .downcast::<gst_app::AppSink>()
            .expect("appsink");
        pipeline.add_many([
            src.upcast_ref(),
            &convert,
            &encoder,
            &capsfilter,
            sink.upcast_ref(),
        ])?;
        gst::Element::link_many([
            src.upcast_ref(),
            &convert,
            &encoder,
            &capsfilter,
            sink.upcast_ref(),
        ])?;
        pipeline
            .set_state(gst::State::Playing)
            .context("starting output-delay probe MJPEG encoder")?;

        Ok(Self {
            src,
            sink,
            pipeline,
            width: camera.width as usize,
            height: camera.height as usize,
            frame_step_us: (1_000_000u64 / u64::from(camera.fps.max(1))).max(1),
        })
    }

    fn encode_probe_frame(&mut self, color: Rgb, sequence: u64) -> Result<Vec<u8>> {
        let pts_us = sequence.saturating_mul(self.frame_step_us);
        let frame = probe_rgb_frame(self.width, self.height, color, sequence);
        let mut buffer = gst::Buffer::from_slice(frame);
        if let Some(meta) = buffer.get_mut() {
            let pts = gst::ClockTime::from_useconds(pts_us);
            meta.set_pts(Some(pts));
            meta.set_dts(Some(pts));
            meta.set_duration(Some(gst::ClockTime::from_useconds(self.frame_step_us)));
        }
        self.src
            .push_buffer(buffer)
            .context("encoding output-delay probe frame")?;
        let sample = self
            .sink
            .pull_sample()
            .context("pulling encoded output-delay probe frame")?;
        let buffer = sample.buffer().context("encoded frame had no buffer")?;
        let map = buffer
            .map_readable()
            .context("mapping encoded output-delay probe frame")?;
        Ok(map.as_slice().to_vec())
    }
}

#[cfg(not(coverage))]
impl Drop for MjpegFrameEncoder {
    fn drop(&mut self) {
        let _ = self.src.end_of_stream();
        let _ = self.pipeline.set_state(gst::State::Null);
    }
}

#[cfg(not(coverage))]
fn probe_rgb_frame(width: usize, height: usize, color: Rgb, sequence: u64) -> Vec<u8> {
    let mut frame = vec![0u8; width.saturating_mul(height).saturating_mul(3)];
    for pixel in frame.chunks_exact_mut(3) {
        pixel[0] = color.r;
        pixel[1] = color.g;
        pixel[2] = color.b;
    }
    draw_frame_continuity_watermark(&mut frame, width, height, sequence);
    frame
}

fn draw_frame_continuity_watermark(frame: &mut [u8], width: usize, height: usize, sequence: u64) {
    if width < VIDEO_CONTINUITY_BLOCKS || height < 8 {
        return;
    }
    let stripe_height = (height / 18).clamp(8, 48);
    let stripe_top = height.saturating_sub(stripe_height);
    let block_width = (width / VIDEO_CONTINUITY_BLOCKS).max(1);
    let seq = (sequence & 0xffff) as u16;
    let parity = (seq.count_ones() & 1) != 0;
    for block in 0..VIDEO_CONTINUITY_BLOCKS {
        let white = match block {
            0 => true,
            1 => false,
            2..=17 => {
                let bit = VIDEO_CONTINUITY_DATA_BITS - 1 - (block - 2);
                ((seq >> bit) & 1) != 0
            }
            18 => parity,
            _ => !parity,
        };
        let value = if white { 255 } else { 0 };
        let x_start = block * block_width;
        let x_end = if block + 1 == VIDEO_CONTINUITY_BLOCKS {
            width
        } else {
            ((block + 1) * block_width).min(width)
        };
        for y in stripe_top..height {
            for x in x_start..x_end {
                let offset = (y * width + x) * 3;
                if let Some(pixel) = frame.get_mut(offset..offset + 3) {
                    pixel[0] = value;
                    pixel[1] = value;
                    pixel[2] = value;
                }
            }
        }
    }
}

fn render_audio_chunk(
    config: &ProbeConfig,
    chunk_pts: Duration,
    samples_per_chunk: usize,
) -> Vec<u8> {
    let sample_step = Duration::from_nanos(1_000_000_000u64 / u64::from(AUDIO_SAMPLE_RATE));
|
||||
let mut pcm =
|
||||
Vec::with_capacity(samples_per_chunk * AUDIO_CHANNELS * std::mem::size_of::<i16>());
|
||||
for sample_index in 0..samples_per_chunk {
|
||||
let sample_pts = chunk_pts + duration_mul(sample_step, sample_index as u64);
|
||||
let pilot_phase = TAU * AUDIO_PILOT_FREQUENCY_HZ * sample_pts.as_secs_f64();
|
||||
let pilot = pilot_phase.sin() * AUDIO_PILOT_AMPLITUDE;
|
||||
let event = config
|
||||
.event_code_at(sample_pts)
|
||||
.and_then(event_frequency_hz)
|
||||
.map(|frequency| {
|
||||
let phase = TAU * frequency * sample_pts.as_secs_f64();
|
||||
phase.sin() * AUDIO_AMPLITUDE
|
||||
})
|
||||
.unwrap_or(0.0);
|
||||
let sample = (pilot + event).clamp(f64::from(i16::MIN), f64::from(i16::MAX)) as i16;
|
||||
for _ in 0..AUDIO_CHANNELS {
|
||||
pcm.extend_from_slice(&sample.to_le_bytes());
|
||||
}
|
||||
}
|
||||
pcm
|
||||
}
|
||||
|
||||
fn event_frequency_hz(code: u32) -> Option<f64> {
|
||||
EVENT_FREQUENCIES_HZ
|
||||
.get(code.checked_sub(1)? as usize)
|
||||
.copied()
|
||||
}
|
||||
|
||||
fn duration_us(duration: Duration) -> u64 {
|
||||
duration.as_micros().min(u128::from(u64::MAX)) as u64
|
||||
}
|
||||
|
||||
fn unix_ns_now() -> u128 {
|
||||
SystemTime::now()
|
||||
.duration_since(UNIX_EPOCH)
|
||||
.unwrap_or_default()
|
||||
.as_nanos()
|
||||
}
|
||||
|
||||
fn unix_ns_from_start(server_start_unix_ns: u128, monotonic_us: u64) -> u128 {
|
||||
server_start_unix_ns.saturating_add(u128::from(monotonic_us).saturating_mul(1_000))
|
||||
}
|
||||
|
||||
fn monotonic_us_since(start: tokio::time::Instant) -> u64 {
|
||||
duration_us(tokio::time::Instant::now().saturating_duration_since(start))
|
||||
}
|
||||
|
||||
fn duration_mul(duration: Duration, count: u64) -> Duration {
|
||||
let nanos = duration
|
||||
.as_nanos()
|
||||
.saturating_mul(u128::from(count))
|
||||
.min(u128::from(u64::MAX));
|
||||
Duration::from_nanos(nanos as u64)
|
||||
}
|
||||
include!("output_delay_probe/timeline_config.rs");
|
||||
include!("output_delay_probe/probe_runtime.rs");
|
||||
include!("output_delay_probe/media_encoding.rs");
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::{
|
||||
DARK_FRAME_RGB, EVENT_COLORS, EVENT_FREQUENCIES_HZ, OutputDelayProbeTimeline, ProbeConfig,
|
||||
duration_us, probe_color_for_code, render_audio_chunk, unix_ns_from_start,
|
||||
};
|
||||
use crate::camera::{CameraCodec, CameraConfig, CameraOutput};
|
||||
use lesavka_common::lesavka::OutputDelayProbeRequest;
|
||||
use std::collections::BTreeSet;
|
||||
use std::time::Duration;
|
||||
|
||||
#[test]
|
||||
fn request_defaults_to_long_coded_server_probe() {
|
||||
let config =
|
||||
ProbeConfig::from_request(&OutputDelayProbeRequest::default()).expect("default config");
|
||||
|
||||
assert_eq!(config.duration, Duration::from_secs(20));
|
||||
assert_eq!(config.warmup, Duration::from_secs(4));
|
||||
assert_eq!(config.event_count(), 16);
|
||||
assert_eq!(config.event_code_at(Duration::from_secs(4)), Some(1));
|
||||
assert_eq!(config.event_code_at(Duration::from_secs(5)), Some(2));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn event_codes_reject_unsupported_signatures() {
|
||||
let request = OutputDelayProbeRequest {
|
||||
event_width_codes: "1,17".to_string(),
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
assert!(ProbeConfig::from_request(&request).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn default_probe_signatures_are_unique_for_all_coded_pairs() {
|
||||
assert_eq!(EVENT_COLORS.len(), 16);
|
||||
assert_eq!(EVENT_FREQUENCIES_HZ.len(), 16);
|
||||
|
||||
let colors = EVENT_COLORS
|
||||
.iter()
|
||||
.map(|color| (color.r, color.g, color.b))
|
||||
.collect::<BTreeSet<_>>();
|
||||
let frequencies = EVENT_FREQUENCIES_HZ
|
||||
.iter()
|
||||
.map(|frequency| (*frequency * 10.0).round() as i64)
|
||||
.collect::<BTreeSet<_>>();
|
||||
|
||||
assert_eq!(colors.len(), EVENT_COLORS.len());
|
||||
assert_eq!(frequencies.len(), EVENT_FREQUENCIES_HZ.len());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn audio_chunk_contains_tone_only_during_coded_pulse() {
|
||||
let config = ProbeConfig::from_request(&OutputDelayProbeRequest {
|
||||
duration_seconds: 6,
|
||||
warmup_seconds: 1,
|
||||
pulse_period_ms: 1_000,
|
||||
pulse_width_ms: 120,
|
||||
event_width_codes: "3".to_string(),
|
||||
audio_delay_us: 0,
|
||||
video_delay_us: 0,
|
||||
})
|
||||
.expect("config");
|
||||
|
||||
let active = render_audio_chunk(&config, Duration::from_secs(1), 480);
|
||||
let idle = render_audio_chunk(&config, Duration::from_millis(500), 480);
|
||||
|
||||
assert!(active.iter().any(|byte| *byte != 0));
|
||||
assert!(idle.iter().any(|byte| *byte != 0));
|
||||
assert!(rms_i16_le(&active) > rms_i16_le(&idle) * 10.0);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn generated_video_and_audio_share_the_same_event_schedule() {
|
||||
let config = ProbeConfig::from_request(&OutputDelayProbeRequest {
|
||||
duration_seconds: 6,
|
||||
warmup_seconds: 1,
|
||||
pulse_period_ms: 1_000,
|
||||
pulse_width_ms: 120,
|
||||
event_width_codes: "2".to_string(),
|
||||
audio_delay_us: 0,
|
||||
video_delay_us: 0,
|
||||
})
|
||||
.expect("config");
|
||||
let idle_pts = Duration::from_millis(500);
|
||||
let active_pts = Duration::from_secs(1);
|
||||
|
||||
let idle_color = probe_color_for_code(config.event_code_at(idle_pts));
|
||||
let active_color = probe_color_for_code(config.event_code_at(active_pts));
|
||||
let idle_audio = render_audio_chunk(&config, idle_pts, 480);
|
||||
let active_audio = render_audio_chunk(&config, active_pts, 480);
|
||||
|
||||
assert_eq!(
|
||||
(idle_color.r, idle_color.g, idle_color.b),
|
||||
(DARK_FRAME_RGB.r, DARK_FRAME_RGB.g, DARK_FRAME_RGB.b)
|
||||
);
|
||||
assert_ne!(
|
||||
(active_color.r, active_color.g, active_color.b),
|
||||
(DARK_FRAME_RGB.r, DARK_FRAME_RGB.g, DARK_FRAME_RGB.b)
|
||||
);
|
||||
assert!(rms_i16_le(&active_audio) > rms_i16_le(&idle_audio) * 10.0);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn timeline_exports_wall_clock_fields_for_freshness() {
|
||||
let config = ProbeConfig::from_request(&OutputDelayProbeRequest {
|
||||
duration_seconds: 6,
|
||||
warmup_seconds: 1,
|
||||
pulse_period_ms: 1_000,
|
||||
pulse_width_ms: 120,
|
||||
event_width_codes: "1".to_string(),
|
||||
audio_delay_us: 0,
|
||||
video_delay_us: 0,
|
||||
})
|
||||
.expect("config");
|
||||
let camera = CameraConfig {
|
||||
output: CameraOutput::Uvc,
|
||||
codec: CameraCodec::Mjpeg,
|
||||
width: 640,
|
||||
height: 480,
|
||||
fps: 20,
|
||||
hdmi: None,
|
||||
};
|
||||
let start_unix_ns = 1_700_000_000_000_000_000u128;
|
||||
let mut timeline = OutputDelayProbeTimeline::new(&config, &camera, start_unix_ns);
|
||||
let slot = config.event_slot_by_id(0);
|
||||
let video_us = duration_us(Duration::from_micros(slot.planned_start_us));
|
||||
let audio_us = video_us.saturating_add(500);
|
||||
|
||||
timeline.mark_video(
|
||||
slot,
|
||||
1,
|
||||
video_us,
|
||||
unix_ns_from_start(start_unix_ns, video_us),
|
||||
);
|
||||
timeline.mark_audio(
|
||||
slot,
|
||||
1,
|
||||
audio_us,
|
||||
unix_ns_from_start(start_unix_ns, audio_us),
|
||||
);
|
||||
|
||||
let json = serde_json::to_value(timeline).expect("timeline json");
|
||||
assert_eq!(
|
||||
json["injection_scope"].as_str(),
|
||||
Some("server-final-output-handoff")
|
||||
);
|
||||
assert_eq!(
|
||||
json["sink_handoff_path"].as_str(),
|
||||
Some("video CameraRelay::feed; audio Voice::push")
|
||||
);
|
||||
assert_eq!(json["client_uplink_included"].as_bool(), Some(false));
|
||||
assert_eq!(
|
||||
json["server_start_unix_ns"].as_u64(),
|
||||
Some(start_unix_ns as u64)
|
||||
);
|
||||
assert_eq!(
|
||||
json["events"][0]["video_feed_unix_ns"].as_u64(),
|
||||
Some(unix_ns_from_start(start_unix_ns, video_us) as u64)
|
||||
);
|
||||
assert_eq!(
|
||||
json["events"][0]["audio_push_unix_ns"].as_u64(),
|
||||
Some(unix_ns_from_start(start_unix_ns, audio_us) as u64)
|
||||
);
|
||||
assert_eq!(
|
||||
json["events"][0]["server_feed_delta_ms"].as_f64(),
|
||||
Some(0.5)
|
||||
);
|
||||
}
|
||||
|
||||
fn rms_i16_le(bytes: &[u8]) -> f64 {
|
||||
let samples = bytes
|
||||
.chunks_exact(2)
|
||||
.map(|chunk| f64::from(i16::from_le_bytes([chunk[0], chunk[1]])))
|
||||
.collect::<Vec<_>>();
|
||||
let mean_square =
|
||||
samples.iter().map(|sample| sample * sample).sum::<f64>() / samples.len().max(1) as f64;
|
||||
mean_square.sqrt()
|
||||
}
|
||||
}
|
||||
#[path = "output_delay_probe/tests/mod.rs"]
|
||||
mod tests;
|
||||
|
||||
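For orientation, the 20-block continuity stripe written by `draw_frame_continuity_watermark` above can be decoded back into its 16-bit sequence number. The sketch below is a hypothetical decoder, not part of this commit; it assumes the writer's layout with `VIDEO_CONTINUITY_BLOCKS = 20` and `VIDEO_CONTINUITY_DATA_BITS = 16` (white sync block, black sync block, 16 data bits MSB-first, a parity block, and its inverse):

```rust
// Assumed to match the writer's constants; not taken from the commit.
const BLOCKS: usize = 20; // VIDEO_CONTINUITY_BLOCKS
const DATA_BITS: usize = 16; // VIDEO_CONTINUITY_DATA_BITS

/// Decode the bottom-stripe watermark from a packed RGB frame.
/// Returns None when the sync or parity blocks do not validate.
fn decode_continuity_watermark(frame: &[u8], width: usize, height: usize) -> Option<u16> {
    if width < BLOCKS || height < 8 {
        return None;
    }
    // Mirror the writer's stripe geometry and sample one row inside it.
    let stripe_height = (height / 18).clamp(8, 48);
    let y = height - stripe_height / 2;
    let block_width = (width / BLOCKS).max(1);
    let mut bits = [false; BLOCKS];
    for (block, bit) in bits.iter_mut().enumerate() {
        let x = (block * block_width + block_width / 2).min(width - 1);
        let offset = (y * width + x) * 3;
        *bit = *frame.get(offset)? >= 128; // white block => 1
    }
    // Block 0 is always white, block 1 always black, and blocks 18/19 are
    // complementary parity blocks.
    if !bits[0] || bits[1] || bits[18] == bits[19] {
        return None;
    }
    let mut seq = 0u16;
    for bit in &bits[2..2 + DATA_BITS] {
        seq = (seq << 1) | u16::from(*bit);
    }
    let parity = (seq.count_ones() & 1) != 0;
    (bits[18] == parity).then_some(seq)
}
```

Sampling block centers rather than edges keeps the decoder tolerant of the last block extending to the full frame width, the same asymmetry the writer has.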
270  server/src/output_delay_probe/media_encoding.rs  Normal file
@@ -0,0 +1,270 @@
#[cfg(not(coverage))]
struct EncodedProbeFrames {
    frames: Vec<Vec<u8>>,
}

#[cfg(not(coverage))]
impl EncodedProbeFrames {
    fn new(camera: &CameraConfig, config: &ProbeConfig, frame_step: Duration) -> Result<Self> {
        if !matches!(camera.codec, CameraCodec::Mjpeg) {
            bail!(
                "server-generated output-delay probe currently requires MJPEG UVC output, got {}",
                camera.codec.as_str()
            );
        }

        let mut encoder = MjpegFrameEncoder::new(camera)?;
        let mut frames = Vec::new();
        let mut frame_index = 0u64;
        loop {
            let pts = duration_mul(frame_step, frame_index);
            if pts > config.duration {
                break;
            }
            let code = config.event_code_at(pts);
            frames.push(encoder.encode_probe_frame(probe_color_for_code(code), frame_index)?);
            frame_index = frame_index.saturating_add(1);
        }
        Ok(Self { frames })
    }

    fn packet_for_frame(&self, frame_index: u64) -> Result<&[u8]> {
        self.frames
            .get(usize::try_from(frame_index).unwrap_or(usize::MAX))
            .map(Vec::as_slice)
            .with_context(|| format!("missing pre-encoded probe frame {frame_index}"))
    }
}

fn probe_color_for_code(code: Option<u32>) -> Rgb {
    code.and_then(|code| EVENT_COLORS.get(code.checked_sub(1)? as usize).copied())
        .unwrap_or(DARK_FRAME_RGB)
}

#[cfg(not(coverage))]
struct MjpegFrameEncoder {
    src: gst_app::AppSrc,
    sink: gst_app::AppSink,
    pipeline: gst::Pipeline,
    width: usize,
    height: usize,
    frame_step_us: u64,
}

#[cfg(not(coverage))]
impl MjpegFrameEncoder {
    fn new(camera: &CameraConfig) -> Result<Self> {
        gst::init().context("gst init")?;
        let width = camera.width as i32;
        let height = camera.height as i32;
        let fps = camera.fps.max(1) as i32;
        let raw_caps = gst::Caps::builder("video/x-raw")
            .field("format", "RGB")
            .field("width", width)
            .field("height", height)
            .field("framerate", gst::Fraction::new(fps, 1))
            .build();
        let jpeg_caps = gst::Caps::builder("image/jpeg")
            .field("parsed", true)
            .field("width", width)
            .field("height", height)
            .field("framerate", gst::Fraction::new(fps, 1))
            .build();
        let pipeline = gst::Pipeline::new();
        let src = gst::ElementFactory::make("appsrc")
            .name("output_delay_probe_src")
            .build()?
            .downcast::<gst_app::AppSrc>()
            .expect("appsrc");
        src.set_is_live(false);
        src.set_format(gst::Format::Time);
        src.set_property("do-timestamp", false);
        src.set_caps(Some(&raw_caps));
        let convert = gst::ElementFactory::make("videoconvert").build()?;
        let encoder = gst::ElementFactory::make("jpegenc")
            .property("quality", 95i32)
            .build()?;
        let capsfilter = gst::ElementFactory::make("capsfilter")
            .property("caps", &jpeg_caps)
            .build()?;
        let sink = gst::ElementFactory::make("appsink")
            .name("output_delay_probe_sink")
            .property("sync", false)
            .property("emit-signals", false)
            .property("max-buffers", 8u32)
            .build()?
            .downcast::<gst_app::AppSink>()
            .expect("appsink");
        pipeline.add_many([
            src.upcast_ref(),
            &convert,
            &encoder,
            &capsfilter,
            sink.upcast_ref(),
        ])?;
        gst::Element::link_many([
            src.upcast_ref(),
            &convert,
            &encoder,
            &capsfilter,
            sink.upcast_ref(),
        ])?;
        pipeline
            .set_state(gst::State::Playing)
            .context("starting output-delay probe MJPEG encoder")?;

        Ok(Self {
            src,
            sink,
            pipeline,
            width: camera.width as usize,
            height: camera.height as usize,
            frame_step_us: (1_000_000u64 / u64::from(camera.fps.max(1))).max(1),
        })
    }

    fn encode_probe_frame(&mut self, color: Rgb, sequence: u64) -> Result<Vec<u8>> {
        let pts_us = sequence.saturating_mul(self.frame_step_us);
        let frame = probe_rgb_frame(self.width, self.height, color, sequence);
        let mut buffer = gst::Buffer::from_slice(frame);
        if let Some(meta) = buffer.get_mut() {
            let pts = gst::ClockTime::from_useconds(pts_us);
            meta.set_pts(Some(pts));
            meta.set_dts(Some(pts));
            meta.set_duration(Some(gst::ClockTime::from_useconds(self.frame_step_us)));
        }
        self.src
            .push_buffer(buffer)
            .context("encoding output-delay probe frame")?;
        let sample = self
            .sink
            .pull_sample()
            .context("pulling encoded output-delay probe frame")?;
        let buffer = sample.buffer().context("encoded frame had no buffer")?;
        let map = buffer
            .map_readable()
            .context("mapping encoded output-delay probe frame")?;
        Ok(map.as_slice().to_vec())
    }
}

#[cfg(not(coverage))]
impl Drop for MjpegFrameEncoder {
    fn drop(&mut self) {
        let _ = self.src.end_of_stream();
        let _ = self.pipeline.set_state(gst::State::Null);
    }
}

#[cfg(not(coverage))]
fn probe_rgb_frame(width: usize, height: usize, color: Rgb, sequence: u64) -> Vec<u8> {
    let mut frame = vec![0u8; width.saturating_mul(height).saturating_mul(3)];
    for pixel in frame.chunks_exact_mut(3) {
        pixel[0] = color.r;
        pixel[1] = color.g;
        pixel[2] = color.b;
    }
    draw_frame_continuity_watermark(&mut frame, width, height, sequence);
    frame
}

fn draw_frame_continuity_watermark(frame: &mut [u8], width: usize, height: usize, sequence: u64) {
    if width < VIDEO_CONTINUITY_BLOCKS || height < 8 {
        return;
    }
    let stripe_height = (height / 18).clamp(8, 48);
    let stripe_top = height.saturating_sub(stripe_height);
    let block_width = (width / VIDEO_CONTINUITY_BLOCKS).max(1);
    let seq = (sequence & 0xffff) as u16;
    let parity = (seq.count_ones() & 1) != 0;
    for block in 0..VIDEO_CONTINUITY_BLOCKS {
        let white = match block {
            0 => true,
            1 => false,
            2..=17 => {
                let bit = VIDEO_CONTINUITY_DATA_BITS - 1 - (block - 2);
                ((seq >> bit) & 1) != 0
            }
            18 => parity,
            _ => !parity,
        };
        let value = if white { 255 } else { 0 };
        let x_start = block * block_width;
        let x_end = if block + 1 == VIDEO_CONTINUITY_BLOCKS {
            width
        } else {
            ((block + 1) * block_width).min(width)
        };
        for y in stripe_top..height {
            for x in x_start..x_end {
                let offset = (y * width + x) * 3;
                if let Some(pixel) = frame.get_mut(offset..offset + 3) {
                    pixel[0] = value;
                    pixel[1] = value;
                    pixel[2] = value;
                }
            }
        }
    }
}

fn render_audio_chunk(
    config: &ProbeConfig,
    chunk_pts: Duration,
    samples_per_chunk: usize,
) -> Vec<u8> {
    let sample_step = Duration::from_nanos(1_000_000_000u64 / u64::from(AUDIO_SAMPLE_RATE));
    let mut pcm =
        Vec::with_capacity(samples_per_chunk * AUDIO_CHANNELS * std::mem::size_of::<i16>());
    for sample_index in 0..samples_per_chunk {
        let sample_pts = chunk_pts + duration_mul(sample_step, sample_index as u64);
        let pilot_phase = TAU * AUDIO_PILOT_FREQUENCY_HZ * sample_pts.as_secs_f64();
        let pilot = pilot_phase.sin() * AUDIO_PILOT_AMPLITUDE;
        let event = config
            .event_code_at(sample_pts)
            .and_then(event_frequency_hz)
            .map(|frequency| {
                let phase = TAU * frequency * sample_pts.as_secs_f64();
                phase.sin() * AUDIO_AMPLITUDE
            })
            .unwrap_or(0.0);
        let sample = (pilot + event).clamp(f64::from(i16::MIN), f64::from(i16::MAX)) as i16;
        for _ in 0..AUDIO_CHANNELS {
            pcm.extend_from_slice(&sample.to_le_bytes());
        }
    }
    pcm
}

fn event_frequency_hz(code: u32) -> Option<f64> {
    EVENT_FREQUENCIES_HZ
        .get(code.checked_sub(1)? as usize)
        .copied()
}

fn duration_us(duration: Duration) -> u64 {
    duration.as_micros().min(u128::from(u64::MAX)) as u64
}

fn unix_ns_now() -> u128 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_nanos()
}

fn unix_ns_from_start(server_start_unix_ns: u128, monotonic_us: u64) -> u128 {
    server_start_unix_ns.saturating_add(u128::from(monotonic_us).saturating_mul(1_000))
}

fn monotonic_us_since(start: tokio::time::Instant) -> u64 {
    duration_us(tokio::time::Instant::now().saturating_duration_since(start))
}

fn duration_mul(duration: Duration, count: u64) -> Duration {
    let nanos = duration
        .as_nanos()
        .saturating_mul(u128::from(count))
        .min(u128::from(u64::MAX));
    Duration::from_nanos(nanos as u64)
}
139  server/src/output_delay_probe/probe_runtime.rs  Normal file
@@ -0,0 +1,139 @@
/// Generate a server-local A/V signature and feed the physical UVC/UAC sinks.
///
/// Inputs: the active camera relay, active UAC voice sink, camera profile, and
/// probe request timing.
/// Outputs: a small count summary after the last generated packet.
/// Why: this probe intentionally bypasses client capture/uplink but uses the
/// same final server output handoff calls as received client media, so the
/// measured skew/freshness is the server final-handoff-to-RCT path.
#[cfg(not(coverage))]
pub async fn run_server_output_delay_probe(
    relay: Arc<CameraRelay>,
    sink: &mut Voice,
    camera: &CameraConfig,
    request: &OutputDelayProbeRequest,
) -> Result<OutputDelayProbeSummary> {
    let config = ProbeConfig::from_request(request)?;
    if config.event_count() == 0 {
        bail!("probe duration must extend beyond warmup");
    }

    let frame_step = Duration::from_nanos(1_000_000_000u64 / u64::from(camera.fps.max(1)));
    let audio_chunk = Duration::from_millis(AUDIO_CHUNK_MS);
    let samples_per_chunk = ((u64::from(AUDIO_SAMPLE_RATE) * AUDIO_CHUNK_MS) / 1_000) as usize;
    let frames = EncodedProbeFrames::new(camera, &config, frame_step)?;
    let server_start_unix_ns = unix_ns_now();
    let start = tokio::time::Instant::now();
    let mut timeline = OutputDelayProbeTimeline::new(&config, camera, server_start_unix_ns);
    let mut frame_index = 0u64;
    let mut audio_index = 0u64;
    let mut video_frames = 0u64;
    let mut audio_packets = 0u64;

    loop {
        let next_frame_pts = duration_mul(frame_step, frame_index);
        let next_audio_pts = duration_mul(audio_chunk, audio_index);
        let frame_active = next_frame_pts <= config.duration;
        let audio_active = next_audio_pts <= config.duration;
        if !frame_active && !audio_active {
            break;
        }
        let next_frame_due = if frame_active {
            next_frame_pts.saturating_add(config.video_delay)
        } else {
            Duration::MAX
        };
        let next_audio_due = if audio_active {
            next_audio_pts.saturating_add(config.audio_delay)
        } else {
            Duration::MAX
        };
        tokio::time::sleep_until(start + next_frame_due.min(next_audio_due)).await;

        if audio_active && next_audio_due <= next_frame_due {
            let pts_us = duration_us(next_audio_pts);
            let event_slot = config.event_slot_at(next_audio_pts);
            let data = render_audio_chunk(&config, next_audio_pts, samples_per_chunk);
            let seq = audio_index.saturating_add(1);
            sink.push(&AudioPacket {
                id: 0,
                pts: pts_us,
                data,
                seq,
                client_capture_pts_us: pts_us,
                client_send_pts_us: pts_us,
                client_queue_depth: 0,
                client_queue_age_ms: 0,
            });
            if let Some(slot) = event_slot {
                let monotonic_us = monotonic_us_since(start);
                timeline.mark_audio(
                    slot,
                    seq,
                    monotonic_us,
                    unix_ns_from_start(server_start_unix_ns, monotonic_us),
                );
            }
            audio_packets = audio_packets.saturating_add(1);
            audio_index = audio_index.saturating_add(1);
        }

        if frame_active && next_frame_due <= next_audio_due {
            let pts_us = duration_us(next_frame_pts);
            let event_slot = config.event_slot_at(next_frame_pts);
            let seq = frame_index.saturating_add(1);
            relay.feed(VideoPacket {
                id: 0,
                pts: pts_us,
                data: frames.packet_for_frame(frame_index)?.to_vec(),
                seq,
                effective_fps: camera.fps,
                client_capture_pts_us: pts_us,
                client_send_pts_us: pts_us,
                client_queue_depth: 0,
                client_queue_age_ms: 0,
                ..Default::default()
            });
            if let Some(slot) = event_slot {
                let monotonic_us = monotonic_us_since(start);
                timeline.mark_video(
                    slot,
                    seq,
                    monotonic_us,
                    unix_ns_from_start(server_start_unix_ns, monotonic_us),
                );
            }
            video_frames = video_frames.saturating_add(1);
            frame_index = frame_index.saturating_add(1);
        }
    }

    sink.finish();
    Ok(OutputDelayProbeSummary {
        video_frames,
        audio_packets,
        event_count: config.event_count(),
        timeline_json: serde_json::to_string(&timeline)
            .context("serializing output-delay server timeline")?,
    })
}

#[cfg(coverage)]
pub async fn run_server_output_delay_probe(
    _relay: Arc<CameraRelay>,
    _sink: &mut Voice,
    camera: &CameraConfig,
    request: &OutputDelayProbeRequest,
) -> Result<OutputDelayProbeSummary> {
    let config = ProbeConfig::from_request(request)?;
    Ok(OutputDelayProbeSummary {
        video_frames: 1,
        audio_packets: 1,
        event_count: config.event_count(),
        timeline_json: serde_json::to_string(&OutputDelayProbeTimeline::new(
            &config,
            camera,
            unix_ns_now(),
        ))
        .unwrap_or_else(|_| "{}".to_string()),
    })
}
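For orientation, the generation loop in `run_server_output_delay_probe` interleaves the two streams by due time: each stream keeps its own index, the earliest-due packet is emitted next, and audio wins ties. A minimal standalone sketch of that scheduling rule (not the server code; the `Packet` type and the step sizes here are illustrative only):

```rust
use std::time::Duration;

// Illustrative stand-ins for the server's audio/video packets.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum Packet {
    Audio(u64),
    Video(u64),
}

/// Emit packets in due-time order until both streams pass `total`,
/// mirroring the probe loop's "earliest due wins, audio on ties" rule.
fn emission_order(frame_step: Duration, audio_chunk: Duration, total: Duration) -> Vec<Packet> {
    let mut out = Vec::new();
    let (mut frame_index, mut audio_index) = (0u32, 0u32);
    loop {
        let next_frame = frame_step * frame_index;
        let next_audio = audio_chunk * audio_index;
        let frame_active = next_frame <= total;
        let audio_active = next_audio <= total;
        if !frame_active && !audio_active {
            break;
        }
        // Audio is emitted first when both packets are due at the same instant.
        if audio_active && (!frame_active || next_audio <= next_frame) {
            out.push(Packet::Audio(u64::from(audio_index)));
            audio_index += 1;
        } else {
            out.push(Packet::Video(u64::from(frame_index)));
            frame_index += 1;
        }
    }
    out
}
```

With a 50 ms frame step and 20 ms audio chunks over 100 ms, the order starts with an audio packet (tie at t=0), then the t=0 video frame, matching the server loop's tie-breaking.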
177  server/src/output_delay_probe/tests/mod.rs  Normal file
@@ -0,0 +1,177 @@
use super::{
    DARK_FRAME_RGB, EVENT_COLORS, EVENT_FREQUENCIES_HZ, OutputDelayProbeTimeline, ProbeConfig,
    duration_us, probe_color_for_code, render_audio_chunk, unix_ns_from_start,
};
use crate::camera::{CameraCodec, CameraConfig, CameraOutput};
use lesavka_common::lesavka::OutputDelayProbeRequest;
use std::collections::BTreeSet;
use std::time::Duration;

#[test]
fn request_defaults_to_long_coded_server_probe() {
    let config =
        ProbeConfig::from_request(&OutputDelayProbeRequest::default()).expect("default config");

    assert_eq!(config.duration, Duration::from_secs(20));
    assert_eq!(config.warmup, Duration::from_secs(4));
    assert_eq!(config.event_count(), 16);
    assert_eq!(config.event_code_at(Duration::from_secs(4)), Some(1));
    assert_eq!(config.event_code_at(Duration::from_secs(5)), Some(2));
}

#[test]
fn event_codes_reject_unsupported_signatures() {
    let request = OutputDelayProbeRequest {
        event_width_codes: "1,17".to_string(),
        ..Default::default()
    };

    assert!(ProbeConfig::from_request(&request).is_err());
}

#[test]
fn default_probe_signatures_are_unique_for_all_coded_pairs() {
    assert_eq!(EVENT_COLORS.len(), 16);
    assert_eq!(EVENT_FREQUENCIES_HZ.len(), 16);

    let colors = EVENT_COLORS
        .iter()
        .map(|color| (color.r, color.g, color.b))
        .collect::<BTreeSet<_>>();
    let frequencies = EVENT_FREQUENCIES_HZ
        .iter()
        .map(|frequency| (*frequency * 10.0).round() as i64)
        .collect::<BTreeSet<_>>();

    assert_eq!(colors.len(), EVENT_COLORS.len());
    assert_eq!(frequencies.len(), EVENT_FREQUENCIES_HZ.len());
}

#[test]
fn audio_chunk_contains_tone_only_during_coded_pulse() {
    let config = ProbeConfig::from_request(&OutputDelayProbeRequest {
        duration_seconds: 6,
        warmup_seconds: 1,
        pulse_period_ms: 1_000,
        pulse_width_ms: 120,
        event_width_codes: "3".to_string(),
        audio_delay_us: 0,
        video_delay_us: 0,
    })
    .expect("config");

    let active = render_audio_chunk(&config, Duration::from_secs(1), 480);
    let idle = render_audio_chunk(&config, Duration::from_millis(500), 480);

    assert!(active.iter().any(|byte| *byte != 0));
    assert!(idle.iter().any(|byte| *byte != 0));
    assert!(rms_i16_le(&active) > rms_i16_le(&idle) * 10.0);
}

#[test]
fn generated_video_and_audio_share_the_same_event_schedule() {
    let config = ProbeConfig::from_request(&OutputDelayProbeRequest {
        duration_seconds: 6,
        warmup_seconds: 1,
        pulse_period_ms: 1_000,
        pulse_width_ms: 120,
        event_width_codes: "2".to_string(),
        audio_delay_us: 0,
        video_delay_us: 0,
    })
    .expect("config");
    let idle_pts = Duration::from_millis(500);
    let active_pts = Duration::from_secs(1);

    let idle_color = probe_color_for_code(config.event_code_at(idle_pts));
    let active_color = probe_color_for_code(config.event_code_at(active_pts));
    let idle_audio = render_audio_chunk(&config, idle_pts, 480);
    let active_audio = render_audio_chunk(&config, active_pts, 480);

    assert_eq!(
        (idle_color.r, idle_color.g, idle_color.b),
        (DARK_FRAME_RGB.r, DARK_FRAME_RGB.g, DARK_FRAME_RGB.b)
    );
    assert_ne!(
        (active_color.r, active_color.g, active_color.b),
        (DARK_FRAME_RGB.r, DARK_FRAME_RGB.g, DARK_FRAME_RGB.b)
    );
    assert!(rms_i16_le(&active_audio) > rms_i16_le(&idle_audio) * 10.0);
}

#[test]
fn timeline_exports_wall_clock_fields_for_freshness() {
    let config = ProbeConfig::from_request(&OutputDelayProbeRequest {
        duration_seconds: 6,
        warmup_seconds: 1,
        pulse_period_ms: 1_000,
        pulse_width_ms: 120,
        event_width_codes: "1".to_string(),
        audio_delay_us: 0,
        video_delay_us: 0,
    })
    .expect("config");
    let camera = CameraConfig {
        output: CameraOutput::Uvc,
        codec: CameraCodec::Mjpeg,
        width: 640,
        height: 480,
        fps: 20,
        hdmi: None,
    };
    let start_unix_ns = 1_700_000_000_000_000_000u128;
    let mut timeline = OutputDelayProbeTimeline::new(&config, &camera, start_unix_ns);
    let slot = config.event_slot_by_id(0);
    let video_us = duration_us(Duration::from_micros(slot.planned_start_us));
    let audio_us = video_us.saturating_add(500);

    timeline.mark_video(
        slot,
        1,
        video_us,
        unix_ns_from_start(start_unix_ns, video_us),
    );
    timeline.mark_audio(
        slot,
        1,
        audio_us,
        unix_ns_from_start(start_unix_ns, audio_us),
    );

    let json = serde_json::to_value(timeline).expect("timeline json");
    assert_eq!(
        json["injection_scope"].as_str(),
        Some("server-final-output-handoff")
    );
    assert_eq!(
        json["sink_handoff_path"].as_str(),
        Some("video CameraRelay::feed; audio Voice::push")
    );
    assert_eq!(json["client_uplink_included"].as_bool(), Some(false));
    assert_eq!(
        json["server_start_unix_ns"].as_u64(),
        Some(start_unix_ns as u64)
    );
    assert_eq!(
        json["events"][0]["video_feed_unix_ns"].as_u64(),
        Some(unix_ns_from_start(start_unix_ns, video_us) as u64)
    );
    assert_eq!(
        json["events"][0]["audio_push_unix_ns"].as_u64(),
        Some(unix_ns_from_start(start_unix_ns, audio_us) as u64)
    );
    assert_eq!(
        json["events"][0]["server_feed_delta_ms"].as_f64(),
        Some(0.5)
    );
}

fn rms_i16_le(bytes: &[u8]) -> f64 {
    let samples = bytes
        .chunks_exact(2)
        .map(|chunk| f64::from(i16::from_le_bytes([chunk[0], chunk[1]])))
        .collect::<Vec<_>>();
    let mean_square =
        samples.iter().map(|sample| sample * sample).sum::<f64>() / samples.len().max(1) as f64;
    mean_square.sqrt()
}
203  server/src/output_delay_probe/timeline_config.rs  Normal file
@@ -0,0 +1,203 @@
impl OutputDelayProbeTimeline {
    fn new(config: &ProbeConfig, camera: &CameraConfig, server_start_unix_ns: u128) -> Self {
        let event_count = config.event_count();
        let events = (0..event_count)
            .map(|event_id| {
                let slot = config.event_slot_by_id(event_id as usize);
                OutputDelayProbeEventTimeline {
                    event_id: event_id as usize,
                    code: slot.code,
                    planned_start_us: slot.planned_start_us,
                    planned_end_us: slot.planned_end_us,
                    video_seq: None,
                    audio_seq: None,
                    video_feed_monotonic_us: None,
                    audio_push_monotonic_us: None,
                    video_feed_unix_ns: None,
                    audio_push_unix_ns: None,
                    server_feed_delta_ms: None,
                }
            })
            .collect();
        Self {
            schema: "lesavka.output-delay-server-timeline.v1",
            origin: "theia-server-generated",
            media_path: "server generator -> UVC/UAC sinks",
            injection_scope: "server-final-output-handoff",
            server_pipeline_reference: "generated media PTS before intentional sync delay",
            sink_handoff_path: "video CameraRelay::feed; audio Voice::push",
            client_uplink_included: false,
            camera_width: camera.width,
            camera_height: camera.height,
            camera_fps: camera.fps,
            audio_sample_rate: AUDIO_SAMPLE_RATE,
            audio_channels: AUDIO_CHANNELS,
            audio_chunk_ms: AUDIO_CHUNK_MS,
            audio_delay_us: duration_us(config.audio_delay),
            video_delay_us: duration_us(config.video_delay),
            server_start_unix_ns,
            pulse_period_ms: config.pulse_period.as_millis() as u64,
            pulse_width_ms: config.pulse_width.as_millis() as u64,
            warmup_us: duration_us(config.warmup),
            duration_us: duration_us(config.duration),
            events,
        }
    }

    fn mark_audio(&mut self, slot: ProbeEventSlot, seq: u64, monotonic_us: u64, unix_ns: u128) {
        let Some(event) = self.events.get_mut(slot.event_id) else {
            return;
        };
        if event.audio_push_monotonic_us.is_none() {
            event.audio_seq = Some(seq);
            event.audio_push_monotonic_us = Some(monotonic_us);
            event.audio_push_unix_ns = Some(unix_ns);
            event.update_delta();
        }
    }

    fn mark_video(&mut self, slot: ProbeEventSlot, seq: u64, monotonic_us: u64, unix_ns: u128) {
        let Some(event) = self.events.get_mut(slot.event_id) else {
            return;
        };
        if event.video_feed_monotonic_us.is_none() {
            event.video_seq = Some(seq);
            event.video_feed_monotonic_us = Some(monotonic_us);
            event.video_feed_unix_ns = Some(unix_ns);
            event.update_delta();
        }
    }
}

impl OutputDelayProbeEventTimeline {
    fn update_delta(&mut self) {
        let (Some(audio_us), Some(video_us)) =
            (self.audio_push_monotonic_us, self.video_feed_monotonic_us)
        else {
            return;
        };
        self.server_feed_delta_ms = Some((audio_us as f64 - video_us as f64) / 1000.0);
    }
}

impl ProbeConfig {
    fn from_request(request: &OutputDelayProbeRequest) -> Result<Self> {
        let duration_seconds = non_zero_or_default(
            request.duration_seconds,
            DEFAULT_DURATION_SECONDS,
            "duration_seconds",
        )?;
        let warmup_seconds = if request.warmup_seconds == 0 {
            DEFAULT_WARMUP_SECONDS
        } else {
            request.warmup_seconds
        };
        let pulse_period_ms = non_zero_or_default(
            request.pulse_period_ms,
            DEFAULT_PULSE_PERIOD_MS,
            "pulse_period_ms",
        )?;
        let pulse_width_ms = non_zero_or_default(
            request.pulse_width_ms,
            DEFAULT_PULSE_WIDTH_MS,
            "pulse_width_ms",
        )?;
        if pulse_width_ms >= pulse_period_ms {
            bail!("pulse_width_ms must stay smaller than pulse_period_ms");
        }

        let event_width_codes = parse_event_width_codes(&request.event_width_codes)?;
        Ok(Self {
            duration: Duration::from_secs(u64::from(duration_seconds)),
            warmup: Duration::from_secs(u64::from(warmup_seconds)),
            pulse_period: Duration::from_millis(u64::from(pulse_period_ms)),
            pulse_width: Duration::from_millis(u64::from(pulse_width_ms)),
            event_width_codes,
            audio_delay: positive_delay(request.audio_delay_us, "audio_delay_us")?,
            video_delay: positive_delay(request.video_delay_us, "video_delay_us")?,
        })
    }

    fn event_code_at(&self, pts: Duration) -> Option<u32> {
        self.event_slot_at(pts).map(|slot| slot.code)
    }

    fn event_slot_at(&self, pts: Duration) -> Option<ProbeEventSlot> {
        if pts < self.warmup {
            return None;
        }
        let since_warmup = pts.saturating_sub(self.warmup);
        let period_ns = self.pulse_period.as_nanos().max(1);
        let pulse_index = (since_warmup.as_nanos() / period_ns) as usize;
        let pulse_offset_ns = since_warmup.as_nanos() % period_ns;
        let active_ns = self.pulse_width.as_nanos();
        (pulse_offset_ns < active_ns).then(|| self.event_slot_by_id(pulse_index))
    }

    fn event_slot_by_id(&self, event_id: usize) -> ProbeEventSlot {
        let code = self.event_width_codes[event_id % self.event_width_codes.len()];
        let planned_start = self
            .warmup
            .saturating_add(duration_mul(self.pulse_period, event_id as u64));
        let planned_end = planned_start.saturating_add(self.pulse_width);
        ProbeEventSlot {
            event_id,
            code,
            planned_start_us: duration_us(planned_start),
            planned_end_us: duration_us(planned_end),
        }
    }

    fn event_count(&self) -> u64 {
        if self.duration <= self.warmup {
            return 0;
        }
        let active = self.duration - self.warmup;
        let count = active.as_nanos() / self.pulse_period.as_nanos().max(1);
        count.try_into().unwrap_or(u64::MAX)
    }
}

fn non_zero_or_default(value: u32, default: u32, name: &str) -> Result<u32> {
    if value == 0 {
        return Ok(default);
    }
    if value == u32::MAX {
        bail!("{name} is too large");
    }
    Ok(value)
}

fn positive_delay(value_us: i64, name: &str) -> Result<Duration> {
    if value_us < 0 {
        bail!("{name} must be zero or positive for the direct output-delay probe");
    }
    Ok(Duration::from_micros(value_us as u64))
}

fn parse_event_width_codes(raw: &str) -> Result<Vec<u32>> {
    let trimmed = raw.trim();
    if trimmed.is_empty() {
        return Ok(DEFAULT_EVENT_WIDTH_CODES.to_vec());
    }
    let codes = trimmed
        .split(',')
        .filter_map(|part| {
            let part = part.trim();
            (!part.is_empty()).then_some(part)
        })
        .map(|part| {
            let code = part
                .parse::<u32>()
                .with_context(|| format!("parsing event width code `{part}`"))?;
            if !(1..=16).contains(&code) {
                bail!("event signature code {code} is unsupported; use values 1..16");
            }
            Ok(code)
        })
        .collect::<Result<Vec<_>>>()?;
    if codes.is_empty() {
        bail!("event_width_codes must contain at least one code");
    }
    Ok(codes)
}
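The pulse windowing in `event_slot_at` reduces to integer division and modulo on nanoseconds since warmup. A minimal standalone sketch of just that arithmetic (the helper name `pulse_at` and the warmup/period/width values here are illustrative, not the probe defaults):

```rust
use std::time::Duration;

// Returns Some(pulse_index) when `pts` falls inside the active width of a pulse,
// mirroring the warmup/period/width arithmetic in `event_slot_at`.
fn pulse_at(pts: Duration, warmup: Duration, period: Duration, width: Duration) -> Option<usize> {
    if pts < warmup {
        return None; // frames during warmup never map to an event slot
    }
    let since_warmup = pts.saturating_sub(warmup);
    let period_ns = period.as_nanos().max(1);
    let index = (since_warmup.as_nanos() / period_ns) as usize;
    let offset_ns = since_warmup.as_nanos() % period_ns;
    (offset_ns < width.as_nanos()).then_some(index)
}

fn main() {
    let (warmup, period, width) = (
        Duration::from_secs(1),
        Duration::from_millis(500),
        Duration::from_millis(100),
    );
    // 1.050 s: 50 ms into pulse 0, inside the 100 ms active window.
    assert_eq!(pulse_at(Duration::from_millis(1050), warmup, period, width), Some(0));
    // 1.250 s: 250 ms into pulse 0, past the active window.
    assert_eq!(pulse_at(Duration::from_millis(1250), warmup, period, width), None);
    // 2.020 s: 20 ms into pulse 2.
    assert_eq!(pulse_at(Duration::from_millis(2020), warmup, period, width), Some(2));
}
```

Because the index is a pure function of the PTS, the audio and video generators can tag the same event without sharing state.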
@@ -110,6 +110,8 @@ struct ScalarWindow {
}

impl ScalarWindow {
    /// Keeps `push` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn push(&mut self, value: f64) {
        if self.values.len() >= TIMING_WINDOW_CAPACITY {
            self.values.pop_front();
@@ -140,6 +142,8 @@ enum UpstreamSyncPhase {
}

impl UpstreamSyncPhase {
    /// Keeps `as_str` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn as_str(self) -> &'static str {
        match self {
            Self::Acquiring => "acquiring",
@@ -200,390 +204,9 @@ pub struct UpstreamMediaRuntime {
    state: Mutex<RuntimeState>,
}

impl UpstreamMediaRuntime {
    #[must_use]
    pub fn new() -> Self {
        Self {
            next_session_id: AtomicU64::new(0),
            next_camera_generation: AtomicU64::new(0),
            next_microphone_generation: AtomicU64::new(0),
            microphone_sink_gate: Arc::new(Semaphore::new(1)),
            camera_playout_offset_us: AtomicI64::new(playout_offset_us(UpstreamMediaKind::Camera)),
            microphone_playout_offset_us: AtomicI64::new(playout_offset_us(
                UpstreamMediaKind::Microphone,
            )),
            state: Mutex::new(RuntimeState::default()),
        }
    }

    pub fn set_playout_offsets(&self, camera_offset_us: i64, microphone_offset_us: i64) {
        self.camera_playout_offset_us
            .store(camera_offset_us, Ordering::Relaxed);
        self.microphone_playout_offset_us
            .store(microphone_offset_us, Ordering::Relaxed);
    }

    #[must_use]
    pub fn playout_offsets(&self) -> (i64, i64) {
        (
            self.camera_playout_offset_us.load(Ordering::Relaxed),
            self.microphone_playout_offset_us.load(Ordering::Relaxed),
        )
    }

    #[must_use]
    pub fn activate_camera(&self) -> UpstreamStreamLease {
        self.activate(UpstreamMediaKind::Camera)
    }

    #[must_use]
    pub fn activate_microphone(&self) -> UpstreamStreamLease {
        self.activate(UpstreamMediaKind::Microphone)
    }

    pub async fn reserve_microphone_sink(&self, generation: u64) -> Option<OwnedSemaphorePermit> {
        let permit = self
            .microphone_sink_gate
            .clone()
            .acquire_owned()
            .await
            .ok()?;
        self.is_microphone_active(generation).then_some(permit)
    }

    #[must_use]
    pub fn is_camera_active(&self, generation: u64) -> bool {
        self.is_active(UpstreamMediaKind::Camera, generation)
    }

    #[must_use]
    pub fn is_microphone_active(&self, generation: u64) -> bool {
        self.is_active(UpstreamMediaKind::Microphone, generation)
    }

    pub fn close_camera(&self, generation: u64) {
        self.close(UpstreamMediaKind::Camera, generation);
    }

    pub fn close_microphone(&self, generation: u64) {
        self.close(UpstreamMediaKind::Microphone, generation);
    }

    pub fn soft_recover_microphone(&self) {
        let lease = self.activate_microphone();
        self.close_microphone(lease.generation);
    }

    pub fn record_client_timing(&self, kind: UpstreamMediaKind, timing: UpstreamClientTiming) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let sample = TimingSample {
            capture_pts_us: timing.capture_pts_us,
            send_pts_us: timing.send_pts_us,
            queue_age_ms: timing.queue_age_ms,
            received_at: Instant::now(),
        };
        match kind {
            UpstreamMediaKind::Camera => {
                state.latest_camera_timing = Some(sample);
                state.latest_camera_remote_pts_us = Some(timing.capture_pts_us);
                state
                    .camera_client_queue_age_window_ms
                    .push(f64::from(timing.queue_age_ms));
            }
            UpstreamMediaKind::Microphone => {
                state.latest_microphone_timing = Some(sample);
                state.latest_microphone_remote_pts_us = Some(timing.capture_pts_us);
                state
                    .microphone_client_queue_age_window_ms
                    .push(f64::from(timing.queue_age_ms));
            }
        }
        record_timing_pair(&mut state);
    }

    pub fn mark_audio_presented(&self, local_pts_us: u64, due_at: Instant) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        state.last_audio_presented_pts_us = Some(local_pts_us);
        record_presentation(&mut state, UpstreamMediaKind::Microphone, due_at);
        state.phase = UpstreamSyncPhase::Live;
        state.last_reason = "v2 audio handed to UAC".to_string();
    }

    pub fn mark_video_presented(&self, local_pts_us: u64, due_at: Instant) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        state.last_video_presented_pts_us = Some(local_pts_us);
        record_presentation(&mut state, UpstreamMediaKind::Camera, due_at);
        state.phase = UpstreamSyncPhase::Live;
        state.last_reason = "v2 video handed to UVC".to_string();
    }

    pub fn record_video_freeze(&self, reason: impl Into<String>) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        state.video_freezes = state.video_freezes.saturating_add(1);
        state.phase = UpstreamSyncPhase::Healing;
        state.last_reason = reason.into();
    }

    #[must_use]
    pub fn snapshot(&self) -> UpstreamPlannerSnapshot {
        let state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let now = Instant::now();
        UpstreamPlannerSnapshot {
            session_id: state.session_id,
            phase: state.phase.as_str(),
            latest_camera_remote_pts_us: state.latest_camera_remote_pts_us,
            latest_microphone_remote_pts_us: state.latest_microphone_remote_pts_us,
            last_video_presented_pts_us: state.last_video_presented_pts_us,
            last_audio_presented_pts_us: state.last_audio_presented_pts_us,
            live_lag_ms: live_lag_ms(&state),
            planner_skew_ms: planner_skew_ms(&state),
            stale_audio_drops: state.stale_audio_drops,
            stale_video_drops: state.stale_video_drops,
            skew_video_drops: state.skew_video_drops,
            freshness_reanchors: state.freshness_reanchors,
            startup_timeouts: state.startup_timeouts,
            video_freezes: state.video_freezes,
            last_reason: state.last_reason.clone(),
            client_capture_skew_ms: state.latest_paired_client_capture_skew_ms,
            client_send_skew_ms: state.latest_paired_client_send_skew_ms,
            server_receive_skew_ms: state.latest_paired_server_receive_skew_ms,
            camera_client_queue_age_ms: state
                .latest_camera_timing
                .map(|sample| f64::from(sample.queue_age_ms)),
            microphone_client_queue_age_ms: state
                .latest_microphone_timing
                .map(|sample| f64::from(sample.queue_age_ms)),
            camera_server_receive_age_ms: state
                .latest_camera_timing
                .map(|sample| age_ms(now, sample.received_at)),
            microphone_server_receive_age_ms: state
                .latest_microphone_timing
                .map(|sample| age_ms(now, sample.received_at)),
            client_capture_abs_skew_p95_ms: state.client_capture_skew_window_ms.p95_abs(),
            client_send_abs_skew_p95_ms: state.client_send_skew_window_ms.p95_abs(),
            server_receive_abs_skew_p95_ms: state.server_receive_skew_window_ms.p95_abs(),
            camera_client_queue_age_p95_ms: state.camera_client_queue_age_window_ms.p95(),
            microphone_client_queue_age_p95_ms: state.microphone_client_queue_age_window_ms.p95(),
            sink_handoff_skew_ms: latest_sink_handoff_skew_ms(&state),
            sink_handoff_abs_skew_p95_ms: state.sink_handoff_skew_window_ms.p95_abs(),
            camera_sink_late_ms: state.latest_camera_presentation.map(presentation_late_ms),
            microphone_sink_late_ms: state
                .latest_microphone_presentation
                .map(presentation_late_ms),
            camera_sink_late_p95_ms: state.camera_sink_late_window_ms.p95(),
            microphone_sink_late_p95_ms: state.microphone_sink_late_window_ms.p95(),
            client_timing_window_samples: state.client_capture_skew_window_ms.len() as u64,
            sink_handoff_window_samples: state.sink_handoff_skew_window_ms.len() as u64,
        }
    }

    #[must_use]
    pub fn map_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> Option<u64> {
        match self.plan_video_pts(remote_pts_us, frame_step_us) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    #[must_use]
    pub fn map_audio_pts(&self, remote_pts_us: u64) -> Option<u64> {
        match self.plan_audio_pts(remote_pts_us) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    #[must_use]
    pub fn plan_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> UpstreamPlanDecision {
        self.plan_legacy_pts(
            UpstreamMediaKind::Camera,
            remote_pts_us,
            frame_step_us.max(1),
        )
    }

    #[must_use]
    pub fn plan_audio_pts(&self, remote_pts_us: u64) -> UpstreamPlanDecision {
        self.plan_legacy_pts(UpstreamMediaKind::Microphone, remote_pts_us, 1)
    }

    #[must_use]
    pub fn plan_bundled_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
        bundle_base_remote_pts_us: u64,
        bundle_epoch: Instant,
    ) -> UpstreamPlanDecision {
        self.plan_rebased_pts(
            kind,
            remote_pts_us,
            min_step_us.max(1),
            Some(bundle_base_remote_pts_us),
            Some(bundle_epoch),
        )
    }

    pub async fn wait_for_audio_master(&self, _video_local_pts_us: u64, _due_at: Instant) -> bool {
        true
    }

    fn activate(&self, kind: UpstreamMediaKind) -> UpstreamStreamLease {
        let generation = match kind {
            UpstreamMediaKind::Camera => {
                self.next_camera_generation.fetch_add(1, Ordering::SeqCst) + 1
            }
            UpstreamMediaKind::Microphone => {
                self.next_microphone_generation
                    .fetch_add(1, Ordering::SeqCst)
                    + 1
            }
        };
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        if state.active_camera_generation.is_none() && state.active_microphone_generation.is_none()
        {
            state.session_id = self.next_session_id.fetch_add(1, Ordering::SeqCst) + 1;
            reset_session_state(&mut state);
            state.session_started_at = Some(Instant::now());
            state.phase = UpstreamSyncPhase::Acquiring;
            state.last_reason = "v2 upstream session acquiring media".to_string();
        }
        match kind {
            UpstreamMediaKind::Camera => state.active_camera_generation = Some(generation),
            UpstreamMediaKind::Microphone => state.active_microphone_generation = Some(generation),
        }
        UpstreamStreamLease {
            session_id: state.session_id,
            generation,
        }
    }

    fn is_active(&self, kind: UpstreamMediaKind, generation: u64) -> bool {
        let state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        match kind {
            UpstreamMediaKind::Camera => state.active_camera_generation == Some(generation),
            UpstreamMediaKind::Microphone => state.active_microphone_generation == Some(generation),
        }
    }

    fn close(&self, kind: UpstreamMediaKind, generation: u64) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        match kind {
            UpstreamMediaKind::Camera if state.active_camera_generation == Some(generation) => {
                state.active_camera_generation = None
            }
            UpstreamMediaKind::Microphone
                if state.active_microphone_generation == Some(generation) =>
            {
                state.active_microphone_generation = None
            }
            _ => return,
        }
        if state.active_camera_generation.is_none() && state.active_microphone_generation.is_none()
        {
            reset_session_state(&mut state);
        }
    }

    fn plan_legacy_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
    ) -> UpstreamPlanDecision {
        self.plan_rebased_pts(kind, remote_pts_us, min_step_us.max(1), None, None)
    }

    fn plan_rebased_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
        explicit_base: Option<u64>,
        explicit_epoch: Option<Instant>,
    ) -> UpstreamPlanDecision {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        match kind {
            UpstreamMediaKind::Camera => state.latest_camera_remote_pts_us = Some(remote_pts_us),
            UpstreamMediaKind::Microphone => {
                state.latest_microphone_remote_pts_us = Some(remote_pts_us)
            }
        }
        let base = match explicit_base {
            Some(base) => *state.base_remote_pts_us.get_or_insert(base),
            None => *state.base_remote_pts_us.get_or_insert(remote_pts_us),
        };
        let epoch = match explicit_epoch {
            Some(epoch) => *state.playout_epoch.get_or_insert(epoch),
            None => *state
                .playout_epoch
                .get_or_insert(Instant::now() + upstream_playout_delay()),
        };
        let mut local_pts_us = remote_pts_us.saturating_sub(base);
        let last_slot = match kind {
            UpstreamMediaKind::Camera => &mut state.last_video_local_pts_us,
            UpstreamMediaKind::Microphone => &mut state.last_audio_local_pts_us,
        };
        if let Some(last_pts_us) = *last_slot
            && local_pts_us <= last_pts_us
        {
            local_pts_us = last_pts_us.saturating_add(min_step_us.max(1));
        }
        *last_slot = Some(local_pts_us);
        state.phase = UpstreamSyncPhase::Syncing;
        state.last_reason = "v2 legacy packet mapped without cross-stream planner".to_string();
        let due_at = apply_offset(
            epoch + Duration::from_micros(local_pts_us),
            self.playout_offset_us(kind),
        );
        let late_by = Instant::now()
            .checked_duration_since(due_at)
            .unwrap_or_default();
        UpstreamPlanDecision::Play(PlannedUpstreamPacket {
            local_pts_us,
            due_at,
            late_by,
            source_lag: Duration::ZERO,
        })
    }

    fn playout_offset_us(&self, kind: UpstreamMediaKind) -> i64 {
        match kind {
            UpstreamMediaKind::Camera => self.camera_playout_offset_us.load(Ordering::Relaxed),
            UpstreamMediaKind::Microphone => {
                self.microphone_playout_offset_us.load(Ordering::Relaxed)
            }
        }
    }
}
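The monotonic rebase in `plan_rebased_pts` boils down to: subtract the first (or bundle-supplied) remote PTS, then force each stream's local PTS strictly forward by at least the minimum step. A standalone sketch of just that arithmetic (the helper name `rebase_pts` and the sample timestamps are illustrative):

```rust
// Rebase a remote capture PTS onto a zero-based local timeline, keeping it
// strictly monotonic per stream, as plan_rebased_pts does for camera and mic.
fn rebase_pts(
    remote_pts_us: u64,
    base: &mut Option<u64>,
    last: &mut Option<u64>,
    min_step_us: u64,
) -> u64 {
    let base_us = *base.get_or_insert(remote_pts_us);
    let mut local = remote_pts_us.saturating_sub(base_us);
    if let Some(prev) = *last {
        if local <= prev {
            // Out-of-order or duplicate timestamps still advance the timeline.
            local = prev.saturating_add(min_step_us.max(1));
        }
    }
    *last = Some(local);
    local
}

fn main() {
    let (mut base, mut last) = (None, None);
    // First packet anchors the base, so it lands at local PTS 0.
    assert_eq!(rebase_pts(5_000_000, &mut base, &mut last, 33_333), 0);
    assert_eq!(rebase_pts(5_033_000, &mut base, &mut last, 33_333), 33_000);
    // A stale packet (earlier capture PTS) is pushed one frame step forward.
    assert_eq!(rebase_pts(5_010_000, &mut base, &mut last, 33_333), 66_333);
}
```

Because the base and epoch can be supplied by the bundle (`plan_bundled_pts`), audio and video share one origin instead of each stream picking its own, which is what keeps the v2 contract's single-epoch invariant.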
include!("upstream_media_runtime/stream_lifecycle_methods.rs");
include!("upstream_media_runtime/planner_snapshot_methods.rs");
include!("upstream_media_runtime/playout_planning_methods.rs");

impl Default for UpstreamMediaRuntime {
    fn default() -> Self {
@@ -591,6 +214,8 @@ impl Default for UpstreamMediaRuntime {
    }
}

/// Keeps `reset_session_state` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn reset_session_state(state: &mut RuntimeState) {
    state.base_remote_pts_us = None;
    state.playout_epoch = None;
@@ -615,6 +240,8 @@ fn reset_session_state(state: &mut RuntimeState) {
    state.video_freezes = 0;
}

/// Keeps `record_timing_pair` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn record_timing_pair(state: &mut RuntimeState) {
    let (Some(camera), Some(microphone)) =
        (state.latest_camera_timing, state.latest_microphone_timing)
@@ -632,6 +259,8 @@ fn record_timing_pair(state: &mut RuntimeState) {
    state.server_receive_skew_window_ms.push(receive_skew_ms);
}

/// Keeps `record_presentation` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn record_presentation(state: &mut RuntimeState, kind: UpstreamMediaKind, due_at: Instant) {
    let sample = PresentationSample {
        due_at,
@@ -666,6 +295,8 @@ fn live_lag_ms(state: &RuntimeState) -> Option<f64> {
    Some(latest.saturating_sub(base) as f64 / 1000.0)
}

/// Keeps `planner_skew_ms` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn planner_skew_ms(state: &RuntimeState) -> Option<f64> {
    match (
        state.last_audio_presented_pts_us,
@@ -694,6 +325,8 @@ fn age_ms(now: Instant, then: Instant) -> f64 {
    now.saturating_duration_since(then).as_secs_f64() * 1000.0
}

/// Keeps `signed_duration_ms` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn signed_duration_ms(left: Instant, right: Instant) -> f64 {
    if left >= right {
        left.duration_since(right).as_secs_f64() * 1000.0
@@ -706,6 +339,8 @@ fn delta_ms(left_us: u64, right_us: u64) -> f64 {
    (left_us as i128 - right_us as i128) as f64 / 1000.0
}

/// Keeps `percentile` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn percentile(values: impl Iterator<Item = f64>, quantile: f64) -> Option<f64> {
    let mut sorted = values.filter(|value| value.is_finite()).collect::<Vec<_>>();
    if sorted.is_empty() {
@@ -724,6 +359,8 @@ fn upstream_playout_delay() -> Duration {
    Duration::from_millis(delay_ms)
}

/// Keeps `playout_offset_us` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn playout_offset_us(kind: UpstreamMediaKind) -> i64 {
    let (scalar_name, mode_map_name, factory_map, factory_offset_us) = match kind {
        UpstreamMediaKind::Camera => (
@@ -789,6 +426,8 @@ fn env_u32(name: &str) -> Option<u32> {
        .and_then(|value| value.trim().parse::<u32>().ok())
}

/// Keeps `apply_offset` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn apply_offset(instant: Instant, offset_us: i64) -> Instant {
    if offset_us >= 0 {
        instant + Duration::from_micros(offset_us as u64)
@@ -800,78 +439,5 @@ fn apply_offset(instant: Instant, offset_us: i64) -> Instant {
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    fn with_clean_offset_env(test: impl FnOnce()) {
        temp_env::with_vars(
            [
                ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
                (
                    "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                (
                    "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                    None::<&str>,
                ),
                ("LESAVKA_UVC_WIDTH", None::<&str>),
                ("LESAVKA_UVC_HEIGHT", None::<&str>),
                ("LESAVKA_UVC_FPS", None::<&str>),
                ("LESAVKA_UVC_INTERVAL", None::<&str>),
            ],
            test,
        );
    }

    #[test]
    fn runtime_uses_baked_mode_offsets_before_calibration_store_loads() {
        for (width, height, fps, expected_video_offset_us) in [
            ("1280", "720", "20", 162_659),
            ("1280", "720", "30", 135_090),
            ("1920", "1080", "20", 160_045),
            ("1920", "1080", "30", 127_952),
        ] {
            with_clean_offset_env(|| {
                temp_env::with_vars(
                    [
                        ("LESAVKA_UVC_WIDTH", Some(width)),
                        ("LESAVKA_UVC_HEIGHT", Some(height)),
                        ("LESAVKA_UVC_FPS", Some(fps)),
                    ],
                    || {
                        let runtime = UpstreamMediaRuntime::new();
                        assert_eq!(
                            runtime.playout_offsets(),
                            (expected_video_offset_us, 0),
                            "{width}x{height}@{fps} should use its baked startup offset"
                        );
                    },
                );
            });
        }
    }

    #[test]
    fn runtime_prefers_mode_offset_map_over_scalar_fallback() {
        with_clean_offset_env(|| {
            temp_env::with_vars(
                [
                    ("LESAVKA_UVC_WIDTH", Some("1280")),
                    ("LESAVKA_UVC_HEIGHT", Some("720")),
                    ("LESAVKA_UVC_FPS", Some("30")),
                    ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("999999")),
                    (
                        "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                        Some("1280x720@30=135090"),
                    ),
                ],
                || {
                    let runtime = UpstreamMediaRuntime::new();
                    assert_eq!(runtime.playout_offsets(), (135_090, 0));
                },
            );
        });
    }
}
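The tests above exercise mode-offset strings of the form `1280x720@30=135090`. As a hedged sketch of how one such entry can be matched against the active UVC mode (the real lookup inside `playout_offset_us` is not shown in this diff and may be structured differently; the helper name `mode_offset` is illustrative):

```rust
// Illustrative parser for "WIDTHxHEIGHT@FPS=OFFSET_US" entries separated by commas.
fn mode_offset(map: &str, width: u32, height: u32, fps: u32) -> Option<i64> {
    map.split(',').find_map(|entry| {
        let (mode, offset) = entry.trim().split_once('=')?;
        let (dims, entry_fps) = mode.split_once('@')?;
        let (w, h) = dims.split_once('x')?;
        if w.parse::<u32>().ok()? == width
            && h.parse::<u32>().ok()? == height
            && entry_fps.parse::<u32>().ok()? == fps
        {
            offset.parse::<i64>().ok()
        } else {
            None
        }
    })
}

fn main() {
    let map = "1280x720@30=135090,1920x1080@30=127952";
    assert_eq!(mode_offset(map, 1280, 720, 30), Some(135_090));
    assert_eq!(mode_offset(map, 1920, 1080, 30), Some(127_952));
    // Unknown modes fall through to the scalar/default offset path.
    assert_eq!(mode_offset(map, 640, 480, 30), None);
}
```

A per-mode map beats a single scalar here because the measured UVC output delay differs by resolution and frame rate, as the baked-offset table in the first test shows.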
#[path = "upstream_media_runtime/tests/mod.rs"]
mod tests;

114 server/src/upstream_media_runtime/planner_snapshot_methods.rs Normal file
@@ -0,0 +1,114 @@
impl UpstreamMediaRuntime {
    #[must_use]
    /// Keeps `snapshot` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    pub fn snapshot(&self) -> UpstreamPlannerSnapshot {
        let state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let now = Instant::now();
        UpstreamPlannerSnapshot {
            session_id: state.session_id,
            phase: state.phase.as_str(),
            latest_camera_remote_pts_us: state.latest_camera_remote_pts_us,
            latest_microphone_remote_pts_us: state.latest_microphone_remote_pts_us,
            last_video_presented_pts_us: state.last_video_presented_pts_us,
            last_audio_presented_pts_us: state.last_audio_presented_pts_us,
            live_lag_ms: live_lag_ms(&state),
            planner_skew_ms: planner_skew_ms(&state),
            stale_audio_drops: state.stale_audio_drops,
            stale_video_drops: state.stale_video_drops,
            skew_video_drops: state.skew_video_drops,
            freshness_reanchors: state.freshness_reanchors,
            startup_timeouts: state.startup_timeouts,
            video_freezes: state.video_freezes,
            last_reason: state.last_reason.clone(),
            client_capture_skew_ms: state.latest_paired_client_capture_skew_ms,
            client_send_skew_ms: state.latest_paired_client_send_skew_ms,
            server_receive_skew_ms: state.latest_paired_server_receive_skew_ms,
            camera_client_queue_age_ms: state
                .latest_camera_timing
                .map(|sample| f64::from(sample.queue_age_ms)),
            microphone_client_queue_age_ms: state
                .latest_microphone_timing
                .map(|sample| f64::from(sample.queue_age_ms)),
            camera_server_receive_age_ms: state
                .latest_camera_timing
                .map(|sample| age_ms(now, sample.received_at)),
            microphone_server_receive_age_ms: state
                .latest_microphone_timing
                .map(|sample| age_ms(now, sample.received_at)),
            client_capture_abs_skew_p95_ms: state.client_capture_skew_window_ms.p95_abs(),
            client_send_abs_skew_p95_ms: state.client_send_skew_window_ms.p95_abs(),
            server_receive_abs_skew_p95_ms: state.server_receive_skew_window_ms.p95_abs(),
            camera_client_queue_age_p95_ms: state.camera_client_queue_age_window_ms.p95(),
            microphone_client_queue_age_p95_ms: state.microphone_client_queue_age_window_ms.p95(),
            sink_handoff_skew_ms: latest_sink_handoff_skew_ms(&state),
            sink_handoff_abs_skew_p95_ms: state.sink_handoff_skew_window_ms.p95_abs(),
            camera_sink_late_ms: state.latest_camera_presentation.map(presentation_late_ms),
            microphone_sink_late_ms: state
                .latest_microphone_presentation
                .map(presentation_late_ms),
            camera_sink_late_p95_ms: state.camera_sink_late_window_ms.p95(),
            microphone_sink_late_p95_ms: state.microphone_sink_late_window_ms.p95(),
            client_timing_window_samples: state.client_capture_skew_window_ms.len() as u64,
            sink_handoff_window_samples: state.sink_handoff_skew_window_ms.len() as u64,
        }
    }

    #[must_use]
    /// Keeps `map_video_pts` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn map_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> Option<u64> {
        match self.plan_video_pts(remote_pts_us, frame_step_us) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    #[must_use]
    /// Keeps `map_audio_pts` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn map_audio_pts(&self, remote_pts_us: u64) -> Option<u64> {
        match self.plan_audio_pts(remote_pts_us) {
            UpstreamPlanDecision::Play(plan) => Some(plan.local_pts_us),
            _ => None,
        }
    }

    #[must_use]
    pub fn plan_video_pts(&self, remote_pts_us: u64, frame_step_us: u64) -> UpstreamPlanDecision {
        self.plan_legacy_pts(
            UpstreamMediaKind::Camera,
            remote_pts_us,
            frame_step_us.max(1),
        )
    }

    #[must_use]
    pub fn plan_audio_pts(&self, remote_pts_us: u64) -> UpstreamPlanDecision {
        self.plan_legacy_pts(UpstreamMediaKind::Microphone, remote_pts_us, 1)
    }

    #[must_use]
    pub fn plan_bundled_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
        bundle_base_remote_pts_us: u64,
|
||||
bundle_epoch: Instant,
|
||||
) -> UpstreamPlanDecision {
|
||||
self.plan_rebased_pts(
|
||||
kind,
|
||||
remote_pts_us,
|
||||
min_step_us.max(1),
|
||||
Some(bundle_base_remote_pts_us),
|
||||
Some(bundle_epoch),
|
||||
)
|
||||
}
|
||||
|
||||
pub async fn wait_for_audio_master(&self, _video_local_pts_us: u64, _due_at: Instant) -> bool {
|
||||
true
|
||||
}
|
||||
}
|
||||
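The `plan_bundled_pts` entry point above exists so camera and microphone packets rebase against one shared `bundle_base_remote_pts_us` instead of each stream picking its own base. A minimal sketch of why a shared base preserves the client's capture-time relationship (hypothetical helper name; the real planner additionally enforces monotonic steps and per-kind offsets):

```rust
/// Sketch: rebasing a remote capture PTS onto a shared bundle base.
/// Mirrors only the subtraction step of the real planner.
fn rebase_local_pts(remote_pts_us: u64, shared_base_us: u64) -> u64 {
    remote_pts_us.saturating_sub(shared_base_us)
}

fn demo_shared_base_preserves_skew() -> u64 {
    let base = 1_000_000; // first bundle packet's capture PTS, in microseconds
    let video = rebase_local_pts(1_033_333, base);
    let audio = rebase_local_pts(1_020_000, base);
    // The 13.333 ms capture skew between the two packets survives rebasing,
    // because both subtract the same base.
    video - audio
}
```

If each stream instead anchored on its own first packet, both rebased timelines would start at zero and the capture skew would be silently discarded, which is exactly the v1 failure mode the bundle contract forbids.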
150
server/src/upstream_media_runtime/playout_planning_methods.rs
Normal file
@@ -0,0 +1,150 @@
impl UpstreamMediaRuntime {
    /// Keeps `activate` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    fn activate(&self, kind: UpstreamMediaKind) -> UpstreamStreamLease {
        let generation = match kind {
            UpstreamMediaKind::Camera => {
                self.next_camera_generation.fetch_add(1, Ordering::SeqCst) + 1
            }
            UpstreamMediaKind::Microphone => {
                self.next_microphone_generation
                    .fetch_add(1, Ordering::SeqCst)
                    + 1
            }
        };
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        if state.active_camera_generation.is_none() && state.active_microphone_generation.is_none()
        {
            state.session_id = self.next_session_id.fetch_add(1, Ordering::SeqCst) + 1;
            reset_session_state(&mut state);
            state.session_started_at = Some(Instant::now());
            state.phase = UpstreamSyncPhase::Acquiring;
            state.last_reason = "v2 upstream session acquiring media".to_string();
        }
        match kind {
            UpstreamMediaKind::Camera => state.active_camera_generation = Some(generation),
            UpstreamMediaKind::Microphone => state.active_microphone_generation = Some(generation),
        }
        UpstreamStreamLease {
            session_id: state.session_id,
            generation,
        }
    }

    /// Keeps `is_active` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn is_active(&self, kind: UpstreamMediaKind, generation: u64) -> bool {
        let state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        match kind {
            UpstreamMediaKind::Camera => state.active_camera_generation == Some(generation),
            UpstreamMediaKind::Microphone => state.active_microphone_generation == Some(generation),
        }
    }

    /// Keeps `close` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn close(&self, kind: UpstreamMediaKind, generation: u64) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        match kind {
            UpstreamMediaKind::Camera if state.active_camera_generation == Some(generation) => {
                state.active_camera_generation = None
            }
            UpstreamMediaKind::Microphone
                if state.active_microphone_generation == Some(generation) =>
            {
                state.active_microphone_generation = None
            }
            _ => return,
        }
        if state.active_camera_generation.is_none() && state.active_microphone_generation.is_none()
        {
            reset_session_state(&mut state);
        }
    }

    fn plan_legacy_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
    ) -> UpstreamPlanDecision {
        self.plan_rebased_pts(kind, remote_pts_us, min_step_us.max(1), None, None)
    }

    /// Keeps `plan_rebased_pts` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn plan_rebased_pts(
        &self,
        kind: UpstreamMediaKind,
        remote_pts_us: u64,
        min_step_us: u64,
        explicit_base: Option<u64>,
        explicit_epoch: Option<Instant>,
    ) -> UpstreamPlanDecision {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        match kind {
            UpstreamMediaKind::Camera => state.latest_camera_remote_pts_us = Some(remote_pts_us),
            UpstreamMediaKind::Microphone => {
                state.latest_microphone_remote_pts_us = Some(remote_pts_us)
            }
        }
        let base = match explicit_base {
            Some(base) => *state.base_remote_pts_us.get_or_insert(base),
            None => *state.base_remote_pts_us.get_or_insert(remote_pts_us),
        };
        let epoch = match explicit_epoch {
            Some(epoch) => *state.playout_epoch.get_or_insert(epoch),
            None => *state
                .playout_epoch
                .get_or_insert(Instant::now() + upstream_playout_delay()),
        };
        let mut local_pts_us = remote_pts_us.saturating_sub(base);
        let last_slot = match kind {
            UpstreamMediaKind::Camera => &mut state.last_video_local_pts_us,
            UpstreamMediaKind::Microphone => &mut state.last_audio_local_pts_us,
        };
        if let Some(last_pts_us) = *last_slot
            && local_pts_us <= last_pts_us
        {
            local_pts_us = last_pts_us.saturating_add(min_step_us.max(1));
        }
        *last_slot = Some(local_pts_us);
        state.phase = UpstreamSyncPhase::Syncing;
        state.last_reason = "v2 legacy packet mapped without cross-stream planner".to_string();
        let due_at = apply_offset(
            epoch + Duration::from_micros(local_pts_us),
            self.playout_offset_us(kind),
        );
        let late_by = Instant::now()
            .checked_duration_since(due_at)
            .unwrap_or_default();
        UpstreamPlanDecision::Play(PlannedUpstreamPacket {
            local_pts_us,
            due_at,
            late_by,
            source_lag: Duration::ZERO,
        })
    }

    /// Keeps `playout_offset_us` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    fn playout_offset_us(&self, kind: UpstreamMediaKind) -> i64 {
        match kind {
            UpstreamMediaKind::Camera => self.camera_playout_offset_us.load(Ordering::Relaxed),
            UpstreamMediaKind::Microphone => {
                self.microphone_playout_offset_us.load(Ordering::Relaxed)
            }
        }
    }
}
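Stripped of the state bookkeeping, the arithmetic in `plan_rebased_pts` is: rebase the remote PTS onto the session base, force strict monotonicity with a minimum step, then shift the due time by a signed per-kind output offset. A hedged sketch as standalone functions (assumed names; the real method mutates shared session state under the mutex and tracks late-by):

```rust
use std::time::{Duration, Instant};

/// Sketch of the planner's PTS arithmetic: rebase onto the session base,
/// then enforce strict monotonicity with a minimum step.
fn plan_local_pts(
    remote_pts_us: u64,
    base_us: u64,
    last_local_pts_us: Option<u64>,
    min_step_us: u64,
) -> u64 {
    let mut local = remote_pts_us.saturating_sub(base_us);
    if let Some(last) = last_local_pts_us {
        if local <= last {
            // An out-of-order or duplicate PTS is pushed just past the last one.
            local = last.saturating_add(min_step_us.max(1));
        }
    }
    local
}

/// Sketch of the signed UVC/UAC output-path offset applied to the due time.
fn apply_signed_offset(due_at: Instant, offset_us: i64) -> Instant {
    if offset_us >= 0 {
        due_at + Duration::from_micros(offset_us as u64)
    } else {
        due_at - Duration::from_micros(offset_us.unsigned_abs())
    }
}
```

For example, a packet whose rebased PTS (0 µs) falls at or before the last emitted PTS (50 µs) with a 33 µs step is planned at 83 µs, while an in-order packet rebases cleanly.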
140
server/src/upstream_media_runtime/stream_lifecycle_methods.rs
Normal file
@@ -0,0 +1,140 @@
impl UpstreamMediaRuntime {
    /// Keeps `new` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    #[must_use]
    pub fn new() -> Self {
        Self {
            next_session_id: AtomicU64::new(0),
            next_camera_generation: AtomicU64::new(0),
            next_microphone_generation: AtomicU64::new(0),
            microphone_sink_gate: Arc::new(Semaphore::new(1)),
            camera_playout_offset_us: AtomicI64::new(playout_offset_us(UpstreamMediaKind::Camera)),
            microphone_playout_offset_us: AtomicI64::new(playout_offset_us(
                UpstreamMediaKind::Microphone,
            )),
            state: Mutex::new(RuntimeState::default()),
        }
    }

    pub fn set_playout_offsets(&self, camera_offset_us: i64, microphone_offset_us: i64) {
        self.camera_playout_offset_us
            .store(camera_offset_us, Ordering::Relaxed);
        self.microphone_playout_offset_us
            .store(microphone_offset_us, Ordering::Relaxed);
    }

    #[must_use]
    pub fn playout_offsets(&self) -> (i64, i64) {
        (
            self.camera_playout_offset_us.load(Ordering::Relaxed),
            self.microphone_playout_offset_us.load(Ordering::Relaxed),
        )
    }

    #[must_use]
    pub fn activate_camera(&self) -> UpstreamStreamLease {
        self.activate(UpstreamMediaKind::Camera)
    }

    #[must_use]
    pub fn activate_microphone(&self) -> UpstreamStreamLease {
        self.activate(UpstreamMediaKind::Microphone)
    }

    pub async fn reserve_microphone_sink(&self, generation: u64) -> Option<OwnedSemaphorePermit> {
        let permit = self
            .microphone_sink_gate
            .clone()
            .acquire_owned()
            .await
            .ok()?;
        self.is_microphone_active(generation).then_some(permit)
    }

    #[must_use]
    pub fn is_camera_active(&self, generation: u64) -> bool {
        self.is_active(UpstreamMediaKind::Camera, generation)
    }

    #[must_use]
    pub fn is_microphone_active(&self, generation: u64) -> bool {
        self.is_active(UpstreamMediaKind::Microphone, generation)
    }

    pub fn close_camera(&self, generation: u64) {
        self.close(UpstreamMediaKind::Camera, generation);
    }

    pub fn close_microphone(&self, generation: u64) {
        self.close(UpstreamMediaKind::Microphone, generation);
    }

    pub fn soft_recover_microphone(&self) {
        let lease = self.activate_microphone();
        self.close_microphone(lease.generation);
    }

    /// Keeps `record_client_timing` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
    /// Inputs are the typed parameters; output is the return value or side effect.
    pub fn record_client_timing(&self, kind: UpstreamMediaKind, timing: UpstreamClientTiming) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        let sample = TimingSample {
            capture_pts_us: timing.capture_pts_us,
            send_pts_us: timing.send_pts_us,
            queue_age_ms: timing.queue_age_ms,
            received_at: Instant::now(),
        };
        match kind {
            UpstreamMediaKind::Camera => {
                state.latest_camera_timing = Some(sample);
                state.latest_camera_remote_pts_us = Some(timing.capture_pts_us);
                state
                    .camera_client_queue_age_window_ms
                    .push(f64::from(timing.queue_age_ms));
            }
            UpstreamMediaKind::Microphone => {
                state.latest_microphone_timing = Some(sample);
                state.latest_microphone_remote_pts_us = Some(timing.capture_pts_us);
                state
                    .microphone_client_queue_age_window_ms
                    .push(f64::from(timing.queue_age_ms));
            }
        }
        record_timing_pair(&mut state);
    }

    pub fn mark_audio_presented(&self, local_pts_us: u64, due_at: Instant) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        state.last_audio_presented_pts_us = Some(local_pts_us);
        record_presentation(&mut state, UpstreamMediaKind::Microphone, due_at);
        state.phase = UpstreamSyncPhase::Live;
        state.last_reason = "v2 audio handed to UAC".to_string();
    }

    pub fn mark_video_presented(&self, local_pts_us: u64, due_at: Instant) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        state.last_video_presented_pts_us = Some(local_pts_us);
        record_presentation(&mut state, UpstreamMediaKind::Camera, due_at);
        state.phase = UpstreamSyncPhase::Live;
        state.last_reason = "v2 video handed to UVC".to_string();
    }

    pub fn record_video_freeze(&self, reason: impl Into<String>) {
        let mut state = self
            .state
            .lock()
            .expect("upstream media state mutex poisoned");
        state.video_freezes = state.video_freezes.saturating_add(1);
        state.phase = UpstreamSyncPhase::Healing;
        state.last_reason = reason.into();
    }
}
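The activate/close pairing above relies on generation numbers so that a close racing a reconnect cannot tear down the newer lease. A minimal single-threaded model of that rule (hypothetical type for illustration; the runtime additionally resets session state when the last lease closes and guards everything with atomics and a mutex):

```rust
/// Sketch: close() is a no-op unless the caller still holds the live generation.
#[derive(Default)]
struct LeaseModel {
    next_generation: u64,
    active: Option<u64>,
}

impl LeaseModel {
    /// Hand out a fresh generation and make it the active lease.
    fn activate(&mut self) -> u64 {
        self.next_generation += 1;
        self.active = Some(self.next_generation);
        self.next_generation
    }

    /// A stale generation (from a stream that was already superseded)
    /// must not clear the newer lease.
    fn close(&mut self, generation: u64) {
        if self.active == Some(generation) {
            self.active = None;
        }
    }
}
```

For example, if a client reconnects before its old stream finishes tearing down, the old stream's deferred `close` carries the old generation and is ignored, so the new lease stays active.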
79
server/src/upstream_media_runtime/tests/mod.rs
Normal file
@@ -0,0 +1,79 @@
use super::*;

/// Keeps `with_clean_offset_env` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
fn with_clean_offset_env(test: impl FnOnce()) {
    temp_env::with_vars(
        [
            ("LESAVKA_UPSTREAM_AUDIO_PLAYOUT_OFFSET_US", None::<&str>),
            ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", None::<&str>),
            (
                "LESAVKA_UPSTREAM_AUDIO_PLAYOUT_MODE_OFFSETS_US",
                None::<&str>,
            ),
            (
                "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                None::<&str>,
            ),
            ("LESAVKA_UVC_WIDTH", None::<&str>),
            ("LESAVKA_UVC_HEIGHT", None::<&str>),
            ("LESAVKA_UVC_FPS", None::<&str>),
            ("LESAVKA_UVC_INTERVAL", None::<&str>),
        ],
        test,
    );
}

/// Keeps `runtime_uses_baked_mode_offsets_before_calibration_store_loads` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
#[test]
fn runtime_uses_baked_mode_offsets_before_calibration_store_loads() {
    for (width, height, fps, expected_video_offset_us) in [
        ("1280", "720", "20", 162_659),
        ("1280", "720", "30", 135_090),
        ("1920", "1080", "20", 160_045),
        ("1920", "1080", "30", 127_952),
    ] {
        with_clean_offset_env(|| {
            temp_env::with_vars(
                [
                    ("LESAVKA_UVC_WIDTH", Some(width)),
                    ("LESAVKA_UVC_HEIGHT", Some(height)),
                    ("LESAVKA_UVC_FPS", Some(fps)),
                ],
                || {
                    let runtime = UpstreamMediaRuntime::new();
                    assert_eq!(
                        runtime.playout_offsets(),
                        (expected_video_offset_us, 0),
                        "{width}x{height}@{fps} should use its baked startup offset"
                    );
                },
            );
        });
    }
}

/// Keeps `runtime_prefers_mode_offset_map_over_scalar_fallback` explicit because it sits on server upstream media scheduling, where timing choices directly affect lip sync.
/// Inputs are the typed parameters; output is the return value or side effect.
#[test]
fn runtime_prefers_mode_offset_map_over_scalar_fallback() {
    with_clean_offset_env(|| {
        temp_env::with_vars(
            [
                ("LESAVKA_UVC_WIDTH", Some("1280")),
                ("LESAVKA_UVC_HEIGHT", Some("720")),
                ("LESAVKA_UVC_FPS", Some("30")),
                ("LESAVKA_UPSTREAM_VIDEO_PLAYOUT_OFFSET_US", Some("999999")),
                (
                    "LESAVKA_UPSTREAM_VIDEO_PLAYOUT_MODE_OFFSETS_US",
                    Some("1280x720@30=135090"),
                ),
            ],
            || {
                let runtime = UpstreamMediaRuntime::new();
                assert_eq!(runtime.playout_offsets(), (135_090, 0));
            },
        );
    });
}
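These tests drive the runtime through mode-offset maps such as `1280x720@30=135090`. A hedged sketch of a parser for that `WIDTHxHEIGHT@FPS=OFFSET_US` shape (illustrative only; the runtime's actual environment parsing lives elsewhere and may differ in error handling):

```rust
/// Sketch: parse "1280x720@30=135090,1920x1080@30=127952" into
/// ((width, height, fps), offset_us) pairs, skipping malformed entries.
fn parse_mode_offsets(raw: &str) -> Vec<((u32, u32, u32), i64)> {
    raw.split(',')
        .filter_map(|entry| {
            let (mode, offset) = entry.split_once('=')?;
            let (size, fps) = mode.split_once('@')?;
            let (w, h) = size.split_once('x')?;
            Some((
                (w.parse().ok()?, h.parse().ok()?, fps.parse().ok()?),
                offset.parse().ok()?,
            ))
        })
        .collect()
}
```

Skipping malformed entries rather than failing hard mirrors how the tests expect a mode map to coexist with (and win over) a scalar fallback.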
250
testing/tests/client_browser_sync_script_contract.rs
Normal file
@@ -0,0 +1,250 @@
//! Contract tests for browser-backed upstream sync probes.
//!
//! Scope: statically guard the browser, mirrored, and local stimulus probe
//! drivers used for client-to-server transport tuning.
//! Targets: `scripts/manual/run_upstream_browser_av_sync.sh`,
//! `scripts/manual/run_upstream_mirrored_av_sync.sh`, and local stimulus code.
//! Why: after server-to-RCT calibration, these scripts become the next tuning
//! surface and must not drift back to synthetic or split upstream paths.

const BROWSER_SYNC_SCRIPT: &str =
    include_str!("../../scripts/manual/run_upstream_browser_av_sync.sh");
const MIRRORED_SYNC_SCRIPT: &str =
    include_str!("../../scripts/manual/run_upstream_mirrored_av_sync.sh");
const LOCAL_STIMULUS: &str = include_str!("../../scripts/manual/local_av_stimulus.py");
const SYNC_PROBE_RUNNER: &str = include_str!("../../client/src/sync_probe/runner.rs");
#[test]
fn browser_sync_script_can_delegate_to_a_real_path_driver() {
    for expected in [
        "BROWSER_RECORD_SECONDS=${BROWSER_RECORD_SECONDS:-${PROBE_DURATION_SECONDS}}",
        "BROWSER_SYNC_DRIVER_COMMAND=${BROWSER_SYNC_DRIVER_COMMAND:-}",
        "BROWSER_CONSUMER_REUSE_SESSION=${BROWSER_CONSUMER_REUSE_SESSION:-0}",
        "BROWSER_ANALYSIS_REQUIRED=${BROWSER_ANALYSIS_REQUIRED:-1}",
        "SYNC_ANALYZE_EVENT_WIDTH_CODES=${SYNC_ANALYZE_EVENT_WIDTH_CODES:-}",
        "==> running custom browser sync driver",
        "bash -lc \"${BROWSER_SYNC_DRIVER_COMMAND}\"",
        "browser_start_token=${browser_start_token}",
        "uploaded_start_token",
        "BROWSER_START_TOKEN",
        "analysis-failure.json",
        "BROWSER_ANALYSIS_REQUIRED=${BROWSER_ANALYSIS_REQUIRED}",
        "BROWSER_START_ATTEMPTS=${BROWSER_START_ATTEMPTS:-5}",
        "browser consumer start attempt ${attempt}/${BROWSER_START_ATTEMPTS} failed; retrying",
        "--event-width-codes",
        "--report-dir \"${LOCAL_REPORT_DIR}\"",
        "for attempt in 1 2 3 4 5",
        "capture fetch attempt ${attempt} failed; retrying",
        "failed to fetch browser capture from ${TETHYS_HOST}:${REMOTE_CAPTURE}",
        r"raw activity delta was ([+-]?[0-9]+(?:\.[0-9]+)?) ms ",
        r"\(video=([0-9]+(?:\.[0-9]+)?)s audio=([0-9]+(?:\.[0-9]+)?)s\)",
    ] {
        assert!(
            BROWSER_SYNC_SCRIPT.contains(expected),
            "browser sync script should contain {expected}"
        );
    }
    assert!(
        !BROWSER_SYNC_SCRIPT.contains(r"(?:\\.[0-9]+)?"),
        "browser sync raw-delta parser should not require a literal backslash before decimals"
    );
}
#[test]
fn sync_probe_runner_uses_bundled_webcam_media_path() {
    for expected in [
        "bundled_webcam_media",
        "refusing to measure split upstream",
        "UpstreamMediaBundle",
        "stream_webcam_media",
        "PROBE_BUNDLE_SESSION_ID",
        "PROBE_BUNDLE_AUDIO_GRACE",
        "server does not advertise bundled webcam media",
    ] {
        assert!(
            SYNC_PROBE_RUNNER.contains(expected),
            "sync probe runner should contain {expected}"
        );
    }
    for forbidden in [
        ".stream_camera(Request::new(outbound))",
        ".stream_microphone(Request::new(outbound))",
    ] {
        assert!(
            !SYNC_PROBE_RUNNER.contains(forbidden),
            "sync probe runner must not use old split upstream RPC {forbidden}"
        );
    }
}
#[test]
fn mirrored_sync_script_uses_real_client_capture_path() {
    for expected in [
        "local_av_stimulus.py",
        "lesavka-client",
        "LESAVKA_HEADLESS=1",
        "LESAVKA_MEDIA_CONTROL=\"${MEDIA_CONTROL}\"",
        "LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES=\"${LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES:-1}\"",
        "--no-launcher --server \"${RESOLVED_LESAVKA_SERVER_ADDR}\"",
        "BROWSER_SYNC_DRIVER_COMMAND=\"${driver_command}\"",
        "SYNC_ANALYZE_EVENT_WIDTH_CODES=\"${PROBE_EVENT_WIDTH_CODES}\"",
        "run_upstream_browser_av_sync.sh",
        "wait_for_stimulus_page_ready 15",
        "Point the real webcam at the stimulus window",
        "==> Lesavka versions under test",
        "lesavka-relayctl",
        "--bin lesavka-relayctl",
        "client_revision=",
        "server_version=",
        "server_revision=",
        "combined version+revision",
        "run_status=0",
        "run_mirrored_segments || run_status=$?",
        "LESAVKA_SYNC_APPLY_CALIBRATION=${LESAVKA_SYNC_APPLY_CALIBRATION:-0}",
        "LESAVKA_SYNC_SAVE_CALIBRATION=${LESAVKA_SYNC_SAVE_CALIBRATION:-0}",
        "LESAVKA_SYNC_CALIBRATION_TARGET=${LESAVKA_SYNC_CALIBRATION_TARGET:-video}",
        "LESAVKA_SYNC_ADAPTIVE_CALIBRATION=${LESAVKA_SYNC_ADAPTIVE_CALIBRATION:-0}",
        "LESAVKA_SYNC_CALIBRATION_SEGMENTS=${LESAVKA_SYNC_CALIBRATION_SEGMENTS:-1}",
        "LESAVKA_SYNC_CONTINUOUS_BROWSER=${LESAVKA_SYNC_CONTINUOUS_BROWSER:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_CONTINUE_ON_ANALYSIS_FAILURE=${LESAVKA_SYNC_CONTINUE_ON_ANALYSIS_FAILURE:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_SEGMENT_SETTLE_SECONDS=${LESAVKA_SYNC_SEGMENT_SETTLE_SECONDS:-3}",
        "PROBE_AUDIO_GAIN=${PROBE_AUDIO_GAIN:-0.55}",
        "LESAVKA_STIMULUS_PREVIEW_SECONDS=${LESAVKA_STIMULUS_PREVIEW_SECONDS:-4}",
        "LESAVKA_OPEN_MANUAL_REVIEW_DOLPHIN=${LESAVKA_OPEN_MANUAL_REVIEW_DOLPHIN:-1}",
        "LESAVKA_SYNC_PROVISIONAL_CALIBRATION=${LESAVKA_SYNC_PROVISIONAL_CALIBRATION:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_PROVISIONAL_MIN_PAIRS=${LESAVKA_SYNC_PROVISIONAL_MIN_PAIRS:-3}",
        "LESAVKA_SYNC_PROVISIONAL_MAX_P95_MS=${LESAVKA_SYNC_PROVISIONAL_MAX_P95_MS:-350}",
        "LESAVKA_SYNC_PROVISIONAL_MAX_DRIFT_MS=${LESAVKA_SYNC_PROVISIONAL_MAX_DRIFT_MS:-250}",
        "LESAVKA_SYNC_PROVISIONAL_GAIN=${LESAVKA_SYNC_PROVISIONAL_GAIN:-0.5}",
        "LESAVKA_SYNC_PROVISIONAL_MAX_STEP_US=${LESAVKA_SYNC_PROVISIONAL_MAX_STEP_US:-150000}",
        "LESAVKA_SYNC_RAW_FAILURE_CALIBRATION=${LESAVKA_SYNC_RAW_FAILURE_CALIBRATION:-0}",
        "LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS=${LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS:-3}",
        "LESAVKA_SYNC_RAW_FAILURE_MAX_ABS_DELTA_MS=${LESAVKA_SYNC_RAW_FAILURE_MAX_ABS_DELTA_MS:-350}",
        "LESAVKA_SYNC_CONFIRM_AFTER_CALIBRATION=${LESAVKA_SYNC_CONFIRM_AFTER_CALIBRATION:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_CONFIRMATION_SEGMENTS=${LESAVKA_SYNC_CONFIRMATION_SEGMENTS:-1}",
        "LESAVKA_SYNC_REQUIRE_CONFIRMATION_PASS=${LESAVKA_SYNC_REQUIRE_CONFIRMATION_PASS:-${LESAVKA_SYNC_CONFIRM_AFTER_CALIBRATION}}",
        "LESAVKA_SYNC_CONFIRMATION_SEGMENTS must be a non-negative integer",
        "LESAVKA_SYNC_TOTAL_SEGMENTS=$((LESAVKA_SYNC_CALIBRATION_SEGMENTS + LESAVKA_SYNC_CONFIRMATION_SEGMENTS))",
        "export LESAVKA_SYNC_PROVISIONAL_CALIBRATION",
        "export LESAVKA_SYNC_RAW_FAILURE_CALIBRATION",
        "export LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS",
        "LESAVKA_SYNC_ADAPTIVE_CALIBRATION",
        "LESAVKA_SYNC_CALIBRATION_SEGMENTS=4",
        "browser_consumer_reuse_session=${reuse_browser_session}",
        "browser_analysis_required=${analysis_required}",
        "BROWSER_CONSUMER_REUSE_SESSION=\"${reuse_browser_session}\"",
        "BROWSER_ANALYSIS_REQUIRED=\"${analysis_required}\"",
        "--audio-gain \"${PROBE_AUDIO_GAIN}\"",
        "run_stimulus_preview",
        "write_stimulus_driver_script",
        "local_stimulus_started=true",
        "observed_start_token",
        "audio_state",
        "LESAVKA_SYNC_CALIBRATION_SEGMENTS must be a positive integer",
        "run_mirrored_segments",
        "summarize_adaptive_probe_metrics",
        "for segment in $(seq 1 \"${LESAVKA_SYNC_TOTAL_SEGMENTS}\")",
        "segment_phase",
        "confirmation segment: calibration apply disabled so this segment tests the active calibration",
        "segment-${segment}",
        "calibration-before.env",
        "planner-before.env",
        "calibration-decision.env",
        "segment-metrics.csv",
        "segment-metrics.jsonl",
        "segment-events.csv",
        "segment-events.jsonl",
        "manual-review",
        "manual_review_html",
        "manual_review_dir",
        "open_manual_review_in_dolphin",
        "dolphin",
        "capture_path",
        "confirmation-summary.json",
        "confirmation_passed",
        "check_confirmation_result",
        "confirmation check failed",
        "analysis_failure_reason",
        "probe_activity_start_delta_ms",
        "blind-targets.json",
        "no segment produced a passing probe verdict; refusing to invent blind targets",
        "confirmation did not pass; refusing to promote calibration-only segments to blind targets",
        "candidate_good_calibration_segments",
        "decision_mode",
        "decision_provisional_video_recommendation_us",
        "planner_live_lag_ms_after",
        "probe_p95_abs_skew_ms",
        "transport/server receive jitter",
        "settling ${LESAVKA_SYNC_SEGMENT_SETTLE_SECONDS}s before next segment",
        "print_upstream_calibration_state \"before mirrored run\"",
        "maybe_apply_probe_calibration",
        "calibration_ready=${calibration_ready}",
        "calibration_decision_mode=${calibration_decision_mode}",
        "bounded provisional correction from median skew",
        "bounded provisional correction from analyzer-failure raw activity",
        "raw_failure_calibration_enabled",
        "raw analyzer-failure calibration refused: ",
        "raw_failure_min_pairs",
        "provisional calibration not saved",
        "calibration apply refused: ${calibration_decision_note}",
        "calibrate \"${calibration_apply_audio_delta_us}\" \"${calibration_apply_video_delta_us}\"",
        "LESAVKA_UPSTREAM_BLIND_HEAL is server-side",
        "calibration-save-default",
        "print_upstream_sync_state \"after mirrored run\"",
        "print_upstream_calibration_state \"after mirrored run\"",
        "==> mirrored probe failed",
    ] {
        assert!(
            MIRRORED_SYNC_SCRIPT.contains(expected),
            "mirrored sync script should contain {expected}"
        );
    }
    assert!(
        !MIRRORED_SYNC_SCRIPT.contains("lesavka-sync-probe"),
        "mirrored sync must not use the synthetic direct sender"
    );
}
#[test]
fn local_stimulus_matches_sync_analyzer_pulse_contract() {
    for expected in [
        "--warmup-seconds",
        "--pulse-period-ms",
        "--pulse-width-ms",
        "--marker-tick-period",
        "--audio-gain",
        "/preview",
        "preview_token",
        "observed_preview_token",
        "completed_preview_token",
        "stimulus preview running",
        "stimulus preview completed",
        "audio_state",
        "--event-width-codes",
        "event_width_codes",
        "audio_gain",
        "widthCode",
        "oscillator.frequency.value = 880",
        "setStatus(`ready",
        "Point the real webcam at this window",
    ] {
        assert!(
            LOCAL_STIMULUS.contains(expected),
            "local stimulus should contain {expected}"
        );
    }
}
#[test]
fn manual_probe_python_servers_use_reentrant_state_locks() {
    const BROWSER_CONSUMER: &str = include_str!("../../scripts/manual/browser_consumer_probe.py");
    for (name, script) in [
        ("local stimulus", LOCAL_STIMULUS),
        ("browser consumer", BROWSER_CONSUMER),
    ] {
        assert!(
            script.contains("threading.RLock()"),
            "{name} server request handlers call snapshot while holding state lock; use RLock to avoid /start deadlocks"
        );
    }
}
@@ -1,22 +1,24 @@
 //! Contracts for keeping steady-state live-media recovery logs actionable.
 //!
 //! Scope: inspect client live-media logging and recovery source text.
-//! Targets: `client/src/app/uplink_media.rs`, `client/src/app/downlink_media.rs`.
+//! Targets: `client/src/app/uplink_media/drop_logging.rs`,
+//! `client/src/app/downlink_media.rs`.
 //! Why: freshness-first media handling should not turn normal queue churn into
 //! high-volume warnings that hide real failures or steal runtime budget.
 
-const UPLINK_MEDIA_SRC: &str = include_str!("../../client/src/app/uplink_media.rs");
+const UPLINK_DROP_LOGGING_SRC: &str =
+    include_str!("../../client/src/app/uplink_media/drop_logging.rs");
 const DOWNLINK_MEDIA_SRC: &str = include_str!("../../client/src/app/downlink_media.rs");
 const AUDIO_RECOVERY_SRC: &str = include_str!("../../client/src/app/audio_recovery_config.rs");
 
 #[test]
 fn upstream_queue_drops_are_rate_limited_instead_of_warn_spammed() {
     assert!(
-        UPLINK_MEDIA_SRC.contains("UplinkDropLogLimiter"),
+        UPLINK_DROP_LOGGING_SRC.contains("UplinkDropLogLimiter"),
         "uplink queue drops should flow through a rate-limited logger"
     );
     assert!(
-        UPLINK_MEDIA_SRC.contains("UPLINK_DROP_WARN_INTERVAL"),
+        UPLINK_DROP_LOGGING_SRC.contains("UPLINK_DROP_WARN_INTERVAL"),
         "uplink drop warnings should have an explicit aggregation interval"
     );
     for noisy in [
@@ -26,7 +28,7 @@ fn upstream_queue_drops_are_rate_limited_instead_of_warn_spammed() {
         "upstream microphone queue dropped the oldest packet because it was full",
     ] {
         assert!(
-            !UPLINK_MEDIA_SRC.contains(noisy),
+            !UPLINK_DROP_LOGGING_SRC.contains(noisy),
            "normal freshness-first drop churn should not WARN every packet: {noisy}"
        );
    }
@@ -6,15 +6,6 @@
 //! port is not exposed on the public SSH endpoint.
 
 const SYNC_SCRIPT: &str = include_str!("../../scripts/manual/run_upstream_av_sync.sh");
-const SERVER_RC_MODE_MATRIX_SCRIPT: &str =
-    include_str!("../../scripts/manual/run_server_to_rc_mode_matrix.sh");
-const BROWSER_SYNC_SCRIPT: &str =
-    include_str!("../../scripts/manual/run_upstream_browser_av_sync.sh");
-const MIRRORED_SYNC_SCRIPT: &str =
-    include_str!("../../scripts/manual/run_upstream_mirrored_av_sync.sh");
-const LOCAL_STIMULUS: &str = include_str!("../../scripts/manual/local_av_stimulus.py");
-const SYNC_PROBE_RUNNER: &str = include_str!("../../client/src/sync_probe/runner.rs");
 
 #[test]
 fn upstream_sync_script_tunnels_auto_server_addr_through_ssh() {
     for expected in [
@ -370,398 +361,3 @@ fn output_freshness_still_invalidates_real_event_timing_contradictions() {
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
fn server_rc_mode_matrix_validates_advertised_uvc_profiles() {
    for expected in [
        "LESAVKA_SERVER_RC_CORE_WEBCAM_MODES=${LESAVKA_SERVER_RC_CORE_WEBCAM_MODES:-1280x720@20,1280x720@30,1920x1080@20,1920x1080@30}",
        "LESAVKA_SERVER_RC_MODES=${LESAVKA_SERVER_RC_MODES:-${LESAVKA_SERVER_RC_CORE_WEBCAM_MODES}}",
        "LESAVKA_SERVER_REPO=${LESAVKA_SERVER_REPO:-auto}",
        "LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US:-${LESAVKA_OUTPUT_DELAY_PROBE_AUDIO_DELAY_US:-0}}",
        "LESAVKA_SERVER_RC_DEFAULT_VIDEO_DELAY_US=${LESAVKA_SERVER_RC_DEFAULT_VIDEO_DELAY_US:-135090}",
        "LESAVKA_SERVER_RC_MODE_AUDIO_DELAYS_US=${LESAVKA_SERVER_RC_MODE_AUDIO_DELAYS_US:-1280x720@20=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US},1280x720@30=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US},1920x1080@20=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US},1920x1080@30=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US}}",
        "LESAVKA_SERVER_RC_MODE_DELAYS_US=${LESAVKA_SERVER_RC_MODE_DELAYS_US:-1280x720@20=162659,1280x720@30=135090,1920x1080@20=160045,1920x1080@30=127952}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_SIZES=${LESAVKA_SERVER_RC_MODE_DISCOVERY_SIZES:-1280x720,1920x1080}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_FPS=${LESAVKA_SERVER_RC_MODE_DISCOVERY_FPS:-20,30}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_INCLUDE_REGEX=${LESAVKA_SERVER_RC_MODE_DISCOVERY_INCLUDE_REGEX:-Logitech|BRIO|C9[0-9]+|HD UVC WebCam|USB2[.]0 HD|Integrated Camera|Webcam|Camera}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_EXCLUDE_REGEX=${LESAVKA_SERVER_RC_MODE_DISCOVERY_EXCLUDE_REGEX:-Lesavka|UGREEN|MACROSILICON|Composite|Capture}",
        "LESAVKA_SERVER_RC_MODE_SOURCE=${LESAVKA_SERVER_RC_MODE_SOURCE:-configured}",
        "LESAVKA_SERVER_RC_RECONFIGURE=${LESAVKA_SERVER_RC_RECONFIGURE:-0}",
        "LESAVKA_SERVER_RC_RECONFIGURE_UPDATE=${LESAVKA_SERVER_RC_RECONFIGURE_UPDATE:-0}",
        "LESAVKA_SERVER_RC_RECONFIGURE_STRATEGY=${LESAVKA_SERVER_RC_RECONFIGURE_STRATEGY:-runtime}",
        "LESAVKA_SERVER_RC_ALLOW_GADGET_RESET=${LESAVKA_SERVER_RC_ALLOW_GADGET_RESET:-1}",
        "LESAVKA_SERVER_RC_RECONFIGURE_VERBOSE=${LESAVKA_SERVER_RC_RECONFIGURE_VERBOSE:-0}",
        "LESAVKA_SERVER_RC_PROMPT_SUDO_EARLY=${LESAVKA_SERVER_RC_PROMPT_SUDO_EARLY:-1}",
        "LESAVKA_SERVER_RC_START_DELAY_SECONDS=${LESAVKA_SERVER_RC_START_DELAY_SECONDS:-0}",
        "LESAVKA_SERVER_RC_WAIT_TETHYS_READY=${LESAVKA_SERVER_RC_WAIT_TETHYS_READY:-1}",
        "LESAVKA_SERVER_RC_TETHYS_READY_TIMEOUT_SECONDS=${LESAVKA_SERVER_RC_TETHYS_READY_TIMEOUT_SECONDS:-60}",
        "LESAVKA_SERVER_RC_TETHYS_SETTLE_SECONDS=${LESAVKA_SERVER_RC_TETHYS_SETTLE_SECONDS:-6}",
        "LESAVKA_SERVER_RC_PREROLL_DISCARD_SECONDS=${LESAVKA_SERVER_RC_PREROLL_DISCARD_SECONDS:-3}",
        "LESAVKA_SERVER_RC_PROBE_PREBUILD=${LESAVKA_SERVER_RC_PROBE_PREBUILD:-1}",
        "LESAVKA_SERVER_RC_TUNE_DELAYS=${LESAVKA_SERVER_RC_TUNE_DELAYS:-1}",
        "LESAVKA_SERVER_RC_TUNE_CONFIRM=${LESAVKA_SERVER_RC_TUNE_CONFIRM:-1}",
        "LESAVKA_SERVER_RC_TUNE_MIN_PAIRS=${LESAVKA_SERVER_RC_TUNE_MIN_PAIRS:-13}",
        "LESAVKA_SERVER_RC_TUNE_MAX_ABS_SKEW_MS=${LESAVKA_SERVER_RC_TUNE_MAX_ABS_SKEW_MS:-1000}",
        "LESAVKA_SERVER_RC_TUNE_MAX_STEP_US=${LESAVKA_SERVER_RC_TUNE_MAX_STEP_US:-500000}",
        "LESAVKA_SERVER_RC_TUNE_MIN_CHANGE_US=${LESAVKA_SERVER_RC_TUNE_MIN_CHANGE_US:-5000}",
        "Theia sudo password for %s",
        "==> priming remote sudo on ${LESAVKA_SERVER_HOST}",
        "sleep_start_delay",
        "==> delaying server-to-RC matrix start for ${LESAVKA_SERVER_RC_START_DELAY_SECONDS}s",
        "remote sudo has already been primed; sleeping before prebuild/reconfigure/capture",
        "LESAVKA_SERVER_RC_START_DELAY_SECONDS must be a non-negative number",
        "start_delay=${LESAVKA_SERVER_RC_START_DELAY_SECONDS}s",
        "==> prebuilding relay control/analyzer once for the mode matrix",
        "LESAVKA_SERVER_RC_MODES=auto",
        "discover_local_webcam_modes",
        "lookup_audio_delay_us",
        "local webcam",
        "mode_source=${LESAVKA_SERVER_RC_MODE_SOURCE}",
        "video_delays=${LESAVKA_SERVER_RC_MODE_DELAYS_US}",
        "audio_delays=${LESAVKA_SERVER_RC_MODE_AUDIO_DELAYS_US}",
        "pulse_tool=${REMOTE_PULSE_CAPTURE_TOOL}",
        "fast runtime env updated: CAM_OUTPUT=uvc",
        "cycling UVC gadget descriptors",
        "lesavka-core reconfigure log:",
        "missing /usr/local/bin/lesavka-core.sh",
        "wait_tethys_media_ready",
        "==> waiting for Tethys media endpoints for ${mode}",
        "Tethys media ready: video=%s mode=%s audio_stack=%s",
        "timed out waiting for Tethys Lesavka media endpoints",
        "LESAVKA_SERVER_RC_FRESHNESS_MAX_AGE_MS=${LESAVKA_SERVER_RC_FRESHNESS_MAX_AGE_MS:-350}",
        "LESAVKA_SERVER_RC_FRESHNESS_MIN_PAIRS=${LESAVKA_SERVER_RC_FRESHNESS_MIN_PAIRS:-${LESAVKA_SERVER_RC_TUNE_MIN_PAIRS}}",
        "LESAVKA_SERVER_RC_MIN_CODED_PAIRS=${LESAVKA_SERVER_RC_MIN_CODED_PAIRS:-${LESAVKA_SERVER_RC_FRESHNESS_MIN_PAIRS}}",
        "LESAVKA_SERVER_RC_REQUIRE_ALL_CODED_PAIRS=${LESAVKA_SERVER_RC_REQUIRE_ALL_CODED_PAIRS:-0}",
        "LESAVKA_SERVER_RC_REQUIRE_SMOOTHNESS_PASS=${LESAVKA_SERVER_RC_REQUIRE_SMOOTHNESS_PASS:-0}",
        "LESAVKA_SERVER_RC_SIGNAL_READY=${LESAVKA_SERVER_RC_SIGNAL_READY:-1}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_MODE=${LESAVKA_SERVER_RC_SIGNAL_READY_MODE:-conditioned_capture}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_MIN_PAIRS=${LESAVKA_SERVER_RC_SIGNAL_READY_MIN_PAIRS:-3}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_DURATION_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_READY_DURATION_SECONDS:-12}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_ATTEMPTS=${LESAVKA_SERVER_RC_SIGNAL_READY_ATTEMPTS:-4}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_RETRY_DELAY_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_READY_RETRY_DELAY_SECONDS:-5}",
        "LESAVKA_SERVER_RC_SIGNAL_CONDITION_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_CONDITION_SECONDS:-12}",
        "LESAVKA_SERVER_RC_SIGNAL_CONDITION_WARMUP_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_CONDITION_WARMUP_SECONDS:-1}",
        "LESAVKA_SERVER_RC_SIGNAL_CONDITION_GAP_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_CONDITION_GAP_SECONDS:-1}",
        "LESAVKA_SERVER_RC_ANALYSIS_TIMELINE_WINDOW=${LESAVKA_SERVER_RC_ANALYSIS_TIMELINE_WINDOW:-auto}",
        "signal_readiness_passed",
        "write_signal_readiness_attempt_result",
        "write_signal_readiness_attempts_summary",
        "schema\": \"lesavka.server-rc-signal-readiness-attempt.v1\"",
        "schema\": \"lesavka.server-rc-signal-readiness-summary.v1\"",
        "using same-capture signal conditioning before measured probe",
        "proving Tethys signal readiness before measured probe",
        "readiness attempt ${readiness_attempt}/${LESAVKA_SERVER_RC_SIGNAL_READY_ATTEMPTS}",
        "waiting ${LESAVKA_SERVER_RC_SIGNAL_READY_RETRY_DELAY_SECONDS}s before retrying signal readiness",
        "signal readiness did not pass",
        "signal attempt ",
        "smoothness_required",
        "smoothness_warnings",
        "smoothness warning:",
        "coded visibility:",
        "LESAVKA_SERVER_RC_MAX_VIDEO_HICCUPS=${LESAVKA_SERVER_RC_MAX_VIDEO_HICCUPS:-0}",
        "LESAVKA_SERVER_RC_MAX_AUDIO_HICCUPS=${LESAVKA_SERVER_RC_MAX_AUDIO_HICCUPS:-0}",
        "LESAVKA_SERVER_RC_MAX_VIDEO_MISSING_FRAMES=${LESAVKA_SERVER_RC_MAX_VIDEO_MISSING_FRAMES:-12}",
        "mode-matrix-summary.json",
        "mode-matrix-summary.csv",
        "mode-matrix-summary.txt",
        "mode-delay-recommendations.json",
        "mode-delay-recommendations.env",
        "schema\": \"lesavka.server-rc-mode-result.v1\"",
        "schema\": \"lesavka.server-rc-mode-matrix-summary.v1\"",
        "schema\": \"lesavka.server-rc-mode-delay-recommendations.v1\"",
        "output_delay_calibration",
        "write_tune_candidate_env",
        "annotate_mode_result",
        "LESAVKA_SERVER_RC_REPEAT_COUNT=${LESAVKA_SERVER_RC_REPEAT_COUNT:-1}",
        "mode-static-calibration.json",
        "mode-matrix-run.log",
        "mode-result-seed.json",
        "mode-result-tuned.json",
        "==> mode ${mode} run ${mode_run_index}: confirming tuned delays",
        "calibration_ready",
        "calibration_video_target_offset_us",
        "calibration_audio_target_offset_us",
        "calibration:",
        "capture_timebase_status",
        "capture timebase invalid",
        "capture timing:",
        "signature_coverage",
        "paired coded signatures",
        "signature_missing_codes",
        "probe_env=(",
        "\"REMOTE_PULSE_CAPTURE_TOOL=${REMOTE_PULSE_CAPTURE_TOOL}\"",
        "\"REMOTE_PULSE_VIDEO_MODE=${REMOTE_PULSE_VIDEO_MODE}\"",
        "\"REMOTE_CAPTURE_STACK=${REMOTE_CAPTURE_STACK}\"",
        "\"REMOTE_CAPTURE_ALLOW_ALSA_FALLBACK=${REMOTE_CAPTURE_ALLOW_ALSA_FALLBACK}\"",
        "\"REMOTE_CAPTURE_PREROLL_DISCARD_SECONDS=${LESAVKA_SERVER_RC_PREROLL_DISCARD_SECONDS}\"",
        "\"REMOTE_CAPTURE_READY_SETTLE_SECONDS=${REMOTE_CAPTURE_READY_SETTLE_SECONDS}\"",
        "\"PROBE_PREBUILD=0\"",
        "\"VIDEO_SIZE=${width}x${height}\"",
        "\"VIDEO_FPS=${fps}\"",
        "\"REMOTE_EXPECT_UVC_WIDTH=${width}\"",
        "\"REMOTE_EXPECT_UVC_HEIGHT=${height}\"",
        "\"REMOTE_EXPECT_UVC_FPS=${fps}\"",
        "\"LESAVKA_OUTPUT_DELAY_PROBE_AUDIO_DELAY_US=${audio_delay_us}\"",
        "\"LESAVKA_OUTPUT_DELAY_PROBE_VIDEO_DELAY_US=${video_delay_us}\"",
        "\"LESAVKA_OUTPUT_DELAY_APPLY=0\"",
        "\"LESAVKA_OUTPUT_DELAY_SAVE=0\"",
        "\"LESAVKA_OUTPUT_FRESHNESS_MAX_AGE_MS=${LESAVKA_SERVER_RC_FRESHNESS_MAX_AGE_MS}\"",
        "\"LESAVKA_OUTPUT_FRESHNESS_MAX_CLOCK_UNCERTAINTY_MS=${LESAVKA_SERVER_RC_FRESHNESS_MAX_CLOCK_UNCERTAINTY_MS}\"",
        "\"LESAVKA_OUTPUT_FRESHNESS_MIN_PAIRS=${min_pairs}\"",
        "sync did not pass",
        "freshness did not pass",
        "video hiccups",
        "estimated missing video frames",
        "audio hiccups",
    ] {
        assert!(
            SERVER_RC_MODE_MATRIX_SCRIPT.contains(expected),
            "server-to-RC mode matrix script should contain {expected}"
        );
    }
    let prime = SERVER_RC_MODE_MATRIX_SCRIPT
        .find("prime_remote_sudo\nsleep_start_delay\nprebuild_probe_tools")
        .expect("matrix should prime remote sudo before delayed start and prebuild");
    let prompt = SERVER_RC_MODE_MATRIX_SCRIPT
        .find("Theia sudo password for %s")
        .expect("matrix should contain the immediate Theia password prompt");
    assert!(
        prompt < prime,
        "password prompt machinery should be defined before the matrix startup sequence"
    );
}

#[test]
fn browser_sync_script_can_delegate_to_a_real_path_driver() {
    for expected in [
        "BROWSER_RECORD_SECONDS=${BROWSER_RECORD_SECONDS:-${PROBE_DURATION_SECONDS}}",
        "BROWSER_SYNC_DRIVER_COMMAND=${BROWSER_SYNC_DRIVER_COMMAND:-}",
        "BROWSER_CONSUMER_REUSE_SESSION=${BROWSER_CONSUMER_REUSE_SESSION:-0}",
        "BROWSER_ANALYSIS_REQUIRED=${BROWSER_ANALYSIS_REQUIRED:-1}",
        "SYNC_ANALYZE_EVENT_WIDTH_CODES=${SYNC_ANALYZE_EVENT_WIDTH_CODES:-}",
        "==> running custom browser sync driver",
        "bash -lc \"${BROWSER_SYNC_DRIVER_COMMAND}\"",
        "browser_start_token=${browser_start_token}",
        "uploaded_start_token",
        "BROWSER_START_TOKEN",
        "analysis-failure.json",
        "BROWSER_ANALYSIS_REQUIRED=${BROWSER_ANALYSIS_REQUIRED}",
        "BROWSER_START_ATTEMPTS=${BROWSER_START_ATTEMPTS:-5}",
        "browser consumer start attempt ${attempt}/${BROWSER_START_ATTEMPTS} failed; retrying",
        "--event-width-codes",
        "--report-dir \"${LOCAL_REPORT_DIR}\"",
        "for attempt in 1 2 3 4 5",
        "capture fetch attempt ${attempt} failed; retrying",
        "failed to fetch browser capture from ${TETHYS_HOST}:${REMOTE_CAPTURE}",
        r"raw activity delta was ([+-]?[0-9]+(?:\.[0-9]+)?) ms ",
        r"\(video=([0-9]+(?:\.[0-9]+)?)s audio=([0-9]+(?:\.[0-9]+)?)s\)",
    ] {
        assert!(
            BROWSER_SYNC_SCRIPT.contains(expected),
            "browser sync script should contain {expected}"
        );
    }
    assert!(
        !BROWSER_SYNC_SCRIPT.contains(r"(?:\\.[0-9]+)?"),
        "browser sync raw-delta parser should not require a literal backslash before decimals"
    );
}

#[test]
fn sync_probe_runner_uses_bundled_webcam_media_path() {
    for expected in [
        "bundled_webcam_media",
        "refusing to measure split upstream",
        "UpstreamMediaBundle",
        "stream_webcam_media",
        "PROBE_BUNDLE_SESSION_ID",
        "PROBE_BUNDLE_AUDIO_GRACE",
        "server does not advertise bundled webcam media",
    ] {
        assert!(
            SYNC_PROBE_RUNNER.contains(expected),
            "sync probe runner should contain {expected}"
        );
    }
    for forbidden in [
        ".stream_camera(Request::new(outbound))",
        ".stream_microphone(Request::new(outbound))",
    ] {
        assert!(
            !SYNC_PROBE_RUNNER.contains(forbidden),
            "sync probe runner must not use old split upstream RPC {forbidden}"
        );
    }
}

#[test]
fn mirrored_sync_script_uses_real_client_capture_path() {
    for expected in [
        "local_av_stimulus.py",
        "lesavka-client",
        "LESAVKA_HEADLESS=1",
        "LESAVKA_MEDIA_CONTROL=\"${MEDIA_CONTROL}\"",
        "LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES=\"${LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES:-1}\"",
        "--no-launcher --server \"${RESOLVED_LESAVKA_SERVER_ADDR}\"",
        "BROWSER_SYNC_DRIVER_COMMAND=\"${driver_command}\"",
        "SYNC_ANALYZE_EVENT_WIDTH_CODES=\"${PROBE_EVENT_WIDTH_CODES}\"",
        "run_upstream_browser_av_sync.sh",
        "wait_for_stimulus_page_ready 15",
        "Point the real webcam at the stimulus window",
        "==> Lesavka versions under test",
        "lesavka-relayctl",
        "--bin lesavka-relayctl",
        "client_revision=",
        "server_version=",
        "server_revision=",
        "combined version+revision",
        "run_status=0",
        "run_mirrored_segments || run_status=$?",
        "LESAVKA_SYNC_APPLY_CALIBRATION=${LESAVKA_SYNC_APPLY_CALIBRATION:-0}",
        "LESAVKA_SYNC_SAVE_CALIBRATION=${LESAVKA_SYNC_SAVE_CALIBRATION:-0}",
        "LESAVKA_SYNC_CALIBRATION_TARGET=${LESAVKA_SYNC_CALIBRATION_TARGET:-video}",
        "LESAVKA_SYNC_ADAPTIVE_CALIBRATION=${LESAVKA_SYNC_ADAPTIVE_CALIBRATION:-0}",
        "LESAVKA_SYNC_CALIBRATION_SEGMENTS=${LESAVKA_SYNC_CALIBRATION_SEGMENTS:-1}",
        "LESAVKA_SYNC_CONTINUOUS_BROWSER=${LESAVKA_SYNC_CONTINUOUS_BROWSER:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_CONTINUE_ON_ANALYSIS_FAILURE=${LESAVKA_SYNC_CONTINUE_ON_ANALYSIS_FAILURE:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_SEGMENT_SETTLE_SECONDS=${LESAVKA_SYNC_SEGMENT_SETTLE_SECONDS:-3}",
        "PROBE_AUDIO_GAIN=${PROBE_AUDIO_GAIN:-0.55}",
        "LESAVKA_STIMULUS_PREVIEW_SECONDS=${LESAVKA_STIMULUS_PREVIEW_SECONDS:-4}",
        "LESAVKA_OPEN_MANUAL_REVIEW_DOLPHIN=${LESAVKA_OPEN_MANUAL_REVIEW_DOLPHIN:-1}",
        "LESAVKA_SYNC_PROVISIONAL_CALIBRATION=${LESAVKA_SYNC_PROVISIONAL_CALIBRATION:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_PROVISIONAL_MIN_PAIRS=${LESAVKA_SYNC_PROVISIONAL_MIN_PAIRS:-3}",
        "LESAVKA_SYNC_PROVISIONAL_MAX_P95_MS=${LESAVKA_SYNC_PROVISIONAL_MAX_P95_MS:-350}",
        "LESAVKA_SYNC_PROVISIONAL_MAX_DRIFT_MS=${LESAVKA_SYNC_PROVISIONAL_MAX_DRIFT_MS:-250}",
        "LESAVKA_SYNC_PROVISIONAL_GAIN=${LESAVKA_SYNC_PROVISIONAL_GAIN:-0.5}",
        "LESAVKA_SYNC_PROVISIONAL_MAX_STEP_US=${LESAVKA_SYNC_PROVISIONAL_MAX_STEP_US:-150000}",
        "LESAVKA_SYNC_RAW_FAILURE_CALIBRATION=${LESAVKA_SYNC_RAW_FAILURE_CALIBRATION:-0}",
        "LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS=${LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS:-3}",
        "LESAVKA_SYNC_RAW_FAILURE_MAX_ABS_DELTA_MS=${LESAVKA_SYNC_RAW_FAILURE_MAX_ABS_DELTA_MS:-350}",
        "LESAVKA_SYNC_CONFIRM_AFTER_CALIBRATION=${LESAVKA_SYNC_CONFIRM_AFTER_CALIBRATION:-${LESAVKA_SYNC_ADAPTIVE_CALIBRATION}}",
        "LESAVKA_SYNC_CONFIRMATION_SEGMENTS=${LESAVKA_SYNC_CONFIRMATION_SEGMENTS:-1}",
        "LESAVKA_SYNC_REQUIRE_CONFIRMATION_PASS=${LESAVKA_SYNC_REQUIRE_CONFIRMATION_PASS:-${LESAVKA_SYNC_CONFIRM_AFTER_CALIBRATION}}",
        "LESAVKA_SYNC_CONFIRMATION_SEGMENTS must be a non-negative integer",
        "LESAVKA_SYNC_TOTAL_SEGMENTS=$((LESAVKA_SYNC_CALIBRATION_SEGMENTS + LESAVKA_SYNC_CONFIRMATION_SEGMENTS))",
        "export LESAVKA_SYNC_PROVISIONAL_CALIBRATION",
        "export LESAVKA_SYNC_RAW_FAILURE_CALIBRATION",
        "export LESAVKA_SYNC_RAW_FAILURE_MIN_PAIRS",
        "LESAVKA_SYNC_ADAPTIVE_CALIBRATION",
        "LESAVKA_SYNC_CALIBRATION_SEGMENTS=4",
        "browser_consumer_reuse_session=${reuse_browser_session}",
        "browser_analysis_required=${analysis_required}",
        "BROWSER_CONSUMER_REUSE_SESSION=\"${reuse_browser_session}\"",
        "BROWSER_ANALYSIS_REQUIRED=\"${analysis_required}\"",
        "--audio-gain \"${PROBE_AUDIO_GAIN}\"",
        "run_stimulus_preview",
        "write_stimulus_driver_script",
        "local_stimulus_started=true",
        "observed_start_token",
        "audio_state",
        "LESAVKA_SYNC_CALIBRATION_SEGMENTS must be a positive integer",
        "run_mirrored_segments",
        "summarize_adaptive_probe_metrics",
        "for segment in $(seq 1 \"${LESAVKA_SYNC_TOTAL_SEGMENTS}\")",
        "segment_phase",
        "confirmation segment: calibration apply disabled so this segment tests the active calibration",
        "segment-${segment}",
        "calibration-before.env",
        "planner-before.env",
        "calibration-decision.env",
        "segment-metrics.csv",
        "segment-metrics.jsonl",
        "segment-events.csv",
        "segment-events.jsonl",
        "manual-review",
        "manual_review_html",
        "manual_review_dir",
        "open_manual_review_in_dolphin",
        "dolphin",
        "capture_path",
        "confirmation-summary.json",
        "confirmation_passed",
        "check_confirmation_result",
        "confirmation check failed",
        "analysis_failure_reason",
        "probe_activity_start_delta_ms",
        "blind-targets.json",
        "no segment produced a passing probe verdict; refusing to invent blind targets",
        "confirmation did not pass; refusing to promote calibration-only segments to blind targets",
        "candidate_good_calibration_segments",
        "decision_mode",
        "decision_provisional_video_recommendation_us",
        "planner_live_lag_ms_after",
        "probe_p95_abs_skew_ms",
        "transport/server receive jitter",
        "settling ${LESAVKA_SYNC_SEGMENT_SETTLE_SECONDS}s before next segment",
        "print_upstream_calibration_state \"before mirrored run\"",
        "maybe_apply_probe_calibration",
        "calibration_ready=${calibration_ready}",
        "calibration_decision_mode=${calibration_decision_mode}",
        "bounded provisional correction from median skew",
        "bounded provisional correction from analyzer-failure raw activity",
        "raw_failure_calibration_enabled",
        "raw analyzer-failure calibration refused: ",
        "raw_failure_min_pairs",
        "provisional calibration not saved",
        "calibration apply refused: ${calibration_decision_note}",
        "calibrate \"${calibration_apply_audio_delta_us}\" \"${calibration_apply_video_delta_us}\"",
        "LESAVKA_UPSTREAM_BLIND_HEAL is server-side",
        "calibration-save-default",
        "print_upstream_sync_state \"after mirrored run\"",
        "print_upstream_calibration_state \"after mirrored run\"",
        "==> mirrored probe failed",
    ] {
        assert!(
            MIRRORED_SYNC_SCRIPT.contains(expected),
            "mirrored sync script should contain {expected}"
        );
    }
    assert!(
        !MIRRORED_SYNC_SCRIPT.contains("lesavka-sync-probe"),
        "mirrored sync must not use the synthetic direct sender"
    );
}

#[test]
fn local_stimulus_matches_sync_analyzer_pulse_contract() {
    for expected in [
        "--warmup-seconds",
        "--pulse-period-ms",
        "--pulse-width-ms",
        "--marker-tick-period",
        "--audio-gain",
        "/preview",
        "preview_token",
        "observed_preview_token",
        "completed_preview_token",
        "stimulus preview running",
        "stimulus preview completed",
        "audio_state",
        "--event-width-codes",
        "event_width_codes",
        "audio_gain",
        "widthCode",
        "oscillator.frequency.value = 880",
        "setStatus(`ready",
        "Point the real webcam at this window",
    ] {
        assert!(
            LOCAL_STIMULUS.contains(expected),
            "local stimulus should contain {expected}"
        );
    }
}

#[test]
fn manual_probe_python_servers_use_reentrant_state_locks() {
    const BROWSER_CONSUMER: &str = include_str!("../../scripts/manual/browser_consumer_probe.py");
    for (name, script) in [
        ("local stimulus", LOCAL_STIMULUS),
        ("browser consumer", BROWSER_CONSUMER),
    ] {
        assert!(
            script.contains("threading.RLock()"),
            "{name} server request handlers call snapshot while holding state lock; use RLock to avoid /start deadlocks"
        );
    }
}

290 testing/tests/client_microphone_gain_control_contract.rs (new file)
@@ -0,0 +1,290 @@
//! Contract tests for microphone gain control and level taps.
//!
//! Scope: include `client/src/input/microphone.rs` and exercise live gain,
//! level telemetry, and shared-clock packet extraction helpers.
//! Targets: `client/src/input/microphone.rs`.
//! Why: microphone tuning is part of upstream transport quality, so gain and
//! tap behavior must remain deterministic without a live microphone.

#[allow(warnings)]
mod live_capture_clock {
    include!("support/live_capture_clock_shim.rs");
}

#[allow(warnings)]
mod microphone_include_contract {
    include!(env!("LESAVKA_CLIENT_MICROPHONE_SRC"));

    use serial_test::serial;
    use std::fs;
    use std::os::unix::fs::PermissionsExt;
    use std::path::Path;
    use temp_env::with_var;
    use tempfile::tempdir;

    fn write_executable(dir: &Path, name: &str, body: &str) {
        let path = dir.join(name);
        fs::write(&path, body).expect("write script");
        let mut perms = fs::metadata(&path).expect("metadata").permissions();
        perms.set_mode(0o755);
        fs::set_permissions(path, perms).expect("chmod");
    }

    fn with_fake_command(name: &str, script_body: &str, f: impl FnOnce()) {
        let dir = tempdir().expect("tempdir");
        write_executable(dir.path(), name, script_body);
        let prior = std::env::var("PATH").unwrap_or_default();
        let merged = if prior.is_empty() {
            dir.path().display().to_string()
        } else {
            format!("{}:{prior}", dir.path().display())
        };
        with_var("PATH", Some(merged), f);
    }

    fn with_fake_pactl(script_body: &str, f: impl FnOnce()) {
        with_fake_command("pactl", script_body, f);
    }

    fn with_fake_pw_dump(script_body: &str, f: impl FnOnce()) {
        with_fake_command("pw-dump", script_body, f);
    }

    #[test]
    fn mic_gain_control_reads_first_token_and_clamps() {
        let dir = tempdir().expect("tempdir");
        let path = dir.path().join("mic-gain.control");
        fs::write(&path, "3.250 nonce\n").expect("write gain");
        assert_eq!(read_mic_gain_control(&path), Some(3.25));

        fs::write(&path, "20.0 nonce\n").expect("write clamped gain");
        assert_eq!(read_mic_gain_control(&path), Some(4.0));

        fs::write(&path, "bad nonce\n").expect("write invalid gain");
        assert_eq!(read_mic_gain_control(&path), None);
    }

    #[test]
    #[serial]
    fn mic_level_tap_env_and_payload_helpers_are_stable() {
        with_var("LESAVKA_UPLINK_MIC_LEVEL", None::<&str>, || {
            assert!(mic_level_tap_path().is_none());
        });

        let dir = tempdir().expect("tempdir");
        let path = dir.path().join("uplink-mic-level.value");
        with_var(
            "LESAVKA_UPLINK_MIC_LEVEL",
            Some(path.to_string_lossy().to_string()),
            || {
                assert_eq!(mic_level_tap_path().as_deref(), Some(path.as_path()));
            },
        );

        assert_eq!(pcm_peak_fraction(&0_i16.to_le_bytes()), 0.0);
        assert!(pcm_peak_fraction(&i16::MAX.to_le_bytes()) > 0.99);

        write_mic_level_tap(&path, 0.375).expect("write level tap");
        assert_eq!(
            fs::read_to_string(&path).expect("read level tap").trim(),
            "0.375000"
        );
    }

    #[test]
    #[serial]
    fn mic_gain_control_returns_without_env() {
        gst::init().ok();
        let volume = gst::ElementFactory::make("volume")
            .build()
            .expect("volume element");
        volume.set_property("volume", 1.75_f64);

        with_var("LESAVKA_MIC_GAIN_CONTROL", None::<&str>, || {
            maybe_spawn_mic_gain_control(volume.clone());
        });

        assert_eq!(volume.property::<f64>("volume"), 1.75);
    }

    #[test]
    #[serial]
    fn mic_gain_control_updates_volume_element_live() {
        gst::init().ok();
        let dir = tempdir().expect("tempdir");
        let path = dir.path().join("mic-gain.control");
        fs::write(&path, "2.500 nonce\n").expect("write gain");
        let volume = gst::ElementFactory::make("volume")
            .build()
            .expect("volume element");

        with_var(
            "LESAVKA_MIC_GAIN_CONTROL",
            Some(path.to_string_lossy().to_string()),
            || {
                maybe_spawn_mic_gain_control(volume.clone());
                for _ in 0..20 {
                    if (volume.property::<f64>("volume") - 2.5).abs() < 0.001 {
                        return;
                    }
                    std::thread::sleep(std::time::Duration::from_millis(25));
                }
            },
        );

        assert!(
            (volume.property::<f64>("volume") - 2.5).abs() < 0.001,
            "live mic gain control should update the GStreamer volume"
        );
    }

    #[test]
    fn pull_returns_none_for_empty_appsink() {
        gst::init().ok();
        let sink: gst_app::AppSink = gst::ElementFactory::make("appsink")
            .build()
            .expect("appsink")
            .downcast::<gst_app::AppSink>()
            .expect("appsink cast");
        let running = std::sync::Arc::new(AtomicBool::new(true));
        let cap = MicrophoneCapture {
            pipeline: gst::Pipeline::new(),
            sink,
            level_tap_running: Some(std::sync::Arc::clone(&running)),
            pts_rebaser: crate::live_capture_clock::DurationPacedSourcePtsRebaser::default(),
            pending_packets: Default::default(),
        };
        assert!(
            cap.pull().is_none(),
            "empty appsink should produce no packet"
        );
        drop(cap);
        assert!(!running.load(AtomicOrdering::Acquire));
    }

    #[test]
    fn spawned_mic_level_tap_publishes_peak_from_appsink() {
        gst::init().ok();
        let dir = tempdir().expect("tempdir");
        let path = dir.path().join("mic-level.value");
        let pipeline: gst::Pipeline = gst::parse::launch(
            "appsrc name=src is-live=true format=time caps=audio/x-raw,format=S16LE,channels=2,rate=48000 ! \
             appsink name=sink emit-signals=false sync=false max-buffers=4 drop=true",
        )
        .expect("pipeline")
        .downcast()
        .expect("pipeline cast");
        let src: gst_app::AppSrc = pipeline
            .by_name("src")
            .expect("appsrc")
            .downcast()
            .expect("appsrc cast");
        let sink: gst_app::AppSink = pipeline
            .by_name("sink")
            .expect("appsink")
            .downcast()
            .expect("appsink cast");
        pipeline.set_state(gst::State::Playing).expect("playing");

        let running = spawn_mic_level_tap(sink, path.clone());
        src.push_buffer(gst::Buffer::from_slice(i16::MAX.to_le_bytes().repeat(4)))
            .expect("push buffer");

        for _ in 0..20 {
            if let Ok(raw) = fs::read_to_string(&path) {
                let level = raw.trim().parse::<f64>().expect("level");
                assert!(level > 0.99);
                running.store(false, AtomicOrdering::Release);
                let _ = pipeline.set_state(gst::State::Null);
                return;
            }
            std::thread::sleep(std::time::Duration::from_millis(25));
        }

        running.store(false, AtomicOrdering::Release);
        let _ = pipeline.set_state(gst::State::Null);
        panic!("microphone level tap did not publish a value");
    }

    #[test]
    #[cfg(coverage)]
    #[serial]
    fn microphone_capture_with_level_tap_uses_the_same_uplink_pipeline() {
        gst::init().ok();
        let dir = tempdir().expect("tempdir");
        let level_path = dir.path().join("uplink-mic-level.value");

        with_var("LESAVKA_MIC_SOURCE", None::<&str>, || {
            with_var(
                "LESAVKA_MIC_TEST_SOURCE_DESC",
                Some("audiotestsrc is-live=true wave=sine freq=440".to_string()),
                || {
                    with_var(
                        "LESAVKA_UPLINK_MIC_LEVEL",
                        Some(level_path.to_string_lossy().to_string()),
                        || {
                            let cap = MicrophoneCapture::new().expect("synthetic mic capture");
                            assert!(cap.level_tap_running.is_some());
                        },
                    );
                },
            );
        });
    }

    #[test]
    fn pull_returns_packet_when_appsink_has_buffered_sample_with_shared_capture_clock_pts() {
        gst::init().ok();
        let pipeline = gst::Pipeline::new();
        let src = gst::ElementFactory::make("appsrc")
            .build()
            .expect("appsrc")
            .downcast::<gst_app::AppSrc>()
            .expect("appsrc cast");
        let sink = gst::ElementFactory::make("appsink")
            .property("emit-signals", false)
            .property("sync", false)
            .build()
            .expect("appsink")
            .downcast::<gst_app::AppSink>()
            .expect("appsink cast");
        pipeline
            .add_many([
                src.upcast_ref::<gst::Element>(),
                sink.upcast_ref::<gst::Element>(),
            ])
            .expect("add appsrc/appsink");
        src.link(&sink).expect("link appsrc->appsink");
        pipeline.set_state(gst::State::Playing).ok();

        let mut first = gst::Buffer::from_slice(vec![1_u8, 2, 3, 4]);
        first
            .get_mut()
            .expect("buffer mut")
            .set_pts(Some(gst::ClockTime::from_useconds(321)));
        src.push_buffer(first).expect("push first sample");

        let mut second = gst::Buffer::from_slice(vec![5_u8, 6, 7, 8]);
        second
            .get_mut()
            .expect("buffer mut")
            .set_pts(Some(gst::ClockTime::from_useconds(999_999)));
        src.push_buffer(second).expect("push second sample");

        let cap = MicrophoneCapture {
            pipeline,
            sink,
            level_tap_running: None,
            pts_rebaser: crate::live_capture_clock::DurationPacedSourcePtsRebaser::default(),
            pending_packets: Default::default(),
        };
        let first_pkt = cap.pull().expect("first audio packet");
        let second_pkt = cap.pull().expect("second audio packet");
        assert_eq!(first_pkt.id, 0);
        assert_eq!(first_pkt.data, vec![1, 2, 3, 4]);
        assert_eq!(second_pkt.data, vec![5, 6, 7, 8]);
        assert!(second_pkt.pts >= first_pkt.pts);
        assert_ne!(first_pkt.pts, 321);
        assert_ne!(second_pkt.pts, 999_999);
    }
}

@@ -210,331 +210,4 @@ JSON
    );
    assert!(without_tap.contains("appsink name=asink emit-signals=true max-buffers=8"));
}

#[test]
fn mic_gain_control_reads_first_token_and_clamps() {
    let dir = tempdir().expect("tempdir");
    let path = dir.path().join("mic-gain.control");
    fs::write(&path, "3.250 nonce\n").expect("write gain");
    assert_eq!(read_mic_gain_control(&path), Some(3.25));

    fs::write(&path, "20.0 nonce\n").expect("write clamped gain");
    assert_eq!(read_mic_gain_control(&path), Some(4.0));

    fs::write(&path, "bad nonce\n").expect("write invalid gain");
    assert_eq!(read_mic_gain_control(&path), None);
}

#[test]
|
||||
#[serial]
|
||||
fn mic_level_tap_env_and_payload_helpers_are_stable() {
|
||||
with_var("LESAVKA_UPLINK_MIC_LEVEL", None::<&str>, || {
|
||||
assert!(mic_level_tap_path().is_none());
|
||||
});
|
||||
|
||||
let dir = tempdir().expect("tempdir");
|
||||
let path = dir.path().join("uplink-mic-level.value");
|
||||
with_var(
|
||||
"LESAVKA_UPLINK_MIC_LEVEL",
|
||||
Some(path.to_string_lossy().to_string()),
|
||||
|| {
|
||||
assert_eq!(mic_level_tap_path().as_deref(), Some(path.as_path()));
|
||||
},
|
||||
);
|
||||
|
||||
assert_eq!(pcm_peak_fraction(&0_i16.to_le_bytes()), 0.0);
|
||||
assert!(pcm_peak_fraction(&i16::MAX.to_le_bytes()) > 0.99);
|
||||
|
||||
write_mic_level_tap(&path, 0.375).expect("write level tap");
|
||||
assert_eq!(
|
||||
fs::read_to_string(&path).expect("read level tap").trim(),
|
||||
"0.375000"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[serial]
|
||||
fn mic_gain_control_returns_without_env() {
|
||||
gst::init().ok();
|
||||
let volume = gst::ElementFactory::make("volume")
|
||||
.build()
|
||||
.expect("volume element");
|
||||
volume.set_property("volume", 1.75_f64);
|
||||
|
||||
with_var("LESAVKA_MIC_GAIN_CONTROL", None::<&str>, || {
|
||||
maybe_spawn_mic_gain_control(volume.clone());
|
||||
});
|
||||
|
||||
assert_eq!(volume.property::<f64>("volume"), 1.75);
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[serial]
|
||||
fn mic_gain_control_updates_volume_element_live() {
|
||||
gst::init().ok();
|
||||
let dir = tempdir().expect("tempdir");
|
||||
let path = dir.path().join("mic-gain.control");
|
||||
fs::write(&path, "2.500 nonce\n").expect("write gain");
|
||||
let volume = gst::ElementFactory::make("volume")
|
||||
.build()
|
||||
.expect("volume element");
|
||||
|
||||
with_var(
|
||||
"LESAVKA_MIC_GAIN_CONTROL",
|
||||
Some(path.to_string_lossy().to_string()),
|
||||
|| {
|
||||
maybe_spawn_mic_gain_control(volume.clone());
|
||||
for _ in 0..20 {
|
||||
if (volume.property::<f64>("volume") - 2.5).abs() < 0.001 {
|
||||
return;
|
||||
}
|
||||
std::thread::sleep(std::time::Duration::from_millis(25));
|
||||
}
|
||||
},
|
||||
);
|
||||
|
||||
assert!(
|
||||
(volume.property::<f64>("volume") - 2.5).abs() < 0.001,
|
||||
"live mic gain control should update the GStreamer volume"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn pull_returns_none_for_empty_appsink() {
|
||||
gst::init().ok();
|
||||
let sink: gst_app::AppSink = gst::ElementFactory::make("appsink")
|
||||
.build()
|
||||
.expect("appsink")
|
||||
.downcast::<gst_app::AppSink>()
|
||||
.expect("appsink cast");
|
||||
let running = std::sync::Arc::new(AtomicBool::new(true));
|
||||
let cap = MicrophoneCapture {
|
||||
pipeline: gst::Pipeline::new(),
|
||||
sink,
|
||||
level_tap_running: Some(std::sync::Arc::clone(&running)),
|
||||
pts_rebaser: crate::live_capture_clock::DurationPacedSourcePtsRebaser::default(),
|
||||
pending_packets: Default::default(),
|
||||
};
|
||||
assert!(
|
||||
cap.pull().is_none(),
|
||||
"empty appsink should produce no packet"
|
||||
);
|
||||
drop(cap);
|
||||
assert!(!running.load(AtomicOrdering::Acquire));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn spawned_mic_level_tap_publishes_peak_from_appsink() {
|
||||
gst::init().ok();
|
||||
let dir = tempdir().expect("tempdir");
|
||||
let path = dir.path().join("mic-level.value");
|
||||
let pipeline: gst::Pipeline = gst::parse::launch(
|
||||
"appsrc name=src is-live=true format=time caps=audio/x-raw,format=S16LE,channels=2,rate=48000 ! \
|
||||
appsink name=sink emit-signals=false sync=false max-buffers=4 drop=true",
|
||||
)
|
||||
.expect("pipeline")
|
||||
.downcast()
|
||||
.expect("pipeline cast");
|
||||
let src: gst_app::AppSrc = pipeline
|
||||
.by_name("src")
|
||||
.expect("appsrc")
|
||||
.downcast()
|
||||
.expect("appsrc cast");
|
||||
let sink: gst_app::AppSink = pipeline
|
||||
.by_name("sink")
|
||||
.expect("appsink")
|
||||
.downcast()
|
||||
.expect("appsink cast");
|
||||
pipeline.set_state(gst::State::Playing).expect("playing");
|
||||
|
||||
let running = spawn_mic_level_tap(sink, path.clone());
|
||||
src.push_buffer(gst::Buffer::from_slice(i16::MAX.to_le_bytes().repeat(4)))
|
||||
.expect("push buffer");
|
||||
|
||||
for _ in 0..20 {
|
||||
if let Ok(raw) = fs::read_to_string(&path) {
|
||||
let level = raw.trim().parse::<f64>().expect("level");
|
||||
assert!(level > 0.99);
|
||||
running.store(false, AtomicOrdering::Release);
|
||||
let _ = pipeline.set_state(gst::State::Null);
|
||||
return;
|
||||
}
|
||||
std::thread::sleep(std::time::Duration::from_millis(25));
|
||||
}
|
||||
|
||||
running.store(false, AtomicOrdering::Release);
|
||||
let _ = pipeline.set_state(gst::State::Null);
|
||||
panic!("microphone level tap did not publish a value");
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[cfg(coverage)]
|
||||
#[serial]
|
||||
fn microphone_capture_with_level_tap_uses_the_same_uplink_pipeline() {
|
||||
gst::init().ok();
|
||||
let dir = tempdir().expect("tempdir");
|
||||
let level_path = dir.path().join("uplink-mic-level.value");
|
||||
|
||||
with_var("LESAVKA_MIC_SOURCE", None::<&str>, || {
|
||||
with_var(
|
||||
"LESAVKA_MIC_TEST_SOURCE_DESC",
|
||||
Some("audiotestsrc is-live=true wave=sine freq=440".to_string()),
|
||||
|| {
|
||||
with_var(
|
||||
"LESAVKA_UPLINK_MIC_LEVEL",
|
||||
Some(level_path.to_string_lossy().to_string()),
|
||||
|| {
|
||||
let cap = MicrophoneCapture::new().expect("synthetic mic capture");
|
||||
assert!(cap.level_tap_running.is_some());
|
||||
},
|
||||
);
|
||||
},
|
||||
);
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn pull_returns_packet_when_appsink_has_buffered_sample_with_shared_capture_clock_pts() {
|
||||
gst::init().ok();
|
||||
let pipeline = gst::Pipeline::new();
|
||||
let src = gst::ElementFactory::make("appsrc")
|
||||
.build()
|
||||
.expect("appsrc")
|
||||
.downcast::<gst_app::AppSrc>()
|
||||
.expect("appsrc cast");
|
||||
let sink = gst::ElementFactory::make("appsink")
|
||||
.property("emit-signals", false)
|
||||
.property("sync", false)
|
||||
.build()
|
||||
.expect("appsink")
|
||||
.downcast::<gst_app::AppSink>()
|
||||
.expect("appsink cast");
|
||||
pipeline
|
||||
.add_many([
|
||||
src.upcast_ref::<gst::Element>(),
|
||||
sink.upcast_ref::<gst::Element>(),
|
||||
])
|
||||
.expect("add appsrc/appsink");
|
||||
src.link(&sink).expect("link appsrc->appsink");
|
||||
pipeline.set_state(gst::State::Playing).ok();
|
||||
|
||||
let mut first = gst::Buffer::from_slice(vec![1_u8, 2, 3, 4]);
|
||||
first
|
||||
.get_mut()
|
||||
.expect("buffer mut")
|
||||
.set_pts(Some(gst::ClockTime::from_useconds(321)));
|
||||
src.push_buffer(first).expect("push first sample");
|
||||
|
||||
let mut second = gst::Buffer::from_slice(vec![5_u8, 6, 7, 8]);
|
||||
second
|
||||
.get_mut()
|
||||
.expect("buffer mut")
|
||||
.set_pts(Some(gst::ClockTime::from_useconds(999_999)));
|
||||
src.push_buffer(second).expect("push second sample");
|
||||
|
||||
let cap = MicrophoneCapture {
|
||||
pipeline,
|
||||
sink,
|
||||
level_tap_running: None,
|
||||
pts_rebaser: crate::live_capture_clock::DurationPacedSourcePtsRebaser::default(),
|
||||
pending_packets: Default::default(),
|
||||
};
|
||||
let first_pkt = cap.pull().expect("first audio packet");
|
||||
let second_pkt = cap.pull().expect("second audio packet");
|
||||
assert_eq!(first_pkt.id, 0);
|
||||
assert_eq!(first_pkt.data, vec![1, 2, 3, 4]);
|
||||
assert_eq!(second_pkt.data, vec![5, 6, 7, 8]);
|
||||
assert!(second_pkt.pts >= first_pkt.pts);
|
||||
assert_ne!(first_pkt.pts, 321);
|
||||
assert_ne!(second_pkt.pts, 999_999);
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[serial]
|
||||
fn new_uses_requested_source_fragment_when_available() {
|
||||
let script = r#"#!/usr/bin/env sh
|
||||
if [ "$1" = "list" ] && [ "$2" = "short" ] && [ "$3" = "sources" ]; then
|
||||
echo "1 alsa_input.usb-LavMic_abc-00.analog-stereo module-alsa-card.c s16le 2ch 48000Hz RUNNING"
|
||||
exit 0
|
||||
fi
|
||||
exit 0
|
||||
"#;
|
||||
with_fake_pactl(script, || {
|
||||
with_var("LESAVKA_MIC_SOURCE", Some("LavMic_abc"), || {
|
||||
let result = MicrophoneCapture::new();
|
||||
if let Err(err) = result {
|
||||
assert!(!err.to_string().trim().is_empty());
|
||||
}
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[serial]
|
||||
fn resolve_source_desc_prefers_pipewire_named_source_when_available() {
|
||||
if !MicrophoneCapture::pipewire_source_available() {
|
||||
return;
|
||||
}
|
||||
|
||||
let script = r#"#!/usr/bin/env sh
|
||||
cat <<'JSON'
|
||||
[
|
||||
{"info":{"props":{"media.class":"Audio/Source","node.name":"alsa_input.usb-UpstreamMic"}}}
|
||||
]
|
||||
JSON
|
||||
"#;
|
||||
with_fake_pw_dump(script, || {
|
||||
let desc =
|
||||
MicrophoneCapture::resolve_source_desc("UpstreamMic").expect("pipewire source");
|
||||
assert!(desc.contains("pipewiresrc target-object=alsa_input.usb-UpstreamMic"));
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[serial]
|
||||
fn new_falls_back_to_default_source_when_requested_fragment_is_missing() {
|
||||
let script = r#"#!/usr/bin/env sh
|
||||
if [ "$1" = "list" ] && [ "$2" = "short" ] && [ "$3" = "sources" ]; then
|
||||
echo "0 alsa_input.pci.monitor module-alsa-card.c s16le 2ch 48000Hz RUNNING"
|
||||
echo "1 alsa_input.usb-DeskMic_777-00.analog-stereo module-alsa-card.c s16le 2ch 48000Hz IDLE"
|
||||
exit 0
|
||||
fi
|
||||
exit 0
|
||||
"#;
|
||||
with_fake_pactl(script, || {
|
||||
with_var("LESAVKA_MIC_SOURCE", Some("missing-fragment"), || {
|
||||
let result = MicrophoneCapture::new();
|
||||
if let Err(err) = result {
|
||||
assert!(!err.to_string().trim().is_empty());
|
||||
}
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[serial]
|
||||
fn strict_probe_mode_rejects_missing_requested_source() {
|
||||
let script = r#"#!/usr/bin/env sh
|
||||
if [ "$1" = "list" ] && [ "$2" = "short" ] && [ "$3" = "sources" ]; then
|
||||
echo "0 alsa_input.pci.monitor module-alsa-card.c s16le 2ch 48000Hz RUNNING"
|
||||
echo "1 alsa_input.usb-DeskMic_777-00.analog-stereo module-alsa-card.c s16le 2ch 48000Hz IDLE"
|
||||
exit 0
|
||||
fi
|
||||
exit 0
|
||||
"#;
|
||||
with_fake_pactl(script, || {
|
||||
with_var("LESAVKA_MIC_SOURCE", Some("missing-fragment"), || {
|
||||
with_var("LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES", Some("1"), || {
|
||||
match MicrophoneCapture::new() {
|
||||
Ok(_) => panic!("missing mic should fail"),
|
||||
Err(err) => assert!(
|
||||
err.to_string()
|
||||
.contains("requested mic 'missing-fragment' was not found"),
|
||||
"unexpected error: {err}"
|
||||
),
|
||||
}
|
||||
});
|
||||
});
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
141 testing/tests/client_microphone_requested_source_contract.rs (new file)
@@ -0,0 +1,141 @@
//! Contract tests for requested microphone source selection.
//!
//! Scope: include `client/src/input/microphone.rs` and exercise requested-source
//! success, fallback, and strict-probe failure behavior.
//! Targets: `client/src/input/microphone.rs`.
//! Why: transport probes should fail clearly when the named microphone is absent
//! instead of silently measuring the wrong host source.

#[allow(warnings)]
mod live_capture_clock {
    include!("support/live_capture_clock_shim.rs");
}

#[allow(warnings)]
mod microphone_include_contract {
    include!(env!("LESAVKA_CLIENT_MICROPHONE_SRC"));

    use serial_test::serial;
    use std::fs;
    use std::os::unix::fs::PermissionsExt;
    use std::path::Path;
    use temp_env::with_var;
    use tempfile::tempdir;

    fn write_executable(dir: &Path, name: &str, body: &str) {
        let path = dir.join(name);
        fs::write(&path, body).expect("write script");
        let mut perms = fs::metadata(&path).expect("metadata").permissions();
        perms.set_mode(0o755);
        fs::set_permissions(path, perms).expect("chmod");
    }

    fn with_fake_command(name: &str, script_body: &str, f: impl FnOnce()) {
        let dir = tempdir().expect("tempdir");
        write_executable(dir.path(), name, script_body);
        let prior = std::env::var("PATH").unwrap_or_default();
        let merged = if prior.is_empty() {
            dir.path().display().to_string()
        } else {
            format!("{}:{prior}", dir.path().display())
        };
        with_var("PATH", Some(merged), f);
    }

    fn with_fake_pactl(script_body: &str, f: impl FnOnce()) {
        with_fake_command("pactl", script_body, f);
    }

    fn with_fake_pw_dump(script_body: &str, f: impl FnOnce()) {
        with_fake_command("pw-dump", script_body, f);
    }

    #[test]
    #[serial]
    fn new_uses_requested_source_fragment_when_available() {
        let script = r#"#!/usr/bin/env sh
if [ "$1" = "list" ] && [ "$2" = "short" ] && [ "$3" = "sources" ]; then
    echo "1 alsa_input.usb-LavMic_abc-00.analog-stereo module-alsa-card.c s16le 2ch 48000Hz RUNNING"
    exit 0
fi
exit 0
"#;
        with_fake_pactl(script, || {
            with_var("LESAVKA_MIC_SOURCE", Some("LavMic_abc"), || {
                let result = MicrophoneCapture::new();
                if let Err(err) = result {
                    assert!(!err.to_string().trim().is_empty());
                }
            });
        });
    }

    #[test]
    #[serial]
    fn resolve_source_desc_prefers_pipewire_named_source_when_available() {
        if !MicrophoneCapture::pipewire_source_available() {
            return;
        }

        let script = r#"#!/usr/bin/env sh
cat <<'JSON'
[
  {"info":{"props":{"media.class":"Audio/Source","node.name":"alsa_input.usb-UpstreamMic"}}}
]
JSON
"#;
        with_fake_pw_dump(script, || {
            let desc =
                MicrophoneCapture::resolve_source_desc("UpstreamMic").expect("pipewire source");
            assert!(desc.contains("pipewiresrc target-object=alsa_input.usb-UpstreamMic"));
        });
    }

    #[test]
    #[serial]
    fn new_falls_back_to_default_source_when_requested_fragment_is_missing() {
        let script = r#"#!/usr/bin/env sh
if [ "$1" = "list" ] && [ "$2" = "short" ] && [ "$3" = "sources" ]; then
    echo "0 alsa_input.pci.monitor module-alsa-card.c s16le 2ch 48000Hz RUNNING"
    echo "1 alsa_input.usb-DeskMic_777-00.analog-stereo module-alsa-card.c s16le 2ch 48000Hz IDLE"
    exit 0
fi
exit 0
"#;
        with_fake_pactl(script, || {
            with_var("LESAVKA_MIC_SOURCE", Some("missing-fragment"), || {
                let result = MicrophoneCapture::new();
                if let Err(err) = result {
                    assert!(!err.to_string().trim().is_empty());
                }
            });
        });
    }

    #[test]
    #[serial]
    fn strict_probe_mode_rejects_missing_requested_source() {
        let script = r#"#!/usr/bin/env sh
if [ "$1" = "list" ] && [ "$2" = "short" ] && [ "$3" = "sources" ]; then
    echo "0 alsa_input.pci.monitor module-alsa-card.c s16le 2ch 48000Hz RUNNING"
    echo "1 alsa_input.usb-DeskMic_777-00.analog-stereo module-alsa-card.c s16le 2ch 48000Hz IDLE"
    exit 0
fi
exit 0
"#;
        with_fake_pactl(script, || {
            with_var("LESAVKA_MIC_SOURCE", Some("missing-fragment"), || {
                with_var("LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES", Some("1"), || {
                    match MicrophoneCapture::new() {
                        Ok(_) => panic!("missing mic should fail"),
                        Err(err) => assert!(
                            err.to_string()
                                .contains("requested mic 'missing-fragment' was not found"),
                            "unexpected error: {err}"
                        ),
                    }
                });
            });
        });
    }
}

170 testing/tests/client_server_rc_matrix_script_contract.rs (new file)
@@ -0,0 +1,170 @@
//! Contract tests for the server-to-RCT mode matrix harness.
//!
//! Scope: statically guard the hardware-in-the-loop matrix script defaults and
//! summary artifacts.
//! Targets: `scripts/manual/run_server_to_rc_mode_matrix.sh`.
//! Why: server-to-RCT calibration is now considered complete, so the matrix
//! script must keep the blessed offsets and evidence outputs stable.

const SERVER_RC_MODE_MATRIX_SCRIPT: &str =
    include_str!("../../scripts/manual/run_server_to_rc_mode_matrix.sh");

#[test]
fn server_rc_mode_matrix_validates_advertised_uvc_profiles() {
    for expected in [
        "LESAVKA_SERVER_RC_CORE_WEBCAM_MODES=${LESAVKA_SERVER_RC_CORE_WEBCAM_MODES:-1280x720@20,1280x720@30,1920x1080@20,1920x1080@30}",
        "LESAVKA_SERVER_RC_MODES=${LESAVKA_SERVER_RC_MODES:-${LESAVKA_SERVER_RC_CORE_WEBCAM_MODES}}",
        "LESAVKA_SERVER_REPO=${LESAVKA_SERVER_REPO:-auto}",
        "LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US:-${LESAVKA_OUTPUT_DELAY_PROBE_AUDIO_DELAY_US:-0}}",
        "LESAVKA_SERVER_RC_DEFAULT_VIDEO_DELAY_US=${LESAVKA_SERVER_RC_DEFAULT_VIDEO_DELAY_US:-135090}",
        "LESAVKA_SERVER_RC_MODE_AUDIO_DELAYS_US=${LESAVKA_SERVER_RC_MODE_AUDIO_DELAYS_US:-1280x720@20=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US},1280x720@30=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US},1920x1080@20=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US},1920x1080@30=${LESAVKA_SERVER_RC_DEFAULT_AUDIO_DELAY_US}}",
        "LESAVKA_SERVER_RC_MODE_DELAYS_US=${LESAVKA_SERVER_RC_MODE_DELAYS_US:-1280x720@20=162659,1280x720@30=135090,1920x1080@20=160045,1920x1080@30=127952}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_SIZES=${LESAVKA_SERVER_RC_MODE_DISCOVERY_SIZES:-1280x720,1920x1080}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_FPS=${LESAVKA_SERVER_RC_MODE_DISCOVERY_FPS:-20,30}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_INCLUDE_REGEX=${LESAVKA_SERVER_RC_MODE_DISCOVERY_INCLUDE_REGEX:-Logitech|BRIO|C9[0-9]+|HD UVC WebCam|USB2[.]0 HD|Integrated Camera|Webcam|Camera}",
        "LESAVKA_SERVER_RC_MODE_DISCOVERY_EXCLUDE_REGEX=${LESAVKA_SERVER_RC_MODE_DISCOVERY_EXCLUDE_REGEX:-Lesavka|UGREEN|MACROSILICON|Composite|Capture}",
        "LESAVKA_SERVER_RC_MODE_SOURCE=${LESAVKA_SERVER_RC_MODE_SOURCE:-configured}",
        "LESAVKA_SERVER_RC_RECONFIGURE=${LESAVKA_SERVER_RC_RECONFIGURE:-0}",
        "LESAVKA_SERVER_RC_RECONFIGURE_UPDATE=${LESAVKA_SERVER_RC_RECONFIGURE_UPDATE:-0}",
        "LESAVKA_SERVER_RC_RECONFIGURE_STRATEGY=${LESAVKA_SERVER_RC_RECONFIGURE_STRATEGY:-runtime}",
        "LESAVKA_SERVER_RC_ALLOW_GADGET_RESET=${LESAVKA_SERVER_RC_ALLOW_GADGET_RESET:-1}",
        "LESAVKA_SERVER_RC_RECONFIGURE_VERBOSE=${LESAVKA_SERVER_RC_RECONFIGURE_VERBOSE:-0}",
        "LESAVKA_SERVER_RC_PROMPT_SUDO_EARLY=${LESAVKA_SERVER_RC_PROMPT_SUDO_EARLY:-1}",
        "LESAVKA_SERVER_RC_START_DELAY_SECONDS=${LESAVKA_SERVER_RC_START_DELAY_SECONDS:-0}",
        "LESAVKA_SERVER_RC_WAIT_TETHYS_READY=${LESAVKA_SERVER_RC_WAIT_TETHYS_READY:-1}",
        "LESAVKA_SERVER_RC_TETHYS_READY_TIMEOUT_SECONDS=${LESAVKA_SERVER_RC_TETHYS_READY_TIMEOUT_SECONDS:-60}",
        "LESAVKA_SERVER_RC_TETHYS_SETTLE_SECONDS=${LESAVKA_SERVER_RC_TETHYS_SETTLE_SECONDS:-6}",
        "LESAVKA_SERVER_RC_PREROLL_DISCARD_SECONDS=${LESAVKA_SERVER_RC_PREROLL_DISCARD_SECONDS:-3}",
        "LESAVKA_SERVER_RC_PROBE_PREBUILD=${LESAVKA_SERVER_RC_PROBE_PREBUILD:-1}",
        "LESAVKA_SERVER_RC_TUNE_DELAYS=${LESAVKA_SERVER_RC_TUNE_DELAYS:-1}",
        "LESAVKA_SERVER_RC_TUNE_CONFIRM=${LESAVKA_SERVER_RC_TUNE_CONFIRM:-1}",
        "LESAVKA_SERVER_RC_TUNE_MIN_PAIRS=${LESAVKA_SERVER_RC_TUNE_MIN_PAIRS:-13}",
        "LESAVKA_SERVER_RC_TUNE_MAX_ABS_SKEW_MS=${LESAVKA_SERVER_RC_TUNE_MAX_ABS_SKEW_MS:-1000}",
        "LESAVKA_SERVER_RC_TUNE_MAX_STEP_US=${LESAVKA_SERVER_RC_TUNE_MAX_STEP_US:-500000}",
        "LESAVKA_SERVER_RC_TUNE_MIN_CHANGE_US=${LESAVKA_SERVER_RC_TUNE_MIN_CHANGE_US:-5000}",
        "Theia sudo password for %s",
        "==> priming remote sudo on ${LESAVKA_SERVER_HOST}",
        "sleep_start_delay",
        "==> delaying server-to-RC matrix start for ${LESAVKA_SERVER_RC_START_DELAY_SECONDS}s",
        "remote sudo has already been primed; sleeping before prebuild/reconfigure/capture",
        "LESAVKA_SERVER_RC_START_DELAY_SECONDS must be a non-negative number",
        "start_delay=${LESAVKA_SERVER_RC_START_DELAY_SECONDS}s",
        "==> prebuilding relay control/analyzer once for the mode matrix",
        "LESAVKA_SERVER_RC_MODES=auto",
        "discover_local_webcam_modes",
        "lookup_audio_delay_us",
        "local webcam",
        "mode_source=${LESAVKA_SERVER_RC_MODE_SOURCE}",
        "video_delays=${LESAVKA_SERVER_RC_MODE_DELAYS_US}",
        "audio_delays=${LESAVKA_SERVER_RC_MODE_AUDIO_DELAYS_US}",
        "pulse_tool=${REMOTE_PULSE_CAPTURE_TOOL}",
        "fast runtime env updated: CAM_OUTPUT=uvc",
        "cycling UVC gadget descriptors",
        "lesavka-core reconfigure log:",
        "missing /usr/local/bin/lesavka-core.sh",
        "wait_tethys_media_ready",
        "==> waiting for Tethys media endpoints for ${mode}",
        "Tethys media ready: video=%s mode=%s audio_stack=%s",
        "timed out waiting for Tethys Lesavka media endpoints",
        "LESAVKA_SERVER_RC_FRESHNESS_MAX_AGE_MS=${LESAVKA_SERVER_RC_FRESHNESS_MAX_AGE_MS:-350}",
        "LESAVKA_SERVER_RC_FRESHNESS_MIN_PAIRS=${LESAVKA_SERVER_RC_FRESHNESS_MIN_PAIRS:-${LESAVKA_SERVER_RC_TUNE_MIN_PAIRS}}",
        "LESAVKA_SERVER_RC_MIN_CODED_PAIRS=${LESAVKA_SERVER_RC_MIN_CODED_PAIRS:-${LESAVKA_SERVER_RC_FRESHNESS_MIN_PAIRS}}",
        "LESAVKA_SERVER_RC_REQUIRE_ALL_CODED_PAIRS=${LESAVKA_SERVER_RC_REQUIRE_ALL_CODED_PAIRS:-0}",
        "LESAVKA_SERVER_RC_REQUIRE_SMOOTHNESS_PASS=${LESAVKA_SERVER_RC_REQUIRE_SMOOTHNESS_PASS:-0}",
        "LESAVKA_SERVER_RC_SIGNAL_READY=${LESAVKA_SERVER_RC_SIGNAL_READY:-1}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_MODE=${LESAVKA_SERVER_RC_SIGNAL_READY_MODE:-conditioned_capture}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_MIN_PAIRS=${LESAVKA_SERVER_RC_SIGNAL_READY_MIN_PAIRS:-3}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_DURATION_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_READY_DURATION_SECONDS:-12}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_ATTEMPTS=${LESAVKA_SERVER_RC_SIGNAL_READY_ATTEMPTS:-4}",
        "LESAVKA_SERVER_RC_SIGNAL_READY_RETRY_DELAY_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_READY_RETRY_DELAY_SECONDS:-5}",
        "LESAVKA_SERVER_RC_SIGNAL_CONDITION_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_CONDITION_SECONDS:-12}",
        "LESAVKA_SERVER_RC_SIGNAL_CONDITION_WARMUP_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_CONDITION_WARMUP_SECONDS:-1}",
        "LESAVKA_SERVER_RC_SIGNAL_CONDITION_GAP_SECONDS=${LESAVKA_SERVER_RC_SIGNAL_CONDITION_GAP_SECONDS:-1}",
        "LESAVKA_SERVER_RC_ANALYSIS_TIMELINE_WINDOW=${LESAVKA_SERVER_RC_ANALYSIS_TIMELINE_WINDOW:-auto}",
        "signal_readiness_passed",
        "write_signal_readiness_attempt_result",
        "write_signal_readiness_attempts_summary",
        "schema\": \"lesavka.server-rc-signal-readiness-attempt.v1\"",
        "schema\": \"lesavka.server-rc-signal-readiness-summary.v1\"",
        "using same-capture signal conditioning before measured probe",
        "proving Tethys signal readiness before measured probe",
        "readiness attempt ${readiness_attempt}/${LESAVKA_SERVER_RC_SIGNAL_READY_ATTEMPTS}",
        "waiting ${LESAVKA_SERVER_RC_SIGNAL_READY_RETRY_DELAY_SECONDS}s before retrying signal readiness",
        "signal readiness did not pass",
        "signal attempt ",
        "smoothness_required",
        "smoothness_warnings",
        "smoothness warning:",
        "coded visibility:",
        "LESAVKA_SERVER_RC_MAX_VIDEO_HICCUPS=${LESAVKA_SERVER_RC_MAX_VIDEO_HICCUPS:-0}",
        "LESAVKA_SERVER_RC_MAX_AUDIO_HICCUPS=${LESAVKA_SERVER_RC_MAX_AUDIO_HICCUPS:-0}",
        "LESAVKA_SERVER_RC_MAX_VIDEO_MISSING_FRAMES=${LESAVKA_SERVER_RC_MAX_VIDEO_MISSING_FRAMES:-12}",
        "mode-matrix-summary.json",
        "mode-matrix-summary.csv",
        "mode-matrix-summary.txt",
        "mode-delay-recommendations.json",
        "mode-delay-recommendations.env",
        "schema\": \"lesavka.server-rc-mode-result.v1\"",
        "schema\": \"lesavka.server-rc-mode-matrix-summary.v1\"",
        "schema\": \"lesavka.server-rc-mode-delay-recommendations.v1\"",
        "output_delay_calibration",
        "write_tune_candidate_env",
        "annotate_mode_result",
        "LESAVKA_SERVER_RC_REPEAT_COUNT=${LESAVKA_SERVER_RC_REPEAT_COUNT:-1}",
        "mode-static-calibration.json",
        "mode-matrix-run.log",
        "mode-result-seed.json",
        "mode-result-tuned.json",
        "==> mode ${mode} run ${mode_run_index}: confirming tuned delays",
        "calibration_ready",
        "calibration_video_target_offset_us",
        "calibration_audio_target_offset_us",
        "calibration:",
        "capture_timebase_status",
        "capture timebase invalid",
        "capture timing:",
        "signature_coverage",
        "paired coded signatures",
        "signature_missing_codes",
        "probe_env=(",
        "\"REMOTE_PULSE_CAPTURE_TOOL=${REMOTE_PULSE_CAPTURE_TOOL}\"",
        "\"REMOTE_PULSE_VIDEO_MODE=${REMOTE_PULSE_VIDEO_MODE}\"",
        "\"REMOTE_CAPTURE_STACK=${REMOTE_CAPTURE_STACK}\"",
        "\"REMOTE_CAPTURE_ALLOW_ALSA_FALLBACK=${REMOTE_CAPTURE_ALLOW_ALSA_FALLBACK}\"",
        "\"REMOTE_CAPTURE_PREROLL_DISCARD_SECONDS=${LESAVKA_SERVER_RC_PREROLL_DISCARD_SECONDS}\"",
        "\"REMOTE_CAPTURE_READY_SETTLE_SECONDS=${REMOTE_CAPTURE_READY_SETTLE_SECONDS}\"",
        "\"PROBE_PREBUILD=0\"",
        "\"VIDEO_SIZE=${width}x${height}\"",
        "\"VIDEO_FPS=${fps}\"",
        "\"REMOTE_EXPECT_UVC_WIDTH=${width}\"",
        "\"REMOTE_EXPECT_UVC_HEIGHT=${height}\"",
        "\"REMOTE_EXPECT_UVC_FPS=${fps}\"",
        "\"LESAVKA_OUTPUT_DELAY_PROBE_AUDIO_DELAY_US=${audio_delay_us}\"",
        "\"LESAVKA_OUTPUT_DELAY_PROBE_VIDEO_DELAY_US=${video_delay_us}\"",
        "\"LESAVKA_OUTPUT_DELAY_APPLY=0\"",
        "\"LESAVKA_OUTPUT_DELAY_SAVE=0\"",
        "\"LESAVKA_OUTPUT_FRESHNESS_MAX_AGE_MS=${LESAVKA_SERVER_RC_FRESHNESS_MAX_AGE_MS}\"",
        "\"LESAVKA_OUTPUT_FRESHNESS_MAX_CLOCK_UNCERTAINTY_MS=${LESAVKA_SERVER_RC_FRESHNESS_MAX_CLOCK_UNCERTAINTY_MS}\"",
        "\"LESAVKA_OUTPUT_FRESHNESS_MIN_PAIRS=${min_pairs}\"",
        "sync did not pass",
        "freshness did not pass",
        "video hiccups",
        "estimated missing video frames",
        "audio hiccups",
    ] {
        assert!(
            SERVER_RC_MODE_MATRIX_SCRIPT.contains(expected),
            "server-to-RC mode matrix script should contain {expected}"
        );
    }
    let prime = SERVER_RC_MODE_MATRIX_SCRIPT
        .find("prime_remote_sudo\nsleep_start_delay\nprebuild_probe_tools")
        .expect("matrix should prime remote sudo before delayed start and prebuild");
    let prompt = SERVER_RC_MODE_MATRIX_SCRIPT
        .find("Theia sudo password for %s")
        .expect("matrix should contain the immediate Theia password prompt");
    assert!(
        prompt < prime,
        "password prompt machinery should be defined before the matrix startup sequence"
    );
}
@@ -1,11 +1,16 @@
//! Contract guardrails for uplink queue freshness budgets.
//!
//! Scope: source-level checks over client uplink queue constants.
//! Targets: `client/src/app/uplink_media.rs`, `client/src/sync_probe/capture.rs`.
//! Targets: `client/src/app/uplink_media/`, `client/src/sync_probe/capture.rs`.
//! Why: lip-sync quality depends on bounded queue age; accidental widening can
//! create near-second video lag under load.

const UPLINK_MEDIA_SRC: &str = include_str!("../../client/src/app/uplink_media.rs");
const UPLINK_MEDIA_QUEUE_SRC: &str =
    include_str!("../../client/src/app/uplink_media/bundled_media_queue.rs");
const MEDIA_SOURCE_REQUIREMENTS_SRC: &str =
    include_str!("../../client/src/app/uplink_media/media_source_requirements.rs");
const VOICE_LOOP_SRC: &str = include_str!("../../client/src/app/uplink_media/voice_loop.rs");
const CAMERA_LOOP_SRC: &str = include_str!("../../client/src/app/uplink_media/camera_loop.rs");
const SYNC_PROBE_CAPTURE_SRC: &str = include_str!("../../client/src/sync_probe/capture.rs");

fn queue_block<'a>(src: &'a str, queue_const: &str) -> &'a str {
@@ -65,7 +70,7 @@ fn assert_queue_policy(block: &str, queue_const: &str, policy: &str) {

#[test]
fn camera_uplink_queue_freshness_budget_stays_within_lipsync_window() {
    let block = queue_block(UPLINK_MEDIA_SRC, "VIDEO_UPLINK_QUEUE");
    let block = queue_block(UPLINK_MEDIA_QUEUE_SRC, "VIDEO_UPLINK_QUEUE");
    let max_age_ms = parse_queue_max_age_ms(block, "VIDEO_UPLINK_QUEUE");
    assert!(
        max_age_ms <= 350,
@@ -76,7 +81,7 @@ fn camera_uplink_queue_freshness_budget_stays_within_lipsync_window() {

#[test]
fn microphone_uplink_queue_freshness_budget_stays_within_live_audio_window() {
    let block = queue_block(UPLINK_MEDIA_SRC, "AUDIO_UPLINK_QUEUE");
    let block = queue_block(UPLINK_MEDIA_QUEUE_SRC, "AUDIO_UPLINK_QUEUE");
    let max_age_ms = parse_queue_max_age_ms(block, "AUDIO_UPLINK_QUEUE");
    assert!(
        max_age_ms <= 400,
@@ -92,7 +97,7 @@ fn microphone_uplink_queue_freshness_budget_stays_within_live_audio_window() {

#[test]
fn camera_uplink_queue_capacity_remains_bounded() {
    let block = queue_block(UPLINK_MEDIA_SRC, "VIDEO_UPLINK_QUEUE");
    let block = queue_block(UPLINK_MEDIA_QUEUE_SRC, "VIDEO_UPLINK_QUEUE");
    let capacity = parse_queue_capacity(block, "VIDEO_UPLINK_QUEUE");
    assert!(
        capacity <= 32,
@@ -124,6 +129,12 @@ fn sync_probe_audio_queue_preserves_bounded_marker_continuity() {

#[test]
fn strict_explicit_media_source_failures_abort_the_headless_probe_client() {
    let source = [
        MEDIA_SOURCE_REQUIREMENTS_SRC,
        VOICE_LOOP_SRC,
        CAMERA_LOOP_SRC,
    ]
    .join("\n");
    for expected in [
        "LESAVKA_REQUIRE_EXPLICIT_MEDIA_SOURCES",
        "abort_if_required_media_source_failed",
@@ -133,7 +144,7 @@ fn strict_explicit_media_source_failures_abort_the_headless_probe_client() {
        "abort_if_required_media_source_failed(\"camera\"",
    ] {
        assert!(
            UPLINK_MEDIA_SRC.contains(expected),
            source.contains(expected),
            "required-source setup failures must be fatal in strict probe mode: missing {expected}"
        );
    }

@ -355,168 +355,4 @@ mod server_main_rpc {
        };
        assert_eq!(err.code(), tonic::Code::Internal);
    }

    #[test]
    #[cfg(coverage)]
    #[serial]
    fn capture_power_rpcs_surface_stub_snapshot_and_manual_modes() {
        let (_dir, handler) = build_handler_for_tests();
        let rt = tokio::runtime::Runtime::new().expect("runtime");

        with_var(
            "LESAVKA_TEST_UDEV_CAPTURE_DEVICES",
            Some("not-a-number"),
            || {
                assert_eq!(Handler::detected_capture_devices_from_udev(), 0);
            },
        );
        with_var("LESAVKA_TEST_UDEV_CAPTURE_DEVICES", Some("9"), || {
            assert_eq!(Handler::detected_capture_devices_from_udev(), 2);
        });

        let snapshot = rt
            .block_on(async {
                handler
                    .get_capture_power(tonic::Request::new(Empty {}))
                    .await
            })
            .expect("capture power snapshot")
            .into_inner();
        assert!(snapshot.available);
        assert!(!snapshot.enabled);
        assert_eq!(snapshot.mode, "auto");

        let forced_on = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: true,
                        command: CapturePowerCommand::ForceOn as i32,
                    }))
                    .await
            })
            .expect("force capture power on")
            .into_inner();
        assert!(forced_on.available);
        assert!(forced_on.enabled);
        assert_eq!(forced_on.mode, "forced-on");

        let forced_off = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: false,
                        command: CapturePowerCommand::ForceOff as i32,
                    }))
                    .await
            })
            .expect("force capture power off")
            .into_inner();
        assert!(forced_off.available);
        assert!(!forced_off.enabled);
        assert_eq!(forced_off.mode, "forced-off");

        let auto = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: false,
                        command: CapturePowerCommand::Auto as i32,
                    }))
                    .await
            })
            .expect("return capture power to auto")
            .into_inner();
        assert!(auto.available);
        assert!(!auto.enabled);
        assert_eq!(auto.mode, "auto");

        let legacy_fallback = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: true,
                        command: CapturePowerCommand::Unspecified as i32,
                    }))
                    .await
            })
            .expect("legacy bool fallback")
            .into_inner();
        assert!(legacy_fallback.available);
        assert!(legacy_fallback.enabled);
        assert_eq!(legacy_fallback.mode, "forced-on");
    }

    #[test]
    #[cfg(coverage)]
    #[serial]
    fn calibration_rpcs_surface_current_state_and_apply_updates() {
        let dir = tempdir().expect("calibration dir");
        let calibration_path = dir.path().join("calibration.toml");
        with_var(
            "LESAVKA_CALIBRATION_PATH",
            Some(calibration_path.to_string_lossy().to_string()),
            || {
                let (_dir, handler) = build_handler_for_tests();
                let rt = tokio::runtime::Runtime::new().expect("runtime");

                let initial = rt
                    .block_on(async {
                        handler.get_calibration(tonic::Request::new(Empty {})).await
                    })
                    .expect("initial calibration")
                    .into_inner();
                assert_eq!(initial.profile, "mjpeg");
                assert_eq!(initial.active_audio_offset_us, 0);

                let adjusted = rt
                    .block_on(async {
                        handler
                            .calibrate(tonic::Request::new(CalibrationRequest {
                                action: lesavka_common::lesavka::CalibrationAction::BlindEstimate
                                    as i32,
                                audio_delta_us: 10_000,
                                video_delta_us: 2_000,
                                observed_delivery_skew_ms: 42.0,
                                observed_enqueue_skew_ms: 2.5,
                                note: "coverage estimate".to_string(),
                            }))
                            .await
                    })
                    .expect("calibrate")
                    .into_inner();
                assert_eq!(adjusted.source, "blind");
                assert_eq!(adjusted.active_audio_offset_us, 10_000);
                assert_eq!(adjusted.active_video_offset_us, 132_000);
                assert!(
                    std::fs::read_to_string(calibration_path)
                        .expect("persisted")
                        .contains("active_audio_offset_us=-35000")
                );
            },
        );
    }

    #[test]
    #[cfg(coverage)]
    #[serial]
    fn upstream_sync_rpc_surfaces_planner_snapshot() {
        let (_dir, handler) = build_handler_for_tests();
        let rt = tokio::runtime::Runtime::new().expect("runtime");

        let lease_camera = handler.upstream_media_rt.activate_camera();
        let lease_microphone = handler.upstream_media_rt.activate_microphone();
        assert_eq!(lease_camera.session_id, lease_microphone.session_id);

        let initial = rt
            .block_on(async {
                handler
                    .get_upstream_sync(tonic::Request::new(Empty {}))
                    .await
            })
            .expect("planner sync state")
            .into_inner();
        assert_eq!(initial.phase, "acquiring");
        assert_eq!(initial.session_id, lease_camera.session_id);
    }
}
230
testing/tests/server_main_state_rpc_contract.rs
Normal file
@ -0,0 +1,230 @@
//! Integration coverage for server state-oriented RPC handler branches.
//!
//! Scope: include `server/src/main.rs` and exercise calibration, capture-power,
//! and upstream-sync RPC surfaces.
//! Targets: `server/src/main.rs`.
//! Why: these RPCs expose live operational state, so tests should guard reply
//! shapes without requiring gadget, HID, or capture hardware.

#[allow(warnings)]
mod server_main_state_rpc {
    include!(env!("LESAVKA_SERVER_MAIN_SRC"));

    use serial_test::serial;
    use temp_env::with_var;
    use tempfile::tempdir;

    fn build_handler_for_tests_with_modes(
        kb_writable: bool,
        ms_writable: bool,
    ) -> (tempfile::TempDir, Handler) {
        let dir = tempdir().expect("tempdir");
        let kb_path = dir.path().join("hidg0.bin");
        let ms_path = dir.path().join("hidg1.bin");
        std::fs::write(&kb_path, []).expect("create kb file");
        std::fs::write(&ms_path, []).expect("create ms file");
        let kb = tokio::fs::File::from_std(
            std::fs::OpenOptions::new()
                .read(true)
                .write(kb_writable)
                .create(kb_writable)
                .truncate(kb_writable)
                .open(&kb_path)
                .expect("open kb"),
        );
        let ms = tokio::fs::File::from_std(
            std::fs::OpenOptions::new()
                .read(true)
                .write(ms_writable)
                .create(ms_writable)
                .truncate(ms_writable)
                .open(&ms_path)
                .expect("open ms"),
        );
        let handler = with_var("LESAVKA_CAPTURE_POWER_UNIT", Some("none"), || Handler {
            kb: std::sync::Arc::new(tokio::sync::Mutex::new(Some(kb))),
            ms: std::sync::Arc::new(tokio::sync::Mutex::new(Some(ms))),
            gadget: UsbGadget::new("lesavka"),
            did_cycle: std::sync::Arc::new(std::sync::atomic::AtomicBool::new(false)),
            camera_rt: std::sync::Arc::new(CameraRuntime::new()),
            upstream_media_rt: std::sync::Arc::new(UpstreamMediaRuntime::new()),
            calibration: std::sync::Arc::new(CalibrationStore::load(std::sync::Arc::new(
                UpstreamMediaRuntime::new(),
            ))),
            capture_power: CapturePowerManager::new(),
            eye_hubs: std::sync::Arc::new(
                tokio::sync::Mutex::new(std::collections::HashMap::new()),
            ),
        });

        (dir, handler)
    }

    fn build_handler_for_tests() -> (tempfile::TempDir, Handler) {
        build_handler_for_tests_with_modes(true, true)
    }

    #[test]
    #[cfg(coverage)]
    #[serial]
    fn capture_power_rpcs_surface_stub_snapshot_and_manual_modes() {
        let (_dir, handler) = build_handler_for_tests();
        let rt = tokio::runtime::Runtime::new().expect("runtime");

        with_var(
            "LESAVKA_TEST_UDEV_CAPTURE_DEVICES",
            Some("not-a-number"),
            || {
                assert_eq!(Handler::detected_capture_devices_from_udev(), 0);
            },
        );
        with_var("LESAVKA_TEST_UDEV_CAPTURE_DEVICES", Some("9"), || {
            assert_eq!(Handler::detected_capture_devices_from_udev(), 2);
        });

        let snapshot = rt
            .block_on(async {
                handler
                    .get_capture_power(tonic::Request::new(Empty {}))
                    .await
            })
            .expect("capture power snapshot")
            .into_inner();
        assert!(snapshot.available);
        assert!(!snapshot.enabled);
        assert_eq!(snapshot.mode, "auto");

        let forced_on = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: true,
                        command: CapturePowerCommand::ForceOn as i32,
                    }))
                    .await
            })
            .expect("force capture power on")
            .into_inner();
        assert!(forced_on.available);
        assert!(forced_on.enabled);
        assert_eq!(forced_on.mode, "forced-on");

        let forced_off = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: false,
                        command: CapturePowerCommand::ForceOff as i32,
                    }))
                    .await
            })
            .expect("force capture power off")
            .into_inner();
        assert!(forced_off.available);
        assert!(!forced_off.enabled);
        assert_eq!(forced_off.mode, "forced-off");

        let auto = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: false,
                        command: CapturePowerCommand::Auto as i32,
                    }))
                    .await
            })
            .expect("return capture power to auto")
            .into_inner();
        assert!(auto.available);
        assert!(!auto.enabled);
        assert_eq!(auto.mode, "auto");

        let legacy_fallback = rt
            .block_on(async {
                handler
                    .set_capture_power(tonic::Request::new(SetCapturePowerRequest {
                        enabled: true,
                        command: CapturePowerCommand::Unspecified as i32,
                    }))
                    .await
            })
            .expect("legacy bool fallback")
            .into_inner();
        assert!(legacy_fallback.available);
        assert!(legacy_fallback.enabled);
        assert_eq!(legacy_fallback.mode, "forced-on");
    }

    #[test]
    #[cfg(coverage)]
    #[serial]
    fn calibration_rpcs_surface_current_state_and_apply_updates() {
        let dir = tempdir().expect("calibration dir");
        let calibration_path = dir.path().join("calibration.toml");
        with_var(
            "LESAVKA_CALIBRATION_PATH",
            Some(calibration_path.to_string_lossy().to_string()),
            || {
                let (_dir, handler) = build_handler_for_tests();
                let rt = tokio::runtime::Runtime::new().expect("runtime");

                let initial = rt
                    .block_on(async {
                        handler.get_calibration(tonic::Request::new(Empty {})).await
                    })
                    .expect("initial calibration")
                    .into_inner();
                assert_eq!(initial.profile, "mjpeg");
                assert_eq!(initial.active_audio_offset_us, 0);

                let adjusted = rt
                    .block_on(async {
                        handler
                            .calibrate(tonic::Request::new(CalibrationRequest {
                                action: lesavka_common::lesavka::CalibrationAction::BlindEstimate
                                    as i32,
                                audio_delta_us: 10_000,
                                video_delta_us: 2_000,
                                observed_delivery_skew_ms: 42.0,
                                observed_enqueue_skew_ms: 2.5,
                                note: "coverage estimate".to_string(),
                            }))
                            .await
                    })
                    .expect("calibrate")
                    .into_inner();
                assert_eq!(adjusted.source, "blind");
                assert_eq!(adjusted.active_audio_offset_us, 10_000);
                assert_eq!(adjusted.active_video_offset_us, 132_000);
                assert!(
                    std::fs::read_to_string(calibration_path)
                        .expect("persisted")
                        .contains("active_audio_offset_us=-35000")
                );
            },
        );
    }

    #[test]
    #[cfg(coverage)]
    #[serial]
    fn upstream_sync_rpc_surfaces_planner_snapshot() {
        let (_dir, handler) = build_handler_for_tests();
        let rt = tokio::runtime::Runtime::new().expect("runtime");

        let lease_camera = handler.upstream_media_rt.activate_camera();
        let lease_microphone = handler.upstream_media_rt.activate_microphone();
        assert_eq!(lease_camera.session_id, lease_microphone.session_id);

        let initial = rt
            .block_on(async {
                handler
                    .get_upstream_sync(tonic::Request::new(Empty {}))
                    .await
            })
            .expect("planner sync state")
            .into_inner();
        assert_eq!(initial.phase, "acquiring");
        assert_eq!(initial.session_id, lease_camera.session_id);
    }
}