Voice-driven dynamic analysis · powered by local LLM
◼ Lens 1 · Prediction
What is likely
Probability of the next batch outcome based on in-tank sensor history. Useful for early alerts, resource planning, SLA tracking.
◆ Lens 2 · Explanation
Why it happens
Joins in-tank sensors with shift, lot, weather, crew, water, and spatial context. Useful for root cause analysis and targeted intervention.
◉ Lens 3 · Findings
What the data shows
Educational reference — patterns observable in the current run set, with a short operations note for each. Refreshed with every new batch; numbers stabilize as more runs accumulate.
Timeline — contamination across recent hours
Snapshots captured every 5 minutes. Watch how the rolling window updates as work shifts change and new lots come online.
◉ Lens 6 · Run dynamics
In-run trajectories from sensor agents — OUR, CER, viable cell density, intracellular pH — for the latest batch. Phase boundaries marked; contamination onset (if any) shown as a red rule.
Methodology · metrics, math, schema
How each number on this page is computed. Open for the full reference.

Metrics tracked

Summary KPIs (top of page)
total_runs
Count of rows in ferm_runs. Ticks up as new batches stream in.
last100_contam_pct
Share of the last 100 runs where contamination = 'Y'. Recent-window read.
contamination_pct
All-time share with contamination = 'Y'. Fallback when last-100 is unavailable.
last100_high_yield_pct
Share of last 100 where yield_category = 'High'.
avg_temp_c · avg_ph · avg_yield
Column means. avg_yield is max_fpu_ml — Filter Paper Units per mL.
latest_*
Run_id, shift, yeast_lot, humidity, contamination, yield of the most recent row by run_start_ts. Drives the live ticker + risk banner.
Lens 1 — Prediction
baseline.rate
Population outcome rate (contamination or high yield) across all runs.
baseline.n
Sample size backing it.
prediction.value
Equals baseline.rate. The next-batch probability is a rolling average — deliberately simple; the work happens in Lens 2.
Lens 1 — Prediction (with stratified baseline)
prediction.value
Population baseline rate (contamination or high yield) across all runs. The number a global average gives you.
prediction.contextual
The expected rate for this batch's specific configuration. {rate, ci_low, ci_high, n, label, key, fallback}. Drawn from the latest batch's stratum (shift × yeast_band × humidity_band × hardness_band) when n ≥ 200; otherwise fallback=true and the population baseline is used instead.
prediction.contextual_delta_pp
The contextual rate minus the population baseline, in percentage points. Positive = this configuration runs hotter than average; negative = cooler.
prediction.contextual.label
Human-readable description of the stratum (e.g. "shift NIGHT · yeast B44/B47 · HUMID air · hard water").
Lens 2 — Explanation
factor
Human-readable predicate (e.g. shift = NIGHT).
rate
Outcome rate within the subset where the predicate is true.
n · succ
Size of that subset and the count of outcome-matches within it.
ci_low · ci_high
95% Wilson score confidence interval on the rate.
lift_pp
rate − baseline.rate, in percentage points. Positive = predicate raises the outcome rate.
p_value
Raw Yates-corrected chi-square p-value against the complement group.
p_adjusted
Benjamini-Hochberg FDR-corrected across the factor set.
significant
Boolean: p_adjusted < 0.05. Drives the row-fade in Lens 2.
adj_or · adj_or_ci_low/high
Odds ratio with 95% Wald CI from the logistic regression — effect of this factor holding all other predictors constant.
adj_p · adj_significant
Wald p-value (two-sided) and p < 0.05 boolean. A "confounded" amber flag appears when the marginal lift is significant but the adjusted OR is not — the effect was a proxy for a correlated predictor.
compound
Same CI + chi-square fields applied to the predicate conjunction NIGHT ∧ yeast_lot ∈ {B44, B47} ∧ humidity ≥ 75%. Interaction term not included in the logistic model (would need a dedicated cross-term).
Lens 3 — Findings
finding
Short factual statement of an observable pattern in the current run set (e.g. "Ambient humidity above 75%"). Seven hardcoded patterns covering shift, lot, humidity, hardness, and one compound conjunction.
ops_note
One-sentence operations consideration tied to the pattern. Educational, not prescriptive — the panel surfaces what the data shows, the operator decides how to act.
rate · n · ci_low · ci_high · p_value · p_adjusted
Same statistics as factors — 95% Wilson CI on the rate, BH-adjusted chi-square p across the 7 finding tests.
magnitude
Neutral severity tag derived from FDR-adjusted significance + lift size: strong (≥15pp lift, significant), moderate (≥5pp), weak (significant but small), weak_unconfirmed (visible trend but n.s. under FDR), negligible. No prior beliefs are tested or judged.
Lens 4 — Biomass kinetics
peak_od
Max optical density reached during the run (OD600, dimensionless absorbance). Proxy for viable biomass; higher = healthier culture.
time_to_peak_h
Hours from inoculation to peak density. Shorter = faster growth.
mu_h1
Specific growth rate µ during exponential phase, 1/h. Slope of ln(OD) vs t. The fundamental kinetic parameter.
mean · sd · n
Population statistics used to standardize the predictor before fitting (z-score).
adj_or_per_sd
Odds ratio for a +1 SD change in the predictor, adjusted for shift, lot, humidity, and all other predictors in the model.
adj_or_ci_low/high · adj_p
95% Wald CI and two-sided Wald p-value for the per-SD coefficient.
Hot-tier run state (the inversion)
FERM_RUN_STATE
Per-run cached state — keyed by run_id, holds the live decision card + drift score + alerts + narrative + last_updated_at. Read by the dashboard in O(1); written by Python whenever the analysis is recomputed.
narrative
One-line plain-language summary the run "says" about itself, synthesized from regime + drift + earliest alert. Replaces having operators read four panels to figure out the gist.
drift_score
0..1 continuous metric: RMS deviation of the actual trace from the healthy reference trajectory of the same configuration, summed across OUR/CER/VCD/pHi/viability and saturated through 1 − exp(−x/1.5). Rises smoothly before any binary detector fires.
drift_trend
rising / steady / falling — compares last 24 h drift to prior 24 h. Flags accelerating deterioration before it crosses an alert threshold.
drift_components
Per-agent breakdown of the drift score in noise-SD units. Lets the operator see which sensor is driving the deviation.
last_updated_at · age_seconds
Cache freshness. The hot-tier reader returns from_cache=true when serving a row younger than the TTL (default 300 s); otherwise recomputes and persists.
Operator authorization tiers (Tier 13)
role
One of viewer / operator / supervisor. Stored in the qm_role cookie; surfaced via /api/me. The role pill in the dashboard header shows the current tier and (in demo mode) cycles through all three on click.
require_role(req, "operator")
Backend guard on write endpoints. Returns 403 with code: "role_insufficient" when the cookie's role is below the required tier. Currently gates /api/label_outcome and /api/refresh_run_state.
POST /api/role
Demo-mode role switcher. In production this would be replaced by an SSO IdP integration that issues role-claims at login.
Confirm-by automation (Tier 14)
auto_check_available
Boolean on each differential candidate. true when the hypothesis has an automated probe; UI shows a "▶ Run automated check" button when so.
GET /api/confirm_check?run_id=…&hypothesis=…
Returns {result, evidence, details}. Result is one of supports / refutes / weakly_refutes / inconclusive / not_automated.
Currently automated
NIGHT-shift event (OUR drop alignment with 8h boundary), Yeast lot lag-failure (pHi-before-viability sequence), Sterilization breach (SIP duration vs 45 min threshold), Aeration limit (DO vs 20% threshold).
Attention router (Tier 15)
priority
Numeric urgency score per run. regime + drift × 50 + alerts × 10 + severity bonus + vessel_load × 20. Higher = more attention needed.
GET /api/attention_router?n=8&max_age=600
Scans the n most recent LIVE_ rows; reads each one's cached state from FERM_RUN_STATE (recomputes only when cache is missing or older than max_age seconds). Returns the runs ranked by priority.
UI
Top-of-page card listing each run's priority, narrative, regime tag, and drift/alert summary. Click a row to refocus the dashboard's per-run panels (trace, alerts, run-state) on that run.
Trace shape features (Tier 16)
our_slope_late_per_h
Linear slope of OUR over the last 12 h of trace, units mmol/L/h². Negative = falling. The watchlist sees this alongside its other continuous predictors.
our_curvature_late
Quadratic curvature term over last 24 h — distinguishes "OUR holding" from "OUR accelerating downward". Negative (concave) when the decline is steepening; positive (convex) when it is flattening out.
pHi_slope_late_per_h, viab_slope_late_per_h
Same idea for intracellular pH and biomass viability — captures the rate at which they're moving rather than just their current value.
cer_to_our_late
Late-window respiratory quotient. RQ ≈ 1.0 = balanced aerobic; deviations indicate substrate-switch or O₂ limitation.
vcd_plateau_frac
Fraction of the run completed when VCD reached 95% of its peak. Early plateau (low fraction) = constraint signal.
Vessel allostatic load (Tier 10)
tank_id · runs_total
The vessel and how many runs have passed through it (lifetime counter).
contam_pct_recent
Contamination rate over the last window runs (default 50). The "recent stress" component.
trend
rising / steady / falling within the window — compares first-half rate to second-half. Detects accelerating wear.
load_score
0..1 composite. Weighted blend: 40% age (runs / 500), 40% recent contam (pct / 80), 20% trend direction. The single number that summarizes vessel-level wear.
severity · recommendation
low / medium / high tier with the operator-facing maintenance recommendation. "Schedule preventive maintenance" when load_score ≥ 0.65.
Outcome labels (Tier 11)
run_id · confirmed_cause
The run and the operator's chosen root cause. confirmed_cause matches a hypothesis name from the differential, building the labeled dataset that future scoring can use as posterior priors.
confidence
definite (lab confirmed) / likely / uncertain. The "how sure are you" filter applied when training on this label later.
operator_notes · labeled_by · labeled_at
Free-text notes, viewer attribution, and timestamp. Audit trail.
Differential diagnosis (Lens 8 enhancement)
differentials[]
Ranked list of up to 3 candidate causes when regime ∈ {watch, intervene}. Empty for nominal regime.
name · stars · tier
Hypothesis label, ★ rating, and tier (most_likely / alternative / lower_probability) — visual ranking only, not a calibrated probability.
evidence_for[]
List of observable signals that triggered this hypothesis (e.g. "yeast lot B44 on flagged-lot watchlist").
confirm_by
The check or test that would confirm this hypothesis if positive (e.g. "Pull the SIP cycle log…").
distinguishes_from
Which alternative hypothesis the confirm_by check would rule out. The discriminator that turns a list into a triage decision.
if_confirmed
Concrete action to take when this hypothesis is confirmed (e.g. "Quarantine the lot…").
Lens 7 — Early warnings
detector
Name of the rule that fired (OUR sharp drop, intracellular pH stress, viability collapse, RQ regime shift).
severity
info / medium / high / critical. Maps to recommended action: continue / verify / hold / abort.
trigger_t_h
Earliest time within the run when the rule's condition was met.
hours_before_harvest
duration_h − trigger_t_h. The killer metric — how many hours of fermentation you would have saved if you'd caught this issue at trigger time.
message · rationale
One-line description of what happened, plus a one-sentence biological / process explanation.
Lens 8 — Decision card
regime
nominal (continue) / watch (verify) / intervene (hold or abort). Set by alert severity + driver count.
recommended_action
continue / verify / hold / abort. The single operator-facing call.
rationale
One-sentence explanation of why this regime / action.
drivers
Ranked list of {factor, evidence, source_lens} explaining what fed the regime classification. Pulls from Lens 2 (top factors), Lens 4 (biomass adj OR), Lens 5 (watchlist).
confidence
0..1 score. Blends average stability of top-5 watchlist features with a sample-size factor (saturates at 5000 historical batches).
earliest_alert
The earliest-firing alert from Lens 7, surfaced for quick reference.
Lens 9 — Process graph
inputs[]
Env feeders for the focused run — yeast lot, shift, vessel, water hardness, ambient humidity. Pulled from ferm_full_v (the join of FERM_RUNS with FERM_ENV).
phases[]
Lag → Exponential → Stationary → Harvest, each with t_start_h / t_end_h from the same phase model Lens 6 uses. status is one of pending, active, done, alert.
phases[].alerts
Lens 7 detector hits attached to whichever phase the trigger time falls in. The numeric badge on the SVG node shows how many alerts landed in that phase; hover the node for full text.
active_phase_key
Phase containing the latest live timepoint in the trace. The active node pulses; in production this would come from FERM_RUN_STATE instead of being derived.
outcome
Right-most node — "Clean batch" or "Contamination". For contaminated runs the badge time is contam_onset_h.
live[]
Last value of each key trace (OUR · CER · RQ · iPH · Viability) for the side rail. Same numbers Lens 6 plots — just the most recent point.
The graph and Lens 6 share the same phase model and the same alert list. The graph is the at-a-glance "where am I in the lifecycle?" view; Lens 6 is the actual trajectory of the metrics inside each phase.
Lens 6 — Run dynamics
traces[param_code]
Time-series for one parameter as a list of {t_h, value} points. Sample interval defaults to 60 minutes; clamped to [10, 240].
phases.lag_end_h · exp_end_h · duration_h
Phase boundaries set deterministically from the run's kinetic state — lag occupies ~30% of time-to-peak, exponential ends at time_to_peak_h, stationary fills the remainder. Marked as dashed rules on the chart.
phases.contam_onset_h
For contaminated runs only — when the contamination event becomes detectable in the traces. Drawn as a red rule on the chart. Sampled per-run from (0.5·exp_end, exp_end + 0.6·(duration − exp_end)) so it can fall inside late-exponential or anywhere in stationary.
interpretations[param_code]
End-of-run plain-language status from the agent's interpret() method (e.g. "OUR 38 mmol/L/h — within healthy aerobic range").
Lens 5 — Watchlist (elastic-net + stability)
label
Human-readable name of the candidate predictor (e.g. "intracellular pH", "vessel age (runs)").
kind
binary for one-hot categorical, continuous for z-score-standardized.
simulated_by
SensorAgent class that produced the value, or null if it comes from a real DB column.
stability
Fraction of bootstrap resamples where the elastic-net assigned this predictor a non-zero coefficient (0.0–1.0). The watchlist sorts by stability × |effect|; rows with stability < 50% render dimmed.
effect
Mean coefficient (in log-odds units) over the resamples where it was selected. For continuous: per +1 SD; for binary: relative to its reference category.
direction
raises (effect > 0) or lowers (effect < 0) the contamination odds.

Math & algorithms

Plain groupby arithmetic for the point estimates, proper proportion statistics for the uncertainty. Every number is reproducible directly from SQL plus a handful of named Python functions — no black-box ML.

Baseline rate
baseline.rate = ( count(rows where outcome_matches) / count(rows) ) × 100
# outcome_matches = contamination='Y'  OR  yield_category='High'
Factor rate and lift
S_p       = { rows where predicate p(row) is true }
rate_p    = ( count(rows in S_p where outcome_matches) / |S_p| ) × 100
lift_pp_p = rate_p − baseline.rate

# Factors ranked by |lift_pp| descending; top 6 returned.
95% confidence intervals — Wilson score
z = 1.96   # 95% two-sided
phat    = succ / n
denom   = 1 + z² / n
center  = ( phat + z² / (2n) ) / denom
halfwdt = z × √( phat(1−phat)/n + z²/(4n²) ) / denom
CI      = [ (center − halfwdt) × 100 ,  (center + halfwdt) × 100 ]

# Wilson (not the normal-approximation CI) — it's well-behaved at
# rates near 0% or 100% and on small n, where the normal CI can
# produce impossible values like a "−3%" lower bound.
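The interval above is a few lines of stdlib Python. A minimal sketch (the function name is hypothetical, not the codebase's):

```python
from math import sqrt

def wilson_ci(succ: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval on a proportion, returned in percent."""
    if n == 0:
        return (0.0, 100.0)
    phat = succ / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    halfwidth = z * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return ((center - halfwidth) * 100, (center + halfwidth) * 100)
```

Note the behavior at the edge: `wilson_ci(0, 20)` gives a lower bound of exactly 0%, never a negative number, which is the whole reason for preferring Wilson here.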
Per-factor significance — chi-square test
# 2x2 contingency for each factor:
                outcome=Y    outcome=N
  predicate=T      a            b
  predicate=F      c            d

# Yates-corrected χ² statistic, df=1:
χ² = Σ (|obs − exp| − 0.5)² / exp      # over the 4 cells
p  = 1 − erf( √(χ²/2) )                # = P(|Z| > √χ²); math.erf, stdlib only
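A stdlib-only version of the test, matching the formula above with the usual guard that the Yates correction never overshoots past zero (the function name is hypothetical):

```python
from math import erf, sqrt

def yates_chi2_p(a: int, b: int, c: int, d: int) -> tuple[float, float]:
    """Yates-corrected chi-square (df=1) on a 2x2 table; returns (chi2, p)."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    if 0 in (row1, row2, col1, col2):
        return (0.0, 1.0)                 # degenerate margin: no test possible
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n
        chi2 += max(abs(obs - exp) - 0.5, 0.0) ** 2 / exp
    p = 1.0 - erf(sqrt(chi2 / 2.0))       # df=1 tail: P(|Z| > sqrt(chi2))
    return (chi2, p)
```

For a strongly associated table like (40, 10, 10, 40) the statistic is 33.64 and p is vanishingly small; a perfectly balanced table returns p = 1.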
Multiple-testing correction — Benjamini-Hochberg FDR

We test ~10 factors and 7 findings simultaneously; raw chi-square p-values would produce several false positives by chance. BH step-up controls the expected false discovery rate at α = 0.05 — the p_adjusted feeding the magnitude tag and the factor-significance pill is FDR-corrected, not raw.
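BH step-up is short enough to show in full (function name hypothetical): sort the raw p-values, scale the i-th smallest by m/i, then enforce monotonicity from the largest down.

```python
def bh_adjust(pvals: list[float]) -> list[float]:
    """Benjamini-Hochberg adjusted p-values, returned in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p down, taking the running min of p * (m / rank)
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

For example, `bh_adjust([0.01, 0.04, 0.03, 0.005])` yields `[0.02, 0.04, 0.04, 0.02]` — the two borderline raw values get pulled up once the family size is accounted for.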

Adjusted odds ratios — logistic regression

The marginal lifts above are confounded: a +16pp lift for shift = NIGHT may leak in effects from humidity, lot, and crew that happen to correlate with night shift. To separate them we fit a single logistic regression on the full row set:

logit( P(outcome=1) ) = β₀ + Σ βⱼ · xⱼ

# xⱼ ∈ {shift_NIGHT, shift_DAY, yeast_bad_lot, humidity_high,
#       humidity_low, hardness_high, temp_high, ph_low}
# neighbor_contaminated and crew=night-foxtrot are excluded — they're
# outcome proxies in the data generator (set only when contam=Y) and
# would cause quasi-separation in the fit. They still show as Lens 2
# marginal rates, just without an adjusted-OR sub-line.

adj_OR(xⱼ)    = exp( βⱼ )                      # effect holding others constant
adj_OR_CI(xⱼ) = exp( βⱼ ± 1.96 · SE(βⱼ) )      # 95% Wald CI
adj_p(xⱼ)     = 2 · P( |Z| > |βⱼ / SE(βⱼ)| )   # Wald test

Fitting uses Newton-Raphson IRLS (converges in ~10 iterations on well-behaved data). Standard errors come from the diagonal of the inverse observed Fisher information — the textbook frequentist asymptotic. Zero-variance predictors are dropped to avoid singularity.

Why we exclude outcome proxies

Some fields are assigned as a consequence of contamination rather than as a cause of it. In this dataset, crew = night-foxtrot and neighbor_contaminated = 'Y' are set only when contamination = 'Y' is already decided — that's the live-injector's bookkeeping model for "a contaminated batch flags itself." These fields carry 100% correlation with the outcome by construction, which makes them outcome proxies, not predictors.

Including outcome proxies in a regression causes quasi-separation: the coefficient wants to be ±∞, the Hessian becomes singular, IRLS diverges, and the whole fit returns no results. So they're excluded from the predictor set. They still appear in Lens 2 as descriptive marginal rates (a crew with 100% contamination is meaningful information) — just without an adjusted-OR sub-line.

# Rule: a field belongs in the regression only if it's measurable
# BEFORE the outcome is known. Otherwise it's a post-hoc label,
# not a predictor.

included:  shift, yeast_lot, humidity, water hardness, temp, pH
excluded:  neighbor_contaminated, crew = night-foxtrot
           (both set as a function of contamination)
L2 ridge — why the fit always converges now

Real operational data frequently has predictors that are nearly collinear with the outcome on a subset (even after outcome proxies are excluded — e.g., shift=NIGHT + yeast=B44 + humidity≥75% might have a 98% contamination rate, which pushes the Hessian close to singular). To keep the fit stable, we add a small L2 penalty on the non-intercept coefficients:

β̂ = argmax  log L(β)  −  (λ/2) · β₋₀ᵀ · β₋₀       # ridge log-likelihood

# IRLS update with ridge:
β ← β + ( XᵀWX + λQ )⁻¹ · ( Xᵀ(y−p) − λQβ )

#   Q = diag(0, 1, 1, ..., 1)  — intercept not penalized
#   λ = 1e-3 · n               — scales with sample size
#   eta clipped to [-30, 30]   — keeps sigmoid numerically stable

The penalty is very light — on well-behaved data it leaves MLE point estimates essentially unchanged (peak_od adj OR shifted from 0.57 to 0.59 in the ground-truth synthetic test). Its only job is to prevent the coefficient from running to infinity when the data is pathological; then you get a finite estimate with an honestly wide confidence interval, rather than a silently missing row. The intercept is not penalized, so the baseline rate stays unbiased.
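The ridge-IRLS update above fits in a screen of numpy. A minimal sketch under stated assumptions — names are hypothetical, and real code would add a convergence check and drop zero-variance columns first:

```python
import numpy as np

def fit_logistic_ridge(X, y, lam_scale=1e-3, iters=25):
    """Ridge-penalized logistic regression via IRLS. X excludes the intercept."""
    n, p = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])           # prepend intercept column
    beta = np.zeros(p + 1)
    lam = lam_scale * n                            # penalty scales with n
    Q = np.eye(p + 1)
    Q[0, 0] = 0.0                                  # intercept not penalized
    for _ in range(iters):
        eta = np.clip(Xd @ beta, -30, 30)          # numeric safety on the logit
        prob = 1.0 / (1.0 + np.exp(-eta))
        W = prob * (1.0 - prob)
        H = Xd.T @ (Xd * W[:, None]) + lam * Q     # ridge-augmented Hessian
        g = Xd.T @ (y - prob) - lam * Q @ beta     # penalized score
        beta = beta + np.linalg.solve(H, g)
    se = np.sqrt(np.diag(np.linalg.inv(H)))        # Wald SEs, final Hessian
    return beta, se
```

On simulated well-behaved data the estimates land on the true coefficients to within sampling noise, and `exp(beta)` gives the adjusted ORs directly.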

When a row shows a strong marginal lift but adj OR ≈ 1.0 (n.s.), the effect is confounded — it was a proxy for a correlated predictor. That's the row flagged in amber. The real story is the factors whose adjusted OR stays large after controlling for everything else.
Fit-failed fallback

Despite the ridge, a fit can still fail if n is too small (≤ p + 1), numpy isn't available, or the data is truly pathological. In that case _build_adjusted_ors returns empty ORs but keeps the population mean / SD / n metadata for the continuous predictors. Lens 4 then renders as a descriptive panel — three rows with mean/SD/n and a "fit did not converge" badge — instead of silently hiding. An operator always sees that biomass is being tracked; the adjusted effect is just labeled as unavailable.

Biomass kinetics — continuous predictors (Lens 4)

Three continuous measurements from the tank enter the same logistic regression alongside the categorical factors above. Each is standardized to a z-score before fitting so the coefficient reads as "OR per +1 SD change":

# Continuous predictors, standardized in the design matrix:
peak_od         # max OD600 reached during the run
time_to_peak_h  # hours from inoculation to peak
mu_h1           # specific growth rate (1/h) in exponential phase

# For each continuous predictor xc:
zc            = (xc − mean(xc)) / sd(xc)
adj_OR_per_SD = exp( β_c )           # odds multiplier for each +1 SD
adj_OR_CI     = exp( β_c ± 1.96·SE )

Interpretation in the UI translates the OR back into a plain-language odds change. For example, if the adjusted OR is 0.72 on peak OD with SD ≈ 7, the line reads "each +1 SD (≈7 OD) → ~28% lower contamination odds". Because these are fit in the same model as shift/lot/humidity, the biomass effect is already adjusted for those covariates.

Continuous predictors need complete values. Rows with missing peak_od, time_to_peak_h, or mu_h1 are dropped from the biomass fit. The n shown in Lens 4 is the usable count after that filter.
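The plain-language translation is just arithmetic. A tiny hypothetical helper in the spirit of the UI line quoted above:

```python
def or_to_pct_odds_change(adj_or: float) -> str:
    """Render an adjusted OR as a percent change in odds per +1 SD."""
    pct = (adj_or - 1.0) * 100.0
    direction = "lower" if pct < 0 else "higher"
    return f"each +1 SD -> ~{abs(round(pct))}% {direction} contamination odds"
```

`or_to_pct_odds_change(0.72)` reads "each +1 SD -> ~28% lower contamination odds".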
Watchlist — elastic-net + stability selection (Lens 5)

Lens 2 and Lens 4 cover the dominant predictors — the handful of factors a domain expert would have flagged anyway. Lens 5 takes the wider feature surface (currently-tracked + agent-simulated, ~28 columns) and runs elastic-net penalized logistic regression with bootstrap stability selection: which predictors keep getting picked across many resamples?

# Penalized log-likelihood (elastic net = L1 + L2):
β̂ = argmin  −(1/n)·logL(β)
            + λ·α·||β₋₀||₁                # L1 — drives weak features to exactly zero
            + λ·(1−α)/2·||β₋₀||²            # L2 — keeps surviving features stable under collinearity

# Solved with FISTA (Beck & Teboulle 2009): proximal-gradient with momentum,
# pure-numpy. ~50 ms per fit at n=5000, p=30.

# Stability selection (Meinshausen & Bühlmann 2010):
for b = 1..40:
    sub_b = subsample 70% of rows with replacement
    β_b   = fit_elastic_net(X[sub_b], y[sub_b], α=0.5, λ=0.02)
    selected_b = { j : |β_b[j]| > 0 }

stability(j) = (1/40) · Σ_b 1[j ∈ selected_b]
score(j)     = stability(j) · |mean β_j across resamples where selected|

Each predictor in the panel shows: stability (selection frequency, as a bar), standardized log-odds effect, direction (↑ raises contamination odds / ↓ lowers them), and whether it comes from a real DB column or a sensor agent. Rows with stability < 50% render dimmed — they're real signals but the regression isn't confident enough to act on them yet.

A note on interpretation: not every weak signal is a "predictor". Some agent-simulated columns (OUR, CER, viability, intracellular pH) are state estimators — they covary with contamination because they're physically downstream of it. Those are useful for in-flight diagnosis ("the batch is going wrong, abort?") but they're not setup-time predictors ("is this configuration likely to fail?"). Real setup-time predictors in this catalog include vessel age, days since pH cal, SIP/CIP cycle length, raw-material lot attributes — variables locked in before the run starts. Future iterations may split Lens 5 into those two cohorts.
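The selection loop above can be sketched end to end. This is a compact ISTA-style stand-in for the FISTA solver (no momentum, so a few more iterations; same penalty and same prox step) — all names are hypothetical:

```python
import numpy as np

def fit_enet_logistic(X, y, alpha=0.5, lam=0.02, iters=300):
    """Elastic-net logistic regression via proximal gradient (ISTA-style)."""
    n, p = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])
    step = 2.0 * n / (np.linalg.norm(Xd, 2) ** 2)   # half the 1/L bound
    beta = np.zeros(p + 1)
    l1, l2 = lam * alpha, lam * (1.0 - alpha)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-np.clip(Xd @ beta, -30, 30)))
        grad = Xd.T @ (prob - y) / n                # smooth part: NLL gradient
        grad[1:] += l2 * beta[1:]                   # ... plus the L2 term
        beta = beta - step * grad
        # proximal (soft-threshold) step for L1; intercept never thresholded
        beta[1:] = np.sign(beta[1:]) * np.maximum(np.abs(beta[1:]) - step * l1, 0.0)
    return beta

def stability_selection(X, y, n_boot=40, frac=0.7, seed=0):
    """Selection frequency and stability-weighted effect per predictor."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    hits, effects = np.zeros(p), np.zeros(p)
    for _ in range(n_boot):
        idx = rng.choice(n, size=int(frac * n), replace=True)
        beta = fit_enet_logistic(X[idx], y[idx])[1:]
        sel = np.abs(beta) > 0
        hits += sel
        effects += np.where(sel, beta, 0.0)
    stability = hits / n_boot
    mean_effect = effects / np.maximum(hits, 1)
    return stability, stability * np.abs(mean_effect)
```

On synthetic data with two informative columns and three noise columns, the informative pair comes back with stability near 1.0 and dominates the score ranking — the behavior the panel's sort relies on.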
Stratified baselines — context, not global average

Showing one number to all batches misleads operators the way comparing your blood pressure to a global mean misleads patients. The right baseline for a specific batch is the rate observed in batches with the same dominant configuration. We compute that on the fly per request:

# Strata: cross-product of dominant categorical predicates
key(row) = shift                                                 # DAY / SWING / NIGHT
        ⊕ (yeast_lot in {B44,B47} ? 'B44/B47' : 'OTHER')
        ⊕ humidity_band                                           # HUMID ≥75 / DRY ≤55 / NORMAL
        ⊕ (water_hardness ≥ 180 ? 'HARD' : 'NORMAL')

# 36 cells; 57K rows ⇒ ~1500 rows/cell on average
strata[key] = { n, succ, rate, ci_low, ci_high }      # Wilson CI as everywhere else

# Guardrail: cells with n < 200 fall back to population baseline.
# Reported as fallback=true so the UI can be honest about the punt.
contextual = strata[key(latest_batch)] if n ≥ 200 else POPULATION
delta_pp   = contextual.rate − population.rate

For a specific NIGHT + B44/B47 + HUMID + HARD batch the population baseline of 47% may understate the real expected rate (closer to 80% in practice). The contextual block in Lens 1 shows both numbers side by side, plus the delta as an explicit "+34pp vs population" tag. Operators read the right number for the configuration they're actually running.

In-Python compute is appropriate up to ~500K rows. Past that, data/09_baseline_strata.sql creates a materialized view FERM_BASELINE_STRATA refreshed nightly, with the same schema. The Python read path can swap from in-memory groupby to ORDS lookup with a one-line change. Baselines drift slowly — adding 100 LIVE rows to a 1500-row stratum shifts its rate by < 1pp — so daily refresh is honest.
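The per-request stratum lookup sketched above is a pure-Python groupby. A minimal version (field and function names hypothetical, thresholds from the key definition above):

```python
from collections import defaultdict

def stratum_key(row: dict) -> str:
    lot = "B44/B47" if row["yeast_lot"] in ("B44", "B47") else "OTHER"
    hum = ("HUMID" if row["humidity"] >= 75
           else "DRY" if row["humidity"] <= 55 else "NORMAL")
    hard = "HARD" if row["water_hardness"] >= 180 else "NORMAL"
    return f'{row["shift"]}|{lot}|{hum}|{hard}'

def contextual_baseline(rows, latest, population_rate, min_n=200):
    """Stratified rate for the latest batch's configuration, with fallback."""
    counts = defaultdict(lambda: [0, 0])              # key -> [n, succ]
    for r in rows:
        k = stratum_key(r)
        counts[k][0] += 1
        counts[k][1] += r["contamination"] == "Y"
    n, succ = counts[stratum_key(latest)]
    if n < min_n:                                     # guardrail: honest punt
        return {"rate": population_rate, "n": n, "fallback": True}
    return {"rate": succ / n * 100, "n": n, "fallback": False}
```

With 300 rows in the latest batch's stratum the contextual rate is reported directly; with fewer than 200 the population baseline comes back tagged fallback=true.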
Attention routing — across runs, not within one

Lens 1–8 answer "how is THIS run?". The attention router answers "across all runs in flight, which one needs my eye next?" — the operator-cockpit question for facilities running multiple vessels in parallel. It's a pure read against the hot tier; no analysis is recomputed unless a run's state cache is empty or stale.

priority(run) =
    100 if regime == "intervene"
  +  30 if regime == "watch"
  +  drift_score × 50
  +  n_active_alerts × 10
  + (50 if earliest_alert_severity == "critical" else
     20 if earliest_alert_severity == "high" else 0)
  +  vessel_load × 20
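The scoring rule above as code (function name and state-dict field names are hypothetical):

```python
def priority(state: dict) -> float:
    """Attention-router urgency score for one run's cached state."""
    regime_base = {"intervene": 100, "watch": 30}.get(state["regime"], 0)
    severity_bonus = {"critical": 50, "high": 20}.get(
        state.get("earliest_alert_severity"), 0)
    return (regime_base
            + state["drift_score"] * 50
            + state["n_active_alerts"] * 10
            + severity_bonus
            + state["vessel_load"] * 20)
```

A watch-regime run with drift 0.4, two active alerts, a high-severity earliest alert, and vessel load 0.5 scores 30 + 20 + 20 + 20 + 10 = 100 — the same as a freshly flagged intervene run with nothing else going on, which is the kind of tie the ranking is meant to surface.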

Click any row in the attention router card and the dashboard's per-run panels (Lens 6 trace, Lens 7 alerts, Lens 8 decision + drift) reload focused on that run. Pure UI affordance — same data, different lens.

Confirm-by automation — running the check for the operator

The differential's confirm_by instructions tell the operator what to look at. Some of those checks are automatable — they're queries against data already in the system. The "▶ Run automated check" button on each differential candidate fires the matching probe and returns a verdict inline:

CONFIRM_CHECKS registry — 4 of 7 hypotheses currently automated:

  NIGHT-shift event       → does OUR drop align with an 8h boundary (±2h)?
  Yeast lot lag failure   → did pHi cross 6.7 BEFORE viability dropped < 80%?
  Sterilization breach    → was SIP cycle < 45 min?
  Aeration-transfer limit → did min DO drop < 20%?

Each check returns a verdict in {supports, refutes, weakly_refutes, inconclusive}
plus a one-sentence evidence string and structured details for audit.
The remaining 3 hypotheses (substrate uncoupling, equipment fatigue, raw-material trace deficiency) need data that isn't on the row yet (cross-batch lookups, supplier COA fields). The button is hidden for those — operators see the manual instructions only. As more data sources come online, more checks can be automated by adding entries to the registry.
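A sketch of how such a registry can dispatch. Only the SIP probe is fleshed out here, and its body is an illustrative stand-in — the real probes, field names, and registry keys live in the codebase:

```python
def check_sterilization_breach(row: dict) -> dict:
    """Probe: was the SIP cycle shorter than the 45 min threshold?"""
    sip = row.get("sip_duration_min")
    if sip is None:
        return {"result": "inconclusive", "evidence": "no SIP record on row"}
    if sip < 45:
        return {"result": "supports",
                "evidence": f"SIP cycle ran {sip} min (< 45 min threshold)"}
    return {"result": "refutes",
            "evidence": f"SIP cycle ran {sip} min (>= 45 min threshold)"}

CONFIRM_CHECKS = {
    "Sterilization breach": check_sterilization_breach,
    # "NIGHT-shift event": ..., "Yeast lot lag failure": ..., etc.
}

def run_confirm_check(hypothesis: str, row: dict) -> dict:
    """Dispatch to the automated probe, or report not_automated."""
    probe = CONFIRM_CHECKS.get(hypothesis)
    if probe is None:
        return {"result": "not_automated", "evidence": ""}
    return probe(row)
```

Hypotheses without a registry entry fall through to not_automated, which is what hides the button in the UI.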
Trace shape features — what the trace's CHANGE looks like

Phase-summary statistics (means, mins, maxes) compress 168 h of trace into a handful of static numbers but lose dynamic information. Slopes and curvatures capture how the trajectory is changing — the difference between "OUR is at 12 mmol/L/h, holding" and "OUR is at 12, dropping at 1.5/h with positive curvature (acceleration)". The watchlist's elastic-net now sees these alongside the existing predictors:

# Late-window slopes (last 12-24h):
our_slope_late_per_h        # mmol/L/h² — first derivative of OUR
pHi_slope_late_per_h        # pH units / h
viab_slope_late_per_h       # % / h

# Late-window curvature (last 24h, fits y = a·t² + b·t + c):
our_curvature_late          # 2a — accelerating decline detector

# Cross-trace and shape derivatives:
cer_to_our_late             # late-window RQ (CER/OUR)
vcd_plateau_frac            # t at which VCD reaches 95% of peak / total duration

Each is computed once per row from the trace already generated for the watchlist's per-row simulation step, so the marginal cost is small. Stability selection then surfaces whichever ones turn out to be stable predictors across resamples.
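The slope and curvature fits above are ordinary least-squares polynomials over the late window. A numpy sketch for the OUR features (names hypothetical):

```python
import numpy as np

def late_shape_features(t_h, our):
    """Slope (last 12 h) and curvature (last 24 h) of an OUR trace."""
    t = np.asarray(t_h, dtype=float)
    y = np.asarray(our, dtype=float)
    m12, m24 = t >= t[-1] - 12, t >= t[-1] - 24
    slope = np.polyfit(t[m12], y[m12], 1)[0]       # mmol/L/h^2
    a = np.polyfit(t[m24], y[m24], 2)[0]           # quadratic coefficient
    return {"our_slope_late_per_h": slope, "our_curvature_late": 2 * a}
```

On an exact quadratic trace y = 0.1·t² the fit recovers curvature 2a = 0.2 and the late-window slope equals the derivative at the window midpoint.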

Vessel allostatic load — chronic stress accumulator

Body analogue: chronic stress accumulates measurably (cortisol exposure, AGE buildup) even when every individual test is normal. A vessel is the same — every SIP cycle, every contamination event, every high-antifoam batch deposits a little wear. Eventually a healthy-looking vessel is operating in a quietly compromised state.

# For one vessel, summarized over the last `window` runs:
runs_total           = total runs through this vessel (lifetime)
contam_pct_recent    = contam rate in last `window` runs (default 50)
trend                = compare first-half-of-window vs second-half:
                         rising  if recent_pct > prior_pct + 5
                         falling if recent_pct < prior_pct − 5
                         steady  otherwise

# Composite 0..1 score, weighted blend:
age_term    = min(1.0, runs_total / 500)
contam_term = min(1.0, contam_pct_recent / 80)
trend_term  = { rising: 1.0, steady: 0.5, falling: 0.2 }
load_score  = 0.40 · age_term + 0.40 · contam_term + 0.20 · trend_term

# Maintenance recommendation tier:
load ≥ 0.65 → "Schedule preventive maintenance" (severity high)
load ≥ 0.40 → "Monitor closely; inspect within 50 runs" (medium)
otherwise   → "Within nominal envelope" (low)
The score is a heuristic, not a calibrated lifetime model. It surfaces vessels that have been quietly wearing across multiple batches in a way that any single run wouldn't reveal — which is exactly what the human-body metaphor implies the system should do.
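The blend and tiers above, as code (function name hypothetical):

```python
def vessel_load(runs_total, contam_pct_recent, trend):
    """Composite 0..1 allostatic-load score plus recommendation tier."""
    age_term = min(1.0, runs_total / 500)
    contam_term = min(1.0, contam_pct_recent / 80)
    trend_term = {"rising": 1.0, "steady": 0.5, "falling": 0.2}[trend]
    score = 0.40 * age_term + 0.40 * contam_term + 0.20 * trend_term
    if score >= 0.65:
        return score, "high", "Schedule preventive maintenance"
    if score >= 0.40:
        return score, "medium", "Monitor closely; inspect within 50 runs"
    return score, "low", "Within nominal envelope"
```

A 500-run vessel at 80% recent contamination with a rising trend maxes out at 1.0 (high); a 100-run vessel at 10% and steady scores 0.23 (low).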
Closing the loop — outcome labels (Tier 11)

Differential diagnosis stars are heuristic ranks today. To turn them into calibrated probabilities, the system needs ground truth: when an event resolved, what was the actual root cause? The label form on each differential candidate captures that:

POST /api/label_outcome  →  ferm_outcome_label (PK: run_id, MERGE upsert)
  { run_id, confirmed_cause, confidence, operator_notes, labeled_by }

                  ┌──────────────────────────────────────────────┐
                  │ Future closing-the-loop pipeline:            │
                  │  ferm_outcome_label  ←  ground truth labels  │
                  │           ↓                                  │
                  │  Bayesian update of hypothesis priors:       │
                  │    P(cause | evidence) =                     │
                  │      P(evidence | cause) · P(cause) / Σ      │
                  │           ↓                                  │
                  │  Stars → calibrated probabilities            │
                  └──────────────────────────────────────────────┘

Today the labels accumulate without back-feeding the scorer. At ~50 confirmed cases per common cause, the next PR can swap heuristic ★ ratings for posterior probabilities derived from this dataset. The schema is ready; the data collection starts now.
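The Bayesian step in the diagram is ordinary posterior arithmetic once labels exist. A sketch under assumptions: priors come from `ferm_outcome_label` frequencies, and the per-cause likelihood table here is hypothetical — in production P(evidence | cause) would itself be estimated from labeled cases:

```python
def posterior(prior_counts, likelihood, evidence):
    """P(cause | evidence) ∝ P(evidence | cause) · P(cause).

    prior_counts : {cause: confirmed-label count from ferm_outcome_label}
    likelihood   : {cause: {evidence_key: P(evidence_key | cause)}}  (hypothetical)
    evidence     : list of observed evidence keys
    """
    total = sum(prior_counts.values())
    scores = {}
    for cause, n in prior_counts.items():
        p = n / total                               # prior from label frequencies
        for e in evidence:
            p *= likelihood[cause].get(e, 0.05)     # small default for unseen evidence
        scores[cause] = p
    z = sum(scores.values())                        # the Σ normalizer in the diagram
    return {c: s / z for c, s in scores.items()}
```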

Compare two batches (Tier 12)

The "↔ Compare with another batch" button on the decision card opens a modal that fetches /api/run_state for both run_ids in parallel and renders them side by side. Differences (regime, drift, action, contextual baseline, top differential, alert count) are highlighted with a violet outline. No new endpoint — just a new UI surface over the hot-tier read path. Useful for the "this batch went wrong; what was different about the last clean one?" triage.

Lambda tuning (elastic-net, Lens 5)

The earlier elastic-net used a fixed λ = 0.02. That worked but wasn't auditable: nothing in the response said why 0.02 vs 0.05. Tuning fixes this — a coarse log-spaced grid {0.005, 0.01, 0.02, 0.05, 0.10, 0.20}, fit each, count nonzero coefficients, pick the λ that yields ≈ 10 selected predictors. The chosen value flows back through the response in method.lambda. This is the Meinshausen-Bühlmann "select λ for desired sparsity" recipe — more transparent than holdout CV because operators can read it as "I want this many predictors surfaced and the algorithm finds the regularization strength that delivers that."
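The grid search itself is a few lines. A sketch, assuming `fit(lam)` returns the fitted coefficient vector at that λ (the function name and tie-break rule are illustrative):

```python
def pick_lambda(fit, grid=(0.005, 0.01, 0.02, 0.05, 0.10, 0.20), target=10):
    """Sparsity targeting: fit at each λ, count nonzero coefficients,
    keep the λ whose count is closest to `target` (smaller λ wins ties)."""
    def nonzero(lam):
        return sum(1 for c in fit(lam) if abs(c) > 1e-9)
    return min(grid, key=lambda lam: (abs(nonzero(lam) - target), lam))
```

The chosen value is what flows back as method.lambda, so the response explains its own regularization strength.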

Differential diagnosis — triage between hypotheses

Doctors don't say "you have a disease" — they propose a ranked list of candidates and tell you what test would distinguish them. Same idea here. When Lens 8 says "watch" or "intervene", the rationale used to be one sentence; now it's accompanied by 2–3 hypothesis cards the operator can triage between.

# Each hypothesis is a small, transparent scorer:
def hypothesis(row, trace_phases, alerts, drivers):
    score = 0.0; evidence = []
    if condition_1: score += w1; evidence.append("...")
    if condition_2: score += w2; evidence.append("...")
    return { score, name, evidence, confirm_by, distinguishes_from, if_confirmed }

# Run all hypotheses; threshold; rank; tier the top 3.
candidates = [h for h in HYPOTHESES if h.score ≥ 0.20]
candidates.sort(by score, descending)
candidates[0].tier = "most_likely"        ★★★
candidates[1].tier = "alternative"        ★★
candidates[2].tier = "lower_probability"  ★

# Library of 7 hypotheses (each ~10 lines):
NIGHT-shift contamination event    # shift + alert alignment
yeast lot lag-phase failure        # bad lot + low peak/μ + viability
sterilization breach (SIP/CIP)     # RQ shift + utility stress
oxygen-transfer-limited (kLa)      # low DO + RQ shift
substrate uncoupling               # RQ regime change without contam
equipment fatigue                  # multiple alerts in one vessel
trace deficiency (biotin/Fe)       # slow growth without viability drop

Scores are not calibrated probabilities — that would require a labeled outcome dataset (confirmed root cause per past contamination event). We don't have one. They're heuristic rankings: which candidate has the most supporting evidence right now. The operator interprets the rank with judgment; the system surfaces what to look at first.

When a contamination event later resolves, the operator records the confirmed cause. Over time that builds the labeled dataset that would let stars become real probabilities. Until then, the framework provides structure — same value as a doctor's differential list — without overclaiming precision.
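The threshold-rank-tier step from the recipe above, in runnable form (input shape is illustrative; each dict would come from one of the seven scorers):

```python
def rank_hypotheses(scored, threshold=0.20):
    """Keep candidates at or above threshold, rank by score, tier the top 3."""
    tiers = ["most_likely", "alternative", "lower_probability"]
    stars = ["★★★", "★★", "★"]
    kept = sorted((h for h in scored if h["score"] >= threshold),
                  key=lambda h: h["score"], reverse=True)[:3]
    return [dict(h, tier=tiers[i], stars=stars[i]) for i, h in enumerate(kept)]
```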
Hot-tier inversion — current state as the brain

Traditionally the data lake is the brain and queries are nerves: every read recomputes from raw rows. The body works the other way around — your liver doesn't get queried, it knows. FERM_RUN_STATE mirrors that. Per-run intelligence (regime, alerts, drift, narrative) lives in a small hot-tier table that the dashboard reads in O(1). The data lake (FERM_FULL_V, snapshots, audit) is the historian — consulted for population baselines and trend analysis, not for "how is run X right now."

# Read path:                                          (operator-facing)
#   GET /api/run_state?run_id=X                                          
#   ↓                                                                    
#   row = SELECT * FROM ferm_run_state_v WHERE run_id = :X
#   if row exists and row.age_seconds < max_age:
#        return row                                (HIT, < 50 ms)
#   else:
#        fresh = _compute_run_state(X)              (MISS, ~3 s)        
#        POST /ords/.../ferm_state_upsert/  (MERGE)                     
#        return fresh                                                    

# Persistence layer (single MERGE-based upsert proc):                     
PROCEDURE ferm_upsert_run_state(p_run_id, p_regime, ..., p_drift_score, ...)
    MERGE INTO ferm_run_state ON (run_id) WHEN MATCHED UPDATE / NOT MATCHED INSERT

Refresh model is eventual consistency with operator-controlled staleness. Default TTL is 5 minutes — appropriate when the live injector adds rows every 30 seconds but operators don't make sub-minute decisions. The freshness is exposed in the UI ("cache hit · 90 s old" or "fresh compute · persisted") so operators always know how stale the answer they're looking at is. Optional pre-warming via a DBMS_SCHEDULER job (commented in data/08_run_state.sql) keeps the latest LIVE_ row hot even when nobody's watching.

Why this matters: every previous lens was computed at request time on the full row set, with response latency proportional to row count. The hot tier decouples those — first read after new data is the slow one, every subsequent read for the same run is fast and cheap. As the dataset grows from tens of thousands to millions of batches, only the recompute step gets slower; the operator-facing read path doesn't.
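The read path above is a standard read-through cache with a TTL. A minimal sketch, assuming a dict stands in for ferm_run_state and `compute` for _compute_run_state:

```python
import time

def read_run_state(run_id, cache, compute, max_age_s=300):
    """Serve the cached row while fresh; recompute and persist when stale."""
    row = cache.get(run_id)
    if row and time.time() - row["updated_at"] < max_age_s:
        return row, "hit"                  # fast path: O(1) read
    fresh = compute(run_id)                # slow path: full recompute
    fresh["updated_at"] = time.time()
    cache[run_id] = fresh                  # MERGE-style upsert
    return fresh, "miss"
```

The second read within the TTL never touches `compute`, which is the whole point of the inversion.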
Drift score — pre-symptomatic monitoring

Lens 7 fires when a rule is tripped (fever already broken). The drift score is the rising cortisol BEFORE the fever — a continuous 0–1 metric that climbs smoothly as the run departs from a healthy reference trajectory:

# For each tracked agent (OUR, CER, VCD, pHi, viability):
healthy_trace = simulate_run_trace(state.with(contamination=False))
diff[t]       = (actual_trace[t] − healthy_trace[t]) / agent_noise_scale
rms_agent     = sqrt(mean(diff[t]² over the run))

drift_score   = 1 − exp(−mean(rms_agent across agents) / 1.5)
                # 0    = exactly on healthy trajectory
                # 0.30 = mild deviation, watch
                # 0.60 = significant deviation, often crossed before any rule trips
                # 0.85 = severe (contamination in full effect)

drift_trend   = compare RMS over last 24h vs prior 24h:
                  recent / prior > 1.30 → "rising"
                  recent / prior < 0.70 → "falling"
                  otherwise              → "steady"
In simulation the drift score for clean runs is exactly 0 because actual and reference share the same RNG seed — a property of the deterministic simulator, not of the formula. In real operations with independent noise across instruments, clean runs show a small nonzero baseline (typically 0.05–0.10).
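The score formula above, directly in code (trace shapes are illustrative: each agent maps to a list of samples at the same timepoints):

```python
import math

def drift_score(actual, healthy, noise_scale, k=1.5):
    """RMS deviation per agent, averaged, squashed to 0..1 via 1 − exp(−x/k)."""
    rms = []
    for agent in actual:
        diffs = [(a - h) / noise_scale[agent]
                 for a, h in zip(actual[agent], healthy[agent])]
        rms.append(math.sqrt(sum(d * d for d in diffs) / len(diffs)))
    return 1 - math.exp(-(sum(rms) / len(rms)) / k)
```

An identical pair of traces scores exactly 0, matching the seeded-simulator property noted above.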
Phase-resolved features — feeding the watchlist (Lens 5 ⊃ Lens 6)

The trace orchestrator (Lens 6) produces ~7 time-series per run. extract_phase_features(trace_result) collapses each into a small set of per-batch scalars — phase-specific means, late-window minimums, terminal values, max negative slopes — and the watchlist's elastic-net consumes them alongside its other predictors:

our_mean_exp        # OUR averaged across exp phase
our_mean_stat       # OUR averaged across stationary phase
our_max_drop_per_h  # sharpest negative slope of OUR (6h window)
cer_mean_stat       # CER averaged across stationary
vcd_terminal        # viable cell density in last 12 h
pHi_min_late        # minimum intracellular pH in last 24 h
pHi_mean_late       # mean intracellular pH in last 24 h
viab_terminal       # end-of-batch viability
viab_max_drop_per_h # sharpest negative slope of viability (4h window)
rq_mean_stat        # RQ averaged across stationary

Phase features carry information that per-batch summaries lose. "Average OUR over the run" is dominated by the long stationary plateau; "OUR mean during exp phase" or "max drop per hour" are far more diagnostic of where the run went wrong. The watchlist's elastic-net surfaces whichever of these features turn out to be stable predictors.
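One representative extractor — the "max drop per hour" feature — as a sketch (signature and defaults are illustrative; the production extractor lives in extract_phase_features):

```python
def max_drop_per_h(trace, window_h, dt_h=0.5):
    """Sharpest negative slope over a sliding window, reported as a
    positive drop-per-hour. `trace` is a list of samples dt_h apart."""
    steps = max(1, int(window_h / dt_h))
    drops = [(trace[i] - trace[i + steps]) / window_h
             for i in range(len(trace) - steps)]
    return max(drops) if drops else 0.0
```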

Early-warning detectors (Lens 7)

Rule-based, transparent, fast. Each detector consumes the live trace for one run and either fires once at the earliest qualifying timepoint or stays silent. "Earliest qualifying" matters because it maximizes hours-before-harvest: the time an operator would have saved by acting at the trigger.

OUR sharp drop           severity high      # fall > 25% in 4h window when above 5 mmol/L/h
intracellular pH stress  severity high      # pHi < 6.7 sustained ≥ 3h
viability collapse       severity critical  # 90%+ → < 80% in 2h window
RQ regime shift          severity medium    # RQ outside [0.85, 1.20] for ≥ 3h, post-lag only

Thresholds in this PR are demo-realistic (would come from validated SOPs in production). Severity maps to action via the decision card: critical → abort, high → hold + verify, medium → verify, info → continue.
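The OUR sharp-drop rule, sketched as an earliest-qualifying scan (thresholds mirror the table above; the function name and return shape are illustrative):

```python
def our_sharp_drop(times_h, our, window_h=4, frac=0.25, floor=5.0):
    """Fire at the earliest timepoint where OUR has fallen more than 25%
    across a 4 h window while starting above 5 mmol/L/h; None if never."""
    for i, t in enumerate(times_h):
        # End of the comparison window: first sample >= t + window_h
        j = next((k for k, tk in enumerate(times_h) if tk >= t + window_h), None)
        if j is None:
            break
        if our[i] > floor and our[j] < our[i] * (1 - frac):
            return {"t_h": times_h[j], "severity": "high"}
    return None
```

Scanning from the start is what guarantees the earliest trigger, and hence the maximum hours-before-harvest figure.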

Decision intelligence (Lens 8)

The decision card is the synthesizer — it doesn't introduce new statistics, it reads what the other lenses already produced and combines them into one operator-facing call. Implementation is rule-based and traces back to numbers visible elsewhere on the page, so any decision is auditable line-by-line.

# Regime selection — alert-driven; drivers are informational.
# Population-level drivers describe the risk landscape but don't say
# whether THIS batch is in trouble; alerts and run state do.
if any alert.severity == "critical":
    regime = "intervene", action = "abort"
elif any alert.severity == "high":
    regime = "intervene", action = "hold"
elif any alert.severity == "medium" OR run.contam == "Y":
    regime = "watch",     action = "verify"
else:
    regime = "nominal",   action = "continue"

# Drivers selected (de-duped, capped at 6):
Lens 2 top_factors with significant FDR-adjusted lift ≥ 5pp
Lens 4 biomass adj_or_per_sd with |OR − 1| ≥ 0.20 and significant
Lens 5 watchlist with stability ≥ 0.80 and |effect| ≥ 0.30

# Confidence:
0.5 · mean(top-5 watchlist stability) + 0.5 · min(1, n_total/5000)
Operators retain decision authority. The card is an opinion synthesized from the data; it doesn't take action. Every driver shown links back to a number on the page above it.
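The regime selection and confidence blend above, combined into one runnable sketch (argument names are illustrative):

```python
def decide(alert_severities, contam, stabilities, n_total):
    """Alert-driven regime + the 50/50 stability-and-sample-size confidence."""
    if "critical" in alert_severities:
        regime, action = "intervene", "abort"
    elif "high" in alert_severities:
        regime, action = "intervene", "hold"
    elif "medium" in alert_severities or contam == "Y":
        regime, action = "watch", "verify"
    else:
        regime, action = "nominal", "continue"
    conf = 0.5 * (sum(stabilities) / len(stabilities)) + 0.5 * min(1, n_total / 5000)
    return regime, action, round(conf, 3)
```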
Run dynamics — time-series traces (Lens 6)

Per-batch summaries lose information that the actual trajectory carries — when did the deviation start, how fast did it propagate, did multiple sensors agree on the timing. Each trace-capable agent (OUR, CER, viable cell density, intracellular pH, viability, kLa) implements a produce_trace(state, rng, dt_min) method that returns a deterministic-by-run_id time-series:

phases       = compute_phases(state)         # lag_end_h, exp_end_h, contam_onset_h
for t = 0, dt_min, 2·dt_min, ..., duration_h:
    f = phase_factors(t, phases)              # biomass_x, metab_q, decay, phase
    # biomass_x ramps 0.05 → 1.0 across lag→exp; slow 5% decline in stationary
    # metab_q   ramps 0.30 → 1.0; 10% decline in stationary (substrate depletion)
    # decay     1.0 if no contamination yet, else exp(−(t − onset)/5h)
    value(t) = mechanistic_model(state, f) + sensor_noise

# Derived: RQ trace = CER[t] / OUR[t] pointwise (when OUR > 0.5)

Phase boundaries are the same for every agent on the same run — that's what lets the dashboard align the dashed rules across all the metric tabs. Contamination onset is sampled deterministically from SHA-1(run_id) ⊕ 0x1ABCDEF, so the same run yields the same onset on every request — important for audit, and means the chart doesn't visually "jump around" when re-rendered.

When real probes come online (Hamilton capacitance, off-gas mass spec, Raman), each agent's produce_trace() is replaced by a thin SCADA bridge that reads from the historian. The phase model and downstream rendering don't change.
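The deterministic-onset property is easy to see in code. A sketch: the XOR constant is the one named above, but the byte-slice and mapping to hours are illustrative, not the production derivation:

```python
import hashlib

def contamination_onset_h(run_id, duration_h=96):
    """Onset hour derived from SHA-1(run_id): same run, same onset,
    every request — no stored state needed."""
    digest = hashlib.sha1(run_id.encode()).digest()
    seed = int.from_bytes(digest[:8], "big") ^ 0x1ABCDEF
    return (seed % (duration_h * 10)) / 10   # 0.1 h resolution
```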
Compound effect
Same rate/lift/CI/χ² formulas with predicate = p1 ∧ p2 ∧ p3.
# Today only one compound is computed. Architecture supports more;
# the UI reads d.explanation.compound (singular) for now.
Magnitude classification (Lens 3 · Findings)

Each finding gets a neutral magnitude tag based on FDR-adjusted significance and absolute lift. No prior expectation is tested; no belief is confirmed or disproven. The tag exists so the eye lands on the bigger effects first when scanning the panel.

Significant under FDR:      |lift| ≥ 15pp → strong
                            |lift| ≥ 5pp  → moderate
                            otherwise     → weak
Not significant under FDR:  |lift| ≥ 2pp  → weak_unconfirmed
                            otherwise     → negligible
A 3pp lift on n=200 (noisy) and a 3pp lift on n=20,000 (rock solid) render differently — the larger sample passes FDR. The framing throughout the panel is factual ("ambient humidity above 75% co-occurs with elevated contamination") rather than evaluative ("operators were wrong about humidity").
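The tag assignment is a pure function of two inputs. A minimal sketch of the table above:

```python
def magnitude_tag(significant, lift_pp):
    """Neutral magnitude tag from FDR-adjusted significance + absolute lift."""
    lift = abs(lift_pp)
    if significant:
        if lift >= 15:
            return "strong"
        if lift >= 5:
            return "moderate"
        return "weak"
    return "weak_unconfirmed" if lift >= 2 else "negligible"
```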
Low-n gating + significance fading

Any factor or assumption with n < 150 renders dimmed with a low n badge. Rows with p_adjusted > 0.05 render dimmed with an n.s. badge. Numbers still show; only the visual weight drops, so nothing is hidden.

Snapshot polling

A DBMS_SCHEDULER job (FERM_SNAPSHOT_CAPTURE) fires every 5 min and writes one row to ferm_snapshots with the current baseline, factor rates, and compound rate. The timeline chart reads the pre-aggregated series — it's not re-computed per request.

What the LLM does (and doesn't)

The /api/narrate endpoint sends the already-computed numbers to Gemma with an instruction to restate them in plain language. The LLM never computes new numbers. It's a translator, not a model. If narration fails or times out, the UI falls back to a structured auto-summary built from the same data.

Parameter catalog

Real fermentation has hundreds-to-thousands of parameters across timescales and subsystems — in-tank sensors, ambient conditions, raw-material lots, equipment integrity, cellular metabolism, off-gas mass balance, personnel. The dashboard tracks a handful today and the rest are on a roadmap. FERM_PARAMETER_CATALOG is the explicit registry: one row per parameter with its category, units, role in the analysis (predictor / outcome / outcome-proxy), regulatory class (CPP / CQA / CMA), expected range, source location, and tracked-vs-roadmap status.

Why this matters in practice: it's the difference between "the system knows about X" and "the system happens to read X from a column somewhere." Future versions of _build_adjusted_ors will read predictor membership directly from the catalog instead of from hardcoded Python lists. For audit, the catalog answers "what does the dashboard claim to know, and how is each value being collected?" in a single SELECT.

Sensor agents — simulating the roadmap parameters

For roadmap parameters that aren't yet instrumented (off-gas mass-balance OUR/CER/RQ, capacitance-probe viable cell density, intracellular pH, kLa, equipment-age counters, raw-material lot attributes), we generate values from sensor agents instead of waiting on hardware. Each agent in api/sensor_agents.py is a small autonomous component:

class OURAgent(SensorAgent):
    name        = "OURAgent"
    param_code  = "OUR_MMOL_L_H"

    def measure(self, state, rng):
        # Physics: OUR ≈ q_O2 · X (specific O2 uptake × biomass)
        x_g_l = state.peak_od_true * 0.35
        q_o2  = 4.5 * (0.62 if state.contamination else 1.0)
        drift = max(0, state.days_since_ph_cal - 21) * 0.04
        v = q_o2 * x_g_l * (1 - drift) + rng.gauss(0, 1.5)
        return {"value": round(max(0, v), 2), "flag": "ok"}

    def interpret(self, value, state):
        # Plain-language status the agent attaches to its own reading
        if value < 8:  return f"OUR {value} — stalled metabolism"
        if value < 25: return f"OUR {value} — healthy aerobic range"
        return                f"OUR {value} — vigorous respiration"

Three properties make these "agents" rather than just functions: each owns its own state (calibration drift, fouling, span shift), each carries a mechanistic model (van't Riet for kLa, off-gas mass balance for OUR/CER, fluorescence calibration for intracellular pH), and each produces an opinion about its current reading via interpret(). When real probes come online, the agent's measure() is replaced with a thin SCADA bridge — every other layer of the pipeline (catalog, regression, UI) is unchanged. The orchestrator simulate_run(state) fires every agent in sequence, then the derived agents (RQ = CER/OUR) run on the others' outputs. The /api/simulate?n=10&seed=42 endpoint exposes this for verification — paste it in a tab to inspect the readings.

In the catalog table below, parameters with a sim · AgentName tag are populated by their agent. Parameters tagged roadmap are on the catalog but neither instrumented nor simulated yet — they're the explicit "data we don't have" set.
Categories: in_tank (broth probes), kinetic (biomass dynamics), cellular (intracellular state), off_gas (mass-balance derived), env (ambient), utility (process water), material (raw-material lots), equipment (vessel / SIP / CIP integrity), spatial (facility zone, neighbors), personnel (shift, operator), outcome (the dependent variables), metadata (identifiers).

High-level schema

Two source tables (in-tank sensors + around-the-tank environmental context), one pre-aggregated time-series, and one audit log.

FERM_RUNS — what the in-tank sensor sees
  run_id PK · avg_temp · avg_ph · min_do_pct · inoculum_size_ml
  max_rpm · lactose_feed_ml · duration_days · media_volume_ml
  max_fpu_ml (enzyme yield)
  peak_od · time_to_peak_h · mu_h1 (biomass kinetics)
  contamination Y/N · yield_category
       │
       │ run_id (1:1)
       ▼
FERM_ENV — the "moat" layer: what the sensor can't see
  run_id PK · run_start_ts · tank_id · shift · crew_id
  ambient_temp_c · ambient_humidity_pct · barometric_hpa
  water_hardness_ppm · chlorine_ppm · yeast_lot · media_lot
  neighbor_contaminated Y/N

FERM_SNAPSHOTS — dashboard time-series, captured every 5 min
  id PK · captured_at · total_runs
  contam_pct_all · contam_pct_window (last 200)
  night_n, night_contam_pct
  bad_yeast_n, bad_yeast_contam_pct
  humid_n, humid_contam_pct
  compound_n, compound_rate

FERM_AUDIT — viewer attribution: every page load, question, explain
  id PK · ts · viewer · endpoint · question · answer_summary

FERM_PARAMETER_CATALOG — Tier 4 registry of every parameter the system tracks or could track
  param_code PK · display_name · category · subsystem · data_type
  unit · expected_min/max · phase_relevance · sampling
  source_table · source_column · calibration_source · regulatory_class
  is_predictor · is_outcome · is_outcome_proxy · tracked Y/N
  simulated_by (SensorAgent class name; NULL = not simulated) · notes

FERM_BASELINE_STRATA — Tier 8 materialized view · contextual rates per cell (optional)
  shift · yeast_band · humidity_band · hardness_band  (stratum_key)
  n_runs · n_contam · n_high_yield · contam_pct · high_yield_pct
  contam_ci_low / contam_ci_high (95% Wilson, computed in view)
  last_refreshed (refreshed nightly via DBMS_SCHEDULER)

FERM_OUTCOME_LABEL — Tier 11 · operator-recorded confirmed root causes (the loop closer)
  run_id PK · confirmed_cause · confidence (definite/likely/uncertain)
  operator_notes · labeled_by · labeled_at
       ▲
       │ POST /api/label_outcome → ferm_outcome_upsert/ ORDS module (MERGE)

FERM_RUN_STATE — hot tier · per-run cached intelligence (the inversion)
  run_id PK · last_updated_at
  regime · regime_label · recommended_action · rationale · confidence
  drift_score · drift_trend · drift_components_json
  n_active_alerts · earliest_alert_severity/t_h/detector
  alerts_json · drivers_json · narrative · n_similar_batches
       ▲
       │ MERGE-based upsert via ferm_state_upsert/ ORDS module
       │  ── /api/run_state recomputes + persists when stale (TTL = 5 min)
Plus a joined view v_ferm_full = ferm_runs ⋈ ferm_env ON run_id, and a pre-aggregated summary view (05_summary_view_v2.sql) that /api/summary reads from.
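The contam_ci_low/high columns in FERM_BASELINE_STRATA are 95% Wilson score intervals. The formula the view computes, in a minimal Python form:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for k successes in n trials (95% at z=1.96)."""
    if n == 0:
        return (0.0, 0.0)
    p = k / n
    denom  = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half   = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (max(0.0, center - half), min(1.0, center + half))
```

Unlike the normal-approximation interval, Wilson stays inside [0, 1] and behaves sensibly at the small per-stratum counts these cells often have.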
Confidence accounting for industrial fermentation — which metrics to trust, which to interrogate, and which you've been misreading. Voice-first so operators can query hands-free. Local LLM option so data never leaves the facility. Dynamic — any question, not pre-built dashboards. Oracle-backed for enterprise scale and audit.
Why so much contamination?
Night vs day shift
Worst yeast lot
Humidity effect
Top 3 root causes
Best temperature for yield?
Click the mic or type a question.