This page now answers one question: what still needs to be done to reach a usable end product. Historical phase lists, completed work, and broad backlog items are intentionally left off this page.
North star: Brent's primary tool for Google Ads optimisation across active Shopify clients.
The app can run real Rover causal studies, but Brent still has to supervise interpretation. The work now is to make the report tell him what can be trusted, what changed, and what must not be shared.
A usable end product means: a study runs, declares its trust state, survives known Rover regressions, and can later be scoped safely to another client.
To do first
MVP-AUDIT-01, MVP-AUDIT-02, MVP-AUDIT-04, MVP-AUDIT-06, and source-truth reconciliation are shipped. Build MVP-AUDIT-03 before broad productisation or new analytics surfaces.
Why: Source/model changes need a written pass/fail before a plausible estimate is trusted.
Done when: Known Rover windows produce a regression summary with direction, interval, selected model, source changes, and frame fingerprint.
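The regression summary described above could be captured in a small structure. This is a minimal sketch, assuming field names and a pass rule; the app's real schema and agreement thresholds may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegressionSummary:
    """Hypothetical shape for a regression-window summary.
    Field names are assumptions, not the app's actual schema."""
    direction: str                  # e.g. "increase" / "decrease" / "flat"
    interval: tuple[float, float]   # interval around the estimate
    selected_model: str             # which candidate model the run settled on
    source_changes: list[str]       # source/model changes detected in the window
    frame_fingerprint: str          # hash identifying the exact model frame used

    def passed(self, baseline: "RegressionSummary") -> bool:
        """Pass if a rerun agrees with the baseline on direction, model
        choice, and frame identity, and the intervals overlap."""
        lo_a, hi_a = self.interval
        lo_b, hi_b = baseline.interval
        return (
            self.direction == baseline.direction
            and self.selected_model == baseline.selected_model
            and self.frame_fingerprint == baseline.frame_fingerprint
            and lo_a <= hi_b and lo_b <= hi_a  # intervals overlap
        )
```

A known Rover window would then pass only when every listed field agrees with the recorded baseline.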
Why: The operator needs to see frame fingerprint, freshness, snapshot status, and stale-comparison risk before reading the estimate.
Done when: Every saved report opens with same-frame / different-frame / stale-comparison status without needing the audit table.
Why: Warnings after sampling are too late; weak windows and stale sources should be visible before the run starts without blocking operator flow.
Done when: The same backend validation issues appear before and after run, while the run action remains available for supervised operator use.
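One way to guarantee identical warnings before and after the run is to route both surfaces through a single validation routine. A sketch with assumed, illustrative thresholds (the real checks live in the backend):

```python
def validation_issues(window_days: int, source_age_days: int) -> list[str]:
    """Single source of truth for validation warnings, called from both
    pre-flight and the post-run report. Thresholds are assumptions."""
    issues = []
    if window_days < 28:
        issues.append(f"weak window: only {window_days} days of data")
    if source_age_days > 3:
        issues.append(f"stale source: last refresh {source_age_days} days ago")
    return issues

def preflight(window_days: int, source_age_days: int) -> dict:
    """Pre-flight surfaces the warnings but never blocks the run:
    the operator stays in control for supervised use."""
    return {
        "issues": validation_issues(window_days, source_age_days),
        "can_run": True,  # warnings inform, they do not gate
    }
```

Because the report calls the same `validation_issues`, the two surfaces cannot drift apart.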
Why: The model is only trustworthy if Shopify, Google, Meta, Klaviyo, discount, calendar, and model-control inputs match source truth.
Done when: Saved reports show pass/fail rows for model-frame inputs against current source marts, and Data Sources reconciles the raw/staging/mart/model-control contracts, with Klaviyo and platform conversions explicitly excluded from model controls.
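The pass/fail rows could be produced by a straightforward comparison of frame inputs against mart values. A sketch, assuming dictionary inputs and a relative tolerance; the names and tolerance are illustrative, not the app's contract:

```python
def parity_rows(frame_inputs: dict[str, float],
                mart_values: dict[str, float],
                tolerance: float = 0.005) -> list[tuple[str, str]]:
    """Emit one pass/fail row per model-frame input, compared against the
    current source mart. Tolerance (0.5% relative) is an assumption."""
    rows = []
    for name, frame_value in sorted(frame_inputs.items()):
        mart_value = mart_values.get(name)
        if mart_value is None:
            rows.append((name, "fail: missing from mart"))
            continue
        denom = max(abs(mart_value), 1e-9)  # guard zero-valued marts
        ok = abs(frame_value - mart_value) / denom <= tolerance
        rows.append((name, "pass" if ok else "fail"))
    return rows
```

Any input excluded from model controls (e.g. Klaviyo, platform conversions) would simply never appear in `frame_inputs`.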
Why: Long-running validation must fail, retry, and render status cleanly before it becomes default product behaviour.
Done when: Forced worker failures do not duplicate suite members, hide the primary report, or leave confusing partial results.
Why: The report needs one clear state: decisive, directional, inconclusive, stale comparison, or do not rely.
Done when: Saved reports render one top-level actionability state derived from source parity, saved-frame trust, validation, suite status, model interval, and study quality.
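The single top-level state could be derived by a precedence rule over the trust signals listed above. A sketch, assuming hard failures dominate, then staleness, then estimate strength; the signal names and ordering are assumptions:

```python
def actionability_state(source_parity_ok: bool,
                        frame_trusted: bool,
                        validation_clean: bool,
                        suite_complete: bool,
                        interval_excludes_zero: bool,
                        study_quality_ok: bool,
                        comparison_stale: bool = False) -> str:
    """Collapse the trust signals into one top-level state.
    Precedence (an assumption): integrity failures first, then
    staleness, then completeness, then estimate strength."""
    if not (source_parity_ok and frame_trusted and study_quality_ok):
        return "do not rely"
    if comparison_stale:
        return "stale comparison"
    if not (validation_clean and suite_complete):
        return "inconclusive"
    # All gates clear: the estimate's interval decides the strength.
    return "decisive" if interval_excludes_zero else "directional"
```

The value of a precedence rule is that the report can never show a strong label while an upstream trust gate is failing.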
Why: Promo, calendar, and stockout context can change whether a media-impact read is believable.
Done when: Adding a context note changes pre-flight/report diagnostics and lowers reliance language where relevant.
Why: The product cannot safely support another client while causal-engine paths still contain Rover-specific dataset literals.
Done when: Client dataset resolution is tested and no product-path query hardcodes `client_rover` outside fixtures/docs.
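The hardcoding rule above can be enforced as a CI guard that fails when the literal appears on a product path. A minimal sketch, assuming a Python codebase and that `fixtures/` and `docs/` directories are the allowed locations:

```python
import pathlib

# Assumption: directories where the literal may legitimately appear.
ALLOWED_PARTS = {"fixtures", "docs"}

def hardcoded_client_refs(repo_root: str, literal: str = "client_rover") -> list[str]:
    """Return product-path Python files that contain the hardcoded
    dataset literal. Intended as a CI check: an empty list passes."""
    offenders = []
    for path in pathlib.Path(repo_root).rglob("*.py"):
        if ALLOWED_PARTS & set(path.parts):
            continue  # fixtures/docs may mention the literal legitimately
        if literal in path.read_text(encoding="utf-8", errors="ignore"):
            offenders.append(str(path))
    return sorted(offenders)
```

Wiring `assert not hardcoded_client_refs(".")` into the test suite keeps the rule from regressing silently.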
Why: Verification rate is a north-star component; recommendations need reality feedback after the change window closes.
Done when: Seeded change-log entries trigger follow-up studies and produce verified / partially verified / not verified outcomes.
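The three outcomes above could be classified by comparing the follow-up read against the original recommendation. A sketch, assuming the original study recorded an expected lift direction and interval; the cutoff logic is an assumption, not the app's rule:

```python
def verification_outcome(expected_lift: float,
                         observed_lift: float,
                         predicted_interval: tuple[float, float]) -> str:
    """Classify a follow-up study against the original recommendation.
    Assumed rule: same direction AND inside the predicted interval is
    verified; same direction only is partially verified; otherwise not."""
    lo, hi = predicted_interval
    same_direction = (observed_lift > 0) == (expected_lift > 0)
    if same_direction and lo <= observed_lift <= hi:
        return "verified"
    if same_direction:
        return "partially verified"
    return "not verified"
```

Each seeded change-log entry would carry the expected values, so the follow-up study can emit one of these outcomes automatically when its window closes.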
Why: Client access only matters after the operator report loop is trustworthy and client scoping is proven.
Done when: Client A cannot access Client B by URL, cookie, or request mutation; export works for a provisioned client.
Why: The build needs gates tied to usefulness, not broad phase-complete claims.
Done when: Gate reports record agreement rate, verification rate, sessions, format conformance, regression results, source parity, and exceptions.
Why: A usable product needs repeatable operation and clear client-conversation rules.
Done when: A new operator can run a study, interpret its trust state, and know when not to share a finding.
Do not work from the old historical task archive as the active queue.
Do not add broader dashboard features before the trust layer lands.
Do not start second-client/auth/export work until the Rover trust gate passes.
source: AUDIT_INTENT.md / CORRECTION_QUEUE.md