This page now answers one question: what still needs to be done to reach the final, usable product. Historical phase lists, previously completed work, and broad backlog items are intentionally left off this page.
North star: Brent's primary tool for Google Ads optimisation across active Shopify clients.
The app can run real Rover causal studies, but Brent still has to supervise interpretation. The work now is to make the report tell him what can be trusted, what changed, and what must not be shared.
End-product usable means: a study runs, declares its trust state, and survives known-window regressions; dashboard and report surfaces share one visual system; the app can be scoped safely to another client; and there is a repeatable source-onboarding path.
MVP-AUDIT-01 through MVP-AUDIT-09 have shipped, the fixed regression proof passed on 2026-05-12, and the study-modelling logic is documented. MVP-AUDIT-10 (auth, export, client isolation) is the next productisation slice.
Why: The operator needs to see frame fingerprint, freshness, snapshot status, and stale-comparison risk before reading the estimate.
Done when: Every saved report opens with same-frame / different-frame / stale-comparison status without needing the audit table.
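The frame-status logic above can be sketched as a single classifier. This is a minimal illustration with hypothetical names: `frame_status`, the fingerprint arguments, and the seven-day staleness budget are all assumptions, standing in for whatever frame hash and freshness rule the app actually stores.

```python
from datetime import date

# Hypothetical sketch: classify a saved report's frame status up front,
# so the operator never has to open the audit table to see it.
def frame_status(saved_fingerprint: str, current_fingerprint: str,
                 saved_as_of: date, source_as_of: date,
                 stale_after_days: int = 7) -> str:
    # Stale comparison wins over frame identity: even a matching frame
    # is suspect if the source has moved on past the freshness budget.
    if (source_as_of - saved_as_of).days > stale_after_days:
        return "stale-comparison"
    if saved_fingerprint == current_fingerprint:
        return "same-frame"
    return "different-frame"

print(frame_status("abc123", "abc123", date(2026, 5, 1), date(2026, 5, 3)))  # same-frame
```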
Why: Warnings after sampling are too late; weak windows and stale sources should be visible before the run starts without blocking operator flow.
Done when: The same backend validation issues appear before and after run, while the run action remains available for supervised operator use.
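One way to guarantee identical issues before and after the run is to share a single validation routine between both call sites. The sketch below assumes hypothetical names and thresholds (`validate_window`, `min_rows`, `max_age_days`); the point is the shared function, not the specific checks.

```python
# Hypothetical sketch: one validation routine used by both pre-flight and
# post-run paths, returning warnings that never block a supervised run.
def validate_window(rows_per_day: int, source_age_days: int,
                    min_rows: int = 50, max_age_days: int = 3) -> list[str]:
    issues = []
    if rows_per_day < min_rows:
        issues.append("weak-window: too few rows per day")
    if source_age_days > max_age_days:
        issues.append("stale-source: data older than freshness budget")
    return issues

# Same function at both call sites means identical diagnostics by construction.
preflight = validate_window(rows_per_day=20, source_age_days=5)
postrun = validate_window(rows_per_day=20, source_age_days=5)
assert preflight == postrun
```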
Why: Source/model changes need a written pass/fail before a plausible estimate is trusted.
Done when: Known windows produce a pass/fail summary with source-frame checksums and saved-result integrity checks.
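The checksum half of that pass/fail summary can be sketched with a canonical JSON hash. Everything here is illustrative: `frame_checksum`, `integrity_row`, and the row shape are assumptions, not the app's actual schema.

```python
import hashlib
import json

# Hypothetical sketch: checksum a source frame deterministically, then compare
# against the checksum saved with the study to produce a pass/fail row.
def frame_checksum(rows: list[dict]) -> str:
    # Canonicalise ordering so logically identical frames hash identically.
    canonical = json.dumps(sorted(rows, key=lambda r: json.dumps(r, sort_keys=True)),
                           sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def integrity_row(name: str, saved: str, current: str) -> dict:
    return {"check": name, "status": "pass" if saved == current else "fail"}

rows = [{"day": "2026-05-01", "revenue": 100}]
saved = frame_checksum(rows)
print(integrity_row("shopify_orders", saved, frame_checksum(rows)))  # status: pass
```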
Why: The model is only trustworthy if Shopify, Google, Meta, Klaviyo, discount, calendar, and model-control inputs match source truth.
Done when: Saved reports show pass/fail rows for model-frame inputs against current source marts; Data Sources reconciles raw/staging/mart/model-control contracts; and Shopify source certification compares the active client's exported baseline against the outcome mart.
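A parity row comparing a model-frame input against the current source mart might look like the following. The function name, metric name, and 0.1% tolerance are all assumptions for illustration.

```python
# Hypothetical sketch: one parity row per model-frame metric, with a small
# relative tolerance so rounding noise does not fail the check.
def parity_row(metric: str, model_value: float, mart_value: float,
               rel_tol: float = 0.001) -> dict:
    denom = max(abs(mart_value), 1e-9)  # guard against divide-by-zero
    drift = abs(model_value - mart_value) / denom
    return {"metric": metric, "drift": drift,
            "status": "pass" if drift <= rel_tol else "fail"}

# 50 units of drift on ~100k is about 0.05%, inside tolerance.
print(parity_row("shopify_net_revenue", 100_000.0, 100_050.0)["status"])  # pass
```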
Why: Long-running validation must fail, retry, and render status cleanly before it becomes default product behaviour.
Done when: Forced worker failures do not duplicate suite members, hide the primary report, or leave confusing partial results.
Why: The report needs one clear state: decisive, directional, inconclusive, stale comparison, or do not rely.
Done when: Saved reports render one top-level actionability state derived from source parity, saved-frame trust, validation, suite status, model interval, and study quality.
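Deriving one top-level state from those inputs is essentially a precedence ladder, with "do not rely" conditions checked first. The sketch below uses hypothetical flag names and an invented precedence; the real derivation would be tuned against the actual signals.

```python
# Hypothetical sketch: collapse component trust signals into a single
# top-level actionability state, hardest failures first.
def actionability(source_parity_ok: bool, frame_trusted: bool,
                  validation_ok: bool, suite_ok: bool,
                  interval_excludes_zero: bool, stale_comparison: bool) -> str:
    if not (source_parity_ok and frame_trusted and suite_ok):
        return "do not rely"
    if stale_comparison:
        return "stale comparison"
    if not validation_ok:
        return "inconclusive"
    return "decisive" if interval_excludes_zero else "directional"

print(actionability(True, True, True, True, True, False))  # decisive
```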
Why: Promo, calendar, and stockout context can change whether a media-impact read is believable.
Done when: Adding a context note changes pre-flight/report diagnostics and lowers reliance language where relevant.
Why: The product cannot safely support another client while causal-engine paths still contain Rover-specific dataset literals.
Done when: Client dataset resolution is tested and product-path queries resolve through the active client rather than Rover-only literals.
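The shape of that fix is to route every product-path query through a client registry rather than a literal. A minimal sketch, assuming a hypothetical `CLIENT_DATASETS` mapping and table-name convention:

```python
# Hypothetical sketch: resolve mart tables through the active client record
# instead of a Rover-only dataset literal baked into the query text.
CLIENT_DATASETS = {
    "rover": "rover_marts",
    "client_b": "client_b_marts",
}

def mart_table(active_client: str, table: str) -> str:
    # KeyError doubles as a guard: an unprovisioned client can never
    # silently fall back to another client's dataset.
    dataset = CLIENT_DATASETS[active_client]
    return f"{dataset}.{table}"

print(mart_table("rover", "fct_orders"))  # rover_marts.fct_orders
```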
Why: Verification rate is a north-star component; recommendations need reality feedback after the change window closes.
Done when: Linked change-log entries trigger follow-up studies and produce verified / partially verified / not verified outcomes.
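Classifying the follow-up study into verified / partially verified / not verified could be as simple as comparing observed lift to the predicted lift. The thresholds below (80% and 30% of prediction) are illustrative assumptions, not product decisions.

```python
# Hypothetical sketch: grade a follow-up study against the original
# recommendation once the change window has closed.
def verification_outcome(predicted_lift: float, observed_lift: float) -> str:
    if predicted_lift == 0:
        return "not verified"
    ratio = observed_lift / predicted_lift
    if ratio >= 0.8:
        return "verified"
    if ratio >= 0.3:
        return "partially verified"
    return "not verified"  # includes effects in the wrong direction

print(verification_outcome(0.10, 0.09))  # verified
```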
Why: The trust loop has a source-certification path, outcome verification, and a passing regression proof; productisation is now the active path to multi-client use.
Done when: Customer, marketing, impact, and report surfaces expose scoped data with consistent UI; Client A cannot access Client B by URL, cookie, or request mutation; export works for a provisioned client.
Why: The build needs gates tied to usefulness, not broad phase-complete claims.
Done when: Gate reports record agreement rate, verification rate, sessions, format conformance, regression results, source parity, and exceptions.
Why: A usable product needs repeatable operation and clear client-conversation rules.
Done when: A new operator can run a study, interpret its trust state, and know when not to share a finding.
Why: The product needs to support multiple Shopify clients without hand-built Rover-style setup each time.
Done when: A new Shopify client can be provisioned through a documented path that creates source connections, dataset/dbt targets, source-certification baselines, dashboard access, and first regression/source checks.
Why: Magento, WooCommerce, HubSpot, and future platforms should plug into the same canonical commerce/customer/source-truth contracts instead of creating bespoke pipelines.
Done when: A connector spec defines required raw/staging/mart fields, identity mapping, revenue/order semantics, source certification, freshness, and dashboard eligibility for non-Shopify sources.
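The connector contract could be expressed as a typed record that every new platform must fill in before it touches the pipeline. Field names below are assumptions sketched from the list above, not the spec itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of the connector contract: what a non-Shopify source
# must declare before it is eligible for the shared pipeline.
@dataclass
class ConnectorSpec:
    platform: str                    # e.g. "magento", "woocommerce", "hubspot"
    raw_fields: list[str]            # required raw-layer columns
    staging_fields: list[str]        # required staging-layer columns
    mart_fields: list[str]           # required mart-layer columns
    identity_key: str                # canonical customer-identity mapping
    revenue_semantics: str           # e.g. "net_of_refunds"
    certification_baseline: str      # exported baseline used for certification
    freshness_sla_hours: int
    dashboard_eligible: bool = False  # flipped only after certification passes

spec = ConnectorSpec("magento", ["order_id"], ["order_id"], ["order_id"],
                     "email_sha256", "net_of_refunds", "magento_baseline.csv", 24)
print(spec.dashboard_eligible)  # False
```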
Do not work from the old historical task archive as the active queue.
Do not add non-Shopify integrations as one-off hacks; define the connector contract first.
Do not treat a new client as live until its source-certification baseline and isolation checks pass.
Marketing with Brent · Insights Platform · Active correction queue
source: AUDIT_INTENT.md / CORRECTION_QUEUE.md