Claim library — the substrate
Atomic units of what you believe. Every variant borrows from here. Click a claim or variant to see performance, related claims, and source links.
Claims · 22
AI Tutor sees a Spanish learner confusing ser/estar across three help requests. The tutor opens the session cold the next morning — none of it shows up in the pre-session brief.
When a customer tells an agent "the POÄNG is going in the sunroom, ceiling slopes, we have toddlers" — that context should arrive with the Tasker.
Match conversion gates everything downstream — retention, NPS, platform LTV.
This isn't a ranker — it's the brief-format layer your ranker sits downstream of.
Context doesn't survive the agent-to-human handoff — the parent tells an agent five things that matter; the caregiver sees three keywords.
90-day pilot, one cohort, outcome-tied to conversion lift vs. a control cohort. Dollars-on-dollars.
Personalization is a ranker; the brief the pro reads is upstream of it. Internal ML teams sit downstream of a brief-format layer.
AI-originated demand moves from rounding error to double-digits in the next 12 months. Platforms win on match quality at the handoff, not on supply depth.
Trust-graph is the only wedge smaller platforms have. The defensibility question is whether the graph survives the agent handoff or gets flattened.
100-day PE Year-One board-deck window. After day 100, the story calcifies.
Most companies describe agent-to-human handoff theoretically. Preply has the problem in production — the architecture that solves it is a stateful context-passing layer.
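The claim names a stateful context-passing layer without specifying it. As an illustration only — every class and field name below is hypothetical, not Preply's architecture — a minimal sketch of the mechanism: observations logged during one session survive into the next session's pre-session brief instead of being dropped when the tutor "opens the session cold."

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a stateful context-passing layer: observations
# recorded in one session are carried forward and rendered as a brief
# for the next session. Names are illustrative, not any platform's API.

@dataclass
class ContextStore:
    # learner_id -> list of observations carried across sessions
    notes: dict = field(default_factory=dict)

    def record(self, learner_id: str, observation: str) -> None:
        self.notes.setdefault(learner_id, []).append(observation)

    def pre_session_brief(self, learner_id: str) -> str:
        history = self.notes.get(learner_id, [])
        if not history:
            return "No prior context."
        return "Carry-over context:\n" + "\n".join(f"- {o}" for o in history)

store = ContextStore()
store.record("learner-42", "confuses ser/estar in past-tense descriptions")
store.record("learner-42", "asked for help three times in one evening")
print(store.pre_session_brief("learner-42"))
```

The design choice the claim implies is exactly this seam: the store is written by the AI side and read by the human side, so the brief is the contract between them.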
Auction prices per-quote; home-management prices per-homeowner-lifetime. Those are different objectives, and the platform can't serve both without a stateful-context layer.
Default path: parent company commits publicly, pilots at scale, misses. Non-default path: run the pilot in Handy first. Findings feed the Angi rollout with real numbers.
200 two-star reviews: the owner told the sitter something in the messages thread that was already on the booking page, and the sitter still didn't see it.
Tutors burn the first 15 minutes of a paid session on diagnostics — figuring out where the student actually is. That context already existed upstream.
Return-reduction on assembled items is the cleanest cross-subsidy anchor IKEA can price. Tasker brief-quality moves it directly.
Students who asked an AI assistant first arrive higher-intent and with denser context. Substitution reads as threat; it's actually a filter.
Outschool retains when the teacher walks into class already knowing the kid. Without a bridge, first-60-seconds retention gets weaker, not stronger, as upstream context gets richer.
SEMrush: SAT-prep organic queries declining quarter-over-quarter as searches migrate to ChatGPT. Leading indicator, not lagging.
Repeat-book rate decays after the first mismatch — gated by first-match quality, which is gated by the context handoff.
Profiles are static — age, preferences, purchase history. This is stateful context carried across sessions and across the agent↔human seam.
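The static/stateful distinction in this claim can be sketched as two data shapes (field names hypothetical, not any platform's schema): a static profile is a snapshot, while stateful context is an append-only log whose entries keep session and source attribution, so nothing gets flattened to keywords at the agent-to-human handoff.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a static profile snapshot vs. stateful context
# appended per session and attributed to a source on either side of
# the agent<->human seam.

@dataclass
class StaticProfile:                 # what most platforms store today
    age: int
    preferences: List[str]
    purchase_history: List[str]

@dataclass
class ContextEvent:                  # one unit of carried context
    session_id: str
    source: str                      # "ai_agent" or "human_provider"
    note: str

@dataclass
class StatefulContext:               # what the claim argues for
    events: List[ContextEvent] = field(default_factory=list)

    def append(self, session_id: str, source: str, note: str) -> None:
        self.events.append(ContextEvent(session_id, source, note))

    def handoff_brief(self) -> List[str]:
        # Every event crosses the seam with its provenance intact.
        return [f"[{e.source}/{e.session_id}] {e.note}" for e in self.events]

ctx = StatefulContext()
ctx.append("s1", "ai_agent", "ceiling slopes in the sunroom")
ctx.append("s1", "ai_agent", "toddlers in the home")
ctx.append("s2", "human_provider", "prefers morning appointments")
print(ctx.handoff_brief())
```

The contrast is the point: `StaticProfile` answers "who is this customer," while `StatefulContext` answers "what has already been said, by whom, in which session."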
Led Jules — Google's autonomous coding agent — before founding Khosla-backed Foible AI, building the context bridge between AI and human providers.
Variants
lineage tree · click to open
Proposed claims from this week's Learning Loop
Five candidate claims distilled from 11 meetings, 9 email replies, and 4 LinkedIn signal threads. You're in the loop — nothing lands unless you approve.
Pain — "CI build times exceeding 30 minutes" is showing up as a gating concern three weeks running.
Feature benefit — "our parallel test sharding cuts CI by 5×" · strong quotable from two sales calls.
Objection — "data residency for EU customers" → auto-routed to reply-playbook.