Every Unseen Progress app is built on published research about how a specific caregiver population actually experiences change over long timescales. This page explains the cross-app methodology — what we measure, why we measure it that way, and how one research-backed engine is specialized into 18 different apps for 18 different caregiver populations.
For market-specific research pages — top problems, frames, citations, and evidence-based practice for one specific caregiver population — see Research by market below.
Most of what caregivers describe as an emotional problem is actually a measurement problem.
Human memory and attention evolved to notice change day-to-day. Whether the child ate today, whether the dog reacted at the corner, whether the parent with dementia recognized you this afternoon. That short feedback loop works well for immediate survival — but it fails predictably when the thing you care about moves across months and years.
Three well-documented cognitive biases conspire here:
For caregivers, these biases combine to produce a systematic perception gap. The underlying relationship, behavior, or condition is often improving on the months-to-years timescale that the research shows is realistic. The caregiver's felt sense of it, day-to-day, is flat or declining. They feel like nothing is working when something is.
This is not a character flaw or a motivational problem. It is how memory works.
The invisible progress problem has a specific and costly failure mode: people stop doing the thing that was quietly working.
When a caregiver evaluates an approach on a timescale of weeks but its results arrive on a timescale of months, unaided memory will usually conclude, somewhere around week three or four, that it isn't working. They switch approaches. The new approach needs another few weeks to build signal. They conclude that one isn't working either. They switch again.
Each switch resets the clock. Over a year, the caregiver has tried six things and abandoned each at the exact moment it was starting to produce the change they were looking for.
This pattern shows up across every population we have studied:
The common underlying structure: the feedback the caregiver gets is too noisy, too slow, and too emotionally asymmetric for unaided human memory to integrate correctly.
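A toy model makes this concrete. The numbers below (a +0.05-point daily trend buried under alternating ±1-point noise) are illustrative only, not measured values from any app:

```python
# Toy model: a slow upward trend hidden under day-to-day noise.
# Illustrative numbers only: +0.05/day trend, alternating +/-1 "noise".
TREND = 0.05
DAYS = 90

scores = [TREND * day + (1 if day % 2 == 0 else -1) for day in range(DAYS)]

# What unaided memory notices: the day-to-day change.
daily_changes = [scores[i + 1] - scores[i] for i in range(DAYS - 1)]
bad_days = sum(1 for d in daily_changes if d < 0)

# What a measurement layer recovers: the least-squares slope over 90 days.
xs = list(range(DAYS))
xbar = sum(xs) / DAYS
ybar = sum(scores) / DAYS
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, scores)) \
    / sum((x - xbar) ** 2 for x in xs)

print(f"{bad_days}/{DAYS - 1} days feel like setbacks")  # 45 of 89 changes are negative
print(f"recovered trend: {slope:+.3f} points/day")       # slope is close to +0.05
```

Roughly half the days register as setbacks even though the underlying trend is steadily positive; a regression over the full window recovers it. That asymmetry between the daily experience and the long-window estimate is the measurement problem in miniature.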
The Unseen Progress apps are designed around Outcome-Driven Innovation (ODI), a research methodology developed by Tony Ulwick. ODI treats caregivers as executors of a specific job, and identifies the measurable outcomes they are trying to achieve within that job — ranked by importance and current satisfaction.
The outcomes are not the apps' engagement metrics. The outcomes are what the caregiver is actually trying to accomplish.
For a stepparent, the top-ranked outcome is "minimize the likelihood of concluding that nothing is working when slow progress is actually occurring" — scored 19 out of 20 on the combined importance-satisfaction scale, meaning it is both extremely important and almost entirely unserved by existing tools. For an owner of a reactive dog, the equivalent outcome scores 16 out of 16 — also unserved. For family caregivers of a person with Alzheimer's, the pattern repeats.
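For reference, ODI's published opportunity score combines importance and satisfaction ratings (each on a 1-to-10 scale) as importance plus the unmet gap, floored at zero, so a score approaching 20 marks an outcome that is both critical and unserved. The ratings below are hypothetical, not survey data:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick's ODI opportunity score: importance plus the unmet gap.

    Both inputs are on a 1-10 scale; the gap is floored at zero so
    over-served outcomes (satisfaction > importance) don't go negative.
    """
    return importance + max(importance - satisfaction, 0)

# Hypothetical ratings for illustration:
print(opportunity_score(10, 1))  # 19 -> extremely important, almost unserved
print(opportunity_score(8, 8))   # 8  -> important but already well served
```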
Across every caregiver population we have analyzed, the highest-scoring outcomes cluster around the same underlying need: the ability to detect and hold onto slow progress that happens on a longer timescale than daily experience.
This is why the apps are built the way they are.
We are deliberately not a habit tracker. The goal is not the streak; the goal is the trend in the underlying outcome the streak was supposed to serve.
Every Unseen Progress app shares the same core engine:
The content is market-specific. Each of the 18 apps has its own taxonomy (what counts as an interaction, a trigger, a response), its own assessment (what the daily sliders measure), its own course content, and its own perspective-shift cards grounded in the research for that specific population.
This is a deliberate architectural choice. The underlying cognitive biases are universal; the caregiving work is not. Stepparent-stepchild dynamics require different content than dog reactivity training, which requires different content than dementia caregiving. But the measurement problem is the same, so the engine is the same.
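That separation can be sketched as one engine parameterized by per-market content. The field names and example values here are illustrative, not the apps' actual schema:

```python
from dataclasses import dataclass

@dataclass
class MarketContent:
    """Market-specific content plugged into the shared engine (illustrative schema)."""
    market: str
    taxonomy: list[str]  # what counts as an interaction / trigger / response
    sliders: list[str]   # what the daily assessment measures

def weekly_average(daily_scores: list[float]) -> list[float]:
    """Shared engine logic: the same trend math runs for every market."""
    return [sum(daily_scores[i:i + 7]) / 7
            for i in range(0, len(daily_scores) - 6, 7)]

# Two markets, different content, identical engine code:
stepparent = MarketContent("stepparent",
                           ["shared meal", "conflict", "one-on-one time"],
                           ["connection"])
reactive_dog = MarketContent("reactive-dog",
                             ["leash pass", "bark-lunge", "recovery"],
                             ["reactivity"])
```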
It also means that when we improve the engine — better trend smoothing, better pattern detection, better perspective-card timing — all 18 apps benefit simultaneously.
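As a minimal sketch of what "trend smoothing" means here, an exponential moving average damps day-to-day noise while tracking the slow trend; the smoothing the apps actually use may differ:

```python
def ewma(scores: list[float], alpha: float = 0.1) -> list[float]:
    """Exponentially weighted moving average: each point blends the newest
    observation with the running estimate, damping day-to-day noise."""
    smoothed = []
    estimate = scores[0]
    for s in scores:
        estimate = alpha * s + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed

noisy = [0, 2, -1, 3, 0, 4, 1, 5]          # jagged daily ratings
print([round(v, 2) for v in ewma(noisy)])  # a flatter, slowly rising line
```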
We are not therapy. The apps are not clinical tools. They are not diagnostic instruments, not a substitute for a licensed therapist, veterinary behaviorist, neurologist, or pediatrician, and not regulated as medical devices. If you are dealing with clinical-level anxiety, suicidal ideation, dangerous aggression in an animal, or a medical condition, the appropriate next step is a trained professional, not an app.
What we are: a memory aid for people who are doing the work and cannot perceive the results. A measurement layer that closes the feedback-loop gap between "I made a hundred small attempts this month" and "something is actually changing." A set of research-backed reframes available at the moment the caregiver's own memory is about to mislead them.
The research on this site is written for caregivers, but it is also meant to be useful to researchers and practitioners working in stepfamily integration, veterinary behavior, caregiving for cognitive decline, or any of the 18 populations we serve.
The market-specific research pages below are the deep reference material for caregivers, practitioners, and researchers in each area. Each page covers the top problems in that population, the research-backed frames that explain them, what the evidence says works, and the citations behind each claim.
Additional market pages are in preparation. Our research queue prioritizes by search volume and evidence depth — populations where the academic literature is strongest and the caregiver audience largest come first.
Every claim on a research page links back to a citation in site/research/citations.json, which is a version-controlled, publicly auditable file in the repository. Each citation entry includes authors, year, title, publisher, a stable URL, a one-sentence summary of the finding, and a list of which apps use it.
We maintain the citation library directly in the repository because claim provenance matters. If a caregiver, researcher, or journalist wants to verify a specific claim, the chain is: page → citation key → citations.json entry → original source.
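The verification chain can be sketched in a few lines. The entry below is a made-up example in the shape described above, not a real record from citations.json:

```python
import json

# A made-up entry in the shape described above (not a real record):
citations_json = json.loads("""
{
  "doe2001": {
    "authors": "Doe, J.; Roe, A.",
    "year": 2001,
    "title": "Example study title",
    "publisher": "Example Journal",
    "url": "https://example.org/study",
    "summary": "One-sentence summary of the finding.",
    "apps": ["stepparent", "reactive-dog"]
  }
}
""")

def resolve(citation_key: str) -> str:
    """Follow the chain: citation key -> citations.json entry -> source URL."""
    return citations_json[citation_key]["url"]

print(resolve("doe2001"))  # https://example.org/study
```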
If you are researching any of these ideas for academic or clinical purposes — long-timescale caregiver perception, the application of ODI to underserved caregiving populations, invisible progress in behavior modification, or the mapping of one engine across 18 markets — we are happy to share our methodology in detail.
Contact: feedback@unseenprogress.com