Research

Every Unseen Progress app is built on published research about how a specific caregiver population actually experiences change over long timescales. This page explains the cross-app methodology — what we measure, why we measure it that way, and how one research-backed engine is specialized into 18 different apps for 18 different caregiver populations.

For market-specific research pages — top problems, frames, citations, and evidence-based practice for one specific caregiver population — see Research by market below.

The invisible progress problem

Most of what caregivers describe as an emotional problem is actually a measurement problem.

Human memory and attention evolved to notice change day-to-day. Whether the child ate today, whether the dog reacted at the corner, whether the parent with dementia recognized you this afternoon. That short feedback loop works well for immediate survival — but it fails predictably when the thing you care about moves across months and years.

Three well-documented cognitive biases conspire here:

  • The availability heuristic — recent events are easier to recall, so they feel more representative of the underlying trend than they actually are. The last bad walk outweighs five good ones in memory.
  • The negativity bias — negative events are encoded more deeply than positive ones of equal magnitude. A single rejection is remembered; three weeks of slowly warming behavior is not.
  • The recency effect — the most recent experience dominates judgment. Yesterday's setback overwrites last month's improvement in the mental model of "how is this going?"

For caregivers, these biases combine to produce a systematic perception gap. The underlying relationship, behavior, or condition is often improving on the months-to-years timescale that the research shows is realistic. The caregiver's felt sense of it, day-to-day, is flat or declining. They feel like nothing is working when something is.

This is not a character flaw or a motivational problem. It is how memory works.

Why caregivers abandon approaches that are working

The invisible progress problem has a specific and costly failure mode: people stop doing the thing that was quietly working.

When the feedback loop on an approach is measured in weeks and the results arrive in months, a caregiver operating on unaided memory will usually conclude, somewhere around week three or four, that it isn't working. They switch approaches. The new approach needs another few weeks to build signal. They conclude that one isn't working either. They switch again.

Each switch resets the clock. Over a year, the caregiver has tried six things and abandoned each at the exact moment it was starting to produce the change they were looking for.

This pattern shows up across every population we have studied:

  • Stepparents abandon consistent, low-pressure relationship building after a few weeks because rejection still feels active — and reset back to zero (Papernow, 2013).
  • Owners of reactive dogs abandon careful under-threshold counter-conditioning after one bad walk erases the perceived progress — and switch to a different methodology or trainer.
  • Parents of children with selective mutism, stuttering, or anxiety change routines, therapists, or medications prematurely when the day-to-day data doesn't show what the month-to-month data would.
  • Family caregivers for dementia and Parkinson's patients burn out not from the absolute workload but from the loss of perceived meaning — because daily decline dominates the memory of what they preserved.

The common underlying structure: the feedback the caregiver gets is too noisy, too slow, and too emotionally asymmetric for unaided human memory to integrate correctly.

What we measure — Outcome-Driven Innovation

The Unseen Progress apps are designed around Outcome-Driven Innovation (ODI), a research methodology developed by Tony Ulwick. ODI treats caregiving as a specific job the caregiver is executing and identifies the measurable outcomes they are trying to achieve within that job, ranked by importance and current satisfaction.

The outcomes are not the apps' engagement metrics. The outcomes are what the caregiver is actually trying to accomplish.

For a stepparent, the top-ranked outcome is "minimize the likelihood of concluding that nothing is working when slow progress is actually occurring" — scored 19 out of 20 on the combined importance-satisfaction scale, meaning it is both extremely important and almost entirely unserved by existing tools. For an owner of a reactive dog, the equivalent outcome scores 16 out of 16 — also unserved. For family caregivers of a person with Alzheimer's, the pattern repeats.
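Ulwick's commonly published opportunity algorithm combines those two survey dimensions into a single score. A minimal sketch, under the assumption that the figures quoted above use this formula or a close variant (the maximum possible score depends on the survey scale in use):

```typescript
// Ulwick's opportunity algorithm: each outcome is surveyed for importance
// and current satisfaction (typically 1-10 each), and the score rewards
// outcomes that are important but poorly served.
// Assumption: the scores quoted above were produced this way or similarly.
function opportunityScore(importance: number, satisfaction: number): number {
  // Over-served outcomes (satisfaction > importance) are not penalized.
  return importance + Math.max(importance - satisfaction, 0);
}

// A maximally important (10), almost entirely unserved (1) outcome:
const topOutcome = opportunityScore(10, 1); // 19
// A moderately important, already well-served outcome scores far lower:
const servedOutcome = opportunityScore(6, 9); // 6
```

Ranking outcomes by this score is what surfaces the underserved needs an app should target first.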

Across every caregiver population we have analyzed, the highest-scoring outcomes cluster around the same underlying need: the ability to detect and hold onto slow progress that happens on a longer timescale than daily experience.

This is why the apps are built the way they are.

  • We measure outcomes the caregiver cares about (relationship warmth, behavioral incident severity, cognitive stability) — not engagement proxies (streaks, session length, in-app actions).
  • We surface trends on the months-to-years timescale the research shows is realistic — not weekly-average dashboards that re-create the invisible progress problem in digital form.
  • We provide reframing at the moment data hurts, grounded in the actual research on that caregiver population — so that one bad day doesn't wipe out the caregiver's model of whether the approach is working.

We are deliberately not a habit tracker. The goal is not the streak; the goal is the trend in the underlying outcome the streak was supposed to serve.
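The gap between the two timescales can be made concrete. A sketch with synthetic data (the drift rate, noise shape, and scale are illustrative assumptions, not app data): a daily score improves slowly under noise large enough that any single day-over-day comparison is meaningless, while a least-squares slope over six months recovers the real trend.

```typescript
// Synthetic daily scores: a slow upward drift (+0.02/day) buried under
// deterministic "noise" (±1.5) that swamps any single-day comparison.
const days = 180;
const noise = (i: number) => 1.5 * Math.sin(i * 2.3);
const scores = Array.from({ length: days }, (_, i) => 5 + 0.02 * i + noise(i));

// Least-squares slope over the whole series: the months-scale signal.
function slope(ys: number[]): number {
  const n = ys.length;
  const xMean = (n - 1) / 2;
  const yMean = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  ys.forEach((y, x) => {
    num += (x - xMean) * (y - yMean);
    den += (x - xMean) ** 2;
  });
  return num / den;
}

const trend = slope(scores); // ≈ +0.02 per day: the real improvement
const yesterdayVsToday = scores[days - 1] - scores[days - 2]; // negative here:
// the most recent comparison reads like a setback despite the upward trend.
```

This is the invisible progress problem in two lines of arithmetic: the day-over-day delta is noise, the long-window slope is signal, and unaided memory only sees the former.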

One engine, many markets

Every Unseen Progress app shares the same core engine:

  • A 30-second daily check-in built around five weighted sliders
  • A timeline that visualizes daily check-in data as a trend across weeks and months
  • A pattern engine that cross-references inputs to surface which actions correlate with which outcomes
  • A perspective-shift card format that delivers research-backed reframes when negative signals dominate
  • A research-structured course library with short modules on the specific caregiver population
  • All data stored on-device in IndexedDB — no accounts, no servers, no tracking
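The check-in at the top of that list can be sketched as data. Slider names and weights below are hypothetical (each market defines its own); the real apps persist records in IndexedDB on-device, while this sketch just shows the shape and the weighted collapse into one daily score:

```typescript
// One daily check-in record (assumed field names, for illustration).
interface CheckIn {
  date: string;                    // ISO date, one record per day
  sliders: Record<string, number>; // five sliders, each 0-10
}

// Hypothetical weights for a stepparent app; weights sum to 1.
const weights: Record<string, number> = {
  warmth: 0.3, conflict: 0.25, initiation: 0.2, ease: 0.15, ownMood: 0.1,
};

function dailyScore(c: CheckIn): number {
  return Object.entries(weights)
    .reduce((sum, [key, w]) => sum + w * (c.sliders[key] ?? 0), 0);
}

const today: CheckIn = {
  date: "2025-01-15",
  sliders: { warmth: 6, conflict: 4, initiation: 5, ease: 7, ownMood: 5 },
};
// dailyScore(today): 0.3*6 + 0.25*4 + 0.2*5 + 0.15*7 + 0.1*5 = 5.35
```

The single weighted score is what feeds the timeline; the raw slider values feed the pattern engine.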

The content is market-specific. Each of the 18 apps has its own taxonomy (what counts as an interaction, a trigger, a response), its own assessment (what the daily sliders measure), its own course content, and its own perspective-shift cards grounded in the research for that specific population.

This is a deliberate architectural choice. The underlying cognitive biases are universal; the caregiving work is not. Stepparent-stepchild dynamics require different content than dog reactivity training, which requires different content than dementia caregiving. But the measurement problem is the same, so the engine is the same.

It also means that when we improve the engine — better trend smoothing, better pattern detection, better perspective-card timing — all 18 apps benefit simultaneously.
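One concrete form the pattern detection can take (an assumption about the mechanism for illustration, not a description of the shipped engine) is a correlation between a logged action and the next day's outcome slider:

```typescript
// Pearson correlation between two equal-length series.
function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b) / n;
  const my = ys.reduce((a, b) => a + b) / n;
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Hypothetical week: low-pressure activity on days 1, 3, 5 (as 0/1 flags),
// paired with the warmth slider recorded the following day.
const didActivity = [1, 0, 1, 0, 1, 0, 0];
const warmthNextDay = [6, 4, 7, 4, 6, 5, 4];
const r = pearson(didActivity, warmthNextDay); // strongly positive
```

Surfacing "days after X, outcome Y tends to be higher" is exactly the kind of signal a caregiver's memory cannot extract from noisy daily experience.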

What we are not

We are not therapy. The apps are not clinical tools. They are not diagnostic instruments, not a substitute for a licensed therapist, veterinary behaviorist, neurologist, or pediatrician, and not regulated as medical devices. If you are dealing with clinical-level anxiety, suicidal ideation, dangerous aggression in an animal, or a medical condition, the appropriate next step is a trained professional, not an app.

What we are: a memory aid for people who are doing the work and cannot perceive the results. A measurement layer that closes the feedback-loop gap between "I made a hundred small attempts this month" and "something is actually changing." A set of research-backed reframes available at the moment the caregiver's own memory is about to mislead them.

The research on this site is written for caregivers, but also for researchers and practitioners. If you are working in stepfamily integration, veterinary behavior, caregiving for cognitive decline, or any of the 18 populations we serve, we are happy to share the methodology in detail.

Research by market

The market-specific research pages below are the deep reference material for caregivers, practitioners, and researchers in each area. Each page covers the top problems in that population, the research-backed frames that explain them, what the evidence says works, and the citations behind each claim.

Additional market pages are in preparation. Our research queue prioritizes by search volume and evidence depth — populations where the academic literature is strongest and the caregiver audience largest come first.

Citation library

Every claim on a research page links back to a citation in site/research/citations.json, which is a version-controlled, publicly auditable file in the repository. Each citation entry includes authors, year, title, publisher, a stable URL, a one-sentence summary of the finding, and a list of which apps use it.
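Based on the fields listed above, one entry can be typed roughly as follows. The key names are assumptions (the real citations.json may differ), and the URL below is a placeholder, not the actual entry:

```typescript
// Assumed shape of one entry in site/research/citations.json.
interface Citation {
  authors: string[];
  year: number;
  title: string;
  publisher: string;
  url: string;     // stable URL to the original source
  summary: string; // one-sentence statement of the finding
  apps: string[];  // which of the 18 apps cite this entry
}

// Illustrative entry (placeholder URL and summary wording):
const example: Citation = {
  authors: ["Papernow, P. L."],
  year: 2013,
  title: "Surviving and Thriving in Stepfamily Relationships",
  publisher: "Routledge",
  url: "https://example.org/papernow-2013",
  summary: "Stepfamily integration typically unfolds over years, not months.",
  apps: ["stepparent"],
};
```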

We maintain the citation library directly in the repository because the provenance of every claim matters. If a caregiver, researcher, or journalist wants to verify a specific claim, the chain is: page → citation key → citations.json entry → original source.

Methodology sharing

If you are researching any of these ideas for academic or clinical purposes — long-timescale caregiver perception, the application of ODI to underserved caregiving populations, invisible progress in behavior modification, or the mapping of one engine across 18 markets — we are happy to share our methodology in detail.

Contact: feedback@unseenprogress.com