The problem: opinion data ships too slowly
The Pew American Trends Panel is the gold standard for U.S. public opinion. Tens of thousands of recruited respondents, decades of methodology, scrupulous weighting. But every wave takes weeks to field, weeks to clean, and weeks to publish. By the time a brand sees the data, the cultural moment it described has already passed.
Spacing exists because timing decisions — when to launch, when to spend, when to stay quiet — need attitudes refreshed daily, not biweekly. So instead of waiting for the next survey, we built one that runs itself every night.
The idea: simulate the population in latent space
We treat the Pew panel as a longitudinal training set: a moving picture of how 23,000 Americans actually think and how their attitudes shift wave to wave. Three years of waves give us roughly 448,000 temporal pairs — enough to learn the dynamics of opinion, not just its current snapshot.
The bet behind Spacing's research is that those dynamics live in a low-dimensional latent space. You don't need to model 59 attitude variables independently. You need to model the few hidden drivers underneath them — economic mood, media climate, generational cohort, and a small handful of dynamics features — and let everything else fall out of the latent state.
A new generation of AI — beyond LLMs
Most of the AI you've seen in the last three years is built on large language models — systems trained to predict the next token in a sequence of text. LLMs are extraordinary at language, but they have a deep weakness: they don't hold a structured model of the world. Ask one to forecast how a population's attitudes will shift next week and it will hallucinate a confident-sounding answer with no underlying state.
Spacing belongs to the next generation of AI: world models. A world model doesn't predict the next word — it learns a compact, structured representation of how a system actually evolves, and then rolls that representation forward in time. The same family of architectures is now powering robotics, autonomous driving, and scientific forecasting. We're the first team to apply it to opinion dynamics at population scale.
The architecture, plain English
The Spacing pipeline has three pieces:
- Tabular encoder. Each respondent's row of survey answers gets embedded by TabCL, an open-source tabular foundation model. The encoder is frozen — we don't fine-tune it — which makes training stable and cheap and lets us swap encoders as the open-source frontier moves.
- World-model dynamics predictor. A narrow predictor learns how the latent representation of a cohort moves from wave to wave. It predicts the next wave's representation, not the raw answers — the same self-supervised principle that made world models work for vision and robotics. We use VICReg-style regularization to keep the latent space well-conditioned and prevent the collapse that plagues earlier approaches on tabular data.
- Macro & media context. Each prediction is conditioned on twelve macro-economic series (the kind of indicators that move consumer mood) and twenty-two media climate features (event tone, intensity, geographic spread). Both are compressed to a handful of dimensions and fed into the predictor as side information.
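For intuition, the three pieces can be sketched in a few lines of Python. Everything here is illustrative: the latent and context dimensions, the random projection standing in for the frozen TabCL encoder, and names like `encode` and `predict_next` are our assumptions, not Spacing's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 59 attitude variables, a 64-d latent,
# 12 macro series and 22 media features squeezed into 8 context dims.
D_RAW, D_LATENT, D_MACRO, D_MEDIA, D_CTX = 59, 64, 12, 22, 8

# A fixed random projection stands in for the frozen tabular encoder.
W_ENC = rng.normal(size=(D_RAW, D_LATENT)) / np.sqrt(D_RAW)

def encode(responses):
    """Embed a cohort's survey answers; the encoder is never trained."""
    return responses @ W_ENC

# Small maps that compress context and advance the latent one wave.
W_CTX = rng.normal(size=(D_MACRO + D_MEDIA, D_CTX)) / 6.0
W_DYN = rng.normal(size=(D_LATENT + D_CTX, D_LATENT)) / 8.0

def predict_next(z, macro, media):
    """Predict the next wave's latent, conditioned on side information."""
    ctx = np.concatenate([macro, media]) @ W_CTX
    ctx_rows = np.broadcast_to(ctx, (len(z), D_CTX))
    return np.concatenate([z, ctx_rows], axis=1) @ W_DYN

cohort = rng.normal(size=(100, D_RAW))            # 100 toy respondents
z_next = predict_next(encode(cohort),
                      rng.normal(size=D_MACRO),   # macro signals
                      rng.normal(size=D_MEDIA))   # media tone
print(z_next.shape)                               # (100, 64)
```

The point of the shape of this sketch is the division of labor: the encoder is fixed, all learning happens in the small dynamics maps, and context enters only as a compressed side input.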
Every night, we re-condition on yesterday's macro signals and the day's media tone, then roll the latent state forward. The output is a fresh, calibrated synthetic Pew wave — covering the same attitude variables Pew measures, refreshed in hours rather than weeks.
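In outline, the nightly update is just a re-condition-then-advance loop. This is a toy sketch under invented assumptions: `roll_forward`, the 4-d latent, and the way context nudges the state are all illustrative, not the production system.

```python
import numpy as np

def roll_forward(z0, contexts, step_fn):
    """Advance a latent state one step per night, re-conditioning each time."""
    z = z0
    for ctx in contexts:          # one context vector per night
        z = step_fn(z, ctx)
    return z

# Toy stand-ins: a 4-d latent nudged toward a 2-d daily context signal.
step = lambda z, ctx: 0.9 * z + 0.1 * np.pad(ctx, (0, 2))
z_week = roll_forward(np.ones(4), [np.array([1.0, -1.0])] * 7, step)
print(z_week.round(3))
```

Because each night folds in fresh context before stepping, the rolled-forward state tracks the day's conditions rather than replaying a fixed trajectory.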
Why a world model instead of predicting raw answers
The instinct to predict raw deltas — "by how many points will approval of X move next month" — fails. Survey answers are noisy, ordinal, and bounded; predicted deltas blow up and the model collapses. A world model sidesteps the entire problem by predicting the next representation and decoding from there. Most of the variance lives in the latent space, not the raw scale.
And because the encoder is frozen, the dynamics module is small enough to retrain in minutes when a new Pew wave drops. The system gets sharper every wave without ever rebuilding from scratch — the opposite of an LLM, which needs a costly large-scale retraining or fine-tuning run to absorb new information.
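For readers who want the mechanics of the anti-collapse regularization, here is a minimal NumPy sketch of the two VICReg-style penalties. The hinge threshold, scaling, and toy batches are our assumptions; the invariance (prediction) term and loss weights are omitted, and none of this is Spacing's production loss.

```python
import numpy as np

def vicreg_terms(z, gamma=1.0, eps=1e-4):
    """Variance and covariance penalties over a batch of latents.

    The variance hinge pushes each dimension's std above gamma so the
    space cannot collapse to a point; the covariance penalty decorrelates
    dimensions so they do not become redundant copies of each other.
    """
    n, d = z.shape
    std = np.sqrt(z.var(axis=0) + eps)
    var_loss = np.maximum(0.0, gamma - std).mean()
    zc = z - z.mean(axis=0)
    cov = (zc.T @ zc) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d
    return var_loss, cov_loss

rng = np.random.default_rng(1)
healthy = rng.normal(size=(256, 32))        # well-spread latent batch
collapsed = np.full((256, 32), 0.5)         # every latent identical
v_healthy, _ = vicreg_terms(healthy)
v_collapsed, _ = vicreg_terms(collapsed)
print(v_collapsed > v_healthy)              # True: collapse is penalized hard
```

A fully collapsed batch has near-zero variance in every dimension, so the hinge saturates; a well-spread batch pays almost nothing. That asymmetry is what keeps the latent space usable for decoding.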
Does it actually generalize?
Yes — and not just on U.S. data. We validated the approach on three independent panels:
- British Election Study — 122,000 respondents across 30 waves. The same architecture transferred cleanly to a different country and a different question battery. Trained on three countries and asked to predict the fourth, the model came out ahead on every variable tested.
- Brand attitude panel — 500 respondents, 257 commercial-survey variables. The model hit 91.6% mean accuracy without touching the architecture, confirming that the dynamics learned on political attitudes carry over to commercial ones.
- Calibration — predictions land within ±1 Likert point of the held-out truth on 88.6% of attitude items. Expected calibration error sits at 0.05 — close to the floor for an ordinal model.
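Both calibration metrics above are straightforward to compute. A hedged sketch on invented toy data; `within_one_accuracy` and `expected_calibration_error` are our illustrative names for the standard definitions, not a published API.

```python
import numpy as np

def within_one_accuracy(pred, truth):
    """Share of items whose prediction lands within ±1 Likert point."""
    return float(np.mean(np.abs(pred - truth) <= 1))

def expected_calibration_error(conf, correct, bins=10):
    """Bin-weighted gap between stated confidence and realized accuracy."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.sum() / n * abs(conf[mask].mean() - correct[mask].mean())
    return float(ece)

pred = np.array([3, 4, 2, 5, 1, 3])      # toy synthetic-wave predictions
truth = np.array([3, 5, 4, 5, 2, 3])     # toy held-out answers
print(within_one_accuracy(pred, truth))  # 5 of 6 land within ±1
```

An ECE of 0.05 means that, averaged over confidence bins, the model's stated confidence and its realized accuracy differ by five points — which is why we describe it as close to the floor for an ordinal model.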
What this unlocks for timing decisions
Every other timing tool reads social engagement signals — likes, shares, search trends — and tries to back out attitude from behavior. That's downstream and laggy. Spacing reads the attitude itself, refreshed daily, and aligns it with the cultural and economic moment in the same model.
When we tell a creator to post tomorrow at 11 a.m. instead of today at 3 p.m., we're not pattern-matching against last month's engagement. We're predicting that tomorrow's synthetic cohort will be in a different mood, and that mood lines up with the post. When we tell a brand to hold its launch back a week, we're predicting that the cultural temperature will cool by then.
Same engine, three workflows, one daily nowcast.
What's next
The next milestones for the lab: scaling the synthetic panel to 100,000 simulated respondents, adding geographic and demographic explorers, and shipping a public API so research teams can ground their own work in our refreshed waves. If you're working on opinion forecasting, cohort simulation, or creative timing — and you want early access — write to steve@spacingagency.com.
Reading list
- LeCun, A Path Towards Autonomous Machine Intelligence (the world-model thesis)
- Bardes, Ponce & LeCun, VICReg
- Ha & Schmidhuber, World Models (the foundational paper)
- TabCL — open-source tabular foundation model
- Pew American Trends Panel methodology
- British Election Study (BES) panel
These notes describe production research as of April 2026. Numbers come from our most recent batch of cross-panel validation runs. We update this page as the work moves.