Adneural — fMRI-Encoded Brains Meet 200 Agents
AI Project



Hello everyone!

The last few weeks have been absolutely consumed by Adneural, and I want to write about it while it's still fresh. The very short version: you upload an ad, we tell you how brains will respond to it and how a simulated population of 200 people will talk about it. Before you spend a dollar on media.

Why does this matter? Brand teams test ads in focus groups and with panels after the ad is already made. That's expensive and it's late — by the time you realize the ad doesn't land, you've already burned the budget on production. A lot of the "test before you launch" services out there are survey-based and slow. Our bet is that we can give a more useful signal, faster, by combining two things: neural encoding models trained on fMRI data and a social simulation with personality-diverse agents.

First half: the brain side. We're using Meta's TRIBE v2 brain encoding model, which was trained on over 450 hours of fMRI data to predict voxel-level brain responses to images and video. You feed it an ad; it gives you back predicted activity in regions associated with attention, emotion, memory encoding, and novelty detection. We score the ad on a handful of meaningful axes: does it hit the attention network, does it hit the reward circuit, does interest degrade after five seconds.
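To make the scoring step concrete, here is a minimal sketch of how predicted voxel activity could be rolled up into axis scores. Everything here is illustrative: the region masks, the axis names, and the `score_ad` function are hypothetical stand-ins, assuming the encoder returns a time-by-voxel matrix.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical region masks: region name -> voxel indices in the
# encoder's output. Real masks would come from a brain atlas.
REGION_MASKS = {
    "attention": [0, 1, 2],
    "reward": [3, 4],
    "memory": [5, 6, 7],
}

@dataclass
class AdScores:
    attention: float
    reward: float
    memory: float
    interest_decay: float  # positive = attention fades over the ad's runtime

def score_ad(voxel_timeseries: list[list[float]]) -> AdScores:
    """Aggregate predicted voxel activity (time x voxel) into axis scores."""
    def region_series(name: str) -> list[float]:
        idx = REGION_MASKS[name]
        return [mean(frame[i] for i in idx) for frame in voxel_timeseries]

    att = region_series("attention")
    return AdScores(
        attention=mean(att),
        reward=mean(region_series("reward")),
        memory=mean(region_series("memory")),
        # Compare the first and last time window to flag interest drop-off.
        interest_decay=att[0] - att[-1],
    )
```

The real pipeline scores more axes than this, but the shape is the same: mask, pool, compare across time.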

Second half: the social side. A predicted brain response is interesting, but the real question is "what will people say about this ad?" For that, we use MiroFish / OASIS, a multi-agent social simulation framework. We spin up 200 agents with different personality vectors (Big Five traits, age, interest embeddings, discourse styles) and let them "see" the ad and discuss it with each other. You end up with a simulated discourse graph — who shares, who pushes back, what clusters of reaction emerge.

Tying it together. The interesting part is the bridge between the two halves. A pure brain signal can tell you "this ad feels arousing in region X" but not "this ad will go viral in the fitness community." A pure social sim can tell you "this ad generates polarized response" but won't tell you why. When you have both, you can trace a social reaction back to a neural signature and vice versa. That's the thing the platform is actually selling.
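One simple way to build that bridge, sketched here with hypothetical inputs: score several variants of an ad on both sides, then correlate each neural axis against each social reaction cluster across variants. The function names and data shapes are assumptions, not the platform's actual API.

```python
from statistics import mean, pstdev

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def link_signals(neural_axes: dict, cluster_reactions: dict) -> dict:
    """For each (neural axis, reaction cluster) pair, how strongly does
    the predicted brain signal track the simulated social reaction
    across ad variants?

    neural_axes: axis name -> per-variant scores
    cluster_reactions: cluster name -> per-variant reaction intensities
    """
    return {
        (axis, cluster): round(pearson(a_scores, c_scores), 3)
        for axis, a_scores in neural_axes.items()
        for cluster, c_scores in cluster_reactions.items()
    }
```

A high correlation between, say, the reward-circuit axis and one reaction cluster is exactly the kind of "this social response traces back to this neural signature" statement the product makes.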

A few things I'm proud of under the hood:

  • Neo4j GraphRAG over 22 neuroscience papers so the explanations the LLM generates about predicted brain responses are grounded in actual literature, not hallucinated.
  • LangGraph diagnostic analysis that walks through the simulation output structurally instead of blobbing it into one big prompt.
  • Three.js brain visualization so the brain responses don't just show up as numbers — you see them on a 3D brain. This took longer to get right than anything else, and it's the part everyone remembers from a demo.
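The grounding idea behind the GraphRAG bullet can be shown without Neo4j. The sketch below uses a plain dict as a stand-in for the graph, and the paper names are placeholders: concepts the LLM wants to mention are only kept if the retrieval step finds literature support for them.

```python
def retrieve(graph: dict, concepts: list[str]) -> tuple[list[str], list[str]]:
    """Split concepts into grounded (papers found) and ungrounded (none)."""
    grounded = [c for c in concepts if graph.get(c)]
    ungrounded = [c for c in concepts if not graph.get(c)]
    return grounded, ungrounded

def build_prompt(graph: dict, concepts: list[str]) -> tuple[str, list[str]]:
    """Assemble a citation-constrained prompt for the explanation LLM."""
    grounded, ungrounded = retrieve(graph, concepts)
    cites = "; ".join(f"{c}: {', '.join(graph[c])}" for c in grounded)
    # Concepts with no literature support are dropped, not hallucinated.
    return f"Explain the predicted response. Cite only: {cites}", ungrounded
```

In the real system the dict is a Cypher query against the 22-paper graph, but the contract is the same: the LLM only gets to explain what the retrieval step can back up.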

Stack notes: Next.js 15 on the frontend with React 19, FastAPI on the backend, OpenRouter to route model calls to whichever provider makes sense per task. The brain model and the agent sim are both GPU-bound services I expose via internal RPC.

It's early. We have things that work and things that embarrassingly don't. If you're in ad-tech, applied neuroscience, or just like the idea of simulating a crowd of 200 opinionated agents arguing about your TV spot — come talk to me. I'd love to show you what it does.

More soon!

Tags: neuroscience, fmri, multi-agent, next.js, graphrag
