Human Hallucination Prediction
Use neural networks to predict the visual hallucinations humans may experience.
How to predict hallucinations:
- Select an example image below and click "Load Parameters" to apply its prediction settings
- Click "Run Generative Inference" to predict what hallucinations humans may perceive
- View the prediction: watch as the model reveals the perceptual structures it expects, matching what humans typically hallucinate
- You can also upload your own images
- You can download the result as a .gif file together with a configs.json recording the settings (a sketch of its likely contents follows this list)
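The schema of configs.json is not documented on this page; the Python sketch below shows one plausible layout, with every field name inferred from the parameters described further down, so all of them are assumptions rather than the tool's actual format.

```python
# Hypothetical sketch of configs.json: field names are guesses based on
# the parameters listed on this page, not the tool's actual schema.
import json

config = {
    "method": "prior_guided_drift_diffusion",  # or the confidence-increasing variant
    "prior_strength": 0.5,    # step size toward the model's expected percept
    "num_iterations": 200,    # how many prediction steps to perform
    "hierarchy_level": 4,     # which perceptual level to predict from
    "epsilon": 0.1,           # stimulus fidelity constraint
}

with open("configs.json", "w") as f:
    json.dump(config, f, indent=2)
```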

Examples
Select an example and click "Load Parameters" to apply its settings
🧠 About Hallucination Prediction
This tool predicts human visual hallucinations using generative inference with adversarially robust neural networks. Robust models develop human-like perceptual biases, allowing them to forecast what perceptual structures humans will experience.
Prediction Methods:
Prior-Guided Drift Diffusion (Primary Method)
Starting from a noisy representation, the model iteratively converges toward what it expects to perceive, revealing the predicted hallucination (sketched below)
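A minimal sketch of what such an update loop could look like in PyTorch, assuming a differentiable classifier `model` stands in for the robust network. The function name, the logsumexp confidence proxy, and the exact drift, diffusion, and projection steps are all illustrative assumptions, not the tool's implementation.

```python
import torch

def predict_hallucination(model, stimulus, prior_strength=0.5,
                          num_iterations=200, epsilon=0.1, noise_scale=0.01):
    """Sketch of prior-guided drift diffusion: drift toward the model's
    expected percept under noise, while staying close to the stimulus."""
    # Start from a noisy version of the input stimulus.
    x = (stimulus + torch.randn_like(stimulus)).detach().requires_grad_(True)
    for _ in range(num_iterations):
        logits = model(x)
        # Drift term: ascend a confidence score so x moves toward what the
        # network "expects" to see (logsumexp is one common proxy).
        score = logits.logsumexp(dim=-1).sum()
        grad, = torch.autograd.grad(score, x)
        with torch.no_grad():
            x += prior_strength * grad              # drift toward the prior
            x += noise_scale * torch.randn_like(x)  # diffusion noise
            # Stimulus fidelity: project back into an epsilon (L-infinity)
            # ball around the original input.
            x.copy_(stimulus + (x - stimulus).clamp(-epsilon, epsilon))
    return x.detach()
```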
Increase Confidence
The model moves away from unlikely interpretations to reveal the most probable perceptual experience (sketched below)
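A sketch of this variant under the same assumptions: plain gradient ascent on the softmax confidence of the currently favored class, with no diffusion noise. Again, the names and the confidence objective are illustrative, not taken from the tool.

```python
import torch
import torch.nn.functional as F

def increase_confidence(model, stimulus, step_size=0.05,
                        num_iterations=100, epsilon=0.1):
    # Start from the stimulus itself rather than a noisy version.
    x = stimulus.clone().detach().requires_grad_(True)
    for _ in range(num_iterations):
        probs = F.softmax(model(x), dim=-1)
        # Raise the probability of the model's currently favored class,
        # implicitly moving away from unlikely interpretations.
        confidence = probs.max(dim=-1).values.sum()
        grad, = torch.autograd.grad(confidence, x)
        with torch.no_grad():
            x += step_size * grad
            # Same stimulus-fidelity projection as in the drift sketch.
            x.copy_(stimulus + (x - stimulus).clamp(-epsilon, epsilon))
    return x.detach()
```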
Parameters (a usage sketch follows this list):
- Prior strength: How strongly each step moves toward the model's expected percept
- Number of iterations: How many prediction steps to perform
- Hierarchy level: Which perceptual level to predict from (early edges vs. high-level objects)
- Epsilon (stimulus fidelity): How closely the prediction must match the input stimulus
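A hypothetical end-to-end call showing how these parameters could feed the earlier drift-diffusion sketch. The plain torchvision ResNet is only a stand-in for an adversarially robust model, and the hierarchy level (which would select the layer the prior acts on) is omitted for brevity.

```python
import torch
import torchvision.models as models

robust_model = models.resnet18(weights=None).eval()  # stand-in, not actually robust
stimulus = torch.rand(1, 3, 224, 224)                # stand-in for an uploaded image

prediction = predict_hallucination(
    robust_model, stimulus,
    prior_strength=0.5,   # Prior strength
    num_iterations=200,   # Number of iterations
    epsilon=0.1,          # Epsilon (stimulus fidelity)
)
```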
Why Does This Work?
Adversarially robust neural networks develop perceptual representations similar to human vision. When we use generative inference to reveal what these networks "expect" to see, the result matches what humans hallucinate in ambiguous images, allowing us to predict human perception.
Developed by Tahereh Toosi