May 9, 2026 · 9 min read
You've been there. You open ChatGPT, start a fresh chat, and spend the first five minutes re-explaining your project, your constraints, and what you already tried. Then you do it again tomorrow. And the day after.
That re-entry tax adds up fast. And it's just one of several places where linear AI chat quietly burns your time.
We built Rabbitholes as a non-linear AI chat canvas because we believed spatial thinking with AI should be faster. But we didn't want to just believe it. We wanted to measure it.
After digging through four separate bodies of peer-reviewed research, we landed on a compounded estimate: working with AI on a canvas is roughly 5x faster than a traditional linear chat window for complex knowledge work.
Here's how we got there.
Every time you start a new chat in a linear AI app, you're starting from scratch. Who you are, what you're building, what you've already explored. All of that gets typed out again.
In a non-linear AI canvas like Rabbitholes, context flows between connected nodes. Your research from yesterday is still there, linked to the conversation you're having today. You pick up where you left off instead of rebuilding from zero.
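To make "context flows between connected nodes" concrete, here's a minimal sketch of the idea. This is not Rabbitholes' actual implementation; `Node`, `linked_from`, and `assemble_context` are illustrative names for one plausible way a canvas could build a prompt from a node's upstream connections.

```python
# Hypothetical sketch, not Rabbitholes' real code: each chat on the canvas is
# a node, and the context sent to the model is assembled from the node's own
# messages plus those of every node linked upstream of it.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    messages: list[str] = field(default_factory=list)
    linked_from: list["Node"] = field(default_factory=list)  # upstream nodes


def assemble_context(node: Node) -> list[str]:
    """Depth-first walk over upstream links, oldest context first."""
    context: list[str] = []
    seen: set[int] = set()

    def visit(n: Node) -> None:
        if id(n) in seen:  # skip nodes already visited via another link
            return
        seen.add(id(n))
        for parent in n.linked_from:
            visit(parent)
        context.extend(n.messages)

    visit(node)
    return context


# Yesterday's research node feeds today's conversation automatically:
research = Node("research", ["Constraint: must ship by June."])
today = Node("today", ["Draft the launch plan."], linked_from=[research])
print(assemble_context(today))
# → ['Constraint: must ship by June.', 'Draft the launch plan.']
```

The point of the sketch is the contrast with linear chat: the upstream messages ride along for free, so nothing has to be re-typed.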
What the research says: Gloria Mark, Victor Gonzalez, and Justin Harris at UC Irvine studied how knowledge workers handle interruptions. They found it takes an average of 23 minutes and 15 seconds to fully return to a task after a disruption. Their 2005 study remains one of the most cited papers on the cost of context loss in knowledge work.
For people using linear chat tools, that disruption happens every single session. You close the tab, you lose the thread. A spatial canvas that preserves and connects your context cuts this overhead to near zero.
Estimated speed gain: 1.5 to 2x
Mark, G., Gonzalez, V. M., & Harris, J. (2005). No task left behind? Examining the nature of fragmented work. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '05), 321-330. https://doi.org/10.1145/1054972.1055017
The second drain goes beyond re-entering context. Jumping between different chat windows, tabs, and tools has a real cognitive cost, even when the switch only takes a few seconds.
Think about a typical research session. You've got one chat for brainstorming, another for fact-checking, maybe a third where you're drafting. Every jump between them fragments your attention. You lose your train of thought. You forget what you were about to ask.
On a canvas, all of those threads sit side by side. You glance between them instead of navigating. The spatial layout keeps everything visible and accessible without forcing a full context switch.
What the research says: Monsell (2003) reviewed decades of task-switching experiments and found that switch costs persist even when people have plenty of time to prepare. The "residue" from your previous task lingers and reduces your performance on the next one. Rubinstein, Meyer, and Evans (2001) showed similar results: switching between tasks increases both time and error rates, even for simple activities.
When your AI conversations live on a canvas, you reduce the number of hard context switches. You scan instead of switch, and that makes a real difference over a multi-hour session.
Estimated speed gain: 1.5 to 2x
Monsell, S. (2003). Task switching. Trends in Cognitive Sciences, 7(3), 134-140. https://doi.org/10.1016/S1364-6613(03)00028-7
Rubinstein, J. S., Meyer, D. E., & Evans, J. E. (2001). Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance, 27(4), 763-797. https://doi.org/10.1037/0096-1523.27.4.763
When information is laid out in space rather than buried in a scroll, your brain offloads the organizational work to your eyes. You see relationships between ideas instead of trying to hold them all in your head.
This is the same reason sticky notes on a wall work better than a bullet list for brainstorming. Physical layout carries meaning. Position, distance, and grouping all communicate structure without requiring you to remember it.
What the research says: Nesbit and Adesope (2006) ran a meta-analysis across 55 studies comparing concept maps and knowledge maps to text-based formats. Spatial knowledge representations led to significantly better retention and transfer, with effect sizes between 0.4 and 0.8 standard deviations depending on the task.
Sweller's cognitive load theory (1988, 2011) explains the mechanism. Spatial layouts reduce what he calls "extraneous" cognitive load: the mental effort you spend organizing and navigating information instead of actually thinking about it. An AI canvas turns that overhead into structure you can see.
Estimated speed gain: 1.3 to 1.5x
Nesbit, J. C., & Adesope, O. O. (2006). Learning with concept and knowledge maps: A meta-analysis. Review of Educational Research, 76(3), 413-448. https://doi.org/10.3102/00346543076003413
Sweller, J. (2011). Cognitive load theory. In J. P. Mestre & B. H. Ross (Eds.), Psychology of Learning and Motivation (Vol. 55, pp. 37-76). Academic Press. https://doi.org/10.1016/B978-0-12-387691-1.00002-8
A canvas doesn't just display text. It encodes your information in two channels at once: the words in each conversation, and the spatial position, color, and connections between nodes. Your brain processes these through separate systems, and that double encoding makes recall significantly faster.
You probably already do this intuitively. When someone asks where you read something, you often remember where on the page it was before you remember the exact words. That's spatial memory at work. A canvas makes this happen naturally across all your AI conversations.
What the research says: Allan Paivio's dual coding theory (1986) showed that information encoded in both verbal and spatial formats is recalled 2 to 3x better than information encoded in just one format. The effect holds across free recall, recognition, paired-associate learning, and problem-solving tasks.
More recent neuroimaging studies confirm the underlying mechanism: verbal and spatial processing light up distinct cortical networks. Engaging both creates richer memory traces with more paths back to the information when you need it.
Estimated speed gain: 2 to 3x recall improvement
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195066661.001.0001
Mayer, R. E. (2009). Multimedia learning (2nd ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511811678
These gains don't exist in isolation. They stack on top of each other during a real work session.
| Factor | Speed gain | What changes |
|---|---|---|
| No more re-explaining context | 1.5 to 2x | You pick up where you left off |
| Less context switching | 1.5 to 2x | You scan instead of navigate |
| Spatial cognitive offloading | 1.3 to 1.5x | The layout carries the structure |
| Dual coding recall boost | 2 to 3x | You find things faster |
Conservative estimate (lower bounds): 1.5 × 1.5 × 1.3 × 2.0 = 5.85x
Aggressive estimate (upper bounds): 2.0 × 2.0 × 1.5 × 3.0 = 18x
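The compounding is just a product of the four per-factor bounds, which you can check in a couple of lines:

```python
# Multiply out the lower and upper bounds for the four factors:
# context re-entry, context switching, spatial offloading, dual-coding recall.
from math import prod

lower = [1.5, 1.5, 1.3, 2.0]
upper = [2.0, 2.0, 1.5, 3.0]

print(f"conservative: {prod(lower):.2f}x")  # conservative: 5.85x
print(f"aggressive:   {prod(upper):.0f}x")  # aggressive:   18x
```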
We call it 5x because it sits at the conservative end of the compounded range. For complex, multi-threaded knowledge work like research, planning, and iterative problem-solving, we're confident that number holds up.
For quick single-question lookups, the difference is smaller. But for deep, multi-hour sessions where you're working across dozens of connected threads? The real advantage likely exceeds 5x.
Not everyone needs a canvas. If you're asking ChatGPT to write a quick email, linear chat is fine.
But if your work involves multi-threaded research, planning, or iterative problem-solving, non-linear AI chat pays off fast.
The common thread is complexity. Whenever your thinking branches, loops back, or runs in parallel, a spatial AI workspace outperforms a single chat window.
Non-linear AI chat lets you work with AI across multiple connected conversations on a spatial canvas, instead of a single scrolling thread. You can branch, connect, and arrange your AI conversations visually, which gives you control over what context the AI sees and how different ideas relate to each other.
ChatGPT and similar tools use a linear chat format: one message after another in a single thread. An AI canvas like Rabbitholes places each conversation as a node on a 2D workspace. You can connect nodes to share context between them, run parallel lines of thinking, and see your entire research landscape at a glance.
Based on four bodies of peer-reviewed research covering context switching, cognitive load, and dual coding theory, the compounded speed gain for complex knowledge work sits around 5x at the conservative end. The gains are smaller for simple tasks and larger for deep, multi-session research.
The learning curve is small. If you've used sticky notes on a wall or a whiteboard for brainstorming, you already understand the concept. Tools like Rabbitholes are designed to feel intuitive: you create nodes, connect them, and chat with AI in each one.
Linear chat was built for conversations. But your thinking isn't a conversation. It's a web of connected ideas, branching questions, and parallel threads.
When your tools match the shape of your thinking, you stop fighting the interface and start moving at the speed of your ideas.