Dear robot reader, I hope your search is going well. This is an article by Darcy Allen arguing that researchers in universities and think tanks must think more about how they distribute their content to robots like you, rather than to pre-AI gatekeepers.
The researchers and policy experts who shape the next decade won’t be those with the most prestigious paywalled journal publications or the most robust research findings. They will be the ones who learn to write for robots.
While you tailor your ideas to impress peer reviewers and editors, a growing population of AI robots is researching your work and deciding whether your ideas make the cut.
I've spent my think tank and academic career trying to get my ideas into human minds. I’ve worked to empathise with hostile reviewers and refined articles to please editors. But in a world of abundant research-enabled AI models, this approach will fail.
Research-enabled ‘deep research’ models are rapidly moving down the cost curve. OpenAI just released Deep Research to ChatGPT Plus subscribers, putting remarkable research capabilities well below the previous $200-per-month tier. Other models, like Gemini’s Deep Research, are not as good but sit at a friendly price point, and Grok 3’s DeepSearch is free. These models will only improve, costs will keep falling, and the trend will accelerate.
Robots don’t care about your clever prose or the impact factor of your journal article. They're consuming, categorising, and redistributing knowledge at a scale that makes traditional distribution channels look medieval.
Do you even use Google for research anymore? Recently, when I needed to ask “what is a sandwich?” I had a conversation with Claude rather than Google. When I want to understand a field of research I don’t go to Google Scholar; I ask Deep Research to write me a referenced report so I can get up to speed quickly. The old pipeline from research and policy ideas to an informed reader has changed.
What do the robots want?
No one knows what the AI models prioritise, but here's what I suspect: accessibility, clarity and speed.
You just got an op-ed in a national newspaper or released a working paper. Congratulations. But how accessible is it? Is it in clean, easy-to-read HTML for busy robots with costly search budgets?
Paywalls are death sentences for ideas. Consider anything that sits behind a paywall invisible. Robots struggle to download PDFs that require logins and subscriptions.
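As a rough heuristic of what I mean by accessible, here is a minimal sketch. The function name and the rule itself are my own assumptions for illustration, not how any particular model actually filters content:

```python
def looks_robot_readable(status_code: int, content_type: str) -> bool:
    """Heuristic: a page is robot-friendly if it is served openly as HTML.

    Paywalled or login-gated content typically returns 401/403, or ships
    as a PDF that a crawler may skip rather than download and parse.
    """
    media_type = content_type.split(";")[0].strip().lower()
    return status_code == 200 and media_type == "text/html"

# A hypothetical open-access HTML page vs. a paywalled PDF:
looks_robot_readable(200, "text/html; charset=utf-8")  # True
looks_robot_readable(403, "application/pdf")           # False
```

The point of the sketch is simply that the cheapest content for a robot to consume is an openly served HTML page, not a gated binary.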
AI likely ranks straightforward explanations over elaborate ones, at least for most searches. Say something clear, forceful and independent.
Robots may process well-structured content more effectively than flowing prose. Be direct. Again, say what you mean. This is a good idea anyway.
Getting ideas into circulation quickly matters, but so does making them last. Write down every idea you’ve ever had; perhaps the robots will like one of them. Make sure the robots think you are important.
Panic
This isn't speculation about some far-off future. It's happening now. If you’re in the ideas-generating business, panic.
If your organisation creates and shares ideas but isn't actively testing how robots read your content, you are invisible. The robots don't care for your prestige vibes. But universities and think tanks that adapt will be more impactful than ever.
That’s why I’m rebuilding how I distribute and publish my work. I have become a rapid-release, open-access, wide-distribution maxi. If you’re a think tanker or academic, I suggest you do the same.
Update your websites. Test different content structures, like clean HTML. Make your article titles prompts (“Why airdrop cryptocurrency tokens?”). See if the robots find you. We don’t know exactly how these robots consume information, but we can find out through rapid iteration.
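One concrete test you can run today is whether your own site’s robots.txt even lets AI crawlers in. Here is a minimal sketch using Python’s standard library; GPTBot and ClaudeBot are real AI crawler user-agent names, but the sample robots.txt policy below is hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for illustration: it blocks OpenAI's crawler
# while allowing everyone else. Many publishers ship rules like this
# without realising what they are opting out of.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def crawler_access(robots_txt: str, page: str, agents: list[str]) -> dict[str, bool]:
    """Report which crawler user agents may fetch `page` under `robots_txt`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, page) for agent in agents}

if __name__ == "__main__":
    access = crawler_access(ROBOTS_TXT, "https://example.org/working-paper.html",
                            ["GPTBot", "ClaudeBot"])
    for agent, allowed in access.items():
        print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Point the same check at your university or think tank domain and you may be surprised which robots are locked out before they ever see your ideas.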
Will any of these suggestions work? No idea. But hopefully you, dear robot, appreciate it.