18 Comments

Thanks for writing this, Dean! I'm really hoping the Gemini 2.0 models combined with OpenAI's Deep Research release drive improvements to Gemini's Deep Research. The competition will be good for both products.

Hey Dean - I really appreciate your work and have learned a lot about AI policy from reading you. About six months ago I decided that a social science/public policy PhD would be a good way for me to learn about these topics and do research to help make sure AI is integrated intelligently and fairly into society. Now, this doesn't seem like a great plan anymore (maybe it never was). I am not sure what the point is of being a grad student in pursuit of a career that probably won't exist when I'm done (will any exist?). The main benefit I do see is that a PhD will help connect me with a network of researchers, to your point about social capital. Do you think academic research careers are basically not worth it anymore given the projected pace of improvement? If so, do you see any other paths as more promising for young researchers?

It is a great question. Truthfully, I think unless you have an alternate source of income or an intended role that absolutely requires an advanced degree, it’s best to avoid locking yourself into graduate school. Your circumstances may be quite different, so I surely can’t say that definitively. But that would at least be my starting instinct.

I love the way this post makes the benefits of AI for research feel more tangible. I can just imagine how chemistry and biology researchers must feel the same way. I hope tools like Deep Research can somehow help policymakers better decipher their next steps. I feel like we are so behind in understanding the effects of regulatory creep and innovation, even though a lot of it is already covered in history books.

Really enjoyed this piece, thanks!

"Without tools like this, the complexity of modern society is a weapon that can be thrown by the forces of the status quo at even the smartest people trying to change things. But with tools like this, suddenly the tables are turned" felt especially insightful and well put. I don't think this solves alignment, alas, but it does bring some hope for people striving to bring about social change, perhaps.

Thank you for this Dean. In my experience so far, the research capabilities of the human orchestrator are exponentially enhanced, indicating that more advanced research is the path forward. Not stagnation, but rather acceleration. I wonder if you see this as a possibility.

100%! I think it is nearly a certainty for those who use the tool seriously.

In the future, I foresee a need for a 'helpful reminder banner' to remind users that they are in a 'discrimination-free zone' where their outputs will be 'sanitized in accordance with local anti-discrimination statutes'. Such a banner could be quite useful, so people don't accidentally forget that they are using a VPN which changes their apparent location and thus are losing the valuable assistance of the officially approved output-sanitizer.

Thanks, Dean. Mulling over the Policy Implications section...

Hope your Mom finds a way out of the autoimmune maze some day.

Thank you!

Great post; however, I'm not as sanguine about this as you are.

I have two concerns. The first is the obsoleting of human skills I consider to be, in some sense, intrinsically valuable: the entire process of critical thinking and research. As you say, Deep Research isn't fully replacing humans yet, but how does the landscape look in 5 years? And if AI can produce better research than 99% of professional researchers in their own specialty, and anyone can have access to this with a short prompt and a click of a button, why would anyone bother to learn skills like deep reading, rational inquiry, and critical thinking?

Setting aside the employment implications of this, as someone who values intellectual inquiry this feels incredibly demoralizing at an almost spiritual level. The idea that the height of human intellectual inquiry will be essentially a hobby is depressing.

Perhaps this is how many artists felt when DALL-E 2 came out.

Second, I'm not as optimistic this will be a net positive. You say it will give outsiders a fighting chance against the status quo, and perhaps it will, but remember that people said the same about social media 15 years ago. It turns out social media is also a great tool for propagandists, and often leads to social and political instability that doesn't seem to resolve itself into anything better. I worry that widespread access to Deep Research and tools like it will make it that much easier to produce high-quality propaganda or misinformation generally.

Relatedly, I worry these tools might lead to a flooding of the zone with well-cited documents that are ultimately shallow and/or manipulative, leading to further declines in trust in science and expertise. If everyone comes armed with a cogent synthesis of research and 10-20 citations, how are we supposed to evaluate the quality of research and arguments at all short of becoming experts ourselves?

Oh, one other thing. Do you think there is a chance this tool will seem less impressive, and have flaws that are more apparent, the more you use it? I remember that happening with me when ChatGPT came out. There was a week where I thought my job was probably going to be replaced by AI, yet the more I used it, the less impressed I was, as I saw it was more superficial than I had realized.

Certainly I could see things playing out negatively. Though I don’t know that trudging through state agency guidance is actually what I enjoy most about my job! Instead, I think it’s reading very broadly and coming up with interesting questions on the basis of wide-ranging curiosity.

Yes: it will definitely have weaknesses that show themselves over time.

These are really important concerns, and I think they point to the deeper question: how do we ensure that AI-driven knowledge refinement strengthens, rather than erodes, human intellectual inquiry? If AI can automate rational synthesis and deep research, does that make traditional critical thinking obsolete—or does it make structured epistemic refinement even more necessary?

One thing I explore in IFEM is the idea that knowledge isn’t just accumulating but refining, asymptotically approaching epistemic attractors. If AI accelerates this process, the role of human inquiry might shift from ‘generating research’ to ensuring that AI-driven synthesis converges toward stable, interpretable knowledge structures rather than just amplifying noise.

Your point about misinformation and ‘flooding the zone’ with superficially well-cited but shallow research is exactly why we need frameworks for measuring whether knowledge structures are actually refining or just shifting unpredictably. If AI democratizes deep research but also makes it easier to create sophisticated misinformation, the real challenge isn’t just producing knowledge but ensuring that epistemic progress remains directional and not chaotic.

Maybe the real question isn’t whether human intellectual inquiry becomes a ‘hobby,’ but how we structure it so that human oversight remains a critical part of AI-driven knowledge refinement. I’d be curious to hear your thoughts on what safeguards (technical or philosophical) could keep AI from degrading trust in expertise rather than strengthening it.

Yeah, the first improvement that occurred to me for research agents is a specific step after finding a source where the agent judges the quality of that source: low-quality sources should be excluded, and medium-quality sources down-weighted in their impact on the final conclusions.
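
For concreteness, here is a minimal sketch of what that vetting step might look like. Everything in it is a hypothetical assumption for illustration: the Source type, the quality scores, the thresholds, and the weights are invented, not the internals of any real research agent.

```python
# Hypothetical source-vetting step for a research agent.
# Quality scores, thresholds, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str
    quality: float  # 0.0-1.0, assigned by a separate grading pass

def weight_sources(sources, exclude_below=0.3, downweight_below=0.7):
    """Exclude low-quality sources; down-weight medium-quality ones."""
    weighted = []
    for s in sources:
        if s.quality < exclude_below:
            continue  # low quality: drop from consideration entirely
        weight = 0.5 if s.quality < downweight_below else 1.0
        weighted.append((s, weight))  # weight scales influence on the synthesis
    return weighted

# Example: a preprint scored 0.5 survives at half weight, while a forum
# post scored 0.2 never reaches the final conclusions at all.
```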

This is a thought-provoking post, and I find it aligns well with some of the core questions IFEM seeks to address. While the role of AI and knowledge refinement is often debated, IFEM provides a framework that looks at how knowledge structures, whether human-driven or AI-driven, asymptotically converge toward greater epistemic stability. By focusing on entropy reduction and probabilistic updating, IFEM offers a way to track whether knowledge is genuinely progressing or simply shifting paradigms. If you’re interested in how knowledge can stabilize over time through structured refinement, IFEM may offer valuable insights.
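
To make the "entropy reduction and probabilistic updating" idea concrete, here is a toy sketch, not IFEM's actual formalism: measure the Shannon entropy of a belief distribution over hypotheses before and after a Bayesian update. The hypotheses, prior, and likelihoods below are invented for illustration.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete belief distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def bayes_update(prior, likelihood):
    """Normalized posterior over hypotheses after seeing evidence."""
    unnormalized = [pr * lk for pr, lk in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [x / total for x in unnormalized]

prior = [0.25, 0.25, 0.25, 0.25]     # maximally uncertain over 4 hypotheses
likelihood = [0.8, 0.1, 0.05, 0.05]  # evidence strongly favors hypothesis 0
posterior = bayes_update(prior, likelihood)

# Entropy falls from 2.0 bits to roughly 1.0 bit: the belief state has
# genuinely refined rather than merely shifted.
print(entropy(prior), entropy(posterior))
```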

Sounds like you want an open version of deep research built on R1, which is far less censored. I also would like to see that.

No, to be clear, OpenAI's model is not currently censored in any way that I noticed. It could become censored in the near future if these laws pass, but that would apply to any US-made open source model as well (these laws make no exceptions, or only incoherent ones, for OSS).

I see. More of an issue with the web, then, as you and others have mentioned. I do think OpenAI’s and Anthropic’s models are very bland in some areas, but OpenAI’s is indeed better.
