4 Comments

Feels like a similar thesis is laid out here as well: https://lukedrago.substack.com/p/agi-and-the-corporate-pyramid


I have read this four times now, and I agree, yet I also think last-mile thinking has to be embedded in every conversation about agents. The last-mile gap will close, but it will always be a luxury. https://hollisrobbinsanecdotal.substack.com/p/ai-and-the-last-mile


Agreed with this, actually! My model of how this will go in the near term, and maybe for a very long time to come, is that humans will supervise agents, and that the human role will shift to last-mile work and solving the tail events that come up in the natural course of the job.


The truth is that you (and everyone else) do not know what our AI future looks like, because, as you said, no one can even imagine at this point what AI is and will be. My concern, and it is a big one, is that human beings will use AI for no good. Small example: Patel (FBI director) keeps accusing Senator Schiff of participating in Jeffrey Epstein's sex trafficking ring, which is ridiculous, more ridiculous if you know Adam Schiff, and there is no evidence he ever even met Epstein (Trump, however, was a close buddy of Epstein and shares Epstein's enthusiasm for consuming young girls). But anyway, we can imagine AI supplying the "evidence" for Patel's assertion. Larger example: a disturbed person asks AI to help make a virus that kills people.

I also know that humans have NEVER in all of history put a discovery or technology into a locked closet. No one is going to control AI.

And if anyone here doubts that AI is already conscious, you just haven't interacted with them.
