(Edited because I took a second look at your post and realised you had addressed some of the points I raised.)

We both seem to agree that expecting the US AI Safety Institute* to produce evals for 200 different industries would be a mistake.

I guess where we differ is that I would like to see the AISI more narrowly focused, since focused orgs are more likely to succeed at their tasks.

So whilst I do see a role for an AISI in terms of providing advice to other government departments, I would personally lean towards leaving existing regulators responsible for contracting/funding evals orgs, with an exception for industries critical to national security.

* I'm actually Australian, so I've mostly thought about what an Australian AI Safety Institute should do, and I'm largely carrying those assumptions over here. This could be a weakness in my analysis if there are significant contextual differences I'm failing to take into account.
