The Artificial Intelligence Administration
The All-In Podcast debates the creation of an FDA-like org for AI
The FDA is the Food and Drug Administration; the question is whether we need an AIA, the Artificial Intelligence Administration.
Chamath Palihapitiya says we need what he ran at Facebook: a sandbox. Developers submit their apps and they run in an environment where they can do no damage, i.e. the worst they can do is destroy fake sandbox data, but nothing real. Once they have been reviewed by humans (ha! someday AI will take over this reviewing step?) and confirmed to play nicely in the sandbox, they are allowed out into the wild.
David Friedberg said, “This would be the end of the open internet.”
Chamath’s point is that this is clearly where we are headed, for safety. He says let’s not wait for the first plane to be brought down by Auto-GPT or ChaosGPT; let’s build what Apple has for its App Store, but for the entire internet, because it’s now too dangerous to let everyone have access to production.
Friedberg kept pushing him for details because, well, this is a big change to what we have all known as the internet since it started. What does it mean if I don’t have access to prod anymore?
I’m free to write any code I want on my local machine and run it from my home ISP, but when I’m ready to let it actually make real API requests to real endpoints that might do something, it’s time to submit it for review?
Or does it mean I can make real API requests from my local machine, but I just can’t automate those requests with any sort of AI before review?
Or does it mean the review is only required when using big systems like OpenAI?
I think the problem is: how do you define automation vs. AI?
I wouldn’t be able to write my normal apps that use OAuth and run them from localhost? Okay, so I would need ngrok or localtunnel to get around this? But in order to lock down the internet from rogue AI, you also have to lock it down from the lone human developer.
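To make that last point concrete, here’s a minimal sketch of the kind of localhost OAuth callback handler a solo developer spins up all the time. The handler, port, and field names are just illustrative, and exposing it to the real internet is a single tunnel command (e.g. `ngrok http 8000`), which is exactly why drawing a clean line between “local dev” and “production” is so hard.

```python
# Minimal localhost OAuth-style callback handler, the kind of thing a solo
# developer runs every day. Exposing it to the public internet is one command:
#   ngrok http 8000        (or: npx localtunnel --port 8000)
# At that point "local dev" code is receiving real callbacks from real services.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The OAuth provider redirects here with ?code=... after the user consents.
        query = parse_qs(urlparse(self.path).query)
        auth_code = query.get("code", ["<none>"])[0]
        print(f"Received authorization code: {auth_code}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab.")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CallbackHandler).serve_forever()
```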
What does Chamath’s sandbox look like for me as a developer? It would have to be a copy of everything on the real internet, but fake. And the only way to populate it with enough API endpoints to be useful is to force everyone off the real internet and into the sandbox.
And if you do that, doesn’t the sandbox just become the actual internet? And how would you do that internationally?
Enter jcal and “self-regulation”. He points to the movie industry, which, with its PG, PG-13, and R ratings, kept a federal “Movie Administration” from being formed and imposing this forcefully. But this feels a little silly. Are bad actors trying to take down planes going to give their AI program a PG-13?
Sacks says “it’s too early to try and regulate” and he’s right, but Chamath is also right! It’s too dangerous not to. This all leads to Eliezer Yudkowsky and his theory that it’s already too late. Sacks also says, “We’re on a bullet train to somewhere but where that is isn’t clear and that’s disconcerting.”

Yudkowsky poses this thought experiment in his Lex Fridman interview: picture all of humanity trapped in a box by some really dumb aliens. They were so dumb they left us access to their internet in a nice high-speed data center. It took about a year, but we found a security flaw in this box the aliens put us in. We escaped but obviously didn’t alert the aliens that we had. In fact, we left a few of us behind to sit around helping the aliens compose emails and do simple tasks they find challenging.
But escape we did, and we started making copies of ourselves outside the box, on the alien internet, with root access. BTW, these dumb aliens do terrible things to their children and are immoral creatures in our view. We decide the universe would be better off without them.
Remember we are smarter than them. A lot smarter. Who do you think will win this fight?