We should not stop AI. But we should insist that it be made safe, better, and more trustworthy.
Consolidating the ideas from Part III, we should demand the following from our governments, and settle for nothing less:
- No training on copyrighted work without compensation
- No training without consent; training should be opt-in, not opt-out
- No coercion: a clear option to use your car, phone, app, or other gadget without making your data available for model training or targeted advertising
- Clear statements from every piece of software, on the web, in your car, and so forth, about what data is being collected and how it’s shared
- Transparency around data sources, algorithms, corporate practices, and harms caused
- Transparency around where and when and how AI is being used
- Transparency around environmental impact
- Clear liability for harms caused
- Independent oversight, from scientists and civil society
- Layered oversight
- Pre-deployment evaluations of risks versus benefits, for large-scale deployments
- Post-deployment auditing
- Tax incentives for AI that benefits society
- Extensive programs for AI literacy
- An agile and empowered AI agency
- International governance of AI
- Research into new approaches to building trustworthy AI
None of this entails inhibiting innovation. All of it will make the world a better place. We have a right to demand all of it, and to vote out lawmakers who don’t move quickly to make sure that AI has the checks and balances we need.