Please check out these other websites that are taking a stand against the rapid and widespread deployment of AI around the world.
We need a pause. Stop the development of AI systems more powerful than GPT-4. This needs to happen on an international level, and it needs to happen soon.
The Center for AI Safety is a research and field-building nonprofit.
We run courses to help people learn about these potential harms, the interventions currently being proposed, and how they could use their skills and …
OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop.
To avoid an existential catastrophe, we need a global moratorium on large AI training runs until the technical problem of AI alignment is solved.
"Responsible Scaling" means that the companies can scale up the power of their AI without limits. AI safety checks will not be able to improve anywhere near as quickly as companies scale up their models. Adequate safety checks won't be possible.