What is the name of OpenAI's AI safety research initiative?
AI Alignment
AI Security
AI Ethics
AI Governance
Answer
OpenAI's AI safety research initiative is called AI Alignment. It focuses on ensuring that AI systems behave in accordance with human values and intentions, and on addressing safety concerns as these systems grow more capable.