OpenAI launches Superalignment to prevent rogue AI
Superalignment aims to assemble a team of top machine learning researchers and engineers to develop a "roughly human-level automated alignment researcher." This automated researcher would be responsible for conducting safety checks on superintelligent AI systems. By proactively working to align AI systems with human values and to develop the necessary governance structures, OpenAI aims to mitigate the dangers that could arise from the immense power of superintelligence.