Fortune - OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk
CJL Director Courtney Radsch was quoted, criticizing OpenAI for its decision to stop assessing AI models for the potential to persuade or manipulate users before releasing them.
OpenAI said it will no longer assess its AI models before release for the risk that they could persuade or manipulate people, potentially helping to swing elections or fuel highly effective propaganda campaigns.
The company said it would instead address those risks through its terms of service, restricting the use of its models in political campaigns and lobbying, and monitoring released models for signs of violations.
Read the full article here.