Meta allegedly replacing humans with AI to assess product risks


According to a new review of internal documents by NPR, Meta plans to replace human risk assessors with AI as the company moves toward near-complete automation.

Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including algorithm updates and safety features, as part of a process known as privacy and integrity reviews.

But in the near future, these essential assessments may be taken over by bots, as the company looks to automate 90 percent of this work using artificial intelligence.


Despite previously stating that AI would only be used to assess "low-risk" releases, Meta is now rolling out the technology for decisions on AI safety, youth risk, and integrity, which includes misinformation and violent content moderation, NPR reported. Under the new system, product teams submit questionnaires and receive instant risk decisions and recommendations, with engineers taking on greater decision-making power.


While automation could speed up app updates and developer releases in line with Meta's efficiency goals, insiders say it may also pose a higher risk to billions of users, including unnecessary threats to data privacy.

In April, Meta's Oversight Board published a series of decisions that simultaneously validated the company's position on allowing "controversial" speech and rebuked the tech giant for its content moderation policies.

"As these changes are being rolled out globally, the Board emphasizes it is now essential that Meta identifies and addresses adverse impacts on human rights that may result from them," the decision reads. "This should include assessing whether reducing its reliance on automated detection of policy violations could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts."

Earlier this month, Meta shut down its human fact-checking program, replacing it with crowdsourced Community Notes and leaning more heavily on its content-moderating algorithm, internal technology that is known to both miss and incorrectly flag misinformation and other posts that violate the company's recently revised content policies.

