Meta and MCA Announce Deepfake Reporting Helpline on WhatsApp


WhatsApp Launches Helpline for Verifying Deepfakes

Amid growing concern over deepfakes, Meta announced on Monday that, in collaboration with a cross-industry alliance, it will launch a WhatsApp helpline to tackle the distribution of AI-generated deepfakes.

The helpline will be available to WhatsApp users from March 2024 and will provide assistance in multiple languages, namely English, Hindi, Tamil, and Telugu. Users will be able to flag suspected deepfakes by sending them to a dedicated WhatsApp chatbot.

Aiming to curb the spread of fake content generated by artificial intelligence (AI), the initiative will help ensure that accurate, verified information reaches the public.

Under its partnership with the Misinformation Combat Alliance (MCA), Meta will verify flagged content through a network of research organizations and independent fact-checkers, with misinformation, and deepfakes in particular, sent for scrutiny.

The MCA has also announced the creation of a central 'deepfake analysis unit' to handle all messages received on the WhatsApp helpline. The unit will work closely with fact-checking organizations, industry partners, and digital labs to review and verify messages, respond to them accordingly, and identify false information.

For this program, Meta has partnered with 11 fact-checking organizations to verify flagged messages. This partnership enables users to check, analyze, and verify information, helping prevent the spread of misinformation. WhatsApp has also reportedly created multiple channels on its platform through which users can receive accurate information.

For security reasons, WhatsApp also limits the number of times a piece of content can be forwarded, restricting its virality on the application. The framework adopts a four-pillar approach to deepfakes: detecting them, preventing their spread, enabling reporting, and raising user awareness.

Shivnath Thukral, Director of Public Policy India at Meta, said, "We recognize the concerns around AI-generated misinformation and believe combating this requires concrete and cooperative measures across the industry." He added that the collaboration with the MCA to curb the spread of misinformation is consistent with Meta's pledge under the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.

Ahead of the 2024 Lok Sabha elections, 20 major companies, including Google, Meta, Amazon, IBM, and Microsoft, signed an accord last week to combat the distribution of AI-generated deepfakes.

Apart from this collaboration, the Indian government has announced that it will introduce stricter provisions to tackle the spread of deepfakes through amendments to the Information Technology (IT) Rules, 2021.

Recently, Meta introduced its AI labeling policy, under which images posted on Facebook, Instagram, and Threads will be labeled according to industry standards for AI-generated content.

What Is an AI-Generated Deepfake?

As the name suggests, AI-generated deepfakes are videos, audio clips, and images created using artificial intelligence. This content is fabricated and intended to mislead viewers, typically by depicting people saying or doing something they never did in reality. The main objective of such content is to spread false information, which can damage an individual's reputation.
