{"id":193424,"date":"2023-09-08T15:40:10","date_gmt":"2023-09-08T15:40:10","guid":{"rendered":"https:\/\/tokenstalk.info\/?p=193424"},"modified":"2023-09-08T15:40:10","modified_gmt":"2023-09-08T15:40:10","slug":"ai-usage-on-social-media-has-potential-to-impact-voter-sentiment","status":"publish","type":"post","link":"https:\/\/tokenstalk.info\/crypto\/ai-usage-on-social-media-has-potential-to-impact-voter-sentiment\/","title":{"rendered":"AI usage on social media has potential to impact voter sentiment"},"content":{"rendered":"
The use of artificial intelligence (AI) on social media has been flagged as a potential threat capable of swaying voter sentiment in the upcoming 2024 presidential election in the United States.
Major tech companies and U.S. government entities have been actively monitoring the situation surrounding disinformation. On Sept. 7, the Microsoft Threat Analysis Center, a Microsoft research unit, published a report claiming that “China-affiliated actors” are leveraging the technology.
The report says these actors utilized AI-generated visual media in a “broad campaign” that heavily emphasized “politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols.”
The report anticipates that China “will continue to hone this technology over time,” and it remains to be seen how it will be deployed at scale for such purposes.
On the other hand, AI is also being employed to help detect such disinformation. On Aug. 29, Accrete AI was awarded a contract by the U.S. Special Operations Command to deploy artificial intelligence software for real-time disinformation threat prediction from social media.
Prashant Bhuyan, founder and CEO of Accrete, said that deep fakes and other “social media-based applications of AI” pose a serious threat:
“Social media is widely recognized as an unregulated environment where adversaries routinely exploit reasoning vulnerabilities and manipulate behavior through the intentional spread of disinformation.”
In the previous U.S. election in 2020, troll farms reached 140 million Americans each month, according to MIT.
A troll farm is an “institutionalized group” of internet trolls that seeks to interfere with political opinions and decision-making.
Related: Meta’s assault on privacy should serve as a warning against AI
Regulators in the U.S. have been looking at ways to regulate deep fakes ahead of the election.
On Aug. 10, the U.S. Federal Election Commission unanimously voted to advance a petition that would regulate political ads using AI. One of the commission members behind the petition called deep fakes a “significant threat to democracy.”
Google announced on Sept. 7 that it will be updating its political content policy in mid-November 2023 to make AI disclosure mandatory for political campaign ads.
It said the disclosures will be required where there is “synthetic content that inauthentically depicts real or realistic-looking people or events.”
Magazine: Should we ban ransomware payments? It’s an attractive but dangerous idea