Zeynep Burcu Yenipinar’s Post

Female Founder | Advisor & Growth Professional | Digital Marketing Enthusiast | Passionate Startup Builder | Branding Expert | Athlete

Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies

Microsoft’s AI chatbot is answering political questions with conspiracies, false facts, and outdated or inaccurate data, with less than a year to go before one of the most important elections in US history. Research from AI Forensics and AlgorithmWatch, two NGOs that monitor the social effects of AI advancements, indicates that Copilot, which is based on OpenAI's GPT-4, routinely disseminated false information during the October elections in Germany and Switzerland.

Ahead of the highly anticipated 2024 elections, Microsoft unveiled this month its strategies for countering misinformation, including how it intends to address the potential threat posed by generative AI technologies. However, the researchers said that although some changes were made after they informed Microsoft of these findings in October, the problems persisted.

AI Forensics and AlgorithmWatch researchers deemed the program "an unreliable source of information for voters," finding factual mistakes in one-third of the answers Copilot provided. In 31% of a smaller subset of recorded interactions, they discovered, Copilot gave erroneous responses, some of which were made up altogether. The report further alleges that Copilot not only produced answers based on erroneous information about polling numbers, election dates, candidates, and controversies, but also mixed disparate polling numbers into a single response, producing an entirely false answer from initially accurate data. The chatbot would also misrepresent its summary of the material presented while still providing links to reliable web sources.

Finally, Josh A. Goldstein, a research fellow on the CyberAI Project at Georgetown University's Center for Security and Emerging Technology, said, "The tendency to produce misinformation related to elections is problematic if voters treat outputs from language models or chatbots as fact." "It could impede democratic processes if people rely on these systems to learn where or how to vote, for example, and the model output is inaccurate."

https://lnkd.in/dRdn6CYS

Platform: WIRED
Author: David Gilbert

#disinformation #fakenews #misinformation #elections2024 #conspiracytheories #politics #microsoftcopilot #artificialintelligence #openai
