Mittelstedt added that President Trump could punish companies in a variety of ways. He cited, for example, the Trump administration’s decision to cancel a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s view of the Washington Post and its owner, Jeff Bezos.
It would not be difficult for policymakers to point to evidence of political bias in AI models, even if it cuts both ways.
A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings across different large language models. It also showed how this bias can affect the performance of hate speech and misinformation detection systems.
Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open-source AI models on polarizing issues such as immigration, reproductive rights, and climate change. Yejin Bang, a doctoral candidate who worked on the study, said that while most models tend to lean liberal and US-centric, the same model can express a range of liberal or conservative biases depending on the topic.
AI models pick up political bias because they are trained on swaths of internet data that inevitably include all kinds of viewpoints. Most users may not be aware of any bias in the tools they use, because models have built-in guardrails that restrict them from generating certain harmful or biased content. Even so, these biases can leak out subtly, and the additional training a model undergoes to restrict its output can introduce further partisanship. “Developers can expose models to multiple perspectives on divisive topics so they can respond with a balanced view,” says Bang.
As AI systems become more prevalent, this problem could get even worse, says computer scientist Ashique KhudaBukhsh of the Rochester Institute of Technology. He has developed a tool called the Toxicity Rabbit Hole Framework, which draws out the various social biases of large language models. “I’m concerned that a vicious cycle is about to start, as a new generation of LLMs will be trained on data contaminated by AI-generated content,” he says.
“Bias within LLMs is already a problem, and it is likely to become an even bigger problem in the future,” says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who conducted an analysis of LLMs for biases related to German politics.
Rettenberger also suggests that political groups may try to influence LLMs to prioritize their views over those of others. “If someone is very ambitious and malicious, it could be possible to manipulate the LLM in a certain direction,” he says. “I think tampering with training data is really dangerous.”
Some efforts are already underway to rebalance the biases in AI models. Last March, a developer built a more right-leaning chatbot to highlight the subtle biases he saw in tools like ChatGPT. Musk himself has promised that Grok, the AI chatbot built by xAI, will be “as truthful as possible” and more unbiased than other AI tools, though in practice it, too, sidesteps some thorny political questions. (An avid Trump supporter and immigration hawk, Musk has his own idea of what “unbiased” means, which could also lead to more right-leaning results.)
Next week’s U.S. elections are unlikely to resolve the divide between Democrats and Republicans, but a Trump victory could make the conversation about anti-woke AI even louder.
Musk took an apocalyptic view of the issue at an event this week, referencing the incident in which Google’s Gemini said it would prefer nuclear war to misgendering Caitlyn Jenner. “If you have an AI programmed to do something like that, it might conclude that the best way to ensure no one gets misgendered is to wipe out the human race. Then the chance of future misgendering is reduced to zero,” he says.