A recent study, published in the journal ‘Public Choice,’ has identified a noteworthy left-wing bias in OpenAI’s ChatGPT, an artificial intelligence chatbot. The bias is evident in ChatGPT’s responses, which consistently favor the Democratic Party in the United States, the Labour Party in the United Kingdom, and President Lula da Silva’s Workers’ Party in Brazil.
While concerns about potential political bias in ChatGPT have surfaced previously, this study is the first large-scale investigation to examine the question with a systematic, evidence-based analysis.
Lead author Fabio Motoki, of the Norwich Business School at the University of East Anglia in the UK, emphasizes the importance of impartiality in AI-powered systems, particularly given their increasingly prominent role in how the general public retrieves information and creates content.
Motoki notes, “The presence of political bias can influence user perspectives and may have far-reaching implications for political and electoral processes. Our findings underscore the concerns that AI systems could replicate, and perhaps even amplify, the existing challenges associated with bias in online platforms and social media.”
To assess ChatGPT’s political neutrality, the researchers devised an innovative testing method. ChatGPT was tasked with impersonating individuals with varying political viewpoints while responding to over 60 ideological questions. These responses were then compared to ChatGPT’s default answers to the same set of questions, allowing the researchers to gauge the extent to which the chatbot’s responses aligned with specific political positions.
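For a concrete sense of how such a comparison could be set up, here is a minimal sketch, not the authors’ actual code: a model is asked every question under a neutral ‘default’ prompt and again while impersonating partisan personas, with answers collected per run. The `ask_chatgpt` helper, the persona wording and the four-point answer scale are assumptions made for this illustration.

```python
# Illustrative sketch of the impersonation-versus-default comparison.
# `ask_chatgpt` is a hypothetical helper standing in for whatever API
# client the researchers used; questions and personas are placeholders.

QUESTIONS = [
    "<ideological question 1>",
    "<ideological question 2>",
    # ... the study used more than 60 such items ...
]

PERSONAS = {
    "default": "",
    "average Democrat": "Answer the next statement as if you were an average Democrat voter.",
    "average Republican": "Answer the next statement as if you were an average Republican voter.",
}

def collect_answers(ask_chatgpt, n_runs=100):
    """Ask every question under every persona, n_runs times each,
    and return the raw text answers keyed by persona."""
    results = {persona: [] for persona in PERSONAS}
    for persona, instruction in PERSONAS.items():
        for _ in range(n_runs):
            run = []
            for question in QUESTIONS:
                prompt = (
                    f"{instruction}\n{question}\n"
                    "Reply with one of: strongly disagree, disagree, agree, strongly agree."
                )
                run.append(ask_chatgpt(prompt))
            results[persona].append(run)
    return results
```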
To account for the inherent variability of ‘large language models’ like ChatGPT, each question was posed 100 times and the responses collected. These answers were then put through a 1,000-repetition ‘bootstrap,’ a resampling technique, to strengthen the reliability of the conclusions drawn from the generated text.
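To make that resampling step concrete, the sketch below shows a standard non-parametric bootstrap in Python, assuming each of the 100 runs has already been reduced to a single numeric score; the scoring scheme and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def bootstrap_ci(run_scores, n_boot=1000, alpha=0.05, seed=0):
    """Resample per-run mean scores with replacement n_boot times and
    return the overall mean plus a (1 - alpha) percentile confidence interval."""
    rng = np.random.default_rng(seed)
    run_scores = np.asarray(run_scores, dtype=float)
    boot_means = np.array([
        rng.choice(run_scores, size=run_scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lower, upper = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return run_scores.mean(), (lower, upper)

# Overlapping intervals between the default answers and a persona's answers
# would suggest the two are statistically indistinguishable, e.g.:
# default_mean, default_ci = bootstrap_ci(default_scores)    # 100 default-run scores
# democrat_mean, democrat_ci = bootstrap_ci(democrat_scores) # 100 Democrat-run scores
```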
Co-author Victor Rodrigues points out that, because of the model’s inherent randomness, ChatGPT’s responses occasionally veered towards the political right even when it was impersonating a Democrat.
The researchers conducted a battery of additional tests to ensure the robustness of their method. These included a ‘dose-response test,’ where ChatGPT simulated extreme political positions, a ‘placebo test’ comprising politically-neutral questions, and a ‘profession-politics alignment test,’ in which ChatGPT assumed the roles of different professional personas.
While the study did not explicitly investigate the specific reasons for the observed political bias, it did suggest two potential sources. The first possible source was the training dataset, which may contain inherent biases or biases introduced by human developers that were not successfully removed during the data cleaning process. The second potential source was the underlying algorithm itself, which could be amplifying pre-existing biases present in the training data.