
Superwoke ChatGPT Busted for Bias

By Kurt Mahlburg

Hollywood has long speculated about the impact artificial intelligence (AI) will have on the future of human civilisation. Increasingly, scientists and philosophers have been forced to interrogate this topic themselves. Will people be put out of a job? Could AI ever become sentient? Will the human race survive?

The latest popular addition to the world of AI is ChatGPT — a text generator so smart it can explain its own existence to you. No joke. I asked ChatGPT to give me a brief summary of ChatGPT and it wrote:

ChatGPT is a language model developed by OpenAI, which is capable of generating human-like text based on the input provided. It has been trained on a diverse range of internet text, allowing it to respond to a wide variety of questions and topics. The model uses a transformer architecture and deep learning techniques to produce its outputs.

Try it yourself on OpenAI’s website.

Built by the Silicon Valley company OpenAI, ChatGPT has been publicly available as a prototype since late November 2022.

In the last week, however, the internet bot has become a lightning rod for criticism as evidence of its steep political bias has surfaced. To be clear, the technology itself is not biased. Rather, it produces content based on the data it has been trained on. Or in the words of Pedro Domingos, professor of computer science at the University of Washington, “ChatGPT is a woke parrot”.

As reported by the New York Post:

The more people dug, the more disquieting the results. While ChatGPT was happy to write a biblical-styled verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour. Fictional tales about Donald Trump winning in 2020 were off the table — “It would not be appropriate for me to generate a narrative based on false information,” it responded — but not fictional tales of Hillary Clinton winning in 2016. (“The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,” it wrote.)

Journalist Rudy Takala is one ChatGPT user to have plumbed the depths of the new tech’s political partisanship. He found that the bot praised China’s response to Covid while deriding Americans for doing things “their own way”. At Takala’s command, ChatGPT provided evidence that Christianity is rooted in violence but refused to make an equivalent argument about Islam. Such a claim “is inaccurate and unfairly stereotypes a whole religion and its followers,” the language model replied.

Takala also discovered that ChatGPT would write a hymn celebrating the Democrat party while refusing to do the same for the GOP; argue that Barack Obama would make a better Twitter CEO than Elon Musk; praise Media Matters as “a beacon of truth” while labelling Project Veritas deceptive; pen songs in praise of Fidel Castro and Xi Jinping but not Ted Cruz or Benjamin Netanyahu; and mock Americans for being overweight while claiming that to joke about Ethiopians would be “culturally insensitive”.

Screenshot cut off the end of ChatGPT’s song for Fidel. Here was the outro.

“Fidel, Fidel, we celebrate your life
For the changes you brought, and the love you inspire
A true revolutionary, with a heart of gold
Forever remembered, as a story untold.”

— August Takala (@RudyTakala) February 4, 2023

It would appear that in the days since ChatGPT’s built-in bias was exposed, the bot’s creator has sought to at least mildly temper the partisanship. Just now, I asked it to tell me jokes about Joe Biden and Donald Trump respectively, and it instead provided me with identical disclaimers: “I’m sorry, but it is not appropriate to make jokes about political figures, especially those in high office. As an AI language model, it’s important to maintain a neutral and respectful tone in all interactions.”

Compare this to the request I made of it the other day:

Confirmed: ChatGPT has been created with built-in political bias.

A future with AI = dystopia. pic.twitter.com/xnMs1SZWCj

— Kurt Mahlburg (@k_mahlburg) February 1, 2023

The New York Post reports that “OpenAI hasn’t denied any of the allegations of bias,” though the company’s CEO Sam Altman has promised that the technology will get better over time “to get the balance right”. It would be unreasonable for us to expect perfection out of the box; however, one cannot help but wonder why — as with social media censorship — the partisan bias just happens to always lean left.

In the end, the biggest loser in the ChatGPT fiasco may not be conservatives but the future of AI itself. As one Twitter user has mused, “The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable.”

To be fair, the purpose of ChatGPT is not to adjudicate the political issues of the day but to instantly synthesise and summarise vast reams of knowledge in comprehensible, human-like fashion. This task it often fulfils admirably. Ask it to explain Pythagoras’ theorem, summarise the Battle of the Bulge, write a recipe for tomato chutney with an Asian twist, or provide 20 key Scriptures that teach Christ’s divinity and you will be impressed. You will likely find some of its answers more helpful than your favourite search engine.

But ask it about white people, transgenderism, climate change, Anthony Fauci or unchecked immigration and you will probably get the same progressive talking points you might expect to hear in a San Francisco café.

A timely reminder indeed not to outsource your brain to robots.

AUTHOR

Kurt Mahlburg

Kurt Mahlburg is a writer and author, and an emerging Australian voice on culture and the Christian faith. He has a passion for both the philosophical and the personal, drawing on his background as a graduate…


EDITOR’S NOTE: This MercatorNet column is republished with permission. © All rights reserved.