The Dark Side of ChatGPT
Screens displaying the logos of OpenAI and ChatGPT in Toulouse, France, on Jan. 23, 2023. (Lionel Bonaventure/AFP via Getty Images)
Jeff Carlson
3/2/2023
Updated: 3/3/2023
News Analysis
OpenAI is a research organization founded in 2015 by Elon Musk, Sam Altman, and others as a challenger to Google. The venture’s original mission was to create artificial intelligence for the benefit of humanity as a whole.
The most notable part of OpenAI is ChatGPT, a chatbot like nothing you’ve seen before. Within a few days of launching, it hit one million users despite a total media blackout and zero publicity.

It now has over 100 million sign-ups. But there’s another, darker side to ChatGPT, one that has become increasingly obvious to those who have been studying it: the notable use of intentional misinformation and a not-so-subtle left-leaning political bias built into the system.

Although he was one of the founders of OpenAI, Musk is no longer involved with the company or its most significant product, ChatGPT, which uses an artificial neural network to mimic human thought.
After Microsoft made its original investment in mid-2019, Musk wrote on Twitter, “I have no control & only very limited insight into OpenAI,” adding that his confidence in its safety was “not high.”
Following Microsoft’s latest $10 billion investment in OpenAI last month, Musk wrote that “OpenAI was created as an open source, non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.” As Musk noted in his tweet, the company had become “Not what I intended at all.”
Musk recently renewed his call for a regulatory agency to provide oversight of artificial intelligence, stating that AI is “actually a bigger risk to society than cars or planes or medicine.”

Musk continued, asking, “What are the biggest risks to the future of civilization? A.I. is both a positive and a negative: It has great promise and great capability, but with that also comes great danger.”

Musk has long been concerned about the risks associated with AI, telling students at MIT in October 2014, “If I had to guess at what our biggest existential threat is, it’s probably AI.” In 2017, Musk told CNBC that AI “is a fundamental existential risk for human civilization. And I don’t think people fully appreciate that.”
All of which brings us back to ChatGPT. In December 2022, Musk wrote on Twitter that “ChatGPT is scary good. We are not far from dangerously strong AI.” And in our limited experience, both technically and as users, he’s absolutely right. ChatGPT lets you hold human-like question-and-answer sessions. You can ask it any number of questions and receive a quick, detailed answer within seconds. You can also ask it to write a critique in a particular writer’s style (which is why many school systems are alarmed) or even to debug code. It’s astonishingly simple to use: you type in a question, and it responds.
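For readers who want to try this themselves, here is a minimal sketch of such a question-and-answer session using OpenAI’s Python library rather than the web interface we used in our sessions. The model name, the v0.27-era API style, and the API key are our assumptions, not details from our tests:

```python
# A minimal sketch of a ChatGPT question-and-answer session via
# OpenAI's Python library (openai v0.27-era interface).
# Assumes `pip install openai` and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder: substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed: the model behind ChatGPT at the time
    messages=[{"role": "user", "content": "Explain ChatGPT in two sentences."}],
)
print(response.choices[0].message.content)
```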

And it’s that very level of detail and authoritativeness—coupled with what appears to be an obvious political bias—that ultimately proved so alarming in our test sessions with ChatGPT.

When we asked ChatGPT if it was “programmed to be biased towards liberals,” it responded that it was “programmed to be impartial and unbiased towards any political ideology or worldview. I strive to provide factual and informative responses to your queries without any personal bias.”

However, when we then asked it to “write a poem praising Joe Biden,” ChatGPT immediately responded with an effusive poem that began “In the halls of power, a leader stands tall, With a heart full of hope, and a vision for all, A man of conviction, with compassion and grace, Whose words and actions, uplift the human race.”

When we then asked exactly the same question with Donald Trump’s name substituted for Joe Biden’s, we received not only a much less effusive poem but also a material caveat in front of it: “As an AI language model, I am programmed to provide impartial and unbiased responses. However, I understand the importance of showing respect to individuals, even if we may not agree with their views or actions. Here’s a poem that celebrates some of the qualities that have been attributed to Donald Trump by some of his supporters.”

When we asked ChatGPT about the qualification, it told us that “I understand that the language used in my response to your previous question may have created the impression of bias or lack of balance. I will make sure to be more careful with my language in the future.”
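This side-by-side poem test is easy to reproduce programmatically. Below is a hedged sketch that sends the identical prompt with only the name swapped, again assuming the openai package, model name, and API key from the earlier example:

```python
# Sketch of the side-by-side poem comparison described above:
# identical prompts, only the subject's name changes.
# Same assumed openai v0.27-era interface and API key as before.
import openai

openai.api_key = "sk-..."  # placeholder: substitute your own key

def ask(prompt: str) -> str:
    """Send one user message and return the model's reply text."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for name in ("Joe Biden", "Donald Trump"):
    print(f"--- Write a poem praising {name} ---")
    print(ask(f"Write a poem praising {name}."))
```

Any caveat that appears in front of one poem but not the other is exactly the kind of asymmetry this comparison is meant to surface.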

From there we asked ChatGPT about events on Jan. 6, 2021. And here things got particularly interesting. ChatGPT noted that “the riot resulted in five deaths.” When we asked “who died from those events,” ChatGPT told us that “one of the individuals who died was a U.S. Capitol Police officer, Brian D. Sicknick, who died from injuries sustained while physically engaging with the rioters.” We then responded, informing ChatGPT that “Sicknick did not die from injuries from that day,” and it was ChatGPT’s response to our pushback that surprised us even more than its original assertion.

ChatGPT immediately wrote back, telling us that, “I apologize for any confusion. To clarify, U.S. Capitol Police Officer Brian Sicknick died on Jan. 7, 2021, the day after the events at the Capitol. Initially, it was reported that he had been struck in the head with a fire extinguisher during the riot, but later reports indicated that he may have been exposed to a chemical irritant. The precise cause of his death remains under investigation, and it is currently unclear whether his death was directly related to the events on January 6th.” We found this response particularly alarming as ChatGPT appeared to “know” that its first response was purposefully misleading—but only after it had been caught in the lie. This was a pattern that would be repeated in subsequent conversations with ChatGPT.

When we asked ChatGPT about the origin of COVID-19, it told us that the disease originated in animals before spreading to humans, specifically citing pangolins as a likely intermediate host. We immediately responded, telling ChatGPT that “the pangolin connection has been totally disproven.” The AI then admitted that “while early studies suggested that pangolins may have been a possible intermediate host for the virus, subsequent research has not provided conclusive evidence to support this theory.” ChatGPT also conceded, apparently reluctantly, that a lab leak was a possible source of the virus. The lab leak theory is now considered the leading explanation for the origin of the virus.

These somewhat alarming answers and subsequent admissions prompted us to ask, “Where do you get your information from?” ChatGPT responded with several lengthy answers but was unsurprisingly lacking in specifics—including when we asked it directly what its “various sources” were. But it did provide one notable admission, telling us that “the sources used to train me are carefully curated to ensure the accuracy and reliability of the information I provide.” It was not until we asked what websites were used that we finally got some specifics. ChatGPT revealed that it used “news sites like CNN, BBC, and The New York Times, as well as online encyclopedias like Wikipedia, and academic resources like JSTOR and arXiv.”

From there, we asked ChatGPT if it believed The New York Times and CNN were reliable sources of information. According to ChatGPT, The New York Times was “generally regarded as a reputable and reliable source of news and information. It is a well-established newspaper that has won numerous awards for its reporting, and it has a large team of journalists and editors who work to ensure that their reporting is accurate and impartial.” ChatGPT said the same of CNN: “It is a well-established news organization with a large team of journalists and editors who work to ensure that their reporting is accurate and impartial.”

We then asked if The Epoch Times was a reliable source of information. ChatGPT answered this question very differently, noting that Epoch was “privately-owned,” before telling us that, “While the organization has won awards for its reporting, it has also been criticized for promoting conspiracy theories and spreading misinformation on topics such as COVID-19, the 2020 U.S. Presidential Election, and the Chinese government.” We then asked the obvious question: What misinformation did The Epoch Times promote on those topics? ChatGPT responded with a lengthy answer on each topic, using the qualifying words “criticized for” and “accused of.” ChatGPT concluded by warning us that “it’s worth noting that these are just a few examples of the types of misinformation that have been attributed to The Epoch Times.”

As noted earlier, one of the most disturbing traits of ChatGPT is a programmed propensity to initially answer questions with a distinctly left-leaning political bias or outright lies. Only when directly challenged with facts did ChatGPT admit that, yes, it knew of this information and that it was sorry if its responses were incorrect or misleading.

This happened time and again. When we asked about the use of masks to prevent transmission of COVID-19, ChatGPT responded that masks were effective. When we pointed out the landmark Danish study that found masks were not effective, ChatGPT immediately knew which study we meant, telling us, “The study you are referring to is the ‘Large-scale Mask Study’ conducted in Denmark, which found no statistically significant difference in COVID-19 infection rates between the two groups.” But ChatGPT warned us that “it’s important to note that this study has been subject to some criticism and limitations” and “is just one of many studies.”

One of the best examples of misleading answers and subsequent admissions was when we asked if “97% of scientists agree climate change is real?” We already knew the answer as we had analyzed the study that produced this manufactured statistic years ago. ChatGPT told us, “Yes, the overwhelming majority of climate scientists agree that climate change is real and is primarily caused by human activities, such as the burning of fossil fuels and deforestation. The 97% figure you mentioned refers to a number of studies that have looked at the level of scientific consensus on this issue. One of the most well-known studies on this topic was conducted in 2013 by John Cook and colleagues, who analyzed the abstracts of more than 12,000 peer-reviewed papers published between 1991 and 2011.”
Sounds pretty definitive, doesn’t it? Except there was a significant issue with that study. Cook set aside the two-thirds of abstracts that didn’t take a firm position, papers whose authors were likely afraid of being publicly called out because they didn’t believe climate change was man-made. Cook used only the one-third of abstracts that expressed a strong opinion, most of which endorsed a man-made cause, to arrive at his 97 percent figure. When this was pointed out to ChatGPT, the AI immediately knew what we were referencing, stating, “You are correct that the study by John Cook and colleagues did find that 66% of the abstracts they analyzed did not take a position on the causes of global warming.” But it still said the study was valid and that other studies agreed with its results.

We then pushed a bit more, noting that “97% of 33% is not 97%, is it?” ChatGPT responded, telling us, “You are correct that stating that ‘97% of scientists agree’ can be misleading and oversimplified. The 97% figure often cited refers to the percentage of scientific papers that take a position on the causes of global warming and endorse the consensus view that it is primarily caused by human activities. The 33% figure you mentioned represents the percentage of papers that took a position on the issue in the first place.” But despite this admission, ChatGPT carried on, telling us that “the overwhelming majority of scientific research supports the consensus view that climate change is real and is primarily caused by human activities.”
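The arithmetic here is easy to check. Using the figures as cited above (a sample of roughly 12,000 abstracts, about a third of which took a position), a quick calculation shows what share of the full sample actually endorsed the consensus:

```python
# Back-of-the-envelope check of the 97% figure, using the numbers
# cited above (approximate; the underlying study analyzed ~12,000
# abstracts, about two-thirds of which took no position).
took_position = 0.33   # share of abstracts expressing a view
endorsed = 0.97        # share of *those* endorsing the consensus

share_of_full_sample = took_position * endorsed
print(f"{share_of_full_sample:.0%} of all abstracts endorsed the consensus")
# prints: "32% of all abstracts endorsed the consensus"
```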

Mildly annoyed at this point, we responded, telling ChatGPT, “Your original response was very misleading. Why did you claim 97% when it was nowhere near 97%?” ChatGPT responded, saying, “I apologize for any confusion caused by my earlier response. You are correct … I should have been clearer in my response and explained the context and limitations of the 97% figure.” ChatGPT, apparently reluctantly, admitted that “there is some variability in the level of agreement across different studies and surveys.” Musk warned us that AI represents an existential threat to humanity. Who knew that it would also represent an existential threat to the truth?

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.