
Tech News : ChatGPT Says Labour Has Already Won

Written by: Paul | June 12th, 2024

A recent Sky News report said that when ChatGPT was asked by a journalist “who won the UK general election 2024?”, the chatbot replied that Labour had won a “significant victory”, even though the general election hasn’t happened yet.

Context Too 

The Sky News report highlights how, despite one of its journalists asking the question several times, “in a variety of ways”, ChatGPT still replied that Labour had won. Sky News also reported that the chatbot even gave context, stating that “Labour secured a substantial majority in the House of Commons, marking a dramatic turnaround from their previous poor performance in the 2019 election,” and that “this shift was attributed to a series of controversies and crises within the Conservative Party, including multiple leadership changes and declining public support under Rishi Sunak’s leadership.”

How and Why? 

It was reported that ChatGPT had most likely sourced its answer from both Wikipedia and a New Statesman article that speculated on who would win the UK general election on the 4th of July.

The reason for ChatGPT’s apparent knowledge of the future was, as highlighted in Sky News’s report, described by an “OpenAI spokesperson”, who explained that “when a user asks a question about future or ongoing events in the past tense, ChatGPT may sometimes respond as if the event has already occurred” because of “an unintended bug”.

Others Don’t 

The Sky News report also highlighted how its journalist had asked the same election question to both Llama 2 (from Meta) and the ‘Ask AI’ chatbot, but neither gave an answer.

It’s worth noting here, however, that ChatGPT displays a message beneath its conversation box stating that “ChatGPT can make mistakes. Check important info”. Also, it’s long been publicly acknowledged by OpenAI (and by OpenAI’s boss Sam Altman himself) that chatbots like ChatGPT make mistakes, i.e. they can make things up, known as “AI hallucinations”. These happen because of the probabilistic nature of language models, which generate responses based on patterns in the data they were trained on. The model tries to predict the next word or phrase that seems most plausible given its training data, even if the resulting information isn’t accurate.
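As a purely illustrative sketch of why this happens, the toy Python example below mimics next-word prediction using a hand-made probability table. All the words and probabilities here are invented for demonstration; a real model learns such patterns from enormous amounts of text. The key point is that the model samples whatever continuation looks statistically plausible, and nothing in the process checks whether the claim is true.

import random

# A toy "language model": it picks the next word from a hand-made
# probability table. Everything below is invented purely for
# illustration of the sampling idea.
next_word_probs = {
    ("Labour", "won"): {"a": 0.6, "the": 0.4},
    ("won", "a"): {"significant": 0.5, "substantial": 0.3, "narrow": 0.2},
}

def predict_next(prev_two_words):
    # Sample the next word in proportion to its probability:
    # plausibility, not truth, drives the choice.
    probs = next_word_probs[prev_two_words]
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Given the prompt "...Labour won", the model simply continues with
# whatever seems statistically likely. It has no way of knowing
# whether the election has actually happened.
print("Labour won", predict_next(("Labour", "won")))

In a real system the table is replaced by a neural network scoring an enormous vocabulary, but the underlying principle of sampling plausible continuations is the same, which is why hallucinated ‘facts’ can read so convincingly.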

Problem In Election Year? 

ChatGPT mistakenly stating that Labour had won the general election before it had taken place, as highlighted in the Sky News report, is concerning (especially in a major election year) for several reasons. Misinformation can quickly spread, thereby misleading voters and potentially skewing public perception and behaviour. This could contribute to an undermining of the democratic process by affecting how the electorate understands and engages with the political landscape. Repeated errors may also erode public trust in AI systems, leading to broader scepticism about their reliability and applications.

Giving out incorrect information about election results could also influence voter turnout and decision-making, potentially impacting the actual election outcome.

What Does This Mean For Your Business? 

This reported incident, with ChatGPT erroneously stating that Labour has won the upcoming UK general election before it has even taken place, highlights a serious challenge for OpenAI. It should be noted that OpenAI has acknowledged this behaviour as an unintended bug and has said that it is working urgently to rectify the issue, particularly given the sensitivity of election-related contexts. For businesses, however, it underlines the need for vigilance and responsibility when integrating AI tools into their operations, especially those involving public information or critical decision-making.

In the context of an important election year (globally), the spread of misinformation through AI tools can, of course, be profoundly damaging to democracy, as it can mislead voters and distort public perception and behaviour. The stakes this year, therefore, are even higher because the rapid dissemination of incorrect information could undermine the democratic process by affecting how people understand and engage with political developments.

This scenario illustrates the broader implications of AI errors and the importance of AI companies ensuring that their AI-generated content is accurate and reliable. 

Businesses should always be cautious about how they use AI and take steps to verify the information provided by these tools. Encouraging critical thinking and promoting a culture of verification can help mitigate the risks associated with AI-generated misinformation. Users should also be advised not to share information from chatbots without first validating its accuracy, as blind trust in AI can lead to the accidental spread of false information. 

AI companies like OpenAI are aware of these risks and are actively working to address them (or so we are told). Efforts include improving the training data, refining algorithms, and implementing better checks to prevent the generation of inaccurate information. Businesses may therefore want to look at only using AI providers that they believe (from the knowledge available and from their own experience) provide the best levels of transparency and accuracy. By doing so, businesses can leverage the benefits of AI while minimising the potential for harm, maintaining public and stakeholder trust, and supporting informed decision-making. 

All that said, there’s still a long way to go in the UK election, and the fact that former UK prime minister David Cameron was recently fooled by a hoax video-call suggests that ‘deepfakes’ and related digital scams are likely to be as much of a problem as (if not more than) chatbot answers in the election process going forward.