At the end of November 2022, San Francisco-based OpenAI announced the launch of ChatGPT, the latest in a series of increasingly ‘intelligent’ chatbots the internet has seen over the last decade.
Chatbots serve a multitude of functions across businesses, though to date they have largely been used for a variety of customer services. Some help with booking, others have menu trees to help resolve customer issues with a product, and some even offer suggested responses in conversations.
Tech companies from different industries have been racing to improve the quality of their chatbots, which face common issues such as getting stuck in loops, struggling to recognise inputs or, in the worst instances, adopting severely biased and discriminatory approaches to tasks or conversations based on algorithmic learning (famously, Microsoft’s Twitter bot Tay and, more recently, Meta’s BlenderBot 3).
The main task has been to learn and evolve quickly enough to respond effectively to human queries, while being able to discern what we as humans would consider ‘morally abhorrent’, as well as grey areas that companies would not want to be associated with.
ChatGPT has caused a stir with its ability to understand natural language inputs (rather than, for example, keywords) and provide a less robotic response. It utilises the company’s AI platform, including world-class language models, with the ability to ‘remember’ previous aspects of the conversation.
It has also reportedly been able to filter for discriminatory or inappropriate requests and ignore them. The chatbot pulls from vast swathes of online data, thanks to the capabilities of OpenAI’s GPT-3 infrastructure.
There is an increasing push to develop a working code of ethics for AI as AI continues to see huge investment and make major steps, in a range of different functions.
Experiments have made clear the risk of inherent bias in AI. The first risk comes from the creator themselves – if the creator is of a particular race, gender or sexual orientation, can there be any guarantees that an AI they create wouldn’t share their inherent worldview (including, potentially, any biases)?
Face recognition is a case in point: several tests have demonstrated how programmes developed in North America and Europe struggled to distinguish between the facial features of other ethnicities.
In addition to the input of the creator of the AI, there’s also the risk posed by having an AI that learns from everything available online. Microsoft’s Tay – a Twitter bot – was launched in 2016 and was immediately bombarded with racial slurs and discriminatory language. In less than a day, Tay was espousing the same hateful views and language it was exposed to.
Even with ChatGPT, which was announced to have a filter to prevent such a situation, one instance led to it generating a response with gender stereotypes and derogatory language.
One of the jobs likely to be affected would be customer service roles. These are currently often a hybrid of a rudimentary chatbot to determine the nature of the customer enquiry, with a person made available if the chatbot is unable to direct the user to a satisfactory response.
In this current model, humans are retained for more complex issues where they are required, while questions that could be answered with a simple search through the FAQs can be handled by the chatbot. For companies, this can help free up people working in the customer service department to more swiftly help those who need it, rather than creating lengthy wait times as they deal with basic queries.
A more highly developed AI would further reduce the need for human intervention in customer service interactions, potentially leading companies to cut jobs in a challenging economy.
Another role that could be affected is that of researchers and other knowledge-focussed positions. An advanced chatbot would improve our ability to source complex information, potentially with an interface able to distil these ideas into layman’s terms with ease.
Talking points: What businesses might be impacted or challenged by this type of AI? What security risks does this pose for businesses?