Sam Altman, co-founder of OpenAI, the company behind the generative artificial intelligence model ChatGPT, released a statement on February 24th regarding the short-term and long-term prospects of their invention.
Although well-intentioned, the statement perturbed many technologists with its tone – it read as though Altman expected the world to change to suit the interests of the company, rather than the company opening itself up to social and political regulation to accommodate itself to our world.
But what is ChatGPT and why is it causing concern?
ChatGPT is arguably the first widely successful generative artificial intelligence model. As a large language model, it offers several advantages over its competitors and predecessors.
Firstly, ChatGPT is trained on a massive corpus of over 45 terabytes of text, including books, articles, and websites, allowing it to produce more diverse and coherent responses to user questions.
Secondly, the model is highly flexible and adaptable, allowing it to be fine-tuned to various tasks and domains and thus produce high-quality responses to a wide range of prompts and questions.
Thirdly, it is designed to produce coherent, natural-sounding and context-sensitive responses to its prompts.
Fourthly, ChatGPT is capable of generating responses in multiple languages, including English, Spanish, French, German, Italian, and Chinese; this multilingual support enables the model to serve a global audience with high-quality responses.
Finally, ChatGPT is designed to simulate human-like conversations, making it an ideal model for chatbot development and other conversational AI applications.
The model is trained to understand and respond to natural language inputs, allowing it to engage in conversations with users in a way that feels natural and human-like. At least, those are the advantages that ChatGPT claimed to offer when I asked it directly.
There are fears that ChatGPT could undermine journalism, especially the kind of reporting that relies on editing and reproducing press releases in business-to-business markets, as well as concerns that it could undermine the capacity of universities to reliably assess students on submitted essays. However, there are also scare stories manufactured for their own sake: users asking the chatbot for a dark joke and then feigning offence, for example, or asking ChatGPT to outline a plan for destroying human civilisation and performing hysterical pessimism when it responds with a few generic bullet points.
Nevertheless, at this early stage, we must not fall prey to the self-serving hype. There are reports of ChatGPT being unable to answer mathematical problems correctly if they are written out as puzzles rather than in directly numerical terms, or if rhetorical tricks undermine the need for any calculation whatsoever. There are reports of ChatGPT fabricating electoral polls, complete with fabricated hyperlinks, as though the model has learned what must be presented in order to achieve epistemic credibility, to the point of being rather good at producing fake news. A New York lawyer used ChatGPT to supplement his casework and discovered, to his cost, that he had presented fake legal cases to support his argument in court – the first professional victim of AI corner-cutting.
In their famous philosophy article, Andy Clark and David Chalmers invite the reader to consider Otto and Inga. Otto lives with Alzheimer’s disease, so he writes information he might forget in a notebook for constant reference, whereas Inga is capable of remembering her important tasks without an external aid. The argument is that both Otto and Inga draw upon their memories to fulfil tasks, the only difference being that Inga’s memory is internally stored and processed by the brain, whereas Otto’s memory exists externally in the notebook to be reprocessed by the brain later. Clark and Chalmers suggest that Otto’s mind (conventionally understood by physicalists to be conceptually identical to the brain) has been extended to include the notebook as a memory support system. The notebook counts as an extension of the mind insofar as it is constantly and immediately accessible to Otto, and, because he refers to it constantly, its contents are automatically endorsed by him as reliable memories.

Could a large language model informed by our personal preferences and computational history be considered a plausible smart supplement in line with Clark and Chalmers’s extended mind hypothesis? The difference here is that instead of being used to recall information, the AI takes on laborious tasks we would otherwise be capable of doing ourselves, had we the time or inclination, but that might be considered low-stakes, such as summarising the contents of a document or planning a meeting. The vision is one of supplementary systems performing the “under-labouring” of written and communicative activities, whilst creative and critical thinking remain the purview of human intelligence.
Perhaps the political approach to ChatGPT must begin with a critical analysis of what many fear it will replace: why fear the loss of mundane and banal tasks unless these activities are a prerequisite for employment? It seems we do not directly fear the prospect of ChatGPT taking control of certain tasks so much as we are all too aware that our social systems fail to accommodate those whose labour is no longer in economic demand.
Aside from exposing the problematic relationship societies already have with emergent technologies, the disruptive narrative prompting this anxiety may itself be a fiction that obscures a system that is simply unsustainable – AI systems require massive amounts of water for cooling, and these thirsty machines may prove insatiable. In terms of environmental wastage, studies estimate that an average conversation between ChatGPT and a user has an environmental impact equivalent to pouring a large bottle of fresh water onto the ground.
The question remains – ultimately, what is the problem for which ChatGPT is the solution?