
Is the ChatGPT Fervour Premature?

The success that ChatGPT has had, at least in generating public interest, has had the inevitable consequence of prompting some writers to question its credentials and generally pour tepid if not actually cold water over what it can do. The latest of these is Will Knight, writing in the January 13, 2023 edition of Wired under the headline "ChatGPT Has Investors Drooling - but Can It Bring Home the Bacon?".

In that article he makes two observations that merit closer attention: one, I think, has real force; the other harks back to a Dreyfus-like "What Computers Still Can't Do" mentality. And both can be read as examples of Schadenfreude.

"A Tsunami of Bullshit"

Right at the end of the article Knight makes a legitimate point that he has gleaned from Phil Libin, CEO of the note-taking app Evernote from 2007 to 2015. Summarising some of the downsides Libin anticipates, Knight says: "One is that ChatGPT and other generative AI models are currently created by scraping content made by humans from the web, but are increasingly contributing to the text and images found online. 'All of these models are about to shit all over their own training data,' he [Libin] says. 'We're about to be flooded with a tsunami of bullshit.'"

This is absolutely fair comment: what happens when the content on which successors are trained has itself been generated by chatbots that produced false, mistaken and misleading information as a result of their own earlier training? Nevertheless, even if chatbots create "a tsunami of bullshit", that is just to say that they will be assuming a place comparable to that of every human being on the planet in the discourse of which they form a part.

But this "tsunami" accusation isn't a fatal blow because one could just as easily ask sceptical questions about how human learning is going to cope when it has to deal with all the mistaken, misleading, false and sometimes fraudulent information generated by human beings. In other words, exactly as we are about to see in a much more serious way with the second comment, we are now having to face the fact that our AI and chatbots will have to solve just the same problems that humans have to solve in order to function reliably. And yes, you are right to think that we might need to put the "reliably" in scare quotes: "reliably" as judged by whom and to what end?

Words and Meanings

The second point that Knight makes, although it comes earlier in the piece, is this.

[B]ecause of how ChatGPT works—by finding statistical patterns in text rather than connecting words to meaning—it will also often fabricate facts and figures, misunderstand questions, and exhibit biases found in its training data.

This is an extraordinary contrast to draw, because it suggests that we have some clear alternative conception of how human brains connect words to meanings, and I think it reasonable to say that we do not. We certainly do not know enough about how human brains process language to exclude the possibility that they do so by "finding statistical patterns in text". After all, given the untold number of instances of a word like "table" that we encounter in a lifetime, every one of us has a vast array of statistical data about how "table" is used; each of us manages to use it reasonably reliably (put scare quotes around that as well if you like), yet none of us, and no neurophysiologist, has a remotely clear understanding of how we do it.
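For what it is worth, "statistical patterns in text" is not a mysterious notion. Here is a minimal sketch of the distributional idea, using an invented three-sentence corpus of my own; nothing below comes from Knight's article or from ChatGPT's actual internals. Words used in similar contexts end up with similar statistical profiles, which is at least the beginning of something that behaves like "meaning":

```python
from collections import Counter
from itertools import combinations
import math

# A toy corpus: "table" and "desk" appear in similar contexts, "banana" does not.
corpus = [
    "the wooden table stood in the kitchen",
    "the wooden desk stood in the office",
    "she peeled the ripe banana in the kitchen",
]

# Count which words co-occur within the same sentence.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    # A word's "usage profile": how often it co-occurs with every other word.
    return [cooc[(word, other)] for other in vocab]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

print("table ~ desk  :", round(cosine(vector("table"), vector("desk")), 3))
print("table ~ banana:", round(cosine(vector("table"), vector("banana")), 3))
```

Run it and "table" comes out markedly closer to "desk" than to "banana", purely on the strength of counted co-occurrences. That is obviously not a theory of meaning, but it shows how far plain usage statistics can take you without any appeal to a separate realm of "meanings".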

If Knight is suggesting a quasi-Platonic realm of meanings outside the world, to which words refer, then I respectfully suggest that he is mistaken. I suspect, however, that he is simply using "meanings" loosely, without giving much thought to what the term references, and so buying into a latent metaphysics that is capable of systematically misleading us about cognitive processes.

Whatever else he may "mean" by "meanings" is wholly unclear. And the idea that meanings are the result of a statistical analysis of all the memories, processed, analysed and raw, that have accumulated in a particular brain (cf. ChatGPT's neural net and its training process) is at least one feasible explanation of how human beings generate "meanings" from words.

The point here is that although we should freely acknowledge that we connect words with meanings, it is quite unclear both how we do so and what, at the neurological or cognitive "hardware" level, meanings actually consist in.

In other words, the very contrast Knight chooses to draw may be bogus: a massive piece of misdirection, resting on an implicit metaphysics, that serves to leverage our dismissive tendencies towards AI, rather as Dreyfus and Searle dismissed earlier attempts.

Fabrication of Facts and Figures

It is also rather amusing that the charge levelled against ChatGPT is that its processes, training data and structure may lead it to "fabricate facts and figures", a charge made apparently without tongue in cheek, as if human brains had never committed such a "sin". Leave aside deliberate lies, fraud, fake news and false facts: human brains, exactly like ChatGPT although almost certainly using a different mechanism, seek to interpolate, infer, and make what ChatGPT calls "educated guesses" on the basis of data gleaned from experiences that are insufficient to the task of making those inferences. So human brains "fabricate facts and figures" entirely innocently, simply because we never operate with perfect data and are always engaged in something like what Isaac Asimov describes as the peculiar capacity of his Foundation hero Golan Trevize: the ability to reach correct conclusions on the basis of insufficient data.
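To make the parallel concrete, here is another toy illustration of my own, not drawn from Knight's piece: a model fitted to a handful of noisy observations will happily report a precise-looking figure for a point it has never seen, which is "fabrication" in exactly the innocent sense just described.

```python
import numpy as np

rng = np.random.default_rng(1)

# A handful of noisy observations of some underlying process.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.shape)

# Fit a straight line: an "educated guess" about the whole process
# based on just four data points.
slope, intercept = np.polyfit(x, y, deg=1)

# Ask for a value nobody ever measured. The answer looks precise,
# but it is an inference from insufficient data, not a recorded fact.
print(f"predicted y at x = 10: {slope * 10 + intercept:.2f}")
```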

Another False Dawn?

Scepticism among AI commentators is understandable: there have been many false dawns, many exaggerated and misleading claims, and no end of speculation based upon little more than wishful thinking. But, rather as David Hume taught us, we should be wary of inferring, from the fact that every dawn so far has been false, that all dawns must be false.

Things change. I suspect AI just did.
