Education's Bitter Pill - II
In the first of these posts we explored the way education has tended to default to understanding itself in terms of content-delivery, specifically the transfer of information from teacher to pupil, variously aided and abetted by text-books, libraries and latterly the Internet. This mirrors early attempts to program machines to act intelligently in a particular way: both assume that what matters is knowledge and the deliberate, painstaking application of predetermined algorithms.
In the human case we commonly talk of the execution of these 'algorithms' as 'thinking'. Our rueful observation in the first part of this sequence was that success in AI has accrued to those who eschewed this approach and instead invested their efforts in what Sutton calls 'search and learning'. Specific, detailed, fixed knowledge based on preconceived notions of what we need to know, together with laborious, predesignated analysis, loses out to non-specific, general, fluid knowledge built on trial and error, statistics, probability and vagueness, and to the only things we are ever likely to be any good at: the things that interest us. To put this in familiar but perhaps controversial terms, form matters more than content.
I recently came across a reference to remarks by Noam Chomsky in which he jumps aboard the AI-sceptical band-wagon, deriding the statistical and probabilistic character of LLMs, which he more or less accuses of solving problems by 'brute force' when compared with the nuanced, efficient, nicely-selective and mysterious operations of human brains. In this, as in so much else, Chomsky is simply wrong. He has absolutely no basis upon which to make such claims about the human brain, because he no more knows how it works than anyone else does. All he has are the irruptions of thoughts into consciousness thrown up by the impenetrable chaos of the non-conscious brain, whose operations remain even today utterly 'mysterious' but which, as we learn more about LLMs, seem more and more statistical and less and less mystical. In other words, brains are far more like neural nets than Chomsky and many others like to think.
This 'like to think' is based upon a residual anthropocentric theism, the tired remnants of the view that human beings are somehow special, different, endowed by a god with special powers and status that set them apart from all other animals across a gulf that cannot be bridged. But the human brain, for all that it is powerful and remarkable, is not in the least special in any other sense; it operates according to exactly the same principles as all other animal brains but with a superior evolved architecture.
"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations. Let's stop calling it Artificial Intelligence and call it what it is: Plagiarism Software. It doesn't create anything, just copies existing works from artists and alters them sufficiently to escape copyright laws. It's the largest theft of property since Native American lands by European settlers." -- Noam Chomsky
Note the accusatory, dismissive language: "lumbering", "gorging"; "Plagiarism Software" which "doesn't create anything", instead engaging in "theft of property". A more prejudiced, loaded, unfair description of the operation of LLMs could scarcely be imagined: has everything they say already been said; has every piece of code they write already been written; has every problem they solve already been solved; has every blind statistical concatenation of gorged terabytes already been concatenated? None of these is true. None of this is even remotely true.
We may not be engaging with human-like intelligence here, but Chomsky's description is ignorant, misguided, prejudiced and just plain wrong. Were we to look at any human writer, artist, poet, composer, would absolutely everything they do be original, agenealogetos, fresh-sprung like Pallas Athena from the head of Zeus? On the contrary: everything everyone writes, paints, says, does, composes, codes, invents is derivative of things that have gone before. And so are we. LLMs are no different, so either we are all guilty or none of us is guilty.
Of course, Chomsky is in celebrated company: Dreyfus, Searle and many others have dismissed AI as incapable of 'understanding' or 'creating' - Searle's celebrated 'Chinese Room' argument applies - and insisted instead that all AIs do is pattern-match. But this is a spurious and deeply unhelpful argument because it begs all the important questions about what facilitates and enables human understanding, preferring - by default - mystification to genuine attempts at understanding. What now seems more than likely is that the human brain operates along very similar lines. It just does so hidden from conscious perception, churning away beneath the surface, lumbering through the terabytes of information the senses have furnished and attempting to present what it concatenates as things it has found by brilliance, created through genius, or thought by deploying processes beyond all understanding. Human beings - and Dreyfus, Searle and Chomsky are prime examples - desperately want to be possessed of some spirit that places them above and beyond nature, to be chosen and gifted by a god with supernatural powers that manifest in soul, spirit, mind and thought. But we are not: we are rather lame, predictable, foolish, stupid consequences of a lumbering evolution that explores the possibilities of physical space and time.
Chomsky's is an attitude typical of those who think they know how they think, and that same attitude is what has fuelled our historic preoccupation with knowledge, facts, content that can be listed, tested, measured, ranked rather than with the amorphous fluid ocean of possibilities that are embodied in brains and neural nets. Let's be clear: nobody has any remotely satisfactory notion of how brains operate or think; all we know is that they come up with suggestions, articulated in language, that come from somewhere, and that they do so very imperfectly and unreliably. As Wittgenstein famously asked, when a word 'just won't come', what do we do? There is almost nothing we can do. We are slaves to our neurophysiology, but we are also - I am also - essentially identical with that neurophysiology, so there is no surprise in that. Things only start to go awry when people try to introduce or reintroduce obscurantist notions like the soul or argue for some impenetrable mystery in the brain whose explanation lies beyond science.
But the attentive reader will have identified a peculiar feature of this argument: how can we reconcile our unequivocal rejection of the mystery-mongering quasi-theological anthropocentric nonsense of Chomsky, Dreyfus and Searle with our equally unequivocal rejection of education as a planned, predetermined, programmed, predictable, controllable process that consists of feeding endless quantities of information into children's brains?
As so often proves to be the case, it is in navigating our way between these two erroneous extremes that we find clues to the better path ...