AI has certainly put the cat among the pigeons in this forum!
First, I would like to question all prognoses. Even the experts in the AI field fall into at least three camps, each holding one of these views:
- AI will degrade if it is both flooding the internet with content and using the internet to learn, because its own confabulation will recursively pollute it.
- AI won’t get better or worse; it has quickly reached its functional asymptote, and only minor further improvement will be possible.
- AI will soon reach a singularity whereat it will bootstrap itself to superintelligence at unimaginable speed.
Not only are those three positions mutually exclusive, they are widely distant from each other.
The programme of AI has been around since the 1950s and has at different times been considered an aspiration, a failure, a fantasy, a threat, a saviour and a mere tool. It could turn out to be any of those things.
No one knows.
Second, AI does not spell the end of art or creativity. It may suck all the money out of the arts, but that is a different problem. Creative people are called that because they create under whatever circumstances they find themselves in, no matter how adverse.
Similar artistic panics greeted the inventions of photography and musical recording. How would you feel if the painters and musicians who railed against these innovations had succeeded in getting them banned? Justin Guitar depends for its very existence on both of them - and several other technologies. So do most of the things we use today.
Third, there is a category error in most contemporary AI discourse. AI is not large language models like ChatGPT. LLMs are built on transformers, the latest in a long line of approaches AI researchers have tried, just as in the past they tried symbolic logic, expert systems, genetic algorithms, earlier neural network architectures and so on. AI itself is a programme to create thinking machines, and at present its practitioners are still trying to figure out which approach will work best.
Fourth, we do not yet have sound definitions of art or creativity, or even life, consciousness, and intelligence. Questions remain. However, the field of AI has been quite helpful in throwing light on what those things may or may not be. It and the greater field of cognitive science have actually helped us better frame questions and understand ourselves a little more clearly.
Fifth, AI might be too pervasive for any of us to take in, and therefore could be simultaneously something we ignore and something that fixates us. How do we respond if AI proves to be superior to collective human wisdom in addressing the great issues of our times: environmental instability, global disease control, the fair distribution of resources and opportunity? Do we insist on staying in control, knowing that doing so continues to imperil us, or could we be rational enough to let the real expert do the job?
If that’s too big a picture to grapple with, here’s a smaller one. If it becomes quite clear that robot operators do a better job of eye surgery than shaky and all-too-imperfect surgeons, who do you choose? After answering that question, just take it in steps to bigger and bigger scenarios. Who will you choose?
Lastly, the most crucial question, or perhaps I should say the most clear and present danger, is not whether AI will ‘replace’ us all, but rather how those who have control of it will use it for their own ends, at our expense. This is what concerns me the most. Each great technological leap we’ve taken has disproportionately empowered the already powerful. We need to guard against that, as this time may be the last chance. I will refrain from getting any more political than that.
To summarise my point: now is a time to be asking questions and carefully evaluating the evidence offered in support of answers.


