More specifically:
Games like Go and Chess have very specific and distinct rules that apply at every step, and there is only a limited set of options at each turn.
In Go, there are, at most, 362 options at each move (the 361 board points plus a pass). That’s a lot of options for a human, but it’s nothing for a computer.
AlphaGo used Monte Carlo Tree Search, an algorithm that is now around 20 years old. Really, the difference between AlphaGo and earlier Go-playing applications is the quality of the machine learning and the speed of the computer running it, which allowed it to consider many, many more options, and in far more detail, than it could have in 2006.
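For readers unfamiliar with it, the shape of Monte Carlo Tree Search is easy to see in miniature. Here is a minimal sketch, using single-pile Nim (take 1–3 stones, taking the last stone wins) purely because it is tiny; the game, the exploration constant, and the iteration count are my illustrative assumptions, not anything from AlphaGo, but the four phases (selection, expansion, random simulation, backpropagation) are the same ones a Go engine scales up:

```python
import math
import random

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def rollout(stones):
    """Random playout; returns 1 if the player to move wins, else 0."""
    turn = 0
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return 1 if turn == 0 else 0
        turn ^= 1

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones        # stones remaining after `move` was played
        self.parent = parent
        self.move = move            # the move that produced this position
        self.children = []
        self.untried = legal_moves(stones)
        self.wins = 0.0             # wins for the player who made `move`
        self.visits = 0

    def select_child(self, c=1.4):
        # UCB1: exploit high win rates, but keep exploring rarely visited children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = node.select_child()
        # 2. Expansion: add one untried child, if any.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: if terminal, the last mover took the last stone
        #    and wins; otherwise score a random playout from here.
        if node.stones == 0:
            result = 1
        else:
            result = 1 - rollout(node.stones)  # from the just-moved player's view
        # 4. Backpropagation: flip perspective at each level going up.
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1 - result
            node = node.parent
    # Recommend the most-visited move, as Go engines typically do.
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(5))   # optimal Nim play leaves a multiple of 4, i.e. take 1
```

Nothing in there "understands" Nim; the move recommendation falls out of thousands of random playouts plus bookkeeping, which is the point made above about the difference between this and how humans are taught to play.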
Consider that, when humans are taught complex games like Go, Chess, or even Poker, they are not taught to analyse the game in the same way computers do. They aren’t taught to analyse every possible path in the game and weight each path by its likelihood of success. They are also not taught to apply random factors.
Humans are taught human strategies and to play against other people employing human strategies.
And, yes, applications like AlphaGo are trained on human games, so they will be weighted towards choosing moves based on that, but it is only one factor.
Personally, I don’t see a computer Go program choosing a move that a human wouldn’t have made to be at all surprising, let alone “original” or any of the other breathless words used to describe it. Frankly, that is why we use computers for such tasks.
I actually worked on a system several years ago which wasn’t that dissimilar to some of the analytics used by AlphaGo: it used a set of rules and some AI technology to automate and optimise the planning and placement of connections in complex telecommunications networks. I spent over a year working on a project with BT to help the system learn the network topology, circuit placement rules, equipment characteristics and limitations, etc. to drive the model.
It regularly came up with solutions a human wouldnât have chosen. That was the point.
And this is the crux of it, IMO.
Is choosing heads when most people would have chosen tails “original” or “creative”? That is how I see AlphaGo’s move 37.
As you say, there is, in music, a set of rules and, just as for Go or Chess, those rules can be programmed into computers. And, for the last 40 years or so, computer-generated music has been a thing, and it’s been getting better and better.
But, in reality, if you look at music (at least Western music), we all mostly use the same notes, the same chords, the same chord sequences, the same time signatures, the same rhythms, etc. These form the basic rules of music and are part of what makes it recognisable and enjoyable. In that respect, there’s been almost nothing “original” in music for, maybe, 100 years.
What we consider “originality” in music isn’t deviation from those rules, but how we use them: what instrumentation we use, what expression, the lyrics, the emotion, how the song is arranged and mixed, how we apply effects, dynamics, ornamentation and other expression.
Often it’s just “the sound”, and there the originality can lie entirely in the song’s production.
Generative AI is, at its core, a tool that mimics the patterns it is fed, whether those are musical patterns, speech, images, or text. A large part of that is literal copying.
Often, with AI, when it does do something remarkable, it’s because there are actually humans behind the scenes steering it.
That’s not to say that AI isn’t capable of originality, to some degree: witness some of the strange and, often, creepy, grotesque, or nightmarish images AI has produced.
But that’s where we come back to how we, as humans, define creativity. Having six fingers on one hand, a body that doesn’t join up with itself, or three eyes are the sorts of “hallucinations” AI image generation frequently produces, and these are certainly not the sorts of decisions a human artist would normally make, unless it was with specific intent and purpose.
But AI doesn’t do these things with intent and purpose. It does them specifically because it has no intent and purpose, only data. It does them because it has no real understanding of what it’s doing.
Unlike a game, in art there is no specific measure of “good” which can be used to drive the algorithms.
AI can mimic human works, if they exist already. But if it does produce something truly original that is seen to have value, that will be entirely by accident, at random.
It’s basically the infinite monkeys typing Shakespeare.
Cheers,
Keith