Some further thoughts on AI

AI has certainly put the cat among the pigeons in this forum!

First, I would like to question all prognoses. Even the experts in the AI field fall into three groups. Some opine:

  • AI will degrade if it is both flooding the internet with content and using the internet to learn, because its own confabulation will recursively pollute it.
  • AI won’t get better or worse, it has quickly reached its functional asymptote and only minor further improvement will be possible.
  • AI will soon reach a singularity whereat it will bootstrap itself to superintelligence at unimaginable speed.

Not only are those three positions mutually exclusive, they are widely distant from each other.

The programme of AI has been around since the 1950s and has at different times been considered an aspiration, a failure, a fantasy, a threat, a saviour and a mere tool. It could turn out to be any of those things.

No one knows.

Second, AI does not spell the end of art or creativity. It may suck all the money out of the arts, but that is a different problem. Creative people are called that because they create under whatever circumstances they find themselves in, no matter how adverse.

Similar artistic panics greeted the inventions of photography and musical recording. How would you feel if the painters and musicians who railed against these innovations had succeeded in getting them banned? Justin Guitar depends for its very existence on both of them - and several other technologies. So do most of the things we use today.

Third, there is a category error in most contemporary AI discourse. AI is not large language models like ChatGPT. LLMs are built on transformers, which are the latest approach taken by AI researchers, just as in the past they tried logic, neural networks, genetic algorithms, machine learning and so on. AI itself is a programme to create thinking machines, and at present its practitioners are still trying to figure out which approach will work best.

Fourth, we do not yet have sound definitions of art or creativity, or even life, consciousness, and intelligence. Questions remain. However, the field of AI has been quite helpful in throwing light on what those things may or may not be. It and the greater field of cognitive science have actually helped us better frame questions and understand ourselves a little more clearly.

Fifth, AI might be too pervasive for any of us to take in, and therefore could be simultaneously something we ignore and something that fixates us. How do we respond if AI proves to be superior to collective human wisdom in addressing the great issues of our times: environmental instability, global disease control, fair distribution of resources and opportunity? Do we insist on staying in control, knowing that doing so continues to imperil us, or could we be rational enough to let the real expert do the job?

If that’s too big a picture to grapple with, here’s a smaller one. If it becomes quite clear that robot operators do a better job of eye surgery than shaky and all-too-imperfect surgeons, who do you choose? After answering that question, just take it in steps to bigger and bigger scenarios. Who will you choose?

Lastly, the most crucial question, or perhaps I should say the most clear and present danger, is not whether AI will ‘replace’ us all, but rather how those who control it will use it for their own ends, at our expense. This is what concerns me the most. Each great technological leap we’ve taken has disproportionately empowered the already powerful. We need to guard against that, as this time may be our last chance. I will refrain from getting any more political than that.

To summarise my point, now is a time to be asking questions and carefully evaluating the evidence used to support answers.

3 Likes

"Knowledge is power", commonly attributed to Sir Francis Bacon, 1597.

Broadly speaking, those who control and have unlimited access to AI will ultimately have the power to use that knowledge to their advantage, and therefore the greatest control over others. This is already evident in most societies, where those who are well educated and have access to knowledge can use it to rise to power and wealth, whether or not they are elected to positions of power.

How AI is used and by whom will have big impacts on societies and will likely create further inequalities.

1 Like

What if AI’s answer is that the human race is the problem, and it recommends reducing the population as the best way to solve all problems?

3 Likes

Doesn’t seem like we need AI to let us in on that non-secret.

Personally, I would like to avoid AI at all costs (imho, there’s no way to do that). While I think it could do good things for ‘all peoples’, my suspicion is that it will do much harm before it becomes a worthwhile endeavor. That is, if we make it that far without destroying ourselves. It seems we don’t need AI’s help in doing just that; imho, AI will help accelerate our demise, though.
I’m kind of the opinion that AI has already started to infiltrate us and we just don’t know how much.
Things may not be as they seem to be.

Sorry, I don’t mean to be a Debbie Downer.
Play some guitar for recovery. Even if you’re the worst player on the planet, at least it’s real!

ymmv

Sucking all the money out of the arts is an interesting one.

I have friends and relatives who are trying to eke out a living from the arts (some are doing it successfully). For those who do it by playing live music, even if / when AI produces better music / art than humans, listening to generated content versus seeing a live performer are two different things. I’m confident live music will last a lot longer than recorded music.

Still, it’s more likely to be a downside for those performers.

On the other hand, the money that’s been available to rock stars and such has been obscene, and while I’m confident some of them have used their riches to good effect, the disastrous consequences of money and fame are abundant, horrific and well known. The entertainment industry has a LOT to answer for in its provision of drugs and access to under-age and legal-age groupies. So if sucking all the money out of the arts means this no longer happens, that’s a pretty good side effect.

1 Like

I was a freelance academic copywriter and editor when ChatGPT arrived. My business model crumbled overnight. Now, academics will feed their proposals to ChatGPT rather than pay me to edit them. I’m OK with this, because to not be would be pure self-interest on my part. Ironically, the last document I worked on was a proposal for a national AI strategy.

2 Likes

Hey, what if we decided to put our collective effort into optimizing human intelligence instead of passing off our future to big warehouses full of chips and wires?

Optimizing Human Intelligence is what got the planet into the sad state it is in today.

1 Like

Hi Mark,
This is a well-balanced, thoughtful post.
Thank you :grinning_face:
I agree with most of your musings, although I remain a bit sceptical that we (the masses) are going to be very effective in reining in the uses of AI, and esp. in curbing the multinationals from running riot… :roll_eyes:
It’s here to stay, and most of us are already consuming it quite naturally.

The only thing humans are better at than creating crises is adapting to and surviving crises…
Nobody ever seems to mention the fact that the human brain is just a series of chemical reactions and electrical impulses, quite similar to an AI model :wink:

1 Like

Curious if the AI naysayers have actually tried to use it. Maybe try to find its usefulness before you draw all kinds of whacky conclusions. I’m sure some of the same things were said about the invention of the printing press (or any other invention for that matter). It’s been a timesaver for me in certain aspects, but has a ways to go in other areas.

Maybe lighten up just a little, check it out for yourself and keep an open mind about it. It’s really just another tool on our tool belt.

Here is smirky Justin:

Here’s an AI generated happy baby Justin:


Head spinning comments to arrive in 3…2…1

1 Like

Thanks Brian.

Actually, I share that scepticism. We worry about giving untrammelled power to the AI bogeyman, while repeatedly giving it to megalomaniacs.

Ooh, touchy subject! But yes, the philosophical materialists are still a minority.

1 Like

Thankfully, all the long words in this thread are going right over my head!

Ignorance is bliss!! :grinning_face_with_big_eyes:

1 Like

Ahh… but but we’re a noisy bunch! :wink:

Some of us even write shouty songs about it :rofl:

Any 2-bit-algorhythm can create a smiley/cutie Justin :wink:
It would take proper computing power to conjure a smiley/cutie Clint, though…
Over to you :rofl:

1 Like

Well, I see 2 outcomes to that.

We either pull its battery.

Or

It’s War.

:joy: This is the most jovial possible way of saying that I think it’s unlikely anyone will volunteer to be first in line to hop into the volcano on the suggestion of AI commands.

Edit:

This was great.

:joy::joy:

2 Likes

I don’t think you have to worry about AI destroying humanity.

Consider this analogy: humans are ants in the great scheme of the universe and everything. When ant colonies become overpopulated, they leave to found another colony elsewhere, or worse, they attack and destroy a neighbouring colony. Eventually there are so many colonies in one area that they start to die of starvation, their impact on the available resources having been catastrophic.

If you look at the history of great civilisations in human culture, they rise and fall like the Roman Empire, and another comes along to replace it. The difference now is that we are talking globally: we are running out of everything and screwing the planet, so it is most likely we will disappear through war, famine, pandemics, global warming or a combination of them all, leaving the AI machines to tidy up.

Because True independent thinking and robust discernment are anathema to a society built on conformity, obedience and control.
Until that society implodes, we’ll keep promoting and pushing these narrow forms of intelligence - at least publicly.

2 Likes

I’ve been following this topic for well over two decades. Back around 2000 I remember reading Ray Kurzweil’s (yes, the keyboard guy) book The Age of Spiritual Machines.

In that book he talks about enhancing biological intelligence with artificial intelligence, leading to an event he calls the Singularity (the subject of his later book, The Singularity Is Near).

Some would say this is the ultimate optimizing of human intelligence,

Ray is predicting the singularity by 2045.

When I read his book in 2000, he made a number of predictions for the next 10 years. When I checked them in 2010 it was uncanny how accurate he had been.

While I’m optimistic about the potential of AI, a lot of the arguments about not doing it are moot, because it’s being done whether we like it or not, and I have grave concerns that the guard rails one would hope for are being ignored.

We can postulate all day long as to whether it should or shouldn’t be done but one might as well break wind into a tornado as it’s happening regardless.

1 Like

In a few years :roll_eyes:


:grin:

5 Likes

Actually, I would argue that they aren’t.

First of all, all of the discussions of AI art are specifically talking about a particular type of AI: generative AI.

There are other types of AI in development and in current use which could make your 3rd statement come true (if only partially).

I don’t think AI will bootstrap itself, as all current AI isn’t really intelligent at all. But I can see other ways we can destroy ourselves by relying too heavily on AI. In fact I think one of the dangers is that AI can appear to be intelligent, and that leads people to trust it when they should not.

Note that there are already AI-driven kamikaze drones being used on the battlefields of Eastern Europe.

Then, regarding generative AI in particular, I see the possibility of the technology reaching a functional asymptote and the potential degradation of output quality as being two sides of the same coin.

In a way, they are different vectors: the quality of generated content is largely driven by the quality of the data being used for training, whilst the ability to use that training data is a function of the development of the technology…

…and, with current LLM technology, how much computing power, electrical power, and clean water you are prepared to throw at it, and how much pollution you are prepared to create.

The development can plateau as it has with LLM, and the quality of the training data can degrade at the same time. I don’t see these as mutually exclusive.

Cheers,

Keith

… and that’s part of the beauty! :rofl:
Artists are going to have to be creative (and I have no doubt they will!) :wink:

1 Like