These examples are hilarious. But they are hilarious because you know better and can spot the errors. If someone is searching for information about something they know nothing about and only finds AI-generated "information", that's problematic.
I struggle to understand how we got here. Weren't computers and software built to always and reliably give correct answers? Well, once you've fixed all the known bugs. (For context: I have written, designed, and tested mathematical software and systems my entire professional life.)
So what has changed that there is such hype and such a push for tools that tell you at the outset that they might give you an incorrect answer? We might be better off asking a random person on the street, but we would trust their answer less because we are so used to relying on information obtained from a computer.
In professional and educational settings, I have some experience with these kinds of computer-based tools for classifying images of wildlife.
The one used by the iNaturalist website can be quite good when it has a sufficiently large dataset of verified observations to train on. Some of the other models available right now to do similar things can be pretty terrible. I was using one of those for work up until recently. I wasn't using it to identify wildlife to species; rather, I was using it to help pull the thousands of "blank" images out so I could spend more of my own attention on identifying the wildlife. It was supposed to be able to classify blank images, images with a vehicle, images with a human, and anything with an animal, but it kept misclassifying things as "vehicle". With my camera protocol, pictures of vehicles are impossible, and yet I had to go through and fix THOUSANDS of images that it incorrectly tagged as a vehicle. Most were blank, but distressingly quite a few had very obvious animals in them (I get lots of armadillo pictures).
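For what it's worth, the triage step is simple to script once you have a classifier. Here is a minimal sketch in Python of how that workflow might look, where classify() is a hypothetical placeholder standing in for whatever model is actually in use (the four class names match the categories above; the paths and threshold are made up):

```python
# Sketch of the triage workflow: move images the model is confident
# are "blank" into a separate folder, leaving everything else for
# manual identification. classify() is a hypothetical placeholder,
# not a real library call; swap in whatever model you actually use.
from pathlib import Path
import shutil

CLASSES = ["blank", "vehicle", "human", "animal"]  # the four categories above

def classify(image_path: Path) -> tuple[str, float]:
    """Placeholder: return (predicted_label, confidence) for one image."""
    raise NotImplementedError("plug in your real model here")

def pull_blanks(image_dir: Path, blank_dir: Path, min_conf: float = 0.9) -> int:
    """Move confidently-blank images aside; return how many were moved."""
    blank_dir.mkdir(parents=True, exist_ok=True)
    moved = 0
    for image in sorted(image_dir.glob("*.jpg")):
        label, conf = classify(image)
        # A high confidence threshold trades fewer false "blanks"
        # (missed animals) for more images left to review by hand.
        if label == "blank" and conf >= min_conf:
            shutil.move(str(image), str(blank_dir / image.name))
            moved += 1
    return moved

# Example usage (hypothetical paths):
# n = pull_blanks(Path("camera_trap"), Path("camera_trap/blank"))
# print(f"set aside {n} images as blank")
```

The confidence threshold is the design choice that matters here: set it too low and obvious animals get filed away as blanks, which is exactly the armadillo problem described above.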
My first experience with these types of tools was around 2008. I was taking a biostatistics course, and the professor was using computer software in his own research to measure pupfish morphology. It wasn't an automated tool (yet), but the current AI/computer-vision models that attempt to identify wildlife seem to have a similar tool at their core. I made an attempt to see if that tool would work to identify individual bobcats based on their specific markings; whale researchers were already using tools like that at the time to identify individuals from the shape and markings of their dorsal fins. The short of it is that bobcat shapes are a lot more complex and their body positions vary more, which made it very difficult for the tools of the time to do that.
I'm pretty sure the software that pathologists use to read tissue-sample slides is based on the same idea. Things like blood counts get fed through a computer. I'm not sure where they draw the line between using the computer to read samples and using people for that task; my wife (a veterinarian who consults pathologists regularly and has machines in her clinic that do routine work like blood counts) probably couldn't tell you exactly.
As far as I'm concerned, AI is over-hyped garbage being pushed on society to make money on the latest bandwagon. It has certain uses in some specific circumstances, but providing knowledge to the general public certainly isn't one of them.
I mean, you haven't been able to rely on internet searches for accurate information since long before AI came along. For years the top results have not been the pages that provide the best information on the search term; they're the pages with the best search optimisation, the ones that exploit the algorithms most effectively. Literal careers exist in doing just that.
And now we have AI, often largely trained on the same garbage information we can find ourselves all over the web, serving up that information as fact beyond doubt. Even in a manual search, half the results you find these days are "blogs" or websites that are blatantly AI-generated and full of nonsense. And because people are generally quite lazy and not interested in doing proper research, they take the first answer they find at face value. It must be true… the internet said so.
AI is just making people lazier, stupider, and more misinformed than many of them already are. The sooner this fad passes, the better off society will be.
Make no mistake: AI will completely change the world in ways we can't imagine. The current iteration of AI, as people have said, is unreliable and largely useless. When scientists figure out how to build a neural network (i.e. something similar to the human brain), that's when things will get serious.
Actually, although I cited some funny examples of AI giving very silly reviews of products, in biology and medicine research (which is the field I work in), AI has already completely transformed what we can do in a way you could not have imagined even 5 years ago. So while AI aids like ChatGPT can give us misleading information, in research it has massively accelerated what we can do: engineering better proteins, writing more efficient and less compute-intensive bioinformatic algorithms, and predicting antibiotics to which bacteria will not already have resistance. Most people equate AI with online tools like ChatGPT or the AI-generated summaries in Google and X; that's the everyday person's use of AI. But in research and development, AI is already very well advanced. It's just that most people don't realise that.
AI has its uses, I agree. The problem (IMO) is the rush to use AI in more generalised, everyday applications, driven by the desire to capture people's online attention.
And to circumvent existing laws and regulations which are designed to moderate the behaviour of robber-barons.
Because where you have this attention, you also have power, influence, control, and money. And if you can circumvent or neuter regulation, the robber-barons can exert more and more influence.
And it is in these general applications that AI is at its most useless, and most damaging to society.
The Cambridge Analytica scandal is a great example of how technology can be used to mislead and control. And yet no one was prosecuted for it (including those who were shown to have broken political campaigning laws).