The problem with AI is that it will take a single Facebook discussion and present it as “fact”.
Eventually, people will pick up on this.
That is exactly what it just did when I asked ChatGPT “What’s the best?” in my product niche. My product was indeed the one touted as “best”, but only because it serves a very narrow niche and has been around far longer than any of the pathetic knock-off attempts, which come and go from time to time.
Worse yet, there’s only so much “online” that is not aggressively pay-walled (journals) or incomplete (Google Books), so it was only a matter of time until AIs scraping the web began to be misled by the inaccurate output of other AIs. It’s going to get very messy.
https://futurism.com/ai-models-falling-apart
And it’s Artificial IGNORANCE – hear me out on this…
I’ve worked with “AI” since the 70s. Back then, we coded in PROLOG and called it “expert systems”, “robotics”, and “neural networks”. But the current “AIs” are really nothing more than sophisticated guessing programs: search engines that guess how to move the CONVERSATION along, with no regard for things like checking multiple sources to verify a fact.
“Large Language Models” are inherently a dead end, as they depend on a large mass of “information”, which inevitably includes the internet, which has become a swamp of MISinformation. So the AI becomes a font of misinformation, or agrees with depressed people and suggests suicide, or states that the Nazis had the right idea. This is because there is no “mind” at all in an AI, just a set of programs that guess what the NEXT word it should say or type should be to make a “valid sentence”. It is linguistically “correct” speech, but it is empty-headed “thought” made up of scraps found on a trash heap of Reddit posts, newspaper reports, Myspace pages, fiction books, whatever the AI company could shovel into the maw of its “AI” as “training data”.
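To make the “next word guessing” concrete, here is a toy sketch in Python (my own illustration, nobody’s production code): a crude bigram model that “predicts” the next word purely from counts. Real LLMs use enormous neural networks over tokens instead of a lookup table, but the objective is the same, and notice that nowhere is there a step that checks whether the output is TRUE.

    # Toy "next-word guesser": pick the most frequent follower of a word.
    # Purely illustrative; real LLMs do this with neural nets over billions of tokens.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)          # word -> counts of what follows it
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word):
        """Return the most common follower -- a guess, never a verified fact."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(next_word("the"))   # prints "cat" -- because it is frequent, not because it is true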
As the AIs generate more “content” on the internet, future AIs get “polluted” by being fed the output of prior AIs, turbo-charging the “hallucinations”, which are not hallucinations at all. They are just the ultimate form of “Garbage In, Garbage Out”, a truism that has stuck with us since the days of Control Data Cyber machines running batch FORTRAN jobs from punch cards.
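You can watch this feedback loop happen in miniature. The sketch below is my own toy (the “drop the rare tails” step stands in for the way generative models favour high-probability outputs): fit a model to data, generate “synthetic” data from the fit, retrain on that, and repeat. The variety collapses, generation after generation.

    # Toy "model collapse": each generation is trained ONLY on the previous
    # generation's output, and each generation under-samples rare values.
    import random
    import statistics

    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(1000)]   # generation 0: "real" data

    for gen in range(1, 11):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        # Sample from the fitted model, but drop the rare tail values.
        data = [x for x in (random.gauss(mu, sigma) for _ in range(2000))
                if abs(x - mu) < 1.5 * sigma][:1000]
        print(f"generation {gen:2d}: sigma = {statistics.stdev(data):.3f}")

Run it and sigma shrivels toward zero: every generation knows a little less of the original variety than the one before it. Garbage In, Garbage Out, on a loop.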
But most of the massive investment in GPUs, in cooling for those GPUs, and in the “intellectual property” of the LLMs has been undercut by “DeepSeek”, which has been open-sourced and has been shown to require far fewer GPUs and far less energy to get to the same place as ChatGPT and the other hardware-intensive options.
The “gold rush” was prompted by platforms blowing large amounts of cash on “acquiring their own LLM AI”, which pushed valuations into bubble territory. Everyone thought that they HAD to add an “AI” to their social media platform offering, or be left behind. This is when a wise man plans to short the stocks of the AI companies, and Nvidia to boot. If it walks like a bubble, and quacks like a bubble…
There are still neural networks, and they do narrow jobs incredibly well, with the drawbacks that they do not reveal their inner workings well and may sometimes “cheat” rather than do the work expected. (One example: an AI trained to distinguish between huskies, coyotes, and wolves was relying more on the photo backgrounds than on the images of the canines themselves, but it gave good results ON THE TRAINING DATA.)
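Here is a stripped-down version of that “cheating” (my own invented numbers, two classes instead of three, and a made-up “snowy background” feature, purely for illustration): because every wolf photo in the training set happens to have snow in it, the classifier learns SNOW, not wolves.

    # Toy "shortcut learning": the background feature separates the training
    # labels perfectly, so the model leans on it instead of the animal.
    # Requires scikit-learn (pip install scikit-learn).
    from sklearn.linear_model import LogisticRegression

    # Features per photo: [ear_shape, snout_length, snowy_background]
    X_train = [
        [0.9, 0.8, 1.0], [0.7, 0.9, 1.0], [0.8, 0.7, 1.0],   # wolves, all shot in snow
        [0.8, 0.6, 0.0], [0.9, 0.7, 0.0], [0.7, 0.8, 0.0],   # huskies, none in snow
    ]
    y_train = [1, 1, 1, 0, 0, 0]            # 1 = wolf, 0 = husky

    clf = LogisticRegression().fit(X_train, y_train)
    print("training accuracy:", clf.score(X_train, y_train))   # perfect -- ON THE TRAINING DATA

    # A husky photographed in snow breaks the coincidence:
    print("husky in snow =>", clf.predict([[0.8, 0.6, 1.0]])[0])   # 1: called a "wolf"

The animal features barely differ between the classes, so the model puts its weight on the one feature that separates the training set perfectly: the background.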
I do not use ANY AI at all. I turn it off in my Google searches by replacing the standard Google search engine URL with “{google:baseURL}/search?udm=14&q=%s” (without the quotes!). You can google that search string to find out how to implement this in your web browser of choice.
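For example, in Chrome (menu names vary by browser and version) this lives under Settings > Search engine > Manage search engines and site search > Add:

    Name:     Google (Web only)
    Shortcut: g
    URL:      {google:baseURL}/search?udm=14&q=%s

The udm=14 parameter asks Google for its plain “Web” results tab, which skips the AI Overview box, and %s is where the browser drops in your query.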
The entire soap opera of “AIs taking over” was nothing but a PR stunt to pump up the share price of the AI companies. For AI to “take over”, it would have to be good, and it isn’t good at all. But AI will be waved around to scare employees into longer hours, smaller raises, and worse conditions, as the AI need not be very good to be a credible THREAT to the average employee.