Artificial intelligence (AI) is all the rage. It's given almost god-like praise by a lot of people in the tech world. It's going to solve all our problems and bring everlasting prosperity to all, and it will do so almost magically. The more information we feed it, the more it has answers we don't, and the less we humans are even needed.
Naturally, though, there have been a number of issues with it so far. That includes people overestimating its abilities and its veracity. That also relates to the knowledge and judgement of the people running these companies — many fans think they have all the answers and know exactly what they're doing. It's becoming more and more clear that's simply not the case.
We’ve got two stories today along these lines. Check out the interesting anecdotes below, and chime in with your own thoughts down in the comments.
AI Keeps Claiming to Know Stuff It Doesn’t — Blatant Misinformation
So, OpenAI CEO Sam Altman was recently interviewed for the Mostly Human podcast. During the interview, they brought in a question from TikToker Husk, who raised the following issue: he used ChatGPT's voice mode to set a timer for a one-mile run he was going to do, but then he stopped the timer seconds later … and the AI service told him it took more than 10 minutes to run the mile. When he tried to tell ChatGPT it was wrong, the AI insisted it was right.
Altman took a few moments to respond, but then he said it was a “known issue” and that it would take “maybe another year” to fix it.
What?!?! The AI has an obvious, blatant problem in which it just makes stuff up and insists it’s right, and OpenAI has no solution to it? That seems ridiculous, but it fits with other issues. These AI services have a few big problems that keep popping up:
- When it doesn't know something, instead of saying so, it has the habit of just making things up. That's a ridiculous approach for a service that's supposed to provide you with reliable, accurate answers.
- Furthermore, it presents its answers as authoritative. Rather than hedging appropriately, it makes its responses sound official and definitive.
- Perhaps the most shocking, though, is that the AI will sometimes double down and claim it’s right even when it’s corrected or challenged.
Sam Altman Can Barely Code? Doesn’t Understand Basic Machine Learning Concepts
Perhaps there’s some exaggeration here. Or perhaps not. The New Yorker recently spoke to OpenAI insiders and found that at least some of them see Altman as a bit of a dunderhead technically.
“According to numerous engineers interviewed for the article, Altman lacks experience in both programming and in machine learning — a shortage of expertise that becomes obvious when the CEO mixes up basic AI terms,” Futurism summarizes. “It’s important to note that Altman dropped out of a Stanford computer science program after two years. We’re not here to shame anyone based on their education, but as the CEO of what may soon become the world’s most valuable publicly-traded company, the myth surrounding Altman matters.” Hmm, yes. I can see how Altman and Elon Musk could get along, and then also have a massive falling out with each other.
A former OpenAI researcher, Carroll Wainwright, had this to add: “he sets up structures that, on paper, constrain him in the future. But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”
“This knack for papering over technical shortcomings with boardroom maneuvers earned Altman a reputation as a practitioner of ‘Jedi mind tricks,’ one tech insider who worked with the CEO explained,” Futurism added.
Then there’s this big possibility tossed out there by a senior executive at Microsoft: “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.” Hmm. Yikes. That’s sharp.
To end: I was trying to find an image to use for this article, so I went to the ChatGPT site, but there was nothing at all aesthetically interesting there. So, eventually, I thought of the prompt above to type in and take a screenshot of. Then I went ahead and clicked the button to see how ChatGPT responded. This was the answer:

Interesting.
