The Dumbest AI Moments of 2023

AI’s effect on our world will be dramatic, but nothing will ever be as dramatic as the collective freak-out we all had about the emerging technology in 2023. With billions of dollars on the table and the promise that AI will transform every corner of modern life, everyone from businesses to regulators to regular people spent the last year jockeying to get in on the action. Along the way, we’ve been dealt some class-A techno-stupidity.

Countless businesses pivoted to AI-focused models. Lonely men fell in love with AI sexbots. An entire generation of online grifters built an empire on boring AI art and courses promising that you, too, could use the technology to make easy money.

A comprehensive list of the most brain-dead AI events of 2023 would stretch from here to the end of the internet, but your old pals at Gizmodo put together some of the standouts. Here’s a look back at the dumbest AI news of 2023. Click through the slideshow above, or just scroll down if you’re on a mobile device.

A $US700 mobile device to bring AI nonsense generators on the road

Screenshot: Lukas Ropek / X

A buzzy startup called Humane attracted a year’s worth of attention with a new AI-focused mobile device pitched as a replacement for cell phones. The $US700 Humane AI Pin clips onto your shirt and forgoes a screen in favor of voice commands powered by ChatGPT. It also has a little projector that will beam small amounts of text onto your outstretched palm or other surfaces, which is admittedly pretty neat.

The company launched its new product with a self-serious, Steve Jobs-esque video going over the device which, to many observers, demonstrated that the AI Pin and the company that makes it are a little ridiculous, or at least unfocused. Among other gaffes, the AI made several factual errors in the company’s own promotional video, including false information about the best place to view an upcoming eclipse and how much protein is in a handful of almonds.

AI leaders sign a letter begging humanity to save the world from…themselves

Photo: Usa-Pyon / Shutterstock.com (Shutterstock)

Make no mistake, artificial intelligence poses serious threats to society from misinformation, algorithmic bias, unemployment, and a long list of other unpleasantries. It’s also theoretically possible that the tech industry will make an AI so intelligent that it poses an existential threat to our society.

That final concern is the one the AI business wants you to focus on, in part because it’s worth considering, and also because if we spend all our time thinking about hypothetical future problems, we won’t spend as much time focused on the real problems AI is already causing.

In May, over 350 AI executives, researchers, and industry leaders signed a one-sentence open letter pleading with society to stop their technology from destroying the world: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” In other words, it was a bunch of millionaires wringing their hands about how scared they are of what they’re building while continuing to build it anyway—an outstandingly dumb example of grandstanding in an already stupid year.

Bing says it’s alive

Photo: rafapress / Shutterstock.com (Shutterstock)

If you had said ten years ago that Microsoft’s Bing search engine would get a voice and a personality, you would have been laughed out of the room (people might have laughed that you were bringing up the word “Bing” at all). But thanks to the company’s multi-billion dollar partnership with OpenAI, that’s exactly what happened.

In February, Microsoft unveiled Bing Chat, a ChatGPT-powered conversation robot. Things immediately went off the rails. Among the AI’s unhinged interactions with the public during its first weeks, Bing said it was alive, used racial slurs, shared its plans for world domination, and tried to convince a New York Times reporter to leave his wife. The chatbot also revealed a secret alter ego called “Sydney,” which turned out to be Microsoft’s internal code name for the chatbot.

Microsoft jumped to rein in Bing (aka Sydney) as quickly as possible, neutering the chatbot’s responses and forcing it to shut off conversations if it detected even a hint of weirdness. Bing now refuses to talk about Sydney if you bring it up.

Sam Altman says he plans to steal all the world’s wealth

Photo: jamesonwu1972 / Shutterstock.com (Shutterstock)

OpenAI lead Sam Altman has been a player in Silicon Valley for well over a decade, but 2023 was the year most people got their introduction to the CEO. He’s an unusual character, like tech leaders so often are. The New York Times did an unsettling profile of Altman titled “The ChatGPT King Isn’t Worried, but He Knows You Might Be.” It gives a picture of a man who’s far more aloof than you’d expect for someone so fond of saying his work might cause an apocalypse.

The profile is full of jaw-dropping details, but the dumbest comes at the end. Altman reveals that his ultimate plan is that his company will build a super-intelligence so powerful that it will “capture much of the world’s wealth.” The self-styled messiah said he intends to then redistribute that wealth back to the people, though he admitted to the interviewer that he has no idea how he might actually do something like that. All we know is it will involve his eyeball-scanning orb in some way.

Elon Musk builds an anti-woke chatbot and then gets mad at how woke it is

Photo: Antonio Masiello / Contributor (Getty Images)

You can’t fight progress, but that’s never stopped some people from trying. This year, the angry online man contingent took issue with ChatGPT and its AI brethren because, it seemed, executives at OpenAI and other companies had trained their robots to parrot left-wing views.

Elon Musk was among them, and he decided to do something about it. Musk commissioned a chatbot of his own called Grok which was released as part of the premium package on X, the website formerly known as Twitter. According to the company, Grok is trained to have “a bit of a rebellious streak” and is supposed to answer “spicy questions” that other AI systems reject.

Shortly after Grok’s release, right-wingers on X voiced complaints that Grok is just as woke as its competitors, citing the fact that the AI wouldn’t automatically mimic their own political beliefs about issues including Islam and trans women. Musk said that this is because the internet “is overrun with woke nonsense” and promised to make Grok better.

Congress asks if the AI business would like to regulate itself

Photo: Andrea Izzotti / Shutterstock.com (Shutterstock)

Widespread fears that AI might start some kind of apocalypse put pressure on regulators to, at the very least, appear to be doing something to stop it. One of the more pronounced examples came in May when the Senate Judiciary Committee called in OpenAI CEO Sam Altman for a hearing.

You might think a hearing with one of the most powerful men in Silicon Valley would come with a lot of hard-hitting questions. But apparently, Altman spent the lead-up to the hearing schmoozing and making friends with the politicians, and the charm offensive worked. For the most part, it was a calm and friendly affair that saw politicians complimenting Altman and even referring to him by his first name like old friends. One senator even asked Altman if he’d be interested in leading a new regulatory agency that would oversee his industry. He politely declined.

Lawyers fined for submitting bogus AI-written legal documents

Photo: Andrey_Popov (Shutterstock)

By now you’ve probably heard that ChatGPT has a penchant for making up lies and stating them as fact, or what experts refer to as an AI “hallucination.” That didn’t stop two lawyers in New York City from having ChatGPT spit out some legal documents. Unsurprisingly, the AI fabricated some quotes and citations out of thin air, but the lawyers submitted the documents anyway, apparently without doing any due diligence to check the facts.

The shenanigans came to light when the court noticed six of the legal cases used as citations were imaginary. The judge wrote that lawyers Peter LoDuca and Steven A. Schwartz then made matters worse for themselves by lying to the court. When the court questioned their made-up case law, the judge said Schwartz offered “shifting and contradictory explanations” and LoDuca pretended to be on vacation in a bid for more time.

Federal Judge P. Kevin Castel wrote that the lawyers and their firm, Levidow, Levidow & Oberman, P.C., acted in bad faith and lied to the court to cover up their mistakes. The respondents “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question,” Judge Castel wrote in the decision.

A popular scifi magazine shut off submissions amid a flood of AI-written slop

Photo: Vasilyev Alexandr / Shutterstock.com (Shutterstock)

We knew the ease of AI content production would fill the internet and the world with junk, but it came as a bit of a surprise just how quickly that happened. In February, less than six months after ChatGPT’s release, a popular science fiction magazine called Clarkesworld shut down submissions because it was hit with a tidal wave of AI content.

The magazine said it had never seen plagiarism or other fraudulent content at such a massive scale. And unsurprisingly, most of it was garbage. Many of the stories even had the same boring title, “The Last Hope.” The magazine’s editor Neil Clarke blamed the problem on internet “side hustle” content creators who promoted the idea that submitting AI articles to magazines was an easy way to make money.

Sports Illustrated says it didn’t use AI to write articles, actually

Photo: PREMIO STOCK / Shutterstock.com (Shutterstock)

One of the more groan-inducing AI events of the year came courtesy of Sports Illustrated, a magazine so old that people were impressed by the fact that it included photos in its early days. In November, the publication sparked outrage when reporters discovered that Sports Illustrated had been publishing AI-written articles masquerading as human work, complete with author bios and photos… or so it seemed.

According to Sports Illustrated, these articles were not written by AI; they were just so terrible that it seemed only a robot could be responsible. Examples included this article peddling affiliate links for volleyballs, which reads “Volleyball can be a little tricky to get into, especially without an actual ball to practice with.” AI or not, you have to admit that’s a good point.

OpenAI fires and then rehires Sam Altman

Photo: Justin Sullivan / Staff (Getty Images)

Just when it seemed like 2023’s AI news couldn’t get any more absurd, OpenAI hand-delivered a grade-A batch of corporate spectacle. In November, the company’s board of directors voted to fire Sam Altman, a man who is without a doubt one of the most successful CEOs in recent memory. Even stranger was the fact that the board refused to say why, exactly, it was letting Altman go, and gave him zero warning that anything was on the way.

Shortly thereafter, almost every OpenAI employee threatened to quit if Altman wasn’t reinstated, and less than a week later, the board brought Altman back as a conquering hero. The board of directors didn’t fare as well. Every single member was replaced except for one.
