Henry Kissinger, the Man Who Nearly Started WWIII, Is Making Bonkers Predictions About How ChatGPT Will Upend Reality

Nothing quite screams “foremost authority on generative artificial intelligence” like a 99-year-old German-born man who nearly ushered in a global nuclear war over a game of geopolitical chicken.

That man, Richard Nixon and Gerald Ford’s secretary of state and author of the subtly titled “World Order,” believes ChatGPT-style AI systems could one day break human consciousness, usher in a new wave of techno-reactionary religious mysticism, and fundamentally collapse reality as we know it. Yes, the same ChatGPT that can’t do elementary-level arithmetic.

The alleged war criminal dished out those ideas in a recent editorial for The Wall Street Journal. Kissinger had some help: former Google CEO Eric Schmidt and computer scientist Daniel Huttenlocher rounded out the op-ed’s authors, whom we’ll refer to from here on out as “The AI Stooges.” The Stooges previously worked together on a book-length tome on technology called The Age of AI: And Our Human Future, which similarly made wide, starry-eyed claims about how AI could fundamentally alter human identity and potentially spur a new Cold War between the U.S. and China. That book was written nearly two years before the current craze around OpenAI’s ChatGPT and the coming wave of competing generative AI models. You can think of the Journal editorial, then, as a kind of wildly speculative, acid-trip expansion pack.

Schmidt, arguably the most credentialed of the Stooges when it comes to technology, also has the most to gain from an AI panic. The former Google executive has a long history of slipping in and out of Washington D.C. circles, making regular appearances in Barack Obama’s White House, where he reportedly encouraged the president to look favourably on the tech industry. Under Donald Trump, Schmidt formally co-headed the National Security Commission on AI, an organisation tasked with producing lengthy reports for the President and Congress detailing methods and strategies for advancing AI in national defence. The main takeaway from that report? The U.S. must invest more in AI to counter China. Schmidt also just so happens to reportedly have investments in his own military AI startups.

All of that’s to say these three Stooges aren’t necessarily the greatest sources of wisdom when it comes to the real-world implications of a real, and truly important, technology. But don’t take our word for it. Read on for the Cold War warmonger’s bonkers, and some not-so-bonkers, predictions about AI chatbots.

ChatGPT transcends human knowledge

Photo: Leon Neal, Getty Images

Might as well start with the authors’ largest leap first. Kissinger and the AI Stooges quickly attach themselves to the popular idea that the engineers and creators behind ChatGPT simply don’t understand how it works. Sure, engineers can and have explained that the system is a large language model that uses vast troves of online data to interpret a user’s prompt and predict likely answers, but that’s not good enough for the AI Stooges, who say there’s still some underlying mystery behind the system’s supposed “knowledge,” a knowledge that transcends human understanding.

“By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown,” Kissinger and the Stooges write. “Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.”

Looking to the future, the Stooges predict that ChatGPT-style AI, mixed with human reason, “stands to be a more powerful means of discovery than human reason alone.” “Learning from the changing outputs of generative AI, rather than exclusively from human written text, may distort today’s conventional human knowledge,” they write.
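For all the Stooges’ mysticism, the “predict possible answers” part of a language model is not magic. A toy sketch (this is a deliberately tiny bigram model, nothing like ChatGPT’s scale or architecture, and the corpus is made up for illustration) shows the basic statistical idea: count which token tends to follow which, then predict the most frequent follower.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (invented for this example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Real LLMs swap the bigram counts for a neural network over billions of parameters, but the objective, predicting a plausible next token from prior text, is the same, which is why the training data matters so much.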

AI will cause the death of the Enlightenment

Photo: Hulton Archive, Getty Images

The degree to which your room reeks of musty-smelling paperbacks will probably determine how much you care about Enlightenment-era philosophy in 2023. Kissinger, for the record, is nothing short of an Enlightenment stan, so much so that he made the philosophy’s potential demise at the hands of AI a major focus of his 2021 book on AI. The authors dove back into those points in the wake of ChatGPT and emerged more frightened than Voltaire walking out without his weekend wig.

Whereas the Enlightenment used an iterative progression of fact-based findings to deliver the world the scientific method, modern racism, and other so-called “objective truths,” the authors claim ChatGPT does exactly the opposite.

“Enlightenment science accumulated certainties; the new AI generates cumulative ambiguities,” the authors write.

When college students badger ChatGPT for summaries of Kant or whoever, the authors say they are deprived of the Enlightenment process of using facts to dispel mysteries. In other words, there’s no real “understanding” happening here. There’s just a prompt and an answer pulled from…somewhere.

“Inherently, highly complex AI furthers human knowledge but not human understanding — a phenomenon contrary to almost all of post-Enlightenment modernity,” the AI Stooges write.

There won’t actually be that many powerful AI models to choose from in the near future

Photo: Adam Berry, Getty Images

The main players driving AI forward in 2023 might not look so different in the decades to come. Like much of the tech industry, generative AI could face a wave of consolidation and monopolization. Running and maintaining a ChatGPT or a Google Bard, the authors write, is, for now, prohibitively expensive.

The largest AI models, the authors estimate, cost around $US1 billion each just to train. But the costs don’t stop there. Once a model has the necessary data, thousands of other computers running constantly are required to power it and ensure it delivers “dick jokes in the style of Oscar Wilde” to its users in a matter of seconds.

For now, the authors say it simply doesn’t make sense for most large companies, aside from Microsoft that is, to pay for exclusive use of those models. All of that means the creators of new large language models will likely turn to subscription business models for the foreseeable future, “so that a single model will serve the needs of many thousands of individuals and businesses.” In other words, while plenty of companies will hop on the generative AI bandwagon, only a handful will actually stand out.

Reliance on chatbots could worsen ‘automation bias’

Photo: Pool, Getty Images

OK, here Kissinger and kin make a fair point. There’s a wide body of academic literature exploring the concept of “automation bias,” a phenomenon in which humans over-rely on seemingly automated systems to make decisions. Whether it’s computerised order kiosks at McDonald’s or the sentencing algorithms prosecutors use to predict recidivism rates and hand out prison terms, humans have a long history of turning to machines in the name of speed, efficiency, and reducing perceived human error.

That reliance on AI systems, even pretty dumb ones, can potentially blind people to whole new sets of errors or biases presented by the seemingly objective machines. ChatGPT and its compatriots could make those issues far worse due to their pesky habit of confidently blurting out blatant bullshit as truth. AI researchers call these algorithmic lies “hallucinations,” or, as Kissinger notes, “stochastic parroting.”

“What triggers these errors and how to control them remain to be discovered,” the authors write.

That side effect of LLMs’ training structure, plus their lack of citations, means it could counterintuitively become more difficult to figure out what’s actually true.

Constantly evolving AI training sets could eliminate reality as we know it

Photo: Mark Wilson, Getty Images

LLMs like ChatGPT, and really most models referred to as AI, rely on large datasets to inform their predictions. AI, the old adage goes, is only as good as the data it’s trained on. That’s why AIs trained solely on Reddit posts say genocide’s OK if it feels good. ChatGPT and its successors are better than that, but will likely improve with more data. Those ever-changing datasets, however, mean ChatGPT’s answer to the very same question posed today could be different in five years.

The AI Stooges assume we, the collective dimwitted public, will of course continue to cede our decision-making to chatbots. That means that as ChatGPT’s answers to questions evolve, so too will the public’s general understanding of reality.

“The speed of the evolution of defining reality seems likely to accelerate,” the Stooges predict. “The dependence on machines will determine and thereby alter the fabric of reality, producing a new future that we do not yet understand and for the exploration and leadership of which we must prepare.”

Human brains could atrophy into useless meat mush

Photo: Win McNamee, Getty Images

Kissinger and his co-authors believe the expanded use of ChatGPT-style tools has the potential to make people a whole helluva lot dumber. By using machines more, the authors suggest, humans will inherently use their brains less, so much so that our ability to think critically could atrophy.

Children trained using ChatGPT in the classroom who grow up to become future leaders would then lack the “ability to discriminate between what they intuit and what they absorb mechanically.” Kissinger and his fellow AI Stooges hint at a future that looks less like Stanley Kubrick’s 2001: A Space Odyssey and more like Mike Judge’s Idiocracy.

Humanity could experience an era of AI imperialism

Photo: National Archives, Getty Images

If there’s one thing Kissinger, the eldest of the AI Stooges, knows about, it’s imperialism. The former Cold War secretary of state and “World Order” author says the imperialism of the future will be defined less by resource scarcity and expensive tanks, and more by the vast acquisition and monopolization of data. That data harvesting, its own form of resource extraction, will then go toward powering the world’s most advanced AI models. The nation-state that manages to extract the most data, in this vision, could emerge the geopolitical victor.

At the same time, the varying types of data collected by different countries or regions could lead to unique variations in the outcomes produced by competing AIs, which could in turn affect the way societies develop.

“Differential evolutions of societies may evolve on the basis of increasingly divergent knowledge bases and hence of the perception of challenges,” the Stooges muse.

Mistrust of AI could usher in a new wave of religious mysticism

If even half of Kissinger and the AI Stooges’ predictions manifest, humanity could understandably develop a weird relationship with AI. Future humans, presented with a seemingly omniscient new technological oracle that can alter the meaning of reality, may “trigger a resurgence in mystic religiosity.”

In a scene pulled straight out of Horizon Zero Dawn, hapless humanoids unable to comprehend AI’s complex majesty could turn to “an authority whose reasoning is largely inaccessible to its subjects” to guide them forward and show them the light.

Technologists who control AI could gain immense political power

Photo: Cindy Ord, Getty Images

It might be difficult to picture now, but the AI Stooges predict a possible future where cardigan-clad, Lime-riding tech CEOs could rapidly consolidate political power. Fierce competition over powerful AI systems, the AI Stooges write, means “Leadership is likely to concentrate in the hands of the fewer people and institutions” who control these reality-making AIs. Once again, the cost of training and operating these advanced models comes into play.

“The most effective machines within society may stay in the hands of a small subgroup domestically and in the control of a few superpowers internationally,” they write. “Design and control of these models will be highly concentrated, even as their power to amplify human efforts and thought becomes much more diffuse.”

See ya later, Democracy

Photo: Alex Wong, Getty Images

Commentators have spent a good chunk of the past decade warning that just about everything from politicians to social media can and will destroy democracy as we know it. And dammit, AI will too!

The AI Stooges warn that without a select few knowledgeable technocratic elites at the helm who truly understand the perils of AI (I wonder who that could be), society is set on a collision course with disaster. Chaos awaits.

“Without guiding principles, humanity runs the risk of domination or anarchy, unconstrained authority or nihilistic freedom,” the Stooges write.

If the AI elite fail at translating ChatGPT’s musings into forms the masses can understand, “alienation of society and even revolution” could occur. For the record, the Stooges think revolution here is bad.

“Without proper moral and intellectual underpinnings, machines used in governance could control rather than amplify our humanity and trap us forever.”

Remember, democracy itself is at stake, people! Democracy.
