2023-06-05 | Technology AI

Media and Politicians AI-Panicking

It's no longer just a moral panic; it's a global media meltdown. Geoffrey Hinton quit Google (after years of working on literal AI-assisted world domination) to give interviews to any outlet that will have him, having very public realizations about the completely unexpected and sudden dangers of AI development. There is no media site or news channel on the planet NOT talking about the horrible, terrible, world-destroying dangers of AI right now. In response, the general population is panicking as well, and inevitably, so are politicians. What's going on? Are LLMs really signalling the end times?

Nobody made these sweeping claims when it was just giant corporations milking AI to exploit their users. There was no public outcry, no entire media cycles spent sounding "the alarm" on AI safety. It was just a few AI researchers gently talking about AI safety and responsible use in general terms, and normies basically laughed at them. So what has changed?


AI has made good headway in recent years, but nothing unexpected is happening so far. Every rational person who has spent a significant amount of time with GPT-4 realizes that it's not generally intelligent. It's a fantastic tool that lets you do more and better things than ever before, and it can replace skilled workers in some cases. But that's simply how tools work.

Do image generators replace artists? Yes, to some degree, but there are obvious limitations on what you can do with them. There is no question that we'll need fewer commercial artists, and those we do need will be more specialized. People sometimes act as if AI image generation has made non-commercial painting meaningless, but if you're a hobbyist who simply enjoys creating beautiful things, there is no reason not to keep expressing and enjoying yourself. Of course, the IP crowd is freaking out over everything, as it has been since the invention of canvas and sheet music - so that's not particular to AI either.

Does text generation replace writers and text-writing experts? Yes, again to some degree. Rote software development, a.k.a. writing boilerplate code, can now largely be done with AI. That is fantastic news both for companies and for the people who had to spend their lives writing megabytes of useless code. People don't realize how much of a tax these huge software frameworks have been on the economy. Making huge amounts of boilerplate a necessity has been used to create commercial lock-in for organizations and to generate an "ecosystem" of new jobs around particular frameworks, and I believe it has sometimes also been used to artificially slow down garage coders and small startups. And just as with image generation, the programmers who remain will be fewer but higher skilled and more meaningfully employed. Analogously, software development as a hobby should continue to be a fulfilling activity.

So what is the real reason for the current media freak-out?

Corporate Interests

A big clue could be the history of OpenAI as a company. For most of its life, the company toiled away under the radar. Despite its name, there was nothing open about it. It may always have had huge corporate customers, but nothing user-facing. But there was a problem. As the dynamics of Silicon Valley go (shorthand for how the world of startups and big corporations interacts with the talent pool of scientists and programmers), a huge portion of that talent pool consists of programmers, and a huge portion of those are currently doing AI because that's where the money is. OpenAI went for a public-use model with GPT because they noticed this would be the last possible opportunity to do so before garage coders and the open-source community overtook them. As they watched their technology base erode into public knowledge, they realized they had to give the public access while there was still money to be made. I believe they would not have chosen to release GPT products unless there was a realistic threat of democratization on the horizon.

Now AI is being democratized in a big way, and all of a sudden every news outlet on the planet is running alarmist stories. I believe Elon Musk, of all people, saw the democratization of AI coming when he publicly called on lawmakers to heavily regulate the industry despite being heavily invested in the field himself (plus, he likely wanted to get his competitors outlawed). Well, it's here now. It's threatening all kinds of commercial interests, and it's no longer clear that the current corporate players will ultimately come out on top.

We're already in a long, slow recession globally. But corporations and politicians didn't necessarily have to worry until recently, because profits continued to accumulate at the top of the pyramid. But the world is changing. Turns out austerity and economic uncertainty do make the world more volatile, ultimately harming profits more than helping them. Turns out draining resources from the populace, while maximizing gains in the short term, does lessen its purchasing power. There might be a new world war in the cards. We might actually, as a civilization, expand into space this time. And on top of that, everyone now has access to AI tools, continuing humanity's habit of empowering individuals with capabilities previously only available to organizations, and empowering organizations with capabilities previously restricted to governments.

The Future

The public at large suddenly has access to a plethora of tools, not just ChatGPT. AI architecture and expertise are being commoditized as well. Anyone with access to Jupyter or Colab can start playing around with TensorFlow. Training huge models is still a resource problem, but the efficiency of both training and running neural networks will only go up from here on out. It's here to stay. Despite the obvious dangers, we should be glad that these tools can now be used by ordinary people, instead of only being used against them.
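The barrier to entry really is that low. Even before reaching for TensorFlow in a notebook, the core idea those frameworks automate - fitting parameters by gradient descent on a loss - can be sketched in a few lines of plain NumPy. This toy linear model (synthetic data, true weight 2 and bias 1, my own illustration rather than anything from a framework tutorial) is the kind of thing anyone can run and tinker with for free:

```python
import numpy as np

# What frameworks like TensorFlow automate at scale:
# fit y = w*x + b by gradient descent on mean squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 200)  # noisy line, true w=2, b=1

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y            # prediction error per sample
    w -= lr * 2 * np.mean(err * x)   # gradient of MSE w.r.t. w
    b -= lr * 2 * np.mean(err)       # gradient of MSE w.r.t. b

print(w, b)  # converges close to the true values 2 and 1
```

Swap the linear model for a deep network and the hand-written gradients for autodiff, and you have the training loop of any modern framework - which is exactly why this knowledge can no longer be fenced in.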

If they want to put this particular toothpaste back into the tube, they'd have to clamp down on general-purpose computation, which has been tried before and has failed. However, I do believe "they" will try exactly that. There have always been huge incentives to restrict the public's access to general computation, but now more so than ever before.