AI Is The Most Undemocratic Technology Maybe Ever

Who has a say in the future?

When you go high enough, Silicon Valley is not unlike DC in the sense that it is largely divorced from the real world. The best and most influential founders and VCs are just as out of touch as the Senators and cabinet members. This is pretty intuitive. It takes a while to become rich and powerful, and once you do, you tend to only hang out with other rich and powerful people.

AI is a prime example of the tech Twitter echo chamber. The e/acc wing of Twitter deeply believes that AI will save the world, and that any opposition is simply decel propaganda. Even outside of the hardcore e/acc’ers, most of tech Twitter is excited about AI. But that’s largely not the case for regular, non-tech people.

And who can blame them? AI has already affected the jobs of writers, graphic designers, reporters, fast food workers, helpline workers, freelancers, and more, with the trend likely to accelerate before it ever improves. It’s very possible that a large majority of the country opposes the advancement of AI.

But is that enough reason to stop? Although tech kingpins are famously bad at predicting how their inventions will permeate society, even the biggest AI critic has to admit that it has potential unlike any other technology ever invented. In the ultimate upside scenario, AI is the last technology man ever has to invent. The last mountain to climb before an intergalactic utopia.

This opens up a debate that tech has rarely had to deal with. How much input should the masses have over potentially society-altering tech? Or, put another way, how much power should unelected innovators have over reality?

This debate is new because of two interrelated factors. First, most of Silicon Valley’s inventions only affect those who want to be affected. And second, the free market has a way of regulating most of those inventions.

On the first point, if you don’t want the new Mac or new iPhone, you simply choose not to buy it. Don’t care for VR? Don’t buy a headset. Even crypto, the quintessential example of a tech echo chamber, has no impact on the lives of people who don’t indulge in the coins. If it wasn’t for the news, you could go your whole life without ever knowing crypto even exists.

That leads to the second point. If nobody buys a VR headset, that signals there is no demand for VR, and the tech giants will adapt accordingly, either by continuing to iterate on the headset or by ditching VR completely. This is how the ideas people want come to the forefront and the ideas people don’t want die. It’s why entrepreneurs so passionately defend the pursuit of profit. Through the pursuit of profit, consumers should end up with what they want.

Neither of these two points applies to AI. It doesn’t matter if you want nothing to do with AI, because AI is coming for you and your job. Likewise, there is no way to fight back against AI in the free market because it is not just a demand-side product. If it cheaply replaces people, then corporations will always have an incentive to develop and use it.

So where does that leave us? With a technology that fundamentally changes everything, and that you, the person who is most affected, are unable to impact in any way.

Simply put, it’s perhaps the most undemocratic technology ever invented. Yes, even more than the totality of the Industrial Revolution. That only affected blue-collar workers. AI affects everyone, and right now, only a handful of people have a say in its development.

That’s the problem, but the solution is…more complicated.

Do we shut down or pause AI? Ignoring any debate on the merit of doing so, this is a completely impossible task. Other countries (read: China) are accelerating. Open-source is a thing. And there will always be people operating in the shadows, working and building. If you can’t get rid of recreational drugs, how are you going to get rid of AI?

Do we try to take into account regular people’s opinions somehow, maybe through something like a nationwide poll? Eh, this would probably be counterproductive. AI is a complicated technology that is evolving extremely quickly. Any mass poll would lag severely behind the pace of innovation, and considering that most people have no opinion of AI other than “it’s going to take my job,” this would most likely result in a “shut down AI” vote, which, again, is impossible. It’d be a huge waste of time. Besides, the problems with pure democracy are well known. That’s why we have representatives.

Does that mean we should trust our representatives to keep us safe? This would be great, but is there any reason to believe they can do so? We’ve all seen the clips of representatives befuddled by concepts like “WiFi” and “ads”. And it’s not getting any better. Congress is the oldest it’s ever been. The two most likely Presidential nominees are so far over the hill that the hill looks like a dot to them. How can we expect any of them to effectively regulate the most complicated technology ever created?

Should we trust the rank-and-file workers actually building AI to keep things safe, as some have suggested? It’s a nice thought, but this probably wouldn’t work either. Workers have a huge financial incentive to accelerate AI as fast as possible. It’s why they rebelled against the OpenAI board when it tried to slow things down. There’s no reason to expect them to act differently in the future, especially if they also start to believe losing their jobs to AI is inevitable. In that case, all the incentives point toward cashing out as much as possible.

Ok, but what about the regular non-AI workers? Can we trust the working class to band together to fight back against their collective extinction? There does seem to be some progress on this front, but there are definitely reasons to be skeptical. The threat of AI is so wide that it’ll take cooperation like we’ve never seen before to make any real progress. And it’ll have to happen fast, because labor will be left with no leverage once AI becomes good enough.

So, that leaves us with trusting what the Altmans, Sundars, and Nadellas of the world are telling us: AI is a technology that will make our lives exponentially better, even if it takes a lot of jobs in the transition period.

It’s a scary proposition considering how poor those in the arena are at predicting what happens outside of the arena, but it’s all we have at the moment.
