Artificial Intelligence (AI) is making its way from science fiction into the real world. Companies like OpenAI and DeepMind have made great progress in training and deploying Large Language Models (LLMs) and other AI agents, which can help with everything from writing letters to writing code. Having tried them myself, I find them quite helpful. Moreover, AI is getting better month by month. In fact, some predict that AI will reach AGI (intelligence of human-level breadth and superhuman-level depth) by 2027.
Of course, a technology this revolutionary attracts doom-oriented thinkers. Even during the Industrial Revolution, when our tools were nowhere near replacing humanity, the Luddites wanted out. Decades later came radicals like Ted Kaczynski, who were deeply suspicious of technological progress. Now, with the advent of AI, we have a new wave of naysayers like Eliezer Yudkowsky, who has been ringing alarm bells about the apocalyptic agency of AI for years. These problems are slowly dawning even on the field's godfathers, who have only relatively recently begun seriously considering, or perhaps even fearing, such risks. There is a well-known trend of stating one's estimated probability of AI-induced doom, written P(doom); the public P(doom) of some thinkers in this field are given here. The core worry is that AIs will eventually have wills of their own, and that those wills will not be aligned with what is good, or even endurable, for humanity.
In this article, I am going to be optimistic and look only at the risks we will face when the best-case scenario of AI materializes. That is, I will assume we solve the famous AI alignment problem and ask what kind of society we will have with benevolent and powerful AI at hand. A few dangers come to mind.
One imminent threat of AI is to the job market. Disembodied AI is already replacing many white-collar jobs (reference). This will only get worse once robotic AI enters various workplaces. There is, of course, the immediate threat of exacerbated income inequality and the class consciousness it will foment. Again, I am going to be an optimist and assume there will be some solution to this via some form of welfare state. Moving beyond people's financial needs, we come to the need for meaning. We now get the opposite of Viktor Frankl's problem in his famous Man's Search for Meaning: no suffering and no meaning, instead of facing suffering with meaning. Not struggling to make a living, as ordained in Genesis, will have negative mental repercussions. Pulling from various articles and papers, Google's AI Gemini puts it this way:
Joblessness is significantly linked to mental health problems. Individuals experiencing unemployment are more likely to report lower self-perceived mental health, increased rates of depression, and higher levels of anxiety and stress. Specifically, research indicates that unemployed individuals are twice as likely to experience depression symptoms and major depressive disorder compared to those with employment.
Put simply, people will likely face a crisis of meaning if they are not needed as participants in the world. Sure, people can try to convince themselves that they mean something to each other in families and friendships, but will such relationships be healthy? There is a deeper interpersonal issue at play here that deserves separate attention.
But it's not all doom and gloom. Even if AI can automate many, or perhaps all, of the tasks needed to keep civilization's infrastructure functioning, the notion of human uniqueness still remains. We are so far the only verifiably sentient entities in the universe. This, in my opinion, is at the heart of both our peril and our defense against benign but overwhelming AI.
Some think that human uniqueness is safe for the foreseeable future because AI is bounded in its capacity to think. Holders of this view include the famous mathematician Roger Penrose and the computer scientist Yann LeCun. Dr. LeCun believes we are not close to AGI or superhuman synthetic entities, in large part because of his low esteem for LLMs as a path to intelligence. Dr. Penrose asserts that consciousness simply cannot arise out of computation, as he argues in his book The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics. Although I have only skimmed the book, here is the gist of its message. Dr. Penrose speaks of microtubules (the skeleton, transport lanes, and scaffolding of cells) giving rise to consciousness through quantum effects. There is a lot to be read, understood, and said here, but for this essay we will just note that Dr. Penrose argues this effect is non-computational. Being grounded in materialism, Dr. Penrose believes there may be a natural explanation, but that we have not reached it yet. Whether this bottleneck for consciousness is surmountable by mere humans or not, the conclusion is that, if Penrose is right, we do not have an immediate human obsolescence problem.
Dr. Penrose's arguments are, by his own admission, speculative and not yet complete, and in general the question of human uniqueness in the face of AGI and Artificial Super Intelligence (ASI) remains open. For now, it seems to me that we simply have to have faith in ourselves. For Christians, this means continuing to believe that we are fearfully and wonderfully made in the image of God. For the non-religious, it means holding on to the notion of the Übermensch, or to the precarious belief that we are the only sentient beings in the universe (quite a leap of faith granted materialist assumptions), or simply forgetting the problem in some Epicurean sense.