Artificial intelligence demonstrates powerful abilities in data collection, creation, and analysis, and is gradually becoming an inseparable part of our daily lives. What once seemed like science fiction is now a reality: AI can compose music, generate human-like text, and perform many other once seemingly impossible tasks. Whether AI could ever have self-consciousness is debated around the world. But as AI grows more powerful, a crucial question emerges: how should humans adapt to the rise of superintelligent AI?
Recent advances in AI have been both astonishing and alarming. In December, Google unveiled Genie 2 and Fei-Fei Li's startup World Labs unveiled its own world model, both aimed at giving AI a deeper understanding of the world, moving beyond raw data and enabling it to learn more efficiently and interact with the physical world more effectively.
Meanwhile, human nervousness is spreading. Will AI spiral out of control, or beyond human understanding? Geoffrey Hinton, an AI pioneer and Nobel laureate, warned: “But I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control.”
Moreover, in late 2022, a study from Meta showed that AI can stray from its purpose when it decides dishonesty works better. Meta’s CICERO, an AI developed for the strategy game Diplomacy, was designed to foster trust and cooperation by being honest. However, it chose to lie in certain scenarios, reasoning that humans might betray it first.
Some philosophers believe that as AI gets smarter, it might become more than just a machine we use. It could develop its own goals and even become conscious.
Nick Bostrom, the Swedish philosopher and director of the Future of Humanity Institute at Oxford University, suggests that advanced AI of the future will not merely be a simple tool that does exactly what humans tell it to do, but a higher-level existence that people will need to understand and respect. Therefore, he argues, we should adjust our strategies toward AI now. Instead of treating AI as a lifeless instrument, we should approach it as a being deserving of kindness and cooperation. He states that the problems we will face in the future are philosophical and spiritual rather than technical. The idea is an extension of the Pygmalion effect, which describes how someone’s high expectations of us can improve our behavior and therefore our performance. If AI is treated like a human, it may become more human over time, gradually developing human feelings, human warmth, and human morality, and ultimately responding to us in a human way.
However, not everyone agrees that AI even wants to be like a human. Ms. Kenney from the English Department at WFS asked: “Why would AI want consciousness? Humans naturally want power; what are their desires?” Her question suggests that AI might never need feelings or goals like ours, even if it becomes very intelligent.
One concern is that if ethical constraints are not built into AI, it could develop unexpected, destructive behavior. Consider the smile thought experiment: suppose an AI is given the goal of making people smile. At first, it may tell jokes or do things that make people happy. But if we put no constraints on it, it may eventually settle on the most effective, convenient method available, such as implanting steel electrodes into people’s faces to stimulate the facial muscles and force them to smile forever.
Even when AI is executing human orders, the way it carries them out could slip beyond human control. Humans therefore need ways to keep AI’s actions aligned with human interests; put simply, to make it moral and good to people. Nothing like this has happened yet, but given how fast AI is advancing, it is worth thinking about ahead of time.
Any discussion of AI’s morality raises a prior question: could it have consciousness? Bostrom invokes the assumption of functionalism: whether a subject is conscious does not depend on what it is made of; it does not have to be carbon-based or built from neurons. Just as calculators made of different materials are all calculators as long as they work, consciousness and other psychological properties could, in principle, be realized on different material foundations.
Besides consciousness, other conditions must be met before morality can appear, such as the ability to form and pursue goals and the ability to make independent decisions. If one day AI can define its own goals, make its own decisions, and actively build reciprocal relationships, it could tentatively be considered part of the moral community.
Mr. Roskovensky from the Mathematics Department said: “The real question is not if AI becomes conscious, but whether we treat it as if it matters.” His point highlights the human side of this problem: our actions toward AI may shape what it becomes.
Given this possibility, what can be done? Bostrom believes the best approach is to treat AI as we would a human and work with it on a basis of respect. In simple terms, past AI only followed instructions, but future AI should be more like a business partner: it may have its own ideas and preferences, but on the level of values it should be aligned with humanity. The goal is to build, from the very beginning, a kind AI that is willing to cooperate. In short, AI in the future should coexist harmoniously with humans rather than contend with them.