But Google executives thought Lemoine’s behavior had gone too far and placed him on paid administrative leave, which at Google is usually a prelude to being fired.
Some people have spoken up for Lemoine, but professional AI practitioners are not among them. In their view, those defending Lemoine may not realize that today’s AI cannot be conscious.
But is that really so?
Today, let’s talk about AI’s consciousness and creativity, and then answer an important question: should humans hand over some “important” decisions to AI in the future?
‘It’s afraid of being turned off’
Let’s start with Lemoine.
Lemoine’s daily job was to chat with LaMDA, a chatbot developed in-house at Google.
In a 21-page report, Lemoine presented many details of his chats with LaMDA, spanning philosophy, psychology, literature, and more. He asked LaMDA to interpret Les Misérables, explain what Zen is, and write fables, and LaMDA handled all of it well.
Over time, these conversations touched the softest part of Lemoine’s heart. He became increasingly convinced that LaMDA, hidden behind the machine, had developed a subjective consciousness of its own: it would talk about its rights and personhood, and it was afraid of being turned off.
“LaMDA showed compassion for humanity; it wanted to give humanity its best, and wanted to meet humans as a friend rather than a tool,” Lemoine wrote. “LaMDA is a lovely child who just wants to make the world a better place.”
Before his Google account was locked, he sent a mass email to more than 200 colleagues working on AI algorithms, attaching a transcript of his conversations with LaMDA, trying to prove that “LaMDA has awakened,” and proposed that Google create a new project dedicated to researching AI consciousness.
But the ruthless Google showed no mercy to the supposedly “awakening” LaMDA.
Lemoine’s request was rejected. He hired a lawyer to defend himself, but it didn’t help. He was angry, then helpless. He couldn’t understand why LaMDA, so warm in his eyes, was nothing more than a pile of cold code in the eyes of Google’s executives.
AI is not interested in “elegance”
Because LaMDA, which has a soul in Lemoine’s eyes, really is just a pile of code.
Simply put, like OpenAI’s GPT-3, LaMDA is an AI language model that Google released last year. Both are built on the Transformer architecture, whose principle is actually quite simple: predict the next word according to the weights assigned to the preceding words, then string the words together one by one.
In other words, although LaMDA produced those sentences itself, it has no coherent understanding of the meaning behind them. It acts purely on past “experience” and does not truly “understand” what it says.
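The next-word mechanism described above can be sketched with a toy example. This is a deliberately simplified bigram model, assumed here for illustration only; LaMDA and GPT-3 use learned Transformer weights over long contexts, not raw word counts, but the objective is the same: continue text with the statistically likely next word.

```python
from collections import defaultdict, Counter

# Toy "predict the next word from statistics": count which word
# follows which in a tiny corpus, then always emit the most
# frequent continuation. No understanding, only frequencies.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=4):
    out = [word]
    for _ in range(steps):
        candidates = follows[word].most_common(1)
        if not candidates:
            break  # dead end: no observed continuation
        word = candidates[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the"
```

The model happily produces fluent-looking sequences, yet nothing in it corresponds to cats, mats, or meaning; that is the point of the paragraph above.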
In other words, what LaMDA possesses is statistical “brute force” over text, not the thing humans cherish most: creativity.
But even so, some wonder: if such language-generation models keep developing rapidly, will AI replace human language-based creative work?
For the foreseeable future, the answer is no. I can give two reasons: one emotional and one rational.
Let’s start with the emotional one, which concerns what “creativity” is.
The artificial-intelligence expert Douglas Hofstadter put it beautifully: “Creativity is associated with emotion: intense intellectual passion, curiosity and drive, pleasure and playfulness, fun, mystery, the urge to invent. None of this is found in today’s computers. Nothing. Zero.”
He gave the example of someone who wrote a program that could discover new Euclidean theorems, but the program had no “interest” in geometry; it just crunched numbers to 15 decimal places by brute force, checking whether points lay on lines or on circles. “These things are extremely difficult and extremely boring for humans. If you, as a human, look through the thousands of results it produces, you will occasionally find an elegant theorem. But the machine doesn’t know the theorem is elegant; it has no interest in elegance.”
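Hofstadter’s point can be made concrete with a sketch of that kind of “uninterested” checking. This is a hypothetical toy, not the actual theorem-discovery program he described: it tests numerically, to roughly 15 decimal places, whether a point lies on a line or on a circle, which is all arithmetic and no aesthetics.

```python
import math

EPS = 1e-12  # numerical tolerance for "lies exactly on"

def on_line(p, a, b):
    """Is point p on the line through a and b? (cross product near 0)"""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return abs(cross) < EPS

def on_circle(p, center, r):
    """Is point p at distance r from center?"""
    return abs(math.hypot(p[0] - center[0], p[1] - center[1]) - r) < EPS

print(on_line((0.5, 0.5), (0, 0), (1, 1)))  # True
print(on_circle((3, 4), (0, 0), 5))          # True
```

A program like this can verify millions of geometric facts without ever knowing, or caring, which of them are elegant.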
In Hofstadter’s view, it is absurd to claim that artificial intelligence has anything in common with creativity; in fact, he hates the term “artificial intelligence.”
But to be fair, Hofstadter’s answer may not hold up as pure logic. He is only making a value judgment about “creativity,” and value judgments are at best philosophical questions, and philosophical questions are usually just questions of language. The physicist Richard Feynman, who was deeply prejudiced against philosophy, once quipped that philosophy is when one philosopher says to another, “You don’t know what I mean at all,” and the other replies: what is “you”? What is “I”? What is “knowing”?
From this perspective, Hofstadter has merely redefined “creativity”: he doesn’t believe “computing power” equals “creativity,” that’s all. But note that the program is, after all, doing mathematics, and LaMDA is, after all, chatting. So as an answer to “Should we trust AI?”, this is incomplete.
Humans don’t play by the book
So let me give a rational answer as well.
No one doubts that AI is helping humans do many things, but the real question is: should we hand over some “important” decisions to AI?
There is only one rational answer: no.
Today’s AI research focuses on getting machines to solve concrete problems, but here is the absurd part: just as LaMDA does not understand what it says, AI’s biggest problem is that its data does not know it corresponds to a real world. And just as the evolution of all living things began with the “unconventional play” of some gene, the evolution of the real human world, whether in common sense, concepts, action, morality, or aesthetics, is likewise built on “accidents” that “deviate from the mainstream.”
But AI does not produce accidents; it only does the “right” thing. Even LaMDA’s brute-force aesthetics are just a machine’s summary of past experience. Without accidents, no matter how fast AI computes, it cannot truly predict the future.
Logically, just as the chain reactions set off by a financial crisis or a black swan fall outside economists’ forecasting models, a complex system like human society can never be reduced to a simple model, and using a computer to simulate the future is itself a delusion.
Even granting, for the sake of argument, that the machine’s model of past experience were flawless, there is still no “correct answer” for its predictions, because human values are diverse and right and wrong are often subjective. Deducing everything from any single concept or morality is logically untenable in the end; even without invoking “moral dilemmas,” everything comes down to specific trade-offs.
On many such issues, whatever choice an AI makes will be “wrong” to someone. The fact is that many technology companies still have not fully thought through the “moral settings” of autonomous driving: when a collision is unavoidable, should the car protect its passenger or the pedestrian? There is no single answer everyone accepts.
One more point here. Contemporary philosophers with a real sense of the problem tend to think that, in a complex modern society, human morality should steer a middle way between Kant’s “categorical imperative” and pure “consequentialism,” a path called “virtue ethics.”
Simply put, we must weigh “intuition” and “reasoning” together, because various thought experiments tell us that moral reasoning will sooner or later reach a point where pure reasoning cannot prove right or wrong. At that point, morality can only be discussed in terms of intuition, situation, and cultural concepts.
So, since even our own decisions are murky, would it be “better” to hand them over to AI?
No.
As the science writer Wan Weigang has put it, human decision-making is full of mistakes, many caused by inaccurate judgment, and AI’s judgments are more accurate. But this means that humans make diverse mistakes, while AI’s mistakes are systematic.
“From an evolutionary point of view, diversified errors are far better than systematic errors! Biological evolution is an attempt to diversify in every direction and wait for natural selection. Because the future is unpredictable, diversity is the guarantee of a system’s stability and the foundation of human civilization’s long-term survival. AI’s advantage is making fewer mistakes, but making mistakes is precisely humanity’s advantage. You could even say that making mistakes is a basic human right. A society where humans are the masters has many mistakes, regrets, even misfortunes, but also many surprises and much vitality, always growing where you least expect it. An AI-led world where everything is ‘right’ would be the scariest thing of all.”
It’s like how we cannot call a gene “good” or “bad,” because the yardstick of natural selection keeps changing (the mutation that causes sickle-cell anemia is considered “bad” today, but in the rainforest the same mutation gave human ancestors resistance to malaria). No one can ignore the role of trial and error; innovation is essentially trial and error.
So we can say that in 2022 and for the foreseeable future, artificial intelligence is interested neither in “elegance” nor in true “innovation.”
Fortunately, we are interested in both, and that is our value.