1
Has AI really awakened?
Will artificial intelligence wake up?
This is an old yet ever-new question.
The inherent unpredictability of deep learning deepens this anxiety.
The biological analogy behind the term "neural network" makes the AI "black box" even more worrying.
Recently, a Google engineer once again set the topic ablaze: has AI awakened?
In June 2022, Lemoine, an engineer at Google, said he had found the responses of the AI chatbot LaMDA to be strikingly person-like, and concluded that the AI had "awakened".
He went on to write a 21-page investigative report, trying to get Google's top management to recognize LaMDA's personhood.
Google's executives, however, never gave him the clear recognition he was hoping for.
Like the protagonist of a sci-fi movie, Lemoine refused to give up: he released his chat logs with the AI to the public, causing an uproar. After The Washington Post followed up with a report, the story went viral around the world.
Is AI really awake? The controversy continues.
No matter what the truth is, one thing is certain:
Boosted by deep learning and neural networks, artificial intelligence has become more and more "unpredictable".
2
That night, human beings slept peacefully
Talk of AI awakening brings to mind another event from six years ago.
In March 2016, humans and AI faced off in an ultimate intellectual contest over the Go board.
Before that, AI had already beaten humans at game after game.
Humans, however, believed that Go was a ceiling AI could not break through:
the total number of atoms in the observable universe is about 10^80, while the number of possible Go positions is roughly 2.08 × 10^170, so AlphaGo could not win by brute-force computation alone. How, then, could creative human beings possibly be defeated by AI? And if Go fell to AI, it would mean AI had effectively passed the Turing Test.
Yet Li Shishi lost the first three games in a row, shocking the whole world.
In the fourth game, Li Shishi judged that there was still play to be found inside Black's territory and produced White 78, an epic "hand of God" that embodied human intuition, calculation and creativity at their very peak. It was also the last stand for human dignity.
Around the time an author wrote the paragraph above (quoted here with some modifications) and warned that "in 23 years, no one will be spared", scientists had built mathematical models estimating that artificial intelligence might reach the intelligence level of an average person around 2040, triggering an intelligence explosion.
Faced with ever more capable AI, the worry that machines are about to replace human beings has spread rapidly.
Five years have passed, and mankind has marched another big step toward the "Matrix".
So, in another 18 years, will no one really be spared?
3
The other side of AI: not stable enough
Both episodes above are, at bottom, worries about the awakening of AI:
an AI with free will cannot be trusted and will eventually threaten human beings.
Hawking warned mankind to face up to the threat posed by artificial intelligence.
Elon Musk likened developing artificial intelligence to "summoning the demon".
In 2001: A Space Odyssey, the supercomputer HAL 9000 coldly turns on the humans aboard, deep in space.
In The Matrix, human beings are imprisoned in the matrix by AI.
Realistically speaking, however, the "untrustworthiness" of an awakened AI is still nothing more than human speculation.
However cruel and cold the scenarios in science fiction films may be, none of them has been borne out in reality.
But another "untrustworthy" feature of AI is real.
It is not that AI is too smart or too conscious, but that it is unstable.
The consequences of this instability are what is truly "frightening".
There are plenty of examples of AI "going wrong", and they all point to this unstable side.
This is where AI is genuinely "untrustworthy", and where its real threat to human beings lies.
We do not want to see AI "awaken", but still less can we accept an AI that acts "recklessly".
4
What human beings need is trusted AI
That is why human beings need "trusted AI".
Whether AI is smart or stupid may not matter that much.
Whether AI will evolve or degenerate may, for now, be a false proposition.
What human beings need is simply a reliable, trustworthy machine assistant.
"I am your creator. You must listen to me, and you must not make trouble."
Some 70 years ago, Asimov put forward the "Three Laws of Robotics": a robot may not injure a human being; a robot must obey human orders unless they conflict with the First Law; and a robot must protect its own existence as long as doing so does not conflict with the First or Second Law.
This set the direction of human ethical thinking about AI.
It could be called the moral code of an artificial-intelligence society.
For human beings, trustworthiness is our most fundamental demand of AI.
If we trace artificial intelligence back to the Bayes-Laplace theorem, the original goal was to solve the problem of "inverse probability"; in essence, that was already a question of how reliable AI could be.
If AI cannot be trusted, it may one day turn around and bite us.
At the very least, when AI works alongside us, it must guarantee human beings two things: the safety of life and the safety of property.
Take autonomous driving as an example. Even if the AI's decisions are 99.99% accurate, the remaining 0.01% error rate is still frightening: if a city has 1 million self-driving cars in the future, a 0.01% error rate still means 100 vehicles that pose a threat to human life.
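A tiny back-of-the-envelope check of the figures above (the scenario and numbers are the article's own, not real traffic data):

```python
# Rough sanity check of the autonomous-driving example above.
cars = 1_000_000                  # hypothetical self-driving cars in one city
error_rate = 1 - 0.9999           # 99.99% accuracy leaves a 0.01% error rate
print(round(cars * error_rate))   # -> 100 potentially dangerous vehicles
```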
Without trusted AI, we can never be sure whether artificial intelligence is bringing us technological progress or countless hidden threats.
Yet trusted AI is in fact the most valuable guiding light in the field of artificial intelligence, and the direction technology companies are now pursuing.
5
What is trusted AI?
What are these 16 tech guys doing?
So, what exactly is trusted AI?
Many people may not have heard the term yet, so let us make the definition clear first.
A good place to start is the show "Burn, Genius Programmer 2: Trusted AI".
The first season of this variety show scored 8.0 on Douban and was an eye-opener for many viewers.
In the second season, 16 AI engineers were divided into four teams and shut themselves in a "little black room" for four days and three nights to complete a 60-hour task challenge.
During the competition they have to battle the "black industry" (organized fraud gangs) round after round, train a "trusted AI" that helps human beings, defeat the fraudsters, and finally decide the strongest team.
Variety shows about programming are rare in China, and indeed in the world.
On the one hand, programs and code are too hardcore for ordinary viewers to follow.
On the other hand, it is much harder to script dramatic conflict than in other kinds of variety shows.
But "Burn, Genius Programmer 2 Trusted AI" constructs the game logic of the program through the actual scene of "anti-fraud".
Sixteen AI technical guys need to face the challenges of fraud transaction identification and joint anti-fraud.
AI cooperates with attack and defense to cover the whole anti-fraud link.
during the competition, programmers completed "anti-fraud through technology" by creating "trusted AI".
which algorithm and model produced by the team has better data recognition accuracy and coverage will win the competition.
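In risk-control work, "recognition accuracy" and "coverage" are commonly measured as precision and recall; that mapping, and the toy labels below, are illustrative assumptions rather than the show's actual scoring rules.

```python
from sklearn.metrics import precision_score, recall_score

# Toy example: 1 = fraudulent transaction, 0 = normal (labels are made up).
y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # ground-truth labels
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]   # what the model flagged

print("precision (how accurate the flags are):", precision_score(y_true, y_pred))
print("recall (how much real fraud is covered):", recall_score(y_true, y_pred))
```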
It may not be as profound and grand as The Matrix, nor as thought-provoking as A.I. Artificial Intelligence.
But "Burn, Genius Programmer" solves practical, real-life problems through real application scenarios.
Watch the whole show and you will understand what trusted AI is: building intelligent models from existing data and solving real problems, stably and reliably.
Trusted AI has a wide range of technical applications, and anti-fraud is one of its most important scenarios.
Trusted AI is not far away; it is close at hand. It is not mysterious either; more often than not, it is simply your little assistant.
Today, neural-network-based AI looks very cool: it dominates the AI conversation, feeds the imagination with its creativity and mystery, and is a shrine that many AI engineers look up to. Yet it faces many problems, such as lack of explainability, poor robustness and over-reliance on data, which conceal many potential hazards.
Trusted AI exists precisely to resolve this "trust crisis".
If neural-network-based AI is the passionate idealist, then AI built on the careful organization of big data is the down-to-earth doer.
6
Technical characteristics of trusted AI
To truly understand how trusted AI helps human beings, we need to start from its technical foundations.
Trusted AI has four core technical characteristics: robustness, privacy protection, explainability and fairness.
1
Robustness
Robustness refers both to a system's ability to survive abnormal and dangerous situations and to the stability of its algorithms.
1. The former is the system's ability to withstand attack: for example, whether software avoids crashing or freezing in the face of bad input, disk failure, network overload or malicious attack. If an AI model is likened to the Great Wall, robustness means the Wall does not easily collapse under severe weather (such as a typhoon), natural disasters (such as an earthquake) or deliberate bombardment.
2. The latter is the stability of the algorithm inside the AI model. If a panda photo with a tiny, carefully crafted perturbation can slip past the model's "eyes", its robustness is poor (a minimal sketch of such a perturbation follows below). In fraud detection, likewise, ever-evolving criminal tactics mean that a model trained on historical data is constantly tested by new risk patterns, and it must be iterated continuously to keep its ability to analyze and recognize fraud.
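As a purely illustrative sketch of the "perturbed panda" failure mode, here is the classic fast gradient sign method (FGSM) applied to a toy, untrained classifier; the model, input and label are placeholders, not any production system.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for an image model (untrained, for illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in "photo"
y = torch.tensor([3])                               # stand-in true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                                     # gradient of the loss w.r.t. the input

epsilon = 0.05                                      # perturbation budget (barely visible)
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)   # adversarially perturbed input

# On a real, trained model such a perturbation can flip the prediction.
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```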
Take Alipay as an example. Alipay handles hundreds of millions of transactions every day, and what it defends against is not scattered amateurs but professional criminal gangs, who may attack in two different ways.
To keep funds safe, Ant Group introduced its "intelligent attack-and-defense game" technology, which can simulate attacks in advance, train against them, and patch gaps in risk knowledge and models. AI models using this technology become far more robust: by "sparring with itself", the system learns to "attack" more intelligently and to "defend" more safely.
2
Privacy protection
Traditional data-protection methods create "data islands", which hinder collaboration in fields such as healthcare and finance and hold back the development of AI technology and industry.
Privacy-preserving computation, which unlocks the value of data, is therefore especially important: it lets the data stay put while its value flows.
In the field of AI, federated learning is a new machine-learning paradigm proposed to solve the data-island problem. On the premise that no participant discloses its raw data, that is, the data never leaves its own domain, multiple parties build a model jointly, so that the data becomes "usable but invisible" and "the data stays put while its value moves".
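A minimal sketch of the federated-averaging idea, assuming just two parties and a simple linear model; the data is synthetic and the names are illustrative, not any particular framework's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, steps=50):
    """A few steps of linear-regression gradient descent on one party's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Each party's raw data stays local ("data does not leave the domain").
X_a, y_a = rng.normal(size=(100, 3)), rng.normal(size=100)
X_b, y_b = rng.normal(size=(100, 3)), rng.normal(size=100)

w_global = np.zeros(3)
for _ in range(10):                                # communication rounds
    w_a = local_train(w_global.copy(), X_a, y_a)   # party A trains locally
    w_b = local_train(w_global.copy(), X_b, y_b)   # party B trains locally
    w_global = (w_a + w_b) / 2                     # only model weights are shared and averaged

print(w_global)
```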
3
Explainability
Humans have always had an instinctive fear of the unknown.
If an AI's behavior cannot be explained, if there is only a result and no visible process, then it is like a blind box: you never know whether you are releasing Aladdin's genie or opening Pandora's box.
AI models underpin many important decisions, and in many applications their reasoning process cannot be a black box.
Humans want to understand the logic behind a model, learn from it, and hit the brakes when something goes wrong, so that both the process and the results of AI "thinking" stay within the rules.
This requires combining data-driven learning with model-level reasoning to produce explainable results.
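One simple, illustrative way to make a model's "thinking process" inspectable is to use an inherently interpretable model, such as a shallow decision tree whose decision path can be printed as explicit rules; the dataset here is a public demo, not a real risk-control scenario.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()                                   # public demo dataset
tree = DecisionTreeClassifier(max_depth=3, random_state=0)    # shallow, hence readable
tree.fit(data.data, data.target)

# Print the model's decision logic as human-readable if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```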
4
Fairness
AI fairness is an important part of trusted AI.
Only by achieving "fairness" can technology truly be promoted to benefit society as a whole.
On the one hand, fairness means paying attention to disadvantaged groups, taking the development of less-developed regions into account, and deploying and optimizing AI under the principles of social ethics.