Who was the prototype for South Korea's first female AI anchor, and what is her background?

On November 6, 2020, South Korea's MBN television station launched the country's first artificial intelligence (AI) anchor, which successfully broadcast the day's main news and several short news items. According to reports, the AI anchorwoman is named after, and modeled on, MBN presenter Kim Ju-ha. As soon as "AI Kim Ju-ha" appeared, she drew public attention, with many netizens saying after watching the news that they could hardly tell the real anchor from the artificial one.

Using an AI anchor to broadcast the news means content can be delivered to the audience quickly in emergencies such as disasters, and the anchor can work around the clock. It saves a great deal of manpower, time and cost, can be used to experiment with new programs, and makes efficient use of resources.

How was "AI Kim Ju-ha" born? According to the report, she was created by recording 10 hours of news broadcasts hosted by Kim Ju-ha and carrying out long-term, in-depth learning of Kim's movements, voice and broadcasting style. The system can generate a broadcast video for a 1,000-character script in at most one minute.

Editors turn the day's news into a broadcast script, the program director edits the subtitles and video and uploads news pictures, and "AI Kim Ju-ha", drawing on what she has learned through deep learning, imitates Kim's actual broadcast tone, intonation and mouth movements to deliver the news.
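The workflow described above can be pictured as a simple script-to-video pipeline. The Python sketch below is purely illustrative: the names NewsScript, synthesize_broadcast and ai_anchor_v1 are hypothetical and do not correspond to MBN's or any vendor's actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch of the broadcast workflow described above: editors
# prepare a script plus subtitles and pictures, and a synthesis step renders
# the anchor's voice and lip-synced video from it. None of these names
# correspond to a real product API.

@dataclass
class NewsScript:
    headline: str
    body: str          # the text the AI anchor will read
    subtitles: list    # editor-supplied captions
    images: list       # file paths of news pictures to show


def synthesize_broadcast(script: NewsScript, anchor_model: str) -> str:
    """Pretend to render a broadcast video for the given script.

    A real system would call a trained text-to-speech and lip-sync model of
    the anchor; this stand-in only reports what it would do.
    """
    print(f"Loading anchor model: {anchor_model}")
    print(f"Reading {len(script.body)} characters in the anchor's voice...")
    print(f"Overlaying {len(script.subtitles)} subtitles and "
          f"{len(script.images)} pictures...")
    return f"{script.headline}.mp4"  # path of the rendered clip


if __name__ == "__main__":
    draft = NewsScript(
        headline="evening_news",
        body="Today's top story ..." * 50,
        subtitles=["Top story", "Weather"],
        images=["studio.jpg"],
    )
    print("Rendered:", synthesize_broadcast(draft, anchor_model="ai_anchor_v1"))
```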

It is understood that "AI Kim Ju-ha" was jointly developed by MBN and the artificial intelligence company MoneyBrain. The AI video synthesis technology developed by the company combines artificial intelligence, deep learning and convolutional neural network (CNN) techniques, and can reproduce a real person's appearance so faithfully that it is difficult to tell the two apart.
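The article names only the ingredients (deep learning and convolutional neural networks), so as a rough illustration of the model family involved, here is a minimal PyTorch convolutional encoder-decoder for face images. It is a generic sketch, not the architecture actually used for the MBN anchor.

```python
import torch
import torch.nn as nn

# Generic convolutional encoder-decoder of the broad kind used for face
# image synthesis. Illustrative stand-in only; not MoneyBrain's model.

class FaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a 3x128x128 face crop into a small feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # -> 64x64
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # -> 32x32
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), # -> 16x16
            nn.ReLU(),
        )
        # Decoder: reconstruct the face from the compressed features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = FaceAutoencoder()
    frames = torch.rand(2, 3, 128, 128)  # two dummy face crops
    print(model(frames).shape)           # torch.Size([2, 3, 128, 128])
```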

In fact, AI anchors have been in use in China for some time. In February 2019, audiences were delighted to see that Sa Beining had a virtual "twin brother", Xiao Xiao Sa, on the stage of the CCTV Network Spring Festival Gala. It was the first time an AI virtual host had shared a stage with its prototype, which left Xiao Sa feeling keenly that his career might face a crisis in the future.

According to reports, it takes only about 30 minutes to build such an AI twin anchor. So what is the secret?

As soon as Xiao Xiao Sa appeared, Sa Beining couldn't help exclaiming, "My God, it feels like looking in the mirror." Going by appearance alone, he called him his "long-lost twin brother" outright. And Xiao Sa was no mere ornament on stage: he kept the scene so firmly under control that he left almost no openings for teasing, so the aggrieved Xiao Xiao Sa at his side had to cut in: "Can I get a word in?"

Clearly, compared with earlier rigid, mechanical virtual-human technology, a virtual host built on a real prototype with the help of artificial intelligence represents a great technical leap.

"To distinguish it from the real host, the technical team made some changes to the character design. For example, this Xiao Xiao Sa is cuter, a little taller and more talkative," said Zheng Yi, co-founder of the American artificial intelligence company ObEN.

Of course, Xiao Xiao Sa was not a one-off creation: in addition to Sa Beining's virtual twin, AI twin hosts for the other presenters Zhu Xun, Gao Bo and Yang Long appeared one after another.

At the beginning of this century, after the BBC released the first virtual host, Ananova, virtual hosts became a research hotspot in technology circles: "you can hear the voice and see the person." The 2019 CCTV Network Spring Festival Gala was the first large-scale application of this technology in China.

The technology used to create such AI virtual twin anchors is called PAI (Personal AI). Backed by more than 20 patent applications, it can generate image and voice models from nothing more than a face scan and about half an hour of recordings of a host such as Sa Beining.

"On top of the AI voice technology, the host does not need to record a large corpus of text to build a voice database. Only a few dozen short, standard recordings are needed. By extracting their characteristic parameters and applying a transfer-learning algorithm, a vocal model unique to the host can be built. Any input text can then be read, or even sung, in the host's voice, in four languages: Chinese, Japanese, English and Korean," Zheng Yi said.
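As a rough illustration of the transfer-learning idea described here, the sketch below freezes a stand-in "pretrained" backbone and trains only a small speaker-adaptation layer on a few dozen dummy feature clips. The shapes, layer sizes and data are invented for the example and have nothing to do with ObEN's actual PAI implementation.

```python
import torch
import torch.nn as nn

# Illustrative transfer learning: a pretrained "base" speech model is frozen,
# and only a small speaker-adaptation layer is trained on the few dozen short
# recordings mentioned above (represented here by random feature tensors).

base_model = nn.Sequential(          # stand-in for a large pretrained backbone
    nn.Linear(80, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
)
for p in base_model.parameters():    # keep the shared knowledge fixed
    p.requires_grad = False

speaker_head = nn.Linear(256, 80)    # small per-speaker adaptation layer

# "Characteristic parameters" of the host's voice: here, dummy 80-dimensional
# acoustic feature frames standing in for a few dozen short clips.
clips = torch.randn(40, 80)
targets = torch.randn(40, 80)

optimizer = torch.optim.Adam(speaker_head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):              # brief fine-tuning on the small dataset
    pred = speaker_head(base_model(clips))
    loss = loss_fn(pred, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final adaptation loss:", loss.item())
```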

It is reported that as it is "fed" more and more data, Xiao Xiao Sa will master more skills as it "grows up", even picking up Sa Beining's preferences and way of speaking, making it resemble him on yet another level. Combined with motion-capture training, along with sensors and motion-tracking equipment, this brings out the prototype host's personal characteristics and greatly enhances recognizability.