
The ability of a virtual interlocutor to imitate human communication is evaluated by the Turing test. The test involves conducting computer correspondence with two interlocutors, one human and one machine, and determining which is which. The AI Loebner competition for the most human-like computer program was held annually from 1991 to 2019, but no super-winner was declared. The Turing test has limitations, as many programs can pass it simply by manipulating symbols.
Virtual interlocutor: with whom do you want to communicate?
The desired and preferred portrait of a virtual interlocutor is a multifaceted topic that requires comprehensive analysis. As with in-person interlocutors, ease of communication varies from person to person: some individuals are easy to converse with, while with others it can be hard to find topics for even a brief, non-committal conversation. Moreover, people rarely take the trouble to state their communication preferences in advance, especially with someone they do not know. In any case, communication is a dynamic process involving a complex interplay of expectations and spontaneity, which is the essence of human interaction.
It is crucial to note that the value of a virtual interlocutor stems from its ability to imitate human communication. A virtual interlocutor can simulate human-like interactions, which include the use of natural language processing, tone, and non-verbal cues. Additionally, a virtual interlocutor can leverage machine learning algorithms to learn from previous interactions and improve the quality of future ones.
The value of a virtual interlocutor thus lies in its ability to simulate human communication and learn from previous interactions, making it an effective tool for applications such as customer service, mental health support, and education.
Turing test
How can this ability be identified before a virtual interlocutor is released “to the masses”? It is believed that the virtual interlocutor must pass the so-called “Turing test”. This is, however, not quite the format we are used to, with questions and correct answers to them. It is an empirical test whose task is to keep a person from realizing that they are communicating with a machine.
This test was developed by the English mathematician, logician, and cryptographer Alan Mathison Turing (1912-1954). As his dates make clear, this happened long before not only modern chatbots but even the first chat program, ELIZA. He outlined the idea of the test in his article Computing Machinery and Intelligence [A. Turing, 1950].
The test looks as follows: a person is given the task of conducting computer correspondence with two interlocutors and, at the end of the correspondence, determining which of the unseen interlocutors is a person and which is a machine. If the person cannot give an unambiguous answer, or answers incorrectly, the computer program is considered to have passed the test of its ability to simulate live communication with a human.
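As a toy illustration only (the judge, respondents, and detection heuristic below are all invented for this sketch and have nothing to do with any real contest), the setup described above can be simulated in a few lines of Python:

```python
import random

def turing_test(judge, human, machine, questions):
    """One round of a toy Turing test: the judge sees only labeled
    transcripts and must name the label it believes is the machine."""
    labels = ["A", "B"]
    random.shuffle(labels)                      # hide who sits where
    seating = {labels[0]: human, labels[1]: machine}
    transcripts = {label: [respond(q) for q in questions]
                   for label, respond in seating.items()}
    guess = judge(transcripts)
    return guess == labels[1]                   # True means the machine was unmasked

questions = ["What did you have for breakfast?"]
human = lambda q: "Just coffee, I overslept."
machine = lambda q: q                           # a crude bot that parrots the question

def naive_judge(transcripts):
    # Heuristic: whoever echoes the question back verbatim is the machine.
    for label, replies in transcripts.items():
        if replies[0] in questions:
            return label
    return "A"

print(turing_test(naive_judge, human, machine, questions))  # True: the parrot is unmasked
```

Real judges, of course, have no such reliable heuristic; the interest of the test lies precisely in the fact that a good program leaves the judge guessing.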
The test was invented by analogy with the Imitation Game, a popular party game of those years. In this game, a man and a woman went into two different rooms, and the guests passed them written questions without seeing who was in which room. The guests’ task was to use the answers to figure out which room held the man and which the woman; the task of the people in the rooms was to mislead the guests and keep them from realizing who was hiding where.
Each of these “secret rooms” contained a typewriter (there were no household or office printers in those years), on which the man and the woman typed their answers before passing them back to the other participants, so as not to give themselves away. The typewriter was needed to hide the handwriting, which might otherwise have betrayed whether it belonged to a man or a woman.
If the guests guessed which room was occupied by the man and which by the woman, they won. If those in hiding managed to mislead the guests to the point where they could not answer, or answered incorrectly, the winners were the man and the woman who had “tricked” the guests with their answers.
The Turing test was destined to live a long life after its inventor’s death. From 1991 to 2019, the AI Loebner competition for the most “human” computer program was held annually, and the winner was awarded the Loebner Prize of $2,000.
In fact, the competition’s ambitions were even grander. It was originally planned that the first program to pass the Turing test using text, visual, and audio confirmation would receive a gold medal and a prize of $100,000, after which the contest would be discontinued. It was, however, discontinued without any such super-winner being revealed.
Why? Partly because the year 2020 brought humanity far more pressing concerns than the task of convincing a person that a human, not a computer, is talking to them. Beyond that, the Turing test competition had certain limitations from the start. For example, the contest questions were for a long time restricted to narrow fields of knowledge, and even there most of the submitted programs did best only when the answer could be a simple “Yes” or “No”.
In addition, many such programs, including the previously mentioned ELIZA, can pass the Turing test simply by manipulating symbols whose meaning they do not fully understand. This is roughly like the multiple-choice part of the Unified State Exam (the Russian school-leaving exam), where you only have to pick the correct answer from those offered. A correct answer in that setting hardly serves as an objective measure of intelligence.
Over time, alternatives to the Turing test have emerged: for example, the Marcus test, in which a program that can “watch” a TV show is asked meaningful questions about its content, or the Lovelace 2.0 test, in which artificial intelligence must demonstrate the ability to create works of art [D. Wakefield, 2014].
One way or another, the AI Loebner contest has lost its relevance, and the chatbot’s stock phrase “I am just learning” deflects users’ complaints about a virtual interlocutor’s lack of resemblance to a human. Moreover, in business it is considered good practice to warn the client upfront that they are communicating with a bot.
At the same time, it is still recommended to imitate the communication of a live person in everything, including pauses before sending a reply message. Artificial intelligence allows a virtual interlocutor to process a request and respond instantly, but such communication is uncomfortable for humans: cultured people are used to the balance between “talking” and “listening”, and fingers need a rest when typing.
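The recommendation above can be sketched in code. A minimal example, assuming a typing speed of about 200 characters per minute plus a short random “thinking” pause (both figures are illustrative assumptions, not guidance from the text):

```python
import random

def reply_delay(reply: str, chars_per_minute: int = 200) -> float:
    """Estimate a human-like delay (in seconds) before sending a reply.

    Models the time a person would need to type the message at the
    assumed speed, plus a 1-3 second pause for 'thinking'.
    """
    typing_time = len(reply) / chars_per_minute * 60
    thinking_pause = random.uniform(1.0, 3.0)
    return typing_time + thinking_pause

delay = reply_delay("Hello! How can I help you today?")
print(f"Wait {delay:.1f} s before sending")
```

The random jitter keeps the rhythm from looking mechanical; a production bot would tune both parameters to its audience rather than use these made-up defaults.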
It seems we have dealt with this part of the virtual interlocutor’s portrait. Another important question remains: what to talk about? It is relevant both in ordinary live communication and in communication with a virtual interlocutor. Just as in real life it is not always possible to quickly find a common topic of conversation, so with a virtual interlocutor it is not always clear what to want and expect from it.
Uncontrolled, chaotic self-learning of chatbots can lead to unpredictable consequences, in particular irrelevant statements and attempts to flirt. People overly preoccupied with the sexual side of existence have, it seems, already found everything they were looking for, because any “undertrained” or self-directed artificial intelligence training process can gather all the most lewd material on the Web in short order.
Everyone else should give preference to chatbots that learn within the dialog; then, over time, there is a chance of getting an adequate interlocutor and a reliable assistant. For example, the previously mentioned ChatGPT can not only hold a conversation on a free topic, but also write an essay on history or geography, solve problems in physics and math, come up with a script for a children’s party or a birthday song, and, on request, add guitar chords.
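What “learning within the dialog” means can be illustrated with a deliberately tiny sketch (the two phrase patterns and the canned replies are invented here; real systems use far richer models than a dictionary of remembered facts):

```python
import re

class LearningBot:
    """A toy interlocutor that learns facts within the dialog.

    It understands only two patterns: "X is Y" teaches it a fact,
    and "What is X?" asks it back. Everything else gets a stock reply.
    """
    def __init__(self):
        self.facts = {}

    def reply(self, message: str) -> str:
        question = re.match(r"what is (.+)\?", message, re.IGNORECASE)
        if question:
            topic = question.group(1).lower()
            return self.facts.get(topic, "I am just learning.")
        statement = re.match(r"(.+?) is (.+)", message, re.IGNORECASE)
        if statement:
            self.facts[statement.group(1).lower()] = statement.group(2)
            return "Got it, thanks!"
        return "Tell me more."

bot = LearningBot()
print(bot.reply("What is Python?"))                   # I am just learning.
print(bot.reply("Python is a programming language"))  # Got it, thanks!
print(bot.reply("What is Python?"))                   # a programming language
```

Even this crude memory changes the feel of the conversation: the bot’s second answer to the same question is informed by what the user said in between.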
Portrait matters in chat.