Construction technologies

    Last week, a notebook belonging to Alan Turing, the famous mid-20th-century mathematician, was sold at auction in New York for $1 million. The reason for the lot's high price is simple: back in 1950, Turing proposed a method for probing a kind of "boundary of intelligence" in a machine, now known as the Turing test. Because the test was later recognized as imperfect, other approaches to distinguishing artificial intelligence from the ordinary kind have since appeared, along with programs built specifically to beat the original test. One such program is the chatbot Eugene Goostman, named by its creators, the Ukrainian Eugene Demchenko and the Russian Vladimir Veselov. The virtual personality they built successfully imitated the behavior of a thirteen-year-old boy from Odessa.

On June 7, 2014, the School of Systems Engineering at the University of Reading (Great Britain) held a competition of artificial intelligence systems. Five machines and 25 living people took part, and the judges corresponded with each of the thirty participants for five minutes. As a result, 33% of the judges were firmly convinced that Eugene Goostman was a living person, making the chatbot the first program in the world to pass the classic Turing test.

But instead of marking a scientific breakthrough, the program only exposed the test's weakness. Eugene Goostman's sole task was to impersonate a human. A machine programmed to deceive and imitate cannot actually make logical decisions or create anything. Moreover, a computer impersonating a human must significantly limit its own abilities. The simplest example is multiplication: the average person is unlikely to multiply six-digit numbers in his head instantly, while even a simple computer can perform far more complex calculations in an instant without possessing any intelligence.
Worse, in a Turing-style test a machine would immediately give itself away with its excellent mathematical abilities, so to keep up the pretense it has to act slow and stupid enough. Faced with this, scientists admitted that the test, long considered a kind of standard, does not fulfill its function.

Therefore, in December a group of mathematicians from Johns Hopkins University (Baltimore, USA) and Brown University (Providence, USA) proposed an expanded version of the test, in which computers must answer testers' questions by recognizing visual images. According to the test's creators, today's optics are not inferior to human vision in image quality, so it is entirely possible to ask a computer what it sees. The machine must not only recognize and name objects but also answer follow-up questions about them. However, there is no guarantee that even a computer capable of this can be considered intelligent: highly effective image recognition systems already exist, one of them in service at the social network Facebook.

As humanity approaches the threshold of creating artificial intelligence, the question of defining that intelligence comes up more and more often. Last fall, researcher Mark Riedl of the Georgia Institute of Technology (USA) proposed his own system for testing machines for intelligence. In Riedl's view, creative ability can be a sign of intelligence. His test is called the Lovelace 2.0 test, in honor of Ada Lovelace, one of the first programmers: if a machine can create a small creative work, the scientist believes, it can be considered intelligent.
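The idea behind the expanded visual test described above can be sketched in a few lines of Python. This is a toy illustration, not any research group's actual protocol: the scene description, object names, and helper functions are all hypothetical.

```python
# Hypothetical sketch: the machine holds a symbolic description of what it
# "sees" and must answer a sequence of questions about it, each question
# building on the previous answer.
scene = [
    {"object": "car", "color": "red", "x": 40},
    {"object": "person", "color": None, "x": 120},
]

def is_there(obj, color=None):
    """First-level question: is an object of this type (and color) present?"""
    return any(
        item["object"] == obj and (color is None or item["color"] == color)
        for item in scene
    )

def left_of(obj_a, obj_b):
    """Follow-up question: is obj_a positioned to the left of obj_b?"""
    xs_a = [item["x"] for item in scene if item["object"] == obj_a]
    xs_b = [item["x"] for item in scene if item["object"] == obj_b]
    return bool(xs_a and xs_b) and max(xs_a) < min(xs_b)

print(is_there("car", "red"))    # True
print(left_of("car", "person"))  # True
```

Even a trivial lookup like this can name objects and answer follow-ups, which is why, as the article notes, passing such questioning alone is no guarantee of intelligence.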