Today's AI is like a child: it can play games, but not much else

(Original title: Today's AI is like a child: it can play games, but it can't do math)

As AI technology develops, almost every new breakthrough prompts fresh worry about how far we are from being ruled by machines. Recent AI news, however, should put us somewhat at ease: for now, these systems are not so different from ordinary children.

"I'm not playing games, I'm learning!" That is what many children say when they are caught sneaking onto the computer. In the AI field, though, it happens to be true. Compared with training on pure strategy, such as move selection in chess, video games do a much better job of simulating the messiness of the real world.

In 2015, Google's AI company DeepMind showed a system that taught itself to play 49 classic Atari games, including Breakout. The list did not include the classic Pac-Man. Pac-Man looks simple, but the strategy behind it is argued to be far more complicated than Go. To play it well, an AI has to handle four things at once: navigating the maze, eating the dots, eating the fruit, and avoiding the ghosts. The real difficulty is making the best choice across all four goals in less than a second. Roughly speaking, it is like asking four people from different departments to each assess the current situation within one second and then agree on how to act.

Previously, the highest known score in the game, 266,330 points, was held by Wilson Oyama. A few days ago, that record was broken by a Canadian startup: the AI it trained reached the theoretical maximum of 999,990 points, a score previously believed to be attainable only by cheating. (A score of 260,000 requires reaching the 35th level. Source: High Score)

Whether the game is Go or Pac-Man, AI keeps getting better at play as time goes by. But can it do as well in other areas?

It can't read the questions or write out the answers: AI falls short in the college entrance exam

When AI and humans sit down to do math problems, AI seems powerless. The technical difficulty, of course, is not really in solving the problem itself, but in communicating with the people who set the question and grade the answer. According to Chen Ruifeng, who leads the development of the college entrance examination robot Aidam, there are three major difficulties for an AI taking the exam. First, the AI must understand the intent of the question and translate it into a precise, machine-readable representation. Second comes logical reasoning: using an existing question bank to infer the best solution path and reach a conclusion. Finally, there is the output: the machine has to convert its reasoning into language a human can understand and write it onto the answer sheet. (A toy sketch of this three-stage pipeline appears a little further below.) (Getting a machine to understand this joke would probably be a bit of a challenge. Source: Disp)

In short, to answer exam questions well, an AI must not only understand the questions and work out the answers, but also present those answers in a form people can read. These difficulties may sound simple, but getting an AI to handle them accurately is anything but easy. According to reports, on June 7 this year Aidam took on the college entrance examination again. This time it answered the mathematics paper of the national liberal-arts track, scoring 134 out of 150, but it still lost to the top human contestants.
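To make those three stages concrete, here is a minimal toy sketch of such a pipeline. It assumes nothing about Aidam's actual implementation: the function names (understand, reason, express), the tiny QUESTION_BANK of solution templates, and the trivial arithmetic "questions" are all illustrative inventions, not real exam machinery.

```python
import math
import re

# Hypothetical "question bank": recognized patterns mapped to solution templates.
# (Illustrative only -- a real exam solver would be vastly more elaborate.)
QUESTION_BANK = {
    "sum": lambda nums: sum(nums),
    "product": lambda nums: math.prod(nums),
}

def understand(question: str) -> dict:
    """Stage 1: translate the question's intent into a machine-readable form."""
    numbers = [int(n) for n in re.findall(r"-?\d+", question)]
    operation = "product" if "product" in question.lower() else "sum"
    return {"operation": operation, "numbers": numbers}

def reason(parsed: dict):
    """Stage 2: pick a solution path from the question bank and apply it."""
    solver = QUESTION_BANK[parsed["operation"]]
    return solver(parsed["numbers"])

def express(parsed: dict, result) -> str:
    """Stage 3: turn the machine's conclusion back into human-readable prose."""
    return f"The {parsed['operation']} of {parsed['numbers']} is {result}."

if __name__ == "__main__":
    question = "What is the sum of 3, 7 and 12?"
    parsed = understand(question)
    print(express(parsed, reason(parsed)))  # The sum of [3, 7, 12] is 22.
```

Each stage maps onto one of the three difficulties listed above, with the first, reading the question precisely, being the one the article calls the primary difficulty.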
This is not the first time AI has taken on standardized tests. As early as 2015, the University of Washington developed an AI focused on solving the geometry portion of the SAT, and its accuracy was a meager 49%. In 2016, Japan's Todai Robot project, after repeatedly failing the University of Tokyo entrance exam, gave up on the attempt and pivoted to a data-analysis business.

All in all, although the scientific and technological community has repeatedly warned us about the impact AI development will have on our society, the technology is still at an early stage. Until AI can communicate smoothly with humans, we do not need to worry about the harm it might do to human society.

Source: Disruptor Daily
