A new study details the advances made by an artificial intelligence (AI) program developed by scientists at IBM whose sole purpose is to best humans in debates.
Project Debater, the AI in development for several years at the tech giant, picks a side and argues its case using a technique known as ‘argument mining’: the machine parses a vast archive of some 400 million news articles, then extracts and links together the most relevant sections of argument on a given subject.
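The study does not include code, but the basic shape of argument mining can be illustrated with a toy sketch. The snippet below is a hypothetical, heavily simplified example, not IBM's actual pipeline: the miniature ‘archive’, the cue-word heuristic and the scoring are all assumptions made for illustration. It scans a handful of sentences, keeps those relevant to the motion, favours ones containing argumentative cue words, and strings the best into a short case.

```python
# Toy illustration of "argument mining": retrieve topic-relevant sentences
# from a corpus and assemble the strongest ones into a short case.
# Simplified sketch for illustration only -- not Project Debater's real system.

import re

# A tiny stand-in for the news archive the real system mines (hypothetical text).
ARCHIVE = [
    "Studies suggest that subsidized preschool improves later school performance.",
    "Critics argue that subsidies are costly for taxpayers.",
    "Research shows early education narrows achievement gaps between income groups.",
    "The weather yesterday was unusually warm for the season.",
    "Experts say access to preschool helps parents return to the workforce.",
]

# Cue words that often signal an argumentative or evidential sentence (assumed heuristic).
EVIDENCE_CUES = {"studies", "research", "experts", "suggest", "shows", "argue", "say"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def relevance(sentence: str, topic: str) -> int:
    """Count how many topic words appear in the sentence (crude relevance score)."""
    topic_words = set(tokenize(topic))
    return sum(1 for word in tokenize(sentence) if word in topic_words)

def evidence_score(sentence: str) -> int:
    """Count argumentative cue words, a rough proxy for 'this looks like evidence'."""
    return sum(1 for word in tokenize(sentence) if word in EVIDENCE_CUES)

def mine_arguments(topic: str, top_n: int = 3) -> list[str]:
    """Rank archive sentences by relevance plus evidence cues and keep the best."""
    scored = [
        (relevance(s, topic) + evidence_score(s), s)
        for s in ARCHIVE
        if relevance(s, topic) > 0          # discard sentences unrelated to the motion
    ]
    scored.sort(reverse=True)               # highest combined score first
    return [sentence for _, sentence in scored[:top_n]]

if __name__ == "__main__":
    motion = "preschool should be subsidized"
    for i, point in enumerate(mine_arguments(motion), 1):
        print(f"Point {i}: {point}")
```

The real system works at a vastly larger scale and with far more sophisticated language models, but the pipeline sketched here (retrieve, score, select, assemble) captures the general idea the researchers describe.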
A new study published in Nature details the power, and limitations, of the system and its underlying architecture, especially when compared against previous ‘triumphs’ of AI over humanity.
Project Debater tested its mettle in a showcase debate against champion debater Harish Natarajan in 2019. Debater successfully formed a coherent, complex argument for why preschool should be subsidized for families, composing opening statements, rebuttals, and a closing summation.
While observers ranked the AI’s performance as decent, it still came up short against humanity’s finest.
“[A] combination of technical advances in AI and increasing maturity in the engineering of argument technology, coupled with intense commercial demand, has led to rapid expansion of the field,” explains argument technology researcher Chris Reed from the University of Dundee.
AIs have surpassed humans at a variety of tasks, including games like chess, Go, and StarCraft, as well as poker. To add insult to injury, on January 14, 2011, the IBM computer Watson soundly beat two human champions on the popular television quiz show Jeopardy!
However, while such games involve a certain degree of high-level strategy formation, debating demands far greater linguistic understanding and mental agility, abilities which, the researchers argue, lie beyond the current generation of artificial intelligence champions, including Project Debater.
“Debating represents a primary cognitive activity of the human mind, requiring the simultaneous application of a wide arsenal of language understanding and language generation capabilities, many of which have only been partially studied from a computational perspective (as separate tasks), and certainly not in a holistic manner,” the researchers explain.
The researchers concluded that, despite many leaps and bounds in technical development, artificial intelligence remains a long way from matching humanity’s best, especially on extremely complex, challenging or ambiguous topics; true free-form debating remains outside the AI’s ‘comfort zone’… for now.
Source: RT