Jumbled-up sentences show that AIs still don’t really understand language

Researchers at Auburn University in Alabama and Adobe Research discovered the flaw when they tried to get an NLP system to generate explanations for its behavior, such as why it claimed different sentences meant the same thing. When they tested their approach, they realized that shuffling words in a sentence made no difference to the explanations. “This is a general problem to all NLP models,” says Anh Nguyen at Auburn University, who led the work.
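For readers who want to see the effect for themselves, here is a minimal sketch of the kind of test the researchers describe: shuffle a sentence's words and check whether a BERT-based classifier's output changes. The specific model and task below are illustrative assumptions, not the authors' actual experimental setup.

```python
# A minimal sketch (not the authors' code) of the word-shuffling test:
# shuffle a sentence's words and compare a BERT-based classifier's
# prediction before and after.
import random

from transformers import pipeline

# Illustrative choice: any BERT-based sentence classifier would do here.
classifier = pipeline("sentiment-analysis",
                      model="textattack/bert-base-uncased-SST-2")

sentence = "the movie was surprisingly good despite its slow start"
words = sentence.split()
random.seed(0)
random.shuffle(words)
shuffled = " ".join(words)

original_pred = classifier(sentence)[0]
shuffled_pred = classifier(shuffled)[0]

print(f"original: {sentence!r} -> {original_pred}")
print(f"shuffled: {shuffled!r} -> {shuffled_pred}")
# If the model truly relied on word order, shuffling should often flip the
# label or cut its confidence; the researchers found it rarely did.
```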

The team looked at several state-of-the-art NLP systems based on BERT, a language model developed by Google that underpins many of the latest systems.
