Task descriptions are ubiquitous in human learning. They are usually accompanied by a few examples, but there is little human learning that is based on examples alone. In contrast, the typical learning setup for NLP tasks lacks task descriptions and is supervised with hundreds and often many more examples.
I will first give an update on our work on Pattern-Exploiting Training (PET). PET mimics human learning in that it leverages task descriptions in few-shot settings by exploiting the natural language understanding (NLU) capabilities of pretrained language models (PLMs). I will show that PET is particularly promising in real-world few-shot settings.
The second part of the talk examines to what extent current PLMs exhibit true NLU. I will introduce CODA21, a new benchmark that we argue tests for true NLU. Finally, I will review our recent work on neurosymbolic models and their potential for NLU at human levels.