Jakub Macina*, Kumar Shridhar*, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan. 2022. Automatic Generation of Socratic Subquestions for Teaching Math Word Problems. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4136–4149, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
We explore the ability of large language models (LMs) to generate sequential questions that guide math word problem-solving. We propose several guided question generation schemes based on input conditioning and reinforcement learning (RL) and find that, on both automatic and human quality evaluations, LMs constrained with desirable question properties generate superior questions and improve the overall performance of a math word problem solver.
Presented at EMNLP 2022 in Abu Dhabi as a main conference paper:
How good are NLP models at solving math word problems? To improve the reasoning of NLP models, our interdisciplinary research at the intersection of #NLP and #Education takes inspiration from scaffolding theory, which supports human learners in discovering the answer on their own. pic.twitter.com/ahsbJ9n9u5
— Jakub Macina (@dmacjam) December 3, 2022