• Instruction Tuning is a paradigm in NLP in which a language model is fine-tuned on a collection of tasks phrased as natural-language instructions, so that it can then perform unseen tasks zero-shot from their instructions alone. The motivation is that LLMs are expensive to train and evaluate, so generalizing from instructions avoids fine-tuning and testing a separate model for every new task; a minimal sketch of the instruction format follows below.
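
Concretely, instruction tuning casts ordinary supervised examples into natural-language (prompt, target) pairs and then fine-tunes the model on them with a standard language-modeling objective. Below is a minimal Python sketch of that formatting step; the NLI example, template wording, and function name are illustrative assumptions, not taken from any of the papers listed here.

```python
# Hypothetical sketch: render a labeled NLI example as an
# instruction-style (prompt, target) pair for supervised fine-tuning.
# The template text and field names are illustrative, loosely in the
# spirit of FLAN-style templates (Wei et al.), not an exact reproduction.

def to_instruction_example(premise: str, hypothesis: str, label: str) -> tuple[str, str]:
    """Cast one NLI example into a natural-language instruction pair."""
    prompt = (
        "Read the premise and decide whether the hypothesis follows "
        "from it. Answer yes, no, or maybe.\n\n"
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Answer:"
    )
    return prompt, label

prompt, target = to_instruction_example(
    premise="A dog is running through the park.",
    hypothesis="An animal is outside.",
    label="yes",
)
print(prompt)   # instruction-formatted model input
print(target)   # expected completion ("yes")
```

The resulting (prompt, target) pairs feed ordinary supervised fine-tuning; only the input formatting changes, which is why a single instruction-tuned model can then be evaluated zero-shot on tasks whose instructions it never saw during training.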

Papers

  • ⭐ Finetuned Language Models Are Zero-Shot Learners by Wei et al. (Feb 8, 2022)

  • ⭐ Training Language Models to Follow Instructions with Human Feedback by Ouyang et al. (Mar 4, 2022)

  • Cross-Task Generalization via Natural Language Crowdsourcing Instructions by Mishra, Khashabi, Baral, and Hajishirzi (May 22, 2022)

  • InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning by Gupta et al. (Oct 26, 2022)

  • Self-Instruct: Aligning Language Models with Self-Generated Instructions by Wang et al. (Dec 20, 2022)

Links