CSEdPad

Investigating and Scaffolding Students' Mental Models during Computer Programming Tasks to Improve Learning, Engagement, and Retention


In this project, we explore several ideas including:

  • Assessing students' self-explanations of solved programming exercises by gauging their similarity to expected expert self-explanations [1]; a minimal illustrative sketch of this similarity idea follows below.
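
The sketch below is only an illustration of the similarity idea above, not the project's actual assessment pipeline: it scores a student's self-explanation against an expert explanation with a simple TF-IDF cosine-similarity baseline in Python (the texts and function names are hypothetical; the assessment models reported in the publications below are more sophisticated).

  # Minimal sketch (assumed baseline, not the project's model): compare a
  # student's self-explanation with an expert explanation via TF-IDF cosine similarity.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  def explanation_similarity(student_explanation: str, expert_explanation: str) -> float:
      """Return a cosine similarity in [0, 1] between two explanation texts."""
      vectorizer = TfidfVectorizer(stop_words="english")
      tfidf = vectorizer.fit_transform([student_explanation, expert_explanation])
      return float(cosine_similarity(tfidf[0:1], tfidf[1:2])[0, 0])

  # Hypothetical usage:
  student = "The loop adds every element of the array to total."
  expert = "The for loop iterates over the array and accumulates the sum of its elements in total."
  print(f"similarity = {explanation_similarity(student, expert):.2f}")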

About the project

The project was originally funded by the University of Pittsburgh through the Personalized Education program (2018-2020); however, it remains active several years after the end of the two original rounds of funding. In the course of the project, we developed and evaluated several versions of the interactive recommender system Grapevine.

Motivation

Teams

University of Pittsburgh Team

  • Graduate researchers: Zak Risha, Kamil Akhuseyinoglu, Arun LR

University of Memphis Team

  • PI: Vasile Rus
  • Graduate researchers: Lasang Tamang, Priti Oli, Jeevan Chapagian, Rabin Banjade


Publications

  • Tamang, L. J., Alshaikh, Z., Khayi, N. A., and Rus, V. (2020) The Effects of Open Self-Explanation Prompting During Source Code Comprehension. In: Proceedings of The Thirty-Third International Florida Artificial Intelligence Research Society Conference (FLAIRS-33), Miami, FL, Association for the Advancement of Artificial Intelligence, pp. 451-456.
  • Alshaikh, Z., Tamang, L. J., and Rus, V. (2020) Experiments with a Socratic Intelligent Tutoring System for Source Code Understanding. In: Proceedings of The Thirty-Third International Florida Artificial Intelligence Research Society Conference (FLAIRS-33), Miami, FL, Association for the Advancement of Artificial Intelligence, pp. 457-460.
  • Ait Khayi, N. and Rus, V. (2020) Attention Based Transformer for Student Answers Assessment. In: Proceedings of The Thirty-Third International Florida Artificial Intelligence Research Society Conference (FLAIRS-33), Miami, FL, Association for the Advancement of Artificial Intelligence, pp. 3-8.
  • Rus, V., Akhuseyinoglu, K., Chapagain, J., Tamang, L., and Brusilovsky, P. (2021) Prompting for Free Self-Explanations Promotes Better Code Comprehension. In: Proceedings of 5th Educational Data Mining in Computer Science Education (CSEDM) Workshop at EDM 2021, Paris, France, June 29, 2021, CEUR.
  • Chapagain, J., Tamang, L., Banjade, R., Oli, P., and Rus, V. (2022) Automated Assessment of Student Self-explanation During Source Code Comprehension. In: Proceedings of The Thirty-Fifth International Florida Artificial Intelligence Research Society Conference (FLAIRS-35), Jensen Beach, FL, May 15-18, 2022, Association for the Advancement of Artificial Intelligence.
  • Rus, V., Brusilovsky, P., Tamang, L. J., Akhuseyinoglu, K., and Fleming, S. (2022) DeepCode: An Annotated Set of Instructional Code Examples to Foster Deep Code Comprehension and Learning. In: S. Crossley and E. Popescu (eds.) Proceedings of 18th International Conference on Intelligent Tutoring Systems, ITS 2022, Bucharest, Romania, June 29 - July 1, 2022, Springer International Publishing, pp. 36-50.
  • Banjade, R., Oli, P., Tamang, L. J., and Rus, V. (2022) Preliminary Experiments with Transformer based Approaches To Automatically Inferring Domain Models from Textbooks. In: A. Mitrovic and N. Bosch (eds.) Proceedings of the 15th International Conference on Educational Data Mining (EDM 2022), Durham, UK, July 24-27, 2022, pp. 667-672.
  • Tamang, L. J., Banjade, R., Chapagain, J., and Rus, V. (2022) Automatic Question Generation for Scaffolding Self-explanations for Code Comprehension. In: M. M. Rodrigo, N. Matsuda, A. I. Cristea and V. Dimitrova (eds.) Proceedings of 23rd International Conference on Artificial Intelligence in Education, AIED 2022, Part 1, Durham, UK, July 27–31, 2022, Springer, pp. 743-748.
  • Oli, P., Rus, V., Banjade, R., Narayanan, A. L., and Brusilovsky, P. (2023) When is reading more effective than tutoring? An analysis through the lens of students' self-efficacy among novices in computer science. In: Proceedings of 7th Educational Data Mining in Computer Science Education (CSEDM) Workshop at LAK 2023, Arlington, TX, March 13, 2023.
  • Chapagain, J., Risha, Z., Banjade, R., Oli, P., Tamang, L., Brusilovsky, P., and Rus, V. (2023) SelfCode: An Annotated Corpus and a Model for Automated Assessment of Self-Explanation During Source Code Comprehension. In: Proceedings of The Thirty-Sixth International Florida Artificial Intelligence Research Society Conference (FLAIRS-36), Clearwater Beach, FL, May 14-17, 2023, Association for the Advancement of Artificial Intelligence.
  • Oli, P., Banjade, R., Lekshmi Narayanan, A. B., Chapagain, J., Tamang, L. J., Brusilovsky, P., and Rus, V. (2023) Improving Code Comprehension Through Scaffolded Self-explanations. In: N. Wang, G. Rebolledo-Mendez, V. Dimitrova, N. Matsuda and O. C. Santos (eds.) Proceedings of 24th International Conference on Artificial Intelligence in Education, AIED 2023, Part 2, Tokyo, Japan, July 3–7, 2023, Springer, pp. 478-483.
  • Lekshmi-Narayanan, A.-B., Oli, P., Chapagain, J., Hassany, M., Banjade, R., Brusilovsky, P., and Rus, V. (2024) Explaining Code Examples in Introductory Programming Courses: LLM vs Humans. In: Proceedings of Workshop on AI for Education - Bridging Innovation and Responsibility at AAAI 2024, Vancouver, Canada, February 26, 2024.
  • Oli, P., Banjade, R., Lekshmi Narayanan, A. B., Brusilovsky, P., and Rus, V. (2024) Exploring The Effectiveness of Reading vs. Tutoring For Enhancing Code Comprehension For Novices. In: Proceedings of ACM/SIGAPP Symposium on Applied Computing, SAC 2024, Avila, Spain, April 8-12, 2024.
  • Oli, P., Banjade, R., Chapagain, J., and Rus, V. (2023) The Behavior of Large Language Models When Prompted to Generate Code Explanations. In: Proceedings of the Workshop on Generative AI for Education (GAIED) at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, December 2023.