Candidate: Eliane Cristina de Araújo
Title: On Automating Formative Feedback to Leverage Programming Learning
Advisor(s): Dalton Serey and Jorge Figueiredo
Location: SPLab Auditorium
Examination committee: Dalton Serey and Jorge Figueiredo (advisors), Patrícia Augustin Jaques Maillard (UNISINOS), Andrea Pereira Mendonça (IFAM), Leandro Balby (UFCG), Wilkerson Andrade (UFCG).
Abstract: As interest in learning how to program grows, instructors who are used to dealing with students' difficulties (such as learning the computational abstraction model, solving programming problems, and the natural frustration students feel when they cannot progress at the expected pace) now also have to worry about scale. Although scalability is not a new issue, the way we deal with it in programming learning has been thoroughly reframed by new technologies, advances in analytics, and the strengthening of pedagogical ideas such as personalized learning. Accordingly, most programming courses rely on automated assessment systems to support practical programming assignments, which are fundamental to the learning process in this discipline. However, the feedback those systems provide about students' difficulties falls short of the enriched feedback instructors give. The problem is that automated assessment systems do not provide adequate feedback for all phases of the programming process, so students cannot be fully autonomous along their learning pathway. This thesis proposal aims to enhance automated feedback, from a formative perspective as described in the literature [Shute 2007], to support two particular phases of the programming process: understanding the problem specification and improving program quality. By the end of this doctoral research, we aim to contribute: (1) empirical evidence that readability and efficiency software metrics can be used to generate feedback aimed at improving code quality; (2) a strategy, based on software metrics, to assess students' code from the instructors' perspective on code quality; (3) a method for delivering automated feedback that clarifies the problem specification; and, finally, (4) a set of lessons learned from providing automated feedback on problem specification understanding and program quality improvement through an automated assessment tool.