I am currently assisting in the development of a proficiency assessment system for the Computer Science and Engineering (CSE) department at Southern University of Science and Technology (SUSTech), led by Associate Professor Shiqi Yu.
The motivation for this project is that, with the widespread availability of LLMs, traditional project-based assessments are becoming less effective at evaluating students' true abilities: students can easily use LLMs to complete projects without genuinely understanding the underlying concepts. We are therefore exploring new assessment methods that better reflect students' proficiency and potential in the era of AI.
We highlight the following features of our work:
- Interview-only assessment: We believe interviews can better evaluate students' true abilities and potential, because interviewers can ask students to explain their thought process, solve problems on the spot, and demonstrate their understanding of concepts. This mitigates the issue of students using LLMs to complete projects without truly understanding the material.
- Peer review: We also incorporate peer review into the assessment process, in which students evaluate each other's work. This not only helps assess students' understanding and communication skills but also encourages collaboration and critical thinking.
- Level-based assessment: We introduce a level-based assessment system that evaluates students' proficiency at levels ranging from basic to advanced. This lets us track students' progress over time and provide targeted feedback and support.
- Comprehensive evaluation: We aim to evaluate students' proficiency and potential across multiple dimensions of computer science, including engineering skills, research ability, communication skills, and business acumen. This comprehensive evaluation helps prepare students for their future careers in the field.
- LLM assistance: We also explore how to leverage LLMs to support the assessment process itself, for example by generating interview questions, providing feedback on students' performance, and analyzing students' strengths and weaknesses (see the sketch after this list). This can improve the efficiency and effectiveness of the assessment system.
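As a concrete illustration of the level-based and LLM-assisted ideas above, the following Python sketch shows one way interview questions could be drafted per proficiency level for the C++ pilot. The level rubric, the `generate_questions` helper, and the `llm` callable are all hypothetical placeholders for illustration, not the project's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical proficiency levels for the C++ pilot course; the real rubric
# used in the project may differ.
LEVELS = {
    "basic": "variables, control flow, functions, and simple I/O",
    "intermediate": "classes, RAII, templates, and STL containers/algorithms",
    "advanced": "move semantics, concurrency, and performance tuning",
}

@dataclass
class InterviewQuestion:
    level: str
    prompt: str

def generate_questions(level: str,
                       n: int,
                       llm: Callable[[str], str]) -> List[InterviewQuestion]:
    """Ask an LLM to draft `n` oral-interview questions for one proficiency level.

    `llm` is any callable that takes a prompt string and returns the model's
    text response; plug in your preferred provider's client here.
    """
    topics = LEVELS[level]
    prompt = (
        f"Draft {n} oral interview questions for a {level}-level C++ student. "
        f"Focus on {topics}. Each question should require the student to "
        f"explain their reasoning aloud rather than produce code."
    )
    raw = llm(prompt)
    # Assume one question per line in the response; a real system would use
    # more defensive parsing (e.g., structured or JSON output).
    lines = [ln.strip("- ").strip() for ln in raw.splitlines() if ln.strip()]
    return [InterviewQuestion(level=level, prompt=q) for q in lines[:n]]

if __name__ == "__main__":
    # Stub LLM so the sketch runs offline; replace with a real API call.
    def fake_llm(prompt: str) -> str:
        return ("- Explain what happens when a std::vector grows.\n"
                "- Compare stack and heap allocation.")
    for q in generate_questions("intermediate", 2, fake_llm):
        print(f"[{q.level}] {q.prompt}")
```

Keeping the LLM provider behind a plain callable is deliberate here: it would let a pilot swap models or prompt templates without touching the assessment logic.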
The primary goal at this stage is to develop a prototype of the proficiency assessment system and pilot it with a small group of students, starting with the C++ course. We are also refining our assessment criteria and methods based on feedback from students and faculty. In the future, we plan to expand the system to more students and add features such as adaptive testing and personalized feedback. We also aim to share our findings and best practices with other institutions and educators in computer science education, contributing to the broader conversation about how to effectively assess students' proficiency and potential in the era of AI.