QuickTA
Computer Science Learning
Contextual Conversational Agent
Overview
QuickTA revolutionizes student support and engagement by harnessing the power of large language models (LLMs). Instructors can program LLMs for specific learning tasks, while students receive personalized assistance through a user-friendly chat interface.
QuickTA goes beyond real-time guidance by collecting valuable feedback and usage statistics, enabling continuous improvement and refinement. Our ongoing efforts focus on meticulous design considerations, exploring diverse use cases, and paving the way for deployment in database management courses.
Details
Features and Functionality
- Instantaneous conversations with a chatbot for personalized assistance in learning computer science topics
- Downloadable conversations for offline learning (see the endpoint sketch after this list)
- Comprehensive logging, analytics and insights on user interactions with the conversational agent
- Topic extraction and sentiment analysis on user conversations
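To illustrate the export feature, below is a minimal Django REST framework sketch of a conversation-download endpoint. The Conversation/Message relations, field names, and URL parameter are assumptions for illustration, not the deployed code.

```python
# Hypothetical conversation-export endpoint (Django REST framework).
# Conversation, Message, and their fields are illustrative, not the real schema.
from django.http import HttpResponse
from rest_framework.permissions import IsAuthenticated
from rest_framework.views import APIView


class ConversationDownloadView(APIView):
    """Return one of the user's conversations as a plain-text transcript."""

    permission_classes = [IsAuthenticated]

    def get(self, request, conversation_id):
        # Users may only export their own conversations.
        conversation = request.user.conversations.get(pk=conversation_id)
        lines = [
            f"[{m.created_at:%Y-%m-%d %H:%M}] {m.sender}: {m.text}"
            for m in conversation.messages.order_by("created_at")
        ]
        response = HttpResponse("\n".join(lines), content_type="text/plain")
        response["Content-Disposition"] = (
            f'attachment; filename="conversation-{conversation_id}.txt"'
        )
        return response
```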
Design
The application consists of three main views: Student, Professor, and Admin. Students can interact with the chatbot and receive personalized assistance. Professors can program the chatbot and view usage statistics. Admins can view all data and usage statistics, and manage user permissions.
Users
Varied user roles with different permissions, access levels, and functionality
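As a sketch of how the Student/Professor/Admin split might be enforced on the API, the permission classes below gate endpoints by role; the `role` field and role names are assumptions, not the actual schema.

```python
# Illustrative role-based permissions for the three views (Student/Professor/Admin).
# The `role` attribute on the user model is an assumption.
from rest_framework.permissions import BasePermission


class HasRole(BasePermission):
    """Allow access only to authenticated users whose role is in `allowed_roles`."""

    allowed_roles: set[str] = set()

    def has_permission(self, request, view):
        return (
            request.user.is_authenticated
            and getattr(request.user, "role", None) in self.allowed_roles
        )


class IsProfessor(HasRole):
    allowed_roles = {"professor", "admin"}


class IsAdmin(HasRole):
    allowed_roles = {"admin"}
```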
Authentication
Privacy-preserving single sign-on (SSO) authentication and authorization using Shibboleth
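Shibboleth SSO is typically terminated at the web server (e.g., Apache with mod_shib), which exposes the authenticated principal as REMOTE_USER for the application to consume. A minimal settings sketch using Django's standard remote-user machinery, which may differ from the exact configuration used here:

```python
# settings.py sketch: consume the REMOTE_USER header set by a Shibboleth-enabled
# web server after SSO. Uses Django's built-in remote-user support; the actual
# QuickTA configuration may differ.
MIDDLEWARE = [
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    # Maps the authenticated Shibboleth principal onto a Django user.
    "django.contrib.auth.middleware.RemoteUserMiddleware",
    # ...
]

AUTHENTICATION_BACKENDS = [
    # Creates/looks up a local user from REMOTE_USER; no passwords are stored.
    "django.contrib.auth.backends.RemoteUserBackend",
]
```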
Modularity
Flexible and extensible architecture to support future development and integration of new LLMs and course configurations
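One way such modularity can be expressed is a small backend interface plus per-course configuration, so new model providers plug in without touching the chat views. The sketch below is purely illustrative; none of these names come from the actual codebase.

```python
# Hypothetical sketch of swappable LLM backends and per-course configuration.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class CourseConfig:
    """Instructor-supplied configuration for one course deployment."""
    course_id: str
    system_prompt: str  # how the instructor "programs" the model
    model_name: str     # which registered backend to use


class LLMBackend(ABC):
    """Common interface so new model providers can be added independently."""

    @abstractmethod
    def complete(self, config: CourseConfig, history: list[dict], user_message: str) -> str:
        ...


_BACKENDS: dict[str, type[LLMBackend]] = {}


def register_backend(name: str):
    """Class decorator: register a provider under the name used in CourseConfig."""
    def wrapper(cls):
        _BACKENDS[name] = cls
        return cls
    return wrapper


def get_backend(config: CourseConfig) -> LLMBackend:
    return _BACKENDS[config.model_name]()
```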
Feedback
Real-time feedback and reports on user interactions with the conversational agent
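As a hypothetical example of the data behind these reports, a per-response feedback model could look like the following; the field names and the `chat.Message` reference are assumptions.

```python
# Illustrative Django model for per-response feedback; not the actual schema.
from django.conf import settings
from django.db import models


class MessageFeedback(models.Model):
    """A student's rating of a single chatbot response, aggregated into reports."""

    RATINGS = [(1, "Not helpful"), (2, "Somewhat helpful"), (3, "Helpful")]

    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    message = models.ForeignKey("chat.Message", on_delete=models.CASCADE)  # hypothetical app/model
    rating = models.IntegerField(choices=RATINGS)
    comment = models.TextField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
```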
Analytics
Keyword extraction and sentiment analysis on user conversations to provide insights on user learning and engagement
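A minimal sketch of this kind of analysis, using NLTK's VADER sentiment analyzer and scikit-learn TF-IDF for keyword extraction; the libraries and pipeline actually used may differ.

```python
# Sketch: average sentiment plus top TF-IDF keywords over a set of student messages.
# Requires nltk.download("vader_lexicon") once before first use.
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer


def analyze_conversations(messages: list[str], top_k: int = 10) -> dict:
    if not messages:
        return {"avg_sentiment": 0.0, "keywords": []}

    # Sentiment: mean VADER compound score across all messages.
    sia = SentimentIntensityAnalyzer()
    scores = [sia.polarity_scores(m)["compound"] for m in messages]
    avg_sentiment = sum(scores) / len(scores)

    # Keywords: highest total TF-IDF weight across the message corpus.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(messages)
    weights = tfidf.sum(axis=0).A1           # total weight per term
    terms = vectorizer.get_feature_names_out()
    keywords = [terms[i] for i in weights.argsort()[::-1][:top_k]]

    return {"avg_sentiment": avg_sentiment, "keywords": keywords}
```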
Notification
Handles user notification preferences, enabling push notifications via various channels
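Purely as an illustration, a preference-aware dispatcher might look like the sketch below; the channel names and preference model are assumptions.

```python
# Hypothetical notification dispatcher that respects per-user channel preferences.
from dataclasses import dataclass
from typing import Callable


@dataclass
class NotificationPreferences:
    email: bool = True
    web_push: bool = False


def send_email(user, message: str) -> None:
    print(f"[email to {user}] {message}")      # placeholder for a real mail call


def send_web_push(user, message: str) -> None:
    print(f"[web push to {user}] {message}")   # placeholder for a push-provider call


CHANNELS: dict[str, Callable] = {"email": send_email, "web_push": send_web_push}


def notify(user, prefs: NotificationPreferences, message: str) -> None:
    """Fan a message out only to the channels the user has opted into."""
    for channel, enabled in vars(prefs).items():
        if enabled:
            CHANNELS[channel](user, message)
```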
User Accessibility
Supports user accessibility by providing a high-contrast mode and text-to-speech functionality
Technologies
Python, Django, Django REST framework, Docker, Nginx, Shibboleth
Responsibilities
- Developed the backend using Python, Django and REST framework
- Deployed the backend in a Docker container behind an Nginx reverse proxy
- Implemented authentication and authorization using Shibboleth
- Architected the model and database schema for the application (an illustrative sketch follows this list)
- Performed keyword extraction and sentiment analysis on user conversations
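The models below give an illustrative shape for such a schema (courses, conversations, messages); the names and fields are a guess at the structure described, not the deployed schema.

```python
# Illustrative Django models for the conversation schema; not the deployed schema.
from django.conf import settings
from django.db import models


class Course(models.Model):
    code = models.CharField(max_length=16)    # e.g., a database management course
    system_prompt = models.TextField()        # instructor-authored LLM instructions


class Conversation(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
                             related_name="conversations")
    course = models.ForeignKey(Course, on_delete=models.CASCADE)
    started_at = models.DateTimeField(auto_now_add=True)


class Message(models.Model):
    conversation = models.ForeignKey(Conversation, on_delete=models.CASCADE,
                                     related_name="messages")
    sender = models.CharField(max_length=16,
                              choices=[("student", "Student"), ("agent", "Agent")])
    text = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
```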
Related Publications
Date of issue: March 2023
Publisher: Learning Analytics and Knowledge Conference 2023 (LAK ’23)
Pre-trained large language models (LLMs) show promise in providing support to students through dialogues. However, current research in LLM-based support has highlighted the need to involve different stakeholders (e.g., instructors, researchers, students) in the design and deployment of these interactions. Based on our formative interviews with students and the prior literature, we are designing a system for instructors to: (1) program LLMs according to the task, (2) provide support to students through a chat interface, and (3) collect student feedback and usage statistics to inform future deployments. In this work, we report on our ongoing development of the system, design considerations, possible use cases of the system, and the path to the deployment of the system for a database management course. We hope that other researchers could build on this work to design systems that enable human-AI collaboration when it comes to improving the learning outcomes of students.