CP3108 SR AI-powered marking #3126
base: master
Conversation
Fix redux state not saving issue
Comment selector can now handle selecting multiple suggestions
…nto eugene-grading-comment-selector
This reverts commit 780a269.
Is there a backend PR to accompany this frontend PR?
This is the accompanying backend PR:
…ne-grading-comment-selector
Pull Request Overview
This PR implements AI-powered marking capabilities for programming questions, allowing instructors to generate grading comments using LLM technology. The changes introduce LLM configuration options at the course level and integrate comment generation into the grading workflow.
Key Changes
- Added LLM comment generation UI in the Grading Editor and Workspace with a comment selector component
- Introduced course-level LLM configuration fields (API key, model, URL, and course-level prompt) in both admin panel and course creation flow
- Created backend integration functions for generating AI comments and saving final feedback (a hypothetical sketch of such a helper follows this list)
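As an illustration of what such an integration function might look like, here is a minimal, hypothetical sketch in the style of a RequestsSaga helper. The endpoint path, function name, payload, and response shape are assumptions for illustration only and are not taken from this PR.

```typescript
// Hypothetical sketch: endpoint path, names and response shape are assumptions,
// not the PR's actual RequestsSaga implementation.
type AiCommentSuggestion = { comment: string };

async function generateAiComments(
  baseUrl: string,
  accessToken: string,
  submissionId: number,
  questionId: number
): Promise<AiCommentSuggestion[] | null> {
  // Only identifiers are sent; the backend is assumed to assemble the
  // course-, assessment- and question-level prompts itself.
  const resp = await fetch(
    `${baseUrl}/grading/${submissionId}/${questionId}/ai_comments`,
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${accessToken}` }
    }
  );
  if (!resp.ok) {
    return null;
  }
  const body: { comments: AiCommentSuggestion[] } = await resp.json();
  return body.comments;
}
```

Keeping the request to bare identifiers like this also lines up with the review feedback below about not assembling prompts in the frontend.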
Reviewed Changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/pages/academy/grading/subcomponents/GradingEditor.tsx | Adds AI comment generation dialog, prompt viewing, and comment selector integration |
| src/pages/academy/grading/subcomponents/GradingCommentSelector.tsx | New component for displaying and selecting from LLM-generated comment suggestions (see the sketch after this table) |
| src/pages/academy/grading/subcomponents/GradingWorkspace.tsx | Passes LLM-related props to GradingEditor for programming questions |
| src/pages/academy/adminPanel/subcomponents/CourseConfigPanel.tsx | Adds LLM configuration UI with fields for API key, model, URL, and course prompt |
| src/pages/academy/adminPanel/AdminPanel.tsx | Includes LLM configuration in default course config and state synchronization |
| src/commons/dropdown/DropdownCreateCourse.tsx | Adds LLM grading toggle and API key field to course creation dialog |
| src/commons/sagas/RequestsSaga.ts | Implements API calls for generating comments and saving final feedback |
| src/commons/sagas/BackendSaga.ts | Updates grading submission to include LLM enablement flag |
| src/features/grading/GradingTypes.ts | Extends grading types to support AI comments and LLM assessment prompts |
| src/commons/assessment/AssessmentTypes.ts | Adds LLM prompt field to programming question type |
| src/commons/application/types/SessionTypes.ts | Defines LLM-related session state and course configuration types |
| src/pages/localStorage.ts | Persists LLM configuration in local storage |
| src/styles/_workspace.scss | Styles for grading comment selector component |
| src/styles/_academy.scss | Styles for LLM prompt dialog and configuration sections |
| src/commons/mocks/GradingMocks.ts | Updates mock data with LLM-related fields |
| src/commons/mocks/BackendMocks.ts | Updates mock backend to include LLM grading flag |
| src/commons/application/reducers/tests/SessionReducer.test.ts | Adds LLM fields to test data |
| src/commons/application/actions/tests/SessionActions.test.ts | Updates test expectations for grading actions |
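For context on the new GradingCommentSelector and the commit "Comment selector can now handle selecting multiple suggestions", here is a rough sketch of how a multi-select suggestion list could be wired. The props, class name, and internals are assumptions, not the PR's actual component.

```tsx
import React, { useState } from 'react';

type Props = {
  // LLM-generated comment suggestions to choose from.
  suggestions: string[];
  // Called with every selected suggestion, e.g. to append them to the grading editor.
  onApply: (selected: string[]) => void;
};

// Hypothetical multi-select comment selector; not the PR's actual component.
const GradingCommentSelector: React.FC<Props> = ({ suggestions, onApply }) => {
  // Track selection by index so identical suggestion texts stay distinguishable.
  const [selected, setSelected] = useState<Set<number>>(new Set());

  const toggle = (index: number) =>
    setSelected(prev => {
      const next = new Set(prev);
      if (next.has(index)) {
        next.delete(index);
      } else {
        next.add(index);
      }
      return next;
    });

  return (
    <div className="grading-comment-selector">
      {suggestions.map((text, i) => (
        <label key={i}>
          <input type="checkbox" checked={selected.has(i)} onChange={() => toggle(i)} />
          {text}
        </label>
      ))}
      <button onClick={() => onApply(suggestions.filter((_, i) => selected.has(i)))}>
        Apply selected comments
      </button>
    </div>
  );
};

export default GradingCommentSelector;
```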
Comments suppressed due to low confidence (1)
src/pages/academy/grading/subcomponents/GradingEditor.tsx:1
- Use strict equality operator (===) instead of loose equality (==) for type comparison.
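For illustration, the suggested change amounts to comparing the question type without coercion. The helper below is a hypothetical example, not code from the PR.

```typescript
type QuestionType = 'programming' | 'mcq' | 'voting';

function isProgramming(questionType: QuestionType): boolean {
  // Strict equality compares without type coercion; loose equality (==) can
  // mask bugs when operand types differ.
  return questionType === 'programming';
}
```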
Co-authored-by: Copilot <[email protected]>
@sentry review
…cademy/frontend into eugene-grading-comment-selector
.llm-grading-config {
Nit: We should avoid using global styles and move to scoped CSS modules (files ending in .module.css/.module.scss)
.llm-grading-config {
  .bp6-form-group {
Abstraction violation. Use $ns so that if the Blueprint version changes, this style won't break
.grading-comment-selector {
Ditto on CSS modules
enableSourcecast: state.session.enableSourcecast,
enableStories: state.session.enableStories,
enableLlmGrading: state.session.enableLlmGrading,
llmApiKey: state.session.llmApiKey,
Why are we storing this in the frontend? Isn't this very dangerous?
readonly llmApiKey?: string;
readonly llmModel?: string;
readonly llmApiUrl?: string;
readonly llmCourseLevelPrompt?: string;
Why should these be under session (or stored in the frontend at all)?
prepend: '',
solutionTemplate: '//This is a mock solution template',
postpend: '',
llm_prompt: null,
Why is this needed? It's all done in the backend, right?
llm_course_level_prompt={llm_course_level_prompt || ''}
llm_assessment_prompt={grading!.assessment.llm_assessment_prompt}
llm_question_prompt={
  (grading!.answers[questionId].question.type === 'programming' &&
    grading!.answers[questionId].question.llm_prompt) ||
  ''
All of this shouldn't be in the frontend to avoid prompt injection. Just use the value (that is already stored in the backend anyway)
The LLM key should never be put in session state, let alone persisted, let alone in the frontend. Paste the plain text once upon course creation; it is sent to the BE and encrypted permanently (you can't view it again). You should not be able to view it again, only change it subsequently via the dialog.
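A minimal sketch of the write-only pattern described above, assuming a hypothetical admin endpoint and field name: the key lives only in transient dialog state, is sent once, and is never stored in Redux or localStorage, nor echoed back by the backend.

```typescript
// Hypothetical write-only key update; endpoint and field name are assumptions.
async function updateLlmApiKey(
  baseUrl: string,
  accessToken: string,
  courseId: number,
  plaintextKey: string
): Promise<boolean> {
  const resp = await fetch(`${baseUrl}/courses/${courseId}/admin/config`, {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json'
    },
    // The backend is assumed to encrypt the key at rest and to omit it from
    // every subsequent read of the course configuration.
    body: JSON.stringify({ llm_api_key: plaintextKey })
  });
  return resp.ok;
}
```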
Description
Creates a "Generate AI comments" field in the grading editor and workspace, and adds fields to enable LLM grading and set a course-specific API key in the course configuration panels.
Type of change
How to test
Nil. I don't know how to test the frontend; please advise on what I should include.
Checklist
Please also advise on how to update the documentation. Thanks.