ICS Assessment GPT

[Screenshot: the "Ontario ICS Assessment" GPT interface in ChatGPT, showing the title "Ontario ICS Assessment", the byline "By Peter Beens", the description "Submit anything and have it graded according to the Ontario ICS achievement chart", the conversation starter "What can this GPT do?", and the message box "Message Ontario ICS Assessment…".]

Introduction

Just for fun, I wanted to see if I could create a custom GPT (Generative Pre-trained Transformer) that could use the Achievement Chart for the Grade 10 Computer Studies course to assess assignments from any of the ICS Computer Studies courses.

The instructions I gave it were:

Analyze and provide feedback (including Suggestions for Improvement) on anything that is submitted based on this rubric (formatted as markdown). Determine which categories would be most appropriate and grade according to those categories. Provide a list of the appropriate categories with a mark for each, and a total grade (e.g., 14 out of 20) and an average.

Here is the rubric: <snip>

As a “conversation starter” I included:

What can this GPT do?

This is the GPT’s direct response when asked via that conversation starter:

As the Ontario ICS Assessment GPT, I am designed to analyze and provide feedback on submissions related to the curriculum of Information and Communication Science (ICS) based on a specific rubric. This rubric evaluates submissions across four main categories:

  1. Knowledge and Understanding – Assessing the subject-specific content knowledge and understanding its significance.
  2. Thinking – Evaluating the use of critical and creative thinking skills and processes.
  3. Communication – Judging the effectiveness of conveying meaning through various forms.
  4. Application – Analyzing the use of knowledge and skills in familiar contexts and their transfer to new contexts.

For each category, there are specific criteria that I use to determine the level of achievement, ranging from Level 1 (50–59%) to Level 4 (80–100%). I can provide a grade for each category based on these levels and offer an overall assessment that includes a total grade and an average.

You can submit any piece of work related to the ICS curriculum, and I will review it according to the rubric, providing detailed feedback on areas such as content knowledge, problem-solving skills, communication clarity, and application of concepts. This could include code reviews, project analyses, conceptual explanations, or any other relevant submissions.
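As a side note, the level-to-percentage mapping the GPT refers to can be expressed in a few lines of Python. The `level_for` helper here is just my own illustrative sketch, not something the GPT generated:

```python
# Ontario achievement levels and their percentage ranges, as described above
LEVEL_RANGES = {
    1: (50, 59),
    2: (60, 69),
    3: (70, 79),
    4: (80, 100),
}

def level_for(percent):
    # Map a percentage mark to an achievement level (hypothetical helper)
    for level, (low, high) in sorted(LEVEL_RANGES.items(), reverse=True):
        if percent >= low:
            return level
    return None  # below Level 1 (under 50%)
```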

Assessment of Essays

I initially tested it with two essays and two programs: one poorly written and one excellent example of each. Following are the assessments of the essays.

Here is the poorly written essay:

The use of AI in the classroom is a hot topic these days. Many peeps believe AI can totally revolutionize education, but others ain’t so sure. One major advantage is that AI can personalize learning, catering to each student’s needs. For example, it can recommend different activities based on how a student learns best. Also, AI can help teachers grade assignments faster, giving them more time to, like, teach and stuff. But some folks worry that AI might replace teachers altogether, which would be, like, totally uncool. Plus, there’s the whole privacy thing, with AI collecting all this data on students. Overall, AI in the classroom is like a double-edged sword, with both pros and cons. But whether we like it or not, it’s probs here to stay. So, like, let’s hope it helps us learn better and not take over the world or something.

…and here is the assessment response:

The submission on the use of AI in the classroom provides a casual and engaging overview of the topic, touching on key points of debate regarding the integration of artificial intelligence in educational settings. Here’s the feedback based on the provided rubric:

Knowledge and Understanding

  • Knowledge of content: The submission demonstrates some knowledge of content by mentioning specific applications of AI in education, such as personalizing learning and assisting in grading. However, it lacks depth in technical terminology and detailed examples. Score: 60–69% (Level 2)
  • Understanding of content: It shows understanding by discussing the significance of AI in education, including potential benefits and concerns. The submission acknowledges the complexity of AI’s impact but could benefit from a more nuanced discussion of how AI functions within these contexts. Score: 60–69% (Level 2)

Thinking

  • Use of planning skills: There’s evidence of planning in how the topic is structured, starting with the potential of AI in education, moving to its benefits, addressing concerns, and concluding with a reflection on its inevitability. However, the analysis could be more detailed. Score: 60–69% (Level 2)
  • Use of processing skills: The submission processes the topic by comparing advantages and disadvantages, but it lacks depth in analyzing how AI might be implemented effectively or how its challenges might be mitigated. Score: 60–69% (Level 2)
  • Use of critical/creative thinking processes: The critical thinking is evident in weighing the pros and cons of AI in education, but there’s room for more creative suggestions on how to balance these aspects. Score: 60–69% (Level 2)

Communication

  • Expression and organization of ideas and information: Ideas are expressed in a conversational and accessible manner, with a logical flow from benefits to concerns. However, the informal tone may not convey the seriousness of the topic effectively to all audiences. Score: 70–79% (Level 3)
  • Communication for different audiences and purposes: The style is engaging for a general audience, aiming to inform and provoke thought. However, it might not be suitable for academic or professional discussions due to its casual tone. Score: 60–69% (Level 2)
  • Use of conventions, vocabulary, and terminology: The submission uses relevant vocabulary (“personalize learning,” “privacy”) but lacks the use of technical terminology and discipline-specific conventions, affecting its effectiveness. Score: 60–69% (Level 2)

Application

  • Application of knowledge and skills in familiar contexts: It applies the concept of AI in a familiar context (education) effectively, discussing potential applications and concerns. Score: 70–79% (Level 3)
  • Transfer of knowledge and skills to new contexts: The submission hints at broader implications of AI but doesn’t fully explore how these insights could be applied to new or varied contexts outside of the classroom. Score: 60–69% (Level 2)
  • Making connections within and between various contexts: It makes some connections, particularly to ethical and privacy concerns, but could delve deeper into connections with social, cultural, and economic issues. Score: 60–69% (Level 2)

Overall Grade and Average

  • Total Grade: The total is calculated by adding the scores at the midpoint of their ranges for simplicity: 12 categories, with most scores in Level 2 (65%) and some in Level 3 (75%), resulting in an approximate total of 810%.
  • Average: 810% / 12 ≈ 67.5%

The submission provides a solid basis for discussion on AI in education, striking a balance between accessibility and insight. However, it could benefit from deeper analysis, more technical language, and a broader exploration of the implications and applications of AI in and beyond the classroom.

Here is the excellently written essay:

The integration of Artificial Intelligence (AI) into the classroom represents a pivotal advancement in education. AI’s capacity to personalize learning experiences is unparalleled, catering to diverse learning styles and individual needs. Through adaptive algorithms, AI analyzes students’ progress and provides tailored recommendations, fostering deeper comprehension and engagement. For instance, AI-powered tutoring systems offer immediate feedback and adaptive learning paths, enhancing students’ academic performance.

Moreover, AI streamlines administrative tasks, enabling educators to allocate more time to impactful teaching practices. Automated grading systems alleviate teachers’ burdens, allowing them to focus on facilitating discussions, fostering critical thinking, and providing personalized support to students.

However, ethical considerations such as data privacy and algorithmic bias must be addressed to ensure equitable access and safeguard student information. Educators must embrace AI as a complement to, rather than a replacement for, traditional teaching methodologies. By leveraging AI’s capabilities judiciously, educators can create dynamic and inclusive learning environments that empower students to thrive in the digital age.

…and here is the assessment response:

The submission effectively discusses the integration of Artificial Intelligence (AI) into educational settings, emphasizing its potential to customize learning and enhance administrative efficiency. Let’s evaluate it according to the provided rubric:

Knowledge and Understanding

  • Knowledge of content: The submission demonstrates thorough knowledge of AI’s applications in education, including personalized learning experiences and administrative efficiencies. (Level 4)
  • Understanding of content: It shows a thorough understanding of the significance of AI in enhancing educational outcomes and addressing diverse learning needs. (Level 4)

Thinking

  • Use of planning skills: The text suggests a strategic approach to integrating AI in education, considering its benefits and potential pitfalls, indicating planning skills with considerable effectiveness. (Level 3)
  • Use of processing skills: It discusses how AI can be implemented to improve learning and administrative processes, showing considerable effectiveness in processing how AI can be applied. (Level 3)
  • Use of critical/creative thinking processes: The submission critically evaluates AI’s role in education, including ethical concerns, showing a high degree of effectiveness in critical thinking. (Level 4)

Communication

  • Expression and organization of ideas and information: Ideas are clearly expressed and well-organized, demonstrating a high degree of effectiveness. (Level 4)
  • Communication for different audiences and purposes: The submission communicates effectively to an educational audience, outlining both benefits and challenges of AI in education, showing considerable effectiveness. (Level 3)
  • Use of conventions, vocabulary, and terminology of the discipline: It appropriately uses technical terms related to AI and education, demonstrating a high degree of effectiveness in using discipline-specific vocabulary. (Level 4)

Application

  • Application of knowledge and skills in familiar contexts: The text applies knowledge of AI to the context of education effectively, showing a high degree of application. (Level 4)
  • Transfer of knowledge and skills to new contexts: It suggests how AI’s capabilities can be transferred to educational settings, showing considerable effectiveness in transferring knowledge. (Level 3)
  • Making connections within and between various contexts: The submission makes significant connections between AI and educational needs, societal implications, and ethical considerations, demonstrating a high degree of effectiveness. (Level 4)

Overall Grade Calculation

  • Categories at Level 4: 7 categories × 4 = 28
  • Categories at Level 3: 4 categories × 3 = 12
  • Total Grade: 40 out of a possible 44 points.
  • Average (Grade/11 Categories): 40/11 ≈ 3.64, which rounds to Level 4 performance overall.

The submission is insightful, demonstrating a comprehensive understanding of AI’s role in education, effective communication, and thoughtful application. It effectively balances the promise of AI with critical considerations of its challenges, positioning it as a tool to enhance, rather than replace, traditional educational methodologies.
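As a quick sanity check, the level-point arithmetic above can be reproduced in a few lines of Python. The list simply transcribes the eleven criterion levels the GPT assigned:

```python
# Criterion levels from the assessment above, in order of appearance
levels = [
    4, 4,      # Knowledge and Understanding
    3, 3, 4,   # Thinking
    4, 3, 4,   # Communication
    4, 3, 4,   # Application
]

total = sum(levels)            # 40 out of a possible 44 points
average = total / len(levels)  # ≈ 3.64
print(f"Total: {total}/44, average level: {average:.2f}")
```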

Assessment of Programs

Following are the two programs submitted and their respective assessments.

# Poorly written Python program to calculate the year a person was born

def main():
    # Prompt user for age
    print("Hey there! Can you tell me how old you are?")
    age = input("Enter your age: ")

    # Convert age to integer
    age = int(age)

    # Calculate birth year
    current_year = 2024
    birth_year = current_year - age

    # Display result
    print("Hmm... Let me do some math here...")

    # Output birth year
    print("You were probably born in the year " + str(birth_year) + ".")

# Call main function
main()

Evaluating the given Python program based on its functionality, clarity, and adherence to programming best practices, here are the assessments according to the rubric categories:

Knowledge and Understanding

  • Knowledge of content: The program demonstrates basic knowledge of Python programming, including taking user input, performing arithmetic operations, and printing output. However, it lacks error handling and does not account for edge cases, such as if a user enters a non-integer value for age. Score: 60–69% (Level 2)
  • Understanding of content: The program’s logic for calculating the birth year is straightforward and correctly implements the basic concept of subtracting the age from the current year. However, the understanding is somewhat limited by not considering variations in the current date or validating user input. Score: 60–69% (Level 2)

Thinking

  • Use of planning skills: The program follows a simple plan to achieve its goal but lacks sophistication in planning for user input validation or exceptions. Score: 60–69% (Level 2)
  • Use of processing skills: The script correctly processes the input to produce an output, but it does so without considering potential errors or the fact that a person’s birth year could vary depending on whether they’ve had their birthday yet this year. Score: 60–69% (Level 2)
  • Use of critical/creative thinking processes: The program does not exhibit significant critical thinking in handling varied inputs or situations, such as leap years, input validation, or user experience improvements. Score: 50–59% (Level 1)

Communication

  • Expression and organization of ideas and information: The program communicates its steps to the user through print statements, but the clarity and effectiveness of communication are hindered by a lack of input validation feedback and potential inaccuracies in the birth year calculation. Score: 60–69% (Level 2)
  • Communication for different audiences and purposes: It communicates in a friendly manner suitable for a general audience but does not employ best practices in programming to ensure clarity and robustness for users with varying inputs. Score: 60–69% (Level 2)
  • Use of conventions, vocabulary, and terminology of the discipline: The program uses basic programming terminology and syntax correctly but does not demonstrate the use of more advanced or nuanced programming concepts such as error handling or data validation. Score: 60–69% (Level 2)

Application

  • Application of knowledge and skills in familiar contexts: The program applies basic programming knowledge to solve a straightforward problem but does so without applying skills that would make the solution more robust or user-friendly. Score: 60–69% (Level 2)
  • Transfer of knowledge and skills to new contexts: The script shows limited ability to transfer programming skills to handle more complex or unexpected situations, such as invalid input. Score: 50–59% (Level 1)
  • Making connections within and between various contexts: There is minimal effort to connect the program’s functionality to broader programming or real-world contexts that would enhance its utility and reliability. Score: 50–59% (Level 1)

Total and Average Score

Calculating the scores:

  • Knowledge and Understanding: 65
  • Thinking: 60
  • Communication: 63
  • Application: 56

Total Score: 244 out of 400

Average Score: 61% (Level 2)

To improve this program, consider adding input validation to ensure the user enters a valid age, accounting for edge cases in birth year calculation, and perhaps including the current date dynamically to adjust the calculation based on the current year. These enhancements would significantly improve the program’s robustness, accuracy, and user experience.

Here is the well-written program:

def calculate_birth_year(age):
    # Calculate the birth year by subtracting the age from the current year
    current_year = 2024
    birth_year = current_year - age
    return birth_year

def main():
    # Welcome message to the user
    print("Welcome! Let's calculate the year you were born.")

    # Continuously prompt the user until a valid age is entered
    while True:
        try:
            # Attempt to convert user input to an integer representing age
            age = int(input("Please enter your age: "))
            # Check if age is negative
            if age < 0:
                print("Age must be a positive integer.")
                continue  # Continue the loop to prompt for age input again
            break  # Break out of the loop if age is valid
        except ValueError:
            # Handle the case where the input cannot be converted to an integer
            print("Please enter a valid integer for age.")

    # Calculate the birth year using the provided age
    birth_year = calculate_birth_year(age)

    # Display the calculated birth year to the user
    print(f"If you are {age} years old, you were born in the year {birth_year}.")

if __name__ == "__main__":
    # Call the main function if the script is executed directly
    main()

This Python script aims to calculate and inform a user of their birth year based on the age they input. Let’s assess it according to the provided rubric.

Knowledge and Understanding

  • Knowledge of content: The script demonstrates thorough knowledge of Python programming concepts such as functions, conditionals, loops, and exception handling. It correctly implements the logic to calculate the birth year from the user’s age. (Level 4)
  • Understanding of content: The script shows a thorough understanding of how to apply programming constructs to solve a practical problem, including input validation and basic arithmetic operations within a user-friendly loop until valid input is received. (Level 4)

Thinking

  • Use of planning skills: The script is well-planned, with a clear separation of concerns (input gathering, input validation, calculation, and output). The continuous loop for input validation until a correct value is provided shows foresight in planning for user interaction. (Level 4)
  • Use of processing skills: The script effectively carries out its plan, with efficient use of functions to organize logic and control flow to handle invalid inputs gracefully. (Level 4)
  • Use of critical/creative thinking processes: The solution to prompt repeatedly in case of invalid input demonstrates creative thinking in ensuring robust user interaction. However, the script assumes the current year is statically set to 2024, which might limit its long-term applicability without modification. A dynamic approach to obtaining the current year could enhance its effectiveness. (Level 3)

Communication

  • Expression and organization of ideas and information: The script communicates its intentions clearly through comments and user-facing messages. The code is logically organized, making its operation understandable. (Level 4)
  • Communication for different audiences: The script uses simple, clear language suitable for its intended audience (users of any technical level) and effective feedback in case of errors. (Level 4)
  • Use of conventions, vocabulary, and terminology: It adheres well to Python coding conventions and uses appropriate terminology, making the code readable and professional. (Level 4)

Application

  • Application of knowledge and skills in familiar contexts: The script effectively applies basic programming concepts to solve the task of calculating a birth year, demonstrating a good grasp of these concepts in a familiar context (date calculations based on user input). (Level 4)
  • Transfer of knowledge and skills to new contexts: While the script is well-adapted to its specific task, it doesn’t explicitly demonstrate transferring these skills to new or varied contexts beyond the core functionality of calculating a birth year. (Level 3)
  • Making connections within and between various contexts: The script connects programming concepts to a practical application (calculating a birth year), but there’s limited exploration of broader connections, such as potential extensions or related applications. (Level 3)

Overall Grade

  • Knowledge and Understanding: Level 4
  • Thinking: Level 3.67 (Averaging Level 4, Level 4, and Level 3)
  • Communication: Level 4
  • Application: Level 3.33 (Averaging Level 4, Level 3, and Level 3)

Total Grade: 15.0 out of 20 (Average Level: 3.75 out of 4)

Suggestions for Improvement

  • Incorporate a dynamic method to determine the current year instead of hardcoding it. This can enhance the script’s longevity and applicability.
  • Explore additional features, such as handling leap years or providing more personalized output based on the birth year.
  • Consider expanding the script to make connections to other relevant contexts or potential applications, such as age verification or historical event comparison.
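The first suggestion is straightforward to implement with Python’s standard library. Here is a minimal sketch of how the hardcoded `current_year = 2024` could be replaced (my own illustration, not the GPT’s):

```python
from datetime import date

def calculate_birth_year(age):
    # Look up the current year from the system clock
    # instead of hardcoding current_year = 2024
    current_year = date.today().year
    return current_year - age
```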

In Summary…

You may have noticed that the last assessment included a “Suggestions for Improvement” section, something I added to the GPT at the last moment.

Overall, I’m quite impressed with what it can do, although I did notice that it wasn’t always consistent with its grading.

In the end, a teacher’s professional judgment has to come first, but I think this can be a real time-saving tool.

I initially created this tool for teachers to try, but I now realize that it’s an excellent tool for students as well, especially now that it incorporates the “Suggestions for Improvement” section.

This tool is built using the achievement chart from the ICD2O Grade 10 “Digital Technologies and Innovations in the Changing World” course, but I feel it’s also quite appropriate for the other ICS courses, such as ICS3C, ICS3U, ICS4C, and ICS4U.

If you wish to use this custom GPT yourself, you will unfortunately need a ChatGPT Plus subscription. The URL for it is https://chat.openai.com/g/g-r6d5SNGnf-ontario-ics-assessment.

Sadly, I’m not in the classroom anymore; otherwise I’d be using it all the time and encouraging my students to use it.

Please let me know if you find this tool valuable! Also, please let me know if there are any improvements I can make or if you spot any problems. You can reach out to me in the comments below or via Twitter at https://twitter.com/pbeens or via Mastodon at https://mstdn.ca/@pbeens.


If you’re interested in any of my other ChatGPT posts, please visit here.
