What I learned when I opened my exam up to Generative AI:
First, some context. I found that during my exams, the boys I teach tended to just get the work done, but their level of enthusiasm for doing it was about what you would expect in any exam.
Throughout the unit of work leading up to the exam, I set out expectations for how we would use ChatGPT to generate code. Is ChatGPT the bee's knees for code generation? Maybe, maybe not. Simply put, it is the option I have always used, so the one I felt most comfortable with. My school doesn't yet offer an enterprise-level AI platform, and once we make our decision on the direction we will take there, I'll switch this unit across to that platform.
So originally the exam was much like any other: there was a stimulus, students interpreted it, and then responded to it. In my case, the stimulus was printed HTML, CSS and JS code. The main part of the task was to interpret the code and create a wireframe diagram of what that code would produce, with annotations explaining how the CSS would style the HTML and the interactivity created by the JS. Relatively speaking, it was a fairly quick exam to mark: I knew exactly what that code looked like and did, so I could quickly identify any errors students made.
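To give a sense of what that kind of stimulus involves, here is a minimal, hypothetical snippet of the sort students might be asked to interpret (not the actual exam stimulus): HTML styled by a small block of CSS, with one piece of JS interactivity.

```html
<!-- Hypothetical stimulus: a heading, a button, and a hidden message -->
<!DOCTYPE html>
<html>
<head>
  <style>
    /* CSS: centre the heading and colour it with a hex code */
    h1 { text-align: center; color: #2E86AB; }
    /* The message paragraph starts hidden */
    #msg { display: none; font-style: italic; }
  </style>
</head>
<body>
  <h1>Welcome</h1>
  <button onclick="toggleMsg()">Show message</button>
  <p id="msg">Hello, visitor!</p>
  <script>
    // JS: show or hide the paragraph each time the button is clicked
    function toggleMsg() {
      var msg = document.getElementById("msg");
      msg.style.display = (msg.style.display === "block") ? "none" : "block";
    }
  </script>
</body>
</html>
```

A student's wireframe for a snippet like this would sketch the rendered page and annotate, for example, that the hex code `#2E86AB` colours the centred heading, and that clicking the button toggles the visibility of the paragraph.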
Moving forward, I decided to allow students to generate their own stimulus. They would be given a brief, much like one a client might provide, and they would write their own prompt. I gave them explicit colour codes to use, each with a hexadecimal code and a name, so that when annotating they would know what the output colours would be. After prompting ChatGPT, the students would generally receive an output of code and would use that code to develop their wireframe. Students had complete control over the way they developed their prompt and, as such, had some control over the output. They were allowed to make revisions to their prompt, but as part of their submission they had to share a public link to the chat, which allowed me to see what stimulus they were working with.
Invigilation was a little more intense than for the previous paper exam, simply because I needed to ensure that students were accessing only the web browser and ChatGPT rather than any other software. I also noticed, after running the exam the first time, that I was able to seat students somewhat closer together: with each stimulus being different, there was no real way for a student to copy another student's work and get the correct answer. Sure, there might have been some elements that were the same, but by and large the output code was so vastly different that it was actually quite cool to see the range of responses to the prompts.
Marking the exams was then a little more challenging on my part, because I no longer had a single stimulus to refer to. But given the way my students engaged with the stimulus they had created, I wouldn't change it.

