ChatGPT – eAA Chair Graham Hudson in conversation with eAA member Paul Edelblut
Following the launch of ChatGPT, we have been asking eAA members for their thoughts on the implications of such developments for assessment, and more specifically e-assessment.
eAA Chair Graham Hudson reached out to eAA member Paul Edelblut for his opinions. Paul is Vice-President of Global Operations for Vantage Laboratories, Pennsylvania, and a long-standing member of the e-Assessment Association. Here, Paul shares his initial thoughts on ChatGPT, GPT-3, and the world of LLMs (large language models).
- “These tools hold some amazing potential and will only continue to improve. As such, it is essential that the assessment community engage and experiment with this technology. As we know, the assessment community is often reluctant to embrace new and different technologies. ChatGPT and GPT-3 cannot be ignored.
- Most of these large language models are built on and learned from zettabytes of previously written text. That human-generated text was largely produced by white male authors. Users of machine-generated text must redouble efforts to check the output for any latent or overt bias.
- I have seen a number of individuals noting a concern that these tools will result in more “cheating”, and faculty will not be able to distinguish authentic work from machine-generated work. That is a glass-half-empty view, and I take a slightly different approach. I would argue this will reinforce the need for secure examinations and higher-quality examinations that draw out complexities the machine-generated text can’t yet model.
- This technology will likely impact classwork done in a non-secure environment, shining a bright light on the value of secure examinations. Everyone in the assessment industry must be prepared to explain differences between summative exam scores and classwork scores because those differences may be large.
- On a webinar today (14 December), the British Council touched on the issues of machine-generated text. They noted that a hybrid solution—with the machines assisting in creation of items or passages and a continued and strong human review—seems to work. We have seen the same with automated marking of scripts, using the machine to do the heavy lifting and the humans to provide a final review.
- I am a HUGE proponent of engaging the students on this topic. One of the university groups I am part of had its faculty members poll their students on their use of ChatGPT and found: 1) the students wanted to know the rules around using it, and 2) a large number reported not finding it helpful. I think this is telling: there are always ways to “cheat” and there will always be “cheaters”, so engaging with our students about their learning and assessments is an essential part of the solution for the majority, who will use machine-generated text appropriately within guidelines.
- Technology always catches up. The site below is a system that can detect machine-generated text. A professor ran 30 responses through it as a test, and the system had 99+% accuracy in determining which responses were autogenerated. https://huggingface.co/openai-detector/
- Language is very complex, and my initial view is that a close read of some machine-generated text by experts will yield insights into the distinction between work generated by a student and work generated by a machine. This, then, puts an added burden and responsibility on teachers, who may not know their students well enough (due to class size) or may not be expert enough to discern the subtleties.
Ultimately, as with all technology, there will be Luddites who refuse to engage, the Icarus crowd who goes too far too fast and gets burnt, and the wise realists who move at a measured pace with validation steps built in.”
Read the views of eAA Board members in this article: AI & ChatGPT: Challenge or Opportunity for e-Assessment?
We want to hear your views – let us know what you think.
We've been speaking to the eAA Board and eAA members to get their views on ChatGPT and the implications of such developments for assessment, and e-assessment.
What are your thoughts? https://t.co/eMlsvveOMe
— The eAssessment Association (@eAssess) December 21, 2022