In the AI and ChatGPT Era, Universities Should Give Oral Exams

Photo: Gorodenkoff (Shutterstock)

Imagine the following scenario.

You are a student and enter a room or Zoom meeting. A panel of examiners who have read your essay or viewed your performance are waiting inside.

You answer a series of questions as they probe your knowledge and skills. You leave. The examiners then consider your preliminary, pre-oral-exam grade and decide whether an adjustment up or down is required.

You are called back to receive your final grade.

This type of oral assessment – viva voce, to use the Latin term – is a tried and tested form of educational assessment.

No need to sit in an exam hall, no fear of plagiarism accusations, and no concerns about students submitting essays generated by an artificial intelligence (AI) chatbot. Integrity is assured in a fair, reliable and authentic format that can also easily be used to assess multiple individual or group assignments.

As services like ChatGPT continue to grow in both capability and usage – including in education and academia – is it high time for universities to revert to the time-tested oral exam?

The rise and fall of oral exams at universities

The oral exam has a history dating back to the ancient Greeks over 2,000 years ago. Philosophers defended their knowledge in the ritual of a public oral defence.

By the 10th century, oral assessment was also an important tool in the development of Muslim law and medicine. Those who were experts in munâẓara (the Islamic term for debates or disputes) were greatly esteemed.

At the medieval University of Paris in the 13th century, students were apprenticed to a master and, when regarded as ready, they took a public viva to graduate.

However, the oral exam experienced a decline as universities began to gravitate toward written assessments in the 1700s.

Academics at the time considered written exams more efficient, with the opportunity to numerically grade students individually. This contrasted with the complicated system of placing students in broad class categories that reflected their performance in oral examinations.

Examining written papers was also a silent process, and gave ample time for examiners to grade in the comfort of their own homes.

Oral exams: Finding renewed relevance as ChatGPT rises

However, there are countries and institutions that still embrace the viva in the contemporary age.

As I explained in my 2018 book, Assessing the Viva in Higher Education, Norway uses the viva in its postgraduate programs, and until recently it was also used extensively in undergraduate education.

High school students must also sit at least one oral exam in a randomly chosen subject, once in Year 10 (junior high school) and again in Year 13 (senior high school).

I video-recorded these pre-viva examiners' meetings, the exams themselves, and the post-exam conversations about grading. Through analysis of spoken and body language, I argue that the viva is a rich form of assessment that takes into account both the quality of the content and students' skill in answering.

It also offers students the opportunity to explain and clarify what they have submitted. This is not possible in a purely written assessment.

Interestingly, in my study I never came across cases of cheating: students mimicking work undertaken by others, concealing crib sheets in their clothes, or writing on their forearms.

Similarly, Ken Purnell, a professor of education at CQUniversity in Australia, suggests how students might be asked to create and verbally share a reflective journal – such as how their learning in educational neuroscience is applied to inform their practice.

Chatbots cannot replicate this sort of task, ensuring the work is authentically the student's own.

Another colleague at my university also recounted how she and her co-lecturers had introduced the viva for the first time.

They assessed 600 oral exams in under a week in a large first-year undergraduate literacy course for pre-service teachers. Apart from there being no integrity issues, lecturers were also rewarded with a weekend free from a pile of papers to mark.

Was it tiring for the examiners? Of course. But it was also fulfilling, as they could observe students turning their thoughts into words.

For the students in my study, the experience was nerve-racking and full of emotion. They remember the viva vividly, including the atmosphere and the questions. Much like in a job interview, they achieved a sense of relief and mastery after completion.

It left a lasting imprint on these students' minds and, to them, it was an opportunity for personal growth.

I argue that it is time to change our conversation to be more about assessment that actually involves a “conversation”.

Writing would still be important, but we should learn to re-appreciate the importance of how a student can talk about the knowledge and skills they acquired. Successfully completing a viva could become one of our graduate attributes, as it once was.

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators and Everything We Know About OpenAI’s ChatGPT.


Stephen Dobson, Professor and Dean of Education and the Arts, CQUniversity Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.
