At a time when so-called generative AIs amaze us with their prowess, how can we correctly assess the achievements of pupils and students? Is the emergence of the ChatGPT conversational agent likely to disrupt assessment practices? Are the issues being raised in a radically new way, or is there ultimately nothing really new under the sun when it comes to grades and assessment judgments?
The significant fact is that generative AIs are capable, from the instructions given to them ("prompts"), of creating text, images, or even music. Machines are taking over what seemed most distinctly our own. Will they not become capable of accomplishing, and better than us, any human cognitive task?
The risk is that these generative tools will be used massively to cheat. If the achievements targeted by educational activities are precisely complex cognitive tasks, the temptation may be strong, for some, to have intelligent machines do what they were supposed to have learned to do during their training, such as writing an essay.
Clearly identify the skills to be assessed
Any assessment carried out outside strict examination conditions, particularly "at home", becomes suspect. Of course, the remedies are fairly obvious: require key assessments to take place in a "closed" environment, and prohibit, or better, regulate and supervise, the possible use of generative AI in examination situations.
However, this possibility of cheating, which is only the modern form of the classic "having someone else do it" through impersonation, must not distract us from the central problem, which remains the same as ever: how can we allow the learner being assessed to "prove themselves", that is, to provide authentic demonstrations of the reality of their achievements?
As in the period before ChatGPT, two conditions are required. The first is to have a precise idea of the educational objective being pursued. In other words, one must be able to define in operational terms the capacity or competence targeted by the educational activity, and consequently by the test or in-class assignment designed to determine whether that objective has been achieved.
Note that it is not enough to name a piece of knowledge; one must also say what "possessing" that knowledge makes the student visibly capable of doing. In concrete terms: how can we clearly distinguish someone who possesses this knowledge from someone who does not?
The second condition, which goes hand in hand with the first, is to find examination "tests", that is to say "tasks", which require the learner to face a situation in which they can, precisely, prove themselves. For example: simplifying fractions, constructing an argumentative text, writing the summary of a philosophical text. Here we come back to the obstacle of possible substitution by a generative AI: won't it be able to prove "me", in my place, if I let people believe that its work is mine?
Create new exercises with AI
Beyond questions of morality and policing, note that this requires the effort, in each case, of identifying the central skill targeted by each cognitive task that can be the subject of school or university learning. It means reasoning in terms of concretely operational capacities, of the order of know-how.
These abilities can be made visible in students through the results they obtain (for example, knowing how to find one's way around a city using a map), rather than through content that can be listed in a syllabus (such as knowing the list of prefectures). This may lead to distinguishing levels of learning objectives and assessment situations, depending on the type of ability at stake.
Rather than lamenting the (only relatively) new possibilities for cheating, the most useful thing may be to ask whether generative AIs could offer prospects for significant improvement in learning, by making intelligent use of the tool that they are, among other things for generating content, illustrations, ideas for structuring courses, or new exercises.
For pupils and students, AI can help implement personalized learning, by offering tailored resources or interactive learning assistants.
Finally, by going beyond the narrow problem of assessment, we could identify avenues of work for pedagogy assisted by generative AI. A first avenue would be to give individuals the means to master digital tools, which are never anything more than tools.
Another avenue would be to try to understand, on this occasion, how logical intelligence works and, more broadly, how thought develops, by examining, behind the problems of algorithms and the workings of technical mechanisms, the strictly ethical issues at stake. This can be done by co-constructing tests with pupils and students. For the impact of generative AI, on assessment as elsewhere, will depend essentially on the use, good or bad, that is made of it.
Author Bio: Charles Hadji is an Honorary Professor (Educational Sciences) at the University of Grenoble Alpes (UGA)
Tags: assessment practices