KnowledgeAccess

Can AI Help Us Detect AI?


Safeguarding RPL from Synthetic Evidence


Following my recent video on AI and Academic Cheating, I've been contemplating the role of artificial intelligence (AI) in vocational education, particularly in the domain of Recognition of Prior Learning (RPL).


In today's discussion, I aim to delve into two increasingly relevant questions:

  1. How do we know if someone's evidence was genuinely created by them or by AI?

  2. Can AI aid us in detecting synthetic evidence?


In the sphere of RPL, our assessment methods do not rely on exams or assignments, but rather on real-world evidence such as documents, videos, project summaries, and workplace samples that demonstrate an individual's application of skills within their job role.


However, AI has advanced to the point of generating highly realistic content. ChatGPT can produce workplace reports in mere seconds, and video-generation tools can create lifelike footage from nothing more than a text prompt and a photo. The prevalent use of AI in generating resumes, cover letters, and LinkedIn endorsements poses a significant challenge for RPL assessors.


Can AI Help Us Detect AI?

Our responsibility lies in ensuring that evidence meets the stringent criteria of authenticity, validity, sufficiency, and currency. If the evidence does not genuinely reflect an individual's skills or was not created by the candidate, the assessment cannot be considered valid.


So, can AI help us detect content that was created by AI? The answer is both yes and no.


Several emerging tools, such as Turnitin's AI detector, GPTZero, and Writer.com's detection tool, are designed to assess the likelihood that text has been generated by AI, providing a score to indicate its origin. However, these tools are not infallible, occasionally misidentifying well-written human reports as AI-generated or failing to detect lightly edited AI-generated text.
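The way these detectors report their results can be pictured as a simple scoring workflow. The sketch below is purely illustrative: the function name, thresholds, and verdict wording are my own assumptions, not the actual interface of Turnitin, GPTZero, or Writer.com. It shows why a single likelihood score should route borderline cases to human review rather than drive an automatic decision:

```python
# Illustrative sketch only: real detectors expose their own interfaces.
# This models the *kind* of output they return -- an AI-likelihood score.

def classify_ai_likelihood(score: float) -> str:
    """Map a detector's AI-likelihood score (0.0-1.0) to a cautious verdict.

    The wide 'inconclusive' band reflects the false positives and false
    negatives described above: mid-range scores should trigger assessor
    review, never an automatic pass or fail.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score >= 0.85:
        return "likely AI-generated - assessor review required"
    if score <= 0.15:
        return "likely human-written - still verify context"
    return "inconclusive - rely on assessor judgement"

# A lightly edited AI report often lands mid-range.
print(classify_ai_likelihood(0.55))  # inconclusive - rely on assessor judgement
```

The design point is the middle band: because these tools misfire in both directions, a responsible workflow treats their score as one input to the assessor's judgement, not as a verdict.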


Similarly, while platforms like Sensity AI, Resemble Detect, and Hive Moderation are starting to enter the education space for videos and audio, they still lack widespread usage in RPL due to their current inconsistency.


Amidst the limitations of AI detection tools, the critical judgment of the assessor remains paramount. Assessors are uniquely positioned to evaluate evidence within its contextual framework, determining its alignment with an individual's job role and communication style, and assessing its specificity and personalisation. These evaluations align with the principles of assessment—fairness, flexibility, validity, and reliability—which hinge on human expertise. While AI may aid in workflow and consistency, the ultimate decision rests with the assessor.


Moving forward, there are four essential steps to address this challenge:

  1. Updating candidate declarations with clear guidance on AI use, requiring transparency and justification.

  2. Supporting assessors in identifying potential red flags, such as generic wording or inconsistent language use.

  3. Exploring how AI can bolster assessment without supplanting it, thus upholding assessor control.

  4. Maintaining an open dialogue with candidates, as a brief conversation can often reveal more about a candidate's competence than a polished document.


While AI brings benefits such as streamlined administration, improved consistency, and reduced bias when utilised correctly, its adoption in RPL necessitates caution to preserve trust and authenticity. The aim is not to eliminate AI usage but to employ it responsibly without compromising the integrity of our assessments.
