
AI and Academic Cheating

Updated: Apr 30

Recently, the Australian Skills Quality Authority (ASQA) released its April 2025 edition of ASQA IQ, which explored some critical issues facing our sector, including academic cheating, the emergence of contract cheating services, and the appropriate and responsible use of Artificial Intelligence.


It got me thinking about how these risks and opportunities apply specifically to the context of RPL. I thought I'd share a few reflections.


ASQA rightly points out that academic cheating is a growing regulatory risk, especially with the rise of AI tools and contract cheating services.


In traditional training, cheating might be detected in written assessments or online exams, but the risks in RPL are slightly different.


In RPL, we rely heavily on prior documents, work portfolios, experience claims, and third-party evidence. That creates some unique vulnerabilities:

  • Candidates might submit work samples they did not actually produce.

  • They might use AI to generate evidence that looks credible but doesn't reflect genuine workplace performance.

  • Third-party reports could also be fabricated or manipulated.


It's not just about plagiarism; it's about the authenticity of experience. This challenges the core of what RPL is meant to do: validate real-world skills and knowledge.


ASQA shares helpful tips for detecting academic cheating, like checking document metadata, looking for inconsistencies, and conducting supplementary oral questioning.
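For assessors comfortable with a little scripting, the metadata check can be partly automated. As a rough illustrative sketch only (not an ASQA tool, and no substitute for professional judgement): a .docx file is simply a zip archive whose docProps/core.xml part records the author and the creation and modification timestamps.

```python
# Illustrative sketch: read the embedded metadata of a .docx file.
# A .docx is a zip archive; docProps/core.xml holds author/timestamp fields.
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by Office Open XML core properties.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def read_docx_metadata(path_or_file):
    """Return the creator, last modifier, and created/modified timestamps."""
    with zipfile.ZipFile(path_or_file) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))

    def text(tag):
        el = root.find(tag, NS)
        return el.text if el is not None else None

    return {
        "creator": text("dc:creator"),
        "last_modified_by": text("cp:lastModifiedBy"),
        "created": text("dcterms:created"),
        "modified": text("dcterms:modified"),
    }
```

A portfolio document supposedly compiled over several years, yet created minutes before submission or authored under an unfamiliar name, warrants a follow-up conversation, not an automatic conclusion, since metadata can be innocently altered by copying or re-saving files.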


In the context of RPL, that might mean:

  • Conducting targeted competency conversations, not just reviewing evidence on paper, but discussing how and when skills were demonstrated.

  • Cross-checking third-party reports against actual project outputs, KPIs, or other workplace records.

  • Being alert to language that sounds artificial or disconnected from the candidate's background or sector.


And sometimes, simply asking candidates to discuss how they achieved an outcome can reveal whether they truly possess the competence they claim.


Another point ASQA highlighted is the evolution of contract cheating services; some now even offer to complete or mark assessments on behalf of providers.


While this might seem distant from RPL, being vigilant is essential.
There is growing evidence that:

  • Some services offer pre-written RPL portfolios.

  • Some resume builders offer to fabricate professional experience that looks tailored to specific qualifications.

  • In some industries, fake references and work histories are becoming easier to purchase online.


For RPL assessors, this reinforces the need to triangulate evidence: using multiple methods to confirm competence rather than relying solely on paper or document submissions.

ASQA also discusses how providers can manage AI to safeguard integrity, while recognising that AI also offers potential benefits.


In RPL, AI could have positive roles, but only if used appropriately:

  • Helping candidates organise their portfolios.

  • Reducing administrative barriers for assessors by pre-sorting evidence (without making judgements).


But crucially, as ASQA reminds us:

  • AI should never replace human assessment decisions.

  • RTOs must comply with the Principles of Assessment, particularly validity and authenticity, and the Rules of Evidence.

  • Assessments must still be based on genuine performance that reflects current, sufficient, and authentic capabilities.


In short, AI can help, but it cannot decide. Only a trained assessor, applying professional judgement, can determine competence.


Reflecting on ASQA's April 2025 update, I realised that the RPL space carries unique risks, but also opportunities for thoughtful innovation.


If we stay grounded in our compliance obligations, remain vigilant to authenticity risks, and use emerging tools like AI ethically and transparently, I believe RPL can continue to deliver strong, trusted outcomes, supporting learners and employers alike.

 
 
 