Video—Understanding assessment

ASQA has collaborated with stakeholders across the VET sector to produce a video exploring assessment-related issues.

The video features a round-table discussion involving ASQA auditors and VET leaders – Andrew Shea from the Independent Tertiary Education Council Australia (ITECA) and Nita Schultz from the Victorian TAFE Association.

The video on assessment is conversational in style. It is available in full length and in stand-alone chapters, and also as an audio file for those who prefer podcasts.

Transcript

    Andrew Shea:

    Welcome everyone to this discussion brought to you by the Australian Skills Quality Authority, or ASQA for short. My name's Andrew Shea. I'm the CEO of two enterprise-based training organizations. I'm a non-executive director with the Independent Tertiary Education Council Australia. I'm also chair of PwC's Skills for Australia, their education IRC. I'm joined today by Nita Schultz. Nita is the interim executive director with the Victorian TAFE Association. She has a long, distinguished career across TAFE providers as well as multidisciplinary providers. Nita, thanks for joining me today.

    Nita Schultz:

    Oh, you're welcome. And thanks for the opportunity. I'm bringing my experience in VET as a teacher and a manager and in my current role, and I'm looking forward to today's discussion.

    Andrew Shea:

    As you know, Vocational Education and Training is part of a competency-based system. As part of that system, the assessment process, ensuring an individual is competent in all the components that they need for a job role, is integral. We talk about the assessment process in terms of the rules of evidence and the principles of assessment, and we'll explore those today. I'm joined today by Judith and Ian, who are regulatory officers from the Australian Skills Quality Authority. We all know that as part of the audit process, assessment is a key component, and RTOs often have challenges in meeting the requirements of the training package with the assessment products that they actually use. Welcome, Ian and Judith; I'm really looking forward to talking to you about best practice in assessment and what you've found from the many audits you've conducted.

    Ian:

    Thank you.

    Judith:

    Thank you.

    Andrew Shea:

    In this section we're going to explore the common issues that are found at audit regarding assessments. I'll hand over to Nita for the first question.

    Nita Schultz:

    Thanks. So when we think about the assessment system and the materials and the tools, it's a big topic, and more than we can probably fit into today's discussion. But as experienced auditors, I'd really like to get your insights into the key issues and noncompliances that you find relating to assessment.

    Judith:

    Nita, there are two common areas where we find noncompliance related to assessment at audit. The first relates to the design of assessment tools and the second relates to the implementation of assessment, or how those assessment tools are used. So I'll speak first about the design of the assessment tools. A common deficiency that we see when we're reviewing assessment tools, and also completed assessments, is that the assessment tools don't ensure that all of the training package requirements are addressed. This includes the elements and the performance criteria for specific units of competency and the associated performance evidence and knowledge evidence requirements, and that the assessment tools do not allow for all of the assessment conditions specified in the training package for the relevant unit to be addressed.

    We also find, too, that sometimes the assessment tools don't allow for the principles of assessment to be adhered to as part of the assessment process. That is, they don't include sufficient information to ensure that assessment is fair, that it's flexible, that it's reliable and that it will gather valid evidence through the assessment process. And we also often find that the assessment tools don't allow for the collection of sufficient and valid evidence of the individual's skills and their competency. So those are the key areas where we see deficiencies with regard to the design of the assessment tools.

    Ian:

    And as Judith mentioned, the second common area is in the RTO's implementation of its own assessment system and the use of its own tools. At audit, we often find that the tools, the assessment tasks, haven't been completed in full, yet the students have been deemed competent for all the knowledge and skills requirements. There's often an over-reliance on the use of third parties. The system that the RTO has designed doesn't provide both parties with a full, complete and clear set of instructions so that each party knows their roles and responsibilities. And the evidence that's submitted by third parties is often not checked properly and validated for authenticity and for completeness.

    As Judith mentioned, there are issues with poorly designed assessment tools, and one of the key things we find is that the assessment doesn't actually assess a student for consistent performance over time and across a range of contexts. One of the issues that we often find too with the assessment tools is that they don't include clear instructions, and we'll talk about that a little bit later, for both the learner and for the assessor who's conducting the assessment. With the use of simulated assessments, the issue behind noncompliant findings is generally that the simulation isn't realistic, and so the evidence collected through that process can't be relied upon.

    The industry suffers a little bit from plagiarism within the sector, and we expect RTOs to have a system in place to authenticate the student's work. That's an area that's often lacking in the evidence that's presented to us at audit. And the last one I've got is the observation checklists that are used by RTOs: they often consist of a set of tick-and-flick boxes that don't provide sufficient information for the assessor to make an accurate assessment of a student's competence. So they're the key issues that we find are lacking when we go out to audit and look at the implementation of the RTO's own assessment tools.

    Andrew Shea:

    In the next section we're going to explore the key documents that form part of the assessment system. Ian, can you explain to me what are the key things you expect to see as part of an assessment tool when you conduct an audit?

    Ian:

    The assessment tools generally comprise some form of a workbook for the learner to use and complete. There'll be a separate document we would expect to find for the assessor to use, some form of assessment guide. Now, both documents need to include a set of instructions for the conduct of the assessment activity. The sorts of things that we would expect to see to instruct both parties, the learner and the assessor, are details about the actual task that's to be performed, the context of the assessment, the assessment conditions that it's required to be performed under, if there are any, and the evidence that must be gathered at the end of the process for a decision to be determined.

    With the assessor's tool, we would expect to see some form of guidance in relation to the suggested answers: a marking guide or benchmark answers that provide the assessor with the appropriate guidance to make a judgment of competence. That encourages consistency where there are multiple assessors operating out of the same organization and looking at a variety of different student responses. The other types of documents that we would ask for in an audit would be a mapping tool, or some form of documentation that explains and identifies how the assessment tasks meet all the requirements of the training product, the knowledge requirements and the performance skills.

    We look for some form of documentation, assessment policies and procedures, that drives the process the RTO follows in conducting its assessments, [inaudible 00:08:39] to training and assessment strategies, which provide us with the framework that the organization is meant to be following when delivering its training and assessment. And the other thing that we look for in the vocational space is some form of a resource checklist. Oftentimes these are hands-on courses that do require a vast variety of resources for the training and for the assessment. So we'll look for some sort of identification that the organization has documented, so that we can then follow that through to ensure that they've actually got access to the resources they need for delivery.

    Nita Schultz:

    Can I just follow up on that question about the resources? Often the assessment and the training occur within a workplace situation, and so the resources may not be the resources of the RTO per se, but the resources of the workplace. So in those scenarios, what do you look for in terms of evidence of the resources?

    Ian:

    We would look to see that the organization has access to the resources that they need, when they need them. So some things they might be able to buy in for the actual assessment process or for the delivery of that particular unit, but we wouldn't necessarily expect them to have those purchased and permanently available. So the key thing is that they need to have them ready to go when they're needed.

    Andrew Shea:

    So regarding the mapping itself, I understand the standards don't prescribe a specific way of mapping assessments against the training package. However, RTOs would reasonably be expected to demonstrate how each requirement is being met. What are some of the tools that ASQA provides to give guidance to RTOs on best practice with assessment tools and how they can almost check themselves to ensure that they're on the right path?

    Judith:

    ASQA has developed a document called the Guide to developing assessment tools, which is available on our website. That covers the process from considering the initial requirements in the training package through to actually developing a full suite of assessment tools for implementation.

    Andrew Shea:

    Sure. And regarding the mapping itself, what would you expect to see, or what do you see, from RTOs that are able at audit to clearly say, "Here's how I'm comfortable that we're meeting all these requirements"?

    Judith:

    The process of mapping is useful in that it allows the RTO to align its assessment tools or tasks to the requirements of the training package and to be able to demonstrate that. However, there is no requirement to produce a mapping document per se, and many RTOs do demonstrate in other ways how their assessment tasks actually meet the requirements of the training package. For example, they may notate onto the assessment tool itself which aspects of the unit of competency the actual assessment task is addressing. So rather than a separate mapping document, they may have recorded there which requirements that particular task addresses.

    Ian:

    And some of the purchased resources that you can buy will include a mapping guide. But it's the responsibility of the RTO to contextualize any documentation they purchase to ensure it meets all the requirements for their particular delivery mode.

    Nita Schultz:

    So from a mapping perspective, if they were using a mapping tool with those third-party resources, they should revisit the mapping tool once they've contextualized the resources.

    Andrew Shea:

    Our next section is going to focus on the instructions given to assessors and students regarding assessments. Ian, you mentioned that there can be some issues and noncompliances found regarding the instructions for students. What do you see as key and important regarding the principles of assessment when looking at instructions for students, and for trainers as well?

    Ian:

    Well, to meet the principle of fairness, students need to be provided with clear instructions so they know when they're being assessed summatively, what they need to demonstrate, and what evidence they need to submit. The details that should be included need to cover: is it an open- or closed-book assessment task? The location of the assessment: is it going to be conducted in class or out of class? The time allowed to complete each task, and the expected response criteria. Now, what I like to see when I go and do an audit is some form of guidance. Does the question require the student to provide a 100-plus-word response, or would a 10-word response cut the mustard? So, some form of guidance for the students to know what they're required to provide as their response to the task.

    Then the assessment conditions. Now, in some areas there's a requirement that an assessment be conducted in a regulated facility, and that could be, for instance, an early childhood education facility. So they need to comply with that requirement. Some form of a declaration of authenticity, and this is to counter, or partially counter, some of these instances of plagiarism. And are students required to answer all questions correctly? Are they required to get 100% on every question, or is some other level of response acceptable? And if they must achieve 100%, what are the options for reassessment for any questions, for instance, that they are unable to answer correctly on the first go?

    Nita Schultz:

    So while you weren't referring there, I think, directly to graded assessment, there are some examples where graded assessment is used. So in terms of the instructions to the learner, is it satisfactory for the instructions to say, with graded assessment, that if you get 70% correct then your answer is satisfactory?

    Judith:

    To be clear, competency is not aspirational, and if less than 100% is going to be considered acceptable, the assessment system needs to be very clear as to what is required from the learner in order to achieve competency. And the benchmark at all times is really the training package requirements: ensuring that the training package requirements have been met through the assessment process, as reflected in the evidence collected from the learner, so that there's sufficient and valid evidence to support the competency judgment.

    Nita Schultz:

    Thank you.

    Andrew Shea:

    So I imagine, Judith, where there may be RTOs that have a pool of assessments, or certain questions come from a question bank, the onus is on the RTO to ensure, through a mapping document or an equivalent, that no matter which questions a student answers, they've met the requirements of the training package if it is going to be less than 100%.

    Judith:

    Yes, that's correct.

    Ian:

    To meet the principle of sufficiency, the assessor needs to be given clear instructions regarding what evidence they need to gather and record in order to form a judgment of competence. So the tools that are provided, and the instructions inherent within the tools, need to be very clear about that process so that, for instance, if the organization is conducting its validation, it will be able to go back and look at the actual instructions that were there at the time and match that against the implementation of the tool as part of its own internal review.

    Andrew Shea:

    As part of that validation process, for those who have been around a while in the VET sector, we may previously have referred to it as moderation. So how can an RTO actually quality assure that the assessors have used the tool as intended and that it is collecting the evidence required? What do you see at audit about how RTOs are going about that?

    Ian:

    One of the best practice models that I've seen is where they have a person allocated or nominated within the organization whose role, in part, is to check the completed assessment documentation prior to a decision being made and signed off, just to ensure that the tool has been fully completed by both the learner and also by the assessor in completing their responsibilities.

    Judith:

    One of the identified risks within the Vocational Education and Training sector is that students, or former students, would be issued with certification documentation without having met all of the assessment requirements. So I would certainly agree with Ian that review of completed assessment evidence prior to the issue of AQF certification documentation is crucial to ensure that that doesn't occur. And also for new assessors coming into an organization who may be unfamiliar with the assessment tools used by the organization, some regular checks early on in their experience of assessing could be useful, as could mentoring in terms of how to use the tools. Because often when new assessors come into an organization, that's when we can find that ... well, you may find that there are deficiencies in the instructions that are provided for assessors. So that checking process can be beneficial on many fronts.

    Andrew Shea:

    As part of the Standards for RTOs, systematic validation is required. We've come to the end of the five-year cycle where all training products needed to be validated. What evidence is ASQA looking for regarding that validation when you actually conduct an audit?

    Judith:

    ASQA actually has a fact sheet on its website about conducting validation that outlines our requirements in relation to that. We would want to see evidence of the validation process: who was involved in that process, what was reviewed, which assessment tools were reviewed, which completed assessments were also reviewed, what the findings from that validation process were, and any action that's been taken by the RTO, or intended to be taken by the RTO, as a result of that validation process.

    Ian:

    And one of the starting points we would go to would be the RTO's own policies and procedures for conducting validation, to see that they're following that process.

    Nita Schultz:

    So Ian, reflecting on your comment earlier about a word count potentially being described in the instructions of an assessment, we need to also ensure that there is enough of a response, as well as words on the page. So what advice would you give in relation to that?

    Ian:

    Okay. The actual marking guide won't provide you with a suggested answer running to 100 words or in excess of 100 words; the word count is a bit of a framework to inform the learner as to how much of a response they need to provide. But just because you put 100 words down on a page doesn't mean to say that's sufficient, because the assessor then needs to use their own knowledge and skills in that area to determine: was that a quality response? Does it cover off all the issues to demonstrate the knowledge requirements that the student has to have?

    Nita Schultz:

    So the assessor guide would give them some dot points, for example, some indication of what content would be covered in a response that would be competent and meet the requirements.

    Andrew Shea:

    I guess that emphasizes the importance of assessor guides having those benchmarks and expectations, and, I suppose from a validation perspective, of ensuring there is consistency across assessors. Those tools are really important, and where an RTO can provide those at audit, I imagine it gives you comfort that they have invested the time in ensuring there is that consistency of outcome.

    Ian:

    It certainly makes our job a little easier when we're reviewing completed assessments to form our own opinion as to whether or not the assessment system is working, when we can fall back on the actual suggested answers to see whether the students have been assessed correctly. That's one of the issues that will be found with clause 1.9 in the standards: where students have been marked off and their answers have been accepted, and it's quite clear from the marking guide that they haven't provided the correct response or an acceptable response.

    Andrew Shea:

    Our next section focuses on assessments that are conducted in the workplace. I'll hand over to Nita for the first question.

    Nita Schultz:

    Judith, some RTOs deliver and collect evidence in the workplace, and that often happens with trade apprenticeships, in the cookery area, in early childhood, and many others. How does an RTO develop an assessment tool and outline the context and the conditions of the assessment while still enabling that tool to be fair and considerate of the individual needs of the assessment process?

    Judith:

    Nita, as you have correctly identified, many training packages do actually specify what we refer to as the context and the conditions of the assessment: the context being where, or the environment in which, assessment must occur, and the conditions of assessment being the characteristics of that environment. So for example, in relation to hospitality qualifications, there are often requirements around staff-to-customer ratios. And in relation to early childhood, the context of assessment for some units of competency specifies that assessment must occur in a regulated education and care service, and the conditions of assessment may specify, for example, that the candidate must work with young children of particular ages. So it is very important in designing and developing assessment tools that the RTO is cognizant of those requirements.

    Where assessment occurs within the workplace, the assessment tool or the assessment system must allow the assessor to actually demonstrate that those requirements have in fact been met. And where there are, I guess, multiple types of environments in which students may undertake their assessment, or a simulated environment, there must also be evidence that that simulated environment or the alternative environment has actually satisfied the context or the conditions. Earlier, Ian referred to realistic simulation. So where there's flexibility for assessment to occur outside of the workplace, but in an environment that simulates the workplace, it may be important for the registered training organization to involve industry in determining whether or not that simulated environment is in fact realistic and does in fact simulate realistic industry conditions.

    Because at the end of the day, that's what many of these training products are actually preparing students for: to work in industry, to work in the real environment, to be able to perform tasks in the real environment. So the assessment environment does actually need to reflect those industry requirements, and the assessment system needs to gather evidence that those requirements have in fact been addressed through the assessment process.

    Some of the common issues that we see as auditors with regard to gathering evidence in the workplace include inadequate workplace inspection, or selection of equipment, prior to conducting the assessment. That is, the workplace environment doesn't necessarily have all of the resources that are required, or the resources are not of the current industry standard. We also sometimes see instances where the workplace supervisor is actually signing off students as competent. That in fact is the role of the assessor; evidence gathered in the workplace may form supplementary evidence, but it is the qualified assessor who should in fact be making that determination of competence, based on the evidence that may be gathered through the workplace and through other assessment tasks completed by students.

    Andrew Shea:

    Thanks Judith. So at audit you would expect to see the RTO's evidence of where they have deemed a workplace suitable for that assessment, including having all the different components the student needs to use to demonstrate the skills and knowledge?

    Judith:

    Yes, that's correct. And that should be determined before the assessment occurs. In Nita's opening statements, she referred to apprenticeships and traineeships, I think. So with regard to, I guess, students in the workplace, that assessment of the workplace is actually often made before the student even commences training. But one of the issues with assessment is actually ensuring the availability of those resources for assessment purposes. So part of that process of reviewing the resources that are available for the purpose of both training and assessment is identifying when those resources are available, and confirming that they will be available at the time they are required for assessment, and in sufficient quantity to actually complete the assessment as well.

    One example that I can give from the early childhood area is in relation to the early childhood qualifications where, as I mentioned, students are often required to demonstrate their competency in regulated education and care services. Not all regulated education and care services may have, or be able to provide students with, access to children of the full age range required to satisfy the requirements of some units of competency. So it may be, for example, that a student needs to complete that unit of competency, or complete assessment tasks related to that unit of competency, across multiple workplaces in order to meet the requirements of the unit.

    Andrew Shea:

    The next topic we're going to explore is observable behaviors and how we collect evidence towards those. Ian, we often hear the term observable behaviors. When it comes to a technical qualification such as painting and decorating, or a trade, quite often with the practical checklists an assessor can say that a student did X, meeting certain requirements. It becomes more difficult when we're talking about soft skills such as communication. What have you seen, and what evidence do you expect to see, from an RTO or an assessor actually collecting evidence or completing a practical assessment against those soft skills such as communication?

    Ian:

    The assessment tools need to be designed to include evidence criteria, and this enables a valid judgment that the required quality and quantity of performance has been met. These are the assessment decision-making rules. These evidence criteria provide certain benchmarks that inform what is required as a response from the student. It could be a word count, as we've talked about previously, of 100 to 200 words; in some cases it identifies the number of occasions a task needs to be performed. In the hospitality area, it refers to having 40 service periods as part of the workplace arrangements. So there are criteria and benchmarks that need to be included that meet the requirements of the training package.

    Observable behaviors will generally be evidenced and recorded via some form of assessment tool where the assessor needs to watch the student perform the actual task and then make their judgment of competence. Referring to the example that you provided about the soft skills versus the technical skills: when developing an observation checklist to observe a learner painting, an assessor who is skilled and experienced with current industry skills would develop an observation checklist that outlines the things they would expect to see in conducting the task, from setting it up right through to completing the task and packing up and cleaning up. So these are, as you mentioned, the technical skills, the tasks that can be identified through a checklist. The ones you mentioned that are a little bit more difficult to identify in that process would be the soft skills: the customer skills and communication skills that often appear as a requirement in a training package.

    So when developing an observation checklist to assess soft skills such as communication, the assessor needs to consider when those skills would be used during the observation, and they'll need to create an assessment task around those skills. This identifies the context of the task that's to be demonstrated. For example, with communication, the assessment would need to allow the learner to demonstrate communicating with another person. So the instructions, the conditions of assessment and the tasks should allow this to occur. The observation checklist should guide the assessor to consider those behaviors, to record the behaviors observed, and then to make a judgment as to whether or not that was a satisfactory response or demonstration of skill from that learner. And this evidence may also be supplemented by evidence from a third party, from a supervisor in a workplace, or some other form of observation that can strengthen and reinforce what the actual assessor observed and noted on the observation checklist.

    Nita Schultz:

    So when we're looking at these observation checklists, is it okay just to reproduce the performance criteria as things that are going to be reviewed?

    Ian:

    The performance criteria list in detail all the elements that are required to be demonstrated. But at audit, the thing that I find lacking is any sort of supporting notation or detail of the task that was performed: where it was performed, what the actual task involved, and the assessor's notation as to what they actually observed, other than a sheet that has a bunch of tick boxes for them to mark. And the other thing that I often find at audit is that the checklist might be one, two, possibly three pages long, and it's not a realistic, reliable form of evidence that the student has actually completed all of those particular tasks in the time that was allowed. So I'm not saying that they didn't complete them, but the evidence that's presented is not reliable without some additional form of notation that explains how they were able to complete that task under those conditions.

    Nita Schultz:

    It's sometimes understood that the training package is the benchmark, and therefore there is a little confusion about other references to benchmarks. What do you mean by assessment benchmarks?

    Judith:

    Assessment task benchmarks provide guidance that is specific to the assessment tasks that have been undertaken by students. The purpose of those benchmarks is to assist in the review of assessment evidence, to ensure that there are consistent outcomes that sufficiently address the training package requirements. As such, they define the required quality and standard of performance. This might also include additional industry requirements that are not explicitly stated in the training package; for example, if the student is required to perform a task in accordance with particular industry requirements.

    Nita Schultz:

    So they can be quite nuanced to the cohort and the conditions of that assessment; is that right?

    Judith:

    Yes, certainly. And particularly in relation to, perhaps, the environment in which the student is actually undertaking the assessment as well. So for example, they may be required to complete the task in accordance with an organization's policies and procedures. So yes, you're quite correct there. They are more than just the training package; they actually relate specifically to the assessment tasks and the performance required in relation to those assessment tasks.

    Nita Schultz:

    And where you have multiple assessors for a particular qualification or unit, or whatever cluster, the benchmark, I understand, can provide a point of consistency, and a point of reference, for them. Often, though, courses have only one cohort and only one assessor. In those situations, do they need benchmarks?

    Judith:

    Yes, we would suggest that there should be benchmarks regardless of how many assessors an organization has. The benchmark defines, as I mentioned, the standard of performance, which means that when the assessor is reviewing assessment evidence presented by different students, the assessor to some extent also needs to be able to justify why one student was determined competent and why another student was perhaps determined not competent. And the way, I guess, to be able to justify that is by reference to the actual benchmark. This also means that if the student asks questions or appeals the assessment decision, the assessor is able to refer to the benchmark and to clearly explain to the student why their response, or their assessment evidence, has been determined not competent, with reference to the requirements specified in the benchmark.

    Nita Schultz:

    Thank you.

    Ian:

    And it's also difficult to validate your assessments if you don't know what the goals are. So it assists that process of internal review.

    Andrew Shea:

    Our next section is going to explore contextualizing purchased resources. Now Ian, a lot of RTOs will buy third-party resources, or off-the-shelf resources. Why is it important that they contextualize them for use with their specific cohorts? And secondary to that, I just want to clarify: are there any actual third-party products ASQA endorses, signs off on, or can guarantee compliance for? Because we do see this out in industry.

    Ian:

    Firstly, I'd like to say ASQA doesn't endorse any particular third-party resources, but it is mindful of the fact that there are some terrific resources available out there for RTOs to access. It is the RTO's responsibility, though, to ensure that whatever they use is contextualized and is checked to ensure it is able to meet all the requirements of the training package, and that it also satisfies the requirements of their particular strategy for delivering the course. So RTOs should be going through any third-party or purchased resources and undertaking some form of, let's call it a validation, to ensure that they are fit for purpose and able to meet the requirements of the training package.

    Andrew Shea:

    What would you expect to see at audit regarding that check that RTOs will have undertaken to ensure the resources meet the requirements, understanding the onus is on the RTO and they can't simply say, "Well, I bought them off X"; it's on the RTO. So what would you expect to see at audit?

    Ian:

    As you know, ASQA audits are evidence-based, so we would firstly try to find some form of documentation to support that. It might be meeting notes from the group of staff that have gone through and checked those documents. We would also consider talking to the people who were involved in the process to validate that that had occurred. And we would also look at the actual product itself. Oftentimes it's quite clear that there hasn't been any contextualization or any validation of the tools that they're using; they've merely just grabbed them off the shelf and run them out.

    Nita Schultz:

    Is it a requirement for industry to be involved in that contextualization review?

    Ian:

    The standards require that RTOs engage with industry in developing their training and assessment strategies, and that would include the resources that they're intending to use. So it is best practice to involve industry at that level to get, not so much their approval, but their support for the tools that are intended to be used for those particular units of competency and assessment tasks.

    Nita Schultz:

    Now, I imagine from a contextualization perspective, a tool may actually be fit for purpose for one type of RTO, one student cohort, yet very much not be so for those who are in a different environment and being assessed in a different way. Is that something that you'd look for as far as that contextualization meeting the individual student cohort?

    Judith:

    Yes, certainly. In terms of meeting the student cohort, that's, I guess, one of the problems of using tools when they haven't been contextualized: as you've mentioned, they're developed with a particular cohort in mind initially. Prior to actually using the tool, we would suggest that an RTO may consider checking the tool for content; checking that it's going to be relevant to the workplace environment; consulting with industry to ensure that it's going to address their needs; checking that the tool will actually fully address all of the training package requirements; and checking that it includes an appropriate level of task difficulty. But of particular relevance to the question that you're asking is the system and the process for the collection of evidence as well.

    Because different learner cohorts in different environments, whether it's a simulated environment, a workplace environment, or a student who's studying independently through a distance learning process, may be able to gather evidence from different sources, and the tool needs to be able to support that evidence-gathering process. And that may mean that the tool needs to be contextualized in different ways for those different learner cohorts.

    It's also important that the tools have clear task instructions for the learners, the assessors, and any third parties who may participate in that evidence collection process. And we would also suggest that if you're contextualizing the tool for a different learner cohort, the tool is actually trialled, to ensure that it's going to be sufficiently engaging, that it will be reliable with regard to the collection of the evidence for those different learner cohorts, that it's going to be cost effective, and that it's going to produce valid evidence; that is, valid evidence of the requirements as specified in the training package. And we would also suggest that you may consider whether or not there are some additional resources that may be required when the tool is being used with different learner cohorts as well.

    Nita Schultz:

    Would online assessment be an example of what you're just mentioning?

    Judith:

    Yes, it is. And with online assessment, the tools should be contextualized prior to actually being added to a learning management system and made available to learners.

    Nita Schultz:

    So an RTO is diligently reviewing and contextualizing third-party materials, and finds in the process that the materials do not meet the requirements of the unit. What's your advice?

    Judith:

    The RTO may need to consider developing additional assessment tasks for learners, or considering alternative sources of evidence, in terms of where learners can gather their evidence, for example through the workplace, rather than actually completing an additional assessment task. Once again, the appropriateness of the alternative evidence, or the alternative assessment task, will depend on the learner cohort.

    Nita Schultz:

    Thank you.

    Andrew Shea:

    Is it fair to say that some of the training packages have gone a long time without being updated? The RTO, as well as ASQA as the regulatory body, needs to audit against the training package itself, where there may be terminology that's now out of date and needs to be contextualized. Is that part of the contextualization, working with industry to ensure that the evidence collected meets both industry needs and the training package?

    Judith:

    Yes, that's correct.

    Andrew Shea:

    And what sort of evidence of industry consultation would you expect to see as part of that?

    Ian:

    Evidence can come in a range of formats, but generally I would see at audit correspondence between the two organizations, evidence of email communication, and often some form of meeting that's been arranged to carry out that process.

    Judith:

    Industry may also be involved in actually reviewing the assessment tools prior to implementation as well to ensure that the assessment evidence that will be gathered will be valid and relevant to current industry requirements.

    Ian:

    And they'll often get somebody from industry to come and assist in the validation process by forming part of that panel.

    Andrew Shea:

    The next section we're going to explore is utilizing mandated assessment tools. So I want to ask the question, Ian: where there actually are licensed or regulated industries that have assessment tools that may have been mandated from a whole range of areas, what's the importance for an RTO of contextualizing those? What do you expect to see from an RTO that's actually utilizing those tools when you're conducting an audit?

    Ian:

    In some industries, there are mandated assessment tools that RTOs are required to use, but there's no guarantee that those third-party mandated tools will meet all the requirements of the training package. So it's incumbent on the organization to do their own mapping, or their own assessment of the training package requirements, to see if there are any gaps in the evidence that will be collected by using the mandated tools in order to make an assessment of competency for all the requirements that are listed in the training product. One of the strategies that RTOs may employ would be to use evidence that's collected through a third party in the workplace, or under some sort of supervision, that would then reinforce the evidence collected through use of the mandated assessment tool. There are a range of options for an organization to consider for further assessment to fill any gaps that still exist.

    Andrew Shea:

    And as far as those tools go, I know that some of them may ... it comes back to a question earlier about a graded-type system. Whether it be that an external party says a 70% pass mark for a test is acceptable, et cetera, the onus is still on the RTO at each stage to ensure that they have conducted some kind of mapping, to have confidence that each requirement is met. And they can't simply say, "Well, this other body said this is acceptable."

    Judith:

    That's right.

    Nita Schultz:

    So we've looked at various topics relating to the assessment process, systems and practices. Is there any other advice you'd like to provide in relation to assessment?

    Ian:

    I think, as Judith has mentioned, it's really important for organizations to use their own experience and knowledge, and to utilize their own assessors, to work together to enhance their assessment tools and to review what they are doing for the purpose of designing their own improvements.

    Judith:

    I would just like to refer RTOs to the Guide to developing assessment tools on ASQA's website. It's a comprehensive document and provides significant guidance that can overcome many of the issues that we identify as auditors at audit. As I mentioned, many of the issues that we identify do relate to the design of the assessment tools. And when the assessment tool is well developed and well designed, and is implemented in a consistent manner, we invariably find that the assessment is compliant with the requirements of clause 1.8 of the standards. As I mentioned at the beginning, the root cause of many of the issues that we identify with assessment comes back to the design of the assessment tool.

    Ian:

    And also, ASQA has undertaken a range of strategic reviews since its inception back in 2011, and the reports that have been generated through those reviews identify a range of assessment issues found through the audit process. So they're worth getting your head around as well.

    Nita Schultz:

    And I suppose just keeping an eye on your website generally, as new information will be provided.

    Judith:

    Yes.

    Andrew Shea:

    On behalf of ASQA, Nita and myself, we'd like to thank you for participating today. Hopefully you found it valuable. The topic of assessment is a key one, and hopefully there have been some really good takeaways from today's session. Please note there are a lot of resources on ASQA's website, including the guides that were referred to today. I'd like to thank Judith and Ian, regulatory officers for the Australian Skills Quality Authority, for imparting their expertise today and their experience through audits. And we wish you well in the VET world.

     
