
One-size-fits-most and other assessment dangers

Thoughts on why assessment might just be one of the most difficult aspects of VET.


Assessment seems to be one of the most discussed, most written about, most questioned and most difficult aspects of vocational education and training to get right. Why is that?


What makes assessment so difficult to get right? My money is on the fact that, in large part, it is because assessment in the VET sector is exactly that: assessment for a vocation; a trade, a workplace role, a career calling. It requires an evaluation of practical skills and the application of knowledge specific to that profession. After all, how can you tell whether a hairdresser can cut your hair unless they do it? Or a mechanic fix your car? An early childhood educator change your toddler’s nappy? An electrician safely wire your house?


These instances evoke a strong mental image of the actual task, and it’s easy to see why an evaluation of practical skills would be required.

And I would argue that many providers can do this without too much difficulty, being set up and geared up to accommodate the necessary performance components of a particular vocation, even in our COVID world, albeit with some changes compared to this time last year!


For the RTOs properly invested in providing simulated frameworks and/or simulated tasks that mirror a workplace environment, it’s usually not too bad. However, some providers struggle to provide authentic assessment opportunities and tasks outside of the workplace. Why is that?


Is it the materials?


  • Have they not been written with an industry-realistic perspective?

  • Have they simply not been written well?

  • Are the tasks mired in theory and hypothetical ‘what would you do’s without any practical application?

Is it the assessment event?


  • Are any demonstrations lacking proper artefacts, tools, documents, etc., as would be found in a workplace?

  • Are the tasks unidimensional, lacking proper challenges, circumstances and contingencies?

  • Is the simulation too weak to be considered realistic?


When you change a nappy, cut hair, wire a house or fix a car there is some kind of tangible output at the end.

But what happens when the vocation calls for less concrete tasks, with observable behaviours that are harder to identify? Things such as effectiveness in a team, the use of emotional intelligence to foster positive workplace relationships, or the ability to test plan performance? Although no less important to their respective industries, these skills are a little less concrete than the haircutting or nappy changing mentioned above.


Already, we can envisage the impact of trying to assess these different types of skills, hard skills versus soft skills, through a one-size-fits-most approach.


Far beyond the scope of this article is another viewpoint, one which might analyse private versus public education providers, their commercial drivers, funding arrangements and allocations, and the influence that all of this, combined, might have on how materials are procured, developed and delivered through the RTO. That thought aside, it does bear recognising that providers have varying degrees of understanding, and of funds to invest, when it comes to assessment materials.


Assessment materials are part of the product delivered. Training and assessment involves the interaction of the trainer/assessor with the students, as well as the provision of the materials from which any determination of competency will be made. Why skimp on offering a great product to customers? As the interaction is only one part of that product, if the materials are not up to scratch then, by default, the product becomes less great. At worst, it becomes a matter of Consumer Law, when the service (as the product) does not meet its specified purpose.


Myriad causes might explain why this is the case, and the rigour applied to trainer/assessor accreditation is but one piece of the puzzle of what makes assessment so difficult to get right. Through no fault of the assessor, other downward pressures often mean they are asked to be the trainer, the assessor, the validator, the industry contact, the content developer, and the student support liaison for the RTO. Each of those areas is a specialism in itself, so how many specialists-in-one is a realistic expectation?


Coming back to the micro level of analysis of what makes assessment difficult, we’re forced to acknowledge the content, and user expectations of that content. We have a system designed to cater to personal and individual differences, to encompass a variety of delivery methodologies, and to allow for workplace, classroom, blended, and remote (online) assessment. Designing and administering appropriate content is a skill, as what is suitable for one delivery methodology is rarely compatible across the board. Note: there is a hint of generosity in the use of the word ‘rarely’.


The level of creativity required to ensure the right type of evidence is collected against the requirements of a unit of competency has skyrocketed in comparison to the “good old days”, when vocational education and training usually meant an apprenticeship or traineeship, both of which have inbuilt on-the-job time and, therefore, an opportunity for skills performance, and observations, in the work environment. The level of understanding of the English language required to do the same remains constant. And yet, perhaps mysteriously, it remains constantly lacking.


So many times, we see tools seeking to assess performance through questioning, ignoring the implication of the unit’s verb.

We’re constantly amazed at how popular the Big Book of kNOw is for its list of instructional words, taken from many units of competency across various Training Packages. Of the 140 or so words listed, only a small percentage indicate that assessment via questioning is suitable. Creating assessments that gather and assess the right types of evidence is clearly a challenge, and it is only one part of the puzzle.


As we move into ‘self-assurance’ as a more explicit requirement for RTO operations, a few red flags go up. Time and again, we see assessments at the centre of rectification activity, and they have the following issues in common:


False sense of security

  • previously determined somewhere along the line as ‘meeting requirements’, so the RTO believes ‘all is good’

  • questions/tasks listed against a unit requirement as collecting required evidence but only collecting some of what is required, therefore leaving a gap, even though the mapping document looks full and the unit looks covered


Relevance

  • questions/tasks that aren’t relevant to the unit requirements

  • instruments collecting written responses where a demonstration is called for

  • questions/tasks that sound like they’re hitting the mark because they contain similar words to the unit, but are out of context


Isolation

  • questions/tasks that are without the context of a workplace environment

  • benchmarks that cannot guarantee reliable replication of judgment next time


The above list is not exhaustive, but it makes a good start in pointing out that nothing will get fixed if it’s not seen to be broken. Self-assurance is a concept that relies on the drive to continuously seek improvement, and on a focus on quality and compliance (where compliance should fall naturally out of quality anyway, if one considers ‘quality’ to mean a product doing what it is supposed, and intended, to do).


"...nothing will get fixed if it’s not seen to be broken..."

I’m not suggesting that the reluctance to improve assessment materials is deliberate; rather, that improvement relies on a specialised skill set and depth of understanding that, year on year, is shown to be lacking: three quarters of RTOs do not meet the requirements for effective assessment and are deemed by the National VET Regulator to be in breach of that particular clause within the legislative instrument.


Creating an assessment task that is considered valid in terms of its authenticity and its ability to replicate workplace conditions, pressures, resources, equipment and expectations is one thing. Creating an assessment task that does all of that AND allows the assessment to occur according to different cohort needs, whether on the job, in a simulated environment, or even in cyberspace, becomes an ongoing challenge.


There is no doubt technology has improved many aspects of our lives, but some things just cannot replace the critical eye of an experienced mentor. While some responses can be ‘automarked’ by an LMS, there are levels of response that cannot (and should not) be; there are some cost efficiencies that do not translate to long-term value, and there are certain unit requirements where a case study, or a hypothetical this-is-what-I-would-do-if-ever-in-that-situation type of question, just does not cut it.


In embracing the concept of flexibility, and in its bid to be a competitive, viable career option, has VET lost its niche purpose: to train skilled workers for a specific vocation?


These are some of the challenges that present as systemic shortcomings in the materials being used to assess vocational courses. As the VET sector balances on the cusp of yet another re-invention and seeks to again reinforce its reputation as a flexible, competitive, viable career pathway, it raises the question of whether VET has become a one-size-fits-most attempt at skills education; an education marketplace trying to be everything to everyone. Instead of assessments devoid of any supporting workplace context or framework, perhaps all VET really needs, to be a solid and respected option for learning the skills of a vocation, is undiluted, well-written, industry-specific learning and assessment.


When combined with some of the other issues presented earlier in this piece, i.e. the ability to interpret and understand requirements (or lack thereof), investment in the development of adequate and appropriate materials (or lack thereof), specialists with the skills to develop such materials (or lack thereof), and realistic contexts and resources to replicate actual workplace conditions (or lack thereof), we have a recipe for systemic compliance issues. To the tune of about 75% non-compliance with assessment requirements, across all providers, over at least five years. And, of course, plenty of fodder for discussion, debate, industry articles and ongoing PD.


- First published July 2019. Updated and re-released September 2020
