Skills, Proficiency Ratings & AI Inference Models

Now that C-GPT has provided us with a working definition of skills proficiency, let's dive in to see why this is a complex and highly polarising topic, especially for organisations on a skills-based journey.

There are many people claiming AI can provide meaningful time-based or contextual proxies to infer skills and proficiency data, e.g. a software developer who has worked at Atlassian for four years must be highly proficient in Python coding because of the skills inferred from their 'time in role'.

Then there are others who vigorously oppose and distrust AI-led inference models, preferring instead to seek detailed assessment of skills using proven and trusted assessment tools and methodologies (where they exist).

Scale, however, is a challenge. For a skills-based strategy to be effective, organisations need to create both a skills lens of their people (and of the work itself) and, ideally, a proficiency view of their people's skills. It doesn't take Einstein to work out that we cannot assess every skill for everyone in an organisation without quickly running out of time, dollars and hope.

There are plenty of reliable tools in the market for assessing technical (or hard) skills, and some well-trusted assessments that provide insights into IQ and cognitive skills, but there are very few science- and research-based products that can assess and codify durable human (or soft) skills, which are the power skills of the future.

There are, however, several global organisations currently developing these assessments, including people who participated in TQSolutions' 2024 skills research. Watch this space for more information on these product developments.

RedThread Research conducted a survey of skills vendors in 2024 to see how they approached and supported the verification of skills; the results are summarised in the graph below:

Self-identification is the main approach relied on by vendors in the skills market, followed by inference-based skills verification from existing HR or work datasets or from external industry benchmarks. Formal assessment and formal observation also feature strongly among current methods of verifying skills.

Skills tech convergence:

Recent RedThread Research also shows that vendors are moving towards the centre of this framework, but as you can see, approaches to skills identification and skills data structure vary greatly.

So, what should you do? How should you approach the gnarly topic of skills and proficiency?

This is a question I have been asking participants in my 2024 skills research, and I will lean on some of their opinions to unpack this topic further.


Is there a role for AI inference, and if so, where and how should it be used?

TQSolutions is currently working with a customer to support the rollout of its skills strategy and the technology platform enabling it. We have been 'under the hood' of the vendor platform to better understand how its skills foundation layer is used by AI to infer someone's skills based on either their core HR data records or their current role in the organisation (skills to people or skills to jobs).

Note that in this example the AI is only inferring skills, not proficiency. The use case and rationale for AI inference is to provide a better user experience for the initial onboarding and user profile build, and to speed up 'time-to-value' for the workforce as they engage with the platform. The sooner someone's profile is built, the sooner the AI recommendation engine will kick in and recommend career or learning opportunities to them in the marketplace.

The user can choose to claim the inferred skills or reject them, and they can add more skills to their profile than the AI has inferred (most skills platforms direct their AI to infer 10-20 skills, depending on the data it can access). In short, the user has complete control over the skills published on their profile, regardless of whether they were inferred or manually added.
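The claim/reject/add flow described above can be sketched as a small data model. This is a minimal illustration only; the class names, fields and behaviour below are assumptions, not any specific vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class SkillClaim:
    name: str
    source: str            # "inferred" by the AI or "self_added" by the user
    claimed: bool = False  # inferred skills start unclaimed, awaiting review

@dataclass
class Profile:
    skills: dict = field(default_factory=dict)

    def add_inferred(self, names):
        """Seed the profile with AI-inferred skills awaiting user review."""
        for n in names:
            self.skills[n] = SkillClaim(n, "inferred")

    def claim(self, name):
        self.skills[name].claimed = True

    def reject(self, name):
        self.skills.pop(name, None)  # rejected skills never appear

    def add_manual(self, name):
        self.skills[name] = SkillClaim(name, "self_added", claimed=True)

    def published(self):
        """Only claimed skills are visible: the user keeps full control."""
        return sorted(n for n, s in self.skills.items() if s.claimed)

p = Profile()
p.add_inferred(["python", "sql", "jira"])
p.claim("python")
p.reject("jira")
p.add_manual("stakeholder management")
print(p.published())  # ['python', 'stakeholder management']
```

The design point is simply that nothing inferred is published until the user explicitly claims it, mirroring the user-control principle described above.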

There is also benefit to the organisation in this skills inference process. Companies can rapidly build a base-level view of their organisational skills profile without waiting for all workers to complete their profiles. Whilst this won't be a full picture or 100% accurate, it is likely to be significantly more information than most organisations currently hold in their core HRIS and talent systems, and it will only improve over time as more skills are claimed, added and validated.

At the beginning of a skills transformation there is a lot to think about and do. Sometimes an MVP, or base level, is enough, and organisations can gradually work to enhance or improve their skills data, particularly their chosen approach to proficiency ratings. This view was summed up beautifully in my conversation with Maryna Matthew (ex-Novartis), who said:

“My personal view was not to get so hung up on the (proficiency) rating yet. There's so much else for us to get right first… it's such a complex set of changes happening simultaneously and I took the personal view, the rating piece can come later.”

Skill Proficiency

Turning to proficiency rating specifically, there are certainly plenty of people I have spoken with in my research who do not think we should be using AI inference models for proficiency ratings.

I witnessed this firsthand on a recent TQSolutions client engagement when, during a vendor presentation/demo of its AI inference engine, the client's AI Governance Lead wrote "DO NOT DO THIS" in bold type in the Teams chat. It was quite an eye-opener in terms of the strong opinions AI inference models can generate when they relate to work and careers.

Earlier this year I spoke with Gordon Ritchie from Skill Collective, who commented that we shouldn't rely on AI to solve the proficiency problem: people need to be able to understand and clearly describe what good looks like at each proficiency level, and therefore to know their skills deficit clearly and, importantly, how to improve.

“You won't be able to attach the right prescriptive learning to go from level two to three or level two to four. You won't be able to coach effectively in a performance discussion because you haven't seen the context or the capability.”

Gordon also raised an interesting point: proficiency level descriptors are closer to how people think and talk about work, because they describe the work and tasks associated with each level. This may be a useful connection to help people better understand the language of skills.

“We don't think about work that way. We describe a task. And I think that's the opportunity proficiencies give us. It's how employees think about their work. And ultimately, that's what we're trying to connect to. We're not just trying to give them a list of skills, we're trying to connect to the work that they're doing, how to get there.”

Having worked with IBM Watson Talent and used their skills and proficiency framework, Gordon has also seen the enormous benefit of having this level of detail, both to assist employees in the self-assessment process and to help People Leaders calibrate skill and proficiency ratings.

“(The skills and proficiency framework) is the third person in the room, it's much less subjective than traditional self and peer assessments because there is a measure of objectivity. Have I seen you doing this? Yes, or no? The (proficiency) assessment process is much more reliable and trustworthy when you have that level of detail.”

Dr. Marcus Bowles at Capability.Co is a fan of capabilities, rather than skills, and strongly supports the use of capability standards which can be measured and rated:

“Inference covers a lot of sins, but I won't go there, we use standards. Human capability standards are standards written at seven levels. The standards include mindset and behavioural underpinnings with skills.”

Marcus also believes you must consider the context in which performance is demonstrated because performance may vary in different contexts:

“Skills can be demonstrated, the mindset and behaviours can be evidenced through demonstration, but they're mainly innate to a person and the context they're in. So, I could be a great leader and display very good authentic behaviours, inclusion practices in one context, but that doesn't mean it's transferable to another context. It’s a roundabout way of saying if you look through it from a skill and competency assessment lens, which I have a strong background in, you will not get any level of insight into the person that is authentic. You've got to be able to understand it in a context within which it's been performed.”

Both Gordon and Marcus advocate for more science and detail in the frameworks and descriptors used to assess proficiency, whether it's skills or capabilities you are assessing. Moreover, a human overlay and calibration is needed to ensure the assessment is reliable, authentic and trusted.

Early thinking:

I think there is a role for AI inference models, particularly in the initial skills mapping process, whether mapping skills to people or skills to jobs. This will fast-track the user profile build and greatly enhance the initial user experience. The use of inference at this stage will deliver significantly faster 'time-to-value' for both employees and the organisation itself.

However, I do take on board the opinions of Gordon and Marcus and can see the value in adopting a more rigorous approach to proficiency definition and rating, including the use of humans in the loop to assess and validate ratings.

So, is there a hybrid approach that can work?

During my research I have spoken with David Meza from NASA, Michael Smith from Randstad Enterprise and Sandra Loughlin, PhD, from EPAM Systems, and I think their voices add support to a hybrid or combined approach to skill mapping and proficiency rating.

David at NASA viewed skill mapping as a 'graph problem': NASA knows what work needs to be done and has people to do the work, but needed a way to connect the two and to understand whether its people had the right skills to do the work. If they don't, NASA needs to know how to train people or find others in the workforce (or externally) with those skills.

“So, I created a graph database on occupations and all the different elements that make up those occupations. In those elements, you have things such as knowledge, skills, tasks, abilities. I also have information about people, how do I interpret what I know about them? their resumes, their CVs, their research papers. How do I turn that into knowledge, skills, tasks or abilities?

I created a comparison natural language processing model to compare those elements within the occupations to an individual. How closely related were they? Then I was able to make the connection between the occupation and skills, and skills and people. And because I have those connections, and I can tell this person has 75% of the skills necessary to do this job. So, I only need to upskill them in this other 25%.”

David and NASA built their own skills architecture based on a graph database, using publicly available skills frameworks such as O*NET. They used internal and external data on roles and people to initially 'infer' skills, which they then asked their workforce to review and validate. They took this approach because it expedited the user profile build process and took some of the heavy lifting off employees.

“Similarly, you have to do some validation of the individual. (This is) always difficult because it puts a lot of burden on individuals or their supervisor to say, yes, they have this skill, or they don't have this skill. We've tried this over the years, and it just died under its own weight many times because it's difficult to get somebody to go in and enter everything (skills) they have. So, what we try to do is, based on what we know about you, this is what we think you have. Can you tell me if you agree? Just give me the ones that are most highly correlated (top 10), can you tell me what level you have it? Novice, intermediary or expert?”

David noted that this self-rating is challenging and will always be a little difficult to validate or assess, but the dataset yielded by this approach will still be hugely valuable to NASA in its quest to make sure it has the right skills to go back to the Moon and on to Mars.
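David's "75% of the skills necessary" comparison can be reduced to a simple set-overlap sketch. NASA's actual system uses a graph database and NLP similarity over occupation elements, so this is a deliberate simplification, and the skill names below are invented for illustration:

```python
def skill_gap(required: set, held: set):
    """Return the coverage ratio and the skills still to be developed."""
    if not required:
        return 1.0, set()
    covered = required & held          # skills the person already has
    return len(covered) / len(required), required - covered

# Hypothetical occupation requirements vs. one person's validated skills
required = {"python", "graph databases", "nlp", "data modelling"}
held = {"python", "nlp", "data modelling"}

coverage, gap = skill_gap(required, held)
print(coverage, gap)  # 0.75 {'graph databases'}
```

With 75% coverage, the remaining 25% becomes the targeted upskilling plan, which is exactly the connection David describes between occupations, skills and people.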

As always, I tend to look to Sandra Loughlin at EPAM Systems to see what 'good looks like', as they are hands down the most mature skills-based organisation I have explored (a 30+ year skills journey). EPAM has a very sophisticated way of building someone's skill profile and proficiency ratings that stems from years of systematic skills-based talent practices. Their approach blends human, digital and AI-observed data sources.

Sandra's insights and EPAM's approach were illuminating and didn't disappoint.

“We will have data on (people) before they get here, we have all their LinkedIn data, any public information they've presented, like at conferences or they have published, it is all being ingested in the hiring process. Then we use skills-based interviewing, so we can assess a bunch of skills either directly or through behavioural interviews or portfolio reviews or code reviews etc.

We are also using our friend AI in the interview process, it's all being transcribed, and the AI can suggest skills that we did not directly assess but are typically associated with the ones that we assess. This gives us a predicted skill profile in addition to the actual one. Once someone is in the organisation, most skills data and validation come from project work, we're a professional services company and we break down projects by tasks and skills. Part of a project manager's job is to give feedback on people and their skills, meaning we're constantly getting validated skills. There is also peer feedback, peer reviews and you can get feedback from clients.

Skill data is weighted so your self-reported skills are weighted a lot less than an expert validated skill. We also get data from certifications and what courses people are going through, though these matter a lot less because you can click through content and not actually have learned anything. So, there are things that we weight more than others, but we try to pull as much data as we can.”

In conclusion:

Whilst there has been general excitement within the ranks of the HR tech vendor community about the possibilities of inferred skills and proficiency ratings at scale, it does appear most organisations are choosing to rely on self-reported or peer/manager-reported data, or a hybrid where skills data is inferred and then validated, with proficiency ratings added through self-reporting and manager validation.

In other words, the opinions of humans still count for a lot, and some human opinions count more than others.

I really liked the approach of EPAM Systems, which leverages the best of all worlds: human, digital and AI-observed. This is probably a step too far for most companies, but it does show what is possible over time.

I am going to lean on Michael Smith from Randstad for this month's closing comments, and his view that we need progress, not perfection:

“I think it is a problem at the moment that lots of organisations are grappling with. What I've seen work best is where our large customers have had a combination approach, they've built their own internal ways of adding more value to the validation of skills, including self-claimed skills, assessments that are administered from the LMS, which back up that claim, but they are also bringing in peer and hiring manager recommendations as well.

They know it’s an incredibly difficult exercise to be able to say ‘we've nailed it’ so they seek to get the most robust, statistically significant empirical thought process around it. They are the ones that I've seen make the most movement, but they've made it with the thought process of, ‘it's still not perfect, but we value progress over perfection’, and we're going to do it with a learning growth mindset that we're going to continue to get value from this over time.”
