What counts as evidence of capability?

In many professional contexts, capability is inferred rather than directly observed.

Qualifications, experience, and past performance are commonly used as signals. They provide a sense of what someone knows, where they have been, and what they may be able to do.

In stable environments, these signals can be sufficient.

But as contexts change — particularly with the introduction of AI into decision processes — the relationship between these signals and actual capability becomes less straightforward.

A qualification may indicate knowledge. Experience may suggest familiarity. But neither necessarily shows how someone reasons in situations where information is incomplete, where tools are shaping outputs, or where decisions carry new forms of complexity.

In these conditions, the question of evidence becomes more important.

Not in a formal or bureaucratic sense, but in a practical one: how capability is made visible, and how it can be understood by others.

This is not always easy.

Much of what constitutes capability — judgment, interpretation, the ability to navigate ambiguity — is not easily captured in static formats. It is often revealed through process rather than outcome, and through how decisions are approached rather than simply what decisions are made.

For example, an outcome may appear correct while the reasoning behind it is fragile. Conversely, a well-reasoned approach may lead to an imperfect result, yet still reflect strong capability.

If only outcomes are visible, this distinction can be difficult to see.

As AI becomes more involved in producing outputs, this challenge becomes more pronounced. When part of the reasoning process is supported by a system, the path from input to outcome can become less transparent. This makes it harder to assess how decisions were formed, and where responsibility sits.

In this context, evidence of capability may need to shift.

Rather than focusing solely on what was achieved, it may become more important to understand how a problem was framed, how information was interpreted, how assumptions were questioned, and how decisions were reached and justified.

These are not new concerns. But they take on greater significance when the processes behind decisions are less visible than before.

This has implications not only for individuals, but for organisations and institutions.

If capability cannot be clearly evidenced, it becomes harder to build trust — whether in hiring, in collaboration, or in decision-making itself. At the same time, over-reliance on simplified signals can create a false sense of confidence.

Seen in this way, the question is not only how capability is developed, but how it is recognised — and made visible in ways that reflect real reasoning, in real contexts.