Working with AI is not intuitive

There is a growing assumption that working with AI is, or will become, intuitive.

As tools become more accessible and interfaces more user-friendly, it is easy to assume that effective use will follow naturally — that people will simply adapt as they go.

In some cases, this may be true at a surface level. Basic interactions can be learned quickly. Outputs can be generated with minimal effort. The barrier to entry is low.

But working well with AI is not the same as using it.

It involves a set of judgments that are not immediately visible, and not always straightforward to develop.

For example, knowing how to ask a system for information is one thing. Knowing how to frame a question in a way that produces meaningful, reliable output is another. Knowing how to interpret that output — where it is strong, where it may be incomplete, and how it relates to the context at hand — is different again.

These are not purely technical skills. They sit at the intersection of reasoning, domain understanding, and judgment.

There is also the question of trust.

AI systems can produce outputs that appear confident and well-structured, even when they are based on partial or uncertain information. This can create a tendency to accept outputs at face value, particularly when they align with expectations or are delivered quickly.

At the same time, excessive scepticism can be equally limiting, leading to underuse or the rejection of potentially valuable insights.

Working effectively with AI often requires navigating between these two positions.

This is not always intuitive.

It requires developing a feel for when to rely on a system, when to question it, and how to combine its outputs with human judgment. It requires recognising that AI does not remove the need for interpretation — it often increases it.

In practice, this kind of capability tends to develop through use, but not through use alone.

It develops when people are able to engage with AI in contexts where the consequences of decisions are visible, where reasoning can be examined, and where there is space to reflect on both successes and errors.

Without this, there is a risk that AI becomes either over-trusted or under-utilised — and that its role in decision-making remains poorly understood.

Seen in this way, working with AI is not simply a matter of access or familiarity. It is something that needs to be learned, supported, and made visible over time.