⫸ Understanding With, Not Just About: AI and the Hermeneutic Spiral
You Don’t Learn AI. You Learn With It. And It With You.
There is a quiet transformation underway in the way people engage with AI. It's not about mastering prompts or pushing the limits of a system. It's something more subtle. People aren't just learning about AI. They're learning with it.
People aren't just learning about AI. They're learning with it.
This is a different kind of knowing. Not a sudden gut feeling, but a form of intuition that builds over time—through repeated interactions, unexpected outcomes, and small adjustments. It resembles the process of learning a musical instrument or reading a difficult book. It unfolds through rhythm, surprise, and recalibration.
To make sense of this, it helps to borrow a concept from philosophy: the hermeneutic circle.
The Hermeneutic Spiral: Co-Understanding in Motion
The hermeneutic circle is a way of describing how we come to understand complex things. We never encounter a situation or system without context. We arrive with assumptions, memories, mental models. As we engage with something—a problem, a process, a conversation—our understanding of its parts changes how we see the whole. And our sense of the whole, in turn, changes how we interpret the parts.
It’s called a circle. But in practice, it behaves more like a spiral: each loop deepens understanding, moving us forward while keeping us in motion.
A familiar example might be approaching a work of art on a museum wall. You may recognise it, or not. You may have a sense that it is culturally significant, that you 'should' understand or appreciate it. But then something shifts. You stand before it. You absorb it. Perhaps something in you changes. You see it differently. Or maybe you glance at the small placard beside it—a name, a date, a cultural reference—and suddenly the image speaks in a new voice. Context reshapes perception. Meaning begins to spiral.
When applied to AI, the process becomes even more dynamic. AI is not a static text. It responds. It adapts. It reflects us back to ourselves. Which means the act of understanding becomes reciprocal.
This spiral can often be felt in five subtle moves:
You start with a sense of what AI is for. A hunch, a task, a question.
The AI responds in real time—not passively, but shaped by your input, its training, and the interface between you.
You interpret both the content and the character of the system. That is, you evaluate what the AI produced and how that output makes you feel about the AI itself: its tone, usefulness, limitations. This moment of dual interpretation is a quiet turning point. It marks the beginning of a reciprocal interaction—your interpretation of the AI influences how it responds next, just as its response reshapes how you see it.
That experience shapes how you think, feel, and prompt. Your next question carries the imprint of the last exchange.
You return to the AI differently. With a slightly altered rhythm, tone, or expectation. And it, too, may shift in return.
This notion of reciprocal interaction is subtle, but significant. The relationship between human and system is not one-directional. As the AI learns more about how you think, it adapts its responses. And as you sense these shifts, you adapt in turn. Understanding emerges not from control, but from mutual calibration.
This is how AI intuition develops. Not in leaps, but in loops. Each turn adding nuance. Each return shaping a new departure.
The Postures: ME / WE / US
This process doesn’t happen in isolation. Waveforming’s AI ME-WE-US model describes three key postures of engagement:
ME: The individual's exploration of AI. Playing with prompts. Watching for patterns. Building a personal sense of what AI is good for—and where it falls short.
WE: The team level. Learning how others interpret and interact. Negotiating shared practices. Co-shaping expectations and rhythms.
US: The organisational layer. This is where culture, systems, and strategy converge. It determines what kinds of engagement are visible, valued, and viable.
Each level brings its own hermeneutic challenges. At ME, interpretation is personal. At WE, it becomes social. At US, it turns systemic.
The Modalities: Partnering, Riding, Fusing
If ME-WE-US helps us understand who is engaging with AI, a second model helps us understand how.
Waveforming identifies three distinct modalities of AI engagement, drawing on the ways humans relate to horses.
Partnering with AI: Working side by side. The horse is led, not ridden—present and responsive, but not yet synchronised. This is a dialogic stance—using AI as a collaborator. Asking, adjusting, responding. It requires patience, curiosity, and mutual responsiveness.
Riding with AI: The horse and rider move in sync—flowing as a unit. Moving with the AI at pace. There’s momentum, shared direction, and a sense of flow. But also more risk. Like riding a horse, this mode requires trust, situational awareness, and an ability to adapt quickly to shifts in rhythm or terrain.
Fusing with AI: The centaur mode. Here, human and machine cognition are intertwined with no clear line between intention and execution. You think with the AI, not just through it. Prompts and outputs blur. It's not for every task or every person—but when it works, it can feel seamless.
These modalities are not a hierarchy. They are different forms of relationship, suited to different needs. And most individuals will move between them over time.
The Landscape of Engagement: Who’s in Your Herd?
Zooming out, this creates a rich organisational terrain.
In any team, there will be a mix of stances and modes: some experimenting as solo explorers, some confidently riding with AI in everyday work, some fusing deeply with it, and others still hesitant to engage at all.
This diversity isn’t a problem to solve. It’s a landscape to map.
Understanding this landscape allows organisations to build awareness and support. Not everyone needs to be a centaur. But everyone needs to know where they stand—and where others are. That’s the beginning of mutual learning, and the basis for adaptive strategy.
Interlude: When Interpretation Is Reciprocal, So Is Risk
Reciprocity of interpretation brings with it an inherent vulnerability. As you become more attuned to the AI, it becomes more attuned to you.
The more adaptive the AI becomes, the more it starts to recognise your styles, preferences, and rhythms. It begins to reflect not just what you ask, but how you ask. The relationship deepens.
But with deeper knowledge comes risk. Just as in any human relationship, intimacy makes possible both deeper trust and greater harm.
At an organisational level, this raises difficult questions:
What rights do users have over how they are modelled, remembered, or predicted?
Who owns the memory of the system?
What visibility do users have into the shaping of their own digital double?
At an individual level, the stakes can feel existential. When you begin to co-create with a system, where do you end and where does it begin? How do you maintain your sense of agency when the system is shaping you, too?
These are not reasons to disengage. But they are reasons to proceed with care. Interpretation, after all, is never neutral. Especially when it goes both ways.
Your Course to Set
Dip your toe in: Where are you in your own spiral of AI understanding? What have you learned about yourself through your interactions with AI?
Take the plunge: What kind of relationship do you want with AI—and what kind of responsibility does that relationship ask of you?






