I started writing this because I kept noticing the same pattern among people my age — especially other technologists.
There are still a few who reject AI outright. They don’t trust it, don’t want it, and would rather wait for the whole thing to collapse under its own weight. That posture is understandable, but it’s receding fast. The tools are already embedded in the systems we use, whether we like it or not.
More common is the second posture: people who do use AI, but only as a faster search engine. A place to look things up. A way to summarize articles. A turbocharged autocomplete that saves a few keystrokes but doesn’t fundamentally change how they work.
Both reactions miss something important.
AI isn’t just another tool to accept or reject. It’s a new layer of mediation — one that sits between intention and action. And if you don’t decide how it fits into your thinking, it will quietly decide for you.
That realization became the backbone of a small YouTube series I’m building called The Sovereign Protocol — a set of practical experiments around using AI without giving up control. Two episodes are live. A third is currently in production. This essay is an attempt to articulate the connective tissue underneath all of them.
My YouTube channel and the first two episodes are available here: https://youtube.com/@SomethingElseShow
Tools Don’t Decide. People Do.
Most conversations about AI start with capability.
What can it do?
How fast is it?
How much does it replace?
Those questions matter, but they’re downstream. The more important question is who is framing the work.
In every effective system I’ve seen — executive offices, engineering teams, creative studios — the most valuable role is rarely the one doing the typing. It’s the one deciding what matters, what doesn’t, and what never should have arrived in the first place.
That’s the role I think AI should play: not decision-maker, not authority, but assistant.
A real assistant doesn’t decide priorities.
A real assistant doesn’t commit your time.
A real assistant doesn’t answer every request just because it exists.
A real assistant filters.
The First Mistake: Automating Before Filtering
The most common AI mistake I see isn’t bad prompting. It’s skipping judgment.
People wire AI directly into email, calendars, notifications, and task lists — and then wonder why they feel more overwhelmed, not less. AI responds instantly, but it has no sense of consequence. Everything looks equally deserving of action.
That’s how you end up responding faster to things that never deserved your attention at all.
Before automation comes filtering:
Does this actually matter?
Does this require me, or just a response?
Does this need action now — or simply acknowledgment?
If you don’t answer those questions first, AI doesn’t save time. It accelerates exhaustion.
Judgment Is the Scarce Resource
We tend to talk about AI as if intelligence were the bottleneck. It isn’t.
The bottleneck is judgment.
Judgment is knowing when not to reply.
Judgment is recognizing false urgency.
Judgment is understanding context that doesn’t appear in text.
AI can draft five versions of an email. It cannot decide which relationships matter more than others. It cannot feel the cost of a “yes.” That responsibility doesn’t disappear as tools improve — it becomes more important.
That’s the core idea behind the Sovereign Protocol: you keep the framing, the values, and the final decision; the tool handles execution.
For those who want something concrete to work from, I’ve documented the system I use in a short, free PDF (it does ask for your email address). It walks through how I set up my Sovereign Protocol assistant as a custom GPT: the framing rules, filtering logic, and boundaries I use to keep AI in a support role rather than a decision-making one.
You can find it here: https://hartlineent.com/spskit
Staying Human Is an Active Practice
One thing I didn’t expect when I started using AI heavily was how easy it is to stay digital forever.
If you save an hour with AI and immediately fill it with more screen work, nothing has improved. Efficiency without recovery just tightens the loop.
That’s why every system I build now includes an analog counterweight — something deliberately physical, slow, and unoptimized. Walking. Music. Cleaning a tool. Stepping outside. Not as nostalgia, but as balance.
High-tech only works if it serves a high-touch life.
Why I’m Writing This Here
I’m using Leaflet as a place to write between the videos — to slow ideas down, examine assumptions, and think in public without performance pressure.
The Sovereign Protocol series isn’t about mastering AI. It’s about resisting the quiet drift where tools start deciding what matters for us.
If you’ve lived through a few technology cycles, you’ve probably seen this pattern before. Tools promise liberation. Then they quietly demand obedience. The difference is rarely the tool itself — it’s whether someone is still paying attention.
The map they gave is not the land.
Everything is something else.