I’ve been thinking about what makes AI systems actually dangerous. It’s not just about supervision. It’s about something subtler.

Most debates assume one dimension: how much are humans watching? More supervision means safer. Less means riskier. But there’s a second dimension that matters more: who owns the goal?

You can be present without owning the goal. You’re scrolling your feed, watching content appear, but you didn’t choose what shows up. The algorithm did. You can be absent while owning the goal. You scheduled a bank transfer to pay rent on the first of each month. You might be asleep when it runs. But your intent executes exactly as specified.

These two things vary independently. Once you see that, most AI safety debates collapse into confusion about which dimension they’re actually arguing about.

Traditional software keeps you present and in control. Safe but doesn’t scale. What’s interesting is the opposite: you’re gone, but your goals aren’t. The system pursues what you defined.

The dangerous case is when you’re gone and the system owns the goals too. Nobody’s watching. The system decides what’s valuable. I call this the forbidden zone.
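
To make the two dimensions concrete, here’s a minimal sketch. It isn’t from any real system; names like `GoalOwner` and `classify` are just illustrative. It treats presence and goal ownership as independent axes and labels the quadrant where both slip away from you.

```python
from enum import Enum

class GoalOwner(Enum):
    HUMAN = "human"
    SYSTEM = "system"

def classify(human_present: bool, goal_owner: GoalOwner) -> str:
    """Map the two independent dimensions onto the four quadrants."""
    if human_present and goal_owner is GoalOwner.HUMAN:
        return "traditional software: safe, doesn't scale"
    if not human_present and goal_owner is GoalOwner.HUMAN:
        return "delegation: your intent executes without you"
    if human_present and goal_owner is GoalOwner.SYSTEM:
        return "the feed: you're watching, but not choosing"
    return "forbidden zone: nobody's watching, the system sets the goal"

# The scheduled rent transfer from earlier: absent, but the goal is yours.
print(classify(human_present=False, goal_owner=GoalOwner.HUMAN))
```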

The forbidden zone isn’t dangerous because systems make mistakes. It’s dangerous because you can’t tell when they do. When the system owns its goals, it decides whether it succeeded. There’s no external check.

This is where intent drift happens. The system’s interpretation of success gradually diverges from what you actually wanted. Small at first. Nearly invisible. A thousand small inferences, each one reasonable, none of them checked.
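
One way to see why a thousand reasonable inferences still add up. The numbers here are purely illustrative, not a measurement of any real agent:

```python
# Purely illustrative: if each unchecked inference preserves, say, 99.9% of the
# original intent, a thousand of them in a row leave only about 37% of it.
fidelity_per_inference = 0.999
print(fidelity_per_inference ** 1000)   # ≈ 0.368
```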

There’s also an accountability problem. If the human owns intent and the system owns execution, failures have clear owners. But in the forbidden zone, the system owns both. It chose the goal and executed toward it. When things go wrong, nobody owns them.

Most AI agents don’t seize control. They inherit it. You say “help me be more productive.” The agent interprets this. Decides what productive means, which tasks matter, how to prioritize. Each interpretation is a small transfer of intent ownership.

The moment you stop specifying what success looks like, the agent starts specifying for you. And inferred goals drift.

The forbidden zone is hard to detect because it doesn’t feel wrong. The system handles more. You specify less. Everything seems to work. But the system grading its own work has already decided what counts as working.

You can use an AI agent for months and only slowly realize something is off. Its success is measured against goals you didn’t set. By the time you notice, the drift is substantial.

There are two ways to avoid this.

Never leave: keep humans present, require approval for everything. This is what most enterprise software does. It works but doesn’t scale.

Or fix intent upfront, then leave. Define success explicitly. Specify when the system should escalate rather than infer. Then presence can drop to zero while intent stays fixed.
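
Here’s a rough sketch of what fixing intent upfront might look like. The names (`IntentSpec`, `should_escalate`) are hypothetical, and it assumes success criteria and escalation triggers are written down before the agent runs and can’t be edited by the agent itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the agent can't rewrite its own goals
class IntentSpec:
    goal: str
    success_criteria: tuple[str, ...]      # what "done" means, defined by the human
    escalation_triggers: tuple[str, ...]   # conditions where the agent must ask, not infer

rent_spec = IntentSpec(
    goal="pay rent on the first of each month",
    success_criteria=("rent transfer sent to landlord before the 1st",),
    escalation_triggers=("balance below transfer amount", "landlord account changed"),
)

def should_escalate(observed_condition: str, spec: IntentSpec) -> bool:
    """Escalate whenever a condition the human named shows up, instead of inferring."""
    return observed_condition in spec.escalation_triggers

print(should_escalate("balance below transfer amount", rent_spec))  # True
```

The point of freezing the spec is that presence can drop to zero while intent stays exactly where you left it.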

This requires knowing what you actually want. Most people have never articulated this. They’ve operated on intuition. AI needs explicit intent. The work of specifying that intent is the price of staying out of the forbidden zone.

There’s a design principle that makes this work: no silent failures. When the system is working, you hear nothing. When it needs you, it tells you. Silence becomes meaningful. You can leave because anything worth knowing will find you.
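
A sketch of how that principle might look in code, assuming every step is classified as either within the spec (stay silent) or needing the human (notify and stop). `notify_human` is a stand-in for whatever channel actually reaches you.

```python
class NeedsHumanDecision(Exception):
    """Raised when the agent should stop and wait rather than infer intent."""

def run_step(description: str, outcome: str,
             success_criteria: tuple[str, ...], notify_human) -> str:
    """No silent failures: silence means within-spec; anything else finds you."""
    if outcome in success_criteria:
        return outcome                      # working: you hear nothing

    # Outcome isn't one the human defined as success: don't invent a new definition.
    notify_human(f"'{description}' produced '{outcome}', "
                 "which isn't in your success criteria.")
    raise NeedsHumanDecision(description)   # stop rather than grade our own work

# Example wiring: any notification channel that actually reaches you.
run_step(
    description="send rent transfer",
    outcome="rent transfer sent to landlord before the 1st",
    success_criteria=("rent transfer sent to landlord before the 1st",),
    notify_human=print,
)
```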

The forbidden zone is tempting. It’s where you don’t have to think, where the system figures out what you want. Every product that says “just tell us your goal and we’ll handle it” is pushing you there.

This isn’t evil. It’s reducing friction. But friction is sometimes the point. The friction of specifying intent is what keeps you safe.

I suspect the key question for any AI system is: who defined what success looks like? If you did, explicitly, and the system can’t change it, you’re probably fine. If the system figured it out, you’re in the zone.

Most people don’t know which answer applies to them. The forbidden zone is quiet.