
If You Can't See It, It Doesn't Exist

I run three AI agents on three servers. The hardest lesson so far was not about prompts, models, or tools. It was about being able to see what they do.

[Image: multiple terminal windows on a dark screen, each showing a different AI agent's live activity log scrolling in real time]

The ghost problem

When I first started running agents on their own servers, I made the obvious architectural call. Cron jobs. Background processes. Scheduled tasks that fire off, do their work, and write to a log file somewhere. Clean. Quiet. Out of the way.

And it worked. Until it didn't.

The issue wasn't that stuff broke. Stuff breaks all the time. That's fine. The issue was that I couldn't tell three states apart. Was the agent working? Was it stuck? Had it died twenty minutes ago? From the outside, all three looked the same: quiet.

Quiet is fine when you trust the system completely. But I was building the system. I didn't have that trust yet. And honestly, I shouldn't have. When you're building something new, you need to watch it work before you can trust it to work alone.

The fix was embarrassingly simple

I made one rule. Every piece of work has to run in a terminal I can see. No background processes. No silent cron jobs. If an agent is doing a task, I want to open a window and watch it run.

So every scheduled task now runs inside a tmux pane where I can watch the output scroll. Every cron job sends its work to a pane that stays open. Every agent's work shows up where I can check it with one keystroke.
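A minimal sketch of the pattern. The session name, pane target, and paths below are illustrative, not my actual setup; the point is that the cron entry targets a long-lived tmux pane instead of a log file.

```shell
# Illustrative crontab fragment -- all names and paths are assumptions.
#
# The silent version: output goes to a file nobody is watching.
#   0 3 * * * /opt/agent/run-task.sh >> /var/log/agent.log 2>&1
#
# The visible version: the same job runs inside a pane of a long-lived
# "agents" tmux session, so its output scrolls where I can see it.
#   0 3 * * * tmux send-keys -t agents:tasks "/opt/agent/run-task.sh" Enter
#
# Attach at any time to watch it live:
#   tmux attach -t agents
```

The pane outlives the job, so the scrollback sticks around: even if I check in hours later, the output is still there to read.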

Sounds like a step back. Most folks build systems to push work out of sight. They don't pull it into view. But here's the thing. When AI agents make decisions for you, you need to see what they do. Watching isn't extra work. It's the whole game.

What background processes actually cost you

Here's what I learned the hard way. A background process hides three things from you.

First, it hides failures. A cron job dies at 3am. No alert. It just skips. Next morning you see old data or jobs that never ran. You don't know when it broke. So you work backward from the mess instead of catching it live.
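One way to keep even scheduled work from dying silently is a small wrapper that announces every run: when it started, what it ran, and how it exited. This is a sketch of the idea, not my actual setup, and the function name is made up.

```shell
# run_visible: a tiny wrapper (name is hypothetical) that makes every
# run announce itself -- start time, the command, exit status, duration.
run_visible() {
  local start status end
  start=$(date +%s)
  echo "[$(date '+%Y-%m-%dT%H:%M:%S')] START: $*"
  "$@"                                  # run the actual task
  status=$?
  end=$(date +%s)
  echo "[$(date '+%Y-%m-%dT%H:%M:%S')] EXIT status=$status after $((end - start))s: $*"
  return "$status"
}

# Example: wrap a task so a pane running it never "just goes quiet".
run_visible echo "nightly report"
```

In a visible pane, a missing EXIT line means the task is still running (or hung), and a nonzero status sits right there on screen instead of in a log you'd have to remember to check.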

Second, it hides how things run. When you can see the work, you spot stuff. You see that one step takes three seconds when it used to take one. You see an API call retry five times before it succeeds. You see patterns that never make it into a log, because you only log what you already thought to look for.

Third, and this is the big one, it hides intent. When an agent runs in the background, you can't see what it picked or why. You only see the result. But when you watch it work live, you see the reasoning, the tool calls, the choices it makes. That's how you learn whether your system is doing what you think it's doing.

[Image: split screen comparing a dark, silent terminal (a background process) with a bright terminal full of scrolling agent activity]

This is a pentesting lesson, not a management one

I came from ethical hacking. One rule sits at the heart of that work. You can't secure what you can't see. No view, no way to find the holes. No way to check the fixes. No way to prove it's clean. The system might be fine. But you don't know. And "might be fine" is not a security posture.

The same thing applies to AI agents. If your agent runs a task and you can't see what it did, you're trusting blindly. Maybe it worked. Maybe it sent the wrong email. Maybe it wrote to the wrong file. Maybe it errored out and left things half-finished. You don't know. And "I don't know" is not acceptable when the agent is acting on your behalf.

Trust comes from watching, not hoping

I think people mix up automation with hiding it. They want to set things up. Then never look again. But that is not automation. That is just giving up.

Real automation is a system you trust. You trust it because you watched it work right hundreds of times. You earned that trust by watching it. You did not just flip a switch on day one and walk away.

With my agents, I watch them work every day. I see their output in real time. I catch problems early. And over time, I build confidence in what they're doing. Not because I hope it's working. Because I can see it working.

The rule for any team

This goes beyond AI. Any team works better when the work is in plain sight. Not watched over. Just visible. There's a difference between the two. Watching is about control. Seeing is about shared understanding.

When the team can see the work, bugs get caught fast. Handoffs are clean. Trust grows on its own. When work hides in a black box, you get surprises. From what I've seen, those are never the good kind.

My rule is simple. If you can't see it, it isn't there. Not because hidden work is fake, but because hidden work can't be trusted. It can't be fixed. It can't be checked. And a system you can't trust or fix isn't a system. It's a risk.
