Key Considerations When Deploying AI Agents (OpenClaw) Part 1
Some notes on deploying AI agents:
Lately, OpenClaw deployment services have been popping up like mushrooms. But honestly, most of what I’ve seen is pretty basic: just set it up and let it run. I don’t see much value in that, because ChatGPT and Claude already handle that kind of work well on their own: searching for information, doing calculations, writing code, and so on.
Where OpenClaw’s Value Really Is
- Almost everything OpenClaw can do, Claude Code or Codex can do too, and often better. But those tools need someone sitting there typing in the input. OpenClaw’s real value is that it’s always listening for input from multiple channels.
- The cloud/web versions of ChatGPT and Claude can’t access local files (data) or local skills.
- It’s highly customizable to a developer’s needs, though regular users probably don’t need this.
If the web versions of ChatGPT or Claude solve these problems, people probably won’t use OpenClaw much. But I think they’ll get there eventually, so the ultimate value of OpenClaw comes down to customization and no vendor lock-in.
Challenges When Deploying AI Agents
- Making it work reliably: output should be consistent. A hundred runs should give you the same result a hundred times. Depending on the use case, you can build skills with Claude Code and then drop them into OpenClaw.
- Security: when integrating with chat, you typically face four cases: the owner DMing the bot (the easiest case, not much to worry about), family members DMing the bot, random strangers DMing the bot, and dropping the bot into group chats with friends or coworkers. Each case calls for a different security mode. Not to mention prompt injection is basically impossible to fully prevent right now; the latest models like Opus 4.6 and GPT 5.3 resist it slightly better, but it’s still not enough. OpenClaw has some security features for allowing/denying tool calls and sandboxing, but I’ve found them quite limited.
- Cost: this is a real headache, because not everyone has the budget for it. In my opinion, it would be great if Anthropic let you hook up the $200/month Max plan, but unfortunately doing so can get you banned. Right now, the best option for me is the ChatGPT Pro plan at $200/month. As for Chinese models, from my testing they’re not reliable enough, even though they’re cheap. They feel overtrained on benchmarks; in real-world use, they still fall behind the US models.
- Integration: getting OpenClaw to run smoothly alongside your existing tools is another challenge, and that’s before you even consider security.
- Training users: everyone talks about deploying AI, but how do you actually use it so it increases revenue and reduces costs, instead of just increasing costs? Sales folks love throwing around the word “AI” without a single concrete use case. Most deployments stand up some random chatbot, leave it there, and burn the company’s money. Every time I see those ad posts and demos, it really ticks me off. It’s basically a scam, riding the trend to sell services, and unfortunately it hurts the people doing legitimate work.
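The reliability point above boils down to this: anything you want answered the same way a hundred times shouldn’t be re-derived by the model each run; it should be dispatched to fixed code. Here’s a minimal sketch of that idea in Python. The skill registry, names, and decorator are my own illustration, not OpenClaw’s actual skill format.

```python
# Sketch: dispatch known tasks to deterministic code paths instead of
# free-form model generation. (Registry/skill names are illustrative
# assumptions, not OpenClaw's real API.)
from typing import Callable, Dict

SKILLS: Dict[str, Callable[..., str]] = {}

def skill(name: str):
    """Register a deterministic function under a skill name."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        SKILLS[name] = fn
        return fn
    return register

@skill("vat")
def vat(amount: float, rate: float = 0.1) -> str:
    # Pure arithmetic: same input, same output, every single run.
    return f"{amount * (1 + rate):.2f}"

def dispatch(name: str, **kwargs) -> str:
    if name not in SKILLS:
        raise KeyError(f"unknown skill: {name}")
    return SKILLS[name](**kwargs)
```

The agent still decides *which* skill to call, but the answer itself comes from code, so a hundred runs really do give the same result a hundred times.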
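The tiered security modes from the security bullet can be made concrete as a simple policy table: each chat context gets its own tool allowlist, with the least-trusted contexts getting the least. This is a hypothetical sketch; the context names and tool names are my assumptions, not OpenClaw’s real configuration schema.

```python
# Sketch of per-context security modes: owner DMs get wide access,
# strangers and group chats get chat-only or nothing, since prompt
# injection can't be fully prevented. (Names are illustrative.)
from enum import Enum

class Context(Enum):
    OWNER_DM = "owner_dm"
    FAMILY_DM = "family_dm"
    STRANGER_DM = "stranger_dm"
    GROUP_CHAT = "group_chat"

POLICY = {
    Context.OWNER_DM:    {"shell", "files", "browser", "chat"},
    Context.FAMILY_DM:   {"browser", "chat"},
    Context.STRANGER_DM: {"chat"},
    Context.GROUP_CHAT:  set(),  # respond only, never call tools
}

def is_allowed(ctx: Context, tool: str) -> bool:
    """Deny by default: a tool runs only if the context's policy lists it."""
    return tool in POLICY[ctx]
```

The design choice here is deny-by-default: a message arriving from an untrusted context simply cannot trigger a tool call, no matter what the injected prompt says.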
Bottom Line
There’s still a ton of work to be done based on the list above. There’s a huge gap between AI in theory and AI in practice. In this era, who’s going to seize the opportunity?