Golemry
Building a platform that makes AI agent automations reliable and self-improving through built-in quality control and feedback loops.

Why
It clicked the first time I scheduled a cron job from an OpenClaw chat: an AI agent automating a real task on a schedule. Everything that can be described can be orchestrated (soon). Not with rule-based workflows, but by telling an agent what to do in plain language. That fundamentally changes who can automate what.
But the current tools are a gamble. Automations silently break, drift from what you intended, or stop running entirely. There’s no oversight, no feedback loop, no way to know something went wrong until the damage is done. It looks fine at first, but the longer you run it, the more the long tail of failures surfaces. You end up either scaling back what the agent does or babysitting every run, which defeats the whole point.
I’m building Golemry because the conversational agent session is becoming a commodity. There will be a wave of agent frameworks, each with different integrations and capabilities. What’s missing is the layer that makes the jobs those agents spawn actually reliable: permissions, structured review, feedback that improves the agent over time, and a path to getting yourself out of the review loop entirely. That’s the unlock for the lean company, and eventually the one-person company.
Highlights
- Overseer validation: a dedicated agent validates every output against your quality criteria before delivery
- Dual learning loops: your feedback improves both the executor (better outputs) and the overseer (better judgment)
- Gradual autonomy: start in the loop, tighten it over time, step back when trust is earned
- Job creation API: other agent frameworks can spawn reliable Golemry jobs from their own conversations
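The overseer-validation flow above can be sketched in a few lines. This is a minimal illustration, not Golemry's actual API: every name here (`Overseer`, `run_job`, the criteria format) is a hypothetical stand-in assumed for the example. It shows the core loop: the executor produces output, a separate overseer checks it against explicit quality criteria, and a rejection feeds back into the next attempt.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    """The overseer's judgment on a single executor output."""
    approved: bool
    reason: str

class Overseer:
    """Hypothetical overseer: validates output against quality criteria before delivery."""
    def __init__(self, criteria):
        # criteria: list of (name, predicate) pairs supplied by the user
        self.criteria = criteria

    def review(self, output: str) -> Verdict:
        for name, check in self.criteria:
            if not check(output):
                return Verdict(False, f"failed criterion: {name}")
        return Verdict(True, "all criteria passed")

def run_job(executor, overseer, max_retries=2):
    """Run the executor; deliver only output the overseer approves.

    A failed review becomes feedback for the next attempt, which is the
    seed of the 'dual learning loop' idea: the same signal could also be
    used to refine the overseer's criteria over time.
    """
    feedback = None
    for _ in range(max_retries + 1):
        output = executor(feedback)
        verdict = overseer.review(output)
        if verdict.approved:
            return output
        feedback = verdict.reason
    raise RuntimeError(f"no approved output after retries: {feedback}")

# Toy usage: an executor that only produces acceptable output once it
# receives feedback from a rejected run.
overseer = Overseer([
    ("non-empty", lambda s: bool(s.strip())),
    ("has greeting", lambda s: "hello" in s.lower()),
])

def executor(feedback):
    return "Hello, world" if feedback else ""

result = run_job(executor, overseer)  # first attempt rejected, second approved
```

The point of the sketch is the separation of roles: the executor never self-certifies, and the delivery gate is a distinct agent with its own criteria, which is what makes "step back when trust is earned" a tunable parameter rather than a leap of faith.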