I love the idea of using a shared event log for coordination. Smart!
I have a Symphony-style[1] factory, which keeps all the context in a single session, but I want to start splitting into stations with separate sessions, and I hadn’t worked out how to do communication between sessions.
Personally, I’ve had "maintenance" and "auditing" sessions successfully drop notes in my loop’s "inbox" directory (even though it was intended for my use).
I’d say that works as a simple initial approach. Second step is clarifying the "return address" and protocol, but what’s nice is that the message can actually contain those, meaning the protocol itself can evolve seamlessly over time.
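To make the "message carries its own protocol" idea concrete, here's a minimal sketch of that inbox pattern. Everything here is made up for illustration (the file layout, the `protocol` and `reply_to` field names), but the point is that each note travels with its own return address and protocol version, so readers can adapt as the protocol evolves:

```python
# Hypothetical inbox pattern: each message is a JSON file that carries
# its own return address and protocol version. Field names and paths
# are illustrative, not a real spec.
import json
import time
from pathlib import Path

def drop_note(inbox: Path, sender_inbox: Path, body: str) -> Path:
    """Write a self-describing note into another session's inbox dir."""
    inbox.mkdir(parents=True, exist_ok=True)
    msg = {
        "protocol": "inbox-note/1",     # version travels with the message
        "reply_to": str(sender_inbox),  # the "return address"
        "sent_at": time.time(),
        "body": body,
    }
    path = inbox / f"{int(time.time() * 1000)}-note.json"
    path.write_text(json.dumps(msg, indent=2))
    return path

def read_notes(inbox: Path):
    """Yield notes oldest-first; filenames sort by send time."""
    for path in sorted(inbox.glob("*.json")):
        yield json.loads(path.read_text())
```

A reader that doesn't recognize a `protocol` value can just leave the file in place, which is what makes the evolve-in-flight property work.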
Which can also cause drift, though :/
When you have 5 workers + a judge all running in isolated VMs, what's the workflow for tracing a failure? Can I replay the event log locally, or inspect what each agent tried? And can agents share intermediate logs and results without going through the event system?
Yes, you can read the logs and see the full actions. The agents can send messages to each other, and I'm also about to make it possible for them to optionally read each other's traces.
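For the offline-tracing part of the question, something like this is what I'd reach for, assuming the event log is an append-only JSONL file. The `agent` and `ok` field names are hypothetical, so adapt to whatever the real log records:

```python
# Sketch of inspecting a shared event log offline. Assumes append-only
# JSONL where each line says which agent acted and whether it succeeded;
# the "agent"/"ok" field names are assumptions, not a real schema.
import json
from collections import defaultdict

def replay(log_path: str) -> dict:
    """Group events by agent so you can see what each one tried, in order."""
    per_agent = defaultdict(list)
    with open(log_path) as f:
        for line in f:
            if not line.strip():
                continue
            event = json.loads(line)
            per_agent[event["agent"]].append(event)
    return per_agent

def failed_events(per_agent: dict) -> dict:
    """Narrow the replay down to just the events flagged as failures."""
    return {agent: [e for e in events if not e.get("ok", True)]
            for agent, events in per_agent.items()}
```

Since the log is just a file, you can copy it out of the VMs and run this locally to see each agent's attempts in sequence.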
Seems like a powerful concept, and I'm excited to try it, but yikes does the code snippet seem like a textbook example of primitive obsession.
I'm going to check this out. I've been working on my end-to-end development method, and I think these kinds of holistic pipelines are the future. (Or the present!)
I've decided to start with concept capture, which then builds out strategy docs, which feed into specs, etc ... might be time for me to share, but I'm still battle-testing it myself!
Looking at your post again, I guess I could script a concepting agent to help hone the idea?
> - building custom automated software pipelines for eg code review, pentesting, large-scale migrations, etc...
I'd be interested in hearing more about how you did this, sounds super valuable
Yeah, I guess we have sandboxes with our various code environments, and then we've been developing programs that run agents to do various things, which we iterated on to make them better at the task.
For example, one spawns copies of the env with a PR applied, runs agents in the dev env to verify the change by running and demonstrating the functionality, and then comments on GitHub.
Another one is just a generic software factory that spawns a bunch of agents to coordinate on some repo; others do a red-teaming flow, etc...
Very cool!
Interesting. I like it. Now let's say I currently use the OS process as my primitive for agents, just spawning `claude "foo bar baz"` and orchestrating that way, using perhaps Unix-style files for intermediate data and piping for transformations. What would you say are some good use cases of Druid for someone like me?
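For context, my current setup is roughly the following (the prompts, file names, and the exact `claude` invocation are all illustrative, not a real pipeline):

```python
# Rough sketch of process-as-primitive orchestration: each agent is just
# a subprocess, and intermediate data flows through ordinary files.
# The `claude` CLI call and the prompts here are placeholders.
import subprocess
from pathlib import Path

def run_agent(prompt: str, workdir: Path) -> str:
    """Spawn one agent as an OS process and capture its stdout."""
    result = subprocess.run(
        ["claude", prompt],
        cwd=workdir,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

def pipeline(workdir: Path) -> str:
    """Chain two agents, Unix-style: file in the middle as the interface."""
    notes = run_agent("summarize the open issues into notes", workdir)
    (workdir / "notes.md").write_text(notes)  # intermediate data as a file
    return run_agent("turn notes.md into a task list", workdir)
```

So the question is really: what does Druid buy over processes + files + pipes as the coordination substrate?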
What do you do with those agents? It's useful if you want to iterate on a flow and have more control over the orchestration/environment