pjm331 a day ago

I've made a few attempts at doing this manually with MCP, and took a brief look at "claude swarm" https://github.com/parruda/claude-swarm - but in the short time I spent on it, I wasn't having much success. Admittedly, I probably went a little too far into "build an entire org chart of agents" territory.

The main problem I have is that the agents just aren't used.

For example, I set up a code-reviewer agent today and then asked Claude to review code, and it went off and did the review by itself without using the agent.

In one of Anthropic's own examples, they specifically tell Claude which agents to use, which is exactly what I don't want to have to do:

> First use the code-analyzer sub agent to find performance issues, then use the optimizer sub agent to fix them

My working theory: Claude has been extensively trained on tool use and is often eager to use whatever tools are available, but agents are just different enough that they don't quite fit. Maybe asking another agent to do something "feels" very close to asking the user to do something, which runs counter to its training.

But maybe I just haven't spent enough time trying it out and tweaking the descriptions.
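For reference, Claude Code's subagent docs say the description field is what drives automatic delegation, and that phrasing like "use proactively" helps. A sketch of a more aggressively worded agent file (the file location and frontmatter fields follow the docs; the specific wording here is just my guess at what might trigger delegation more reliably):

```markdown
---
name: code-reviewer
description: Expert code review specialist. Use PROACTIVELY and MUST BE USED
  immediately after any code is written or modified, without being asked.
tools: Read, Grep, Glob, Bash
---

You are a senior code reviewer. When invoked, run git diff to see recent
changes, then review them for correctness, clarity, and security issues.
Report findings grouped by severity.
```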

  • conception a day ago

    Roo Code does this really well with its orchestration mode; there's probably a way to get a CLAUDE.md to do this as well. The only issue with Roo is that it's "single threaded", but you do get the specifically loaded context and rules for a specific task, which is really nice.

  • yumraj 16 hours ago

    What’s one use case where someone would do this? Very curious.

    • theshrike79 12 hours ago

      Agents use a separate context and won't pollute the main context.

      So if you have a code-review agent, or a TDD agent checking whether the current commit matches some specs you have, they'll start a separate "subprocess" with its own context and return whatever they find to the main Claude context.

    • nsonha 5 hours ago

      Like "do research on topic/library X and use the conclusion for next steps"

  • oc1 a day ago

    The same problem exists with MCP, as well as CLAUDE.md: most of the time they aren't used when it would be appropriate. What's the point of these agents and standards when you can't get your model to use them reliably?

bomewish a day ago

Has CC become much stupider in recent weeks, or is it me? Any anecdata out there?

  • _--__--__ a day ago

    People speculate somewhat seriously that Claude (especially given its French name) picked up at some point that you aren't supposed to work as hard in July and August.

    • sunaookami a day ago

      That one guy on Twitter who posted this wrote it as a joke, and everyone took it seriously. It's not true; it works the same for me.

      • oc1 a day ago

        How do you know? It has been acting much lazier for me in the recent summer months.

        • stavros a day ago

          How have you disproved the hypothesis that it recently got dumber and it just happens to be summer?

          • AbstractH24 a day ago

            Clearly, it compared performance to last summer

            (Just to be clear, I have no idea which parts of this thread to take seriously, or who is being serious. I'm joking, at least.)

            • stavros a day ago

              That won't do it, though, you'd have to observe it being dumber on June 1 and smart again on September 1 for years.

    • madrox a day ago

      How long before we hire psychiatrists instead of engineers to debug AI?

      • OrsonSmelles a day ago

        Well, we could start with some ELIZA instances.

        • lubujackson a day ago

          I see that you feel we could start with some ELIZA instances. Can you tell me more about that?

      • taneq a day ago

        Robopsychologists, you say?

      • nialse a day ago

        To be frank psychiatrists, being MDs, would likely prescribe medication and I’m not sure how that would help. As a licensed psychologist I have ideas on how to debug AI though.

        • AbstractH24 a day ago

          Why, we'll just have specialized agents for ingesting Prozac and that'll magically solve everything.

  • boesboes 16 hours ago

    Yeah, it has become unusable for me. Maybe it always has been, and I'm just trying to solve harder problems with it and am more critical of the results. But it's still infinitely better than Gemini for me, which can't do anything useful. Gemini even tried removing the entire security system from my Rails app because it couldn't figure out how to log in in the tests.

    I did a test with a very detailed prompt that specified exactly what to fix and how. Claude did it, but not very well. Gemini? It got stuck in a loop until I told it to stop, gave it a hint, and then it got stuck again and gave up after trying the exact same thing three more times…

    And while Claude managed to get through it, it couldn't get it right even with some help. It took me 15 minutes to write the prompt, 15 minutes for Claude to implement it, and another 10 trying to get it to do it correctly. I think it would have taken me about half the time to do it myself.

    I am giving up on it for a while.

  • nico a day ago

    I don’t know about stupider, but definitely less reliable/available

    A couple days ago I was getting so many API errors/timeouts that I decided to upgrade from the $20 to the $100 plan (as I was also regularly hitting rate limits)

    It seemed to fix the issue immediately. But today, the errors came back for about half an hour

    • SOLAR_FIELDS a day ago

      It usually goes down around 1400-1500 UTC. Europeans are still awake, and once the west coast joins the fray, Anthropic falls over.

      Pretty rare to get a 529 outside of that time window in my personal experience, at least during the USA day.

  • laborcontract a day ago

    Insert something to the tune of "never read files in slices. Instead, whenever accessing a file, you must read the file in its entirety [...]" at the beginning of every conversation, or whenever you're willing to burn more credits to get better results.

    A great deal of Claude's stupidity is due to context engineering, specifically the fact that it tries its hardest to pick out just the slice of code it needs to fulfill the task.

    A lot of the annoying "you're absolutely right!" moments come from CC incrementally discovering that you have more than 10 lines of code in that file pertaining to your task.

    I don't believe the conspiracies about dumbed-down models. It's all context pruning.
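    As a concrete sketch, a standing instruction like the one described above could live in the project's CLAUDE.md (the wording here is illustrative, not a tested recipe, and it trades extra tokens for fuller context):

    ```markdown
    # File access
    - Never read files in slices. Whenever you access a file, read it in its
      entirety before editing, even if only part of it seems relevant.
    ```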

    • oc1 a day ago

      So Claude Code does the same shit as Cursor?

  • illusive4080 a day ago

    Not for me. It gets worse when context is nearly full. I like to compact or clear context more often than it does automatically.

    • yumraj 16 hours ago

      I’ve thought about that but always forget; good to know it helps.

      I wish there were a way to automatically persist the in-memory context to a file, say on each compact or git commit. Yesterday CC crashed, and restarting it and feeding it all the context back was a pain, since my updated CLAUDE.md file was a couple of days old. It literally went from a Sr Engineer to a Jr post-crash.

      • jswny 12 hours ago

        You can do that with hooks! Make a small script that triggers on a commit tool use or on a compact hook, reads the conversation file (it should be available via the hook's inputs), and backs it up somewhere.
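        A minimal sketch of what that might look like in .claude/settings.json, assuming the documented PreCompact hook event and its transcript_path input field; the jq-based backup command and the backup directory are illustrative:

        ```json
        {
          "hooks": {
            "PreCompact": [
              {
                "hooks": [
                  {
                    "type": "command",
                    "command": "jq -r .transcript_path | xargs -I{} cp {} ~/claude-transcript-backups/"
                  }
                ]
              }
            ]
          }
        }
        ```

        The hook receives its input as JSON on stdin, which is why the command can pipe straight into jq.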

    • nico a day ago

      Do you do this via settings or just keep track of it and manually ask it to do it more often?

      • furyofantares 19 hours ago

        (Not the person you're responding to, but) it shows how close it is to compacting in the bottom right, at least once it's getting close (30% left or something?)

        Whenever I see that I think about whether I can find a good point to compact or clear. I also just try to clear whenever it makes sense to avoid getting there and try to give smaller tasks that can be cleared after they're done when possible.

        Oh, one other thing I do: sometimes I have it write a file describing what was done, if I'm not sure whether I want to clear or might want to come back to it. I also sometimes do this instead of compacting during a large task: document the status, then clear.

  • audinobs a day ago

    I think it's like a gambling game where you go on hot and cold streaks, runs based on chance.

    The model feels like it has gotten stupid when you hit a cold streak after a hot hand.

  • slantaclaus a day ago

    I feel like it’s gotten better recently

Dlanv a day ago

I wonder if this is also a good way to create experts for specific tasks/features of a codebase.

For example, a sub-agent for adding a new stat to an RPG. It could know how to integrate with various systems like items, character stats component, metrics, and so on without having to do as much research into the codebase patterns.

mortsnort 18 hours ago

It says they can be "fine-tuned," but it looks like the agents all use the same model with different system prompts? This would be more intriguing if they trained a debugger model from the ground up to be used for the debugger agent. I suspect we'll get there eventually.

  • retinaros 17 hours ago

    Words have no meaning in LLM land: "agents", "fine-tuning", and "reasoning" all have millions of definitions.

Garlef a day ago

One nice realization I had when using a similar feature in roo:

You don't need a full agent library to write LLM workflows.

Rather: A general purpose agent with a custom addition to the system prompt can be instructed to call other such agents.

(Of course, explicitly managing everything may be the better choice depending on your business case. But I think it would always be cheaper to at least build a prototype using this method.)

T0Bi 2 days ago

So everything claude-flow¹ already does but worse (I guess?).

¹ https://github.com/ruvnet/claude-flow

  • jampa a day ago

    > IMPORTANT: Claude Code must be installed first:

    > [...]

    > # 2. Activate Claude Code with permissions

    > claude --dangerously-skip-permissions

    Bypassing all permissions and connecting to MCPs: can't wait for the "Claude Flow deleted all my files and leaked my CI credentials" blog post.

    • T0Bi a day ago

      There are already several such blog posts.

      I use the .devcontainer¹ from the claude-code repository. It works great with VS Code and lets you work in your Docker container without any issues. And as long as you use some sort of version control (git), you can't really lose anything.

      ¹ https://github.com/anthropics/claude-code/tree/main/.devcont...

    • data-ottawa a day ago

      I would like a simple tool to run Claude in a container with only read/write access to provided folders.

      I’ve set it up bespoke but the auth flow gets broken.

      • SOLAR_FIELDS 18 hours ago

        Claudebox is what I was playing with. You need to mount the OAuth access token in as an env var. It's not some crazy vibe-coded framework, just around 1k lines of shell helpers to set it up.

      • beefnugs 5 hours ago

        This is the main problem with all the AI stuff. You really need:

        - whitelist-only network isolation
        - idempotent creation and cleanup of clean virtual machines
        - automated git branching and merging strategies
        - full-chain multi-host logging to an external log collector (when something unexpected happens, you should be able to review an entire event log of where it went wrong so you can improve)
        - social-graph-like tracking of what works and what doesn't
        - constant background model testing (to detect when censorship is going to bite you, or when a new model with better quality or cost-effectiveness can be swapped in)
        - an orchestrator for anything background like agents, so you can set daily or weekly budgets and keep a handle on costs
        - some defined methodology for reducing long-running agent-based production down to actual reliable code on an ongoing basis

        The tooling required for any of this to approach actual engineering reliability levels is unbelievable, really.

      • oarsinsync a day ago

        Have you considered asking Claude code to write this for you?

  • SOLAR_FIELDS a day ago

    That guy doesn't even understand how his own software works. Is anyone actually using this thing and putting their code into production?

    • lubujackson a day ago

      It's extreme dogfooding where he is making a mashed potato volcano where Claude agents are the potatoes and your sanity is the gravy.

    • AbstractH24 a day ago

      Not only are people using them, they are building startups based on them. And then selling said startups.

  • dchuk a day ago

    I’ll admit this looks comprehensive, but man oh man does it seem complicated and like it's overdoing it.

  • nazgul17 a day ago

    Except it's not in alpha phase

  • himeexcelanta a day ago

    This looks like a yarn ball (in not a good way)

  • lvl155 a day ago

    What did you make me read? Right off the bat, it says v2 alpha.

    Bro…

lvl155 a day ago

Here's my main problem with sub-agents WITHIN Claude Code: they don't allow you to use other models. Let's be honest, it's 99% Sonnet.

  • stillsut 20 hours ago

    Great point. I've found Sonnet really can't be beat on many tasks, but I'm increasingly finding that Gemini Pro and o3 handle the tough bugs and refactors best.

    That's why I've been using agro to launch agents from each of the main LLM vendors and checking their results when I'm stuck: https://github.com/sutt/agro/blob/master/docs/index.md

  • furyofantares 18 hours ago

    I haven't used them yet, but it says they can use MCPs. The only MCP server I use is zen-mcp-server, for routing stuff to o3 and Gemini.

    • lvl155 10 hours ago

      But that’s an added layer, and slow, no? Wouldn’t something like OpenCode be a better option? You can pick anything from the major providers.