Towards a science of scaling agent systems: When and why agent systems work

verdverm

gonna read this with a grain of salt because I have been rather unimpressed with Google's AI products, save direct API calls to Gemini

The rest is trash they are forcing down our throats

4b11b4

Yeah, AlphaGo and AlphaZero were lame. The Earth foundation model - that's just ridiculous.

That's sarcasm

---

Your "direct Gemini calls" is maybe the least impressive

edit: This paper is mostly a sort of "quantitative survey" - nothing exciting enough to require a grain of salt

CuriouslyC

This is a neat idea but there are so many variables here that it's hard to make generalizations.

Empirically, the setup I've seen put up the best results across heterogeneous environments is a top-level orchestrator that calls out to a planning committee, then generates a task DAG from the plan, which gets executed in parallel where possible. As models evolve, crosstalk may become less of a liability.
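The parallel-where-possible execution of a task DAG can be sketched roughly like this. This is a minimal illustration, not the commenter's actual system: `run_task` is a placeholder standing in for a call to a worker agent, and the DAG shape is made up.

```python
import asyncio

async def run_task(name, inputs):
    # Placeholder worker: a real system would invoke an LLM agent here.
    return f"{name}({','.join(inputs)})"

async def run_dag(dag):
    """Execute a task DAG, running independent tasks concurrently.

    dag maps each task name to the list of task names it depends on.
    """
    results, done, pending = {}, set(), dict(dag)
    while pending:
        # Tasks whose dependencies are all satisfied can run in parallel.
        ready = [t for t, deps in pending.items() if set(deps) <= done]
        if not ready:
            raise ValueError("cycle in task DAG")
        outs = await asyncio.gather(
            *(run_task(t, [results[d] for d in dag[t]]) for t in ready)
        )
        for t, out in zip(ready, outs):
            results[t] = out
            done.add(t)
            del pending[t]
    return results

# "research" and "outline" have no dependencies, so they run concurrently;
# "draft" waits for both.
dag = {"research": [], "outline": [], "draft": ["research", "outline"]}
results = asyncio.run(run_dag(dag))
```

The planning committee's job, in this framing, is just to emit the `dag` dict; everything after that is ordinary topological scheduling.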

zby

Reasoning is recursive - you cannot isolate where it should be symbolic and where it should be LLM-based (fuzzy/neural). This is the idea that started https://github.com/zby/llm-do - there is also RLM: https://alexzhang13.github.io/blog/2025/rlm/ RLM is simpler, but my approach also has some advantages.

localghost3000

I’ve been building a lot of agent workflows at my day job. Something I’ve found a lot of success with when deciding on an orchestration strategy is to ask the agent what it recommends as part of the planning phase. This technique of using the agent to help you improve its performance has been a game changer for me in leveraging this tech effectively. YMMV of course. I mostly use Claude Code, so who knows with the others.
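The "ask the agent first" technique above amounts to prepending a meta-question to the planning phase. A minimal sketch of how such a prompt might be composed; the wording and structure are my own illustration, not a Claude Code feature, and any agent CLI or API could consume the resulting string.

```python
def orchestration_planning_prompt(task_description, available_agents):
    """Build a planning-phase prompt that asks the agent to recommend
    an orchestration strategy before any work starts."""
    agents = "\n".join(f"- {a}" for a in available_agents)
    return (
        "Before we start, recommend an orchestration strategy for this task.\n"
        f"Task: {task_description}\n"
        f"Available agents/tools:\n{agents}\n"
        "Answer with: (1) single agent vs. multi-agent, (2) how to split "
        "the work, (3) what context each agent should and should not see."
    )

prompt = orchestration_planning_prompt(
    "Migrate the billing service from REST to gRPC",
    ["code-searcher", "test-runner", "doc-writer"],
)
```

The agent's answer then becomes the plan you orchestrate against, rather than a structure you guessed at up front.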
