ProST: Progressive Sub-task Training for Pareto-Optimal Multi-agent Systems Using Small Language Models
This study of progressive sub-task training for multi-agent systems built on small language models (SLMs) examines how such systems compare with their larger counterparts in effectiveness and efficiency. It finds that SLMs struggle with long-trajectory learning, which prevents them from learning all subtasks of a complex task at once. To address this, the researchers propose ProST, a training strategy that introduces new subtasks progressively during fine-tuning, in the spirit of instance-level curriculum learning, and show that it consistently improves multi-agent system performance across configurations. A Pareto analysis further indicates that fine-tuned SLM-based multi-agent systems offer superior effectiveness-efficiency trade-offs, positioning them as viable alternatives for addressing complex problems in environments like AppWorld. These findings underscore the potential of SLMs when paired with suitable training strategies, paving the way for advancement…
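
To make the progressive-scheduling idea concrete, here is a minimal sketch of an instance-level curriculum that unlocks one additional subtask per training stage. The Trajectory structure, subtask names, and stage length are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trajectory:
    """One training instance: an ordered list of (subtask_name, turns) pairs."""
    subtasks: List[Tuple[str, list]]

def num_active_subtasks(step: int, total: int, steps_per_stage: int) -> int:
    # Instance-level curriculum: start with only the first subtask and
    # unlock one more every `steps_per_stage` optimizer steps.
    return min(total, 1 + step // steps_per_stage)

def curriculum_view(traj: Trajectory, step: int, steps_per_stage: int):
    # Train on only the unlocked prefix of each trajectory, so the model
    # never has to fit the full long trajectory early in training.
    k = num_active_subtasks(step, len(traj.subtasks), steps_per_stage)
    return traj.subtasks[:k]

# Example: a hypothetical three-subtask, AppWorld-style trajectory.
traj = Trajectory(subtasks=[("plan", ["..."]), ("call_api", ["..."]), ("report", ["..."])])
print(curriculum_view(traj, step=0, steps_per_stage=100))    # first subtask only
print(curriculum_view(traj, step=250, steps_per_stage=100))  # all three unlocked
```

The point of the prefix-based schedule is that early optimizer steps see only short trajectories, sidestepping the long-trajectory learning problem the study identifies, while later stages gradually expose the model to the full task.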
— via World Pulse Now AI Editorial System
