high-level task instructions, coordinating diverse robot capabilities, and validating task outcomes. Traditional logic-based and learning-based approaches often fall short in dynamic and ambiguous environments. Large Language Models (LLMs) offer a promising solution
by leveraging their advanced reasoning, contextual understanding, and adaptability to handle complex task dependencies and interpret
multimodal inputs. This paper introduces an LLM-driven framework for station identification, task scheduling, and object pick-up validation. The proposed method achieved a task scheduling accuracy of 80%, while object pick-up validation using few-shot prompting
demonstrated reliable performance. These findings highlight the potential of LLMs to improve coordination, adaptability, and reliability
in multi-robot systems, paving the way for scalable and intelligent automation solutions.
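As an illustration of the few-shot prompting approach mentioned above, the sketch below assembles a validation prompt from labeled example observations. The example wording, labels, and function names are assumptions for illustration and are not taken from the paper's actual prompts; a real system would send the resulting prompt to an LLM and parse its verdict.

```python
# Hypothetical sketch of few-shot object pick-up validation via prompting.
# The observations, labels, and prompt wording below are illustrative
# assumptions, not the paper's actual prompts.

FEW_SHOT_EXAMPLES = [
    ("Gripper closed; object detected between fingers; lift succeeded.",
     "SUCCESS"),
    ("Gripper closed on empty air; object still on table.",
     "FAILURE"),
]

def build_validation_prompt(observation: str) -> str:
    """Assemble a few-shot prompt asking an LLM to judge a pick-up attempt."""
    lines = ["Decide whether each robot pick-up attempt succeeded.", ""]
    for obs, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Observation: {obs}")
        lines.append(f"Verdict: {label}")
        lines.append("")
    # Leave the final verdict blank for the model to complete.
    lines.append(f"Observation: {observation}")
    lines.append("Verdict:")
    return "\n".join(lines)

prompt = build_validation_prompt(
    "Object secured in gripper; camera confirms grasp.")
print(prompt)
```

In practice the completed verdict would be parsed from the LLM's response and fed back to the scheduler, so a failed pick-up can trigger a retry or reassignment.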