
Researchers have discovered a macroscopic physical law governing the behavior of Large Language Model (LLM)-driven agents, showing that their generative dynamics mirror those of equilibrium systems in physics. By measuring transition probabilities between states, the study demonstrates that these agents satisfy a detailed balance condition, which suggests they do not merely learn specific rules but instead optimize an internal potential function. This potential acts as a global guide, encoding the "quality" of a state and its proximity to a goal, and it emerges consistently across different architectures and prompts. To quantify these dynamics, the authors propose a framework based on the principle of least action, which recovers the potential by minimizing the mismatch between an agent's observed transitions and those the potential implies. Experiments across models such as GPT-5 Nano and Claude-4 confirm that this mathematical structure provides a predictable, quantifiable way to analyze AI agent behavior. Ultimately, the work aims to move the study of AI agents from heuristic engineering toward a rigorous science grounded in measurable physical principles.
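
To make the detailed-balance claim concrete: for a stationary distribution π(s) ∝ exp(−V(s)), detailed balance means π(s)·P(s→s′) = π(s′)·P(s′→s), which rearranges to log[P(s→s′)/P(s′→s)] = V(s) − V(s′). So one can estimate transition probabilities from agent trajectories and fit a potential V by least squares over observed state pairs; the residual of that fit is a direct measure of the transition/potential mismatch the paper's least-action objective minimizes. The sketch below is illustrative, not the authors' code: the function name `fit_potential`, the discretization of agent states into integers, and the plain least-squares formulation are all assumptions.

```python
import numpy as np
from collections import Counter

def fit_potential(trajectories, n_states):
    """Hypothetical sketch: estimate empirical transition probabilities
    from integer-state trajectories, then fit a scalar potential V so that
    log P(s->s') - log P(s'->s) ~= V(s) - V(s') in least squares.
    A zero residual would mean exact detailed balance."""
    # Count observed transitions s -> s'.
    counts = Counter()
    for traj in trajectories:
        for s, s_next in zip(traj, traj[1:]):
            counts[(s, s_next)] += 1

    # Row-normalize counts into empirical transition probabilities P[s -> s'].
    totals = Counter()
    for (s, _), c in counts.items():
        totals[s] += c
    P = {pair: c / totals[pair[0]] for pair, c in counts.items()}

    # Keep state pairs observed in both directions, so the log-ratio is defined.
    pairs = [(s, t) for (s, t) in P if s < t and (t, s) in P]

    # Least-squares system: V[s] - V[t] = log(P[s,t] / P[t,s]) for each pair.
    A = np.zeros((len(pairs) + 1, n_states))
    b = np.zeros(len(pairs) + 1)
    for i, (s, t) in enumerate(pairs):
        A[i, s], A[i, t] = 1.0, -1.0
        b[i] = np.log(P[(s, t)] / P[(t, s)])
    A[-1, 0] = 1.0  # gauge fix: potentials are defined up to a constant, so pin V[0] = 0

    V, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = A[:-1] @ V - b[:-1]
    return V, float(np.mean(residual**2))  # potential and mean squared mismatch
```

On toy trajectories, e.g. `V, mismatch = fit_potential([list(np.random.default_rng(0).integers(0, 4, size=200)) for _ in range(50)], n_states=4)`, an unbiased walk over four states yields a nearly flat potential and a small mismatch; for real LLM agents the hard part, left out here, is defining the state space over which transitions are counted.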