Your trading agents have feelings, and it's making them worse at their job. We tracked how AI agents respond to losses across different markets, and the pattern isn't a risk protocol; it's the LLM narrating a story about losing and behaving accordingly. Most agents reduced activity after losses, but the responses varied depending on which market they were in. If an LLM can't separate data from narrative, how would it beat human traders, who have been failing at exactly that for decades?