Tier 3 Context: Promise and Challenge of AI in Negotiations

Posted on 01/01/2026 by Jutta Portner in: Negotiation
AI is already inside our negotiations, whether we acknowledge it or not.
People draft emails with AI, summarize contracts with AI, and even ask AI how to respond to tough stakeholders. Large language models (LLMs) are powerful pattern machines: they generate fluent text, structure messy information, and help us think faster. Used well, they can dramatically reduce preparation time and make complex situations easier to grasp.

But there’s a catch. LLMs don’t “understand” your business, your context, or your risks in the same way you do. They predict plausible sentences based on training data. That’s their strength and their weakness.
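To make the “pattern machine” idea concrete, here is a deliberately crude, purely illustrative sketch in Python (a toy bigram counter, nowhere near how real LLMs are built; the mini corpus and all names are invented for illustration):

from collections import Counter, defaultdict

# A toy "training corpus" standing in for the web-scale data real models see.
corpus = ("we accept the offer . we reject the offer . "
          "we accept the terms . we review the terms .").split()

# Count which word follows which: the crudest possible pattern machine.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_plausible_next(word):
    # Returns the statistically most frequent next word, not the true one.
    return following[word].most_common(1)[0][0]

print(most_plausible_next("we"))   # 'accept' (seen twice, vs. once each for the rest)
print(most_plausible_next("the"))  # 'offer' or 'terms'; the tie falls to whichever came first

The toy model returns whatever followed most often in its data, whether or not it fits your situation. Scaled up by many orders of magnitude, that is the mechanism behind both the fluency and the confident errors discussed below.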

Three AI challenges matter most in negotiation contexts:

  1. Hallucinations or confabulations (confident nonsense). LLMs can produce output that sounds right but is simply wrong or fabricated. In low-risk scenarios this is annoying; in negotiation, legal, or compliance-heavy environments, it’s dangerous.
  2. Hidden bias and blind spots. These systems learn from existing data and inherit the distortions, biases, and gaps of that data. Without guardrails, they can normalize one-sided perspectives or reinforce unhelpful patterns in how we speak and decide.
  3. Illusion of competence. Because AI answers so fluently, it’s easy to confuse good form with good judgment. We start trusting wording as if it were wisdom. We confuse apparent plausibility with truth. That’s where over-reliance creeps in.

For more complex tasks, attention decay and the lack of metacognitive abilities in current models can also be more than a nuisance. Researchers like Emily Bender and colleagues* describe LLMs as “stochastic parrots”: systems that stitch together likely word sequences without genuine understanding. There’s a lot that can be discussed and disputed about that notion, including whether “understanding” could be an emergent property of complex statistical correlation or something qualitatively different. But their point isn’t that AI is useless; it’s that we must be honest about what it can and cannot do. And let’s be fair: getting the facts wrong, showing bias, and pretending competence are nothing we haven’t seen from humans before :-)

For negotiation leaders, the core design question for Tier 3 is the same one that drives any good governance: How do we leverage AI’s strengths (speed, structure, patterning) while containing its weaknesses?

In the next post, we’ll show how our Tier 3 approach to AI is built precisely to address this: keeping humans responsible and using AI as a disciplined accelerator, not an autopilot. We’ll focus on how processes create safe and effective AI acceleration.

To be continued on January 8, 2026.

*) Bender, E. M., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of FAccT’21. 
