AI and organizations · human-AI interaction · human capital · AI-mediated markets · negotiation
AI tools raise worker productivity, but sustained use can erode the expertise on which those gains depend. We develop a dynamic model that decomposes AI's productivity effect into a skill-neutral component and one that scales with worker expertise. Even a fully informed decision-maker rationally adopts AI when front-loaded gains outweigh long-run skill costs, producing steady-state skill loss. When managers are short-termist or worker skill has external value, optimal policy turns this into the augmentation trap, leaving the worker worse off than if AI had never been adopted.
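The adoption trade-off can be illustrated with a toy simulation. All functional forms and parameters below are assumptions for exposition, not the paper's model: output combines a skill-neutral AI gain, a skill-scaled AI gain, and the worker's own skill, while skill grows with practice and decays under delegation to AI.

```python
# Toy sketch of the adoption trade-off (illustrative assumptions only,
# not the paper's model or parameters).
def simulate(adopt, T=50, delta=0.95):
    """Discounted output over T periods; adopt=1.0 means full AI use."""
    S, total = 1.0, 0.0
    for t in range(T):
        ai_gain = adopt * (0.5 + 0.3 * S)   # skill-neutral + skill-scaled AI effect
        y = S + ai_gain                     # per-period output
        total += (delta ** t) * y
        # practice builds skill; delegating to AI erodes it
        S = max(0.0, S + 0.05 * (1 - adopt) - 0.04 * adopt * S)
    return total, S

# Heavy discounting (short-termism): front-loaded gains dominate, AI wins.
npv_ai_short, _ = simulate(adopt=1.0, delta=0.6)
npv_no_short, _ = simulate(adopt=0.0, delta=0.6)

# Patient discounting: long-run skill costs dominate, non-adoption wins.
npv_ai_long, skill_ai = simulate(adopt=1.0, delta=0.95)
npv_no_long, skill_no = simulate(adopt=0.0, delta=0.95)
```

Under these toy parameters, adoption is rational for an impatient decision-maker even though terminal skill is strictly lower, mirroring the front-loaded-gains logic in the abstract.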
We simulate 3,000 negotiations between pairs of LLM-based agents across nine theory-grounded bargaining approaches plus an unprompted baseline, and analyze the resulting payoff matrix as a game. A variance decomposition attributes 69–81% of between-pairing payoff variance to interaction effects. Every Nash equilibrium is competitive, capturing only 27% of the available value versus 64% for the jointly optimal pairing. Outcomes depend overwhelmingly on the pairing of approaches rather than on either side's individual choice, establishing the negotiator's dilemma among AI agents.
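The gap between equilibrium and jointly optimal pairings can be sketched with a brute-force pure-strategy Nash search on a small payoff matrix. The three approach names and all payoff values below are made up for illustration, not the paper's data; the matrix is built so that a competitive approach is a dominant best response even though mutual cooperation maximizes joint value.

```python
import itertools

# Hypothetical symmetric game over three bargaining approaches
# (toy values, not the paper's results).
approaches = ["cooperative", "competitive", "baseline"]
# payoff[i][j] = row player's payoff when row plays i against column j;
# by symmetry the column player then earns payoff[j][i].
payoff = [
    [8, 1, 5],
    [9, 3, 6],
    [6, 2, 4],
]

def pure_nash(payoff):
    """All pure-strategy Nash equilibria: neither side gains by deviating."""
    n = len(payoff)
    eqs = []
    for i, j in itertools.product(range(n), repeat=2):
        row_best = all(payoff[i][j] >= payoff[k][j] for k in range(n))
        col_best = all(payoff[j][i] >= payoff[k][i] for k in range(n))
        if row_best and col_best:
            eqs.append((approaches[i], approaches[j]))
    return eqs

def joint_value(i, j):
    """Total value created by a pairing (both sides' payoffs)."""
    return payoff[i][j] + payoff[j][i]

eqs = pure_nash(payoff)
# In this toy matrix the unique equilibrium is the mutually competitive
# pairing, whose joint value (6) falls well short of the jointly optimal
# cooperative pairing (16).
```

The same pattern as in the abstract appears at toy scale: the equilibrium pairing is competitive and captures only a fraction of the value the best pairing would create.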