Adversarial training aims to defend against adversaries: malicious opponents that aim to harm predictive performance in any way possible. This strict perspective can result in overly conservative training. As an alternative, we propose modeling opponents as pursuing their own goals rather than working directly against the classifier. Employing tools from strategic modeling, our approach incorporates knowledge of the opponent's potential incentives as an inductive bias for learning. We introduce a method of strategic training designed to defend against all opponents within an `incentive uncertainty set'. This defaults to adversarial learning when the set is maximal, but offers potential gains when the set can be appropriately reduced.
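One way to formalize this objective, as a minimal sketch rather than the paper's own formulation (the symbols $\mathcal{U}$, $u$, and $\mathrm{BR}_u$ are assumed notation introduced here for illustration):

```latex
% Hedged sketch of a robust objective matching the abstract's description.
% Assumed notation (not from the source): \mathcal{U} is the incentive
% uncertainty set, u an opponent utility in that set, and BR_u(x; \theta)
% the opponent's best-response modification of input x under utility u.
\[
  \min_{\theta}\;
  \max_{u \in \mathcal{U}}\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\, \ell\big( f_\theta\big(\mathrm{BR}_u(x;\theta)\big),\, y \big) \,\Big]
\]
% If \mathcal{U} is maximal (it includes the purely adversarial utility
% that directly maximizes the loss), this recovers standard adversarial
% training; restricting \mathcal{U} encodes knowledge of the opponent's
% incentives as an inductive bias.
```

Under this reading, the outer minimization trains the classifier while the inner maximization ranges only over utilities the opponent could plausibly hold, which is where the potential gains over worst-case adversarial training would come from.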