Sagi Levanon, M.Sc. Thesis Seminar
Advisor: Prof. Nir Rosenfeld
Predictive machine learning tools are increasingly used to inform decisions about humans.
When human users stand to gain from certain predictive outcomes, they may be prone to act strategically to improve those outcomes.
We argue that in many realistic scenarios the system and its users are in fact aligned in their goals.
In this work, we give concrete real-world examples of such environments and demonstrate through a series of experiments that they are incentive-aligned.
Moreover, we propose a novel strategic loss that can be used to solve any strategic classification task, and we prove generalization bounds for it.
Lastly, we connect these two results by showing how incentive-aligned environments help generalization.
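To make the setting concrete, the sketch below illustrates the standard strategic classification model that this line of work builds on: an agent best-responds to a published linear classifier by moving its features, paying a cost for the move, and moving only when the gain from a positive prediction outweighs that cost. This is a generic illustration under assumed parameters (quadratic cost, a gain of 2 for a positive label), not the thesis's actual loss or experimental setup; all names here are hypothetical.

```python
import numpy as np

def best_response(x, w, b, cost=2.0, gain=2.0):
    """Feature vector an agent adopts against linear classifier (w, b).

    The agent gains `gain` from a positive prediction and pays
    `cost` per unit of squared movement toward the decision boundary.
    (Illustrative model; parameters are assumptions, not from the thesis.)
    """
    score = w @ x + b
    if score >= 0:                  # already classified positively: stay put
        return x
    # The cheapest path to the boundary is along the direction of w.
    dist = -score / np.linalg.norm(w)
    if cost * dist**2 <= gain:      # move only when the gain covers the cost
        return x + (dist + 1e-6) * w / np.linalg.norm(w)
    return x                        # too expensive: keep original features

w, b = np.array([1.0, 1.0]), -2.0
x = np.array([0.6, 0.6])            # scored negative before moving
x_new = best_response(x, w, b)
print(w @ x_new + b >= 0)           # the move flips the prediction
```

In an incentive-aligned environment, such moves also improve the agent's true label (e.g. genuinely paying down debt rather than gaming a credit score), which is the intuition behind alignment helping generalization.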