Authors: Rehill, Patrick; Biddle, Nicholas
Title: Policy Learning for Many Outcomes of Interest: Combining Optimal Policy Trees with Multi-objective Bayesian Optimisation
Year: 2024
ISSN: 0927-7099
DOI: 10.1007/s10614-024-10722-1
Scopus: http://www.scopus.com/inward/record.url?scp=85205554543&partnerID=8YFLogxK
Handle: https://hdl.handle.net/1885/733752523
Deposited: 2025-05-23
Language: en
Copyright: Publisher Copyright: © The Author(s) 2024.
Keywords: Data-driven decision making; Heterogeneous treatment effects; Multi-objective Bayesian optimisation; Optimal decision trees; Policy learning

Abstract: Methods for learning optimal policies use causal machine learning models to create human-interpretable rules for making choices about the allocation of different policy interventions. However, in realistic policy-making contexts, decision-makers often care about trade-offs between outcomes, not just single-mindedly maximising utility for one outcome. This paper proposes an approach termed Multi-Objective Policy Learning (MOPoL), which combines optimal decision trees for policy learning with a multi-objective Bayesian optimisation approach to explore the trade-offs between multiple outcomes. It does this by building a Pareto frontier of non-dominated models for different hyperparameter settings that govern outcome weighting. The method is applied to a real-world case study of pricing targeted subsidies for anti-malarial medication in Kenya.
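The core idea the abstract describes — scalarising several outcomes with a weighting hyperparameter, finding the best policy under each weighting, and keeping the non-dominated results as a Pareto frontier — can be sketched in miniature. The sketch below is illustrative only, not the paper's MOPoL implementation: it replaces optimal policy trees with hypothetical treatment-threshold policies, swaps Bayesian optimisation for a plain grid sweep over weights, and uses made-up outcome functions (`health_benefit`, `cost`) for a two-objective trade-off.

```python
# Illustrative sketch (NOT the paper's implementation): a Pareto frontier of
# policies from sweeping the weight that scalarises two outcomes.
# Candidate "policies" are hypothetical treatment thresholds; lowering the
# threshold treats more people, raising both health benefit and programme cost.

def outcomes(threshold):
    """Two hypothetical objectives for a policy, both to be maximised."""
    treated_fraction = max(0.0, min(1.0, 1.0 - threshold))
    health_benefit = treated_fraction ** 0.5   # diminishing returns to scale
    cost = treated_fraction                    # linear programme cost
    return health_benefit, -cost               # negate cost so "bigger is better"

def scalarised_best(weight, candidates):
    """Best policy under the weighted single-objective score."""
    return max(candidates,
               key=lambda t: weight * outcomes(t)[0]
                             + (1 - weight) * outcomes(t)[1])

def pareto_frontier(points):
    """Keep only points no other point weakly dominates on both objectives."""
    return sorted(set(
        p for p in points
        if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
    ))

candidates = [i / 10 for i in range(11)]   # treatment thresholds 0.0 .. 1.0
weights = [i / 10 for i in range(11)]      # grid of outcome weightings
points = [outcomes(scalarised_best(w, candidates)) for w in weights]
frontier = pareto_frontier(points)
```

Every point on `frontier` is a defensible choice under some outcome weighting; the decision-maker then picks among them according to the trade-off they are willing to accept, rather than committing to one weighting up front.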