Discrimination‑free Insurance Pricing with Privatized Sensitive Attributes
Fairness in machine learning is vital, especially as AI shapes decisions across sectors. In insurance pricing, fairness poses unique challenges due to regulatory demands for transparency and restrictions on using sensitive attributes such as gender or race. Traditional fairness methods may not align with these specific requirements. To address this, the authors propose a tailored approach for building fair insurance pricing models using only privatized versions of the sensitive attributes. Their method provides statistical guarantees, operates without direct access to the raw sensitive attributes, and adapts to varying transparency needs, balancing regulatory compliance with fairness in pricing.
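The abstract does not specify the privatization mechanism. As a minimal illustrative sketch only, and not the authors' actual method, the example below uses randomized response, a standard local differential privacy mechanism for binary attributes: each individual reports their true attribute with probability e^ε/(e^ε + 1) and flips it otherwise, and the collector debiases aggregate statistics afterward. All function names and parameters here are assumptions for illustration.

```python
import math
import random


def randomized_response(value: int, epsilon: float) -> int:
    """Privatize a binary attribute (0 or 1) via randomized response.

    With probability e^eps / (e^eps + 1) the true value is reported;
    otherwise it is flipped. This satisfies epsilon-local differential
    privacy, so the collector never learns the true attribute for sure.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return value if random.random() < p_truth else 1 - value


def debiased_proportion(reports: list[int], epsilon: float) -> float:
    """Unbiased estimate of the true proportion of 1s from privatized reports.

    If pi is the true proportion and p the truth-telling probability,
    the observed proportion is p*pi + (1 - p)*(1 - pi); inverting that
    linear relation recovers an unbiased estimate of pi.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

Group-level statistics recovered this way (e.g., outcome rates per protected group) are the kind of quantity a fairness adjustment could consume without the pricing model ever seeing raw sensitive attributes; the statistical guarantees mentioned in the abstract would then need to account for the extra noise the mechanism injects.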