This month, Dr. Ustun discussed how machine learning models personalized with sensitive features such as sex, age group, and HIV status can improve performance on average across a population while performing worse for specific groups, potentially causing harm. He proposed formal conditions for the fair use of group attributes and outlined practical methods, such as “personalization budgets” and “participatory systems,” to ensure fairness in personalized predictions. You can find a recording of his talk here.
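
As a rough illustration of the underlying concern (a sketch, not Dr. Ustun's proposed method), the snippet below uses synthetic data and scikit-learn to compute a per-group “gain” from personalization: train a generic model and one that also sees the group attribute, then compare accuracy within each group. A negative gain would flag a group that is harmed by reporting its attribute. The data, features, and model choice here are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic cohort: a binary group attribute shifts the label base rate.
n = 4000
group = rng.integers(0, 2, size=n)          # hypothetical sensitive attribute
x = rng.normal(size=(n, 3))                 # hypothetical clinical features
logits = x @ np.array([1.0, -1.0, 0.5]) + 1.5 * group - 0.75
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

train, test = slice(0, 3000), slice(3000, None)

# Generic model: trained on features only.
generic = LogisticRegression().fit(x[train], y[train])

# Personalized model: same features plus the group attribute.
xg = np.column_stack([x, group])
personalized = LogisticRegression().fit(xg[train], y[train])

# Per-group gain from personalization; a negative value means the
# personalized model performs worse for that group than the generic one.
for g in (0, 1):
    mask = group[test] == g
    acc_g = accuracy_score(y[test][mask], generic.predict(x[test][mask]))
    acc_p = accuracy_score(y[test][mask], personalized.predict(xg[test][mask]))
    print(f"group {g}: generic={acc_g:.3f} "
          f"personalized={acc_p:.3f} gain={acc_p - acc_g:+.3f}")
```

In this toy setup both groups happen to gain; the point of the check is that average improvement alone does not guarantee this, so the comparison must be made group by group.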