Artificial intelligence (AI) has the potential to transform biomedical and behavioral health research, as well as healthcare delivery. At the same time, AI, and particularly machine learning (ML), raises numerous ethical and societal concerns. Without an ethically robust set of guiding principles and corresponding practices, AI/ML could infringe upon personal rights (e.g., privacy), widen gaps in fairness and equity, and fan the flames of distrust. Particularly concerning is that such problems may be introduced unintentionally or unconsciously by the developers and users of these technologies. If AI is to be adopted in practice, such adverse consequences must be prevented proactively.
Compliance with current regulations and review practices, while essential, will not suffice to achieve trust, which requires willingness by the public and by users to rely on AI/ML. Moreover, if we permit AI/ML to influence life-changing decisions, we must account for its social, environmental, economic, technical, and legal implications. The overarching goal of the Ethics Core of the BRIDGE Center is to ensure that AI/ML is developed and applied in an ethical and trustworthy manner. In this respect, the core will help the Bridge2AI program become sustainable by grounding it more firmly in ethics and trustworthiness. To realize this vision, we will use a four-step iterative, reflexive cycle: 1) Scaffold, 2) Assess, 3) Facilitate, and 4) Evaluate and educate, or SAFE, which will provide a platform for convening, analysis and curation, outreach, and original multidisciplinary research.