Upcoming Events
CM4AI Graph Community Detection Challenge
The Opportunity and a Call to Action

We are excited to announce the launch of the CM4AI Graph Community Detection competition on Kaggle! Participating in this challenge will give you a unique opportunity to be part of groundbreaking advancements in biomedical research as part of the Cell Maps for AI (CM4AI) initiative.

Challenge Dates: May 14, 2025 – July 31, 2025

Join the Frontier of Biomedical AI Research!

The Bridge2AI Functional Genomics Grand Challenge (Cell Maps for AI/CM4AI) is pleased to announce our Kaggle competition focused on using the data and tools generated by CM4AI and leveraging emerging AI/ML methods, such as graph and quantum machine learning, to advance biomedical science and precision medicine.

Competition Overview

The goal of this competition is to develop methods that identify communities within biological networks, uncovering hidden structures and providing new insights into biological systems. By participating, you will help push the boundaries of AI/ML applications in the life sciences.

Why Participate?

- Shape the Future of Science: Successful approaches can redefine how we understand cellular systems, paving the way for innovative therapeutic strategies for cancer and other human diseases.
- Challenge Yourself: Engage with a cutting-edge problem that bridges the gap between AI/ML and molecular biology.
- Be Part of a Global Community: Collaborate and compete with experts, researchers, and enthusiasts from diverse fields, with mentorship from CM4AI investigators.

Key Details

- CM4AI Resources: https://cm4ai.org and https://youtube.com/@CM4AI
- Competition Link: https://www.kaggle.com/t/b25c9b18a199411892011bfb88680cf3
- Objective: Detect and identify communities from CM4AI SEC-MS data, contributing to the Cell Maps for AI initiative.

Who Should Join?

This competition is open to anyone passionate about artificial intelligence/machine learning, computational biology, or biomedical research. Whether you're a seasoned expert or an enthusiastic beginner, your contributions can help drive the next wave of discoveries. Don't miss this opportunity to be part of a transformative journey at the intersection of AI and molecular biology!

Join the Challenge Now
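For readers new to the problem, community detection means partitioning a network into groups of nodes that are more densely connected to each other than to the rest of the graph; in a biological interaction network, such groups often correspond to complexes or pathways. The following is a minimal, self-contained sketch of the idea on a made-up toy graph: it drops "weak" edges whose endpoints share no common neighbors and takes connected components. It is not the CM4AI data or a recommended method, and competitive entries will need far more sophisticated approaches:

```python
# Toy sketch of community detection: drop "weak" edges whose endpoints
# share no common neighbors, then take connected components.
# Illustrative only: node names and edges are made up, not CM4AI data.

edges = [
    ("A", "B"), ("B", "C"), ("A", "C"),   # first tight group
    ("D", "E"), ("E", "F"), ("D", "F"),   # second tight group
    ("C", "D"),                           # weak bridge between groups
]

# Build an adjacency list.
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# Keep only edges embedded in at least one triangle
# (i.e., whose endpoints share a common neighbor).
strong = [(u, v) for u, v in edges if adj[u] & adj[v]]

# Connected components over the strong edges are the communities.
comp_adj = {n: set() for n in adj}
for u, v in strong:
    comp_adj[u].add(v)
    comp_adj[v].add(u)

def components(graph):
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(sorted(map(sorted, components(comp_adj))))
# → [['A', 'B', 'C'], ['D', 'E', 'F']]
```

The weak bridge C–D is removed because C and D have no neighbor in common, so the two tight triangles fall out as separate communities.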
MICCAI 2025 Multi-Camera Robust Diagnosis of Fundus Diseases (MuCaRD) Challenge
Introduction

Fundus imaging is an indispensable tool in primary care for the early detection of major ophthalmic diseases, such as diabetic retinopathy and glaucoma, and for guiding treatment decisions. By noninvasively visualizing the retinal vasculature and subtle changes at the optic nerve head, fundus exams also serve as indicators of systemic health, making them a first line of patient management. With the widespread adoption of high-resolution, digital camera-based fundus imaging, a variety of imaging modalities have rapidly entered clinical practice.

Recently, deep-learning-based models for classifying fundus diseases have demonstrated high sensitivity and specificity and have proven their clinical utility through integration into numerous Software as a Medical Device (SaMD) products. For example, automated diabetic retinopathy screening systems and glaucoma-progression monitoring tools are already commercially available, contributing broadly to diagnostic support and patient screening. However, most models are trained and validated on data from a single camera type, which limits their performance on images from new or infrequently used devices.

To overcome these practical constraints, this challenge aims to develop AI models that deliver consistent diagnostic performance across diverse camera environments. Through the Multi-Camera Robust Diagnosis of Fundus Diseases (MuCaRD) challenge, we will evaluate both robust classification algorithms that generalize to unseen devices and adaptive learning techniques that can quickly fine-tune using only a few sample images from a new camera.

Challenge Description

Overview: The MuCaRD challenge addresses a critical gap in AI-driven fundus screening: ensuring consistent performance across both familiar and unseen camera systems.
Participants will develop and benchmark models under realistic constraints: training on a limited set of images from one device, then evaluating robustness and adaptability on entirely new devices. By simulating clinical and commercial deployment scenarios, MuCaRD promotes methods that generalize beyond a single data source and can quickly fine-tune to novel imaging hardware.

Tasks:

Task 1: Zero-Shot Classification. Train on fundus images from a single camera and evaluate on completely unseen devices. Participants build two separate binary classifiers (glaucoma vs. normal, and referable DR vs. non-referable DR), submitting full model code and weights to the CodaLab platform. A hidden validation set (200 images each from Optomed Aurora, Mediworks FC162, Optos Ultra Wide, and Canon CR2) and a similarly sized test set ensure no data leakage.

Task 2: Few-Shot Test-Time Adaptation. Extends Task 1 by leveraging a small support set (5 labeled images per new camera: 1 positive, 4 negative) provided online during the validation and test phases. Models should demonstrate on-the-fly adaptation within a 10 s/image inference limit, showcasing both robustness and efficient fine-tuning.

Datasets:

- AI-READI dataset: A rigorously curated set of high-resolution color fundus images acquired on Optomed Aurora and Eidon cameras across three Bridge2AI partner sites (UAB, UCSD, UW). Images span all four type 2 diabetes severity categories and include expert-verified annotations for diabetic retinopathy stage, image quality scores, and linked clinical metadata (age, sex, HbA1c, blood pressure, comorbidities). This cohort is optimized to benchmark zero-shot model performance and cross-device generalization.
- Mediwhale Collection: Training images from a single CR2 camera and test images from 5 different cameras.

Evaluation & Metrics:

Performance is measured by the average of the Area Under the ROC Curve (AUROC) and the Area Under the Precision–Recall Curve (AUPRC) for each disease. To mirror clinical feasibility, all inference and adaptation steps must complete within 10 seconds per image, although this limit does not directly penalize the score. Submissions are limited to two runs per day during validation to curtail leaderboard overfitting.

Important Dates

- Training Release: June 30, 2025
- Validation Submission: June 30 – August 15, 2025
- Test Submission: August 15 – August 23, 2025
- Winner Announcement: August 30, 2025
- Workshop: October 6–10, 2025

Awards

Certificates will be presented to the top three teams in each task. The first and corresponding authors of the winning teams will be invited to co-author the challenge summary paper and to present their results at the workshop.

Contact

For inquiries, please email: g.young@mediwhale.com
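For reference, the two ranking metrics behind the scoring rule can be computed in plain Python. The labels and scores below are invented, and the organizers' official evaluation code may differ in details such as tie handling and interpolation, so treat this only as a sketch of the definitions:

```python
# Sketch of the per-disease score: the mean of AUROC and AUPRC.
# The labels/scores are made up; the challenge's exact evaluation
# implementation is not reproduced here.

def auroc(labels, scores):
    """Mann-Whitney formulation: probability that a random positive
    is scored above a random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auprc(labels, scores):
    """Average precision: precision accumulated at each recall step,
    a common estimator of the area under the precision-recall curve."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = 0.0
    for i in order:
        if labels[i] == 1:
            tp += 1
            ap += tp / (tp + fp)   # precision at this recall step
        else:
            fp += 1
    return ap / sum(labels)

labels = [1, 0, 1, 1, 0, 0]              # hypothetical ground truth
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]  # hypothetical model outputs

score = 0.5 * (auroc(labels, scores) + auprc(labels, scores))
print(f"AUROC={auroc(labels, scores):.3f}  "
      f"AUPRC={auprc(labels, scores):.3f}  mean={score:.3f}")
# → AUROC=0.778  AUPRC=0.806  mean=0.792
```

Both metrics are threshold-free rankings of the model's scores, which is why the 10 s/image limit can be enforced separately without entering the score itself.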
B2AI Discussion Forum on Emerging ELSI Issues: “The Pulse of Ethical Machine Learning in Health” by Marzyeh Ghassemi, Ph.D.
Please join us on Tuesday, July 15th, 2025, from 12pm-1pm PST / 3pm-4pm EST for the discussion forum “The Pulse of Ethical Machine Learning in Health” by Dr. Marzyeh Ghassemi. Registration is not required! Additional details are in the attached documents and the message below.

Bio: Dr. Marzyeh Ghassemi is an Associate Professor at MIT in Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering & Science (IMES). She holds MIT affiliations with the Jameel Clinic, LIDS, IDSS, and CSAIL. For examples of short- and long-form talks Professor Ghassemi has given, see her Forbes lightning talk and her ICML keynote. Professor Ghassemi holds a Germeshausen Career Development Professorship, and was named a CIFAR Azrieli Global Scholar and one of MIT Tech Review’s 35 Innovators Under 35. In 2024, she received an NSF CAREER award and a Google Research Scholar Award. Prior to her PhD in Computer Science at MIT, she received an MSc degree in biomedical engineering from Oxford University as a Marshall Scholar, and B.S. degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University. Professor Ghassemi’s work spans computer science and clinical venues, including NeurIPS, KDD, AAAI, MLHC, JAMIA, JMIR, JMLR, AMIA-CRI, Nature Medicine, Nature Translational Psychiatry, and Critical Care. Her work has been featured in popular press such as MIT News, The Boston Globe, and The Huffington Post.