Talks and Bios
Health disparities in the United States are among the largest factors reducing the health of the population. Disparities mean that some groups have lower life expectancy, die at higher rates from COVID-19, and use mental health services less, to name just a few examples. The future of medicine will be built on Artificial Intelligence and new technological platforms that promise to improve outcomes and reduce costs. Our role as AI researchers should be to ensure that these new technologies also reduce health disparities. In this talk I will describe recent work showing how we can reduce health disparities in the future of medicine. By making our tasks, datasets, algorithms, and evaluations equitable and representative of all types of patients, we can ensure that the research we develop reduces health disparities.
Bio: Mark Dredze is the John C. Malone Associate Professor of Computer Science at Johns Hopkins University. He is affiliated with the Malone Center for Engineering in Healthcare and the Center for Language and Speech Processing, among others. He holds a secondary appointment in the Department of Health Sciences Informatics in the School of Medicine. He obtained his PhD from the University of Pennsylvania in 2009. Prof. Dredze’s research develops statistical models of language with applications to social media analysis, public health, and clinical informatics. Within Natural Language Processing he focuses on statistical methods for information extraction, but has considered a wide range of NLP tasks, including syntax, semantics, sentiment, and spoken language processing. His work in public health includes tobacco control, vaccination, infectious disease surveillance, mental health, drug use, and gun violence prevention. He also develops new methods for clinical NLP on medical records.
Bio: Noémie Elhadad is an Associate Professor of Biomedical Informatics, affiliated with Computer Science and the Data Science Institute at Columbia University, and serves as Vice Chair for Research in Biomedical Informatics. Her research is at the intersection of technology, machine learning, and medicine.
The year 2020 has brought into focus a second pandemic of social injustice and systemic bias, with disproportionate deaths observed for minority patients infected with COVID-19. As we observe increasing development and adoption of AI for medical care, we note variable performance of models when tested on previously unseen datasets, as well as bias when outcome proxies such as healthcare costs are used. Despite the progressive maturity of AI development, with increased availability of large open-source datasets and regulatory guidelines, operationalizing fairness is difficult and remains largely unexplored. In this talk, we review the background and context for FAIR and UNFAIR sequelae of AI algorithms in healthcare, describe practical approaches to FAIR Medical AI, and issue a grand challenge with open, unanswered questions.
Bio: Dr. Gichoya is a multidisciplinary researcher, trained as both an informatician and a clinically active radiologist. She is an assistant professor at Emory University, where she works in Interventional Radiology and Informatics. She has been funded through Grand Challenges Canada, NIBIB, and NSF ECCS. Her career focus is on validating machine learning models for health in real clinical settings, exploring explainability and fairness, with a specific focus on how algorithms fail. She has worked on the curation of datasets for the SIIM (Society for Imaging Informatics in Medicine) hackathon and ML committee. She volunteers on the ACR and RSNA machine learning committees to support the AI ecosystem and advance the development and use of AI in medicine. She is currently working on the sociotechnical context of AI explainability for radiology, especially the dimensions of human factors that govern user perceptions and preferences of XAI systems.
Ziad Obermeyer is the Blue Cross of California Distinguished Professor of Health Policy and Management in the School of Public Health at UC Berkeley, where he does research at the intersection of machine learning, medicine, and health policy. He was named an Emerging Leader by the National Academy of Medicine, and has received numerous awards including the Early Independence Award -- the National Institutes of Health’s most prestigious award for exceptional junior scientists -- and the Young Investigator Award from the Society for Academic Emergency Medicine. Previously, he was an Assistant Professor at Harvard Medical School. He continues to practice emergency medicine in underserved communities.
Recent advances in training deep learning algorithms have demonstrated potential to accommodate the complex variations present in medical data. In this talk, I will describe technical advances and challenges in the development and clinical application of deep learning algorithms designed to interpret medical images. I will also describe advances and current challenges in deploying medical imaging deep learning algorithms into practice. This talk presents work done jointly with Matt Lungren, Curt Langlotz, Nigam Shah, and several other collaborators.
Bio: Andrew Ng is Founder of DeepLearning.AI, Founder and CEO of Landing AI, General Partner at AI Fund, Chairman and Co-Founder of Coursera, and an Adjunct Professor at Stanford University. As a pioneer both in machine learning and online education, Dr. Ng has changed countless lives through his work and research in the field of artificial intelligence.