The 2023/2024 virtual Rothenberg Speaker Series, in its fourth year, focused on a cutting-edge issue facing medical providers, the medical system, and the health lawyers, legal scholars, and policymakers who work in this space. By bringing in some of the nation's leading thinkers on emerging, truly pioneering areas of health law, the series offered an excellent education and conversation on the underlying legal and ethical principles facing society as it moves into a brave new world of artificial intelligence (AI) as applied to human health.
In the first speaker series event of the year, the Law & Health Care Program welcomed one of the nation’s leading health law scholars, I. Glenn Cohen, Deputy Dean and James A. Attwood and Leslie Williams Professor at Harvard Law School and Director of the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics. Professor Cohen kicked off the 2023/2024 series in September 2023 with an overview of the regulatory and ethical challenges in medical AI. His presentation offered a useful framing for the discussions throughout the year. He covered topics including the use cases for artificial intelligence in medicine, the ethics of building and implementing predictive analytics, the liability regime for the many entities and individuals who may determine if and how to use such AI, and data privacy and consent concerns. He also addressed concerns about bias in medical AI. Professor Cohen’s riveting talk set the stage for the remaining speakers in the series, who dove into novel subjects at the intersection of AI and healthcare.
In November 2023, Nita Farahany, the Robinson O. Everett Distinguished Professor of Law & Philosophy at Duke Law School, joined us to speak about the battle for our brains. Professor Farahany is one of the leading scholars on the ethical, legal, and social implications of emerging neurotechnologies. Her presentation covered some of the highlights of her well-received 2023 book, The Battle for Your Brain: Defending Your Right to Think Freely in the Age of Neurotechnology. Professor Farahany discussed the need to establish a right to “cognitive liberty.” Noting that recent advances in both generative AI and neural interface technology “are now ushering in an era of brain transparency” affecting “everyone from individuals, marketers, governments, and even employers,” she cited examples of misuses of such technology by employers and governments. Nonetheless, she noted that such technology has “incredible hopeful potential to transform” how we treat neurological disease, mental illness, drug use disorders, and the resultant suffering, observing that “we might even change our brains for the better… [some] breakthroughs could fundamentally change the human experience.” Importantly, Professor Farahany asserted that these “hopeful possibilities really can only be realized if we can confidently share our brain data without fear that it will be misused, which is why we can't go into this new era naive about the challenges or complacent about the risk that the collection and the sharing of that brain data will pose.” Her suggested pathway forward is to update our understanding of individual liberties to include “the right to cognitive liberty: the right to self-determination over our brains and mental experiences. … Cognitive liberty would both protect us from interference by others and give us a right to self-determination over our brains and mental experiences.” Following her talk, there was a robust question-and-answer session touching on regulatory controls of brain data, the impact such technology may have in a criminal law context, and possible legal safeguards against mental manipulation, such as those included in the EU’s AI Act.
In January 2024, we were joined by Vardit Ravitsky, President and CEO of The Hastings Center and Professor of Bioethics at the University of Montreal and Harvard Medical School. Dr. Ravitsky spoke about “Constructing an Ethics Framework for AI in Biomedical Research.” Her talk focused on her work with the NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) Consortium. Bridge2AI aims to propel biomedical research forward by setting the stage for widespread adoption of AI that tackles complex biomedical challenges beyond human intuition. Focusing on “the way that we integrate ethical bioethical considerations into biomedical research,” and the knowledge that “AI is now absolutely everywhere,” she noted that “governance structures are struggling to keep up with the technological pace of innovation.” “In that context, it's not a surprise that AI is being integrated into health, very quickly, into healthcare, into the management of healthcare institutions and also into biomedical research.” Dr. Ravitsky discussed both individual-level and systemic-level concerns regarding biomedical research in this area. Individual-level concerns include informed consent, the ethics of knowing when we are dealing with AI, and “the black box concern”: the fact that many or all of the algorithms involved in health care and research are not easily understood by the public or by patients, affecting trust, uptake, and privacy. She also highlighted concerns at the systemic level, such as liability for any subpar care provided with or by AI tools, and the rapid pace of development, which poses a huge challenge for research, governance, and implementation. Other concerns include data diversity and the reliability of data. One fascinating example she used was the use of voice as a biomarker of health. The human voice “has the potential to be one of the cheapest and least invasive biomarkers… a significant public health tool” with “immense clinical potential.” But such progress can be hampered by a lack of data diversity, for example, if such technology were unable to account for a variety of languages or accents. Bridge2AI is developing standards for this data collection while simultaneously collecting samples, conducting the research to develop tools, and disseminating the resulting knowledge. She concluded that while some have compared this project to building a plane in the air, it can also be nimble and responsive to the legal and ethical concerns that arise.
Finally, on February 29, 2024, we closed out the series with a fascinating talk on “Three Narratives of Mental Health Chatbots: Salvation, Deception, and Harm Reduction” by former Maryland Carey Law Professor Frank Pasquale, now Professor of Law at Cornell Law School and Cornell Tech. Professor Pasquale’s talk focused on one specific use of artificial intelligence in health care – mental health chatbots. Professor Pasquale noted the proliferation of mental health apps and discussed how both general wellness products and medical devices are regulated. He then discussed the three predominant narratives regarding technology: the salvation narrative, the deception narrative, and the harm reduction narrative. Using these narratives to frame potential regulation of artificial intelligence in mental health care, he noted that “policy and law are key.” When discussing accountability in this area, Professor Pasquale noted that the first wave of algorithmic accountability concerns will focus on whether mental health apps are safe and effective, and whether they adequately represent and respond to diverse communities. One potential structural safeguard is to ensure that most apps are developed as intelligence augmentation for responsible professionals rather than as artificial intelligence replacing them. He also shared that “second-wave critics may question whether apps are prematurely disrupting markets for (and the profession of) mental health care in order to accelerate the substitution of cheap software for more expensive, expert, and empathetic professionals.” Regulating in this area, Professor Pasquale asserted, will take a more coordinated “law & political economy” approach.
All of the 2023/2024 speakers brought a unique scholarly perspective to this cutting-edge area of law and health care technology. A list of all past events in this series, along with recordings, is available here. We look forward to announcing the upcoming 2024/2025 Rothenberg Speakers soon.