Conferences and Seminars November 2024

Global Health in the Age of AI: Charting a Course for Ethical Implementation and Societal Benefit

Island of San Giorgio Maggiore, © Francesca Occhi. Courtesy Fondazione Giorgio Cini

The Fondazione Giorgio Cini is hosting a three-day symposium, entitled “Global Health in the Age of AI”, assessing the societal implications of artificial intelligence (AI) in the healthcare sector, including issues related to accessibility, equity, and potential impacts on healthcare disparities.

 

Scientific Programme

The scientific programme, structured over three days, is developed by Prof. Luciano Floridi, Director of the Digital Ethics Center (DEC) at Yale University and Professor in the Department of Legal Studies at the University of Bologna. Professor Floridi is supported in this work by Dr. Jessica Morley and Ms. Renée Sirbu from Yale University, and two rapporteurs: Ms. Emmie Hine (Universities of Bologna and Yale) and Mr. Huw Roberts (University of Oxford).

 

Scientific Outputs

Papers authored by the conference participants will be disseminated as a series of open-access scientific publications.

 

 

Fellowships

The Fondazione Giorgio Cini is pleased to announce the availability of eight fully funded fellowships for young researchers interested in attending the three-day symposium “Global Health in the Age of AI: Charting a Course for Ethical Implementation and Societal Benefit,” to be held in Venice from November 7th to November 9th, 2024. The deadline for applications is September 15th, 2024.

The call for fellowships has closed.

 


 


Program

 

Thursday 7 November 2024

 

 

14:00 – 15:00 Opening Remarks.

 

15:00 – 15:30 Framing the debate.

 

Jessica Morley, Yale University, US.

 

 

15:30 – 17:00 The Ethics of AI in Healthcare.

Although there is a plethora of guidance for the ethical development and use of AI in general, specific sectors of application have seen less of this activity. In the domain of global health, the leading voice for the ethics of AI has been the World Health Organization (WHO), which has issued specific guidance on the ethics and governance of AI for health and, more recently, guidance on the development and use of large multimodal models in health. In this talk I will discuss the guidance proposed by the WHO, examining the six ethical principles that form the foundation of these documents, and situate it within the broader ecosystem of global governance. As the implementation of the WHO guidance is still ongoing, I will then focus on its impact on the development of AI ethics tools and on the revision of institutional approaches such as ethics review boards.

Keynote: Effy Vayena, Swiss Institute of Technology (ETH).

Respondent: Ravi Parikh, University of Pennsylvania, US.

Q&A

 

 

17:30 – 19:00 AI and global health equity: How can we move from promise to practice?

Many expect that AI will help to solve complex problems in medicine, whether by improving quality of care and access, improving diagnostic capacities, offering new, more targeted therapies, or reducing costs. In particular, AI has been promoted as a means of addressing persistent problems of health equity, including in low- and middle-income countries. Yet AI is not a silver bullet for medicine. The past decade has seen a growing push to address serious issues with AI in medicine, such as recurrent problems of algorithmic bias. While essential for reducing the risk that AI systems replicate and amplify existing forms of societal inequality and discrimination, a narrow focus on improving algorithmic performance misses the broader context required for AI to operate as designed. This talk explores the kinds of investments that are needed, particularly in low- and middle-income countries and in higher-income countries with significant health disparities, to ensure that AI systems promote health and equity. Without significant, foundational investments in the social, political, and infrastructural context necessary for AI, critical resources for health will be wasted, perhaps causing more harm than good.

Keynote: Amelia Fiske, University of Munich (TUM), Germany.

Respondent: Enrico Coiera, Australian Institute of Health Innovation, Australia.

Q&A

 

19:30 – 20:15 Panel: From Regulation to Standards and Implementation.

 

Sophie Van Baalen, Rathenau Instituut, Netherlands.

Alexandre Dias Porto Chiavegatto Filho, University of Sao Paulo, Brazil.

Federica Mandreoli, University of Modena and Reggio Emilia, Italy.

Moderator: Glenn Cohen, Harvard Law School, US.

 

20:15 – 20:30 Closing Remarks: Luciano Floridi, Yale University, US, and University of Bologna, Italy.

 

 

 

Friday 8 November 2024

 

 

9:20 – 9:30 Greetings from President Fondazione Giorgio Cini, Gianfelice Rocca.

Opening Remarks: Luciano Floridi, Yale University, US, and University of Bologna, Italy.

 

 

9:30 – 11:00 AI and Public Trust.

Trust has emerged as a central issue in the context of data-driven research and innovation, including AI. From trust in the institutions and companies that develop AI tools, to trust in the technology itself, there has been a lot of attention on how to secure, engender, and maintain trust. Trust is perceived as fundamental for the development and introduction of new data-based technologies, such as AI, but also for the acceptability of the conditions and infrastructures that would enable the development of these technologies. And yet, how to ensure or foster trust remains elusive. In this talk, I will engage with the issue of public trust in AI. Drawing from theoretical and empirical studies, I will examine questions such as: what does it mean for the public to trust AI? Is the concept of trust appropriate or relevant in this context, and what normative implications arise for those who seek public trust? I will close by offering some reflections on why the ‘deficit of trust’ in AI seems to persist, and even increase, despite efforts to ensure trustworthiness, and what could be done about it.

Keynote: Angeliki Kerasidou, University of Oxford, UK.

Respondent: Federica Mandreoli, University of Modena and Reggio Emilia, Italy.

Q&A

 

 

11:30 – 13:00 AI and the Social Determinants of Health.

Social determinants of health are the conditions of the places where we live, play, work, and gather. These include a wide range of factors, such as socioeconomic status, neighborhood and physical environment, healthcare access and quality, social support networks, education and literacy, employment conditions, food environment, cultural and social norms, early childhood experiences, social exclusion and discrimination, access to transportation, and stress and psychosocial factors. The importance of studying social determinants of health to create effective public health policies and healthcare interventions is well established. However, the relevant data are not always readily available. Advances in artificial intelligence and the availability of massive datasets generated from digital and remote sensing tools offer opportunities for capturing granular details on the conditions of the spaces people occupy. This talk will highlight how data from social media, mobile phones, street images, and satellite images can be used to study social determinants of health and the impact of policies that affect health.

Keynote: Elaine Nsoesie, Boston University School of Public Health, US.

Respondent: Kee Yuan Ngiam, National University Hospital, Singapore.

Q&A

 

 

14:30 – 16:00 The Challenges of Real-World Implementation – Turbocharging AI in Clinical Practice.

Leveraging large datasets and identifying complex underlying patterns in well-curated data allows technological advances in machine learning to offer products that enhance clinical accuracy, reduce health costs, improve efficiency, and save time and resources, whilst minimising human error. Whilst key applications include automated diagnostics, clinical decision support, and predictive and pre-emptive personalized medicine for whole populations, the current reality of adopted products falls within the diagnostic and descriptive domains.

The successful implementation of machine learning in these domains requires a structured approach grounded in implementation science and the TURBO framework – testable, useable, reliable, beneficial and operable platforms – adhering to national research ethics requirements, clinical and research guidelines such as STARD-AI and QUADAS-AI, local governance frameworks, national regulatory requirements, and thorough health-system research approvals. Utilizing the UK’s NHS as a case example, tangible solutions include developing clear guidelines for AI integration, conducting pilot studies to demonstrate efficacy, and establishing multidisciplinary teams to oversee implementation. Here, collaborations with industry will become increasingly visible as clinical AI advances are realised.

If AI is the ‘new electricity’, it will bring both foreseeable applications and unexpected innovations. It is crucial to identify which platforms can seamlessly integrate with existing clinical pathways and which ones will necessitate disrupting current care models to enable their optimal adoption for patient benefit.

Keynote: Hutan Ashrafian, Imperial College London, UK.

Respondent: Sara Gerke, College of Law, University of Illinois Urbana-Champaign, US.

Q&A

 

16:30 – 18:00 Open AI meets Open Notes: Generative AI and clinical documentation.

In this presentation, I discuss two innovations – patient online record access (‘ORA’) and generative AI tools – and the relationship between them. Reviewing the challenges and opportunities that ORA invites, I explore the range of ‘traditional’ solutions proposed to deal with these challenges. I then discuss how generative AI could assist clinicians with documentation, reviewing findings that clinicians are already deploying this tool and exploring current evidence of its effectiveness, including the benefits and risks of using generative AI for clinical documentation purposes.

Keynote: Charlotte Blease, Uppsala University, Sweden.

Respondent: Alexandre Dias Porto Chiavegatto Filho, University of Sao Paulo, Brazil.

Q&A

 

18:30 – 19:15 Panel: The Geopolitics of Global Health and AI.

 

Sandeep Reddy, Queensland University of Technology, Australia.
Jessica Morley, Digital Ethics Center, Yale University, US.
Tamara Sunbul, Johns Hopkins Aramco Healthcare, Saudi Arabia.

Moderator: Naomi Lee, BMJ Global Health.

 

19:15 – 19:30 Closing Remarks: Luciano Floridi, Yale University, US, and University of Bologna, Italy.

 

 

 

Saturday 9 November 2024

 

9:20 – 9:30 Opening Remarks: Luciano Floridi, Yale University, US, and University of Bologna, Italy.

 

 

9:30 – 11:00 Medical AI: Legal, Ethical, and Regulatory Considerations.

Policies to regulate AI with a focus on safety are unfolding. In the healthcare arena, there is movement towards holding health systems and providers accountable for AI-based discriminatory decisions. From an AI perspective, however, it will be extremely challenging to prove when an algorithm makes a mistake. For diagnosis, biopsies and autopsies are seldom performed, so errors are not easily discovered. For predictions, the counterfactual is not available, so it is impossible to prove that an algorithmic decision wronged a patient. Tracking errors made by AI sounds great in principle but cannot be operationalized given current data and modeling limitations. In addition, there is plenty of evidence that health systems have etched long-standing structural inequity into the way care is delivered, predating the adoption of any AI algorithm. It is odd that they will be held accountable for discrimination caused through their use of an algorithm but are not held accountable for other forms of discrimination. We have a long way to go.

11:30 – 13:00 Harmonizing regulation of AI in healthcare globally.

Integrating AI in healthcare raises concerns about safety, reliability, and ethical use, highlighting the urgent need for a harmonized global regulatory framework. The current regulatory landscape for AI in healthcare varies significantly across jurisdictions, with countries and regions adopting different approaches based on their specific needs and priorities. In this talk, I argue that while these efforts (e.g., the EU AI Act) are commendable, the lack of a unified global approach to AI regulation in healthcare can lead to inconsistencies, confusion, and potential risks for patients and healthcare providers. To address these challenges, fostering international collaboration and working towards a harmonized regulatory framework for AI in healthcare is crucial. This harmonization would ensure that AI systems adhere to consistent standards of safety, transparency, accountability, and fairness, regardless of where they are developed or deployed. By establishing a standard set of principles and guidelines, regulators can promote the responsible development and use of AI technologies in healthcare while facilitating cross-border collaboration and innovation. Moreover, global harmonization of AI regulation in healthcare would benefit all stakeholders, including regulators, healthcare providers, AI developers, and patients. Through collaborative efforts, these stakeholders can establish comprehensive and efficient regulatory frameworks that prioritize patient safety, privacy, and ethical considerations in the development and deployment of AI technologies. This collaborative approach would foster greater trust in AI-driven healthcare solutions, encouraging wider adoption and improving patient outcomes worldwide.

