Presentation

Trusted Research Environments for AI and Integrated Science
Description

We explore the multifaceted aspects of managing risk for research projects involving sensitive data and AI models that depend deeply on supercomputing infrastructures. This risk spectrum spans traditional technical cyber controls as well as policy and sociological (including human factors) risks. In the context of multi-facility, multi-institutional workflows such as IRI and the American Science Cloud (AmSC), our goal is to advance progress in developing secure and trustworthy infrastructures for AI and integrated science.

Three intertwined challenges emerge as we advance this vision for IRI and AmSC: technological, policy, and sociological. With the rise of AI and the increasing use of sensitive data for training models, our goal is to leverage this BoF to build a community of practice that will advance a secure and trusted research environment (TRE) addressing challenges in all three domains. How do we best achieve a TRE that is transparent, reproducible, ethical, secure, worthwhile, and collaborative, with clear data provenance and assurance? How might trust be rightfully earned and retained across modern workflows through managed risk and secure governance?

The intended outcomes of this BoF are to: (1) explore TRE challenges in the age of AI and science integration; (2) identify alignment and divergence in TRE practices; (3) learn from complementary efforts across institutions around the globe; and (4) build a community of practice committed to trustworthy integrated science in the age of AI. We invite an audience with interests in these topics to participate in advancing these outcomes.