The rise of Artificial Intelligence (AI), particularly generative AI, has captured widespread attention across many sectors. Companies like OpenAI, with products such as ChatGPT and DALL-E, have showcased the remarkable potential of AI to generate human-like text and diverse imagery. However, the application of AI extends far beyond these publicly known chatbots, with significant implications and opportunities emerging in specialized fields like healthcare and medical research.
At NYU Langone Health, we recognize the transformative power of generative AI and are committed to exploring its benefits responsibly and securely. This guide explains how to use generative AI within our institution, emphasizing the critical need for compliant clinical tools and strict adherence to legal and ethical standards.
The Paramount Importance of Legal Compliance in AI Healthcare Applications
In the healthcare domain, the use of any technology, especially AI, necessitates unwavering adherence to legal compliance and institutional policies. This is particularly crucial for generative AI, given its capacity to process and generate sensitive information. Therefore, it is imperative to understand and strictly observe the following guidelines when considering the use of AI tools in clinical and research settings at NYU Langone Health:
- Restriction on Public Generative AI in Clinical Documentation: Under no circumstances should publicly accessible generative AI applications be employed for creating clinical documentation. This explicitly includes notes intended for medical records or any form of patient-related correspondence. The risk to patient privacy and data security is too significant to permit the use of non-secure, public platforms for these sensitive tasks.
- Prohibition of Protected Health Information (PHI) in Public AI: Sharing Protected Health Information (PHI) with public generative AI applications is strictly forbidden. This prohibition applies even to de-identified PHI and extends to all other categories of legally protected information. The confidential nature of patient data is paramount, and its exposure to public AI platforms poses unacceptable legal and ethical risks.
- No Clinical or Research Data in Public AI Environments: Similarly, the utilization of public generative AI applications with clinical or human subjects research data is prohibited, regardless of de-identification efforts. Research data often contains sensitive participant information and intellectual property, demanding robust protection within secure, compliant systems.
- Confidential Business Information Safeguards: Refrain from disclosing any confidential business information to public generative AI applications. Institutional strategies, financial data, and proprietary processes must remain protected from unauthorized external access.
- Meeting and Activity Recording Restrictions: Public generative AI applications must not be permitted to record or upload recordings of internal meetings or any other non-public NYU Langone activities. Maintaining the confidentiality of internal discussions and operations is essential for institutional integrity and security.
- Independent Verification of AI-Generated Content: Do not rely on generative AI applications or their outputs for work-related tasks at NYU Langone Health without independent verification. Users are responsible for validating the accuracy and appropriateness of any AI-generated content before using it in clinical, research, or administrative contexts. AI tools are aids, not replacements for professional judgment and expertise.
Accessing Secure and Compliant AI Tools at NYU Langone Health
To meet the need for secure, compliant clinical tools, NYU Langone Health is providing access to a private generative AI environment. Access is being granted in phases to ensure optimal user experience and system performance. Prioritization is based on project type, favoring innovation and research initiatives, mentored explorations, and general exploration projects. Approved users will receive email notifications and must agree to a data use agreement and review detailed information about access protocols for our managed AI instance.
Guidelines and Policies for Responsible AI Usage
To ensure the responsible and compliant integration of AI into our healthcare and research practices, NYU Langone Health has established clear guidelines and usage policies:
- Utilize the Approved AI Tool: NYU Langone has designated a specific GPT platform as the approved compliant clinical tool. To gain access and ensure adherence to security and compliance standards, users must apply through the designated channel: How to Apply.
- Prioritize Data Security: Always obtain institutional permissions before introducing sensitive data into any AI programs. Data security is non-negotiable, and proactive measures are essential to safeguard patient information and institutional confidentiality.
- Avoid Unapproved Chatbots: To maintain robust data security and ensure data ownership, the use of non-approved chatbots for NYU Langone Health work is strongly discouraged. Unvetted platforms can introduce significant security vulnerabilities and compliance risks.
- Explore NYULH Generative AI Resources: To further understand how NYU Langone Health is strategically leveraging generative AI and to access additional resources, please visit: Learn How NYU Langone Health Uses Generative AI.
For any inquiries regarding AI at NYU Langone Health and the use of compliant clinical tools, please contact us at: [email protected]. We are dedicated to supporting the responsible and innovative adoption of AI to enhance healthcare delivery and research, while upholding the highest standards of compliance and ethical practice.