Artificial intelligence (AI) has gradually become one of the most used entry points into the healthcare system. According to OpenAI's January 2026 report, "AI as an Ally in Healthcare," more than 40 million people around the world use ChatGPT every day for health-related questions. That scale places AI alongside primary care, urgent care and telehealth as a first point of access to medical information.
Health prompts now make up over 5% of all messages sent to ChatGPT globally. Among the platform's roughly 800 million weekly users, approximately 200 million discuss health-related topics at least once a week.
Timing data reinforces this shift. OpenAI finds that approximately 70% of health-related conversations take place outside of traditional clinic hours, when access to clinicians is limited. In rural and underserved areas, users generate hundreds of thousands of healthcare-related messages each week, signaling that AI fills gaps where physical access to care remains limited.
Administrative complexity also drives adoption. OpenAI reported that approximately 1.6 to 1.9 million messages per week focus on health insurance, including plan selection, billing disputes and coverage questions. These requests often overwhelm provider offices and payer call centers, pushing consumers toward AI tools that provide immediate explanations and next-step guidance.
The report also highlights growing professional use. Sixty-six percent of U.S. doctors and nearly 50% of nurses reported using AI for at least one healthcare-related task, including documentation, information review and administrative support. That overlap between consumer and clinician use suggests AI is being incorporated throughout the healthcare workflow rather than remaining a standalone consumer tool.
Increasing comfort with AI for health
OpenAI's findings align with broader consumer behavior tracked by PYMNTS Intelligence, which shows AI becoming a starting point for everyday decisions. PYMNTS finds that more than 60% of U.S. consumers have used an AI platform in the past year, reflecting widespread adoption rather than early experimentation.
Most importantly, PYMNTS finds that AI increasingly acts as a first step rather than a complementary tool. A majority of frequent AI users reported starting tasks within AI platforms rather than within search engines or apps. This behavior spans learning, planning, financial tasks and health-related matters.
Younger users are accelerating the change. PYMNTS data shows more than a third of Gen Z consumers now begin their personal tasks directly with AI. While healthcare represents only one category within that broader shift, it reflects a growing ease in using AI for sensitive, high-stakes subjects traditionally handled by professionals.
The OpenAI report reinforces this behavioral change. Among American respondents, 55% said they used ChatGPT to understand symptoms, 52% to get answers at any time of day, 48% to decode medical terminology and 44% to learn about treatment options. These are fundamental steps in the healthcare journey, determining how patients prepare for appointments and decide when to see a professional.
Benefits evolve faster than guardrails
The rapid expansion of AI as an entry point into healthcare creates clear benefits as well as unresolved risks. On the benefit side, AI absorbs demand that health systems struggle to manage effectively. By answering basic questions, clarifying medical language, and helping users navigate insurance and administrative complexity, AI reduces friction for patients and providers.
At the same time, scale increases risk. Generative AI can produce answers that appear authoritative but are incomplete or incorrect, and errors in healthcare carry higher stakes than in most consumer applications. Researchers and clinicians have warned that AI can generate dangerous advice when users lack context or ask ambiguous questions.
Confidentiality and liability remain open questions. As consumers share sensitive health information with AI tools, concerns persist about data protection and regulatory oversight. Responsibility also remains unclear when AI-generated advice influences patient outcomes, raising questions for developers, providers, and policymakers.