OsteopathicAI: How a professional standard for AI can strengthen our commitment to whole-person care

AOiA seeks feedback from the community on a draft AI standard for the osteopathic medical profession.

Artificial intelligence is already embedded in the daily reality of osteopathic medicine. Students use it to study. Residents use it to clarify clinical questions. Faculty use it to organize teaching. Health systems are testing AI for documentation, patient education, operations and clinical decision support. Whether we invited it or not, AI is in the room.

Here is the blunt truth: If we do not define an osteopathic standard for AI, someone else will. And if the profession is not part of the AI-driven learning ecosystem in healthcare, we risk becoming less relevant in a world increasingly shaped by data, models and automation.

That is why the American Osteopathic Information Association (AOiA) is drafting OsteopathicAI as a profession-level ethical and practice standard for the use of human-centered AI in osteopathic medicine. This standard reflects not just how our institutions operate, but also how the broader osteopathic community practices, collaborates and leads. The framework is designed with a pathway toward AOA adoption, ensuring that as AI becomes embedded in professional practice, osteopathic medicine speaks with a unified, values-driven voice.

We want to hear from you. We are currently seeking feedback on a draft version of OsteopathicAI and plan to refine the standard based on input from the osteopathic medical profession. Please read the standard and share your comments on the AOiA website. We are accepting feedback through early April 2026.

Why an official position is urgent

Across our profession, AI use is uneven, informal and often unspoken. That creates predictable outcomes, including, but not limited to:

  • Fragmented policies and inconsistent safeguards
  • Variable patient transparency and trust
  • Increased risk of error, bias and privacy harm
  • Dilution of osteopathic distinctiveness through generic assumptions
  • Missed opportunities to strengthen education, research and whole-person outcomes through evidence and learning

An adopted professional standard creates the opposite:

  • A shared baseline for safety and trust
  • Clear expectations for responsible use
  • A framework that strengthens osteopathic identity while embracing modern tools
  • A platform for collective learning, including lawful de-identified data sharing that helps the profession contribute to and benefit from the broader healthcare data ecosystem

What OsteopathicAI is

OsteopathicAI is a profession-level ethical and practice standard describing how AI should be designed, selected, deployed and used so that AI augments, not replaces, osteopathic professional judgment and whole-person care.

It is not a product, not a vendor and not a single tool. It is technology-neutral and procurement-neutral. It imposes no requirements for a centralized data commons, a unified architecture or a specific registry or reporting mechanism.

The goals are straightforward: to protect patients, protect trust, protect osteopathic identity and position the profession to be a leader in AI-integrated healthcare.

The minimum expectations

An AI-enabled system, dataset, agent, model or workflow should be described as OsteopathicAI-aligned only when it meets minimum expectations the profession can stand behind:

  • Human accountability: A clearly identified, appropriately credentialed human owner remains responsible for outcomes.
  • Whole-person intent: Use protects human dignity, autonomy and the therapeutic alliance, and respects the context that shapes real patients.
  • Safety and oversight for consequential output: When AI could materially influence diagnosis, triage, treatment, medication decisions, high-stakes education outcomes or professional standing, it requires human review with the ability to override, correct and halt use.
  • Truthfulness and transparency: Outputs include context about limits, uncertainty and what the tool can and cannot reliably do.
  • Privacy and security: Protected data are handled lawfully and securely, with safeguards against misuse.
  • Fairness and harm reduction: Reasonable steps are taken to identify and reduce bias and disparate harm.
  • Evidence proportional to risk: Higher-risk use requires stronger validation and ongoing monitoring.
  • Osteopathic distinctiveness, including osteopathic manipulative medicine (OMM): AI must strengthen, not dilute, osteopathic reasoning and osteopathic practice.
  • Implementation posture: Start in safer environments and lower-risk use cases, then scale with safeguards as evidence and oversight mature.

AI literacy is now a professional competency

AI is not simply a new tool. It is a new category of influence on how we learn, document, communicate and decide. That means AI literacy is now a professional competency. The profession needs to understand where learners and clinicians place themselves on an AI literacy spectrum, because literacy level shapes what guardrails are needed, what risks are tolerated and what training is required for safe and secure use.

Trust, patient transparency and appropriate AI use

Whole-person care requires trust. In practice, this means disclosure for patient-facing AI interactions, with informed consent when AI provides individualized guidance or recommendations, consistent with applicable law and institutional policy. Regardless of the tools used, clinicians remain accountable for what enters the record, what is communicated and what decisions are made.

Why OMM is a flagship priority

OMM is a high priority within the OsteopathicAI framework because it sits at the intersection of osteopathic distinctiveness, education and research value. Current general-purpose AI often struggles with the spatial reasoning, nuanced skill formation and osteopathic framing required to support this domain responsibly. That gap is not a reason to retreat from AI. It is a reason to lead.

OsteopathicAI should enable and protect efforts to build tools that strengthen osteopathic training and practice, including educational supports (simulation and feedback pathways), documentation support, patient communication support and research pipelines capable of demonstrating value through evidence gathered in osteopathic settings.

Data sharing is part of our future

Healthcare AI grows on data. Not marketing. Data.

Data sharing has become a growing theme across the healthcare AI landscape. If the osteopathic medical profession does not participate in responsible learning at scale, we risk being shaped by external datasets, priorities and definitions of quality. The standard does not, however, permit the use of identifiable or de-identified data to train open or public models.

OsteopathicAI supports lawful de-identified data sharing among two or more entities for education, quality improvement, research and human betterment, with practical guardrails such as governance and accountability, de-identification standards, security controls and contractual limits on misuse.

Your voice shapes the standard before adoption

AOiA has published the OsteopathicAI definition for community review because our profession’s input is what shapes how OsteopathicAI is finalized and carried forward for broader adoption—ensuring AI integration in osteopathic medicine is done with us and for us, not to us. Visit the AOiA website to read the draft definition and share your feedback. Please also spread the word to others in the community—the feedback period will remain open until early April 2026.

One of the core goals of this feedback survey is to understand how our members rate their own AI literacy, because that context shapes everything from how AOiA tailors education and resources to how we track our community’s growing confidence in navigating AI in practice. OsteopathicAI is a living document and will be reviewed annually, or more frequently as the needs of our profession evolve.

Editor’s note: The views expressed in this article are the authors’ own and do not necessarily represent the views of The DO or the AOA. This article was partially prepared by generative artificial intelligence. Generated text was reviewed, edited and finalized by human authors.
