At 8:05 a.m., a resident pastes a patient summary into a “free” chatbot to polish the discharge note. At 8:17 a.m., someone at the front desk asks another AI to draft a billing email using the policyholder’s details. No one meant to “break” anything… but patient data just left the hospital for services with no confidentiality agreement and no controls.
That, not malware, is today’s most common backdoor in hospitals: Shadow AI.
While LATAM faces a spike in cyberattacks (organizations in the region suffered 39% more weekly incidents than the global average in H1 2025, and victims named on data-leak sites rose versus 2023), the silent vector is the everyday use of unapproved AI tools. (Check Point Blog)
Healthcare’s red flag is clear: an October 2025 report found 95% of organizations say staff are already using AI in email—often without formal approval or clear policies. Many employees assume any AI is “HIPAA-compliant,” which it isn’t by default. (Business Wire)
What is “Shadow AI” and why does it matter?
It’s using AI without approval or outside corporate channels: personal ChatGPT/Gemini/Copilot accounts, browser extensions, or web apps where people paste clinical text, prior-auth letters, policy numbers, or patient CSVs. The scale is large: over 80% of workers, including security staff, admit using unapproved AI; in the UK, 71% of employees have done so, with 51% doing it weekly. (cybersecuritydive.com)
Concrete risks for hospitals:
- PHI disclosure to services without BAAs, audit trails, or deletion guarantees.
- Loss of control over data residency, retention, and access logging.
- Compliance exposure (reportable breaches, sanctions).
- “Boomerang” social engineering: leaked snippets come back as highly credible spear-phishing—an active health-sector threat. (health-isac.org)
The “two-front” problem in LATAM: ransomware outside, Shadow AI inside
- Regional exposure: LATAM’s weekly attack volume sits well above the global mean; victims listed on extortion sites increased ~15% from 2023 to 2024. (Check Point Blog)
- Informal AI use: 2025 studies show a strong majority of staff already use unapproved AI, often sharing sensitive info from daily workflows (email, docs, finance). (IT Pro)
Translation for day-to-day ops: you can harden servers against ransomware and still leak data through an innocent copy-paste into a public AI.
A hospital-ready anti–Shadow AI blueprint (no jargon)
1. One-page AI policy (usable today).
List approved tools, define which data must never go into AI (PHI, finance, legal), and explain how to request new use cases. Anchor it to NIST AI RMF (risk governance) and ISO/IEC 42001 (AI management system). (NIST)
2. “Allow-list” + SSO.
Enable only enterprise AI with institutional login and audit logs. Block the freebies on clinical networks/endpoints via proxy/DNS/URL filtering (a minimal allow-list sketch follows this list).
3. Email & web DLP.
Rules that detect PHI, policy numbers, and MRNs, and stop posts to non-approved domains. (Email remains a critical vector in health; a pattern-matching sketch follows this list.) (health-isac.org)
4. BAAs / data residency.
Require data-processing agreements (a BAA or equivalent), and define where data live, how long they are retained, and how they are encrypted.
5. Pocket training (15 min/role).
Micro-modules for clinicians, admissions, finance, and IT: what’s OK, what’s not, real leak examples, and quarterly tabletop drills.
6. Living inventory of AI use cases.
A simple register (sheet or dashboard) of who uses what, for what purpose, with which data sources, and at what risk. Update it monthly (a minimal register sketch follows this list).
7. Monitor and measure.
- % of activity in approved vs unapproved tools
- DLP-prevented incidents
- Time-to-approve new AI use cases
(A toy calculation of these metrics follows this list.)
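To make item 2 concrete, here is a minimal Python sketch of the default-deny allow-list logic a proxy or DNS/URL filter would apply. The domain names are placeholders for illustration, not real endpoints or recommendations.

```python
# Minimal sketch of item 2's allow-list check, as a proxy or DNS/URL
# filter might apply it. All domains below are illustrative placeholders.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {            # enterprise AI behind SSO (examples)
    "copilot.yourhospital.example",
    "ai-gateway.yourhospital.example",
}
BLOCKED_AI_DOMAINS = {             # public "free" AI endpoints (examples)
    "chat.example-ai.com",
    "free-llm.example.net",
}

def ai_request_allowed(url: str) -> bool:
    """Allow AI traffic only to domains on the enterprise allow-list."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_DOMAINS:
        return True
    # Default-deny: unknown AI endpoints are treated like blocked ones
    # on clinical networks.
    return False

print(ai_request_allowed("https://copilot.yourhospital.example/chat"))  # True
print(ai_request_allowed("https://chat.example-ai.com/"))               # False
```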
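For item 3, a rough sketch of the DLP idea: scan outbound text for PHI-like patterns before it leaves for a non-approved domain. The regexes are illustrative assumptions; a real deployment tunes them to the hospital’s own MRN and policy-number formats and runs inside a dedicated DLP product.

```python
# Rough sketch of item 3's DLP rule: flag PHI-like patterns in outbound
# text. The patterns are assumed formats for illustration only.
import re

PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[-:\s]*\d{6,10}\b", re.IGNORECASE),
    "policy_number": re.compile(r"\bPOL[-:\s]*\d{8,12}\b", re.IGNORECASE),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_phi(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in the text."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

outbound = "Patient MRN: 00482913, policy POL-203398811, discharge tomorrow."
hits = find_phi(outbound)
if hits:
    # In a real pipeline this would block the post and alert security.
    print(f"Blocked: possible PHI detected ({', '.join(hits)})")
```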
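For item 6, one way to seed the register before a dashboard exists. The field names and risk levels are assumptions to adapt to your own governance process; a shared sheet works just as well.

```python
# Minimal sketch of item 6's AI use-case register; fields are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    owner: str                  # who uses the tool
    tool: str                   # which tool (approved or pending review)
    purpose: str                # what it is used for
    data_sources: list = field(default_factory=list)
    risk: str = "unreviewed"    # e.g. low / medium / high / unreviewed
    last_reviewed: date = date.today()  # snapshot at import; fine for a sketch

register = [
    AIUseCase("admissions", "CLARA", "discharge summaries", ["EHR"], "low"),
    AIUseCase("finance", "personal chatbot", "billing emails",
              ["policy numbers"], "high"),
]

# Monthly review: surface anything high-risk or never reviewed.
for uc in register:
    if uc.risk in ("high", "unreviewed"):
        print(f"Review needed: {uc.owner} using {uc.tool} ({uc.purpose})")
```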
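And for item 7, a toy calculation of the three metrics, assuming you can already export AI-traffic counts from the proxy and approval dates from the use-case queue; every number below is made up.

```python
# Toy calculation of item 7's metrics; all figures are invented examples.
approved_requests, unapproved_requests = 1840, 260
dlp_blocked = 14
approval_days = [3, 7, 2, 10, 5]  # days each new AI use case took to approve

pct_approved = 100 * approved_requests / (approved_requests + unapproved_requests)
avg_time_to_approve = sum(approval_days) / len(approval_days)

print(f"Approved-tool share of AI activity: {pct_approved:.1f}%")
print(f"DLP-prevented incidents this month: {dlp_blocked}")
print(f"Average time-to-approve a new AI use case: {avg_time_to_approve:.1f} days")
```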
“Block everything” vs “channel the value”
Banning AI doesn’t work; people need to speed up their work. The winning play is to channel usage: provide corporate tools with controls and equal-or-better UX than the “free” options. Recent research warns that without governance, Shadow AI is a growing driver of security incidents; leaders must combine policy, education, and monitoring. (IT Pro)
Where HarmoniMD + CLARA fit
- HarmoniMD (cloud HIS/EHR): role-based access, clinical audit trails, and HL7 connectors so data move without copy/paste into external services.
- CLARA (HarmoniMD’s AI medical assistant): approved, governed AI inside the EHR flow, with verifiable summaries, documentation support, and in-flow queries that never export PHI to “shadow” tools, plus admin usage panels and governance aligned to AI-risk frameworks.
Conclusion
Hospitals in 2025 don’t just defend against ransomware; they must also close the quiet leak: Shadow AI. The good news: with a simple policy, approved tools, DLP, and a use-case register, you can lower risk without throttling clinical productivity. AI isn’t the enemy; using it without rules is.
Want to see this in your operation?
Book a HarmoniMD + CLARA demo, or let’s co-design a secure AI adoption plan with clear risk and productivity metrics.