The Intelligence Layer: Transforming Healthcare with Embedded AI
About Haris Shuaib
Haris Shuaib is the Founder & CEO of Newton's Tree and a leading voice in AI for healthcare. Formerly a medical physicist at Guy's and St Thomas' NHS Foundation Trust, he founded the NHS's first hospital AI team, led national fellowships in Clinical AI, and advises on safety standards for AI deployment. He is committed to building infrastructure that turns AI from promise into practice.
Executive Summary
Healthcare is approaching a pivotal inflection point. After years of experimentation and proof-of-concept pilots, artificial intelligence (AI) is beginning to impact real-world care delivery. However, without rethinking the systems into which these technologies are deployed, AI will fall short of its transformative potential.
We believe AI must be reimagined not as a tool to be adopted, but as a layer of infrastructure. This "intelligence layer" integrates seamlessly into the clinical ecosystem, monitors its own impact, and elevates the performance of both humans and machines. Done right, this layer will make AI boring—invisible, trusted, and indispensable.
1. The Evolution of Medical AI
Medical AI has progressed through three major phases:
2012–2020: The Radiology Phase
Characterized by image-based diagnostic algorithms, this era focused on narrow tasks like fracture detection or nodule segmentation. Most tools were designed by academic researchers, often without regard for clinical workflows. The result was limited adoption and skepticism from frontline providers.
2020–2023: Operational AI and Clinical Co-Design
COVID-19 catalyzed a shift toward AI that helps with real-world tasks—note-taking, triage, resource allocation. Clinicians became more involved in development, and the products became more workflow-native.
2024 onward: Foundation Models and Intelligent Agents
Today, we are entering the era of large language models (LLMs) and general-purpose agents. These systems promise unprecedented flexibility but also introduce new challenges: hallucination, data leakage, bias, and performance drift.
2. The Hidden Risks of Scaling AI
Many discussions around AI focus on technical capability. In practice, however, the real risks lie at the intersection of human and machine behavior.
Automation Bias
When AI takes over "easy" cases, clinicians are left with edge cases requiring high cognitive load. Over time, trust in the AI grows, and critical thinking may diminish. Our monitoring systems have observed a 15% increase in clinician-AI agreement over three months—without any model updates.
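This feedback loop can be instrumented directly. The sketch below is illustrative, not Newton's Tree's production code, and the field names are hypothetical; it computes monthly clinician-AI agreement from case logs and flags a sustained rise against a fixed baseline:

```python
# Sketch of an automation-bias monitor: track the rate at which clinicians'
# signed-off conclusions match the AI, and flag sustained upward drift even
# when the model itself has not changed.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class Case:
    day: date
    ai_finding: str     # e.g. "fracture" / "no fracture"
    final_report: str   # clinician's signed-off conclusion

def monthly_agreement(cases: list[Case]) -> dict[str, float]:
    """Fraction of cases per month where the final report matched the AI."""
    totals: dict[str, int] = defaultdict(int)
    agreed: dict[str, int] = defaultdict(int)
    for c in cases:
        month = c.day.strftime("%Y-%m")
        totals[month] += 1
        agreed[month] += int(c.ai_finding == c.final_report)
    return {m: agreed[m] / totals[m] for m in sorted(totals)}

def agreement_drift(rates: dict[str, float], threshold: float = 0.10) -> bool:
    """True if agreement has risen past `threshold` since the first month."""
    series = list(rates.values())
    return len(series) >= 2 and (series[-1] - series[0]) > threshold
```

The key design point is that the alert fires on human behavior, not model behavior: the model version is constant, so any sustained rise in agreement is a signal about the people in the loop.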
Invisible Data Drift
System upgrades, protocol tweaks, or subtle hardware changes can alter how data appears to an algorithm, without being perceptible to a human. These hidden shifts can break AI performance without warning.
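Drift of this kind is detectable with standard statistics long before a human would notice it. A minimal check, assuming we log a simple per-scan input statistic such as mean pixel intensity, is a two-sample Kolmogorov-Smirnov test between a frozen reference window and a recent window:

```python
# Minimal drift check: compare a recent window of per-scan statistics (here,
# mean pixel intensity) against a frozen reference window with a two-sample
# Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """True if the recent distribution differs significantly from reference."""
    _stat, p_value = ks_2samp(reference, recent)
    return p_value < alpha

# A scanner software upgrade subtly shifts intensities: hard for a human
# reader to perceive, but statistically unambiguous.
rng = np.random.default_rng(0)
reference = rng.normal(loc=100.0, scale=15.0, size=5_000)  # historical scans
recent = rng.normal(loc=105.0, scale=15.0, size=500)       # post-upgrade scans
print(drifted(reference, recent))  # expected: True
```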
Amplified Human Bias
AI inherits and amplifies bias in both data and documentation. For instance, the choice of language (“back pain” vs. “lumbar strain”) can flip a model's output. This embeds physician bias into systemic decision-making.
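One way to surface this sensitivity is a paraphrase probe: feed the deployed model clinically equivalent phrasings and count how often the output flips. In the sketch below, `triage_model` is a deliberately brittle stand-in of our own invention; in practice the function would wrap the real inference endpoint:

```python
# Paraphrase probe: clinically equivalent phrasings should yield the same
# output. A nonzero flip rate quantifies the model's phrasing sensitivity.
PARAPHRASE_PAIRS = [
    ("patient reports back pain", "patient reports lumbar strain"),
    ("denies chest pain", "no chest pain reported"),
]

def triage_model(note: str) -> str:
    # Stand-in for the deployed model: a brittle keyword rule that treats
    # "strain" as an already-characterized finding and deprioritizes it.
    return "routine" if "strain" in note else "urgent-review"

def phrasing_sensitivity(pairs: list[tuple[str, str]]) -> float:
    """Fraction of equivalent phrasings that yield different outputs."""
    flips = sum(triage_model(a) != triage_model(b) for a, b in pairs)
    return flips / len(pairs)

print(phrasing_sensitivity(PARAPHRASE_PAIRS))  # 0.5: one pair flips the output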
3. The Intelligence Layer: A New Vision for Healthcare
We propose the creation of an "intelligence layer" that sits above existing health IT systems and below the clinical interface.
Just as PACS (the picture archiving and communication system) revolutionized imaging by standardizing how images are stored and shared, the Intelligence Layer will standardize how AI is deployed, monitored, and governed.
4. Building the Layer: Newton’s Tree’s Architecture
Our company is focused on infrastructure, not interfaces. We prioritize embedding AI where clinicians already work, without disrupting their habits.
Embedded Monitoring
Through FAMOUS, our Federated AI Monitoring System, we track three categories of signal (sketched in code after this list):
Data quality and integrity
Changes in AI output patterns
Shifts in human-AI interaction over time
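The record below is an illustrative sketch of how those three signal categories might land in a single monitoring event. It is not FAMOUS's actual schema or API; every field name is a placeholder:

```python
# Illustrative monitoring record covering the three signal categories above.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MonitoringEvent:
    model_id: str
    timestamp: datetime
    # 1. Data quality and integrity
    input_checks: dict[str, bool]        # e.g. {"dicom_tags_complete": True}
    # 2. Changes in AI output patterns
    output_stats: dict[str, float]       # e.g. {"positive_rate_7d": 0.12}
    # 3. Shifts in human-AI interaction
    interaction_stats: dict[str, float]  # e.g. {"override_rate_7d": 0.04}

event = MonitoringEvent(
    model_id="cxr-nodule-v3",
    timestamp=datetime.now(timezone.utc),
    input_checks={"dicom_tags_complete": True, "pixel_spacing_in_range": True},
    output_stats={"positive_rate_7d": 0.12},
    interaction_stats={"override_rate_7d": 0.04},
)
```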
No New Interfaces
Clinicians shouldn't need to open new dashboards. AI insights should appear where decisions are made—in the chart, on the screen, at the bedside.
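One common integration pattern, shown here as a sketch rather than a description of any particular vendor's API, is to write the AI result back into the record as a FHIR R4 Observation so it surfaces in the chart the clinician already has open. The server URL, patient reference, and codes below are placeholders:

```python
# Write an AI finding back to the EHR as a FHIR R4 Observation so it appears
# in the existing chart, not in a separate dashboard.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint

observation = {
    "resourceType": "Observation",
    "status": "preliminary",  # AI output pending human review
    "code": {"text": "AI chest X-ray nodule finding"},
    "subject": {"reference": "Patient/123"},
    "valueString": "Suspicious nodule, right upper lobe (model cxr-nodule-v3)",
    "device": {"display": "cxr-nodule-v3"},
}

resp = requests.post(
    f"{FHIR_BASE}/Observation",
    json=observation,
    headers={"Content-Type": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
```

Marking the result "preliminary" keeps the human in the loop: the finding is visible at the point of decision but is not authoritative until signed off.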
Standards-Driven Design
We've partnered with NVIDIA and others to co-found MONAI Deploy, an open-source framework and set of standards for packaging, deploying, and governing AI in medical environments.
5. Ambient Scribes & CLIO: Next-Generation AI Oversight
In partnership with the UK’s AI Safety Institute, Newton’s Tree is leading a national project, CLIO (Clinical LLM Intelligent Oversight), to assess the safety of large language models used as ambient scribes.
Why This Matters:
Scribes aren't just note-takers—they influence clinical coding, follow-up tasks, and diagnostic impressions.
They rely on multiple models working together, each with its own risk profile.
We must understand how these systems behave in practice, not just in the lab.
CLIO will monitor multiple vendors across NHS sites, identifying risk patterns and ensuring safe deployment.
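In miniature, such oversight can look like a groundedness check: anything in the generated note that cannot be traced to the conversation gets flagged. The example below is a deliberately naive illustration of the principle, not a description of CLIO's actual methods, and the medication list is a placeholder:

```python
# Naive groundedness check: every medication named in the generated note
# should be traceable to the conversation transcript.
MEDICATIONS = {"amoxicillin", "ramipril", "metformin", "atorvastatin"}

def ungrounded_medications(transcript: str, note: str) -> set[str]:
    """Medications in the note that never appear in the transcript."""
    heard = {m for m in MEDICATIONS if m in transcript.lower()}
    written = {m for m in MEDICATIONS if m in note.lower()}
    return written - heard

transcript = "We'll continue the ramipril and review bloods in six weeks."
note = "Plan: continue ramipril; start metformin 500 mg twice daily."
print(ungrounded_medications(transcript, note))  # {'metformin'}: flag for review
```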
6. Global Perspectives: Scaling Responsibly
The UK leads in AI governance and training. NICE, NHSX, and national fellowships have created a strong ethical and regulatory foundation. But commercial adoption lags behind health systems such as the U.S., where fee-for-service payment models incentivize speculative screening.
The UK Advantage:
Evidence-led evaluation
Ethical oversight
Workforce training
The U.S. Advantage:
Speed of deployment
Commercial incentives
High-volume experimentation
The challenge is to combine both: safe innovation at scale.
7. The Future Doctor
The role of the physician is evolving. Today’s clinicians must become literate in AI, data interpretation, and system-level thinking.
Understand standard deviation, not just stethoscopes
Translate wearable data into healthspan decisions (a worked example follows this list)
Interpret algorithmic outputs with nuance
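A small worked example of that statistical literacy, with illustrative numbers: flag when a patient's resting heart rate drifts well outside their own wearable-derived baseline.

```python
# "Standard deviation, not just stethoscopes": compare this week's resting
# heart rate against the patient's own baseline in units of baseline SD.
import statistics

baseline = [62, 64, 61, 63, 65, 62, 64, 63, 61, 62]  # weekly resting HR (bpm)
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

this_week = 71.0
z = (this_week - mean) / sd
print(f"z = {z:.1f}")  # about 6 SDs above baseline
if abs(z) > 2:
    print("Outside the patient's normal range: worth a conversation.")
```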
Medicine is becoming an information science. Tomorrow's doctor is not disappearing, but the role is changing.
Conclusion: Make AI Boring
The ultimate goal is not AI that dazzles, but AI that disappears. Reliable. Trusted. Embedded.
Just as PACS became an unglamorous but transformative force in medicine, the Intelligence Layer will become the backbone of 21st-century care.
Let’s make AI boring.