
Why AI Is a Different Kind of Shock to Health

Columbia Business School professor Dante Donati explains the guardrails needed to ensure AI improves outcomes rather than deepening inequality.

Published
January 28, 2026
Publication
Digital Future
Focus On
Artificial Intelligence (AI), Healthcare
Article Author(s)
Jonathan Sperling
Writer/Editor, Marketing and Communications
Category
Thought Leadership
Topic(s)
Artificial Intelligence, AI and Transformative Tech


AI is often described as a tool that makes systems more efficient. In health, that promise is especially appealing: better information, better targeting, better outcomes. But research on digital platforms and public health suggests a more complicated reality. Technologies designed to optimize reach and engagement do not always improve health outcomes—and in some cases, they systematically miss the people who need help most.

That tension sits at the center of Columbia Business School professor Dante Donati’s research, which examines how digital platforms, targeting systems, and emerging AI tools shape health through care delivery, prevention, information exposure, and everyday decision-making.

As AI becomes more powerful and autonomous, Donati argues, its effects on health will be broader, faster, and harder to reverse than those of earlier technologies.

AI’s Health Impact Starts Outside the Clinic

AI’s impact on health begins long before patients enter hospitals or clinics. It starts upstream, in the systems that determine what information people see, which messages reach them, and how those messages are prioritized. For public health, where prevention and behavior change are often more effective than treatment, these upstream mechanisms are critical.

Donati’s own research on social media and malaria prevention illustrates this dynamic. In a large field experiment conducted in India, Donati and his collaborators studied a nationwide campaign that used Facebook and Instagram ads to encourage people to sleep under mosquito nets, a simple and well-established way to reduce malaria risk. The campaign reached millions of users across multiple states and was evaluated using a randomized design that compared outcomes across districts exposed to the ads and those that were not.

Despite its scale, the campaign failed to improve outcomes where malaria risk was highest. Survey data and administrative health records showed that the ads increased bed net use and reduced self-reported malaria incidence among lower-risk households, typically those living in solid dwellings. Among higher-risk households living in poorer, non-solid housing, where exposure to malaria was greater, the campaign had no measurable effect.

In other words, advertising algorithms optimized for engagement tended to reach urban and wealthier users who were easier to engage and already healthier. Rural and poorer populations, who stood to benefit the most, were systematically underreached.

For Donati, this result highlights what makes AI and algorithmic systems a different kind of technological shock. When systems are optimized for efficiency or engagement rather than impact, they can reinforce existing health disparities. As AI-driven targeting becomes more sophisticated and more autonomous, its ability to scale these patterns increases.

When AI Improves Targeting, and When It Fails

Donati is careful to stress that these outcomes are not inevitable. The same technologies that fail under one objective can succeed under another. AI can improve health outcomes if it is designed to prioritize impact rather than attention.

Instead of asking which users are most likely to click on a message, targeting systems could be trained to identify individuals most likely to benefit from an intervention. Predictive models could focus on exposure, vulnerability, or lack of access, rather than ease of engagement. In this framework, AI becomes a tool for precision prevention, helping health campaigns reach people who are harder to reach but more likely to benefit.
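The difference between the two objectives can be made concrete. The sketch below is purely illustrative, assuming hypothetical per-user estimates of click probability and expected health benefit; the group names and numbers are invented for the example and are not from Donati's study. It shows how the same ranking machinery selects entirely different populations depending on which score it optimizes.

```python
# Illustrative sketch: the same user pool, two targeting objectives.
# Engagement-based targeting ranks users by predicted click probability;
# impact-based targeting ranks by expected health benefit instead.
# All values are hypothetical.

users = [
    # (group, p_click, expected_benefit)
    ("urban_high_income", 0.30, 0.01),  # easy to reach, already low risk
    ("urban_mid_income",  0.20, 0.05),
    ("rural_low_income",  0.05, 0.40),  # hard to reach, high malaria exposure
    ("rural_mid_income",  0.08, 0.25),
]

def top_k(pool, score, k=2):
    """Return the k groups ranked highest by the given scoring function."""
    return [name for name, *_ in sorted(pool, key=score, reverse=True)[:k]]

# Engagement objective: maximize clicks -> reaches the already-healthy.
by_engagement = top_k(users, score=lambda u: u[1])

# Impact objective: maximize expected benefit -> reaches the vulnerable.
by_impact = top_k(users, score=lambda u: u[2])

print(by_engagement)  # ['urban_high_income', 'urban_mid_income']
print(by_impact)      # ['rural_low_income', 'rural_mid_income']
```

Under the engagement objective, the campaign budget flows entirely to the urban, lower-risk groups; under the impact objective, it flows to the rural, higher-risk groups. The model of benefit is the hard part in practice, but the objective choice, not the ranking code, determines who is reached.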

The distinction matters. Systems built to maximize engagement favor convenience and scale. Systems built to improve outcomes accept higher costs or lower reach in exchange for effectiveness. For Donati, this is not a technical constraint. It is a choice shaped by incentives, governance, and design.

Whether AI improves health or deepens inequality depends on which objective institutions choose to optimize.

The Importance of Digital Guardrails

In digital environments, especially social media, platforms often create collective action problems. Even when users recognize harm, disengaging is costly because friends, information, and social life remain tied to the platform.

As a result, Donati argues that AI-related guardrails must operate at the system level. Policies focused solely on personal responsibility, such as encouraging users to spend less time online, are unlikely to succeed. More promising approaches shift responsibility toward platforms themselves.

Governments have begun experimenting with age-based restrictions, content moderation requirements, and limits on exposure to harmful material. While the evidence on what works is still emerging, Donati sees this as a move from awareness toward accountability.

“I think of 2025 as an awareness year; 2026 is the year of accountability. This year governments will start stepping in, and hopefully companies do too, because at some point companies will have to face these issues,” Donati said.

Can AI Protect Consumers?

The same AI systems that optimize attention and consumption could be repurposed to protect users instead, according to Donati.

He points to the possibility of AI agents that operate on behalf of consumers. Such systems could flag deceptive marketing practices, warn users about prolonged exposure to violent or highly arousing content, or provide real time feedback about whether time spent online is likely to harm mental health. In online marketplaces, these tools could help users identify manipulation. On social platforms, they could counterbalance exposure to harmful content.

“All these systems are enabled by AI, more recently also by GenAI and chatbots, which means that we can also use the same systems to counteract these forces,” Donati said.

AI, Donati argues, is neither inherently beneficial nor harmful to health. But because it operates upstream, at scale, and often invisibly, its effects will be profound either way. The challenge now is to ensure that as AI reshapes health systems, it improves outcomes rather than widening the gaps it was meant to close.
