AI and Misinformation: How to Combat False Content in 2025

Professor Gita Johar explores how publishers, platforms, and people should address AI and misinformation online.

Published: December 10, 2024
Publication: Magazine
Focus On: Artificial Intelligence (AI), Strategy
Author: Jonathan Sperling
Category: Thought Leadership
Topics: Artificial Intelligence, Business and Society, Leadership, Strategy

About the Researcher(s)

Gita Johar
Meyer Feldberg Professor of Business, Marketing Division

Misinformation is nothing new—people and organizations have been publishing claims that contradict and distort well-verified facts for centuries. Long before the US political climate of the 2010s gave rise to terms like "fake news" and "alternative facts," misinformation and disinformation were used by everyone from rulers in ancient Rome to 20th-century satirists.

The misinformation ecosystem of the past decade, however, is new, thanks in large part to the rise of social media and, more recently, artificial intelligence. The ease with which content can now be created and shared, as well as the use of algorithms that are optimized for engagement, means misinformation can spread quickly, especially within an environment that often doesn’t cue people to fact check. In the past, misinformation was spread by a select few who wielded influence, but new platforms coupled with AI tools have democratized the practice.

It’s a tangled web that Gita Johar, the Meyer Feldberg Professor of Business at Columbia Business School, captures in a framework she calls The Three Ps: publishers, people, and platforms. Publishers, intentionally or not, may create false and sensationalized content—misinformation about climate change, for example. People consume and share this content, often through social media platforms, giving rise to often-problematic behavior.

Like a three-legged stool, Johar says, no one leg can stand without the other two. Understanding this interdependence is key to preventing the spread of misinformation, an increasingly potent hazard that affects not only people but also private businesses, which stand to lose reputations, partnerships, and ultimately revenue.

In a conversation with Columbia Business, Johar shared more about the rise of AI-fueled misinformation, how it can be prevented, and what exactly is at stake for businesses caught in the mix.


“People have started realizing that AI is behind a lot of this misinformation. Over time, they’re not going to know what to trust anymore, plus there’s a deficit of trust in society as it is.”

Gita Johar, Meyer Feldberg Professor of Business

CBS: Why is fighting AI-based misinformation such an important battle for businesses?

Gita Johar: When businesses pay for ad placement on social platforms, it’s often done through very opaque auction systems. So they never know where it’s going to show up. Their ads can show up on misinformation sites, without the knowledge of the brand, because social media companies are not telling the brand every single website that ads were placed on. They’re simply providing metrics like the number of people reached, the impressions generated, and similar data points.

Now that AI has entered the picture, the amount of misinformation created on these sites is multiplying, so these ads will have an even greater probability of showing up on websites that can negatively affect their reputation. AI can also be used by competitors and disgruntled consumers to very quickly create fake news about your brand, making consumers see a brand less favorably.

CBS: How else is AI compounding the risks of misinformation?

Johar: People have started realizing that AI is behind a lot of this misinformation. Over time, they’re not going to know what to trust anymore, plus there’s a deficit of trust in society as it is. As AI does more and more, even if you have disclaimers saying such and such was produced by AI, what you’re going to see is consumers becoming more skeptical of information.

This is where you need to have those trusted sources of information. Groups like Media Bias/FactCheck rate information on its legitimacy and rate publishers as fake news publishers. The problem is, given polarization in society, people don’t believe those labels either. When people begin to mistrust information, there should be a reliable source or corner they can turn to, where they know the information is trustworthy. If you don’t know what to trust, I think that leads to very bad outcomes.

CBS: Can we trust AI to help us fact check or at least identify publishers of misinformation?

Johar: We know that AI hallucinates and cannot be trusted fully yet. That’s the issue with using AI to fact check. You need—and I am working on—a machine learning-based fact check but with clear parameters on how much to trust it, such as what the uncertainty is around whether these estimates are true or false. We also must involve people in fact checking so that in the end, everyone feels involved in this ecosystem, and then they begin to trust it more. Long term, it would be like a Wikipedia model for fact checking.
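The hybrid pipeline Johar describes pairs an automated fact-checker with explicit uncertainty estimates and human review. A minimal sketch of that routing logic is below; the function name, probabilities, and thresholds are illustrative assumptions, not details from her research:

```python
from dataclasses import dataclass

@dataclass
class FactCheckResult:
    claim: str
    verdict: str        # "likely-true", "likely-false", or "needs-human-review"
    p_true: float       # model's estimated probability that the claim is true
    uncertainty: float  # width of the model's confidence band around p_true

def route_claim(claim: str, p_true: float, uncertainty: float,
                decision_margin: float = 0.2,
                max_uncertainty: float = 0.15) -> FactCheckResult:
    """Trust the automated verdict only when the model is both decisive
    (p_true far from 0.5) and confident (narrow uncertainty band);
    otherwise escalate to human fact-checkers, Wikipedia-style."""
    decisive = abs(p_true - 0.5) >= decision_margin
    confident = uncertainty <= max_uncertainty
    if decisive and confident:
        verdict = "likely-true" if p_true > 0.5 else "likely-false"
    else:
        verdict = "needs-human-review"
    return FactCheckResult(claim, verdict, p_true, uncertainty)

# A decisive, low-uncertainty score is auto-labeled; a borderline or
# high-uncertainty score is routed to people.
print(route_claim("Brand X recalled its product", 0.92, 0.05).verdict)  # likely-true
print(route_claim("Brand Y is closing stores", 0.55, 0.30).verdict)     # needs-human-review
```

The key design point, matching Johar's caution about hallucination, is that the system never hides its uncertainty: every verdict carries the score and confidence band that produced it, so downstream consumers can decide how much to trust the machine.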

CBS: What role should regulators be playing here, if any?

Johar: This is exactly a problem where you need regulation and government intervention. The EU is trying this with the Digital Services Act, which holds platforms responsible for fighting misinformation. In the United States, current law is based on the Telecommunications Act of 1996: platforms are treated like internet service providers and bear no responsibility for the content shared on them, so in effect the United States doesn't regulate them at all.

The minute you start regulating, there are lots of avenues, like the EU leading the way with the Digital Services Act. It says that platforms can be fined up to 6 percent of their global revenue if they're found to propagate misinformation. That's a heavy stick, but enforcement and implementation are an issue.

Now if you start regulating in the United States, then you run into the First Amendment. I think that is the problem with regulation. You really need a policy here that makes sense and prevents the wide and fast spread of misinformation, but at the same time without running into these First Amendment issues. Of course, such regulation relies on the fact that the definition of misinformation is very clear and widely accepted.

CBS: Government regulation aside, do advertisers and consumers have the ability to effect change?

Johar: I think advertisers can provide a big solution here. They can start to form a kind of trade association, like the National Advertising Review Board of the Better Business Bureau. If AI is going to start creating all kinds of false information about your brand, you have to be very careful to protect your brand. So it’s in the interest of all advertisers to form some sort of trade association and basically withhold advertising dollars from any platforms that are not seriously monitoring misinformation. I think a lot of work is needed, but businesses can lead the way here. They have all the power, but they haven’t used it because they are individual advertisers.

So advertisers need to work together to make this happen. It’s good for them, and it’s good for society. It’s really a win-win. Then they can actually force platforms to abide by rules and procedures and make sure that they’re actually monitoring and trying to prevent the spread of misinformation.

Key Takeaways for Business Leaders

  1. Actively monitor ad placements and prepare for potential misuse of AI by competitors or disgruntled customers to spread false information.
  2. By forming trade associations, advertisers can collectively pressure platforms to adopt stricter monitoring and control of misinformation.
  3. Businesses should establish themselves as reliable sources of truth, build transparency into their operations, and advocate for balanced regulations that mitigate misinformation risks.


You Might Like

  • How AI in the Workplace is Transforming Business School Education (December 10, 2024): AI is here to stay, and it's essential to keep experimenting and sharing insights to shape the future of business school education, says Professor Shivaram Rajgopal.
  • Weaving AI into the Classroom (December 10, 2024): AI-based teaching strategies are giving Columbia Business School students an edge.
  • Building Trust in AI (December 11, 2024): A case study from Morgan Stanley shows the keys to building trustworthy AI implementation.
  • Creating an AI-Ready Workforce (December 10, 2024): Professors Stephan Meier and Todd Jick reveal how managers can set up employees for success.